Wikimedia Europe

Terrorist clicks? Drastic measures to moderate online communications under the anti-terrorist banner

What is the best way to combat terrorism? According to the European Commission, it is to cleanse the internet of terrorist content. Despite little clarity as to what terrorist content really is, the EU institutions are working towards a new regulation that would require a wide range of online services to follow ill-designed measures – measures that would also affect Wikipedia. The lack of clear definitions, combined with proposed requirements to filter or immediately remove information, threatens democratic discourse and online collaboration.

What is this terrorist content regulation about, again?

In the autumn of 2018, the European Commission (EC) published a proposal for a Regulation on preventing the dissemination of terrorist content online (TERREG) as part of the Digital Single Market framework. Its framing suggests that curbing terrorism is not the main objective of this piece of legislation. Instead, it seeks to provide internet platforms with unified rules, across the EU, on how to deal with content considered terrorist, and to outline the consequences of failing to comply.

The rules boil down to a bypass of judicial oversight in limiting freedom of expression, transferring that power instead to private actors: the platforms hosting their users’ content, and content filters. All this makes it very easy to restrict access to information about unfolding civic events, which can sometimes produce violent imagery. Meanwhile, failure to remove disturbing, violent content upon notice is already punishable under EU law, and only 6% of European internet users report coming across what they perceive to be terrorist content (according to a Flash Eurobarometer poll from 2018).

The EC wants platforms to comply within an hour with removal orders regarding a specific piece of content, issued by a “competent authority”. The authority may also decide to issue a referral, which instructs a platform to review whether a specific piece of content should stay up according to its terms of service. This would grant these private entities a great deal of power over people’s freedom of expression, especially if public authorities “ask” them to review certain information on a platform. We have seen that, to avoid liability, social media platforms tend to overblock content, from the Syrian Archive materials to teaching resources on the Holocaust. Unfortunately, the EC favours such an active approach to content removal and proposes a set of proactive measures to prevent terrorist content from appearing online. (Does this sound familiar?)

To add to this problematic intention, it is unclear from the proposed definition what counts as terrorist content. The definition specifies that it is information “advocating”, “glorifying”, “encouraging”, or “promoting” the commission of terrorist offences. So terrorist content, in the Commission’s view, is not only violent videos and incitement to violence, which are already clearly illegal and at least relatively easy to recognise. The nature of expression, especially on political and culturally sensitive issues, depends profoundly on context. Yet the definition in the proposed regulation makes no distinction as to the intent of publication, thereby jeopardising important information: news and other journalistic materials, artistic expression, and evidence of human rights abuses. The ability to document (war) crimes, injustice, and other transgressions of norms, values, and laws – and for people to have access to those documents – is essential both for a functioning democracy based on discourse and debate and for the international community’s ability to hold transgressors accountable.

The EC wants to stretch the scope of the regulation beyond platforms that provide public access to content, such as social media, to services providing private storage or non-public restricted access (e.g. file storage in a centralised cloud service) and to messaging services. A great part of our everyday communications could thus potentially be affected.

So, why are we still discussing this? 

In the course of the legislative debate, the European Parliament (EP) proposed important changes to the EC’s proposal. In its report, the EP proposed that competent authorities must be judicial authorities, deleted the referrals, and banned general monitoring obligations (upload filters). Importantly, the Members of the European Parliament (MEPs) decided to create exceptions for journalistic, informational, artistic, and educational purposes, and to limit the scope to platforms that provide public access to the content they host. The version of the Council of the EU, however, heavily resembles the EC’s proposal. The institutions are currently negotiating a near-final text in the trilogue.

Even as the trilogue enters its decisive phase, it is hard to verify where the Member States stand on the issues. There is not much debate at the national level on the proposal; meanwhile, in several European capitals, laws resembling or copying the provisions of the terrorist content regulation proposal have already been discussed or introduced, such as the NetzDG law in Germany and the Avia law in France.

I don’t write about terrorism, why should I care?

One problem with this regulation is that the broad definitions make it hard to say what does not qualify as terrorist content where political issues, civil disobedience, dissent, or criticism of public policies are concerned. The trouble is that:

“The governments of a couple of European countries are threatening the rule of law in ways that have led the European Commission to intervene. Equipping those governments with such a tool to suppress certain kinds of expression is very concerning.”

Making matters worse, even countries that seem to do better at protecting democracy have been suppressing dissent: Spain, for instance, has punished puppeteers for a satirical performance and ordered GitHub to block access to open-source code for an app used to organise small-scale civil protests in Catalonia.

Yet even with a more precise definition of terrorist content, the way information is shared or published is an important factor when evaluating its effects. Let’s consider an example: a video of armed men threatening to harm a certain group is posted by that group on social media. What if that same video is part of a news piece illustrating the issue, offering critical commentary and a comment by a police officer investigating the case? What if it is uploaded at the request of a civil rights group that collects material to document abuse and hate crime and to build a body of evidence to assist prosecutions?

If we agree that these uses of content are not equivalent, then how are automated systems and algorithms supposed to differentiate between the diverging cases in order to comply with requirements to prevent uploads? Intent and context matter; we cannot stress this enough!

In a democratic society we can agree that we may want disturbing, manifestly illegal content, such as depictions of violent acts, to be gone from the internet. The debate about TERREG is about how we get there. Do we need a regulation that envisions such a broad and blurry framework for removing content?

“Do we want to hand over the power to decide about potentially important information to privately owned internet platforms? Or should we rather investigate what part the platforms play in amplifying the reach of such content – a role often shaped by their own terms of service?”

A democratic society is in constant debate about the limits of its democratic freedoms. This debate is about the balance between freedom of expression and other rights. If we “deplatform” people who threaten others in order to achieve their political goals, we may gain some peace of mind while we browse. However, muting them online will not make these problems disappear; instead, it may well prevent us from understanding the roots of terrorism and violence.

If we eradicate a broad array of voices under the pretext of fighting terrorist messages online,

“Journalists may not be able to report on related problems accurately due to a lack of sources. Providing adequate and accurate information is also a challenge on Wikipedia if there are no sources to cite.”

Not an outlier but a trend

The measures envisioned in TERREG are not an outlier. They are part of a pattern that we have also seen in the content-filtering debate around copyright. The EU copyright reform highlighted the EU institutions’ unwavering belief in technology as a sufficient remedy for societal ills.

Unfortunately, the same approach is being applied to the COVID-19 crisis: platforms are asked to step up their content moderation game as a quick way of curbing access to disinformation about the disease and how it spreads. With human moderation teams under stress from the pandemic, companies resort to automated systems to make decisions about content. Yet, as they themselves admit, the technology is not developed enough to deal with misinformation, and overblocking of content is a certain consequence of this “soft push” by governments.

This is not to say that articles about drinking bleach to stay healthy should be more easily available on social media than those about the benefits of washing hands, but rather to point out that making private actors solely responsible often ends poorly for users and for freedom of expression. The problems of the power disparity between platforms and the people who use them, as well as social media companies’ influence over public discourse, will not be solved by allowing them to take ever more decisions about what can and cannot be said online.

Economy of clicks

EU lawmakers need to take this to heart for the upcoming EU reform of platform liability. The Digital Services Act package is one of the most important initiatives of the current legislative term in the European Union, as its objective is to update the law for the changing landscape of internet services, with special regard to platforms hosting user-generated content.

It presents a unique opportunity for a debate on the role of platforms’ business models in the dissemination of controversial or allegedly illegal content. From the perspective of commercial platforms, keeping people engaged with content matters more than what kind of messages they engage with. Discouraging such practices, not only through adequate rules on advertising, algorithmic promotion, and privacy protections but also via market regulation, could help all of us experience a better version of the internet.

For now, as the EC strives to curb terrorism by regulating the internet, we hope that the MEPs can inject some sense into the final round of trilogue negotiations. If the definitions are not narrowed and made more precise, and if platforms end up deciding who is a terrorist with the help of ill-designed automated tools, we may wake up to a reality in which we second-guess ourselves not only when we post online but also when we click.

This blog post originally appeared on the Wikimedia Foundation policy blog.