co-authored by Jan Gerlach
The European Commission’s proposal for a Regulation on preventing the dissemination of terrorist content online runs the risk of repeating many of the mistakes written into the copyright directive, envisioning technological solutions to a complex problem that could bring significant damage to user rights. The proposal includes a number of prescriptive rules that will create frameworks for censorship and potentially harm important documentation about terrorism online. It would further enshrine the rule and power of private entities over people’s right to discuss their ideas.
However, there are still ways to shape this proposal so that it furthers its objectives while promoting accountability. The report on the proposal will be up for a vote in the Committee on Civil Liberties, Justice and Home Affairs (LIBE) in the European Parliament on 8 April, and Wikimedia urges the committee to consider the following advice:
1. Stop treating the internet like one giant, private social media platform
According to the draft, any platform that hosts third-party content—from social media to Wikimedia projects like Wikipedia, and potentially even services hosting private files—would need to describe in its terms of service how it deals with content that may be related to terrorism. While people talk about terrorism in different forms and for different purposes, such as research, awareness raising, and news reporting, the regulation would force platforms to decide what is and what is not an acceptable way to have these conversations.
Yet the law should not codify websites' practice of curbing illegal content through their terms of service, because doing so would remove any incentive to do better. The proposed regulation would oblige all platforms to act in a similar manner, regardless of the content they host or their operational model. That includes Wikipedia, where a rigid set of top-down policies would interfere with its robust and effective system of transparent, community-driven dispute resolution over content.
Instead, legislators should clearly define illegal terrorist content themselves and leave hosting service providers little room for interpretation.
2. Let courts decide, not machines
Like the new copyright directive, the regulation envisions the use of automated tools to proactively detect, identify, and disable access to terrorist content. Deciding what expression condones terrorism is a complicated matter, and context is crucial in determining whether content is illegal under anti-terrorism laws. Such decisions need to be made by courts, not by algorithms, which may or may not be subject to human oversight.
Where law enforcement relies on code, the code becomes the law. That runs counter to how our free knowledge projects operate, with vibrant and open deliberation on what belongs on Wikipedia and what doesn't. Platforms' content moderation should build on a proper framework that involves well-prepared people, not only machines.
3. Do not overturn the principles of free expression
Freedom of expression is a right that can only be exercised through the practice of expressing one's thoughts, ideas, or opinions. Boundaries are applied only after that expression has occurred and been deemed unacceptable. Content filtering works on exactly the opposite premise: it prematurely stifles expression before it has a chance to be heard and assessed.
Any reference to measures that may lead to proactive content filtering should be removed from the proposal. Upload filters overturn jurisprudence and legal practice in every jurisdiction that recognizes freedom of expression as a human right. They operate in secrecy, and their decisions are shrouded in the trade secrets of the companies that run them. Relying on these technologies may stop some of the communication we don't want, but it is not worth the price of undermining the foundations of free expression.
4. Do not force websites to remove legal content
The proposal envisions that, in addition to content removal orders, the competent authority can issue a referral requesting that a company check whether content violates its terms of service. Platforms would face penalties if they do not speedily address these referrals, which creates a strong incentive to act non-transparently and remove content that may in fact be legal.
This measure should be removed from the proposal. Instead, authorities tasked with tackling terrorist content should be required to focus on cases where the terrorist context is evident and to issue an order to remove the piece of content in question. Lawmakers need to leave room for the less evident cases to be debated as legitimate exercises of freedom of expression.
The LIBE Committee will vote on a few good changes that have been proposed, most notably the removal of proactive measures and referrals. Providing an exclusion for content disseminated for educational, artistic, journalistic, or research purposes is a good idea. However, if the dissemination of terrorist content does not need to be intentional (such as calling for aiding and abetting terrorist activities) in order to be removed, a lot of important information may still get caught up in a surge of removals. We hope that the committee responsible for ensuring respect for civil liberties in EU legislation will rise to the occasion. We will continue to monitor the legislative process for this regulation and remain committed to defending and promoting free knowledge.
This blog post originally appeared on the Wikimedia Foundation policy blog.