Blind Faith in Technology Diverts EU Efforts to Fight Terrorism

This is the second in a series of posts and responses addressing the EU’s regulation on online terrorist content; the first post is HERE and the third HERE. [Ed.]

By Chloé Berthélémy and Diego Naranjo

After emptying their content moderators' offices and sending employees home in line with health and safety guidelines, Facebook and the like promised to fight the spread of disinformation about the virus with the help of their so-called artificial intelligence. It took only a few hours for glitches in the system to appear.

Their “anti-spam” system struck down quality COVID-19 content from trustworthy sources, flagging it as a violation of the platforms' community guidelines. This episode perfectly demonstrates why relying on automated processes is detrimental to freedom of expression and, more importantly in times of crisis, to the freedom to receive and impart information.

The current crisis has even led the Alan Turing Institute to suggest that content moderators should be considered “key workers” during the COVID-19 pandemic.

Using the coronavirus crisis to justify the introduction of mandatory filters in the Terrorist Content Regulation is therefore grossly inappropriate.

Along with other civil society groups, we have repeatedly exposed how filters understand neither the intention of the author nor the context of publication, and are consequently unable to distinguish illegal content from legitimate content.

For example, YouTube’s algorithms keep deleting footage of human rights violations in the Syrian conflict – interpreting it as “extremist content” – even though such footage is often the only evidence that exists. Unfortunately, the European Commission and many other stakeholders seem convinced that automated tools offer the easy fix they have been looking for to solve a very complex problem.

It is completely senseless and contemptuous of existing EU legal principles to argue that because platforms already use curation algorithms to determine what we see and read online, it is acceptable to have automated take-downs.

There are genuine problems with the content selection algorithms used by some social media services, but making such algorithms mandatory for all content on all online services would only amplify and perpetuate those problems. As for the legal principles: any limitation on the exercise of fundamental rights must be necessary, proportionate and provided for by law. Moreover, Member States must ensure that a fair balance is struck between the various fundamental rights at stake.

Neither outsourcing the decision to private companies nor coercing them to restrict content without any proper legality assessment meets those requirements. Further, pushing platforms to delete even more content than they currently do will not magically lead to a “safe and secure” internet.

Removal orders that abide by due process – issued by a judge and followed by criminal prosecution – are a better way forward to effectively address online terrorist violence and propaganda.

The issue of cross-border removal orders is crucial with regard to the rule of law. Despite the 2017 Terrorism Directive, the definitions of what constitutes terrorist propaganda remain fragmented across the EU. In recent years, we have witnessed several Member States abusing anti-terrorism laws to silence critical voices and criminalise the legitimate exercise of freedom of expression.

The Council of Europe Commissioner for Human Rights has already denounced the misuse and disproportionate nature of counter-terrorism legislation. In this context, a cooperation mechanism between the issuing Member State and the Member State where the content is hosted ensures that removal orders are proportionate and that constitutional protections, such as those for journalistic content, are respected.

The argument of mutual trust between Member States does not hold up in a scenario in which two Member States are subject to Article 7 proceedings and some political leaders have been granted sweeping powers to rule their countries by decree.

Lastly, there is little evidence to support the claim that the adoption of another censorship law is urgently needed. Counter-terrorism measures are frequently adopted with a sense of emergency, leaving no time for thorough discussion of their human rights impacts and safeguards.

The Commission’s impact assessment itself acknowledges that only 6% of respondents to its public consultation had encountered terrorist content. It also recognises that the removal of alleged terrorist content can impair an investigation, reducing the chances of disrupting criminal activity and obtaining the evidence necessary for prosecution.

This appalling lack of evidence for new legislation follows a standard pattern in the Commission’s content moderation policies: whether in the fight against child abuse material or against hate speech, the Commission has so far systematically failed to provide any statistics on how much of the content deleted as a result of its legislation is actually illegal, or on the impact of these legislative measures.

Pretending that the removal of online terrorist content should be the EU’s number one priority in the fight against terrorism also disregards the scientific literature on the factors of violent radicalisation.

The upcoming Digital Services Act is a great opportunity for the EU to address the monopolisation of our online communication space by a handful of powerful platforms that dictate what we say or see online.

Further action should be taken specifically in relation to the micro-targeting practices of the online advertising industry (Ad Tech). What we need for a healthy public debate online are not gatekeepers empowered by governments to restrict content as they wish, but diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.


This article was originally published by EURACTIV and is republished here with permission.

Chloé Berthélémy is a policy adviser at European Digital Rights (EDRi), and on Twitter @ChloBemy. Diego Naranjo is the head of policy at EDRi, and on Twitter @DNBSevilla.
