By Reem Ahmed
This article summarises a recent paper published in Studies in Conflict & Terrorism that forms part of a special issue on the practicalities and complexities of (regulating) online terrorist content moderation. The special issue contains papers that were presented at Swansea University’s Terrorism and Social Media Conference 2022.
As part of its broader efforts to regulate platforms, on 29 April 2021 the European Union adopted Regulation (EU) 2021/784 on addressing the dissemination of terrorist content online (TCO). The legislative process was contentious, and different actors across the digital rights and human rights domains scrutinised the proposal and negotiations, lobbying all EU institutions to amend specific provisions to ensure the regulation respected fundamental rights. These actors were partially successful in influencing key aspects of the final text of the TCO, challenging the traditional perception that counterterrorism policymaking tends to be depoliticised and shielded from public contestation. In this blog post, I highlight the key themes that emerged from an analysis of the discourses of different actors during the negotiation process of the TCO. These findings suggest that while discourses of urgency and necessity remained evident on the part of EU institutions, the issue of platform governance engaged a more diverse range of actors who were able to (publicly) challenge and contest the EU’s approach.
Counterterrorism and (de-)politicisation
Within the security studies literature, counterterrorism measures have traditionally been understood in the context of depoliticisation and securitisation. According to this logic, when an issue is securitised, decision-making departs from “normal” politics, and there is a lack of public debate as political actors and “security professionals” steer the discourse. However, several scholars have challenged this thesis, arguing that a politicisation perspective can also be applied to security governance. This is particularly evident in cases where proposed counterterrorism measures have potential implications for privacy or digital rights. To explore the dynamics of politicisation and securitisation at play, this study traced the evolution of the TCO and compared the key discourses and argumentation patterns of digital and human rights advocates and the EU institutions during the negotiations. Whilst the EU institutions and digital rights advocates broadly agreed that terrorist content online is an important issue that needs to be addressed, three key areas of contention emerged from the analysis: the assessment of the terrorist threat online; normative understandings of fundamental rights in the digital space; and the role of platforms. Fundamentally, the priorities of the EU institutions and digital and human rights advocates differed.
Threat assessment, digital rights, and the role of platforms
For the EU institutions, the threat of terrorism online was framed as urgent, requiring swift action. Often expressing frustration at the lack of progress during the negotiations, the EU institutions pointed to the terrorist attacks in Austria, France, and Germany in 2020 as examples of why the final text needed to be agreed quickly. In contrast, whilst digital and human rights advocates acknowledged that the EU was pursuing legitimate aims, they emphasised the importance of establishing a solid evidence base and of using and evaluating the measures the EU already had at its disposal, such as Directive (EU) 2017/541 on combating terrorism. These groups also invoked urgency, but to highlight the risks that the TCO could pose to fundamental rights.
Underlying much of the debate and contention surrounding the TCO were differing conceptualisations of how to safeguard fundamental rights while countering terrorism online. These conflicting discourses reflect different normative positions on fundamental rights in the digital space. Whilst the EU institutions emphasised the need to balance fundamental rights and security, digital and human rights activists challenged this commitment by revealing inconsistencies in the proposed measures. By calling on the EU to adopt actionable measures to protect fundamental rights, digital and human rights advocates argued that the EU’s use of “balance” amounted to empty rhetoric. In doing so, these groups directly challenged the EU’s normative stance as a defender of fundamental rights.
Finally, the roles and responsibilities of platforms were highly contested. Within EU discourse, there has been a gradual shift from framing the problem as terrorist abuse of the internet to terrorist misuse and exploitation of platforms. As a result, the roles and responsibilities of platforms have featured prominently in regulatory debates. The European Commission, the Council, and some members of the European Parliament were keen to attribute to platforms a responsibility to protect their users, reflecting the EU’s broader shift in recent years from platform liability to platform responsibility. The critical point of contention for digital and human rights advocates was what they saw as the disproportionate power the regulation would afford platforms to police and decide on content. There were also concerns that, by imposing a duty of care on platforms without adequate oversight, the EU would outsource its own responsibility to protect fundamental rights.
This analysis shed light on the discourses articulated by the different actors involved in the legislative process of platform regulation. Whilst securitising narratives of urgency and necessity remained evident, these discourses were challenged and contested in a public setting. This is in line with previous research suggesting that counterterrorism can have both securitising and politicising elements. Platform governance, which touches on salient issues such as privacy and freedom of expression, is inherently politicised and open to contestation; it is thus interesting to see how these politicising dynamics have seeped into counterterrorism legislation and, as a consequence, drawn a more diverse range of actors into the security landscape. This raises further questions regarding review and accountability mechanisms in counterterrorism legislation more generally. Whilst review bodies and courts have played a key role in remedying often rushed counterterrorism legislation ex post, this move towards greater scrutiny and a diversity of actors at the policy-making stage is a notable development. Moreover, this case showed that the European Parliament was vital in providing access and conveying the concerns of digital and human rights activists to the “elite” negotiation level. Further research could explore the dynamics and relationships between the European Parliament and digital and human rights NGOs in more detail.
Reem Ahmed is a Researcher at the Institute for Peace Research and Security Policy at the University of Hamburg.
Image Credit: Pexels