Being one step ahead: How platforms (can) counter terrorism online

By Franziska Selzer

To combat terrorist content online, platforms must take action. Meta, for instance, reported taking action — removing content, covering disturbing content with a warning, or disabling accounts — against 30.8 million pieces of terrorist content on Facebook in the first three quarters of 2024. Over 99% of this content was found by Meta proactively, before any user report. Taking a few steps back, however, there is still a lot of uncertainty around terrorist content among both the public and tech platforms: What does terrorist content actually mean? Which platforms are particularly at risk of being misused by terrorist actors? And which moderation techniques, whether automated or manual, are effective in combating terrorist and extremist content?

The ‘Tech Terror Takedown’ (TTT) Podcast, a limited-series podcast delivered by LMU Munich as part of the EU-funded TATE project, addressed these questions and shed light on current trends in terrorist and extremist communication online. The podcast offers tips for platforms to proactively identify terrorist content and implement effective measures to prevent its re-upload. International experts from academia, politics and the tech industry discuss four broader topical areas (see Figure 1):

1) Understanding extremist content and ecosystems

With the increased accessibility and user-generated nature of online content comes a greater risk of platforms enabling the spread of harmful, extremist and terrorist content. “Extremists attempt to integrate their views into the center of society”, said Prof. Diana Rieger (LMU Munich) during the first TTT-Podcast episode with moderator Sophia Rothut. A platform’s architecture can enable targeted recruitment by extremists, who strive to create a general atmosphere of fear in society to further deepen societal cleavages and polarization. “Many platforms, big and small, are currently exploited for terrorist purposes [as seen] in our TATE research”, mentioned Prof. Maura Conway (Dublin City University) in Episode 5. “We’ve identified over 100 platforms in the last six months that are hosting specifically terrorist content.” In Episode 7, Broderick McDonald (University of Oxford) expanded on practical examples of techniques extremists and terrorists have used to bypass content moderation and appear more legitimate, such as image obscuration and the use of news agencies’ logos.

2) Regulatory frameworks and legal challenges

A special focus of the TTT-Podcast lay on the Terrorist Content Online Regulation (TCO Regulation), which came into effect in June 2022 and obliges any online platform offering its services within the European Union to remove terrorist content upon notification. After receiving a removal order from a competent authority, online platforms have one hour to remove or disable access to the terrorist content on their platform. Adhering to this ‘one-hour rule’ and implementing effective and proportionate measures to prevent terrorist content from being re-uploaded poses significant challenges, especially for small and micro platforms. These platforms often lack the resources to tackle terrorist content online, making them even more likely to be exploited by terrorist actors. While Friederike Wegener (European Commission) explained the TCO Regulation in detail in Episode 3, it is a recurring topic in the other episodes as well. In Episode 15, Wegener and her colleague Martina Maiello underline the importance of the Digital Services Act (DSA) and the TCO Regulation in jointly reducing online harm in the EU. As Maiello put it: “The TCO and the DSA work, because they work together”. There are nonetheless major differences between the two regulations, such as the TCO Regulation’s one-hour removal rule, which calls for immediate action by platforms.

3) Content moderation and ethical challenges

Identifying extremist and terrorist content online is a complex endeavor. Differentiating between neutral information and propaganda requires a certain level of know-how, a topic further illustrated by Prof. Catherine Bouko (Ghent University) in Episode 10. Accordingly, the interplay between freedom of expression and the need to safeguard online communities marks a crucial issue in content moderation. “It always is a very delicate balancing act (…)”, explained Prof. Carsten Reinemann (LMU Munich) in Episode 8. This balancing act depends largely on legal regulations and platform policies, which are shaped by differing cultural and historical approaches to freedom of speech. Once a piece of content is classified as extremist or terrorist, platforms are obliged to provide appropriate content moderation. Although a variety of automated tools exists, human oversight remains indispensable; it is the effective combination of both that is crucial, as pointed out by Dr. Ashley Mattheis (Dublin City University) and Prof. Stuart Macdonald (Swansea University) in Episode 4.

4) Collaborative approaches and tools

One widely used and beneficial tool in law enforcement and online safety is Open-Source Intelligence (OSINT). Ritu Gill, an OSINT analyst, explained the fundamentals and benefits of OSINT in combating violent extremism and terrorism. She shared insights from investigations and operations involving OSINT and gave practical advice on how to integrate it into the everyday work of practitioners. Pointing out its indispensability, she concluded: “Often with open source, the way I see it, it’s these breadcrumbs that help us put the puzzle together”. In addition, Andrew Staniforth (SAHER Europe) expanded on collaboration between law enforcement and tech companies in tackling online harms. Given the rapidly changing and dynamic strategies of terrorist and extremist actors, counterterrorism is a collaborative endeavor. Referring to his and Prof. Stuart Macdonald’s GNET study, he further explained the synergies and challenges in collaborative counterterrorism efforts between law enforcement and tech companies. Only through continuous collaboration across sectors can we effectively counter online extremism and terrorism and build a digital environment that ensures safety and resilience.

Figure 1. Thematic overview of the podcast episodes

Note. Numbers in brackets indicate the corresponding episode number.

For a deeper dive, check out the Tech Terror Takedown Podcast on YouTube and Spotify.

Franziska Selzer is a Master’s student and student assistant at the Department of Media and Communication of Ludwig-Maximilians-University Munich. Her research interests lie in political communication, with a focus on radical and extremist online communication.

IMAGE CREDIT: Franziska Selzer
