After the Attack: The Challenge of Bystander Content

By Alastair Reed, Anne Craanen, and Arthur Bradley

In the digital age, the aftermath of terrorist attacks is often captured and disseminated not only by the perpetrators but also by bystanders. Mobile phone videos, CCTV footage, body cam recordings, and livestreams routinely surface online within moments of such events. While this bystander content is not created by the attackers themselves, its rapid circulation raises complex challenges for policymakers, technology companies, and civil society, who must balance the competing demands of safety and security against freedom of speech and the public interest in events of considerable societal impact.

Communication and violence are two sides of the terrorist coin—the impact of an attack depends not only on the act itself but also on its dissemination to a wider audience. In the past, terrorists relied on high-profile, “spectacular” events—such as the 9/11 attacks or the 1972 Munich Olympics attack—to ensure media coverage. Today, however, they no longer need traditional media to amplify their message. Bystander footage can go viral within minutes, bypassing media gatekeepers and reaching global audiences directly.

Although significant progress has been made in addressing the circulation of terrorist-produced content—especially following the 2019 Christchurch attack—most crisis response protocols have focused on content created by perpetrators. The issue of bystander footage has remained largely unaddressed, with little consensus around how such content should be treated. In a recent workshop in Brussels, Coventry University and the Institute for Strategic Dialogue (ISD) convened experts to examine the potential harms associated with bystander content and explore appropriate policy responses.

Understanding the Harm

Public and political outrage often follows the spread and exploitation of bystander content, accompanied by calls for its removal. Yet the underlying question is frequently left unexamined: what specific harms are we trying to prevent? When should bystander content be considered to support or glorify terrorism, or otherwise further the objectives of terrorists, and thus be subject to removal on those grounds? And, equally, when should it be protected on grounds of freedom of speech and the public interest?

Although not produced by attackers, the viral spread of bystander footage can inadvertently amplify their message and help them gain the publicity they seek. This concern is heightened as the dissemination of bystander footage increasingly appears to be a deliberate element of terrorist attack strategies, offering perpetrators a route to global audiences that does not depend on mainstream media coverage.

Terrorist and violent extremist (TVE) groups have actively incorporated bystander footage into their propaganda. After the November 2020 Vienna attack, for example, Islamic State (IS) included a still image taken from CCTV footage of the attack in its Al-Naba newsletter. Extremist groups with opposing ideologies have likewise weaponised bystander footage for their own narratives. Following the Islamist-inspired 2024 Mannheim knife attack, far-right groups disseminated footage of the attack captured on a livestream to reinforce long-standing calls for the deportation of Muslims and immigrants, and right-wing extremist networks repurposed the imagery in memes and other content promoting anti-Muslim and anti-immigrant narratives. In such cases, bystander content is weaponised as a tool for radicalisation and mobilisation across the ideological spectrum.

Beyond Terrorism: Additional Harms

The potential harms of bystander content extend beyond the realm of terrorism. Such footage often contains graphic imagery akin to that seen in other violent crimes, accidents, or natural disasters—material typically avoided by traditional media and most social platforms due to its disturbing nature. This raises an important question: should bystander footage of terrorist attacks be moderated based purely on its graphic content?

Closely tied to the issue of graphic violence is the emotional and ethical consideration of victim dignity. Images or videos depicting victims can be deeply traumatic for them, their families, and the wider public. This concern came into sharp focus following the August 2020 Kouré attack in Niger, in which IS-linked militants killed seven humanitarian workers and their guide. Graphic images of the victims circulated on social media and in local press outlets, including via accounts linked to far-right and jihadist groups, prompting an organisation representing victims of terrorism to file a legal complaint seeking their removal. As its spokesperson argued: “On the one hand, these photos shouldn’t be going around, and furthermore, they’re being used to incite hatred and that is abhorrent.” In this case, bystander content not only violated the dignity of the victims but also became a tool for inciting violence and hatred.

Navigating Public Interest

Balancing safety, security, and freedom of expression is a core challenge in content moderation, particularly when it comes to bystander footage. While it is essential to prevent terrorist actors from leveraging media coverage to advance their objectives, the public also has a right to information—especially regarding events with significant societal impact. Removing or suppressing content that is actively being analysed and debated can limit access to information and restrict public discourse. Across all forms of media, from traditional journalism to digital platforms, the principle of “public interest” exceptions is well established—allowing content that may otherwise violate editorial or platform rules to remain available when it serves the broader public good.

This issue is particularly relevant in authoritarian countries with closed or heavily restricted media environments, where social media is often the only space in which the public can access news not controlled by the state. A recent Meta Oversight Board case regarding bystander footage of the 2024 Moscow terrorist attack highlighted this concern. The Board overturned Meta’s decision to remove content depicting the moment of the attack on visible victims, stating: “This is particularly the case when the footage has been viewed by millions of people and accompanied by allegations that the attack was partly attributable to Ukraine. The Board notes the importance of maintaining access to information during crises particularly in Russia, where people rely on social media to access information or to raise awareness among international audiences.”

Conclusion

Bystander content is now a consistent feature of terrorist attacks in the digital age. Its circulation raises difficult questions about amplification, harm, and responsibility. While this content can provide valuable context and information, it can also serve as a vector for the glorification of terrorism, misinformation, and exploitation. However, addressing this challenge doesn’t require rewriting the rulebook—it requires applying the one we already have.

Rather than developing entirely new policies and guidance, we must look to existing frameworks on the glorification of terrorism, hate speech, graphic content, and incitement, among others. A coordinated, harms-based approach should guide decisions on when and how to intervene, ensuring that moderation is targeted, consistent, and bounded by clear parameters that guard against unwarranted erosion of freedom of expression. Bystander footage sits at the intersection of multiple policy areas. What’s needed now is not another layer of legislation but a clear understanding of the problem and better alignment of current tools, alongside clearer protocols for how and when to use them.

ISD and Coventry University will take this work forward in the coming months, producing policy guidance to support a more coherent and rights-based response to bystander content. This will feed into ongoing efforts and complement broader international work on crisis response and digital safety. 


Alastair Reed is Professor of Security and Strategic Communications at the Centre for Peace and Security, Coventry University. His research focuses on understanding and analysing terrorist and extremist propaganda, and countering extremist content online. He is a member of the VOX-Pol Network, and an associate fellow at RUSI and ICCT.

Anne Craanen is a researcher at Swansea University and a member of the VOX-Pol Network. She is also a Senior Research and Policy Manager, Extremism at the Institute for Strategic Dialogue, specialising in gender and extremism, regulatory approaches to online terrorist and extremist content, and countering extremism and terrorism.

Arthur Bradley is an independent consultant who specialises in OSINT, online investigations, and terrorist propaganda ecosystems. His affiliations include Human Digital, the VOX-Pol Institute, and the Institute for Strategic Dialogue, and he is an external contributor on right-wing extremism for Jane’s Terrorism and Insurgency Centre (JTIC).

Image credit: Niña Venus on Unsplash