By Heidi Schulze, Brigitte Naderer, and Diana Rieger
The VOX-Pol workshop “Borderline Content Online” can be viewed here
Managing harmful online content remains one of the central challenges of the digital age. The advent of large language models (LLMs) like ChatGPT, which can generate vast amounts of content quickly and with little effort, complicates this challenge and makes it more pressing. While moderation practices and platform regulation policies are increasingly recognized and applied across many countries and platforms (e.g., the EU's Digital Services Act (DSA) and Terrorist Content Online Regulation (TCO)), much harmful online content falls outside specific legislation or platform Terms of Service. This type of content, known as borderline content, skirts the edge of legality and acceptability, posing risks of radicalization, extremism, and societal harm without clearly warranting removal. On 13 June 2024, VOX-Pol members Brigitte Naderer, Heidi Schulze, and Diana Rieger organized an online workshop on Borderline Content Online. They invited five speakers to discuss the topic from four different angles, summarized in the following sections.
1. Fear Speech as Strategic Borderline Communication (Simon Greipl & Heidi Schulze, LMU)
One particularly insidious form of borderline content is fear speech because “rather than ‘hate speech’ [it] may be more relevant when assessing violent conflict escalation”. Defined as any deliberate communicative act that portrays a particular group or institutional entity as harmful, fear speech aims to drive the perception of a threat, thereby instilling a sense of fear in its audience. In online environments, this type of speech can foster a climate of hostility and exacerbate threat perceptions, potentially contributing to radicalization dynamics. For example, during the COVID-19 pandemic, fear speech flourished online, exploiting uncertainties to create a sense of shared threat that can be used to justify violence against those perceived as threatening. A large-scale content analysis of the Telegram communication of far-right, conspiracy, and COVID-19 protest actors found fear speech prevalence ranging from 21% to 50% in crisis-related topics. Topics related to COVID-19 and conspiracy theories were particularly characterized by fear-instilling rhetoric. However, it remains unclear whether fear speech is tied to specific social environments or to the characteristics of different platforms.
2. Hybridized Online Hate and Extremism in the Israel/Gaza Conflict (Hannah Rose, ISD)
The recent conflict between Israel and Hamas has highlighted the complexity of managing online hate and extremism. Following the October 7th attacks, there was a marked increase in both antisemitic and anti-Muslim hate speech online. This surge was driven by established narratives that were quickly adapted to the context of the conflict. Antisemitic conspiracy theories and classical antisemitic slurs proliferated, as did anti-Muslim tropes portraying Muslims as inherently violent or incapable of integrating into Western societies. The hybridized nature of this hate speech, blending legal and illegal content across mainstream and fringe platforms, poses significant challenges for moderation and regulation and may contribute to further escalation of the conflict and to the entrenchment of stereotypes.
3. Legitimizing Hostility Through Humor (Ursula Schmid, LMU)
Humorous hate speech, another facet of borderline content, combines derogatory attitudes with humor cues such as puns, irony, or comic styles. This combination makes hate speech appear more light-hearted and less extreme, often allowing it to evade content moderation efforts. Humor can obscure the harmful nature of a message, normalize hostility, and make it more socially acceptable, and actors can deploy it strategically to circumvent regulation. Furthermore, effects research has shown that humorous hate speech is less likely to be perceived as hostile and more likely to be seen as socially acceptable, particularly in environments used for entertainment purposes. This normalization can lead to a gradual acceptance of more extreme content over time.
4. Borderline Content and Human Rights (Broderick McDonald, ICSR)
Borderline content, while not illegal, can nonetheless be deeply troubling and harmful. It is also difficult to assess at scale because individual pieces of content are often unique and require careful consideration. When countering these harms, we must work within a human rights framework that protects freedom of expression while limiting damage to individuals and communities. The moderation of borderline content on private platforms is particularly challenging because it involves balancing the protection of free speech with the need to mitigate harm. Tools that are available, and applied by some platforms, include de-amplification, pre-bunking, redirecting users to helpful and supportive sources, user-assisted content moderation, and investment in trust and safety teams. Borderline content remains one of the most important and challenging issues because it touches so many diverse aspects of our societies and online discussions.
Discussion Summary
The four presentations highlighted the multifaceted nature of borderline content and the challenges associated with regulating it. Fear speech exploits societal fears and uncertainties, humorous hate speech normalizes hostility through humor, and “lawful but awful” content tests the boundaries of free speech. Crises and (international) conflicts can further accelerate the spread of conspiracy narratives, outgroup stereotypes, and misinformation. Each form of borderline content requires tailored strategies for identification, moderation, and regulation. The discussion underscored the need for comprehensive approaches that consider the legal, social, and psychological dimensions of online harm.
How to Deal with Borderline Content
Addressing borderline content requires a multi-pronged approach involving various stakeholders, including governments, platforms, and civil society. Key actions include:
- Transparent Data Access for Researchers: To better understand the extent, effects, and consequences of borderline content on social media platforms, researchers need to be granted access to platform data on both published and deleted content.
- Enhanced Transparency and Accountability: Platforms should provide clear guidelines and transparent processes for content moderation, allowing users to understand and appeal decisions.
- International Cooperation: Governments should collaborate internationally to develop consistent standards and share best practices for managing online harm.
- Proactive Measures: Techniques like pre-bunking and the Redirect Method can help inoculate users against harmful narratives before they take hold.
- Context-Sensitive Approaches: Content moderation should consider the context in which content is shared, recognizing the differences between platforms and cultural norms.
- Support for Content Moderators: Platforms must invest in training and mental health support for content moderators who are exposed to harmful material, especially those in the Global South.
- Engagement with Civil Society: Policymakers should engage with researchers, advocacy groups, and affected communities to develop nuanced and effective regulatory frameworks.
Heidi Schulze is a Research Associate at the Department of Media and Communication at Ludwig-Maximilians-Universität München. Her research focuses on radicalization dynamics in online environments, radical/extremist (group) communication in alternative social platforms and fringe communities, as well as characteristics and audiences of hyperpartisan news websites.
Brigitte Naderer is a Postdoctoral Researcher at the Center for Public Health, Department of Social and Preventive Medicine at the Medical University of Vienna (Austria). Her research focuses on media literacy, persuasive communication, online radicalization, and media effects on children and adolescents.
Diana Rieger is a Professor at the Department of Media and Communication at Ludwig-Maximilians-Universität München. Her research focuses on extremist communication, hate speech, online radicalization, and the effects of entertainment media on wellbeing-related outcomes.
Image Credit: PEXELS