Journal Article
Far-right conspiracy groups on fringe platforms: A longitudinal analysis of radicalization dynamics on Telegram
Societal crises, such as the COVID-19 pandemic, produce instability and create fertile ground for radicalization. Extremists exploit such crises by distributing disinformation to amplify uncertainty and distrust among the public. Against this backdrop, this study presents a longitudinal analysis of far-right communication on fringe platforms, demonstrating radicalization dynamics. Public Telegram communication of three movements active in Germany (QAnon, Identitarian Movement, Querdenken) was analyzed through a quantitative content analysis of 4,500 messages posted to nine channels between March 2020 and February 2021. We study the movements’ discourse using several indicators of radicalization dynamics. The increasing prevalence of conspiracy narratives, anti-elitism, political activism, and support for violence indicates radicalization dynamics in these movements’ online communication. However, these dynamics varied within the movements. We conclude that, when studying radicalization dynamics online, it is crucial not to focus on a single indicator but to consider longitudinal changes across several indicators, ideally comparing different movements.
2022
Schulze, H., Hohner, J., Greipl, S., Girgnhuber, M., Desta, I. and Rieger, D.
Journal Article
Too civil to care? How online hate speech against different social groups affects bystander intervention
A large share of online users has already witnessed online hate speech. Because targets tend to interpret bystanders’ lack of reaction as agreement with the hate speech, bystander intervention is crucial: it can help alleviate negative consequences. Despite existing evidence on online bystander intervention, however, it remains largely unclear whether bystanders evaluate online hate speech targeting different social groups as equally uncivil and, thereby, as equally worthy of intervention. We therefore conducted an online experiment that systematically varied the type of online hate speech: homophobia, racism, or misogyny. The results demonstrate that, although all three forms were perceived as uncivil, homophobic hate speech was perceived as less uncivil than hate speech against women. Consequently, misogynist hate speech, compared to homophobic hate speech, increased feelings of personal responsibility and, in turn, boosted willingness to confront.
2023
Obermaier, M., Schmid, U.K. and Rieger, D.
Journal Article
How social media users perceive different forms of online hate speech: A qualitative multi-method study
Although many social media users have reported encountering hate speech, differences in perception between users remain unclear. Using a qualitative multi-method approach, we investigated how personal characteristics, presentation form, and content-related characteristics influence social media users’ perceptions of hate speech, which we differentiated into first-level perceptions (i.e. recognizing hate speech) and second-level perceptions (i.e. attitudes toward it). To that end, we first observed 23 German-speaking social media users as they scrolled through a fictitious social media feed featuring hate speech. Next, we conducted remote self-confrontation interviews to discuss the content, as well as semi-structured interviews involving interactive tasks. Although perceptions proved highly individual, some overarching tendencies emerged. The results suggest that the perception of, and indignation toward, hate speech decrease as social media use increases. Moreover, direct and prosecutable hate speech is perceived as particularly negative, especially in visual presentation form.
2022
Schmid, U.K., Kümpel, A.S. and Rieger, D.
Journal Article
Understanding Online Platform Usage of Extremist Groups via Graph Analytics
Graph analytics has become instrumental in uncovering insights across various domains, particularly social networks, and serves as a crucial tool for analyzing relationships between users on different online platforms. In this research, we apply methods of social network analysis to examine communication patterns among participants in an online forum known for far-right extremism. Our study characterizes the actors’ relationships and activities through several network-analytic applications. In an extensive analysis, we identify influential actors and map their relationships over the course of 76 monthly networks. Moreover, we illustrate how the networks evolved over that period and how they connect to significant events. The findings aim to clarify the nature of these interactions and networks, and to allow practitioners to take the precautions necessary to mitigate far-right activities on various online platforms.
2025
Hossain, T., Akbas, E., Lemieux, A.E. and Massignan, V.
Report
Hash-Sharing Database Review: Challenges and Opportunities
Hash Sharing Working Group: As technologies, content, and types of violent extremist and terrorist groups change, GIFCT continuously reviews its definitions and parameters to evolve in line with trends and member needs. This working group reviewed the inclusion criteria of GIFCT’s Hash-Sharing Database and proposed enhancements to improve its transparency and accuracy. The National Consortium for the Study of Terrorism and Responses to Terrorism (START) worked with GIFCT to produce a set of concrete recommendations, Enhancing Transparency and Accuracy in GIFCT’s Hash-Sharing Database. GIFCT will implement these recommendations throughout 2025 to ensure that the database remains effective and impactful in cross-platform counterterrorism efforts.
2024
The Global Internet Forum to Counter Terrorism
Report
GIFCT Incident Response Working Group Report: A Review of the Content Incident Protocol and Incident Response Framework
Incident Response Working Group: GIFCT conducted a robust multistakeholder review of its Incident Response Framework (IRF) to better reflect the needs and feedback of key stakeholders and to draw on lessons learned from prior activations. Discussions aimed at updating the IRF were informed by widespread concerns about a potential rise in AI-generated content around terrorist attacks and by the increasing prevalence of violence associated with accelerationist movements and self-mobilized radicalization. Working with the Center on Terrorism, Extremism, and Counterterrorism (CTEC), GIFCT undertook a comprehensive review and expert dialogue series that resulted in the report Future-Proofing GIFCT’s Incident Response: Addressing Societal Harms, which will inform GIFCT’s work on updating the IRF in 2025.
2025
The Global Internet Forum to Counter Terrorism