Chapter | The Evolution of Terrorism in the Digital Era: Cyberterrorism, Social Media, and Modern Extremism
This study explores the significant impact of the internet on terrorism and extremism by focusing on three key areas: cyberterrorism, social media usage, and attack planning. Cyberterrorism has emerged as a major threat, with incidents such as the WannaCry ransomware attack highlighting its potential to disrupt critical infrastructure. The study reveals that the low cost of cyberattacks and the difficulty of tracing cyberterrorists contribute to the rising prevalence of this threat. Furthermore, social media has become a vital tool for extremist groups because it facilitates global communication, recruitment, and propaganda dissemination. Events such as the Christchurch mosque attacks illustrate the challenges of monitoring extremist content online. The internet also aids in attack planning by providing discreet communication channels, easy access to materials, and simplified travel arrangements, as seen in the 9/11 attacks and the Manchester Arena bombing. Additionally, the chapter discusses how technological advancements have enhanced security agencies’ capabilities, leading to more effective prevention and response strategies. The findings highlight the dual role of the internet as both a facilitator of terrorism and a crucial asset in countering these threats.
2024 | Ball, D. and Montasari, R.
Report | Online Hate and Harassment: The American Experience 2024
Severe online hate and harassment increased four percentage points across the board in the past year, which was dominated by an unprecedented surge in antisemitism online and offline in the wake of Hamas’ brutal attack on Israel on October 7. Decreases in platform enforcement and data access, along with new threats of hate and disinformation from generative AI tools, all potentially contributed to this year’s findings.
ADL conducts this nationally representative survey annually to find out how many American adults experience hate or harassment on social media. Since 2022, we have also surveyed teens ages 13-17. This survey was conducted in February and March 2024 and asked about the preceding 12 months.
Key Findings
- Severe harassment overall went up: 22% of Americans experienced severe harassment on social media in the past 12 months, an increase from 18% in 2023 (including an increase in physical threats from 7% to 10%).
- Harassment by disability: People with disabilities were more likely to be harassed than non-disabled people (45% vs. 36% for any harassment; 31% vs. 19% for severe harassment), and more likely to be harassed than in the previous year (45% vs. 35% for any harassment; 31% vs. 20% for severe harassment).
- Harassment for disability spiked: Reasons for online harassment in the past 12 months remained stable, except for disability, which spiked from 4% to 12%, despite the proportion of disabled respondents remaining similar.
- LGBTQ+ people were the most harassed of the marginalized groups surveyed: LGBTQ+ people experienced an increase in physical threats (from 6% to 14%), while transgender people, as a subgroup, reported a higher rate of severe harassment than last year (from 30% to 45%).
- Jewish adults were more likely to be harassed for their religion (34% of those harassed compared to 18% of non-Jews) and 41% changed their online behavior to avoid being recognized as Jewish. Nearly two-thirds (63%) felt less safe than they did last year.
- Platforms: Facebook remains the most common platform where harassment was experienced (cited by 61% of those harassed), while the incidence of harassment rose on WhatsApp (from 14% to 25%) and Telegram (from 7% to 13%).
Key Recommendations
Allow researchers inside the black box: The federal government should follow California’s lead and require technology companies to standardize transparency reporting and broaden data access for researchers.
Address hate on messaging apps: Messaging platforms should strengthen anti-hate policies and invest in tools to combat hate and harassment.
Support targets of hate: Platforms should implement recommended reporting tools and features that reduce hate and improve abuse reporting.
Invest in trust and safety: Tech companies should increase trust and safety resources (human and automated) to ensure platforms are enforcing their rules around hate speech and violence.
2024 | Anti-Defamation League
Journal Article | The overlap between viewing child sexual abuse material and fringe or radical content online
Drawing on a survey of 13,302 online Australians, this study examines the characteristics and behaviours of respondents who viewed child sexual abuse material (CSAM) and fringe or radical content online, or both. In the past 12 months, 40.6 percent of respondents had viewed fringe or radical content and 4.5 percent had viewed CSAM. Among respondents who viewed CSAM, 64.7 percent had also viewed fringe or radical content, while 7.1 percent of those who viewed radical content had also viewed CSAM. Respondents who viewed only CSAM or only fringe or radical content were similar to one another. Respondents who viewed both were more likely to be younger and male and had higher rates of criminal justice system contact and diagnosed mental illness. Their online activity, including the platforms used, also differed.
2024 | Cubitt, T., Morgan, A. and Brown, R.
Chapter | Visual Methods for Sensitive Images: Ethics and Reflexivity in Criminology On/Offline
This chapter addresses the impact and discomfort of researching extremist digital subcultures. It offers reflections on the visual and personal dimensions of this work from two researchers investigating online extremist far-right and incel content, using memes as a case study. Through visually stimulating images, humor, and narratives, memes can normalize and desensitize extremism and violence, not only for consumers but also for researchers. We compare our research experiences to help prepare others who plan to study extremist content online, identifying themes of dehumanization and provocation, internalization and enabling of harmful humor, and hyperawareness in everyday life. We give specific suggestions for handling these challenges and conclude by discussing the importance of adopting a reflexive approach for orienting our positionality in the research process.
2024 | Våge, P. and Andersen, J.C.
Journal Article | Digital Communication Strategy to Counteract the Use of Social Media as a Propaganda Tool for Terrorist Groups
Social media has become a critical tool for terrorist groups such as ISIS and Al-Qaeda, which leverage these platforms to disseminate ideology, recruit members, and mobilize support. This study aims to identify the patterns of terrorist propaganda on social media and to develop effective digital communication strategies to counteract these activities. The research employs a qualitative approach through literature review and content analysis, grounded in digital communication, strategy, and propaganda theories. The findings reveal that terrorist groups use emotional narratives to disseminate ideology, personalized approaches for recruitment, and social media to organize collective action. Successful digital communication strategies include AI-based early detection, culturally relevant counter-narratives, digital literacy to raise public awareness, cross-sector collaboration, and robust law enforcement. The study argues that a holistic approach, built on close collaboration among governments, technology companies, and society, is essential to countering terrorist propaganda. Further research is recommended to explore emerging technologies and innovative strategies to address this evolving threat.
2024 | Yudho, A.D.S., Afifuddin, M. and Suhirwan, S.
Report | Radicalisation through Gaming: The Role of Gendered Social Identity
This project aims to understand, through a gender and intersectional lens, how socialisation processes coupled with exposure to harassment, hate-based discrimination and extreme content can potentially lower resilience to radicalisation in gaming.
2024 | White, J., Wallner, C., Lamphere-Englund, G., Frankie, L., Kowert, R., Schlegel, L., Kingdon, A., Phelan, A., Newhouse, A., Saiz Erausquin, G. and Regeni, P.