Chapter | Social Media and Terrorist Financing
Social media and terrorist financing (SMTF) refers to how terrorist organisations (TOs), sometimes called violent extremist organisations (VEOs), exploit free social media platforms to raise money for terrorism, fighting, or jihad; to recruit, train, equip, and transport fighters; to support extremist religious proto-states (e.g., the Islamic State of Iraq and Syria [ISIS]); to market and brand an organisation, including by disseminating its ideologies; and/or to amass wealth. This chapter considers the scope of SMTF and legal and policy responses to its ever-evolving practices, discusses how it is practised and by whom, and examines the challenges of fighting SMTF. SMTF is part of the larger concept of cyberterrorism, in which terrorists maliciously target computer systems and use digital technology to facilitate terror attacks, though some argue that the two are entirely distinct phenomena. The chapter concludes by summarising the key points of the discussion, offering suggestions for future study of this phenomenon, and reflecting on the societal impacts of SMTF.
2024 | Alley-Young, G.
Report | Human Rights Assessment: Global Internet Forum to Counter Terrorism
The Global Internet Forum to Counter Terrorism (GIFCT) commissioned BSR to conduct a human rights assessment of its strategy, governance, and activities. The purpose of this assessment is to identify actual and potential human rights impacts (including both risks and opportunities) arising from GIFCT’s work and make recommendations for how GIFCT and its participants can address these impacts. BSR undertook this human rights review from December 2020 to May 2021. This assessment combines human rights assessment methodology based on the UN Guiding Principles on Business and Human Rights (UNGPs) with consideration of the human rights principles, standards, and methodologies upon which the UNGPs were built. This review was funded by GIFCT, though BSR retained editorial control over its contents.
2021 | Allison-Hope, D., Andersen, L. and Morgan, S.
Journal Article | From Directorate of Intelligence to Directorate of Everything: The Islamic State’s Emergent Amni-Media Nexus
This article, which is based on original interview data gathered from eastern Syria between January and October 2018, examines the emergent dominance of the Islamic State’s Directorate of General Security (DGS). We track how this institution, which is currently operating through a network of diwan-specific security offices grouped under the Unified Security Center (USC), has come to oversee and manage an increasingly wide array of the group’s insurgent activities—including intelligence and military operations and religious and managerial affairs. Focusing in particular on its role in the context of media production—which comprises anything from facilitation and security to monitoring, distribution and evaluation—we illustrate the critical importance of this most elusive directorate, positing that, in its current form, it could stand to facilitate the survival of the Islamic State for months—if not years—to come.
2019 | Almohammad, A. and Winter, C.
Report | Decoding Hate: Using Experimental Text Analysis to Classify Terrorist Content
This paper uses automated text analysis – the process by which unstructured text is extracted, organised and processed into a meaningful format – to develop tools capable of analysing Islamic State (IS) propaganda at scale. Although we have used a static archive of IS material, the underlying principle is that these techniques can be deployed against content produced by any number of violent extremist movements in real-time. This study therefore aims to complement work that looks at technology-driven strategies employed by social media, video-hosting and file-sharing platforms to tackle violent extremist content disseminators.
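The kind of pipeline this abstract describes (extracting unstructured text and organising it into a countable, structured format) can be sketched in miniature. The sample strings below are invented placeholders, not material from the study's archive, and the paper's actual tooling is considerably more sophisticated:

```python
import re
from collections import Counter

# Placeholder documents standing in for an archive of extremist texts.
docs = [
    "The group released a new video statement.",
    "A new statement urges supporters to act.",
]

def tokenize(text):
    # Lowercase and keep alphabetic tokens only.
    return re.findall(r"[a-z]+", text.lower())

# Organise each unstructured document into per-document term frequencies.
term_freqs = [Counter(tokenize(doc)) for doc in docs]

# Aggregate counts across the corpus: a structured format that
# downstream classifiers or scaling analyses can consume.
corpus_counts = sum(term_freqs, Counter())
```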
2020 | Alrhmoun, A., Maher, S. and Winter, C.
Journal Article | Automating Terror: The Role and Impact of Telegram Bots in the Islamic State’s Online Ecosystem
In this article, we use network science to explore the topology of the Islamic State’s “terrorist bot” network on the online social media platform Telegram, empirically identifying its connections to the Islamic State supporter-run groups and channels that operate across the platform, with which these bots form bipartite structures. As part of this, we examine the diverse activities of the bots to determine the extent to which they operate in synchrony with one another as well as explore their impacts. We show that these bots are mainly clustered around two communities of Islamic State supporters, or “munasirun,” with one community focusing on facilitating discussion and exchange, and the other one augmenting content distribution efforts. Operating as such, this network of bots is used to lubricate and augment the Islamic State’s influence activities, including facilitating content amplification and community cultivation efforts, and connecting people with the movement based on common behaviors, shared interests, and/or ideological proximity while minimizing risk for the broader organization.
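The bipartite bot-channel topology the abstract describes can be sketched with a toy one-mode projection. The bot and channel names below are hypothetical, and a real analysis of this kind would typically use a dedicated network library (e.g., networkx's bipartite utilities) rather than this hand-rolled stdlib version:

```python
from itertools import combinations

# Hypothetical bipartite incidence: each bot -> the supporter channels it serves.
bot_channels = {
    "bot_a": {"chat_1", "chat_2"},    # discussion/exchange side
    "bot_b": {"chat_2"},
    "bot_c": {"media_1", "media_2"},  # content-distribution side
    "bot_d": {"media_2"},
}

# One-mode projection: connect two bots if they serve a common channel.
edges = {
    (u, v)
    for u, v in combinations(sorted(bot_channels), 2)
    if bot_channels[u] & bot_channels[v]
}

def components(nodes, edges):
    # Connected components of the projection approximate bot communities.
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(bot_channels, edges)
# Two clusters emerge, mirroring the discussion-oriented and
# distribution-oriented communities described in the article.
```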
2023 | Alrhmoun, A., Winter, C. and Kertész, J.
Journal Article | Hate, Obscenity, and Insults: Measuring the Exposure of Children to Inappropriate Comments in YouTube
Social media has become an essential part of the daily routines of children and adolescents. Moreover, enormous efforts have been made to ensure the psychological and emotional well-being of young users, as well as their safety, when interacting with various social media platforms. In this paper, we investigate the exposure of those users to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children’s videos to be toxic, highlighting the importance of monitoring comments, particularly on children’s platforms.
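As a rough illustration of the ensemble approach the abstract mentions, the sketch below majority-votes three deliberately crude heuristic classifiers. The keyword lists and sample comments are invented placeholders, and the study's actual classifiers were trained NLP models, not keyword rules:

```python
# Toy majority-vote ensemble for flagging inappropriate comments.
# Keyword vocabularies and sample comments are illustrative placeholders only.
TOXIC = {"stupid", "hate", "idiot"}
OBSCENE = {"damn"}

def keyword_classifier(vocab):
    # Flag a comment if any token appears in the given vocabulary.
    def classify(text):
        return any(word in vocab for word in text.lower().split())
    return classify

classifiers = [
    keyword_classifier(TOXIC),
    keyword_classifier(OBSCENE),
    lambda text: text.isupper(),  # crude "shouting" heuristic
]

def ensemble_flag(text):
    # A comment is flagged only when a majority of classifiers agree.
    votes = sum(classifier(text) for classifier in classifiers)
    return votes >= 2

comments = ["DAMN this stupid video", "great video!", "NICE ONE"]
flags = [ensemble_flag(comment) for comment in comments]
# Only the first comment trips a majority; a single weak signal
# (e.g., all-caps shouting alone) is not enough to flag a comment.
```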
2021 | Alshamrani, S., Abusnaina, A., Abuhamad, M., Nyang, D. and Mohaisen, D.