Report
Unleashing the Potential of Short-Form Video: A Guide for Creators Making Content to Counter Extremism
This guide is intended to help creators who are producing, or thinking about producing, short-form video content to counter extremism. Our goal is not to tell you what to create; your original content is what makes your channel creative and organic. Instead, we hope to provide you with tools and tips, grounded in decades of academic research, to create stronger content. Creating short-form video content (typically 15-60 seconds long) that counters extremism (both violent and nonviolent) and promotes positive values is a powerful way to engage with your audience. To help you succeed in this mission, we have compiled a guide that not only inspires creativity but also provides practical tips for success.
2024 | Whittaker, J., Atamuradova, F., Yilmaz, K., Copeland, S., El Sayed, L. and Deedman, J.
VOX-Pol Publication
The Last Twitter Census
This report compares two large random samples of Twitter accounts that tweet in English: one taken just before Elon Musk acquired Twitter in October 2022, and one taken three months later, in January 2023. It also examines several related datasets collected during the period following the acquisition, a period in which, the study found, new accounts were created at a record-breaking pace. Some extremist and conspiracy networks created accounts faster than the baseline rate, probably because changes to Twitter’s trust and safety policies had been announced. In the context of these policy announcements, the study examines some reinstated accounts, with mixed results. Despite the loosening of several content policies, accounts that automated the sending of tweets (‘bots’) saw activity drop sharply during the period of the study, with many bot accounts being suspended or deactivated, while others voluntarily curtailed their activity in light of the announced API changes. Deactivated accounts were dominated by sex-related content and apparent financial spam or scams, often coupled with automated tweeting.
2024 | Berger, J.M.
Journal Article
Conspiracy, misinformation, radicalisation: understanding the online pathway to indoctrination and opportunities for intervention
In response to the rise of various fringe movements in recent years, from anti-vaxxers to QAnon, there has been increased public and scholarly attention to misinformation and conspiracy theories and the online communities that produce them. However, efforts at understanding the radicalisation process largely focus on those who go on to commit violent crimes. This article draws on three waves of research exploring the experiences of individuals currently or formerly involved in fringe communities, including the different stages of investment they progressed through and, ultimately, what made people leave. We propose a pathway model for understanding contemporary online radicalisation, including potential interventions that could be safely made at each stage. Insight into the experience of being immersed in these communities is essential for engaging with these people empathetically, and thus both for preventing the emergence of violent terrorists and for protecting vulnerable people from being drawn into these communities.
2024 | Booth, E., Lee, J., Rizoiu, M.A. and Farid, H.
VOX-Pol Publication
AI Extremism: Technology, Tactics, Actors
Over the past decade, two major phenomena have developed in the digital realm. On the one hand, extremism has grown massively on the Internet, with sprawling online ecosystems hosting a wide range of radical subcultures and communities associated with both ‘stochastic terrorism’ and the ‘mainstreaming of extremism’. On the other hand, Artificial Intelligence (AI) has undergone exponential improvement: from ChatGPT to video deepfakes, from autonomous vehicles to face-recognition CCTV systems, an array of AI technologies has abruptly entered our everyday lives. This report examines ‘AI extremism’, the toxic encounter of these two evolutions – each worrying in its own right. Like past technological progress, AI will indeed be – in fact already is – used in various ways to bolster extremist agendas. Identifying the many opportunities for action that come with a range of AI models, and linking them with different types of extremist actors, we offer a clear overview of the numerous facets of AI extremism. Building on the nascent academic and government literature on the issue as well as on our own empirical and theoretical work, we provide new typologies and concepts to help us organize our understanding of AI extremism, systematically chart its instantiations, and highlight thinking points for stakeholders in countering violent extremism.
2024 | Baele, S. and Brace, L.
Report
Unleashing the Potential of Short-Form Video: Strategic Communications for Countering Extremism in the Digital Age
The report begins by outlining broad knowledge about mass persuasion, before focusing specifically on lessons learned in the field of P/CVE. This is followed by a synthesis of existing “How To” guides for the creation of strategic communications from a range of policy and practitioner stakeholders. We then discuss specific knowledge of audiovisual content, particularly considerations for short-form video. The report concludes by outlining how stakeholders, including social media platforms, can monitor, measure, and evaluate the impact of this type of content.
2024 | Whittaker, J., Atamuradova, F., Yilmaz, K., Copeland, S., El Sayed, L. and Deedman, J.
Journal Article
Distinct patterns of incidental exposure to and active selection of radicalizing information indicate varying levels of support for violent extremism
Exposure to radicalizing information has been associated with support for violent extremism. It is, however, unclear whether specific information use behavior, namely, a distinct pattern of incidental exposure (IE) to and active selection (AS) of radicalizing content, indicates stronger violent extremist attitudes and radical action intentions. Drawing on a representative general population sample (N = 1509) and applying latent class analysis, we addressed this gap in the literature. Results highlighted six types of information use behavior. The largest group of participants reported a near-zero probability of both IE to and AS of radicalizing material. Two groups of participants were characterized by high or moderate probabilities of incidental exposure as well as a low probability of active selection of radicalizing content. The remaining groups displayed either low, moderate, or high probabilities of both IE and AS. Importantly, we showed between-group differences regarding violent extremist attitudes and radical behavioral intentions. Individuals reporting near-zero probabilities for both IE and AS expressed the weakest violent extremist attitudes and willingness to use violence, while those reporting high probabilities for both expressed the strongest. Groups defined by even moderate probabilities of AS endorsed violent extremism more strongly than those for which the probability of incidental exposure was moderate or high but AS of radicalizing content was unlikely.
2024 | Schumann, S., Clemmow, C., Rottweiler, B. and Gill, P.