Uncovering Salafi jihadist terror activity through advanced technological tools

This study investigates the evolving challenges intelligence and law enforcement agencies face in countering Salafi jihadist terrorist activity across digital platforms, focusing on the integration of artificial intelligence (AI) and data fusion technologies into open-source intelligence (OSINT) methodologies. Through an analysis of case studies involving ISIS and other non-state actors, the study examines how terrorist organisations adapt

The Radicalization Risks Of GPT-3 And Advanced Neural Language Models

In 2020, OpenAI developed GPT-3, a neural language model capable of sophisticated natural language generation and of completing tasks like classification, question-answering, and summarization. While OpenAI has not open-sourced the model’s code or pre-trained weights at the time of writing, it has built an API to experiment with the model’s capabilities. The Center

NATO Science for Peace and Security-funded Advanced Research Workshop on ‘Terrorist Use of the Internet: Assessment and Response’

A NATO Science for Peace and Security-funded Advanced Research Workshop on ‘Terrorist Use of the Internet: Assessment and Response’, jointly organised by VOX-Pol and Swansea University’s Cyberterrorism Project, was hosted at Dublin City University from 27–29 June 2016. The invitation-only workshop provided an opportunity for the 60 participants, who included academics,

Stefan Meingast

Stefan Meingast is Chairman and Managing Director of SCENOR, a non-profit and independent research organisation in Austria. His research addresses violent extremism, terrorism, radicalisation, and conspiracy myths, with a particular focus on digital communication environments. He has coordinated numerous projects, for instance examining right-wing extremist networks and hate speech across youth-oriented platforms and investigating religiously motivated

Kate Tomkins

Kate Tomkins is a doctoral researcher in Criminology at the University of Southampton. Her research examines the dynamics of extremism and accelerationism, employing a multi-methodological approach that incorporates network analysis and advanced digital semiotic content analysis, combining qualitative and quantitative methods. With an emphasis on the relationships between digital semiotics, social identity

The VOX-Pol Blog Editorial Team

The VOX-Pol Blog publishes weekly on Wednesdays on the topic of online extremism and online terrorism. It began in 2014 and now has over 500 entries. Each Blog post is added to the VOX-Pol Online Library, so it is searchable by title, topic and author. The Blog publishes original research, article summaries, book reviews, editorials, and

Exploiting the Algorithm: How British Extreme Right-Wing Individuals and Groups Leverage Grok and Generative AI for Malign Purposes

By Alice Sibley and Joshua Bowes As artificial intelligence (AI) becomes increasingly embedded in social media platforms, wariness about its harmful exploitation has grown. As previous research has shown, malign actors, ranging from misogynistic online users to extremists, have exploited AI to spread harmful conspiracy theories, share racist images and disseminate disinformation. Generative

I analyzed more than 100 extremist manifestos: Misogyny was the common thread

Karmvir K. Padda, University of Waterloo Two years have passed since a 24-year-old former student walked into a gender studies classroom at the University of Waterloo and stabbed the professor and two students. The attack left the campus shaken and sparked national outrage. Many saw the attack as a shocking but isolated act of violence.

‘I got sent something of people shooting themselves’ – research shows young people can’t avoid harmful content online

Dougal Sutherland, Te Herenga Waka — Victoria University of Wellington A new report from New Zealand’s Classification Office has revealed how young people are being exposed to harmful content online and what it is doing to their mental health. The Classification Office spoke with ten groups of young people aged between 12 and 25