Journal Article |
#FailedRevolutions: Using Twitter to Study the Antecedents of ISIS Support
View Abstract
Within a fairly short amount of time, the Islamic State of Iraq and Syria (ISIS) managed to put large swaths of land in Syria and Iraq under its control. To many observers, the sheer speed at which this “state” was established was dumbfounding. To better understand the roots of this organization and its supporters, we present a study using data from Twitter. We start by collecting a large number of Arabic tweets referring to ISIS and classifying them as pro-ISIS or anti-ISIS. This classification turns out to be easy to perform simply using the name variants used to refer to the organization: the full name, with its description as a “state”, is associated with support, whereas abbreviations usually indicate opposition. We then “go back in time” by analyzing the historic timelines of both supporting and opposing users, looking at their pre-ISIS periods to gain insights into the antecedents of support. To achieve this, we build a classifier using pre-ISIS data to “predict”, in retrospect, who will support or oppose the group. The key story that emerges is one of frustration with failed Arab Spring revolutions: ISIS supporters differ from ISIS opponents largely in that they refer far more often to Arab Spring uprisings that failed. We also find temporal patterns in support and opposition that seem to be linked to major news, such as reported territorial gains, reports on gruesome acts of violence, and reports on airstrikes and foreign intervention.
|
2015 |
Magdy, W., Darwish, K., and Weber, I. |
View
Publisher
|
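The name-variant heuristic described in the abstract above can be sketched as follows. This is an illustrative sketch, not the authors' code: the variant lists are hypothetical simplifications (the study worked with Arabic tweets and a fuller set of name variants), and the tie-breaking order when both variant types appear is an arbitrary choice here.

```python
# Illustrative sketch of the paper's stance heuristic: tweets using the
# organization's full name (including the "state" framing) are treated as
# supportive; tweets using abbreviations are treated as opposing.
# Variant lists below are simplified examples, not the study's full lists.

FULL_NAME_VARIANTS = ["الدولة الإسلامية", "Islamic State"]  # full name -> pro
ABBREVIATION_VARIANTS = ["داعش", "ISIS", "ISIL"]            # abbreviation -> anti


def classify_stance(tweet: str) -> str:
    """Return 'pro', 'anti', or 'unknown' based on the name variant used.

    Full-name matches are checked first; this ordering is an assumption
    made for the sketch, not something specified by the paper.
    """
    if any(variant in tweet for variant in FULL_NAME_VARIANTS):
        return "pro"
    if any(variant in tweet for variant in ABBREVIATION_VARIANTS):
        return "anti"
    return "unknown"
```

A tweet such as "الدولة الإسلامية تتقدم" (using the full name) would be labeled pro, while "داعش إرهاب" (using the acronym Daesh) would be labeled anti.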
Policy |
(Young) Women’s Usage of Social Media and Lessons for Preventing Violent Extremism
View Abstract
The RAN small-scale expert meeting on (young) women’s usage of social media and lessons learned for preventing violent extremism (PVE) was aimed at unpacking some of the gaps in this area. This paper summarises the highlights of the discussion, discusses the vulnerabilities that are specific to (young) women, explains how recruiters exploit these vulnerabilities online and, finally, presents the recommendations that the experts stressed during the meeting.
|
2020 |
Krasenberg, J. and Handle, J. |
View
Publisher
|
Journal Article |
(((They))) rule: Memetic antagonism and nebulous othering on 4chan
View Abstract
While political memes were previously theorised as vehicles for expressing progressive dissent, this article considers how they have become entangled in the recent reactionary turn of web subcultures. Drawing on Chantal Mouffe’s work on political affect, this article examines how online anonymous communities use memetic literacy, memetic abstraction, and memetic antagonism to constitute themselves as political collectives. Specifically, it focuses on how the subcultural and highly reactionary milieu of 4chan’s /pol/ board does so through an anti-Semitic meme called triple parentheses. In aggregating the contents of this peculiar meme from a large dataset of /pol/ comments, the article finds that /pol/ users, or anons, tend to use the meme to formulate a nebulous out-group resonant with populist demagoguery.
|
2019 |
Tuters, M. and Hagen, S. |
View
Publisher
|
Journal Article |
“You Need to Be Sorted Out With a Knife”: The Attempted Online Silencing of Women and People of Muslim Faith Within Academia
View Abstract
Academics are increasingly expected to use social media to disseminate their work and knowledge to public audiences. Although this has various advantages, particularly for alternative forms of dissemination, the web can also be an unsafe space for typically oppressed or subordinated groups. This article presents two auto-ethnographic accounts of the abuse and hate academics researching oppressed groups, namely, women and people of Muslim faith, experienced online. In doing so, this article falls into four parts. The first section provides an overview of existing literature, particularly focusing on work which explores the violence and abuse of women and people of Muslim faith online. The second section considers the auto-ethnographic methodological approach adopted in this article. The third section provides the auto-ethnographic accounts of the author’s experiences of hate and abuse online. The final section locates these experiences within broader theoretical concepts, such as silencing, and considers possible implications of such online hate in both an academic context and beyond.
|
2016 |
Barlow, C. and Awan, I. |
View
Publisher
|
Journal Article |
“You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks
View Abstract
Video sharing platforms like YouTube are increasingly targeted by aggression and hate attacks. Prior work has shown how these attacks often take place as a result of “raids,” i.e., organized efforts by ad hoc mobs coordinating from third-party communities. Despite the increasing relevance of this phenomenon, however, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de facto solution is to rely reactively on user reports and human moderation. In this paper, we propose an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of videos that were targeted by raids. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided, with very good results (AUC up to 94%). Overall, our work provides an important first step towards deploying proactive systems to detect and mitigate coordinated hate attacks on platforms like YouTube.
|
2019 |
Mariconti, E., Suarez-Tangil, G., Blackburn, J., de Cristofaro, E., Kourtellis, N., Leontiadis, I., Serrano, J.L. and Stringhini, G. |
View
Publisher
|
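The ensemble approach described in the abstract above can be sketched in miniature. This is a hypothetical illustration, not the paper's pipeline: the three scorers below are stand-ins for classifiers trained on the three feature axes the paper names (metadata, audio transcripts, thumbnails), the feature names and thresholds are invented, and the combination rule shown is simple soft voting (averaging per-modality probabilities).

```python
# Hypothetical sketch of a soft-voting ensemble that scores how likely a
# video is to be raided, averaging one probability per feature axis.
# All scorer internals below are invented stand-ins, not the paper's models.

def metadata_score(video: dict) -> float:
    # Stand-in: treat unusually high comment velocity as a raid signal.
    return min(1.0, video["comments_per_hour"] / 100.0)


def transcript_score(video: dict) -> float:
    # Stand-in: a flag for transcript topics that historically drew raids.
    return 0.8 if video["sensitive_topic"] else 0.2


def thumbnail_score(video: dict) -> float:
    # Stand-in: probability output of an image classifier, stubbed here.
    return video["thumbnail_model_prob"]


def raid_likelihood(video: dict) -> float:
    """Soft voting: average the per-modality probability scores."""
    scores = [metadata_score(video), transcript_score(video), thumbnail_score(video)]
    return sum(scores) / len(scores)


video = {"comments_per_hour": 250, "sensitive_topic": True, "thumbnail_model_prob": 0.6}
likelihood = raid_likelihood(video)  # averages 1.0, 0.8, and 0.6
```

In practice each scorer would be a trained classifier rather than a rule, and the averaged score would be thresholded or calibrated before triggering proactive moderation.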