VOX-Pol Publication | Extreme Digital Speech: Contexts, Responses and Solutions
Extreme digital speech (EDS) is an emerging challenge that requires co-ordination between governments, civil society and the private sector. In this report, a range of experts on countering extremism consider the challenges that EDS presents to these stakeholders, the impact that EDS has and the responses taken by these actors to counter it. The focus on EDS limits consideration to forms of extreme speech that take place online, often on social media platforms and multimedia messaging applications such as WhatsApp and Telegram. Moreover, by examining EDS rather than only explicitly violent forms of extreme speech online, the report moves beyond a narrow focus on violence to incorporate a broader range of issues, such as hateful and dehumanising speech and the complex cultures and politics that have formed around EDS.
2020 | Ganesh, B. and Bright, J. (Eds.)
Journal Article | Many Faced Hate: A Cross Platform Study of Content Framing and Information Sharing by Online Hate Groups
Hate groups are increasingly using multiple social media platforms to promote extremist ideologies, yet we know little about their communication practices across platforms. How do hate groups (or “in-groups”) frame their hateful agenda against the targeted group, or “out-group”? How do they share information? Utilizing “framing” theory from social movement research and analyzing domains in the shared links, we juxtapose the Facebook and Twitter communication of 72 Southern Poverty Law Center (SPLC) designated hate groups spanning five hate ideologies. Our findings show that hate groups use Twitter for educating the audience about problems with the out-group, maintaining a positive self-image by emphasizing the in-group’s high social status, and demanding policy changes that negatively affect the out-group. On Facebook, they use fear appeals and call for active participation in group events (membership requests), all while portraying themselves as oppressed by the out-group and failed by the system. Our study unravels the ecosystem of cross-platform communication by hate groups, suggesting that they use Facebook for group radicalization and recruitment and Twitter for reaching a diverse follower base.
2020 | Phadke, S. and Mitra, T.
Journal Article | Interactive Search and Exploration in Discussion Forums Using Multimodal Embeddings
In this paper, we present a novel interactive multimodal learning system that facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words and concepts, which we simultaneously learn by deploying a general-purpose neural embedding model. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted in order to evaluate the quality of our multimodal representations and compare different embedding strategies. We demonstrate the capabilities of the proposed approach on a multimedia collection originating from the violent online extremism forum Stormfront, which is particularly interesting due to the high semantic level of the discussions it features.
2020 | Gornishka, I., Rudinac, S. and Worring, M.
Journal Article | “You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks
Video sharing platforms like YouTube are increasingly targeted by aggression and hate attacks. Prior work has shown how these attacks often take place as a result of “raids,” i.e., organized efforts by ad hoc mobs coordinating from third-party communities. Despite the increasing relevance of this phenomenon, however, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de facto solution is to rely reactively on user reports and human moderation. In this paper, we propose an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground-truth dataset of videos that were targeted by raids. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided, with very good results (AUC up to 94%). Overall, our work provides an important first step towards deploying proactive systems to detect and mitigate coordinated hate attacks on platforms like YouTube.
2019 | Mariconti, E., Suarez-Tangil, G., Blackburn, J., de Cristofaro, E., Kourtellis, N., Leontiadis, I., Serrano, J.L. and Stringhini, G.
Book | Islamic State’s Online Activity and Responses
‘Islamic State’s Online Activity and Responses’ provides a unique examination of Islamic State’s online activity at the peak of its “golden age” between 2014 and 2017 and evaluates some of the principal responses to this phenomenon. Featuring contributions from experts across a range of disciplines, the volume examines a variety of aspects of IS’s online activity, including their strategic objectives, the content and nature of their magazines and videos, and their online targeting of females and depiction of children. It also details and analyses responses to IS’s online activity – from content moderation and account suspensions to informal counter-messaging and disrupting terrorist financing – and explores the possible impact of technological developments, such as decentralised and peer-to-peer networks, going forward. Platforms discussed include dedicated jihadi forums, major social media sites such as Facebook, Twitter, and YouTube, and newer services, including Twister.
‘Islamic State’s Online Activity and Responses’ is essential reading for researchers, students, policymakers, and all those interested in the contemporary challenges posed by online terrorist propaganda and radicalisation. The chapters were originally published as a special issue of Studies in Conflict & Terrorism.
2019 | Conway, M. and Macdonald, S.
Journal Article | Weaponizing White Thymos: Flows of Rage in the Online Audiences of the Alt-Right
The alt-right is a growing radical right-wing network that is particularly effective at mobilizing emotion through digital communications. Introducing ‘white thymos’ as a framework for theorizing the role of rage, anger, and indignation in alt-right communications, this study argues that emotive communication connects alt-right users and mobilizes white thymos to the benefit of populist radical right politics. By combining linguistic, computational, and interpretive techniques on data collected from Twitter, this study demonstrates that the alt-right weaponizes white thymos in three ways: visual documentation of white victimization, processes of legitimization of racialized pride, and reinforcement of the rectitude of rage and indignation. The weaponization of white thymos is then shown to be central to the culture of the alt-right and its connectivity with populist radical right politics.
2020 | Ganesh, B.