Book
Digital Extremisms: Readings in Violence, Radicalisation and Extremism in the Online Space
This book explores the use of the internet by (non-Islamic) extremist groups, drawing together research by scholars across the social sciences and humanities. It offers a broad overview of the best research in this area, including contributions that address far-right, (non-Islamic) religious, animal rights, and nationalist violence online, as well as a discussion of the policy and research challenges posed by these unique and disparate groups. The result is an academically rigorous introductory text on extremism online, and a valuable resource for students, practitioners and academics seeking to understand the distinctive risks these groups present.
2020 | Littler, M. and Lee, B. (Eds.)
Journal Article
An Approach for Radicalization Detection Based on Emotion Signals and Semantic Similarity
The Internet has become an important tool for modern terrorist groups, used both to spread propaganda and to recruit. Previous studies have shown that the analysis of social signs can help in the analysis, detection, and prediction of radical users. In this work, we focus on the analysis of affect signs in social media and social networks, which has not previously been addressed. The article’s contributions are: (i) a novel dataset for use in radicalization detection research, (ii) a method for applying an emotion lexicon to radicalization detection, and (iii) the application of an embedding-based semantic similarity model to the radicalization detection domain. Results show that emotion can be a reliable indicator of radicalization, and that the proposed feature extraction methods yield high performance scores.
2020 | Araque, O. and Iglesias, C.A.
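The feature-engineering idea in the abstract above (emotion-lexicon signals combined with embedding-based semantic similarity) can be illustrated with a minimal sketch in Python. The lexicon, embeddings, seed terms and example text below are invented placeholders, not the authors' dataset or code.

```python
# Illustrative sketch (not the authors' implementation): emotion-lexicon counts
# plus an embedding-based similarity score as features for radicalisation detection.
import numpy as np

# Hypothetical emotion lexicon mapping words to emotion categories.
EMOTION_LEXICON = {
    "hate": "anger", "destroy": "anger", "fear": "fear",
    "proud": "joy", "betrayed": "sadness",
}
EMOTIONS = ["anger", "fear", "joy", "sadness"]

# Placeholder word embeddings; a real system would load pretrained vectors.
rng = np.random.default_rng(0)
EMBEDDINGS = {w: rng.normal(size=50) for w in
              ["hate", "destroy", "fear", "proud", "betrayed",
               "fight", "enemy", "peace"]}

# Seed terms standing in for a reference vocabulary of radical content.
SEED_TERMS = ["fight", "enemy", "destroy"]


def emotion_features(tokens):
    """Normalised counts of each emotion category found in the text."""
    counts = np.zeros(len(EMOTIONS))
    for tok in tokens:
        emo = EMOTION_LEXICON.get(tok)
        if emo is not None:
            counts[EMOTIONS.index(emo)] += 1
    return counts / max(len(tokens), 1)


def mean_vector(tokens):
    """Average embedding of the tokens that have a known vector."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)


def similarity_feature(tokens):
    """Cosine similarity between the text centroid and the seed-term centroid."""
    a, b = mean_vector(tokens), mean_vector(SEED_TERMS)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def extract_features(text):
    """Combine emotion counts and the similarity score into one feature vector."""
    tokens = text.lower().split()
    return np.concatenate([emotion_features(tokens), [similarity_feature(tokens)]])


print(extract_features("they betrayed us and we will fight the enemy"))
```

In practice, a feature vector of this kind would feed a standard supervised classifier trained on labelled radical and non-radical content.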
Journal Article
Memes And Symbolic Violence: #Proudboys And The Use Of Memes For Propaganda And The Construction Of Collective Identity
As a social media platform, Instagram has a strong influence on youth culture, identity, and perceptions of the world, serving not only as a place for young people to follow aspirational accounts but also for entertainment and identity building through memes. Meme accounts that are explicitly conservative and that espouse white supremacist, hateful ideology, and the identity that comes with it, are incredibly prevalent. Media serve as powerful institutions for the socialization of youth, and content on the platform reveals that memes act as building blocks of ideological meaning. This study conducted a discourse analysis of the memes and content circulated by the alt-right affiliate movement the ‘Proud Boys’, which is sold to young men as a fraternity-like organization celebrating ‘Western ideals’. The Proud Boys operate on an ideology of both symbolic and physical violence, and the popularity of such groups is growing. Using Bourdieu’s work on language as a framework, this article explores their recruitment and world-building practices on Instagram through memes, in order to understand the movement and gain further insight into how memes are used as propaganda.
2018 | DeCook, J. R.
Journal Article
Too Dark To See? Explaining Adolescents’ Contact With Online Extremism And Their Ability To Recognize It
Adolescents are considered especially vulnerable to extremists’ online activities because they are ‘always online’ and because they are still in the process of identity formation. However, so far, we know little about (a) how often adolescents encounter extremist content in different online media and (b) how well they are able to recognize extremist messages. In addition, we do not know (c) how individual-level factors derived from radicalization research and (d) media and civic literacy affect extremist encounters and recognition abilities. We address these questions based on a representative face-to-face survey among German adolescents (n = 1,061) and qualitative interviews using a think-aloud method (n = 68). Results show that a large proportion of adolescents encounter extremist messages frequently, but that many others have trouble even identifying extremist content. In addition, factors known from radicalization research (e.g., deprivation, discrimination, specific attitudes) as well as extremism-related media and civic literacy influence the frequency of extremist encounters and recognition abilities.
2019 | Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. and Neumann, K.
Journal Article
Antisemitism on Twitter: Collective efficacy and the role of community organisations in challenging online hate speech
In this paper, we conduct a comprehensive study of online antagonistic content related to Jewish identity posted on Twitter between October 2015 and October 2016 by UK-based users. We trained a scalable supervised machine learning classifier to identify antisemitic content and reveal patterns of online antisemitism perpetration at the source. We built statistical models to analyse the inhibiting and enabling factors of the size (number of retweets) and survival (duration of retweets) of information flows, in addition to the production of online antagonistic content. Despite observing high temporal variability, we found that only a small proportion (0.7%) of the content was antagonistic. We also found that antagonistic content was less likely to spread widely or to survive for a longer period. Information flows from antisemitic agents on Twitter gained less traction, while information flows emanating from capable and willing counter-speech actors, i.e. Jewish organisations, had significantly higher size and survival rates. This study is the first to demonstrate that Sampson’s classic sociological concept of collective efficacy can be observed on social media (SM). Our findings suggest that when organisations aiming to counter harmful narratives become active on SM platforms, their messages propagate further and achieve greater longevity than antagonistic messages. On SM, counter-speech posted by credible, capable and willing actors can be an effective measure against harmful narratives. Based on our findings, we underline the value of the work of community organisations in reducing the propagation of cyberhate and increasing trust in SM platforms.
2019 | Ozalp, A.S., Williams, M.L., Burnap, P., Liu, H. and Mostafa, M.
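The study above rests on a supervised machine learning classifier for antagonistic content. The abstract does not specify the implementation, so the following is only a generic, minimal sketch of such a text classifier in Python with scikit-learn; the training examples and model choice are illustrative placeholders, not the authors' classifier or data.

```python
# Minimal sketch of a supervised text classifier of the kind the study describes;
# the tiny training set and model settings below are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled examples: 1 = antagonistic, 0 = neutral.
texts = [
    "example of an antagonistic message targeting a group",
    "another hostile post attacking a community",
    "a neutral tweet about the weather",
    "an ordinary message sharing a news article",
]
labels = [1, 1, 0, 0]

# Word n-gram TF-IDF features feeding a linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Score new content; in the study, classification at scale was the first step.
print(classifier.predict_proba(["a hostile post about a community"])[0])
```

In the study itself, classifier output was then combined with statistical models of retweet size and survival; that modelling step is not reproduced here.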
Journal Article
Echo Chambers Exist! (But They’re Full of Opposing Views)
The theory of echo chambers, which suggests that online political discussions take place in conditions of ideological homogeneity, has recently gained popularity as an explanation for patterns of political polarization and radicalization observed in many democratic countries. However, while micro-level experimental work has shown evidence that individuals may gravitate towards information that supports their beliefs, recent macro-level studies have cast doubt on whether this tendency generates echo chambers in practice, instead suggesting that cross-cutting exposures are a common feature of digital life. In this article, we offer an explanation for these diverging results. Building on cognitive dissonance theory, and making use of observational trace data taken from an online white nationalist website, we explore how individuals in an ideological ‘echo chamber’ engage with opposing viewpoints. We show that this type of exposure, far from being detrimental to radical online discussions, is actually a core feature of such spaces that encourages people to stay engaged. The most common ‘echoes’ in this echo chamber are in fact the sound of opposing viewpoints being undermined and marginalized. Hence echo chambers exist not only in spite of but thanks to the unifying presence of oppositional viewpoints. We conclude with reflections on policy implications of our study for those seeking to promote a more moderate political internet.
2020 | Bright, J., Marchal, N., Ganesh, B. and Rudinac, S.