Journal Article |
The Gift of Gab: A Netnographic Examination of the Community Building Mechanisms in Far-Right Online Space
Major social media platforms have recently taken a more proactive stance against harmful far-right content and pandemic-related disinformation on their sites. However, these actions have catalysed the growth of fringe online social networks for participants seeking right-wing content, safe havens, and unhindered communication channels. To better understand these isolated systems of online activity and their success, this study of Gab Social examines the mechanisms used by the far right to form an alternative collective on fringe social media. My analysis showcases how these online communities are built by perpetuating meso-level identity-building narratives. By examining Gab's emphasis on creating a lasting community base, the work offers an experiential examination of the platform's different communication devices and multimedia through a netnographic and qualitative content analysis lens. The emergent findings and discussion detail the far right's virtual community-building model, which revolves around a sense of in-group superiority and the self-reinforcing mechanisms of the collective. Not only does this have implications for understanding Gab's communicative dynamics as an essential socialisation space and promoter of a unique meso-level character, but it also reflects the need for researchers to (re)emphasise identity, community, and collectives in far-right fringe spaces.
|
2024 |
Collins, J. |
|
Journal Article |
Multi-Ideology, Multiclass Online Extremism Dataset, and Its Evaluation Using Machine Learning
Social media platforms play a key role in fostering the outreach of extremism by influencing the views, opinions, and perceptions of people. These platforms are increasingly exploited by extremist elements to spread propaganda and to radicalize and recruit youth. Research on extremism detection on social media is therefore essential to curb its influence and ill effects. A review of the existing literature on extremism detection shows that it is typically restricted to a single ideology, relies on binary classification that offers limited insight into extremist text, and depends on manual methods to validate data quality. Because existing studies use datasets limited to a single ideology, they face serious issues such as class imbalance, uninformative class labels, and a lack of automated data validation. A major contribution of this work is a balanced, multi-ideology extremism text dataset, verified by robust data validation methods, for classifying extremist text into common extremism types such as propaganda, radicalization, and recruitment. The dataset generalizes across multiple ideologies, combining the standard ISIS dataset, the GAB White Supremacist dataset, and recent Twitter posts on ISIS and white supremacist ideology. The dataset is analyzed to extract features for the three target classes using TF-IDF unigram, bigram, and trigram features; pretrained word2vec features are additionally used for semantic analysis. The extracted features are evaluated with machine learning classifiers including multinomial Naïve Bayes, support vector machine, random forest, and XGBoost. The best results were achieved by the support vector machine with TF-IDF unigram features, reaching an F1 score of 0.67. The proposed multi-ideology, multiclass dataset shows performance comparable to existing datasets limited to a single ideology and binary labels.
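To make the pipeline concrete, the following minimal Python sketch illustrates the kind of approach the abstract describes: TF-IDF unigram-to-trigram features fed to a linear SVM for three-class extremism labelling. The file name and the "text"/"label" column names are hypothetical placeholders, not the authors' actual dataset layout.

```python
# Minimal sketch of a multi-class extremism text classifier:
# TF-IDF unigram/bigram/trigram features + linear SVM.
# File path and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

df = pd.read_csv("extremism_dataset.csv")  # hypothetical path
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42
)

# TF-IDF over unigrams, bigrams and trigrams, as in the abstract.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3), min_df=2)),
    ("svm", LinearSVC()),
])
clf.fit(X_train, y_train)

# Macro-averaged F1 across the three classes
# (propaganda, radicalization, recruitment).
print(f1_score(y_test, clf.predict(X_test), average="macro"))
```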
|
2023 |
Gaikwad, M., Ahirrao, S., Phansalkar, S., Kotecha, K. and Rani, S. |
|
Journal Article |
Deplatforming Norm-Violating Influencers on Social Media Reduces Overall Online Attention Toward Them
From politicians to podcast hosts, online platforms have systematically banned ("deplatformed") influential users for breaking platform guidelines. Previous inquiries into the effectiveness of this intervention are inconclusive because 1) they consider only a few deplatforming events; 2) they consider only overt engagement traces (e.g., likes and posts) but not passive engagement (e.g., views); 3) they do not consider all the potential places to which users impacted by the deplatforming event might migrate. We address these limitations in a longitudinal, quasi-experimental study of 165 deplatforming events targeted at 101 influencers. We collect deplatforming events from Reddit posts and then manually curate the data, ensuring the correctness of a large dataset of deplatforming events. We then link these events to Google Trends and Wikipedia page views, platform-agnostic measures of online attention that capture the general public's interest in specific influencers. Through a difference-in-differences approach, we find that deplatforming reduces online attention toward influencers. After 12 months, we estimate that online attention toward deplatformed influencers changes by -63% (95% CI [-75%, -46%]) on Google and by -43% (95% CI [-57%, -24%]) on Wikipedia. Further, because we study over a hundred deplatforming events, we can analyze in which cases deplatforming is more or less impactful, revealing nuances about the intervention. Notably, we find that both permanent and temporary deplatforming reduce online attention toward influencers. Overall, this work contributes to the ongoing effort to map the effectiveness of content moderation interventions, driving platform governance away from speculation.
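For readers unfamiliar with the design, the sketch below shows a toy two-period difference-in-differences estimate of the kind the abstract describes, applied to a panel of monthly attention measures. The input file, column names, and the simple 2x2 setup are illustrative assumptions; the study itself uses a richer longitudinal specification.

```python
# Toy difference-in-differences sketch: monthly attention (e.g., Wikipedia
# page views) for deplatformed influencers and matched controls, before and
# after the ban. File path and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("attention_panel.csv")  # hypothetical path
# Expected columns: influencer, views, deplatformed (0/1 treated group),
# post (0/1 = observation falls after the deplatforming event).

# Log outcome so the interaction coefficient reads approximately as a
# percentage change in attention.
panel["log_views"] = np.log1p(panel["views"])

# Classic 2x2 DiD: the coefficient on deplatformed:post is the estimate.
# Standard errors are clustered by influencer.
model = smf.ols("log_views ~ deplatformed * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["influencer"]}
)

print(model.params["deplatformed:post"])        # DiD point estimate
print(model.conf_int().loc["deplatformed:post"])  # 95% confidence interval
```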
|
2024 |
Ribeiro, M.H., Jhaver, S., Reignier-Tayar, M. and West, R. |
|
Policy |
Tackling terrorist content online – Propaganda and content moderation
The whitepaper has three parts. The first part provides contextual background, describing the diverse range of online services utilised by terrorists and extremists and the process by which propaganda is disseminated online. The second part details industry responses: in addition to referrals from users and law enforcement, it describes the use of AI for proactive detection and collaborative, cross-platform initiatives. The third part describes four issues for discussion: transparency; definitional clarity; the impact on those targeted; and the use of online data for predictive purposes.
|
2023 |
Macdonald, S. and Staniforth, A. |
|
Journal Article |
Studying the Impact of ISIS Propaganda Campaigns
Over the past decade, a large number of extremist and hate groups have turned to internet platforms to inspire mass violence. Currently, there is little reliable evidence on how such campaigns radicalize targeted audiences. We provide systematic, large-scale microevidence on the effect of Islamic State propaganda on social media. We use several machine learning algorithms to detect recruitment messages in online propaganda, identify their dissemination on Twitter, and quantify the reactions of exposed users. Analyzing content produced by the Islamic State between 2015 and 2016 shows that propaganda conveying the material, spiritual, and social benefits of joining ISIS increased online support for the group, while content displaying brutal violence decreased endorsement of ISIS across a wide range of videos. Only the group's most extreme supporters reacted positively to violent propaganda.
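A heavily simplified, hypothetical sketch of the final step mentioned in the abstract, quantifying exposed users' reactions by comparing their rate of pro-group posts before and after exposure to each propaganda theme. The file, columns, and labels below are placeholders for illustration, not the authors' pipeline.

```python
# Hypothetical exposure-response summary: per-user rate of supportive posts
# before vs. after exposure, broken out by propaganda theme.
# File path, column names, and labels are illustrative placeholders.
import pandas as pd

posts = pd.read_csv("exposed_user_posts.csv")  # hypothetical path
# Expected columns: user_id, propaganda_theme ("benefits" or "violence"),
# period ("pre" or "post" exposure), supports_group (0/1 per post).

# Mean support rate per user, then averaged across users, by theme and period.
rates = (
    posts.groupby(["propaganda_theme", "period", "user_id"])["supports_group"]
         .mean()
         .groupby(["propaganda_theme", "period"])
         .mean()
         .unstack("period")
)
rates["change"] = rates["post"] - rates["pre"]
print(rates)
```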
|
2023 |
Mitts, T., Phillips, G. and Walter, B.F. |
|
Book |
Gaming and Extremism: The Radicalization of Digital Playgrounds
Charting the increase in the use of games for the dissemination of extremist propaganda, radicalization, recruitment, and mobilization, this book examines the "gamification of extremism." Editors Linda Schlegel and Rachel Kowert bring together a range of insights from world-leading experts in the field to provide the first comprehensive overview of gaming and extremism. The potential nexus between gaming and extremism has become a key area of concern for researchers, policymakers, and practitioners seeking to prevent and counter radicalization, and this book offers insights into key trends and debates, future directions, and potential prevention efforts. This includes an exploration of how games and game-adjacent spaces, such as Discord, Twitch, Steam, and DLive, are being leveraged by extremists for the purposes of radicalization, recruitment, and mobilization. Additionally, the book presents the latest counterterrorism techniques, surveys promising preventing/countering violent extremism (P/CVE) measures currently being utilized in the gaming sphere, and examines the ongoing challenges, controversies, and current gaps in knowledge in the field. This text will be of interest to students and scholars of gaming and gaming culture, as well as an essential resource for researchers and practitioners working in prevention and counter-extremism, professionals working at gaming-related tech companies, and policymakers.
|
2024 |
Schlegel, L. and Kowert, R. |
|