Journal Article | 2023 | Govers, J., Feldman, P., Dant, A. and Patros, P.
Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and Politicised Hate Speech
Social media is a modern person's digital voice to project and engage with new ideas and mobilise communities, a power shared with extremists. Given the societal risks of unvetted content-moderating algorithms for Extremism, Radicalisation, and Hate speech (ERH) detection, responsible software engineering must understand who, what, when, where, and why such models are necessary to protect user safety and free expression. Hence, we propose and examine the unique research field of ERH context mining to unify disjoint studies. Specifically, we evaluate the start-to-finish design process, from socio-technical definition-building and dataset collection strategies to technical algorithm design and performance. Our Systematic Literature Review (SLR) of 51 studies published between 2015 and 2021 provides the first cross-examination of textual, network, and visual approaches to detecting extremist affiliation, hateful content, and radicalisation towards groups and movements. We identify consensus-driven ERH definitions and propose solutions to existing ideological and geographic biases, particularly the lack of research in Oceania/Australasia. Our hybridised investigation of Natural Language Processing, Community Detection, and visual-text models demonstrates the dominant performance of textual transformer-based algorithms. We conclude with vital recommendations for ERH context mining researchers and propose an uptake roadmap with guidelines for researchers, industries, and governments to enable a safer cyberspace.
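The transformer-based text classifiers this review finds dominant can be exercised with off-the-shelf tooling. Below is a minimal, hypothetical sketch using the Hugging Face transformers pipeline; the model checkpoint is an illustrative assumption (any hate-speech fine-tuned transformer could be substituted) and is not one evaluated in the review.

```python
# Minimal sketch: transformer-based hate-speech classification with the
# Hugging Face `transformers` pipeline. The checkpoint below is an
# illustrative assumption, not one drawn from the reviewed studies.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-hate",  # assumed hub checkpoint
)

posts = [
    "Everyone deserves a voice online.",
    "People like you should be driven out of this country.",
]

# Each result carries a predicted label and a confidence score.
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']} ({result['score']:.2f}): {post}")
```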
Journal Article | 2023 | Afzal, W.
The role of information skewness in shaping extremist content: A look at four extremists
Introduction. Extremism, distinct from activism, poses a serious threat to the healthy functioning of a society. In the contemporary world, the ability of extremists to spread their narratives using digital information environments has increased tremendously. Despite a substantial body of research on extremism, our understanding of the role of information and its properties in shaping extremist content remains sketchy.
Method. To fill this gap, the current research used content analysis and an affective lexicon to identify and categorise terms from the publicly available online content of four extremists: two groups and two individuals. The property of information skewness provided the deciphering lens through which the categorised content was assessed.
Analysis. Contextual categories of information relevant to all the extremists were developed to analyse the content meaningfully. Six categories (religion, ideology, politics-history, cognition, affection, and conation) provided the framework used to analyse and deductively categorise the data through content analysis. The affective lexicon developed by Ortony et al. (1987) was used to identify words belonging to the categories of cognition, affection (emotions and feelings), and conation (behaviour/actions).
Results. The findings reveal that the property of information skewness plays a significant role in shaping extremist content, and that two aspects of this property, (a) intensity and (b) positivity or negativity, can be used to (1) classify extremists into meaningful categories and (2) identify generalisable information strategies of extremists.
Conclusions. It is hoped that the findings of this research will inform future enquiries into the role of information and its properties in shaping extremist content and help security agencies engage effectively in information warfare with extremists.
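The lexicon-driven categorisation step described in the Method and Analysis could be prototyped as below. This is a hypothetical sketch: the category word lists are placeholders, not the Ortony et al. (1987) affective lexicon or the study's actual coding scheme.

```python
# Hypothetical sketch of deductive, lexicon-based term categorisation.
# The word lists are illustrative placeholders, not the Ortony et al. (1987)
# affective lexicon or the study's actual coding scheme.
import re
from collections import Counter

LEXICON = {
    "cognition": {"believe", "think", "know", "doubt"},
    "affection": {"anger", "fear", "love", "hate"},
    "conation":  {"fight", "join", "act", "resist"},
}

def categorise(text: str) -> Counter:
    """Count how many tokens in `text` fall into each lexicon category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in LEXICON.items():
            if token in words:
                counts[category] += 1
    return counts

sample = "They believe we must act now; anger and fear drive them to fight."
print(categorise(sample))  # Counter({'conation': 2, 'affection': 2, 'cognition': 1})
```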
Journal Article | 2023 | Burton, J.
Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence
This paper analyses how AI and algorithms are being used to radicalize, polarize, and spread racism and political instability. AI and algorithms are not just tools deployed by national security agencies but contributors to polarization, radicalism and political violence. Securitization processes are the missing link between how AI has been designed and used and the harmful outcomes it has generated. AI-enabled conflict needs to be reconceptualized in a way that is more attentive to the human, social and psychological impacts of the technology.
Journal Article | 2023 | Alexander, A. and Wang, H.
Topological Data Mapping of Online Hate Speech, Misinformation, and General Mental Health: A Large Language Model Based Study
The advent of social media has led to increased concern over its potential to propagate hate speech and misinformation, which, in addition to contributing to prejudice and discrimination, has been suspected of playing a role in increasing social violence and crimes in the United States. While the literature has shown an association between posting hate speech and misinformation online and certain personality traits of posters, the general relationship and relevance of online hate speech/misinformation to the overall psychological wellbeing of posters remain elusive. One difficulty lies in the lack of data analytics tools capable of analyzing the massive volume of social media posts to uncover the underlying hidden links. Recent progress in machine learning and large language models such as ChatGPT has made such an analysis possible. In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit. We then utilized OpenAI's GPT-3 to derive embeddings of these posts, which are high-dimensional real-valued vectors that presumably represent the hidden semantics of posts. We then performed various machine-learning classifications based on these embeddings to understand the role of hate speech/misinformation in various communities. Finally, a topological data analysis (TDA) was applied to the embeddings to obtain a visual map connecting online hate speech, misinformation, various psychiatric disorders, and general mental health.
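The embedding-and-classification stage of this pipeline could be sketched as follows. This is an assumed reconstruction, not the authors' code: the embedding model name, the toy posts, and the binary labels are all illustrative, and the study's TDA stage (e.g. a Mapper-style visualisation) is omitted.

```python
# Hypothetical sketch of the abstract's pipeline: embed posts with an OpenAI
# embedding model, then classify communities from the embedding vectors.
# Model name, posts, and labels are illustrative assumptions, not the
# authors' published configuration; the TDA/Mapper stage is omitted.
from openai import OpenAI
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

posts = [  # placeholder stand-ins for collected Reddit posts
    "post from community A ...", "another post from community A ...",
    "post from community B ...", "another post from community B ...",
    "a third post from community A ...", "a third post from community B ...",
]
labels = [0, 0, 1, 1, 0, 1]  # e.g. 0 = control, 1 = hate/misinformation community

# Derive high-dimensional real-valued embedding vectors for each post.
response = client.embeddings.create(model="text-embedding-3-small", input=posts)
X = [item.embedding for item in response.data]

# Classify community membership from embedding geometry.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.33, stratify=labels, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```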
Journal Article | 2023 | Wade, M., Baker, S.A. and Walsh, M.J.
Crowdfunding platforms as conduits for ideological struggle and extremism: On the need for greater regulation and digital constitutionalism
Crowdfunding platforms remain understudied as conduits for ideological struggle. While other social media platforms may enable the expression of hateful and harmful ideas, crowdfunding can actively facilitate their enaction through financial support. In addressing such risks, crowdfunding platforms attempt to mitigate complicity while retaining legitimacy: they seek to ensure their fundraising tools are not exploited for intolerant, violent or hate-based purposes, yet simultaneously avoid restrictive policies that undermine their legitimacy as 'open' platforms. Although social media platforms are routinely scrutinized for enabling misinformation, hateful rhetoric and extremism, crowdfunding has largely escaped critical inquiry, despite being repeatedly implicated in amplifying such threats. Drawing on the 'Freedom Convoy' movement as a case study, this article employs critical discourse analysis to trace how crowdfunding platforms reveal their underlying values in privileging either collective safety or personal liberty when hosting divisive causes. The radically different policy decisions adopted by the crowdfunding platforms GoFundMe and GiveSendGo expose a concerning divide between 'Big Tech' and 'Alt-Tech' platforms regarding what harms they are willing to risk, and the ideological rationales through which these determinations are made. There remain relatively few regulatory safeguards guiding such impactful strategic choices, leaving crowdfunding platforms susceptible to weaponization. With Alt-Tech platforms aspiring to build an 'alternative internet', this paper highlights the urgent need to explore digital constitutionalism in the crowdfunding space, establishing firmer boundaries to better mitigate the risk of fundraising platforms becoming complicit in catastrophic harms.
Journal Article | 2023 | Munn, L.
Toxic play: Examining the issue of hate within gaming
This article examines the problem of hate and toxic behavior in gaming. Videogames have risen to become a dominant cultural form, seeing significant increases in players, playtime, and revenue. More people are playing games than ever before, broadening "gamers" into a highly diverse demographic. Yet this rise has been accompanied by a growing recognition of the racism, sexism, xenophobia, and other forms of harassment taking place on these platforms. Hate within gaming creates toxic communities and takes a toll particularly on marginalized groups, raising both ethical and financial issues for the industry, which seeks to address this problem in multiple ways. This paper surveys and synthesizes recent research on the topic from both inside and outside academia, laying out the problem, its manifestations, key drivers, and current responses. It concludes with a research agenda that offers a foundation for researchers, policy-makers, and companies to build on.