Journal Article |
Conversations with other (alt-right) women: How do alt-right female influencers narrate a far-right identity?
In the process of shifting far-right ideas from the fringes to the centre of the political spectrum, the alt-right has infiltrated online spaces to mainstream extremist ideas. As part of this process, female alt-right influencers have emerged within various popular social media platforms and fringe outlets, seeking to build credibility for the movement with new audiences. Contrary to previous assumptions about women as harmless adherents of far-right ideology, alt-right women are emerging as “organic intellectuals”, influential in the formation of everyday beliefs and principles in congruence with the tenets of far-right ideology. Their narratives strategically weave far-right ideological discourses, such as the imminent crisis of white identity, with topical matters of lifestyle and well-being. This article examines the rhetoric of online influencers as they shape an ideological space which is contributing to the normalization or mainstreaming of far-right ideas. In doing so, it addresses two questions: How do alt-right female influencers narrate a far-right identity? How do they mainstream white supremacist ideas online? Drawing on new empirical material from a series of far-right podcasts, this article demonstrates that alt-right women strategically construct a “liberated” female identity rooted in femininity, traditionalism and gender complementarity, and problematize feminism and women’s emancipation as constitutive of the crisis facing the white race. It further identifies the presence of an elaborate cultural narrative around white victimhood which alt-right influencers use to mainstream their ideology. To counter the perpetuation of far-right ideas in society, women’s participation in shaping far-right ideology must be addressed. This article sheds light on how a small but highly visible group of influencers is actively working to promote a dangerous far-right ideology.
|
2022 |
Maria-Elena, K., Yannick, V.L. and Vanessa, N. |
Journal Article |
Far-Right ‘Reactions’: a comparison of Australian and Canadian far-right extremist groups on Facebook
Little is known about which features of Facebook’s interface appeal to users of far-right extremist groups, how such features may influence a user’s interpretation of far-right extremist themes and narratives, and how this is being experienced across various nations. This paper looks at why certain ‘Reactions’ appealed to users in Australian and Canadian far-right groups on Facebook, and how these ‘Reactions’ may have characterized user decisions during their interaction with far-right extremist themes and narratives. A mixed methods approach was used to conduct a cross-national comparative analysis of three years of ‘Reaction’ use across 59 Australian and Canadian far-right extremist groups on Facebook (2016–2019). The level of user engagement with administrator posts was assessed across six ‘Reactions’ (‘Love’, ‘Haha’, ‘Wow’, ‘Sad’, ‘Angry’ and ‘Thankful’), identifying the themes and narratives that generated the most user engagement for each. This was paired with an in-depth qualitative analysis of the themes and narratives that attracted the most user engagement for the two most popular ‘Reactions’ used over time (‘Angry’ and ‘Love’). Results highlight ‘Angry’ and ‘Love’ as the two most popular ‘Reactions’ assigned to in-group-out-group themes and narratives, with algorithms having propelled their partnership in these groups.
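The quantitative step described above — tallying Reaction counts per theme and ranking themes by engagement — can be sketched as a simple aggregation. The records, theme labels and counts below are invented placeholders, not the study’s dataset.

```python
from collections import defaultdict

# Hypothetical post records: (theme, reaction, count) tuples standing in
# for coded administrator posts and their Reaction tallies.
posts = [
    ("out-group threat", "Angry", 120),
    ("out-group threat", "Love", 15),
    ("in-group solidarity", "Love", 90),
    ("in-group solidarity", "Angry", 10),
    ("humour", "Haha", 60),
]

def engagement_by_theme(records, reactions=("Angry", "Love")):
    """Sum the selected Reactions per theme and rank themes by total."""
    totals = defaultdict(int)
    for theme, reaction, count in records:
        if reaction in reactions:
            totals[theme] += count
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(engagement_by_theme(posts))  # themes ranked by 'Angry' + 'Love' use
```

Restricting `reactions` to a different subset (e.g. only `("Angry",)`) reproduces the per-Reaction rankings the study compares over time.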
|
2022 |
Hutchinson, J. and Droogan, J. |
Journal Article |
https://www.tandfonline.com/doi/full/10.1080/18335330.2021.1969030
This study investigated the phenomenon of group polarisation, with particular attention to the differences between offline and online settings. Polarisation is a process that leads people to develop extreme ideologies. Three hundred and seven participants were recruited and randomly assigned to different experimental conditions, i.e. antisocial and prosocial polarisation, within groups of six people, four of whom were confederates, participating in discussions about a social dilemma under two different circumstances: face to face and online. The degree of polarisation was assessed considering the final decisions adopted by the participants, as well as the internal dynamics characterising their final attitudes, i.e. compliance versus conversion. Results showed that online groups appeared more susceptible to polarisation and that their members reported a greater degree of conformism. In particular, within online environments, the risk of being polarised, both antisocially and prosocially, increased by around 12%. Furthermore, in an online setting, a greater degree of conversion emerged only when members decided to adopt prosocial behaviour, while a greater degree of compliance emerged whenever they decided to adopt antisocial behaviour.
|
2021 |
Sabadini, C., Rinaldi, M. and Guazzini, A. |
Journal Article |
Differentiating terrorist groups: a novel approach to leverage their online communication
Any intervention in the violent acts of terrorist groups requires accurate differentiation among the groups themselves, which has largely been overlooked in their study beyond qualitative work. To explore the notion of terrorist group differentiation, the online communication of six violent groups was collected: Al-Nusrah Front, al-Qa’ida Central, al-Qa’ida in the Arabian Peninsula, Hamas, Islamic State of Iraq and Syria, and the Taliban. All six groups embedded their ideology in digitised documents that were shared through multiple online social networks and media platforms in attempts to influence individuals to identify with their beliefs. The way these groups constructed social roles for their supporters in their ideology was proposed as a novel way to differentiate them, and key term extraction was used to find important terms referenced in their communication. Experimental classification was devised to find the highest-ranking roles capable of prediction. Role terms produced high accuracy scores across experiments differentiating the groups (95% CI: 95–98%), with varying inter-group and intra-ideological differences emerging from authority-, religion-, closeness- and conflict-based social roles. This suggests these constructs possess strong predictive potential to separate terrorist groups through nuanced expressions observed in their communication behaviour, and advances our understanding of how these groups deploy harmful ideology.
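The core idea — differentiating groups by the social-role terms in their communication — can be sketched as a role-term frequency profile per group, with a new document assigned to the group whose profile it overlaps most. The simple overlap score below is a stand-in for the paper’s classifier, and the documents, labels and vocabulary are invented placeholders, not the authors’ corpus or extracted terms.

```python
from collections import Counter

# Hypothetical role vocabulary: only role-like terms count as features.
role_vocab = {"leader", "commander", "followers", "brothers",
              "enemy", "faithful", "elders"}

def role_profile(docs):
    """Frequency of role-vocabulary terms across one group's documents."""
    counts = Counter()
    for doc in docs:
        counts.update(w for w in doc.lower().split() if w in role_vocab)
    return counts

# Invented training documents for two placeholder groups.
profiles = {
    "group_a": role_profile([
        "our leader commands the brothers to resist the enemy",
        "the commander orders his followers to confront the enemy",
    ]),
    "group_b": role_profile([
        "the faithful gather in prayer led by the elders",
        "prayer and faith guide the elders of the community",
    ]),
}

def classify(doc):
    """Assign a document to the group whose role profile it best matches."""
    words = [w for w in doc.lower().split() if w in role_vocab]
    return max(profiles, key=lambda g: sum(profiles[g][w] for w in words))

print(classify("the brothers follow their leader against the enemy"))
```

A real pipeline would replace the overlap score with a trained classifier and derive the vocabulary via key term extraction, as the study does.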
|
2021 |
De Bruyn, P.C. |
Chapter |
Birds of a Feather: A Comparative Analysis of White Supremacist and Violent Male Supremacist Discourses
This chapter explores the intersection of white and male supremacy, both of which misrepresent women as genetically and intellectually inferior and reduce them to reproductive and/or sexual functions. The white power movement historically has been characterized by sexism and misogyny, as evidenced by the movement’s attempts to retain European heritage and maintain whiteness by policing the behavior and controlling the bodies of white women. However, the influence of white supremacist discourses on physically violent manifestations of the male supremacist movement remains largely understudied. Using supervised machine learning, we compare a corpus of violent male supremacist manifestos and other multimodal content with highly influential white nationalist texts and the manifestos of violent white supremacists to identify the shared beliefs, tropes and justifications for violence deployed within them.
|
2022 |
Pruden, M.L., Lokmanoglu, A.D., Peterscheck, A. and Veilleux-Lepage, Y. |
Journal Article |
A semi-supervised algorithm for detecting extremism propaganda diffusion on social media
Extremist online networks reportedly tend to use Twitter and other Social Networking Sites (SNS) to issue propaganda and recruitment statements. Traditional machine learning models may encounter problems in such a context, owing to the peculiarities of microblogging sites and the manner in which these networks interact (both among themselves and with other networks). Moreover, state-of-the-art approaches have relied on non-transparent techniques that cannot be audited, so, despite being top-performing techniques, it is impossible to check whether the models are actually fair. In this paper, we present a semi-supervised methodology that uses our Discriminatory Expressions algorithm for feature selection to detect expressions that are biased towards extremist content (Francisco and Castro 2020). With the help of human experts, the relevant expressions are filtered and used to retrieve further extremist content, in order to iteratively build a set of relevant and accurate expressions. These discriminatory expressions have been shown to produce less complex models that are easier to comprehend, and thus improve model transparency. We present close to 70 expressions discovered using this method, alongside validation tests of the algorithm in several different contexts.
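One iteration of the loop described above — score candidate expressions by how strongly they skew towards the labelled set, then keep only those a human expert approves — can be sketched as follows. The frequency-ratio score is a simple stand-in for the authors’ Discriminatory Expressions algorithm, and the corpus, candidate terms and approval stub are invented.

```python
def discrimination_score(term, positive_docs, negative_docs):
    """How much more often a term occurs (substring match) in positive docs."""
    pos = sum(term in d for d in positive_docs) / len(positive_docs)
    neg = sum(term in d for d in negative_docs) / len(negative_docs)
    return (pos + 1e-6) / (neg + 1e-6)  # smoothed ratio avoids division by zero

def refine_expressions(candidates, positive_docs, negative_docs,
                       approve, threshold=2.0):
    """One iteration: keep biased candidates that the expert approves."""
    biased = [t for t in candidates
              if discrimination_score(t, positive_docs, negative_docs) >= threshold]
    return [t for t in biased if approve(t)]

# Invented toy corpus: 'positive' stands for labelled target content.
positive_docs = ["join the cause now", "the cause needs you", "stand with us"]
negative_docs = ["the weather is nice", "join us for lunch"]

print(refine_expressions(["cause", "join", "weather"],
                         positive_docs, negative_docs,
                         approve=lambda term: True))  # expert stub approves all
```

In the full methodology, the surviving expressions would then be used to retrieve further documents, expanding `positive_docs` for the next iteration.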
|
2022 |
Francisco, M., Benítez-Castro, M.Á., Hidalgo-Tenorio, E. and Castro, J.L. |