Journal Article
Affective Practice of Soldiering: How Sharing Images Is Used to Spread Extremist and Racist Ethos on Soldiers of Odin Facebook Site
The paper explores how visual affective practice is used to spread and bolster a nationalist, extremist and racist ethos on the public Facebook page of the anti-immigrant group Soldiers of Odin. Affective practice refers to a particular sensibility of political discourse, shaped by social formations and digital technologies—the contexts in which political groups or communities gather, discuss and act. The study shows how visual affective practice, namely sharing and responding to images, fortifies moral claims, fosters a sense of exclusionary solidarity and promotes white nationalist masculinity, which legitimizes racist practices of “soldiering.” By examining both the representations and the reactions (emoticons) they elicit, the study demonstrates how ideas and values are collectively strengthened through affective sharing and are supported by platform infrastructures. Most importantly, it demonstrates that instead of considering the affect of protecting the nation as a natural result of “authentic” gut feeling, we should understand the ways in which it is purposefully and collectively produced and circulated.
2021 | Nikunen, K., Hokka, J. and Nelimarkka, M.
Journal Article
Governing Hate: Facebook and Digital Racism
This article is concerned with identifying the ideological and techno-material parameters that inform Facebook’s approach to racism and racist contents. The analysis aims to contribute to studies of digital racism by showing Facebook’s ideological position on racism and identifying its implications. To understand Facebook’s approach to racism, the article deconstructs its governance structures, locating racism as a sub-category of hate speech. The key findings show that Facebook adopts a post-racial, race-blind approach that does not consider history and material differences, while its main focus is on enforcement, data, and efficiency. In making sense of these findings, we argue, first, that Facebook’s content governance turns hate speech from a question of ethics, politics, and justice into a technical and logistical problem; second, that it socializes users into developing behaviors/contents that adapt to race-blindness, leading to the circulation of a kind of flexible racism; and finally, that it spreads this approach from Silicon Valley to the rest of the world.
2021 | Siapera, E. and Viejo-Otero, P.
Journal Article
On Frogs, Monkeys, and Execution Memes: Exploring the Humor-Hate Nexus at the Intersection of Neo-Nazi and Alt-Right Movements in Sweden
This article is based on a case study of the online media practices of the militant neo-Nazi organization the Nordic Resistance Movement, currently the biggest and most active extreme-right actor in Scandinavia. I trace a recent turn to humor, irony, and ambiguity in their online communication and the increasing adaptation of the stylistic strategies and visual aesthetics of the Alt-Right, inspired by online communities such as 4chan, 8chan, Reddit, and Imgur. Drawing on a visual content analysis of memes (N = 634) created and circulated by the organization, the analysis explores the place of humor, irony, and ambiguity across these cultural expressions of neo-Nazism and how ideas, symbols, and layers of meaning travel back and forth between neo-Nazi and Alt-Right groups within Sweden today.
2021 | Askanius, T.
Journal Article
Digital Dog Whistles: The New Online Language of Extremism
Terrorist and extremist groups communicate sometimes openly but very often in concealed formats. Recently, Far-right extremists, including white supremacists, anti-Semitic groups, racists and neo-Nazis, have started using a coded “New Language.” Alarmed by police and security forces’ attempts to find them online and by social platforms’ attempts to remove their content, they apply this new language of codes and doublespeak. This study explores the emergence of that language, the system of code words developed by Far-right extremists. What are the characteristics of this new language? How is it transmitted? How is it used? Our survey of online Far-right content reveals extremists’ use of visual and textual codes. These hidden languages enable extremists to hide in plain sight while allowing like-minded individuals to identify one another easily. There is no doubt that the “new language” used online by Far-right groups comprises all the known attributes of a language: it is highly creative, productive and instinctive, and relies on exchanges of verbal or symbolic utterances shared by certain individuals and groups. These findings should serve both law enforcement and private-sector bodies interested in preventing hate speech online.
2020 | Weimann, G.
Journal Article
Online Extremism and Terrorism Research Ethics: Researcher Safety, Informed Consent, and the Need for Tailored Guidelines
This article reflects on two core issues of human subjects’ research ethics and how they play out for online extremism and terrorism researchers. Medical research ethics, on which social science research ethics are based, centers the protection of research subjects, but what of the protection of researchers? Greater attention to researcher safety, including online security and privacy and mental and emotional wellbeing, is called for herein. Researching hostile or dangerous communities does not, on the other hand, exempt us from our responsibilities to protect our research subjects, which is generally ensured via informed consent. This is complicated in data-intensive research settings, however, especially with the former type of community. Also grappled with in this article, therefore, are the pros and cons of waived consent and deception, and the allied issue of preventing harm to subjects in online extremism and terrorism research. The best path forward, it is argued—besides talking through the diversity of ethical issues arising in online extremism and terrorism research and committing our thinking and decision-making around them to paper to a much greater extent than we have done to date—may be the development of ethics guidelines tailored to our sub-field.
2021 | Conway, M.
Journal Article
Discourse patterns used by extremist Salafists on Facebook: identifying potential triggers to cognitive biases in radicalized content
Understanding how extremist Salafists communicate, and not only what they communicate, is key to gaining insights into the ways they construct their social order and use psychological forces to radicalize potential sympathizers on social media. With a view to contributing to the existing body of research, which mainly focuses on terrorist organizations, we analyzed accounts that advocate violent jihad without supporting (at least publicly) any terrorist group and hence might be able to reach a large and not yet radicalized audience. We constructed a critical multimodal and multidisciplinary framework of discourse patterns that may work as potential triggers to a selection of key cognitive biases, and we applied it to a corpus of Facebook posts published by seven extremist Salafists. The results reveal how these posts are based either on an intense crisis construct (through negative outgroup nomination, intensification and emotion) or on simplistic solutions composed of taken-for-granted statements. Devoid of any grey zone, these posts do not seek to convince the reader; polarization is framed as a presuppositional, established reality. These observations reveal that extremist Salafist communication is constructed in a way that may trigger specific cognitive biases, which are discussed in the paper.
2021 | Bouko, C., Naderer, B., Rieger, D., Van Ostaeyen, P. and Voué, P.