Journal Article | 2024 | Schumann, S., Clemmow, C., Rottweiler, B. and Gill, P.
Distinct patterns of incidental exposure to and active selection of radicalizing information indicate varying levels of support for violent extremism
Exposure to radicalizing information has been associated with support for violent extremism. It is, however, unclear whether specific information use behavior, namely a distinct pattern of incidental exposure (IE) to and active selection (AS) of radicalizing content, indicates stronger violent extremist attitudes and radical action intentions. Drawing on a representative general population sample (N = 1509) and applying latent class analysis, we addressed this gap in the literature. Results highlighted six types of information use behavior. The largest group of participants reported a near-zero probability of both IE to and AS of radicalizing material. Two groups were characterized by a high or moderate probability of incidental exposure combined with a low probability of active selection of radicalizing content. The remaining groups displayed low, moderate, or high probabilities of both IE and AS. Importantly, we found between-group differences in violent extremist attitudes and radical behavioral intentions. Individuals reporting near-zero probabilities of both IE and AS expressed the weakest violent extremist attitudes and willingness to use violence, while those reporting high probabilities of both expressed the strongest. Groups defined by even moderate probabilities of AS endorsed violent extremism more strongly than those for which the probability of incidental exposure was moderate or high but AS of radicalizing content was unlikely.
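The grouping step this abstract names, latent class analysis on binary exposure/selection indicators, amounts to fitting a Bernoulli mixture model. Below is a minimal sketch of that estimation via expectation-maximization; the simulated data, item count, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_lca(X, n_classes=6, n_iter=200, seed=0):
    """EM for a latent class (Bernoulli mixture) model over binary items.

    X: (n_respondents, n_items) 0/1 matrix, e.g. IE/AS indicator questions.
    Returns class proportions, per-class item-endorsement probabilities,
    and each respondent's posterior class membership.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class proportions
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # item probabilities
    for _ in range(n_iter):
        # E-step: log-posterior of class membership for every respondent.
        log_post = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions and item probabilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, resp

# Toy usage: 1509 respondents answering 8 binary IE/AS items (simulated).
X = np.random.default_rng(1).integers(0, 2, size=(1509, 8))
pi, theta, resp = fit_lca(X)
print(pi)  # estimated relative size of each latent class
```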
Journal Article | 2019 | Myagkov, M., Kashpur, V. V., Baryshev, A. A., Goiko, V. L. and Shchekotin, E. V.
Distinguishing Features Of The Activity Of Extreme Right Groups Under Conditions Of State Counteraction To Online Extremism In Russia
The conservative shift taken by the Russian authorities forced members of the Russian extreme right to seek shelter online; even there, however, they fell under censorship restrictions. The objective of this study is to reveal the distinguishing features of extreme right online groups, and of their participants' activity, under conditions of censorship. The groups studied were identified by means of linguistic markers of extreme right sentiments and attitudes. Social network analysis metrics were used to analyze interconnections between the groups and the internal migrations of closed communities. The study revealed that (1) extreme right online communities use the tactic of creating mirror Internet sites in case the main group is blocked; (2) blocking the most extreme of the oppositional extreme right online groups induces the remaining ones to imitate obedience to the law, using "softer" forms of extremist rhetoric; and (3) the audience of the blocked groups continues spreading extremist ideas through channels related to other subjects. The authors conclude that prohibiting extreme right discourse promotes the proliferation of extreme right ideas and sentiments.
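The interconnection analysis described here can be approximated with standard social network tooling. The sketch below, using the networkx library, flags candidate "mirror" groups by audience overlap; the group names, membership sets, and overlap threshold are hypothetical, not the study's data or code.

```python
import networkx as nx

# Hypothetical data: each online group mapped to the set of its subscriber IDs.
group_members = {
    "group_a":        {1, 2, 3, 4, 5},
    "group_a_mirror": {2, 3, 4, 5, 6},   # near-duplicate audience
    "group_b":        {7, 8, 9},
}

# Build a weighted graph; edge weight = Jaccard overlap of audiences.
G = nx.Graph()
G.add_nodes_from(group_members)
names = sorted(group_members)
for i, g1 in enumerate(names):
    for g2 in names[i + 1:]:
        overlap = group_members[g1] & group_members[g2]
        union = group_members[g1] | group_members[g2]
        if overlap:
            G.add_edge(g1, g2, weight=len(overlap) / len(union))

# High audience overlap is one signal that a group mirrors a blocked one.
mirrors = [(u, v, d["weight"]) for u, v, d in G.edges(data=True) if d["weight"] >= 0.5]
print(mirrors)                  # [('group_a', 'group_a_mirror', 0.666...)]
print(nx.degree_centrality(G))  # which groups bridge the most audiences
```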
Journal Article | 2019 | Enomoto, C. E. and Douglas, K.
Do Internet Searches for Islamist Propaganda Precede or Follow Islamist Terrorist Attacks?
Using a vector autoregressive (VAR) model, this paper analyzes the relationship between Islamist terrorist attacks and Internet searches for phrases such as "join Jihad" or "join ISIS." It was found that Internet searches for "join Jihad" and "taghut" (an Arabic word meaning "to rebel") preceded Islamist terrorist attacks by three weeks over the period January 2014 to December 2016. Internet searches for "kufar" (a derogatory Arabic word for non-Muslims) preceded attacks by Islamist terrorist groups that resulted in deaths. Casualties, including those injured and killed by the Islamist groups, were also found to precede Internet searches for "join Jihad" and "ISIS websites." Countermeasures to the use of social media for terrorist activity are also discussed. For example, if specific Internet search terms can be identified that precede terrorist attacks, authorities can be put on alert to possibly stop an impending attack. Chat rooms and online discussion groups can also be used to disseminate information arguing against terrorist propaganda as it is released.
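The lead-lag question the paper poses maps onto a standard Granger-causality test within a VAR. A minimal sketch using statsmodels follows; the weekly series are simulated stand-ins with a built-in three-week lead, not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated weekly data: search volume for a phrase and an attack-count proxy,
# with searches leading attacks by three weeks (mirroring the paper's finding).
rng = np.random.default_rng(42)
weeks = pd.date_range("2014-01-05", "2016-12-25", freq="W")
searches = rng.poisson(20, len(weeks)).astype(float)
attacks = 0.3 * np.roll(searches, 3) + rng.normal(0, 1, len(weeks))
df = pd.DataFrame({"searches": searches, "attacks": attacks}, index=weeks)

# Fit the VAR, letting AIC pick the lag order, then test whether past
# search volume helps predict attacks (Granger causality).
results = VAR(df).fit(maxlags=6, ic="aic")
print(results.k_ar)  # selected lag order
print(results.test_causality("attacks", ["searches"], kind="f").summary())
```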
Journal Article | 2019 | Hall, M., Logan, M., Ligon, G.S. and Derrick, D.C.
Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web
The advent of the Internet inadvertently augmented the functioning and success of violent extremist organizations. Terrorist organizations like the Islamic State in Iraq and Syria (ISIS) use the Internet to project their message to a global audience. The majority of research and practice on web-based terrorist propaganda uses human coders to classify content, raising serious concerns such as burnout, mental stress, and the reliability of the coded data. More recently, technology platforms and researchers have started to examine online content using automated classification procedures. However, there are questions about the robustness of automated procedures, given insufficient research comparing and contextualizing the differences between human and machine coding. This article compares the output of three text analytics packages with that of human coders on a sample of one hundred nonindexed web pages associated with ISIS. We find that prevalent topics (e.g., holy war) are accurately detected by the three packages, whereas nuanced concepts (e.g., lone-wolf attacks) are generally missed. Our findings suggest that naïve use of standard applications does not approximate human understanding, and therefore consumption, of radicalizing content. Before radicalizing content can be automatically detected, we need a closer approximation to human understanding.
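The comparison the abstract describes, machine output against human coding, is typically scored with chance-corrected agreement statistics. A minimal sketch using scikit-learn's Cohen's kappa follows; the labels are invented for illustration, and the text analytics packages being compared are not named in this sketch.

```python
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical codes for ten web pages: 1 = topic present, 0 = absent.
human_codes =   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
machine_codes = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]

# Cohen's kappa corrects raw agreement for agreement expected by chance;
# values near 1 indicate the package replicates the human coders.
print("kappa:", round(cohen_kappa_score(human_codes, machine_codes), 3))
print(classification_report(human_codes, machine_codes,
                            target_names=["topic absent", "topic present"]))
```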
Journal Article | 2020 | Lavi, M.
Do Platforms Kill?
This Article analyzes intermediaries' civil liability for terror attacks under the anti-terror statutes and other doctrines in tort law. It aims to contribute to the literature in several ways. First, it outlines the ways intermediaries aid terrorist activities, whether willingly or unwittingly; identifying the role online intermediaries play in terrorist activities is a first step toward a legal policy that would mitigate the harm caused by terrorists' incitement over the internet. Second, the Article outlines a minimum standard of civil liability that should be imposed on intermediaries for speech made by terrorists on their platforms. Third, it highlights the contradictions between intermediaries' policies on harmful content and the technologies that create personalized experiences for users, which can at times recommend unlawful content and connections.
Report | 2021 | Thakur, D. and Llansó, E.
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.
This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models, including cryptographic and perceptual hashing, compare user-generated content with existing, known content. Predictive models, including computer vision and computer audition, are machine learning techniques that aim to identify characteristics of new or previously unknown content.
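To make the matching-model distinction concrete, here is a minimal sketch contrasting cryptographic hashing with a toy perceptual (average) hash; the random array stands in for a grayscale image, and the match threshold is an assumption for illustration.

```python
import hashlib
import numpy as np

# Cryptographic matching: any change to the bytes yields a completely
# different digest, so it only catches exact copies of known content.
content = b"user-uploaded file bytes"
print(hashlib.sha256(content).hexdigest())

# Toy perceptual hash (average hash): each bit records whether a pixel is
# brighter than the image mean, so small edits flip only a few bits.
def average_hash(img: np.ndarray) -> np.ndarray:
    return (img > img.mean()).flatten()

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(8, 8))   # stand-in for a grayscale image
edited = original.copy()
edited[0, 0] += 40.0                          # a small perturbation

# Similarity is scored by Hamming distance between the two bit strings;
# below some threshold the items are treated as a match.
distance = int(np.count_nonzero(average_hash(original) != average_hash(edited)))
print("Hamming distance:", distance, "-> match" if distance <= 5 else "-> no match")
```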