Report | Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.
This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content.
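As a rough illustration of the distinction the abstract draws between cryptographic and perceptual hashing, the sketch below compares two images both ways. It is a minimal example, not the tooling the report evaluates: it assumes the Pillow imaging library, the file names and the distance threshold are hypothetical placeholders, and production matching systems rely on far more robust perceptual hashes (such as PhotoDNA or PDQ).

    # Minimal sketch: exact (cryptographic) matching vs. approximate (perceptual) matching.
    # Assumes the Pillow library; file names and threshold are illustrative placeholders.
    import hashlib
    from PIL import Image

    def sha256_digest(path: str) -> str:
        """Cryptographic hash: any change to the file yields a completely different digest."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def average_hash(path: str, size: int = 8) -> int:
        """Toy perceptual 'average hash': visually similar images yield similar bit patterns."""
        pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two perceptual hashes."""
        return bin(a ^ b).count("1")

    # Exact match succeeds only if the files are byte-for-byte identical.
    exact_match = sha256_digest("known_image.jpg") == sha256_digest("uploaded_image.jpg")

    # Perceptual match tolerates small edits such as re-encoding or resizing.
    distance = hamming_distance(average_hash("known_image.jpg"), average_hash("uploaded_image.jpg"))
    near_duplicate = distance <= 10  # threshold chosen for illustration, not taken from the report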
2021 | Thakur, D. and Llansó, E.
Journal Article | The hidden hierarchy of far-right digital guerrilla warfare
The polarizing tendency of politically inclined social media is usually claimed to be spontaneous, or a by-product of underlying platform algorithms. This contribution revisits both claims by linking the digital world of social media to rules derived from capitalist accumulation in the post-Fordist age, from a transdisciplinary perspective that bridges the human and exact sciences. Behind claims of individual freedom, there is a rigid pyramidal hierarchy of power that makes heavy use of military techniques developed in the late years of the Cold War, namely Reflexive Control in Russia and Boyd’s decision cycle in the USA. This hierarchy is not the old-style “command-and-control” of Fordist times but an “emergent” one, whereby individual agents respond to informational stimuli and are coordinated to move as a swarm. Such a post-Fordist organizational structure resembles guerrilla warfare. In this new world, it is the far right that plays the revolutionary by deploying avant-garde guerrilla methods, while the so-called left paradoxically appears conservative, defending the existing structure of exploitation. Although the tactical goals are often unclear, the strategic objective of far-right guerrillas is to hold on to power and benefit particular groups in accumulating more capital. We draw examples from the Brazilian far right to support our claims.
2021 | Cesarino, L. and Nardelli, P.H.J.
Journal Article | Identifying Key Players in Violent Extremist Networks: Using Socio-Semantic Network Analysis as Part of a Program of Content Moderation
Some strategies for moderating online content have targeted the individuals believed to be most influential in the diffusion of such material, while others have focused on censoring the content itself. Few approaches consider these two aspects simultaneously. The present study addresses this gap by showing how socio-semantic network analysis can help identify individuals and subgroups who are strategically positioned in radical networks and whose comments encourage the use of violence. It also makes it possible to identify the individuals and subgroups who act as intermediaries and whose statements are often among the most violent.
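To make the idea of strategically positioned intermediaries concrete, here is an illustrative sketch using the networkx library. It is not the authors' method: betweenness centrality stands in for the structural (socio) layer, and a toy violence_score dictionary stands in for the semantic layer of coded comments; the graph, scores, and thresholds are all hypothetical.

    # Illustrative only: flag structurally central users whose content is coded as violent.
    import networkx as nx

    # Hypothetical interaction network: an edge means one user replied to another.
    G = nx.Graph()
    G.add_edges_from([
        ("a", "b"), ("a", "c"), ("b", "c"),   # one dense subgroup
        ("d", "e"), ("d", "f"), ("e", "f"),   # another dense subgroup
        ("c", "d"),                           # "c" and "d" bridge the two subgroups
    ])

    # Betweenness centrality as a rough proxy for intermediary (brokerage) positions.
    betweenness = nx.betweenness_centrality(G)

    # Placeholder semantic layer: share of each user's comments coded as violent.
    violence_score = {"a": 0.1, "b": 0.0, "c": 0.6, "d": 0.7, "e": 0.2, "f": 0.1}

    # Combine the structural and semantic layers: central users with violent content.
    key_players = [
        user for user in G.nodes
        if betweenness[user] > 0.3 and violence_score.get(user, 0.0) > 0.5
    ]
    print(sorted(key_players))  # in this toy network: ['c', 'd']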
2021 | Bérubé, M., Beaulieu, L.A., Mongeau, P. and Saint-Charles, J.
Journal Article | Hidden order across online extremist movements can be disrupted by nudging collective chemistry
Disrupting the emergence and evolution of potentially violent online extremist movements is a crucial challenge. Extremism research has analyzed such movements in detail, focusing on individual- and movement-level characteristics. But are there system-level commonalities in the ways these movements emerge and grow? Here we compare the growth of the Boogaloos, a new and increasingly prominent U.S. extremist movement, to the growth of online support for ISIS, a militant, terrorist organization based in the Middle East that follows a radical version of Islam. We show that the early dynamics of these two online movements follow the same mathematical order despite their stark ideological, geographical, and cultural differences. The evolution of both movements, across scales, follows a single shockwave equation that accounts for heterogeneity in online interactions. These scientific properties suggest specific policies to address online extremism and radicalization. We show how actions by social media platforms could disrupt the onset and ‘flatten the curve’ of such online extremism by nudging its collective chemistry. Our results provide a system-level understanding of the emergence of extremist movements that yields fresh insight into their evolution and possible interventions to limit their growth.
2021 | Velásquez, N., Manrique, P., Sear, R., Leahy, R., Restrepo, N.J., Illari, L., Lupu, Y. and Johnson, N.F.
Policy | Operating with impunity: legal review
The independent Commission for Countering Extremism has published a legal review examining whether existing legislation adequately deals with hateful extremism.
2021 | Commission for Countering Extremism
Journal Article | Online Hate and Zeitgeist of Fear: A Five-Country Longitudinal Analysis of Hate Exposure and Fear of Terrorism After the Paris Terrorist Attacks in 2015
Acts of terror lead both to a rise in an extended sense of fear that goes beyond the physical location of the attacks and to increased expressions of online hate. In this longitudinal study, we analyzed the dynamics between exposure to online hate and fear of terrorism after the Paris attacks of November 13, 2015. We hypothesized that exposure to online hate is connected to a perceived Zeitgeist of fear (i.e., collective fear). In turn, the perceived Zeitgeist of fear is related to higher personal fear of terrorism both immediately after the attacks and a year later. The hypotheses were tested using path modeling and panel data (N = 2325) from Norway, Finland, Spain, France, and the United States, collected a few weeks after the Paris attacks in November 2015 and again a year later, in January 2017. Exposure to online hate had a positive association with the perceived Zeitgeist of fear in all our samples except Norway. The Zeitgeist of fear was in turn correlated with higher personal fear of terrorism immediately after the attacks and one year later. We conclude that online hate content can contribute to the extended sense of fear after terrorist attacks by skewing perceptions of the social climate.
2021 | Kaakinen, M., Oksanen, A., Gadarian, S.K., Solheim, Ø.B., Herreros, F., Winsvold, M.S., Enjolras, B. and Steen‐Johnsen, K.