Journal Article | 2020 | Aggarwal, N.K.
Analyzing predisposing, precipitating, and perpetuating factors of militancy through declassified interrogation summaries: A case study
Abstract: Researchers and policymakers have supported a public health approach to countering violent extremism throughout the War on Terror. However, barriers to obtaining primary data include concerns from minority groups about stigmatization, the ethics of harming research subjects by exposing them to violent content, and restrictions on researchers from institutions and governments. Textual analyses of declassified documents from government agencies may overcome these barriers. This article contributes a method for analyzing the predisposing, precipitating, and perpetuating factors of terrorism through open source texts. This method is applied to FBI interrogation summaries of Al Qaeda terrorist Umar Farouk Abdulmutallab, who attempted an attack aboard an airplane in 2009. This analysis shows that consuming militant content online led him to narrow his social relationships offline to extremists and foster identifications with subjugated Muslims around the world. After deciding to wage militancy, loyalty to Al Qaeda members, swearing allegiance to and obeying group leaders, and interpreting religious texts militantly perpetuated violent activities. Such work can advance empirical work on militant behavior to develop interventions.
Report | 2018 | HOPE not hate
Hosting Hate
Abstract: Extreme online content from far-right organisations, including the website of a banned terrorist group, is accessible via hardware based in the UK, potentially in breach of the law, and in contrast with Theresa May’s call for technology companies to remove terrorist content from their platforms.
Report | 2020 | Goldenberg, A. and Finkelstein, J.
Cyber Swarming, Memetic Warfare and Viral Insurgency: How Domestic Militants Organize on Memes to Incite Violent Insurrection and Terror Against Government and Law Enforcement
Abstract: In this briefing, we document a recently formed apocalyptic militia ideology which, through the use of memes—coded inside jokes conveyed by image or text—advocates extreme violence against law enforcement and government officials. Termed the ‘boogaloo’, this ideology self-organizes across social media communities, boasts tens of thousands of users, exhibits a complex division of labor, evolves well-developed channels to innovate and distribute violent propaganda, deploys a complex communication network on extremist, mainstream and dark Web communities, and articulates a hybrid structure between lone-wolf and cell-like organization. Like a virus which awakens from dormancy, this meme has emerged with startling speed in merely the last 3–4 months.
Journal Article | 2018 | Costello, M., Barrett-Fox, R., Bernatzky, C., Hawdon, J. and Mendes, K.
Predictors of Viewing Online Extremism Among America’s Youth
Abstract: Exposure to hate material is related to a host of negative outcomes. Young people might be especially vulnerable to the deleterious effects of such exposure. With that in mind, this article examines factors associated with the frequency with which youth and young adults, ages 15 to 24, see material online that expresses negative views toward a social group. We use an online survey of individuals recruited from a demographically balanced sample of Americans. Our analysis controls for variables that approximate online routines; social, political, and economic grievances; and sociodemographic traits. Findings show that spending more time online, using particular social media sites, interacting with close friends online, and espousing political views online all correlate with increased exposure to online hate. Harboring political grievances is likewise associated with seeing hate material online frequently. Finally, Whites are more likely than other race/ethnic groups to be exposed to online hate frequently.
Journal Article | 2016 | Costello, M., Hawdon, J., Ratliff, T. and Grantham, T.
Who views online extremism? Individual attributes leading to exposure
Abstract: Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N = 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism. Descriptive results indicate that a sizable majority of respondents were exposed to negative materials online. The materials were most commonly used to stereotype groups. Nearly half of negative material centered on race or ethnicity, and respondents were likely to encounter such material on social media sites. Regression results demonstrate that African-Americans and foreign-born respondents were significantly less likely to be exposed to negative material online, as were younger respondents. Additionally, individuals expressing greater levels of trust in the federal government reported significantly less exposure to such materials. Higher levels of education result in increased exposure to negative materials, as does a proclivity towards risk-taking.
Journal Article | 2016 | Ferrara, E., Wang, W.Q., Varol, O., Flammini, A. and Galstyan, A.
Predicting Online Extremism, Content Adopters, and Interaction Reciprocity
Abstract: We present a machine learning framework that leverages a mixture of metadata, network, and temporal features to detect extremist users, and predict content adopters and interaction reciprocity in social media. We exploit a unique dataset containing millions of tweets generated by more than 25 thousand users who have been manually identified, reported, and suspended by Twitter due to their involvement with extremist campaigns. We also leverage millions of tweets generated by a random sample of 25 thousand regular users who were exposed to, or consumed, extremist content. We carry out three forecasting tasks: (i) to detect extremist users, (ii) to estimate whether regular users will adopt extremist content, and (iii) to predict whether users will reciprocate contacts initiated by extremists. All forecasting tasks are set up in two scenarios: a post hoc (time-independent) prediction task on aggregated data, and a simulated real-time prediction task. The performance of our framework is extremely promising, yielding in the different forecasting scenarios up to 93% AUC for extremist user detection, up to 80% AUC for content adoption prediction, and up to 72% AUC for interaction reciprocity forecasting. We conclude by providing a thorough feature analysis that helps determine which are the emerging signals that provide predictive power in different scenarios.
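The forecasting setup described in the Ferrara et al. abstract above — supervised classification over metadata, network, and temporal features, scored by AUC — can be sketched in miniature. Everything below is illustrative: the feature names, the synthetic class distributions, and the pure-Python logistic regression are assumptions standing in for the paper's actual pipeline, not a reproduction of it.

```python
import math
import random

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive example outscores a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def train_logreg(X, y, lr=0.1, epochs=100):
    """Plain logistic regression fit by per-sample gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))          # guard math.exp overflow
            g = 1.0 / (1.0 + math.exp(-z)) - yi   # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Synthetic stand-ins for the paper's three feature families; the
# distributions are hypothetical, chosen only so the classes partly overlap.
random.seed(0)
X, y = [], []
for _ in range(400):
    extremist = random.random() < 0.5
    tweet_rate   = random.gauss(3.0 if extremist else 1.0, 1.0)  # metadata
    reply_degree = random.gauss(2.0 if extremist else 1.0, 1.0)  # network
    burstiness   = random.gauss(2.0 if extremist else 0.0, 1.0)  # temporal
    X.append([tweet_rate, reply_degree, burstiness])
    y.append(int(extremist))

w, b = train_logreg(X, y)
scores = [sum(wj * xj for wj, xj in zip(w, xi)) + b for xi in X]
print(f"in-sample AUC: {roc_auc(scores, y):.2f}")
```

In the paper's post hoc scenario, the analogous step would compute this score on held-out users for each of the three tasks (detection, adoption, reciprocity), which is what the reported 72–93% AUC figures refer to.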