Report | Cyber Swarming, Memetic Warfare and Viral Insurgency: How Domestic Militants Organize on Memes to Incite Violent Insurrection and Terror Against Government and Law Enforcement
In this briefing, we document a recently formed apocalyptic militia ideology which, through the use of memes—coded inside jokes conveyed by image or text—advocates extreme violence against law enforcement and government officials. Termed the ‘boogaloo’, this ideology self-organizes across social media communities, boasts tens of thousands of users, exhibits a complex division of labor, evolves well-developed channels to innovate and distribute violent propaganda, deploys a complex communication network on extremist, mainstream and dark Web communities, and articulates a hybrid structure between lone-wolf and cell-like organization. Like a virus which awakens from dormancy, this meme has emerged with startling speed in merely the last 3–4 months.
2020 | Goldenberg, A. and Finkelstein, J.
Journal Article | Predictors of Viewing Online Extremism Among America’s Youth
Exposure to hate material is related to a host of negative outcomes. Young people might be especially vulnerable to the deleterious effects of such exposure. With that in mind, this article examines factors associated with the frequency with which youth and young adults, ages 15 to 24, see material online that expresses negative views toward a social group. For this project, we use an online survey of individuals recruited from a demographically balanced sample of Americans. Our analysis controls for variables that approximate online routines; social, political, and economic grievances; and sociodemographic traits. Findings show that spending more time online, using particular social media sites, interacting with close friends online, and espousing political views online all correlate with increased exposure to online hate. Harboring political grievances is likewise associated with frequently seeing hate material online. Finally, Whites are more likely than other race/ethnic groups to be exposed to online hate frequently.
2018 | Costello, M., Barrett-Fox, R., Bernatzky, C., Hawdon, J. and Mendes, K.
Journal Article | Who views online extremism? Individual attributes leading to exposure
Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N = 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying the demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism. Descriptive results indicate that a sizable majority of respondents were exposed to negative materials online. The materials were most commonly used to stereotype groups. Nearly half of the negative material centered on race or ethnicity, and respondents were likely to encounter such material on social media sites. Regression results demonstrate that African-Americans and foreign-born respondents were significantly less likely to be exposed to negative material online, as were younger respondents. Additionally, individuals expressing greater levels of trust in the federal government reported significantly less exposure to such materials. Higher levels of education resulted in increased exposure to negative materials, as did a proclivity towards risk-taking.
2016 | Costello, M., Hawdon, J., Ratliff, T. and Grantham, T.
Journal Article | Predicting Online Extremism, Content Adopters, and Interaction Reciprocity
We present a machine learning framework that leverages a mixture of metadata, network, and temporal features to detect extremist users and to predict content adopters and interaction reciprocity in social media. We exploit a unique dataset containing millions of tweets generated by more than 25 thousand users who have been manually identified, reported, and suspended by Twitter due to their involvement with extremist campaigns. We also leverage millions of tweets generated by a random sample of 25 thousand regular users who were exposed to, or consumed, extremist content. We carry out three forecasting tasks: (i) detecting extremist users, (ii) estimating whether regular users will adopt extremist content, and (iii) predicting whether users will reciprocate contacts initiated by extremists. All forecasting tasks are set up in two scenarios: a post hoc (time-independent) prediction task on aggregated data, and a simulated real-time prediction task. The performance of our framework is extremely promising, yielding up to 93% AUC for extremist user detection, up to 80% AUC for content adoption prediction, and up to 72% AUC for interaction reciprocity forecasting across the different scenarios. We conclude with a thorough feature analysis that helps determine which emerging signals provide predictive power in the different scenarios.
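As a rough illustration of the kind of pipeline this abstract describes, the minimal sketch below trains a binary classifier on placeholder metadata, network, and temporal features and scores it with ROC AUC. It assumes scikit-learn and NumPy are available; the synthetic data, feature choices, and gradient-boosting model are illustrative stand-ins, not the authors' actual dataset or method.

```python
# Minimal sketch (not the authors' pipeline): classify accounts as extremist
# vs. regular from metadata, network, and temporal features, and report AUC.
# All data below is synthetic and the feature names are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 2000

# Placeholder feature matrix: one row per account.
X = np.column_stack([
    rng.poisson(300, n_users),   # metadata: e.g. follower count
    rng.poisson(50, n_users),    # metadata: e.g. tweets per week
    rng.random(n_users),         # network: e.g. retweet reciprocity rate
    rng.random(n_users),         # temporal: e.g. burstiness of posting times
])
# Placeholder labels: 1 = account suspended for extremist activity, 0 = regular.
y = rng.integers(0, 2, n_users)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, scores):.2f}")
```

The same scaffold would extend to the paper's other two tasks (content adoption and interaction reciprocity) by swapping in the corresponding labels.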
2016 | Ferrara, E., Wang, W.Q., Varol, O., Flammini, A. and Galstyan, A.
Journal Article | Children: extremism and online radicalization
There can be few greater fears for a parent than their child being contacted by a stranger, indoctrinated with an extreme ideology, and encouraged to join a violent movement, all while accessing the internet from their bedroom. Children’s smartphones and computers may be portals to the most dangerous places on earth. The use of the internet, and more specifically social media, by violent extremists is certainly nothing new. The technical skills and proficiency displayed by groups such as ISIS have been causing concern for governments, law enforcement, industry, schools, religious leaders, and parents around the world. In reality, the radicalization of children is rare and particularly nuanced, and far from a linear process that exclusively occurs online. While the media have reported high-profile cases, such as the teenagers from Chicago who tried to leave the United States and join ISIS or the three girls from the United Kingdom who traveled through Turkey to Syria, the frequency and nature of conversion to violent extremism are far more complex than often reported. Evidence from search histories, online interactions, and social media profiles suggests that contact is being made by those intent on radicalizing others. However, while information can be sought and contact can be initiated, complete conversion to a violent ideology is not happening in isolation online. That is to say, the internet is serving as a facilitator, not a direct means of recruitment. In any case, action needs to be taken to safeguard children and young people from these risks.
2016 | Morris, E.
Report | Bots, Fake News and The Anti-Muslim Message on Social Media
• In this report, we show how recent terror attacks in the UK have been successfully exploited by anti-Muslim activists over social media to increase their reach and grow their audiences.
• Monitoring key anti-Muslim social media accounts and their networks, we show how even small events are amplified through an international network of activists.
• We also provide concrete evidence of a leading anti-Muslim activist whose message is hugely amplified by the use of a 100+ strong ‘bot army’.
• The global reach, low price and lack of regulation on social media platforms present new possibilities for independent, single-issue and extremist viewpoints to gain significant audiences.
• We delve into the murky and secretive world of the dark web to explore what tools are available for manipulating social media, and show how easy it is, even for non-tech-savvy users, to make use of these tactics.
• Through testing, we conclude that even cheaply inflating one’s number of followers has an effect on the ability to reach a larger audience.
• We situate these developments in the context of increasing hostility towards Muslims and immigration in Europe and the US.
• “Trigger events” such as terror attacks, and other events that reflect badly on Muslims and Islam, cause an increase in anti-Muslim hate both on the street and, as we will show, online.
2018 | HOPE not hate