Policy
Predicting harm among incels (involuntary celibates): the roles of mental health, ideological belief and social networking
Incels are a sub-culture community of men who forge a sense of identity around their perceived inability to form sexual or romantic relationships. In recent years, there has been a small but growing number of violent attacks attributed to individuals who identify as incels. The purpose of this study was to use a large sample of incels from the UK and US to establish (a) their demographic make-up; (b) the consistency of their attitudes and beliefs; (c) their adherence to a common world view; (d) how they network with other incels; (e) whether there are cross-cultural differences between incels from the UK and US in the above; and (f) whether there is a predictive relationship between incels' mental health, networking and ideology and the extent of their harmful attitudes and beliefs.
2024 | Whittaker, J., Thomas, A. and Costello, W.
Journal Article
Predicting Online Extremism, Content Adopters, and Interaction Reciprocity
We present a machine learning framework that leverages a mixture of metadata, network, and temporal features to detect extremist users and to predict content adopters and interaction reciprocity on social media. We exploit a unique dataset containing millions of tweets generated by more than 25 thousand users who were manually identified, reported, and suspended by Twitter due to their involvement with extremist campaigns. We also leverage millions of tweets generated by a random sample of 25 thousand regular users who were exposed to, or consumed, extremist content. We carry out three forecasting tasks: (i) detecting extremist users; (ii) estimating whether regular users will adopt extremist content; and (iii) predicting whether users will reciprocate contacts initiated by extremists. All forecasting tasks are set up in two scenarios: a post hoc (time-independent) prediction task on aggregated data, and a simulated real-time prediction task. The performance of our framework is extremely promising, yielding up to 93% AUC for extremist user detection, up to 80% AUC for content adoption prediction, and up to 72% AUC for interaction reciprocity forecasting. We conclude with a thorough feature analysis that helps determine which emerging signals provide predictive power in the different scenarios.
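The evaluation metric behind these headline figures is ROC AUC. As a minimal, stdlib-only illustration (this is not the authors' code; the labels and scores below are invented), AUC can be computed directly from classifier scores via the rank-sum (Mann-Whitney U) formulation:

```python
# Hedged sketch, not the paper's pipeline: ROC AUC as the probability
# that a randomly chosen positive example outscores a random negative one.

def roc_auc(labels, scores):
    """AUC via the rank-sum formulation; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: three "extremist" (1) and three "regular" (0) users.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.3, 0.8, 0.6, 0.5]
print(f"{roc_auc(labels, scores):.3f}")  # prints 0.667
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why the paper's 93% figure for extremist-user detection is notable.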
2016 | Ferrara, E., Wang, W.Q., Varol, O., Flammini, A. and Galstyan, A.
Journal Article
Predicting Violent Extremism with Machine Learning: A Scoping Review
The purpose of this scoping review is to highlight the machine learning tools used in research to address and prevent violent extremism. To achieve this goal, the following objectives guide this study: (1) describe the outcomes that have been studied; (2) summarize the data sources used; and (3) determine whether the reporting of machine learning predictive models aligns with established guidelines for reporting prediction models. ProQuest, Compendex, IEEE, JStor and PubMed were searched from June to July 2022. Based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, databases were searched for articles on machine learning models applied to addressing and preventing violent extremism. Following standards established by reporting guidelines, findings were extracted from published articles, including general study characteristics, aspects of model development, and reporting of results. Of 53 unique articles identified by the search, 18 were included in the review. Most articles were published between 2016 and 2022 (n = 16, 88.8%). Studies focused on violent extremism worldwide, with the majority not specifically focused on a distinct region (n = 11, 61.1%). The most frequently used machine learning algorithms were support vector machines (n = 9, 50%), followed by random forests (n = 5, 27.7%), natural language processing (n = 4, 22.2%), and deep learning (n = 4, 22.2%). The number of features used varied greatly, ranging from 17 to 7556. Many studies did not report an epistemological or theoretical framework guiding their machine learning approaches or interpretation of findings (n = 8, 44.4%). Many studies did not follow TRIPOD or any other recommended guideline for reporting predictive models. Future research in this field should prioritize evaluating the impact of prediction models on decisions for addressing and preventing violent extremism.
2023 | Richardson, M.A.
Journal Article
Predictors of Viewing Online Extremism Among America’s Youth
Exposure to hate material is related to a host of negative outcomes. Young people might be especially vulnerable to the deleterious effects of such exposure. With that in mind, this article examines factors associated with the frequency with which youth and young adults, ages 15 to 24, see material online that expresses negative views toward a social group. For this project, we use an online survey of individuals recruited from a demographically balanced sample of Americans. Our analysis controls for variables that approximate online routines; social, political, and economic grievances; and sociodemographic traits. Findings show that spending more time online, using particular social media sites, interacting with close friends online, and espousing political views online all correlate with increased exposure to online hate. Harboring political grievances is likewise associated with frequently seeing hate material online. Finally, Whites are more likely than other race/ethnic groups to be exposed to online hate frequently.
2018 | Costello, M., Barrett-Fox, R., Bernatzky, C., Hawdon, J. and Mendes, K.
Journal Article
Preliminary Analytical Considerations in Designing a Terrorism and Extremism Online Network Extractor
It is now widely understood that extremists use the Internet in attempts to accomplish many of their objectives. In this chapter we present a web-crawler called the Terrorism and Extremism Network Extractor (TENE), designed to gather information about extremist activities on the Internet. In particular, this chapter focuses on how TENE may help differentiate terrorist websites from anti-terrorist websites by analyzing the context around the use of predetermined keywords found within the text of the webpage. We illustrate our strategy through a content analysis of four types of websites. One is a popular white supremacist website, another is a jihadist website, the third is a terrorism-related news website, and the last is an official counterterrorist website. To explore differences between these websites, the presence of, and context around, 33 keywords was examined across the websites. It was found that certain words appear more often on one type of website than on others, and this may potentially serve as a good method for differentiating between terrorist websites and ones that simply refer to terrorist activities. For example, words such as “terrorist,” “security,” “mission,” “intelligence,” and “report” all appeared with much greater frequency on the counterterrorist website than on the white supremacist or jihadist websites. In addition, the white supremacist and jihadist websites used words such as “destroy,” “kill,” and “attack” in a specific context: not to describe their activities or their members, but to portray themselves as victims. Future developments of TENE are discussed.
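The keyword-comparison strategy described in this abstract can be sketched in a few lines. This is an illustrative toy, not TENE itself: the keyword list echoes examples quoted above, and the sample pages are invented for the example.

```python
# Illustrative sketch of per-document keyword profiling, the core idea
# behind differentiating site types by how often predetermined keywords
# appear. Keywords and sample texts are made up for illustration.
import re
from collections import Counter

KEYWORDS = {"terrorist", "security", "mission", "intelligence", "report"}

def keyword_profile(text: str) -> Counter:
    """Count occurrences of the predetermined keywords in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t in KEYWORDS)

counterterror_page = ("Our mission: an intelligence report on "
                      "terrorist threats to national security.")
news_page = "A report on the attack; officials gave no security details."

print(keyword_profile(counterterror_page))
print(keyword_profile(news_page))
```

Comparing such profiles across known site types is the simplest version of the differentiation step; the chapter's actual approach also analyzes the context in which each keyword appears.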
2014 | Bouchard, M., Joffres, K. and Frank, R.
VOX-Pol Blog
Pressuring Platforms to Censor Content is Wrong Approach to Combatting Terrorism
2016 | Craig, S. and Llansó, E.