Journal Article | Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime
National governments now recognize online hate speech as a pernicious social problem. In the wake of political votes and terror attacks, hate incidents online and offline are known to peak in tandem. This article examines whether an association exists between both forms of hate, independent of ‘trigger’ events. Using Computational Criminology that draws on data science methods, we link police crime, census and Twitter data to establish a temporal and spatial association between online hate speech that targets race and religion, and offline racially and religiously aggravated crimes in London over an eight-month period. The findings renew our understanding of hate crime as a process, rather than as a discrete event, for the digital age.
2020 | Williams, M.L., Burnap, P., Javed, A., Liu, H. and Ozalp, S.
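The modelling behind the temporal association reported by Williams et al. is not reproduced in the abstract above. As a minimal sketch of what such a test can look like, assuming only two hypothetical weekly count series (hate_tweets and hate_crimes in a file named weekly_counts.csv), a lagged rank correlation is one simple starting point:

```python
# Sketch only: a lagged correlation between weekly counts of online hate
# speech and offline hate crime. The CSV file and column names are
# hypothetical; this is not the model used by Williams et al. (2020).
import pandas as pd

# Expected columns: week (date), hate_tweets (int), hate_crimes (int)
df = pd.read_csv("weekly_counts.csv", parse_dates=["week"]).set_index("week")

for lag in range(0, 5):  # compare crimes with tweets from `lag` weeks earlier
    r = df["hate_crimes"].corr(df["hate_tweets"].shift(lag), method="spearman")
    print(f"lag={lag} weeks  Spearman r={r:.2f}")
```

A positive correlation at short lags would be consistent with the abstract's claim that online and offline hate move together over time; the published study links these counts with police, census and spatial data rather than relying on a simple correlation.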
Report | Hate Messages and Violent Extremism in Digital Environments
This report presents research carried out within the project (Ku2016/01373/D – Uppdrag till Totalförsvarets forskningsinstitut (FOI) att göra kartläggningar och analyser av våldsbejakande extremistisk propaganda) that has been assigned to the Swedish defence research agency by the Swedish Government. The project will continue until March 2019. The report briefly describes the channels of communication that prevail on the Internet, as well as the methods used for the analyses. Since computer support makes it possible to analyse large amounts of data and to identify patterns that are difficult for humans to observe, the analyses carried out within the project are mainly computer supported. Using examples, the report provides insight into how proponents of violent extremist ideologies convey their messages online, which we hope can lead to further discussions about propaganda and hate messages. The report also contains some examples of analyses: an analysis of jargon in a web forum, a comparative study of a sample of immigration-critical alternative media, and a machine learning-based study of texts written by violent lone offenders.
2017 | FOI Totalförsvarets forskningsinstitut
Journal Article | Hate Online: A Content Analysis of Extremist Internet Sites
Extremists, such as hate groups espousing racial supremacy or separation, have established an online presence. A content analysis of 157 extremist web sites selected through purposive sampling was conducted using two raters per site. The sample represented a variety of extremist groups and included both organized groups and sites maintained by apparently unaffiliated individuals. Among the findings were that the majority of sites contained external links to other extremist sites (including international sites), that roughly half the sites included multimedia content, and that half contained racist symbols. A third of the sites disavowed racism or hatred, yet one third contained material from supremacist literature. A small percentage of sites specifically urged violence. These and other findings suggest that the Internet may be an especially powerful tool for extremists as a means of reaching an international audience, recruiting members, linking diverse extremist groups, and allowing maximum image control.
2003 | Gerstenfeld, P., Grant, D. and Chiang, C.
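Gerstenfeld et al. coded each site with two raters, but the abstract does not name an agreement statistic. As an illustrative sketch only, Cohen's kappa is one common way to check agreement between two raters on a binary coding decision; the data below are invented:

```python
# Sketch only: agreement between two raters coding a binary site attribute
# (e.g. "contains racist symbols"). The ratings are made up and are not
# drawn from the Gerstenfeld, Grant and Chiang study.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```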
Journal Article | Hate Speech and Covert Discrimination on Social Media: Monitoring the Facebook Pages of Extreme-Right Political Parties in Spain
This study considers the ways that overt hate speech and covert discriminatory practices circulate on Facebook despite its official policy that prohibits hate speech. We argue that hate speech and discriminatory practices are not only explained by users’ motivations and actions, but are also formed by a network of ties between the platform’s policy, its technological affordances, and the communicative acts of its users. Our argument is supported with longitudinal multimodal content and network analyses of data extracted from the official Facebook pages of seven extreme-right political parties in Spain between 2009 and 2013. We found that the Spanish extreme-right political parties primarily engage in covert discrimination, which their followers then take up, using overt hate speech in the comment space.
2016 | Ben-David, A. and Matamoros Fernández, A.
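The abstract above mentions network analyses of ties around the parties' Facebook pages without detailing them. Purely as a hypothetical sketch of the kind of page-commenter graph such an analysis might start from (invented pages and users, not the authors' pipeline):

```python
# Sketch only: a minimal page-commenter network. Edges, page names and user
# ids are invented for illustration.
import networkx as nx

G = nx.Graph()
# (page, commenter) pairs: an edge means the user commented on the page
edges = [("party_page_1", "user_a"), ("party_page_1", "user_b"),
         ("party_page_2", "user_b"), ("party_page_2", "user_c")]
G.add_edges_from(edges)

# Users commenting on several party pages bridge those audiences
shared = [n for n in G if n.startswith("user") and G.degree(n) > 1]
print("commenters active on more than one page:", shared)
```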
Report | Hate Speech and Radicalisation Online: The OCCI Research Report
The research series Hate Speech and Radicalisation on the Internet provides interdisciplinary insights into current developments in extremist activities on the internet. With the aid of expert contributions from all over Germany, the psychological, political, anthropological and technological aspects of online hate speech and radicalisation will be considered, and recommendations will be made for political leaders, social media platforms, NGOs and activists.
2019 | Baldauf, J., Ebner, J. and Guhl, J. (Eds.)
Journal Article | Hate Speech Detection on Twitter: Feature Engineering v.s. Feature Selection
The increasing presence of hate speech on social media has drawn significant investment from governments, companies, and empirical research. Existing methods typically use a supervised text classification approach that depends on carefully engineered features. However, it is unclear whether these features contribute equally to the performance of such methods. We conduct a feature selection analysis for this task using Twitter as a case study, and report findings that challenge the conventional perception of the importance of manual feature engineering: automatic feature selection can reduce the carefully engineered feature set by over 90% and selects predominantly generic features used by many other language-related tasks; nevertheless, the resulting models perform better with automatically selected features than with carefully crafted task-specific ones.
2018 | Robinson, D., Zhang, Z. and Tepper, J.
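Robinson et al.'s exact feature sets and selection procedure are not given in the abstract. As a hedged sketch of the general idea, automatic feature selection over generic n-gram features can be expressed as a scikit-learn pipeline; the corpus below is a placeholder and the k threshold is arbitrary, not the paper's setting:

```python
# Sketch only: automatic feature selection on generic n-gram features for a
# hate speech classifier. Data and thresholds are placeholders; the actual
# feature sets in Robinson et al. (2018) are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["example tweet one", "example tweet two"]   # placeholder corpus
labels = [0, 1]                                       # 0 = non-hate, 1 = hate

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),  # generic word n-grams
    ("select", SelectKBest(chi2, k=1000)),           # keep only the top-k features
    ("svm", LinearSVC()),
])
# clf.fit(texts, labels)  # fit on a real labelled corpus, not the placeholders
```

The point the abstract makes is that a generic, automatically pruned feature space like this can match or beat hand-crafted, task-specific features.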