Journal Article
Hate, Obscenity, and Insults: Measuring the Exposure of Children to Inappropriate Comments in YouTube
Social media has become an essential part of the daily routines of children and adolescents. Accordingly, enormous efforts have been made to ensure the psychological and emotional well-being of young users as well as their safety when interacting with various social media platforms. In this paper, we investigate the exposure of those users to inappropriate comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records, and studied the presence of five age-inappropriate categories and the amount of exposure to each category. Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments. Our results show a large percentage of worrisome comments with inappropriate content: we found 11% of the comments on children’s videos to be toxic, highlighting the importance of monitoring comments, particularly on children’s platforms.
2021 | Alshamrani, S., Abusnaina, A., Abuhamad, M., Nyang, D. and Mohaisen, D.
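
The Alshamrani et al. entry above mentions ensemble classifiers built with natural language processing and machine learning techniques. As a rough, illustrative sketch only (not the authors' implementation, and using toy placeholder comments and labels rather than their four-million-record dataset), a hard-voting ensemble over TF-IDF n-gram features might look like this in scikit-learn:

# Illustrative only: a hard-voting ensemble of three linear text classifiers
# over word n-gram TF-IDF features. Comments and labels are toy placeholders.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = ["you are awesome", "I hate you, idiot", "nice video", "shut up loser"]
labels = [0, 1, 0, 1]  # 1 = inappropriate (toy labels for illustration)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
            ("svm", LinearSVC()),
        ],
        voting="hard",  # majority vote over the three classifiers' predictions
    ),
)
model.fit(comments, labels)
print(model.predict(["what an idiot", "great content"]))

In practice each base classifier would be tuned and evaluated on held-out data before being combined; the voting step simply aggregates their per-comment predictions.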

Journal Article
Hate Speech or ‘Reasonable Racism’? The Other in Stormfront
We use the construct of the “other” to explore how hate operates rhetorically within the virtual conclave of Stormfront, credited as the first hate Web site. Through the Internet, white supremacists create a rhetorical vision that resonates with those who feel marginalized by contemporary political, social, and economic forces. However, as compared to previous studies of on-line white supremacist rhetoric, we show that Stormfront discourse appears less virulent and more palatable to the naive reader. We suggest that Stormfront provides a “cyber transition” between traditional hate speech and “reasonable racism,” a tempered discourse that emphasizes pseudo-rational discussions of race, and subsequently may cast a wider net in attracting audiences.
2009 | Meddaugh, P.M. and Kay, J.

Journal Article
Hate Speech Detection on Twitter: Feature Engineering v.s. Feature Selection
The increasing presence of hate speech on social media has drawn significant investment from governments and companies, as well as a growing body of empirical research. Existing methods typically use a supervised text classification approach that depends on carefully engineered features. However, it is unclear whether these features contribute equally to the performance of such methods. We conduct a feature selection analysis for this task using Twitter as a case study, and report findings that challenge the conventional perception of the importance of manual feature engineering: automatic feature selection can drastically reduce the carefully engineered features by over 90% and selects predominantly generic features often used in many other language-related tasks; nevertheless, the resulting models perform better with automatically selected features than with carefully crafted task-specific features.
2018 | Robinson, D., Zhang, Z. and Tepper, J.
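
The Robinson, Zhang and Tepper entry above reports that automatic feature selection can discard over 90% of carefully engineered features while improving performance. As a hedged, illustrative stand-in (not the paper's actual pipeline or feature set), chi-squared selection of the top 10% of TF-IDF n-gram features before a linear SVM can be sketched as follows; the tweets and labels are toy placeholders:

# Illustrative only: keep the top 10% of TF-IDF n-gram features by chi-squared
# association with the label, then train a linear SVM on the reduced feature set.
# The tweets and labels below are toy placeholders, not the paper's Twitter data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "go back to your country",
    "lovely weather today",
    "I despise all of them",
    "great match last night",
]
labels = [1, 0, 1, 0]  # 1 = hateful (toy labels for illustration)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),    # a deliberately large n-gram feature space
    SelectPercentile(chi2, percentile=10),  # retain only the top-scoring 10% of features
    LinearSVC(),
)
model.fit(tweets, labels)
print(model.predict(["what a lovely day", "go back to where you came from"]))

Varying the percentile makes the paper's headline comparison easy to reproduce in spirit: it shows how few features a task may actually need.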

Report
Hate Speech and Radicalisation Online: The OCCI Research Report
The research series Hate Speech and Radicalisation on the Internet provides interdisciplinary insights into current developments in extremist activity on the internet. Drawing on expert contributions from across Germany, it considers the psychological, political, anthropological, and technological aspects of online hate speech and radicalisation, and makes recommendations for political leaders, social media platforms, NGOs, and activists.
2019 | Baldauf, J., Ebner, J. and Guhl, J. (Eds.)

Journal Article
Hate Speech and Covert Discrimination on Social Media: Monitoring the Facebook Pages of Extreme-Right Political Parties in Spain
This study considers the ways that overt hate speech and covert discriminatory practices circulate on Facebook despite its official policy prohibiting hate speech. We argue that hate speech and discriminatory practices are not only explained by users’ motivations and actions, but are also shaped by a network of ties between the platform’s policy, its technological affordances, and the communicative acts of its users. Our argument is supported by longitudinal multimodal content and network analyses of data extracted from the official Facebook pages of seven extreme-right political parties in Spain between 2009 and 2013. We found that the Spanish extreme-right parties rely primarily on covert discrimination, which is then taken up by their followers, who use overt hate speech in the comment space.
2016 | Ben-David, A. and Matamoros-Fernández, A.

Journal Article
Hate Online: A Content Analysis of Extremist Internet Sites
Extremists, such as hate groups espousing racial supremacy or separation, have established an online presence. A content analysis of 157 extremist web sites selected through purposive sampling was conducted using two raters per site. The sample represented a variety of extremist groups and included both organized groups and sites maintained by apparently unaffiliated individuals. Among the findings were that the majority of sites contained external links to other extremist sites (including international sites), that roughly half the sites included multimedia content, and that half contained racist symbols. A third of the sites disavowed racism or hatred, yet one third contained material from supremacist literature. A small percentage of sites specifically urged violence. These and other findings suggest that the Internet may be an especially powerful tool for extremists as a means of reaching an international audience, recruiting members, linking diverse extremist groups, and allowing maximum image control.
2003 | Gerstenfeld, P., Grant, D. and Chiang, C.