Journal Article
Us and them: identifying cyber hate on Twitter across multiple protected characteristics
Hateful and antagonistic content published and propagated via the World Wide Web has the potential to cause harm and suffering on an individual basis, and lead to social tension and disorder beyond cyber space. Despite new legislation aimed at prosecuting those who misuse new forms of communication to post threatening, harassing, or grossly offensive language – or cyber hate – and the fact large social media companies have committed to protecting their users from harm, it goes largely unpunished due to difficulties in policing online public spaces. To support the automatic detection of cyber hate online, specifically on Twitter, we build multiple individual models to classify cyber hate for a range of protected characteristics including race, disability and sexual orientation. We use text parsing to extract typed dependencies, which represent syntactic and grammatical relationships between words, and are shown to capture ‘othering’ language – consistently improving machine classification for different types of cyber hate beyond the use of a Bag of Words and known hateful terms. Furthermore, we build a data-driven blended model of cyber hate to improve classification where more than one protected characteristic may be attacked (e.g. race and sexual orientation), contributing to the nascent study of intersectionality in hate crime.
2016
Burnap, P. and Williams, M.L.
Journal Article
Upvoting Extremism: Collective Identity Formation and the Extreme Right on Reddit
Since the advent of the Internet, right-wing extremists and those who subscribe to extreme right views have exploited online platforms to build a collective identity among the like-minded. Research in this area has largely focused on extremists’ use of websites, forums, and mainstream social media sites, but overlooked in this research has been an exploration of the popular social news aggregation site Reddit. The current study explores the role of Reddit’s unique voting algorithm in facilitating “othering” discourse and, by extension, collective identity formation among members of a notoriously hateful subreddit community, r/The_Donald. The results of the thematic analysis indicate that those who post extreme-right content on r/The_Donald use Reddit’s voting algorithm as a tool to mobilize like-minded members by promoting extreme discourses against two prominent out-groups: Muslims and the Left. Overall, r/The_Donald’s “sense of community” facilitates identity work among its members by creating an environment wherein extreme right views are continuously validated.
2020
Gaudette, T., Scrivens, R., Davies, G. and Frank, R.
Report
Unraveling The Impact Of Social Media On Extremism: Implications for Technology Regulation and Terrorism Prevention
Social media has been remarkably effective in bringing together groups of individuals at a scale and speed unthinkable just a few years ago. While there is a positive aspect of digital activism in raising awareness and mobilizing for equitable societal outcomes, it is equally true that social media has a dark side in enabling political polarization and radicalization. This paper highlights that algorithmic bias and algorithmic manipulation accentuate these developments. We review some of the key technological aspects of social media and its impact on society, while also outlining remedies and implications for regulation. For the purpose of this paper we will define a digital platform as a technology intermediary that enables interaction between groups of users (such as Amazon or Google) and a social media platform as a digital platform for social media.
2019
Susarla, A.
Policy
UNODC Digest Of Terrorist Cases
The judicial cases featured in this Digest cover relevant aspects of the international legal regime against terrorism. The Digest provides a comparative analysis of national statutory frameworks for terrorism prosecutions, identifies legal issues and pitfalls encountered in investigating and adjudicating relevant offences, and highlights practices related to specialized investigative and prosecutorial techniques. It also addresses the links between terrorism and other forms of crime (such as organized crime and the trafficking of drugs, people and arms), as well as how to disrupt terrorist financing.
2010
United Nations Office on Drugs and Crime (UNODC)
Report
Unleashing the Potential of Short-Form Video: Strategic Communications for Countering Extremism in the Digital Age
The report begins by outlining some of the broad knowledge around the idea of mass persuasion, before focusing specifically on lessons that have been learned in the field of P/CVE. This is followed by a synthesis of existing “How To” guides for the creation of strategic communications from a range of policy and practitioner stakeholders. Then, we discuss specific knowledge of audiovisual content, particularly considerations for short-form video content. The report concludes by outlining how stakeholders, including social media platforms, can monitor, measure, and evaluate the impact of this type of content.
2024
Whittaker, J., Atamuradova, F., Yilmaz, K., Copeland, S., El Sayed, L. and Deedman, J.