Report |
Understanding online hate: VSP Regulation and the broader context
This report aims to contribute to our understanding of online hate in the context of the requirements of the revised Audiovisual Media Services Directive (AVMSD) for Video Sharing Platforms (VSPs) to protect the general public from incitement to hatred or violence. However, online hate is complex and can only be fully understood by considering issues beyond the very specific focus of these regulations. Hence, we draw on recent social and computational research to consider a range of points outside VSP regulations, such as the impact, nature and dynamics of online hate. For similar reasons, we have considered expressions of hate across a range of online spaces, including VSPs as well as other online platforms. In particular, we have closely examined how online hate is currently addressed by industry, identifying key and emerging issues in content moderation practices. Our analyses will be relevant to a range of experts and stakeholders working to address online hate, including researchers, platforms, regulators and civil society organisations.
|
2021 |
Vidgen, B., Burden, E. and Margetts, H. |
|
Report |
The Domestic Extremist Next Door: How Digital Platforms Enable the War Against American Government
Digital platforms enabled the disturbing rise of domestic extremism, culminating with the January 6 attack on the U.S. Capitol. Militia groups use social media networks to plan operations, recruit new members, and spread anti-democracy propaganda, a new Digital Citizens Alliance (Digital Citizens) and Coalition for a Safer Web (CSW) investigation has found.
|
2021 |
Digital Citizens Alliance |
|
Journal Article |
GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right
Purpose – This chapter demonstrates the power that Google, Apple, Facebook, Amazon and Microsoft (the “GAFAM”) exercise over platforms within society, highlights the alt-right’s use of GAFAM sites and services as a platform for hate, and examines GAFAM’s establishment and use of hate content moderation apparatuses to de-platform alt-right users and delete hate content.
Approach – Drawing upon a political economy of communications approach, this chapter demonstrates GAFAM’s power in society. It also undertakes a reading of GAFAM “terms of service agreements” and “community guidelines” documents to identify GAFAM hate content moderation apparatuses.
Findings – GAFAM are among the most powerful platforms in the world, and their content moderation apparatuses are empowered by the US government’s cyber-libertarian approach to Internet law and regulation. GAFAM are defining hate speech, deciding what is to be done about it, and censoring it.
Value – This chapter probes GAFAM’s hate content moderation apparatuses for Internet platforms, and shows how GAFAM enable and constrain the alt-right’s hate speech on their platforms. It also reflexively assesses the politics of empowering GAFAM to de-platform the alt-right.
|
2021 |
Mirrlees, T. |
|
Report |
Mapping right-wing extremism in Victoria: Applying a gender lens to develop prevention and deradicalisation approaches
This project aims to map right-wing extremism in Victoria through the lens of gender. It begins from the premise that there is an underexplored connection between anti-feminist sentiment and far-right extremist sentiment, and explores this by focusing on select Victorian-based online groups that have an anti-feminist and far-right profile. The project also works with stakeholders in the areas of gender and family violence to gain insight into their practices and experiences.
|
2020 |
Agius, C., Cook, K., Nicholas, L., Ahmed, A., bin Jehangir, H., Safa, N., Hardwick, T. and Clark, S. |
|
Report |
Layers of Lies: A First Look at Irish Far-Right Activity on Telegram
This report aims to provide a first look at Irish far-right activity on the messaging app Telegram, where the movement operates both through identifiable groups and influencers and through anonymously run channels and groups.
The report examines activity across 34 such Telegram channels through a series of case studies in which content posted on these channels resulted in real-life consequences. Using both quantitative and qualitative methods, the report examines the tactics, language and trends within these channels, providing much-needed detail on the activity of the Irish far-right online.
|
2021 |
Gallagher, A. and O’Connor, C. |
|
Journal Article |
Variations on a Theme? Comparing 4chan, 8kun, and Other chans’ Far-Right “/pol” Boards
Online forums such as 4chan and 8chan have grown in notoriety following a number of high-profile attacks conducted in 2019 by right-wing extremists who used their “/pol” boards (dedicated to “politically incorrect” discussions). Despite growing academic interest in these online spaces, little is known about them; in particular, their similarities and differences remain to be teased out, and their respective roles in fostering a certain far-right subculture need to be specified. This article therefore directly compares the content and discussion pace of six different /pol boards of “chan” forums, including some that exist solely on the dark web. We find that while these boards together constitute a particular subculture, differences in both rate of traffic and content demonstrate the fragmentation of this subculture. Specifically, we show that the different /pol boards can be grouped into a three-tiered architecture based on both how popular they are and how extreme their content is.
|
2021 |
Baele, S.J., Brace, L. and Coan, T.G. |
|