Social media corporations as actors of counter-terrorism
September 18, 2023
This article discusses the role of the giant social media corporations Facebook, Google (YouTube), and Twitter in counter-terrorism and countering violent extremism (CT/CVE). Based on a qualitative investigation drawing on corporate communications as well as a collection of interviews with European stakeholders, it argues that these firms have become actors in this policy area of what is ...
GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right
September 18, 2023
Purpose – This chapter demonstrates the power that Google, Apple, Facebook, Amazon and Microsoft (or the “GAFAM”) exercise over platforms within society, highlights the alt-right’s use of GAFAM sites and services as a platform for hate, and examines GAFAM’s establishment and use of hate content moderation apparatuses to de-platform alt-right users and delete hate content. ...
Terrorist Communications: Are Facebook, Twitter, and Google Responsible for the Islamic State’s Actions?
September 18, 2023
In May 2016, four of the world’s largest Internet companies pledged to monitor, combat, and prevent terrorists from using their social media platforms to conduct operations. One month later, Twitter, Facebook, and Google were sued over deaths caused by the Islamic State in 2015 and their alleged allowance and facilitation of terrorist communication. A growing ...
All You Need Is “Love”: Evading Hate Speech Detection
September 18, 2023
With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained ...
The platform governance triangle: conceptualising the informal regulation of online content
September 18, 2023
From the new Facebook ‘Oversight Body’ for content moderation to the ‘Christchurch Call to eliminate terrorism and violent extremism online,’ a growing number of voluntary and non-binding informal governance initiatives have recently been proposed as attractive ways to rein in Facebook, Google, and other platform companies hosting user-generated content. Drawing on the literature on transnational ...
Three Constitutional Thickets: Why Regulating Online Violent Extremism is Hard
September 18, 2023
In this paper, I review U.S. constitutional considerations for lawmakers seeking to balance terrorist threats against free expression online. The point is not to advocate for any particular rule. In particular, I do not seek to answer moral or norms-based questions about what content Internet platforms should take down. I do, however, note the serious ...
Beyond The “Big Three”: Alternative Platforms For Online Hate Speech
September 18, 2023
In recent years, most international studies on hate speech online have focused on the three platforms traditionally considered the most influential: Facebook, YouTube and Twitter. However, their predominance as the biggest international social networks is no longer uncontested. Other networks are on the rise, and young users in particular are losing interest in the ‘old’ platforms. In ...