Content moderation: Social media and countering online radicalisation

As terrorist, extremist, and hateful content has proliferated on social media, platforms have responded with content moderation: the flagging, review, and enforcement of rules and standards governing user-generated content online. This chapter introduces contemporary content moderation practices, technologies, and contexts and outlines key debates in the field. It begins with the changing regulatory context around hate speech and terrorist content online in the EU, where measures such as Germany’s NetzDG law and the EU’s 2021 regulation on terrorist content online are setting new standards and statutory requirements for platforms and websites. While content moderation remains a form of self-regulation carried out by companies themselves, this regulatory context is shifting towards multistakeholder governance. The chapter then maps key stakeholders and technologies in content moderation as they pertain to radicalisation, examining trusted flaggers, Internet Referral Units, and automated content moderation. After this overview of the regulatory context, stakeholders, and technologies, the chapter turns to normative questions about deplatforming, the ways in which limited transparency hinders research and accountability in content moderation, and inconsistencies in the enforcement of content moderation rules, particularly with respect to the far right. By outlining the regulatory context, mapping multistakeholder arrangements, and exploring key debates, the chapter offers readers an introduction to the growing literature on content moderation and countering radicalisation.

Tags: Deplatforming, Internet Referral Unit (IRU), NetzDG, Radicalisation, Regulation