Cognitive assemblages: The entangled nature of algorithmic content moderation
December 3, 2024
This article examines algorithmic content moderation, using the moderation of violent extremist content as a specific case. In recent years, algorithms have increasingly been mobilized to perform essential moderation functions for online social media platforms such as Facebook, YouTube, and Twitter, including limiting the proliferation of extremist speech. Drawing on Katherine Hayles’ concept of “cognitive ...
Far-right social media communication in the light of technology affordances: a systematic literature review
December 3, 2024
Most analyses of far-right communication on social media focus on one specific platform, while findings are generalized. In this study, I argue that the far right’s use of social media depends on technology affordances – the linkage between platform design and usage – and, thus, might not always be generalizable. After discussing six affordances – ...
Politicization and Right-Wing Normalization on YouTube: A Topic-Based Analysis of the “Alternative Influence Network”
October 30, 2024
Scholarship has highlighted the rise of political influencer networks on YouTube, raising concerns about the platform’s propensity to spread and even incentivize politically extreme content. While many studies have focused on YouTube’s algorithmic infrastructure, limited research exists on the actual content in these networks. Building on Lewis’s (2018) classification of an “alternative influencer” network, we ...
From Bad to Worse: Auto-generating & Autocompleting Hate
October 29, 2024
Do social media and search companies exacerbate antisemitism and hate through their own design and system functions? In this joint study by the ADL Center for Technology and Society (CTS) and Tech Transparency Project (TTP), we investigated search functions on both social media platforms and Google. Our results show how these companies’ own tools–such as ...
From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism
October 29, 2024
Do social media companies exacerbate antisemitism and hate through their own recommendation and amplification tools? We investigated how four of the biggest social media platforms treated users who searched for or engaged with content related to anti-Jewish tropes, conspiracy theories, and other topics. Three of them–Facebook, Instagram, and Twitter (now known as X after a ...
U.S. Users’ Exposure to YouTube Videos On- and Off-platform
September 16, 2024
YouTube is one of the most important platforms on the Internet. However, it is not just a singular destination: because YouTube videos may be embedded into any website, it is a systemically important platform for the entire web. Unfortunately, existing studies do not examine playable YouTube videos embedded around the web, instead focusing solely on ...
Recommending Toxicity: The role of algorithmic recommender functions on YouTube Shorts and TikTok in promoting male supremacist influencers
September 14, 2024
This study tracked, recorded, and coded the content recommended to 10 experimental or ‘sockpuppet’ accounts on 10 blank smartphones, 5 on YouTube Shorts and 5 on TikTok. On each platform, we set up 5 types of accounts: one 16-year-old boy and one 18-year-old boy who sought out content typically associated with gender-normative young ...