From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism

Do social media companies exacerbate antisemitism and hate through their own recommendation and amplification tools? We investigated how four of the biggest social media platforms treated users who searched for or engaged with content related to anti-Jewish tropes, conspiracy theories, and other topics. Three of them, Facebook, Instagram, and Twitter (now known as X after a rebranding by owner Elon Musk), filled those users' feeds with far more antisemitic content and conspiratorial accounts and recommended the most hateful influencers and pages to follow. One platform, YouTube, did not.

In this joint study by the ADL Center for Technology and Society (CTS) and the Tech Transparency Project (TTP), researchers created six test personas of different ages and genders and set up accounts for them on Facebook, Instagram, Twitter, and YouTube. Researchers had each persona search every platform for a set of terms related to conspiracy theories, as well as for popular internet personalities, commentators, and video games. The study then examined the content that the platforms' algorithms recommended to these hypothetical users.

Tags: algorithmic recommendation systems, antisemitism, Facebook, Instagram, Twitter (X), YouTube