Blog
Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls
March 13, 2024. By Stuart Macdonald, Swansea University; Ashley A. Mattheis, Dublin City University; and David Wells, Swansea University. Every minute, millions of social media posts, photos and videos flood the internet. On average, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 ...
Content Moderation, Transparency (Reporting) and Human Rights
July 28, 2021. Our Cyber Threats Research Centre colleagues couldn't host an in-person TASM Conference this year, but instead organised a week of virtual events from 21 to 25 June 2021. This post is the third in a three-part series based on overviews of three of the virtual TASM panels. Read parts one and two. [Ed.] By Lucy Brown. The Christchurch terrorist ...
Algorithmic Transparency and Content Amplification
July 21, 2021. Our Cyber Threats Research Centre colleagues couldn't host an in-person TASM Conference this year, but instead organised a week of virtual events from 21 to 25 June 2021. This post is the second in a three-part series based on overviews of three of the virtual TASM panels. Read parts one and three. [Ed.] By Adam Whitter-Jones. Many Internet and social ...
Automation in Online Content Moderation: In Search of Lost Legitimacy and the Risks of Censorship
April 21, 2021. By Charis Papaevangelou, Jiahong Huang, Lucia Mesquita, and Sara Creta. At a recent workshop, JOLT Early Stage Researchers (ESRs) worked in multi-disciplinary teams to develop ideas for research projects that address a major issue surrounding media and technology. As the European Commission prepares to announce its much-anticipated ...
Moderating Terrorist and Extremist Content
February 24, 2021. By Joan Barata. According to the latest figures provided by Facebook, 99.6% of the content actioned on grounds of terrorism (mostly related to the Islamic State, al-Qaeda, and their affiliates) was found and flagged before any user reported it. That said, it is also worth noting ...
One Database to Rule Them All
November 4, 2020. A response to this article can be found here. [Ed.] By Svea Windwehr and Jillian C. York. The Invisible Content Cartel that Undermines the Freedom of Expression Online. Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. ...
Artificial Intelligence and the Future of Online Content Moderation
July 11, 2018. By Nick Feamster. This post contains reflections from a Humboldt Institute for Internet and Society workshop on the use of artificial intelligence in governing communication online that took place earlier this year. [Ed.] Context: In the United States and Europe, many platforms that host user content, such as Facebook, YouTube, and Twitter, have enjoyed safe harbor protections for the ...