By Ellie Rogers
Most major tech companies actively remove illegal content such as terrorist and violent extremist content (TVEC), in line with the European Union's e-Commerce Directive. However, there has been an increasing focus on the need for tech companies to address borderline TVEC proportionately, consistently and transparently. There is no universal definition of borderline TVEC, which adds to the moderation debates surrounding this content, but the term usually refers to content that is hateful or harmful and comes close to violating platforms' TVE or hate speech policies. It is important to address borderline TVEC because it can spread misinformation and harm, exposing users to themes associated with violent extremism. Furthermore, evidence suggests that in certain contexts, borderline TVEC is algorithmically amplified by platforms.
Given its non-violative nature, there is debate over how proportionate different moderation approaches to borderline TVEC are, and how to ensure they respect freedom of expression whilst safeguarding users against online harms. Reduction methods such as downranking and user-controlled moderation may be more proportionate than removal in addressing the spread of borderline TVEC. But any content moderation must be accompanied by meaningful transparency from tech companies to adhere to upcoming legislation, respect user rights, and protect users from online harms.
Regulating Borderline TVEC
Moderation debates are exacerbated by user concerns about freedom of expression, and by mistrust between platforms and users stemming from a lack of transparency from tech companies. Current transparency reporting by tech companies is largely voluntary. Tech companies that do produce transparency reports often focus exclusively on metrics such as content removals, rather than content moderation processes, so they do not provide a full picture and often lack appropriate detail. Recent regulatory efforts such as the European Union's Digital Services Act (DSA) and the United Kingdom's Online Safety Bill, now the Online Safety Act (OSA), seek to address these gaps through increased transparency requirements.
In addition to calling for further measures to remove illegal content, the DSA proposes safeguards focused on meaningful transparency, so that users can respond in informed ways to the spread of illegal and harmful content. Requirements include clearer communication about content moderation, such as notifying users when and why their content is removed or restricted so that they can appeal, and annual transparency reports covering content moderation statistics, national orders, complaints, and the use of human and automated moderation. More specifically, the DSA aims to address harms associated with algorithms by requiring transparency about how algorithms operate and how this shapes the content surfaced to users. The DSA also outlines provisions to give researchers greater access to data on social media platforms.
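For illustration, the kinds of metrics such a report aggregates could be represented roughly as follows. This is a minimal sketch: the field names are assumptions made for the example, not the DSA's own reporting schema.

```python
# Rough sketch of the kinds of metrics a DSA-style transparency report aggregates.
# Field names are illustrative assumptions, not the DSA's legal schema.
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    period: str                      # reporting period, e.g. "2023"
    removals_illegal: int            # items removed as illegal content
    restrictions_borderline: int     # items downranked or otherwise restricted
    user_notifications_sent: int     # users told when and why content was actioned
    appeals_received: int
    appeals_upheld: int
    national_orders: int             # orders received from national authorities
    complaints: int
    automated_decisions: int         # moderation decisions made by automated systems
    human_reviewed_decisions: int    # moderation decisions made or reviewed by humans
```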
The OSA primarily focuses on the moderation of illegal content and on protecting young people from legal but harmful content. It also requires Ofcom to issue mandatory transparency notices to in-scope platforms, which must then publish transparency reports giving more detail on their content moderation processes, with the aim of giving users more control over their online environment. Ofcom is also required to be transparent about its role in improving platforms' safety measures. However, there is a lack of guidance on how to achieve meaningful transparency around harmful content such as borderline TVEC and its moderation.
Proportionality of Borderline TVEC Moderation
Platforms may have to deal with borderline TVEC on an individual basis, as a one-size-fits-all approach risks failing to respect freedom of expression rights, but it is unclear to what extent this approach is used across the social media ecosystem. Platforms address the prevalence and spread of borderline TVEC through a range of methods including removal, and reduction methods such as downranking.
Borderline TVEC is non-violative by definition, so removing this content may be disproportionate and risks over-censorship. Over-censorship can have chilling effects on speech, as users may reduce their speech online to avoid having their content removed. Chilling effects may be exacerbated for minority communities. For example, the over-removal of non-harmful Arabic-language content means users may avoid posting such content for fear of disproportionate removal. Reduction measures may be more proportionate responses to borderline TVEC, allowing the content to remain on the platform whilst safeguarding users by reducing its visibility. Reduction measures such as downranking have been used on certain platforms to prioritise credible and informative sources in place of the harmful content. Some evidence suggests that reduction methods have decreased user engagement with borderline content, but this data is limited for long-term effects, for borderline TVEC specifically, and across platforms, partly due to a lack of access to platform data for research. Additionally, platforms have received backlash over the use of reduction approaches due to a lack of transparency on borderline TVEC definitions and moderation processes.
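To make the mechanism concrete, the sketch below shows one simple way a downranking step could work in principle: content classified as likely borderline stays on the platform but receives a lower ranking score, while credible sources are boosted. All names (Post, borderline_score, authoritative) and thresholds are hypothetical; real platform ranking systems are far more complex and not publicly documented.

```python
# Minimal sketch of a downranking step in a recommendation pipeline (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance: float          # base ranking score from the recommender
    borderline_score: float   # classifier confidence (0-1) that the post is borderline TVEC
    authoritative: bool       # e.g. a vetted, credible news or public-information source

def rank_feed(posts: list[Post], threshold: float = 0.7, penalty: float = 0.5) -> list[Post]:
    """Reduce the visibility of likely-borderline posts instead of removing them."""
    def adjusted(post: Post) -> float:
        score = post.relevance
        if post.borderline_score >= threshold:
            score *= penalty          # downrank: content stays up but is surfaced less
        if post.authoritative:
            score *= 1.2              # prioritise credible, informative sources
        return score
    return sorted(posts, key=adjusted, reverse=True)
```

The key design point is that nothing is deleted: the content remains accessible, but the ordering users see changes.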
Platforms may also give users control over the content they are exposed to, through simple measures such as negatively interacting with, unfollowing or blocking certain accounts or content, or through more complex tools such as sensitivity controls and muting keywords or content. Increased user control is included within the OSA, and may be a more proportionate response to borderline TVEC than removal, as it allows the content to remain on the platform while reducing it for users who opt out. This approach raises ethical questions, as it is debated whether users should be responsible for protecting themselves against harmful content. For example, users may not want to customise settings, and some may not be aware of these functions due to a lack of transparency from platforms.
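As a rough illustration of how such user-controlled tools can operate, the sketch below filters a feed using a user's own muted keywords and a sensitivity setting. The field names and thresholds are assumptions for the example, not any platform's actual settings or API.

```python
# Minimal sketch of user-controlled moderation: muted keywords plus a sensitivity slider.
# Names and thresholds are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    muted_keywords: set[str] = field(default_factory=set)
    sensitivity: float = 0.5   # 0 = show everything, 1 = hide anything remotely borderline

def should_show(text: str, borderline_score: float, prefs: UserPreferences) -> bool:
    """Hide a post from this user's feed only if their own settings say so."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in prefs.muted_keywords):
        return False                                      # the user has muted this topic
    return borderline_score < (1.0 - prefs.sensitivity)   # stricter settings hide more

# Example: a user who mutes a topic and opts for a stricter feed
prefs = UserPreferences(muted_keywords={"example topic"}, sensitivity=0.8)
print(should_show("A post about example topic", borderline_score=0.3, prefs=prefs))  # False
```

Note that the decision sits with the individual user rather than the platform, which is precisely why the approach raises the responsibility and awareness questions discussed above.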
Lack of Transparency in Moderating Borderline TVEC
There is currently a lack of clear communication from platforms on how harmful content such as borderline TVEC is defined and identified, how prevalent it is, and which content moderation processes are used to address it. This lack of transparency has resulted in confusion and mistrust between users and platforms. There is also limited access to data for researchers to study the prevalence of borderline TVEC and the effectiveness of moderation efforts. Although recent regulatory efforts put forward increased transparency requirements for tech companies, there remains a lack of clear guidance on how they can achieve meaningful transparency surrounding borderline TVEC.
Algorithmic systems are complex and vary across platforms, so a one-size-fits-all approach to transparency reporting may not be possible. However, guidance that helps platforms communicate more clearly how borderline TVEC may be surfaced to users, and how users can personalise and shape their online space, is important for improving the controllability of algorithmic systems. Moreover, there is a need for clearer guidance on transparency around the content moderation processes used for borderline TVEC, including the violation they are responding to and how to appeal decisions. This can improve trust between platforms and users by holding platforms accountable for decisions, highlighting which moderation approaches are used and why, and clarifying definitions of borderline TVEC. Finally, there is a need for improved and equal access to platform data for independent researchers. This would allow research and audits on the prevalence of borderline TVEC and on the effectiveness and impact of content moderation approaches, helping platforms map the evolving risk of harmful content and monitor and adjust moderation efforts when necessary. Additional guidance such as the Santa Clara Principles may be beneficial for platforms to follow alongside regulatory requirements, in order to achieve meaningful transparency surrounding harmful content such as borderline TVEC.
Conclusion
Current reduction efforts, which include downranking and providing users with control over their online environments, may be the most proportionate responses to borderline TVEC, reducing the visibility of harmful content for users whilst protecting freedom of expression and user agency. Moreover, dealing with borderline TVEC on a case-by-case basis allows for more consideration of freedom of expression and proportionality. However, there is limited evidence on the prevalence of borderline TVEC and the effectiveness of moderation approaches due to a lack of transparency from platforms, so any conclusions about the reduction of borderline TVEC must be treated with caution.
Consequently, it is essential that any content moderation efforts are accompanied by meaningful transparency from tech companies, as required in upcoming legislation such as the DSA and OSA. These efforts need to be supported by clearer guidance on transparency around harmful content such as borderline TVEC, to ensure security and accountability on platforms, improve trust between users and platforms, protect user rights, and allow for the continual monitoring and improvement of content moderation efforts across the social media ecosystem.
Ellie Rogers is a PhD candidate in Criminology at Swansea University. Her research focuses on countering online extremism, terrorist and violent extremist content, algorithms, and counter-speech.