By Laura Higson-Bliss
The UK government’s much-anticipated online safety bill has now been released. The bill seeks to impose a duty of care on companies, such as social media platforms, to remove illegal content, and in some cases, “legal but harmful” content, quickly.
Failure to comply will result in heavy fines or, in extreme circumstances, company executives facing prosecution. Yet what is considered “legal but harmful” content remains unclear.
The requirement for those who fall within the scope of the online safety bill (platforms where content is created, uploaded or shared by users, including search engines) to remove legal but harmful content has been at the forefront of the government’s plans to make the UK “the safest place in the world to go online” since the release of the online harms white paper in 2019. But the white paper gave no indication as to how broad or narrow the definition might be.
Various organisations highlighted the lack of clarity as to what was meant by legal but harmful content during the government’s consultation period following the release of the white paper.
As a result, the government attempted to provide some more information, defining harm as “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. There was no further clarity beyond this definition.
The draft bill reflected this approach, defining harm as “adverse physical or psychological harm”, with the onus placed on companies to decide if content on their platform could be considered harmful.
Although this update provided something of a definition, stakeholders expressed concerns that the concept of harm was still vague, and that it would be difficult for companies to moderate harmful content on this basis.
Shifting responsibility
In the latest version of the online safety bill, which is currently being considered before parliament, the government continues to define harmful content as material which could cause “physical or psychological harm”.
While previously it would have been for companies such as social media platforms to determine what material on their site could possibly cause harm, now it will be for the government, with the approval of parliament, to determine what content meets this threshold. Then, companies will need to moderate content accordingly.
This change to the bill is an attempt to protect freedom of expression and to reduce the likelihood of companies over-censoring content on their platforms.
It seems that the rationale behind maintaining such a vague definition of “harmful” is to ensure the bill is future-proof – allowing the government and parliament to react quickly to “harms” as they arise.
Take, for instance, the Momo challenge, which caught the public’s attention in 2019. Reports suggested children were being encouraged by an internet user known as “Momo” to perform dangerous acts, including self-harm. Had the online safety bill been in force at the time, it would have allowed parliament to put increased pressure on companies to tackle Momo (though the challenge was later revealed to be a hoax).
The agreed categories of legal but harmful content are expected to be set out in secondary legislation. Though it’s not yet clear what will be considered, the government has put some suggestions forward. There is significant emphasis on the removal of content which encourages people to self-harm. For the government, this is a clear example of content they would consider to be legal but harmful.
Previously, social media companies have come under heavy criticism for not removing photos or videos of self-harm. At face value, it might seem appropriate to take down content that actively encourages people to self-harm. But what about content in which people are supporting others who self-harm? These are two very different scenarios, but they can easily be confused.
This is an issue previously flagged by the Samaritans, a British charity which supports people in emotional distress. According to Samaritans chief executive Julie Bentley:
Whilst we need a regulatory “floor” around suicide and self-harm content, this must not lead to all conversations about suicide and self-harm being shut down, as we need safe spaces where people can share how they’re feeling, connect with others, and find information and sources of support.
Other examples the government has flagged as potentially constituting legal but harmful content include exposure to content promoting eating disorders, online bullying and the intimidation of public figures.
Freedom of speech
While the government claims the bill aims to protect freedom of expression, a model where the government is empowered to impose bans on broad topics could actually have the opposite effect. It’s not impossible to foresee that material promoting gambling or drinking, or even references to blasphemy, might be prohibited in the future. Indeed, experts are already raising concerns that the bill poses a significant risk to freedom of speech.
Online companies need to be held to account more for content on their sites, but this should not be at the expense of disproportionately restricting freedom of expression. If the government truly wants the UK to become the safest place in the world to go online, while also protecting freedom of speech, we need to rethink the boundaries of what we consider to be harmful, or at least give the concept of harm a more precise meaning.
Laura Higson-Bliss is a Lecturer in Law at Keele University. On Twitter @DrHigsonBliss. This article was originally published on The Conversation, republished here under a Creative Commons license.