By Jim Killock
A report published last month by the Home Affairs Select Committee brands social media companies as behaving irresponsibly in failing to remove extremist material.
It takes the view that the job of removing illegal extremist videos and postings is entirely the responsibility of the companies, and does not envisage a role for the courts in adjudicating what is or is not legal.
This is a complex issue: companies do have to take responsibility for content on their platforms from many perspectives, including public expectation, and there are legitimate concerns.
The approach the committee advocates is, however, extremely unbalanced and could provoke a regime of automated censorship that would affect legal content, including material opposing extremism.
We deal below with two of its recommendations to give some indication of how problematic the report is.
Government should consult on a stronger law and a system of fines for companies that fail to remove illegal content
Platforms receive reports from people about content; the committee assumes that this content can be regarded as illegal. Sometimes it may be obvious. However, not every video or graphic will be “obviously” illegal. Who then decides that there is a duty to remove material? Is it the complainant, the platform, or the original publisher? Or an independent third party such as a court?
The comparison with copyright is enlightening here. Copyright owners must identify material and assert their rights: even when automatic content matching is used, a human must assert the owner’s rights to take down a YouTube video. The video’s author can, of course, object. Meanwhile, this system is prone to all kinds of errors.
For all its faults, however, there is a clear line of accountability. The copyright owner is responsible for asserting a breach of copyright; the author is responsible for defending their right to publish; and both accept that a court must decide in the event of a dispute.
With child abuse material, there is a similar expectation that material is reviewed by the Internet Watch Foundation (IWF), which makes a decision about its legality or otherwise. It is not up to the public to report directly to companies.
None of this need for accountability and process is reflected in the HASC report, which merely asserts that reports of terrorist content by non-interested persons should create a liability on the platform.
Ultimately, fines for failure to remove content as suggested by the committee could only be reasonable if the reports had been made through a robust process and it was clear that the material was in fact in breach of the law.
Social media companies that fail to proactively search for and remove illegal material should pay towards the costs of the police doing so instead
There is always a case for general taxation that could be used for the police. However, hypothecated resources in cases like this are liable to generate more and more calls for specific “Internet taxes” to deal with problems that can be blamed on companies, even when they have little to do with the activity in reality.
We should ask: is the posting of terrorist content a problem generated by the platforms, or by wider social problems? It is not entirely obvious that this problem has in some way been produced by social media companies. It is clear that extremists use these platforms, just as they use transport, mail and phones. It appears to be the visibility of extremists’ activities that is attracting attention and blame to the platforms, rather than any objective link between the aims of Twitter and Facebook and those of terrorists.
We might also ask: despite the apparent volume of content that is posted and reposted, how much attention does it really get? This is important to know if we are trying to assess how to deal with the problem.
Proactive searching by companies is something HASC ought to be cautious about. It is inevitably error prone, and can only lead one way: to over-zealous matching, for fear of failing to remove content. In the case of extremist content, it is perfectly reasonable to assume that content opposing extremism, while quoting or reusing propagandist material, would be identified and removed.
The incentives that HASC proposes would lead to censorship of legal material by machines. HASC’s report fails to mention or examine this, assuming instead that technology will provide the answers.
Jim Killock is the Executive Director of Open Rights Group, which campaigns for privacy and free speech. He has led campaigns against three strikes and the Digital Economy Act, the company Phorm and its plans to snoop on UK users, and against pervasive government Internet surveillance. He works on data protection and privacy issues. You can follow Jim Killock on Twitter: @jimkillock