by Ian Brown
The second Vox-Pol workshop, on the ethics and politics of online monitoring of violent extremism, took place in Brussels on 19-20 January. Around thirty experts – from law enforcement and intelligence agencies, governments and parliaments, civil society, and universities – met for two days to discuss the challenges that had dominated the news since the Charlie Hebdo murders the previous week.
Short versions of ten papers were presented to stimulate discussion. These began with the problems of the commonly-used term “radicalisation”: Akil Awan showed that it is a largely post-9/11 term that can obfuscate more than it illuminates, given the poor understanding of “the relationship between words and actions, or between the holding of non-violent ‘extreme’ views and how and if those necessarily become manifest as actual violence,” and of the role the Internet plays. Awan argued that we should not “pathologise and fetishise” the Internet, since “extreme” online behaviour won’t necessarily translate into real-world action. More clarity is needed if online policing interventions are to be effective.
Sadhbh McCarthy looked at the changing technologies available for online policing, in particular the use of “human sensors” in providing online intelligence to police agencies, both through active cooperation and reporting, and as “data points” in monitoring activity. The rise of “big data” means that many more companies – with more diverse interests than today’s Internet companies – will in future gather information that may be of use to law enforcement agencies.
The majority of information being used for extremism investigations by police still comes from traditional human intelligence, and there is still nervousness about the reliability of automated decisions in such a complex area. Not all online content can be treated as actionable intelligence, and adding more “hay” to “haystacks” of available data will not necessarily help investigations. New tools are needed for public safety policing alongside traditional targeted surveillance. But especially following the Snowden revelations, a “crisis of trust” has developed between governments and publics, which will need significant efforts to address.
A number of case studies were presented during the workshop. These included Robindra Prabhu’s discussion of the Internet usage of Anders Behring Breivik before his atrocities in Norway, which an independent commission concluded could not have provided a warning of his plans even with more active monitoring by police; and Danit Gal’s look at Hamas’ use of “viral ‘scare songs’ and videos, large-scale image recycling, documentation and broadcasting of frontal attacks and Hebrew speaking presence and direct communications on social networks” during the 2008-2009 war in Gaza.
Several speakers looked at countering online racist and xenophobic extremism. Andrea Cerase, of the EU-funded LIGHT ON project, presented work on improving the identification and reporting of hate speech to ISPs, social media platforms and law enforcement agencies, while Istvan Janto-Petnehazi analysed the use of Romanian newspapers’ online comment sections to propagate hate speech to a general audience, and how effective site usage policies could be in reducing this. There was some discussion of how user reporting tools can become politicised, as seen, for example, in attempts by supporters of Syrian president Bashar al-Assad to have anti-regime materials removed from social media platforms.
A final set of presentations looked at legal reforms and judgments related to policing online extremism. Benjamin Ducol analysed the development of French counter-terrorism law, particularly provisions such as the criminalisation of “apologie du terrorisme”, and how different stakeholders had reacted to proposed legal changes. Valentin Stoian compared the judgments of the Romanian Constitutional Court and the EU Court of Justice on data retention laws, which police argue are essential for investigating online criminality, but which both courts concluded unacceptably violated individuals’ privacy.
TJ McIntyre assessed the role of ISPs and social media companies in policing extremist material, identifying some of the problems caused by a lack of public law oversight in protecting freedom of expression and privacy, including a lack of transparency and effective remedies when mistakes are made. He described problems that could come from duties on private companies to report suspected extremism, with the potential for a flood of false positive reports and the significant resources needed to analyse them.
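To give a rough sense of why such reporting duties could overwhelm analysts – using purely hypothetical figures, not data presented at the workshop – the short Python sketch below works through the base-rate arithmetic behind the false-positive problem.

```python
# Illustrative only: all numbers are hypothetical assumptions,
# not figures from the workshop or any real detection system.
# The point is the base-rate effect behind "floods" of false positives.

population = 50_000_000       # monitored accounts (assumed)
base_rate = 1 / 100_000       # assumed prevalence of genuinely violent actors
sensitivity = 0.99            # assumed chance the tool flags a true case
false_positive_rate = 0.01    # assumed chance it flags an innocent account

true_cases = population * base_rate
innocent = population - true_cases

true_positives = true_cases * sensitivity
false_positives = innocent * false_positive_rate

precision = true_positives / (true_positives + false_positives)

print(f"Accounts flagged:         {true_positives + false_positives:,.0f}")
print(f"Of which false positives: {false_positives:,.0f}")
print(f"Chance a flagged account is a true case: {precision:.2%}")
```

Even with an optimistically accurate detector, fewer than one flagged account in a thousand would be a genuine case under these assumptions, illustrating the flood of reports and the analysis resources it would demand.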
There was some discussion of the use of filtering and blocking systems as protection for children and vulnerable adults, including those with mental illnesses, which has become a bigger issue as children increasingly use mobile Internet access with less adult supervision. Care is also needed to ensure that measures taken by social media platforms cooperating with law enforcement agencies do not encourage extremists to migrate to smaller, less cooperative platforms. Finally, Ian Brown discussed the main reform proposals made following Edward Snowden’s revelations of extensive online surveillance by North American and European governments, looking at how new international protections can be put in place to ensure such surveillance is necessary and proportionate.
A number of themes emerged during the workshop. One was the heavily contested nature of radicalisation – how far commonalities can be found between the complex socioeconomic situations, and histories of Internet usage, of individuals who have gone on to commit violent, ideologically-inspired acts – and what this means for monitoring online behaviour. The participants agreed that while “radicalisation” remains a problematic term, it is a useful and well-understood shorthand for those addressing the issue. A second theme was the online prevalence of hate speech, and what types of policing mechanisms can be used to limit its spread while protecting freedom of expression. Related to this were debates about the responsibilities of public and private sector actors, and how to ensure democratic legitimacy for anti-extremism programmes that impact on privacy and other human rights.
The workshop was held immediately before the Computers, Privacy and Data Protection (CPDP) conference in Brussels, which attracted over 1,000 privacy researchers, regulators and policymakers, so that participants could attend both events. Vox-Pol organised a packed-out panel discussion at CPDP, with four of the workshop speakers bringing the outcomes of the workshop to a much larger audience.
Workshop chair Ian Brown also spoke on a second Vox-Pol related CPDP panel, ‘Crypto wars reloaded? Privacy Technologies, Cybersecurity Governance and Government Access to Data’. This followed renewed attention from policymakers to the increasing use of encryption to protect data on smartphones and shared using messaging tools like WhatsApp, with UK prime minister David Cameron asking in mid-January: “In extremis, it has been possible to read someone’s letter, to listen to someone’s call, to mobile communications … are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not.”
Other panellists included the former head of the Belgian Federal Computer Crime Unit, Luc Beirens, and the author of the widely-used e-mail encryption software PGP, Phil Zimmermann. The discussion ranged over the possibilities for police access to encrypted data; the lessons that can be drawn from the extended debate throughout the 1990s over the widespread use of cryptographic tools; and whether the security gains from broader use of encryption outweigh the impact on police access to data for investigations.
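To illustrate the technical point underlying this debate – why a provider relaying end-to-end encrypted messages cannot simply hand over readable content – here is a minimal Python sketch using the third-party cryptography package. It is not a description of how WhatsApp or any particular messenger works; the key handling and message are invented purely for illustration.

```python
# A minimal sketch of end-to-end encryption, assuming the third-party
# 'cryptography' package (pip install cryptography). Hypothetical example;
# NOT how any specific messaging service is implemented.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # held only by sender and recipient
sender = Fernet(key)

ciphertext = sender.encrypt(b"meet at noon")

# The service relaying the message sees only this opaque ciphertext.
print(ciphertext)

# Without the correct key, decryption fails rather than revealing anything.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("intermediary cannot decrypt without the key")

# The intended recipient, holding the key, recovers the message.
print(Fernet(key).decrypt(ciphertext))  # b'meet at noon'
```

If only the communicating endpoints hold the key, the intermediary relaying the ciphertext has nothing readable to hand over – which is the crux of the renewed “crypto wars” debate discussed on the panel.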
A project report on the subject of the workshop will be published in April.