by Anna Orosz
The Center for Media, Data and Society at Central European University in Budapest, Hungary hosted the third VOX-Pol workshop on 5–6 March, with the participation of nearly 40 experts from policy making, human rights and activist groups, law enforcement, social media companies, and academia. The participants’ diverse backgrounds and expertise enabled the workshop sessions to address themes and issues related to violent online extremism in all their complexity, taking different actors’ and stakeholders’ viewpoints into consideration.
The fundamental aim of the workshop was to place the role of social media and internet companies in responding to violent online extremism in a broader perspective, in order to gain a better understanding of the inter-relations between internet governance, the role of technology companies as political intermediaries, and end-users. There was broad consensus among the group that, for real progress to be made in devising policy solutions that support security, human rights, and free expression, there needs to be engagement with the full range of actors: companies, policy makers, law enforcement, intelligence, academia, and specialists in human rights, privacy, freedom of expression, internet governance, law, computer science, and cyber security. These are the actors the workshop sought to bring together, and whom VOX-Pol will continue to reach out to.
The workshop was held under the Chatham House Rule, which this synopsis of discussions reflects.
In the immediate aftermath of the horrific attacks in Paris, the Joint Statement of EU Ministers for Interior and Justice expressed concern at the “increasingly frequent use of the Internet to fuel hatred and violence.” Their statement argued that “the partnership of the major Internet providers is essential to create the conditions of a swift reporting of material that aims to incite hatred and terror and the condition of its removing, where appropriate/possible.” In response, serious concerns over freedom of expression and privacy have been expressed, especially as regards the potential for increased surveillance and the push for intermediary liability.
The opening session of the workshop framed the discussion around how stakeholders should balance responses to extremism against online privacy in the wake of the attack on Charlie Hebdo. Contributors to the session illustrated various authoritarian state responses to online extremism, citing case studies from Russia, China, and Nigeria, where related laws are often crafted with expansive and vague definitions of online extremism so that they can be enforced broadly and flexibly, in particular empowering state security agencies to target opposition parties and organizations. The attacks in France and Denmark, however, also gave Western governments an opportunity to push for policies that further restrict free speech. Participants also discussed the role the internet and mass media play in processes of violent radicalization, especially by reproducing visual messages, and social media’s personalized, immersive, ‘always on’ aspects, which were felt to distinguish it from broadcast formats.
The following session continued with an analysis of existing human rights law, examining differences in legal approaches to social media content and to the provision of assistance to security forces, and raising the question of whether social media companies have a duty to assist security forces, especially given the increasing tension between freedom of expression, privacy, and national security. As there is no coherent international application of human rights laws and articles, countries take different approaches to their use, applying them either directly or indirectly. A further concern regarding the applicability of these laws is that the definition of extremism changes over time and has very different meanings in different countries.
The third session discussed intermediary liability and the specific challenges it poses for social media companies in responding to violent extremism while protecting freedom of expression. Because online expression crosses borders, it raises transnational questions of jurisdiction in which courts apply dissimilar laws. It was argued that a global framework addressing liability can only be developed in a cooperative, multi-stakeholder format; it cannot be accomplished by states alone. In current political discourse, “liability” is shifting to “responsibility”, implying more active policing of the online sphere. This can lead to interstate tension and to the fragmentation of internet infrastructure.
The final session of Day 1 focused on the removal, blocking, and filtering of extremist content, examining current governmental and editorial policies and terms of service, and discussing cooperation between law enforcement and social media companies. Talks once again returned to Russia, where cartoons and historical images can be regarded by the state as examples of extremism, as can support for Ukrainian nationalism. Worryingly, these expansive new internet regulations enjoy a high degree of support among the Russian public. The Russian example highlights how the labelling of content as extremist can itself become a tool of governance, used by authoritarian actors to manipulate society by arbitrarily judging what is good and bad. Some workshop participants argued for the protection of online anonymity, especially in light of the current UK parliamentary debate on the issue, and raised the difficulty of balancing freedom of expression and security, especially when the fundamental principle is to ensure that the common space accommodates a diverse array of opinions.
Day 2 started with a complex analysis of the interplay between corporate responsibility and extremist content. Key issues that emerged related to transparency, including the reporting practices of internet and social media companies and the reasons behind them; the often controversial nature of their actions in certain countries; and an emerging consensus that transparency in and of itself is an insufficient route to accountability. Questions were also raised about the resources needed by law enforcement to operationally manage the volume of data, and by companies to manage the volume of content. Some pointed out that policy makers are often unaware of these operational challenges, and that outsourcing the volume problem to industry takes neither operational difficulties nor data protection into account.
The closing session examined measures being taken to protect online freedoms in light of violent extremism, and possibilities for resolving the polarizing dynamic of privacy versus security online in an environment of heightened concern over violent extremism. Contributors to the session debated how best to ensure that such decisions are made in a multi-stakeholder manner, and called for best practices to enhance transparency and support internet companies’ compliance. It was argued that privacy and security are not mutually exclusive.
In the final wrap-up session, we focused on participants’ recommendations on next steps for research, knowledge needs, and how best practices should be developed to better respond to violent online extremism. A follow-up blog post will address these.
#voxpolceu