Is Big Tech Ready to Tackle Extremism? The Bergen Plan of Action

By Maisie Draper

A news story that falsely claimed the COVID-19 vaccine caused the death of an American doctor was the most viewed article on Facebook in the US during the first three months of 2021. Rather than go public about the platform promoting such misinformation, Facebook withheld the report containing that finding until The New York Times disclosed its existence. Facebook was reportedly worried that it would “make the company look bad.”

This is the latest in a string of attempts by Facebook and other technology companies to avoid accountability for the spread of misinformation, violent extremist content and incitements to terror on their platforms.

Against this backdrop, an international summit was held last month in Bergen, Norway, to commemorate the 77 victims of the terror attacks in Oslo and on Utøya on July 22, 2011. In the decade since, radical-right extremism has become a growing transnational threat, with global attacks rising by 320% over the last five years.

At the summit, closed-door workshops, discussions and Q&A sessions facilitated open and robust dialogue among leading global stakeholders committed to fighting radical-right terrorism and extremism. These included representatives from Facebook, Microsoft, Google, Twitter, the British, US, New Zealand and Norwegian governments, the UK’s communications regulator Ofcom, the Global Internet Forum to Counter Terrorism, the UN, the EU and more.

The “22 July at Ten” organizers published the Bergen Plan of Action, outlining five steps for a collaborative, multi-stakeholder approach among governments, tech platforms and civil society organizations, aimed at tackling the spread of violent extremist content and mitigating rising rates of online radicalization.

Black Box

While many researchers no longer subscribe to the technologically deterministic view that social media is the sole cause of radical-right extremism, there is evidence that it rapidly accelerates the process. An internal Facebook report from 2016 found that 64% of people who joined an extremist group on the platform did so because the company’s algorithm suggested it. This helps explain how many online communities, including white-supremacist and male-supremacist groups, can act as gateways to more extreme groups and ideologies.

By design, Facebook’s algorithms prioritize content that gets more clicks, which in turn amplifies inflammatory content and deepens the echo-chamber effect. It is simply not in Facebook’s interest to make judgments about the quality of the content shared on the platform, given that its business model centers on maximizing advertising revenue. After the 2016 report, for example, proposed alterations to the “recommender algorithm” were rejected as “anti-growth.”
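Facebook’s actual ranking system is proprietary, so the following is only a minimal sketch of the dynamic described above: a feed ordered purely by engagement signals. The field names and weights are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: interactions that keep users on the platform
    # are rewarded, with no term anywhere for accuracy or harm.
    return 1.0 * post.clicks + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting purely on engagement pushes whatever provokes the most
    # interaction to the top, where it earns still more interaction:
    # the amplification loop described above.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in such a scoring function knows whether a post is true; divisive content that reliably provokes clicks and shares will keep outranking accurate but unremarkable reporting.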

The first step of the Bergen Plan of Action calls for a tougher, more comprehensive, whole-of-society approach to countering violent extremism, one that involves governments, the corporate sector and civil society organizations.

Sessions with the US Department of State raised valid concerns about how this would work in practice, not least given tech companies’ well-documented resistance to publicizing information about their algorithms and to transparency in general. The seemingly impenetrable, black-box nature of their recommendation engines has been used to deflect responsibility for their role in spreading extremist content.

While much is yet to be resolved, the Bergen Plan of Action has the potential to provide vital independent, peer-reviewed research, allowing internet users to make better decisions about the information they consume and share online. The fourth step of the plan recommends establishing a global network of civil society organizations to facilitate the sharing of methods for tackling all stages of online radicalization. Specialist tools such as Moonshot’s Redirect Method could be used to identify and safeguard vulnerable individuals, directing them toward safer content and trained counselors.
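Moonshot’s actual Redirect Method is an advertising-based intervention rather than a public API, so the sketch below is purely illustrative of the underlying idea: meeting risk-signaling searches with safer alternatives instead of a block. The keyword list, helper name and URLs are all hypothetical placeholders.

```python
# Hypothetical indicators of at-risk searches; a real deployment would
# use far more sophisticated, locally informed targeting.
RISK_INDICATORS = {"white genocide", "great replacement", "race war"}

# Placeholder counter-narrative destinations.
COUNTER_CONTENT = [
    "https://example.org/former-extremist-stories",
    "https://example.org/talk-to-a-counselor",
]

def intervention_for(search_query: str) -> list[str]:
    """Return counter-narrative links when a query matches a risk indicator."""
    normalized = search_query.lower()
    if any(term in normalized for term in RISK_INDICATORS):
        # Rather than censoring the search, the approach surfaces safer
        # content and routes the user toward trained counselors.
        return COUNTER_CONTENT
    return []
```

The design choice matters: the intervention is non-punitive, steering vulnerable users toward help rather than simply deleting what they were looking for.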

Another aim of the global network is to avoid duplicating research across international counter-terrorism organizations.

Whole of Society

A further focus of the conference was the role of content moderation. Historically, tech companies have repeatedly abdicated responsibility for removing radical-right content. Social media platforms drew fresh criticism for failing to remove the extremist messaging that helped incite the violence at the US Capitol on January 6.

During the Bergen conference, representatives from Google and Facebook discussed the challenges of crafting a global definition of terrorism that does not impinge on the right to free speech, particularly as protected by the First Amendment to the US Constitution. Yet 10 years after the tragic events of July 22, 2011, Anders Behring Breivik’s “2083” terrorist manifesto remains one of Norway’s top Google search results. This alone, argues Matthew Feldman, director of the Centre for the Analysis of the Radical Right, shows that Google’s approach is inadequate.

The Bergen Plan of Action proposes that an independent body of moderators be used across all tech companies to improve transparency and accountability in moderation. Moderators would also be supported with counseling and mental-health resources, since the nature of the content they are exposed to can be immensely distressing.

Likewise, policy decisions would be backed by transparent, externally verified data showing the measurable impact of each policy. The benefits of this approach are showcased by Twitter’s deplatforming of Donald Trump: within a week of the former US president being banned, false claims of election rigging fell from 2.5 million to 688,000 mentions, a 73% decline.

In this plan, moderators would also be trained to recognize terrorist and violent extremist content across a multitude of languages and cultural contexts. This would bring human judgment to the deciphering of extremist rhetoric and discourse, a clear improvement on the current, overly automated process responsible for the majority of content removal.

An independent group of experienced moderators could better navigate nuanced issues of cancel culture and free speech in marginal or highly charged cases, while also retaining engagement from marginalized users and minimizing polarization.

As expressed by former Pinterest employee Ifeoma Ozoma, tackling the spread of harmful content online comes down to conviction: “If you want to understand how non-accidental any of this is, think about pornography. How often do you randomly encounter porn on Facebook, or Twitter, or YouTube, or wherever else? Not that often.”

Getting tech companies to endorse the Bergen Plan of Action may well be the biggest challenge to its success. It will require recentering Big Tech’s efforts on a duty of care to users rather than on growth and profit. Yet a whole-of-society approach necessitates their involvement.

Policies that offer social media companies financial incentives have shown promise, as have those that mete out punishments, such as Germany’s Network Enforcement Act, which fines social media companies up to €50 million ($58.7 million) for failing to quickly remove violent extremist content.

Ultimately, building a multi-stakeholder solution, spanning governments, tech platforms and civil society, that each sector can meaningfully adopt is an ambitious but vital task if we are to prevent future events like the Capitol Hill insurrection and radical-right terrorist attacks like the 2011 tragedy in Norway.


Maisie Draper holds a degree in computer science from the University of Manchester. Her dissertation explored the use of technical innovations as educational tools for teaching programming in secondary schools. At present, she works at the Centre for the Analysis of the Radical Right and interns at a FinTech start-up. On Twitter: @draper_maisie.

This article was originally published on Fair Observer with the title, ‘Is Big Tech Ready to Tackle Extremism?’ Republished here with authorisation. Image credit: pngtree.
