Access Now Addresses the U.N. Security Council on Countering Hate Speech Online

By Brett Solomon

On October 28, 2021, before the United Nations Security Council, Access Now spoke about countering online hate speech and preventing incitement to discrimination, hostility, and violence on social media.

The Security Council is the United Nations’ most powerful body, with “primary responsibility for the maintenance of international peace and security.” This event, chaired by Kenya, is an important acknowledgement from the U.N. Security Council that addressing conflict offline also requires rights-respecting approaches to hate and incitement to violence online. Far too often, governments use “combating hate speech” as an excuse for repressive policies, and companies are complicit in this crisis and must do better. States need to step up and lead. (Access Now has a comprehensive roadmap to guide decision-makers toward good policy on these issues.)

The Security Council has been too slow to acknowledge the impact of the internet, as well as new and emerging technologies, on its mandate and work. The U.N.’s most powerful body needs to understand and recognize its place in the digital age, and it cannot do that without robust support from civil society, technologists, academia, and, most importantly, the communities affected by violence and conflict as it shifts between online and offline spaces. Today, we pressed the Council to open its doors to more of our partners and community.

Access Now’s Executive Director and Co-founder, Brett Solomon, delivered the following speech to the Council:

Thank you, Chair, Your Excellencies, and honorable participants, for this opportunity, and to the government of Kenya for convening us on this most challenging of topics. It is not a moment too soon. Since I co-founded Access Now in 2009, nearly every conflict this Council has addressed has been impacted, for better or for worse, by social media.

I speak for Access Now today — we are a global team of more than 100 experts with ECOSOC accreditation — but we share this struggle with hundreds of civil society organizations serving billions of affected people worldwide.

It is the right time for the Security Council and its member states to bring contemporary and good-faith leadership to address online hate speech and incitement to violence.

That means moving away from failed “silver bullets” — like knee-jerk content removals, holding tech company executives personally liable, or even shutting down the internet. These measures treat symptoms, not the cause, and are often abused by those in power to maintain political control — no one in the room is free from this critique.

We also must capture social media’s value as a solution to conflict. As Kenya declared to this body earlier this month, “ … platforms have changed how most people find and interpret information.” Access to platforms can provide a lifeline in times of emergency. Social media is also a key tool for community organizing, peace-building, and political participation.

Security Council Members, let’s get back to first principles: Social media is not going away any time soon. Given that, we deserve better. We deserve social media that de-escalates conflict and empowers marginalized communities, rather than enhancing inequity and division. Anything short is a roadblock to peace and security. We deserve states that stop calls for hatred and genocide at their roots, that build constructive relationships with companies and civil society, and that regulate with human rights and affected communities at the center. Removing so-called “hate speech” cannot be used to justify repression.

Your primary role, Excellencies – as protector and promoter of our rights and dignity – demands nothing less.

To both states and companies, I have 26 recommendations for you — but in these seven minutes, I will highlight three for each. With these six between you, we can take great strides toward solving the problem of hate speech in conflict zones.

To the Member States of the Council and General Assembly:

1) I implore member states to understand the terminology and the law. Not all speech is the same.

  • International law provides a clear starting point — incitement to genocide, hostility, discrimination, or violence is prohibited. Such content can lawfully be removed.
  • Carefully crafted national law can also prohibit other forms of hate speech, but only in pursuit of a legitimate aim, and only as strictly necessary and proportionate. It must define illegal content very narrowly.
  • Warning! That does not justify restricting offensive or shocking speech, or political positions you disagree with or that challenge your authority. Such speech is protected and must not be criminalized.

Any restriction on social media must reflect the U.N. Strategy and Plan of Action on Hate Speech and the excellent Rabat Plan of Action.

2) The focus must be targeted reduction of hate speech, not censorship of lawful content.

Most serious cases of online hate speech are inseparable from the offline context. They are a reflection of years of simmering tensions that find expression and amplification online. Many states themselves are the perpetrators of hate speech. Trolls and online armies — sometimes supported by governments — are attacking the most vulnerable, marginalized, or politically underrepresented, where hate speech has its most fertile ground. Here you must step in to urgently and unambiguously condemn acts of incitement, and act according to your obligations under international law.

This does not mean manipulating — or shutting down — entire social media networks, services, or even the internet as a whole. Blocking at that scale is rarely, if ever, justified. Any blocking must be strictly limited in time and confined to specific types of manifestly illegal content, as clearly defined by international law or narrowly tailored national law.

3) States must engage with the private sector responsibly to reduce hate speech on platforms.

As the primary duty bearers, states should encourage and enable companies to adopt rights-respecting practices and together ensure that incitement to genocide has no place on their platforms.

What we’ve often seen instead are states abusing these systems. So-called internet referral units are gaming community standards to limit speech and pressuring companies to act as their censoring proxies. Communications ministries are forcing telcos to shut down services for the population, while preserving access to the internet for their elites.

Similarly, bad hate speech laws often pressure companies to over-censor, and to remove content unreasonably fast or face penalties. This privatizes enforcement, displacing what should be the role of an independent judiciary. Meanwhile, these policies, and the tools companies deploy in response, don’t stop hate speech; instead they silence those who should be most protected — including women, girls, the LGBTQ+ community, immigrants, journalists, and activists. Governments should also be on the lookout for social media companies abusing their power to remove content.

To the tech companies present:

I understand this is largely a briefing for States, but your presence here and your role as co-enablers of the hate speech crisis require me to recommend, with equal urgency, some immediate changes in your own practices.

1) Protecting your at-risk users must be your number-one priority. Facebook, you should change your practices, not just your name. Given the massive profits platforms are making from exploiting our data, your minimal investment in this problem is insulting. And when it comes to combating hate speech online in the Global South, the inequity is scandalous. Those most at risk from hate speech are knocking on your door, beseeching you to remove the hate and stop the violence. One activist said to me in exasperation when talking of the tech company executives, “Don’t these people have families?” We have been sounding the alarm on this crisis in Ethiopia, India, Palestine, Myanmar, and Sudan, just to name a few.

Civil society has long been at the ready to help you deepen your understanding of conflict zones where your users are struggling to survive. But you keep telling us you don’t have resources. Now is the time to invest in trusted, informed human capital, to co-create urgent responses with local civil society, and to understand the languages in which hate speech is communicated. Driving continuous growth while underinvesting in what is necessary to keep people safe is nothing short of reckless.

2) Companies must stop using harmful manipulation that grows profits while endangering users. Platforms optimizing for engagement at any cost — based on privacy-invasive online tracking and surveillance-based advertising — is at the heart of this problem. These practices incentivize amplifying hate speech, sensationalism, and incendiary content. They put companies’ business models directly at odds with people’s safety, and that has to change.

To start, platforms should stop recommending content via opaque algorithms without people’s consent — a practice that spreads hate speech like wildfire, especially in communities with low digital literacy. Instead, they should give users informed choices about how algorithms recommend content and prioritize information in their feeds, explain the implications of those recommendation systems, and at minimum allow people to opt into algorithmic prioritization rather than merely opt out.

This era defined by algorithmic radicalization and manipulation must come to an end.

3) Account for your human rights impacts before harm is caused, and ensure new “solutions” don’t make matters worse. That means adopting — and sticking to — policies that put at-risk users first and hold decision-makers to account. You need robust due diligence safeguards that proactively identify potential harms, along with commitments to share the results of comprehensive human rights impact assessments with civil society partners. You also need processes to preserve evidence of the human rights abuses playing out on your platforms, along with meaningful transparency reporting.

Companies are rushing to implement context-blind automated tools as stopgap solutions — with disastrous results. Poorly trained systems with too little human oversight are leading to unjustified speech restrictions, particularly for the most vulnerable. Going forward, all content moderation and curation criteria, rules, sanctions, and exceptions should be driven by a human element. They should be clear, specific, predictable, and properly communicated to users in advance.

To the Security Council:

Hate speech on social media is a threat to U.N. norms and to this very Council. It’s past time to recognize the impacts of digital technologies on peace and security. The good news is, you already have the roadmap. The Strategy and Plan of Action makes clear how the U.N. itself should proceed. Access Now’s 26 Recommendations on Content Governance is a helpful resource.

We offer our support in convening a future Arria Formula meeting on human rights in the digital age, perhaps even on the public record. We promise to inform and guide Your Excellencies on digital threats like invasive surveillance technology, internet shutdowns, and online hate, through further, open engagement, including in the development of General Assembly and Security Council resolutions. We also recommend inviting more civil society into this chamber.

I look forward to questions and discussion with this Council, and will share my written remarks with the Chair.

Thank you.


Brett Solomon is the Executive Director and co-founder of Access Now, where he leads the organization’s fight to defend and extend the digital rights of users at risk around the world. On Twitter @solomonbrett and @accessnow.

This article was originally published on Access Now, republished here with permission. Image credit: Wikipedia.
