Facebook serves more than 1.5 billion people globally. Although the majority of people use the site for positive purposes, there are some who use the platform in negative ways. With that in mind, Facebook has created a set of policies – its Community Standards – detailing what type of content people can and cannot post. For instance, Facebook prohibits and removes hate speech and does not allow dangerous organisations (defined as groups that engage in terrorist or organised criminal activity) to have a presence on Facebook. Content that supports or promotes those groups is removed. However, sometimes people post content which other users may consider hateful or extreme, but which does not violate Facebook’s policies.
To counter this type of disagreeable or extremist content, Facebook has publicly stated its belief that counter-speech is not only potentially more effective than removal alone, but also more likely to succeed in the long run. Counter-speech is a common, crowd-sourced response to extremist or hateful content: extreme posts are often met with disagreement, derision and counter-campaigns.
Combating extremism in this way has several advantages: it is faster, more flexible and responsive, and capable of dealing with extremism from anywhere and in any language; and it preserves the principle of free and open public spaces for debate. However, the forms counter-speech takes are as varied as the extremism it argues against. It is also likely not always as effective as it could be, and some types of counter-speech could even prove counter-productive. In light of its belief in the power of counter-speech, and the growing interest in a more rigorous, evidence-led approach to understanding it, Facebook commissioned Demos to undertake this research report, examining the extent to which different types of counter-speech are produced and shared on Facebook.