By Sam Jackson
For more than a decade, we’ve been debating how to respond to hate speech – broadly understood as “offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender).”1 The status quo in the United States holds that governments may not restrict speech outside of narrow exceptions (for things like threats, defamation, and obscenity),2 and hate speech in general doesn’t fit into those exceptions, though some instances might, such as hate speech likely to incite “imminent lawless action.” Non-governmental actors engaged in content moderation, such as social media platforms, are generally understood to be legally permitted to restrict speech according to their own rules, though whether and how companies exercise this legal right is a matter of ongoing debate.
One response to hate speech is the “marketplace of ideas” model, which argues that the most effective way to defeat loathsome ideas is to challenge them with other ideas, because good ideas will defeat bad ones on a level playing field.3 Additionally, according to this approach, restrictions aren’t effective, are too blunt, and will be weaponized by those in power against legitimate critics. And it’s certainly true that authoritarian regimes have weaponized terms like terrorism to outlaw dissent, while risk-averse platforms remove activists’ content documenting hate crimes, violence against civilians, and other violations of human rights. This framework of competition has long been a dominant metaphor for understanding approaches to the regulation of speech in the United States, although experts have increasingly pushed back against it in recent years.
Persuasive hate speech
This marketplace approach is rooted in an understanding of hate speech primarily as persuasive communication of ideas, an understanding that downplays the immediate harms of that speech. And indeed, some hate speech is persuasive, aiming to change minds. For example, some “academics” peddle pseudo-scientific research arguing that racial minorities are intellectually inferior or pose threats to White people.4 Part of the purpose of this kind of work is to persuade non-believers of the truth of its hateful claims.
For counter-speech to defeat hate speech in this framework, we need to think of hate speech as something with a “truth value” (i.e., statements that can be said to be true or untrue) that can be contested. In this understanding, counter-speech can become a kind of competitor in the marketplace: calling out hate speech as unacceptable states our values and, at least indirectly, attempts to convince others that our values are good.
Expressive hate speech
However, some hate speech is less concerned with “truth value” and more concerned with making targets feel unsafe or unwelcome.5 When someone uses a racial slur, they are not (primarily) saying anything of substance other than “I believe you belong to this racial group, and I do not like people who belong to this racial group.”6 While there is persuasive potential in this kind of hate speech (for example, convincing bystanders that the target is part of a harmful minoritized group, or that it is permissible to use racial slurs), that persuasion is secondary to the immediate goal of harming the target.
At the same time, even the most persuasive of hate speech also has expressive outcomes. When prominent public voices argue, for example, about whether trans folks should be permitted to use bathrooms that correspond to their gender identity, trans folks justifiably hear expressions of the idea that trans people aren’t normal and deserve to have their autonomy disproportionately restricted by the government.
Just as targets can hear expressive hate within persuasive hate speech, they can hear expressive solidarity within persuasive counter-speech: when an ally argues that legislating bathroom access for trans folks is unjust, trans folks could also hear expressions of affinity and belonging.
Countering hate speech
As we think about hate speech interventions, we must consider the different types of harms that come with different forms of hate speech. If an instance of hate speech seems more persuasive than expressive in function, counter-speech that argues against the truth propositions in that hate speech might be a reasonable response. For example, in response to assertions that Latinx immigrants are violent criminals, scholars can collect and publish data comparing crime rates among immigrants and non-immigrants, using our best available evidence to refute the false assertion about immigrant crime.
However, research on fact-checking and debunking warns about backlash to information correction. For example, when people see negations of false statements (such as “Mexican immigrants are not rapists”), they tend to remember the false statement without remembering its negation; in this example, people would remember “Mexican immigrants are rapists,” forgetting the “not.” This happens through a few possible mechanisms, including a tendency to believe that familiar information is true regardless of its accuracy (the more we hear something, the more we believe it, even if we heard it in the context of a refutation) and a tendency to forget negation (i.e., forgetting the “not” in the example above).7 As observers have pointed out in the context of recent fabricated controversies related to DEI efforts, saying that “’malfunctioning doors have nothing to do with DEI’ is still a sentence with both ‘malfunctioning doors’ and ‘DEI’ in it.” A better alternative, according to this perspective, is to change the terms of the discussion to shine a light on the hateful ideas driving the fabricated controversies: don’t say that the racist is wrong; spend your time pointing out the racist’s racism. But this is still engaging in the marketplace of ideas, in a certain sense: rather than a straight competition between two versions of the same “product” (i.e., idea), it is like pointing out the unsanitary conditions in the competitor’s factory.
Regardless of what we think of the metaphor of the “marketplace of ideas,” we must be clear about what harms we want to reduce and how those harms manifest. Counter-speech against expressive hate speech can only be effective if it undoes the immediate harms of hateful expression, taking someone who was made to feel unsafe and unwelcome and demonstrating that they have support and community. This is the Paxlovid of dealing with hate speech: just as the antiviral medications in Paxlovid reduce the chances that someone will have severe complications from COVID-19 once they’ve already been infected, persuasive counter-speech can perhaps reduce some of the harms that have already been caused by hate speech. But as we know from COVID, prevention efforts (like mask wearing) are more efficient and effective than treatment.
This distinction between expressive and persuasive hate speech can hopefully lead us to better understand the different types of harm associated with hate speech.8 Deep knowledge of problems is critical to designing interventions that can address those problems. But unpacking complexity doesn’t necessarily directly lead to better interventions. Though distinguishing expressive hate speech from persuasive hate speech may make sense conceptually and might be analytically useful when examining individual cases, I’m not sure that we can distinguish these forms at scale. Identifying expressive hate speech, especially outside of the clearest examples, is challenging for a number of reasons, not least of which is that it’s difficult to know what effects an instance of hate speech will have on its targets until after they have experienced those effects (at which point it’s too late for prevention and we have to think about reducing negative effects after the fact). Instead, this distinction can help us better understand the shortcomings of the marketplace of ideas approach to thinking about harmful speech.
There are more fundamental hurdles to leveraging the expressive-persuasive distinction for content moderation purposes, though. Most importantly, most instances of hate speech likely contain some degree of both expressive content and persuasive content; there are probably relatively few instances that are pure expression or pure persuasion.9 The primary utility of deploying this distinction in cases of mixed persuasion and expression is in recognizing that any intervention is likely to only address some of the harms associated with a particular example of hate speech.
1 As with so many concepts related to extremism, there is considerable debate about the definition of hate speech among experts. Further complicating this, some (but far from all) nation states have laws criminalizing hate speech that also attempt to define the term but within the specific legal and sociopolitical contexts of the nation in question. For an approachable overview, see Caitlin Ring Carlson’s Hate Speech.
2 Unless, it seems, the government action is led by Republicans alleging anti-conservative bias in content moderation practices based on only anecdotal evidence. The Supreme Court heard challenges to state laws along these lines from Texas and Florida in February 2024. https://www.texastribune.org/2024/02/26/texas-social-media-law-supreme-court/.
3 Maitra and McGowan summarize some other objections to certain flavors of the “marketplace of ideas” framework, including the idea that such counter-speech expectations often fall on those who are targeted by hate speech even though these folks tend to suffer from discrimination that can limit their ability to talk about issues related to hate speech.
4 One prominent example is Kevin MacDonald, an evolutionary psychologist who has openly advocated white supremacy for years and retired as a full professor from the faculty of California State University – Long Beach in 2014. https://daily49er.com/news/2014/04/14/controversial-psychology-professor-to-retire-in-the-fall/; https://www.irehr.org/2016/08/30/who-is-kevin-macdonald/.
5 The idea that speech itself can directly harm has been the subject of scholarly attention for decades. For example, Richard Delgado wrote about what would later be called “assaultive speech” in 1982. See https://scholarship.law.ua.edu/fac_articles/360/.
6 “I do not like people who belong to this racial group” is perhaps the mildest of negative sentiments implied by the use of a racial slur. In reality, the negative sentiment is likely to be more severe, along the lines “I believe people who belong to this racial group are inherently inferior to other people” or “I believe people who belong to this racial group do not deserve basic human dignity.”
7 The tendency to forget negation leads to a very nuts-and-bolts semantic suggestion: counter-speech should use positive antonyms (e.g., “trans women are women”) rather than negating the original words of the speech being countered (e.g., “trans women are not men”). See https://search.issuelab.org/resource/misinformation-and-fact-checking-research-findings-from-social-science.html, pp. 14-15.
8 Expressive hate speech might refer to the same types of harmful speech as the term “assaultive speech,” a question I intend to take up in future work.
9 Pure persuasion might be more common than pure expression: propaganda containing hate speech could be understood as pure persuasion, especially if the intended audience of that propaganda is a member of the speaker’s in-group rather than a member of their out-group.
Sam Jackson is a senior research fellow in the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies. His research focuses on antigovernment extremism in the U.S., conspiracy theories, extremism online, and contentious activity on the internet more broadly.
This article is republished from the Center on Terrorism, Extremism, and Counterterrorism (Middlebury). Read the original article.