By Alice E. Marwick
This post is part one of a two-part series.
In November 2016, Twitter shut down the accounts of numerous alt-right leaders and white nationalists. Richard Spencer, the head of the National Policy Institute and a vocal neo-Nazi, told the LA Times it was a violation of his free speech. “[Twitter needs] to issue some kind of apology and make it clear they are not going to crack down on viewpoints. Are they going to now ban Donald Trump’s account?”
Old and new media organizations are scrambling to define acceptable speech in the era of President Trump. But Twitter is in a particularly poor position. The prevalence of hateful speech and harassment on the platform reportedly scared off both Disney and Salesforce as potential acquirers. The company has dealt with one PR disaster after another, from Ghostbusters star Leslie Jones temporarily leaving the platform after being harassed and doxed, to a viral video of obscene and abusive tweets sent to female sports journalists, to pro-Trump accounts sending Newsweek reporter Kurt Eichenwald animated GIFs designed to induce epileptic seizures. A site once touted as “the free speech wing of the free speech party” is now best known for giving a voice to Donald Trump and #gamergaters.
At the same time, attempts by Twitter and sites with similar histories of free speech protections to regulate the more offensive content on their sites have been met with furious accusations of censorship and pandering to political correctness. This enables the alt-right to position themselves as victims, and left-wing SJWs (“social justice warriors”) as aggressors. Never mind that private companies can establish whatever content restrictions they wish, and that virtually all of these companies already have such guidelines on the books, even if they are weakly enforced. When technology companies appear to abandon their long-standing commitment to the First Amendment due to the concerns of journalists, feminists, or activists, the protests of those banned or regulated can seem sympathetic.
How did we get to the point where Twitter eggs spewing anti-Semitic insults are seen as defenders of free speech? To answer this question, we have to delve into why sites like Reddit and Twitter have historically been fiercely committed to freedom of speech. There are three reasons:
- The roots of American tech in the hacker ethic and the ethos that “information wants to be free”
- CDA 230 and the belief that the internet is the last best hope for free expression
- A belief in self-regulation and a strong antipathy to government regulation of the internet
But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.
To better understand this, we need to start with the origin story of the modern internet. Like many technology stories, it takes place in Northern California.
The Secret Hippie Hacker Past of the Internet
The American internet was birthed from a counterculture devoted to freedom, experimentation, transparency, and openness. While the internet originated with the military (ARPANET was commissioned and funded by the Department of Defense), the early hardware and applications that helped the technology thrive were mostly created by academics, geeks, hackers, and enthusiasts.
For instance, in the post-hippie Bay Area, early microcomputer aficionados formed the Homebrew Computer Club, freely sharing information that enabled its members to create some of the first personal computers. When Steve Wozniak and Steve Jobs built the first Apple computer, they gave away its schematics at the Club. (Woz regularly helped his friends build their own Apples.) In the 1980s, people at elite universities and research labs built on ARPANET’s infrastructure to create mailing lists, chat rooms, discussion groups, adventure games, and many other textual ancestors of today’s social media. These were all distributed widely, and for free.
Today, it boggles the mind that people would give away such valuable intellectual property. But the members of this early computing culture adhered to a loose collection of principles that journalist Steven Levy dubbed “the hacker ethic”:
> As I talked to these digital explorers, ranging from those who tamed multimillion-dollar machines in the 1950s to contemporary young wizards who mastered computers in their suburban bedrooms, I found a common element, a common philosophy that seemed tied to the elegantly flowing logic of the computer itself. It was a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost to improve the machines and to improve the world. This Hacker Ethic is their gift to us: something with value even to those of us with no interest at all in computers.
Early technology innovators deeply believed in these values of “sharing, openness, and decentralization.” The Homebrew Computer Club’s motto was “give to help others.” Hackers believed that barriers to improving technology, contributing to knowledge, and innovating should be eliminated. Information should instead be free so that people could improve existing systems and develop new ones. If everyone adhered to the hacker ethic and contributed to their community, they would all benefit from the contributions of others.
Now, obviously, these ideals only work if everyone adheres to them. It’s easy to take advantage of other people’s work (economists call this the “free rider problem”). And the hacker ethic doesn’t account for people who aren’t merely lazy or selfish, but who deliberately want to harm others.
These beliefs were built into the very infrastructure of the internet, and for a time they worked. But regulation was always necessary.
Regulating the Early Internet
On April 12, 1994, a law firm called Canter and Siegel, known as the “Green Card Lawyers,” sent the first mass commercial spam, cross-posted to some 6,000 USENET groups, advertising their immigration law services. This inspired virulent hatred. Internet users organized a boycott, jammed the firm’s fax, e-mail, and phone lines, and set an autodialer to call the lawyers’ home 40 times a day. Canter and Siegel were kicked off three ISPs before finally finding a home and publishing the early e-marketing book How to Make a Fortune on the Information Superhighway. Despite these dubious successes, the offense was seen as so inappropriate that Canter was ultimately disbarred in 1997, partly because of the e-mail campaign; William W. Hunt III of the Tennessee Board of Professional Responsibility said, “We disbarred him and gave him a one-year sentence just to emphasize that his e-mail campaign was a particularly egregious offense.”
Early internet adopters were highly educated and relatively young, with above-average incomes, but, more importantly, many of them were deeply invested in the anti-commercial nature of the emerging internet and the “information wants to be free” hacker ethos. Any attempted use of the network for commercial gain was strongly discouraged, particularly uses that violated “netiquette,” the social mores of the internet. Netiquette was a set of community-determined guidelines enforced through both norms (people explicitly calling each other out when they violated community standards) and technical means (software that allowed users to block other users). Most USENET groups had lengthy Frequently Asked Questions documents that spelled out explicitly what was encouraged, tolerated, and disallowed. And users who broke these rules were often sharply reprimanded.
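Those “technical means” were strikingly simple. Many USENET newsreaders supported killfiles: local pattern lists that suppressed messages from named senders or on named subjects. The sketch below is a minimal, hypothetical reconstruction in Python; the message format and rule names are invented for illustration, not taken from any particular newsreader:

```python
# Illustrative sketch of killfile-style filtering, in the spirit of early
# USENET newsreaders. The KILLFILE structure and message format here are
# hypothetical, not any specific client's implementation.

# A "killfile" is just a set of patterns the reader refuses to display.
KILLFILE = {
    "from": {"spammer@example.com"},     # block specific senders
    "subject": {"MAKE MONEY FAST"},      # block subject-line keywords
}

def is_killed(message: dict) -> bool:
    """Return True if the message matches any killfile rule."""
    sender = message.get("from", "").lower()
    if sender in {s.lower() for s in KILLFILE["from"]}:
        return True
    subject = message.get("subject", "").upper()
    return any(pattern in subject for pattern in KILLFILE["subject"])

messages = [
    {"from": "alice@example.edu", "subject": "Re: FAQ update"},
    {"from": "spammer@example.com", "subject": "MAKE MONEY FAST!!!"},
]

for msg in messages:
    if not is_killed(msg):
        print(msg["subject"])   # only "Re: FAQ update" is shown
```

The blocking was strictly local: the offending post stayed on the network, but it vanished from your own view. Norms decided what counted as a violation; software merely enforced the decision.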
The extent of the backlash against Canter and Siegel’s spam shows not only how egregious a violation of netiquette their messages were, but also that their actions threatened the very utility of USENET. If the newsgroups were cluttered with spam, useful messages would be drowned out, interesting discussion would end, and key members would leave.
Fast-forward a few years, and email spam had taken over the inbox. Many internet users were on dial-up connections and resented having to pay to download useless messages about Rolexes and Viagra. By the mid-aughts, email, long a backbone of online communication, had become markedly less useful. So technology companies and computer scientists worked together to develop sophisticated email filters. They don’t catch everything, but people who use commercial email services like Gmail or Hotmail rarely see a spam message in their inbox. The problem was solved technically.
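The details of commercial filters are proprietary, but the technique that made filtering practical in the early 2000s was Bayesian filtering (popularized by Paul Graham’s 2002 essay “A Plan for Spam”): score a message by how strongly its words are associated with known spam versus known legitimate mail. Here is a toy naive-Bayes version in Python; the training data is invented for illustration and bears no relation to how Gmail or Hotmail actually score mail:

```python
# A toy naive-Bayes spam classifier, in the spirit of early Bayesian
# filtering. Real filters use vastly larger corpora and many more signals;
# this sketch and its tiny training sets are purely illustrative.
from collections import Counter

spam_docs = ["buy cheap rolex now", "cheap viagra buy now"]
ham_docs  = ["meeting notes attached", "lunch meeting tomorrow"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.lower().split())

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message: str) -> float:
    """Estimate P(spam | words), assuming equal priors."""
    p_spam = p_ham = 1.0
    for w in message.lower().split():
        # add-one smoothing so unseen words don't zero out the product
        p_spam *= (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham  *= (ham_counts[w] + 1) / (ham_total + len(vocab))
    return p_spam / (p_spam + p_ham)

print(spam_score("cheap rolex"))       # high score: likely spam
print(spam_score("meeting tomorrow"))  # low score: likely legitimate
```

The design point worth noticing is that the filter encodes community judgments (which messages users flagged as spam) into software, the same norms-into-code move the killfile made a decade earlier.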
In both of these situations, no one argued that the technical and normative measures against spam violated the free speech rights of spammers. Instead, internet users recognized that the value of their platforms was rooted in their ability to foster communication, and spam was a serious threat to that. It’s also worth noting that these problems were solved not through government regulation, but through collective action.
But today, we face a different set of problems related to free speech online. On one hand, we have aggressive harassment, often organized by particular online communities. On the other, we have platforms that are providing spaces for people with unarguably deplorable values (like neo-Nazis) to congregate. And this is particularly true for sites like Twitter and Reddit, which prioritize freedom of expression over virtually all other values.
Thanks to harryh for editing and Lindsay Blackwell & Whitney Phillips for inspiring some of the thoughts behind this piece.
Alice E. Marwick is the former Director of the McGannon Communication Research Center and Assistant Professor of Communication and Media Studies at Fordham University. She is a 2016–2017 fellow at Data & Society. This article was originally published on points.datasociety.net and is part one of a two-part series.