
Before the
FEDERAL COMMUNICATIONS COMMISSION
Washington, DC 20554

In the Matter of Section 230 of the Communications Act of 1934

Reply Comments on the NTIA Petition for Rulemaking and Section 230 of the Communications Act of 1934

By: Niall Ferguson, The Hoover Institution

I. Introduction and Interest of the Author

I am the Milbank Family Senior Fellow at the Hoover Institution, Stanford University, and a senior faculty fellow of the Belfer Center for Science and International Affairs at Harvard. In recent years, I have begun researching the role of online networks in the public sphere. My most recent book, The Square and the Tower, sought to put this issue into historical perspective, as did my three-part television adaptation, Networld, which aired on PBS in March 2020.

Many of the comments in the NTIA’s proceeding have raised issues such as:

1. To what extent the market share of large tech platforms such as Facebook, Google, and Twitter makes them gatekeepers or part of the public sphere.1
2. Whether Section 230 was meant to immunize censorship of political speech.2
3. Whether changes to the internet ecosystem since 1996 create ambiguities in the application of the statute.3

I hope these comments help elucidate these questions.

II. Extracts from Recent Research on the Subject (not yet published)

1. Market share and gatekeeping

The network platform companies are astonishingly profitable businesses—not least because users have handed them so much of their personal data for nothing, allowing advertisements to be targeted more precisely than ever before. As a commenter on the website MetaFilter memorably observed in 2010: “If you are not paying for it, you’re not the customer; you’re the product being sold.” That was neat but not quite true. Users of network platforms enjoy access to numerous very useful services for which they pay nothing, aside from the distraction of on-screen advertisements.

1 See, e.g., Comments of Jeremy Carl, The Claremont Institute, at 2 (arguing that “As FCC Chairman Ajit Pai has repeatedly noted, Silicon Valley companies are the new gatekeepers—with more power over the marketplace of ideas than any tech platform,” such as broadband providers, which have traditionally faced stricter regulation); cf. Comments of TechFreedom at iii (“Broadband Internet Access Service providers are utterly unlike those NTIA proposes for social media.”).
2 See, e.g., Comments of the Internet Accountability Project (“It is clear that Section 230 was not designed to provide blanket immunity for companies that use their power to censor political speech.”).
3 See, e.g., Comments of Robert Seamans and Hal Singer (“When the legislation was drafted in 1996, the tech platforms had not yet integrated into adjacent content markets.”).

People may bemoan the death of privacy,4 but they would not so readily hand over their data if they did not receive something desirable in return.5 According to one estimate, the monthly benefit of Facebook to the average American user is equivalent to $48.6 It did not have to be this way: the network platforms might have opted to finance themselves through fees, subscriptions or donations.7 But the advertising model won. In terms of its revenues, Facebook today is primarily a vast, algorithmic billboard, as is Google’s parent company Alphabet. As an obviously amused Mark Zuckerberg explained to Utah’s antediluvian Republican Senator Orrin Hatch in 2018, “Senator, we run ads.”8 He meant that what Facebook sells is the potential interest of users in advertisements aimed at them on the basis of Facebook’s vast treasure-trove of data. Seven of the world’s ten most highly valued companies at the end of 2019 were information technology businesses: five American (Apple, Microsoft, Alphabet, Amazon and Facebook) and two Chinese (Alibaba and Tencent). It was not strictly speaking software that “ate the world,” in the venture capitalist Marc Andreessen’s famous phrase, as Apple is still thought of as a hardware company. It would be more accurate to say that network platforms ate the world, as the market dominance of all these companies arises from network effects and the operation of Zipf’s Law (which can be summed up as “winner takes nearly all”). Amazon ate bookselling and now captures around a third of all U.S. online retail spending.9 In the words of Scott Galloway, “82 per cent of American homes have Amazon Prime, more than voted in the 2016 election, have a pet, attend church, or decorate a Christmas tree.”10 Google ate search: it accounts for 88 per cent of the US search engine market, and 95 per cent of all mobile searches. Apple ate music (along with Alphabet’s YouTube and Spotify). YouTube ate television. Above all, Google and Facebook ate advertising: the two companies captured a combined 60 per cent of US digital ad spending in 2018, and the following year Facebook’s ad revenues grew by 26.6 per cent.11 “For many years,” according to a New Yorker profile, “[Mark] Zuckerberg ended Facebook meetings with the half-joking exhortation ‘Domination!’” He stopped doing this because in European law “dominance” is a term used to describe a business monopoly. However, he remains unabashed about Facebook’s appetite for market share. “There’s a natural zero-sumness,” he told an interviewer in September 2018.
Revealingly, the figure he most admires in history is the Emperor Augustus:

You have all these good and bad and complex figures [in ancient Rome]. I think Augustus is one of the most fascinating.

4 Stuart A. Thompson and Charlie Warzel, “Twelve Million Phones, One Dataset, Zero Privacy,” New York Times, December 19, 2019; Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It,” New York Times, January 18, 2020.
5 Bowman Heiden and Nicolas Petit, “‘Privacy Absolutism’ Masks How Consumers Actually Value Their Data,” The Hill, December 12, 2019.
6 Erik Brynjolfsson, Avinash Collis, and Felix Eggers, “Using Massive Online Choice Experiments to Measure Changes in Well-being,” PNAS, 116, 15 (April 9, 2019), pp. 7250–7255: https://doi.org/10.1073/pnas.1815663116.
7 Greg Ip, “The Unintended Consequences of the ‘Free’ Internet,” Wall Street Journal, November 14, 2018.
8 “Senator Asks How Facebook Remains Free, Zuckerberg Smirks: ‘We Run Ads,’” NBC News, April 10, 2018: https://www.nbcnews.com/video/senator-asks-how-facebook-remains-free-zuckerberg-smirks-we-run-ads-1207622211889.
9 Scott Galloway, “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine,” Esquire, February 8, 2018.
10 Scott Galloway, “Fire and Fawning,” July 24, 2020.
11 https://www.emarketer.com/content/global-digital-ad-spending-update-q2-2020.

Basically, through a really harsh approach, he established two hundred years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today. Two hundred years feels unattainable. [But] that didn’t come for free, and he had to do certain things.12

Facebook is indeed a vast empire, with more users than Christianity has adherents, but it is a lean one, with a workforce of fewer than 50,000. Nor does the new Caesar render up much to the old one: between 2007 and 2015, according to an estimate by S&P Global Market Intelligence, Facebook paid just 4 per cent of its profits in federal, state, local and foreign taxes. Amazon paid only 13 per cent, Google 16 per cent and Apple 17 per cent. (The average S&P 500 company paid 27 per cent.)13 The benefits of a networked world are not to be dismissed lightly. When the COVID-19 pandemic struck in 2020, few people complained that the networked platforms could deliver to their homes, at minimal risk of contagion, more or less everything under the sun, including (in Facebook’s case) zero-cost, unlimited communication with family and friends. Yet it was already obvious by 2019 that there were a great many unforeseen disadvantages to a networked world. The rise of the Internet had coincided suspiciously with a decline in productivity growth.14 Traditional social relationships seemed to have deteriorated as online relationships had grown.15 Not everyone viewed positively the shift to online dating.16 The rise of social media had coincided with and contributed to a rise in mental health problems among teenagers.17 (The biologist Deborah M. Gordon suggested that online social networks replicated on a vast scale many of the more insidious features of friendship circles amongst girls in a middle school.18) Social media turned out to be—as indeed they were designed to be—addictive.19 There had been an epidemic of indignant disagreement, moral grandstanding and outrage mobs.20 In place of what Walter Lippmann called “manufactured consent” or consensus, society had broken up into ever smaller and more mutually antagonistic factions.21 On social media, a horde of memetic tribes fought a soul-destroying culture war.22 It became clear that the network platforms were not only actively seeking to attract children as users but also exposing them to harmful content.23 But these were not the worst consequences of the networked world, as we shall see. Prior to 2016, remarkably little attention was paid to any of this. The posture adopted by the big technology companies might best be summed up as faux naïf.

12 Evan Osnos, “Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?” New Yorker, September 10, 2018.
13 David Leonhardt, “Companies Avoiding Taxes,” New York Times, October 18, 2016.
14 Derek Thompson, “The Real Trouble with Silicon Valley,” Atlantic (January/February 2020).
15 Amy Orben, Tobias Dienlin, and Andrew K. Przybylski, “Social Media’s Enduring Effect on Adolescent Life Satisfaction,” PNAS (2019): www.pnas.org/cgi/doi/10.1073/pnas.1902058116.
16 Michael Rosenfeld, Reuben J. Thomas and Sonia Hausen, “Disintermediating Your Friends: How Online Dating in the United States Displaces Other Ways of Meeting,” July 15, 2019.
17 Jean M. Twenge, “Have Smartphones Destroyed a Generation?” Atlantic (September 2017). See also Holly B. Shakya and Nicholas A. Christakis, “The More You Use Facebook, the Worse You Feel,” Harvard Business Review, April 10, 2017; David Ginsberg and Moira Burke, “Hard Questions: Is Spending Time on Social Media Bad for Us?” Facebook, December 15, 2017.
18 Deborah M. Gordon, “Local Links Run the World,” Aeon, February 1, 2018.
19 Yascha Mounk, “The Problem Isn’t Twitter. It’s That You Care About Twitter,” Atlantic, April 29, 2019. Also Mark Haddon, “Why Novelist Mark Haddon Lost Faith in Twitter,” Financial Times, May 12, 2019.
20 Jonathan Haidt and Tobias Rose-Stockwell, “The Dark Psychology of Social Networks,” Atlantic (December 2019).
21 Renée DiResta, “Mediating Consent,” December 17, 2019.
22 Peter N. Limberg and Conor Barnes, “The Memetic Tribes of Culture War 2.0,” Medium, September 13, 2018.
23 James Bridle, “Something is Wrong on the Internet,” Medium, November 6, 2017.

“Don’t be evil” was the motto adopted by Google in July 2001, after a brainstorming session between Eric Schmidt and early employees shortly before he took over as chief executive. As Schmidt recalled in 2006, referring to the company’s decision to offer a censored version of its search services in China, “We actually did an ‘evil scale’ and decided [that] not to serve at all was worse evil.”24 By contrast, close involvement in the administration of Barack Obama—including energetic efforts to secure his reelection in 2012—was deemed to lie at the other end of the evil scale. According to one estimate, there were 252 job moves between Google and the Obama administration from its inception to early 2016, and 427 meetings between White House staff and Google employees from 2009 to 2015.25 The closeness of the relationship between Silicon Valley and the Democratic Party was not unknown at the time, but it was uncontroversial. Donald Trump’s victory in the November 2016 presidential election changed that, not just because the leaders of the big technology companies had confidently expected him to lose, but also because it was immediately apparent to them that their core products had either helped Trump win or failed to avert Clinton’s defeat. The fact that Trump had dominated Clinton on both Facebook and Twitter throughout the campaign had been overlooked by most political pundits because his huge lead in follower numbers was at odds with all opinion polls. Likewise, experts ignored or discounted his even larger lead over her in terms of Google searches. Brad Parscale, Trump’s digital media director, put it well: “These social platforms are all invented by very liberal people on the west and east coasts. And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”26 A video recording of a post-election internal meeting at Google confirms this.
Senior executives lined up to express their dismay at Trump’s victory and their allegiance to Clinton.27 As President Obama told David Letterman, his own successful use of social media in 2008 had left him and other Democrats with “a pretty optimistic feeling about it. … What we missed was the degree to which people who are in power [sic], special interests, foreign governments, etcetera, can in fact manipulate that and propagandize.”28 Speaking at MIT in February 2017, Obama suggested that “the large platforms … have to have a conversation about their business model that recognizes they are a public good as well as a commercial enterprise.” It was, he said, “very difficult to figure out how democracy works over the long term” when “essentially we now have entirely different realities that are being created, with not just different opinions but now different facts—different sources, different people who are considered authoritative.”29 Obama was right that we have a problem, though it was notable that he only noticed it when his own party’s candidate was beaten at the game he had won twice. The same belated realization could be seen in Silicon Valley, too. “I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,” Evan Williams, one of the founders of Twitter, told the New York Times in 2017. “I was wrong about that.”30 Indeed, he was. Two junior content moderators summed it up well in 2019.

24 Stacy Cowley, “Google CEO on Censoring: ‘We Did an Evil Scale,’” InfoWorld, January 27, 2006.
25 Adam J. White, “Google.gov,” New Atlantis, Spring 2018.
26 William J. Feltus, Kenneth M. Goldstein and Matthew Dallek, Inside Campaigns: Elections through the Eyes of Political Professionals (Thousand Oaks: CQ Press, 2019), p. 279.
27 https://www.breitbart.com/tech/2018/09/12/leaked-video-google-leaderships-dismayed-reaction-to-trump-election/.
28 The allusion is to Eli Pariser, The Filter Bubble: What the Internet Is Hiding From You (New York: Penguin, 2011).
29 White, “Google.gov.”
30 David Streitfeld, “‘The Internet Is Broken’: @ev Is Trying to Salvage It,” New York Times, May 20, 2017.

“I think when we all started,” said Olivia, “it was sort of why wouldn’t we defend free speech? Why wouldn’t we defend the exchange of ideas and open dialogue? The best ideas should win out. But no one thought it would turn into what it is.” Adam agreed. “Remember ‘We’re the free speech wing of the free speech party’ [an early Twitter slogan]? How vain and oblivious does that sound now? Well, it’s the morning after the free speech party, and the place is trashed.”31 The impacts of the Internet have proved to be akin to those of the printing press after it spread throughout Europe from the late 15th century. The benefits of much cheaper, faster and wider dissemination of ideas were offset by the costs of 130 years of religious conflict.32 Yet the network of printing presses has remained relatively distributed, with only limited concentrations of ownership (for example, of newspapers or magazines). Today, by contrast, a handful of very large and very profitable corporations dominate the online public sphere in most countries in the world. Subject to the most minimal regulation in their country of origin—far less than the terrestrial television networks in their heyday—they tend to pollute national discourse with a torrent of fake news and extreme views. The effects on the democratic process all over the world are potentially destabilizing. Moreover, the vulnerability of the network platforms to outside manipulation poses a serious new challenge to national security. Yet attempts by the network platforms to regulate themselves better have led to allegations that they are restricting free speech. It was already obvious before 2016 to anyone paying attention that the public sphere had been not just transformed but destabilized, and not only in the United States. Something needed to change. But what? What exactly did it imply to say that the network platforms were “a public good as well as a commercial enterprise”? Should the tech giants be broken up, as proponents of a revamped antitrust law argued? Should they be subject to tighter regulation, of the sort being pioneered by the European Union? Or should they be more exposed to litigation by those harmed by the content they host? Could that be done without a reduction in free speech, whether politically skewed or not? Finally, how could we mitigate the strategic vulnerabilities unwittingly created by network platforms—in particular, the exposure of democracies to the disruptive tactics of information warfare? More than two years ago, it was possible to ask Nikolai Chernyshevsky’s question (which Lenin stole): “What is to be done?”33 The remarkable thing was that, even as a new presidential election year dawned, almost nothing had been done.

2. Section 230

The network platforms are, in Tim Wu’s phrase, “attention merchants”: the heirs of the big twentieth-century media companies, such as Hearst, which used news and other mostly non-fictional content to attract readers’ attention, selling space alongside articles and photographs to advertisers.34 They have a lot of attention to sell—more than any print publishing company in history. In 2019 the average American spent 6 hours and 35 minutes a day using digital media, more than television, radio and print put together.35

31 Alex Feerst, “Your Speech, Their Rules: Meet the People Who Guard the Internet,” Medium, March 29, 2019.
32 Ferguson, The Square and the Tower.
33 Niall Ferguson, “What Is to Be Done? Safeguarding Democratic Governance in the Age of Network Platforms,” in Hoover Institution, Governance in an Emerging World, Fall Series, Issue 318: The Information Challenge to Democracy (2018).
34 Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (New York: Knopf, 2016).
35 eMarketer, September 2017.

A rising share of that digital media time was spent on mobile devices. In 2016 it was estimated that the average smartphone user clicked, tapped or swiped more than 2,600 times per day.36 Most of what is said online is inane. But a significant proportion is “news,” i.e., content that purports to be true information about current affairs. In 2017, two thirds of American adults said they got news from social media sites. Around three quarters of Twitter users got some of their news from the application, as did around two thirds of Facebook users and around a third of YouTube users. In all, 45 per cent of American adults got news from Facebook, 18 per cent from YouTube and 11 per cent from Twitter. A significant share of younger users also got their news from Instagram and Snapchat.37 In mid-2018 Facebook and Google were still responsible for two thirds of news publishers’ referral traffic, even after a deliberate effort by Facebook to reduce the importance of news in users’ News Feeds.38 A Pew study showed that, at the end of 2019, 18 per cent of American adults relied primarily on social media for political news. Amongst those aged 30 to 49, the share was 40 per cent; amongst those aged 18 to 29 it was 48 per cent. Those who relied on social media for news were less engaged in the news cycle than those who preferred traditional news media or news websites, less well informed and more likely to believe conspiracy theories. They also worried less about fake news as a problem.39 The network platforms have become aggregators of news for the simple reason that it engages users’ attention. The key point is that the network platforms customize the news that users see in order to maximize their engagement. When Mark Zuckerberg talked in 2013 of making Facebook “the best personalized newspaper in the world,” this was what he meant. News Feed is a “personalized collection of stories,” of which a user sees an average of 220 per day. Advertising and Pages, dedicated profiles for groups or causes, are sources of stories in News Feed. Anyone can buy ads to promote Pages, using an automated interface. Whenever one of Facebook’s users opens the Facebook app, a personalization algorithm sorts through all the posts that a person could potentially see, serving up and sorting the fraction it thinks he or she would be most likely to share, comment on, or like. (Shares are worth more than comments, which are both worth more than likes.) Around two thousand pieces of user data (“features”) are used by Facebook’s machine-learning system to make those predictions. A somewhat similar process works when a user enters words in the Google search box: the user’s individual search history, geographic location, and other demographic information affect the content and ranking of the results. The problem is that the algorithms are not prioritizing truthfulness or accuracy but user engagement.
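The ranking logic just described can be caricatured in a few lines of Python. This is a hedged sketch only: the weights, names and prediction stub below are invented for exposition, and the real system is far more complex. The only assumptions taken from the account above are that a machine-learning model predicts per-action engagement probabilities from user “features,” that predicted shares outweigh comments, which outweigh likes, and that nothing in the objective scores truthfulness.

    # Illustrative sketch of engagement-ranked feed ordering. Not Facebook's
    # actual system; all weights and function names here are invented.

    WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}  # shares worth most

    def predict_probability(user_features: dict, post: dict, action: str) -> float:
        # Stand-in for a machine-learning model trained on ~2,000 user
        # "features"; a constant here so the sketch runs end to end.
        return 0.5

    def engagement_score(user_features: dict, post: dict) -> float:
        # Combine predicted engagement probabilities into one ranking score.
        return sum(WEIGHTS[action] * predict_probability(user_features, post, action)
                   for action in WEIGHTS)

    def rank_feed(user_features: dict, candidates: list) -> list:
        # Order candidate posts by predicted engagement, highest first.
        # Note that no term in the score measures truthfulness or accuracy.
        return sorted(candidates,
                      key=lambda post: engagement_score(user_features, post),
                      reverse=True)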
For example, on October 1, 2017, Google directed users towards a false story alleging that the perpetrator of the Las Vegas massacre on that date was a member of the far-left group Antifa. A study by the Wall Street Journal and former YouTube engineer Guillaume Chaslot showed that a user searching for “The Pope” on YouTube was directed to videos with titles such as “How Dangerous is the Pope?”, “What if the Pope was assassinated?” and “BREAKING: They caught the Pope.”40

36 “Here’s How Many Times We Touch Our Phones Every Day,” Business Insider, July 13, 2016: https://www.businessinsider.com/dscout-research-people-touch-cell-phones-2617-times-a-day-2016-7.
37 Elisa Shearer and Jeffrey Gottfried, “News Across Social Media Platforms 2017,” Pew Research Center, September 6, 2017.
38 Jon Gingerich, “Google Overtakes Facebook for Referral Traffic,” O’Dwyer’s, June 8, 2018.
39 Amy Mitchell et al., “Americans Who Mainly Get Their News on Social Media Are Less Engaged, Less Knowledgeable,” Pew Research Center: https://www.journalism.org/2020/07/30/americans-who-mainly-get-their-news-on-social-media-are-less-engaged-less-knowledgeable/.

As we shall see, these and other features of the network platforms had historic consequences in 2016, when they played a decisive role in the election of Donald Trump as U.S. president. But it would be a mistake to focus too much on that election result. Fake news is an industry that predated 2016, and it was not disbanded the morning after the election. Fake news creators like Paul Horner continued to use Facebook’s “Trending Topics” feature as a vector for spreading fake stories until it was wound up in 2018.41 Instagram was and remains a haven for conspiracy theorists such as the group known as QAnon.42 How did we arrive at this state of affairs—one in which such important components of the public sphere could operate solely with regard to their own profitability as attention merchants? The answer lies in the history of American Internet regulation. A key early decision was to define the Internet as a Title I information service, and therefore fundamentally different from the old telephone network, which was governed by Title II’s intrusive monopoly utility regulations. (The Internet was briefly re-classified as a Title II service between 2015 and 2017, but no major regulatory change occurred in that period.) Another important decision was to give Internet companies very lenient treatment when they violated copyright: the Digital Millennium Copyright Act’s notice-and-takedown provisions minimized the penalties to the network platforms of making the intellectual property of others available gratis to their users. A third vital decision—the most important of all—was enshrined in Section 509 (codified as Section 230) of Title V of the 1996 Telecommunications Act, which was enacted after a New York court held the online service provider Prodigy liable for a user’s defamatory posts. Previously, managing content had triggered classification as a publisher—and hence civil liability—creating a perverse incentive not to manage content at all. Thus, Section 230(c), “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” was written to encourage nascent firms to protect users and prevent illegal activity without incurring massive content management costs. It states:

1. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
2. No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.

In essence, Section 230 gave and still gives websites immunity from liability for what their users post—or, to be more precise, it “immunize[s] platforms from liability both for underfiltering under Section 230(c)(1) and for ‘good faith’ over-filtering under Section 230(c)(2).” The net and surely unintended result of this legislative framework is that some of the biggest companies in the world are utilities when they are acting as publishers, but publishers when acting as utilities, in a way reminiscent of Joseph Heller’s Catch-22. The argument for Section 230, as articulated by the Electronic Frontier Foundation, was that, “given the sheer size of user-generated websites … it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site.
Rather than face potential liability for their users’ actions, most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online.”43

40 Jack Nicas, “How YouTube Drives People to the Internet’s Darkest Corners,” Wall Street Journal, February 7, 2018.
41 Renée DiResta, “The Return of Fake News,” Wired, June 5, 2019: https://www.wired.com/story/the-return-of-fake-news/.
42 Taylor Lorenz, “Instagram Is the Internet’s New Home for Hate,” Atlantic, March 21, 2019.

Oregon Senator Ron Wyden put it even more strongly: “If websites, ISPs, text message services, video game companies and any other type of platform were held liable for every word and deed they facilitated or somehow enabled, the entire system would shut down … collaboration and communication on the internet would simply cease.”44 In effect, Section 230 split the difference between liability, which would have meant restriction, and complete lack of curation, which would have led to a torrent of “filth, insults, and pornography.” Thus, the argument runs, “hobbling 230” would “stifle the competition that got us to today’s rich internet in the first place.”45 According to one recent and influential account:

Platforms moderate content because of a foundation in American free speech norms, corporate responsibility, and the economic necessity of creating an environment that reflects the expectations of their users. Thus, platforms are motivated to moderate by both of §230’s purposes: fostering Good Samaritan platforms and promoting free speech. … [They] should be thought of as operating as the New Governors of online speech. These New Governors are part of a new triadic model of speech that sits between the state and speakers-publishers. They are private, self-regulating entities that are economically and normatively motivated to reflect the democratic culture and free speech expectations of their users.46

3. Changes to the internet ecosystem since 1996

Note that, under the present dispensation, the “New Governors” have the power (not the obligation) to “curate” content that they host. They do so, it is argued, “out of a sense of corporate social responsibility, but also, more importantly, because their economic viability depends on meeting users’ speech and community norms.” This curation began some time ago with the exclusion of content that no one would publicly condone. For years, the big technology companies have filtered out child pornography using an automated hash database assembled by the National Center for Missing and Exploited Children, so that, as soon as an illegal photo or video is uploaded to one site, it is detected and excluded from all platforms. Facebook, Twitter, YouTube and Microsoft have a global working group that applies somewhat similar technology to find and filter out terrorist content. In November 2017, for example, YouTube took down videos of Anwar al-Awlaki, the jihadist cleric killed by a U.S. drone strike in Yemen in 2011. Video “fingerprinting” did not remove al-Awlaki altogether from the platform, but it substantially reduced the number of videos relating to him. However, the process of removing or at least downgrading offensive content has not stopped with recognized advocates of pedophilia or jihad. In January 2018, for example, YouTube removed from its Google Preferred platform the channels of Logan Paul, a YouTube “influencer” with almost 16 million subscribers, after he posted a video showing a suicide victim in Japan. “Demonetizing” YouTube videos, so that they are not promoted on the platform and their creators receive no share of any advertising revenue, is a powerful sanction short of outright prohibition.
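At its core, the shared hash-database screening described above is a set-membership test against fingerprints of known illegal files. The sketch below is an illustration under simplifying assumptions, with all names invented: it uses an exact cryptographic hash so that it is self-contained and runnable, whereas production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, and the fingerprint list is distributed by NCMEC rather than built by any one platform.

    import hashlib

    # Fingerprints of known illegal files, as shared across platforms.
    # Invented stand-in: real deployments use perceptual hashing, not SHA-256.
    known_bad_fingerprints = set()

    def fingerprint(file_bytes: bytes) -> str:
        # Exact-match stand-in for a perceptual hash such as PhotoDNA.
        return hashlib.sha256(file_bytes).hexdigest()

    def screen_upload(file_bytes: bytes) -> bool:
        # True means the upload should be blocked before it is published.
        return fingerprint(file_bytes) in known_bad_fingerprints

    # Once one platform registers a file's fingerprint, every platform
    # checking the same database blocks byte-identical uploads.
    known_bad_fingerprints.add(fingerprint(b"example flagged file"))
    assert screen_upload(b"example flagged file")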

43 Electronic Frontier Foundation, “Section 230 of the Communications Decency Act”: https://www.eff.org/issues/cda230.
44 Ashley Gold and Joanna Plucinska, “U.S., Europe Threaten Tech Industry’s Cherished Legal Shield,” Politico, October 8, 2018.
45 James Pethokoukis, “Should Big Tech Be Held More Liable for the Content on Their Platforms? An AEIdeas Online Symposium,” March 20, 2018. See also Ashley Gold, “Tech’s Next Big Battle: Protecting Immunity from Content Lawsuits,” The Information, January 11, 2019.
46 Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review, 131 (2018), pp. 1598–670. See also Jeff Kosseff, The Twenty-Six Words That Created the Internet (Ithaca, NY: Cornell University Press, 2019).

Twitter had set out to be the “free speech wing of the free speech party,” but in 2015 it added a new line to its Twitter Rules that barred “promot[ing] violence against others … on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability.” Any concern that the “New Governors” might abuse their power of moderation was dismissed with a promise that all problems could be addressed by making “changes to the architecture and governance systems put in place by these platforms,” with regulation as a last resort.47 Yet platforms’ content moderation policies are not public; only their terms of service and their usually vague “community standards” are. Under the current interpretation of Section 230, the network platforms can rely on judges to dismiss most litigation whether they under-filter or over-filter.48 The problem has only grown in significance. When Facebook imposed an outright ban on the anti-immigration group “Britain First,” it explained that the group had used language “designed to stir up hatred against groups in our society.” On July 27, after a direct appeal from the parents of a child killed at Sandy Hook, Facebook took down four Infowars videos and suspended Alex Jones for a month. On August 5 Apple stopped distributing five podcasts associated with Jones on the ground that they purveyed “hate speech.” Facebook also shut down four of Jones’s pages for “repeatedly” violating rules against hate speech and online bullying. Zuckerberg’s attempt to explain his reluctance to ban Jones backfired when he explained to the journalist Kara Swisher:

The principles that we have on what we remove from the service are: If it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform. [But] … The approach that we’ve taken to false news is not to say: You can’t say something wrong on the internet. I think that that would be too extreme. Everyone gets things wrong, and if we were taking down people’s accounts when they got a few things wrong, then that would be a hard world for giving people a voice and saying that you care about that. … I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong.49

The resulting storm of criticism illustrated the shift in attitudes in Silicon Valley.
The libertarian instincts of an earlier generation of Silicon Valley entrepreneurs were being forced to yield to the more censorious attitudes of more recently hired employees who had been schooled in the modern campus culture of “no platforming” any ideas deemed to be “unsafe.” Between January 1 and September 30, 2018, Facebook took action against eight million pieces of content that violated its rules on hate speech, according to its latest transparency report.50 Among the individuals accused of violating the company’s policies on hate speech were Alex Jones, the white supremacist Paul Nehlen, the leader of the African-American Nation of Islam, Louis Farrakhan, the UK conspiracy theorist Paul Joseph Watson and the right-wing journalist Laura Loomer.51

47 Klonick, “New Governors.”
48 Danielle Citron and Quinta Jurecic, “Platform Justice: Content Moderation at an Inflection Point,” Aegis Series Paper No. 1811 (Stanford: Hoover Institution, 2018).
49 Ezra Klein, “The Controversy Over Mark Zuckerberg’s Comments on Holocaust Denial, Explained,” Vox, July 20, 2018.
50 Tony Romm and Elizabeth Dwoskin, “Facebook Says It Will Now Block White-Nationalist, White-Separatist Posts,” Washington Post, March 27, 2019.

In the first three months of 2019, the proportion of hate speech violations that Facebook found “proactively”—before users reported them—rose to 65.4 per cent, compared with 38 per cent a year before.52 In a similar way, YouTube stopped recommending videos from alt-right channels in February 2019, drastically reducing their share in the suggestions field from nearly 8 per cent to 0.4 per cent.53 For conservatives, as well as for right-wing populists, all this amounted to a new regime of censorship that was skewed against them. Alex Marlow, editor-in-chief of Breitbart News, and film-makers Peter Schweizer and James O’Keefe were among those to add their voices to the growing chorus of complaint about Silicon Valley’s bias. In Prager University v. Google, conservative broadcaster Dennis Prager accused YouTube of violating his First Amendment rights by “regulat[ing] and censor[ing] speech as if the laws governing free speech and commerce do not apply to it.” Facebook was forced to apologize to Prager for removing videos with the titles “Where Are the Moderate Muslims?” and “Make Men Masculine Again.” Writing for Breitbart in late October 2018, Brad Parscale accused “Big Tech monsters like Google and Facebook” of having become “nothing less than incubators for far-left liberal ideologies and … doing everything they can to eradicate conservative ideas and their proponents from the internet.” This was, Parscale argued, “an existential threat to our individual liberties as well as our system of government.”54 Renée DiResta might insist that the platforms “need to be able to take down users and sites that fail the tests of authenticity, organic distribution and integrity reputation.” Jonathan Albright might point out that, during the 2018 midterm elections, the suspect Facebook Pages (with foreign “manager” accounts) or the Facebook Groups used to spread scare stories—often by “gaming the platform’s metrics”—were mostly right-wing in character and content.55 But it was inevitable that those who fell foul of supposedly “viewpoint-agnostic moderation” would complain of politically motivated censorship.56 In the course of 2018, allegations of anti-conservative bias on network platforms were made with increasing frequency.57 Certainly, there was evidence that fewer voters were going to fake news sites ahead of the 2018 midterm elections, but no one could seriously object to that.58 It was also somewhat difficult to argue that Google’s seemingly innocuous “Go Vote” message on election day 2018 was disproportionately helpful to Democratic candidates.59 However, Robert Epstein did find that “Google search results were significantly more liberal than non-Google search results on all 10 days leading up to and including Election Day and in all 10 positions of search results on the first page of search results.”60

51 Hannah Murphy, “Facebook Bars Extremist Figures from Platform,” Financial Times, May 2, 2019.
52 Hannah Murphy, “Facebook Removed 2.2bn Fake Accounts in First Quarter,” Financial Times, May 23, 2019.
53 Nicolas Suzor, “YouTube Stops Recommending Alt-right Channels,” Digital Social Contract, February 27, 2019.
54 Brad Parscale, “Big Tech Is Meddling with Free Speech … and Elections,” Breitbart, October 23, 2018.
55 Jonathan Albright, “The 2018 Facebook Midterms, Part I: Recursive Ad-ccountability,” Medium, November 5, 2018; “The 2018 Facebook Midterms, Part II: Shadow Organizing,” November 5, 2018; “The 2018 Facebook Midterms, Part III: Granular Enforcement,” November 6, 2018.
56 Renée DiResta, “Free Speech in the Age of Algorithmic Megaphones,” Wired, October 12, 2018.
57 “The Authoritarianism of Silicon Valley’s Tech Titans,” November 28, 2018; Jeremy Carl, “Why We Need Anti-Censorship Legislation for Social Media,” The Federalist, November 28, 2018.
58 Nyhan, “Fears of Fake News.”
59 Robert Epstein, “Another Way Google Manipulates Votes Without Us Knowing: A ‘Go Vote’ Reminder Is Not What You Think It Is,” unpublished paper (2019).

Interviews with former social media firm employees suggested that “while Facebook and Google resist being arbiters of political discourse, they actively vet paid content on their platforms … in often opaque ways, according to policies that are not transparent, and without clear justifications to campaigns or the public as to how they are applied or enforced.”61 More rigorously, a paper in Nature exposed the problem of “information gerrymandering” in unregulated online networks, whereby “a small number of zealots, when strategically placed on the influence network, can also induce information gerrymandering and thereby bias vote outcomes … even when both parties have equal sizes.”62 To all this, the big tech companies responded with apparently heartfelt commitments not to act as censors. In a lecture at Georgetown in October 2019, Zuckerberg pledged “to continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us.” He was against an “ever-expanding definition of what speech is harmful” and would “fight to uphold as wide a definition of freedom of expression as possible.”63 In a similar vein, Richard Gingras, vice president of News at Google, told an audience in Oregon: “Our role is NOT to censor expression on the open Internet. Our role with Search is to help people find ANY information that can be found within the corpus of legal expression. … No one should want Google to decide what is acceptable or unacceptable expression.”64 Strictly speaking, there is no First Amendment in cyberspace. There is indeed a “fundamental difference between a private platform refusing to carry your ideas on their property, and a government prohibiting you from speaking your ideas, anywhere, with the threat of prosecution.”65 Yet when companies dominate the public sphere to the extent that Facebook and Google do, their power to enforce whatever community standards they choose becomes too great. In practice, an opaque system of policing the network platforms has evolved.
At Google, much as is true in China, there are multiple blacklists excluding identified transgressors from Google accounts, Search autocomplete, YouTube, Google News, AdWords and AdSense.66 In 2018 Nick Foster, Google’s head of design, spoke of “a future of total data collection” in which a “goal-driven … Selfish Ledger” would enable Google to “nudge users into alignment with their [own] goals, custom-print personalized devices to collect more data, and even guide the behavior of entire populations to solve global problems like poverty and disease.”67 In an internal presentation dated March of the same year, Google executives were asked to imagine acting as a “Good Censor,” to limit the impact of users “behaving badly.”68

60 Robert Epstein and Emily M. Williams, “Evidence of Systematic Political Bias in Online Search Results in the 10 Days Leading Up to the 2018 U.S. Midterm Elections,” paper presented at the 99th annual meeting of the Western Psychological Association, Pasadena, April 2019.
61 Daniel Kreiss and Shannon C. McGregor, “The ‘Arbiters of What Our Voters See’: Facebook and Google’s Struggle with Policy, Process, and Enforcement around Political Advertising,” Political Communication (2019): doi:10.1080/10584609.2019.1619639.
62 Alexander I. Stewart et al., “Information Gerrymandering and Undemocratic Decisions,” Nature, 573, September 5, 2019, pp. 117–21: https://doi.org/10.1038/s41586-019-1507-6.
63 Mark Zuckerberg, speech at Georgetown University, October 17, 2019.
64 Richard Gingras, “In Google We Trust,” Robert and Mabel Ruhl Endowed Lecture, University of Oregon School of Journalism, February 12, 2019.
65 Amerige, “Facebook Has a Right to Block ‘Hate Speech.’”
66 Robert Epstein, “The New Censorship,” US News and World Report, June 22, 2016.
67 Vlad Savov, “Google’s Selfish Ledger Is an Unsettling Vision of Silicon Valley Social Engineering,” The Verge, May 17, 2018.

At both companies, “trust and safety” teams now employ tens of thousands of mostly young content moderators whose thankless task it is to enforce ever more elaborate hate speech rules on an ever-growing torrent of content. As one of them described the process, “I was like, ‘I can just block this entire domain, and they won’t be able to serve ads on it?’ And the answer was, ‘Yes.’ I was like, ‘But … I’m in my mid-twenties.’” As another put it, “One depressing part [of the job] is that China did a frighteningly good job of their version of trust and safety.”69

4. The case for reform or repeal of Section 230

One option would be to repeal Section 230 altogether, ending the exemption of network platforms from liability for the content they host, as proposed in Republican Senator Josh Hawley’s Ending Support for Internet Censorship Act (S. 1914). Somewhat different arguments to modify Section 230 have been made by, amongst others, Karen Kornbluh,70 Danielle Citron and Benjamin Wittes.71 No piece of legislation is more tenaciously defended than Section 230, not only by the big tech companies and their lobbyists but also by scholars such as Eugene Volokh and Eric Goldman, who insist that it is “a crucial legal foundation for the modern Internet,”72 as well as some conservative think tanks, notably the Heritage Foundation.73 Yet the exemptions of Section 230 made sense only when the network platforms were fledglings or did not yet exist. Today, in the words of Judge Alex Kozinski in Fair Housing Council v. Roommates.com, “the Internet has outgrown its swaddling clothes and no longer needs to be so gently coddled.”74 Section 230 now gives the network platforms an indefensible advantage over traditional publishers, while at the same time empowering them to act as censors, contrary to the original intent of the law. In 1931 the British prime minister, Stanley Baldwin, accused the principal newspaper barons of the day, Lords Beaverbrook and Rothermere, of “aiming at … power, and power without responsibility—the prerogative of the harlot throughout the ages.” The phrase was his cousin Rudyard Kipling’s. It resonates today. As noted above, Section 230 states that Internet companies should not be held liable for removing any content that they believed in good faith to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” As successive court rulings have clearly established, those last two words were not intended to permit discrimination against particular political viewpoints.75 On the contrary, Section 230 is explicitly premised on the idea that online platforms should “offer a forum for a true diversity of political discourse.” One appealing feature of getting rid of Section 230 is that it would be left to the courts to bring the network platforms to heel when plaintiffs could show that a harm had arisen from, say, a fake news story disseminated by Facebook’s News Feed.

68 Allum Bokhari, “‘The Good Censor’: Leaked Google Briefing Admits Abandonment of Free Speech for ‘Safety And Civility,’” Breitbart, October 9, 2018: https://www.breitbart.com/tech/2018/10/09/the-good-censor-leaked-google-briefing-admits-abandonment-of-free-speech-for-safety-and-civility/.
69 Feerst, “Your Speech, Their Rules.”
70 Kornbluh, “Internet’s Lost Promise.”
71 Danielle Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity,” Fordham Law Review, 86, 2 (2017), pp. 401–423: https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5435&context=flr.
72 Eric Goldman, “Why Section 230 Is Better Than the First Amendment,” March 12, 2019: https://ssrn.com/abstract=3351323. See also Tarleton Gillespie, Custodians of the Internet (New Haven: Yale University Press, 2018).
73 Diane Katz, “Free Enterprise Is the Best Remedy for Online Bias Concerns,” Heritage Foundation, November 19, 2019.
74 Fair Housing Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008).
75 Adam Candeub and Mark Epstein, “Platform or Publisher,” City Journal, May 7, 2018.

Already the courts have established that the network platforms are not exempt from failure-to-warn and product liability claims, because they have a duty of care to warn users of potential dangers. Indeed, given social media’s uniquely deep and wide knowledge of users’ interactions and relationships, the platforms have unprecedented abilities to foresee potential harms. In one important case, a model (“Jane Doe”) had been lured by scammers on ModelMayhem, a social network for models and photographers, who then drugged and raped her, filming the incident for a pornographic video. Internet Brands, the owner of ModelMayhem, was aware of this rape ring, but did not warn any of its users. In Jane Doe No. 14 v. Internet Brands, Inc. (2014), the Court of Appeals for the Ninth Circuit ruled that Doe’s “negligent failure to warn claim” did not “seek to hold Internet Brands liable as the ‘publisher or speaker of any information provided by another information content provider’” and that the Communications Decency Act therefore did not bar the claim.76 This case established an important limit on Section 230 immunity, but it also exposed the anachronistic nature of Section 230 itself. If Internet Brands should have warned Jane Doe of the dangers of using ModelMayhem in 2011, why should not Facebook have warned all its users of the dangers of Russian-generated fake news conveyed to them through the News Feed in 2016? There is, however, an important corollary. If we are to end the fiction that network platforms are not, in some respects, media companies or publishers, then we must at the same time end the equally dangerous fiction that they are not also, in many respects, the modern public sphere. A first step has already been taken in this direction. In Packingham v. North Carolina (2017), the Supreme Court overturned a state law that banned sex offenders from using social media.77 In the opinion, Justice Anthony Kennedy likened Internet platforms to “the modern public square,” arguing that it was therefore unconstitutional to prevent sex offenders from accessing, and expressing opinions on, social network platforms. In other words, despite being private companies, the big tech companies have, in some cases, a public function. “While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views,” Justice Kennedy wrote, “today the answer is clear.
It is cyberspace—the ‘vast democratic forums of the Internet’ in general … and social media in particular.”78 In May 2018 the Southern District of New York gave a similar ruling in Knight First Amendment Institute v. Donald J. Trump, Hope Hicks, Sarah Huckabee Sanders and Daniel Scavino:

We hold that portions of the @realDonaldTrump account—the “interactive space” where Twitter users may directly engage with the content of the President’s tweets—are properly analyzed under the “public forum” doctrines set forth by the Supreme Court, that such space is a designated public forum, and that the blocking of the plaintiffs based on their political speech constitutes viewpoint discrimination that violates the First Amendment.79

As president, Donald Trump could not therefore block Twitter users from seeing his tweets. If the network platforms are the “modern public square,” then it cannot be their responsibility to remove “hateful content,” as 19 prominent civil rights groups demanded of Facebook in October 2017, because hateful content—unless it explicitly instigates violence against a specific person—is protected by the First Amendment.

76 Jane Doe No. 14 v. Internet Brands, Inc., DBA Modelmayhem.com, No. 12-56638 (9th Cir., September 17, 2014).
77 Will Chamberlain, “Platform Access Is a Civil Right,” Human Events, May 6, 2019.
78 Packingham v. North Carolina, No. 15-1194, Supreme Court, June 19, 2017, p. 5: https://www.supremecourt.gov/opinions/16pdf/15-1194_08l1.pdf.
79 U.S. District Court, Southern District of New York, 17 Civ. 5205, May 23, 2018: https://www.courtlistener.com/recap/gov.uscourts.nysd.477261/gov.uscourts.nysd.477261.72.0_1.pdf.

Kate Klonick has argued that tech companies should not “be held to a First Amendment standard,” because that would mean “porn stays up, spam stays up, everything stays up.” But this is not convincing. It is surely better that porn and spam “stay up” than that speech be circumscribed by the community standards of unaccountable private companies, run by a small number of men, some of whom imagine themselves to be emperors. By the same token, a truly free public sphere is also bound to permit the publication online of false allegations directed against prominent public figures. In New York Times Co. v. Sullivan, the Supreme Court held that “erroneous statement is inevitable in free debate” and “must be protected if the freedoms of expression are to have the ‘breathing space’ that they ‘need to survive.’”80 The danger of a piecemeal erosion of Section 230 is that it could lead to “censorship creep,” by encouraging platforms to “over-moderate.” If outright repeal is too bold a step, with too many unforeseeable consequences, then a better compromise would be to create a blanket exception to Section 230 for “bad actors” who “knowingly and intentionally leave up unambiguously unlawful content that clearly creates a serious harm to others” (as proposed by Geoffrey Stone), or for “online service providers that intentionally solicit or induce illegality or unlawful content” (Stacey Dogan’s formulation), or for platforms that “can[not] show that their response to unlawful uses of their services is reasonable.” At the very least, a new Section 230 might read: “No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.”81 Other modifications are also conceivable.
Jonathan Zittrain has proposed that “companies below a certain size or activity threshold could benefit from [Section 230]’s immunities, while those who grow large enough to facilitate the infliction of that much more damage from defamatory and other actionable posts might also have the resources to employ a compliance department.” Alternatively, a distinction could be drawn “between damages for past acts and duties for future ones … leading only to responsibility once the knowledge is gained and not timely acted upon.” Or “a refined CDA could take into account the fact that Facebook and others know exactly whom they’ve reached,” so that the new remedy for defamation “would less be to assess damages against the company for having abetted it, but rather to require a correction or other follow up to go out to those who saw—and perhaps came to believe—the defamatory content.”82 Even these weaker modifications of Section 230 would meaningfully increase the legal costs of the network platforms. Such solutions would not be perfect, of course. Nevertheless, combining Section 230 reform with a requirement to act as if the First Amendment applied in cyberspace seems a viable way of countering the various negative externalities currently created by the network platforms—and a much more elegant solution than probably futile attempts to break them up or to regulate them through government agencies.

80 Thomas E. Kadri and Kate Klonick, “Facebook v. Sullivan: Building Constitutional Law for Online Speech” (January 2019).
81 Citron and Jurecic, “Platform Justice: Content Moderation at an Inflection Point.”
82 Jonathan Zittrain, “CDA 230 Then and Now: Does Intermediary Immunity Keep the Rest of Us Healthy?” August 31, 2018: https://blogs.harvard.edu/jzwrites/2018/08/31/cda-230-then-and-now/.