Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance
EDWARD LEE*

Internet platforms serve two important roles that often conflict. Facebook, Twitter, YouTube, and other internet platforms facilitate the unfettered exchange of free speech by millions of people, yet they also moderate or restrict the speech according to their “community standards,” such as prohibitions against hate speech and advocating violence, to provide a safe environment for their users. These dual roles give internet platforms unparalleled power over online speech—even more so than most governments. Yet, unlike government actors, internet platforms are not subject to checks and balances that courts or agencies must follow, such as promulgating well-defined procedural rules and affording notice, due process, and appellate review to individuals. Internet platforms have devised their own policies and procedures for content moderation, but the platforms’ enforcement remains opaque—especially when compared to courts and agencies. Based on an independent survey of the community standards of the largest internet platforms, this Article shows that few internet platforms disclose the precise procedural steps and safeguards of their content moderation—perhaps hoping to avoid public scrutiny over those procedures. This lack of transparency has left internet platforms vulnerable to vocal accusations of having an “anti-conservative bias” in their content moderation, especially from politicians. Internet platforms deny such a bias, but their response has not mollified Republican lawmakers, who have proposed amending, if not repealing, Section 230 of the Communications Decency Act to limit the permissible bases and scope of content moderation that qualify for civil immunity under the section. This Article provides a better solution to this perceived problem—a model framework for nonpartisan content moderation (NCM) that internet platforms should voluntarily adopt as a matter of best practices. The NCM framework provides greater transparency and safeguards to ensure nonpartisan content moderation in a way that avoids messy government entanglement in enforcing speech codes online. The NCM framework is an innovative approach to online governance that draws upon safeguards designed to promote impartiality in various sectors, including courts and agencies, clinical trials, peer review, and equal protection under the Fourteenth Amendment.

* Professor of Law, IIT Chicago-Kent College of Law; Founder, The Free Internet Project. Many thanks to helpful comments from Kathy Baker, Felice Batlan, Sungjoon Cho, Eric Goldman, Hal Krent, Nancy Marder, Blake Reid, Mark Rosen, Alex Boni-Saenz, Chris Schmidt, Stephanie Stern, Eugene Volokh, and participants of faculty workshops. This Article represents my own views as a legal scholar. They should not be attributed to the nonprofit The Free Internet Project.

TABLE OF CONTENTS

Introduction
I. Online Governance and the Controversies over Election Interference, Voter Suppression, and Perceived Political Bias of Internet Platforms
   A. Online Governance by Internet Platforms
   B. Election Misinformation on Social Media in 2016 and 2020
   C. Section 230 of the Communications Decency Act
   D. Accusations of Political Bias and Proposed Amendments to Section 230 to Require Political Neutrality
II. Do Internet Platforms’ Content Moderation Policies Recognize Nonpartisanship or Impartiality as a Stated Principle?
   A. Overview
   B. Twitter
   C. Facebook
   D. YouTube and Google
   E. Reddit
   F. Snapchat
   G. Twitch
   H. TikTok
   I. Internet Platforms’ Internal (Nonpublic) Manuals
III. The Case for Nonpartisanship as a Community Standard for Content Moderation of Political Candidates and Political Ads
   A. Why Nonpartisanship in Content Moderation Matters
   B. Why Best Practices Are Better Than Bills to Reform Section 230
IV. Model Framework for Nonpartisan Content Moderation (NCM) of Political Candidates
   A. The Model NCM Framework
   B. Other Safeguards to Protect Against Partisan Content Moderation
V. Addressing Concerns with the Proposed NCM Framework
   A. Resources and Scalability
   B. Timeliness and Effectiveness Concerns
   C. Is Content Moderation Better Under the Status Quo than the NCM Proposal?
Conclusion

No man is allowed to be a judge in his own cause; because his interest would certainly bias his judgment, and, not improbably, corrupt his integrity.
—Madison, FEDERALIST NO. 10

Were there not even these inducements to moderation, nothing can be more ill-judged than that intolerant spirit, which has, at all times, characterized political parties.
—Hamilton, FEDERALIST NO. 1

INTRODUCTION

In 2020, amidst a pandemic and nationwide protests led by Black Lives Matter following the brutal police killing of George Floyd, internet platforms[1] tightened their policies of content moderation—otherwise known as “community standards”—to stop the spread of misinformation, hate speech, and voter suppression.[2] The platforms had implemented new policies to curb foreign interference and misinformation that were pervasive in the 2016 U.S. election,[3] but the platforms took a hands-off approach to the content of U.S. politicians and political ads. That changed in 2020.

On May 26, 2020, as protests over Floyd’s death erupted, Twitter started the sea change by flagging several of President Donald Trump’s tweets with labels indicating that his tweets violated Twitter’s policies against misinformation, voter suppression, and glorification of violence.[4] Snapchat and Twitch followed suit by announcing their own efforts to moderate or outright suspend Trump’s accounts on their respective platforms due to concerns about “amplify[ing] voices who incite racial violence and injustice”[5] and “hateful conduct.”[6] Reddit, known for its über-permissiveness, even banned a subreddit, or discussion group, devoted to “r/The_Donald”

_________________________

1. The term “internet platform” is an evolving, even “slippery term.” TARLETON GILLESPIE, CUSTODIANS OF THE INTERNET 18 (2018). I borrow Tarleton Gillespie’s definition: “online sites and services that (a) host, organize, and circulate users’ shared content or social interactions for them, (b) without having produced or commissioned (the bulk of) that content, (c) built on an infrastructure, beneath that circulation of information, for processing data for customer service, advertising, and profit.” Id.

2. See Craig Timberg & Elizabeth Dwoskin, Silicon Valley Is Getting Tougher on Trump and His Supporters over Hate Speech and Disinformation, WASH. POST (July 10, 2020, 1:53 PM), https://www.washingtonpost.com/technology/2020/07/10/hate-speech-trump-tech; Barbara Ortutay & Tali Arbel, Social Media Platforms Face a Reckoning over Hate Speech, AP NEWS (June 29, 2020, 6:00 PM), https://apnews.com/article/6d0b3359ee5379bd5624c9f1024a0eaf.

3. See infra Section I.B.1.

4. See Trump Makes Unsubstantiated Claim that Mail-in Ballots Will Lead to Voter Fraud, TWITTER (May 26, 2020), https://twitter.com/i/events/1265330601034256384?lang=en; Barbara Sprunt, The History Behind ‘When the Looting Starts, the Shooting Starts,’ NPR (May 29, 2020, 6:45 PM), https://www.npr.org/2020/05/29/864818368/the-history-behind-when-the-looting-starts-the-shooting-starts [https://perma.cc/N4MZ-3E82]; William Mansell & Libby Cathey, Twitter Flags Trump, White House for ‘Glorifying Violence’ in Tweets About George Floyd Protests, ABC NEWS (May 29, 2020, 2:32 PM), https://abcnews.go.com/US/twitter-flags-trump-white-house-glorifying-violence-tweet/story?id=70945228 [https://perma.cc/6Y3C-BYJL]; Twitter Flags Trump’s Tweet of Doctored ‘Racist Baby’ Video, AP NEWS (June 19, 2020), https://apnews.com/3499484ab404647b01fcc4a08babff03;