Hate Speech on Social Media: Content Moderation in Context
University of Connecticut, OpenCommons@UConn
Faculty Articles and Papers, School of Law, 2021

Richard A. Wilson, University of Connecticut Human Rights Institute, [email protected]
Molly Land, University of Connecticut School of Law

Follow this and additional works at: https://opencommons.uconn.edu/law_papers
Part of the First Amendment Commons

Recommended Citation: Wilson, Richard A. and Land, Molly, "Hate Speech on Social Media: Content Moderation in Context" (2021). Faculty Articles and Papers. 535. https://opencommons.uconn.edu/law_papers/535

CONNECTICUT LAW REVIEW, Volume 52, Number 3 (February 2021)

Hate Speech on Social Media: Content Moderation in Context

RICHARD ASHBY WILSON & MOLLY K. LAND

Abstract

For all practical purposes, the policy of social media companies to suppress hate speech on their platforms means that the longstanding debate in the United States about whether to limit hate speech in the public square has been resolved in favor of vigorous regulation. Nonetheless, revisiting these debates provides insights essential for developing more empirically based and narrowly tailored policies regarding online hate. First, a central issue in the hate speech debate is the extent to which hate speech contributes to violence. Those in favor of more robust regulation claim a connection to violence, while others dismiss these arguments as tenuous. The data generated by social media, however, now allow researchers to empirically test whether there are measurable harms resulting from hate speech. These data can assist in formulating evidence-based policies to address the most significant harms of hate speech, while avoiding overbroad regulation. Second, reexamining the U.S. debate about hate speech also reveals the serious missteps of social media policies that prohibit hate speech without regard to context.
The policies that social media companies have developed define hate speech solely with respect to the content of the message. As the early advocates of limits on hate speech made clear, the meaning, force, and consequences of speech acts are deeply contextual, and it is impossible to understand the harms of hate speech without reference to political realities and power asymmetries. Regulation that is abstracted from context will inevitably be overbroad. This Article revisits these debates and considers how they map onto the platform law of content moderation, where emerging evidence indicates a correlation between hate speech online, virulent nationalism, and violence against minorities and activists. It concludes by advocating specific recommendations to bring greater consideration of context into the speech-regulation policies and procedures of social media companies.

ARTICLE CONTENTS

INTRODUCTION
I. DEBATING HATE SPEECH IN THE UNITED STATES
   A. THE CASE FOR RESTRICTING HATE SPEECH
   B. THE FREEDOM OF EXPRESSION RIPOSTE
II. THE HARMS OF ONLINE HATE SPEECH
III. THE PLATFORM LAW OF HATE SPEECH
   A. DEFINING HATE SPEECH ONLINE
   B. THE LIMITS OF PLATFORM LAW
IV. TOWARDS CONTEXT-SPECIFIC CONTENT MODERATION
   A. THE ROLE OF CONTEXT
   B. LAW WITHOUT CONTEXT
   C. RECOMMENDATIONS
CONCLUSION
INTRODUCTION

Hate speech and hate crimes are trending. In the past five years, there has been an upsurge in extreme nationalist and nativist political ideology in mainstream politics globally. In the United States, the President regularly mobilizes a political constituency by vilifying Mexican immigrants as "criminals" and "rapists" who "infest" America,1 and by promoting a "zero tolerance" policy at the border that punitively separates children from their parents, including persons exercising their right to apply for asylum.2 Data suggest a connection between this rise in rhetoric and increases in hate crimes in the United States.3 Similar trends are evident abroad as well. In the United Kingdom, the 2016 Brexit referendum elicited conspicuous expressions of anti-Muslim and anti-immigrant sentiment and coincided with the sharpest increase in religiously and racially motivated hate crimes ever recorded in British history.4

In the United States, there has been vigorous debate on the regulation of hate speech for decades, but the dominant legal frameworks for addressing the harms of hate speech were created in a world without the internet and urgently need updating. Historically, those advocating for the suppression of hate speech concentrated their efforts on measures that

* Richard Ashby Wilson is the Gladstein Chair of Human Rights and Professor of Law and Anthropology at the University of Connecticut School of Law. Molly Land is the Catherine Roraback Professor of Law and Human Rights at the University of Connecticut School of Law and Associate Director of the Human Rights Institute. We extend our thanks to Nadine Strossen for her astute comments on our arguments, and to the editors of the Connecticut Law Review for their careful feedback and editing. We are grateful to Allaina Murphy and Danielle Nadeau for research assistance.
1 Andrés Oppenheimer, Immigrant Families Don't "Infest" America—But Trump's Racist Rhetoric Does, MIAMI HERALD (June 20, 2018), https://www.miamiherald.com/news/local/news-columns-blogs/andres-oppenheimer/article213543104.html.

2 Catherine E. Shoichet, 'Zero Tolerance' a Year Later: How the US Family Separations Crisis Erupted, CNN (Apr. 5, 2019), https://www.cnn.com/interactive/2019/04/us/immigrant-family-separations-timeline.

3 Griffin Edwards & Stephen Rushin, The Effect of President Trump's Election on Hate Crimes 3 (Jan. 14, 2018) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3102652.

4 May Bulman, Brexit Vote Sees Highest Spike in Religious and Racial Hate Crimes Ever Recorded, INDEPENDENT (July 7, 2017), https://www.independent.co.uk/news/uk/home-news/racist-hate-crimes-surge-to-record-high-after-brexit-vote-new-figures-reveal-a7829551.html.

compelled gatekeepers, such as district attorneys, newspaper editors, publishers, or university provosts, to restrict certain classes of speech. Now, in the cacophony on social media, gatekeeping itself has been transformed.5 Governments still attempt to regulate speech through gatekeepers such as social media platforms, but the sheer volume of speech involved requires operationalizing the definition of hate speech through algorithmic processes and tens of thousands of human content moderators.
Governments are no longer the primary regulators of speech.6 Their regulatory capacity has been far outstripped by some of the largest companies in the world by public stock valuation,7 which together regulate the speech of 3.7 billion active social media users.8 Facebook, which moderates the online speech of over 2.4 billion active monthly users, is the largest publisher of content in human history.9 In a reversal of the historic roles, private corporations have even become the de facto regulators of government speech, as when Facebook banned the Commander-in-Chief of Myanmar's military from the platform and removed over 400 other news, entertainment, and lifestyle pages linked to the military.10

For all practical purposes, the decision of social media companies to prohibit hate speech on their platforms means that the longstanding debate in the United States about whether to limit hate speech in the public square has been resolved in favor of regulation. The new realities of the internet do not mean that the prior debates on hate speech are irrelevant, however. Instead, we contend that a reexamination of the debates over hate speech that occurred in the United States in the 1980s and 1990s can help chart a course toward a more empirically based and narrowly tailored policy regarding online hate speech.

Social media policy might be informed by the terms of the U.S. hate speech debate in two vital ways. First, social media creates data that allow us to determine whether hate speech that falls short of direct incitement can nonetheless contribute to violence. Those in favor of more robust

5 EMILY B. LAIDLAW, REGULATING SPEECH IN CYBERSPACE: GATEKEEPERS, HUMAN RIGHTS AND CORPORATE RESPONSIBILITY, at xi–xii (2015) (describing gatekeeping by private companies on the internet).

6 See DAVID KAYE, SPEECH POLICE: THE GLOBAL STRUGGLE TO GOVERN THE INTERNET 112 (2019) (noting that "a few private companies" control social media).

7 See FB Facebook, Inc.
Class A Common Stock, NASDAQ, https://www.nasdaq.com/market-activity/stocks/fb (last visited Jan. 14, 2020) (listing Facebook's common stock value); GOOG Alphabet Inc. Class C Capital Stock, NASDAQ, https://www.nasdaq.com/market-activity/stocks/goog (last visited Jan. 14, 2020) (listing Google's capital stock value).

8 SIMON