MASTER OF ARTS IN LAW & DIPLOMACY CAPSTONE PROJECT

Digital Platforms, Content Moderation & Free Speech
How To Set A Regulatory Framework for Government, Tech Companies & Civil Society

By Adriana Lamirande
Under Supervision of Dr. Carolyn Gideon
Grant Awarded by Hitachi Center for Technology & International Affairs
Spring 2021 | Submitted April 30
In fulfillment of MALD Capstone requirement

TABLE OF CONTENTS
I. RESEARCH QUESTION
II. BACKGROUND
○ Social Media: From Public Squares to Dangerous Echo Chambers
○ Algorithms as Megaphones
○ Looking Forward: The Case for Regulation & Cross-Sectoral Collaboration
III. OVERVIEW OF ANALYTIC FRAMEWORK
IV. EVIDENCE
○ Public Interest Framework
○ Common Carrier Framework
○ Free Market Framework
○ International Human Rights Law Framework
V. CONCLUSION/POLICY RECOMMENDATIONS
○ For U.S. Policymakers
○ For Social Media Companies

RESEARCH QUESTION

Which content moderation regulatory approach (international human rights law, public interest, free market, common carrier) best minimizes disinformation and hate speech inciting violence on social media? Which practices by social media companies and civil society, alongside existing legislation, are best suited to guide U.S. policymakers?

BACKGROUND/CONTEXT

To borrow the words of Anne Applebaum and Peter Pomerantsev of Johns Hopkins' SNF Agora Institute, writing in The Atlantic: "We don't have an internet based on our democratic values of openness, accountability, and respect for human rights."1

Social Media: From Public Squares to Dangerous Echo Chambers

Social media platforms have become digital public squares, creating a new arena for users to air opinions, share content they like or find informative (whether true or false), and express their unique worldviews without constraint. In the last few years, a slew of complaints and controversies has emerged regarding Facebook, YouTube and Twitter's ad hoc content moderation practices, as well as the exploitative nature of their ad-based monetization business model. Their "growth at all costs" ethos is problematic in that it collects inordinate amounts of private user data to curate personalized news feeds and strengthen highly profitable precision ad targeting – the major caveat being that such a model thrives on content that is controversial, conflict-inducing and extreme in nature.

The notion that "the medium is the message" was pioneered by the renowned communications theorist Marshall McLuhan, and purports that the medium through which we choose to communicate holds as much, if not more, value than the message itself. He states: "the personal and social consequences of any medium—that is, of any extension of ourselves—result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology. [...] The restructuring of human work and association was shaped by the technique of fragmentation that is the essence of machine technology."2 In our post-truth era, where platforms have become a stand-in for traditional news media and are increasingly asked to arbitrate speech online, his warning about scale, fragmentation and social consequences feels especially prescient.

1 Applebaum, Anne and Pomerantsev, Peter. "How to Put Out Democracy's Dumpster Fire." The Atlantic, March 8, 2021. https://bit.ly/3gQONAW
2 McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1964, page 1. https://bit.ly/3aIkeXz
Social networks struggle with waves of misinformation and with problematic fact-checking practices and policies, which can elevate news of poor quality. A Columbia Journalism Review study3 found, for example, that Facebook failed to consistently label content flagged by its own third-party partners: 50% of some 1,100 posts containing debunked falsehoods were not labelled as such. Critics also point out that the fact-checking process is too slow, when information can reach millions in a matter of hours or even minutes.

While digital platforms never set out to undermine or replace journalism, they have for many Americans become a primary source of news, a battleground for flaming partisan debates, and an unruly sphere where information – false or not – is transferred and elevated, with the potential for harmful impact beyond the web. According to a 2019 Pew Research Center report, 55% of U.S. adults now get their news from social media either "often" or "sometimes" – an increase of eight percentage points from the previous year. The report also found that 88% of Americans recognized that social media companies now have at least some control over the mix of news that people see each day, with 62% of them viewing this as a problem and saying these companies have far too much control over this aspect of their lives.4

In the past, the news business and broadcast industries were built on stringent checks and balances by the government, and on a foundation of mostly self-enforced professional integrity standards and editorial guidelines that provided recourse and due process for readers and critics alike. One example we can recall is the Fairness Doctrine, introduced by the Federal Communications Commission in 1949, a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced. During this period, licensees were obliged not only to cover the views of others fairly, but also to refrain from expressing their own views. The Fairness Doctrine grew out of the belief that the limited number of broadcast frequencies available compelled the government to ensure that broadcasters did not use their stations simply as advocates of a single perspective. Coverage also had to accurately reflect opposing views and afford a reasonable opportunity for discussing contrasting points of view.5 This meant that programs on politics were encouraged to give opposing opinions equal time on the topic under discussion. Additionally, the rule mandated that broadcasters alert anyone subject to a personal attack in their programming and give them a chance to respond, and required any broadcasters who endorsed political candidates to invite other candidates to respond.6 Though the Fairness Doctrine had already been eroded over the preceding decades, it was officially repealed in 2011 after challenges on First Amendment grounds.7 This is an example of one type of mechanism that some suggest could be used to regulate social media content moderation practices today.

3 Bengani, Priyanjana and Karbal, Ian. "Five Days of Facebook Fact-Checking." Columbia Journalism Review, October 30, 2020. https://bit.ly/2Rd0mYw
4 Grieco, Elizabeth and Shearer, Elisa. "Americans Are Wary of the Role Social Media Sites Play in Delivering the News." Pew Research Center: Journalism & Media, October 2, 2019. https://pewrsr.ch/2W8n2rx
5 Perry, Audrey. "Fairness Doctrine." The First Amendment Encyclopedia, May 2017. https://bit.ly/3eLm0ev
6 Matthews, Dylan. "Everything you need to know about the Fairness Doctrine in one post." Washington Post, August 23, 2011. https://wapo.st/3bMV37v
7 McKenna, Alix. "FCC Repeals the Fairness Doctrine and Other Regulations." The Regulatory Review, September 26, 2011. https://bit.ly/3sZbNAc
Platforms enjoy the primacy and responsibility of mediating "the truth" once held by traditional news publishers, without the same formalized editorial intervention, the result being a filter-bubbled user experience and questionable news quality. Furthermore, the core ad monetization business model is intrinsically linked to the creation of siloed echo chambers, as algorithms elevate and personalize the posts users see based on their on-site activity. Experts assert that this limits people's exposure to a wider range of ideas and reliable information, and eliminates serendipity altogether.8 By touting the neutrality of their role and policies, digital platforms are attempting to escape scrutiny of the algorithmic bias that fuels and is complicit in the broadcasting of extremist views, disinformation, and hate speech inciting violence, enabling such content to spread more quickly and effectively than level-headed reports and stories based in fact. One article on Facebook's refusal to review political content – even if it violates its hate speech guidelines – summarizes the issue as such: "The fact check never gets as many shares as the incendiary claim."9 It is impossible to determine exactly how these systems might be susceptible to algorithmic bias, since the backend technology operates in a corporate "black box," which prevents experts and lawmakers from investigating how a particular algorithm was designed, what data helped build it, or how it works.10

Algorithms as Megaphones

The internet and its communications networks were once imagined as a space to foster widespread citizen engagement, innovative collaboration, productive debate around political and social issues, and public interest information sharing. Now that these networks have been weaponized by extremists and conspiracy theorists, companies' loosely defined rules and their disincentive to abandon a toxic business model render their current practices an existential threat to society and the democratic process, as hate speech inciting violence manifests into domestic terrorism,