
MASTER OF ARTS IN LAW & DIPLOMACY CAPSTONE PROJECT

Digital Platforms, Content Moderation & Free Speech
How To Build a Regulatory Framework for Government, Tech Companies & Civil Society

By Adriana Lamirande

Under Supervision of Dr. Carolyn Gideon
Grant Awarded by Hitachi Center for Technology & International Affairs
Spring 2021 | Submitted April 30
In fulfillment of MALD Capstone requirement

TABLE OF CONTENTS

I. RESEARCH QUESTION……………………………………………………………………. 2

II. BACKGROUND……………………………………………………………………………. 2
○ Social Media: From Public Squares to Dangerous Echo Chambers
○ Algorithms as Megaphones
○ Looking Forward: The Case for Regulation & Cross-Sectoral Collaboration

III. OVERVIEW OF ANALYTIC FRAMEWORK………………………………………………… 10

IV. EVIDENCE………………………………………………………………………………… 13
○ Public Interest Framework………………………………………………………. 13
○ Common Carrier Framework…………………………………………………….. 17
○ Free Market Framework…………………………………………………………. 22
○ International Human Rights Law Framework……………………………………. 29

V. CONCLUSION/POLICY RECOMMENDATIONS…………………………………………….. 35
○ For U.S. Policymakers………………………………………………………………. 36
○ For Social Media Companies………………………………………………………. 39

RESEARCH QUESTION

Which content moderation regulatory approach (international human rights law, public interest, free market, common carrier) best minimizes disinformation and hate speech inciting violence on social media?

Which practices by social media companies and civil society, alongside existing legislation, are best suited to guide U.S. policymakers?

BACKGROUND/CONTEXT

To borrow the words of Anne Applebaum and Peter Pomerantsev of Johns Hopkins’ SNF Agora Institute, writing in The Atlantic: “We don’t have an internet based on our democratic values of openness, accountability, and respect for human rights.”1

Social Media: From Public Squares to Dangerous Echo Chambers

Social media platforms have become digital public squares, creating a new arena for users to air opinions, share content they like or find informative (whether true or false), and express their unique viewpoints without constraint. In the last few years, a slew of complaints and controversies have emerged regarding Facebook, YouTube and Twitter’s ad hoc content moderation practices, as well as the exploitative nature of their ad-based monetization business model. Their “growth at all costs” ethos is problematic in that it collects inordinate amounts of private user data to curate personalized news feeds and strengthen highly profitable precision ad targeting – the major caveat being that such a model thrives on content that is controversial, conflict-inducing and extreme in nature.

The notion that “the medium is the message” was pioneered by the lauded communications theorist Marshall McLuhan, and holds that the medium through which we choose to communicate carries as much, if not more, weight than the message itself. He states: “the personal and social consequences of any medium—that is, of any extension of ourselves—result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology. [...] The restructuring of human work and association was shaped by the technique of fragmentation that is the essence of machine technology.”2

In our post-truth era, where platforms have become a stand-in for traditional news media and are increasingly asked to arbitrate speech online, his warning about scale, fragmentation and social consequences feels especially prescient. Social networks struggle with waves of misinformation and with problematic fact-checking practices and policies that can elevate news of poor quality. A Columbia

1. Applebaum, Anne and Pomerantsev, Peter. “How to Put Out Democracy’s Dumpster Fire.” The Atlantic, March 8, 2021. https://bit.ly/3gQONAW
2. McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1964, page 1. https://bit.ly/3aIkeXz

Journalism Review study3 found, for example, that Facebook failed to consistently label content flagged by its own third-party partners, and 50% of some 1,100 posts containing debunked falsehoods were not labelled as such. Critics also point out that the fact-checking process is too slow, when misinformation can reach millions in a matter of hours or even minutes.

While digital platforms never set out to undermine or replace journalism, they have for many Americans become a primary source of news, a battleground for flaming partisan debates, and an unruly sphere where information – false or not – is transferred and elevated, with the potential for harmful impact beyond the web. According to a 2019 Pew Research Center report, 55% of U.S. adults now get their news from social media either "often" or "sometimes" – an eight-percentage-point increase from the previous year. The report also found that 88% of Americans recognized that social media companies now have at least some control over the mix of news that people see each day, with 62% feeling this was a problem and that companies have far too much control over this aspect of their lives.4

In the past, the news business and broadcast industries were built on stringent checks and balances by the government, and a foundation of mostly self-enforced professional integrity standards and editorial guidelines that provided recourse and due process for readers and critics alike. One example we can recall is the Fairness Doctrine, introduced by the Federal Communications Commission in 1949, which was a policy that required the holders of broadcast licenses to both present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced.

During this period, licensees were obliged not only to cover the views of others fairly, but also to refrain from expressing their own views. The Fairness Doctrine grew out of the belief that the limited number of available broadcast frequencies compelled the government to ensure that broadcasters did not use their stations simply as advocates of a single perspective. Such coverage also had to accurately reflect opposing views and afford a reasonable opportunity for discussing contrasting points of view.5 This meant that programs on politics were encouraged to give opposing opinions equal time on the topic under discussion.

Additionally, the rule mandated that broadcasters alert anyone subject to a personal attack in their programming and give them a chance to respond, and required any broadcasters who endorsed political candidates to invite other candidates to respond.6 Though the Fairness Doctrine had been eroded well before then, it was officially repealed in 2011 after challenges on First Amendment grounds.7 This is an

3Bengani, Priyanjana and Karbal, Ian. “Five Days of Facebook Fact-Checking.” Columbia Journalism Review. October 30, 2020. https://bit.ly/2Rd0mYw 4 Grieco, Elizabeth and Shearer, Eliza. “Americans Are Wary of the Role Social Media Sites Play in Delivering the News.” Pew Research Center: Journalism & Media, October 2, 2019. https://pewrsr.ch/2W8n2rx 5 Perry, Audrey. “Fairness Doctrine.” The First Amendment Encyclopedia, May 2017. https://bit.ly/3eLm0ev 6 Matthews, Dylan. “Everything you need to know about the Fairness Doctrine in one post.” Washington Post, August 23, 2011. https://wapo.st/3bMV37v 7 McKenna, Alix. “FCC Repeals the Fairness Doctrine and Other Regulations.” The Regulatory Review. September 26, 2011. https://bit.ly/3sZbNAc

example of one type of mechanism that some suggest could be used to regulate social media content moderation practices today.

Platforms now enjoy the primacy and responsibility of mediating “the truth” once held by traditional news publishers, without the same formalized editorial intervention – at the expense of a filter-bubbled user experience and questionable news quality. Furthermore, the core ad monetization business model is intrinsically linked to the creation of siloed echo chambers, as algorithms elevate and personalize the posts users see based on their on-site activity. Experts assert that this limits people’s exposure to a wider range of ideas and reliable information, and eliminates serendipity altogether.8

By touting the neutrality of their role and policies, digital platforms attempt to escape scrutiny of the algorithmic bias that fuels and is complicit in the amplification of extremist views, disinformation, and hate speech inciting violence, enabling such content to spread more quickly and effectively than level-headed reports and stories based in fact. One article on Facebook’s refusal to review political content – even if it violates its hate speech guidelines – summarizes the issue: “The fact check never gets as many shares as the incendiary claim.”9

It is impossible to figure out exactly how these systems might be susceptible to algorithmic bias, since the backend technology operates in a corporate “black box,” preventing experts and lawmakers from investigating how a particular algorithm was designed, what data helped build it, or how it works.10

Algorithms as Megaphones

The internet and its communications networks were once imagined as a space to foster widespread citizen engagement, innovative collaboration, productive debate around political and social issues, and public interest information sharing. Now that platforms have been weaponized by extremists and conspiracy theorists, companies’ loosely defined rules and their disincentive to abandon a toxic business model render current practices an existential threat to society and the democratic process, as hate speech inciting violence manifests in domestic terrorism and disinformation plagues election integrity, among other democratic strongholds.

As such, despite platforms taking steps to clarify community guidelines and retool terms of service around defamatory language and false information, many critics deem after-the-fact PR statements from social media company leadership somewhat disingenuous, lacking a stark assessment of how algorithmic design, financial incentives that reward bad behavior, and negligible moderation remain at work in the absence of a concrete digital rights regime and stringent regulation.

8. Anderson, Janna and Rainie, Lee. “Theme 5: Algorithmic categorizations deepen divides.” Pew Research Center. February 8, 2017. https://pewrsr.ch/32YArX2
9. Constine, Josh. “Facebook promises not to stop politicians’ lies & hate.” TechCrunch, September 24, 2019. https://tcrn.ch/2xhih6J
10. Heilweil, Rebecca. “Why algorithms can be racist and sexist.” Recode. February 18, 2020. https://bit.ly/3eGKcPe


One example came about in the midst of Facebook standing up its own Oversight Board, when a group of its most vocal critics formed the “Real” Oversight Board. Its intention was to analyze and critique Facebook's content moderation decisions, policies and other platform issues in the run-up to the presidential election and beyond. The expert body’s rationale is summed up in a quote from one member: “This is a real-time response from an authoritative group of experts to counter the Facebook is putting out."11

An April 2021 Buzzfeed investigation surfaced an internal report which found that Facebook failed to take appropriate action against the Stop the Steal movement ahead of the January 6 Capitol riot, after which the company repeated the refrain that it would “do better next time.”12

Harvard Shorenstein Center Research Director Joan Donovan said the report’s revelations and misleading public comments expose the true nature of the company and its products, stating that “it shows that they know the risks, and they know the harm that can be caused and they are not willing to do anything significant to stop it from happening again.” 13 Speaking to the real-life harms of organizing activity and capabilities on the platform, she says: “There is something about the way Facebook organizes groups that leads to massive public events. And when they’re organized on the basis of misinformation, hate, incitement, and harassment, we get very violent outcomes.”14

This is not the first high-profile instance in which the platform failed to act and later issued a report doubling down on its commitment to address problematic content and reassess its approach to policy enforcement. It echoes previous episodes, like a 2016 election disinformation postmortem and a 2018 human rights report concluding that the company failed to prevent Facebook from being leveraged to foment division and incite offline violence that helped fuel the Myanmar genocide.

It’s not just Facebook. Digital scholar Zeynep Tufekci tracked the way YouTube’s recommendation algorithm serves as an engine of radicalization. She noticed that videos of Trump rallies led to videos of alt-right content, and that Hillary Clinton speeches eventually served up leftist conspiracies. As she widened her analysis, she found it wasn’t just politics. “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons. It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.”15

Looking Forward: The Case for Regulation & Cross-Sectoral Collaboration

11. Solon, Olivia. “While Facebook works to create an oversight board, industry experts formed their own.” NBC News, September 25, 2020. https://nbcnews.to/3uaTx8k
12. Lytvynenko, Jane; Mac, Ryan and Silverman, Craig. “Facebook Knows It Was Used To Help Incite The Capitol Insurrection.” Buzzfeed, April 22, 2021. https://bit.ly/2R5PnzX

13. Lytvynenko, Jane; Mac, Ryan and Silverman, Craig. “Facebook Knows It Was Used To Help Incite The Capitol Insurrection.” Buzzfeed, April 22, 2021. https://bit.ly/2R5PnzX
14. Ibid.
15. Klein, Ezra. Why We’re Polarized. Avid Reader Press / Simon & Schuster, January 28, 2020, page 156.


The societal implications of social media as they concern free speech are clear but sensitive, as the First Amendment’s cultural and legislative power hovers heavily over all considerations. Another important roadblock is that the technologies underpinning curation and moderation remain poorly understood, given that they operate as black boxes: they can be viewed in terms of inputs and outputs, but without any knowledge of their internal workings.

Harvard Business Review contributors Theo Lau and Uday Akkaraju succinctly summarize this conundrum: “When we type a query into a search engine, the results are determined and ranked based on what is deemed to be “useful” and “relevant.” What if they decide whose voice is prioritized? What if, instead of a public square where free speech flourishes, the internet becomes a guarded space where only a select group of individuals get heard — and our society in turn gets shaped by those voices?”16

The public and the government are aware that data collection helps algorithms determine what will capture the most eyeballs in today’s “attention economy” – keeping users scrolling, clicking and sharing. But neither has a clear view of how these algorithms are trained to flatten identity and opinion into manageable labels, thus reinforcing biases, segregating individuals into self-perpetuating echo chambers, and shaping public opinion with serious consequences.

A 2020 Gallup/Knight survey17 indicates that while users believe that online platforms are important places of open expression, they have gotten warier about the ways companies distribute misleading public health information, election disinformation, bigoted trolling and other harmful content.

As Sunstein suggests in “#republic: Divided Democracy in the Age of Social Media,” the ease with which users with fringe ideals spanning racism, sexism and homophobia can find their niche is cause for concern. Facilitated by platform architecture to grow a captive audience and gain the opportunity to go “viral,” an extreme groupthink forms. As reports of mass shooters across the U.S. have shown, posting on social media and garnering support from fellow incels emboldened them to commit destructive acts in real life. Anonymity only adds fuel to the fire that networked technologies present – evaporating societal barriers to discriminatory threats, hateful or defamatory language, and false information around issues of public interest.18

Media scholar Jonathan Albright coined the term “Untrue-Tube” in his research referencing YouTube’s primacy in the disinformation space. Albright notes that the video service’s recommendation system – deemed the best in the world – allows content creators to monetize harmful material while benefiting from the boost that comes along with the system’s high visibility potential. While he offers caveats,

16. Lau, Theodora and Akkaraju, Uday. “When Algorithms Decide Whose Voices Will Be Heard.” Harvard Business Review, Nov. 12, 2019. https://bit.ly/2xhU5B3
17. “The future of tech policy: American views.” Knight Foundation, June 16, 2020. https://kng.ht/3gNe3rF
18. Sunstein, Cass R. #republic: Divided Democracy in the Age of Social Media. Princeton University Press, 2018, page 185.

he agrees that policies must be put in place to include optional filters and increase the number of human moderators scrutinizing potentially dangerous videos and imagery.19

Social media companies have recognized their role in providing platforms for speech. In a 2018 hearing before the Senate Select Committee on Intelligence, Twitter CEO Jack Dorsey repeatedly referred to Twitter as a “digital public square,” emphasizing the importance of “free and open exchange” on the platform.20

Alongside legislation, we are seeing civil society groups encourage new initiatives and standard-setting efforts, and private companies have made some strides to stand up new oversight bodies and moderation features internally in the face of proliferating disinformation and hate speech inciting violence online. Such efforts are based on a growing awareness that this shift in information sharing urgently requires some regulatory oversight.

Researcher and Founding Director of Ranking Digital Rights (RDR) Rebecca MacKinnon asserts that “a clear and consistent policy environment that supports civil rights objectives and is compatible with human rights standards is essential to ensure that the digital public sphere evolves in a way that genuinely protects free speech and advances social justice.”21

Companies have long toed the line between rejecting and inviting regulation. For years, Facebook lobbied governments against imposing tough rules, warning that they would harm its business model. Recently, we have seen some reversal of this position, with Big Tech increasingly pleading for new rules for the good of its business – and to regain user trust.

In March 2019, Zuckerberg penned an open letter in the Washington Post calling for government intervention to delineate a standardized approach for content review systems at scale, and to set baselines against which companies can measure the efficacy and consistency of their practices.22 In a white paper published in February 2020, he and his team detailed a push for internet regulation, specifically calling on lawmakers to devise rules around harmful content, a different model for platforms’ legal liability and a “new type of regulator” to oversee enforcement in areas such as harmful content. In addition, the company would consider unlocking content moderation systems for external audit to help governments better design regulation in areas like hate speech.23

19. Albright, Jonathan. “Untrue-Tube: Monetizing Misery and Disinformation.” Feb. 25, 2018. http://bit.ly/31Nmytg
20. Brannon, Valerie. “Free Speech and the Regulation of Social Media Content.” Congressional Research Service, March 27, 2019, page 5. https://bit.ly/334dDVX
21. MacKinnon, Rebecca. “Reclaiming Free Speech for Democracy and Human Rights in a Digitally Networked World.” University of California National Center for Free Speech and Civic Engagement, 2019-2020. https://bit.ly/2SdkBpj
22. Zuckerberg, Mark. “Mark Zuckerberg: The Internet needs new rules. Let’s start in these four areas.” Washington Post, March 30, 2019. https://wapo.st/2PwFic1
23. Drozdiak, Natalia. “Facebook Needs Regulation to Win User Trust, Zuckerberg Says.” Bloomberg, February 17, 2020. https://bloom.bg/2VLVv0j

The creation of a Facebook Oversight Board demonstrates Zuckerberg’s willingness to grapple with these difficult issues, but he has come up against claims of partiality, given that the company had too much power in nominating a majority of the “external” body’s board members.24 To this point, pessimism about his and other leaders’ appeals to the government for increased rules and standards is valid, as internet platforms have failed to rise to such occasions before. According to New America’s Open Technology Institute assessment of the Santa Clara Principles on Transparency and Accountability Around Online Content Moderation,25 findings indicate that although Facebook, YouTube and Twitter have demonstrated progress in implementing the recommendations related to “notice” and “appeals,” they have reneged on their commitment to disclose the numbers of posts removed and accounts permanently or temporarily suspended due to content guideline violations.26

As things stand, private internet companies remain enigmatic self-regulators and rely on ad hoc “platform law” in which consistency, accountability and remedy are non-existent. As such, our best recourse for regaining control over lawless abuses of free speech and healthy dialogue in internet forums is deploying federal regulation. Policies to combat the hate speech and disinformation chipping away at democratic deliberation online should push for transparency around moderation and curation practices, accountability for mistakes made and commitments to amend internal decision-making accordingly, and sanctions on private entities unwilling to comply with authorities over editorial misdeeds and abuses like free speech violations.

This paper will outline which framework – public interest, common carrier, free market, international human rights law – is best suited to help minimize disinformation and hate speech inciting violence. This analysis will necessarily require weighing which one best protects the fundamental right of free speech while at the same time reducing harmful content. Based on evidence gathered, I will map suggestions to begin building an effective and actionable regulatory framework for internet governance – grounded in best practices for private companies, civil society groups and U.S. lawmakers, weaving in existing legislation, and proposing new protocols to shape a stronger and more effective digital social contract for all.

ANALYTIC FRAMEWORK OVERVIEW

In this section I will provide an overview of the analytic framework designed to structure my thinking, which I use as a model to guide and facilitate my understanding of the evidence for each possible answer. This paper will consider the following four possible answers to my research question.

24. Ingram, Matthew. “Facebook lays out the rules for its new Supreme Court for content.” Columbia Journalism Review, January 30, 2020. https://bit.ly/3aQipYl
25. Santa Clara Principles on Transparency and Accountability Around Online Content Moderation landing page. Accessed April 29, 2021. https://santaclaraprinciples.org/
26. Singh, Spandana. “Assessing YouTube, Facebook and Twitter’s Content Takedown Policies.” New America Open Technology Institute, May 7, 2019. https://bit.ly/2KNBbVH


First is the public interest framework, which advocates for guiding the use and regulation of scarce resources for the public good, in order to prevent what were traditionally broadcast licensees – and today are social media platforms acting as publishers – from taking advantage of their dominant position in disseminating information for profit in the “marketplace of ideas.” The Fairness Doctrine was one mechanism facilitating this approach, as it required broadcasters to offer equal time to, and balanced perspectives on, important civic issues. Another is Section 230 of the Communications Decency Act of 1996, which is today under consideration for amendment so that its statute better addresses social media companies.

Second is the common carrier framework, which advocates for regulating social media companies like common carriers and public utilities. Notably, this would mean they would be subject to non-discrimination clauses and the principle of net neutrality, wherein they are prohibited from speeding up, slowing down, or blocking any content, applications or websites customers want to use.

Third is the free market framework, which is the dominant approach today, wherein digital platforms self-regulate both in the absence of and in order to avoid government intrusion.27 Mechanisms here include internally set Terms of Service and Community Guidelines, and “third party” bodies like Facebook’s Oversight Board and Twitter’s Birdwatch acting as arbiters for tough takedown cases.

Fourth is the international human rights law framework, which suggests applying the lens of globally ratified human rights norms and values, namely the right to free speech, to private companies’ content moderation policies. Central to this approach is its ability to grapple with issues that touch free speech and public debate, its inherent balancing of free speech with other fundamental rights, and the myriad documents drafted to help guide businesses in how best to uphold human rights.

While the public interest framework has seen the creation of valuable indices to measure companies’ policies and reporting practices (or lack thereof) by civil society groups, these bodies lack a crucial incentivizing mechanism to ensure companies abide by their recommendations, and are unable to hold them accountable in any official or financial capacity. On the policy side, arguments against applying the Fairness Doctrine and debates around amending Section 230 remain ongoing, but some of the myriad proposals to update the statute offer promising provisions to bring it into the social media age.

Taking all of this into consideration, the civil society experts and scholars analyzing and advocating for policies that could help limit hate speech and disinformation online remain the backbone of the content moderation regulation conversation: they have put forth reports exposing company misbehavior, and increasingly have the ear of U.S. policymakers across the aisle. Their role should be to act both as educators and guides to U.S. policymakers arguably still lacking broad technical knowledge, and to liaise

27. Lotz, Amanda. “Profit, not free speech, governs media companies’ decisions on controversy.” The Conversation. August 10, 2018. https://bit.ly/3u9EOuc

with private companies to encourage the reconsideration and modification of internal policies that remain opaque and overly permissive.

The common carrier framework grapples with the benefits and pitfalls of classifying social media as a common carrier or public utility, and is rife with disagreement amongst scholars. Some academics are proponents of this approach, while others argue that the framework traditionally applied to broadcast (wherein the broadcast spectrum is limited) is not appropriate for social media – which fosters a wide array of information sources, is free to access for all sorts of users, and is not restricted by a similar short-supply predicament.

Free market solutions and self-regulatory measures put forth by companies in the absence of formal regulation – what is here classified as the free market framework – continue to fall short. Some critics view internal updates to tackle problematic posts, and the punting of the most egregious cases to third-party authorities, as tactics to further shed responsibility and avoid reckoning with what most experts see as the major element being glossed over: the financial incentives behind the core business model.

With that said, some free market solutions should not be total throwaways. Facebook’s tapping of subject matter experts for its Oversight Board has broadened the public’s view into the types of cases the platform puts under review, with some net-positive and actionable next steps. Its rulings can certainly supplement growing calls for more accountability and transparency measures around content moderation decision-making. Twitter’s Birdwatch tool, though still in its early rollout, is a valuable experiment in measuring whether crowdsourced moderation could work as a potential model to be deployed platform-wide. Ultimately, the more tools that attempt to solve this tricky problem, the better.

The international human rights framework applies broadly to the states that have ratified the relevant treaties, so nailing down how those norms should be interpreted and applied specifically by global private companies lacking subject matter expertise seems both an arduous and ultimately pointless task. But the Guiding Principles on Business and Human Rights28 propose meaningful steps companies can adopt to improve their human rights record, so this document should continue to be referenced in these discussions.

In short, there is no right or perfect framework to probe this complex, and ever-changing, challenge. I will suggest that a combination of the public interest and free market frameworks is best suited to envision effective regulation for social media content moderation. I will venture to say that U.S. policymakers – seeking guidance from digital rights experts, grappling with Section 230 amendment, and recognizing the merits of platforms’ internal strides to tackle their difficult role – likely feel the same.

In order to evaluate the attributes that make up each framework, I will analyze each by exploring how it stacks up against three key dimensions. One dimension is its efficacy in removing harmful content,

28. Guiding Principles on Business and Human Rights, UN Human Rights Office of the High Commissioner. https://bit.ly/2yR2kog

understood as falling into the categories of disinformation and hate speech inciting violence. Another is its free speech protection. A final one is its implementability and enforceability.

To expand upon these considerations, I will begin by setting up the landscape of existing approaches. From there, I will break down what each framework does to piece together the complex puzzle of regulating or self-regulating social media content moderation, which will touch upon what is working and what is not, and outline how all these mechanisms interact. Finally, I will leverage all of the above analysis to inform a concluding argument that will specify policy recommendations directed at both U.S. lawmakers and social media companies.

ANALYSIS OF FRAMEWORKS

PUBLIC INTEREST FRAMEWORK

The public interest framework is primarily concerned with protecting societal values that are at risk of being lost if we rely solely on the free market approach. Specifically in the context of information and communications technologies, it is concerned with the dissemination of accurate information, how its flows can have an impact beyond the digital sphere, and the consequences of relying on automation to moderate content in our complex and ever-evolving “informationscapes.” Related recommendations and pushes for legislation have centered on developing safeguards to address what is seen as a moderation crisis that undermines values like free speech and democratic processes at a time of heightened political polarization – which may lead to a new era of public oversight of private companies according to a public interest standard.29

To this end, proponents of this approach argue that companies’ market dominance has led to excessive influence over the political and public sphere, with poor outcomes for users. U.S. lawmakers and civil society groups have urged examination of platforms’ core business model and the inner workings of content moderation, called on them to produce transparency reports that include details about content blocking and removal, and pressed for access to internal data so researchers can study how algorithmic design may be driving substandard outcomes in minimizing disinformation and hate speech inciting violence – harms that sometimes translate into violent events in real life.

There is wide consensus amongst civil liberties and digital rights groups that platforms like Facebook appeal to free speech principles only when they are economically advantageous,30 and that platforms rely on techno-solutionism and internal standard-setting that net suboptimal results and are deemed overdue but inadequate. Color of Change’s vice president Arisha Hatch said in a statement: “This is progress, but Twitter demonstrated a consequential lack of urgency in implementing the updated policy before the most fraught election cycle in modern history, despite repeated warnings by civil rights advocates and human rights organizations.”31

So, what public interest mechanisms have been proposed by the U.S. government and civil society groups that we may consider applying to digital platforms to fill the vacuum left by the free market policies in place today? Which can best address public interest goals and limit cesspools of hate and disinformation online?

Section 230 of the Communications Decency Act says that an “interactive computer service” can’t be treated as the publisher or speaker of third-party content.32 The Electronic Frontier Foundation calls it

29. Matzko, Paul and Samples, John. “Social Media Regulation in the Public Interest: Some Lessons from History.” Knight First Amendment Institute at Columbia University. May 2020. https://bit.ly/3xCsoNA
30. Solon, Olivia. “‘Facebook doesn't care': Activists say accounts removed despite Zuckerberg's free-speech stance.” NBC News. June 15, 2020. https://nbcnews.to/3vq2MS8
31. Klar, Rebecca. “Twitter, Facebook to update hate speech moderation.” The Hill. December 30, 2020. https://bit.ly/3xATapK
32. Newton, Casey. “Everything You Need to Know About Section 230.” The Verge. May 28, 2020. https://bit.ly/344JxTh

“the most important law protecting internet speech.”33 Because it was created before the advent of social media, however, critics fear it protects companies while enabling real harm to their users.

EFF describes Section 230’s purpose as preventing over-censorship by protecting online intermediaries that host or republish speech against laws that might otherwise be used to hold them legally responsible for what their users and other third parties say and do. Without Section 230, rather than face potential liability for their users’ actions, most intermediaries would likely not host any user content at all, or would protect themselves by actively censoring what we say, see, and do online.34

There have been many congressional proposals to amend Section 230 or repeal it entirely.35 Bipartisan at its origin, the law has been singled out and scrutinized across the aisle. Senator Ted Cruz describes it as “a subsidy, a perk” for Big Tech, and Speaker Nancy Pelosi calls it a “gift” to tech companies “that could be removed.”36 More broadly, Democrats assert it allows tech companies to get away with not moderating content enough, while Republicans proclaim it enables them to moderate too much.37 Because a flurry of legislative reforms have been put forth, Future Tense, the Tech, Law, & Security Program at the Washington College of Law at American University, and the Center on Science & Technology Policy at Duke University partnered on a project to track all of them starting in 2020.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act38, introduced by U.S. Senators Schatz and Thune in June 2020, is one proposal to update Section 230. Though contentious for its thorny treatment of court orders around illegal content, it puts forth worthwhile requirements for transparency, accountability, and user protections.

This includes an easy-to-understand disclosure of moderation guidelines, which remain opaque – a roadblock discussed during the 2019 sessions on platform transparency at the Transatlantic Working Group on Content Moderation and Free Expression.39 Additionally, platforms would have to explain their reasoning behind content removal decisions, and explain clearly how a removed post violated terms of use. Lastly, the act would create a system for users to appeal or file complaints around content takedowns. Digital rights organization Access Now has called it the most reasonable proposal put forth thus far, while acknowledging that it is not a complete or perfect solution, though a few of its clauses offer a good start.40 Daphne Keller, the Director of the Program on Platform Regulation at Stanford's Cyber Policy Center, deems it an “intellectually serious effort to grapple with the operational challenges of

33. Kelley, Jason. “Section 230 is Good, Actually.” EFF. December 3, 2020. https://bit.ly/3ggZsT7
34. “Section 230 of the Communications Decency Act.” EFF. Accessed on April 29, 2021. https://www.eff.org/issues/cda230
35. Jeevanjee, Kiran et al. “All the Ways Congress Wants to Change Section 230.” Slate. March 23, 2021. https://bit.ly/3gMi2VD
36. Wakabayashi, Daisuke. “Legal Shield for Social Media Is Targeted by Lawmakers.” New York Times. October 28, 2020. https://nyti.ms/39LFWNx
37. Laslo, Matt. “The Fight Over Section 230—and the Internet as We Know It.” WIRED. August 13, 2019. https://bit.ly/36NhysO
38. Sen. Schatz, Brian. S.4066 - PACT Act. Congress.gov. June 24, 2020. https://bit.ly/3u6a7pU
39. MacCarthy, Mark. “How online platform transparency can improve content moderation and algorithmic performance.” Brookings. February 17, 2021. https://brook.gs/3aRGmkX
40. “Unpacking the PACT Act.” Access Now. September 21, 2020. https://bit.ly/2SetYVU

content moderation at the enormous scale of the internet. [...] We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.”

However, she is still grappling with some of its provisions and logistics, namely what the First Amendment ramifications of its FTC consumer protection model for Terms of Service-based content moderation would be.41 In tackling hate speech online, this is especially pertinent considering that hate speech is not technically illegal under the First Amendment, barring narrow exceptions42 such as threats of illegal conduct or incitement intended to and likely to produce imminent illegal conduct (i.e. incitement to imminent lawless action).43

Scholars Danielle Citron and Benjamin Wittes have offered what they present as a broader though balanced fix, wherein platforms would enjoy Section 230 immunity from liability only if they can show that their response to unlawful uses of their services is reasonable. Their revision to the statute is reproduced below for reference:44

No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.

What constitutes a “reasonable standard of care” would take into account that social networks handling millions of posts a day cannot realistically respond to every complaint of abuse within a short time span. But this clause could help push for the deployment of technologies to detect content previously deemed unlawful as violations of platforms’ Terms of Service.

The FCC’s Fairness Doctrine45 is another mechanism previously applied to U.S. broadcasters, which required them to present a balanced range of perspectives on issues of public interest. Former President Trump’s 2020 Executive Order on Preventing Online Censorship46 called on the Department of Justice to “assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination,” suggesting that private social media companies should be compelled to serve as viewpoint-neutral vehicles for the dissemination of “government speech.” Around this time, Senator Hawley introduced S.1914, a bill that would have

41. Keller, Daphne. “CDA 230 Reform Grows Up: The PACT Act Has Problems, But It’s Talking About The Right Things.” Stanford Law School Center for Internet & Society. July 16, 2020. https://stanford.io/3sZj03e
42. “Which Types of Speech Are Not Protected by the First Amendment?” Freedom Forum Institute. Accessed April 29, 2021. https://bit.ly/3aPKMZL
43. Staff. “Factbox: When can free speech be restricted in the United States?” Reuters. August 14, 2017. https://reut.rs/330FIgH
44. Citron, Danielle Keats and Wittes, Benjamin. “The Problem Isn't Just Backpage: Revising Section 230 Immunity.” Georgetown Law Technology Review 453, U of Maryland Legal Studies Research Paper No. July 23, 2018. Available at SSRN: https://ssrn.com/abstract=3218521
45. Ruane, Kathleen. “Fairness Doctrine: History and Constitutional Issues.” Congressional Research Service. July 13, 2011. https://bit.ly/2SflDkN
46. Executive Order 13925, “Preventing Online Censorship.” May 28, 2020. https://bit.ly/3xuvrHP

amended Section 230 so that “big tech companies would have to prove to the FTC by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.” There is nothing in Section 230 that requires social platforms that host third-party and user-generated content to be viewpoint neutral.

John Villasenor, a Brookings Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation,47 argues that today’s internet ecosystem enables access to a wide and diverse range of information sources and viewpoints (versus the limited broadcast spectrum within which traditional broadcast media operated). Furthermore, given that platforms are private companies that are protected by, rather than bound by, the First Amendment, requiring them to be “politically neutral” would itself be a constitutional violation, since platforms are free to welcome and preferentially promote a diverse range of perspectives spanning the political spectrum.

Additionally, DC think tank New America’s Ranking Digital Rights compiles an annual comprehensive Corporate Accountability Index to evaluate and rank the world’s most powerful digital platforms and telecommunications companies on their disclosed policies and practices affecting users’ digital rights like freedom of expression and privacy.48 The hope is that this could be a primary vehicle to leverage its breadth of public interest research, evaluate how transparent tech companies are about their policies and practices in comparison with their peers, establish a baseline against which to measure their commitment to digital rights, and push companies to improve if and how they uphold such obligations. Its analysts comb through thousands of internal documents to learn how each platform enforces its policies, how accessible they are, and how they interact with governments and other third parties.

Finally, some scholars, including Ethan Zuckerman, have proposed mapping a public service-minded digital media alternative to the Facebooks and Twitters of the world.49 But this could take years, and there’s no guarantee that alternative networks would be able to pierce through the crowded media environment, or that users would make the switch, considering Facebook currently has around 2.74 billion active users, YouTube around 2.29 billion, and Twitter around 350 million globally.

In Social Media and the Public Interest: Media Regulation in the Disinformation Age, Duke Public Policy Professor Philip M. Napoli argues that a social media–driven news ecosystem represents a case of market failure in what he calls the algorithmic marketplace of ideas.50

To respond, he argues, we need to rethink fundamental elements of media governance based on a revitalized concept of the public interest. Some of the bipartisan proposals put forth to amend Section

47. Villasenor, John. “Why creating an internet “fairness doctrine” would backfire.” Brookings. June 24, 2020. https://brook.gs/3vn8JiW
48. 2020 Ranking Digital Rights Corporate Accountability Index landing page. Accessed April 29, 2021. https://bit.ly/3aU6TOz
49. Zuckerman, Ethan. “The Case for Digital Public Infrastructure.” Knight First Amendment Institute at Columbia University. January 17, 2020. https://bit.ly/3vvAZQq
50. Napoli, Philip M. Social Media and the Public Interest. New York Chichester, West Sussex: Columbia University Press, 2019. https://doi.org/10.7312/napo18454

230 so that it better addresses social media companies, as well as the reimagining of the digital public square by scholars touting the benefits and building blocks of alternative social networks, demonstrate we are well on our way.

Based on the discussion above, it is reasonable to conclude that amending Section 230 – taking into account both certain tenets of the PACT Act and Citron and Wittes’ proposal that platforms be required to demonstrate a “reasonable standard of care” – is the best avenue forward to minimize harmful content and preserve free speech under the public interest framework.

While implementation remains up in the air, we know Congressional Democrats have begun discussions with the White House on ways to crack down on Big Tech, including the best ways to hold social media companies accountable for the spread of disinformation, hate speech and information-sharing that led to events like the Capitol riot. During his candidacy, President Biden called for revoking Section 230 altogether, but much of the legislation on the table is concerned with amending rather than repealing the statute.51

On the enforcement front, there is still uncertainty in discussions around the FCC and FTC’s authority to interpret and enforce Section 230 provisions.52 Some suggest it may be best to leave oversight of digital platforms and related issues to a new, more specialized digital regulatory agency.53

COMMON CARRIER FRAMEWORK

A common carrier is a company that transports goods or provides services, like carrying communications, and is responsible for those goods or services during transport. In the U.S., and for the purposes of exploring this research question, the term can refer to telecommunications service providers and public utilities, whose business is affected with a public interest.54 The term “telecommunications” means the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.55 The FCC has classified internet service providers (ISPs), like Comcast, as common carriers for the purpose of enforcing net neutrality. Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast and Verizon from speeding up, slowing down or blocking any content, applications or websites you want to use.56

51. Bose, Nandita and Renshaw, Jarrett. “Exclusive: Big Tech's Democratic critics discuss ways to strike back with White House.” Reuters. February 17, 2021. https://reut.rs/3eKsutZ
52. Brannon, Valerie et al. “UPDATE: Section 230 and the Executive Order on Preventing Online Censorship.” Congressional Research Service Legal Sidebar. October 16, 2020. https://bit.ly/3nNfKal
53. Kimmelman, Gene. “Key Elements and Functions of a New Digital Regulatory Agency.” Public Knowledge. February 13, 2020. https://bit.ly/3aSS4fh
54. Telecommunications common carrier definition. Law Insider. https://bit.ly/335Ael4
55. “Basic Service / Telecommunications Service.” Cybertelecom Federal Internet Law & Policy, An Educational Project. Accessed April 29, 2021. https://bit.ly/3nyQxQu
56. “The internet without Net Neutrality isn’t really the internet.” Free Press. Accessed April 29, 2021. https://bit.ly/32YNmbl

16

A key tenet of common carriage is that the carrier must provide non-discriminatory service. This means that service cannot be denied for any legal content or purpose, and while different tiers and accompanying rates can exist, service at each tier must be provided to those who pay for it. Non-discrimination regulations essentially prohibit common carriers from making individualized or case-by-case decisions with respect to the terms upon which they provide their services.57

In looking at what regulatory frameworks would best apply to digital platforms like Facebook, Twitter and YouTube, classifying them as common carriers or public utilities in order to regulate them accordingly has been floated by some, though most experts caution against its applicability in the digital platform context. While some services – like telephone and text messaging – are required to be common carriers, the obligations that come with being treated as one are significant. As such, it is necessary to ask whether common carrier regulation would be beneficial as regards social media platforms, or whether other frameworks are better suited to helping minimize hate speech and disinformation online.

An important legal requirement for a common carrier is that it cannot discriminate against a customer or refuse service unless there is some compelling reason. This could make it a requirement that networks not demonstrate “bias” against certain viewpoints. In practice, this means that all legal content must be treated in a non-discriminatory manner, and all users who are engaging with or generating content must be treated the same.

In April 2021, Supreme Court Justice Clarence Thomas put forth an opinion supporting the common carrier approach to regulating social media content, alongside the Court’s decision to dismiss a lawsuit against former President Trump over his blocking of some Twitter followers. He cited the Turner Broadcasting case,58 which required cable operators to carry broadcast signals, and which he argued might also apply to digital platforms. In short, he offered a response to a First Amendment challenge to the common carrier framework, under which social media platforms would not be treated as speakers but also would not have the right to decide what is said on their sites. Rather, they would be “reconceptualized as neutral, passive conveyors of the speech of others.”

Mark MacCarthy, a Nonresident Senior Fellow in Governance Studies at Brookings’ Center for Technology Innovation, outlines how experts and stakeholders on the left and the right responded to Justice Thomas’ opinion. Conservatives concerned with social media censorship applauded it.59 Some scholars on the left also endorse the idea, with law professors Genevieve Lakier and Nelson Tebbe

57. See 47 U.S.C. § 201; see also Report to Congress, FCC, CC Docket No. 96-45, FCC 98-37 (Apr. 10, 1998), at 8, 37-41, available at http://transition.fcc.gov/Bureaus/Common_Carrier/Reports/fcc98067.pdf
58. “Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994).” Justia US Supreme Court. Accessed April 29, 2021. https://bit.ly/3t5NL6G
59. MacCarthy, Mark. “Justice Thomas sends a message on social media regulation.” Brookings. April 9, 2021. https://brook.gs/3u9FfEX

arguing users have a constitutional right to carriage on social media needed to counteract “the threats to that result from private control of the mass public sphere.”60

But a common carrier framework is still not necessarily considered the path to follow by other experts. In a response to Lakier and Tebbe, First Amendment scholar Robert Post notes that treating social platforms as common carriers would mean they would be “compelled to broadcast intolerable and oppressive forms of speech” and that such a move would invalidate existing minimal content moderation practices, exacerbating issues around harmful but legal communication like disinformation and hate speech that we grapple with in the digital public sphere.61

Following the same line of thought, Public Knowledge Legal Director John Bergmayer – who specializes in telecommunications, media and internet issues – does not think “must carry” requirements are necessary for social networks.62

In response to those on both sides of the aisle who think platforms should default to leaving technically legal content up and give leaders like former President Trump a platform for public interest reasons (access to his thoughts on policy, etc.), Bergmayer argues that the law should not require platforms to carry all user-generated content indifferently, and cautions against unmoderated speech platforms solely focused on removing illegal content.

Mechanisms that uphold arguments favoring the imposition of a common carrier framework include natural monopoly, wherein market conditions make it very difficult for competitors to enter a marketplace. Bergmayer asserts that this dynamic does not apply to social media networks, which are offered to end users for free and whose underlying information and communication technologies can be repurposed and replicated into alternative social networks. While major platforms control access to services for their users, they are not the sole providers of communication and content generation online. Unlike the smaller number of ISPs, users can go elsewhere to seek such services out as needed.

Additionally, even if an existing social media platform denies a competitor use of its "facility," competitors can relatively easily duplicate such platforms; the accompanying challenge centers more on building a comparable user base than on constructing the “physical” digital infrastructure of a social network. Parler stepping in to fill the vacuum for users and accounts removed from Twitter and Facebook is one example of this process in action, as it saw downloads surge after the Big Tech players restricted groups and posts peddling false election claims and banned Trump.63

60. Lakier, Genevieve and Tebbe, Nelson. “After the “Great Deplatforming”: Reconsidering the Shape of the First Amendment.” Law and Political Economy (LPE) Project. March 1, 2021. https://bit.ly/3t5pP3g
61. Post, Robert. “Exit, Voice and the First Amendment Treatment of Social Media.” Law and Political Economy (LPE) Project. April 6, 2021. https://bit.ly/2Rg3W4c
62. Bergmayer, John. “What Makes a Common Carrier, and What Doesn’t.” Public Knowledge. January 14, 2021. https://bit.ly/2QBrxfI
63. Dwoskin, Elizabeth and Lerman, Rachel. “‘Stop the Steal’ supporters, restrained by Facebook, turn to Parler to peddle false election claims.” Washington Post. November 13, 2020. https://wapo.st/3xyixbD


Another element to consider in the common carrier framework is that of network effects, for which the historical example is the telephone system. In short, it designates the phenomenon where networks become more valuable as more people use them. Law.com defines network effects as driving both speakers and listeners to be in the same place where everybody else is in order to reach the broadest audience and to access the broadest range of content. Another consideration of network effects is that the owners control access to the ability to broadcast to a mass audience or to reach a niche one.64

This could be said of social media, where a breadth of information is classified and categorized by algorithms that feed personalized content back to users based on their browsing and engagement patterns. However, even though an alternative platform can emerge to serve a user who is denied access to one of the large mainstream and established platforms, the user experience will differ starkly, because the alternative is likely unable to offer the same scale of content and massive audience – which ties back to the concept of network effects.

Bergmayer concludes that the common carrier framework would not bring net-positive results for regulating social networks, as unmoderated platforms would become oversaturated with low-quality content like abuse and spam, and would make it even easier for groups to organize mass violence without oversight or fear of retribution.

Renowned researcher danah boyd contends that Facebook is acquiring some public utility characteristics, though it is still not at the scale of the internet, and suggests that regulation may be in its future.65 In comparing social media platforms to traditional public utilities, Adam Thierer, a Senior Research Fellow at George Mason University’s Mercatus Center, warns that treating nascent digital platforms as such would ultimately harm consumer welfare for a few key reasons. He sees public utility regulation as the “archenemy of innovation and competition.”66

Additionally, Thierer believes that calling social media natural monopolies would turn into a self-fulfilling prophecy. Finally, given that social media are tied up with the production and dissemination of speech and expression, First Amendment values are implicated, even though the amendment does not technically apply to private Big Tech companies. Thus, platforms are expected to retain the editorial discretion to determine what can appear on their sites.67 Given this is the case, and that they hold a growing role in public discourse, it is no surprise that academics, digital rights advocates and lawmakers are closely reviewing internally crafted content policies to determine whether certain cases could be considered to amount to censorship.

64 Law Journal Editorial Board. "Are Social Media Companies Common Carriers?" Law.com. March 14, 2021. https://bit.ly/3xyiHQh
65 boyd, danah. "Facebook Is a Utility; Utilities Get Regulated." ZEPHORIA. May 15, 2010. https://bit.ly/3gPo9s9
66 Thierer, Adam. "The Perils of Classifying Social Media Platforms as Public Utilities." George Mason University Mercatus Center. March 19, 2012. https://bit.ly/333bFFo
67 Thierer, Adam. "The Perils of Classifying Social Media Platforms as Public Utilities." George Mason University Mercatus Center. March 19, 2012. https://bit.ly/333bFFo


On the natural monopoly front, Zeynep Tufekci, an assistant professor at the University of North Carolina, Chapel Hill, argues that "many such services are natural monopolies: Google, Ebay, Facebook, Amazon, all benefit greatly from network externalities which means that the more people on the service, the more useful it is for everyone." In particular, she worries about Facebook causing a "corporatization of social commons" and about the dangers of the "privatization of our publics."68

Here again, Thierer pushes back, pointing to the fact that traditional pillars of media regulation in regards to broadcast radio and television were scarcity and the supposed need for government allocation of the underlying limited resource of the broadcast spectrum. In contrast, social media services are not “physical resources with high fixed costs.”

He concludes by contending that social media platforms do not meet the criteria or possess the qualities typically associated with public utilities and common carriers.69

Based on the discussion above, while there is some validity to the concerns of scholars like Tufekci that private companies increasingly "own" digital public squares, the common carrier framework traditionally applied to a limited broadcast spectrum is not suitable to social media today. While a few private companies command much of the information and communications space and battle valid claims of content and user discrimination, their services are free to access and they provide a wide variety of information sources for users to choose from.

Because social media operates under different characteristics and lacks the same constraints as broadcast networks, solutions to tackle disinformation and hate speech inciting violence in the former should not seek inspiration from the latter. Broadcast and social media networks’ foundational services differ radically, and thus should not be regulated in a similar manner.

FREE MARKET FRAMEWORK

The dominant free market framework is defined by the current status quo of "ad hoc platform law" and a whack-a-mole content takedown strategy, wherein platforms make their own rules in the absence of formal policy intervention. Under this framework, if business interests are consistent with minimizing harmful content or protecting free speech, companies will pursue these goals because they align with market incentives – which nets out to sustaining a strong user base and keeping partners happy. Now, platforms are facing a major public reckoning and pushback to their techno-solutionist strategy to

68 Tufekci, Zeynep. "Facebook: The Privatization of Our Privates and Life in the Company Town." Technosociology. May 14, 2010. https://bit.ly/3gPA1dR
69 Nat. Broad. Co. v. United States, 319 U.S. 190, 226-27 (1943); see also Red Lion Broad. Co. v. F.C.C., 395 U.S. 367, 375 (1969)

combat hate speech and disinformation, as it becomes clear how the core ad monetization business model helps spread and amplify harmful content under the guise of free speech.

Experts have long expressed concern that tech giants program their features to favor profit over societal benefit, especially around civic issues. The January 6 attack on the U.S. Capitol was organized in plain sight on social media platforms, and offers a wake-up call about their growing power and reach beyond the confines of cyberspace.70 Facebook and Twitter swiftly banned accounts and removed radicalizing content that spawned the violent mob, culminating in the ban of former President Trump from the platforms, citing his use of social media to share misleading content and inflame millions of his followers. But many decried these actions as "too little, too late."71

This move saw the migration of many, especially in conservative and alt-right circles and whose accounts had been suspended or removed from mainstream networks, to Parler, which bills itself as “the only neutral social media platform” for being largely unmoderated. The app was spawned because its founders claimed to be "exhausted with a lack of transparency in big tech, ideological suppression and privacy abuse” on Big Tech platforms.72

Dipayan Ghosh, Co-Director of the Digital Platforms & Democracy Project at Harvard's Shorenstein Center on Media, Politics and Public Policy, pondered whether the Trump ban indicated a turning point in how platforms handle potentially harmful content, and what it heralds in terms of their self-regulation.73

The "de-platforming" was decried by world leaders including German chancellor Angela Merkel as "problematic," as it called into question the "right to freedom of opinion [that] is of fundamental importance." Ghosh argues that even those who felt the ban was appropriate acknowledge that tackling a single account in a politically divisive environment is not an adequate solution to the deep-rooted issues plaguing platforms, which tend to promote and amplify extremist groups, hate speech inciting violence, political disinformation, and other controversial content to serve their bottom line.74

A March hearing titled "Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation" demonstrated that lawmakers are keenly aware of how social media platforms' prioritization of user engagement and monetization schemes has enabled the proliferation of extreme and false material, and of how few risk mitigation and prevention methods are baked into current rules and practices.

70 Frenkel, Sheera. "The storming of Capitol Hill was organized on social media." New York Times. January 6, 2021. https://nyti.ms/3xAYnOk
71 Culliford, Elizabeth; Menn, Joseph and Paul, Katie. "Analysis: Facebook and Twitter crackdown around Capitol siege is too little, too late." Reuters. January 8, 2021. https://reut.rs/3vwl5Fr
72 Hadavas, Chloe. "What's the Deal With Parler?" Slate. July 3, 2020. https://bit.ly/3t4M0GI
73 Ghosh, Dipayan. "Are We Entering a New Era of Social Media Regulation?" Harvard Business Review. January 14, 2021. https://bit.ly/3nGTHly
74 Ghosh, Dipayan. "Are We Entering a New Era of Social Media Regulation?" Harvard Business Review. January 14, 2021. https://bit.ly/3nGTHly

Illinois Democrat Robin Kelly succinctly summarizes the problem inherent in the free market framework that has protected platforms' business model from scrutiny and regulatory action:

“The business model for your platforms is quite simple: keep users engaged. The more time people spend on social media, the more data harvested and targeted ads sold. To build that engagement, social media platforms amplify content that gets attention. That can be cat videos or vacation pictures, but too often it means content that’s incendiary, contains conspiracy theories or violence.

Algorithms on the platforms can actively funnel users from the mainstream to the fringe, subjecting users to more extreme content, all to maintain user engagement. This is a fundamental flaw in your business model that mere warning labels on posts, temporary suspensions of some accounts, and even content moderation cannot address. And your companies’ insatiable desire to maintain user engagement will continue to give such content a safe haven if doing so improves your bottom line.”75

There are many examples that justify Kelly's accusations. While Facebook relies on the U.S. State Department list of designated terrorist organizations, this does not cover many white supremacist groups, such as "Alt-Reich Nation," one of whose members was recently charged with murdering a black college student in Maryland. The platform still hosts a number of hateful and conspiratorial groups, including white supremacist groups with hundreds of thousands of members, and regularly recommends users join them, according to a study76 published by the Anti-Defamation League.77 Twitter has also had its fair share of complaints about letting white nationalists use the platform even after being banned, and has said that it plans to conduct academic research on the subject.78

Facebook's failure to address such ills was illuminated in a Wall Street Journal article reporting that leadership ignored a 2018 internal presentation emphasizing that the company was well aware its recommendation engine stoked divisiveness and polarization. One slide from the presentation read: "Our algorithms exploit the human brain's attraction to divisiveness. If left unchecked," it warned, Facebook would feed users "more and more divisive content in an effort to gain user attention & increase time on the platform."79

This finding demonstrates that senior leadership sought to absolve itself of responsibility and chose not to implement changes to its service that would minimize the promotion of hate speech and bad actors, for fear doing so would disproportionately affect conservative users and hurt engagement.80

75 Edelman, Gilad. "Social Media CEOs Can't Defend Their Business Model." WIRED. March 25, 2021. https://bit.ly/3aQpsDk
76 Hateful and Conspiratorial Groups on Facebook. Anti-Defamation League. August 3, 2020. https://bit.ly/3aN30Ld
77 McEvoy, Jemima. "Study: Facebook Allows And Recommends White Supremacist, Anti-Semitic And QAnon Groups With Thousands Of Members." Forbes. August 4, 2020. https://bit.ly/3aN37q7
78 Newton, Casey. "How white supremacists evade Facebook bans." The Verge. May 31, 2019. https://bit.ly/3u5QEWv
79 Horwitz, Jeff and Seetharaman, Deepa. "Facebook Executives Shut Down Efforts to Make the Site Less Divisive." Wall Street Journal. May 26, 2020. https://on.wsj.com/3e62I4k
80 Seetharaman, Deepa. "Facebook Throws More Money at Wiping Out Hate Speech and Bad Actors." Wall Street Journal. May 15, 2018. https://on.wsj.com/2QJ06k1

Facebook was also found not to have enforced its rule against "calls to arms" ahead of the Kenosha shooting, despite CEO Mark Zuckerberg stating it had removed a militia event where members discussed gathering in Kenosha, Wisconsin, to shoot and kill protesters.81 Last summer's Stop Hate for Profit campaign82 by leading advertisers was one market response to what some feel has been a limited and inadequate response by social media companies to proactively police misinformation and hate speech.

Civil rights groups bolstered this effort by calling on large advertisers to stop Facebook ad campaigns during July, saying the social network isn’t doing enough to curtail racist and violent content on its platform.83 The campaign specifically focused its pressure on Facebook because of its scale and because advertisers feel it’s been less proactive than rivals Twitter and YouTube on policing misinformation and hate speech.84

Despite widespread support from major conglomerates that paused ads on the platform, and predictions the boycott would cost Facebook over $70 million, analysts affirm it had little impact on the company's revenue.85 Such limited substantive change demonstrates that the free market framework is unable to balance private financial incentives against calls from ad partners, users and civil society advocacy groups tying real-life negative consequences to hate fomented online.

In March, Reporters Without Borders filed a lawsuit arguing that Facebook engaged in "deceptive commercial practices" by allowing disinformation and threats to flourish despite promising users that it will "exercise professional diligence" to create "a safe, secure and error-free environment." Their specific claims center on a lack of commitment to promises made in Facebook's terms and conditions, calling them deceitful and contradicted by "the large-scale dissemination of hate speech and false information on its networks."86

Ahead of the 2020 U.S. election and in the wake of 2016 election meddling online, platforms were grappling with how to handle political advertising, as there was fear their networks could have outsized power to change the balance of elections by targeting and influencing voter behavior.87 Twitter announced it would no longer serve political ads,88 and YouTube announced that it would

81 Mac, Ryan and Silverman, Craig. "How Facebook Failed Kenosha." Buzzfeed. September 3, 2020. https://bit.ly/3e4LlRh
82 Stop Hate for Profit landing page. Accessed April 29, 2021. https://bit.ly/3t9IScJ
83 Arbel, Tali. "Civil rights groups call for 'pause' on Facebook ads." AP News. June 17, 2020. https://bit.ly/33gZ3ut
84 Fischer, Sara. "Stop Hate for Profit social media boycott to focus its pressure on Facebook." Axios. September 22, 2020. https://bit.ly/3xDpCaX
85 Abril, Danielle. "Facebook ad boycott: 'It's not going to do anything to the company financially'." Fortune. June 24, 2020. https://bit.ly/3eNTeK4
86 Riley, Charles. "Facebook accused of failing to provide a 'safe' environment for users." CNN. March 23, 2021. https://cnn.it/3eHNLVf
87 Ryan-Mosley, Tate. "Why Facebook's political-ad ban is taking on the wrong problem." MIT Technology Review. September 6, 2020. https://bit.ly/3e7mv3x
88 Feiner, Lauren. "Twitter bans political ads after Facebook refused to do so." CNBC. October 30, 2019. https://cnb.cx/3ucwEkM

remove thousands of videos from its platform that promote white supremacy and other hateful material.89 Facebook, for its part, later decided to temporarily halt political ads, broadening its earlier restrictions, in order to limit confusion, misinformation and abuse of its services.90

Clearly, this framework lays bare the inconsistencies in how platforms – which bear a difficult but undeniable responsibility as our digital public squares – have historically responded to charged events being organized on their sites, and to civic moments like the 2020 U.S. election.

In the wake of related PR crises and complaints about ideological biases and noxious content, Facebook announced the debut of its third-party but internally funded Oversight Board as one free market framework mechanism,91 hiring a group of subject matter experts – ranging from lawyers to human rights experts to civil society members – to adjudicate whether specific posts should be taken down and to hold the "final say" over how to handle controversial content such as hate speech.92

Its independence is heavily emphasized, though each member is paid a six-figure salary by the company,93 the Board can only interpret Facebook's existing rules, and CEO Mark Zuckerberg is under no legal obligation to abide by its rulings. Crucially, nothing comes before it that has not already been taken down by Facebook, which leaves major gaps in Facebook's stated commitment to improve public accountability measures.94

In January, Twitter announced the pilot phase of Birdwatch, a tool to crowdsource the content fact-checking process.95 Similar to the Wikipedia volunteer content management model – often referenced as the most thorough and factual around – Twitter would harness its own community to shape its information landscape.96 It's especially salient for posts that fall into grey areas: those that don't violate rules but could still benefit from added context. Since false information can spread rapidly, Birdwatch aims to speed up a labelling process Twitter has struggled to scale. Preceding this effort, Twitter had guardrails like a civic integrity policy97 and undertook removing fake accounts, and labelling and reducing the visibility of tweets containing false or misleading information.

89 Allam, Hannah. "YouTube Announces It Will Ban White Supremacist Content, Other Hateful Material." NPR. June 5, 2019. https://n.pr/3gQHvNF
90 Dwoskin, Elizabeth. "Facebook to temporarily halt political ads in U.S. after polls close Nov. 3, broadening earlier restrictions." Washington Post. October 7, 2020. https://wapo.st/3h5Cf99
91 Oversight Board landing page. Accessed April 29, 2021. https://www.oversightboard.com/
92 Levine, Alexandra and Overly, Steven. "Facebook announces first 20 picks for global oversight board." POLITICO. May 6, 2020. https://politi.co/3xDbNtb
93 Akhtar, Alana. "Facebook's Oversight Board members reportedly earn 6-figure salaries and only work 'about 15 hours a week'." Business Insider. February 13, 2021. https://bit.ly/336Ewsa
94 Ghosh, Dipayan. "Facebook's Oversight Board Is Not Enough." Harvard Business Review. October 16, 2019. https://bit.ly/3e7oFzQ
95 Coleman, Keith. "Introducing Birdwatch, a community-based approach to misinformation." Twitter Blog. January 25, 2021. https://bit.ly/3nCNtmx
96 Collins, Ben and Zadrozny, Brandy. "Twitter launches 'Birdwatch,' a forum to combat misinformation." CNBC. January 25, 2021. https://nbcnews.to/3e3DV0J
97 Civic integrity policy. Twitter Help Center. January 2021. https://bit.ly/3e9EeHz


“Birdwatchers” add links to their own sources, label tweets like Twitter does, and rate each other’s notes so administrators can elevate or remove posts accordingly. Rallying users who know the platform best and have a vested interest in its functioning as a fact-based forum – with reputations on the line – makes sense. Twitter’s vision for this open-source ethos and consensus is that users would come away better informed. Ultimately, this experiment will determine whether Twitter users trust each other more than they trust the company to verify what they see on their newsfeeds.
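To make the mechanics of this rating loop more concrete, below is a minimal illustrative sketch in Python. It is not Twitter's actual Birdwatch implementation: the note fields, the 0-to-1 rating scale, and the 0.7 "helpfulness" threshold are assumptions made for illustration; only the general idea of contributors writing notes, rating one another's notes, and the best-rated notes being surfaced comes from the description above.

```python
from dataclasses import dataclass, field
from statistics import mean

# Illustrative sketch only: field names, the 0-1 rating scale, and the 0.7
# "helpfulness" threshold are assumptions, not Twitter's actual Birdwatch logic.

@dataclass
class Note:
    tweet_id: str
    author: str          # the contributor who wrote the note
    text: str            # added context, with links to the contributor's sources
    ratings: list = field(default_factory=list)  # peer ratings from other contributors

    def helpfulness(self) -> float:
        """Average peer rating; unrated notes score 0."""
        return mean(self.ratings) if self.ratings else 0.0

def surface_notes(notes, threshold=0.7):
    """Return notes rated helpful enough to be shown alongside the tweet."""
    return [n for n in notes if n.helpfulness() >= threshold]

# Example: three contributors rate a note attached to a disputed tweet.
note = Note(tweet_id="12345", author="contributor_a",
            text="Claim contradicted by official results: https://example.org")
note.ratings.extend([1.0, 0.8, 0.6])
print(surface_notes([note]))   # surfaced, since the mean rating is 0.8
```

The design question the pilot is testing is precisely where such a threshold should sit, and whether peer ratings alone are enough to keep coordinated bad-faith raters from gaming which notes get surfaced.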

Birdwatch only has a thousand beta users, so it’s been hard to measure its impact at its current scope. VP of Product Keith Coleman acknowledges it could yield a mix of quality in its results, but hopes surfacing the best of the rating system and feedback loop can see through Twitter’s vision to foster a healthier forum with time.98

Both Facebook and Twitter say their goal is to build a new model for governing what appears on their platforms. Their new tools are radically different and have received equal amounts of flak and praise.99 Many critics see them for what they are: attempts to get ahead of, or altogether skirt, potential government regulation and hefty fines for monopolizing power over online discourse. Emily Bell, director of the Tow Center for Digital Journalism at Columbia University's Graduate School of Journalism and a Guardian columnist, affirms that "the social media giant is still trying to navigate controversial content, yet the problem remains the platform itself," as the Board's power remains illusory.100

The Board's recent rulings overturned four of the five Facebook content moderation decisions it reviewed in cases involving hate speech, incitement to violence and other thorny topics,101 calling for the posts to be restored. One member stated that "this is the first time that Facebook has been overruled on a content decision by the independent judgement of the Oversight Board [with the ability to] provide a critical independent check on how Facebook moderates content."102 Still, the Board's jurisdiction and authority are not expansive enough: it can only review a small fraction of cases, even though it has received more than 150,000 of them since October 2020.103

Additionally, interrogating the core business model falls outside its purview, though it seems logical that this should not be divorced from content moderation policies. And while the Board has the power to overrule CEO Mark Zuckerberg, for its first months nothing came before it that had not already been taken down by Facebook, which left major gaps. An update in April 2021, however, confirms users can now submit requests

98 Bond, Shannon. "Twitter's 'Birdwatch' Aims to Crowdsource Fight Against Misinformation." NPR. February 10, 2021. https://n.pr/2PEt1VZ
99 Pasternack, Alex. "Twitter wants your help fighting falsehoods. It's risky, but it might just work." WIRED. January 28, 2021. https://bit.ly/3u4ojj8
100 Bell, Emily. "Facebook has beefed up its 'oversight board', but any new powers are illusory." The Guardian. April 14, 2021. https://bit.ly/3xC7U7Q
101 Oversight Board Decisions landing page. Accessed April 29, 2021. https://oversightboard.com/decision/
102 Mihalcik, Carrie and Wong, Queenie. "Facebook oversight board overturns 4 of 5 items in its first decisions." CNET. January 29, 2021. https://cnet.co/3gZRDDA
103 ibid

for content removals, a departure from the Board previously only considering the restoration of removed posts.104 This is evidence that the body is still an evolving mechanism, and how future rulings play out will dictate whether Facebook sees it in its best interest to grant the Board more power over internal policy or widen its caseload.

While creative, neither the Oversight Board nor Birdwatch constitutes a holistic solution to the foremost challenges of our internet age, and neither should be viewed as a substitute for the official regulation that information and communication networks still lack today. Nevertheless, market solutions can be easier to implement and enforce than policy in the short run, so assessing their goals and efficacy so far is instructive to the evolving content moderation landscape.

To this point, the Oversight Board's reviews and the public comments around hate speech cases can provide some nuance to content and context that are arguably difficult for the company's algorithms to arbitrate consistently. Member and constitutional law expert Jamal Greene affirms that the cases, which also involved content removed over rules on adult nudity, dangerous individuals and organizations, and violence and incitement, raise "important line-drawing questions." Another important development following the standing up of the Board is that Facebook for the first time disclosed numbers on the prevalence of hate speech on the platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.105

All in all, it's clear platforms are still vying for ways to govern themselves. Their latest iterations to avoid public sector scrutiny simply shift the responsibility of moderation to users and experts, and may be considered a diversion strategy from interrogating the source of the disinformation and hate speech plaguing them: the fact that platforms remain the only entities with any visibility into the algorithmic design "black boxes"106 that shape public discourse online.

Public Knowledge Senior Vice President Harold Feld believes legislation that weighs evidence and balances interests is explicitly the job of Congress, not that of private companies. Feld takes issue with the current practice of pressuring companies to take "voluntary" action because it lets Congress avoid outlining requirements companies must follow.

Additionally, he notes that this opens the door to soft censorship and the promotion of political propaganda in the name of “responsible” corporate governance. He emphasizes that however difficult and controversial Congress may find it to develop content moderation requirements for digital platforms, perpetuating current efforts to force platforms to create their own policies without any

104 De Chant, Tim. "Facebook users can now oversight board to remove content." Ars Technica. April 13, 2021. https://bit.ly/3nAhenW
105 Culliford, Elizabeth. "From hate speech to nudity, Facebook's oversight board picks its first cases." Reuters. December 1, 2020. https://reut.rs/2PCDbpY
106 Stern, Joanna. "Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them." Wall Street Journal. January 17, 2021. https://on.wsj.com/3e4vwu6

formal guidance or oversight from lawmakers balanced with platform discretion is corrosive to democracy and undermines free speech values.107

Though Facebook publishes a quarterly Community Standards Enforcement Report108 as part of its transparency efforts to track progress on taking action against content that violates its policies, it requires downloading a hefty document and parsing through a lot of data, which is likely a cumbersome task for most. Crucially, it does nothing to illuminate the inner workings of its code in decision-making processes.

One recent development on this front is Twitter's April 2021 announcement that it is making some strides towards sharing how race and politics shape its algorithms. The company will study the technology's inherent biases in a new effort to understand how its machine learning tools can cause unintended consequences, and will share some of these insights publicly. Twitter ML Ethics leader Rumman Chowdhury outlined an approach prioritizing what the company calls the pillars of "responsible ML," which include "taking responsibility for our algorithmic decisions, equity and fairness of outcomes, transparency about our decisions and how we arrived at them, and enabling agency and algorithmic choice."109

More transparency, accountability and structural reassessment of the News Feed algorithm have been vigorously demanded of platforms – so much so that they have seen agitation by their own employees.110 Internal calls for change are an example of free market framework failure if the end result is platforms losing talent. But if such pressure and backlash persist and result in tangible updates to correct content moderation practices that fall short, this would demonstrate the approach can be effective in removing harmful content in order to retain and attract the talent needed to support the business.

While the evidence above demonstrates that a free market mechanism like the third party Oversight Board can help improve the platform’s record and responsiveness on the accountability and transparency front, its jurisdiction remains too narrow and adjusting existing policies that do not go far enough in limiting disinformation and hate speech still falls outside of its purview.

While the creation of the Oversight Board and Twitter's Birdwatch signal a growing awareness and acknowledgment by companies that their content moderation practices are not effective, with the added plus of increasing transparency reporting, such mechanisms are not enough to placate lawmakers, academics and civil society members critical of the core business model and pressing for a view into algorithmic creation and amplification.

107 Stern, Joanna. "Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them." Wall Street Journal. January 17, 2021. https://on.wsj.com/3e4vwu6
108 Facebook Community Standards Enforcement Report landing page. Accessed April 29, 2021. https://bit.ly/3eD2YGU
109 Kramer, Anna. "Twitter will share how race and politics shape its algorithms." Protocol. April 14, 2021. https://bit.ly/2PCsRhF
110 Frenkel, Sheera; Isaac, Mike and Roose, Kevin. "Facebook Struggles to Balance Civility and Growth." New York Times. November 24, 2020. https://nyti.ms/2QLVCsL

Facebook's failure to respond to internal reporting that platform features increased polarization and amplified extremist and controversial content, and its failure to act on flags of problematic Groups or posts in a timely manner, are also top of mind.

Demands – and potential regulation on the horizon – focused on unpacking how massive hordes of disinformation and hate speech proliferated online in the first place, and the existential threats they present to our democracy, will persist. Understanding platforms’ algorithmic design and interrogating their ad monetization scheme are still viewed by experts as the best way to address prevention and risk mitigation efforts to combat harmful content online, and craft appropriate policy to this end.111 No free market solution will change that.

INTERNATIONAL HUMAN RIGHTS LAW FRAMEWORK

The international human rights law framework suggests applying human rights norms and values, with a particular emphasis on freedom of expression, to private companies’ content moderation policies. Arguments and proponents in favor of this lens cite the global nature of human rights law as appropriate to social media companies that operate worldwide, its ability to balance free speech with other rights, and the existence of many documents guiding businesses on how to best protect human rights as grounds for its consideration.

Ex-Facebook content moderation Director Dave Willner has stated that: “There is no path that makes people happy. All the rules are mildly upsetting.” He goes on to explain that because of the volume of decisions — many millions per day — the approach is “more utilitarian than we are used to in our justice system. It’s fundamentally not rights-oriented.”112 The company relies on the principle of harm articulated by John Stuart Mill. This led to the development of a “credible threat” standard, which bans posts that describe specific actions that could threaten others, but allows threats that are not likely to be carried out. Willner’s then-boss Jud Hoffman adds: “Limiting it to physical harm wasn’t sufficient, so we started exploring how free expression societies deal with this.”113

While it is not currently being applied to digital platforms, many scholars find the international human rights law framework appealing because it centers free speech values and norms. Using international human rights law as a lens to explore how content moderation practices intersect with freedom of expression standards has in the last few years been recommended by former UN Special Rapporteur on Freedom of Expression David Kaye, and has gained traction.114

111 DeChiaro, Dean. "Social media algorithms threaten democracy, experts tell senators." RollCall. April 27, 2021. https://bit.ly/3gRgWry
112 Angwin, Julia and Grasseger, Hannes. "Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children." ProPublica. June 28, 2017. https://bit.ly/3u8LZCR
113 ibid
114 Kaye, David. "Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression." UN Human Rights Council. April 6, 2018. https://bit.ly/3gRFpgl


Proponents cite the fact that it offers a set of clear written rules crafted in the public interest, which have the broad support of the global community. Big Tech companies operate worldwide in countries that have ratified the UN's Universal Declaration of Human Rights, so this could be considered a plus when it comes to the free speech protection dimension.115 But there are complications around what ratifying a treaty looks like in practice (multiple accusations of human rights violations against co-signing countries are an example), so respecting these norms falls more on states than on the private firms functioning within them.

Rikke Frank Jørgensen, a contributor to Georgetown's Berkley Center and director of the Danish Institute for Human Rights, cites the framework's ability to grapple with digital platform services whose impact goes far beyond determining whose rights to free speech are protected in public debate online, as the approach touches upon, and does a good job balancing, an array of human rights and public policy issues. These include issues related to discrimination, privacy, data protection, access to information, freedom of opinion and expression, freedom of assembly and association, and more – an intersection that can often arise in considering threatening or inciting posts or speech online.116

However, international human rights law remains in many areas highly indeterminate, given it offers guidance to those countries that have ratified related treaties, but does not outline precise answers to specifically tackle the intersection of free speech and content moderation online. Jørgensen contends that the framework may as a result only lend an aura of legitimacy through a broad and fuzzy commitment to upholding human rights, which could leave us in the context of “business-as-usual” we face today.

Nevertheless, the framework offers some solid mechanisms to be considered in the formation of internal policies to guide moderation that protects free speech.

Article 19 of the UN's International Covenant on Civil and Political Rights117 provides globally established rules that could serve as core guidelines in laying the groundwork for regulation of social media platforms. Its provisions around free speech, the right to opinion, and exceptions to protect national security and public order align closely with the issues we're confronting with digital networking mediums today:

1. Everyone shall have the right to hold opinions without interference.

2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.

115 Universal Declaration of Human Rights. United Nations. Accessed April 29, 2021. https://bit.ly/3eM26zU
116 Jørgensen, Rikke Frank. "A Human Rights-Based Approach to Social Media Platforms." Georgetown University Berkley Center for Religion, Peace & World Affairs. February 6, 2021. https://bit.ly/3t7TOrl
117 International Covenant on Civil and Political Rights, UN Human Rights Office of the High Commissioner. https://bit.ly/2VJKVGQ

3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary:

(a) For respect of the rights or reputations of others;

(b) For the protection of national security or of public order (ordre public), or of public health or morals.118

Additionally, a few foundational and operational provisions within the UN Guiding Principles on Business and Human Rights119 outline state duties to create environments that enable business respect for human rights:

1. States must protect against human rights abuse within their territory and/or jurisdiction by third parties, including business enterprises. This requires taking appropriate steps to prevent, investigate, punish and redress such abuse through effective policies, legislation, regulations and adjudication.

2. Enforce laws that are aimed at, or have the effect of, requiring business enterprises to respect human rights, and periodically to assess the adequacy of such laws and address any gaps.

One inherent challenge is that these laws are sweeping, and their language and allusions to reporting procedures remain somewhat vague and ambiguous. As such, they are not specific enough to be interpreted in a U.S. context, nor leveraged to put forth rules the federal government can impose on companies. Another crucial point in how international human rights like free speech are interpreted in a U.S. context is how the First Amendment creates exceptions and constraints around hate speech.

First and foremost, outlining what makes the First Amendment so sticky in confronting today’s free speech challenges on social media will help situate its contested role in the digital landscape.

The First Amendment protects hate speech from government censorship unless that speech incites or is likely to incite imminent lawless action.120 Many scholars have brought into question its usage and relevance in protecting the free and open presentation of a full range of political ideas in the age of the internet. Initially conceived to protect the citizenry and dissidents from state suppression of political speech, in the information environment defined by Web 2.0 – where the federal threat to speech is low and content is mostly user-generated – the amendment may no longer be applicable in the way it once was.

118 International Covenant on Civil and Political Rights, UN Human Rights Office of the High Commissioner. https://bit.ly/2VJKVGQ
119 Guiding Principles on Business and Human Rights, UN Human Rights Office of the High Commissioner. https://bit.ly/2yR2kog
120 Free Expression on Social Media. Freedom Forum Institute. Accessed April 29, 2021. https://bit.ly/2Kvx40m


Tim Wu, who has pioneered the debate around potential revisions to the First Amendment, concedes that it might be wise to adapt it to grapple with contemporary speech conditions. He bases his argument on a few underlying assumptions that no longer fit our current global digisphere: the premise of informational scarcity it was founded on has been replaced by an abundance of information as well as a booming number of content creators, and the digital behemoths, rather than the government, constitute the main threat to the “marketplace of ideas.”121

The advance in communications technologies and algorithmic targeting that fosters the "attention economy" has flipped the switch: instead of speech or information being scarce, it is viewer, reader or listener attention that has become highly valued and hard to capture. On this, Wu affirms that as a result, "one important means of controlling speech is targeting the bottleneck of listener attention, instead of speech itself."122 In attempting to maximize the amount of user time on their sites, social media platforms leverage advertising to create "filter bubbles" that ensure engagement through tailored content matching preexisting biases and interests.

Such actions are complicated by the state action doctrine, which holds that the Amendment only applies to action by the state, and not by private parties – thus protecting the new arbiters of speech, whose censorial power has merited reevaluation of this clause. The prominence of bots, and their potential to create troll armies, complicates this further.

As it stands, the First Amendment is not applied to tackle the dissemination of disinformation or hateful vitriol on social networks – it was not intended to prevent the control of speech dissemination by private parties, but to prevent the government from interfering in speech. Quality journalism and internal editorial guidelines once held publishers to a standard that would guard against false news stories; whether social platforms should be classified as acting as publishers, and held to a similar standard, remains an open question today.

Wu acknowledges that Big Tech has entrenched itself in the world of media, and thus must be compelled to do more to combat weaponized speech and distorted news stories to restore productive political discourse in a heated moment in history. He concludes that however central Facebook, Twitter and YouTube have become to our speech environment in their public function, suggestions to expand the First Amendment's category of "state action" to encompass the conduct of platforms like Facebook and Twitter may actually be counterproductive, as it would hurt their ability to fight abuse and trolling on their platforms. Additionally, classifying platforms as state actors would mean expanding the category to include TV networks, radio shows and other media under this purview – which would ultimately render the category of state actor moot.123

121 Wu, Tim. "Is the First Amendment Obsolete?" Knight First Amendment Institute at Columbia University. September 1, 2017. https://bit.ly/3aISvWx
122 ibid
123 Wu, Tim. "Is the First Amendment Obsolete?" Knight First Amendment Institute at Columbia University. September 1, 2017. https://bit.ly/3aISvWx



He closes by explaining that in order to promote a healthy speech environment, it is imperative for the government to consider deprioritizing the First Amendment's role in its consideration of social media platforms, as its limits around libel and slander don't extend to the hate speech and disinformation trends center stage today.

Public Knowledge Senior Vice President Harold Feld pushes back on this argument, and states that contrary to popular belief, the First Amendment does not prevent any legislative effort to protect either individuals or society as a whole from harassing content, fraudulent content, or content that seeks to undermine democracy and civic discourse. He explains that at the same time, both the First Amendment and general concerns for freedom of expression require exercising caution.124

Experts have also considered a European mechanism in the international human rights vein that could be remixed in a U.S. context. In 2017, Germany's cabinet passed new legislation on hate speech enabling authorities to fine social media companies up to 50 million euros, or $55 million, for not reacting swiftly enough to reports of illegal content or hate speech on their sites. Dubbed the Network Enforcement Act (NetzDG), which officially came into effect in January 2018, it has turned into a "sandbox" and testbed for whether tech firms can be relied upon and trusted to identify the difference between free speech and hate speech online.125

As their international operations grow, Facebook and Twitter have both refitted their German websites with additional features to flag controversial content, and conducted specialized training around the Act for regional moderators. It also applies to sites like YouTube that we’ve discussed as another key hub for hateful material.

During its first few active days, the Act was quickly mired in controversy following contentious deletions and suspensions. Critics warned that the law would lead companies to violate free speech protections as they try to avoid fines. A New Year's Eve tweet by AfD deputy leader Beatrix von Storch accusing Cologne police of appeasing "barbaric, gang-raping Muslim hordes of men" was the first post to fall foul: her account was temporarily suspended, with the reason cited as "incitement of the people" under the German penal code. Legal experts believe that while the tweet, among others by far-right politicians, was incendiary, it was not out of bounds of Germany's traditionally strict hate speech laws, nor did it necessarily merit suspension. On the other hand, German politicians across the rest of the spectrum

124 Feld, Harold. "The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms." Roosevelt Institute and Public Knowledge. May 2019. https://bit.ly/3e7cWl3
125 "Overview of the NetzDG Network Enforcement Law." Center for Democracy & Technology. July 17, 2017. https://bit.ly/3e4NlsX

conceded that populists are deliberately leveraging the Act as an opportunity to paint themselves as victims.126

Human Rights Watch warned that the statute compelling companies to remove hate speech and illegal content would likely lead to unaccountable and overbroad censorship, and called for its prompt reversal. Its German director Wenzel Michalski is quoted stating that: "Governments and the public have valid concerns about the proliferation of illegal or abusive content online, but the new German law is fundamentally flawed. It is vague, overbroad, and turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal."127 UN Special Rapporteur on freedom of opinion and expression David Kaye echoed this sentiment, stating that the law was at odds with international human rights standards.128

HRW further affirms that the law violates two key aspects of Germany's obligation to respect free speech. First, it places the burden on companies that host third-party content to make difficult determinations around prohibited speech, thus creating conditions for private entities to suppress even lawful speech. Second, judicial oversight and remedy in the case of human rights violations (incidental or intentional) are non-existent.129

Although much of this law has been brought into question by international and regional bodies alike, the U.S. government can look to some of its provisions as positive and applicable to an American context, all the while gleaning lessons learned from the failures of the German model. The omission of an appeals process in the German Act is definitely problematic, as it explicitly does away with reporting mechanisms Big Tech companies already have in place, and is thus a clear step backwards.

However, shoring up infrastructure to facilitate reporting of community violations by other users is a welcome addition, as the ability to flag and escalate offensive posts for contravening platform standards and the new law offers an approach to content moderation that has yet to be widely deployed. To this point, crowdsourcing moderation would likely help curb the overwhelming number of posts algorithms and human moderators are responsible for screening, thus fostering a less polluted information environment to everyone's benefit.130

These small but significant advances to better contextualize content, whether through more advanced linguistic familiarity or professional expertise of international and domestic legal systems, can only ease and improve upon the difficult work of assessing speech right violations online in the U.S. today.

126 Olterman, Phillip. "Tough new German law puts tech firms and free speech in spotlight." The Guardian. January 5, 2018. https://bit.ly/3cXBYzI
127 "Germany: Flawed Social Media Law." Human Rights Watch report. February 14, 2018. https://bit.ly/2KKns2e
128 ibid
129 "Germany: Flawed Social Media Law." Human Rights Watch report. February 14, 2018. https://bit.ly/2KKns2e
130 Olterman, Phillip. "Tough new German law puts tech firms and free speech in spotlight." The Guardian. January 5, 2018. https://bit.ly/3cXBYzI


Based on the discussion above, while documents like the Guiding Principles on Business and Human Rights and the International Covenant on Civil and Political Rights can be helpful resources in crafting rights-protecting content moderation policies, the international human rights law framework is not best suited to minimize hate speech and disinformation on its own. For one, it is entirely reliant upon a state having ratified the appropriate treaties, so it puts the onus on governments rather than companies to ensure rights are upheld accordingly. This fact complicates the drafting of global rules to be applied by platforms across all the countries in which they operate.

Furthermore, while this framework’s ability to balance free speech and rights related to discrimination, access to information, freedom of assembly and association – often intersecting in posts inciting violence or promoting disinformation – is evident, there are concerns that companies stating a loose commitment to human rights would be lent an aura of credibility without much practical guidance on how to back up such claims in internal policies.

In terms of the applicability of the German NetzDG law, there are a few takeaways for content moderation and efforts to minimize hate speech in the U.S. context. Notably, the provision that enables the federal government to fine social media companies hefty sums up to $55 million for not reacting swiftly enough to reports of illegal content or hate speech on their sites, could be implemented as an incentive for platforms to improve responsiveness to flagged and reported posts and Groups.

However, critics of the law assert that it may well lead to overbroad censorship by platforms to meet compliance standards, and fuzzy rules around hate speech exceptions in the U.S. given the First Amendment could complicate this further. Additionally, it does not preclude the creation of an appeals system, which has been encouraged by digital rights advocates in the U.S. and that we have seen debuted with the Facebook Oversight Board in regards to removal requests.131

CONCLUDING RECOMMENDATIONS

Once praised as a democratizing force, social media platforms have of late come under fire for the hate, harassment and disinformation propagating across their vast networks.

While I don’t believe digital platforms should be granted the responsibility to grapple with free speech and ascertain what should and shouldn’t live online (it’s shameful they were put in such a position to begin with), analysis of these frameworks offers some guidance to map a path forward in terms of proposing effective government regulation and suggestions to improve social media companies’ internal policies to better combat disinformation and hate speech inciting violence, with the goal of preventing its spread before it has destructive impacts beyond the confines of the web.

131“The Oversight Board is accepting user appeals to remove content from Facebook and Instagram.” Oversight Board. April 2021. https://bit.ly/3aSq2R7

Overall, the public interest and free market frameworks offer the best legislative proposals and procedures to effectively limit and remove harmful content while protecting free speech.

Specifically, proposals coming out of academia and civil society to amend Section 230 as it applies to platforms are on the right track, as our best recourse for reining in free speech abuses and restoring healthy online communications ecosystems is federal regulation. Top-down yet nimble public policies are the only instruments that can require, implement and enforce guardrails to guide and oversee stronger moderation practices by private entities, and they can also provide specialized agencies to ensure such laws are respected. Legislation should include clauses pushing for transparency around moderation and curation, and built-in sanctions that can be applied to companies unwilling to comply.

That is not to say we should discount efforts by companies to invite assessment and oversight of moderation practices by subject matter experts and their own users in order to improve upon them as they await government regulation. Cross-disciplinary bodies and collaboration across all sectors are necessary to ensure a diversity of voices are heard, a range of solutions are put forth, and all those involved are aware of which channels are available to them to clarify any issues, and which procedures they must abide by.

While there is no silver bullet to effectively navigate and regulate the digital sphere, below are a few high-level recommendations on actions that tech companies can take to improve harm reduction in internal content moderation policies and News Feed features, and suggestions for U.S. policymakers as they continue to grapple with related legislation and scrutiny of Big Tech power.

For U.S. Policymakers

Ratify Accountability Sections of Bipartisan PACT Act

A major challenge Republicans and Democrats cite is a lack of transparency into, and accountability around, content moderation because of the "black box" nature of platform design. Most people have no idea how the information on their newsfeeds is shaped, nor how algorithms silo them in echo chambers. This lack of public understanding makes it nearly impossible to mobilize support for regulating social media platforms.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act was introduced in 2020 to update Section 230. Though contentious for its thorny treatment of court orders around illegal content, it put forth worthwhile requirements for transparency, accountability, and user protections. This includes an easy-to-understand disclosure of moderation guidelines, which remain opaque. Additionally, platforms would have to explain their reasoning behind content removal decisions, and explain clearly how a removed post violated terms of use. Lastly, the act would create a system for users to appeal or file complaints around content takedowns.


The proposed PACT Act is not a complete or perfect solution, but these few clauses offer a good start.

Incentivize Section 230 Compliance by Baking In Santa Clara Principles on Notice, Transparency and Appeals Process

Incentives and enforcement are crucial for encouraging companies to obey the rules. Threatening to pull Section 230 immunity — unless companies implement significant improvements in their moderation approach and agree to disclose these publicly — could be the “carrot and stick” we need.

Eric Goldman’s Santa Clara Principles propose three meaningful steps for companies to strengthen guidelines and provide due process for users. They also ensure their enforcement is fair, unbiased, proportional, and respectful of users’ fundamental digital rights.

The principles encourage platforms to disclose the following internal metrics each quarter (a hypothetical sketch of such a disclosure follows this list):
● Numbers: Publish the number of posts removed and accounts permanently or temporarily suspended due to violations of content guidelines.
● Notice: Notify and provide a clear explanation when a user's account is taken down or suspended.
● Appeal: Provide opportunities to appeal content removal and suspension on a case-by-case basis.
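As a rough illustration of what such quarterly disclosure could look like if structured as data, the sketch below models the three categories of metrics the principles call for. The field names and example figures are hypothetical and not drawn from any platform's actual reporting.

```python
from dataclasses import dataclass

# Hypothetical structure for a quarterly Santa Clara-style disclosure.
# All field names and example figures are illustrative assumptions.

@dataclass
class QuarterlyModerationReport:
    quarter: str
    posts_removed: int            # "Numbers": content taken down for guideline violations
    accounts_suspended: int       # temporary suspensions
    accounts_banned: int          # permanent removals
    notices_sent: int             # "Notice": users told why their content was actioned
    appeals_received: int         # "Appeal": users who contested a decision
    appeals_reversed: int         # actions overturned after human review

    def reversal_rate(self) -> float:
        """Share of appealed decisions that were overturned."""
        return self.appeals_reversed / self.appeals_received if self.appeals_received else 0.0

report = QuarterlyModerationReport(
    quarter="2021-Q1", posts_removed=125_000, accounts_suspended=8_400,
    accounts_banned=1_200, notices_sent=133_000, appeals_received=9_500,
    appeals_reversed=1_900,
)
print(f"{report.quarter}: {report.reversal_rate():.1%} of appeals reversed")
```

Publishing something in this machine-readable spirit, rather than buried in a downloadable report, is precisely the kind of disclosure that would let regulators and researchers compare platforms quarter over quarter.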

Predicating 230 immunity on better processes and practices could be the motivation companies need to be more responsible. Avoiding a steady stream of lawsuits and PR crises likely wouldn’t hurt, either.

Address Discrimination by Better Differentiating Good and Bad Content

Some experts believe the protections of Section 230 should be predicated on platforms defining the scope of, and explaining the takedown of, good versus bad content.

I’ve pondered a hybrid of Fordham Law Professor Olivier Sylvain’s recommendations132 to better tackle discriminatory behavior, combined with Boston University Law Professor Danielle Citron’s “reasonable standard of care.”

Sylvain argues that the current “unfettered” market for online speech makes online engagement difficult for historically disadvantaged groups, including children, women, racial minorities, and other at- risk communities. The way information is targeted and flows online could result in disparate, unlawful or harmful impacts on certain individuals or groups. Courts should account for this and intervene in

132Sylvain, Olivier. “Everything You Need to Know About Section 230.” Fordham Law News. July 20, 2020. https://bit.ly/3gTZBOS

assessing who is disproportionately targeted by discrimination and harassment. It is essential we ground Section 230 rules in human rights norms that foster equality.

Citron’s “reasonableness” argument is a broader legislative fix, wherein platforms can enjoy Section 230 immunity if they demonstrate that their response to unlawful use of their service is reasonable. This is all well and good, but incredibly vague, since reasonable is in the eye of the beholder. That is why I would recommend merging this proposal with Sylvain’s for specificity’s sake, while also acknowledging the importance of balance and feasibility.

This harm-reduction strategy could encourage an environment of healthier online communication, while minimizing hate targeted at marginalized groups.

Designate a Digital Regulatory Agency to Oversee Digital Platform Issues and Institute Hefty Fines for Violations of Future Regulation

In February, former FCC Chairman Tom Wheeler made his case for why a focused federal agency is necessary to oversee Big Tech.133 There is no question that major digital platforms are now central to many social, political and economic spheres in the U.S. and worldwide. But there is still no public interest oversight holding these companies, which provide critical information and communication services, to account when it comes to the rights of their users and their role in our democracy.

Wheeler asserts that the existing federal regulatory structure, and agencies like the FTC and DOJ, are not adequately set up to tackle these challenges, as they face constraints and many of these issues fall outside their purview.

If the analysis above is anything to go by, there is clearly a precedent for a new digital oversight agency to grapple with the scope, scale and public interest facets inherent in the challenges we face in today's digital era.

For Social Media Companies

Ban Facebook Groups and Pages, Modify Facebook Events Moderation, and Disable Twitter Trending Topics

Regarding Groups, Facebook has continuously been criticized for making claims concerning white nationalist organizations that the company did not follow through on – notably, a public statement that it

133Wheeler, Tom. “A focused federal agency is necessary to oversee Big Tech.” Brookings. February 10, 2021. https://brook.gs/2QBB4U1

would ban all related Groups in March 2019, though it waited until June of that year to remove nearly 200 accounts with white supremacist ties.

An August 2020 policy restricting the activities of “organizations and movements that have demonstrated significant risks to public safety,” including “US-based militia organizations,” was criticized for coming too late and leaving many problematic Pages up.134 As such, I recommend that Facebook disable the ability for users to create private and public Groups and Pages altogether. Banning all Groups rather than selectively removing a few would ensure parity between the right and the left, quell accusations of bias and censorship135, and, by implementing an objectively non-discriminatory and universal policy measure, circumvent accusations about the platform’s role in U.S. political polarization.

In terms of Facebook Events, there is room to be more restrictive. While it is impossible to monitor all Events for violations, prioritizing the removal of those that display clear calls to action that could cause real-world harm seems logical. To handle this gargantuan task, I propose creating a dedicated in-house moderation management team (rather than outsourcing to third-party content moderation staffing agencies) to scan upcoming U.S.-based events.

To narrow search queries and surface the events of highest concern, mirroring Trending Topics that appear on the News Feed (and related hashtags) is a good first step. Additionally, creating a library of key search terms, including tags for BLM counterprotests and fringe groups with white supremacist ties, would help reviewers distinguish protected speech from activity that veers into incitement to violence. If egregious behavior is identified, a one-time user notification process can be deployed; if reported posts are not deleted or edited to remove hate speech or incitement to violence within 24 hours, the event would be deleted within 48 hours.
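As a thought experiment, the sketch below wires this triage workflow together in code. Everything in it is a hypothetical illustration: the keyword library, the data model, and the 24/48-hour deadlines are assumptions drawn from the proposal above, not a description of Facebook’s actual moderation tooling.

```python
# Hypothetical event-triage workflow: surface high-concern events, then apply a
# notify -> 24h remediation -> 48h removal schedule. All terms and names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List, Optional

# Illustrative library of key search terms; a real library would be far larger
# and continuously updated by the in-house moderation team.
HIGH_CONCERN_TERMS = {"counterprotest", "armed rally", "militia muster", "patriot convoy"}


@dataclass
class Event:
    event_id: str
    title: str
    description: str
    reported_at: Optional[datetime] = None  # set once a reviewer flags the event


def surface_for_review(events: List[Event]) -> List[Event]:
    """Narrow the review queue to events whose text matches a high-concern term."""
    def matches(event: Event) -> bool:
        text = f"{event.title} {event.description}".lower()
        return any(term in text for term in HIGH_CONCERN_TERMS)

    return [event for event in events if matches(event)]


def enforcement_schedule(event: Event) -> Dict[str, datetime]:
    """One-time notification on report, 24 hours to remediate, removal by 48 hours."""
    assert event.reported_at is not None, "schedule applies only to reported events"
    return {
        "notify_user_at": event.reported_at,                          # one-time notification
        "remediation_deadline": event.reported_at + timedelta(hours=24),
        "removal_deadline": event.reported_at + timedelta(hours=48),
    }
```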

Twitter’s Trending Topics has also come under fire for elevating QAnon-specific accounts and content to the page, alongside other sources of disinformation that the company announced a crackdown on in July 2020.

Many have wondered why certain tweets become so popular and land on Trending Topics. In response, Twitter has announced it will add representative pinned tweets and short descriptions to some topics to help provide this context. These will be chosen through a combination of human and algorithmic curation, in the hope that the update will reduce the chance of spammy or abusive tweets being pinned to a trending topic.136

134Marantz, Andrew. “Why Can’t Facebook Fix Itself.” The New Yorker. October 12, 2020. https://bit.ly/333K5aP
135Kreiss, Daniel and McGregor, Shannon. “Conservatives say Google and Facebook are censoring them. Here’s the real background.” Washington Post. August 1, 2019. https://wapo.st/2R9Uorg
136Porter, Jon. “Twitter tries to explain trending topics.” The Verge. September 2, 2020. https://bit.ly/3t2DBne

Rather than go to all this trouble, I would recommend doing away with Trending Topics altogether, since it seems to have little utility for users (beyond causing confusion), and focusing moderation efforts on disinformation elsewhere on Twitter.

Improve News Feed Through Grant Partnerships with Leading Newsrooms

To address faulty fact-checking and information pollution on News Feeds, platforms can earmark Journalism Project137 funds and create grants that foster partnerships between local and national outlets to designate a list of legitimate and representative publications to appear on News Feeds. As subject matter experts, reporters and editors from across the political spectrum may be better equipped than the ad hoc fact-checking groups hired by companies to sort out hyper-partisan or conspiracy-laden content and combat disinformation online.

Disabling “likes,” “reactions,” and comments on news articles would also make social media platforms more civil places for exchanging news and ideas. Though this suggestion may aggrieve newsroom analytics departments trying to measure the reach of their articles, there could be worthwhile tradeoffs. Removing public metrics has been shown to encourage users to pay attention to the content itself rather than engagement numbers, which critics say can incentivize negative behavior given the pressurized nature of these forums. Facebook has already experimented138 with this step, and Twitter has hinted at wanting to do so for years.

Not only would these efforts help foster healthier discourse and limit content silos, they would also start chipping away at our dangerously personalized ecosystems. The basic problem is that many social media companies use algorithms that take a post’s likes or view counts into account when determining how widely to distribute it.
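To make the point concrete, the schematic below contrasts an engagement-weighted score with a metric-free one. It is not any platform’s actual ranking algorithm; the weights and fields are assumptions chosen purely for illustration of how popularity signals amplify already-viral content.

```python
# Schematic comparison of two distribution scores. Weights and fields are
# illustrative assumptions, not a real platform's ranking formula.
from dataclasses import dataclass


@dataclass
class Post:
    likes: int
    views: int
    hours_old: float
    relevance: float  # e.g. a 0-1 topical match to the user's interests


def engagement_weighted_score(p: Post) -> float:
    # Popularity signals dominate, so viral posts keep reaching more feeds.
    return 0.5 * p.likes + 0.1 * p.views + 10 * p.relevance - 2 * p.hours_old


def metric_free_score(p: Post) -> float:
    # With public metrics removed from ranking, only relevance and freshness matter.
    return 10 * p.relevance - 2 * p.hours_old


viral = Post(likes=50_000, views=400_000, hours_old=30, relevance=0.2)
niche = Post(likes=40, views=900, hours_old=2, relevance=0.9)

# Engagement weighting ranks the viral post first; the metric-free score
# prefers the fresher, more relevant post.
print(engagement_weighted_score(viral) > engagement_weighted_score(niche))  # True
print(metric_free_score(niche) > metric_free_score(viral))                  # True
```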

Finally, doubling down on accurate labeling of reputable news stories (featuring perspectives from across the aisle), offering geolocalized context, and limiting ads with questionable or ambiguous political affiliation can help bolster a healthier and more diverse forum for online political, social and cultural debate. Comprehensive political speech policies that strengthen the information environment in which platforms are major players will not only demonstrate a commitment to improving their features for the larger public good, but also serve as a shield when inevitable charges of bias arise.139

Release Detailed Breakdown of Algorithmic Design Process and Review of Real-Time Curation Activity

137Cohen, David. “Facebook Journalism Project Grants Nearly $16M to Local News Publishers.” AdWeek. May 7, 2020. https://bit.ly/3e5ONew
138Meizensahl, Mary. “Instagram accidentally removed 'likes' for some users — here's what your posts will look like without them.” Business Insider. March 3, 2021. https://bit.ly/3ucKfbO
139Bradshaw, Samantha; douek, evelyn; Ghosh, Dipayan; Kreiss, Daniel; Leonard, Allison; Tworek, Heidi. “Should Big Tech Be Setting the Terms of Political Speech?” Center for International Governance Innovation. October 5, 2020. https://bit.ly/2SdPIkB

Enabling users, civil society and U.S. lawmakers to fundamentally understand how algorithmic curation shapes what users see on their News Feeds should be a top priority. This lack of visibility continues to hinder the creation of suitable regulation, and many believe companies are still avoiding interrogating the very real impacts their technology has on society and the democratic process.

Democratic lawmakers are increasingly sounding the alarm about how digital platforms’ algorithms can contribute to the spread of misinformation, hate speech and extremist content by boosting the visibility of harmful material to users. Executives from Facebook, YouTube and Twitter testified at a hearing on “Algorithms and Amplification” held on April 27, before the Senate Judiciary Committee's privacy, technology and law subcommittee.140

Subcommittee chair Senator Chris Coons told POLITICO he plans to make social media and algorithmic accountability a top issue for his panel this Congress, stating that “Social media platforms use algorithms that shape what billions of people read, watch and think every day, but we know very little about how these systems operate and how they’re affecting our society. Increasingly, we’re hearing that these algorithms are amplifying misinformation, feeding political polarization and making us more distracted and isolated.”141

Though the hearing was slated to act as more of a listening session between lawmakers and companies than a weighing of any actual legislation, a major takeaway was that lawmakers are still struggling to find a solution for the real-world violence that even the platforms have internally acknowledged they help cause.142 All in all, the lawmakers present seemed skeptical of the platforms’ claims that they are not primarily incentivized to ramp up user engagement, and were generally in favor of greater transparency from platforms about how their algorithms elevate content to users.143

140Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. Subcommittee on Privacy, Technology, and the Law. Committee on the Judiciary Hearings. April 27, 2021. https://bit.ly/3gQP7Qf
141Lima, Cristiano. “Facebook, YouTube, Twitter execs to testify at Senate hearing on algorithms.” POLITICO. April 23, 2021. https://politi.co/3xF42mA
142Kelly, Makena. “Congress is way behind on algorithmic misinformation.” The Verge. April 27, 2021. https://bit.ly/3eZLyEX
143Feiner, Lauren. “Facebook, YouTube, Twitter execs grilled by senators over addictive nature of their apps.” CNBC. April 27, 2021. https://cnb.cx/2R9V7su
