Everything in Moderation
July 2019

Everything in Moderation
An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content

Spandana Singh
Last edited on July 15, 2019 at 10:21 a.m. EDT

Acknowledgments

In addition to the many stakeholders across civil society and industry who have taken the time to talk to us over the years about our work on content moderation and transparency reporting, we would particularly like to thank Nathalie Maréchal from Ranking Digital Rights for her help in drafting this report. We would also like to thank Craig Newmark Philanthropies for its generous support of our work in this area.

About the Author(s)

Spandana Singh is a policy program associate in New America's Open Technology Institute.

About New America

We are dedicated to renewing America by continuing the quest to realize our nation's highest ideals, honestly confronting the challenges caused by rapid technological and social change, and seizing the opportunities those changes create.

About Open Technology Institute

OTI works at the intersection of technology and policy to ensure that every community has equitable access to digital technology and its benefits. We promote universal access to communications technologies that are both open and secure, using a multidisciplinary approach that brings together advocates, researchers, organizers, and innovators.

Contents

Introduction
Legal Frameworks that Govern Online Expression
How Automated Tools are Used in the Content Moderation Process
The Limitations of Automated Tools in Content Moderation
Case Study: Facebook
Case Study: Reddit
Case Study: Tumblr
Promoting Fairness, Accountability, and Transparency Around Automated Content Moderation Practices

Introduction

The proliferation of digital platforms that host user-generated content and enable users to create and share it has significantly altered how we communicate with one another. In the twentieth century, individual communication designed to reach a broad audience was largely expressed through formal media channels, such as newspapers. Content was produced and curated by professional journalists and editors, and dissemination relied on the physical transport of artifacts like books or newsprint. As a result, communication during this period was expensive, slow, and, with some notable exceptions, easily attributed to an individual speaker. In the twenty-first century, however, thanks to the expansion of the internet and social media, mass communication has become cheaper, faster, and sometimes difficult to trace.1 The widespread adoption of platforms such as YouTube, Facebook, and Twitter around the globe has significantly lowered the costs and barriers to communicating, thus democratizing speech online. Over the past decade, platforms have thrived off of users creating and exchanging their own content—whether it be family photographs, blog posts, or pieces of artwork—with speed and scale.
However, in enabling user content production and dissemination, platforms also opened themselves up to unwanted forms of content, including hate speech, terror propaganda, harassment, and graphic violence. In this way, user-generated content has served as a key driver of growth for these platforms, as well as one of their greatest liabilities.2 In response to the growing prevalence of objectionable content on their platforms, technology companies have had to create and implement content policies and content moderation processes that aim to remove these forms of content, as well as the accounts responsible for sharing it, from their products and services. This is both because companies need to comply with legal frameworks that prohibit certain forms of content online, and because companies want to promote greater safety and positive user experiences on their services. In the United States, it is also because the First Amendment limits the extent to which the government can set the rules for what types of speech are permissible, leaving such decisions largely to the platforms themselves.

Over the last few years, both large and small platforms that host user-generated content have come under increased pressure from governments and the public to remove objectionable content. In response, many companies have developed or adopted automated tools, many of them fueled by artificial intelligence and machine learning, to enhance their content moderation practices. In addition to enabling the moderation of various types of content at scale, these automated tools aim to reduce reliance on time-consuming human moderation.

However, the development and deployment of these automated tools have revealed a range of concerning weaknesses, including dataset and creator bias, inaccuracy, an inability to interpret context and understand the nuances of human speech, and a significant lack of transparency and accountability mechanisms around how these algorithmic decision-making procedures impact user expression. As a result, automated tools have the potential to affect human rights on a global scale, and effective safeguards are needed to protect those rights.

This report is the first in a series of four that will explore how automated tools are being used by major technology companies to shape the content we see and engage with online, and how internet platforms, policymakers, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices. This report focuses on automated content moderation policies and practices, and it uses case studies of three platforms—Facebook, Reddit, and Tumblr—to highlight the different ways automated tools can be deployed by technology companies to moderate content, and the challenges associated with each of them.

Defining Content Moderation

Content moderation can be defined as the "governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse."3 Currently, companies employ a range of approaches to content moderation, and they use a varied set of tools to enforce content policies and remove objectionable content and accounts.
There are three primary approaches to content moderation:4

1. Manual content moderation: This approach, which typically relies on the hiring, training, and deployment of human moderators to review and make decisions on content cases, can take many forms. Large platforms tend to rely primarily on outsourced contract employees to complete this work. Small- to medium-size platforms tend to employ full-time, in-house moderators or rely on user moderators who volunteer to review content.

2. Automated content moderation: This approach involves the use of automated detection, filtering, and moderation tools to flag, separate, and remove particular pieces of content or accounts. Fully automated content detection and moderation practices are not widely used across all categories of objectionable content, as they have been found to lack accuracy and effectiveness for certain types of user speech. However, these tools are widely used for some types of objectionable content, such as child sexual abuse material (CSAM). In the case of CSAM, there is a clear international consensus that the content is illegal, there are clear parameters for what should be flagged and removed based on the law, and models have been trained on enough data to yield high levels of accuracy.

3. Hybrid content moderation: This approach incorporates elements of both the manual and automated approaches. Typically, this involves using automated tools to flag and prioritize specific content cases for human reviewers, who then make the final judgment call on each case. This approach is being more widely adopted by both smaller and larger platforms, as it helps reduce the initial workload of human reviewers. Additionally, by letting a human make the final decision on a case, it comparatively limits the negative externalities that come from using automated tools for content moderation (e.g., accidental removal of content due to inaccurate tools or tools that cannot understand the nuances or context of human speech).

In addition, there are two different models of content moderation that are deployed by platforms, often depending on their size and capacity to engage in substantial content moderation practices.5

1. Centralized content moderation: This approach often involves a company establishing a broad set of content policies that it applies globally, with exceptions carved out to ensure compliance with laws in different jurisdictions. These content policies are enforced by a large group of moderators