How Content Moderation May Expose Social Media Companies to Greater Defamation Liability
Recommended publications
Section 230 of the Communications Decency Act: Research Library Perspectives June 2021 by Katherine Klosek
Issue Brief: Section 230 of the Communications Decency Act: Research Library Perspectives, June 2021. By Katherine Klosek, with thanks to Jonathan Band, General Counsel at ARL; Greg Cram, Director of Copyright, Permissions & Information Policy at New York Public Library; and Judy Ruttenberg, Senior Director of Scholarship and Policy at ARL.

Contents: Introduction (3); Background (3); Section 230 of the Communications Decency Act (5); Libraries and Section 230 (7); FOSTA: The Allow States and Victims to Fight Online Sex Trafficking Act (9); Discussion (10); Appendix: Discussion Guide (12), covering Access to Information (12), Affordability (12), Anti-harassment (12), Diversity (13), Open Internet (13), Political Neutrality/Freedom of Expression (14), Privacy (14).

Introduction: The 117th US Congress is holding hearings and debating bills on changes to Section 230 of the Communications Decency Act (CDA), a 1996 law that protects internet service providers and internet users from liability for content shared by third parties. When it was enacted, Section 230 offered liability protections for then-nascent services that are integral to the development of the internet we have today. But Section 230 has come under fire from Democrats who believe internet service providers are not engaging in responsible content moderation, and from Republicans who believe that large social media companies censor conservative voices. This myopic debate ignores the longstanding role that research libraries have played as internet service providers, and the vast experience research libraries have in moderating and publishing content. More information on this topic can be found in the "Discussion" section of this brief.
Heberlein, Thomas A. 2017. Sweden
Milwaukee Journal Sentinel, March 5, 2017

Sweden is safe

Since President Donald Trump called attention to violence in Sweden, we have had American journalists show up here. One, Tim Pool — funded in part by right-wing editor Paul Joseph Watson, who tweeted "Any journalist claiming Sweden is safe; I will pay for travel costs & accommodation for you to stay in crime ridden migrant suburbs of Malmo" — visited Rosengård, Malmö's toughest immigrant area. He was followed by Swedish reporters and found what was no surprise to me, an American living in Sweden: "If this is the worst Malmö can offer never come to Chicago." Pool claimed Chicago had about 750 murders last year. All of Sweden, which has three times the population of Chicago, had 112 murders in 2015 (the last year for which statistics are available). Malmö, with a population of 330,000, had 12 murders in 2016. Madison, with a population of 243,000, had nine. So per capita, you are about as safe in Malmö as in Madison. Pool also said after a few hours in Rosengård: "For anyone worried about violence in Malmö and worried that he will be robbed or attacked in Rosengård is 'löjliga' (ridiculous)." On TV news, he was shown walking in the community and reported it was boring and the only danger he felt was from the ice on the sidewalks. Sweden, I admit, is not a perfect society: it does have problems keeping ice off the sidewalks, and thousands are injured in falls every year.

Tom Heberlein, emeritus professor, University of Wisconsin-Madison, Stockholm, Sweden

http://www.jsonline.com/story/opinion/readers/2017/03/04/letters-march-walkers-prison-priorities-wrong/98723360/
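The letter's per-capita comparison can be checked directly. A minimal Python sketch, using the murder counts and populations quoted in the letter; note that Chicago's population (~2.7 million) and Sweden's (~9.9 million) are added assumptions, since the letter only gives their murder counts:

```python
def per_100k(murders: int, population: int) -> float:
    """Murder rate per 100,000 residents."""
    return murders / population * 100_000

# Murder counts as quoted in the letter; the Chicago and Sweden
# populations are outside assumptions, not figures from the letter.
places = {
    "Chicago (2016)": (750, 2_700_000),
    "Sweden (2015)": (112, 9_900_000),
    "Malmö (2016)": (12, 330_000),
    "Madison": (9, 243_000),
}

for name, (murders, population) in places.items():
    print(f"{name}: {per_100k(murders, population):.1f} murders per 100k")
```

With the letter's figures, Malmö comes out at about 3.6 murders per 100,000 residents and Madison at about 3.7, consistent with the letter's "about as safe in Malmö as in Madison" conclusion.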
THE PORTRAYAL OF MARIJUANA ON VICE.COM DOCUMENTARIES

The Graduate School of Economics and Social Sciences of İhsan Doğramacı Bilkent University, by GÖKÇE ÖZSU, in partial fulfillment of the requirements for the degree of MASTER OF ARTS, Department of Communication and Design, İhsan Doğramacı Bilkent University, Ankara, August 2017

ABSTRACT

Özsu, Gökçe. MA, Department of Communication and Design. Supervisor: Prof. Dr. Bülent Çaplı. August 2017

This thesis examines the representational attitude of vice.com (or VICE) documentaries covering marijuana in the context of normalization. To that end, it descriptively analyzes three VICE documentaries covering marijuana in the settings of recreation, medicine, and industry. VICE is a United States-based media outlet that practices a hybrid form of journalism, combining conventional media operations with new media techniques. Normalization is a sociological concept describing the degree to which a behavior, once confined to the margins of society, comes to be accepted as a norm in the mainstream. To ground the descriptive analysis of the vice.com documentaries, normalization and drug representation in United States media are evaluated in their socio-historical setting. A major finding is that although the vice.com documentaries represent marijuana as normal, the normative references for that normalization are not clear; the conclusion therefore discusses the determiners and normative background of the normalization of marijuana.

Keywords: Documentary, Marijuana, Normalization, Stigmatization, VICE

ÖZET (Turkish abstract): "VICE.COM Belgesellerinde Marijuananın Tasviri" [The Portrayal of Marijuana in VICE.COM Documentaries]. Özsu, Gökçe. Master's thesis, Department of Communication and Design. Supervisor: Prof.
How Marvel and Sony Sparked Public
Cleveland State Law Review, Volume 69, Issue 4, Article 8, 6-18-2021

America's Broken Copyright Law: How Marvel and Sony Sparked Public Debate Surrounding the United States' "Broken" Copyright Law and How Congress Can Prevent a Copyright Small Claims Court from Making It Worse

Izaak Horstemeier-Zrnich, Cleveland-Marshall College of Law

Follow this and additional works at: https://engagedscholarship.csuohio.edu/clevstlrev (Part of the Intellectual Property Law Commons). How does access to this work benefit you? Let us know!

Recommended Citation: Izaak Horstemeier-Zrnich, America's Broken Copyright Law: How Marvel and Sony Sparked Public Debate Surrounding the United States' "Broken" Copyright Law and How Congress Can Prevent a Copyright Small Claims Court from Making It Worse, 69 Clev. St. L. Rev. 927 (2021), available at https://engagedscholarship.csuohio.edu/clevstlrev/vol69/iss4/8

This Note is brought to you for free and open access by the Journals at EngagedScholarship@CSU. It has been accepted for inclusion in Cleveland State Law Review by an authorized editor of EngagedScholarship@CSU. For more information, please contact [email protected].

ABSTRACT: Following failed discussions between Marvel and Sony regarding the use of Spider-Man in the Marvel Cinematic Universe, comic fans were left curious as to how Spider-Man could remain outside of the public domain after decades of the character's existence.
"At the End of the Day Facebook Does What It Wants": How Users
"At the End of the Day Facebook Does What It Wants": How Users Experience Contesting Algorithmic Content Moderation

KRISTEN VACCARO, University of Illinois Urbana-Champaign; CHRISTIAN SANDVIG, University of Michigan; KARRIE KARAHALIOS, University of Illinois Urbana-Champaign

Interest has grown in designing algorithmic decision making systems for contestability. In this work, we study how users experience contesting unfavorable social media content moderation decisions. A large-scale online experiment tests whether different forms of appeals can improve users' experiences of automated decision making. We study the impact on users' perceptions of the Fairness, Accountability, and Trustworthiness of algorithmic decisions, as well as their feelings of Control (FACT). Surprisingly, we find that none of the appeal designs improve FACT perceptions compared to a no appeal baseline. We qualitatively analyze how users write appeals, and find that they contest the decision itself, but also more fundamental issues like the goal of moderating content, the idea of automation, and the inconsistency of the system as a whole. We conclude with suggestions for – as well as a discussion of the challenges of – designing for contestability.

CCS Concepts: • Human-centered computing → Human computer interaction (HCI); Social media.

Additional Key Words and Phrases: content moderation; algorithmic experience

ACM Reference Format: Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. "At the End of the Day Facebook Does What It Wants": How Users Experience Contesting Algorithmic Content Moderation. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 167 (October 2020), 22 pages. https://doi.org/10.1145/3415238

1 INTRODUCTION
As algorithmic decision making systems become both more prevalent and more visible, interest has grown in how to design them to be trustworthy, understandable and fair.
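The core comparison described in the abstract above (each appeal design against a no-appeal baseline on FACT ratings) can be sketched as a simple two-sample test. The ratings below are invented, and the paper's actual analysis is more involved; this only illustrates the shape of the comparison:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 7-point ratings of perceived fairness in two conditions.
baseline = [4, 3, 5, 4, 4, 3, 5, 4]   # no-appeal baseline
appeal   = [4, 4, 3, 5, 4, 4, 3, 4]   # one appeal design

t = welch_t(appeal, baseline)
print(f"mean difference: {mean(appeal) - mean(baseline):+.2f}, t = {t:.2f}")
```

A t-statistic near zero for every appeal condition would match the paper's headline result that no appeal design improved FACT perceptions over the baseline.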
An Overview of the Communication Decency Act's Section
Legal Sidebar

Liability for Content Hosts: An Overview of the Communication Decency Act's Section 230

Valerie C. Brannon, Legislative Attorney, June 6, 2019

Section 230 of the Communications Act of 1934, enacted as part of the Communications Decency Act of 1996 (CDA), broadly protects online service providers like social media companies from being held liable for transmitting or taking down user-generated content. In part because of this broad immunity, social media platforms and other online content hosts have largely operated without outside regulation, resulting in a mostly self-policing industry. However, while the immunity created by Section 230 is significant, it is not absolute. For example, courts have said that if a service provider "passively displays content that is created entirely by third parties," Section 230 immunity will apply; but if the service provider helps to develop the problematic content, it may be subject to liability. Commentators and regulators have questioned in recent years whether Section 230 goes too far in immunizing service providers. In 2018, Congress created a new exception to Section 230 in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017, commonly known as FOSTA. Post-FOSTA, Section 230 immunity will not apply to bar claims alleging violations of certain sex trafficking laws. Some have argued that Congress should further narrow Section 230 immunity—or even repeal it entirely. This Sidebar provides background information on Section 230, reviewing its history, text, and interpretation in the courts.

Legislative History
Section 230 was enacted in early 1996, in the CDA's Section 509, titled "Online Family Empowerment." In part, this provision responded to a 1995 decision issued by a New York state trial court: Stratton-Oakmont, Inc.
Setting the Record Straighter on Shadow Banning
Setting the Record Straighter on Shadow Banning

Erwan Le Merrer (Univ Rennes, Inria, CNRS, Irisa), Benoît Morgan (IRIT/ENSEEIHT), Gilles Trédan (LAAS/CNRS)
[email protected] [email protected] [email protected]

Abstract—Shadow banning consists, for an online social network, in limiting the visibility of some of its users, without them being aware of it. Twitter declares that it does not use such a practice, sometimes arguing about the occurrence of "bugs" to justify restrictions on some users. This paper is the first to address the plausibility of shadow banning on a major online platform, by adopting both a statistical and a graph topological approach. We first conduct an extensive data collection and analysis campaign, gathering occurrences of visibility limitations on user profiles (we crawl more than 2.5 million of them). In such a black-box observation setup, we highlight the salient user profile features that may explain a banning practice (using machine learning predictors).

happens on the OSN side, collecting information about potential issues is difficult. In this paper, we explore scientific approaches to shed light on Twitter's alleged shadow banning practice. Focusing on this OSN is crucial because of its central use as a public communication medium, and because potential shadow banning practices were recently commented.

Shadow banning and moderation techniques. Shadow banning (SB or banning for short, also known as stealth banning [6]) is an online moderation technique used to ostracise undesired user behaviors. In modern OSNs, shadow banning would refer to a wide range of techniques that artificially limit the visibility of
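A toy version of the black-box setup the abstract describes might look like this: label crawled profiles as visibility-limited or not, then ask which profile features co-occur with the limitation. The records, feature names, and lift heuristic below are all invented for illustration; the paper itself works with millions of real crawled profiles and trained machine-learning predictors:

```python
# Invented profile records: each has boolean features and a "banned" label
# standing in for an observed visibility limitation.
profiles = [
    {"has_default_avatar": True,  "mentions_politics": True,  "banned": True},
    {"has_default_avatar": True,  "mentions_politics": False, "banned": True},
    {"has_default_avatar": False, "mentions_politics": True,  "banned": False},
    {"has_default_avatar": False, "mentions_politics": False, "banned": False},
    {"has_default_avatar": True,  "mentions_politics": True,  "banned": False},
    {"has_default_avatar": False, "mentions_politics": False, "banned": False},
]

def ban_rate(records):
    """Fraction of records flagged as visibility-limited."""
    return sum(r["banned"] for r in records) / len(records)

overall = ban_rate(profiles)
for feature in ("has_default_avatar", "mentions_politics"):
    with_feature = [r for r in profiles if r[feature]]
    lift = ban_rate(with_feature) / overall
    print(f"{feature}: rate {ban_rate(with_feature):.2f}, lift {lift:.2f}x vs {overall:.2f}")
```

With these toy records, the default-avatar feature shows a 2x lift over the overall rate, the kind of salient-feature signal the paper extracts (far more rigorously) from its predictors.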
Machine Learning and AI for Healthcare: Big Data for Improved Health Outcomes — Second Edition — Arjun Panesar
Machine Learning and AI for Healthcare: Big Data for Improved Health Outcomes, Second Edition
Arjun Panesar, Coventry, UK

ISBN-13 (pbk): 978-1-4842-6536-9
ISBN-13 (electronic): 978-1-4842-6537-6
https://doi.org/10.1007/978-1-4842-6537-6

Copyright © 2021 by Arjun Panesar

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made.
Write-Ins Race/Name Totals - General Election 11/03/20 11/10/2020
President/Vice President
Phillip M Chesion / Cobie J Chesion 1
1 U/S. Gubbard 1
Adebude Eastman 1
Al Gore 1
Alexandria Cortez 2
Allan Roger Mulally former CEO Ford 1
Allen Bouska 1
Andrew Cuomo 2
Andrew Cuomo / Andrew Cuomo 1
Andrew Cuomo, NY / Dr. Anthony Fauci, Washington D.C. 1
Andrew Yang 14
Andrew Yang Morgan Freeman 1
Andrew Yang / Joe Biden 1
Andrew Yang/Amy Klobuchar 1
Andrew Yang/Jeremy Cohen 1
Anthony Fauci 3
Anyone/Else 1
AOC/Princess Nokia 1
Ashlie Kashl Adam Mathey 1
Barack Obama/Michelle Obama 1
Ben Carson Mitt Romney 1
Ben Carson Sr. 1
Ben Sass 1
Ben Sasse 6
Ben Sasse senator-Nebraska Laurel Cruse 1
Ben Sasse/Blank 1
Ben Shapiro 1
Bernard Sanders 1
Bernie Sanders 22
Bernie Sanders / Alexandria Ocasio Cortez 1
Bernie Sanders / Elizabeth Warren 2
Bernie Sanders / Kamala Harris 1
Bernie Sanders Joe Biden 1
Bernie Sanders Kamala D. Harris 1
Bernie Sanders/ Kamala Harris 1
Bernie Sanders/Andrew Yang 1
Bernie Sanders/Kamala D. Harris 2
Bernie Sanders/Kamala Harris 2
Blain Botsford Nick Honken 1
Blank 7
Blank/Blank 1
Bobby Estelle Bones 1
Bran Carroll 1
Brandon A Laetare 1
Brian Carroll Amar Patel 1
Page 1 of 142
Brian Bockenstedt 1
Brian Carol/Amar Patel 1
Brian Carrol Amar Patel 1
Brian Carroll 2
Brian carroll Ammor Patel 1
Brian Carroll Amor Patel 2
Brian Carroll / Amar Patel 3
Brian Carroll/Ama Patel 1
Brian Carroll/Amar Patel 25
Brian Carroll/Joshua Perkins 1
Brian T Carroll 1
Brian T.
Analyzing Twitter Users' Behavior Before and After Contact By
Analyzing Twitter Users' Behavior Before and After Contact by Russia's Internet Research Agency

UPASANA DUTTA, RHETT HANSCOM, JASON SHUO ZHANG, RICHARD HAN, TAMARA LEHMAN, QIN LV, SHIVAKANT MISHRA, University of Colorado Boulder, USA

Social media platforms have been exploited to conduct election interference in recent years. In particular, the Russian-backed Internet Research Agency (IRA) has been identified as a key source of misinformation spread on Twitter prior to the 2016 U.S. presidential election. The goal of this research is to understand whether general Twitter users changed their behavior in the year following first contact from an IRA account. We compare the before and after behavior of contacted users to determine whether there were differences in their mean tweet count, the sentiment of their tweets, and the frequency and sentiment of tweets mentioning @realDonaldTrump or @HillaryClinton. Our results indicate that users overall exhibited statistically significant changes in behavior across most of these metrics, and that those users that engaged with the IRA generally showed greater changes in behavior.

CCS Concepts: • Applied computing → Law, social and behavioral sciences; • Human-centered computing → Collaborative and social computing.

Additional Key Words and Phrases: Internet Research Agency; IRA; Twitter; Trump; democracy

ACM Reference Format: Upasana Dutta, Rhett Hanscom, Jason Shuo Zhang, Richard Han, Tamara Lehman, Qin Lv, Shivakant Mishra. 2021. Analyzing Twitter Users' Behavior Before and After Contact by Russia's Internet Research Agency. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 90 (April 2021), 24 pages. https://doi.org/10.1145/3449164

INTRODUCTION
For many years, the growth of online social media has been seen as a tool used to bring people together across borders and time-zones.
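The before/after comparison this abstract describes can be sketched as a paired test on per-user activity. The counts below are invented, and the paper's metrics (tweet counts, sentiment, mention frequency) and statistics are richer than this single-metric sketch:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-statistic over per-user before/after measurements."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical monthly tweet counts for five users, averaged over the year
# before and the year after first contact by an IRA account (invented data).
before = [30, 45, 12, 60, 25]
after  = [42, 50, 20, 75, 24]

print(f"mean change: {mean(after) - mean(before):+.1f} tweets, "
      f"t = {paired_t(before, after):.2f}")
```

Pairing each user with themselves, rather than comparing group means, is what lets this kind of design attribute a behavior shift to the contact event rather than to differences between users.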
Lawless: the secret rules that govern our digital lives (and why we need new digital constitutions that protect our rights)
Submitted version. Forthcoming 2019 Cambridge University Press.
Nicolas P. Suzor

Table of Contents
Part I: a lawless internet
Chapter 1. The hidden rules of the internet (p. 6)
Process matters (p. 12)
Chapter 2. Who makes the rules? (p. 17)
Whose values apply? (p. 22)
The moderation process (p. 25)
Bias and accountability (p. 28)
Chapter 3. The internet's abuse problem (p. 41)
Abuse reflects and reinforces systemic inequalities (p. 50)
Dealing with abuse needs the involvement of platforms
Google v. Hood
Case 3:14-cv-00981-HTW-LRA, Document 37-1, Filed 01/22/15, Page 1 of 15

IN THE UNITED STATES DISTRICT COURT FOR THE SOUTHERN DISTRICT OF MISSISSIPPI, NORTHERN DIVISION

GOOGLE, INC., PLAINTIFF, vs. JIM HOOD, ATTORNEY GENERAL OF THE STATE OF MISSISSIPPI, IN HIS OFFICIAL CAPACITY, DEFENDANT. CIVIL ACTION NO. 3:14-cv-981-HTW-LRA

PROPOSED MEMORANDUM OF AMICUS CURIAE INTERNATIONAL ANTICOUNTERFEITING COALITION

I. INTEREST OF AMICUS CURIAE

The International AntiCounterfeiting Coalition ("IACC") is a nonprofit corporation recognized as tax-exempt under Internal Revenue Code § 501(c)(6). It has no parent corporation, and no publicly held corporation owns 10% or more of its stock. The IACC was established in 1979 to combat counterfeiting and piracy by promoting laws, regulations, directives, and relationships designed to render the theft of intellectual property undesirable and unprofitable.[1] Counterfeiting and piracy are scourges that threaten consumer health and safety and drain billions of dollars from the U.S. economy. The IACC is the oldest and largest organization in the country devoted exclusively to combating these threats. Today the IACC's members include more than 230 companies in the pharmaceutical, health and beauty, automotive, fashion, food and beverage, computer and electronics, and entertainment industries, among others. The IACC offers anti-counterfeiting programs designed to increase protection for patents, trademarks, copyrights, service marks, trade dress and trade secrets. Critical to the IACC's

[1] IACC states that no party's counsel authored this brief in whole or in part; no party or party's counsel contributed money that was intended to fund preparing or submitting this brief; and no person, other than the amicus curiae and its members, contributed money that was intended to fund preparing or submitting this brief.