Task Force on the Online Harassment of Journalists
ADL Report: Control-Alt-Delete
Recommendations of the ADL Task Force on the Online Harassment of Journalists

Table of Contents

• Foreword
• Introduction
• Why ADL Is Focused On Cyberhate
• SECTION ONE: SUMMARY OF KEY RECOMMENDATIONS
  • Recommendations for Industry
  • Recommendations for Journalists, Targets, and Advocates
  • Recommendations for the Legal System and Policymakers
• SECTION TWO: THE PROBLEM
  • Fighting Cyberhate and Upholding the First Amendment
  • Journalists as Targets
  • Fighting Cyberhate While Upholding the First Amendment
  • Difficulty of Policing Online Hate
  • Why Harassment Is a Particularly Serious Problem for Twitter
• SECTION THREE: DEEPER CONTEXT
  • The Social Media Ecosystem
  • How the Major Platforms Mitigate Abuse
  • Anti-Harassment by Design
  • Privileging Accountable Online Identities
  • New Platforms and Services Designed to Help Mitigate Abuse
    • Heartmob
    • Trollbusters
    • Hate Speech Blocker
    • Crash Override Network
    • The Women’s Media Center Speech Project
  • Producing New Tools for Journalists
  • The Legal Framework
    • The Communications Decency Act
    • The First Amendment
    • Exceptions to the First Amendment
    • Practical Difficulties in Bringing Criminal Charges
    • Doxxing and Swatting
    • Hate Crime Laws
    • Challenges for Law Enforcement
    • Civil Lawsuits and Copyright Remedies
• CONCLUSION
• APPENDIX A: METHODOLOGY
• APPENDIX B: TASK FORCE MEMBERS
• APPENDIX C: ADL’S HISTORY OF RESPONDING TO RECURRING PATTERNS AND DISTURBING TRENDS
• APPENDIX D: MORE OF THE LAW REGARDING “TRUE THREATS”
• APPENDIX E: ANTI-SEMITISM IN THE U.S.
• APPENDIX F: THE FIRST REPORT

Foreword

Sometime during the 2016 presidential campaign, it became dangerous to be a journalist.
[Image: a tweet directed at Jonathan Weisman featuring the “Machen Amerika Great” meme]

Throughout 2016, ADL received complaints from journalists covering the presidential campaign that they were being serially harassed online merely for doing their jobs.1 Journalists received altered images of themselves as concentration camp inmates, wearing yellow “Juden” stars, and of Auschwitz’s infamous entry gates emblazoned with the slogan “Machen Amerika Great.” They were directed to go “back to the ovens” and tarred with anti-Semitic slurs. This abuse had real-life consequences, as one journalist discovered when he saw white supremacist images embedded in a video that was designed to trigger his epilepsy,2 and as other reporters found when they were repeatedly harassed at campaign events.

In response to this deluge of hate, ADL embarked on a groundbreaking project to quantify the scope of the abuse, to evaluate how, when, and where it occurred, and to create a profile of the abusers. ADL’s report,3 the first of its kind, found that anti-Semitic language on social media was shockingly pervasive: a total of 2.6 million tweets containing anti-Semitic language were posted on Twitter between August 2015 and July 2016, with an estimated potential reach of 10 billion impressions. ADL believes this outpouring of hate has contributed to the reinforcement and normalization of anti-Semitic language on a massive scale.

We’re following our initial report with a second installment: ADL’s recommendations for addressing internet harassment, which we hope will help policymakers, law enforcement, the internet industry, the targets of online harassment, and the public. ADL believes we have a collective obligation to develop the tools and strategies to confront online hate in order to ensure that the internet remains a medium of free and open communication for all people.

Introduction

This report is organized in three main sections.

• Section One provides a summary of our key recommendations.
• Section Two provides background on how we arrived where we are today.

• Section Three sets out some context that is key to understanding the social media ecosystem, the relevant legal framework, and innovative research and new tools that are already providing creative responses to abuse.

Finally, we include several appendices providing background on the Task Force, our methodology, the initial report, and some additional analysis and context that may be helpful.

[Image: Dana Schwartz (Observer). This meme is repeated with various journalists pictured inside the gas chamber.]

About ADL

The Anti-Defamation League (ADL) was founded over one hundred years ago “to stop the defamation of the Jewish people, and to secure justice and fair treatment to all.”5 From its inception, ADL has understood that the fight against one form of prejudice requires battling hate in all forms. Today ADL is one of the nation’s premier civil rights and human relations agencies. ADL fights anti-Semitism and all forms of bigotry, defends democratic ideals, and protects civil rights for all through information, education, legislation, and advocacy.

Why ADL Is Focused On Cyberhate

From its inception, ADL has understood that the fight against one form of prejudice requires battling hate in all forms.4

ADL took an early lead against those who use new technologies to foment hatred, undermine democratic values, and tear our society apart. From anti-Semitic images in mass-circulation newspapers to stereotypical depictions of Jews and African-Americans in the entertainment industry, ADL has consistently recognized the power of media and communications technology to shape public attitudes on issues of prejudice, hatred, and discrimination – both for ill and for good.6 With every major advancement in technology in the past century, ADL has fought those who would use new platforms to spread hate, and emphasized the importance of promoting acceptance, inclusiveness, and the protection of civil and human rights.
SECTION ONE: SUMMARY OF KEY RECOMMENDATIONS

RECOMMENDATIONS FOR INDUSTRY

IMPROVE MECHANISMS FOR REPORTING HATE SPEECH AND HARASSMENT

• Reporting mechanisms should be thorough, quick, and responsive – Allow complainants to explain why content is offensive and to show the cultural, social, or personal context that makes them feel targeted.

• Reviewers need cultural context and continual training – Ensure that reviewers understand specific cultural contexts and receive continual updates to their training.

• Internet companies should be transparent – Explain review processes and moderation guidelines to users, so they can better participate in the system.

• Complaint mechanisms should be efficient – Permit complainants to report instances when the same abusive material is posted multiple times or on multiple channels.

• Targets should not be re-victimized – Reduce the need for prohibited or suspended material to be re-flagged every time it appears or is reported.

RESPOND TO BYSTANDER REPORTING OF HARASSMENT

• Accept third-party harassment reports – Update platform systems to respond not only to reports from harassment victims, but also to reports by observers of harassment.

• Encourage user reporting – Encourage social media users to use existing complaint mechanisms and reporting solutions.

SIMPLIFY THE APPEALS PROCESS FOR DENIALS OF REQUESTS TO ADDRESS HATE SPEECH OR HARASSMENT

• Simplify and clarify appeals processes – Include the original complaint in all responses to complainants, especially since targets may have multiple active complaints. Include a link within the complaint response to appeal the company’s decision.

ENCOURAGE CROSS-PLATFORM COMMUNICATION BETWEEN COMPANIES

• Disrupt online harassment campaigns – Determine ways to track and respond to cross-platform abuse when harassers employ multiple platforms to harass others.
• Social media platforms should collaborate on, or co-invest in, anti-harassment tools – Pool resources and combine expertise to develop new approaches to combat online harassment.

INVEST IN INNOVATIVE RESEARCH

• Develop programming solutions to address harassment – Explore promising avenues to combat harassment, like natural language processing and machine learning, and technical solutions that may preserve anonymous speech but permit accountability.

• Encourage “anti-harassment by design” – Encourage innovators to consider harassment problems when designing new platforms and build in protections from the start.

• Privilege accountable online identities – Propose platform structures that privilege “verified” online identities, while still carving out spaces for free expression and the potential need for anonymity.

RECOMMENDATIONS FOR JOURNALISTS, TARGETS, AND ADVOCATES

HARNESS POSITIVE COUNTER-SPEECH

• Beat trolls at their own game – Emulate groups like HeartMob and Trollbusters that combat harassment in real time by unleashing supportive positive content on a target’s social media feeds.

• Use more speech to educate – Learn from new services, like Hate Speech Blocker, that issue real-time alerts to educate users when they use hate speech or abusive language, making them pause and consider their choices before sending the content.

PRODUCE NEW TOOLS FOR JOURNALISTS AND TARGETS

• Give writers control of their online space – Develop innovative solutions for journalists who must engage with social media, as the Coral Project has done, by improving online commenting forums and building out tools for newsrooms to customize.

• Silence the noise – Look to platform features or third-party applications that allow users to block content, keywords, or abusers from social media.

• Support targets of online abuse – Support advocacy groups, like Crash Override Network, that mitigate the effects of online abuse while providing legal and emotional support to targets of online hate.
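To make the keyword-blocking idea concrete, here is a minimal illustrative sketch. Everything in it is hypothetical: the function names, the placeholder blocklist, and the matching logic (simple case-insensitive word matching) are assumptions for illustration only, not any real platform's implementation; production tools rely on far more sophisticated techniques, such as the natural language processing and machine learning approaches recommended above.

```python
# Toy sketch of keyword-based muting, as described in the "Silence the
# noise" recommendation. All names and terms here are hypothetical.

# Hypothetical placeholder blocklist; a real tool would let each user
# maintain their own list of muted terms.
BLOCKED_KEYWORDS = {"badword", "slurexample"}


def is_muted(post: str, blocklist: set) -> bool:
    """Return True if the post contains any blocked keyword,
    matching case-insensitively and ignoring trailing punctuation."""
    words = post.lower().split()
    return any(word.strip(".,!?") in blocklist for word in words)


def filter_feed(posts: list, blocklist: set) -> list:
    """Keep only the posts that contain no blocked keywords."""
    return [post for post in posts if not is_muted(post, blocklist)]
```

A user-facing tool built on this idea would apply the filter before posts are displayed, so muted content never reaches the target's feed at all.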
DEPLOY EASY-TO-USE AND ACCESSIBLE CYBERSECURITY TOOLS

• Apply smart online security solutions – Follow