
2017 NJSBA ANNUAL MEETING

Racism, Xenophobia, and the Algorithms: How do the Results from Google and other Internet Search Algorithms Impact Society and the Law?

Co-Sponsored by the Internet and Computer Law Section Committee and the Diversity Committee

Moderator/Speaker: Maria P. Vallejo, Esq., Chasan Lamparello Mallon & Cappuzzo, PC, Secaucus

Speakers:
Raymond M. Brown, Esq., Greenbaum Rowe Smith & Davis, LLP, Woodbridge
Brett R. Harris, Esq., Wilentz, Goldman & Spitzer, PA, Woodbridge
Jeffrey Neu, Esq., Kuzas Neu, Red Bank
David Opderbeck, Professor of Law and Co-Director of the Gibbons Institute of Law, Science & Technology, Seton Hall University School of Law, Newark
Jonathan Vick, Associate Director, Investigative Technology and Cyberhate Response, Anti-Defamation League

Racism, Xenophobia, and the Algorithms:

How do the results from Google and other Internet search algorithms impact society and the law?
Google Search: Three White Teenagers
Google Search: Three Black Teenagers
Google Search: Inappropriate Hairstyles for Work
Google Search: Beautiful Braids

Search results for “Martin Luther King Jr Historical”
Home page for: https://www.martinlutherking.org/
Result when you click: “Truth About King – Who he fought and fought for”
How do the results from Google and other Internet search algorithms impact society and the law?

Some Legal Issues Relating to Search Engines

Prof. David W. Opderbeck
Seton Hall University Law School
[email protected]

© 2017 David W. Opderbeck. Licensed Under Creative Commons Attribution / Share-Alike

Intellectual Property
• DMCA Notice and Takedown
• DMCA Red Flag Knowledge
• Fair Use

Privacy
• Data Collection and Big Data
• Differences in U.S. and EU Approaches

Competition / Antitrust
• EU Antitrust Action Against Google
• Differences in U.S. / EU Approach

Content Blocking and Free Speech
• User-Generated Content
• EU Code of Conduct on Countering Illegal Hate Speech Online
• Surveillance and Legal Process for User Identity and Content Information
• Sponsored Content: Advertising by Hate Groups
• Terms of Service / Terms of Use Issues

The “” Issue
• What Are Search Providers and What is Their Role in Governance?

Responding to Cyberhate: Progress and Trends

March 2016

Table of Contents

INTRODUCTION

CHARTING PROGRESS

A NEW CHALLENGE: TERRORIST USE OF SOCIAL MEDIA

BEST PRACTICES FOR RESPONDING TO CYBERHATE

APPENDIX: ADL CYBER-SAFETY ACTION GUIDE

INTRODUCTION

The Internet is the largest marketplace of ideas the world has known. It enables communications, education, entertainment and commerce on an incredible scale. The Internet has helped to empower the powerless, reunite the separated, connect the isolated and provide new lifelines for the disabled. By facilitating communication around the globe, the Internet has been a transformative tool for information-sharing, education, human interaction and social change. All of us treasure the freedom of expression that lies at its very core.

Unfortunately, while the Internet’s capacity to improve the world is boundless, it is also used by some to transmit anti-Semitism and other forms of hate and prejudice, including anti-Muslim bigotry, racism, homophobia, misogyny, and xenophobia. The Anti-Defamation League (ADL) has been addressing the scourge of online anti-Semitism since pre-Internet days, when dial-up bulletin boards were a prominent communications tool. As the Internet emerged for personal use in the 1990s, ADL was there to monitor, report and propose approaches to fight online hate. Today, ADL is known as one of the preeminent NGOs addressing online anti-Semitism and all forms of online hate.

Cyberhate, defined as “the use of any electronic technology to spread bigoted, discriminatory, terrorist and extremist information,” manifests itself on websites and blogs, as well as in chat rooms, social media, comment sections and gaming. In short, hate is present in many forms on the Internet, creating a hostile environment and reducing equal access to its benefits for those targeted by hatred and intimidation.

In an ideal world, people would not choose to communicate hate. But in the real world they do, all too often. And hate expressed online can lead to real-world violence, nearby or far away. Cyberhate poses additional challenges, because everyone can be a publisher on the Internet. Hateful content can spread around the globe literally in seconds, and it often goes unchallenged. So it is necessary to find effective ways to confront online hate, to educate about its dangers, to encourage individuals and communities to speak out when they see it, and to find and create tools and means to deter it and to mitigate its negative impact. In doing so, it is also important to keep in mind the need to find the right balance, addressing cyberhate while still respecting free expression and not inhibiting legitimate debate.

The unique challenge of hate speech online is unfortunately not the only challenge we face today. Extremists and terrorists have become much more sophisticated in their use of social media. This growing threat has been particularly evident with a rise in “self-radicalization,” encouraged and abetted by terrorist groups. Terrorist exploitation of the Internet is an order of magnitude different from hate speech online, and new strategies may be necessary to respond to it.

CHARTING PROGRESS

The following charts have been created to illustrate changes in the predominant Internet environment over just the past two years. In snapshot form, they show where we are today, revealing changes in both how cyberhate manifests itself on the Internet and how the industry has become more serious and more sophisticated in dealing with the problem. They document progress – often incremental, but in its totality significant, impressive and important – mostly the result of industry-sponsored initiatives.

Unfortunately, spewing anti-Semitism and hate online is much easier than finding effective ways to respond to it. We have come to understand that as long as hate exists in the real world, that hate will be reflected in the virtual world as well. What happens on the Internet is a reflection of society, and not the other way around. Consequently, as long as technology keeps evolving, and bias, racism and anti-Semitism persist, the haters will likely find ways to exploit the new services and new platforms to spew their corrosive message. We need to be just as creative, and just as determined, to counter them.

Highlights of changes in various platforms when it comes to dealing with cyberhate

PLATFORMS: 2013 vs. 2016

Websites
2013: Limited existence and enforcement of hosting company rules.
2016: Mixed picture. Many companies with appropriate terms of service are responsive. Companies with lax terms of service are less responsive. Potential impact of proposed new Federal Communications Commission (FCC) regulations considering U.S.-based hosts as “common carriers” is still to be determined.

Comments/Reviews
2013: Sporadic enforcement of hate speech in review and comment sections of websites. Terms of Service not always clear or easily found.
2016: Word-sifting software coming into increasingly frequent use. Websites far more responsive to complaints about issues in reviews and comments. Anonymity, which is used to hide identity of haters, increasingly is being addressed by online services.

Monetization and E-Commerce
2013: Many hate groups used PayPal, Amazon, GoFundMe and similar services.
2016: Most transactional and funding websites have Terms of Service prohibiting use by hate groups (as defined by the company), and responsiveness is increasing.

Social Media
2013: Limited prohibitions on hate speech, defamation or abusive posts.
2016: Universal acknowledgement of hate speech as a problem. More but confusing array of standards and mechanisms in use. Some companies more responsive to complaints than others.

Blogs
2013: Lack of meaningful Terms of Service.
2016: Google launched updated and unified Terms of Service (3/2014) affecting content rules for most user-generated content services.

File Drops/Cloud Storage
2013: Limited attention to use of file drops by terrorists and hate groups.
2016: Most file-drop sites only prohibit illegal content but do not monitor on privacy grounds. Abuse by hacker and terrorist groups still occurs.

Smart Devices/Apps
2013: No ratings for apps, games or other smart device content.
2016: Google Play initiated an app rating and review system 3/2015. iTunes acknowledged as having the strictest review procedure and permissions.

Games
2013: Microsoft Xbox unit virtually alone in enforcing gaming environment rules.
2016: Many online game platforms now using filtering software to monitor and limit inappropriate language used within games as well as users’ name/profile information. Real-time abuse in game environment still problematic.

Basic industry practices related to cyberhate and how they have changed in the last three years

PRACTICES: 2013 vs. 2016

Hate Speech Policies
2013: The Terms of Service for many platforms did not address hate speech directly or used vague terminology.
2016: Multiple platforms, including Google, Amazon, Microsoft gaming, and Yahoo, now include specific prohibition of hate speech in policies.

User-Friendly Reporting
2013: Complaint mechanisms or contact details were often buried or limited in functionality.
2016: Virtually every major service and platform uses post, profile and image flagging. Now standard practice to send receipt-of-complaint acknowledgements and provide links to further policy/process information.

Enforcement Mechanisms
2013: In cases where hate speech was prohibited, penalties were mostly delineated.
2016: Google, Facebook, Twitter have instituted flagging for specific posts and partial content removal. Several social media platforms have implemented “stop and think before sending” messages and campaigns.

Transparency
2013: Pervasive tendency for companies not to explain why content was allowed to remain after a complaint; little explanation offered to users whose material was deleted.
2016: Most platforms offer explanations to users whose content has been deleted and provide an appeals process. Complainants on Facebook and YouTube are advised if content has been removed. Public disclosure of rationales for removals is limited.

Counter-speech
2013: Counter-speech education by only a limited number of companies, and uncoordinated between companies.
2016: Counter-speech projects are being studied and changes implemented by major platforms.

Challenges that the industry as a whole confronts when dealing with cyberhate

INTERNAL INDUSTRY CHALLENGES: 2013 vs. 2016

Industry Realities
2013: No effort to broadly explain the challenges created by evolving technology, unintended consequences and the volume of content.
2016: Industry platforms are sharing more data on traffic, members’ complaints and responses than ever before, but still falling short in adequately illuminating the enormous and ever-growing volume of content and the challenge of addressing issues that require human evaluation and intervention.

Anonymity
2013: Anonymous participation on many platforms tolerated despite policies to the contrary.
2016: Anonymity continues to pose challenges for enforcement of Terms of Service. New technologies are better at detecting users with multiple accounts being used to evade website policy.

Industry Coordination
2013: No coordinated industry statements or projects obvious to the public.
2016: The Anti-Cyberhate Working Group has become a major venue for the industry to coordinate anti-cyberhate activity. Major breakthroughs: publication of ADL’s “Best Practices for Responding to Cyberhate” and the well-received Cyber-Safety Action Guide. There is more dialogue between companies on hate-related issues than ever before.

Hate Speech Links and Linked Material
2013: Platforms took no substantial responsibility for third-party or linked content.
2016: Ongoing debate and discussion regarding platform as publisher and impact of link distribution.

Corporate Voices
2013: Few if any corporate voices spoke about online hate.
2016: Anti-hate speech voices in industry now led by Facebook, Microsoft, and Google, with recent important statements by Twitter.

External challenges that impact the industry’s ability to address cyberhate

EXTERNAL INDUSTRY CHALLENGES: 2013 vs. 2016

Cross Border
2013: Limited coordination of cross-border issues.
2016: In the borderless environment of the Internet, almost all initiatives and resolution programs remain geographically based.

Government Intervention
2013: Uncoordinated or unenforceable regulations.
2016: Increasing disconnect between online ideals and achievable targets for action compared to laws under consideration and being enacted to curb online hate.

Cyber-Terror/Hacking
2013: Hacking (website defacement) mainly performed on an opportunistic basis without consistent political motivation or targeting.
2016: Sharp increase in politically motivated hacking targeting Jewish institutions and Western interests.

Activities by non-industry stakeholders to address cyberhate

STAKEHOLDERS: 2013 vs. 2016

International Bodies
2013: Numerous country-specific organizations; few international networks or associations.
2016: Unchanged.

Academia
2013: Limited external and stakeholder events by major institutions.
2016: Centers flourishing at major universities, including Stanford, Harvard, Brandeis, UCLA and Yale in the U.S.

Industry — Anti-Cyberhate Working Group
2013: Anti-Cyberhate Working Group: first steps.
2016: Anti-Cyberhate Working Group continues to promote coordination among stakeholders; such coordination is probably still the best hope for productive results.

A NEW CHALLENGE: TERRORIST USE OF SOCIAL MEDIA

As Internet proficiency and the use of social media grow ever more universal, so too do the efforts of terrorist groups to exploit new technology in order to make materials that justify and sanction violence more accessible and practical. Terrorist groups motivated by Islamic extremist ideologies are not only using Facebook, Twitter, YouTube and various other emerging platforms to spread their messages, but also to actively recruit adherents who live in the communities they seek to target.

While the fundamental ideological content of terrorist propaganda has remained consistent for two decades – replete with militant condemnations of perceived transgressions against Muslims worldwide, appeals for violence and anti-Semitism – terrorist groups are now able to reach, recruit and motivate extremists more quickly and effectively than ever before by adapting their messages to new technology.

In the past, plots were directed by foreign terrorist organizations or their affiliates, and recruitment and planning generally required some direct, face-to-face interaction with terrorist operatives. Indoctrination came directly from extremist peers, teachers or clerics. Individuals would then advance through the radicalization process through constant interaction with like-minded sympathizers or, as the 2007 New York Police Department (NYPD) report on radicalization described, with a “spiritual sanctioner” who gave credence to those beliefs.

The Internet and Self-Radicalization

Today, individuals can find analogous social networks, inspiration and encouragement online, packaged neatly together with bomb-making instructions. This enables adherents to self-radicalize without face-to-face contact with an established terrorist group or cell. Furthermore, individual extremists, or lone wolves, are also increasingly self-radicalizing online with no physical interactions with established terrorist groups or cells – a development that can make it more difficult for law enforcement to detect plots in their earliest stages.

The majority of American citizens and residents linked to terrorist activity motivated by Islamic extremist ideologies since 2013 actively used the Internet to access propaganda or otherwise facilitate their extremist activity.

ISIS Recruitment Online

Since 2014, the Islamic State of Iraq and Syria (ISIS) has been particularly aggressive in pursuing multiple sophisticated online recruiting and propaganda efforts. ISIS’s far-reaching propaganda machine has not only attracted thousands of recruits, but has also helped Syria and Iraq emerge as the destinations of choice for a new generation of extremists.

This activity has likely contributed to the increasing number of individuals accused of joining or aiding the terrorist organization. Eighty U.S. residents were linked to Islamic extremist plots and other activity in 2015, nearly triple the total of each of the past two years (28 individuals in 2014 and 22 in 2013).

Globally, at least 20,000 fighters are believed to have traveled to join the conflict in Syria and Iraq, many of whom have joined ISIS. The largest numbers come from the Middle East and North Africa. But non-majority-Muslim countries have seen steady numbers of individuals leaving to fight as well. This includes 800-1,500 from Russia, 1,200 from France, 500-600 from Germany, 500-600 from the United Kingdom, and about 300 from China.

There have also been a surprisingly large number of minors. For example, focusing on the United States, five Americans under the age of 18 were linked to activity motivated by Islamic extremist ideology in 2014, and four in 2015. This included three Denver, Colorado teenagers, aged 15, 16 and 17. At least one of the girls was encouraged to travel to Syria by an individual she was communicating with online, according to reports. The 15-year-old described her radicalization in a series of Tweets: “I started to notice the people I called ‘friends’ weren’t my true friends. But the people who reminded me about my Deen (religion) were my TRUE friends.” Some of the 16-year-old’s Tweets reveal the degree to which she identified with this extreme ideology: “those who identify as ‘gay’ and ‘Muslim’ at the same time deserve death,” and “Muslims handing out apologizes (sic) because of 9/11 are a disgrace to the Ummah (global community of Muslims).”

Twitter emerged as ISIS’s platform of choice in part because it is able to conceal the identities of its users more effectively than other forums and social networking sites. And while accounts are regularly shut down – Twitter indicated that it has shut down more than 125,000 profiles linked to ISIS content since mid-2015 – ISIS supporters continuously attempt to establish new ones.

As Twitter and other platforms attempt to mitigate efforts by ISIS to actively encourage violent extremism by removing content that violates their terms of service, terrorist groups and their supporters continue to seek out new platforms to broadcast their propaganda and connect with adherents. In late 2015, for example, ISIS supporters started migrating to Telegram, a chat and group application available for smartphones and desktops, as their primary platform for official propaganda. While Telegram has since removed all public ISIS-affiliated groups, ISIS supporters continue to utilize its private services.

Some efforts by terrorist groups to move to other platforms or create new ones have been less successful. In July of 2014, for example, ISIS announced that its official Internet presence was moving from Twitter to alternate social media sites Friendica and Quitter. Following exposure by ADL, however, all ISIS presence was quickly deleted from Friendica and Quitter, and the group returned to Twitter.

ISIS’s online presence is worldwide and presented in multiple languages, as is the propaganda it distributes via social media platforms. The terror group releases online magazines in Arabic, English, Turkish and French, and it has also released statements and videos in other languages, including Hebrew, Spanish, Russian, Kurdish and German.

Official social media accounts are augmented by supporters on social media, some of whom seem to have quasi-official status. These supporters both share official propaganda and contribute to the barrage of online voices supporting terrorist ideology. Some supporters add personal details about their experiences in the group – information that adds to the authenticity of their narratives by providing concrete experiences.

In order to unify its messaging, ISIS has also organized hashtag campaigns on Twitter, encouraging supporters to repeatedly Tweet hashtags such as #CalamityWillBefallUS, which threatened attacks against the U.S.; #AllEyesOnISIS, which attempted to magnify the number of ISIS supporters on Twitter; and #FightForHim, which called for copycat attacks following the January 2015 attack on the French magazine Charlie Hebdo. The apparent goal is for these terms to trend on Twitter, vastly increasing the visibility of tweets.

Similarly, ISIS has used hashtag campaigns to insert its messages into other trending topics on Twitter that have nothing to do with violent extremism. Thus, it will encourage its supporters to tweet ISIS messages with popular hashtags such as #worldcup or #Ferguson so that people searching for those hashtags will inadvertently come across pro-ISIS posts. Hashtag campaigns have been conducted in a number of languages, including English, French, Arabic and Turkish.

ISIS supporters are often active on a variety of platforms beyond Twitter, including the social networking site Facebook, picture-sharing sites, the chat services Kik and WhatsApp, the video-sharing site YouTube, and the question-and-answer service Ask.FM. These individuals also encourage direct contact with potential recruits via encrypted messaging services such as SureSpot.

On Ask.FM, where users can post questions anonymously, known members of extremist organizations are asked questions by potential recruits. For example, the user Mujahid Miski (believed to be Mohamed Abdullahi Hassan, an Al Shabaab member from Minnesota who has since been taken into Somali custody) answered questions including, “My brother wants to be a mujahid (fighter) but he’s got glasses. Will that stop him from becoming one?” Many of his answers also include encouragement for readers to join terrorist groups, including ISIS. In one, for example, he wrote, “every minute and every second is wasted if you’re not out there building the Islamic Caliphate (a reference to ISIS). Go out and make hijrah (migration to a Muslim land) from the east and the west and join jihad (the fighting). Let your blood be the water for the tree of Khilafah (caliphate, a reference to ISIS).”

Many ISIS supporters also take advantage of the websites Justpaste.it and its Arabic-language counterpart Manbar.me, which enable them to quickly publish content to unique URLs online that can then be shared on social media. ISIS supporters have used these sites to publish links to downloadable propaganda materials, instructions for traveling to Syria and Iraq, manifestos encouraging lone wolf attacks, and more.

A number of ISIS supporters maintain blogs on which they detail their extremist ideology and narratives of an idealized day-to-day life which they hope will appeal to potential recruits. There have also been instances of ISIS supporters creating new websites to make ISIS propaganda even more accessible. In February 2015, an ISIS supporter created a website called IS-Tube that featured a searchable archive of ISIS propaganda videos, including videos depicting beheadings. The site was hosted on a Google-owned IP block, and was removed after ADL alerted Google to its presence.

Online repositories of terrorist propaganda are not unique to Google, yet some platforms do not have clear or effective policies regarding terrorist content, enabling terrorists and their supporters to exploit their services more easily and without interruption. For example, WordPress hosts a website that features hundreds of ISIS propaganda videos, statements and publications. Among the hundreds of items on the site are beheading and execution videos, as well as videos and articles encouraging Westerners to travel to join ISIS or to commit attacks on its behalf in their home countries. The site remains online despite efforts to flag the material.

Other Terrorist Groups Using the Internet

Other terrorist organizations use social media as well, and many have learned from ISIS’s techniques. During the 2014 conflict between Israel and Hamas, for example, ADL documented multiple social media profiles that could be considered official Hamas accounts.

Like ISIS followers, Hamas supporters utilized hashtag campaigns to promote terror attacks against Israelis and posted videos and images to social media that both applauded and encouraged killing Israelis and Jews with hatchets and by running them over. Indeed, instructional videos on stabbing, clips of preachers calling for attacks on Jews, and graphic images have all gone viral. Such incitement on social media is widely understood as having a significant link to the stabbing attacks against Israelis, and the online approbation of each attack further spreads the message and encourages would-be attackers.

The increase in small arms attacks in both the U.S. and abroad serves as a testimony to the potential power of social media. Spurred at least in part by exhortations in ISIS propaganda on social media to undertake attacks by any means possible, including with knives, the U.S. has seen an increase in small arms attacks by apparent terrorist sympathizers. These have been directed at law enforcement in particular, but pose a more general threat as well.

Advances in technology have enabled terrorist video production to rival high quality films. ISIS even released a feature-film length video, titled “Flames of War,” that portrayed the group as part of an apocalyptic struggle of good versus evil. Other terrorist groups – including Al Qaeda, Al Shabaab (Al Qaeda in Somalia), Boko Haram, Taliban affiliates, the Caucasus Emirates and more – have also distributed propaganda videos via Twitter in recent years.

Perhaps the most infamous English-language terrorist magazine, Inspire, is now distributed via Twitter instead of on extremist forums. An online English-language propaganda magazine produced by Al Qaeda in the Arabian Peninsula (AQAP), Inspire provides articles about terrorist ideology, recruitment information, and encouragement and instructions for homegrown attacks, including the very bomb-making instructions that the Tsarnaev brothers allegedly utilized in the 2013 Boston Marathon bombing.

An article in the magazine’s second issue encouraged “brothers and sisters coming from the West to consider attacking the West in its own backyard. The effect is much greater, it always embarrasses the enemy, and these types of individual attacks are nearly impossible for them to contain.” Its 2014 editions contained directions for making car bombs and bombs designed to evade airport security measures, as well as instructions regarding the best places to detonate them.

Outside the sphere of social media, terrorist groups and sympathizers have also attempted to create applications promoting their organizations and propaganda on iTunes and Google Play.

Hezbollah, for example, has launched a number of applications that provide streaming access to the group’s propaganda-based television station, Al Manar. Google Play and iTunes have been quick to remove them, but Hezbollah, having blamed “the Jewish Anti-Defamation League” for launching a “campaign” to remove the original application, has created the applications so users can download them directly from the Hezbollah website, without going through iTunes or Google Play. Hezbollah has also created several video games on its website with the explicit intent of indoctrinating young players.

Other applications are created by terrorist supporters. The Anwar al-Awlaki application, for example, enabled users to listen to Awlaki’s sermons directly from their mobile devices. Awlaki, the creator of Inspire magazine, was the primary English-language spokesman for AQAP until he was killed by a drone strike in 2011. Awlaki remains tremendously influential. Many of his lectures are still available on YouTube, and supporters regularly create Facebook and Twitter profiles dedicated to sharing his quotes. When these profiles are removed, they are quickly replaced by new ones. A significant number of domestic Islamic extremists, including the Tsarnaev brothers, have accessed his propaganda and cited him as an inspiration.

Another Challenge: Islamic Extremists’ Hacking Activity

Perhaps the newest frontier of online extremism comes in the form of Islamic extremist hacking. Politically motivated hackers from the Arab world have begun targeting the websites of perceived supporters of Israel, including synagogues, Jewish institutions, and individuals. These attacks are increasingly undertaken in the name of terrorist organizations, particularly ISIS. There are signs that ISIS is beginning to attempt to harness the hackers and hacker groups into supporting its own mission and expanding the hacks to target websites and government institutions in the U.S.

In March 2015, for example, hacker(s) identifying as “ISIS cyber army” claimed responsibility for hacking 51 American websites on March 24. Each of the hacked websites was defaced with the ISIS flag, a statement that the website was “Hacked by Islamic State” and an e-mail address for the ISIS cyber army, the unit believed to be behind the cyber activities of ISIS. In the past, the ISIS cyber unit claimed responsibility for involvement in a series of attacks against a number of Israeli websites.

This capability to engage in cyber-attacks may be a reflection of ISIS’s calls for support from individuals with various skills, from media experts to doctors, to join and contribute to the group and its mission of gaining strength and territory however they can.

In April 2015, as international hackers once again set their sights on Jewish and Israeli targets as part of “OpIsrael,” an annual anti-Israel cyber-attack campaign, there were strong indications that AnonGhost, an international hacker group that supports terrorist groups and frequently employs anti-Semitism as part of its cyber activity, had replaced Anonymous as the main organizer of the campaign.

Groups such as AnonGhost express unambiguous support for the Palestinian terrorist group Hamas and the Islamic State (ISIS) and have carried out cyber-attacks in their names, bringing an Islamic extremist element into an already virulently anti-Israel and anti-Semitic campaign.

AnonGhost threatened individual Israelis with violence through mobile devices, claiming to have obtained personal information on more than 200 Israelis. One threatening text the group claims to have sent to an Israeli included an image of an infamous ISIS fighter with the caption, “We are coming O Jews to kill you.” A text sent to another Israeli man included an image of his family with the threat, “I’ll stick a knife in their throats.”

While anti-Semitic themes existed in previous OpIsrael campaigns, it had been primarily billed as a response to the Israeli-Palestinian conflict. AnonGhost’s participation and tactics thus far speak to the centrality of anti-Semitism in this year’s campaign, which serves as an extension of AnonGhost's pro-terror activism around the world.

Anti-Semitism: A Pillar of Islamic Extremist Ideology

As new technology and social media continue to alter the nature of global communications, terrorist groups have quickly adapted to these tools in their efforts to reach an ever-widening pool of potential adherents. As a result, anti-Semitism in its most dangerous form is easily accessible by a worldwide audience.

In a video message in August 2015, Osama bin Laden’s son, Hamza bin Laden, utilized a range of anti-Semitic and anti-Israel narratives in his effort to rally Al Qaeda supporters and incite violence against Americans and Jews.

Bin Laden described Jews and Israel as having a disproportionate role in world events and the oppression of Muslims. He compared the “Zio-Crusader alliance led by America” to a bird: “Its head is America, one wing is NATO and the other is the State of the Jews in occupied Palestine, and the legs are the tyrant rulers that sit on the chests of the peoples of the Muslim Ummah [global community].”

Bin Laden then called for attacks worldwide and demanded that Muslims “support their brothers in Palestine by fighting the Jews and the Americans... not in America and occupied Palestine and Afghanistan alone, but all over the world…. take it to all the American, Jewish, and Western interests in the world.”

While such violent expressions of anti-Semitism have been at the core of Al Qaeda’s ideology for decades, terrorist groups motivated by Islamic extremist ideology, from Al Qaeda to ISIS, continue to rely on depictions of a Jewish enemy – often combined with violent opposition to the State of Israel – to recruit followers, motivate adherents and draw attention to their cause. Anti-Israel sentiment is not the same as anti-Semitism. However, terrorist groups often link the two, exploiting hatred of Israel to further encourage attacks against Jews worldwide and as an additional means of diverting attention to their cause.

And they have more tools at their disposal than ever before.

Recent terrorist attacks against Jewish institutions in Europe, and the spike in incitement materials encouraging stabbing and other attacks against Jews and Israelis around the world, not only speak to the global reach provided by these new technologies, but also to the pervasive nature of anti-Semitism in terrorist propaganda that encourages violence directed at Jews.

In September 2015, ADL issued a report examining the nature and function of anti-Semitism in terrorist propaganda today. It focused on ISIS, Al Qaeda Central, and two of Al Qaeda’s largest affiliates, Al Qaeda in the Arabian Peninsula (AQAP) in Yemen and Al Shabaab in Somalia, as well as the prevalence of anti-Semitism among supporters of Palestinian terrorist organizations. It also provided examples of individuals linked to terrorist plots and other activity in the U.S. who were influenced, at least to some degree, by anti-Semitic and anti-Israel messages.

In light of the growing sophistication and reach of terrorist use of social media, developing and implementing strategies to respond to this challenge must be a shared priority, not just for the Internet industry and counterterrorism experts, but for all of us.

BEST PRACTICES FOR RESPONDING TO CYBERHATE

In May 2012, the Inter-Parliamentary Coalition for Combating Anti-Semitism, an organization comprised of parliamentarians from around the world working to combat resurgent anti-Semitism, asked the Anti-Defamation League (ADL) to convene a Working Group on Cyberhate. The mandate of the Working Group was to develop recommendations for the most effective responses to manifestations of hate and bigotry online. The Working Group includes representatives of the Internet industry, civil society, the legal community, and academia.

The Working Group has met five times, and its members have graciously shared their experiences and perspectives, bringing many new insights and ideas to the table. Their input and guidance have been invaluable, and are reflected in the following Best Practices. Obviously, the challenges are different for social networks, search engines, companies engaged in e-commerce, and others. Nevertheless, we believe that these Best Practices could contribute significantly to countering cyberhate.

PROVIDERS

1. Providers should take reports about cyberhate seriously, mindful of the fundamental principles of free expression, human dignity, personal safety and respect for the rule of law.

2. Providers that feature user-generated content should offer users a clear explanation of their approach to evaluating and resolving reports of hateful content, highlighting their relevant terms of service.

3. Providers should offer user-friendly mechanisms and procedures for reporting hateful content.

4. Providers should respond to user reports in a timely manner.

5. Providers should enforce whatever sanctions their terms of service contemplate in a consistent and fair manner.

THE INTERNET COMMUNITY

6. The Internet Community should work together to address the harmful consequences of online hatred.

7. The Internet Community should identify, implement and/or encourage effective strategies of counter-speech – including direct response; comedy and satire when appropriate; or simply setting the record straight.

8. The Internet Community should share knowledge and help develop educational materials and programs that encourage critical thinking in both proactive and reactive online activity.

9. The Internet Community should encourage other interested parties to help raise awareness of the problem of cyberhate and the urgent need to address it.

10. The Internet Community should welcome new thinking and new initiatives to promote a civil online environment.

In addition to the above, ADL would offer the following recommendations for responding to terrorist use of social media.

Recommended Responses to Terrorist Use of Social Media

1. Providers should give priority attention to how their platforms are being used by terrorists and terrorist groups to promote terrorism, to recruit potential new terrorists, and to foster self-radicalization.

2. Providers should make their expertise available to those looking to generate and promote counter-narratives.

3. Providers should work with interested stakeholders to analyze the impact of counter-narratives in terms of their reach, scope, and effectiveness.

4. Providers should consider creating a specific new terrorism category for users seeking to flag terrorism-related content.

5. Providers should use their corporate voices to condemn terrorist use of their platforms and to explain why terrorist activity and advocacy is inconsistent with their goals of connecting the world.

Underlying all of these recommendations is the understanding that rules on hate speech may be written and applied too broadly so as to encumber free expression. Thus, an underlying principle for these recommendations is that care should be taken to respect free expression and not to encumber legitimate debate and free speech.

APPENDIX: ADL CYBER-SAFETY ACTION GUIDE

This Appendix features ADL’s Cyber-Safety Action Guide, which in its online form brings together in one place the relevant Terms of Service addressing hate speech of major Internet companies. Individuals and groups seeking to respond to various manifestations of hate online have found it to be a unique and very useful tool, and the list of participating companies continues to grow.

ANTI-DEFAMATION LEAGUE

Marvin D. Nathan, National Chair
Jonathan A. Greenblatt, CEO & National Director
Kenneth Jacobson, Deputy National Director
Deborah M. Lauter, Senior Vice President, Policy and Programs
Steven M. Freeman, Deputy Director, Policy and Programs
Glen S. Lewy, President, Anti-Defamation League Foundation

CIVIL RIGHTS DIVISION

Elizabeth A. Price, Chair
Christopher Wolf, Chair, Internet Task Force
Jonathan Vick, Assistant Director, Cyberhate Response

CENTER ON EXTREMISM

Jared Blum, Chair
David Friedman, Vice President, Law Enforcement, Extremism and Community Security
Oren Segal, Director

This work is made possible in part by the generous support of:
William and Naomi Gorowitz Institute on Extremism and Terrorism
Marlene Nathan Meyerson Family Foundation
Charles and Mildred Schnurmacher Foundation, Inc.

For additional and updated resources please see: www.adl.org Copies of this publication are available in the Rita and Leo Greenland Library and Research Center.

©2016 Anti-Defamation League | Printed in the United States of America | All Rights Reserved

Anti-Defamation League 605 Third Avenue, New York, NY 10158-3560 www.adl.org

ADL REPORT: Anti-Semitic Targeting of Journalists During the 2016 Presidential Campaign
A report from ADL’s Task Force on Harassment and Journalism

October 19, 2016

KEY FINDINGS

• Based on a broad set of keywords (and keyword combinations) designed by ADL to capture anti-Semitic language, there were 2.6 million tweets containing language frequently found in anti-Semitic speech between August 2015 and July 2016. [Sidebar: overall data pull based on keywords correlating with anti-Semitism — 2,641,072 total mentions from August 1, 2015 through July 31, 2016 contained these keywords.]

• These tweets had an estimated 10 billion impressions (reach), which may contribute to reinforcing and normalizing anti-Semitic language on a massive scale. [Sidebar: 10,000,000,000 — number of estimated impressions generated.]

• At least 800 journalists received anti-Semitic tweets with an estimated reach of 45 million impressions. The top 10 most targeted journalists (all of whom are Jewish) received 83 percent of these anti-Semitic tweets.

• 1,600 Twitter accounts generated 68% of the anti-Semitic tweets targeting journalists. 21% of these 1,600 accounts were suspended in the study period, accounting for 16% of the anti-Semitic tweets.

• Sixty percent of the anti-Semitic tweets were replies to journalists’ posts (11% were regular Tweets and 29% retweets). In other words, anti-Semitism more often than not occurred in response to journalists’ initial posts. [Sidebar: 66% — percentage of tweets posted by male users, based on user-disclosed details.]

• There was a significant uptick in anti-Semitic tweets in the second half (January-July 2016) of this study period. This correlates to intensifying coverage of the presidential campaign, the candidates and their positions on a range of issues.

• There is evidence that a considerable number of the anti-Semitic tweets targeting journalists originate with people identifying themselves as Trump supporters, “conservatives” or extreme right-wing elements. The words that show up most in the bios of Twitter users sending anti-Semitic tweets to journalists are “Trump,” “nationalist,” “conservative,” “American” and “white.” This finding does not imply that Mr. Trump supported these tweets, or that conservatives are more prone to anti-Semitism. It does show that the individuals directing anti-Semitism toward journalists self-identified as Trump supporters and conservatives.

• While anti-Semitic tweets tended to spike in the wake of election-related news coverage, the language used in the anti-Semitic tweets was not solely election-related. Many tweets referenced classic anti-Semitic tropes (Jews control the media, Jews control global finance, Jews perpetrated 9/11, etc.). This suggests that while the initial provocation for anti-Semitic tweets may have been at least nominally election-related, the Twitter users generating targeted anti-Semitism may have used news events as an excuse to unleash anti-Semitic memes, harassment, etc.

• The words most frequently used in anti-Semitic tweets directed at journalists included “kike,” “Israel,” “Zionist,” and “white,” an indication that the harassment may have been prompted by the perceived religious identity of the journalist.

• While anti-Semitism was primarily directed at journalists who are Jewish (or perceived to be Jewish), non-Jewish journalists also received anti-Semitic tweets following criticism of Mr. Trump – presumably intended to be either an insult or threat. This is likely connected to the anti-Semitic tropes related to Jews “controlling” the media, and the media “controlling” the government.

• As previously stated, there is no evidence suggesting these attacks were explicitly encouraged by any campaign or candidate. ADL has, however, been able to identify individuals and websites in the white supremacist world that have played a role in encouraging these attacks.

• While this report did not investigate whether social media attacks have a chilling effect on journalists, it does show that targeted anti-Semitic tweets raised the cost of entry into (and staying in) the marketplace of ideas for journalists, particularly Jewish journalists.

Please note that this is the first stage of a two-stage reporting process. This data gathering and analysis phase will be followed by a series of recommendations, to be released on November 17, 2016.

NOTE ABOUT THE REPORT AND THE PRESIDENTIAL ELECTION

ADL is a nonprofit organization and does not take sides for or against any candidate for elective office, so it is crucial to be perfectly clear about what this report says and what it does not say.

This report identifies some self-styled followers of presidential candidate Donald Trump as the source of a viciously anti-Semitic Twitter attack against reporters. Accordingly, we wish to make it clear that based on the statistical work we have performed, we cannot and do not attribute causation to Mr. Trump, and thus we cannot and do not assign blame to Mr. Trump for these ugly tweets. While candidates can and do affect the environment in which social media operates as well as the tenor of its messages, the individuals who tweet hateful words are solely responsible for their messages.

BACKGROUND: ADL TASK FORCE ON HARASSMENT AND JOURNALISM

In June 2016, in the wake of a series of disturbing incidents in which journalists covering the 2016 presidential campaign were targeted with anti-Semitic harassment and even death threats on social media, the Anti-Defamation League (ADL) announced the creation of a Task Force on Harassment and Journalism.

Building on ADL’s decades of experience in monitoring and exposing hate and hate groups, as well as its critical work with the tech industry in efforts to address online harassment, the Task Force sought insights from a group of experts from the world of journalism, law enforcement, academia, Silicon Valley and nongovernmental organizations. Their advice and counsel will help ADL to do the following:

• Assess the scope and source of anti-Semitic, racist and other harassment of journalists, commentators and others on social media;

• Determine whether and how this harassment is having an impact on the electorate or if it has a chilling effect on free speech;

• Propose solutions and/or countermeasures that can prevent journalists from becoming targets for hate speech and harassment on social media in the future.

With the release of this landmark report, ADL has unveiled the extent to which the 2016 presidential election cycle has exposed journalists to anti-Semitic abuse on Twitter. Our first-of-its-kind investigation included wide-ranging surveys of journalists as well as a quantitative analysis of anti-Semitic Twitter messages and memes directed at reporters.

This initial report, produced by ADL’s Center on Extremism, which has worked closely with social media and internet providers for more than two decades in responding to anti-Semitism and online hatred, will be followed by a final report, which will incorporate a broad range of recommended responses to bigotry on social media. The final report will be released at ADL’s Never is Now Summit on anti-Semitism on November 17, 2016.

*Participation in the Task Force does not imply agreement with, or assent to, the findings of this report.

INTRODUCTION

Over the course of the 2016 Presidential campaign, an execrable trend has emerged: reporters who voiced even slightly negative opinions about presidential candidate Donald Trump have been targeted relentlessly on social media by the candidate’s self-styled supporters; reporters who are Jewish (or are perceived to be Jewish) have borne the brunt of these attacks.

There is evidence that Mr. Trump himself may have contributed to an environment in which reporters were targeted. Indeed, he repeatedly denounced reporters as “absolute scum,” and said of “most journalists” in December 2015, “I would never kill them, but I do hate them. And some of them are such lying, disgusting people. It’s true.” Accordingly, while we cannot (and do not) say that the candidate caused the targeting of reporters, we can say that he may have created an atmosphere in which such targeting arose.

The social media attacks on journalists were brutal.

When journalist Julia Ioffe wrote a profile of Melania Trump for the May 2016 issue of GQ magazine, a firestorm of virulently anti-Semitic (and misogynistic) responses on social media followed. One tweet called Ioffe a “filthy Russian kike,” while others sent her photos of concentration camps with captions like “Back to the Ovens!”

On May 19, New York Times editor Jonathan Weisman tweeted about casino magnate Sheldon Adelson’s support for Trump, and the anti-Semitic response to Ioffe’s article. The reaction was immediate, with Twitter user CyberTrump leading the charge against Weisman: “Do you wish to remain hidden, to be thought of one of the goyim by the masses?” As other racists and anti-Semites piled on, Weisman received images of ovens, of himself wearing Nazi “Juden” stars, and of Auschwitz’s infamous entry gates, the path painted over with the Trump logo, and the iron letters refashioned to read “Machen Amerika Great.”

After criticizing Mr. Trump, conservative writer Ben Shapiro became the target of a wave of anti-Semitic tweets calling him a “Christ-Killer” and a “kike.” Jake Tapper, John Podhoretz and Noah Rothman have all received similar messages after voicing opinions perceived to be critical of Mr. Trump. In the midst of the attacks, Rothman tweeted: “It never ends. Blocking doesn’t help either. They have lists, on which I seem to find myself.”

While much of the online harassment of journalists is at the hands of anonymous trolls, there are known individuals and websites in the white supremacist world that have played a role in encouraging these attacks (see “White Supremacists Encourage the Online Harassment of Jewish Journalists” section).

METHODOLOGY

This report covers the time period of August 2015 through July 2016.

To capture the vast sweep of anti-Semitic Tweets directed at journalists, ADL utilized the latest in “big data” techniques. There were four phases to the report (a schematic sketch of the automated filtering steps appears after the notes below).

Phase one: ADL interviewed journalists impacted by the anti-Semitic harassment. They provided critical background information and described their experiences as targets of harassment on Twitter. They also described the effect the attacks had on their work and personal sense of safety.

Phase two: ADL conducted a search of tweets using a broad set of keywords (and keyword combinations) designed by ADL to capture anti-Semitic language. These keywords did not include any terms associated directly with the 2016 presidential campaign. This yielded 2.6 million results.

Phase three: We focused our search on tweets received by a list of 50,000 journalists and compared those with the 2.6 million results.

Phase four: We manually reviewed each of these tweets and narrowed the results to 19,253 overtly anti-Semitic tweets directed at 800 journalists. [Sidebar: 19,253 anti-Semitic tweets at US journalists, drawn from a pool of 50,000 journalist Twitter handles; an estimated 45,000,000 impressions generated; of the tweets to US journalists, 60% were replies, 29% retweets and 11% regular tweets.]

Note 1: One can never include all of the words that might be used in an anti-Semitic attack, and you can’t predict the ways in which anti-Semites will create “codes” to avoid censure and potential exclusion by social media platforms. (In October 2016, for example, after this analysis was complete, white supremacists attempted to evade tech-based approaches to isolating online harassment by assigning tech-oriented code words to their favorite slurs, referring to “kikes” as “Skypes,” among many others.)

Note 2: It is impossible to capture all of the anti-Semitic tweets or identify all of the anti-Semitic Twitter users, and because 21 percent of the accounts responsible for tweets containing anti-Semitic language have been deleted (either by Twitter or by the users), there is reason to conclude that the numbers in this report – especially the number of anti-Semitic Tweets received by individual journalists – are conservative.
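To make the filtering pipeline concrete, the sketch below shows the general shape of the automated steps (phases two and three): match tweets against a keyword pattern, then keep only those that mention a tracked journalist handle. This is a minimal illustration under stated assumptions, not ADL’s actual tooling – the keyword list, handle list, and sample tweets are hypothetical placeholders, and the real study ran over millions of tweets and roughly 50,000 handles before the manual review of phase four.

import re

# Hypothetical inputs: ADL's actual keyword set and ~50,000-handle
# journalist list are not published in this report.
KEYWORDS = ["slur1", "slur2"]
JOURNALIST_HANDLES = {"@reporter_a", "@reporter_b"}

# One case-insensitive, whole-word pattern covering every keyword.
keyword_pattern = re.compile(
    r"\b(?:" + "|".join(map(re.escape, KEYWORDS)) + r")\b",
    re.IGNORECASE,
)
mention_pattern = re.compile(r"@\w+")

def matches_keywords(text):
    # Phase two: does the tweet contain any flagged keyword?
    return bool(keyword_pattern.search(text))

def mentions_journalist(text):
    # Phase three: does the tweet mention a tracked journalist handle?
    mentions = {m.lower() for m in mention_pattern.findall(text)}
    return bool(mentions & JOURNALIST_HANDLES)

def candidates(tweets):
    # Tweets passing both automated filters; overt anti-Semitism is
    # confirmed only by human review downstream (phase four).
    return [t for t in tweets if matches_keywords(t) and mentions_journalist(t)]

sample = ["@reporter_a you slur1", "@reporter_a great piece today"]
print(candidates(sample))  # -> ['@reporter_a you slur1']

As Note 1 observes, whole-word matching of this kind misses deliberately coded slurs (the “Skypes” substitution), which is one reason the manual review phase matters.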

WHY TWITTER?

This report is focused on Twitter because it is the primary social media platform used to perpetrate these attacks on journalists, according to the journalists themselves.

While the data are not designed to show why the attackers chose Twitter, the harassers clearly identified Twitter as a target-rich environment: journalists routinely use and depend on Twitter for sharing information, soliciting sources and disseminating their work.

We cannot conclude that Mr. Trump’s extensive use of Twitter “encouraged” these attacks. Mr. Trump’s use of Twitter as a key communications tool is notable, but the platform is used extensively by all candidates.

We are also not attributing the abuse to the Twitter platform: as with all of the major social media companies, Twitter does not proactively monitor and regulate speech, but like other platforms, claims to respond when hate speech is reported.

[Figure: word cloud based on the 2.6 million tweets.]

DETAILED FINDINGS

ADL conducted a search of tweets using a broad set of keywords (and keyword combinations) designed by ADL to capture anti-Semitic language. These keywords did not include any terms associated directly with the 2016 presidential campaign. This yielded 2.6 million results.

These 2.6 million tweets, which were posted by 1.7 million Twitter users, appeared an estimated 10 billion times – which means that this language was potentially seen 10 billion times. That’s roughly the equivalent social media exposure advertisers could expect from a $20 million Super Bowl ad – a juggernaut of bigotry we believe reinforces and normalizes anti-Semitic language and tropes on a massive scale.

Our next step, a manual review of tweets containing anti-Semitic language, yielded 19,253 overtly anti-Semitic tweets mentioning 800 journalists. The 19,253 Tweets were seen approximately 45 million times, and 60 percent of these tweets were replies with anti-Semitic content sent directly to journalists or other users.

[Figure: word cloud based on the 19,253 anti-Semitic Tweets directed at 800 journalists.]
[Figure: the most common Twitter hashtags appearing in the anti-Semitic tweets.]

Sixty-eight percent of the 19,253 Tweets were sent by 1,600 Twitter users, confirming that these were persistent attacks on journalists by a relatively small cohort of Twitter users.

Many of the anti-Semitic attackers publicized their role as self-appointed surrogates for Trump and their allegiance to the white nationalist cause. These five words appeared most frequently in the 1,600 Twitter attackers’ account “bios”: Trump, conservative, white, nationalist and American. This demonstrates that those with a propensity to send anti-Semitic tweets are more likely to support Donald Trump and to self-identify as white nationalists and/or conservatives. This does not imply that Mr. Trump supported these tweets, or that conservatives are more prone to anti-Semitism. It does show that the users directing anti-Semitism toward journalists self-identified as Trump supporters and nationalists.

A very small number of journalists (10), all of whom are Jewish, received 83 percent of the 19,253 anti-Semitic Tweets (15,952 of 19,253). Notably, Ben Shapiro, the former Breitbart reporter at the forefront of the so-called #NeverTrump movement, was targeted by more than 7,400 anti-Semitic Tweets.

In all, 6,131 unique users posted anti-Semitic tweets to US journalists; 1,600 of them generated 13,500 of the 19,253 total tweets. Twenty-two percent of these 1,600 users’ accounts were suspended, removing roughly 20 percent of the 19,000-plus total tweets.

There was a significant increase in the volume of anti-Semitic tweets in the second half of the reporting period. Seventy-six percent of Tweets at journalists were posted between February and July 2016. This corresponds with intensifying coverage of the presidential campaign, the candidates, and their positions on a range of issues.

[Figure: word cloud based on the Twitter bios of unique users/authors of anti-Semitic tweets directed at journalists.]
[Figure: the top ten journalists targeted with anti-Semitic tweets.]

SPIKE ANALYSIS

As stated, there is no known causal relationship between Mr. Trump or his campaign and the wave of anti-Semitic attacks against journalists. However, these self-appointed Trump surrogates used events in the campaign, especially actions by Mr. Trump, as a justification for attacking journalists.

Examples (a schematic sketch of this kind of day-level spike counting follows this list):

• One of the most significant spikes in anti-Semitic Tweets occurred on/around March 13, 2016, when Mr. Trump blamed Bernie Sanders for violence at a Trump rally;

• There was a similar spike in anti-Semitic Twitter activity on February 29, 2016, during peak coverage of Trump's refusal to "disavow" the Ku Klux Klan.

• Another spike occurred on May 17, 2016, when Melania Trump asserted that Julia Ioffe had "provoked" the anti-Semitic attacks against her.

• A similar spike occurred May 25, 2016, when Trump verbally attacked a federal judge whose parents emigrated from Mexico.

But while anti-Semitic tweets demonstrably spiked following election-related news events, the language used in anti-Semitic tweets was not solely election-related. Many tweets referenced classic anti-Semitic tropes (Jews control the media, Jews control global finance, Jews perpetrated 9/11, et cetera).

Racial slurs and anti-Israel statements were the top two manifestations of anti-Semitism. This suggests that while the initial provocation for anti-Semitic tweets may have been related to the election, the Twitter attackers may have used news events - as well as the public airing of these anti-Semitic tweets - as an excuse to unleash more general anti-Semitic memes and attacks. When Jonathan Weisman tweeted about the racist reaction to his comments about Trump, he was inundated by a wave of anti-Semitic Twitter responses.

In February and March 2016, as the so-called #NeverTrump movement took hold, self-styled Trump supporters from the alt-right attacked. (Alt-right is short for "alternative right," a range of people on the extreme right who reject mainstream conservatism in favor of forms that embrace implicit or explicit racism or white supremacy.) This is when the Twitter attacks on Ben Shapiro, an originator of the #NeverTrump movement, began in earnest.

“It’s amazing what’s been unleashed,” Shapiro told ADL. “I honestly didn’t realize they were out there. It’s every day, every single day.” Despite Shapiro’s efforts to shield his family from the abuse, his wife and baby were targeted as well. “When my child was born there were lots of anti-Semitic responses talking about cockroaches.”

Bethany Mandel, a freelance reporter who wrote critically about Trump, was also viciously harassed on Twitter. One user tweeted about her for 19 hours straight, and she received messages containing incendiary language about her family, and images with her face superimposed on photos of Nazi concentration camps. Mandel, like the other Jewish journalists interviewed by ADL, has been targeted by anti-Semitic language before, but these attacks stood out, she said, for their “volume and the imagery. It also seemed coordinated – they would come in waves and 50 percent of the time I couldn’t identify the source.”

IMPACT

A landmark 2014 Pew Research Center study shows that only five percent of people who are harassed online report the problem to law enforcement. Many more – a combined 31 percent – withdraw, either by changing their username, deleting their account, bowing out of an online forum, or simply not attending certain offline events. When people stop talking because they're afraid, that's evidence of a chilling effect.

But for a lot of people, including journalists, quitting social media simply isn't an option – and the Pew data reflects that. Forty-seven percent of those harassed online stood their ground and confronted their tormenter online. Forty-four percent blocked the person responsible, and 22 percent reported the person to the website or online service hosting the exchange.

Half of the journalists we interviewed decided not to report the harassing tweets, some because they believed people should have a right to say whatever they want, and others because they weren’t confident Twitter would do anything to address the issue. Across the board, the criticisms of Twitter were consistent: The company doesn’t do enough to enforce its terms of service.

Jonathan Weisman told us, “I think suspending or deleting [attackers’] accounts is pointless, because they just come back on under a different name. Twitter has to decide if they are going to stand by their terms of service or not. If they decide tomorrow, ‘Look, we don’t have the capacity to monitor all of this, and we want it to be a free exchange of ideas,’ – then fine, we would know what it was. But they want to have it both ways – the halo of having terms of service, but not enforcing them. Or enforcing them only sporadically.”

Some of the journalists, including Weisman, stepped away from Twitter at least for a time, while others stuck with the platform, hoping for a respite even as they braced for more abuse.

While this particular report did not test whether there was a chilling effect on journalists, it does show that targeted anti-Semitic harassment on Twitter undoubtedly raised the cost of entry into (and of staying in) the marketplace of ideas for journalists, particularly Jewish journalists.

White Supremacists Encourage Online Harassment of Jewish Journalists

While much of the online harassment of journalists is at the hands of anonymous trolls, there are known individuals and websites in the white supremacist world that have played a role in encouraging these attacks. Those discussed below are a sampling of the people and sites engaged in this activity, and they have been on ADL's radar for some time.

Two of the neo-Nazis responsible for some of the attacks on Jewish journalists are Andrew Anglin, founder of the extremely popular white supremacist website The Daily Stormer, and Lee Rogers of Infostormer (formerly The Daily Slave). While both Anglin and Rogers are banned from Twitter, they have encouraged their followers to Tweet anti-Semitic language and memes at Jewish journalists, including Julia Ioffe and Jonathan Weisman.

Ioffe wrote a profile of Donald Trump’s wife, Melania, for the May 2016 issue of GQ. Anglin and Rogers (self-identified Trump supporters) felt the piece was unflattering. Anglin wrote to his supporters on April 28, “Please go ahead and send her a tweet and let her know what you think of her dirty kike trickery. Make sure to identify her as a Jew working against White interests, or send her the picture with the Jude star from the top of the article.” Anglin provided Ioffe’s Twitter address and the anti-Semitic picture he mentioned. Rogers followed a similar path a few days later, telling his supporters, “I would encourage a continued trolling effort against this evil Jewish bitch.” He then provided Ioffe’s Twitter address.

The situation with Jonathan Weisman was somewhat different. After Weisman tweeted out an article by Robert Kagan on the emergence of fascism in the United States and Donald Trump, he was bombarded by anti-Semitic Tweets and memes. Anglin attacked Weisman on May 25, 2016, for publicizing the hateful tweets directed at him. But Anglin went much further. Writing about Weisman and Ioffe, he declared: "You've all provoked us. You've been doing it for decades—and centuries even—and we've finally had enough. Challenge has been accepted."

A couple of days later, Anglin, echoed by "Marcus Cicero" on Infostormer, urged supporters to Tweet anti-Semitic questions at Weisman, including, "Why do Jews demand that White Christians go fight and die in wars for them?"

White supremacist Andrew Auernheimer, an associate of Anglin and an Internet hacker also known as “Weev,” also tweeted at Weisman, “Get used to it you fucking kike. You people will be made to pay for the violence and fraud you’ve committed against us.”

Weisman was also one of the first journalists to publicize another form of harassment – the use of the echo symbol (multiple parentheses) around a name to identify that person as Jewish. In his May 26, 2016 article, Weisman noted that some of the anti-Semitic tweets included his name in parentheses. He asked one of the tweeters why, and that person responded, "It's a dog whistle, fool. Belling the cat for my fellow goyim."

A few days later, two journalists at Mic traced the origins of this anti-Semitic typographical symbol to a 2014 podcast “The Daily Shoah” on The Right Stuff (TRS), a racist and anti-Semitic website. The podcast used an echo sound effect when someone on the podcast mentioned a Jewish name. According to TRS, “all Jewish surnames echo throughout history. The echoes repeat the sad tale as they communicate the emotional lessons of our great white sins, imploring us to Never Forget the 6 GoRillion.” Other anti-Semites translated the audio echo into a typographical symbol used primarily on social media sites, including Twitter.
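Part of the echo's appeal as a "dog whistle" was that conventional search tools often strip punctuation, making the symbol hard to search for even though, as a fixed typographical pattern, it is trivial to match with a regular expression. The short sketch below is purely illustrative (our own, not ADL's or Twitter's tooling) and assumes nothing beyond the pattern described above.

import re

# The "echo": a name wrapped in three or more parentheses, e.g. (((Weisman))).
ECHO_PATTERN = re.compile(r"\({3,}([^()]+)\){3,}")

def find_echoed_names(text: str) -> list[str]:
    """Return any names wrapped in the triple-parentheses echo symbol."""
    return [name.strip() for name in ECHO_PATTERN.findall(text)]

print(find_echoed_names("look what (((Weisman))) wrote"))  # ['Weisman']

Detection says nothing about intent, however: after the defiant counter-use described below, in which thousands of users, including Jewish journalists, echoed their own names, the symbol's presence alone no longer marked a target.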

TRS was also behind the “Coincidence Detector” app, a Google Chrome plugin (removed on June 2, 2016 by Google) whose purpose was, according to Mic, “compiling and exposing the identities of Jews and others who are perceived as ‘anti-white.’” According to the creators of the app, it “can help you detect total coincidences about who has been involved in certain political movements and political empires.” It was, of course, referring to Jews. Users of the app would then put the echo around a Jewish name.

The publicity generated by the echo symbol resulted in a more widespread, defiant counter-use of the echo, as thousands of Twitter users, including Jewish journalists, changed their Twitter screen names to echo themselves.

TWEETS

Of the 19,253 Tweets sent to 800 journalists, 79 percent were text only, while 12 percent contained links and 8 percent contained images.

A (small) sampling of anti-Semitic tweets sent to journalists:

Julia Ioffe

MEMES

A few of the most frequently employed memes in the anti-Semitic online (Twitter) harassment of journalists:

Dana Schwartz (Observer): This meme is repeated with various journalists pictured inside the gas chamber.

Bethany Mandel

BIOGRAPHIES OF PANELISTS

Racism, Xenophobia, and the Algorithms: How do the Results from Google and other Internet Search Algorithms Impact Society and the Law?

Raymond Brown

Raymond Brown concentrates his practice in white collar criminal defense, international human rights issues, internal investigations and complex commercial litigation. He has defended clients in state and federal courts and before administrative tribunals. He is a leading figure in the developing area of law related to the regulation and enforcement of business requirements for human rights compliance.

Mr. Brown has extensive experience as a trial lawyer and has handled a wide variety of criminal and civil matters representing individuals, corporations and government entities. He has appeared in a number of high-profile trials, including the nine-month trial involving former U.S. Secretary of Labor Raymond J. Donovan and the successful eight-year defense of senior executives of a major multinational corporation charged with environmental violations. Mr. Brown has appeared in courts in twelve states and has conducted investigations in the U.S. and in Kenya, El Salvador, the Cayman Islands, Switzerland, the Bahamas, Colombia and Sierra Leone. His international experience includes qualifying as Counsel before the International Criminal Court in The Hague and serving as Co-Lead Defense Counsel at the Special Court for Sierra Leone.

Mr. Brown counsels foreign and domestic multinationals on a broad range of corporate risk management, governance, and transactional issues, including those at the intersection of human rights and business concerns. He advises on compliance obligations, supply chain management issues, and the potential reputational risks and “bottom line” impacts facing corporations whose operational practices become entangled in human rights violations, both domestically and on the international stage.

Brett Harris

Brett Harris, a business, nonprofit and technology attorney, is a shareholder in the Woodbridge, New Jersey office of Wilentz, Goldman & Spitzer. Her broad-based general corporate practice encompasses transactional matters and client counseling, including mergers and acquisitions, document drafting and negotiation, regulatory compliance matters and policy development. Ms. Harris has a particular focus on representing non-profit organizations, including entity formation, establishing and maintaining tax-exempt status and complying with fundraising regulations. She counsels Boards on governance, mission statement development and strategic planning, and advises clients on structuring and operating foundations and charitable trusts, including grantmaking due diligence and administration of grant agreements. She has also developed a practice with an emphasis on technology issues including cyber-security, and handles intellectual property matters including licensing, trademarks and copyrights.

Ms. Harris is General Counsel to the New Jersey Women Lawyers Association, and active with the New Jersey State Bar Association (past Chair of Internet and Computer Law Committee, currently a Trustee of the Women in the Profession Section and Secretary of the Board of Directors of the Business Law Section). She also serves on the Steering Committee of the Technology for Business Roundtable of the Commerce and Industry Association of New Jersey.

Jeffrey Neu

Jeffrey Neu is the chair of the New Jersey State Bar Association's Internet and Computer Law Committee. Mr. Neu founded J. C. Neu and Associates in 2007, which merged to form Kuzas Neu in 2011. He focuses his representation on purchasers and producers of technology, including software, hardware, and emerging technology. Prior to founding this firm, he worked as an associate at W. J. Cahill and Associates, focusing his representation on medical technology companies.

Before becoming a lawyer, Mr. Neu was involved in a wide range of leading companies and organizations, and lived and traveled around the world. He co-founded a development firm in Sydney, Australia focusing on mobile technology and wireless content delivery; the firm was part of a group that was among the first to produce an application enabling rich-content, bi-directional mobile media publishing.

Mr. Neu also worked for the United Nations High Commissioner for Refugees, where he urged Asian-Pacific governments to comply with UNHCR guidelines and protocols and performed complex analyses of refugee law. In 2004, Mr. Neu co-founded CharityHelp International, where he currently serves as Vice-President and is on the Board of Directors. CHI is a leading non-profit organization in the use and deployment of technology in developing nations, fostering the growth and establishment of women's rights, education, and children's welfare.

David Opderbeck

David Opderbeck is Professor of Law and Co-Director of the Gibbons Institute of Law, Science & Technology. His work focuses on intellectual property, cybersecurity and technology law and policy. David's publications concerning cybersecurity law and policy consider the law and economics of data breach litigation and executive power in cyber emergencies. He recently administered a multi-year project at the Law School, funded by the Bergen County Prosecutor's Office, that convened special programs and government working groups on cybersecurity issues. In the intellectual property field, David's work examines issues such as settlements in Hatch-Waxman litigation, intellectual property restrictions on essential medicines in developing countries, open source biotechnology, patent damages reform, and the interaction of law and social norms concerning music file sharing. He blogs at thecybersecuritylawyer.com. David is also Counsel in the Cybersecurity Task Force at Gibbons, P.C.

Jonathan Vick

Jonathan Vick is the Anti-Defamation League's Associate Director, Investigative Technology and Cyberhate Response, based in New York. He joined ADL over 14 years ago after working in online marketing for Hearst Publishing, marketing for Philips Electronics and trade development intelligence for the British Government.

A specialist on the technology, business and politics of the internet and connected technologies, he has written extensively, created training materials and consumer safety tools, and works closely with industry entities, law enforcement agencies and government.

He is a recipient of ADL's Senn/Greenberg Award for Professional Excellence, is ADL's representative to Twitter's Trust and Safety Council, a board member of the International Network Against Cyber Hate, and a graduate of Syracuse University's Newhouse School of Communications.

Moderator Bio: Maria P. Vallejo

Maria P. Vallejo is a civil litigation attorney, focusing her practice in commercial, government-entity defense, personal injury and appeals. She also has experience in employment, contract, estate, probate and municipal law. Since 2009, Ms. Vallejo has effectively represented the State of New Jersey and County of Hudson in thousands of bail forfeiture matters, including multiple successful appeals.

In addition to her role at the firm, Maria is a member of the New Jersey State Bar Association's Diversity Committee, and is an active member of the Hudson County Bar Association's Board, of which she is presently Treasurer. She is on track to become the first Filipina-American president of the Hudson County Bar Association in 2023. She also serves on the District VI Ethics Committee by appointment of the New Jersey Supreme Court. In 2015, the Governor appointed Ms. Vallejo to the New Jersey State Board of Accountancy, which regulates the professional conduct of persons licensed as having special competence in accountancy.