
THE DYNAMICS OF HATE SPEECH AND COUNTER SPEECH IN THE SOCIAL MEDIA
SUMMARY OF SCIENTIFIC RESEARCH

Dr. Katarzyna Bojarska1

While the media dissemination channels evolve over time, the mechanisms of group-based hostility remain the same. Social media such as Facebook or Twitter have introduced new modes of social discursive participation, and their users now contribute to the dissemination of hate speech, fake news and hostility against refugees and other marginalized groups on an unprecedented scale. The dissemination of hate in digital media therefore constitutes a social emergency with real-life individual, political and social consequences.

INTRODUCTION

A recent internet survey revealed that a majority (67%) of internet users reports having encountered hate speech or hateful comments online (forsa, 2017). Although the new German law, the Network Enforcement Act (NetzDG), which came into force in 2017, obliges social media sites to remove hate speech, fake news and illegal material within a short time after the illegal material has been reported, it does not cover all hate speech. Furthermore, because different countries have different regulations and hate speech extends across national borders, strategies are still needed to increase the visibility of internet users' objection to online hate.

The primary purpose of this report is to summarize the current state of scientific knowledge that might help inform strategies to counter instances of online hate speech more effectively and to make the democratic message more visible. The focus of the present report is therefore to compare the dynamics of hate speech and of counter-speech, which refers to any response to hateful content aiming to defy it. As Bartlett and Krasodomski-Jones define it, counter-speech is "a common, crowd-sourced response to extremism or hateful content" (2015, p. 5).

Prejudice and hate speech have been observed throughout history, and their dynamics and consequences were scientifically researched, described and explained for decades before the digital era. Hostile portrayals and stereotyping of groups and minorities as "other", "different" or dangerous can lead to their dehumanization. This effect can escalate rapidly when hostile rhetoric reaches a large audience by means of broadcast, print or digital media and lead to real-life violent hate crimes, including genocide.

In recent years Europe has witnessed a significant increase of xenophobic, nationalist, Islamophobic, racist and anti-semitic attitudes. Their effects are not restricted to hostile rhetoric; they turn into actual crimes against groups and individuals. As Heiko Maas, the German Minister of Justice, puts it: "Hate speech is often not restricted to the mere act of hateful speech. It often moves from words to deeds. The fact that 'mental incitement' too often turns into violence can be seen in the surge in attacks on refugee shelters: in 2014, the number of acts tripled in comparison with the previous year" (Maas, 2015, p. 6).

1 Centre for Internet and Human Rights, Europa-Universität Viadrina

For a better understanding of the challenges of counter-speech online, hate speech will first be defined, followed by a discussion of its legal status in Germany. The social and individual harms of hate speech will then be discussed. We will examine available research findings on how and why hate speech spreads in social media and attempt to explain the different dynamics of hate speech and counter-speech transmission. Finally, we will point to available resources for effective counter-speech strategies.

HATE SPEECH DEFINITION AND ITS LEGAL STATUS

Hate speech can be defined as an expression of hostility toward individuals or social groups based on their perceived group membership, which can refer to their race, ethnicity, religion, gender, sexual orientation or disability. The Council of Europe defines hate speech as "all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin" (Europarat Ministerkomitee, 1997)2.

While the term hate speech (Hassrede) itself is not recognized by German law, the relevant legal term of incitement to hatred (Volksverhetzung) has long been recognized as a punishable criminal offence, regardless of whether it is committed online or offline. According to § 130 Section 1 of the German Criminal Code:

"Whosoever, in a manner capable of disturbing the public peace:
1. incites hatred against a national, racial, religious group or a group defined by their ethnic origins, against segments of the population or individuals because of their belonging to one of the aforementioned groups or segments of the population, or calls for violent or arbitrary measures against them; or
2. assaults the human dignity of others by insulting, maliciously maligning an aforementioned group, segments of the population or individuals because of their belonging to one of the aforementioned groups or segments of the population, or defaming segments of the population,
shall be liable to imprisonment from three months to five years."3

2 Der Begriff Hate Speech (Hassrede) „umfasst jegliche Ausdrucksformen, welche Rassenhass, Fremdenfeindlichkeit, Antisemitismus oder andere Formen von Hass, die auf Intoleranz gründen, propagieren, dazu anstiften, sie fördern oder rechtfertigen, einschließlich der Intoleranz, die sich in Form eines aggressiven Nationalismus und Ethnozentrismus, einer Diskriminierung und Feindseligkeit gegenüber Minderheiten, Einwanderern und der Einwanderung entstammenden Personen ausdrücken“.

3 § 130 Absatz 1 des Strafgesetzbuchs: „Wer in einer Weise, die geeignet ist, den öffentlichen Frieden zu stören, 1. gegen eine nationale, rassische, religiöse oder durch ihre ethnische Herkunft bestimmte Gruppe, gegen Teile der Bevölkerung oder gegen einen Einzelnen wegen seiner Zugehörigkeit zu einer vorbezeichneten Gruppe oder zu einem Teil der Bevölkerung zum Hass aufstachelt, zu Gewalt- oder Willkürmaßnahmen auffordert oder 2. die Menschenwürde anderer dadurch angreift, dass er eine vorbezeichnete Gruppe, Teile der Bevölkerung oder einen Einzelnen wegen seiner Zugehörigkeit zu einer vorbezeichneten Gruppe oder zu einem Teil der Bevölkerung beschimpft, böswillig verächtlich macht oder verleumdet, wird mit Freiheitsstrafe von drei Monaten bis zu fünf Jahren bestraft.“

On 1 October 2017 a new law, the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG), came into force in Germany, which obliges social media sites to remove hate speech, fake news and illegal material within 24 hours after the illegal material has been reported. The new law has, however, been criticized for demanding that social media companies, rather than courts, decide whether reported content is illegal, without any judicial oversight. This entails the risk of unaccountable censorship and of undermining free speech, and can set a bad example for other countries seeking to silence political criticism through similar laws.

The controversy over the German anti-hate speech law fits into a wider political and academic debate on freedom of speech vs. hate speech that has been running for at least a decade. In fact, the bulk of academic peer-reviewed publications4 are of a legal nature and focus on the legal status of hate speech and on whether, to what extent, with which tools and technologies, and by whom hate speech should be regulated. This debate, however, falls beyond the scope of this report, which instead focuses on counter-speech strategies that might be useful for individual social media users rather than on legal measures to be implemented by governments.

HARMS OF HATE SPEECH

Hate speech can be considered harmful on several levels. It has the potential to disturb social peace, in that exposure to hate speech shapes attitudes and influences actual behaviors (Müller & Schwarz, 2018), including serious hate crimes such as genocide (cf. Fyfe, 2017; Maravilla, 2008). Online hate may constitute a fertile ground for even more hate, in that it provides a model and a permission (Brodnig, 2016; Clay, 2017), a "social proof" of "appropriate" attitudes and behaviors (cf. Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2014); it desensitizes the public to verbal violence and increases prejudice (Soral, Bilewicz, & Winiewski, 2018), rewarding its followers with social acceptance while punishing and silencing voices of objection (Brodnig, 2016; Coustick-Deal, 2017). Above all, hate speech poses a threat to the physical safety and psychological well-being of targeted group members (Baldauf, Banaszczuk, Koreng, Schramm, & Stefanowitsch, 2015a; Coustick-Deal, 2017; Gelber & McNamara, 2016). Several of the aforementioned studies warrant more in-depth discussion.

The twentieth century witnessed the role of mass media (e.g. broadcasting and print media) in spreading hate, resulting in an escalation of dehumanization and leading to hate crimes, the most extreme of which were genocides, such as the Holocaust and the genocide in Rwanda (Fyfe, 2017; Maravilla, 2008). A recent study by Müller and Schwarz (2018) strongly suggests that the same mechanism holds for the role of the twenty-first century's digital media.

4 Publications indexed by the EBSCO database as of end of May 2018.

The study clearly demonstrates the link between exposure to hate speech in social media and real-life violence. The authors applied a sound methodology to attempt some causal inferences on how hateful anti-refugee social media activity on the Facebook page of the Alternative for Germany (Alternative für Deutschland, AfD) party translates into actual violent acts against refugees. The authors conclude: "Using these measures, we find that anti-refugee hate crimes increase disproportionally in areas with higher Facebook usage during periods of high anti-refugee sentiment online. This effect is especially pronounced for violent incidents against refugees, such as arson and assault. Taken at face value, this suggests a role for social media in the transmission of Germany-wide anti-refugee sentiment" (p. 3). In order to rule out potential uncontrolled factors, the researchers also provided quasi-experimental support for their interpretation of the findings. They found that in weeks of sizable local internet disruptions, which limited the internet access of local users, the effect of high anti-refugee sentiment on hate crimes was significantly reduced compared to municipalities unaffected by internet outages. Also, at the Germany-wide level, the authors observed that "the effect of refugee posts on hate crimes essentially vanishes in weeks of major Facebook outages" (p. 4). In light of the above, the role of social media in the incitement of violent hate crimes, and hence in affecting violent behavior, seems indisputable.
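To make the identification strategy more concrete, the following is a minimal sketch, under stated assumptions, of the kind of municipality-week interaction regression such a design implies. It is not Müller and Schwarz's actual specification; the data file, variable names and controls are hypothetical.

```python
# Toy sketch of an interaction regression on a hypothetical municipality-week
# panel: local Facebook penetration (fb_usage) is interacted with nationwide
# anti-refugee posting (antirefugee_posts), and an internet-outage indicator
# is allowed to dampen that effect, echoing the quasi-experimental check.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("municipality_week_panel.csv")  # hypothetical input data

model = smf.ols(
    "hate_crimes ~ fb_usage * antirefugee_posts * internet_outage"
    " + C(municipality) + C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})

print(model.summary())
```

In such a toy specification, a positive fb_usage:antirefugee_posts coefficient would correspond to the reported amplification effect, while a negative three-way interaction with internet_outage would mirror the attenuation observed in weeks of local disruptions.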

It is also widely acknowledged that hate speech poses a serious threat to the physical safety of members of the targeted groups. According to the Amadeu Antonio Stiftung's chronicle of anti-refugee incidents („Chronik flüchtlingsfeindlicher Vorfälle", o. J.), there were 1,249 reported attacks against asylum-seeking individuals or their lodgings in Germany in 2015, 3,769 in 2016 and 1,939 in 2017.

While much has been written about hate speech, its legal status, its types and its perpetrators, what is striking is the scarcity of published research on the psychological harms of online hate speech for targeted individuals. We therefore need to attempt, at least partially, to extrapolate from existing research on the more general psychological effects of being targeted by prejudice. The most profound effect of group-based prejudice on targeted individuals is probably an elevated drain on emotional resources in comparison to unaffected individuals, associated with the constant necessity of dealing with overt discrimination as well as with microaggressions5, both prevalent in everyday life and augmented in the digital world due to the online disinhibition effect, i.e. the absence of restraints in online communication in comparison to face-to-face communication.

Constantly increased vigilance and mental preparedness to deal with or respond to overt prejudice or microaggressions translate into a chronically elevated level of stress, so-called minority stress, which can lead to adverse health outcomes such as depression or anxiety (Meyer, 1995, 2003).

5 Sue defines microaggressions as "the brief and commonplace daily verbal, behavioral, and environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial, gender, sexual-orientation, and religious slights and insults to the target person or group. Perpetrators are usually unaware that they have engaged in an exchange that demeans the recipient of the communication" (Sue, 2010).

Being affected by hate speech as a member of a targeted group is associated with significant emotional strain (Coustick-Deal, 2017; Gelber & McNamara, 2016; Mullen & Smyth, 2004). Feelings of injustice, helplessness, anxiety and threat can be listed among the psychological effects of hate speech. Since being targeted by prejudice in and of itself constitutes a source of significant distress, the decision to directly confront instances of online hateful behavior might prove too much of an emotional effort to endure for members of the targeted populations.

Gelber and McNamara, who conducted an exceptional qualitative study, "Evidencing the harms of hate speech" (2016), list the following types of hate speech harms experienced by targeted individuals: unfairly ranking target persons as inferior, silencing the victims, distress, the risk of destruction to one's self-esteem, restrictions on freedom of movement and association, harms to dignity, maintenance of power imbalances within social hierarchies of race, making onlookers believe negative stereotypes that lead them to engage in harmful conduct, normalization of expressing negative stereotypes and discriminatory behavior, and encouraging the public to imitate the hateful behavior.

THE DYNAMICS OF HATE SPEECH AND COUNTER SPEECH IN DIGITAL MEDIA

SOCIAL PROOF

The recent rise in xenophobia, nationalism and anti-refugee sentiments in numerous European countries has been hard to overlook even for casual observers. The radicalization of attitudes coincided with media reports on the rising number of refugees and asylum-seeking individuals entering the European Union. The United Nations Regional Information Centre for Western Europe reports: "In some countries the refugee crisis sparked an outpouring of solidarity and many local volunteers together with central authorities were committed to making the newcomers arriving in their towns feel welcome. In other countries, however, the opposite happened and restrictive border policies combined with a toxic rhetoric have created an openly hostile environment for refugees and migrants" (United Nations Regional Information Centre for Western Europe, 2016).

The prevalent tone of media coverage of initially unfamiliar circumstances, or early interpretations of their meaning by the public, might dictate the prevalent response and model the resulting attitudes. This might explain the divergent paths that the formation of social attitudes toward the same issue can take in different countries or subpopulations. If what develops is an atmosphere of permission for hatred, it may lead to even more hatred.

This brings us to the classic research on social influence and conformity in modelling patterns of "appropriate" behavior, in other words, to normative and informational social influence (social proof). We speak of normative social influence when we conform to the behavior of others in order to gain acceptance and to be liked (Asch & Guetzkow, 1951; Asch, 1956; Aronson, Wilson, & Akert, 2010).

Social proof, or informational social influence, is in turn a psychological phenomenon which occurs in unfamiliar or ambiguous situations, in which we mimic the behavior of others because we do not know what the appropriate behavior should be and we assume that others behave in a certain way because they possess more knowledge than us (Sherif, 1935; Baron, Vandello, & Brunsman, 1996; Aronson et al., 2010). In the context of the role of online hate speech, normative and informational social influence seem to provide a plausible explanation for the rapid formation of attitudes and behavioral patterns in response to media coverage of initially unfamiliar topics.

EXPOSURE TO HATE SPEECH INCREASES MISTRUST, FAMILIARITY IMMUNIZES AGAINST THIS EFFECT

An interesting experiment conducted by Anderson, Brossard, Scheufele, Xenos and Ladwig (2014) demonstrates how offensive speech in digital media may contribute to shaping hostile attitudes, and suggests what could make us more immune to hate speech. The authors investigated the influence of "incivility", i.e. an offensive way of expressing opinions, on the formation of risk perceptions of nanotechnology, a topic unfamiliar to most internet users. They designed an experiment in which participants were asked to read a scientific blog entry on the benefits and risks of nanosilver, followed by artificially crafted user comments formulated in either an offensive or a polite manner. They were then asked to assess the risk associated with the new technology.

The researchers made several interesting observations. Firstly, they found that regardless of the offensiveness of what appeared to be the public reception, participants with preexisting familiarity with the topic perceived the risks of nanotechnology as lower than participants for whom the issue was initially unfamiliar. Furthermore, those who considered themselves less capable of an informed judgement of nanotechnology approached the subject with more caution and also perceived the risk as higher. The study also revealed that participants with preexisting positive attitudes toward nanotechnology, when exposed to offensive comments, still perceived the risks of nanotechnology as lower than participants whose initial attitudes were not positive. Such differences were not apparent among participants exposed to comments formulated in a polite way, suggesting the latter might have been more influenced by the article's actual content than by the emotional remarks of other readers. A similar observation was made in relation to participants' religiosity. More religious participants exposed to offensive comments perceived the risks of nanotechnology as higher than less religious readers did. This effect disappeared among participants exposed to polite discussion. The authors suggest this effect might be explained by religious value judgments of nanotechnology as disturbing the "natural order" (which could perhaps be activated more easily by exposure to a more emotional, rather than calm, tone of discussion).

To conclude: preexisting familiarity with the subject, preexisting positive attitudes and a low level of (activation of) religious identification each independently contributed to participants' immunity to the influence of offensive language on attitudes and encouraged them to form more favorable perceptions of the risks of nanotechnology. On the other hand, a lack of preexisting knowledge and a lack of initial positive attitudes (and, similarly, high religiosity) made them more susceptible to the influence of offensive comments, which facilitated the development of distrust toward the unfamiliar phenomenon.

Social influence might explain the various trajectories that the development of attitudes toward refugees or other minority groups can take.

Depending on whether our initial encounter with an unfamiliar subject is of an informational or a highly affective nature, depending on the sources of information we draw our knowledge from and consider reliable, depending on the kinds of behaviors we observe around us and treat as social proof to guide our own attitudes and behaviors, and depending on the strength of our motivation to learn about a specific subject or to challenge our beliefs, we may develop favorable or hostile attitudes, confidence or fear, and act accordingly.

FEAR FUELS HATE, FREEDOM FROM CONSEQUENCES FACILITATES HATEFUL EXPRESSION

The role of fear, threat and uncertainty in the acquisition and maintenance of conservative attitudes has been demonstrated in several studies. In 2003 Jost, Glaser, Kruglanski and Sulloway concluded in their meta-analytic review of political attitudes that "people embrace political conservatism (at least in part) because it serves to reduce fear, anxiety, and uncertainty; to avoid change, disruption, and ambiguity; and to explain, order, and justify inequality among groups and individuals" (Jost, Glaser, Kruglanski, & Sulloway, 2003). Nail and McGregor, accordingly, observed a significant shift toward the political right among both conservatives and liberals tested two months after the terrorist attacks of 9/11, in comparison to a year before (Nail & McGregor, 2009). The same researchers, in an extended team, experimentally instilled threat in participants and as a result obtained a conservative shift in their attitudes (Nail, McGregor, Drinkwater, Steele, & Thompson, 2009). In 2017 Napier, Huang, Vonasch and Bargh experimentally demonstrated the opposite effect: having reduced fear in conservatives, they observed a shift of their social attitudes in a more liberal direction (Napier, Huang, Vonasch, & Bargh, 2017).

It is probably the underlying fear and uncertainty, as opposed to sympathy, hope or despair, which may explain the different dynamics of hate speech and counter-speech in the digital media. Scientific research supports the casual observation that anger spreads online more effortlessly than joy or sadness (Fan, Zhao, Chen, & Xu, 2014) and that hate spreads more easily than positive emotions or counter-speech do (Bartlett & Krasodomski-Jones, 2015). As already mentioned, right-wing attitudes and the resulting hate speech seem to be at least partially fueled by fear of the unfamiliar and by the need for safety (Napier et al., 2017). When an act of online hate speech is motivated by underlying fear and mistrust, it constitutes a spontaneous act arising from the need to protect oneself and from an urge to warn and convince others to do the same. Posting hateful content can help to alleviate emotional strain and evoke a feeling of satisfaction by virtue of serving a good and righteous cause.

Furthermore, there is the online disinhibition effect, the reduced self-restraint which easily occurs when we post content while hidden safely behind the screens of our devices, free from the consequences of our hurtful actions which would otherwise have to be faced in real-life interactions (Brodnig, 2016; Terry & Cain, 2016). Terry and Cain attribute the effect to the anonymity of both the user and the target, asynchronous communication and invisibility:

"First, the anonymity associated with computer-mediated communication may permit people to possess an alternate online identity and essentially hide behind a non-identifying pseudonym or username. This form of dissociative anonymity allows people to separate from in-person identity and moral agency, thereby freeing them to express their hostility and criticism without any effect to the psyche. Similarly, online users may dissociate those at the other end of the communication by subconsciously viewing them merely as avatars or usernames instead of actual persons. Second, as online communication can be asynchronous, individuals do not have to manage immediate reactions to online conversations and can remove themselves from the repercussions of online discussions, even avoiding ownership for hostile and intimidating comments. Third, even in a completely non-anonymous environment (i.e., computerized medical record, e-mail correspondence, blogs), the nature of online communications is such that individuals are physically invisible to others, permitting them to disregard any type of eye contact or physical reaction of the other person(s). A significant portion of traditional face-to-face communications tends to be nonverbal (e.g., body language, tone of voice), and without these cues, online conversations lack an essential element of understanding" (2016, p. 2).

IMBALANCE IN COGNITIVE EFFORT IN EXPRESSING HATE AND COUNTERING IT

A direct online expression of hate speech, or an act of sharing a hateful post, is usually impulsive, careless, internally motivated, and does not involve significant cognitive or emotional effort. Indeed, it might take more effort to suppress a hateful or angry feeling than to release it. Unlike hate speech, an act of counter-speech is not spontaneous but responsive, not active but reactive. It requires a conscious decision and involves considerable cognitive and emotional effort, in that, rather than with carelessness, it is more often associated with awareness of the potential consequences of direct confrontation with the hater, such as attracting attention and being personally targeted by insults and even more hate: highly unpleasant consequences, in short. A decision to counter an act of hate speech thus usually requires a disproportionate amount of emotional effort and resources compared to the impulsive, self-rewarding and affective act of posting or sharing a hateful post (cf. Coustick-Deal, 2017). This might explain the restraint of many internet users who remain silent when exposed to hate speech.

According to free speech advocates as well as Facebook's official stance, counter-speech is supposed to be a more effective tool against hate speech than the removal of offensive content by website administrators (Bartlett & Krasodomski-Jones, 2015, p. 4). Coustick-Deal puts her concerns this way: such rhetoric "doesn't take into account power imbalances and privilege. (…) The way 'counter speech' is advocated is as though there is some kind of balance which works like this: (…) Racists speak = racists listen to their victims. (…) However, counter speech is actually only afforded to those who have voices to begin with. It's more like: Nazi speaks -> thousands of his supporters speak with him -> his opponents are attacked. There is no balance when someone replies to your speech by threatening to kill your family" (2017). She then points to the "unseen forces that stop a person from being able to speak at all. (…) Seeing people harassed stops members of that same group from speaking out. When we talk about surveillance, we also use the phrase 'chilling effect', and harassment operates in much the same manner. The knowledge that we are under constant surveillance stops us from expressing ourselves freely. This same censoring effect happens through harassment, when fear silences us".

FILTER BUBBLES ARE HARD TO BREAK

Indeed, Bartlett and Krasodomski-Jones found that counter-speech activity on Facebook has a strikingly lower potential to reach a wider audience than hate speech (2015). They analyzed 27,886 posts uploaded over a two-month period on 150 public hate speech (124) and counter-speech (26) pages, mostly from the UK, France and Italy, with 25,522 and 2,364 posts, respectively. They also collected 8.4 million associated interactions, i.e. likes, shares and comments. Their findings are described in their report "Counter-speech: Examining content that challenges extremism online". Among the four types of posts (links, photos, statuses, videos), photos were the type most interacted with. The most popular tone of posts on right-wing pages was celebratory, followed by an angry one. On counter-speech pages the most popular tone was funny/satirical. This might suggest that, besides anger, which is the most virally spreading emotion in digital media, humor might be the winner within the subset of positive emotions.

Each time a user interacts with a piece of content on a public Facebook page, it is more likely to appear in their friends' timelines (depending on the privacy settings applied). This creates an opportunity for other users, who are not group members or page followers, to interact with the shared content too. Exposure to social media content, including posts shared by Facebook friends, is, however, regulated by algorithms which predict and display in each user's newsfeed mostly the content that is most likely to be of interest to them, based on previous usage history (the same rule also applies to search engines on the internet). As a result, filter bubbles and echo chambers are created. Within them, we are mostly fed content, and witness behavior of others, that resembles our own beliefs, while being isolated from views that are different from our own. This creates the impression that most other people share our beliefs.

Bartlett and Krasodomski-Jones (2015) examined the spread of hateful and counter-speech content from hate speech and counter-speech pages to individual newsfeeds by calculating the proportion of posts that reached users who did not like the page on which the content was originally posted. They concluded: "populist right wing pages are significantly more effective at posting content which goes beyond their network of page fans. For counter-speech pages (and populist right wing pages) videos are the most effective type of content to post to reach a broader audience". Populist right-wing links, photos, videos and statuses received on average 50%, 52%, 68% and 21% of their likes, respectively, from people who did not subscribe to the original page, as compared to only 18%, 5%, 26% and 7% for content from the counter-speech pages. A similar pattern applies to the proportion of comments.
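As a minimal illustration of the metric behind these figures, the snippet below computes the share of a post's likes that come from outside the publishing page's own audience and then prints the averages reported in the Demos study. The raw counts in the example are invented; only the reported percentages come from the report.

```python
# Out-of-network reach as used by Bartlett & Krasodomski-Jones (2015):
# the share of a post's likes coming from users who do not follow the page.

def out_of_network_share(likes_from_non_followers: int, total_likes: int) -> float:
    """Fraction of likes from outside the page's own audience."""
    return likes_from_non_followers / total_likes if total_likes else 0.0

# Hypothetical raw counts for a single post:
print(out_of_network_share(likes_from_non_followers=340, total_likes=500))  # 0.68

# Average shares reported in the Demos study, by content type:
reported = {
    "links":    {"populist right wing": 0.50, "counter-speech": 0.18},
    "photos":   {"populist right wing": 0.52, "counter-speech": 0.05},
    "videos":   {"populist right wing": 0.68, "counter-speech": 0.26},
    "statuses": {"populist right wing": 0.21, "counter-speech": 0.07},
}
for content_type, shares in reported.items():
    print(f"{content_type:>8}: right wing {shares['populist right wing']:.0%}"
          f" vs counter-speech {shares['counter-speech']:.0%}")
```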

Unlike the negative affect which underlies hate speech and motivates users to share hateful content or opinions, messages of support for equal rights, peaceful coexistence and empathy apparently awaken less interest in internet users. Aside from the fear of confrontation and of personal exposure to insults, one of the barriers to speaking up already discussed, there might be other reasons preventing users from sharing pro-democratic, pro-equality and anti-racist content as well.

For instance, many regular users might perceive the democratic message as so obvious that there is little point in further sharing it. Living inside a filter bubble might especially augment this effect: one might believe democratic values are obvious to almost everyone. While hateful content often serves as a tool to warn others about, or remind them of, perceived social threats, and consequently spreads rapidly, pro-democratic content might be perceived as less sensational and hence less affectively loaded. Specifically, it seldom induces fear, which is a much stronger motivator for action than positive affect or sympathy (Fan et al., 2014; Nail & McGregor, 2009; Nail et al., 2009).
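The newsfeed personalization described in the previous section can be illustrated with a toy ranking function. This is a sketch of the general principle only, not Facebook's actual algorithm; the posts, the interaction history and the similarity measure are all invented.

```python
# Toy interest-based feed ranking: candidate posts are scored by word overlap
# with the user's past interactions, so content resembling what the user
# already engaged with rises to the top and unlike-minded content sinks.

past_interactions = ["anti-refugee meme", "border control post", "crime scare story"]

candidate_posts = [
    "anti-refugee meme",
    "another border control post",
    "counter-speech video",
    "solidarity campaign",
]

def interest_score(post: str, history: list[str]) -> float:
    """Share of the post's words already seen in the interaction history."""
    history_words = {word for item in history for word in item.split()}
    post_words = post.split()
    return sum(word in history_words for word in post_words) / len(post_words)

feed = sorted(candidate_posts,
              key=lambda post: interest_score(post, past_interactions),
              reverse=True)
print(feed)
# ['anti-refugee meme', 'another border control post',
#  'counter-speech video', 'solidarity campaign']
```

Feeding the function a different history produces a different bubble; the point is simply that content dissimilar from past engagement, which is where counter-speech usually sits, is systematically ranked lower.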

GUIDELINES FOR EFFECTIVE COUNTER-SPEECH

In their report "Considerations for successful counterspeech", Benesch, Ruths, Dillon, Saleem and Wright (2016) describe two ways in which "successful counterspeech" can be understood (on Twitter): "The first is speech (text or visual media) that has a favorable impact on the original (hateful) Twitter user, shifting his or her discourse if not also his or her beliefs. This is usually indicated by an apology or recanting, or the deletion of the original tweet or account. The second type of success is to positively affect the discourse norms of the 'audience' of a counterspeech conversation: all of the other Twitter users or 'cyberbystanders' who read one or more of the relevant exchange of tweets. This impact is difficult to assess when studying counterspeech 'in the wild' as we have, but it may be indicated by long conversations that remain civil, and by counterspeech that leads to others counterspeaking" (p. 2). Depending on the approach, different strategies might prove useful.

Strikingly little academic research on successful counter-speech strategies has been published to date. Most available (and valuable) guidelines, DOs and DON'Ts, are collections of experience-based observations and conclusions made by digital media activists operating within various anti-hate speech projects, rather than the results of rigorously controlled scientific analyses. They are available in the form of downloadable publications or on websites.

I will now try to draw some research-based suggestions for successful counter-speech and for the prevention of the online spread of hate speech. Finally, I will point to some more useful resources and recommendations.

TO AFFECT THE HATER'S BEHAVIOR

ENCOURAGE EMPATHY ONLINE BY REMEMBERING AND REMINDING OTHER USERS ABOUT THE HUMANITY OF THE PERSON(S) TARGETED BY HATE: Munger (2017) tested experimentally how online sanctioning of racial hatred affects hateful white male Twitter users. He used bots to tweet the reminder "Hey man, just remember that there are real people who are hurt when you harass them with that kind of language" after selected users used racial slurs, while manipulating the bots' perceived race and social status and keeping the bots' perceived male gender constant. He then analyzed the racist activity of the treated users over a period of one month. Bots appearing as white men of high status (as indicated by a large number of followers) achieved the longest-lasting change, significantly reducing future racial slur usage among treated users. Unfortunately, the effects achieved by bots whose profile picture depicted a black male, and by bots with lower social status, were less lasting. The author explains the effect by attributing it to "in-group" identification, without mentioning white privilege as an alternative hypothesis. The exclusion of women from the experiment and the lack of women-treating-women, women-treating-men and men-treating-women conditions make it difficult to determine the exact nature of the effect.
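A sketch of the intervention logic, under assumptions, is shown below. The reply text is the one quoted from the study; the slur detection and the posting function are hypothetical placeholders rather than a real Twitter/X API, and a deployed bot would of course require careful ethical and platform-policy review.

```python
# Sketch of a Munger-style sanctioning bot: when a monitored account posts a
# message containing a slur, the bot replies with a reminder of the targeted
# people's humanity. detect_slur() and post_reply() are placeholders.

REMINDER = ("Hey man, just remember that there are real people who are hurt "
            "when you harass them with that kind of language")

SLURS = {"example_slur"}  # placeholder; the study targeted one specific racial slur

def detect_slur(text: str) -> bool:
    return any(slur in text.lower() for slur in SLURS)

def post_reply(in_reply_to_id: str, text: str) -> None:
    # Stand-in for the API call the bot account would make.
    print(f"reply to {in_reply_to_id}: {text}")

def sanction(message_id: str, text: str) -> None:
    if detect_slur(text):
        post_reply(message_id, REMINDER)

sanction("12345", "an abusive message containing example_slur")
```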


USE YOUR PRIVILEGE FOR A GOOD CAUSE: In accordance with the results mentioned above, and being mindful of Coustick-Deal's argument (2017) that not everyone has the power to defend themselves from hate speech: whether you are white or male or straight or cis, use your white or male or straight or cis privilege to counter hatred by clearly expressing the expectation that the social norm of treating other users with respect be observed. In other words, if you are not from one of the disadvantaged groups, you can help the cause by reminding hateful users of the humanity of the targeted persons; you help to reduce the online disinhibition effect and promote a healthy culture of online debate.

CONFRONT PROBLEMATIC ONLINE ACTIVITY BY PRIVATE MESSAGING: This kind of approach has been mentioned by Rafael, Dinar and Heyken (Dinar & Rafael, 2017; Rafael, Dinar, & Heyken, 2017). Public shaming poses a threat to the self-esteem of the shamed person, which might induce a strong motivation to publicly resist the confrontation in order to restore that self-esteem. The aim becomes to defend one's initial stand, as publicly changing one's mind might be interpreted as losing face, especially if the tone of the public confrontation is aggressive. The intervention might become easier and might bring a more lasting effect if it is more personal, concerned and private. One way to achieve this is to express concern by sending the problematic user a private message, sparing them public shame. This could be done in a similar manner as in the experiment above, by reminding them of the humanity of the targeted persons and informing them about the real-life harm of being targeted by hate. For instance: "Hi X, I've seen you have posted a [describe the meme]. I was wondering why you shared it. I know you are a mindful person and just wanted to let you know that sharing such content hurts actual people. It costs nothing to post it, but there are people in the real world who are beaten and insulted because of others spreading this kind of content. It negatively affects their/our everyday life. I would appreciate it if you could remove that post and wouldn't post anything similar in the future." This technique should work better for users who are not (yet) radical right-wing advocates.

TO AFFECT THE ONLINE BYSTANDERS

REDUCE FEAR: It has been experimentally demonstrated that the reduction of fear is able to shift the attitudes of conservatives toward more liberal stances, and vice versa. This is quite a general suggestion, for fear reduction can take various forms. Fear reduction as a strategy to prevent the spread of hate is likely to be more effective among users who do not (yet) strongly identify with right-wing values and beliefs.

PROVIDE SOCIAL PROOF BY ENCOURAGING FRIENDS TO VISIBLY SUPPORT COUNTER-SPEAKERS: Filter bubbles are not helpful in making counter-speech efforts viral. Unfortunately, anti-hate content is less likely than hateful content to reach an audience outside of the filter bubble of like-minded individuals (Bartlett & Krasodomski-Jones, 2015). As a result, counter-speech becomes less visible. To increase the chances of content leaving the filter bubble, setting an example or providing social proof might be a strategy worth trying, especially, but not exclusively, on public Facebook pages.


It has been argued that only some users have enough courage, privilege, power or motivation to be able to directly confront online hate. It is therefore important to encourage silent cyber-bystanders to show their support for those who do argue with the haters. Counter-speech can be supported by liking the comments and posts of counter-speakers (which increases the comments' visibility), but even better, by writing a few words of support. The power balance in discussions between haters and counter-speakers often involves a group of haters against a single opponent. The likes the opponent gets might not be very visible signs of support for other silent bystanders to notice.

Apart from giving likes, it can make even more sense to express support by posting short comments: "X is right", "I agree with X", "I am on your side, X", "X makes a good point", "X, your words convince me". Such short responses, while unlikely to attract insults from haters more focused on responding to the main thread, might provide social proof for other silent readers (cyber-bystanders) and encourage them to gain confidence in showing even more support. Such support can also reduce the confidence of the haters, who might otherwise feel strong when arguing with only a single user.

Another way to show support for pro-democratic views is to include a democratic message as part of one's profile photo, for instance by adding a supportive photo frame (available on Facebook) or using a supportive hashtag. While Facebook's filter bubbles filter out much of the activity of our contacts, especially if their beliefs are different from ours, the change of a profile photo might be able to reach outside the filter bubble, because it is often displayed in friends' newsfeeds.

POST AND PRODUCE COUNTER-SPEECH MEMES: Bartlett and Krasodomski-Jones (2015) determined that the most successful type of counter-speech content was visual (a photo), the most successful counter-speech tone was satirical, and the most successful type of counter-speech message was constructive (rather than insulting).

JOIN COUNTER-SPEECH AND ANTI-DISCRIMINATION GROUPS AND PAGES: They might provide useful resources to boost your confidence in argumentation (for instance, Everyday Feminism provides daily powerful articles on gender, sexual orientation, race and related topics). They can also be useful for gaining immediate support whenever social proof might be needed. One example of such a Facebook group is #ichbinhier. Would you like to follow Facebook pages with useful content, but are unsure where to find them? Ask your friends for recommendations of anti-discrimination, anti-racism, LGBTQ-support or feminist pages they subscribe to and find useful. Once there, ask again among their members and you will likely receive even more recommendations.

WHAT IS OBVIOUS FOR YOU MIGHT NOT BE OBVIOUS FOR YOUR FRIENDS AND FOR MEMBERS OF GROUPS YOU BELONG TO: Remember that familiarity with a given subject appears to make us immune to the negative effects of online hate, probably by reducing fear of the unknown? Boost familiarity among your social media contacts by sharing helpful articles that have helped you grow: for instance, on how to be less racist, how to better support a marginalized community or how to better oppose prejudice. Or maybe you already know a lot about various forms of discrimination and assume others know them too? Share the links with others, adding an interesting excerpt from the text. Have you read an article in a foreign language? Link the article and provide a short summary in your native language. Some among your friends might get interested in the subject, too.

READ USEFUL EXPERIENCE-BASED RESOURCES AND USE RECOMMENDED INTERVENTION METHODS: Among the useful, experience-based publications containing valuable suggestions are "„Geh sterben!" Umgang mit Hate Speech und Kommentaren im Internet" by the Amadeu Antonio Stiftung (Baldauf, Banaszczuk, Koreng, Schramm, & Stefanowitsch, 2015b), "Considerations for successful counterspeech" by Benesch, Ruths, Dillon, Saleem and Wright (2016), "Digitale Antidiskriminierungsarbeit" by Rafael, Dinar and Heyken (2017) and "Hass und Hetze im Internet – Analyse und Intervention", also by Dinar and Rafael. There is also an informative and comprehensive book, "Hass im Netz: Was wir gegen Hetze, Mobbing und Lügen tun können" by Ingrid Brodnig (2016).

SUMMARY

This article was an attempt to concisely summarize some of the most recent scientific literature on the psychological dynamics of hate speech and counter-speech on the internet. While many internet users spontaneously engage in various types of counter-speech activity, mostly by commenting on hateful posts, but also by creating memes and even joining online anti-hate communities, and while various anti-hate projects are being carried out, making counter-speech tools, videos, articles and other resources available to internet users, only a few helpful scientific evaluations of counter-speech strategies appear to have been published to date. Most available suggestions and resources are valuable experience-based observations by activists involved in the said anti-hate-speech projects, rather than by scientists.

Research has demonstrated that online hate has much more viral potential than joy or sympathy, and another body of research has shown the possible mechanism underlying the viral spread of hate and prejudice: fear and the psychological need for safety. Other researchers have shown that familiarity with what we might otherwise fear immunizes us against perceiving a given issue as threatening, even when we are exposed to social proof guiding us otherwise. Fear motivates us to share hateful content and to warn others. Hate and ridicule are our weapons against what we are afraid of and what we perceive as unfamiliar. Our online invisibility and the "facelessness" of our online adversaries, combined with the asynchronicity of internet discussions and the perceived lack of consequences of our actions, make us prone to the online disinhibition effect.

A reminder of the humanity of the people we tend to mistreat might have a sobering effect on our online behavior, especially when coming from an in-group member with high status or from a privileged white man. The satirical and visual form of counter-speech is the most popular one, yet it is still unlikely to reach an audience outside of our filter bubble of like-minded individuals. Filter bubbles and echo chambers deceive us, creating an impression that most people share our values, beliefs and fears.

Online hate and prejudice threaten members of targeted groups. They transform into real-life violence, endangering the physical safety and psychological well-being of the victims. Fear for one's own safety is one of the factors that silence the victims and suppress active resistance. As a result, only some people are privileged enough to be able to challenge online hate. On the other hand, the privileged status of being free from discrimination and its consequences reduces the motivation to actively counter hate.

Countering hate is responsive in nature and involves effort, as opposed to expressing hate, which is impulsive, spontaneous and effortless. All these factors contribute to a marked asymmetry that helps hate speech spread and renders counter-speech relatively invisible.

The unbelievable success of the #metoo movement proves, however, that crowd-initiated social media counter-action driven by anger can go viral and reach far beyond the original filter bubble, leading to actual social change on an inter-continental scale.

REFERENCES

Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2014). The "Nasty Effect:" Online Incivility and Risk Perceptions of Emerging Technologies. Journal of Computer-Mediated Communication, 19(3), 373–387. https://doi.org/10.1111/jcc4.12009

Aronson, E., Wilson, T. D., & Akert, R. M. (2010). Social psychology. Upper Saddle River, NJ: Prentice Hall. Retrieved from http://archive.org/details/Social_Psychology_7th_edition_by_Elliot_Aronson_Timothy_D._Wilson_R_M._Akert

Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70(9).

Asch, S. E., & Guetzkow, H. (1951). Effects of group pressure upon the modification and distortion of judgement. In Groups, leadership and men (S. 177–190). Pittsburgh, PA: Carnegie Press.

Baldauf, J., Banaszczuk, Y., Koreng, A., Schramm, J., & Stefanowitsch, A. (2015a). Die direkte Bedrohung durch Hate Speech darf nicht unterschätzt werden! Interview mit Dorothee Scholz, Diplompsychologin. In J. Schramm & A. Lanzke (Hrsg.), „Geh sterben!" Umgang mit Hate Speech und Kommentaren im Internet (S. 25–29). Berlin: Amadeu Antonio Stiftung. Retrieved from http://www.amadeu-antonio-stiftung.de/w/files/pdfs/hatespeech.pdf

Baldauf, J., Banaszczuk, Y., Koreng, A., Schramm, J., & Stefanowitsch, A. (2015b). „Geh sterben!" Umgang mit Hate Speech und Kommentaren im Internet (J. Schramm & A. Lanzke, Hrsg.). Berlin: Amadeu Antonio Stiftung. Retrieved from http://www.amadeu-antonio-stiftung.de/w/files/pdfs/hatespeech.pdf

Baron, R. S., Vandello, J. A., & Brunsman, B. (1996). The forgotten variable in conformity research: Impact of task importance on social influence. Journal of Personality and Social Psychology, 71(5), 915–927. https://doi.org/10.1037/0022-3514.71.5.915

Bartlett, J., & Krasodomski-Jones, A. (2015). Counter-speech: Examining content that challenges extremism online (S. 21). Demos. Retrieved from https://www.demos.co.uk/wp-content/uploads/2015/10/Counter-speech.pdf

Benesch, S., Ruths, D., Dillon, K. P., & Saleem, H. M. (2016). Considerations for Successful Counterspeech, 1–9.


Brodnig, I. (2016). Hass im Netz: Was wir gegen Hetze, Mobbing und Lügen tun können. Wien: Brandstätter Verlag.

Chronik flüchtlingsfeindlicher Vorfälle. (o. J.). Retrieved 12. Juni 2018, von https://www.mut-gegen-rechte-gewalt.de/service/chronik-vorfaelle

Clay, R. A. (2017). Islamophobia: Psychologists are studying the impact of anti-Muslim sentiment and exploring ways to prevent it. Monitor on Psychology, 48(4), 34. Retrieved from http://www.apa.org/monitor/2017/04/islamophobia.aspx

Coustick-Deal, R. (2017, Februar 6). What's wrong with counter speech? Retrieved 14. Mai 2018, von https://medium.com/@ruthcoustickdeal/https-medium-com-whats-wrong-with-counter-speech-f5e972b13e5e

Europarat Ministerkomitee. Empfehlung R (97) 20 des Ministerkomitees an die Mitgliedstaaten über die „Hassrede" (1997).

Fan, R., Zhao, J., Chen, Y., & Xu, K. (2014). Anger Is More Influential than Joy: Sentiment Correlation in Weibo. PLoS ONE, 9(10), e110184. https://doi.org/10.1371/journal.pone.0110184

forsa. (2017). Hate Speech. Landesanstalt für Medien Nordrhein-Westfalen. Retrieved from https://www.lfm-nrw.de/fileadmin/user_upload/lfm-nrw/Service/Pressemitteilungen/Dokumente/2017/Ergebnisbericht_Hate-Speech_forsa-Mai-2017.pdf

Fyfe, S. (2017). Tracking Hate Speech Acts as Incitement to Genocide in International Criminal Law. Leiden Journal of International Law, 30(2), 523–548. https://doi.org/10.1017/S0922156516000753

Gelber, K., & McNamara, L. (2016). Evidencing the harms of hate speech. Social Identities, 22(3), 324–341. https://doi.org/10.1080/13504630.2015.1128810

Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political Conservatism as Motivated Social Cognition. Psychological Bulletin, 129(3), 339–375. Retrieved from https://search.ebscohost.com/login.aspx?direct=true&db=edsbl&AN=RN131422234&lang=pl&site=eds-live

Maas, H. (2015). Geleitwort. In Amadeu Antonio Stiftung (Hrsg.), „Geh sterben!" Umgang mit Hate Speech und Kommentaren im Internet (S. 6). Berlin: Amadeu Antonio Stiftung. Retrieved from http://www.amadeu-antonio-stiftung.de/w/files/pdfs/hatespeech.pdf

Maravilla, C. S. (2008). Hate Speech as a War Crime: Public and Direct Incitement to Genocide in International Law. Tulane Journal of International & Comparative Law, 17(1), 113–144. Retrieved from https://search.ebscohost.com/login.aspx?direct=true&db=lgs&AN=502065087&lang=pl&site=eds-live

Meyer, I. H. (1995). Minority stress and mental health in gay men. Journal of Health & Social Behavior, 36(1), 38–56.

Meyer, I. H. (2003). Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence. Psychological Bulletin, 129(5), 674–697. https://doi.org/10.1037/0033-2909.129.5.674


Mullen, B., & Smyth, J. M. (2004). Immigrant suicide rates as a function of ethnophaulisms: Hate speech predicts death. Psychosomatic Medicine, 66(3), 343–348.

Müller, K., & Schwarz, C. (2018). Fanning the Flames of Hate: Social Media and Hate Crime (SSRN Scholarly Paper No. ID 3082972). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=3082972

Munger, K. (2017). Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment. Political Behavior, 39(3), 629–649. https://doi.org/10.1007/s11109-016-9373-5

Nail, P. R., & McGregor, I. (2009). Conservative Shift among Liberals and Conservatives Following 9/11/01. Social Justice Research, 22(2–3), 231–240. https://doi.org/10.1007/s11211-009-0098-z

Nail, P. R., McGregor, I., Drinkwater, A. E., Steele, G. M., & Thompson, A. W. (2009). Threat causes liberals to think like conservatives. Journal of Experimental Social Psychology, 45, 901–907. Retrieved from https://www.academia.edu/23822628/Threat_causes_liberals_to_think_like_conservatives

Napier, J. L., Huang, J., Vonasch, A. J., & Bargh, J. A. (2017). Superheroes for change: Physical safety promotes socially (but not economically) progressive attitudes among conservatives. European Journal of Social Psychology, 48(2), 187–195. https://doi.org/10.1002/ejsp.2315

Rafael, S., Dinar, C., & Heyken, C. (2017). Digitale Antidiskriminierungsarbeit. Wissen schafft Demokratie, 1(2), 160–171. https://doi.org/10.19222/201702/15

Sherif, M. (1935). A study of some social factors in perception. Archives of Psychology (Columbia University), 187, 60.

Soral, W., Bilewicz, M., & Winiewski, M. (2018). Exposure to hate speech increases prejudice through desensitization. Aggressive Behavior, 44(2), 136–146. https://doi.org/10.1002/ab.21737

Sue, D. W. (2010). Microaggressions in Everyday Life: Race, Gender, and Sexual Orientation. John Wiley & Sons.

Terry, C., & Cain, J. (2016). The Emerging Issue of Digital Empathy. American Journal of Pharmaceutical Education, 80(4), 58. https://doi.org/10.5688/ajpe80458

United Nations Regional Information Centre for Western Europe. (2016). Intolerance and xenophobia on the rise in Europe. Retrieved 12. Juni 2018, von https://www.unric.org/en/latest-un-buzz/30377-intolerance-and-xenophobia-on-the-rise-in-europe
