Stereotypical Bias Removal for Hate Speech Detection Task Using Knowledge-Based Generalizations

Total pages: 16

File type: PDF; size: 1020 KB

Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations

Pinkesh Badjatiya, Manish Gupta*, Vasudeva Varma
IIIT Hyderabad, Hyderabad, India
[email protected], {manish.gupta,vv}@iiit.ac.in

ABSTRACT

With the ever-increasing cases of hate spread on social media platforms, it is critical to design abuse detection mechanisms to proactively avoid and control such incidents. While there exist methods for hate speech detection, they stereotype words and hence suffer from inherently biased training. Bias removal has been traditionally studied for structured datasets, but we aim at bias mitigation from unstructured text data.

In this paper, we make two important contributions. First, we systematically design methods to quantify the bias for any model and propose algorithms for identifying the set of words which the model stereotypes. Second, we propose novel methods leveraging knowledge-based generalizations for bias-free learning.

Knowledge-based generalizations provide an effective way to encode knowledge because the abstraction they provide not only generalizes content but also facilitates retraction of information from the hate speech detection classifier, thereby reducing the imbalance. We experiment with multiple knowledge generalization policies and analyze their effect on general performance and on mitigating bias. Our experiments with two real-world datasets, a Wikipedia Talk Pages dataset (WikiDetox) of size ∼96k and a Twitter dataset of size ∼24k, show that the use of knowledge-based generalizations results in better performance by forcing the classifier to learn from generalized content. Our methods utilize existing knowledge-bases and can easily be extended to other tasks.

Table 1: Examples of Incorrect Predictions from the Perspective API

  Example                                        Predicted Label (Score)   True Label
  those guys are nerds                           Hateful (0.83)            Neutral
  can you throw that garbage please              Hateful (0.74)            Neutral
  People will die if they kill Obamacare         Hateful (0.78)            Neutral
  oh shit, i did that mistake again              Hateful (0.91)            Neutral
  that arab killed the plants                    Hateful (0.87)            Neutral
  I support gay marriage. I believe they have    Hateful (0.77)            Neutral
  a right to be as miserable as the rest of us.

CCS CONCEPTS

• Networks → Online social networks; • Social and professional topics → Hate speech; • Computing methodologies → Supervised learning by classification; Neural networks; Lexical semantics;

KEYWORDS

hate speech; stereotypical bias; knowledge-based generalization; natural language processing; bias detection; bias removal

ACM Reference Format:
Pinkesh Badjatiya, Manish Gupta*, Vasudeva Varma. 2019. Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations. In Proceedings of the 2019 World Wide Web Conference (WWW '19), May 13–17, 2019, San Francisco, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3308558.3313504

arXiv:2001.05495v1 [cs.CL] 15 Jan 2020

This paper is published under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their personal and corporate Web sites with the appropriate attribution. WWW '19, May 13–17, 2019, San Francisco, CA, USA. © 2019 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC-BY 4.0 License. ACM ISBN 978-1-4503-6674-8/19/05. https://doi.org/10.1145/3308558.3313504

*Author is also a Principal Applied Researcher at Microsoft. ([email protected])

1 INTRODUCTION

With the massive increase in user-generated content online, there has also been an increase in automated systems for a large variety of prediction tasks. While such systems report high accuracy on the collected data used for testing them, many times such data suffers from multiple types of unintended biases. A popular unintended bias, stereotypical bias (SB), can be based on typical perspectives like skin tone, gender, race, demography, disability, Arab-Muslim background, etc., or can be complicated combinations of these as well as other confounding factors. Not being able to build unbiased prediction systems can lead to low-quality, unfair results for victim communities. Since many such prediction systems are critical in real-life decision making, this unfairness can propagate into government/organizational policy making and general societal perceptions about members of the victim communities [13].

In this paper, we focus on the problem of detecting and removing such bias for the hate speech detection task. With the massive increase in social interactions online, there has also been an increase in hateful activities. Hateful content refers to "posts" that contain abusive speech targeting individuals (cyber-bullying, a politician, a celebrity, a product) or particular groups (a country, LGBT, a religion, gender, an organization, etc.). Detecting such hateful speech is important for discouraging such wrongful activities. While there have been systems for hate speech detection [1, 9, 16, 21], they do not explicitly handle unintended bias.

We address a very practical issue in short text classification: the mere presence of a word may (sometimes incorrectly) determine the predicted label. We thus propose methods to identify whether a model suffers from such an issue and provide techniques to reduce the undue influence of certain keywords.

Acknowledging the importance of hate speech detection, Google launched the Perspective API in 2017. Table 1 shows examples of sentences that get incorrectly labeled by the API (as of 15 Aug 2018). We believe that for many of these examples, a significant reason for the errors is bias in the training data. Besides this API, we show examples of stereotypical bias from our datasets in Table 2.

Traditionally, bias in datasets has been studied from the perspective of detecting and handling imbalance, selection bias, capture bias, and negative set bias [14]. In this paper, we discuss the removal of stereotypical bias. While there have been efforts on identifying a broad range of unintended biases in the research and practices around social data [18], there is hardly any rigorous work on principled detection and removal of such bias. One widely used technique for bias removal is sampling, but it is challenging to obtain instances with desired labels, especially positive labels, that help in bias removal. These techniques are also prone to sneaking in further bias due to sample addition. To address such drawbacks, we design automated techniques for bias detection, and knowledge-based generalization techniques for bias removal, with a focus on hate speech detection.

Enormous amounts of user-generated content are a boon for learning robust prediction models. However, since stereotypical bias is highly prevalent in user-generated data, prediction models implicitly suffer from such bias [5]. To de-bias the training data, we propose a two-stage method: Detection and Replacement. In the first stage (Detection), we propose novel heuristics based on skewed prediction probability and class distribution imbalance for bias sensitive word (BSW) detection. Further, in the second stage (Replacement), we present novel bias removal strategies leveraging knowledge-based generalizations. These include replacing BSWs with generalizations like their Part-of-Speech (POS) tags, Named Entity tags, WordNet [17] based linguistic equivalents, and word embedding based generalizations. We experiment with two real-world datasets – WikiDetox and Twitter, of sizes ∼96K and ∼24K respectively. The goal of the exper-

• We propose a novel metric called PB (Pinned Bias) that is easier to compute and is effective.
• We perform experiments on two large real-world datasets to show the efficacy of the proposed framework.

The remainder of the paper is organized as follows. In Section 2, we discuss related work in the area of bias mitigation for both structured and unstructured data. In Section 3, we discuss a few definitions and formalize the problem definition. We present the two-stage framework in Section 4. We present experimental results and analysis in Section 5. Finally, we conclude with a brief summary in Section 6.

2 RELATED WORK

Traditionally, bias in datasets has been studied from the perspective of detecting and handling imbalance, selection bias, capture bias, and negative set bias [14]. In this paper, we discuss the removal of stereotypical bias.

2.1 Handling Bias for Structured Data

These bias mitigation models require structured data with bias-sensitive attributes ("protected attributes") explicitly known. While there have been efforts on identifying a broad range of unintended biases in the research and practices around social data [18], only recently has research been done on bias detection and mitigation in the fairness, accountability, and transparency in machine learning (FAT-ML) community. While most of the papers in the community relate to establishing the existence of bias [3, 20], some of these papers provide bias mitigation strategies by altering training data [10], trained models [12], or embeddings [2]. Kleinberg et al. [15] and Friedler et al. [11] both compare several different fairness metrics for structured data. Unlike this line of work, our work is focused on unstructured data. In particular, we deal with the bias for the hate speech detection task.

2.2 Handling Bias for Unstructured Data

Little prior work exists on fairness for text classification tasks. Blodgett and O'Connor [3], Hovy and Spruit [13] and Tatman [20]
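The Detection stage described above can be illustrated with a small sketch. This is not the paper's exact heuristic (the formal BSW criteria and the PB metric are defined in the paper itself); it is a deliberately simplified version of the class-distribution-imbalance idea: flag a word as bias sensitive when sentences containing it are labeled hateful far more often than the corpus prior would suggest. The toy corpus, the thresholds `min_count` and `skew_ratio`, and the function name are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact heuristic): flag a word as
# bias-sensitive when the class distribution of sentences containing it
# is skewed toward "hateful" relative to the corpus prior.
from collections import Counter

def find_bias_sensitive_words(corpus, labels, min_count=2, skew_ratio=2.0):
    """Return {word: hateful_rate} for words whose hateful rate
    is at least skew_ratio times the corpus-wide hateful prior."""
    prior = sum(labels) / len(labels)        # overall fraction of hateful posts
    word_total = Counter()                    # sentences containing the word
    word_hateful = Counter()                  # ... of which labeled hateful
    for text, label in zip(corpus, labels):
        for w in set(text.lower().split()):   # count each word once per post
            word_total[w] += 1
            word_hateful[w] += label
    bsws = {}
    for w, n in word_total.items():
        if n >= min_count:                    # ignore very rare words
            rate = word_hateful[w] / n
            if rate >= skew_ratio * prior:
                bsws[w] = rate
    return bsws

# Toy data (hypothetical annotations; 1 = hateful, 0 = neutral).
corpus = [
    "that arab killed the plants",
    "the arab delegation arrived",
    "i hate you arab scum",
    "the weather is lovely today",
    "you are wonderful",
    "what a nice garden",
]
labels = [1, 1, 1, 0, 0, 0]
print(find_bias_sensitive_words(corpus, labels))  # -> {'arab': 1.0}
```

On this toy corpus the identity term "arab" co-occurs only with hateful labels and is flagged, while frequent function words like "the" stay below the skew threshold, which is the qualitative behavior the paper attributes to stereotyped words.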
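The Replacement stage can likewise be sketched. The paper draws generalizations from knowledge bases (POS tags, Named Entity tags, WordNet hypernyms, word-embedding neighbors); the tiny dictionary below merely stands in for such a knowledge-base lookup, and every specific mapping in it is an illustrative assumption rather than one of the paper's actual policies.

```python
# Illustrative sketch of the Replacement stage: substitute each detected
# bias-sensitive word (BSW) with a generalization from a knowledge base.
# This toy dict stands in for a real KB lookup (e.g. a WordNet hypernym).
GENERALIZATIONS = {
    "arab": "person",
    "gay": "person",
    "muslim": "person",
    "nerds": "people",
}

def debias(sentence, bsws, kb=GENERALIZATIONS):
    """Replace every BSW in `sentence` with its KB generalization;
    leave all other words (and BSWs missing from the KB) unchanged."""
    out = []
    for w in sentence.lower().split():
        out.append(kb.get(w, w) if w in bsws else w)
    return " ".join(out)

bsws = {"arab", "gay", "nerds"}
print(debias("that arab killed the plants", bsws))
# -> "that person killed the plants"
```

Training the classifier on the generalized text is what, per the abstract, forces it to learn from content rather than from stereotyped surface tokens.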
Recommended publications
  • Stereotypes and Prejudice
    Stereotypes and Prejudice Their Overt and Subtle Influence in the Classroom CONNIE T. WOLFE University of Michigan STEVEN J. SPENCER Hope College The authors examine overt and subtle forms of stereotyping and prejudice. Two theories that explain overt prejudice are reviewed: realistic conflict theory and social identity theory. Although overt prejudice seems to have declined, subtle stereotyping is still pervasive. The authors review one theory, aversive racism theory, that explains this phenomenon. They also discuss two perspectives, attributional ambiguity and stereotype threat, which provide accounts of the impact of subtle racism. Both overt and subtle prejudice present challenges for the classroom. The authors describe one intervention called the jigsaw classroom that encourages work toward common goals and helps reduce the expression and impact of overt discrimination. A second intervention program, wise schooling, is presented, which aims to reduce the impact of subtle stereotypes by reducing stereotype threat. Why do prejudice and discrimination exist? Has overt racism been replaced by more subtle forms of prejudice? How does stereotyping affect its targets? In this article we describe two theories, realistic conflict theory and social identity theory, which provide an answer to the first question. We address the second question by noting that although overt discrimination has decreased, subtle forms of prejudice are still quite common and we describe one theory, aversive racism, that provides a compelling account of this change in the expression of prejudice. Finally, we answer the third question by describing two phenomena, attributional ambiguity and stereotype threat, that result from the pervasive nature of subtle stereotyping. This article is a selective overview of what social psychology has to say about these crucial issues.
  • The Stereotype of African American Characters in William H. Armstrong's Sounder
    ADLN Perpustakaan Universitas Airlangga

    CHAPTER I: INTRODUCTION

    1.1 Background of the Study

    African Americans' history has become national history of America. Roger Daniels, in his book Coming to America (2002), described that the African slave trade existed for over four centuries. The history began with the slave trade in the past. African Americans were forced to leave Africa and move to the New World, or America, by European colonization. From the middle of the 15th century to 1870, about ten million people were kidnapped out of Africa; about 350,000 of them were sold in America (51). The slave trade was a business entered into by white people for profit. For the European colonies, Africans were used as property and forced to work without being paid. Oftentimes, they were separated from their family members. At that time, cotton, sugar cane, and slavery became the dominant commodities of the Southern American economy. Africans lived in Southern America, which was very rich in fertile soil, full of large farms and huge plantations. A History of U.S. Families with a Focus on African Americans showed the domination of white people in the economy: "The political economy of slavery was characterized by segregation between whites and African Americans, the concentration of power and wealth… The dominant economic activity in the region was production of agricultural products, especially cotton, to export to the North and to Europe" (Iasewords 57). African Americans are defined as the citizens of the United States of America who are also of African descent.

    (Skripsi: The Stereotype of African American Characters in William H. Armstrong's Sounder, by Restya Andaru Winandita.)
  • SEXISM, STEREOTYPING, and the GENDER WAGE GAP a Thesis
    SEXISM, STEREOTYPING, AND THE GENDER WAGE GAP

    A Thesis Presented to the Faculty of Graduate Studies of The University of Guelph, in partial fulfilment of requirements for the degree of Doctor of Philosophy. June, 2000. © Christine Alksnis, 2000.

    ABSTRACT

    SEXISM, STEREOTYPING, AND THE GENDER WAGE GAP
    Christine Alksnis, University of Guelph, 2000. Advisor: Dr. S. Desmarais

    For decades, women's average earnings have been a fraction of men's earnings. This dissertation is predicated on the assumption that gender stereotypes facilitate sexist behaviour by employers that in turn contributes, at least in part, to the persistence of this gender wage gap.
  • Gender Bias in Sexual Assault Response And
    End Violence Against Women International (EVAWI)

    Gender Bias in Sexual Assault Response and Investigation. Part 1: Implicit Gender Bias

    Heather Huhtanen, with contributions by Kimberly A. Lonsway, PhD, and Sergeant Joanne Archambault (Ret.). November 2017; updated October 2020.

    This project is supported by Grant No. 2016-TA-AX-K010 awarded by the Office on Violence Against Women, US Department of Justice. The opinions, findings, conclusions, and recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Department of Justice, Office on Violence Against Women.

    Public Domain Notice: Unless something is excerpted directly from a copyrighted source, all the material in this document is in the public domain and may be reproduced or copied without specifically requesting permission from End Violence Against Women International (EVAWI) or the authors. Any direct quotes or excerpts should be properly cited, however. No one may reproduce or distribute this material for a fee without the specific, written authorization of End Violence Against Women International (EVAWI).

    Electronic Access: The publication may be downloaded from End Violence Against Women International's Resource Library (www.evawintl.org).

    Recommended Citation: Huhtanen, H. (2020). Gender Bias in Sexual Assault Response and Investigation. Part 1: Implicit Gender Bias. End Violence Against Women International.

    Authors: Heather Huhtanen is currently based in Geneva, Switzerland, where she works to promote gender equality in the context of international development, security and justice reform, and peace and transition processes.
  • The Scapegoat Study Guide
    Scholars Crossing Faculty Publications and Presentations Helms School of Government 2003 René Girard: The Scapegoat Study Guide Steven Alan Samson Liberty University, [email protected] Follow this and additional works at: https://digitalcommons.liberty.edu/gov_fac_pubs Part of the Other Social and Behavioral Sciences Commons, Political Science Commons, and the Public Affairs, Public Policy and Public Administration Commons Recommended Citation Samson, Steven Alan, "René Girard: The Scapegoat Study Guide" (2003). Faculty Publications and Presentations. 108. https://digitalcommons.liberty.edu/gov_fac_pubs/108 This Article is brought to you for free and open access by the Helms School of Government at Scholars Crossing. It has been accepted for inclusion in Faculty Publications and Presentations by an authorized administrator of Scholars Crossing. For more information, please contact [email protected]. RENÉ GIRARD: THE SCAPEGOAT STUDY GUIDE, 2003 Steven Alan Samson CHAPTER ONE: GUILLAUME DE MACHAUT AND THE JEWS Study Questions 1. What are some of the catastrophic events Machaut describes? What led to the massacre of the Jews? How is the process of scapegoating described in a fable by Jean La Fontaine? [The tendency to euphemize an evil is similarly depicted in the Harry Potter stories, as, for example, with "He who must not be named"]. (1-3) 2. [SKIP] How does Machaut's account of the arrival of the plague illustrate the phenomenon of scapegoating? What makes Machaut's use of epydimie [epidemic] a "linguistic scapegoat" and how is its essential structure the same as a human sacrifice? Does he ever connect these events into a single entity? ["The Thirty Years War" and "The Hundred Years War" were, of course, not perceived as such at the time].
  • •Understanding Bias: a Resource Guide
    Community Relations Services Toolkit for Policing Understanding Bias: A Resource Guide Bias Policing Overview and Resource Guide The Community Relations Service (CRS) is the U.S. Justice Department’s “peacemaker” agency, whose mission is to help resolve tensions in communities across the nation, arising from differences of race, color, national origin, gender, gender identity, sexual orientation, religion, and disability. CRS may be called to help a city or town resolve tensions that stem from community perceptions of bias or a lack of cultural competency among police officers. Bias and a lack of cultural competency are often cited interchangeably as challenges in police- community relationships. While bias and a lack of cultural competency may both be present in a given situation, these challenges and the strategies for addressing them differ appreciably. This resource guide will assist readers in understanding and addressing both of these issues. What is bias, and how is it different from cultural competency? The Science of Bias Bias is a human trait resulting from our tendency and need to classify individuals into categories as we strive to quickly process information and make sense of the world.1 To a large extent, these processes occur below the level of consciousness. This “unconscious” classification of people occurs through schemas, or “mental maps,” developed from life experiences to aid in “automatic processing.”2 Automatic processing occurs with tasks that are very well practiced; very few mental resources and little conscious thought are involved during automatic processing, allowing numerous tasks to be carried out simultaneously.3 These schemas become templates that we use when we are faced with 1.
  • The Influence of Media on Views of Gender
    Article 7

    Gendered Media: The Influence of Media on Views of Gender

    Julia T. Wood, Department of Communication, University of North Carolina at Chapel Hill

    THEMES IN MEDIA

    Of the many influences on how we view men and women, media are the most pervasive and one of the most powerful. Woven throughout our daily lives, media insinuate their messages into our consciousness at every turn. All forms of media communicate images of the sexes, many of which perpetuate unrealistic, stereotypical, and limiting perceptions. Three themes describe how media represent gender. First, women are underrepresented, which falsely implies that men are the cultural standard and women are unimportant or invisible. Second, men and women are portrayed in stereotypical ways that reflect and sustain socially endorsed views of gender. Third, depictions of relationships between men and women emphasize traditional roles and normalize violence against women. We will consider each of these themes in this section.

    MEDIA'S MISREPRESENTATION OF AMERICAN LIFE

    The media present a distorted version of cultural life in our country. According to media portrayals: White males make up two-thirds of the population. The women are less in number, perhaps because fewer than 10% live beyond 35. Those who do, like their younger and male counterparts, are nearly all white and heterosexual. In addition to being young, the majority of women are beautiful.

    times more often than ones about women ("Study Reports Sex Bias," 1989), media misrepresent actual proportions of men and women in the population. This constant distortion tempts us to believe that there really are more men than women and, further, that men are the cultural standard.
  • Empirically Validated Strategies to Reduce Stereotype Threat
    Empirically Validated Strategies to Reduce Stereotype Threat

    1. Remove Cues That Trigger Worries About Stereotypes
       • Reduce prejudice (Logel et al., 2009); remove physical cues that make it seem that a school setting is defined by the majority group (Cheryan et al., 2009); don't ask people to report a negatively stereotyped group identity immediately before taking a test (Steele & Aronson, 1995; Danaher & Crandall, 2008)
    2. Convey That Diversity Is Valued
       • For instance, communicate a multicultural ideology that explicitly values diversity (Purdie-Vaughns et al., 2008)
    3. Create a Critical Mass
       • Increase the visibility and representation of people from minority groups in a field (Murphy et al., 2007; Purdie-Vaughns et al., 2008), among test-takers (Inzlicht & Ben-Zeev, 2000), and in positions of authority (e.g., among teachers; Carrell et al., 2010; Dee, 2004; Massey & Fischer, 2005)
    4. Create Fair Tests, Present Them as Fair and as Serving a Learning Purpose
       • Use gender- and race-fair tests, communicate their fairness, convey that they are being used to facilitate learning, not to measure innate ability or reify stereotypes (Good et al., 2008; Spencer et al., 1999; Steele & Aronson, 1995)
    5. Value Students' Individuality
       • Remind students of aspects of their individual identity (Ambady et al., 2005; Gresky et al., 2005)
    6. Improve Cross-Group Interactions
       • Foster better intergroup relations (Mendoza-Denton & Page-Gould, 2008; Steele, 1997; Walton & Carr, 2012); remind students of similarities among groups (Rosenthal & Crisp, 2006); undo stereotypical associations through cognitive retraining (Forbes & Schmader, 2010); promote cooperative classrooms (Aronson & Patnoe, 1997; Cohen, 1994)
    7. Present and Recruit Positive Role Models from Diverse Groups
       • Expose students to successful role models from their group who refute negative stereotypes (Marx & Goff, 2005; Marx & Roman, 2002; McIntyre et al., 2003)
    8.
  • Sexism at Work: How Can We Stop It? Handbook for the EU Institutions and Agencies Acknowledgements
    Sexism at work: how can we stop it? Handbook for the EU institutions and agencies

    Acknowledgements

    This handbook was developed by the European Institute for Gender Equality (EIGE). It was adapted from JUMP's handbook Libérez votre entreprise du sexisme* by Dorothy Dalton, who was contracted as an External Expert. EIGE coordinated the work and provided quality assurance (Veronica Collins, Valentina Canepa, Vasiliki Saini).

    On 20 September 2019, EIGE held an expert meeting in Vilnius, Lithuania, to receive comments on a draft of the handbook. The meeting was attended by EU institutions and agencies, non-governmental organisations and the private sector. The final version of this handbook reflects the input of these organisations.

    The personal stories were collected in person or in writing during the drafting of the handbook. They include experiences from the EU institutions and agencies, as well as from bodies that work closely with these organisations. Stories were collected by the External Expert during workshops and coaching sessions, as well as through conversations. All stories are real, though some names have been changed to protect identities. EIGE would like to thank the Directorate-General (DG) for Justice and Consumers and DG Human Resources and Security of the European Commission for their valuable input, as well as Dr Sonja Robnik, member of EIGE's Expert Forum.

    The European Institute for Gender Equality (EIGE) is an autonomous body of the European Union established to strengthen gender equality across the EU. Email: [email protected]. Tel. +370 52157444
  • Gender Stereotypes
    Gender stereotypes and their effect on young people

    7%: Only 7% of engineering modern apprenticeships in the UK were filled by girls in 2016 (1)
    16%: Girls accounted for just 16% of students enrolled for a computing science degree in the UK in 2016 (2)
    10%: Only 10% of primary school teachers in Scotland are men (3)
    80%: Boys account for 80% of permanent exclusions in England (4)
    87%: 87% of women asked (aged 11–21) think that women are judged more for their looks than ability (5)
    18%: In the UK, women earn an average of 18% less than men (6)
    38%: 38% of dads lie to their boss to take time off work to spend with their children (7)

    1 C. Egglestone, E. Jones and F. Aldridge. Understanding the Underrepresentation of Women in Engineering Apprenticeships. Leicester: National Learning and Work Institute; 2018:10. www.learningandwork.org.uk/wp-content/uploads/2018/03/Understanding-the-under-representation-of-women-in-engineering-apprenticeships-1.pdf. Accessed 7 August 2018.
    2 Engineering UK. Engineering UK 2017: The State of Engineering; 2018:123. www.engineeringuk.com/research/engineeringuk-report. Accessed 7 August 2018.
    3 Summary Statistics for Schools in Scotland, No. 8: 2017 Edition. Gov.scot. www.gov.scot/Publications/2017/12/3099/348575. Published 2018. Accessed 7 August 2018.
    4 National Tables. GOV.UK. www.gov.uk/government/statistics/permanent-and-fixed-period-exclusions-in-england-2016-to-2017. Published 2018. Accessed 24 August 2018.
    5 Girl Guiding UK. Girls' Attitudes Survey 2014. London: Girlguiding; 2014:13. www.girlguiding.org.uk/globalassets/docs-and-resources/research-and-campaigns/girls-attitudes-survey-2014.pdf.
  • Bias, Employment Discrimination, and Black Women's Hair: Another Way Forward
    BYU Law Review, Volume 2018, Issue 4, Article 7. Winter 2-28-2019

    Bias, Employment Discrimination, and Black Women's Hair: Another Way Forward

    Crystal Powell

    Follow this and additional works at: https://digitalcommons.law.byu.edu/lawreview. Part of the Civil Rights and Discrimination Commons, and the Labor and Employment Law Commons.

    Recommended Citation: Crystal Powell, Bias, Employment Discrimination, and Black Women's Hair: Another Way Forward, 2018 BYU L. Rev. 933 (2019). Available at: https://digitalcommons.law.byu.edu/lawreview/vol2018/iss4/7

    This Comment is brought to you for free and open access by the Brigham Young University Law Review at BYU Law Digital Commons. It has been accepted for inclusion in BYU Law Review by an authorized editor of BYU Law Digital Commons. For more information, please contact [email protected].

    CONTENTS
    I. INTRODUCTION (933)
    II. HISTORY OF BLACK HAIR, IMPLICIT BIAS, AND WORKPLACE GROOMING STANDARDS (937)
       A. History of Black Hair Texture and Hairstyle: Centuries of Stereotyping (938)
       B. Clean, Neat, and Kept Versus Extreme, Eye-Catching, and Unprofessional: Workplace Grooming Policies Reflect Racial Stereotypes
  • Breaking Down Hate Speech UNIT 4
    LESSON PLAN: Breaking Down Hate Speech (UNIT 4)

    Essential Question: How can you create a community culture in which hate speech is unacceptable, both online and offline?

    Estimated time: 45 minutes

    Standards Alignment: Common Core: RL.9-12.1-2, RL.9-12.4, RL.9-12.7-8, RL.9-12.10, RI.9-12.1-2, RI.9-12.4, RI.9-12.10, W.9-12.2a-f, W.9-12.3a-e, W.9-12.4-6, W.9-12.8-10, SL.9-12.1a-d, SL.9-12.2-5, L.9-12.4a, L.9-12.6; 1a-c, 2a, 2b, 2d, 3b, 3d, 4a-d, 5a-d

    Lesson Overview: Students learn the definition of hate speech and understand how it affects individuals, groups, and communities. They learn to recognize hate speech by reading an article or by analyzing a brief video. They then explore school-wide solutions for addressing hate speech by role-playing a student mediation committee and creating guidelines for online and offline communities.

    Learning Objectives: Students will be able to ...
    • recognize hate speech and its impact on individuals, groups, and communities, both online and offline.
    • analyze situations to determine if they constitute hate speech.
    • create a set of community guidelines for dealing with online and offline hate speech at school.

    Key Vocabulary: hate speech: making cruel, hostile, or negative statements about someone based on their race, religion, national origin, disability, age, gender, or sexual orientation

    Materials and Preparation: Copy the "Hate Speech Corrodes Online Games" article excerpt, one for each student (Teach 1: Option A).