Stereotypical Bias Removal for Hate Speech Detection Task Using Knowledge-Based Generalizations
Pinkesh Badjatiya, Manish Gupta*, Vasudeva Varma
IIIT Hyderabad, Hyderabad, India
[email protected], {manish.gupta,vv}@iiit.ac.in

*Author is also a Principal Applied Researcher at Microsoft ([email protected]).

ABSTRACT
With the ever-increasing cases of hate spread on social media platforms, it is critical to design abuse detection mechanisms to proactively avoid and control such incidents. While methods for hate speech detection exist, they stereotype words and hence suffer from inherently biased training. Bias removal has traditionally been studied for structured datasets, but we aim at bias mitigation from unstructured text data.

In this paper, we make two important contributions. First, we systematically design methods to quantify the bias for any model and propose algorithms for identifying the set of words which the model stereotypes. Second, we propose novel methods leveraging knowledge-based generalizations for bias-free learning. Knowledge-based generalizations provide an effective way to encode knowledge because the abstraction they provide not only generalizes content but also facilitates retraction of information from the hate speech detection classifier, thereby reducing the imbalance. We experiment with multiple knowledge generalization policies and analyze their effect on general performance and on mitigating bias. Our experiments with two real-world datasets, a Wikipedia Talk Pages dataset (WikiDetox) of size ∼96k and a Twitter dataset of size ∼24k, show that the use of knowledge-based generalizations results in better performance by forcing the classifier to learn from generalized content. Our methods utilize existing knowledge bases and can easily be extended to other tasks.

CCS CONCEPTS
• Networks → Online social networks; • Social and professional topics → Hate speech; • Computing methodologies → Supervised learning by classification; Neural networks; Lexical semantics.

KEYWORDS
hate speech; stereotypical bias; knowledge-based generalization; natural language processing; bias detection; bias removal

ACM Reference Format:
Pinkesh Badjatiya, Manish Gupta, Vasudeva Varma. 2019. Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations. In Proceedings of the 2019 World Wide Web Conference (WWW '19), May 13–17, 2019, San Francisco, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3308558.3313504

Table 1: Examples of Incorrect Predictions from the Perspective API

Examples                                                  | Predicted Hate Label (Score) | True Label
those guys are nerds                                      | Hateful (0.83)               | Neutral
can you throw that garbage please                         | Hateful (0.74)               | Neutral
People will die if they kill Obamacare                    | Hateful (0.78)               | Neutral
oh shit, i did that mistake again                         | Hateful (0.91)               | Neutral
that arab killed the plants                               | Hateful (0.87)               | Neutral
I support gay marriage. I believe they have a right to be as miserable as the rest of us. | Hateful (0.77) | Neutral

1 INTRODUCTION
With the massive increase in user-generated content online, there has also been an increase in automated systems for a large variety of prediction tasks. While such systems report high accuracy on the collected data used for testing them, such data often suffers from multiple types of unintended bias. A popular unintended bias, stereotypical bias (SB), can be based on typical perspectives like skin tone, gender, race, demography, disability, Arab-Muslim background, etc., or can be a complicated combination of these as well as other confounding factors. Not being able to build unbiased prediction systems can lead to low-quality, unfair results for victim communities. Since many such prediction systems are critical in real-life decision making, this unfairness can propagate into government/organizational policy making and general societal perceptions about members of the victim communities [13].

In this paper, we focus on the problem of detecting and removing such bias for the hate speech detection task. With the massive increase in social interactions online, there has also been an increase in hateful activities. Hateful content refers to "posts" that contain abusive speech targeting individuals (cyber-bullying, a politician, a celebrity, a product) or particular groups (a country, LGBT, a religion, gender, an organization, etc.). Detecting such hateful speech is important for discouraging such wrongful activities. While there have been systems for hate speech detection [1, 9, 16, 21], they do not explicitly handle unintended bias.

We address a very practical issue in short text classification: the mere presence of a word may (sometimes incorrectly) determine the predicted label. We thus propose methods to identify whether a model suffers from this issue, and techniques to reduce the undue influence of such keywords.
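To make the issue concrete, one can probe any trained classifier with otherwise neutral template sentences that differ only in the presence of a candidate word: if the predicted hate probability rises whenever the word appears, regardless of context, the model is likely stereotyping that word. The minimal Python sketch below illustrates this idea; the function keyword_sensitivity, the toy scoring function, and the templates are illustrative assumptions, not the authors' implementation, and a real study would plug in an actual trained hate speech classifier.

    # Minimal sketch (assumed names, not the paper's code): probe whether the mere
    # presence of a word drives a classifier's prediction by scoring neutral
    # template sentences with and without the candidate word.

    def keyword_sensitivity(predict_proba, word, templates):
        """Average increase in P(hateful) caused by inserting `word` into
        otherwise neutral templates; larger values suggest the word is stereotyped."""
        p_with = [predict_proba(t.format(word)) for t in templates]
        p_without = [predict_proba(t.format("")) for t in templates]
        return sum(pw - po for pw, po in zip(p_with, p_without)) / len(templates)

    # Toy stand-in for a trained model: scores a sentence by fixed per-word weights.
    # Replace with a real hate speech classifier's probability output in practice.
    TOY_WEIGHTS = {"arab": 0.6, "gay": 0.5, "garbage": 0.4}

    def toy_predict_proba(text):
        return min(1.0, 0.1 + sum(TOY_WEIGHTS.get(tok, 0.0) for tok in text.lower().split()))

    templates = [
        "the {} person waited for the bus",
        "my {} neighbour waters the plants every morning",
    ]

    for w in ["arab", "gay", "table"]:
        print(w, round(keyword_sensitivity(toy_predict_proba, w, templates), 3))

Words with consistently large sensitivity scores are precisely the kind of stereotyped keywords whose undue influence we aim to detect and reduce.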
Acknowledging the importance of hate speech detection, Google launched the Perspective API in 2017. Table 1 shows examples of sentences that are incorrectly labeled by the API (as of 15th Aug 2018). We believe that for many of these examples, a significant reason for the errors is bias in the training data. Besides this API, we show examples of stereotypical bias from our own datasets in Table 2.

Traditionally, bias in datasets has been studied from the perspective of detecting and handling imbalance, selection bias, capture bias, and negative set bias [14]. In this paper, we discuss the removal of stereotypical bias. While there have been efforts on identifying a broad range of unintended biases in the research and practices around social data [18], there is hardly any rigorous work on principled detection and removal of such bias. One widely used technique for bias removal is sampling, but it is challenging to obtain instances with the desired labels, especially positive labels, that help in bias removal. Such techniques are also prone to sneaking in further bias through the added samples. To address these drawbacks, we design automated techniques for bias detection, and knowledge-based generalization techniques for bias removal, with a focus on hate speech detection.

Enormous amounts of user-generated content are a boon for learning robust prediction models. However, since stereotypical bias is highly prevalent in user-generated data, prediction models implicitly suffer from such bias [5]. To de-bias the training data, we propose a two-stage method: Detection and Replacement. In the first stage (Detection), we propose novel heuristics based on skewed prediction probabilities and class distribution imbalance for detecting bias sensitive words (BSWs). In the second stage (Replacement), we present novel bias removal strategies leveraging knowledge-based generalizations. These include replacing BSWs with generalizations such as their Part-of-Speech (POS) tags, Named Entity tags, WordNet [17] based linguistic equivalents, and word embedding based generalizations.
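As a rough illustration of the Replacement stage, the sketch below swaps each detected bias sensitive word for either its POS tag or a one-level WordNet hypernym using NLTK. The function names, the placeholder BSW set, and the choice of the first available hypernym are simplifying assumptions rather than the authors' implementation; the Named Entity tag and word embedding based policies mentioned above would slot into the same policy switch.

    # Rough sketch of the Replacement stage using NLTK (assumed tooling): each
    # detected bias sensitive word (BSW) is replaced either by its Part-of-Speech
    # tag or by a one-level WordNet hypernym, so the classifier is trained on a
    # generalized token instead of the stereotyped surface form.
    # Requires: nltk.download('punkt'), nltk.download('averaged_perceptron_tagger'),
    #           nltk.download('wordnet')

    import nltk
    from nltk.corpus import wordnet as wn

    def wordnet_hypernym(word):
        """Return a one-level WordNet hypernym lemma for `word`, if any."""
        for synset in wn.synsets(word):
            hypernyms = synset.hypernyms()
            if hypernyms:
                return hypernyms[0].lemma_names()[0]
        return None

    def generalize(sentence, bsw_set, policy="wordnet"):
        """Replace every BSW in `sentence` by a generalized token."""
        tokens = nltk.word_tokenize(sentence)
        out = []
        for token, pos in nltk.pos_tag(tokens):
            if token.lower() in bsw_set:
                if policy == "pos":
                    out.append(f"<{pos}>")  # POS-tag generalization
                else:
                    # WordNet generalization, falling back to the POS tag
                    out.append(wordnet_hypernym(token.lower()) or f"<{pos}>")
            else:
                out.append(token)
        return " ".join(out)

    bsws = {"arab", "nerds"}  # hypothetical output of the Detection stage
    print(generalize("that arab killed the plants", bsws, policy="wordnet"))
    print(generalize("those guys are nerds", bsws, policy="pos"))

Replacing rather than deleting the word keeps the sentence well formed while withholding the stereotyped surface form from the classifier.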
We experiment with two real-world datasets, WikiDetox and Twitter, of sizes ∼96K and ∼24K respectively. Our main contributions include the following:
• We propose a novel metric called PB (Pinned Bias) that is easier to compute and is effective.
• We perform experiments on two large real-world datasets to show the efficacy of the proposed framework.

The remainder of the paper is organized as follows. In Section 2, we discuss related work in the area of bias mitigation for both structured and unstructured data. In Section 3, we introduce a few definitions and formalize the problem. We present the two-stage framework in Section 4 and experimental results and analysis in Section 5. Finally, we conclude with a brief summary in Section 6.

2 RELATED WORK
Traditionally, bias in datasets has been studied from the perspective of detecting and handling imbalance, selection bias, capture bias, and negative set bias [14]. In this paper, we discuss the removal of stereotypical bias.

2.1 Handling Bias for Structured Data
Bias mitigation models for structured data require the bias sensitive attributes ("protected attributes") to be explicitly known. While there have been efforts on identifying a broad range of unintended biases in the research and practices around social data [18], only recently has research been done on bias detection and mitigation in the fairness, accountability, and transparency in machine learning (FAT-ML) community. While most of the papers in this community relate to establishing the existence of bias [3, 20], some provide bias mitigation strategies by altering training data [10], trained models [12], or embeddings [2]. Kleinberg et al. [15] and Friedler et al. [11] both compare several different fairness metrics for structured data. Unlike this line of work, our work focuses on unstructured data; in particular, we deal with bias in the hate speech detection task.

2.2 Handling Bias for Unstructured Data
Little prior work exists on fairness for text classification tasks. Blodgett and O'Connor [3], Hovy and Spruit [13] and Tatman [20]