Gender Bias in Natural Language Processing Across Human Languages

Total pages: 16

File type: PDF, size: 1020 KB

Gender Bias in Natural Language Processing Across Human Languages

Abigail Matthews (University of Wisconsin-Madison), Isabella Grasso, Christopher Mahoney, Yan Chen, Esma Wali, Thomas Middleton, Jeanna Matthews (Clarkson University), Mariama Njie (Iona College). [email protected]

Abstract

Natural Language Processing (NLP) systems are at the heart of many critical automated decision-making systems making crucial recommendations about our future world. Gender bias in NLP has been well studied in English, but has been less studied in other languages. In this paper, a team including speakers of 9 languages - Chinese, Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof - reports and analyzes measurements of gender bias in the Wikipedia corpora for these 9 languages. We develop extensions to profession-level and corpus-level gender bias metric calculations originally designed for English and apply them to 8 other languages, including languages with grammatically gendered nouns that have distinct feminine, masculine, and neuter profession words. We discuss future work that would benefit immensely from a computational linguistics perspective.

1. Introduction

Corpora of human language are regularly fed into machine learning systems as a key way to learn about the world. Natural Language Processing plays a significant role in many powerful applications such as speech recognition, text translation, and autocomplete, and is at the heart of many critical automated decision systems making crucial recommendations about our future world (Yordanov 2018) (Banerjee 2020) (Garbade 2018). Systems are taught to identify spam email, suggest medical articles or diagnoses related to a patient's symptoms, sort resumes based on relevance for a given position, and many other tasks that form key components of critical decision making systems in areas such as criminal justice, credit, housing, allocation of public resources, and more.

In the highly influential paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", Bolukbasi et al. (2016) developed a way to measure gender bias using word embedding systems like Word2vec. Specifically, they defined a set of gendered word pairs such as ("he", "she") and used the difference between these word pairs to define a gendered vector space. They then evaluated the relationship of profession words like doctor, nurse, or teacher relative to this gendered vector space. They demonstrated that word embedding software trained on a corpus of Google news could associate men with the profession computer programmer and women with the profession homemaker. Systems based on such models, trained even with "representative text" like Google news, could lead to biased hiring practices if used to, for example, parse resumes and suggest matches for a computer programming job. However, as with many results in NLP research, this influential result has not been applied beyond English.

In some earlier work from this team, "Quantifying Gender Bias in Different Corpora", we applied Bolukbasi et al.'s methodology to computing and comparing corpus-level gender bias metrics across different corpora of English text (Babaeianjelodar 2020). We measured the gender bias in pre-trained models based on a "representative" Wikipedia and Book Corpus in English and compared it to models that had been fine-tuned with various smaller corpora, including the General Language Understanding Evaluation (GLUE) benchmarks and two collections of toxic speech, RtGender and IdentityToxic. We found that, as might be expected, the RtGender corpora produced the highest gender bias score. However, we also found that the hate speech corpus, IdentityToxic, had lower gender bias scores than some of the more representative corpora found in the GLUE benchmarks. By examining the contents of the IdentityToxic corpus, we found that most of its text reflected bias towards race or sexual orientation, rather than gender. These results confirmed the use of a corpus-level gender bias metric as a way of measuring gender bias in an unknown corpus and comparing across corpora, but again this was only applied in English.

Here we build on the work of Bolukbasi et al. and our own earlier work to extend these important techniques in gender bias measurement and analysis beyond English. This is challenging because, unlike English, many languages like Spanish, Arabic, German, French, and Urdu have grammatically gendered nouns, including feminine, masculine, and neuter or neutral profession words. We translate and modify Bolukbasi et al.'s defining sets and profession sets in English for 8 additional languages and develop extensions to the profession-level and corpus-level gender bias metric calculations for languages with grammatically gendered nouns. We use this methodology to analyze the gender bias in Wikipedia corpora for Chinese (Mandarin Chinese), Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof. We demonstrate how the modern NLP pipeline not only reflects gender bias, but also leads to substantially over-representing some voices (especially English voices recorded in digital text) and under-representing most others (speakers of most of the 7000 human languages and even writers of classic works that have not been digitized).

In Section 2, we describe modifications that we made to the defining set and profession set proposed by Bolukbasi et al. in order to extend the methodology beyond English. In Section 3, we discuss the Wikipedia corpora and the occurrence of words in the modified defining and profession sets for 9 languages in Wikipedia. In Section 4, we extend Bolukbasi's gender bias calculation to languages, like Spanish, Arabic, German, French, and Urdu, with grammatically gendered nouns. We apply this to calculate and compare profession-level and corpus-level gender bias metrics for Wikipedia corpora in the 9 languages. We conclude and discuss future work in Section 5. Throughout this paper, we discuss future work that would benefit immensely from a computational linguistics perspective.

2. Modifying Defining Sets and Profession Sets

Word embedding is a powerful NLP technique that represents words in the form of numeric vectors. It is used for semantic parsing, representing the relationship between words, and capturing the context of a word in a document (Karani 2018). For example, Word2vec is a system that efficiently creates word embeddings by using a two-layer neural network to process huge data sets with billions of words and vocabularies of millions of words (Mikolov 2013).

Bolukbasi et al. developed a method for measuring gender bias using word embedding systems like Word2vec. Specifically, they defined a set of highly gendered word pairs such as ("he", "she") and used the difference between these word pairs to define a gendered vector space. They then evaluated the relationship of profession words like doctor, nurse, or teacher relative to this gendered vector space. Ideally, profession words would not reflect a strong gender bias. However, in practice, they often do. According to such a metric, doctor might be male-biased or nurse female-biased based on how these words are used in the corpora from which the word embedding model was produced.

In this section, we describe the modifications we made to the defining set and profession set proposed by Bolukbasi et al. in order to extend the methodology beyond English. Before applying these changes to other languages, we evaluate the impact of the changes on calculations in English. In this section, we also describe the Wikipedia corpora we used across 9 languages and analyze the occurrences of our defining set and profession set words in these corpora. This work is also described, with a different focus, in Wali et al. (2020) and Chen et al. (2021).

2.1. Defining Set

The defining set is a list of gendered word pairs used to define what a gendered relationship looks like. Bolukbasi et al.'s original defining set contained 10 English word pairs (she-he, daughter-son, her-his, mother-father, woman-man, gal-guy, Mary-John, girl-boy, herself-himself, and female-male) (Bolukbasi et al. 2016). We began with this set, but made substantial changes in order to compute gender bias effectively across 9 languages.

Specifically, we removed 6 of the 10 pairs, added 3 new pairs, and translated the final set into 8 additional languages. For example, we removed the pairs she-he and herself-himself because they are the same word in some languages like Wolof, Farsi, Urdu, and German. Similarly, we removed the pair her-his because in some languages like French and Spanish, the gender of the object does not depend on the person to whom it belongs.

We also added 3 new pairs (queen-king, wife-husband, and madam-sir) for which more consistent translations were available across languages. Interestingly, as we will discuss, the pair wife-husband introduces surprising results in many languages. Our final defining set for this study thus contained 7 word pairs, and Table 1 shows our translations of this final defining set across the 9 languages included in our study.

2.2. Professions Set

We began with Bolukbasi et al.'s profession word set in English, but again made substantial changes in order to compute gender bias effectively across 9 languages. Bolukbasi et al. had an original list of 327 profession words (2016), including some words that would not technically be classified as professions, like saint or drug addict. We narrowed this list down to 32 words including: nurse, teacher, writer, engineer, scientist, manager, driver, banker, musician, artist, chef, filmmaker, judge, comedian, inventor, worker, soldier, journalist, student, athlete, actor, governor, farmer, person, lawyer, adventurer, aide, ambassador, analyst, as-
Recommended publications
  • The Wikipedia Diversity Observatory: A Project to Identify and Bridge Content Gaps in Wikipedia
    The Wikipedia Diversity Observatory: A Project to Identify and Bridge Content Gaps in Wikipedia. Marc Miquel-Ribé (Universitat Pompeu Fabra, Barcelona, Catalonia), [email protected]; David Laniado (Eurecat, Centre Tecnològic de Catalunya), [email protected]
    ABSTRACT: In this paper we present the Wikipedia Diversity Observatory, a project aimed at increasing diversity within Wikipedia language editions. The project includes dashboards with visualizations and tools which show the gaps in terms of concepts not represented or not shared across languages. The dashboards are built on datasets generated for each of the more than 300 language editions, with features that label each article according to different categories relevant to overall content diversity. Through various examples, we show how the tools encourage and help editors to bridge the gaps in Wikipedia content. Finally, we discuss the project's impact on the communities and implications for the Wikimedia movement, in a moment in which covering diversity is considered strategic.
    1 Introduction: Wikipedia is among the largest information repositories on the Internet that are both multilingual and created through collaborative effort. Its prime objective is to "give free access to the sum of all human knowledge" and, consequently, it exists in as many as 309 languages. Even though the language communities make the projects grow on a constant basis, the content does not represent the existing diversity in peoples, places, and cultures of the world; furthermore, there is a gap between language editions, and articles often are not shared, or even remain unique to one language [1]. The creation of articles in Wikipedia language editions is spontaneous and non-directed. Several studies showed that cultural and geographical
  • Cross-Lingual Entity Extraction and Linking for 300 Languages (© 2020 Xiaoman Pan)
    Cross-Lingual Entity Extraction and Linking for 300 Languages. © 2020 Xiaoman Pan. Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 2020. Urbana, Illinois. Doctoral Committee: Dr. Heng Ji (Chair), Dr. Jiawei Han, Dr. Hanghang Tong, Dr. Kevin Knight.
    ABSTRACT: Information provided in languages which people can understand saves lives in crises. For example, the language barrier was one of the main difficulties faced by humanitarian workers responding to the Ebola crisis in 2014. We propose to break language barriers by extracting information (e.g., entities) from a massive variety of languages and grounding the information into an existing Knowledge Base (KB) which is accessible to a user in their own language (e.g., a reporter from the World Health Organization who speaks only English). The ambitious goal of this thesis is to develop a Cross-lingual Entity Extraction and Linking framework for 1,000 fine-grained entity types and 300 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify entity name mentions, assign a fine-grained type to each mention, and link it to an English KB if it is linkable. Traditional entity linking methods rely on costly human-annotated data to train supervised learning-to-rank models to select the best candidate entity for each mention. In contrast, we propose a novel unsupervised represent-and-compare approach that can accurately capture the semantic meaning representation of each mention and directly compare its representation with the representation of each candidate entity in the target KB.
  • Gender Bias and Under-Representation in Natural Language Processing Across Human Languages
    Gender Bias and Under-Representation in Natural Language Processing Across Human Languages. Yan Chen (1), Christopher Mahoney (1), Isabella Grasso (1), Esma Wali (1), Abigail Matthews (2), Thomas Middleton (1), Mariama Njie (3), Jeanna Matthews (1). (1) Clarkson University, (2) University of Wisconsin-Madison, (3) Iona College. [email protected]
    ABSTRACT: Natural Language Processing (NLP) systems are at the heart of many critical automated decision-making systems making crucial recommendations about our future world. However, these systems reflect a wide range of biases, from gender bias to a bias in which voices they represent. In this paper, a team including speakers of 9 languages - Chinese, Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof - reports and analyzes measurements of gender bias in the Wikipedia corpora for these 9 languages. In the process, we also document how our work exposes crucial gaps in the NLP-pipeline for many languages. Despite substantial investments in multilingual support, the modern NLP-pipeline still systematically and dramatically under-represents the majority of
    1 INTRODUCTION: Corpora of human language are regularly fed into machine learning systems as a key way to learn about the world. Natural Language Processing plays a significant role in many powerful applications such as speech recognition, text translation, and autocomplete and is at the heart of many critical automated decision systems making crucial recommendations about our future world [20][2][7]. Systems are taught to identify spam email, suggest medical articles or diagnoses related to a patient's symptoms, sort resumes based on relevance for a given position, and many other tasks that form key components of critical decision making systems in areas such as criminal justice, credit, housing, allocation of public resources, and more.
  • WikiAfrica Primary School Feasibility Study
    WikiAfrica Primary School Feasibility Study. Draft, November 2012. The WikiAfrica Primary School Feasibility Study was produced in 2012 within the frame of WikiAfrica, a cross-continental collaboration that aims to increase the quantity and quality of African content on the world's most referenced online encyclopaedia, Wikipedia. WikiAfrica is promoted by lettera27 Foundation and the Africa Centre, and it was initiated by lettera27 Foundation in 2006. The study is edited by Iolanda Pensa, scientific director of WikiAfrica; Cristina Perillo, WikiAfrica project manager for lettera27; and Isla Haddow-Flood, WikiAfrica project manager for the Africa Centre. WikiAfrica Primary School is a project conceived by Iolanda Pensa under a Creative Commons Attribution-ShareAlike license. The study was made possible thanks to volunteers and the support of lettera27 Foundation, and is itself under a Creative Commons Attribution-ShareAlike license. WikiAfrica, 2012.
    "Education is the most powerful weapon which you can use to change the world." (Nelson Mandela)
    Table of contents: 1. Introduction; Key findings; 2. Wikipedia; Outreach projects and education; Languages used in the African educational systems; General overview of involved Wikipedia projects; General overview on the
  • Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 1–7, June 10, 2021
    TrustNLP: First Workshop on Trustworthy Natural Language Processing. Proceedings of the Workshop, June 10, 2021. ©2021 The Association for Computational Linguistics. Order copies of this and other ACL proceedings from: Association for Computational Linguistics (ACL), 209 N. Eighth Street, Stroudsburg, PA 18360 USA. Tel: +1-570-476-8006. Fax: +1-570-476-0860. [email protected]. ISBN 978-1-954085-33-6.
    Introduction: Recent progress in Artificial Intelligence (AI) and Natural Language Processing (NLP) has greatly increased their presence in everyday consumer products in the last decade. Common examples include virtual assistants, recommendation systems, and personal healthcare management systems, among others. Advancements in these fields have historically been driven by the goal of improving model performance as measured by accuracy, but recently the NLP research community has started incorporating additional constraints to make sure models are fair and privacy-preserving. However, these constraints are not often considered together, which is important since there are critical questions at the intersection of these constraints, such as the tension between simultaneously meeting privacy objectives and fairness objectives, which requires knowledge about the demographics a user belongs to. In this workshop, we aim to bring together these distinct yet closely related topics. We invited papers which focus on developing models that are "explainable, fair, privacy-preserving, causal, and robust" (Trustworthy ML Initiative). Topics of interest include: Differential Privacy; Fairness and Bias: Evaluation and Treatments; Model Explainability and Interpretability; Accountability; Ethics; Industry applications of Trustworthy NLP; Causal Inference; Secure and trustworthy data generation. In total, we accepted 11 papers, including 2 non-archival papers.
  • Wikipedia (from Wikipedia, the free encyclopedia)
    Wikipedia. From Wikipedia, the free encyclopedia. This article is about the Internet encyclopedia. For other uses, see Wikipedia (disambiguation). For Wikipedia's non-encyclopedic visitor introduction, see Wikipedia:About.
    Infobox: Logo: a white sphere made of large jigsaw pieces, with glyphs from several writing systems on the pieces, most of them meaning the letter W or the sound "wi". Screenshot: Main page of the English Wikipedia. Web address: wikipedia.org. Slogan: The free encyclopedia that anyone can edit. Commercial? No. Type of site: Internet encyclopedia. Registration: Optional.[notes 1] Available in: 287 editions.[1] Users: 73,251 active editors (May 2014),[2] 23,074,027 total accounts. Content license: CC Attribution / Share-Alike 3.0; most text also dual-licensed under GFDL, media licensing varies. Owner: Wikimedia Foundation. Created by: Jimmy Wales, Larry Sanger.[3] Launched: January 15, 2001; 13 years ago. Alexa rank: 6, steady (September 2014).[4] Current status: Active.
    Wikipedia (/ˌwɪkɪˈpiːdiə/ or /ˌwɪkiˈpiːdiə/, wik-i-PEE-dee-ə) is a free-access, free content Internet encyclopedia, supported and hosted by the non-profit Wikimedia Foundation. Anyone who can access the site[5] can edit almost any of its articles. Wikipedia is the sixth-most popular website[4] and constitutes the Internet's largest and most popular general reference work.[6][7][8] As of February 2014, it had 18 billion page views and nearly 500 million unique visitors each month.[9] Wikipedia has more than 22 million accounts, out of which there were over 73,000 active editors globally as of May 2014.[2] Jimmy Wales and Larry Sanger launched Wikipedia on January 15, 2001.