Uncovering and Mitigating Algorithmic Bias Through Learned Latent Structure

Citation: Amini, Alexander, et al. "Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure." Proceedings of the 2019 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 27-28 January 2019, Honolulu, Hawaii, United States. AAAI/ACM, 2019.
As Published: http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_220.pdf
Publisher: AAAI/ACM
Version: Author's final manuscript (made openly available by the MIT Faculty)
Citable link: http://hdl.handle.net/1721.1/121101
Terms of Use: Creative Commons Attribution-NonCommercial-ShareAlike, http://creativecommons.org/licenses/by-nc-sa/4.0/

Alexander Amini¹,³*, Ava Soleimany²*, Wilko Schwarting¹,³, Sangeeta Bhatia³, and Daniela Rus¹,³
¹ Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology
² Biophysics Program, Harvard University
³ Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
* Denotes co-first authors
{amini, asolei, wilkos, sbhatia, [email protected]

Abstract

Recent research has highlighted the vulnerabilities of modern machine learning based systems to bias, especially for segments of society that are under-represented in training data. In this work, we develop a novel, tunable algorithm for mitigating the hidden, and potentially unknown, biases within training data. Our algorithm fuses the original learning task with a variational autoencoder to learn the latent structure within the dataset and then adaptively uses the learned latent distributions to re-weight the importance of certain data points while training. While our method is generalizable across various data modalities and learning tasks, in this work we use our algorithm to address the issue of racial and gender bias in facial detection systems. We evaluate our algorithm on the Pilot Parliaments Benchmark (PPB), a dataset specifically designed to evaluate biases in computer vision systems, and demonstrate increased overall performance as well as decreased categorical bias with our debiasing approach.

Figure 1: Batches sampled for training without (left) and with (right) learned debiasing. The proposed algorithm identifies, in an unsupervised manner, under-represented parts of training data and subsequently increases their respective sampling probability. The resulting batch (right) from the CelebA dataset shows increased diversity in features such as skin color, illumination, and occlusions. (Panel labels: left, "Random Batch Sampling During Standard Face Detection Training", showing homogeneous skin color and pose with mean sample probability 7.57 × 10⁻⁶; right, "Batch Sampling During Training with Learned Debiasing", showing diverse skin color, pose, and illumination with mean sample probability 1.03 × 10⁻⁴.)
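
To make the sampling mechanism described in the abstract and in Figure 1 concrete, the following is a minimal sketch of latent-density-based re-weighting: per-dimension histograms over latent codes stand in for the learned latent distributions, and samples that fall in sparsely populated regions of the latent space receive higher sampling probability. The function name, the histogram density estimate, and the smoothing constant `alpha` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sampling_probabilities(latents, num_bins=10, alpha=0.01):
    """Turn latent codes (n_samples x n_latent) into debiased sampling probabilities.

    Rare regions of the latent space get higher probability; `alpha` tempers how
    aggressively rare samples are up-weighted (large alpha approaches uniform sampling).
    """
    n, d = latents.shape
    weights = np.ones(n)
    for j in range(d):
        # Histogram-based density estimate along this latent dimension.
        hist, bin_edges = np.histogram(latents[:, j], bins=num_bins, density=True)
        bin_idx = np.clip(np.digitize(latents[:, j], bin_edges[1:-1]), 0, num_bins - 1)
        density = hist[bin_idx]
        # Up-weight samples that fall in sparsely populated bins.
        weights *= 1.0 / (density + alpha)
    return weights / weights.sum()

# Example: sample a training batch with the debiased probabilities.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 8))          # stand-in for encoder outputs
probs = sampling_probabilities(latents)
batch_indices = rng.choice(len(latents), size=32, replace=False, p=probs)
```
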
1 Introduction

Machine learning (ML) systems are increasingly making decisions that impact the daily lives of individuals and society in general. For example, ML and artificial intelligence (AI) are already being used to determine if a human is eligible to receive a loan (Khandani, Kim, and Lo 2010), how long a criminal should spend in prison (Berk, Sorenson, and Barnes 2016), the order in which a person is presented the news (Nalisnick et al. 2016), or even diagnoses and treatments for medical patients (Mazurowski et al. 2008).

The development and deployment of fair and unbiased AI systems is crucial to prevent any unintended side effects and to ensure the long-term acceptance of these algorithms (Miller 2015; Courtland 2018). Even the seemingly simple task of facial recognition (Zafeiriou, Zhang, and Zhang 2015) has been shown to be subject to extreme amounts of algorithmic bias among select demographics (Buolamwini and Gebru 2018). For example, (Klare et al. 2012) analyzed the face detection system used by US law enforcement and discovered significantly lower accuracy for dark-skinned women aged 18 to 30. This is especially concerning since these facial recognition systems are often not deployed in isolation but rather as part of a larger surveillance or criminal detection pipeline (Abdullah et al. 2017).

While deep learning based systems have been shown to achieve state-of-the-art performance on many of these tasks, it has also been demonstrated that algorithms trained with biased data lead to algorithmic discrimination (Bolukbasi et al. 2016; Caliskan, Bryson, and Narayanan 2017). Recently, benchmarks quantifying discrimination (Kilbertus et al. 2017; Hardt et al. 2016) and even datasets designed to evaluate the fairness of these algorithms (Buolamwini and Gebru 2018) have emerged. However, the problem of severely imbalanced training datasets and the question of how to integrate debiasing capabilities into AI algorithms still remain largely unsolved.

In this paper, we tackle the challenge of integrating debiasing capabilities directly into a model training process that adapts automatically and without supervision to the shortcomings of the training data. Our approach features an end-to-end deep learning algorithm that simultaneously learns the desired task (e.g., facial detection) as well as the underlying latent structure of the training data. Learning the latent distributions in an unsupervised manner enables us to uncover hidden or implicit biases within the training data. Our algorithm, which is built on top of a variational autoencoder (VAE), is capable of identifying under-represented examples in the training dataset and subsequently increases the probability at which the learning algorithm samples these data points (Fig. 1).

We demonstrate how our algorithm can be used to debias a facial detection system trained on a biased dataset and to provide interpretations of the learned latent variables which our algorithm actively debiases against. Finally, we compare the performance of our debiased model to a standard deep learning classifier by evaluating racial and gender bias on the Pilot Parliaments Benchmark (PPB) dataset (Buolamwini and Gebru 2018).

The key contributions of this paper can be summarized as:

1. A novel, tunable debiasing algorithm which utilizes learned latent variables to adjust the respective sampling probabilities of individual data points while training;
2. A semi-supervised model for simultaneously learning a debiased classifier as well as the underlying latent variables governing the given classes; and
3. Analysis of our method for facial detection with biased training data, and evaluation on the PPB dataset to measure algorithmic fairness across race and gender.

The remainder of this paper is structured as follows: we summarize the related work in Sec. 2, formulate the model and debiasing algorithm in Sec. 3, describe our experimental results in Sec. 4, and provide concluding remarks in Sec. 5.

2 Related Work

Interventions that seek to introduce fairness into machine learning pipelines generally fall into one of three categories: those that use data pre-processing before training, in-processing during training, and post-processing after training. Several pre-processing and in-processing methods rely on new, artificially generated debiased data (Calmon et al. 2017) or resampling (More 2016). However, these approaches have largely focused on class imbalances, rather than variability within a class, and fail to use any information about the structure of the underlying latent features.

[...] the latent structure to the data, which necessitates manual annotation of the desired features. On the other hand, our approach debiases variability within a class automatically during training and learns the latent structure from scratch in an unsupervised manner.

Generating debiased data: Recent approaches have utilized generative models (Sattigeri et al. 2018) and data transformations (Calmon et al. 2017) to generate training data that is more 'fair' than the original dataset. For example, (Sattigeri et al. 2018) used a generative adversarial network (GAN) to output a reconstructed dataset similar to the input but more fair with respect to certain attributes. Preprocessing data transformations that mitigate discrimination, as in (Calmon et al. 2017), have also been proposed, yet such methods are not learned adaptively during training nor do they provide realistic training examples. In contrast to these works, we do not rely on artificially generated data, but rather use a resampled, more representative subset of the original dataset for debiasing.

Clustering to identify bias: Supervised learning approaches have also been used to characterize biases in imbalanced data sets. Specifically, k-means clustering has been employed to identify clusters in the input data prior to training and to inform resampling of the training data into a smaller set of representative examples (Nguyen, Bouzerdoum, and Phung 2008). However, this method does not extend to high-dimensional data like images or to cases where there is no notion of a data 'cluster', and it relies on significant pre-processing. Our proposed approach overcomes these limitations by learning the latent structure using a variational approach.
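
As a point of contrast with the approach developed in this paper, the following is a rough sketch of the clustering-and-resampling baseline described above: run k-means on the raw inputs, then draw a roughly equal number of examples from each cluster to form a smaller training subset. Names such as `rebalance_by_cluster` and the per-cluster quota are illustrative assumptions, not the exact procedure of (Nguyen, Bouzerdoum, and Phung 2008).

```python
import numpy as np
from sklearn.cluster import KMeans

def rebalance_by_cluster(X, n_clusters=10, per_cluster=100, seed=0):
    """Cluster the inputs and draw an (approximately) equal number of samples per cluster.

    This only works when the inputs are low-dimensional enough for k-means to find
    meaningful clusters, which is the limitation the paper points out.
    """
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, len(members))
        selected.extend(rng.choice(members, size=take, replace=False))
    return np.array(selected)

# Example on synthetic 2-D data: indices of a cluster-balanced training subset.
X = np.random.default_rng(1).normal(size=(5000, 2))
subset_idx = rebalance_by_cluster(X, n_clusters=5, per_cluster=50)
```
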
3 Methodology

Problem Setup

Consider the problem of binary classification in which we are presented with a set of paired training data samples $\mathcal{D}_{\text{train}} = \{(x^{(i)}, y^{(i)})\}_{i=1}^{n}$ consisting of features $x \in \mathbb{R}^m$ and labels $y \in \mathbb{R}^d$. Our goal is to [...]
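
Combining this problem setup with the model described in the introduction (a classifier trained jointly with a variational autoencoder over the latent structure), one possible architecture is sketched below in PyTorch: a shared encoder that produces a task logit together with the mean and log-variance of a latent Gaussian, plus a decoder that reconstructs the input. The class name, layer sizes, and the single-logit output head are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class DebiasingVAEClassifier(nn.Module):
    """Shared encoder producing a task logit plus VAE latent parameters."""

    def __init__(self, input_dim, latent_dim=8, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.classifier_head = nn.Linear(hidden_dim, 1)       # task output (e.g., face / not face)
        self.mu_head = nn.Linear(hidden_dim, latent_dim)      # latent mean
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)  # latent log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        x_hat = self.decoder(z)
        y_logit = self.classifier_head(h)
        return y_logit, x_hat, mu, logvar

# Example forward pass on a batch of flattened inputs.
model = DebiasingVAEClassifier(input_dim=64 * 64)
y_logit, x_hat, mu, logvar = model(torch.randn(16, 64 * 64))
```

The latent codes (mu) produced by such a model are what a re-weighting scheme like the histogram sketch shown after Figure 1 would consume when adjusting sampling probabilities during training.
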
