
Fair Generative Modeling via Weak Supervision

Kristy Choi¹*, Aditya Grover¹*, Trisha Singh², Rui Shu¹, Stefano Ermon¹

*Equal contribution. ¹Department of Computer Science, Stanford University. ²Department of Statistics, Stanford University. Correspondence to: Kristy Choi <[email protected]>.

Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).

Abstract

Real-world datasets are often biased with respect to key demographic factors such as race and gender. Due to the latent nature of the underlying factors, detecting and mitigating bias is especially challenging for unsupervised machine learning. We present a weakly supervised algorithm for overcoming dataset bias for deep generative models. Our approach requires access to an additional small, unlabeled reference dataset as the supervision signal, thus sidestepping the need for explicit labels on the underlying bias factors. Using this supplementary dataset, we detect the bias in existing datasets via a density ratio technique and learn generative models which efficiently achieve the twin goals of: 1) data efficiency, by using training examples from both the biased and reference datasets for learning; and 2) data generation close in distribution to the reference dataset at test time. Empirically, we demonstrate the efficacy of our approach, which reduces bias w.r.t. latent factors by an average of up to 34.6% over baselines for comparable image generation using generative adversarial networks.

1. Introduction

Increasingly, many applications of machine learning (ML) involve data generation. Examples of such production-level systems include Transformer-based models such as BERT and GPT-3 for natural language generation (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020), WaveNet for text-to-speech synthesis (Oord et al., 2017), and a large number of creative applications, such as Coconet, used for designing the "first AI-powered Google Doodle" (Huang et al., 2017). As these generative applications become more prevalent, it becomes increasingly important to consider questions regarding the potential discriminatory nature of such systems and ways to mitigate it (Podesta et al., 2014). For example, some natural language generation systems trained on internet-scale datasets have been shown to produce generations that are biased towards certain demographics (Sheng et al., 2019).

A variety of socio-technical factors contribute to the discriminatory nature of ML systems (Barocas et al., 2018). A major factor is the existence of biases in the training data itself (Torralba et al., 2011; Tommasi et al., 2017). Since data is the fuel of ML, any existing bias in the dataset can be propagated to the learned model (Barocas & Selbst, 2016). This is a particularly pressing concern for generative models, which can easily amplify the bias by generating more of the biased data at test time. Further, learning a generative model is fundamentally an unsupervised learning problem, and hence the bias factors of interest are typically latent. For example, while learning a generative model of human faces, we often do not have access to attributes such as gender, race, and age. Any existing bias in the dataset with respect to these attributes is easily picked up by deep generative models. See Figure 1 for an illustration.

[Figure 1. Samples from a baseline BigGAN that reflect the gender bias underlying the true data distribution in CelebA. All faces above the orange line (67%) are classified as female, while the rest are labeled as male (33%).]

In this work, we present a weakly-supervised approach to learning fair generative models in the presence of dataset bias. Our source of weak supervision is motivated by the observation that obtaining multiple unlabelled (biased) datasets is relatively cheap for many domains in the big data era. Among these data sources, we may wish to generate samples that are close in distribution to a particular target (reference) dataset.¹ As a concrete example of such a reference, organizations such as the World Bank and biotech firms (23&me, 2016; Hong, 2016) typically follow several good practices to ensure representativeness in the datasets that they collect, though such methods do not scale to large dataset sizes. We note that neither of our datasets needs to be labeled w.r.t. the latent bias attributes, and the size of the reference dataset can be much smaller than that of the biased dataset. Hence, the level of supervision we require is weak.

¹We note that while there may be no concept of a dataset devoid of bias, carefully designed representative data collection practices may be more accurately reflected in some data sources (Gebru et al., 2018), which can be considered as reference datasets.

Using a reference dataset to augment a biased dataset, our goal is to learn a generative model that best approximates the desired, reference data distribution. Simply using the reference dataset alone for learning is an option, but this may not suffice since this dataset can be too small to learn an expressive model that accurately captures the underlying reference data distribution. Our approach to learning a fair generative model that is robust to biases in the larger training set is based on importance reweighting. In particular, we learn a generative model which reweights the data points in the biased dataset based on the ratio of densities assigned by the biased data distribution as compared to the reference data distribution. Since we do not have access to explicit densities under either of the two distributions, we estimate the weights using a probabilistic classifier (Sugiyama et al., 2012; Mohamed & Lakshminarayanan, 2016).
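To make the classifier-based reweighting concrete, the following is a minimal sketch of the density-ratio idea (Sugiyama et al., 2012): train a binary classifier to distinguish reference from biased examples, then convert its probabilities into importance weights via Bayes' rule. The feature representation, classifier choice, and clipping below are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch: estimate importance weights w(x) ~ p_ref(x) / p_bias(x)
# with a binary classifier (classifier-based density-ratio estimation).
# The logistic-regression classifier on raw features is an assumption
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(x_bias, x_ref):
    """x_bias: (n_bias, d) examples from the biased dataset.
       x_ref:  (n_ref, d) examples from the small reference dataset.
       Returns one importance weight per example in x_bias."""
    X = np.concatenate([x_bias, x_ref], axis=0)
    y = np.concatenate([np.zeros(len(x_bias)), np.ones(len(x_ref))])

    # c(x) ~ P(Y = ref | x); any calibrated probabilistic classifier works.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    c = clf.predict_proba(x_bias)[:, 1]

    # Bayes' rule: p_ref(x) / p_bias(x) = c(x) / (1 - c(x)) * n_bias / n_ref.
    prior_correction = len(x_bias) / len(x_ref)
    w = np.clip(c / (1.0 - c), 1e-6, 1e6) * prior_correction
    return w
```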
We test our weakly-supervised approach by learning generative adversarial networks on the CelebA dataset (Ziwei Liu & Tang, 2015). The dataset contains attributes such as gender and hair color, which we use for designing biased and reference data splits and for subsequent evaluation. We empirically demonstrate how the reweighting approach can offset dataset bias in a wide range of settings. In particular, we obtain improvements of up to 36.6% (49.3% for bias=0.9 and 23.9% for bias=0.8) for single-attribute dataset bias and 32.5% for multi-attribute dataset bias, on average over baselines, in reducing the bias with respect to the latent factors for comparable sample quality.

2. Problem Setup

2.1. Background

We assume there exists a true (unknown) data distribution p_data : X → R≥0 over a set of d observed variables x ∈ R^d. In generative modeling, our goal is to learn the parameters θ ∈ Θ of a distribution p_θ : X → R≥0 over the observed variables x, such that the model distribution p_θ is close to p_data. Depending on the choice of learning algorithm, different approaches have been previously considered. Broadly, these include adversarial training, e.g., GANs (Goodfellow et al., 2014), and maximum likelihood estimation (MLE), e.g., variational autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and normalizing flows (Dinh et al., 2014), or hybrids (Grover et al., 2018). Our bias mitigation framework is agnostic to the above training approaches.

For generality, we consider expectation-based learning objectives, where ℓ(·) is a per-example loss that depends on both examples x drawn from a dataset D and the model parameters θ:

$$\mathbb{E}_{x \sim p_{\text{data}}}\left[\ell(x, \theta)\right] \;\approx\; \frac{1}{T} \sum_{i=1}^{T} \ell(x_i, \theta) \;:=\; \mathcal{L}(\theta; \mathcal{D}) \qquad (1)$$

The above expression encompasses a broad class of MLE and adversarial objectives. For example, if ℓ(·) denotes the negative log-likelihood assigned to the point x under p_θ, then we recover the MLE training objective.
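Spelled out, substituting the negative log-likelihood for the per-example loss in Eq. (1) gives the familiar empirical MLE objective:

$$\ell(x, \theta) = -\log p_\theta(x) \quad\Longrightarrow\quad \mathcal{L}(\theta; \mathcal{D}) = -\frac{1}{T} \sum_{i=1}^{T} \log p_\theta(x_i)$$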
2.2. Dataset Bias

The standard assumption for learning a generative model is that we have access to a sufficiently large dataset D_ref of training examples, where each x ∈ D_ref is assumed to be sampled independently from a reference distribution p_data = p_ref. In practice, however, collecting large datasets that are i.i.d. w.r.t. p_ref is difficult due to a variety of socio-technical factors. The sample complexity for learning high-dimensional distributions can even be doubly exponential in the dimension in many cases (Arora et al., 2018), surpassing the size of the largest available datasets.

We can partially offset this difficulty by considering data from alternate sources related to the target distribution, e.g., images scraped from the Internet. However, these additional datapoints are not expected to be i.i.d. w.r.t. p_ref.

We characterize this phenomenon as dataset bias, where we assume the availability of a dataset D_bias, such that the examples x ∈ D_bias are sampled independently from a biased (unknown) distribution p_bias that is different from p_ref but shares the same support.

2.3. Evaluation

Evaluating generative models and fairness in machine learning are both open areas of research. Our work is at the intersection of these two fields, and we propose the following metrics for measuring bias mitigation for data generation.

Sample Quality: We employ sample quality metrics, e.g., Fréchet Inception Distance (FID) (Heusel et al., 2017), Kernel Inception Distance (KID) (Li et al., 2017), etc.

One extreme is to ignore D_bias and consider learning p_θ based on D_ref alone. Since we only consider proper losses w.r.t. p_ref, global optimization of the objective in Eq. (1) in a well-specified model family will recover the true data distribution as |D_ref| → ∞. However, since D_ref is finite in practice, this is likely to give poor sample quality even though the fairness discrepancy would be low.

On the other extreme, we can consider learning p_θ based on the full dataset consisting of both D_ref and D_bias.
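Between these two extremes, the importance weights estimated earlier suggest a natural middle ground: pool both datasets but reweight the biased examples so that the empirical objective in Eq. (1) targets p_ref rather than p_bias. The sketch below is only an illustration of how density-ratio weights can enter a per-example objective, shown for an MLE-style loss; it is not the exact training procedure of the paper, which applies the reweighting when training GANs, and `log_prob` is an assumed stand-in for a model with tractable densities.

```python
# Sketch: importance-weighted pooling of D_ref and D_bias so that the
# empirical objective of Eq. (1) estimates E_{x ~ p_ref}[l(x, theta)].
# Reference examples keep weight 1; biased examples are weighted by
# w(x) ~ p_ref(x) / p_bias(x) (e.g. from estimate_importance_weights above).
import torch

def weighted_objective(log_prob, x_ref, x_bias, w_bias):
    loss_ref = -log_prob(x_ref)                # l(x, theta) on reference examples
    loss_bias = w_bias * (-log_prob(x_bias))   # reweighted l(x, theta) on biased examples
    # Averaging over the pooled examples gives (up to normalization of the
    # weights) a consistent estimate of E_{p_ref}[l(x, theta)].
    return torch.cat([loss_ref, loss_bias]).mean()
```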