
Fairness-Aware Learning for Continuous Attributes and Treatments

Jérémie Mary 1, Clément Calauzènes 1, Noureddine El Karoui 1 2

1 Criteo AI Lab, Paris, France. 2 University of California, Berkeley, USA. Correspondence to: J. Mary <[email protected]>, C. Calauzènes <[email protected]>, N. El Karoui <[email protected]>.

Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).

Abstract

We address the problem of algorithmic fairness: ensuring that the outcome of a classifier is not biased towards certain values of sensitive variables such as age, race or gender. As common fairness metrics can be expressed as measures of (conditional) independence between variables, we propose to use the Rényi maximum correlation coefficient to generalize fairness measurement to continuous variables. We exploit Witsenhausen's characterization of the Rényi correlation coefficient to propose a differentiable implementation linked to f-divergences. This allows us to generalize fairness-aware learning to continuous variables by using a penalty that upper bounds this coefficient, and extends fairness to variables such as mixed ethnic groups or financial status without threshold effects. The penalty can be estimated on mini-batches, which allows its use with deep networks. Experiments show favorable comparisons to the state of the art on binary variables and prove the ability to protect continuous ones (code available at https://github.com/criteo-research/continuous-fairness).

1. Introduction

Ensuring that sensitive information (e.g. knowledge about the ethnic group of an individual) does not "unfairly" influence the outcome of a learning algorithm is receiving increasing attention with the spread of AI tools in society. To achieve this goal, as discussed in more detail in the related work Section 3.1, there are three families of approaches: first, modify a pretrained classifier while minimizing the impact on performance (Hardt et al., 2016; Pleiss et al., 2017); second, enforce fairness during training, possibly at the cost of convexity (Zafar et al., 2017); and third, modify the data representation and use classical algorithms (Zemel et al., 2013; Donini et al., 2018). As formulated by Hardt et al. (2016), the core ingredient of algorithmic fairness is the ability to estimate and guarantee (conditional) independence between two well-chosen random variables, typically involving the decision made by the algorithm, the variables to protect, and the "positive" outcome. We will call these two random variables $U$ and $V$ in the rest of this introduction. While in all generality $U$ and $V$ can be continuous variables (for example a predicted probability, or a variable such as time), most of the work in fairness has so far focused on protecting categorical variables. In this work we relax this assumption.

From an applied perspective this is desirable since it avoids treating continuous values as predetermined "categorical bins", which introduce threshold effects in the learnt model. These thresholds make no real sense when considering age, ethnic proportions or a gender-fluidity measure. Moreover, a smooth and continuous way to describe fairness constraints, one that also respects the order of the elements (e.g. 10 yo < 11 yo), is important. As a real-world example, Daniels et al. (2000) pointed to financial status as a sensitive variable for health care.

From a statistical point of view, given that the dependence between $U$ and $V$ can be arbitrarily complex, measuring dependence is challenging. On one side of the spectrum, the empirical one, simple and tractable correlation coefficients have been introduced, such as Pearson's rho, Spearman's rank or Kendall's tau. Sadly, while such correlation coefficients are able to disprove independence, they are not able to prove it: they only express necessary conditions for independence.
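For instance (a quick numerical check; this script is illustrative and not part of the paper's experiments), take a standard Gaussian $X$ and $Y = X^2$: $Y$ is a deterministic function of $X$, hence fully dependent on it, yet all three coefficients come out approximately zero because the relationship is non-monotonic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x ** 2  # deterministic function of x: maximal dependence

# All three classical coefficients fail to detect it:
print(stats.pearsonr(x, y)[0])    # ~0
print(stats.spearmanr(x, y)[0])   # ~0
print(stats.kendalltau(x, y)[0])  # ~0
```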
On the opposite side, the theoretical one, Gebelein (1941) introduced the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient (HGR), which has the desirable property of proving independence when it evaluates to zero (Rényi, 1959), but is computationally intractable in the general case. However, HGR is a measure of dependence which takes values in $[0, 1]$ and is independent of the marginals of $U$ and $V$, which allows a regulator to use it with an absolute threshold as a fairness criterion.

Many authors have proposed principled ways to estimate this coefficient. Early approaches include Witsenhausen's characterization (Witsenhausen, 1975), which characterizes HGR as the second singular value of a well-chosen matrix, as discussed in Subsection 2.3. Later, Breiman & Friedman (1985a) used non-parametric estimates and effectively applied the power method to the operator $U = P_U P_V$, where $P_V$ (resp. $P_U$) denotes the conditional expectation operator w.r.t. $V$ (resp. $U$). More recently, Lopez-Paz et al. (2013) exploited the largest canonical correlation between random non-linear projections of the respective empirical copula-transformations of $U$ and $V$ and showed excellent empirical behavior even for multi-dimensional variables, extending the work of Bach & Jordan (2003), Gretton et al. (2005) and Reshef et al. (2011).

We first aim to take advantage of these advances in the measurement of independence for algorithmic fairness. Simultaneously, the rise of deep learning and of differentiable programming advocates the use of differentiable approximations with good first-order behavior and limited computational cost, so that they can serve to penalize neural nets. In this work we derive a differentiable non-parametric estimate based on Witsenhausen's characterization combined with Kernel Density Estimation (KDE). We demonstrate the empirical performance of this estimate and tightly upper bound it by a quantity which is an f-divergence. The proposed bound is attained as soon as one of the two random variables ($U$ or $V$) is binary valued. Note that f-divergences between a joint distribution and the product of its marginals are invariant under the action of invertible maps (Nelsen, 2010) and can be used as measures of dependence.
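To make the construction concrete, here is a minimal sketch of an estimate in this spirit (our illustration, not the authors' released code, which is available at the repository cited above; the function name, `grid_size` and `bandwidth` are arbitrary choices). It builds a Gaussian KDE of the joint density of $U$ and $V$ on a grid, then computes the $\chi^2$-divergence between the joint and the product of its marginals as an example of such an f-divergence. Since the $\chi^2$-divergence equals the sum of the squared singular values of Witsenhausen's matrix minus one, its square root upper-bounds the second singular value, i.e. HGR.

```python
import torch

def hgr_chi2_bound(u, v, grid_size=50, bandwidth=0.1):
    """Differentiable upper bound on HGR(u, v) for 1-D samples:
    KDE of the joint density on a grid, then the chi^2-divergence
    between the joint and the product of its marginals."""
    # Rescale both samples to [0, 1] so a fixed grid covers them.
    u = (u - u.min()) / (u.max() - u.min() + 1e-9)
    v = (v - v.min()) / (v.max() - v.min() + 1e-9)

    grid = torch.linspace(0.0, 1.0, grid_size)
    # Gaussian kernel weight of each sample at each grid point.
    ku = torch.exp(-0.5 * ((u[:, None] - grid[None, :]) / bandwidth) ** 2)
    kv = torch.exp(-0.5 * ((v[:, None] - grid[None, :]) / bandwidth) ** 2)

    joint = ku.t() @ kv          # unnormalized joint density on the grid
    joint = joint / joint.sum()  # normalize into a probability table
    pu = joint.sum(dim=1)        # marginal of u on the grid
    pv = joint.sum(dim=0)        # marginal of v on the grid

    # chi^2(joint || pu x pv) = sum_ij joint_ij^2 / (pu_i * pv_j) - 1.
    chi2 = (joint ** 2 / (pu[:, None] * pv[None, :] + 1e-9)).sum() - 1.0
    # chi^2 is the sum of the squared singular values of Witsenhausen's
    # matrix except the first (which is 1), so sqrt(chi^2) >= HGR.
    return torch.sqrt(torch.clamp(chi2, min=0.0))
```

Being a composition of differentiable operations, such a quantity can be added as a fairness penalty, e.g. `loss + lam * hgr_chi2_bound(y_pred, z)`, to any mini-batch objective optimized by stochastic gradient.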
As a second contribution, we demonstrate that our upper bound on the HGR coefficient can be used to penalize the learning of a model. It empirically proves to perform well even when estimated on mini-batches, allowing its use in conjunction with neural networks trained by stochastic gradient descent. Another key difference is that we are able to deal with sensitive variables which are continuous. Our approach also extends the work of Kamishima et al. (2011), who proposed to use an estimate of the Mutual Information (MI) as a penalty, but whose estimation method is restricted to categorical variables. Moreover, even when extending the MI to the continuous case, we show that our regularizer yields both better models and less sensitivity to the value of hyper-parameters.

The rest of the paper is organized as follows. First, we make explicit the link between measures of independence and different fairness criteria such as Disparate Impact and Equalized Odds. Secondly, we introduce our approximation of HGR, which is able to deal with continuous variables, and show that, used as a training penalty, it matches or outperforms the state of the art when the dataset is large enough. Finally, we show that our approach generalizes to continuous sensitive attributes.

2. HGR as a Fairness Criterion

2.1. Overview of Fairness Criteria

Many notions of fairness are currently being investigated and there is not yet a consensus on which ones are the most appropriate (Hardt et al., 2016; Zafar et al., 2017; Dwork et al., 2012). Indeed, the choice requires not only statistical and causal arguments, but also ethical ones. A common thread between these metrics is that they rely on statistical independence. Fairness meets machine learning when one tries to build a prediction $\hat{Y}$ of some variable $Y$ (e.g. default of payment) based on some available information $X$ (e.g. credit card history), a prediction that may be biased or unfair with respect to some sensitive attribute $Z$ (e.g. gender).

An initial notion proposed to measure fairness was the demographic parity of the prediction, i.e. whether $P(\hat{Y} = 1 \mid Z = 1) = P(\hat{Y} = 1 \mid Z = 0)$, which is often measured by the disparate impact criterion (Feldman et al., 2015):

$$\mathrm{DI} = \frac{P(\hat{Y} = 1 \mid Z = 0)}{P(\hat{Y} = 1 \mid Z = 1)}$$

This criterion is even part of the US Equal Employment Opportunity Commission recommendation (EEOC, 1979), which advocates that it should not be below 0.8, a requirement also known as the 80% rule. While initially defined for binary variables, demographic parity is easily generalized to the requirement $\hat{Y} \perp\!\!\!\perp Z$, even when $Z$ is non-binary.

Demographic parity was criticized for ignoring confounding variables that may explain an already existing correlation in the data between $Y$ and $Z$. For instance, a model selecting 10% of men at random and choosing the best 10% of women would be perfectly fair w.r.t. DI. To partially overcome these limitations, Equalized Odds was introduced by (Zafar et al., 2017; Hardt et al., 2016) as a measurement of whether $P(\hat{Y} = 1 \mid Z = 1, Y = y) = P(\hat{Y} = 1 \mid Z = 0, Y = y)$ for any $y$. The particular case of $y = 1$ only is referred to as Equal Opportunity (EO) and is commonly measured by the difference of EO:

$$\mathrm{DEO} = P(\hat{Y} = 1 \mid Z = 1, Y = 1) - P(\hat{Y} = 1 \mid Z = 0, Y = 1)$$

Again, similarly to Demographic Parity, Equal Opportunity can be represented equivalently by a notion of independence, this time conditional, between $\hat{Y}$ and $Z$ given $Y = 1$.
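Both criteria are direct to estimate from finite samples. As a quick reference (a sketch assuming binary 0/1 numpy arrays; the function names are ours):

```python
import numpy as np

def disparate_impact(y_pred, z):
    """DI = P(Y_hat = 1 | Z = 0) / P(Y_hat = 1 | Z = 1)."""
    return y_pred[z == 0].mean() / y_pred[z == 1].mean()

def diff_equal_opportunity(y_pred, y_true, z):
    """DEO = P(Y_hat = 1 | Z = 1, Y = 1) - P(Y_hat = 1 | Z = 0, Y = 1)."""
    pos = y_true == 1
    return y_pred[(z == 1) & pos].mean() - y_pred[(z == 0) & pos].mean()
```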