The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns

YooJung Choi,1* Golnoosh Farnadi,2,3* Behrouz Babaki,4* Guy Van den Broeck1
1University of California, Los Angeles, 2Mila, 3Université de Montréal, 4Polytechnique Montréal
{yjchoi, guyvdb}@cs.ucla.edu, [email protected], [email protected]
*Equal contribution. Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making. Existing methods often assume a fixed set of observable features to define individuals, but lack a discussion of certain features not being observed at test time. In this paper, we study fairness of naive Bayes classifiers, which allow partial observations. In particular, we introduce the notion of a discrimination pattern, which refers to an individual receiving different classifications depending on whether some sensitive attributes were observed. Then a model is considered fair if it has no such pattern. We propose an algorithm to discover and mine for discrimination patterns in a naive Bayes classifier, and show how to learn maximum-likelihood parameters subject to these fairness constraints. Our approach iteratively discovers and eliminates discrimination patterns until a fair model is learned. An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by only adding a small fraction of them as constraints.

1 Introduction

With the increasing societal impact of machine learning come increasing concerns about the fairness properties of machine learning models and how they affect decision making. For example, concerns about fairness come up in policing (Mohler et al. 2018), recidivism prediction (Chouldechova 2017), insurance pricing (Kusner et al. 2017), hiring (Datta, Tschantz, and Datta 2015), and credit rating (Henderson et al. 2015). The algorithmic fairness literature has proposed various solutions, from limiting the disparate treatment of similar individuals to giving statistical guarantees on how classifiers behave towards different populations. Key approaches include individual fairness (Dwork et al. 2012; Zemel et al. 2013), statistical parity, disparate impact and group fairness (Kamishima et al. 2012; Feldman et al. 2015; Chouldechova 2017), counterfactual fairness (Kusner et al. 2017), preference-based fairness (Zafar et al. 2017a), relational fairness (Farnadi, Babaki, and Getoor 2018), and equality of opportunity (Hardt et al. 2016). The goal in these works is usually to assure the fair treatment of individuals or groups that are identified by sensitive attributes.

In this paper, we study fairness properties of probabilistic classifiers that represent joint distributions over the features and decision variable. In particular, Bayesian network classifiers treat the classification or decision-making task as a probabilistic inference problem: given observed features, compute the probability of the decision variable. Such models have the unique ability to handle missing features naturally, by simply marginalizing them out of the distribution when they are not observed at prediction time. Hence, a Bayesian network classifier effectively embeds exponentially many classifiers, one for each subset of observable features. We ask whether such classifiers exhibit patterns of discrimination where similar individuals receive markedly different outcomes purely because they disclosed a sensitive attribute.
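To make the preceding point concrete, here is a minimal sketch of naive Bayes prediction under partial observations (our illustration, not code from the paper; the CPT values are hypothetical). Because the features are conditionally independent given D, marginalizing out an unobserved feature amounts to dropping its factor, so a single parameterization answers the query for every subset of observed features:

```python
from itertools import chain, combinations

# Hypothetical CPTs for a naive Bayes model with three binary features.
# Each entry is (P(feature = true | d), P(feature = true | not d)).
P_D = 0.3
CPT = {"A": (0.9, 0.4), "B": (0.6, 0.2), "C": (0.7, 0.5)}

def predict(evidence):
    """Compute P(d | evidence) for evidence on any subset of the features.
    Unobserved features are marginalized out, which for a naive Bayes
    model simply means leaving their factors out of the two products."""
    num, den = P_D, 1.0 - P_D
    for var, val in evidence.items():
        p_d, p_nd = CPT[var]
        num *= p_d if val else 1.0 - p_d
        den *= p_nd if val else 1.0 - p_nd
    return num / (num + den)

# The model embeds one classifier per subset of observable features:
# 2^3 = 8 distinct classifiers for these three features.
for subset in chain.from_iterable(combinations(CPT, r) for r in range(len(CPT) + 1)):
    print(subset, round(predict({var: True for var in subset}), 3))
```

Each of the eight printed probabilities comes from a different effective classifier, which is exactly why fairness must be checked for every subset of observed features.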
The first key contribution of this paper is an algorithm to verify whether a Bayesian classifier is fair, or else to mine the classifier for discrimination patterns. We propose two alternative criteria for identifying the most important discrimination patterns that are present in the classifier. We specialize our pattern miner to efficiently discover discrimination patterns in naive Bayes models using branch-and-bound search. These classifiers are often used in practice because of their simplicity and tractability, and they allow for the development of effective bounds. Our empirical evaluation shows that naive Bayes models indeed exhibit vast numbers of discrimination patterns, and that our pattern mining algorithm is able to find them by traversing only a small fraction of the search space.

The second key contribution of this paper is a parameter learning algorithm for naive Bayes classifiers that ensures that no discrimination patterns exist in the learned distribution. We propose a signomial programming approach to eliminate individual patterns of discrimination during maximum-likelihood learning. Moreover, to efficiently eliminate the exponential number of patterns that could exist in a naive Bayes classifier, we propose a cutting-plane approach that uses our discrimination pattern miner to find and iteratively eliminate discrimination patterns until the entire learned model is fair. Our empirical evaluation shows that this process converges in a small number of iterations, effectively removing millions of discrimination patterns. Moreover, the learned fair models are of high quality, achieving likelihoods that are close to the best likelihoods attained by models with no fairness constraints. Our method also achieves higher accuracy than other methods of learning fair naive Bayes models.

2 Problem Formalization

We use uppercase letters for random variables and lowercase letters for their assignments. Sets of variables and their joint assignments are written in bold. Negation of a binary assignment x is denoted x̄, and x ⊨ y means that x logically implies y. Concatenation of sets XY denotes their union. Each individual is characterized by an assignment to a set of discrete variables Z, called attributes or features. Assignment d to a binary decision variable D represents a decision made in favor of the individual (e.g., a loan approval). A set of sensitive attributes S ⊂ Z specifies a group of entities often protected by law, such as gender and race. We now define the notion of a discrimination pattern.

Definition 1. Let P be a distribution over D ∪ Z. Let x and y be joint assignments to X ⊆ S and Y ⊆ Z \ X, respectively. The degree of discrimination of xy is:

    Δ_{P,d}(x, y) ≜ P(d | xy) − P(d | y).

The assignment y identifies a group of similar individuals, and the degree of discrimination quantifies how disclosing sensitive information x affects the decision for this group. Note that sensitive attributes missing from x can still appear in y. We drop the subscripts P, d when clear from context.

Definition 2. Let P be a distribution over D ∪ Z, and δ ∈ [0, 1] a threshold. Joint assignments x and y form a discrimination pattern w.r.t. P and δ if: (1) X ⊆ S and Y ⊆ Z \ X; and (2) |Δ_{P,d}(x, y)| > δ.

Intuitively, we do not want information about the sensitive attributes to significantly affect the probability of getting a favorable decision. Let us consider two special cases of discrimination patterns. First, if Y = ∅, then a small discrimination score |Δ(x, ∅)| can be interpreted as an approximation of statistical parity, which is achieved when P(d | x) = P(d). For example, the naive Bayes network in Figure 1 satisfies approximate parity for δ = 0.2 as |Δ(x, ∅)| = 0.086 ≤ δ and |Δ(x̄, ∅)| = 0.109 ≤ δ. Second, when all features are fully observed (XY = Z), small degrees of discrimination correspond to a form of individual fairness; the network in Figure 1 is individually fair for δ = 0.2 because max_{x,y1,y2} |Δ(x, y1y2)| = 0.167 ≤ δ. We discuss these connections more in Section 5.

Figure 1: Naive Bayes classifier with a sensitive attribute X and non-sensitive attributes Y1, Y2. [The figure shows the network D → X, Y1, Y2 with CPTs P(d) = 0.2; P(x | d) = 0.8, P(x | d̄) = 0.5; P(y1 | d) = 0.7, P(y1 | d̄) = 0.1; P(y2 | d) = 0.8, P(y2 | d̄) = 0.3.]
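The numbers in this example can be checked mechanically. The following sketch (our illustration; the helper names are ours) encodes the conditional probability tables from Figure 1 and evaluates the degree of discrimination Δ(x, y) = P(d | xy) − P(d | y) by naive Bayes inference:

```python
# CPTs from Figure 1: (P(. | d), P(. | not d)) per feature; X is sensitive.
P_D = 0.2
CPT = {"X": (0.8, 0.5), "Y1": (0.7, 0.1), "Y2": (0.8, 0.3)}

def posterior(evidence):
    """P(d | evidence); unobserved features are marginalized out."""
    num, den = P_D, 1.0 - P_D
    for var, val in evidence.items():
        p_d, p_nd = CPT[var]
        num *= p_d if val else 1.0 - p_d
        den *= p_nd if val else 1.0 - p_nd
    return num / (num + den)

def delta(x, y):
    """Degree of discrimination Delta(x, y) = P(d | x, y) - P(d | y)."""
    return posterior({**x, **y}) - posterior(y)

print(round(delta({"X": True}, {}), 3))             # 0.086: approximate parity holds
print(round(delta({"X": False}, {"Y1": True}), 3))  # -0.225: the pattern discussed next
```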
Even though the example network is (approximately) fair both at the group level and at the individual level with fully observed features, it may still produce a discrimination pattern. In particular, |Δ(x̄, y1)| = 0.225 > δ. That is, a person with x̄ and y1 observed and the value of Y2 undisclosed would receive a much more favorable decision had they not disclosed X as well. Hence, we naturally wish to ensure that there exists no discrimination pattern across all subsets of observable features.

Definition 3. A distribution P is δ-fair if there exists no discrimination pattern w.r.t. P and δ.

Although our notion of fairness applies to any distribution, finding discrimination patterns can be computationally challenging: computing the degree of discrimination involves probabilistic inference, which is hard in general, and a given distribution may have exponentially many patterns. In this paper, we demonstrate how to discover and eliminate discrimination patterns of a naive Bayes classifier effectively by exploiting its independence assumptions. Concretely, we answer the following questions: (1) Can we certify that a classifier is δ-fair? (2) If not, can we find the most important discrimination patterns? (3) Can we learn a naive Bayes classifier that is entirely δ-fair?

3 Discovering Discrimination Patterns and Verifying δ-fairness

This section describes our approach to finding discrimination patterns or checking that there are none.

3.1 Searching for Discrimination Patterns

One may naively enumerate all possible patterns and compute their degrees of discrimination. However, this would be very inefficient, as there are exponentially many subsets and assignments to consider. Instead, our pattern miner (Algorithm 1) organizes the search as a branch-and-bound recursion: a candidate pattern is extended only when an upper bound UB on the degree of discrimination attainable by extending it exceeds δ, and the entire subtree is pruned otherwise.

Algorithm 1 DISC-PATTERNS(x, y, E)
Input: P: distribution over D ∪ Z; δ: discrimination threshold
Data: x ← ∅, y ← ∅, E ← ∅, L ← []
Output: discrimination patterns L
1: for all assignments z to some selected variable Z ∈ Z \ XYE do
2:     if Z ∈ S then
3:         if |Δ(xz, y)| > δ then add (xz, y) to L
4:         if UB(xz, y, E) > δ then DISC-PATTERNS(xz, y, E)
5:     if |Δ(x, yz)| > δ then add (x, yz) to L
6:     if UB(x, yz, E) > δ then DISC-PATTERNS(x, yz, E)
7: if UB(x, y, E ∪ {Z}) > δ then DISC-PATTERNS(x, y, E ∪ {Z})
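The recursion of Algorithm 1 translates directly into code. In the sketch below (ours, not the authors' implementation), Model.ub stands in for the upper bound UB, which the paper derives for naive Bayes models but which is not part of this excerpt; the trivial bound 1.0 keeps the search correct while reducing it to exhaustive enumeration, so all pruning power comes from supplying a tighter bound:

```python
class Model:
    """The naive Bayes model of Figure 1, with a placeholder bound."""
    P_D = 0.2
    CPT = {"X": (0.8, 0.5), "Y1": (0.7, 0.1), "Y2": (0.8, 0.3)}
    variables = list(CPT)
    sensitive = {"X"}

    def posterior(self, evidence):
        """P(d | evidence) by marginalizing out unobserved features."""
        num, den = self.P_D, 1.0 - self.P_D
        for var, val in evidence.items():
            p_d, p_nd = self.CPT[var]
            num *= p_d if val else 1.0 - p_d
            den *= p_nd if val else 1.0 - p_nd
        return num / (num + den)

    def delta(self, x, y):
        return self.posterior({**x, **y}) - self.posterior(y)

    def ub(self, x, y, E):
        return 1.0  # placeholder for UB(x, y, E); tighter bounds enable pruning

def disc_patterns(x, y, E, model, threshold, found):
    """Branch-and-bound search of Algorithm 1 (DISC-PATTERNS)."""
    candidates = [Z for Z in model.variables
                  if Z not in x and Z not in y and Z not in E]
    if not candidates:
        return
    Z = candidates[0]  # "some selected variable"; the selection heuristic is left open
    for z in (True, False):  # all assignments z to the binary variable Z
        if Z in model.sensitive:  # lines 2-4: extend the sensitive part x
            xz = {**x, Z: z}
            if abs(model.delta(xz, y)) > threshold:
                found.append((xz, y))
            if model.ub(xz, y, E) > threshold:
                disc_patterns(xz, y, E, model, threshold, found)
        yz = {**y, Z: z}  # lines 5-6: extend the non-sensitive part y
        if abs(model.delta(x, yz)) > threshold:
            found.append((x, yz))
        if model.ub(x, yz, E) > threshold:
            disc_patterns(x, yz, E, model, threshold, found)
    if model.ub(x, y, E | {Z}) > threshold:  # line 7: exclude Z from the pattern
        disc_patterns(x, y, E | {Z}, model, threshold, found)

found = []
disc_patterns({}, {}, set(), Model(), 0.2, found)
print(found)  # [({'X': False}, {'Y1': True})]: the lone delta = 0.2 pattern in Figure 1
```

Fixing a deterministic variable-selection order makes each pattern reachable by exactly one search path, so no duplicates are reported.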
