Faithful and Customizable Explanations of Black Box Models

Himabindu Lakkaraju, Harvard University
Ece Kamar, Microsoft Research
Rich Caruana, Microsoft Research
Jure Leskovec, Stanford University

ABSTRACT

As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework which facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework provides end users (e.g., doctors) with the flexibility of customizing the model explanations by allowing them to input the features of interest. The construction of explanations is guided by a novel objective function that we propose to simultaneously optimize for fidelity to the original model, unambiguity, and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets, each of which captures the behavior of the given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation with real-world datasets and user studies demonstrates that our approach can generate customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models compared to state-of-the-art baselines.

CCS CONCEPTS

• Computing methodologies → Supervised learning; Cost-sensitive learning.

KEYWORDS

Interpretable machine learning, Decision making, Black box models

ACM Reference Format:
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2019. Faithful and Customizable Explanations of Black Box Models. In AAAI/ACM Conference on AI, Ethics, and Society (AIES '19), January 27–28, 2019, Honolulu, HI, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3306618.3314229

1 INTRODUCTION

The successful adoption of predictive models for real-world decision making hinges on how much decision makers (e.g., doctors, judges) can understand and trust their functionality. Only if decision makers have a clear understanding of the behavior of predictive models can they evaluate when and how much to depend on these models, detect potential biases in them, and develop strategies for further model refinement. However, the increasing complexity and the proprietary nature of predictive models employed today make this problem harder [9], emphasizing the need for tools which can explain these complex black boxes in a faithful and interpretable manner.

Prior research on explaining black box models can be categorized as: 1) local explanations, which focus on explaining individual predictions of a given black box classifier [4, 9, 10], and 2) global explanations, which focus on explaining model behavior as a whole, often by summarizing complex models using simpler, more interpretable approximations such as decision sets or lists [5, 7]. In this paper, we focus on a new form of explanation that is designed to help end users (e.g., decision makers such as judges and doctors) gain a deeper understanding of model behavior: a differential explanation that describes how the model logic varies across different subspaces of interest in a faithful and interpretable fashion. To illustrate, consider a scenario where a doctor is trying to understand a model which predicts whether a given patient has depression. The doctor might be keen on understanding how the model makes predictions for different patient subgroups (see Figure 1, left). Furthermore, she might be interested in asking questions such as "how does the model make predictions on patient subgroups associated with different values of exercise and smoking?" and might like to see explanations customized to her interest (see Figure 1, right). The problem of constructing such explanations has not been studied by previous research aimed at understanding black box models.

Figure 1: Explanations generated by our framework MUSE to describe the behavior of a 3-level neural network trained on the depression dataset. Left: MUSE generates an explanation of the model without user input, automatically selecting the features that define subspaces by optimizing for fidelity, unambiguity, and interpretability. Right: MUSE generates a customized explanation based on the features of interest input by the end user (exercise and smoking).
Here, we propose a novel framework, Model Understanding through Subspace Explanations (MUSE), which constructs global explanations of black box classifiers that highlight their behavior in subspaces characterized by features of user interest. To the best of our knowledge, this is the first work to study the notion of incorporating user input when generating explanations of black box classifiers while successfully trading off notions of fidelity, unambiguity, and interpretability. Our framework takes as input a dataset of instances with semantically meaningful or interpretable features (e.g., age, gender) and the corresponding class labels assigned by the black box model. It also accepts as an optional input a set of features that are of interest to the end user, in order to generate explanations tailored to user preferences. Our framework then maps these inputs to a customized, faithful, and interpretable explanation which succinctly summarizes the behavior of the given model. We employ a two-level decision set representation, where the if-then clauses at the outer level describe the subspaces, and the inner if-then clauses explain the decision logic employed by the black box model within the corresponding subspace (see Figure 1, left). The two-level structure, which decouples the descriptions of subspaces from the decision logic of the model, naturally allows for incorporating user input when generating explanations.
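As a rough sketch of what this two-level representation could look like in code (an illustration, not the authors' implementation), an explanation can be viewed as a list of outer subspace rules, each holding inner if-then rules that approximate the labels the black box assigns within that subspace. The predicates, feature names, and class labels below are hypothetical, loosely echoing the exercise/smoking example.

```python
# Minimal sketch of a two-level decision set explanation (illustrative only).
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

Instance = Dict[str, object]             # feature name -> value
Predicate = Callable[[Instance], bool]   # conjunction of conditions on features

@dataclass
class InnerRule:
    condition: Predicate   # inner if-then clause: decision logic within the subspace
    label: str             # label the explanation says the black box assigns

@dataclass
class SubspaceRule:
    subspace: Predicate              # outer if-then clause: region of the feature space
    decision_logic: List[InnerRule]  # model behavior inside that region

def apply_explanation(x: Instance, explanation: List[SubspaceRule]) -> Optional[str]:
    """Label an instance using the explanation; None means the instance is not covered."""
    for outer in explanation:
        if outer.subspace(x):
            for inner in outer.decision_logic:
                if inner.condition(x):
                    return inner.label
    return None

# Hypothetical explanation customized to the user-chosen features "exercise" and "smoking".
example_explanation = [
    SubspaceRule(
        subspace=lambda x: x["exercise"] == "none" and x["smoking"] == "yes",
        decision_logic=[
            InnerRule(lambda x: x["age"] > 50, "Depression"),
            InnerRule(lambda x: x["age"] <= 50, "Healthy"),
        ],
    ),
    SubspaceRule(
        subspace=lambda x: x["exercise"] == "regular",
        decision_logic=[InnerRule(lambda x: True, "Healthy")],
    ),
]

# apply_explanation({"exercise": "none", "smoking": "yes", "age": 62}, example_explanation)
# -> "Depression"
```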
In order to construct an explanation based on the above representation, we formulate a novel objective function which can jointly reason about various relevant considerations: fidelity to the original model (i.e., mimicking the original model in terms of assigning class labels to instances), unambiguity in describing the model logic used to assign labels to instances, and interpretability, achieved by favoring lower complexity (i.e., fewer rules and predicates). While exactly optimizing our objective is an NP-hard problem, we prove that our optimization problem is a non-normal, non-monotone submodular function with matroid constraints, which allows for provably near-optimal solutions.
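To give a rough sense of how such an objective trades off these considerations, the sketch below scores a candidate set of (subspace, inner rule) pairs against the labels produced by the black box and grows the explanation greedily. The specific terms, weights, and greedy routine are simplifying assumptions for intuition only; the paper's actual objective and its near-optimality guarantees (via non-monotone submodular maximization under matroid constraints) are more involved.

```python
# Toy objective balancing fidelity, unambiguity, and complexity (illustrative only).
from typing import Callable, Dict, List, Tuple

Instance = Dict[str, object]
# Candidate rule: (subspace predicate, inner condition, predicted label, number of predicates)
Rule = Tuple[Callable[[Instance], bool], Callable[[Instance], bool], str, int]

def score(explanation: List[Rule],
          data: List[Instance],
          blackbox_labels: List[str],
          w_fidelity: float = 1.0,
          w_unambiguity: float = 1.0,
          w_complexity: float = 0.1) -> float:
    fidelity = 0   # covered instances whose rule label agrees with the black box
    overlaps = 0   # instances matched by more than one rule (a source of ambiguity)
    for x, y in zip(data, blackbox_labels):
        matched = [label for subspace, condition, label, _ in explanation
                   if subspace(x) and condition(x)]
        fidelity += sum(1 for label in matched if label == y)
        overlaps += max(0, len(matched) - 1)
    complexity = sum(rule[3] for rule in explanation)  # total number of predicates
    return w_fidelity * fidelity - w_unambiguity * overlaps - w_complexity * complexity

def greedy_select(candidates: List[Rule],
                  data: List[Instance],
                  blackbox_labels: List[str],
                  max_rules: int = 5) -> List[Rule]:
    """Greedily add the rule with the largest marginal gain until no rule helps."""
    chosen: List[Rule] = []
    while len(chosen) < max_rules:
        remaining = [r for r in candidates if r not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda r: score(chosen + [r], data, blackbox_labels))
        if score(chosen + [best], data, blackbox_labels) <= score(chosen, data, blackbox_labels):
            break
        chosen.append(best)
    return chosen
```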

We evaluated the fidelity and interpretability of the explanations generated by our approach on three real-world datasets: judicial bail decisions, high school graduation outcomes, and depression diagnosis. Experimental results indicate that our approach can generate much less complex yet higher-fidelity explanations of various kinds of black box models compared to state-of-the-art baselines.

2 RELATED WORK

Explaining Model Behavior: One approach to interpretability is learning predictive models which are themselves human-understandable (e.g., decision trees [11], decision lists [7], decision sets [5], linear models, generalized additive models [8]). More recent research has focused on explaining individual predictions of black box classifiers [4, 9]. Ribeiro et al.'s approach of approximating the global behavior of black box models through a collection of locally linear models creates ambiguity, as it does not clearly specify which local model applies to which part of the feature space. Global explanations can also be generated by approximating the predictions of black box models with interpretable models such as decision sets or decision trees. However, the resulting explanations are not suitable for answering deeper questions about model behavior (e.g., "how does the model logic differ across patient subgroups associated with various values of exercise and smoking?"). Furthermore, existing frameworks do not jointly optimize for fidelity, unambiguity, and interpretability.

Visualizing and Understanding Specific Models: The problem of visualizing how certain classes of models, such as deep neural networks, make predictions has attracted a lot of attention in the recent past [14, 15]. Zintgraf et al. [15] focused on visualizing how a deep neural network responds to a given input. Shrikumar et al. [12] proposed an approach to determine the important features of deep neural networks. Furthermore, there exist tools and frameworks to visualize the functionality of different classes of models.
