An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data

Lev V. Utkin, Andrei V. Konstantinov, and Kirill A. Vishniakov
Peter the Great St.Petersburg Polytechnic University
St.Petersburg, Russia
e-mail: [email protected], [email protected], [email protected]

Abstract

One of the most popular methods for explaining machine learning predictions is the SHapley Additive exPlanations method (SHAP). An imprecise SHAP, a modification of the original SHAP, is proposed for cases when the class probability distributions are imprecise and represented by sets of distributions. The first idea behind the imprecise SHAP is a new approach for computing the marginal contribution of a feature, which fulfils the important efficiency property of Shapley values. The second idea is a general approach to calculating and reducing interval-valued Shapley values, which is similar to the idea of reachable probability intervals in imprecise probability theory. A simple special implementation of the general approach in the form of linear optimization problems is proposed, which is based on the Kolmogorov-Smirnov distance and imprecise contamination models. Numerical examples with synthetic and real data illustrate the imprecise SHAP.

Keywords: interpretable model, XAI, Shapley values, SHAP, Kolmogorov-Smirnov distance, imprecise probability theory.

arXiv:2106.09111v1 [cs.LG] 16 Jun 2021

1 Introduction

The importance of machine learning models in many applications and their success in solving many applied problems lead to another problem, which may be an obstacle to using the models in areas where both prediction accuracy and understanding are crucial, for example, in medicine, reliability analysis, security, etc. This obstacle arises for complex models which can be viewed as black boxes because their users do not know how the models operate internally. Moreover, the training process is also often unknown. A natural way to overcome the obstacle is to use a meta-model which could explain the provided predictions. The explanation means that we have to select the features of an analyzed example which are responsible for the prediction of the black-box model or significantly impact the corresponding prediction. When a single example is explained, we speak about the so-called local explanation methods. They aim to explain the black-box model locally around the considered example. Other explanation methods try to explain predictions taking into account the whole dataset or a certain part of it. The need to explain black-box models using local or global explanations in many applications has motivated the development of a huge number of explanation methods, which are described in detail in several comprehensive survey papers [10, 29, 41, 47, 49, 68, 79, 82, 85].

Among all explanation methods, we select two very popular ones: the Local Interpretable Model-Agnostic Explanation (LIME) [60] and SHapley Additive exPlanations (SHAP) [44, 72]. The basic idea behind LIME is to build an approximating linear model around the explained example. In order to implement the approximation, many synthetic examples are generated in the neighborhood of the explained example, with weights depending on their distances from the explained example. The linear regression model is constructed from the generated examples such that its coefficients can be regarded as a quantitative representation of the impacts of the corresponding features on the prediction.
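As a rough illustration of this local surrogate idea (not of the official LIME implementation), the following Python sketch perturbs the explained example, weights the perturbations by their proximity to it, and fits a weighted linear surrogate. The callable black_box_predict, the Gaussian perturbation scheme, and all parameter values are assumptions made only for this example.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_like_explanation(black_box_predict, x, n_samples=1000,
                          scale=0.1, kernel_width=0.75, rng=None):
    """black_box_predict: assumed callable mapping an (n, d) array to n predictions;
    x: the explained example, shape (d,)."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    # Generate synthetic examples in a neighborhood of the explained example.
    Z = x + scale * rng.standard_normal((n_samples, d))
    # Weight each synthetic example by its distance to x (RBF kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Fit a weighted linear surrogate; its coefficients quantify feature impacts.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, black_box_predict(Z), sample_weight=weights)
    return surrogate.coef_
```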
SHAP is inspired by game-theoretic Shapley values [71], which can be interpreted as average expected marginal contributions of features, over all possible subsets (coalitions) of features, to the black-box model prediction. In spite of two important shortcomings of SHAP, namely its computational complexity, which grows with the number of features, and some ambiguity of the available approaches for removing features from consideration [17], SHAP is widely used in practice and can be viewed as the most promising and theoretically justified explanation method, fulfilling several nice properties [44]. Nevertheless, another difficulty of using SHAP is how to deal with predictions in the form of probability distributions, which arise in multi-class classification and in machine learning survival analysis. The problem is that SHAP in each iteration calculates a difference between two predictions defined by different subsets of features of the explained example. Therefore, the question is how to define the difference between the probability distributions. One of the simplest ways in classification is to consider the class with the largest probability. However, this way may lead to incorrect results when the probabilities are comparable. Moreover, the same approach cannot be applied to explaining the predictions of survival models, which are in the form of survival functions. It should be noted that Covert et al. [18] justified that the well-known Kullback-Leibler (KL) divergence [40] can be applied to the global explanation. It is obvious that the KL divergence can also be applied to the local explanation. Moreover, the KL divergence can be replaced with different distances, for example, the chi-square divergence [56], the relative J-divergence [23], or Csiszar's f-divergence measure [19]. However, the use of all these measures leads to modifications of SHAP such that the obtained Shapley values do not satisfy the properties of the original Shapley values. Therefore, one of the problems to be solved is to define a meaningful distance between two predictions represented in the form of class probability distributions which fulfils the Shapley value properties.

It should be noted that the probability distribution of classes may be imprecise due to a limited amount of training data. On the one hand, it can be said that the class probabilities in many cases are not real probabilities as measures defined on events, i.e., they are not relative frequencies of occurrence of events. For example, the softmax function in neural networks produces numbers (weights of classes) which have some properties of probabilities, but are not genuine probabilities. On the other hand, probabilities of classes in random forests and in random survival forests can be viewed as relative frequencies (probabilities are computed by counting the percentage of examples of different classes at each leaf node), and they are true probabilities. In both cases, it is obvious that the accuracy of the class probability distributions depends on the machine learning model predicting the distributions and on the amount of training data. It is difficult to improve the black-box machine learning model that is explained post hoc, but we can take into account the imprecision of the probability distributions caused by the lack of sufficient training data.
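To make the role of the marginal contributions concrete, the following sketch computes exact Shapley values for a small number of features with a pluggable coalition value function v. The particular value function used by the imprecise SHAP (a distance between class probability distributions) is introduced later in the paper, so v here is only an assumed callable.

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n_features):
    """v: assumed callable taking a frozenset of feature indices and returning
    the scalar value of that coalition; n_features: total number of features."""
    n = n_features
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S.
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi
    # Efficiency: sum(phi) equals v(all features) - v(empty coalition).
```

For any value function of this form, the efficiency property states that the Shapley values sum to v(N) - v(empty set); preserving this property is exactly what is required of the distance-based marginal contributions proposed in this paper.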
The considered imprecision can be attributed to epistemic uncertainty [69], which represents our ignorance about a model caused by the lack of observation data. Epistemic uncertainty can be resolved by observing more data. There are several approaches to formalizing the imprecision. One of them is to consider a set of probability distributions instead of a single one. Sets of distributions are produced by imprecise statistical models, for example, by the linear-vacuous mixture or imprecise ε-contamination model [76], by the imprecise Dirichlet model [77], by the constant odds-ratio model [76], or by Kolmogorov-Smirnov bounds [34]. It is obvious that the above imprecise statistical models produce interval-valued probabilities of events or classes, and the obtained interval can be regarded as a measure of the insufficiency of the observation data.

It turns out that the imprecision of the class probability distribution leads to imprecision of the Shapley values when SHAP is used for explaining the machine learning model predictions. In other words, the Shapley values become interval-valued. This allows us to construct a new framework of the imprecise SHAP, or imprecise Shapley values, which provides a realistic accounting of the prediction uncertainty. Having the interval-valued Shapley values, we can compare the intervals in order to select the most important features. It is interesting to point out that the efficiency and linearity properties of Shapley values play a crucial role in constructing the basic rules of the framework of imprecise Shapley values.

In summary, the following contributions are made in this paper:

1. A new approach for computing the marginal contribution of the i-th feature in SHAP for interpreting the class probability distributions predicted by a black-box machine learning model is proposed. It is based on considering a distance between two probability distributions. Moreover, Shapley values computed with the proposed approach fulfil the efficiency property, which is very important for correct explanation by means of SHAP.

2. An imprecise SHAP is proposed for cases when the class probability distributions are imprecise, i.e., they are represented by convex sets of distributions. The outcome of the imprecise SHAP is a set of interval-valued Shapley values. Basic tools for dealing with imprecise class probability distributions and for computing interval-valued Shapley values are introduced.

3. An implementation of the imprecise SHAP based on the Kolmogorov-Smirnov distance is proposed. Complex optimization problems for computing interval-valued Shapley values are reduced to finite sets of simple linear programming problems.

4. The imprecise SHAP is illustrated by numerical examples with synthetic and real data.
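As a small numerical illustration of the linear-vacuous mixture (ε-contamination) model mentioned above, the following sketch turns a point estimate of the class probability distribution into interval-valued class probabilities. The function name and the contamination level ε = 0.1 are chosen only for this example and are not part of the proposed method.

```python
import numpy as np

def contamination_intervals(p, eps):
    """p: point estimate of class probabilities (sums to 1); eps in [0, 1].
    Returns lower and upper probability bounds for each class under the
    linear-vacuous mixture model."""
    p = np.asarray(p, dtype=float)
    lower = (1.0 - eps) * p        # vacuous part contributes nothing to the class
    upper = (1.0 - eps) * p + eps  # vacuous part contributes everything to it
    return lower, upper

# Example: with p = (0.6, 0.3, 0.1) and eps = 0.1 the class probabilities are
# only known up to the intervals [0.54, 0.64], [0.27, 0.37], [0.09, 0.19].
lo, hi = contamination_intervals([0.6, 0.3, 0.1], eps=0.1)
```

Propagating such interval-valued class probabilities through the Shapley value computation is what makes the resulting Shapley values interval-valued.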