
Uniform Convergence of Rank-weighted Learning

Justin Khim¹, Liu Leqi¹, Adarsh Prasad¹, Pradeep Ravikumar¹

¹Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA. Correspondence to: Justin Khim <[email protected]>.

Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

Abstract

The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as in high-stakes decision-making and societal settings, it is clear that these models are not evaluated only by their average performance. In this work, we propose and study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences on a logistic regression example.

1. Introduction

The statistical decision-theoretic foundations of modern machine learning have largely focused on solving tasks by minimizing the expectation of some loss function. This ensures that the resulting models have high average-case performance. However, as machine learning models are deployed alongside humans in decision-making, it is clear that they are not evaluated only by their average-case performance but also by properties such as fairness. There has been increasing interest in capturing these additional properties via appropriate modifications of the classical objective of expected loss (Duchi et al., 2019; García & Fernández, 2015; Sra et al., 2012).

In parallel, there is a long line of work exploring alternatives to expected-loss-based risk measures in decision-making (Howard & Matheson, 1972) and in reinforcement learning, where percentile-based risk measures have been used to quantify the tail risk of models. A recent line of work borrows classical ideas from behavioral economics for use in machine learning to make models more human-aligned. In particular, Prashanth et al. (2016) have brought ideas from cumulative prospect theory (Tversky & Kahneman, 1992) into reinforcement learning and bandits, and Leqi et al. (2019) have used cumulative prospect theory to introduce the notion of human-aligned risk measures.

A common theme in these prior works is the notion of rank-weighted risks. The aforementioned risk measures weight each loss by its relative rank and are based upon classical rank-dependent expected utility theory (Diecidue & Wakker, 2001). These rank-dependent utilities have also been used in several other contexts in machine learning. For example, rank-weighted risks have been used to speed up the training of deep networks (Jiang et al., 2019). They have also played a key role in designing estimators that are robust to outliers in the data. In particular, trimming procedures that simply throw away data with high losses have been used to design outlier-robust estimators (Daniell, 1920; Bhatia et al., 2015; Lai et al., 2016; Prasad et al., 2018; Lugosi & Mendelson, 2019; Prasad et al., 2019; Shah et al., 2020).

While these rank-based objectives have found widespread use in machine learning, establishing their statistical properties has remained elusive. On the other hand, we have a clear understanding of the generalization properties of average-case risk measures: loosely speaking, given a collection of models and finite training data, if we choose a model by minimizing the average training error, then we can roughly guarantee how well the chosen model performs in an average sense. Such guarantees are typically obtained by studying uniform convergence of average-case risk measures.

However, as noted above, uniform convergence of rank-weighted risk measures has not been explored in detail. The difficulty comes from the weights depending on the whole sample, which induces complex dependencies. Hence, existing work on generalization has either established the weaker notion of pointwise concentration bounds (Bhat, 2019; Duchi & Namkoong, 2018) or focused on specific forms of rank-weighted risk measures such as conditional value-at-risk (CVaR) (Duchi & Namkoong, 2018).

Contributions. In this work, we propose the study of rank-weighted risks and prove uniform convergence results. In particular, we propose a new notion of L-Risk in Section 2 that unifies existing rank-dependent risk measures, including CVaR and those defined by cumulative prospect theory. In Section 3, we present uniform convergence results and observe that the learning rate depends on the weighting function. In particular, when the weighting function is Lipschitz, we recover the standard $O(n^{-1/2})$ convergence rate. Finally, we instantiate our result on logistic regression in Section 4 and empirically study the convergence of L-Risks in Section 6.

2. Background and Problem Setup

In this section, we provide the necessary background on rank-weighted risk minimization and introduce the notion of bounded variation functions that we consider in this work. Additionally, we provide standard learning-theoretic definitions and notation before moving on to our main results.

2.1. L-Risk Estimation

We assume that there is a joint probability distribution $P(X, Y)$ over the space $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$, and our goal is to learn a function $f : \mathcal{X} \to \mathcal{Y}$. In the standard decision-theoretic framework, $f$ is chosen from a class of functions $\mathcal{F}$ using a non-negative loss function $\ell : \mathcal{F} \times \mathcal{Z} \to \mathbb{R}_+$. The framework of risk minimization is a central paradigm of statistical estimation and is widely applicable.

Classical Risk Estimation. In the traditional setting of risk minimization, the population risk of a function $f$ is the expectation of the loss $\ell(f, Z)$ when $Z$ is drawn according to the distribution $P$:
$$R(f) = \mathbb{E}_{Z \sim P}[\ell(f, Z)].$$
Given $n$ i.i.d. samples $\{Z_i\}_{i=1}^n$, empirical risk minimization substitutes the empirical risk for the population risk in the risk minimization objective:
$$R_n(f) = \frac{1}{n} \sum_{i=1}^n \ell(f, Z_i).$$

L-Risk. As noted in Section 1, there are many scenarios in machine learning where we want to evaluate a function by metrics other than the average loss. To this end, we first define the notion of an L-statistic, which dates back to the classical work of Daniell (1920).

Definition 1. Let $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$ be the order statistics of the sample $X_1, X_2, \ldots, X_n$. Then the L-statistic is a linear combination of order statistics,
$$T_w\left(\{X_i\}_{i=1}^n\right) = \frac{1}{n} \sum_{i=1}^n w\!\left(\frac{i}{n}\right) X_{(i)},$$
where $w : [0, 1] \to [0, \infty)$ is a scoring function.
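To make Definition 1 concrete, the following is a minimal NumPy sketch of the L-statistic $T_w$; it is our own illustration rather than code from the paper, and the function name `l_statistic` and the example scoring functions are assumptions made for this sketch.

```python
import numpy as np

def l_statistic(x, w):
    """L-statistic of Definition 1: (1/n) * sum_i w(i/n) * X_(i),
    where X_(1) <= ... <= X_(n) are the order statistics of the sample."""
    x_sorted = np.sort(np.asarray(x, dtype=float))  # order statistics
    n = x_sorted.shape[0]
    ranks = np.arange(1, n + 1) / n                 # i/n for i = 1, ..., n
    return np.mean(w(ranks) * x_sorted)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.exponential(size=1000)
    # With the constant scoring function w(t) = 1, the L-statistic is the sample mean.
    print(l_statistic(x, lambda t: np.ones_like(t)), np.mean(x))
    # A scoring function that up-weights large order statistics emphasizes the tail.
    print(l_statistic(x, lambda t: 2.0 * t))
```

With $w(t) = 1$ the statistic reduces to the ordinary sample mean, which corresponds to the classical-risk special case discussed in Section 2.2.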
With the above notion of an empirical L-statistic at hand, we define the empirical L-risk of any function $f$ by simply replacing the empirical average of the losses with their corresponding L-statistic.

Definition 2. The empirical L-risk of $f$ is
$$LR_{w,n}(f) = \frac{1}{n} \sum_{i=1}^n w\!\left(\frac{i}{n}\right) \ell_{(i)}(f),$$
where $\ell_{(1)}(f) \le \cdots \le \ell_{(n)}(f)$ are the order statistics of the sample losses $\ell(f, Z_1), \ldots, \ell(f, Z_n)$.

Note that the empirical L-risk can alternatively be written in terms of the empirical cumulative distribution function $F_{f,n}$ of the sample losses $\{\ell(f, Z_i)\}_{i=1}^n$:
$$LR_{w,n}(f) = \frac{1}{n} \sum_{i=1}^n \ell(f, Z_i)\, w\big(F_{f,n}(\ell(f, Z_i))\big). \tag{1}$$
Accordingly, the population L-risk of any function $f$ is defined as
$$LR_w(f) = \mathbb{E}_{Z \sim P}\big[\ell(f, Z)\, w\big(F_f(\ell(f, Z))\big)\big], \tag{2}$$
where $F_f(\cdot)$ is the cumulative distribution function of $\ell(f, Z)$ for $Z$ drawn from $P$.
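As a sanity check on the equivalence between Definition 2 and Eq. (1), the sketch below (our own illustration, assuming a vector of sample losses with no ties) evaluates the empirical L-risk both via order statistics and via the empirical CDF; the helper names and the weighting function are hypothetical.

```python
import numpy as np

def l_risk_order_stats(losses, w):
    """Empirical L-risk via Definition 2: (1/n) * sum_i w(i/n) * loss_(i)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.shape[0]
    return np.mean(w(np.arange(1, n + 1) / n) * losses)

def l_risk_ecdf(losses, w):
    """Empirical L-risk via Eq. (1): (1/n) * sum_i loss_i * w(F_n(loss_i)),
    where F_n is the empirical CDF of the sample losses."""
    losses = np.asarray(losses, dtype=float)
    ecdf = np.array([np.mean(losses <= x) for x in losses])
    return np.mean(losses * w(ecdf))

rng = np.random.default_rng(1)
losses = rng.lognormal(size=500)   # illustrative continuous losses (no ties a.s.)
w = lambda t: 2.0 * t              # an illustrative rank-dependent weighting function
# The two forms coincide when there are no ties among the losses.
print(l_risk_order_stats(losses, w), l_risk_ecdf(losses, w))
```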
2.2. Illustrative Examples of L-Risk

In this section, we provide illustrative examples showing that the L-risk generalizes the classical risk and encompasses several other notions of risk. To begin with, observe that by simply setting the weighting function to $w(t) = 1$ for all $t \in [0, 1]$, L-Risk minimization reduces to classical empirical risk minimization.

Conditional Value-at-Risk (CVaR). As noted in Section 1, in settings where low-probability events incur catastrophic losses, using the classical risk is inappropriate. Conditional value-at-risk was introduced to handle such tail events and measures the expected loss conditioned on the event that the loss exceeds a certain threshold. Moreover, CVaR has several desirable properties as a risk measure; in particular, it is convex and coherent (Krokhmal et al., 2013). Hence, CVaR is studied across a number of fields such as mathematical finance (Rockafellar et al., 2000), decision making, and more recently machine learning (Duchi et al., 2019). Formally, the CVaR of a function $f$ at confidence level $1 - \alpha$ is defined as
$$R_{\mathrm{CVaR},\alpha}(f) = \mathbb{E}_{Z \sim P}\big[\ell(f, Z) \mid \ell(f, Z) \ge \mathrm{VaR}_\alpha(\ell(f, Z))\big], \tag{3}$$
where $\mathrm{VaR}_\alpha(\ell(f, Z)) = \inf_{x : F_f(x) \ge 1 - \alpha} x$ is the value-at-risk. Observe that CVaR is a special case of the L-Risk in (2) and can be obtained by choosing $w(t) = \frac{1}{\alpha}\mathbf{1}\{t \ge 1 - \alpha\}$, where $\mathbf{1}\{\cdot\}$ is the indicator function.

Human-Aligned Risk (HRM). Cumulative prospect theory (CPT), which is motivated by empirical studies of human decision-making from behavioral economics (Tversky & Kahneman, 1992), has recently been studied in machine learning (Prashanth et al., 2016; Gopalan et al., 2017; Leqi et al., 2019).

Figure 1. An illustration of the partition argument. Here, we have $\Delta_i = \varepsilon_i$ and $\|\Delta\|_1 = \varepsilon$. The blue points, $\{(i+1)/n,\ (i+1)/n + \varepsilon_{i+1}\}$, and the red points, $\{i/n,\ i/n + \varepsilon_i,\ (i+2)/n\}$, lie in two separate partitions.
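Returning to the CVaR example above, the sketch below (our own illustration under an assumed lognormal loss distribution, with hypothetical helper names) plugs the weighting function $w(t) = \frac{1}{\alpha}\mathbf{1}\{t \ge 1 - \alpha\}$ into the empirical L-risk and compares it to a direct tail average; the two estimates target the same quantity as Eq. (3) and differ only by boundary effects at finite $n$.

```python
import numpy as np

def empirical_l_risk(losses, w):
    """Empirical L-risk (Definition 2): (1/n) * sum_i w(i/n) * loss_(i)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.shape[0]
    return np.mean(w(np.arange(1, n + 1) / n) * losses)

def cvar_weight(alpha):
    """CVaR weighting function w(t) = (1/alpha) * 1{t >= 1 - alpha}."""
    return lambda t: (t >= 1.0 - alpha) / alpha

def cvar_direct(losses, alpha):
    """Direct estimate of CVaR: average of losses at or above the empirical VaR."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, 1.0 - alpha)   # empirical value-at-risk
    return losses[losses >= var].mean()

rng = np.random.default_rng(2)
losses = rng.lognormal(size=10_000)
alpha = 0.05
print(empirical_l_risk(losses, cvar_weight(alpha)), cvar_direct(losses, alpha))
```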