Learning Probability Measures with Respect to Optimal Transport Metrics

Guillermo D. Canas*† and Lorenzo A. Rosasco*†
* Laboratory for Computational and Statistical Learning, MIT-IIT
† CBCL, McGovern Institute, Massachusetts Institute of Technology

Abstract

We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space. By establishing a precise connection between optimal transport metrics, optimal quantization, and learning theory, we derive new probabilistic bounds for the performance of a classic algorithm in unsupervised learning (k-means), when used to produce a probability measure derived from the data. In the course of the analysis, we arrive at new lower bounds, as well as probabilistic upper bounds on the convergence rate of empirical to population measures, which, unlike existing bounds, are applicable to a wide class of measures.

1 Introduction and Motivation

In this paper we study the problem of learning from random samples a probability distribution supported on a manifold, when the learning error is measured using transportation metrics.

The problem of learning a probability distribution is classic in statistics, and is typically analyzed for distributions in $\mathcal{X} = \mathbb{R}^d$ that have a density with respect to the Lebesgue measure, with total variation and $L_2$ among the common distances used to measure closeness of two densities (see for instance [10, 32] and references therein). The setting in which the data distribution is supported on a low-dimensional manifold embedded in a high-dimensional space has only been considered more recently. In particular, kernel density estimators on manifolds have been described in [36], and their pointwise consistency, as well as convergence rates, have been studied in [25, 23, 18]. A discussion of several topics related to statistics on a Riemannian manifold can be found in [26].

Interestingly, the problem of approximating measures with respect to transportation distances has deep connections with the fields of optimal quantization [14, 16], optimal transport [35] and, as we point out in this work, with unsupervised learning (see Sec. 4). In fact, as described in the sequel, some of the most widely-used algorithms for unsupervised learning, such as k-means (but also others such as PCA and k-flats), can be shown to be performing exactly the task of estimating the data-generating measure in the sense of the 2-Wasserstein distance. This close relation between learning theory, optimal transport, and quantization seems novel and of interest in its own right. Indeed, in this work, techniques from the above three fields are used to derive the new probabilistic bounds described below.

Our technical contribution can be summarized as follows:

(a) we prove uniform lower bounds for the distance between a measure and estimates based on discrete sets (such as the empirical measure or measures derived from algorithms such as k-means);

(b) we provide new probabilistic bounds for the rate of convergence of empirical to population measures which, unlike existing probabilistic bounds, hold for a very large class of measures;

(c) we provide probabilistic bounds for the rate of convergence of measures derived from k-means to the data measure.

The structure of the paper is described at the end of Section 2, where we discuss the exact formulation of the problem as well as related previous works.
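To make the connection between k-means and measure estimation concrete, the Python sketch below (an illustration of ours, not code from the paper) builds one natural discrete measure from a k-means solution: the measure supported on the k centroids, with each centroid weighted by the fraction of the sample assigned to its Voronoi cell. With this choice of weights, the squared 2-Wasserstein distance between the empirical measure and the induced measure coincides with the k-means objective (mean squared distance to the nearest centroid); whether this is exactly the construction developed in Sec. 4 should be checked against the paper, and the helper name and the choice k = 10 are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_measure(X, k=10, seed=0):
    """Discrete measure induced by a k-means solution: supported on the k
    centroids, each weighted by the fraction of points in its cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    weights = np.bincount(km.labels_, minlength=k) / len(X)
    # With these weights, transporting each sample point to its nearest
    # centroid is an optimal plan, so W_2(empirical, induced)^2 equals the
    # mean squared distance to the nearest centroid (km.inertia_ / n).
    w2_squared = km.inertia_ / len(X)
    return centers, weights, w2_squared

# toy usage on a random sample in the plane
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
centers, weights, w2_sq = kmeans_measure(X, k=10)
print(weights.sum(), w2_sq)  # weights sum to 1
```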
2 Setup and Previous work

Consider the problem of learning a probability measure $\rho$ supported on a space $\mathcal{M}$, from an i.i.d. sample $X_n = (x_1, \dots, x_n) \sim \rho^n$ of size $n$. We assume $\mathcal{M}$ to be a compact, smooth $d$-dimensional manifold of bounded curvature, with $C^1$ metric and volume measure $\lambda_\mathcal{M}$, embedded in the unit ball of a separable Hilbert space $\mathcal{X}$ with inner product $\langle \cdot, \cdot \rangle$, induced norm $\| \cdot \|$, and distance $d$ (for instance $\mathcal{M} = B_2^d(1)$, the unit ball in $\mathcal{X} = \mathbb{R}^d$). Following [35, p. 94], let $P_p(\mathcal{M})$ denote the Wasserstein space of order $1 \le p < \infty$:

$$P_p(\mathcal{M}) := \left\{ \rho \in P(\mathcal{M}) : \int_\mathcal{M} \|x\|^p \, d\rho(x) < \infty \right\}$$

of probability measures $P(\mathcal{M})$ supported on $\mathcal{M}$, with finite $p$-th moment. The $p$-Wasserstein distance

$$W_p(\rho, \mu) = \inf_{X, Y} \left\{ \left[ \mathbb{E}\|X - Y\|^p \right]^{1/p} : \operatorname{Law}(X) = \rho, \ \operatorname{Law}(Y) = \mu \right\} \qquad (1)$$

where the random variables $X$ and $Y$ are distributed according to $\rho$ and $\mu$ respectively, is the optimal expected cost of transporting points generated from $\rho$ to those generated from $\mu$, and is guaranteed to be finite in $P_p(\mathcal{M})$ [35, p. 95]. The space $P_p(\mathcal{M})$ with the $W_p$ metric is itself a complete separable metric space [35]. We consider here the problem of learning probability measures $\rho \in P_2(\mathcal{M})$, where the performance is measured by the distance $W_2$.

There are many possible choices of distances between probability measures [13]. Among them, $W_p$ metrizes weak convergence (see [35, Theorem 6.9]); that is, in $P_p(\mathcal{M})$, a sequence $(\mu_i)_{i \in \mathbb{N}}$ of measures converges weakly to $\mu$ iff $W_p(\mu_i, \mu) \to 0$ and their $p$-th order moments converge to that of $\mu$. There are other distances, such as the Lévy-Prokhorov or the weak-* distance, that also metrize weak convergence. However, as pointed out by Villani in his excellent monograph [35, p. 98]:

1. "Wasserstein distances are rather strong, [...] a definite advantage over the weak-* distance."
2. "It is not so difficult to combine information on convergence in Wasserstein distance with some smoothness bound, in order to get convergence in stronger distances."

Wasserstein distances have been used to study the mixing and convergence of Markov chains [22], as well as concentration of measure phenomena [20]. To this list we would add the important fact that existing and widely-used algorithms for unsupervised learning can be easily extended (see Sec. 4) to compute a measure $\rho'$ that minimizes the distance $W_2(\hat{\rho}_n, \rho')$ to the empirical measure

$$\hat{\rho}_n := \frac{1}{n} \sum_{i=1}^n \delta_{x_i},$$

a fact that will allow us to prove, in Sec. 5, bounds on the convergence of a measure induced by k-means to the population measure $\rho$.

The most useful versions of the Wasserstein distance are $p = 1, 2$, with $p = 1$ being the weaker of the two (by Hölder's inequality, $p \le q \Rightarrow W_p \le W_q$). In particular, "results in $W_2$ distance are usually stronger, and more difficult to establish than results in $W_1$ distance" [35, p. 95]. A discussion of $p = \infty$ would take us out of topic, since its behavior is markedly different.

2.1 Closeness of Empirical and Population Measures

By the strong law of large numbers, the empirical measure converges almost surely to the population measure: $\hat{\rho}_n \to \rho$ in the sense of the weak topology [34]. Since weak convergence and convergence in $W_p$ plus convergence of $p$-th moments are equivalent in $P_p(\mathcal{M})$, this means that, in the $W_p$ sense, the empirical measure $\hat{\rho}_n$ converges to $\rho$ as $n \to \infty$. A fundamental question is therefore how fast the rate of convergence of $\hat{\rho}_n \to \rho$ is.
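Before turning to rates, note that definition (1) reduces to a finite linear program when both measures are discrete, for instance when comparing $\hat{\rho}_n$ with a measure $\rho'$ supported on finitely many centers. The sketch below (ours, not from the paper; it assumes NumPy and SciPy, and the helper name wasserstein_discrete and the toy data are illustrative) solves that transport LP directly. For two equal-size uniform empirical measures, the same problem reduces to the optimal bipartite matching discussed in Sec. 2.1.1.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def wasserstein_discrete(X, a, Y, b, p=2):
    """W_p between two discrete measures sum_i a_i delta_{x_i} and
    sum_j b_j delta_{y_j}, by solving the transport LP behind definition (1):
    minimize sum_ij pi_ij * ||x_i - y_j||^p over couplings pi with
    marginals a and b."""
    n, m = len(a), len(b)
    C = cdist(X, Y) ** p  # c_ij = ||x_i - y_j||^p
    # Row constraints sum_j pi_ij = a_i and column constraints sum_i pi_ij = b_j,
    # for pi flattened in row-major order.
    A_eq = np.vstack([np.kron(np.eye(n), np.ones(m)),
                      np.kron(np.ones(n), np.eye(m))])
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)

# toy usage: empirical measure of a sample vs. a 3-atom weighted measure
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2)); a = np.full(40, 1 / 40)
Y = rng.normal(size=(3, 2));  b = np.array([0.5, 0.3, 0.2])
print(wasserstein_discrete(X, a, Y, b, p=2))
```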
2.1.1 Convergence in expectation

The rate of convergence of $\hat{\rho}_n \to \rho$ in expectation has been widely studied in the past, resulting in upper bounds of order $\mathbb{E}\, W_2(\rho, \hat{\rho}_n) = O(n^{-1/(d+2)})$ [19, 8], and lower bounds of order $\mathbb{E}\, W_2(\rho, \hat{\rho}_n) = \Omega(n^{-1/d})$ [29] (both assuming that the absolutely continuous part of $\rho$ is $\rho_A \neq 0$, with possibly better rates otherwise).

More recently, an upper bound of order $\mathbb{E}\, W_p(\rho, \hat{\rho}_n) = O(n^{-1/d})$ has been proposed [2] by proving a bound for the Optimal Bipartite Matching (OBM) problem [1], and relating this problem to the expected distance $\mathbb{E}\, W_p(\rho, \hat{\rho}_n)$. In particular, given two independent samples $X_n, Y_n$, the OBM problem is that of finding a permutation $\sigma$ that minimizes the matching cost $n^{-1} \sum_i \|x_i - y_{\sigma(i)}\|^p$ [24, 30]. It is not hard to show that the optimal matching cost is $W_p(\hat{\rho}_{X_n}, \hat{\rho}_{Y_n})^p$, where $\hat{\rho}_{X_n}, \hat{\rho}_{Y_n}$ are the empirical measures associated to $X_n, Y_n$. By Jensen's inequality, the triangle inequality, and $(a + b)^p \le 2^{p-1}(a^p + b^p)$, it holds

$$\mathbb{E}\, W_p(\rho, \hat{\rho}_n)^p \;\le\; \mathbb{E}\, W_p(\hat{\rho}_{X_n}, \hat{\rho}_{Y_n})^p \;\le\; 2^{p-1}\, \mathbb{E}\, W_p(\rho, \hat{\rho}_n)^p,$$

and therefore a bound of order $O(n^{-p/d})$ for the OBM problem [2] implies a bound $\mathbb{E}\, W_p(\rho, \hat{\rho}_n) = O(n^{-1/d})$. The matching lower bound is only known for a special case: $\rho_A$ constant over a bounded set of non-null measure [2] (e.g. $\rho_A$ uniform). Similar results, with matching lower bounds, are found for $W_1$ in [11].

2.1.2 Convergence in probability

Results for convergence in probability, one of the main results of this work, appear to be considerably harder to obtain. One fruitful avenue of analysis has been the use of so-called transportation, or Talagrand, inequalities $T_p$, which can be used to prove concentration inequalities on $W_p$ [20]. In particular, we say that $\rho$ satisfies a $T_p(C)$ inequality with $C > 0$ iff $W_p(\rho, \mu)^2 \le C\, H(\mu \,|\, \rho)$ for all $\mu \in P_p(\mathcal{M})$, where $H(\cdot \,|\, \cdot)$ is the relative entropy [20]. As shown in [6, 5], it is possible to obtain probabilistic upper bounds on $W_p(\rho, \hat{\rho}_n)$, with $p = 1, 2$, if $\rho$ is known to satisfy a $T_p$ inequality of the same order, thereby reducing the problem of bounding $W_p(\rho, \hat{\rho}_n)$ to that of obtaining a $T_p$ inequality. Note that, by Jensen's inequality, and as expected from the behavior of $W_p$, the inequality $T_2$ is stronger than $T_1$ [20].
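As a purely numerical illustration of the quantities involved (an experiment of ours, not taken from the paper), the sketch below repeatedly draws two independent samples of size $n$ from a fixed distribution (uniform on the unit cube in $\mathbb{R}^3$, an arbitrary choice) and computes the two-sample distance $W_2(\hat{\rho}_{X_n}, \hat{\rho}_{Y_n})$ via optimal bipartite matching, as in Sec. 2.1.1. One should see both the mean decrease with $n$ and the fluctuations around the mean narrow, which is the kind of behavior that the expectation and concentration bounds discussed above quantify; the sample sizes and repetition count are illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def two_sample_w2(X, Y):
    """W_2 between the uniform empirical measures of two equal-size samples.
    For equal-size uniform discrete measures the optimal coupling is a
    permutation, so W_2^2 equals the optimal bipartite matching cost."""
    C = cdist(X, Y) ** 2             # squared Euclidean costs ||x_i - y_j||^2
    r, c = linear_sum_assignment(C)  # optimal matching sigma
    return np.sqrt(C[r, c].mean())

rng = np.random.default_rng(0)
d, reps = 3, 30                      # illustrative dimension and repetitions
for n in (50, 100, 200, 400):
    vals = [two_sample_w2(rng.uniform(size=(n, d)), rng.uniform(size=(n, d)))
            for _ in range(reps)]
    # mean two-sample W_2 and its spread across repetitions
    print(n, round(float(np.mean(vals)), 4), round(float(np.std(vals)), 4))
```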