Some Geometric Calculations on Wasserstein Space

Commun. Math. Phys. 277, 423–437 (2008)
Digital Object Identifier (DOI) 10.1007/s00220-007-0367-3
Communications in Mathematical Physics

Some Geometric Calculations on Wasserstein Space

John Lott
Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1109, USA. E-mail: [email protected]

Received: 5 January 2007 / Accepted: 9 April 2007
Published online: 7 November 2007 – © Springer-Verlag 2007

This research was partially supported by NSF grant DMS-0604829.

Abstract: We compute the Riemannian connection and curvature for the Wasserstein space of a smooth compact Riemannian manifold.

1. Introduction

If $M$ is a smooth compact Riemannian manifold then the Wasserstein space $P_2(M)$ is the space of Borel probability measures on $M$, equipped with the Wasserstein metric $W_2$. We refer to [21] for background information on Wasserstein spaces. The Wasserstein space originated in the study of optimal transport. It has had applications to PDE theory [16], metric geometry [8,19,20] and functional inequalities [9,17].

Otto showed that the heat flow on measures can be considered as a gradient flow on Wasserstein space [16]. In order to do this, he introduced a certain formal Riemannian metric on the Wasserstein space. This Riemannian metric has some remarkable properties. Using O'Neill's theorem, Otto gave a formal argument that $P_2(\mathbb{R}^n)$ has nonnegative sectional curvature. This was made rigorous in [8, Theorem A.8] and [19, Prop. 2.10] in the following sense: $M$ has nonnegative sectional curvature if and only if the length space $P_2(M)$ has nonnegative Alexandrov curvature.

In this paper we study the Riemannian geometry of the Wasserstein space. In order to write meaningful expressions, we restrict ourselves to the subspace $P^\infty(M)$ of absolutely continuous measures with a smooth positive density function. The space $P^\infty(M)$ is a smooth infinite-dimensional manifold in the sense, for example, of [7]. The formal calculations that we perform can be considered as rigorous calculations on this smooth manifold, although we do not emphasize this point.

In Sect. 3 we show that if $c$ is a smooth immersed curve in $P^\infty(M)$ then its length in $P_2(M)$, in the sense of metric geometry, equals its Riemannian length as computed with Otto's metric. In Sect. 4 we compute the Levi-Civita connection on $P^\infty(M)$. We use it to derive the equation for parallel transport and the geodesic equation.

In Sect. 5 we compute the Riemannian curvature of $P^\infty(M)$. The answer is relatively simple. As an application, if $M$ has sectional curvatures bounded below by $r \in \mathbb{R}$, one can ask whether $P^\infty(M)$ necessarily has sectional curvatures bounded below by $r$. This turns out to be the case if and only if $r = 0$.

There has been recent interest in doing Hamiltonian mechanics on the Wasserstein space of a symplectic manifold [1,4,5]. In Sect. 6 we briefly describe the Poisson geometry of $P^\infty(M)$. We show that if $M$ is a Poisson manifold then $P^\infty(M)$ has a natural Poisson structure. We also show that if $M$ is symplectic then the symplectic leaves of the Poisson structure on $P^\infty(M)$ are the orbits of the group of Hamiltonian diffeomorphisms, thereby making contact with [1,5]. This approach is not really new; closely related results, with applications to PDEs, were obtained quite a while ago by Alan Weinstein and collaborators [10,11,22]. However, it may be worth advertising this viewpoint.
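Otto's observation about the heat flow can be illustrated by a short formal computation. The following is only a sketch, stated in the notation of Sect. 2 below; the entropy functional $\mathrm{Ent}$ and the Laplacian convention $\Delta = \nabla^i\nabla_i$ are introduced here purely for illustration and are not used elsewhere in the text. Put

$$\mathrm{Ent}(\rho\,\mathrm{dvol}_M) = \int_M \rho\log\rho\,\mathrm{dvol}_M.$$

Differentiating along $V_\phi$, using $\delta_{V_\phi}\rho = -\nabla^i(\rho\nabla_i\phi)$ and integrating by parts,

$$(V_\phi\,\mathrm{Ent})(\rho\,\mathrm{dvol}_M) = -\int_M (\log\rho+1)\,\nabla^i(\rho\nabla_i\phi)\,\mathrm{dvol}_M = \int_M \langle\nabla\log\rho,\nabla\phi\rangle\,\rho\,\mathrm{dvol}_M = \langle V_{\log\rho}, V_\phi\rangle(\rho\,\mathrm{dvol}_M),$$

so the gradient of $\mathrm{Ent}$ with respect to the metric (2.4) is $V_{\log\rho}$. The downward gradient flow then reads

$$\partial_t\rho = -\,\delta_{V_{\log\rho}}\rho = \nabla^i(\rho\nabla_i\log\rho) = \nabla^i\nabla_i\rho = \Delta\rho,$$

which is the heat equation.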
2. Manifolds of Measures

In what follows, we use the Einstein summation convention freely.

Let $M$ be a smooth connected closed Riemannian manifold of positive dimension. We denote the Riemannian density by $\mathrm{dvol}_M$. Let $P_2(M)$ denote the space of Borel probability measures on $M$, equipped with the Wasserstein metric $W_2$. For relevant results about optimal transport and the Wasserstein metric, we refer to [8, Sects. 1 and 2] and references therein. Put

$$P^\infty(M) = \Big\{\rho\,\mathrm{dvol}_M : \rho \in C^\infty(M),\ \rho > 0,\ \int_M \rho\,\mathrm{dvol}_M = 1\Big\}. \tag{2.1}$$

Then $P^\infty(M)$ is a dense subset of $P_2(M)$, as is the complement of $P^\infty(M)$ in $P_2(M)$. We do not claim that $P^\infty(M)$ is necessarily a totally convex subset of $P_2(M)$, i.e. that if $\mu_0, \mu_1 \in P^\infty(M)$ then the minimizing geodesic in $P_2(M)$ joining them necessarily lies in $P^\infty(M)$. However, the absolutely continuous probability measures on $M$ do form a totally convex subset of $P_2(M)$ [12]. For the purposes of this paper, we give $P^\infty(M)$ the smooth topology. (This differs from the subspace topology on $P^\infty(M)$ coming from its inclusion in $P_2(M)$.) Then $P^\infty(M)$ has the structure of an infinite-dimensional smooth manifold in the sense of [7]. The formal calculations in this paper can be rigorously justified as being calculations on the smooth manifold $P^\infty(M)$. However, we will not belabor this point.

Given $\phi \in C^\infty(M)$, define $F_\phi \in C^\infty(P^\infty(M))$ by

$$F_\phi(\rho\,\mathrm{dvol}_M) = \int_M \phi\,\rho\,\mathrm{dvol}_M. \tag{2.2}$$

This gives an injection $P^\infty(M) \to (C^\infty(M))^*$, i.e. the functions $F_\phi$ separate points in $P^\infty(M)$. We will think of the functions $F_\phi$ as "coordinates" on $P^\infty(M)$.

Given $\phi \in C^\infty(M)$, define a vector field $V_\phi$ on $P^\infty(M)$ by saying that for $F \in C^\infty(P^\infty(M))$,

$$(V_\phi F)(\rho\,\mathrm{dvol}_M) = \frac{d}{d\epsilon}\Big|_{\epsilon=0} F\big(\rho\,\mathrm{dvol}_M - \epsilon\,\nabla^i(\rho\nabla_i\phi)\,\mathrm{dvol}_M\big). \tag{2.3}$$

The map $\phi \to V_\phi$ passes to an isomorphism $C^\infty(M)/\mathbb{R} \to T_{\rho\,\mathrm{dvol}_M} P^\infty(M)$. This parametrization of $T_{\rho\,\mathrm{dvol}_M} P^\infty(M)$ goes back to Otto's paper [16]; see [2] for further discussion.

Otto's Riemannian metric on $P^\infty(M)$ is given [16] by

$$\langle V_{\phi_1}, V_{\phi_2}\rangle(\rho\,\mathrm{dvol}_M) = \int_M \langle\nabla\phi_1, \nabla\phi_2\rangle\,\rho\,\mathrm{dvol}_M = -\int_M \phi_1\,\nabla^i(\rho\nabla_i\phi_2)\,\mathrm{dvol}_M. \tag{2.4}$$

In view of (2.3), we write $\delta_{V_\phi}\rho = -\nabla^i(\rho\nabla_i\phi)$. Then

$$\langle V_{\phi_1}, V_{\phi_2}\rangle(\rho\,\mathrm{dvol}_M) = \int_M \phi_1\,\delta_{V_{\phi_2}}\rho\,\mathrm{dvol}_M = \int_M \phi_2\,\delta_{V_{\phi_1}}\rho\,\mathrm{dvol}_M. \tag{2.5}$$

In terms of the weighted $L^2$-spaces $L^2(M, \rho\,\mathrm{dvol}_M)$ and $\Omega^1_{L^2}(M, \rho\,\mathrm{dvol}_M)$, let $d$ be the usual differential on functions and let $d^*_\rho$ be its formal adjoint. Then (2.4) can be written as

$$\langle V_{\phi_1}, V_{\phi_2}\rangle(\rho\,\mathrm{dvol}_M) = \int_M \langle d\phi_1, d\phi_2\rangle\,\rho\,\mathrm{dvol}_M = \int_M \phi_1\, d^*_\rho d\phi_2\,\rho\,\mathrm{dvol}_M. \tag{2.6}$$

We now relate the function $F_\phi$ and the vector field $V_\phi$.

Lemma 1. The gradient of $F_\phi$ is $V_\phi$.

Proof. Letting $\nabla F_\phi$ denote the gradient of $F_\phi$, for all $\phi' \in C^\infty(M)$ we have

$$\langle\nabla F_\phi, V_{\phi'}\rangle(\rho\,\mathrm{dvol}_M) = (V_{\phi'} F_\phi)(\rho\,\mathrm{dvol}_M) = -\int_M \phi\,\nabla^i(\rho\nabla_i\phi')\,\mathrm{dvol}_M = \langle V_\phi, V_{\phi'}\rangle(\rho\,\mathrm{dvol}_M). \tag{2.7}$$

This proves the lemma.
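To make the formal adjoint in (2.6) concrete, one can compute $d^*_\rho$ on exact 1-forms. The following brief calculation is only a sketch, with the Laplacian convention $\Delta = \nabla^i\nabla_i$ adopted here for illustration. For $\phi_1, \phi_2 \in C^\infty(M)$,

$$\int_M \langle d\phi_1, d\phi_2\rangle\,\rho\,\mathrm{dvol}_M = -\int_M \phi_1\,\nabla^i(\rho\nabla_i\phi_2)\,\mathrm{dvol}_M = \int_M \phi_1\,\big(-\rho^{-1}\nabla^i(\rho\nabla_i\phi_2)\big)\,\rho\,\mathrm{dvol}_M,$$

so on exact 1-forms

$$d^*_\rho(d\phi) = -\rho^{-1}\nabla^i(\rho\nabla_i\phi) = -\Delta\phi - \langle\nabla\log\rho, \nabla\phi\rangle,$$

and in particular $\delta_{V_\phi}\rho = \rho\, d^*_\rho d\phi$, which ties (2.3), (2.5) and (2.6) together.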
3. Lengths of Curves

In this section we relate the Riemannian metric (2.4) to the Wasserstein metric. One such relation was given in [17], where it was heuristically shown that the geodesic distance coming from (2.4) equals the Wasserstein metric.

To give a rigorous relation, we recall that a curve $c : [0,1] \to P_2(M)$ has a length given by

$$L(c) = \sup_{J\in\mathbb{N}}\ \sup_{0=t_0\le t_1\le\ldots\le t_J=1}\ \sum_{j=1}^J W_2\big(c(t_{j-1}), c(t_j)\big). \tag{3.1}$$

From the triangle inequality, the expression $\sum_{j=1}^J W_2(c(t_{j-1}), c(t_j))$ is nondecreasing under a refinement of the partition $0 = t_0 \le t_1 \le \ldots \le t_J = 1$.

If $c : [0,1] \to P^\infty(M)$ is a smooth curve in $P^\infty(M)$ then we write $c(t) = \rho(t)\,\mathrm{dvol}_M$ and let $\phi(t)$ satisfy $\frac{\partial\rho}{\partial t} = -\nabla^i(\rho\nabla_i\phi)$, where we normalize $\phi$ by requiring for example that $\int_M \phi\,\rho\,\mathrm{dvol}_M = 0$. If $c$ is immersed then $\nabla\phi(t) \neq 0$.

The Riemannian length of $c$, as computed using (2.4), is

$$\int_0^1 \langle c'(t), c'(t)\rangle^{\frac12}\, dt = \int_0^1 \Big(\int_M |\nabla\phi(t)|^2(m)\,\rho(t)\,\mathrm{dvol}_M\Big)^{\frac12} dt. \tag{3.2}$$

The next proposition says that this equals the length of $c$ in the metric sense.

Proposition 1. If $c : [0,1] \to P^\infty(M)$ is a smooth immersed curve then its length $L(c)$ in the Wasserstein space $P_2(M)$ satisfies

$$L(c) = \int_0^1 \langle c'(t), c'(t)\rangle^{\frac12}\, dt. \tag{3.3}$$

Proof. We can parametrize $c$ so that $\int_M |\nabla\phi(t)|^2\,\rho(t)\,\mathrm{dvol}_M$ is a constant $C > 0$ with respect to $t$. Let $\{S_t\}_{t\in[0,1]}$ be the one-parameter family of diffeomorphisms of $M$ given by

$$\frac{\partial S_t(m)}{\partial t} = (\nabla\phi(t))(S_t(m)) \tag{3.4}$$

with $S_0(m) = m$. Then $c(t) = (S_t)_*(\rho(0)\,\mathrm{dvol}_M)$.

Given a partition $0 = t_0 \le t_1 \le \ldots \le t_J = 1$ of $[0,1]$, a particular transference plan from $c(t_{j-1})$ to $c(t_j)$ comes from the Monge transport $S_{t_j} \circ S_{t_{j-1}}^{-1}$. Then

$$\begin{aligned}
W_2\big(c(t_{j-1}), c(t_j)\big)^2 &\le \int_M d\big(m, S_{t_j}(S_{t_{j-1}}^{-1}(m))\big)^2\,\rho(t_{j-1})\,\mathrm{dvol}_M \\
&= \int_M d\big(S_{t_{j-1}}(m), S_{t_j}(m)\big)^2\,\rho(0)\,\mathrm{dvol}_M \\
&\le \int_M \Big(\int_{t_{j-1}}^{t_j} |\nabla\phi(t)|(S_t(m))\, dt\Big)^2 \rho(0)\,\mathrm{dvol}_M \\
&\le (t_j - t_{j-1}) \int_M \int_{t_{j-1}}^{t_j} |\nabla\phi(t)|^2(S_t(m))\, dt\,\rho(0)\,\mathrm{dvol}_M \\
&= (t_j - t_{j-1}) \int_{t_{j-1}}^{t_j} \int_M |\nabla\phi(t)|^2(m)\,\rho(t)\,\mathrm{dvol}_M\, dt,
\end{aligned} \tag{3.5}$$

so

$$\begin{aligned}
W_2\big(c(t_{j-1}), c(t_j)\big) &\le (t_j - t_{j-1})^{\frac12} \Big(\int_{t_{j-1}}^{t_j} \int_M |\nabla\phi(t)|^2(m)\,\rho(t)\,\mathrm{dvol}_M\, dt\Big)^{\frac12} \\
&= (t_j - t_{j-1}) \Big(\int_M |\nabla\phi(\bar t_j)|^2(m)\,\rho(\bar t_j)\,\mathrm{dvol}_M\Big)^{\frac12}
\end{aligned} \tag{3.6}$$

for some $\bar t_j \in [t_{j-1}, t_j]$.
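For orientation, here is a brief sketch of how (3.6) yields the upper-bound half of (3.3), with all notation as in the proof above and $C$ the constant-speed normalization chosen there. Since $\int_M |\nabla\phi(\bar t_j)|^2\,\rho(\bar t_j)\,\mathrm{dvol}_M = C$ for every $j$,

$$\sum_{j=1}^J W_2\big(c(t_{j-1}), c(t_j)\big) \le \sum_{j=1}^J (t_j - t_{j-1})\, C^{\frac12} = C^{\frac12} = \int_0^1 \langle c'(t), c'(t)\rangle^{\frac12}\, dt,$$

and taking the supremum over partitions gives $L(c) \le \int_0^1 \langle c'(t), c'(t)\rangle^{1/2}\, dt$. The reverse inequality requires a separate argument.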