Information Geometry for Regularized Optimal Transport and Barycenters of Patterns
ARTICLE
Communicated by Klaus-Robert Müller

Shun-ichi Amari
[email protected]
RIKEN, Wako-shi, Saitama 351-0198, Japan

Ryo Karakida
[email protected]
National Institute of Advanced Industrial Science and Technology, Koto-ku, Tokyo 135-0064, Japan

Masafumi Oizumi
[email protected]
ARAYA, Minato-ku, Tokyo 105-0003, Japan

Marco Cuturi
[email protected]
Google, 75009 Paris, and CREST, ENSAE, 91120 Palaiseau, France

We propose a new divergence on the manifold of probability distributions, building on the entropic regularization of optimal transportation problems. As Cuturi (2013) showed, regularizing the optimal transport problem with an entropic term brings several computational benefits. However, because of that regularization, the resulting approximation of the optimal transport cost does not define a proper distance or divergence between probability distributions. We recently proposed a family of divergences connecting the Wasserstein distance and the Kullback-Leibler divergence from an information geometry point of view (see Amari, Karakida, & Oizumi, 2018). However, that proposal was not able to retain key intuitive aspects of the Wasserstein geometry, such as translation invariance, which plays a key role in the more general problem of computing optimal transport barycenters. The divergence we propose in this work retains such properties and admits an intuitive interpretation.

Neural Computation 31, 827–848 (2019), © 2019 Massachusetts Institute of Technology, doi:10.1162/neco_a_01178

1 Introduction

Two major geometrical structures have been introduced on the probability simplex, the manifold of discrete probability distributions. The first one is based on the principle of invariance, which requires that the geometry between probability distributions be invariant under invertible transformations of random variables. That viewpoint is the cornerstone of the theory of information geometry (Amari, 2016), which acts as a foundation for statistical inference. The second direction is grounded in the theory of optimal transport, which exploits prior geometric knowledge on the base space in which random variables are valued (Villani, 2003). Computing optimal transport amounts to obtaining a coupling between two random variables, distributed according to the compared distributions, that is optimal in the sense that it has a minimal expected transportation cost between the first and second variables.

However, computing that solution can be challenging and is usually carried out by solving a linear program. Cuturi (2013) considered a relaxed formulation of optimal transport, in which the negative entropy of the coupling is used as a regularizer. We call that approximation of the original optimal transport cost the C function. Entropic regularization provides two major advantages. First, the resolution of regularized optimal transport problems, which relies on Sinkhorn's algorithm (Sinkhorn, 1964), can be trivially parallelized and is usually faster by several orders of magnitude than the exact solution of the linear program. Second, unlike the original optimal transport geometry, regularized transport distances are differentiable functions of their inputs, even when the latter are discrete, a property that can be exploited in problems arising from pattern classification and clustering (Cuturi & Doucet, 2014; Cuturi & Peyré, 2016).
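To fix ideas, the unregularized problem can be solved directly as a linear program over the $n \times n$ entries of the plan. The following minimal sketch is our illustration, not code from the paper; it assumes NumPy and SciPy, and its two blocks of equality constraints anticipate the marginal conditions that define the feasible set U(p, q) in section 2:

    import numpy as np
    from scipy.optimize import linprog

    def exact_ot_cost(p, q, M):
        """Solve min over U(p,q) of <M, P> as a linear program."""
        n = len(p)
        # P is flattened row-major into n*n variables; build the marginal constraints.
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            A_eq[i, i * n:(i + 1) * n] = 1.0  # row sums: sum_j P_ij = p_i
            A_eq[n + i, i::n] = 1.0           # column sums: sum over rows of column i = q_i
        b_eq = np.concatenate([p, q])
        res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        return res.fun

The program has $n^2$ variables, which is why its exact solution becomes expensive as n grows and why the regularized approach, sketched at the end of section 2, is attractive.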
The Wasserstein distance between two distributions reflects the metric properties of the base space on which a pattern is defined. Conventional information-theoretic divergences such as the Kullback-Leibler (KL) divergence and the Hellinger divergence fail to capture the geometry of the base space (a small numerical illustration is given at the end of this introduction). Therefore, the Wasserstein geometry is useful for applications where the geometry of the base space plays an important role, notably in computer vision, where an image pattern is defined on the two-dimensional base space $\mathbb{R}^2$. Cuturi and Doucet (2014) is a pioneering work in which the Wasserstein barycenter of various patterns summarizes the common shape of a database of exemplar patterns. More advanced inference tasks use the C function as an output loss (Frogner, Zhang, Mobahi, Araya, & Poggio, 2015), a model-fitting loss (Antoine, Marco, & Gabriel, 2016), or a way to learn mappings (Courty, Flamary, Tuia, & Rakotomamonjy, 2017). The Wasserstein geometry was also shown to be useful for estimating generative models, under the name of the Wasserstein GAN (Arjovsky, Chintala, & Bottou, 2017) and related contributions (Aude, Gabriel, & Marco, 2018; Salimans, Zhang, Radford, & Metaxas, 2018). It was also applied to economics and ecological systems (Muzellec, Nock, Patrini, & Nielsen, 2016). All in all, the field of "Wasserstein statistics" is becoming an interesting research topic for exploration.

The C function suffers, however, from a few issues. It is neither a distance nor a divergence, notably because comparing a probability measure with itself does not result in a null discrepancy: if p belongs to the simplex, then $C(p, p) \neq 0$. More worrisome, the minimizer of C(p, q) with respect to q is not reached at q = p. To solve these issues, we proposed a first attempt at unifying the information and optimal transport geometrical structures in Amari, Karakida, and Oizumi (2018). However, the information-geometric divergence introduced in that previous work loses some of the beneficial properties inherent in the C function. For example, the C function can be used to extract a common shape as the barycenter of a number of patterns (Cuturi & Doucet, 2014), which our former proposal could not do. It is therefore desirable to define a new divergence from C, in the rigorous sense that it is minimized when comparing a measure with itself and, preferably, is convex in both arguments, while still retaining the attractive properties of optimal transport.

We propose in this article such a new divergence between probability distributions p and q that is inspired by optimal transport while incorporating elements of information geometry. Its basic ingredient remains the entropic regularization of optimal transport. We show that the barycenters obtained with the new divergence are more sharply defined than those obtained with the original C function, while still keeping the shape-location decomposition property. This article focuses on theoretical aspects, with only very preliminary numerical simulations; detailed characteristics and simulations of the proposed divergence will be given in future work.
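To make the point about base-space geometry concrete, consider three bins placed at positions 0, 1, and 10 on the real line. The following minimal numerical sketch is our illustration, not material from the paper; it assumes NumPy, and the particular probability values are arbitrary. The KL divergence depends only on the probability values in each bin and therefore assigns the same discrepancy whether the mass moves by one unit or by ten, while the Wasserstein-1 distance, computed in one dimension from cumulative distributions, scales with the ground distance traveled:

    import numpy as np

    x = np.array([0.0, 1.0, 10.0])    # bin locations on the real line
    p  = np.array([0.98, 0.01, 0.01]) # mass concentrated near position 0
    q1 = np.array([0.01, 0.98, 0.01]) # mass moved to position 1
    q2 = np.array([0.01, 0.01, 0.98]) # mass moved to position 10

    kl = lambda u, v: np.sum(u * np.log(u / v))
    # 1D Wasserstein-1 distance via the cumulative distribution functions
    w1 = lambda u, v: np.sum(np.abs(np.cumsum(u - v)[:-1]) * np.diff(x))

    print(kl(p, q1), kl(p, q2))  # identical: KL is blind to where the bins sit
    print(w1(p, q1), w1(p, q2))  # about 0.97 versus 9.7: W1 grows with the displacement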
2 C-Function: Entropy-Regularized Optimal Transportation Plan

The general transportation problem is concerned with the optimal transportation of commodities, initially distributed according to a distribution p, so that they end up being distributed as another distribution q. In its full generality, such a transport can be carried out on a metric manifold, and both p and q can be continuous measures on that manifold. We consider in this article the discrete case, where that metric space is of finite size n, namely, $X = \{1, 2, \ldots, n\}$. We normalize the total number of commodities such that it sums to 1; p and q are therefore probability vectors of size n in the (n − 1)-dimensional simplex,

$$S_{n-1} = \left\{ p \in \mathbb{R}^n \;\middle|\; \sum_i p_i = 1,\; p_i \geq 0 \right\}. \tag{2.1}$$

We leave out extensions to the continuous case for future work.

Let $M_{ij}$ be the cost of transporting a unit of commodity from bin i to bin j, usually defined as a distance (or a suitable power thereof) between points i and j in X. In what follows, we assume that $M_{ij} > 0$ for $i \neq j$ and $M_{ii} = 0$. A transportation plan $P = (P_{ij}) \in \mathbb{R}^{n \times n}_+$ is a joint (probability) distribution over $X \times X$ that describes at each entry $P_{ij}$ the amount of commodities sent from bin i to bin j. Given a source distribution p and a target distribution q, the set of transport plans allowing a transfer from p to q is defined as

$$U(p, q) = \left\{ P \in \mathbb{R}^{n \times n}_+ : \forall i \leq n,\ \sum_{j=1}^n P_{ij} = p_i,\quad \forall j \leq n,\ \sum_{i=1}^n P_{ij} = q_j \right\}. \tag{2.2}$$

Note that if a matrix is in U(p, q), then its transpose is in U(q, p).

The transportation cost of a plan P is defined as the dot product of P with the cost matrix $M = (M_{ij})$,

$$\langle M, P \rangle = \sum_{ij} M_{ij} P_{ij}. \tag{2.3}$$

Its minimum among all feasible plans,

$$W(p, q) = \min_{P \in U(p,q)} \langle M, P \rangle, \tag{2.4}$$

is the Wasserstein distance on $S_{n-1}$ parameterized by the ground metric M. Cuturi (2013) studied the regularization of that problem using entropy,

$$H(P) = -\sum_{ij} P_{ij} \log P_{ij}, \tag{2.5}$$

to consider the problem of minimizing

$$L(P) = \langle M, P \rangle - \lambda H(P), \tag{2.6}$$

where $\lambda > 0$ is a regularization strength. We call its minimum the C function:

$$C_\lambda(p, q) = \min_{P \in U(p,q)} L(P). \tag{2.7}$$

The $C_\lambda$ function is a useful proxy for the Wasserstein distance, with favorable computational properties, and has appeared in several applications as a useful alternative to information-geometric divergences such as the KL divergence.

The optimal transportation plan is given in the following theorem (Cuturi & Peyré, 2016; Amari et al., 2018).

Theorem 1. The optimal transportation plan $P^*_\lambda$ is given by

$$\left(P^*_\lambda\right)_{ij} = c\, a_i b_j K_{ij}, \tag{2.8}$$

$$K_{ij} = \exp\left(-\frac{M_{ij}}{\lambda}\right), \tag{2.9}$$

where c is a normalization constant and the vectors $a, b \in S_{n-1}$ are determined from p and q such that the sender and receiver conditions (namely, the marginal conditions) are satisfied.
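Theorem 1 reduces the computation of $P^*_\lambda$ to finding the scaling vectors a and b, which is what Sinkhorn's algorithm (Sinkhorn, 1964) does by alternately enforcing the two marginal conditions. The sketch below is our illustration, assuming NumPy; the normalization constant c of equation 2.8 is absorbed into a and b, the iteration count is a fixed, arbitrary choice rather than a convergence test, and for very small $\lambda$ the kernel K underflows (log-domain variants address that):

    import numpy as np

    def sinkhorn_plan(p, q, M, lam, n_iter=2000):
        """Entropy-regularized plan (eq. 2.8) and value C_lambda(p, q) (eq. 2.7)."""
        K = np.exp(-M / lam)                  # Gibbs kernel, equation 2.9
        a = np.ones_like(p)
        for _ in range(n_iter):
            b = q / (K.T @ a)                 # receiver (column-marginal) condition
            a = p / (K @ b)                   # sender (row-marginal) condition
        P = a[:, None] * K * b[None, :]       # plan of the form a_i b_j K_ij
        cost = np.sum(M * P)                  # transport cost <M, P>, equation 2.3
        entropy = -np.sum(P * np.log(P))      # entropy H(P), equation 2.5
        return P, cost - lam * entropy        # L(P*) = C_lambda(p, q)

    # A small check of the defect discussed in section 1: C_lambda(p, p) is not 0.
    p = np.array([0.2, 0.5, 0.3])
    x = np.arange(3.0)
    M = np.abs(x[:, None] - x[None, :])       # ground costs M_ij = |i - j|, M_ii = 0
    _, c_pp = sinkhorn_plan(p, p, M, lam=0.1)
    print(c_pp)                               # nonzero although both arguments are p

Running the last lines prints a nonzero value of $C_\lambda(p, p)$, which is precisely why $C_\lambda$ is not a divergence and why the construction proposed in this article is needed.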