Information geometry connecting Wasserstein distance and Kullback–Leibler divergence via the entropy-relaxed transportation problem
Shun-ichi Amari (1,3) · Ryo Karakida (2) · Masafumi Oizumi (1,3)

Information Geometry, ISSN 2511-2481, https://doi.org/10.1007/s41884-018-0002-8

RESEARCH PAPER

Received: 5 October 2017 / Revised: 9 February 2018
© The Author(s) 2018. Published open access under the Creative Commons Attribution license.

Abstract  Two geometrical structures have been extensively studied for a manifold of probability distributions. One is based on the Fisher information metric, which is invariant under reversible transformations of random variables, while the other is based on the Wasserstein distance of optimal transportation, which reflects the structure of the distance between underlying random variables. Here, we propose a new information-geometrical theory that provides a unified framework connecting the Wasserstein distance and Kullback–Leibler (KL) divergence. We primarily considered a discrete case consisting of n elements and studied the geometry of the probability simplex S_{n-1}, which is the set of all probability distributions over n elements. The Wasserstein distance was introduced in S_{n-1} by the optimal transportation of commodities from distribution p to distribution q, where p, q ∈ S_{n-1}. We relaxed the optimal transportation by using entropy, which was introduced by Cuturi. The optimal solution was called the entropy-relaxed stochastic transportation plan. The entropy-relaxed optimal cost C(p, q) is computationally much less demanding than the original Wasserstein distance but does not define a distance because it is not minimized at p = q. To define a proper divergence while retaining the computational advantage, we first introduced a divergence function in the manifold S_{n-1} × S_{n-1} composed of all optimal transportation plans. We fully explored the information geometry of the manifold of the optimal transportation plans and subsequently constructed a new one-parameter family of divergences in S_{n-1} that are related to both the Wasserstein distance and the KL divergence.

Keywords  Wasserstein distance · Kullback–Leibler divergence · Optimal transportation · Information geometry

Corresponding author: Shun-ichi Amari, [email protected]

1 RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
2 National Institute of Advanced Industrial Science and Technology (AIST), 2-3-26 Aomi, Koto-ku, Tokyo 135-0064, Japan
3 Araya Inc., 2-8-10 Toranomon, Minato-ku, Tokyo 105-0001, Japan

1 Introduction

Information geometry [1] studies the properties of a manifold of probability distributions and is useful for various applications in statistics, machine learning, signal processing, and optimization. Two geometrical structures have been introduced from two distinct backgrounds. One is based on the invariance principle, where the geometry is invariant under reversible transformations of random variables.
The Fisher information matrix, for example, is a unique invariant Riemannian metric derived from the invariance principle [1–3]. Moreover, two dually coupled affine connections are used as invariant connections [1,4], which are useful in various applications. The other geometrical structure was introduced through the transportation problem, where one distribution of commodities is transported to another distribution. The minimum transportation cost defines a distance between the two distributions, which is called the Wasserstein, Kantorovich or earth-mover distance [5,6]. This structure provides a tool to study the geometry of distributions by taking the metric of the supporting manifold into account.

Let χ = {1, ..., n} be the support of a probability measure p. The invariant geometry provides a structure that is invariant under permutations of elements of χ and results in an efficient estimator in statistical estimation. On the other hand, when we consider a picture over n^2 pixels χ = {(i, j); i, j = 1, ..., n} and regard it as a distribution over χ, the pixels have a proper distance structure in χ. Spatially close pixels tend to take similar values. A permutation of χ destroys such a neighboring structure, suggesting that the invariance might not play a useful role. The Wasserstein distance takes such a structure into account and is therefore useful for problems with a metric structure in the support χ (see, e.g., [7–9]).

An interesting question is how these two geometrical structures are related. While both are important in their own respects, it would be intriguing to construct a unified framework that connects the two. With this purpose in mind, we examined the discrete case over n elements, such that a probability distribution is given by a probability vector p = (p_1, ..., p_n) in the probability simplex

    S_{n-1} = \left\{ \, p \;\middle|\; p_i > 0, \ \sum_i p_i = 1 \right\}.    (1)

Our theory extends naively to distributions over R^n if the mathematical difficulties of the geometry of function spaces are ignored; see Ay et al. [4] for details. We consider Gaussian distributions over the one-dimensional real line X as an example of the continuous case.

Cuturi modified the transportation problem such that the cost is minimized under an entropy constraint [7]. This is called the entropy-relaxed optimal transportation problem and is computationally less demanding than the original transportation problem. In addition to the advantage in computational cost, Cuturi showed that the quasi-distance defined by the entropy-relaxed optimal solution yields superior results in many applications compared to the original Wasserstein distance and information-geometric divergences such as the KL divergence.

We followed the entropy-relaxed framework that Cuturi et al. proposed [7–9] and introduced a Lagrangian function, which is a linear combination of the transportation cost and entropy. Given a distribution p of commodities on the sender's side and q on the receiver's side, the constrained optimal transportation plan is the minimizer of the Lagrangian function. The minimum value C(p, q) is a function of p and q, which we called the Cuturi function. However, this does not define a distance between p and q because it is non-zero at p = q and is not minimized when p = q.

To define a proper distance-like function in S_{n-1}, we introduced a divergence between p and q derived from the optimal transportation plan. A divergence is a general metric-like concept that includes the squared distance but is more flexible, allowing asymmetry between p and q.
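To make the role of the entropy relaxation concrete, the following is a minimal numerical sketch (not code from the paper) of the entropy-relaxed plan computed by Sinkhorn-type matrix scaling in Cuturi's convention, where the kernel is exp(-λM). The function name entropy_relaxed_plan, the toy cost matrix, and the parameter lam are our own choices; the printed value C_pp > 0 illustrates why the relaxed cost is not minimized at p = q.

import numpy as np

def entropy_relaxed_plan(p, q, M, lam=5.0, n_iter=300):
    # Sinkhorn-type scaling for the entropy-relaxed transportation problem
    # (Cuturi's convention): minimize <P, M> - (1/lam) * H(P) subject to the
    # row marginals of P being p and the column marginals being q.
    K = np.exp(-lam * M)                    # Gibbs kernel built from the cost matrix
    a = np.ones_like(p)
    for _ in range(n_iter):                 # alternate scaling until the marginals match
        b = q / (K.T @ a)
        a = p / (K @ b)
    return a[:, None] * K * b[None, :]      # optimal plan P_ij = a_i K_ij b_j

# Toy example: three points on a line with squared-distance transport costs.
x = np.arange(3, dtype=float)
M = (x[:, None] - x[None, :]) ** 2
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])

C_pq = np.sum(entropy_relaxed_plan(p, q, M) * M)   # cost part of the relaxed objective
C_pp = np.sum(entropy_relaxed_plan(p, p, M) * M)
print(C_pq, C_pp)    # C_pp > 0: the relaxed cost is neither zero nor minimal at p = q

In this convention a larger lam suppresses the entropy term, so the relaxed cost approaches the Wasserstein cost, while a smaller lam pushes the plan toward the independent product plan.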
A manifold equipped with a divergence yields a Riemannian metric with a pair of dual affine connections. Dually coupled geodesics are defined, which possess remarkable properties, generalizing Riemannian geometry [1].

We studied the geometry of the entropy-relaxed optimal transportation plans within the framework of information geometry. They form an exponential family of probability distributions defined in the product manifold S_{n-1} × S_{n-1}. Therefore, a dually flat structure was introduced. The m-flat coordinates are the expectation parameters (p, q), and their dual, e-flat coordinates (canonical parameters), are (α, β), which are assigned from the minimax duality of nonlinear optimization problems. We can naturally define a canonical divergence, namely the KL divergence KL[(p, q) : (p', q')] between the two optimal transportation plans for (p, q) and (p', q'), sending p to q and p' to q', respectively.

To define a divergence from p to q in S_{n-1}, we used a reference distribution r. Given r, we defined a divergence between p and q by KL[(r, p) : (r, q)]. There are a number of potential choices for r: one is to use r = p and another is to use the arithmetic or geometric mean of p and q. These options yield one-parameter families of divergences connecting the Wasserstein distance and the KL divergence. Our work uncovers a novel direction for studying the geometry of a manifold of probability distributions by integrating the Wasserstein distance and KL divergence.

2 Entropy-constrained transportation problem

Let us consider n terminals χ = (X_1, ..., X_n), some of which, say X_1, ..., X_s, are sending terminals at which p_1, ..., p_s (p_i > 0) amounts of commodities are stocked. At the other terminals, X_{s+1}, ..., X_n, no commodities are stocked (p_i = 0). These are transported within χ such that q_1, ..., q_r amounts are newly stored at the receiving terminals X_{j_1}, ..., X_{j_r}. There may be overlap between the sending and receiving terminals, χ_S = {X_1, ..., X_s} and χ_R = {X_{j_1}, ..., X_{j_r}}, including the case that χ_R = χ_S = χ (Fig. 1).

[Fig. 1 Transportation from the sending terminals χ_S to the receiving terminals χ_R]

We normalized the total amount of commodities to be equal to 1, so that p = (p_1, ..., p_s) and q = (q_1, ..., q_r) can be regarded as probability distributions in the probability simplices S_{s-1} and S_{r-1}, respectively,

    \sum_i p_i = 1, \quad \sum_j q_j = 1, \quad p_i > 0, \quad q_j > 0.    (2)

Let \bar{S}_{n-1} be the probability simplex over χ. Then S_{s-1} ⊂ \bar{S}_{n-1} and S_{r-1} ⊂ \bar{S}_{n-1}, where \bar{S}_{n-1} is the closure of S_{n-1},

    \bar{S}_{n-1} = \left\{ \, r \;\middle|\; r_i \ge 0, \ \sum_i r_i = 1 \right\}.    (3)

It should be noted that if some components of p and q are allowed to be 0, we do not need to treat χ_S and χ_R separately, i.e., we can consider both χ_S and χ_R to be equal to χ. In that case, we simply considered both p and q as elements of \bar{S}_{n-1}.

We considered a transportation plan P = (P_{ij}) denoted by an s × r matrix, where P_{ij} ≥ 0 is the amount of commodity transported from X_i ∈ χ_S to X_j ∈ χ_R.
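The marginal constraints that a transportation plan from p to q must satisfy are easy to state in code. The sketch below is our own illustration, not code from the paper; the particular values of p and q are arbitrary, and it checks the constraints for the simplest feasible plan, the product plan p qᵀ, which transports mass without regard to the cost.

import numpy as np

# Sender distribution p over s terminals and receiver distribution q over r terminals.
p = np.array([0.5, 0.3, 0.2])            # p in S_{s-1}, here s = 3
q = np.array([0.1, 0.6, 0.3])            # q in S_{r-1}, here r = 3

# A transportation plan is an s x r matrix P with P_ij >= 0, row sums equal to p
# (amounts leaving each sending terminal) and column sums equal to q (amounts
# arriving at each receiving terminal).  The product plan p q^T is always feasible.
P = np.outer(p, q)

assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), p)     # row marginals match the sender distribution
assert np.allclose(P.sum(axis=0), q)     # column marginals match the receiver distribution
assert np.isclose(P.sum(), 1.0)          # total commodity normalized to 1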