An Overview of Information Geometry

Yuxi Liu

Abstract

This paper summarizes the basic concepts of information geometry, and gives, as example applications, the Jeffreys prior in Bayesian probability and natural gradient descent in machine learning. Necessary concepts in probability and statistics are explained in detail. The main purpose of the paper is to provide, for people familiar with differential geometry, an accessible overview and entry point into information geometry. No originality is claimed in content.

1 Introduction

Information geometry applies methods of differential geometry to problems in statistics, probability theory, data analysis, and many other related fields. Despite its wide reach, it is not as well known or well understood as it should be. This is probably due to a lack of accessible introductions.

This paper aims to provide such an introduction. The intended audience has a good symbolic and intuitive understanding of modern differential geometry, but a merely symbolic understanding of probability, statistics, and machine learning. As such, all such concepts used in the paper are explained in detail, from the bare basics, while the differential geometry is written in ruthless telegraphic style. Standard references for information geometry are [Amari and Nagaoka, 2007] and [Amari, 2016]. We use Einstein summation throughout the paper.

2 Probability concepts

In this section, we define some probability concepts we will use in the paper. See [Cover and Thomas, 2006, chapter 2] for a detailed exposition.

We consider a probability space $(\Omega, \mathcal{B}, P)$, where $\Omega$ is the state space and $P$ is the probability measure on the measurable space $(\Omega, \mathcal{B})$. The probability space is either discrete or continuous, depending on the context. If it is discrete, we use the notation $\Omega = \{0, 1, \dots\}$ and $P_i = P(\{i\})$. If it is continuous, we assume that the probability measure has a density $p : \Omega \to [0, \infty)$, so that for any measurable subset $A \subset \Omega$, $P(A) = \int_A p(x)\,dx$.

In information theory, the entropy of a random variable quantifies the amount of randomness it contains.

Definition 2.1. The entropy of a discrete random variable $P$ is

$$H(P) = -\sum_i P_i \ln P_i. \tag{1}$$

For a continuous random variable, we can define an analogous quantity on its probability density:

Definition 2.2. The differential entropy or continuous entropy of a probability density $p$ is

$$h(p) = -\int_\Omega p(x) \ln p(x)\,dx. \tag{2}$$

One should be careful to note that the differential entropy is defined not for the random variable $P$ itself, but for its probability density $p$. Consider for instance the uniform random variable on $[0, 1]$. Its probability density function is $p(x) = 1$, so we have $h(p) = 0$. However, we could trivially transform the random variable by stretching its state space to $[0, k]$; the transformed probability density is then $p(x) = 1/k$, and $h(p) = \ln k$, which is negative for $k < 1$. In contrast, the entropy of a discrete random variable is always nonnegative.

Philosophically, in probability theory one is interested in the random variable itself, rather than its presentation. As such, if a quantity, such as the differential entropy, depends on how the random variable is presented, then it is not a concept in probability, but rather a concept in mathematical analysis.
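To make Definitions 2.1 and 2.2 concrete, here is a minimal Python sketch (not part of the original paper; the helper names are made up for illustration) that computes the entropy of a discrete distribution and the differential entropy of the uniform density on $[0, k]$, showing that $h(p) = \ln k$ becomes negative once $k < 1$.

```python
import numpy as np

def entropy(P):
    """H(P) = -sum_i P_i ln P_i for a discrete distribution P."""
    P = np.asarray(P, dtype=float)
    P = P[P > 0]                      # by convention, 0 ln 0 = 0
    return -np.sum(P * np.log(P))

def differential_entropy_uniform(k, n=100_000):
    """h(p) for the uniform density p(x) = 1/k on [0, k],
    via a midpoint Riemann sum of -p ln p; analytically, this is ln k."""
    dx = k / n
    x = (np.arange(n) + 0.5) * dx     # midpoints of a grid on [0, k]
    p = np.full_like(x, 1.0 / k)
    return -np.sum(p * np.log(p)) * dx

print(entropy([0.5, 0.5]))                # ln 2 ≈ 0.693, always nonnegative
print(differential_entropy_uniform(2.0))  # ≈ ln 2 ≈ 0.693
print(differential_entropy_uniform(0.5))  # ≈ ln(1/2) ≈ -0.693, negative
```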
When there are two random variables $P, Q$, we can measure their difference by the following.

Definition 2.3. The relative entropy or Kullback–Leibler divergence between two probability densities $p, q$ is

$$D(p \| q) = \int_\Omega p(x) \ln \frac{p(x)}{q(x)}\,dx. \tag{3}$$

When both random variables are discrete, this reduces to

$$D(P \| Q) = \sum_i P_i \ln \frac{P_i}{Q_i}. \tag{4}$$

It can be proven that the Kullback–Leibler divergence is always nonnegative, and is zero iff $p = q$. This result is called the Gibbs inequality. It also does not depend on the presentation of $P, Q$: under a change of state space $f : \Omega \to \Omega'$, the densities $p, q$ are changed to $p', q'$, but as long as $f$ does not "lose information", $D(p \| q) = D(p' \| q')$. Note that this is not a distance, since in general $D(p \| q) \neq D(q \| p)$, and it also does not satisfy the triangle inequality. This is why it is called a "divergence".

3 Defining the information manifold

For this section, we follow [Caticha, 2015] and [Leinster, 2018].

Now we consider a family of probability densities $\{p_\theta \mid \theta = (\theta^1, \dots, \theta^n) \in M\}$, where $\theta$ is usually called the parameter of the family, and $M$ is an open submanifold of $\mathbb{R}^n$. This $M$ is the information manifold, or statistical manifold. In general, an information manifold can be constructed from multiple parametrizations, each parametrization by $\theta$ being one chart of the manifold. We will have no need for such generality, however, and will develop the theory as if $M$ were covered by one chart. We will also assume that $p(x \mid \theta)$ is smooth in $\theta$.

In this view, we immediately see that the continuous entropy function $h$ is a scalar function on the manifold. Differentiating under the integral sign shows that $h \in C^\infty(M)$. We will endow $M$ with a Riemannian metric that measures the distance between infinitesimally close probability densities $p(x \mid \theta)$ and $p(x \mid \theta + d\theta)$.

Example 3.1. Consider the family of normal distributions on the real line. Each is characterized by its mean $\mu$ and variance $\sigma^2$, with $(\mu, \sigma^2) \in \mathbb{R} \times (0, \infty)$, and

$$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}. \tag{5}$$

The Kullback–Leibler divergence between two points on an information manifold is an instance of a general concept, the divergence, which can be thought of as an asymmetric generalization of distance.

Definition 3.1. A divergence on a manifold $M$ is a function $D(\cdot \| \cdot) : M \times M \to [0, \infty)$ such that $D(q \| p) = 0$ iff $p = q$, and in any local chart $\theta$ we have

$$D(\theta + d\theta \| \theta) = \frac{1}{2} g_{ij}\,d\theta^i d\theta^j + o(|d\theta|^2) \tag{6}$$

for some positive definite matrix $[g_{ij}]$. A divergence has an associated Riemannian metric $g_{ij}$, as given in the definition.

3.1 Metric via distinguishability

Consider the relative difference of $p(x \mid \theta)$ and $p(x \mid \theta + d\theta)$:

$$\Delta_\theta(d\theta) = \frac{p(x \mid \theta + d\theta) - p(x \mid \theta)}{p(x \mid \theta)} = \frac{\partial \ln p(x \mid \theta)}{\partial \theta^i}\,d\theta^i \tag{7}$$

(to first order in $d\theta$). We can interpret this as how distinguishable the two densities are, if all we have are their values at a particular $x \in \Omega$. If it is zero, they appear the same; the bigger its absolute value, the more they differ.

We define the convenient notation $p_\theta(x) = p(x \mid \theta)$ and the log-density function $l_\theta(x) = l(x \mid \theta) = \ln p(x \mid \theta)$. Fixing a parameter $\theta \in M$ and an infinitesimal variation $d\theta$, the expected value of $\Delta_\theta(d\theta)$ is

$$E_\theta[\Delta_\theta(d\theta)] = \int_\Omega \Delta_\theta(d\theta)\, p_\theta(x)\,dx = \int_\Omega \big(p_{\theta + d\theta}(x) - p_\theta(x)\big)\,dx = 0, \tag{8}$$

which is unfortunate. The square, however, is nontrivial:

$$E_\theta[\Delta_\theta(d\theta)^2] = \int_\Omega p_\theta(x)\,\frac{\partial \ln p_\theta(x)}{\partial \theta^i}\,\frac{\partial \ln p_\theta(x)}{\partial \theta^j}\,dx\; d\theta^i d\theta^j. \tag{9}$$

This allows us to define a metric on $M$.

Definition 3.2. The Fisher information metric on $M$ is defined by

$$g_{ij}(\theta) = E_\theta[\partial_i l\,\partial_j l]. \tag{10}$$

Note that we always take partial derivatives with respect to the parameters $\theta$ unless otherwise specified, so $\partial_i l$ is the partial derivative with respect to $\theta^i$.
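As an illustration of Definition 3.2 (a sketch added for this overview, not from the original paper), the Python code below estimates the Fisher metric of the normal family of Example 3.1, in coordinates $\theta = (\mu, \sigma^2)$, by Monte Carlo: it averages $\partial_i l\,\partial_j l$ over samples drawn from $p_\theta$ and compares the result with the well-known closed form $g = \mathrm{diag}(1/\sigma^2,\, 1/(2\sigma^4))$. The function names are made up for this example.

```python
import numpy as np

def score_normal(x, mu, var):
    """Gradient of l = ln p(x | mu, var) with respect to theta = (mu, var),
    for the normal density of Example 3.1."""
    d_mu = (x - mu) / var
    d_var = -0.5 / var + 0.5 * (x - mu) ** 2 / var ** 2
    return np.stack([d_mu, d_var], axis=-1)           # shape (n_samples, 2)

def fisher_metric_mc(mu, var, n_samples=2_000_000, seed=0):
    """Monte Carlo estimate of g_ij = E_theta[d_i l  d_j l]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, np.sqrt(var), size=n_samples)  # samples from p_theta
    s = score_normal(x, mu, var)
    return s.T @ s / n_samples                        # average of outer products

mu, var = 1.0, 2.0
print(np.round(fisher_metric_mc(mu, var), 4))        # ≈ [[0.5, 0], [0, 0.125]]
print(np.diag([1.0 / var, 1.0 / (2.0 * var ** 2)]))  # exact diag(1/σ², 1/(2σ⁴))
```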
It is easy to verify that $g_{ij}$ is a symmetric covariant 2-tensor (check that it transforms covariantly under a coordinate change $\theta \mapsto \theta'$). It remains to show that it is positive-definite, that is, for all $d\theta$, $g_{ij}\,d\theta^i d\theta^j \geq 0$, with equality iff $d\theta = 0$.

From the definition, $g_{ij}\,d\theta^i d\theta^j = E_\theta[\Delta_\theta(d\theta)^2]$ is the expected value of a nonnegative quantity, so $g_{ij}\,d\theta^i d\theta^j \geq 0$. If it equals zero, then $\Delta_\theta(d\theta) = 0$ almost surely, that is, $(p_{\theta + d\theta}(x) - p_\theta(x))/p_\theta(x) = 0$ almost surely, so $p_{\theta + d\theta}(x) = p_\theta(x)$ almost surely; the displacement $d\theta$ would not change the distribution at all. We thus assume that the manifold coordinates are regular, that is, infinitesimally different parameters give different distributions, so that the Fisher metric is positive-definite in that coordinate.

3.2 Metric via Kullback–Leibler divergence

Consider the Kullback–Leibler divergence of two densities $p, q$:

$$D(p \| q) = \int_\Omega p(x) \ln \frac{p(x)}{q(x)}\,dx = -\int_\Omega p(x) \ln\!\left(1 + \frac{q(x) - p(x)}{p(x)}\right) dx.$$

Expanding in Taylor series to second order,

$$D(p \| q) \approx \int_\Omega \left( \big(p(x) - q(x)\big) + \frac{1}{2}\,\frac{(p(x) - q(x))^2}{p(x)} \right) dx. \tag{11}$$

The linear term integrates to zero, since $\int_\Omega p(x)\,dx = \int_\Omega q(x)\,dx = 1$, so

$$D(p \| q) \approx \frac{1}{2} \int_\Omega \left( \frac{p(x) - q(x)}{p(x)} \right)^2 p(x)\,dx. \tag{12}$$

Now let $p = p_{\theta + d\theta}$, $q = p_\theta$; we have

$$D(p_{\theta + d\theta} \| p_\theta) \approx \frac{1}{2} E_\theta[\Delta_\theta(d\theta)^2] = \frac{1}{2} g_{ij}\,d\theta^i d\theta^j, \tag{13}$$

deriving the Fisher metric as the Kullback–Leibler divergence between infinitesimally close points on the information manifold; that is, it is the associated Riemannian metric of the Kullback–Leibler divergence.

We can recast this in another form. Start with

$$D(p_{\theta + d\theta} \| p_\theta) \approx \frac{1}{2} g_{ij}(\theta)\,d\theta^i d\theta^j \approx \frac{1}{2} g_{ij}(\theta + d\theta)\,d\theta^i d\theta^j \approx D(p_\theta \| p_{\theta + d\theta})$$

and perform a Taylor expansion again, up to second order:

$$\ln p_{\theta + d\theta}(x) \approx \ln p_\theta(x) + s_\theta \cdot d\theta + \frac{1}{2} (\partial_i \partial_j l)\,d\theta^i d\theta^j,$$

where $s_\theta = \nabla_\theta l_\theta$ is usually called the score function. So

$$\begin{aligned}
\frac{1}{2} g_{ij}\,d\theta^i d\theta^j &= D(p_\theta \| p_{\theta + d\theta}) \\
&= \int_\Omega p(x \mid \theta)\,\big(\ln p(x \mid \theta) - \ln p(x \mid \theta + d\theta)\big)\,dx \\
&\approx -E_\theta\!\left[ s_\theta \cdot d\theta + \frac{1}{2} (\partial_i \partial_j l)\,d\theta^i d\theta^j \right] \\
&= -\frac{1}{2} E_\theta[\partial_i \partial_j l]\,d\theta^i d\theta^j,
\end{aligned} \tag{14}$$

where we used the fact that the expectation of the score is zero:

$$E_\theta[s_\theta] = \int_\Omega p_\theta(x)\,\frac{\nabla_\theta p_\theta(x)}{p_\theta(x)}\,dx = \nabla_\theta \int_\Omega p_\theta(x)\,dx = 0. \tag{15}$$

So we obtain

$$g_{ij} = -E_\theta[\partial_i \partial_j l]. \tag{16}$$

3.3 Uniqueness of the Fisher metric

The Fisher metric has many derivations, and is in some sense the unique (up to a constant factor) Riemannian metric on the information manifold that is compatible with the probability distributions $p_\theta$.
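As a closing numerical illustration of the expansion in Section 3.2 (again a sketch added for this overview, not from the original paper), the code below compares the exact Kullback–Leibler divergence between two nearby normal densities with the quadratic form $\frac{1}{2} g_{ij}\,d\theta^i d\theta^j$, in coordinates $\theta = (\mu, \sigma^2)$ with $g = \mathrm{diag}(1/\sigma^2,\, 1/(2\sigma^4))$; the standard closed-form KL divergence between Gaussians is assumed.

```python
import numpy as np

def kl_normal(mu1, var1, mu0, var0):
    """Exact D(N(mu1, var1) || N(mu0, var0)), standard Gaussian closed form."""
    return 0.5 * (np.log(var0 / var1) + var1 / var0
                  + (mu1 - mu0) ** 2 / var0 - 1.0)

def half_fisher_quadratic(mu, var, d_mu, d_var):
    """(1/2) g_ij dtheta^i dtheta^j with g = diag(1/var, 1/(2 var^2))."""
    return 0.5 * (d_mu ** 2 / var + d_var ** 2 / (2.0 * var ** 2))

mu, var = 0.0, 1.0
for eps in (1e-1, 1e-2, 1e-3):
    d_mu, d_var = eps, 2.0 * eps
    exact = kl_normal(mu + d_mu, var + d_var, mu, var)
    approx = half_fisher_quadratic(mu, var, d_mu, d_var)
    # the two agree to leading order; the discrepancy is of order eps^3
    print(f"eps={eps:g}  exact={exact:.3e}  quadratic={approx:.3e}")
```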
