An Information Geometry of Statistical Manifold Learning

Total Pages: 16

File Type: PDF, Size: 1020 KB

An Information Geometry of Statistical Manifold Learning

Ke Sun [email protected]
Stéphane Marchand-Maillet [email protected]
Viper Group, Computer Vision & Multimedia Laboratory, University of Geneva, Switzerland

Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 2014. JMLR: W&CP volume 32. Copyright 2014 by the author(s).

Abstract

Manifold learning seeks low-dimensional representations of high-dimensional data. The main tactics have been exploring the geometry in an input data space and an output embedding space. We develop a manifold learning theory in a hypothesis space consisting of models. A model means a specific instance of a collection of points, e.g., the input data collectively or the output embedding collectively. The semi-Riemannian metric of this hypothesis space is uniquely derived in closed form based on the information geometry of probability distributions. There, manifold learning is interpreted as a trajectory of intermediate models. The volume of a continuous region reveals an amount of information. It can be measured to define model complexity and embedding quality. This provides deep unified perspectives of manifold learning theory.

NOTATIONS

$X_n$, $Y_n$, $Z_n$ — respectively, a generic model in $\mathbb{R}^D$, an MAL input model in $\mathbb{R}^D$, and an MAL output model in $\mathbb{R}^d$; the subscript denotes the sample size and can be omitted.
$\mathcal{M}(X)$, $\mathcal{M}_f$, $\mathcal{M}_z$, $\mathcal{O}^n_{X,k}$, $\mathcal{H}^{2n}_{X,Y,k}$, $\bar{\mathcal{H}}^2_{Y,Z,k,\kappa}$ — different model manifolds; the superscript denotes the dimension, the parameters in parentheses denote the coordinate system, and both can be omitted.
$S^{n-1}$ — the $(n-1)$-dimensional statistical simplex.
$g$, $G$ — the semi-Riemannian metric of a model family and the Fisher information metric.
$s^X_{ij}$ — pairwise dissimilarities of a model $X$; the superscript can be omitted.
$p_{j|i}$, $p_{ij}$ — neighbourhood probabilities.
$\theta = (\theta_1, \ldots, \theta_n)$ — canonical parameters of probability distributions.
$|\mathcal{M}|$ — volume or scale of $\mathcal{M}$.
$\mathrm{kNN}_i$ — indexes of the $k$-nearest neighbours of the $i$'th sample.
$\det(\cdot)$ — determinant.
$\mathrm{diag}(x_1, \ldots, x_\ell)$ — a diagonal matrix with $x_1, \ldots, x_\ell$ on its main diagonal.

Manifold learning (MAL), or non-linear dimensionality reduction, assumes that some given high-dimensional observations $y_1, \ldots, y_n \in \mathbb{R}^D$ lie around a low-dimensional sub-manifold $\{\Gamma(z) : z \in \mathbb{R}^d\}$ induced by a smooth mapping $\Gamma : \mathbb{R}^d \to \mathbb{R}^D$ ($d \ll D$). While it is possible to learn a parametric form of $\Gamma$ (Hinton & Salakhutdinov, 2006), the majority of manifold learners are non-parametric: they directly learn a set of low-dimensional coordinates $\{z_i\}_{i=1}^n$ so as to preserve certain information in $\{y_i\}_{i=1}^n$.

Depending on the choice of information to be preserved, at least two families of MAL methods have thrived in the last decade. The spectral methods (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2003) and semi-definite embeddings (Weinberger et al., 2004; Sha & Saul, 2005) represent the family with natural convex formulations. They preserve only encodings of local information on a weighted k-nearest-neighbour (kNN) graph of $\{y_i\}$. Stochastic Neighbour Embedding (SNE) (Hinton & Roweis, 2003) and its extensions (Cook et al., 2007; Venna & Kaski, 2007; van der Maaten & Hinton, 2008) represent the non-convex family. They encode the input and the output as probability distributions and optimize the embedding in a maximum-likelihood framework. By sacrificing convexity, non-local information can be preserved as well. The latter SNE-based family shows robustness to parameter configuration and favorable performance in data visualization. It is being actively developed (Carreira-Perpiñán, 2010; Vladymyrov & Carreira-Perpiñán, 2012; Sun et al., 2012; Yang et al., 2013) and stands as state-of-the-art MAL.
Despite such diversity, several critical problems in the field of MAL remain unclear. Practically, no standard exists for gauging the data complexity and the embedding quality. The performance is often empirically assessed via visualization or indirectly evaluated via classification. Theoretically, an intrinsic MAL theory with deep connections to classical statistical learning theory (Akaike, 1974; Schwarz, 1978; Amari, 1995; Vapnik, 1998) is not established. MAL emphasizes local information encoded into sample-wise structures; how to describe and measure such preservation of local information is unknown.

We attack these problems with a geometry, not in an observation space $\mathbb{R}^D$ or an embedding space $\mathbb{R}^d$, but in a very high-dimensional hypothesis space made of models.

Definition 1. A model $X_n$ is a specific instance of a set of vectors $\{x_1, \ldots, x_n\}$.

Remark 1.1. By default, a model denoted by $X_n$ is a coordinate matrix $(x_1, \ldots, x_n)^T$. Alternatively, it can be implicitly specified by an $n \times n$ matrix of pairwise measurements, e.g., distances or (dis-)similarities.

Definition 2. A model family $\mathcal{M}$ is a smooth manifold consisting of continuous models.

For example in MAL, the input $Y_n = (y_1, \ldots, y_n)^T$ or the output $Z_n = (z_1, \ldots, z_n)^T$ is one single model. The model family $\mathcal{M}_z = \{Z_n = (z_1, \ldots, z_n)^T : \sum_i z_i = 0;\ \forall i, z_i \in \mathbb{R}^d\}$ includes all possible embeddings centered at $0$. Then, MAL can be described as a projection $Y \to Z^\star(Y) \in \mathcal{M}_z$ through convex optimization, or a path $Z^0(Y), Z^1(Y), \ldots, Z^\star(Y) \in \mathcal{M}_z$ along the gradient of some non-convex objective function.

1. Preliminaries

1.1. Manifold Learning

Given a model family $\mathcal{M}$, any model $X \in \mathcal{M}$, representing a collection of coordinates $x_1, \ldots, x_n$, can be encoded into $n$ distributions over $\{1, 2, \ldots, n\}$:
$$p_{j|i}(X) = \frac{\exp(-s^X_{ij})}{\sum_{\iota \neq i} \exp(-s^X_{i\iota})} \;\; (\forall j \neq i); \qquad p_{i|i} = 0 \;\; (\forall i) \tag{1}$$
or one single distribution over $\{1, 2, \ldots, n\}^2$:
$$p_{ij}(X) = \frac{\exp(-s^X_{ij})}{\sum_{\iota, \jmath : \iota \neq \jmath} \exp(-s^X_{\iota\jmath})} \;\; (\forall j \neq i); \qquad p_{ii} = 0 \;\; (\forall i). \tag{2}$$
In either case, $s^X_{ij}$ is a possibly non-symmetric difference measure between $x_i$ and $x_j$, e.g., the square distance. After normalization, $p$ represents the probability of $x_i$ and $x_j$ being similar. The subscript "$j|i$" of $p$ in eq. (1) signifies a conditional probability; the subscript "$ij$" in eq. (2) signifies a joint probability.

It is not arbitrary but natural to employ eqs. (1) and (2) for statistical MAL, because they encode distributed local information. The information in $p$ is distributed in a sample-wise manner: each sample $x_i$ has limited knowledge, encoded into $p_{\cdot|i}$, mostly regarding its neighbours.

Equations (1) and (2) are general enough to cover SNE (Hinton & Roweis, 2003), symmetric SNE (Cook et al., 2007), t-SNE (van der Maaten & Hinton, 2008), and a spectrum of extensions. For example, SNE applies $s^X_{ij} = \tau_i \|x_i - x_j\|^2$ to eq. (1) for encoding the input and output, where $\tau_i > 0$ is a scalar; t-SNE applies $s^X_{ij} = \log(\|x_i - x_j\|^2 + 1)$ to eq. (2) for encoding the output. From a kernel view (Ham et al., 2004), any MAL technique that encodes into kernels naturally extends to such probabilities.
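To make the encodings concrete, here is a minimal NumPy sketch of eqs. (1) and (2), using the SNE dissimilarity for the conditional encoding and the t-SNE dissimilarity for the joint one. This is our illustration rather than the authors' code; the function names and toy data are ours.

```python
import numpy as np

def sne_conditional(X, tau):
    """Eq. (1): rows p_{.|i} with s_ij = tau_i * ||x_i - x_j||^2 (SNE)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # ||x_i - x_j||^2
    s = tau[:, None] * sq
    np.fill_diagonal(s, np.inf)          # exp(-inf) = 0 enforces p_{i|i} = 0
    w = np.exp(-s)
    return w / w.sum(axis=1, keepdims=True)

def tsne_joint(X):
    """Eq. (2): joint p_ij with s_ij = log(||x_i - x_j||^2 + 1) (t-SNE output)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    w = 1.0 / (1.0 + sq)                 # exp(-s_ij) = 1 / (1 + ||x_i - x_j||^2)
    np.fill_diagonal(w, 0.0)             # p_ii = 0
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
P = sne_conditional(X, tau=np.ones(5))   # a point on the product manifold (S^{n-1})^n
Q = tsne_joint(X)                        # a point on S^{n^2 - 1}
assert np.allclose(P.sum(axis=1), 1.0) and np.isclose(Q.sum(), 1.0)
```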
Although a model $X$ can have various forms, after the encoding, $p(X)$ lives in a unified space. In eq. (1), $(p_{j|i}(X))$ is a point on the product manifold $(S^{n-1})^n$, where $S^{n-1} = \{(p_1, \ldots, p_n) : \forall i, p_i > 0; \sum_{i=1}^n p_i = 1\}$ is a statistical simplex consisting of all distributions over $\{1, 2, \ldots, n\}$. In eq. (2), $(p_{ij}(X))$ is a point on $S^{n^2-1}$. Such a unified representation makes it possible to measure the difference between two models $Y$ and $Z$ with different original forms. It motivates us to develop an MAL theory on the statistical simplex regardless of the original representations.

1.2. Information Geometry

We introduce the Riemannian geometry of $S^{n-1}$, the $(n-1)$-dimensional statistical simplex formed by all distributions of the form $(p_1, \ldots, p_n)$. The reader is referred to (Jost, 2008; Amari & Nagaoka, 2000) for a thorough view.

Any $(p_1, \ldots, p_n) \in S^{n-1}$ uniquely corresponds to $\theta = (\theta_1, \ldots, \theta_n)$ via the invertible mapping $\theta_i = \log(p_i / p_r)$, $\forall i$, where $p_r$ ($1 \leq r \leq n$) is a reference probability. These canonical parameters $\theta$ serve as a global coordinate system of $S^{n-1}$. Around any $\theta \in S^{n-1}$, the partial derivative operators $\{\partial/\partial\theta_1, \ldots, \partial/\partial\theta_{r-1}, \partial/\partial\theta_{r+1}, \ldots, \partial/\partial\theta_n\}$ represent the velocities passing through $\theta$ along the coordinate curves. An infinitesimal patch of $S^{n-1}$ around $\theta$ can be studied as a linear space $T_\theta S^{n-1} = \{\sum_{i : i \neq r} \alpha_i \cdot \partial/\partial\theta_i : \forall i, \alpha_i \in \mathbb{R}\}$ called the tangent space. A Riemannian metric $G$ defines a local inner product $\langle \partial/\partial\theta_i, \partial/\partial\theta_j \rangle_{G(\theta)}$ on each tangent space $T_\theta S^{n-1}$ and varies smoothly across different $\theta$. Locally, it is given by the positive definite (p.d.) matrix $G_{ij}(\theta) = \langle \partial/\partial\theta_i, \partial/\partial\theta_j \rangle_{G(\theta)}$. Under certain conditions (Čencov, 1982), the Riemannian metric of statistical manifolds, e.g. $S^{n-1}$, is uniquely given by the Fisher information metric (FIM) (Rao, 1945):
$$G_{ij}(\theta) = \sum_{k=1}^n p_k(\theta) \frac{\partial \log p_k(\theta)}{\partial \theta_i} \frac{\partial \log p_k(\theta)}{\partial \theta_j}.$$

Lemma 3. On $S^{n-1}$, $G_{ij}(\theta) = p_i(\theta)\delta_{ij} - p_i(\theta)p_j(\theta)$.

FIM grants us the power to measure information. With respect to a coordinate system, e.g. the canonical parameters [...]

Figure 1. [Diagram: a model family $\mathcal{M}_f$, with tangent space $T_X\mathcal{M}_f$ and coordinates $\varphi_i$, is mapped by the probabilistic encoding $p$ onto a (product of) statistical simplex $S$, with tangent space $T_{p(X)}S$ and canonical coordinates $\theta_i$; $dp$ is the differential (pushforward). Manifold learning appears as a statistical projection $Y \to Z^\star(Y) \in \mathcal{M}_z$, with $p(Y)$, $p(\mathcal{M}_y)$, $p(Z^\star(Y))$ and $p(\mathcal{M}_z)$ on the simplex side, and the metric $g$ of the model family is the pullback of the Fisher information metric $G$.]
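As a sanity check on Lemma 3, the following snippet (ours, not from the paper) compares the closed form $p_i\delta_{ij} - p_i p_j$ against a finite-difference evaluation of the FIM definition in the canonical coordinates, taking the reference index $r = n$:

```python
import numpy as np

def p_from_theta(theta):
    """Simplex point from canonical coords theta_i = log(p_i / p_n), theta_n = 0."""
    t = np.append(theta, 0.0)
    e = np.exp(t - t.max())              # numerically stable softmax
    return e / e.sum()

def fim_analytic(theta):
    """Lemma 3 restricted to the free coordinates: G_ij = p_i d_ij - p_i p_j."""
    p = p_from_theta(theta)[:-1]
    return np.diag(p) - np.outer(p, p)

def fim_numeric(theta, h=1e-5):
    """FIM via central differences of log p_k(theta)."""
    p = p_from_theta(theta)
    J = np.zeros((p.size, theta.size))   # J[k, i] = d log p_k / d theta_i
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h; tm[i] -= h
        J[:, i] = (np.log(p_from_theta(tp)) - np.log(p_from_theta(tm))) / (2 * h)
    return (J * p[:, None]).T @ J        # sum_k p_k J_ki J_kj

theta = np.array([0.3, -1.2, 0.5])       # a point on S^3 (n = 4, reference r = 4)
assert np.allclose(fim_analytic(theta), fim_numeric(theta), atol=1e-6)
```

The agreement also shows why the canonical parameters are convenient: the score has the simple softmax form $\partial \log p_k / \partial\theta_i = \delta_{ki} - p_i$, from which Lemma 3 follows directly.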
Recommended publications
  • Introduction to Information Geometry – based on the book “Methods of Information Geometry” written by Shun-ichi Amari and Hiroshi Nagaoka
Introduction to Information Geometry – based on the book “Methods of Information Geometry” written by Shun-ichi Amari and Hiroshi Nagaoka. Yunshu Liu, 2012-02-17.

Outline:
1. Introduction to differential geometry: manifold and submanifold; tangent vector, tangent space and vector field; Riemannian metric and affine connection; flatness and autoparallel.
2. Geometric structure of statistical models and statistical inference: the Fisher metric and α-connection; exponential family; divergence and geometric statistical inference.

Manifold: a set $S$ with a coordinate system, a one-to-one mapping from $S$ to $\mathbb{R}^n$, supposed to “locally” look like an open subset of $\mathbb{R}^n$. Elements of the set (points) can be points in $\mathbb{R}^n$, probability distributions, or linear systems. (Figure: a coordinate system $\xi$ for a manifold $S$.)

Definition: Let $S$ be a set. If there exists a set of coordinate systems $\mathcal{A}$ for $S$ which satisfies conditions (1) and (2) below, we call $S$ an $n$-dimensional $C^\infty$ differentiable manifold.
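As a toy instance of "a set with a coordinate system" whose points are probability distributions, one can chart the family of univariate Gaussians by $\xi(p) = (\mu, \sigma)$. The snippet below is our illustration, not from the slides:

```python
import numpy as np

# The family of Gaussians N(mu, sigma^2) is a 2-dimensional manifold: the
# global chart xi maps each distribution to its coordinates (mu, sigma).
def chart(mu, sigma):
    return np.array([mu, sigma])          # coordinates of the point N(mu, sigma^2)

def point(xi):
    """Inverse of the chart: recover the density from its coordinates."""
    mu, sigma = xi
    return lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

p = point(chart(0.0, 1.0))                # the standard normal density
print(p(0.0))                             # 1/sqrt(2*pi) ~= 0.3989
```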
  • Canonical Divergence for Flat α-Connections: Classical and Quantum
Canonical divergence for flat α-connections: Classical and Quantum. Domenico Felice (Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany) and Nihat Ay (Max Planck Institute for Mathematics in the Sciences, Leipzig; Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA; Faculty of Mathematics and Computer Science, University of Leipzig, PF 100920, 04009 Leipzig, Germany). arXiv:1907.11122v2 [math-ph], 23 Aug 2019.

A recent canonical divergence, which is introduced on a smooth manifold $M$ endowed with a general dualistic structure $(g, \nabla, \nabla^*)$, is considered for flat α-connections. In the classical setting, we compute such a canonical divergence on the manifold of positive measures and prove that it coincides with the classical α-divergence. In the quantum framework, the recent canonical divergence is evaluated for the quantum α-connections on the manifold of all positive definite Hermitian operators. Also in this case we obtain that the recent canonical divergence is the quantum α-divergence. PACS numbers: Classical differential geometry (02.40.Hw), Riemannian geometries (02.40.Ky), Quantum Information (03.67.-a).

I. INTRODUCTION. Methods of Information Geometry (IG) [1] are ubiquitous in physical sciences and encompass both classical and quantum systems [2]. The natural object of study in IG is a quadruple $(M, g, \nabla, \nabla^*)$ given by a smooth manifold $M$, a Riemannian metric $g$ and a pair of affine connections on $M$ which are dual with respect to $g$:
$$X\, g(Y, Z) = g(\nabla_X Y, Z) + g(Y, \nabla^*_X Z), \tag{1}$$
for all sections $X, Y, Z \in \mathcal{T}(M)$.
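The classical α-divergence on positive measures mentioned in this abstract has a simple closed form (in the convention of Amari & Nagaoka). The sketch below is our illustration rather than the paper's code; it implements that formula and numerically checks the reference duality $D_\alpha(p\|q) = D_{-\alpha}(q\|p)$ and the Kullback-Leibler limit as $\alpha \to -1$:

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari's alpha-divergence between positive measures (alpha != +-1):
    D_a(p||q) = 4/(1-a^2) * sum( (1-a)/2 p + (1+a)/2 q - p^((1-a)/2) q^((1+a)/2) )."""
    a = alpha
    integrand = (1 - a) / 2 * p + (1 + a) / 2 * q - p ** ((1 - a) / 2) * q ** ((1 + a) / 2)
    return 4.0 / (1 - a ** 2) * integrand.sum()

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
assert np.isclose(alpha_divergence(p, q, 0.5), alpha_divergence(q, p, -0.5))  # duality
assert np.isclose(alpha_divergence(p, p, 0.5), 0.0)                           # D(p||p) = 0
kl = np.sum(p * np.log(p / q))
print(alpha_divergence(p, q, -0.999), kl)  # alpha -> -1 approaches KL(p||q)
```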
  • Information Geometry of Orthogonal Initializations and Training
Published as a conference paper at ICLR 2020. INFORMATION GEOMETRY OF ORTHOGONAL INITIALIZATIONS AND TRAINING. Piotr Aleksander Sokół and Il Memming Park, Department of Neurobiology and Behavior; Departments of Applied Mathematics and Statistics, and Electrical and Computer Engineering; Institutes for Advanced Computing Science and AI-driven Discovery and Innovation, Stony Brook University, Stony Brook, NY 11733. {memming.park, piotr.sokol}@stonybrook.edu

ABSTRACT. Recently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward-propagated activations and the backpropagated gradients are near $\ell_2$ isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, and we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, we observe a surprising yet robust behavior of highly isometric initializations: even though such networks have a lower FIM condition number at initialization, and therefore by analogy to convex functions should be easier to optimize, experimentally they prove to be much harder to train with stochastic gradient descent.
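The "near $\ell_2$ isometry" of orthogonal initialization is easiest to see in the linear case. The following sketch (ours, a simplification that drops the paper's nonlinearities and mean-field analysis) draws Haar-random orthogonal weights and checks that the forward pass preserves norms:

```python
import numpy as np

def orthogonal(shape, rng):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=shape))
    return q * np.sign(np.diag(r))        # sign fix makes Q Haar-distributed

rng = np.random.default_rng(1)
depth, width = 20, 256
Ws = [orthogonal((width, width), rng) for _ in range(depth)]

x = rng.normal(size=width)
h = x
for W in Ws:                              # deep linear network: each layer is an isometry
    h = W @ h
print(np.linalg.norm(h) / np.linalg.norm(x))  # ~= 1.0 even at depth 20
```

With a Gaussian (non-orthogonal) initialization of the same scale, the ratio above drifts exponentially with depth, which is the instability the orthogonal scheme avoids.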
  • Information Geometry and Optimal Transport
Call for Papers. Special Issue: Affine Differential Geometry and Hesse Geometry: A Tribute and Memorial to Jean-Louis Koszul. Submission deadline: 30 November 2019.

Jean-Louis Koszul (January 3, 1921 – January 12, 2018) was a French mathematician with prominent influence on a wide range of mathematical fields. He was a second-generation member of Bourbaki, with several notions in geometry and algebra named after him. He made a great contribution to the fundamental theory of differential geometry, which is the foundation of information geometry. The special issue is dedicated to Koszul for the mathematics he developed that bears on information sciences. Both original contributions and review articles are solicited. Topics include but are not limited to: affine differential geometry over statistical manifolds; Hessian and Kähler geometry; divergence geometry; convex geometry and analysis; differential geometry over homogeneous and symmetric spaces; Jordan algebras and graded Lie algebras; pre-Lie algebras and their cohomology; geometric mechanics and thermodynamics over homogeneous spaces.

Guest Editor: Hideyuki Ishi (Graduate School of Mathematics, Nagoya University), e-mail: [email protected]. Submit at www.editorialmanager.com/inge and select 'S.I.: Affine differential geometry and Hessian geometry Jean-Louis Koszul (Memorial/Koszul)'. Photo author: Konrad Jacobs; source: Archives of the Mathematisches Forschungsinstitut Oberwolfach. Editor-in-Chief: Shinto Eguchi (Tokyo). Special Issue Co-Editors, Information Geometry and Optimal Transport: Nihat Ay
  • Information Geometry (Part 1)
October 22, 2010. Information Geometry (Part 1). John Baez.

Information geometry is the study of 'statistical manifolds', which are spaces where each point is a hypothesis about some state of affairs. In statistics a hypothesis amounts to a probability distribution, but we'll also be looking at the quantum version of a probability distribution, which is called a 'mixed state'. Every statistical manifold comes with a way of measuring distances and angles, called the Fisher information metric. In the first seven articles in this series, I'll try to figure out what this metric really means. The formula for it is simple enough, but when I first saw it, it seemed quite mysterious. A good place to start is this interesting paper:

• Gavin E. Crooks, Measuring thermodynamic length.

which was pointed out by John Furey in a discussion about entropy and uncertainty. The idea here should work for either classical or quantum statistical mechanics. The paper describes the classical version, so just for a change of pace let me describe the quantum version.

First a lightning review of quantum statistical mechanics. Suppose you have a quantum system with some Hilbert space. When you know as much as possible about your system, then you describe it by a unit vector in this Hilbert space, and you say your system is in a pure state. Sometimes people just call a pure state a 'state'. But that can be confusing, because in statistical mechanics you also need more general 'mixed states' where you don't know as much as possible. A mixed state is described by a density matrix $\rho$, meaning a positive operator with trace equal to 1:
$$\mathrm{tr}(\rho) = 1.$$
The idea is that any observable is described by a self-adjoint operator $A$, and the expected value of this observable in the mixed state $\rho$ is
$$\langle A \rangle = \mathrm{tr}(\rho A).$$
The entropy of a mixed state is defined by
$$S(\rho) = -\mathrm{tr}(\rho \ln \rho),$$
where we take the logarithm of the density matrix just by taking the log of each of its eigenvalues, while keeping the same eigenvectors.
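A short numerical companion (ours, not from the post), computing the two quantities just defined for a random density matrix:

```python
import numpy as np

def random_density_matrix(n, rng):
    """A positive operator with unit trace, built as A A^dagger / tr(A A^dagger)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
rho = random_density_matrix(3, rng)

# Expected value of an observable A in the mixed state rho: <A> = tr(rho A).
A = np.diag([1.0, 0.0, -1.0])             # a toy self-adjoint observable
expval = np.trace(rho @ A).real

# Von Neumann entropy S(rho) = -tr(rho ln rho), via the eigenvalues of rho.
evals = np.linalg.eigvalsh(rho)           # real and positive for a density matrix
entropy = -np.sum(evals * np.log(evals))
print(expval, entropy)
```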
  • Information Geometry: Near Randomness and Near Independence
Khadiga Arwini and C.T.J. Dodson. Information Geometry: Near Randomness and Near Independence. Lecture Notes in Mathematics 1953, Springer.

Preface. The main motivation for this book lies in the breadth of applications in which a statistical model is used to represent small departures from, for example, a Poisson process. Our approach uses information geometry to provide a common context, but we need only rather elementary material from differential geometry, information theory and mathematical statistics. Introductory sections serve together to help those interested from the applications side in making use of our methods and results. We have available Mathematica notebooks to perform many of the computations for those who wish to pursue their own calculations or developments. Some 44 years ago, the second author first encountered, at about the same time, differential geometry via relativity from Weyl's book [209] during undergraduate studies, and information theory from Tribus [200, 201] via spatial statistical processes while working on research projects at Wiggins Teape Research and Development Ltd (cf. the Foreword in [196] and [170, 47, 58]). Having started work there as a student laboratory assistant in 1959, this research environment engendered a recognition of the importance of international collaboration, and a lifelong research interest in randomness and near-Poisson statistical geometric processes, persisting at various rates through a career mainly involved with global differential geometry. From correspondence in the 1960s with Gabriel Kron [4, 124, 125] on his Diakoptics, and with Kazuo Kondo, who influenced the post-war Japanese schools of differential geometry and supervised Shun-ichi Amari's doctorate [6], it was clear that both had a much wider remit than traditionally pursued elsewhere.
  • Information Geometry of k-Exponential Families: Dually-Flat, Hessian and Legendre Structures
entropy (Review). Information Geometry of k-Exponential Families: Dually-Flat, Hessian and Legendre Structures. Antonio M. Scarfone (Istituto dei Sistemi Complessi, Consiglio Nazionale delle Ricerche (ISC-CNR), c/o Politecnico di Torino, 10129 Torino, Italy), Hiroshi Matsuzoe (Department of Computer Science and Engineering, Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan) and Tatsuaki Wada (Region of Electrical and Electronic Systems Engineering, Ibaraki University, Nakanarusawa-cho, Hitachi 316-8511, Japan). Correspondence: [email protected]; Tel.: +39-011-090-7339. Received: 9 May 2018; Accepted: 1 June 2018; Published: 5 June 2018.

Abstract: In this paper, we present a review of recent developments on the k-deformed statistical mechanics in the framework of information geometry. Three different geometric structures are introduced in the k-formalism, obtained starting from three non-equivalent divergence functions corresponding to the k-deformed versions of the Kullback–Leibler, “Kerridge” and Brègman divergences. The first statistical manifold, derived from the k-Kullback–Leibler divergence, forms an invariant geometry with a positive curvature that vanishes in the k → 0 limit. The other two statistical manifolds are related to each other by means of a scaling transform and are both dually flat. They have a dualistic Hessian structure endowed with a deformed Fisher metric and an affine connection that are consistent with a statistical scalar product based on the k-escort expectation. These flat geometries admit dual potentials corresponding to the thermodynamic Massieu and entropy functions that induce a Legendre structure of k-thermodynamics in the picture of information geometry.
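For orientation, the deformation underlying the k-formalism is the Kaniadakis k-exponential, which replaces the ordinary exponential and recovers it as k → 0. The snippet below is our illustration of that limit, not code from the review:

```python
import numpy as np

def exp_k(x, k):
    """Kaniadakis k-exponential: (k*x + sqrt(1 + k^2 x^2))^(1/k), -> exp(x) as k -> 0."""
    if k == 0.0:
        return np.exp(x)
    return (k * x + np.sqrt(1.0 + (k * x) ** 2)) ** (1.0 / k)

x = np.linspace(-2, 2, 5)
print(exp_k(x, 0.5))       # heavier tails than the ordinary exponential
print(exp_k(x, 1e-8))      # ~= np.exp(x), recovering the undeformed case
```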
  • Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
Journal of Machine Learning Research 18 (2017) 1-65. Submitted 11/14; Revised 10/16; Published 4/17. Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. Yann Ollivier (CNRS & LRI (UMR 8623), Université Paris-Saclay, 91405 Orsay, France), Ludovic Arnold (Univ. Paris-Sud, LRI, 91405 Orsay, France), Anne Auger and Nikolaus Hansen (Inria & CMAP, Ecole polytechnique, 91128 Palaiseau, France). Editor: Una-May O'Reilly.

Abstract. We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. In continuous search spaces, IGO algorithms take a form related to natural evolution strategies (NES). The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). When applied to the family of Gaussian distributions on $\mathbb{R}^d$, the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES.
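A minimal concrete instance of the IGO/NES idea, as a toy sketch of our own (with arbitrary learning rates and a simple fitness normalization standing in for the paper's quantile-based utilities): natural-gradient ascent of a one-dimensional Gaussian search distribution maximizing $f(x) = -(x-3)^2$.

```python
import numpy as np

def f(x):
    """Toy objective to maximize; the optimum is at x = 3."""
    return -(x - 3.0) ** 2

rng = np.random.default_rng(0)
mu, log_sigma = -2.0, 0.0                 # initial Gaussian search distribution
eta_mu, eta_sigma, pop = 1.0, 0.1, 20     # learning rates and population size (ours)

for _ in range(150):
    sigma = np.exp(log_sigma)
    s = rng.normal(size=pop)              # standardized samples
    fitness = f(mu + sigma * s)
    u = (fitness - fitness.mean()) / (fitness.std() + 1e-12)  # normalized utilities
    mu += eta_mu * sigma * np.mean(u * s)                     # natural-gradient step in mu
    log_sigma += 0.5 * eta_sigma * np.mean(u * (s ** 2 - 1))  # ... and in log(sigma)

print(mu, np.exp(log_sigma))              # mu ~= 3, and sigma has shrunk
```

Because the updates are expressed in the local natural coordinates of the Gaussian family, they are invariant under reparametrization of the distribution, which is the design principle the abstract emphasizes.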
  • Fisher-Rao Geometry of Dirichlet Distributions, by Alice Le Brigant, Stephen Preston, Stéphane Puechmorel
Fisher-Rao geometry of Dirichlet distributions. Alice Le Brigant, Stephen C. Preston, Stéphane Puechmorel. Differential Geometry and its Applications, Elsevier, 2021, 74, pp. 101702. doi:10.1016/j.difgeo.2020.101702. hal-02568473v2, submitted on 7 Nov 2020.

Abstract. In this paper, we study the geometry induced by the Fisher-Rao metric on the parameter space of Dirichlet distributions. We show that this space is a Hadamard manifold, i.e. that it is geodesically complete and has everywhere negative sectional curvature. An important consequence for applications is that the Fréchet mean of a set of Dirichlet distributions is uniquely defined in this geometry.

1. Introduction. The differential geometric approach to probability theory and statistics has met increasing interest in the past years, from the theoretical point of view as well as in applications. In this approach, probability distributions are seen as elements of a differentiable manifold, on which a metric structure is defined through the choice of a Riemannian metric.
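On the Dirichlet parameter space the Fisher-Rao metric has a closed form in terms of trigamma functions, $G(\alpha) = \mathrm{diag}(\psi'(\alpha_i)) - \psi'(\alpha_0)\,\mathbf{1}\mathbf{1}^T$ with $\alpha_0 = \sum_i \alpha_i$. The check below is our sketch, assuming that formula:

```python
import numpy as np
from scipy.special import polygamma

def dirichlet_fisher_metric(alpha):
    """Fisher-Rao metric on Dirichlet parameters (assumed closed form):
    G_ij = psi'(alpha_i) * delta_ij - psi'(alpha_0), alpha_0 = sum(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    trigamma = lambda x: polygamma(1, x)
    # Subtracting the scalar psi'(alpha_0) from every entry realizes the
    # rank-one term psi'(alpha_0) * ones((n, n)).
    return np.diag(trigamma(alpha)) - trigamma(alpha.sum())

G = dirichlet_fisher_metric([2.0, 0.5, 1.5])
print(np.linalg.eigvalsh(G))   # all eigenvalues positive: G is a Riemannian metric
```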
  • Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles, by Yann Ollivier, Ludovic Arnold, Anne Auger, Nikolaus Hansen
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. Yann Ollivier, Ludovic Arnold, Anne Auger, Nikolaus Hansen. hal-00601503v2, preprint submitted on 29 Jun 2013.

Abstract. We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization.
  • The Fisher-Rao Geometry of Beta Distributions Applied to the Study of Canonical Moments, by Alice Le Brigant, Stéphane Puechmorel
The Fisher-Rao geometry of beta distributions applied to the study of canonical moments. Alice Le Brigant, Stéphane Puechmorel. hal-02100897, preprint submitted on 16 Apr 2019.

Abstract. This paper studies the Fisher-Rao geometry on the parameter space of beta distributions. We derive the geodesic equations and the sectional curvature, and prove that it is negative. This leads to uniqueness for the Riemannian centroid in that space. We use this Riemannian structure to study canonical moments, an intrinsic representation of the moments of a probability distribution. Drawing on the fact that a uniform distribution in the regular moment space corresponds to a product of beta distributions in the canonical moment space, we propose a mapping from the space of canonical moments to the product beta manifold, allowing us to use the Fisher-Rao geometry of beta distributions to compare and analyze canonical moments.
  • Applications of Information Geometry to Hypothesis Testing and Signal Detection
CMCAA 2016. Applications of Information Geometry to Hypothesis Testing and Signal Detection. Yongqiang Cheng, National University of Defense Technology, July 2016.

Outline: 1. Principles of Information Geometry; 2. Geometry of Hypothesis Testing; 3. Matrix CFAR Detection on the Manifold of Symmetric Positive-Definite Matrices; 4. Geometry of Matrix CFAR Detection.

Principles of information geometry. Important problems in statistics concern the distribution (likelihood) $p(x \mid \theta)$, where $x$ is a vector of data and $\theta$ is a vector of unknowns. (1) How much does the data $x$ tell about the unknown $\theta$? (2) How good is an estimator $\hat{\theta}$? (3) How to measure the difference between two distributions? (4) What is the structure of a statistical model specified by a family of distributions?

What is information geometry? (Diagram: data give rise to distributions, and distributions form a manifold; data processing and statistics thereby become information geometry.) Information geometry is the study of intrinsic properties of manifolds of probability distributions by way of differential geometry. Its main tenet is that many important structures in information theory and statistics can be treated as structures in differential geometry, by regarding a space of probabilities as a differentiable manifold endowed with a Riemannian metric and a family of affine connections. (Diagram: information geometry relates to probability theory, information theory, statistics, physics, systems theory, differential geometry and Riemannian geometry.)

Statistical manifold: $\mathcal{M} = \{p(x \mid \theta)\}$, $\theta \in \mathbb{R}^n$, with metric $G$. Riemannian metric: $G_{ij}(\theta) = E_\theta[\partial_i \log p \; \partial_j \log p]$. Affine connections: $\Gamma_{ji,m}(\theta) = E\big[\partial_j \partial_i \ell(x,\theta)\, \partial_m \ell(x,\theta)\big] + \tfrac{1}{2} E\big[\partial_j \ell(x,\theta)\, \partial_i \ell(x,\theta)\, \partial_m \ell(x,\theta)\big]$, where $\ell(x,\theta) = \log p(x \mid \theta)$.
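To illustrate the Fisher metric definition above, here is a Monte Carlo sketch of ours for the univariate Gaussian family with coordinates $\theta = (\mu, \sigma)$, whose FIM is known in closed form to be $\mathrm{diag}(1/\sigma^2, 2/\sigma^2)$:

```python
import numpy as np

def score(x, mu, sigma):
    """Gradient of log N(x; mu, sigma^2) with respect to (mu, sigma)."""
    d_mu = (x - mu) / sigma ** 2
    d_sigma = ((x - mu) ** 2 - sigma ** 2) / sigma ** 3
    return np.stack([d_mu, d_sigma])

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
s = score(x, mu, sigma)                   # shape (2, N)
G_mc = (s @ s.T) / x.size                 # Monte Carlo estimate of E[score score^T]
G_exact = np.diag([1 / sigma ** 2, 2 / sigma ** 2])
print(G_mc)                               # ~= diag(0.25, 0.5)
print(G_exact)
```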