Nonparametric Maximum Entropy Estimation on Information Diagrams

Elliot A. Martin,1 Jaroslav Hlinka,2,3 Alexander Meinke,1 Filip Děchtěrenko,4,2 and Jörn Davidsen1

1Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Calgary, Alberta, Canada, T2N 1N4
2Institute of Computer Science, The Czech Academy of Sciences, Pod Vodárenskou věží 2, 182 07 Prague, Czech Republic
3National Institute of Mental Health, Topolová 748, 250 67 Klecany, Czech Republic
4Institute of Psychology, The Czech Academy of Sciences, Prague, Czech Republic

(Dated: September 26, 2018)
arXiv:1601.00336v1 [physics.data-an] 3 Jan 2016

Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of discrete random variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We further demonstrate the applicability and advantages of our method to real-world systems for the case of resting-state human brain networks. Finally, we show how our method can be used to estimate the structural network connectivity between interacting units from observed activity and establish the advantages over other approaches for the case of phase oscillator networks as a generic example.

PACS numbers: 89.75.Hc, 89.70.Cf, 05.45.Tp, 87.18.Sn

I. INTRODUCTION

Statistical mechanics is based on the assumption that the most probable state of a system is the one with maximal entropy. This was later shown by Jaynes [1] to be a general property of statistical inference: the least biased estimate must have the maximum entropy possible given the constraints, otherwise you are implicitly or explicitly assuming extra constraints. This has resulted in maximum entropy methods being applied widely outside of traditional statistical physics.

Uses of maximum entropy methods can now be found in such diverse settings as neuroscience [2], genetics [3], and inferring multidrug interactions [4]. These methods typically condition on quantities such as cross-correlations, which are not capable of detecting nonlinear relationships. Alternatively, one could condition on the probability distributions of subsets of variables [5, 6], but these can be hard to estimate accurately. In either case, the computational costs quickly become prohibitive as the number of discrete states the random variables can take on increases (i.e., the cardinality of the variables increases).

In order to overcome these difficulties we propose conditioning on information-theoretic quantities, such as entropies and mutual informations. For example, the bivariate mutual information can detect arbitrary interactions between two variables, and is only zero when the variables are pairwise independent [7]. At the same time, these measures can often be accurately estimated at sample sizes too small to accurately estimate the underlying probability distributions [8].
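As a minimal illustration of this point, the sketch below contrasts the Pearson correlation with a simple plug-in histogram estimate of the mutual information for a purely nonlinear dependence. The quadratic example, the bin count, and the plug-in estimator are illustrative choices only, not the estimators used later in this work.

    # Illustrative sketch: a nonlinear dependence that correlation misses but
    # mutual information detects. The plug-in histogram estimator is chosen
    # only for brevity.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 10000)
    y = x**2 + 0.05 * rng.normal(size=x.size)  # nonlinear, symmetric relation

    def plugin_mi(x, y, bins=16):
        """Plug-in estimate of I(X;Y) in bits from a 2D histogram."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
        mask = pxy > 0
        return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

    print("correlation:", np.corrcoef(x, y)[0, 1])        # close to zero
    print("mutual information (bits):", plugin_mi(x, y))  # clearly positive

For this quadratic relationship the sample correlation is essentially zero by symmetry, while the estimated mutual information is clearly positive.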
In theory, conditioning on information-theoretic quantities can be accomplished using Lagrange multipliers. However, while this results in relatively simple equations when conditioning on moments of distributions, conditioning on information-theoretic quantities results in transcendental equations, making them much harder to solve. The absence of techniques to efficiently calculate the maximum entropy in these cases is conspicuous; conditioning on the univariate entropies alone is equivalent to assuming the variables are independent, a widely used result, but a generalisation to a wider array of information-theoretic terms has not been forthcoming to the best of our knowledge.

In [9] we introduced a method to address this issue using the set-theoretic formulation of information theory, but only when conditioning on bivariate mutual informations and univariate entropies for discrete random variables.

Here, we significantly extend this technique and provide relevant mathematical proofs. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. To establish the practical relevance of our maximum entropy method, we show that it can successfully be applied in the undersampled regime, and that the computation time does not increase with the cardinality of the variables; in fact, we show our method can be computed much faster than other techniques for cardinalities greater than 2. These are two issues that severely limit current maximum entropy methods, as noted in [10]. Inspired by this, we construct a nonparametric estimate of the connected informations introduced in [5], which are used to estimate the relevance of higher-order interactions in sets of variables. Previous techniques to estimate connected informations can also be hampered by insufficient sampling, as well as become computationally intractable for larger cardinality variables.

We are also able to use our method to help resolve an outstanding issue in applying maximum entropy models to functional magnetic resonance imaging (fMRI) data, where past methods showed that pairwise measurements were a good representation of the data only when it was discretized to two states [11]. Here we show that discretizing to larger cardinalities does not appreciably affect results from our method, though it does for methods only conditioning on the first two moments of the variables. This indicates that nonlinear relationships are important for reconstructing this data.

As a final application, we show how our method can be used to infer structural network connections.
Inferring networks from dynamical time series has seen much attention [12], with applications in such diverse fields as neuroscience [13], genetics [14], and the climate [15], as well as for generic coupled oscillators [16]. Our maximum entropy estimate allows for the inference of the conditional mutual information between every pair of variables conditioned on all remaining considered variables. This has previously been used in [17] to detect causal connections with some success, though it becomes increasingly hard to estimate as the number of variables and their cardinality go up, due to the exponentially increasing phase space. It has also been noted that there are fundamental issues in the implementation of reconstructing the underlying time graphs [18]. Our method can help overcome the sampling issue by not estimating the conditional mutual informations directly, but by finding the values of the conditional mutual information consistent with the measured pairwise mutual informations and univariate entropies when the joint entropy is maximized.

The outline of our paper is as follows. In Sec. II we show how one can vastly increase the types of information-theoretic quantities that one can condition on using the method we introduced in [9], as well as extend the method to continuous variables. Next, in Sec. III we prove various properties relevant to the method. Finally, in Sec. IV we illustrate pertinent features of our method and discuss various applications.

II. METHOD

The set-theoretic formulation of information theory maps information-theoretic quantities to regions of an information diagram [19], which is a variation of a Venn diagram. The information diagram for three variables is shown in Fig. 1. Its regions correspond to the following quantities: entropy, $H(X) = -\sum_x p(x) \log p(x)$; conditional entropy, $H(X|Y,Z) = -\sum_{x,y,z} p(x,y,z) \log p(x|y,z)$; mutual information, $I(X;Y) = \sum_{x,y} p(x,y) \log [p(x,y)/(p(x)p(y))]$; conditional mutual information, $I(X;Y|Z) = \sum_{x,y,z} p(x,y,z) \log (p(x,y|z)/[p(x|z)p(y|z)])$; and multivariate mutual information, $I(X;Y;Z) = I(X;Y) - I(X;Y|Z)$. In general, the region where exactly n variables intersect corresponds to the n-variate mutual information between those n variables conditioned on the remaining variables.

FIG. 1: (Color online) The information diagram for three variables. It contains 7 regions corresponding to the possible combinations of 3 variables, with their corresponding information-theoretic quantities defined in the text. The univariate entropy $H(X)$ is the sum of all the regions in the red circle, and the mutual information $I(Y;Z)$ is the sum of all the regions in the blue oval.

Many information-theoretic quantities of interest can be written as a sum of the 'atoms' of information diagrams, their smallest subunits. For example, all entropies, mutual informations, and their conditioned variants can be expressed in this way. Given constraints of this form we can calculate the maximum entropy using linear optimization.

Our method works by constructing the information diagram with the largest entropy given the constraints, which intuitively corresponds to creating the maximally disjoint diagram. For example, conditioning on the univariate entropies alone results in the diagram being completely disjoint, i.e., the maximum entropy is the sum of the univariate entropies, a well known result. However, when conditioning on other terms, such as mutual informations, calculating the maximum entropy is no longer straightforward.

Mutual informations and entropies correspond to regions of the information diagram and can be written as the sum of the atoms in their region. In general, a system of N variables, $\{X\}_N$, will have $2^N - 1$ atoms corresponding to all possible combinations of the variables, excluding the empty set. To illustrate this, for the three ...
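As a minimal illustration, the sketch below computes the seven atoms of the three-variable diagram in Fig. 1 for a hypothetical joint distribution, two independent fair bits X and Y together with Z = X XOR Y, using only the standard inclusion-exclusion identities among joint entropies.

    # Illustrative sketch: the seven atoms of the three-variable information
    # diagram, computed (in bits) from a hypothetical joint distribution p(x,y,z)
    # with X, Y independent fair bits and Z = X XOR Y.
    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a probability array."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    p = np.zeros((2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            p[x, y, x ^ y] = 0.25

    H_X = entropy(p.sum(axis=(1, 2)))       # univariate entropies
    H_Y = entropy(p.sum(axis=(0, 2)))
    H_Z = entropy(p.sum(axis=(0, 1)))
    H_XY = entropy(p.sum(axis=2).ravel())   # pairwise joint entropies
    H_XZ = entropy(p.sum(axis=1).ravel())
    H_YZ = entropy(p.sum(axis=0).ravel())
    H_XYZ = entropy(p.ravel())              # joint entropy

    # Atoms where a single variable is left: conditional entropies.
    H_X_given_YZ = H_XYZ - H_YZ
    H_Y_given_XZ = H_XYZ - H_XZ
    H_Z_given_XY = H_XYZ - H_XY
    # Atoms where exactly two variables intersect: conditional mutual informations.
    I_XY_given_Z = H_XZ + H_YZ - H_Z - H_XYZ
    I_XZ_given_Y = H_XY + H_YZ - H_Y - H_XYZ
    I_YZ_given_X = H_XY + H_XZ - H_X - H_XYZ
    # Atom where all three intersect: the trivariate mutual information.
    I_XYZ = H_X + H_Y + H_Z - H_XY - H_XZ - H_YZ + H_XYZ

    print(H_X_given_YZ, I_XY_given_Z, I_XYZ)  # 0.0, 1.0 and -1.0 here

The negative triple intersection in this XOR example illustrates that the innermost atoms, unlike entropies and pairwise mutual informations, need not be nonnegative.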

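The linear-optimization step described above can be sketched as follows for the case considered in [9], conditioning on univariate entropies and bivariate mutual informations. This is a minimal illustrative sketch rather than an implementation of the full method: the helper name max_entropy_pairwise and the use of scipy.optimize.linprog are illustrative choices, and restricting all atoms to be nonnegative is a simplifying assumption (a sufficient, though not necessary, condition for the Shannon inequalities to hold).

    # Illustrative sketch of the maximum entropy estimate as a linear program over
    # the 2^N - 1 atoms of the information diagram. Constraints: each measured
    # univariate entropy equals the sum of atoms containing that variable, and each
    # measured pairwise mutual information equals the sum of atoms containing both
    # variables. Objective: maximize the joint entropy, i.e. the sum of all atoms.
    # Simplifying assumption: atoms are restricted to be nonnegative.
    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    def max_entropy_pairwise(h, mi):
        """h[i] = H(X_i); mi[(i, j)] = I(X_i; X_j) for i < j. Returns max joint entropy."""
        n = len(h)
        atoms = [frozenset(s) for r in range(1, n + 1)
                 for s in combinations(range(n), r)]
        c = -np.ones(len(atoms))              # minimize -(sum of atoms)
        rows, rhs = [], []
        for i in range(n):                    # univariate entropy constraints
            rows.append([float(i in a) for a in atoms])
            rhs.append(h[i])
        for (i, j), value in mi.items():      # pairwise mutual information constraints
            rows.append([float(i in a and j in a) for a in atoms])
            rhs.append(value)
        res = linprog(c, A_eq=np.array(rows), b_eq=np.array(rhs),
                      bounds=[(0, None)] * len(atoms))
        return -res.fun

    # Three variables with unit entropies, X0 and X1 sharing 0.3 bits, all other
    # pairs independent: the maximum joint entropy is 3 - 0.3 = 2.7 bits.
    print(max_entropy_pairwise([1.0, 1.0, 1.0], {(0, 1): 0.3, (0, 2): 0.0, (1, 2): 0.0}))

Note that the size of this program grows with the number of variables but not with their cardinality, which is the source of the computational advantage discussed above.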