Entropy and Mutual Information (Continuous Random Variables)


Máster Universitario en Ingeniería de Telecomunicación
I. Santamaría, Universidad de Cantabria

Contents
- Introduction
- Differential Entropy
- Joint and Conditional Differential Entropy
- Relative Entropy
- Mutual Information

Entropy and Mutual Information 1/24

Introduction
- We introduce the (differential) entropy and the mutual information of continuous random variables.
- We will need these concepts, for instance, to determine the capacity of the AWGN channel.
- Some important differences appear with respect to the case of discrete random variables:
  - Continuous random variables ⟹ differential entropy (strictly speaking, not an entropy).
  - Unlike the entropy of discrete random variables, the differential entropy of a continuous random variable can be negative.
  - It does not give the average information in X.
- The relative entropy and mutual information concepts extend to the continuous case in a straightforward manner, and convey the same information.

Definitions
Let X be a continuous random variable with cumulative distribution function

    F(x) = Pr{X ≤ x},

and probability density function (pdf)

    f(x) = dF(x)/dx.

F(x) and f(x) are both assumed to be continuous functions.

Definition: The differential entropy h(X) of a continuous random variable X with pdf f(x) is defined as

    h(X) = − ∫ f(x) log f(x) dx,

where the integration is carried out over the support of the random variable.
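As a quick numerical illustration (not part of the slides; the helper name `differential_entropy` and the grid parameters are my own, and NumPy is assumed), the definition can be approximated by a Riemann sum on a grid covering the support:

```python
import numpy as np

def differential_entropy(pdf, a, b, n=100_000):
    """Approximate h(X) = -integral of f(x) log2 f(x) dx by a Riemann sum.

    `pdf` is the density of X; the interval [a, b] must (numerically)
    cover its support.
    """
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    f = pdf(x)
    mask = f > 0                      # the integrand vanishes where f(x) = 0
    return -np.sum(f[mask] * np.log2(f[mask])) * dx

# X ~ U(0, 2): the exact value is log2(b - a) = log2(2) = 1 bit
h = differential_entropy(lambda x: np.where((x >= 0) & (x <= 2), 0.5, 0.0),
                         -1.0, 3.0)
```

The same helper can be pointed at any density whose support fits in a finite interval, which makes it handy for sanity-checking the closed-form results in the following examples.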
Example 1: Entropy of a uniform distribution, X ~ U(a, b)

[Figure: the pdf f(x) = 1/(b − a) on the interval (a, b).]

    h(X) = − ∫ f(x) log f(x) dx = − ∫_a^b (1/(b − a)) log(1/(b − a)) dx = log(b − a)

Note that h(X) < 0 if (b − a) < 1.

Example 2: Entropy of a normal distribution, X ~ N(0, σ²)

    f(x) = (1/(√(2π) σ)) e^{−x²/(2σ²)}

    h(X) = − ∫ f(x) log f(x) dx = − ∫ f(x) log[ (1/(√(2π) σ)) e^{−x²/(2σ²)} ] dx
         = − ∫ f(x) [ −(1/2) log(2πσ²) − (x²/(2σ²)) log e ] dx
         = (1/2) log(2πσ²) + (σ²/(2σ²)) log e
         = (1/2) log(2πeσ²)

    h(X) = (1/2) log(2πeσ²)

[Figure: h(X) plotted against σ²; it is a concave function of σ².]

Maximum entropy distribution
For a fixed variance (E[X²] = σ²), the normal distribution is the pdf that maximizes the entropy:

    maximize over f(x):   − ∫ f(x) log f(x) dx
    subject to:           f(x) ≥ 0,
                          ∫ f(x) dx = 1,
                          ∫ x² f(x) dx = σ².

This is a convex optimization problem (the entropy is a concave function) whose solution is

    f(x) = (1/(√(2π) σ)) e^{−x²/(2σ²)}.

This result will be important later.

Densities
Given two random variables X and Y, we have
- Joint pdf: f(x, y)
- Marginal pdfs: f(x) = ∫ f(x, y) dy,   f(y) = ∫ f(x, y) dx
- Conditional pdf: f(x|y) = f(x, y) / f(y)
- Independence: f(x, y) = f(x) f(y)

Joint and conditional entropy
Definition: Let X = (X₁, ..., X_N)ᵀ be an N-dimensional random vector with density f(x) = f(x₁, ..., x_N).
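A small sketch (variable names are mine, NumPy assumed) verifying the closed form (1/2) log₂(2πeσ²) against direct numerical integration, and checking the maximum-entropy claim against a uniform density of the same variance:

```python
import numpy as np

SIGMA2 = 2.0                          # variance of X ~ N(0, SIGMA2)

# Closed form from the slides: h(X) = 1/2 log2(2*pi*e*sigma^2), in bits
h_closed = 0.5 * np.log2(2 * np.pi * np.e * SIGMA2)

# Numerical check: -sum f(x) log2 f(x) dx over a range holding nearly all mass
x = np.linspace(-10 * np.sqrt(SIGMA2), 10 * np.sqrt(SIGMA2), 400_000)
f = np.exp(-x**2 / (2 * SIGMA2)) / np.sqrt(2 * np.pi * SIGMA2)
h_num = -np.sum(f * np.log2(f)) * (x[1] - x[0])

# Maximum-entropy check: the uniform density U(-a, a) with the same variance
# (a^2 / 3 = SIGMA2) must have strictly smaller differential entropy
a = np.sqrt(3 * SIGMA2)
h_unif = np.log2(2 * a)
```

With σ² = 2 the Gaussian entropy is about 2.55 bits, while the variance-matched uniform reaches only about 2.29 bits, in line with the maximum-entropy result.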
The (joint) differential entropy of X is defined as

    h(X) = − ∫ f(x) log f(x) dx

Definition: Let (X, Y) have a joint pdf f(x, y). The conditional differential entropy h(X|Y) is defined as

    h(X|Y) = − ∫ f(x, y) log f(x|y) dx dy

As for discrete random variables, the following relationships also hold:

    h(X, Y) = h(X) + h(Y|X)
    h(X, Y) = h(Y) + h(X|Y)

Example 1: Entropy of a multivariate normal distribution, X ~ N(0, C)
Let X = (X₁, ..., X_N)ᵀ be an N-dimensional Gaussian vector with zero mean and covariance matrix C:

    f(x) = (1/((√(2π))^N |C|^{1/2})) e^{−(1/2) xᵀ C⁻¹ x}

    h(X) = − ∫ f(x) log f(x) dx
         = − ∫ f(x) [ −(1/2) log((2π)^N |C|) − (log e / 2)(xᵀ C⁻¹ x) ] dx
         = (1/2) log((2π)^N |C|) + (N log e)/2
         = (1/2) log((2πe)^N |C|),

where we have used the fact that E[xᵀ C⁻¹ x] = N. Hence

    h(X) = (1/2) log((2πe)^N |C|) = (1/2) log det(2πe C)

As a particular case, let us consider a 2D vector containing two correlated Gaussian random variables.
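The multivariate formula (1/2) log det(2πe C) is conveniently evaluated with a log-determinant for numerical stability. The sketch below (the function name `mvn_entropy_bits` is mine) also checks that for a diagonal C, i.e. independent components, the joint entropy equals the sum of the scalar entropies:

```python
import numpy as np

def mvn_entropy_bits(C):
    """h(X) = 1/2 log2 det(2*pi*e*C) for X ~ N(0, C), via slogdet."""
    C = np.asarray(C, dtype=float)
    sign, logdet = np.linalg.slogdet(2 * np.pi * np.e * C)
    if sign <= 0:
        raise ValueError("C must be positive definite")
    return 0.5 * logdet / np.log(2)   # slogdet returns nats; convert to bits

# Diagonal C: joint entropy must equal the sum of 1/2 log2(2*pi*e*sigma_i^2)
C = np.diag([1.0, 4.0])
h_joint = mvn_entropy_bits(C)
h_sum = sum(0.5 * np.log2(2 * np.pi * np.e * s2) for s2 in (1.0, 4.0))
```

Using `slogdet` rather than `det` avoids overflow/underflow of the determinant when N is large.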
Let X = (X1; X2) be a zero-mean Gaussian random vector with covariance matrix given by σ2 1 ρ C = ; 2 ρ 1 where E[X1X2] ρ = q q 2 2 E[X1 ] E[X2 ] is the correlation coefficient (−1 ≤ ρ ≤ 1) Applying the previous result, the entropy of X is q h(X) = log(πeσ2 (1 − ρ2)) 2 If ρ = 0, X = X1 + jX2 is a complex normal (X ∼ CN(0; σ )) with entropy h(X ) = log(πeσ2) Entropy and Mutual Information 11/24 Introduction Entropy Joint and Conditional Entropy Relative Entropy Mutual Information <2 =1 4 3 2 H 1 0 -1 -2 0 0.2 0.4 0.6 0.8 1 ;2 It is a concave function of ρ2 Entropy and Mutual Information 12/24 Introduction Entropy Joint and Conditional Entropy Relative Entropy Mutual Information Properties Let us review a few more properties of the differential entropy and the mutual information that might be useful later I The differential entropy is invariant to a translation (change in the mean of the pdf) h(X ) = h(X + a) Proof: The proof follows directly from the definition of differential entropy I The differential entropy changes with a change of scale h(aX ) = h(X ) + log jaj Proof: Let Y = aX , then the pdf of Y is 1 y f (y) = f : Y jaj X a Entropy and Mutual Information 13/24 Introduction Entropy Joint and Conditional Entropy Relative Entropy Mutual Information Applying now the definition of differential entropy we have Z h(aX ) = − fY (y) log fY (y)dy Z 1 y 1 y = − f log f jaj X a jaj X a Z = − fX (y) log fX (x)dx + log jaj = h(X ) + log jaj I An extension to random vectors is as follows h(AX) = h(X) + log jdet(A)j Entropy and Mutual Information 14/24 Introduction Entropy Joint and Conditional Entropy Relative Entropy Mutual Information Relative entropy Definition:The relative entropy (Kullback-Leibler divergence) D(f jjg) between two continuous densities is defined by Z f (x) D(f jjg) = f (x) log dx: g(x) Note that D(f jjg) is finite only if the support of f (x) is contained in the support of g(x) The KL distance satisfies the following properties (identical to the 
discrete case):
- D(p||q) ≥ 0
- D(p||q) = 0 iff p = q

Example 1: Relative entropy between two normal distributions with different means and variances

    f(x) = (1/(√(2π) σ₁)) e^{−(x−μ₁)²/(2σ₁²)}   and   g(x) = (1/(√(2π) σ₂)) e^{−(x−μ₂)²/(2σ₂²)}

    D(f||g) = ∫ f(x) log( f(x)/g(x) ) dx
            = ∫ N(μ₁, σ₁²) [ log(σ₂/σ₁) + log(e) ( −(x − μ₁)²/(2σ₁²) + (x − μ₂)²/(2σ₂²) ) ] dx

    D(f||g) = (1/2) log e [ ln(σ₂²/σ₁²) + σ₁²/σ₂² + (μ₁ − μ₂)²/σ₂² − 1 ]

As defined, the relative entropy is measured in bits. If we used ln instead of log in the definition, it would be measured in nats; the only difference in the previous expression would be the log e factor.

- For σ₁ = σ₂ = 1 and μ₁ = 0:

      D(f||g) = (1/2) μ₂² log e

  [Figure: D(f||g) plotted against μ₂; it is a convex function of μ₂.]

- For σ₁ = 1 and μ₁ = μ₂:

      D(f||g) = (1/2) log e [ ln(σ₂²) + 1/σ₂² − 1 ]

  [Figure: D(f||g) plotted against σ₂²; it is a convex function of σ₂².]

Mutual information
Definition 1: The mutual information I(X; Y) between the random variables X and Y is given by

    I(X; Y) = h(X) − h(X|Y) = h(Y) − h(Y|X)

Definition 2: The mutual information I(X; Y) between two random variables with joint distribution f(x, y) is defined as the KL divergence between the joint distribution and the product of their marginals:

    I(X; Y) = D( f(x, y) || f(x) f(y) ) = ∫ f(x, y) log( f(x, y) / (f(x) f(y)) ) dx dy
            = E[ log( f(X, Y) / (f(X) f(Y)) ) ]
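The closed-form Gaussian relative entropy can be checked against direct numerical integration of f(x) log₂(f(x)/g(x)); the helper name `kl_normal_bits` and the test parameters below are my own:

```python
import numpy as np

def kl_normal_bits(mu1, s1sq, mu2, s2sq):
    """D(f||g) in bits for f = N(mu1, s1sq) and g = N(mu2, s2sq), closed form."""
    nats = 0.5 * (np.log(s2sq / s1sq) + s1sq / s2sq
                  + (mu1 - mu2) ** 2 / s2sq - 1.0)
    return nats / np.log(2)

# Numerical check: Riemann sum of f(x) log2(f(x)/g(x))
mu1, s1sq, mu2, s2sq = 0.0, 1.0, 1.0, 2.0
x = np.linspace(-12.0, 13.0, 500_000)
f = np.exp(-(x - mu1) ** 2 / (2 * s1sq)) / np.sqrt(2 * np.pi * s1sq)
g = np.exp(-(x - mu2) ** 2 / (2 * s2sq)) / np.sqrt(2 * np.pi * s2sq)
d_num = np.sum(f * np.log2(f / g)) * (x[1] - x[0])
d_closed = kl_normal_bits(mu1, s1sq, mu2, s2sq)
```

Note the asymmetry of the divergence: swapping the two densities generally gives a different value, which is why D(·||·) is not a true distance.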
Two important properties (identical to the case of discrete random variables):
1. I(X; Y) ≥ 0
2. I(X; Y) = 0 if and only if X and Y are independent
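For jointly Gaussian variables with unit variances and correlation ρ, the two definitions combine into the closed form I(X; Y) = h(X) + h(Y) − h(X, Y) = −(1/2) log₂(1 − ρ²). A small sketch (names are mine, NumPy assumed):

```python
import numpy as np

def gaussian_mi_bits(rho):
    """I(X;Y) = -1/2 log2(1 - rho^2) for jointly Gaussian X, Y (unit variances)."""
    return -0.5 * np.log2(1 - rho ** 2)

# Check against I(X;Y) = h(X) + h(Y) - h(X,Y) with unit-variance marginals
rho = 0.8
C = np.array([[1.0, rho], [rho, 1.0]])
h_x = 0.5 * np.log2(2 * np.pi * np.e)             # h(X) = h(Y), unit variance
h_xy = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * C)[1] / np.log(2)
mi = 2 * h_x - h_xy
```

As expected from property 2, the mutual information vanishes at ρ = 0 (independence) and grows without bound as |ρ| approaches 1.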