Homework Set #1 Properties of Entropy, Mutual Information and Divergence

1. Entropy of functions of a random variable. Let X be a discrete random variable. Show that the entropy of a function of X is less than or equal to the entropy of X by justifying the following steps:

(a) H(X, g(X)) = H(X) + H(g(X) | X)
(b)            = H(X).
(c) H(X, g(X)) = H(g(X)) + H(X | g(X))
(d)            ≥ H(g(X)).

Thus H(g(X)) ≤ H(X).

Solution: Entropy of functions of a random variable.

(a) H(X, g(X)) = H(X) + H(g(X) | X) by the chain rule for entropies.
(b) H(g(X) | X) = 0, since for any particular value of X, g(X) is fixed; hence H(g(X) | X) = Σ_x p(x) H(g(X) | X = x) = Σ_x 0 = 0.
(c) H(X, g(X)) = H(g(X)) + H(X | g(X)), again by the chain rule.
(d) H(X | g(X)) ≥ 0, with equality iff X is a function of g(X), i.e., g(·) is one-to-one. Hence H(X, g(X)) ≥ H(g(X)).

Combining parts (b) and (d), we obtain H(X) ≥ H(g(X)).

2. Example of joint entropy. Let p(x, y) be given by

            Y = 0    Y = 1
   X = 0     1/3      1/3
   X = 1      0       1/3

Find

(a) H(X), H(Y).
(b) H(X | Y), H(Y | X).
(c) H(X, Y).
(d) H(Y) − H(Y | X).
(e) I(X; Y).

Solution: Example of joint entropy.

(a) H(X) = (2/3) log(3/2) + (1/3) log 3 = 0.918 bits = H(Y).
(b) H(X | Y) = (1/3) H(X | Y = 0) + (2/3) H(X | Y = 1) = 0.667 bits = H(Y | X).
(c) H(X, Y) = 3 × (1/3) log 3 = 1.585 bits.
(d) H(Y) − H(Y | X) = 0.251 bits.
(e) I(X; Y) = H(Y) − H(Y | X) = 0.251 bits.

3. Bytes. The entropy H_a(X) = −Σ_x p(x) log_a p(x) is expressed in bits if the logarithm is to the base 2 and in bytes if the logarithm is to the base 256. What is the relationship of H_2(X) to H_256(X)?

Solution: Bytes.

H_2(X) = −Σ_x p(x) log_2 p(x)
       = −Σ_x p(x) log_256 p(x) / log_256(2)         (a)
       = −(1 / log_256(2)) Σ_x p(x) log_256 p(x)
       = H_256(X) / log_256(2),                       (b)

where (a) comes from the change-of-base property of logarithms and (b) follows from the definition of H_256(X). Since log_256(2) = 1/8, we get H_2(X) = 8 H_256(X).

4. Two looks. Here is a statement about pairwise independence and joint independence. Let X, Y1, and Y2 be binary random variables. If I(X; Y1) = 0 and I(X; Y2) = 0, does it follow that I(X; Y1, Y2) = 0?

(a) Yes or no?
(b) Prove or provide a counterexample.
(c) If I(X; Y1) = 0 and I(X; Y2) = 0 in the above problem, does it follow that I(Y1; Y2) = 0? In other words, if Y1 is independent of X, and if Y2 is independent of X, is it true that Y1 and Y2 are independent?

Solution: Two looks.

(a) The answer is "no".

(b) At first the conjecture seems reasonable enough: if Y1 gives you no information about X, and Y2 gives you no information about X, why should the two of them together give any information? But remember, it is NOT the case that I(X; Y1, Y2) = I(X; Y1) + I(X; Y2). The chain rule for information says instead that I(X; Y1, Y2) = I(X; Y1) + I(X; Y2 | Y1), which gives us reason to be skeptical about the conjecture.

This problem is reminiscent of the well-known fact in probability that pairwise independence of three random variables is not sufficient to guarantee that all three are mutually independent. I(X; Y1) = 0 is equivalent to saying that X and Y1 are independent, and similarly for X and Y2. But just because X is pairwise independent of each of Y1 and Y2, it does not follow that X is independent of the vector (Y1, Y2).

Here is a simple counterexample. Let Y1 and Y2 be independent fair coin flips, and let X = Y1 XOR Y2. X is pairwise independent of both Y1 and Y2, but obviously not independent of the vector (Y1, Y2), since X is uniquely determined once you know (Y1, Y2).

(c) Again the answer is "no". Y1 and Y2 can be arbitrarily dependent on each other and both still be independent of X. For example, let Y1 = Y2 be two observations of the same fair coin flip, and let X be an independent fair coin flip. Then I(X; Y1) = I(X; Y2) = 0 because X is independent of both Y1 and Y2. However, I(Y1; Y2) = H(Y1) − H(Y1 | Y2) = H(Y1) = 1 bit.
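The counterexample in part (b) is easy to verify numerically. The following is a minimal Python sketch (not part of the original assignment) that builds the joint pmf of (Y1, Y2, X) with X = Y1 XOR Y2 and computes the relevant mutual informations from marginal entropies.

```python
import itertools
from math import log2

# Joint pmf of (Y1, Y2, X) for the counterexample in Problem 4(b):
# Y1, Y2 are independent fair coin flips and X = Y1 XOR Y2.
pmf = {}
for y1, y2 in itertools.product([0, 1], repeat=2):
    pmf[(y1, y2, y1 ^ y2)] = 0.25

def H(joint, idx):
    """Entropy in bits of the marginal over the coordinates listed in idx."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def I(joint, a, b):
    """Mutual information I(A; B) = H(A) + H(B) - H(A, B)."""
    return H(joint, a) + H(joint, b) - H(joint, a + b)

# Coordinate indices: 0 -> Y1, 1 -> Y2, 2 -> X
print(I(pmf, (2,), (0,)))    # I(X; Y1)     = 0.0
print(I(pmf, (2,), (1,)))    # I(X; Y2)     = 0.0
print(I(pmf, (2,), (0, 1)))  # I(X; Y1, Y2) = 1.0
```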
5. A measure of correlation. Let X1 and X2 be identically distributed, but not necessarily independent. Let

   ρ = 1 − H(X2 | X1) / H(X1).

(a) Show ρ = I(X1; X2) / H(X1).
(b) Show 0 ≤ ρ ≤ 1.
(c) When is ρ = 0?
(d) When is ρ = 1?

Solution: A measure of correlation. X1 and X2 are identically distributed and ρ = 1 − H(X2 | X1) / H(X1).

(a) ρ = [H(X1) − H(X2 | X1)] / H(X1)
      = [H(X2) − H(X2 | X1)] / H(X1)    (since H(X1) = H(X2))
      = I(X1; X2) / H(X1).

(b) Since 0 ≤ H(X2 | X1) ≤ H(X2) = H(X1), we have 0 ≤ H(X2 | X1) / H(X1) ≤ 1, and hence 0 ≤ ρ ≤ 1.

(c) ρ = 0 iff I(X1; X2) = 0, i.e., iff X1 and X2 are independent.

(d) ρ = 1 iff H(X2 | X1) = 0, i.e., iff X2 is a function of X1. By symmetry, X1 is then also a function of X2, i.e., X1 and X2 are in one-to-one correspondence. For example, if X1 = X2 with probability 1, then ρ = 1. Similarly, if the distribution of Xi is symmetric, then X1 = −X2 with probability 1 would also give ρ = 1.

6. The value of a question. Let X ~ p(x), x = 1, 2, ..., m. We are given a set S ⊆ {1, 2, ..., m}. We ask whether X ∈ S and receive the answer Y = 1 if X ∈ S, and Y = 0 if X ∉ S. Suppose Pr{X ∈ S} = α.

(a) Find the decrease in uncertainty H(X) − H(X | Y).
(b) Is it true that any set S with a given probability α is as good as any other?

Solution: The value of a question.

(a) Consider

   H(X) − H(X | Y) = H(Y) − H(Y | X) = H(Y) = H_b(α),

where H(Y | X) = 0 because Y is a deterministic function of X, and H_b denotes the binary entropy function.

(b) Yes, since the answer depends only on α.

7. Relative entropy is not symmetric. Let the random variable X have three possible outcomes {a, b, c}. Consider two distributions on this random variable:

   Symbol   p(x)   q(x)
     a      1/2    1/3
     b      1/4    1/3
     c      1/4    1/3

Calculate H(p), H(q), D(p||q) and D(q||p). Verify that in this case D(p||q) ≠ D(q||p).

Solution: Relative entropy is not symmetric.

(a) H(p) = (1/2) log 2 + 2 × (1/4) log 4 = 1.5 bits.
(b) H(q) = 3 × (1/3) log 3 = log 3 = 1.585 bits.
(c) D(p||q) = (1/2) log(3/2) + 2 × (1/4) log(3/4) = log 3 − 3/2 = 0.0850 bits.
(d) D(q||p) = (1/3) log(2/3) + 2 × (1/3) log(4/3) = 5/3 − log 3 = 0.0817 bits.

D(p||q) ≠ D(q||p), as expected.
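As a quick numeric check of Problem 7 (not part of the original assignment), here is a short Python sketch computing both entropies and both divergences directly from the definitions:

```python
from math import log2

# Distributions from Problem 7 on the alphabet {a, b, c}.
p = {'a': 1/2, 'b': 1/4, 'c': 1/4}
q = {'a': 1/3, 'b': 1/3, 'c': 1/3}

def entropy(dist):
    """Entropy in bits."""
    return -sum(px * log2(px) for px in dist.values() if px > 0)

def kl(d1, d2):
    """Relative entropy D(d1 || d2) in bits (assumes d2(x) > 0 wherever d1(x) > 0)."""
    return sum(px * log2(px / d2[x]) for x, px in d1.items() if px > 0)

print(entropy(p))  # 1.5
print(entropy(q))  # 1.585 (= log2 3)
print(kl(p, q))    # 0.0850 -> D(p||q)
print(kl(q, p))    # 0.0817 -> D(q||p), not equal to D(p||q)
```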
8. "True or False" questions. Copy each relation and write true or false. Then, if it is true, prove it. If it is false, give a counterexample or prove that the opposite is true.

(a) H(X) ≥ H(X | Y).
(b) H(X) + H(Y) ≤ H(X, Y).
(c) Let X, Y be two independent random variables. Then H(X − Y) ≥ H(X).
(d) Let X, Y, Z be three random variables that satisfy H(X, Y) = H(X) + H(Y) and H(Y, Z) = H(Z) + H(Y). Then the following holds: H(X, Y, Z) = H(X) + H(Y) + H(Z).
(e) For any X, Y, Z and deterministic functions f, g: I(X; Y | Z) = I(X, f(X, Y); Y, g(Y, Z) | Z).

Solution to "True or False" questions.

(a) H(X) ≥ H(X | Y) is true. Proof: In class we showed that I(X; Y) ≥ 0, hence H(X) − H(X | Y) ≥ 0.

(b) H(X) + H(Y) ≤ H(X, Y) is false. Actually the opposite is true, i.e., H(X) + H(Y) ≥ H(X, Y), since I(X; Y) = H(X) + H(Y) − H(X, Y) ≥ 0.

(c) Let X, Y be two independent random variables. Then H(X − Y) ≥ H(X). True:

   H(X − Y) ≥ H(X − Y | Y)       (i)
            = H(X | Y) = H(X),   (ii)

where (i) follows from the fact that conditioning reduces entropy, and (ii) follows from the fact that given Y, X − Y is a bijective function of X, together with the independence of X and Y.

(d) Let X, Y, Z be three random variables that satisfy H(X, Y) = H(X) + H(Y) and H(Y, Z) = H(Z) + H(Y). Then the following holds: H(X, Y, Z) = H(X) + H(Y) + H(Z). This is false.
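One standard counterexample for part (d) (offered here as an illustration; the original solution's choice may differ): let X and Y be independent fair bits and Z = X XOR Y. Then H(X, Y) = H(X) + H(Y) and H(Y, Z) = H(Y) + H(Z), but H(X, Y, Z) = 2 bits while H(X) + H(Y) + H(Z) = 3 bits. A minimal Python check:

```python
from itertools import product
from math import log2

# Illustrative counterexample for 8(d): X, Y independent fair bits, Z = X XOR Y.
pmf = {}
for x, y in product([0, 1], repeat=2):
    pmf[(x, y, x ^ y)] = 0.25

def H(idx):
    """Entropy in bits of the marginal over the coordinates in idx (0=X, 1=Y, 2=Z)."""
    marg = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

print(H((0, 1)), H((0,)) + H((1,)))               # 2.0  2.0  -> H(X,Y) = H(X) + H(Y)
print(H((1, 2)), H((1,)) + H((2,)))               # 2.0  2.0  -> H(Y,Z) = H(Y) + H(Z)
print(H((0, 1, 2)), H((0,)) + H((1,)) + H((2,)))  # 2.0  3.0  -> joint additivity fails
```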