An Introduction to Information Theory


Vahid Meghdadi
Reference: Elements of Information Theory by Cover and Thomas
September 2007

Contents

1 Entropy
2 Joint and conditional entropy
3 Mutual information
4 Data Compression or Source Coding
5 Channel capacity
  5.1 Examples
    5.1.1 Noiseless binary channel
    5.1.2 Binary symmetric channel
    5.1.3 Binary erasure channel
    5.1.4 Two-fold channel
6 Differential entropy
  6.1 Relation between differential and discrete entropy
  6.2 Joint and conditional entropy
  6.3 Some properties
7 The Gaussian channel
  7.1 Capacity of Gaussian channel
  7.2 Band-limited channel
  7.3 Parallel Gaussian channel
8 Capacity of SIMO channel
9 Exercise (to be completed)

1 Entropy

Entropy is a measure of the uncertainty of a random variable. The uncertainty, or the amount of information contained in a message (or in a particular realization of a random variable), is defined as the logarithm of the inverse of its probability: $\log(1/P_X(x))$. So a less likely outcome carries more information. Let $X$ be a discrete random variable with alphabet $\mathcal{X}$ and probability mass function $P_X(x) = \Pr\{X = x\}$, $x \in \mathcal{X}$. For convenience $P_X(x)$ will be denoted by $p(x)$. The entropy of $X$ is defined as follows:

Definition 1. The entropy $H(X)$ of a discrete random variable is defined by

$$H(X) = E\left[\log \frac{1}{p(X)}\right] = \sum_{x \in \mathcal{X}} p(x) \log \frac{1}{p(x)} \qquad (1)$$

Entropy indicates the average information contained in $X$. When the base of the logarithm is 2, the entropy is measured in bits. For example, the entropy of a fair coin toss is 1 bit.

Note: The entropy is a function of the distribution of $X$. It does not depend on the actual values taken by the random variable, but only on their probabilities.

Note: $H(X) \geq 0$.

Example 1. Let

$$X = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1-p \end{cases} \qquad (2)$$

Show that the entropy of $X$ is

$$H(X) = -p \log p - (1-p) \log(1-p) \qquad (3)$$

Sometimes this entropy is denoted by $H(p, 1-p)$. Note that the entropy is maximized for $p = 0.5$ and is zero for $p = 1$ or $p = 0$. This makes sense because when $p = 0$ or $p = 1$ there is no uncertainty about the random variable $X$, and hence no information in revealing its outcome. The maximum uncertainty occurs when the two events are equiprobable.

[Figure 1: H(p) versus p]

Example 2. Suppose $X$ can take on $K$ values. Show that the entropy is maximized when $X$ is uniformly distributed on these $K$ values, and that in this case $H(X) = \log K$.

Solution: Calculating $H(X)$ results in:

$$H(X) = \sum_{x=1}^{K} p(x) \log \frac{1}{p(x)}$$

Since $\log(x)$ is a concave function, using Jensen's inequality for a concave function $f(x)$, stated below:

$$\sum_i \lambda_i f(x_i) \leq f\left(\sum_i \lambda_i x_i\right)$$

it is clear that

$$H(X) = \sum_{x} p(x) \log \frac{1}{p(x)} \leq \log\left(\sum_{x} p(x) \frac{1}{p(x)}\right) = \log K$$

So $H(X) \leq \log K$: the maximum possible value of $H(X)$ is $\log K$. Choosing $p(X = i) = 1/K$, we obtain $H(X) = \log K$. So a uniformly distributed $X$ maximizes the entropy, and this maximum entropy is $\log K$.
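As a quick numerical illustration of Examples 1 and 2 (not part of the original notes), the short Python sketch below computes the entropy of a pmf, shows that the binary entropy $H(p, 1-p)$ peaks at $p = 0.5$, and checks that a uniform distribution over $K$ values attains $\log_2 K$ bits. The particular pmfs are made up for the example.

```python
import math

def entropy(pmf, base=2):
    """H(X) = -sum_x p(x) log p(x); terms with p(x) = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

# Example 1: binary entropy H(p, 1-p) is maximal (1 bit) at p = 0.5, zero at p = 0 or 1.
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"H({p:.1f}, {1 - p:.1f}) = {entropy([p, 1 - p]):.4f} bits")

# Example 2: over K outcomes, the uniform pmf attains the maximum log2(K) bits.
K = 8
skewed  = [0.5] + [0.5 / (K - 1)] * (K - 1)
uniform = [1.0 / K] * K
print(f"H(skewed)  = {entropy(skewed):.4f} bits")
print(f"H(uniform) = {entropy(uniform):.4f} bits = log2({K}) = {math.log2(K):.4f}")
```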
There is another way to solve this problem, using Lagrange multipliers. We are going to maximize

$$H(X) = -\sum_{k=1}^{K} P(X = k) \log P(X = k)$$

The constraint of this maximization is

$$g(p_1, p_2, \ldots, p_K) = \sum_{k=1}^{K} p_k = 1$$

So by varying the $p_k$ we try to find the maximum point of $H$. For all $k$ from 1 to $K$ we should maximize $H + \lambda(g - 1)$; hence we require:

$$\frac{\partial}{\partial p_k} \left( H + \lambda(g - 1) \right) = 0$$

That is:

$$\frac{\partial}{\partial p_k} \left( -\sum_{k=1}^{K} p_k \log p_k + \lambda \left( \sum_{k=1}^{K} p_k - 1 \right) \right) = 0$$

After carrying out the differentiation we obtain a set of $K$ independent equations:

$$-\log_2 p_k - \frac{1}{\ln 2} + \lambda = 0$$

Solving these equations, we see that all the $p_k$ must have the same value, so

$$p_k = \frac{1}{K}$$

As a conclusion, the uniform distribution yields the greatest entropy.

2 Joint and conditional entropy

We have seen the entropy of a single random variable (RV); we now extend it to a pair of RVs.

Definition 2. The joint entropy $H(X, Y)$ of a pair of discrete random variables $(X, Y)$ with a joint distribution $p(x, y)$ is defined as:

$$H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x, y) \qquad (4)$$

which can also be expressed as

$$H(X, Y) = -E[\log p(X, Y)] \qquad (5)$$

The conditional entropy is defined in the same way. It is the expectation of the entropies of the conditional distributions:

Definition 3. If $(X, Y) \sim p(x, y)$, then the conditional entropy $H(Y|X)$ is defined as:

$$H(Y|X) = \sum_{x \in \mathcal{X}} p(x) H(Y|X = x) \qquad (6)$$
$$= -\sum_{x \in \mathcal{X}} p(x) \sum_{y \in \mathcal{Y}} p(y|x) \log p(y|x) \qquad (7)$$
$$= -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y|x) \qquad (8)$$
$$= -E_{p(x,y)}[\log p(Y|X)] \qquad (9)$$

Theorem 1 (chain rule).

$$H(X, Y) = H(X) + H(Y|X) \qquad (10)$$

The proof follows from the fact that $\log p(X, Y) = \log p(X) + \log p(Y|X)$, taking the expectation of both sides.

3 Mutual information

The mutual information is a measure of the amount of information that one random variable contains about another random variable. It is the reduction of uncertainty of one random variable due to the knowledge of the other:

$$I(X; Y) = H(X) - H(X|Y) \qquad (11)$$
$$= E_{p(x,y)}\left[\log \frac{p(X, Y)}{p(X) p(Y)}\right] \qquad (12)$$
$$= H(Y) - H(Y|X) \qquad (13)$$
$$= I(Y; X) \qquad (14)$$

The diagram of Figure 2 represents the relation between the conditional entropies and the mutual information.

[Figure 2: Relation between entropy and mutual information, showing the regions H(X|Y), H(Y|X) and I(X;Y)]

The chain rule can also be stated for the mutual information. First we define the conditional mutual information as:

$$I(X; Y|Z) = H(X|Z) - H(X|Y, Z) \qquad (15)$$

Using this definition, the chain rule can be written:

$$I(X_1, X_2; Y) = H(X_1, X_2) - H(X_1, X_2|Y) \qquad (16)$$
$$= H(X_1) + H(X_2|X_1) - H(X_1|Y) - H(X_2|X_1, Y) \qquad (17)$$
$$= I(X_1; Y) + I(X_2; Y|X_1) \qquad (18)$$

Some properties:

• $I(X; Y) \geq 0$
• $I(X; Y|Z) \geq 0$
• $H(X) \leq \log |\mathcal{X}|$, where $|\mathcal{X}|$ denotes the number of elements in the range of $X$, with equality if and only if $X$ has a uniform distribution over $\mathcal{X}$.
• Conditioning reduces entropy: $H(X|Y) \leq H(X)$, with equality if and only if $X$ and $Y$ are independent.
• $H(X_1, X_2, \ldots, X_n) \leq \sum_{i=1}^{n} H(X_i)$, with equality if and only if the $X_i$ are independent.
• Let $(X, Y) \sim p(x, y) = p(x) p(y|x)$. The mutual information $I(X; Y)$ is a concave function of $p(x)$ for fixed $p(y|x)$ and a convex function of $p(y|x)$ for fixed $p(x)$.
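The identities of Sections 2 and 3 are easy to verify numerically. The sketch below is only an illustration: it assumes a small, made-up joint pmf $p(x, y)$, computes $H(X)$, $H(Y)$, $H(X, Y)$, $H(Y|X)$ and $I(X; Y)$, and checks the chain rule (10) and expression (13).

```python
import math

def H(probs):
    """Entropy of a collection of probabilities (zero entries contribute nothing)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint pmf p(x, y) on {0, 1} x {0, 1}, chosen only for illustration.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginals p(x) and p(y).
p_x = {x: sum(v for (a, _), v in p_xy.items() if a == x) for x in (0, 1)}
p_y = {y: sum(v for (_, b), v in p_xy.items() if b == y) for y in (0, 1)}

H_XY = H(p_xy.values())
H_X, H_Y = H(p_x.values()), H(p_y.values())

# H(Y|X) = -sum_{x,y} p(x, y) log p(y|x), with p(y|x) = p(x, y) / p(x)   -- eq. (8)
H_Y_given_X = -sum(v * math.log2(v / p_x[x]) for (x, _), v in p_xy.items() if v > 0)

print("chain rule (10) holds:", math.isclose(H_XY, H_X + H_Y_given_X))
I_XY = H_Y - H_Y_given_X                      # eq. (13)
print("I(X;Y) =", round(I_XY, 4), "(non-negative:", I_XY >= -1e-12, ")")
```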
4 Data Compression or Source Coding

Two classes of coding are used: lossless source coding and lossy source coding. In this section, only lossless coding will be considered. Data compression can be achieved by assigning short descriptions to the most frequent outcomes of the data source and longer descriptions to the less probable ones. For example, in Morse code the letter "e" is coded by just a single dot. In this chapter some principles of compression will be given.

Definition 4. A source code $C$ for a random variable $X$ is a mapping from $\mathcal{X}$, the range of $X$, to $\mathcal{D}^*$, the set of finite-length strings of symbols from a $D$-ary alphabet. Let $C(x)$ denote the codeword corresponding to $x$ and let $l(x)$ denote the length of $C(x)$.

Example 3. If you toss a coin, $\mathcal{X} = \{\text{tail}, \text{head}\}$, and one possible code is $C(\text{head}) = 0$, $C(\text{tail}) = 11$, so that $l(\text{head}) = 1$, $l(\text{tail}) = 2$.

Definition 5. The expected length of a code $C$ for a random variable $X$ with probability mass function $p(x)$ is given by

$$L(C) = \sum_{x \in \mathcal{X}} p(x) l(x) \qquad (19)$$

For the code of Example 3, the expected length is

$$L(C) = \frac{1}{2} \cdot 2 + \frac{1}{2} \cdot 1 = 1.5$$

Definition 6. A code is singular if $C(x_1) = C(x_2)$ for some $x_1 \neq x_2$; otherwise it is non-singular. Non-singularity guarantees that a single transmitted symbol can be recovered unambiguously, but not necessarily a whole sequence of symbols.

Definition 7. The extension of a code $C$ is the code obtained as:

$$C(x_1 x_2 \ldots x_n) = C(x_1) C(x_2) \cdots C(x_n) \qquad (20)$$

It means that a long message can be coded by concatenating the codewords of the shorter messages. For example, if $C(x_1) = 11$ and $C(x_2) = 00$, then $C(x_1 x_2) = 1100$.

Definition 8. A code is uniquely decodable if its extension is non-singular. In other words, any encoded string has only one possible source string and there is no ambiguity.

Definition 9. A code is called a prefix code or an instantaneous code if no codeword is a prefix of any other codeword.
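As a small sketch tying Definitions 5 and 9 to the code of Example 3 (symbol names and probabilities as assumed above), the snippet below computes the expected length (19) and checks the prefix condition.

```python
# Code of Example 3: C(head) = 0, C(tail) = 11, with a fair coin.
code = {"head": "0", "tail": "11"}
pmf  = {"head": 0.5, "tail": 0.5}

# Expected length L(C) = sum_x p(x) l(x)   -- eq. (19)
L = sum(pmf[x] * len(cw) for x, cw in code.items())
print("L(C) =", L)          # 1.5 bits, as computed above

# Prefix (instantaneous) condition: no codeword is a prefix of another codeword.
words = list(code.values())
is_prefix_code = not any(a != b and b.startswith(a) for a in words for b in words)
print("prefix code:", is_prefix_code)   # True for {0, 11}
```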