Chapter 4 Information Theory

Introduction

This lecture covers entropy, joint entropy, mutual information and minimum description length. See the texts by Cover and MacKay for a more comprehensive treatment.

Measures of Information

Information on a computer is represented by binary bit strings. Decimal numbers can be represented using the encoding shown in the table below: the position of a binary digit indicates its decimal equivalent, such that if there are N bits the ith bit represents the decimal number $2^{N-i}$. Bit 1 is referred to as the most significant bit and bit N as the least significant bit. To encode M different messages requires $\log_2 M$ bits.

Table: Binary encoding (bit 3, bit 2, bit 1, bit 0, and the corresponding decimal number).

Entropy

The table below shows the probability of occurrence $p(x_i)$ (to two decimal places) of selected letters $x_i$ of the English alphabet. These statistics were taken from MacKay's book on Information Theory. The table also shows the information content of a letter

    $h(x_i) = \log \frac{1}{p(x_i)}$

which is a measure of surprise: if we had to guess what a randomly chosen letter of the English alphabet was going to be, we'd say it was an A, E, T or other frequently occurring letter. If it turned out to be a Z, we'd be surprised. The letter E is so common that it is unusual to find a sentence without one. An exception is the novel Gadsby by Ernest Vincent Wright, in which the author deliberately makes no use of the letter E (from Cover's book on Information Theory).

Table: Probability and information content of selected letters (a, e, j, q, t, z).

The entropy is the average information content

    $H(x) = \sum_{i=1}^{M} p(x_i) h(x_i)$

where M is the number of discrete values that $x_i$ can take. It is usually written as

    $H(x) = -\sum_{i=1}^{M} p(x_i) \log p(x_i)$

with the convention that $0 \log 0 = 0$. Entropy measures uncertainty.

Entropy is maximised for a uniform distribution, $p(x_i) = 1/M$. The resulting entropy is $H(x) = \log_2 M$, which is the number of binary bits required to represent M different messages (see the first section). For M = 2, for example, the maximum entropy distribution is $p(x_1) = p(x_2) = 0.5$, e.g. an unbiased coin; biased coins have lower entropy.

The entropy of letters in the English language is about 4.1 bits, which is less than the $\log_2 27 \approx 4.75$ bits that would be required if every symbol were equally probable. This is, however, the information content obtained by considering only the probability of occurrence of individual letters. But in language our expectation of what the next letter will be is determined by what the previous letters have been. To measure this we need the concept of joint entropy. Because $H(x)$ is the entropy of a 1-dimensional variable it is sometimes called the scalar entropy, to differentiate it from the joint entropy.

Joint Entropy

The table below shows the probability of occurrence (to three decimal places) of selected pairs of letters $x_i$ and $y_j$, where $x_i$ is followed by $y_j$. This is called the joint probability $p(x_i, y_j)$. The table also shows the joint information content

    $h(x_i, y_j) = \log \frac{1}{p(x_i, y_j)}$

Table: Probability and information content of pairs of letters (th, ts, tr).

The average joint information content is given by the joint entropy

    $H(x, y) = -\sum_{i=1}^{M} \sum_{j=1}^{M} p(x_i, y_j) \log p(x_i, y_j)$

If we fix x to, say, $x_i$, then the probability of y taking on a particular value, say $y_j$, is given by the conditional probability

    $p(y = y_j | x = x_i) = \frac{p(x = x_i, y = y_j)}{p(x = x_i)}$

For example, if $x_i = t$ and $y_j = h$, then the joint probability $p(x_i, y_j)$ is just the probability of occurrence of the pair th. The conditional probability $p(y_j | x_i)$, however, is the probability that the next letter will be h given that we have just seen a t; it is obtained by dividing the joint probability by $p(x = t)$.
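To make these definitions concrete, here is a minimal sketch (in Python, assuming NumPy is available) that computes the surprise $h(x_i)$ and the scalar entropy $H(x)$ for a small made-up distribution. The probabilities and function names below are illustrative assumptions, not the values from MacKay's table.

```python
import numpy as np

def surprise(p):
    """Information content (surprise) in bits of an outcome with probability p."""
    return np.log2(1.0 / p)

def entropy(p):
    """Scalar entropy H(x) in bits, using the convention 0*log(0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

# Illustrative (made-up) probabilities over a tiny four-symbol alphabet
p_letters = np.array([0.5, 0.3, 0.15, 0.05])
print([round(surprise(pi), 2) for pi in p_letters])  # rarer outcomes are more surprising

# A uniform distribution over M values attains the maximum entropy log2(M)
M = 4
print(entropy(np.full(M, 1.0 / M)), np.log2(M))  # both equal 2.0 bits
print(entropy([0.9, 0.1]))                       # a biased coin: less than 1 bit
```

The biased-coin line illustrates the remark above: any departure from the uniform distribution lowers the entropy.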
Rearranging the conditional-probability relationship above and dropping the $y = y_j$ notation gives

    $p(x, y) = p(y|x) \, p(x)$

Now, if y does not depend on x then $p(y|x) = p(y)$. Hence, for independent variables, we have

    $p(x, y) = p(y) \, p(x)$

This means that for independent variables the joint entropy is the sum of the individual (or scalar) entropies

    $H(x, y) = H(x) + H(y)$

Consecutive letters in the English language are not independent (except either after, or during, a bout of serious drinking). If we take into account the statistical dependence on the previous letter, the per-letter entropy of English is reduced below the single-letter figure quoted above. If we look at the statistics not just of pairs but of triplets and quadruplets of letters, or at the statistics of words, then it is possible to calculate the entropy more accurately; as more and more contextual structure is taken into account, the estimates of entropy reduce. See Cover's book for more details.

Relative Entropy

The relative entropy, or Kullback-Leibler divergence, between a distribution $q(x)$ and a distribution $p(x)$ is defined as

    $D[q \| p] = \sum_x q(x) \log \frac{q(x)}{p(x)}$

Jensen's inequality states that for any convex function $f(x)$ (here taken to mean a function with a negative second derivative, such as the logarithm) and any set of M positive coefficients $\{\lambda_j\}$ which sum to one,

    $f\left(\sum_{j=1}^{M} \lambda_j x_j\right) \geq \sum_{j=1}^{M} \lambda_j f(x_j)$

A sketch of a proof of this is given in Bishop. Using this inequality we can show that

    $D[q \| p] = -\sum_x q(x) \log \frac{p(x)}{q(x)} \geq -\log \sum_x p(x) = -\log 1 = 0$

Hence $D[q \| p] \geq 0$. The KL-divergence will appear again in the discussion of the EM algorithm and variational Bayesian learning (see later lectures).

Mutual Information

The mutual information is defined as the relative entropy between the joint distribution and the product of the individual distributions

    $I(x, y) = D[p(X, Y) \| p(X) p(Y)]$

Substituting these distributions into the definition of relative entropy allows us to express the mutual information as the difference between the sum of the individual entropies and the joint entropy

    $I(x, y) = H(x) + H(y) - H(x, y)$

Therefore, if x and y are independent, the mutual information is zero. More generally, $I(x, y)$ is a measure of the dependence between variables, and this dependence will be captured whether the underlying relationship is linear or nonlinear. This is to be contrasted with Pearson's correlation coefficient, which measures only linear correlation (see the first lecture).
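These identities are easy to check numerically. The sketch below, again in Python with NumPy and a made-up 2x2 joint distribution (an assumption for illustration), computes the KL divergence and verifies that $I(x,y) = H(x) + H(y) - H(x,y)$, and that the mutual information of the product of the marginals is zero.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(q, p):
    """D[q||p] = sum_x q(x) log2(q(x)/p(x)); non-negative by Jensen's inequality."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    nz = q > 0
    return np.sum(q[nz] * np.log2(q[nz] / p[nz]))

def mutual_information(pxy):
    """I(x,y) = H(x) + H(y) - H(x,y), computed from a joint probability table."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy)

# A made-up joint distribution p(x,y): rows index x, columns index y
pxy = np.array([[0.30, 0.10],
                [0.15, 0.45]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

print(mutual_information(pxy))                               # > 0: x and y are dependent
print(kl_divergence(pxy.ravel(), np.outer(px, py).ravel()))  # same value, by definition
print(mutual_information(np.outer(px, py)))                  # 0 for independent variables
```

The second and third lines of output illustrate the two equivalent views of mutual information given above: as a KL divergence from the product of marginals, and as an entropy difference.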
Minimum Description Length

Given that a variable has a deterministic component and a random component, the complexity of that variable can be defined as the length of a concise description of that variable's regularities. This definition has the merit that both random data and highly regular data will have a low complexity, and so we have a correspondence with our everyday notion of complexity. (This is not the case with measures such as the Algorithmic Information Content (AIC) or the entropy, as these will be high even for purely random data.)

The length of a description can be measured by the number of binary bits required to encode it. If the probability of a set of measurements D is given by $p(D|\theta)$, where $\theta$ are the parameters of a probabilistic model, then the minimum length of a code for representing D is, from Shannon's coding theorem, the same as the information content of that data under the model

    $L = -\log p(D|\theta)$

However, for the receiver to decode the message they will need to know the parameters $\theta$, which, being real numbers, are encoded by truncating each to a finite precision $\delta\theta$. We need a total of $k \log \frac{1}{\delta\theta}$ bits to encode them. This gives

    $L = -\log p(D|\theta) + k \log \frac{1}{\delta\theta}$

The optimal precision $\delta\theta$ can be found as follows. First we expand the negative log-likelihood (i.e. the error) in a Taylor series about the Maximum Likelihood (ML) solution. This gives

    $L = -\log p(D|\hat{\theta}) + \frac{1}{2} \Delta\theta^T H \Delta\theta + k \log \frac{1}{\delta\theta}$

where $\Delta\theta = \theta - \hat{\theta}$ and H is the Hessian of the error, which is identical to the inverse covariance matrix (the first-order term in the Taylor series disappears because the error gradient is zero at the ML solution). Since the truncation error in each parameter is of order the precision $\delta\theta$, we can evaluate the quadratic term at $\Delta\theta = \delta\theta$; differentiating then gives

    $\frac{\partial L}{\partial \delta\theta} = H \delta\theta - \frac{k}{\delta\theta}$

If the covariance matrix is diagonal (and therefore the Hessian is diagonal) then, for the case of linear regression, the diagonal elements are

    $h_{ii} = \frac{N \sigma^2_{x_i}}{\sigma^2_e}$

where $\sigma^2_e$ is the variance of the errors and $\sigma^2_{x_i}$ is the variance of the ith input. More generally (e.g. for nonlinear regression) this last variance is replaced by the variance of the derivative of the output with respect to the ith parameter, but the dependence on N remains. Setting the above derivative to zero therefore gives

    $(\delta\theta)^2 = \frac{\text{constant}}{N}$

where the constant depends on the variance terms. When we come to take logs, this constant becomes an additive term that does not scale with either the number of data points or the number of parameters in the model; we can therefore ignore it. The Minimum Description Length (MDL) is therefore given by

    $\mathrm{MDL}(k) = -\log p(D|\hat{\theta}) + \frac{k}{2} \log N$

This may be minimised over the number of parameters k to get the optimal model complexity. For a linear regression model

    $-\log p(D|\hat{\theta}) = \frac{N}{2} \log \sigma^2_e$

Therefore

    $\mathrm{MDL}_{\mathrm{Linear}}(k) = \frac{N}{2} \log \sigma^2_e + \frac{k}{2} \log N$

which is seen to consist of an accuracy term and a complexity term. This criterion can be used to select the optimal number of input variables and therefore offers a solution to the bias-variance dilemma (see the earlier lecture). In later lectures the MDL criterion will be used in autoregressive and wavelet models.

The MDL complexity measure can be further refined by integrating out the dependence on $\theta$ altogether. The resulting measure is known as the stochastic complexity

    $I(k) = -\log p(D|k)$

where

    $p(D|k) = \int p(D|\theta, k) \, p(\theta) \, d\theta$

In Bayesian statistics this quantity is known as the marginal likelihood or evidence. The stochastic complexity measure is thus equivalent, after taking negative logs, to the Bayesian model order selection criterion (see later lectures). See Bishop for a further discussion of this relationship.
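As a rough illustration of how the linear-regression MDL criterion might be used to select the number of inputs, the sketch below fits nested least-squares models to synthetic data and picks the order k that minimises $\mathrm{MDL}_{\mathrm{Linear}}(k)$. The data-generating process, sample size, noise level and the helper name `mdl_linear` are all assumptions made for the example, and natural logarithms are used as in the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (an assumption): y depends on the first 3 of 8 candidate inputs
N, K = 200, 8
X = rng.standard_normal((N, K))
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(N)

def mdl_linear(X, y, k):
    """MDL(k) = (N/2) log(sigma_e^2) + (k/2) log(N) for a model using the first k inputs."""
    n = len(y)
    if k == 0:
        resid = y
    else:
        beta, *_ = np.linalg.lstsq(X[:, :k], y, rcond=None)
        resid = y - X[:, :k] @ beta
    sigma2_e = np.mean(resid ** 2)   # ML estimate of the error variance
    return 0.5 * n * np.log(sigma2_e) + 0.5 * k * np.log(n)

scores = {k: mdl_linear(X, y, k) for k in range(K + 1)}
print(min(scores, key=scores.get))   # with this synthetic data the minimum is typically at k = 3
```

The accuracy term falls as more inputs are added while the complexity term grows, so the minimum of the criterion trades the two off, which is exactly the resolution of the bias-variance dilemma described above.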