Information Theory and Coding


Subject Code: 10EC55
IA Marks: 25
No. of Lecture Hrs/Week: 04
Exam Hours: 03
Total No. of Lecture Hrs: 52
Exam Marks: 100

PART - A

Unit – 1: Information Theory: Introduction, Measure of information, Average information content of symbols in long independent sequences, Average information content of symbols in long dependent sequences, Markoff statistical model for information sources, Entropy and information rate of Markoff sources. 6 Hours

Unit – 2: Source Coding: Encoding of the source output, Shannon's encoding algorithm. Communication channels, Discrete communication channels, Continuous channels. 6 Hours

Unit – 3: Fundamental Limits on Performance: Source coding theorem, Huffman coding, Discrete memoryless channels, Mutual information, Channel capacity. 6 Hours

Unit – 4: Channel coding theorem, Differential entropy and mutual information for continuous ensembles, Channel capacity theorem. 6 Hours

PART - B

Unit – 5: Introduction to Error Control Coding: Introduction, Types of errors, Examples, Types of codes. Linear Block Codes: Matrix description, Error detection and correction, Standard arrays and table look-up for decoding. 7 Hours

Unit – 6: Binary Cyclic Codes, Algebraic structures of cyclic codes, Encoding using an (n-k)-bit shift register, Syndrome calculation, BCH codes. 7 Hours

Unit – 7: RS codes, Golay codes, Shortened cyclic codes, Burst error correcting codes, Burst and random error correcting codes. 7 Hours

Unit – 8: Convolution Codes, Time domain approach, Transform domain approach. 7 Hours

Text Books:
1. Digital and Analog Communication Systems, K. Sam Shanmugam, John Wiley, 1996.
2. Digital Communication, Simon Haykin, John Wiley, 2003.

Reference Books:
1. ITC and Cryptography, Ranjan Bose, TMH, 2nd edition, 2007.
2. Digital Communications, Glover and Grant, Pearson Education, 2nd edition, 2008.

INDEX SHEET

PART - A

Unit – 1: Information Theory
Introduction; Measure of information; Average information content of symbols in long independent sequences; Average information content of symbols in long dependent sequences; Markoff statistical model for information sources; Entropy and information rate of Markoff sources; Review questions.

Unit – 2: Source Coding
Encoding of the source output; Shannon's encoding algorithm; Communication channels; Discrete communication channels; Review questions.

Unit – 3: Fundamental Limits on Performance
Source coding theorem; Huffman coding; Discrete memoryless channels; Mutual information; Channel capacity; Review questions.

Unit – 4
Continuous channels; Differential entropy and mutual information for continuous ensembles; Channel capacity theorem; Review questions.

PART - B

Unit – 5: Introduction to Error Control Coding
Introduction; Types of errors; Types of codes; Linear block codes: matrix description; Error detection and correction; Standard arrays and table look-up for decoding; Hamming codes; Review questions.

Unit – 6
Binary cyclic codes; Algebraic structures of cyclic codes; Encoding using an (n-k)-bit shift register; Syndrome calculation; BCH codes; Review questions.

Unit – 7
Introduction; Golay codes and shortened cyclic codes; RS codes; Burst error correcting codes; Burst and random error correcting codes; Review questions.

Unit – 8
Convolution codes; Time domain approach; Transform domain approach; Review questions.

PART A

Unit – 1: Information Theory

Syllabus: Introduction, Measure of information, Average information content of symbols in long independent sequences, Average information content of symbols in long dependent sequences, Markoff statistical model for information sources, Entropy and information rate of Markoff sources. 6 Hours

1.1 Introduction

Communication involves the transmission of information from one point to another through a succession of processes. The basic elements of every communication system are:
o Transmitter
o Channel
o Receiver

Communication system (block diagram): Source of information -> Transmitter -> Channel -> Receiver -> User of information. The message signal is converted into the transmitted signal, the channel delivers the received signal, and the receiver produces an estimate of the message signal.

Information sources are classified as analog or discrete.
o Analog sources emit a continuous-amplitude, continuous-time electrical waveform.
o Discrete sources emit a sequence of letters or symbols; the output of a discrete information source is a string or sequence of symbols.

1.2 Measure of information

To measure the information content of a message quantitatively, we first need an intuitive notion of the amount of information. Consider the following weather forecasts for a trip to Mercara (Coorg) in winter, during the evening hours:

1. It is a cold day.
2. It is a cloudy day.
3. Possible snow flurries.

The amount of information received is obviously different for these messages:
o Message (1) contains very little information, since the weather in Coorg is cold for most of the winter season.
o The forecast of a cloudy day contains more information, since it is not an event that occurs often.
o In contrast, the forecast of snow flurries conveys even more information, since the occurrence of snow in Coorg is a rare event.

On an intuitive basis, then, the amount of information conveyed by the knowledge that an event has occurred is related to the probability of occurrence of that event: the message associated with the event least likely to occur contains the most information.

These concepts can now be expressed quantitatively in terms of probabilities. Suppose an information source emits one of q possible messages m1, m2, ..., mq with probabilities of occurrence p1, p2, ..., pq. Based on the above intuition, the information content of the k-th message should vary inversely with its probability:

    I(mk) ∝ 1/pk

To satisfy the intuitive concept of information, I(mk) must also satisfy

    I(mk) > I(mj)   if pk < pj
    I(mk) -> 0      as pk -> 1
    I(mk) >= 0      when 0 <= pk <= 1            ... (I)

A further requirement is that when two independent messages are received, the total information content is the sum of the information conveyed by each of the messages:

    I(mk & mq) = I(mk) + I(mq)                   ... (II)

A measure of information satisfying all of these requirements is

    I(mk) = log(1/pk)                            ... (III)

Unit of information measure

The base of the logarithm determines the unit assigned to the information content:
o Natural logarithm (base e): nat
o Base 10: Hartley (decit)
o Base 2: bit

The use of the binary digit as the unit of information is based on the fact that if the two possible binary digits occur with equal probability (p1 = p2 = 1/2), then the correct identification of the binary digit conveys

    I(m1) = I(m2) = -log2(1/2) = 1 bit

One bit is therefore the amount of information we gain when one of two possible and equally likely events occurs.

Illustrative example

A source puts out one of five possible messages during each message interval. The probabilities of these messages are

    p1 = 1/2, p2 = 1/4, p3 = 1/8, p4 = 1/16, p5 = 1/16.

What is the information content of these messages?

    I(m1) = -log2(1/2)  = 1 bit
    I(m2) = -log2(1/4)  = 2 bits
    I(m3) = -log2(1/8)  = 3 bits
    I(m4) = -log2(1/16) = 4 bits
    I(m5) = -log2(1/16) = 4 bits

HW: Calculate I for the above messages in nats and Hartleys.
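The homework only changes the base of the logarithm, since log_b(1/p) = ln(1/p) / ln(b). A minimal Python sketch for checking the numbers (the helper name info_content is illustrative, not from the text):

```python
import math

# Probabilities of the five messages from the illustrative example above
probs = {"m1": 1/2, "m2": 1/4, "m3": 1/8, "m4": 1/16, "m5": 1/16}

def info_content(p, base=2.0):
    """Information content I(m) = log_base(1/p) of a message of probability p."""
    return math.log(1.0 / p, base)

for name, p in probs.items():
    bits = info_content(p, 2)          # base 2  -> bits
    nats = info_content(p, math.e)     # base e  -> nats
    hartleys = info_content(p, 10)     # base 10 -> Hartleys (decits)
    print(f"{name}: {bits:.0f} bits = {nats:.4f} nats = {hartleys:.4f} Hartleys")

# Property (II): for independent messages the information contents add up
joint = probs["m1"] * probs["m2"]      # probability of m1 and m2 together, assuming independence
assert abs(info_content(joint) - (info_content(probs["m1"]) + info_content(probs["m2"]))) < 1e-12
```

Because 1 bit = ln 2 ≈ 0.693 nats ≈ 0.301 Hartleys, the homework answers are simply the bit values scaled by these constants.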
Digital communication system (block diagram): on the transmitting side, Source of information -> Source encoder -> Channel encoder -> Modulator -> Channel; on the receiving side, Channel -> Demodulator -> Channel decoder -> Source decoder -> User of information. The source encoder maps the message signal into the source codeword, the channel encoder maps this into the channel codeword, and the modulator produces the waveform sent over the channel; the receiver chain forms, in turn, an estimate of the channel codeword, an estimate of the source codeword, and finally an estimate of the message signal.

Entropy and Rate of Information of an Information Source / Model of a Markoff Source

1.3 Average Information Content of Symbols in Long Independent Sequences

Suppose that a source emits one of M possible symbols s1, s2, ..., sM in a statistically independent sequence, and let p1, p2, ..., pM be the probabilities of occurrence of the M symbols, respectively. Suppose further that during a long period of transmission a sequence of N symbols is generated. On average,

    s1 will occur N·p1 times
    s2 will occur N·p2 times
    :
    si will occur N·pi times

The information content of the i-th symbol is I(si) = log2(1/pi) bits, so the N·pi occurrences of si contribute an information content of

    N·pi · I(si) = N·pi · log2(1/pi) bits

The total information content of the message is the sum of the contributions of the M symbols of the source alphabet:

    I_total = Σ (i = 1 to M) N·pi · log2(1/pi)  bits

The average information content per symbol is therefore

    H = I_total / N = Σ (i = 1 to M) pi · log2(1/pi)  bits per symbol    ... (IV)

This is the equation used by Shannon. The average information content per symbol is also called the source entropy.
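As a numerical illustration of equation (IV), the following Python sketch (the helper name entropy_bits is illustrative only) computes the entropy of the five-message source from the earlier example:

```python
import math

def entropy_bits(probabilities):
    """Source entropy H = sum over i of p_i * log2(1/p_i), in bits per symbol.
    Symbols with p_i = 0 contribute nothing, since p*log2(1/p) -> 0 as p -> 0."""
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

# Five-message source from the earlier illustrative example
p = [1/2, 1/4, 1/8, 1/16, 1/16]
print(entropy_bits(p))            # 1.875 bits per symbol

# For comparison, five equally likely messages give the largest possible entropy, log2(5)
print(entropy_bits([1/5] * 5))    # about 2.3219 bits per symbol
```

Each of the five information contents (1, 2, 3, 4 and 4 bits) is weighted by how often its message occurs, giving an average of 1.875 bits per symbol for this source.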
1.4 The average information associated with an extremely unlikely message, with an extremely likely message, and the dependence of H on the probabilities of the messages

Consider the situation where a source emits just two messages, with probabilities p and (1 - p).
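For this two-message case, equation (IV) reduces to H = p log2(1/p) + (1 - p) log2(1/(1 - p)). A minimal Python sketch of how H varies with p (the name binary_entropy is illustrative only):

```python
import math

def binary_entropy(p):
    """H(p) = p*log2(1/p) + (1 - p)*log2(1/(1 - p)) for a source with two messages."""
    if p <= 0.0 or p >= 1.0:   # a certain (or impossible) message contributes no average information
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"p = {p:5.3f}  ->  H = {binary_entropy(p):.4f} bits per symbol")
```

H is largest (1 bit per symbol) when the two messages are equally likely (p = 1/2) and approaches zero when one message becomes extremely likely and the other extremely unlikely, which is the dependence of H on the message probabilities that this section examines.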