Photons, Bits and Entropy: from Planck to Shannon at the Roots of the Information Age
Recommended publications
Glossary Physics (I-Introduction)
Glossary Physics (I-Introduction). Efficiency: the percentage of the work put into a machine that is converted into useful work output; efficiency η = work done / energy used. The work output of any machine cannot exceed the work input (η ≤ 100%); in an ideal machine, where no energy is transformed into heat, work(input) = work(output) and η = 100%. Energy: the property of a system that enables it to do work. Conservation of energy: energy cannot be created or destroyed; it may be transformed from one form into another, but the total amount of energy never changes. Equilibrium: the state of an object when not acted upon by a net force or net torque; an object in equilibrium may be at rest or moving at uniform velocity, not accelerating. Mechanical equilibrium: the state of an object or system of objects for which all impressed forces cancel to zero and no acceleration occurs. Dynamic equilibrium: the object is moving without experiencing acceleration. Static equilibrium: the object is at rest. Force: the influence that can cause an object to be accelerated or retarded; acceleration is always in the direction of the net force, hence force is a vector quantity. The four elementary forces are: Electromagnetic force, an attraction or repulsion between electric charges, F = (1/(4πε₀)) q₁q₂/d² [N]; Gravitational force, a mutual attraction between all masses, F = G m M / d² [N]; Strong force (nuclear force), which acts within the nuclei of atoms, F = (1/(4πε₀)) e²/d² [N]; Weak force, which manifests itself in special reactions among elementary particles, such as the reactions that occur in radioactive decay. Symbols: G, gravitational constant, 6.672×10⁻¹¹ N m²/kg²; d, distance [m]; m, M, mass [kg]; q, charge [As] [C]; ε₀, dielectric constant, 8.854×10⁻¹² C²/(N m²) [F/m]; π ≈ 3.14; e, elementary charge, 1.60210×10⁻¹⁹ [As] [C].
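The Coulomb and gravitational force laws in the glossary can be checked numerically. A minimal sketch, using the constant values quoted in the entry; the charges, masses, and distances are chosen arbitrarily for illustration:

```python
import math

G = 6.672e-11      # gravitational constant [N m^2/kg^2], from the glossary
EPS0 = 8.854e-12   # dielectric constant of free space [C^2/(N m^2)], from the glossary

def coulomb_force(q1, q2, d):
    """Magnitude of the electrostatic force between charges q1, q2 [C] at distance d [m]."""
    return (1.0 / (4.0 * math.pi * EPS0)) * q1 * q2 / d**2

def gravitational_force(m, M, d):
    """Magnitude of the gravitational attraction between masses m, M [kg] at distance d [m]."""
    return G * m * M / d**2

# Two 1 C charges 1 m apart feel ~9e9 N (the Coulomb constant itself).
print(coulomb_force(1.0, 1.0, 1.0))
# Two 1 kg masses 1 m apart feel only ~6.7e-11 N, illustrating how weak gravity is.
print(gravitational_force(1.0, 1.0, 1.0))
```

The huge ratio between the two printed values is the usual illustration of why electrostatic forces dominate at atomic scales.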
An Atomic Physics Perspective on the New Kilogram Defined by Planck's Constant
An atomic physics perspective on the new kilogram defined by Planck's constant (Wolfgang Ketterle and Alan O. Jamison, MIT; manuscript submitted to Physics Today).

On May 20, the kilogram will no longer be defined by the artefact in Paris, but through the definition of Planck's constant h = 6.626 070 15×10⁻³⁴ kg m²/s. This is the result of advances in metrology: the two best measurements of h, the Watt balance and the silicon spheres, have now reached an accuracy similar to the mass drift of the ur-kilogram in Paris over 130 years. At this point, the General Conference on Weights and Measures decided to use the precisely measured numerical value of h as the definition of h, which in turn defines the unit of the kilogram.

But how can we now explain in simple terms what exactly one kilogram is? How do fixed numerical values of h, the speed of light c and the Cs hyperfine frequency νCs define the kilogram? In this article we give a simple conceptual picture of the new kilogram and relate it to the practical realizations of the kilogram.

A similar change occurred in 1983 for the definition of the meter, when the speed of light was defined to be 299 792 458 m/s. Since the second was the time required for 9 192 631 770 oscillations of hyperfine radiation from a cesium atom, defining the speed of light defined the meter as the distance travelled by light in 1/299 792 458 of a second, or equivalently, as 9 192 631 770/299 792 458 times the wavelength of the cesium hyperfine radiation.
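The arithmetic behind the meter redefinition can be verified directly. A short sketch using only the two exact defining constants quoted above:

```python
c = 299_792_458        # speed of light [m/s], exact by definition
nu_cs = 9_192_631_770  # cesium hyperfine frequency [Hz], exact by definition

# Wavelength of the cesium hyperfine radiation: distance light travels
# during one oscillation period, ~3.26 cm.
lambda_cs = c / nu_cs

# One meter expressed as a multiple of that wavelength (~30.66 wavelengths).
n_wavelengths = nu_cs / c

print(lambda_cs)                 # ~0.0326 m
print(n_wavelengths * lambda_cs) # recovers exactly one meter
```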
Non-Einsteinian Interpretations of the Photoelectric Effect
NON-EINSTEINIAN INTERPRETATIONS
Roger H. Stuewer

Non-Einsteinian Interpretations of the Photoelectric Effect

Few, if any, ideas in the history of recent physics were greeted with skepticism as universal and prolonged as was Einstein's light quantum hypothesis. The period of transition between rejection and acceptance spanned almost two decades and would undoubtedly have been longer if Arthur Compton's experiments had been less striking. Why was Einstein's hypothesis not accepted much earlier? Was there no experimental evidence that suggested, even demanded, Einstein's hypothesis? If so, why...

...observed spectral distribution for black-body radiation, and it led inexorably to the blatantly false conclusion that the total radiant energy in a cavity is infinite. (This "ultra-violet catastrophe," as Ehrenfest later termed it, was pointed out independently and virtually simultaneously, though not as emphatically, by Lord Rayleigh.)

Einstein developed his positive argument in two stages. First, he derived an expression for the entropy of black-body radiation in the Wien's law spectral region (the high-frequency, low-temperature region) and used this expression to evaluate the difference in entropy associated with a change in volume v − v₀ of the radiation, at constant energy. Secondly, he considered the case of n particles freely and independently moving around inside a given volume v₀ and determined the change in entropy of the system if all n particles accidentally found themselves in a given subvolume v at a randomly chosen instant of time. To evaluate this change in entropy, Einstein used the statistical version of the second law of thermodynamics in conjunction with his own strongly physical interpretation of the probability.
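The two stages of Einstein's argument can be summarized in a few lines. A sketch of the standard textbook reconstruction; the Wien-region entropy expression and the constant β = h/k are supplied here from that standard treatment, not from the excerpt itself:

```latex
% Stage 1: entropy change of Wien-region radiation of energy E and
% frequency \nu when its volume changes from v_0 to v at constant energy:
S - S_0 = \frac{E}{\beta\nu}\,\ln\frac{v}{v_0}, \qquad \beta = h/k .

% Stage 2: entropy change when n independent particles in v_0 are all
% found in the subvolume v (Boltzmann's principle with W = (v/v_0)^n):
S - S_0 = k \ln W = n k \,\ln\frac{v}{v_0} .

% Comparing the two expressions term by term:
\frac{E}{\beta\nu} = n k \quad\Longrightarrow\quad E = n h \nu .
% Wien-region radiation thus behaves thermodynamically like n
% independent quanta, each of energy h\nu.
```

The comparison in the last line is the heart of the light quantum hypothesis the chapter discusses.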
Chapter 4 Information Theory
Chapter: Information Theory

Introduction. This lecture covers entropy, joint entropy, mutual information and minimum description length. See the texts by Cover and MacKay for a more comprehensive treatment.

Measures of Information. Information on a computer is represented by binary bit strings. Decimal numbers can be represented using a binary encoding in which the position of a binary digit indicates its decimal equivalent: if there are N bits, the i-th bit represents the decimal number 2^(N−i). Bit 1 is referred to as the most significant bit and bit N as the least significant bit. To encode M different messages requires log₂ M bits.

(Signal Processing Course, W.D. Penny, April)

Entropy. The table below shows the probability of occurrence p(x_i), to two decimal places, of selected letters x_i in the English alphabet (a, e, j, q, t, z); these statistics were taken from MacKay's book on Information Theory. The table also shows the information content of a letter,

h(x_i) = log₂ (1/p(x_i)),

which is a measure of surprise: if we had to guess what a randomly chosen letter of the English alphabet was going to be, we'd say it was an A, E, T or other frequently occurring letter. If it turned out to be a Z, we'd be surprised. The letter E is so common that it is unusual to find a sentence without one. An exception is the novel Gadsby by Ernest Vincent Wright, in which...
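The information-content formula h(x) = log₂(1/p(x)) takes only a few lines to compute. A sketch; the letter probabilities below are rough English-frequency values chosen for illustration, not the entries of the original (partly lost) table:

```python
import math

def info_content(p):
    """Information content ('surprise') of an outcome with probability p, in bits."""
    return math.log2(1.0 / p)

# Illustrative, approximate English letter probabilities (assumed values).
probs = {"e": 0.12, "t": 0.09, "a": 0.08, "z": 0.001}

for letter, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{letter}: p={p}  h={info_content(p):.2f} bits")
# A rare letter like 'z' carries ~10 bits of surprise; a common 'e' only ~3 bits.
```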
Guide for the Use of the International System of Units (SI)
Guide for the Use of the International System of Units (SI). NIST Special Publication 811, 2008 Edition. Ambler Thompson (Technology Services) and Barry N. Taylor (Physics Laboratory), National Institute of Standards and Technology, Gaithersburg, MD 20899. (Supersedes NIST Special Publication 811, April 1995 Edition.) March 2008. U.S. Department of Commerce, Carlos M. Gutierrez, Secretary; National Institute of Standards and Technology, James M. Turner, Acting Director. Natl. Inst. Stand. Technol. Spec. Publ. 811, 2008 Ed., 85 pages (March 2008; 2nd printing November 2008). CODEN: NSPUE3. Note on 2nd printing: this 2nd printing, dated November 2008, corrects a number of minor typographical errors present in the 1st printing dated March 2008.

Preface. The International System of Units, universally abbreviated SI (from the French Le Système International d'Unités), is the modern metric system of measurement. Long the dominant measurement system used in science, the SI is becoming the dominant measurement system used in international commerce. The Omnibus Trade and Competitiveness Act of August 1988 [Public Law (PL) 100-418] changed the name of the National Bureau of Standards (NBS) to the National Institute of Standards and Technology (NIST) and gave to NIST the added task of helping U.S. ...
Understanding Shannon's Entropy Metric for Information
Understanding Shannon's Entropy metric for Information. Sriram Vajapeyam, [email protected], 24 March 2014.

1. Overview

Shannon's metric of "entropy" of information is a foundational concept of information theory [1, 2]. Here is an intuitive way of understanding, remembering, and/or reconstructing Shannon's entropy metric for information.

Conceptually, information can be thought of as being stored in or transmitted as variables that can take on different values. A variable can be thought of as a unit of storage that can take on, at different times, one of several different specified values, following some process for taking on those values. Informally, we get information from a variable by looking at its value, just as we get information from an email by reading its contents. In the case of the variable, the information is about the process behind the variable.

The entropy of a variable is the "amount of information" contained in the variable. This amount is determined not just by the number of different values the variable can take on, just as the information in an email is quantified not just by the number of words in the email or the different possible words in the language of the email. Informally, the amount of information in an email is proportional to the amount of "surprise" its reading causes. For example, if an email is simply a repeat of an earlier email, then it is not informative at all. On the other hand, if, say, the email reveals the outcome of a cliff-hanger election, then it is highly informative.
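The "surprise" intuition extends from a single outcome to a whole variable by averaging, which is Shannon's entropy H = Σ p·log₂(1/p). A minimal sketch contrasting a foregone conclusion with a cliff-hanger; the two election distributions are invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# An election that is effectively decided: the outcome carries almost no information.
print(entropy([0.999, 0.001]))   # ~0.01 bits

# A cliff-hanger election: the outcome carries a full bit of information.
print(entropy([0.5, 0.5]))       # 1.0 bit
```

A repeat of an earlier email corresponds to the degenerate distribution [1.0], whose entropy is zero, matching the "not informative at all" case in the text.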
1. Physical Constants 1 1
1. Physical constants 1 1. PHYSICAL CONSTANTS Table 1.1. Reviewed 1998 by B.N. Taylor (NIST). Based mainly on the “1986 Adjustment of the Fundamental Physical Constants” by E.R. Cohen and B.N. Taylor, Rev. Mod. Phys. 59, 1121 (1987). The last group of constants (beginning with the Fermi coupling constant) comes from the Particle Data Group. The figures in parentheses after the values give the 1-standard- deviation uncertainties in the last digits; the corresponding uncertainties in parts per million (ppm) are given in the last column. This set of constants (aside from the last group) is recommended for international use by CODATA (the Committee on Data for Science and Technology). Since the 1986 adjustment, new experiments have yielded improved values for a number of constants, including the Rydberg constant R∞, the Planck constant h, the fine- structure constant α, and the molar gas constant R,and hence also for constants directly derived from these, such as the Boltzmann constant k and Stefan-Boltzmann constant σ. The new results and their impact on the 1986 recommended values are discussed extensively in “Recommended Values of the Fundamental Physical Constants: A Status Report,” B.N. Taylor and E.R. Cohen, J. Res. Natl. Inst. Stand. Technol. 95, 497 (1990); see also E.R. Cohen and B.N. Taylor, “The Fundamental Physical Constants,” Phys. Today, August 1997 Part 2, BG7. In general, the new results give uncertainties for the affected constants that are 5 to 7 times smaller than the 1986 uncertainties, but the changes in the values themselves are smaller than twice the 1986 uncertainties. -
Information, Entropy, and the Motivation for Source Codes
MIT 6.02 DRAFT Lecture Notes. Last update: September 13, 2012.

CHAPTER 2: Information, Entropy, and the Motivation for Source Codes

The theory of information developed by Claude Shannon (MIT SM '37 & PhD '40) in the late 1940s is one of the most impactful ideas of the past century, and has changed the theory and practice of many fields of technology. The development of communication systems and networks has benefited greatly from Shannon's work.

In this chapter, we will first develop the intuition behind information and formally define it as a mathematical quantity, then connect it to a related property of data sources, entropy. These notions guide us to efficiently compress a data source before communicating (or storing) it, and recovering the original data without distortion at the receiver. A key underlying idea here is coding, or more precisely, source coding, which takes each "symbol" being produced by any source of data and associates it with a codeword, while achieving several desirable properties. (A message may be thought of as a sequence of symbols in some alphabet.) This mapping between input symbols and codewords is called a code. Our focus will be on lossless source coding techniques, where the recipient of any uncorrupted message can recover the original message exactly (we deal with corrupted messages in later chapters).

2.1 Information and Entropy

One of Shannon's brilliant insights, building on earlier work by Hartley, was to realize that regardless of the application and the semantics of the messages involved, a general definition of information is possible.
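The symbol-to-codeword mapping described above can be made concrete with a toy prefix code. The symbols and codewords here are invented for illustration (shorter codewords for presumably more frequent symbols); they are not taken from the chapter:

```python
# A toy prefix-free source code: no codeword is a prefix of another,
# so the bit stream can be decoded without separators.
CODE = {"A": "0", "B": "10", "C": "110", "D": "111"}

def encode(message):
    """Map each input symbol to its codeword and concatenate."""
    return "".join(CODE[s] for s in message)

def decode(bits):
    """Scan the bit stream, emitting a symbol whenever a full codeword is read."""
    inverse = {cw: s for s, cw in CODE.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

msg = "ABACAD"
bits = encode(msg)
print(bits)                 # "01001100111"
assert decode(bits) == msg  # lossless: the original message is recovered exactly
```

The round trip through `decode(encode(...))` is exactly the lossless property the chapter says source codes must provide.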
Shannon's Coding Theorems
Saint Mary's College of California, Department of Mathematics, Senior Thesis: Shannon's Coding Theorems. Author: Camille Santos. Faculty Advisors: Professor Michael Nathanson, Professor Andrew Conner, Professor Kathy Porter. May 16, 2016.

1 Introduction

A common problem in communications is how to send information reliably over a noisy communication channel. With his groundbreaking paper titled A Mathematical Theory of Communication, published in 1948, Claude Elwood Shannon asked this question and provided all the answers as well. Shannon realized that at the heart of all forms of communication, e.g. radio, television, etc., the one thing they all have in common is information. Rather than amplifying the information, as was being done in telephone lines at that time, information could be converted into sequences of 1s and 0s, and then sent through a communication channel with minimal error. Furthermore, Shannon established fundamental limits on what is possible or could be achieved by a communication system. Thus, this paper led to the creation of a new school of thought called Information Theory.

2 Communication Systems

In general, a communication system is a collection of processes that sends information from one place to another. Similarly, a storage system is a system that is used for storage and later retrieval of information. Thus, in a sense, a storage system may also be thought of as a communication system that sends information from one place (now, or the present) to another (then, or the future) [3]. In a communication system, information always begins at a source (e.g. a book, music, or video) and is then sent and processed by an encoder to a format that is suitable for transmission through a physical communications medium, called a channel.
Determination of Planck's Constant Using the Photoelectric Effect
Determination of Planck's Constant Using the Photoelectric Effect. Lulu Liu (Partner: Pablo Solis), MIT Undergraduate (Dated: September 25, 2007).

Together with my partner, Pablo Solis, we demonstrate the particle-like nature of light as characterized by photons with quantized and distinct energies, each dependent only on its frequency (ν) and scaled by Planck's constant (h). Using data obtained through the bombardment of an alkali metal surface (in our case, potassium) by light of varying frequencies, we calculate Planck's constant h to be 1.92×10⁻¹⁵ ± 1.08×10⁻¹⁵ eV·s.

1. THEORY AND MOTIVATION

It was discovered by Heinrich Hertz that light incident upon a matter target caused the emission of electrons from the target. The effect was termed the Hertz Effect (and later the Photoelectric Effect) and the electrons referred to as photoelectrons. It was understood that the electrons were able to absorb the energy of the incident light and escape from the coulomb potential that bound them to the nucleus. According to classical wave theory, the energy of a ...

This maximum kinetic energy can be determined by applying a retarding potential (Vr) across a vacuum gap in a circuit with an amp meter. On one side of this gap is the photoelectron emitter, a metal with work function W0. We let light of different frequencies strike this emitter. When eVr = Kmax we will cease to see any current through the circuit. By finding this voltage we calculate the maximum kinetic energy of the electrons emitted as a function of radiation frequency. This relationship (with some variation due to error) should model Equation 2 above.
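The stopping-voltage analysis the authors describe amounts to fitting the line eVr = hν − W0: the slope of stopping voltage versus frequency gives h (in eV·s). A sketch using ordinary least squares on made-up (frequency, stopping-voltage) data, not the authors' measurements:

```python
# Synthetic, noise-free data generated from eVr = h*nu - W0 for illustration.
H_TRUE = 4.136e-15   # assumed Planck constant [eV s] used to fabricate the data
W0 = 2.3             # assumed work function of the emitter [eV]

nus = [5.5e14, 6.0e14, 6.5e14, 7.0e14, 7.5e14]   # light frequencies [Hz]
vrs = [H_TRUE * nu - W0 for nu in nus]           # stopping voltages [V]

# Ordinary least squares for slope (h) and intercept (-W0).
n = len(nus)
mean_x = sum(nus) / n
mean_y = sum(vrs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(nus, vrs)) \
        / sum((x - mean_x) ** 2 for x in nus)
intercept = mean_y - slope * mean_x

print(slope)       # recovered h in eV s (~4.14e-15 with these fabricated data)
print(-intercept)  # recovered work function in eV (~2.3)
```

With real, noisy data the same fit yields the authors' h with an error bar; the noise-free data here just shows the mechanics.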
Quantifying the Quantum Stephan Schlamminger Looks at the Origins of the Planck Constant and Its Current Role in Redefining the Kilogram
measure for measure

Quantifying the quantum. Stephan Schlamminger looks at the origins of the Planck constant and its current role in redefining the kilogram.

A child on a swing is an everyday approximation of a macroscopic harmonic oscillator. The total energy of such a system depends on its amplitude, while the frequency of oscillation is, at least for small amplitudes, independent of the amplitude. So, it seems that the energy of the system can be adjusted continuously from zero upwards, that is, the child can swing at any amplitude. This is not the case, however, for a microscopic oscillator, the physics of which is described by the laws of quantum mechanics. A quantum-mechanical swing can change its energy only in steps of hν, the product of the Planck constant and the oscillation frequency, the (energy) quantum of the oscillator.

To measure the Planck constant (h) with high precision, a macroscopic experiment is necessary, because the unit of mass, the kilogram, can only be accessed with high precision at the one-kilogram value through its definition as the mass of the International Prototype of the Kilogram (IPK). The IPK is the only object on Earth whose mass we know for certain; all relative uncertainties increase from thereon. ... measure a force, in this case the weight of a mass m, as the product of a current and a voltage divided by a velocity, mg = UI/v (with g the Earth's gravitational acceleration). Meanwhile, the discoveries of the Josephson and quantum Hall effects led to precise electrical ...

The revised SI, likely to come into effect in 2019, is based on fixed numerical values, without uncertainties, of seven defining constants, three of which are already fixed in the present SI; the four additional ones are the elementary charge and the Planck, ...
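The balance relation mg = UI/v quoted in the column turns directly into a mass determination. A sketch with invented electrical readings; the values of U, I, v and g below are illustrative, not from the article:

```python
# Watt-balance style mass determination from mg = U*I/v, i.e. m = U*I/(v*g).
g = 9.81   # assumed local gravitational acceleration [m/s^2]
U = 1.0    # induced voltage in velocity mode [V] (illustrative)
I = 9.81   # coil current in force mode [A] (illustrative; chosen so m comes out to 1 kg)
v = 1.0    # coil velocity [m/s] (illustrative)

m = U * I / (v * g)   # mass [kg]
print(m)              # 1.0 kg with these made-up numbers
```

The point of the construction, as the column notes, is that the right-hand side involves only electrical and kinematic quantities, which the Josephson and quantum Hall effects tie to h.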
1. Physical Constants 101
1. PHYSICAL CONSTANTS. Table 1.1. Reviewed 2010 by P.J. Mohr (NIST). Mainly from the "CODATA Recommended Values of the Fundamental Physical Constants: 2006" by P.J. Mohr, B.N. Taylor, and D.B. Newell, Rev. Mod. Phys. 80 (2008) 633. The last group of constants (beginning with the Fermi coupling constant) comes from the Particle Data Group. The figures in parentheses after the values give the 1-standard-deviation uncertainties in the last digits; the corresponding fractional uncertainties in parts per 10⁹ (ppb) are given in the last column. This set of constants (aside from the last group) is recommended for international use by CODATA (the Committee on Data for Science and Technology). The full 2006 CODATA set of constants may be found at http://physics.nist.gov/constants. See also P.J. Mohr and D.B. Newell, "Resource Letter FC-1: The Physics of Fundamental Constants," Am. J. Phys. 78 (2010) 338.

Quantity; symbol, equation; value; uncertainty (ppb):
- speed of light in vacuum, c: 299 792 458 m s⁻¹ (exact)
- Planck constant, h: 6.626 068 96(33)×10⁻³⁴ J s (50)
- Planck constant, reduced, ħ ≡ h/2π: 1.054 571 628(53)×10⁻³⁴ J s (50) = 6.582 118 99(16)×10⁻²² MeV s (25)
- electron charge magnitude, e: 1.602 176 487(40)×10⁻¹⁹ C = 4.803 204 27(12)×10⁻¹⁰ esu (25, 25)
- conversion constant, ħc: 197.326 9631(49) MeV fm (25)
- conversion constant, (ħc)²: 0.389 379 304(19) GeV² mbarn (50)
- electron mass, mₑ: 0.510 998 910(13) MeV/c² = 9.109 382 15(45)×10⁻³¹ kg (25, 50)
- proton mass, m_p: 938.272 013(23) MeV/c² = 1.672 621 637(83)×10⁻²⁷ kg (25, 50) = 1.007 276 466 77(10) u = 1836.152 672 47(80) mₑ (0.10, 0.43)
- deuteron mass, m_d: 1875.612 793(47) MeV/c² (25)
- unified atomic mass unit (u), (mass of ¹²C atom)/12 = (1 g)/(N_A mol): 931.494 028(23) MeV/c² = 1.660 538 782(83)×10⁻²⁷ kg (25, 50)
- permittivity of free space, ε₀ = 1/μ₀c²: 8.854 187 817...×10⁻¹² F m⁻¹
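The internal consistency of the table can be spot-checked numerically, for example deriving ħ and the conversion constant ħc from h, e and c. A small sketch using the central values quoted above; the MeV-to-joule conversion (1 MeV = 10⁶ e joules) is the only ingredient supplied from outside the excerpt:

```python
import math

# Central values as quoted in the table.
h = 6.62606896e-34     # Planck constant [J s]
e = 1.602176487e-19    # electron charge magnitude [C]
c = 299_792_458        # speed of light in vacuum [m/s], exact

# Reduced Planck constant hbar = h / (2*pi).
hbar = h / (2 * math.pi)

# Conversion constant hbar*c in MeV fm:
# [J s]*[m/s] = [J m]; divide by (1e6 * e) J/MeV, divide by 1e-15 m/fm.
hbar_c = hbar * c / (1e6 * e) / 1e-15

print(hbar)     # ~1.054571628e-34 J s, matching the table
print(hbar_c)   # ~197.3269631 MeV fm, matching the table
```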