Achievability for Discrete Memoryless Systems
Abbas El Gamal (Stanford University)
Padovani Lecture 2009, IT Membership and Chapters Committee


Acknowledgments: Roberto Padovani, for his generous gift to the IT Society; the Summer School of IT general chairs, Aylin Yener and Gerhard Kramer, and the other organizers.

The lecture is based on Lecture Notes on Network Information Theory, jointly developed with Young-Han Kim, UC San Diego.

Network Information Flow Problem

Consider a general network information processing system: each node observes a subset of the sources and wishes to obtain descriptions of some or all of the sources, or to compute a function or make a decision based on these sources. To achieve these goals, the nodes communicate over the network.

[Figure: a general system of source nodes connected through a communication network]

Information flow questions:
◮ What are the necessary and sufficient conditions on information flow in the network under which the desired information processing goals can be achieved?
◮ What are the optimal coding/computing techniques/protocols needed?

The difficulty of answering these questions depends on the information processing goals, the source and communication network models, and the computational and storage capabilities of the nodes:
◮ Sources may be data, speech, music, images, video, sensor data, ...
◮ Nodes may be handsets, base stations, servers, sensor nodes, ...
◮ The communication network may be wired, wireless, or a hybrid

For example, if the sources are commodities with demands (rates in bits/sec), the nodes are connected by noiseless rate-constrained links, each intermediate node simply forwards the bits it receives, and the goal is to send the commodities to desired destination nodes, then the problem reduces to the well-studied multi-commodity flow problem, with known necessary and sufficient conditions on optimal flow [1]. In the case of a single commodity, these necessary and sufficient conditions reduce to the celebrated max-flow min-cut theorem [2, 3] (see the code sketch below).

[Figure: an example network of nodes 1, ..., N exchanging messages M1, M2, M3 over noiseless links with capacities C12, C13, C14, ...]

This simple information processing system model, however, does not capture many important aspects of real-world systems:
◮ Real-world information sources have redundancies, time and space correlations, and time variations
◮ Real-world networks may suffer from noise, interference, node failures, delays, and time variations
◮ Real-world networks may allow for broadcasting
◮ Real-world communication nodes may allow for more complex node operations than forwarding
◮ The goal in many information processing systems is not merely to communicate source information
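To make the max-flow min-cut statement above concrete, here is a minimal Python sketch of the Edmonds-Karp augmenting-path algorithm (added for this writeup, not part of the original lecture; the four-node topology and the capacity values are hypothetical, since the deck's figure does not specify numbers). The returned flow value equals the capacity of a minimum cut separating source from destination:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap[u][v] is the capacity of link (u, v); the result equals the
    capacity of a minimum s-t cut (max-flow min-cut theorem)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # breadth-first search for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:        # no augmenting path left: flow is maximum
            return total
        # bottleneck residual capacity along the path found
        path_flow = float('inf')
        v = t
        while v != s:
            u = parent[v]
            path_flow = min(path_flow, cap[u][v] - flow[u][v])
            v = u
        # push the bottleneck flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += path_flow
            flow[v][u] -= path_flow
            v = u
        total += path_flow

# Hypothetical capacities (bits/sec) on a 4-node network, source 0, sink 3
cap = [[0, 10, 5, 0],
       [0, 0, 4, 8],
       [0, 0, 0, 7],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 15 = capacity of the min-cut ({0}, {1, 2, 3})
```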
Network Information Theory

Network information theory attempts to answer the above information flow questions, while capturing some of these aspects of real-world networks, in the framework of Shannon information theory. It involves several new challenges beyond point-to-point communication, including: multi-accessing; broadcasting; interference; relaying; cooperation; competition; distributed source coding; use of side information; multiple descriptions; joint source–channel coding; coding for computing; ...

First paper on multiple user information theory:
Claude E. Shannon, "Two-way communication channels," Proc. 4th Berkeley Symp. Math. Stat. Prob., pp. 611–644, 1961
◮ He didn't find the optimal rates
◮ Problem remains open

Lots of research activity in the 70s and early 80s, with many new results and techniques developed, but:
◮ Many basic problems remained open
◮ Little interest from the information theory community and absolutely no interest from communication practitioners

Wireless communication and the Internet (and advances in technology) have revived the interest in this area:
◮ Lots of research going on since the mid 90s
◮ Some work on old open problems, but some new problems as well

State of the Theory

Focus has been on compression and communication. Most work assumes discrete memoryless (DM) and Gaussian source and channel models, and most results are for separate source–channel settings. For such a setting, the goal is to establish a coding theorem that determines the set of achievable rate tuples, i.e.,
◮ the capacity region when sending independent messages over a noisy channel, or
◮ the rate/rate-distortion region when sending source descriptions over a noiseless channel

Establishing a coding theorem involves proving:
◮ Achievability: There exists a sequence of codes that achieves any rate tuple in the capacity/rate region
◮ (Weak) converse: If a rate tuple is achievable, then it must be in the capacity/rate region

Capacity/rate regions are known only for a few settings, including:
◮ Point-to-point communication
◮ Multiple access channel
◮ Classes of broadcast, interference, and relay channels
◮ Classes of channels with state
◮ Classes of multiple descriptions and distributed source coding
◮ Classes of channels with feedback
◮ Classes of deterministic networks
◮ Classes of wiretap channels and distributed key generation settings

Several of the coding techniques developed, such as superposition coding, successive cancellation decoding, Slepian–Wolf, Wyner–Ziv, successive refinement, writing on dirty paper, network coding, decode–forward, compress–forward, and interference alignment, are beginning to have an impact on real-world networks. But many basic problems remain open, and a complete theory is yet to be developed.
The Lecture

Will present a simple (yet rigorous) and unified approach to achievability for DM systems through examples of basic channel and source models:
◮ Strong typicality
◮ Small number of coding techniques and performance analysis tools

The approach has its origin in Shannon's original work, which was later made rigorous by Forney [7] and Cover [8], among others.

Why focus on achievability?
◮ Achievability is useful: it often leads to practical coding techniques
◮ We have recently developed a more unified approach based on a small set of techniques and tools

Why discrete instead of Gaussian?
◮ "End-to-end" model for many real-world networks
◮ Many real-world sources are discrete
◮ Achievability for most Gaussian models follows from their discrete counterparts via scalar quantizations and standard limit theorems

Outline

1. Typical Sequences
2. DMC
3. DM-MAC
4. DM-BC with Degraded Message Sets
5. DM-IC
6. DM-RC: Decode–Forward
7. DMS
8. DMS with Side Information at Decoder
9. DM-RC: Compress–Forward
10. DMC with State Available at Encoder
11. DM-BC with Private Messages
12. DM-WTC
13. References and Appendices

Typical Sequences

Let $x^n \in \mathcal{X}^n$. Define the empirical pmf (or type) of $x^n$ as
$$\pi(x \mid x^n) := \frac{|\{i : x_i = x\}|}{n} \quad \text{for } x \in \mathcal{X}.$$

Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables, each drawn according to $p(x)$. By the law of large numbers (LLN), for every $x \in \mathcal{X}$, $\pi(x \mid X^n) \to p(x)$ in probability.

For $X \sim p(x)$, the set of $\epsilon$-typical $n$-sequences [4] is defined as
$$\mathcal{T}_\epsilon^{(n)}(X) := \{x^n : |\pi(x \mid x^n) - p(x)| \le \epsilon \cdot p(x) \text{ for all } x \in \mathcal{X}\}.$$

Here $H(X) := -\operatorname{E}(\log p(X))$, and the notation $p(x^n) \doteq 2^{-nH(X)}$ means
$$2^{-n(H(X) + \delta(\epsilon))} \le p(x^n) \le 2^{-n(H(X) - \delta(\epsilon))},$$
where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$.

Properties of Typical Sequences

Let $X^n \sim \prod_{i=1}^n p_X(x_i)$. Then:
◮ $P\{X^n \in \mathcal{T}_\epsilon^{(n)}(X)\} \ge 1 - \epsilon$ for $n$ sufficiently large
◮ $p(x^n) \doteq 2^{-nH(X)}$ for every $x^n \in \mathcal{T}_\epsilon^{(n)}(X)$
◮ $|\mathcal{T}_\epsilon^{(n)}(X)| \doteq 2^{nH(X)}$

Typical Average Lemma: Let $x^n \in \mathcal{T}_\epsilon^{(n)}(X)$ and let $g(x)$ be a nonnegative function. Then
$$(1 - \epsilon)\operatorname{E}(g(X)) \le \frac{1}{n}\sum_{i=1}^n g(x_i) \le (1 + \epsilon)\operatorname{E}(g(X)).$$

Jointly Typical Sequences

The joint type of $(x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n$ is defined as
$$\pi(x, y \mid x^n, y^n) := \frac{|\{i : (x_i, y_i) = (x, y)\}|}{n} \quad \text{for } (x, y) \in \mathcal{X} \times \mathcal{Y}.$$

For $(X, Y) \sim p(x, y)$, the set of jointly $\epsilon$-typical $n$-sequences is defined as before:
$$\mathcal{T}_\epsilon^{(n)}(X, Y) := \{(x^n, y^n) : |\pi(x, y \mid x^n, y^n) - p(x, y)| \le \epsilon \cdot p(x, y) \text{ for all } (x, y) \in \mathcal{X} \times \mathcal{Y}\}.$$

For $x^n \in \mathcal{T}_\epsilon^{(n)}(X)$, the set of conditionally $\epsilon$-typical $y^n$ sequences is
$$\mathcal{T}_\epsilon^{(n)}(Y \mid x^n) := \{y^n : |\pi(x, y \mid x^n, y^n) - p(x, y)| \le \epsilon \cdot p(x, y) \text{ for all } (x, y) \in \mathcal{X} \times \mathcal{Y}\}.$$

Here $H(X \mid Y) := -\operatorname{E}_{X,Y}(\log p(X \mid Y))$.

Properties of Jointly Typical Sequences

[Figure: the typical sets and their sizes: $|\mathcal{T}_\epsilon^{(n)}(X)| \doteq 2^{nH(X)}$, $|\mathcal{T}_\epsilon^{(n)}(Y)| \doteq 2^{nH(Y)}$, $|\mathcal{T}_\epsilon^{(n)}(X, Y)| \doteq 2^{nH(X,Y)}$, $|\mathcal{T}_\epsilon^{(n)}(Y \mid x^n)| \doteq 2^{nH(Y|X)}$, $|\mathcal{T}_\epsilon^{(n)}(X \mid y^n)| \doteq 2^{nH(X|Y)}$]

If $(x^n, y^n) \in \mathcal{T}_\epsilon^{(n)}(X, Y)$, then $x^n \in \mathcal{T}_\epsilon^{(n)}(X)$ and $y^n \in \mathcal{T}_\epsilon^{(n)}(Y)$.
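These definitions map directly to a few lines of code. The following Python sketch (an illustration added for this writeup, not part of the original lecture; the three-symbol pmf is hypothetical) computes the empirical pmf $\pi(x \mid x^n)$, tests membership in $\mathcal{T}_\epsilon^{(n)}(X)$, and estimates $P\{X^n \in \mathcal{T}_\epsilon^{(n)}(X)\}$, which approaches 1 as $n$ grows, in agreement with the first property above:

```python
import math
import random
from collections import Counter

def empirical_pmf(xs, alphabet):
    """Type pi(x | x^n) of the sequence xs."""
    counts = Counter(xs)
    n = len(xs)
    return {a: counts[a] / n for a in alphabet}

def is_typical(xs, p, eps):
    """x^n in T_eps^(n)(X) iff |pi(x|x^n) - p(x)| <= eps * p(x) for all x."""
    pi = empirical_pmf(xs, p.keys())
    return all(abs(pi[a] - p[a]) <= eps * p[a] for a in p)

def entropy_bits(p):
    """H(X) = -E[log2 p(X)]."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

p = {'a': 0.6, 'b': 0.3, 'c': 0.1}   # hypothetical source pmf
n, eps, trials = 5000, 0.1, 1000

hits = sum(
    is_typical(random.choices(list(p), weights=list(p.values()), k=n), p, eps)
    for _ in range(trials)
)
print(f"P(X^n typical) ~ {hits / trials:.3f}")  # close to 1, and -> 1 as n grows
print(f"H(X) = {entropy_bits(p):.3f} bits")     # |T_eps^(n)(X)| ~ 2^(n H(X))
```

The entropy printout connects to the cardinality property: $\mathcal{T}_\epsilon^{(n)}(X)$ is far too large to enumerate, but $H(X)$ pins down its exponential growth rate $2^{nH(X)}$.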
Another Useful Picture

[Figure: the sets $\mathcal{T}_\epsilon^{(n)}(X) \subseteq \mathcal{X}^n$ and $\mathcal{T}_\epsilon^{(n)}(Y) \subseteq \mathcal{Y}^n$, with a sequence $x^n$ and its conditionally typical set shown inside $\mathcal{T}_\epsilon^{(n)}(Y)$]

Other Properties

Let $x^n \in \mathcal{T}_{\epsilon'}^{(n)}(X)$ and $Y^n \mid \{X^n = x^n\} \sim \prod_{i=1}^n p_{Y|X}(y_i \mid x_i)$. Then, by the LLN, for every $\epsilon > \epsilon'$,
$$P\{(x^n, Y^n) \in \mathcal{T}_\epsilon^{(n)}(X, Y)\} \to 1 \quad \text{as } n \to \infty.$$

Joint Typicality Lemma: Given $(X, Y) \sim p(x, y)$, let $x^n \in \mathcal{T}_\epsilon^{(n)}(X)$ and $\tilde{Y}^n \sim \prod_{i=1}^n p_Y(\tilde{y}_i)$ (instead of $p_{Y|X}(\tilde{y}_i \mid x_i)$). Then
$$P\{(x^n, \tilde{Y}^n) \in \mathcal{T}_\epsilon^{(n)}(X, Y)\} \le 2^{-n(I(X;Y) - \delta(\epsilon))},$$
where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$.
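The joint typicality lemma can be checked numerically: when $\tilde{Y}^n$ is drawn i.i.d. from $p_Y$, ignoring $x^n$, the probability that $(x^n, \tilde{Y}^n)$ is jointly typical decays exponentially with exponent roughly $I(X;Y)$. Below is a Python sketch under a hypothetical doubly symmetric binary pmf (chosen for illustration, not from the lecture):

```python
import math
import random
from collections import Counter

def jointly_typical(xs, ys, p_xy, eps):
    """Check (x^n, y^n) in T_eps^(n)(X, Y)."""
    n = len(xs)
    counts = Counter(zip(xs, ys))
    return all(abs(counts[k] / n - p) <= eps * p for k, p in p_xy.items())

# Hypothetical joint pmf: X ~ Bern(1/2) observed through a BSC(0.3),
# so p(x) = p(y) = 1/2 and I(X;Y) = 1 - H_b(0.3) ~ 0.119 bits
p_xy = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}
I_xy = sum(p * math.log2(p / 0.25) for p in p_xy.values())

n, eps, trials = 50, 0.25, 100_000
xn = [0] * (n // 2) + [1] * (n // 2)          # an exactly typical x^n

# Draw Y^n i.i.d. from p_Y, ignoring x^n, and count joint typicality
hits = sum(
    jointly_typical(xn, [random.randint(0, 1) for _ in range(n)], p_xy, eps)
    for _ in range(trials)
)
print(f"empirical P(jointly typical) ~ {hits / trials:.4f}")  # ~0.013 here
print(f"2^(-n I(X;Y)) = {2 ** (-n * I_xy):.4f}")              # ~0.016
```

The mutual information is deliberately small so that the jointly typical event is rare yet still observable with a modest number of trials; the $\delta(\epsilon)$ slack in the lemma accounts for the gap between the empirical frequency and $2^{-nI(X;Y)}$.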