An Introduction to Low-Density Parity-Check Codes


An Introduction to Low-Density Parity-Check Codes
Paul H. Siegel
Electrical and Computer Engineering
University of California, San Diego
© Copyright 2007 by Paul H. Siegel

Outline
• Shannon's Channel Coding Theorem
• Error-Correcting Codes – State-of-the-Art
• LDPC Code Basics
• Encoding
• Decoding
• LDPC Code Design
• Asymptotic performance analysis
• Design optimization
• EXIT Chart Analysis
• Applications
• Binary Erasure Channel
• Binary Symmetric Channel
• AWGN Channel
• Rayleigh Fading Channel
• Partial-Response Channel
• Basic References

A Noisy Communication System
(Figure: Shannon's block diagram. An information source emits a message; the transmitter converts it to a signal; a noise source corrupts the signal in the channel; the receiver maps the received signal back to a message for the destination.)

Channels
• Binary erasure channel BEC(ε): an input bit is received correctly with probability 1 − ε and erased (received as "?") with probability ε.
• Binary symmetric channel BSC(p): an input bit is received correctly with probability 1 − p and flipped with probability p.

More Channels
• Additive white Gaussian noise channel AWGN: the output has conditional densities f(y | −1) and f(y | +1), Gaussians centered at −√P and +√P.

Shannon Capacity
Every communication channel is characterized by a single number C, called the channel capacity. It is possible to transmit information over the channel reliably (with probability of error → 0) if and only if the rate, in information bits per channel use, satisfies

R ≜ (# information bits) / (channel use) < C.

Channels and Capacities
• Binary erasure channel BEC(ε): C = 1 − ε
• Binary symmetric channel BSC(p): C = 1 − H2(p), where H2(p) = −p log2 p − (1 − p) log2(1 − p) is the binary entropy function.

More Channels and Capacities
• Additive white Gaussian noise channel AWGN: C = (1/2) log2(1 + P/σ²)
(A short numerical sketch of these three capacity formulas appears at the end of this section.)

Coding
We use a code to communicate over the noisy channel:
x = x1, x2, …, xk → Encoder → c = c1, c2, …, cn → Channel → y = y1, y2, …, yn → Decoder → x̂ = x̂1, x̂2, …, x̂k
Code rate: R = k/n

Shannon's Coding Theorems
• If C is a code with rate R > C, then the probability of error in decoding this code is bounded away from 0. (In other words, at any rate R > C, reliable communication is not possible.)
• For any information rate R < C and any δ > 0, there exists a code C of length n_δ and rate R such that the probability of error in maximum-likelihood decoding of this code is at most δ.
• Here C = max (H(x) − Hy(x)), the maximum over input distributions of the mutual information between channel input and output.
• Proof: non-constructive!

Review of Shannon's Paper
• A pioneering paper: Shannon, C. E., "A Mathematical Theory of Communication," Bell System Tech. J., vol. 27, pp. 379–423, 623–656, 1948.
• A regrettable review: Doob, J. L., Mathematical Reviews, MR0026286 (10,133e):
"The discussion is suggestive throughout, rather than mathematical, and it is not always clear that the author's mathematical intentions are honorable."
• Cover, T., "Shannon's Contributions to Shannon Theory," AMS Notices, vol. 49, no. 1, p. 11, January 2002:
"Doob has recanted this remark many times, saying that it and his naming of super martingales (processes that go down instead of up) are his two big regrets."

Finding Good Codes
• Ingredients of Shannon's proof:
• Random code
• Large block length
• Optimal decoding
• Problem: randomness + large block length + optimal decoding = COMPLEXITY!
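The capacity formulas above are easy to check numerically. The following short Python sketch (my own illustration, not part of the slides; the function names are invented) evaluates C for the BEC, BSC, and AWGN channels:

```python
import math

def bec_capacity(eps):
    """BEC(eps): C = 1 - eps."""
    return 1.0 - eps

def binary_entropy(p):
    """H2(p) = -p*log2(p) - (1-p)*log2(1-p), with H2(0) = H2(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    """BSC(p): C = 1 - H2(p)."""
    return 1.0 - binary_entropy(p)

def awgn_capacity(snr):
    """AWGN with signal-to-noise ratio P/sigma^2: C = (1/2)*log2(1 + snr)."""
    return 0.5 * math.log2(1.0 + snr)

print(bec_capacity(0.1))   # 0.9    bits per channel use
print(bsc_capacity(0.1))   # ~0.531 bits per channel use
print(awgn_capacity(1.0))  # 0.5    bits per channel use (0 dB SNR)
```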
State-of-the-Art
• Solution:
• Long, structured, "pseudorandom" codes
• Practical, near-optimal decoding algorithms
• Examples:
• Turbo codes (1993)
• Low-density parity-check (LDPC) codes (1960, 1999)
• State of the art: turbo codes and LDPC codes have brought Shannon limits to within reach on a wide range of channels.

Evolution of Coding Technology
(Figure: evolution of coding technology, culminating in LDPC codes; from Trellis and Turbo Coding, Schlegel and Perez, IEEE Press, 2004.)

Linear Block Codes – Basics
• Parameters of a binary linear block code C:
• k = number of information bits
• n = number of code bits
• R = k/n
• dmin = minimum distance
• There are many ways to describe C:
• Codebook (list)
• Parity-check matrix / generator matrix
• Graphical representation ("Tanner graph")

Example: (7,4) Hamming Code
• (n, k) = (7, 4), R = 4/7
• dmin = 3
• Single error correcting
• Double erasure correcting
• Encoding rule (based on the Venn-diagram figure of three overlapping circles holding bit positions 1–7):
1. Insert data bits in positions 1, 2, 3, 4.
2. Insert "parity" bits in positions 5, 6, 7 to ensure an even number of 1's in each circle.

Example: (7,4) Hamming Code
• 2^k = 16 codewords
• Systematic encoder places input bits in positions 1, 2, 3, 4
• Parity bits are in positions 5, 6, 7

0000000   1000111
0001011   1001100
0010110   1010001
0011101   1011010
0100101   1100010
0101110   1101001
0110011   1110100
0111000   1111111

Hamming Code – Parity Checks
Bit position: 1 2 3 4 5 6 7
Check 1:      1 1 1 0 1 0 0
Check 2:      1 0 1 1 0 1 0
Check 3:      1 1 0 1 0 0 1

Hamming Code: Matrix Perspective
• Parity-check matrix H, applied to a codeword c = [c1, c2, c3, c4, c5, c6, c7]:

      [1 1 1 0 1 0 0]
  H = [1 0 1 1 0 1 0]        H c^T = [0 0 0]^T
      [1 1 0 1 0 0 1]

• Generator matrix G, applied to a message u = [u1, u2, u3, u4]:

      [1 0 0 0 1 1 1]
  G = [0 1 0 0 1 0 1]        u G = c = [c1, c2, c3, c4, c5, c6, c7]
      [0 0 1 0 1 1 0]
      [0 0 0 1 0 1 1]

Parity-Check Equations
• The parity-check matrix implies a system of linear equations:
  c1 + c2 + c3 + c5 = 0
  c1 + c3 + c4 + c6 = 0
  c1 + c2 + c4 + c7 = 0
• The parity-check matrix is not unique: any set of vectors that spans the rowspace generated by H can serve as the rows of a parity-check matrix (including sets with more than 3 vectors).

Hamming Code: Tanner Graph
• Bipartite graph representing the parity-check equations: variable nodes c1, …, c7 on one side, one check node per equation on the other, with an edge joining each variable to every check it participates in:
  c1 + c2 + c3 + c5 = 0
  c1 + c3 + c4 + c6 = 0
  c1 + c2 + c4 + c7 = 0

Tanner Graph Terminology
• Variable nodes (bit nodes, drawn on the left)
• Check nodes (constraint nodes, drawn on the right)
• The degree of a node is the number of edges connected to it.
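The matrix and graph descriptions above are easy to exercise in code. Here is a small Python sketch (my own illustration, not from the slides) that encodes with the generator matrix G and checks parity with H, using numpy for the GF(2) arithmetic:

```python
import numpy as np

# (7,4) Hamming code matrices, exactly as given above
# (systematic: data bits in positions 1-4, parity bits in 5-7).
G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def encode(u):
    """Systematic encoding c = u G over GF(2)."""
    return (np.array(u) @ G) % 2

def syndrome(c):
    """Syndrome H c^T over GF(2); all-zero exactly when c is a codeword."""
    return (H @ np.array(c)) % 2

c = encode([1, 0, 1, 1])
print(c)                      # [1 0 1 1 0 1 0] -- appears in the codeword list above
print(syndrome(c))            # [0 0 0]

# A single bit error makes the syndrome equal the corresponding column
# of H, which is what lets the decoder locate and flip the bad bit.
c[5] ^= 1
print(syndrome(c), H[:, 5])   # both print [0 1 0]
```

The same matrices encode the Tanner graph: the weight of column i of H (here 3, 2, 2, 2, 1, 1, 1) is the degree of variable node ci, and the weight of each row (here 4 throughout) is the degree of the corresponding check node.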
Low-Density Parity-Check Codes
• Proposed by Gallager (1960)
• "Sparseness" of matrix and graph descriptions:
• Number of 1's in H grows linearly with block length
• Number of edges in Tanner graph grows linearly with block length
• "Randomness" of construction in:
• Placement of 1's in H
• Connectivity of variable and check nodes
• Iterative, message-passing decoder:
• Simple "local" decoding at nodes
• Iterative exchange of information (message-passing)

Review of Gallager's Paper
• Another pioneering work: Gallager, R. G., Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, Mass.: 1963.
• A more enlightened review: Horstein, M., IEEE Trans. Inform. Theory, vol. 10, no. 2, p. 172, April 1964:
"This book is an extremely lucid and circumspect exposition of an important piece of research. A comparison with other coding and decoding procedures designed for high-reliability transmission ... is difficult... Furthermore, many hours of computer simulation are needed to evaluate a probabilistic decoding scheme... It appears, however, that LDPC codes have a sufficient number of desirable features to make them highly competitive with ... other schemes ...."

Gallager's LDPC Codes
• Now called "regular" LDPC codes
• Parameters (n, j, k):
─ n = codeword length
─ j = number of parity-check equations involving each code bit = degree of each variable node
─ k = number of code bits involved in each parity-check equation = degree of each check node
• Locations of 1's can be chosen randomly, subject to the (j, k) constraints (one concrete construction is sketched below).
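As a concrete illustration of such a random placement, here is a minimal Python sketch in the spirit of Gallager's original construction (my own code, assuming k divides n): stack j submatrices of size (n/k) × n, the first with k consecutive 1's per row and the rest random column permutations of it. Practical designs add further constraints (e.g., avoiding short cycles in the Tanner graph), which this sketch ignores.

```python
import numpy as np

def gallager_regular_H(n, j, k, rng):
    """Parity-check matrix of a regular (n, j, k) LDPC code:
    every column has weight j, every row has weight k."""
    assert n % k == 0, "k must divide n"
    m = n // k                                # rows per submatrix
    base = np.zeros((m, n), dtype=int)
    for i in range(m):
        base[i, i * k:(i + 1) * k] = 1        # k consecutive 1's per row
    # Remaining j-1 submatrices: random column permutations of the base.
    blocks = [base] + [base[:, rng.permutation(n)] for _ in range(j - 1)]
    return np.vstack(blocks)

rng = np.random.default_rng(0)
H = gallager_regular_H(n=20, j=3, k=4, rng=rng)
print(H.shape)          # (15, 20): n*j/k parity checks on n code bits
print(H.sum(axis=0))    # all 3: each variable node has degree j
print(H.sum(axis=1))    # all 4: each check node has degree k
```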