4 Classical Error Correcting Codes
“Machines should work. People should think.” Richard Hamming.

Coding theory is an application of information theory critical for reliable communication and fault-tolerant information storage and processing; indeed, Shannon's channel coding theorem tells us that we can transmit information over a noisy channel with an arbitrarily low probability of error. A code is designed based on a well-defined set of specifications and protects the information only against the type and number of errors prescribed in its design. A good code:

(i) Adds a minimum amount of redundancy to the original message.

(ii) Has efficient encoding and decoding schemes; this means that information is easily mapped to and extracted from a codeword.

Reliable communication and fault-tolerant computing are intimately related to each other; the concept of a communication channel, an abstraction for a physical system used to transmit information from one place to another and/or from one time to another, is at the core of communication as well as of information storage and processing. It should, however, be clear that the existence of error-correcting codes does not guarantee that logic operations can be implemented using noisy gates and circuits. The strategies to build reliable computing systems using unreliable components are based on John von Neumann's studies of error detection and error correction techniques for information storage and processing.

This chapter covers topics from the theory of classical error detection and error correction and introduces concepts useful for understanding quantum error correction, the subject of Chapter 5. For simplicity we often restrict our discussion to the binary case. The organization of Chapter 4 is summarized in Figure 96: after introducing the basic concepts we discuss linear, polynomial, and several other classes of codes.

[Figure 96: The organization of Chapter 4 at a glance: basic concepts, linear codes, polynomial codes, and other codes.]

4.1 Informal Introduction to Error Detection and Error Correction

Error detection is the process of determining if a message is in error; error correction is the process of restoring a message in error to its original content. Information is packaged into codewords, and the intuition behind error detection and error correction is to artificially increase some “distance” between codewords so that errors cannot possibly transform one valid codeword into another valid codeword. Error detection and error correction are based on schemes to increase the redundancy of a message. A crude analogy is to bubble wrap a fragile item and place it into a box to reduce the chance that the item will be damaged during transport. Redundant information plays the role of the packing materials; it increases the amount of data transmitted, but it also increases the chance that we will be able to restore the original contents of a message distorted during communication. Coding corresponds to the selection of both the packing materials and the strategy to optimally pack the fragile item, subject to the obvious constraints: use the least amount of packing materials and the least amount of effort to pack and unpack.

Example. A trivial example of an error detection scheme is the addition of a parity check bit to a word of a given length. This simple scheme is quite powerful; it allows us to detect any odd number of errors, but fails if an even number of errors occur. For example, consider a system that enforces even parity for an eight-bit word. Given the string (10111011), we add one more bit to ensure that the total number of 1s is even, in this case a 0, and we transmit the nine-bit string (101110110). The error detection procedure is to count the number of 1s; we decide that the string is in error if this number is odd. When the information symbols are embedded unchanged in the codeword, as in this example, the code is called a systematic code; in a non-systematic code the information symbols are scattered throughout the codeword.

This example also hints at the limitations of error detection mechanisms. A code is designed with certain error detection or error correction capabilities and fails to detect, or to correct, error patterns not covered by its original design. In the previous example we transmit (101110110); when two errors occur, in the 4th and the 7th bits, we receive (101010010). This tuple has even parity (an even number of 1s) and our error detection scheme fails.
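The parity scheme above is easy to express in code. Below is a minimal Python sketch (the helper names add_parity and check_parity are ours, not from the text) that encodes an eight-bit word with an even-parity bit and checks a received nine-bit string; it also reproduces the undetected two-bit error pattern discussed above.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """Return True if the received word passes the even-parity check."""
    return sum(bits) % 2 == 0

word = [1, 0, 1, 1, 1, 0, 1, 1]   # (10111011)
sent = add_parity(word)           # (101110110)

# A single error is detected ...
one_error = sent.copy()
one_error[3] ^= 1                 # flip the 4th bit
assert not check_parity(one_error)

# ... but two errors (4th and 7th bits) restore even parity: detection fails.
two_errors = sent.copy()
two_errors[3] ^= 1
two_errors[6] ^= 1                # received (101010010)
assert check_parity(two_errors)
```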
Error detection and error correction have practical applications in our daily life. Error detection is used in cases when an item needs to be uniquely identified by a string of digits and there is the possibility of misidentification. Bar codes, also called Universal Product Codes (UPC), the codes used by the postal service, the American banking system, or by the airlines for paper tickets, all have some error detection capability. For example, in the case of ordinary bank checks and travellers checks there are means to detect if the check numbers are entered correctly during various financial transactions.

Example. The International Standard Book Number (ISBN) coding scheme. The ISBN code is designed to detect any single digit in error and any transposition of adjacent digits. The ISBN number consists of 10 digits:

ISBN → d1 d2 d3 d4 d5 d6 d7 d8 d9 d10

Here d10 is computed as follows:

$$A = (w_1 d_1 + w_2 d_2 + w_3 d_3 + w_4 d_4 + w_5 d_5 + w_6 d_6 + w_7 d_7 + w_8 d_8 + w_9 d_9) \bmod 11$$

with w1 = 10, w2 = 9, w3 = 8, w4 = 7, w5 = 6, w6 = 5, w7 = 4, w8 = 3, w9 = 2. Then:

$$d_{10} = \begin{cases} 11 - A & \text{if } 2 \le A \le 10 \\ X & \text{if } A = 1 \\ 0 & \text{if } A = 0 \end{cases}$$

For example, consider the book with ISBN 0-471-43962-2. In this case:

$$A = (10 \times 0 + 9 \times 4 + 8 \times 7 + 7 \times 1 + 6 \times 4 + 5 \times 3 + 4 \times 9 + 3 \times 6 + 2 \times 2) \bmod 11$$
$$A = (36 + 56 + 7 + 24 + 15 + 36 + 18 + 4) \bmod 11 = 196 \bmod 11 = 9.$$
$$d_{10} = 11 - A = 2.$$

The code presented above detects any single digit in error because the weights are relatively prime to 11. Indeed, if digit $d_i$ is affected by an error and becomes $d_i + e_i$ with $e_i \neq 0$, then

$$w_i(d_i + e_i) \bmod 11 \neq w_i d_i \bmod 11$$

because $w_i e_i \not\equiv 0 \pmod{11}$: $w_i$ is relatively prime to 11 and $0 < |e_i| < 11$. The code also detects any transposition of adjacent digits. Indeed, swapping $d_i$ and $d_{i+1}$ changes the weighted sum by $(w_i - w_{i+1})(d_{i+1} - d_i) = d_{i+1} - d_i$, since the difference of two adjacent weights is $w_i - w_{i+1} = 1$; as $|d_{i+1} - d_i| < 11$, this change cannot be a multiple of 11 unless the two digits are equal. We conclude that the ISBN code detects any single digit in error and any transposition of adjacent digits.
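A short Python sketch makes the check-digit computation concrete (the function name isbn10_check_digit is ours, not from the text); it verifies the worked example above and shows that an adjacent transposition is detected.

```python
def isbn10_check_digit(d):
    """Compute the ISBN-10 check digit for the nine digits d[0..8].

    The weights run from 10 down to 2, the weighted sum is reduced
    mod 11, and a check value of 10 is written as the symbol 'X'.
    """
    A = sum(w * digit for w, digit in zip(range(10, 1, -1), d)) % 11
    r = (11 - A) % 11          # A = 0 -> 0, A = 1 -> 10 ('X'), else 11 - A
    return 'X' if r == 10 else str(r)

# The worked example from the text: ISBN 0-471-43962-2.
assert isbn10_check_digit([0, 4, 7, 1, 4, 3, 9, 6, 2]) == '2'

# Transposing the adjacent digits 4 and 7 changes the check digit,
# so the error is detected.
assert isbn10_check_digit([0, 7, 4, 1, 4, 3, 9, 6, 2]) != '2'
```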
Assume that we wish to transmit through a binary channel a text from a source A with an alphabet of 32 letters. We can encode each one of the 32 = 2^5 letters of the alphabet as a 5-bit binary string and rewrite the entire text in the binary alphabet accepted by the channel. This strategy supports neither detection nor correction of transmission errors, because all possible 5-bit combinations are exhausted encoding the 32 letters of the alphabet; any single-bit error transforms a valid letter into another valid letter. However, if we use more than five bits to represent each letter of the alphabet, we have a chance to correct a number of single-bit errors.

Adding redundancy to a message increases the amount of data transmitted through a communication channel. The ratio of useful information to the total information packed in each codeword defines the rate of a code; see Section 4.2.

Example. We now consider a very simple error correcting code, a “repetitive” code. Instead of transmitting a single binary symbol, 0 or 1, we map each bit into a string of n binary symbols: 0 → (00...0) and 1 → (11...1). If n = 2k + 1 we use a majority voting scheme to decode a codeword of n bits: we count the number of 0s; if this number is larger than k we decode the codeword as 0, else we decode it as 1.

Consider a binary symmetric channel with random errors occurring with probability p and call $p_{err}^{(n)}$ the probability that an n-bit codeword of this repetitive code is decoded in error; then

$$p_{err}^{(n)} = 1 - (1-p)^{k+1}\left[(1-p)^k + \binom{2k+1}{1} p (1-p)^{k-1} + \cdots + \binom{2k+1}{k} p^k \right].$$

Indeed, the probability of error is the probability that (k + 1) or more bits are in error, which in turn equals one minus the probability that at most k bits are in error. As we can see, the larger k is, the smaller the probability of error; this encoding scheme is better than sending a single bit as long as $p_{err}^{(n)} < p$. For example, when n = 3 and k = 1 we encode a single bit as follows: 0 → (000) and 1 → (111). Then the probability of error is

$$p_{err}^{(3)} = 1 - (1-p)^2\left[(1-p) + \binom{3}{1} p\right] = 3p^2 - 2p^3.$$

The encoding scheme increases the reliability of the transmission when p < 1/2.

4.2 Block Codes. Decoding Policies

In this section we discuss information transmission through a classical channel using codewords of fixed length. The channel alphabet, A, is a set of q symbols that can be transmitted through the communication channel; when q = 2 we have the binary alphabet A = {0, 1}. The source encoder maps a message into blocks of k symbols from the code alphabet; then the channel encoder adds r symbols to increase redundancy and transmits n-tuples, or codewords, of length n = k + r, as shown in Figure 74.
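The repetitive code of Section 4.1 is the simplest instance of this block-code picture, with k = 1 information bit and r = 2k redundant bits. The following Python sketch (a minimal illustration, not from the text) implements majority-vote decoding for n = 3 and estimates the word error probability on a binary symmetric channel; the estimate should approach the analytic value 3p^2 - 2p^3.

```python
import random

def encode(bit, n=3):
    """Repetitive code: map one information bit to an n-bit codeword (n odd)."""
    return [bit] * n

def bsc(codeword, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(received):
    """Majority vote: decode to 1 if the 1s outnumber the 0s, else to 0."""
    return 1 if sum(received) > len(received) // 2 else 0

p, trials = 0.1, 100_000
errors = sum(decode(bsc(encode(0), p)) != 0 for _ in range(trials))
print(f"estimated word error rate: {errors / trials:.4f}")
print(f"analytic 3p^2 - 2p^3:      {3*p**2 - 2*p**3:.4f}")
```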