Hamming and Golay Codes
Recommended publications
The Witt Designs, Golay Codes and Mathieu Groups
1 The Golay codes

Let V be a vector space over F_q with fixed basis e_1, ..., e_n. A code C is a subset of V. A linear code is a subspace of V. The vector with all coordinates equal to zero (resp. one) will be denoted by 0 (resp. 1). The Hamming distance d_H(u, v) between two vectors u, v ∈ V is the number of coordinates where they differ: when u = Σ u_i e_i and v = Σ v_i e_i, then d_H(u, v) = |{i | u_i ≠ v_i}|. The weight of a vector u is its number of nonzero coordinates, i.e., d_H(u, 0). The minimum distance d(C) of a code C is min{d_H(u, v) | u, v ∈ C, u ≠ v}. The support of a vector is the set of coordinate positions where it has a nonzero coordinate.

Theorem 1.1. There exist codes, unique up to isomorphism, with the indicated values of n, q, |C| and d(C):

        n    q   |C|    d(C)   name of C
(i)     23   2   4096   7      binary Golay code
(ii)    24   2   4096   8      extended binary Golay code
(iii)   11   3   729    5      ternary Golay code
(iv)    12   3   729    6      extended ternary Golay code

Let us assume that the codes have been chosen so as to contain 0. Then each of these codes is linear. (The dimensions are 12, 12, 6, 6.) The codes (i) and (iii) are perfect, i.e., the balls with radius (d(C) − 1)/2 around the code words partition the vector space.
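To make these definitions concrete, here is a small Python sketch (illustrative only, not taken from the paper) that computes Hamming distance and weight, and checks the sphere-packing identity behind the word "perfect": for the binary Golay code, 4096 balls of radius (7 − 1)/2 = 3 exactly fill the space of 2^23 binary words.

```python
from math import comb

def hamming_distance(u, v):
    """Number of coordinates where two equal-length vectors differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

def weight(u):
    """Number of nonzero coordinates, i.e. d_H(u, 0)."""
    return sum(1 for a in u if a != 0)

# Sphere-packing check for the binary Golay code (n = 23, |C| = 4096, d = 7):
# each ball of radius t = (d - 1)//2 = 3 contains sum_{i<=3} C(23, i) words,
# and the 4096 balls around the codewords should partition all 2^23 binary words.
n, M, d = 23, 4096, 7
t = (d - 1) // 2
ball = sum(comb(n, i) for i in range(t + 1))
assert M * ball == 2 ** n      # 4096 * 2048 == 8388608 == 2**23
print(M * ball, 2 ** n)
```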
Hamming Code - Wikipedia, the free encyclopedia
In telecommunication, a Hamming code is a linear error-correcting code named after its inventor, Richard Hamming. Hamming codes can detect and correct single-bit errors, and can detect (but not correct) double-bit errors. In contrast, the simple parity code cannot detect errors where two bits are transposed, nor can it correct the errors it can find. The article covers the history, the codes that preceded Hamming's (parity, two-out-of-five, repetition), Hamming codes in general, an example using the (11,7) code, the (7,4) code with its Hamming matrices, channel coding, parity checking and error correction, and Hamming codes with an additional parity bit.

History. Hamming worked at Bell Labs in the 1940s on the Bell Model V computer, an electromechanical relay-based machine with cycle times in seconds. Input was fed in on punch cards, which would invariably have read errors. During weekdays, special code would find errors and flash lights so the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job. Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to the unreliability of the card reader. Over the next few years he worked on the problem of error correction, developing an increasingly powerful array of algorithms.
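As a hedged illustration of the single-error correction the article describes, the sketch below implements the classic (7,4) Hamming code in Python using the conventional bit ordering p1 p2 d1 p3 d2 d3 d4. It is a minimal reconstruction of the standard textbook construction, not code taken from the article.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(r):
    """Correct a single flipped bit; the syndrome is the 1-based error position."""
    r = list(r)
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]   # parity check over positions 1, 3, 5, 7
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]   # parity check over positions 2, 3, 6, 7
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]   # parity check over positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        r[pos - 1] ^= 1              # flip the offending bit back
    return r

c = hamming74_encode([1, 0, 1, 1])
c[4] ^= 1                            # a single bit flipped in transit
assert hamming74_correct(c) == hamming74_encode([1, 0, 1, 1])
```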
Geometry of the Mathieu Groups and Golay Codes
Proc. Indian Acad. Sci. (Math. Sci.), Vol. 98, Nos 2 and 3, December 1988, pp. 153-177. © Printed in India.

ERIC A LORD, Centre for Theoretical Studies and Department of Applied Mathematics, Indian Institute of Science, Bangalore 560012, India. MS received 11 January 1988.

Abstract. A brief review is given of the linear fractional subgroups of the Mathieu groups. The main part of the paper then deals with the projective interpretation of the Golay codes; these codes are shown to describe Coxeter's configuration in PG(5, 3) and Todd's configuration in PG(11, 2) when interpreted projectively. We obtain two twelve-dimensional representations of M24. One is obtained as the collineation group that permutes the twelve special points in PG(11, 2); the other arises by interpreting geometrically the automorphism group of the binary Golay code. Both representations are reducible to eleven-dimensional representations of M24.

Keywords. Geometry; Mathieu groups; Golay codes; Coxeter's configuration; hemi-icosahedron; octastigms; dodecastigms.

1. Introduction

We shall first review some of the known properties of the Mathieu groups and their linear fractional subgroups. This is mainly a brief resumé of the first two of Conway's 'Three Lectures on Exceptional Groups' [3] and will serve to establish the notation and underlying concepts of the subsequent sections. The six points of the projective line PL(5) can be labelled by the marks of GF(5) together with a sixth symbol ∞ defined to be the inverse of 0. The symbol set is Ω = {0, 1, 2, 3, 4, ∞}.
Reed-Solomon Error Correction
BBC Research & Development White Paper WHP 031, July 2002. Reed-Solomon Error Correction, C. K. P. Clarke.

Abstract. Reed-Solomon error correction has several applications in broadcasting, in particular forming part of the specification for the ETSI digital terrestrial television standard, known as DVB-T. Hardware implementations of coders and decoders for Reed-Solomon error correction are complicated and require some knowledge of the theory of Galois fields on which they are based. This note describes the underlying mathematics and the algorithms used for coding and decoding, with particular emphasis on their realisation in logic circuits. Worked examples are provided to illustrate the processes involved.

Key words: digital television, error-correcting codes, DVB-T, hardware implementation, Galois field arithmetic. © BBC 2002. All rights reserved. The opening sections cover an introduction and background theory, including the classification of Reed-Solomon codes and Galois fields.
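The Galois-field arithmetic the white paper builds on can be sketched in a few lines. The Python example below (an illustration, not taken from the paper) multiplies elements of GF(2^8) using the field polynomial x^8 + x^4 + x^3 + x^2 + 1, the one commonly quoted for the DVB-T Reed-Solomon code; treat that choice of polynomial as an assumption to be checked against the standard.

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply two elements of GF(2^8), reducing modulo the field polynomial.

    0x11D encodes x^8 + x^4 + x^3 + x^2 + 1 (assumed here; check the spec)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a            # add the current shifted copy of a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF        # multiply a by x
        if carry:
            a ^= poly & 0xFF       # subtract (XOR) the reduction polynomial
    return result

# With this polynomial, 0x02 is a primitive element: its powers run through
# all 255 nonzero field elements before returning to 1.
x, seen = 1, set()
for _ in range(255):
    x = gf256_mul(x, 0x02)
    seen.add(x)
assert len(seen) == 255 and x == 1
```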
Linear Block Codes
An (n, k) block code maps k information bits into n encoded bits: each n-digit codeword is made up of k information digits and (n − k) redundant parity check digits. The rate or efficiency of the code is

    Code efficiency r = k/n = (number of information bits) / (total number of bits in a codeword).

Note: unlike source coding, in which data is compressed, here redundancy is deliberately added to achieve error detection.

SYSTEMATIC BLOCK CODES. A systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining (n − k) elements being check bits. A code vector then takes the form

    X = (m0, m1, ..., m_{k-1}, c0, c1, ..., c_{n-k-1})   or   X = (c0, c1, ..., c_{n-k-1}, m0, m1, ..., m_{k-1}).

In a systematic code the information digits are transmitted explicitly together with the parity check bits: the k information bits must be transmitted contiguously as one block, with the parity check bits making up the codeword as another contiguous block. A systematic linear block code has a generator matrix of the form G = [P | I_k]. Systematic codewords are sometimes written so that the message bits occupy the left-hand portion of the codeword and the parity bits occupy the right-hand portion.

Parity check matrix (H). The parity check matrix enables us to decode received vectors. For each (k × n) generator matrix G there exists an (n − k) × n matrix H such that the rows of G are orthogonal to the rows of H, i.e., GH^T = 0, where H^T is the transpose of H.
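A quick way to see the GH^T = 0 relation is to build a small systematic code explicitly. The Python sketch below uses one common choice of parity submatrix P for a (7,4) code and the standard companion H = [I_{n-k} | P^T]; both the particular P and the form of H are assumptions for illustration, not taken from the excerpt.

```python
import numpy as np

# Systematic (7,4) code: G = [P | I_k], H = [I_{n-k} | P^T]; rows of G and H are orthogonal.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                       # k x (n-k) parity submatrix (one common choice)
k, r = P.shape                                  # k = 4 information bits, r = n - k = 3 check bits
G = np.hstack([P, np.eye(k, dtype=int)])        # generator matrix, k x n
H = np.hstack([np.eye(r, dtype=int), P.T])      # parity check matrix, (n-k) x n

assert not (G @ H.T % 2).any()                  # G H^T = 0 over GF(2)

m = np.array([1, 0, 1, 1])                      # message bits
x = m @ G % 2                                   # codeword: check bits followed by message bits
assert not (H @ x % 2).any()                    # every codeword has zero syndrome
```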
Introduction to Coding Theory
Why coding theory? Information passes from a source through an encoder, a noisy channel, and a decoder to a sink, and the noise can corrupt it on the way. Samuel J. Lomonaco, Jr., Dept. of Comp. Sci. & Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD 21250. Email: [email protected] WebPage: http://www.csee.umbc.edu/~lomonaco

Examples of noisy channels: a computer communication line, computer memory, a space channel, telephone communication, the teacher/student channel.

[Slides: "Error Detecting Codes Applied to Memories", "Error Control Coding" and "Error Correcting Codes Applied to Memories" illustrate storing 16 information bits together with 5 redundant check bits (about 31% redundancy), so that errors and erasures arising in storage can be detected or corrected. The idea of redundancy is illustrated by the sentence "EVN THOUG LTTRS AR MSSNG FRM TH WRDS N THS SNTNCE IT CN B NDRSTD". A further slide, "Space Channel: Mariner Space Probe – Years B.C. (≤ 1964)", quotes Eb/N0 of 6.8 dB for an error probability Pe of 10^-3 and 9.8 dB for 10^-5, followed by "Mariner Space Probe – Years A.C."]
An Introduction to Coding Theory
Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India. Feb. 6, 2017. Lecture #6A: Some simple linear block codes - I.

Outline of the lecture: the dual code, and examples of linear block codes (the repetition code, the single parity check code, and the Hamming code).

Dual code. Two n-tuples u and v are orthogonal if their inner product (u, v) is zero, i.e.,

    (u, v) = sum_{i=1}^{n} (u_i · v_i) = 0.

For a binary linear (n, k) block code C, the (n, n − k) dual code C_d is defined as the set of all codewords v that are orthogonal to all the codewords u ∈ C.
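The dual-code definition can be checked by brute force on tiny examples: the (n, 1) repetition code and the (n, n − 1) single parity check code from the outline are duals of each other. A short illustrative Python sketch (not from the lecture slides):

```python
from itertools import product

def dual_code(C, n):
    """All binary n-tuples orthogonal (mod 2) to every codeword of C."""
    return [v for v in product((0, 1), repeat=n)
            if all(sum(ui * vi for ui, vi in zip(u, v)) % 2 == 0 for u in C)]

n = 4
repetition = [(0,) * n, (1,) * n]                                  # (n, 1) repetition code
spc = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]    # (n, n-1) single parity check code

assert sorted(dual_code(repetition, n)) == sorted(spc)             # each is the dual of the other
assert sorted(dual_code(spc, n)) == sorted(repetition)
```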
Error-Correction and the Binary Golay Code
London Mathematical Society Impact150 Stories 1 (2016) 51-58. © 2016 Author(s). doi:10.1112/i150lms/t.0003

Error-correction and the binary Golay code. R. T. Curtis

Abstract. Linear algebra and, in particular, the theory of vector spaces over finite fields has provided a rich source of error-correcting codes which enable the receiver of a corrupted message to deduce what was originally sent. Although more efficient codes have been devised in recent years, the 12-dimensional binary Golay code remains one of the most mathematically fascinating such codes: not only has it been used to send messages back to earth from the Voyager space probes, but it is also involved in many of the most extraordinary and beautiful algebraic structures. This article describes how the code can be obtained in two elementary ways, and explains how it can be used to correct errors which arise during transmission.

1. Introduction

Whenever a message is sent - along a telephone wire, over the internet, or simply by shouting across a crowded room - there is always the possibility that the message will be damaged during transmission. This may be caused by electronic noise or static, by electromagnetic radiation, or simply by the hubbub of people in the room chattering to one another. An error-correcting code is a means by which the original message can be recovered by the recipient even though it has been corrupted en route. Most commonly a message is sent as a sequence of 0s and 1s. Usually when a 0 is sent a 0 is received, and when a 1 is sent a 1 is received; however occasionally, and hopefully rarely, a 0 is sent but a 1 is received, or vice versa.
Reed-Solomon Encoding and Decoding
Bachelor's Thesis, Degree Programme in Information Technology, 2011. León van de Pavert. REED-SOLOMON ENCODING AND DECODING: A Visual Representation.

Abstract. Turku University of Applied Sciences, Degree Programme in Information Technology, Spring 2011, 37 pages. Instructor: Hazem Al-Bermanei.

The capacity of a binary channel is increased by adding extra bits to this data. This improves the quality of digital data. The process of adding redundant bits is known as channel encoding. In many situations, errors are not distributed at random but occur in bursts. For example, scratches, dust or fingerprints on a compact disc (CD) introduce errors on neighbouring data bits. Cross-interleaved Reed-Solomon codes (CIRC) are particularly well-suited for detection and correction of burst errors and erasures. Interleaving redistributes the data over many blocks of code. The double encoding has the first code declaring erasures. The second code corrects them. The purpose of this thesis is to present Reed-Solomon error correction codes in relation to burst errors. In particular, this thesis visualises the mechanism of cross-interleaving and its ability to allow for detection and correction of burst errors.

KEYWORDS: Coding theory, Reed-Solomon code, burst errors, cross-interleaving, compact disc

ACKNOWLEDGEMENTS. It is a pleasure to thank those who supported me making this thesis possible. I am thankful to my supervisor, Hazem Al-Bermanei, whose intricate knowledge of coding theory inspired me, and whose lectures, encouragement, and support enabled me to develop an understanding of this subject.
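As the abstract notes, interleaving redistributes data over many blocks so that a burst of channel errors is seen by each codeword as at most a few isolated errors. The sketch below is a plain block interleaver in Python, a simplification for illustration only; the CIRC scheme used on CDs is more elaborate.

```python
def interleave(data, rows, cols):
    """Write row by row into a rows x cols block, read out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse: write column by column, read back row by row."""
    assert len(data) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = data[i]
            i += 1
    return out

msg = list(range(12))
sent = interleave(msg, rows=3, cols=4)
sent[4:7] = ['?'] * 3                 # a burst of three corrupted symbols on the channel
received = deinterleave(sent, rows=3, cols=4)
print(received)                       # the burst reappears as isolated single-symbol errors,
                                      # one per row, which a per-row code could correct
```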
The Binary Golay Code and the Leech Lattice
Recall from previous talks:

Def 1 (linear code). A code C over a field F is called linear if the code contains any linear combinations of its codewords. A k-dimensional linear code of length n with minimal Hamming distance d is said to be an [n, k, d]-code.

Why are linear codes interesting? Error-correcting codes have a wide range of applications in telecommunication. A field where transmissions are particularly important is space probes, due to a combination of a harsh environment and cost restrictions. Linear codes were used for space probes because they allowed for just-in-time encoding, as memory was error-prone and heavy.

The Hamming weight enumerator.

Def 2 (weight of a codeword). The weight w(u) of a codeword u is the number of its nonzero coordinates.

Def 3 (Hamming weight enumerator). The Hamming weight enumerator of C is the polynomial

    W_C(X, Y) = sum_{i=0}^{n} A_i X^{n-i} Y^i,

where A_i is the number of codewords of weight i.

Example (Example 2.1, [8]). For the binary Hamming code of length 7 the weight enumerator is given by

    W_H(X, Y) = X^7 + 7 X^4 Y^3 + 7 X^3 Y^4 + Y^7.

Dual and doubly even codes.

Def 4 (dual code). For a code C we define the dual code C⊥ to be the linear code of codewords orthogonal to all of C.

Def 5 (doubly even code). A binary code C is called doubly even if the weights of all its codewords are divisible by 4.
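Example 2.1 above can be reproduced by enumerating all 16 codewords of the (7,4) Hamming code and counting weights. The Python sketch below uses one standard systematic generator matrix (an assumed choice, not taken from the talk):

```python
from itertools import product
from collections import Counter
import numpy as np

# Generator matrix of the (7,4) binary Hamming code, in one standard systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Count the weight of every codeword m*G over GF(2).
weights = Counter(int((np.array(m) @ G % 2).sum()) for m in product((0, 1), repeat=4))
print(dict(weights))   # expected weight distribution: {0: 1, 3: 7, 4: 7, 7: 1}
# i.e. W_H(X, Y) = X^7 + 7 X^4 Y^3 + 7 X^3 Y^4 + Y^7, matching the example above.
```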
Tcom 370 Notes 99-8 Error Control: Block Codes
TCOM 370 NOTES 99-8. ERROR CONTROL: BLOCK CODES

THE NEED FOR ERROR CONTROL. The physical link is always subject to imperfections (noise/interference, limited bandwidth/distortion, timing errors) so that individual bits sent over the physical link cannot be received with zero error probability. A bit error rate (BER) of 10^-6, which may sound quite low and very good, actually leads on the average to an error every 1/10-th of a second for transmission at 10 Mbps. Even with better links, say BER = 10^-7, one would make on the average one error in transferring a binary file of size 1.25 Mbytes. This is not acceptable for "reliable" data transmission. We need to provide in the data link control protocols (Layer 2 of the ISO 7-layer OSI protocol architecture) a means for obtaining better reliability than can be guaranteed by the physical link itself. Note: Error control can be (and is) also incorporated at a higher layer, the transport layer.

ERROR CONTROL TECHNIQUES. Error Detection and Automatic Request for Retransmission (ARQ). This is a "feedback" mode of operation and depends on the receiver being able to detect that an error has occurred. (Error detection is easier than error correction at the receiver.) Upon detecting an error in a frame of transmitted bits, the receiver asks for a retransmission of the frame. This may happen at the data link layer or at the transport layer. The characteristics of ARQ techniques will be discussed in more detail in a subsequent set of notes, where the delays introduced by the ARQ process will be considered explicitly.
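The two numerical claims above are easy to reproduce; a back-of-the-envelope check in Python:

```python
# Sanity checks of the two error-rate claims in the notes.
rate_bps = 10e6            # 10 Mbps link
ber = 1e-6
errors_per_second = rate_bps * ber
print(1 / errors_per_second)        # 0.1 s -> one error every tenth of a second

file_bits = 1.25e6 * 8     # 1.25 Mbyte file
ber = 1e-7
print(file_bits * ber)              # 1.0 -> about one error per file transfer
```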
OPTIMIZING SIZES of CODES 1. Introduction to Coding Theory Error
OPTIMIZING SIZES OF CODES. LINDA MUMMY

Abstract. This paper shows how to determine if codes are of optimal size. We discuss coding theory terms and techniques, including Hamming codes, perfect codes, cyclic codes, dual codes, and parity-check matrices.

1. Introduction to Coding Theory

Error correcting codes and check digits were developed to counter the effects of static interference in transmissions. For example, consider the use of the most basic code. Let 0 stand for "no" and 1 stand for "yes." A mistake in one digit relaying this message can change the entire meaning. Therefore, this is a poor code. One option to increase the probability that the correct message is received is to send the message multiple times, like 000000 or 111111, hoping that enough instances of the correct message get through that the receiver is able to comprehend the original meaning. This, however, is an inefficient method of relaying data. While some might associate "coding theory" with cryptography and "secret codes," the two fields are very different. Coding theory deals with transmitting a codeword, say x, and ensuring that the receiver is able to determine the original message x even if there is some static or interference in transmission. Cryptography deals with methods to ensure that besides the sender and the receiver of the message, no one is able to encode or decode the message. We begin by discussing a real life example of the error checking codes (ISBN numbers) which appear on all books printed after 1964. We go on to discuss different types of codes and the mathematical theories behind their structures.
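As a concrete instance of the check digits mentioned in the introduction, the ISBN-10 scheme appends one digit so that the weighted sum 10·d1 + 9·d2 + ... + 2·d9 + 1·d10 is divisible by 11, which catches any single-digit error. A small illustrative Python sketch (the example ISBN is the commonly cited 0-306-40615-2):

```python
def isbn10_check_digit(first9):
    """Check digit so that 10*d1 + 9*d2 + ... + 2*d9 + 1*d10 is 0 mod 11."""
    total = sum((10 - i) * d for i, d in enumerate(first9))
    check = (-total) % 11
    return 'X' if check == 10 else str(check)   # 'X' represents the value 10

def isbn10_is_valid(isbn):
    """Validate a full ISBN-10, ignoring hyphens and spaces."""
    digits = [10 if ch in 'Xx' else int(ch) for ch in isbn if ch not in '- ']
    return len(digits) == 10 and sum((10 - i) * d for i, d in enumerate(digits)) % 11 == 0

print(isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]))    # '2' -> ISBN 0-306-40615-2
assert isbn10_is_valid('0-306-40615-2')
assert not isbn10_is_valid('0-306-40615-3')               # a single changed digit is caught
```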