Implementation of Designed Encoder and Decoder for Golay Code
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Reed-Solomon Error Correction
BBC Research & Development White Paper WHP 031, July 2002. Reed-Solomon Error Correction, C. K. P. Clarke, BBC Research & Development, British Broadcasting Corporation.

Abstract: Reed-Solomon error correction has several applications in broadcasting, in particular forming part of the specification for the ETSI digital terrestrial television standard, known as DVB-T. Hardware implementations of coders and decoders for Reed-Solomon error correction are complicated and require some knowledge of the theory of Galois fields on which they are based. This note describes the underlying mathematics and the algorithms used for coding and decoding, with particular emphasis on their realisation in logic circuits. Worked examples are provided to illustrate the processes involved.

Key words: digital television, error-correcting codes, DVB-T, hardware implementation, Galois field arithmetic. © BBC 2002. All rights reserved.

Contents: 1 Introduction; 2 Background Theory; 2.1 Classification of Reed-Solomon codes; 2.2 Galois fields.
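The Galois-field arithmetic mentioned in the abstract can be sketched in a few lines; the reduction polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) used below is a common choice for GF(2^8) Reed-Solomon systems and is assumed here only for illustration.

```python
# A minimal sketch of GF(2^8) multiplication, the arithmetic underlying
# Reed-Solomon coders and decoders. The reduction polynomial 0x11D is an
# assumption for illustration, not taken from the white paper itself.
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two GF(2^8) elements represented as integers 0..255."""
    result = 0
    while b:
        if b & 1:            # add (XOR) a for the current bit of b
            result ^= a
        a <<= 1
        if a & 0x100:        # reduce modulo the field polynomial
            a ^= poly
        b >>= 1
    return result

if __name__ == "__main__":
    print(hex(gf256_mul(0x57, 0x83)))  # field product of two example elements
```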
Linear Block Codes
Linear block codes: an (n, k) block code encodes k information bits into n encoded bits. The encoding operation produces an n-digit codeword made up of the k information digits and (n − k) redundant parity-check digits. The rate or efficiency for this code is

r = k / n = (number of information bits) / (total number of bits in a codeword).

Note: unlike source coding, in which data is compressed, here redundancy is deliberately added to achieve error detection.

Systematic block codes: a systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining (n − k) elements being check bits. A code vector then takes the form

X = (m0, m1, m2, ..., m_{k−1}, c0, c1, c2, ..., c_{n−k−1})   or   X = (c0, c1, c2, ..., c_{n−k−1}, m0, m1, m2, ..., m_{k−1}).

In a systematic code the information digits are transmitted explicitly together with the parity-check bits. For the code to be systematic, the k information bits must be transmitted contiguously as a block, with the parity-check bits making up the codeword as another contiguous block. A systematic linear block code will have a generator matrix of the form G = [P | I_k]. Systematic codewords are sometimes written so that the message bits occupy the left-hand portion of the codeword and the parity bits occupy the right-hand portion.

Parity-check matrix H: the parity-check matrix enables decoding of received vectors. For each k × n generator matrix G there exists an (n − k) × n matrix H such that the rows of G are orthogonal to the rows of H, i.e. G·H^T = 0, where H^T is the transpose of H.
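As a rough illustration of the systematic form G = [P | I_k] and the orthogonality G·H^T = 0, the following sketch encodes a message with a hypothetical (7, 4) parity matrix P (any binary k × (n − k) matrix would do) and checks the parity-check relation.

```python
# A minimal sketch of systematic encoding with G = [P | I_k] and the
# parity-check property G @ H.T = 0 (mod 2). The parity matrix P below
# is one possible choice, used here only as an illustration.
import numpy as np

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])                       # k x (n-k) parity part
k, r = P.shape                                  # k = 4 info bits, r = 3 check bits
G = np.hstack([P, np.eye(k, dtype=int)])        # G = [P | I_k], systematic form
H = np.hstack([np.eye(r, dtype=int), P.T])      # H = [I_{n-k} | P^T]

m = np.array([1, 0, 1, 1])                      # message bits
x = m @ G % 2                                   # codeword: check bits then message bits
print("codeword:", x)
print("G @ H^T = 0?", not np.any(G @ H.T % 2))  # rows of G orthogonal to rows of H
```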
Introduction to Coding Theory
Introduction to Coding Theory. Samuel J. Lomonaco, Jr., Dept. of Computer Science & Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD 21250. Email: [email protected]. Webpage: http://www.csee.umbc.edu/~lomonaco

Why coding theory? Information travelling from a source to a sink passes through a noisy channel, so an encoder and a decoder are wrapped around the channel to protect the information. [Diagram: source -> encoder -> noisy channel -> decoder -> sink.] Examples of noisy channels: a computer communication line, computer memory, a space channel, telephone communication, and the teacher/student channel.

Error-detecting codes applied to memories: redundancy is added on the way in and checked on the way out (input -> encode -> storage -> detect -> output). A code with 16 information bits and 5 redundant bits (31% redundancy) can flag single errors and erasures, or be made double-error detecting. Natural language tolerates missing symbols for the same reason: EVN THOUG LTTRS AR MSSNG FRM TH WRDS N THS SNTNCE IT CN B NDRSTD.

Error-correcting codes applied to memories go one step further and correct single errors on output (input -> encode -> storage -> correct -> output). Error control coding paid off for the Mariner space probe: in the years "B.C." (before coding, up to 1964) an Eb/N0 of 6.8 dB gave an error probability Pe of 10^-3 and 9.8 dB gave 10^-5; the excerpt breaks off at the corresponding slide for the years "A.C." (after coding).
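A minimal sketch of the detect-on-output idea, using a single even-parity check bit rather than the specific memory code from the slides.

```python
# A minimal sketch of error detection through redundancy: one even-parity bit
# appended to a block of information bits detects any single bit error.
# This is a generic illustration, not the code described in the lecture.
def encode(bits):
    return bits + [sum(bits) % 2]          # append even-parity check bit

def check(word):
    return sum(word) % 2 == 0              # True if parity is consistent

info = [1, 0, 1, 1, 0, 0, 1, 0]
stored = encode(info)
stored[3] ^= 1                             # a single bit error in storage
print("error detected:", not check(stored))
```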
An Introduction to Coding Theory
An Introduction to Coding Theory. Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India. Feb. 6, 2017. Lecture #6A: Some simple linear block codes - I.

Outline of the lecture: the dual code, and examples of linear block codes: the repetition code, the single parity-check code, and the Hamming code.

Dual code: two n-tuples u and v are orthogonal if their inner product (u, v) is zero, i.e.

(u, v) = sum_{i=1}^{n} u_i · v_i = 0.

For a binary linear (n, k) block code C, the (n, n − k) dual code C_d is defined as the set of all codewords v that are orthogonal to all the codewords u ∈ C.
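The dual-code definition can be checked by brute force for a small example; the (3, 2) even-weight code used below is an assumption for illustration, and its dual turns out to be the (3, 1) repetition code.

```python
# A minimal sketch of a dual code: enumerate all length-n binary vectors and keep
# those orthogonal (mod 2) to every codeword of C.
from itertools import product

n = 3
C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]   # a small linear code (illustrative)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

dual = [v for v in product([0, 1], repeat=n)
        if all(inner(u, v) == 0 for u in C)]
print("dual code:", dual)                           # expect [(0,0,0), (1,1,1)]
```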
Reed-Solomon Encoding and Decoding
Bachelor's Thesis, Degree Programme in Information Technology, Turku University of Applied Sciences, Spring 2011, 37 pages. Instructor: Hazem Al-Bermanei. León van de Pavert: Reed-Solomon Encoding and Decoding - A Visual Representation.

Abstract: Adding extra bits to the data sent over a binary channel improves the quality of the received digital data; this process of adding redundant bits is known as channel encoding. In many situations, errors are not distributed at random but occur in bursts. For example, scratches, dust or fingerprints on a compact disc (CD) introduce errors on neighbouring data bits. Cross-interleaved Reed-Solomon codes (CIRC) are particularly well-suited for detection and correction of burst errors and erasures. Interleaving redistributes the data over many blocks of code. The double encoding has the first code declaring erasures and the second code correcting them. The purpose of this thesis is to present Reed-Solomon error correction codes in relation to burst errors. In particular, it visualises the mechanism of cross-interleaving and its ability to allow for detection and correction of burst errors.

Keywords: coding theory, Reed-Solomon code, burst errors, cross-interleaving, compact disc.

Acknowledgements: It is a pleasure to thank those who supported me in making this thesis possible. I am thankful to my supervisor, Hazem Al-Bermanei, whose intricate knowledge of coding theory inspired me, and whose lectures, encouragement, and support enabled me to develop an understanding of this subject.
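The effect of interleaving on burst errors can be sketched as follows; the depth and width values and the three-symbol burst are illustrative, not the CIRC parameters used on actual CDs.

```python
# A minimal sketch of block interleaving, the idea behind CIRC's resistance to
# burst errors: a burst in the interleaved stream touches at most one symbol
# per code block after de-interleaving.
def interleave(symbols, depth, width):
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]  # read by columns

def deinterleave(symbols, depth, width):
    cols = [symbols[i * depth:(i + 1) * depth] for i in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]  # read by rows

data = list(range(12))                  # 3 code blocks of 4 symbols each
tx = interleave(data, depth=3, width=4)
tx[4:7] = ["X", "X", "X"]               # a burst of 3 corrupted symbols on the channel
rx = deinterleave(tx, depth=3, width=4)
print([rx[i * 4:(i + 1) * 4] for i in range(3)])   # each block has at most 1 error
```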
The Binary Golay Code and the Leech Lattice
The binary Golay code and the Leech lattice. Recall from previous talks:

Def 1 (linear code): a code C over a field F is called linear if the code contains all linear combinations of its codewords. A k-dimensional linear code of length n with minimal Hamming distance d is said to be an [n, k, d]-code.

Why are linear codes interesting? Error-correcting codes have a wide range of applications in telecommunication. A field where transmissions are particularly important is space probes, due to a combination of a harsh environment and cost restrictions. Linear codes were used for space probes because they allowed for just-in-time encoding, as memory was error-prone and heavy. (Space-probe example.)

The Hamming weight enumerator. Def 2 (weight of a codeword): the weight w(u) of a codeword u is the number of its nonzero coordinates. Def 3 (Hamming weight enumerator): the Hamming weight enumerator of C is the polynomial

W_C(X, Y) = sum_{i=0}^{n} A_i X^{n−i} Y^i,

where A_i is the number of codewords of weight i.

Example (Example 2.1, [8]): for the binary Hamming code of length 7 the weight enumerator is given by

W_H(X, Y) = X^7 + 7 X^4 Y^3 + 7 X^3 Y^4 + Y^7.

Dual and doubly even codes. Def 4 (dual code): for a code C we define the dual code C^⊥ to be the linear code of codewords orthogonal to all of C. Def 5 (doubly even code): a binary code C is called doubly even if the weights of all its codewords are divisible by 4.
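The weight enumerator in Example 2.1 can be reproduced by enumerating all 16 codewords of a length-7 binary Hamming code; the generator matrix below is the one that also appears in the cyclic-codes excerpt further down, assumed here only for illustration.

```python
# A minimal sketch that reproduces the weight enumerator of the length-7 binary
# Hamming code by enumerating the row space of a generator matrix.
from itertools import product
from collections import Counter

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

weights = Counter()
for m in product([0, 1], repeat=4):                    # all 16 messages
    cw = [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    weights[sum(cw)] += 1                              # Hamming weight of the codeword

print(sorted(weights.items()))   # expect [(0, 1), (3, 7), (4, 7), (7, 1)]
```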
TCOM 370 Notes 99-8: Error Control: Block Codes
TCOM 370, Notes 99-8. Error Control: Block Codes.

The need for error control: the physical link is always subject to imperfections (noise/interference, limited bandwidth/distortion, timing errors), so that individual bits sent over the physical link cannot be received with zero error probability. A bit error rate (BER) of 10^-6, which may sound quite low and very good, actually leads on the average to an error every 1/10th of a second for transmission at 10 Mbps. Even with better links, say BER = 10^-7, one would make on the average one error in transferring a binary file of size 1.25 Mbytes. This is not acceptable for "reliable" data transmission. We need to provide in the data link control protocols (Layer 2 of the ISO 7-layer OSI protocol architecture) a means for obtaining better reliability than can be guaranteed by the physical link itself. Note: error control can be (and is) also incorporated at a higher layer, the transport layer.

Error control techniques - error detection and automatic request for retransmission (ARQ): this is a "feedback" mode of operation and depends on the receiver being able to detect that an error has occurred (error detection is easier than error correction at the receiver). Upon detecting an error in a frame of transmitted bits, the receiver asks for a retransmission of the frame. This may happen at the data link layer or at the transport layer. The characteristics of ARQ techniques will be discussed in more detail in a subsequent set of notes, where the delays introduced by the ARQ process will be considered explicitly.
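The note's numbers are easy to verify directly; a short sketch of the arithmetic:

```python
# A minimal sketch checking the BER arithmetic above: expected time between
# errors at a given bit rate, and expected errors when transferring a file.
bit_rate = 10e6            # 10 Mbps
ber = 1e-6
print("seconds per error:", 1 / (bit_rate * ber))      # 0.1 s between errors

file_bits = 1.25e6 * 8     # 1.25 Mbytes expressed in bits
ber = 1e-7
print("expected errors in file:", file_bits * ber)     # about 1 error
```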
Cyclic Codes
Cyclic Codes. Saravanan Vijayakumaran, [email protected], Department of Electrical Engineering, Indian Institute of Technology Bombay. August 26, 2014.

Definition: a cyclic shift of a vector (v0, v1, ..., v_{n−2}, v_{n−1}) is the vector (v_{n−1}, v0, v1, ..., v_{n−3}, v_{n−2}). Definition: an (n, k) linear block code C is a cyclic code if every cyclic shift of a codeword in C is also a codeword.

Example: consider the (7, 4) code C with generator matrix

G = [1 0 0 0 1 1 0]
    [0 1 0 0 0 1 1]
    [0 0 1 0 1 1 1]
    [0 0 0 1 1 0 1]

Polynomial representation of vectors: for every vector v = (v0, v1, ..., v_{n−2}, v_{n−1}) there is a polynomial v(X) = v0 + v1 X + v2 X^2 + ... + v_{n−1} X^{n−1}. Let v^(i) be the vector resulting from i cyclic shifts on v; then

v^(i)(X) = v_{n−i} + v_{n−i+1} X + ... + v_{n−1} X^{i−1} + v0 X^i + ... + v_{n−i−1} X^{n−1}.

Example: v = 1001101, v(X) = 1 + X^3 + X^4 + X^6; v^(1) = 1100110, v^(1)(X) = 1 + X + X^4 + X^5; v^(2) = 0110011, v^(2)(X) = X + X^2 + X^5 + X^6.

Consider v(X) and v^(1)(X):
v(X) = v0 + v1 X + v2 X^2 + ... + v_{n−1} X^{n−1}
v^(1)(X) = v_{n−1} + v0 X + v1 X^2 + v2 X^3 + ... + v_{n−2} X^{n−1}
         = v_{n−1} + X [v0 + v1 X + v2 X^2 + ... + v_{n−2} X^{n−2}]
         = v_{n−1}(1 + X^n) + X [v0 + ... + v_{n−2} X^{n−2} + v_{n−1} X^{n−1}]
         = v_{n−1}(1 + X^n) + X v(X).

In general, v(X) and v^(i)(X) are related by X^i v(X) = v^(i)(X) + q(X)(X^n + 1), where q(X) = v_{n−i} + v_{n−i+1} X + ... + v_{n−1} X^{i−1}; that is, v^(i)(X) is the remainder when X^i v(X) is divided by X^n + 1.

Hamming code of length 7 - codewords and code polynomials:
0000000 : 0
1000110 : 1 + X^4 + X^5
0100011 : X + X^5 + X^6
1100101 : 1 + X + X^4 + X^6
0010111 : X^2 + X^4 + X^5 + X^6
1010001 : 1 + X^2 + X^6
0110100 : X + X^2 + X^4
1110010 : 1 + X + X^2 + X^5
0001101 : X^3 + X^4 + X^6
1001011 : 1 + X^3 + X^5 + X^6
0101110 : X + X^3 + X^4 + X^5
1101000 : 1 + X + X^3
0011010 : X^2 + X^3 + X^5
1011100 : 1 + X^2 + X^3 + X^4
0111001 : X + X^2 + X^3 + X^6
1111111 : 1 + X + X^2 + X^3 + X^4 + X^5 + X^6

Properties of cyclic codes (1). Theorem: the nonzero code polynomial of minimum degree in a linear block code is unique.
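The relation v^(i)(X) = X^i v(X) mod (X^n + 1) can be illustrated with the example vector above; the sketch below represents a polynomial over GF(2) by its coefficient list.

```python
# A minimal sketch of the identity v^(i)(X) = X^i * v(X) mod (X^n + 1) over GF(2),
# using the example v = 1001101 from the excerpt above.
def cyclic_shift_poly(v_bits, i):
    """Return the i-fold cyclic shift computed as X^i * v(X) mod (X^n + 1)."""
    n = len(v_bits)
    shifted = [0] * n
    for pos, bit in enumerate(v_bits):          # coefficient of X^pos
        shifted[(pos + i) % n] ^= bit           # X^(pos+i) mod (X^n + 1) = X^((pos+i) mod n)
    return shifted

v = [1, 0, 0, 1, 1, 0, 1]                       # v(X) = 1 + X^3 + X^4 + X^6
print(cyclic_shift_poly(v, 1))                  # expect [1, 1, 0, 0, 1, 1, 0]
print(cyclic_shift_poly(v, 2))                  # expect [0, 1, 1, 0, 0, 1, 1]
```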
Idempotents in Cyclic Codes
Abstract (Idempotents in Cyclic Codes, by Benjamin Brame, April 9, 2012; Chair: Dr. Zachary Robinson; Major Department: Mathematics): Cyclic codes give us the most probable method by which we may detect and correct data transmission errors. These codes depend on the development of advanced mathematical concepts. It is shown that cyclic codes, when viewed as vector subspaces of a vector space of some dimension n over some finite field F, can be approached as polynomials in a ring. This approach is made possible by the assumption that the set of codewords is invariant under cyclic shifts, which are linear transformations. Developing these codes seems to be equivalent to factoring the polynomial x^n − 1 over F; each factor then gives us a cyclic code of some dimension k over F. Constructing factorizations of x^n − 1 is accomplished by using cyclotomic polynomials and idempotents of the code algebra. The use of these two concepts together allows us to find cyclic codes in F^n. Hence, the development of cyclic codes is a journey from codewords and codes to fields and rings and back to codes and codewords.

The thesis was presented to the Faculty of the Department of Mathematics, East Carolina University, in partial fulfillment of the requirements for the degree of Master of Arts in Mathematics. Copyright 2012, Benjamin Brame. Director of thesis: Dr. Zachary Robinson; committee members: Dr. Chris Jantzen, Dr. Heather Ries, Dr. David Pravica; approved by the Chair of the Department of Mathematics.
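A quick way to see the factorization the abstract refers to is to factor x^7 − 1 over GF(2); the sketch below uses SymPy's modulus option, and the remark that x^3 + x + 1 generates the (7, 4) Hamming code is a standard fact added here as an aside, not a claim from the thesis.

```python
# A minimal sketch of the factorization underlying cyclic codes of length 7:
# x^7 - 1 over GF(2) splits into irreducible factors, and each divisor of
# x^7 - 1 generates a cyclic code.
from sympy import symbols, factor

x = symbols('x')
print(factor(x**7 - 1, modulus=2))
# expected factors: (x + 1), (x**3 + x + 1), (x**3 + x**2 + 1)
# e.g. g(x) = x**3 + x + 1 generates the (7, 4) Hamming code
```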
An Introduction to Coding Theory
An Introduction to Coding Theory. Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India. Feb. 6, 2017. Lecture #7A: Bounds on the size of a code.

Outline of the lecture: the Hamming bound, perfect codes, the Singleton bound, maximum distance separable codes, the Plotkin bound, and the Gilbert-Varshamov bound.

Bounds on the size of a code: the basic problem is to find the largest code of a given length n and minimum distance d.
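The Hamming bound from the outline can be computed directly for small codes; the sketch below checks that a (7, 4) code with d = 3 meets the bound with equality, i.e. is perfect.

```python
# A minimal sketch of the Hamming (sphere-packing) bound:
# M <= q^n / sum_{j=0}^{t} C(n, j) (q-1)^j, with t = floor((d - 1) / 2).
from math import comb

def hamming_bound(n, d, q=2):
    t = (d - 1) // 2
    return q**n // sum(comb(n, j) * (q - 1)**j for j in range(t + 1))

n, k, d = 7, 4, 3                           # the binary Hamming code parameters
print("bound on M:", hamming_bound(n, d))   # 2^7 / (1 + 7) = 16
print("perfect code?", hamming_bound(n, d) == 2**k)
```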
Coding Theory: Linear Error-Correcting Codes
Anna Dovzhik, Math 420: Advanced Linear Algebra, Spring 2014. Coding Theory: Linear Error-Correcting Codes.

Sharing data across channels, such as satellite, television, or compact disc, often comes at the risk of error due to noise. A well-known example is the task of relaying images of planets from space; given the incredible distance that this data must travel, it is to be expected that interference will occur. Since about 1948, coding theory has been utilized to help detect and correct corrupted messages such as these, by introducing redundancy into an encoded message, which provides a means by which to detect errors. Although non-linear codes exist, the focus here will be on algebraic coding, which is efficient and often used in practice.

1 Basic Definitions. The following build up a basic vocabulary of coding theory.

Definition 1.1: If A = {a_1, a_2, ..., a_q}, then A is a code alphabet of size q and each a_i ∈ A is a code symbol. For our purposes, A will be a finite field F_q.

Definition 1.2: A q-ary word w of length n is a vector that has each of its components in the code alphabet.

Definition 1.3: A q-ary block code is a set C over an alphabet A, where each element, or codeword, is a q-ary word of length n. Note that |C| is the size of C. A code of length n and size M is called an (n, M)-code.

Example 1.1 [3, p. 6]: C = {00, 01, 10, 11} is a binary (2, 4)-code taken over the code alphabet F_2 = {0, 1}.
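The definitions above can be exercised on a tiny example; the code C below is an arbitrary illustration, not one taken from the text.

```python
# A minimal sketch of the basic vocabulary: a block code as a set of q-ary words,
# with its length n, size M, and minimum Hamming distance computed directly.
from itertools import combinations

C = ["000", "011", "101", "110"]                    # a small binary block code

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

n = len(C[0])                                       # code length
M = len(C)                                          # code size
d = min(hamming_distance(u, v) for u, v in combinations(C, 2))
print(f"({n}, {M})-code with minimum distance {d}")  # (3, 4)-code, d = 2
```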
Cyclic Codes, BCH Codes, RS Codes
Classical Channel Coding, II: Hamming Codes, Cyclic Codes, BCH Codes, RS Codes. John MacLaren Walsh, Ph.D. ECET 602, Winter Quarter, 2013.

1 References
• Error Control Coding, 2nd Ed., Shu Lin and Daniel J. Costello, Jr., Pearson Prentice Hall, 2004.
• Error Control Systems for Digital Communication and Storage, S. B. Wicker, Prentice Hall, 1995.

2 Example of Syndrome Decoder - Hamming Codes
Recall that an (n, k) Hamming code is a perfect code with an m × (2^m − 1) parity-check matrix whose columns are all 2^m − 1 of the non-zero binary words of length m. This yields an especially simple syndrome decoder look-up table for

arg min wt(e) subject to e H^T = r H^T.    (1)

In particular, one selects the error vector to be the jth column of the identity matrix, where j is the column of the parity-check matrix which is equal to the computed syndrome s = r H^T.

3 Cyclic Codes
Cyclic codes are a subclass of linear block codes over GF(q) which have the property that every cyclic shift of a codeword is itself also a codeword. The ith cyclic shift of a vector v = (v0, v1, ..., v_{n−1}) is the vector v^(i) = (v_{n−i}, v_{n−i+1}, ..., v_{n−1}, v0, v1, ..., v_{n−i−1}). If we think of the elements of the vector as coefficients of the polynomial v(x) = v0 + v1 x + ... + v_{n−1} x^{n−1}, we observe that

x^i v(x) = v0 x^i + v1 x^{i+1} + ... + v_{n−i−1} x^{n−1} + v_{n−i} x^n + ... + v_{n−1} x^{n+i−1}
         = v_{n−i} + v_{n−i+1} x + ... + v_{n−1} x^{i−1} + v0 x^i + ... + v_{n−i−1} x^{n−1}
           + v_{n−i}(x^n − 1) + v_{n−i+1} x (x^n − 1) + ... + v_{n−1} x^{i−1} (x^n − 1).

Hence,

x^i v(x) mod (x^n − 1) = v_{n−i} + v_{n−i+1} x + ... + v_{n−1} x^{i−1} + v0 x^i + v1 x^{i+1} + ... + v_{n−i−1} x^{n−1} = v^(i)(x).    (2)

That is, the ith cyclic shift can alternatively be thought of as the operation v^(i)(x) = x^i v(x) mod (x^n − 1).
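A sketch of the syndrome decoder described in Section 2, using one common (7, 4) parity-check matrix whose columns are the seven non-zero 3-bit words; the specific matrix and received word are assumptions for illustration.

```python
# A minimal sketch of syndrome decoding for a binary Hamming code: the syndrome
# s = r H^T is matched against the columns of H, and the corresponding bit of r
# is flipped.
import numpy as np

H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])            # columns are the 7 non-zero 3-bit words

def decode(r):
    s = r @ H.T % 2                              # syndrome of the received word
    if not s.any():
        return r                                 # no detectable error
    j = next(j for j in range(H.shape[1]) if np.array_equal(H[:, j], s))
    r = r.copy()
    r[j] ^= 1                                    # flip the single bit indicated by s
    return r

r = np.array([1, 0, 1, 1, 0, 1, 0])              # received word with one bit in error
print(decode(r))                                 # decoded codeword
```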