10. On Extending BCH Codes

Total Pages: 16

File Type: PDF, Size: 1020 KB

University of Plymouth PEARL (https://pearl.plymouth.ac.uk), 04 University of Plymouth Research Theses, 01 Research Theses Main Collection, 2011. Algebraic Codes For Error Correction In Digital Communication Systems. Jibril, Mubarak. http://hdl.handle.net/10026.1/1188. University of Plymouth.

All content in PEARL is protected by copyright law. Author manuscripts are made available in accordance with publisher policies. Please cite only the published version using the details provided on the item record or document. In the absence of an open licence (e.g. Creative Commons), permissions for further reuse of content should be sought from the publisher or author.

Copyright Statement: This copy of the thesis has been supplied on the condition that anyone who consults it is understood to recognise that its copyright rests with its author and that no quotation from the thesis and no information derived from it may be published without the author's prior consent. Copyright © Mubarak Jibril, December 2011.

Algebraic Codes For Error Correction In Digital Communication Systems, by Mubarak Jibril. A thesis submitted to the University of Plymouth in partial fulfilment for the degree of Doctor of Philosophy. School of Computing and Mathematics, Faculty of Technology. Under the supervision of Dr. M. Z. Ahmed, Prof. M. Tomlinson and Dr. C. Tjhai. December 2011.

ABSTRACT

C. Shannon presented theoretical conditions under which error-free communication is possible in the presence of noise. Subsequently, the notion of using error correcting codes to mitigate the effects of noise in digital transmission was introduced by R. Hamming. Algebraic codes, codes described using powerful tools from algebra, came to the fore early in the search for good error correcting codes. Many classes of algebraic codes now exist, and they include some of the best codes known. An error correcting code can be described by three of its most important properties: length, dimension and minimum distance. Given codes with the same length and dimension, the one with the largest minimum distance provides the best error correction. As a result, the research focuses on finding improved codes with better minimum distances than any known codes.

Algebraic geometry codes are obtained from curves. They are a culmination of years of research into algebraic codes and generalise most known algebraic codes. Additionally, they have exceptional distance properties as their lengths become arbitrarily large. Algebraic geometry codes are studied in great detail, with special attention given to their construction and decoding. The practical performance of these codes is evaluated and compared with previously known codes in different communication channels. Furthermore, many new codes with better minimum distances than the best known codes of the same length and dimension are presented, obtained from a generalised construction of algebraic geometry codes.

Goppa codes are also an important class of algebraic codes. A construction of binary extended Goppa codes is generalised to codes with nonbinary alphabets, and as a result many new codes are found. This construction is shown to be an efficient way to extend another well-known class of algebraic codes, BCH codes. A generic method of shortening codes whilst increasing the minimum distance is generalised. An analysis of this method reveals a close relationship with methods of extending codes. Some new codes from Goppa codes are found by exploiting this relationship. Finally, an extension method for BCH codes is presented and shown to be as good as a well-known method of extension in certain cases.

CONTENTS

List of figures
List of tables

Part I: Introduction, Definitions and Preliminaries
1 Introduction and Motivation
  1.1 Shannon's Contributions and Implications
  1.2 History and Development of Coding Theory
    1.2.1 1948–1970
    1.2.2 1970–1990
    1.2.3 1990–
  1.3 Research Scope
  1.4 Major Contributions of the Thesis
  1.5 Publication List
  1.6 Thesis Organisation
2 Linear Codes Over Finite Fields
  2.1 Finite Fields
    2.1.1 Subfields and Conjugacy Classes
  2.2 Linear Codes
    2.2.1 Properties of Linear Codes
  2.3 Generic Code Constructions
    2.3.1 Modifying The Length
    2.3.2 Modifying The Dimension
    2.3.3 Subfield Constructions
    2.3.4 Code Concatenation
  2.4 Channel Models
  2.5 Computing Minimum Distances
    2.5.1 Probabilistic Method of Finding Minimum Distance Using Erasures
  2.6 Summary

Part II: Algebraic Codes For Error Correction
3 RS, BCH and Goppa Codes
  3.1 Introduction
  3.2 Reed Solomon Codes
  3.3 BCH Codes
  3.4 Goppa Codes
  3.5 Summary
4 Algebraic Geometry Codes
  4.1 Introduction
  4.2 Bounds Relevant to Algebraic Geometry Codes
  4.3 Motivation for Studying AG Codes
  4.4 Curves and Planes
  4.5 Important Theorems and Concepts
  4.6 Construction of AG Codes
    4.6.1 Affine Hermitian and Reed Solomon Codes
  4.7 Summary
5 Decoding Algebraic Codes
  5.1 Introduction
  5.2 Bounded Distance Decoding
    5.2.1 Berlekamp Massey Sakata Algorithm
  5.3 Maximum Likelihood Erasure Decoding
  5.4 Ordered Reliability Decoding
  5.5 Performance of Algebraic Geometry Codes
    5.5.1 Encoding
    5.5.2 AWGN Channel with Hard Decision Decoding
    5.5.3 Erasure Channel
    5.5.4 AWGN Channel with Soft Decision Decoding
  5.6 Summary

Part III: Search For New Codes
6 Introduction
  6.1 Maximising the Minimum Distance
  6.2 Tables of Best Known Codes
  6.3 Methodology and Approach
7 Improved Codes From Generalised AG Codes
  7.1 Introduction
  7.2 Concept of Places of Higher Degree
  7.3 Generalised Construction I
    7.3.1 Results
  7.4 Generalised Construction II
  7.5 Summary
8 Improved Codes From Goppa Codes
  8.1 Introduction
  8.2 Goppa Codes
    8.2.1 Modified Goppa Codes
  8.3 Code Construction
  8.4 CP As Extended BCH Codes
    8.4.1 Example
  8.5 Nested Structure From Codes CP
  8.6 Results
    8.6.1 New Codes From CP
    8.6.2 New Codes From CR
  8.7 Further Extensions of the Codes CP
  8.8 Summary
9 A Special Case of Shortening Linear Codes
  9.1 Introduction
  9.2 Background
  9.3 Code Shortening and Extension
    9.3.1 Code Shortening
    9.3.2 Code Extension
  9.4 Goppa Codes
  9.5 Alternative Method
  9.6 Summary

Part IV: More On Algebraic Codes
10 On Extending BCH Codes
  10.1 The Method
  10.2 BCH Codes
  10.3 Single Extension
  10.4 Construction
  10.5 Observations
  10.6 Summary
11 Improved Codes From Goppa Codes II
  11.1 Introduction
  11.2 Goppa Codes
  11.3 Construction
    11.3.1 Preliminary: Cauchy and Vandermonde Matrices
    11.3.2 Construction of Extended Length Goppa Codes
    11.3.3 Codes with Better Dimensions
    11.3.4 An Example
    11.3.5 Adding One More Column
  11.4 Nested Structure and Construction X
  11.5 Summary
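The abstract above measures a code by its length n, dimension k and minimum distance d, and argues that for fixed n and k a larger d gives better error correction. A small brute-force illustration (not taken from the thesis; the binary [7,4] Hamming code is used purely as a familiar example) enumerates all 2^k codewords from a generator matrix and reads off d:

```python
from itertools import product

# Generator matrix of the binary [7,4] Hamming code in systematic form [I | P].
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Codeword bit j is the GF(2) inner product of the message with column j of G.
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# For a linear code, the minimum distance equals the minimum Hamming weight
# of a nonzero codeword, so brute force over all 2^k messages suffices here.
codewords = [encode(m) for m in product([0, 1], repeat=4)]
d = min(sum(c) for c in codewords if any(c))
print(f"n = {len(G[0])}, k = {len(G)}, d = {d}")  # n = 7, k = 4, d = 3
```

With d = 3 this code corrects any single error; a code of the same length and dimension but smaller minimum distance could not.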
Recommended publications
  • Decoding Beyond the Designed Distance for Certain Algebraic Codes
    INFORMATION AND CONTROL 35, 209–228 (1977). Decoding beyond the Designed Distance for Certain Algebraic Codes. DAVID MANDELBAUM, P.O. Box 645, Eatontown, New Jersey 07724. It is shown how decoding beyond the designed distance can be accomplished for a certain decoding algorithm. This method can be used for subcodes of generalized Goppa codes and generalized Reed-Solomon codes. 1. INTRODUCTION. A decoding algorithm for generalized Goppa codes is presented here. This method directly uses the Euclidean algorithm; it is shown how subcodes of these generalized codes can be decoded for errors beyond the designed distance. It is proved that all coset leaders can be decoded by this method. Decoding of errors beyond the BCH bound may involve solution of nonlinear equations. The method used is similar to the use of continued fractions for decoding within the BCH bound (Mandelbaum, 1976). The Euclidean algorithm is also used by Sugiyama et al. (1975) in a different manner for decoding Goppa codes within the BCH bound. Section 2 defines the generalized codes and shows the relationship with the Mattson-Solomon polynomial. Section 3 defines the decoding algorithm and gives, as a detailed example, the decoding of the (15, 5) BCH code for a quadruple error. Application to the Golay code is pointed out. This method can also be used for decoding the generalized Reed-Solomon codes constructed by Delsarte (1975). Section 4 presents a method of obtaining the original information and syndrome in an iterated manner, which results in reduced computation for codewords with no errors. Other methods for decoding beyond the BCH bound for BCH codes are given by Berlekamp (1968) and Hartmann (1972).
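The decoder in this paper is built around the Euclidean algorithm. For orientation, the sketch below shows the standard way the extended Euclidean algorithm solves the key equation in bounded-distance BCH/RS decoding: repeatedly divide, starting from x^(2t) and the syndrome polynomial S(x), and stop once the remainder degree drops below t, at which point the Bezout coefficient gives the error-locator polynomial. This is a generic illustration over a prime field (the modulus P = 929 and the list-based polynomial representation are choices made for this sketch), not the paper's technique for decoding beyond the designed distance.

```python
# A sketch of the extended Euclidean algorithm as a key-equation solver, over the
# prime field GF(P). Polynomials are coefficient lists, lowest degree first.
# P = 929 and the representation are arbitrary choices for this illustration.
P = 929

def deg(a):
    return max((i for i, c in enumerate(a) if c % P), default=-1)

def poly_sub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % P for x, y in zip(a, b)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_divmod(a, b):
    q, r = [0] * max(deg(a) - deg(b) + 1, 1), a[:]
    inv = pow(b[deg(b)], -1, P)            # inverse of the leading coefficient
    while deg(r) >= deg(b):
        shift = deg(r) - deg(b)
        coef = r[deg(r)] * inv % P
        q[shift] = coef
        r = poly_sub(r, poly_mul([0] * shift + [coef], b))
    return q, r

def key_equation(syndrome, t):
    """Divide x^(2t) by S(x) repeatedly; stop when deg(remainder) < t (assumes t >= 1)."""
    r_prev, r = [0] * (2 * t) + [1], syndrome[:]   # x^(2t) and S(x)
    u_prev, u = [0], [1]                           # Bezout coefficients of S(x)
    while deg(r) >= t:
        quo, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        u_prev, u = u, poly_sub(u_prev, poly_mul(quo, u))
    return u, r                                    # sigma(x), omega(x) up to a scalar
```

In a complete decoder the roots of sigma(x) locate the errors and omega(x) supplies the error magnitudes via Forney's formula.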
  • Chapter 6 Modeling and Simulations of Reed-Solomon Encoder and Decoder
    CALIFORNIA STATE UNIVERSITY, NORTHRIDGE. The Implementation of a Reed Solomon Code Encoder/Decoder. A graduate project submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering, by Qiang Zhang, May 2014. Copyright by Qiang Zhang 2014. The graduate project of Qiang Zhang is approved: Dr. Ali Amini, Dr. Sharlene Katz, Dr. Nagi El Naga (Chair). California State University, Northridge. Acknowledgements: I would like to thank my project professor and committee, Dr. Nagi El Naga, Dr. Ali Amini, and Dr. Sharlene Katz, for their time and help. I would also like to thank my parents for their love and care. I could not have finished this project without them, because they gave me huge encouragement during the last two years. Dedication: To my parents. Table of Contents: Copyright Page; Signature Page; Acknowledgement; Dedication; List of Figures; Abstract.
  • Chapter 9: BCH, Reed-Solomon, and Related Codes
    Draft of February 23, 2001. Chapter 9: BCH, Reed-Solomon, and Related Codes. 9.1 Introduction. In Chapter 7 we gave one useful generalization of the (7, 4) Hamming code of the Introduction: the family of (2^m − 1, 2^m − m − 1) single error-correcting Hamming Codes. In Chapter 8 we gave a further generalization, to a class of codes capable of correcting a single burst of errors. In this Chapter, however, we will give a far more important and extensive generalization, the multiple-error correcting BCH and Reed-Solomon Codes. To motivate the general definition, recall that the parity-check matrix of a Hamming Code of length n = 2^m − 1 is given by (see Section 7.4) H = [v_0 v_1 ··· v_(n−1)], (9.1) where (v_0, v_1, ..., v_(n−1)) is some ordering of the 2^m − 1 nonzero (column) vectors from V_m = GF(2)^m. The matrix H has dimensions m × n, which means that it takes m parity-check bits to correct one error. If we wish to correct two errors, it stands to reason that m more parity checks will be required. Thus we might guess that a matrix of the general form H_2 = [v_0 v_1 ··· v_(n−1); w_0 w_1 ··· w_(n−1)], where w_0, w_1, ..., w_(n−1) ∈ V_m, will serve as the parity-check matrix for a two-error-correcting code of length n. Since however the v_i's are distinct, we may view the correspondence v_i → w_i as a function from V_m into itself, and write H_2 as H_2 = [v_0 v_1 ··· v_(n−1); f(v_0) f(v_1) ··· f(v_(n−1))]. (9.3) But how should the function f be chosen? According to the results of Section 7.3, H_2 will define a two-error-correcting code iff the syndromes of the 1 + n + n(n−1)/2 error patterns of weights 0, 1 and 2 are all distinct.
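The chapter's question of how to choose f is classically answered by f(v) = v^3 in GF(2^m), which makes H_2 the parity-check matrix of the two-error-correcting BCH code. The sketch below is an independent illustration (not code from the chapter): it builds H_2 = [v; v^3] for m = 4, so n = 15, and checks that the syndromes of all error patterns of weight 0, 1 and 2 are distinct.

```python
from itertools import combinations

M, POLY = 4, 0b10011        # GF(2^4) built with the primitive polynomial x^4 + x + 1

def gf_mul(a, b):
    # Carry-less multiplication with reduction modulo the field polynomial.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= POLY
        b >>= 1
    return r

# Columns of H2 = [v; v^3]: stack the m-bit vectors v and v^3 into one 2m-bit word.
n = (1 << M) - 1                                   # n = 15
cols = [(v << M) | gf_mul(gf_mul(v, v), v) for v in range(1, 1 << M)]

# The syndrome of an error pattern is the XOR of the columns at the error positions.
syndromes = {0}                                               # the weight-0 pattern
syndromes |= set(cols)                                        # weight-1 patterns
syndromes |= {cols[i] ^ cols[j] for i, j in combinations(range(n), 2)}  # weight-2

print(len(syndromes) == 1 + n + n * (n - 1) // 2)  # True: all 121 syndromes distinct
```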
  • Error Detection and Correction Using the BCH Code
    Error Detection and Correction Using the BCH Code. Hank Wallace. Copyright (C) 2001 Hank Wallace. The Information Environment: The information revolution is in full swing, having matured over the last thirty or so years. It is estimated that there are hundreds of millions to several billion web pages on computers connected to the Internet, depending on whom you ask and the time of day. As of this writing, the google.com search engine listed 1,346,966,000 web pages in its database. Add to that email, FTP transactions, virtual private networks and other traffic, and the data volume swells to many terabits per day. We have all learned new standard multiplier prefixes as a result of this explosion. These days, the prefix “mega,” or million, when used in advertising (“Mega Cola”) actually makes the product seem trite. With all these terabits per second flying around the world on copper cable, optical fiber, and microwave links, the question arises: How reliable are these networks? If we lose only 0.001% of the data on a one gigabit per second network link, that amounts to ten thousand bits per second! A typical newspaper story contains 1,000 words, or about 42,000 bits of information. Imagine losing a newspaper story on the wires every four seconds, or 900 stories per hour. Even a small error rate becomes impractical as data rates increase to stratospheric levels as they have over the last decade. Given the fact that all networks corrupt the data sent through them to some extent, is there something that we can do to ensure good data gets through poor networks intact? Enter Claude Shannon: In 1948, Dr.
  • Lecture 14: BCH Codes
    CS681 Computational Number Theory. Lecture 14: BCH Codes. Instructor: Piyush P Kurur. Scribe: Ramprasad Saptharishi. Overview: We shall look at a special form of linear codes called cyclic codes. These have very nice structures underlying them and we shall study BCH codes. 1 General Codes. Recall that a linear code C is just a subspace of F_q^n. We saw last time that by picking a basis of C we can construct what is known as a parity check matrix H that is zero precisely at C. Let us understand how the procedure works. Alice has a message of length k that she wishes to transmit across the channel. The channel is unreliable and therefore both Alice and Bob first agree on some code C. Now how does Alice convert her message into a codeword in C? If Alice's message can be written as (x_1, x_2, ..., x_k) where each x_i ∈ F_q, then Alice simply sends x_1 b_1 + x_2 b_2 + ··· + x_k b_k, which is a codeword. Bob receives some y and he checks if Hy = 0. Assuming that they choose a code with good distance (the channel cannot alter one codeword into another), if Bob finds that Hy = 0, then he knows that the message he received was untampered with. But what sort of errors can the channel introduce? Let us say that the channel can change at most t positions of the codeword. If x was sent and x' was received with at most t changes between them, then the vector e = x' − x can be thought of as the error vector.
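A minimal end-to-end illustration of the workflow the lecture describes: Alice encodes by summing basis codewords x_i b_i (here the rows of a generator matrix G), and Bob accepts a received word y only if Hy = 0. The [6,3] binary code below is invented for this sketch.

```python
import random

# Toy [6,3] binary code, invented for this sketch: G = [I | P] and H = [P^T | I],
# so that G * H^T = 0 over GF(2) and every codeword satisfies H*c = 0.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
G = [[int(i == j) for j in range(3)] + P[i] for i in range(3)]
H = [[P[j][i] for j in range(3)] + [int(i == j) for j in range(3)] for i in range(3)]

def encode(x):
    # Alice sends x_1*b_1 + ... + x_k*b_k, the b_i being the rows of G (over GF(2)).
    return [sum(x[i] * G[i][j] for i in range(len(G))) % 2 for j in range(len(G[0]))]

def syndrome(y):
    # Bob computes H*y; an all-zero syndrome means y is accepted as a codeword.
    return [sum(H[i][j] * y[j] for j in range(len(y))) % 2 for i in range(len(H))]

c = encode([1, 0, 1])
print(syndrome(c))             # [0, 0, 0]: untampered codeword

e = [0] * 6
e[random.randrange(6)] = 1     # the channel flips one position
y = [(ci + ei) % 2 for ci, ei in zip(c, e)]
print(syndrome(y))             # nonzero: the error vector e = y - c is detected
```

Because every column of H is nonzero, a single flipped position always yields a nonzero syndrome, which is the detection argument the notes go on to make.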
  • A List-Decoding Approach to Low-Complexity Soft Maximum-Likelihood Decoding of Cyclic Codes
    A List-Decoding Approach to Low-Complexity Soft Maximum-Likelihood Decoding of Cyclic Codes. Hengjie Yang, Ethan Liang, Hanwen Yao, Alexander Vardy, Dariush Divsalar, and Richard D. Wesel. University of California, Los Angeles, Los Angeles, CA 90095, USA; University of California, San Diego, La Jolla, CA 92093, USA; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA. Email: {hengjie.yang, emliang}@ucla.edu, {hwyao, avardy}@ucsd.edu, [email protected], [email protected]. Abstract: This paper provides a reduced-complexity approach to maximum likelihood (ML) decoding of cyclic codes. A cyclic code with generator polynomial g_cyclic(x) may be considered a terminated convolutional code with a nominal rate of 1. The trellis termination redundancy lowers the rate from 1 to the actual rate of the cyclic code. The proposed decoder represents g_cyclic(x) as the product of two polynomials, a convolutional code (CC) polynomial g_cc(x) and a cyclic redundancy check (CRC) polynomial g_crc(x), i.e., g_cyclic(x) = g_cc(x) g_crc(x). This representation facilitates serial list Viterbi algorithm (S-LVA) decoding. Viterbi decoding is performed on the natural trellis for g_cc(x), and g_crc(x) is used as a CRC to determine when the S-LVA should conclude. [...] Trellises with lower complexity than the natural trellis can be found using techniques developed in [7]–[9] that identify minimum complexity trellis representations. Such trellises are often time varying and can have a complex structure. Furthermore, at least for the high-rate binary BCH code examples [10] in this paper, the minimum complexity trellis representations turn out to have complexity similar to the natural trellis. As a main contribution, this paper presents serial list Viterbi algorithm (S-LVA) as an ML decoder for composite
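The identity g_cyclic(x) = g_cc(x) g_crc(x) at the heart of the decoder is easy to illustrate with the (15, 7) binary BCH code, whose generator factors into two degree-4 polynomials. Which factor plays the CC role and which the CRC role is an arbitrary labelling here, made only to show the factorization; it is not the paper's design procedure.

```python
def clmul(a, b):
    # Carry-less multiplication: product of two GF(2)[x] polynomials given as bitmasks.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_str(p):
    return " + ".join("1" if i == 0 else "x" if i == 1 else f"x^{i}"
                      for i in range(p.bit_length() - 1, -1, -1) if (p >> i) & 1)

# Generator of the (15, 7) binary BCH code split into two degree-4 factors; calling
# one of them "CC" and the other "CRC" is an arbitrary choice for this sketch.
g_cc = 0b10011      # x^4 + x + 1
g_crc = 0b11111     # x^4 + x^3 + x^2 + x + 1
g_cyclic = clmul(g_cc, g_crc)

print(poly_str(g_cyclic))            # x^8 + x^7 + x^6 + x^4 + 1
print(g_cyclic == 0b111010001)       # True
```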
  • Cyclic Codes, BCH Codes, RS Codes
    Classical Channel Coding, II: Hamming Codes, Cyclic Codes, BCH Codes, RS Codes. John MacLaren Walsh, Ph.D. ECET 602, Winter Quarter, 2013. 1 References: Error Control Coding, 2nd Ed., Shu Lin and Daniel J. Costello, Jr., Pearson Prentice Hall, 2004; Error Control Systems for Digital Communication and Storage, S. B. Wicker, Prentice Hall, 1995. 2 Example of Syndrome Decoder – Hamming Codes: Recall that an (n, k) Hamming code is a perfect code with an m × (2^m − 1) parity check matrix whose columns are all 2^m − 1 of the non-zero binary words of length m. This yields an especially simple syndrome decoder look-up table for arg min wt(e) subject to e H^T = r H^T (1). In particular, one selects the error vector to be the j-th column of the identity matrix, where j is the column of the parity check matrix which is equal to the computed syndrome s = r H^T. 3 Cyclic Codes: Cyclic codes are a subclass of linear block codes over GF(q) which have the property that every cyclic shift of a codeword is itself also a codeword. The i-th cyclic shift of a vector v = (v_0, v_1, ..., v_(n−1)) is the vector v^(i) = (v_(n−i), v_(n−i+1), ..., v_(n−1), v_0, v_1, ..., v_(n−i−1)). If we think of the elements of the vector as coefficients in the polynomial v(x) = v_0 + v_1 x + ··· + v_(n−1) x^(n−1), we observe that x^i v(x) = v_0 x^i + v_1 x^(i+1) + ··· + v_(n−i−1) x^(n−1) + v_(n−i) x^n + ··· + v_(n−1) x^(n+i−1) = v_(n−i) + v_(n−i+1) x + ··· + v_(n−1) x^(i−1) + v_0 x^i + ··· + v_(n−i−1) x^(n−1) + v_(n−i)(x^n − 1) + v_(n−i+1) x (x^n − 1) + ··· + v_(n−1) x^(i−1) (x^n − 1). Hence, x^i v(x) mod (x^n − 1) = v_(n−i) + v_(n−i+1) x + ··· + v_(n−1) x^(i−1) + v_0 x^i + v_1 x^(i+1) + ··· + v_(n−i−1) x^(n−1) = v^(i)(x) (2). That is, the i-th cyclic shift can alternatively be thought of as the operation v^(i)(x) = x^i v(x) mod (x^n − 1).
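A quick numerical check of the identity derived in the excerpt, v^(i)(x) = x^i v(x) mod (x^n − 1); the field size GF(5) and the test vector are arbitrary choices for this illustration.

```python
# Numerical check of v^(i)(x) = x^i * v(x) mod (x^n - 1) over GF(q), q = 5, n = 6.
q, n = 5, 6
v = [2, 0, 1, 4, 3, 0]                  # coefficients v_0, ..., v_(n-1)

def cyclic_shift(v, i):
    # Index rotation: (v_(n-i), ..., v_(n-1), v_0, ..., v_(n-i-1)).
    return v[-i:] + v[:-i] if i else list(v)

def shift_via_polynomial(v, i):
    # Multiply by x^i, then fold every x^(n+j) back onto x^j (reduction mod x^n - 1).
    prod = [0] * i + list(v)            # coefficients of x^i * v(x)
    out = [0] * n
    for j, c in enumerate(prod):
        out[j % n] = (out[j % n] + c) % q
    return out

for i in range(n):
    assert cyclic_shift(v, i) == shift_via_polynomial(v, i)
print("x^i * v(x) mod (x^n - 1) matches the cyclic shift for every i")
```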
  • Algebraic Decoding of Reed-Solomon and BCH Codes
    Algebraic Decoding of Reed-Solomon and BCH Codes. Henry D. Pfister, ECE Department, Texas A&M University. July 2008 (rev. 0); November 8th, 2012 (rev. 1); November 15th, 2013 (rev. 2). 1 Reed-Solomon and BCH Codes. 1.1 Introduction. Consider the (n, k) cyclic code over F_(q^m) with generator polynomial g(x) = (x − α^(a+b))(x − α^(a+2b)) ··· (x − α^(a+(n−k)b)), where α ∈ F_(q^m) is an element of order n and gcd(b, n) = 1. This is a cyclic Reed-Solomon (RS) code with d_min = n − k + 1. Adding roots at all the conjugates of α^(a+b), α^(a+2b), ..., α^(a+(n−k)b) (w.r.t. the subfield K = F_q) also allows one to define a length-n BCH subcode over F_q with d_min ≥ n − k + 1. Also, any decoder that corrects all patterns of up to t = ⌊(n − k)/2⌋ errors for the original RS code can be used to correct the same set of errors for any subcode. The RS code operates by encoding a message polynomial m(x) = m_0 + m_1 x + ··· + m_(k−1) x^(k−1) of degree at most k − 1 into a codeword polynomial c(x) = c_0 + c_1 x + ··· + c_(n−1) x^(n−1) using c(x) = m(x) g(x). During transmission, some additive errors are introduced and described by the error polynomial e(x) = e_0 + e_1 x + ··· + e_(n−1) x^(n−1). Let t denote the number of errors, so that e(x) = e_σ(1) x^σ(1) + ··· + e_σ(t) x^σ(t). The received polynomial is r(x) = r_0 + r_1 x + ··· + r_(n−1) x^(n−1), where r(x) = c(x) + e(x). In these notes, RS decoding is introduced using the methods of Peterson-Gorenstein-Zierler (PGZ), the Berlekamp-Massey Algorithm (BMA) and the Euclidean method of Sugiyama.
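The encoder described in the excerpt is short enough to reproduce directly. The sketch below uses the prime field F_7 rather than an extension field F_(q^m), purely so the field arithmetic is integer arithmetic mod 7; the parameters (alpha = 3, a = 0, b = 1, and the message) are choices made for this illustration, not values from the notes.

```python
# RS encoding c(x) = m(x) g(x) over the prime field F_7; alpha = 3 has order n = 6.
q, alpha, n, k = 7, 3, 6, 3
a, b = 0, 1                                 # exponents a + j*b, gcd(b, n) = 1

def poly_mul(u, v):
    out = [0] * (len(u) + len(v) - 1)
    for i, x in enumerate(u):
        for j, y in enumerate(v):
            out[i + j] = (out[i + j] + x * y) % q
    return out

def poly_eval(p, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(p)) % q

# g(x) = (x - alpha^(a+b)) (x - alpha^(a+2b)) ... (x - alpha^(a+(n-k)b))
g = [1]
for j in range(1, n - k + 1):
    root = pow(alpha, a + j * b, q)
    g = poly_mul(g, [(-root) % q, 1])

m = [5, 0, 2]                               # message polynomial m(x) = 5 + 2x^2
c = poly_mul(m, g)                          # codeword polynomial, degree <= n - 1

# Every codeword vanishes at the n - k prescribed powers of alpha.
print([poly_eval(c, pow(alpha, a + j * b, q)) for j in range(1, n - k + 1)])  # [0, 0, 0]
```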
  • Method and Device for Error Correction Coding Based on High-Rate Generalized Concatenated Codes (US 2019/0268023 A1)
    US 2019/0268023 A1, United States Patent Application Publication, Freudenberger et al., Pub. Date: Aug. 29, 2019. (54) Method and Device for Error Correction Coding Based on High-Rate Generalized Concatenated Codes. (71) Applicant: HYPERSTONE GMBH, Konstanz (DE). (72) Inventors: Juergen Freudenberger, Radolfzell (DE); Jens Spinner, Konstanz (DE); Christoph Baumhof, Radolfzell (DE). (21) Appl. No.: 16/395,046. (22) Filed: Apr. 25, 2019. Related U.S. Application Data: (63) Continuation of application No. 15/593,973, filed on May 12, 2017, now Pat. No. 10,320,421. (30) Foreign Application Priority Data: May 13, 2016 (DE) 102016005985.0; Apr. 6, 2017 (DE) 102017107431.7. Publication Classification: G11C 29/52 (2006.01); H03M 13/15 (2006.01). (52) U.S. Cl. CPC: H03M 13/253 (2013.01); H03M 13/293 (2013.01); G11C 29/52 (2013.01); H03M 13/152 (2013.01); H03M 13/2906 (2013.01); H03M 13/1515 (2013.01); H03M 13/1505 (2013.01). (57) Abstract: Error correction coding is particularly suitable for applications in non-volatile flash memories. We describe a method for error correction encoding of data to be stored in a memory device, a corresponding method for decoding a codeword matrix resulting from the encoding method, a coding device, and a computer program for performing the methods on the coding device, using a new construction for high-rate generalized concatenated (GC) codes. The codes, which are well suited for error correction in flash memories for high reliability data storage, are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer codes, preferably Reed-Solomon (RS) codes.
  • Appendix A. Code Generators for BCH Codes
    Appendix A. Code Generators for BCH Codes. As described in Chapter 2, code generator polynomials can be constructed in a straightforward manner. Here we list some of the more useful generators for primitive and nonprimitive codes. Generators for the primitive codes are given in octal notation in Table A-1 (i.e., 23 denotes x^4 + x + 1). The basis for constructing this table is the table of irreducible polynomials given in Peterson and Weldon.(1) Sets of code generators are given for lengths 7, 15, 31, 63, 127, and 255. Each block length is of the form 2^m − 1. The first code at each block length is a Hamming code, so it has a primitive polynomial, p(x), of degree m as its generator. This polynomial is also used for defining arithmetic operations over GF(2^m). The next code generator of the same length is found by multiplying p(x) by m_3(x), the minimal polynomial of α^3. Continuing in the same fashion, at each step the previous generator is multiplied by a new minimal polynomial. Then the resulting product is checked to find the maximum value of t for which α, α^2, ..., α^(2t) are roots of the generator polynomial. This value t is the error-correcting capability of the code as predicted by the BCH bound. The parameter k, the number of information symbols per code word, is given by subtracting the degree of the generator polynomial from n. Finally, on the next iteration, this generator polynomial is multiplied by m_h(x) [with h = 2(t + 1) + 1] to produce a new generator polynomial.
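The appendix's octal shorthand is easy to mechanise. The sketch below expands a few commonly tabulated length-15 generators and recovers k = n − deg g(x); the octal values are standard table entries quoted from memory, not copied from the appendix itself.

```python
# Expanding the appendix's octal shorthand: 23 (octal) = 10011 (binary) = x^4 + x + 1.
# The n = 15 generator values below are the commonly tabulated ones, quoted from memory.

def octal_to_exponents(octal_str):
    # Exponents of the nonzero terms of g(x), highest first.
    bits = int(octal_str, 8)
    return [i for i in range(bits.bit_length() - 1, -1, -1) if (bits >> i) & 1]

def describe(n, octal_str):
    exps = octal_to_exponents(octal_str)
    k = n - exps[0]                            # k = n - deg g(x)
    terms = " + ".join("1" if e == 0 else "x" if e == 1 else f"x^{e}" for e in exps)
    print(f"g = {octal_str} (octal): g(x) = {terms}  ->  ({n}, {k}) code")

describe(15, "23")      # (15, 11) Hamming code
describe(15, "721")     # (15, 7) BCH code, t = 2
describe(15, "2467")    # (15, 5) BCH code, t = 3
```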
  • Reducing the Overhead of BCH Codes: New Double Error Correction Codes
    electronics, Article. Reducing the Overhead of BCH Codes: New Double Error Correction Codes. Luis-J. Saiz-Adalid *, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente. Institute of Information and Communication Technologies (ITACA), Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain; [email protected] (J.G.-M.); [email protected] (D.G.-T.); [email protected] (J.-C.B.-C.); [email protected] (P.-J.G.-V.). * Correspondence: [email protected]. Received: 15 October 2020; Accepted: 7 November 2020; Published: 11 November 2020. Abstract: The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-known class of powerful error correction cyclic codes. BCH codes can correct multiple errors with minimal redundancy. Primitive BCH codes only exist for some word lengths, which do not frequently match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64), which are commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operations. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes, with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead (in terms of propagation delay, silicon area usage and power consumption). We used a methodology to search parity check matrices, based on error patterns, in order to design the new codes. We implemented and synthesized them, and compared their results with those obtained for the BCH codes.
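The paper's codes come from searching parity-check matrices against error patterns. The toy sketch below illustrates only the acceptance test implied by that description (distinct syndromes for every error pattern of weight at most two), not the authors' LRRO methodology: it greedily collects columns while the double error correction condition keeps holding.

```python
from itertools import combinations

# Toy illustration of the "design by error patterns" idea: a set of r-bit columns
# forms a double-error-correcting (DEC) parity-check matrix iff the syndromes of all
# error patterns of weight <= 2 are pairwise distinct. This greedy scan is only a
# stand-in for the acceptance test, not the paper's LRRO search methodology.

def dec_ok(cols):
    n = len(cols)
    syndromes = {0} | set(cols) | {a ^ b for a, b in combinations(cols, 2)}
    return len(syndromes) == 1 + n + n * (n - 1) // 2

def greedy_dec_columns(r):
    cols = []
    for candidate in range(1, 1 << r):      # scan candidate columns in order
        if dec_ok(cols + [candidate]):
            cols.append(candidate)
    return cols

cols = greedy_dec_columns(6)
print(f"r = 6 check bits give a DEC code of length n = {len(cols)}")
```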