A List-Decoding Approach to Low-Complexity Soft Maximum-Likelihood Decoding of Cyclic Codes


Hengjie Yang∗, Ethan Liang∗, Hanwen Yao†, Alexander Vardy†, Dariush Divsalar‡, and Richard D. Wesel∗
∗University of California, Los Angeles, Los Angeles, CA 90095, USA
†University of California, San Diego, La Jolla, CA 92093, USA
‡Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
Email: {hengjie.yang, emliang}@ucla.edu, {hwyao, avardy}@ucsd.edu, [email protected], [email protected]

Abstract—This paper provides a reduced-complexity approach to maximum likelihood (ML) decoding of cyclic codes. A cyclic code with generator polynomial g_cyclic(x) may be considered a terminated convolutional code with a nominal rate of 1. The trellis termination redundancy lowers the rate from 1 to the actual rate of the cyclic code. The proposed decoder represents g_cyclic(x) as the product of two polynomials, a convolutional code (CC) polynomial g_cc(x) and a cyclic redundancy check (CRC) polynomial g_crc(x), i.e., g_cyclic(x) = g_cc(x)g_crc(x). This representation facilitates serial list Viterbi algorithm (S-LVA) decoding. Viterbi decoding is performed on the natural trellis for g_cc(x), and g_crc(x) is used as a CRC to determine when the S-LVA should conclude. At typical target frame error rates, the expected list size of S-LVA is small, and the average decoding complexity is dominated by the trellis complexity of g_cc(x) rather than g_cyclic(x). Some high-rate binary Bose-Chaudhuri-Hocquenghem (BCH) examples show that the proposed use of S-LVA via factorization significantly lowers complexity as compared to using the minimum-complexity trellis representation of g_cyclic(x) for soft ML decoding.

I. INTRODUCTION

The Berlekamp-Massey [1], [2] and Euclidean [3] algorithms provide bounded-distance hard decoding for Bose-Chaudhuri-Hocquenghem (BCH) codes [4]. For soft decoding, Guruswami and Sudan developed a ground-breaking (but complex) list decoding approach for Reed-Solomon codes in [5] that identifies all codewords within a defined Hamming distance of the received word.

Separately, maximum-likelihood (ML) decoding can be performed by the Viterbi algorithm [6] on any trellis representation of the cyclic code. The natural trellis implied by the generator polynomial g_cyclic(x) has 2^r states, where r is the degree of g_cyclic(x). Decoding on the natural trellis essentially treats the BCH generator polynomial as the generator polynomial of a terminated convolutional code. However, for most cyclic codes of interest, the number of states in the natural trellis induces a complexity that is not practical.

Trellises with lower complexity than the natural trellis can be found using techniques developed in [7]–[9] that identify minimum-complexity trellis representations. Such trellises are often time varying and can have a complex structure. Furthermore, at least for the high-rate binary BCH code examples [10] in this paper, the minimum-complexity trellis representations turn out to have complexity similar to the natural trellis.

As a main contribution, this paper presents the serial list Viterbi algorithm (S-LVA) as an ML decoder for composite cyclic codes that, at least for our high-rate BCH examples, has complexity similar to that of the Viterbi decoder used in communication devices today. The proposed decoding algorithm also supports complete hard decoding as opposed to the bounded-distance decoding of the Berlekamp-Massey and Euclidean algorithms.

In [11] the concatenation of a convolutional code with an outer cyclic redundancy check (CRC) was recognized as being a new convolutional code with a larger constraint length. This paper makes the dual observation: when a cyclic code has a composite generator polynomial g_cyclic(x), it can be factored into a "convolutional encoder" polynomial g_cc(x) and an "outer CRC" polynomial g_crc(x). Factoring g_cyclic(x) into g_cc(x) and g_crc(x) facilitates S-LVA decoding.
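To make the factorization concrete, the sketch below multiplies two generator polynomials over GF(2) and checks that their product equals a composite cyclic-code generator. The polynomials used, the factors x^4 + x + 1 and x^4 + x^3 + x^2 + x + 1 of the classic (15, 7) double-error-correcting BCH generator x^8 + x^7 + x^6 + x^4 + 1, are standard textbook values chosen only for illustration; which factor would serve as g_cc(x) and which as g_crc(x) in the paper's decoder is not specified in this excerpt.

```python
# Polynomials over GF(2) are represented as Python ints: bit i is the coefficient of x^i.

def gf2_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

# Illustrative factorization (the classic (15, 7) BCH code, not a code from the paper):
#   g_cc(x)  = x^4 + x + 1             -> 0b10011
#   g_crc(x) = x^4 + x^3 + x^2 + x + 1 -> 0b11111
#   g_cyclic(x) = g_cc(x) g_crc(x) = x^8 + x^7 + x^6 + x^4 + 1
g_cc, g_crc = 0b10011, 0b11111
g_cyclic = 0b111010001

assert gf2_mul(g_cc, g_crc) == g_cyclic
print(f"g_cyclic(x) in octal: ({g_cyclic:o})_8")   # prints (721)_8
```

In the decoding framework described above, the Viterbi search would run on the natural trellis of g_cc(x), while g_crc(x) acts as the CRC that tells the S-LVA when to stop expanding its list.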
The paper also studies the complexity of the S-LVA decoder and shows that for high-rate binary BCH examples S-LVA has lower complexity than soft decoding based on the minimal-complexity trellis of the original cyclic code.

This paper is organized as follows: Section II describes the decoding approach and presents examples and simulations for several high-rate binary BCH codes. Section III explores the complexity of the new decoding approach, showing that decoding complexity for S-LVA depends on the expected list size, which decreases as signal-to-noise ratio improves. At typical frame error rate (FER) operating points, the expected list size is small and the average decoding complexity is dominated by the complexity of the natural trellis for g_cc(x). Section IV investigates the minimal trellis representations for our high-rate BCH examples and shows that these minimal trellises will not have significantly lower complexity than the natural trellises. In our examples, S-LVA achieves a complexity much lower than Viterbi decoding on the minimal trellis. Section V concludes the paper.

This research is supported in part by National Science Foundation (NSF) grant CCF-1618272 and a grant from the Physical Optics Corporation (POC). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF or POC. The work of H. Yao and A. Vardy was supported in part by the NSF under grants CCF-1719139 and CCF-1764104. Research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

TABLE I
EXAMPLE BINARY BCH CODES THAT CAN BE VIEWED AS BINARY CONVOLUTIONAL CODES

m    r    Conjugate   δ    g_cyclic(x)   K      N      R
6    6    α           3    (103)_8       57     63     0.90
7    7    α           3    (211)_8       120    127    0.94
8    8    α           3    (435)_8       247    255    0.97
9    9    α           3    (1021)_8      502    511    0.98
10   10   α           3    (2011)_8      1013   1023   0.99

Fig. 1 (FER-versus-SNR plot). FER performance of BCH convolutional codes of degree m = 6, 7, ..., 10 decoded using the Viterbi algorithm. Both soft decoding and complete hard decoding are shown.

II. SERIAL LIST VITERBI ON FACTORED CYCLIC CODES

This section describes how a cyclic code can be represented as a rate-1 terminated convolutional code and how factoring the cyclic code generator polynomial into the product of two polynomials facilitates S-LVA decoding. This section also presents examples and simulations for several high-rate binary BCH codes.

A. Cyclic Codes as Rate-1 Convolutional Codes

It is well known that a natural trellis with 2^r states exists for each degree-r generator polynomial g_cyclic(x). See, for example, [12]. The decoder design proposed in this paper can be best understood by using this result to view the cyclic code as a terminated rate-1 convolutional code. This is presented below as a theorem.

Theorem 1. Every cyclic code is a terminated convolutional code.

Proof: Consider the cyclic code with degree-r generator polynomial g_cyclic(x) and codewords c(x) = w(x)g_cyclic(x), where w(x) is the message polynomial. The codeword produced by multiplying the polynomials w(x) and g_cyclic(x) is identical to the codeword produced by passing w(x) followed by r zeros through the rate-1 feedforward convolutional encoder circuit defined by g_cyclic(x).
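The equivalence stated in the proof can be checked numerically. The following sketch (an illustration only; the shift-register layout and bit-ordering conventions are assumptions of this example, not circuitry taken from the paper) encodes a K = 57-bit message with the generator (103)_8 = x^6 + x + 1 from Table I, both by direct GF(2) polynomial multiplication and by clocking the message followed by r = 6 termination zeros through a rate-1 feedforward shift register whose taps are the coefficients of g(x).

```python
import random

def poly_mul_gf2(a, b):
    """Coefficient list (lowest degree first) of a(x) b(x) over GF(2)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] ^= ai & bj
    return prod

def encode_rate1_feedforward(msg_bits, g_coeffs):
    """Rate-1 feedforward convolutional encoder: one output bit per input bit.
    g_coeffs[i] is the coefficient of x^i in g(x); the message is followed by
    deg(g) zeros so the shift register (and hence the trellis) is terminated."""
    r = len(g_coeffs) - 1
    state = [0] * r                                   # state[i] holds the input from i+1 steps ago
    out = []
    for bit in list(msg_bits) + [0] * r:
        window = [bit] + state                        # window[i] = input from i steps ago
        out.append(sum(gi & wi for gi, wi in zip(g_coeffs, window)) % 2)  # c_t = sum_i g_i w_{t-i}
        state = [bit] + state[:-1]
    return out

g = [1, 1, 0, 0, 0, 0, 1]                             # (103)_8 = x^6 + x + 1, lowest degree first
w = [random.randint(0, 1) for _ in range(57)]         # K = 57 message bits -> N = 63 code bits
assert encode_rate1_feedforward(w, g) == poly_mul_gf2(w, g)
```

Because the encoder is feedforward with a single input bit and a single output bit per step, its natural trellis has 2^r states, matching the discussion above.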
We use the term BCH convolutional code to refer to a BCH code represented as a terminated rate-1 convolutional code, i.e., a convolutional code where the number of input symbols k per encoder operation and the number of output symbols n per encoder operation are both 1. The trellis termination redundancy lowers the rate from 1 to the actual rate of the cyclic code. This is distinct from the notion in [13] of a BCH convolutional code as a standard rate-k/n (k < n) convolutional code designed using ideas from BCH theory.

We focus on binary BCH codes, but the approach generalizes to non-binary BCH codes such as Reed-Solomon codes. Although we refer to the convolutional code as having rate one, the rate is, of course, actually below one because of the trellis termination.

Table I presents example high-rate binary BCH codes where each g_cyclic(x) is a minimal polynomial of degree m = 6, 7, ..., 10. Note that the polynomials are described in octal form with degree decreasing from left to right. For example, x^6 + x + 1 is represented by (103)_8. The rate R is greater than or equal to 0.9 for all these examples. For each code, m is the base-2 logarithm of the field size over which the binary BCH code was designed. The degree r of g_cyclic(x) for these examples is also equal to m. In each case, g_cyclic(x) is the minimal polynomial for the conjugate element provided in the column identified as "Conjugate", where α is a primitive element that is a root of the primitive polynomial used to define the Galois field. The designed distance of the BCH code is denoted δ.

Fig. 1 provides simulation results from applying the Viterbi algorithm to the BCH convolutional codes of Table I. The system model is as follows: Let w(x) be the K-bit information sequence, and c(x) = w(x)g_cyclic(x) is the N-bit BCH convolutional codeword. After BPSK modulation, the constellation points are transmitted over the additive white Gaussian noise (AWGN) channel. The signal-to-noise ratio (SNR) is defined as the amplitude of a single BPSK signal divided by the variance of a one-dimensional, zero-mean Gaussian.

If hard Viterbi decoding is employed, the noisy constellation points are first hard decoded to the nearest BPSK symbol and the corresponding bits are used by the hard Viterbi decoder.
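A rough illustration of this system model is sketched below. It is a sketch only: the BPSK bit-to-symbol mapping, the use of a linear (non-dB) SNR value, and the NumPy-based simulation are assumptions of this example rather than details taken from the paper. The snippet builds g(x) from its octal description, encodes a random message by polynomial multiplication, transmits the BPSK symbols over an AWGN channel whose noise variance follows the SNR definition above, and forms the hard decisions that a hard-input Viterbi decoder would consume; a soft decoder would instead use the real-valued channel outputs directly.

```python
import numpy as np

def octal_to_coeffs(octal_str):
    """Convert an octal generator description (degree decreasing left to right)
    into a coefficient list with the lowest-degree term first."""
    bits = bin(int(octal_str, 8))[2:]            # e.g. '103' -> '1000011'
    return [int(b) for b in reversed(bits)]      # x^6 + x + 1 -> [1, 1, 0, 0, 0, 0, 1]

def poly_mul_gf2(a, b):
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] ^= ai & bj
    return prod

rng = np.random.default_rng(0)
g = octal_to_coeffs("103")                       # (103)_8 = x^6 + x + 1, the (63, 57) code from Table I
K = 57
w = rng.integers(0, 2, K).tolist()               # K-bit information sequence
c = np.array(poly_mul_gf2(w, g))                 # N = K + r = 63 codeword bits

snr = 4.0                                        # SNR = amplitude / variance, per the paper's definition (linear value assumed)
amplitude = 1.0
sigma2 = amplitude / snr                         # one-dimensional, zero-mean Gaussian noise variance
x = amplitude * (1 - 2 * c)                      # BPSK mapping assumed: bit 0 -> +1, bit 1 -> -1
y = x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)

hard_bits = (y < 0).astype(int)                  # hard decision to the nearest BPSK symbol
print("channel bit errors before decoding:", int(np.sum(hard_bits != c)))
```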