Appendix A. Code Generators for BCH Codes

As described in Chapter 2, code generator polynomials can be constructed in a straightforward manner. Here we list some of the more useful generators for primitive and nonprimitive codes. Generators for the primitive codes are given in octal notation in Table A-1 (e.g., 23 denotes x^4 + x + 1). The basis for constructing this table is the table of irreducible polynomials given in Peterson and Weldon. Sets of code generators are given for lengths 7, 15, 31, 63, 127, and 255. Each block length is of the form n = 2^m - 1. The first code at each block length is a Hamming code, so it has a primitive polynomial, p(x), of degree m as its generator. This polynomial is also used for defining arithmetic operations over GF(2^m). The next code generator of the same length is found by multiplying p(x) by m3(x), the minimal polynomial of α^3. Continuing in the same fashion, at each step the previous generator is multiplied by a new minimal polynomial, and the resulting product is checked to find the maximum value of t for which α, α^2, ..., α^(2t) are roots of the generator polynomial. This value t is the error-correcting capability of the code as predicted by the BCH bound. The parameter k, the number of information symbols per code word, is given by subtracting the degree of the generator polynomial from n. Finally, on the next iteration, this generator polynomial is multiplied by mh(x) [with h = 2(t + 1) - 1] to produce a new generator polynomial. This process produces a sequence of BCH code generators of length n = 2^m - 1 with increasingly larger degree and error-correction capability. (In some cases the actual minimum distance will exceed this bound.) The minimal polynomials necessary for constructing this table may be found in Peterson and Weldon. In addition, those tables may be used for constructing codes of length greater than 255: the tables in Peterson and Weldon are complete through degree 16, which would allow construction of all primitive BCH codes through length 65535 using the same technique used here.

Table A-2 provides a list of selected code generators of nonprimitive BCH codes. These particular codes were selected because one cannot shorten a primitive BCH code to the same n and k and achieve the same or larger minimum distance. The arithmetic operations for these codes are defined over GF(2^m), where m is the value indicated in the table. In this case the actual error-correcting capability is indicated, since it is larger than the BCH bound in a number of cases.
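
To make the octal convention and the multiplication step concrete, the short Python sketch below (not part of the original appendix; the function names are ours) treats bit i of an integer as the coefficient of x^i, multiplies polynomials over GF(2), and reproduces the (15, 7) entry of Table A-1 from p(x) = 23 and m3(x) = 37.

```python
def poly_from_octal(s: str) -> int:
    """Read an octal string such as '23' as a GF(2) polynomial: 0o23 = 10011 = x^4 + x + 1."""
    return int(s.replace(" ", ""), 8)

def poly_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) multiplication of two bit-packed polynomials."""
    product = 0
    while b:
        if b & 1:
            product ^= a
        a <<= 1
        b >>= 1
    return product

p = poly_from_octal("23")          # p(x), generator of the (15, 11) Hamming code
m3 = poly_from_octal("37")         # minimal polynomial of alpha^3
g2 = poly_mul(p, m3)               # generator of the (15, 7), t = 2 BCH code
print(oct(g2))                     # 0o721, matching Table A-1
print(15 - (g2.bit_length() - 1))  # k = n - deg g(x) = 7
```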

Table A-1. Generator Polynomials of Primitive BCH Codes

n     k     t     Generator polynomial, gt(x) (octal)
7     4     1     g1(x) = p(x) = 13
      1     3     g1(x)·(15) = 177
15    11    1     g1(x) = p(x) = 23
      7     2     g1(x)·(37) = 721
      5     3     g2(x)·(7) = 2467
      1     7     g3(x)·(31) = 77777
31    26    1     g1(x) = p(x) = 45
      21    2     g1(x)·(75) = 3551
      16    3     g2(x)·(67) = 10765 7
      11    5     g3(x)·(57) = 54233 25
      6     7     g5(x)·(73) = 31336 5047
      1     15    g7(x)·(51) = 17777 77777 7
63    57    1     g1(x) = p(x) = 103
      51    2     g1(x)·(127) = 12471
      45    3     g2(x)·(147) = 17013 17
      39    4     g3(x)·(111) = 16662 3567
      36    5     g4(x)·(15) = 10335 00423
      30    6     g5(x)·(155) = 15746 41655 47
      24    7     g6(x)·(133) = 17323 26040 4441
      18    10    g7(x)·(165) = 13630 26512 35172 5
      16    11    g10(x)·(7) = 63311 41367 23545 3
      10    13    g11(x)·(163) = 47262 23055 27250 155
      7     15    g13(x)·(13) = 52310 45543 50327 1737
      1     31    g15(x)·(141) = 77777 77777 77777 77777 7
127   120   1     g1(x) = p(x) = 211
      113   2     g1(x)·(217) = 41567
      106   3     g2(x)·(135) = 11554 743
      99    4     g3(x)·(367) = 34470 23271
      92    5     g4(x)·(277) = 62473 00223 27
      85    6     g5(x)·(325) = 13070 44763 22273
      78    7     g6(x)·(203) = 26230 00216 61301 15
      71    9     g7(x)·(357) = 62550 10713 25312 7753
      64    10    g9(x)·(313) = 12065 34025 57077 31000 45
      57    11    g10(x)·(345) = 33526 52525 05705 05351 7721
      50    13    g11(x)·(301) = 54446 51252 33140 12421 50142 1
      43    14    g13(x)·(323) = 17721 77221 36512 27521 22057 4343
      36    15    g14(x)·(253) = 31460 74666 52207 50447 64574 72173 5
      29    21    g15(x)·(361) = 40311 44613 67670 60366 75301 41176 155
      22    23    g21(x)·(247) = 12337 60704 04722 52243 54456 26637 64704 3
      15    27    g23(x)·(271) = 22057 04244 56045 54770 52301 37622 17604 353
      8     31    g27(x)·(375) = 70472 64052 75103 06514 76224 27156 77331 30217
      1     63    g31(x)·(221) = 17777 77777 77777 77777 77777 77777 77777 77777 777
255   247   1     g1(x) = p(x) = 435
      239   2     g1(x)·(567) = 26754 3
      231   3     g2(x)·(763) = 15672 0665
      223   4     g3(x)·(551) = 75626 64137 5
      215   5     g4(x)·(675) = 23157 56472 6421
      207   6     g5(x)·(747) = 16176 56056 76362 27
      199   7     g6(x)·(453) = 76330 31270 42072 2341
      191   8     g7(x)·(727) = 26634 70176 11533 37145 67
      187   9     g8(x)·(23) = 52755 31354 00013 22236 351
      179   10    g9(x)·(545) = 22624 71071 73404 32416 30045 5
      171   11    g10(x)·(613) = 15416 21421 23423 56077 06163 0637
      163   12    g11(x)·(543) = 75004 15510 07560 25515 74724 51460 1
      155   13    g12(x)·(433) = 37575 13005 40766 50157 22506 46467 7633
      147   14    g13(x)·(477) = 16421 30173 53716 55253 04165 30544 10117 11
      139   15    g14(x)·(615) = 46140 17320 60175 56157 07227 30247 45356 7445
      131   18    g15(x)·(455) = 21571 33314 71510 15126 12502 77442 14202 41654 71
      123   19    g18(x)·(537) = 12061 40522 42066 00371 72103 26516 14122 62725 06267
      115   21    g19(x)·(771) = 60526 66557 21002 47263 63640 46002 76352 55631 34727 37
      107   22    g21(x)·(703) = 22205 77232 20662 56312 41730 02353 47420 17657 47501 54441
      99    23    g22(x)·(471) = 10656 66725 34731 74222 74141 62015 74332 25241 10764 32303 431
      91    25    g23(x)·(651) = 67502 65030 32744 41727 23631 72473 25110 75550 76272 07243 44561
      87    26    g25(x)·(37) = 11013 67634 14743 23643 52316 34307 17204 62067 22545 27331 17213 17
      79    27    g26(x)·(607) = 66700 03563 76575 00020 27034 42073 66174 62101 53267 11766 54134 2355
      71    29    g27(x)·(661) = 24024 71052 06443 21515 55417 21123 31163 20544 42503 62557 64322 17060 35
      63    30    g29(x)·(515) = 10754 47505 51635 44325 31521 73577 07003 66611 17264 55267 61365 67025 43301
      55    31    g30(x)·(717) = 73154 25203 50110 01330 15275 30603 20543 25414 32675 50105 57044 42603 54736 17
      47    42    g31(x)·(735) = 25335 42017 06264 65630 33041 37740 62331 75123 33414 54460 45005 06602 45525 43173
      45    43    g42(x)·(7) = 15202 05605 52341 61131 10134 63764 23701 56367 00244 70762 37303 32021 57025 05154 1
      37    45    g43(x)·(643) = 51363 30255 06700 74141 77447 24543 75304 20735 70617 43234 32347 64435 47374 03044 003
      29    47    g45(x)·(765) = 30257 15536 67307 14655 27064 01236 13771 15342 24232 42011 74114 06025 46574 10403 56503 7
      21    55    g47(x)·(637) = 12562 15257 06033 26560 01773 15360 76121 03227 34140 56530 74542 52115 31216 14466 51347 3725
      13    59    g55(x)·(573) = 46417 32005 05256 45444 26573 71425 00660 04330 67744 54765 61403 17467 72135 70261 34460 50054 7
      9     63    g59(x)·(31) = 15726 02521 74724 63201 03104 32553 55134 61416 23672 12044 07454 51127 66115 54770 55616 77516 057
      1     127   g63(x)·(561) = 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777 77777
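
Continuing the sketch above, the following hedged example (not from the appendix; the helper names are ours) illustrates the BCH-bound check described in the text: arithmetic is carried out in GF(2^4) defined by p(x) = 23, and t is taken as the largest value for which α, α^2, ..., α^(2t) are all roots of the generator.

```python
M, P = 4, 0o23                     # GF(2^M) defined by the primitive polynomial p(x) = 23 (octal)

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^M) field elements (bit-packed polynomial representation)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):           # reduce modulo p(x)
            a ^= P
    return r

def poly_eval(g: int, x: int) -> int:
    """Evaluate the binary polynomial g (bit i = coefficient of x^i) at the field element x."""
    acc = 0
    for i in reversed(range(g.bit_length())):   # Horner's rule
        acc = gf_mul(acc, x)
        if (g >> i) & 1:
            acc ^= 1
    return acc

def bch_t(g: int) -> int:
    """Largest t such that alpha, alpha^2, ..., alpha^(2t) are roots of g (alpha is the element 2)."""
    power, i = 1, 0
    while True:
        power = gf_mul(power, 2)   # alpha^(i+1)
        if poly_eval(g, power) != 0:
            return i // 2
        i += 1

print(bch_t(0o721))    # 2, the (15, 7) row of Table A-1
print(bch_t(0o2467))   # 3, the (15, 5) row
```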

Table A-2. Selected Generator Polynomials of Nonprimitive BCH Codes

n     k     t     m     Generator polynomial (octal)
17    9     2     8     727
21    12    2     6     1663
23    12    3     11    5343
33    22    2     10    5145
33    12    4     10    3777
41    21    4     20    66471 33
47    24    5     23    43073 357
65    53    2     12    10761
65    40    4     12    35430 3067
73    46    4     9     17177 73537

Appendix B. Code Generators for Convolutional Codes

This appendix presents a series of tables of good code generators for convolutional codes. Separate tables provide generators specifically optimized for use with Viterbi decoders, table look-up decoders, threshold decoders, and sequential decoders.

B.1. Viterbi Decoding

Tables B-1 and B-2 list the optimum short-constraint-length convolutional codes for R = 1/3 and R = 1/2.(91) The code generators are given in octal notation. For each code the free distance, d, is given, along with the total information weight of all paths with weight d through d + 4. The constraint length is designated by ν to facilitate comparisons among the various code rates (remember that the number of states is 2^ν, so that all codes with the same ν have roughly the same decoder complexity even if the rates differ). For the higher-rate codes, lists of the best punctured codes will be provided rather than the optimum codes.
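
As a small illustration of the octal generator convention used in these tables, the sketch below encodes a few bits with the familiar four-state, rate-1/2 code whose generators are 5 and 7 (used here only as an example; it is not necessarily an entry of Table B-1, and bit-ordering conventions for octal generators vary in the literature).

```python
def conv_encode(bits, gens_octal=("5", "7")):
    """Rate-1/len(gens) feedforward convolutional encoder built from octal generator taps."""
    gens = [int(g, 8) for g in gens_octal]
    K = max(g.bit_length() for g in gens)    # constraint length; nu = K - 1, so 2**(K - 1) states
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)    # shift the new bit into the register
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # modulo-2 sum of the tapped positions
    return out

print(conv_encode([1, 0, 1, 1, 0, 0]))   # 12 output bits for 6 input bits (rate 1/2)
```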