Reducing the Overhead of BCH Codes: New Double Error Correction Codes

Luis-J. Saiz-Adalid *, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente

Institute of Information and Communication Technologies (ITACA), Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain; [email protected] (J.G.-M.); [email protected] (D.G.-T.); [email protected] (J.-C.B.-C.); [email protected] (P.-J.G.-V.)

* Correspondence: [email protected]

Received: 15 October 2020; Accepted: 7 November 2020; Published: 11 November 2020

Abstract: The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-known class of powerful cyclic error correction codes. BCH codes can correct multiple errors with minimal redundancy. However, primitive BCH codes only exist for some word lengths, which do not frequently match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64), which are commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operation. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead (in terms of propagation delay, silicon area usage, and power consumption). We used a methodology that searches for parity check matrices based on error patterns in order to design the new codes. We implemented and synthesized the codes, and compared their results with those obtained for the BCH codes. Our implementation of the decoder circuits achieved reductions between 2.8% and 8.7% in the propagation delay, between 1.3% and 3.0% in the silicon area, and between 15.7% and 26.9% in the power consumption. Therefore, we propose LRRO codes as an alternative for protecting information against multiple errors.

Keywords: reliability; fault tolerance; error control codes; double error correction; BCH codes

1. Introduction

Error control codes (ECCs) are frequently employed for fault tolerance in dependable systems. Coding theory has been studied for over half a century, and it is still going strong [1,2] because of its extensive application in computing, data storage, communications, etc.

The Bose-Chaudhuri-Hocquenghem (BCH) codes are one of the best-known ECCs. They were discovered in 1959 by Hocquenghem [3] and, independently, in 1960 by Bose and Ray-Chaudhuri [4]. Since then, BCH codes have been extensively employed in several applications, namely: flash memories in solid-state drives (SSDs) [5], optical storage like compact disks (CDs) or digital versatile disks (DVDs) [6], Ethernet [7], video codecs [8], digital video broadcasting (DVB) [9], and satellite communications [10].

Their algebraic features make BCH codes very useful in a wide range of situations, and their error coverage is achieved with minimum redundancy. Nevertheless, when applied to computers, these codes have two main weaknesses. First, primitive BCH codes (those generated according to their strict definition) only exist for a limited number of word lengths [11]. This problem can be easily solved by shortening a longer code. However, although they maintain their error coverage, shortened BCH codes may lose other properties, such as cyclicity or minimal redundancy, as stated later.
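As a rough illustration of the shortening idea (and not the specific code constructions evaluated later in this paper), the following Python sketch uses the textbook parameters of primitive double error correction BCH codes, (15, 7), (31, 21), (63, 51) and (127, 113), to show which shortened versions would cover the usual 8-, 16-, 32- and 64-bit data words. Shortening removes information bits but leaves the redundancy (n − k) unchanged.

```python
# Illustrative only: textbook primitive double-error-correcting (t = 2) BCH codes
# (n, k) with n = 2^m - 1, and the shortened versions that fit common data widths.
# These are standard parameters, not necessarily the exact codes used in this paper.
primitive_dec_bch = {4: (15, 7), 5: (31, 21), 6: (63, 51), 7: (127, 113)}

for data_bits in (8, 16, 32, 64):
    # Pick the smallest primitive code whose k can hold the data word.
    m, (n, k) = min(((m, nk) for m, nk in primitive_dec_bch.items()
                     if nk[1] >= data_bits), key=lambda x: x[0])
    s = k - data_bits  # number of information bits removed by shortening
    print(f"{data_bits:2d} data bits: shorten ({n},{k}) by {s} -> "
          f"({n - s},{k - s}), {n - k} check bits")
```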
The second problem with these codes is the latency of the decoding process. Although their algebraic structure is useful in simplifying the encoding and decoding procedures, most of the decoding algorithms have a sequential structure that requires several clock cycles to complete the decoding [11]. Commonly, this is not a problem in software implementations, or even in hardware when the speed requirements are not very high. However, the usefulness of BCH codes is limited when very fast encoding and decoding operations are required.

The information stored in key elements of a computer system, such as registers and memories, may be perturbed by different physical mechanisms [12–14]. As technology increases in integration scale, single error correction (SEC) or single error correction-double error detection (SEC-DED) codes may not be enough for present and future computers. Multiple error correction codes, like BCH, become an interesting alternative, but designers have to deal with the two problems described previously. First, the lengths of data words are commonly a power of two (8, 16, 32, etc.), and they do not match the block sizes of primitive BCH codes. Second, their "slow" decoding may reduce the system performance.

This paper focuses on binary codes for double error correction (DEC), applied to common data word lengths in computers (i.e., 8, 16, 32, and 64). We propose new low redundancy and reduced overhead (LRRO) DEC codes, which maintain the same error coverage and redundancy as the equivalent BCH codes, but reduce the overhead introduced by the encoder and decoder circuits in terms of propagation delay, silicon area, and power consumption. Different encoder and decoder circuits have been implemented and synthesized in order to validate the error coverage and to measure and compare those parameters. The results validate the improvements achieved by our proposal, and confirm LRRO codes as an interesting alternative to BCH codes in high-speed applications, like random access memories (RAM) and processor registers. These codes are especially well suited for RAM memories, where low redundancy, high-speed operation, and low power consumption are mandatory.

This paper is organized as follows. Section 2 introduces basic concepts about coding theory and BCH codes. The proposed LRRO codes are described in Section 3. Section 4 includes the evaluation of the proposal and a comparison with the BCH codes. Finally, Section 5 presents some conclusions and ideas for future work.

2. Coding Theory and BCH Codes

2.1. Basics on Error Control Coding

An (n, k) binary linear block ECC encodes a k-bit input word in an n-bit output word [15]. The input word, u = (u0, u1, ..., uk−1), is a k-bit vector that represents the original data. The code word, b = (b0, b1, ..., bn−1), is an n-bit vector, where the (n − k) added bits are called the parity, code, or redundant bits. b is transmitted through an unreliable channel that delivers the received word, r = (r0, r1, ..., rn−1). The error vector, e = (e0, e1, ..., en−1), models the error induced by the channel. If no error has occurred in the i-th bit, then ei = 0; otherwise, ei = 1. Therefore, r can be interpreted as r = b ⊕ e. Figure 1 synthesizes the encoding, channel crossing, and syndrome decoding processes.

Figure 1. Encoding, channel crossing, and syndrome decoding processes.
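To make the notation above concrete, here is a minimal Python sketch of the encode, channel crossing, and syndrome computation steps for a toy linear block code. The (7, 4) Hamming single error correction matrices used below are an illustrative assumption only; they are not the BCH or LRRO parity check matrices developed in this paper.

```python
# Minimal sketch of (n, k) linear block encoding and syndrome decoding over GF(2).
# The (7, 4) Hamming SEC code is a stand-in illustration; G and H below are NOT
# the BCH or LRRO matrices discussed in the paper.
import numpy as np

# Generator matrix G (k x n) and parity check matrix H ((n-k) x n), systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

u = np.array([1, 0, 1, 1], dtype=np.uint8)   # k-bit input word
b = (u @ G) % 2                              # n-bit code word: b = u * G (mod 2)

e = np.zeros(7, dtype=np.uint8)
e[2] = 1                                     # error vector: single flip in bit 2
r = b ^ e                                    # received word: r = b XOR e

s = (H @ r) % 2                              # syndrome: s = H * r^T (mod 2)
print("syndrome:", s)                        # non-zero syndrome reveals an error
# For a linear code, s depends only on e (s = H * e^T), so the decoder can match
# the syndrome against the correctable error patterns to locate and flip the bits.
```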