CALIFORNIA STATE UNIVERSITY, NORTHRIDGE

AN ERROR DETECTING AND CORRECTING SYSTEM

FOR MAGNETIC STORAGE DISKS

A project submitted in partial satisfaction of the requirements for the degree of Master of Science in

Engineering

by

David Allan Kieselbach

June, 1981

The project of David Allan Kieselbach is approved:

Nagi M. El Naga, Committee Chairman

California State University, Northridge

ACKNOWLEDGMENTS

I wish to thank Professor Nagi El Naga who helped in numerous ways by suggesting, reviewing and criticizing the entire project.

I also wish to thank my employer, Hughes Aircraft Company, who sponsored my Master of Science studies under the Hughes Fellowship Program. In addition, the company provided the services of the Technical Typing Center to aid in the preparation of the project manuscript. Specifically, I offer my sincere gratitude to Sharon Scott and Aiko Ogata, who are responsible for the excellent quality of the typing and the artwork of the figures.

For assistance in proofreading, and for remaining a friend throughout the long ordeal, my special thanks go to Christine Wacker.

Finally, to my parents, Dulcie and Henry, who have always given me their affection and encouragement, I owe my utmost appreciation and debt of gratitude.

TABLE OF CONTENTS

CHAPTER I. INTRODUCTION
    1.1 Introduction
    1.2 Objectives
    1.3 Project Outline

CHAPTER II. EDCC CODES
    2.1 Generator Matrix
        2.1.1 Systematic Generator Matrix
    2.2 Parity Check Matrix
    2.3 Cyclic Codes
    2.4 Analytic Methods of Code Construction
        2.4.1 Hamming Codes
        2.4.2 Fire Codes
        2.4.3 Burton Codes
        2.4.4 BCH Codes

CHAPTER III. ENCODING
    3.1 The Mathematics of Encoding a Cyclic Code
    3.2 Encoding Via the Generator Matrix
    3.3 Encoding Implementation for Cyclic Codes
        3.3.1 Encoding With an (n-k) Stage Shift Register
            3.3.1.1 Multiplier Circuit Encoder
            3.3.1.2 Divider Circuit Encoder (Unmodified)
            3.3.1.3 Divider Circuit Encoder (Modified)
        3.3.2 Encoding With a k Stage Shift Register
    3.4 Shortened Cyclic Codes

CHAPTER IV. DECODING
    4.1 Error Detection
    4.2 Decoding Procedures
        4.2.1 Table Look-up Decoding
        4.2.2 Meggitt Decoding Technique
        4.2.3 Error Trapping
        4.2.4 Trial and Error Decoding
        4.2.5 Majority Logic Decoding
        4.2.6 Algebraic Procedures
    4.3 Examples of Practical Burst Error Correcting Decoders
    4.4 Decoding Shortened Codes
    4.5 Encoder/Decoders

CHAPTER V. INTERLEAVING
    5.1 Symbol Interleaving
        5.1.1 Interleaving Via Code Expansion
        5.1.2 Interleaving Via Manipulation of Code Vectors
            5.1.2.1 I-Encoder Interleaving
            5.1.2.2 Array Manipulation Interleaving
    5.2 Block Interleaving

CHAPTER VI. EDCC SYSTEM DESIGN
    6.1 Introduction
    6.2 Code and Interleaving Degree Selection
        6.2.1 Determination of G(X) for the EDC
        6.2.2 Determination of the Interleaving Degree (EDCC)
        6.2.3 Determination of G(X) for the EDCC
    6.3 Encoding Process
        6.3.1 EDC Encoding
        6.3.2 EDCC Encoding
        6.3.3 Combined EDC and EDCC Encoding
    6.4 Decoding and Error Correction
        6.4.1 EDCC Decoding
            6.4.1.1 Premultiplication Polynomial
        6.4.2 Error Correction Algorithm
        6.4.3 EDC Decoding
        6.4.4 Combined EDCC and EDC Decoding
    6.5 Interleaving
    6.6 System Control

CHAPTER VII. EDCC SYSTEM SIMULATION
    7.1 Description of the Simulation Program
        7.1.1 Program Structure
            7.1.1.1 Program Subroutines
    7.2 Execution of the Simulation Program
    7.3 Results of the Simulation Program

CHAPTER VIII. PERFORMANCE MEASUREMENTS
    8.1 Undetectability
    8.2 Mistakability
    8.3 Unreliability
    8.4 Evaluation of the EDC System Parameters
    8.5 Conclusions

REFERENCES

APPENDIX A - Flowcharts of Simulation Program and Subroutines
APPENDIX B - Source Code for EDCC Simulation Program
APPENDIX C - Results from EDCC Simulation Program

LIST OF FIGURES

CHAPTER I
    1.1  Block diagram of a general data communication or storage system

CHAPTER II
    2.1  Block diagram of a one-way data communication system employing an error correcting code
    2.2  Block diagram of a two-way data communication system using error detection and retransmission
    2.3  Classes of codes
    2.4  Encoding circuit for a convolutional code
    2.5  Matrix constructed of code vectors
    2.6  Systematic code format

CHAPTER III
    3.1  A circuit for dividing by X^6 + X^5 + X^4 + X^3 + 1 (internal XOR)
    3.2  A circuit for dividing by X^6 + X^5 + X^4 + X^3 + 1 (external XOR)
    3.3  Unmodified encoder, G(X) = X^3 + X + 1
    3.4  General circuit for a modified (n-k) stage encoder
    3.5  Modified encoder for a (15,9) code
    3.6  General circuit for a k stage encoder
    3.7  k stage encoding circuit for a (7,4) cyclic code
    3.8  An (n,k) code shortened by S bits

CHAPTER IV
    4.1  The impact of the error polynomial E(X) = X^12 + X^13 + X^14 on a (15,9) code
    4.2  Process of shifting the syndrome until the degree of the error polynomial is less than that of G(X)
    4.3  Look-up table decoder for an (n,k) code
    4.4  A Meggitt decoder for an (n,k) cyclic code
    4.5  Error trapping decoder for a burst EDCC
    4.6  Format of an end-around burst of length t
    4.7(a)  Unmodified error trapping decoder for a (15,9) code
    4.7(b)  Corrector/buffer circuit for the unmodified decoder of figure 4.7(a)
    4.8  Modified error trapping decoder for a (15,9) code
    4.9  Modified error trapping decoder for a (15,9) code shortened by 2 bits
    4.10 Modified error trapping encoder/decoder for a general (n,k) code

CHAPTER V
    5.1  Symbol interleaving the (15,9) code to degree 5
    5.2  Interleaving via I encoders
    5.3  Interleaving via array manipulation
    5.4  A symbol interleaved data stream, I = 2

CHAPTER VI
    6.1  Block diagram of a disk system
    6.2  Error detecting and correcting system
    6.3  EDC format, (n,k) = (380,360)
    6.4  EDCC combined data field for code length
    6.5  Relationship between code rate and burst correction capability
    6.6  Relationship between code rate and interleaving degree
    6.7  Format for system base EDCC
    6.8  Irreducible polynomials for determination of G(X)
    6.9  Feedback pattern for G(X) = 1 + X^2 + X^5 + X^9 + X^11 + X^14
    6.10 EDC generation circuit, G(X) = 1 + X^3 + X^20
    6.11 EDCC generation circuit
    6.12 Combined encoding circuit
    6.13 Flow chart of encoding process
    6.14 EDCC decoding and error correction circuit
    6.15 Format of the shortened Fire Code
    6.16 Fortran program calculation of T(X) for the EDCC
    6.17 Subdivisions of the syndrome register
    6.18 EDC decoding circuit
    6.19 Combined decoding and error correction circuit
    6.20 Flowchart of decoding process
    6.21 The effect of interleaving a (33,19) EDCC to degree I = 20
    6.22 Interleaving circuit block diagram
    6.23 Interleaving circuit
    6.24 Alternative vertical address generator
    6.25 EDC encoder/decoder circuit
    6.26 EDCC encoder/decoder circuit
    6.27 Block diagram of the EDCC system
    6.28 Timing diagram for the WRITE cycle
    6.29 Timing diagram for the READ cycle

CHAPTER VII
    7.1  EDCC computer simulation block diagram

CHAPTER VIII
    8.1  Graphic representation of an error detection system
    8.2  Graphic representation of an EDCC system
    8.3  Code domain of the EDCC system designed with dual codes

LIST OF TABLES

CHAPTER II
    2.1  Burst error correcting Fire Codes with b < 10
    2.2  A comparison of code construction techniques

CHAPTER III
    3.1  A (7,4) cyclic code generated by G(X) = X^3 + X + 1
    3.2  A (7,4) systematic cyclic code generated by G(X) = X^3 + X + 1
    3.3  Internal operation of a modified encoder
    3.4  Comparison of the encoding techniques

CHAPTER IV
    4.1  Syndrome generation of an error free received word
    4.2  Syndrome generation of a corrupted received word
    4.3  Unmodified decoding versus modified decoding

CHAPTER VI
    6.1  Possible code lengths and interleave degrees with 380 data bits
    6.2  Code selection process for a sector of 380 bits and b > 100

ABSTRACT

AN ERROR DETECTING AND CORRECTING SYSTEM

FOR MAGNETIC STORAGE DISKS

by

David Allan Kieselbach

Master of Science in Engineering

This project presents an error detecting and correcting system for a magnetic disk storage device. The system is designed using concepts developed for the encoding, decoding and interleaving of cyclic codes. A Fire code is used to protect the system from burst errors.

The design is implemented with an encoder/decoder circuit in which the error trapping technique is used for correction. By interleaving the code words, burst errors of length less than or equal to one hundred are corrected. The system has been simulated on a PDP 11/34 minicomputer to verify the correctness of the design. The results of the simulation show that all anticipated error patterns can be corrected.

Analysis of the system's architecture reveals that the mistakability is approximately 2^-201 P(E), where P(E) is the probability that a sector contains an error.

CHAPTER I

INTRODUCTION

1.1 Introduction

The original impetus behind the study of algebraic codes was the search for efficient techniques for fulfilling the promise of information theory: specifically, the reliable recovery of digital data disturbed by noise. Studies concentrated on the search for codes with sufficient structure to allow for encoding and decoding with equipment of moderate complexity. The objective was to produce a system whose improvement in performance would justify the extra cost and added complexity. Although the objective was achieved by producing applications in space communications and high density storage of data in computers, this effort leveled off because of the restricted market.

Several developments have contributed to the recent resurgence of the field of error correcting codes (ECC). With the advent of large scale integration, the cost of solid state electronic devices has decreased almost as dramatically as their size. This has stimulated the development of automatic data processors, digital computers, long range communications such as with satellites and peripheral devices.

This, in turn, has caused a dramatic increase in the volume of data communicated between such machines. The intolerance of computing systems to error, and in some cases the critical nature of the data, demand the use of either error free facilities or some type of error detecting or correcting code in the terminal devices. The latter approach is more economical and is commonly used.


There have also been significant accomplishments within the field of error correcting codes itself. Several classes of long, powerful codes have been devised. In addition, decoding procedures implemented with a moderate amount of hardware have been devised for several of these classes of codes. These and other developments have made the use of ECC quite practical in contemporary data communication and storage systems. In the near future the prospect is that the trends mentioned above will continue and that ECC will become much more widely employed. The area that will spur this growth is the microcomputer whose storage requirements are changing as rapidly as microcomputers themselves. Some factors that place demands on microcomputer storage capabilities are high level languages, increasingly complex application programs, large data bases and the trend toward compactness and portability.

A block diagram of a digital communication system is shown in Figure 1.1. The same model can be used to describe an information storage system, if the storage medium is considered to be a channel.

Typical examples of transmission channels are telephone lines, high frequency radio links, space communication links, and magnetic tape units including writing and reading heads for storage systems. The channel is usually subject to various types of noise disturbances, natural or man made. For example, on a telephone line the disturbance may come from thermal noise, lightning, impulse noise or crosstalk from other lines. On a magnetic tape, the disturbances may be caused by tape defects, nonhuman contaminants such as dust and smoke particles, or human contaminants such as fingerprints. The type of

[Figure: SOURCE OR DESTINATION --> ENCODER --> CHANNEL (STORAGE MEDIUM) --> DECODER]

FIGURE 1.1 Block diagram of a general data communication or storage system

errors caused by these disturbances depends on the code rate and/or bit densities. Several types of errors are listed below.

1. Single bit error (random)

2. Burst error

3. Multiple burst errors

The source information is usually composed of binary or decimal digits or alphabetic information in some form. The encoder transforms these messages into signals acceptable to the channel; typically electrical signals with some restrictions on power, bandwidth and duration. These signals enter the channel and are perturbed by noise.

The output enters the decoder which makes a decision concerning which message was sent and delivers this message to the destination. The engineering task is to design the encoder and decoder, and sometimes improve the channel. In this case the encoder performs what is called modulation and the decoder performs the inverse function called demodulation.

Analysis shows that many communication channels of the form illustrated in Figure 1.1 have a definite capacity for information transmission. Research into this area was conducted by Shannon in the late 1940's. A result of his studies was that if the rate of the source is less than the channel capacity, then it is possible to choose a set of signals such that the probability of erroneous decoding is arbitrarily small. Shannon's theory showed engineers ways to defeat their arch enemy, noise, by the encoding of signals. This theory does not indicate precisely how these signals are to be constructed, nor does it guarantee that such a system can be built in the real world. The use of error correcting codes is an attempt to solve these last two problems.

1.2 Objectives

The objective of this project is to investigate the theory of error correcting codes and the processes of encoding and decoding, and to apply this knowledge by designing a burst error detecting and correcting system for a magnetic disk storage system. This system is simulated and tested by using a computer program.

In reviewing the theory of error correcting codes, different types of codes are studied with special interest placed on cyclic codes. By studying these codes, it will be possible to determine which of them can be practically implemented. It is also desired to research the different encoding and decoding techniques to find those best suited for particular applications. Using this knowledge to design the error correcting circuit for the magnetic disk storage system will guarantee that it meets the requirements of sector size, projected burst error rate and length, and detection and correction capability.

Proof of the system's performance will be accomplished by verifying that the results attained by the computer simulation agree with the design goals.

1.3 Project Outline

A brief description of the subject matter, covered in the eight chapters, follows. Chapter 2 develops the principles of cyclic codes and describes how to construct several practical classes of codes. The process of encoding these codes is explained in Chapter 3, where example circuits are illustrated to reinforce the concepts. In Chapter 4, the algebraic model of the decoder is presented and various decoding methods are reviewed. In order to give a practical view of decoding, several decoder circuits are designed and compared. Chapter 5 contains a technique known as interleaving which is used to expand the error correcting capability of a cyclic code. In Chapter 6, as an application of the material presented in the other chapters, an error detecting and correcting system used for magnetic disks is presented. After the system has been designed, a computer simulation is made via a program described in Chapter 7. The theoretical capabilities of the system are tested and the performance parameters calculated in the concluding chapter (Chapter 8).

CHAPTER II

EDCC CODES

In an ideal communication (storage) system, the transmitted (stored) code should match the received (retrieved) code. This, however, is not the case in practical systems, where there are potential errors, and it is the purpose of the codes to detect and possibly correct such errors. The name given to these codes is error detecting and correcting codes, abbreviated EDCC. These codes cannot correct every conceivable pattern of errors but must be designed to correct only those patterns that are most likely to occur. A system employing a correcting circuit in addition to a detecting circuit is known as a one-way channel communication system (Figure 2.1).

The alternative to a one-way channel system is a two-way channel communication system (Figure 2.2). Unlike a one-way channel, no correcting circuitry is required, and a pure error detection code can be used. When an error is detected at one terminal, a request for retransmission can be given and errors can effectively be avoided.

The integrity of the feedback retransmission signal is also subject to the same noisy channel as the encoded information; however, because of the very low data rate of the feedback signal, its channel can be assumed to be noiseless. Some reasons for using a two-way channel with error detection and retransmission are:

1. Error detection alone is a much simpler task than error correction and requires simpler decoding equipment.

[Figure: INFO MESSAGE --> ENCODER (MODULATOR) --> CHANNEL OR STORAGE MEDIA --> DECODER (DEMODULATOR) --> RECEIVED DATA]

FIGURE 2.1. Block diagram of a one-way data communication system employing an Error Correcting Code

[Figure: INPUT TERMINAL --> ENCODER --> MODULATOR --> CHANNEL --> DEMODULATOR --> DECODER --> OUTPUT TERMINAL, with a RETRANSMISSION REQUEST feedback path]

FIGURE 2.2. Block diagram of a two-way data communication system using error detection and retransmission


2. Adaptability: transmission of redundant parity checks is increased when errors occur, which can lead to better system performance.

Some disadvantages which limit its efficiency are:

1. Short error detecting codes cannot detect errors efficiently.

2. Long codes require retransmission too frequently.

There are some systems in which two-way channels cannot be used to reduce error probabilities, but this can effectively be performed by one-way channels. One such system is a magnetic disk storage system, where it is too late to request retransmission of data after it has been stored on the disk for any length of time. This is because errors are detected upon reading, which occurs at unknown times later. It is the codes for this type of system that will be studied in depth. There are many classes of error detecting and correcting codes, which are shown in Figure 2.3.

There are two fundamentally different types of codes: block and convolutional. The encoder for a block code segments the continuous sequence of information bits into k-bit segments or message blocks. It then operates on these blocks independently according to the particular code to be used. The message block is transformed into a longer block of n bits which is called a code word. It is transmitted, exposed to a channel which might corrupt it by noise, and decoded independently of all other code words. Since each message block consists of k binary bits, there are 2^k possible distinct message blocks. Therefore, corresponding to the 2^k possible messages, there are 2^k possible code words at the output of the encoder. This set of 2^k code words is called a block code. A code word is sometimes called a code vector because it is an n-tuple from the vector space V_n of all n-tuples. The quantity n is referred to as the code length.

The other type of code, called a convolutional code, operates on the information sequence without breaking it up into independent segments. Instead, the encoder processes the information continuously and associates each long information sequence with a code sequence containing more bits. The encoder breaks its input sequence into k-symbol blocks, where k is a small number. Then, on the basis of this k-tuple and the preceding information symbols, the encoder outputs an n-symbol section of the code sequence. See Figure 2.4.
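The continuous nature of convolutional encoding can be sketched in a few lines. The toy rate-1/2 encoder below, in the same spirit as Figure 2.4, emits for every message bit the pair (current bit, current bit XOR previous bit); the function name and framing are illustrative assumptions, not a reproduction of the thesis circuit.

```python
def conv_encode(bits):
    """Toy rate-1/2 convolutional encoder: for each message bit m_i,
    emit the pair (m_i, m_i XOR m_(i-1)), with m_(-1) taken to be 0."""
    prev = 0
    out = []
    for m in bits:
        out += [m, m ^ prev]   # current bit, then its parity with the previous bit
        prev = m
    return out

print(conv_encode([1, 0, 1, 1]))   # 4 message bits -> 8 channel bits
```

Unlike a block code, each output section depends on the preceding input symbols, so the code sequence cannot be decoded block by block.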

In this project only the first class of codes will be studied: specifically, the binary block codes that deal with error detection, and random and burst error correction. For a block code as defined, unless it has a certain special structure, the encoding circuitry would be prohibitively complex for large k. The reason for this is that it has to store the 2^k code vectors in a dictionary. Therefore a restriction is placed on the algebraic structure of a code to allow only codes which can be mechanized in a practical manner.

Thus there are three main aspects of the coding problem: 1) to find codes that have the required error correcting capability, 2) to find a practical method of encoding, and 3) to find a practical method of making the decision at the receiver; in other words, a method of error correction.

The typical solution to the problem has been to find codes that could be proved mathematically to satisfy the required error correcting capability.

[Figure: tree of code classes -- binary/nonbinary, single/multiple transmitters, block/convolutional, random error/burst error, detection/correction]

FIGURE 2.3. Classes of codes

[Figure: shift register encoder with exclusive OR gates and output multiplexer; code sequence 0, M_0, (M_0 + 0), M_1, (M_1 + M_0), M_2, (M_2 + M_1), ...]

FIGURE 2.4. (a) Encoding circuit for a convolutional code (k = 2). (b) Output code sequence from the encoder.

In order to make this possible, an algebraic structure is necessary which can also be exploited to meet the other two requirements, the capability to encode and decode.

Linear block codes offer such an algebraic structure and are a subclass of all codes. A set of 2^k n-tuples is called a linear code if and only if it is a subspace of the vector space V_n of all n-tuples. With this structure, the encoding complexity will be considerably reduced. A linear code of length n and dimension k is called an (n,k) code. An important parameter associated with a code is its code rate or efficiency, R = k/n.

2.1 Generator Matrix

For a subspace S of V_n, it is possible to find a set of k linearly independent n-tuples, say v_1, v_2, ..., v_k, such that each n-tuple of S is a linear combination of v_1, v_2, ..., v_k in the following form:

    U = m_1 v_1 + m_2 v_2 + ... + m_k v_k        (2.1)

where

    m_i = 0, 1    for i = 1, 2, ..., k

This subspace is a k-dimensional subspace of V_n and it consists of 2^k n-tuples. From these facts, a linear code of 2^k code vectors can be described by a set of k linearly independent code vectors. These code vectors can be arranged as the rows of a k x n matrix (Figure 2.5).

Let M = (m_1, m_2, ..., m_k) be a message block. Then the corresponding code word can be given as follows:

    U = MG

                                | v_1 |
        = (m_1, m_2, ..., m_k)  | v_2 |        (2.2)
                                | ... |
                                | v_k |

        = m_1 v_1 + m_2 v_2 + ... + m_k v_k

The code word corresponding to the message (m_1, m_2, ..., m_k) is a linear combination of the rows of G. Thus, the rows of matrix G generate a linear code, and G is called the generator matrix of the code.

Since a linear code is completely specified by its generator matrix G, the storage size required for encoding is reduced. The encoder has to store only the k rows of G instead of storing the 2^k code vectors of the code. Besides the storage memory for G, the encoder must have a logic element to perform the linear combinations of the rows of G.
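Over GF(2) the product U = MG reduces to XORing together the rows of G selected by the 1-bits of the message, which is exactly the "logic element" described above. A minimal sketch (the (6,3) matrix here is a hypothetical example, not a code from the text):

```python
def encode(message, G):
    """U = M*G over GF(2): XOR together the rows of G where the message bit is 1."""
    u = [0] * len(G[0])
    for m_i, row in zip(message, G):
        if m_i:
            u = [a ^ b for a, b in zip(u, row)]
    return u

# Hypothetical generator matrix of a small (6,3) linear code.
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

print(encode([1, 1, 0], G))   # the XOR of the first two rows of G
```

Note that only the k rows of G are stored, never the full dictionary of 2^k code vectors.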

The set of all the possible linear combinations of the rows of G forms a k-dimensional subspace of V_n which is defined as the row space of G.

2.1.1 Systematic Generator Matrix

It is possible to encode each message block into a code word in such a way that the first k bits of the code word are exactly the same as the message block, and the last n-k bits are redundant bits which are functions of the information bits (Figure 2.6). A code of this form is called systematic and is useful for two reasons:

1. Easy to encode and decode

2. The transparency of the message is advantageous

The redundancy should provide the capability to combat errors introduced during transmission over a noisy channel and thus protect the message. Now the coding problem is to form these redundant bits. A systematic (n,k) linear code can be described by the k x n generator matrix in equation (2.3).

        | 1 0 0 ... 0   p_11 p_12 ... p_1,n-k |
        | 0 1 0 ... 0   p_21 p_22 ... p_2,n-k |
    G = | 0 0 1 ... 0   p_31 p_32 ... p_3,n-k |        (2.3)
        | ...                                 |
        | 0 0 0 ... 1   p_k1 p_k2 ... p_k,n-k |

where

    p_ij = 0, 1.

An abbreviated form of the generator matrix can be presented with:

    I_k being the k x k identity matrix and
    P being the k x (n-k) matrix of p_ij.

The generator matrix of a systematic code is then of the form:

    G = [I_k  P]        (2.4)

        | v_1 |
        | v_2 |
    G = | ... |
        | v_k |

where

    v_i = (v_i1, v_i2, ..., v_in)    for i = 1, 2, ..., k

FIGURE 2.5. Matrix constructed of code vectors

    | MESSAGE |  -->  | MESSAGE | REDUNDANT DIGITS |
    |<-- k -->|       |<-- k -->|<----- n-k ------>|

FIGURE 2.6. Systematic code format

A code word can be generated from the new matrix of Eq. (2.3) by substitution into Eq. (2.2). By matrix multiplication it can be shown that

    u_i = m_i    for i = 1, 2, ..., k        (2.5a)

and

    u_(k+j) = m_1 p_1j + m_2 p_2j + ... + m_k p_kj    for j = 1, 2, ..., n-k        (2.5b)

From equations (2.5a) and (2.5b) it can be verified that the first k bits of the code word are just the information bits to be transmitted; the last n-k bits are linear functions of the information bits. A special name has been given to the last n-k bits: they are the parity check bits of the code word.

For a linear code in systematic form, the encoding complexity is further reduced since the encoder has to store only the k x (n-k) bits p_ij of the P matrix instead of storing the k x n bits of the generator matrix G.
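Equations (2.5a) and (2.5b) say that a systematic encoder only needs the P submatrix: copy the message through, then form each parity bit as a modulo-2 sum. A sketch with a hypothetical 3 x 3 P matrix (assumed for illustration, not taken from the text):

```python
def systematic_encode(message, P):
    """Code word = message followed by n-k parity bits, where
    parity_j = m_1*p_1j + ... + m_k*p_kj (mod 2), as in Eq. (2.5b)."""
    parity = [0] * len(P[0])
    for m_i, p_row in zip(message, P):
        if m_i:
            parity = [a ^ b for a, b in zip(parity, p_row)]
    return list(message) + parity

P = [[1, 1, 0],   # hypothetical parity submatrix of a (6,3) systematic code
     [0, 1, 1],
     [1, 0, 1]]

print(systematic_encode([1, 1, 0], P))   # first 3 bits are the message itself
```

The storage saving is visible directly: only the k x (n-k) entries of P appear in the encoder.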

2.2 Parity Check Matrix

Another way of describing a linear code is by its parity check matrix H. For each k x n matrix G there exists an (n-k) x n matrix H such that the row space of G is orthogonal to H, which means that the inner product of a vector in the row space of G and a row of H is zero.

Let

        | h_1     |   | h_11      h_12      ... h_1n     |
    H = | h_2     | = | h_21      h_22      ... h_2n     |        (2.6)
        | ...     |   | ...                              |
        | h_(n-k) |   | h_(n-k),1 h_(n-k),2 ... h_(n-k),n|

and let

    U = (u_1, u_2, ..., u_n) be a vector in the row space of G.

Then

    U H^T = (0, 0, ..., 0)        (2.7)

or

    U . h_i = 0        (2.8)

for

    i = 1, 2, ..., n-k.

For a vector to be a code word in the code generated by G, the vector U must satisfy U H^T = 0. The parity check matrix of the code generated by G in Eq. (2.3) is

    H = [P^T  I_(n-k)]

      = | p_11    p_21    ... p_k1      1 0 0 ... 0 |
        | p_12    p_22    ... p_k2      0 1 0 ... 0 |        (2.9)
        | ...                                       |
        | p_1,n-k p_2,n-k ... p_k,n-k   0 0 0 ... 1 |

where

    P^T is the transpose of matrix P.

Several descriptions of linear codes have been presented; but before some practical codes can be discussed, some basic terminology must be introduced which will be used to define the error correcting capability of a code.

The Hamming weight of a vector V, w(V), is defined as the number of nonzero components of V.

The Hamming distance between two vectors U and V, d(U,V), is defined as the number of places in which they differ.

By the definition of modulo-2 addition, the following relationship between weight and distance can be seen:

    d(u,v) = w(u+v)        (2.10)

The distance between two code vectors, U and V, is just equal to the weight of their vector sum U + V. Given a linear code, the distances between all possible pairs of code words can be calculated. The smallest distance is called the minimum distance of the code, denoted d_min. The sum of any two code vectors must also be a code vector, since the set of all code vectors is a subspace of all n-tuples. Therefore the distance between any two code vectors is equal to the weight of a third code vector. Thus the minimum distance of a code is equal to the minimum weight of its nonzero code vectors. The importance of minimum distance is that it determines the random error correcting and detecting capability of a linear code.

In certain communication channels, each transmitted symbol is affected independently by noise. These kinds of errors, introduced earlier, are called random errors. Codes which are designed to protect against t independent errors are called random error correcting codes. These codes are dependent on the minimum distance properties already described. There are also channels on which the channel interference introduces errors of unspecified time duration, referred to as a burst of b errors. Codes designed to protect against this kind of burst are called burst error correcting codes. Sometimes a combination of random and burst errors exists, and a more complex code is needed.

Linear codes can be constructed for all of the above mentioned systems; however, by adding more algebraic structure to linear codes, a subclass known as cyclic codes can be formed which operates more efficiently. A linear code requires matrix multiplication to encode a message block, a similar matrix operation on the received word, and then a look-up action into a decoding table. All this leads to cumbersome control circuitry and a tremendous amount of hardware, particularly for large values of n and k.

2.3 Cyclic Codes

A cyclic code is a linear code with the additional property that shifting a code word cyclically produces another code word. With this structure, encoding and parity checking of a cyclic code can be implemented by using simple shift registers and feedback connections. In most cases the decoding will be simple and efficient.

For the purpose of expression, the components of a code word will be associated with the coefficients of a polynomial and vice versa:

    V(X) = v_0 + v_1 X + v_2 X^2 + ... + v_(n-1) X^(n-1)        (2.11)

Every code word in an (n,k) code corresponds one to one to a polynomial of degree n-1 or less, and no polynomial of degree greater than n-1 corresponds to a code word. The term V(X) will be called the code polynomial of V. For example, the code word 1010001 of a (7,4) cyclic code corresponds to the polynomial 1 + X^2 + X^6. A convention has been adopted to transmit polynomials high order first. This in effect transmits the message block, high order bit first, followed by the parity check bits.
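With coefficients stored low order first (component i is the coefficient of X^i), a cyclic shift of the code word is exactly multiplication by X modulo X^n + 1: the X^(n-1) term wraps around to the constant term. A sketch using the (7,4) example word above:

```python
def cyclic_shift(word):
    """Multiply V(X) by X mod X^n + 1: rotate the word one position,
    with the X^(n-1) coefficient wrapping around to X^0."""
    return [word[-1]] + word[:-1]

v = [1, 0, 1, 0, 0, 0, 1]     # 1010001  <->  1 + X^2 + X^6
print(cyclic_shift(v))        # [1, 1, 0, 1, 0, 0, 0]  <->  1 + X + X^3
```

Applying the shift n = 7 times returns the original word, as the modulus X^7 + 1 implies.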

Since a cyclic shift is just multiplication by X mod X^n + 1, the cyclic property implies that if a polynomial P(X) is a code word, so are all multiples of P(X). But there is one polynomial of least degree, G(X), called the generator polynomial, and all multiples of G(X) are the code words. Thus the generator matrix G can be replaced in the encoding operation by the generator polynomial G(X).

The code size can be determined from the degree of the generator polynomial G(X). If G(X) has degree n-k, then the code has 2^k code words.

Polynomial representation makes it possible to express the useful properties of cyclic codes:

1. In an (n,k) cyclic code, there exists one and only one code polynomial of degree n-k,

   G(X) = 1 + g1 X + g2 X^2 + ... + g(n-k-1) X^(n-k-1) + X^(n-k)        (2.12)

2. Every code polynomial V(X) is a multiple of G(X), and every polynomial of degree n-1 or less which is a multiple of G(X) must be a code polynomial.

3. The generator polynomial G(X) of an (n,k) cyclic code is a factor of X^n + 1 such that,

   X^n + 1 = G(X) H(X)        (2.13)

   where H(X) = parity check polynomial

4. If G(X) is a polynomial of degree n-k and is a factor of X^n + 1, then G(X) generates an (n,k) cyclic code.
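These properties can be checked mechanically. The following sketch is a modern illustration, not part of the original project: polynomials over GF(2) are represented as Python integers with bit i standing for the coefficient of X^i, and the helper names gf2_mul and gf2_mod are illustrative. It verifies the properties for the (7,4) code with G(X) = X^3 + X + 1 that serves as the running example later in the text.

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials (ints, bit i <-> X^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

n, k, G = 7, 4, 0b1011                # G(X) = X^3 + X + 1

# Property 3: G(X) is a factor of X^n + 1
assert gf2_mod((1 << n) | 1, G) == 0

# Property 2: the code words are exactly the multiples of G(X) of degree < n
codewords = {gf2_mul(m, G) for m in range(1 << k)}
assert len(codewords) == 2 ** k

# Cyclic property: X * V(X) mod (X^n + 1) is again a code word
for v in codewords:
    assert gf2_mod(v << 1, (1 << n) | 1) in codewords
```

The same two helpers suffice for every encoding and decoding computation in the chapters that follow.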

2.4 Analytic Methods of Code Construction

The algebraic structure of cyclic codes makes it possible to systematically construct codes for detection and correction of errors.

Several code constructing techniques can be implemented, such as

Hamming, Fire, Burton, and BCH, each having algebraic properties best suited for a particular type of error. Analytical models and their applicability will be presented for several construction techniques.

2.4.1 Hamming Codes

Let G(X) be a primitive polynomial of degree m, where m is a positive integer. The code produced is a single error correcting Hamming code with the following:

code length                    n = 2^m - 1
number of parity checks        n-k = m
number of information bits     k = 2^m - m - 1
error correcting capability    t = 1
minimum distance               d(min) = 3

2.4.2 Fire Codes

Let p(X) be an irreducible polynomial of degree c which divides a polynomial of the form X^(2^c - 1) + 1. Then the exponent of p(X) is 2^c - 1. If l is defined to be relatively prime to 2^c - 1, then the generator polynomial,

G(X) = p(X) (X^l + 1)

produces an (n,k) code with

code length                    n = l (2^c - 1)
number of information bits     k = n - (c + l)
correction capability          any burst of length b,
                               where c >= b and l >= 2b - 1
detection capability           all bursts of length d <= l + 1 - b

Frequently a special case of Fire codes is used strictly for burst correction. The b error correcting codes have the following parameters for the values of b for which 2b-1 and 2^b - 1 are relatively prime.

code length                    n = (2b - 1)(2^b - 1)
number of parity checks        n-k = 3b - 1

Table 2.1 contains a list of Fire codes generated from the burst code equations.

Construction of the generator polynomial for burst correcting Fire codes can be accomplished by algebra. The basic structure of G(X) is

G(X) = p(X) (X^l + 1)

For the special burst case,

l = 2b - 1    and    c = b

so

G(X) = p(X) (X^(2b-1) + 1)

where p(X) is of degree b. The first term p(X) is an irreducible polynomial obtainable by factoring the polynomial X^(2^c - 1) + 1.

TABLE 2.1. Burst error correcting Fire codes with b ≤ 10

CODE             BURST CORRECTING    RATE        PARITY CHECKS
(n,k)            (b)                 (R=k/n)     (n-k)
(35,27)          3                   0.77        8
(105,94)         4                   0.90        11
(279,265)        5                   0.95        14
(693,676)        6                   0.98        17
(1651,1631)      7                   0.99        20
(3825,3802)      8                   0.99        23
(8687,8661)      9                   0.99        26
(19437,19408)    10                  0.99        29

In order to demonstrate the fabrication process of G(X), a Fire code with burst correction capability b = 5 will be selected. The following will determine p(X):

X^(2^5 - 1) + 1 = X^31 + 1
              = (X + 1)(X^5 + X^2 + 1)(X^5 + X^3 + 1)(X^5 + X^3 + X^2 + X + 1)
                (X^5 + X^4 + X^2 + X + 1)(X^5 + X^4 + X^3 + X + 1)(X^5 + X^4 + X^3 + X^2 + 1)

From the above factorization, six irreducible polynomials of degree five are found. The best choice is the polynomial with the fewest terms or nonzero coefficients. Therefore either polynomial X^5 + X^2 + 1 or X^5 + X^3 + 1 could be selected. Tables are available that give all of the factors of polynomials of the form X^(2^n - 1) + 1. One such reference is in Peterson and Weldon's book, "Error Correcting Codes," Appendix C, Table C.2. Since the degree of p(X) is known (b=5), one only has to examine the factors listed under degree equal to 5.

Substitution into the G(X) formula yields

G(X) = (X^5 + X^2 + 1)(X^(2(5)-1) + 1) = (X^5 + X^2 + 1)(X^9 + 1)

Fire codes are frequently used in burst error correcting systems due to their burst correcting capability and simple implementation.
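The construction just carried out can be repeated in software. In this sketch (a modern illustration, not from the text; polynomials are integers with bit i standing for X^i, and gf2_mul is an assumed helper name) the generator is assembled and its parameters checked against the (279,265) entry of Table 2.1.

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials (ints, bit i <-> X^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

b = c = 5                        # burst capability; degree of p(X)
l = 2 * b - 1                    # l = 9 for the special burst case
p = 0b100101                     # p(X) = X^5 + X^2 + 1
G = gf2_mul(p, (1 << l) | 1)     # G(X) = p(X)(X^9 + 1)

n = l * (2 ** c - 1)             # code length 279
k = n - (c + l)                  # 265 information bits
assert (n, k) == (279, 265)
assert G.bit_length() - 1 == n - k    # deg G(X) = 14 = number of parity checks
```

The degree check confirms that the n-k parity positions equal the degree of the generator, as required of any cyclic code.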

2.4.3 Burton Codes

A class of codes similar to Fire codes is defined as Burton codes.

Let p(X) be an irreducible polynomial of degree b, let e be the smallest positive integer such that X^e + 1 is divisible by p(X), and let n be the least common multiple (LCM) of b and e:

n = LCM(b,e)

When b and e are relatively prime, n = eb.

Then for any positive integer b, there exists a b burst error correcting code generated by

G(X) = p(X) (X^b + 1)

with

code length                    n = eb
number of parity checks        n-k = 2b
number of information bits     k = n - 2b

Each code vector consists of e subblocks.
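As a small illustration of these definitions (an added sketch, not an example from the text; polynomials are integers with bit i standing for X^i), take p(X) = X^3 + X + 1 with b = 3 and find e by direct search.

```python
from math import lcm

def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (ints, bit i <-> X^i)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

p, b = 0b1011, 3                 # p(X) = X^3 + X + 1, degree b = 3
# e: smallest positive integer with X^e + 1 divisible by p(X)
e = next(e for e in range(1, 2 ** b) if gf2_mod((1 << e) | 1, p) == 0)
n = lcm(b, e)                    # code length
assert (e, n) == (7, 21)         # a (21,15) Burton code, n-k = 2b = 6 checks
```

Here b and e are relatively prime, so n = eb = 21 and the code vector consists of e = 7 subblocks of b = 3 bits.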

2.4.4 BCH Codes

This class of codes differs from the ones presented because it has only random error correcting capability. To construct the generator polynomial of a t error correcting BCH code with n = 2^m - 1, let

G(X) = LCM (m1(X), m3(X), ..., m(2t-1)(X))

where m(2j+1)(X) is the minimum function of α^(2j+1), α being a primitive element of GF(2^m).

This means that m1(X), m3(X), ..., m(2t-1)(X) must each divide G(X), but if two or more of these minimum functions are identical, only one is included in G(X). Since G(X) has at most t factors and each factor has maximum degree m, the t error correcting BCH code has at most mt parity checks.

The code construction techniques discussed in the preceding

section for burst and random errors are summarized in Table 2.2. This

is not a complete set of techniques but represents those currently

being implemented.

TABLE 2.2. A comparison of code construction techniques

TECHNIQUE    ADVANTAGES                    DISADVANTAGES
Hamming      Good single error             Limited applicability
             correcting code
Fire         Very high code rate           Long codes required for modest
             Simple implementation         burst correcting capability
Burton       High code rate                Must conserve phase burst
             High code efficiency          quality to interleave
BCH          Good random error             Decoding more complex than
             correcting capability         for burst error codes

CHAPTER III

ENCODING

3.1 The Mathematics of Encoding a Cyclic Code

The first technique to encode a cyclic (n,k) code generated by

G(X) is to multiply the message polynomial M(X) by G(X). Thus the code polynomial V(X) can be expressed as

V(X) = M(X) G(X)                                             (3.1)
     = (m0 + m1 X + ... + m(k-1) X^(k-1)) G(X)

If the coefficients of M(X), (m0, m1, ..., m(k-1)), are the k information bits to be encoded, then V(X) would be the corresponding code polynomial. The coefficients of the polynomial V(X) will be referred to as the code word. The number of check bits in the code word is equal to the degree n-k of G(X).

In order to exemplify this technique, the cyclic (7,4) code generated by G(X) = X^3 + X + 1, which is capable of correcting single bit errors, will be studied. To generate all possible code words, every combination of information bits has to be considered. Since n = 7 and k = 4, there are 2^k (or 16) messages. The code words are shown in Table 3.1.
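The entries of Table 3.1 can be regenerated with a few lines of GF(2) polynomial arithmetic (a modern sketch, not part of the project; message and code words are integers with bit i standing for X^i).

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials (ints, bit i <-> X^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

G, k = 0b1011, 4                                   # G(X) = X^3 + X + 1
table = {m: gf2_mul(m, G) for m in range(1 << k)}  # V(X) = M(X) G(X), Eq. (3.1)

assert format(table[0b0001], '07b') == '0001011'   # row 0001 of Table 3.1
assert format(table[0b1101], '07b') == '1111111'   # row 1101 of Table 3.1
```

Each 7-bit string is the code word written high order first, X^6 through X^0, matching the table's column order.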

TABLE 3.1. A (7,4) cyclic code generated by G(X) = X^3 + X + 1

MESSAGE             CODE POLYNOMIAL                                          CODE WORD
(X^3 X^2 X^1 X^0)   M(X) · G(X)                                              (X^6 X^5 X^4 X^3 X^2 X^1 X^0)
0 0 0 0             0 · (X^3+X+1) = 0                                        0 0 0 0 0 0 0
0 0 0 1             1 · (X^3+X+1) = X^3+X+1                                  0 0 0 1 0 1 1
0 0 1 0             X · (X^3+X+1) = X^4+X^2+X                                0 0 1 0 1 1 0
0 0 1 1             (X+1) · (X^3+X+1) = X^4+X^3+X^2+1                        0 0 1 1 1 0 1
0 1 0 0             X^2 · (X^3+X+1) = X^5+X^3+X^2                            0 1 0 1 1 0 0
0 1 0 1             (X^2+1) · (X^3+X+1) = X^5+X^2+X+1                        0 1 0 0 1 1 1
0 1 1 0             (X^2+X) · (X^3+X+1) = X^5+X^4+X^3+X                      0 1 1 1 0 1 0
0 1 1 1             (X^2+X+1) · (X^3+X+1) = X^5+X^4+1                        0 1 1 0 0 0 1
1 0 0 0             X^3 · (X^3+X+1) = X^6+X^4+X^3                            1 0 1 1 0 0 0
1 0 0 1             (X^3+1) · (X^3+X+1) = X^6+X^4+X+1                        1 0 1 0 0 1 1
1 0 1 0             (X^3+X) · (X^3+X+1) = X^6+X^3+X^2+X                      1 0 0 1 1 1 0
1 0 1 1             (X^3+X+1) · (X^3+X+1) = X^6+X^2+1                        1 0 0 0 1 0 1
1 1 0 0             (X^3+X^2) · (X^3+X+1) = X^6+X^5+X^4+X^2                  1 1 1 0 1 0 0
1 1 0 1             (X^3+X^2+1) · (X^3+X+1) = X^6+X^5+X^4+X^3+X^2+X+1        1 1 1 1 1 1 1
1 1 1 0             (X^3+X^2+X) · (X^3+X+1) = X^6+X^5+X                      1 1 0 0 0 1 0
1 1 1 1             (X^3+X^2+X+1) · (X^3+X+1) = X^6+X^5+X^3+1                1 1 0 1 0 0 1

Examination of the code words in Table 3.1 finds the information bits intermingled with the check bits. This type of code is called unsystematic. Given the generator polynomial G(X) of an (n,k) cyclic code, the code can be put into a systematic form. That is:

1. The first k bits of each code word are the unaltered

information bits

2. The last n-k bits are parity check bits.

The second technique of encoding involves systematic codes which will be formulated as follows. Suppose that the message of k bits to be encoded is,

M = (m0, m1, ..., m(k-1))                                    (3.2)

The corresponding message polynomial is,

M(X) = m0 + m1 X + m2 X^2 + ... + m(k-1) X^(k-1)             (3.3)

Multiplying M(X) by X^(n-k),

X^(n-k) M(X) = m0 X^(n-k) + m1 X^(n-k+1) + ... + m(k-1) X^(n-1)    (3.4)

Dividing X^(n-k) M(X) by G(X),

X^(n-k) M(X) = Q(X) G(X) + R(X)                              (3.5)

where Q(X) is the quotient and R(X) is the remainder. Since the degree of the generator polynomial G(X) is n-k, the degree of the remainder R(X) must be n-k-1 or less.

R(X) = r0 + r1 X + ... + r(n-k-1) X^(n-k-1)                  (3.6)

Taking Eq. (3.5) and rearranging terms,

R(X) + X^(n-k) M(X) = Q(X) G(X)                              (3.7)

This shows that R(X) + X^(n-k) M(X) is a multiple of G(X) and has degree n-1 or less. Therefore R(X) + X^(n-k) M(X) is a code polynomial of the cyclic code generated by G(X). V(X) can be expressed as,

V(X) = R(X) + X^(n-k) M(X)
     = r0 + r1 X + ... + r(n-k-1) X^(n-k-1)
       + m0 X^(n-k) + ... + m(k-1) X^(n-1)                   (3.8)

which corresponds to the systematic code word

V = (r0, r1, ..., r(n-k-1),  m0, m1, ..., m(k-1))            (3.9)
        CHECK BITS              INFORMATION BITS

The code word is now systematic and will be transmitted from right to

left, most significant information bit through least significant check

bit. All practical systems use systematic codes, therefore only

systematic cyclic codes will be discussed.

As an example, consider the systematic (7,4) code generated by G(X) = X^3 + X + 1. The calculations to compute the code words require division to find the remainder, R(X). The process of encoding the message (0111) would be as follows:

From Eq. (3.5)

R(X) = X^(n-k) M(X) MOD G(X)

M = (0111) implies M(X) = X^2 + X + 1

Dividing X^3 M(X) = X^5 + X^4 + X^3 by X^3 + X + 1 gives the quotient Q(X) = X^2 + X and the remainder R(X) = X.

Since R(X) = X, by Eq. (3.7) the code polynomial is V(X) = R(X) + X^3 M(X) = X^5 + X^4 + X^3 + X, i.e., (0111010), where the leftmost four bits are the message. This code word would be transmitted high order first, or left to right.
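The same hand division can be reproduced by a short routine (a modern sketch, not part of the project; polynomials are integers with bit i standing for X^i, and gf2_mod is an assumed helper name).

```python
def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (ints, bit i <-> X^i)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

n, k, G = 7, 4, 0b1011
M = 0b0111                       # M(X) = X^2 + X + 1
R = gf2_mod(M << (n - k), G)     # R(X) = X^3 M(X) mod G(X), Eq. (3.5)
V = (M << (n - k)) | R           # V(X) = R(X) + X^3 M(X), Eq. (3.7)
assert R == 0b010                # R(X) = X
assert format(V, '07b') == '0111010'
```

The shift by n-k places the message in the high order positions, so appending the remainder yields the systematic code word directly.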

All 16 code words in systematic form are listed in Table 3.2.

TABLE 3.2. A (7,4) systematic cyclic code generated by G(X) = X^3 + X + 1

MESSAGE               CODE WORD
                      (MESSAGE)  (CHECKS)
(X^3 X^2 X^1 X^0)     (X^6 X^5 X^4  X^3 X^2 X^1 X^0)

0 0 0 0 0 0 0 0 0 0 0

0 0 0 1 0 0 0 1 0 1 1

0 0 1 0 0 0 1 0 1 1 0

0 0 1 1 0 0 1 1 1 0 1

0 1 0 0 0 1 0 0 1 1 1

0 1 0 1 0 1 0 1 1 0 0

0 1 1 0 0 1 1 0 0 0 1

0 1 1 1 0 1 1 1 0 1 0

1 0 0 0 1 0 0 0 1 0 1

1 0 0 1 1 0 0 1 1 1 0

1 0 1 0 1 0 1 0 0 1 1

1 0 1 1 1 0 1 1 0 0 0

1 1 0 0 1 1 0 0 0 1 0

1 1 0 1 1 1 0 1 0 0 1

1 1 1 0 1 1 1 0 1 0 0

1 1 1 1 1 1 1 1 1 1 1

3.2 Encoding Via the Generator Matrix

The generator matrix of a cyclic code in systematic form can be formed as shown next. Expressing Eq. (3.5) for the k single terms of the message,

X^(n-k+i) = Qi(X) G(X) + ri(X)                               (3.10)

with

i = 0, 1, ..., k-1

ri(X) = ri0 + ri1 X + ... + ri,n-k-1 X^(n-k-1)

Again rearranging terms,

X^(n-k+i) + ri(X) = Qi(X) G(X)                               (3.11)

which are just the code polynomials. Arranging these k code polynomials as k rows of a matrix yields,

        r00         r01         ...  r0,n-k-1        1 0 ... 0
        r10         r11         ...  r1,n-k-1        0 1 ... 0
G  =    .                                                          (3.12)
        .
        rk-1,0      rk-1,1      ...  rk-1,n-k-1      0 0 ... 1

which is the generator matrix of the cyclic code. The first row of G is the generator polynomial of the code. If M = (m0, m1, ..., m(k-1)) are

the k information bits to be encoded, then the corresponding code vector is,

V = M G                                                      (3.13)

In polynomial form,

V(X) = m0 r0(X) + m1 r1(X) + ... + m(k-1) rk-1(X)
       + m0 X^(n-k) + m1 X^(n-k+1) + ... + m(k-1) X^(n-1)    (3.14)
     = R(X) + X^(n-k) M(X)

where

M(X) = m0 + m1 X + ... + m(k-1) X^(k-1)

R(X) = m0 r0(X) + m1 r1(X) + ... + m(k-1) rk-1(X)

By comparison with Eq. (3.10),

ri(X) = X^(n-k+i) MOD G(X)                                   (3.15)

and R(X) is the remainder resulting from dividing X^(n-k) M(X) by G(X). Thus the two methods of deriving the code polynomial, matrix generation and algebraic generation (encoding equation), are theoretically equivalent.

In order to obtain the parity check matrix H one has to rotate the G matrix in Eq. (3.12) as follows:

        1 0 0 ... 0    r00          r10          ...  rk-1,0
        0 1 0 ... 0    r01          r11          ...  rk-1,1
H  =    0 0 1 ... 0    r02          r12          ...  rk-1,2       (3.16)
        .
        0 0 0 ... 1    r0,n-k-1     r1,n-k-1     ...  rk-1,n-k-1

For example, the message polynomial M(X) = X^2 + X + 1 will be encoded via the generator matrix, to compare the resultant code word with the one generated previously by the encoding equation. First the entire generator matrix of the (7,4) cyclic code will be formed from the following equation, which is a modified form of Eq. (3.10).

ri(X) = X^(n-k+i) MOD G(X)                                   (3.17)

where

i = 0, 1, ..., k-1   (0,1,2,3)

For i = 0, by long division of X^3 by X^3 + X + 1, the quotient is 1 and the remainder is X + 1, so

r0(X) = X + 1            -->   0 1 1

Similarly for

i = 1:   X^4 / (X^3 + X + 1)     r1(X) = X^2 + X       -->   1 1 0
i = 2:   X^5 / (X^3 + X + 1)     r2(X) = X^2 + X + 1   -->   1 1 1
i = 3:   X^6 / (X^3 + X + 1)     r3(X) = X^2 + 1       -->   1 0 1

Therefore, writing each row high order first (X^6 ... X^0),

        v3      1 0 0 0   1 0 1
G  =    v2  =   0 1 0 0   1 1 1
        v1      0 0 1 0   1 1 0
        v0      0 0 0 1   0 1 1

V = M G = (0 1 1 1) G
  = v2 + v1 + v0
  = 0 1 1 1   0 1 0
    |-DATA-| |-CHECKS-|

which is the same code word produced by using the encoding equation.
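The matrix construction and the matrix-vector product can be sketched as follows (a modern illustration, not part of the project; bit ordering is high order first as in the text, and gf2_mod is an assumed helper name).

```python
def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (ints, bit i <-> X^i)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

n, k, G = 7, 4, 0b1011
# Row for vi: identity bit for the message, then r_i(X) = X^(n-k+i) mod G(X),
# all written high order first (X^6 ... X^0) as in the text.
rows = []
for i in reversed(range(k)):                     # i = 3, 2, 1, 0
    r = gf2_mod(1 << (n - k + i), G)
    rows.append([1 if j == k - 1 - i else 0 for j in range(k)]
                + [(r >> p) & 1 for p in (2, 1, 0)])

assert rows[0] == [1, 0, 0, 0, 1, 0, 1]          # the v3 row

M = [0, 1, 1, 1]                                 # message (m3 m2 m1 m0)
V = [sum(M[j] * rows[j][c] for j in range(k)) % 2 for c in range(n)]
assert V == [0, 1, 1, 1, 0, 1, 0]                # same code word as before
```

The rows are exactly the code words for the unit messages 1000, 0100, 0010, and 0001 of Table 3.2, which is what makes V = M G systematic.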

3.3 Encoding Implementation for Cyclic Codes

The encoder accepts k-bit blocks of information bits from the source and transmits them to the channel with no delay. Following each such block, n-k parity check bits are sent. During this time no information can be accepted from the source. Thus it is assumed that the source is capable of starting and stopping on command, or else the use of a buffer is required.

Cyclic codes are easily mechanized by shift register devices. The cyclic property and the property that each code polynomial is a multiple of the generator polynomial minimize the hardware requirements.

Two methods to encode an information word into a code word exist.

Method 1: Implementation of the encoding equation (Eq. 3.7),

X^(n-k) M(X) / G(X) = Q(X) + R(X) / G(X)

with a circuit that calculates R(X) by dividing by G(X). This circuit has n-k stages.

Method 2: Employs a circuit which multiplies by H(X). This circuit has k stages.

For codes with more check bits than information bits, the second method is best, while for codes with more information bits, rates k/n > 0.5, the first method is better. Both methods give the same code word. Since n-k is usually less than k in practice, method 2 is rarely used.

Division Circuits

Before describing the two methods in more detail, two circuits for division should be understood by the designer. Although there are many circuits for division all yielding the same result, only two are used in practical systems.

Internal XORs: Refer to Figure 3.1. It operates as follows. After 6 shifts (n-k for this circuit), division starts. The circuit requires n shifts to calculate and output the quotient. At the completion of shifting, the register contains the remainder.

External XORs: Refer to Figure 3.2. After 6 shifts (n-k) the division starts. After n shifts the quotient is output, but the register does not contain the remainder. It contains a linear combination of the remainder bits.
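The internal-XOR divider can be modeled bit by bit in software. The sketch below is a modern illustration, not part of the project, and for brevity it divides by the (7,4) generator G(X) = X^3 + X + 1 rather than the degree-6 polynomial of Figures 3.1 and 3.2; the function name and list layout are assumptions.

```python
def lfsr_divide(bits, g_taps):
    """Bit-serial division with internal XORs (Figure 3.1 style).
    bits: dividend coefficients, high order first.
    g_taps: [g0, g1, ..., g_{r-1}] of G(X) = g0 + g1 X + ... + X^r.
    Returns the remainder, low order first, after len(bits) shifts."""
    r = len(g_taps)
    reg = [0] * r
    for bit in bits:
        fb = reg[-1]                        # bit leaving the register
        reg = [bit ^ (fb & g_taps[0])] + \
              [reg[i - 1] ^ (fb & g_taps[i]) for i in range(1, r)]
    return reg

# Divide X^5 + X^4 + X^3 (= X^3 M(X) for M = 0111) by G(X) = X^3 + X + 1
rem = lfsr_divide([0, 1, 1, 1, 0, 0, 0], [1, 1, 0])
assert rem == [0, 1, 0]                     # R(X) = X, as in the worked example
```

After the n shifts the register holds the remainder, which is exactly the behavior claimed for the internal-XOR circuit.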

3.3.1 Encoding With an (n-k) Stage Shift Register

3.3.1.1 Multiplier Circuit Encoder

A code word can be formed by Eq. (3.1), which multiplies a polynomial of degree k-1 or less, whose coefficients are arbitrary information symbols, by G(X). This produces an unsystematic code where the information symbols do not appear unaltered in the code word, yet can be recovered from a correct code polynomial by division by G(X). The industry standards call for systematic codes; therefore this method is not used.

3.3.1.2 Divider Circuit Encoder (Unmodified)

Another technique, involving the basic division algorithm, generates a systematic code.

R(X) + X^(n-k) M(X) = G(X) Q(X)                              (3.18)

The circuit constructed from this equation functions by dividing the input polynomial by G(X). The input polynomial is not the message polynomial, but rather the product X^(n-k) M(X), which is of length n.

Encoding consists of computing the remainder (check bits) by this technique and requires n clock pulses. Then the feedback must be disabled and it takes another n-k steps to shift out the remainder.

FIGURE 3.1. A circuit for dividing by X^6 + X^5 + X^4 + X^3 + 1 (internal XOR). [Legend: circled plus = XOR gate; square = 1-bit shift register stage.]

FIGURE 3.2. A circuit for dividing by X^6 + X^5 + X^4 + X^3 + 1 (external XOR). [Legend: circled plus = XOR gate; square = 1-bit shift register stage.]

As an example, an encoder circuit is shown in Figure 3.3 for the (7,4) cyclic code with G(X) = X^3 + X + 1. The division is performed by using the internal XOR divider.

FIGURE 3.3. Unmodified encoder, G(X) = X^3 + X + 1. [The input X^(n-k) M(X) feeds shift register stages R1, R2, R3 with feedback; the output follows R3.]

The circuit in Figure 3.3 is called an unmodified encoder because it can be improved to speed up the calculation of the remainder.

3.3.1.3 Divider Circuit Encoder (Modified)

In order to reduce the encoding time by n-k clock pulses, the multiplication of the message polynomial M(X) by X^(n-k) can be performed in the encoder circuit. This is accomplished by designing a circuit that simultaneously divides by G(X) and multiplies by X^(n-k).

A general purpose modified encoder circuit is shown in Figure 3.4, which produces the remainder after only k clock pulses.

[CALCULATE REM / FEEDBACK ENABLE gates the remainder calculation; a SELECT LOGIC multiplexer routes 0 -> info bits, 1 -> check bits to the output.]

where

G(X) = 1 + g1 X + g2 X^2 + ... + g(n-k-1) X^(n-k-1) + X^(n-k)

Each square denotes a single binary shift register stage (i.e., flip flop) which is shifted by an external synchronous clock so that its input appears as its output one clock pulse later; each circled plus denotes an exclusive OR gate; gi denotes a feedback connection if gi = 1 and no connection if gi = 0.

FIGURE 3.4. General circuit for a modified (n-k) stage encoder

The procedure for the modified encoder is as follows:

1. Clear the shift register

2. Set FEEDBACK ENABLE to 1 and SELECT LOGIC to 1

3. Shift the k information bits into the shift register and

into the output

4. Set FEEDBACK ENABLE to 0 and SELECT LOGIC to 0

5. Shift the n-k parity check bits into the output

The modified encoding circuit consists of the following hardware:

1. n-k shift register stages (flip-flops)

2. One AND gate in the feedback connection

3. An average of (n-k)/2 XOR gates

4. A counter to select the control. If m is the smallest integer such that n < 2^m, then m flip-flops or equivalent are needed.

5. One multiplexer

In order to demonstrate an application using the modified encoder, an example will be developed. Suppose nine bits of information are to be encoded. The previous examples used only codes with k = 4, so a new code is needed. One such code with k = 9 is the cyclic (15,9) code with G(X) = 1 + X^3 + X^4 + X^5 + X^6. The generator polynomial for this case is of degree six and has the following form:

G(X) = 1 + g1 X + g2 X^2 + g3 X^3 + g4 X^4 + g5 X^5 + X^6

From the previous equation the feedback connections for the encoder circuit become:

g1 = g2 = 0
g3 = g4 = g5 = 1

The circuit for dividing by G(X) = 1 + X^3 + X^4 + X^5 + X^6 and multiplying by X^6 is shown in Figure 3.5. Table 3.3 shows the input, status of the shift register, and feedback signal of the circuit for each clock pulse when the message polynomial M(X) = X^8 + X^6 + X^5 + X^2 is encoded. The differences between the unmodified encoder and the modified encoder are summarized in Table 3.4.
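The clock-by-clock behavior of Table 3.3 can be reproduced with a small simulation (a modern sketch, not part of the project; the function name and argument layout are assumptions).

```python
def modified_encode(msg_bits, g_mid):
    """Modified (n-k)-stage encoder: divide by G(X) while premultiplying
    by X^(n-k).
    msg_bits: information bits, high order first.
    g_mid: [g1, ..., g_{n-k-1}]; g0 = g_{n-k} = 1 are implicit.
    Returns the remainder R(X), low order first (R1 ... R_{n-k})."""
    reg = [0] * (len(g_mid) + 1)
    for bit in msg_bits:
        fb = bit ^ reg[-1]                  # the feedback signal of Table 3.3
        reg = [fb] + [reg[i - 1] ^ (fb & g_mid[i - 1])
                      for i in range(1, len(reg))]
    return reg

# M(X) = X^8 + X^6 + X^5 + X^2, shifted in high order first
rem = modified_encode([1, 0, 1, 1, 0, 0, 1, 0, 0], [0, 0, 1, 1, 1])
assert rem == [0, 1, 1, 1, 1, 0]            # final register row of Table 3.3
```

Only k = 9 shifts are needed to reach the remainder, in contrast to the n = 15 shifts of the unmodified encoder, which is the saving summarized in Table 3.4.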

3.3.2 Encoding With a k Stage Shift Register

It has been shown that an (n,k) cyclic code is completely specified by its generator polynomial G(X), which is a factor of X^n + 1.

X^n + 1 = G(X) H(X)                                          (3.19)

The parity polynomial H(X) has the form

H(X) = h0 + h1 X + h2 X^2 + ... + hk X^k                     (3.20)

The H(X) encoding equation, which is derived from Eq. (3.19), is known as the difference equation.

FIGURE 3.5. Modified encoder for a (15,9) code; divides by G(X) = 1 + X^3 + X^4 + X^5 + X^6 and multiplies by X^6. [FEEDBACK ENABLE gates the quotient (feedback signal); input M(X); output taken through SELECT.]

TABLE 3.3. Internal operation of a modified encoder

                       SHIFT REGISTER (AFTER CLOCK)    FEEDBACK
CLOCK    INPUT BIT     (R1 R2 R3 R4 R5 R6)             SIGNAL
0        -             0 0 0 0 0 0                     0 (OFF)
1        1 (X^8)       1 0 0 1 1 1                     1 (ON)
2        0             1 1 0 1 0 0                     1 (ON)
3        1 (X^6)       1 1 1 1 0 1                     1 (ON)
4        1 (X^5)       0 1 1 1 1 0                     0 (OFF)
5        0             0 0 1 1 1 1                     0 (OFF)
6        0             1 0 0 0 0 0                     1 (ON)
7        1 (X^2)       1 1 0 1 1 1                     1 (ON)
8        0             1 1 1 1 0 0                     1 (ON)
9        0             0 1 1 1 1 0 (remainder)         0 (OFF)

TABLE 3.4. Comparison of the encoding techniques

                                          UNMODIFIED    MODIFIED
OPERATION                                 ENCODER       ENCODER
Number of shifts to calculate R(X)        n             k
Number of shifts to output R(X)           n-k           n-k
TOTAL SHIFTS                              2n-k          n

v(n-k-j) = SUM(i=0 to k-1) hi v(n-i-j)                       (3.21)

for

1 <= j <= n-k

Given the k information bits, this equation is a rule to determine the n-k parity check bits of the code polynomial V(X). As mentioned previously, the code word is exactly the same as the one obtained by division by G(X); the only difference is that a multiplication algorithm is being used. The general H(X) encoding circuit based on Eq. (3.21) is shown in Figure 3.6.

The procedure for the k-stage encoder is:

1. Gate 1 is turned on and Gate 2 off. The k information

digits, coefficients of m(X) are shifted into the parity

check register and communication channel simultaneously.

2. After all k digits have entered the register, Gate 1 is

turned off and Gate 2 on. The first check digit then

appears at P.

3. The register is shifted once. The first check digit is

output into the channel and is also shifted into the left-

most stage of the register. The next check digit now

appears at P.

4. Step 3 is repeated until n-k check digits are output to

the channel. 47

The k-stage encoding circuit consists of the following hardware:

1. k shift register stages (flip-flops)

2. Maximum of k-1 XOR gates

3. Counter to control switching of gates

4. Two AND gates for control

As an example, consider encoding the message m(X) = X^2 + X + 1 with the (7,4) code used previously. In order to design the circuit, the parity polynomial is needed, which is calculated from equation 3.19.

H(X) = (X^7 + 1)/(X^3 + X + 1) = X^4 + X^2 + X + 1

The feedback coefficients are determined from equation 3.20 and are listed below.

h0 = h1 = h2 = 1,   h3 = 0,   h4 = 1

The circuit for the k-stage encoder is shown in Figure 3.7. By substitution into the difference equation (Eq. 3.21), the check bits can be calculated.

v(3-j) = v(7-j) + v(6-j) + v(5-j),   1 <= j <= 3

FIGURE 3.6. General circuit for a k stage encoder. [GATE 1 admits the information digits; GATE 2 closes the H(X) feedback path; each check digit appears at P.]

FIGURE 3.7. k stage encoding circuit for the (7,4) cyclic code

For M = (0111):

v5 = v4 = v3 = 1,   v6 = 0
v2 = v6 + v5 + v4 = 0 + 1 + 1 = 0
v1 = v5 + v4 + v3 = 1 + 1 + 1 = 1
v0 = v4 + v3 + v2 = 1 + 1 + 0 = 0

Thus the code word is (v6, v5, v4, v3, v2, v1, v0) = (0111010), which is the same as the code word generated by the other encoding techniques.
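The recursion above can be sketched in a few lines (a modern illustration, not part of the project; the function name is an assumption).

```python
def h_encode(info):
    """Check bits of the (7,4) code from the difference equation
    v(3-j) = v(7-j) + v(6-j) + v(5-j), 1 <= j <= 3 (mod 2).
    info: the message bits (v6, v5, v4, v3)."""
    v = {6: info[0], 5: info[1], 4: info[2], 3: info[3]}
    for j in (1, 2, 3):                  # generate v2, v1, v0 in turn
        v[3 - j] = (v[7 - j] + v[6 - j] + v[5 - j]) % 2
    return [v[i] for i in range(6, -1, -1)]

assert h_encode([0, 1, 1, 1]) == [0, 1, 1, 1, 0, 1, 0]
assert h_encode([1, 0, 0, 0]) == [1, 0, 0, 0, 1, 0, 1]   # row 1000 of Table 3.2
```

Note that each new check bit feeds the next step of the recursion, mirroring the way each check digit is shifted back into the k-stage register.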

In summary, for codes with more check bits than information bits, the k stage encoder is more economical. Otherwise the n-k stage encoder is preferable.

3.4 Shortened Cyclic Codes

In a system design, if a code of a desired number of information bits cannot be found, it may be desirable to shorten a code to meet the requirements. Because cyclic codes are generated by divisors of X^n + 1, there are few codes for most values of n and k, so code shortening becomes an important tool.

Given an (n,k) code, it is always possible to form an (n-s, k-s) code by making the s leading information bits equal to zero and omitting them from all code words. This type of code is called a shortened cyclic code although it is not cyclic. It has at least the same error correcting capability as the code from which it is derived.

The encoding process can be accomplished by the same circuits designed for the unshortened code.

The result of shortening on the generator matrix G is to omit the first s rows and columns of the matrix. See Figure 3.8 for a graphic representation of the code space.

FIGURE 3.8. An (n,k) code shortened by s bits. [The first s of the k information positions are the shortened (zero) bits; the n-k check bits follow the message.]

CHAPTER IV

DECODING

4.1 Error Detection

When a code word is transmitted over a noisy channel, it may be corrupted by noise. At the output of the channel, the received vector may or may not be the same as the transmitted code word. The function of the decoder is to recover the original code vector from knowledge of the received word. Let the received vector be

R = (r0, r1, ..., r(n-1))

where

r0, r1, ..., r(n-k-1) are the received check bits
r(n-k), ..., r(n-1) are the received information bits

The decoder first checks if the received word is a code word, i.e., whether it is divisible by the generator polynomial G(X) of the code used at the encoder. This can be accomplished by the same division circuit used to encode the message. However, during decoding, the feedback shift register with connections corresponding to G(X) will be referred to as the syndrome generator. In addition, the remainder in this register at the completion of division is called the syndrome. The syndrome S(X) is arrived at by,

R(X) = Q(X) G(X) + S(X)                                      (4.1)

S(X) = R(X) MOD G(X)                                         (4.2)

where S(X) is a polynomial of degree n-k-1 or less.

If the syndrome is zero, the received vector R(X) is divisible by G(X) and represents a code word. The decoder will accept the received word as the transmitted word. However, if the syndrome is not zero, the received vector R(X) does not represent a code word and errors have been detected. Suppose V(X) was the transmitted code vector, so

R(X) = V(X) + E(X)                                           (4.3)

where E(X) is the error pattern caused by the channel disturbance. Since V(X) is a code polynomial, it is a multiple of G(X) and can be expressed as,

V(X) MOD G(X) = 0                                            (4.4)

Eq. (4.2) can be expressed similarly as,

S(X) = [V(X) + E(X)] MOD G(X)                                (4.5)

The combination of equations (4.4) and (4.5) yields,

S(X) = E(X) MOD G(X)                                         (4.6)

This expression means that the syndrome of R(X) is equal to the remainder resulting from dividing the error pattern by the generator polynomial of the code. Thus, the syndrome of the received vector contains the information about the error pattern in the received vector, which will be used for error detection and correction. More specifically, if the degree of E(X) is less than that of G(X), then

S(X) = E(X)                                                  (4.7)

In the event that the degree of E(X) is greater than that of G(X), it is possible to find the error pattern by calculating a syndrome similar to the following:

E^(1)(X) = X E(X) MOD (X^n + 1)                              (4.8)

which is E(X) shifted cyclically one place. Its syndrome can be proven to be

S^(1)(X) = X S(X) MOD G(X)                                   (4.9)

The result is important because it means that the syndrome of a cyclic shift of a vector is just the original syndrome shifted once in the syndrome generator with the feedback on.

As an example, the (15,9) cyclic code with G(X) = 1 + X^3 + X^4 + X^5 + X^6 will be corrupted by the error polynomial E(X) = X^12 + X^13 + X^14. The degree of E(X) is fourteen, which is greater than the degree of G(X), which is six; therefore the syndrome does not contain the error pattern. However, if the syndrome of X^3 E(X) is formed, the degree of the shifted error polynomial will be less than that of G(X) and the syndrome register will contain the error pattern. See Figures 4.1 and 4.2, which show

X X X X X X   X X X X X X X X X -> E E E (top three positions)
|-CHECK BITS-|--------- MESSAGE ---------|

WHERE X = UNAFFECTED BITS
      E = ERRONEOUS BITS

FIGURE 4.1. The impact of the error polynomial E(X) = X^12 + X^13 + X^14 on a (15,9) code.

NO. SHIFTS
(AFTER SYNDROME)    SYNDROME REGISTER    ERROR POLYNOMIAL
0 (syndrome)        1 0 1 1 0 1          X^12 + X^13 + X^14
1                   1 1 0 0 0 1          1 + X^13 + X^14
2                   1 1 1 1 1 1          1 + X + X^14
3                   1 1 1 0 0 0          1 + X + X^2

FIGURE 4.2. Process of shifting the syndrome until the degree of the error polynomial is less than that of G(X).

the error pattern induced and the number of shifts of the syndrome

register necessary to determine it. It is observed after the third

shift that the shifted error polynomial 1 + X + X^2 indicates that the

three least significant bits of the syndrome register are erroneous.

(1's indicate error positions.)
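The syndrome relations of Eqs. (4.2), (4.7), and (4.9) can be checked numerically. The sketch below is a modern illustration, not part of the project, using the smaller (7,4) code for brevity; polynomials are integers with bit i standing for X^i, and gf2_mod is an assumed helper name.

```python
def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (ints, bit i <-> X^i)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

n, G = 7, 0b1011
V = 0b0111010                    # a code word of the (7,4) code
E = 0b0000100                    # error at X^2, degree < deg G(X)
R = V ^ E                        # received vector, Eq. (4.3)
S = gf2_mod(R, G)                # syndrome, Eq. (4.2)
assert S == E                    # Eq. (4.7): syndrome equals the error pattern

# Eq. (4.9): syndrome of the cyclically shifted vector = X S(X) mod G(X)
R1 = gf2_mod(R << 1, (1 << n) | 1)      # received vector shifted one place
assert gf2_mod(R1, G) == gf2_mod(S << 1, G)
```

The second assertion is the property exploited in Figure 4.2: shifting the received word once is equivalent to shifting the syndrome once with the feedback on.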

For decoding, the received vector is shifted into the syndrome

register with all stages initially set to zero. After the entire

received word (including check bits) has been entered into the syndrome register, the contents will be the syndrome. In order to perform

error detection, the output of each stage is connected to an OR gate which will examine the syndrome register's contents. The output of the

OR gate is connected to a flip-flop which will indicate whether an

error has been detected or not. For example, if the syndrome is nonzero, then the flip-flop is set, otherwise it is not set and the received word is a code word.

4.2 Decoding Procedures

Several decoding methods exist for cyclic codes and are listed below:

1. Table look-up decoding

2. Meggitt decoder

3. Error trapping technique

4. Trial and error decoding

5. Majority-logic decoding

6. Algebraic procedures 56

The first four of these methods are suitable for decoding burst correcting codes and short random error correcting codes. The remaining two methods can be used to decode long random error correcting codes.

In general, the decoding process consists of three basic steps:

1. Calculate the syndrome of the received word.

2. Identify the correctable error pattern which corresponds

to the syndrome calculated in step 1.

3. Correct the error by taking the modulo-2 sum of the received

word and the error pattern found in step 2.

An emphasis on step 2 is important because it is the most complex phase of decoding. The first and third steps are relatively simple to implement and differ slightly between decoding methods. However, each decoding method is different in its approach to identifying the error pattern and the time required to accomplish it.

4.2.1 Table Look-up Decoding

This method is based on the fact that every possible syndrome S(X) corresponds to only one correctable error pattern E(X) for a given code. Because of this fixed relationship it is possible to construct a table with an error pattern associated with each syndrome (2^(n-k) total entries). The name for this method is table look-up decoding, which in principle can be used with any code. However, for many practical codes the table is large and other means must be found. The overwhelming advantage of this method is that the entire word is decoded at once, making it very fast.
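A decoding table for the single error correcting (7,4) code can be built directly from this relationship (a modern sketch, not part of the project; the dictionary stands in for the hardwired logic, and gf2_mod is an assumed helper name).

```python
def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (ints, bit i <-> X^i)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

n, G = 7, 0b1011
# One entry per syndrome: zero plus the n single-bit patterns (2^(n-k) = 8)
table = {0: 0}
for i in range(n):
    table[gf2_mod(1 << i, G)] = 1 << i
assert len(table) == 2 ** 3

R = 0b0111010 ^ (1 << 4)         # code word with the X^4 bit flipped
corrected = R ^ table[gf2_mod(R, G)]
assert corrected == 0b0111010
```

For this code the seven single-bit syndromes plus zero exhaust all 2^(n-k) syndromes, which is exactly why the whole word can be corrected in a single look-up.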

One application for table look-up decoding is in the area of in-line RAM correction. In this case, bytes of data are protected by

Hamming codes. A typical decoder is shown in Figure 4.3. The integrity of the stored data is safeguarded by also storing the encoded

parity checks. Upon decoding, the data read out of the memory is

re-encoded to form the parity checks. These new check bits are compared to the ones stored with an XOR gate to generate the syndrome. If

no errors have occurred, the syndrome will be zero and an error pattern

of zero is output from the decoding table. However, if the data contains errors, then the syndrome will be nonzero, producing the associated error pattern from the decoding table in order to correct the data.

The implementation of the (n-k)-input, k-output decoding table is

usually done with hardwired logic. Similarly the encoder can be constructed for k inputs and (n-k) outputs. Other techniques for implementation exist but are not cost effective or efficient.

Look-up table decoding has limited applicability due to the size

and cost requirements of the decoding table. For instance, with a system requiring 15 check bits (n-k), the size of the decoding table grows

to approximately 32,000 entries. This would result in a very large

hardwired logic circuit that would not be cost effective.

4.2.2 Meggitt Decoding Technique

The most general of all the decoding methods is the Meggitt

decoder, which is the basic method chosen by the manufacturers of magnetic disks. It uses the principle that if a decoder can decode the first bit in a word correctly for all error patterns, then the entire

word can be decoded with the same circuitry.

FIGURE 4.3. Look-up table decoder for an (n,k) code. (Does not have to be cyclic)

The syndrome generator will use the same circuits used for encoding, which multiply by X^(n-k) and divide by G(X); therefore, the generated syndrome will be given by the following equation:

S(X) = X^(n-k) S1(X) MOD G(X)    (4.10)

where S1(X) is the syndrome without multiplication by X^(n-k).
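Eq. (4.10) can be checked numerically. The sketch below compares the plain division remainder S1(X) with the premultiplied syndrome S(X), assuming the (15,9) code that Section 4.3 examines in detail; the function names and bit ordering are illustrative.

```python
# Check of Eq. (4.10): S(X) = X^(n-k) * S1(X) mod G(X).
# Assumed code: the (15,9) cyclic code with
# G(X) = 1 + X^3 + X^4 + X^5 + X^6.

G = 0b1111001          # bits 0,3,4,5,6 set
N, K = 15, 9

def gf2_mod(a, g):
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def s1(r):             # syndrome without the X^(n-k) factor
    return gf2_mod(r, G)

def s(r):              # syndrome with premultiplication by X^(n-k)
    return gf2_mod(r << (N - K), G)

# The relation holds for every possible received word.
for r in range(1 << N):
    assert s(r) == gf2_mod(s1(r) << (N - K), G)
```

The exhaustive loop is feasible here only because n = 15; it simply confirms that premultiplying the received word and premultiplying its remainder give the same result modulo G(X).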

The Meggitt decoder consists of three sections:

1. Syndrome generator

2. Error pattern detector

3. An n-bit buffer memory

Section 1, the syndrome generator, consists of the (n-k) stage syndrome register with feedback connections corresponding to the nonzero coefficients of G(X). The next section, whose purpose is to detect error patterns, consists of a single output combinational logic circuit whose truth table has 2^(n-k) rows. It produces a 1 output when the syndrome register corresponds to a correctable error pattern with an error in the highest order position (X^(n-1)), and 0 otherwise. If a correctable error pattern occurs, the first shift will output a correct bit regardless of the location of the error. The third section is a first in first out (FIFO) buffer which holds the received word during the calculation of the syndrome. See Figure 4.4.

The operating procedure of the Meggitt decoder is as follows:

1. Calculate the syndrome by shifting the entire received word into the syndrome generator and simultaneously storing it into the buffer.


FIGURE 4.4. A Meggitt decoder for an (n,k) cyclic code.


2. Check for a detectable error pattern and output an appropriate correction bit.

3. Read the first received bit from the buffer. At the same time, the syndrome register is shifted cyclically once with the feedback enabled. If this was an erroneous bit, then it will be corrected via the detector's output. The detector's output also goes back to the syndrome generator to modify the syndrome. By removing the effect of the error, this results in a new syndrome corresponding to the altered received word shifted one place to the right.

4. With the new syndrome detecting any additional errors, the decoder repeats steps 2 and 3.

5. After the received word is read out of the FIFO, the errors corresponding to the patterns built into the logical circuit will have been corrected; the syndrome register will then contain all zeroes. If the syndrome register does not contain all zeroes, then an uncorrectable error has been detected.
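The five steps above can be sketched in software. The version below uses the single error correcting (7,4) cyclic Hamming code, where the error pattern detector reduces to a comparison against one syndrome value; this is an illustrative simplification, not the burst-oriented circuit of the project, and all names are assumptions.

```python
# Minimal Meggitt decoder sketch for the (7,4) cyclic Hamming code,
# G(X) = 1 + X + X^3.  The detector fires only when the syndrome
# matches an error in the highest order position X^(n-1).

G = 0b1011
N = 7

def gf2_mod(a, g):
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def meggitt_decode(r):
    syn = gf2_mod(r, G)                   # step 1: form the syndrome
    target = gf2_mod(1 << (N - 1), G)     # syndrome of an error at X^(n-1)
    for i in range(N):                    # steps 2-4: one pass per bit
        if syn == target:
            r ^= 1 << (N - 1 - i)         # correct the outgoing bit
            syn ^= target                 # remove the error's effect
        syn = gf2_mod(syn << 1, G)        # shift the syndrome register once
    return r, syn                         # step 5: syn == 0 after correction

codeword = 0b0100111                      # (X^2 + 1) * G(X)
corrected, final_syn = meggitt_decode(codeword ^ (1 << 3))
assert corrected == codeword and final_syn == 0
```

Each cyclic shift of the syndrome register tracks the error position moving toward X^(n-1), so one fixed detector corrects an error anywhere in the word, which is exactly the Meggitt principle stated above.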

4.2.3 Error Trapping

In principle, the Meggitt decoder applies to any cyclic code, but refinements are necessary for practical implementation. One such refined method, known as error trapping decoding, employs a simple combinational logic circuit for error detection and correction. It is effective for decoding single, double, and burst error correcting codes. However, when it is applied to long and high rate codes with large error correcting capability, it becomes ineffective.

The basic idea of error trapping is to cyclically shift the error pattern E(X) and the syndrome S(X) in step with each other. If all of the errors of E(X) are confined to some (n-k) bit window, then at some point the syndrome will contain a correctable error pattern. Remember however that not all applications of the error trapping decoder have the same capabilities.

There are two architectures of decoders that can be designed to implement the error trapping technique. Both are to be used with an ℓ error correcting code.

1. Unmodified decoder - confines the error pattern to the ℓ low order parity check positions. (Syndrome = S1(X))

2. Modified decoder - confines the error pattern to the ℓ high order parity check positions. (Syndrome = S(X))

The differences between the two are important. Basically, the two architectures of decoders differ in speed of correction and the process of correction. The detailed differences will be seen in later examples.

The modified error trapping decoder is shown in Figure 4.5. A special case which applies to burst errors of length ℓ is considered. An error trapping decoder such as this one detects whether the n-k-ℓ low order bits of S(X) are zeroes by the use of a zero detector consisting of an (n-k-ℓ)-input OR gate. The error trapping decoder operates as follows:

1. Clear the syndrome register.

2. Gate 1 is turned on, which enables the feedback, and gates 2 and 3 are off. The syndrome is formed by shifting the entire received word into the syndrome register. However, only the k information bits are required to be stored in the buffer because it is not always necessary to correct the check bits.

FIGURE 4.5. Error trapping decoder for a burst EDCC. Includes the multiplication by X^(n-k) (modified technique).

3. Gate 3 is turned on to enable the output. The syndrome register is shifted with no input. As soon as its n-k-ℓ leftmost stages contain only zeroes, its ℓ rightmost stages contain the burst pattern. Simultaneous to this action, the buffer is being output. Since the error pattern has not been detected, it is assumed that the bits are error free.

4. Having detected all zeroes in the first n-k-ℓ stages, gate 2 is turned on to start the correction. Correction is thus accomplished by adding (modulo-2) the syndrome, which represents E(X), to V(X).

5. If the n-k-ℓ stages never contain all zeroes, then an uncorrectable error has been detected and an error flip-flop is set.

The following special types of bursts cannot be corrected by an ℓ-burst error trapping decoder like the one just described:

1. End around burst - a burst that occupies the i high order positions of a word and the ℓ-i low order positions. See Figure 4.6.

2. All bursts of length longer than ℓ.

3. Multiple bursts - more than one burst.
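The trapping idea can be sketched by cyclically rotating the received word instead of modelling the shift register gates; the two views are equivalent for cyclic codes. The (15,9), b = 3 burst correcting code that Section 4.3 examines in detail is assumed, and all names are illustrative.

```python
# Error trapping sketch for the b = 3 burst correcting (15,9) code,
# G(X) = 1 + X^3 + X^4 + X^5 + X^6.  The word is rotated until the
# burst is trapped in the b low order positions of the syndrome.

G = 0b1111001
N, B = 15, 3

def gf2_mod(a, g):
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def rotl(word, i):
    """Cyclic left shift of an N-bit word by i places."""
    mask = (1 << N) - 1
    return ((word << i) | (word >> (N - i))) & mask if i else word

def trap_decode(r):
    for i in range(N):
        s = gf2_mod(rotl(r, i), G)
        if s < (1 << B):                  # burst trapped in the b low bits
            return rotl(rotl(r, i) ^ s, N - i)
    return None                           # uncorrectable: set the error flip-flop

codeword = 0b101100100011110              # V(X) of Section 4.3, a multiple of G(X)
burst = 0b111 << 12                       # 3-bit burst in the top of the message
assert trap_decode(codeword ^ burst) == codeword
```

When no rotation ever traps the burst, the function signals an uncorrectable error, matching step 5 of the procedure; end-around and longer bursts are still outside this decoder's guarantee.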

4.2.4 Trial and Error Decoding

In this method the decoder is designed in such a way that it will never cause a decoding failure. For this reason it is sometimes used

<- ℓ-i ->                     <- i ->
| E E X X X X X X X X X X E E E |
  CHECK BITS            DATA

WHERE E = ERRONEOUS DATA
      X = CORRECT DATA

Figure 4.6. Format of end around burst of length ℓ.

with error trapping. The decoder inverts j bits at a time, recalculates the syndrome, and then tries to decode using a (t-j) error correcting procedure. If decoding is successful, an error pattern of weight less than or equal to t has been corrected. Otherwise, the j bits are reinverted, another set is inverted, and the process is repeated.

The reason why this guessing followed by a false correction can never cause a decoding failure can be explained by examining the worst case condition as follows:

System produced errors t errors

Guessing produced errors j errors

Decoder produced errors t-j errors

By summing the individual errors, a total of 2t errors have been made.

However the code has minimum distance 2t + 1, so the result of this worst case correction process cannot be a code word and will be detected by a nonzero syndrome. A major disadvantage of this trial and error method is that up to (n choose j) trials may be required for decoding, limiting its applicability.

4.2.5 Majority Logic Decoding

This is a special type of Meggitt decoder in which the error pattern detector is implemented with majority (voting) gates. The decoder, however, can only be used with a specific type of codes known as majority-logic decodable codes. A concept imperative to these codes is the formation of orthogonal parity check sums. In order to understand the structure of these codes, a few definitions have to be presented. First, the syndrome is made up of individual bits called syndrome bits, which are the sum of the error bits specified by the corresponding row of the H (parity check) matrix. A sum of these syndrome bits, which is called a parity check sum, is just the sum of their error bits. A set of check sums s1, s2, ... sj is defined to be orthogonal on a particular error bit e_i if each sum contains e_i and no other bit appears in more than one check sum. For example, the three check sums shown below are orthogonal on error bit e1:

s1 = e1 + e2
s2 = e1 + e3
s3 = e1 + e4

Since majority-logic decodable codes are cyclic random error correcting codes, it is important to know the minimum distance. This is usually expressed as follows: if a cyclic code has d-1 check sums orthogonal on any bit, then it has minimum distance d.

Basically the decoding amounts to determining the error bits. For instance, if the value of d-1 check sums on any bit e_i is known, and all other error bits are zero, then the value of e_i is known. The erroneous bit is corrected and decoding continues. In the case of multiple errors a few other error bits are nonzero, but most of the check sums will equal e_i. A decision can be made which will find e_i by taking a majority vote of the check sums, provided (d-1)/2 or fewer errors occurred. This will enable the other erroneous bits to be located and corrected.
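A minimal sketch of the majority vote, assuming three check sums orthogonal on e1 of the form s1 = e1 + e2, s2 = e1 + e3, s3 = e1 + e4 (illustrative values only, all sums over GF(2)):

```python
# Majority vote on orthogonal parity check sums.  With d-1 = 3 sums
# the vote recovers e1 whenever at most one error bit is nonzero,
# since each other bit can disturb at most one sum.

def check_sums(e1, e2, e3, e4):
    return [e1 ^ e2, e1 ^ e3, e1 ^ e4]

def majority(bits):
    return 1 if sum(bits) > len(bits) // 2 else 0

assert majority(check_sums(1, 0, 0, 0)) == 1   # e1 itself in error
assert majority(check_sums(0, 0, 1, 0)) == 0   # a different single error
```

In hardware the `majority` function is a single voting gate fed by the check sum circuits, which is why this decoder is so fast.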

The decoder is fast and simple, and would be used widely in random error applications were it not for one large disadvantage: only a very small number of majority-logic codes exist.

4.2.6 Algebraic Procedures

The method of decoding called algebraic procedures is used strictly for decoding a type of random error correcting codes known as BCH codes. These codes conform to the general decoding process, but differ in the way the syndrome is calculated and the error positions are detected. The first step of decoding is to calculate the remainders of the received word modulo the various factors of G(X), which are called the partial syndromes. Then these partial syndromes are used to construct a special decoding polynomial having the error-location numbers as roots. Finally the roots of the decoding polynomial are determined, indicating the error locations, and the appropriate bits are corrected.

This method can be implemented with a moderate amount of hardware.

4.3 Examples of Practical Burst Error Correcting Decoders

In order to gain a better understanding of the two types of decoders that are being considered for the system design in this project, each type will be exemplified in detail. The two types are the unmodified and modified error trapping decoders. Each will decode a cyclic (15,9) code generated by G(X) = 1 + X^3 + X^4 + X^5 + X^6, which has burst error correcting capability b = 3. A comparison between the two will be made for the error free condition as well as for the 3-bit burst which corrupts the three most significant bits of the message.

The circuits designed for the decoders are shown in Figures 4.7 and 4.8. Their simplified procedures are as follows:

Unmodified decoding procedure,

1. Clear syndrome register

2. Enter received word and calculate the syndrome with feedback

enabled (load k-bits in buffer)

3. Rotate buffer while correcting erroneous bits (k shifts)

4. Enable output and transfer data to system (k shifts)

Modified decoding procedure,

1. Clear syndrome register

2. Enter received word and calculate the syndrome with feedback

enabled (load k-bits in buffer)

3. Enable output and correct erroneous bits (shift up to n times)

A table is constructed for both the error free and error induced cases, shown in Tables 4.1 and 4.2 respectively. The received vector used in this simulation of the circuit is V(X) = X + X^2 + X^3 + X^4 + X^8 + X^11 + X^12 + X^14. As seen in the tables, both decoders arrive at the zero syndrome for the error free case in the same amount of time (15 clock pulses). Examination of the second case with errors reveals that the unmodified technique is slower than the modified technique. A factor contributing to this difference in speed is the detecting window, which is at opposite ends of the syndrome register in the two decoders. The timing differences are summarized in Table 4.3.

The Meggitt decoder and the error trapping technique are capable of decoding only alternate received words if the decoding circuit is


FIGURE 4.7(a). Unmodified error trapping decoder for a (15,9) code


FIGURE 4.7(b). Corrector/buffer circuit for the unmodified decoder of figure 4.7(a)

FIGURE 4.8. Modified error trapping decoder for a (15,9) code.

TABLE 4.1. Syndrome generation of an error free received word. The circuit simulated is an error trapping decoder for a (15,9) cyclic code.

SYNDROME REGISTER CONTENTS

CLOCK   INPUT   UNMODIFIED (S1(X))   MODIFIED (S(X))

0 - 0 0 0 0 0 0 0 0 0 0 0 0

1 1 1 0 0 0 0 0 1 0 0 1 1 1

2 0 0 1 0 0 0 0 1 1 0 1 0 0

3 1 1 0 1 0 0 0 1 1 1 1 0 1

4 1 1 1 0 1 0 0 0 1 1 1 1 0

5 0 0 1 1 0 1 0 0 0 1 1 1 1

6 0 0 0 1 1 0 1 1 0 0 0 0 0

7 1 0 0 0 0 0 1 1 1 0 1 1 1

8 0 1 0 0 1 1 1 1 1 1 1 0 0

9 0 1 1 0 1 0 0 0 1 1 1 1 0

10 0 0 1 1 0 1 0 0 0 1 1 1 1

11 1 1 0 1 1 0 1 0 0 0 1 1 1

12 1 0 1 0 0 0 1 0 0 0 0 1 1

13 1 0 0 1 1 1 1 0 0 0 0 0 1

14 1 0 0 0 0 0 0 0 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 0 0 0

TABLE 4.2. Syndrome generation of a corrupted received word. The error is a 3-bit burst located in the most significant bits of the message. The circuit simulated is an error trapping decoder for a (15,9) cyclic code.

SYNDROME REGISTER CONTENTS

CLOCK   INPUT   UNMODIFIED (S1(X))   MODIFIED (S(X))

0 - 0 0 0 0 0 0 0 0 0 0 0 0

1 0 (X) 0 0 0 0 0 0 0 0 0 0 0 0

2 1 (X) 1 0 0 0 0 0 1 0 0 1 1 1

3 0 (X) 0 1 0 0 0 0 1 1 0 1 0 0

4 1 1 0 1 0 0 0 1 1 1 1 0 1

5 0 0 1 0 1 0 0 1 1 1 0 0 1

6 0 0 0 1 0 1 0 1 1 1 0 1 1

7 1 1 0 0 1 0 1 0 1 1 1 0 1

8 0 1 1 0 1 0 1 1 0 1 0 0 1

9 0 1 1 1 1 0 1 1 1 0 0 1 1

10 0 1 1 1 0 0 1 1 1 1 1 1 0

11 1 0 1 1 0 1 1 1 1 1 0 0 0

12 1 0 0 1 0 1 0 1 1 1 0 1 1

13 1 1 0 0 1 0 1 0 1 1 1 0 1

14 1 0 1 0 1 0 1 0 0 1 1 1 0

15 0 1 0 1 1 0 1 0 0 0 1 1 1

16 - 1 1 0 0 0 1

17 - 1 1 1 1 1 1

18 - 1 1 1 0 0 0

TABLE 4.3. Unmodified decoding versus modified decoding

NUMBER OF SHIFTS REQUIRED

DECODING FUNCTION                UNMODIFIED   MODIFIED

Calculation of syndrome          n            n

Correction of information bits   2k           k

Total decoding period            n + 2k       n + k

shifted in step with the received bits. This is referred to as decoding at line speed, and there are several approaches to the problem.

1. Duplicate decoders can be provided to allow all circuits to operate at line speed.

2. Addition of a second n-bit buffer. The purpose of the first buffer is to accept the first received word. At the completion of n shifts the second buffer takes on the role of accepting the received bits; meanwhile the first buffer passes its data to the syndrome generator and then corrects any erroneous bits. After the completion of the second n-bit cycle the roles again change and the second buffer enters the correction phase. During this time the first buffer is accepting new bits while shifting the corrected bits into the output. With this procedure, the decoder must operate at not less than twice line speed.

3. A dual speed syndrome register could be designed so that a complete word can be decoded in a single bit time. The decoder would input the received word at rate A, then switch to rate B (n times faster) and shift n times at the fast rate while correcting and outputting the data. This would all occur between the arrival of the last bit of one word and the first bit of a second word.

4.4 Decoding Shortened Codes

If errors occur too frequently or the number of information bits is constrained by other system requirements, it may be desirable to shorten a code. The possibility of shortening a code was introduced in the encoding chapter, where it was determined that the encoder did not have to be modified for shortened codes. The reason for this is that the code word is generated from the first nonzero bit forward and is unaffected by leading zeroes. Although shortening presented no problems for encoding, it does present a few for decoding.

The circuits designed to decode full length cyclic codes will work, but the syndrome which is calculated in n shifts is aligned with the n-k high order bits of the word. These are the shortened bits so there is no point in correcting them. The unaltered correction procedure would require a total of S shifts before reading the actual received word out of the buffer. The obvious disadvantage is that the

S shifts, which are being made to compensate for the S shortened bits, require extra control circuitry and use up valuable time. There are two solutions to eliminate the S extra shifts.

The first solution is achieved by automatic premultiplication of the received vector by T(X), where

T(X) = X^S MOD G(X)    (4.11)

which results in an unmodified architecture similar to the circuit in Figure 4.7.

The second solution premultiplies the received vector by T(X), where

T(X) = X^(n-k+S) MOD G(X)    (4.12)

This results in a decoder of the modified architecture, which was determined to be superior for performing error correction. Both of the solutions align the computed syndrome with the high order n-k bits of the received word, so correction can start immediately. In order to calculate the premultiplication polynomial T(X), it is necessary to solve Eq. (4.12). However, unless the number of shortened bits is small compared to the degree of G(X), the calculation of T(X) can be tedious and is best done with the aid of a computer program.

As an example of code shortening, consider the (15,9) cyclic code with burst error correcting capability b = 3. Suppose this code is to be used in a system that contains only seven information bits. If used as is, the (15,9) code has two unused information bits which must be decoded every time a word is received. This correction of useless bits wastes two clock pulses and can be eliminated by shortening the code by S = 2 bits. The shortened code becomes a (13,7) code having the same generator polynomial as the unshortened code. For this example, G(X) = 1 + X^3 + X^4 + X^5 + X^6 is used to determine the premultiplication polynomial as follows.

From Eq. (4.12),

T(X) = X^(6+2) MOD G(X)

Remainder = X + X^2 + X^4 = T(X)

The decoding circuit is shown in Figure 4.9. The decoder's operating procedure is exactly the same as the modified decoding procedure described in Section 4.3 except that both n and k are reduced by two.
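The hand calculation above can be confirmed with a short program; the division routine is an illustrative implementation, not the project's circuitry.

```python
# Premultiplication polynomial of Eq. (4.12) for the (15,9) code
# shortened by S = 2 bits: T(X) = X^(n-k+S) mod G(X), n-k = 6.

G = 0b1111001            # 1 + X^3 + X^4 + X^5 + X^6
S = 2

def gf2_mod(a, g):
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

T = gf2_mod(1 << (6 + S), G)
assert T == 0b10110      # T(X) = X + X^2 + X^4, as in Figure 4.9
```

For larger shortenings the same one-line computation replaces the tedious long division mentioned in the text.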

4.5 Encoder/Decoders

The circuits for encoding and decoding can be combined into a single unit which performs two functions. This is possible because both the encoding and decoding algorithms require a similar division, typically using an (n-k)-stage register. The differences are in the number of shifts required to make this division, feedback enable control, and detection/correction control, all of which can be managed by proper timing signals. Combined circuits of this type can be used in systems where the transmission and reception of data do not overlap.

If such is the case, then a substantial savings can be made in components and costs. An encoder/decoder circuit is shown to provide an example of this widely used structure. The circuit is a general application of the modified error trapping architecture encoder and decoder, and can easily be modified to include code shortening (see Figure 4.10). By the selection of the necessary control signals, either of the following procedures can be performed.

FIGURE 4.9. Modified error trapping decoder for a (15,9) code shortened by 2. Premultiplication by T(X) = X + X^2 + X^4 is automatic. G(X) = 1 + X^3 + X^4 + X^5 + X^6.


FIGURE 4.10. Modified error trapping encoder/decoder for a general (n,k) code


Encoding procedure:

1. Clear syndrome register (parity check generator).

2. Enable input and feedback and set the output select logic for information.

3. Shift the k information bits into the register; they are simultaneously output into the channel.

4. Disable the feedback and set the output select logic for check bits.

5. Shift the n-k check bits into the channel.
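Algebraically, the encoding steps compute the checks as X^(n-k) M(X) mod G(X) and append them to the message. A sketch for the (15,9) code, with illustrative names:

```python
# Systematic encoding sketch for the (15,9) code,
# G(X) = 1 + X^3 + X^4 + X^5 + X^6: information bits first (steps 2-3),
# then the n-k parity checks left in the register (steps 4-5).

G = 0b1111001
N, K = 15, 9

def gf2_mod(a, g):
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def encode(message):
    checks = gf2_mod(message << (N - K), G)
    return (message << (N - K)) | checks    # information, then checks

codeword = encode(0b101100100)              # a 9-bit message
assert gf2_mod(codeword, G) == 0            # every code word divides by G(X)
```

The decoder's step 3 uses the same division, which is exactly why the two circuits can share one register.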

Decoding procedure:

1. Clear syndrome register.

2. Enable the input and feedback.

3. Shift the n-bit received word into the register while storing the first k bits in the buffer. At the completion of this time, a zero syndrome indicates valid data, which is then passed to the output.

4. In the event of a nonzero syndrome, set the output select logic for correction and shift up to n times while searching for detected errors.

5. If the number of decoding shifts is less than or equal to n + k and an error is detected, then correction is enabled.

6. When the total number of shifts exceeds k the output is disabled.

7. If after 2n total shifts the error pattern is not detected, then an uncorrectable error has occurred.

CHAPTER V

INTERLEAVING

Cyclic codes designed specifically for error detection and

correction of burst errors can be enhanced by a process known as

interleaving. The process of interleaving is broken down into two

techniques.

1. Symbol interleaving

2. Block interleaving

5.1 Symbol Interleaving

The first technique can be applied by either of two methods.

Method one applies interleaving to code construction forming an

improved code. Method two applies interleaving to existing cyclic

codes to improve their capabilities. The basic difference between the two is the method used for decoding.

5.1.1 Interleaving via Code Expansion (Method 1)

Given an (n,k) cyclic code, it is possible to construct an (In, Ik) cyclic code which is I times as long with I times as many information bits. The resulting code is called an interleaved code, and the parameter I is referred to as the interleaving degree. Interleaving a code with burst correcting capability b to degree I produces a code with burst correcting capability bI. This is made possible because the burst of length bI is broken up into I bursts of length b, each of which lies in a different code word. If the generator polynomial of the original code is G(X), then the generator polynomial for the interleaved code is G(X^I). [Lin, 1970]


For example, suppose a system has burst error correcting requirements of 15 bits. Rather than finding a code specifically to meet this need, one can be generated by interleaving a known code.

The (15,9) cyclic code has burst correcting capability b = 3 and can be interleaved to degree 5 to produce a (75,45) code with b = 15. The generator polynomial G(X) = X^6 + X^5 + X^4 + X^3 + 1 interleaved to degree I = 5 becomes G(X^I) = X^30 + X^25 + X^20 + X^15 + 1.
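The substitution X -> X^I simply multiplies every exponent of G(X) by I. A minimal sketch, assuming the polynomial is represented by its exponent list:

```python
# Interleaving a generator polynomial to degree I: each term X^j of
# G(X) becomes X^(I*j) in G(X^I).  Checked for the (15,9) code
# interleaved to degree 5.

def interleave_poly(exponents, degree):
    return sorted(degree * e for e in exponents)

g_exps = [0, 3, 4, 5, 6]          # G(X) = 1 + X^3 + X^4 + X^5 + X^6
assert interleave_poly(g_exps, 5) == [0, 15, 20, 25, 30]
```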

5.1.2 Interleaving via Manipulation of Code Vectors (Method 2)

Given an (n,k) cyclic code, interleaving can be accomplished by arranging I code vectors of the code into I rows of a rectangular array and then transmitting them column by column. A pattern of errors can be corrected for the whole array if and only if the pattern of errors in each row is a correctable pattern for the code. No matter where it starts, a burst of length I will affect no more than one bit in each row. Therefore bursts are effectively divided into I groups of correctable length, b. As an example, the (15,9) code is arranged into five rows (degree = 5) to demonstrate the scattering of the error bits. Refer to Figure 5.1.

By interleaving short codes or shortened codes, it is possible to construct codes of practically any length. This helps tremendously to fill the gaps where no (n,k) codes exist with desired capabilities.

It also can be interpreted that it reduces the problem of searching for long good codes to searching for good short codes.

Another way to view the two methods of symbol interleaving is to examine their respective generator matrices.

For example, the (7,4) code is generated by G(X) = X^3 + X^2 + 1. Interleaving can be accomplished by both Method 1 and Method 2. If degree I = 3 is chosen, then:

For Method 2, G(X) is not affected, so

G = [ I4 | P ],  P =

1 1 0
0 1 1
1 1 1
1 0 1

while for Method 1 the code becomes (21,12) with

G = [ I12 | P' ],  P' =

1 0 0 1 0 0 0 0 0
0 1 0 0 1 0 0 0 0
0 0 1 0 0 1 0 0 0
0 0 0 1 0 0 1 0 0
0 0 0 0 1 0 0 1 0
0 0 0 0 0 1 0 0 1
1 0 0 1 0 0 1 0 0
0 1 0 0 1 0 0 1 0
0 0 1 0 0 1 0 0 1
1 0 0 0 0 0 1 0 0
0 1 0 0 0 0 0 1 0
0 0 1 0 0 0 0 0 1

The hardware requirements will be different. In Method 1, the encoders and decoders are similar to those previously described except that the parity check register and syndrome register will be expanded to I (n-k) of the original code or the new n-k of the interleaved code.

For the preceding example this is 9 stages. Method 2 uses the same encoder/decoder, but requires additional circuitry to control and generate the array in which interleaving takes place. Although

Method 1 is the simplest to implement, the real trade off takes place in the decoding correctability.

Method 1: Since the interleaved code is an independent code

with an enlarged syndrome, only one long burst can

be corrected.

Method 2: Each row of the fabricated matrix known as subcodes

can be decoded individually. This permits correction

of many other error patterns in addition to single

bursts.

The advantage of Method 2 relies on the fact that interleaving a b error correcting code to degree I produces a code capable of correcting all single bursts of length Ib or less, any two bursts of length Ib/2 or less, or any b bursts of length I or less. [Peterson and Weldon, 1972]

The implementation of Method 2 interleaving can be carried out by either an I-encoder circuit or an array manipulation circuit, shown in Figures 5.2 and 5.3 respectively.


WHERE E = ERROR LOCATION

FIGURE 5.1. Symbol interleaving the (15,9) code to degree 5. The burst correcting capability is expanded from b=3 to b=15. The order of transmission is indicated by the numbers in the box. (1 transmitted first.)


FIGURE 5.2. Interleaving via I encoders.

5.1.2.1 I-Encoder Interleaving

The technique of interleaving by using I encoders would require I decoders, or the replacement of all of them by I encoder/decoders. The parallel nature would add extra control circuitry to alternate between paths. If the modified n-k architecture were used in the encoder, then the first bit would take n-k shifts to enter the channel, the same as without interleaving. The disadvantage of such a circuit is that for large I, there is a lot of overhead (additions to control circuitry).

5.1.2.2 Array Manipulation Interleaving

Array manipulation is the most straight forward of the inter­

leaving circuits. It is derived from the theory which essentially

takes the stream of encoded data and segments it into I sections.

The data output of the encoder is stored in a one-dimensional memory

(1 bit x N RAM). Although the storing of data is serial it is con­ venient to refer to it as being stored in rows and columns whereby

each row is an encoded word. Once all Ib locations are full the memory can be output in interleaved format by transmitting what was

referred to as columns. The hardware requirements for this circuit,

specifically the memory suggests applications with large interleaving

degree I to make use of dense RAM chips commercially available.

Both circuits described produce the same interleaved data

stream with the I encoder circuit being able to output the first

interleaved bit in less time than array manipulation. This importance

is negated in system design where dual arrays can be designed for con­

tinuous encoding. The interleaved data stream is shown in Figure 5.4. 88
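The row/column array idea can be sketched directly, assuming the code words are given as bit strings; all names are illustrative.

```python
# Array manipulation interleaver: I encoded words are written as rows
# of an array and transmitted column by column, so a channel burst of
# length I touches at most one bit of each row (code word).

def interleave(words):
    """words: list of I equal-length rows -> column-by-column stream."""
    return "".join(row[i] for i in range(len(words[0])) for row in words)

def deinterleave(stream, degree):
    return ["".join(stream[i::degree]) for i in range(degree)]

rows = ["110100101", "011010011", "101001110"]     # I = 3 toy "code words"
stream = interleave(rows)
assert deinterleave(stream, 3) == rows
# A burst covering channel positions 4..6 lands in three different rows:
assert {i % 3 for i in range(4, 7)} == {0, 1, 2}
```

Channel position p carries bit p // I of row p % I, which is the scattering property Figure 5.1 illustrates.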


FIGURE 5.3. Interleaving via array manipulation

BEFORE INTERLEAVING

WORD 2 WORD 1

AFTER INTERLEAVING, DEGREE I = 2

WHERE P_i = PARITY CHECK BITS
      m_i = MESSAGE BITS

FIGURE 5.4. A symbol interleaved data stream, I = 2

In order to demonstrate the advantage of interleaving, a Fire code which corrects 10-bit burst errors is designed with and without interleaving.

Without interleaving:

The analytical equations for Fire codes are

n = (2b-1)(2^b - 1)

n-k = 3b-1

For b = 10:

n = [2(10)-1][2^10 - 1] = 19(1023) = 19437

n-k = 3(10)-1 = 29

Therefore the Fire code is (19437,19408).

With interleaving (degree I = 2):

For b = 5:

n = [2(5)-1][2^5 - 1] = (9)(31) = 279

n-k = 3(5)-1 = 14

Therefore the Fire code is (279,265).

In comparison, the uninterleaved code will require a larger syndrome register, more shortening to achieve an equal code length, and can only correct one long burst. The advantages it has are simplicity and the fact that a special interleaving circuit is not required.
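The two designs can be checked against the Fire code equations with a short helper (the function name is my own):

```python
# Fire code design equations: n = (2b-1)(2^b - 1), n-k = 3b-1.

def fire_params(b):
    n = (2 * b - 1) * (2 ** b - 1)
    return n, n - (3 * b - 1)            # (n, k)

assert fire_params(10) == (19437, 19408)   # uninterleaved design
assert fire_params(5) == (279, 265)        # base code for the I = 2 design
```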

5.2 Block Interleaving

Not all codes can be symbol interleaved and retain their correctability; this problem is solved by the second interleaving technique, known as block interleaving. [Peterson and Weldon, 1972] This technique divides the data stream into segments of length b, where all segments (blocks) are adjacent on the channel.

The Burton codes can be block interleaved to achieve better code capabilities. The improved parameters are:

n = In'
b = Ib' - (b'-1)          (5.1)
n-k = 2Ib'

where n' and b' are the parameters of the base code.

For example, the Burton code (889,875) is interleaved to degree I = 4 to compare its increased burst capability. The uninterleaved burst correcting capability b' can be determined from the analytical structure of the Burton code, (n,k) = (n, n-2b').

Solving for b',

875 = 889 - 2b'
b' = 7

Substitution into Eq. (5.1) yields,

b = Ib' - (b'-1)
  = 4(7) - (7-1)
  = 22
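Eq. (5.1) and the comparison with symbol interleaving can be checked directly (illustrative names):

```python
# Block interleaved Burton code burst capability, Eq. (5.1):
# b = I*b' - (b'-1), applied to the (889,875) example.

def burton_block_burst(b_base, degree):
    return degree * b_base - (b_base - 1)

b_base = (889 - 875) // 2            # n-k = 2b'  ->  b' = 7
assert b_base == 7
assert burton_block_burst(b_base, 4) == 22
assert 4 * b_base == 28              # symbol interleaving, for comparison
```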

Unlike symbol interleaved codes, the block interleaved burst correcting capability is not an integral multiple of its base parameter. In fact the gain in correctability is less than that of symbol interleaving. For instance, in the previous example symbol interleaving would produce b = b'I = 28. This is a disadvantage of selecting block codes when interleaving is to be used.

CHAPTER VI

EDCC SYSTEM DESIGN

6.1 Introduction

Storage devices for full scale computers and minicomputers have required circuitry as part of their controllers for error detection and correction since their inception. However, as improvements in recording technology increase areal density and data transfer speed, the demands on the error correction system grow. The effects of these technological innovations in a recording system are to increase the number of errors and their length. In order to satisfy the new demands and maintain reliability, an error correction system has to be designed with these new parameters. A commercial recording medium which uses error correction is the disk. Two types of disks are generally used; magnetic disks, which have been part of systems for many years, and optical disks, which are a new technology.

In order to demonstrate the material presented in previous chapters, a system will be designed. Because the need for error correction is so great in disk systems, a disk storage system will be assumed. To avoid similarities with present systems and to anticipate technological advances in the next generation of systems, the requirements were chosen with these restrictions in mind. The requirements for the system design are a sector length of 45 bytes, burst error correction capability of 100 bits, and a mistakability double that of the best known systems, or approximately 2 x 10^-20. In comparison with the disk storage systems researched, the sector size is one fourth and the correction capability ten times greater. To prove that the system designed works, a computer simulation of the entire system has been done.

A general disk storage (disk) system architecture is presented in Figure 6.1 to show the modules that comprise the system. A description of each module follows.

Sector Buffer - storage location for information to be encoded, and decoded information. Interfaces with user system (computer).

Parallel Serial Parallel Converter - a register which converts parallel information to serial to be encoded and converts decoded serial information to parallel.

Error Detecting and Correcting System - contains encoding and decoding circuits to safeguard error free transmission.

Modulator/Demodulator - converts and records digital data on magnetic media and vice versa.

Disk - magnetic storage media.

The error detecting and correcting system is the area of major concentration in the design, shown separately in Figure 6.2. A description of each module follows.

Encoding Circuit - generates parity check bits and attaches the bits to the information bits during the write operation.

Decoding and Error Detecting and Correcting Circuit - generates a syndrome from which errors can be detected and corrected during the read operation.

Interleaving/Deinterleaving Memory - separates code words into smaller pieces and alternates pieces with pieces from other code words and vice versa.

FIGURE 6.1. Block diagram of a disk system.

FIGURE 6.2. Error detecting and correcting system.

6.2 Code and Interleaving Degree Selection

The error detecting and correcting system being designed is based on two codes. The first code is known as the error detecting code (EDC) and serves to protect the 45 bytes of data from any erroneous corrections made by the EDCC described later. To guarantee that no more than approximately one out of a million corrections results in an erroneous correction, 20 parity check bits are necessary. A cyclic code can now be selected with k (45 bytes) and n-k (20 bits) known. See Figure 6.3.

6.2.1 Determination of G(X) for the EDC

The generator polynomial G(X) is chosen to satisfy the following requirements:

1. A degree of 20

2. Divides (X^n - 1) for n >= 380 (for n > 380 the code has to be shortened)

3. Generates a code that has the largest minimum distance

4. A minimum number of terms

With the aid of mathematical tables which list polynomials, the determined function is the following.

G(X) = 1 + X^3 + X^20                                   (6.0)

It is also known from the table that this polynomial is irreducible and primitive. A mathematical fact is that an irreducible polynomial of degree m is primitive if and only if the smallest n for which it divides X^n - 1 is n = 2^m - 1. As applied to this case of degree equal to twenty, the minimum natural length of the EDC will be 2^20 - 1 = 1,048,575. Because the desired length is 380 bits, the code will be shortened. The resultant code rate, R, is 0.947.
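The primitivity argument can be verified numerically. In the illustrative sketch below (not part of the original text), polynomials over GF(2) are represented as integer bit masks, and the EDC generator polynomial G(X) = 1 + X^3 + X^20 from Figure 6.10 is checked to have order 2^20 - 1 = 1,048,575 (the prime factors of 2^20 - 1 are 3, 5, 11, 31 and 41).

```python
def polymulmod(a, b, g):
    """Multiply two GF(2) polynomials (bit masks) modulo g."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a.bit_length() == g.bit_length():   # degree reached deg(g): reduce
            a ^= g
    return r

def polypowmod(x, e, g):
    """Compute x**e modulo g over GF(2) by square and multiply."""
    r = 1
    while e:
        if e & 1:
            r = polymulmod(r, x, g)
        x = polymulmod(x, x, g)
        e >>= 1
    return r

G = (1 << 20) | (1 << 3) | 1     # G(X) = 1 + X^3 + X^20
order = 2 ** 20 - 1              # 1,048,575 = 3 * 5^2 * 11 * 31 * 41
assert polypowmod(2, order, G) == 1                                # X^order = 1
assert all(polypowmod(2, order // p, G) != 1 for p in (3, 5, 11, 31, 41))
print(order)                     # → 1048575, minimum natural length of the EDC
print(round(360 / 380, 3))       # → 0.947, rate of the shortened (380,360) code
```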

The second code will be referred to as the error detecting and correcting code (EDCC), which serves to protect the 45 bytes of data plus the 20 bit parity check from errors up to and including 100 bits. Thus the k for this code is the sum of the information bits in the sector plus the detection check bits. See Figure 6.4.

A generally accepted fact is that Fire codes are well suited for correction on disks. This is true because Fire codes provide correction capability against short to moderately long bursts with a high rate, and they are economical since implementation is not complicated. For this reason the EDCC selected will be a Fire code.

6.2.2 Determination of the Interleaving Degree (EDCC)

Since the system structure includes interleaving, it is possible to subdivide the code length and still achieve the correction capability. For all feasible interleaving degrees and code lengths to be considered, all the integral factors of the EDCC information length have to be calculated. All of the factors should be compared for the system's operation before the interleaving degree and code length are selected. See Table 6.1.

FIGURE 6.3. EDC format, (n,k) = (380,360).

FIGURE 6.4. EDCC combined data field for code length.

TABLE 6.1. Possible code lengths and interleave degrees with 380 data bits

CODE   FACTOR 1 (INT DEGREE)   FACTOR 2 (CODE LENGTH)
  1              1                      380
  2              2                      190
  3              4                       95
  4              5                       76
  5             10                       38
  6             19                       20
  7             20                       19
  8             38                       10
  9             76                        5
 10             95                        4
 11            190                        2
 12            380                        1
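Table 6.1 can be regenerated mechanically. The illustrative sketch below enumerates the factor pairs of the 380-bit information length, taking the first factor as the interleave degree and the second as the code information length.

```python
def factor_pairs(k):
    """All (interleave degree, code length) pairs with degree * length == k."""
    return [(d, k // d) for d in range(1, k + 1) if k % d == 0]

pairs = factor_pairs(380)
for code_no, (degree, length) in enumerate(pairs, start=1):
    print(code_no, degree, length)   # reproduces the 12 rows of Table 6.1
```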

From the second system requirement, which is burst correctability b = 100, the number of possibilities will be narrowed. This is so because not all of the code factor pairs can be interleaved to obtain b >= 100 without a tremendous amount of overhead.

The most straightforward design would be to select a code to correct a burst of length 100 in a single syndrome register. The interleaving would be unnecessary and the system architecture could be simplified. Applying the Fire code formulas to achieve b = 100 requires that,

n-k = 3(100)-1 = 299                                    (6.1)

The resultant code is very large, but it can be shortened to obtain the desired number of information bits k, to meet the system's requirements. The shortened code is (679,380), which has a code rate R = 0.560. This is substantially lower than the rate of the base code; however, it cannot be avoided because code shortening deletes information bits and no parity check bits. A particular disadvantage of this code is that the syndrome register requires 299 stages, which is impractical.

On the other extreme of code selection is the process of interleaving a small length code to a large degree. The two shortest length codes (No. 11, 12) can be eliminated because codes do not exist for k < 3. Code 10 has the smallest feasible code length of 4 in this design and is associated with an interleave degree of 95. Two alternatives are possible, both of which present problems: overdesign and underdesign. The first alternative is to interleave a (9,4) Fire code with b = 2 to degree 95 to meet the burst requirement. This results in a code capable of correcting 190 bits, which has the disadvantage of a low code rate, R = 0.44. The second alternative is to interleave a single bit error correcting code to degree 95, which falls 5 bits short of meeting the burst requirement. Thus this code is not a good choice for the EDCC.

A complete set of codes formed from the factors are listed in Table 6.2 with their respective properties. Additional data was computed for a system with 100 bytes of information so comparisons could be made. For instance, during the discussion of code extremes, the disadvantage of overdesign of correctability was demonstrated.

In order to see this more clearly a graph of burst length versus code rate is shown in Figure 6.5. From this graph it is noticed that as the burst length to be corrected increases, the code rate decreases.

In addition, for a fixed burst error length, the larger the sector size, the higher the code rate.

From the information contained in Table 6.2 the selection of a code and interleaving degree will be made. Two important system criteria are reliability (probability of an error) and speed (throughput rate). The proposed system architecture, which inserts an additional code (EDC) specifically for detection, allows the reliability constraint to be more independent of the throughput rate. With this freedom the throughput rate (code rate) can be designed for maximum efficiency. For instance, a small increase in code rate of 0.01 translates to 10,000 fewer clock cycles over one million information bits.

To find the point of maximum code rate, a graph is shown in Figure 6.6 showing the relationship between code rate and interleaving degree. Analysis of the curves shows a very slight increase in the code rate with increasing interleave degree, and that regardless of interleave degree the code rate is higher in a system with the larger sector size.

TABLE 6.2. Code selection process for a sector of 380 bits and b >= 100

CODE   LENGTH, k    (n,k)      INTERLEAVE DEGREE, I   RATE (k/n)    b    b x I
  1       380     (679,380)             1               0.560      100    100
  2       190     (339,190)             2               0.560       50    100
  3        95     (169,95)              4               0.562       25    100
  4        76     (135,76)              5               0.563       20    100
  5        38     (67,38)              10               0.567       10    100
  6        20     (37,20)              19               0.540        6    114
  7        19     (33,19)              20               0.576        5    100
  8        10     (18,10)              38               0.560        3    114
  9         5     (13,5)               76               0.380        3    228
 10         4     (9,4)                95               0.440        2    190

Therefore the code selected for this design is code No. 7, where (n,k) = (33,19), with burst correction capability of 5 bits and interleave degree I of 20.
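The candidates in Table 6.2 can be reproduced programmatically. The sketch below is illustrative and rests on an inference from the table rather than a stated rule: each of the I interleaved codes is assigned the smallest burst capability b of at least ceil(100/I), raised when the natural Fire code would be too short to supply the required segment length, and is then shortened so that k information bits remain.

```python
def ceil_div(a, b):
    return -(-a // b)

def candidate(k, I, burst_req=100):
    """Shortened, interleaved Fire code protecting k bits per segment."""
    b = ceil_div(burst_req, I)                     # burst each code must correct
    # natural k of the Fire code must cover the segment length (inferred rule)
    while (2 * b - 1) * (2 ** b - 1) - (3 * b - 1) < k:
        b += 1
    n = k + 3 * b - 1                              # shorten natural code to k
    return n, b, k / n, b * I

table = {no: candidate(k, I)
         for no, (k, I) in enumerate(
             [(380, 1), (190, 2), (95, 4), (76, 5), (38, 10),
              (20, 19), (19, 20), (10, 38), (5, 76), (4, 95)], start=1)}
n, b, rate, bI = table[7]
print((n, 19), b, round(rate, 3), bI)   # → (33, 19) 5 0.576 100
```

The computed rates confirm that code No. 7 has the highest k/n among the candidates meeting b x I >= 100.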

Other advantages which make this code a good selection are:

1. The codes with higher code rates require less memory.

2. A simpler generator polynomial requires fewer register stages. For instance, code No. 7 requires n-k = 14 flip flops while in contrast code No. 1 requires n-k = 299 flip flops.

3. A less complex premultiplication polynomial when the degree of G(X) is smaller.

FIGURE 6.5. Relationship between code rate and burst correction capability.

FIGURE 6.6. Relationship between code rate and interleaving degree.

6.2.3 Determination of G(X) for the EDCC

The base code before shortening is a (279,265) Fire code. See Figure 6.7.

Given an (n,k) Fire code, the generator polynomial can be found from the analytical formulas which define Fire codes.

G(X) = (X^(2b-1) + 1) f(X)                              (6.2)

where f(X) is an irreducible polynomial of degree b.

The f(X) polynomial can be found in many books, such as Appendix C in Peterson and Weldon's text "Error Correcting Codes." The polynomials f(X) are given in right justified octal notation; for example, 35 = 011101 = X^4 + X^3 + X^2 + 1. For degree 5, three such polynomials can be found. See Figure 6.8.

The best choice of polynomials is the one with the fewest terms or nonzero coefficients. This is advantageous because it results in fewer feedback connections. By this reasoning f1(X) = X^5 + X^2 + 1 should be used. Also for convenience, the appendix in the text lists the optimum polynomial first.

Substitution into Eq. 6.2 yields

f]c, (X2(5)-1 5 2 G(X) = + 1) (x + x + 1) u\ ...

\" (6. 3) -\:·· = x14 + x11 + x9 + x5 + x2 + 1 "-' G(X) "* 106

FIGURE 6.7. Format for system base EDCC.

POLYNOMIAL                                OCTAL REPRESENTATION

1. f1(X) = X^5 + X^2 + 1                          45
2. f2(X) = X^5 + X^4 + X^3 + X^2 + 1              75
3. f3(X) = X^5 + X^4 + X^2 + X + 1                67

FIGURE 6.8. Irreducible polynomials for determination of G(X).

The generator polynomial, G(X), will be the divisor used to obtain the parity checks for the EDCC. If code shortening were not needed, the complete feedback pattern for both the encoder and decoder would be:

1 0 1 0 0 1 0 0 0 1 0 1 0 0 1

corresponding to the feedback circuit in Figure 6.9.

6.3 Encoding Process

6.3.1 EDC Encoding

The EDC generator accepts 45 bytes of information and creates 20 parity check bits. By regulating the control signals, a continuous data stream of 380 bits can be formed at the multiplexer output. Four control signals are necessary to accomplish this task, which are described below. See Figure 6.10.

GXEN (EDC) - has the capability to enable or disable the feedback pattern determined from the EDC generator polynomial.

SEL DATA (EDC) - selects the data to be output from either the input information or the data in the EDC generator register.

CLK ENABLE (EDC) - controls the shifting operation by inhibiting the clock when shifting is not desired.

CLEAR (EDC) - resets the shift registers to the zero state.

FIGURE 6.9. Feedback pattern for G(X) = 1 + X^2 + X^5 + X^9 + X^11 + X^14.

FIGURE 6.10. EDC generation circuit, G(X) = 1 + X^3 + X^20.

6.3.2 EDCC Encoding

The output of the EDC encoder represents the input to the EDCC encoder; however, it is broken into twenty segments of length nineteen before encoding. Only one segment is passed to the EDCC generating register at a time. For each segment, fourteen parity check bits are created, which can be attached to the data stream in a continuous operation by regulating the control signals. This process is repeated until all 45 bytes and 20 bits of input are encoded and the parity check bits are output. Four control signals are necessary to accomplish this task, shown in Figure 6.11 and described below.

GXEN (EDCC) - has the capability to enable or disable the feedback pattern determined from the EDCC generator polynomial.

SEL DATA (EDCC) - selects the data to be output from either the input information or the data in the EDCC generation register.

CLK ENABLE (EDCC) - controls the shifting operation by inhibiting the clock when shifting is not desired.

CLEAR (EDCC) - resets the shift register to the zero state.

6.3.3 Combined EDC and EDCC Encoding

In this system, both encoders are in action at the same time. Having presented the encoding operation of both the EDC and the EDCC, the system as a whole can now be described. Refer to Figure 6.12. The interface between the two circuits is serial. The data out of the EDC encoder is the input to the EDCC encoder. To comprehend how the full encoding circuit functions, the timing has to be understood. A detailed system timing diagram will be introduced later; however, there are three basic operating modes shown in the flowchart of Figure 6.13.

Mode 1: Generation of Parity Check Bits - the period of time when both the EDC and EDCC are calculating the check bits. This mode is not continuous because it is interrupted by mode 2.

Mode 2: Output Parity Check Bits - the period of time when the EDCC is outputting the 14 check bits.

Mode 3: Encoding of EDC Parity Check Bits - the period of time after the 45 bytes of information have been encoded, when the 20 bit parity check is encoded in the EDCC.

FIGURE 6.11. EDCC generation circuit.

FIGURE 6.12. Combined encoding circuit.

FIGURE 6.13. Flow chart of encoding process.

6.4 Decoding and Error Correction

To retrieve data from the disk, it is necessary to decode the data and extract the information bits from the check bits. During this process some corrections may be required. The basic steps are listed below.

1 - demodulate the recorded data from the magnetic disk.

2 - deinterleave the data in order to reformat it into code words which can be decoded.

3 - decode twenty 33 bit segments in the EDCC generator and correct erroneous bits.

4 - decode the EDC and determine if the information decoded is correct.

5 - convert all decoded information to parallel and store in the sector buffer.

6.4.1 EDCC Decoding

The input to the EDCC decoder represents the reconstructed code words received from the deinterleaver circuit. This consists of twenty segments of 33-bit code words, which totals 660 bits. The decoding process of each segment can be separated into two phases; syndrome calculation and error correction. The first phase, which is always necessary, accepts a 33-bit code word from which the syndrome is calculated in 33 clock cycles. A test is made and if the syndrome equals zero, the next code segment can be decoded, and so on until all the data has been decoded. However, if the syndrome register is nonzero, which is determined with the use of a zero detector circuit, phase two of the decoding process must be performed. In order to perform the error corrections it is necessary to include a decoding buffer to store the received code words. The operation described is performed by the circuit in Figure 6.14.
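The syndrome phase amounts to dividing the received 33-bit word by G(X) in the 14-stage register. The sketch below is a software model of that behavior only, not the hardware: a valid code word leaves a zero syndrome, while a corrupted word leaves a nonzero one. The 19-bit message value is arbitrary, chosen for illustration.

```python
G = 0b100101000100101        # G(X) = X^14 + X^11 + X^9 + X^5 + X^2 + 1

def poly_mod(a, g):
    """Remainder of the GF(2) polynomial a divided by g (both bit masks)."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def encode(msg, g):
    """Systematic cyclic encoding: append the n-k parity bits to msg."""
    nk = g.bit_length() - 1
    return (msg << nk) | poly_mod(msg << nk, g)

word = encode(0b1011001110100101101, G)   # one 19-bit information segment
print(poly_mod(word, G))                  # → 0, zero syndrome: segment accepted
corrupted = word ^ (0b11011 << 7)         # inject a 5-bit burst error
print(poly_mod(corrupted, G) != 0)        # → True, nonzero syndrome flags it
```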

The circuitry inside the zero detectors consists of OR gates connected in a tree structure. This functions to indicate whether one or more bits are nonzero. The output OR gate, or the root of the tree, should be a NOR gate to invert the logic, which will be described later.

The control signals which are needed to make the EDCC decoding and error correction circuit work are listed below.

GXEN (EDCC) - enables or disables the G(X) feedback pattern from going to the syndrome register. When disabled, no division can take place, thus no syndrome is calculated.

TXEN (EDCC) - enables or disables the T(X) feedback pattern associated with the premultiplication polynomial.

SEL DATA (EDCC) S1,S2 - selects the data to be stored in the decoding buffer.

CLK ENABLE (EDCC) - controls the shifting operation by inhibiting the clock when shifting is not desired.

CLEAR (EDCC) - resets the syndrome register to the zero state.

HOLD BUFFER - controls when data can be stored and read out of the decoding buffer register.

FIGURE 6.14. EDCC decoding and error correction circuit.

DATA ENABLE (EDCC) - controls the input data to the decoder. It allows either serial data or no data (zero state).

6.4.1.1 Premultiplication Polynomial

Examination of the EDCC decoding circuit in Figure 6.14 reveals additional feedback connections which were not present in the EDCC encoder. The reason for these is that the system requires only information segments of length k = 19 and not the full length of the code selected. The base Fire code (279,265) is shortened by 246 bits to achieve a (33,19) shortened Fire code (Figure 6.15).

The modified architecture shortening technique is preferred because less hardware is required and the correction cycle is faster.

From Eq. (4.8) the premultiplication polynomial can be computed.

T(X) = X^(n-k+S) MOD G(X)                               (6.4)

For the system in concern,

T(X) = X^(33-19+246) MOD G(X) = X^260 MOD G(X)          (6.5)

FIGURE 6.15. Format of the shortened Fire code.

The value of T(X) was calculated by the computer program in Figure 6.16 and found to be,

T(X) = X^11 + X^8 + X^2                                 (6.6)

The result was very good because of the few terms required to perform the premultiplication. Additional terms would cause the part count to increase. Since the generator polynomial of the code has degree n-k, the worst case premultiplication polynomial could have degree n-k-1 = 13.

6.4.2 Error Correction Algorithm

The second phase in retrieving data from the disk, which occurs only if the syndrome is not zero, is called the correction phase. In order to implement this correction process the error trapping technique is used, where the syndrome register of the EDCC is divided into two sections. See Figure 6.17.

1) The first 9 bits (Position Register)

2) The remaining 5 bits (Error Pattern Register)

POSITION REG. (WINDOW)    ERROR PATTERN REG.
        9 BITS                  5 BITS
<--------------- SYNDROME REG --------------->

FIGURE 6.17. Subdivisions of the syndrome register.

      PROGRAM TX
C     PURPOSE: CALCULATE PREMULTIPLICATION POLYNOMIAL
C     FOR SHORTENED CODES
      INTEGER*2 DATR(261),FEEDBK(14),SHIFTR(14)
      OPEN(UNIT=6,NAME='SY:TX.LST',TYPE='NEW')
      DATA FEEDBK/1,0,1,0,0,1,0,0,0,1,0,1,0,0/
      DATR(1) = 1
      IFLAG = 0
      DO 1000 J=1,261
        CALL SHIFT(SHIFTR,DATR(J))
        IF(IFLAG.EQ.0) GOTO 50
        DO 200 JJ=1,14
          SHIFTR(JJ) = SHIFTR(JJ) + FEEDBK(JJ)
          IF(SHIFTR(JJ).NE.1) SHIFTR(JJ)=0
  200   CONTINUE
   50   PRINT 10,J,SHIFTR
   10   FORMAT(1X,I3,14I2)
        IFLAG = SHIFTR(14)
 1000 CONTINUE
      CLOSE(UNIT=6,DISPOSE='PRINT')
      STOP
      END

      SUBROUTINE SHIFT(SHIFTR,INPUT)
      INTEGER*2 SHIFTR(14)
      DO 100 I=13,1,-1
        SHIFTR(I+1) = SHIFTR(I)
  100 CONTINUE
      SHIFTR(1) = INPUT
      RETURN
      END

FIGURE 6.16. Fortran program calculation of T(X) for the EDCC.
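The Fortran divider above can be cross-checked with a few lines of Python. This illustrative sketch reduces X^260 modulo G(X) directly on integer bit masks and recovers the premultiplication polynomial of Eq. (6.5).

```python
G = 0b100101000100101        # G(X) = X^14 + X^11 + X^9 + X^5 + X^2 + 1

def poly_mod(a, g):
    """Remainder of the GF(2) polynomial a modulo g (both bit masks)."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

t = poly_mod(1 << 260, G)    # T(X) = X^260 MOD G(X)
print(sorted(i for i in range(t.bit_length()) if t >> i & 1))
# → [2, 8, 11], i.e. T(X) = X^11 + X^8 + X^2
```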

The first 9 bits, which are referred to as the 'window', determine the position at which the error starts in the data field. The remaining 5 bits contain the error pattern. For instance, a one in the error pattern register indicates that a bit in memory should be complemented and a zero indicates that the bit in memory is correct. As each bit is read out of the buffer memory, the 14 bit EDCC register receives further decoding. The error position register is monitored for all zeroes and, when detected, the correction is performed by exclusive ORing the 5 bit pattern register, one bit at a time, with the data from the decoding buffer memory. Thus the erroneous bits are corrected and stored in the decoding buffer.

In the case that the position register never contains all zeroes, an error greater than 5 bits has occurred and is uncorrectable. Since interleaving to degree twenty is applied in this system, such a case indicates an error greater than 100 bits existed. The correction cycle for a data segment with a nonzero syndrome takes a minimum of 19 additional clock cycles regardless of the position of the error and can require up to a maximum of 33 clock cycles. In a worst case environment this could occur 20 times and require 660 extra clock cycles. At the end of the 45 bytes plus 20 bits the function of the EDCC decoder will be complete.
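Error trapping relies on the cyclic structure: if the syndrome is repeatedly multiplied by X modulo G(X), a correctable burst eventually appears, aligned, in the low-order (pattern) stages while the high-order (window) stages read zero. The sketch below is a behavioral model on the full-length (279,265) code, deliberately ignoring the shortening and premultiplication details of the hardware; the burst value and its position are arbitrary illustrations.

```python
G = 0b100101000100101        # G(X) = X^14 + X^11 + X^9 + X^5 + X^2 + 1
N = 279                      # natural length: G(X) divides X^279 - 1

def poly_mod(a, g):
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def times_x(s, g):
    """One syndrome register shift with feedback: s(X) * X mod g(X)."""
    s <<= 1
    if s.bit_length() == g.bit_length():
        s ^= g
    return s

burst, position = 0b10011, 137           # 5-bit burst starting at bit 137
syndrome = poly_mod(burst << position, G)
shifts = 0
while syndrome >> 5:                     # window (degrees 5..13) not yet zero
    syndrome = times_x(syndrome, G)
    shifts += 1
print(shifts, bin(syndrome))             # trapped pattern equals the burst
```

After N - position shifts the syndrome becomes X^N times the error, which is congruent to the bare burst pattern, so the trapped 5 bits identify both the pattern and, through the shift count, its position.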

6.4.3 EDC Decoding

The input to the EDC decoder is the sum of the twenty decoded 19-bit information segments that were output from the EDCC decoder, which may or may not have been corrected properly. By regulating the control signals to only accept what is thought to be the original 45 bytes and 20 bits, the validity of the EDCC decoding can be checked. False EDCC corrections can be detected by examining the EDC syndrome register for a nonzero state at the completion of reading a sector. If a false correction has occurred, an error flip flop is set to indicate this fact to the controller. The circuit and control signals which perform this function are shown in Figure 6.18. Descriptions of the control signals follow.

GXEN (EDC) - enables or disables the EDC G(X) feedback pattern from going to the syndrome register.

CLK ENABLE (EDC) - controls the shifting operation by inhibiting the clock when shifting is not desired.

CLEAR (EDC) - resets the syndrome register to the zero state.

One noticeable absence from the EDC decoding circuit, which is used with a shortened code, is a premultiplication polynomial. This omission was intentional because the decoder's purpose is to detect errors, which can be performed without a premultiplication polynomial that serves only to align the error pattern for correction. After receiving 380 bits of corrected data, the integrity of the correction can be determined. A 20 bit zero detector circuit similar to the one used for the EDCC decoder can be designed to indicate a successful decoding and correcting operation.

6.4.4 Combined EDCC and EDC Decoding

Both of the decoders are active at the same time, so their interaction in the combined circuit will be described; refer to Figure 6.19.

The interface between the two decoders is serial. Continuous data flow is achieved by the decoding buffer register in the EDCC decoder,

FIGURE 6.18. EDC decoding circuit.

FIGURE 6.19. Combined decoding and error correction circuit.

which holds the received information bits. In the case where no errors are detected in the EDCC syndrome, the EDCC decoder can shift the 19 bits of information to the EDC decoder while simultaneously decoding the next code word. A difference between encoding and decoding is the path taken by the serial data. For instance, during encoding the EDCC and EDC generation is done in parallel, with the same data entering each generator simultaneously. However, during decoding there will be a delay between a data bit entering the EDCC circuit and the EDC circuit. This is needed to allow the EDCC circuit time to correct possible errors before generating the EDC syndrome.

To comprehend how the entire decoding circuit operates, the timing has to be understood. A detailed system timing diagram will be introduced later; however, there are five basic modes of operation which are shown in the flowchart of Figure 6.20.

1. Initialization

2. Syndrome generation

3. Window location

4. Error correction

5. Error state

The error correction operation is a single sector operation. Mode 1 is entered to begin the process, where initialization of the decoders is performed. This mode is divided into two parts, with part A only being performed once per sector, which resets the EDC decoder. After initializing both decoders, the data can be read in mode 2 one segment at a time. During the time that the 33-bit segment is read, the syndrome is generated, 19 bits of the new data are stored

FIGURE 6.20. Flowchart of decoding process.

in the decoding buffer and 19 bits of previous data are output to the

FIGURE 6.20. Flowchart of decoding process I • 126 in the decoding buffer and 19-bits of previous data are output to the

EDC decoder. For the first segment, the data output to the EDC decoder is unimportant because the EDC decoder is not enabled. Following the generation of the EDCC syndrome, one of two paths is activated. If the syndrome equals zero, no errors are detected therefore the correc­ tion modes can be passed and decoding continued if more segments remain in the sector. However, if an error is detected by a nonzero syndrome, then the error position window must be checked. Depending on the value of this window either mode 3 or mode 4 is entered. Mode 3 is selected in the case where the error position is not located by the window. In this mode the data is rotated to the right in the decoding buffer and syndrome register with the feedback on. A check of the window is performed after each bit and if the window is detected, the control transfers to mode 4. The possibility of never detecting a window exists so a limit of 33 rotations is imposed and if reached, indicates an uncorrectable error (mode 5). Corrections are made in mode 4 by rotating the data one bit to the right with the syndrome added to it, but is only necessary if the window is located in the first 19-bits of the correction phase. Otherwise the error is located in the EDCC check bits which do not have to be corrected and decoding can continue. The same check is made for remaining segments as was done after a zero syndrome. At the completion of decoding and cor­ recting the data, the last segment of 19-bits is transferred from the decoding buffer to the EDC decoder and the EDC syndrome examined. A zero result indicates the decoding is correct and subsequent sectors can be decoded. However a nonzero EDC syndrome indicates an error in 127 decoding and sets an error flip flop. The sector is decoded a second time if the error flip flop is set for this reason or the uncorrectable error condition.

6.5 Interleaving

The process of interleaving makes it possible to construct a long EDCC from a short EDCC, as is done in this system. The resulting equivalent code is shown in Figure 6.21.

By using an array manipulation technique, the code words can be interleaved in a rectangular array where the rows represent the code words. Other interleaving techniques were considered but were more difficult to implement. In order to construct an array manipulation interleaver, a random access memory (RAM) and two address generators are needed. See Figure 6.22 for a block diagram of the circuit.

Although the RAM used has a word length of 1 bit, the addresses can be subdivided into twenty groups of 33 bits which will represent code words. This allows the serial data to be stored in consecutive locations by translating the rectangular array into a one-dimensional array with all code words adjacent. The address generator controlling this sequence is called the horizontal address generator, whose address sequence is demonstrated below.

0, 1, 2, 3, ... 32, 33, ... 65, ... 627, ... 659  (Address)

(Addresses 0-32 hold code word 1, addresses 33-65 hold code word 2, ..., addresses 627-659 hold code word 20.)
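To make the mapping concrete, the horizontal sequence can be sketched in a few lines (Python used here purely for illustration; the hardware is a plain binary counter, and the constants come from the (33,19) code interleaved to degree 20):

```python
# Horizontal (write-side) addressing: code words are stored row by row,
# so the RAM address of bit `bit` of code word `word` is 33*word + bit,
# and the generator simply counts 0, 1, 2, ... 659.

N_BITS = 33    # bits per code word
N_WORDS = 20   # code words per sector

def horizontal_addresses():
    return [N_BITS * word + bit
            for word in range(N_WORDS)
            for bit in range(N_BITS)]

addrs = horizontal_addresses()
```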

The sequence can be changed by switching address generators. The second address generator is called the vertical address generator because

FIGURE 6.21. The effect of interleaving a (33,19) EDCC to degree I = 20. Before interleaving, each code word is an (n,k) = (33,19) code with 14 parity bits and 19 information bits; after interleaving, the equivalent code is (660,380), with 14 x 20 = 280 parity bits and 19 x 20 = 380 information bits.

of the reference to the two-dimensional array, where readout is performed by columns. The sequence produced by the vertical address generator is listed below.

0, 33, 66, ... 627, 1, 34, ... 628, ... 32, 65, ... 659

(The first twenty addresses read out column 1, the next twenty column 2, ..., the last twenty column 33.)
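In the same sketch style (Python, for illustration only), the vertical sequence is the column-major traversal of the 20 x 33 array: the column index supplies the starting address and successive addresses within a column differ by 33.

```python
# Vertical (read-side) addressing: readout by columns of the 20 x 33
# array.  Within a column the address advances by 33; the next column
# starts one higher than the previous column's start.

def vertical_addresses(n_bits=33, n_words=20):
    return [col + n_bits * row
            for col in range(n_bits)
            for row in range(n_words)]

seq = vertical_addresses()
```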

For the encoding and decoding process of distinct sectors to be continuous, a second interleaving memory is needed. This permits two sectors to be encoded or decoded back to back. While one memory stores the incoming data, the other can be performing the interleaving. At the completion of the sector, the functions reverse and the next sector can immediately be processed.

FIGURE 6.22. Interleaving circuit block diagram.

The dual interleaving memories

are shown in Figure 6.23, as well as the details of the address generators. A description of the control signals follows:

CS#1, CS#2 - chip selects for the memories; enable Read or Write.

R/W - determines the operating mode for the memory, Read or Write.

ADDR GEN SEL - controls the address sequence to both interleaving memories.

CNTR RESET - clears both sets of counters in the address generator circuits.

CNTR INHIBIT - prevents the counter in the vertical address generator from counting.

LATCH RESET - clears the latch that holds the previous address in the vertical address generator.

SEGMENT ADDRESS SEL - determines the starting address for columns in the vertical address generator.

Most of these control signals are required to operate the vertical address generator, which is the most complex. This is attributed to the fact that the address sequence for storage by columns is not a convenient mathematical pattern. The circuit functions by generating the starting address of a column in the counter and incrementing it by 33 until the last address in the column is reached. This is accomplished by adding a hardwired constant of 33 in an adder to the previous value, which is stored in a latch. Upon reaching the end of a column, the latch is reset and the starting address of the next column is read from the counter with the appropriate multiplexer control. To cover the entire address field of the sector, ten address bits are necessary, which requires that three 4-bit stages be cascaded.

FIGURE 6.23. Interleaving circuit
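A behavioral sketch of this counter/adder/latch arrangement (hypothetical variable names; the real circuit is cascaded 4-bit stages) shows that it reproduces the vertical sequence exactly:

```python
# Emulation of the vertical address generator hardware: a counter holds
# the starting address of the current column, a latch holds the previous
# address, and a hardwired +33 adder feeds the latch.  LATCH RESET and
# the multiplexer select are implied by the loop structure.

def vertical_generator_hw(n_bits=33, n_words=20):
    addresses = []
    counter = 0                       # column starting address
    for _ in range(n_bits):           # one pass per column
        latch = counter               # mux selects the counter at column start
        for _ in range(n_words):
            addresses.append(latch)
            latch += n_bits           # adder: previous address + 33
        counter += 1                  # advance to the next column start
    return addresses

hw_seq = vertical_generator_hw()
```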


For the horizontal address generator a much simpler circuit can be used: a ten-bit binary counter, consisting of three cascaded 4-bit counters, controlled by one signal, CNTR RESET.

An alternative method of generating the vertical address sequence is through the use of a look-up table. This allows the predetermined vertical address sequence to be stored as data in a PROM. An address sequencer is needed to step the PROM through its address range, which happens to be the same as the horizontal address generator. To simplify the circuit even further, the horizontal and vertical address generators can be combined because both operate in unison. See Figure 6.24. The advantages of the look-up table method for the vertical address generator are that chip count is minimized and the control signals are reduced by three.

In order to control the dual interleaving memories, two multiplexers are needed to select the addresses. While one memory is being addressed in the horizontal mode, the other will be in the vertical mode. In this way, control signals are conserved and the circuit is prepared for the continuous encoding or decoding operation. In addition, the

Read/Write control also makes sure that the two memories are not performing the same operation. Separate controls are provided for the chip select of the memories so any unused memory can be disabled to conserve power.

6.6 System Control

The circuitry required to perform the encoding and decoding operations has been described in the previous sections. These circuits could be implemented as is; however, due to similarities in their

FIGURE 6.24. Alternative vertical address generator.


design, a common encoder/decoder module can be constructed to minimize hardware. Performance is not sacrificed since storage and retrieval of information from a disk cannot occur simultaneously. A small increase in control is needed to direct the data flow, which is different in each mode, but the reduction of the encoding and decoding circuits by a factor of two more than compensates for the additional multiplexers and control.

A circuit for the common EDC encoder/decoder is shown in Figure 6.25. The circuit is a combination of the individual circuits with the same feedback patterns, shift registers and control signals.

In the case of the EDCC encoder and decoder, the combining cannot be done without some modifications; see Figure 6.26. First and most obvious are the differences in architecture. The EDCC encoder selects data via an output multiplexer, whereas the EDCC decoder multiplexes data into a buffer memory before it is output. A modification that allows the encoder and decoder to be combined is the addition of a second multiplexer. One multiplexer selects the data flow for encoding and the other for decoding. The combined module will have dual outputs: one for encoding and one for decoding.

The final error detecting and correcting disk storage system design consists of the following:

1. EDC Encoder/Decoder

2. EDCC Encoder/Decoder

3. Dual Interleaving Memory

4. Transceivers

5. Multiplexers

FIGURE 6.25. EDC encoder/decoder circuit

FIGURE 6.26. EDCC encoder/decoder circuit


6. Tri-state buffers

7. Control logic

All of the above are shown in the system block diagram of Figure 6.27 with their interactions. For a write operation, the flow is from left to right and vice versa for a read operation. A complete description of both operations will be given.

During the write cycle, the direction of the data flow will be selected by setting transceiver No. 1 to the receive mode, enabling the encoders in both encoder/decoder modules with the proper multiplexer control, and setting transceiver No. 2 to the transmit mode.

This will allow the incoming serial data from the sector storage memory to be encoded in both the EDC and EDCC. While the encoding is proceeding, the first encoded sector (660 bits) is stored in interleaving memory No. 1 with the horizontal address generator. Subsequent sectors to be encoded will be stored in an alternating manner in the interleaving memories so that the previous sector can be interleaved and output by the vertical address generator. With this process, the system can handle continuous encoding, requiring 2(660) + 19 clock pulses to process one cycle.

The read cycle consists of accessing information stored on the disk and is more complicated than the write cycle. First, the direction of the data flow is selected by setting transceiver No. 1 to the transmit mode, enabling the decoders in both encoder/decoder modules with the proper multiplexer control, and setting transceiver No. 2 to the receive mode. In addition, tri-state buffer No. 2 has to be disabled to avoid conflict and tri-state buffer No. 1 enabled to pass

FIGURE 6.27. Block diagram of the EDCC system

data to the output. As data is received from the disk, it is stored in the interleaving memory via the vertical address generator. After one complete encoded sector is received, the original data can be obtained by decoding one code word at a time. The data is output from the interleaving memory via the horizontal address generator and enters the EDCC decoder. This decoder stores the received data in an internal memory (buffer memory) which serves to pipeline the data flow through the system. If no errors are detected by the EDCC, 19 bits of this buffered data can be decoded by the EDC encoder/decoder while the next code word is decoded in the EDCC encoder/decoder. In the case where errors are detected, the correction process has to be performed and the buffer memory rotated while correcting erroneous bits. Assuming ideal conditions, meaning no errors, the read cycle can be performed with 2(660) + 19 clock pulses. During the correction phase interleaving must be halted; therefore a signal called HOLD ADDR GEN is used to suspend operation of the address generators. The worst case condition will require all code words to undergo the correction process, which is an extra 660 clock pulses for a total of 3(660) + 19 clock pulses.
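The clock-pulse budgets quoted above follow directly from the sector geometry; a quick check (Python, illustrative only):

```python
# 660 = length of one interleaved sector; 19 = clocks to flush the final
# EDC segment.  Write and error-free read each touch the sector twice;
# the worst-case read adds one full correction pass.
SECTOR = 660
FLUSH = 19

write_cycle = 2 * SECTOR + FLUSH       # encode + interleave
read_best   = 2 * SECTOR + FLUSH       # deinterleave + decode, no errors
read_worst  = 3 * SECTOR + FLUSH       # every code word needs correction
```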

The timing diagrams for both of these operations are shown in

Figures 6.28 and 6.29, respectively.

FIGURE 6.28. Timing diagram for the WRITE cycle



FIGURE 6.29. Timing diagram for the READ cycle

CHAPTER VII

EDCC SYSTEM SIMULATION

The selection process for cyclic codes in EDCC systems can be simplified with the aid of simulation techniques. Although codes can be constructed by analytical methods to meet the anticipated error patterns, this does not guarantee that the code is optimum or effective over a wider range of error patterns. Perhaps by searching for better codes, a higher code rate can be found, more errors detected and corrected, or the hardware minimized. Almost all designs go through an experimental stage where prototypes are built. This is the case with disk storage systems, where the code design can be based on typical error patterns observed. In order to be thorough in the design, computer simulation allows more alternatives to be explored and evaluated as well as saving time and manpower.

For the disk system in this design the error characteristic is unknown; therefore all types of error patterns will have to be considered: random errors, burst errors and multiple burst errors. The block diagram of the simulation performed is shown in Figure 7.1. The simulation emulates the entire encoder/decoder subsystem operation except for the EDC, whose bits are treated as additional information and are replaced by random data. This eliminates complicating the simulation with a second encoder/decoder; however, the same function is performed by an alternative method which serves to detect erroneous corrections. An explanation of this method follows. The computer simulation performed, having the advantage of hindsight, can compare

FIGURE 7.1. EDCC computer simulation block diagram.

the corrected data to the original data, thus acting as a detecting code. This frees the simulation to focus on the EDCC, indicating weaknesses in the system, which is the prime objective.

7.1 Description of the Simulation Program

The simulation program is written in Fortran-4 Plus, and is executed on a Digital Equipment Corporation PDP 11/34 minicomputer.

The software is modeled after the actual hardware by replacing circuits with mathematical operations. Some of the analogies between the two are:

1. The storage registers and buffers are replaced by arrays, with each computer word representing one code bit.

2. The digital values of the circuits are maintained by modulo-2 addition, which is simulated by conditional IF statements.

3. Discrete gates are simulated with logical statements.

4. Shifting is performed by DO loops which manipulate the arrays.

5. Control timing is achieved by conditional IF statements which simulate the real-time clock.

The disk system simulated has the following parameters:

Sector size: 380
Code (EDCC): (33,19)
Generator polynomial: x^14 + x^11 + x^9 + x^5 + x^2 + 1
Premultiplication polynomial: x^11 + x^8 + x^2
Buffer size: 660
Interleaving degree: 20

A message, representing information to be recorded onto a magnetic disk, is generated by a random number generator and stored in an array.

It is segmented according to the code size into 20 groups of 19-bit information words (data segmenter). The next process is to generate the check bits associated with each 19-bit group, which is performed by reproducing the action of the EDCC encoder. One bit at a time is encoded until the last bit of the group is reached, at which time the check bits can be attached to their associated information bits. This process is performed by a module called the combiner, which stores the code word in a buffer. The software storage device is a two-dimensional array (33,20) where the rows represent code words. After all code words have been stored in the buffer, a module referred to as the scrambler interleaves the bits of the code words. This is done by changing the read-in and readout directions of the (33,20) array.

Readout occurs via columns and the data is transferred to a one-dimensional array (660) which represents the serial data channel. At this time, any error pattern can easily be induced to corrupt the scrambled code words (error generator). The data, containing errors, is read and deinterleaved to reform the original code words, which are stored in the twenty rows of a two-dimensional array. Following this, one row at a time (33 bits) is decoded using a simulated error trapping technique. All decoded words, including the corrections, are stored in memory so a comparison can be made with the source code words. The number of proper corrections, erroneous corrections and uncorrectable errors is determined by comparing the original and final arrays bit by bit.
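The scramble/corrupt/unscramble path can be re-expressed compactly (in Python rather than the thesis Fortran; array names are illustrative) to show why degree-20 interleaving works: a 100-bit channel burst lands as at most a 5-bit burst in each (33,19) code word, within the code's burst-correcting capability.

```python
ROWS, COLS = 20, 33                    # code words x bits per code word

def interleave(words):                 # scrambler: read out by columns
    return [words[r][c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(stream):              # unscrambler: rebuild the rows
    words = [[0] * COLS for _ in range(ROWS)]
    i = 0
    for c in range(COLS):
        for r in range(ROWS):
            words[r][c] = stream[i]
            i += 1
    return words

stream = interleave([[0] * COLS for _ in range(ROWS)])
for i in range(100, 200):              # CORRPT-style 100-bit burst
    stream[i] ^= 1

received = deinterleave(stream)
# Span of the errors inside each code word after deinterleaving:
spans = []
for row in received:
    ones = [i for i, b in enumerate(row) if b]
    spans.append(ones[-1] - ones[0] + 1 if ones else 0)
max_burst = max(spans)
```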

Before the program structure is described in detail, the hardware architecture will be reviewed to point out important facts. First, the encoder is a modified architecture encoder requiring k shifts (19) to calculate the n-k check bits (14). Next, an array manipulation interleaver scrambles the twenty code words by manipulating the readout addresses of the RAM. To read the data from the disk, the same manipulation interleaver performs deinterleaving to unscramble the bits of the code words. A modified architecture decoder, including a premultiplication polynomial, calculates the syndrome (33 shifts). In the case where errors are detected, the syndrome register is shifted an additional 33 times to attempt corrections; 66 total.

7.1.1 Program Structure

The simulation program is called DISC and it is partitioned into nine sections which emulate hardware functions. Refer to the flowcharts in Appendix A.

1. Generate Data

2. Segment Data

3. Encode Sector

4. Combiner

5. Scrambler

6. Corrupter

7. Unscrambler

8. Decode Sector

9. Detection

Each section was chosen with modularity in mind so a circuit design change would not be a major software redesign. For instance, to lengthen a code, only those modules affected would need to be modified, mainly the encoder and decoder. An advantage of this structure for the simulation is that it allows several error patterns to be easily induced. By modifying the corrupter and making it flexible enough to induce one of several types of error patterns, one program can simulate every environment the system will be subjected to. For example, two standard types of error patterns are selectable upon program execution.

These are single bit random errors and burst errors.

7.1.1.1 Program Subroutines

To support the main program eight subroutines are necessary;

RANDOM, CLEAR, SHIFT, FEEDBK, CORRPT, INTERL, DEINTR and DETECT.

Subroutine RANDOM calculates n random binary numbers to be used as data. A system random number generator is used to create values between zero and one. If the number is less than 0.5 then a binary 0 is stored; otherwise a binary 1 is stored. The subroutine is general purpose and passes the generated data back to the main program in an array called DATR. The second subroutine, CLEAR, simulates the reset function of the encoding and decoding register, which is done before each segment is encoded or decoded. CLEAR inserts zeroes in each location of the register, referred to as the array SHIFTR, of length fourteen. In order to shift this register the subroutine SHIFT is used. SHIFT moves the data one place to the right, from least significant bit (LSB) toward most significant bit (MSB), discarding the old MSB.

An input bit is passed from the main program to load into the LSB before the shifted array is returned to the main program. After shifting, the next operation is to include the effect of the feedback connections via the subroutine FEEDBK. Three modes of feedback are possible: G(X) only, G(X) common T(X) and T(X) only. All three are used for decoding, but only the first two are used for encoding, which deals exclusively with the generator polynomial. By simulating the logic control in the main program, the appropriate feedback patterns are added to the shift register (SHIFTR) in the subroutine. SHIFTR is returned to the main program when the operation is finished. At the completion of encoding the sector, the array ENCODE (33,20) contains the twenty code words stored in the rows. This array is interleaved by subroutine INTERL, which outputs the data column by column instead of row by row into an array IBUF (660), thus scrambling the code words.
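In modern terms, CLEAR/SHIFT/FEEDBK implement shift-register division by the generator polynomial. The sketch below (Python, with hypothetical helper names; it is a standard divider-circuit analogue, not a line-for-line transcription of the Fortran) computes the 14 check bits for one 19-bit segment using g(x) = x^14 + x^11 + x^9 + x^5 + x^2 + 1:

```python
G_TAPS = [0, 2, 5, 9, 11]        # exponents of g(x) below x^14

def clear():
    return [0] * 14              # CLEAR: zero the 14-stage SHIFTR

def encode_segment(bits):
    """Return 14 check bits for a 19-bit segment (index 0 = LSB)."""
    shiftr = clear()
    for b in bits:
        fb = b ^ shiftr[13]          # bit falling off the MSB end, plus input
        shiftr = [0] + shiftr[:13]   # SHIFT: one place, LSB toward MSB
        if fb:                       # FEEDBK: add the g(x) taps modulo 2
            for t in G_TAPS:
                shiftr[t] ^= 1
    return shiftr
```

Because every operation is an exclusive-OR, the mapping is linear over GF(2), a convenient property for checking a software model against the hardware.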

To simulate errors in the disk system, a subroutine called

CORRPT is used. It has the capability to generate an error pattern of any length, starting at a selected location, by inverting the data in the array IBUF. By controlling the starting location and length, any conceivable error pattern can be imposed on the serial data stream.

A read operation of a corrupted sector is initiated by deinterleaving the array IBUF. The subroutine DEINTR performs this task by unscrambling the data and reforming the twenty code words, storing them in a two-dimensional array (33,20). Once this is completed, the decoding process decodes one row at a time (code word) using the subroutines CLEAR, SHIFT and FEEDBK. During this time it is necessary to detect error patterns by the use of a trapping window, which is done by the DETECT subroutine. DETECT examines the nine LSBs of the syndrome register (SHIFTR) for all zeroes and saves the error pattern when this condition is true. The error pattern is found in the

remaining five bits of the syndrome register and is stored in array

IPTRN. After the decoding is completed, the original data is compared to the final data and the results are tabulated.
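The window test itself is trivial to express; a sketch (Python, illustrative names) of what DETECT checks:

```python
# DETECT: a burst is trapped when the nine least significant bits of the
# 14-bit syndrome register are all zero; the five remaining bits then
# hold the error pattern (the contents stored in IPTRN).

def window_detected(shiftr):
    """shiftr: 14-bit syndrome register, index 0 = LSB."""
    return all(bit == 0 for bit in shiftr[:9])

def trapped_pattern(shiftr):
    return shiftr[9:]            # the five-bit error pattern

syndrome = [0] * 9 + [1, 0, 1, 1, 0]
found = window_detected(syndrome)
```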

7.2 Execution of the Simulation Program

The program code written for the simulation is in Appendix B.

This code is referred to as the source code, which must be compiled and task-built before executing. Upon executing the program, four options are available which affect the error pattern to be generated. The options are;

(1) No error bits

(2) All single bit errors

(3) 100-bit burst error

(4) User defined errors

The first three options require no further user responses, but the fourth requests a starting position and length for the error. In order to perform option (2), which simulates 1-bit errors in every location of the sector, the program repeats the decoding process many times. The purpose of this test is to verify that the EDCC covers the entire sector, including check bits. Option (3) is used to check the EDCC system for its guaranteed burst correcting capability. If this option is selected, the sector is decoded five times, each containing a 100-bit burst in a different location.

7.3 Results of the Simulation Program

After the program execution is terminated a data file containing the results of the program is printed. This data file consists of the following;

1. Random data

2. Segmented data (no check bits)

3. One encoding cycle

4. Segmented data (with check bits)

5. Interleaved array

6. Interleaved array plus error pattern

7. Deinterleaved array

8. One decoding cycle

9. Corrected array

10. Summary of errors

The purpose of the detailed listing is to provide examples of each step of the simulation process. Appendix C contains the results obtained from option (3) of the program. It is possible to trace the flow of all code words from one operation to the next. However only the third code word (third row of the array) is used to demonstrate encoding and decoding to conserve space.

Analysis of the results reveals that the EDCC system corrected one hundred percent of the random and burst errors simulated. This is based on the error patterns induced in options (2) and (3) of the program, which created a total of 1160 bit errors. In addition, the error-free case was verified to operate correctly. To prove that the

EDCC system cannot correct all error patterns, a multiburst error pattern was induced which could not be corrected. Therefore the theoretical capabilities of the EDCC system seem to match the results obtained from the simulation.

CHAPTER VIII

PERFORMANCE MEASUREMENTS

It is important to know the ultimate capabilities and limitations of the EDCCs. This information, together with the knowledge of what is practically achievable, indicates the system problems which are virtually solved and those which need further work. Several parameters can be defined to measure these performance capabilities and limitations.

In a general binary system using an (n,k) code, the probability of a bit error is defined as p. That is, receiving an incorrect bit, V_j, from the source code occurs with probability p and correct reception with probability q (or 1-p). Given a code with minimum distance d, the probability that an n-bit segment has m errors is defined as P(m,n). This can be expressed as,

    P(m,n) = \binom{n}{m} p^m q^{n-m}                                   (8.1)
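Eq. (8.1) in executable form (Python here, purely as a numerical aid):

```python
from math import comb

def P(m, n, p):
    """Probability that exactly m of n independent bits are in error."""
    return comb(n, m) * p**m * (1.0 - p)**(n - m)

# Sanity check: the distribution over m = 0..n sums to one.
total = sum(P(m, 33, 1e-3) for m in range(34))
```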

Errors occurring in the system are independent of the data stream and will be of two types; random and burst.

8.1 Undetectability

The first performance parameter is called undetectability, which is defined as the fraction of the total errors which are not detected. The probability of its occurrence will be abbreviated P_d. In order to calculate parameters of this type, a graphic representation of the error detecting system is shown in Figure 8.1.


R1: REGION OF ERRORS

R4: REGION OF ERRORS HAVING WEIGHT LESS THAN d

FIGURE 8.1. Graphic representation of an error detection system

The size of each of these regions can be determined for the (n,k) code.

Region R1 - contains all error patterns, each one having length n; therefore it is of size 2^n.

Region R2 - contains all error patterns that are similar to the code words; therefore it is of size 2^k.

Region R3 - contains all detectable error patterns. Since some error patterns resemble code words, error detection is impossible for them (R2). Only the patterns outside this region can be detected; therefore R3 is of size 2^n - 2^k.

Region R4 - contains all error patterns whose weight is less than the minimum distance, d. To determine its size, the number of error patterns at each weight can be summed as follows.

    N(0) = 1
    N(1) = n
    N(2) = n(n-1)/2
    N(3) = n(n-1)(n-2)/(2 \cdot 3)

which can be expressed as the series

    N(w) = \frac{n(n-1)(n-2) \cdots (n-w+1)}{w!} = \binom{n}{w}         (8.2)

Summing only those patterns of weight less than d,

    \sum_{i=0}^{d-1} N(i)                                               (8.3)

From the size of each region the undetectability can be expressed as a ratio,

    \text{Undetectability} = \frac{\text{Region 2}}{\text{Region 1}} = \frac{2^k}{2^n} = 2^{-(n-k)}          (8.4)

The probability of an undetectable error is given by;

    P_d = \sum_{m=1}^{n} P(\text{error pattern occurs}) \cdot P(\text{error pattern is a code word})

        = \left[ \sum_{m=1}^{n} P(m,n) \right] \cdot \text{Undetectability}          (8.5)

        = \left[ \sum_{m=1}^{n} P(m,n) \right] \cdot 2^{-(n-k)}

It is observed from this equation that the controlling factor in the probability of undetectability is the term based on the number of check bits. The designer can select codes with larger n-k to reduce the undetectability but has little control over the occurrence of bit errors. To reduce the undetectability further, the code can be interleaved, increasing the check bits by a factor of I (the interleaving degree).

    \text{Undetectability (interleaved)} = 2^{-(n-k)I}                  (8.6)
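For the codes used in this design the numbers are easy to evaluate (illustrative Python; worst-case figures from Eqs. (8.4) and (8.6)):

```python
# (33,19) EDCC: n-k = 14 check bits per segment, interleaved to I = 20.
n_minus_k = 14
I = 20

undetect = 2.0 ** -n_minus_k                       # Eq. (8.4): one segment
undetect_interleaved = 2.0 ** -(n_minus_k * I)     # Eq. (8.6): whole sector
```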

This probability of undetectability can also be expressed in another way, using the code's minimum distance property.

    P_d = \sum_{m=1}^{n} P(m,n) \cdot P(R2 \mid R4)                     (8.7)

Since at least d errors must occur to cause an undetected error, every term with m < d is zero.

    P_d = \sum_{m=d}^{n} P(m,n) \cdot P(R2 \mid R4)

        = \left[ \sum_{m=d}^{n} \binom{n}{m} p^m q^{n-m} \right] \left[ \frac{2^k - 1}{2^n - \sum_{i=0}^{d-1} N(i)} \right]

The last term is a constant which is approximately equal to 2^{-(n-k)}, because \sum N(i) is small compared to 2^n and is thus taken to be zero for simplification purposes. (The degree of \sum N(i) is d-1, which is small compared to 2^n.) Therefore the observation made about Eq. (8.5) is also true in this case.

8.2 Mistakability

Another performance parameter is called mistakability, which is defined as the fraction of errors which are erroneously corrected. The probability of its occurrence will be abbreviated P_m. A graphic representation of an error detecting and correcting system is shown in Figure 8.2.

The mistakability can be expressed as a ratio;

    \text{Mistakability} = \frac{\text{Region 5}}{\text{Region 1}}      (8.8)

R1: REGION OF ERRORS
R4: REGION OF ATTEMPTED CORRECTIONS
R5: REGION OF ERRONEOUS CORRECTIONS

FIGURE 8.2. Graphic representation of an EDCC system

Region 5 contains all error patterns that produce code words other than the one transmitted. Therefore it is of size 2^k - 1. The mistakability is approximately the same as the undetectability and means little in this form.

The probability of mistakability can be calculated for both random and burst errors. A t-error-correcting system designed for random errors will cause a decoding failure if any error pattern of weight greater than t occurs.

    P_m = \sum_{m=t+1}^{n} P(m,n)                                       (8.9)

For the general binary system this becomes,

    P_m = \sum_{i=t+1}^{n} \binom{n}{i} p^i q^{n-i}

The first term in the series dominates (np << 1), thus the equation can be simplified as follows.

    P_m \approx \binom{n}{t+1} p^{t+1} \approx \frac{(np)^{t+1}}{(t+1)!}          (8.10)    [Weldon, 1980]
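The quality of this simplification is easy to check numerically (Python, illustrative; parameters chosen so that np << 1):

```python
from math import comb, factorial

def tail_exact(n, p, t):
    """Exact P(more than t errors), Eq. (8.9)."""
    q = 1.0 - p
    return sum(comb(n, m) * p**m * q**(n - m) for m in range(t + 1, n + 1))

def tail_approx(n, p, t):
    """First-term approximation, Eq. (8.10)."""
    return (n * p) ** (t + 1) / factorial(t + 1)

n, p, t = 33, 1e-4, 1
exact = tail_exact(n, p, t)
approx = tail_approx(n, p, t)
```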

In a system designed for a burst error environment, a single burst correcting code with burst correcting capability b can fail in two ways; (1) a single burst of length greater than b or (2) multiple bursts within a sector. A brief description of multiple bursts will be presented before expressing P_m.

The probability that a burst error pattern has length \ell is defined as P_B(\ell). Therefore, given a burst, the probability of its length being greater than b is;

    P(\text{burst length} > b \mid 1 \text{ burst}) = \sum_{\ell=b+1}^{n} P_B(\ell)          (8.11)

If error bursts are random events, then their starting points are distributed according to the Poisson distribution [Weldon, 1980]. In this case the probability of i bursts occurring in the sector is binomially distributed. The probability that a block contains a single burst is defined as P_1. It follows that

    P(2 \text{ bursts}) \approx P_1^2 / 2!
    P(3 \text{ bursts}) \approx P_1^3 / 3!
    P(4 \text{ bursts}) \approx P_1^4 / 4!

Due to the fact that the probability of a single burst is usually small, the probability of multiple bursts can be approximated by:

    P(\geq 2 \text{ bursts}) \approx \frac{P_1^2}{2}                    (8.12)

The mistakability of a burst correcting system can be expressed as,

P = P(~ 2 bursts) + P (1 burst) • P (burst length~ b/1 burst) m

p 2 n 1 (8.13) -2- + pl 2: PB (~) = ~=b+l [Weldon, 1980] 159

8.3 Unreliability

The last performance parameter is called unreliability, which is defined as the fraction of attempted corrections that are erroneously corrected. It can best be expressed as P_R, the probability that when a correction is attempted there results an undetected mistaken correction. The unreliability can be expressed as a ratio:

\text{Unreliability} = \frac{\text{Region 5}}{\text{Region 4}}        (8.14)

The probability of unreliability can be expressed as

P_R = P(\text{false correction}) \cdot P(\text{fails to detect})

    = \text{unreliability} \cdot \text{undetectability}

    = \frac{2^k - 1}{2^k} \cdot 2^{-(n-k)}        (8.15)

P_R \approx 2^{-(n-k)}
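For any code of practical size the (2^k - 1)/2^k factor is already negligible. The sketch below (an illustrative check, not part of the original text) evaluates Eq. 8.15 for the (33,19) code used later in this chapter:

```python
from math import log2

n, k = 33, 19
PR = (2**k - 1) / 2**k * 2.0**-(n - k)     # exact form of Eq. 8.15

# the correction factor differs from 1 by only 2**-19,
# so PR is essentially 2**-(n-k) = 2**-14
print(PR, 2.0**-(n - k), log2(PR))
```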

8.4 Evaluation of the EDCC System Parameters

To evaluate the capabilities and limitations of the EDCC system designed in Chapter 6, its performance parameters will be studied. The system's code architecture consists of two codes: one for detection (EDC) and one for correction (EDCC), the latter interleaved to degree twenty. The domain of each code is shown in Figure 8.3 in order to determine the sizes of the regions for probabilistic analysis.

FIGURE 8.3. Code domain of the EDCC system designed with dual codes. [The EDC spans the 45-byte sector (360 bits) plus 20 check bits, for a detecting-code length of 380 bits. The (33,19) EDCC is interleaved to a depth of 20 segments, each of 19 information bits and 14 check bits, for a correcting-code length of 660 bits.]
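The figure's interleave structure is what turns a long channel burst into short per-codeword bursts. A small sketch of the idea (the burst position is arbitrary; the index arithmetic mirrors the IBUF(20*K+I) = ENCODE(33-K,I) mapping of the simulation program in Appendix B): consecutive recorded bits belong to different codewords, so a 100-bit burst deposits at most 100/20 = 5 errors in any one (33,19) codeword.

```python
depth, burst_len = 20, 100         # interleave depth and channel burst length
start = 137                        # arbitrary starting position of the burst

hits = [0] * depth                 # errors landing in each of the 20 codewords
for pos in range(start, start + burst_len):
    hits[pos % depth] += 1         # recorded bit pos belongs to codeword pos mod 20

print(max(hits))                   # -> 5: each codeword sees at most 5 burst bits
```

Because the burst bits assigned to one codeword came from channel positions 20 apart, they occupy consecutive symbol positions of that codeword, which is exactly the short burst the (33,19) code is designed to trap.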

The performance analysis will be performed considering the worst case, where all error patterns are equally likely. Some intermediate values that will be used in determining the quantities P_d, P_m and P_R are listed below. Refer to Figure 8.1 for the values dealing with the EDC and Figure 8.2 for the values dealing with the EDCC.

P(E) - probability that during a READ cycle the sector contains an error (R1).

P(R3/E) - probability that an error is detected by the EDCC given an error has occurred (1 - P_d).

P(R4/R3) - probability that a correction is attempted by the EDCC given a detected error.

P(R5/R4) - probability that an erroneous correction is made given an attempted correction (unreliability).

P(R5, R4, R3, E) - probability that an erroneous correction occurs somewhere in the EDCC domain when a sector is read from the disk.

P(CHK/R5) - probability that an erroneous correction occurs in the check bits given an encoded sector containing an erroneous correction.

P_d(EDC) - probability that an erroneous correction is not detected by the EDC given it has occurred.

From Eq. 8.5 the probability of undetectability for the system is

P_d = \text{undetectability} \cdot \sum_{m=1}^{n} P(\text{error pattern occurs}) = 2^{-(n-k)I} \cdot P(E) = 2^{-280} P(E)

where n - k = 14 is the number of check bits per segment and I = 20 is the interleave depth.
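The exponent is just the check bits per segment times the interleave depth; as an arithmetic check:

```python
nk = 33 - 19        # n - k = 14 check bits per (33,19) segment
I = 20              # interleave depth

print(nk * I)       # -> 280, so Pd = 2**-280 * P(E)
```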

The probability of mistakability for the system is expressed in the following probability equation:

P_m = P(R5, R4, R3, E) \cdot P(CHK/R5) \cdot P_d(EDC)

    = P(R5/R4, R3, E) \cdot P(R4/R3, E) \cdot P(R3/E) \cdot P(E) \cdot P(CHK/R5) \cdot P_d(EDC)

    = P(R5/R4) \cdot P(R4/R3) \cdot P(R3/E) \cdot P(E) \cdot P(CHK/R5) \cdot P_d(EDC)

    \approx (2^{-180}) (1) (1) \, P(E) \, (2^{-1}) (2^{-20})

    = 2^{-201} P(E)

where P(R4/R3) \approx 1, P(R3/E) = 1 - P_d \approx 1, and P(CHK/R5) = 380/660 \approx 0.57 is rounded to 2^{-1}.

The probability of unreliability for the system is:

P_R = P(CHK/R5) \cdot P_d(EDC) \cdot P(R5/R4)

    = \left( \frac{380}{660} \right) (2^{-20}) \left( \frac{2^k - 1}{2^k} \right)

    \approx 2^{-21}
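The rounding to 2^{-21} can be confirmed directly (a quick check, not part of the original text):

```python
from math import log2

k = 19
PR = (380 / 660) * 2.0**-20 * (2**k - 1) / 2**k

print(log2(PR))     # about -20.8, which rounds to the stated 2**-21
```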

A review of the design goals for the performance parameters shows that the desired goals were achieved. In particular, the probability of mistakability is much better than the goal of 2^{-20}. Sometimes this can be misleading in a system that is subject to a high probability of errors. In such a case the parameter P_R is a good indicator of how many false corrections will go undetected.

8.5 Conclusions

The EDCC system for the disk storage device in this project can correct burst errors up to and including a maximum length of one hundred bits. This system applies only to a disk storage system with a fixed sector size of 45 bytes. If the sector size is increased, the circuits can be modified to allow the EDCC system to provide the same capabilities. By incrementing the sector size in segments of 19 bits (the k of the EDCC), only the timing, premultiplication polynomial and interleaving circuits need to be changed. However, if the desired burst correcting limit requires a larger burst correcting capability, the interleaving degree should be increased, since the burst correcting capability of the code itself cannot be altered without selecting a different EDCC. Therefore, due to the architecture of the design, the EDCC system can be easily modified, but modification should be done in conjunction with alterations of the sector size.


REFERENCES

[Lin, 1970] Lin, Shu, "An Introduction to Error-Correcting Codes," Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1970.

[Peterson and Weldon, 1972] Peterson, W. W., and E. J. Weldon, Jr., "Error-Correcting Codes," 2nd Edition, The MIT Press, Cambridge, Massachusetts, 1972.

[Weldon, 1980] Weldon, E. J., Jr., "Error Correcting and Detecting Codes," Hellman Associates Seminar, Palo Alto, California, 1980.

APPENDIX A

FLOWCHARTS OF SIMULATION PROGRAM

AND SUBROUTINES

Flowchart of DISC Program

[Flowchart pages: select the error option and print the header; call subroutine RANDOM; segment the random data into 20 information words of length 19 and store them in array ENCODE (no check bits); call subroutine CLEAR and reset the output bit flag (FLAG = 0); for each information bit accept INPUT = DATR(J + 19*L), call subroutine FEEDBK in the appropriate mode when feedback is enabled, call subroutine SHIFT, and update FLAG = SHIFTR(14); when check-bit generation is done (J > 19), combine the check bits with the information word and store in ENCODE; when the entire sector is encoded, call subroutines INTERL and CORRPT; then call subroutines DEINTR and CLEAR, reset the output and error-detected flags, and decode each received bit, selecting FEEDBK mode 1 when INPUT = 1, mode 2 when FLAG = 1, and mode 3 when FLAG differs from INPUT; call subroutine DETECT, and when the error pattern is trapped set ISTAT = 0; repeat until each code word (K > 66) and the entire sector (J > 20) are completely decoded.]

Subroutine CLEAR Flowchart

[Flowchart: initialize the count; clear each bit (SHIFTR(I) = 0), incrementing the count until all 14 bits are reset; return.]

Subroutine RANDOM Flowchart

[Flowchart: initialize the count; call RANDU(N); if N <= 0.5 set DATR(I) = 0, if N > 0.5 set DATR(I) = 1; increment the count until all 380 bits are generated; return.]

Subroutine SHIFT Flowchart

[Flowchart: initialize the count; move each bit one place (SHIFTR(I) = SHIFTR(I-1)), decrementing the count until all bits are shifted; load the LSB (SHIFTR(1) = INPUT); return.]

Subroutine FEEDBK Flowchart

[Flowchart: initialize the count; according to the selected feedback pattern, add one bit per mode: mode 1, SR(I) = SR(I) + FTX(I); mode 2, SR(I) = SR(I) + FGX(I); mode 3, SR(I) = SR(I) + FTXGX(I); correct the result modulo 2; return.]

Subroutine INTERL Flowchart

[Flowchart: initialize the row and column counts; copy one bit at a time, incrementing the column count until the last column (COL > 32) is processed; return.]

Subroutine CORRUPT Flowchart

[Flowchart: initialize the count; locate the starting address for the error (ADDR = ISTRT); invert each bit to be corrupted (IBUF(ISTRT+I) becomes 1 if it was 0, and 0 if it was 1); increment the count until the error pattern is done (I > IBITS); return.]

Subroutine DEINTR Flowchart

[Flowchart: initialize the row and column counts; load one bit, DECODE(33-COL, ROW) = IBUF(20*COL + ROW); increment the row count until the last row (ROW > 20), then the column count until the last column (COL > 32); return.]

Subroutine DETECT Flowchart

[Flowchart: initialize the counts (I = 1, B = 1); examine the error pattern bit by bit, setting ISTAT = 0 when the window of zeros is found (end of window at I > 9); save the error pattern bits, IPTRN(B) = SHIFTR(B + 9), incrementing B until the entire pattern is saved (B > 5); return.]

APPENDIX B

SOURCE CODE FOR

EDCC SIMULATION PROGRAM

      PROGRAM DISC
C
C                              05/01/81
C
C PURPOSE: TO SIMULATE ERROR DETECTION AND CORRECTION
C USING RANDOM DATA. THE DATA IS ENCODED, INTERLEAVED,
C STORED, CORRUPTED WITH AN ERROR VECTOR, DECODED,
C CORRECTED AND COMPARED TO CHECK RELIABILITY
C
      INTEGER*2 DATR(380),SHIFTR(14),FEED(14),CORRCT(33,20)
      INTEGER*2 ENCODE(33,20),FLAG,DECODE(33,20),IBUF(660)
      DIMENSION IPTRN(5)
C
      OPEN(UNIT=6,NAME='SY:DISC.LST',TYPE='NEW')
C
      IERROR = 0                       ! DEFAULT NO ERROR
      TYPE *, ' SELECT ONE OF THE FOLLOWING ERROR PATTERNS'
      TYPE *, '   NONE           = 0'
      TYPE *, '   ALL SINGLE BIT = 1'
      TYPE *, '   100 BIT BURST  = 2'
      TYPE *, '   OTHER          = 3'
      ACCEPT *, IERROR
      PRINT 1
    1 FORMAT(///5X,'DISC SIMULATION PROGRAM FOR AN ERROR CORRECTION SYSTEM')
      IF(IERROR.EQ.0) PRINT 2
      IF(IERROR.EQ.1) PRINT 3
      IF(IERROR.EQ.2) PRINT 4
      IF(IERROR.EQ.3) TYPE *, ' ENTER STARTING POSITION OF ERROR (1-660)'
      IF(IERROR.EQ.3) ACCEPT *, ISTRT
      IF(IERROR.EQ.3) TYPE *, ' ENTER LENGTH OF ERROR '
      IF(IERROR.EQ.3) ACCEPT *, IBITS
      IF(IERROR.EQ.3) PRINT 5, IBITS, ISTRT
    2 FORMAT(10X,'(NO ERRORS INDUCED)')
    3 FORMAT(10X,'(ALL SINGLE BIT ERRORS INDUCED)')
    4 FORMAT(10X,'(100 BIT BURST ERROR INDUCED)')
    5 FORMAT(10X,'(ERROR OF LENGTH,',I4,' STARTING AT BIT',I4,')')
C
      FLAG = 0                         ! INDICATES IF FEEDBACK IS ENABLED
C
C**************************************************************
C
C     GENERATE DATA
C
C**************************************************************
C
      CALL RANDOM(DATR)
C
      PRINT 10
   10 FORMAT(///1X,'RANDOM DATA (380)')
      PRINT 20, DATR
   20 FORMAT(1X,19I3)
C

C*********************************************************************
C
C     SEGMENT DATA INTO INFORMATION WORDS
C
C*********************************************************************
C
      DO 1000 L=0,19
      DO 2000 LL=0,18
      ENCODE(33-LL,L+1) = DATR(19*L + LL + 1)
 2000 CONTINUE
 1000 CONTINUE
C
      PRINT 30
   30 FORMAT(///1X,'SEGMENTED DATA BEFORE PARITY BITS (660)')
      PRINT 40, ENCODE
   40 FORMAT(1X,33I2)
C
C**********************************************************************
C
C     ENCODE SECTOR
C
C**********************************************************************
C
      DO 3000 L=0,19
      CALL CLEAR(SHIFTR)               ! ZERO SHIFT REG.
      FLAG = 0
C
      DO 4000 J=1,19
      INPUT = DATR(J + 19*L)
      IF(FLAG.EQ.1) CALL FEEDBK(SHIFTR,2)
      IF(FLAG.NE.INPUT) CALL FEEDBK(SHIFTR,3)
      CALL SHIFT(SHIFTR,INPUT)
      FLAG = SHIFTR(14)
 4000 CONTINUE
C

C********************************************************************
C
C     COMBINE PARITY BITS WITH INFO
C
C********************************************************************
C
      DO 6000 I=14,1,-1
      ENCODE(I,L+1) = SHIFTR(I)
 6000 CONTINUE
 3000 CONTINUE
C
      PRINT 31
   31 FORMAT(///1X,'SEGMENTED DATA AFTER PARITY BITS')
      PRINT 40, ENCODE
C
      LOOP = 0
      IF(IERROR.EQ.1) LOOP = 32
      DO 9000 I=0,LOOP
      CALL INTERL(ENCODE,IBUF)
      IF(I.EQ.0) PRINT 322
  322 FORMAT(1X,' ')
      IF(I.EQ.0) PRINT 310
  310 FORMAT(/1X,'INTERLEAVED ARRAY')
      IF(I.EQ.0) PRINT 20, IBUF
C***********************************************************
C
C     INDUCE ERROR
C
C***********************************************************
      IF(IERROR.EQ.1) CALL CORRPT(20, 20*I+1, IBUF)
      IF(IERROR.EQ.2) CALL CORRPT(100, ISTRT, IBUF)
      IF(IERROR.EQ.3) CALL CORRPT(IBITS, ISTRT, IBUF)
      IF(I.EQ.0) PRINT 322
      IF(I.EQ.0) PRINT 315
  315 FORMAT(/1X,'INTERLEAVED ARRAY PLUS ERROR PATTERN')
      IF(I.EQ.0) PRINT 20, IBUF
      CALL DEINTR(DECODE,IBUF)
      IF(I.EQ.0) PRINT 322
      IF(I.EQ.0) PRINT 320
  320 FORMAT(/1X,'DEINTERLEAVED ARRAY')
      IF(I.EQ.0) PRINT 40, DECODE

C************************************************************
C
C     DECODE SECTOR
C
C************************************************************
C
      DO 1500 J=0,19                   ! # OF INFO GROUPS IN SECTOR
      CALL CLEAR(SHIFTR)
      LOCATE = 0                       ! ERROR LOCATED FLAG
      FLAG = 0
      NBIT = 0
C
C*********************************************************************
C
C     DECODE RECEIVED WORD
C
C*********************************************************************
C
      DO 3400 K=1,66
      INPUT = 0
      IF(K.LE.33) INPUT = DECODE(34-K,J+1)
      CALL SHIFT(SHIFTR,INPUT)
      IF(INPUT.EQ.1) CALL FEEDBK(SHIFTR,1)
      IF(FLAG.EQ.1) CALL FEEDBK(SHIFTR,2)
      IF(FLAG.NE.INPUT) CALL FEEDBK(SHIFTR,3)
 1510 CONTINUE
      FLAG = SHIFTR(14)
      IF(J.EQ.2 .AND. I.EQ.0) PRINT 80, K, INPUT, SHIFTR
   80 FORMAT(1X,I3,2X,I2,2X,14I2)
      IF(K.LE.32) GOTO 3400
      CALL DETECT(SHIFTR,IPTRN,ISTAT)
C
C*************************************************************
C
C     CORRECTIONS
C
C*************************************************************
  900 IF(K.EQ.66) GOTO 3400
      CORRCT(66-K,J+1) = DECODE(66-K,J+1)
      IF(ISTAT.NE.0) GOTO 3400
      LOCATE = 1
 3400 CONTINUE
      IF(LOCATE.EQ.0) PRINT 898, NBIT, J+1
  898 FORMAT(1X,'ERROR POSITION:',I2,'  ROW:',I2,10X,'NOT DETECTED')
 1500 CONTINUE
C
      IF(I.EQ.0) PRINT 330
  330 FORMAT(/1X,'CORRECTED ARRAY')
      IF(I.EQ.0) PRINT 40, CORRCT
C
C**************************************************************
C
C     COMPARISON
C
C**************************************************************
      NUMERR = 0
      DO 1200 N1=0,32
      DO 1200 N2=1,20
      NG = 0                           ! RESET INDICATOR
      IF(ENCODE(33-N1,N2).NE.CORRCT(33-N1,N2)) NG = 1
      IF(NG.EQ.1) NUMERR = NUMERR + 1
 1200 CONTINUE
      PRINT 90, NUMERR
   90 FORMAT(/1X,'THE NUMBER OF ERRONEOUS CORRECTIONS IS:',I4)
 9000 CONTINUE
      END

C
      SUBROUTINE RANDOM(DATR)
C
C PURPOSE: GENERATES 380 RANDOM DATA BITS
C
      INTEGER*2 DATR(380)
C
      DO 100 I=1,380
      CALL RANDU(X)                    ! SYSTEM RANDOM # GENERATOR
      DATR(I) = 0
      IF(X.GT.0.5) DATR(I) = 1
  100 CONTINUE
C
      RETURN
      END
C
C
      SUBROUTINE CLEAR(SHIFTR)
C
C PURPOSE: INSERTS ZEROES IN A FIXED LENGTH REGISTER
C          (LENGTH = 14)
C
      INTEGER*2 SHIFTR(14)
C
      DO 100 I=1,14
      SHIFTR(I) = 0
  100 CONTINUE
C
      RETURN
      END
C
C

C
      SUBROUTINE SHIFT(SHIFTR,INPUT)
C
C PURPOSE: TO SHIFT REGISTER TO RIGHT ONE PLACE
C          OLD MSB IS DISCARDED
C          INPUT BIT IS LOADED INTO LSB
C
      INTEGER*2 SHIFTR(14)
C
      DO 100 I=13,1,-1
      SHIFTR(I+1) = SHIFTR(I)
  100 CONTINUE
C
      SHIFTR(1) = INPUT
C
      RETURN
      END
C

      SUBROUTINE FEEDBK(SHIFTR,MODE)
C
C PURPOSE: TO ADD FEEDBACK TO THE SHIFT REGISTER
C          IN ONE OF THREE MODES
C          (1) T(X) ONLY
C          (2) G(X) ONLY
C          (3) T(X)+G(X)
C
      INTEGER*2 SHIFTR(14),FTX(14),FGX(14),FTXGX(14)
C
      DO 100 I=1,14
      IF(MODE.EQ.1) SHIFTR(I) = SHIFTR(I) + FTX(I)
      IF(MODE.EQ.2) SHIFTR(I) = SHIFTR(I) + FGX(I)
      IF(MODE.EQ.3) SHIFTR(I) = SHIFTR(I) + FTXGX(I)
      IF(SHIFTR(I).EQ.2) SHIFTR(I) = 0
  100 CONTINUE
C
      RETURN
      END
C
C
      SUBROUTINE CORRPT(IBITS,ISTRT,IBUF)
C
C PURPOSE: TO ADD AN ERROR VECTOR TO DATA & CHECK BITS
C
      DIMENSION IBUF(660)
      DO 100 I=0,IBITS-1
      IF(IBUF(ISTRT+I).EQ.0) GOTO 10
      IF(IBUF(ISTRT+I).EQ.1) GOTO 20
      GOTO 100
   10 IBUF(ISTRT+I) = 1
      GOTO 100
   20 IBUF(ISTRT+I) = 0
  100 CONTINUE
      RETURN
      END
C

C
      SUBROUTINE INTERL(ENCODE,IBUF)
C
C PURPOSE: TO INTERLEAVE DATA SO AS TO SCRAMBLE ERRORS
C
      INTEGER*2 ENCODE(33,20),IBUF(660)
      DO 10 K=0,32
      DO 10 I=1,20
      IBUF(20*K+I) = ENCODE(33-K,I)
   10 CONTINUE
      RETURN
      END
C
C
      SUBROUTINE DEINTR(DECODE,IBUF)
C
C PURPOSE: TO DEINTERLEAVE DATA SO AS TO UNSCRAMBLE DATA
C
      INTEGER*2 DECODE(33,20),IBUF(660)
      DO 10 K=0,32
      DO 10 I=1,20
      DECODE(33-K,I) = IBUF(20*K+I)
   10 CONTINUE
      RETURN
      END
C
C
      SUBROUTINE DETECT(SHIFTR,IPTRN,ISTAT)
C
C PURPOSE: DETECTS LOCATION OF ERROR BY EXAMINING
C          9 MSB AND RECORDS ERROR PATTERN
C
      INTEGER*2 SHIFTR(14),IPTRN(5)
      ISTAT = 0                        ! WINDOW FOUND
      DO 100 I=1,9
      IF(SHIFTR(I).NE.0) ISTAT = 1
  100 CONTINUE
      DO 200 I=1,5
      IPTRN(I) = SHIFTR(I+9)
  200 CONTINUE
      RETURN
      END

APPENDIX C

RESULTS FROM EDCC SIMULATION PROGRAM

DISC SIMULATION PROGRAM FOR AN ERROR CORRECTION SYSTEM
(100 BIT BURST ERROR INDUCED)

RANDOM DATA (380)
[380-bit binary listing]

SEGMENTED DATA BEFORE PARITY BITS (660)
[binary listing of the 20 segmented information words]

CLK  INPUT  SHIFT REGISTER
[encoder shift-register contents for clocks 1 through 19]

SEGMENTED DATA AFTER PARITY BITS
[660-bit binary listing]

INTERLEAVED ARRAY
[660-bit binary listing]

INTERLEAVED ARRAY PLUS ERROR PATTERN
[660-bit binary listing]

DEINTERLEAVED ARRAY
[660-bit binary listing]

CLK  INPUT  SHIFT REGISTER
[decoder shift-register contents for clocks 1 through 66]

CORRECTED ARRAY
[660-bit binary listing]

THE NUMBER OF ERRONEOUS CORRECTIONS IS: 0

THE NUMBER OF ERRONEOUS CORRECTIONS IS: 0

THE NUMBER OF ERRONEOUS CORRECTIONS IS: 0

THE NUMBER OF ERRONEOUS CORRECTIONS IS: 0

THE NUMBER OF ERRONEOUS CORRECTIONS IS: 0