Math 550 Coding and Cryptography Workbook, J. Swarts, 0121709


Contents

1 Introduction and Basic Ideas
1.1 Introduction
1.2 Terminology
1.3 Basic Assumptions About the Channel
2 Detecting and Correcting Errors
3 Linear Codes
3.1 Introduction
3.2 The Generator and Parity Check Matrices
3.3 Cosets
3.3.1 Maximum Likelihood Decoding (MLD) of Linear Codes
4 Bounds for Codes
5 Hamming Codes
5.1 Introduction
5.2 Extended Codes
6 Golay Codes
6.1 The Extended Golay Code: C24
6.2 The Golay Code: C23
7 Reed-Muller Codes
7.1 The Reed-Muller Codes RM(1, m)
8 Decimal Codes
8.1 The ISBN Code
8.2 A Single Error Correcting Decimal Code
8.3 A Double Error Correcting Decimal Code
8.4 The Universal Product Code (UPC)
8.5 US Money Order
8.6 US Postal Code
9 Hadamard Codes
9.1 Background
9.2 Definition of the Codes
9.3 How good are the Hadamard Codes?
10 Introduction to Cryptography
10.1 Basic Definitions
10.2 Affine Cipher
10.2.1 Cryptanalysis of the Affine Cipher
10.3 Some Other Ciphers
10.3.1 The Vigenère Cipher
10.3.2 Cryptanalysis of the Vigenère Cipher: The Kasiski Examination
10.3.3 The Vernam Cipher
11 Public Key Cryptography
11.1 The Rivest-Shamir-Adleman (RSA) Cryptosystem
11.1.1 Security of the RSA System
11.2 Probabilistic Primality Testing
12 The Rabin Cryptosystem
12.1 Security of the Rabin Cryptosystem
13 Factorisation Algorithms
13.1 Pollard's p − 1 Factoring Method (ca. 1974)
14 The ElGamal Cryptosystem
14.1 Discrete Logarithms (a.k.a. indices)
14.2 The System
14.3 Attacking the ElGamal System
15 Elliptic Curve Cryptography
15.1 The ElGamal System in Arbitrary Groups
15.2 Elliptic Curves
15.3 The Menezes-Vanstone Cryptosystem
16 The Merkle-Hellman "knapsack" System
17 The McEliece Cryptosystem
A Assignments

Part I Coding

Chapter 1 Introduction and Basic Ideas

1.1 Introduction

A model of a communication system is shown in Figure 1.1.

Figure 1.1: Model of a Communications System

The function of each block is as follows:

✗ Source: This produces the output that we would like to transmit over the channel. The output can be either continuous, for example voice, or discrete, as in voltage measurements taken at set times.

✗ Source Encoder: The source encoder transforms the source data into binary digits (bits). Further, the aim of the source encoder is to minimise the number of bits required to represent the source data. We have one of two options here: we could require that the source data be perfectly reconstructible (as in computer data), or we may allow a certain level of errors in the reconstruction (as in images or speech).

✗ Encryption: Is intended to make the data unintelligible to all but the intended receiver. That is, it is intended to preserve the secrecy of the messages in the presence of unwelcome monitoring of the channel. Cryptography is the science of maintaining secrecy of data from both passive intrusion (eavesdropping) and active intrusion (introduction or alteration of messages). This introduces the following distinctions:
✎ Authentication: the corroboration that the origin of a message is as claimed.

✎ Data Integrity: the property that the data has not been changed in an unauthorised way.

✗ Channel Encoder: Tries to maximise the rate at which information can be reliably transmitted on the channel in the presence of disruptions (noise) that can introduce errors. Error correction coding, located in the channel encoder, adds redundancy in a controlled manner to messages to allow transmission errors to be detected and/or corrected.

✗ Modulator: Transforms the data into a format suitable for transmission over the channel.

✗ Channel: The medium that we use to convey the data. This could be in the form of a wireless link (as in cell phones), a fibre optic link, or even a storage medium such as magnetic disks or CDs.

✗ Demodulator: Converts the received data back into its original format (usually bits).

✗ Channel Decoder: Uses the redundancy introduced by the channel encoder to try and detect/correct any possible errors that the channel introduced.

✗ Decryption: Converts the data back into an intelligible form. At this point one would also perform authentication and data integrity checks.

✗ Source Decoder: Adds back the original (naturally occurring) redundancy that was removed by the source encoder.

✗ Receiver: The intended destination of the data produced by the source.

1.2 Terminology

Definition 1.2.1 (Alphabet). An alphabet is a finite, nonempty set of symbols. Usually the binary alphabet K = {0, 1} is used and the elements of K are called binary digits or bits.

Definition 1.2.2 (Word). A word is a finite sequence of elements from an alphabet.

Definition 1.2.3 (Code). A code is a set C of words (or codewords).

Example 1.2.4. C1 = {00, 01, 10, 11} and C2 = {000, 001, 01, 1} are both codes over the alphabet K.

Definition 1.2.5 (Block Code). A block code has all codewords of the same length; this number is called the length of the code. In our example C1 is a (binary) block code of length 2, while C2 is not a block code.

Definition 1.2.6 (Prefix Code). A prefix code is a code such that there do not exist distinct words wi and wj such that wi is a prefix (initial segment) of wj. C2 is an example of a prefix code.

Prefix codes can be decoded easily since no codeword is a prefix of any other. We simply scan the word from left to right until we have a codeword; we then continue scanning from this point on until we reach the next codeword. This is continued until we reach the end of the received word.

Example 1.2.7. Suppose the words 000, 001, 01 and 1 correspond to N, S, E and W respectively. If 001011100000101011 is received, the message is S E W W N S E E W. Since C2 is a prefix code, there is only one way to decode this message.

Prefix codes can be used in a situation where not all data are equally likely: shorter words are used to encode data that occur more frequently and longer words are used for infrequent data. A good example of this is Morse code. In English, the letter E occurs quite frequently and so in Morse code it is represented by a single dot (·). On the other hand, the letter Z rarely occurs and it is assigned the sequence of dots and dashes − − · ·.
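The left-to-right scanning procedure described for prefix codes is easy to mechanise. The following short Python sketch (not part of the workbook) decodes the received string from Example 1.2.7 against C2, using the codeword-to-letter mapping given in that example.

```python
# Decode a received bit string against a prefix code by scanning left to right.
# Mapping taken from Example 1.2.7: 000 -> N, 001 -> S, 01 -> E, 1 -> W.
CODE = {"000": "N", "001": "S", "01": "E", "1": "W"}

def prefix_decode(received: str, code: dict) -> list:
    """Accumulate bits until the buffer matches a codeword, emit it, and continue."""
    message, buffer = [], ""
    for bit in received:
        buffer += bit
        if buffer in code:          # a prefix code guarantees this match is unambiguous
            message.append(code[buffer])
            buffer = ""
    if buffer:
        raise ValueError("received string does not end on a codeword boundary")
    return message

print(prefix_decode("001011100000101011", CODE))
# ['S', 'E', 'W', 'W', 'N', 'S', 'E', 'E', 'W']
```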
1.3 Basic Assumptions About the Channel

We make the following three assumptions about the channel:

1. If n symbols are transmitted, then n symbols are received, though maybe not the same ones. That is, nothing is added or lost.

2. There is no difficulty identifying the beginning of the first word transmitted.

3. Noise is scattered randomly rather than being in clumps (called bursts). That is, the probability of error is fixed and the channel stays constant over time. Most channels do not satisfy this property, with one notable exception: the deep space channel (used to transmit images and other data between spacecraft and earth). In almost all other channels the probability of error varies over time, for example on a CD, where certain areas are more damaged than others, and in cell phones, where the radio channel varies continually as the users move about their terrain.

A binary channel is called symmetric if 0 and 1 are transmitted with equal accuracy. The reliability of such a channel is the probability p, 0 ≤ p ≤ 1, that the digit sent is the digit received. The error probability q = 1 − p is the probability that the digit sent is received in error.

Figure 1.2: The Binary Symmetric Channel

Example 1.3.1. If we are transmitting codewords of length 4 over a binary symmetric channel (BSC) with reliability p and error probability q = 1 − p, then

Probability of no errors: (4 choose 0) p^4 q^0,
Probability of 1 error: (4 choose 1) p^3 q^1,
Probability of 2 errors: (4 choose 2) p^2 q^2,
Probability of 3 errors: (4 choose 3) p^1 q^3,
Probability of 4 errors: (4 choose 4) p^0 q^4.

Chapter 2 Detecting and Correcting Errors

Errors can be detected when the received word is not a codeword. If a codeword is received, then it could be that no errors, or several errors (changing one codeword into another), have occurred.

Consider C1 = {00, 01, 10, 11}. Every received word is a codeword, so no errors can be detected (and none can be corrected). On the other hand, consider

C2 = {000000, 010101, 101010, 111111},

obtained from C1 by repeating each word in C1 three times; this is known as a repetition code. Here we can detect 2 or fewer errors, as we need 3 or more errors to change one codeword into another.

Example 2.1.2. Let C2 = {000000, 010101, 101010, 111111} be the code defined above. Assume that 010101 is sent, but that 110111 is received.
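The workbook's example breaks off at this point. As a quick illustration of the distance comparison such an example turns on, the following sketch (not from the workbook) computes the Hamming distance from the received word to each codeword of C2.

```python
# Hamming distances from a received word to every codeword of the repetition code C2.
C2 = ["000000", "010101", "101010", "111111"]

def hamming_distance(u: str, v: str) -> int:
    """Number of positions in which two words of equal length differ."""
    return sum(a != b for a, b in zip(u, v))

received = "110111"                      # 010101 was sent; two of the six bits were flipped
for codeword in C2:
    print(codeword, hamming_distance(received, codeword))
# 000000 5, 010101 2, 101010 4, 111111 1  -> the nearest codeword is 111111,
# so a nearest-codeword decoder would not recover the word actually sent.
```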
Recommended publications
  • A [49 16 6 ]-Linear Code As Product of the [7 4 2 ] Code Due to Aunu and the Hamming [7 4 3 ] Code
    General Letters in Mathematics, Vol. 4, No. 3, June 2018, pp. 114-119. Available online at http://www.refaad.com, https://doi.org/10.31559/glm2018.4.3.4

    A [49 16 6]-Linear Code as Product of the [7 4 2] Code Due to Aunu and the Hamming [7 4 3] Code. CHUN, Pamson Bentse (Department of Mathematics, Plateau State University Bokkos, Jos, Nigeria, [email protected]); IBRAHIM, Alhaji Aminu (Department of Mathematics, Usmanu Danfodiyo University Sokoto, Sokoto, Nigeria, [email protected]); MAGAMI, Mohe'd Sani (Department of Mathematics, Usmanu Danfodiyo University Sokoto, Sokoto, Nigeria, [email protected])

    Abstract: The enumeration of the construction due to "Audu and Aminu" (AUNU) permutation patterns, of a [7 4 2]-linear code which is an extended code of the [6 4 1] code and is in one-one correspondence with the known [7 4 3]-Hamming code, has been reported by the authors. The [7 4 2] linear code, so constructed, was combined with the known Hamming [7 4 3] code using the (u|u+v)-construction method to obtain a new hybrid and more practical single [14 8 3] error-correcting code. In this paper, we provide an improvement by obtaining a much more practical and applicable double error correcting code whose extended version is a triple error correcting code, by combining the same codes as in [1]. Our goal is achieved through using the product code construction approach with the aid of some proven theorems.

    Keywords: Cayley tables, AUNU Scheme, Hamming codes, Generator matrix, Tensor product. 2010 MSC No: 97H60, 94A05 and 94B05
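    The (u|u+v)-construction the abstract refers to is easy to state computationally: each new codeword is a word u from the first code followed by the bitwise sum u+v with a word v from the second code. The sketch below (not from the paper; the two component codes are small stand-ins chosen for illustration, not the Aunu [7 4 2] code) shows the construction and checks the size and minimum weight of the result.

```python
# Sketch of the (u|u+v)-construction: concatenate u with u XOR v for all pairs (u, v).
# Component codes here are illustrative stand-ins: the [3,2,2] single parity check
# code and the [3,1,3] repetition code, giving a [6,3,3] code.
from itertools import product

def u_u_plus_v(code_u, code_v):
    """All words (u | u+v) with u from code_u and v from code_v (binary tuples)."""
    new_code = []
    for u, v in product(code_u, code_v):
        u_plus_v = tuple(a ^ b for a, b in zip(u, v))
        new_code.append(u + u_plus_v)
    return new_code

C_u = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # [3,2,2]
C_v = [(0, 0, 0), (1, 1, 1)]                         # [3,1,3]
C = u_u_plus_v(C_u, C_v)
print(len(C), min(sum(w) for w in C if any(w)))      # 8 codewords, minimum weight 3
```

The minimum distance of the combined code is min(2*d_u, d_v), which is what the printed minimum weight confirms for this toy pair.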
  • Coding Theory: Linear Error-Correcting Codes, Anna Dovzhik
    Anna Dovzhik, April 23, 2014

    Outline: 1. Coding Theory (Basic Definitions; Error Detection and Correction); 2. Finite Fields; 3. Linear Codes (Hamming Codes; Finite Fields Revisited; BCH Codes; Reed-Solomon Codes); 4. Conclusion

    Basic Definitions.
    Definition. If A = {a1, a2, ..., aq}, then A is a code alphabet of size q.
    Definition. A q-ary word w = w1 w2 w3 ... wn is a vector where each wi ∈ A.
    Definition. A q-ary block code is a set C over an alphabet A, where each element, or codeword, is a q-ary word of length n.
  • Superregular Matrices and Applications to Convolutional Codes
    Superregular matrices and applications to convolutional codes. P. J. Almeida, D. Napp, R. Pinto. CIDMA - Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, Aveiro, Portugal.

    Abstract: The main results of this paper are twofold: the first one is a matrix theoretical result. We say that a matrix is superregular if all of its minors that are not trivially zero are nonzero. Given an a × b, a ≥ b, superregular matrix over a field, we show that if all of its rows are nonzero then any linear combination of its columns, with nonzero coefficients, has at least a − b + 1 nonzero entries. Secondly, we make use of this result to construct convolutional codes that attain the maximum possible distance for some fixed parameters of the code, namely, the rate and the Forney indices. These results answer some open questions on distances and constructions of convolutional codes posted in the literature [6, 9].

    Key words: convolutional code, Forney indices, optimal code, superregular matrix. 2000 MSC: 94B10, 15B33, 15B05

    1. Introduction. Several notions of superregular matrices (or totally positive) have appeared in different areas of mathematics and engineering, having in common the specification of some properties regarding their minors [2, 3, 5, 11, 14]. In the context of coding theory these matrices have entries in a finite field F and are important because they can be used to generate linear codes with good distance properties.
  • McEliece Cryptosystem Based on Extended Golay Code
    McEliece Cryptosystem Based on Extended Golay Code. Amandeep Singh Bhatia and Ajay Kumar, Department of Computer Science, Thapar University, India. E-mail: [email protected] (Dated: November 16, 2018)

    With increasing advancements in technology, it is expected that the emergence of a quantum computer will potentially break many of the public-key cryptosystems currently in use. It will compromise the confidentiality and integrity of communications. In this regard, we have privacy protectors (i.e. post-quantum cryptography), which resist attacks by quantum computers: cryptosystems that run on conventional computers and are secure against attacks by quantum computers. The practice of code-based cryptography is a trade-off between security and efficiency. In this chapter, we have explored the most successful McEliece cryptosystem, based on the extended Golay code [24, 12, 8]. We have examined the implications of using an extended Golay code in place of the usual Goppa code in the McEliece cryptosystem. Further, we have implemented a McEliece cryptosystem based on the extended Golay code using MATLAB. The extended Golay code has lots of practical applications. The main advantage of using the extended Golay code is that it has codewords of length 24 and a minimum Hamming distance of 8, which allows us to detect 7-bit errors while correcting 3 or fewer errors simultaneously, and it can be transmitted at a high data rate.

    I. INTRODUCTION. Over the last three decades, public key cryptosystems (Diffie-Hellman key exchange, the RSA cryptosystem, the digital signature algorithm (DSA), and elliptic curve cryptosystems) have become a crucial component of cyber security. In this regard, security depends on the hardness of certain number-theoretic problems (integer factorization or the discrete log problem).
  • Reed-Solomon Code and Its Application
    IERG 6120 Coding Theory for Storage Systems, Lecture 5, 27/09/2016. Reed-Solomon Code and Its Application. Lecturer: Kenneth Shum. Scribe: Xishi Wang.

    1 Equivalence of Codes

    Two linear codes are said to be equivalent if one of them can be obtained from the other by means of a sequence of transformations of the following types: (i) a permutation of the positions of the code; (ii) multiplication of symbols in a fixed position by a non-zero scalar in F. Note that these transformations can be applied to all code symbols.

    2 Reed-Solomon Code

    In an RS code each symbol in the codeword is the evaluation of a polynomial at one point α, namely

    f(α) = c0 + c1 α + c2 α^2 + ... + c_{k−1} α^{k−1}.

    The whole codeword is given by n such evaluations at distinct points α1, ..., αn:

    (f(α1), f(α2), ..., f(αn)) = (c0, c1, c2, ..., c_{k−1}) G = c G,

    where G is the generator matrix of the RSq(n, k, d) code:

    G = [ 1          1          ...  1
          α1         α2         ...  αn
          α1^2       α2^2       ...  αn^2
          ...
          α1^{k−1}   α2^{k−1}   ...  αn^{k−1} ].

    The order of writing down the field elements α1 to αn is not important as far as the code structure is concerned, as any permutation of the αi's gives an equivalent code. The generator matrix G is a Vandermonde matrix, with entries αj^i for i = 0, ..., k−1 and j = 1, ..., n.
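    Polynomial-evaluation encoding is easy to reproduce over a small prime field. The sketch below (parameters chosen only for illustration, not from the lecture notes) encodes k = 3 message symbols into an RS codeword of length n = 6 over GF(7) by evaluating the message polynomial at six distinct field elements.

```python
# Reed-Solomon encoding by polynomial evaluation over the prime field GF(7).
# Illustrative parameters: k = 3 message symbols, n = 6 evaluation points.
P = 7                                # field size (prime, so arithmetic is mod P)
ALPHAS = [1, 2, 3, 4, 5, 6]          # n distinct evaluation points alpha_1..alpha_n

def rs_encode(message, alphas, p=P):
    """Evaluate f(x) = c0 + c1*x + ... + c_{k-1}*x^{k-1} at each alpha (mod p)."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(message)) % p for a in alphas]

codeword = rs_encode([2, 5, 1], ALPHAS)   # message (c0, c1, c2) = (2, 5, 1)
print(codeword)                            # [1, 2, 5, 3, 3, 5]
```

    Since two distinct polynomials of degree less than k = 3 can agree on at most 2 of the 6 evaluation points, codewords differ in at least n − k + 1 = 4 positions, which is the familiar RS minimum distance.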
  • Coded Computing via Binary Linear Codes: Designs and Performance Limits. Mahdi Soleymani, Mohammad Vahid Jamali, and Hessam Mahdavifar
    Abstract: We consider the problem of coded distributed computing where a large linear computational job, such as a matrix multiplication, is divided into k smaller tasks, encoded using an (n, k) linear code, and performed over n distributed nodes. The goal is to reduce the average execution time of the computational job. We provide a connection between the problem of characterizing the average execution time of a coded distributed computing system and the problem of analyzing the error probability of codes of length n used over erasure channels. Accordingly, we present closed-form expressions for the execution time using binary random linear codes and the best execution time any linear-coded distributed computing system can achieve. It is also shown that there exist good binary linear codes that not only attain (asymptotically) the best performance that any linear [excerpt cut off]

    From the introduction: ... processing capabilities. A coded computing scheme aims to divide the job to k < n tasks and then to introduce n − k redundant tasks using an (n, k) code, in order to alleviate the effect of slower nodes, also referred to as stragglers. In such a setup, it is often assumed that each node is assigned one task and hence, the total number of encoded tasks is n, equal to the number of nodes. Recently, there has been extensive research activity to leverage coding schemes in order to boost the performance of distributed computing systems [6]–[18]. However, most of the work in the literature focuses on the application of maximum distance separable (MDS) codes.
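    The link the abstract draws between stragglers and erasure decoding can be seen in a toy simulation. The sketch below is an illustration of the idea (not code from the paper): it assumes an MDS code, so the job finishes once any k of the n coded tasks complete, and it assumes exponential task times purely for concreteness.

```python
# Toy straggler simulation: with an MDS (n, k) code, the job is done when the
# k-th fastest of n coded tasks finishes; uncoded, it waits for all k tasks.
import random

def average_completion_time(n, k, coded, trials=20000):
    total = 0.0
    for _ in range(trials):
        times = sorted(random.expovariate(1.0) for _ in range(n if coded else k))
        total += times[k - 1] if coded else times[-1]
    return total / trials

k, n = 10, 15
print("uncoded :", round(average_completion_time(n, k, coded=False), 3))
print("coded   :", round(average_completion_time(n, k, coded=True), 3))
```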
  • Reed-Solomon Error Correction
    R&D White Paper WHP 031, July 2002. Reed-Solomon error correction. C.K.P. Clarke, BBC Research & Development, British Broadcasting Corporation.

    Abstract: Reed-Solomon error correction has several applications in broadcasting, in particular forming part of the specification for the ETSI digital terrestrial television standard, known as DVB-T. Hardware implementations of coders and decoders for Reed-Solomon error correction are complicated and require some knowledge of the theory of Galois fields on which they are based. This note describes the underlying mathematics and the algorithms used for coding and decoding, with particular emphasis on their realisation in logic circuits. Worked examples are provided to illustrate the processes involved.

    Key words: digital television, error-correcting codes, DVB-T, hardware implementation, Galois field arithmetic. © BBC 2002. All rights reserved.

    Contents: 1 Introduction; 2 Background Theory; 2.1 Classification of Reed-Solomon codes; 2.2 Galois fields.
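    The Galois field arithmetic the white paper builds on reduces to shifts and XORs, which is why it maps well to logic circuits. The sketch below (my illustration, not taken from the paper) multiplies two elements of GF(2^8) by the shift-and-reduce method; 0x11D encodes x^8 + x^4 + x^3 + x^2 + 1, the field polynomial commonly quoted for the DVB-T Reed-Solomon code, though the standard itself should be consulted for the exact parameters.

```python
# Multiplication in GF(2^8): polynomials over GF(2) reduced modulo a degree-8
# field polynomial. 0x11D encodes x^8 + x^4 + x^3 + x^2 + 1 (assumed here).
FIELD_POLY = 0x11D

def gf256_mul(a: int, b: int, poly: int = FIELD_POLY) -> int:
    """Shift-and-add multiplication with reduction after each doubling."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # "addition" is XOR in characteristic 2
        a <<= 1
        if a & 0x100:            # degree reached 8: reduce modulo the field polynomial
            a ^= poly
        b >>= 1
    return result

print(hex(gf256_mul(0x57, 0x83)))   # 0x31 with this field polynomial
```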
  • Linear Block Codes
    Linear block codes. An (n, k) block code takes k information bits into the block coder and produces n encoded bits: an n-digit codeword made up of k information digits and (n − k) redundant parity check digits. The rate or efficiency for this code is k/n:

    Code efficiency r = k/n = (number of information bits) / (total number of bits per codeword).

    Note: unlike source coding, in which data is compressed, here redundancy is deliberately added, to achieve error detection.

    Systematic block codes. A systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining (n − k) elements being check bits. A code vector then takes the form

    X = (m0, m1, m2, ..., m_{k−1}, c0, c1, c2, ..., c_{n−k})

    or

    X = (c0, c1, c2, ..., c_{n−k}, m0, m1, m2, ..., m_{k−1}).

    Systematic code: information digits are explicitly transmitted together with the parity check bits. For the code to be systematic, the k information bits must be transmitted contiguously as a block, with the parity check bits making up the code word as another contiguous block (information bits followed by parity bits).

    A systematic linear block code will have a generator matrix of the form G = [P | Ik]. Systematic codewords are sometimes written so that the message bits occupy the left-hand portion of the codeword and the parity bits occupy the right-hand portion.

    Parity check matrix (H). This will enable us to decode the received vectors. For each k×n generator matrix G, there exists an (n−k)×n matrix H such that the rows of G are orthogonal to the rows of H, i.e., GH^T = 0, where H^T is the transpose of H.
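    For a binary systematic code with G = [P | Ik], one valid parity check matrix is H = [I_{n−k} | P^T]. The sketch below (the choice of P is illustrative only, not taken from the excerpt) builds both matrices for a small example and verifies GH^T = 0 over GF(2).

```python
# Build H = [I_{n-k} | P^T] from G = [P | I_k] and verify G * H^T = 0 (mod 2).
import numpy as np

P = np.array([[1, 1, 0],      # k = 4 message bits, n - k = 3 parity bits
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
k, r = P.shape                # r = n - k
G = np.hstack([P, np.eye(k, dtype=int)])          # k x n generator matrix
H = np.hstack([np.eye(r, dtype=int), P.T])        # (n-k) x n parity check matrix

print((G @ H.T) % 2)          # all-zero k x (n-k) matrix confirms orthogonality
```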
  • Short Low-Rate Non-Binary Turbo Codes. Gianluigi Liva, Balázs Matuz, Enrico Paolini, Marco Chiani
    Abstract: A serial concatenation of an outer non-binary turbo code with different inner binary codes is introduced and analyzed. The turbo code is based on memory-1 time-variant recursive convolutional codes over high order fields. The resulting codes possess low rates and capacity-approaching performance, thus representing an appealing solution for spread spectrum communications. The performance of the scheme is investigated on the additive white Gaussian noise channel with coherent and noncoherent detection via density evolution analysis. The proposed codes compare favorably w.r.t. other low rate constructions in terms of complexity/performance trade-off. Low error floors and performances close to the sphere packing bound are achieved down to small block sizes (k = 192 information bits).

    From the introduction: ... decoding thresholds lie within 0.5 dB from the Shannon limit in the coherent case, for a wide range of coding rates (1/3 ≤ R ≤ 1/96). Remarkably, a similar result is achieved in the noncoherent detection framework as well. We then focus on the specific case where the inner code is either an order-q Hadamard code or a first order length-q Reed-Muller (RM) code, due to their simple fast Hadamard transform (FHT)-based decoding algorithms. The proposed scheme can be thus seen either as (i) a serial concatenation of an outer Fq-based turbo code with an inner Hadamard/RM code with antipodal signalling, or (ii) as a coded modulation Fq-based turbo/LDPC code with q-ary (bi-)orthogonal modulation.
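    The inner Hadamard/RM decoding mentioned in the excerpt rests on the fast Hadamard transform, which correlates a received vector against all rows of a Hadamard matrix in about n·log2(n) additions. A minimal sketch of the transform and of its use for picking the best-matching codeword (an illustration of the idea, not code from the paper):

```python
# In-place fast Hadamard (Walsh-Hadamard) transform of a length-2^m vector.
# The butterfly structure is what makes Hadamard / first-order RM decoding cheap.
def fht(vec):
    v = list(vec)
    h = 1
    while h < len(v):
        for start in range(0, len(v), 2 * h):
            for i in range(start, start + h):
                v[i], v[i + h] = v[i] + v[i + h], v[i] - v[i + h]
        h *= 2
    return v

# The transform output gives the correlation with every Hadamard (Sylvester-order) row.
received = [1, -1, 1, -1, 1, -1, -1, -1]   # row 1 of an order-8 Hadamard matrix, one sign flipped
scores = fht(received)
print(scores.index(max(scores)))           # 1: the decoder recovers the intended row
```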
  • Linear Block Codes: Vector Spaces
    Vector spaces. Let GF(q) be fixed, normally q = 2.
    The set Vn consisting of all n-dimensional vectors (a_{n−1}, a_{n−2}, ..., a0), ai ∈ GF(q), 0 ≤ i ≤ n−1, forms an n-dimensional vector space.
    In Vn we can also define the inner product of any two vectors a = (a_{n−1}, a_{n−2}, ..., a0) and b = (b_{n−1}, b_{n−2}, ..., b0):

    a·b = sum_{i=0}^{n−1} ai bi.

    If a·b = 0, then a and b are said to be orthogonal.

    Linear block codes. Let C ⊂ Vn be a block code consisting of M codewords. C is said to be linear if a linear combination of two codewords C1 and C2, a1 C1 + a2 C2 with a1, a2 ∈ GF(q), is still a codeword, that is, C forms a subspace of Vn. The dimension of the linear block code (a subspace of Vn) is denoted as k, and M = q^k.

    Dual codes. Let C ⊂ Vn be a linear code with dimension k. The dual code C⊥ of C consists of all vectors in Vn that are orthogonal to every vector in C. The dual code C⊥ is also a linear code; its dimension is equal to n − k.

    Generator matrix. Assume q = 2. Let C be a binary linear code with dimension k. Let {g1, g2, ..., gk} be a basis for C, where gi = (g_{i(n−1)}, g_{i(n−2)}, ..., g_{i0}), 1 ≤ i ≤ k. Then any codeword Ci in C can be represented uniquely as a linear combination of {g1, g2, ..., gk}.
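    The statement that a k-dimensional binary code has M = 2^k codewords can be checked directly by generating the span of a basis. The sketch below uses an illustrative basis of my own choosing (not one from the excerpt) and verifies the codeword count and closure under addition.

```python
# Generate a binary linear code as all GF(2) linear combinations of basis vectors.
from itertools import product

g = [(1, 0, 0, 1, 1),      # k = 3 basis vectors of length n = 5 (illustrative)
     (0, 1, 0, 1, 0),
     (0, 0, 1, 0, 1)]

def span(basis):
    """All GF(2) linear combinations of the basis vectors."""
    n = len(basis[0])
    words = set()
    for coeffs in product((0, 1), repeat=len(basis)):
        words.add(tuple(sum(c * v[j] for c, v in zip(coeffs, basis)) % 2 for j in range(n)))
    return words

C = span(g)
print(len(C))                                   # 2**3 = 8 codewords, so M = q^k
print(all(tuple((a + b) % 2 for a, b in zip(u, v)) in C for u in C for v in C))  # closed under addition
```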
  • Orthogonal Arrays and Codes
    Orthogonal Arrays & Codes

    Orthogonal Arrays, redux. An orthogonal array of strength t, a t-(v, k, λ)-OA, is a λv^t × k array of v symbols, such that in any t columns of the array every one of the possible v^t t-tuples of symbols occurs in exactly λ rows. We have previously defined an OA of strength 2. If all the rows of the OA are distinct we call it simple. If the symbol set is a finite field GF(q), the rows of an OA can be viewed as vectors of a vector space V over GF(q). If the rows form a subspace of V the OA is said to be linear. [Linear OAs are simple.]

    Constructions. Theorem: If there exists a Hadamard matrix of order 4m, then there exists a 2-(2, 4m−1, m)-OA.
    Proof: Let H be a standardized Hadamard matrix of order 4m. Remove the first row and take the transpose to obtain a 4m by 4m−1 array, which is a 2-(2, 4m−1, m)-OA.

    A 2-(2, 7, 2)-OA constructed from an order 8 Hadamard matrix:

    a a a a a a a
    a a a b b b b
    a b b a a b b
    a b b b b a a
    b a b a b a b
    b a b b a b a
    b b a a b b a
    b b a b a a b

    Linear constructions. Theorem: Let M be an m×n matrix over Fq with the property that every t columns of M are linearly independent. [excerpt cut off]
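    The theorem's construction is easy to verify computationally. The sketch below (an illustration; it assumes the Sylvester construction for the standardized order-8 Hadamard matrix, so its rows may come out in a different order than the table above) builds the 8 × 7 array and checks that every pair of columns contains each of the four symbol pairs exactly λ = 2 times.

```python
# Verify the Hadamard -> orthogonal array construction for 4m = 8:
# take a standardized order-8 Hadamard matrix (Sylvester construction),
# drop the first row, transpose, and check strength 2 with lambda = 2.
import numpy as np
from itertools import combinations, product

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)            # standardized: first row and column all +1
OA = H8[1:, :].T                             # 8 rows x 7 columns, entries +1/-1

strength_2 = all(
    all(sum((row[c1], row[c2]) == pair for row in OA) == 2
        for pair in product((1, -1), repeat=2))
    for c1, c2 in combinations(range(7), 2)
)
print(OA.shape, strength_2)                   # (8, 7) True
```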
  • An Introduction to Coding Theory
    An introduction to coding theory. Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India. Feb. 6, 2017. Lecture #6A: Some simple linear block codes - I.

    Outline of the lecture: dual code; examples of linear block codes: repetition code, single parity check code, Hamming code.

    Dual code. Two n-tuples u and v are orthogonal if their inner product (u, v) is zero, i.e.,

    (u, v) = sum_{i=1}^{n} (ui · vi) = 0.

    For a binary linear (n, k) block code C, the (n, n − k) dual code Cd is defined as the set of all codewords v that are orthogonal to all the codewords u ∈ C.
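    The lecture's two running examples, the repetition code and the single parity check code, are in fact duals of each other. A brute-force sketch (my illustration, not taken from the slides) finds the dual of the length-3 repetition code by testing every vector in V3 for orthogonality to all codewords.

```python
# Brute-force dual of the binary [3, 1] repetition code: collect every length-3
# vector orthogonal (mod 2) to all codewords; the result is the [3, 2] single
# parity check code, i.e. all even-weight words.
from itertools import product

C = [(0, 0, 0), (1, 1, 1)]                     # repetition code of length 3

def dual(code, n):
    return [v for v in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(u, v)) % 2 == 0 for u in code)]

print(dual(C, 3))   # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]
```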