EE564/CSE554: Error Correcting Codes                                Spring 2018

Week 3: January 22-26, 2018

Lecturer: Viveck R. Cadambe                             Scribe: Yu-Tse Lin

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor.

3.1 Basics of Linear Algebra

3.1.1 Abelian Group

$(G, *)$ is a group if it satisfies the axioms:

1. Closure: $a * b \in G \ \forall a, b \in G$
2. Associativity: $a * (b * c) = (a * b) * c$
3. Identity: $\exists e$ such that $a * e = e * a = a \ \forall a$
4. Inverse: $\forall a, \exists a^{-1}$ such that $a * a^{-1} = e$

Moreover, $(G, *)$ is an Abelian group if it also satisfies

5. Commutativity (Abelian): $a * b = b * a$

3.1.2 Field $(F, +, \cdot)$

$(F, +, \cdot)$ is a field if it satisfies:

- $(F, +)$ is an Abelian group.
- $(F \setminus \{0\}, \cdot)$ is an Abelian group, where $0$ is the identity in $(F, +)$.
- Distributive law: $a \cdot (b + c) = a \cdot b + a \cdot c$

A field is called a finite field if $F$ is a finite set. We will prove the following theorem later in the course.

Theorem 3.1 $(\mathbb{Z}_q, +, \times)$, where $+$ and $\times$ are performed modulo $q$, is a finite field if and only if $q$ is a prime.

3.1.3 Vector Space

$V$ is a vector space over a field $(F, +, \cdot)$, where $+$ is an addition operation over $V$ and $\cdot : F \times V \to V$ is a scalar multiplication operation, if it satisfies:

1. $(V, +)$ is an Abelian group,
2. Distributive law: $\alpha \cdot (\vec{v}_1 + \vec{v}_2) = \alpha \cdot \vec{v}_1 + \alpha \cdot \vec{v}_2$, where $\cdot$ is scalar multiplication.

Recall some linear algebra concepts that will be used in the course:

- Linear independence: $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are said to be linearly independent if
  $$\alpha_1 \vec{v}_1 + \alpha_2 \vec{v}_2 + \ldots + \alpha_n \vec{v}_n = 0 \iff \alpha_1 = \alpha_2 = \ldots = \alpha_n = 0$$
- Span: The span of a set of vectors is the set of all finite linear combinations of the vectors.
- Basis: A basis $B$ of a vector space $V$ over a field $F$ is a minimal spanning set. Equivalently, it is a linearly independent subset of $V$ that spans $V$.
- Dimension: The dimension of a vector space is the number of vectors in any basis for the space.
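Theorem 3.1 can be checked empirically for small $q$. The sketch below (not from the notes; the function name `is_field` is mine) tests only the existence of multiplicative inverses in $(\mathbb{Z}_q \setminus \{0\}, \times)$, since that is the axiom that fails when $q$ is composite.

```python
# Check whether (Z_q, +, x) with arithmetic mod q forms a field, by testing
# that every nonzero element has a multiplicative inverse. For composite q,
# any element sharing a factor with q has no inverse.
def is_field(q):
    return all(any((a * b) % q == 1 for b in range(1, q))
               for a in range(1, q))

print(is_field(5))   # True  (5 is prime)
print(is_field(6))   # False (e.g. 2 has no inverse mod 6)
```

For instance, $2 \times b \bmod 6$ only takes the values $\{0, 2, 4\}$, so no inverse of $2$ exists in $\mathbb{Z}_6$.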
Note that $F^n$ is an $n$-dimensional vector space over the field $F$.

3.2 Linear Codes

An $(n, k)$ linear code $C$ over a finite field $F$ is a $k$-dimensional subspace of $F^n$. Consequently, $|C| = |F|^k$. Importantly, a linear combination of codewords is also a codeword in a linear code.

3.2.1 Generator Matrix of a Linear Code $C$

A generator matrix $G$ is of the form
$$G = \begin{pmatrix} \vec{g}_1 \\ \vec{g}_2 \\ \vdots \\ \vec{g}_k \end{pmatrix}$$
where $\vec{g}_1, \vec{g}_2, \ldots, \vec{g}_k$ are $1 \times n$ vectors that form a basis for the code; note that these vectors are linearly independent codewords. Dimension: $k \times n$. Encoding: $\vec{x} = \vec{m}G$.

3.2.1.1 Examples

1. Repetition codes: $G_{1 \times n} = \begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix}$

2. Single parity codes: $G_{k \times (k+1)} = \begin{pmatrix} I & \vec{1} \end{pmatrix}$

3. (7,4) Hamming code: the codeword is $(m_1, m_2, m_3, m_4,\ m_1 + m_2 + m_4,\ m_1 + m_2 + m_3,\ m_2 + m_3 + m_4)$, so
$$G_{4 \times 7} = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \end{pmatrix}$$

3.2.2 Systematic Form

A generator matrix of the form $G = \begin{pmatrix} I & P \end{pmatrix}$ is said to be in systematic form. For any linear code $C$ with generator matrix $G$, a systematic-form generator matrix $\hat{G}$ can be obtained via matrix row operations.

3.2.3 Parity Check Matrix, Dual Code

Given a generator matrix $G \in F^{k \times n}$, a parity check matrix is a matrix $H \in F^{(n-k) \times n}$ whose rows span the null space of $G$ (i.e., $\ker(G)$), so that
$$GH^T = 0$$

Theorem 3.2 (Rank-Nullity theorem) $\mathrm{rank}(G) + \dim(\ker(G)) = n$.

Theorem 3.3 Let $G = \begin{pmatrix} I & P \end{pmatrix}$ be a generator matrix for a code $C$. Then $H = \begin{pmatrix} -P^T & I \end{pmatrix}$ is a parity check matrix of the code.

An alternate description of a linear code $C$, in terms of its parity check matrix $H$, is as follows:
$$C = \{\vec{x} : \vec{x} = \vec{m}G,\ \vec{m} \in F^k\} = \{\vec{x} : \vec{x} \in F^n,\ H\vec{x}^T = 0\}$$

3.2.3.1 Example

Consider the repetition code over $F = \{0, 1\}$, with generator matrix $G = \begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix}$ and parity check matrix $H = \begin{pmatrix} \vec{1} & I_{n-1} \end{pmatrix}$. Note that $H$ is also the generator matrix of the single parity code. The repetition and single parity codes are called dual codes.

3.2.4 Dual Code

Let $C$ be a code over $F^n$. The dual code of $C$ is defined as
$$C^\perp = \{\vec{y} : \vec{x}\vec{y}^T = 0 \text{ for all } \vec{x} \in C\}$$
Note that for a linear code $C$, $(C^\perp)^\perp$
$= C$. If $H$ is a parity check matrix of $C$, then it is a generator matrix for $C^\perp$.

3.2.5 Minimum Distance of Linear Codes

An $(n, k)$ linear code has minimum distance $d$
$\iff$ the distance between the two closest codewords is $d$
$\iff$ the smallest Hamming weight of a non-zero codeword is $d$
$\iff$ $d$ is the largest number such that every $n - (d-1)$ columns of $G$ have rank $k$
$\iff$ $d$ is the largest number such that every $d - 1$ columns of $H$ are linearly independent.

We state the above properties more formally and give proofs/sketches below.

Theorem 3.4 $C$ is an $(n, k, d)$ linear code if and only if $d = \min\{HW(\vec{x}) : \vec{x} \in C, \vec{x} \neq 0\}$.

Proof: [Sketch.] $d_H(\vec{x}, \vec{y}) = d_H(\vec{x} - \vec{y}, \vec{0})$. Then,
$$\min_{\vec{x}, \vec{y} \in C,\ \vec{x} \neq \vec{y}} d_H(\vec{x}, \vec{y}) = d \implies \min_{\vec{x} - \vec{y} \neq 0} HW(\vec{x} - \vec{y}) = d$$
Noting that $\vec{x} - \vec{y}$ is also a codeword in $C$ leads to the desired conclusion.

Theorem 3.5 $C$ is an $(n, k, d)$ linear code if and only if
1. every $n - (d-1)$ columns of $G$ have rank $k$, and
2. there exist $n - d$ columns of $G$ that have rank $k - 1$.

Proof: We first prove 1. The minimum distance is $d$, which implies that $C$ can correct $d - 1$ erasures. Let $J \subset \{1, 2, \ldots, n\}$ be the coordinates of a projection with $|J| = n - (d-1)$. Then, for distinct codewords $\vec{x}, \vec{y}$,
$$\vec{x}|_J - \vec{y}|_J \neq 0$$
$$\vec{m}_1 G|_J - \vec{m}_2 G|_J \neq 0 \implies (\vec{m}_1 - \vec{m}_2) G|_J \neq 0 \ \forall\ \vec{m}_1 - \vec{m}_2 \neq 0$$
$$\implies \vec{m} G|_J \neq 0 \ \forall\ \vec{m} \neq 0 \implies \mathrm{rank}(G|_J) = k$$
The proof of 2. follows similarly, noting that if every $n - d$ columns had rank $k$, then the code could have corrected $d$ erasures (just reversing the above chain of implications), which contradicts the hypothesis that the minimum distance is $d$.

Theorem 3.6 $C$ is an $(n, k, d)$ linear code if and only if
1. every $d - 1$ columns of $H$ are linearly independent, and
2. there exist $d$ linearly dependent columns of $H$.

Proof: 1. Suppose for the sake of contradiction that there exist $d - 1$ columns of $H$ that are linearly dependent; we show that the minimum distance is then less than $d$.

Let $H = \begin{pmatrix} \vec{h}_1 & \vec{h}_2 & \cdots & \vec{h}_n \end{pmatrix}$. Suppose the columns $H|_J$ are linearly dependent, where $|J| \leq d - 1$.
This implies that
$$\sum_{i \in J} \alpha_i \vec{h}_i = 0, \quad \alpha_j \neq 0 \text{ for some } j \in J$$
Construct $\vec{x} = (x_1, x_2, \ldots, x_n)$, where
$$x_i = \begin{cases} 0 & i \notin J \\ \alpha_i & i \in J \end{cases}$$
Then $H\vec{x}^T = \sum_{i \in J} \alpha_i \vec{h}_i = 0$, which implies that (i) $\vec{x} \in C$ and (ii) $HW(\vec{x}) \leq d - 1$. This contradicts the assumption that the minimum distance of $C$ is $d$.

2. Let $\vec{y} = (y_1, y_2, \ldots, y_n)$ be a non-zero codeword in $C$ with Hamming weight $d$; note that such a vector $\vec{y}$ indeed exists because $d$ is the minimum distance of $C$. Let $J \subset \{1, \ldots, n\}$, with $|J| = d$, be such that $y_j \neq 0$ for $j \in J$. Then
$$\sum_{j \in J} y_j \vec{h}_j = \sum_{i=1}^{n} y_i \vec{h}_i = H\vec{y}^T = 0$$
which implies that the columns indexed by the elements of $J$ are linearly dependent.

3.2.6 Hamming Code

$n = 2^m - 1$, $(n - k) = m$, $k = 2^m - m - 1$; $H$ is an $m \times (2^m - 1)$ matrix whose columns include all $m$-long binary vectors except the zero vector. Theorem 3.6 can be used to conclude that the minimum distance of the code is $d = 3$: every pair of columns of $H$ is linearly independent, but we can find 3 columns that are linearly dependent.

Exercise: Verify that the Hamming code satisfies the sphere packing bound with equality (i.e., Hamming codes are perfect codes).
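As a sanity check (not part of the original notes), the following Python sketch builds the systematic (7,4) generator and parity check matrices from Sections 3.2.1.1 and 3.2.2, enumerates all $2^4$ codewords, and verifies Theorem 3.4's weight characterization ($d = 3$) together with the sphere packing equality from the exercise. The helper names (`encode`, `syndrome`) are illustrative, not from the notes.

```python
from itertools import product

# Systematic (7,4) Hamming code: G = [I | P], H = [-P^T | I] over F_2
# (note -P^T = P^T, since -1 = 1 mod 2).
P = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1],
     [1, 0, 1]]
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]
H = [[P[j][i] for j in range(4)] + [1 if i == j else 0 for j in range(3)]
     for i in range(3)]

def encode(m):
    # x = m G (mod 2): four message bits followed by three parity bits
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

def syndrome(x):
    # H x^T (mod 2): the all-zero vector exactly when x is a codeword
    return [sum(H[i][j] * x[j] for j in range(7)) % 2 for i in range(3)]

codewords = {encode(m) for m in product([0, 1], repeat=4)}

# |C| = |F|^k = 2^4, and every codeword passes the parity checks (G H^T = 0).
assert len(codewords) == 16
assert all(syndrome(x) == [0, 0, 0] for x in codewords)

# Minimum distance = smallest non-zero Hamming weight (Theorem 3.4).
d = min(sum(x) for x in codewords if any(x))
print(d)  # 3

# Sphere packing bound met with equality: |C| * (1 + n) = 2^n.
print(len(codewords) * (1 + 7) == 2 ** 7)  # True: a perfect code
```

Here $1 + n = 8$ is the volume of a radius-1 Hamming ball in $\{0,1\}^7$, so the 16 balls centered at codewords exactly tile $\{0,1\}^7$.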
