15-853: Algorithms in the Real World
Error Correcting Codes II
• Reed-Solomon Codes
• Concatenated Codes
• Overview of some topics in coding:
  – Low Density Parity Check Codes (aka Expander Codes)
  – Network Coding
  – Compressive Sensing
  – List Decoding

Block Codes
Each message and codeword is of fixed size.
• ∑ = codeword alphabet; k = |m|, n = |c|, q = |∑|
• message (m) → coder → codeword (c) → noisy channel → codeword' (c') → decoder → message or error
• C ⊆ ∑^n (the set of codewords)
• ∆(x,y) = number of positions i such that x_i ≠ y_i
• d = min{ ∆(x,y) : x,y ∈ C, x ≠ y }
• s = max{ ∆(c,c') } that the code can correct
A code is described as: (n, k, d)_q

Linear Codes
If ∑ is a field, then ∑^n is a vector space.
Definition: C is a linear code if it is a linear subspace of ∑^n of dimension k.
This means that there is a set of k independent vectors v_i ∈ ∑^n (1 ≤ i ≤ k) that span the subspace, i.e. every codeword can be written as
  c = a_1 v_1 + a_2 v_2 + … + a_k v_k,  where a_i ∈ ∑
The sum of two codewords is a codeword, and the minimum distance equals the weight of the least-weight nonzero codeword.

Generator and Parity Check Matrices
Generator matrix: a k × n matrix G such that C = { xG | x ∈ ∑^k }. Made by stacking the spanning vectors.
Parity check matrix: an (n − k) × n matrix H such that C = { y ∈ ∑^n | Hy^T = 0 }. (Codewords are the null space of H.)
Both always exist for linear codes.
Encoding: mesg (1 × k) × G (k × n) = codeword (1 × n).
Checking: H ((n − k) × n) × (recv'd word)^T (n × 1) = syndrome ((n − k) × 1).

Relationship of G and H
For linear codes, if G is in standard form [I_k A], then H = [−A^T I_{n−k}].
Example, the (7,4,3) Hamming code:

  G = | 1 0 0 0 1 1 1 |      H = | 1 1 1 0 1 0 0 |
      | 0 1 0 0 1 1 0 |          | 1 1 0 1 0 1 0 |
      | 0 0 1 0 1 0 1 |          | 1 0 1 1 0 0 1 |
      | 0 0 0 1 0 1 1 |

If the syndrome is 0, the received word is a codeword; otherwise we have to use the syndrome to get back the codeword.
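The syndrome step for the (7,4,3) Hamming code can be made concrete with a short sketch. Only the G and H from the slide are assumed; the function names are mine, not the lecture's:

```python
# Toy encoder + syndrome decoder for the (7,4,3) Hamming code.
# All arithmetic is over GF(2).

G = [[1,0,0,0,1,1,1],
     [0,1,0,0,1,1,0],
     [0,0,1,0,1,0,1],
     [0,0,0,1,0,1,1]]

H = [[1,1,1,0,1,0,0],
     [1,1,0,1,0,1,0],
     [1,0,1,1,0,0,1]]

def encode(m):                  # c = mG
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(c):                # s = H c^T
    return [sum(H[i][j] * c[j] for j in range(7)) % 2 for i in range(3)]

def decode(c):
    s = syndrome(c)
    if any(s):                  # for a single error, s equals the column of H at the error position
        err = next(j for j in range(7) if [H[i][j] for i in range(3)] == s)
        c = c[:]
        c[err] ^= 1
    return c[:4]                # G is in standard form, so the first k symbols are the message

m = [1, 0, 1, 1]
c = encode(m)
c[5] ^= 1                       # one bit flipped by the "noisy channel"
assert decode(c) == m
```

With d = 3 the code corrects s = 1 error, which is exactly what the single-column syndrome lookup exploits.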
Two Codes
Hamming codes are binary (2^r − 1, 2^r − 1 − r, 3) codes: basically (n, n − log n, 3).
Hadamard codes are binary (2^r − 1, r, 2^(r−1)) codes: basically (n, log n, n/2).
The first has great rate but small distance; the second has great distance but poor rate.
Can we get Ω(n) rate and Ω(n) distance? Yes. One way is to use a random linear code. Let's see some direct, intuitive ways.

Reed-Solomon Codes
(Irving S. Reed and Gustave Solomon)

Reed-Solomon Codes in the Real World
• (204, 188, 17)_256 : ITU J.83(A)
• (128, 122, 7)_256 : ITU J.83(B)
• (255, 223, 33)_256 : common in practice
Note that they are all byte based (i.e., symbols are from GF(2^8)).
Decoding rate on a 1.8 GHz Pentium 4:
• (255, 251) = 89 Mbps
• (255, 223) = 18 Mbps
Dozens of companies sell hardware cores that operate 10x faster (or more):
• (204, 188) = 320 Mbps (Altera decoder)
PDF-417, QR codes, Aztec codes, and DataMatrix codes are all 2-dimensional Reed-Solomon bar codes. (images: wikipedia)

Applications of Reed-Solomon Codes
• Storage: CDs, DVDs, "hard drives"
• Wireless: cell phones, wireless links
• Satellite and space: TV, Mars rover, Voyager
• Digital television: DVD, MPEG2 layover
• High-speed modems: ADSL, DSL, …
Good at handling burst errors; other codes are better for random errors, e.g., Gallager codes and Turbo codes.

Viewing Messages as Polynomials
An (n, k, n − k + 1) code:
Consider the polynomial of degree k − 1
  p(x) = a_{k−1} x^{k−1} + … + a_1 x + a_0
Message: (a_{k−1}, …, a_1, a_0)
Codeword: (p(1), p(2), …, p(n))
To keep the p(i) fixed size, we use a_i ∈ GF(q^r). To make the i distinct, n ≤ q^r. For simplicity, imagine that n = q^r; then we have an (n, k, n − k + 1)_n code. The alphabet size increases with the codeword length, which is a little awkward. (But it can be fixed.)
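The polynomial encoding can be sketched directly. As an assumption for readability, this toy uses a small prime field GF(13), where ordinary modular arithmetic suffices, instead of the GF(2^8) used in practice, and toy parameters k = 3, n = 6:

```python
# Sketch: message (a_{k-1}, ..., a_0) -> codeword (p(1), ..., p(n)) over GF(q).
# q, k, n are illustrative toy values, not from the slides.
q = 13                  # field size; the evaluation points 1..n are distinct since n <= q - 1
k, n = 3, 6             # an (n, k, n-k+1) = (6, 3, 4) code

def encode(msg):
    def p(x):
        v = 0
        for a in msg:   # Horner's rule for a_{k-1} x^{k-1} + ... + a_1 x + a_0
            v = (v * x + a) % q
        return v
    return [p(x) for x in range(1, n + 1)]

msg = [5, 0, 7]                       # p(x) = 5x^2 + 7
assert encode(msg) == [12, 1, 0, 9, 2, 5]
```

Any k of the n symbols already determine p by interpolation, which is where the distance n − k + 1 comes from.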
Polynomial-Based Code
An (n, k, 2s + 1) code, where n = k + 2s:
Message: (a_{k−1}, …, a_1, a_0)
Codeword: (p(1), p(2), …, p(n))
• Can detect 2s errors
• Can correct s errors
• Generally, can correct α erasures and β errors if α + 2β ≤ 2s

Unisolvence Theorem: any subset of size k of (p(1), p(2), …, p(n)) is enough to (uniquely) reconstruct p(x) using polynomial interpolation, e.g., the Lagrange interpolation formula.

Correcting Errors
Correcting s errors:
1. Find k + s symbols that agree on a polynomial p(x). These must exist, since originally k + 2s symbols agreed and only s are in error.
2. There are no k + s symbols that agree on the wrong polynomial p'(x):
   – any subset of k symbols will define p'(x);
   – since at most s out of the k + s symbols are in error, p'(x) = p(x).
This is a brute-force approach; better algorithms exist (maybe next lecture).

RS and "burst" errors
Let's compare to Hamming codes (which are "optimal").

  code                                     bits   check bits
  RS (255, 253, 3)_256                     2040   16
  Hamming (2^11 − 1, 2^11 − 11 − 1, 3)_2   2047   11

Both can correct 1 error, but not 2 random errors.
• The Hamming code does this with fewer check bits.
• However, RS can fix 8 contiguous bit errors in one byte. That is much better than the lower bound for correcting 8 arbitrary errors:
    log(1 + C(n,1) + … + C(n,8)) > 8 log(n − 7) ≈ 88 check bits

Concatenated Codes
(David Forney)
Take an RS (n, k, n − k + 1)_Q code with Q = n. We can encode each alphabet symbol of k' = log Q = log n bits using another code. E.g., use a ((k' + log k'), k', 3)_2 Hamming code: now we can correct one error per alphabet symbol with little rate loss. (Good for sparse periodic errors.)
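The interpolation step behind the Unisolvence Theorem can be sketched as follows, again assuming a toy prime field GF(13) rather than the GF(2^8) of deployed RS codes. This handles erasures only; the brute-force error search described above is omitted:

```python
# Erasure decoding via the Unisolvence Theorem: any k of the n evaluations
# (p(1), ..., p(n)) determine p(x) by Lagrange interpolation.
# The prime field GF(13) is an illustrative assumption.
q = 13

def interpolate(points):
    """points: k pairs (x_i, p(x_i)); returns (a_{k-1}, ..., a_0) of p."""
    k = len(points)
    coeffs = [0] * k                           # highest-degree coefficient first
    for xi, yi in points:
        basis = [1]                            # running product of (x - x_j), j != i
        denom = 1
        for xj, _ in points:
            if xj == xi:
                continue
            new = [0] * (len(basis) + 1)
            for m, b in enumerate(basis):      # multiply basis by (x - xj)
                new[m] = (new[m] + b) % q
                new[m + 1] = (new[m + 1] - b * xj) % q
            basis = new
            denom = denom * (xi - xj) % q
        scale = yi * pow(denom, q - 2, q) % q  # division via Fermat's little theorem
        for m in range(k):
            coeffs[m] = (coeffs[m] + scale * basis[m]) % q
    return coeffs

# p(x) = 5x^2 + 7 evaluated at x = 1..6 gives (12, 1, 0, 9, 2, 5);
# any 3 surviving symbols recover the message:
assert interpolate([(1, 12), (2, 1), (4, 9)]) == [5, 0, 7]
```

Using pow(denom, q − 2, q) for modular inverses is what makes a prime field convenient here; a GF(2^8) version would need table-based field arithmetic instead.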
Or use a (2^k', k', 2^(k'−1))_2 Hadamard code. (Say k = n/2.) Then we get an (n^2, (n/2) log n, n^2/4)_2 code: much better than a plain Hadamard code in rate, and worse in distance only by a factor of 2.

Or, since k' is O(log n), we could choose a code that requires exhaustive search but is good. Random linear codes give ((1 + f(δ)) k', k', δk')_2 codes. Composing with RS (with k = n/2), we get a
  ( (1 + f(δ)) n log n, (n/2) log n, δ (n/2) log n )_2
code. This gives constant rate and constant distance! And poly-time encoding and decoding.

Error Correcting Codes Outline
• Introduction
• Linear codes
• Reed-Solomon codes
• Expander-based codes
  – Expander Graphs
  – Low Density Parity Check (LDPC) codes
  – Tornado Codes

Why Expander Based Codes?
These are linear codes, like RS and random linear codes. RS and random linear codes give good rates but are slow:

  Code            Encoding           Decoding
  Random Linear   O(n^2)             O(n^3)
  RS              O(n log n)         O(n^2)
  LDPC            O(n^2) or better   O(n)
  Tornado         O(n log 1/ε)       O(n log 1/ε)

(Assuming an (n, (1 − p)n, (1 − ε)pn + 1)_2 tornado code.)

(α, β) Expander Graphs (non-bipartite)
For a graph G:
• Expansion: every small subset (k ≤ αn) has many (≥ βk) neighbors.
• Low degree: not technically part of the definition, but typically assumed.
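The expansion property in this definition can be checked by brute force on small graphs. The example graph and the parameters α and β below are illustrative, not from the lecture:

```python
# Brute-force (alpha, beta) expansion check, straight from the definition:
# every subset S of the left side with |S| <= alpha*n must have
# at least beta*|S| neighbors.
from itertools import combinations

def is_expander(adj, n_left, alpha, beta):
    max_k = int(alpha * n_left)
    for k in range(1, max_k + 1):
        for S in combinations(range(n_left), k):
            nbrs = set().union(*(adj[v] for v in S))
            if len(nbrs) < beta * k:
                return False
    return True

# a tiny bipartite graph: 6 left vertices, each of (low) degree 3
adj = {0: {0, 1, 2}, 1: {1, 2, 3}, 2: {3, 4, 5},
       3: {0, 4, 5}, 4: {0, 2, 4}, 5: {1, 3, 5}}

# subsets of size k <= 2 all have >= 2k neighbors ...
assert is_expander(adj, 6, alpha=0.34, beta=2)
# ... but {0, 1, 4} has only 5 < 6 neighbors, so expansion fails at k = 3
assert not is_expander(adj, 6, alpha=0.5, beta=2)
```

The check is exponential in n, which is only useful for building intuition; that is one reason explicit constructions matter.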
(α, β) Expander Graphs (bipartite)
For a bipartite graph:
• Expansion: every small subset (k ≤ αn) on the left has many (≥ βk) neighbors on the right.
• Low degree: not technically part of the definition, but typically assumed.

Expander Graphs
Useful properties:
• Every set of vertices has many neighbors.
• Every balanced cut has many edges crossing it.
• A random walk will quickly converge to the stationary distribution (rapid mixing).
• Expansion is related to the eigenvalues of the adjacency matrix.

Expander Graphs: Applications
• Pseudo-randomness: implement randomized algorithms with few random bits.
• Cryptography: strong one-way functions from weak ones.
• Hashing: efficient n-wise independent hash functions.
• Random walks: quickly spreading probability as you walk through a graph.
• Error correcting codes: several constructions.
• Communication networks: fault tolerance, gossip-based protocols, peer-to-peer networks.

d-regular graphs
An undirected graph is d-regular if every vertex has d neighbors. A bipartite graph is d-regular if every vertex on the left has d neighbors on the right. We consider only d-regular constructions.

Expander Graphs: Constructions
Important parameters: size (n), degree (d), expansion (β).
Randomized constructions:
• A random d-regular graph is an expander with high probability.
• Construct by choosing d random perfect matchings.
• Time consuming, and cannot be stored compactly.
Theorem: for every constant 0 < c < 1, one can construct bipartite graphs with n nodes on the left, cn on the right, d-regular, that are (α, 3d/4) expanders, for constants α and d that are functions of c alone.
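The randomized construction just mentioned (d random perfect matchings) is easy to sketch; the function name and parameters are mine, for illustration:

```python
# Randomized construction from the slide: a d-regular bipartite graph built
# as the union of d random perfect matchings between n left and n right
# vertices. (As a multigraph it is exactly d-regular; coinciding matching
# edges collapse here because neighbors are stored in a set.)
# Such graphs are expanders with high probability; that is not checked here.
import random

def random_d_regular_bipartite(n, d, seed=None):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}   # left vertex -> set of right neighbors
    for _ in range(d):
        matching = list(range(n))
        rng.shuffle(matching)            # one random perfect matching
        for left, right in enumerate(matching):
            adj[left].add(right)
    return adj

adj = random_d_regular_bipartite(n=8, d=3, seed=1)
assert all(1 <= len(nbrs) <= 3 for nbrs in adj.values())
```

Each matching is a shuffled permutation, so the construction uses d·n log n random bits and O(dn) space, which illustrates the "cannot be stored compactly" drawback for large n.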
