BASICS of CODING THEORY

CHAPTER 1: BASICS of CODING THEORY

ABSTRACT

Coding theory - the theory of error-correcting codes - is one of the most interesting and most applied parts of informatics. The goals of coding theory are to develop systems and methods that allow us to detect and correct errors caused when information is transmitted through noisy channels.

All real communication systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require the use of error-correcting codes, because all real channels are, to some extent, noisy - due to various kinds of interference/destruction caused by the environment. Coding theory problems are therefore among the most basic and most frequent problems of storage and transmission of information.

Coding theory results allow the creation of reliable systems out of unreliable ones for storing and/or transmitting information. Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra. This first chapter presents and illustrates the very basic problems, concepts, methods and results of coding theory.

CODING - BASIC CONCEPTS

Without coding theory and error-correcting codes there would be no deep-space travel and pictures, no satellite TV, no compact discs, no ...

Error-correcting codes are used to correct messages when they are (erroneously) transmitted through noisy channels.

CHANNEL is any physical medium in which information is stored or through which information is transmitted. (Telephone lines, optical fibres and also the atmosphere are examples of channels.)

NOISE may be caused by sunspots, lightning, meteor showers, random radio disturbance, poor typing, poor hearing, ...

The general scheme of a transmission is:

    source --W--> Encoding --C(W)--> channel + noise --C'(W)--> Decoding --W--> user

TRANSMISSION GOALS

1 Fast encoding of information.
2 Similar messages should be encoded very differently.
3 Easy transmission of encoded messages.
4 Fast decoding of received messages.
5 Reliable correction of errors introduced in the channel.
6 Maximum transfer of information per unit time.

BASIC METHOD OF FIGHTING ERRORS: REDUNDANCY!!!

Example: 0 is encoded as 00000 and 1 is encoded as 11111.

ERROR CORRECTING FRAMEWORK - Example

A user wants to send the message YES or NO. With the encoding YES → 00000, NO → 11111, a codeword 00000 corrupted by the channel to 01001 is still decoded as YES, since 01001 is closer to 00000 than to 11111 (a concrete sketch of this decoding follows the code examples below).

A code C over an alphabet Σ is a subset of Σ* (C ⊆ Σ*).
A q-nary code is a code over an alphabet of q symbols.
A binary code is a code over the alphabet {0, 1}.

Examples of codes:

    C1 = {00, 01, 10, 11}
    C2 = {000, 010, 101, 100}
    C3 = {00000, 01101, 10111, 11011}
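To make the YES/NO framework above concrete, here is a minimal Python sketch; the names CODEBOOK, encode and decode are illustrative choices, not a prescribed interface.

    # Minimal sketch of the error-correcting framework above:
    # YES/NO are encoded with five-fold repetition, and a received
    # word is decoded as the closer of the two codewords.
    CODEBOOK = {"YES": "00000", "NO": "11111"}

    def hamming_distance(x: str, y: str) -> int:
        # number of positions in which x and y differ
        return sum(a != b for a, b in zip(x, y))

    def encode(message: str) -> str:
        return CODEBOOK[message]

    def decode(received: str) -> str:
        # choose the message whose codeword is closest to the received word
        return min(CODEBOOK, key=lambda m: hamming_distance(CODEBOOK[m], received))

    print(decode("01001"))  # prints YES: two flipped bits are still corrected

Even with two of the five bits corrupted, the received word 01001 is at distance 2 from 00000 but distance 3 from 11111, so the decoder recovers YES.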
CHANNELS - MATHEMATICAL VIEWS

Formally, a channel is described by a triple C = (Σ, Ω, p), where

    Σ is an input alphabet,
    Ω is an output alphabet,
    p is a probability distribution on Σ × Ω, and for i ∈ Σ, o ∈ Ω, p(i, o) is the probability that the output of the channel is o if the input is i.

IMPORTANT CHANNELS

Binary symmetric channel: maps, with probability p₀, each binary input into the opposite one. Therefore Pr(0, 1) = Pr(1, 0) = p₀ and Pr(0, 0) = Pr(1, 1) = 1 − p₀.

Binary erasure channel: maps binary inputs into outputs in {0, 1, e}, where e is the so-called erasure symbol, and Pr(0, 0) = Pr(1, 1) = p₀, Pr(0, e) = Pr(1, e) = 1 − p₀.

White noise Gaussian channel: models errors in deep space.

BASIC CHANNEL CODING PROBLEMS

Summary: The task of channel coding is to encode the information sent over a communication channel in such a way that, in the presence of some channel noise, errors can be detected and/or corrected.

There are two basic coding methods:

BEC (Backward Error Correction): coding allows the receiver only to detect errors. If an error is detected, the sender is requested to retransmit the message.

FEC (Forward Error Correction): coding allows the receiver to correct a certain amount of errors.

WHY WE NEED TO IMPROVE ERROR-CORRECTING CODES

When the error-correcting capabilities of some code are improved - that is, a better code is found - this has the following impacts. For the same quality of received information, the transmission system can operate in more severe conditions. For example:

1 It is possible to reduce the size of antennas or solar panels and the weight of batteries;
2 In space systems such savings can be measured in hundreds of thousands of dollars;
3 In mobile telephone systems, improving the code enables the operators to increase the potential number of users in each cell.

Another field of applications of error-correcting codes is that of mass memories: computer hard drives, CD-ROMs, DVDs and so on.

BASIC IDEA of ERROR CORRECTION

Details of the techniques used to protect information against noise in practice are sometimes rather complicated, but the basic principles are easily understood.

The key idea is that in order to protect a message against noise, we should encode the message by adding some redundant information to it. In such a case, even if the message is corrupted by noise, there will be enough redundancy in the encoded message to recover it - to decode the message completely.

MAJORITY VOTING DECODING - BASIC IDEA

The basic idea of the so-called majority voting decoding principle, or maximal likelihood decoding principle, is to decode a received message w' by the codeword w that is the closest one to w' in the whole set of potential codewords of a given code C.

EXAMPLE

If the encoding

    0 → 000,  1 → 111

is used, the probability of a bit error is p < 1/2, and the majority voting decoding

    000, 001, 010, 100 → 000  and  111, 110, 101, 011 → 111

is applied, then the probability of an erroneous decoding (the case of 2 or 3 errors) is

    3p²(1 − p) + p³ = 3p² − 2p³ < p;

this bound is verified numerically in the sketch below.
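The claimed bound 3p² − 2p³ < p can be checked directly; the following short Python sketch (the helper name p_wrong is illustrative) evaluates the erroneous-decoding probability for several values of p < 1/2:

    # Check of the majority-voting example: for the encoding
    # 0 -> 000, 1 -> 111, decoding fails iff 2 or 3 bits are flipped.
    def p_wrong(p: float) -> float:
        # probability of erroneous decoding: 3p^2(1-p) + p^3 = 3p^2 - 2p^3
        return 3 * p**2 * (1 - p) + p**3

    for p in (0.01, 0.10, 0.25, 0.49):
        assert p_wrong(p) < p   # the bound 3p^2 - 2p^3 < p holds for p < 1/2
        print(f"p = {p:.2f}: erroneous decoding with probability {p_wrong(p):.5f}")

For p = 0.10 the error probability drops from 0.1 per bit to about 0.028 per encoded bit; this gain is what the threefold redundancy buys.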
EXAMPLE: Coding of a path avoiding an enemy territory

Story: Alice and Bob share an identical map, gridded as shown in Fig. 1. Only Alice knows the route through which Bob can reach her while avoiding the enemy territory. Alice wants to send Bob the following information about the safe route he should take:

    NNWNNWWSSWWNNNNWWN

[Fig. 1: a grid map showing Alice's HQ, Bob's position (x), and the enemy territory between them.]

Three ways to encode the safe route from Bob to Alice are:

1 C1 = {N = 00, W = 01, S = 11, E = 10}, giving the message
      000001000001011111010100000000010100
  In such a case any error in the codeword would be a disaster.
2 C2 = {000, 011, 101, 110}
  A single error in encoding each of the symbols N, W, S, E can be detected.
3 C3 = {00000, 01101, 10110, 11011}
  A single error in encoding each of the symbols N, W, S, E can be corrected.

BASIC TERMINOLOGY

Datawords - words of a message.
Codewords - words of some code.
Block code - a code with all codewords of the same length.

Basic assumptions about channels:

1 Code length preservation: Each output word of a channel has the same length as the input codeword.
2 Independence of errors: The probability of any one symbol being affected in transmission is the same.

Basic strategy for decoding: For decoding we use the so-called maximal likelihood principle, or nearest neighbour decoding strategy, or majority voting decoding strategy, which says that the receiver should decode a received word w' as the codeword w that is the closest one to w'.

HAMMING DISTANCE

The intuitive concept of "closeness" of two words is well formalized through the Hamming distance h(x, y) of words x, y. For two words x, y:

    h(x, y) = the number of symbols in which the words x and y differ.

Example: h(10101, 01100) = 3, h(fourth, eighth) = 4.

Properties of Hamming distance:

1 h(x, y) = 0 ⇔ x = y
2 h(x, y) = h(y, x)
3 h(x, z) ≤ h(x, y) + h(y, z)   (triangle inequality)

An important parameter of a code C is its minimal distance

    h(C) = min{h(x, y) | x, y ∈ C, x ≠ y}.

Therefore, h(C) is the smallest number of errors that can change one codeword into another.

Basic error correcting theorem:

1 A code C can detect up to s errors if h(C) ≥ s + 1.
2 A code C can correct up to t errors if h(C) ≥ 2t + 1.

Proof: (1) Trivial: changing a codeword in at most s ≤ h(C) − 1 positions cannot produce another codeword, so the error is detected. (2) If a codeword x is sent and at most t errors produce a word w, then h(x, w) ≤ t, while for every other codeword y the triangle inequality gives h(w, y) ≥ h(x, y) − h(x, w) ≥ 2t + 1 − t = t + 1 > t; hence x is the unique nearest codeword to w.

BINARY SYMMETRIC CHANNEL

Consider a transmission of binary symbols such that each symbol has probability of error p < 1/2:

    Pr(0, 0) = Pr(1, 1) = 1 − p,  Pr(0, 1) = Pr(1, 0) = p.

If n symbols are transmitted, then the probability of t errors is

    (n choose t) · pᵗ (1 − p)ⁿ⁻ᵗ

In the case of binary symmetric channels, the "nearest neighbour decoding strategy" is also the "maximum likelihood decoding strategy".

Example: Consider C = {000, 111} and the nearest neighbour decoding strategy. The probability that the received word is decoded correctly

    as 000 is (1 − p)³ + 3p(1 − p)²,
    as 111 is (1 − p)³ + 3p(1 − p)².

Therefore Perr(C) = 1 − ((1 − p)³ + 3p(1 − p)²).
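The following Python sketch (my helper names hamming_distance and min_distance are illustrative) checks the worked examples of this chapter: the minimal distances of the path-coding codes C2 and C3, and Perr(C) for C = {000, 111}:

    from itertools import combinations

    def hamming_distance(x: str, y: str) -> int:
        # number of positions in which x and y differ
        return sum(a != b for a, b in zip(x, y))

    def min_distance(code) -> int:
        # h(C) = min h(x, y) over all pairs of distinct codewords of C
        return min(hamming_distance(x, y) for x, y in combinations(code, 2))

    C2 = ["000", "011", "101", "110"]
    C3 = ["00000", "01101", "10110", "11011"]
    print(min_distance(C2))  # 2: detects single errors (h(C) >= s + 1, s = 1)
    print(min_distance(C3))  # 3: corrects single errors (h(C) >= 2t + 1, t = 1)

    # Perr for C = {000, 111} over a binary symmetric channel with p = 0.1:
    p = 0.1
    p_correct = (1 - p)**3 + 3 * p * (1 - p)**2   # 0 errors, or 1 correctable error
    print(1 - p_correct)                          # 0.028

The numbers agree with the theorem above: h(C2) = 2 = s + 1 for s = 1 (single-error detection), while h(C3) = 3 = 2t + 1 for t = 1 (single-error correction).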
