Channel Capacity and Coding



ECE 515 Information Theory

Information Theory Problems
• How to transmit or store information as efficiently as possible.
• What is the maximum amount of information that can be transmitted or stored reliably?
• How can information be kept secure?

Digital Communications System
[Figure: data source W → encoder X (P_X) → channel P_{Y|X} → Y (P_Y) → decoder → data sink Ŵ, where Ŵ is an estimate of W]

Communication Channel
• Cover and Thomas, Chapter 7

Discrete Memoryless Channel
Input alphabet {x_1, x_2, ..., x_K}, output alphabet {y_1, y_2, ..., y_J}, with J ≥ K ≥ 2.
The channel transition matrix is
$$\mathbf{P} = \begin{bmatrix} p(y_1|x_1) & \cdots & p(y_1|x_k) & \cdots & p(y_1|x_K) \\ \vdots & & \vdots & & \vdots \\ p(y_j|x_1) & \cdots & p(y_j|x_k) & \cdots & p(y_j|x_K) \\ \vdots & & \vdots & & \vdots \\ p(y_J|x_1) & \cdots & p(y_J|x_k) & \cdots & p(y_J|x_K) \end{bmatrix}$$

Binary Symmetric Channel
Inputs x_1, x_2; outputs y_1, y_2; channel matrix
$$\mathbf{P} = \begin{bmatrix} \bar{p} & p \\ p & \bar{p} \end{bmatrix}, \qquad \bar{p} = 1 - p$$

Binary Errors with Erasure Channel
Inputs 0 and 1; outputs y_1 = 0, y_2 = e (erasure), y_3 = 1. Each input is received correctly with probability 1 − p − q, erased with probability q, and flipped with probability p.

BPSK Modulation (K = 2)
$$x_1(t) = +\sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_c t) \;\text{ for a 0 bit}, \qquad x_2(t) = -\sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_c t) \;\text{ for a 1 bit}$$
where E_b is the energy per bit, T_b is the bit duration, and f_c is the carrier frequency.

BPSK Demodulation in AWGN
The received value given input x_k is Gaussian about the constellation points ±√E_b:
$$p_{Y|X}(y|x_k) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y - x_k)^2}{2\sigma^2}\right), \qquad x_1 = +\sqrt{E_b},\; x_2 = -\sqrt{E_b}$$

BPSK Demodulation (K = 2)
[Figure: quantization of the demodulator output into J = 3, J = 4, and J = 8 decision regions y_1, ..., y_J around −√E_b, 0, +√E_b]

Mutual Information for a BSC
Crossover probability p, with p̄ = 1 − p:
$$\mathbf{P}_{Y|X} = \begin{bmatrix} \bar{p} & p \\ p & \bar{p} \end{bmatrix}$$
Input distribution p(x = 0) = w, p(x = 1) = 1 − w = w̄.

Convex Functions
[Figures: examples and definitions of concave and convex functions]

Mutual Information
[Figures/equations: the mutual information expressed in terms of the input probabilities and the channel transition probabilities]

BSC Mutual Information
[Figure: I(X;Y) for the BSC as a function of w, for several values of p]

Channel Capacity
$$C = \max_{p(x)} I(X;Y)$$

Binary Symmetric Channel
$$C = 1 - h(p), \quad \text{achieved for } w = \bar{w} = 1/2$$
where h(p) = −p log p − (1 − p) log(1 − p) is the binary entropy function (a numerical check appears after the symmetric-channel definition below).

Properties of the Channel Capacity
• C ≥ 0 since I(X;Y) ≥ 0
• C ≤ log|X| = log(K) since C = max I(X;Y) ≤ max H(X) = log(K)
• C ≤ log|Y| = log(J) for the same reason
• I(X;Y) is a concave function of p(X), so a local maximum is a global maximum

Symmetric Channels
A discrete memoryless channel is said to be symmetric if the set of output symbols {y_j}, j = 1, 2, ..., J, can be partitioned into subsets such that for each subset of the matrix of transition probabilities, each column is a permutation of the other columns, and each row is also a permutation of the other rows.
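Before looking at specific symmetric channels, the BSC capacity result C = 1 − h(p) above can be checked numerically. The following Python sketch (an added illustration; the function names are assumptions, not course code) evaluates I(X;Y) = H(Y) − H(Y|X) over the input probability w and confirms that the maximum lies at w = 1/2.

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy h(p) in bits, with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_mutual_information(w, p):
    """I(X;Y) in bits for a BSC with crossover p and P(x = 0) = w."""
    q = w * (1 - p) + (1 - w) * p          # output distribution: P(y = 0)
    return binary_entropy(q) - binary_entropy(p)   # H(Y) - H(Y|X)

p = 0.1
ws = np.linspace(0.001, 0.999, 999)
capacity = max(bsc_mutual_information(w, p) for w in ws)
print(capacity, 1 - binary_entropy(p))   # both ~ 0.531 bits, maximum at w = 1/2
```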
Binary Channels
Symmetric channel matrix:
$$\mathbf{P} = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}$$
Non-symmetric channel matrix (p_1 ≠ p_2):
$$\mathbf{P} = \begin{bmatrix} 1-p_1 & p_1 \\ p_2 & 1-p_2 \end{bmatrix}$$

Binary Errors with Erasure Channel
[Figure: inputs 0 and 1; outputs 0, e, 1; transition probabilities 1 − p − q (correct), q (erasure), p (error)]

Symmetric Channels
• No partition required → strongly symmetric
• Partition required → weakly symmetric

Capacity of a Strongly Symmetric Channel
Theorem:
$$I(X;Y) = \sum_{k=1}^{K} p(x_k) \sum_{j=1}^{J} p(y_j|x_k)\log p(y_j|x_k) + H(Y) = \sum_{j=1}^{J} p(y_j|x_k)\log p(y_j|x_k) + H(Y)$$
The second equality holds because for a strongly symmetric channel the inner sum is the same for every input x_k.

Example (J = K = 3)
$$\mathbf{P}_{Y|X} = \begin{bmatrix} .7 & .2 & .1 \\ .1 & .7 & .2 \\ .2 & .1 & .7 \end{bmatrix}$$
$$\sum_{k=1}^{K} p(x_k) \sum_{j=1}^{J} p(y_j|x_k)\log p(y_j|x_k) = p(x_1)\left[.7\log .7 + .1\log .1 + .2\log .2\right] + p(x_2)\left[.2\log .2 + .7\log .7 + .1\log .1\right] + p(x_3)\left[.1\log .1 + .2\log .2 + .7\log .7\right]$$
$$= .7\log .7 + .2\log .2 + .1\log .1 = \sum_{j=1}^{J} p(y_j|x_k)\log p(y_j|x_k)$$
$$H(Y) = -\sum_{j=1}^{J} p(y_j)\log p(y_j), \qquad p(y_j) = \sum_{k=1}^{K} p(y_j|x_k)\,p(x_k)$$
$$p(y_1) = .7p(x_1) + .2p(x_2) + .1p(x_3), \quad p(y_2) = .1p(x_1) + .7p(x_2) + .2p(x_3), \quad p(y_3) = .2p(x_1) + .1p(x_2) + .7p(x_3)$$

r-ary Symmetric Channel
$$\mathbf{P} = \begin{bmatrix} 1-p & \frac{p}{r-1} & \cdots & \frac{p}{r-1} \\ \frac{p}{r-1} & 1-p & \cdots & \frac{p}{r-1} \\ \vdots & & \ddots & \vdots \\ \frac{p}{r-1} & \frac{p}{r-1} & \cdots & 1-p \end{bmatrix}$$
$$C = (1-p)\log(1-p) + (r-1)\,\frac{p}{r-1}\log\!\left(\frac{p}{r-1}\right) + \log r$$
$$\;= \log r + (1-p)\log(1-p) + p\log\!\left(\frac{p}{r-1}\right)$$
$$\;= \log r + (1-p)\log(1-p) + p\log p - p\log(r-1)$$
$$\;= \log r - h(p) - p\log(r-1)$$
• r = 2: C = 1 − h(p)
• r = 3: C = log₂3 − h(p) − p
• r = 4: C = 2 − h(p) − p log₂3

Binary Errors with Erasure Channel
With 1 − p − q = .8, p = .05, q = .15 and equiprobable inputs:
$$\mathbf{P}_{Y|X} = \begin{bmatrix} .8 & .05 \\ .15 & .15 \\ .05 & .8 \end{bmatrix}, \qquad \mathbf{P}_X = \begin{bmatrix} .5 \\ .5 \end{bmatrix}, \qquad \mathbf{P}_Y = \mathbf{P}_{Y|X}\,\mathbf{P}_X = \begin{bmatrix} .425 \\ .15 \\ .425 \end{bmatrix}$$

Capacity of a Weakly Symmetric Channel
$$C = \sum_{i=1}^{L} q_i C_i$$
• q_i – probability of channel i
• C_i – capacity of channel i

Binary Errors with Erasure Channel
Partitioning the outputs into {0, 1} and {e} gives two strongly symmetric sub-channels, used with probabilities q_1 = 1 − q = .85 and q_2 = q = .15:
$$\mathbf{P}_1 = \begin{bmatrix} \frac{1-p-q}{1-q} & \frac{p}{1-q} \\ \frac{p}{1-q} & \frac{1-p-q}{1-q} \end{bmatrix} = \begin{bmatrix} .9412 & .0588 \\ .0588 & .9412 \end{bmatrix}, \qquad \mathbf{P}_2 = \begin{bmatrix} 1.0 & 1.0 \end{bmatrix}$$
$$C_1 = .9412\log .9412 + .0588\log .0588 + \log 2 = .6773$$
$$C_2 = 1\log 1 + \log 1 = 0.0$$
$$C = .85(.6773) + .15(0.0) = .5757$$

Binary Erasure Channel
Each input is received correctly with probability 1 − p and erased (output e) with probability p.
$$C = 1 - p$$

Z Channel (Optical)
Input 0 (light on) is always received as 0; input 1 (light off) is received as 0 with probability p and as 1 with probability 1 − p.
$$I(X;Y) = \sum_{k=1}^{2}\sum_{j=1}^{2} p(x_k)\,p(y_j|x_k)\log\frac{p(y_j|x_k)}{p(y_j)}$$
$$I(x_1;Y) = \log\frac{1}{w + p\bar{w}}, \qquad I(x_2;Y) = p\log\frac{p}{w + p\bar{w}} + (1-p)\log\frac{1}{\bar{w}}$$
$$I(X;Y) = w \times I(x_1;Y) + \bar{w} \times I(x_2;Y)$$
The maximizing input probability and the capacity are
$$w^* = 1 - \frac{1}{(1-p)\left(1 + 2^{\,h(p)/(1-p)}\right)}, \qquad C = \log_2\!\left(1 + (1-p)\,p^{\,p/(1-p)}\right)$$

Mutual Information for the Z Channel
[Figure: I(X;Y) versus w for p = 0.15]

Channel Capacity for the Z, BSC and BEC
[Figure: capacity versus p for the Z channel, the binary symmetric channel and the binary erasure channel]

Blahut-Arimoto Algorithm
$$I(X;Y) = \sum_{k=1}^{K}\sum_{j=1}^{J} p(x_k)\,p(y_j|x_k)\log\frac{p(y_j|x_k)}{\sum_{l=1}^{K} p(x_l)\,p(y_j|x_l)}$$
• An analytic solution for the capacity can be very difficult to obtain
• The alternative is a numerical solution – Arimoto (Jan. 1972), Blahut (Jul. 1972)
• Exploits the fact that I(X;Y) is a concave function of p(x_k)
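The iteration itself is compact. The following Python sketch (an added illustration; the function name and structure are assumptions, not course code) implements the standard Blahut-Arimoto update and reproduces the capacity of the binary errors with erasure channel computed above (≈ .5757 bits).

```python
import numpy as np

def blahut_arimoto(P, n_iter=200):
    """Estimate channel capacity (bits/use) for transition matrix P.

    P[j, k] = p(y_j | x_k): rows index outputs, columns index inputs,
    matching the convention used on the slides.
    """
    J, K = P.shape
    p_x = np.full(K, 1.0 / K)          # start from a uniform input distribution
    for _ in range(n_iter):
        p_y = P @ p_x                   # output distribution p(y_j)
        q = (P * p_x).T / p_y           # q[k, j] = backward probability p(x_k | y_j)
        # multiplicative update: r(x_k) proportional to prod_j q[k, j]^P[j, k]
        log_r = np.sum(P.T * np.log(q + 1e-300), axis=1)
        r = np.exp(log_r)
        p_x = r / r.sum()
    p_y = P @ p_x
    C = 0.0
    for k in range(K):
        for j in range(J):
            if P[j, k] > 0:
                C += p_x[k] * P[j, k] * np.log2(P[j, k] / p_y[j])
    return C, p_x

# Binary errors with erasure channel from the slides (p = .05, q = .15)
P = np.array([[0.80, 0.05],
              [0.15, 0.15],
              [0.05, 0.80]])
C, p_x = blahut_arimoto(P)
print(C, p_x)   # C ~ 0.576 bits/use, with the uniform input p_x ~ [0.5, 0.5]
```

Fed with the BSC or BEC transition matrices instead, the same routine converges to 1 − h(p) and 1 − p respectively.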
Blahut-Arimoto Algorithm
• Update the probabilities
[Figures/equations: the update rule for p(x_k) and the resulting capacity estimate]

Symmetric Channel Example
[Figure: numerical iterations of the algorithm for a symmetric channel]

Non-Symmetric Channel Example
[Figure: numerical iterations of the algorithm for a non-symmetric channel]

Binary Symmetric Channel
Channel matrix
$$\mathbf{P} = \begin{bmatrix} \bar{p} & p \\ p & \bar{p} \end{bmatrix}, \qquad \bar{p} = 1 - p$$

Binary Symmetric Channel
• Consider a block of N = 1000 bits
  – if p = 0, 1000 bits are received correctly
  – if p = 0.01, 990 bits are received correctly (on average)
  – if p = 0.5, 500 bits are received correctly (on average)
• When p > 0, we do not know which bits are in error
  – if p = 0.01, C = .919 bit
  – if p = 0.5, C = 0 bit

Triple Repetition Code (N = 3)
message w    codeword c
0            000
1            111

Binary Symmetric Channel Errors
• If N bits are transmitted, the probability of a particular m-bit error pattern is
$$p^m (1-p)^{N-m}$$
• The probability of exactly m errors is
$$\binom{N}{m} p^m (1-p)^{N-m}$$
• The probability of m or more errors is
$$\sum_{i=m}^{N}\binom{N}{i} p^i (1-p)^{N-i}$$

Triple Repetition Code (N = 3)
• The probability of 0 errors is (1 − p)³
• The probability of 1 error is 3p(1 − p)²
• The probability of 2 errors is 3p²(1 − p)
• The probability of 3 errors is p³
• For p = 0.01:
  – The probability of 0 errors is .970
  – The probability of 1 error is 2.94×10⁻²
  – The probability of 2 errors is 2.97×10⁻⁴
  – The probability of 3 errors is 10⁻⁶
• If p < 1/2, then p(0 errors) > p(1 error) > p(2 errors) > p(3 errors)

Triple Repetition Code – Decoding
Received Word    Codeword    Error Pattern
000              000         000
001              000         001
010              000         010
100              000         100
111              111         000
110              111         001
101              111         010
011              111         100

Triple Repetition Code
• Majority vote or nearest neighbor decoding will correct all single errors:
  000, 001, 010, 100 → 000
  111, 110, 101, 011 → 111
• The probability of a decoding error is then
$$P_e = 3p^2(1-p) + p^3 = 3p^2 - 2p^3 < p$$
• If p = 0.01, then P_e = 0.000298 and only one word in 3356 will be in error after decoding (verified numerically in the sketch at the end of this section).
• A reduction by a factor of 33.

Code Rate
• After compression, the data is (almost) memoryless and uniformly distributed (equiprobable)
• Thus the entropy of the messages (codewords) is H(W) = log_b M
• The blocklength of a codeword is N
• The code rate is given by
$$R = \frac{\log_b M}{N} \text{ bits per channel use}$$
• M is the number of codewords, N is the block length
• For the triple repetition code
$$R = \frac{\log_2 2}{3} = \frac{1}{3}$$

Repetition Code
[Figure: repetition code performance versus rate R]

Shannon's Noisy Coding Theorem
[Figure: block code of length N — input sequence X^N, output sequence Y^N]
For any ε > 0 and for any rate R less than the channel capacity C, there is an encoding and decoding scheme that can be used to ensure that the probability of decoding error is less than ε for a sufficiently large block length N.
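As a numerical companion to the repetition-code discussion above, the following Python sketch (an added illustration; the function name is an assumption) evaluates the majority-vote decoding error probability for an odd-length repetition code over a BSC. It reproduces the slide values for N = 3 and shows that longer repetition codes buy reliability only by driving the rate toward zero, which is exactly the trade-off Shannon's theorem says is unnecessary for any R < C.

```python
from math import comb

def decoding_error_probability(p, N=3):
    """P(decoder error) for an N-fold repetition code over a BSC with
    crossover probability p, using majority-vote decoding (N odd)."""
    t = (N - 1) // 2          # number of correctable errors
    return sum(comb(N, m) * p**m * (1 - p)**(N - m) for m in range(t + 1, N + 1))

p = 0.01
Pe = decoding_error_probability(p)            # 3p^2(1-p) + p^3
print(Pe)                                     # ~ 2.98e-4: about 1 word in 3356
print(p / Pe)                                 # ~ 33.6: the factor-of-33 reduction
print(decoding_error_probability(p, N=11))    # ~ 4.4e-10, but the rate drops to 1/11
```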