Communication Channels
Professor A. Manikas
Imperial College London
EE303 - Communication Systems: An Overview of Fundamentals (2011)

Table of Contents
1  Introduction
2  Continuous Channels
3  Discrete Channels
4  Converting a Continuous to a Discrete Channel
5  More on Discrete Channels
     Backward Transition Matrix
     Joint Transition Probability Matrix
6  Measure of Information at the Output of a Channel
     Mutual Information of a Channel
     Equivocation & Mutual Information of a Discrete Channel
7  Capacity of a Channel
     Shannon's Capacity Theorem
     Capacity of AWGN Channels
     Capacity of non-Gaussian Channels
     Shannon's Channel Capacity Theorem based on Continuous Channel Parameters
8  Bandwidth and Channel Symbol Rate
9  Criteria and Limits of DCS
     Digital Communications - Introduction
     Energy Utilisation Efficiency (EUE)
     Bandwidth Utilisation Efficiency (BUE)
     Visual Comparison of Comm Systems
     Theoretical Limits on the Performance of Digital Comm. Systems
10 Other Comparison Parameters
11 Appendix A: SNR at the Output of an Ideal Comm System

Introduction
With reference to the following block structure of a Digital Communication System (DCS), this topic is concerned with the basics of both continuous and discrete communication channels.

[Figure: Block structure of a DCS]

Just as with sources, communication channels are either
- discrete channels, or
- continuous channels:
    1  wireless channels (in this case the whole DCS is known as a Wireless DCS)
    2  wireline channels (in this case the whole DCS is known as a Wireline DCS)

Note that a continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output.
Examples of channels (with reference to the DCS shown on the previous page):
- discrete channels:
    - input: A2 - output: Â2 (alphabet: levels of the quantiser, in Volts)
    - input: B2 - output: B̂2 (alphabet: binary digits or binary codewords)
- continuous channels:
    - input: A1 - output: Â1 (Volts) - continuous channel (baseband)
    - input: T  - output: T̂  (Volts) - continuous channel (baseband)
    - input: T1 - output: T̂1 (Volts) - continuous channel (bandpass)

Continuous Channels
A continuous communication channel (which can be regarded as an analogue channel) is described by
- an input ensemble (s(t), pdf_s(s)) and PSD_s(f),
- an output ensemble (r(t), pdf_r(r)),
- the channel noise (AWGN) n_i(t),
- the channel bandwidth B and channel capacity C.

Types of channel signals:
- s(t), r(t), n(t): bandpass
- n_i(t) = AWGN: allpass

Discrete Channels
A discrete communication channel has a discrete input and a discrete output, where
- the symbols applied to the channel input for transmission are drawn from a finite alphabet, described by an input ensemble (X, p), while
- the symbols appearing at the channel output are also drawn from a finite alphabet, described by an output ensemble (Y, q);
- the channel itself is described by its transition probability matrix F.

In many situations the input and output alphabets X and Y are identical, but in the general case they are different.
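The continuous-channel model above (a signal plus band-limited AWGN, with an SNR set by the noise PSD and the channel bandwidth B) can be sketched numerically. All names and parameter values below (sampling rate, bandwidth, N0, the test tone) are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Continuous channel sketch: r(t) = s(t) + n(t), with n(t) modelled as
# Gaussian noise of one-sided PSD N0 band-limited to B Hz (assumed values).
rng = np.random.default_rng(0)

fs = 8000.0                         # sampling rate in Hz (assumed)
B = 4000.0                          # channel bandwidth in Hz (here B = fs/2)
N0 = 1e-3                           # one-sided noise PSD in W/Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of samples

s = np.cos(2 * np.pi * 1000 * t)    # example input signal s(t)
noise_power = N0 * B                # power of the band-limited noise
n = rng.normal(0.0, np.sqrt(noise_power), size=t.shape)
r = s + n                           # channel output r(t)

snr_db = 10 * np.log10(np.mean(s**2) / noise_power)
print(f"SNR at the channel output: {snr_db:.1f} dB")
```

With these assumed numbers the signal power is 0.5 W against 4 W of in-band noise, so the sketch merely shows the model's moving parts, not a well-designed link.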
Instead of using X and Y, it is common practice to use the symbols H and D and thus define the two alphabets and the associated probabilities as

    input:  H = {H1, H2, ..., HM},   p = [Pr(H1), Pr(H2), ..., Pr(HM)]^T
    output: D = {D1, D2, ..., DK},   q = [Pr(D1), Pr(D2), ..., Pr(DK)]^T

where pm abbreviates the probability Pr(Hm) that the symbol Hm may appear at the input, while qk abbreviates the probability Pr(Dk) that the symbol Dk may appear at the output of the channel.

The probabilistic relationship between input symbols H and output symbols D is described by the so-called channel transition probability matrix F, which is defined as follows:

        [ Pr(D1|H1)  Pr(D1|H2)  ...  Pr(D1|HM) ]
    F = [ Pr(D2|H1)  Pr(D2|H2)  ...  Pr(D2|HM) ]                    (1)
        [    ...        ...     ...     ...    ]
        [ Pr(DK|H1)  Pr(DK|H2)  ...  Pr(DK|HM) ]

Pr(Dk|Hm) denotes the probability that symbol Dk ∈ D will appear at the channel output, given that Hm ∈ H was applied to the input.

The input ensemble (H, p), the output ensemble (D, q) and the matrix F fully describe the functional properties of the channel. The following expression describes the relationship between q and p:

    q = F p                                                         (2)

Note that in a noiseless channel

    D = H  and  q = p                                               (3)

i.e. the matrix F is an identity matrix:

    F = I_M                                                         (4)

Converting a Continuous to a Discrete Channel
A continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output. A digital modulator is described by M different channel symbols. These channel symbols are ENERGY SIGNALS of duration Tcs.
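As a numerical illustration of eq. (2), here is a minimal sketch for a binary symmetric channel; the crossover probability eps = 0.1 and the input distribution p are assumed example values, not taken from the notes:

```python
import numpy as np

# Binary symmetric channel (illustrative): K = M = 2, crossover prob. eps.
eps = 0.1
F = np.array([[1 - eps, eps],        # F[k, m] = Pr(Dk | Hm); each column
              [eps, 1 - eps]])       # of F sums to 1
p = np.array([0.7, 0.3])             # input symbol probabilities Pr(Hm)

q = F @ p                            # eq. (2): output probabilities Pr(Dk)
print(q)                             # q sums to 1, just like p

# Noiseless special case, eqs. (3)-(4): F = I_M gives q = p.
assert np.allclose(np.eye(2) @ p, p)
```

Because every column of F sums to 1, q = F p is automatically a valid probability vector whatever p is.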
Digital Modulator / Digital Demodulator:

    If M = 2  =>  Binary Digital Modulator / Demodulator  =>  Binary Comm. System
    If M > 2  =>  M-ary Digital Modulator / Demodulator   =>  M-ary Comm. System

[Figure: an M-ary digital modulator maps each group of input bits (0..00, 0..01, ..., 1..11) to one of the energy signals s1(t), s2(t), ..., sM(t), each of duration Tcs and associated with a hypothesis (Hm, Pr(Hm)). The channel output r(t) = s(t) + n(t) is applied to a detector with a decision device (decision rule), which maps it to one of the decisions (Dm, Pr(Dm)).]

More on Discrete Channels
Backward Transition Matrix
There are also occasions where we get/observe the output of a channel and then, based on this knowledge, refer back to the input. In this case we may use the concept of an imaginary "backward" channel and its associated transition matrix, known as the backward transition matrix, defined as follows:

        [ Pr(H1|D1)  Pr(H1|D2)  ...  Pr(H1|DK) ]^T
    B = [ Pr(H2|D1)  Pr(H2|D2)  ...  Pr(H2|DK) ]                    (5)
        [    ...        ...     ...     ...    ]
        [ Pr(HM|D1)  Pr(HM|D2)  ...  Pr(HM|DK) ]

so that B has the same K x M dimensions as F, with B[k, m] = Pr(Hm|Dk).

Joint Transition Probability Matrix
The joint probabilistic relationship between the input channel symbols H = {H1, H2, ..., HM} and the output channel symbols D = {D1, D2, ..., DK} is described by the so-called joint-probability matrix:

        [ Pr(H1, D1)  Pr(H1, D2)  ...  Pr(H1, DK) ]^T
    J ≜ [ Pr(H2, D1)  Pr(H2, D2)  ...  Pr(H2, DK) ]                 (6)
        [     ...         ...     ...      ...    ]
        [ Pr(HM, D1)  Pr(HM, D2)  ...  Pr(HM, DK) ]

J is related to the forward transition probabilities of the channel by the following expression (a compact form of Bayes' Theorem):
            [ p1  0   ...  0  ]
    J = F · [ 0   p2  ...  0  ]  = F · diag(p)                      (7)
            [ ... ...  ... ...]
            [ 0   0   ...  pM ]
            \_____ diag(p) ____/

Note: This is equivalent to a new (joint) source having alphabet
{(H1, D1), (H1, D2), ..., (HM, DK)} and ensemble (joint ensemble) defined as follows:

                 { (H1, D1), Pr(H1, D1) ;
                   (H1, D2), Pr(H1, D2) ;
                   ...
    (H x D, J) =   (Hm, Dk), Pr(Hm, Dk) ;                           (8)
                   ...
                   (HM, DK), Pr(HM, DK) }

               = { ((Hm, Dk), Pr(Hm, Dk)), with Pr(Hm, Dk) = J_km, for all m, k : 1 ≤ m ≤ M, 1 ≤ k ≤ K }

Measure of Information at the Output of a Channel
In general three measures of information are of main interest:
1  the Entropy of a Source, in (info) bits per source symbol;
2  the Mutual Entropy (or Mutual Information) of a Channel, in (info) bits per channel symbol;
3  the Discrimination of a Sink.

Next we will focus on the Mutual Information of a Channel, Hmut.

Mutual Information of a Channel
The mutual information measures the amount of information that the output of the channel (i.e. the received message) gives about the input to the channel (the transmitted message). That is, when symbols or signals are transmitted over a noisy communication channel, information is received. The amount of information received is given by the mutual information,

    Hmut ≥ 0                                                        (9)
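Before working through the mutual-information formulas that follow, the backward matrix B of eq. (5) and the joint matrix J of eq. (7) can be formed and sanity-checked numerically. This continues the binary-symmetric-channel sketch; eps and p are still assumed example values:

```python
import numpy as np

eps, p = 0.1, np.array([0.7, 0.3])   # illustrative BSC parameters
F = np.array([[1 - eps, eps],
              [eps, 1 - eps]])       # F[k, m] = Pr(Dk | Hm)
q = F @ p                            # eq. (2): Pr(Dk)

J = F @ np.diag(p)                   # eq. (7): J[k, m] = Pr(Hm, Dk)
B = J / q[:, None]                   # Bayes: B[k, m] = Pr(Hm | Dk) = J[k, m] / Pr(Dk)

# Sanity checks: summing J over k recovers p, summing over m recovers q,
# and each row of B (conditioning on one Dk) sums to 1.
print(J.sum(axis=0))                 # -> p
print(J.sum(axis=1))                 # -> q
print(B.sum(axis=1))                 # -> [1. 1.]
```

The marginalisation checks are a quick way to catch a transposed matrix: J must reproduce both p and q exactly, whatever F and p are.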
which is defined, in terms of the channel parameters, as

    Hmut ≜ Hmut(p, F) = Σ_{m=1..M} Σ_{k=1..K} F_km p_m log2( F_km / q_k )        (10)

                      = Σ_{m=1..M} Σ_{k=1..K} J_km log2( J_km / (p_m q_k) )      (11)

                      = 1_K^T · ( J ⊙ log2( J ⊘ (F p p^T) ) ) · 1_M              (12)
                                 \________ K x M matrix ________/

in bits per channel symbol, where
    1_M = a column vector of M ones (and 1_K a column vector of K ones),
    ⊙, ⊘ = the Hadamard operators (element-wise multiplication and division),
    F p p^T = q p^T = the K x M matrix whose (k, m) element is q_k p_m.
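A sketch of eqs. (10)-(12) for the same illustrative binary symmetric channel, checking that the double-sum and Hadamard (matrix) forms agree; eps and p remain assumed example values, not from the notes:

```python
import numpy as np

eps, p = 0.1, np.array([0.7, 0.3])   # illustrative BSC parameters
F = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
q = F @ p                            # eq. (2)
J = F @ np.diag(p)                   # joint matrix, eq. (7)
K, M = F.shape

# Double-sum form, eq. (11): Hmut = sum_k sum_m J[k,m] log2(J[k,m] / (p[m] q[k]))
H_mut = sum(J[k, m] * np.log2(J[k, m] / (p[m] * q[k]))
            for k in range(K) for m in range(M))

# Matrix form, eq. (12): 1_K^T (J o log2(J / q p^T)) 1_M, with q p^T = (F p) p^T
H_mat = np.ones(K) @ (J * np.log2(J / np.outer(q, p))) @ np.ones(M)

print(f"Hmut = {H_mut:.4f} bits per channel symbol")
assert np.isclose(H_mut, H_mat) and H_mut >= 0.0    # consistent with eq. (9)
```

For a BSC this agrees with the entropy identity Hmut = H(q) - h(eps), where h is the binary entropy function, which gives an independent check on the computation.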