Chapter 4

Frequency Analysis of Signals and Systems

Contents

Motivation: complex exponentials are eigenfunctions ...... 4.2
Overview ...... 4.3
Frequency Analysis of Continuous-Time Signals ...... 4.4
The Fourier series for continuous-time signals ...... 4.4
The Fourier transform for continuous-time aperiodic signals ...... 4.5
Frequency Analysis of Discrete-Time Signals ...... 4.7
The Fourier series for discrete-time periodic signals ...... 4.8
Relationship of DTFS to z-transform ...... 4.10
Preview of 4.4.3, analysis of LTI systems ...... 4.11
The Fourier transform of discrete-time aperiodic signals ...... 4.12
Relationship of the Fourier transform to the z-transform ...... 4.13
Gibbs phenomenon ...... 4.14
Energy density spectrum of aperiodic signals ...... 4.16
Properties of the DTFT ...... 4.18
The sampling theorem revisited ...... 4.21
Proofs of Spectral Replication ...... 4.23
Frequency-domain characteristics of LTI systems ...... 4.30
A first attempt at filter design ...... 4.32
Relationship between system function and ...... 4.32
Computing the frequency response function ...... 4.33
Steady-state and transient response for sinusoidal inputs ...... 4.35
LTI systems as frequency-selective filters ...... 4.37
Digital sinusoidal oscillators ...... 4.42
Inverse systems and deconvolution ...... 4.44
Minimum-phase, maximum-phase, and mixed-phase systems ...... 4.47
Linear phase ...... 4.47
Summary ...... 4.48

© J. Fessler, May 27, 2004, 13:11 (student version)

Motivation: complex exponentials are eigenfunctions

Why frequency analysis?
• Complex exponential signals, which are described by a frequency value, are eigenfunctions or eigensignals of LTI systems.
• Periodic signals, which are important in DSP practice, are sums of complex exponential signals.

Eigenfunctions of LTI Systems

Complex exponential signals play an important and unique role in the analysis of LTI systems both in continuous and discrete time.

Complex exponential signals are the eigenfunctions of LTI systems.

The eigenvalue corresponding to the complex exponential signal with frequency ω₀ is H(ω₀), where H(ω) is the Fourier transform of the impulse response h(·).

This statement is true in both CT and DT and in both 1D and 2D (and higher). The only difference is the notation for frequency and the definition of complex exponential signal and Fourier transform.

Continuous-Time

x(t) = e^{j2πF₀t} → [LTI, h(t)] → y(t) = h(t) * e^{j2πF₀t} = H_a(F₀) e^{j2πF₀t}.

Proof: (skip)

2πF0t ∞ 2πF0(t τ) 2πF0t ∞ 2πF0τ 2πF0t y(t) = h(t) x(t) = h(t) e = h(τ) e − dτ = e h(τ) e− dτ = H (F ) e , ∗ ∗ a 0 Z−∞ Z−∞ where ∞ 2πF τ ∞ 2πF t Ha(F ) = h(τ) e− dτ = h(t) e− dt . Z−∞ Z−∞ Discrete-Time

ω0n ω0n ω0n (ω0n+∠ (ω0)) x[n] = e LTI, h[n] y[n] = h[n] e = (ω ) e = (ω ) e H → → ∗ H 0 |H 0 | Could you show this using the z-transform? No, because z-transform of eω0n does not exist! Proof of eigenfunction property:

y[n] = h[n] * x[n] = h[n] * e^{jω₀n} = Σ_{k=−∞}^{∞} h[k] e^{jω₀(n−k)} = [Σ_{k=−∞}^{∞} h[k] e^{−jω₀k}] e^{jω₀n} = H(ω₀) e^{jω₀n},

where for any ω ∈ ℝ:

H(ω) = Σ_{k=−∞}^{∞} h[k] e^{−jωk} = Σ_{n=−∞}^{∞} h[n] e^{−jωn}.

In the context of LTI systems, H(ω) is called the frequency response of the system, since it describes "how much the system responds to an input with frequency ω."

This property alone suggests the quantities H_a(F) (CT) and H(ω) (DT) are worth studying.

Similar in 2D!
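The eigenfunction property is easy to verify numerically. The sketch below (Python/NumPy rather than the MATLAB used elsewhere in these notes; the 3-tap filter and frequency ω₀ = π/3 are arbitrary illustrative choices, not from the text) passes a complex exponential through an FIR filter and checks that, past the startup transient, the output is just H(ω₀) times the input.

```python
import numpy as np

# Illustrative (not from the notes): a 3-tap FIR filter and an input frequency.
h = np.array([0.25, 0.5, 0.25])
omega0 = np.pi / 3
n = np.arange(200)
x = np.exp(1j * omega0 * n)          # complex exponential input

# H(omega0) = sum_k h[k] e^{-j omega0 k}
H0 = np.sum(h * np.exp(-1j * omega0 * np.arange(len(h))))

# Past the startup transient (n >= len(h) - 1), the output equals H0 * x[n].
y = np.convolve(x, h)[:len(n)]
print(np.allclose(y[2:], H0 * x[2:]))   # True
```

The same check works for any stable filter: only the complex scale factor H(ω₀) changes, never the frequency.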

Most properties of CTFT and DTFT are the same. One huge difference:

The DTFT H(ω) is always periodic: H(ω + 2π) = H(ω), ∀ω,
whereas the CTFT is periodic only if x(t) is a train of uniformly spaced Dirac delta functions.

Why periodic? (Preview.) DT frequencies are unique only on [−π, π). Also H(ω) = H(z)|_{z=e^{jω}}, so the frequency response for any ω is one of the values of the z-transform along the unit circle, hence 2π periodic. (Only useful if the ROC of H(z) includes the unit circle.)

Why is frequency analysis so important? What does Fourier offer over the z-transform?

Problem: the z-transform does not exist for eternal periodic signals.

Example. x[n] = (−1)ⁿ. What is X(z)?

x[n] = x₁[n] + x₂[n] = (−1)ⁿ u[n] + (−1)ⁿ u[−n − 1].

From the table: X₁(z) = 1/(1 + z^{−1}) and X₂(z) = −1/(1 + z^{−1}), so by linearity: X(z) = X₁(z) + X₂(z) = 0. But what is the ROC? ROC₁: |z| > 1, ROC₂: |z| < 1, so ROC₁ ∩ ROC₂ = ∅. In fact, the z-transform summation does not converge for any z for this signal. In fact, X(z) does not exist for any eternal periodic signal other than x[n] = 0. (All "causal periodic" signals are fine though.)

Yet, periodic signals are quite important in DSP practice. Examples: networking over home power lines. Or, a practical issue in audio recording systems is eliminating "60 cycle hum," a 60 Hz periodic signal contaminating the audio. We do not yet have the tools to design a digital filter that would eliminate, or reduce, this periodic contamination. We need the background in Ch. 4 (DTFT) and 5 (DFT) to be able to design filters in Ch. 8.

Roadmap (See Table 4.27)

Signal (time)        Transform (frequency)    Continuous Time            Discrete Time
Aperiodic            Continuous frequency     Fourier Transform (306)    DTFT (Ch. 4) (4.1)
                                                                         (periodic in frequency)
Periodic             Discrete frequency       Fourier Series (306)       DTFS (Ch. 4) (4.1) or DFT (Ch. 5); FFT (Ch. 6)
(periodic in time)                                                       (periodic in time and frequency)

Overview

• The DTFS is the discrete-time analog of the continuous-time Fourier series: a simple decomposition of periodic DT signals.
• The DTFT is the discrete-time analog of the continuous-time FT studied in 316.
• Ch. 5, the DFT, adds sampling in the Fourier domain as well as sampling in the time domain.
• Ch. 6, the FFT, is just a fast way to implement the DFT on a computer.
• Ch. 7 is filter design.

Familiarity with the properties is not just theoretical, but practical too; e.g., after designing a notch filter for 60 Hz, one can use the scaling property to apply the design in another country with a different AC frequency.

Frequency Analysis of Continuous-Time Signals (Skim / Review)

4.1.1 The Fourier series for continuous-time signals

If a continuous-time signal x_a(t) is periodic with fundamental period T₀, then it has fundamental frequency F₀ = 1/T₀. Assuming the Dirichlet conditions hold (see text), we can represent x_a(t) using a sum of harmonically related complex exponential signals e^{j2πkF₀t}. The component frequencies (kF₀) are integer multiples of the fundamental frequency. In general one needs an infinite series of such components. This representation is called the Fourier series synthesis equation:

x_a(t) = Σ_{k=−∞}^{∞} c_k e^{j2πkF₀t},

where the Fourier series coefficients are given by the following analysis equation:

c_k = (1/T₀) ∫_{T₀} x_a(t) e^{−j2πkF₀t} dt,  k ∈ ℤ.

If x_a(t) is a real signal, then the coefficients are Hermitian symmetric: c_{−k} = c_k^*.

4.1.2 Power density spectrum of periodic signals

The Fourier series representation illuminates how much power there is in each frequency component due to Parseval's theorem:

Power = (1/T₀) ∫_{T₀} |x_a(t)|² dt = Σ_{k=−∞}^{∞} |c_k|².

We display this spectral information graphically as follows.
• power density spectrum: kF₀ vs |c_k|²
• magnitude spectrum: kF₀ vs |c_k|
• phase spectrum: kF₀ vs ∠c_k

Example. Consider the following periodic triangle wave signal, having period T₀ = 4 ms, shown here graphically.

[figure: periodic triangle wave x_a(t), period 4 ms, peak value 2, shown for t from −2 to 10 ms]

A picture is fine, but we also need mathematical formulas for analysis. One useful “time-domain” formula is an infinite sum of shifted triangle pulse signals as follows:

[figure: one period x̃_a(t), ramping from 0 at t = −2 to 2 at t = 2]

x_a(t) = Σ_{k=−∞}^{∞} x̃_a(t − kT₀), where x̃_a(t) = { t/2 + 1, |t| < 2; 0, otherwise. }

The Fourier series representation has coefficients

c_k = (1/4) ∫_{−2}^{2} (t/2 + 1) e^{−j2π(k/4)t} dt = { 1, k = 0; e^{jπ/2} (−1)^k/(πk), k ≠ 0 }
    = { 1, k = 0; e^{jπ/2}/(πk), k ≠ 0 even; e^{−jπ/2}/(πk), k odd, }

which can be found using the following line of MATLAB and simplifying:

syms t k, pretty(int(1/4 * (1+t/2) * exp(-1i*2*pi*k/4*t), t, -2, 2))
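As a sanity check on the simplification, the integral can also be compared against the closed form e^{jπ/2}(−1)^k/(πk) = j(−1)^k/(πk) numerically (a NumPy sketch; the trapezoid rule and grid size are arbitrary choices):

```python
import numpy as np

def trapezoid(f, t):
    # composite trapezoid rule on a uniform grid
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0])

t = np.linspace(-2, 2, 100001)
for k in range(1, 6):
    ck_num = trapezoid((t / 2 + 1) * np.exp(-2j * np.pi * (k / 4) * t), t) / 4
    ck_formula = 1j * (-1) ** k / (np.pi * k)
    assert np.isclose(ck_num, ck_formula, atol=1e-6)

# k = 0 gives the average value of the signal: c_0 = 1
assert np.isclose(trapezoid(t / 2 + 1 + 0j, t) / 4, 1.0)
```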

So in terms of complex exponentials (or sinusoids), we write x_a(t) as follows:

x_a(t) = Σ_{k=−∞}^{∞} c_k e^{j2π(k/4)t} = 1 + Σ_{k≠0} e^{jπ/2} ((−1)^k/(πk)) e^{j2π(k/4)t}
       = 1 + Σ_{k=2, even}^{∞} (2/(πk)) cos(2π(k/4)t + π/2) + Σ_{k=1, odd}^{∞} (2/(πk)) cos(2π(k/4)t − π/2).

[figure: power density spectrum of x_a(t): |c₀|² = 1 at F = 0, and |c_k|² = 1/(πk)², i.e., 1/π², 1/(4π²), 1/(9π²), ... at F = 0.25, 0.50, 0.75, ... kHz]

Note that an infinite number of complex exponential components are required to represent this periodic signal. For practical applications, a truncated series expansion consisting of a finite number of sinusoidal terms is often sufficient. For example, for additive music synthesis (based on adding sine-wave generators), we only need to include the terms with frequencies within the audible range of human ears.

4.1.3 The Fourier transform for continuous-time aperiodic signals

Analysis equation: X_a(F) = ∫_{−∞}^{∞} x_a(t) e^{−j2πFt} dt
Synthesis equation: x_a(t) = ∫_{−∞}^{∞} X_a(F) e^{j2πFt} dF

Example. x_a(t) = rect(t) ↔CTFT X_a(F) = sinc(F) = { 1, F = 0; sin(πF)/(πF), F ≠ 0. }

Caution: this definition of sinc is consistent with MATLAB and most DSP books. However, a different definition (without the π terms) is used in some signals and systems books.

4.1.4 Energy density spectrum of aperiodic signals

Parseval's relation for the energy of a signal:

Energy = ∫_{−∞}^{∞} |x_a(t)|² dt = ∫_{−∞}^{∞} |X_a(F)|² dF

So |X_a(F)|² represents the energy density spectrum of the signal x_a(t).

Properties of the Continuous-Time Fourier Transform

Property                 Time domain                               Fourier domain
Synthesis, Analysis      x_a(t) = ∫_{−∞}^{∞} X_a(F) e^{j2πFt} dF   X_a(F) = ∫_{−∞}^{∞} x_a(t) e^{−j2πFt} dt
Eigenfunction            h(t) * e^{j2πF₀t} = H_a(F₀) e^{j2πF₀t}    H(F) δ(F − F₀) = H(F₀) δ(F − F₀)
Linearity                α x₁(t) + β x₂(t)                         α X₁(F) + β X₂(F)
Time shift               x_a(t − τ)                                X_a(F) e^{−j2πFτ}
Time reversal            x_a(−t)                                   X_a(−F)
Convolution              x₁(t) * x₂(t)                             X₁(F) · X₂(F)
Cross-correlation        x₁(t) ⋆ x₂(t) = x₁(t) * x₂^*(−t)          X₁(F) · X₂^*(F)
Frequency shift          x_a(t) e^{j2πF₀t}                         X_a(F − F₀)
Modulation (cosine)      x_a(t) cos(2πF₀t)                         [X_a(F − F₀) + X_a(F + F₀)] / 2
Multiplication           x₁(t) · x₂(t)                             X₁(F) * X₂(F)
Freq. differentiation    t x_a(t)                                  (j/(2π)) (d/dF) X_a(F)
Time differentiation     (d/dt) x_a(t)                             j2πF X_a(F)
Conjugation              x_a^*(t)                                  X_a^*(−F)
Scaling                  x_a(at)                                   (1/|a|) X_a(F/a)
Symmetry properties      x_a(t) real                               X_a(F) = X_a^*(−F)
                         x_a(t) = x_a^*(−t)                        X_a(F) real
Duality                  X_a^*(t)                                  x_a^*(F)
Relation to Laplace      X_a(F) = X(s)|_{s=j2πF}
Parseval's Theorem       ∫_{−∞}^{∞} x₁(t) x₂^*(t) dt = ∫_{−∞}^{∞} X₁(F) X₂^*(F) dF
Rayleigh's Theorem       ∫_{−∞}^{∞} |x_a(t)|² dt = ∫_{−∞}^{∞} |X_a(F)|² dF
DC Value                 ∫_{−∞}^{∞} x_a(t) dt = X_a(0)

A function that satisfies x_a(t) = x_a^*(−t) is said to have Hermitian symmetry.

4.1.3 Existence of the Continuous-Time Fourier Transform (skim)

Sufficient conditions on x_a(t) that ensure that the Fourier transform exists:
• x_a(t) is absolutely integrable (over all of ℝ).
• x_a(t) has only a finite number of discontinuities and a finite number of maxima and minima in any finite region.
• x_a(t) has no infinite discontinuities.

Do these conditions guarantee that taking the FT of a function x_a(t) and then taking the inverse FT will give you back exactly the same function x_a(t)? No! Consider the function

x_a(t) = { 1, t = 0; 0, otherwise. }

Since this function (called a "null function") has no "area," X_a(F) = 0, so the inverse FT x̃_a = F^{−1}[X_a] is simply x̃_a(t) = 0, which does not equal x_a(t) exactly! However, x_a and x̃_a are equivalent in the L₂(ℝ) sense that ‖x_a − x̃_a‖² = ∫ |x_a(t) − x̃_a(t)|² dt = 0, which is more than adequate for any practical problem.

If we restrict attention to continuous functions x_a(t), then it will be true that x_a = F^{−1}[F[x_a]]. Most physical functions are continuous, or at least do not have the type of isolated points that the function x_a(t) above has, so the above mathematical subtleties need not deter our use of transform methods. We can safely restrict attention to functions x_a(t) for which x_a = F^{−1}[F[x_a]] in this course.

Lerch's theorem: if f and g have the same Fourier transform, then f − g is a null function, i.e., ∫_{−∞}^{∞} |f(t) − g(t)| dt = 0.

4.2 Frequency Analysis of Discrete-Time Signals

In Ch. 2, we analyzed LTI systems using superposition. We decomposed the input signal x[n] into a train of shifted delta functions:

x[n] = Σ_{k=−∞}^{∞} c_k x_k[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k],

determined the response to each shifted delta function δ[n − k] →T h_k[n], and then used linearity and time-invariance to find the overall output signal: y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k].

The "shifted delta" decomposition is not the only useful choice for the "elementary" functions x_k[n]. Another useful choice is the collection of complex-exponential signals {e^{jωn}} for various ω. We have already seen that the response of an LTI system to the input e^{jωn} is H(ω) e^{jωn}, where H(ω) is the frequency response of the system.
• Ch 2: x_k[n] = δ[n − k] →T y_k[n] = h[n − k]
• Ch 4: x_k[n] = e^{jω_k n} →T y_k[n] = H(ω_k) e^{jω_k n}

So now we just need to figure out how to do the decomposition of x[n] into a weighted combination of complex-exponential signals.

We begin with the case where x[n] is periodic.

4.2.1 The Fourier series for discrete-time periodic signals

For a DT periodic signal, should an infinite set of frequency components be required? No, because DT frequencies alias to the interval [−π, π]. Fact: if x[n] is periodic with period N, then x[n] has the following series representation (synthesis, inverse DTFS):

N 1 − ωkn 2π x[n] = ck e , where ωk = k. picture of ωk’s on [0, 2π) (4-1) N kX=0

Intuition: if x[n] has period N, then x[n] has N degrees of freedom. The N c_k's in the above decomposition express those degrees of freedom in another coordinate system.

Linear algebra / matrix perspective (Undergrads: skim. Grads: study.)

To prove (4-1), one can use the fact that the N × N matrix W with (n, k)th element W_{n,k} = e^{j(2π/N)kn} is invertible, where we number the elements from 0 to N − 1. In fact, the columns of W are orthogonal. To show that the columns are orthogonal, we first need the following simple property:

N 1 1 −  2π nm 1, m = 0, N, 2N, . . . ∞ e N =   = δ[m kN] . N 0, otherwise − n=0  k= X X−∞

2πn(kN)/N 2πnk 1 N 1 − The case where m is a multiple of N is trivial, since clearly e = e = 1 so N n=0 1 = 1. For the case where m is not a multiple of N, we can apply the finite geometric series formula: P N  2π m N 1 N 1 n 1 e N 1 −  2π nm 1 −  2π m 1 1 1 1 e N = e N = − = − = 0. N N N   2π m N  2π m n=0 n=0 1 e N 1 e N X X   − −     Now let wk and wl be two columns of the matrix W, for k, l = 0, . . . , N 1. Then the inner product of these two columns is − N 1 N 1 N 1 k l − k l −  2π kn  2π ln −  2π (k l)n w , w = w (w )∗ = e N e− N = e N − = N δ[k l], h i n n − n=0 n=0 n=0 X X X proving that the columns of W are orthogonal. This also proves that W = 1 W is an matrix, so its inverse is just 0 √N orthonormal 1 1 its Hermitian transpose: W− = W = W , where “ ” denotes Hermitian transpose. Furthermore, 0 0 √N 0 0

W^{−1} = (√N W₀)^{−1} = (1/√N) W₀^{−1} = (1/√N) W₀′ = (1/N) W′.

Thus we can rewrite (4-1) as

[x[0], x[1], ..., x[N−1]]ᵀ = W [c₀, c₁, ..., c_{N−1}]ᵀ,  so  [c₀, c₁, ..., c_{N−1}]ᵀ = (1/N) W′ [x[0], x[1], ..., x[N−1]]ᵀ.
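The orthogonality claim and the inverse formula W^{−1} = (1/N) W′ can be verified directly (a NumPy sketch; N = 8 and the test vector are arbitrary choices):

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(2j * np.pi * np.outer(n, n) / N)    # W[n, k] = e^{j(2π/N)kn}

# Columns are orthogonal: W'W = N I, hence W^{-1} = (1/N) W'
assert np.allclose(W.conj().T @ W, N * np.eye(N))

# Analysis then synthesis recovers the signal: c = (1/N) W'x, x = W c
x = np.array([4.0, 4, 0, 0, 4, 4, 0, 0])       # any period-N vector works
c = W.conj().T @ x / N
assert np.allclose(W @ c, x)
```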

The DTFS analysis equation

How can we find the coefficients c_k in (4-1) without using linear algebra?

Read. Multiply both sides of (4-1) by (1/N) e^{−j(2π/N)ln} and sum over n:

Σ_{n=0}^{N−1} (1/N) e^{−j(2π/N)ln} x[n] = Σ_{n=0}^{N−1} (1/N) e^{−j(2π/N)ln} Σ_{k=0}^{N−1} c_k e^{j(2π/N)kn} = Σ_{k=0}^{N−1} c_k [(1/N) Σ_{n=0}^{N−1} e^{j(2π/N)(k−l)n}] = Σ_{k=0}^{N−1} c_k δ[k − l] = c_l.

Therefore, replacing l with k, the DTFS coefficients are given by the following analysis equation:

N 1 1 −  2π kn c = x[n] e− N . (4-2) k N n=0 X The above expression is defined for all k, but we really only need to evaluate it for k = 0, . . . , N 1, because: −

The c_k's are periodic with period N.

Proof (uses a simplification method we’ll see repeatedly):

c_{k+N} = (1/N) Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)(k+N)n} = (1/N) Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn} e^{−j2πn} = (1/N) Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn} = c_k.

Equations (4-1) and (4-2) are sometimes called the discrete-time Fourier series or DTFS. In Ch. 5 we will discuss the discrete Fourier transform (DFT), which is similar except for scale factors.

Example. Skill: Compute DTFS representation of DT periodic signals.

Consider the periodic signal x[n] = {4, 4, 0, 0} (period N = 4), a DT square wave.

c_k = (1/4) Σ_{n=0}^{3} x[n] e^{−j(2π/4)kn} = (1/4)[4 + 4 e^{−j(2π/4)k}] = 1 + e^{−jπk/2},

so {c₀, c₁, c₂, c₃} = {2, 1 − j, 0, 1 + j} and

 2π n  2π 3n  2π n  2π n π π x[n] = 2 + (1 ) e 4 + (1 + ) e 4 = 2 + (1 ) e 4 + (1 + ) e− 4 = 2 + 2√2 cos n . − − 2 − 4   This “square wave” is represented by a DC term plus a single sinusoid with frequency ω1 = π/2. From Fourier series analysis, we know that a CT square wave has an infinite number of frequency components. The above signal x[n] could have arisen from sampling a particular CT square wave xa(t). picture. But our DT square wave only has two frequency components: DC and ω = π/2. Where did the extra frequency components go? They aliased to DC, π/2, and π. 1   Hermitian symmetry

If x[n] is real, then the DTFS coefficients are Hermitian symmetric, i.e., c_{−k} = c_k^*. But we usually only evaluate c_k for k = 0, ..., N − 1, due to periodicity. So a more useful expression is: c₀ is real, and c_{N−k} = c_k^* for k = 1, ..., N − 1. The preceding example illustrates this.

4.2.2 Power density spectrum of periodic signals

The power density spectrum of a periodic signal is a plot that shows how much power the signal has in each frequency component ω_k. For pure DT signals, we can plot vs k or vs ω_k. For a DT signal formed by sampling a CT signal at rate F_s, it is more natural to plot power vs the corresponding continuous frequencies F_k = (k₀/N) F_s, where

k₀ = { k, k = 0, ..., N/2 − 1; k − N, k = N/2, N/2 + 1, ..., N − 1 }

so that k₀ ∈ {−N/2, ..., N/2 − 1} and F_k ∈ [−F_s/2, F_s/2).

What is the power in the periodic signal x_k[n] = c_k e^{jω_k n}? Recall power for a periodic signal is

P_k = (1/N) Σ_{n=0}^{N−1} |x_k[n]|² = |c_k|².

So the power density spectrum is just a plot of |c_k|² vs k or ω_k or F_k. Parseval's relation expresses the average signal power in terms of the sum of the power of each spectral component:

Power = (1/N) Σ_{n=0}^{N−1} |x[n]|² = Σ_{k=0}^{N−1} |c_k|².

Read. Proof via matrix approach. Since x = Wc, ‖x‖² = ‖Wc‖² = ‖√N W₀ c‖² = N ‖W₀ c‖² = N ‖c‖².

Example. (Continuing above.)
Time domain: (1/N) Σ_{n=0}^{N−1} |x[n]|² = (1/4)[4² + 4²] = 8.
Frequency domain: Σ_{k=0}^{N−1} |c_k|² = |2|² + |1 − j|² + |1 + j|² = 4 + 2 + 2 = 8.

Picture of power spectrum of x[n].

Relationship of DTFS to z-transform

Even though we cannot take the z-transform of a periodic signal x[n], there is still a (somewhat) useful relationship. Let x̃[n] denote one period of x[n], i.e., x̃[n] = x[n] (u[n] − u[n − N]). Let X̃(z) denote the z-transform of x̃[n]. Then

c_k = (1/N) X̃(z)|_{z = e^{jω_k} = e^{j(2π/N)k}}.

Picture of unit circle with c_k's.

Clearly this is only valid if the ROC of the z-transform includes the unit circle. Is this a problem? No, because x̃[n] is a finite duration signal, so the ROC of X̃(z) is ℂ (except 0), so the ROC includes the unit circle.

Example. (Continuing above.) x̃[n] = {4, 4, 0, 0}, so X̃(z) = 4 + 4z^{−1}.

c_k = (1/4) X̃(z)|_{z = e^{j2πk/4}} = 1 + e^{−jπk/2},

which is the same equation for the coefficients as before.

Now we have seen how to decompose a DT periodic signal into a sum of complex exponential signals.
• This can help us understand the signal better (signal analysis). Example: looking for interference/harmonics on a 60 Hz AC power line.
• It is also very useful for understanding the effect of LTI systems on such signals (soon).

Preview of 4.4.3, analysis of LTI systems

Time domain input/output: x[n] → [h[n]] → y[n] = h[n] * x[n].

If x[n] is periodic, we cannot use z-transforms, but by (4-1) we can decompose x[n] into a weighted sum of N complex exponentials:

x[n] = Σ_{k=0}^{N−1} c_k x_k[n], where x_k[n] = e^{j(2π/N)kn}.

We already showed that the response of the system to the input x_k[n] is just y_k[n] = H(ω_k) e^{j(2π/N)kn}, where H(ω) is the frequency response corresponding to h[n]. Therefore, by superposition, the overall output signal is

y[n] = Σ_{k=0}^{N−1} c_k y_k[n] = Σ_{k=0}^{N−1} [c_k H(ω_k)] e^{j(2π/N)kn}.

For a periodic input signal, the response of an LTI system is periodic (with the same frequency components).

The DTFS coefficients of the output signal are the product of the DTFS coefficients of the input signal with certain samples H(ω_k) of the frequency response of the system.

This is the "convolution property" for the DTFS, since the time-domain convolution y[n] = h[n] * x[n] becomes simply multiplication of DTFS coefficients. The results are very useful because prior to this analysis we would have had to do this by time-domain convolution. Could we have used the z-transform to avoid convolution here? No, since X(z) does not exist for eternal periodic signals!
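The convolution property can be checked numerically. The sketch below (NumPy; the 2-tap averaging filter is an illustrative choice, not from the notes) filters the period-4 square wave from the earlier example and confirms that the output's DTFS coefficients are c_k H(ω_k).

```python
import numpy as np

# Illustrative 2-tap averaging filter applied to the period-4 square wave.
N = 4
x = np.array([4.0, 4.0, 0.0, 0.0])
h = np.array([0.5, 0.5])

# Steady-state output over one period (x extends periodically)
y = np.array([sum(h[m] * x[(n - m) % N] for m in range(len(h)))
              for n in range(N)])

# DTFS coefficients of the output = c_k * H(omega_k), omega_k = 2πk/N
c = np.fft.fft(x) / N
omega_k = 2 * np.pi * np.arange(N) / N
H = np.array([np.sum(h * np.exp(-1j * w * np.arange(len(h)))) for w in omega_k])
d = np.fft.fft(y) / N
assert np.allclose(d, c * H)
```

No time-domain convolution table was needed to predict the output spectrum: each input component was simply scaled by H(ω_k).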

Example. Making a violin sound like a flute. (skip: done in 206.)

F₀ = 1000 Hz sawtooth violin signal. T₀ = 1/F₀ = 1 msec. F_s = 8000 Hz, so N = 8 samples per period. T_s = 1/F_s = 0.125 msec.

We want a lowpass filter to remove all but the fundamental and first harmonic. Cutoff frequency: F_c = 2500 Hz, so in the digital domain: f_c = F_c/F_s = 2500/8000 = 5/16, so ω_c = 2π(5/16) = 5π/8.

Picture of x[n], y[n], H(ω), and the power density spectrum before and after filtering.

The DTFS allows us to analyze DT periodic signals by decomposing them into a sum of N complex exponentials, where N is the period. What about aperiodic signals? (Like speech or music.) Note that in the DTFS, ω_k = 2πk/N. Let N → ∞; then the ω_k's become closer and closer together and approach a continuum.

4.2.3 The Fourier transform of discrete-time aperiodic signals

Now we need a continuum of frequencies, so we define the following analysis equation:

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}, and we write x[n] ↔DTFT X(ω).  (4-3)

This is called the discrete-time Fourier transform or DTFT of x[n]. (The book just calls it the Fourier transform.) We have already seen that this definition is motivated by the analysis of LTI systems. The DTFT is also useful for analyzing the spectral content of sampled continuous-time signals, which we will discuss later.

Periodicity

The DTFT X(ω) is defined for all ω. However, the DTFT is periodic with period 2π, so we often just mention −π ≤ ω ≤ π. Proof:

X(ω + 2π) = Σ_{n=−∞}^{∞} x[n] e^{−j(ω+2π)n} = Σ_{n=−∞}^{∞} x[n] e^{−jωn} e^{−j2πn} = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = X(ω).

In fact, when you see a DTFT picture or expression like the following

X(ω) = { 1, |ω| ≤ ω_c; 0, ω_c < |ω| ≤ π, }

we really are just specifying one period of X(ω), and the remainder is implicitly defined by the periodic extension. Some books write X(e^{jω}) to remind us that the DTFT is periodic and to solidify the connection with the z-transform.

The Inverse DTFT (synthesis)

If the DTFT is to be very useful, we must be able to recover x[n] from X(ω). First a useful fact:

(1/2π) ∫_{−π}^{π} e^{jωm} dω = δ[m] = { 1, m = 0; 0, m ≠ 0, } for m ∈ ℤ.

Multiplying both sides of (4-3) by e^{jωn}/(2π) and integrating over ω (note: use k, not n, for the summation index inside!):

π ωn π ωn π e e ∞ ωk ∞ 1 ω(n k) ∞ (ω) dω = x[k] e− dω = x[k] e − dω = x[k] δ[n k] = x[n] . 2π X 2π 2π − π π "k= # k=  π  k= Z− Z− X−∞ X−∞ Z− X−∞ The exchange of order is ok if (ω) converges to (ω) pointwise (see below). XN X Thus we have the inverse DTFT: 1 π x[n] = (ω) eωn dω . (4-4) 2π π X Z−

Because both eωn and (ω) are periodic in ω with period 2π, any 2π interval will suffice for the integral. X

Lots of DTFT properties, mostly parallel to those of the CTFT. More later.

4.2.6 Relationship of the Fourier transform to the z-transform

(ω) = X(z) ω , X |z=e if ROC of z-transform includes the unit circle. (Follows directly from expressions.) Hence some books use the notation X(eω). n 1 1 Example: x[n] = (1/2) u[n] with ROC z > 1/2 which includes unit circle. So X(z) = 1 1 z−1 so (ω) = 1 1 e−ω . | | − 2 X − 2 If the ROC does not include the unit circle, then strictly speaking the FT does not exist! An “exception” is the DTFT of periodic signals, which we will “define” by using Dirac impulses (later).

4.2.4 Convergence of the DTFT

The DTFT involves an infinite sum, so we should consider when this sum "exists," i.e., when it is well defined. A sufficient condition for existence is that the signal be absolutely summable:

Σ_{n=−∞}^{∞} |x[n]| < ∞.

Any signal that is absolutely summable will have an ROC of X(z) that includes the unit circle, and hence will have a well-defined DTFT for all ω, and the inverse DTFT will give back exactly the same signal that you started with. In particular, under the above condition, one can show that the finite sum (which always exists)

N ωn (ω) = x[n] e− XN n= N X− converges pointwise to (ω) for all ω: X sup N (ω) (ω) 0 as N , ω |X − X | → → ∞ or in particular: (ω) (ω) as N ω. XN → X → ∞ ∀ Unfortunately the absolute summability condition precludes periodic signals, which were part of our motivation! To handle periodic signals with the DTFT, one must allow Dirac delta functions in (ω). The book avoids this, so I will also for X now. For DT periodic signals, we can use the DTFS instead. However, there are some signals that are not absolutely summable, but nevertheless do satisfy the weaker finite-energy condition

E_x = Σ_{n=−∞}^{∞} |x[n]|² < ∞.

For such signals, the DTFT converges in a mean-square sense:

∫_{−π}^{π} |X_N(ω) − X(ω)|² dω → 0 as N → ∞.

This is theoretically weaker than pointwise convergence, but it means that the error energy diminishes with increasing N, so practically the two functions become physically indistinguishable.

Fact. Any absolutely summable signal has finite energy:

Σ_n |x[n]| < ∞ ⟹ (Σ_n |x[n]|)(Σ_m |x[m]|) < ∞ ⟹ Σ_n |x[n]|² + Σ_n |x[n]| Σ_{m≠n} |x[m]| < ∞ ⟹ Σ_n |x[n]|² < ∞.

Gibbs phenomenon (an example of mean-square convergence). Our first filter design.

Suppose we would like to design a discrete-time low-pass filter, with cutoff frequency 0 < ωc < π, having frequency response:

H(ω) = { 1, 0 ≤ |ω| ≤ ω_c; 0, ω_c < |ω| ≤ π } (periodic).

Find the impulse response h[n] of such a filter. Careful with n = 0!

π ωc ωc ωcn ωcn 1 1 1 e e− sin(ω n) ω ω h[n] = (ω) eωn dω = eωn dω = eωn = − = c = c sinc c n . 2π π H 2π ωc 2πn ω= ω 2πn πn π π Z− Z− c −  

Is h[n] an FIR or IIR filter? This is IIR. (Nevertheless it could still be practical if it had a rational system function.)

Is it stable? It depends on ω_c. If ω_c = π, then h[n] = δ[n], which is stable. But consider ω_c = π/2. Then

Σ_{n=−∞}^{∞} |h[n]| = 1/2 + 2 Σ_{n=1}^{∞} |sin(nπ/2)|/(πn) = 1/2 + (2/π) Σ_{k=0}^{∞} 1/(2k + 1) = ∞,

so it is unstable.
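The closed form for h[n], including the n = 0 case, can be checked against a numeric inverse DTFT (a NumPy sketch; the trapezoid rule and grid size are arbitrary choices; np.sinc is the normalized sinc sin(πx)/(πx), matching the convention in these notes):

```python
import numpy as np

# Numeric inverse DTFT of the ideal lowpass response vs. the sinc formula.
wc = np.pi / 2
w = np.linspace(-wc, wc, 200001)
dw = w[1] - w[0]
for n in range(-5, 6):
    f = np.exp(1j * w * n)
    h_num = (f.sum() - 0.5 * (f[0] + f[-1])) * dw / (2 * np.pi)  # trapezoid
    h_formula = (wc / np.pi) * np.sinc(wc * n / np.pi)           # sin(wc n)/(πn)
    assert np.isclose(h_num, h_formula, atol=1e-7)
```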

[figure: frequency response H(ω) of the ideal lowpass filter: 1 for |ω| ≤ ω_c, 0 for ω_c < |ω| ≤ π]

[figure: impulse response h[n] = sin(ω_c n)/(πn) of the ideal lowpass filter, plotted for −20 ≤ n ≤ 20]

Can we implement h[n] with a finite number of adds, multiplies, and delays? Only if H(z) is in rational form, which it is not! It does not have an exact (finite) recursive implementation. How do we make a practical implementation? A simple, but suboptimal, approach: just truncate the impulse response, since h[n] ≈ 0 for large n. But how close will the frequency response of this FIR approximation be to the ideal then? Let

h_N[n] = { h[n], |n| ≤ N; 0, otherwise, }

and let H_N(ω) be the corresponding frequency response:

H_N(ω) = Σ_{n=−N}^{N} h_N[n] e^{−jωn} = Σ_{n=−N}^{N} (sin(ω_c n)/(πn)) e^{−jωn}.

There is no closed form for H_N(ω). Use MATLAB to compute H_N(ω) and display.

[figure: truncated sinc filters for N = 10, 20, 80: h_N[n], H_N(ω), and the error |H_N(ω) − H(ω)|]

As N increases, h_N[n] → h[n]. But H_N(ω) does not converge to H(ω) for every ω. However, the energy difference of the two spectra decreases to 0 with increasing N. Practically, this is good enough. This effect, where truncating a sum in one domain leads to ringing in the other domain, is called Gibbs phenomenon.

We have just done our first filter design. It is a poor design though: lots of taps, no recursive form, and a suboptimal approximation to the ideal lowpass for any given number of multiplies.
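The non-convergence near the band edge is easy to see numerically: the peak overshoot of H_N(ω) stays at roughly 9% of the jump for every N instead of dying out. (A NumPy sketch; the frequency grid and the values of N are arbitrary choices.)

```python
import numpy as np

# Gibbs: the peak overshoot of the truncated-sinc response does not shrink as N grows.
wc = np.pi / 2
w = np.linspace(-np.pi, np.pi, 4001)
for N in (10, 20, 80):
    n = np.arange(-N, N + 1)
    hN = (wc / np.pi) * np.sinc(wc * n / np.pi)
    HN = (hN * np.exp(-1j * np.outer(w, n))).sum(axis=1).real
    overshoot = HN.max() - 1.0
    assert 0.05 < overshoot < 0.12     # ≈ 0.09 for each N, not decreasing to 0
```

The mean-square error ∫|H_N − H|² dω does go to zero, consistent with the convergence statements above; only the pointwise ripple near ω_c persists.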

4.2.5 Energy density spectrum of aperiodic signals or Parseval’s relation

E = Σ_{n=−∞}^{∞} |x[n]|² = Σ_{n=−∞}^{∞} x[n] x^*[n] = Σ_{n=−∞}^{∞} x[n] [(1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω]^*
  = (1/2π) ∫_{−π}^{π} X^*(ω) [Σ_{n=−∞}^{∞} x[n] e^{−jωn}] dω = (1/2π) ∫_{−π}^{π} X^*(ω) X(ω) dω = (1/2π) ∫_{−π}^{π} |X(ω)|² dω.

Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(ω)|² dω.

LHS: total energy of the signal. RHS: integral of the energy density over all frequencies. S_xx(ω) ≜ |X(ω)|² is called the energy density spectrum. When x[n] is aperiodic, the energy in any particular frequency is zero. But

(1/2π) ∫_{ω₁}^{ω₂} |X(ω)|² dω

quantifies how much energy the signal has in the frequency band [ω₁, ω₂], for −π ≤ ω₁ < ω₂ ≤ π.

Example. x[n] = aⁿ u[n] for a real. X(z) = 1/(1 − a z^{−1}), so X(ω) = 1/(1 − a e^{−jω}), so

S_xx(ω) = |X(ω)|² = |1/(1 − a e^{−jω})|² = 1/(1 − 2a cos ω + a²).

[figure: energy density spectrum S_xx(ω) of x[n] = aⁿ u[n] for a = −0.5, −0.2, 0, 0.2, 0.5 (peaked near ω = ±π for a < 0, near ω = 0 for a > 0), and the signals x[n] = aⁿ u[n] for a = 0.5 and a = −0.5]

Why is there greater energy density at high frequencies when a < 0? Recall aⁿ = (−|a|)ⁿ = (−1)ⁿ |a|ⁿ, which alternates. Note: for a = 0 we have x[n] = δ[n].
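Parseval's relation can be verified for this example: in the time domain Σ|aⁿu[n]|² = Σ a²ⁿ = 1/(1 − a²), which should match the numeric integral of S_xx(ω)/(2π). (A NumPy sketch; the trapezoid rule and grid size are arbitrary choices.)

```python
import numpy as np

# Parseval check for x[n] = a^n u[n]:
#   sum_n |x[n]|^2 = 1/(1 - a^2)  vs.  (1/2π) ∫ dω / (1 - 2a cos ω + a^2)
w = np.linspace(-np.pi, np.pi, 200001)
dw = w[1] - w[0]
for a in (-0.5, 0.2, 0.5):
    Sxx = 1 / (1 - 2 * a * np.cos(w) + a ** 2)
    energy_freq = (Sxx.sum() - 0.5 * (Sxx[0] + Sxx[-1])) * dw / (2 * np.pi)
    assert np.isclose(energy_freq, 1 / (1 - a ** 2))
```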

4.2.7 The Cepstrum (skip)

4.2.8 The Fourier transform of signals with poles on the unit circle (Skim)

A complex exponential signal has all its power concentrated in a single frequency component. So we expect that the DTFT of e^{jω₀n} would be something like "δ(ω − ω₀)."

[figure: a single Dirac impulse at ω = ω₀]

What is wrong with the above picture and the formula in quotes? Not explicitly periodic. Consider (using Dirac delta now):

X(ω) = "2π δ(ω − ω₀)" = 2π Σ_{k=−∞}^{∞} δ(ω − ω₀ − 2πk)

Assume ω₀ ∈ [−π, π].

x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω = ∫_{−π}^{π} Σ_{k=−∞}^{∞} δ(ω − ω₀ − 2πk) e^{jωn} dω = e^{jω₀n},

since the integral over [−π, π] hits exactly one of the Dirac delta functions. Thus

DTFT ∞ eω0n 2π δ(ω ω k2π) = “2π δ(ω ω ) ”. ↔ − 0 − − 0 k= X−∞ The step function has a pole on the unit circle, yet some textbooks also state:

u[n] ↔DTFT U(ω) = 1/(1 − e^{−jω}) + π Σ_{k=−∞}^{∞} δ(ω − 2πk).

The step function is not square summable, so this transform pair must be used with care. Rarely is it needed.

4.2.9 Sampling — Will be done later!
4.2.10 Frequency-domain classification of signals: the concept of bandwidth (Read)
4.2.11 The frequency ranges of some natural sounds (Skim)
4.2.12 Physical and mathematical dualities (Read)
• discrete time ↔ periodic spectrum (DTFT, DTFS)
• periodic time ↔ discrete spectrum (CTFS, DTFS)

4.3 Properties of the DTFT

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}

Most properties are analogous to those of the CTFT. Most can be derived directly from the corresponding z-transform properties, by using the fact that X(ω) = X(z)|_{z=e^{jω}}. Caution: reuse of the X(·) notation.

Periodicity: X(ω) = X(ω + 2π)

4.3.1 Symmetry properties of the DTFT

Time reversal
Recall x[−n] ↔Z X(z^{−1}). Thus x[−n] ↔DTFT X(z^{−1})|_{z=e^{jω}} = X(e^{−jω}) = X(z)|_{z=e^{−jω}} = X(−ω). So x[−n] ↔DTFT X(−ω). So if x[n] is even, i.e., x[n] = x[−n], then X(ω) = X(−ω) (also even).

Conjugation
Recall that x^*[n] ↔Z X^*(z^*). Thus x^*[n] ↔DTFT X^*(z^*)|_{z=e^{jω}} = X^*(e^{−jω}) = X^*(z)|_{z=e^{−jω}} = X^*(−ω). So x^*[n] ↔DTFT X^*(−ω).

Real signals

If $x[n]$ is real, i.e., $x[n] = x^*[n]$, then $\mathcal{X}(\omega) = \mathcal{X}^*(-\omega)$ (a Hermitian-symmetric spectrum).
In particular, writing $\mathcal{X}(\omega) = \mathcal{X}_R(\omega) + j\,\mathcal{X}_I(\omega)$, we have
• $\mathcal{X}_R(-\omega) = \mathcal{X}_R(\omega)$
• $\mathcal{X}_I(-\omega) = -\mathcal{X}_I(\omega)$
Combining: if $x[n]$ is real and even, then $\mathcal{X}(\omega)$ is also real and even, i.e., $\mathcal{X}(\omega) = \mathcal{X}(-\omega) = \mathcal{X}^*(\omega) = \mathcal{X}^*(-\omega)$.
There are many such relationships, as summarized in the following diagram.

$$x[n] = x_R^e[n] + j\, x_I^e[n] + x_R^o[n] + j\, x_I^o[n]$$
$$\mathcal{X}(\omega) = \mathcal{X}_R^e(\omega) + j\,\mathcal{X}_I^e(\omega) + \mathcal{X}_R^o(\omega) + j\,\mathcal{X}_I^o(\omega)$$
(The even components map straight down: $x_R^e \leftrightarrow \mathcal{X}_R^e$ and $j\,x_I^e \leftrightarrow j\,\mathcal{X}_I^e$; the odd components cross: $x_R^o \leftrightarrow j\,\mathcal{X}_I^o$ and $j\,x_I^o \leftrightarrow \mathcal{X}_R^o$.)

4.3.2 DTFT properties
Again, one motivation is to avoid PFE for the inverse DTFT.

Linearity: $a_1 x_1[n] + a_2 x_2[n] \overset{\text{DTFT}}{\longleftrightarrow} a_1 \mathcal{X}_1(\omega) + a_2 \mathcal{X}_2(\omega)$

Time shift: since $x[n-k] \overset{Z}{\longleftrightarrow} z^{-k} X(z)$,
$$x[n-k] \overset{\text{DTFT}}{\longleftrightarrow} e^{-j\omega k}\, \mathcal{X}(\omega) \quad \text{(a ``phase shift'')}$$

Time reversal (earlier): $x[-n] \overset{\text{DTFT}}{\longleftrightarrow} \mathcal{X}(-\omega)$
Conjugation (earlier): $x^*[n] \overset{\text{DTFT}}{\longleftrightarrow} \mathcal{X}^*(-\omega)$

Convolution (particularly useful for LTI systems): since $h[n] * x[n] \overset{Z}{\longleftrightarrow} H(z)\, X(z)$,
$$h[n] * x[n] \overset{\text{DTFT}}{\longleftrightarrow} \mathcal{H}(\omega)\, \mathcal{X}(\omega)$$

Cross-correlation:
$$r_{xy}[n] = x[n] * y^*[-n] \overset{\text{DTFT}}{\longleftrightarrow} S_{xy}(\omega) = \mathcal{X}(\omega)\, \mathcal{Y}^*(\omega)$$
If $x[n]$ and $y[n]$ are real, then $r_{xy}[n] = x[n] * y[-n] \overset{\text{DTFT}}{\longleftrightarrow} S_{xy}(\omega) = \mathcal{X}(\omega)\, \mathcal{Y}(-\omega)$.

Autocorrelation (Wiener-Khintchine theorem)

$$r_{xx}[n] \;\overset{\text{DTFT}}{\longleftrightarrow}\; S_{xx}(\omega) = |\mathcal{X}(\omega)|^2$$
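As a numerical sanity check of the Wiener-Khintchine relation, we can compare the DTFT of the autocorrelation against $|\mathcal{X}(\omega)|^2$ at a few frequencies (a Python/NumPy sketch; the signal values here are arbitrary, not from the notes):

```python
import numpy as np

# A short real test signal (arbitrary values, for illustration only).
x = np.array([1.0, -2.0, 3.0, 0.5])
N = len(x)

# Autocorrelation r_xx[l] = sum_n x[n] x[n-l], for lags -(N-1) .. (N-1).
lags = np.arange(-(N - 1), N)
rxx = np.array([sum(x[n] * x[n - l] for n in range(N) if 0 <= n - l < N)
                for l in lags])

def dtft(sig, n_idx, w):
    """Direct DTFT sum: sum_n sig[n] exp(-j w n)."""
    return np.sum(sig * np.exp(-1j * w * n_idx))

# Wiener-Khintchine check: DTFT{r_xx}(w) = |X(w)|^2 at a few frequencies.
w_test = [0.0, 0.7, np.pi / 3, 2.0]
Sxx = np.array([dtft(rxx, lags, w) for w in w_test])
X = np.array([dtft(x, np.arange(N), w) for w in w_test])
ok = np.allclose(Sxx, np.abs(X) ** 2)
```

Note that $S_{xx}(\omega)$ comes out (numerically) real, as it must, since $r_{xx}[n]$ is real and even here.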

Frequency shift (complex modulation)

DTFT eω0n x[n] (ω ω ) ↔ X − 0

Example. Let $y[n] = e^{j\frac{3\pi}{4} n}\, x[n]$.

[Sketch: $\mathcal{X}(\omega)$ on $[-\pi, \pi]$, and its shifted copy $\mathcal{Y}(\omega) = \mathcal{X}(\omega - 3\pi/4)$.]

Modulation
$$x[n] \cos(\omega_0 n) \;\overset{\text{DTFT}}{\longleftrightarrow}\; \frac{1}{2}\left[\mathcal{X}(\omega - \omega_0) + \mathcal{X}(\omega + \omega_0)\right]$$
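The modulation property is easy to verify numerically for a finite-length signal (a Python/NumPy sketch; the signal values and frequencies are arbitrary made-up choices):

```python
import numpy as np

n = np.arange(8)                                   # finite-support test signal
x = np.array([3.0, 1.0, -4.0, 1.0, 5.0, -9.0, 2.0, 6.0])
w0 = 0.9                                           # arbitrary modulation frequency

def dtft(sig, w):
    return np.sum(sig * np.exp(-1j * w * n))

# Modulation property: DTFT{x[n] cos(w0 n)} = (X(w - w0) + X(w + w0)) / 2.
w_test = [0.0, 0.5, 1.3, np.pi]
lhs = np.array([dtft(x * np.cos(w0 * n), w) for w in w_test])
rhs = np.array([0.5 * (dtft(x, w - w0) + dtft(x, w + w0)) for w in w_test])
```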

ω0n ω0n since cos(ω0n) = (e + e− )/2. Now what? Apply frequency shift property. 2 1 π 2 ∞ Parseval’s Theorem (earlier) n= x[n] = 2π π (ω) dω −∞ | | − |X | Frequency differentiation P R

$$n\, x[n] \;\overset{\text{DTFT}}{\longleftrightarrow}\; j\,\frac{d}{d\omega}\mathcal{X}(\omega), \quad \text{since} \quad j\,\frac{d}{d\omega}\mathcal{X}(\omega) = j\,\frac{d}{d\omega}\sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n} = j\sum_{n=-\infty}^{\infty} x[n]\,(-jn)\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} (n\, x[n])\, e^{-j\omega n}.$$
Be careful with $n = 0$!

Example. Find $x[n]$ when $\mathcal{X}(\omega) = 1$ for $a \le \omega \le b$ (periodic), with $-\pi \le a < b \le \pi$, i.e., $\mathcal{X}(\omega) = \mathrm{rect}\!\left(\frac{\omega - (b+a)/2}{b-a}\right)$ (periodic). [Picture]
We derived earlier that $(\omega_c/\pi)\,\mathrm{sinc}((\omega_c/\pi)\,n) \overset{\text{DTFT}}{\longleftrightarrow} \mathrm{rect}(\omega/(2\omega_c))$ (periodic).
Let $\omega_c = (b-a)/2$; then $y[n] = \frac{b-a}{2\pi}\,\mathrm{sinc}\!\left(\frac{b-a}{2\pi}\, n\right) \overset{\text{DTFT}}{\longleftrightarrow} \mathcal{Y}(\omega) = \mathrm{rect}\!\left(\frac{\omega}{b-a}\right)$ (periodic).
How does $\mathcal{X}(\omega)$ relate to $\mathcal{Y}(\omega)$? By a frequency shift: $\mathcal{X}(\omega) = \mathcal{Y}(\omega - (b+a)/2)$, so $x[n] = e^{j\frac{b+a}{2} n}\, y[n]$:

$$x[n] = e^{j\frac{b+a}{2} n}\, \frac{b-a}{2\pi}\, \mathrm{sinc}\!\left(\frac{b-a}{2\pi}\, n\right).$$

Signal multiplication (time domain)
Should it correspond to convolution in the frequency domain? Yes, but the DTFT is periodic:

$$x[n] = x_1[n]\, x_2[n] \;\overset{\text{DTFT}}{\longleftrightarrow}\; \mathcal{X}(\omega) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}_1(\lambda)\, \mathcal{X}_2(\omega - \lambda)\, d\lambda.$$

Proof:
$$\mathcal{X}(\omega) = \sum_{n=-\infty}^{\infty} x_1[n]\, x_2[n]\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} \left[\frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}_1(\lambda)\, e^{j\lambda n}\, d\lambda\right] x_2[n]\, e^{-j\omega n}$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}_1(\lambda) \left[\sum_{n=-\infty}^{\infty} x_2[n]\, e^{-j(\omega-\lambda)n}\right] d\lambda = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}_1(\lambda)\, \mathcal{X}_2(\omega - \lambda)\, d\lambda,$$
which is called periodic convolution. It is similar to ordinary convolution (flip and slide), but we only integrate from $-\pi$ to $\pi$.

Compare:
• Convolution in the time domain corresponds to multiplication in the frequency domain.
• Multiplication in the time domain corresponds to periodic convolution in the frequency domain (because the DTFT is periodic).

Intuition: let $x[n] = x_1[n] \cdot x_2[n]$ where $x_1[n] = e^{j\omega_1 n}$ and $x_2[n] = e^{j\omega_2 n}$. Obviously $x[n] = e^{j(\omega_1 + \omega_2)n}$, but if $|\omega_1 + \omega_2| > \pi$ then $\omega_1 + \omega_2$ will alias to a frequency component within the interval $[-\pi, \pi]$. The "periodic convolution" takes care of this aliasing.
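Periodic convolution can also be checked numerically. Since the DTFT of a finite-length signal is a trigonometric polynomial, the periodic trapezoid rule evaluates the convolution integral exactly once enough nodes are used (a Python/NumPy sketch; the signals are arbitrary):

```python
import numpy as np

n = np.arange(5)
x1 = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
x2 = np.array([2.0, -1.0, 1.0, 4.0, 0.0])

def dtft(sig, w):
    """DTFT of a length-5 signal at frequency (or frequencies) w."""
    w = np.atleast_1d(w)
    return (sig[np.newaxis, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

# Periodic trapezoid rule on [-pi, pi): exact for trig polynomials of
# degree < M, which covers the degree <= 8 integrand here.
M = 64
lam = -np.pi + 2 * np.pi * np.arange(M) / M

w_test = [0.0, 1.1, 2.5]
pconv = np.array([np.sum(dtft(x1, lam) * dtft(x2, w - lam)) / M
                  for w in w_test])                  # (1/2pi) integral
direct = np.array([dtft(x1 * x2, w)[0] for w in w_test])
```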

Parseval generalized: $\displaystyle\sum_{n=-\infty}^{\infty} x[n]\, y^*[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}(\omega)\, \mathcal{Y}^*(\omega)\, d\omega$

$\omega = 0$ (DC value):
$$\mathcal{X}(0) = \sum_{n=-\infty}^{\infty} x[n]$$

$n = 0$ value:
$$x[0] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}(\omega)\, d\omega$$

Other "missing" properties relative to CT: no duality, no time differentiation, no time scaling.
Upsampling and downsampling: see the homework; derive them from the z-transform.

4.2.9 The sampling theorem revisited

Sampling takes $x_a(t) \to x[n]$; the CTFT and DTFT then relate $X_a(F)$ and $\mathcal{X}(\omega)$. How?

Basic relationships:
• $\mathcal{X}(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}$ (DTFT)
• $x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}(\omega)\, e^{j\omega n}\, d\omega$
• $X_a(F) = \int x_a(t)\, e^{-j2\pi F t}\, dt$ ((CT)FT)
• $x_a(t) = \int X_a(F)\, e^{j2\pi F t}\, dF$

Sampling: $x[n] = x_a(nT_s)$ where $T_s = 1/F_s$.

Questions:
I. How does $\mathcal{X}(\omega)$ relate to $X_a(F)$? Sum of shifted replicates. What analog frequencies $F$ relate to digital frequencies $\omega$? $\omega = 2\pi F / F_s$.
II. When can we recover $x_a(t)$ from $x[n]$ exactly? It is sufficient that $x_a(t)$ be bandlimited and sampled at (above) the Nyquist rate.
III. How do we recover $x_a(t)$ from $x[n]$? Sinc interpolation.

There had better be a simple relationship between $\mathcal{X}(\omega)$ and $X_a(F)$, otherwise digital processing of sampled CT signals could be impractical!

Spectral Replication (Result I)
Intuition / ingredients:
• If $x_a(t) = \cos(2\pi F t)$, then $x[n] = \cos(2\pi F n T_s) = \cos\!\left(2\pi \frac{F}{F_s} n\right)$, so $\omega = 2\pi F / F_s$.
• $\mathcal{X}(\omega)$ is periodic.
• $X_a(F)$ is not periodic (for energy signals).

Fact. If $x[n] = x_a(nT_s)$ where $x_a(t) \overset{\text{CTFT}}{\longleftrightarrow} X_a(F)$, then

$$\mathcal{X}(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi) - k}{T_s}\right).$$

So there is a direct relationship between the DTFT of the samples of an analog signal and the FT of that analog signal: $\mathcal{X}(\omega)$ is the sum of shifted and scaled replicates of the (CT)FT of the analog signal.
Each digital frequency $\omega$ has contributions from many analog frequencies: $F \in \left\{F_s \frac{\omega}{2\pi},\; F_s \frac{\omega}{2\pi} \pm F_s,\; F_s \frac{\omega}{2\pi} \pm 2F_s,\; \ldots\right\}$.
Each analog frequency $F$ appears at the digital frequencies $\omega = 2\pi F/F_s \pm 2\pi k$.
• The argument of $X_a(\cdot)$ is logical considering the $\omega = 2\pi F/F_s$ relation.
• The $1/T_s$ factor out front is natural because $\mathcal{X}(\omega)$ has the same units as $x[n]$, whereas $X_a(F)$ has $x[n]$'s units times time units.
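A quick numerical check of the replication formula (a Python/NumPy sketch): a Gaussian is convenient because both it and its transform decay so fast that the infinite sums can be truncated with negligible error. The sample spacing $T_s = 0.7$ is an arbitrary choice:

```python
import numpy as np

# Gaussian test signal: x_a(t) = exp(-pi t^2) has CTFT X_a(F) = exp(-pi F^2).
Ts = 0.7                                        # sample spacing (assumed value)
xa = lambda t: np.exp(-np.pi * t ** 2)
Xa = lambda F: np.exp(-np.pi * F ** 2)

n = np.arange(-50, 51)                          # Gaussian tails are negligible here
x = xa(n * Ts)                                  # samples x[n] = x_a(n Ts)

ks = np.arange(-20, 21)                         # replicate indices k
w_test = np.array([0.0, 1.0, 2.0, np.pi])
X_dtft = np.array([np.sum(x * np.exp(-1j * w * n)) for w in w_test])
X_rep = np.array([np.sum(Xa((w / (2 * np.pi) - ks) / Ts)) / Ts
                  for w in w_test])             # (1/Ts) sum of shifted X_a
```

Note that this particular $T_s$ is too coarse for a Gaussian to be "practically bandlimited," so the agreement here is between the DTFT and the full replicated sum (aliasing included), which is exactly what the formula asserts.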

Example. Consider the CT signal $x_a(t) = 100\,\mathrm{sinc}^2(100t)$, which has the following spectrum.

[Sketch: the triangular spectrum $X_a(F)$, of height 1, supported on $-100 \le F \le 100$ Hz.]

If $x[n] = x_a(nT_s)$ where $1/T_s = F_s = 400$ Hz, then the spectrum of $x[n]$ is the following, where $\omega_c = 2\pi \cdot 100/400 = \pi/2$.

[Sketch: $\mathcal{X}(\omega)$, of height 400, with non-overlapping replicates of the triangular spectrum centered at multiples of $2\pi$, each of half-width $\pi/2$.]

Of course, we really only need to show $-\pi \le \omega \le \pi$, because $\mathcal{X}(\omega)$ is $2\pi$-periodic.
On the other hand, if $F_s = 150$ Hz, then $2\pi \cdot 100/150 = \frac{4}{3}\pi$ and there will be aliasing as follows.

[Sketch: $\mathcal{X}(\omega)$, of height 150, with overlapping replicates centered at multiples of $2\pi$; the edges at $\pm\frac{4}{3}\pi$ fold back into $[-\pi, \pi]$.]

Proofs of Spectral Replication

Proof 1. (No Dirac impulses!)

$$x[n] = x_a(nT_s) \qquad \text{(sampling)}$$
$$= \int_{-\infty}^{\infty} X_a(F)\, e^{j2\pi F (nT_s)}\, dF \qquad \text{(inverse (CT)FT)}$$
$$= \sum_{k=-\infty}^{\infty} \int_{(k-1/2)/T_s}^{(k+1/2)/T_s} X_a(F)\, e^{j2\pi F T_s n}\, dF \qquad \text{(rewrite the integral)}$$
$$= \sum_{k=-\infty}^{\infty} \int_{-\pi}^{\pi} X_a\!\left(\frac{\omega/(2\pi)+k}{T_s}\right) e^{j2\pi \frac{\omega/(2\pi)+k}{T_s} T_s n}\, \frac{d\omega}{2\pi T_s} \qquad \left(\text{let } \omega = 2\pi F T_s - 2\pi k, \text{ so } F = \frac{\omega/(2\pi)+k}{T_s}\right)$$
$$= \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[\frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi)+k}{T_s}\right)\right] e^{j\omega n}\, d\omega \qquad \text{(simplifying the exponent)}$$

The bracketed expression must be $\mathcal{X}(\omega)$, since the last line is the inverse DTFT formula:
$$x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \mathcal{X}(\omega)\, e^{j\omega n}\, d\omega.$$
Thus, when $x[n] = x_a(nT_s)$ (regardless of whether $x_a(t)$ is bandlimited) we have:

$$\mathcal{X}(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi)+k}{T_s}\right).$$

(Since the sum runs over all integers $k$, this agrees with the $-k$ form stated earlier.)

Proof 2. First define the train of Dirac impulses, aka the comb function: $\mathrm{comb}(t) = \sum_{n=-\infty}^{\infty} \delta(t-n)$.
The continuous-time FT of $\mathrm{comb}(t)$ is $\mathrm{comb}(F) = \sum_{k=-\infty}^{\infty} \delta(F-k)$.
If $x[n] = x_a(nT_s)$, then (using the sifting property and the time-domain multiplication property):

$$\mathcal{X}(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} \int_{-\infty}^{\infty} x_a(t)\, \delta(t - nT_s)\, e^{-j\omega t/T_s}\, dt = \int_{-\infty}^{\infty} x_a(t) \left[\sum_{n=-\infty}^{\infty} \delta(t - nT_s)\right] e^{-j\omega t/T_s}\, dt$$
$$= \int_{-\infty}^{\infty} x_a(t)\, \frac{1}{T_s} \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{t}{T_s} - n\right) e^{-j\omega t/T_s}\, dt = \int_{-\infty}^{\infty} x_a(t)\, \frac{1}{T_s}\, \mathrm{comb}\!\left(\frac{t}{T_s}\right) e^{-j2\pi \frac{\omega}{2\pi T_s} t}\, dt$$
$$= \Big[X_a(F) * \mathrm{comb}(T_s F)\Big]_{F=\omega/(2\pi T_s)} = \left[X_a(F) * \frac{1}{T_s} \sum_{k=-\infty}^{\infty} \delta(F - k/T_s)\right]_{F=\omega/(2\pi T_s)}$$
$$= \frac{1}{T_s} \left[\sum_{k=-\infty}^{\infty} X_a(F - k/T_s)\right]_{F=\omega/(2\pi T_s)} = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi) - k}{T_s}\right).$$


Proof 3. Engineer's fact: the impulse train is periodic, so it has a (generalized) Fourier series representation:

$$\sum_{k=-\infty}^{\infty} \delta(F - k) = \sum_{n=-\infty}^{\infty} e^{j2\pi F n}.$$

(This can be shown more rigorously using limits.) Thus we can relate $\mathcal{X}(\omega)$ and $X_a(F)$ as follows:

$$\mathcal{X}(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x_a(nT_s)\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} \left[\int_{-\infty}^{\infty} X_a(F)\, e^{j2\pi F (nT_s)}\, dF\right] e^{-j\omega n}$$
$$= \int X_a(F) \left[\sum_{n=-\infty}^{\infty} e^{j(2\pi F T_s - \omega)n}\right] dF = \int X_a(F) \left[\sum_{k=-\infty}^{\infty} \delta(F T_s - \omega/(2\pi) - k)\right] dF$$
$$= \int X_a(F) \left[\frac{1}{T_s} \sum_{k=-\infty}^{\infty} \delta\!\left(F - \frac{\omega/(2\pi)+k}{T_s}\right)\right] dF = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi)+k}{T_s}\right).$$

Note $\delta(at) = \frac{1}{|a|}\,\delta(t)$ for $a \ne 0$.

A mathematician would be uneasy with all of these "proofs." Exchanging infinite sums and integrals really should be done with care, by making appropriate assumptions about the functions (signals) considered. Practically, though, the conclusions are fine, since real-world signals generally have finite energy and are continuous, which are the types of regularity conditions usually needed.

Example. Consider the signal (for $t_0 > 0$)
$$x_a(t) = \frac{1}{\pi}\, \frac{1}{1 + (t/t_0)^2}.$$
Then from a FT table:
$$X_a(F) = t_0\, e^{-2\pi |F| t_0}.$$

[Figure: the CT "Cauchy" signal $x_a(t)$ for $t_0 = 1$ (left) and its Fourier transform $X_a(F)$ (right).]

Bandlimited? No. Suppose we sample it anyway. Does the DTFT of the sampled signal still look similar to $X_a(F)$ (but replicated)?
Since $X_a(F)$ is real and symmetric, so is $\mathcal{X}(\omega)$. Since the math is messy, we apply the spectral replication formula focusing on $\omega \in [0, \pi]$, with $\alpha = t_0/T_s$:

$$\mathcal{X}(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\!\left(\frac{\omega/(2\pi) - k}{T_s}\right) = \frac{t_0}{T_s} \sum_{k=-\infty}^{\infty} e^{-\alpha|\omega - 2\pi k|}$$
$$= \alpha \left[e^{-\alpha|\omega|} + \sum_{k=1}^{\infty} e^{-\alpha(\omega + 2\pi k)} + \sum_{k=1}^{\infty} e^{-\alpha(2\pi k - \omega)}\right] \qquad (\text{for } \omega \in [0, \pi])$$
$$= \alpha \left[e^{-\alpha|\omega|} + e^{-\alpha\omega}\, \frac{e^{-2\pi\alpha}}{1 - e^{-2\pi\alpha}} + e^{\alpha\omega}\, \frac{e^{-2\pi\alpha}}{1 - e^{-2\pi\alpha}}\right]$$
$$= \alpha \left[e^{-\alpha|\omega|} + \frac{e^{-\alpha(2\pi - \omega)} + e^{-\alpha(2\pi + \omega)}}{1 - e^{-2\pi\alpha}}\right].$$

Thus, for $\omega \in [0, \pi]$, this is the relationship between the spectrum of the sampled signal and the spectrum of the original signal.
Ideally we would have $\mathcal{X}(\omega) = \frac{1}{T_s}\, X_a\!\left(\frac{\omega}{2\pi T_s}\right) = \alpha\, e^{-\alpha|\omega|}$ for $\omega \in [0, \pi]$. The second term above is due to aliasing. Note that as $T_s \to 0$, $\alpha \to \infty$ and the second term goes to 0.

The following figure shows $x_a(t)$, its samples $x[n]$, and the DTFT $\mathcal{X}(\omega)$ for two sampling rates. For $T_s = 1$ there is significant aliasing, whereas for $T_s = 0.5$ the aliasing is smaller.

[Figure: the Cauchy signal and its samples (left) and the DTFT of the samples (right), for $T_s = 1$ (top) and $T_s = 0.5$ (bottom).]

Note that for smaller sample spacing $T_s$, the replicates of $X_a(F)$ become more compressed relative to the $2\pi$-periodic digital frequency axis, so there is less aliasing overlap.
Can we recover $x_a(t)$ from $x[n]$, i.e., $X_a(F)$ from $\mathcal{X}(\omega)$, in this case? Well, yes, if we knew that the signal is Cauchy but just did not know $t_0$; we could figure out $t_0$ from the samples. But in general there are multiple analog spectra that yield the same DTFT.
[Sketch: rectangular and trapezoidal analog spectra, both yielding $\mathcal{X}(\omega) = 1$.]

Bandlimited Analog Signals
(This answers the question: when can we recover $X_a(F)$ from $\mathcal{X}(\omega)$, and hence $x_a(t)$ from $x[n]$?)
Suppose there is a maximum frequency $F_{\max} > 0$ for which $X_a(F) = 0$ for $|F| \ge F_{\max}$, as illustrated below.

[Sketch: $X_a(F)$ of height 1 supported on $[-F_{\max}, F_{\max}]$; and $\mathcal{X}(\omega)$, with non-overlapping replicates of height $1/T_s$ centered at multiples of $2\pi$, each of half-width $\omega_{\max}$.]

where $\omega_{\max} = 2\pi F_{\max} T_s = 2\pi F_{\max}/F_s$.
Clearly there will be no overlap iff $\omega_{\max} < 2\pi - \omega_{\max}$, iff $\omega_{\max} < \pi$, iff $2\pi F_{\max}/F_s < \pi$, iff $F_s > 2F_{\max}$.
Result II.

If the sampling frequency $F_s$ exceeds twice the highest analog signal frequency $F_{\max}$, then there is no overlap of the shifted, scaled replicates of $X_a(F)$ in $\mathcal{X}(\omega)$.

$2F_{\max}$ is called the Nyquist sampling rate.

Recovering the spectrum
When there is no spectral overlap (no aliasing), we can recover $X_a(F)$ from the central (or any) replicate in the DTFT $\mathcal{X}(\omega)$.
Let $\omega_0$ be any frequency between $\omega_{\max}$ and $\pi$. Equivalently, let $F_0 = \frac{\omega_0}{2\pi} F_s$ be any frequency between $F_{\max}$ and $F_s/2$. Then the central replicate of the spectrum is

$$\mathrm{rect}\!\left(\frac{\omega}{2\omega_0}\right) \mathcal{X}(\omega) = \frac{1}{T_s}\, X_a\!\left(\frac{\omega}{2\pi T_s}\right),$$

or equivalently (still in the bandlimited case):

$$X_a(F) = T_s\, \mathrm{rect}\!\left(\frac{F}{2F_0}\right) \mathcal{X}(2\pi F T_s), \quad 0 < F_{\max} \le F_0 \le F_s/2,$$
$$= \begin{cases} T_s\, \mathcal{X}(2\pi F T_s), & |F| \le F_0, \\ 0, & \text{otherwise}. \end{cases}$$

So the $T_s$-scaled, rectangular-windowed DTFT gives us back the original signal spectrum.

Result III. How do we recover the original analog signal $x_a(t)$ from its samples $x[n] = x_a(nT_s)$? Convert the central replicate back to the time domain:

$$x_a(t) = \int_{-\infty}^{\infty} X_a(F)\, e^{j2\pi F t}\, dF = \int_{-\infty}^{\infty} \underbrace{T_s\, \mathrm{rect}\!\left(\frac{F}{2F_0}\right) \mathcal{X}(2\pi F T_s)}_{\text{central replicate}}\, e^{j2\pi F t}\, dF$$
$$= \int T_s\, \mathrm{rect}\!\left(\frac{F}{2F_0}\right) \underbrace{\left[\sum_{n=-\infty}^{\infty} x[n]\, e^{-j(2\pi F T_s)n}\right]}_{\text{DTFT}} e^{j2\pi F t}\, dF$$
$$= \sum_{n=-\infty}^{\infty} x[n]\, \underbrace{T_s \int \mathrm{rect}\!\left(\frac{F}{2F_0}\right) e^{j2\pi F (t - nT_s)}\, dF}_{\text{FT of rect}} = \sum_{n=-\infty}^{\infty} x[n]\, (2F_0 T_s)\, \mathrm{sinc}(2F_0 (t - nT_s)).$$

This is the general sinc interpolation formula for bandlimited signal recovery.
The usual case is $\omega_0 = \pi$ (the biggest rect possible), in which case $F_0 = \frac{\omega_0}{2\pi T_s} = \frac{1}{2T_s}$.
In this case, the reconstruction formula simplifies to the following.

$$x_a(t) = \sum_{n=-\infty}^{\infty} x[n]\, \mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right), \quad \text{where } \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.$$
This is called sinc interpolation.
The sinc interpolation formula works perfectly if $x_a(t)$ is bandlimited to $F_s/2$, where $F_s = 1/T_s$, but it is impractical because of the infinite summation. Also, real signals are never exactly bandlimited, since bandlimited signals are eternal in time. However, after passing through an anti-aliasing filter, most real signals can be made almost bandlimited. There are practical interpolation methods, e.g., based on splines, that work quite well with a finite summation.
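A sketch of sinc interpolation in Python/NumPy (the test signal $x_a(t) = \mathrm{sinc}^2(t)$, bandlimited to $|F| \le 1$, and the rate $F_s = 4$ are arbitrary choices; the sum is truncated, as any practical implementation must be):

```python
import numpy as np

Ts = 0.25                                   # sampling interval: Fs = 4 > 2 Fmax
xa = lambda t: np.sinc(t) ** 2              # bandlimited to |F| <= 1

n = np.arange(-400, 401)                    # truncate the infinite sum
x = xa(n * Ts)                              # samples x[n] = x_a(n Ts)

def sinc_interp(t):
    """x_rec(t) = sum_n x[n] sinc((t - n Ts)/Ts); np.sinc(u) = sin(pi u)/(pi u)."""
    return np.sum(x * np.sinc((t - n * Ts) / Ts))

# The interpolant passes through every sample exactly (sinc(m - n) = delta),
err_sample = abs(sinc_interp(3 * Ts) - xa(3 * Ts))
# and recovers the analog signal between samples, up to truncation error.
err_mid = max(abs(sinc_interp(t) - xa(t)) for t in [0.1, 0.37, 1.234])
```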

Example. Sinc recovery from non-bandlimited Cauchy signal.

[Figure: samples of the Cauchy signal $x[n] = x_a(nT)$ for $t_0 = 1$ with $T = 1$ and $T = 0.5$; the sinc interpolation $x_{\text{sinc}}(t)$; $x_a(t)$ overlaid with $x_{\text{sinc}}(t)$; and the error $x_{\text{sinc}}(t) - x_a(t)$ for $T = 1$, $0.5$, and $0.25$.]

The error signals $x_{\text{sinc}}(t) - x_a(t)$ go to zero as $T_s \to 0$.
Why use a non-bandlimited signal here? Real signals are never perfectly bandlimited, even after passing through an anti-alias filter. But they can be "practically" bandlimited, in the sense that the energy above the folding frequency can be very small. The above results show that with suitably high sampling rates, sinc interpolation is "robust" to the small aliasing that results from the signal not being perfectly bandlimited.
Summary: we have answered questions I, II, and III.

4.4 Frequency-domain characteristics of LTI systems
We have seen that signals are characterized by their frequency content. Thus it is natural to design LTI systems (filters) from frequency-domain specifications, rather than in terms of the impulse response.

4.4.1 Response to complex-exponential and sinusoidal signals: the frequency response function
We have seen the following input/output relationships for LTI systems thus far:
$$x[n] \to h[n] \to y[n] = h[n] * x[n]$$
$$X(z) \to H(z) \to Y(z) = H(z)\, X(z)$$
$$x[n] = e^{j\omega_0 n} \to h[n] \to y[n] = \mathcal{H}(\omega_0)\, e^{j\omega_0 n} = |\mathcal{H}(\omega_0)|\, e^{j\omega_0 n}\, e^{j\angle\mathcal{H}(\omega_0)} \quad \text{(eigenfunction)}$$
Now, as a consequence of the convolution property:
$$\mathcal{X}(\omega) \to h[n] \to \mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega).$$
$\mathcal{H}(\omega)$ is the frequency response function of the system.
Before: an LTI system is completely characterized by its impulse response $h[n]$.
Now: an LTI system is completely characterized by its frequency response $\mathcal{H}(\omega)$, if it exists.
Recall that the frequency response is the z-transform evaluated along the unit circle. When does a region of convergence include the unit circle? When the system is BIBO stable. Thus:

The frequency response $\mathcal{H}(\omega)$ always exists for BIBO stable systems.

Note: $h[n] = \mathrm{sinc}(n)$ is the impulse response of an unstable system, but nevertheless $\mathcal{H}(\omega)$ can be considered in a mean-square sense.
Notation:
• $|\mathcal{H}(\omega)|$: magnitude response of the system
• $\Theta(\omega) = \angle\mathcal{H}(\omega)$: phase response of the system
The book says: $\angle\mathcal{H}(\omega) \overset{?}{=} \tan^{-1}\!\left(\mathcal{H}_I(\omega)/\mathcal{H}_R(\omega)\right)$.
Be very careful with this expression; it is not rigorous. In MATLAB, use the atan2 or angle function, not the atan function, to find the phase response. Phase is defined over $[0, 2\pi]$ or $[-\pi, \pi]$, whereas atan only returns values in $[-\pi/2, \pi/2]$.
Example. Consider $\mathcal{H}(\omega) = -3$. What is $\angle\mathcal{H}(\omega)$?
It is $\pm\pi$. But $\mathcal{H}_R(\omega) = -3$ and $\mathcal{H}_I(\omega) = 0$, so $\tan^{-1}\!\left(\mathcal{H}_I(\omega)/\mathcal{H}_R(\omega)\right) = \tan^{-1} 0 = 0$, which is incorrect.

Response of a real LTI system to an (eternal) sinusoidal signal

If $h[n]$ is real, then $\mathcal{H}(\omega) = \mathcal{H}^*(-\omega)$, so $|\mathcal{H}(\omega)| = |\mathcal{H}(-\omega)|$ and $\Theta(\omega) = -\Theta(-\omega)$.
So $x[n] = e^{-j\omega_0 n} \to h[n] \to y[n] = \mathcal{H}(-\omega_0)\, e^{-j\omega_0 n} = |\mathcal{H}(-\omega_0)|\, e^{-j\omega_0 n}\, e^{j\angle\mathcal{H}(-\omega_0)} = |\mathcal{H}(\omega_0)|\, e^{-j\omega_0 n}\, e^{-j\angle\mathcal{H}(\omega_0)}$.
Adding the responses to $e^{j\omega_0 n}$ and $e^{-j\omega_0 n}$ shows that:
$$x[n] = \cos(\omega_0 n + \phi) \to h[n] \to y[n] = |\mathcal{H}(\omega_0)| \cos(\omega_0 n + \phi + \angle\mathcal{H}(\omega_0)).$$
This is called the "sine in, sine out" property, and it should be memorized!
Prove in lecture using $\mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega)$ with pictures of the impulsive spectrum, as follows:
$$x[n] = \cos(\omega_0 n + \phi) \;\overset{\text{DTFT}}{\longleftrightarrow}\; \mathcal{X}(\omega) = \frac{1}{2} e^{j\phi}\, 2\pi\,\delta(\omega - \omega_0) + \frac{1}{2} e^{-j\phi}\, 2\pi\,\delta(\omega + \omega_0)$$

1 φ 1 φ (ω) = (ω) (ω) = (ω) e 2π δ(ω ω ) + e− 2π δ(ω + ω ) Y H X H 2 − 0 2 0   1 φ 1 φ = e 2π δ(ω ω ) (ω ) + e− 2π δ(ω + ω ) ( ω ) sampling property 2 − 0 H 0 2 0 H − 0 1 φ 1 φ = e 2π δ(ω ω ) (ω ) + e− 2π δ(ω + ω ) ∗(ω ) real h[n] = Hermitian (ω) 2 − 0 H 0 2 0 H 0 ⇒ H

1 φ ∠ (ω0) 1 φ ∠ (ω0) = e 2π δ(ω ω ) (ω ) e H + e− 2π δ(ω + ω ) (ω ) e− H polar 2 − 0 |H 0 | 2 0 |H 0 |

1 (φ+∠ (ω0)) ω0n 1 (φ+∠ (ω0)) ω0n = y[n] = e H e (ω ) + e− H e− (ω ) ⇒ 2 |H 0 | 2 |H 0 | = (ω ) cos(ω n + φ + ∠ (ω )) . |H 0 | 0 H 0

Filter design example
Example. A simple treble boost filter (cf. Exam 1, W04).

[Pole-zero sketch: a zero at $z = 1$ and a pole at $z = 0$; gain $= 1$.]

$$H(z) = \frac{z - 1}{z} = 1 - z^{-1} \;\Rightarrow\; h[n] = \delta[n] - \delta[n-1] \;\Rightarrow\; y[n] = x[n] - x[n-1].$$
Frequency response:
$$\mathcal{H}(\omega) = 1 - e^{-j\omega} = e^{-j\omega/2}\left[e^{j\omega/2} - e^{-j\omega/2}\right] = e^{-j\omega/2}\, 2j \sin(\omega/2).$$
Magnitude response: $|\mathcal{H}(\omega)| = 2\,|\sin(\omega/2)|$. [Picture]
Phase response: for $\omega \ge 0$: $\angle\mathcal{H}(\omega) = \angle j + \angle e^{-j\omega/2} = \pi/2 - \omega/2$.

[Sketch: $\angle\mathcal{H}(\omega)$ falls linearly from $\pi/2$ at $\omega = 0^+$ to $0$ at $\omega = \pi$, and is odd-symmetric for $\omega < 0$.]

This is an example of a linear-phase filter.

A first attempt at filter design (notch)
Equipped with the concepts developed thus far, we can finally attempt our first filter design.

Goal: eliminate the $F_c = 60$ Hz sinusoidal component from a DT signal sampled at $F_s = 480$ Hz. Find the impulse response and block diagram.

What digital frequency do we want to remove? $\omega_c = 2\pi F_c / F_s = \pi/4$. [Picture of the ideal $\mathcal{H}(\omega)$.] This is called a notch filter.

First design attempt: put zeros on the unit circle at $p = e^{j\pi/4}$ and $p^* = e^{-j\pi/4}$.
Why? Because $\mathcal{H}(\omega) = H(z)\big|_{z = e^{j\omega}}$.

Practical problem? The resulting system is noncausal, so add two poles at the origin.

Analyze:
$$H(z) = \frac{(z - p)(z - p^*)}{z^2} = \frac{z^2 - 2z\cos\omega_c + 1}{z^2} = 1 - 2\cos\omega_c\, z^{-1} + z^{-2} \;\Rightarrow\; h[n] = \{1,\ -2\cos\omega_c,\ 1\}.$$
[FIR block diagram]
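A quick check of this FIR design (Python/NumPy sketch): the zeros on the unit circle really do null $\omega_c = \pi/4$, and the gain is larger at $\omega = \pi$ than at DC, since $\pi$ is further from the zeros:

```python
import numpy as np

wc = np.pi / 4                               # 60 Hz when Fs = 480 Hz
h = np.array([1.0, -2.0 * np.cos(wc), 1.0])  # h[n] = {1, -2 cos wc, 1}

def H(w):
    return np.sum(h * np.exp(-1j * w * np.arange(3)))

gain_notch = abs(H(wc))                      # zero on the unit circle at wc
gain_dc = abs(H(0.0))                        # = 2 - sqrt(2)
gain_pi = abs(H(np.pi))                      # = 2 + sqrt(2)
```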

[Figure: pole-zero plot, magnitude response $|\mathcal{H}(\omega)|$, impulse response $h[n]$, and phase response of the FIR notch filter.]

These plots show the response to eternal 60 Hz sampled sinusoidal signals. But what about a causal sinusoid, i.e., $e^{j\omega_c n}\, u[n]$ (applied after the system is first started)?

4.4.5 Relationship between system function and frequency response

If $h[n]$ is real, then $\mathcal{H}^*(\omega) = \mathcal{H}(-\omega)$, so
$$\mathcal{H}(\omega) = H(z)\big|_{z = e^{j\omega}}, \qquad |\mathcal{H}(\omega)|^2 = \mathcal{H}(\omega)\, \mathcal{H}^*(\omega) = H(z)\, H(z^{-1})\big|_{z = e^{j\omega}}.$$
[diffeq diagram] The main remaining skill is going from a pole-zero plot to $\mathcal{H}(\omega)$ and back.

4.4.6 Computing the frequency response function
Skill: from a pole-zero plot (or $H(z)$, or $h[n]$, etc.) to $\mathcal{H}(\omega)$.
Focus on diffeq systems with rational system functions and real coefficients:

$$H(z) = \frac{\sum_k b_k z^{-k}}{\sum_k a_k z^{-k}} = G\, \frac{\prod_{k=1}^{M} (z - z_k)}{\prod_{k=1}^{N} (z - p_k)}
\;\Rightarrow\; \mathcal{H}(\omega) = H(z)\big|_{z = e^{j\omega}} = G\, \frac{\prod_{k=1}^{M} (e^{j\omega} - z_k)}{\prod_{k=1}^{N} (e^{j\omega} - p_k)}.$$
So $\mathcal{H}(\omega)$ is determined by the gain, poles, and zeros.
See the MATLAB function freqz, usage: H = freqz(b, a, w), where w is a vector of $\omega$ values of interest, usually created using linspace.
For sketching the magnitude response:
$$|\mathcal{H}(\omega)| = |G|\, \frac{\prod_{k=1}^{M} |e^{j\omega} - z_k|}{\prod_{k=1}^{N} |e^{j\omega} - p_k|},$$
a product of contributions from each pole and each zero. (On a log scale these contributions would add.)
Geometric interpretation: closer to zeros, $|\mathcal{H}(\omega)|$ decreases; closer to poles, $|\mathcal{H}(\omega)|$ increases.

Example: the 60 Hz notch filter above. Is $|\mathcal{H}(0)|$ or $|\mathcal{H}(\pi)|$ bigger? $|\mathcal{H}(\pi)|$, since $\omega = \pi$ is further from the zeros.

Phase response:
$$\angle\mathcal{H}(\omega) = \angle G + \sum_{k=1}^{M} \angle(e^{j\omega} - z_k) - \sum_{k=1}^{N} \angle(e^{j\omega} - p_k);$$
phases add (zeros) or subtract (poles), since the phase of a product is the sum of the phases.

Example: how can we improve our notch filter for 60 Hz rejection? Move the poles from the origin to near the zeros, i.e., a pole-zero diagram with poles at $z = r\, e^{\pm j\omega_c}$. For $r = 0.9$:

$$H(z) = \frac{(z - e^{j\omega_c})(z - e^{-j\omega_c})}{(z - r\, e^{j\omega_c})(z - r\, e^{-j\omega_c})} = \frac{z^2 - 2z\cos\omega_c + 1}{z^2 - 2rz\cos\omega_c + r^2}.$$

[Figure: pole-zero plot, magnitude response, impulse response, and phase response of the IIR notch filter with $r = 0.9$.]

Why is $|\mathcal{H}(\omega)| > 1$ for $\omega \approx \pi$? Those frequencies are slightly closer to the poles than to the zeros.
Is the filter FIR or IIR now? IIR, so we need to look at the transient response.
Practical flaw: what if the disturbance is not exactly 60 Hz? Then we need a band-stop filter. How do we design one?

More phase examples

[Figure: a gallery of pole-zero plots and their corresponding phase responses.]

4.4.2 Steady-state and transient response for sinusoidal inputs

The relation $x[n] = e^{j\omega_0 n} \to h[n] \to y[n] = \mathcal{H}(\omega_0)\, e^{j\omega_0 n}$ only holds for eternal complex exponential signals. So the output $\mathcal{H}(\omega_0)\, e^{j\omega_0 n}$ is just the steady-state response of the system.
In practice (cf. 60 Hz rejection) there is also an initial transient response when a causal sinusoidal signal is applied to the system.

Example. Consider the simple first-order (stable, causal) diffeq system:
$$y[n] = p\, y[n-1] + x[n], \quad \text{where } |p| < 1.$$
Find the response to a causal complex-exponential signal: $x[n] = e^{j\omega_0 n}\, u[n] = q^n\, u[n]$, where $q = e^{j\omega_0}$.

$$Y(z) = H(z)\, X(z) = \frac{1}{1 - pz^{-1}}\, \frac{1}{1 - qz^{-1}} = \frac{p/(p-q)}{1 - pz^{-1}} + \frac{q/(q-p)}{1 - qz^{-1}}.$$
Could $p = q$? No, since $q$ is on the unit circle but $p$ is inside it.

$$y[n] = \underbrace{\frac{p}{p-q}\, p^n\, u[n]}_{\text{transient}} + \underbrace{\frac{q}{q-p}\, q^n\, u[n]}_{\text{steady-state}} = \frac{p}{p-q}\, p^n\, u[n] + \mathcal{H}(\omega_0)\, e^{j\omega_0 n}\, u[n].$$

The first term is transient because (the system is causal and) the pole is within the unit circle, so the natural response goes to 0 as $n \to \infty$.
This property holds more generally: for any stable LTI system, the response to a causal sinusoidal signal has a transient response (that decays) and a steady-state response whose amplitude and phase are determined by the "usual" eigenfunction properties.
What determines the duration of the transient response? The proximity of the poles to the unit circle.

Example. What about our 60 Hz notch filter? The first design is FIR (poles only at $z = 0$), so the transient response duration is finite.

$$Y(z) = H(z)\, X(z) = \frac{H(z)}{1 - qz^{-1}} = \frac{H(z) - H(q)}{1 - qz^{-1}} + \frac{H(q)}{1 - qz^{-1}}.$$
The first term has a root at $z = q$ in both numerator and denominator that cancel as follows:

$$\frac{H(z) - H(q)}{1 - qz^{-1}} = \frac{(1 - 2\cos\omega_c\, z^{-1} + z^{-2}) - (1 - 2\cos\omega_c\, q^{-1} + q^{-2})}{1 - qz^{-1}} = \frac{-(z^{-1} - q^{-1})\, 2\cos\omega_c + (z^{-2} - q^{-2})}{q\,(q^{-1} - z^{-1})} = \frac{2\cos\omega_c - q^{-1}}{q} - \frac{1}{q}\, z^{-1},$$
where $q = e^{j\pi/4}$. So by this PFE we see:

$$Y(z) = \frac{2\cos\omega_c - q^{-1}}{q} - \frac{1}{q}\, z^{-1} + \frac{H(q)}{1 - qz^{-1}}$$
$$\Rightarrow\; y[n] = \underbrace{\frac{2\cos\omega_c - q^{-1}}{q}\, \delta[n] - \frac{1}{q}\, \delta[n-1]}_{\text{transient}} + \underbrace{\mathcal{H}(\omega_c)\, e^{j\omega_c n}\, u[n]}_{\text{steady-state}}.$$

Our second design is IIR. The closer the poles are to the unit circle, the closer we get to the ideal notch filter magnitude response, but the longer the transient response lasts.
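The transient plus steady-state decomposition for the first-order example above can be verified against a direct simulation of the recursion (Python/NumPy sketch; $p = 0.8$ and $\omega_0 = 0.5$ are arbitrary values):

```python
import numpy as np

# First-order example: y[n] = p y[n-1] + x[n], with x[n] = q^n u[n].
p = 0.8
w0 = 0.5
q = np.exp(1j * w0)

N = 60
n = np.arange(N)
x = q ** n                                  # causal complex exponential

y = np.zeros(N, dtype=complex)
for k in range(N):
    y[k] = p * (y[k - 1] if k > 0 else 0.0) + x[k]

# PFE prediction: transient (p/(p-q)) p^n plus steady state H(w0) q^n,
# where H(w0) = 1/(1 - p e^{-j w0}) = q/(q - p).
H0 = q / (q - p)
y_formula = (p / (p - q)) * p ** n + H0 * q ** n
max_err = np.max(np.abs(y - y_formula))
```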

4.4.3 Steady-state response to periodic inputs
Covered earlier.

4.4.4 Response to aperiodic input signals
$y[n] = h[n] * x[n]$, so $\mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega)$.
If $\mathcal{X}(\omega) = 0$, then what is $\mathcal{Y}(\omega)$? $\mathcal{Y}(\omega) = 0$.

The output of an LTI system will only contain frequency components present in the input signal.

Frequency components present in the input signal can be amplified ($|\mathcal{H}(\omega)| > 1$) or attenuated ($|\mathcal{H}(\omega)| < 1$).
Energy density spectrum, output vs. input:

$$S_{yy}(\omega) = |\mathcal{Y}(\omega)|^2 = |\mathcal{H}(\omega)\, \mathcal{X}(\omega)|^2 = |\mathcal{H}(\omega)|^2\, S_{xx}(\omega).$$

4.4.7 Input-output correlation functions and spectra (skim)
4.4.8 Correlation functions and power spectra for random input signals (skim)

Summary of LTI in frequency domain
There are several cases to consider depending on the type of input signal.

Eternal periodic input. Use the DTFS:
$$x[n] = \sum_{k=0}^{N-1} c_k\, e^{j\frac{2\pi}{N} k n} \;\to\; \mathcal{H}(\omega) \;\to\; y[n] = \sum_{k=0}^{N-1} c_k\, \mathcal{H}\!\left(\frac{2\pi}{N} k\right) e^{j\frac{2\pi}{N} k n}$$

Eternal sum of sinusoids, not necessarily periodic. Use sine-in / sine-out:
$$x[n] = \sum_k A_k \cos(\omega_k n + \phi_k) \;\to\; \mathcal{H}(\omega) \;\to\; y[n] = \sum_k A_k\, |\mathcal{H}(\omega_k)| \cos(\omega_k n + \phi_k + \angle\mathcal{H}(\omega_k))$$

Causal "periodic" signal:
$$x[n] = e^{j\omega_0 n}\, u[n] \;\to\; \mathcal{H}(\omega) \;\to\; y[n] = y_{\text{transient}}[n] + \underbrace{\mathcal{H}(\omega_0)\, e^{j\omega_0 n}\, u[n]}_{\text{steady-state}}$$

General aperiodic input signal:
$$x[n] \;\to\; \mathcal{H}(\omega) \;\to\; y[n] \overset{\text{DTFT}}{\longleftrightarrow} \mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega).$$

4.5 LTI systems as frequency-selective filters

Filter design: choosing the structure of an LTI system (e.g., recursive) and the system parameters ($\{a_k\}$ and $\{b_k\}$) to yield a desired frequency response $\mathcal{H}(\omega)$.
Since $\mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega)$, the LTI system can boost (or leave unchanged) some frequency components while attenuating others.

4.5.1 Ideal filter characteristics
Types: lowpass, highpass, bandpass, bandstop, notch, resonator, all-pass. [Pictures of $|\mathcal{H}(\omega)|$ in the book.]
An ideal filter has:
• unity gain in the passband, i.e., $|\mathcal{H}(\omega)| = 1$ there;
• zero gain in the stopband.
The magnitude response is only half of the specification of the frequency response. The other half is the phase response.

An ideal filter has a linear phase response in the passband.

The phase response in the stopband is irrelevant, since those frequency components will be eliminated.
Consider the following linear-phase bandpass response:
$$\mathcal{H}(\omega) = \begin{cases} C\, e^{-j\omega n_0}, & \omega_1 < |\omega| < \omega_2, \\ 0, & \text{otherwise}. \end{cases}$$
Suppose the spectrum $\mathcal{X}(\omega)$ of the input signal $x[n]$ lies entirely within the passband. [Picture] Then the output signal spectrum is

$$\mathcal{Y}(\omega) = \mathcal{H}(\omega)\, \mathcal{X}(\omega) = C\, e^{-j\omega n_0}\, \mathcal{X}(\omega).$$
What is $y[n]$? By the time-shift property, $y[n] = C\, x[n - n_0]$. For a linear-phase filter, the output is simply a scaled and shifted version of the input (which was in the passband). A (small) shift is usually considered a tolerable "distortion" of the signal.
If the phase were nonlinear, then the output signal would be a distorted version of the input, even if the input is entirely contained in the passband.
The group delay¹
$$\tau_g(\omega) = -\frac{d}{d\omega}\, \Theta(\omega)$$
is the same for all frequency components when the filter has linear phase $\Theta(\omega) = -\omega n_0$.
MATLAB has a command grpdelay for diffeq systems.
These ideal frequency responses are physically unrealizable, since they are noncausal, non-rational, and unstable (the impulse response is not absolutely summable). We want to approximate these ideal responses with rational, stable systems (and usually causal ones too).
Basic guiding principles of filter design:
• All poles inside the unit circle, so the causal form is stable. (Zeros can go anywhere.)
• All complex poles and zeros in complex-conjugate pairs, so the filter coefficients are real.
• #poles ≥ #zeros, so the filter is causal. (Needed in real-time applications; not necessary for post-processing signals in MATLAB.)

¹The reason for this term is that it specifies the delay experienced by a narrow-band "group" of sinusoidal components that have frequencies within a narrow frequency interval about $\omega$. The width of this interval is limited to that over which the group delay is approximately constant.

Relationship between a phase shift and a time shift for sinusoidal signals
Suppose $x[n] = \cos(\omega_0 n)$ and $y[n] = x[n - m_0]$ is a time-shifted version of $x[n]$. Then
$$y[n] = \cos(\omega_0 (n - m_0)) = \cos(\omega_0 n + \theta), \quad \text{where } \theta = -m_0\, \omega_0.$$
Note that the phase shift depends on both the time shift $m_0$ and the frequency $\omega_0$.

[Figure: $x_1[n] = \cos(n\, 2\pi/16)$ and $x_2[n] = \cos(n\, 2\pi/8)$ (top), and their 4-sample delays $x_1[n-4] = \cos(n\, 2\pi/16 - \pi/2)$ and $x_2[n-4] = \cos(n\, 2\pi/8 - \pi)$ (bottom).]

To time-shift a sinusoidal signal by m0 samples, the required phase shift is proportional to the frequency.

A more complicated signal will be composed of a multitude of frequency components. To time-shift each component by a certain amount m0, the phase-shift for each component should be proportional to the frequency of that component, hence linear phase:

$$\Theta(\omega) = -\omega m_0.$$

1 π ωn Since x[n] = 2π π (ω) e dω, − X R π π 1 ω(n m0) 1 (ωn+Θ(ω)) y[n] = x[n m0] = (ω) e − dω = (ω) e dω − 2π π X 2π π X Z− Z− where Θ(ω) = ωm . − 0 c J. Fessler, May 27, 2004, 13:11 (student version) 4.39

4.5.2 Lowpass, highpass, bandpass filters
For a lowpass filter, the poles should be near low frequencies ($\omega = 0$) and the zeros near high frequencies ($\omega = \pi$).
Example. A simple lowpass filter: a pole at $z = a \in (0, 1)$ and a zero at $z = 0$:
$$H_1(z) = G\, \frac{z}{z - a} = \frac{G}{1 - az^{-1}}, \quad \text{so with } G = 1 - a \text{ for unity gain at DC:} \quad \mathcal{H}_1(\omega) = \frac{1 - a}{1 - a\, e^{-j\omega}}.$$

[Figure: pole-zero plots, magnitude responses $|\mathcal{H}(\omega)|$, and phase responses $\Theta(\omega)$ for the lowpass filters $\mathcal{H}_1(\omega)$ and $\mathcal{H}_2(\omega)$.]

Moving the zero to $z = -1$ further cuts out the high frequencies:
$$H_2(z) = G\, \frac{z + 1}{z - a} = G\, \frac{1 + z^{-1}}{1 - az^{-1}}, \quad \text{so with } G = (1 - a)/2 \text{ for unity gain at DC } (z = 1): \quad \mathcal{H}_2(\omega) = \frac{1 - a}{2}\, \frac{1 + e^{-j\omega}}{1 - a\, e^{-j\omega}}.$$

To make a highpass filter, simply reflect the poles and zeros around the imaginary axis.

[Figure: pole-zero plots, magnitude responses, and phase responses for the corresponding highpass filters, obtained by reflecting the poles and zeros.]

ω (ω π) Explanation: H (z) = H ( z) so since z = e , we have z = e − , so (ω) = (ω π) . hp lp − − Hhp Hlp − πn n So by the frequency-shift property of the DTFT: hhp[n] = e hlp[n] = ( 1) hlp[n] . − So to make a highpass filter from a lowpass filter design, just reverse the sign of every other sample of the impulse response. We will do better lowpass filter designs later, from which we can easily get highpass filters. What about bandpass filters? 4.40 c J. Fessler, May 27, 2004, 13:11 (student version)

4.5.3 Digital resonators (“opposite” of notch filter) Pair of poles near unit circle at the desired pass frequency or resonant frequency. Example application: detecting specific frequency components in touch-tone signals using a filter bank. How many zeros can we choose? Two. If we use more, then noncausal. Putting two zeros at origin and poles at z = p = r exp( ω ), we have  0 z2 1 1 H(z) = G = G = G (z p)(z p ) (1 pz 1)(1 p z 1) 1 (2r cos ω )z 1 + r2z 2 − − ∗ − − − ∗ − − 0 − − 1 (ω) = G H (1 r eω0 e ω)(1 r e ω0 e ω) − − − − − 1 2 − √ so for unity gain at ω = ω0, 1 = G (1 r)(1 re 2ω0 ) so G = (1 r) 1 2r cos ω0 + r . − − − −

So the magnitude response is

|H(ω)| = G / (U1(ω) U2(ω)), where

U1(ω) = |1 − r e^{jω0} e^{-jω}| = √(1 − 2 r cos(ω0 − ω) + r²)
U2(ω) = |1 − r e^{-jω0} e^{-jω}| = √(1 − 2 r cos(ω0 + ω) + r²).

U1(ω) is the distance from e^{jω} to the top pole; U2(ω) is the distance to the bottom pole (picture).

The minimum of U1(ω) is at ω = ω0. Where is the maximum of |H(ω)|? Using a Taylor expansion around r = 1, we find

ωr = cos^{-1}( ((1 + r²)/(2r)) cos ω0 ) ≈ ω0 − ((tan ω0)/2)(r − 1)²  for r ≈ 1.

For r ≈ 1, the pole is close to the unit circle, and ωr ≈ ω0. Otherwise the peak is not exactly at the resonant frequency.
Why not? Because of the effect of the other pole.
For ω0 ∈ (0, π/2), ωr < ω0, so the peak is a little lower than the specified resonant frequency.
(Since (1 − r)² > 0, we have 1 − 2r + r² > 0, so (1 + r²)/(2r) > 1.)

[Figure: pole-zero plots, magnitude responses |H(ω)|, and phase responses Θ(ω) for resonators with r = 0.8 and r = 0.95.]
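The gain normalization and the peak-location formula above can be checked numerically. A minimal sketch (not from the original notes; r = 0.95 and ω0 = π/4 are example values):

```python
import numpy as np

r, w0 = 0.95, np.pi / 4      # example pole radius and resonant frequency
G = (1 - r) * np.sqrt(1 - 2 * r * np.cos(2 * w0) + r**2)  # unity-gain normalization

def H(w):
    return G / ((1 - r * np.exp(1j * (w0 - w))) * (1 - r * np.exp(-1j * (w0 + w))))

print(abs(H(w0)))            # 1: unity gain at the resonant frequency

# peak location: slightly below omega0 for omega0 in (0, pi/2)
wr = np.arccos((1 + r**2) / (2 * r) * np.cos(w0))
w = np.linspace(0, np.pi, 200001)
print(wr < w0, abs(w[np.argmax(np.abs(H(w)))] - wr))  # True, and a tiny grid-size gap
```

The grid search confirms that the actual peak of |H(ω)| sits at ωr, a little below ω0, as the derivation predicts.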

What happens as the poles approach the unit circle?

4.5.4 Notch filters

Already discussed: 60 Hz rejection.

4.5.5 Comb filters

Multiple notches at periodic locations across the frequency band, e.g., to reject 60, 120, 180, ... Hz.

4.5.6 All-pass filters

|H(ω)| = 1, so all frequencies are passed. However, the phase may be affected.
Example: pure delay. H(z) = z^{-k}, so H(ω) = e^{-jωk} and |H(ω)| = 1.
So a delay is an allpass filter, since it passes all frequencies with no change in gain, just a phase change.
Example application: phase equalizers. Cascade with a system that has an undesirable phase response, linearizing the overall phase response:

x[n] → h0[n] → h_allpass[n] → y[n],  with  H(ω) = H0(ω) H_allpass(ω).

Note |H(ω)| = |H0(ω)| is unchanged, but ∠H(ω) = ∠H0(ω) + ∠H_allpass(ω).
The general form for an allpass filter (with real coefficients) uses bk = a_{N−k} (reverse the filter coefficient order):

H(z) = (Σ_{k=0}^{N} a_{N−k} z^{-k}) / (Σ_{k=0}^{N} a_k z^{-k})
     = (a_N + a_{N−1} z^{-1} + ... + a_0 z^{-N}) / (a_0 + a_1 z^{-1} + ... + a_N z^{-N})
     = z^{-N} (Σ_{k=0}^{N} a_k z^{k}) / (Σ_{k=0}^{N} a_k z^{-k}) = z^{-N} A(z^{-1}) / A(z).

Fact. If z = p is a root of the denominator, then z = 1/p is a zero. So the poles and zeros come in reciprocal pairs.
Frequency response:

H(ω) = H(z)|_{z=e^{jω}} = e^{-jωN} A(e^{-jω}) / A(e^{jω}),  so  H*(ω) = e^{jωN} A(e^{jω}) / A(e^{-jω}).

Thus it is allpass since:

|H(ω)|² = H(ω) H*(ω) = [e^{-jωN} A(e^{-jω}) / A(e^{jω})] [e^{jωN} A(e^{jω}) / A(e^{-jω})] = 1.

Alternative proof. Since h[n] is real:

|H(ω)|² = H(z) H(z^{-1})|_{z=e^{jω}} = [z^{-N} A(z^{-1})/A(z)] [z^{N} A(z)/A(z^{-1})]|_{z=e^{jω}} = 1.

Example. A pair of poles and zeros with ω0 = π/4 and r = 4/5, with gain g = r², yields the following phase response.

[Figure: pole-zero plot (reciprocal pole-zero pairs), constant magnitude response |H(ω)|, and phase response Θ(ω) for the allpass example.]

1 ω0 1 ω0 2 1 2 (z e )(z e− ) r 2r cos ω z− + z− The system function is H(z) = r2 − r − r = − 0 , from which the feed-back filter ω0 ω0 1 2 2 (z r e )(z r e− ) 1 2r cos ω0z− + r z− coefficients are seen to be the reversal of−the feed-forw−ard coefficients. − 4.42 c J. Fessler, May 27, 2004, 13:11 (student version)

4.5.7 Digital sinusoidal oscillators

What if we want to generate a sinusoidal signal cos(ω0 n), e.g., for digital speech or music synthesis?
A brute-force implementation would require calculating the cos function for each n, which is expensive.
Can we do it with a few delays, adds, and multiplies?
Solution: create an LTI system with poles on the unit circle. This is called marginally unstable, because the output blows up for certain inputs, but not for all inputs. We do not use it as a filter; we only consider the unit impulse input.

Single-pole complex oscillator

z 1 n ω0n H(z) = z p = 1 pz−1 thus h[n] = p u[n] = e u[n]. − − If the input is a unit impulse signal, then the output is a complex exponential signal! Difference equation: y[n] = p y[n 1] + x[n] − This system is marginally unstable because the output would blow up for certain input signals, but for the input x[n] = δ[n] the output is a nice bounded causal complex exponential signal. Very simple digital “waveform” generator. Im(z) y[n] = eω0n u[n] p = eω0 x[n] = δ[n] Re(z) -1 z PSfrag replacements


Two poles

What if only a real sinusoid is desired? Naive way: add another pole at z = p* = e^{-jω0}.

[Figure: block diagram of two first-order sections (poles at p = e^{jω0} and p* = e^{-jω0}) in parallel, with input x[n] = δ[n] and output y[n] = 2 cos(ω0 n) u[n].]

Two systems in parallel, so add their system functions.

H(z) = 1/(1 − p z^{-1}) + 1/(1 − p* z^{-1}) = (2 − 2 cos ω0 z^{-1}) / (1 − 2 cos ω0 z^{-1} + z^{-2}),

so from earlier z-transform tables, h[n] = 2 cos(ω0 n) u[n].
2 complex delays, 3 complex adds, 2 complex multiplies.

Can we eliminate the complex multiply / add / delay?

Goal: synthesize y[n] = cos(ω0n + φ) u[n]

1 ω0n φ 1 ω0n φ y[n] = e e + e− e− u[n] 2 2   so φ φ 1 e /2 e− /2 cos φ cos(ω φ) z− Y (z) = + = − 0 − = X(z) H(z) 1 eω0 z 1 1 e ω0 z 1 1 2 cos ω z 1 + z 2 − − − − − − 0 − − where 1 1 X(z) = cos φ cos(ω φ) z− , H(z) = − 0 − 1 2 cos ω z 1 + z 2 − 0 − −

x[n] = (cos φ) δ[n] − cos(ω0 − φ) δ[n−1]  →  H(z)  →  y[n] = cos(ω0 n + φ) u[n]

[Figure: direct-form block diagram of H(z): two z^{-1} delays in feedback, with multipliers 2 cos ω0 and −1.]

2 adds, 2 delays, 1 multiply, all real.
cf. the 2nd-order difference equation required to generate real sinusoidal signals.
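The all-real oscillator above can be sketched directly from the difference equation (not from the original notes; ω0, φ, and the length are arbitrary example values):

```python
import numpy as np

w0, phi, N = np.pi / 6, np.pi / 3, 64   # example frequency, phase, length
c = 2 * np.cos(w0)                      # the single real multiplier

# y[n] = c y[n-1] - y[n-2] + x[n], with x[n] = cos(phi) delta[n] - cos(w0 - phi) delta[n-1]
y = np.zeros(N)
y[0] = np.cos(phi)                      # n = 0: only x[0] contributes
y[1] = c * y[0] - np.cos(w0 - phi)      # n = 1: feedback plus x[1] (y[-1] = 0)
for n in range(2, N):
    y[n] = c * y[n - 1] - y[n - 2]      # pure feedback thereafter

err = np.max(np.abs(y - np.cos(w0 * np.arange(N) + phi)))
print(err)                              # ~0: y[n] = cos(w0 n + phi)
```

The identity 2 cos ω0 cos(ω0 n + φ) − cos(ω0 (n−1) + φ) = cos(ω0 (n+1) + φ) is exactly what the feedback loop computes, with one real multiply per sample.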

4.6 Inverse systems and deconvolution

Example application: DSP compensation for the effects of a poor audio microphone.

4.6.1 Invertibility of LTI systems

Definition: A system T is called invertible iff each (possible) output signal is the response to only one input signal. Otherwise T is not invertible.
Example. y[n] = x²[n] is not invertible, since x[n] and −x[n] produce the same response.
Example. y[n] = 2 x[n−7] − 4 is invertible, since x[n] = (1/2) y[n+7] + 2.
If a system T is invertible, then there exists a system T^{-1} such that

x[n] → T → y[n] → T^{-1} → x[n].

Design of T^{-1} is important in many signal processing applications.
Fact: If T is
• LTI with impulse response h[n], and
• invertible,
then T^{-1} is also LTI (and invertible).
An LTI system is completely characterized by its impulse response, so T^{-1} has some impulse response hI[n] under the above conditions. In this case we call T^{-1} the inverse filter. We call the use of T^{-1} deconvolution, since T does convolution.

x[n] → h[n] → y[n] = x[n] * h[n] → hI[n] → y[n] * hI[n] = hI[n] * h[n] * x[n] = x[n].

In particular,

hI[n] * h[n] = δ[n],  so  HI(z) H(z) = 1  and  HI(z) = 1 / H(z).

Example: h[n] = {1, 1/2} (FIR). Find the inverse filter.
H(z) = 1 + 0.5 z^{-1}, so HI(z) = 1/(1 + 0.5 z^{-1}), so hI[n] = (−1/2)^n u[n], which is IIR.
One could check that hI[n] * h[n] = δ[n].
More generally, suppose h[n] = {1, a}. Then H(z) = 1 + a z^{-1} = (z + a)/z, so there is a pole at z = 0 and a zero at z = −a (pole-zero plot).
HI(z) = 1/H(z) = z/(z + a), so there is a zero at z = 0 and a pole at z = −a.
Is the (causal form of the) inverse filter stable? Only if |a| < 1, i.e., only if the zero of the original system function is within the unit circle.
Generally: if H(z) = g B(z)/A(z), then HI(z) = (1/g) A(z)/B(z), so to form the inverse system one just swaps all poles and zeros (and reciprocates the gain).
Then when the two systems are cascaded, one gets pole-zero cancellation: H(z) HI(z) = [g B(z)/A(z)] [(1/g) A(z)/B(z)] = 1.
In practice the pole-zero cancellation is imperfect, and the results can be very noise sensitive!
When is the inverse system causal and stable?
• Only if the zeros of H(z) are within the unit circle!
• N ≤ M, i.e., # poles of H(z) ≤ # zeros of H(z).
Recall that for H(z) to be causal we need N ≥ M, so for a causal system to have a causal stable inverse we need N = M.
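A quick numerical check of the inverse filter for h[n] = {1, 1/2} (a sketch, not from the original notes; the truncation length of the IIR inverse is an arbitrary choice):

```python
import numpy as np

a = 0.5
h = np.array([1.0, a])              # h[n] = {1, 1/2}
n = np.arange(40)
hI = (-a)**n                        # hI[n] = (-1/2)^n u[n], truncated to 40 samples

d = np.convolve(h, hI)              # should be (approximately) delta[n]
print(d[0], np.max(np.abs(d[1:])))  # 1.0, plus only a tiny truncation residual
```

All interior terms cancel exactly, (−a)^k + a(−a)^{k−1} = 0, so the only error is the truncated geometric tail.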

4.6.3 System identification and deconvolution

In general, will the inverse system for an FIR system be FIR or IIR? IIR. Exception? h[n] = δ[n−k].
What if we do not want an IIR inverse system?
Example. In the preceding example, hI[n] = (−1/2)^n u[n].
Let us just try the FIR "approximation" g[n] = {1, −1/2, 1/4}. Look at the frequency and phase response.
Note HI(ω) H(ω) = 1.
G(z) = 1 − (1/2) z^{-1} + (1/4) z^{-2} = (z² − (1/2) z + 1/4) / z², which has zeros at q = 1/4 ± j √3/4 = (1/2) e^{±jπ/3}.
Comparison of the ideal IIR inverse filter vs. the FIR approximation.

[Figure: pole-zero plots and magnitude responses for H(z), HI(z), and H(z)HI(z) (top), and for H(z), G(z), and H(z)G(z) (bottom); |H(ω) HI(ω)| = 1 exactly, while |H(ω) G(ω)| only approximates 1.]

Locations of zeros

For a real system, moving zeros to reciprocal locations leaves the shape of the magnitude response unchanged, up to a gain constant.

But the phase is affected. Brief explanation:

|H(ω)|² = H(z) H(z^{-1}) |_{z=e^{jω}}.

Elaboration: Suppose H1(z) = g1 Π_i (z − z_i) / A(z) and H2(z) = g2 Π_i (z − 1/z_i) / A(z), where both systems are real, so the zeros are real or occur in complex conjugate pairs. Then since

e^{jω} − 1/q = (e^{jω}/q)(q − e^{-jω}),  so  |e^{jω} − 1/q| = (1/|q|) |e^{-jω} − q| = (1/|q|) |e^{jω} − q*|,

we have

|H2(ω)| / |H1(ω)| = (g2/g1) Π_i |e^{jω} − 1/z_i| / |e^{jω} − z_i| = (g2/g1) Π_i (1/|z_i|) |e^{jω} − z_i*| / |e^{jω} − z_i| = (g2/g1) Π_i 1/|z_i|,

because the zeros occur in complex conjugate pairs. So |H2(ω)| and |H1(ω)| differ only by a constant.
Example. Here are two lowpass filters; the magnitude response has the same shape and differs only by a gain factor of two. But note the different phase response.

[Figure: pole-zero plots, magnitude responses |H(ω)|, and phase responses ∠H(ω) for the two lowpass filters H2(z) and H3(z); the magnitudes differ by a factor of two, the phases differ in shape.]
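The constant-ratio property derived above is easy to confirm numerically. A minimal sketch (not from the original notes; the zero location q is an arbitrary example):

```python
import numpy as np

q = 0.5 * np.exp(1j * np.pi / 3)    # example zero inside the unit circle
w = np.linspace(-np.pi, np.pi, 257)
e = np.exp(1j * w)

# H1 has zeros {q, q*}; H2 has the reciprocal zeros {1/q, 1/q*}; same A(z) = 1
H1 = (e - q) * (e - np.conj(q))
H2 = (e - 1 / q) * (e - 1 / np.conj(q))

ratio = np.abs(H2) / np.abs(H1)
print(ratio.min(), ratio.max())     # constant 1/|q|^2 = 4 at every frequency
```

The ratio is flat across all ω, equal to Π 1/|z_i| = 1/|q|² = 4, exactly as the derivation predicts; only the phase responses differ.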

4.6.2 Minimum-phase, maximum-phase, and mixed-phase systems

A system is called minimum phase if it has:
• all poles inside the unit circle,
• all zeros inside the unit circle.
What can we say about the inverse system for a minimum-phase system? It will also be minimum-phase and stable.
• A system is maximum phase if all zeros are outside the unit circle.
• It is mixed phase otherwise.
Example application: invertible filter design, e.g., Dolby, with |H(ω)| given.

Linear phase (preview of 8.2)

We have seen that a zero on the unit circle contributes linear phase, but this is not the only way to produce linear-phase filters.
Example.

[Figure: pole-zero plot with real zeros at r and 1/r.]

H(z) = (z − r)(z − 1/r) / z² = 1 − (r + 1/r) z^{-1} + z^{-2}  ⇒  h[n] = {1, −(r + 1/r), 1}.

1 ω 2ω ω ω 1 ω ω 1 (ω) = 1 (r + ) e− + e− = e− e (r + ) + e− = e− 2 cos ω (r + ) . H − r − r − r     The bracketed expression is real, so this filter has linear phase. Specifically, the phase response is: ω, 2 cos ω > r + 1 ∠ (ω) = − r H π ω, 2 cos ω < r + 1 .  − r

Any pair of reciprocal real zeros contributes linear phase.

What about complex zeros?

[Figure: pole-zero plot with four complex zeros at q, 1/q, q*, 1/q*.]

Consider a complex value q and the system function:

H(z) = (z − q)(z − 1/q)(z − q*)(z − 1/q*) / z⁴ = 1 + b1 z^{-1} + b2 z^{-2} + b1 z^{-3} + z^{-4} = z^{-2} (z² + b1 z + b2 + b1 z^{-1} + z^{-2}),

where b1 = −(q + 1/q + q* + 1/q*) and b2 = 2 + (q + 1/q)(q* + 1/q*) = 2 + |q + 1/q|²  ⇒  h[n] = {1, b1, b2, b1, 1}.

H(ω) = e^{-j2ω} [e^{j2ω} + b1 e^{jω} + b2 + b1 e^{-jω} + e^{-j2ω}] = e^{-j2ω} [b2 + 2 b1 cos ω + 2 cos(2ω)].

Again the bracketed expression is real, so this filter has linear phase.

Each set of 4 complex zeros {q, 1/q, q*, 1/q*} contributes linear phase.
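The linear-phase claim for the symmetric coefficients {1, b1, b2, b1, 1} can be checked numerically. A minimal sketch (not from the original notes; the zero q is an arbitrary example):

```python
import numpy as np

q = 0.6 * np.exp(1j * np.pi / 5)        # arbitrary complex zero, not on the unit circle
s = q + 1 / q
b1 = -(s + np.conj(s)).real             # = -(q + 1/q + q* + 1/q*)
b2 = 2 + abs(s)**2                      # = 2 + |q + 1/q|^2
h = np.array([1.0, b1, b2, b1, 1.0])    # symmetric impulse response

w = np.linspace(-np.pi, np.pi, 257)
k = np.arange(len(h))
H = np.array([np.sum(h * np.exp(-1j * wk * k)) for wk in w])

# e^{j2w} H(w) = b2 + 2 b1 cos(w) + 2 cos(2w) should be purely real
print(np.max(np.abs((np.exp(2j * w) * H).imag)))  # ~0: phase is -2w (mod pi jumps)
```

Removing the e^{-j2ω} factor leaves a purely real function of ω, confirming the linear phase with slope −2 (the center of symmetry of h[n]).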

4.6.4 Homomorphic deconvolution skip

4.7 Summary

• eigenfunctions
• DTFS for periodic DT signals
• DTFT for aperiodic DT signals
• Sampling theorem
• Frequency response of LTI systems
• Pole-zero plots vs. magnitude and phase response
• Filter design preliminaries, especially phase-response considerations