MATH 262/CME 372: Applied Fourier Analysis and Elements of Modern Signal Processing, Winter 2021
Lecture 6 | January 28, 2021
Prof. Emmanuel Candes
Scribe: Carlos A. Sing-Long, Edited by E. Bates

1 Outline

Agenda:
1. Aliasing example
2. Discrete convolutions
3. Fourier series
4. Another look at Shannon's sampling theorem

Last Time: We introduced the Dirac comb and proved the Poisson summation formula, which states that the Fourier transform of a Dirac comb of period T is also a Dirac comb, but with period 2π/T (modulo a multiplicative factor 2π/T). We discussed the problem of sampling an analog signal f(t) by measuring its values at time intervals of length T, and we represented the discretized signal f_d as a superposition of Dirac delta functions. The Poisson summation formula was instrumental in proving the aliasing formula, which states that the Fourier transform \hat f_d of the discretized signal is the 2π/T-periodization of the spectrum \hat f of the analog signal. From this calculation, we saw that when f is bandlimited and T is sufficiently small, no aliasing occurs in the periodization. When this is the case, we can recover the original signal from its samples via Shannon's sampling theorem.

Today: We will first revisit the topic of aliasing by observing it for an actual signal. Then, in pursuit of further studying discrete signals, we shall introduce Fourier series. Understanding these discrete sums as a special case of the Fourier integrals we know well, we will survey the properties Fourier series have in analogy with Fourier transforms. In particular, the Poisson summation formula will be employed once more to show that complex exponentials again form an orthonormal basis of L^2-space. The machinery of this basis will actually provide a second proof of Shannon interpolation.

2 A case of aliasing

To make our study of aliasing concrete, let us consider the very simple signal

    f(t) = \cos\Big(\frac{3\pi}{2} t\Big) = \frac{1}{2}\Big[ e^{i \frac{3\pi}{2} t} + e^{-i \frac{3\pi}{2} t} \Big].

Its Fourier transform is

    \hat f(\omega) = \frac{1}{2}\Big[ \delta\Big(\omega + \frac{3\pi}{2}\Big) + \delta\Big(\omega - \frac{3\pi}{2}\Big) \Big],

meaning f is bandlimited to [-3π/2, 3π/2]. From last lecture we know that the Nyquist rate is then 3/2 (three samples for every two units of time). Suppose we did not know this and instead guessed that f were bandlimited to [-π, π]. We would then believe samples were needed only every 1 unit of time. But this sampling rate is below Nyquist and thus too slow to prevent aliasing. Let us see why: the Fourier transform of the discretization f_d = \sum_{n \in \mathbb{Z}} f(n)\, \delta_n with T = 1 is

    \hat f_d(\omega) = \sum_{k \in \mathbb{Z}} \hat f(\omega - 2\pi k) = \frac{1}{2} \sum_{k \in \mathbb{Z}} \Big[ \delta\Big(\omega + \frac{3\pi}{2} - 2\pi k\Big) + \delta\Big(\omega - \frac{3\pi}{2} - 2\pi k\Big) \Big],        (1)

which is a spike train with spacing π. What we then see in the frequency range [-π, π] is not \hat f, but rather

    \hat f_d(\omega)\, 1\{|\omega| \le \pi\} = \hat g(\omega) = \frac{1}{2}\Big[ \delta\Big(\omega + \frac{\pi}{2}\Big) + \delta\Big(\omega - \frac{\pi}{2}\Big) \Big],

where the first delta comes from the k = -1 copy and the second from the k = 1 copy.

Figure 1: The portion of the Dirac comb \hat f_d shown over [-5π/2, 5π/2], with spikes at ω = -5π/2, -3π/2, -π/2, π/2, 3π/2, 5π/2 contributed by the copies k = -2, 0, -1, 1, 0, 2 in (1), respectively. Part of each of the k = ±1 copies is aliased into the interval [-3π/2, 3π/2], giving artifact delta functions (displayed with solid lines) in the supposed frequency band [-π, π]. Such aliasing produces low-frequency artifacts in signal reconstruction.

This is the Fourier transform of

    g(t) = \cos\Big(\frac{\pi}{2} t\Big),

a cosine wave with frequency three times smaller than that of f.
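As a quick numerical sanity check (a minimal sketch in Python with NumPy; the finite range of sample times is an arbitrary choice for illustration), the samples of f at the integers are indistinguishable from those of g:

    import numpy as np

    n = np.arange(-20, 21)                    # integer sample times (T = 1)
    f_samples = np.cos(3 * np.pi / 2 * n)     # samples of f(t) = cos(3*pi*t/2)
    g_samples = np.cos(np.pi / 2 * n)         # samples of g(t) = cos(pi*t/2)

    # The two sequences of samples agree up to floating-point round-off.
    print(np.max(np.abs(f_samples - g_samples)))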
Indeed, if Shannon's formula is naively applied to \hat f_d, the signal recovered is

    \sum_{n \in \mathbb{Z}} f(n)\, \mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} \cos\Big(\frac{3\pi}{2} n\Big)\, \mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} \cos\Big(\frac{\pi}{2} n\Big)\, \mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} g(n)\, \mathrm{sinc}(t - n) = g(t),

since g is bandlimited to [-π, π] and f is not.

In other words, by sampling f at too low a rate, we cannot detect the difference between the signal f and the signal g: their values at the integers are identical. Shannon interpolation can only return the unique function that gives those samples and is bandlimited to [-π, π]. In this case, that function is g(t).

Figure 2: When sampled at the integers, the functions f(t) = \cos(\frac{3\pi}{2} t) and g(t) = \cos(\frac{\pi}{2} t) return identical values. The higher frequency of f (solid, black) is aliased into the lower frequency band [-π, π], giving a reconstruction of g rather than of f.

3 Discrete convolutions

Now we consider discrete signals \{f(n)\}_{n \in \mathbb{Z}}, or more precisely, discrete-time signals. As in Lecture 1, we are interested in linear maps L : f \mapsto Lf. For τ ∈ Z we define the translation of f as

    f_\tau(n) = f(n - \tau), \qquad n \in \mathbb{Z},

and we say L is time-invariant if (Lf)_\tau = L f_\tau. The discrete impulse δ is defined as

    \delta(n) = \begin{cases} 1 & n = 0, \\ 0 & \text{otherwise}, \end{cases}

and the discrete convolution between f and g as

    (f * g)(n) = \sum_{m \in \mathbb{Z}} f(m)\, g(n - m).

As before, δ acts as the identity under convolution:

    (f * \delta)(n) = \sum_{m \in \mathbb{Z}} f(m)\, \delta(n - m) = f(n)\, \delta(0) = f(n).

Consequently, writing f = \sum_{m \in \mathbb{Z}} f(m)\, \delta_m and using linearity together with time-invariance, for any time-invariant map L,

    (Lf)(n) = \sum_{m \in \mathbb{Z}} f(m)\, (L\delta_m)(n) = \sum_{m \in \mathbb{Z}} f(m)\, (L\delta)_m(n) = \sum_{m \in \mathbb{Z}} f(m)\, (L\delta)(n - m).

If we let h := L\delta, we see that Lf = f * h. The kernel h is called the impulse response for L, and we conclude that every time-invariant map can be written as the convolution with its impulse response. It is easy to check that the converse is true, and so a map is time-invariant if and only if it is a convolution with a fixed function.

We saw in the continuous-time case that complex exponentials are eigenvectors of time-invariant operators. The same is true in the discrete setting, but now the exponentials are evaluated only at integers. Indeed, write e_\omega(n) = e^{i\omega n}, n ∈ Z. For a time-invariant map L,

    (L e_\omega)(n) = (h * e_\omega)(n) = \sum_{m \in \mathbb{Z}} h(m)\, e^{i\omega(n - m)} = e^{i\omega n} \sum_{m \in \mathbb{Z}} h(m)\, e^{-i\omega m} = e_\omega(n) \sum_{m \in \mathbb{Z}} h(m)\, \overline{e_\omega(m)}.

We once more have a transfer function

    \hat h(\omega) := \langle e_\omega, h \rangle = \sum_{m \in \mathbb{Z}} h(m)\, \overline{e_\omega(m)} = \sum_{m \in \mathbb{Z}} h(m)\, e^{-i\omega m},

defined to return the eigenvalue for the eigenfunction e_\omega:

    L e_\omega = \hat h(\omega)\, e_\omega.

We call \hat h the Fourier series of h. In fact, this is just a special case of the Fourier transform. Considering the discretization

    f_d(t) = \sum_{n \in \mathbb{Z}} f(n)\, \delta(t - n)

of a continuous-time signal f(t), we have

    \hat f_d(\omega) = \sum_{n \in \mathbb{Z}} f(n)\, e^{-i\omega n},

which is the Fourier series of the discrete-time signal \{f(n)\}_{n \in \mathbb{Z}}.

4 Fourier series

What separates Fourier series from Fourier transforms is their periodicity. Since only integer values of m are considered,

    \hat h(\omega) = \sum_{m \in \mathbb{Z}} h(m)\, e^{-i\omega m}

is necessarily 2π-periodic in ω. This fact is actually one we already know, since we learned in Lecture 5 that discretization in time is equivalent to periodization in frequency.

Obviously, Fourier transforms need not be periodic. After all, the inverse Fourier transform told us that every L^2 function is a superposition of complex exponentials with frequencies in R. Once again drawing analogy with the continuous case, we can ask: Is every periodic L^2 function (with period 2π) a superposition of exponentials with frequencies in Z? Perhaps less surprising now, but no less amazing, the answer is yes.
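Before turning to the formal development, here is a brief numerical check of two facts from the discrete-time discussion above (a minimal sketch in Python with NumPy; the impulse response h, the test frequency, and the window length are arbitrary illustrative choices): convolution with h sends e_ω to \hat h(ω) e_ω, and \hat h is 2π-periodic in ω.

    import numpy as np

    # Hypothetical impulse response h, supported on m = -2, ..., 2.
    support = np.arange(-2, 3)
    h = np.array([0.1, 0.25, 0.3, 0.25, 0.1])

    def h_hat(omega):
        """Transfer function: sum_m h(m) * exp(-1j * omega * m)."""
        return np.sum(h * np.exp(-1j * omega * support))

    omega = 0.7                              # arbitrary test frequency
    n = np.arange(-30, 31)                   # finite window of integer times
    e_omega = np.exp(1j * omega * n)         # e_omega(n) = exp(i * omega * n)

    # Discrete convolution h * e_omega; mode="same" keeps the output aligned with n,
    # and the result is exact away from the edges of the finite window.
    Le = np.convolve(e_omega, h, mode="same")

    interior = slice(5, -5)
    print(np.max(np.abs(Le[interior] - h_hat(omega) * e_omega[interior])))  # ~ round-off
    print(abs(h_hat(omega + 2 * np.pi) - h_hat(omega)))                     # ~ round-off: 2*pi-periodicity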
To properly develop this answer, we consider a periodic domain S = \mathbb{R}/2\pi\mathbb{Z} (read "R modulo 2πZ"), which we call the circle. To say f is a function on S just means that f is 2π-periodic: f(t) = f(t + 2π) for every t ∈ R. Instead of working in the vector space L^2(\mathbb{R}), we now work in L^2(S), the vector space of square-integrable functions on the circle:

    L^2(S) = \Big\{ f : S \to \mathbb{C} \ : \ \int_{-\pi}^{\pi} |f(t)|^2\, dt < \infty \Big\}.

L^2(S) is a Hilbert space under the inner product

    \langle f, g \rangle = \int_{-\pi}^{\pi} \overline{f(t)}\, g(t)\, dt,

and complex exponentials with integer frequencies form a basis.

Theorem 1 (Riesz-Fejér). The sequence \{e_n\}_{n \in \mathbb{Z}}, where

    e_n(t) = \frac{1}{\sqrt{2\pi}}\, e^{int},

is a complete orthonormal set for L^2(S).

Riesz-Fejér's theorem is equivalent to Shannon's sampling theorem. By this, we mean that if we know Riesz-Fejér, then Shannon's theorem is an easy consequence, and vice versa. Since we proved Shannon's theorem, we will show how Riesz-Fejér is an easy consequence. Historically, Riesz-Fejér came first.

Recall from linear algebra that if \{v_1, v_2, \dots, v_d\} is an orthonormal basis of a d-dimensional inner product space, any vector v has the representation

    v = \sum_{n=1}^{d} \langle v_n, v \rangle\, v_n.

The analogous statement is true for infinite-dimensional Hilbert spaces, namely that if \{v_n\}_{n \in \mathbb{Z}} is a complete orthonormal set, then

    v = \sum_{n \in \mathbb{Z}} \langle v_n, v \rangle\, v_n.        (2)

So Theorem 1 states that for f square-integrable on [-π, π] with f(-π) = f(π), we can write

    f = \sum_{n \in \mathbb{Z}} c_n e_n \quad \text{with} \quad c_n = \langle e_n, f \rangle,        (3)

which means

    f(t) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} c_n e^{int} \quad \text{with} \quad c_n = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} f(t)\, e^{-int}\, dt.

Here the right-hand side of the first expression is the Fourier series of f, and we understand the equality to hold in the L^2 sense.
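To see the expansion (3) in action, here is a small numerical sketch (Python with NumPy; the triangle-wave test function, the grid size, and the truncation levels are arbitrary choices for illustration). It approximates the coefficients c_n by a Riemann sum over one period and checks that the truncated series converges to f in L^2(S), the sense in which the equality above holds.

    import numpy as np

    # A 2*pi-periodic, continuous test function on the circle: a triangle wave.
    def f(t):
        return np.abs(((t + np.pi) % (2 * np.pi)) - np.pi)   # equals |t| on [-pi, pi)

    M = 4096
    t = -np.pi + 2 * np.pi * np.arange(M) / M                # uniform grid on one period
    dt = 2 * np.pi / M

    def coeff(n):
        """c_n = (1/sqrt(2*pi)) * integral over [-pi, pi] of f(t) exp(-i n t) dt."""
        return np.sum(f(t) * np.exp(-1j * n * t)) * dt / np.sqrt(2 * np.pi)

    def partial_sum(N):
        """Truncated Fourier series (1/sqrt(2*pi)) * sum_{|n| <= N} c_n exp(i n t)."""
        S = np.zeros(M, dtype=complex)
        for n in range(-N, N + 1):
            S += coeff(n) * np.exp(1j * n * t) / np.sqrt(2 * np.pi)
        return S

    # The L^2(S) error of the truncated series shrinks as the truncation level N grows.
    for N in (1, 4, 16, 64):
        err = np.sqrt(np.sum(np.abs(f(t) - partial_sum(N)) ** 2) * dt)
        print(N, err)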