
Chapter 4

Fourier Analysis and Power Spectral Density

4.1 Fourier Series and Transforms

Recall the Fourier series for periodic functions,

x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\frac{2\pi n t}{T} + b_n \sin\frac{2\pi n t}{T} \right],   (4.1)

for x(t + T) = x(t), where

a_0 = \frac{2}{T} \int_0^T x(t)\, dt \quad \left(\frac{a_0}{2} = \bar{x}\right),
a_n = \frac{2}{T} \int_0^T x(t) \cos(n\omega t)\, dt, \qquad \omega = \frac{2\pi}{T},   (4.2)
b_n = \frac{2}{T} \int_0^T x(t) \sin(n\omega t)\, dt.

Dirichlet Theorem: For x(t) periodic on 0 \le t < T, if x(t) is bounded and has a finite number of maxima, minima, and discontinuities, then the Fourier series Eq. (4.1) converges \forall t to \frac{1}{2}[x(t^+) + x(t^-)].
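Purely as an illustration (not part of the original text), the Dirichlet theorem can be checked numerically. The NumPy sketch below sums the truncated series (4.1) for a unit square wave of period T = 1, whose nonzero coefficients b_n = 4/(\pi n) for odd n follow from Eq. (4.2); at the jump t = 1/2 the partial sums approach the midpoint value \frac{1}{2}[x(t^+) + x(t^-)] = 0.

```python
import numpy as np

# Square wave with period T = 1: x(t) = 1 on [0, 1/2), -1 on [1/2, 1).
# From Eq. (4.2): a_n = 0, b_n = 4/(pi*n) for odd n and 0 for even n.
T = 1.0
N_HARMONICS = 2001  # number of terms kept in the truncated series

def series(t, N=N_HARMONICS):
    """Truncated Fourier series, Eq. (4.1), for the square wave."""
    n = np.arange(1, N + 1)
    b_n = np.where(n % 2 == 1, 4.0 / (np.pi * n), 0.0)
    return float(np.sum(b_n * np.sin(2 * np.pi * n * t / T)))

print(series(0.25))  # continuity point: partial sums approach x(0.25) = 1
print(series(0.50))  # jump: partial sums approach the midpoint value 0
```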

The complex form of Eq. (4.1) is better for experimental applications. Using Euler's (or de Moivre's) formulas we get:

\cos\omega t = \frac{1}{2}\left(e^{i\omega t} + e^{-i\omega t}\right),
\sin\omega t = \frac{1}{2i}\left(e^{i\omega t} - e^{-i\omega t}\right).   (4.3)

Using the above, Eq. (4.1) can be rewritten as:

x(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{in\omega t},   (4.4)

where

X_0 = \frac{a_0}{2}, \qquad X_{\pm n} = \frac{1}{2}(a_n \mp i b_n).   (4.5)

Please also note that X_n^* = X_{-n}. Therefore:

X_n = \frac{1}{T} \int_0^T x(t)\, e^{-in\omega t}\, dt = \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, e^{-in\omega t}\, dt.   (4.6)

If the "signal" x(t) is not periodic, we let f_n = n/T (i.e., in Eq. (4.6), n\omega = 2\pi f_n). Now, we define a function X(f) by X(f_n) = T X_n (i.e., X_n = X(f_n)/T) to get the following:

x(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{in\omega t} = \sum_{n=-\infty}^{\infty} \frac{1}{T} X(f_n)\, e^{i2\pi f_n t} = \sum_{n=-\infty}^{\infty} X(f_n)\, e^{i2\pi f_n t}\, \Delta f_n,   (4.7)

where we used the fact that \Delta f_n = f_{n+1} - f_n = \frac{n+1}{T} - \frac{n}{T} = \frac{1}{T}. Therefore, in the limit as T \to \infty and \Delta f_n \to 0 in Eq. (4.7), we get our signal in the time domain as

x(t) = \int_{-\infty}^{\infty} X(f)\, e^{i2\pi f t}\, df.   (4.8)

Now, assuming that everything converges and using Eq. (4.6), we get the corresponding frequency domain expression

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i2\pi f t}\, dt.   (4.9)

Therefore, x(t) \leftrightarrow X(f) are a Fourier transform pair, where x(t) is in the time domain and X(f) is in the frequency domain.
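As a numerical sanity check of the pair (4.8)–(4.9) (our own sketch, with an assumed test signal and grid), the Gaussian x(t) = e^{-\pi t^2} is its own Fourier transform, X(f) = e^{-\pi f^2}, so a direct Riemann-sum discretization of Eq. (4.9) should reproduce it:

```python
import numpy as np

# The Gaussian x(t) = exp(-pi t^2) is its own Fourier transform,
# X(f) = exp(-pi f^2); grid limits and spacing are arbitrary choices.
t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
x = np.exp(-np.pi * t ** 2)

def X_num(f):
    """Riemann-sum approximation of Eq. (4.9)."""
    return complex(np.sum(x * np.exp(-2j * np.pi * f * t)) * dt)

for f in (0.0, 0.5, 1.0):
    print(f, X_num(f).real, np.exp(-np.pi * f ** 2))  # columns should agree
```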

4.1.1 Several Important Properties of Fourier Transforms

We denote a Fourier transform (FT) as X(f) = \mathcal{F}(x(t)) and x(t) = \mathcal{F}^{-1}(X(f)). Now, we can write several of the properties of the FT:

1. Linearity: \mathcal{F}[\alpha x(t) + \beta y(t)] = \alpha X(f) + \beta Y(f)

2. Duality: x(t) \leftrightarrow X(f) \Rightarrow X(t) \leftrightarrow x(-f)

3. Conjugation: x(t) \leftrightarrow X(f) \Rightarrow x^*(t) \leftrightarrow X^*(-f). Therefore, for a real signal x(t), X(f) = X^*(-f). This in turn gives:

|X(f)|^2 = X(f) X^*(f) = X^*(-f) X(-f) = |X(-f)|^2,   (4.10)

i.e., for real x(t), |X(f)| is symmetric.

4. Convolution:

\mathcal{F}\left[\int_{-\infty}^{\infty} x(\tau)\, y(t - \tau)\, d\tau\right] = X(f) Y(f), \qquad \mathcal{F}[x * y] = X(f) Y(f),

where x * y indicates time convolution between x(t) and y(t). In addition,

\mathcal{F}[xy] = \int_{-\infty}^{\infty} X(\phi)\, Y(f - \phi)\, d\phi = X * Y,

where X * Y indicates frequency convolution between X(f) and Y(f) (also, X * Y = Y * X).
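The convolution property has an exact discrete counterpart that can be verified with the DFT (a NumPy sketch; the variable names are ours): the inverse FFT of the pointwise product of two FFTs equals the circular convolution of the two sequences.

```python
import numpy as np

# Discrete check of the convolution property: the DFT maps circular
# convolution to pointwise multiplication.
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# Direct circular convolution: (x * y)[k] = sum_m x[m] y[(k - m) mod N].
conv = np.array([np.sum(x * np.roll(y[::-1], k + 1)) for k in range(N)])

# The same result via the DFT, using F[x * y] = X Y:
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(float(np.max(np.abs(conv - via_fft))))  # agrees to rounding error
```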

5. Differentiation:

\mathcal{F}\left[\frac{d^k x}{dt^k}\right] = (i2\pi f)^k X(f).

6. Time Scaling and Shifting:

x(at + b) \leftrightarrow \frac{e^{i2\pi f b/a}}{|a|}\, X\!\left(\frac{f}{a}\right).

Theorem: Provided x(t) \in L^1 (i.e., \int_{-\infty}^{\infty} |x(t)|\, dt < \infty and x(t) has a finite number of maxima, minima, and discontinuities), X(f) exists, and

\mathcal{F}^{-1}[X(f)] = \begin{cases} x(t) & \text{for } x \text{ continuous at } t, \\ \frac{1}{2}[x(t^+) + x(t^-)] & \text{for } x \text{ discontinuous at } t. \end{cases}

There is a problem with the above theorem if we consider the following:

\int_{-\infty}^{\infty} |\sin t|\, dt = \infty,

which can be fixed using the theory of generalized functions (or distributions), duality, and other basic properties.

4.1.2 Basic Fourier Transform Pairs

1. Delta (δ) “function”: This actually is a generalized function or distribution defined as:

\int_{-\infty}^{\infty} \delta(t)\, dt = 1.   (4.11)

Now, by definition of \delta(t), \Delta(f) = \int_{-\infty}^{\infty} e^{-i2\pi f t}\, \delta(t - t_0)\, dt = e^{-i2\pi f t_0}, also called the "sifting property." Note that \Delta is a complex constant with |\Delta| = 1. Therefore:

\delta(t - t_0) \leftrightarrow e^{-i2\pi f t_0},   (4.12)

and in particular, \delta(t) \leftrightarrow 1. Therefore, by the duality property,

e^{i2\pi f_0 t} \leftrightarrow \delta(f - f_0),   (4.13)

and in particular, 1 \leftrightarrow \delta(f).

Figure 4.1: Signal modulation in the frequency domain

2. Trigonometric functions:

\cos(2\pi f_0 t) = \frac{1}{2}\left(e^{i2\pi f_0 t} + e^{-i2\pi f_0 t}\right),   (4.14)

so

\mathcal{F}[\cos(2\pi f_0 t)] = \frac{\delta(f - f_0) + \delta(f + f_0)}{2}.   (4.15)

Similarly,

\mathcal{F}[\sin(2\pi f_0 t)] = \frac{\delta(f - f_0) - \delta(f + f_0)}{2i}.   (4.16)

3. Modulated trigonometric functions: As an example, consider x(t)\cos(2\pi f_c t) \leftrightarrow X(f) * C(f), where f_c is called the carrier frequency and C(f) is given by Eq. (4.15). Then,

x(t)\cos(2\pi f_c t) \leftrightarrow \int_{-\infty}^{\infty} X(s)\, \frac{\delta(f - f_c - s) + \delta(f + f_c - s)}{2}\, ds,   (4.17)

where on the right hand side we have a convolution integral, which gives:

x(t)\cos(2\pi f_c t) \leftrightarrow \frac{X(f - f_c) + X(f + f_c)}{2}.   (4.18)

Therefore, if we already know X(f), modulation scales and shifts it to \pm f_c as shown in Fig. 4.1.
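A hypothetical numerical example of Eq. (4.18) (the sample rate and the two frequencies are arbitrary choices for this sketch): modulating a 5 Hz cosine by a 100 Hz carrier should move its spectral peaks to f_c \pm f_0 = 95 and 105 Hz.

```python
import numpy as np

# Modulation, Eq. (4.18): multiplying by cos(2 pi fc t) splits the
# spectrum of x(t) into two copies centred at +/- fc.
fs = 1000.0                  # sample rate in Hz (an assumed value)
t = np.arange(0, 1, 1 / fs)  # one second of samples
f0, fc = 5.0, 100.0          # baseband and carrier frequencies
modulated = np.cos(2 * np.pi * f0 * t) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # the two spectral peaks sit at fc - f0 and fc + f0
```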

4.2 Power Spectral Density

The autocorrelation of a real, stationary signal x(t) is defined to be R_x(\tau) = E[x(t)x(t + \tau)]. The

Fourier transform of Rx(τ) is called the Power Spectral Density (PSD) Sx(f). Thus:

S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-i2\pi f\tau}\, d\tau.   (4.19)

The question is: what is the PSD? What does it mean? What is a “spectral density,” and why is

Sx called a power spectral density? To answer this question, recall that

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i2\pi f t}\, dt.   (4.20)

To avoid convergence problems, we consider only a version of the signal observed over a finite time T,¹ x_T(t) = x(t) w_T(t), where

w_T(t) = \begin{cases} 1 & \text{for } |t| \le T/2, \\ 0 & \text{for } |t| > T/2. \end{cases}   (4.21)

Then x_T has the Fourier transform

X_T(f) = \int_{-\infty}^{\infty} x_T(t)\, e^{-i2\pi f t}\, dt   (4.22)
       = \int_{-T/2}^{T/2} x(t)\, e^{-i2\pi f t}\, dt,   (4.23)

and so

X_T X_T^* = \left[\int_{-T/2}^{T/2} x(t)\, e^{-i2\pi f t}\, dt\right] \left[\int_{-T/2}^{T/2} x^*(s)\, e^{i2\pi f s}\, ds\right]   (4.24)
          = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} x(t) x(s)\, e^{-i2\pi f(t-s)}\, dt\, ds,   (4.25)

where the star denotes complex conjugation and, for compactness, the frequency argument of X_T has been suppressed. Taking the expectation of both sides of Eq. (4.25)² gives

E[X_T X_T^*] = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} E[x(t)x(s)]\, e^{-i2\pi f(t-s)}\, dt\, ds.   (4.26)

Letting s = t + \tau, one sees that E[x(t)x(s)] = E[x(t)x(t + \tau)] = R_x(\tau), and thus

E[X_T X_T^*] = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} R_x(\tau)\, e^{-i2\pi f(t-s)}\, dt\, ds.   (4.27)

To actually evaluate the above integral, both variables of integration must be changed. Let

\tau = f(t, s) = s - t \quad \text{(as already defined for Eq. (4.27))}   (4.28)
\eta = g(t, s) = s + t.   (4.29)

Then, the integral of Eq. (4.27) is transformed (except for the limits of integration) using the change of variables formula:³

\int_{-T/2}^{T/2} \int_{-T/2}^{T/2} R_x(\tau)\, e^{-i2\pi f(t-s)}\, dt\, ds = \int_{-T}^{T} \int_{-T}^{T} R_x(\tau)\, e^{-i2\pi f\tau}\, |J|^{-1}\, d\tau\, d\eta,   (4.30)

(on the right-hand side the evenness of the autocorrelation, R_x(\tau) = R_x(-\tau), has also been used to write the kernel e^{-i2\pi f(t-s)} = e^{i2\pi f\tau} as e^{-i2\pi f\tau}),

¹This restriction is necessary because not all of our signals will be square integrable. However, they will be mean square integrable, which is what we will take advantage of here.
²To understand what this means, remember that Eq. (4.25) holds for any x(t). So imagine computing Eq. (4.26) for different x(t) obtained from different experiments on the same system (each one of these is called a sample function). The expectation is over all possible sample functions. Since the exponential kernel inside the integral of Eq. (4.26) is the same for each sample function, it can be pulled outside of the expectation.
³This is a basic result from multivariable calculus. See, for example, I.S. Sokolnikoff and R.M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd edition, McGraw-Hill, New York, 1966.

Figure 4.2: The domain of integration (gray regions) for the Fourier transform of the autocorrelation, Eq. (4.27): (left) for the original variables, t and s; (right) for the transformed variables, \eta and \tau, obtained by the change of variables Eqs. (4.28)-(4.29). Notice that the square region on the left is not only rotated (and flipped about the t axis), but its area is increased by a factor of |J| = 2. The circled numbers show where the sides of the square on the left are mapped by the change of variables. The lines into which the t and s axes are mapped are also shown.

where |J| is the absolute value of the Jacobian for the change of variables Eqs. (4.28)-(4.29), given by

J = \begin{vmatrix} \frac{df}{dt} & \frac{df}{ds} \\ \frac{dg}{dt} & \frac{dg}{ds} \end{vmatrix} = \begin{vmatrix} -1 & 1 \\ 1 & 1 \end{vmatrix} = -2, \qquad \text{so } |J| = 2.   (4.31)

To determine the limits of integration needed for the right-hand side of Eq. (4.30), we refer to Fig. 4.2, in which the domain of integration is plotted in both the original (t, s) variables and the transformed (\tau, \eta) variables. Since we wish to integrate on \eta first, we hold \tau fixed. For \tau > 0, a vertical cut through the diamond-shaped region in Fig. 4.2 (right) shows that -T + \tau \le \eta \le T - \tau, whereas for \tau < 0 one finds that -T - \tau \le \eta \le T + \tau. Putting this all together (and recalling |J|^{-1} = 1/2) yields:

E[X_T X_T^*] = \frac{1}{2} \int_{-T}^{T} \int_{-(T - |\tau|)}^{T - |\tau|} R_x(\tau)\, e^{-i2\pi f\tau}\, d\eta\, d\tau = \int_{-T}^{T} T \left(1 - \frac{|\tau|}{T}\right) R_x(\tau)\, e^{-i2\pi f\tau}\, d\tau.   (4.32)

Finally, dividing both sides of Eq. (4.32) by T and taking the limit as T \to \infty gives

\lim_{T\to\infty} \frac{1}{T} E[X_T X_T^*] = \lim_{T\to\infty} \int_{-T}^{T} \left(1 - \frac{|\tau|}{T}\right) R_x(\tau)\, e^{-i2\pi f\tau}\, d\tau
= \lim_{T\to\infty} \int_{-T}^{T} R_x(\tau)\, e^{-i2\pi f\tau}\, d\tau
= \int_{-\infty}^{\infty} R_x(\tau)\, e^{-i2\pi f\tau}\, d\tau   (4.33)
= S_x(f).

Thus, in summary, the above demonstrates that

S_x(f) = \lim_{T\to\infty} \frac{1}{T}\, E\left[|X_T(f)|^2\right].   (4.34)
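Eq. (4.34) can be illustrated numerically (a NumPy sketch; the sampling rate, record length, and trial count are our own arbitrary choices): for discrete white noise of variance \sigma^2 sampled at rate f_s, averaging |X_T(f)|^2/T over many sample functions approaches the flat two-sided PSD \sigma^2/f_s.

```python
import numpy as np

# Monte-Carlo illustration of Eq. (4.34) for discrete white noise.
rng = np.random.default_rng(1)
fs, N, trials = 1.0, 1024, 500
sigma2 = 2.0

acc = np.zeros(N)
for _ in range(trials):
    x = rng.normal(0.0, np.sqrt(sigma2), N)
    XT = np.fft.fft(x) / fs            # discrete stand-in for X_T(f)
    acc += np.abs(XT) ** 2 / (N / fs)  # |X_T(f)|^2 / T, with T = N/fs
S = acc / trials                       # average over sample functions

print(S.mean())  # should be close to sigma^2 / fs
```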

Recalling that X_T(f) has units SU/Hz (where SU stands for "signal units," i.e., whatever units the signal x_T(t) has), it is clear that E[|X_T(f)|^2] has units (SU/Hz)^2. However, 1/T has units of Hz, so that Eq. (4.34) shows that the PSD has units of SU^2/Hz.⁴

Although it is not always literally true, in many cases the mean square of the signal is proportional⁵ to the amount of power in the signal. The fact that S_x is therefore interpreted as having units of "power" per unit frequency explains the name Power Spectral Density.

Notice that power at a frequency f_0 that does not repeatedly reappear in x_T(t) as T \to \infty will result in S_x(f_0) \to 0, because of the division by T in Eq. (4.34). In fact, based on this idealized mathematical definition, any signal of finite duration (or, more generally, any mean square integrable signal) will have power identically equal to zero! In practice, however, we do not

let T extend much past the support [T_{min}, T_{max}] of x_T(t) (T_{min/max} is the minimum (respectively, maximum) T for which x_T(t) \neq 0). Since all signals that we measure in the laboratory have the form y(t) = x(t) + n(t), where n(t) is broadband noise, extending T to infinity for any signal with finite support will end up giving S_x \approx S_n.

We conclude by mentioning some important properties of S_x. First, since S_x is an average of the

magnitude squared of the Fourier transform, S_x(f) \in \mathbb{R} and S_x(f) \ge 0 for all f. A simple change of variables in the definition Eq. (4.19) shows that S_x(-f) = S_x(f). Given the definition Eq. (4.19), we also have the dual relationship

R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\, e^{i2\pi f\tau}\, df.   (4.35)

Setting \tau = 0 in the above gives

R_x(0) = E\left[x(t)^2\right] = \int_{-\infty}^{\infty} S_x(f)\, df,   (4.36)

which, for a mean zero signal, gives

\sigma_x^2 = \int_{-\infty}^{\infty} S_x(f)\, df.   (4.37)

Finally, if we assume that x(t) is ergodic in the autocorrelation, that is, that

R_x(\tau) = E[x(t)x(t + \tau)] = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, x(t + \tau)\, dt,

⁴Of course, the units can also be determined by examining the definition Eq. (4.19).
⁵This comes primarily from the fact that, in electrical circuits, the power can be written in terms of the voltage as V^2/Z, or in terms of the current as I^2 Z, where Z is the circuit impedance. Thus, for electrical signals, it is precisely true that the mean square of the signal will be proportional to the power. Be forewarned, however, that the mean square of the scaled signal, expressed in terms of the actual measured variable (such as displacement or acceleration), will not in general be equal to the average mechanical power in the structure being measured.

where the last equality holds for any sample function x(t), then Eq. (4.37) can be rewritten as

\lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)^2\, dt = \int_{-\infty}^{\infty} S_x(f)\, df.   (4.38)

The above relationship is known as Parseval's Identity.

This last identity makes it clear that, given any two frequencies f_1 and f_2, the quantity

\int_{f_1}^{f_2} S_x(f)\, df

represents the portion of the average signal power contained in signal frequencies between f_1 and f_2, and hence S_x is indeed a "spectral density."
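Parseval's identity has an exact discrete counterpart that is easy to verify (our own NumPy sketch): for the DFT, the mean-square power of the samples equals the sum \sum_k |X[k]|^2/N^2.

```python
import numpy as np

# Discrete counterpart of Parseval's identity, Eq. (4.38).
rng = np.random.default_rng(2)
N = 4096
x = rng.standard_normal(N)

power_time = float(np.mean(x ** 2))            # time-domain mean square
X = np.fft.fft(x)
power_freq = float(np.sum(np.abs(X) ** 2) / N ** 2)  # frequency-domain sum

print(power_time, power_freq)  # the two agree to rounding error
```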

4.3 Sample Power Spectra

1. White noise: S_{xx}(f) = 1, where we have power at all frequencies. The corresponding

autocorrelation is R_{xx}(\tau) = \delta(\tau); see Fig. 4.3.


Figure 4.3: White noise signal has power at all frequencies and is uncorrelated for \tau \neq 0.

2. Band limited noise:

S_{xx}(f) = W(f) = \begin{cases} 1, & 0 \le |f| \le f_{BW}, \\ 0, & \text{otherwise.} \end{cases}

R_{xx}(\tau) = \int_{-\infty}^{\infty} W(f)\, e^{i2\pi f\tau}\, df = \int_{-f_{BW}}^{f_{BW}} e^{i2\pi f\tau}\, df = \frac{1}{i2\pi\tau}\left(e^{i2\pi f_{BW}\tau} - e^{-i2\pi f_{BW}\tau}\right) = \frac{\sin(2\pi f_{BW}\tau)}{\pi\tau}.   (4.39)

For the corresponding graphs, refer to Fig. 4.4.

Figure 4.4: Band limited noise has a correlation time of 1/(2 f_{BW}).
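Eq. (4.39) can be confirmed by direct numerical integration of the band-limited spectrum (a sketch; the bandwidth f_{BW} = 2 Hz and the test lags are arbitrary choices):

```python
import numpy as np

# Numerical check of Eq. (4.39): inverse-transform the ideal
# band-limited spectrum and compare with the sinc-shaped
# closed-form autocorrelation.
f_bw = 2.0
f = np.linspace(-f_bw, f_bw, 200001)
df = f[1] - f[0]

def R_num(tau):
    """Riemann-sum approximation of int W(f) exp(i 2 pi f tau) df."""
    return float(np.sum(np.exp(2j * np.pi * f * tau)).real * df)

for tau in (0.1, 0.25, 0.5):
    print(tau, R_num(tau), np.sin(2 * np.pi * f_bw * tau) / (np.pi * tau))
```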

Problems

Problem 4.1

Create a sample \{x_n\}_{n=1}^{1024+14} of uncorrelated Gaussian random variables (command randn in Matlab). Now apply the moving average filter s_n = \frac{1}{15}\sum_{i=-7}^{7} x_{n+i} to obtain 1024 correlated Gaussian variates. Estimate the power spectrum (type > help pwelch in Matlab) for both data sequences and observe the differences.
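Purely as an illustration (the problem asks for Matlab), a NumPy sketch of the same experiment follows; a plain averaged periodogram, with our own segment count, stands in for pwelch.

```python
import numpy as np

# Illustrative NumPy version of Problem 4.1 (Matlab's randn/pwelch are
# replaced by rng.standard_normal and a simple averaged periodogram).
rng = np.random.default_rng(3)
x = rng.standard_normal(1024 + 14)

# Moving average filter s_n = (1/15) * sum_{i=-7}^{7} x_{n+i}.
s = np.convolve(x, np.ones(15) / 15.0, mode="valid")  # length 1024

def periodogram_avg(z, nseg=8):
    """Average the periodograms of nseg non-overlapping segments."""
    segs = z[: len(z) // nseg * nseg].reshape(nseg, -1)
    return np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / segs.shape[1]

Px = periodogram_avg(x[:1024])
Ps = periodogram_avg(s)

# The filter passes low frequencies and suppresses high ones:
print(Px[-10:].mean() / Px[1:11].mean())  # white noise: ratio near 1
print(Ps[-10:].mean() / Ps[1:11].mean())  # filtered: ratio far below 1
```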

Problem 4.2

Create two time series: (1) \{x_n\}_{n=1}^{4096} of uncorrelated Gaussian random variables (command randn in Matlab), and (2) the deterministic evolution of the Ulam map \{y_n\}_{n=1}^{4096}, which follows the rule y_0 = 0.1 and y_{n+1} = 1 - 2y_n^2. The values of y_n are measured through a nonlinear observation function s_n = \arccos(-y_n)/\pi. Compare the mean and the power spectra of the two time series.
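Again purely as an illustration (the problem asks for Matlab), a NumPy sketch of the two series follows; the seed and variable names are our own choices.

```python
import numpy as np

# Illustrative NumPy version of Problem 4.2.
rng = np.random.default_rng(4)
n = 4096
x = rng.standard_normal(n)  # series (1): uncorrelated Gaussians

# Series (2): Ulam map y_{k+1} = 1 - 2*y_k^2, started at y_0 = 0.1,
# observed through the nonlinear function s_n = arccos(-y_n)/pi.
y = np.empty(n)
y[0] = 0.1
for k in range(n - 1):
    y[k + 1] = 1.0 - 2.0 * y[k] ** 2
s = np.arccos(np.clip(-y, -1.0, 1.0)) / np.pi  # clip guards rounding

print(x.mean(), s.mean())  # compare the two sample means

# Periodograms of the two series (the point of the exercise: the
# deterministic, chaotic series can mimic a broadband random one).
Px = np.abs(np.fft.rfft(x)) ** 2 / n
Ps = np.abs(np.fft.rfft(s)) ** 2 / n
```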