
Matthew Schwartz Lecture 5: Fourier series

1 Fourier series

When N oscillators are strung together in a series, the amplitude of that string can be described by a function $A(x,t)$ which satisfies the wave equation:

$$\left(\frac{\partial^2}{\partial t^2} - v^2 \frac{\partial^2}{\partial x^2}\right) A(x,t) = 0 \qquad (1)$$

We saw that electromagnetic fields satisfy this same equation with $v = c$ the speed of light. We found solutions of the form

$$A(x,t) = A_0 \cos\!\left(\frac{\omega}{v}(x \pm vt) + \phi\right) \qquad (2)$$

for any $\omega$ which are traveling waves. Solutions of the form

$$A(x,t) = A_0 \cos(kx)\cos(\omega t) \qquad (3)$$

with $\omega^2 = v^2 k^2$ are called standing waves. Whether traveling waves or standing waves are relevant depends on the boundary condition. More generally, we found traveling wave solutions could come from any function $f(x + vt)$:

$$\left(\frac{\partial^2}{\partial t^2} - v^2 \frac{\partial^2}{\partial x^2}\right) f(x + vt) = 0 \qquad (4)$$

Similarly $f(x - vt)$ is a solution. Functions $f(x - vt)$ are right-moving traveling waves and functions $f(x + vt)$ are left-moving traveling waves. Now, since any vector can be written as a sum of eigenvectors, any solution can be written as a sum of normal modes. This is true both in the discrete case and in the continuum case. Thus we must be able to write

$$f(x + vt) = \sum_k \Big[ a_k \cos(kx)\cos(\omega t) + b_k \sin(kx)\cos(\omega t) + c_k \cos(kx)\sin(\omega t) + d_k \sin(kx)\sin(\omega t) \Big] \qquad (5)$$

where the sum is over wavenumbers $k$. In particular, at $t = 0$ any function can be written as

$$f(x) = \sum_k \big[ a_k \cos(kx) + b_k \sin(kx) \big] \qquad (6)$$

We have just proved Fourier's theorem! (Ok, we haven't really proven it; we just assumed the result from linear algebra about a finite system applies also in the continuum limit. The actual proof requires certain properties of $f(x)$, such as square-integrability, to hold. But we are physicists not mathematicians, so let's just say we proved it.)
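Before moving on, here is a quick numerical sanity check of the solutions above: a minimal Python sketch (assuming numpy is available; the values of $A_0$, $k$, $v$ and the step size $h$ are arbitrary choices) verifying by finite differences that a standing wave with $\omega = vk$ satisfies the wave equation (1).

```python
import numpy as np

# Check that A(x,t) = A0 cos(kx) cos(wt), with w = v*k,
# satisfies (d^2/dt^2 - v^2 d^2/dx^2) A = 0.  All parameter values are arbitrary.
A0, k, v = 1.3, 2.0 * np.pi, 0.7
w = v * k                      # dispersion relation for the wave equation
h = 1e-4                       # finite-difference step

def A(x, t):
    return A0 * np.cos(k * x) * np.cos(w * t)

x, t = 0.37, 1.21              # an arbitrary spacetime point
d2t = (A(x, t + h) - 2 * A(x, t) + A(x, t - h)) / h**2   # second time derivative
d2x = (A(x + h, t) - 2 * A(x, t) + A(x - h, t)) / h**2   # second space derivative

print(d2t - v**2 * d2x)        # ~0 up to finite-difference error
```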

2 Fourier’s theorem

Fourier's theorem states that any square-integrable function $f(x)$ which is periodic on the interval $0 \le x \le L$ (that is, $f(x + L) = f(x)$) can be written as a sum of sines and cosines:

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left[ a_n \cos\!\left(\frac{2\pi n}{L}x\right) + b_n \sin\!\left(\frac{2\pi n}{L}x\right)\right] \qquad (7)$$

with coefficients

$$a_0 = \frac{1}{L}\int_0^L dx\, f(x) \qquad (8)$$

$$a_n = \frac{2}{L}\int_0^L dx\, f(x)\cos\!\left(\frac{2\pi n}{L}x\right) \qquad (9)$$

$$b_n = \frac{2}{L}\int_0^L dx\, f(x)\sin\!\left(\frac{2\pi n}{L}x\right) \qquad (10)$$

To check the formula for $a_0$, note that a cosine integrates to zero over an integer number of its periods,

$$\int_0^L dx\, \cos\!\left(\frac{2\pi n}{L}x\right) = 0, \qquad n > 0 \qquad (11)$$

and similarly

$$\int_0^L dx\, \sin\!\left(\frac{2\pi n}{L}x\right) = 0, \qquad n > 0 \qquad (12)$$

Thus,

$$\int_0^L dx\, f(x) = a_0 \int_0^L dx + \sum_{n=1}^{\infty} a_n \int_0^L dx\, \cos\!\left(\frac{2\pi n}{L}x\right) + \sum_{n=1}^{\infty} b_n \int_0^L dx\, \sin\!\left(\frac{2\pi n}{L}x\right) \qquad (13)$$

$$= a_0 L \qquad (14)$$

in agreement with Eq. (8). For $a_n$ we can use the cosine sum formula to write

$$\int_0^L dx\, \cos\!\left(\frac{2\pi m}{L}x\right)\cos\!\left(\frac{2\pi n}{L}x\right) = \int_0^L dx \left[\frac{1}{2}\cos\!\left(2\pi x\,\frac{n+m}{L}\right) + \frac{1}{2}\cos\!\left(2\pi x\,\frac{n-m}{L}\right)\right] \qquad (15)$$

Now again we have that these all vanish over an integer number of periods of the cosine curve. The only way this wouldn't vanish is if $n - m = 0$. So we have for $n > 0$

$$\int_0^L dx\, \cos\!\left(\frac{2\pi m}{L}x\right)\cos\!\left(\frac{2\pi n}{L}x\right) = \frac{1}{2}\delta_{mn}\int_0^L dx = \frac{L}{2}\delta_{mn} \qquad (16)$$

where $\delta_{mn}$ is the Kronecker $\delta$-function

$$\delta_{mn} = \begin{cases} 0, & m \neq n \\ 1, & m = n \end{cases} \qquad (17)$$

Similarly,

$$\int_0^L dx\, \cos\!\left(\frac{2\pi m}{L}x\right)\sin\!\left(\frac{2\pi n}{L}x\right) = 0 \qquad (18)$$

$$\int_0^L dx\, \sin\!\left(\frac{2\pi m}{L}x\right)\sin\!\left(\frac{2\pi n}{L}x\right) = \frac{L}{2}\delta_{mn} \qquad (19)$$

Thus, for $n > 0$

$$\int_0^L dx\, f(x)\cos\!\left(\frac{2\pi n}{L}x\right) \qquad (20)$$

$$= \int_0^L dx \left[ a_0 + \sum_{m=1}^{\infty} a_m \cos\!\left(\frac{2\pi m}{L}x\right) + \sum_{m=1}^{\infty} b_m \sin\!\left(\frac{2\pi m}{L}x\right)\right]\cos\!\left(\frac{2\pi n}{L}x\right) \qquad (21)$$

$$= \sum_{m=1}^{\infty} \frac{L}{2}\, a_m \delta_{mn} \qquad (22)$$

$$= \frac{L}{2}\, a_n \qquad (23)$$

as in Eq. (9). In the same way you can check the formula for $b_n$.
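As a quick cross-check of Eqs. (16), (18) and (19), here is a minimal numerical sketch (assuming numpy; the value of $L$ and the grid size are arbitrary choices) that evaluates the orthogonality integrals for a few values of $m$ and $n$.

```python
import numpy as np

# Numerically check Eqs. (16), (18), (19):
#   int_0^L cos(2*pi*m*x/L) cos(2*pi*n*x/L) dx = (L/2) delta_mn   (m, n > 0)
#   int_0^L cos(2*pi*m*x/L) sin(2*pi*n*x/L) dx = 0
#   int_0^L sin(2*pi*m*x/L) sin(2*pi*n*x/L) dx = (L/2) delta_mn
L, num = 3.0, 200_000                      # L is an arbitrary period
dx = L / num
x = (np.arange(num) + 0.5) * dx            # midpoint rule

def mode(trig, n):
    return trig(2 * np.pi * n * x / L)

for m in (1, 2, 3):
    for n in (1, 2, 3):
        cc = np.sum(mode(np.cos, m) * mode(np.cos, n)) * dx
        cs = np.sum(mode(np.cos, m) * mode(np.sin, n)) * dx
        ss = np.sum(mode(np.sin, m) * mode(np.sin, n)) * dx
        # cc and ss come out ~L/2 = 1.5 when m == n, and ~0 otherwise; cs is always ~0
        print(m, n, round(cc, 6), round(cs, 6), round(ss, 6))
```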

We use
• Fourier cosine series for functions which are even on the interval ($f(x) = f(L - x)$)
• Fourier sine series for functions which are odd on the interval ($f(x) = -f(L - x)$)
• For functions that are neither even nor odd on the interval, we need both sines and cosines
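To see the even/odd rule in action, here is a minimal numerical sketch (assuming numpy; the helper name `fourier_coefficients` and the test functions are my own choices) that evaluates Eqs. (8)-(10) by a simple Riemann sum: the even test function keeps only cosines, the odd one only sines.

```python
import numpy as np

def fourier_coefficients(f, L=1.0, n_max=4, num=100_000):
    """Approximate a0, a_n, b_n (Eqs. (8)-(10)) with a midpoint Riemann sum."""
    dx = L / num
    x = (np.arange(num) + 0.5) * dx
    fx = f(x)
    n = np.arange(1, n_max + 1)[:, None]           # column of mode numbers
    a0 = np.sum(fx) * dx / L
    a = 2 / L * np.sum(fx * np.cos(2 * np.pi * n * x / L), axis=1) * dx
    b = 2 / L * np.sum(fx * np.sin(2 * np.pi * n * x / L), axis=1) * dx
    return a0, a, b

# Even on the interval, f(x) = f(L - x): only cosine terms survive (b_n ~ 0).
print(fourier_coefficients(lambda x: x * (1 - x)))
# Odd on the interval, f(x) = -f(L - x): only sine terms survive (a_n ~ 0).
print(fourier_coefficients(lambda x: x - 0.5))
```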

3 Example

Find the Fourier series for the sawtooth function:


Figure 1. Sawtooth function

This function is clearly periodic. It is equal to $f(x) = x$ on the interval $0 \le x < 1$ and repeats with period $L = 1$. First,

$$a_0 = \frac{1}{L}\int_0^L dx\, f(x) = \int_0^1 dx\, x = \frac{1}{2} \qquad (24)$$

Next,

$$a_n = \frac{2}{L}\int_0^L dx\, f(x)\cos\!\left(\frac{2\pi n}{L}x\right) = 2\int_0^1 dx\, x\cos(2\pi n x) \qquad (25)$$

This can be done by integration by parts:

$$a_n = 2\left[\frac{x}{2\pi n}\sin(2\pi n x)\right]_0^1 - \frac{2}{2\pi n}\int_0^1 dx\, \sin(2\pi n x) = 0 \qquad (26)$$

Finally,

$$b_n = 2\int_0^1 dx\, x\sin(2\pi n x) \qquad (27)$$

$$= -2\left[\frac{x}{2\pi n}\cos(2\pi n x)\right]_0^1 + \frac{2}{2\pi n}\int_0^1 dx\, \cos(2\pi n x) \qquad (28)$$

$$= -\frac{1}{\pi n} \qquad (29)$$

Thus we have

$$f(x) = \frac{1}{2} + \sum_{n=1}^{\infty}\left(-\frac{1}{\pi n}\right)\sin\!\left(\frac{2\pi n x}{L}\right) \qquad (30)$$

Let's look at how well the series approximates the function when including various terms. Taking 0, 1 and 2 terms in the sum gives

Figure 2. Approximations to the sawtooth function: 1 mode, $f = \frac{1}{2}$; 2 modes, $f = \frac{1}{2} - \frac{1}{\pi}\sin(2\pi x)$; 3 modes, $f = \frac{1}{2} - \frac{1}{\pi}\sin(2\pi x) - \frac{1}{2\pi}\sin(4\pi x)$.

Already at 3 modes, it’s looking reasonable. For 5, 10 and 100 modes we find

Figure 3. More approximations to the sawtooth function: 5, 10 and 100 modes.

For 10 modes we find excellent agreement.
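Both the coefficients and the convergence shown in Figures 2 and 3 are easy to check numerically. Here is a minimal sketch (assuming numpy; the grid size, sample points and mode counts are arbitrary choices) that recomputes $a_n$ and $b_n$ for the sawtooth by direct integration and then measures how close partial sums of Eq. (30) get to $f(x) = x$ away from the jump.

```python
import numpy as np

# Coefficients of f(x) = x on [0, 1) by direct numerical integration (Eqs. (24)-(27)).
num = 200_000
dx = 1.0 / num
x = (np.arange(num) + 0.5) * dx                  # midpoint rule over one period
for n in range(1, 5):
    an = 2 * np.sum(x * np.cos(2 * np.pi * n * x)) * dx
    bn = 2 * np.sum(x * np.sin(2 * np.pi * n * x)) * dx
    print(n, round(an, 5), round(bn, 5), round(-1 / (np.pi * n), 5))   # a_n ~ 0, b_n ~ -1/(pi n)

# Partial sums of Eq. (30) and their maximum deviation from f(x) = x away from the jump.
def partial_sum(x, n_modes):
    n = np.arange(1, n_modes + 1)[:, None]
    return 0.5 - np.sum(np.sin(2 * np.pi * n * x) / (np.pi * n), axis=0)

xs = np.linspace(0.05, 0.95, 500)                # stay clear of the discontinuity at x = 0, 1
for n_modes in (1, 3, 5, 10, 100):
    print(n_modes, "modes:", round(np.max(np.abs(partial_sum(xs, n_modes) - xs)), 4))
```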

4 Plucking a string

Let's apply the Fourier decomposition we worked out to plucking a string. Suppose we pluck a string by pulling up one end:

What happens to the string? To find out, let us do a Fourier decomposition of the $x$-dependence of the pluck. We start by writing

$$A(x,t) = \sum_{n=0}^{\infty}\left[ a_n \cos\!\left(\frac{2n\pi}{L}x\right)\cos(\omega_n t) + b_n \sin\!\left(\frac{2n\pi}{L}x\right)\cos(\omega_n t)\right], \qquad \omega_n = \frac{2n\pi}{L}v \qquad (31)$$

Here $v$ is the speed of sound in the string. For a given wavenumber, $k_n = \frac{2n\pi}{L}$, we know that $\omega_n = k_n v$ to satisfy the wave equation. We could also have included components with $\sin(\omega_n t)$; however, since the string starts off at rest (so that $\partial_t A(x,t) = 0$ at $t = 0$), the coefficients of the $\sin(\omega_n t)$ functions must all vanish. At time $t = 0$, the amplitude is

$$A(x,0) = \sum_{n=0}^{\infty}\left[ a_n \cos\!\left(\frac{2n\pi}{L}x\right) + b_n \sin\!\left(\frac{2n\pi}{L}x\right)\right] \qquad (32)$$

This is just the Fourier decomposition of the function described by our pluck shape. If we approximate the pluck as the sawtooth function from the previous section, then we already know that

$$a_n = 0, \qquad b_n = -\frac{1}{\pi n} \qquad (33)$$

So that, setting $L = 1$,

$$A(x,t) = \sum_{n=1}^{\infty}\left(-\frac{1}{\pi n}\right)\sin(2\pi n x)\cos(2\pi n v t) \qquad (34)$$

This gives the motion of the string for all time. The relative amplitudes of each mode are

Figure 4. Relative amplitudes of the harmonics of a string plucked with a sawtooth profile.

The $n = 1$ mode is the largest. This is the fundamental frequency of the string. Thus the sound that comes out of the string will be mostly this frequency: $\omega_1 = \frac{2\pi v}{L}$. The modes with $n > 1$ are the harmonics. Harmonics have frequencies which are integer multiples of the fundamental.
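To see the string actually move, we can evaluate a truncated version of Eq. (34). Below is a minimal sketch (assuming numpy; the wave speed $v = 1$, the 50-mode cutoff and the sample times are arbitrary choices) that prints the shape of the string at a few times along with the relative mode amplitudes $\frac{1}{\pi n}$ behind Figure 4.

```python
import numpy as np

def string_shape(x, t, v=1.0, n_modes=50):
    """Truncated Eq. (34) with L = 1: A(x,t) = -sum_n sin(2 pi n x) cos(2 pi n v t)/(pi n)."""
    n = np.arange(1, n_modes + 1)[:, None]
    return -np.sum(np.sin(2 * np.pi * n * x) * np.cos(2 * np.pi * n * v * t) / (np.pi * n), axis=0)

x = np.linspace(0.0, 1.0, 9)
for t in (0.0, 0.1, 0.25, 0.5):
    print("t =", t, np.round(string_shape(x, t), 3))

# Relative amplitudes of the harmonics (cf. Figure 4): the n = 1 fundamental dominates.
print([round(1 / (np.pi * n), 3) for n in range(1, 6)])
```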

5 Exponentials

Fourier series decompositions are even easier with complex numbers. There we can replace the sines and cosines by exponentials. The series is

$$f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{\frac{2\pi i n x}{L}} \qquad (35)$$

where

$$c_n = \frac{1}{L}\int_0^L dx\, f(x)\, e^{-\frac{2\pi i n x}{L}} \qquad (36)$$

To check this, we substitute in

$$\int_0^L dx\, f(x)\, e^{-\frac{2\pi i n x}{L}} = \sum_{m=-\infty}^{\infty} c_m \int_0^L dx\, e^{\frac{2\pi i m x}{L}}\, e^{-\frac{2\pi i n x}{L}} = \sum_{m=-\infty}^{\infty} c_m \int_0^L dx\, e^{\frac{2\pi i (m-n) x}{L}} \qquad (37)$$

If $n \neq m$ then,

$$\int_0^L dx\, e^{\frac{2\pi i (m-n)x}{L}} = \frac{L}{2\pi i\,(m-n)}\left. e^{\frac{2\pi i (m-n)x}{L}}\right|_0^L \qquad (38)$$

$$= \frac{L}{2\pi i\,(m-n)}\left( e^{2\pi i (m-n)} - 1\right) = 0 \qquad (39)$$

If $m = n$, then the integral is just

$$\int_0^L dx = L \qquad (40)$$

Thus,

$$\int_0^L dx\, e^{\frac{2\pi i (m-n)x}{L}} = L\,\delta_{mn} \qquad (41)$$

and so

$$\int_0^L dx\, f(x)\, e^{-\frac{2\pi i n x}{L}} = L\, c_n \qquad (42)$$

If $f(x)$ is real, then

$$f(x) = c_0 + \sum_{n=1}^{\infty}\left[ \mathrm{Re}(c_n + c_{-n})\cos\!\left(\frac{2\pi n x}{L}\right) + \mathrm{Im}(c_{-n} - c_n)\sin\!\left(\frac{2\pi n x}{L}\right)\right] \qquad (43)$$

So $a_n = \mathrm{Re}(c_n + c_{-n})$ and $b_n = \mathrm{Im}(c_{-n} - c_n)$. Thus, the exponential series contains all the information in both the sine and cosine series in an efficient form.
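As a cross-check on Eqs. (36) and (43), here is a minimal numerical sketch (assuming numpy; the helper name `c_n` and the grid size are my own choices). For the sawtooth $f(x) = x$ on $[0,1)$ one expects $c_n = \frac{i}{2\pi n}$, so $\mathrm{Re}(c_n + c_{-n}) = 0 = a_n$ and $\mathrm{Im}(c_{-n} - c_n) = -\frac{1}{\pi n} = b_n$, matching Section 3.

```python
import numpy as np

def c_n(f, n, L=1.0, num=200_000):
    """Eq. (36): c_n = (1/L) * int_0^L f(x) exp(-2*pi*i*n*x/L) dx, by midpoint rule."""
    dx = L / num
    x = (np.arange(num) + 0.5) * dx
    return np.sum(f(x) * np.exp(-2j * np.pi * n * x / L)) * dx / L

saw = lambda x: x                           # sawtooth over one period, L = 1
for n in range(1, 4):
    cn, cmn = c_n(saw, n), c_n(saw, -n)
    print(n, np.round(cn, 5),               # expect i/(2*pi*n)
          round((cn + cmn).real, 5),        # a_n = Re(c_n + c_-n) ~ 0
          round((cmn - cn).imag, 5))        # b_n = Im(c_-n - c_n) ~ -1/(pi*n)
```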

6 Orthogonal functions (optional)

In verifying Fourier’s theorem, we found the relevant integral equations

$$\frac{1}{L}\int_0^L dx\, e^{\frac{2\pi i (m-n)x}{L}} = \delta_{mn} \qquad (44)$$

$$\frac{1}{L}\int_0^L dx\, \cos\!\left(\frac{2\pi m}{L}x\right)\sin\!\left(\frac{2\pi n}{L}x\right) = 0 \qquad (45)$$

$$\frac{1}{L}\int_0^L dx\, \cos\!\left(\frac{2\pi m}{L}x\right)\cos\!\left(\frac{2\pi n}{L}x\right) = \frac{1}{2}\delta_{nm} \qquad (46)$$

$$\frac{1}{L}\int_0^L dx\, \sin\!\left(\frac{2\pi m}{L}x\right)\sin\!\left(\frac{2\pi n}{L}x\right) = \frac{1}{2}\delta_{nm} \qquad (47)$$

These are examples of orthogonal functions. The integral is a type of inner product. The dot-product among vectors is another example of an inner product. We can write the inner product in various ways

$$\langle v | w \rangle \equiv \vec{v}\cdot\vec{w} = \sum_i v_i w_i \qquad (48)$$

The integral inner product is a generalization of this from vectors of numbers to functions. We can define the inner product of two functions as

$$\langle f | g \rangle = \frac{1}{2\pi}\int_0^{2\pi} dx\, f^\star(x)\, g(x) \qquad (49)$$

where $f^\star(x)$ is the complex conjugate of $f(x)$. For example,

$$\langle e^{imx} | e^{inx} \rangle = \delta_{mn} \qquad (50)$$

This is the analog of

$$\langle x_i | x_j \rangle = \delta_{ij} \qquad (51)$$

where $|x_i\rangle = (0, \cdots, 0, 1, 0, \cdots, 0)$ with the 1 in the $i$-th component. That is, the $|x_i\rangle$ are the unit vectors. When a set of functions satisfy

$$\langle f_i | f_j \rangle = \delta_{ij} \qquad (52)$$

we say that they are orthonormal. The ortho part means they are orthogonal: $\langle f_i | f_j \rangle = 0$ for $i \neq j$. The normal part means they are normalized: $\langle f_i | f_i \rangle = 1$. If any function can be written as a linear combination of the functions $f_i$, we say that the set $\{f_i\}$ is complete. Then

$$f(x) = \sum_i a_i f_i(x) \qquad (53)$$

We can extract $a_i$ via

$$\langle f_i | f \rangle = \sum_j a_j \langle f_i | f_j \rangle = \sum_j a_j \delta_{ij} = a_i \qquad (54)$$
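To make this concrete, here is a minimal numerical sketch of the inner product in Eq. (49) (assuming numpy; the grid resolution and the test coefficients are arbitrary choices): it reproduces the orthonormality relation (50) and extracts expansion coefficients exactly as in Eq. (54).

```python
import numpy as np

num = 200_000
dx = 2 * np.pi / num
x = (np.arange(num) + 0.5) * dx              # midpoints on [0, 2*pi]

def inner(f, g):
    """Eq. (49): <f|g> = (1/2pi) * int_0^{2pi} f*(x) g(x) dx, by midpoint rule."""
    return np.sum(np.conj(f(x)) * g(x)) * dx / (2 * np.pi)

def basis(n):
    return lambda x: np.exp(1j * n * x)      # the orthonormal exponentials of Eq. (50)

# Eq. (50): <e^{imx}|e^{inx}> = delta_mn
print(np.round([[inner(basis(m), basis(n)) for n in range(3)] for m in range(3)], 5))

# Eq. (54): build f = sum_i a_i f_i with known a_i and recover them via <f_i|f>.
a = [0.5, -1.2, 2.0]                         # arbitrary coefficients
f = lambda x: sum(ai * basis(i)(x) for i, ai in enumerate(a))
print([np.round(inner(basis(i), f), 5) for i in range(3)])
```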

This is exactly what we did with the Fourier decomposition above. It is also what we do with vectors

$$\vec{v} = \sum_i c_i\, \vec{x}_i \qquad (55)$$

Then $c_i = \langle v | x_i \rangle$, which is just the $i$-th component of $\vec{v}$. We will see various sets of orthonormal function bases with different inner products come up in physics. Other examples are:

Bessel functions $J_n(x)$. These functions are the solutions to the differential equation

$$x^2 f''(x) + x f'(x) + (x^2 - n^2) f(x) = 0 \qquad (56)$$

For a fixed order $n$ they satisfy the orthogonality condition

$$\langle J_n | J_n \rangle = \int_0^1 dx\, x\, J_n(\alpha_{nk}x)\, J_n(\alpha_{nl}x) = \frac{1}{2}J_{n+1}(\alpha_{nk})^2\, \delta_{kl} \qquad (57)$$

where $\alpha_{nk}$ is the $k$-th zero of $J_n$. Bessel functions come up in 2-dimensional problems. You will start to see them all over the place in physics.

Legendre polynomials $P_n(x)$. These satisfy $P_0(x) = 1$, $P_1(x) = x$ and

$$(n+1)P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x) \qquad (58)$$

Their inner product is

$$\langle P_n | P_m \rangle = \int_{-1}^{1} dx\, P_n(x)\, P_m(x) = \frac{2}{2n+1}\,\delta_{nm} \qquad (59)$$

Legendre polynomials come up in problems with spherical symmetry. You will study them to death in later courses.

Hermite polynomials are defined by

$$H_n(x) = (-1)^n\, e^{\frac{x^2}{2}}\, \frac{d^n}{dx^n}\, e^{-\frac{x^2}{2}} \qquad (60)$$

So $H_0(x) = 1$, $H_1(x) = x$, $H_2(x) = x^2 - 1$ and so on. These satisfy

$$\langle H_n | H_m \rangle = \int_{-\infty}^{\infty} dx\, e^{-\frac{x^2}{2}}\, H_n(x)\, H_m(x) = \sqrt{2\pi}\; n!\; \delta_{nm} \qquad (61)$$

Hermite polynomials play a critical role in the quantum harmonic oscillator.
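These orthogonality relations are easy to verify numerically. Here is a minimal sketch (assuming numpy; `HermiteE` is numpy's class for the probabilists' convention of Eq. (60), and the grid choices are arbitrary) checking Eqs. (59) and (61) for the first few polynomials.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from numpy.polynomial.hermite_e import HermiteE
from math import factorial, sqrt, pi

# Legendre: int_{-1}^{1} P_n(x) P_m(x) dx = 2/(2n+1) delta_nm   (Eq. (59))
num = 200_000
dx = 2.0 / num
x = -1.0 + (np.arange(num) + 0.5) * dx
for n in range(4):
    for m in range(4):
        val = np.sum(Legendre.basis(n)(x) * Legendre.basis(m)(x)) * dx
        expected = 2 / (2 * n + 1) if n == m else 0.0
        assert abs(val - expected) < 1e-3

# Probabilists' Hermite: int e^{-x^2/2} H_n(x) H_m(x) dx = sqrt(2 pi) n! delta_nm   (Eq. (61))
dy = 40.0 / 400_000
y = -20.0 + (np.arange(400_000) + 0.5) * dy
w = np.exp(-y**2 / 2)
for n in range(4):
    for m in range(4):
        val = np.sum(w * HermiteE.basis(n)(y) * HermiteE.basis(m)(y)) * dy
        expected = sqrt(2 * pi) * factorial(n) if n == m else 0.0
        assert abs(val - expected) < 1e-3

print("orthogonality relations (59) and (61) check out")
```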