
Part I

1 Vector Spaces

We are going to learn

- Vector space.
- Linearly independent vectors, linearly dependent vectors.
- Spanning set of a vector space.
- Basis of a vector space.
- Subspace of a vector space.
- Dimension of a vector space.

Definition 1

A vector space over $F$ is a set $X$ on which two operations

\[ + : X \times X \to X, \qquad \cdot : F \times X \to X \]

are defined such that:

1. $X$ is a commutative group under addition.
2. Scalar multiplication satisfies $a(bx) = (ab)x$ and $1x = x$ for all $a, b \in F$, $x \in X$.
3. The two distributive properties hold: $a(x + y) = ax + ay$ and $(a + b)x = ax + bx$.

The elements of $X$ are called vectors and those of $F$ are called scalars.

Remark 2 (Real and complex vector spaces)

1. $F = \mathbb{R}$ $\Rightarrow$ $X$ is a real vector space.
2. $F = \mathbb{C}$ $\Rightarrow$ $X$ is a complex vector space.
3. We often write $ax$ instead of $a \cdot x$.

Example 3 (Vector spaces or not! Addition? Scalar multiplication?)

1. The set of $n$-tuples of real numbers, i.e. $\mathbb{R}^n = \{(x_1, x_2, \dots, x_n) : x_i \in \mathbb{R}\}$.

2. The set of $n$-tuples of complex numbers, i.e. $\mathbb{C}^n = \{(x_1, x_2, \dots, x_n) : x_i \in \mathbb{C}\}$.

3. The set $\mathbb{C}^n$ over $\mathbb{R}$.

4. The set $\mathbb{R}^n$ over $\mathbb{C}$.

5. The set $\mathcal{P}_m(I)$ of polynomials on an interval $I$ with real coefficients and degree $\le m$, i.e.
\[ \mathcal{P}_m(I) = \{a_n x^n + \dots + a_1 x + a_0 : a_i \in \mathbb{R},\ n \le m\}, \]
over $\mathbb{R}$. When $I = \mathbb{R}$, we write $\mathcal{P}_m(\mathbb{R}) = \mathcal{P}_m$.

6. The set $\mathcal{P}(I)$ of polynomials on an interval $I$ with real coefficients, i.e.
\[ \mathcal{P}(I) = \{a_n x^n + \dots + a_1 x + a_0 : a_i \in \mathbb{R},\ n \in \mathbb{N}_0\}, \]
where $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. When $I = \mathbb{R}$, we write $\mathcal{P}(\mathbb{R}) = \mathcal{P}$.

7. The set $\mathcal{C}^m(I)$ of real functions on an interval $I$ whose first $m$ derivatives are continuous, i.e.
\[ \mathcal{C}^m(I) = \{f : f^{(n)} \in \mathcal{C}(I),\ n \in \{0, 1, \dots, m\}\}, \]
where $\mathcal{C}(I)$ is $\mathcal{C}^{(0)}(I)$. When $I = \mathbb{R}$, we write $\mathcal{C}^m(\mathbb{R}) = \mathcal{C}^m$.

8. The set $\mathcal{C}^\infty(I)$ of real functions on an interval $I$ all of whose derivatives are continuous, i.e.
\[ \mathcal{C}^\infty(I) = \bigcap_{i=1}^{\infty} \mathcal{C}^i(I). \]
When $I = \mathbb{R}$, we write $\mathcal{C}^\infty(\mathbb{R}) = \mathcal{C}^\infty$.

Definition 4

Let $\{x_1, \dots, x_n\}$ be any finite set of vectors in a vector space $X$. The sum
\[ a_1 x_1 + \dots + a_n x_n, \qquad a_i \in F, \]
or, in short notation,
\[ \sum_{i=1}^{n} a_i x_i, \]
is called a linear combination of the vectors in the set, and the scalars $a_i$ are called coefficients.

Definition 5

1. A finite set of vectors $\{x_1, \dots, x_n\}$ is said to be linearly independent if
\[ \sum_{i=1}^{n} a_i x_i = 0 \ \Rightarrow\ a_i = 0 \quad \forall i \in \{1, \dots, n\}. \]

2. An infinite set of vectors $\{x_1, x_2, \dots\}$ is said to be linearly independent if every finite subset of it is linearly independent.

Definition 6

The set $\{x_1, \dots, x_n\}$ is said to be linearly dependent if it is not linearly independent. A finite set of vectors is linearly dependent iff one of the vectors can be represented as a linear combination of the others.

Definition 7

A set of vectors $\mathcal{A}$ in a vector space $X$ is said to span $X$ if every vector in $X$ can be written as a linear combination of the vectors in $\mathcal{A}$.

Definition 8

If a set $\mathcal{A}$ spans a vector space $X$ and is linearly independent, then $\mathcal{A}$ is called a basis of $X$.

Definition 9

1. If a vector space $X$ has a finite basis, then every other basis of $X$ is finite with the same number of vectors. This number is called the dimension of $X$ and is denoted by $\dim X$.

2. If a vector space $X$ has an infinite basis, then we write $\dim X = \infty$.

Definition 10

A subset $Y$ of a vector space $X$ is called a subspace of $X$ if every linear combination of vectors in $Y$ lies in $Y$. In other words, the set $Y$ is closed under addition and scalar multiplication.

Example 11 (Basis and dimension)

1. The real vector space $\mathbb{R}^n$.
2. The complex vector space $\mathbb{C}^n$.
3. The real vector space $\mathbb{C}^n$.
4. The real vector space $\mathcal{P}_m(I)$.

Exercise 12 (Homework)

1. Prove that $\mathcal{C}^m$ is a vector space for each $m \in \mathbb{N}_0$.
2. Prove that $\mathcal{C}^\infty$ is a vector space (Hint: use the concept of subspace).
3. Find a basis for $\mathcal{P}$.
4. What is the dimension of $\mathcal{P}$?
5. What is the dimension of $\mathcal{C}^\infty$? (Use the relation between $\mathcal{P}$ and $\mathcal{C}^\infty$.)

2 Inner Product Space

We are going to learn

- Inner product space.
- Norm of a vector.
- Cauchy-Bunyakowsky-Schwarz inequality.
- Triangle inequality.
- Orthogonality of vectors.
- Orthogonal and orthonormal sets.
- Relation between linear independence and orthogonality.

Definition 13

Let $X$ be a vector space over $F$. A function from $X \times X$ to $F$ is called an inner product on $X$ if, for any pair of vectors $x, y \in X$, the inner product
\[ (x, y) \mapsto \langle x, y \rangle \in F \]
satisfies the following conditions:

1. $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in X$.
2. $\langle ax + by, z \rangle = a \langle x, z \rangle + b \langle y, z \rangle$ for all $a, b \in F$, $x, y, z \in X$.
3. $\langle x, x \rangle \ge 0$ for all $x \in X$.
4. $\langle x, x \rangle = 0 \Leftrightarrow x = 0$.

A vector space on which an inner product is defined is called an inner product space.

Example 14 (Inner product space)

1. The $n$-dimensional Euclidean space is the vector space $\mathbb{R}^n$ together with the inner product of the vectors $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$ defined by
\[ \langle x, y \rangle = x_1 y_1 + \dots + x_n y_n. \]

2. The vector space $\mathbb{R}^n$ together with the inner product
\[ \langle x, y \rangle = c x_1 y_1 + \dots + c x_n y_n, \]
where $c > 0$, is also an inner product space.

3. In $\mathbb{C}^n$ we define the inner product of $z = (z_1, \dots, z_n)$ and $w = (w_1, \dots, w_n)$ by
\[ \langle z, w \rangle = z_1 \overline{w_1} + \dots + z_n \overline{w_n}. \]

4. For $f, g \in \mathcal{C}([a, b])$ we define their inner product by
\[ \langle f, g \rangle = \int_a^b f(x) \overline{g(x)}\, dx. \]

Theorem 15

If $X$ is an inner product space, then
\[ |\langle x, y \rangle|^2 \le \langle x, x \rangle \langle y, y \rangle \quad \text{for all } x, y \in X. \tag{CBS} \]

Proof:

CASE 1: If $x = 0$ or $y = 0$, the CBS inequality clearly holds.

CASE 2: Suppose $x \ne 0$ and $y \ne 0$. If $\langle x, y \rangle \in \mathbb{C}$, then we can write
\[ \langle x, y \rangle = |\langle x, y \rangle|\, e^{i\theta}, \]
so that
\[ \langle e^{-i\theta} x, y \rangle = e^{-i\theta} \langle x, y \rangle = |\langle x, y \rangle| \in \mathbb{R} \]
and
\[ |\langle e^{-i\theta} x, y \rangle|^2 = |\langle x, y \rangle|^2. \]
So, by taking $e^{-i\theta} x$, for which $\langle e^{-i\theta} x, y \rangle \in \mathbb{R}$, instead of $x$, the CBS inequality remains the same. Therefore, we assume without loss of generality that $\langle x, y \rangle \in \mathbb{R}$.

For any $t \in \mathbb{R}$, the real quadratic expression
\[ 0 \le \langle x + ty, x + ty \rangle = \langle x, x \rangle + 2 \langle x, y \rangle t + \langle y, y \rangle t^2 \tag{1.1} \]
has a minimum at $t = -\frac{\langle x, y \rangle}{\langle y, y \rangle}$.

Substituting $t = -\frac{\langle x, y \rangle}{\langle y, y \rangle}$ in (1.1) gives
\[ 0 \le \langle x, x \rangle - \frac{\langle x, y \rangle^2}{\langle y, y \rangle}, \]
or
\[ \langle x, y \rangle^2 \le \langle x, x \rangle \langle y, y \rangle, \]
which is the desired inequality.
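As a quick numerical sanity check (not a substitute for the proof), the CBS inequality can be tested on random vectors in $\mathbb{R}^5$ with the standard inner product; a minimal sketch, where the dimension and sample count are arbitrary choices:

```python
import random

def dot(x, y):
    """Standard inner product <x, y> on R^n."""
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = [random.uniform(-1, 1) for _ in range(5)]
    # |<x,y>|^2 <= <x,x><y,y>, with a small floating-point slack
    assert dot(x, y) ** 2 <= dot(x, x) * dot(y, y) + 1e-12
print("CBS holds on all sampled pairs")
```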

Definition 16

Let $X$ be an inner product space over $F$. We define the norm on $X$,
\[ \|\cdot\| : X \to [0, \infty), \]
by
\[ \|x\| = \sqrt{\langle x, x \rangle}. \]

Example 17 (Norm of an inner product space)

1. In the Euclidean space $\mathbb{R}^n$ with $\langle x, y \rangle = x_1 y_1 + \dots + x_n y_n$, this gives
\[ \|x\| = \sqrt{x_1^2 + \dots + x_n^2}. \]

2. In $\mathbb{R}^n$ with $\langle x, y \rangle = c x_1 y_1 + \dots + c x_n y_n$, $c > 0$, this gives
\[ \|x\| = \sqrt{c x_1^2 + \dots + c x_n^2}. \]

3. In $\mathbb{C}^n$ with $\langle z, w \rangle = z_1 \overline{w_1} + \dots + z_n \overline{w_n}$, this gives
\[ \|z\| = \sqrt{|z_1|^2 + \dots + |z_n|^2}. \]

4. For $f, g \in \mathcal{C}([a, b])$ with $\langle f, g \rangle = \int_a^b f(x) \overline{g(x)}\, dx$, this gives
\[ \|f\| = \left[ \int_a^b |f(x)|^2\, dx \right]^{1/2}. \]

Properties of the norm (Homework: prove properties 1, 2, 3)

1. $\|x\| \ge 0$ for all $x \in X$.
2. $\|x\| = 0 \Leftrightarrow x = 0$.
3. $\|ax\| = |a|\, \|x\|$ for all $a \in F$, $x \in X$.
4. $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in X$ [Triangle inequality].

Remark 18 (Another form of the CBS inequality)

If $X$ is an inner product space, then
\[ |\langle x, y \rangle| \le \|x\|\, \|y\| \quad \text{for all } x, y \in X. \]

Proof (of the triangle inequality).

\begin{align*}
\|x + y\|^2 &= \langle x + y, x + y \rangle \\
&= \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle \\
&= \|x\|^2 + \langle x, y \rangle + \overline{\langle x, y \rangle} + \|y\|^2 \\
&= \|x\|^2 + 2 \operatorname{Re} \langle x, y \rangle + \|y\|^2 \\
&\le \|x\|^2 + 2 |\langle x, y \rangle| + \|y\|^2 \\
&\le \|x\|^2 + 2 \|x\|\, \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2.
\end{align*}
Taking the square root of both sides gives the triangle inequality.

Definition 19

Let $X$ be an inner product space. The distance between the vectors $x \in X$ and $y \in X$ is given by $\|x - y\|$.

Remark 20 (Geometrical interpretation of the triangle inequality)

Consider a triangle with vertices $x, y, z$; then
\[ \|x - y\| = \|x - z + z - y\| \le \|x - z\| + \|z - y\|. \]

Remark 21 (Derivation of CBS and the triangle inequality in $\mathbb{R}^n$)

See book p. 10.

Concept of orthogonality in $\mathbb{R}^n$

In $\mathbb{R}^n$, the angle $\theta \in [0, \pi]$ between the vectors $x$ and $y$ is defined by
\[ \langle x, y \rangle = \|x\|\, \|y\| \cos \theta. \]
In $\mathbb{R}^2$ and $\mathbb{R}^3$, the vectors $x$ and $y$ are orthogonal if
\[ \theta = 90^\circ \Leftrightarrow \cos \theta = 0 \Leftrightarrow \langle x, y \rangle = 0. \]
In general, two nonzero vectors in $\mathbb{R}^n$ are said to be orthogonal if
\[ \cos \theta = 0 \Leftrightarrow \langle x, y \rangle = 0. \]

Definition 22 (Orthogonal vectors in an inner product space)

1. A pair of nonzero vectors $x$ and $y$ in the inner product space $X$ is said to be orthogonal if $\langle x, y \rangle = 0$, symbolically written as $x \perp y$.

2. A set $\mathcal{V}$ of nonzero vectors in $X$ is orthogonal if every pair in $\mathcal{V}$ is orthogonal.

3. An orthogonal set $\mathcal{V} \subseteq X$ is said to be orthonormal if $\|x\| = 1$ for all $x \in \mathcal{V}$.

Example 23 (An orthonormal set)

In the Euclidean space $\mathbb{R}^n$, the set $\{e_1, \dots, e_n\}$ is orthonormal.

Relation between linear independence and orthogonality

1. Let $\{x_1, x_2, \dots, x_n\}$ be an orthogonal set of vectors in the inner product space $X$; then they are necessarily linearly independent.

2. Let $\{x_1, x_2, \dots, x_n\}$ be a linearly independent set of vectors in the inner product space $X$; then we can always form an orthogonal set of vectors from it. [Give an example of a linearly independent set that is not orthogonal!]

To show that 1 is true, let
\[ a_1 x_1 + a_2 x_2 + \dots + a_n x_n = 0; \]
then, by taking the inner product of both sides of the above equation with $x_k$, $k \in \{1, \dots, n\}$, we have
\[ \langle a_1 x_1 + a_2 x_2 + \dots + a_n x_n, x_k \rangle = \langle 0, x_k \rangle, \]
or
\[ \sum_{i=1}^{n} a_i \langle x_i, x_k \rangle = 0, \]
but $\langle x_i, x_k \rangle = 0$ for all $i \ne k$; hence
\[ a_k \|x_k\|^2 = 0, \]
which gives $a_k = 0$. This is true for all $k \in \{1, \dots, n\}$.

The second part is established using the Gram-Schmidt method. We need the following definition:

Definition 24

If $x$ and $y \ne 0$ are any vectors in the inner product space $X$, then the projection of $x$ on $y$ is given by
\[ \left\langle x, \frac{y}{\|y\|} \right\rangle \]
and the projection vector of $x$ along $y$ is given by
\[ \left\langle x, \frac{y}{\|y\|} \right\rangle \frac{y}{\|y\|} \quad \text{or} \quad \frac{\langle x, y \rangle}{\|y\|^2}\, y. \]

Gram-Schmidt method

Let $\{x_1, x_2, \dots, x_n\}$ be a linearly independent set of vectors. Construct $\{y_1, y_2, \dots, y_n\}$ as follows:
\begin{align*}
y_1 &= x_1, \\
y_2 &= x_2 - \frac{\langle x_2, y_1 \rangle}{\|y_1\|^2} y_1, \\
y_3 &= x_3 - \frac{\langle x_3, y_1 \rangle}{\|y_1\|^2} y_1 - \frac{\langle x_3, y_2 \rangle}{\|y_2\|^2} y_2, \\
&\ \,\vdots \\
y_n &= x_n - \frac{\langle x_n, y_1 \rangle}{\|y_1\|^2} y_1 - \dots - \frac{\langle x_n, y_{n-1} \rangle}{\|y_{n-1}\|^2} y_{n-1};
\end{align*}
then the set $\{y_1, y_2, \dots, y_n\}$ is orthogonal [Homework: Justify!].
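The construction above translates directly into code; a minimal sketch for vectors in $\mathbb{R}^n$ with the standard inner product (the function names and test vectors are illustrative choices, not from the text):

```python
def dot(x, y):
    """Standard inner product <x, y> on R^n."""
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(xs):
    """Orthogonalize a linearly independent list: y_k = x_k minus its
    projections onto the previously built y_1, ..., y_{k-1}."""
    ys = []
    for x in xs:
        y = list(x)
        for u in ys:
            c = dot(x, u) / dot(u, u)          # <x_k, y_i> / ||y_i||^2
            y = [yi - c * ui for yi, ui in zip(y, u)]
        ys.append(y)
    return ys

ys = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# every distinct pair should be orthogonal up to rounding
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(ys[i], ys[j])) < 1e-12
```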

3 The Space $\mathcal{L}^2$

We are going to learn

- Why the $\mathcal{L}^2$ space?
- What is the $\mathcal{L}^2$ space?
- Properties of the $\mathcal{L}^2$ space.
- Inner product with respect to a weight function.

Recall that we defined an inner product on the vector space $\mathcal{C}([a, b])$ by
\[ \langle f, g \rangle = \int_a^b f(x) \overline{g(x)}\, dx. \]
The associated norm is given by
\[ \|f\| = \left[ \int_a^b |f(x)|^2\, dx \right]^{1/2}. \]

1. In the vector space $\mathcal{C}([a, b])$, a sequence of functions might converge (in a sense to be defined later) to a limit that is not in $\mathcal{C}([a, b])$. This is problematic, and thus we need to enlarge the space to avoid this problem.

2. The larger space, which we will denote by $X([a, b])$, must be chosen such that the inner product
\[ \langle f, g \rangle = \int_a^b f(x) \overline{g(x)}\, dx \]
is defined for all $f, g \in X([a, b])$. That is, we need to make sure that for all $f, g \in X([a, b])$,
\[ \left| \int_a^b f(x) \overline{g(x)}\, dx \right| < \infty. \]
Using the CBS inequality we have
\[ \left| \int_a^b f(x) \overline{g(x)}\, dx \right| = |\langle f, g \rangle| \le \|f\|\, \|g\| = \left[ \int_a^b |f(x)|^2\, dx \right]^{1/2} \left[ \int_a^b |g(x)|^2\, dx \right]^{1/2}. \]
Therefore, $\langle f, g \rangle$ is defined if $|f|^2$ and $|g|^2$ are both integrable.

3. Define $\mathcal{L}^2(a, b)$ to be the set of functions $f : [a, b] \to \mathbb{C}$ such that $|f|^2$ is integrable, i.e.
\[ \int_a^b |f(x)|^2\, dx < \infty; \]
then:

(a) $\mathcal{L}^2(a, b)$ is a vector space. For all $f, g \in \mathcal{L}^2(a, b)$, the triangle inequality gives
\[ \|f + g\| \le \|f\| + \|g\| < \infty, \]
i.e., $f + g \in \mathcal{L}^2(a, b)$.

(b) $\mathcal{C}([a, b])$ is a proper subset of $\mathcal{L}^2(a, b)$. For example, take
\[ h(x) = \begin{cases} 1, & x = 1 \\ 0, & x \in (1, 2] \end{cases}; \]
then $h \in \mathcal{L}^2(1, 2)$ but $h$ is not continuous.

(c) In $\mathcal{L}^2(a, b)$,
\[ f = 0 \Leftrightarrow \|f\| = 0, \]
which is not equivalent to $f(x) = 0$ for all $x \in [a, b]$. Note that $f = 0$ in $\mathcal{L}^2(a, b)$ if it is zero on all but a finite number of points of the interval.

(d) We say that $f$ and $g$ are equal in $\mathcal{L}^2(a, b)$ if $\|f - g\| = 0$.

(e) In $\mathcal{L}^2(a, b)$ the integral is not affected by the endpoints of the interval $(a, b)$. Therefore, $\mathcal{L}^2(a, b)$, $\mathcal{L}^2([a, b))$, $\mathcal{L}^2((a, b])$, and $\mathcal{L}^2([a, b])$ are all the same. Also, the interval can be unbounded, and we have $\mathcal{L}^2(-\infty, b)$, $\mathcal{L}^2(a, \infty)$, $\mathcal{L}^2(-\infty, \infty)$.

Example 25 (1.10)

Determine which of the following functions belong to $\mathcal{L}^2$ and compute the norm of each.

(i) $f(x) = \begin{cases} 1, & 0 \le x \le \frac{1}{2} \\ 0, & \frac{1}{2} < x \le 1 \end{cases}$

(ii) $f(x) = \frac{1}{\sqrt{x}}$, $0 \le x \le 1$.

(iii) $f(x) = \frac{1}{\sqrt[3]{x}}$, $0 \le x \le 1$.

(iv) $f(x) = \frac{1}{x}$, $1 \le x < \infty$.

Solution

(i)
\[ \|f\|^2 = \int_0^1 f^2(x)\, dx = \int_0^{1/2} 1\, dx = \frac{1}{2}. \]
Hence $f \in \mathcal{L}^2(0, 1)$ and $\|f\| = \frac{1}{\sqrt{2}}$.

(ii)
\[ \|f\|^2 = \int_0^1 \frac{1}{x}\, dx = \lim_{r \to 0^+} \int_r^1 \frac{1}{x}\, dx = \lim_{r \to 0^+} \ln x \Big|_r^1 = \lim_{r \to 0^+} (-\ln r) = \infty. \]
Hence $f \notin \mathcal{L}^2(0, 1)$.

(iii)
\[ \|f\|^2 = \int_0^1 \frac{1}{x^{2/3}}\, dx = \lim_{r \to 0^+} \int_r^1 x^{-2/3}\, dx = \lim_{r \to 0^+} 3 x^{1/3} \Big|_r^1 = \lim_{r \to 0^+} 3 \left( 1 - r^{1/3} \right) = 3. \]
Hence $f \in \mathcal{L}^2(0, 1)$ and $\|f\| = \sqrt{3}$.

(iv)
\[ \|f\|^2 = \int_1^\infty \frac{1}{x^2}\, dx = \lim_{r \to \infty} \int_1^r \frac{1}{x^2}\, dx = \lim_{r \to \infty} \left( -\frac{1}{x} \right) \Big|_1^r = \lim_{r \to \infty} \left( 1 - \frac{1}{r} \right) = 1. \]
Hence $f \in \mathcal{L}^2(1, \infty)$ and $\|f\| = 1$.

Example 26 (1.11)

Consider the infinite set of functions $\mathcal{V} = \{1, \cos x, \sin x, \cos 2x, \sin 2x, \dots\}$.

(i) Prove that $\mathcal{V}$ is orthogonal in the real inner product space $\mathcal{L}^2(-\pi, \pi)$.
(ii) Construct an orthonormal set using $\mathcal{V}$.

Solution

First note that $\mathcal{V} \subseteq \mathcal{L}^2(-\pi, \pi)$.

(i) We need to show that each pair of vectors in $\mathcal{V}$ is orthogonal, i.e.
\[ \langle f, g \rangle = 0 \quad \forall f, g \in \mathcal{V},\ f \ne g. \]

1. For all $n \in \mathbb{N}$,
\[ \langle 1, \cos nx \rangle = \int_{-\pi}^{\pi} \cos nx\, dx = \frac{\sin nx}{n} \Big|_{-\pi}^{\pi} = 0. \]

2. For all $n \in \mathbb{N}$,
\[ \langle 1, \sin nx \rangle = \int_{-\pi}^{\pi} \sin nx\, dx = -\frac{\cos nx}{n} \Big|_{-\pi}^{\pi} = 0. \]

3. For all $n, m \in \mathbb{N}$, $n \ne m$,
\begin{align*}
\langle \cos nx, \cos mx \rangle &= \int_{-\pi}^{\pi} \cos nx \cos mx\, dx \\
&= \frac{1}{2} \int_{-\pi}^{\pi} [\cos (n - m)x + \cos (n + m)x]\, dx \\
&= \frac{1}{2} \left[ \frac{1}{n - m} \sin (n - m)x + \frac{1}{n + m} \sin (n + m)x \right]_{-\pi}^{\pi} \\
&= 0.
\end{align*}

4. For all $n, m \in \mathbb{N}$, $n \ne m$,
\begin{align*}
\langle \sin nx, \sin mx \rangle &= \int_{-\pi}^{\pi} \sin nx \sin mx\, dx \\
&= \frac{1}{2} \int_{-\pi}^{\pi} [\cos (n - m)x - \cos (n + m)x]\, dx \\
&= \frac{1}{2} \left[ \frac{1}{n - m} \sin (n - m)x - \frac{1}{n + m} \sin (n + m)x \right]_{-\pi}^{\pi} \\
&= 0.
\end{align*}

5. For all $n, m \in \mathbb{N}$,
\[ \langle \cos nx, \sin mx \rangle = \int_{-\pi}^{\pi} \cos nx \sin mx\, dx = 0, \]
since the integrand is odd.

(ii) We normalize all the vectors in $\mathcal{V}$ by dividing each vector by its norm:
\begin{align*}
\|1\| &= \sqrt{\int_{-\pi}^{\pi} dx} = \sqrt{2\pi}, \\
\|\cos nx\| &= \sqrt{\int_{-\pi}^{\pi} \cos^2 nx\, dx} = \sqrt{\int_{-\pi}^{\pi} \frac{1 + \cos 2nx}{2}\, dx} = \sqrt{\pi}, \\
\|\sin nx\| &= \sqrt{\int_{-\pi}^{\pi} \sin^2 nx\, dx} = \sqrt{\int_{-\pi}^{\pi} \frac{1 - \cos 2nx}{2}\, dx} = \sqrt{\pi}.
\end{align*}
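These orthogonality relations and norms can be spot-checked numerically with simple quadrature; a sketch using a composite midpoint rule (the step count 20000 is an arbitrary accuracy choice, not from the text):

```python
import math

def integrate(f, a, b, n=20000):
    """Composite midpoint rule for a rough quadrature check."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def ip(f, g):
    """Inner product <f, g> on L^2(-pi, pi) for real functions."""
    return integrate(lambda x: f(x) * g(x), -math.pi, math.pi)

assert abs(ip(math.cos, lambda x: math.sin(2 * x))) < 1e-6        # <cos x, sin 2x> = 0
assert abs(ip(math.cos, lambda x: math.cos(2 * x))) < 1e-6        # <cos x, cos 2x> = 0
assert abs(math.sqrt(ip(math.cos, math.cos)) - math.sqrt(math.pi)) < 1e-6        # ||cos x|| = sqrt(pi)
assert abs(math.sqrt(ip(lambda x: 1.0, lambda x: 1.0)) - math.sqrt(2 * math.pi)) < 1e-6  # ||1|| = sqrt(2 pi)
```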

Therefore, we have the orthonormal set
\[ \left\{ \frac{1}{\sqrt{2\pi}}, \frac{\cos x}{\sqrt{\pi}}, \frac{\sin x}{\sqrt{\pi}}, \frac{\cos 2x}{\sqrt{\pi}}, \frac{\sin 2x}{\sqrt{\pi}}, \dots \right\}. \]

Example 27 (1.12)

Show that the set of functions
\[ \{ e^{inx} : n \in \mathbb{Z} \} = \{ \dots, e^{-i2x}, e^{-ix}, 1, e^{ix}, e^{i2x}, \dots \} \]
is orthogonal in the complex space $\mathcal{L}^2(-\pi, \pi)$.

For $n \ne m$ we have
\begin{align*}
\langle e^{inx}, e^{imx} \rangle &= \int_{-\pi}^{\pi} e^{inx} \overline{e^{imx}}\, dx \\
&= \int_{-\pi}^{\pi} e^{inx} e^{-imx}\, dx \\
&= \int_{-\pi}^{\pi} e^{i(n - m)x}\, dx \\
&= \frac{1}{i(n - m)}\, e^{i(n - m)x} \Big|_{-\pi}^{\pi} \\
&= \frac{1}{i(n - m)} \big[ \cos (n - m)x + i \sin (n - m)x \big]_{-\pi}^{\pi} \\
&= 0.
\end{align*}

We can construct an orthonormal set by dividing each function by its norm:
\begin{align*}
\| e^{inx} \| &= \sqrt{\int_{-\pi}^{\pi} e^{inx} \overline{e^{inx}}\, dx} \\
&= \sqrt{\int_{-\pi}^{\pi} e^{inx} e^{-inx}\, dx} \\
&= \sqrt{\int_{-\pi}^{\pi} 1\, dx} \\
&= \sqrt{2\pi}, \qquad n \in \mathbb{Z}.
\end{align*}
The orthonormal set is thus given by
\[ \left\{ \frac{e^{inx}}{\sqrt{2\pi}} : n \in \mathbb{Z} \right\}. \]

Definition 28 (Inner product with respect to a weight function)

Let $\rho \in \mathcal{C}(a, b)$ and $\rho(x) > 0$ for all $x \in (a, b)$; then for $f, g \in \mathcal{C}(a, b)$ we define the inner product with respect to the weight function $\rho$ by
\[ \langle f, g \rangle_\rho = \int_a^b f(x) \overline{g(x)}\, \rho(x)\, dx \]
[verify that the above definition satisfies the inner product conditions]. The norm is therefore defined by
\[ \|f\|_\rho = \sqrt{\int_a^b |f(x)|^2 \rho(x)\, dx}. \]

The set of functions $f : (a, b) \to \mathbb{C}$ that satisfy $\|f\|_\rho < \infty$ is denoted by $\mathcal{L}^2_\rho(a, b)$. Note that for $\rho = 1$, this set is nothing but $\mathcal{L}^2(a, b)$.

If $\langle f, g \rangle_\rho = 0$, then $f$ is said to be orthogonal to $g$ with respect to the weight function $\rho$.

4 Sequences of Functions

We are going to learn

- Pointwise convergence of sequences of functions.
- Uniform convergence of sequences of functions.
- Pointwise convergence of series of functions.
- Absolute convergence of series of functions.
- Uniform convergence of series of functions.

Definition 29

A sequence of functions (real or complex)
\[ f_n : I \to F \]
is said to converge pointwise to a function
\[ f : I \to F, \]
written
\[ \lim_{n \to \infty} f_n = f, \quad \lim f_n = f, \quad \text{or} \quad f_n \to f, \]
if for every $x \in I$,
\[ \lim_{n \to \infty} f_n(x) = f(x). \]
That is, $\forall \epsilon > 0$, $\exists N(\epsilon, x)$ such that
\[ n \ge N \Rightarrow |f_n(x) - f(x)| < \epsilon. \tag{1.15} \]

Example 30 (1.14)

Find the pointwise limits of the following sequences:

(i) $f_n(x) = \frac{1}{n} \sin nx$, $x \in \mathbb{R}$.
(ii) $f_n(x) = x^n$, $x \in [0, 1]$.
(iii) $f_n(x) = \frac{nx}{1 + nx}$, $x \in [0, \infty)$.

Solution

(i) Let $f_n(x) = \frac{1}{n} \sin nx$, $x \in \mathbb{R}$. Then
\[ \lim_{n \to \infty} f_n(x) = \lim_{n \to \infty} \frac{1}{n} \sin nx = 0. \]
Hence $f_n \to f$, where $f(x) = 0$ for all $x \in \mathbb{R}$.

(ii) Let $f_n(x) = x^n$, $x \in [0, 1]$. Then
\[ \lim_{n \to \infty} x^n = \begin{cases} 0, & 0 \le x < 1 \\ 1, & x = 1 \end{cases}. \]
Thus $f_n \to f$, where
\[ f(x) = \begin{cases} 0, & 0 \le x < 1 \\ 1, & x = 1 \end{cases}. \]

(iii) Let $f_n(x) = \frac{nx}{1 + nx}$, $x \in [0, \infty)$. Then
\[ \lim_{n \to \infty} \frac{nx}{1 + nx} = \begin{cases} 0, & x = 0 \\ 1, & x > 0 \end{cases}. \]

Example 31 (1.15)

For each $n \in \mathbb{N}$, define the sequence $f_n : [0, 1] \to \mathbb{R}$ by
\[ f_n(x) = \begin{cases} 0, & x = 0 \\ n, & 0 < x \le \frac{1}{n} \\ 0, & \frac{1}{n} < x \le 1 \end{cases}. \]
Find the pointwise limit of $f_n$.

Solution

First note that $f_n(0) \to 0$. For $x > 0$, we will show that there is always an $N \in \mathbb{N}$ such that $f_n(x) = 0$ for all $n \ge N$. Now, since $x > 0$ and $\frac{1}{n} \to 0$, there exists $N(x) \in \mathbb{N}$ such that
\[ n \ge N \Rightarrow \frac{1}{n} \le \frac{1}{N} < x. \]
Therefore,
\[ \lim_{n \to \infty} f_n(x) = 0. \]

Definition 32

A sequence of functions (real or complex)
\[ f_n : I \to F \]
is said to converge uniformly to a function
\[ f : I \to F, \]
written
\[ f_n \xrightarrow{u} f, \]
if $\forall \epsilon > 0$, $\exists N(\epsilon)$ such that
\[ n \ge N \Rightarrow |f_n(x) - f(x)| < \epsilon \quad \forall x \in I. \]

Remark 33 (Relation between pointwise and uniform convergence)

Uniform convergence $\Rightarrow$ pointwise convergence.

Example 34 (1.14 revisited)

Which of the following sequences converges uniformly to its pointwise limit?

(i) $f_n(x) = \frac{1}{n} \sin nx$, $x \in \mathbb{R}$.
(ii) $f_n(x) = x^n$, $x \in [0, 1]$.
(iii) $f_n(x) = \frac{nx}{1 + nx}$, $x \in [0, \infty)$.

Solution

(i) Let $\epsilon > 0$; then for $n \ge N > \frac{1}{\epsilon}$ and for all $x \in \mathbb{R}$,
\[ |f_n(x) - f(x)| = \left| \frac{1}{n} \sin nx - 0 \right| = \frac{1}{n} |\sin nx| \le \frac{1}{n} \le \frac{1}{N} < \epsilon, \]

i.e. $f_n \xrightarrow{u} f$.

(ii) Let $0 < \epsilon < 1$. Then
\[ |f_n(x) - f(x)| = |x^n - 0| = x^n \quad \text{for } x \in [0, 1). \]
So for any $N \in \mathbb{N}$,
\[ x^N \ge \epsilon \Leftrightarrow x \ge \sqrt[N]{\epsilon}; \]
that is, for all $x \in [\sqrt[N]{\epsilon}, 1)$,
\[ |x^N - 0| \ge \epsilon, \]
i.e. $f_n \stackrel{u}{\nrightarrow} f$.

(iii) Let $0 < \epsilon < 1$. For $x > 0$,
\[ |f_n(x) - f(x)| = \left| \frac{nx}{1 + nx} - 1 \right| = \frac{1}{1 + nx}. \]
So for any $N \in \mathbb{N}$,
\[ \frac{1}{1 + Nx} \ge \epsilon \Leftrightarrow x \le \frac{1 - \epsilon}{N\epsilon}; \]
that is, for all $x \in \left( 0, \frac{1 - \epsilon}{N\epsilon} \right]$,
\[ \left| \frac{Nx}{1 + Nx} - 1 \right| \ge \epsilon, \]
i.e. $f_n \stackrel{u}{\nrightarrow} f$.

Remark 35 (1.16)

1. In the definitions of pointwise and uniform convergence, we can replace the "$<$" relation by "$\le$" and the "$\epsilon$" by "$c\epsilon$", where $c > 0$.

2. The statement
\[ |f_n(x) - f(x)| \le \epsilon \quad \forall x \in I \]
is equivalent to
\[ \sup_{x \in I} |f_n(x) - f(x)| \le \epsilon. \]
Therefore, $f_n \xrightarrow{u} f$ $\Leftrightarrow$ $\forall \epsilon > 0$, $\exists N \in \mathbb{N}$ such that
\[ n \ge N \Rightarrow \sup_{x \in I} |f_n(x) - f(x)| \le \epsilon. \]

Definition 36

A sequence of functions (real or complex)
\[ f_n : I \to F \]
converges uniformly to a function
\[ f : I \to F, \]
i.e., $f_n \xrightarrow{u} f$, if
\[ \lim_{n \to \infty} \sup_{x \in I} |f_n(x) - f(x)| = 0. \]

Example 37 (1.14 revisited)

Use the above criterion to decide which of the following sequences converges uniformly to its pointwise limit:

(i) $f_n(x) = \frac{1}{n} \sin nx$, $x \in \mathbb{R}$.
(ii) $f_n(x) = x^n$, $x \in [0, 1]$.
(iii) $f_n(x) = \frac{nx}{1 + nx}$, $x \in [0, \infty)$.

Solution

(i)
\[ |f_n(x) - f(x)| = \left| \frac{1}{n} \sin nx - 0 \right| = \frac{1}{n} |\sin nx|, \]
but
\[ 0 \le \frac{1}{n} |\sin nx| \le \frac{1}{n} \quad \forall x \in \mathbb{R}, \]
so
\[ 0 \le \sup_{x \in \mathbb{R}} \frac{1}{n} |\sin nx| \le \frac{1}{n}, \]
and therefore
\[ \lim_{n \to \infty} \sup_{x \in \mathbb{R}} \frac{1}{n} |\sin nx| = 0 \quad \text{[Why?]}. \]
Thus, $f_n \xrightarrow{u} f$.

(ii)
\[ |f_n(x) - f(x)| = \begin{cases} x^n, & x \in [0, 1) \\ 0, & x = 1 \end{cases}. \]
Hence
\[ \sup_{x \in [0, 1]} |f_n(x) - f(x)| = \sup_{x \in [0, 1)} x^n = 1. \]
Therefore,
\[ \lim_{n \to \infty} \sup_{x \in [0, 1]} |f_n(x) - f(x)| = 1 \ne 0, \]
i.e., $f_n \stackrel{u}{\nrightarrow} f$.

(iii)
\[ |f_n(x) - f(x)| = \begin{cases} 0, & x = 0 \\ \frac{1}{1 + nx}, & x > 0 \end{cases}, \]
and hence
\[ \sup_{x \in [0, \infty)} |f_n(x) - f(x)| = \sup_{x \in (0, \infty)} |f_n(x) - f(x)| = 1. \]
Therefore,
\[ \lim_{n \to \infty} \sup_{x \in [0, \infty)} |f_n(x) - f(x)| = 1 \ne 0, \]
i.e., $f_n \stackrel{u}{\nrightarrow} f$.
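The sup-norm criterion can be illustrated numerically by maximizing $|f_n - f|$ over a fine grid; a sketch (the grid size and the value of $n$ are arbitrary choices, and a grid only approximates the true supremum):

```python
import math

def sup_err(fn, f, pts):
    """Approximate sup |f_n(x) - f(x)| over a grid of sample points."""
    return max(abs(fn(x) - f(x)) for x in pts)

xs = [i / 1000 for i in range(1001)]   # grid on [0, 1]
n = 50

# (i) sup |sin(nx)/n| <= 1/n -> 0: consistent with uniform convergence
e1 = sup_err(lambda x: math.sin(n * x) / n, lambda x: 0.0, xs)
assert e1 <= 1 / n + 1e-12

# (ii) sup over [0,1) of x^n stays near 1 for every n: not uniform
e2 = sup_err(lambda x: x ** n, lambda x: 0.0 if x < 1 else 1.0, xs)
assert e2 > 0.9
```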

Theorem 38 (1.17)

Let $(f_n)$ be a sequence of functions defined on the interval $I$. Then:

1. If $f_n$ is continuous on $I$ for all $n$ and $f_n \xrightarrow{u} f$, then $f$ is continuous on $I$.

2. If $f_n$ is integrable on $I$ for all $n$, $f_n \xrightarrow{u} f$, and $I$ is bounded, then $f$ is integrable on $I$ and
\[ \int_I f(x)\, dx = \lim_{n \to \infty} \int_I f_n(x)\, dx. \]

3. If $f_n$ is differentiable on $I$ for all $n$, $f_n \to f$, $f_n' \xrightarrow{u} g$, and $I$ is bounded, then $f_n \xrightarrow{u} f$, $f$ is differentiable, and
\[ g = f'. \]

Remark 39 (1.18)

In part 3, the pointwise convergence $f_n \to f$ can be replaced by a weaker condition, namely the convergence $f_n(x_0) \to f(x_0)$ for some point $x_0 \in I$.

Example 40 (1.14 revisited)

Check if the conditions of Theorem 1.17 are satisfied for the following sequences:

(i) $f_n(x) = \frac{1}{n} \sin nx$, $x \in I \subseteq \mathbb{R}$.
(ii) $f_n(x) = x^n$, $x \in [0, 1]$.
(iii) $f_n(x) = \frac{nx}{1 + nx}$, $x \in I \subseteq [0, \infty)$. [Homework!]

Example 41 (1.15 revisited)

For each $n \in \mathbb{N}$, define the sequence $f_n : [0, 1] \to \mathbb{R}$ by
\[ f_n(x) = \begin{cases} 0, & x = 0 \\ n, & 0 < x \le \frac{1}{n} \\ 0, & \frac{1}{n} < x \le 1 \end{cases}. \]
We know that $f_n \to 0$. Note:

1. $\lim_{n \to \infty} f_n$ is a continuous function.

2.
\[ \lim_{n \to \infty} \int_0^1 f_n(x)\, dx = \lim_{n \to \infty} \int_0^{1/n} n\, dx = 1 \ne 0 = \int_0^1 \lim_{n \to \infty} f_n(x)\, dx, \]
which implies that the convergence $f_n \to \lim_{n \to \infty} f_n$ is not uniform.

3. $f_n$ is not differentiable, but $\lim_{n \to \infty} f_n = 0$ is differentiable.

Definition 42

Given a sequence of (real or complex) functions $(f_n)$ defined on a real interval $I$, we define its $n$th partial sum by
\[ S_n(x) = f_1(x) + \dots + f_n(x) = \sum_{k=1}^{n} f_k(x), \quad x \in I. \]
The sequence of functions $(S_n)$ is called an infinite series of functions and is denoted $\sum f_k$.

Definition 43

The series $\sum f_k$ is said to converge pointwise (or simply, converge) on $I$ if the sequence $(S_n)$ converges pointwise on $I$. The sum of the series is given by the limit
\[ \sum_{k=1}^{\infty} f_k(x) = \lim_{n \to \infty} S_n(x). \]
If the series $\sum_{k=1}^{\infty} f_k(x)$ does not converge, then it is said to diverge at the point $x$.

Definition 44

The series $\sum f_k$ is said to be absolutely convergent on $I$ if the positive series $\sum |f_k|$ is pointwise convergent on $I$.

Definition 45

The series $\sum f_k$ is said to converge uniformly on $I$ if the sequence $(S_n)$ converges uniformly on $I$.

Corollary 46 (1.19)

Suppose the series $\sum f_n$ converges pointwise on the interval $I$.

1. If $f_n$ is continuous on $I$ for all $n$ and $\sum f_n$ converges uniformly on $I$, then its sum $\sum_{n=1}^{\infty} f_n$ is continuous.

2. If $f_n$ is integrable on $I$ for all $n$, $I$ is bounded, and $\sum f_n$ converges uniformly on $I$, then $\sum_{n=1}^{\infty} f_n$ is integrable on $I$ and
\[ \int_I \sum_{n=1}^{\infty} f_n(x)\, dx = \sum_{n=1}^{\infty} \int_I f_n(x)\, dx. \]

3. If $f_n$ is differentiable on $I$ for all $n$, $I$ is bounded, and $\sum f_n'$ converges uniformly on $I$, then $\sum_{n=1}^{\infty} f_n$ converges uniformly on $I$, its sum is differentiable on $I$, and
\[ \left( \sum_{n=1}^{\infty} f_n \right)' = \sum_{n=1}^{\infty} f_n'. \]

The definitions of pointwise, uniform, and absolute convergence of a series require that we test the convergence of the sequence of partial sums. The following test provides an easier way to do that!

Theorem 47 (1.20, Weierstrass M-Test)

Let $(f_n)$ be a sequence of functions on $I$, and suppose that there is a sequence of nonnegative numbers $M_n$ such that
\[ |f_n(x)| \le M_n \quad \text{for all } x \in I,\ n \in \mathbb{N}. \]
If $\sum M_n$ converges, then $\sum f_n$ converges uniformly and absolutely on $I$.

Proof

We want to prove that $\forall \epsilon > 0$, $\exists N(\epsilon) \in \mathbb{N}$ such that
\[ n \ge N \Rightarrow |S(x) - S_n(x)| < \epsilon \quad \forall x \in I, \]
or equivalently
\[ n \ge N \Rightarrow \left| \sum_{k=1}^{\infty} f_k(x) - \sum_{k=1}^{n} f_k(x) \right| < \epsilon \quad \forall x \in I. \]
But for all $x \in I$,
\[ \left| \sum_{k=1}^{\infty} f_k(x) - \sum_{k=1}^{n} f_k(x) \right| = \left| \sum_{k=n+1}^{\infty} f_k(x) \right| \le \sum_{k=n+1}^{\infty} |f_k(x)| \le \sum_{k=n+1}^{\infty} M_k. \]
Now, if $\sum_{k=1}^{\infty} M_k$ is convergent, then so is $\sum_{k=n+1}^{\infty} M_k$. Therefore, for a chosen $\epsilon > 0$, $\exists N(\epsilon) \in \mathbb{N}$ such that
\[ n \ge N \Rightarrow \sum_{k=n+1}^{\infty} M_k = \sum_{k=1}^{\infty} M_k - \sum_{k=1}^{n} M_k < \epsilon. \]
Thus, we have
\[ n \ge N \Rightarrow \left| \sum_{k=1}^{\infty} f_k(x) - \sum_{k=1}^{n} f_k(x) \right| < \epsilon \quad \forall x \in I, \]
which proves that $\sum_{k=1}^{\infty} f_k(x)$ is uniformly convergent.

To prove that $\sum_{k=1}^{\infty} f_k(x)$ is absolutely convergent, we need to prove that $\sum_{k=1}^{\infty} |f_k(x)|$ is pointwise convergent. Now, for each $x \in I$, both $\sum_{k=1}^{\infty} |f_k(x)|$ and $\sum_{k=1}^{\infty} M_k$ are series of nonnegative numbers that satisfy
\[ |f_k(x)| \le M_k \quad \forall k \in \mathbb{N}. \]
Therefore, since $\sum_{k=1}^{\infty} M_k$ is convergent, so is $\sum_{k=1}^{\infty} |f_k(x)|$, according to the comparison test.
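The uniform tail bound at the heart of the proof can be watched numerically for a concrete series such as $\sum \frac{1}{k^2} \sin kx$; a sketch (the grid, cutoffs, and truncation length are arbitrary choices, not from the text):

```python
import math

def tail_bound(n, terms=100000):
    """Truncated sum_{k=n+1}^{~inf} 1/k^2: dominates the tail of the series."""
    return sum(1.0 / k ** 2 for k in range(n + 1, terms))

def partial_sum(x, n):
    """S_n(x) = sum_{k=1}^n sin(kx)/k^2."""
    return sum(math.sin(k * x) / k ** 2 for k in range(1, n + 1))

xs = [-math.pi + i * math.pi / 100 for i in range(201)]
n, m = 50, 2000
# |S_m(x) - S_n(x)| <= sum_{k=n+1}^m 1/k^2 <= tail_bound(n), uniformly in x
gap = max(abs(partial_sum(x, m) - partial_sum(x, n)) for x in xs)
assert gap <= tail_bound(n)
```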

Example 48 (1.21)

In the following, determine whether the series is uniformly convergent and check the properties listed in Corollary 1.19:

(i) $\sum \frac{1}{n^2} \sin nx$, $x \in \mathbb{R}$.
(ii) $\sum \frac{1}{n^3} \sin nx$, $x \in \mathbb{R}$.

Solution:

(i) Since
\[ \left| \frac{1}{n^2} \sin nx \right| \le \frac{1}{n^2} \quad \forall x \in \mathbb{R},\ n \in \mathbb{N}, \]
and the nonnegative series of numbers $\sum \frac{1}{n^2}$ is convergent, we can use the M-Test to conclude that $\sum \frac{1}{n^2} \sin nx$ is uniformly convergent on $\mathbb{R}$.

1. Since (a) $\frac{1}{n^2} \sin nx$ is continuous on $\mathbb{R}$ for all $n$ and (b) $\sum \frac{1}{n^2} \sin nx$ is uniformly convergent on $\mathbb{R}$, the sum $\sum \frac{1}{n^2} \sin nx$ is continuous on $\mathbb{R}$.

2. Since (a) $\frac{1}{n^2} \sin nx$ is integrable on $[a, b] \subseteq \mathbb{R}$ and (b) $\sum \frac{1}{n^2} \sin nx$ is uniformly convergent on $[a, b] \subseteq \mathbb{R}$, the sum $\sum \frac{1}{n^2} \sin nx$ is integrable on $[a, b] \subseteq \mathbb{R}$ and
\[ \int_a^b \left( \sum_{n=1}^{\infty} \frac{1}{n^2} \sin nx \right) dx = \sum_{n=1}^{\infty} \int_a^b \frac{1}{n^2} \sin nx\, dx = \sum_{n=1}^{\infty} \frac{1}{n^3} (\cos na - \cos nb) \le 2 \sum_{n=1}^{\infty} \frac{1}{n^3}, \]
which is a convergent $p$-series.

3. $f_n = \frac{1}{n^2} \sin nx$ is differentiable on $\mathbb{R}$ for all $n$, with $f_n' = \frac{1}{n} \cos nx$, but $\sum \frac{1}{n} \cos nx$ is not convergent at some points $x \in \mathbb{R}$. Therefore, we cannot deduce that
\[ \frac{d}{dx} \sum_{n=1}^{\infty} \frac{1}{n^2} \sin nx = \sum_{n=1}^{\infty} \frac{1}{n} \cos nx. \]

(ii) Since
\[ \left| \frac{1}{n^3} \sin nx \right| \le \frac{1}{n^3} \quad \forall x \in \mathbb{R},\ n \in \mathbb{N}, \]
and the nonnegative series of numbers $\sum \frac{1}{n^3}$ is convergent, we can use the M-Test to conclude that $\sum \frac{1}{n^3} \sin nx$ is uniformly convergent on $\mathbb{R}$.

1. Since (a) $\frac{1}{n^3} \sin nx$ is continuous on $\mathbb{R}$ for all $n$ and (b) $\sum \frac{1}{n^3} \sin nx$ is uniformly convergent on $\mathbb{R}$, the sum $\sum \frac{1}{n^3} \sin nx$ is continuous on $\mathbb{R}$.

2. Since (a) $\frac{1}{n^3} \sin nx$ is integrable on $[a, b] \subseteq \mathbb{R}$ and (b) $\sum \frac{1}{n^3} \sin nx$ is uniformly convergent on $[a, b] \subseteq \mathbb{R}$, the sum $\sum \frac{1}{n^3} \sin nx$ is integrable on $[a, b] \subseteq \mathbb{R}$ and
\[ \int_a^b \left( \sum_{n=1}^{\infty} \frac{1}{n^3} \sin nx \right) dx = \sum_{n=1}^{\infty} \int_a^b \frac{1}{n^3} \sin nx\, dx. \]

3. $f_n = \frac{1}{n^3} \sin nx$ is differentiable on $[a, b] \subseteq \mathbb{R}$ for all $n$, with $f_n' = \frac{1}{n^2} \cos nx$. Here, $\sum \frac{1}{n^2} \cos nx$ is uniformly convergent on $[a, b] \subseteq \mathbb{R}$. Therefore, we deduce that $\sum \frac{1}{n^3} \sin nx$ converges uniformly and
\[ \frac{d}{dx} \sum_{n=1}^{\infty} \frac{1}{n^3} \sin nx = \sum_{n=1}^{\infty} \frac{1}{n^2} \cos nx \]
holds for all $x \in [a, b]$.

5 Convergence in $\mathcal{L}^2$

We are going to learn

- Convergence of sequences of functions in $\mathcal{L}^2$.
- Relation between pointwise convergence and convergence in $\mathcal{L}^2$.
- Relation between uniform convergence and convergence in $\mathcal{L}^2$.
- The Cauchy sequence and its relation to convergence.
- Completeness of $\mathcal{L}^2$.
- Density of $\mathcal{C}$ in $\mathcal{L}^2$.

Definition 49 (1.22)

A sequence of functions $(f_n)$ in $\mathcal{L}^2(a, b)$ is said to converge in $\mathcal{L}^2$ if there is a function $f \in \mathcal{L}^2(a, b)$ such that
\[ \lim_{n \to \infty} \|f_n - f\| = 0, \tag{1.19} \]
that is, if for every $\epsilon > 0$ there is an $N$ such that
\[ n \ge N \Rightarrow \|f_n - f\| < \epsilon. \]
Equation (1.19) is equivalent to writing
\[ f_n \xrightarrow{\mathcal{L}^2} f, \]
and $f$ is called the limit in $\mathcal{L}^2$ of the sequence $(f_n)$.

Example 50 (1.23)

In the following, determine whether the sequence converges in $\mathcal{L}^2$:

(i) $f_n(x) = x^n$, $x \in [0, 1]$.

(ii) $f_n(x) = \begin{cases} 0, & x = 0 \\ n, & 0 < x \le \frac{1}{n} \\ 0, & \frac{1}{n} < x \le 1 \end{cases}$

Solution:

(i) We already know that the pointwise limit of $x^n$ on $[0, 1]$ is given by
\[ f(x) = \begin{cases} 0, & 0 \le x < 1 \\ 1, & x = 1 \end{cases}. \]
Note that $\mathcal{L}^2([0, 1]) = \mathcal{L}^2([0, 1))$ and $f = 0$ in $\mathcal{L}^2$. Moreover,
\[ \|x^n - 0\| = \left[ \int_0^1 (x^n)^2\, dx \right]^{1/2} = \left[ \frac{x^{2n+1}}{2n+1} \Big|_0^1 \right]^{1/2} = \left( \frac{1}{2n+1} \right)^{1/2}, \]
i.e.,
\[ \lim_{n \to \infty} \|x^n - 0\| = \lim_{n \to \infty} \left( \frac{1}{2n+1} \right)^{1/2} = 0. \]
Therefore,
\[ x^n \xrightarrow{\mathcal{L}^2} 0. \]
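The value $\|x^n - 0\| = (2n+1)^{-1/2}$ in part (i) can be confirmed numerically; a sketch with a midpoint rule (the step count is an arbitrary accuracy choice):

```python
import math

def l2_norm(f, a, b, steps=200000):
    """Midpoint-rule approximation of the L^2(a, b) norm of f."""
    h = (b - a) / steps
    s = sum(f(a + (i + 0.5) * h) ** 2 for i in range(steps)) * h
    return math.sqrt(s)

for n in (1, 5, 20):
    approx = l2_norm(lambda x: x ** n, 0.0, 1.0)
    exact = 1.0 / math.sqrt(2 * n + 1)
    assert abs(approx - exact) < 1e-4
```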

(ii) We know that the pointwise limit of
\[ f_n(x) = \begin{cases} 0, & x = 0 \\ n, & 0 < x \le \frac{1}{n} \\ 0, & \frac{1}{n} < x \le 1 \end{cases} \]
is
\[ f(x) = 0, \quad x \in [0, 1]. \]
Now,
\[ \|f_n - 0\| = \left[ \int_0^{1/n} n^2\, dx \right]^{1/2} = \left[ n^2 x \Big|_0^{1/n} \right]^{1/2} = n^{1/2}, \]
i.e.,
\[ \lim_{n \to \infty} \|f_n - 0\| = \lim_{n \to \infty} n^{1/2} = \infty. \]
Therefore, $f_n$ does not converge in $\mathcal{L}^2$.

Remark

1. Pointwise convergence $\nRightarrow$ convergence in $\mathcal{L}^2$.

2. If a sequence converges both pointwise and in $\mathcal{L}^2$, then the limit functions are equal in $\mathcal{L}^2$.

Lemma 51 (Exercise 1.41, page 29)

If $f_n \xrightarrow{u} f$ on $[a, b]$, then $|f_n - f| \xrightarrow{u} 0$ on $[a, b]$, and hence $|f_n - f|^2 \xrightarrow{u} 0$ on $[a, b]$.

Proof: If $f_n \xrightarrow{u} f$ on $[a, b]$, then
\[ \lim_{n \to \infty} \sup_{x \in [a, b]} |f_n - f| = 0, \]
which is equivalent to
\[ \lim_{n \to \infty} \sup_{x \in [a, b]} \big|\, |f_n - f| - 0 \,\big| = 0. \]
That is, $|f_n - f| \xrightarrow{u} 0$ on $[a, b]$.

Next, we prove that $|f_n - f|^2 \xrightarrow{u} 0$ on $[a, b]$. Let $\epsilon > 0$ and take $\epsilon_1 = \min(\epsilon, 1)$; then, since $|f_n - f| \xrightarrow{u} 0$, there exists $N(\epsilon_1) \in \mathbb{N}$ such that
\[ n \ge N \Rightarrow \sup_{x \in [a, b]} |f_n - f| \le \epsilon_1. \]
That is,
\[ n \ge N \Rightarrow |f_n - f| \le \epsilon_1 \quad \forall x \in [a, b], \]
or
\[ n \ge N \Rightarrow |f_n - f|^2 \le \epsilon_1^2 \le \epsilon_1 \le \epsilon \quad \forall x \in [a, b]. \]
Hence,
\[ n \ge N \Rightarrow \sup_{x \in [a, b]} |f_n - f|^2 \le \epsilon, \]
or in other words
\[ \lim_{n \to \infty} \sup_{x \in [a, b]} \big|\, |f_n - f|^2 - 0 \,\big| = 0, \]
i.e., $|f_n - f|^2 \xrightarrow{u} 0$.

Theorem 52 (Relation between uniform convergence and convergence in $\mathcal{L}^2$)

Let $(f_n)$ be a sequence of functions in $\mathcal{L}^2(I)$, where $I$ is bounded. If $f_n \xrightarrow{u} f$, where $f \in \mathcal{L}^2(I)$, then $f_n \xrightarrow{\mathcal{L}^2} f$.

Proof: We want to prove that
\[ \lim_{n \to \infty} \|f_n - f\| = 0. \]
Now,
\[ \lim_{n \to \infty} \|f_n - f\|^2 = \lim_{n \to \infty} \int_I |f_n - f|^2\, dx. \]
But $f_n \xrightarrow{u} f$; hence, using the above lemma, $|f_n - f|^2 \xrightarrow{u} 0$. Moreover, using Theorem 1.17 gives
\[ \lim_{n \to \infty} \int_I |f_n - f|^2\, dx = \int_I \lim_{n \to \infty} |f_n - f|^2\, dx = 0, \]
from which we get $\lim_{n \to \infty} \|f_n - f\|^2 = 0$, which gives $\lim_{n \to \infty} \|f_n - f\| = 0$.

Example 53 (1.24)

Show that the series $\sum_{k=1}^{\infty} \frac{1}{k^2} \sin kx$ is convergent in $\mathcal{L}^2(-\pi, \pi)$.

Solution:

We say that $\sum_{k=1}^{\infty} \frac{1}{k^2} \sin kx$ is convergent in $\mathcal{L}^2(-\pi, \pi)$ if the sequence of partial sums $S_n = \sum_{k=1}^{n} \frac{1}{k^2} \sin kx$ is convergent in $\mathcal{L}^2(-\pi, \pi)$. Using the above theorem, it is sufficient to prove that $(S_n)$ is in $\mathcal{L}^2(-\pi, \pi)$ and that $S_n \xrightarrow{u} S$ on $[-\pi, \pi]$ for some $S \in \mathcal{L}^2(-\pi, \pi)$.

We know from Example 1.21 that $S_n \xrightarrow{u} S$ on $[-\pi, \pi]$, where $S = \sum_{k=1}^{\infty} \frac{1}{k^2} \sin kx$. Now, since $\frac{1}{k^2} \sin kx$ is continuous on $[-\pi, \pi]$ for all $k$, so is $S_n$, which proves that $(S_n)$ is in $\mathcal{L}^2(-\pi, \pi)$. Next, since $S_n \xrightarrow{u} S$, the limit $S$ is also continuous on $[-\pi, \pi]$. Thus, $S \in \mathcal{L}^2(-\pi, \pi)$.

Remark 54 (Testing the convergence of $\sum_{k=1}^{\infty} \frac{1}{k} \cos kx$)

We cannot use the above argument for the series $\sum_{k=1}^{\infty} \frac{1}{k} \cos kx$ on $[-\pi, \pi]$: this series is divergent at every $x = 2\pi z$, $z \in \mathbb{Z}$.

Definition 55 (1.25)

A sequence $(f_n)$ in $\mathcal{L}^2(-\pi, \pi)$ is called a Cauchy sequence if, $\forall \epsilon > 0$, $\exists N \in \mathbb{N}$ such that
\[ n, m \ge N \Rightarrow \|f_n - f_m\| < \epsilon. \]

Theorem 56 (1.26, Convergence and Cauchy sequences)

1. Every convergent sequence $(f_n)$ in $\mathcal{L}^2$ is a Cauchy sequence [Homework!].

2. For every Cauchy sequence $(f_n)$ in $\mathcal{L}^2$ there is a function $f \in \mathcal{L}^2$ such that $f_n \xrightarrow{\mathcal{L}^2} f$. That is to say, $\mathcal{L}^2$ is "complete".

Example 57 (1.28)

Show that the series $\sum_{k=1}^{\infty} \frac{1}{k} \cos kx$ is convergent in $\mathcal{L}^2(-\pi, \pi)$.

Solution:

Clearly, the sequence $(S_n)$, where $S_n = \sum_{k=1}^{n} \frac{1}{k} \cos kx$, is in $\mathcal{L}^2(-\pi, \pi)$. So, we only need to prove that $(S_n)$ is a Cauchy sequence. Let $\epsilon > 0$. For $m < n$,
\[ \|S_n(x) - S_m(x)\|^2 = \left\| \sum_{k=m+1}^{n} \frac{1}{k} \cos kx \right\|^2. \]
Since the set $\{\cos kx : k \in \mathbb{N}\}$ is orthogonal in $\mathcal{L}^2(-\pi, \pi)$ (Example 1.11), we have
\[ \left\| \sum_{k=m+1}^{n} \frac{1}{k} \cos kx \right\|^2 = \sum_{k=m+1}^{n} \frac{1}{k^2} \|\cos kx\|^2 = \pi \sum_{k=m+1}^{n} \frac{1}{k^2}, \]
but $\sum_{k=1}^{\infty} \frac{1}{k^2}$ is a convergent series, so its sequence of partial sums $\left( \sum_{k=1}^{n} \frac{1}{k^2} \right)$ is a Cauchy sequence. In other words, for the given $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that
\[ n, m \ge N \Rightarrow \sum_{k=1}^{n} \frac{1}{k^2} - \sum_{k=1}^{m} \frac{1}{k^2} < \frac{\epsilon^2}{\pi}, \]
or
\[ n, m \ge N \Rightarrow \sum_{k=m+1}^{n} \frac{1}{k^2} < \frac{\epsilon^2}{\pi}, \]
from which we get
\[ n, m \ge N \Rightarrow \left\| \sum_{k=m+1}^{n} \frac{1}{k} \cos kx \right\|^2 = \pi \sum_{k=m+1}^{n} \frac{1}{k^2} < \epsilon^2. \]
Taking the square root, we get
\[ n, m \ge N \Rightarrow \left\| \sum_{k=m+1}^{n} \frac{1}{k} \cos kx \right\| < \epsilon. \]

Remark 58

Convergence in $\mathcal{L}^2$ $\nRightarrow$ pointwise convergence.

Remark 59 (A series that is divergent pointwise but convergent in $\mathcal{L}^2(-\pi, \pi)$)

By the above steps, the series $\sum_{k=1}^{\infty} \frac{1}{k} \cos kx$ is convergent in $\mathcal{L}^2(-\pi, \pi)$, yet it is divergent at every $x = 2\pi z$, $z \in \mathbb{Z}$. Thus, convergence in $\mathcal{L}^2$ $\nRightarrow$ pointwise convergence.

Theorem 60 (1.27, Density of $\mathcal{C}$ in $\mathcal{L}^2$)

For any $f \in \mathcal{L}^2(a, b)$ and any $\epsilon > 0$, there is a continuous function $g$ on $[a, b]$ such that $\|f - g\| < \epsilon$.

The above theorem shows that the set of continuous functions on $[a, b]$ is dense in $\mathcal{L}^2(a, b)$, in the same way the rational numbers $\mathbb{Q}$ are dense in $\mathbb{R}$. In fact, for every $f \in \mathcal{L}^2(a, b)$, there is a sequence of continuous functions $(f_n)$ such that $f_n \xrightarrow{\mathcal{L}^2} f$.

Example 61 (Density of $\mathcal{C}$ in $\mathcal{L}^2$)

Show that the following sequence of functions in $\mathcal{C}[-1, 1]$,
\[ f_n(x) = \begin{cases} 0, & -1 \le x \le -\frac{1}{n} \\ nx + 1, & -\frac{1}{n} < x < 0 \\ 1, & 0 \le x \le 1 \end{cases}, \]
converges in $\mathcal{L}^2(-1, 1)$ to the function
\[ f(x) = \begin{cases} 0, & -1 \le x < 0 \\ 1, & 0 \le x \le 1 \end{cases}. \]

Solution:

We need to show that $\lim_{n \to \infty} \|f_n - f\| = 0$.

Now,
\begin{align*}
\lim_{n \to \infty} \|f_n - f\|^2 &= \lim_{n \to \infty} \int_{-1}^{1} |f_n - f|^2\, dx \\
&= \lim_{n \to \infty} \int_{-1/n}^{0} (nx + 1)^2\, dx \\
&= \lim_{n \to \infty} \left[ \frac{(nx + 1)^3}{3n} \right]_{x = -1/n}^{0} \\
&= \lim_{n \to \infty} \frac{1}{3n} = 0,
\end{align*}
and hence $\lim_{n \to \infty} \|f_n - f\| = \lim_{n \to \infty} \frac{1}{\sqrt{3n}} = 0$.
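The limit $\|f_n - f\| = 1/\sqrt{3n} \to 0$ can be checked directly by numerical integration; a sketch (the quadrature resolution is an arbitrary choice):

```python
import math

def fn(n, x):
    """The continuous ramp approximation from the example."""
    if x <= -1.0 / n:
        return 0.0
    if x < 0.0:
        return n * x + 1.0
    return 1.0

def f(x):
    """The discontinuous L^2 limit."""
    return 0.0 if x < 0.0 else 1.0

def l2_dist(n, steps=100000):
    """Midpoint approximation of ||f_n - f|| on (-1, 1)."""
    h = 2.0 / steps
    pts = (-1.0 + (i + 0.5) * h for i in range(steps))
    return math.sqrt(sum((fn(n, x) - f(x)) ** 2 for x in pts) * h)

for n in (1, 4, 16):
    assert abs(l2_dist(n) - 1.0 / math.sqrt(3 * n)) < 1e-3
```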

6 Orthogonal Expansions

We are going to learn

- Given an orthogonal set $S$ in $\mathcal{L}^2$ and a function $f \in \mathcal{L}^2$, we answer the following questions:
  - If $f$ is a linear combination of a finite number of elements in $S$, what are the coefficients in this linear combination?
  - How do we find the linear combination in $S$ that best approximates $f$?
- Completeness of an orthogonal set.
- Bessel's inequality.
- Parseval's relation.

37 Consider an orthogonal set of functions S = '1;'2;'3; ::: in the complex space 2: f g QuestionL 1 Let f 2 be a nite linear combination of the functions in S; that is, 2 L n f = ' ; C: i i i 2 i=1 X Find the coe¢cients " i’s"! Answer If we take the inner product of f with 'k, k = 1; 2; :::; n:

n f; ' = ' ;' h ki h i i ki i=1 X n f; ' = ' ;' ) h ki i h i ki i=1 X f; ' = ' 2 ) h ki k k kk f; 'k k = h i: ) ' 2 k kk Therefore, we have n f; 'k f = h 2i'k; 'k kX=1 k k Remark 62 1

1. $f$ is the sum of the projection vectors of $f$ along the $\varphi_k$, namely $\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k$, $k = 1, 2, \dots, n$.
2. In terms of the orthonormal set $\psi_k = \frac{\varphi_k}{\|\varphi_k\|}$, we have
\[ f = \sum_{k=1}^{n}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k = \sum_{k=1}^{n}\Big\langle f, \frac{\varphi_k}{\|\varphi_k\|}\Big\rangle\frac{\varphi_k}{\|\varphi_k\|} = \sum_{k=1}^{n}\langle f,\psi_k\rangle\psi_k = \sum_{k=1}^{n} c_k\psi_k. \]
In this case, the coefficient $c_k$ is equal to $\langle f, \psi_k\rangle$, which is the projection of $f$ on $\psi_k$.
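The coefficient formula $\alpha_k = \langle f,\varphi_k\rangle/\|\varphi_k\|^2$ can be illustrated numerically. In the sketch below (plain Python; the choice $\varphi_k = \sin kx$ on $(0,\pi)$ and the sample $f$ are our own illustrative assumptions), the formula recovers the coefficients of a known linear combination.

```python
import math

def inner(u, v, a=0.0, b=math.pi, m=2000):
    # midpoint-rule approximation of <u, v> on (a, b), real-valued case
    h = (b - a) / m
    return sum(u(a + (i + 0.5) * h) * v(a + (i + 0.5) * h) for i in range(m)) * h

phi = [lambda x, k=k: math.sin(k * x) for k in (1, 2, 3)]  # orthogonal on (0, pi)
f = lambda x: 2.0 * math.sin(x) - 3.0 * math.sin(2.0 * x)

# alpha_k = <f, phi_k> / ||phi_k||^2 recovers the coefficients 2, -3, 0
alphas = [inner(f, p) / inner(p, p) for p in phi]
print(alphas)
```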

Question 2
Let $f$ be any function in $\mathcal{L}^2$. Find the finite linear combination of the functions in $S$ that best approximates $f$!
Answer
The best approximation of $f$ is the function $\sum_{k=1}^{n} c_k\varphi_k$ that minimizes the quantity
\[ \Big\| f - \sum_{k=1}^{n} c_k\varphi_k \Big\|, \]
but
\[ \Big\| f - \sum_{k=1}^{n} c_k\varphi_k \Big\|^2 = \Big\langle f - \sum_{k=1}^{n} c_k\varphi_k,\; f - \sum_{k=1}^{n} c_k\varphi_k \Big\rangle = \|f\|^2 - \sum_{k=1}^{n}\overline{c_k}\langle f,\varphi_k\rangle - \sum_{k=1}^{n} c_k\overline{\langle f,\varphi_k\rangle} + \sum_{k=1}^{n}|c_k|^2\|\varphi_k\|^2 \]
\[ = \|f\|^2 - 2\,\mathrm{Re}\sum_{k=1}^{n}\overline{c_k}\langle f,\varphi_k\rangle + \sum_{k=1}^{n}|c_k|^2\|\varphi_k\|^2 = \|f\|^2 - \sum_{k=1}^{n}\frac{|\langle f,\varphi_k\rangle|^2}{\|\varphi_k\|^2} + \sum_{k=1}^{n}\|\varphi_k\|^2\left| c_k - \frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\right|^2. \]
Note that only the last term involves the $c_k$, and this term is always $\ge 0$. By choosing
\[ c_k = \frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}, \qquad k = 1, 2, \dots, n, \]
this term attains its minimum, namely $0$. As a result, the best approximation of $f$ is given by
\[ \sum_{k=1}^{n}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k, \]
and the minimum of the quantity $\big\| f - \sum_{k=1}^{n} c_k\varphi_k \big\|^2$ is given by
\[ \Big\| f - \sum_{k=1}^{n} c_k\varphi_k \Big\|^2 = \|f\|^2 - \sum_{k=1}^{n}\frac{|\langle f,\varphi_k\rangle|^2}{\|\varphi_k\|^2}. \]
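The minimization can likewise be checked numerically: the choice $c_k = \langle f,\varphi_k\rangle/\|\varphi_k\|^2$ beats any perturbed choice of coefficients. A minimal sketch (our own sample $f$ and $\varphi_k = \sin kx$ on $(0,\pi)$):

```python
import math

def inner(u, v, a=0.0, b=math.pi, m=2000):
    # midpoint-rule approximation of <u, v> on (a, b)
    h = (b - a) / m
    return sum(u(a + (i + 0.5) * h) * v(a + (i + 0.5) * h) for i in range(m)) * h

f = lambda x: x * (math.pi - x)
phi = [lambda x, k=k: math.sin(k * x) for k in (1, 2, 3)]

def err(c):
    # || f - sum_k c_k phi_k ||
    g = lambda x: f(x) - sum(ck * p(x) for ck, p in zip(c, phi))
    return math.sqrt(inner(g, g))

best = [inner(f, p) / inner(p, p) for p in phi]       # projection coefficients
perturbed = [best[0] + 0.3, best[1] - 0.2, best[2] + 0.1]
print(err(best), err(perturbed))  # the first error is the smaller one
```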

Remark 63 2

1. Since
\[ \Big\| f - \sum_{k=1}^{n} c_k\varphi_k \Big\|^2 \ge 0, \]
we have
\[ \sum_{k=1}^{n}\frac{|\langle f,\varphi_k\rangle|^2}{\|\varphi_k\|^2} \le \|f\|^2 \]
for all $n$. Therefore, it is also true as $n \to \infty$, which gives Bessel's inequality, namely
\[ \sum_{k=1}^{\infty}\frac{|\langle f,\varphi_k\rangle|^2}{\|\varphi_k\|^2} \le \|f\|^2 \]
for any orthogonal set $\{\varphi_k : k \in \mathbb{N}\}$ and $f \in \mathcal{L}^2$.
2. Bessel's inequality becomes equality if and only if
\[ \Big\| f - \sum_{k=1}^{\infty}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k \Big\|^2 = 0 \]
[why? Homework]. That is, $f$ and
\[ \sum_{k=1}^{\infty}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k \]
are equal in $\mathcal{L}^2$.

Definition 64 1.29

An orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ in $\mathcal{L}^2$ is said to be complete if, for any $f \in \mathcal{L}^2$,
\[ \sum_{k=1}^{n}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k \xrightarrow{\mathcal{L}^2} f. \]

Remark 65 3

The above definition states that an orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ in $\mathcal{L}^2$ is complete if for each $f \in \mathcal{L}^2$, we have
\[ f = \sum_{k=1}^{\infty}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k \]
in $\mathcal{L}^2$. In other words, $\{\varphi_n : n \in \mathbb{N}\}$ is a basis for $\mathcal{L}^2$.

Theorem 66 1.30

An orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ in $\mathcal{L}^2$ is complete if and only if
\[ \sum_{k=1}^{\infty}\frac{|\langle f,\varphi_k\rangle|^2}{\|\varphi_k\|^2} = \|f\|^2. \]

Remark 67 1.31

1. The equality in Theorem 1.30 is called Parseval's relation or the completeness relation.

2. We have shown that for any orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ in $\mathcal{L}^2$, the best approximation for $f \in \mathcal{L}^2$ is $\sum_{k=1}^{n}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k$, and this choice of coefficients is independent of $n$. Moreover, if the orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ is complete, then
\[ f = \sum_{k=1}^{\infty}\frac{\langle f,\varphi_k\rangle}{\|\varphi_k\|^2}\varphi_k \]
in $\mathcal{L}^2$.

3. When the orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ is normalized to $\{\psi_n : n \in \mathbb{N}\}$, then

(a) Bessel's inequality becomes
\[ \sum_{k=1}^{\infty}|\langle f,\psi_k\rangle|^2 \le \|f\|^2. \]
(b) Parseval's equality becomes
\[ \sum_{k=1}^{\infty}|\langle f,\psi_k\rangle|^2 = \|f\|^2. \]

4. Since $\|f\| < \infty$ for all $f \in \mathcal{L}^2$, Bessel's inequality implies that $\langle f,\psi_n\rangle \to 0$, whether the set $\{\psi_n : n \in \mathbb{N}\}$ is complete or not.
5. Parseval's relation can be seen as a generalization of the Pythagoras theorem from $\mathbb{R}^n$ to $\mathcal{L}^2$, where
\[ \|f\|^2 \;\text{is the square of the length of the vector,} \]
\[ \sum_{k=1}^{\infty}|\langle f,\psi_k\rangle|^2 \;\text{is the sum of the squares of the projections of } f \text{ on the basis vectors.} \]
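A numerical illustration of Bessel's inequality and Parseval's relation (the choice $f(x) = x$ on $(-\pi,\pi)$ with $\varphi_k(x) = \sin kx$ is our own; for it, integration by parts gives $\langle f,\varphi_k\rangle = 2\pi(-1)^{k+1}/k$ and $\|\varphi_k\|^2 = \pi$, and $\|f\|^2 = 2\pi^3/3$):

```python
import math

# f(x) = x on (-pi, pi); phi_k(x) = sin(kx) with ||phi_k||^2 = pi
norm_f_sq = 2.0 * math.pi ** 3 / 3.0   # ||f||^2 = integral of x^2 over (-pi, pi)

def bessel_sum(n):
    # sum_{k=1}^n |<f, phi_k>|^2 / ||phi_k||^2, with <f, phi_k> = 2*pi*(-1)**(k+1)/k
    return sum((2.0 * math.pi / k) ** 2 / math.pi for k in range(1, n + 1))

for n in (1, 10, 100, 1000):
    print(n, bessel_sum(n), norm_f_sq)
```

The partial sums increase but never exceed $\|f\|^2 \approx 20.67$ (Bessel); they approach it in the limit, which is Parseval's relation, since this odd $f$ is fully captured by the sine functions.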

Part II
The Sturm-Liouville Theory

7 Linear Second-Order Equations

We are going to learn

Some terminology related to second-order ordinary differential equations.
Properties of the solutions of a second-order ordinary differential equation.
Initial and boundary conditions.
The Wronskian and its relation to the solutions of a second-order ordinary differential equation.

Consider the second-order ordinary differential equation on the real interval $I$
\[ a_0(x)y'' + a_1(x)y' + a_2(x)y = f(x), \tag{2.1} \]
where $a_0, a_1, a_2$ and $f$ are given complex functions on $I$.

1. If $f = 0$ on $I$, the equation is called homogeneous; otherwise it is nonhomogeneous.
2. A function $\varphi \in C^2(I)$ is a solution of the above equation if substituting $y = \varphi$ gives an identity, i.e.
\[ a_0(x)\varphi'' + a_1(x)\varphi' + a_2(x)\varphi = f(x) \quad \forall x \in I. \]
3. Let
\[ L = a_0(x)\frac{d^2}{dx^2} + a_1(x)\frac{d}{dx} + a_2(x); \]
then in terms of the differential operator $L$, we can write
\[ Ly = f(x). \]
(a) Note that for any functions $\varphi, \psi \in C^2(I)$ and any constants $c_1, c_2 \in \mathbb{C}$, we have
\[ L(c_1\varphi + c_2\psi) = c_1 L\varphi + c_2 L\psi; \]
thus, $L$ is a linear differential operator.
(b) If $\varphi$ and $\psi$ are solutions of a linear homogeneous equation, then
\[ L\varphi = 0, \qquad L\psi = 0, \]
and thus
\[ L(c_1\varphi + c_2\psi) = c_1 L\varphi + c_2 L\psi = 0. \]
This property of linear homogeneous equations is the superposition principle. That is, if $\varphi$ and $\psi$ are solutions of a linear homogeneous equation, then so is any linear combination $c_1\varphi + c_2\psi$ of them.
4. If $a_0(x) \ne 0\ \forall x \in I$, then equation (2.1) is said to be regular on $I$, and can be written in the equivalent form (i.e. the two equations have the same set of solutions)
\[ y'' + q(x)y' + r(x)y = g(x), \]
where
\[ q = \frac{a_1}{a_0}, \qquad r = \frac{a_2}{a_0}, \qquad g = \frac{f}{a_0}. \]
5. If $a_0(x_0) = 0$ at some $x_0 \in I$, then equation (2.1) is said to be singular and $x_0$ is called a singular point of the equation.

Consider the second-order ordinary differential equation on the real interval $I$
\[ y'' + q(x)y' + r(x)y = g(x). \tag{2.2} \]

1. If $q, r$ and $g$ are continuous functions on $I$ and $x_0$ is any point in $I$, then for any numbers $\alpha$ and $\beta$, there is a unique solution $\varphi$ of equation (2.2) on $I$ such that
\[ \varphi(x_0) = \alpha, \qquad \varphi'(x_0) = \beta. \tag{2.3} \]
Equation (2.2) with the initial conditions (2.3) is called an initial-value problem.
2. If $g(x) = 0\ \forall x \in I$, then (2.2) becomes the homogeneous equation
\[ y'' + q(x)y' + r(x)y = 0 \tag{2.4} \]
and
(a) Equation (2.4) has two linearly independent solutions $y_1(x)$ and $y_2(x)$ on $I$.
(b) Any solution of (2.4) can be written in the form
\[ c_1 y_1 + c_2 y_2 \tag{2.5} \]
for some constants $c_1$ and $c_2$. Thus, (2.5) is called the general solution of (2.4).
(c) Using $c_1 = c_2 = 0$ in (2.5) shows that $0$ is always a solution of (2.4). This solution is called the trivial solution.
(d) According to the existence and uniqueness theorem, the trivial solution is the only solution of (2.4) if $\alpha = \beta = 0$ in the initial conditions (2.3).
(e) If the coefficients $q$ and $r$ are constants, the general solution of (2.4) is found by solving the second degree equation

\[ m^2 + qm + r = 0. \]
i. If the roots $m_1, m_2$ are distinct, then the general solution is given by
\[ c_1 e^{m_1 x} + c_2 e^{m_2 x}. \]
ii. If $m_1 = m_2 = m$, then the general solution is given by
\[ c_1 e^{mx} + c_2 x e^{mx}. \]
3. If the coefficients $q$ and $r$ are analytic functions at some point $x_0$ in the interior of $I$, i.e. each of them can be represented in an open interval centered at $x_0$ by a power series in $(x - x_0)$, then

(a) the general solution of (2.4) is also analytic at $x_0$ and is given by
\[ \sum_{n=0}^{\infty} c_n (x - x_0)^n. \]
The above series converges in the intersection of $I$ and the two intervals of convergence of $q$ and $r$.
(b) The coefficients $c_n$, $n = 2, 3, 4, \dots$, in the above power series can be written in terms of $c_0$ and $c_1$ by substituting the series into equation (2.4).

4. If $g(x_0) \ne 0$ for some $x_0 \in I$, then (2.2) is nonhomogeneous and
(a) If $y_p(x)$ is a particular solution of equation (2.2), then the general solution is given by
\[ y_p + c_1 y_1 + c_2 y_2. \]
(b) A unique solution is obtained by using the initial conditions (2.3) to determine the values of $c_1$ and $c_2$.
5. The special case of equation (2.1)
\[ x^2 y'' + a x y' + b y = 0, \]
where $a$ and $b$ are constants, is called the Cauchy-Euler equation. The general solution is found by solving the second degree equation
\[ m^2 + (a - 1)m + b = 0. \]
i. If the roots $m_1, m_2$ are distinct, then the general solution is given by
\[ c_1 x^{m_1} + c_2 x^{m_2}. \]
ii. If $m_1 = m_2 = m$, then the general solution is given by
\[ c_1 x^{m} + c_2 x^{m}\log x. \]

In the existence and uniqueness theorem of equation (2.2), we mentioned initial conditions at a point $x_0$ in $I$. In most physical applications, the differential equation is subject to boundary conditions. That is, if we let $I = [a,b]$, then the conditions are imposed at the end points of the interval, namely $a$ and $b$. The general form of the boundary conditions is given by

\[ \alpha_1 y(a) + \alpha_2 y'(a) + \alpha_3 y(b) + \alpha_4 y'(b) = \gamma, \]
\[ \beta_1 y(b) + \beta_2 y'(b) + \beta_3 y(a) + \beta_4 y'(a) = \delta, \]
where the $\alpha_i$ and $\beta_i$ are constants and satisfy
\[ \sum_{i=1}^{4}|\alpha_i| > 0 \quad\text{and}\quad \sum_{i=1}^{4}|\beta_i| > 0. \]
Equation (2.1) with the boundary conditions is called a boundary-value problem.

1. If $\gamma = \delta = 0$, then we have homogeneous boundary conditions.
2. If $\alpha_3 = \alpha_4 = \beta_3 = \beta_4 = 0$, then we have separated boundary conditions.
3. Unseparated boundary conditions are called coupled boundary conditions.
4. If $y(a) = y(b)$ and $y'(a) = y'(b)$, then we have periodic boundary conditions.

Definition 68 2.1

For any two functions $f, g \in C^1$, the determinant
\[ W(f,g)(x) = \begin{vmatrix} f(x) & g(x) \\ f'(x) & g'(x) \end{vmatrix} = f(x)g'(x) - g(x)f'(x) \]
is called the Wronskian of $f$ and $g$. The symbol $W(f,g)(x)$ is sometimes abbreviated to $W(x)$.

Lemma 69 2.2

If $y_1$ and $y_2$ are solutions of the homogeneous equation
\[ y'' + q(x)y' + r(x)y = 0, \qquad x \in I, \]
where $q \in C(I)$, then either $W(y_1,y_2)(x) = 0$ for all $x \in I$, or $W(y_1,y_2)(x) \ne 0$ for all $x \in I$.
Proof: Using the definition of the Wronskian, we have

\[ W(x) = y_1 y_2' - y_1' y_2. \]
Therefore,
\[ W'(x) = y_1' y_2' + y_1 y_2'' - y_1'' y_2 - y_1' y_2' = y_1 y_2'' - y_1'' y_2. \]
But since $y_1$ and $y_2$ are solutions of the differential equation, we have
\[ y_1'' + q y_1' + r y_1 = 0, \]
\[ y_2'' + q y_2' + r y_2 = 0. \]
Multiplying the first equation by $-y_2$ and the second by $y_1$ gives
\[ -y_1'' y_2 - q y_1' y_2 - r y_1 y_2 = 0, \]
\[ y_2'' y_1 + q y_2' y_1 + r y_2 y_1 = 0. \]
Adding the above equations gives
\[ y_2'' y_1 - y_1'' y_2 + q\,(y_2' y_1 - y_1' y_2) = 0 \]
or
\[ W' + qW = 0. \]
This is a linear first order ordinary differential equation. Thus, multiplying the above equation by the integrating factor
\[ e^{\int q(x)\,dx} \]
gives
\[ e^{\int q(x)\,dx}W' + q e^{\int q(x)\,dx}W = 0 \]
or
\[ \frac{d}{dx}\Big[ e^{\int q(x)\,dx}W \Big] = 0 \]
or
\[ e^{\int q(x)\,dx}W = c \]
or
\[ W(x) = c\,e^{-\int q(x)\,dx}, \]
where $c$ is a constant. If $c = 0$, then $W(x) = 0$ for all $x$. Otherwise, $W(x) \ne 0$ for all $x$.
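The formula $W(x) = c\,e^{-\int q\,dx}$ (Abel's formula) can be spot-checked on a concrete equation. The sketch below uses $y'' - 3y' + 2y = 0$ (so $q = -3$), our own illustrative example, whose solutions $e^x$ and $e^{2x}$ give $W(x) = e^{3x}$:

```python
import math

# Solutions of y'' - 3y' + 2y = 0 (q = -3): y1 = e^x, y2 = e^(2x)
y1 = lambda x: math.exp(x)
d1 = lambda x: math.exp(x)
y2 = lambda x: math.exp(2.0 * x)
d2 = lambda x: 2.0 * math.exp(2.0 * x)

def W(x):
    # Wronskian y1*y2' - y1'*y2
    return y1(x) * d2(x) - d1(x) * y2(x)

# Abel's formula: W(x) = W(0) * exp(-int_0^x q dt) = W(0) * e^(3x)
for x in (0.0, 0.5, 1.0, 2.0):
    print(x, W(x), W(0.0) * math.exp(3.0 * x))
```

The two columns coincide, and since $c = W(0) = 1 \ne 0$, the Wronskian never vanishes, as the lemma asserts.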

Lemma 70 2.4

Any two solutions $y_1$ and $y_2$ of equation (2.10) are linearly independent if, and only if, $W(y_1,y_2)(x) \ne 0$ on $I$.
Proof:
We will prove that: any two solutions $y_1$ and $y_2$ of (2.10) are linearly dependent $\Leftrightarrow$ $W(y_1,y_2)(x_0) = 0$ for some point $x_0$ in $I$.
($\Rightarrow$) If $y_1$ and $y_2$ are linearly dependent, then there exists a constant $k$ such that $y_1 = k y_2$. Therefore,
\[ W = y_1 y_2' - y_1' y_2 = k y_2 y_2' - k y_2' y_2 = 0. \]
($\Leftarrow$) Let $W(y_1,y_2)(x_0) = 0$ for some point $x_0$ in $I$; then by Lemma 2.2, $W(y_1,y_2)(x) = 0$ for all $x \in I$. This implies that
\[ \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = 0, \]
which means that the system
\[ k_1(y_1, y_1') + k_2(y_2, y_2') = (0, 0) \]
has a nontrivial solution. Consequently,
\[ k_1 y_1 + k_2 y_2 = 0 \]
has a nontrivial solution, i.e. $y_1$ and $y_2$ are linearly dependent functions.

Remark 71 2.5

Note that we used the fact that $y_1$ and $y_2$ are solutions of (2.10) only to prove that if the Wronskian vanishes at some point in $I$, then these solutions are linearly dependent. This means we can find two linearly independent functions with a Wronskian that vanishes at some points of their domain. Take for example $\{x, x^2\}$ on $[-1,1]$.

Example 72 2.6

Find the solution of the equation
\[ y'' + y = 0 \tag{2.12} \]
on the interval $[0,\pi]$, with the following sets of conditions:

1. $y(0) = 0$, $y'(0) = 1$.
2. $y(0) = 0$, $y'(0) = 0$.
3. $y(0) = 0$, $y(\pi) = 0$.

Then show that any choice of the initial conditions
\[ y(x_0) = \alpha, \qquad y'(x_0) = \beta, \qquad x_0 \in [0,\pi], \]
with (2.12) gives a unique solution.

Solution
The general solution of (2.12) is
\[ y(x) = c_1\cos x + c_2\sin x. \]

1. The derivative of the solution is
\[ y'(x) = -c_1\sin x + c_2\cos x. \]
Using the first choice of initial conditions gives
\[ 0 = y(0) = c_1\cos 0 + c_2\sin 0 = c_1, \]
\[ 1 = y'(0) = -c_1\sin 0 + c_2\cos 0 = c_2. \]
That is, these initial conditions give the solution
\[ y(x) = \sin x. \]
2. Using the second choice of initial conditions gives
\[ 0 = y(0) = c_1\cos 0 + c_2\sin 0 = c_1, \]
\[ 0 = y'(0) = -c_1\sin 0 + c_2\cos 0 = c_2. \]
That is, these initial conditions give the trivial solution.
3. Using the third choice of boundary conditions gives
\[ 0 = y(0) = c_1\cos 0 + c_2\sin 0 = c_1, \]
\[ 0 = y(\pi) = c_1\cos\pi + c_2\sin\pi = -c_1, \]
which gives the solution
\[ y(x) = c_2\sin x, \]
where $c_2$ is any constant. That is, this choice of boundary conditions does not give a unique solution for the problem.

Using the general initial conditions
\[ y(x_0) = \alpha, \qquad y'(x_0) = \beta, \]
we have
\[ \alpha = y(x_0) = c_1\cos x_0 + c_2\sin x_0, \]
\[ \beta = y'(x_0) = -c_1\sin x_0 + c_2\cos x_0, \]
which gives a unique solution for $[c_1, c_2]^t$ if and only if
\[ \begin{vmatrix} \cos x_0 & \sin x_0 \\ -\sin x_0 & \cos x_0 \end{vmatrix} \ne 0, \]
or equivalently
\[ \cos^2 x_0 + \sin^2 x_0 \ne 0, \]
which is always true.

8 Self-Adjoint Differential Operator

We are going to learn

The adjoint of a linear operator in a finite-dimensional inner product space.
Properties of the eigenvalue problem $Lu + \lambda u = 0$ for a self-adjoint operator $L$ in a finite-dimensional inner product space.
The extension of the above concepts to an infinite-dimensional inner product space, namely $\mathcal{L}^2$.

A linear operator in a vector space $X$ is a mapping
\[ A : X \to X \]
which satisfies
\[ A(ax + by) = aAx + bAy \]
for all $a, b \in F$ and all $x, y \in X$.
The adjoint of a linear operator $A$, if it exists, is the mapping
\[ A' : X \to X \]
that satisfies
\[ \langle Ax, y\rangle = \langle x, A'y\rangle \]
for all $x, y \in X$. If $A' = A$, then $A$ is said to be self-adjoint.

Example 73 (Adjoint of a linear operator in a finite-dimensional inner product space)

Consider a finite-dimensional inner product space $X$ (e.g. $\mathbb{C}^n$ over $\mathbb{C}$). If $B = \{e_i : 1 \le i \le n\}$ is an orthonormal basis for $X$, then any linear operator $T$ can be represented by a matrix
\[ A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}, \]
that is,
\[ A[x]_B = [T(x)]_B \]
for all $x \in X$. Here, $[y]_B$ is the coordinate vector of $y$ with respect to the basis $B$.
The columns of $A$ are given by
\[ [T(e_i)]_B = \begin{bmatrix} a_{1i} & \cdots & a_{ni} \end{bmatrix}^t, \qquad i = 1, 2, \dots, n. \]
In this case, the adjoint of $A$ is given by
\[ A' = \begin{bmatrix} \overline{a_{11}} & \cdots & \overline{a_{n1}} \\ \vdots & & \vdots \\ \overline{a_{1n}} & \cdots & \overline{a_{nn}} \end{bmatrix} = \overline{A}^{\,T}. \]

Definition 74 (Eigenvalues and eigenvectors)

Consider a linear operator $A$ on an inner product space $X$. If there exists a nonzero vector $x$ in $X$ such that $Ax = ax$ for some $a \in \mathbb{C}$, then $x$ is called an eigenvector of $A$ corresponding to the eigenvalue $a$.

Remark 75 (Properties of a self-adjoint matrix)

If $A$ is a self-adjoint (Hermitian) matrix, then

1. The eigenvalues of $A$ are all real numbers.
2. The eigenvectors of $A$ corresponding to distinct eigenvalues are orthogonal.
3. The eigenvectors of $A$ form a basis of $X$.

Adjoints of operators generalize conjugate transposes of square matrices to infinite-dimensional situations.
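This correspondence can be verified directly in a small computation (plain Python; the matrix and vectors below are arbitrary sample values, not taken from the text):

```python
# Check <Ax, y> = <x, A'y> for a complex 2x2 matrix, where A' is the conjugate
# transpose and <u, v> = sum_i u_i * conj(v_i).
A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def adjoint(M):
    # conjugate transpose: (A')_{ij} = conj(a_{ji})
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def inner(u, v):
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

x, y = [1 - 1j, 2 + 3j], [4 + 0j, -1 + 2j]
lhs = inner(matvec(A, x), y)
rhs = inner(x, matvec(adjoint(A), y))
print(lhs, rhs)  # the two inner products coincide
```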

8.1 Generalization to The Space $\mathcal{L}^2$

Consider the second-order homogeneous differential equation
\[ p(x)y'' + q(x)y' + r(x)y = 0, \]
which can be written in terms of the linear operator
\[ L : \mathcal{L}^2(I) \cap C^2(I) \to \mathcal{L}^2(I), \qquad L = p(x)\frac{d^2}{dx^2} + q(x)\frac{d}{dx} + r(x), \]
as follows:
\[ Ly = 0. \]

Remark 76 1

1. We have assumed that $p, q, r$ are all in $C^2(I)$.
2. If $I$ is a closed bounded interval, then
\[ C^2(I) \subset C(I) \subset \mathcal{L}^2(I) \]
and thus
\[ \mathcal{L}^2(I) \cap C^2(I) = C^2(I). \]
[Give an example of a function $f \in C^2(I)$ with $f \notin \mathcal{L}^2(I)$.]

By definition, the adjoint of $L$, denoted by $L'$, must satisfy
\[ \langle Lf, g\rangle = \langle f, L'g\rangle \]
for all $f, g \in \mathcal{L}^2(I) \cap C^2(I)$.
Let $I = (a,b)$, where $I$ can be infinite. Now,
\[ \langle Lf, g\rangle = \int_a^b \big( pf'' + qf' + rf \big)\overline{g}\,dx. \]
Integrating by parts, twice for the first term and once for the second,
\[ \int_a^b pf''\,\overline{g}\,dx = \Big[ p\overline{g}f' - f(p\overline{g})' \Big]_a^b + \int_a^b f(p\overline{g})''\,dx, \]
\[ \int_a^b qf'\,\overline{g}\,dx = \Big[ q\overline{g}f \Big]_a^b - \int_a^b f(q\overline{g})'\,dx. \]
Collecting the boundary contributions, $p\overline{g}f' - f(p\overline{g})' + qf\overline{g} = p(f'\overline{g} - f\overline{g}') + (q - p')f\overline{g}$, so that
\[ \langle Lf, g\rangle = \Big[ p\big(f'\overline{g} - f\overline{g}'\big) + (q - p')f\overline{g} \Big]_a^b + \big\langle f,\ (pg)'' - (qg)' + rg \big\rangle. \]
Thus,
\[ \langle Lf, g\rangle = \Big[ p\big(f'\overline{g} - f\overline{g}'\big) + (q - p')f\overline{g} \Big]_a^b + \big\langle f,\ (pg)'' - (qg)' + rg \big\rangle. \]

Remark 77 2

1. If $(a,b)$ is infinite or any of the integrands are unbounded at $a$ or $b$, then the integrals are improper.
2. The right hand side is well defined if $p \in C^2(a,b)$, $q \in C^1(a,b)$ and $r \in C(a,b)$.

We can write
\[ \langle Lf, g\rangle = \Big[ p\big(f'\overline{g} - f\overline{g}'\big) + (q - p')f\overline{g} \Big]_a^b + \langle f, L^+ g\rangle, \tag{2.24} \]
where
\[ L^+ g = (pg)'' - (qg)' + rg = (p'g + pg')' - (q'g + qg') + rg = pg'' + 2p'g' + p''g - q'g - qg' + rg = pg'' + (2p' - q)g' + (p'' - q' + r)g. \]
The operator
\[ L^+ = p\frac{d^2}{dx^2} + (2p' - q)\frac{d}{dx} + (p'' - q' + r) \]
is called the formal adjoint of $L$. $L$ is said to be formally self-adjoint if
\[ L^+ = L. \]

Theorem 78 2.14

Let
\[ L : \mathcal{L}^2(a,b) \cap C^2(a,b) \to \mathcal{L}^2(a,b) \]
be a linear differential operator of second order defined by
\[ Lu = p(x)u'' + q(x)u' + r(x)u, \qquad x \in (a,b), \]
where $p \in C^2(a,b)$, $q \in C^1(a,b)$, and $r \in C(a,b)$. Then

1. $L$ is formally self-adjoint, that is, $L^+ = L$, if the coefficients $p, q$ and $r$ are real and $p' = q$.
2. $L$ is self-adjoint, that is, $L' = L$, if $L$ is formally self-adjoint and
\[ \Big[ p\big(f'\overline{g} - f\overline{g}'\big) \Big]_a^b = 0 \]
for all $f, g \in \mathcal{L}^2(a,b) \cap C^2(a,b)$.
3. If $L$ is self-adjoint, then the eigenvalues of the equation
\[ Lu + \lambda u = 0 \]
are all real and any pair of eigenfunctions associated with distinct eigenvalues are orthogonal in $\mathcal{L}^2(a,b)$.

Proof

1. The formal adjoint of $L$ is given by
\[ L^+ = p\frac{d^2}{dx^2} + (2p' - q)\frac{d}{dx} + (p'' - q' + r). \]
$L$ is formally self-adjoint if $L^+ = L$. Comparing the coefficients of $L^+$ with those of $L$ (and recalling that taking the adjoint conjugates the coefficients) gives the conditions
\[ \overline{p} = p, \qquad 2\overline{p}' - \overline{q} = q, \qquad \overline{p}'' - \overline{q}' + \overline{r} = r, \]
which are satisfied if and only if $p$, $q$ and $r$ are real functions and $p' = q$.
2. If $L$ is formally self-adjoint, then
\[ Lg = pg'' + p'g' + rg = (pg')' + rg. \]
That is,
\[ L = \frac{d}{dx}\Big( p\frac{d}{dx} \Big) + r, \]
and (2.24) becomes
\[ \langle Lf, g\rangle = \Big[ p\big(f'\overline{g} - f\overline{g}'\big) \Big]_a^b + \langle f, Lg\rangle. \]
Hence, $L$ is self-adjoint if
\[ \Big[ p\big(f'\overline{g} - f\overline{g}'\big) \Big]_a^b = 0 \]
for all $f, g \in \mathcal{L}^2(a,b) \cap C^2(a,b)$.
3. Suppose $\lambda \in \mathbb{C}$ is an eigenvalue; then there exists $f \in \mathcal{L}^2(a,b) \cap C^2(a,b)$, $f \ne 0$, such that
\[ Lf + \lambda f = 0, \quad\text{i.e.}\quad Lf = -\lambda f. \]
Thus,
\[ \langle Lf, f\rangle = \langle -\lambda f, f\rangle = -\lambda\langle f, f\rangle = -\lambda\|f\|^2. \]
But $L$ being self-adjoint implies that
\[ \langle Lf, f\rangle = \langle f, Lf\rangle = \langle f, -\lambda f\rangle = -\overline{\lambda}\langle f, f\rangle = -\overline{\lambda}\|f\|^2. \]
Thus,
\[ \lambda\|f\|^2 = \overline{\lambda}\|f\|^2. \]
Since $f \ne 0$, we can divide the above equation by $\|f\|^2$, which gives
\[ \lambda = \overline{\lambda}; \]
that is, $\lambda \in \mathbb{R}$.
Second, we want to prove that if $f$ and $g$ are eigenfunctions associated with the eigenvalues $\lambda$ and $\mu$, respectively, where $\lambda \ne \mu$, then $f$ and $g$ are orthogonal:
\[ -\lambda\langle f, g\rangle = \langle Lf, g\rangle = \langle f, Lg\rangle = \langle f, -\mu g\rangle = -\overline{\mu}\langle f, g\rangle = -\mu\langle f, g\rangle, \]
since $\mu$ is real. Thus,
\[ (\lambda - \mu)\langle f, g\rangle = 0, \]
but $\lambda - \mu \ne 0$; hence $\langle f, g\rangle = 0$.

Remark 79 2.15

If $p' = q$, then the continuity of $p''$ and $q'$ is no longer required. That is, the above theorem is valid under the weaker condition that $p'$ is continuous.
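Part 3 of the theorem can be spot-checked numerically for $L = \frac{d^2}{dx^2}$ on $(0,\pi)$ with $u(0) = u(\pi) = 0$, whose eigenfunctions turn out to be $\sin nx$ with distinct eigenvalues $n^2$ (this is worked out in the next example). A minimal sketch of the orthogonality claim:

```python
import math

def inner(u, v, a=0.0, b=math.pi, m=4000):
    # midpoint-rule approximation of <u, v> on (a, b)
    h = (b - a) / m
    return sum(u(a + (i + 0.5) * h) * v(a + (i + 0.5) * h) for i in range(m)) * h

eig = [lambda x, n=n: math.sin(n * x) for n in (1, 2, 3)]  # eigenfunctions sin(nx)

# distinct eigenvalues 1, 4, 9 -> pairwise orthogonal eigenfunctions
print(inner(eig[0], eig[1]), inner(eig[0], eig[2]), inner(eig[1], eig[2]))
print(inner(eig[0], eig[0]))  # ||sin x||^2 = pi/2, nonzero as it must be
```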

Example 80 2.16

Determine the eigenvalues and eigenfunctions of
\[ u'' + \lambda u = 0 \]
on $(0,\pi)$ subject to the homogeneous boundary conditions
\[ u(0) = 0, \qquad u(\pi) = 0. \]

In the differential operator $L = \frac{d^2}{dx^2}$, we have $p = 1$, $q = 0$, $r = 0$, all real, and $p' = 0 = q$. Thus, $L$ is formally self-adjoint. The auxiliary equation is given by
\[ m^2 + \lambda = 0 \;\Rightarrow\; m = \pm\sqrt{-\lambda}. \]

CASE 1: If $\lambda > 0$, then $m = \pm i\sqrt{\lambda}$ and the general solution is given by
\[ u(x) = c_1\cos\sqrt{\lambda}\,x + c_2\sin\sqrt{\lambda}\,x. \]
Using the boundary conditions:
\[ 0 = u(0) = c_1, \]
\[ 0 = u(\pi) = c_2\sin\sqrt{\lambda}\,\pi \;\Rightarrow\; \sqrt{\lambda} = n \;\Rightarrow\; \lambda = n^2,\ n \in \mathbb{N}. \]
The eigenvalues are
\[ \{n^2 : n \in \mathbb{N}\} = \{1, 4, 9, \dots\} \subset \mathbb{R}, \]
and the corresponding eigenfunctions are
\[ \{\sin nx : n \in \mathbb{N}\} = \{\sin x, \sin 2x, \sin 3x, \dots\}, \]
where we have chosen $c_2 = 1$. [Verify that the eigenfunctions are orthogonal!]
CASE 2: If $\lambda < 0$, then $m = \pm\sqrt{-\lambda}$ and the general solution is given by
\[ u(x) = c_1\cosh\sqrt{-\lambda}\,x + c_2\sinh\sqrt{-\lambda}\,x. \]
Using the boundary conditions:
\[ 0 = u(0) = c_1, \]
\[ 0 = u(\pi) = c_2\sinh\sqrt{-\lambda}\,\pi \;\Rightarrow\; c_2 = 0. \]
But the eigenfunction cannot be zero. Hence, there are no negative eigenvalues.
CASE 3: If $\lambda = 0$, then $m = 0$ and the general solution is given by
\[ u(x) = c_1 + c_2 x. \]
Using the boundary conditions:
\[ 0 = u(0) = c_1, \]
\[ 0 = u(\pi) = c_2\pi \;\Rightarrow\; c_2 = 0. \]
Again the eigenfunction cannot be zero. So we have no eigenvalues in $(-\infty, 0]$.

Example 81 2.17

Determine the eigenvalues and eigenfunctions of
\[ u'' + \lambda u = 0 \]
on $(0,l)$ subject to the separated boundary conditions
\[ u(0) = 0, \]
\[ hu(l) + u'(l) = 0, \]
where $h > 0$. As in the previous example,
\[ m^2 + \lambda = 0 \;\Rightarrow\; m = \pm\sqrt{-\lambda}. \]
CASE 1: If $\lambda > 0$, then $m = \pm i\sqrt{\lambda}$ and the general solution is given by
\[ u(x) = c_1\cos\sqrt{\lambda}\,x + c_2\sin\sqrt{\lambda}\,x. \]
Using the boundary conditions:
\[ 0 = u(0) = c_1, \]
\[ 0 = hu(l) + u'(l) = hc_2\sin\sqrt{\lambda}\,l + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,l, \]
so
\[ hc_2\sin\sqrt{\lambda}\,l = -c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,l \]
or
\[ \tan\sqrt{\lambda}\,l = -\frac{\sqrt{\lambda}}{h}, \]
where we have divided by $hc_2\cos\sqrt{\lambda}\,l \ne 0$ [Why?].
If we write $\alpha = \sqrt{\lambda}\,l$, then the solutions of the above equation are the intersections of the graphs of $y = \tan\alpha$ with $y = -\frac{\alpha}{hl}$, as shown in the figure below.
The eigenvalues are determined by the roots $\alpha_n$ of this equation,
\[ \sqrt{\lambda_n}\,l = \alpha_n, \quad\text{that is,}\quad \lambda_n = \Big(\frac{\alpha_n}{l}\Big)^2, \qquad n \in \mathbb{N}, \]
and the corresponding eigenfunctions are
\[ \Big\{ \sin\frac{\alpha_n x}{l} : n \in \mathbb{N} \Big\}. \]
There are no eigenvalues in $(-\infty, 0]$.
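The roots $\alpha_n$ have no closed form, but they are easy to locate numerically: one root lies in each interval $\big((2n-1)\tfrac{\pi}{2},\, n\pi\big)$, where $g(\alpha) = hl\sin\alpha + \alpha\cos\alpha$ changes sign. A minimal bisection sketch (the values of $h$ and $l$ are our own sample choices):

```python
import math

h, l = 2.0, 1.0   # sample constants (our choice)

def g(alpha):
    # h*l*sin(alpha) + alpha*cos(alpha) = 0  <=>  tan(alpha) = -alpha/(h*l)
    return h * l * math.sin(alpha) + alpha * math.cos(alpha)

def bisect(a, b, tol=1e-12):
    # standard bisection; assumes g changes sign on [a, b]
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * g(m) <= 0:
            b = m
        else:
            a, fa = m, g(m)
    return 0.5 * (a + b)

# one root alpha_n in each interval ((2n-1)*pi/2, n*pi)
alphas = [bisect((2 * n - 1) * math.pi / 2, n * math.pi) for n in (1, 2, 3)]
eigenvalues = [(a / l) ** 2 for a in alphas]
print(alphas)
print(eigenvalues)
```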

8.2 Transforming a second-order differential operator to a formally self-adjoint operator

Recall that in the differential equation
\[ Lu = 0, \]
where
\[ L : \mathcal{L}^2(a,b) \cap C^2(a,b) \to \mathcal{L}^2(a,b), \qquad L = p(x)\frac{d^2}{dx^2} + q(x)\frac{d}{dx} + r(x), \]
the operator $L$ is a formally self-adjoint operator iff $p, q$ and $r$ are all real and $p' = q$; but what if the last condition is not satisfied?

Theorem 82 (Transforming the operator L to a formally self-adjoint operator)

Let
\[ L : \mathcal{L}^2(a,b) \cap C^2(a,b) \to \mathcal{L}^2(a,b) \]
be a linear differential operator of second order defined by
\[ Lu = p(x)u'' + q(x)u' + r(x)u, \qquad x \in (a,b), \]
where $p \in C^2(a,b)$, $q \in C^1(a,b)$, $r \in C(a,b)$ and $p, q$ and $r$ are all real functions, but $p' \ne q$ on $(a,b)$. Then,

1. There exists a strictly positive function $\mu(x) \in C^2(a,b)$ such that
\[ \tilde{L} = \mu L \]
is formally self-adjoint.

2. $\tilde{L}$ is self-adjoint if $\tilde{L}$ is formally self-adjoint and
\[ \Big[ \mu p\big(f'\overline{g} - f\overline{g}'\big) \Big]_a^b = 0 \]
for all $f, g \in \mathcal{L}^2(a,b) \cap C^2(a,b)$.

3. If $\tilde{L} = \mu L$ is self-adjoint, then the eigenvalues of the operator $L$ are all real and any pair of eigenfunctions associated with distinct eigenvalues are orthogonal in $\mathcal{L}^2_\mu(a,b)$.

Proof

1. Redoing the algebraic manipulation that was done to find the adjoint operator of $L$, with $L$ replaced by $\tilde{L} = \mu L$, we get
\[ \langle \tilde{L}f, g\rangle = \Big[ \mu p\big(f'\overline{g} - f\overline{g}'\big) + \big(\mu q - (\mu p)'\big)f\overline{g} \Big]_a^b + \langle f, \tilde{L}^+ g\rangle, \]
where
\[ \tilde{L}^+ = \mu p\frac{d^2}{dx^2} + \big( 2(\mu p)' - \mu q \big)\frac{d}{dx} + \big( (\mu p)'' - (\mu q)' + \mu r \big), \]
and where we have used the fact that $\mu, p, q$ and $r$ are all real functions.

Note that $\tilde{L}$ is formally self-adjoint, i.e. $\tilde{L}^+ = \tilde{L}$, if
\[ \mu p = \mu p, \qquad 2(\mu p)' - \mu q = \mu q, \qquad (\mu p)'' - (\mu q)' + \mu r = \mu r, \]
which is true if
\[ (\mu p)' = \mu q, \]
which gives
\[ \mu' p + \mu p' = \mu q \]
or
\[ \mu' + \frac{p' - q}{p}\mu = 0. \]
This is a first-order homogeneous differential equation, with integrating factor
\[ \exp\left( \int \frac{p' - q}{p}\,dx \right). \]
Hence,
\[ \exp\left( \int \frac{p' - q}{p}\,dx \right)\mu' + \frac{p' - q}{p}\exp\left( \int \frac{p' - q}{p}\,dx \right)\mu = 0 \]
or
\[ \frac{d}{dx}\left[ \exp\left( \int \frac{p' - q}{p}\,dx \right)\mu \right] = 0 \]
or
\[ \exp\left( \int \frac{p' - q}{p}\,dx \right)\mu = c. \]
Assuming without loss of generality that $p(x) > 0$ on $(a,b)$, we have
\[ \mu = c\,\exp\left( \int \frac{q - p'}{p}\,dx \right) = c\,\exp\left( \int \frac{q}{p}\,dx \right)\exp\left( -\int \frac{p'}{p}\,dx \right) = c\,\exp\left( \int \frac{q}{p}\,dx \right)\exp(-\ln p) = \frac{c}{p}\exp\left( \int \frac{q}{p}\,dx \right), \]
where $c$ is a constant. Note that $\mu$ is a strictly positive function (taking $c > 0$) and $\mu(x) \in C^2(a,b)$.

2. [This part is homework!] Hint: use the relation
\[ \langle \tilde{L}f, g\rangle = \Big[ \mu p\big(f'\overline{g} - f\overline{g}'\big) + \big(\mu q - (\mu p)'\big)f\overline{g} \Big]_a^b + \langle f, \tilde{L}^+ g\rangle. \]

3. Let $u \in \mathcal{L}^2_\mu(a,b)$ be an eigenfunction of the operator $L$ corresponding to the eigenvalue $\lambda$; that is,
\[ Lu + \lambda u = 0. \]
The above equation is equivalent to
\[ \tilde{L}u + \lambda\mu u = 0, \]
where $\tilde{L} = \mu L$ is a self-adjoint operator. Now,
\[ \lambda\|u\|_\mu^2 = \lambda\langle u, u\rangle_\mu = \langle \lambda\mu u, u\rangle = \langle -\tilde{L}u, u\rangle = \langle u, -\tilde{L}u\rangle = \langle u, \lambda\mu u\rangle = \overline{\lambda}\langle u, u\rangle_\mu = \overline{\lambda}\|u\|_\mu^2. \]

Since $\|u\|_\mu^2 \ne 0$, $\lambda$ must be a real number.
Now, if $v \in \mathcal{L}^2_\mu(a,b)$ is another eigenfunction of the operator $L$ corresponding to a different eigenvalue $\eta$, then we have
\[ (\lambda - \eta)\langle u, v\rangle_\mu = \lambda\langle u, v\rangle_\mu - \eta\langle u, v\rangle_\mu = -\langle \tilde{L}u, v\rangle + \langle u, \tilde{L}v\rangle = 0, \]
i.e. $u$ and $v$ are orthogonal in $\mathcal{L}^2_\mu(a,b)$.

Corollary 83 2.19

If $L : \mathcal{L}^2(a,b) \cap C^2(a,b) \to \mathcal{L}^2(a,b)$ is a self-adjoint linear operator and $\rho$ is a positive and continuous function on $[a,b]$, then the eigenvalues of the equation $Lu + \lambda\rho u = 0$ are all real and any pair of the eigenfunctions associated with distinct eigenvalues are orthogonal in $\mathcal{L}^2_\rho(a,b)$.

Remark 84 2.20

1. The eigenvalue problem equivalent to the problem $Lu + \lambda\rho u = 0$ (where $L$ here is a self-adjoint operator) is
\[ \frac{1}{\rho}Lu + \lambda u = 0. \]
Therefore, the eigenvalues and eigenfunctions obtained by solving the former equation are actually the eigenvalues and eigenfunctions of the operator $\frac{1}{\rho}L$.
2. If $(a,b)$ is a finite interval, then the continuous positive function $\rho(x)$ attains its minimum and maximum on $[a,b]$; that is,
\[ 0 < \rho_{\min} \le \rho(x) \le \rho_{\max} < \infty \]
\[ \Rightarrow\; 0 < \rho_{\min}\int_a^b |u|^2\,dx \le \int_a^b \rho\,|u|^2\,dx \le \rho_{\max}\int_a^b |u|^2\,dx < \infty \]
\[ \Rightarrow\; 0 < \sqrt{\rho_{\min}}\,\|u\| \le \|u\|_\rho \le \sqrt{\rho_{\max}}\,\|u\| < \infty, \]
from which we get
\[ \|u\|_\rho < \infty \iff \|u\| < \infty; \]
i.e. $\mathcal{L}^2(a,b)$ and $\mathcal{L}^2_\rho(a,b)$ contain the same functions, but have different inner products.
3. The operator $L$ in the above theory can be any self-adjoint linear operator on an inner product space.

Example 85 2.21

1. Find the eigenvalues and eigenfunctions of the boundary-value problem on $[1,b]$
\[ x^2 y'' + xy' + \lambda y = 0, \qquad y(1) = y(b) = 0. \]
2. Is the above differential operator self-adjoint? If not, transform it to a formally self-adjoint operator.

Solution
1. The above equation is a Cauchy-Euler equation, with $a = 1$ and $b = \lambda$ in the notation of the Cauchy-Euler equation. Thus, the auxiliary equation is given by
\[ m^2 + (1 - 1)m + \lambda = 0 \;\Rightarrow\; m^2 + \lambda = 0 \;\Rightarrow\; m = \pm i\sqrt{\lambda}, \]
where we have assumed that $\lambda > 0$ [show that there are no eigenvalues in $(-\infty, 0]$].
The general solution is given by
\[ y(x) = d_1 x^{i\sqrt{\lambda}} + d_2 x^{-i\sqrt{\lambda}} = d_1 e^{i\sqrt{\lambda}\ln x} + d_2 e^{-i\sqrt{\lambda}\ln x} = d_1\big[\cos(\sqrt{\lambda}\ln x) + i\sin(\sqrt{\lambda}\ln x)\big] + d_2\big[\cos(\sqrt{\lambda}\ln x) - i\sin(\sqrt{\lambda}\ln x)\big] \]
\[ = (d_1 + d_2)\cos(\sqrt{\lambda}\ln x) + (d_1 - d_2)\,i\sin(\sqrt{\lambda}\ln x) = c_1\cos(\sqrt{\lambda}\ln x) + c_2\sin(\sqrt{\lambda}\ln x). \]
Using the boundary conditions:
\[ 0 = y(1) = c_1, \]
\[ 0 = y(b) = c_2\sin(\sqrt{\lambda}\ln b), \]
but $c_2 \ne 0$ (otherwise, the eigenfunction would be zero!). Thus,
\[ \sin(\sqrt{\lambda}\ln b) = 0 \;\Rightarrow\; \sqrt{\lambda}\ln b = n\pi,\ n \in \mathbb{N} \;\Rightarrow\; \lambda_n = \Big(\frac{n\pi}{\ln b}\Big)^2,\ n \in \mathbb{N}, \]
and the eigenfunctions corresponding to these eigenvalues are
\[ y_n(x) = \sin\Big(\frac{n\pi}{\ln b}\ln x\Big). \]
2. In this example,
\[ L = x^2\frac{d^2}{dx^2} + x\frac{d}{dx}; \]
that is,
\[ p(x) = x^2, \qquad q(x) = x, \qquad r(x) = 0, \]
all real, but
\[ p'(x) = 2x \ne x = q(x). \]
Thus, $L$ is not a formally self-adjoint operator. If we take
\[ \mu(x) = \frac{1}{p(x)}\exp\left( \int \frac{q(x)}{p(x)}\,dx \right) = \frac{1}{x^2}\exp\left( \int \frac{1}{x}\,dx \right) = \frac{1}{x^2}\,x = \frac{1}{x}, \]
then
\[ \tilde{L} = \mu L = \frac{1}{x}\Big( x^2\frac{d^2}{dx^2} + x\frac{d}{dx} \Big) = x\frac{d^2}{dx^2} + \frac{d}{dx} = \frac{d}{dx}\Big( x\frac{d}{dx} \Big) \]
is formally self-adjoint. [Verify!]
According to the theory developed above, the eigenfunctions
\[ y_n(x) = \sin\Big(\frac{n\pi}{\ln b}\ln x\Big) \]
are orthogonal in $\mathcal{L}^2_\mu(1,b)$. That is, for $n \ne m$,
\[ \langle y_n, y_m\rangle_\mu = \int_1^b \frac{1}{x}\sin\Big(\frac{n\pi}{\ln b}\ln x\Big)\sin\Big(\frac{m\pi}{\ln b}\ln x\Big)dx = 0. \]
[Verify!]
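The prompted verification can be done numerically. The sketch below (with the sample choice $b = e^2$, our own, so that $\ln b = 2$) approximates the weighted inner product by the midpoint rule; the substitution $t = \ln x$ shows the exact values are $0$ for $n \ne m$ and $\ln(b)/2$ for $n = m$.

```python
import math

b = math.e ** 2   # sample right endpoint (our choice), so ln(b) = 2

def y(n, x):
    return math.sin(n * math.pi * math.log(x) / math.log(b))

def weighted_inner(n, m, steps=20000):
    # midpoint-rule approximation of int_1^b (1/x) y_n(x) y_m(x) dx
    hstep = (b - 1.0) / steps
    s = 0.0
    for i in range(steps):
        x = 1.0 + (i + 0.5) * hstep
        s += y(n, x) * y(m, x) / x
    return s * hstep

print(weighted_inner(1, 2), weighted_inner(1, 3))  # ~0: orthogonal in the weighted sense
print(weighted_inner(2, 2))                        # ||y_2||^2 with weight 1/x = ln(b)/2
```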

Part III
The Sturm-Liouville Theory

9 The Sturm-Liouville Problem

We are going to learn

Define the Sturm-Liouville problem.
Generalize the third part of the eigenvalue theory to an infinite-dimensional space.

Definition 86

Let $L$ be a formally self-adjoint operator of the form
\[ L = \frac{d}{dx}\Big( p(x)\frac{d}{dx} \Big) + r(x). \tag{2.33} \]
The eigenvalue equation
\[ Lu + \lambda\rho(x)u = 0, \qquad x \in (a,b), \tag{2.34} \]
subject to the separated homogeneous boundary conditions
\[ \alpha_1 u(a) + \alpha_2 u'(a) = 0, \qquad |\alpha_1| + |\alpha_2| > 0, \tag{2.35} \]
\[ \beta_1 u(b) + \beta_2 u'(b) = 0, \qquad |\beta_1| + |\beta_2| > 0, \]
where the $\alpha_i$ and $\beta_i$ are real constants, is called a Sturm-Liouville eigenvalue problem, or SL problem for short.

Definition 87

The SL problem is called regular if the interval $(a,b)$ is bounded and $p \ne 0$ on $(a,b)$. Otherwise, the SL problem is called singular.

Remark 88 1

1. The solutions of the SL problem (2.34) with boundary conditions (2.35) are the eigenfunctions of the operator $\frac{1}{\rho}L$.
2. Under the above boundary conditions, $L$ is a self-adjoint operator [verify!]. Therefore, the eigenvalues in (2.34), if they exist, are real and the corresponding eigenfunctions are orthogonal in $\mathcal{L}^2_\rho(a,b)$.
3. In a regular SL problem, we assume that $p(x) > 0$.

The following theorem generalizes the third part of the eigenvalue theory, namely that "the eigenvectors of a self-adjoint matrix in a finite-dimensional space $X$ form a basis for that space", to an infinite-dimensional inner product space.

Theorem 89 2.29

Assuming that $p', r, \rho \in C([a,b])$, and $p, \rho > 0$ on $[a,b]$, the SL eigenvalue problem defined by Equations (2.34) and (2.35) has an infinite sequence of real eigenvalues
\[ \lambda_0 < \lambda_1 < \lambda_2 < \cdots \]
such that $\lambda_n \to \infty$. To each eigenvalue $\lambda_n$ corresponds a single eigenfunction $\varphi_n$, and the sequence of eigenfunctions $\{\varphi_n : n \in \mathbb{N}\}$ forms an orthogonal basis of $\mathcal{L}^2_\rho(a,b)$.

Remark 90 2.30

1. Every eigenvalue $\lambda$ satisfies
\[ \lambda \ge -\max\{|r(x)| : a \le x \le b\}. \]
2. If the SL problem (2.34) is considered under the periodic boundary conditions
\[ u(a) = u(b), \]
\[ u'(a) = u'(b), \]
then
(a) The operator $L$ defined by (2.33) will be self-adjoint if $p(a) = p(b)$ [verify!].
(b) Theorem 2.29 holds in this case, except that the uniqueness of the eigenfunction for each eigenvalue is not guaranteed.

Example 91 2.31

Find the eigenvalues and eigenfunctions of the equation
\[ u'' + \lambda u = 0, \qquad 0 \le x \le l, \]
subject to the boundary conditions
\[ u'(0) = 0, \]
\[ u'(l) = 0. \]

1. Find the eigenvalues and eigenfunctions of the above problem.
2. Let $f \in \mathcal{L}^2(0,l)$; write $f$ as a linear combination of the computed eigenfunctions.
3. Write $f(x) = 1$ as a linear combination of the computed eigenfunctions.
4. Do the above problem using the boundary conditions
\[ u(0) = 0, \qquad u(l) = 0, \]
and compare the series representation for $f(x) = 1$ in the current case with that in the previous case.

Solution
The above eigenvalue problem can be written as
\[ Lu + \lambda u = 0 \]

with
\[ L = \frac{d}{dx}\Big( \frac{d}{dx} \Big), \qquad \rho(x) = 1, \]
and under separated homogeneous boundary conditions. Thus, we have an SL problem and Theorem 2.29 holds.
1. From Example 2.16, we know that the roots of the auxiliary equation are $m = \pm\sqrt{-\lambda}$.
(a) According to Remark 2.30, the eigenvalues must satisfy
\[ \lambda \ge -\max\{|r(x)| : 0 \le x \le l\} = 0; \]
hence, there are no negative eigenvalues.
(b) For $\lambda = 0$, the general solution is given by
\[ u(x) = c_1 x + c_2 \quad\text{and}\quad u'(x) = c_1. \]
Using the boundary conditions:
\[ 0 = u'(0) = c_1, \qquad 0 = u'(l) = c_1. \]
Thus, the eigenfunction corresponding to $\lambda_0 = 0$ is $u_0(x) = 1$.
(c) For $\lambda > 0$, we have the general solution
\[ u(x) = c_1\cos\sqrt{\lambda}\,x + c_2\sin\sqrt{\lambda}\,x, \]
from which we get
\[ u'(x) = -c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,x + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,x. \]
Using the boundary conditions:
\[ 0 = u'(0) = c_2\sqrt{\lambda}, \]
and since $\sqrt{\lambda} \ne 0$, we have $c_2 = 0$;
\[ 0 = u'(l) = -c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,l, \]
and since $c_1\sqrt{\lambda} \ne 0$, we have
\[ 0 = \sin\sqrt{\lambda}\,l \;\Rightarrow\; \sqrt{\lambda}\,l = n\pi,\ n \in \mathbb{N}_0 \;\Rightarrow\; \lambda_n = \frac{n^2\pi^2}{l^2},\ n \in \mathbb{N}_0. \]
Note that we have an infinite sequence of eigenvalues $\{\frac{n^2\pi^2}{l^2} : n \in \mathbb{N}_0\}$ with
\[ \lim_{n\to\infty}\lambda_n = \lim_{n\to\infty}\frac{n^2\pi^2}{l^2} = \infty, \]
as stated in the theorem.
The eigenfunction $u_n$ corresponding to $\lambda_n = \frac{n^2\pi^2}{l^2}$ is given by
\[ u_n(x) = \cos\Big(\frac{n\pi}{l}x\Big), \]
where we have used $c_1 = 1$.
2. According to Theorem 2.29, the set of eigenfunctions $\{\cos(\frac{n\pi}{l}x) : n \in \mathbb{N}_0\}$ is orthogonal in $\mathcal{L}^2(0,l)$ [verify!]. Moreover, the set $\{\cos(\frac{n\pi}{l}x) : n \in \mathbb{N}_0\}$ forms a basis for $\mathcal{L}^2(0,l)$.
That is, we can write any function $f \in \mathcal{L}^2(0,l)$ in the form
\[ f(x) = \sum_{n=0}^{\infty} a_n\cos\Big(\frac{n\pi}{l}x\Big), \]
where
\[ a_n = \frac{\big\langle f, \cos(\frac{n\pi}{l}x)\big\rangle}{\big\|\cos(\frac{n\pi}{l}x)\big\|^2}, \]
but
\[ \Big\langle f, \cos\Big(\frac{n\pi}{l}x\Big)\Big\rangle = \int_0^l f(x)\cos\Big(\frac{n\pi}{l}x\Big)dx, \]
\[ \Big\|\cos\Big(\frac{n\pi}{l}x\Big)\Big\|^2 = \int_0^l \cos^2\Big(\frac{n\pi}{l}x\Big)dx = \begin{cases} l, & n = 0, \\ \frac{l}{2}, & n \in \mathbb{N}. \end{cases} \]
Therefore, for $f \in \mathcal{L}^2(0,l)$, we have
\[ f(x) = \sum_{n=0}^{\infty} a_n\cos\Big(\frac{n\pi}{l}x\Big), \]
where
\[ a_0 = \frac{1}{l}\int_0^l f(x)\,dx, \qquad a_n = \frac{2}{l}\int_0^l f(x)\cos\Big(\frac{n\pi}{l}x\Big)dx,\ n \in \mathbb{N}. \]
3. Let $f(x) = 1$; then
\[ a_0 = \frac{1}{l}\int_0^l 1\,dx = 1, \qquad a_n = \frac{2}{l}\int_0^l\cos\Big(\frac{n\pi}{l}x\Big)dx = 0,\ n \in \mathbb{N}. \]
Hence, the function $f(x) = 1$ is represented by a single term, namely, the first eigenfunction $u_0(x)$.
4. For the second pair of boundary conditions, we have
\[ \lambda_n = \frac{n^2\pi^2}{l^2}, \qquad u_n(x) = \sin\Big(\frac{n\pi}{l}x\Big), \]
where $n \in \mathbb{N}$. Any $f \in \mathcal{L}^2(0,l)$ can be written in the form
\[ f(x) = \sum_{n=1}^{\infty} b_n\sin\Big(\frac{n\pi}{l}x\Big), \]
where
\[ b_n = \frac{2}{l}\int_0^l f(x)\sin\Big(\frac{n\pi}{l}x\Big)dx. \]
In this case, if we consider $f(x) = 1$, we get
\[ b_n = \frac{2}{l}\int_0^l\sin\Big(\frac{n\pi}{l}x\Big)dx = \frac{2}{n\pi}\big(1 - \cos n\pi\big) = \frac{2}{n\pi}\big(1 - (-1)^n\big). \]
Therefore,
\[ 1 = \sum_{n=1}^{\infty}\frac{2\big(1 - (-1)^n\big)}{n\pi}\sin\Big(\frac{n\pi}{l}x\Big) = \frac{4}{\pi}\Big( \sin\frac{\pi}{l}x + \frac{1}{3}\sin\frac{3\pi}{l}x + \frac{1}{5}\sin\frac{5\pi}{l}x + \cdots \Big). \]
Remember that the above equality holds in $\mathcal{L}^2(0,l)$, i.e. in the sense that
\[ \Big\| 1 - \sum_{n=1}^{\infty}\frac{2\big(1 - (-1)^n\big)}{n\pi}\sin\Big(\frac{n\pi}{l}x\Big) \Big\| = 0, \]
or
\[ \sum_{n=1}^{k}\frac{2\big(1 - (-1)^n\big)}{n\pi}\sin\Big(\frac{n\pi}{l}x\Big) \xrightarrow{\mathcal{L}^2} 1 \quad\text{as } k \to \infty. \]

Part IV

10 Fourier Series in $\mathcal{L}^2$

We are going to learn

Using "The Sturm-Liouville Theory" to derive "The Fundamental Theorem of Fourier Series".

76 Theorem 92 3.2 (Fundamental Theorem of Fourier Series)

The orthogonal set of functions n n 1; cos x; sin x : n N l l 2 n o is complete in 2 ( l; l) ; in the sense that any function f 2 ( l; l) can be represented byL the series 2 L

1 n n f (x) = a + a cos x + b sin x ; l x l; ((3.7)) 0 n l n l   n=1 X   where f; 1 1 l a0 = h 2i = f (x) dx; 1 2l l k k Z n l f; cos l x 1 n an = = f (x) cos xdx; n N; n 2 cos x l l l 2 l Z n l f; sin l x 1 n bn = = f (x) sin xdx; n N; n 2 sin x l l l 2 l Z

The right-hand side of Equation (3.7) is called the Fourier series expansion of $f$, and the coefficients $a_n$ and $b_n$ in the expansion are the Fourier coefficients of $f$.
Proof: Consider the eigenvalue problem

$$u'' + \lambda u = 0, \quad -l \le x \le l,$$
with the periodic boundary conditions

$$u(-l) = u(l), \qquad u'(-l) = u'(l).$$
1. The above problem is an SL problem because $L = -\frac{d^2}{dx^2}$ is formally self-adjoint. In fact, $L$ is self-adjoint under the above periodic boundary conditions [verify!].
2. The results of Theorem 2.29 therefore hold (except for the uniqueness of the eigenfunctions, as previously mentioned in Remark 2.30). In particular, the eigenfunctions of $L$ are orthogonal and complete in $L^2(-l,l)$.
3. There are no negative eigenvalues of $L$ since

$$\lambda \ge -\max\{|r(x)| : -l \le x \le l\} = 0$$
(see Remark 2.30).

4. For $\lambda = 0$, the general solution is given by

$$u(x) = c_1 x + c_2.$$
Using the boundary conditions, we have

$$-c_1 l + c_2 = u(-l) = u(l) = c_1 l + c_2,$$
which gives $c_1 = 0$, and $0 = u'(-l) = u'(l) = 0$. Therefore, the eigenfunction corresponding to $\lambda_0 = 0$ is $u_0(x) = 1$.
5. For $\lambda > 0$, the general solution is given by

$$u(x) = c_1\cos\sqrt{\lambda}\,x + c_2\sin\sqrt{\lambda}\,x.$$
Using the boundary conditions gives

$$c_1\cos\sqrt{\lambda}\,l - c_2\sin\sqrt{\lambda}\,l = u(-l) = u(l) = c_1\cos\sqrt{\lambda}\,l + c_2\sin\sqrt{\lambda}\,l$$

$$\Rightarrow \quad c_2\sin\sqrt{\lambda}\,l = 0,$$
and

$$c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,l + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,l = u'(-l) = u'(l) = -c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,l + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,l$$

$$\Rightarrow \quad c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,l = 0.$$

Now, since $\lambda > 0$ and $c_1$ and $c_2$ cannot both be zero, we have
$$\sin\sqrt{\lambda}\,l = 0 \;\Rightarrow\; \sqrt{\lambda}\,l = n\pi \;\Rightarrow\; \lambda_n = \frac{n^2\pi^2}{l^2}, \quad n \in \mathbb{N}.$$

Note that for each eigenvalue $\lambda_n = \frac{n^2\pi^2}{l^2}$, we have two eigenfunctions, namely,

(a) If we choose $c_1 = 0$, we get the eigenfunction $u_n(x) = \sin\frac{n\pi}{l}x$.

(b) If we choose $c_2 = 0$, we get the eigenfunction $u_n(x) = \cos\frac{n\pi}{l}x$.

That is, the eigenfunction corresponding to a particular eigenvalue is not unique, which is due to using non-separated boundary conditions (see Theorem 2.29).
6. The last part of Theorem 2.29 states that the set of eigenfunctions
$$\left\{1,\ \cos\frac{n\pi}{l}x,\ \sin\frac{n\pi}{l}x : n \in \mathbb{N}\right\}$$
is orthogonal and complete in $L^2(-l,l)$. Therefore, each $f \in L^2(-l,l)$ is represented by the series

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right), \quad -l \le x \le l,$$
where
$$a_0 = \frac{\langle f, 1\rangle}{\|1\|^2} = \frac{\int_{-l}^{l} f(x)\,dx}{\int_{-l}^{l} 1\,dx} = \frac{1}{2l}\int_{-l}^{l} f(x)\,dx,$$
and

$$a_n = \frac{\left\langle f, \cos\frac{n\pi}{l}x\right\rangle}{\left\|\cos\frac{n\pi}{l}x\right\|^2} = \frac{\int_{-l}^{l} f(x)\cos\frac{n\pi}{l}x\,dx}{\int_{-l}^{l}\cos^2\frac{n\pi}{l}x\,dx} = \frac{1}{l}\int_{-l}^{l} f(x)\cos\frac{n\pi}{l}x\,dx,$$
and

$$b_n = \frac{\left\langle f, \sin\frac{n\pi}{l}x\right\rangle}{\left\|\sin\frac{n\pi}{l}x\right\|^2} = \frac{\int_{-l}^{l} f(x)\sin\frac{n\pi}{l}x\,dx}{\int_{-l}^{l}\sin^2\frac{n\pi}{l}x\,dx} = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{n\pi}{l}x\,dx.$$

Remark 93 (3.3)

1. If $f \in L^2(-l,l)$ is an even function, i.e. $f$ satisfies
$$f(-x) = f(x) \quad \forall x \in [-l,l],$$
then $b_n = 0$ for all $n \in \mathbb{N}$ and $f$ is represented on $[-l,l]$ by a cosine series
$$f(x) = a_0 + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{l}x,$$
where

$$a_0 = \frac{1}{2l}\int_{-l}^{l} f(x)\,dx = \frac{1}{l}\int_0^l f(x)\,dx, \qquad a_n = \frac{1}{l}\int_{-l}^{l} f(x)\cos\frac{n\pi}{l}x\,dx = \frac{2}{l}\int_0^l f(x)\cos\frac{n\pi}{l}x\,dx, \quad n \in \mathbb{N}.$$
2. If $f \in L^2(-l,l)$ is an odd function, i.e. $f$ satisfies
$$f(-x) = -f(x) \quad \forall x \in [-l,l],$$
then $a_n = 0$ for all $n \in \mathbb{N}_0$ and $f$ is represented on $[-l,l]$ by a sine series
$$f(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{l}x,$$
where

$$b_n = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{n\pi}{l}x\,dx = \frac{2}{l}\int_0^l f(x)\sin\frac{n\pi}{l}x\,dx, \quad n \in \mathbb{N}.$$
3. The equality between $f$ and the Fourier series in Theorem 3.2 is in $L^2(-l,l)$ and not pointwise. Namely, we have

$$\left\| f - \left[a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right]\right\|^2 = 0,$$
or

$$\left\| f - \left[a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right]\right\|^2 \to 0 \quad \text{as } N \to \infty.$$

But

$$\begin{aligned}
&\left\| f - \left[a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right]\right\|^2 \\
&\quad= \int_{-l}^{l}\left| f(x) - \left[a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right]\right|^2 dx \\
&\quad= \int_{-l}^{l}|f(x)|^2\,dx - 2\,\mathrm{Re}\int_{-l}^{l} f(x)\,\overline{\left[a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right]}\,dx \\
&\qquad+ \int_{-l}^{l}\left|a_0 + \sum_{n=1}^{N}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right)\right|^2 dx \\
&\quad= \|f\|^2 - 2\,\mathrm{Re}\left(\bar{a}_0\int_{-l}^{l} f(x)\,dx + \sum_{n=1}^{N}\left(\bar{a}_n\int_{-l}^{l} f(x)\cos\frac{n\pi}{l}x\,dx + \bar{b}_n\int_{-l}^{l} f(x)\sin\frac{n\pi}{l}x\,dx\right)\right) \\
&\qquad+ |a_0|^2\,\|1\|^2 + \sum_{n=1}^{N}\left(|a_n|^2\left\|\cos\frac{n\pi}{l}x\right\|^2 + |b_n|^2\left\|\sin\frac{n\pi}{l}x\right\|^2\right) \\
&\quad= \|f\|^2 - 2\,\mathrm{Re}\left(\bar{a}_0\,(2l\,a_0) + \sum_{n=1}^{N}\left(\bar{a}_n\,l\,a_n + \bar{b}_n\,l\,b_n\right)\right) + 2l|a_0|^2 + \sum_{n=1}^{N}\left(l|a_n|^2 + l|b_n|^2\right) \\
&\quad= \|f\|^2 - 2\left(2l|a_0|^2 + \sum_{n=1}^{N}\left(l|a_n|^2 + l|b_n|^2\right)\right) + 2l|a_0|^2 + \sum_{n=1}^{N}\left(l|a_n|^2 + l|b_n|^2\right) \\
&\quad= \|f\|^2 - l\left(2|a_0|^2 + \sum_{n=1}^{N}\left(|a_n|^2 + |b_n|^2\right)\right).
\end{aligned}$$
Since $f \in L^2(-l,l)$,
$$\|f\|^2 - l\left(2|a_0|^2 + \sum_{n=1}^{N}\left(|a_n|^2 + |b_n|^2\right)\right) \to 0 \quad \text{as } N \to \infty,$$
which implies that $\sum_{n=1}^{\infty}\left(|a_n|^2 + |b_n|^2\right)$ is convergent, and therefore both $\sum_{n=1}^{\infty}|a_n|^2$

and $\sum_{n=1}^{\infty}|b_n|^2$ are convergent. Consequently,

$$\lim_{n\to\infty} a_n = 0, \qquad \lim_{n\to\infty} b_n = 0.$$
4. If $f \in L^2(-l,l)$ is continuous on $[-l,l]$ and the Fourier series of $f$ converges uniformly to $f$, then the equality between $f$ and the Fourier series in Theorem 3.2 is pointwise.
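The Parseval-type computation in point 3 and the coefficient decay above can be illustrated numerically. The sketch below (our own example) uses the odd function $f(x) = x$ on $(-l,l)$, whose classical coefficients are $a_n = 0$ and $b_n = 2l(-1)^{n+1}/(n\pi)$, together with $\|f\|^2 = \int_{-l}^{l}x^2\,dx = 2l^3/3$; the value $l = 1$ and the truncation levels are arbitrary.

```python
import math

# Sketch: for f(x) = x on (-l, l), a_n = 0 and b_n = 2*l*(-1)^(n+1)/(n*pi),
# so the quantity ||f||^2 - l * sum(|b_n|^2) should shrink to 0 (point 3)
# and the coefficients themselves must tend to 0.  l = 1 is illustrative.

l = 1.0
norm_sq = 2 * l**3 / 3                       # ||f||^2 = ∫ x^2 dx over (-l, l)

def bn(n):
    return 2 * l * (-1) ** (n + 1) / (n * math.pi)

# The defect ||f||^2 - l * (2|a_0|^2 + sum(|a_n|^2 + |b_n|^2)) is positive
# (Bessel) and small once many terms are kept (completeness).
tail = norm_sq - l * sum(bn(n) ** 2 for n in range(1, 100001))
assert 0 < tail < 1e-4

# ... so the coefficients tend to zero, as stated above.
assert abs(bn(10**6)) < 1e-6
```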

Example 94 (3.4)

Find the Fourier series expansion of the function

$$f(x) = \begin{cases} -1, & -\pi < x < 0, \\ 0, & x = 0, \\ 1, & 0 < x < \pi. \end{cases}$$

Note that,

1. $f \in L^2(-\pi,\pi)$ [verify!]
2. $f$ is an odd function [verify!]

Therefore, the Fourier series expansion of f is given by

$$f(x) = \sum_{n=1}^{\infty} b_n\sin nx,$$

where
$$b_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin(nx)\,dx = \frac{2}{\pi}\int_0^{\pi}\sin(nx)\,dx = -\frac{2}{n\pi}\cos(nx)\Big|_0^{\pi} = \frac{2}{n\pi}\left(1 - (-1)^n\right) = \begin{cases} 0, & n \text{ even}, \\ \dfrac{4}{n\pi}, & n \text{ odd}. \end{cases}$$

Note that $\lim_{n\to\infty} b_n = 0$. The Fourier series can therefore be written in the form

$$f(x) = \sum_{n=1}^{\infty} b_n\sin nx = \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin(2n+1)x.$$
Figure 3.2 below shows the first three terms in the sequence of partial sums of the Fourier series, i.e.

$$S_N(x) = \frac{4}{\pi}\sum_{n=0}^{N}\frac{1}{2n+1}\sin(2n+1)x$$
for $N = 0, 1, 2$.
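The partial sums $S_N$ can also be evaluated numerically. The sketch below (our own illustration; the truncation levels and sample points are arbitrary) checks that $S_N(\pm\pi) = 0$ for every $N$, while at the interior point $x = \pi/2$ the sums approach $f(\pi/2) = 1$.

```python
import math

# Sketch: partial sums S_N of the series (4/pi) * sum sin((2n+1)x)/(2n+1)
# from Example 3.4.  N and the evaluation points are illustrative choices.

def S(x, N):
    return (4 / math.pi) * sum(math.sin((2 * n + 1) * x) / (2 * n + 1)
                               for n in range(N + 1))

# Every partial sum vanishes at x = ±pi ...
for N in (0, 1, 2, 50):
    assert abs(S(math.pi, N)) < 1e-9
    assert abs(S(-math.pi, N)) < 1e-9

# ... while at the interior point x = pi/2 the sums approach f(pi/2) = 1.
assert abs(S(math.pi / 2, 500) - 1.0) < 2e-3
```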

Note that the larger the N, the better the approximation. Also note that

$$S_N(-\pi) = S_N(\pi) = 0 \quad \text{for } N = 0, 1, 2.$$

In fact, the Fourier series of $f$ at $\pi$ equals $0$ while $f(\pi) = 1$ and $f(-\pi)$ is not defined, which shows that the equality

$$f(x) = \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin(2n+1)x$$
does not hold at every point in $[-\pi,\pi]$.

Corollary 95 (3.5)

Any function $f \in L^2(-l,l)$ can be represented by the Fourier series

$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{in\pi x/l},$$
where
$$c_n = \frac{\left\langle f, e^{in\pi x/l}\right\rangle}{\left\|e^{in\pi x/l}\right\|^2} = \frac{1}{2l}\int_{-l}^{l} f(x)\,e^{-in\pi x/l}\,dx, \quad n \in \mathbb{Z}.$$
Proof: [Assignment!]
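Corollary 3.5 can be checked on a concrete function. The sketch below (our own example, not the assigned proof) computes $c_n$ for $f(x) = x$ by quadrature and compares with the closed form $c_n = i\,l\,(-1)^n/(n\pi)$, which also equals $(a_n - ib_n)/2$ for this $f$; the helper `integrate_c` and the value $l = 1$ are arbitrary choices.

```python
import math, cmath

# Sketch: complex Fourier coefficients of f(x) = x on (-l, l).
# Known closed form: c_n = i*l*(-1)^n/(n*pi) for n != 0 (and c_0 = 0).
# `integrate_c` and l = 1 are illustrative choices.

def integrate_c(g, a, b, m=20000):
    """Composite midpoint rule for complex-valued integrands."""
    h = (b - a) / m
    return h * sum(g(a + (k + 0.5) * h) for k in range(m))

l = 1.0
for n in range(1, 4):
    cn = integrate_c(lambda x: x * cmath.exp(-1j * n * math.pi * x / l),
                     -l, l) / (2 * l)
    expected = 1j * l * (-1) ** n / (n * math.pi)
    assert abs(cn - expected) < 1e-6
```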

11 Convergence of Fourier Series

We are going to learn about

Periodic functions and their properties.
Piecewise continuous and piecewise smooth functions.
Pointwise convergence of a Fourier series.
Uniform and absolute convergence of a Fourier series.

A function $f : \mathbb{R} \to \mathbb{C}$ is periodic in $p$, where $p > 0$, if
$$f(x+p) = f(x) \quad \text{for all } x \in \mathbb{R},$$
and $p$ is then called a period of $f$.

Properties of periodic functions

1. If $f$ is periodic in $p$, then $f$ is also periodic in $np$, where $n \in \mathbb{N}$. [Why?]
2. If a function $f$ periodic in $p$ is integrable on $[0,p]$, then $f$ is integrable over any finite interval, and its integral has the same value over all intervals of length $p$; that is,

$$\int_x^{x+p} f(t)\,dt = \int_0^p f(t)\,dt \quad \text{for all } x \in \mathbb{R}.$$
[Justify?]
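Property 2 can be verified numerically. The sketch below (our own illustration) integrates a $2\pi$-periodic function over several shifted intervals of length $p = 2\pi$; the sample function, the shifts, and the quadrature helper are arbitrary choices.

```python
import math

# Sketch: a 2*pi-periodic function integrates to the same value over
# every interval of length p = 2*pi.  The sample function and the
# shifts x0 are illustrative choices.

def integrate(g, a, b, m=20000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / m
    return h * sum(g(a + (k + 0.5) * h) for k in range(m))

p = 2 * math.pi
f = lambda x: math.sin(x) ** 2 + math.cos(3 * x)   # periodic in 2*pi

base = integrate(f, 0, p)
for x0 in (-1.3, 0.7, 5.0):
    assert abs(integrate(f, x0, x0 + p) - base) < 1e-5
```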

Example 96 (Periodic functions)

Determine the period of the following periodic functions:

1. $\cos x$, $\sin x$.
2. $\cos(ax)$, $\sin(ax)$, where $a > 0$.
3. A constant function.
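The expected answers can be confirmed numerically: $\cos x$ and $\sin x$ are periodic in $2\pi$, and $\cos(ax)$, $\sin(ax)$ are periodic in $2\pi/a$. The sketch below checks this for an arbitrary value $a = 3$ and a few sample points (all our own choices).

```python
import math

# Sketch: cos(a*x) and sin(a*x) are periodic in p = 2*pi/a for a > 0.
# a = 3 and the sample points are illustrative choices.

a = 3.0
p = 2 * math.pi / a
for x in (-2.0, 0.3, 7.7):
    assert abs(math.cos(a * (x + p)) - math.cos(a * x)) < 1e-12
    assert abs(math.sin(a * (x + p)) - math.sin(a * x)) < 1e-12
```

For a constant function, every $p > 0$ satisfies the definition, so it has no smallest period.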

Definition 97 (3.6)

1. A function $f$ defined on a bounded interval $I$, where $(a,b) \subseteq I \subseteq [a,b]$, is said to be piecewise continuous if

(a) $f$ is continuous on $(a,b)$ except for a finite number of points $\{x_1, x_2, \ldots, x_n\}$.
(b) The right-hand and left-hand limits

$$\lim_{x\to x_i^+} f(x) = f\left(x_i^+\right), \qquad \lim_{x\to x_i^-} f(x) = f\left(x_i^-\right)$$
exist for all $i \in \{1, 2, \ldots, n\}$.
(c) The limits at the endpoints exist, that is,

$$\lim_{x\to a^+} f(x) = f\left(a^+\right), \qquad \lim_{x\to b^-} f(x) = f\left(b^-\right).$$
2. $f$ is said to be piecewise smooth if $f$ and $f'$ are both piecewise continuous.
3. If the interval $I$ is unbounded, then $f$ is piecewise continuous (smooth) if it is piecewise continuous (smooth) on every bounded subinterval of $I$.

Remark 98 (Piecewise continuous and piecewise smooth functions)

1. The discontinuities in the graph of a piecewise continuous function result from jumps in its values and occur at a finite number of points.
2. The non-smoothness in the graph of a piecewise smooth function results from jumps in its values and/or sharp corners at some points, which occur at a finite number of points.
3. A continuous function is always piecewise continuous, but may not be piecewise smooth. [Example?]
4. A differentiable function may not be piecewise smooth. [Example?]
5. Note that the right-hand (left-hand) limit of the derivative $f'$ is not the same as the right-hand (left-hand) derivative of $f$. In other words,

$f$ is differentiable at a point $x_0$ $\Rightarrow$ the right- and left-hand derivatives exist at that point (or one of them if it is an endpoint), i.e.
$$\lim_{h\to 0^+}\frac{f(x_0+h)-f(x_0)}{h} \quad \text{and} \quad \lim_{h\to 0^+}\frac{f(x_0)-f(x_0-h)}{h} \quad \text{exist;}$$
but

if the right- and left-hand limits of $f'$ exist at a point $x_0$, then
$$\lim_{x\to x_0^+} f'(x) = \lim_{h\to 0^+}\frac{f(x_0+h)-f\left(x_0^+\right)}{h} = f'\left(x_0^+\right), \qquad \lim_{x\to x_0^-} f'(x) = \lim_{h\to 0^+}\frac{f\left(x_0^-\right)-f(x_0-h)}{h} = f'\left(x_0^-\right).$$
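The remark above can be illustrated numerically. The sketch below (our own example) approximates the one-sided derivatives of $f(x) = |x|$ at $0$, which exist and equal $\pm 1$ (a corner, so $f$ is piecewise smooth), and shows that for $g(x) = x^{1/3}$, which is continuous but not piecewise smooth, the difference quotient at $0$ blows up; the step $h = 10^{-8}$ is an arbitrary small choice.

```python
# Sketch: one-sided derivatives of f(x) = |x| at 0 are +1 and -1 (a sharp
# corner: f is piecewise smooth), while g(x) = x^(1/3) is continuous but
# its difference quotient at 0 is unbounded (not piecewise smooth).
# h = 1e-8 is an illustrative step size.

f = lambda x: abs(x)
h = 1e-8
right = (f(0 + h) - f(0)) / h       # right-hand derivative at 0
left = (f(0) - f(0 - h)) / h        # left-hand derivative at 0
assert abs(right - 1.0) < 1e-6
assert abs(left - (-1.0)) < 1e-6

# Real cube root, defined for negative arguments as well.
g = lambda x: x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))
assert (g(h) - g(0)) / h > 1e4      # difference quotient blows up at 0
```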

We already know that any $f \in L^2(-\pi,\pi)$ can be represented by a Fourier series

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right), \quad -\pi \le x \le \pi,$$

where $a_0, a_n, b_n$ are the Fourier coefficients. We also know that the above equality holds in $L^2(-\pi,\pi)$, but not necessarily pointwise. In the following, we study the pointwise and uniform convergence of Fourier series.

11.1 Pointwise Convergence of Fourier Series

The following theorem discusses the pointwise convergence of the Fourier series to the periodic function defined by extending a function $f$ from $[-\pi,\pi]$ to $\mathbb{R}$ using the equation
$$f(x+2\pi) = f(x), \quad \forall x \in \mathbb{R}.$$

Theorem 99 (3.9)

Let $f$ be a piecewise smooth function on $[-\pi,\pi]$ which is periodic in $2\pi$. If
$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx, \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx, \quad n \in \mathbb{N},$$
then the Fourier series

$$S(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)$$
converges at every $x$ in $\mathbb{R}$ to $\frac{1}{2}\left[f\left(x^+\right) + f\left(x^-\right)\right]$.

Remark 100 (3.10)

1. If $f$ is continuous on $[-\pi,\pi]$, then the Fourier series converges pointwise to $f$ on $\mathbb{R}$. [Why?]
2. If $f$ is discontinuous at a point $x$, then the Fourier series converges to the average of the "jump" at $x$, namely

$$S(x) = \frac{1}{2}\left[f\left(x^+\right) + f\left(x^-\right)\right],$$
regardless of the value of $f(x)$.
3. We can redefine the function $f$ at the points of discontinuity to achieve pointwise convergence of the Fourier series to $f$ on $\mathbb{R}$. [How?]

4. The conditions on $f$ in Theorem 3.9 are sufficient but not necessary. For example, $f(x) = x^{1/3}$ is not piecewise smooth on $[-\pi,\pi]$, but its Fourier series expansion converges to $f$ on $[-\pi,\pi]$. [Exercise 3.26]

5. Theorem 3.9 holds if the interval $[-\pi,\pi]$ is replaced by any other interval $[-l,l]$. Namely, if $f$ is a piecewise smooth function on $[-l,l]$ which is periodic in $2l$, then the Fourier series
$$a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x\right),$$
where
$$a_0 = \frac{1}{2l}\int_{-l}^{l} f(x)\,dx, \qquad a_n = \frac{1}{l}\int_{-l}^{l} f(x)\cos\frac{n\pi}{l}x\,dx, \qquad b_n = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{n\pi}{l}x\,dx, \quad n \in \mathbb{N},$$
converges at every $x \in \mathbb{R}$ to $\frac{1}{2}\left[f\left(x^+\right) + f\left(x^-\right)\right]$.

Exercise 101 (3.4 revisited)

Consider the function
$$f(x) = \begin{cases} -1, & -\pi < x < 0, \\ 0, & x = 0, \\ 1, & 0 < x \le \pi. \end{cases}$$

1. Sketch the periodic function that results from extending $f$ from $(-\pi,\pi]$ to $\mathbb{R}$. Is the extension of $f$ piecewise smooth on $[-\pi,\pi]$?
2. Determine the Fourier series expansion of the extension of $f$. Does it converge to $f(x)$ at every $x \in \mathbb{R}$?
3. Redefine the function $f$ so that the Fourier series converges to $f$ on $\mathbb{R}$.

4. Find a series representation for $\pi$.

Solution:

1.
2. We already know that the Fourier series expansion of $f$ is given by

$$S(x) = \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin(2n+1)x.$$
The graph of the extension of $f$ is clearly piecewise continuous. Also, $f$ has

a zero derivative at all points $x \in (-\pi,\pi)\setminus\{0\}$, and $\lim_{x\to x_0} f'(x) = 0$, where $x_0$ is a point at which $f$ is discontinuous. Therefore, $f$ is piecewise smooth on $[-\pi,\pi]$, and Theorem 3.9 holds. That is,

$$S(x) = \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin(2n+1)x = \frac{1}{2}\left[f\left(x^+\right) + f\left(x^-\right)\right].$$
(a) At $x = 0$,

$$S(0) = \frac{1}{2}\left[f\left(0^+\right) + f\left(0^-\right)\right] = \frac{1}{2}\left[1 + (-1)\right] = 0 = f(0).$$
By periodicity of $f$, the Fourier series converges to $f$ at all points $x = 0 + n(2\pi) = 2n\pi$, where $n \in \mathbb{Z}$.
(b) At $x = \pi$,

$$S(\pi) = \frac{1}{2}\left[f\left(\pi^+\right) + f\left(\pi^-\right)\right] = \frac{1}{2}\left[-1 + 1\right] = 0 \ne 1 = f(\pi).$$
By periodicity of $f$, the Fourier series does not converge to $f$ at the points $x = \pi + n(2\pi) = (2n+1)\pi$, where $n \in \mathbb{Z}$.

3. If we redefine $f$ at all points $x = (2n+1)\pi$, $n \in \mathbb{Z}$, as

$$f(x) = \frac{1}{2}\left[f\left(x^+\right) + f\left(x^-\right)\right] = \frac{1}{2}\left[-1 + 1\right] = 0,$$
then the Fourier series becomes convergent to $f$ on $\mathbb{R}$.

4. If we take the point $x = \frac{\pi}{2}$, then we know that the Fourier series does converge to $f$ at this point, since $f$ is continuous there. Therefore,

$$\frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\sin\left((2n+1)\frac{\pi}{2}\right) = f\left(\frac{\pi}{2}\right),$$
or
$$\frac{4}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1} = 1,$$
or
$$\pi = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \cdots\right).$$

11.2 Uniform Convergence of Fourier Series

Lemma 102 (3.13)

If $f$ is a continuous function on the interval $[-\pi,\pi]$ such that $f(-\pi) = f(\pi)$ and $f'$ is piecewise continuous on $(-\pi,\pi)$, then the series

$$\sum_{n=1}^{\infty}\sqrt{|a_n|^2 + |b_n|^2}$$
is convergent, where $a_n$ and $b_n$ are the Fourier coefficients of $f$ defined by
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx.$$
Proof:
If $f'$ is piecewise continuous on $[-\pi,\pi]$, then $f' \in L^2(-\pi,\pi)$. [Why?] The Fourier coefficients of $f'$ are therefore given by

$$a_0' = \frac{1}{2\pi}\int_{-\pi}^{\pi} f'(x)\,dx, \qquad a_n' = \frac{1}{\pi}\int_{-\pi}^{\pi} f'(x)\cos nx\,dx, \qquad b_n' = \frac{1}{\pi}\int_{-\pi}^{\pi} f'(x)\sin nx\,dx.$$

Since $f(-\pi) = f(\pi)$, we have
$$a_0' = \frac{1}{2\pi}\int_{-\pi}^{\pi} f'(x)\,dx = \frac{1}{2\pi}\left[f(\pi) - f(-\pi)\right] = 0.$$
Integrating by parts,

$$\begin{array}{ll} u = \cos nx, & dv = f'(x)\,dx, \\ du = -n\sin nx\,dx, & v = f(x), \end{array} \qquad \begin{array}{ll} u = \sin nx, & dv = f'(x)\,dx, \\ du = n\cos nx\,dx, & v = f(x), \end{array}$$
we have
$$a_n' = \frac{1}{\pi}\left[f(x)\cos nx\right]_{-\pi}^{\pi} + n\,\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = n\,b_n,$$
$$b_n' = \frac{1}{\pi}\left[f(x)\sin nx\right]_{-\pi}^{\pi} - n\,\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = -n\,a_n,$$
from which we get
$$a_n = -\frac{1}{n}b_n', \qquad b_n = \frac{1}{n}a_n'.$$
Therefore, the $N$th partial sum

$$S_N = \sum_{n=1}^{N}\sqrt{|a_n|^2 + |b_n|^2} = \sum_{n=1}^{N}\frac{1}{n}\sqrt{|a_n'|^2 + |b_n'|^2}.$$
Recall that the CBS inequality states that

$$|\langle x, y\rangle| \le \|x\|\,\|y\|.$$

Thus, by taking $x = \left(1, \frac{1}{2}, \ldots, \frac{1}{N}\right)$ and $y = \left(\sqrt{|a_1'|^2 + |b_1'|^2}, \ldots, \sqrt{|a_N'|^2 + |b_N'|^2}\right)$, we have

$$\sum_{n=1}^{N}\frac{1}{n}\sqrt{|a_n'|^2 + |b_n'|^2} \le \left(\sum_{n=1}^{N}\frac{1}{n^2}\right)^{1/2}\left(\sum_{n=1}^{N}\left(|a_n'|^2 + |b_n'|^2\right)\right)^{1/2}.$$
But $\sum_{n=1}^{N}\frac{1}{n^2}$ is a partial sum of a convergent $p$-series and is therefore bounded above by its sum. Next we use Bessel's inequality, namely
$$\sum_{n=1}^{\infty}\frac{|\langle g, \varphi_n\rangle|^2}{\|\varphi_n\|^2} \le \|g\|^2$$

for any orthogonal set $\{\varphi_n : n \in \mathbb{N}\}$ in $L^2$ and any $g \in L^2$, to show that $\sum_{n=1}^{N}\left(|a_n'|^2 + |b_n'|^2\right)$ is bounded for every $N \in \mathbb{N}$:
$$\begin{aligned}
\sum_{n=1}^{N}\left(|a_n'|^2 + |b_n'|^2\right) &= \sum_{n=1}^{N}\frac{|\langle f', \cos nx\rangle|^2}{\pi^2} + \sum_{n=1}^{N}\frac{|\langle f', \sin nx\rangle|^2}{\pi^2} \\
&= \frac{1}{\pi}\sum_{n=1}^{N}\frac{|\langle f', \cos nx\rangle|^2}{\|\cos nx\|^2} + \frac{1}{\pi}\sum_{n=1}^{N}\frac{|\langle f', \sin nx\rangle|^2}{\|\sin nx\|^2} \\
&\le \frac{1}{\pi}\left\|f'\right\|^2 + \frac{1}{\pi}\left\|f'\right\|^2 = \frac{2}{\pi}\left\|f'\right\|^2 < \infty,
\end{aligned}$$

because $f' \in L^2(-\pi,\pi)$. We conclude that the increasing sequence of partial sums $S_N = \sum_{n=1}^{N}\sqrt{|a_n|^2 + |b_n|^2}$ is bounded for every $N$ and is therefore convergent.

Theorem 103 (3.14)

If $f$ is a continuous function on the interval $[-\pi,\pi]$ such that $f(-\pi) = f(\pi)$ and $f'$ is piecewise continuous on $(-\pi,\pi)$, then the Fourier series

$$a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)$$
converges uniformly and absolutely to $f$ on $[-\pi,\pi]$.
Proof: Consider the extension of $f$ from $[-\pi,\pi]$ to $\mathbb{R}$ by the relation
$$f(x+2\pi) = f(x) \quad \text{for all } x \in \mathbb{R};$$
then, since $f$ is continuous on $[-\pi,\pi]$ and $f(-\pi) = f(\pi)$, the extension of $f$ is a continuous function on $\mathbb{R}$. Therefore, the Fourier series

$$a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)$$
converges to $f(x)$ for all $x \in \mathbb{R}$. To prove that the convergence is uniform and absolute, we use the M-test:

$$|a_n\cos nx + b_n\sin nx| \le |a_n| + |b_n| \le \sqrt{2}\,\sqrt{|a_n|^2 + |b_n|^2}$$
[why?], but $\sum_{n=1}^{\infty}\sqrt{|a_n|^2 + |b_n|^2}$ converges by Lemma 3.13. Therefore, the Fourier series converges uniformly and absolutely.

Remark 104 (Comparing sufficient conditions for pointwise and uniform convergence of a Fourier series)

The conditions imposed on $f$ in the above theorem are the same as those of Theorem 3.9, with piecewise continuity replaced by continuity on $[-\pi,\pi]$.

Corollary 105 (3.15)

If $f$ is piecewise smooth on $[-\pi,\pi]$ and periodic in $2\pi$, its Fourier series is uniformly convergent if, and only if, $f$ is continuous.

Remark 106 (3.16)

Corollary 3.15 holds if the interval $[-\pi,\pi]$ is replaced by any other interval $[-l,l]$.
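As a closing illustration of Theorem 3.14 and Corollary 3.15 (our own sketch, not from the text): $f(x) = |x|$ on $[-\pi,\pi]$ satisfies $f(-\pi) = f(\pi)$ with $f'$ piecewise continuous, and its classical Fourier coefficients are $a_0 = \pi/2$, $a_n = 2\left((-1)^n - 1\right)/(\pi n^2)$, $b_n = 0$, so the convergence should be uniform; the grid size and truncation levels below are arbitrary choices.

```python
import math

# Sketch: uniform convergence of the Fourier series of f(x) = |x| on
# [-pi, pi], using the known coefficients a_0 = pi/2,
# a_n = 2*((-1)^n - 1)/(pi*n^2), b_n = 0.  The maximum error over a
# sample grid should shrink as N grows.  Grid size and N are arbitrary.

def S(x, N):
    total = math.pi / 2
    for n in range(1, N + 1):
        total += 2 * ((-1) ** n - 1) / (math.pi * n * n) * math.cos(n * x)
    return total

def max_err(N, m=400):
    pts = (-math.pi + 2 * math.pi * k / m for k in range(m + 1))
    return max(abs(S(x, N) - abs(x)) for x in pts)

e1, e2 = max_err(10), max_err(100)
assert e2 < e1          # the error decreases as more terms are kept
assert e2 < 1e-2        # and is uniformly small over the whole grid
```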
