
Chapter 6 Series Solution of 1D Problems

In the last chapter, we studied PDEs on unbounded domains and represented solutions in terms of integrals with the aid of the Fourier transform method. To obtain series solutions of linear PDEs on bounded domains, we need to represent functions as an infinite series in terms of a family of functions called a basis, similar to what we did in the last chapter of the first volume of this book. In this chapter we learn how to derive different bases for boundary value problems in the form of eigenvalue problems, generally called Sturm-Liouville problems. The theory is a direct generalization of the Fourier series method, and we strongly recommend that the reader first read the last chapter of the previous book on the Fourier series method.

6.1 Inner product spaces

We first fix a suitable space to work within.

6.1.1 Inner product

Let $T$ denote the interval $(x_0, x_1)$. The set $C(T)$ denotes the set of all real continuous functions defined on $T$, and $C^r(T)$ is the set of all real functions $f$ on $T$ such that $f^{(k)} \in C(T)$ for $k = 0, \dots, r$. This set becomes a vector space with the familiar function addition and scalar multiplication
$$(f + g)(x) := f(x) + g(x), \qquad (cf)(x) := c\, f(x), \qquad \forall x \in T.$$
An extremely important operation in $C^r(T)$ is the inner product defined by the relation

$$\langle f, g \rangle := \int_T f(x)\, g(x)\, dx, \qquad \forall f, g \in C^r(T). \tag{6.1}$$
It is simple (and left as an exercise to the reader) to verify the following familiar properties, which also hold for the dot product in $\mathbb{R}^n$:

i. $\langle f, f \rangle \geq 0$ for all $f$, and if $\langle f, f \rangle = 0$ then $f \equiv 0$ (positivity),

ii. $\langle f, g \rangle = \langle g, f \rangle$ (symmetry),

iii. $\langle f, g + h \rangle = \langle f, g \rangle + \langle f, h \rangle$ (additivity),

iv. $\langle cf, g \rangle = c\, \langle f, g \rangle$ for all constants $c$ (homogeneity).


Due to the above properties, it makes sense to consider $\langle \cdot, \cdot \rangle$ a natural generalization of the dot product in $\mathbb{R}^n$.

Definition 6.1. The vector space $C^r(T)$ with the inner product $\langle \cdot, \cdot \rangle$ is called an inner product space.

Remember the notion of orthogonality in $\mathbb{R}^n$: we say two non-zero vectors $\vec{u}, \vec{v}$ are orthogonal if $\vec{u} \cdot \vec{v} = 0$. We keep this notion to define orthogonality in $C^r(T)$. In fact, we say two functions $f, g \in C^r(T)$ are orthogonal if $\langle f, g \rangle = 0$. The orthogonality of two functions may seem somewhat strange; however, it is sometimes useful to consider functions as vectors in the vector space $C^r(T)$. But we know that a vector has two components: a) a direction, and b) a magnitude or norm. Remember that the norm or magnitude of a vector $\vec{u}$ in $\mathbb{R}^n$ is defined by the relation $\|\vec{u}\| = \sqrt{\vec{u} \cdot \vec{u}}$. We keep this notion also for an inner product, that is,
$$\|f\| := \sqrt{\langle f, f \rangle} = \left( \int_T |f(x)|^2\, dx \right)^{1/2}. \tag{6.2}$$
Therefore, we can define the direction of a nonzero function $f$ as $\hat{f} = \frac{f}{\|f\|}$. In fact, we have infinitely many mutually orthogonal functions in $C^r(T)$. For example, for $T = (-L, L)$, it is seen that the functions in the set $\left\{ 1, \cos\left(\frac{n\pi}{L}x\right), \sin\left(\frac{n\pi}{L}x\right) \right\}_{n=1}^{\infty}$ are mutually orthogonal, that is,
$$\langle \cos(n\pi x/L), \cos(m\pi x/L) \rangle = \langle \sin(n\pi x/L), \sin(m\pi x/L) \rangle = \begin{cases} 0 & n \neq m \\ L & n = m \end{cases},$$
and also for all $n, m$ we have
$$\langle \cos(n\pi x/L), \sin(m\pi x/L) \rangle = 0.$$
Therefore, we can consider $C^r(T)$ an infinite dimensional vector space equipped with the inner product $\langle \cdot, \cdot \rangle$.

Complex space. In the above formulation, we assumed that all functions in $C^r(T)$ are real. We sometimes need to work with complex functions, and for this we should modify our definition of the inner product.
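As a quick numerical sanity check of these orthogonality relations, one can approximate the inner products by quadrature. The sketch below is illustrative only (Python with scipy assumed; the value $L = 2$ is an arbitrary choice, not from the text):

```python
import numpy as np
from scipy.integrate import quad

L = 2.0  # arbitrary half-length of the interval (-L, L)

def inner(f, g):
    # <f, g> = integral of f(x) g(x) over (-L, L), by adaptive quadrature
    val, _ = quad(lambda x: f(x) * g(x), -L, L)
    return val

cos_n = lambda n: (lambda x: np.cos(n * np.pi * x / L))
sin_n = lambda n: (lambda x: np.sin(n * np.pi * x / L))

print(inner(cos_n(2), cos_n(3)))  # ~ 0 for n != m
print(inner(cos_n(2), cos_n(2)))  # ~ L for n == m
print(inner(cos_n(2), sin_n(5)))  # ~ 0 for all n, m
```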

Definition 6.2. If $f, g$ are two complex piecewise continuous functions defined on $T$, then their inner product $\langle\langle f, g \rangle\rangle$ is defined by the relation
$$\langle\langle f, g \rangle\rangle = \int_T f(x)\, \overline{g(x)}\, dx,$$
where $\bar{g}$ is the complex conjugate of $g$; that is, if $g = g_1 + i g_2$ then $\bar{g} = g_1 - i g_2$. It is simply seen that the symmetry property of the real inner product $\langle \cdot, \cdot \rangle$ changes to
$$\langle\langle f, g \rangle\rangle = \overline{\langle\langle g, f \rangle\rangle}.$$
Accordingly, we have $\langle\langle f, cg \rangle\rangle = \bar{c}\, \langle\langle f, g \rangle\rangle$.

Properties of norm. The norm of a function $f$, as defined by the relation $\|f\| = \sqrt{\langle f, f \rangle}$, enjoys the same properties as the norm of a vector in $\mathbb{R}^n$, that is,

i. $\|f\| \geq 0$, and $\|f\| = 0$ if and only if $f \equiv 0$,

ii. $\|cf\| = |c|\, \|f\|$ for all $c \in \mathbb{R}$,

iii. $\|f + g\| \leq \|f\| + \|g\|$.

The proof of the third property needs the following important inequality.

Lemma 6.1. (Cauchy-Schwarz) For any two functions $f, g \in C^r(T)$, the following inequality holds:
$$|\langle f, g \rangle| \leq \|f\|\, \|g\|. \tag{6.3}$$
Equality holds if and only if $f, g$ are linearly dependent.

Proof. Since $\|tf + g\| \geq 0$ for arbitrary $t$, we have
$$0 \leq \langle tf + g, tf + g \rangle = t^2 \|f\|^2 + 2t\, \langle f, g \rangle + \|g\|^2.$$
The above inequality holds for all $t$, and thus the discriminant of this quadratic in $t$ must be non-positive:
$$\langle f, g \rangle^2 - \|f\|^2 \|g\|^2 \leq 0.$$
Now suppose $|\langle f, g \rangle| = \|f\|\, \|g\|$ and $f \neq 0$ (otherwise $f, g$ are trivially linearly dependent). Take $t = -\frac{\langle f, g \rangle}{\|f\|^2}$; then $\|tf + g\| = 0$. This implies $tf + g = 0$, and thus $f, g$ are linearly dependent. $\square$

Now it is left as an exercise to the reader to verify the third property of the norm.

Remark 6.1. The defined inner product $\langle \cdot, \cdot \rangle$ and its associated norm $\|\cdot\|$ are not unique. In fact, for any positive function $\rho(x) > 0$, the following relation is also an inner product:

$$\langle f, g \rangle_\rho = \int_T \rho(x)\, f(x)\, g(x)\, dx.$$
For norms, we have even more options. For example, the following relation is a norm, called the infinity norm of $f$:
$$\|f\|_\infty = \max_{x \in T} |f(x)|.$$

6.1.2 Convergence in inner product spaces

Remember that a set of vectors $\{\vec{v}_1, \dots, \vec{v}_n\}$ is called a basis for $\mathbb{R}^n$ if $\vec{v}_1, \dots, \vec{v}_n$ are linearly independent and every vector in $\mathbb{R}^n$ is represented uniquely by a linear combination of $\vec{v}_1, \dots, \vec{v}_n$. The story for functions is a little more delicate, and it leads to the concept of convergence.

An infinite set of functions $\Phi := \{\phi_1, \phi_2, \dots\} \subset C^r(T)$ is called linearly independent if every finite subset of $\Phi$ is a set of linearly independent functions (we discussed the concept of linear independence of functions in the first volume of this book). The set $\Phi$ is called a basis for $C^r(T)$ if every function $f \in C^r(T)$ can be represented by an infinite series
$$f \sim c_1 \phi_1 + \dots + c_n \phi_n + \dots. \tag{6.4}$$

Consider the relation (6.4). What is the exact meaning of this relation? We interpret it as follows. Let $S_N$ be the partial sum of the infinite series on the right of (6.4), that is,

$$S_N = \sum_{n=1}^{N} c_n \phi_n.$$
We say $S_N$ converges to $f$ and write

$$\lim_{N \to \infty} S_N = f,$$
and understand the convergence in the following sense:

$$\lim_{N \to \infty} \int_T |f(x) - S_N(x)|^2\, dx = 0,$$
or equivalently

$$\lim_{N \to \infty} \|f - S_N\| = 0.$$
We call the above convergence convergence in norm. This is only one notion of convergence. Another notion is pointwise convergence, that is, for every $x \in T$ we have
$$\lim_{N \to \infty} S_N(x) = f(x).$$
As we will see below, these two notions are not equivalent.

Example 6.1. Let $(S_n)$ be the following sequence of functions:
$$S_n(x) = \begin{cases} 0 & x = 0 \\ \sqrt{n} & 0 < x < \frac{1}{n} \\ 0 & x \geq \frac{1}{n} \end{cases}$$
It is simply seen that the sequence converges pointwise to the zero function $f \equiv 0$ on $[0, 1]$. In fact, $S_n(0) = S_n(1) = 0$ for all $n$. Also, for any $x \in (0, 1)$ there exists $N_0$ such that $\frac{1}{N_0} < x$, and therefore $S_n(x) = 0$ for all $n \geq N_0$. Therefore, for any $x \in [0, 1]$ we have
$$\lim_{n \to \infty} S_n(x) = 0.$$
On the other hand, $S_n(x)$ does not converge in norm to the zero function. In fact, for all $n \geq 1$ we have
$$\|0 - S_n\|^2 = \int_0^1 |S_n(x)|^2\, dx = \int_0^{1/n} n\, dx = 1.$$
However, in some cases the two notions of convergence coincide. For example, consider the function $f(x) = e^x$ on the domain $[0, 1]$, and let $S_n$ be the sequence of functions
$$S_n(x) = 1 + x + \frac{1}{2!} x^2 + \dots + \frac{1}{n!} x^n.$$

We show that $S_n(x)$ converges to $f(x)$ both pointwise and in norm. In fact, by Taylor's formula we have
$$e^x = 1 + x + \dots + \frac{1}{n!} x^n + \frac{1}{(n+1)!} f^{(n+1)}(\eta)\, x^{n+1},$$

where $\eta \in (0, 1)$. Fixing $x \in [0, 1]$, we then have
$$|e^x - S_n(x)| \leq \frac{e^\eta}{(n+1)!} \leq \frac{e}{(n+1)!}.$$
This implies
$$\lim_{n \to \infty} |e^x - S_n(x)| = 0,$$
and thus $S_n$ converges pointwise to $e^x$. Similarly, since the interval has length one,
$$\|f - S_n\|^2 = \int_0^1 |f(x) - S_n(x)|^2\, dx \leq \frac{e^2}{((n+1)!)^2},$$
and thus

$$\lim_{n \to \infty} \|f - S_n\| = 0,$$
and thus $S_n$ converges in norm to $e^x$.
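The gap between the two notions in Example 6.1 is easy to observe numerically. A minimal sketch (plain numpy; the grid resolution is an arbitrary choice):

```python
import numpy as np

def S(n, x):
    # spike of height sqrt(n) on (0, 1/n), zero elsewhere (Example 6.1)
    return np.where((x > 0) & (x < 1.0 / n), np.sqrt(n), 0.0)

x = np.linspace(0.0, 1.0, 1_000_001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    norm_sq = np.sum(S(n, x) ** 2) * dx  # Riemann sum for ||S_n||^2
    print(f"n={n:5d}  S_n(0.5)={S(n, 0.5):.1f}  ||S_n||^2 ~ {norm_sq:.3f}")
# S_n(0.5) is 0 for n >= 3, yet ||S_n||^2 stays near 1:
# pointwise limit is 0, but there is no convergence in norm
```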

The above example also shows that we cannot always pass the limit inside the integral; that is, the following equality does not always hold:

$$\lim_{n \to \infty} \int_0^1 |f(x) - S_n(x)|^2\, dx = \int_0^1 \lim_{n \to \infty} |f(x) - S_n(x)|^2\, dx.$$
As given in the appendix, the dominated convergence theorem provides a sufficient condition for passing the limit inside the integral. In real applications, we sometimes have to work with less smooth functions. Here, we introduce two more spaces.

Definition 6.3. A function $f(x)$, $x \in (x_0, x_1)$, is called piecewise continuous, written $f \in X(x_0, x_1)$, if it is continuous at all points $x \in (x_0, x_1)$ except possibly at finitely many points. In addition, if $z \in (x_0, x_1)$ is a discontinuity point of $f(x)$, then both the right and left limits exist:
$$f(z^+) = \lim_{x \to z^+} f(x), \qquad f(z^-) = \lim_{x \to z^-} f(x).$$
Furthermore, the following limits exist:

$$f(x_0^+) = \lim_{x \to x_0^+} f(x), \qquad f(x_1^-) = \lim_{x \to x_1^-} f(x).$$

A function $f(x)$, $x \in (x_0, x_1)$, is called piecewise continuously differentiable, written $f \in X^1(x_0, x_1)$, if it is continuously differentiable everywhere in $(x_0, x_1)$ except possibly at finitely many points. If $z \in (x_0, x_1)$ is a point where $f$ is not continuously differentiable, then the right and left derivatives of $f$ at $z$ must exist. In addition, the right derivative of $f(x)$ at $x = x_0$ and the left derivative of $f(x)$ at $x = x_1$ must exist.

6.1.3 Orthogonal expansion and projection

Suppose that $\{\phi_n\}_{n=1}^{\infty}$ is a basis for $C^r(T)$ such that $\langle \phi_i, \phi_j \rangle = 0$ for $i \neq j$. Then if we write
$$f = c_1 \phi_1 + c_2 \phi_2 + \dots,$$

then the coefficients $c_n$ can be determined with the aid of the inner product and the orthogonality condition as follows:
$$\langle f, \phi_k \rangle = c_1 \langle \phi_1, \phi_k \rangle + c_2 \langle \phi_2, \phi_k \rangle + \dots = c_k\, \langle \phi_k, \phi_k \rangle,$$
and thus
$$c_k = \frac{\langle f, \phi_k \rangle}{\|\phi_k\|^2}.$$
Therefore, we can write
$$f = \sum_{n=1}^{\infty} \frac{\langle f, \phi_n \rangle}{\|\phi_n\|^2}\, \phi_n.$$
Notice how the orthogonality condition significantly simplifies the calculation of the coefficients $c_n$. Let us get a better insight into the above calculation. If we consider $\phi_n(x)$ a function vector, then the term
$$f_n(x) = \frac{\langle f, \phi_n \rangle}{\|\phi_n\|^2}\, \phi_n(x)$$
can be considered the projection of $f(x)$ along $\phi_n(x)$. This projection is called an orthogonal projection because the function $\tilde{f} = f - f_n$ is orthogonal to $\phi_n$, due to the calculation
$$\langle f - f_n, \phi_n \rangle = \langle f, \phi_n \rangle - \langle f_n, \phi_n \rangle,$$
and since $\langle f_n, \phi_n \rangle = \langle f, \phi_n \rangle$, we conclude $\langle f - f_n, \phi_n \rangle = 0$.

Orthogonal projection. Let $\Phi = \{\phi_1, \phi_2, \dots\}$ be an orthogonal basis of $F(T)$. Consider the subspace $F_n = \mathrm{span}\{\phi_1, \dots, \phi_n\}$ of $F(T)$. The orthogonal projection of a function $f \in F(T)$ onto $F_n$ is defined by the relation
$$f_n(x) = \sum_{k=1}^{n} \frac{\langle f, \phi_k \rangle}{\|\phi_k\|^2}\, \phi_k(x). \tag{6.5}$$
The projection (6.5) is called orthogonal because $f - f_n$ is orthogonal to every function in $F_n$.

Proposition 6.1. The orthogonal projection of $f$ onto $F_n$ is the best approximation of $f$ in $F_n$ in the following sense:
$$\|f - f_n\| \leq \|f - g\|, \tag{6.6}$$
for any $g \in F_n$.

Since $\|f - f_n\|$ is the error between $f$ and $f_n$, the above proposition states that the error between $f$ and $f_n$ is no larger than the error between $f$ and any other function in $F_n$.

Proof. Let $g - f_n = h \in F_n$. We have
$$\|f - g\|^2 = \langle f - g, f - g \rangle = \langle f - f_n - h, f - f_n - h \rangle = \|f - f_n\|^2 - 2\, \langle f - f_n, h \rangle + \|h\|^2.$$
On the other hand, since $h \in F_n$, we have $\langle f - f_n, h \rangle = 0$, and thus
$$\|f - g\|^2 = \|f - f_n\|^2 + \|h\|^2 \geq \|f - f_n\|^2,$$
and this completes the proof. $\square$

Example 6.2. Let us find the best approximation of the function $\sin(x) \in X[-1, 1]$ in the subspace $P_2 = \mathrm{span}\{1, x, x^2\} \subset X[-1, 1]$. If $p_0(x) = a_0 x^2 + b_0 x + c_0$ is the best approximation of $\sin(x)$ in $P_2$, then the function $\sin(x) - p_0(x)$ is orthogonal to all polynomials in $P_2$. In fact, we have
$$\int_{-1}^{1} \left( \sin(x) - a_0 x^2 - b_0 x - c_0 \right) x^k\, dx = 0, \qquad k = 0, 1, 2.$$
Calculating the above integrals determines $a_0 = c_0 = 0$ and $b_0 = 0.904$. The square error between the two functions $\sin(x)$ and $0.904x$ is $\|\sin(x) - 0.904x\|^2 = 0.0011$, and it is smaller than or equal to $\|\sin(x) - p(x)\|^2$ for any $p(x) \in P_2$. The projection of $\sin(x)$ in $P_3$ gives a better approximation. For $P_3$ and $P_5$ we have the following results:

$$\begin{array}{lll}
 & \text{projection} & \text{square error} \\
P_2 & 0.904x & 0.0011 \\
P_3 & -0.158x^3 + 0.998x & 1.88 \times 10^{-7} \\
P_5 & 0.008x^5 - 0.166x^3 + x & 6.88 \times 10^{-12}
\end{array}$$

Table 6.1.

The projection in $P_4$ is the same as the projection in $P_3$ because $\sin(x)$ is an odd function. Everything is fine with our calculations, except that in passing from $P_2$ to $P_3$, or from $P_3$ to $P_5$, we need to recalculate all coefficients. We remedy this problem by choosing an orthogonal basis, as shown in the following example.
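Numbers like those in Table 6.1 can be reproduced by solving the normal equations directly: writing $p(x) = \sum_k c_k x^k$ and imposing $\langle \sin(x) - p, x^j \rangle = 0$ for each $j$ yields a small linear system. A sketch under these assumptions (Python with numpy/scipy; not code from the text):

```python
import numpy as np
from scipy.integrate import quad

def best_poly(deg):
    # normal equations G c = r with G_jk = <x^j, x^k>, r_j = <sin(x), x^j> on (-1, 1)
    G = np.array([[quad(lambda x: x**(j + k), -1, 1)[0] for k in range(deg + 1)]
                  for j in range(deg + 1)])
    r = np.array([quad(lambda x: np.sin(x) * x**j, -1, 1)[0] for j in range(deg + 1)])
    return np.linalg.solve(G, r)  # coefficients c_0, ..., c_deg

for deg in (2, 3, 5):
    c = best_poly(deg)
    err = quad(lambda x: (np.sin(x) - np.polyval(c[::-1], x))**2, -1, 1)[0]
    print(deg, np.round(c, 3), f"square error = {err:.2e}")
# deg = 2 recovers ~0.904x with square error ~1.1e-3, as in Table 6.1
```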

Example 6.3. Let us first find an orthogonal basis for $P_n$ in $C(T)$, $T = (-1, 1)$. The natural choice for $\phi_0(x)$ is $1$. Now let $\phi_1(x) = a_1 x + b_1$ and suppose $\langle \phi_1, \phi_0 \rangle = 0$. This gives $b_1 = 0$ with $a_1$ arbitrary. Let us take $a_1 = 1$, and thus $\phi_1(x) = x$. Let $\phi_2(x) = a_2 x^2 + b_2 x + c_2$; we should have $\langle \phi_2, \phi_1 \rangle = \langle \phi_2, \phi_0 \rangle = 0$, which gives $b_2 = 0$ and $a_2 = -3 c_2$. Taking $c_2 = -1$ gives $\phi_2(x) = 3x^2 - 1$. Similarly, we obtain $\phi_3(x) = 5x^3 - 3x$, $\phi_4(x) = 35 x^4 - 30 x^2 + 3$, and $\phi_5(x) = 63 x^5 - 70 x^3 + 15 x$. Now let us write
$$\sin(x) \sim c_0 \phi_0(x) + c_1 \phi_1(x) + c_2 \phi_2(x).$$
We simply obtain $c_0 = c_2 = 0$ and $c_1 = 0.9$. Now, in $P_3$, we write
$$\sin(x) \sim 0.9\, \phi_1(x) + c_3\, \phi_3(x),$$
which gives $c_3 = -0.03$. In $P_5$, we have
$$\sin(x) \sim 0.9\, \phi_1(x) - 0.03\, \phi_3(x) + 10^{-4}\, \phi_5(x).$$
Observe that the contribution of $\phi_5(x)$ is negligible, and thus the partial sum $0.9\, \phi_1(x) - 0.03\, \phi_3(x)$ approximates $\sin(x)$ very accurately in $[-1, 1]$.
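In contrast with the monomial computation above, each coefficient with respect to the orthogonal basis is an independent quotient $c_n = \langle \sin, \phi_n \rangle / \|\phi_n\|^2$, so enlarging the subspace never forces a recomputation. A short sketch of this (same assumptions as before):

```python
import numpy as np
from scipy.integrate import quad

phis = [np.poly1d([1]),                     # phi_0 = 1
        np.poly1d([1, 0]),                  # phi_1 = x
        np.poly1d([3, 0, -1]),              # phi_2 = 3x^2 - 1
        np.poly1d([5, 0, -3, 0]),           # phi_3 = 5x^3 - 3x
        np.poly1d([35, 0, -30, 0, 3]),      # phi_4
        np.poly1d([63, 0, -70, 0, 15, 0])]  # phi_5

for n, p in enumerate(phis):
    num = quad(lambda x: np.sin(x) * p(x), -1, 1)[0]
    den = quad(lambda x: p(x)**2, -1, 1)[0]
    print(f"c_{n} = {num / den:+.5f}")
# c_1 ~ 0.90, c_3 ~ -0.03, c_5 ~ 1e-4; the even-order coefficients vanish
```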

Problems

Problem 6.1. Show that the norm (6.2) satisfies all the properties of a norm.

Problem 6.2. Show that the operation (6.1) in $C(T)$ satisfies the conditions defined for an inner product, and thus it is an inner product in $C(T)$.

Problem 6.3. The set of sequences $x = (x_1, x_2, \dots)$ such that
$$\|x\| = \left( \sum_{n=1}^{\infty} |x_n|^2 \right)^{1/2} < \infty$$
is called square summable and is denoted by $\ell^2$.

i. Show that the sequence $(1, 1/2, 1/3, \dots)$ is in $\ell^2$, but $(1, 1/\sqrt{2}, 1/\sqrt{3}, \dots, 1/\sqrt{n}, \dots)$ is not.

ii. Show that the relation $\|x\|$ defined above is a norm.

Problem 6.4. To see that vector spaces are not only sets of line segments with a direction, consider the set of all $2 \times 2$ matrices. Show that the matrices
$$A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_4 = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}$$
are linearly independent and that they form a basis for the vector space of all $2 \times 2$ matrices. Propose a norm for this space.

Problem 6.5. Let $c_c$ be the set of all sequences which are eventually zero. In other words, $(a_1, a_2, \dots) \in c_c$ if there is $n \in \mathbb{N}$ such that $a_k = 0$ for $k = n, n+1, \dots$. Define the addition and scalar multiplication in $c_c$ as
$$(a_1, a_2, \dots) + (b_1, b_2, \dots) = (a_1 + b_1, a_2 + b_2, \dots), \tag{6.7}$$

and $\lambda (a_1, a_2, \dots) = (\lambda a_1, \lambda a_2, \dots)$. Show that $c_c$ is an infinite dimensional vector space and find a basis for it.

Problem 6.6. Let $c_0$ be the vector space of all sequences convergent to zero. Show that the list $e_1 = (1, 0, 0, \dots)$, $e_2 = (0, 1, 0, \dots)$, $\dots$ is a basis for $c_0$.

Problem 6.7. The set of all convergent sequences (not necessarily to zero) is denoted by $c$. This is a vector space with the familiar addition and scalar multiplication of sequences. Define a norm in $c$ and find a basis for it.

Problem 6.8. Let $T_2$ be the space of all $2 \times 2$ matrices.

i. Show that the operation
$$\langle A, B \rangle = \mathrm{tr}(A^t B)$$
is an inner product in $T_2$.

ii. Generalize this for $T_n$, the space of all $n \times n$ matrices.

iii. Is the operation $\langle A, B \rangle = \mathrm{tr}(AB)$ an inner product?

Problem 6.9. Consider an inner product space $(V, \langle \cdot, \cdot \rangle)$ with the compatible norm. If for all real numbers $a, b$ the relation $\|au + bv\| = \|bu + av\|$ holds for $u, v \in V$, show that $\|u\| = \|v\|$.

Problem 6.10. If $x_1, \dots, x_n$ are positive numbers, prove the following inequalities:

1. $(x_1 + \dots + x_n)\left( \frac{1}{x_1} + \dots + \frac{1}{x_n} \right) \geq n^2$,

2. $(x_1 + \dots + x_n)^2 \leq n\, (x_1^2 + \dots + x_n^2)$.

Problem 6.11. Let $u, v$ be two arbitrary vectors of an inner product space $(V, \langle \cdot, \cdot \rangle)$. Prove the following inequality for an arbitrary positive number $\varepsilon > 0$:
$$\|u + v\|^2 \leq \left(1 + \frac{1}{\varepsilon}\right) \|u\|^2 + (1 + \varepsilon)\, \|v\|^2.$$

Problem 6.12. Consider the sequence of functions $f_n(x) = \frac{1}{\sqrt{n}} \sin(nx)$ on $0 \leq x \leq \pi$, for $n = 1, 2, \dots$.

i. Show that $f_n(x) \to 0$ if we define the norm through the inner product
$$\langle f, g \rangle = \int_0^\pi f(x)\, g(x)\, dx.$$

ii. Show that the operation
$$\langle f, g \rangle = \int_0^\pi f(x)\, g(x)\, dx + \int_0^\pi f'(x)\, g'(x)\, dx$$
is an inner product in the space of all continuously differentiable functions. Show that $\|f_n\| \to \infty$ in the new norm.

Problem 6.13. Show that for $u, v$ in an inner product space, we have
$$\|u + v\|^2 - \|u - v\|^2 = 4\, \langle u, v \rangle.$$
Use this relation to show that there is no inner product in $\mathbb{R}^2$ that gives the following norms:

1. $\|(x_1, x_2)\| = \max\{ |x_1|, |x_2| \}$,

2. $\|(x_1, x_2)\| = \left( |x_1|^3 + |x_2|^3 \right)^{1/3}$.

Problem 6.14. Use vector operations to prove the law of cosines for triangles, i.e.,
$$c^2 = a^2 + b^2 - 2ab \cos(\theta), \tag{6.8}$$
where $\theta$ is the angle between sides $a$ and $b$.

Problem 6.15. Consider the triangle shown in Figure 6.1, where the side $d$ bisects the side $c$. Use vector operations to prove the Apollonius law:
$$a^2 + b^2 = \frac{1}{2}\, c^2 + 2 d^2.$$

Figure 6.1. A triangle with sides $a$, $b$, $c$; the segment $d$ bisects the side $c$.

Problem 6.16. Let $\vec{u}_1, \vec{u}_2$ be vectors in $\mathbb{R}^2$ satisfying the relations $\|\vec{e}_1 - \vec{u}_1\| < \frac{1}{\sqrt{2}}$ and $\|\vec{e}_2 - \vec{u}_2\| < \frac{1}{\sqrt{2}}$. Show that $\vec{u}_1, \vec{u}_2$ are linearly independent. Generalize this as follows: let $\vec{u}_1, \dots, \vec{u}_n$ be vectors in $\mathbb{R}^n$ such that
$$\|\vec{e}_k - \vec{u}_k\| < \sqrt{\frac{n-1}{n}}.$$
Show that $\vec{u}_1, \dots, \vec{u}_n$ are linearly independent vectors.

Problem 6.17. Define an inner product in the vector space of all $2 \times 2$ matrices.

Problem 6.18. Which of the following functions belong to $X^1$ or $X$?

i. $f(x) = x^{2/3}$, $x \in [0, 1]$

ii. $f(x) = x\, |x|$, $x \in [-1, 1]$

iii. $f(x) = \sqrt{|x|}$, $x \in [-1, 1]$

iv. $f(x) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$

Problem 6.19. Does the function $f(x)$ defined below belong to $X[-1, 1]$?
$$f(x) = \begin{cases} x^3 \sin(1/x) & x \neq 0 \\ 0 & x = 0 \end{cases}$$

Is the function $f(x) = e^{-1/x^2}$ in $X[-1, 1]$?

Problem 6.20. Show that the following relation is an inner product in $C[a, b]$:
$$\langle f, g \rangle_w = \int_a^b w(x)\, f(x)\, g(x)\, dx,$$
where $w(x)$ is a positive function.

Problem 6.21. Show that the operator $L := \frac{d^2}{dx^2} : U \to C(0, 1)$, where $U$ is the space
$$U = \{ f \in C^2(0, 1);\ f(0) = f(1) = 0 \},$$
is one to one and onto.

Problem 6.22. For $P_2(-1, 1)$, find an orthogonal basis with respect to the following inner product
$$\langle f, g \rangle_w = \int_{-1}^{1} w(x)\, f(x)\, g(x)\, dx,$$
for each $w(x)$ given below. In each case, set the constants such that the value at $x = 1$ is equal to 1.

i. $w(x) = 1$ (these are the Legendre polynomials up to order 2),

ii. $w(x) = \frac{1}{\sqrt{1 - x^2}}$ (these are the Chebyshev polynomials up to order 2).

Problem 6.23. Find the best approximation of the function $f(x) = \cos(x)$ for $-1 \leq x \leq 1$ in $P_2$. Calculate the error.

6.2 Eigenvalue problem

It turns out that solving PDEs in terms of elementary functions is almost impossible except in specific cases. Remember that we employed series representations of the solutions to linear ODEs with variable coefficients in the first volume of this book. Here we illustrate the method by considering a general heat problem defined on a conductive rod of finite length:

$$\begin{cases} \partial_t u = a(x)\, \partial_{xx} u + b(x)\, \partial_x u + c(x)\, u, & x_0 < x < x_1,\ t > 0 \\ a_1 u(t, x_0) + b_1\, \partial_x u(t, x_0) = 0 \\ a_2 u(t, x_1) + b_2\, \partial_x u(t, x_1) = 0 \\ u(0, x) = u_0(x), & x_0 < x < x_1 \end{cases} \tag{6.9}$$
where $a, b, c$ are smooth functions and $a(x) > 0$ for $x \in [x_0, x_1]$. As we observe, the boundary conditions (at $x = x_0$ and $x = x_1$) are homogeneous.

For the sake of simplicity, let us rewrite (6.9) in the operator form $\partial_t u = L[u]$, where the differential operator $L$ stands for

$$L := a(x)\, \partial_{xx} + b(x)\, \partial_x + c(x). \tag{6.10}$$

For a moment, assume that the solution $u(t, x)$ can be written in the separated form

$$u(t, x) = T(t)\, \phi(x).$$

We do not have any straightforward justification for this assumption right now, but we will see how it finds its meaning in the form of the series solution. Substituting the separated form of $u(t, x)$ into the PDE leads to the following relation:

$$\frac{T'}{T} = \frac{L[\phi]}{\phi}.$$

Note that the left hand side of the above equality is generally a function of $t$ and the right hand side is a function of $x$; since $t$ and $x$ are independent, the equality cannot hold unless both sides equal the same constant, i.e.,
$$\frac{T'}{T} = \frac{L[\phi]}{\phi} = -\lambda,$$
for some constant $\lambda$. The negative sign is just a historical convention that we decided to keep here; otherwise, there is no physical meaning behind this choice. On the other hand, the boundary conditions for the separated solution reduce to

$$T(t)\, [a_1 \phi(x_0) + b_1 \phi'(x_0)] = 0, \qquad T(t)\, [a_2 \phi(x_1) + b_2 \phi'(x_1)] = 0.$$

If $T(t)$ is identically zero, then the separated solution $u(t, x)$ is identically zero, which is generally inconsistent with the nonzero initial condition. For this reason, we take the boundary conditions for $\phi(x)$ as

$$a_1 \phi(x_0) + b_1 \phi'(x_0) = 0, \qquad a_2 \phi(x_1) + b_2 \phi'(x_1) = 0.$$

Then we reach the following ODE for $\phi$:
$$\begin{cases} L[\phi] = -\lambda \phi \\ a_1 \phi(x_0) + b_1 \phi'(x_0) = 0 \\ a_2 \phi(x_1) + b_2 \phi'(x_1) = 0 \end{cases} \tag{6.11}$$
In the above problem, $\lambda$ is called an eigenvalue and $\phi(x)$ is called an eigenfunction of the problem. The ODE for $T(t)$ is simple and has the form $T' = -\lambda T$. In this section, we show that the above eigenvalue problem has infinitely many real eigenvalues and eigenfunctions. But before that, let us see a few examples.

Example 6.4. Consider the following equation

$$\begin{cases} \partial_t u = \partial_{xx} u, & 0 < x < l,\ t > 0 \\ u(t, 0) = 0 \\ u(t, l) = 0 \end{cases}$$
The associated eigenvalue problem is

$$\begin{cases} \phi'' = -\lambda \phi, & 0 < x < l \\ \phi(0) = 0, \quad \phi(l) = 0 \end{cases}$$
We first show that $\lambda > 0$ is a necessary condition for the eigenvalue problem to have a nonzero eigenfunction. Multiplying the equation $\phi'' = -\lambda \phi$ by $\phi$ and integrating over $(0, l)$ gives
$$\int_0^l \phi''(x)\, \phi(x)\, dx = -\lambda \int_0^l |\phi(x)|^2\, dx.$$
Integration by parts and applying the boundary conditions $\phi(0) = \phi(l) = 0$ transforms the left hand side of the above equality to the following:

$$-\int_0^l |\phi'(x)|^2\, dx = -\lambda \int_0^l |\phi(x)|^2\, dx.$$
The above equality indicates that $\lambda \geq 0$. However, if $\lambda = 0$, then $\phi'$ is identically zero and thus $\phi(x)$ is constant. Since $\phi(0) = 0$, we conclude $\phi$ is identically zero, which is not acceptable as an eigenfunction. Therefore, $\lambda$ is strictly positive. With this result, we obtain the solution $\phi$ as
$$\phi(x) = A \cos(\sqrt{\lambda}\, x) + B \sin(\sqrt{\lambda}\, x).$$
The condition $\phi(0) = 0$ implies $A = 0$, and the condition $\phi(l) = 0$ implies $\sqrt{\lambda}\, l = n\pi$ for $n = 1, 2, \dots$. Therefore, the eigenvalues are $\lambda_n = (n\pi/l)^2$ and the eigenfunctions are $\phi_n(x) = \sin(n\pi x/l)$. Note that we obtained infinitely many real eigenvalues $\lambda_n$ such that
$$\lambda_1 < \lambda_2 < \lambda_3 < \dots,$$
and $\lambda_n \to \infty$ as $n \to \infty$. In addition, to each eigenvalue there corresponds an eigenfunction $\phi_n(x)$, and a simple calculation shows that they are orthogonal with respect to the inner product

$$\langle \phi_n, \phi_m \rangle = \int_0^l \phi_n(x)\, \phi_m(x)\, dx = 0, \qquad n \neq m.$$

Each eigenfunction has the important property
$$L[\phi_n] = -\lambda_n \phi_n.$$

Example 6.5. Consider the following eigenvalue problem:

$$\begin{cases} \phi'' = -\lambda \phi \\ \phi(0) = 0, \quad 5\phi(1) + \phi'(1) = 0 \end{cases} \tag{6.12}$$
In a similar way, we obtain a necessary condition on the sign of $\lambda$, namely $\lambda > 0$ (this is left as an exercise to the reader). Therefore, the general solution to the ODE is

$$\phi(x) = c_1 \cos(\sqrt{\lambda}\, x) + c_2 \sin(\sqrt{\lambda}\, x). \tag{6.13}$$

From the boundary condition at $x = 0$ we obtain $c_1 = 0$, and at $x = 1$ we obtain
$$5 \sin\sqrt{\lambda} + \sqrt{\lambda}\, \cos\sqrt{\lambda} = 0. \tag{6.14}$$
There is no closed form solution to the above algebraic equation. The roots of the equation are shown in Figure 6.2. Observe that there are infinitely many roots $\lambda_n$, $n = 1, 2, \dots$, and furthermore $\lambda_n \to \infty$. To each $\lambda_n$ there corresponds only one eigenfunction, namely $\phi_n(x) = \sin(\sqrt{\lambda_n}\, x)$.

Figure 6.2. The roots of the equation $5\sin\sqrt{\lambda} + \sqrt{\lambda}\,\cos\sqrt{\lambda} = 0$.

Some numerical values of $\lambda_n$ are

$$\lambda_1 = 7.02, \quad \lambda_2 = 29.7, \quad \lambda_3 = 70.35, \quad \lambda_4 = 130.1, \quad \dots \tag{6.15}$$
and a direct calculation shows that the $\phi_n(x)$ are orthogonal with respect to the inner product $\langle \cdot, \cdot \rangle$, having the important property
$$L[\phi_n] = -\lambda_n \phi_n.$$

Example 6.6. Let us solve the following wave problem defined on the interval $0 < x < 1$:
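The values in (6.15) can be recomputed with any bracketing root-finder applied to $F(s) = 5\sin s + s\cos s$ in the variable $s = \sqrt{\lambda}$. A sketch with scipy; the brackets below are read off from the sign changes of $F$ (cf. Figure 6.2) and are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import brentq

F = lambda s: 5 * np.sin(s) + s * np.cos(s)  # s = sqrt(lambda)

brackets = [(1.6, 3.1), (4.7, 6.3), (7.9, 9.4), (11.0, 12.5)]
for k, (a, b) in enumerate(brackets, start=1):
    s = brentq(F, a, b)                 # F changes sign on each bracket
    print(f"lambda_{k} = {s**2:.2f}")   # 7.02, 29.70, 70.35, 130.10
```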

$$\begin{cases} \partial_{tt} u = (x+1)^2\, \partial_{xx} u + (x+1)\, \partial_x u \\ u(t, 0) = u(t, 1) = 0 \\ u(0, x) = f(x), \quad \partial_t u(0, x) = g(x) \end{cases} \tag{6.16}$$


The associated eigenvalue problem is

$$\begin{cases} (x+1)^2 \phi'' + (x+1) \phi' = -\lambda \phi \\ \phi(0) = \phi(1) = 0 \end{cases} \tag{6.17}$$

Note that this ordinary differential equation is a Cauchy-Euler equation, which we studied in Volume I of this book. Substituting $x + 1 = e^s$, we reach

$$\begin{cases} \frac{d^2 \phi}{ds^2} = -\lambda \phi \\ \phi(s = 0) = \phi(s = \ln 2) = 0 \end{cases} \tag{6.18}$$

Obviously, the problem has the solution

$$\phi_n(x) = \sin\left( \frac{n\pi \ln(x+1)}{\ln 2} \right),$$

with $\lambda_n = \frac{n^2 \pi^2}{(\ln 2)^2}$ for $n = 1, 2, \dots$. It is seen that the $\{\phi_n\}$ are not orthogonal with respect to $\langle \cdot, \cdot \rangle$; however, they are orthogonal with respect to the weighted inner product

$$\langle \phi_n, \phi_m \rangle_{\rho = 1/(1+x)} := \int_0^1 \frac{1}{x+1}\, \phi_n(x)\, \phi_m(x)\, dx = 0, \qquad n \neq m.$$
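One can check the weighted orthogonality claim directly; the following sketch evaluates the integrals numerically (scipy quadrature, illustrative only):

```python
import numpy as np
from scipy.integrate import quad

def phi(n, x):
    return np.sin(n * np.pi * np.log(x + 1) / np.log(2))

rho = lambda x: 1.0 / (x + 1)  # weight from the substitution x + 1 = e^s
for n, m in [(1, 2), (1, 3), (2, 2)]:
    plain = quad(lambda x: phi(n, x) * phi(m, x), 0, 1)[0]
    weighted = quad(lambda x: rho(x) * phi(n, x) * phi(m, x), 0, 1)[0]
    print(n, m, f"plain = {plain:+.4f}", f"weighted = {weighted:+.4f}")
# the weighted inner product vanishes for n != m; the plain one does not
```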

6.3 Sturm-Liouville theorem

Before we discuss the eigenvalue problem (6.11), we consider the following eigenvalue problem that is called the Sturm-Liouville problem.

Definition 6.4. Let $U$ be the vector space

$$U = \{ \phi \in C^1(x_0, x_1);\ a_1 \phi(x_0) + b_1 \phi'(x_0) = 0,\ a_2 \phi(x_1) + b_2 \phi'(x_1) = 0 \}, \tag{6.19}$$
and let $L_s : U \to U$ be the operator
$$L_s[\phi] := \frac{d}{dx}[p(x)\, \phi'] + q(x)\, \phi, \tag{6.20}$$

where $p$ is a smooth function with $p(x) > 0$ on $[x_0, x_1]$, and $q$ is a continuous function. The eigenvalue problem $L_s[\phi] = -\lambda \rho(x)\, \phi$, where $\rho > 0$ is a smooth positive function, is called the regular Sturm-Liouville problem.

Theorem 6.1. The Sturm-Liouville eigenvalue problem has the following properties:

i. All eigenvalues of the problem are real.

ii. There are infinitely many eigenvalues $\lambda_1 < \lambda_2 < \lambda_3 < \dots$ such that
$$\lim_{n \to \infty} \lambda_n = \infty.$$

iii. There is only one eigenfunction $\phi_n(x)$ (up to a scalar multiple) for each eigenvalue $\lambda_n$. All eigenfunctions are orthogonal to each other with respect to the weight function $\rho(x)$, that is,

$$\langle \phi_n, \phi_m \rangle_\rho := \int_{x_0}^{x_1} \rho(x)\, \phi_n(x)\, \phi_m(x)\, dx = 0, \qquad n \neq m.$$

iv. The set of eigenfunctions $\{\phi_n\}_{n=1}^{\infty}$ forms a basis for $X^1(x_0, x_1)$. If $f \in X^1(x_0, x_1)$, then $f$ has the series representation
$$f(x) \sim \sum_{n=1}^{\infty} f_n\, \phi_n(x),$$
where the $f_n$ are determined by the relation
$$f_n = \frac{1}{\langle \phi_n, \phi_n \rangle_\rho}\, \langle f, \phi_n \rangle_\rho, \tag{6.21}$$
and the series converges pointwise in the sense

$$\lim_{N \to \infty} \sum_{n=1}^{N} f_n\, \phi_n(x) = \frac{1}{2}\, [f(x^+) + f(x^-)],$$
where $f(x^+)$ and $f(x^-)$ are respectively the right and left limits of $f$ at $x$.

The proof of some parts of Theorem 6.1 is highly technical. We give a sketch of the proof in the appendix. The key to the proof is the symmetry property of $L_s$.

6.3.1 Symmetric operators

Recall that a matrix $A_{n \times n}$ is called symmetric if $A^t = A$, where $A^t$ is the transpose of $A$. If $A_{n \times n}$ is symmetric, then it has $n$ orthogonal eigenvectors. We generalize this notion to symmetric differential operators on inner product spaces.

Definition 6.5. Assume that $(V, \langle \cdot, \cdot \rangle)$ is an inner product space. A linear map $T: V \to V$ is called symmetric (or self-adjoint) if the following relation holds:
$$\langle T[\phi], \psi \rangle = \langle \phi, T[\psi] \rangle, \qquad \forall \phi, \psi \in V. \tag{6.22}$$
It is simply seen that a matrix $A_{n \times n}$ is symmetric if and only if for arbitrary vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$ the following relation holds:
$$\vec{v} \cdot A\vec{u} = A\vec{v} \cdot \vec{u}.$$

Proposition 6.2. $L_s : U \to U$ is symmetric, that is, $\langle L_s[\phi], \psi \rangle = \langle \phi, L_s[\psi] \rangle$ for all $\phi, \psi \in U$.

Proof. For any $\phi, \psi \in U$, integration by parts (twice) gives
$$\langle L_s[\phi], \psi \rangle = \int_{x_0}^{x_1} L_s[\phi]\, \psi\, dx = p(x)\, \phi'(x)\, \psi(x) \Big|_{x_0}^{x_1} - p(x)\, \phi(x)\, \psi'(x) \Big|_{x_0}^{x_1} + \int_{x_0}^{x_1} \left( \frac{d}{dx}[p(x)\, \psi'] + q(x)\, \psi \right) \phi\, dx = \text{boundary term} + \langle \phi, L_s[\psi] \rangle.$$

We show that the boundary term vanishes. In fact, we have

$$p(x_1)\, [\phi'(x_1) \psi(x_1) - \phi(x_1) \psi'(x_1)] - p(x_0)\, [\phi'(x_0) \psi(x_0) - \phi(x_0) \psi'(x_0)],$$
or equivalently
$$p(x_1)\, W(\phi, \psi)(x_1) - p(x_0)\, W(\phi, \psi)(x_0),$$
where $W(\phi, \psi)$ is the Wronskian of $\phi, \psi$. According to the boundary conditions, we have
$$\begin{pmatrix} \phi(x_0) & \phi'(x_0) \\ \psi(x_0) & \psi'(x_0) \end{pmatrix} \begin{pmatrix} a_1 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{6.23}$$

Since at least one of a1 or b1 must be nonzero (otherwise no boundary condition exists at

$x = x_0$), we get
$$\det \begin{pmatrix} \phi(x_0) & \phi'(x_0) \\ \psi(x_0) & \psi'(x_0) \end{pmatrix} = 0.$$
Similarly, we obtain
$$\det \begin{pmatrix} \phi(x_1) & \phi'(x_1) \\ \psi(x_1) & \psi'(x_1) \end{pmatrix} = 0,$$
and this justifies the claim. $\square$

Proposition 6.3. All eigenvalues of the problem $L_s[\phi] = -\lambda \rho(x)\, \phi$ are real, and eigenfunctions associated with distinct eigenvalues are orthogonal with respect to $\rho(x)$, that is,

$$\langle \phi_n, \phi_m \rangle_\rho = \int_{x_0}^{x_1} \rho(x)\, \phi_n(x)\, \phi_m(x)\, dx = 0, \qquad n \neq m.$$

Proof. Let $(\lambda, \phi)$ be a complex eigenvalue-eigenfunction pair of $L_s$. Remember that the inner product of complex functions has the property
$$\langle\langle u, \lambda v \rangle\rangle = \bar{\lambda}\, \langle\langle u, v \rangle\rangle.$$
We have
$$\langle\langle L_s[\phi], \phi \rangle\rangle = \langle\langle -\lambda \rho\, \phi, \phi \rangle\rangle = -\lambda\, \|\phi\|_\rho^2.$$
Since $L_s$ is symmetric, we also have
$$\langle\langle L_s[\phi], \phi \rangle\rangle = \langle\langle \phi, L_s[\phi] \rangle\rangle = \langle\langle \phi, -\lambda \rho\, \phi \rangle\rangle = -\bar{\lambda}\, \|\phi\|_\rho^2.$$
This implies $\lambda = \bar{\lambda}$, and hence $\lambda$ is real. To prove the orthogonality, we write
$$\langle L_s[\phi_j], \phi_i \rangle = -\lambda_j\, \langle \phi_j, \phi_i \rangle_\rho.$$
Since $L_s$ is symmetric, the left hand side of the above equality also reads $\langle \phi_i, L_s[\phi_j] \rangle = \langle L_s[\phi_i], \phi_j \rangle = -\lambda_i\, \langle \phi_i, \phi_j \rangle_\rho$, and hence $-\lambda_j \langle \phi_i, \phi_j \rangle_\rho = -\lambda_i \langle \phi_i, \phi_j \rangle_\rho$. Since $\lambda_i \neq \lambda_j$, we conclude $\langle \phi_i, \phi_j \rangle_\rho = 0$. $\square$

The proofs of the other parts of Theorem 6.1 are highly technical, and the interested reader can consult advanced books on this topic. We present a sketch of the proof for some parts in the appendix.

Example 6.7. Let V be the vector space

$$V = \{ u \in C^1(0, l);\ u(0) = u(l) = 0 \}.$$

The map $T = \frac{d^2}{dx^2}$ is symmetric on $V$. In fact, for arbitrary $\phi, \psi \in V$ we have
$$\langle T[\phi], \psi \rangle = \int_0^l \phi''(x)\, \psi(x)\, dx = -\int_0^l \phi'(x)\, \psi'(x)\, dx = \int_0^l \phi(x)\, \psi''(x)\, dx = \langle \phi, T[\psi] \rangle.$$
As we observed above, the eigenfunctions of $T$ are $\phi_n(x) = \sin\left( \frac{n\pi}{l} x \right)$. The set $\{\phi_n\}_{n=1}^{\infty}$ is a basis for $X^1(0, l)$; that is, if $f \in X^1(0, l)$, then we can write
$$f(x) \sim \sum_{n=1}^{\infty} f_n \sin\left( \frac{n\pi}{l} x \right),$$
where
$$f_n = \frac{\langle f, \phi_n \rangle}{\|\phi_n\|^2} = \frac{2}{l} \int_0^l f(x) \sin\left( \frac{n\pi}{l} x \right) dx.$$
The convergence in this case is pointwise. For example, the function $f(x) = x$ for $x \in (0, 1)$ is represented by the series
$$x \sim \sum_{n=1}^{\infty} \frac{-2(-1)^n}{n\pi} \sin(n\pi x).$$
The graph is shown in Figure 6.3.

Figure 6.3. Partial sums of the sine series of $f(x) = x$ on $(0, 1)$.

Similarly, for the space

$$V = \{ \phi \in C^1(0, l);\ \phi'(0) = \phi'(l) = 0 \},$$
the eigenfunctions are $\phi_n(x) = \cos\left( \frac{n\pi}{l} x \right)$, $n = 0, 1, \dots$. As we know, the obtained trigonometric functions are orthogonal with respect to the inner product $\langle \cdot, \cdot \rangle$. In addition, every function $f \in X^1(0, l)$ has a series representation
$$f(x) \sim f_0 + \sum_{n=1}^{\infty} f_n\, \phi_n(x),$$
where
$$f_0 = \frac{\langle f, 1 \rangle}{\|1\|^2} = \frac{1}{l} \int_0^l f(x)\, dx = \bar{f}, \qquad f_n = \frac{\langle f, \phi_n \rangle}{\|\phi_n\|^2} = \frac{2}{l} \int_0^l f(x) \cos\left( \frac{n\pi}{l} x \right) dx.$$

For example, the function $f(x) = x$ on $(0, 1)$ is represented in terms of the $\phi_n$ as

$$x \sim \frac{1}{2} + \sum_{n=1}^{\infty} \frac{2\, ((-1)^n - 1)}{n^2 \pi^2} \cos(n\pi x).$$
The graph is shown in Figure 6.4.

Figure 6.4. Partial sums of the cosine series of $f(x) = x$ on $(0, 1)$.

The operator $T = \frac{d}{dx}$, however, is not symmetric. For example, the functions $\sin(x)$ and $\sin(2x)$ belong to the space $V = \{ u \in C^1(0, \pi);\ u(0) = u(\pi) = 0 \}$, and
$$\langle T[\sin(x)], \sin(2x) \rangle = \frac{4}{3}, \qquad \langle T[\sin(2x)], \sin(x) \rangle = -\frac{4}{3}.$$

Remark 6.2. If $p(x)$ is positive in $(x_0, x_1)$ but vanishes at $x_0$ or $x_1$, the eigenvalue problem $L_s[\phi] = -\lambda \rho\, \phi$ is called a singular Sturm-Liouville problem. The solutions of such problems may become unbounded at these points.

Example 6.8. (Legendre equation) Consider the following eigenvalue problem defined on $-1 < x < 1$:
$$\frac{d}{dx}\left[ (1 - x^2)\, \phi' \right] = -\lambda \phi. \tag{6.24}$$
Observe that $p(x) = 1 - x^2$ is zero at the boundary points $x = \pm 1$. The appropriate boundary condition for the Legendre equation is the boundedness of the solution, that is, $\phi(x)$ remains bounded as $x \to \pm 1$. The Legendre equation was solved by the power series method in Volume I. It is known that the solution becomes unbounded at $x = \pm 1$ except when $\lambda = n(n+1)$, where $n$ is an integer. For this value of $\lambda$, one solution becomes the polynomial $P_n(x)$, called the Legendre polynomial of order $n$. Recall that the $P_n(x)$ are determined by the Rodrigues formula
$$P_n(x) = \frac{(-1)^n}{2^n\, n!}\, \frac{d^n}{dx^n} (1 - x^2)^n. \tag{6.25}$$
We will study the Legendre polynomials in detail in subsequent chapters. Here we just mention a few of them:
$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{3}{2} x^2 - \frac{1}{2}, \quad P_3(x) = \frac{5}{2} x^3 - \frac{3}{2} x.$$

Legendre polynomials are orthogonal with respect to the weight function $\rho(x) = 1$, and thus

$$\int_{-1}^{1} P_n(x)\, P_m(x)\, dx = 0, \qquad n \neq m. \tag{6.26}$$
In addition, every function in $X^1(-1, 1)$ can be represented in terms of the $P_n(x)$, i.e.,
$$f(x) \sim \sum_{n=0}^{\infty} f_n\, P_n(x), \tag{6.27}$$
where the $f_n$ are determined by the inner product $\langle \cdot, \cdot \rangle$. The following figure represents the function $f(x) = \sin(x)$ in terms of the $P_n(x)$.

Figure 6.5. The Legendre expansion of $f(x) = \sin(x)$ on $(-1, 1)$.
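Since $\|P_n\|^2 = \frac{2}{2n+1}$ on $(-1, 1)$ (a standard fact, used here without proof), the coefficients in (6.27) can be computed one quadrature at a time. A sketch (scipy exposes the Legendre polynomials as scipy.special.legendre):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import legendre

for n in range(6):
    Pn = legendre(n)  # P_n as a callable polynomial
    f_n = quad(lambda x: np.sin(x) * Pn(x), -1, 1)[0] / (2.0 / (2 * n + 1))
    print(f"f_{n} = {f_n:+.6f}")  # even-order coefficients vanish since sin is odd
```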

Now consider the operator $L$ in (6.11). We first transform the eigenvalue problem $L[\phi] = -\lambda \phi$ in $U$ into a symmetric eigenvalue problem. Since $a > 0$, we divide the equation $L[\phi] = -\lambda \phi$ by $a$ and rewrite it as
$$\phi'' + \frac{b(x)}{a(x)}\, \phi' + \frac{c(x)}{a(x)}\, \phi = -\lambda\, \frac{1}{a(x)}\, \phi. \tag{6.28}$$
Multiplying both sides of the equation by the factor $p(x)$:

$$p(x) = e^{\int \frac{b(x)}{a(x)}\, dx}, \tag{6.29}$$
we obtain
$$p(x)\, \phi'' + p(x)\, \frac{b(x)}{a(x)}\, \phi' + p(x)\, \frac{c(x)}{a(x)}\, \phi = -\lambda\, \frac{p(x)}{a(x)}\, \phi.$$
Note that $p'(x) = p(x)\, \frac{b(x)}{a(x)}$, and thus
$$\frac{d}{dx}[p(x)\, \phi'] + q(x)\, \phi = -\lambda\, \rho(x)\, \phi, \tag{6.30}$$
where
$$q(x) = p(x)\, \frac{c(x)}{a(x)}, \qquad \rho(x) = \frac{p(x)}{a(x)}. \tag{6.31}$$
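As a concrete instance of (6.29)-(6.31), the operator of Example 6.6, with $a = (x+1)^2$, $b = x+1$, $c = 0$, can be put in Sturm-Liouville form mechanically. A sketch with sympy (symbol names are mine):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = (x + 1)**2, x + 1, sp.Integer(0)  # L = a d2/dx2 + b d/dx + c

p = sp.exp(sp.integrate(b / a, x))  # integrating factor (6.29)
q = sp.simplify(p * c / a)          # (6.31)
rho = sp.simplify(p / a)
print(p, q, rho)  # p = x + 1, q = 0, rho = 1/(x + 1): the weight of Example 6.6
```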

Proposition 6.4. The two forms of the eigenvalue problem, $L_s[\phi] = -\lambda \rho\, \phi$ and $L[\phi] = -\lambda \phi$, have the same eigenvalues and eigenfunctions.

6.4 Series solution to linear PDEs

Here we describe the eigenfunction expansion method as a powerful method for solving linear PDEs. Consider again the problem (6.9). The associated eigenvalue problem is $L[\phi] = -\lambda \phi$, and therefore its eigenvalues and eigenfunctions are the same as those of the symmetric eigenvalue problem $L_s[\phi] = -\lambda \rho\, \phi$. On the other hand, the set of eigenfunctions $\{\phi_n\}_{n=1}^{\infty}$ is a basis for functions in $X^1(x_0, x_1)$, and thus the solution $u(t, x)$ can be represented by the series

$$u(t, x) = \sum_{n=1}^{\infty} U_n(t)\, \phi_n(x),$$
for some unknown functions $U_n(t)$. Therefore, if we are able to determine these functions, then we obtain a series solution of the problem (6.9). Hence, the job before us reduces to determining the $U_n(t)$. In order to determine $U_n$, we plug the series into the PDE and obtain

$$\sum_{n=1}^{\infty} U_n'(t)\, \phi_n(x) = \sum_{n=1}^{\infty} U_n(t)\, L[\phi_n(x)].$$
By the relation $L[\phi_n] = -\lambda_n \phi_n$, the above equation reduces to
$$\sum_{n=1}^{\infty} U_n'(t)\, \phi_n(x) = -\sum_{n=1}^{\infty} \lambda_n\, U_n(t)\, \phi_n(x),$$
and finally $U_n' = -\lambda_n U_n$, a simple ODE with the solution $U_n(t) = U_n(0)\, e^{-\lambda_n t}$. Therefore, the series solution is
$$u(t, x) = \sum_{n=1}^{\infty} U_n(0)\, e^{-\lambda_n t}\, \phi_n(x).$$
On the other hand, since $u(0, x) = u_0(x)$ is given, the following relation holds:

$$u_0(x) = \sum_{n=1}^{\infty} U_n(0)\, \phi_n(x),$$
which implies
$$U_n(0) = \frac{\langle u_0, \phi_n \rangle_\rho}{\langle \phi_n, \phi_n \rangle_\rho},$$
and finally
$$u(t, x) = \sum_{n=1}^{\infty} \frac{\langle u_0, \phi_n \rangle_\rho}{\langle \phi_n, \phi_n \rangle_\rho}\, e^{-\lambda_n t}\, \phi_n(x).$$

Example 6.9. Use the eigenfunction expansion method to solve the following problem:

$$\begin{cases} \partial_t u = \partial_{xx} u + 2\partial_x u + e^{-t} e^{-x} \sin(\pi x), & 0 < x < 1,\ t > 0 \\ u(t, 0) = u(t, 1) = 0 \\ u(0, x) = 0 \end{cases}$$

The eigenfunctions of the associated operator are obtained by solving the equation

$$\phi'' + 2\phi' = -\lambda \phi.$$

The characteristic equation is $r^2 + 2r + \lambda = 0$, and thus $r = -1 \pm \sqrt{1 - \lambda}$. Here $\lambda > 1$, and thus
$$\phi_n(x) = e^{-x} \sin(n\pi x), \qquad \lambda_n = n^2 \pi^2 + 1.$$
Let us write $u$ as
$$u(t, x) = \sum_{n=1}^{\infty} U_n(t)\, e^{-x} \sin(n\pi x).$$
For $n \neq 1$ we obtain
$$\begin{cases} U_n' = -(1 + n^2 \pi^2)\, U_n \\ U_n(0) = 0 \end{cases}$$
and for $n = 1$,
$$\begin{cases} U_1' = -(1 + \pi^2)\, U_1 + e^{-t} \\ U_1(0) = 0 \end{cases}$$

Therefore, we obtain $U_n(t) = 0$ for $n \neq 1$, and for $n = 1$,
$$U_1(t) = \frac{e^{-t}}{\pi^2} \left( 1 - e^{-\pi^2 t} \right).$$
Thus
$$u(t, x) = \frac{e^{-t}}{\pi^2} \left( 1 - e^{-\pi^2 t} \right) e^{-x} \sin(\pi x).$$
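A quick finite-difference check (an illustration, not a proof) that this closed form satisfies the PDE of Example 6.9:

```python
import numpy as np

def u(t, x):
    return np.exp(-t) / np.pi**2 * (1 - np.exp(-np.pi**2 * t)) \
           * np.exp(-x) * np.sin(np.pi * x)

t, x, h = 0.3, 0.4, 1e-4
u_t  = (u(t + h, x) - u(t - h, x)) / (2 * h)
u_x  = (u(t, x + h) - u(t, x - h)) / (2 * h)
u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
forcing = np.exp(-t) * np.exp(-x) * np.sin(np.pi * x)
print(u_t - (u_xx + 2 * u_x + forcing))  # ~ 0 up to discretization error
```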

Remark 6.3. It might be helpful to consider the eigenfunctions as directions in the space $X^1(x_0, x_1)$. With this interpretation, the partial differential equation reduces to an ordinary differential equation along each eigenfunction $\phi_n$. This fact is schematically illustrated in Figure 6.6.

Figure 6.6. Along each eigenfunction $\phi_n$, the PDE reduces to the ODE $U_n' = -\lambda_n U_n$.

In Volume I of this book, we used a similar method to solve linear systems of ordinary differential equations. If $\vec{v}$ is an eigenvector of a matrix $A$, then along $\vec{v}$ the system $\vec{y}\,' = A\vec{y}$ reduces to the scalar equation $y' = \lambda y$, where $\lambda$ is the eigenvalue associated to $\vec{v}$. If the matrix $A_{n \times n}$ has $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$, then the general solution to the system is
$$\vec{y} = c_1 e^{\lambda_1 t}\, \vec{v}_1 + \dots + c_n e^{\lambda_n t}\, \vec{v}_n.$$

For partial differential equations like $\partial_t u = L[u]$, if $\phi(x)$ is an eigenfunction of the operator $L$ with $L[\phi] = -\lambda \phi$, the equation reduces along $\phi$ to the ordinary differential equation $U' = -\lambda U$. For this reason, this method is called the eigenfunction expansion method.

Remark 6.4. The eigenvector expansion method can also be used to solve linear algebraic systems $A\vec{x} = \vec{b}$. We illustrate it with a simple example. Let $A$ be the matrix
$$A = \begin{pmatrix} 3 & 1 & 2 \\ 1 & 4 & 1 \\ 2 & 1 & 3 \end{pmatrix},$$
and suppose we want to solve the equation $Av = b$ with $b = (1, 2, -1)^t$. The eigenvalues of $A$ are $\lambda_1 = 6$, $\lambda_2 = 3$, and $\lambda_3 = 1$, with the eigenvectors
$$v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}.$$
The set $\{v_1, v_2, v_3\}$ is an orthogonal basis of $\mathbb{R}^3$. We write the solution $v$ as $v = c_1 v_1 + c_2 v_2 + c_3 v_3$ for some undetermined constants $c_1, c_2, c_3$. Substituting $v$ into the equation gives
$$A[v] = 6 c_1 v_1 + 3 c_2 v_2 + c_3 v_3 = b,$$
and thus
$$c_1 = \frac{1}{6}\, \frac{\langle b, v_1 \rangle}{\langle v_1, v_1 \rangle} = \frac{1}{9}, \qquad c_2 = \frac{1}{3}\, \frac{\langle b, v_2 \rangle}{\langle v_2, v_2 \rangle} = -\frac{2}{9}, \qquad c_3 = \frac{\langle b, v_3 \rangle}{\langle v_3, v_3 \rangle} = 1.$$
Therefore, the solution to the equation is
$$v = \frac{1}{9}\, v_1 - \frac{2}{9}\, v_2 + v_3 = \frac{1}{9} \begin{pmatrix} 8 \\ 5 \\ -10 \end{pmatrix}.$$
It is simply verified that the obtained solution is equal to $v = A^{-1} b$.

Problems

Problem 6.24. Let $A$ be the matrix
$$A = \begin{pmatrix} 13 & 4 & 7 \\ 4 & 16 & 4 \\ 7 & 4 & 13 \end{pmatrix}.$$
Use the eigenvector expansion method to solve the linear equation $Au = (1, 2, 3)^t$.

Problem 6.25. Let $L$ be the linear map
$$L[f](x) := \frac{df}{dx} + \int_0^x f(s)\, ds.$$
Calculate $L[f]$ for the following functions in the interval $[0, 1]$:

a) $f(x) = x$

b) $f(x) = \sin(x)$

c) $f(x) = e^{kx}$

Problem 6.26. Show that the linear differential operator $L[\phi] = \phi'' + \phi'$ is not symmetric in the space

$$U = \{ \phi \in C^1(0, 1);\ \phi(0) = \phi(1) = 0 \}.$$

Problem 6.27. Show that the linear differential operator $L[\phi] = e^x \phi'' + e^x \phi'$ is symmetric in the space

$$U = \{ \phi \in C^1(0, 1);\ \phi(0) = \phi(1) = 0 \}.$$

Problem 6.28. Show that the operator $L[\phi] = \phi'' + e^x \phi$ is symmetric in the space

$$U = \{ \phi \in C^1(0, 1);\ \phi(0) = \phi(1),\ \phi'(0) = \phi'(1) \}.$$

Problem 6.29. Show that the map $L[\phi] = \phi'$ in the space

$$U = \{ \phi \in C^1(0, 1);\ \phi(0) = \phi'(0),\ \phi(1) = \phi'(1) \},$$
is anti-symmetric, that is, $\langle L[\phi], \psi \rangle = -\langle \phi, L[\psi] \rangle$.

Problem 6.30. Suppose $G(x)$ is a continuous function in $[0, 1]$ and $G(-x) = G(x)$. Show that the map
$$L[\phi](t) = \int_0^1 G(t - s)\, \phi(s)\, ds$$
is symmetric.

Problem 6.31. In this problem we show that a linear equation may have no solution. Let $U$ be the vector space
$$U = \{ \phi \in C^2(0, 1);\ \phi(0) - \phi'(0) = 0,\ \phi(1) - \phi'(1) = 0 \},$$
and $V = C(0, 1)$. Consider the linear map $L: U \to V$ defined by
$$L[\phi] = \phi'' + \phi'.$$
Show that the equation $L[\phi] = f(x)$ is not solvable if
$$\int_0^1 f(x)\, dx \neq 0.$$

Problem 6.32. Find the eigenvalues and eigenfunctions of the following problem and verify that they are orthogonal:
$$\begin{cases} \phi'' = -\lambda \phi \\ \phi(0) = 0, \quad \phi'(1) = 0 \end{cases}$$

Problem 6.33. Consider the following eigenvalue problem:

$$\begin{cases} x^2 \phi'' + x \phi' + 2\phi = -\lambda \phi \\ \phi'(1) = \phi'(e) = 0 \end{cases}$$

a) Find the eigenvalues and eigenfunctions of the problem.

b) Find an expansion of the function $f(x) = 1$, $x \in [1, e]$, in terms of the first four eigenfunctions of the problem.

Problem 6.34. Consider the following eigenvalue problem:

$$\begin{cases} \phi'' + 2\phi' = -\lambda \phi \\ \phi(0) = 0, \quad \phi(1) + \phi'(1) = 0 \end{cases}$$

a) Find the eigenvalues and eigenfunctions of the problem.

b) Find an expansion of the function $f(x) = x$, $x \in [0, 1]$, in terms of the first four eigenfunctions of the problem.

Problem 6.35. Consider the following problem:
$$\begin{cases} \phi'' = -\lambda \phi \\ \phi(0) = 0, \quad \phi(1) + \phi'(1) = 0 \end{cases}$$

a) Show that the eigenvalues of the problem are strictly positive.

b) Use a numerical method to find the first 5 eigenvalues of the problem, and then find the associated eigenfunctions $\phi_n(x)$.

c) Verify directly that the obtained eigenfunctions $\phi_n(x)$ are orthogonal.

d) Use the eigenfunction expansion method to find an approximate solution to the linear equation

$$\begin{cases} \phi'' = x \\ \phi(0) = 0, \quad \phi(1) + \phi'(1) = 0 \end{cases}$$
in the space $\mathrm{span}\{\phi_n\}_{n=1}^{5}$.

e) Solve the above equation in closed form and verify that the approximate solution obtained in part (d) is the expansion of the true solution.

f) Now change the boundary condition to the new one $\phi(1) - \phi'(1) = 0$. Verify that there is no solution to the problem.

g) Use the eigenfunction expansion method and observe that the linear equation

$$\begin{cases} \phi'' = x \\ \phi(0) = 0, \quad \phi(1) - \phi'(1) = 0 \end{cases}$$
cannot be solved by this method. This explains why the attempt to find a closed form solution in part (f) fails.

Problem 6.36. Consider the linear map $L[\phi] = (e^{-x} \phi')'$ in

$$U = \{ \phi \in C^1(0, \ln 2);\ \phi(0) = \phi(\ln 2) = 0 \}.$$

a) Show that $L$ is symmetric.

b) Find $\lambda_n$ and $\rho(x)$ such that the functions $\phi_n(x) = \sin(n\pi e^x)$ satisfy the equation
$$L[\phi_n] = -\lambda_n\, \rho(x)\, \phi_n.$$

c) The set $\{\phi_n\}_{n=1}^{\infty}$ is a basis for $C^1(0, \ln 2)$. Find the best approximation of the function $f(x) = 1$ in the space $\mathrm{span}\{\phi_n\}_{n=1}^{5}$; that is, find $f_1, \dots, f_5$ such that
$$\|f - f_1 \phi_1 - \dots - f_5 \phi_5\| \leq \|f - g\|,$$
for any $g \in \mathrm{span}\{\phi_n\}_{n=1}^{5}$. Draw this approximation and compare it with the function $f$.

d) Consider the following heat problem:

$$\begin{cases} \partial_t u = L[u] \\ u(t, 0) = u(t, \ln 2) = 0 \\ u(0, x) = 1 \end{cases}$$

Since $\{\phi_n\}_{n=1}^{\infty}$ is a basis, we can approximate the solution $u$ as
$$u(t, x) \approx \sum_{n=1}^{5} U_n(t)\, \phi_n(x).$$
Find the ordinary differential equation of $U_n(t)$ along $\phi_n(x)$. This differential equation describes the change of heat along $\phi_n(x)$.

Problem 6.37. Consider the map $L[\phi] = x^2 \phi'' + 3x \phi'$.

a) Find the eigenvalues and eigenfunctions of the problem

$$\begin{cases} L[\phi] = -\lambda \phi \\ \phi(1) = \phi(e) = 0 \end{cases}$$

b) Solve the linear equation
$$\begin{cases} L[\phi] = x \\ \phi(1) = \phi(e) = 0 \end{cases}$$

c) Use the eigenfunction expansion method to solve the following problem:

$$\begin{cases} \partial_{tt} u + 2\alpha\, \partial_t u = L[u] \\ u(t, 1) = u(t, e) = 0 \\ u(0, x) = 0, \quad \partial_t u(0, x) = 1 \end{cases}$$
where $\alpha$ is a positive constant.

Problem 6.38. Write the following eigenvalue problems in Sturm-Liouville form. Use the energy method in each case to show that the eigenvalues are strictly positive:

a)
$$\begin{cases} \phi'' + 2x \phi' = -\lambda \phi \\ \phi(0) = 0, \quad \phi'(1) = 0 \end{cases}$$

b)
$$\begin{cases} (1 + x)\, \phi'' + x \phi' = -\lambda \phi \\ \phi(0) = \phi'(0), \quad \phi'(0) = \phi'(1) \end{cases}$$

c)
$$\begin{cases} \phi'' + 2\phi' = -\lambda\, e^{-2x}\, \phi \\ \phi(0) = 0, \quad \phi'(1) = 0 \end{cases}$$

Problem 6.39. Consider the following eigenvalue problem

$$\begin{cases} \phi'' + \phi' + q(x)\, \phi = -\lambda\, e^{-x}\, \phi \\ \phi(0) = 0, \quad \phi'(1) = 0 \end{cases}$$
Show the following inequality:
$$\lambda \geq -\frac{\int_0^1 q(x)\, e^x\, \phi^2(x)\, dx}{\int_0^1 \phi^2(x)\, dx},$$
and conclude that if $q(x) < 0$ in $[0, 1]$, then $\lambda$ is strictly positive.

Problem 6.40. Consider the following problem:

$$\begin{cases} \partial_t u = x^2\, \partial_{xx} u + 3x\, \partial_x u + u \\ u(t, 1) = 0, \quad u(t, e) + u_x(t, e) = 0 \\ u(0, x) = x \end{cases}$$

a) Find the first 3 eigenvalues of the associated eigenvalue problem (you can use computer software).

b) Use the 3 eigenfunctions you obtained in part (a) to find an approximate solution to the heat equation.

Problem 6.41. Repeat the above problem for the following equation:

$$\begin{cases} \partial_t u = x^2\, \partial_{xx} u + 3x\, \partial_x u + u + e^{-t} x \\ u(t, 1) = 0, \quad u(t, e) + u_x(t, e) = 0 \\ u(0, x) = 0 \end{cases}$$


Problem 6.42. Solve the following

$$\begin{cases} \partial_{tt} u = x^2\, \partial_{xx} u + x\, \partial_x u \\ u(t, 1) = u(t, e) = 0 \\ u(0, x) = 0, \quad \partial_t u(0, x) = \sin(\pi \log x) - 3 \sin(3\pi \log x) \end{cases}$$

Problem 6.43. We know that the eigenfunctions $\phi_n(x) = x \sin(n\pi x)$ are the solutions to the problem

$$\begin{cases} x^2 \phi'' - 2x \phi' + 2\phi = -\lambda\, x^2\, \phi \\ \phi(1) = \phi(2) = 0 \end{cases}$$
for $\lambda_n = n^2 \pi^2$. Having these eigenfunctions, solve the following heat problem:

$$\begin{cases} \partial_t u = \partial_{xx} u - \frac{2}{x}\, \partial_x u + \frac{2}{x^2}\, u \\ u(t, 1) = u(t, 2) = 0 \\ u(0, x) = \sin(\pi x) \end{cases}$$

Problem 6.44. We know that the bounded eigenfunction solutions of the problem
$$(1 - x^2)\, \phi'' - 2x\, \phi' = -\lambda \phi,$$
for $x \in (-1, 1)$, are the Legendre polynomials $P_n(x)$ with $\lambda_n = n(n+1)$. Having these eigenfunctions, solve the following heat problem on $-1 < x < 1$:
$$\begin{cases} \partial_t u = (1 - x^2)\, \partial_{xx} u - 2x\, \partial_x u \\ u(t, \pm 1) \text{ bounded} \\ u(0, x) = \sin(\pi x) \end{cases}$$

Problem 6.45. Solve the following problems:

i.
$$\begin{cases} \partial_t u = \partial_{xx} u + 2\partial_x u + 1 \\ u(t, 0) = u(t, \pi) = 0 \\ u(0, x) = 0 \end{cases}$$

ii.
$$\begin{cases} \partial_t u = \partial_{xx} u - x \\ u(t, 0) = -1, \quad \partial_x u(t, \pi) = 0 \\ u(0, x) = \frac{1}{6} x^3 \end{cases}$$

iii.
$$\begin{cases} \partial_t u = \partial_{xx} u + u + \sin(x) + \frac{x}{\pi} \\ u(t, 0) = 0, \quad u(t, \pi) = t \\ u(0, x) = 0 \end{cases}$$

iv.
$$\begin{cases} \partial_{tt} u = \partial_{xx} u + 2\partial_x u + u + tx \\ u(t, 0) = u(t, \pi) = 0 \\ u(0, x) = x, \quad \partial_t u(0, x) = 0 \end{cases}$$

v.
$$\begin{cases} \partial_t u = x^2\, \partial_{xx} u + 4u \\ u(t, 1) = -1, \quad u(t, e) = 1 \\ u(0, x) = x \end{cases}$$

vi.
$$\begin{cases} u_t = u_{xx} + 2u_x \\ u(t, 0) = 0, \quad u(t, 1) = 1 \\ u(0, x) = x \end{cases}$$

Problem 6.46. Solve the following equation

$$\begin{cases} \partial_t u = \partial_{xx} u \\ u(t, 0) = u(t, 1), \quad \partial_x u(t, 0) = \partial_x u(t, 1) \\ u(0, x) = 1 + 2\cos(2\pi x) - 3\sin(3\pi x) \end{cases}$$

Appendix A Sturm-Liouville theorem

The complete proof of the theorem is highly technical, and the interested reader is referred to advanced books on this subject. Here we give an elementary justification for some parts of the theorem. The definition of a Sturm-Liouville problem is given in Definition 6.4.

Proposition A.1. All eigenvalues of a Sturm-Liouville problem are real.

This follows from the fact that the differential operator of a Sturm-Liouville problem is symmetric and thus enjoys the properties of symmetric maps.

Proposition A.2. Assume $\phi_1, \phi_2$ are two eigenfunctions of a Sturm-Liouville problem associated with $\lambda_1, \lambda_2$, and $\lambda_1 \neq \lambda_2$. Then $\langle \phi_1, \phi_2 \rangle_\rho = 0$.

The proof is given in the text.

Proposition A.3. There is only one eigenfunction associated to each eigenvalue; i.e., if $\phi_1$ and $\phi_2$ are two eigenfunctions associated with the same eigenvalue $\lambda$, then they are linearly dependent.

Proof. In the first part of this book, we proved that two solutions $y_1, y_2$ of a second order linear differential equation are linearly dependent if and only if their Wronskian $W(y_1, y_2)(x)$ is zero at some point $x$ in their domain of definition. If $\phi_1$ and $\phi_2$ are solutions to the equation $L_s[\phi] = -\lambda \rho(x)\, \phi$ satisfying the condition
$$a_1 \phi(x_0) + b_1 \phi'(x_0) = 0, \tag{A.1}$$
then we have
$$\begin{pmatrix} \phi_1(x_0) & \phi_1'(x_0) \\ \phi_2(x_0) & \phi_2'(x_0) \end{pmatrix} \begin{pmatrix} a_1 \\ b_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{A.2}$$

Since $a_1$ and $b_1$ cannot both be zero, we conclude $W(\phi_1, \phi_2)(x_0) = 0$, and thus $\phi_1, \phi_2$ are linearly dependent in $[x_0, x_1]$. $\square$

The main question is still unanswered: the problem should have infinitely many eigenvalues. To answer it, we give a constructive method for producing the eigenvalues.

Theorem A.1. The optimization problem

$$\begin{cases} \max_{\phi \in U}\ \langle L_s[\phi], \phi \rangle \\ \text{subject to } \|\phi\| = 1 \end{cases} \tag{A.3}$$


where $\|\phi\|^2 = \langle \phi, \phi \rangle$ and
$$U = \{ \phi \in C^1(x_0, x_1);\ B_0[\phi] = B_1[\phi] = 0 \}, \tag{A.4}$$
has a solution. This solution is unique up to sign, and furthermore, if $\phi$ is a solution to the problem, then $\phi$ is an eigenfunction of the eigenvalue problem $L_s[\phi] = -\lambda \phi$.

Proof. Use the Lagrange multiplier method to write the problem in the form

$$\max_{\phi \in U}\ \langle L_s[\phi], \phi \rangle + \lambda\, (\|\phi\|^2 - 1). \tag{A.5}$$
Unfortunately, here we do not have enough tools to prove the existence part of the theorem; it needs some knowledge about the properties of function spaces and completeness, so we omit that part here. The last part can be proved as follows. Suppose $-\lambda$ is the value of the maximization problem (A.3) and let $\phi$ be its solution. Define $I: U \to \mathbb{R}$ as
$$I[\phi] = \langle L_s[\phi], \phi \rangle + \lambda\, (\langle \phi, \phi \rangle - 1).$$
Clearly, the maximum of $I[\phi + \varepsilon \psi]$ for arbitrary $\psi \in U$ happens at $\varepsilon = 0$. This implies
$$\frac{d}{d\varepsilon} I[\phi + \varepsilon \psi] \Big|_{\varepsilon=0} = 0.$$
On the other hand, we have
$$\frac{d}{d\varepsilon} I[\phi + \varepsilon \psi] \Big|_{\varepsilon=0} = \langle L_s[\psi] + \lambda \psi, \phi \rangle + \langle L_s[\phi] + \lambda \phi, \psi \rangle = 0.$$

Since $L_s$ is symmetric, it simply follows that

$$\langle L_s[\phi] + \lambda \phi, \psi \rangle = 0. \tag{A.6}$$
Since $\psi \in U$ is arbitrary, we conclude $L_s[\phi] = -\lambda \phi$. Now we prove the uniqueness. Assume that $\phi_1, \phi_2$ are both solutions to the differential equation

$$\begin{cases} L_s[\phi] = -\lambda \phi \\ B_0[\phi] = B_1[\phi] = 0 \end{cases} \tag{A.7}$$

Above we observed that if $\phi_1, \phi_2$ are two such solutions, then they are linearly dependent. Since $\|\phi_1\| = \|\phi_2\| = 1$, this implies $\phi_2 = \pm \phi_1$. $\square$

Remark A.1. Let us denote the solution to the problem (A.3) by $\phi_1(x)$. Consider the space $U_2$, the orthogonal complement with respect to $\langle \cdot, \cdot \rangle$ of $\mathrm{span}\{\phi_1\}$ in $U$. Now the maximization problem

$$\begin{cases} \max_{\phi \in U_2}\ \langle L_s[\phi], \phi \rangle \\ \text{subject to } \|\phi\| = 1 \end{cases}$$
has a unique solution $\phi_2$ (up to sign). It is simply seen that for any $\psi \in U_2$ we have
$$\langle L_s[\phi_2] + \lambda_2 \phi_2, \psi \rangle = 0,$$

where $-\lambda_2$ is the value of the Lagrange multiplier, and thus $L_s[\phi_2] = -\lambda_2 \phi_2$. Apparently $-\lambda_1 > -\lambda_2$. Repeating the same procedure, we obtain the sequence $-\lambda_1 > -\lambda_2 > -\lambda_3 > \dots$, or equivalently $\lambda_1 < \lambda_2 < \lambda_3 < \dots$.
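The variational picture of Theorem A.1 and Remark A.1 can be mimicked in finite dimensions: discretizing $L_s = d^2/dx^2$ on $(0, \pi)$ with Dirichlet conditions gives a symmetric matrix whose Rayleigh quotient is maximized by the first eigenvector, then by the next one on the orthogonal complement, and so on. A sketch (finite differences with numpy; the grid size is arbitrary):

```python
import numpy as np

N = 400
h = np.pi / N
# symmetric tridiagonal approximation of d^2/dx^2 with phi(0) = phi(pi) = 0
A = (np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1)) / h**2

vals, vecs = np.linalg.eigh(A)       # symmetric eigenvalue problem
print(np.round(-vals[::-1][:4], 3))  # lambda_n ~ 1, 4, 9, 16, i.e. n^2
# vecs[:, -1] approximates sin(x), the maximizer of <L_s[phi], phi> / ||phi||^2
```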

Proposition A.4. The increasing sequence $(\lambda_n)$ has no finite limit point; that is, $\lim_{n \to \infty} \lambda_n = \infty$.

We omit the proof. With the aid of the above proposition, we can prove the following theorem.

Theorem A.2. Every continuous function $f$ on $[x_0, x_1]$ can be written as a series in terms of the eigenfunctions $\phi_n(x)$ of a Sturm-Liouville problem:

$$f(x) \sim \sum_{n=1}^{\infty} f_n\, \phi_n(x), \tag{A.8}$$
where $f_n = \frac{\langle \phi_n, f \rangle}{\langle \phi_n, \phi_n \rangle}$. The convergence is in norm, that is,

$$\lim_{N \to \infty} \left\| f - \sum_{n=1}^{N} f_n\, \phi_n \right\| = 0. \tag{A.9}$$

Proof. We prove the theorem for functions in $U$. Assume the contrary; then there is a nonzero function $f \in U$ such that $\langle f, \phi_n \rangle = 0$ for all $n$. This is impossible, since according to Proposition A.4 the eigenvalues $\lambda_n$ have the property

$$-\lambda_1 > -\lambda_2 > \dots, \tag{A.10}$$
with $-\lambda_n \to -\infty$, and thus
$$\frac{\langle L_s[f], f \rangle}{\|f\|^2} \geq -\lambda_N, \tag{A.11}$$
for some $N > 0$. Since $f$ is orthogonal to $\phi_1, \dots, \phi_{N-1}$, we have $f \in U_N$, and this implies that

$$\max_{\phi \in U_N} \frac{\langle L_s[\phi], \phi \rangle}{\|\phi\|^2} = \frac{\langle L_s[f], f \rangle}{\|f\|^2}, \tag{A.12}$$
and then $f$ is an eigenfunction of the map $L_s$, contradicting the assumption that $f$ is orthogonal to all eigenfunctions. The proof for continuous functions is based on the fact that $C^1$ functions are dense in the space of continuous functions; we omit it. $\square$

As we saw in the previous chapter, if the function $f(x)$ is piecewise $C^1$, the convergence is pointwise; however, we do not prove it here.