
CONTENTS

CHAPTER 1 POWER SERIES SOLUTIONS 03

1.1 INTRODUCTION 03

1.2 POWER SERIES SOLUTIONS 04

1.3 REGULAR SINGULAR POINTS – FROBENIUS SERIES SOLUTIONS 05

1.4 GAUSS'S HYPERGEOMETRIC EQUATION 07

1.5 THE POINT AT INFINITY 09

CHAPTER 2 SPECIAL FUNCTIONS 11

2.1 LEGENDRE POLYNOMIALS 11

2.2 BESSEL FUNCTIONS – THE GAMMA FUNCTION 15

CHAPTER 3 SYSTEMS OF FIRST ORDER EQUATIONS 20

3.1 LINEAR SYSTEMS 20

3.2 HOMOGENEOUS LINEAR SYSTEMS WITH CONSTANT COEFFICIENTS 21

3.3 NONLINEAR SYSTEMS 24

CHAPTER 4 NONLINEAR EQUATIONS 26

4.1 AUTONOMOUS SYSTEM 26

4.2 CRITICAL POINTS & STABILITY 28

4.3 LIAPUNOV’S DIRECT METHOD 31

4.4 SIMPLE CRITICAL POINTS – NONLINEAR SYSTEMS 34

CHAPTER 5 FUNDAMENTAL THEOREMS 38

5.1 THE METHOD OF SUCCESSIVE APPROXIMATIONS 38

5.2 PICARD’S THEOREM 39

Differential Equations 3

CHAPTER 6 FIRST ORDER PARTIAL DIFFERENTIAL EQUATIONS 46

6.1 INTRODUCTION – REVIEW 46

6.2 FORMATION OF FIRST ORDER PDE 48

6.3 CLASSIFICATION OF 50

6.4 LINEAR EQUATIONS 54

6.5 PFAFFIAN DIFFERENTIAL EQUATIONS 56

6.6 CHARPIT’S METHOD 62

6.7 JACOBI’S METHOD 66

6.8 CAUCHY PROBLEM 70

6.9 OF SOLUTIONS 74

CHAPTER 7 SECOND ORDER PARTIAL DIFFERENTIAL EQUATIONS

7.1 CLASSIFICATION 78

7.2 ONE DIMENSIONAL WAVE EQUATION 81

7.3 RIEMANN’S METHOD 87

7.4 LAPLACE EQUATION 89

7.5 HEAT CONDUCTION PROBLEM 95 - 98


CHAPTER 1

POWER SERIES SOLUTIONS AND SPECIAL FUNCTIONS

1.1 Introduction

An algebraic function is a polynomial, a rational function, or any function that satisfies a polynomial equation whose coefficients are polynomials. The elementary functions consist of the algebraic functions and the elementary transcendental (non-algebraic) functions: the trigonometric functions and their inverses, the exponential and logarithmic functions, and all others that can be constructed from these by adding, multiplying, or taking compositions. Any other function is called a special function.

Consider the power series Σ_{n=0}^∞ a_n x^n. The series has a radius of convergence R, 0 ≤ R ≤ ∞, such that the series converges for |x| < R and diverges for |x| > R. We have R = lim_{n→∞} |a_n / a_{n+1}|.

For the geometric series 1 + x + x^2 + …, R = 1; for the exponential series Σ_{n=0}^∞ x^n/n!, R = ∞; and the series Σ_{n=0}^∞ n! x^n converges only for x = 0.

Suppose Σ_{n=0}^∞ a_n x^n = f(x) for |x| < R. Then f(x) has derivatives of all orders and the series can be differentiated term by term: f'(x) = Σ_{n=1}^∞ n a_n x^{n-1}, f''(x) = Σ_{n=2}^∞ n(n-1) a_n x^{n-2}, and so on, and each series converges for |x| < R. In fact, we get a_n = f^{(n)}(0)/n! for every n.

A function f(x) which can be expanded as a power series Σ_{n=0}^∞ a_n (x - x_0)^n, valid in some neighborhood of x_0, is said to be analytic at x_0. Polynomials, e^x, sin x, cos x are analytic at all points, but 1/(1+x) is not analytic at x = -1.

1.2. Power series solutions

It may be recalled that many differential equations cannot be solved by the few analytical methods developed so far, and those methods apply only to differential equations of particular types. By the following method, solutions can be obtained as power series; it is therefore known as the power series method.

Consider the equation y' = y.

We may assume that this equation has a power series solution of the form y = Σ_{n=0}^∞ a_n x^n that converges for |x| < R, for some R.

Then y' = a_1 + 2a_2 x + 3a_3 x^2 + …. Since y' = y, equating the coefficients of like powers of x gives a_1 = a_0, 2a_2 = a_1, 3a_3 = a_2, …, which reduces to a_1 = a_0, a_2 = a_1/2 = a_0/2!, a_3 = a_0/3!, …. Thus we obtain y = a_0(1 + x/1! + x^2/2! + …) = a_0 e^x, where a_0 is left undetermined and hence arbitrary.
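The coefficient recurrence just derived is easy to iterate numerically; the partial sums then reproduce a_0 e^x. A minimal sketch (the function names are illustrative, not from the text):

```python
from math import exp, factorial

def power_series_coeffs(n_terms, a0=1.0):
    """Coefficients a_n for y' = y: equating coefficients of x^n
    gives (n+1)a_{n+1} = a_n, i.e. a_{n+1} = a_n/(n+1)."""
    a = [a0]
    for n in range(n_terms - 1):
        a.append(a[-1] / (n + 1))
    return a

def eval_series(coeffs, x):
    """Evaluate the truncated power series sum of a_k x^k."""
    return sum(c * x**k for k, c in enumerate(coeffs))
```

With 20 terms, eval_series(power_series_coeffs(20), 1.0) agrees with e to machine precision, confirming that the recurrence reproduces a_n = a_0/n!.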

Now let us consider the general second order homogeneous equation y'' + P(x)y' + Q(x)y = 0 …(*).

If both P(x) and Q(x) are analytic at x = x_0, we say x_0 is an ordinary point of the equation.

We may assume a solution of equation (*) as a power series y = Σ_{n=0}^∞ a_n (x - x_0)^n, valid for |x - x_0| < R, for some R. The various coefficients can be found in terms of a_0 and a_1, which are left undetermined.

Consider y'' + y = 0. Here P(x) = 0 and Q(x) = 1, which are analytic at x = 0.

Assume y = Σ_{n=0}^∞ a_n x^n. Then the equation gives the recurrence relation (n+1)(n+2)a_{n+2} + a_n = 0 for n = 0, 1, 2, …. Substituting n = 0, 1, 2, … successively and reducing, we get a_{2n+1} = (-1)^n a_1/(2n+1)! and a_{2n} = (-1)^n a_0/(2n)!. Hence

y = a_0(1 - x^2/2! + x^4/4! - …) + a_1(x - x^3/3! + x^5/5! - …) = a_0 cos x + a_1 sin x.

Consider the Legendre equation (1 - x^2)y'' - 2xy' + p(p+1)y = 0, where p is a constant. Here P(x) = -2x/(1 - x^2) and Q(x) = p(p+1)/(1 - x^2), which are analytic at x = 0.

Let y = Σ_{n=0}^∞ a_n x^n. Then the equation gives the recurrence relation (n+1)(n+2)a_{n+2} - n(n-1)a_n - 2n a_n + p(p+1)a_n = 0. Putting n = 0, 1, 2, … gives

a_2 = -(p(p+1)/2!) a_0, a_3 = -((p-1)(p+2)/3!) a_1, a_4 = (p(p-2)(p+1)(p+3)/4!) a_0, a_5 = ((p-1)(p-3)(p+2)(p+4)/5!) a_1, …

Thus y = a_0[1 - (p(p+1)/2!) x^2 + (p(p-2)(p+1)(p+3)/4!) x^4 - …] + a_1[x - ((p-1)(p+2)/3!) x^3 + ((p-1)(p-3)(p+2)(p+4)/5!) x^5 - …].

The radius of convergence of each of the series in the brackets is R = 1. The series in the first bracket terminates for p = 0, 2, 4, 6, … and the series in the second bracket terminates for p = 1, 3, 5, …. The resulting polynomials are called Legendre polynomials, whose properties will be discussed later.

Ex. The equation y'' + (p + 1/2 - x^2/4)y = 0, where p is a constant, has a power series solution y = Σ_{n=0}^∞ a_n x^n at x = 0. Show that the coefficients are related by the three-term recurrence relation (n+1)(n+2)a_{n+2} + (p + 1/2)a_n - (1/4)a_{n-2} = 0. If the dependent variable y is replaced by w via y = w e^{-x^2/4}, show that the equation is transformed to w'' - xw' + pw = 0 and that its power series solution at x = 0 involves only a two-term recurrence relation.

1.3. Regular singular points

x = x_0 is a singular point of (*) if either P(x) or Q(x) is not analytic at x_0. In this case a power series solution may not exist in a neighborhood of x_0. But solutions near a singular point are important in physical contexts, and in most cases they exist. The origin is a singular point of y'' + (2/x)y' - (2/x^2)y = 0, and for x > 0, y = c_1 x + c_2 x^{-2} is its general solution.

A singular point x_0 of (*) is called regular singular if both (x - x_0)P(x) and (x - x_0)^2 Q(x) are analytic at x_0. For the Legendre equation (1 - x^2)y'' - 2xy' + p(p+1)y = 0, x = 1 and x = -1 are singular points, but they are regular singular.

For the Bessel equation of order p, x^2 y'' + xy' + (x^2 - p^2)y = 0, where p is a non-negative constant, x = 0 is a regular singular point.

If x = x_0 is a regular singular point of (*), then by definition (x - x_0)P(x) and (x - x_0)^2 Q(x) are analytic at x_0, and hence we may write (x - x_0)P(x) = Σ_{n=0}^∞ p_n (x - x_0)^n and (x - x_0)^2 Q(x) = Σ_{n=0}^∞ q_n (x - x_0)^n. A solution of equation (*) can then be expected as a Frobenius series y = (x - x_0)^m Σ_{n=0}^∞ a_n (x - x_0)^n, where m is a real number and a_0 is assumed non-zero.

On substituting y = (x - x_0)^m Σ_{n=0}^∞ a_n (x - x_0)^n in (*) and equating the coefficients, we get the recursion formula

a_n[(m+n)(m+n-1) + (m+n)p_0 + q_0] + Σ_{k=0}^{n-1} a_k[(m+k)p_{n-k} + q_{n-k}] = 0 …(**),

where p_0 = lim_{x→x_0} (x - x_0)P(x) and q_0 = lim_{x→x_0} (x - x_0)^2 Q(x).

For n = 0, (**) gives m(m-1) + mp_0 + q_0 = 0 …(***), called the indicial equation, which determines the values of m.

Substituting the values of m and taking n = 1, 2, 3, … in (**), the a_n's can be determined in terms of a_0 and the respective solutions can be obtained.

Eg. Consider the equation 2x^2 y'' + x(2x+1)y' - y = 0. x = 0 is a regular singular point of the equation. Let us assume that the solution at x = 0 is y = x^m Σ_{n=0}^∞ a_n x^n. We get the indicial equation m(m-1) + (1/2)m - 1/2 = 0 …(1), so m = 1, -1/2. For m = 1 and m = -1/2 respectively, determining the a_n's successively from the recurrence relation (**), we get the solutions y_1 = a_0(x - (2/5)x^2 + (4/35)x^3 - …) and y_2 = a_0 x^{-1/2}(1 - x + (1/2)x^2 - …), which are also independent, and thereby the general solution is y = c_1 y_1 + c_2 y_2, where c_1 and c_2 are arbitrary constants.
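For the m = 1 branch of this example the recursion (**) reduces to a_n = -2a_{n-1}/(2n+3), and a truncated Frobenius series should nearly annihilate the equation. A sketch under these assumptions (names are illustrative):

```python
def frobenius_coeffs(n_terms, a0=1.0):
    """Coefficients of the m = 1 Frobenius solution of
    2x^2 y'' + x(2x+1)y' - y = 0, from a_n = -2 a_{n-1}/(2n+3)."""
    a = [a0]
    for n in range(1, n_terms):
        a.append(-2 * a[-1] / (2 * n + 3))
    return a

def ode_residual(a, x):
    """Plug the truncated series y = sum of a_n x^(n+1) into the ODE."""
    y = sum(an * x**(n + 1) for n, an in enumerate(a))
    yp = sum(an * (n + 1) * x**n for n, an in enumerate(a))
    ypp = sum(an * (n + 1) * n * x**(n - 1) for n, an in enumerate(a))
    return 2 * x**2 * ypp + x * (2 * x + 1) * yp - y
```

The first coefficients match the series quoted above (a_1 = -2/5, a_2 = 4/35), and the residual at, say, x = 0.3 is negligibly small.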

Remark.

Let the roots of the indicial equation be real, say m_1 and m_2 with m_1 ≥ m_2.

Then equation (*) has a Frobenius series solution corresponding to m_1, the larger exponent. If m_2 = m_1, there is no scope to get a second independent solution by the same procedure, and it must be found by some alternative method. If m_1 - m_2 is not a positive integer, another independent solution corresponding to m_2 can be obtained; otherwise the method may fail to give a second independent solution.

Ex.1. Consider the equation x^2 y'' - 3xy' + (4x + 4)y = 0. Show that x = 0 is a regular singular point and find the only Frobenius series solution.

Ex.2. Find two independent solutions of xy'' + 2y' + xy = 0 at x = 0.

Ex.3. The Bessel equation of order p = 1/2, namely x^2 y'' + xy' + (x^2 - 1/4)y = 0, has x = 0 as a regular singular point. The exponents m_1 and m_2 are such that m_1 - m_2 = 1, but the method gives two independent solutions; determine them.

1.4. GAUSS'S HYPERGEOMETRIC EQUATION

The equation x(1-x)y'' + [c - (a+b+1)x]y' - aby = 0, where a, b, c are constants …(A), represents many classical equations and is known as Gauss's hypergeometric equation.

We have P(x) = [c - (a+b+1)x]/(x(1-x)) and Q(x) = -ab/(x(1-x)). The only singular points are x = 0 and x = 1, and they are regular singular points. We may proceed to investigate the solution at x = 0.

We get p_0 = c and q_0 = 0, so the indicial equation is m(m-1) + mc = 0, which gives m_1 = 0 and m_2 = 1 - c. If 1 - c is not a positive integer, i.e. if c is not zero or a negative integer, then (A) has a solution of the form y = x^0 Σ_{n=0}^∞ a_n x^n. Substituting in (A) and equating to zero the coefficients of x^n, we get the recursion formula a_{n+1} = ((a+n)(b+n)/((n+1)(c+n))) a_n. With a_0 = 1, we get in succession all the a_n's and the solution

y = 1 + Σ_{n=1}^∞ (a(a+1)…(a+n-1) b(b+1)…(b+n-1))/(n! c(c+1)…(c+n-1)) x^n,

called the hypergeometric function, denoted by F(a, b, c, x).

Since R = lim_{n→∞} |a_n/a_{n+1}| = lim_{n→∞} (n+1)(c+n)/((a+n)(b+n)) = 1, the series converges for |x| < 1. (Note that the series reduces to a polynomial when a or b equals zero or a negative integer.)

If 1 - c is not zero or a negative integer, a second independent solution can be obtained similarly, or by the substitution y = x^{1-c} z: (A) becomes

x(1-x)z'' + [(2-c) - ((a-c+1) + (b-c+1) + 1)x]z' - (a-c+1)(b-c+1)z = 0 …(B),

a hypergeometric equation with a, b, c replaced by a-c+1, b-c+1 and 2-c. Hence the solution of (B) at x = 0 is z = F(a-c+1, b-c+1, 2-c, x), i.e. y = x^{1-c} F(a-c+1, b-c+1, 2-c, x), when c is not a positive integer.

Thus if c is not an integer, the general solution of (A) at x = 0 is y = c_1 F(a, b, c, x) + c_2 x^{1-c} F(a-c+1, b-c+1, 2-c, x).

To find the solution at x = 1, we may take t = 1 - x, so that x = 1 corresponds to t = 0. (A) becomes t(1-t)y'' + [(a+b-c+1) - (a+b+1)t]y' - aby = 0. Hence the general solution at x = 1, when c - a - b is not an integer, is

y = c_1 F(a, b, a+b-c+1, 1-x) + c_2 (1-x)^{c-a-b} F(c-b, c-a, c-a-b+1, 1-x).

Remark

The solution of the general hypergeometric equation (x-A)(x-B)y'' + (C+Dx)y' - Hy = 0, where A ≠ B, is obtained through the map t = (x-A)/(B-A), which transforms the equation to one of the form t(1-t)y'' + (F+Gt)y' - Hy = 0 (for suitable constants F, G) and carries x = A and x = B to t = 0 and t = 1 respectively.

Ex.1. Show that (1+x)^p = F(-p, b, b, -x), log(1+x) = x F(1, 1, 2, -x), and sin^{-1} x = x F(1/2, 1/2, 3/2, x^2).

Ex.2. Show that e^x = lim_{b→∞} F(a, b, a, x/b) and cos x = lim_{a→∞} F(a, a, 1/2, -x^2/(4a^2)).

Ex.3. Consider the Chebyshev equation (1-x^2)y'' - xy' + p^2 y = 0, where p is a non-negative constant. Transform it into a hypergeometric equation by t = (1-x)/2 and show that its general solution near x = 1 is

y = c_1 F(p, -p, 1/2, (1-x)/2) + c_2 ((1-x)/2)^{1/2} F(p + 1/2, -p + 1/2, 3/2, (1-x)/2).

Ex.4. Show that F'(a, b, c, x) = (ab/c) F(a+1, b+1, c+1, x).

Ex.5. Show that the only solutions of the Chebyshev equation that are bounded near x = 1 are y = c_1 F(p, -p, 1/2, (1-x)/2).

1.5. The point at infinity

It is of practical importance to study the solutions of a given differential equation for large values of x. This can be achieved by the transformation x = 1/t, taking t small. Consider the Euler equation x^2 y'' + 4xy' + 2y = 0, which is transformed to t^2 y'' - 2ty' + 2y = 0 by the substitution x = 1/t. Since t = 0 is a regular singular point of the transformed equation, x = ∞ is a regular singular point of the original equation.

Consider the hypergeometric equation (A). By the map x = 1/t, it is transformed to t^2(1-t)y'' + t[(1-a-b) - (2-c)t]y' + aby = 0. t = 0 is a regular singular point with exponents m = a, b. Hence x = ∞ is also a regular singular point with exponents a, b.

Confluent hypergeometric equation

Consider the hypergeometric equation s(1-s) d^2y/ds^2 + [c - (a+b+1)s] dy/ds - aby = 0.

Changing s to x = bs, the equation becomes x(1 - x/b)y'' + [c - x - ((a+1)/b)x]y' - ay = 0, which has the regular singular points x = 0, b and ∞. If we let b → ∞, then b merges with ∞, and this confluence of two regular singular points produces an irregular singular point at ∞ for the limiting equation xy'' + (c - x)y' - ay = 0, called the confluent hypergeometric equation.


CHAPTER 2

SPECIAL FUNCTIONS – LEGENDRE POLYNOMIALS

2.1. Legendre Polynomials

For n a non-negative integer, consider the Legendre equation (1-x^2)y'' - 2xy' + n(n+1)y = 0 …(L). We now proceed to find the solutions of (L) bounded near x = 1, a regular singular point. Take t = (1-x)/2. Then x = 1 corresponds to t = 0 and the transformed equation is t(1-t) d^2y/dt^2 + (1-2t) dy/dt + n(n+1)y = 0, a hypergeometric equation (with a = -n, b = n+1, c = 1). t = 0 is regular singular with indicial equation m(m-1) + m = 0, giving the only exponent m = 0. The corresponding Frobenius series solution is y_1 = F(-n, n+1, 1, t).

Let a second independent solution be y_2 = v y_1, where

v' = (1/y_1^2) e^{-∫P(t)dt} = (1/y_1^2) e^{-∫(1-2t)/(t(1-t)) dt} = (1/y_1^2) · 1/(t(1-t)) = (1/t) · 1/(y_1^2 (1-t)) = 1/t + a_1 + a_2 t + …,

since y_1 is a polynomial with non-zero constant term. Thus v = log t + a_1 t + … and y_2 = y_1(log t + a_1 t + …). As t → 0, log t → -∞, so y_2 is unbounded at t = 0, i.e. at x = 1. Thus the only solutions of (L) bounded at x = 1 are constant multiples of

y_1 = F(-n, n+1, 1, (1-x)/2), a polynomial of degree n, called the nth Legendre polynomial and denoted by P_n(x). We may proceed to express the polynomial P_n(x) in the standard power form and obtain a generating formula, known as Rodrigues' formula.

The power series solution we obtained earlier at x = 0 reduces to a polynomial of degree n, since p = n, a non-negative integer, and is thereby a valid solution, bounded at x = 1 also. Thus, by the above observation about bounded solutions at x = 1, the earlier solution is a constant multiple of P_n(x).

On simplification,

P_n(x) = 1 + (n(n+1)/(1!)^2) (x-1)/2 + ((n-1)n(n+1)(n+2)/(2!)^2) ((x-1)/2)^2 + … + ((2n)!/((n!)^2 2^n)) (x-1)^n …(1)

But P_n(x) is a polynomial of degree n which contains only odd or only even powers of x according as n is odd or even. Hence

P_n(x) = a_n x^n + a_{n-2} x^{n-2} + … …(2)

It is noted from (1) that P_n(1) = 1 and, using (2), that P_n(-1) = (-1)^n. Further, from (1) we get a_n = (2n)!/(2^n (n!)^2). Since a polynomial solution is valid everywhere, the recursion formula used for the power series solution at x = 0 relates the coefficients of P_n(x) in the form (2). Thus a_{k-2} = -(k(k-1)/((n-k+2)(n+k-1))) a_k, and writing this in the reverse order with k = n, n-2, n-4, … yields

a_{n-2} = -(n(n-1)/(2(2n-1))) a_n, a_{n-4} = -((n-2)(n-3)/(4(2n-3))) a_{n-2} = (-1)^2 (n(n-1)(n-2)(n-3)/(2·4·(2n-1)(2n-3))) a_n, …

Thus P_n(x) = a_n[x^n - (n(n-1)/(2(2n-1))) x^{n-2} + (n(n-1)(n-2)(n-3)/(2·4·(2n-1)(2n-3))) x^{n-4} - …] …(3)

The coefficient of x^{n-2k} in (3) can be simplified to (-1)^k (2n-2k)!/(2^n k!(n-k)!(n-2k)!), and we obtain

P_n(x) = Σ_{k=0}^{[n/2]} ((-1)^k/(2^n k!(n-k)!)) d^n/dx^n (x^{2n-2k}) = (1/(2^n n!)) d^n/dx^n [Σ_{k=0}^{n} (-1)^k (n!/(k!(n-k)!)) x^{2(n-k)}] = (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n,

called Rodrigues' formula, which is used for computing the Legendre polynomials directly. We get P_0(x) = 1, P_1(x) = x, P_2(x) = (1/2)(3x^2 - 1), P_3(x) = (1/2)(5x^3 - 3x), …
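The closed-form coefficient of x^{n-2k} derived above gives a direct way to tabulate P_n(x). A minimal sketch (the function names are illustrative):

```python
from math import factorial

def legendre_coeffs(n):
    """Map power -> coefficient for
    P_n(x) = sum_k (-1)^k (2n-2k)! / (2^n k! (n-k)! (n-2k)!) x^(n-2k)."""
    return {n - 2 * k: (-1)**k * factorial(2 * n - 2 * k)
            / (2**n * factorial(k) * factorial(n - k) * factorial(n - 2 * k))
            for k in range(n // 2 + 1)}

def legendre_eval(n, x):
    """Evaluate P_n(x) from its coefficients."""
    return sum(c * x**p for p, c in legendre_coeffs(n).items())
```

This reproduces P_2(x) = (1/2)(3x^2 - 1) and P_3(x) = (1/2)(5x^3 - 3x), and gives P_n(1) = 1.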

Ex.1. Assuming that 1/√(1 - 2xt + t^2) = Σ_{n=0}^∞ P_n(x) t^n, show that P_n(1) = 1, P_n(-1) = (-1)^n, P_{2n+1}(0) = 0 and P_{2n}(0) = (-1)^n (1·3…(2n-1))/(2^n n!). By differentiating both sides with respect to t and equating the coefficients of t^n, obtain the recursion formula (n+1)P_{n+1}(x) = (2n+1)x P_n(x) - n P_{n-1}(x), and use it to find P_2(x) and P_3(x) from P_1(x) = x and P_0(x) = 1.

Orthogonality of Legendre Polynomials

∫_{-1}^{1} P_m(x) P_n(x) dx = 0 if m ≠ n, and = 2/(2n+1) if m = n,

i.e. {P_n(x), n = 0, 1, 2, …} is a family of orthogonal functions on [-1, 1].

Let f(x) be a function with at least n continuous derivatives on [-1, 1] and consider

I = ∫_{-1}^{1} f(x) P_n(x) dx = (1/(2^n n!)) ∫_{-1}^{1} f(x) (d^n/dx^n)(x^2 - 1)^n dx, by Rodrigues' formula.

Integrating by parts,

I = (1/(2^n n!)) [f(x) (d^{n-1}/dx^{n-1})(x^2 - 1)^n]_{-1}^{1} - (1/(2^n n!)) ∫_{-1}^{1} f'(x) (d^{n-1}/dx^{n-1})(x^2 - 1)^n dx = -(1/(2^n n!)) ∫_{-1}^{1} f'(x) (d^{n-1}/dx^{n-1})(x^2 - 1)^n dx,

since the expression in brackets vanishes at both limits.

Continuing to integrate by parts, we get I = ((-1)^n/(2^n n!)) ∫_{-1}^{1} f^{(n)}(x) (x^2 - 1)^n dx.

Take f(x) = P_m(x), where m < n. Then f^{(n)}(x) = 0, since P_m(x) is a polynomial of degree m. Thus I = 0, i.e. ∫_{-1}^{1} P_m(x) P_n(x) dx = 0 for m ≠ n.

Now let f(x) = P_n(x). Then f^{(n)}(x) = (2n)!/(2^n n!), and we get from the above

I = ((2n)!/(2^{2n} (n!)^2)) ∫_{-1}^{1} (1 - x^2)^n dx = 2 ((2n)!/(2^{2n} (n!)^2)) ∫_{0}^{1} (1 - x^2)^n dx = 2 ((2n)!/(2^{2n} (n!)^2)) ∫_{0}^{π/2} cos^{2n+1}θ dθ,

by the substitution x = sin θ. Using the Wallis formula ∫_{0}^{π/2} cos^{2n+1}θ dθ = (2·4…2n)/(3·5…(2n+1)), this simplifies to I = 2/(2n+1).
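The orthogonality relation is easy to confirm numerically, generating P_n by the recursion of Ex.1 above and integrating with the midpoint rule. A sketch (names are illustrative; the tolerance reflects the quadrature step):

```python
def P(n, x):
    """P_n(x) via the recursion (k+1)P_{k+1} = (2k+1)x P_k - k P_{k-1}."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

def inner(m, n, steps=20000):
    """Midpoint-rule estimate of the integral of P_m P_n over [-1, 1]."""
    h = 2.0 / steps
    return h * sum(P(m, -1 + (i + 0.5) * h) * P(n, -1 + (i + 0.5) * h)
                   for i in range(steps))
```

inner(m, n) comes out near 0 for m ≠ n and near 2/(2n+1) for m = n.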

Legendre series

  1  1 Let f(x) be an arbitrary function, then a P (x) , where a  n  f (x)P (x)dx  n n n   n n0  2 1 is called the Legendre series expansion of f(x). The expression of an’s are motivated by the orthogonality properties of Legendre polynomials. Notice that if P(x) is a polynomial of

k degree k, then P(x) = an Pn (x) . n0 Least square approximation Let f(x) be a function defined in [-1,1] and consider the problem of finding a polynomial P(x) of degree less than or equal to n, for a given n, such that the error estimate,

1 I = [ f (x)  P(x)]2 dx is least. We will show that the approximation is uniquely fixed as 1

Differential Equations 14 n  1  1 P(x) = a P (x) , where a  k  f (x)P (x)dx , and P (x) is the kth Legendre  k k k   k k k0  2 1 polynomial.

We have, writing P(x) = Σ_{k=0}^{n} b_k P_k(x),

I = ∫_{-1}^{1} [f(x) - Σ_{k=0}^{n} b_k P_k(x)]^2 dx = ∫_{-1}^{1} [f(x)]^2 dx + Σ_{k=0}^{n} (2/(2k+1)) b_k^2 - 2 Σ_{k=0}^{n} b_k ∫_{-1}^{1} f(x) P_k(x) dx

= ∫_{-1}^{1} [f(x)]^2 dx + Σ_{k=0}^{n} (2/(2k+1)) b_k^2 - 2 Σ_{k=0}^{n} (2/(2k+1)) a_k b_k

= ∫_{-1}^{1} [f(x)]^2 dx + Σ_{k=0}^{n} (2/(2k+1))(b_k - a_k)^2 - Σ_{k=0}^{n} (2/(2k+1)) a_k^2,

which is least when b_k = a_k for k = 0 to n. Hence the result.

Ex.1. If P(x) is a polynomial of degree n > 0 such that ∫_{-1}^{1} x^k P(x) dx = 0 for k = 0, 1, …, n-1, show that P(x) = c P_n(x) for some constant c.

Ex.2. Show that among all monic polynomials P(x) of degree n, (2^n (n!)^2/(2n)!) P_n(x) is the unique one for which ∫_{-1}^{1} [P(x)]^2 dx is least.

2.2. Bessel functions, The Gamma function

The differential equation x^2 y'' + xy' + (x^2 - p^2)y = 0, where p is a non-negative constant, is known as the Bessel differential equation. Note that x = 0 is a regular singular point of the equation, with indicial equation m^2 - p^2 = 0 and exponents m_1 = p and m_2 = -p. The equation has a solution of the form y = x^p Σ_{n=0}^∞ a_n x^n = Σ_{n=0}^∞ a_n x^{n+p}, where a_0 ≠ 0. The recurrence relation for the a_n's is n(2p+n)a_n + a_{n-2} = 0.

Since a_{-1} = 0, a_n = 0 for odd values of n. We get a_{2n} = (-1)^n a_0/(2^{2n} n!(p+1)…(p+n)). Hence we have

y = a_0 x^p Σ_{n=0}^∞ (-1)^n x^{2n}/(2^{2n} n!(p+1)…(p+n)).

Taking a_0 = 1/(2^p p!), we get the solution

J_p(x) = Σ_{n=0}^∞ (-1)^n x^{2n+p}/(2^{2n+p} n!(p+1)…(p+n) p!) = Σ_{n=0}^∞ (-1)^n (x/2)^{2n+p}/(n!(p+n)!),

called the Bessel function of the first kind of order p.

Remark: In the above discussion we have used the notation p!, though p is a real number, not necessarily a non-negative integer, for which alone factorials are defined. We extend the definition of the factorial with the help of the gamma function as follows.

For p > 0, we define Γ(p) = ∫_0^∞ t^{p-1} e^{-t} dt.

The famous recurrence relation for the gamma integral is obtained below. We have

Γ(p+1) = ∫_0^∞ t^p e^{-t} dt = lim_{b→∞} ∫_0^b t^p e^{-t} dt = lim_{b→∞} {[-t^p e^{-t}]_0^b + p ∫_0^b t^{p-1} e^{-t} dt} = p ∫_0^∞ t^{p-1} e^{-t} dt = p Γ(p),

since b^p/e^b → 0 as b → ∞.

Now Γ(1) = ∫_0^∞ e^{-t} dt = 1. Thus for any non-negative integer n, Γ(n+1) = n Γ(n) = n(n-1) Γ(n-1) = … = n(n-1)(n-2)…1 · Γ(1) = n!.

From the recurrence relation, presented as Γ(p) = Γ(p+1)/p …(I), we can define Γ(p) for -1 < p < 0, then for -2 < p < -1, and so on, so that Γ(p) is defined for all real p other than 0, -1, -2, …. Accordingly, for real p we interpret p! as Γ(p+1).

We have m_1 - m_2 = 2p. There exists a Frobenius series solution corresponding to m_2 = -p, even when p = 1/2, 3/2, …, as a multiple of

J_{-p}(x) = Σ_{n=0}^∞ (-1)^n (x/2)^{2n-p}/(n!(-p+n)!).

The first term of this series is (1/(-p)!)(x/2)^{-p}, which is unbounded as x → 0. Hence J_{-p}(x) is unbounded at x = 0, whereas J_p(x) is bounded at x = 0. Thus, for p not an integer, the general solution at x = 0 is y = c_1 J_p(x) + c_2 J_{-p}(x).

For p = m, a non-negative integer, J_{-m}(x) = Σ_{n=0}^∞ (-1)^n (x/2)^{2n-m}/(n!(n-m)!) = Σ_{n=m}^∞ (-1)^n (x/2)^{2n-m}/(n!(n-m)!), since 1/(n-m)! = 0 for n = 0, 1, …, m-1.

Thus J_{-m}(x) = Σ_{n=0}^∞ (-1)^{n+m} (x/2)^{2n+m}/((n+m)! n!) = (-1)^m Σ_{n=0}^∞ (-1)^n (x/2)^{2n+m}/(n!(m+n)!) = (-1)^m J_m(x).

Hence J_m(x) and J_{-m}(x) are not independent when m = 0, 1, 2, ….

Remark: The general solution is y = c_1 J_p(x) + c_2 Y_p(x), where Y_p(x) = (J_p(x) cos pπ - J_{-p}(x))/sin pπ for p not an integer, and for m = 0, 1, 2, …, Y_m(x) = lim_{p→m} Y_p(x).

Ex.1. Show that Γ(1/2) = √π.

We have Γ(1/2) = ∫_0^∞ t^{-1/2} e^{-t} dt = 2 ∫_0^∞ e^{-s^2} ds, by the substitution t = s^2.

Γ(1/2)^2 = 4 (∫_0^∞ e^{-x^2} dx)(∫_0^∞ e^{-y^2} dy) = 4 ∫_0^∞ ∫_0^∞ e^{-(x^2+y^2)} dx dy = 4 ∫_0^{π/2} ∫_0^∞ e^{-r^2} r dr dθ = π,

changing to polar coordinates. Hence Γ(1/2) = √π.

Ex.2. When p = 1/2, show that the general solution can be taken in the equivalent forms y = c_1 J_{1/2}(x) + c_2 J_{-1/2}(x) and y = (1/√x)(d_1 cos x + d_2 sin x). Hence √x J_{1/2}(x) = a cos x + b sin x and √x J_{-1/2}(x) = c cos x + d sin x. Evaluate a, b, c, d and show that J_{1/2}(x) = √(2/(πx)) sin x and J_{-1/2}(x) = √(2/(πx)) cos x.
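The closed forms in Ex.2 can be verified against the defining series, with Γ(p+n+1) in place of (p+n)!. A sketch (the truncation length is an assumption):

```python
from math import gamma, pi, sin, cos, sqrt

def bessel_J(p, x, n_terms=40):
    """Partial sum of J_p(x) = sum_n (-1)^n (x/2)^(2n+p) / (n! Γ(p+n+1))."""
    return sum((-1)**n * (x / 2)**(2 * n + p) / (gamma(n + 1) * gamma(p + n + 1))
               for n in range(n_terms))
```

At x = 1.3, for instance, bessel_J(0.5, 1.3) matches √(2/(π·1.3)) sin(1.3) to machine precision, and similarly J_{-1/2} matches the cosine form.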

Properties of Bessel functions

We have J_p(x) = Σ_{n=0}^∞ (-1)^n (x/2)^{2n+p}/(n!(p+n)!).

Now d/dx [x^p J_p(x)] = d/dx Σ_{n=0}^∞ (-1)^n x^{2n+2p}/(2^{2n+p} n!(p+n)!) = Σ_{n=0}^∞ (-1)^n x^{2n+2p-1}/(2^{2n+p-1} n!(p+n-1)!) = x^p Σ_{n=0}^∞ (-1)^n (x/2)^{2n+p-1}/(n!(n+p-1)!) = x^p J_{p-1}(x),

i.e. d/dx [x^p J_p(x)] = x^p J_{p-1}(x) …(1). Similarly it can be shown that d/dx [x^{-p} J_p(x)] = -x^{-p} J_{p+1}(x) …(2).

Expanding, x^p J_p'(x) + p x^{p-1} J_p(x) = x^p J_{p-1}(x) and x^{-p} J_p'(x) - p x^{-p-1} J_p(x) = -x^{-p} J_{p+1}(x). Dividing the first by x^p and the second by x^{-p}: J_p'(x) + (p/x) J_p(x) = J_{p-1}(x) …(3) and J_p'(x) - (p/x) J_p(x) = -J_{p+1}(x) …(4).

(3)+(4) gives 2 J_p'(x) = J_{p-1}(x) - J_{p+1}(x) …(5), and (3)-(4) gives (2p/x) J_p(x) = J_{p-1}(x) + J_{p+1}(x) …(6).

The recurrence relation for Bessel functions is (2p/x) J_p(x) = J_{p-1}(x) + J_{p+1}(x).

Orthogonality properties

If the λ_n's are the positive zeros of J_p(x), then

∫_0^1 x J_p(λ_m x) J_p(λ_n x) dx = 0 if m ≠ n, and = (1/2) J_{p+1}(λ_n)^2 if m = n.

Let y = J_p(x). Then y'' + (1/x)y' + (1 - p^2/x^2)y = 0. If a and b are distinct positive constants, then u(x) = J_p(ax) and v(x) = J_p(bx) satisfy the equations u'' + (1/x)u' + (a^2 - p^2/x^2)u = 0 …(1) and v'' + (1/x)v' + (b^2 - p^2/x^2)v = 0 …(2).

(1)v - (2)u gives d/dx (u'v - v'u) + (1/x)(u'v - v'u) = (b^2 - a^2)uv …(3).

(3)×x gives d/dx [x(u'v - v'u)] = (b^2 - a^2) x u v …(4).

Integrating from 0 to 1, we get (b^2 - a^2) ∫_0^1 xuv dx = [x(u'v - v'u)]_0^1 = 0 if a and b are distinct zeros of J_p(x).

Let a = λ_m and b = λ_n. Then we have obtained ∫_0^1 x J_p(λ_m x) J_p(λ_n x) dx = 0 if m ≠ n.

Multiplying (1) by 2x^2 u', we get 2x^2 u'u'' + 2x u'^2 + 2a^2 x^2 u u' - 2p^2 u u' = 0,

or d/dx [x^2 u'^2 + (a^2 x^2 - p^2) u^2] = 2a^2 x u^2 …(5).

Thus [x^2 u'^2 + (a^2 x^2 - p^2) u^2]_0^1 = 2a^2 ∫_0^1 x u^2 dx …(6).

But u(x) = J_p(ax), and hence u'(1) = a J_p'(a). Thus we get from (6), with a replaced by λ_n, that ∫_0^1 x J_p(λ_n x)^2 dx = (1/2) J_p'(λ_n)^2 = (1/2) J_{p+1}(λ_n)^2.

Bessel series

Let f(x) be a function defined on [0, 1] and let the λ_n's be the positive zeros of some fixed Bessel function J_p(x), p ≥ 0. Then Σ_{n=1}^∞ a_n J_p(λ_n x), where a_n = (2/J_{p+1}(λ_n)^2) ∫_0^1 x f(x) J_p(λ_n x) dx …(B), is called the Bessel series expansion of f(x). The following theorem gives sufficient conditions for the expansion of a function as a Bessel series.

Bessel Expansion Theorem: Assume that f(x) and f'(x) have at most a finite number of jump discontinuities in [0, 1]. If 0 < x < 1, then the Bessel series (B) converges to f(x) when x is a point of continuity, and converges to (1/2)[f(x-) + f(x+)] when x is a point of discontinuity.

Ex.1. Prove that the positive zeros of J_p(x) and J_{p+1}(x) occur alternately.

Ex.2. If f(x) = 1 for 0 ≤ x < 1/2, f(1/2) = 1/2, and f(x) = 0 for 1/2 < x ≤ 1, show that f(x) = Σ_{n=1}^∞ (J_1(λ_n/2)/(λ_n J_1(λ_n)^2)) J_0(λ_n x), where the λ_n's are the positive zeros of J_0(x).

Ex.3. If f(x) = x^p on [0, 1), show that its Bessel series for a given p is x^p = Σ_{n=1}^∞ (2/(λ_n J_{p+1}(λ_n))) J_p(λ_n x). If g(x) is a well-behaved function on [0, 1], then show that ∫_0^1 x^{p+1} g(x) dx = Σ_{n=1}^∞ (2/(λ_n J_{p+1}(λ_n))) ∫_0^1 x g(x) J_p(λ_n x) dx. By taking g(x) = x^p and g(x) = x^{p+1}, deduce that Σ_{n=1}^∞ 1/λ_n^2 = 1/(4(p+1)) and Σ_{n=1}^∞ 1/λ_n^4 = 1/(16(p+1)^2(p+2)). Taking p = 1/2, derive that Σ 1/n^2 = π^2/6 and Σ 1/n^4 = π^4/90.
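For p = 1/2 the last deduction is easy to check numerically: by Ex.2 of Section 2.2, J_{1/2}(x) is a multiple of sin x/√x, so its positive zeros are λ_n = nπ, and the two zero-sums become partial sums of Σ1/n^2 and Σ1/n^4. A sketch:

```python
from math import pi

def zero_sum(power, n_terms=100000):
    """Partial sum of sum_n 1/lambda_n^power with lambda_n = n*pi,
    the positive zeros of J_{1/2}."""
    return sum(1.0 / (n * pi)**power for n in range(1, n_terms + 1))
```

zero_sum(2) approaches 1/(4(p+1)) = 1/6 and zero_sum(4) approaches 1/(16(p+1)^2(p+2)) = 1/90, consistent with Σ1/n^2 = π^2/6 and Σ1/n^4 = π^4/90.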


CHAPTER 3

SYSTEMS OF FIRST ORDER EQUATIONS

3.1. LINEAR SYSTEMS

Let x, y be variables depending on the independent variable t. Consider the following system of first order differential equations:

dx/dt = F(t, x, y), dy/dt = G(t, x, y) …(1).

The above system is called linear if the dependent variables x and y appear only in the first degree. Thus the corresponding linear system can be presented as

dx/dt = a_1(t)x + b_1(t)y + f_1(t), dy/dt = a_2(t)x + b_2(t)y + f_2(t) …(2).

If f_1(t) and f_2(t) are identically zero, the system is called homogeneous. Thus the associated homogeneous linear system is

dx/dt = a_1(t)x + b_1(t)y, dy/dt = a_2(t)x + b_2(t)y …(3).

We assume that a_i(t), b_i(t), f_i(t), i = 1, 2, are continuous on some interval [a, b]. A solution of (2) is a pair of functions x = x(t), y = y(t).

We require the support of the following theorems in our discussion.

Theorem 1. If t0 is any point in [ a , b ], and if x0 and y0 are given numbers, then (2) has a unique solution x = x(t), y = y(t), valid in [ a , b ], such that x( t0 ) = x0 and y (t0 ) = y0.

( Proof is given later )

Theorem 2. If the linear homogeneous system (3) has two solutions x = x_1(t), y = y_1(t) and x = x_2(t), y = y_2(t), valid in [a, b], then x = c_1 x_1(t) + c_2 x_2(t), y = c_1 y_1(t) + c_2 y_2(t) is also a solution, for any two constants c_1, c_2.

Let W(t) = x_1(t) y_2(t) - x_2(t) y_1(t), the determinant of the matrix with columns (x_1(t), y_1(t)) and (x_2(t), y_2(t)). Then W(t) is called the Wronskian of the solutions (x_1(t), y_1(t)) and (x_2(t), y_2(t)).

Theorem 3. If the two solutions (x_1(t), y_1(t)) and (x_2(t), y_2(t)) of the homogeneous system (3) have a Wronskian that does not vanish on [a, b], then x = c_1 x_1(t) + c_2 x_2(t), y = c_1 y_1(t) + c_2 y_2(t), where c_1 and c_2 are arbitrary constants, is the general solution of (3) in [a, b].

Theorem 4. The Wronskian of two solutions of the homogeneous system is either identically zero or nowhere zero in [a, b].

Proof: We have dW/dt = [a_1(t) + b_2(t)]W, which gives W(t) = c e^{∫[a_1(t)+b_2(t)]dt} for some constant c. Then W(t) ≡ 0 if c = 0, and W(t) ≠ 0 for every t if c ≠ 0.

Remark: The two solutions x = x_1(t), y = y_1(t) and x = x_2(t), y = y_2(t), valid in [a, b], of the homogeneous system are said to be linearly independent if neither is a constant multiple of the other, which is equivalent to the condition that the Wronskian of the solutions is non-zero. The following theorem is a consequence of the above definition and Theorem 4.

Theorem 5. If the two solutions x = x1(t), y = y1(t), and x = x2(t), y = y2(t), are linearly independent, then x = c1 x1(t) + c2 x2(t), y = c1 y1(t) + c2 y2(t) , where c1 & c2 are arbitrary constants, is the general solution of (3 ) in [ a, b ].

Theorem 6. If the two solutions (x_1(t), y_1(t)) and (x_2(t), y_2(t)) of the homogeneous system are linearly independent and x = x_p(t), y = y_p(t) is any particular solution of the corresponding non-homogeneous system (2), then x = c_1 x_1(t) + c_2 x_2(t) + x_p(t), y = c_1 y_1(t) + c_2 y_2(t) + y_p(t), where c_1 and c_2 are arbitrary constants, is the general solution of (2) in [a, b].

Proof: Let (x(t), y(t)) be a solution of (2). Then it can easily be shown that (x(t) - x_p(t), y(t) - y_p(t)) is a solution of (3), and the result follows by virtue of Theorem 5.

3.2. Homogeneous Linear Systems with constant coefficients

Consider the system dx/dt = a_1 x + b_1 y, dy/dt = a_2 x + b_2 y …(4), where a_1, a_2, b_1, b_2 are constants. We may assume that a solution of the system can be taken as x = Ae^{mt}, y = Be^{mt} …(5).

If we substitute (5) in equation (4), we get Ame^{mt} = a_1 Ae^{mt} + b_1 Be^{mt} and Bme^{mt} = a_2 Ae^{mt} + b_2 Be^{mt}. Cancelling e^{mt} throughout gives the homogeneous linear algebraic system (a_1 - m)A + b_1 B = 0, a_2 A + (b_2 - m)B = 0 …(6).

It is clear that the trivial solution A = 0, B = 0 of (6) yields the trivial solution x = 0, y = 0 of (4). The system (6) has a non-trivial solution iff the determinant (a_1 - m)(b_2 - m) - a_2 b_1 = 0. On expansion of the determinant, we get the quadratic equation m^2 - (a_1 + b_2)m + (a_1 b_2 - a_2 b_1) = 0 …(7), with roots, say, m = m_1, m_2.

m1t x1  A1e For m = m1, the system (6) gives a non trivial solution, say, A1, B1. Then  m1t y1  B1e

m2t x2  A2e We get the solution corresponding to m = m2, in a similar fashion as  . m2t y2  B2e

The nature of the roots m1 & m2 are important whenever we try to write the general solution. Case 1: Distinct Real roots.

If m1 and m2 are real and distinct, then x = c1 x1 + c2 x2, y = c1 y1 + c2 y2 is the general solution. Case 2. Complex roots

Let m = a ib be the roots of (7). For m = a + ib, solve (6), to get A = A1 +iA2 , B = B1+iB2 Since we require real solutions alone, the general solution is a linear combination of

at at (x1  e (A1 cosbt  A2 sin bt), y1  e (B1 cosbt  B2 sin bt)) and

at at (x2  e (A1 sin bt  A2 cosbt), y2  e (B1 sin bt  B2 cosbt)) . These are obtained by x  Aemt separating into real and imaginary parts, the solution, , obtained for m = a +ib .  mt y  Be Case 3: Two equal real roots

Case 3: Two equal real roots. We get one solution as x = Ae^{mt}, y = Be^{mt}. A second solution may be obtained in the form x = (A_1 + A_2 t)e^{mt}, y = (B_1 + B_2 t)e^{mt}, and their linear combination gives the general solution.

Eg.1. Consider the system dx/dt = x + y, dy/dt = 4x - 2y. Let x = Ae^{mt}, y = Be^{mt}. Then after cancellation of e^{mt} we get the linear algebraic system (1 - m)A + B = 0, 4A + (-2 - m)B = 0. For a non-trivial solution of the algebraic system we have m^2 + m - 6 = 0, i.e. m = -3 or 2. With m = -3, the algebraic system becomes 4A + B = 0. A non-trivial solution is chosen as A = 1, B = -4. Thus we have the solution x = e^{-3t}, y = -4e^{-3t}. With m = 2, we get -A + B = 0. A non-trivial solution is taken as A = 1, B = 1. This gives the solution x = e^{2t}, y = e^{2t}. It may be noted that the solutions obtained are independent. Hence the general solution is x = c_1 e^{-3t} + c_2 e^{2t}, y = -4c_1 e^{-3t} + c_2 e^{2t}.

Eg.2. Consider the system dx/dt = 3x - 4y, dy/dt = x - y. Let x = Ae^{mt}, y = Be^{mt}. Then after cancellation of e^{mt} we get the linear algebraic system (3 - m)A - 4B = 0, A + (-1 - m)B = 0 …(1). For a non-zero solution, (3 - m)(-1 - m) + 4 = 0, i.e. m^2 - 2m + 1 = 0, or m = 1, 1. With m = 1, (1) gives A - 2B = 0. Choose A = 2, B = 1. The corresponding solution is x = 2e^t, y = e^t.

A second solution, linearly independent of the above, is assumed to be x = (A_1 + A_2 t)e^t, y = (B_1 + B_2 t)e^t. Then we obtain (A_1 + A_2 t + A_2) = 3(A_1 + A_2 t) - 4(B_1 + B_2 t) and

( B 1 + B 2 t + B2 ) = ( A1 + A2 t ) – ( B1 + B2 t ). Since these are identities in t, we get,

2 A2 – 4 B2 = 0, A2 – 2 B2 = 0, 2 A1 – A2 -4 B1 =0, A1 – 2 B1 – B2 = 0. A non zero solution is t t taken as, A2 = 2, B2 = 1, A1 = 1, B1 = 0. Now we get another solution, x = ( 1 + 2t ) e , y = e . t t The two solutions obtained are linearly independent. Hence, we get, x = 2 c1 e + c2 ( 1 + 2t ) e , t t y = c1 e + c2 t e as the general solution. dx   4x  2y Eg.3. Consider the system, dt . Let x = A e mt , y = B e mt. Then after dy   5x  2y  dt cancellation of e mt, we get, the linear algebraic system , ( m - 4 ) A + 2 B = 0, 5A + ( 2 – m ) B = 0 ( 1 ) For non trivial solution of (1), we have, m 2 – 6 m + 18 = 0 or m = 3 3i . Since the values of m are complex, we are expecting complex values for A & B also.

Let A = A1 + iA2, B = B1 + iB2 and substitute m = 3 + 3i in (1). We obtain

(−1 + 3i)(A1 + iA2) + 2(B1 + iB2) = 0, 5(A1 + iA2) + (−1 − 3i)(B1 + iB2) = 0.

Equating the real and imaginary parts: −A1 − 3A2 + 2B1 = 0, 3A1 − A2 + 2B2 = 0, 5A1 − B1 + 3B2 = 0, 5A2 − 3B1 − B2 = 0. Consider the coefficient matrix and reduce it to row echelon form. A solution of the homogeneous algebraic system is A1 = 2, A2 = 0, B1 = 1, B2 = −3. The general solution is x = e^{3t}(2c cos 3t + 2d sin 3t), y = e^{3t}[c(cos 3t + 3 sin 3t) + d(sin 3t − 3 cos 3t)].
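The complex-root computation above can be double-checked numerically. The sketch below (plain Python; c and d are arbitrary constants) confirms that the auxiliary equation m^2 − 6m + 18 = 0 has complex roots and that the stated real general solution satisfies the system, using a central difference for d/dt.

```python
import math

# Auxiliary equation m^2 - 6m + 18 = 0: discriminant 36 - 72 < 0, roots 3 +/- 3i
p, q = 6.0, 18.0
disc = p * p - 4 * q
assert disc < 0
re_m, im_m = p / 2, math.sqrt(-disc) / 2      # real and imaginary parts: 3 and 3

# Claimed real general solution of x' = 4x - 2y, y' = 5x + 2y
def x(t, c, d):
    return math.exp(3*t) * (2*c*math.cos(3*t) + 2*d*math.sin(3*t))

def y(t, c, d):
    return math.exp(3*t) * (c*(math.cos(3*t) + 3*math.sin(3*t))
                            + d*(math.sin(3*t) - 3*math.cos(3*t)))

def deriv(f, t, h=1e-6):                      # central difference for d/dt
    return (f(t + h) - f(t - h)) / (2 * h)

for (c, d, t) in [(1.0, 0.0, 0.3), (0.0, 1.0, 0.3), (0.5, -2.0, 1.0)]:
    assert abs(deriv(lambda s: x(s, c, d), t) - (4*x(t, c, d) - 2*y(t, c, d))) < 1e-4
    assert abs(deriv(lambda s: y(s, c, d), t) - (5*x(t, c, d) + 2*y(t, c, d))) < 1e-4
```

Both components of the system are satisfied for independent choices of c and d, so the two real solutions are indeed a basis.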

3.3 Non linear system – Volterra's prey–predator equations.

Consider an island inhabited by foxes and rabbits. The foxes hunt the rabbits, and the rabbits feed on carrots. We assume that there is an abundant supply of carrots. As the rabbits become large in number, the foxes flourish, since they hunt the rabbits, and their population grows. When the foxes become numerous and eat too many rabbits, the rabbit population declines. As a result the foxes enter a period of famine and their population declines. As the foxes decrease in number, the rabbits become safer, resulting in a population surge. As time goes on we observe an unending, almost cyclic, repetition of population growth and decline of either species.

We now make a mathematical formulation of the above problem. Let x be the rabbit population and y the corresponding population of foxes at a given instant. Since there is an unlimited supply of carrots, the rabbit population grows as in a first order reaction, at a rate proportional to the current population. Thus in the absence of foxes, dx/dt = ax, a > 0. It is natural to assume that the number of encounters between foxes and rabbits is jointly proportional to their populations. As these encounters enrich the fox population but result in the decline of the rabbit population, we may correct the above equation to dx/dt = ax − bxy, where a, b > 0. In a similar manner we obtain dy/dt = −cy + dxy, where c, d > 0. Thus we have the following non linear system describing the populations:

dx/dt = ax − bxy, dy/dt = −cy + dxy.

The above equations are called Volterra's prey–predator equations.

Eliminating t, we get (a − by) dy / y = (−c + dx) dx / x. The solution is y^a e^{−by} = K x^{−c} e^{dx}, where K = x0^c y0^a e^{−dx0 − by0}, for some initial point (x0, y0). Drawing the (x, y) graph directly is really tough, and Volterra introduced an efficient approach in this regard, as discussed below.

We note that x and y, being populations, are non negative. The plane is divided into 4 quadrants and the bordering rays are used to represent the positive x, y, z, w directions. We may take z = y^a e^{−by} and w = K x^{−c} e^{dx}. Giving suitable values to x and y independently, plot the (y, z) and (x, w) graphs in the respective quadrants, and then obtain the (x, y) graph from the (z, w) graph, which is in fact the straight line z = w.

Note that dx/dt = 0 = dy/dt gives x = c/d and y = a/b, called the equilibrium populations. Let x = X + c/d and y = Y + a/b. Then the system becomes

dX/dt = −(bc/d)Y − bXY, dY/dt = (ad/b)X + dXY.

Consider the linearised system dX/dt = −(bc/d)Y, dY/dt = (ad/b)X. The solution of the linear system is a d^2 X^2 + b^2 c Y^2 = L^2, a family of ellipses concentric with the origin. The (x, y) graph turns out to be an oval about the equilibrium point (c/d, a/b).
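The key to Volterra's construction is that y^a e^{−by} x^c e^{−dx} (equivalently its logarithm) is constant along every trajectory. A short fourth-order Runge–Kutta simulation with illustrative parameter values confirms this invariant numerically:

```python
import math

a, b, c, d = 1.0, 0.5, 0.75, 0.25          # illustrative positive constants

def field(x, y):                            # prey-predator equations
    return (a*x - b*x*y, -c*y + d*x*y)

def rk4_step(x, y, h):                      # one classical Runge-Kutta step
    k1 = field(x, y)
    k2 = field(x + h/2*k1[0], y + h/2*k1[1])
    k3 = field(x + h/2*k2[0], y + h/2*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def invariant(x, y):                        # log of y^a e^{-by} x^c e^{-dx}
    return a*math.log(y) - b*y + c*math.log(x) - d*x

x, y = 4.0, 1.0                             # initial populations
K0 = invariant(x, y)
for _ in range(20000):                      # integrate to t = 20
    x, y = rk4_step(x, y, 0.001)
assert abs(invariant(x, y) - K0) < 1e-7     # conserved along the oval
```

With these constants the equilibrium is (c/d, a/b) = (3, 2), and the trajectory starting at (4, 1) traces the closed oval on which the invariant stays fixed.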


CHAPTER 4

NON LINEAR EQUATIONS

4.1. Autonomous systems

Consider the system dx/dt = F(x, y), dy/dt = G(x, y) --(1). Since F and G are independent of t, the system is called autonomous. A solution of the system is a pair of functions (x(t), y(t)) describing a curve in the x-y plane, called the phase plane. If t0 is any number and (x0, y0) is a given point in the phase plane, there exists a unique curve (x(t), y(t)) passing through (x0, y0) and satisfying the system. Such a curve is called a path in the phase plane, and the plane with all these paths is called the phase portrait of the system.

For a given path, we may use forward arrows to indicate the direction in which the path advances as t → ∞. A point (x0, y0) at which both F and G vanish is called a critical point of the system. Since dx/dt = 0 and dy/dt = 0 at a critical point (x0, y0), no path passes through a critical point, and two different paths cannot intersect, since there is a unique path through a given point.

Given an autonomous system, apart from its solution we are interested in the location of the various critical points, the arrangement of paths near critical points, the stability of the critical points and the phase portrait.

Stability: Let (x0, y0) be an isolated critical point and C = { (x(t), y(t)) | −∞ < t < ∞ } be a path of (1). We say C approaches (x0, y0) as t → ∞ if lim_{t→∞} (x(t), y(t)) = (x0, y0), and

C enters (x0, y0) as t → ∞ if lim_{t→∞} [y(t) − y0]/[x(t) − x0] exists or is +∞ or −∞.

If a path C enters a critical point, then it approaches it in a definite direction.

Eg.1. Consider the system dx/dt = x, dy/dt = −x + 2y --(1). The origin is the only critical point, and the general solution is obtained as x = c1 e^t, y = c1 e^t + c2 e^{2t} --(2).

When c1 = 0, we get x = 0, y = c2 e^{2t}. In this case the path is the positive or negative y-axis according as c2 > or < 0, and as t → −∞ the path approaches and enters the critical point.

When c2 = 0, we get x = c1 e^t and y = c1 e^t. For c1 < 0 the path is the ray y = x, x < 0, and for c1 > 0 the path is the ray y = x, x > 0, and both paths enter the critical point as t → −∞.

When c1, c2 ≠ 0, the paths are ½-parabolas y = x + (c2/c1^2) x^2. Each of these paths enters (0, 0) as t → −∞.

Eg.2. Consider the system dx/dt = −3x + 4y, dy/dt = −2x + 3y. The only critical point is (0, 0).

We obtain the general solution as x = 2 c1 e^{−t} + c2 e^t, y = c1 e^{−t} + c2 e^t.

When c1 = 0, we get x = c2 e^t, y = c2 e^t. In this case the path is a ½-line on y = x, and as t → −∞ the path approaches and enters the critical point. When c2 = 0, we get x = 2 c1 e^{−t} and y = c1 e^{−t}. The paths are the two ½-lines on y = ½ x, and both enter the critical point as t → ∞. When c1, c2 ≠ 0, the paths are distinct branches of the hyperbolas (x − y)(2y − x) = C, with asymptotes y = x and y = ½ x, and none of these paths approaches the critical point (0, 0) as t → ∞ or as t → −∞.

Eg.3. Consider the system dx/dt = −y, dy/dt = x. The only critical point is (0, 0).

We obtain the general solution as x = −c1 sin t + c2 cos t, y = c1 cos t + c2 sin t, which are circles with common centre (0, 0). All paths are closed, each of them encloses the critical point, and none of them approaches the critical point.

Eg.4. Consider the system dx/dt = x − y, dy/dt = x + y. The only critical point is (0, 0).

By changing to polar coordinates, we get dr/dθ = r, which gives the general solution as the family of spirals r = c e^θ. We have dθ/dt = 1, so that as t → ∞ the spiral unwinds in the anticlockwise fashion to infinity.

Stability and asymptotic stability.

Consider the autonomous system dx/dt = F(x, y), dy/dt = G(x, y) --(1). For convenience, assume that (0, 0) is an isolated critical point of the system. This critical point is said to be stable if for each given R > 0 there exists r ≤ R such that every path which is inside the circle centred at (0, 0) with radius r for some t = t0 remains inside the circle centred at (0, 0) with radius R for all t > t0. This is equivalent to saying that paths which get sufficiently close to the critical point stay close to it in their due course, i.e. as t → ∞. The critical point is said to be asymptotically stable if there exists a circle with centre (0, 0) and radius r0 such that every path which is inside this circle for some t = t0 approaches the centre (0, 0) as t → ∞.

4.2. Types of Critical points and stability of linear systems.

Consider the homogeneous linear system with constant coefficients,

dx/dt = a1 x + b1 y, dy/dt = a2 x + b2 y ---(1),

which evidently has the origin as its only critical point, on the assumption that a1 b2 − a2 b1 ≠ 0. A non-trivial solution x = A e^{mt}, y = B e^{mt} ---(2) exists whenever m^2 − (a1 + b2) m + (a1 b2 − a2 b1) = 0 ---(3), called the auxiliary equation of the system. Let m1 and m2 be the roots of (3). We may distinguish the following 5 cases.

Major cases:

1. The roots m1 and m2 are real, distinct, and of the same signs.

2. The roots m1 and m2 are real, distinct, and of opposite signs.

3. The roots m1 and m2 are complex conjugates, but not pure imaginary. Border line cases :

4. The roots m1 and m2 are real, and equal.

5. The roots m1 and m2 are pure imaginary. Case 1. : The critical point is called a node.

The general solution is x = c1 A1 e^{m1 t} + c2 A2 e^{m2 t}, y = c1 B1 e^{m1 t} + c2 B2 e^{m2 t} ---(4).

(a) The roots m1 and m2 are both negative.

Further assume, for definiteness, that m1 < m2 < 0.

When c1 = 0, we get x = c2 A2 e^{m2 t}, y = c2 B2 e^{m2 t} --(5). If c2 > 0, then we get ½ of the line y/x = B2/A2, which enters the critical point as t → ∞, and for c2 < 0 we get the other ½ of the same line, which also enters (0, 0) as t → ∞.

When c2 = 0, we get x = c1 A1 e^{m1 t} and y = c1 B1 e^{m1 t} --(6). For c1 < 0 the path is ½ of the line y/x = B1/A1, which enters the critical point as t → ∞, and for c1 > 0 we get the other ½ of the same line, which also enters (0, 0) as t → ∞.

When c1, c2 ≠ 0, the paths are curves. Since m1 and m2 are both negative, these paths also approach (0, 0) as t → ∞. Considering the expression for y/x from (4), and since m1 − m2 < 0, each of these paths enters (0, 0) as t → ∞ (note that y/x → B2/A2 as t → ∞). The critical point is referred to as a NODE, and in this case it is asymptotically stable.

If m2 < m1 < 0, then the above conclusion holds good, with the change that each curvilinear path enters (0, 0) along the direction B1/A1.

(b) The roots m1 and m2 are both positive. The situation is exactly the same, but all the paths approach and enter (0, 0) as t → −∞.

2. Assume m1 < 0 < m2. The two ½-line paths represented by (5) enter (0, 0) as t → −∞, and the two ½-line paths represented by (6) enter (0, 0) as t → ∞. But none of the curvilinear paths represented by (4), corresponding to c1, c2 ≠ 0, approaches (0, 0) as t → ∞ or as t → −∞; each of them is asymptotic to one of the ½-line paths. The critical point is called a SADDLE POINT, which is always unstable.

3. Let m1 = a + ib, m2 = a − ib, where a ≠ 0. The general solution is

x = e^{at}[c1(A1 cos bt − A2 sin bt) + c2(A1 sin bt + A2 cos bt)],
y = e^{at}[c1(B1 cos bt − B2 sin bt) + c2(B1 sin bt + B2 cos bt)] --(8).

Suppose a < 0. As t → ∞, all paths approach (0, 0), but they do not enter it; they wind around it in a spiral-like manner. Changing to polar coordinates,

dθ/dt = (x dy/dt − y dx/dt)/(x^2 + y^2) = [a2 x^2 + (b2 − a1)xy − b1 y^2]/(x^2 + y^2) --(9).

Here dθ/dt is either always positive or always negative: (9) could vanish at some (x, y) ≠ (0, 0) only if the quadratic form a2 x^2 + (b2 − a1)xy − b1 y^2 had a real zero, which requires (b2 − a1)^2 + 4 a2 b1 ≥ 0; but this expression is precisely the discriminant D of the auxiliary equation, which is negative in the present context.

For y = 0, (9) gives dθ/dt = a2. Thus when a2 > 0, dθ/dt > 0, which implies that as t → ∞ all paths spiral about (0, 0) in the anticlockwise sense. The sense is clockwise if a2 < 0. The critical point is called a SPIRAL, which is asymptotically stable.

If a > 0, the situation is the same, except that all paths approach (0, 0) as t → −∞, and hence it is an unstable spiral.

4. Let m1 = m2 = m, say. Assume, m < 0.

(a) a1 = b2 ≠ 0, a2 = b1 = 0. Let the common value be a. Then the system reduces to dx/dt = ax, dy/dt = ay, and its general solution is x = c1 e^{mt}, y = c2 e^{mt} (with m = a). The paths are ½-lines of various slopes, and since m < 0, each path enters (0, 0) as t → ∞. The critical point is an asymptotically stable borderline node. If m > 0, then all paths enter (0, 0) as t → −∞, and the critical point is an unstable borderline node.

(b) All other cases. Assume m < 0. The general solution is

x = c1 A1 e^{mt} + c2 (A2 + A1 t) e^{mt}, y = c1 B1 e^{mt} + c2 (B2 + B1 t) e^{mt}.

When c2 = 0, we get the two ½-line paths lying on y/x = B1/A1. Since m < 0, both of them enter (0, 0) as t → ∞.

If c2 ≠ 0, the paths are curvilinear and all of them enter the critical point as t → ∞, keeping tangential to y/x = B1/A1 as they approach (0, 0). The critical point is again an asymptotically stable borderline node. If m > 0, then it is unstable.

5. We may refer to case 3 with a = 0. Since the exponential factor is missing from the solution, the solutions reduce to periodic functions and each path is a closed curve surrounding the origin. The paths are actually ellipses. The critical point is called a CENTRE, which is stable, but cannot be asymptotically stable.

We may summarise some of the observations made in the course of the above discussion about stability.

Theorem. The critical point (0, 0) of the linear system (1) is stable iff both the roots of the auxiliary equation have non positive real parts, and it is asymptotically stable iff both roots have negative real parts.

Taking p = −(m1 + m2) and q = m1 m2, we can reformulate the theorem as:

Theorem. The critical point (0, 0) of the linear system (1) is asymptotically stable iff p and q are both positive.
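The five cases can be collected into a small decision procedure driven by p, q and the discriminant p^2 − 4q. The sketch below (the function name and labels are ours) applies it to the coefficient matrices of the four examples of Section 4.1.

```python
def classify(a1, b1, a2, b2):
    """Classify the critical point (0,0) of x' = a1 x + b1 y, y' = a2 x + b2 y."""
    p = -(a1 + b2)                 # -(m1 + m2)
    q = a1 * b2 - a2 * b1          # m1 * m2, assumed nonzero
    disc = p * p - 4 * q           # (m1 - m2)^2
    if q < 0:                      # real roots of opposite signs
        return "saddle (unstable)"
    if disc > 0:                   # real distinct roots, same sign
        return "node (asymptotically stable)" if p > 0 else "node (unstable)"
    if disc == 0:                  # equal real roots
        return ("borderline node (asymptotically stable)" if p > 0
                else "borderline node (unstable)")
    if p == 0:                     # pure imaginary roots
        return "centre (stable)"
    return "spiral (asymptotically stable)" if p > 0 else "spiral (unstable)"

# The four examples of Section 4.1, in order:
assert classify(1, 0, -1, 2) == "node (unstable)"       # Eg.1
assert classify(-3, 4, -2, 3) == "saddle (unstable)"    # Eg.2
assert classify(0, -1, 1, 0) == "centre (stable)"       # Eg.3
assert classify(1, -1, 1, 1) == "spiral (unstable)"     # Eg.4
```

The order of the tests matters: q < 0 forces disc > 0, so the saddle case must be recognised before the node case.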

4.3. Liapunov's direct method

In a physical system, if the total energy has a local minimum at a certain equilibrium point, then that point is stable. This concept leads to a powerful method for studying stability problems. Consider the autonomous system dx/dt = F(x, y), dy/dt = G(x, y). Assume that (0, 0) is an isolated critical point of the system. Let C = [x(t), y(t)] be a path. Let E(x, y) be a function that is continuous and has continuous first partial derivatives in a region containing C. If (x, y) is a point on C, then E(x, y) is a function of t alone, say E(t). Its rate of change, as the point moves along C, is

dE/dt = (∂E/∂x)(dx/dt) + (∂E/∂y)(dy/dt) = (∂E/∂x) F + (∂E/∂y) G.

Let E(x, y) be a function with continuous first partial derivatives in some region containing the origin, with E(0, 0) = 0. Then E is said to be positive definite if E(x, y) > 0 for (x, y) ≠ (0, 0), and negative definite if E(x, y) < 0 for (x, y) ≠ (0, 0). Similarly, E is called positive semi-definite if E(0, 0) = 0 and E(x, y) ≥ 0 for (x, y) ≠ (0, 0), and negative semi-definite if E(0, 0) = 0 and E(x, y) ≤ 0 for (x, y) ≠ (0, 0). Functions of the form a x^{2m} + b y^{2n}, where m and n are positive integers and a and b are positive constants, are positive definite. Note that E(x, y) is negative definite iff −E(x, y) is positive definite; the functions x^{2m} and y^{2n} alone are not positive definite, since each vanishes on a whole coordinate axis.

Given the linear system (1), a positive definite function E(x, y) such that the derived function

H(x, y) = (∂E/∂x) F + (∂E/∂y) G

is negative semi-definite is called a Liapunov function for (1). By the earlier discussion, along a path C near the origin dE/dt ≤ 0, and hence E is non-increasing along C as it advances.

Theorem. If there exists a Liapunov function E(x, y) for the system (1), then the critical point (0, 0) is stable. Furthermore, if this function has the additional property that the derived function H(x, y) is negative definite, then (0, 0) is asymptotically stable.

Proof: Let C1 be a circle of radius R > 0 centred at the origin; it may be assumed that C1 is small enough to be contained in the domain of definition of E. Since E(x, y) is continuous and positive definite, it has a positive minimum m on C1. Since E(x, y) is continuous at the origin and vanishes there, we can find 0 < r < R such that E(x, y) < m whenever (x, y) is inside the circle C2 of radius r centred at the origin. Let C be a path which is inside C2 for t = t0. Then E(t0) < m, and dE/dt ≤ 0 implies that E(t) ≤ E(t0) < m for all t > t0. It follows that the path C can never reach the circle C1 for t > t0. Thus (0, 0) is stable.

Under the additional assumption, we claim further that E(t) → 0 as t → ∞. This would imply that the path C approaches (0, 0) as t → ∞. Along C, dE/dt < 0, so E(t) is a decreasing function. Since E(t) is bounded below by 0, E(t) → L ≥ 0, say, as t → ∞. It suffices to show that L = 0.

Suppose not. Choose 0 < r' < r such that E(x, y) < L/2 whenever (x, y) is inside the circle C3 with radius r' centred at the origin; since E(t) ≥ L along C, the path C never enters C3. Since H is negative definite, it has a negative maximum −k in the closed annulus bounded by C1 and C3. Since this region contains C for t ≥ t0,

E(t) = E(t0) + ∫_{t0}^{t} (dE/dt) dt, which gives E(t) ≤ E(t0) − k(t − t0).

But the right side of this inequality becomes negatively infinite as t → ∞, so E(t) → −∞ as t → ∞. This contradicts the fact that E(x, y) ≥ 0. Thus L = 0, and the proof is complete.

Eg. Consider the equation of motion of a mass m attached to a spring,

m d^2x/dt^2 + c dx/dt + kx = 0.

Here c ≥ 0 is the viscosity of the medium through which the mass moves, and k > 0 is the spring constant. The equivalent autonomous system is

dx/dt = y, dy/dt = −(k/m)x − (c/m)y.

The only critical point is (0, 0). The kinetic energy of the mass is m y^2/2, and the potential energy due to the current elongation x of the spring is ∫_0^x kx dx = ½ k x^2. Thus the total energy of the mechanical system is E(x, y) = ½ m y^2 + ½ k x^2. Then E(x, y) is positive definite and H(x, y) = kx · y + my(−(k/m)x − (c/m)y) = −c y^2 ≤ 0. Thus E(x, y) is a Liapunov function and the critical point is stable, by the Theorem.
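The monotone decay of E along trajectories can be observed numerically. The sketch below uses illustrative values m = 1, k = 2, c = 0.3 and a hand-coded Runge–Kutta integrator:

```python
m, k, c = 1.0, 2.0, 0.3              # illustrative mass, spring constant, viscosity

def field(x, y):                     # x' = y, y' = -(k/m)x - (c/m)y
    return (y, -(k/m)*x - (c/m)*y)

def rk4_step(x, y, h):               # one classical Runge-Kutta step
    k1 = field(x, y)
    k2 = field(x + h/2*k1[0], y + h/2*k1[1])
    k3 = field(x + h/2*k2[0], y + h/2*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def E(x, y):                         # total energy, the Liapunov function
    return 0.5*m*y*y + 0.5*k*x*x

x, y = 1.0, 0.0                      # stretched spring released from rest
energies = [E(x, y)]
for _ in range(5000):                # integrate to t = 50
    x, y = rk4_step(x, y, 0.01)
    energies.append(E(x, y))

# E never increases along the path, and the path tends to (0, 0)
assert all(b <= a + 1e-9 for a, b in zip(energies, energies[1:]))
assert E(x, y) < 1e-3
```

Since dE/dt = −c y^2 vanishes on the x-axis, E is only weakly decreasing instant by instant, yet over any full oscillation it strictly drops, which is what the simulated energy sequence shows.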

Ex.1. Show that ( 0 , 0 ) is an asymptotically stable critical point of the system

dx/dt = −3x^3 − y, dy/dt = x^5 − 2y^3.

Let E(x, y) = a x^{2m} + b y^{2n}, where a, b > 0 and m, n are positive integers. E is positive definite and H = 2ma x^{2m−1}(−3x^3 − y) + 2nb y^{2n−1}(x^5 − 2y^3) = −6ma x^{2m+2} − 2ma x^{2m−1} y + 2nb x^5 y^{2n−1} − 4nb y^{2n+2}. Let m = 3, n = 1, a = 1, b = 3. Then H = −18 x^8 − 12 y^4, which is negative definite. Now E(x, y) is a Liapunov function for the system with the derived function H(x, y) negative definite. Thus the critical point (0, 0) is asymptotically stable.
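The algebra for H can be spot-checked numerically; a quick sanity sketch:

```python
def F(x, y): return -3*x**3 - y        # right side of dx/dt
def G(x, y): return x**5 - 2*y**3      # right side of dy/dt

def E(x, y):                           # a x^{2m} + b y^{2n} with m=3, a=1, n=1, b=3
    return x**6 + 3*y**2

def H(x, y):                           # derived function E_x F + E_y G
    return 6*x**5 * F(x, y) + 6*y * G(x, y)

# H should reduce to -18 x^8 - 12 y^4, which is negative definite
for (x, y) in [(0.7, -1.2), (-2.0, 0.3), (1.5, 1.5), (-0.4, -0.9)]:
    assert abs(H(x, y) - (-18*x**8 - 12*y**4)) < 1e-9
    assert H(x, y) < 0
```

The cross terms −6x^5 y and +6x^5 y cancel exactly, which is why this particular choice of m, n, a, b works.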

Ex.2. Show that ( 0 , 0 ) is an asymptotically stable critical point of the system

dx/dt = −2x^3 + xy^2, dy/dt = −x^2 y − 2y^3.

Take E(x, y) = x^2 + y^2.

4.4. Simple critical points – Non linear system

Consider the autonomous system dx/dt = F(x, y), dy/dt = G(x, y) with an isolated critical point at (0, 0). Since F(0, 0) = 0 = G(0, 0), assuming their Maclaurin series expansions about (0, 0) and neglecting higher powers of x and y, for (x, y) close to (0, 0) the system reduces to a linear one. More generally, we may take the system as

dx/dt = a1 x + b1 y + f(x, y), dy/dt = a2 x + b2 y + g(x, y).

It is assumed that a1 b2 − a2 b1 ≠ 0, so that the critical point will be isolated. (0, 0) is called a 'simple critical point' of the system if

lim_{(x,y)→(0,0)} f(x, y)/√(x^2 + y^2) = 0 and lim_{(x,y)→(0,0)} g(x, y)/√(x^2 + y^2) = 0.

Theorem. Let (0, 0) be a simple critical point of the non linear system

dx/dt = a1 x + b1 y + f(x, y), dy/dt = a2 x + b2 y + g(x, y) --(1).

If the critical point (0, 0) of the associated linear system

dx/dt = a1 x + b1 y, dy/dt = a2 x + b2 y --(2)

falls under any of the three major cases (node, saddle point, spiral), then the critical point (0, 0) of (1) is of the same type.

Remark: There need not be similarity among the paths of the two systems; in the non linear case the paths show more distortion.

If system (2) has the origin as a borderline node (centre), then the origin will be either a node or a spiral (a centre or a spiral) for the system (1).

Theorem. Let (0, 0) be a simple critical point of the non linear system (1), and consider the related linear system (2). If (0, 0) is an asymptotically stable critical point of (2), then it is asymptotically stable for (1).

Proof. We construct a suitable Liapunov function for the system (1) to justify the claim. The coefficients of the auxiliary equation of the linear system, namely p and q, are positive, by the assumption that (0, 0) is asymptotically stable for (2).

Now define E(x, y) = ½ (a x^2 + 2b xy + c y^2), where

a = [a2^2 + b2^2 + (a1 b2 − a2 b1)]/pq, c = [a1^2 + b1^2 + (a1 b2 − a2 b1)]/pq and b = −(a1 a2 + b1 b2)/pq.

Note that p = −(a1 + b2) and q = (a1 b2 − a2 b1). We have p, q, a, c > 0, and it can be directly shown that (ac − b^2) > 0. Since b^2 − ac < 0 and a > 0, E(x, y) is positive definite. It can also be easily obtained that

H(x, y) = (ax + by)(a1 x + b1 y) + (bx + cy)(a2 x + b2 y) = −(x^2 + y^2),

which is negative definite. Thus E(x, y) is a Liapunov function for the linear system. Now by using the continuity of f and g at (0, 0), and shifting to polar coordinates, it can be shown that (∂E/∂x) F + (∂E/∂y) G, where F = (a1 x + b1 y) + f(x, y) and G = (a2 x + b2 y) + g(x, y), is negative definite. Thus E(x, y) is a positive definite function with the derived function related to the non linear system negative definite. Hence by the Theorem, (0, 0) is asymptotically stable for the system (1).

Eg. The equation of motion for damped vibrations of a simple pendulum is,

d^2x/dt^2 + (c/m) dx/dt + (g/a) sin x = 0 ---(1), where c > 0 and g is the acceleration due to gravity.

The equivalent autonomous system is

dx/dt = y, dy/dt = −(g/a) sin x − (c/m) y --(2).

(2) can be written as

dx/dt = y, dy/dt = −(g/a) x − (c/m) y + (g/a)(x − sin x) --(3).

Thus f(x, y) = 0 and g(x, y) = (g/a)(x − sin x). Since

lim_{(x,y)→(0,0)} g(x, y)/√(x^2 + y^2) = (g/a) lim_{(x,y)→(0,0)} (x − sin x)/√(x^2 + y^2) = 0,

(0, 0) is a simple critical point of the non linear system (2). (As (x, y) → (0, 0), for x ≠ 0, |x − sin x|/√(x^2 + y^2) ≤ |x − sin x|/|x| = |1 − (sin x)/x| → 0.)

Now (0, 0) is an isolated critical point of the associated linear system

dx/dt = y, dy/dt = −(g/a) x − (c/m) y --(4).

We have p = c/m > 0 and q = g/a > 0 for (4). Hence by the Theorem, (0, 0) is an asymptotically stable critical point of (4). Thus by the last Theorem, (0, 0) is an asymptotically stable critical point of the original system (1).

Since x = 0 and y = dx/dt = 0 refer to the mean position and zero velocity, asymptotic stability of (0, 0) implies that the motion due to a slight disturbance of the simple pendulum will die out with the passage of time.
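This conclusion can be illustrated numerically. With illustrative values g/a = 9.8 and c/m = 0.5, a small disturbance of the pendulum decays back to the mean position (Runge–Kutta sketch):

```python
import math

g_over_a, c_over_m = 9.8, 0.5       # illustrative values of g/a and c/m

def field(x, y):                    # x = angular displacement, y = dx/dt
    return (y, -g_over_a*math.sin(x) - c_over_m*y)

def rk4_step(x, y, h):              # one classical Runge-Kutta step
    k1 = field(x, y)
    k2 = field(x + h/2*k1[0], y + h/2*k1[1])
    k3 = field(x + h/2*k2[0], y + h/2*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y = 0.4, 0.0                     # a slight disturbance, released from rest
for _ in range(40000):              # integrate to t = 200
    x, y = rk4_step(x, y, 0.005)

# the motion has died out: the pendulum is back at the mean position
assert abs(x) < 1e-6 and abs(y) < 1e-6
```

The decay rate matches the linearised system: the amplitude shrinks roughly like e^{−(c/m)t/2}, so by t = 200 the disturbance is far below the chosen tolerance.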

We give a few more theorems helpful in the investigation of an autonomous system.

Theorem.1. A closed path of an autonomous system necessarily surrounds at least one critical point. Thus a system without critical points in a given region cannot have closed paths in that region.

Theorem.2. If ∂F/∂x + ∂G/∂y is always positive or always negative in a certain region of the phase plane, then the system cannot have closed paths in that region.

Proof: Assume that the region contains a closed path C with interior R. Then by Green's theorem,

∮_C (F dy − G dx) = ∬_R (∂F/∂x + ∂G/∂y) dx dy ≠ 0.

But along C, dx = F dt and dy = G dt, so that ∮_C (F dy − G dx) = ∮_C (FG − GF) dt = 0, a contradiction.

Theorem.3. (Poincaré–Bendixson) Let R be a bounded region of the phase plane together with its boundary, and suppose R does not contain any critical points of the system. If C is a path that enters R and remains in R in its further course, then C is either a closed path or it spirals toward a closed path as t → ∞. Thus in either case the system has a closed path.

Theorem.4. (Liénard) Let the functions f(x) and g(x) satisfy the following conditions: (1) both are continuous and have continuous derivatives for all x; (2) g(x) is an odd function such

that g(x) > 0 for x > 0, and f(x) is an even function; (3) the odd function F(x) = ∫_0^x f(t) dt has exactly one positive zero at x = a, is negative for 0 < x < a, is positive and non decreasing for x > a, and F(x) → ∞ as x → ∞. Then Liénard's equation

d^2x/dt^2 + f(x) dx/dt + g(x) = 0

has a unique closed path surrounding the origin in the phase plane, and this path is approached spirally by every other path as t → ∞.
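As an illustration of Liénard's theorem, the van der Pol equation x'' + (x^2 − 1)x' + x = 0 meets all three conditions: f(x) = x^2 − 1 is even, g(x) = x is odd and positive for x > 0, and F(x) = x^3/3 − x is negative on (0, √3), has its single positive zero at x = √3, and increases to ∞ beyond it. A simulation sketch shows paths starting well inside and well outside settling onto the same closed orbit:

```python
def field(x, y):                     # phase-plane form of x'' + (x^2 - 1)x' + x = 0
    return (y, -(x*x - 1)*y - x)

def rk4_step(x, y, h):               # one classical Runge-Kutta step
    k1 = field(x, y)
    k2 = field(x + h/2*k1[0], y + h/2*k1[1])
    k3 = field(x + h/2*k2[0], y + h/2*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def late_amplitude(x0, y0):
    """Integrate past the transient, then return max |x| over one further stretch."""
    x, y = x0, y0
    for _ in range(20000):           # transient: t = 0 .. 100
        x, y = rk4_step(x, y, 0.005)
    amp = 0.0
    for _ in range(2000):            # a further 10 time units (> one period)
        x, y = rk4_step(x, y, 0.005)
        amp = max(amp, abs(x))
    return amp

a1 = late_amplitude(0.1, 0.0)        # starts inside the closed path
a2 = late_amplitude(4.0, 0.0)        # starts outside the closed path
assert abs(a1 - a2) < 0.05           # both paths reach the same closed orbit
assert 1.9 < a1 < 2.1                # its amplitude is close to 2
```

The common amplitude near 2 is the well-known size of the van der Pol limit cycle; the uniqueness asserted by the theorem shows up here as the agreement of the two late amplitudes.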


CHAPTER 5

SOME FUNDAMENTAL THEOREMS 5.1. Method of successive approximations

Consider the initial value problem (IVP), y' = f(x, y), y(x0) = y0 --(1), where f(x, y) is a function continuous in some neighbourhood of (x0, y0). A solution is, geometrically, a curve in the x-y plane that passes through (x0, y0) such that at each point on the curve the slope is the prescribed f(x, y).

The IVP is equivalent to the integral equation (IE)

y(x) = y0 + ∫_{x0}^{x} f[t, y(t)] dt --(2).

{ (1) is equivalent to (2): Suppose y(x) is a solution of (1). Then y'(x) = f(x, y(x)) is indeed continuous, and if we integrate it from x0 to x, (2) is obtained. If y(x) is a continuous solution of (2), then y(x0) = y0 and, by differentiating, y'(x) = f(x, y(x)). }

We may suggest an iterative procedure to solve the IE (2). Start with the rough approximation y0(x) = y0. Substituting in the right side of (2), we get a new approximation

y1(x) = y0 + ∫_{x0}^{x} f[t, y0(t)] dt. Next use y1(x) in the right side of (2) to get y2(x) = y0 + ∫_{x0}^{x} f[t, y1(t)] dt.

This process can be continued to get yn(x) = y0 + ∫_{x0}^{x} f[t, y_{n−1}(t)] dt.

The procedure is called Picard's method of successive approximations.

Eg.1. Consider the IVP, y’ = y, y( 0 ) = 1.

The equivalent IE is y(x) = 1 + ∫_0^x y(t) dt. Then yn(x) = 1 + ∫_0^x y_{n−1}(t) dt.

With y0(x) = 1: y1(x) = 1 + ∫_0^x 1 dt = 1 + x, y2(x) = 1 + ∫_0^x (1 + t) dt = 1 + x + x^2/2,

y3(x) = 1 + ∫_0^x (1 + t + t^2/2) dt = 1 + x + x^2/2 + x^3/(2·3), …

It is clear that yn(x) → 1 + x + x^2/2 + x^3/(2·3) + ⋯ = e^x.

Note that y(x) = e^x is a solution of the IVP.
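Picard's iterates can also be produced numerically, by carrying each y_n on a grid and applying the trapezoidal rule to the integral. For y' = y, y(0) = 1 the iterates approach e^x; the sketch below (grid size and iteration count are our choices) checks this at x = 1.

```python
import math

N, X = 200, 1.0                        # grid on [0, X]
h = X / N
xs = [i * h for i in range(N + 1)]

def f(x, y):                           # the IVP y' = y, y(0) = 1
    return y

def picard_step(y):
    """One application of y_{n+1}(x) = 1 + integral_0^x f(t, y_n(t)) dt."""
    out = [1.0]
    acc = 0.0
    for i in range(1, N + 1):
        acc += h * (f(xs[i-1], y[i-1]) + f(xs[i], y[i])) / 2   # trapezoid
        out.append(1.0 + acc)
    return out

y = [1.0] * (N + 1)                    # y_0(x) = 1
for _ in range(25):                    # 25 Picard iterations
    y = picard_step(y)

assert abs(y[-1] - math.exp(1.0)) < 1e-4
```

After a handful of iterations the remaining error comes almost entirely from the trapezoidal rule, not from truncating the Picard sequence, since the iterates converge like 1/n!.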


Eg.2. Consider y' = x + y^2, y(0) = 0.

We may take, y0 ( x ) = 0.

Then y1(x) = ∫_0^x t dt = x^2/2, y2(x) = ∫_0^x (t + t^4/4) dt = x^2/2 + x^5/20,

y3(x) = ∫_0^x (t + t^4/4 + t^7/20 + t^{10}/400) dt = x^2/2 + x^5/20 + x^8/160 + x^{11}/4400, ……

Eg.3. Consider y' = x + y, y(0) = 1. It is not difficult to get the exact solution, y(x) = 2e^x − x − 1.

With y0(x) = 1: y1(x) = 1 + ∫_0^x (t + 1) dt = 1 + x + x^2/2!, y2(x) = 1 + ∫_0^x (1 + 2t + t^2/2!) dt = 1 + x + x^2 + x^3/3!, y3(x) = 1 + x + x^2 + x^3/3 + x^4/4!, …

Note that yn(x) → 1 + x + 2(e^x − x − 1) = 2e^x − x − 1.

5.2. Picard's Theorem

Theorem.1. Let f(x, y) and ∂f/∂y be continuous functions on a closed rectangular region R with sides parallel to the coordinate axes. If (x0, y0) is any interior point of R, then there exists a number h > 0 with the property that the initial value problem

y' = f(x, y), y(x0) = y0 ---(1)

has a unique solution y = y(x) in [x0 − h, x0 + h].

Proof: We know that every solution of the IVP is also a continuous solution of the IE

y(x) = y0 + ∫_{x0}^{x} f[t, y(t)] dt ---(2), and conversely.

We will show that (2) has a unique solution in [x0 − h, x0 + h], for some h > 0. We produce a sequence of functions following Picard's method of successive approximations.

Let y0 ( x ) = y0,

y1(x) = y0 + ∫_{x0}^{x} f[t, y0(t)] dt, y2(x) = y0 + ∫_{x0}^{x} f[t, y1(t)] dt, …,

yn(x) = y0 + ∫_{x0}^{x} f[t, y_{n−1}(t)] dt, ……

Claim: The sequence < yn(x) > converges to a solution of the IVP in [x0 − h, x0 + h], for some h > 0.

Since R is compact and f(x, y) and ∂f/∂y are continuous in R, they are bounded. Therefore there exist M, K > 0 such that

|f(x, y)| ≤ M --(3) and |∂f(x, y)/∂y| ≤ K --(4), for all (x, y) ∈ R.

Let (x, y1), (x, y2) be distinct points in R. Then by the mean value theorem,

f(x, y1) − f(x, y2) = [∂f(x, y*)/∂y] (y1 − y2) --(5), for some y* between y1 and y2. Then by (4), we get |f(x, y1) − f(x, y2)| ≤ K |y1 − y2| --(6).

Now choose h > 0 such that Kh < 1 --(7) and the rectangular region R' defined by

|x − x0| ≤ h and |y − y0| ≤ Mh is contained in R. Since (x0, y0) is interior to R, such an h exists.

Note that yn(x) is the nth partial sum of the series y0(x) + Σ_{n=1}^{∞} [yn(x) − y_{n−1}(x)] --(8).

Thus to show that < yn(x) > converges, it is sufficient to show that the series (8) converges.

(a) The graph of the function y = yn(x), for |x − x0| ≤ h, lies in R' and hence in R, for every n.

This is clear for y = y0(x) = y0. Since the points (t, y0(t)) are in R', we get |y1(x) − y0| = |∫_{x0}^{x} f[t, y0(t)] dt| ≤ Mh. Thus the graph of y = y1(x) lies in R'. Now it follows that the points (t, y1(t)) are in R' and |y2(x) − y0| = |∫_{x0}^{x} f[t, y1(t)] dt| ≤ Mh. Thus the graph of y = y2(x) lies in R'. Proceeding similarly, we get the result (a).

Since y1(x) is continuous in |x − x0| ≤ h, there exists a constant a = max |y1(x) − y0|.

Since (t, y1(t)) and (t, y0(t)) are in R', (6) gives |f(t, y1(t)) − f(t, y0(t))| ≤ K |y1(t) − y0(t)| ≤ Ka and

|y2(x) − y1(x)| = |∫_{x0}^{x} [f(t, y1(t)) − f(t, y0(t))] dt| ≤ Kah = a(Kh).

Similarly, |f(t, y2(t)) − f(t, y1(t))| ≤ K |y2(t) − y1(t)| ≤ K(Kah) = K^2 ah, so

|y3(x) − y2(x)| = |∫_{x0}^{x} [f(t, y2(t)) − f(t, y1(t))] dt| ≤ K^2 ah · h = a(Kh)^2.

Continuing like this, we can show that |yn(x) − y_{n−1}(x)| ≤ a(Kh)^{n−1}.

Now each term of the series |y0(x)| + Σ_{n=1}^{∞} |yn(x) − y_{n−1}(x)| is dominated by the corresponding term of the series |y0| + a + a(Kh) + a(Kh)^2 + …, which converges, being essentially a geometric series with common ratio r = Kh numerically less than 1, by (7).

Thus, by the comparison test, the series (8) converges uniformly in |x − x0| ≤ h to a sum, say y(x), and hence < yn(x) > converges to y(x) uniformly in |x − x0| ≤ h.

Since the graph of yn ( x ) lies in R’, the graph of the function y(x) also lies in R’.

Since each yn ( x ) is continuous , the uniform limit y( x ) is also continuous.

Now we proceed to prove that, y(x) is a solution of the IVP.

i.e. we have to show that y(x) − y0 − ∫_{x0}^{x} f[t, y(t)] dt = 0 --(9).

But yn(x) − y0 − ∫_{x0}^{x} f[t, y_{n−1}(t)] dt = 0. Now consider

|y(x) − y0 − ∫_{x0}^{x} f[t, y(t)] dt − 0| = |y(x) − y0 − ∫_{x0}^{x} f[t, y(t)] dt − [yn(x) − y0 − ∫_{x0}^{x} f[t, y_{n−1}(t)] dt]|

= |[y(x) − yn(x)] + ∫_{x0}^{x} [f(t, y_{n−1}(t)) − f(t, y(t))] dt| ≤ |y(x) − yn(x)| + Kh max |y(x) − y_{n−1}(x)|,

since the graph of y(x) lies in R' and hence in R, and by virtue of (6). The uniform convergence of yn(x) to y(x) enables us to make the right side of the above inequality as small as we please by taking n sufficiently large. Since the left side is independent of n, we get the required result.

Now we settle the uniqueness part. Let ȳ(x) be another possible solution of the IVP in |x − x0| ≤ h.

It is essential to show that the graph of ȳ(x) also lies in R'. On the contrary, assume that its graph leaves R'. Then there exists some x1 with |x1 − x0| < h such that |ȳ(x1) − y0| = Mh, while the continuity of ȳ(x) at x = x0 allows x1 to be chosen so that |ȳ(x) − y0| < Mh for |x − x0| < |x1 − x0|. Then

|ȳ(x1) − y0| / |x1 − x0| = Mh / |x1 − x0| > Mh/h = M,

whereas by the mean value theorem there exists x* between x0 and x1 such that

|ȳ(x1) − y0| / |x1 − x0| = |ȳ'(x*)| = |f(x*, ȳ(x*))| ≤ M,

since (x*, ȳ(x*)) lies in R'. Hence, a contradiction.

Since both y(x) and ȳ(x) are solutions of the IVP,

|y(x) − ȳ(x)| ≤ ∫_{x0}^{x} |f(t, y(t)) − f(t, ȳ(t))| dt ≤ K h max |y(x) − ȳ(x)|,

since the graphs of both functions lie in R′. So max |y(x) − ȳ(x)| ≤ K h max |y(x) − ȳ(x)|. But this implies max |y(x) − ȳ(x)| = 0, since K h < 1.

Thus ȳ(x) = y(x) for every x in the interval |x − x0| ≤ h.

Remark. We notice that the continuity of ∂f/∂y is used in the proof only to the extent that it implies (6). We can replace this requirement by a Lipschitz condition, namely: there exists K > 0 such that |f(x, y1) − f(x, y2)| ≤ K |y1 − y2|. If we drop this condition too, it is known that the IVP still has a solution, but the uniqueness cannot be ascertained (Peano's theorem).

Eg. Consider the IVP y′ = 3y^{2/3}, y(0) = 0. Let R be the rectangular region |x| ≤ 1, |y| ≤ 1. Here f(x, y) = 3y^{2/3} is continuous in R but does not satisfy a Lipschitz condition near y = 0. Two different solutions are y = x³ and y = 0.
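This failure of uniqueness is easy to confirm symbolically. A minimal sketch using sympy (not part of the text; the exponent is read as 2/3, consistent with the claimed solutions y = x³ and y = 0):

```python
import sympy as sp

# Verify that y = x**3 and y = 0 both solve y' = 3*y**(2/3), y(0) = 0.
# (x is taken positive so that (x**3)**(2/3) simplifies to x**2.)
x = sp.symbols('x', positive=True)

for y in (x**3, sp.Integer(0)):
    residual = sp.simplify(sp.diff(y, x) - 3 * y**sp.Rational(2, 3))
    assert residual == 0        # the ODE is satisfied
    assert y.subs(x, 0) == 0    # the initial condition is satisfied
```

Since f(x, y) = 3y^{2/3} is not Lipschitz in y near y = 0, Picard's theorem does not apply there, and the two solutions coexist.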

Theorem.2. Let f(x, y) be continuous and satisfy the Lipschitz condition |f(x, y1) − f(x, y2)| ≤ K |y1 − y2| on a vertical strip a ≤ x ≤ b, −∞ < y < ∞. If (x0, y0) is any point of the strip, then the initial value problem y′ = f(x, y), y(x0) = y0 has a unique solution in [a, b].

Proof: The proof is similar to that of Theorem.1, and is based on Picard's method of successive approximations.

Let M0 = |y0|, M1 = max |y1(x)|, and M = M0 + M1. It is easily observed that |y0(x)| ≤ M and |y1(x) − y0(x)| ≤ M.

Assume x0 ≤ x ≤ b. Then

|y2(x) − y1(x)| = |∫_{x0}^{x} ( f[t, y1(t)] − f[t, y0(t)] ) dt| ≤ ∫_{x0}^{x} |f(t, y1(t)) − f(t, y0(t))| dt ≤ K ∫_{x0}^{x} |y1(t) − y0(t)| dt ≤ K M (x − x0),

|y3(x) − y2(x)| = |∫_{x0}^{x} ( f[t, y2(t)] − f[t, y1(t)] ) dt| ≤ ∫_{x0}^{x} |f(t, y2(t)) − f(t, y1(t))| dt ≤ K ∫_{x0}^{x} |y2(t) − y1(t)| dt ≤ K² M ∫_{x0}^{x} (t − x0) dt = K² M (x − x0)² / 2,

and in general, |y_n(x) − y_{n−1}(x)| ≤ K^{n−1} M (x − x0)^{n−1} / (n−1)!.

The same result holds for a ≤ x ≤ x0, with (x − x0) replaced by |x − x0|.

Thus, K^{n−1} M |x − x0|^{n−1} / (n−1)! ≤ K^{n−1} M (b − a)^{n−1} / (n−1)!.

Now each term of the series |y0(x)| + Σ_{n=1}^∞ |y_n(x) − y_{n−1}(x)| …(*) is dominated by the corresponding term of the convergent numerical series

M + M + K M (b − a) + K² M (b − a)²/2! + ……,

and hence the series y0(x) + Σ_{n=1}^∞ ( y_n(x) − y_{n−1}(x) ) converges uniformly in [a, b] to a limit function y(x). The uniform convergence readily implies that y(x) is a solution of the IVP in [a, b].

If possible, let ȳ(x) be another solution of the IVP. We claim that then y_n(x) → ȳ(x) also, so that ȳ(x) = y(x).

Now, ȳ(x) is continuous and ȳ(x) = y0 + ∫_{x0}^{x} f[t, ȳ(t)] dt.

Let A = max |ȳ(x) − y0|.

Then for x0 ≤ x ≤ b,

|ȳ(x) − y1(x)| = |∫_{x0}^{x} ( f[t, ȳ(t)] − f[t, y0] ) dt| ≤ ∫_{x0}^{x} |f(t, ȳ(t)) − f(t, y0)| dt ≤ K ∫_{x0}^{x} |ȳ(t) − y0| dt ≤ K A (x − x0),

|ȳ(x) − y2(x)| = |∫_{x0}^{x} ( f[t, ȳ(t)] − f[t, y1(t)] ) dt| ≤ ∫_{x0}^{x} |f(t, ȳ(t)) − f(t, y1(t))| dt ≤ K ∫_{x0}^{x} |ȳ(t) − y1(t)| dt ≤ K² A ∫_{x0}^{x} (t − x0) dt = K² A (x − x0)²/2!, …,

and in general, |ȳ(x) − y_n(x)| ≤ K^n A (x − x0)^n / n!.

A similar result is obtained for a ≤ x ≤ x0.

Thus, for any x in [a, b], |ȳ(x) − y_n(x)| ≤ K^n A |x − x0|^n / n! ≤ K^n A (b − a)^n / n!. But from the exponential series we get, for any r, r^n/n! → 0 as n → ∞. Thus the right side of the above inequality tends to zero as n → ∞. Hence y_n(x) → ȳ(x) also.

Remark. Picard's method of successive approximations can be applied to systems of first order equations, by starting with the necessary number of initial conditions and converting the system into a system of integral equations. Picard's theorem, under suitable hypotheses, holds good in this context also.

Theorem. Let P(x), Q(x) and R(x) be continuous functions on [a, b]. If x0 is any point in [a, b], and y0, y0′ are any two numbers, then the initial value problem

d²y/dx² + P(x) dy/dx + Q(x) y = R(x), y(x0) = y0, y′(x0) = y0′,

has a unique solution y = y(x) in [a, b].
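For instance (the particular coefficients below are chosen for illustration only, not taken from the text), sympy's dsolve exhibits the unique solution the theorem guarantees:

```python
import sympy as sp

# The theorem guarantees a unique solution of a linear second-order IVP
# with continuous coefficients; here P = 0, Q = 1, R = 0 on any [a, b]:
# y'' + y = 0, y(0) = 0, y'(0) = 1.
x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(y(x).diff(x, 2) + y(x), y(x),
                ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})
assert sol.rhs == sp.sin(x)   # the unique solution is y = sin(x)
```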


CHAPTER 6

FIRST ORDER PARTIAL DIFFERENTIAL EQUATIONS

6.1. Introduction - Review.

Partial differential equations arise from contexts involving more than one independent variable. For the geometric analysis of a partial differential equation and its solutions, we require a good knowledge of the representation of curves and surfaces in 3 dimensions.

A curve C in 3 dimensions can be specified in parametric form as the collection of points (x, y, z) satisfying the equations x = f1(t), y = f2(t), z = f3(t), where the parameter t varies in an interval I in R, and f1, f2, f3 are continuous functions on I. The standard parameter is the arc length s from a fixed point A on C to a generic point P on C.

The equation of C can also be presented in vector–parametric form as r = f1(t) i + f2(t) j + f3(t) k. At any point P(r) on C, dr/dt gives a tangent direction. (If s is the arc length parameter, then dr/ds gives the unit tangent direction.)

Eg. A straight line with direction ratios l, m, n and passing through (x0, y0, z0) can be written in symmetric form as (x − x0)/l = (y − y0)/m = (z − z0)/n.

Eg. A right circular helix on the cylinder x² + y² = a² can be specified as x = a cos t, y = a sin t, z = kt.

The equation of a surface is usually taken as F(x, y, z) = 0, where F is a continuously differentiable function in R³. Its equation can also be expressed in parametric form as

x = F1(u, v), y = F2(u, v), z = F3(u, v). If ∂(F1, F2)/∂(u, v) ≠ 0, then u and v can be solved as functions of x and y locally, say u = φ(x, y), v = ψ(x, y). Then from the last equation we get z = F3(φ(x, y), ψ(x, y)).

Suppose the curve C : x = x(s), y = y(s), z = z(s) lies on the surface S whose equation is F(x, y, z) = 0. Then F(x(s), y(s), z(s)) = 0 for every s.

On differentiating w.r.t. s, we get

(∂F/∂x)(dx/ds) + (∂F/∂y)(dy/ds) + (∂F/∂z)(dz/ds) = 0.

This implies that at the point P(x, y, z) of the curve, the direction (∂F/∂x, ∂F/∂y, ∂F/∂z) is perpendicular to the tangent direction (dx/ds, dy/ds, dz/ds) to C. Since C lies entirely on S, the latter is a tangent direction to the surface also.

Thus (∂F/∂x, ∂F/∂y, ∂F/∂z) is a normal direction to S at P.

Let u be a variable depending on the independent variables x, y, z, …. Then an equation of the form

f( x, y, z, …, u, ∂u/∂x, ∂u/∂y, ∂u/∂z, …, ∂²u/∂x², ∂²u/∂x∂y, … ) = 0,

where f is a function with continuous partial derivatives of order up to n, for some n, is called a partial differential equation (PDE) of order n. The order of a PDE is the order of the highest order derivative appearing in it.

A PDE is said to be quasi-linear if it is linear in its highest order derivatives, and semi-linear if it is quasi-linear and the coefficients of the highest order derivatives do not contain the dependent variable or its derivatives. A PDE which is not quasi-linear is called non-linear. A semi-linear PDE which is linear in the dependent variable and its derivatives is called linear.

To begin with, we consider the case of only two independent variables, say x and y, and one dependent variable, say z, depending on x and y. We use the notations

p = ∂z/∂x, q = ∂z/∂y, r = ∂²z/∂x², s = ∂²z/∂x∂y, t = ∂²z/∂y².

In this context, the first order PDE has the form f(x, y, z, p, q) = 0. In many contexts we require the following theorem, which involves solving for one set of variables in terms of the others from a given functional equation.

Implicit Function Theorem. Let f be a continuously differentiable function from an open set E of R^{n+m} into R^n. Let (a, b) ∈ E be such that f(a, b) = 0, and let A = f′(a, b). If A_x (the derivative with respect to the x-variables) is invertible, then there exists W, a neighborhood of b in R^m, such that for each y in W there exists a unique x in R^n with f(x, y) = 0.

The theorem gives the logical support for finding x in terms of y, given f(x, y) = 0.

6.2. Formation of First Order PDE

Consider a family of surfaces of the form F(u, v) = 0, where F is an arbitrary function, and u and v are given functions of x, y, z. Differentiating F(u, v) = 0 partially w.r.t. x and y, we get

F_u (u_x + u_z p) + F_v (v_x + v_z p) = 0 …(1)

F_u (u_y + u_z q) + F_v (v_y + v_z q) = 0 …(2).

On eliminating F_u and F_v from (1) & (2), we get

| u_x + u_z p   v_x + v_z p |
| u_y + u_z q   v_y + v_z q | = 0 …(3),

which can be simplified as (u_y v_z − u_z v_y) p + (u_z v_x − u_x v_z) q = u_x v_y − u_y v_x, or, in terms of Jacobians,

∂(u,v)/∂(y,z) p + ∂(u,v)/∂(z,x) q = ∂(u,v)/∂(x,y) …(3),

which is a quasi-linear equation.

Thus ∂(u,v)/∂(y,z) p + ∂(u,v)/∂(z,x) q = ∂(u,v)/∂(x,y) is the PDE associated with the family of surfaces F(u, v) = 0, where F is an arbitrary function and u and v are given functions of x, y, z.

Remark: Let u and v be functions of x and y. If v can be expressed as a function of u alone, without involving x and y, then ∂(u,v)/∂(x,y) = 0. Here we say u and v are functionally dependent.

Let v = H(u), where H is some function. Then v_x = H′(u) u_x and v_y = H′(u) u_y.

Eliminating H′(u), we get ∂(u,v)/∂(x,y) = u_x v_y − u_y v_x = 0.

Now consider a two-parameter family of surfaces z = F(x, y, a, b) …(1), where a and b are parameters.

Differentiating partially w.r.t. x and y, we get p = F_x(x, y, a, b) …(2)

and q = F_y(x, y, a, b) …(3).

Suppose the matrix

[ F_a  F_xa  F_ya ]
[ F_b  F_xb  F_yb ]

is of rank 2. Then, by the Implicit Function Theorem, we can solve for a and b from two of the above three equations in terms of x, y, z, p, q, and substituting in the remaining equation we get a PDE, f(x, y, z, p, q) = 0.

Eg.1. Consider z = x + a x² y² + b …(1). Differentiating (1) w.r.t. x and y respectively, p = 1 + 2a x y² …(2) and q = 2a x² y …(3).

Eliminating a between (2) & (3), we get x p − y q = x, a PDE.
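The elimination in Eg.1 can be checked mechanically; a small sympy sketch (not part of the text):

```python
import sympy as sp

# From z = x + a*x**2*y**2 + b, compute p = z_x and q = z_y and check
# that x*p - y*q = x holds identically in the parameters a and b.
x, y, a, b = sp.symbols('x y a b')
z = x + a * x**2 * y**2 + b
p, q = sp.diff(z, x), sp.diff(z, y)

assert sp.simplify(x * p - y * q - x) == 0
```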

Eg.2. Eliminate a and b to form a PDE, given z = a x + b y.

Differentiating partially w.r.t. x and y, we get, p = a & q = b . The PDE is obtained by eliminating a and b, from the above equations.

Thus, z = x p + y q.

Eg.3. Eliminate the arbitrary function F to form the PDE of the family of surfaces z = x + y + F(xy) …(1). On differentiation, p = 1 + F′(xy)·y …(2) and q = 1 + F′(xy)·x …(3). Eliminating F′(xy) from equations (2) & (3), we get the PDE x p − y q = x − y.

Eg.4. Eliminating F, form the PDE of the family of surfaces F(x + y, x − log z) = 0. Let u = x + y, v = x − log z.

On differentiating w.r.t. x and y,

F_u (1 + 0·p) + F_v (1 − p/z) = 0 and

F_u (1 + 0·q) + F_v (−q/z) = 0.

Eliminating F_u and F_v between the last two equations, we get the PDE

| 1   1 − p/z |
| 1   −q/z    | = 0, i.e. −q/z − 1 + p/z = 0, i.e. p − q = z.

Ex.1. Form the differential equation, given (a) z = y + F(x² + y²) (b) F(x − z, y − z) = 0 (c) z = F(x/y).

Ex.2. Form the differential equation, given (1) z = (x + a)(y + b) (2) 2z = (ax + y)² + b (3) z²(1 + a³) = (x + ay + b)³.

We have the classification of first order PDEs as given below.

(1) Linear equation

P(x, y) p + Q(x, y) q = R(x, y) z + S(x, y)

(2) Semi-linear equation

P(x, y) p + Q(x, y) q = R(x, y, z)

( 3 ) Quasi-linear equation

P(x, y, z) p + Q(x, y, z) q = R(x, y, z)

( 4 ) Non-linear equation

f ( x, y, z, p, q ) = 0.

The solution of a first order PDE in x, y, z is a surface in 3 dimensions, called an integral surface. There are different classes of integrals for a given PDE.

6.3. Classification of Integrals

Consider the PDE f(x, y, z, p, q) = 0.

(a ) Complete integral

A two-parameter family of solutions z = F(x, y, a, b) of the PDE f(x, y, z, p, q) = 0 …(*) is called a complete integral of (*) if, in the region of definition of the PDE, the rank of the matrix

[ F_a  F_xa  F_ya ]
[ F_b  F_xb  F_yb ]

is 2.

This condition implies that at least one of the 2 × 2 submatrices

[ F_a  F_xa ]   [ F_a  F_ya ]   [ F_xa  F_ya ]
[ F_b  F_xb ] , [ F_b  F_yb ] , [ F_xb  F_yb ]

is non-singular, i.e. invertible. It guarantees that we can solve for a and b from two of the equations z = F(x, y, a, b) …(1), p = F_x(x, y, a, b) …(2) and q = F_y(x, y, a, b) …(3) in terms of x, y, z, p or q, and then eliminate a and b by substituting in the remaining equation, so that equation (*) is recovered or satisfied. This is a consequence of the Implicit Function Theorem.

( b ) General integral

Let z = F(x, y, a, b) …(1) be a complete integral of (*), where a and b are arbitrary constants, referred to as parameters in the geometrical context, since (1) represents a two-parameter family of surfaces in 3 dimensions.

Let us assume that a and b are functionally related, so that b = φ(a), where φ is an arbitrary function. Correspondingly, we get a one-parameter subfamily z = F(x, y, a, φ(a)) of the two-parameter family of surfaces represented by (1). The envelope of this family, if it exists, is also a solution of the PDE (*), called the general integral. The envelope is obtained by eliminating the parameter a between the equations

z = F(x, y, a, φ(a)) …(2) and 0 = F_a + F_b φ′(a) …(3),

obtained by differentiating (2) partially w.r.t. the parameter a. The elimination gives G(x, y, z) = 0, a surface in 3 dimensions. If instead of an arbitrary function φ we use a definite relation between a and b, like b = a or b = a + 2a² or b = sin a, etc., and proceed to find the envelope of the corresponding subfamily of (1), then the resulting envelope, if it exists, is a solution of (*), called a particular integral.

(c) Singular Integral

The envelope of the two-parameter family of surfaces z = F ( x, y, a, b ) …( 1 ), if it exists, is also a solution of the PDE ( * ), called the singular integral. The envelope can be obtained by eliminating the parameters a and b from the equations, z = F ( x, y, a, b ) …( 1 ), 0 = Fa ( x, y, a, b ) … ( 2 ) and 0 = Fb ( x, y, a, b) …( 3 ).

( d ) Special Integral

In certain cases we can obtain solutions which do not fall under the above classes, called special integrals. For the PDE p − q = z, z = 0 is a solution which does not belong to the three classes of solutions mentioned above.

Theorem.1. The envelope of a one-parameter family z = F(x, y, a) of solutions of the PDE f(x, y, z, p, q) = 0, if it exists, is also a solution of the PDE.

Proof: The envelope is obtained by eliminating the parameter a between z = F(x, y, a) …(1) and 0 = F_a(x, y, a) …(2). Thus the envelope is z = G(x, y) = F(x, y, a(x, y)), where a(x, y) is obtained from (2) by solving for a in terms of x and y.

For points on the envelope, G_x = F_x + F_a a_x = F_x and G_y = F_y + F_a a_y = F_y, since on the envelope F_a = 0. Thus the envelope has the same partial derivatives as those of a member of the family at a given point. Since the PDE at a point is a relation to be satisfied by the coordinates of the point and the partial derivatives at that point, we get that the envelope is also a solution of the PDE.

Theorem.2. The Singular integral is a solution.

Proof: Let z = F(x, y, a, b) …(1) be a complete integral of f(x, y, z, p, q) = 0 …(*). The singular integral of (*) is obtained by eliminating a and b between

z = F(x, y, a, b) …(1), 0 = F_a(x, y, a, b) …(2), 0 = F_b(x, y, a, b) …(3).

Hence the envelope is z = G(x, y) = F(x, y, a(x, y), b(x, y)), where a(x, y) and b(x, y) are obtained from (2) & (3) by solving for a and b in terms of x and y.

For the envelope, G_x = F_x + F_a a_x + F_b b_x = F_x and G_y = F_y + F_a a_y + F_b b_y = F_y, since F_a = 0, F_b = 0 on the envelope. Thus at any point on the envelope the partial derivatives are the same as those of a member of the family. Hence the envelope is also a solution of (*).

Remark: Recall that an envelope of a family, at a given point on it, touches a member of the family.

Remark: The singular integral can also be determined directly from the given PDE ( * ) by the following procedure.

The singular integral is obtained by eliminating p and q from the equations f(x, y, z, p, q) = 0 …(*), f_p(x, y, z, p, q) = 0 …(**), f_q(x, y, z, p, q) = 0 …(***), treating p and q as parameters. Let z = F(x, y, a, b) be a complete integral of (*).

Then f(x, y, F(x, y, a, b), F_x(x, y, a, b), F_y(x, y, a, b)) = 0, which holds for every a and b. Differentiating partially w.r.t. a and b, we get

f_z F_a + f_p F_xa + f_q F_ya = 0 and f_z F_b + f_p F_xb + f_q F_yb = 0.

Since on the singular integral F_a = 0 and F_b = 0, the above equations simplify to

f_p F_xa + f_q F_ya = 0 and f_p F_xb + f_q F_yb = 0 …(#).

Since the rank of the matrix

[ F_a  F_xa  F_ya ]
[ F_b  F_xb  F_yb ]

is 2, and F_a = 0 and F_b = 0, the matrix

[ F_xa  F_ya ]
[ F_xb  F_yb ]

is non-singular. Hence (#) gives f_p = 0 and f_q = 0. Hence the result.

Eg. It can be shown that z = F(x, y, a, b) = a x + b y + a² + b² …(1) is a complete integral of the PDE f(x, y, z, p, q) = z − px − qy − p² − q² = 0 …(2). From (1), by differentiating partially w.r.t. x and y, we get p = a and q = b.

Then equation (2) is satisfied by (1), i.e. (1) is a solution of (2) for any two arbitrary constants a and b.

Further,

[ F_a  F_xa  F_ya ]   [ x + 2a  1  0 ]
[ F_b  F_xb  F_yb ] = [ y + 2b  0  1 ]

is of rank 2. Thus (1) is a complete integral of (2). We can find particular integrals by relating a and b and finding the envelope of the corresponding subfamily. Let b = a. Then we get the family z = a(x + y) + 2a². Differentiating w.r.t. a, 0 = (x + y) + 4a. On elimination of a, we get the envelope as

z = −[(x + y)/4](x + y) + 2( −(x + y)/4 )² = −(x + y)²/8, or (x + y)² + 8z = 0,

a particular integral.

The singular integral is obtained as follows. Eliminate a and b from z = ax + by + a² + b², 0 = x + 2a, 0 = y + 2b. The singular integral is x² + y² + 4z = 0.
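The elimination of a and b for the family z = ax + by + a² + b² can be automated; a sympy sketch (illustrative, not part of the text):

```python
import sympy as sp

# Eliminate a, b from z = F, 0 = F_a, 0 = F_b for F = a*x + b*y + a**2 + b**2.
x, y, a, b = sp.symbols('x y a b')
F = a * x + b * y + a**2 + b**2

sol = sp.solve([sp.diff(F, a), sp.diff(F, b)], [a, b])  # a = -x/2, b = -y/2
envelope = F.subs(sol)                                  # z on the envelope

# The eliminant is the singular integral x**2 + y**2 + 4*z = 0.
assert sp.simplify(x**2 + y**2 + 4 * envelope) == 0
```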

Remark: A PDE can have more than one complete integral, so the term 'complete' should not be misinterpreted. Also, the particular integrals and the singular integral are not members of the family z = F(x, y, a, b) for any values of a and b.

6.4. Linear Equations

The following theorem provides a method for finding the general integral of a given quasi-linear equation.

Theorem. The general integral of the quasi-linear equation P(x, y, z) p + Q(x, y, z) q = R(x, y, z) …(1), where P, Q, R are continuously differentiable functions of x, y and z, is F(u, v) = 0 …(2), where F is an arbitrary function and u and v are functions such that u(x, y, z) = c1 and v(x, y, z) = c2 are two independent solutions of the system of ordinary differential equations

dx/P = dy/Q = dz/R …(3).

Proof: Since u(x, y, z) = c1 is a solution of (3), du = 0, i.e. u_x dx + u_y dy + u_z dz = 0 along the solution curves, and hence u_x P + u_y Q + u_z R = 0 …(4).

Similarly, we get v_x P + v_y Q + v_z R = 0 …(5). Thus, from equations (4) & (5),

P / [∂(u,v)/∂(y,z)] = Q / [∂(u,v)/∂(z,x)] = R / [∂(u,v)/∂(x,y)] …(6)

(here we use the assumption that u and v are independent).

But we have seen earlier that F(u, v) = 0 produces the PDE

∂(u,v)/∂(y,z) p + ∂(u,v)/∂(z,x) q = ∂(u,v)/∂(x,y) …(7).

Now, from (6) & (7), it follows that P p + Q q = R. Thus F(u, v) = 0 is a solution of the given PDE, and it is the general integral, by the presence of the arbitrary function.

Remark: It may be noted that the general integral is a family of surfaces, and a member may be fixed by assigning a specific form to the function F.

For given constants c1 and c2, the solutions u(x, y, z) = c1 and v(x, y, z) = c2 of the auxiliary equations dx/P = dy/Q = dz/R determine the curve of intersection of the surfaces u(x, y, z) = c1 and v(x, y, z) = c2.

Eg.1. Find the general integral of z(x p − y q) = y² − x².

Consider the auxiliary equations dx/(zx) = dy/(−zy) = dz/(y² − x²).

From the first two ratios, dx/x + dy/y = 0, whose solution is x y = c1.

Also, (dx + dy)/(z(x − y)) = dz/(y² − x²), i.e. (x + y)(dx + dy) + z dz = 0, whose solution is (x + y)² + z² = c2.

The general integral is F( x y, (x + y)² + z² ) = 0, where F is an arbitrary function.

Eg.2. Find the general integral of z(x p + y q) = x + y.

Consider the auxiliary equations dx/(zx) = dy/(zy) = dz/(x + y).

From the first two ratios, dx/x = dy/y, whose solution is x/y = c1. Also, (dx + dy)/(z(x + y)) = dz/(x + y), i.e. dx + dy = z dz, whose solution is 2(x + y) − z² = c2.

The general integral is F( x/y, 2(x + y) − z² ) = 0, where F is arbitrary.

Ex. Determine the general integral of (a) ( y + 1 ) p + ( x + 1 ) q = z ( b ) x ( y – z ) p + y ( z – x ) q = z ( x – y ) ( c ) x p + y q = z

6.5.Pfaffian Differential Equations

A differential equation of the form Σ_{i=1}^{n} F_i(x_1, …, x_n) dx_i = 0 …(1), where the F_i's are continuous functions, is called a Pfaffian differential equation in the n variables x_1, …, x_n.

The Pfaffian differential form Σ_{i=1}^{n} F_i(x_1, …, x_n) dx_i is said to be exact if there exists a continuously differentiable function u(x_1, …, x_n) such that du = Σ_{i=1}^{n} F_i dx_i, and is called integrable if there exists a non-zero differentiable function μ(x_1, …, x_n) such that μ Σ_{i=1}^{n} F_i dx_i is exact. Here μ is called an integrating factor.

If the Pfaffian differential form is exact as described above, then the solution of equation (1) is u(x_1, …, x_n) = c, a constant. (The terms 'exact' and 'integrable' are also attributed to the corresponding equations.)

Theorem.1. The Pfaffian differential equation P(x, y) dx + Q(x, y) dy = 0, in the two variables x and y, is always integrable.

Proof: If Q(x, y) = 0, then the equation reduces to dx = 0, which is exact with u(x, y) = x. Otherwise, we get dy/dx = −P/Q, a first order ordinary differential equation. By Picard's theorem, the above equation has a solution.

Theorem.2. Let u(x, y) and v(x, y) be two functions such that ∂v/∂y ≠ 0. If, further, ∂(u,v)/∂(x,y) = 0, then there exists a functional relation F(u, v) = 0 between u and v, not involving x and y directly.

Proof: Since ∂v/∂y ≠ 0, we can eliminate y between u = u(x, y) and v = v(x, y) to obtain a relation F(u, v, x) = 0 …(*). We claim that F is independent of x. On differentiating (*) w.r.t. x and y, we get

∂F/∂x + (∂F/∂u) u_x + (∂F/∂v) v_x = 0 …(1) and (∂F/∂u) u_y + (∂F/∂v) v_y = 0 …(2).

Case 1: v_x ≠ 0. Then (1)·v_y − (2)·v_x gives (∂F/∂x) v_y + (∂F/∂u) ∂(u,v)/∂(x,y) = 0. But ∂(u,v)/∂(x,y) = 0, so (∂F/∂x) v_y = 0, which gives ∂F/∂x = 0, since v_y ≠ 0.

Case 2: v_x = 0. Then ∂(u,v)/∂(x,y) = u_x v_y = 0, and v_y ≠ 0 gives u_x = 0, so (1) reduces to ∂F/∂x = 0.

Thus, in either case, ∂F/∂x = 0, which implies that F is independent of x, i.e. F is a function of u and v only.

Theorem.3. Let X = (P, Q, R), where P, Q, R are continuously differentiable functions of x, y, z, and let μ be a differentiable function of x, y, z. If X · curl X = 0, then μX · curl μX = 0; and conversely, when μ ≠ 0.

Proof: Consider (Σ_{x,y,z} denoting the cyclic sum over x, y, z)

μX · curl μX = Σ_{x,y,z} μP ( ∂(μR)/∂y − ∂(μQ)/∂z )
= μ² Σ_{x,y,z} P ( ∂R/∂y − ∂Q/∂z ) + μ Σ_{x,y,z} P ( R ∂μ/∂y − Q ∂μ/∂z )
= μ² ( X · curl X ),

since the second sum vanishes identically (its terms cancel in pairs). Since X · curl X = 0, we get from the above identity that μX · curl μX = 0, for any μ.

Conversely, if μX · curl μX = 0 and μ ≠ 0, then from the above identity X · curl X = 0.

Remark: A Pfaffian differential form in the 3 variables x, y, z can be written as P(x, y, z) dx + Q(x, y, z) dy + R(x, y, z) dz = X · dr, where X = (P, Q, R) and r = (x, y, z).

Theorem.4. A necessary and sufficient condition for the Pfaffian differential equation X · dr = P(x, y, z) dx + Q(x, y, z) dy + R(x, y, z) dz = 0 …(*) to be integrable is X · curl X = 0.

Proof: Suppose (*) is integrable. Then there exist differentiable functions μ(x, y, z) ≠ 0 and u(x, y, z) such that du = μ( P dx + Q dy + R dz ). But du = u_x dx + u_y dy + u_z dz. On comparison, μP = u_x, μQ = u_y, μR = u_z, i.e. μX = grad u. But then curl μX = curl grad u = 0. Thus μX · curl μX = 0.

Since μ(x, y, z) ≠ 0, from Theorem.3 we get X · curl X = 0.

Conversely, assume X · curl X = 0. Treating z as a constant, (*) becomes P(x, y, z) dx + Q(x, y, z) dy = 0 …(**), a Pfaffian differential equation in the two variables x and y, which is always integrable, by Theorem 1. Then there exist μ(x, y, z) ≠ 0 and U(x, y, z) such that μ(P dx + Q dy) = dU (with z held fixed), i.e.

μP = ∂U/∂x, μQ = ∂U/∂y …(#).

Multiplying (*) by μ and using (#),

(∂U/∂x) dx + (∂U/∂y) dy + (∂U/∂z) dz + (μR − ∂U/∂z) dz = 0,

i.e. dU + K dz = 0 …(***), where K = μR − ∂U/∂z.

Now we claim that K is a function of U and z alone, so that (***) is a Pfaffian differential equation in the two variables U and z, and thereby integrable. Since it is assumed that X · curl X = 0, by Theorem.3, μX · curl μX = 0.

But μX = (μP, μQ, μR) = (U_x, U_y, U_z + K) = grad U + (0, 0, K). Then

curl μX = curl grad U + curl (0, 0, K) = 0 + ( ∂K/∂y, −∂K/∂x, 0 ).

Thus 0 = μX · curl μX = (U_x, U_y, U_z + K) · ( ∂K/∂y, −∂K/∂x, 0 ) = U_x K_y − U_y K_x = ∂(U, K)/∂(x, y).

Hence, by Theorem 2, K can be expressed as a function of U and z, without involving x or y. Thus (***) becomes dU + K(U, z) dz = 0, which is integrable. Let its solution be Φ(U, z) = c. The solution of (*) is then obtained by substituting U = U(x, y, z) in Φ(U, z) = c.

Theorem.5. The Pfaffian differential equation

X · dr = P(x, y, z) dx + Q(x, y, z) dy + R(x, y, z) dz = 0 …(*)

is exact iff curl X = 0.

Proof: By definition, (*) is exact iff there exists u(x, y, z) such that X · dr = du = grad u · dr, i.e. iff X = grad u for some u, which holds iff curl X = 0.

Eg.1. Show that yz dx + zx dy + xy dz = 0 is exact and solve it. Here X = (yz, zx, xy). Then curl X = 0. Hence, by Theorem 5, the equation is exact. The solution is xyz = c, a constant.

Eg.2. Solve (6x + yz) dx + (xz − 2y) dy + (xy + 2z) dz = 0. Here X = (6x + yz, xz − 2y, xy + 2z). Then curl X = 0. Hence, by Theorem 5, the equation is exact. The solution is obtained by direct integration as 3x² + xyz − y² + z² = c.

Eg.3. Solve (y² + yz) dx + (xz + z²) dy + (y² − xy) dz = 0. Here X = (y² + yz, xz + z², y² − xy). Then X · curl X = 0. Hence, by Theorem 4, the equation is integrable. (Note that the equation is not exact, since curl X ≠ 0.) Treat z as a constant. Then the equation becomes (y² + yz) dx + (xz + z²) dy = 0, i.e. y(y + z) dx + z(x + z) dy = 0, i.e. dx/(x + z) + z dy/( y(y + z) ) = 0, i.e. dx/(x + z) + [ 1/y − 1/(y + z) ] dy = 0. Integrating, log(x + z) + log y − log(y + z) = log c, so that U = (x + z) y/(y + z). Let μ be the integrating factor.

Then μP = U_x, i.e. μ · y(y + z) = y/(y + z), so μ = 1/(y + z)².

Thus K = μR − U_z = y(y − x)/(y + z)² − y(y − x)/(y + z)² = 0. The original equation becomes dU = 0, whose solution is U = c, i.e. (x + z) y/(y + z) = c.

Eg.4. Show that yz dx + 2xz dy − 3xy dz = 0 is integrable and solve it. Here X = (yz, 2xz, −3xy). Then X · curl X = 0. Hence, by Theorem 4, the equation is integrable. (Note that the equation is not exact, since curl X ≠ 0.) Treat z as a constant. Then the equation becomes yz dx + 2xz dy = 0, or y dx + 2x dy = 0, i.e. dx/x + 2 dy/y = 0, whose solution is U = x y² = c. If μ is the integrating factor, then μ · yz = y², so μ = y/z. Now K = μR − U_z = (y/z)(−3xy) − 0 = −3xy²/z = −3U/z. The original equation reduces to dU − (3U/z) dz = 0, whose solution is U/z³ = c, i.e. x y²/z³ = c.
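Both steps of Eg.4, the integrability test and the final solution, can be verified symbolically; a sympy sketch (not part of the text; the factor y/z⁴ below combines the text's μ = y/z with the subsequent division by z³):

```python
import sympy as sp

# Eg.4: X = (y*z, 2*x*z, -3*x*y). Check X . curl X = 0 (integrability) and
# that u = x*y**2/z**3 is an integral: du = (y/z**4)*(P dx + Q dy + R dz).
x, y, z = sp.symbols('x y z')
P, Q, R = y * z, 2 * x * z, -3 * x * y

curl = (sp.diff(R, y) - sp.diff(Q, z),
        sp.diff(P, z) - sp.diff(R, x),
        sp.diff(Q, x) - sp.diff(P, y))
assert sp.simplify(P * curl[0] + Q * curl[1] + R * curl[2]) == 0

u, mu = x * y**2 / z**3, y / z**4
assert all(sp.simplify(sp.diff(u, v) - mu * c) == 0
           for v, c in ((x, P), (y, Q), (z, R)))
```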

Remark: The variables in a Pfaffian differential equation are independent, and the equation is written in terms of the differentials of these variables; no dependent variable appears in the equation. Pfaffian differential equations are used in finding the complete integral of a given non-linear PDE.

Ex.1. Solve (1 + yz) dx + x(z − x) dy − (1 + xy) dz = 0.

Ex.2. Test for integrability and solve z(z − y) dx + z(x + z) dy + x(x + y) dz = 0.

6.6. Charpit’s Method

The partial differential equations f(x, y, z, p, q) = 0 …(1) and g(x, y, z, p, q) = 0 …(2) are said to be compatible in a domain D in 3 dimensions if

(a) J = ∂(f, g)/∂(p, q) ≠ 0 on D, and

(b) for p = φ(x, y, z) and q = ψ(x, y, z), obtained by solving (1) & (2) algebraically, the Pfaffian differential equation dz = φ(x, y, z) dx + ψ(x, y, z) dy …(3) is integrable.

The condition J ≠ 0 guarantees the solvability of p and q in terms of the remaining variables x, y, z from (1) & (2), by virtue of the Implicit Function Theorem.

Theorem.1. A necessary and sufficient condition for the integrability of the Pfaffian differential equation dz = φ dx + ψ dy, where φ and ψ are the expressions for p and q obtained from f(x, y, z, p, q) = 0 …(1) and g(x, y, z, p, q) = 0 …(2), is that

[f, g] = ∂(f, g)/∂(x, p) + p ∂(f, g)/∂(z, p) + ∂(f, g)/∂(y, q) + q ∂(f, g)/∂(z, q) = 0.

Proof: Let X = (φ, ψ, −1). Then the Pfaffian differential equation dz = φ dx + ψ dy, i.e. φ dx + ψ dy − dz = 0, becomes X · dr = 0.

Hence, by the necessary and sufficient condition for the integrability of a Pfaffian differential equation, X · curl X = 0,

i.e. (φ, ψ, −1) · ( −ψ_z, φ_z, ψ_x − φ_y ) = 0, i.e. ψ_x + φ ψ_z = φ_y + ψ φ_z …(3).

By substituting φ and ψ for p and q respectively, wherever necessary and feasible, and by differentiating f(x, y, z, p, q) = 0 w.r.t. x and z, we get

f_x + f_p φ_x + f_q ψ_x = 0 …(4) and f_z + f_p φ_z + f_q ψ_z = 0 …(5).

(4) + φ·(5) ⇒ (f_x + φ f_z) + f_p (φ_x + φ φ_z) + f_q (ψ_x + φ ψ_z) = 0 …(6).

Similarly, from (2) we get (g_x + φ g_z) + g_p (φ_x + φ φ_z) + g_q (ψ_x + φ ψ_z) = 0 …(7).

(6)·g_p − (7)·f_p ⇒ ψ_x + φ ψ_z = (1/J) [ ∂(f, g)/∂(x, p) + φ ∂(f, g)/∂(z, p) ] …(8).

By differentiating equations (1) & (2) w.r.t. y and z, and proceeding as above, we get

φ_y + ψ φ_z = −(1/J) [ ∂(f, g)/∂(y, q) + ψ ∂(f, g)/∂(z, q) ] …(9).

Now from equations (3), (8) & (9), we get [f, g] = 0.

Remark: In the proof of the theorem, x, y, z are taken as independent variables and p and q as variables depending on them, since we are discussing the integrability of a Pfaffian differential equation in the variables x, y, z. The basic fact that x and y are independent and that p and q are the partial derivatives of z(x, y) w.r.t. x and y is not used explicitly anywhere.

Remark: If the partial differential equations f(x, y, z, p, q) = 0 …(1) and g(x, y, z, p, q) = 0 …(2) are compatible, then they have a one-parameter family of common solutions, given by the associated Pfaffian differential equation.

Charpit’s method : Consider the partial differential equation, f ( x, y, z, p, q ) = 0 ….( 1 ) .

The above theorem provides a tool for finding a complete integral of ( 1 ).

A PDE g(x, y, z, p, q, a) = 0 …(2), compatible with (1) for each constant value of a, can be obtained as follows. Assuming the compatibility of (1) & (2), we get the necessary and sufficient condition

[f, g] = ∂(f, g)/∂(x, p) + p ∂(f, g)/∂(z, p) + ∂(f, g)/∂(y, q) + q ∂(f, g)/∂(z, q) = 0 …(3).

Expanding the Jacobians and interpreting (3) in the context of the system of equations (1) & (2), which has x, y, z, p, q as independent variables and f, g as variables depending on them, we have a quasi-linear differential equation in the independent variables x, y, z, p, q and the dependent variable g with its first order partial derivatives g_x, g_y, g_z, g_p, g_q, since f is given. In this context, the condition (3) may be presented as

f_p g_x + f_q g_y + (p f_p + q f_q) g_z − (f_x + p f_z) g_p − (f_y + q f_z) g_q = 0 …(3).

Since (3) is a quasi-linear partial differential equation, its solution g can be obtained through the auxiliary equations

dx/f_p = dy/f_q = dz/(p f_p + q f_q) = dp/( −(f_x + p f_z) ) = dq/( −(f_y + q f_z) ) = dg/0 …(4).

Any solution of (4) containing at least one of p or q can be taken as g(x, y, z, p, q, a) = 0 …(2), where the arbitrary constant a arises naturally.

Differential Equations 58 Since by the assumption, the given PDE ( 1 ) and ( 2 ) which can be obtained as in the above discussion are compatible, the Pfaffian differential equation, dz = (x, y, z,a) dx + (x, y, z,a) dy , where (x, y, z,a) & (x, y, z,a) are the expressions for p & q obtained algebraically from ( 1 ) & ( 2 ), is integrable.

The solution of this Pfaffian differential equation, obtained as z = F( x, y, a, b ), where F is a known function and b is another arbitrary constant, is readily a solution of the original PDE ( 1 ), in the form of a complete integral.

Remark: We can drop the last expression in the auxiliary equation, since it would give only the trivial solution g = a constant function, which is not desirable.

Remark: That the PDEs f( x, y, z, p, q ) = 0 ….( 1 ) and g( x, y, z, p, q ) = 0 ….( 2 ) are compatible amounts to the assumption that they share a one-parameter family of common solutions; it does not mean that the equations are equivalent, which would assume that their solutions agree completely.

Eg.1. Consider the PDE, f = p²x² + q²y² – 4 = 0 …( 1 ).

The auxiliary equation to find a PDE g( x, y, z, p, q, a ) = 0 ….( 2 ), compatible with ( 1 ), is

dx/fp = dy/fq = dz/( p fp + q fq ) = - dp/( fx + p fz ) = - dq/( fy + q fz ) …( * )

i.e. dx/2x²p = dy/2y²q = dz/( 2p²x² + 2q²y² ) = - dp/2xp² = - dq/2yq² …( * )

We may consider dy/2y²q = - dq/2yq², i.e. dy/y = - dq/q ⇒ g = qy – a = 0 …( 2 )

Now from ( 1 ) & ( 2 ), q = a/y, p = √( 4 – a² )/x.

Consider dz = ( √( 4 – a² )/x ) dx + ( a/y ) dy, whose solution is z = √( 4 – a² ) log x + a log y + b.

Thus, we have got the complete integral of ( 1 ) as z = √( 4 – a² ) log x + a log y + b.
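As a quick numerical spot-check of this complete integral ( the parameter values and the sample point below are arbitrary choices, not part of the text ), we can approximate p = zx and q = zy by central differences and confirm that p²x² + q²y² = 4:

```python
import math

a, b = 1.0, 0.5   # arbitrary parameter values for the spot-check
def z(x, y):
    # complete integral z = sqrt(4 - a^2) log x + a log y + b
    return math.sqrt(4 - a*a) * math.log(x) + a * math.log(y) + b

def diff(f, t, h=1e-6):
    # central-difference approximation of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0 = 1.7, 2.3
p = diff(lambda x: z(x, y0), x0)   # z_x
q = diff(lambda y: z(x0, y), y0)   # z_y
residual = p*p*x0*x0 + q*q*y0*y0 - 4
```

Up to the finite-difference error, the residual vanishes for any choice of a with |a| < 2 and any x, y > 0.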

Eg.2. Find a complete integral of f = p + q - pq = 0 …( 1 ), by Charpit’s method. The auxiliary equation to find a PDE, g( x, y, z, p, q, a ) = 0 ….( 2 ), compatible with ( 1 ) is,

dx/fp = dy/fq = dz/( p fp + q fq ) = - dp/( fx + p fz ) = - dq/( fy + q fz ) …( * )

i.e. dx/( 1 – q ) = dy/( 1 – p ) = dz/( p( 1 – q ) + q( 1 – p ) ) = - dp/0 = - dq/0. A solution is q = a. Then ( 1 ) ⇒ p = a/( a – 1 ).

Now consider dz = p dx + q dy = ( a/( a – 1 ) ) dx + a dy, whose solution is z = ax/( a – 1 ) + ay + b, and hence a complete integral of ( 1 ).
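Here p and q are constants, so the check is purely algebraic; for several values of the parameter a ( arbitrary sample values ), p + q – pq vanishes identically:

```python
# p = a/(a - 1), q = a, as obtained above; check p + q - pq = 0
residuals = [abs(a/(a - 1) + a - (a/(a - 1))*a) for a in (2.0, 3.0, -1.5, 0.5)]
```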

Eg.3. Consider ( p² + q² )y – qz = 0 …( 1 ). The auxiliary equation is dx/2py = dy/( 2qy – z ) = dz/( 2( p² + q² )y – qz ) = - dp/( - pq ) = - dq/p². Consider dp/pq = - dq/p², or p dp + q dq = 0 ⇒ p² + q² = a², say.

Using ( 1 ), q = a²y/z, p = a√( z² – a²y² )/z.

Consider dz = ( a√( z² – a²y² )/z ) dx + ( a²y/z ) dy, i.e. ( z dz – a²y dy )/√( z² – a²y² ) = a dx.

On solving, we get a complete integral as √( z² – a²y² ) = ax + b.
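Solving the complete integral for z gives z = √( ( ax + b )² + a²y² ); a numerical spot-check ( arbitrary parameters and sample point, not from the text ) confirms that it satisfies ( p² + q² )y = qz:

```python
import math

a, b = 1.2, 0.4   # arbitrary parameters
def z(x, y):
    # z solved from sqrt(z^2 - a^2 y^2) = a x + b
    return math.sqrt((a*x + b)**2 + a*a*y*y)

def diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0 = 0.8, 1.1
p = diff(lambda x: z(x, y0), x0)
q = diff(lambda y: z(x0, y), y0)
residual = (p*p + q*q)*y0 - q*z(x0, y0)
```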

Ex.1. Find a complete integral of p²q² + x²y² = x²q²( x² + y² ).

Ex.2. Show that x²p + q – xz = 0 and xp – yq – x = 0 are compatible and hence find a one-parameter family of common solutions.

Ex.3. Find a complete integral of ( a ) p² + q² – 1 = 0 ( b ) ( p² + q² )x – pz = 0, by Charpit’s method.

Ex.4. Find a complete integral of p²y = 2( z + xp + yq ).

6.7. Jacobi’s method

Now let us consider a first order PDE, f( x, y, z, ux, uy, uz ) = 0 ..( 1 ), where u is a variable depending on the independent variables x, y, z. ( It may be noted that the dependent variable u does not appear explicitly in the equation. )

A solution u = F( x, y, z, a, b, c ) of ( 1 ), where a, b, c are arbitrary constants & F is a known function, is said to be a complete integral if the matrix

( Fa Fxa Fya Fza )
( Fb Fxb Fyb Fzb )
( Fc Fxc Fyc Fzc )

is of rank 3. For a PDE of the above type, a method will be developed to find the complete integral, similar to the development of Charpit’s method.

Two one-parameter families of partial differential equations, h1( x, y, z, ux, uy, uz, a ) = 0 ..( 2 ) and h2( x, y, z, ux, uy, uz, b ) = 0 ..( 3 ), are said to be compatible with f( x, y, z, ux, uy, uz ) = 0 ..( 1 ), if ∂( f, h1, h2 )/∂( ux, uy, uz ) ≠ 0 and the Pfaffian differential equation,

du = ux( x, y, z, a, b ) dx + uy( x, y, z, a, b ) dy + uz( x, y, z, a, b ) dz … ( 4 ),

is integrable, where ux( x, y, z, a, b ), uy( x, y, z, a, b ) and uz( x, y, z, a, b ) are obtained algebraically from ( 1 ), ( 2 ) & ( 3 ), for all values of a & b.

Since u is expected to be a function of x, y, z, the differential equation ( 4 ) is integrable iff it is exact. ( We have du = ux dx + uy dy + uz dz. ) The equation ( 4 ) is exact iff the mixed derivatives of order 2 agree, i.e. uxy = uyx, uyz = uzy, uxz = uzx.

Theorem. If the one-parameter families of partial differential equations h1( x, y, z, ux, uy, uz, a ) = 0 ..( 2 ) and h2( x, y, z, ux, uy, uz, b ) = 0 ..( 3 ) are compatible with f( x, y, z, ux, uy, uz ) = 0 ..( 1 ), then

∂( f, h )/∂( x, ux ) + ∂( f, h )/∂( y, uy ) + ∂( f, h )/∂( z, uz ) = 0, where h = h1 or h2.

Proof: Differentiating ( 1 ) partially w.r.t. x, y, z, we get,

∂f/∂x + ( ∂f/∂ux ) uxx + ( ∂f/∂uy ) uxy + ( ∂f/∂uz ) uxz = 0 …( 4 )

∂f/∂y + ( ∂f/∂ux ) uyx + ( ∂f/∂uy ) uyy + ( ∂f/∂uz ) uyz = 0 …( 5 )

∂f/∂z + ( ∂f/∂ux ) uzx + ( ∂f/∂uy ) uzy + ( ∂f/∂uz ) uzz = 0 …( 6 )

Similarly, ( 2 ) & ( 3 ) ⇒

∂h/∂x + ( ∂h/∂ux ) uxx + ( ∂h/∂uy ) uxy + ( ∂h/∂uz ) uxz = 0 …( 7 )

∂h/∂y + ( ∂h/∂ux ) uyx + ( ∂h/∂uy ) uyy + ( ∂h/∂uz ) uyz = 0 …( 8 )

∂h/∂z + ( ∂h/∂ux ) uzx + ( ∂h/∂uy ) uzy + ( ∂h/∂uz ) uzz = 0 …( 9 ), where h = h1, h2.

( ∂h/∂ux )·( 4 ) - ( ∂f/∂ux )·( 7 ) ⇒ ∂( f, h )/∂( x, ux ) + uxy ∂( f, h )/∂( uy, ux ) + uxz ∂( f, h )/∂( uz, ux ) = 0 …( 10 ), since uxy = uyx & uxz = uzx.

Similarly from ( 5 ) & ( 8 ), ∂( f, h )/∂( y, uy ) + uxy ∂( f, h )/∂( ux, uy ) + uyz ∂( f, h )/∂( uz, uy ) = 0 …( 11 ), since uxy = uyx & uyz = uzy.

Similarly from ( 6 ) & ( 9 ), ∂( f, h )/∂( z, uz ) + uxz ∂( f, h )/∂( ux, uz ) + uyz ∂( f, h )/∂( uy, uz ) = 0 …( 12 ), since uxz = uzx & uyz = uzy.

Now, ( 10 ) + ( 11 ) + ( 12 ) ⇒ ∂( f, h )/∂( x, ux ) + ∂( f, h )/∂( y, uy ) + ∂( f, h )/∂( z, uz ) = 0, since the remaining Jacobians cancel in pairs ( as ∂( f, h )/∂( uy, ux ) = - ∂( f, h )/∂( ux, uy ), etc. ). This is the required condition.

Remark: Given the PDE f( x, y, z, ux, uy, uz ) = 0 ..( 1 ), we may find two families of partial differential equations of the same type, h1( x, y, z, ux, uy, uz, a ) = 0 ..( 2 ) and h2( x, y, z, ux, uy, uz, b ) = 0 ..( 3 ), so that they are compatible. These families of PDEs are obtained using ( 1 ), by means of the compatibility condition established in the above Theorem, namely,

∂( f, h )/∂( x, ux ) + ∂( f, h )/∂( y, uy ) + ∂( f, h )/∂( z, uz ) = 0 …( * ). The Jacobians may be expanded to get the quasi linear PDE,

( ∂f/∂ux )( ∂h/∂x ) + ( ∂f/∂uy )( ∂h/∂y ) + ( ∂f/∂uz )( ∂h/∂z ) - ( ∂f/∂x )( ∂h/∂ux ) - ( ∂f/∂y )( ∂h/∂uy ) - ( ∂f/∂z )( ∂h/∂uz ) = 0 … ( * ),

in the independent variables x, y, z, ux, uy, uz and the dependent variable h.

Two independent solutions h1 = 0 & h2 = 0 of the associated auxiliary equation,

dx/( ∂f/∂ux ) = dy/( ∂f/∂uy ) = dz/( ∂f/∂uz ) = - dux/( ∂f/∂x ) = - duy/( ∂f/∂y ) = - duz/( ∂f/∂z ) …( ** )

Then from equations ( 1 ), ( 2 ), ( 3 ) ux, uy, uz are obtained algebraically in terms of x, y, z and the two arbitrary constants.

Substitute these expressions in du = ux dx + uy dy + uz dz, which will be an exact differential equation, and the solution obtained as u = F( x, y, z, a, b, c ) is a complete integral of the original PDE. The above procedure is known as Jacobi’s method.

Eg.1. Consider z² + z uz = ux² + uy².

The auxiliary equation is dx/( -2ux ) = dy/( -2uy ) = dz/z = - dux/0 = - duy/0 = - duz/( 2z + uz ).

Thus dux = 0 = duy ⇒ ux = a, uy = b. Then uz = ( a² + b² – z² )/z.

Then du = a dx + b dy + ( ( a² + b² – z² )/z ) dz ⇒ u = ax + by + ( a² + b² ) log z – z²/2 + c.
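A numerical spot-check of this complete integral ( arbitrary a, b, c and sample point ): ux = a and uy = b exactly, uz can be differenced, and z² + z uz - ( ux² + uy² ) should vanish:

```python
import math

a, b, c = 0.9, 0.7, 0.0   # arbitrary constants for the spot-check
def u(x, y, z):
    return a*x + b*y + (a*a + b*b)*math.log(z) - z*z/2 + c

def diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0, z0 = 0.5, 0.6, 1.3
ux = diff(lambda x: u(x, y0, z0), x0)
uy = diff(lambda y: u(x0, y, z0), y0)
uz = diff(lambda z: u(x0, y0, z), z0)
residual = z0*z0 + z0*uz - (ux*ux + uy*uy)
```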

Eg.2. Consider z + 2 uz = ( ux + uy )².

The auxiliary equation is,

dx/( -2( ux + uy ) ) = dy/( -2( ux + uy ) ) = dz/2 = - dux/0 = - duy/0 = - duz/1.

Thus dux = 0 = duy ⇒ ux = a, uy = b. Then uz = ( ( a + b )² – z )/2. Then du = a dx + b dy + ( ( ( a + b )² – z )/2 ) dz ⇒ u = ax + by + ( a + b )²z/2 - z²/4 + c.

Eg.3. Consider x ux + y uy = uz².

Auxiliary equation is, dx/x = dy/ y = dz/- 2 uz = - dux/ux = - duy/uy = -duz/0

Then dux/ux + dx/x = 0 and dy/y + duy/uy = 0 ⇒ x ux = a, y uy = b.

Then uz = √( a + b ).

Then du = ( a/x ) dx + ( b/y ) dy + √( a + b ) dz ⇒ u = a log x + b log y + √( a + b ) z + c.
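Again a numerical spot-check ( arbitrary a, b, c and sample point ) of x ux + y uy = uz² for u = a log x + b log y + √( a + b ) z + c:

```python
import math

a, b, c = 0.9, 0.7, 0.0   # arbitrary constants
def u(x, y, z):
    return a*math.log(x) + b*math.log(y) + math.sqrt(a + b)*z + c

def diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0, z0 = 1.4, 0.8, 2.1
ux = diff(lambda x: u(x, y0, z0), x0)
uy = diff(lambda y: u(x0, y, z0), y0)
uz = diff(lambda z: u(x0, y0, z), z0)
residual = x0*ux + y0*uy - uz*uz
```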

Eg.4. Consider, f( ux , uy, uz ) = 0, where f is a given function.

The auxiliary equation is, dx/( ∂f/∂ux ) = dy/( ∂f/∂uy ) = dz/( ∂f/∂uz ) = - dux/0 = - duy/0 = - duz/0.

Thus dux = duy = duz = 0 ⇒ ux = a, uy = b, uz = c, where f( a, b, c ) = 0. Then du = a dx + b dy + c dz ⇒ u = ax + by + cz + d, with f( a, b, c ) = 0.

Ex. Find the complete integral by Jacobi’s method, given ux² + uy² + uz = 1.

Remark: We can apply Jacobi’s method to find a complete integral of f ( x, y, z, p, q ) = 0. Let the solution be u( x, y, z ) = c , a constant. But, here z = z(x, y ).

Differentiating partially u( x, y, z(x,y) ) = c w. r. t. x, we get, ux + uz p = 0.

Thus p = - ux/uz and similarly q = - uy/uz. Now, substituting the above expressions for p & q in f( x, y, z, p, q ) = 0, we get

G( x, y, z, ux, uy, uz ) = 0. By employing Jacobi’s method, a solution is obtained as u = u( x, y, z, a, b ) + C. But we have already taken u = c in the beginning, and we are expecting only 2 arbitrary constants in the complete integral. The situation can be overcome with C = c, so that the complete integral of f( x, y, z, p, q ) = 0 is obtained as u( x, y, z, a, b ) = 0.

Eg. Consider p²x + q²y = z.

Assume the solution as u( x, y, z ) = c. Then p = - ux/uz & q = - uy/uz.

On substitution, we get x ux² + y uy² – z uz² = 0. The auxiliary equation is,

dx/2xux = dy/2yuy = dz/( -2zuz ) = - dux/ux² = - duy/uy² = - duz/( -uz² )

The two independent solutions are x ux² = a, y uy² = b. Then z uz² = a + b.

Then ux = √( a/x ), uy = √( b/y ), uz = ± √( ( a + b )/z ); take uz = - √( ( a + b )/z ).

On integrating du = ux dx + uy dy + uz dz, we get u = 2√( ax ) + 2√( by ) - 2√( ( a + b )z ) + c. Thus we have a complete integral of the given PDE as √( ax ) + √( by ) - √( ( a + b )z ) = 0.
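Solving this complete integral for z gives z = ( √( ax ) + √( by ) )²/( a + b ) ( with the sign of uz chosen so that the relation can be solved for z ); a numerical spot-check with arbitrary parameters confirms p²x + q²y = z:

```python
import math

a, b = 1.5, 0.5   # arbitrary parameters
def z(x, y):
    return (math.sqrt(a*x) + math.sqrt(b*y))**2 / (a + b)

def diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0 = 1.2, 2.0
p = diff(lambda x: z(x, y0), x0)
q = diff(lambda y: z(x0, y), y0)
residual = p*p*x0 + q*q*y0 - z(x0, y0)
```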

Ex. Find the complete integral of ( a ) ( p² + q² )y = qz ( b ) pqxy = z³.

6.8. The Cauchy Problem

The problem of determining the integral surface of a given PDE containing a given curve is called a Cauchy problem. Different approaches are made for solving the problem according to the type of the equation given.

Quasi linear equation

Let C be a given curve with parametric equation, x = x0 ( s ), y = y0 ( s ), z = z0 ( s ), where s is the parameter. The problem is to determine the integral surface of the quasi linear PDE, P ( x, y, z ) p + Q ( x, y, z ) q = R ( x, y, z ) …( 1 ) containing the given curve C.

Let F ( u, v ) = 0 …( 2 ) be the general integral of ( 1 ), where u ( x, y, z ) = c1

& v ( x, y, z ) = c2 are two independent solutions of the auxiliary equation, dx / P = dy/Q = dz/R …( 3 ). Here we have to fix the function F, so that the surface

F( u, v ) = 0 contains C ( i.e. we have to relate the arbitrary constants c1 & c2 ). This can be done as follows.

Substitute x = x0 ( s ), y = y0 ( s ), z = z0 ( s ) in the equations, u ( x, y, z ) = c1

& v( x, y, z ) = c2, and then eliminate s between the resulting equations, so that the required relation between c1 & c2, i.e. the form of the function F, will be obtained.

Eg.1. Consider the PDE, x³ p + y( 3x² + y ) q = z( 2x² + y ) …( 1 ), and the curve

C : x0 = 1, y0 = s, z0 = s( 1 + s ) ….( 2 ). We have to find the integral surface of ( 1 ) containing C. First, we have to find two independent solutions of the auxiliary equation,

dx/x³ = dy/( y( 3x² + y ) ) = dz/( z( 2x² + y ) ).

Using the multipliers - 1/x, 1/y, - 1/z, we get - dx/x + dy/y - dz/z = 0, since - x² + ( 3x² + y ) - ( 2x² + y ) = 0; hence u = y/( xz ) = c1.

For a second solution, consider v = ( x³ + y + xy )/y. Along the characteristics, dv = ( 3x²/y + 1 ) dx - ( x³/y² ) dy, and ( 3x²/y + 1 ) x³ - ( x³/y² ) y( 3x² + y ) = 3x⁵/y + x³ - 3x⁵/y - x³ = 0, so v = ( x³ + y + xy )/y = c2.

Substituting x = 1, y = s, z = s( 1 + s ) in u = c1 & v = c2, we get 1/( 1 + s ) = c1 & ( 1 + 2s )/s = c2. Eliminating s, we get c1c2 – c1 – c2 + 2 = 0. Thus the required surface is uv – u – v + 2 = 0,

i.e. ( y/( xz ) )( ( x³ + y + xy )/y ) - y/( xz ) - ( x³ + y + xy )/y + 2 = 0.

Ex.1. Find the general integral of ( x – y )y²p + ( y – x )x²q = ( x² + y² )z and the integral surface containing the curve xz = a², y = 0.

Ex.2. Find the integral surface of the PDE, ( x – y ) p + ( y – x – z ) q = z passing through the circle, z = 1, x 2 + y 2 = 1.

Non linear Equation

Let us consider the Cauchy problem for a non linear PDE. We are assuming that the solution is a particular integral. Let F( x, y, z, a, b ) = 0 ..( 1 ) be the complete integral of f( x, y, z, p, q ) = 0 …..( 2 ). To fix a particular integral, we have to isolate a one-parameter subfamily S of ( 1 ). Let E be the envelope of this subfamily which contains the given curve C. Since E is the envelope of the subfamily, it will touch each member of the family S. As C lies entirely on E, C will be tangential to each member of S, for some parameter s.

This will then imply that F( x0( s ), y0( s ), z0( s ), a, b ) = 0 ….( 3 ) and also that ( ∂F/∂s )( x0( s ), y0( s ), z0( s ), a, b ) = 0 ….( 4 ), for some s. Now eliminate s between ( 3 ) & ( 4 ) to get a relation between a & b, and thereby the subfamily S. Then E is found as the envelope of S.

It can be observed that the Cauchy problem for a non linear equation may have more than one solution, unlike the situation with the quasi linear equations.

Eg.1. Find the integral surface of ( p² + q² )x = pz …( 1 ) containing the curve C : x = 0, y = s², z = 2s.

The auxiliary equation is, dx/fp = dy/fq = dz/( p fp + q fq ) = - dp/( fx + p fz ) = - dq/( fy + q fz ) …( * )

i.e. dx/( 2px - z ) = dy/2qx = dz/pz = - dp/q² = - dq/( - pq ) …( * ), the dz-denominator 2( p² + q² )x - pz reducing to pz by ( 1 ).

From - dp/q² = dq/pq, p dp + q dq = 0 ⇒ p² + q² = a² …( 2 ).

Using ( 1 ), we get p = a²x/z & q = ± a√( z² - a²x² )/z.

Now consider dz = ( a²x/z ) dx ± ( a√( z² - a²x² )/z ) dy. On integration, we obtain the complete integral as z² = a²x² + ( ay + b )² ….( 3 )

Substituting x = 0, y = s², z = 2s, we get 4s² = ( as² + b )² ….( 4 ). Differentiating ( 4 ) partially w.r.t. s, 8s = ( as² + b )·4as, or 2 = a( as² + b ) …( 5 ). Eliminating s between ( 4 ) & ( 5 ), ab = 1, or b = 1/a. The corresponding one-parameter subfamily S of ( 3 ) is z² = a²x² + ( ay + 1/a )² ….( 6 ), or a⁴( x² + y² ) + a²( 2y – z² ) + 1 = 0 …( 7 ). To find the envelope of S, differentiate ( 7 ) partially w.r.t. a, to get 4a³( x² + y² ) + 2a( 2y – z² ) = 0 ….( 8 ). Eliminating a between ( 7 ) & ( 8 ), the envelope is obtained as z² = 2y + 2√( x² + y² ), which is the required solution.

Eg.2. Find the complete integral and the integral surface of p²x + qy – z = 0, containing the curve C : x + z = 0, y = 1.

The auxiliary equation is, dx/fp = dy/fq = dz/( p fp + q fq ) = - dp/( fx + p fz ) = - dq/( fy + q fz ) …( * )

i.e. dx/2px = dy/y = dz/( 2p²x + qy ) = - dp/( - p( 1 - p ) ) = - dq/0 …( * ) ⇒ q = a. Using the given PDE,

p = ± √( ( z - ay )/x ). Consider the Pfaffian differential equation dz = ± √( ( z - ay )/x ) dx + a dy, whose solution is √( z - ay ) = ± √x + √b, or, on squaring to remove the ambiguity, ( ay – z + x + b )² = 4bx …( * ). The curve can be parametrised as x = s, y = 1, z = - s. Substituting in ( * ), ( a + b + 2s )² = 4bs …( ** ). Differentiating ( ** ) w.r.t. s, ( a + b + 2s ) = b … ( *** ). Eliminating s between ( ** ) & ( *** ), b( b + 2a ) = 0, i.e. b = 0 or b = - 2a.

It can easily be seen that b = 0 leads to no solution. Take b = - 2a. Then ( * ) ⇒ ( ay – z + x - 2a )² = - 8ax …( $ ). Differentiating partially w.r.t. a, ( ay – z + x - 2a )( y – 2 ) = - 4x …( # ). The envelope is obtained by eliminating a between ( $ ) & ( # ) as xy = z( y - 2 ).
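Similarly, a spot-check of xy = z( y - 2 ), i.e. z = xy/( y - 2 ): it contains the curve x + z = 0, y = 1, and satisfies p²x + qy - z = 0 ( sample values arbitrary ):

```python
def z(x, y):
    return x*y/(y - 2)

# the surface contains the curve x + z = 0, y = 1
curve_vals = [abs(z(s, 1.0) + s) for s in (-1.0, 0.5, 2.0)]

def diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0 = 0.7, 3.5
p = diff(lambda x: z(x, y0), x0)
q = diff(lambda y: z(x0, y), y0)
residual = p*p*x0 + q*y0 - z(x0, y0)
```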

Ex.1. Find the integral surface of p²x + pqy – 2pz – x = 0 containing x = z, y = 1.

Ex.2. Determine the integral surface of pq = z containing x = 0, z = y².

Ex.3. Find the particular solution of ( x – y )y²p + ( y – x )x²q = ( x² + y² )z containing the curve xz = a², y = 0.

6.9 Geometry of Solutions

We may start our discussion with the semi linear PDE, P( x, y ) p + Q( x, y ) q = R( x, y, z ) … ( 1 ). Here it is assumed that P, Q, R are continuously differentiable functions.

Consider the equation dy/dx = Q( x, y )/P( x, y ) … ( 2 ), whose solution is a one-parameter family of curves in the x-y plane. ( 2 ) can also be written as, dx/dt = P( x, y ) .. ( 3.1 ), dy/dt = Q( x, y ) …( 3.2 )

Along these curves ( x = x( t ), y = y( t ) ), z( x, y ) will satisfy the ordinary differential equation,

dz/dx = zx + zy ( dy/dx ) = ( P( x, y ) zx + Q( x, y ) zy )/P( x, y ) = R( x, y, z )/P( x, y ), or dz/dt = ( dz/dx )( dx/dt ) = R( x, y, z ) ….( 3.3 )

The one-parameter family of curves in the x-y plane determined by ( 2 ) are called the characteristic curves of ( 1 ).

Let ( x0, y0 ) be a point in the x-y plane. Then by Picard’s theorem the initial value problem,

dx/dt = P( x, y ) .. ( 3.1 ), dy/dt = Q( x, y ) …( 3.2 ), x( 0 ) = x0, y( 0 ) = y0,

has a unique solution x( t ) = x( x0, y0, t ), y = y( x0, y0, t ), which is the unique characteristic curve passing through ( x0, y0 ). At ( x0, y0 ) on this curve, prescribe the z-value as z0. Then along this curve ( 3.3 ) will have a unique solution z = z( x0, y0, t ) such that z( 0 ) = z0. Let Γ be a given curve in the x-y plane such that Γ intersects each of the characteristic curves.

If Γ is equipped with the data, namely the value of z at each point ( x0, y0 ) of Γ, we can completely determine z( x, y ) for the region in the x-y plane containing the characteristic curves. Here Γ is called an initial data curve. It can be seen that Γ can not be chosen arbitrarily, but has to satisfy the ‘admissibility criterion’.

Eg. Solve the semi linear equation x q – y p = z with the initial condition that z( x, 0 ) = f( x ), x ≥ 0, where f( x ) is a given function.

Here the initial data curve Γ is the positive x-axis and it carries the information about the solution, namely, at ( x, 0 ) the solution is z = f( x ).

The characteristic curves are determined by dy/dx = - x/y ⇒ x² + y² = c². Thus the characteristic curves are the concentric circles about the origin. Note that Γ meets each of the characteristic curves at a unique point.

Along a characteristic curve x² + y² = c², dz/dx = - z/y = - z/√( c² - x² ),

i.e. dz/z = - dx/√( c² - x² ) ⇒ z = k( c ) e^( - sin⁻¹( x/c ) ) = k( √( x² + y² ) ) e^( - sin⁻¹( x/√( x² + y² ) ) ).

By the initial condition, f( x ) = z( x, 0 ) = k( x ) e^( - π/2 ), or k( x ) = f( x ) e^( π/2 ).

Thus we have the solution, z( x, y ) = f( √( x² + y² ) ) e^( π/2 - sin⁻¹( x/√( x² + y² ) ) ).

We now consider a quasi linear equation, P( x, y, z ) p + Q( x, y, z ) q = R( x, y, z ) …( 1 ). Any solution defines a surface z = z( x, y ) in 3 dimensions, with normal direction at a point ( x, y, z ) on it being ( p, q, - 1 ). Hence from the given PDE we can make the following observation: “A surface z = z( x, y ) is a solution of the PDE ( 1 ) iff the tangent plane at each of its points contains the direction ( P, Q, R ).” Here ( P, Q, R ) is called a characteristic direction, and the integral curves of the resulting vector field or direction field are called the characteristic curves. The characteristic curves are determined by the system of ordinary differential equations dx/P = dy/Q = dz/R, the auxiliary equation we have used earlier for finding the general integral. The equations may be rewritten as,

dx/dt = P( x, y, z ), dy/dt = Q( x, y, z ), dz/dt = R( x, y, z ) .. ( 2 )

Given a point ( x0, y0, z0 ) in space, there exists a unique solution for the above system, say x = x( t ), y = y( t ), z = z( t ), such that x( 0 ) = x0, y( 0 ) = y0, z( 0 ) = z0, which is geometrically the characteristic curve passing through ( x0, y0, z0 ).

Let Γ be a curve in space with equations x = x0( s ), y = y0( s ), z = z0( s ). Then there exists a unique characteristic curve passing through each point of Γ, and these curves taken together will determine an integral surface. The curve of intersection of two integral surfaces is a characteristic curve, and if a characteristic curve meets an integral surface then it should lie entirely on that surface. The system ( 2 ) produces a two-parameter family of curves in space, and any one-parameter subfamily will generate an integral surface.

Eg.1. Consider the initial value problem: determine the integral surface of zp + q = 1 containing the curve C : x = s, y = s, z = s/2. The given equation is quasi linear and the characteristic equations are,

dx/dt = z, dy/dt = 1, dz/dt = 1 ..( * )

We may solve it under the initial conditions x( s, 0 ) = s, y( s, 0 ) = s, z( s, 0 ) = s/2, so that we can determine all the characteristic curves through each point of the given curve C. The surface generated by these curves will be the integral surface containing C.

( * ) ⇒ z( s, t ) = t + a, y( s, t ) = t + b. Under the initial conditions, a = s/2, b = s. Now, from dx/dt = z = t + s/2, we get x( s, t ) = t²/2 + st/2 + c, and from x( s, 0 ) = s, we get c = s.

Thus, we have the solution as x = t²/2 + st/2 + s, y = t + s, z = t + s/2. We get the required surface by eliminating s & t from the above three equations. Solving for s & t in terms of x & y, s = ( x - y²/2 )/( 1 - y/2 ), t = ( y - x )/( 1 - y/2 ). Substituting in the last equation, z = ( 4y - 2x - y² )/( 2( 2 - y ) ).

Eg.2. Solve the Cauchy problem, 2p + yq = z, x = s, y = s², z = s.

The characteristic equations are, dx/dt = 2, dy/dt = y, dz/dt = z ..( * ).

Initial conditions are x( s, 0 ) = s, y( s, 0 ) = s², z( s, 0 ) = s. We get the solution, x = s + 2t, y = s²e^t, z = s e^t.

xzy Eliminating s & t , the solution to the Cauchy problem is obtained as, z 2  ye 2z .

Ex.1. Solve p + q = z² under the condition z( x, 0 ) = sin x.

Ex.2. Find the integral surface of xz p – yz q = y² – x², passing through the straight line x/2 = y/1 = z/1.

Finally, we consider a non linear equation, f( x, y, z, p, q ) = 0 …. ( * ).

Let ( x0, y0, z0 ) be a point in space, and z = z ( x, y ) be an integral surface through ( x0, y0, z0 ).

Then the equation of the tangent plane at ( x0, y0, z0 ) to this surface is, z – z0 = p( x – x0 ) + q( y – y0 ) ……..( 1 )

Assuming fq ≠ 0, we can solve for q from ( * ) as q = q( x, y, z, p ).

Thus p & q are not independent at ( x0, y0, z0 ), and ( 1 ) describes a one-parameter family of planes through ( x0, y0, z0 ), parametrised by p; its envelope is called the Monge cone of the PDE at ( x0, y0, z0 ). Thus a surface z = z( x, y ) is an integral surface iff it is tangential to the Monge cone at each point on it.

The equation of the Monge cone at ( x0, y0, z0 ) can be determined by eliminating the parameter p between ( 1 ) and 0 = ( x – x0 ) + ( y – y0 ) dq/dp, where q = q( x0, y0, z0, p ) is obtained from ( * ).

But from ( * ), we get fp + fq ( dq/dp ) = 0. Hence, on substitution, ( x – x0 ) fq = ( y – y0 ) fp, or ( x – x0 )/fp = ( y – y0 )/fq, which can be extended using ( 1 ) as ( x – x0 )/fp = ( y – y0 )/fq = ( z – z0 )/( p fp + q fq ) ..( 2 )

Thus the Monge cone at ( x0, y0, z0 ) can be determined by eliminating p & q from f( x, y, z, p, q ) = 0 …. ( * ), z – z0 = p( x – x0 ) + q( y – y0 ) ……..( 1 ), and

( x – x0 )/fp = ( y – y0 )/fq = ( z – z0 )/( p fp + q fq ) ..( 2 )

Here ( 2 ) represents the generators of the Monge cone.


CHAPTER 7

SECOND ORDER PARTIAL DIFFERENTIAL EQUATIONS

Let x, y be independent variables and u be a variable depending on x & y. A partial differential equation which contains second order partial derivatives of u, as well as the variables x, y, u and the first order derivatives ux, uy, will be called a second order partial differential equation. We will be discussing in some detail certain classical second order equations arising from physical contexts alone. They belong to the class of semi linear equations. A second order PDE in the form,

R( x, y ) uxx + S( x, y ) uxy + T( x, y ) uyy + g( x, y, u, ux, uy ) = 0 ….( 1 ),

where R, S, T are continuous functions of x and y, is called a semi linear equation.

7.1. Classification

Consider the second order semi linear PDE,

R( x, y ) uxx + S( x, y ) uxy + T( x, y ) uyy + g( x, y, u, ux, uy ) = 0 ….( 1 )

It may also be assumed that R, S, T have continuous partial derivatives w.r.t. x & y. The above equation can be reduced to certain canonical forms according to the type of the equation, and their solutions can be obtained. Consider S² - 4RT. The equation ( 1 ) is hyperbolic, parabolic or elliptic according as S² - 4RT >, =, < 0, respectively. We may transform the independent variables x & y to new variables ξ and η as ξ = ξ( x, y ) & η = η( x, y ).

Then ux = uξ ξx + uη ηx, uy = uξ ξy + uη ηy, and

uxx = uξξ ξx² + 2 uξη ξx ηx + uηη ηx² + uξ ξxx + uη ηxx, ……

Then R( x, y ) uxx + S( x, y ) uxy + T( x, y ) uyy + g = uξξ( R ξx² + S ξx ξy + T ξy² ) + uηη( R ηx² + S ηx ηy + T ηy² ) + uξη( 2R ξx ηx + S( ξx ηy + ξy ηx ) + 2T ξy ηy ) + G( ξ, η, u, uξ, uη ).

Taking the forms A( u, v ) = R u² + S u v + T v², B( u1, v1, u2, v2 ) = R u1 u2 + ( S/2 )( u1 v2 + u2 v1 ) + T v1 v2, it can be shown by direct computation that,

A( ξx, ξy ) A( ηx, ηy ) - B²( ξx, ξy, ηx, ηy ) = ( 4RT – S² )( ξx ηy - ξy ηx )²/4 ….( * )

The PDE ( 1 ) is transformed to,

A( ξx, ξy ) uξξ + 2 B( ξx, ξy, ηx, ηy ) uξη + A( ηx, ηy ) uηη + H( ξ, η, u, uξ, uη ) = 0 …( ** )

We may choose ξ and η so that the equation ( 1 ) will assume simpler forms according to its type.

Hyperbolic Equation ( S² – 4RT > 0 )

Consider R λ² + S λ + T = 0 .. ( 2 ). This equation has two distinct real roots λ1( x, y ) & λ2( x, y ). Consider the equations dy/dx + λ1( x, y ) = 0 and dy/dx + λ2( x, y ) = 0. Let the solutions be f( x, y ) = c & g( x, y ) = d. Then choose ξ = f( x, y ) and η = g( x, y ).

These choices make A( ξx, ξy ) = 0 and A( ηx, ηy ) = 0.

Hence the equation ( 1 ) is transformed to uξη = φ( ξ, η, u, uξ, uη ), which is called the canonical form of the hyperbolic equation.

Parabolic Equation ( S² – 4RT = 0 )

In this case, the roots of the equation ( 2 ) will coincide, say λ. Take ξ = f( x, y ) as in the above case. Take η as a function of x & y so that it is independent from ξ. Since these functions are independent, ξx ηy - ξy ηx ≠ 0.

Now, by the choice of ξ, A( ξx, ξy ) = 0. Thus ( * ) ⇒ B( ξx, ξy, ηx, ηy ) = 0. Since η is chosen independent of ξ, A( ηx, ηy ) ≠ 0. Thus we get the transformed equation as,

uηη = φ( ξ, η, u, uξ, uη ), called the canonical form of the parabolic equation.

Elliptic Equation ( S² – 4RT < 0 )

In this case the roots of ( 2 ) are complex, and proceeding as in the case of the hyperbolic type we determine the functions ξ = f( x, y ) and η = g( x, y ), which will be complex conjugates. We make a further transformation, α = ( ξ + η )/2 and β = ( ξ - η )/2i. Then the equation finally reduces to uαα + uββ = φ( α, β, u, uα, uβ ), in terms of the real variables α and β, which is called the canonical form of the elliptic equation.

Eg.1. We may reduce to canonical form uxx – x² uyy = 0. Here R = 1, S = 0, T = - x², so that S² – 4RT = 4x² > 0. The equation is of hyperbolic type. Consider R λ² + S λ + T = 0, i.e. λ² - x² = 0 ⇒ λ = ± x.

Now the equations dy/dx + λ1 = 0 and dy/dx + λ2 = 0 become dy/dx = ∓ x, so that y + x²/2 = c, y – x²/2 = d. Now take ξ = y + x²/2 and η = y – x²/2.

We get ux = x uξ - x uη, uy = uξ + uη,

uxx = ( uξ - uη ) + x²( uξξ - 2 uξη + uηη ), uyy = uξξ + 2 uξη + uηη.

Thus the equation is transformed to uξη = ( uξ - uη )/( 4( ξ - η ) ).

Eg.2. Consider y² uxx – 2xy uxy + x² uyy - ( y²/x ) ux - ( x²/y ) uy = 0.

Here S² – 4RT = 4x²y² - 4x²y² = 0; the equation is parabolic. The only root of R λ² + S λ + T = 0 is λ = x/y. dy/dx + x/y = 0 ⇒ x² + y² = c. Take ξ = x² + y². Choose η = x² - y², so that these functions are independent. The equation reduces to uηη = 0.

Eg.3. We may reduce to canonical form uxx + x² uyy = 0. Here S² – 4RT = - 4x² < 0; the equation is elliptic. R λ² + S λ + T = 0 becomes λ² + x² = 0 ⇒ λ = ± ix. We get ξ = iy + x²/2 and η = - iy + x²/2. Further, α = ( ξ + η )/2 = x²/2 and β = ( ξ - η )/2i = y. The equation is transformed to uαα + uββ = - uα/2α.
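The chain-rule identity behind Eg.1, uxx - x²uyy = ( uξ - uη ) - 4x² uξη with ξ = y + x²/2, η = y - x²/2, can be spot-checked numerically with an arbitrary test function w( ξ, η ) = ξ³η² ( not a solution of the PDE, just a smooth sample function ):

```python
def d2(f, t, h=1e-4):
    # second central difference
    return (f(t + h) - 2*f(t) + f(t - h)) / (h*h)

def w(xi, eta):
    return xi**3 * eta**2

def u(x, y):
    return w(y + x*x/2, y - x*x/2)

x0, y0 = 0.9, 1.4
xi, eta = y0 + x0*x0/2, y0 - x0*x0/2
lhs = d2(lambda x: u(x, y0), x0) - x0*x0*d2(lambda y: u(x0, y), y0)
# exact partial derivatives of w at (xi, eta)
w_xi, w_eta, w_xieta = 3*xi*xi*eta*eta, 2*xi**3*eta, 6*xi*xi*eta
rhs = (w_xi - w_eta) - 4*x0*x0*w_xieta
residual = lhs - rhs
```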

Ex.1. Transform ( n – 1 )² uxx - y^( 2n ) uyy = n y^( 2n - 1 ) uy into canonical form, where n is an integer.

Ex.2. Transform uxx – 4x² uyy = ( 1/x ) ux to canonical form.

7.2. ONE DIMENSIONAL WAVE EQUATION

Let y = y( x, t ) be the transverse displacement of a string at the position x at the instant t from the mean position, namely the x-axis. We may consider a small segment of the string, of length Δs, between two neighbouring points P & Q. The forces acting on this portion of the string are the tensions T1 & T2 respectively at P & Q, along the tangential directions.

Resolving the forces along the x-direction and the y-direction,

T2 cos θ2 = T1 cos θ1 = T, say, and

( ρ Δs ) ytt = T2 sin θ2 - T1 sin θ1 = T( tan θ2 - tan θ1 ), where ρ is the linear density and θ1 and θ2 are the inclinations of the tangents at P & Q.

Here tan θ2 = ( yx )|Q = ( yx )|P + ( yxx )|P Δx, approximately, and tan θ1 = ( yx )|P.

Hence ( ρ Δs ) ytt = T ( yxx )|P Δx, approximately. Taking the limit as Q → P, we get

ρ ytt = T yxx/√( 1 + yx² ). Assuming small displacements, yx² is negligible, and thereby we get,

yxx = ( 1/c² ) ytt, for some constant c ( with c² = T/ρ ), which is the one dimensional wave equation.

Let us proceed to find the solution of the following initial value problem. Consider an infinite string placed along the x-axis and undergoing vibrations about it, so that at the position x and at the instant t, its vertical displacement y is given by the equation, yxx = ( 1/c² ) ytt, - ∞ < x < ∞ and t > 0, with the initial conditions y( x, 0 ) = f( x ) & yt( x, 0 ) = g( x ). The wave equation is hyperbolic. Using the transformation ξ = x – ct, η = x + ct, the wave equation can be reduced to yξη = 0. The solution is y = F( ξ ) + G( η ), where F & G are arbitrary functions. In terms of the original variables x & t, y( x, t ) = F( x – ct ) + G( x + ct ) …( * ). By the initial condition y( x, 0 ) = f( x ), F( x ) + G( x ) = f( x ) …( 1 )

Since yt( x, t ) = - c F′( x – ct ) + c G′( x + ct ), using the condition yt( x, 0 ) = g( x ), - c F′( x ) + c G′( x ) = g( x ).

Then for a suitable x0, - c F( x ) + c G( x ) = ∫_{x0}^{x} g( s ) ds …( 2 )

Thus F( x ) = ( 1/2c )( c f( x ) - ∫_{x0}^{x} g( s ) ds ) and G( x ) = ( 1/2c )( c f( x ) + ∫_{x0}^{x} g( s ) ds ).

Thus the solution of the problem is,

y( x, t ) = ( f( x – ct ) + f( x + ct ) )/2 + ( 1/2c ) ∫_{x-ct}^{x+ct} g( s ) ds,

called the d’Alembert solution.

Remark: The straight lines x – ct = a constant & x + ct = a constant in the x-t plane are called characteristic curves. It can be shown that a given pair of characteristics of different types will fix the solution if the data is supplied on both of them. Suppose the data is given on the characteristics ξ = 0 & η = 0, i.e. assume y( 0, η ) = g( η ) and y( ξ, 0 ) = f( ξ ), for some given functions g & f. From the solution y( ξ, η ) = F( ξ ) + G( η ), we get g( η ) = F( 0 ) + G( η ) and f( ξ ) = F( ξ ) + G( 0 ), so that y( ξ, η ) = F( ξ ) + G( η ) = f( ξ ) + g( η ) - f( 0 ), since f( 0 ) = g( 0 ). But the solution can not be uniquely fixed if the data is given only on one characteristic.

Domain of dependence and range of influence

Let P ( x1, t1 ) be any point with t1 > 0.

x1 ct1 f (x1  ct1 )  f (x1  ct1 ) 1 Then we have, y ( x1 , t1 ) =  g(s)ds , so that , 2 2c  x1 c t1

f (A)  f (B) 1 B y ( P ) =   g(s)ds , where A ( x1 – c t1, 0 ) & B ( x1 + c t1, 0 ) are the 2 2c A points at which the characteristics x – c t = x1 – c t1 & x + c t = x1 + c t1 through P meets the x – axis. Here y ( P ) depends on the data given on the line segment AB, which is called the domain of dependence for P. The data at A ( x1 , 0 ) on x – axis will influence the solution y ( x , t ) at any point P ( x, t ) lying in the angular region bounded by the characteristics through A. Hence this region is called the range of influence of A.

Vibrations of a semi- infinite string. Consider the motion of an infinite string placed along the positive x- axis and tied at the end x = 0, and undergoing transverse vibrations about the mean position, namely, the x-axis. Then we have the following problem:- 1 yxx = ytt , 0  x   and t > 0 , with the initial conditions y ( x, 0 ) = u ( x ) & c 2 yt ( x, 0 ) = v ( x ), 0  x   , and the boundary conditions y ( 0, t ) = 0 = yt ( 0, t ). The d’ Alembert’s solution obtained earlier in the case of infinite string may not suit the situation, since at present the data ( i.e. u & v ) is available only for x > 0, but even for x ,

Differential Equations 75 t > 0 the above mentioned solution requires informations at x – ct which can assume negative values. To overcome the situation we may give odd extensions to u & v , by defining,  u(x) , x  0  v(x) , x  0 U ( x ) =  and V ( x ) =  .  u(x) , x  0  v(x) , x  0 We have gone for odd extensions to take care of the homogeneous boundary condition given at x = 0. We claim that the solution to the current problem is,

U(x  ct) U(x  ct) 1 xct y ( x , t ) =  V (s)ds …( & ) 2 2c xct

U(ct) U(ct) 1 ct Put x = 0. Then y( 0 , t ) =  V (s)ds = 0, since U & V are odd 2 2c ct functions.

U(x) U(x) 1 x Put t = 0. Then y ( x, 0 ) =  V (s)ds = u ( x ), for x > 0. 2 2c x

Similarly, we can show that yt(x, 0) = v(x) and yt(0, t) = 0, by taking ∫ V(s) ds = G(s) + c in (&), then differentiating w.r.t. t and finally substituting t = 0 (respectively x = 0).
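A quick numerical sanity check of this claim (a sketch assuming NumPy; the data u0 and v0 below are arbitrary choices supported on x ≥ 0): building the odd extensions U and V and evaluating (&) shows the fixed-end condition y(0, t) = 0 holds and the initial displacement is recovered.

```python
import numpy as np

def odd_ext(h):
    # odd extension: H(x) = h(x) for x >= 0, -h(-x) for x < 0
    def H(x):
        x = np.asarray(x, float)
        return np.where(x >= 0, h(np.abs(x)), -h(np.abs(x)))
    return H

u0 = lambda x: x*np.exp(-x**2)            # initial displacement on x >= 0 (arbitrary)
v0 = lambda x: np.sin(x)*np.exp(-x**2)    # initial velocity on x >= 0 (arbitrary)
U, V = odd_ext(u0), odd_ext(v0)

def y(x, t, c=1.0, n=4001):
    # formula (&): y = [U(x-ct)+U(x+ct)]/2 + (1/2c) * integral of V over [x-ct, x+ct]
    s = np.linspace(x - c*t, x + c*t, n)
    Vs = V(s)
    integral = float(np.sum((Vs[1:] + Vs[:-1])*(s[1:] - s[:-1])/2.0))
    return float(0.5*(U(x - c*t) + U(x + c*t)) + integral/(2.0*c))
```

The first assertion exploits exactly the oddness argument of the text: U(−ct) + U(ct) = 0 and the integral of an odd V over a symmetric interval vanishes.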

Vibrations of a finite string. Consider a string of length l placed along the x-axis and tied at both ends x = 0 and x = l, making transverse vibrations about the x-axis. We have

yxx = (1/c²) ytt, 0 ≤ x ≤ l and t > 0,

with the initial conditions y(x, 0) = u(x) and yt(x, 0) = v(x), 0 ≤ x ≤ l, and the boundary conditions y(0, t) = 0 = y(l, t) and yt(0, t) = 0 = yt(l, t). The d'Alembert solution obtained earlier in the case of the infinite string may not serve the purpose, since at present the data (i.e. u and v) is available only for 0 ≤ x ≤ l, but even for these x and t > 0 the above-mentioned solution requires information at x − ct and x + ct, which can assume values outside [0, l]. We may extend the data functions to cope with the situation. First, u and v are given odd extensions to [−l, l] and then periodic extensions with period 2l to cover −∞ < x < ∞. Define

U(x) = u(x) for 0 ≤ x ≤ l, U(x) = −u(−x) for −l ≤ x ≤ 0, and V(x) = v(x) for 0 ≤ x ≤ l, V(x) = −v(−x) for −l ≤ x ≤ 0, and then

U(x + r·2l) = U(x), V(x + r·2l) = V(x), for −l ≤ x ≤ l, r = 1, 2, …. Assuming U(x) and V(x) can be expanded in Fourier series, we have

U(x) = Σ_{m=1}^∞ um sin(mπx/l), where um = (2/l) ∫_0^l u(s) sin(mπs/l) ds, and

V(x) = Σ_{m=1}^∞ vm sin(mπx/l), where vm = (2/l) ∫_0^l v(s) sin(mπs/l) ds.

Then the solution becomes

y(x, t) = Σ_{m=1}^∞ [ um cos(mπct/l) + (l vm/mπc) sin(mπct/l) ] sin(mπx/l).

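This series solution can be exercised numerically (a sketch assuming NumPy; u(x) = x(l − x) and v = 0 are arbitrary test data, and the series is truncated at M terms): the coefficients are computed by the trapezoidal rule, and the partial sum reproduces the initial shape, vanishes at the tied ends, and is periodic in t with period 2l/c.

```python
import numpy as np

l, c, M = 1.0, 1.0, 80
u0 = lambda x: x*(l - x)                 # initial displacement (arbitrary choice)
v0 = lambda x: np.zeros_like(x)          # initial velocity

s = np.linspace(0.0, l, 4001)
ds = s[1] - s[0]
def coeff(h, m):
    # (2/l) * integral_0^l h(s) sin(m*pi*s/l) ds, trapezoidal rule
    vals = h(s)*np.sin(m*np.pi*s/l)
    return (2.0/l)*(np.sum(vals) - 0.5*vals[0] - 0.5*vals[-1])*ds

um = [coeff(u0, m) for m in range(1, M + 1)]
vm = [coeff(v0, m) for m in range(1, M + 1)]

def y(x, t):
    return sum((um[m-1]*np.cos(m*np.pi*c*t/l)
                + (l*vm[m-1]/(m*np.pi*c))*np.sin(m*np.pi*c*t/l))*np.sin(m*np.pi*x/l)
               for m in range(1, M + 1))
```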
Remark: The solution of the problem can be obtained by an alternate method, called the method of separation of variables. Here we assume that the solution can be written as y(x, t) = X(x) T(t). Then the equation becomes X''/X = T''/(c²T). Here the right side is a function of t alone, whereas the left side is a function of x alone. Hence each of them must be a constant, say λ. Therefore

X'' − λX = 0 …(1) and T'' − λc²T = 0 …(2)

From 0 = y(0, t) = X(0) T(t), we infer X(0) = 0, and similarly from y(l, t) = 0 we get X(l) = 0 (for otherwise we reach only the trivial solution).

Case 1. λ > 0.

The solution of (1) is X(x) = A e^{√λ x} + B e^{−√λ x}, where A and B are arbitrary constants. The conditions X(0) = 0 = X(l) give A = 0 and B = 0. This leads to the trivial solution.

Case 2. λ = 0. Now (1) gives X(x) = A + Bx. Again the conditions X(0) = 0 = X(l) imply A = 0 = B, and hence only the trivial solution.

Case 3. λ < 0. Now from (1) we get X(x) = A cos(√(−λ) x) + B sin(√(−λ) x). X(0) = 0 gives A = 0.

But X(l) = 0 gives B sin(√(−λ) l) = 0. Thus for a non-trivial solution we consider the possibility

n2 2 sin  l = 0 which gives  l  n , n = 1,2,3, . Taking    , called the eigen n l 2  nx  values of the equation ( 1 ), we get the solutions, X n  Bn sin   , n = 1, 2,3, and  l  correspondingly

(2) gives Tn(t) = Cn cos(nπct/l) + Dn sin(nπct/l), n = 1, 2, …. Hence, for n = 1, 2, …, we have the solutions

yn(x, t) = [an cos(nπct/l) + bn sin(nπct/l)] sin(nπx/l).

Since the boundary conditions are homogeneous, by the method of superposition we get a solution as y(x, t) = Σ_{n=1}^∞ yn(x, t). Substituting yn(x, t) and applying the initial conditions, we get

am = (2/l) ∫_0^l u(s) sin(mπs/l) ds and bm = (2/mπc) ∫_0^l v(s) sin(mπs/l) ds.

Note that we get the same solution obtained earlier.

Remark: The solution to the problem of vibrations of a finite string is unique, a consequence of the following theorem.

Theorem. The solution of the problem ytt − c² yxx = F(x, t), 0 ≤ x ≤ l and t > 0, with the initial conditions y(x, 0) = u(x) and yt(x, 0) = v(x), 0 ≤ x ≤ l, and the boundary conditions y(0, t) = 0 = y(l, t), if it exists, is unique.

Proof: Let there be two solutions, say, u1 & u2.

Let W = u1 − u2. Then W satisfies the problem Wtt − c² Wxx = 0, 0 ≤ x ≤ l and t > 0, with the initial conditions W(x, 0) = 0 and Wt(x, 0) = 0, and the boundary conditions W(0, t) = 0 = W(l, t). We will show that W = 0.

Consider E(t) = ∫_0^l (c²Wx² + Wt²) dx. Here E(t) is a differentiable function and W is twice differentiable.

Therefore

dE/dt = ∫_0^l 2(c²Wx Wxt + Wt Wtt) dx = [2c²Wx Wt]_0^l + 2 ∫_0^l Wt (Wtt − c²Wxx) dx = 0,

since for every t, W(0, t) = 0 = W(l, t) implies Wt(0, t) = 0 = Wt(l, t).

Thus E = a constant. But W(x, 0) = 0 gives Wx(x, 0) = 0, and Wt(x, 0) = 0 is given. Thus E(0) = 0, so E ≡ 0. Hence Wx = 0 = Wt for 0 ≤ x ≤ l and t > 0. This implies that W(x, t) = a constant, and hence W = 0, since W(x, 0) = 0.
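The energy integral used in this proof can be watched numerically. Below is a sketch (assuming NumPy; a single standing mode y = cos(nπct/l) sin(nπx/l) of the string problem is an arbitrary choice for the check): E(t) = ∫_0^l (c²yx² + yt²) dx evaluated at several times stays constant.

```python
import numpy as np

l, c, n = 1.0, 1.0, 3
k, w = n*np.pi/l, n*np.pi*c/l            # wave number and angular frequency
x = np.linspace(0.0, l, 4001)
dx = x[1] - x[0]

def energy(t):
    # E(t) = integral over [0, l] of c^2*yx^2 + yt^2 for y = cos(wt) sin(kx)
    yx = k*np.cos(w*t)*np.cos(k*x)       # y_x
    yt = -w*np.sin(w*t)*np.sin(k*x)      # y_t
    vals = c**2*yx**2 + yt**2
    return (np.sum(vals) - 0.5*vals[0] - 0.5*vals[-1])*dx
```

For this mode the exact constant is l c²k²/2; the point of the check is only that the value does not drift with t.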

7.3. Riemann's Method. This method can be employed for solving linear, second order, hyperbolic equations in canonical form.

Let L[u] = uxy + a(x, y) ux + b(x, y) uy + c(x, y) u = f(x, y) …(1), where a, b, c, f are continuously differentiable functions of x and y. Being a hyperbolic equation in canonical form, the characteristics are x = a constant, y = a constant. A solution of (1) is a function with continuous second order partial derivatives.

Let v ( x, y ) be a function with continuous second order partial derivatives.

Then v uxy − u vxy = (v ux)y − (u vy)x, av ux = (avu)x − u(av)x, bv uy = (bvu)y − u(bv)y, so that

vL[u] − uM[v] = Ux + Vy, where U = auv − u vy, V = buv + v ux, and

M[v] = vxy − (av)x − (bv)y + cv. Here L, M are differential operators and M is called the adjoint of L. We require the following Green's Theorem: Let C be a closed curve bounding the region D, and let U and V be differentiable functions in D and continuous on C. Then

∬_D (∂U/∂x + ∂V/∂y) dx dy = ∮_C (U dy − V dx).

We will discuss the following Cauchy problem. Let Γ be a smooth initial curve such that it is nowhere parallel to the x or y axes.

Assume that u and ux (or uy) are prescribed along Γ. We want a solution of (1) in some neighborhood of Γ.

Let P(ξ, η) be the point at which the solution to the above Cauchy problem is required. Let the characteristics through P intersect the initial data curve Γ at Q and R (so that PQ is horizontal and PR is vertical). Let D be the region bounded by the closed contour C = PQRP. Then by the application of Green's Theorem,

∬_D (vL[u] − uM[v]) dx dy = ∫_Q^R (U dy − V dx) + ∫_R^P U dy − ∫_P^Q V dx …(*)

It can be shown that ∫_P^Q V dx = [uv]_P^Q + ∫_P^Q u(bv − vx) dx …(**). Substituting in (*),

[uv]_P = [uv]_Q + ∫_P^Q u(bv − vx) dx + ∫_P^R u(av − vy) dy − ∫_Q^R (U dy − V dx) + ∬_D (vL[u] − uM[v]) dx dy …(&)

Choose v(x, y; ξ, η) so that M[v] = 0, vx = bv on y = η, vy = av on x = ξ, and v = 1 at P(ξ, η). Such a function is called a Riemann function for the problem.

Now (&) gives

[u]_P = [uv]_Q − ∫_Q^R uv(a dy − b dx) + ∫_Q^R (u vy dy + v ux dx) + ∬_D vf dx dy …(I)

(I) gives u at P when u and ux are given along the curve Γ.

Since [uv]_R − [uv]_Q = ∫_Q^R [(uv)x dx + (uv)y dy], from (I) we get,

[u]_P = [uv]_R − ∫_Q^R uv(a dy − b dx) − ∫_Q^R (u vx dx + v uy dy) + ∬_D vf dx dy …(II)

(II) can be used to find u at P when u and uy are given along the curve Γ.

On adding ( I ) & ( II ),

[u]_P = {[uv]_Q + [uv]_R}/2 + ½ ∫_Q^R v(ux dx − uy dy) − ½ ∫_Q^R u(vx dx − vy dy) − ∫_Q^R uv(a dy − b dx) + ∬_D vf dx dy …(III),

which can be used for finding u at P when u, ux and uy are given along the curve Γ.

Remark: Consider the wave equation uξη = 0 …(1). Take the Riemann function as v ≡ 1; using formula (III) under Riemann's method,

[u]_P = [u(Q) + u(R)]/2 + ½ ∫_{QR} (uξ dξ − uη dη) …(2)

But x = (ξ + η)/2 and t = (η − ξ)/2c.

Then (1) becomes uxx = (1/c²) utt, and the solution (2) reduces to the d'Alembert solution obtained earlier.
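Formula (2) can be verified numerically for any function of the form u = F(ξ) + G(η), the general solution of uξη = 0. In the sketch below (assuming NumPy; F, G, the point P and the data curve η = ξ are all arbitrary choices for the test), the line integral from Q to R is evaluated by the trapezoidal rule and reproduces u(P).

```python
import numpy as np

F  = lambda xi: np.sin(2*xi) + xi**2     # arbitrary F
G  = lambda eta: np.exp(-eta)            # arbitrary G
Fp = lambda xi: 2*np.cos(2*xi) + 2*xi    # F'
Gp = lambda eta: -np.exp(-eta)           # G'
u  = lambda xi, eta: F(xi) + G(eta)      # solves u_xi_eta = 0

alpha, beta = 0.3, 1.1                   # P = (alpha, beta)
# data curve Gamma: eta = xi; then Q = (beta, beta), R = (alpha, alpha)
s = np.linspace(beta, alpha, 4001)       # parametrize Gamma from Q to R
ds = s[1] - s[0]                         # negative: we run from beta down to alpha
integrand = Fp(s) - Gp(s)                # u_xi dxi - u_eta deta, with dxi = deta = ds
line_int = (np.sum(integrand) - 0.5*integrand[0] - 0.5*integrand[-1])*ds

uP = 0.5*(u(beta, beta) + u(alpha, alpha)) + 0.5*line_int
```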

7.4. Laplace's Equation

The Laplace equation in two dimensions is ∇²u = uxx + uyy = 0, and a solution of the equation is known as a harmonic function. There are many boundary value problems associated with harmonic functions. Let D be the interior of a simple closed smooth curve B, and let f, h be continuous functions on the boundary B. Dirichlet Problem. To find a function u(x, y) harmonic in D which agrees with f on the boundary B. Neumann Problem.

To find a function u(x, y) harmonic in D satisfying un = f on B, where n is the unit outward normal to B. Robin Problem.

To find u(x, y) harmonic in D satisfying un + hu = 0 on B, where h is nonnegative. We need the following theorem.

Maximum & Minimum Principles. Suppose u(x, y) is harmonic in a bounded domain D and continuous on D̄ = D ∪ B. Then u attains its maximum as well as its minimum at some point on B.

Proof: Let the maximum of u on B be M. Suppose the maximum of u on D̄ is not attained at any point on B. Then it will be attained at some point in D, say P(x0, y0). Let M0 = u(x0, y0).

Then M0 > M. Consider

v(x, y) = u(x, y) + [(M0 − M)/(4R²)] [(x − x0)² + (y − y0)²] …(1),

for each point in D̄, where R is the radius of a circle with center P containing D̄.

Then v(x, y) is continuous on D̄ and v(x0, y0) = u(x0, y0) = M0. On B we have v(x, y) ≤ M + (M0 − M)/4 < M0. Thus v(x, y) also attains its maximum at some point Q in D itself.

Then at Q, vxx ≤ 0 and vyy ≤ 0, i.e. vxx + vyy ≤ 0 at Q.

M 0  M M 0  M However, in D vxx + vyy = uxx + uyy + = > 0. R 2 R 2 Thus we reach a contradiction. Hence the maximum of u in is attained at some point on the boundary B. The minimum value of u in is attained at some point on the boundary B also. Apply the above discussion to –u instead of u.

Theorem. The solution of the Dirichlet problem, if it exists, is unique.

Proof: Let u1 and u2 be two solutions of the problem. Then v = u1 − u2 is also harmonic in D, and v = 0 on B. The maximum and minimum principles applied to v give v = 0 in D, i.e. u1 = u2 in D.

Green's identity: If U(x, y) and V(x, y) are differentiable functions on the boundary B of a closed region D, then by Green's Theorem

∬_D (∂U/∂x + ∂V/∂y) dS = ∮_B (U dy − V dx).

Let U = x & V = y .  Then          dS   ds ….( % ),   x x xx y y yy   D B n where n is the unit out ward normal to B. Interchanging  & and subtracting the equations, we get,

     2 2dS    ds …( $ ) D B  n n  Theorem. Let u ( x , y ) be a solution of the Neumann problem. Then  f (s)ds  0. B Proof: Let  = 1 and   u in Green’s identity …( $ ).

Theorem. The solution to the Neumann problem is unique up to an additive constant.

Proof: Let u1 and u2 be two solutions of the problem. Let v = u1 − u2. Then ∇²v = 0 in D and ∂v/∂n = 0 on B.

Take φ = ψ = v in Green's identity (%). Then ∬_D |∇v|² dS = 0, which implies ∇v = 0, since it is continuous. Thus v = a constant.

Dirichlet Problem for the upper half plane

Consider the problem

uxx + uyy = 0, −∞ < x < ∞, y > 0 …(1), with u(x, 0) = f(x), −∞ < x < ∞,

under the assumption that u is bounded as y → ∞ and that u and ux vanish as |x| → ∞. The solution is obtained by the Fourier transform method. Let U(α, y) be the Fourier transform of u(x, y) w.r.t. x, i.e.

U(α, y) = (1/√(2π)) ∫_{−∞}^{∞} u(x, y) e^{iαx} dx.

Then by taking the Fourier transform of (1), Uyy − α²U = 0 …(2)

The solution of (2) is U = A(α) e^{αy} + B(α) e^{−αy}. Since u is bounded as y → ∞, U is also bounded as y → ∞. Hence for α > 0, A(α) = 0, and for α < 0, B(α) = 0. Thus U(α, y) = U(α, 0) e^{−|α|y}. But U(α, 0) = F[u(x, 0)] = F[f(x)] = K(α), say.

Thus U(α, y) = K(α) e^{−|α|y}.

We can compute directly F^{−1}(e^{−|α|y}) = √(2/π) · y/(y² + x²). Thus by the Convolution Theorem,

u(x, y) = f(x) * [√(2/π) · y/(y² + x²)] = (1/√(2π)) ∫_{−∞}^{∞} f(ξ) √(2/π) · y/(y² + (x − ξ)²) dξ = (y/π) ∫_{−∞}^{∞} f(ξ)/(y² + (x − ξ)²) dξ.

Neumann Problem for the upper half plane

Consider uxx + uyy = 0, −∞ < x < ∞, y > 0, and uy(x, 0) = g(x), −∞ < x < ∞, with the assumption that u is bounded as y → ∞, that u and ux vanish as |x| → ∞, and that ∫_{−∞}^{∞} g(x) dx = 0.

Solution is found by converting it to a Dirichlet problem. Let v ( x, y ) = uy ( x, y )

Then u(x, y) = ∫_a^y v(x, λ) dλ. In terms of v, the problem becomes

vxx + vyy = uxxy + uyyy = (uxx + uyy)y = 0, −∞ < x < ∞, y > 0, and v(x, 0) = uy(x, 0) = g(x).

Thus from the above solution of the Dirichlet problem,

v(x, y) = (y/π) ∫_{−∞}^{∞} g(ξ)/(y² + (x − ξ)²) dξ.

Hence

u(x, y) = (1/π) ∫_a^y ∫_{−∞}^{∞} λ g(ξ)/(λ² + (x − ξ)²) dξ dλ = (1/2π) ∫_{−∞}^{∞} g(ξ) log[((x − ξ)² + y²)/((x − ξ)² + a²)] dξ.

Dirichlet Problem for the interior of a circle. Consider a circle of radius a, centered at the origin. Consider the problem

∇²u = urr + (1/r) ur + (1/r²) uθθ = 0, r < a …(1),

subject to the boundary condition u(a, θ) = f(θ) …(2).

Since the equation is linear and homogeneous, we assume the solution in the separated form, i.e. u(r, θ) = R(r) H(θ) …(3). Then (1) gives

r² R''/R + r R'/R = −H''/H = λ, say.

Here λ is a constant. Hence we get r²R'' + rR' − λR = 0 …(4) and H'' + λH = 0 …(5). But H is a periodic function with period 2π. Hence λ < 0 will not supply a feasible solution. When λ = 0, (4) and (5) give R = A + B log r, H = Cθ + D …(6). Since u is bounded inside the circle but log r → −∞ as r → 0, we get B = 0, and hence R = A. Further, since H is periodic, we get C = 0. Thus in this case u = a constant. Let λ > 0. Assume λ = ω². Then H = A cos ωθ + B sin ωθ. The periodicity of H fixes ω as n = 1, 2, 3, …. Correspondingly, (4) gives R(r) = C rⁿ + D r⁻ⁿ. Since u has to be bounded and r⁻ⁿ → ∞ as r → 0, we get D = 0. Combining (6) and the solutions corresponding to n = 1, 2, 3, … by superposition, the solution is

u(r, θ) = a0/2 + Σ_{n=1}^∞ (r/a)ⁿ (an cos nθ + bn sin nθ) …(7),

for some constants an and bn. By the given boundary condition, u(a, θ) = f(θ). Then (7) gives

an = (1/π) ∫_0^{2π} f(φ) cos nφ dφ and bn = (1/π) ∫_0^{2π} f(φ) sin nφ dφ.

Substituting these coefficients in the series solution for u(r, θ), with ρ = r/a, we can obtain the solution in the form of an integral formula,

u(ρ, θ) = (1/2π) ∫_0^{2π} [(1 − ρ²)/(1 − 2ρ cos(θ − φ) + ρ²)] f(φ) dφ,

known as the Poisson integral formula.

Dirichlet Problem for the exterior of a circle. Similar to the above problem, we have to determine a harmonic function u in the region r > a when u is given at points on r = a. Since the region is unbounded, we impose the further condition that u is bounded as r → ∞. Proceeding as above, the solution is obtained as

u(ρ, θ) = (1/2π) ∫_0^{2π} [(ρ² − 1)/(1 − 2ρ cos(θ − φ) + ρ²)] f(φ) dφ, where ρ = r/a.

Neumann Problem for the interior of a circle. The problem is to solve ∇²u = urr + (1/r) ur + (1/r²) uθθ = 0, r < a, subject to the boundary condition ∂u/∂r = f(θ) on r = a. As in the Dirichlet problem we get

u(r, θ) = a0/2 + Σ_{n=1}^∞ (r/a)ⁿ (an cos nθ + bn sin nθ) …(1),

where the an's and bn's are constants to be fixed from the boundary condition. Differentiating (1),

ur(a, θ) = Σ_{n=1}^∞ (n/a)(an cos nθ + bn sin nθ) = f(θ), so that

a 2 a 2 an =  f ()cosnd and bn =  f ()sin nd . n 0 n 0 On substitution and simplification,

a a 2 we get u ( r, ) = 0  log a 2  2ar cos(  )  r 2 f ( )d 2 2 0 Remark: The solution to the corresponding problem for the exterior of the circle r = a is,

a a 2 u ( r, ) = 0  log a 2  2ar cos(  )  r 2 f ( )d 2 2 0 Dirichlet’s problem for a rectangle.

Consider the problem: uxx + uyy = 0, 0 < x < a, 0 < y < b; u(x, 0) = f(x), 0 ≤ x ≤ a; u(x, b) = 0; and u(0, y) = 0, u(a, y) = 0, 0 ≤ y ≤ b. The solution is found by the method of separation of variables. Assume that u(x, y) = X(x) Y(y). Then we get X'' − λX = 0 and Y'' + λY = 0. These equations are solved using the conditions X(0) = 0 = X(a), Y(b) = 0.

We get non-zero solutions corresponding to the case λ = −n²π²/a², n = 1, 2, 3, …, as

Xn = Bn sin(nπx/a) and Yn = En sinh(nπ(y − b)/a).

By the method of superposition we may assume the solution as

u(x, y) = Σ_{n=1}^∞ Xn Yn = Σ_{n=1}^∞ an sin(nπx/a) sinh(nπ(y − b)/a).

Now using the boundary condition u(x, 0) = f(x), we get

an = [2 / (a sinh(−nπb/a))] ∫_0^a f(x) sin(nπx/a) dx.
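As a numerical sanity check (a sketch assuming NumPy; f(x) = x(a − x), the dimensions, and the truncation N are arbitrary choices), the coefficients below reproduce the data on the side y = 0 and give zero on the other sides:

```python
import numpy as np

a, b, N = 1.0, 0.8, 60
f = lambda x: x*(a - x)                  # data on the side y = 0 (arbitrary)

s = np.linspace(0.0, a, 4001)
ds = s[1] - s[0]
def an(n):
    # a_n = 2/(a sinh(-n pi b / a)) * integral_0^a f(s) sin(n pi s / a) ds
    vals = f(s)*np.sin(n*np.pi*s/a)
    integral = (np.sum(vals) - 0.5*vals[0] - 0.5*vals[-1])*ds
    return 2.0*integral/(a*np.sinh(-n*np.pi*b/a))

coef = [an(n) for n in range(1, N + 1)]

def u(x, y):
    return sum(coef[n-1]*np.sin(n*np.pi*x/a)*np.sinh(n*np.pi*(y - b)/a)
               for n in range(1, N + 1))
```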

7.5. Heat Conduction Problem. Consider a homogeneous, isotropic solid. Let V be an arbitrary volume inside the solid bounded by the surface S. If ΔV is a volume element, then the heat energy stored in it is cρu ΔV, where c is the specific heat, u is the temperature as a function of position and time, and ρ is the density. Thus the total heat energy stored in V is ∭_V cρu dV …(1)

For an element ΔS of the bounding surface, the heat flow into V across it is k∇u·n ΔS, where k is the thermal conductivity and n is the unit outward drawn normal. Thus the total flux across S is ∮_S k∇u·n dS = ∭_V ∇·(k∇u) dV, by the Gauss divergence theorem. Since no heat energy is created or destroyed in V, the rate of change of the heat energy in V equals the flux across S. Thus

d/dt ∭_V cρu dV = ∭_V ∇·(k∇u) dV, i.e. ∭_V [∂(cρu)/∂t − ∇·(k∇u)] dV = 0.

Since V is arbitrary, we get cρ ∂u/∂t − ∇·(k∇u) = 0. Assuming k is a constant throughout the body, we have the heat conduction equation ∂u/∂t = L ∇²u, where L = k/(cρ) is a constant. The one-dimensional (the solid body is a straight rod) heat conduction equation is ∂u/∂t = L ∂²u/∂x².

Heat Conduction – Infinite rod. Consider an infinite homogeneous rod placed along the x-axis, sufficiently thin so that heat is uniformly distributed over any cross section, and insulated to prevent any loss or gain from external sources. Let u(x, t) be the temperature at the position x at the instant t.

The problem is to solve ut = k uxx, −∞ < x < ∞, t > 0 …(1), under the initial condition u(x, 0) = f(x), −∞ < x < ∞. We may use the Fourier transform method.

Let F[u(x, t)] = U(α, t) = (1/√(2π)) ∫_{−∞}^{∞} u(x, t) e^{iαx} dx.

Then (1) gives Ut + kα²U = 0.

Its solution is U(α, t) = A(α) e^{−kα²t}, where A(α) is an arbitrary function which can be fixed by the initial condition, since A(α) = U(α, 0).

We have U(α, 0) = F[u(x, 0)] = (1/√(2π)) ∫_{−∞}^{∞} u(x, 0) e^{iαx} dx = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{iαx} dx = F(α), say.

Hence U(α, t) = F(α) e^{−kα²t}.

Then by the convolution theorem,

u(x, t) = f(x) * F^{−1}(e^{−kα²t}) = (1/(2√(πkt))) ∫_{−∞}^{∞} f(ξ) e^{−(x−ξ)²/(4kt)} dξ.

Note that F^{−1}(e^{−kα²t}) = (1/√(2kt)) exp(−x²/(4kt)).

Remark: Convolution Theorem: F(f * g) = F(f) · F(g), where F stands for the Fourier transform and f * g is the convolution product defined as

(f * g)(x) = (1/√(2π)) ∫_{−∞}^{∞} f(x − ξ) g(ξ) dξ.
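The heat kernel formula above can be verified against an exactly solvable case (a sketch assuming NumPy; the Gaussian initial temperature f(x) = e^{−x²} is an arbitrary choice, for which the convolution can be done in closed form, giving u(x, t) = e^{−x²/(1+4kt)}/√(1+4kt)):

```python
import numpy as np

k = 0.5
f = lambda x: np.exp(-x**2)              # initial temperature (arbitrary Gaussian)

def u(x, t, L=30.0, n=60001):
    # u(x,t) = 1/(2 sqrt(pi k t)) * integral f(xi) exp(-(x - xi)^2/(4 k t)) dxi
    xi = np.linspace(-L, L, n)
    vals = f(xi)*np.exp(-(x - xi)**2/(4*k*t))
    d = xi[1] - xi[0]
    return (np.sum(vals) - 0.5*vals[0] - 0.5*vals[-1])*d/(2*np.sqrt(np.pi*k*t))

exact = lambda x, t: np.exp(-x**2/(1 + 4*k*t))/np.sqrt(1 + 4*k*t)
```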

Heat conduction – Finite rod. Consider the heat conduction problem in a finite rod of length l, placed along the x-axis extending from x = 0 to x = l, under the additional homogeneous boundary conditions u(0, t) = 0 = u(l, t), t > 0. We may use the method of separation of variables, taking u(x, t) = X(x) T(t). The equation gives

X''/X = T'/(kT) = λ, a constant.

We can notice that for a non-zero solution λ must be negative. Let λ = −ω². Then we have X'' + ω²X = 0 and T' + ω²kT = 0. The boundary conditions become X(0) = 0 = X(l). X'' + ω²X = 0 gives X(x) = A cos ωx + B sin ωx. Now X(0) = 0 gives A = 0. Further, X(l) = 0 gives B sin ωl = 0. To get a non-zero solution we consider sin ωl = 0, i.e. ωl = nπ, n = 1, 2, 3, …. Hence Xn(x) = Bn sin(nπx/l), n = 1, 2, 3, ….

Correspondingly, T' + ω²kT = 0 gives Tn(t) = Cn exp(−n²π²kt/l²), n = 1, 2, 3, ….

Thus un(x, t) = an sin(nπx/l) exp(−n²π²kt/l²).

By the principle of superposition, u(x, t) = Σ_{n=1}^∞ an sin(nπx/l) exp(−n²π²kt/l²).

Now u(x, 0) = f(x) implies Σ_{n=1}^∞ an sin(nπx/l) = f(x), 0 ≤ x ≤ l.

Then an = (2/l) ∫_0^l f(x) sin(nπx/l) dx. Thus we have the solution

u(x, t) = Σ_{n=1}^∞ an sin(nπx/l) exp(−n²π²kt/l²), where an = (2/l) ∫_0^l f(x) sin(nπx/l) dx.

Theorem. The solution of the problem ut − k uxx = F(x, t), 0 < x < l, t > 0, satisfying the initial condition u(x, 0) = f(x) and the boundary conditions u(0, t) = 0 = u(l, t), t ≥ 0, if it exists, is unique.

Proof: Let u1 & u2 be two solutions. Take v = u1 – u2.

Then v satisfies vt − k vxx = 0, 0 < x < l, t > 0, with v(x, 0) = 0 and v(0, t) = 0 = v(l, t), t ≥ 0.

Let E(t) = (1/2k) ∫_0^l v²(x, t) dx. Note that E(t) ≥ 0.

Then

dE/dt = (1/k) ∫_0^l v vt dx = ∫_0^l v vxx dx = [v vx]_0^l − ∫_0^l vx² dx = −∫_0^l vx² dx ≤ 0,

by v(0, t) = 0 = v(l, t). Thus E(t) is a decreasing function. But from v(x, 0) = 0 we get E(0) = 0. Thus E(t) ≤ 0. But E(t) ≥ 0. Thus E ≡ 0, and hence v(x, t) = 0 for 0 ≤ x ≤ l and t ≥ 0, i.e. v = 0.

Remark: The solution to the heat conduction problem in a finite rod is unique, by the above theorem.

*******
