
Lecture Notes in
A Second Course in Ordinary Differential Equations

Arkansas Tech University
Department of Mathematics

Marcel B. Finan
© All Rights Reserved
2015 Edition

To my children, Amin and Nadia

Preface

The present text, while mathematically rigorous, is readily accessible to students of either mathematics or engineering. In addition to all the theoretical considerations, there are numerous worked examples drawn from engineering and physics. It is important to have a good understanding of the theory underlying the subject, rather than just a cursory knowledge of its application. This text provides that understanding.

Marcel B. Finan Russellville, Arkansas May 2015

Contents

Preface

1 nth Order Linear Ordinary Differential Equations
1.1 Calculus of Matrix-Valued Functions of a Real Variable
1.2 Uniform Convergence and Weierstrass M-Test
1.3 Existence and Uniqueness Theorem
1.4 The General Solution of nth Order Linear Homogeneous Equations
1.5 Existence of Many Fundamental Sets
1.6 nth Order Homogeneous Linear Equations with Constant Coefficients
1.7 Review of Power Series
1.8 Series Solutions of Linear Homogeneous Differential Equations
1.9 Non-Homogeneous nth Order Linear Differential Equations

2 First Order Linear Systems
2.1 Existence and Uniqueness of Solution to Initial Value First Order Linear Systems
2.2 Homogeneous First Order Linear Systems
2.3 First Order Linear Systems: Fundamental Sets and …
2.4 Homogeneous Systems with Constant Coefficients: Distinct Eigenvalues
2.5 Homogeneous Systems with Constant Coefficients: Complex Eigenvalues
2.6 Homogeneous Systems with Constant Coefficients: Repeated Eigenvalues


Answer Key

Index

Chapter 1
nth Order Linear Ordinary Differential Equations

This chapter is devoted to extending the results for solving second order ordinary differential equations to higher order ODEs.

1.1 Calculus of Matrix-Valued Functions of a Real Variable

In establishing the existence result for second and higher order linear differ- ential equations one transforms the equation into a linear system and tries to solve such a system. This procedure requires the use of concepts such as the derivative of a matrix whose entries are functions of t, the integral of a matrix, and the exponential matrix function. Thus, techniques from matrix theory play an important role in dealing with systems of differential equations. The present section introduces the necessary background in the calculus of matrix functions.

Matrix-Valued Functions of a Real Variable
A matrix A of dimension m × n is a rectangular array of the form
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
where the aij's are the entries of the matrix, m is the number of rows, and n is the number of columns. When m = n we call the matrix a square matrix. The zero matrix 0 is the matrix whose entries are all 0. The n × n identity matrix In is the square matrix whose main diagonal consists of 1's and whose off-diagonal entries are all 0. A matrix A can be represented with the compact notation A = [aij]. The entry aij is located in the ith row and jth column.

Example 1.1.1
Consider the matrix
$$A = \begin{bmatrix} -5 & 0 & 1 \\ 10 & -2 & 0 \\ -5 & 2 & -7 \end{bmatrix}.$$

Find a22, a32, and a23.

Solution.
The entry a22 is in the second row and second column so that a22 = −2. Similarly, a32 = 2 and a23 = 0

An m × n array whose entries are functions of a real variable defined on a common interval I is called a matrix function. Thus, the matrix
$$A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{bmatrix}$$
is a 3 × 3 matrix function whereas the matrix
$$x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}$$
is a 3 × 1 matrix function, also known as a vector-valued function. We will denote an m × n matrix function by A(t) = [aij(t)] where aij(t) is the entry in the ith row and jth column.

Arithmetic of Matrix Functions All the familiar rules of matrix arithmetic hold for matrix functions as well.

(i) Equality: Two m × n matrices A(t) = [aij(t)] and B(t) = [bij(t)] are said to be equal if and only if aij(t) = bij(t) for all 1 ≤ i ≤ m and 1 ≤ j ≤ n and all t ∈ I. That is, two matrices are equal if and only if all corresponding elements are equal. Notice that the matrices must be of the same dimension.

Example 1.1.2
Solve the following matrix equation:
$$\begin{bmatrix} t^2 - 3t + 2 & 1 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & t^2 - 4t + 2 \end{bmatrix}$$
Solution.
Equating corresponding entries we get t^2 − 3t + 2 = 0 and t^2 − 4t + 2 = −1. Solving these equations, we find t = 1 and t = 2 for the first equation and t = 1 and t = 3 for the second equation. Hence, t = 1 is the common value

(ii) Addition: If A(t) = [aij(t)] and B(t) = [bij(t)] are m × n matrices then the sum is a new m × n matrix obtained by adding corresponding elements:
$$(A + B)(t) = A(t) + B(t) = [a_{ij}(t) + b_{ij}(t)].$$

Matrices of different dimensions cannot be added.

(iii) Subtraction: Let A(t) = [aij(t)] and B(t) = [bij(t)] be two m × n matrices. Then the difference (A − B)(t) is the new matrix obtained by subtracting corresponding elements, that is,

(A − B)(t) = A(t) − B(t) = [aij(t) − bij(t)].

Example 1.1.3
Consider the following matrices

$$A(t) = \begin{bmatrix} t-1 & t^2 \\ 2 & 2t+1 \end{bmatrix}, \qquad B(t) = \begin{bmatrix} t & -1 \\ 0 & t+2 \end{bmatrix}.$$

Find A(t) + B(t) and A(t) − B(t).

Solution.
We have
$$A(t) + B(t) = \begin{bmatrix} t-1 & t^2 \\ 2 & 2t+1 \end{bmatrix} + \begin{bmatrix} t & -1 \\ 0 & t+2 \end{bmatrix} = \begin{bmatrix} 2t-1 & t^2-1 \\ 2 & 3t+3 \end{bmatrix}$$
and
$$A(t) - B(t) = \begin{bmatrix} t-1 & t^2 \\ 2 & 2t+1 \end{bmatrix} - \begin{bmatrix} t & -1 \\ 0 & t+2 \end{bmatrix} = \begin{bmatrix} -1 & t^2+1 \\ 2 & t-1 \end{bmatrix}$$
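This entrywise arithmetic is easy to verify with a computer algebra system; below is a small sketch using SymPy (the use of SymPy is an illustrative choice, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t - 1, t**2], [2, 2*t + 1]])
B = sp.Matrix([[t, -1], [0, t + 2]])

# Addition and subtraction act entrywise on matrix functions.
S = A + B  # [[2t - 1, t^2 - 1], [2, 3t + 3]]
D = A - B  # [[-1, t^2 + 1], [2, t - 1]]
```

Printing `S` and `D` reproduces the matrices computed by hand above.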

(iv) Multiplication of a matrix by a function: If α(t) is a function and A(t) = [aij(t)] is an m × n matrix then (α(t)A)(t) is the m × n matrix obtained by multiplying the entries of A(t) by α(t); that is,

$$(\alpha(t)A)(t) = [\alpha(t)a_{ij}(t)].$$

Example 1.1.4
Find t^2 A(t) where A(t) is the matrix defined in Example 1.1.3.

Solution.
We have
$$t^2 A(t) = \begin{bmatrix} t^3 - t^2 & t^4 \\ 2t^2 & 2t^3 + t^2 \end{bmatrix}$$

(v) Multiplication: If A(t) is an m × n matrix and B(t) is an n × p matrix then the product matrix AB(t) is the m × p matrix

$$AB(t) = [c_{ij}(t)] \quad\text{where}\quad c_{ij}(t) = \sum_{k=1}^{n} a_{ik}(t)b_{kj}(t).$$

That is, the cij entry is obtained by multiplying componentwise the ith row of A(t) by the jth column of B(t). It is important to realize that the order of the multiplicands is significant, in other words AB(t) is not necessarily equal to BA(t). In mathematical terminology matrix multiplication is not commutative.

Example 1.1.5
Let

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 & -1 \\ -3 & 4 \end{bmatrix}.$$

Show that AB ≠ BA. Hence, matrix multiplication is not commutative.

Solution.
Using the definition of matrix multiplication we find

$$AB = \begin{bmatrix} -4 & 7 \\ 0 & 5 \end{bmatrix}, \qquad BA = \begin{bmatrix} -1 & 2 \\ 9 & 2 \end{bmatrix}.$$

Hence, AB ≠ BA
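The computation above can be checked mechanically; a short SymPy sketch (an illustrative choice of tool):

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 2]])
B = sp.Matrix([[2, -1], [-3, 4]])

AB = A * B  # [[-4, 7], [0, 5]]
BA = B * A  # [[-1, 2], [9, 2]]

# The two products differ, confirming that matrix
# multiplication is not commutative.
assert AB != BA
```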

(vi) Inverse: An n×n matrix A(t) is said to be invertible or non-singular if and only if there is an n × n matrix B(t) such that AB(t) = BA(t) = In. We denote the inverse of A(t) by A−1(t).

Example 1.1.6
Find the inverse of the matrix
$$A = \begin{bmatrix} t & 1 \\ 1 & t \end{bmatrix}$$
given that t ≠ ±1.

Solution.
Suppose that
$$A^{-1} = \begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix}.$$
Then
$$\begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix}\begin{bmatrix} t & 1 \\ 1 & t \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
This implies that

$$\begin{bmatrix} tb_{11}(t) + b_{12}(t) & b_{11}(t) + tb_{12}(t) \\ tb_{21}(t) + b_{22}(t) & b_{21}(t) + tb_{22}(t) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
Hence,

tb11(t) + b12(t) = 1

b11(t) + tb12(t) = 0

tb21(t) + b22(t) = 0

b21(t) + tb22(t) = 1.

Applying the method of elimination to the first two equations we find

$$b_{11}(t) = \frac{t}{t^2-1} \quad\text{and}\quad b_{12}(t) = \frac{-1}{t^2-1}.$$
Applying the method of elimination to the last two equations we find

$$b_{21}(t) = \frac{-1}{t^2-1} \quad\text{and}\quad b_{22}(t) = \frac{t}{t^2-1}.$$
Hence,
$$A^{-1} = \frac{1}{t^2-1}\begin{bmatrix} t & -1 \\ -1 & t \end{bmatrix}$$
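A computer algebra system can also produce this inverse directly; a hedged SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1], [1, t]])

# Symbolic inverse; valid wherever det A = t^2 - 1 is nonzero,
# i.e. for t != +-1, matching the hypothesis of the example.
Ainv = sp.simplify(A.inv())

# Sanity check: A * A^{-1} reduces to the identity.
assert sp.simplify(A * Ainv) == sp.eye(2)
```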

Norm of a Vector Function
The norm of a vector function will be needed in the coming sections. In one dimension the norm is the absolute value. In higher dimensions, we define the norm of a vector function x(t) with components x1(t), x2(t), ···, xn(t) by
$$\|x(t)\| = |x_1(t)| + |x_2(t)| + \cdots + |x_n(t)|.$$
From this definition one notices the following properties:
(i) If ||x(t)|| = 0 then |x1(t)| + |x2(t)| + ··· + |xn(t)| = 0 and this implies that |x1(t)| = |x2(t)| = ··· = |xn(t)| = 0. Hence, x(t) = 0.
(ii) If α(t) is a function then
$$\|\alpha(t)x(t)\| = |\alpha(t)x_1(t)| + \cdots + |\alpha(t)x_n(t)| = |\alpha(t)|\left(|x_1(t)| + \cdots + |x_n(t)|\right) = |\alpha(t)|\,\|x(t)\|.$$
(iii) If x(t) is a vector function with components x1(t), ···, xn(t) and y(t) is a vector function with components y1(t), ···, yn(t) then

$$\|x(t) + y(t)\| = |x_1(t)+y_1(t)| + \cdots + |x_n(t)+y_n(t)| \le \left(|x_1(t)| + \cdots + |x_n(t)|\right) + \left(|y_1(t)| + \cdots + |y_n(t)|\right) = \|x(t)\| + \|y(t)\|.$$
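This ℓ1-style norm and its triangle inequality (iii) can be sanity-checked numerically; a small NumPy sketch with arbitrarily chosen vectors:

```python
import numpy as np

def norm1(x):
    # ||x|| = |x_1| + ... + |x_n|, the norm used in this section
    return float(np.sum(np.abs(x)))

x = np.array([1.0, -2.0, 3.0])
y = np.array([-4.0, 0.5, 2.0])

# Property (iii), the triangle inequality:
assert norm1(x + y) <= norm1(x) + norm1(y)
```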

Limits of Matrix Functions

If A(t) = [aij(t)] is an m × n matrix function such that limt→t0 aij(t) = Lij exists for all 1 ≤ i ≤ m and 1 ≤ j ≤ n then we define

$$\lim_{t\to t_0} A(t) = [L_{ij}].$$
Example 1.1.7
Suppose that
$$A(t) = \begin{bmatrix} t^2 - 5t & t^3 \\ 2t & 3 \end{bmatrix}.$$

Find $\lim_{t\to 1} A(t)$.

Solution.

$$\lim_{t\to 1} A(t) = \begin{bmatrix} \lim_{t\to 1}(t^2 - 5t) & \lim_{t\to 1} t^3 \\ \lim_{t\to 1} 2t & \lim_{t\to 1} 3 \end{bmatrix} = \begin{bmatrix} -4 & 1 \\ 2 & 3 \end{bmatrix}$$
If one or more of the component function limits do not exist, then the limit of the matrix does not exist. For example, if

$$A(t) = \begin{bmatrix} t & t^{-1} \\ 0 & e^t \end{bmatrix}$$

then $\lim_{t\to 0} A(t)$ does not exist since $\lim_{t\to 0} t^{-1}$ does not exist.
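The entrywise limit can be computed symbolically; a SymPy sketch of Example 1.1.7 (the choice of SymPy is illustrative):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t**2 - 5*t, t**3], [2*t, 3]])

# The limit of a matrix function is taken entrywise.
L = A.applyfunc(lambda entry: sp.limit(entry, t, 1))
assert L == sp.Matrix([[-4, 1], [2, 3]])
```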

We say that A(t) is continuous at t = t0 if

$$\lim_{t\to t_0} A(t) = A(t_0).$$

Example 1.1.8
Show that the matrix
$$A(t) = \begin{bmatrix} t & t^{-1} \\ 0 & e^t \end{bmatrix}$$
is continuous at t = 2.

Solution.
Since
$$\lim_{t\to 2} A(t) = \begin{bmatrix} 2 & 1/2 \\ 0 & e^2 \end{bmatrix} = A(2)$$
we have A(t) is continuous at t = 2.
Most properties of limits for functions of a single variable are also valid for limits of matrix functions.

Matrix Differentiation
Let A(t) be an m × n matrix such that each entry is a differentiable function of t. We define the derivative of A(t) to be
$$A'(t) = \lim_{h\to 0} \frac{A(t+h) - A(t)}{h}$$
provided that the limit exists.

Example 1.1.9
Let
$$A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix}$$
where the entries a11, a12, a21, and a22 are differentiable. Find A'(t).

Solution.
We have
$$A'(t) = \lim_{h\to 0} \frac{A(t+h) - A(t)}{h} = \begin{bmatrix} \lim_{h\to 0}\frac{a_{11}(t+h)-a_{11}(t)}{h} & \lim_{h\to 0}\frac{a_{12}(t+h)-a_{12}(t)}{h} \\ \lim_{h\to 0}\frac{a_{21}(t+h)-a_{21}(t)}{h} & \lim_{h\to 0}\frac{a_{22}(t+h)-a_{22}(t)}{h} \end{bmatrix} = \begin{bmatrix} a_{11}'(t) & a_{12}'(t) \\ a_{21}'(t) & a_{22}'(t) \end{bmatrix}$$

It follows from the previous example that the derivative of a matrix function is the matrix of derivatives of its component functions. From this fact one can check easily the following two properties of differentiation:

(i) If A(t) and B(t) are two m × n matrices with both of them differentiable then the matrix (A + B)(t) is also differentiable and (A + B)'(t) = A'(t) + B'(t).
(ii) If A(t) is an m × n differentiable matrix and B(t) is an n × p differentiable matrix then the product matrix AB(t) is also differentiable and (AB)'(t) = A'(t)B(t) + A(t)B'(t).

Example 1.1.10
Write the system
$$\begin{cases} y_1' = a_{11}(t)y_1(t) + a_{12}(t)y_2(t) + a_{13}(t)y_3(t) + g_1(t) \\ y_2' = a_{21}(t)y_1(t) + a_{22}(t)y_2(t) + a_{23}(t)y_3(t) + g_2(t) \\ y_3' = a_{31}(t)y_1(t) + a_{32}(t)y_2(t) + a_{33}(t)y_3(t) + g_3(t) \end{cases}$$
in matrix form.

Solution.
Let
$$y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \end{bmatrix}, \quad A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{bmatrix}, \quad g(t) = \begin{bmatrix} g_1(t) \\ g_2(t) \\ g_3(t) \end{bmatrix}.$$
Then the given system can be written in the matrix form
$$y'(t) = A(t)y(t) + g(t)$$

Matrix Integration
Since the derivative of a matrix function is a matrix of derivatives, it should not be surprising that antiderivatives of a matrix function can be evaluated by performing the corresponding antidifferentiation operations upon each entry of the matrix function. That is, if A(t) is the m × n matrix
$$A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{m1}(t) & a_{m2}(t) & \cdots & a_{mn}(t) \end{bmatrix}$$
then
$$\int A(t)\,dt = \begin{bmatrix} \int a_{11}(t)\,dt & \int a_{12}(t)\,dt & \cdots & \int a_{1n}(t)\,dt \\ \int a_{21}(t)\,dt & \int a_{22}(t)\,dt & \cdots & \int a_{2n}(t)\,dt \\ \vdots & \vdots & & \vdots \\ \int a_{m1}(t)\,dt & \int a_{m2}(t)\,dt & \cdots & \int a_{mn}(t)\,dt \end{bmatrix}.$$

Example 1.1.11 Determine the matrix function A(t) if

$$A'(t) = \begin{bmatrix} 2t & 1 \\ \cos t & 3t^2 \end{bmatrix}.$$

Solution. We have

$$A(t) = \int A'(t)\,dt = \begin{bmatrix} t^2 + c_{11} & t + c_{12} \\ \sin t + c_{21} & t^3 + c_{22} \end{bmatrix} = \begin{bmatrix} t^2 & t \\ \sin t & t^3 \end{bmatrix} + \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$$
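Entrywise antidifferentiation is equally mechanical; a SymPy sketch of Example 1.1.11 (note that SymPy omits the constants of integration c_ij, so it recovers one particular antiderivative):

```python
import sympy as sp

t = sp.symbols('t')
Aprime = sp.Matrix([[2*t, 1], [sp.cos(t), 3*t**2]])

# Antidifferentiate entrywise; the constants c_ij are omitted.
A = Aprime.applyfunc(lambda entry: sp.integrate(entry, t))
assert A == sp.Matrix([[t**2, t], [sp.sin(t), t**3]])
```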

Finally, we conclude this section by showing that

$$\left\| \int_{t_0}^{t} x(s)\,ds \right\| \le \int_{t_0}^{t} \|x(s)\|\,ds.$$
To see this,
$$\int_{t_0}^{t} x(s)\,ds = \begin{bmatrix} \int_{t_0}^{t} x_1(s)\,ds \\ \int_{t_0}^{t} x_2(s)\,ds \\ \vdots \\ \int_{t_0}^{t} x_n(s)\,ds \end{bmatrix}$$
so that
$$\left\| \int_{t_0}^{t} x(s)\,ds \right\| = \left|\int_{t_0}^{t} x_1(s)\,ds\right| + \cdots + \left|\int_{t_0}^{t} x_n(s)\,ds\right| \le \int_{t_0}^{t} |x_1(s)|\,ds + \cdots + \int_{t_0}^{t} |x_n(s)|\,ds = \int_{t_0}^{t} \left(|x_1| + \cdots + |x_n|\right)ds = \int_{t_0}^{t} \|x(s)\|\,ds.$$

Practice Problems

Problem 1.1.1 Consider the following matrices

$$A(t) = \begin{bmatrix} t-1 & t^2 \\ 2 & 2t+1 \end{bmatrix}, \quad B(t) = \begin{bmatrix} t & -1 \\ 0 & t+2 \end{bmatrix}, \quad c(t) = \begin{bmatrix} t+1 \\ -1 \end{bmatrix}.$$

(a) Find 2A(t) - 3tB(t). (b) Find A(t)B(t) - B(t)A(t). (c) Find A(t)c(t). (d) Find det(B(t)A(t)).

Problem 1.1.2
Determine all values t such that A(t) is invertible and, for those t-values, find A^{−1}(t).
$$A(t) = \begin{bmatrix} t+1 & t \\ t & t+1 \end{bmatrix}.$$

Problem 1.1.3
Determine all values t such that A(t) is invertible and, for those t-values, find A^{−1}(t).
$$A(t) = \begin{bmatrix} \sin t & -\cos t \\ \sin t & \cos t \end{bmatrix}.$$

Problem 1.1.4
Find
$$\lim_{t\to 0} \begin{bmatrix} \frac{\sin t}{t} & t\cos t & \frac{3}{t+1} \\ e^{3t} & \sec t & \frac{2t}{t^2-1} \end{bmatrix}.$$
Problem 1.1.5
Find
$$\lim_{t\to 0} \begin{bmatrix} te^{-t} & \tan t \\ t^2 - 2 & e^{\sin t} \end{bmatrix}.$$

Problem 1.1.6
Find A'(t) and A''(t) if

$$A(t) = \begin{bmatrix} \sin t & 3t \\ t^2 + 2 & 5 \end{bmatrix}.$$

Problem 1.1.7
Express the system
$$\begin{cases} y_1' = t^2 y_1 + 3y_2 + \sec t \\ y_2' = (\sin t)y_1 + ty_2 - 5 \end{cases}$$
in the matrix form y'(t) = A(t)y(t) + g(t).

Problem 1.1.8 Determine A(t) where

$$A'(t) = \begin{bmatrix} 2t & 1 \\ \cos t & 3t^2 \end{bmatrix}, \qquad A(0) = \begin{bmatrix} 2 & 5 \\ 1 & -2 \end{bmatrix}.$$

Problem 1.1.9 Determine A(t) where

$$A''(t) = \begin{bmatrix} 1 & t \\ 0 & 0 \end{bmatrix}, \qquad A(0) = \begin{bmatrix} 1 & 1 \\ -2 & 1 \end{bmatrix}, \qquad A'(0) = \begin{bmatrix} -1 & 2 \\ -2 & 3 \end{bmatrix}.$$

Problem 1.1.10
Calculate $A(t) = \int_0^t B(s)\,ds$ where

$$B(s) = \begin{bmatrix} e^s & 6s \\ \cos 2\pi s & \sin 2\pi s \end{bmatrix}.$$

Problem 1.1.11 Construct a 2 × 2 non-constant matrix function A(t) such that A2(t) is a constant matrix.

Problem 1.1.12 (a) Construct a 2 × 2 differentiable matrix function A(t) such that

$$\frac{d}{dt}A^2(t) \ne 2A(t)\frac{d}{dt}A(t).$$
That is, the power rule is not true for matrix functions.
(b) What is the correct formula relating the derivative of A^2(t) to A(t) and A'(t)?

Problem 1.1.13
By letting x1 = y, x2 = y', and x3 = y'', transform the following third-order equation
$$y''' - 3ty' + (\sin 2t)y = 7e^{-t}$$
into a first order system of the form

$$x'(t) = Ax(t) + b(t).$$

Problem 1.1.14
By introducing new variables x1 and x2, write y'' − 2y + 1 = t as a system of two first order linear equations of the form x' + Ax = b.

Problem 1.1.15
Write the differential equation y'' + 4y' + 4y = 0 as a first order system of the form x'(t) + A(t)x(t) = b(t).

Problem 1.1.16
Write the differential equation y'' + ky' + (t − 1)y = 0 as a first order system.

Problem 1.1.17 Change the following second-order equation to a first-order system.

$$y'' - 5y' + ty = 3t^2, \qquad y(0) = 0, \quad y'(0) = 1$$

Problem 1.1.18 Consider the following system of first-order linear equations.

$$x' = \begin{bmatrix} 3 & 2 \\ 1 & -1 \end{bmatrix} x.$$

Find the second-order linear differential equation that x satisfies.

1.2 Uniform Convergence and Weierstrass M-Test

In the next section we will be constructing solutions to higher order linear ordinary differential equations. The construction uses the uniform convergence of sequences of functions, which we discuss here.
Recall that a sequence of numbers {a_n} is said to converge to a number L if and only if for every given ε > 0 there is a positive integer N = N(ε) such that for all n ≥ N we have |a_n − L| < ε. In words, given a positive number ε, there is an index N where all the terms a_N, a_{N+1}, ··· are at a distance less than ε from L.
What is the analogous concept of convergence when the terms of the sequence are variables? An example of a variable sequence is the sequence {x^n}, which converges for −1 < x ≤ 1 and diverges otherwise. So let D ⊂ R and for each n ∈ N consider a function f_n : D → R. Thus, we obtain a sequence of functions {f_n}. For such a sequence, there are two types of convergence that we consider in this section: pointwise convergence and uniform convergence.

Pointwise Convergence
We say that {f_n} converges pointwise in D if and only if lim_{n→∞} f_n(x) exists for each x ∈ D. That is, lim_{n→∞} f_n(x) is a number that depends on x; in other words, lim_{n→∞} f_n(x) is a function of x, which we denote by f : D → R. That is,

$$\lim_{n\to\infty} f_n(x) = f(x)$$

for each x ∈ D. We call f(x) the pointwise limit of {f_n}. We will call the functions {f_n} the base functions.
Equivalently, for a given x ∈ D and ε > 0 there is a positive integer N = N(x, ε) such that if n ≥ N then |f_n(x) − f(x)| < ε. It is important to note that N is a function of both x and ε. In words, given x ∈ D, if ε > 0 then there is a positive integer N where the numbers f_N(x), f_{N+1}(x), ··· are within a distance less than ε from f(x). See Figure 1.2.1. Note that there are values of the functions f_N, f_{N+1}, ··· that are not at a distance less than ε from f(x).

Figure 1.2.1

Example 1.2.1
Define f_n : [0, ∞) → R by $f_n(x) = \frac{nx}{1 + n^2x^2}$. Show that the sequence {f_n} converges pointwise to the function f(x) = 0 for all x ≥ 0.

Solution.
For all x ≥ 0,
$$\lim_{n\to\infty} f_n(x) = \lim_{n\to\infty} \frac{nx}{1 + n^2x^2} = 0 = f(x)$$
Example 1.2.2
For each positive integer n let f_n : (0, ∞) → (0, ∞) be given by f_n(x) = nx. Show that {f_n} does not converge pointwise in D = (0, ∞).

Solution.
This follows from the fact that $\lim_{n\to\infty} nx = \infty$ for all x ∈ D
Example 1.2.3
For each positive integer n let f_n : (0, 1) → R be given by $f_n(x) = \frac{n}{nx+1}$. Show that {f_n} converges pointwise to the function f(x) = 1/x.
Solution.
We have
$$\lim_{n\to\infty} \frac{n}{nx+1} = \lim_{n\to\infty} \frac{1}{x + \frac{1}{n}} = \frac{1}{x}$$
One of the weaknesses of this type of convergence is that it does not preserve some of the properties of the base functions {f_n}. For example, in the

previous example, the functions $f_n(x) = \frac{n}{nx+1}$ are bounded since |f_n(x)| < n for all x ∈ (0, 1). However, the limiting function f(x) = 1/x is unbounded in (0, 1). In Problem 1.2.1, the base functions are continuous whereas the limiting function is not. A stronger type of convergence which preserves most of the properties of the base functions is uniform convergence, which we define next.

Uniform Convergence
Let D be a subset of R and let {f_n} be a sequence of functions defined on D. We say that {f_n} converges uniformly on D to a function f : D → R if and only if for all ε > 0 there is a positive integer N = N(ε) such that if n ≥ N then |f_n(x) − f(x)| < ε for all x ∈ D.
This definition says that the integer N depends only on the given ε (in contrast to pointwise convergence, where N depends on both x and ε) so that for n ≥ N, the graph of f_n(x) is bounded above by the graph of f(x) + ε and below by the graph of f(x) − ε as shown in Figure 1.2.2.

Figure 1.2.2

Example 1.2.4
For each positive integer n let f_n : [0, 1] → R be given by $f_n(x) = \frac{x}{n}$. Show that {f_n} converges uniformly to the zero function.

Solution.
Let ε > 0 be given. Let N be a positive integer such that N > 1/ε. Then for n ≥ N we have

$$|f_n(x) - f(x)| = \left|\frac{x}{n} - 0\right| = \frac{|x|}{n} \le \frac{1}{n} \le \frac{1}{N} < \varepsilon$$
for all x ∈ [0, 1]
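The uniformity can be seen numerically: the worst-case error over all x is controlled by one N. A small NumPy sketch (the grid is an illustrative discretization of [0, 1]):

```python
import numpy as np

# For f_n(x) = x/n on [0, 1], the worst-case error over a grid of
# x values is 1/n: a single N works for every x simultaneously,
# which is exactly uniform convergence.
xs = np.linspace(0.0, 1.0, 1001)

def sup_error(n):
    return float(np.max(np.abs(xs / n)))

assert abs(sup_error(100) - 0.01) < 1e-12
```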

Clearly, uniform convergence implies pointwise convergence to the same limit function. However, the converse is not true in general. Thus, one way to show that a sequence of functions does not converge uniformly is to show that it does not converge pointwise.

Example 1.2.5
Define f_n : [0, ∞) → R by $f_n(x) = \frac{nx}{1 + n^2x^2}$. By Example 1.2.1, this sequence converges pointwise to f(x) = 0. Let ε = 1/3. Show that there is no positive integer N with the property that n ≥ N implies |f_n(x) − f(x)| < ε for all x ≥ 0. Hence, the given sequence does not converge uniformly to f(x).

Solution.
For any positive integer N and for n ≥ N we have
$$\left|f_n\!\left(\frac{1}{n}\right) - f\!\left(\frac{1}{n}\right)\right| = \frac{1}{2} > \varepsilon$$
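The obstruction is easy to see numerically: the deviation at the moving point x = 1/n never shrinks. A minimal sketch:

```python
# For f_n(x) = nx/(1 + n^2 x^2), the worst-case deviation from the
# pointwise limit 0 never shrinks: at x = 1/n the value is always 1/2,
# so no single N can force |f_n(x) - 0| < 1/3 for all x >= 0.
def f(n, x):
    return n * x / (1.0 + (n * x) ** 2)

for n in [1, 10, 1000]:
    assert abs(f(n, 1.0 / n) - 0.5) < 1e-12
```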

Problem 1.2.1 shows a sequence of continuous functions converging point- wise to a discontinuous function. That is, pointwise convergence does not preserve the property of continuity. One of the interesting features of uniform convergence is that it preserves continuity as shown in the next example.

Example 1.2.6
Suppose that for each n ≥ 1 the function f_n : D → R is continuous in D. Suppose that {f_n} converges uniformly to f. Let a ∈ D.
(a) Let ε > 0 be given. Show that there is a positive integer N such that if n ≥ N then |f_n(x) − f(x)| < ε/3 for all x ∈ D.
(b) Show that there is a δ > 0 such that for all |x − a| < δ we have |f_N(x) − f_N(a)| < ε/3.
(c) Using (a) and (b) show that for |x − a| < δ we have |f(x) − f(a)| < ε. Hence, f is continuous in D since a was arbitrary.

Solution. (a) This follows from the definition of uniform convergence. (b) This follows from the fact that fN is continuous at a ∈ D. (c) For |x − a| < δ we have

$$|f(x) - f(a)| = |f(a) - f_N(a) + f_N(a) - f_N(x) + f_N(x) - f(x)| \le |f_N(a) - f(a)| + |f_N(a) - f_N(x)| + |f_N(x) - f(x)| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon$$
From this example, we can write

$$\lim_{x\to a}\lim_{n\to\infty} f_n(x) = \lim_{n\to\infty}\lim_{x\to a} f_n(x).$$
Indeed,
$$\lim_{x\to a}\lim_{n\to\infty} f_n(x) = \lim_{x\to a} f(x) = f(a) = \lim_{n\to\infty} f_n(a) = \lim_{n\to\infty}\lim_{x\to a} f_n(x).$$
Does pointwise convergence allow the interchange of limits and integration? The answer is no, as shown in the next example.

Example 1.2.7
The sequence of functions f_n : [0, ∞) → R defined by $f_n(x) = \frac{x}{n}$ converges pointwise to the zero function. Show that
$$\lim_{n\to\infty} \int_0^{\infty} f_n(x)\,dx \ne \int_0^{\infty} \lim_{n\to\infty} f_n(x)\,dx.$$
Solution.
We have
$$\int_0^{\infty} \frac{x}{n}\,dx = \left.\frac{x^2}{2n}\right|_0^{\infty} = \infty.$$
Hence,
$$\lim_{n\to\infty} \int_0^{\infty} f_n(x)\,dx = \infty$$
whereas
$$\int_0^{\infty} \lim_{n\to\infty} f_n(x)\,dx = 0.$$

We leave it to the reader to show that the sequence {f_n} does not converge uniformly to zero

Contrary to pointwise convergence, uniform convergence allows the interchange of limits and integration. That is, if {f_n} converges uniformly to f on a closed interval [a, b] then

$$\lim_{n\to\infty} \int_a^b f_n(x)\,dx = \int_a^b \lim_{n\to\infty} f_n(x)\,dx.$$

Theorem 1.2.1
Suppose that f_n : [a, b] → R is a sequence of continuous functions that converges uniformly to f : [a, b] → R. Then

$$\lim_{n\to\infty} \int_a^b f_n(x)\,dx = \int_a^b \lim_{n\to\infty} f_n(x)\,dx = \int_a^b f(x)\,dx.$$

Proof.
From Example 1.2.6, we have that f is continuous and hence integrable. Let ε > 0 be given. By uniform convergence, we can find a positive integer N such that |f_n(x) − f(x)| < ε/(b − a) for all x in [a, b] and n ≥ N. Thus, for n ≥ N, we have

$$\left| \int_a^b f_n(x)\,dx - \int_a^b f(x)\,dx \right| = \left| \int_a^b [f_n(x) - f(x)]\,dx \right| \le \int_a^b |f_n(x) - f(x)|\,dx < \int_a^b \frac{\varepsilon}{b-a}\,dx = \varepsilon.$$
This completes the proof of the theorem
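The conclusion of Theorem 1.2.1 can be illustrated numerically for the uniformly convergent sequence of Example 1.2.4; the grid and quadrature below are illustrative choices:

```python
import numpy as np

# f_n(x) = x/n converges uniformly to 0 on [0, 1], so
# int_0^1 f_n dx = 1/(2n) tends to int_0^1 0 dx = 0.
xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]

def integral_fn(n):
    ys = xs / n
    # composite trapezoid rule (exact here: the integrand is linear)
    return float(np.sum((ys[1:] + ys[:-1]) / 2.0) * dx)

assert abs(integral_fn(10) - 0.05) < 1e-9
```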

Uniform Convergence of Series
Recall from calculus that a function series is a series whose summands are functions. Examples of function series include power series, Laurent series, Fourier series, etc. Unlike series of numbers, there exist many types of convergence of series of functions, namely, pointwise, uniform, etc.
We say that a series of functions $\sum_{n=1}^{\infty} f_n(x)$ converges pointwise to a function f if and only if the sequence of partial sums

$$S_n(x) = f_1(x) + f_2(x) + \cdots + f_n(x)$$
converges pointwise to f. We write

$$\sum_{n=1}^{\infty} f_n(x) = \lim_{n\to\infty} S_n(x) = f(x).$$

Example 1.2.8
Show that $\sum_{n=0}^{\infty} x^n$ converges pointwise to a function to be determined for all −1 < x < 1.

Solution. The nth term of the sequence of partial sums is given by

$$S_n(x) = 1 + x + x^2 + \cdots + x^n = \frac{1 - x^{n+1}}{1 - x}.$$
Since
$$\lim_{n\to\infty} x^{n+1} = 0 \quad\text{for } -1 < x < 1,$$
the partial sums converge pointwise to the function 1/(1 − x). Thus,

$$\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$$
Likewise, we say that a series of functions $\sum_{n=1}^{\infty} f_n(x)$ converges uniformly to a function f if and only if the sequence of partial sums {S_n} converges uniformly to f. The following theorem provides a tool for establishing the uniform convergence of series of functions.
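The geometric series is easy to probe numerically; a minimal sketch at one sample point:

```python
# Partial sums S_n(x) = 1 + x + ... + x^n of the geometric series
# approach 1/(1 - x) for |x| < 1; at x = 0.5 the limit is 2.
def partial_sum(x, n):
    return sum(x**k for k in range(n + 1))

assert abs(partial_sum(0.5, 60) - 2.0) < 1e-12
```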

Theorem 1.2.2 (Weierstrass M-test)
Suppose that for each x in an interval I the series $\sum_{n=1}^{\infty} f_n(x)$ is well-defined. Suppose further that

$$|f_n(x)| \le M_n, \quad \forall x \in I.$$
If $\sum_{n=1}^{\infty} M_n$ is convergent then the series $\sum_{n=1}^{\infty} f_n(x)$ is uniformly convergent.

Example 1.2.9
Show that $\sum_{n=1}^{\infty} \frac{\sin(nx)}{n^2}$ is uniformly convergent in R.

Solution. For all x ∈ R, we have

$$\left|\frac{\sin(nx)}{n^2}\right| \le \frac{|\sin(nx)|}{n^2} \le \frac{1}{n^2}.$$

The series $\sum_{n=1}^{\infty} \frac{1}{n^2}$ is convergent, being a p-series with p = 2 > 1. Thus, by the Weierstrass M-test the given series is uniformly convergent

Practice Problems

Problem 1.2.1
Define f_n : [0, 1] → R by f_n(x) = x^n. Define f : [0, 1] → R by

$$f(x) = \begin{cases} 0, & \text{if } 0 \le x < 1 \\ 1, & \text{if } x = 1. \end{cases}$$

(a) Show that the sequence {f_n} converges pointwise to f.
(b) Show that the sequence {f_n} does not converge uniformly to f. Hint: Suppose otherwise. Let ε = 0.5 and get a contradiction by using a point (0.5)^{1/N} < x < 1.

Problem 1.2.2 Consider the sequence of functions

$$f_n(x) = \frac{nx + x^2}{n^2}$$

Problem 1.2.3 Consider the sequence of functions

$$f_n(x) = \frac{\sin(nx+3)}{\sqrt{n+1}}$$

Problem 1.2.4
Consider the sequence of functions defined by f_n(x) = n^2 x^n for all 0 ≤ x ≤ 1. Show that this sequence does not converge pointwise to any function.

Problem 1.2.5
Consider the sequence of functions defined by f_n(x) = (cos x)^n for all −π/2 ≤ x ≤ π/2. Show that this sequence converges pointwise to a discontinuous function to be determined.

Problem 1.2.6
Consider the sequence of functions $f_n(x) = x - \frac{x^n}{n}$ defined on [0, 1). Does {f_n} converge to some limit function? If so, find the limit function and show whether the convergence is pointwise or uniform.

Problem 1.2.7
Let $f_n(x) = \frac{x^n}{1+x^n}$ for x ∈ [0, 2].
(a) Find the pointwise limit f(x) = lim_{n→∞} f_n(x) on [0, 2].
(b) Does f_n → f uniformly on [0, 2]?

Problem 1.2.8
For each n ∈ N define f_n : R → R by $f_n(x) = \frac{n + \cos x}{2n + \sin^2 x}$.
(a) Show that f_n → 1/2 uniformly.
(b) Find $\lim_{n\to\infty} \int_2^7 f_n(x)\,dx$.

Problem 1.2.9
Show that the sequence defined by f_n(x) = (cos x)^n does not converge uniformly on [−π/2, π/2].

Problem 1.2.10
Let {f_n} be a sequence of functions such that
$$\sup\{|f_n(x)| : 2 \le x \le 5\} \le \frac{2^n}{1 + 4^n}.$$
(a) Show that this sequence converges uniformly to a function f to be found.
(b) What is the value of the limit $\lim_{n\to\infty} \int_2^5 f_n(x)\,dx$?

Problem 1.2.11
Show that the series $\sum_{n=1}^{\infty} \frac{1}{2^n}\cos(3^n x)$ converges uniformly on R.

Problem 1.2.12
Show that the series $\sum_{n=1}^{\infty} \frac{n}{n^4 + x^4}$ converges uniformly on R.

Problem 1.2.13
Show that the series $\sum_{n=1}^{\infty} \frac{1}{n^2 + x^2}$ converges uniformly on R.

Problem 1.2.14
Suppose that $\sum_{n=1}^{\infty} |a_n| < \infty$. Show that $\sum_{n=1}^{\infty} a_n \cos(nx)$ is uniformly convergent on R.

1.3 Existence and Uniqueness Theorem

In the following three sections we carry the basic theory of second order linear differential equations to the nth order linear differential equation

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = g(t) \qquad (1.3.1)$$
where the functions p_0(t), p_1(t), ···, p_{n−1}(t) and g(t) are continuous functions for a < t < b. If g(t) is not identically zero, then this equation is said to be non-homogeneous; if g(t) is identically zero, then this equation is called homogeneous.
Equation (1.3.1) is of order n because the highest order derivative that appears in the equation is n. Equation (1.3.1) is called linear because of the following theorem.

Theorem 1.3.1 (Linearity)
For any n times differentiable function y, we define the function

$$L[y] = y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y.$$

If y1 and y2 are n times differentiable and α1 and α2 are scalars then L satisfies the property

L[α1y1 + α2y2] = α1L[y1] + α2L[y2].

That is, L is a linear mapping.

Proof.
Indeed, we have

$$\begin{aligned} L[\alpha_1 y_1 + \alpha_2 y_2] &= (\alpha_1 y_1 + \alpha_2 y_2)^{(n)} + p_{n-1}(t)(\alpha_1 y_1 + \alpha_2 y_2)^{(n-1)} + \cdots + p_0(t)(\alpha_1 y_1 + \alpha_2 y_2) \\ &= (\alpha_1 y_1^{(n)} + \alpha_1 p_{n-1}(t)y_1^{(n-1)} + \cdots + \alpha_1 p_1(t)y_1' + \alpha_1 p_0(t)y_1) \\ &\quad + (\alpha_2 y_2^{(n)} + \alpha_2 p_{n-1}(t)y_2^{(n-1)} + \cdots + \alpha_2 p_1(t)y_2' + \alpha_2 p_0(t)y_2) \\ &= \alpha_1 (y_1^{(n)} + p_{n-1}(t)y_1^{(n-1)} + \cdots + p_1(t)y_1' + p_0(t)y_1) \\ &\quad + \alpha_2 (y_2^{(n)} + p_{n-1}(t)y_2^{(n-1)} + \cdots + p_1(t)y_2' + p_0(t)y_2) \\ &= \alpha_1 L[y_1] + \alpha_2 L[y_2] \end{aligned}$$

The above property extends to any finite number of functions.

Existence and Uniqueness of Solutions
We begin by discussing the existence of a unique solution to the initial value problem
$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = g(t)$$
$$y(t_0) = y_0,\; y'(t_0) = y_1,\; \cdots,\; y^{(n-1)}(t_0) = y_{n-1}, \qquad a < t_0 < b.$$
Theorem 1.3.2
The non-homogeneous nth order linear differential equation

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = g(t) \qquad (1.3.2)$$
with initial conditions

$$y(t_0) = y_0,\; y'(t_0) = y_1,\; \cdots,\; y^{(n-1)}(t_0) = y_{n-1}, \qquad a < t_0 < b \qquad (1.3.3)$$
has a unique solution in a closed interval [c, d] ⊆ (a, b) that contains t_0.

Proof.
Existence: The existence of a local solution is obtained here by transforming the problem into a first order system. This is done by introducing the variables
$$x_1 = y,\; x_2 = y',\; \cdots,\; x_n = y^{(n-1)}.$$
In this case, we have

$$\begin{aligned} x_1' &= x_2 \\ x_2' &= x_3 \\ &\;\;\vdots \\ x_{n-1}' &= x_n \\ x_n' &= -p_{n-1}(t)x_n - \cdots - p_1(t)x_2 - p_0(t)x_1 + g(t). \end{aligned}$$
Thus, we can write the initial-value problem as a system:
$$\begin{bmatrix} x_1' \\ x_2' \\ x_3' \\ \vdots \\ x_n' \end{bmatrix} + \begin{bmatrix} 0 & -1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & -1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -1 \\ p_0 & p_1 & p_2 & p_3 & \cdots & p_{n-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ g(t) \end{bmatrix}$$
or in compact form

$$x'(t) = A(t)x(t) + b(t), \qquad x(t_0) = x_0 \qquad (1.3.4)$$
where
$$A(t) = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ -p_0 & -p_1 & -p_2 & -p_3 & \cdots & -p_{n-1} \end{bmatrix}$$
and
$$x(t) = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}, \qquad b(t) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ g(t) \end{bmatrix}, \qquad x_0 = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \end{bmatrix}.$$
Note that if y(t) is a solution of (1.3.2)-(1.3.3) then the vector-valued function

$$x(t) = \begin{bmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{bmatrix}$$
is a solution to (1.3.4). Conversely, if the vector
$$x(t) = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}$$
is a solution of (1.3.4) then x_1' = x_2, x_1'' = x_3, ···, x_1^{(n-1)} = x_n. Hence,
$$x_1^{(n)} = x_n' = -p_{n-1}(t)x_n - p_{n-2}(t)x_{n-1} - \cdots - p_0(t)x_1 + g(t)$$
and

$$x_1^{(n)} + p_{n-1}(t)x_1^{(n-1)} + p_{n-2}(t)x_1^{(n-2)} + \cdots + p_0(t)x_1 = g(t)$$
or
$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + p_{n-2}(t)y^{(n-2)} + \cdots + p_0(t)y = g(t)$$

which means that y = x_1(t) is a solution to (1.3.2). Moreover, x_1(t_0) = y_0, x_1'(t_0) = x_2(t_0) = y_1, ···, x_1^{(n-1)}(t_0) = x_n(t_0) = y_{n-1}. That is, x_1(t) satisfies (1.3.3).
Next, we start by reformulating (1.3.4) as an equivalent integral equation. Integration of both sides of (1.3.4) yields

$$\int_{t_0}^{t} x'(s)\,ds = \int_{t_0}^{t} [A(s)x(s) + b(s)]\,ds. \qquad (1.3.5)$$
Applying the Fundamental Theorem of Calculus to the left side of (1.3.5) yields

$$x(t) = x(t_0) + \int_{t_0}^{t} [A(s)x(s) + b(s)]\,ds, \qquad x(t_0) = x_0. \qquad (1.3.6)$$
Thus, a solution of (1.3.6) is also a solution to (1.3.4) and vice versa.
To prove the existence of a solution, we shall use the method of successive approximation, or Picard's iterations. Letting
$$x_0 = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \end{bmatrix}$$
we can introduce Picard's iterations defined recursively as follows:

$$\begin{aligned} x_0(t) &= x_0 \\ x_1(t) &= x_0 + \int_{t_0}^{t} [A(s)x_0(s) + b(s)]\,ds \\ x_2(t) &= x_0 + \int_{t_0}^{t} [A(s)x_1(s) + b(s)]\,ds \\ &\;\;\vdots \\ x_N(t) &= x_0 + \int_{t_0}^{t} [A(s)x_{N-1}(s) + b(s)]\,ds \\ &\;\;\vdots \end{aligned}$$
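Picard's iterations can also be carried out numerically. Below is an illustrative sketch on the scalar test problem x' = x, x(0) = 1 (so A = 1, b = 0), whose exact solution is e^t; the grid, the trapezoidal quadrature, and the test equation are all choices made for illustration, not part of the proof:

```python
import numpy as np

t0, t1, m = 0.0, 1.0, 2001
ts = np.linspace(t0, t1, m)
h = ts[1] - ts[0]

def picard(n_iter):
    x = np.ones(m)  # x_0(t) = x_0 = 1
    for _ in range(n_iter):
        # cumulative trapezoid approximation of int_{t0}^t x(s) ds
        cum = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2.0) * h))
        x = 1.0 + cum  # x_N(t) = x_0 + int_{t0}^t [A x_{N-1} + b] ds
    return x

x = picard(25)
assert abs(x[-1] - np.exp(1.0)) < 1e-4
```

After 25 iterations the iterate agrees with e^t on [0, 1] up to quadrature error, mirroring the uniform convergence established below.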

Let
$$x_N(t) = \begin{bmatrix} x_{1,N} \\ x_{2,N} \\ \vdots \\ x_{n,N} \end{bmatrix}.$$

For i = 1, 2, ···, n, we are going to show that the sequence {x_{i,N}(t)} converges uniformly to a function x_i(t) such that x(t) (with components x_1, x_2, ···, x_n) is a solution to (1.3.6) and hence a solution to (1.3.4).
Let [c, d] be a closed interval containing t_0 and contained in (a, b). For i = 0, 1, ···, n − 1, the function p_i(t) is continuous in a < t < b and in particular it is continuous in [c, d] ⊆ (a, b). We know from calculus that a continuous function on a closed interval is bounded. Hence, there exist positive constants k_0, k_1, ···, k_{n−1} such that

$$\max_{c \le t \le d} |p_0(t)| \le k_0, \quad \max_{c \le t \le d} |p_1(t)| \le k_1, \quad \cdots, \quad \max_{c \le t \le d} |p_{n-1}(t)| \le k_{n-1}.$$

This implies that

$$\|A(t)x(t)\| = |x_2| + |x_3| + \cdots + |x_n| + |p_0x_1 + p_1x_2 + \cdots + p_{n-1}x_n|$$
$$\le |x_2| + |x_3| + \cdots + |x_n| + |p_0||x_1| + |p_1||x_2| + \cdots + |p_{n-1}||x_n|$$
$$\le k_0|x_1| + (1 + k_1)|x_2| + \cdots + (1 + k_{n-1})|x_n| \le K\|x\|$$

for all $c \le t \le d$, where we define

$$\|x\| = |x_1| + |x_2| + \cdots + |x_n|$$

and

$$K = k_0 + (1 + k_1) + \cdots + (1 + k_{n-1}).$$

For $i = 1, 2, \cdots, n$, we have

$$|x_{i,N} - x_{i,N-1}| \le \|x_N - x_{N-1}\| \le \int_{t_0}^{t} \|A(s)\,(x_{N-1} - x_{N-2})\|\,ds \le K \int_{t_0}^{t} \|x_{N-1} - x_{N-2}\|\,ds.$$

Also,

$$\|x_1 - x_0\| \le \int_{t_0}^{t} \|A(s)x_0 + b(s)\|\,ds \le M(t - t_0),$$

where $M = K\|x_0\| + \max_{c \le t \le d} |g(t)|$. An easy induction on $N \ge 1$ yields (see Problem 1.3.10)

$$\|x_N - x_{N-1}\| \le MK^{N-1}\frac{(t - t_0)^N}{N!}. \tag{1.3.7}$$

Since $N! \ge (N-1)!$ and $t - t_0 < b - a$, we have

$$\|x_N - x_{N-1}\| \le MK^{N-1}\frac{(t - t_0)^N}{(N-1)!} \le MK^{N-1}\frac{(b - a)^N}{(N-1)!}.$$

Since

$$\sum_{N=1}^{\infty} MK^{N-1}\frac{(b - a)^N}{(N-1)!} = M(b - a)e^{K(b-a)},$$

by the Weierstrass M-test we conclude that the series $\sum_{N=1}^{\infty}[x_{i,N} - x_{i,N-1}]$ converges uniformly for all $c \le t \le d$. But

$$x_{i,N}(t) = \sum_{k=1}^{N-1}[x_{i,k+1}(t) - x_{i,k}(t)] + x_{i,1}.$$

Thus, the sequence $\{x_{i,N}\}_{N=1}^{\infty}$ converges uniformly to a function $x_i(t)$ for all $c \le t \le d$. The function $x_i(t)$ is continuous (a uniform limit of a sequence of continuous functions is continuous). Also, we can interchange the order of taking limits and integration for such sequences. Therefore

$$x(t) = \lim_{N \to \infty} x_N(t) = x_0 + \lim_{N \to \infty}\int_{t_0}^{t}(A(s)x_{N-1} + b(s))\,ds = x_0 + \int_{t_0}^{t}\lim_{N \to \infty}(A(s)x_{N-1} + b(s))\,ds = x_0 + \int_{t_0}^{t}(A(s)x + b(s))\,ds.$$

Uniqueness: For the uniqueness, we first derive the following result, known as Gronwall's lemma: Let $u(t)$ and $h(t) \ge 0$ be continuous in $[a, b]$ such that

$$u(t) \le C + \int_{a}^{t} u(s)h(s)\,ds \tag{1.3.8}$$

for some constant $C$ and for all $a \le t \le b$. Then

$$u(t) \le Ce^{\int_a^t h(s)\,ds}$$

for all $a \le t \le b$. To see this, differentiate both sides of (1.3.8) and use the second fundamental theorem of calculus to obtain

$$u'(t) - u(t)h(t) \le 0.$$

Multiply both sides by the integrating factor $e^{-\int_a^t h(s)\,ds}$ to obtain

$$\frac{d}{dt}\left[e^{-\int_a^t h(s)\,ds}\,u(t)\right] \le 0.$$

Integrating both sides from $a$ to $t$, we find

$$e^{-\int_a^t h(s)\,ds}\,u(t) - u(a) \le 0$$

or, since (1.3.8) at $t = a$ gives $u(a) \le C$,

$$u(t) \le e^{\int_a^t h(s)\,ds}\,u(a) \le Ce^{\int_a^t h(s)\,ds}.$$

Now, the uniqueness of the solution to (1.3.6) follows from Gronwall's inequality. Suppose that $y(t)$ and $z(t)$ are two solutions to the initial value problem (1.3.6). Then for all $c \le t \le d$ we have

$$\|y(t) - z(t)\| \le \int_{t_0}^{t} K\|y(s) - z(s)\|\,ds.$$

Letting $u(t) = \|y(t) - z(t)\|$, we have

$$0 \le u(t) \le \int_{t_0}^{t} Ku(s)\,ds,$$

so that by Gronwall's inequality with $C = 0$ and $h(t) = K$, we find $u(t) \equiv 0$ in $[c, d]$ and therefore $y(t) = z(t)$ for all $c \le t \le d$. This completes the proof of the theorem.
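Gronwall's bound can be sanity-checked numerically. The snippet below is our own illustration, not part of the proof: we take $u(t) = e^{2t}$, $h(t) = 2$, $C = 1$ on $[0, 1]$, for which the hypothesis (1.3.8) holds with equality and the conclusion is attained with equality, showing the bound is sharp. The helper name `trapezoid` is our own choice.

```python
# Numerical check that u(t) <= C * exp(integral of h) is consistent with the
# hypothesis u(t) <= C + integral of u*h, for u = e^{2t}, h = 2, C = 1.
# In this extremal example both relations hold with equality.

import math

def trapezoid(f, a, b, n=2000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    step = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * step) for k in range(1, n))
    return s * step

C = 1.0
u = lambda t: math.exp(2.0 * t)
h = lambda t: 2.0

for t in [0.25, 0.5, 0.75, 1.0]:
    hypothesis_rhs = C + trapezoid(lambda s: u(s) * h(s), 0.0, t)
    gronwall_bound = C * math.exp(trapezoid(h, 0.0, t))
    assert abs(u(t) - hypothesis_rhs) < 1e-5   # hypothesis (with equality)
    assert u(t) <= gronwall_bound + 1e-9       # Gronwall's conclusion
print("Gronwall bound verified at sample points")
```

Taking $C = 0$, as in the uniqueness argument above, forces the bound to be $0$, which is exactly how $\|y - z\| \equiv 0$ is obtained.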

Remark 1.3.1 It can be shown that the interval a < t < b is the largest interval where a unique solution is guaranteed to exist.

Example 1.3.1
Find the largest interval where

$$(t^2 - 16)y''' + 2y'' + t^2y = \sec t, \quad y(3) = 1, \quad y'(3) = 3, \quad y''(3) = -1$$

is guaranteed to have a unique solution.

Solution.
We first put the equation into standard form:

$$y''' + \frac{2}{t^2 - 16}y'' + \frac{t^2}{t^2 - 16}y = \frac{\sec t}{t^2 - 16}.$$

The coefficient functions are continuous for all $t \ne \pm 4$ and $t \ne (2n + 1)\frac{\pi}{2}$, where $n$ is an integer. Since $t_0 = 3$, the largest interval where the given initial value problem is guaranteed to have a unique solution is the interval $\frac{\pi}{2} < t < 4$.

Remark 1.3.2
According to Remark 1.3.1, Theorem 1.3.2 provides a largest interval where a unique solution to the initial-value problem (1.3.2)-(1.3.3) exists. The actual interval of existence of the solution might be larger, as illustrated in the next example.

Example 1.3.2
Consider the initial value problem

$$t^2y'' - ty' + y = 0, \quad y(1) = 1, \quad y'(1) = 1.$$

(a) What is the largest interval on which the existence and uniqueness theorem guarantees the existence of a unique solution?
(b) Show by direct substitution that the function $y(t) = t$ is the unique solution to this initial value problem. What is the interval on which this solution actually exists?
(c) Does this example contradict the assertion of Theorem 1.3.2? Explain.

Solution.
(a) Writing the equation in standard form, we obtain

$$y'' - \frac{1}{t}y' + \frac{1}{t^2}y = 0.$$

From this, we see that the functions $p(t) = -\frac{1}{t}$ and $q(t) = \frac{1}{t^2}$ are continuous for all $t \ne 0$. Since $t_0 = 1$, the largest $t$-interval according to the existence and uniqueness theorem is $0 < t < \infty$.
(b) If $y(t) = t$ then $y'(t) = 1$ and $y''(t) = 0$, so that $y'' - \frac{1}{t}y' + \frac{1}{t^2}y = 0 - \frac{1}{t} + \frac{1}{t} = 0$ and $y(1) = y'(1) = 1$. So $y(t) = t$ is a solution, and by the existence and uniqueness theorem it is the only solution. The $t$-interval for this solution is $-\infty < t < \infty$.
(c) No, because the theorem is a local existence theorem and not a global one.

Practice Problems

Problem 1.3.1
Convert the initial value problem

$$y''' - \frac{1}{t^2 - 9}y'' + \ln(t + 1)y' + (\cos t)y = 0, \quad y(0) = 1, \quad y'(0) = 3, \quad y''(0) = 0$$

into the matrix form

$$x'(t) = A(t)x(t) + b(t), \quad x(t_0) = x_0.$$

Problem 1.3.2
Convert the initial value problem

$$y''' + \frac{1}{t + 1}y' + (\tan t)y = 0, \quad y(0) = 0, \quad y'(0) = 1, \quad y''(0) = 2$$

into the matrix form

$$x'(t) = A(t)x(t) + b(t), \quad x(t_0) = x_0.$$

Problem 1.3.3
Convert the initial value problem

$$y''' - \frac{1}{t^2 + 9}y'' + \ln(t^2 + 1)y' + (\cos t)y = t^3 + t - 5, \quad y(0) = 1, \quad y'(0) = 3, \quad y''(0) = 0$$

into the matrix form

$$x'(t) = A(t)x(t) + b(t), \quad x(t_0) = x_0.$$

Problem 1.3.4
Transform the following third-order equation

$$y''' - 3ty' + (\sin 2t)y = 7e^{-t}$$

into a first order system of the form

$$x'(t) = A(t)x(t) + b(t).$$

Problem 1.3.5
If $y(t) = \frac{14}{33}e^{-4t} + \frac{13}{15}e^{2t} - \frac{16}{55}e^{7t}$ is the solution to the initial value problem

$$y''' - 5y'' - 22y' + 56y = 0, \quad y(0) = 1, \quad y'(0) = -2, \quad y''(0) = -4,$$

then what is the solution to the corresponding matrix system

$$x'(t) = A(t)x(t) + b(t), \quad x(t_0) = x_0?$$

Problem 1.3.6
Suppose that the vector

$$x(t) = \begin{pmatrix} e^t + e^{-t} + \cos t + \sin t \\ e^t - e^{-t} - \sin t + \cos t \\ e^t + e^{-t} - \cos t - \sin t \\ e^t - e^{-t} + \sin t - \cos t \end{pmatrix}$$

is a solution to the matrix system

$$x'(t) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix} x.$$

Find the ordinary differential equation with solution $y(t) = e^t + e^{-t} + \cos t + \sin t$.

Problem 1.3.7
Suppose that the vector

$$x(t) = \begin{pmatrix} 1 + \cos(5t) + \sin(5t) \\ -5\sin(5t) + 5\cos(5t) \\ -25\cos(5t) - 25\sin(5t) \end{pmatrix}$$

is a solution to the matrix system

$$x'(t) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -25 & 0 \end{pmatrix} x.$$

Find the ordinary differential equation with solution $y(t) = 1 + \cos(5t) + \sin(5t)$.

Problem 1.3.8
Find the first two Picard iterations in Problem 1.3.1.

Problem 1.3.9
Find the first two Picard iterations in Problem 1.3.3.

Problem 1.3.10
Prove the inequality (1.3.7) by induction on $N \ge 1$.

Problem 1.3.11
Let $f(t)$ be a continuous function such that $f(t) \le C + K\int_a^t f(s)\,ds$ for some constants $C$ and $K \ge 0$. Show that $f(t) \le Ce^{K(t-a)}$.

Problem 1.3.12
Let $f(t)$ be a non-negative continuous function such that $f(t) \le \int_a^t f(s)\,ds$ for all $t \ge a$. Show that $f(t) \equiv 0$ for all $t \ge a$.

For Problems 1.3.13 - 1.3.16, use Theorem 1.3.2 to find the largest interval $a < t < b$ in which a unique solution is guaranteed to exist.

Problem 1.3.13
$$y''' - \frac{1}{t^2 - 9}y'' + \ln(t + 1)y' + (\cos t)y = 0, \quad y(0) = 1, \quad y'(0) = 3, \quad y''(0) = 0.$$

Problem 1.3.14
$$y''' + \frac{1}{t + 1}y' + (\tan t)y = 0, \quad y(0) = 0, \quad y'(0) = 1, \quad y''(0) = 2.$$

Problem 1.3.15
$$y''' - \frac{1}{t^2 + 9}y'' + \ln(t^2 + 1)y' + (\cos t)y = 0, \quad y(0) = 1, \quad y'(0) = 3, \quad y''(0) = 0.$$

Problem 1.3.16

$$(t^2 - 4)y^{(4)} + y'' + ty = \csc t, \quad y(1) = 1, \quad y'(1) = 2, \quad y''(1) = 0, \quad y'''(1) = -1.$$

Problem 1.3.17
Determine the value(s) of $r$ so that $y(t) = e^{rt}$ is a solution to the differential equation $y''' - 2y'' - y' + 2y = 0$.

1.4 The General Solution of nth Order Linear Homogeneous Equations

In this section, we consider the question of solving the homogeneous equation

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = 0 \tag{1.4.1}$$

where $p_0(t), p_1(t), \cdots, p_{n-1}(t)$ are continuous functions in the interval $a < t < b$. The next theorem shows that any linear combination of solutions to the homogeneous equation is also a solution.

Theorem 1.4.1 (Principle of Superposition)
If $y_1, y_2, \cdots, y_r$ satisfy the homogeneous equation (1.4.1) and if $\alpha_1, \alpha_2, \cdots, \alpha_r$ are any numbers, then

$$y(t) = \alpha_1y_1 + \alpha_2y_2 + \cdots + \alpha_ry_r$$

also satisfies the homogeneous equation (1.4.1).

Proof.
Since $y_1, y_2, \cdots, y_r$ are solutions to (1.4.1), $L[y_1] = L[y_2] = \cdots = L[y_r] = 0$. Now, using the linearity property of $L$ (Theorem 1.3.1), we have

$$L[\alpha_1y_1 + \alpha_2y_2 + \cdots + \alpha_ry_r] = \alpha_1L[y_1] + \alpha_2L[y_2] + \cdots + \alpha_rL[y_r] = 0 + 0 + \cdots + 0 = 0.$$

That is, $y(t) = \alpha_1y_1 + \alpha_2y_2 + \cdots + \alpha_ry_r$ is a solution to (1.4.1).

Remark 1.4.1
The above theorem is false for non-homogeneous equations. For example, the functions $y_1(t) = \frac{t^3}{6}$ and $y_2(t) = \frac{t^3}{6} + 1$ are solutions to $y''' = 1$. However, the function $y(t) = y_1(t) + y_2(t) = \frac{t^3}{3} + 1$ is not a solution.

The principle of superposition states that if $y_1, y_2, \cdots, y_r$ are solutions to (1.4.1), then any linear combination of them is also a solution. The next question that we consider is the question of existence of $n$ solutions $y_1, y_2, \cdots, y_n$ of equation (1.4.1) such that every solution to (1.4.1) can be written as a linear combination of these functions. We call such a set of functions a fundamental set of solutions. Note that the number of solutions comprising a fundamental set is equal to the order of the differential equation. Also, note that the general solution to (1.4.1) is then given by

$$y(t) = c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t).$$

Next, we consider a criterion for testing $n$ solutions for a fundamental set. For that we first introduce the following definition: For $n$ functions $y_1, y_2, \cdots, y_n$, we define the Wronskian of these functions to be the determinant

$$W(t) = \begin{vmatrix} y_1(t) & y_2(t) & \cdots & y_n(t) \\ y_1'(t) & y_2'(t) & \cdots & y_n'(t) \\ y_1''(t) & y_2''(t) & \cdots & y_n''(t) \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)}(t) & y_2^{(n-1)}(t) & \cdots & y_n^{(n-1)}(t) \end{vmatrix}.$$

Theorem 1.4.2 (Criterion for identifying fundamental sets)
Let $y_1(t), y_2(t), \cdots, y_n(t)$ be $n$ solutions to the homogeneous equation (1.4.1). If there is $a < t_0 < b$ such that $W(t_0) \ne 0$, then $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions.

Proof.
We need to show that if $y(t)$ is a solution to (1.4.1), then we can write $y(t)$ as a linear combination of $y_1(t), y_2(t), \cdots, y_n(t)$. That is,

$$y(t) = c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t).$$

So the problem reduces to finding the constants $c_1, c_2, \cdots, c_n$. These are found by solving the following linear system of $n$ equations in the unknowns $c_1, c_2, \cdots, c_n$:

$$c_1y_1(t_0) + c_2y_2(t_0) + \cdots + c_ny_n(t_0) = y(t_0)$$
$$c_1y_1'(t_0) + c_2y_2'(t_0) + \cdots + c_ny_n'(t_0) = y'(t_0)$$
$$\vdots$$
$$c_1y_1^{(n-1)}(t_0) + c_2y_2^{(n-1)}(t_0) + \cdots + c_ny_n^{(n-1)}(t_0) = y^{(n-1)}(t_0).$$

Solving this system using Cramer's rule, we find

$$c_i = \frac{W_i(t_0)}{W(t_0)}, \quad 1 \le i \le n,$$

where $W_i(t_0)$ is the determinant obtained from $W(t_0)$ by replacing its $i$th column with the right-hand column of the above system, i.e., with $(y(t_0), y'(t_0), \cdots, y^{(n-1)}(t_0))^T$. Note that $c_1, c_2, \cdots, c_n$ exist since $W(t_0) \ne 0$.

Example 1.4.1
Consider the differential equation

$$y'' + 4y = 0. \tag{1.4.2}$$

(a) Show that $y_1(t) = \cos 2t$ and $y_2(t) = \sin 2t$ are solutions to (1.4.2).
(b) Show that $\{\cos 2t, \sin 2t\}$ is a fundamental set of solutions.
(c) Write the solution $y(t) = 3\cos\left(2t + \frac{\pi}{4}\right)$ as a linear combination of $y_1$ and $y_2$.

Solution.
(a) A simple calculation shows

$$y_1'' + 4y_1 = -4\cos 2t + 4\cos 2t = 0,$$
$$y_2'' + 4y_2 = -4\sin 2t + 4\sin 2t = 0.$$

(b) For any $t$ we have

$$W(y_1(t), y_2(t)) = \begin{vmatrix} \cos 2t & \sin 2t \\ -2\sin 2t & 2\cos 2t \end{vmatrix} = 2\cos^2 2t + 2\sin^2 2t = 2 \ne 0.$$

Thus, $\{y_1, y_2\}$ is a fundamental set of solutions.
(c) Using the formulas for $c_1$ and $c_2$ with $t_0 = 0$, we find

$$c_1 = \frac{y(0)y_2'(0) - y'(0)y_2(0)}{W(y_1(0), y_2(0))} = \frac{6\cos\frac{\pi}{4}\cos 0 + 6\sin\frac{\pi}{4}\sin 0}{2} = \frac{3\sqrt{2}}{2}$$

and

$$c_2 = \frac{y'(0)y_1(0) - y(0)y_1'(0)}{W(y_1(0), y_2(0))} = \frac{-6\sin\frac{\pi}{4}\cos 0 + 6\cos\frac{\pi}{4}\sin 0}{2} = -\frac{3\sqrt{2}}{2}.$$
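Part (b) can be cross-checked numerically. The snippet below is our own sketch (the helper name `wronskian_2x2` is our choice): it evaluates the $2 \times 2$ Wronskian of $y_1 = \cos 2t$, $y_2 = \sin 2t$ at several sample points and confirms that $W(t) = 2$ everywhere.

```python
# Numerical cross-check of Example 1.4.1(b): W(cos 2t, sin 2t) = 2 for all t.

import math

def wronskian_2x2(y1, dy1, y2, dy2, t):
    # W = y1 * y2' - y2 * y1'
    return y1(t) * dy2(t) - y2(t) * dy1(t)

y1, dy1 = lambda t: math.cos(2 * t), lambda t: -2 * math.sin(2 * t)
y2, dy2 = lambda t: math.sin(2 * t), lambda t: 2 * math.cos(2 * t)

for t in [-3.0, -0.5, 0.0, 1.0, 7.0]:
    assert abs(wronskian_2x2(y1, dy1, y2, dy2, t) - 2.0) < 1e-12
print("W(t) = 2 at all sample points")
```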

As a first application of this result, we establish the existence of fundamental sets.

Theorem 1.4.3
The linear homogeneous differential equation

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = 0,$$

where $p_{n-1}(t), \cdots, p_1(t), p_0(t)$ are continuous functions in $a < t < b$, has a fundamental set $\{y_1, y_2, \cdots, y_n\}$.

Proof.
Pick a $t_0$ in the interval $a < t < b$ and consider the $n$ initial value problems consisting of the equation above together with the initial conditions

$$y(t_0) = 1, \quad y'(t_0) = 0, \quad y''(t_0) = 0, \quad \cdots, \quad y^{(n-1)}(t_0) = 0,$$
$$y(t_0) = 0, \quad y'(t_0) = 1, \quad y''(t_0) = 0, \quad \cdots, \quad y^{(n-1)}(t_0) = 0,$$
$$y(t_0) = 0, \quad y'(t_0) = 0, \quad y''(t_0) = 1, \quad \cdots, \quad y^{(n-1)}(t_0) = 0,$$
$$\vdots$$
$$y(t_0) = 0, \quad y'(t_0) = 0, \quad y''(t_0) = 0, \quad \cdots, \quad y^{(n-1)}(t_0) = 1.$$

Then by Theorem 1.3.2, we can find unique solutions $y_1, y_2, \cdots, y_n$, one for each set of initial conditions. This set is a fundamental set by the previous theorem since

$$W(t_0) = \begin{vmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{vmatrix} = 1 \ne 0.$$

Theorem 1.4.2 says that if one can find $a < t_0 < b$ such that $W(t_0) \ne 0$, then the set $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions. The following theorem shows that the condition $W(t_0) \ne 0$ implies that $W(t) \ne 0$ for all $t$ in the interval $(a, b)$. That is, the theorem tells us that we can choose our test point $t_0$ on the basis of convenience: any test point $t_0$ will do.

Theorem 1.4.4 (Abel’s) Let y1(t), y2(t), ··· , yn be n solutions to equation (1.4.1). Then 0 (1) W (t) satisfies the differential equation W (t) + pn−1(t)W (t) = 0. (2) If t0 is any point in (a, b) then

R t − t pn−1(s)ds W (t) = W (t0)e 0 .

Thus, if W (t0) 6= 0 then W (t) 6= 0 for all a < t < b.

Proof.
(1) By introducing the variables $x_1 = y, x_2 = y', x_3 = y'', \cdots, x_n = y^{(n-1)}$, we can write the differential equation as a first order system in the form

$$x'(t) = A(t)x(t)$$

where

$$A(t) = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ -p_0 & -p_1 & -p_2 & -p_3 & \cdots & -p_{n-1} \end{pmatrix}, \quad x(t) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}.$$

We will show in Section 2.1 that for any linear system of the form $x'(t) = A(t)x(t)$ we have $W'(t) = \mathrm{tr}(A(t))W(t)$, where $\mathrm{tr}(A(t))$ is the trace of the matrix $A(t)$. Recall that the trace of a square matrix is the sum of the entries on the main diagonal of the matrix. In our case

$$\mathrm{tr}(A(t)) = 0 + 0 + \cdots + 0 + (-p_{n-1}(t)) = -p_{n-1}(t),$$

so that

$$W'(t) + p_{n-1}(t)W(t) = 0.$$

(2) We solve the previous differential equation by the method of integrating factor:

$$e^{\int_{t_0}^{t} p_{n-1}(s)\,ds}\left[W'(t) + p_{n-1}(t)W(t)\right] = 0$$
$$\frac{d}{dt}\left[e^{\int_{t_0}^{t} p_{n-1}(s)\,ds}\,W(t)\right] = 0$$
$$\int_{t_0}^{t}\frac{d}{ds}\left[e^{\int_{t_0}^{s} p_{n-1}(\tau)\,d\tau}\,W(s)\right]ds = 0$$
$$e^{\int_{t_0}^{t} p_{n-1}(s)\,ds}\,W(t) - W(t_0) = 0$$
$$W(t) = W(t_0)e^{-\int_{t_0}^{t} p_{n-1}(s)\,ds}.$$

Example 1.4.2
Use Abel's formula to find the Wronskian of the differential equation $ty''' + 2y'' - t^3y' + e^{t^2}y = 0$.

Solution.
The original equation can be written as

$$y''' + \frac{2}{t}y'' - t^2y' + \frac{e^{t^2}}{t}y = 0.$$

By Abel's formula the Wronskian is

$$W(t) = Ce^{-\int \frac{2}{t}\,dt} = \frac{C}{t^2}.$$

We end this section by showing that the converse of Theorem 1.4.2 is also true.

Theorem 1.4.5
If $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions to

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = 0,$$

where $p_{n-1}(t), \cdots, p_1(t), p_0(t)$ are continuous functions in $a < t < b$, then $W(t) \ne 0$ for all $a < t < b$.

Proof.
Let $t_0$ be any point in $(a, b)$. By Theorem 1.3.2, there is a unique solution $y(t)$ to the initial value problem

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = 0, \quad y(t_0) = 1, \quad y'(t_0) = 0, \quad \cdots, \quad y^{(n-1)}(t_0) = 0.$$

Since $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set, there exist unique constants $c_1, c_2, \cdots, c_n$ such that

$$c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t) = y(t)$$
$$c_1y_1'(t) + c_2y_2'(t) + \cdots + c_ny_n'(t) = y'(t)$$
$$\vdots$$
$$c_1y_1^{(n-1)}(t) + c_2y_2^{(n-1)}(t) + \cdots + c_ny_n^{(n-1)}(t) = y^{(n-1)}(t)$$

for all $a < t < b$. In particular, for $t = t_0$ we obtain the system

$$c_1y_1(t_0) + c_2y_2(t_0) + \cdots + c_ny_n(t_0) = 1$$
$$c_1y_1'(t_0) + c_2y_2'(t_0) + \cdots + c_ny_n'(t_0) = 0$$
$$\vdots$$
$$c_1y_1^{(n-1)}(t_0) + c_2y_2^{(n-1)}(t_0) + \cdots + c_ny_n^{(n-1)}(t_0) = 0.$$

This system has a unique solution $(c_1, c_2, \cdots, c_n)$ where

$$c_i = \frac{W_i}{W(t_0)}$$

and $W_i$ is the determinant $W(t_0)$ with the $i$th column replaced by the column $(1, 0, 0, \cdots, 0)^T$.

Note that for $c_1, c_2, \cdots, c_n$ to exist we must have $W(t_0) \ne 0$. By Abel's theorem we conclude that $W(t) \ne 0$ for all $a < t < b$.

Practice Problems

In Problems 1.4.1 - 1.4.3, show that the given solutions form a fundamental set for the differential equation by computing the Wronskian.

Problem 1.4.1
$$y''' - y' = 0, \quad y_1(t) = 1, \quad y_2(t) = e^t, \quad y_3(t) = e^{-t}.$$

Problem 1.4.2
$$y^{(4)} + y'' = 0, \quad y_1(t) = 1, \quad y_2(t) = t, \quad y_3(t) = \cos t, \quad y_4(t) = \sin t.$$

Problem 1.4.3
$$t^2y''' + ty'' - y' = 0, \quad y_1(t) = 1, \quad y_2(t) = \ln t, \quad y_3(t) = t^2.$$

Use the fact that the solutions given in Problems 1.4.1 - 1.4.3 form a fundamental set of solutions to solve the following initial value problems 1.4.4 - 1.4.6.

Problem 1.4.4
$$y''' - y' = 0, \quad y(0) = 3, \quad y'(0) = -3, \quad y''(0) = 1.$$

Problem 1.4.5
$$y^{(4)} + y'' = 0, \quad y\left(\tfrac{\pi}{2}\right) = 2 + \pi, \quad y'\left(\tfrac{\pi}{2}\right) = 3, \quad y''\left(\tfrac{\pi}{2}\right) = -3, \quad y'''\left(\tfrac{\pi}{2}\right) = 1.$$

Problem 1.4.6
$$t^2y''' + ty'' - y' = 0, \quad y(1) = 1, \quad y'(1) = 2, \quad y''(1) = -6.$$

Problem 1.4.7
In each part below, show that the Wronskian determinant $W(t)$ behaves as predicted by Abel's Theorem. That is, for the given value of $t_0$, show that

$$W(t) = W(t_0)e^{-\int_{t_0}^{t} p_{n-1}(s)\,ds}.$$

(a) $W(t)$ found in Problem 1.4.1 and $t_0 = -1$.
(b) $W(t)$ found in Problem 1.4.2 and $t_0 = 1$.
(c) $W(t)$ found in Problem 1.4.3 and $t_0 = 2$.
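As a numerical companion to Problem 1.4.7(a), the sketch below (our own, not a solution from the text) checks Abel's prediction for $y''' - y' = 0$: the coefficient of $y''$ is $p_2(t) = 0$, so Abel's formula gives $W(t) = W(t_0)e^0 = W(t_0)$, a constant, for the solutions $\{1, e^t, e^{-t}\}$.

```python
# Abel's formula check for y''' - y' = 0 with solutions {1, e^t, e^(-t)}:
# since p2(t) = 0, the Wronskian should be constant in t.

import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def W(t):
    e, einv = math.exp(t), math.exp(-t)
    return det3([[1.0, e,  einv],    # y1, y2, y3
                 [0.0, e, -einv],    # first derivatives
                 [0.0, e,  einv]])   # second derivatives

t0 = -1.0
for t in [-1.0, 0.0, 0.7, 2.0]:
    assert abs(W(t) - W(t0)) < 1e-9   # constant, as Abel's formula predicts
print(W(t0))   # approximately 2
```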

Problem 1.4.8
Determine $W(t)$ for the differential equation $y''' + (\sin t)y'' + (\cos t)y' + 2y = 0$ such that $W(1) = 0$.

Problem 1.4.9
Determine $W(t)$ for the differential equation $t^3y''' - 2y = 0$ such that $W(1) = 3$.

Problem 1.4.10
Consider the initial value problem

$$y''' - y' = 0, \quad y(0) = \alpha, \quad y'(0) = \beta, \quad y''(0) = 4.$$

The general solution of the differential equation is $y(t) = c_1 + c_2e^t + c_3e^{-t}$.
(a) For what values of $\alpha$ and $\beta$ will $\lim_{t\to\infty} y(t) = 0$?
(b) For what values of $\alpha$ and $\beta$ will the solution $y(t)$ be bounded for $t \ge 0$, i.e., $|y(t)| \le M$ for all $t \ge 0$ and for some $M > 0$? Will any values of $\alpha$ and $\beta$ produce a solution $y(t)$ that is bounded for all real numbers $t$?

Problem 1.4.11
Consider the differential equation $y''' + p_2(t)y'' + p_1(t)y' = 0$ on the interval $-1 < t < 1$. Suppose it is known that the coefficient functions $p_2(t)$ and $p_1(t)$ are both continuous on $-1 < t < 1$. Is it possible that $y(t) = c_1 + c_2t^2 + c_3t^4$ is the general solution for some functions $p_1(t)$ and $p_2(t)$ continuous on $-1 < t < 1$?
(a) Answer this question by considering only the Wronskian of the functions $1, t^2, t^4$ on the given interval.
(b) Explicitly determine functions $p_1(t)$ and $p_2(t)$ such that $y(t) = c_1 + c_2t^2 + c_3t^4$ is the general solution of the differential equation. Use this information, in turn, to provide an alternative answer to the question.

Problem 1.4.12
(a) Find the general solution to $y''' = 0$.
(b) Using the general solution in part (a), construct a fundamental set $\{y_1(t), y_2(t), y_3(t)\}$ satisfying the following conditions:

$$y_1(1) = 1, \quad y_1'(1) = 0, \quad y_1''(1) = 0,$$
$$y_2(1) = 0, \quad y_2'(1) = 1, \quad y_2''(1) = 0,$$
$$y_3(1) = 0, \quad y_3'(1) = 0, \quad y_3''(1) = 1.$$

1.5 Existence of Many Fundamental Sets

In Section 1.4, we established the existence of fundamental sets. There remain two questions that we would like to answer. The first is about the number of fundamental sets: how many fundamental sets are there? It turns out that there is more than one. The second question is then how these sets are related. In this section, we turn our attention to these questions.
We start this section with the following observation. Suppose that the Wronskian of $n$ solutions $\{y_1, y_2, \cdots, y_n\}$ to the $n$th order linear homogeneous differential equation is identically zero. In terms of linear algebra, this means that one of the columns of the Wronskian matrix can be written as a linear combination of the remaining columns. For the sake of argument, suppose that the last column is a linear combination of the remaining columns:

$$\begin{pmatrix} y_n \\ y_n' \\ \vdots \\ y_n^{(n-1)} \end{pmatrix} = c_1\begin{pmatrix} y_1 \\ y_1' \\ \vdots \\ y_1^{(n-1)} \end{pmatrix} + c_2\begin{pmatrix} y_2 \\ y_2' \\ \vdots \\ y_2^{(n-1)} \end{pmatrix} + \cdots + c_{n-1}\begin{pmatrix} y_{n-1} \\ y_{n-1}' \\ \vdots \\ y_{n-1}^{(n-1)} \end{pmatrix}.$$

This implies that

$$y_n(t) = c_1y_1(t) + c_2y_2(t) + \cdots + c_{n-1}y_{n-1}(t).$$

Such a relationship among functions merits a name: We say that $m$ functions $f_1, f_2, \cdots, f_m$ are linearly dependent on an interval $I$ if at least one of them can be expressed as a linear combination of the others on $I$; equivalently, they are linearly dependent if there exist constants $c_1, c_2, \cdots, c_m$, not all zero, such that

$$c_1f_1(t) + c_2f_2(t) + \cdots + c_mf_m(t) = 0 \tag{1.5.1}$$

for all $t$ in $I$. A set of functions that is not linearly dependent is said to be linearly independent. This means that a relation of the form (1.5.1) implies that $c_1 = c_2 = \cdots = c_m = 0$.

Example 1.5.1
Show that the functions $f_1(t) = e^t$, $f_2(t) = e^{-2t}$, and $f_3(t) = 3e^t - 2e^{-2t}$ are linearly dependent on $(-\infty, \infty)$.

Solution.
Since $f_3(t) = 3f_1(t) - 2f_2(t)$, the given functions are linearly dependent on $(-\infty, \infty)$.

The concept of linear independence can be used to test for a fundamental set of solutions to the equation

$$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \cdots + p_1(t)y' + p_0(t)y = 0 \tag{1.5.2}$$

without the need of using the Wronskian function.

Theorem 1.5.1
The solution set $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions to (1.5.2), where $p_{n-1}(t), \cdots, p_1(t), p_0(t)$ are continuous functions in $a < t < b$, if and only if the functions $y_1, y_2, \cdots, y_n$ are linearly independent.

Proof.
Suppose first that $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions. Then by Theorem 1.4.5, there is $a < t_0 < b$ such that $W(t_0) \ne 0$. Suppose that

$$c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t) = 0$$

for all $a < t < b$. By repeated differentiation of the previous equation we find

$$c_1y_1'(t) + c_2y_2'(t) + \cdots + c_ny_n'(t) = 0$$
$$c_1y_1''(t) + c_2y_2''(t) + \cdots + c_ny_n''(t) = 0$$
$$\vdots$$
$$c_1y_1^{(n-1)}(t) + c_2y_2^{(n-1)}(t) + \cdots + c_ny_n^{(n-1)}(t) = 0.$$

Thus, one finds $c_1, c_2, \cdots, c_n$ by solving the homogeneous system obtained by evaluating these equations at $t = t_0$. Solving this system using Cramer's rule, one finds

$$c_i = \frac{0}{W(t_0)} = 0, \quad 1 \le i \le n.$$

Thus, $y_1(t), y_2(t), \cdots, y_n(t)$ are linearly independent.
Conversely, suppose that $\{y_1, y_2, \cdots, y_n\}$ is a linearly independent set of solutions to the differential equation. Suppose that $\{y_1, y_2, \cdots, y_n\}$ is not a fundamental set of solutions. Then by Theorem 1.4.2, $W(t) = 0$ for all $a < t < b$. Choose any $a < t_0 < b$. Then $W(t_0) = 0$. But this says that the matrix

$$\begin{pmatrix} y_1(t_0) & y_2(t_0) & \cdots & y_n(t_0) \\ y_1'(t_0) & y_2'(t_0) & \cdots & y_n'(t_0) \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)}(t_0) & y_2^{(n-1)}(t_0) & \cdots & y_n^{(n-1)}(t_0) \end{pmatrix}$$

is not invertible, which means that there exist $c_1, c_2, \cdots, c_n$, not all zero, such that

$$c_1y_1(t_0) + c_2y_2(t_0) + \cdots + c_ny_n(t_0) = 0$$
$$c_1y_1'(t_0) + c_2y_2'(t_0) + \cdots + c_ny_n'(t_0) = 0$$
$$\vdots$$
$$c_1y_1^{(n-1)}(t_0) + c_2y_2^{(n-1)}(t_0) + \cdots + c_ny_n^{(n-1)}(t_0) = 0.$$

Now, let $y(t) = c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t)$ for all $a < t < b$. Then by the principle of superposition, $y(t)$ is a solution to the differential equation and $y(t_0) = y'(t_0) = \cdots = y^{(n-1)}(t_0) = 0$. But the zero function also is a solution to this initial value problem. By the existence and uniqueness theorem (i.e., Theorem 1.3.2) we must have $c_1y_1(t) + c_2y_2(t) + \cdots + c_ny_n(t) = 0$ for all $a < t < b$ with $c_1, c_2, \cdots, c_n$ not all equal to 0. But this means that $y_1, y_2, \cdots, y_n$ are linearly dependent, which contradicts our assumption that $y_1, y_2, \cdots, y_n$ are linearly independent.

Remark 1.5.1
The fact that $y_1, y_2, \cdots, y_n$ are solutions is very critical. That is, if $y_1, y_2, \cdots, y_n$ are merely differentiable functions, then it is possible for them to be linearly independent and yet have a vanishing Wronskian.

Example 1.5.2
Show that the functions $y_1(t) = t^2$ and $y_2(t) = t|t|$ are linearly independent with zero Wronskian.

Solution.
Suppose that $c_1y_1(t) + c_2y_2(t) = 0$ for all $-\infty < t < \infty$. Letting $t = 1$ we find $c_1 + c_2 = 0$. Letting $t = -1$, we find $c_1 - c_2 = 0$. Solving this system of equations, we find $c_1 = c_2 = 0$. Hence, $y_1$ and $y_2$ are linearly independent. Now, if $t \ge 0$, then

$$W(t) = \begin{vmatrix} t^2 & t^2 \\ 2t & 2t \end{vmatrix} = 0.$$

If $t < 0$, we have

$$W(t) = \begin{vmatrix} t^2 & -t^2 \\ 2t & -2t \end{vmatrix} = 0.$$

Thus, $W(t) = 0$ for all $-\infty < t < \infty$.
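The example above can also be checked numerically. This sketch (our own) confirms that the Wronskian of $t^2$ and $t|t|$ vanishes at sample points, even though no single constant relates the two functions on all of $\mathbb{R}$.

```python
# y1 = t^2 and y2 = t|t|: zero Wronskian, yet linearly independent on R.
# Note d/dt (t|t|) = 2|t| for every t, including t = 0.

y1, dy1 = lambda t: t * t, lambda t: 2 * t
y2, dy2 = lambda t: t * abs(t), lambda t: 2 * abs(t)

for t in [-2.0, -0.3, 0.0, 0.3, 2.0]:
    W = y1(t) * dy2(t) - y2(t) * dy1(t)
    assert abs(W) < 1e-12   # Wronskian vanishes at every sample point

# Yet no single constant c gives y2 = c * y1 on all of R:
assert y2(1.0) / y1(1.0) == 1.0 and y2(-1.0) / y1(-1.0) == -1.0
print("zero Wronskian, yet linearly independent")
```

This is exactly the phenomenon Remark 1.5.1 warns about: the Wronskian criterion is reliable only for functions that are solutions of the same linear homogeneous equation.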

Next, we will show how to generate new fundamental sets from a given one, thereby establishing the fact that a linear homogeneous differential equation has many fundamental sets of solutions. We also show how different fundamental sets are related to each other. For this, let us start with a fundamental set $\{y_1, y_2, \cdots, y_n\}$ of solutions to (1.5.2). If $\{\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_n\}$ are $n$ solutions to (1.5.2), then they can be written as linear combinations of the $y_1, y_2, \cdots, y_n$. That is,

$$a_{11}y_1 + a_{12}y_2 + \cdots + a_{1n}y_n = \bar{y}_1$$
$$a_{21}y_1 + a_{22}y_2 + \cdots + a_{2n}y_n = \bar{y}_2$$
$$\vdots$$
$$a_{n1}y_1 + a_{n2}y_2 + \cdots + a_{nn}y_n = \bar{y}_n$$

or in matrix form as

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} \bar{y}_1 \\ \bar{y}_2 \\ \vdots \\ \bar{y}_n \end{pmatrix}.$$

Theorem 1.5.2
$\{\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_n\}$ is a fundamental set if and only if $\det(A) \ne 0$, where $A$ is the coefficient matrix of the above matrix equation.

Proof.
By differentiating $(n - 1)$ times the system

$$a_{11}y_1 + a_{12}y_2 + \cdots + a_{1n}y_n = \bar{y}_1$$
$$a_{21}y_1 + a_{22}y_2 + \cdots + a_{2n}y_n = \bar{y}_2$$
$$\vdots$$
$$a_{n1}y_1 + a_{n2}y_2 + \cdots + a_{nn}y_n = \bar{y}_n,$$

one can easily check that

$$\begin{pmatrix} \bar{y}_1 & \bar{y}_2 & \cdots & \bar{y}_n \\ \bar{y}_1' & \bar{y}_2' & \cdots & \bar{y}_n' \\ \vdots & \vdots & & \vdots \\ \bar{y}_1^{(n-1)} & \bar{y}_2^{(n-1)} & \cdots & \bar{y}_n^{(n-1)} \end{pmatrix} = \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}^T.$$

Taking the determinant of both sides and using the facts that the determinant of a product is the product of the determinants and that $\det(A^T) = \det(A)$, we can write

$$\bar{W}(t) = \det(A)\,W(t),$$

where $\bar{W}$ denotes the Wronskian of $\{\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_n\}$. Since $W(t) \ne 0$, we have $\bar{W}(t) \ne 0$ (i.e., $\{\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_n\}$ is a fundamental set) if and only if $\det(A) \ne 0$.

In words, the above theorem states that, given a fundamental set, one obtains a new fundamental set by premultiplying the given fundamental set by any invertible matrix.

Example 1.5.3
The set $\{y_1(t), y_2(t), y_3(t)\} = \{1, e^t, e^{-t}\}$ is a fundamental set of solutions to the differential equation $y''' - y' = 0$.
(a) Show that $\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\} = \{\cosh t, 1 - \sinh t, 2 + \sinh t\}$ is a solution set.
(b) Determine the coefficient matrix $A$ described in the previous theorem.
(c) Determine whether this set is a fundamental set by calculating the determinant of the matrix $A$.

Solution.
(a) Since $\bar{y}_1' = \sinh t$, $\bar{y}_1'' = \cosh t$, and $\bar{y}_1''' = \sinh t$, we have $\bar{y}_1''' - \bar{y}_1' = 0$, so that $\bar{y}_1$ is a solution. A similar argument holds for $\bar{y}_2$ and $\bar{y}_3$.
(b) Since

$$\bar{y}_1(t) = 0 \cdot 1 + \tfrac{1}{2}e^t + \tfrac{1}{2}e^{-t}, \quad \bar{y}_2(t) = 1 \cdot 1 - \tfrac{1}{2}e^t + \tfrac{1}{2}e^{-t}, \quad \bar{y}_3(t) = 2 \cdot 1 + \tfrac{1}{2}e^t - \tfrac{1}{2}e^{-t},$$

we have

$$A = \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ 1 & -\frac{1}{2} & \frac{1}{2} \\ 2 & \frac{1}{2} & -\frac{1}{2} \end{pmatrix}.$$

(c) One can easily find that $\det(A) = \frac{3}{2} \ne 0$, so that $\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\}$ is a fundamental set of solutions.

Practice Problems

Problem 1.5.1
Determine if the solution set $\{y_1(t), y_2(t), y_3(t)\} = \{e^{2t}, \sin(3t), \cos t\}$ is linearly independent.

Problem 1.5.2
Determine whether the solution set $\{y_1(t), y_2(t), y_3(t)\} = \{2, \sin^2 t, \cos^2 t\}$ is linearly dependent or independent on $-\infty < t < \infty$.

Problem 1.5.3
Determine whether the set $\{y_1(t), y_2(t), y_3(t)\} = \{1, 1 + t, 1 + t + t^2\}$ is linearly dependent or independent. Show your work.

Problem 1.5.4
Consider the set of functions $\{y_1(t), y_2(t), y_3(t)\} = \{t^2 + 2t, \alpha t + 1, t + \alpha\}$. For what value(s) of $\alpha$ is the given set linearly dependent on the interval $-\infty < t < \infty$?

Problem 1.5.5
Determine whether the set $\{y_1(t), y_2(t), y_3(t)\} = \{t|t| + 1, t^2 - 1, t\}$ is linearly independent or linearly dependent on the given interval:
(a) $0 \le t < \infty$.
(b) $-\infty < t \le 0$.
(c) $-\infty < t < \infty$.

In Problems 1.5.6 - 1.5.7, for each differential equation, the first given set of functions $\{y_1(t), y_2(t), y_3(t)\}$ is a fundamental set of solutions.
(a) Determine whether the second given set $\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\}$ is a solution set to the differential equation.
(b) If $\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\}$ is a solution set, then find the coefficient matrix $A$ such that

$$\begin{pmatrix} \bar{y}_1 \\ \bar{y}_2 \\ \bar{y}_3 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}.$$

(c) If $\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\}$ is a solution set, determine whether it is a fundamental set by calculating the determinant of $A$.

Problem 1.5.6
$$y''' + y'' = 0$$
$$\{y_1(t), y_2(t), y_3(t)\} = \{1, t, e^{-t}\}$$
$$\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\} = \{1 - 2t, t + 2, e^{-(t+2)}\}.$$

Problem 1.5.7
$$t^2y''' + ty'' - y' = 0, \quad t > 0$$
$$\{y_1(t), y_2(t), y_3(t)\} = \{1, \ln t, t^2\}$$
$$\{\bar{y}_1(t), \bar{y}_2(t), \bar{y}_3(t)\} = \{2t^2 - 1, 3, \ln(t^3)\}.$$
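Part (c) of these problems asks for $\det(A)$, exactly as in Example 1.5.3. As an illustration (our own sketch, using exact rational arithmetic so no rounding enters), here is the determinant computation from Example 1.5.3:

```python
# det(A) for Example 1.5.3: the new set {cosh t, 1 - sinh t, 2 + sinh t}
# equals A times the old fundamental set {1, e^t, e^(-t)}.

from fractions import Fraction as F

A = [[F(0), F(1, 2),  F(1, 2)],    # cosh t     = 0*1 + (1/2)e^t + (1/2)e^-t
     [F(1), F(-1, 2), F(1, 2)],    # 1 - sinh t = 1*1 - (1/2)e^t + (1/2)e^-t
     [F(2), F(1, 2),  F(-1, 2)]]   # 2 + sinh t = 2*1 + (1/2)e^t - (1/2)e^-t

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))   # 3/2, nonzero, so the new set is again fundamental
assert det3(A) == F(3, 2)
```

The same pattern (build $A$ row by row from the linear combinations, then check $\det(A) \ne 0$) applies to Problems 1.5.6 and 1.5.7.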

1.6 nth Order Homogeneous Linear Equations with Constant Coefficients

In this section, we investigate how to solve the nth order linear homogeneous equation with constant coefficients

$$y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0y = 0. \tag{1.6.1}$$

The general solution is given by

$$y(t) = c_1y_1 + c_2y_2 + \cdots + c_ny_n,$$

where $\{y_1, y_2, \cdots, y_n\}$ is a fundamental set of solutions. Based on solving first order linear homogeneous differential equations (i.e., $y' + ky = 0$), we seek solutions of the form $y(t) = e^{rt}$, where $r$ is a constant (real or complex-valued) to be found. Inserting into (1.6.1), we find

$$(r^n + a_{n-1}r^{n-1} + \cdots + a_1r + a_0)e^{rt} = 0.$$

We call $P(r) = r^n + a_{n-1}r^{n-1} + \cdots + a_1r + a_0$ the characteristic polynomial and the equation

$$r^n + a_{n-1}r^{n-1} + \cdots + a_1r + a_0 = 0 \tag{1.6.2}$$

the characteristic equation. Thus, for $y(t) = e^{rt}$ to be a solution to (1.6.1), $r$ must satisfy (1.6.2).

Example 1.6.1
Solve: $y''' - 4y'' + y' + 6y = 0$.

Solution.
The characteristic equation is $r^3 - 4r^2 + r + 6 = 0$. We can factor to find the roots of the equation. A calculator can do this efficiently, or you can use the rational root theorem¹ to get

$$(r + 1)(r - 2)(r - 3) = 0.$$

¹Candidates for a rational solution to the polynomial equation $a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0 = 0$ with integer coefficients are ratios of the form $\frac{p}{q}$, where $p$ is a divisor of $a_0$ and $q$ is a divisor of $a_n$.

Thus, the roots are $r = -1$, $r = 2$, $r = 3$. The Wronskian is

$$W(t) = \begin{vmatrix} e^{-t} & e^{2t} & e^{3t} \\ -e^{-t} & 2e^{2t} & 3e^{3t} \\ e^{-t} & 4e^{2t} & 9e^{3t} \end{vmatrix} = 12e^{4t} \ne 0.$$

Hence, $\{e^{-t}, e^{2t}, e^{3t}\}$ is a fundamental set of solutions and the general solution is

$$y = c_1e^{-t} + c_2e^{2t} + c_3e^{3t}.$$

Characteristic Equation with n Distinct Roots
In the previous example, the characteristic equation had three distinct roots and the corresponding set of solutions formed a fundamental set. This is always true, according to the following theorem.

Theorem 1.6.1
Assume that the characteristic equation

r^n + a_{n−1} r^{n−1} + ··· + a_1 r + a_0 = 0

has n distinct roots r_1, r_2, ··· , r_n (real-valued or complex-valued). Then the set of solutions {e^{r_1 t}, e^{r_2 t}, ··· , e^{r_n t}} is a fundamental set of solutions to the equation

y^(n) + a_{n−1} y^(n−1) + ··· + a_1 y' + a_0 y = 0.

Proof.
For a fixed number t_0 we consider the Wronskian

          | e^{r_1 t_0}            e^{r_2 t_0}            ···   e^{r_n t_0}           |
          | r_1 e^{r_1 t_0}        r_2 e^{r_2 t_0}        ···   r_n e^{r_n t_0}       |
W(t_0) =  | r_1^2 e^{r_1 t_0}      r_2^2 e^{r_2 t_0}      ···   r_n^2 e^{r_n t_0}     |
          | ⋮                                                                         |
          | r_1^{n−1} e^{r_1 t_0}  r_2^{n−1} e^{r_2 t_0}  ···   r_n^{n−1} e^{r_n t_0} |

Now, in linear algebra one proves that if a row or a column of a matrix is multiplied by a constant, then the determinant of the new matrix is the determinant of the old matrix multiplied by that constant. Factoring e^{r_k t_0} out of the kth column, it follows that

                                           | 1          1          ···   1         |
                                           | r_1        r_2        ···   r_n       |
W(t_0) = e^{r_1 t_0} e^{r_2 t_0} ··· e^{r_n t_0}  | r_1^2      r_2^2      ···   r_n^2     |
                                           | ⋮                                     |
                                           | r_1^{n−1}  r_2^{n−1}  ···   r_n^{n−1} |

The resulting determinant is the well-known Vandermonde determinant. Its value is the product of all factors of the form r_j − r_i with j > i. Since r_j ≠ r_i for i ≠ j, this determinant is nonzero and consequently W(t_0) ≠ 0. This establishes that {e^{r_1 t}, e^{r_2 t}, ··· , e^{r_n t}} is a fundamental set of solutions

Example 1.6.2
Solve: y''' − 5y'' − 22y' + 56y = 0.

Solution. The characteristic equation is

r^3 − 5r^2 − 22r + 56 = 0.

Using the rational root test, we find the roots r_1 = −4, r_2 = 2, and r_3 = 7. Hence, the general solution is given by

y(t) = c_1 e^{−4t} + c_2 e^{2t} + c_3 e^{7t}.
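The root-finding in Examples 1.6.1 and 1.6.2 can be cross-checked numerically. A small illustrative sketch (not from the text), assuming NumPy is available:

```python
import numpy as np

# Roots of r^3 - 4r^2 + r + 6 from Example 1.6.1
# (coefficients listed in descending powers of r).
roots = sorted(np.roots([1, -4, 1, 6]).real)
print(roots)  # approximately [-1.0, 2.0, 3.0]

# Wronskian of {e^{-t}, e^{2t}, e^{3t}} at t = 0: the (k, j) entry is the
# k-th derivative of e^{r_j t} evaluated at 0, i.e. r_j^k.
r = np.array(roots)
W = np.array([r**0, r**1, r**2])
print(np.linalg.det(W))  # approximately 12, matching 12e^{4t} at t = 0

# Roots of r^3 - 5r^2 - 22r + 56 from Example 1.6.2.
print(sorted(np.roots([1, -5, -22, 56]).real))  # approximately [-4.0, 2.0, 7.0]
```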

Characteristic Equation with n Equal Roots
Next, we consider characteristic equations whose roots are all equal. If α is a real root that appears n times, we say that α is a root of multiplicity n. To find the general solution, we take motivation from the case y^(n) = 0, whose characteristic equation is r^n = 0, so that r = 0 is a root of multiplicity n. Looking at the general solution to this equation (which is found by repeated integration)

y(t) = c_1 + c_2 t + c_3 t^2 + ··· + c_n t^{n−1},

we see that {1, t, t^2, ··· , t^{n−1}} is a fundamental set. Thus, in the case of repeated roots we look for solutions in which e^{rt} is multiplied by a monomial of the form t^k. In general, if the characteristic equation has a root α of multiplicity n, we consider the functions

e^{αt}, t e^{αt}, t^2 e^{αt}, ··· , t^{n−1} e^{αt}.

Theorem 1.6.2
The set {e^{αt}, t e^{αt}, t^2 e^{αt}, ··· , t^{n−1} e^{αt}} is a fundamental set.

Proof.
We first show that the proposed set is linearly independent. For this purpose, suppose that

c_1 e^{αt} + c_2 t e^{αt} + c_3 t^2 e^{αt} + ··· + c_n t^{n−1} e^{αt} = 0

for all t. Dividing by e^{αt} ≠ 0, this implies

c_1 + c_2 t + c_3 t^2 + ··· + c_n t^{n−1} = 0

for all t. Taking t = 0, we find c_1 = 0. Taking the derivative of the above equation, we find

c_2 + 2c_3 t + ··· + (n − 1) c_n t^{n−2} = 0

for all t. Letting t = 0 we find c_2 = 0. Repeating this process, we find c_3 = c_4 = ··· = c_n = 0. Hence, the set

{e^{αt}, t e^{αt}, t^2 e^{αt}, ··· , t^{n−1} e^{αt}}

is a linearly independent set.
Now, for 0 ≤ k ≤ n − 1, the function y(t) = t^k e^{αt} is a solution to (1.6.1). To see this, we let

L(y) = y^(n) + a_{n−1} y^(n−1) + ··· + a_1 y' + a_0 y.

Then

L(e^{rt}) = P(r) e^{rt}  (1.6.3)

where P(α) = P'(α) = ··· = P^(n−1)(α) = 0, since P(r) = (r − α)^n. Now, we have L(e^{αt}) = P(α) e^{αt} = 0. Next, we have

L(t e^{rt}) = L(∂/∂r e^{rt}) = ∂/∂r [L(e^{rt})] = P'(r) e^{rt} + t P(r) e^{rt}

so that L(t e^{αt}) = P'(α) e^{αt} + t P(α) e^{αt} = 0. Repeating this process, we find

L(e^{αt}) = L(t e^{αt}) = L(t^2 e^{αt}) = ··· = L(t^{n−1} e^{αt}) = 0.

Example 1.6.3
Solve: y^(5) + 5y^(4) + 10y''' + 10y'' + 5y' + y = 0.

Solution.
The characteristic equation is

r^5 + 5r^4 + 10r^3 + 10r^2 + 5r + 1 = 0

which, by the binomial theorem, factors as (r + 1)^5 = 0. Hence, the general solution is given by

y(t) = c_1 e^{−t} + c_2 t e^{−t} + c_3 t^2 e^{−t} + c_4 t^3 e^{−t} + c_5 t^4 e^{−t}.
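The claim that each t^k e^{αt} solves the equation can be verified symbolically for Example 1.6.3 (α = −1, multiplicity 5). This is an illustrative sketch, assuming sympy is available; note the equation's coefficients are exactly the binomial coefficients of (r + 1)^5.

```python
import sympy as sp

t = sp.symbols('t')
for k in range(5):
    y = t**k * sp.exp(-t)
    # Apply L(y) = y^(5) + 5y^(4) + 10y''' + 10y'' + 5y' + y; the
    # coefficients binomial(5, j) are those of (r + 1)^5.
    L = sum(sp.binomial(5, j) * sp.diff(y, t, j) for j in range(6))
    assert sp.simplify(L) == 0
print('all five functions t^k e^{-t}, k = 0..4, solve the equation')
```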

Characteristic Equation with Distinct Complex Roots
Suppose now that all the roots are complex. Since complex roots occur in conjugate pairs, n is even. Let r_k = α_k + iβ_k for 1 ≤ k ≤ n/2. Now, e^{r_k t} is a solution to (1.6.1) that involves complex numbers. However, one is usually interested in real-valued solutions. In this case, we use Euler's formula

e^{(α_k + iβ_k)t} = e^{α_k t}(cos(β_k t) + i sin(β_k t))

to obtain

e^{α_k t} cos(β_k t) = (1/2)[e^{(α_k + iβ_k)t} + e^{(α_k − iβ_k)t}]
e^{α_k t} sin(β_k t) = (1/(2i))[e^{(α_k + iβ_k)t} − e^{(α_k − iβ_k)t}].

It follows from the superposition principle that e^{α_k t} cos(β_k t) and e^{α_k t} sin(β_k t) are solutions to (1.6.1).

Theorem 1.6.3
The set

{e^{α_1 t} cos(β_1 t), e^{α_1 t} sin(β_1 t), ··· , e^{α_{n/2} t} cos(β_{n/2} t), e^{α_{n/2} t} sin(β_{n/2} t)}

is a fundamental set.

Proof.
As stated above, e^{α_k t} cos(β_k t) and e^{α_k t} sin(β_k t) are solutions to (1.6.1) by the superposition principle. It remains to show that the given set is linearly independent. Suppose that

Σ_{k=1}^{n/2} [c_k e^{α_k t} cos(β_k t) + d_k e^{α_k t} sin(β_k t)] = 0.

This is equivalent to

(1/2) Σ_{k=1}^{n/2} [(c_k + d_k/i) e^{r_k t} + (c_k − d_k/i) e^{r̄_k t}] = 0.

Since {e^{r_1 t}, ··· , e^{r_{n/2} t}, e^{r̄_1 t}, ··· , e^{r̄_{n/2} t}} is a linearly independent set, we obtain

c_k + d_k/i = 0
c_k − d_k/i = 0.

Solving this system, we find c_k = d_k = 0. Hence, the given set is linearly independent

Example 1.6.4
Solve: y^(4) + 3y'' + 2y = 0.

Solution.
The characteristic equation is

r^4 + 3r^2 + 2 = 0

which factors as

(r^2 + 1)(r^2 + 2) = 0.

Hence, the complex roots are ±i and ±i√2. The general solution is given by

y(t) = c_1 cos t + c_2 sin t + c_3 cos(√2 t) + c_4 sin(√2 t)

Remark 1.6.1
If r_1 = α + iβ is a complex root of multiplicity k, then

P(r) = (r − r_1)^k (r − r̄_1)^k p(r)

where r̄_1 = α − iβ and p(r_1) ≠ 0, p(r̄_1) ≠ 0. In this case, the 2k linearly independent solutions are given by

e^{αt} cos βt, t e^{αt} cos βt, ··· , t^{k−1} e^{αt} cos βt

and

e^{αt} sin βt, t e^{αt} sin βt, ··· , t^{k−1} e^{αt} sin βt.

The remaining n − 2k solutions needed to complete the fundamental set are determined by examining the roots of p(r) = 0.

Example 1.6.5
Solve: y^(4) + 2y'' + y = 0.

Solution.
The characteristic equation is r^4 + 2r^2 + 1 = 0, which factors as (r^2 + 1)^2 = 0. Thus, r = i is a complex root of multiplicity 2. Hence, the general solution is

y(t) = c_1 cos t + c_2 sin t + c_3 t cos t + c_4 t sin t.

Example 1.6.6
Find the solution to

y^(5) + 4y''' = 0,  y(0) = 2, y'(0) = 3, y''(0) = 1, y'''(0) = −1, y^(4)(0) = 1.

Solution.
The characteristic equation is r^5 + 4r^3 = r^3(r^2 + 4) = 0, which has a root of multiplicity 3 at r = 0 and complex roots r = 2i and r = −2i. We use what we have learned about repeated roots and complex roots to build the general solution. Since the multiplicity of the repeated root is 3, we have

y_1(t) = 1,  y_2(t) = t,  y_3(t) = t^2.

The complex roots give the other two solutions

y_4(t) = cos(2t) and y_5(t) = sin(2t). The general solution is

y(t) = c_1 + c_2 t + c_3 t^2 + c_4 cos(2t) + c_5 sin(2t).

Finding the first four derivatives, we have

y'(t) = c_2 + 2c_3 t − 2c_4 sin(2t) + 2c_5 cos(2t)
y''(t) = 2c_3 − 4c_4 cos(2t) − 4c_5 sin(2t)
y'''(t) = 8c_4 sin(2t) − 8c_5 cos(2t)
y^(4)(t) = 16c_4 cos(2t) + 16c_5 sin(2t).

Next, plug the initial conditions into the above equations to get

2 = c_1 + c_4
3 = c_2 + 2c_5
1 = 2c_3 − 4c_4
−1 = −8c_5
1 = 16c_4.

Solving these equations, we find

c_1 = 31/16,  c_2 = 11/4,  c_3 = 5/8,  c_4 = 1/16,  c_5 = 1/8.

The solution is then given by

y(t) = 31/16 + (11/4) t + (5/8) t^2 + (1/16) cos(2t) + (1/8) sin(2t)

Example 1.6.7
Solve: y^(7) − 625y''' = 0.

Solution. The characteristic equation is

r^7 − 625r^3 = 0

which factors as

r^3 (r − 5)(r + 5)(r^2 + 25) = 0.

The root r = 0 is of multiplicity 3 and the differential equation has the particular solutions

y_1(t) = 1,  y_2(t) = t,  y_3(t) = t^2.

The roots r = −5 and r = 5 have multiplicity 1 and the differential equation has the particular solutions

y_4(t) = e^{−5t},  y_5(t) = e^{5t}.

For the complex roots r = ±5i, the real-valued particular solutions to the differential equation are

y_6(t) = cos(5t),  y_7(t) = sin(5t).

Hence, the general solution is given by

y(t) = c_1 y_1(t) + c_2 y_2(t) + c_3 y_3(t) + c_4 y_4(t) + c_5 y_5(t) + c_6 y_6(t) + c_7 y_7(t).
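Initial value problems such as Example 1.6.6 can be cross-checked with a computer algebra system. A sketch assuming sympy is available; `dsolve` with the `ics` keyword solves for the five constants directly:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y^(5) + 4y''' = 0 with the initial conditions of Example 1.6.6.
ics = {y(0): 2,
       y(t).diff(t).subs(t, 0): 3,
       y(t).diff(t, 2).subs(t, 0): 1,
       y(t).diff(t, 3).subs(t, 0): -1,
       y(t).diff(t, 4).subs(t, 0): 1}
sol = sp.dsolve(y(t).diff(t, 5) + 4*y(t).diff(t, 3), y(t), ics=ics)

# Compare against the hand computation.
expected = (sp.Rational(31, 16) + sp.Rational(11, 4)*t + sp.Rational(5, 8)*t**2
            + sp.cos(2*t)/16 + sp.sin(2*t)/8)
assert sp.simplify(sol.rhs - expected) == 0
print(sol.rhs)
```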

Remark 1.6.2
The methods discussed in this section do not apply to equations of the form (1.6.1) with variable coefficients.

Practice Problems

Problem 1.6.1
Solve y''' + y'' − y' − y = 0.

Problem 1.6.2
Find the general solution of 16y^(4) − 8y'' + y = 0.

Problem 1.6.3
Solve the following constant coefficient differential equation:

y''' − y = 0.

Problem 1.6.4
Solve y^(4) − 16y = 0.

Problem 1.6.5 Solve the initial-value problem

y''' + 3y'' + 3y' + y = 0,  y(0) = 0, y'(0) = 1, y''(0) = 0.

Problem 1.6.6
Given that r = 1 is a root of r^3 + 3r^2 − 4 = 0, find the general solution to y''' + 3y'' − 4y = 0.

Problem 1.6.7
Given that y_1(t) = e^{2t} is a solution to the homogeneous equation, find the general solution to the differential equation

y''' − 2y'' + y' − 2y = 0.

Problem 1.6.8
Suppose that y(t) = c_1 cos t + c_2 sin t + c_3 cos(2t) + c_4 sin(2t) is the general solution to the equation

y^(4) + a_3 y''' + a_2 y'' + a_1 y' + a_0 y = 0.

Find the constants a_0, a_1, a_2, and a_3.

Problem 1.6.9
Suppose that y(t) = c_1 + c_2 t + c_3 cos 3t + c_4 sin 3t is the general solution to the homogeneous equation

y^(4) + a_3 y''' + a_2 y'' + a_1 y' + a_0 y = 0.

Determine the values of a_0, a_1, a_2, and a_3.

Problem 1.6.10
Suppose that y(t) = c_1 e^{−t} sin t + c_2 e^{−t} cos t + c_3 e^{t} sin t + c_4 e^{t} cos t is the general solution to the homogeneous equation

y^(4) + a_3 y''' + a_2 y'' + a_1 y' + a_0 y = 0.

Determine the values of a_0, a_1, a_2, and a_3.

Problem 1.6.11
Consider the homogeneous equation with constant coefficients

y^(n) + a_{n−1} y^(n−1) + ··· + a_1 y' + a_0 y = 0.

Suppose that y_1(t) = t, y_2(t) = e^t, y_3(t) = cos t are several functions belonging to a fundamental set of solutions to this equation. What is the smallest value of n for which the given functions can belong to such a fundamental set? What is the fundamental set?

Problem 1.6.12
Consider the homogeneous equation with constant coefficients

y^(n) + a_{n−1} y^(n−1) + ··· + a_1 y' + a_0 y = 0.

Suppose that y_1(t) = t^2 sin t, y_2(t) = e^t sin t are several functions belonging to a fundamental set of solutions to this equation. What is the smallest value of n for which the given functions can belong to such a fundamental set? What is the fundamental set?

Problem 1.6.13
Consider the homogeneous equation with constant coefficients

y^(n) + a_{n−1} y^(n−1) + ··· + a_1 y' + a_0 y = 0.

Suppose that y_1(t) = t^2, y_2(t) = e^{2t} are several functions belonging to a fundamental set of solutions to this equation. What is the smallest value of n for which the given functions can belong to such a fundamental set? What is the fundamental set?

1.7 Review of Power Series

In the previous section we learned how to find the general solution to a linear homogeneous differential equation with constant coefficients. Our next task is to learn how to do the same for equations with variable coefficients. This requires the use of power series, which is the topic of this section.
Let {a_n}_{n=0}^∞ be a sequence of numbers. Then a power series about t = t_0 is a series of the form

Σ_{n=0}^∞ a_n (t − t_0)^n = a_0 + a_1(t − t_0) + a_2(t − t_0)^2 + ···

Note that a power series about t = t_0 always converges at t = t_0, with sum equal to a_0.

Example 1.7.1
1. A polynomial of degree n is a power series about t = 0 since

p(t) = a_0 + a_1 t + a_2 t^2 + ··· + a_n t^n.

Note that a_k = 0 for k ≥ n + 1.
2. The geometric series 1 + t + t^2 + ··· is a power series about t = 0 with a_n = 1 for all n.
3. The series 1/t + 1/t^2 + 1/t^3 + ··· is not a power series since it has negative powers of t.
4. The series 1 + t + (t − 1)^2 + (t − 2)^3 + (t − 3)^4 + ··· is not a power series since each term is a power of a different quantity.

Pointwise Convergence of Power Series
To study the convergence of a power series about t = t_0, one starts by fixing t and then constructing the sequence of partial sums

S_1(t) = a_0,
S_2(t) = a_0 + a_1(t − t_0),
S_3(t) = a_0 + a_1(t − t_0) + a_2(t − t_0)^2,
⋮
S_n(t) = a_0 + a_1(t − t_0) + a_2(t − t_0)^2 + ··· + a_{n−1}(t − t_0)^{n−1}.

Thus, we obtain the sequence {S_n(t)}_{n=1}^∞. If this sequence converges pointwise to a number L(t), i.e., lim_{n→∞} S_n(t) = L(t), then we say that the power series converges pointwise to L(t) for that specific value of t. Otherwise, we say that the power series diverges.

Example 1.7.2
For what values of t does the following power series converge?

1 + 2t + 4t^2 + 8t^3 + ··· .

Solution.
The nth partial sum is

S_n(t) = 1 + 2t + 4t^2 + ··· + (2t)^{n−1} = (1 − (2t)^n)/(1 − 2t).

But

lim_{n→∞} (2t)^n = 0

provided that |2t| < 1, or equivalently |t| < 1/2. In such an interval, we have

1 + 2t + 4t^2 + 8t^3 + ··· = 1/(1 − 2t).

The above example shows that a power series may converge for some values of t and diverge for others. The following theorem provides a general tool for determining the values of t for which a power series converges and those for which it diverges.

Theorem 1.7.1
Given a power series

Σ_{n=0}^∞ a_n(t − t_0)^n = a_0 + a_1(t − t_0) + a_2(t − t_0)^2 + ··· .

There exists a unique 0 ≤ R ≤ ∞, called the radius of convergence, such that
(i) If R = 0, then the power series converges only at t = t_0 and diverges for all other values.
(ii) If R = ∞, then the power series converges everywhere.
(iii) If 0 < R < ∞, then the series converges absolutely for |t − t_0| < R and diverges for |t − t_0| > R. The series may or may not converge for |t − t_0| = R, that is, at the values t = t_0 − R and t = t_0 + R.
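The convergence behavior in Example 1.7.2 is easy to see numerically. A small illustrative sketch (not from the text):

```python
# Partial sums of 1 + 2t + 4t^2 + ... at a point inside |t| < 1/2.
t = 0.25
partial = sum((2*t)**n for n in range(60))
print(partial)  # close to 1/(1 - 2t) = 2

# At a point outside the interval, say t = 0.75, the terms (2t)^n grow
# without bound, so the partial sums diverge instead of settling down.
```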

Proof.
Suppose first that the series converges at some point c ≠ t_0. The nth term test² implies lim_{n→∞} a_n(c − t_0)^n = 0. Thus, there is a positive integer N such that |a_n(c − t_0)^n| < 1 whenever n ≥ N. Let M = Σ_{n=0}^{N−1} |a_n(c − t_0)^n| + 1. Then |a_n(c − t_0)^n| < M for all n ≥ 0. Hence, if |t − t_0| < |c − t_0| then

|a_n(t − t_0)^n| = |a_n(c − t_0)^n| |(t − t_0)/(c − t_0)|^n < M |(t − t_0)/(c − t_0)|^n.

But the series Σ_{n=0}^∞ M |(t − t_0)/(c − t_0)|^n is a convergent geometric series, since |(t − t_0)/(c − t_0)| < 1. Hence, by the comparison test³, Σ_{n=0}^∞ |a_n(t − t_0)^n| converges. That is, Σ_{n=0}^∞ a_n(t − t_0)^n converges absolutely for |t − t_0| < |c − t_0|.
Next, consider the set

C = {|t − t_0| : Σ_{n=0}^∞ |a_n(t − t_0)^n| < ∞}.

Let R be the least upper bound of this set. If the set is not bounded from above, that is, the power series converges for every number t, we let R = ∞. Suppose now that the set is bounded from above, so that 0 ≤ R < ∞. If R = 0, then the power series converges only at t = t_0 and diverges at every other value. Suppose 0 < R < ∞. By the definition of R, if |t − t_0| > R then the power series is divergent. Now, suppose that |t − t_0| < R. There must be c ≠ t_0 such that |t − t_0| < |c − t_0| and the power series Σ_{n=0}^∞ a_n(c − t_0)^n is convergent (for otherwise |t − t_0| would be an upper bound for C and hence R ≤ |t − t_0|). Thus, by the earlier part of the proof, the series Σ_{n=0}^∞ a_n(t − t_0)^n converges absolutely

The largest interval for which the power series converges is called the in- terval of convergence.

Finding the Radius of Convergence
Theorem 1.7.1 does not give an explicit expression for the radius of convergence of a power series in terms of its coefficients. The ratio test gives a simple but useful way to compute it. We first recall the ratio test from calculus.

²If lim_{n→∞} a_n ≠ 0, then the series Σ_{n=0}^∞ a_n is divergent.
³If 0 ≤ a_n ≤ b_n for all n and Σ_{n=0}^∞ b_n is convergent, then Σ_{n=0}^∞ a_n is also convergent.

Theorem 1.7.2 (Ratio test)
Suppose that Σ_{n=0}^∞ a_n is a series with a_n ≠ 0 for all n ≥ N and lim_{n→∞} |a_{n+1}/a_n| = L.
(i) If 0 ≤ L < 1, the series is convergent.
(ii) If L > 1, the series is divergent.
(iii) If L = 1, the test is inconclusive.

Theorem 1.7.3
Suppose that Σ_{n=0}^∞ a_n(t − t_0)^n is a power series with a_n ≠ 0 for all n ≥ N.
(a) If lim_{n→∞} |a_{n+1}|/|a_n| = 0, then R = ∞ and the series converges for all t.
(b) If lim_{n→∞} |a_{n+1}|/|a_n| = L > 0, then R = 1/L > 0. This is known as the Cauchy-Hadamard formula.
(c) If lim_{n→∞} |a_{n+1}|/|a_n| = ∞, then R = 0 and the series diverges for all t ≠ t_0.

Proof.
Consider the ratio

|a_{n+1}(t − t_0)^{n+1}| / |a_n(t − t_0)^n| = |a_{n+1}/a_n| |t − t_0|.

Suppose that lim_{n→∞} |a_{n+1}/a_n| = L ≥ 0. Then the sequence {|a_{n+1}/a_n| |t − t_0|}_{n=0}^∞ converges to L|t − t_0|.
(a) If L = 0, then L|t − t_0| = 0 < 1 and the series converges for all real numbers t. Thus, R = ∞.
(b) Suppose that L > 0 and finite. If L|t − t_0| < 1, i.e., |t − t_0| < 1/L, then by the ratio test the series Σ_{n=0}^∞ a_n(t − t_0)^n is convergent. If |t − t_0| > 1/L, then by the ratio test the series is divergent. Hence, by the uniqueness of R, we have R = 1/L.
(c) If L = ∞, then the series diverges for all t ≠ t_0. Thus, R = 0

Example 1.7.3
Find the radius of convergence of the power series Σ_{n=0}^∞ t^n/n!.

Solution.
We have

1/R = lim_{n→∞} |a_{n+1}|/|a_n| = lim_{n→∞} n!/(n + 1)! = lim_{n→∞} 1/(n + 1) = 0.

Thus, R = ∞ and the series converges everywhere

Example 1.7.4
Find the interval of convergence of the power series Σ_{n=1}^∞ (−1)^{n−1} (t − 1)^n / n.

Solution.
We first find the radius of convergence. We have

1/R = lim_{n→∞} n/(n + 1) = 1

so that R = 1. Hence, the series converges for |t − 1| < 1 and diverges for |t − 1| > 1. So the series converges in the interval 0 < t < 2. What about the endpoints t = 0 and t = 2? If we replace t by 0, we obtain the series −Σ_{n=1}^∞ 1/n, which is divergent (the harmonic series). If we replace t by 2, we obtain the alternating series Σ_{n=1}^∞ (−1)^{n−1}/n, which converges by the alternating series test⁴. Thus, the interval of convergence is 0 < t ≤ 2

We next want to show that power series can be differentiated termwise. The following result is needed for that purpose.

Lemma 1.7.1 Let t and h be fixed numbers. Then for any non-negative integer n, we have

(t + h)^n − t^n − n h t^{n−1} = h^2 n(n − 1) ∫_0^1 (1 − w)(t + wh)^{n−2} dw.

Proof.
Let f(s) = (t + sh)^n where s > 0. By the Fundamental Theorem of Calculus, we have

f(s) = f(0) + ∫_0^s f'(w) dw.

Using integration by parts with u = f'(w) and v = w − s, we can write

f(s) = f(0) + (w − s) f'(w)|_0^s + ∫_0^s (s − w) f''(w) dw

or

(t + sh)^n = t^n + n s h t^{n−1} + h^2 n(n − 1) ∫_0^s (s − w)(t + wh)^{n−2} dw.

⁴The series Σ_{n=0}^∞ (−1)^n a_n is convergent if a_n ≥ a_{n+1} ≥ 0 for all n and lim_{n→∞} a_n = 0.

Setting s = 1 in this equation, we find

(t + h)^n = t^n + n h t^{n−1} + h^2 n(n − 1) ∫_0^1 (1 − w)(t + wh)^{n−2} dw

Theorem 1.7.4
Let

f(t) = Σ_{n=0}^∞ a_n(t − t_0)^n,  |t − t_0| < R,

be a convergent power series with radius of convergence R and set

g(t) = Σ_{n=1}^∞ n a_n (t − t_0)^{n−1}.

Then g(t) has radius of convergence R and g(t) = f'(t).

Proof.
Without loss of generality, we assume that t_0 = 0. The radius of convergence of g(t) is

lim_{n→∞} |n a_n / ((n + 1) a_{n+1})| = lim_{n→∞} |a_n / a_{n+1}| = R.

Fix |t| < R and let h be small enough that |t + h| < R. Note that t + wh lies in the interval with endpoints t and t + h for all 0 ≤ w ≤ 1. The function t + wh is continuous on the closed interval [0, 1], so

|t + wh| ≤ max{|t|, |t + h|} ≤ K < R

for all 0 ≤ w ≤ 1, where K is a constant chosen between max{|t|, |t + h|} and R. Hence, using Lemma 1.7.1,

|(f(t + h) − f(t))/h − g(t)| = (1/|h|) |Σ_{n=0}^∞ a_n[(t + h)^n − t^n − n h t^{n−1}]|
  ≤ (1/|h|) Σ_{n=0}^∞ |a_n| |(t + h)^n − t^n − n h t^{n−1}|
  ≤ |h| Σ_{n=2}^∞ |a_n| n(n − 1) ∫_0^1 (1 − w)|t + wh|^{n−2} dw
  ≤ |h| Σ_{n=2}^∞ |a_n| K^{n−2} n(n − 1).

By the Ratio Test, we have

lim_{n→∞} |a_{n+1}| (n + 1) n K^{n−1} / (|a_n| n(n − 1) K^{n−2}) = lim_{n→∞} |a_{n+1}/a_n| K = K/R < 1.

Hence, the series Σ_{n=2}^∞ |a_n| K^{n−2} n(n − 1) is convergent, say to L. Thus,

0 ≤ |(f(t + h) − f(t))/h − g(t)| ≤ |h| L.

Letting h → 0 and using the squeeze rule, we conclude that g(t) = f'(t)

Example 1.7.5
Find a power series representation of the following function and determine its interval of convergence: g(t) = 1/(1 − t)^2.

Solution.
Recall the geometric series f(t) = Σ_{n=0}^∞ t^n = 1/(1 − t) with radius of convergence R = 1. By Theorem 1.7.4,

g(t) = f'(t) = Σ_{n=1}^∞ n t^{n−1}.

The radius of convergence of g(t) is also 1

Remark 1.7.1
It follows from Theorem 1.7.4 that f is infinitely differentiable and that its derivatives of all orders share the same radius of convergence.

Practice Problems

Write each of the series in Problems 1.7.1 - 1.7.3 using sigma notation, i.e., Σ_{n=0}^∞ a_n(t − t_0)^n.

Problem 1.7.1

1 − (t − 1)^2/2! + (t − 1)^4/4! − (t − 1)^6/6! + ···

(t − 1)^3 − (t − 1)^5/2! + (t − 1)^7/4! − (t − 1)^9/6! + ···

2(t + 5)^3 + 3(t + 5)^5 + 4(t + 5)^7/2! + 5(t + 5)^9/3! + ···

Problem 1.7.4
Find the radius of convergence of the power series Σ_{n=1}^∞ (−1)^{n−1} t^{2n−1}/(2n − 1)!.

Problem 1.7.5
Find the radius of convergence of the power series Σ_{n=0}^∞ (5t)^n.

Problem 1.7.6
Find the radius of convergence of the power series Σ_{n=0}^∞ ((n + 1)/(2^n + n)) t^n.

Problem 1.7.7
Find the radius of convergence of the power series t + 4t^2 + 9t^3 + 16t^4 + ···

Problem 1.7.8
Find the radius of convergence of the power series 1 + 2t + 4t^2/2! + 8t^3/3! + ···

Problem 1.7.9
Find the radius of convergence of the power series t − t^3/3 + t^5/5 − t^7/7 + ···

Problem 1.7.10
Find the radius and the interval of convergence of the series Σ_{n=0}^∞ 2^{2n} t^{2n}.

Problem 1.7.11 (a) Determine the radius of convergence of the series

t − t^2/2 + t^3/3 − ··· + (−1)^{n−1} t^n/n + ···

What does this tell us about the interval of convergence of this series?
(b) Investigate convergence at the endpoints of the interval of convergence of this series.

Problem 1.7.12
Show that the series Σ_{n=1}^∞ (2t)^n/n converges for |t| < 1/2. Investigate whether the series converges for t = 1/2 and t = −1/2.

Problem 1.7.13
(a) Find the power series representation of f(t) = 2t/(2 − t) and indicate its radius of convergence.
(b) Find the power series representation of g(t) = 4/(2 − t)^2 and indicate its radius of convergence.

Problem 1.7.14
Given f(t) = sin t = Σ_{n=0}^∞ ((−1)^n/(2n + 1)!) t^{2n+1} with radius of convergence R = ∞, find the power series representation of g(t) = cos t and its radius of convergence.

Problem 1.7.15
Consider the series g(t) = Σ_{n=1}^∞ t^n/n with radius of convergence 1.
(a) Find a formula for g'(t) not as a power series.
(b) Find a formula for g(t) not as a power series.

1.8 Series Solutions of Linear Homogeneous Differential Equations

We know how to effectively solve a linear homogeneous differential equation with constant coefficients. In this section, we discuss finding the general solution to the homogeneous equation

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = 0  (1.8.1)

where the functions p_0(t), p_1(t), ··· , p_{n−1}(t) are continuous for a < t < b. Let t_0 be a point in this interval. We are interested in answering the question: when is it possible to represent the general solution of (1.8.1) by a power series that converges in some open interval centered at t_0?
In order to answer this question, we need to introduce the following definitions. We say that a function f(t) is analytic at t = t_0 if f(t) can be expressed as a Taylor series

f(t) = Σ_{n=0}^∞ (f^(n)(t_0)/n!) (t − t_0)^n.

Examples of analytic functions are polynomials, rational functions (at points where the denominator does not vanish) and exponential functions.
We say that t = t_0 is an ordinary point or a regular point of the differential equation (1.8.1) when p_0(t), p_1(t), ··· , p_{n−1}(t) are all analytic at t_0. If one of the p_i(t)'s is not analytic at t_0, we say that t_0 is a singular point.

Example 1.8.1
Consider the differential equation

(1 − t^2) y'' + (tan t) y' + t^{5/3} y = 0

in the open interval −2 < t < 2. Classify each point in this interval as an ordinary point or a singular point.

Solution.
We first rewrite the equation in the form (1.8.1), obtaining

y'' + (tan t/(1 − t^2)) y' + (t^{5/3}/(1 − t^2)) y = 0.

Therefore,

p_1(t) = tan t/(1 − t^2)  and  p_0(t) = t^{5/3}/(1 − t^2).

The function p_1(t) is not analytic at t = ±1 (where the denominator vanishes) and at t = ±π/2 (where the tangent has vertical asymptotes). The function p_0(t) is not analytic at t = ±1 and at t = 0 (p_0(t) does not have a second derivative at 0). Thus, in the interval −2 < t < 2, the five points t = 0, ±1, ±π/2 are singular points and all other points are ordinary points

The following theorem, which we state without proof, answers our earlier question about a series solution to the equation (1.8.1).

Theorem 1.8.1
Suppose that p_0(t), p_1(t), ··· , p_{n−1}(t) are analytic at t_0 and let R be the smallest of the radii of convergence of their respective Taylor series representations. The general solution to (1.8.1) can be expressed as

y(t) = Σ_{n=0}^∞ a_n(t − t_0)^n = c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t),

where the functions y_1, y_2, ··· , y_n form a fundamental set of solutions, each analytic in the interval |t − t_0| < R.

Example 1.8.2
Find the series solution of the following differential equation near the ordinary point 0:

(t^2 − 4) y'' + 3t y' + y = 0.

Solution. The singular points of the given differential equation are −2 and 2. Thus, we will find a series solution of the differential equation in the interval |t| < 2. The solution is of the form

y(t) = Σ_{n=0}^∞ a_n t^n.

Differentiating this series twice we find

y'(t) = Σ_{n=1}^∞ n a_n t^{n−1}  and  y''(t) = Σ_{n=2}^∞ n(n − 1) a_n t^{n−2}.

Substituting these series in the given equation we find

t^2 Σ_{n=2}^∞ n(n − 1) a_n t^{n−2} − 4 Σ_{n=2}^∞ n(n − 1) a_n t^{n−2} + 3t Σ_{n=1}^∞ n a_n t^{n−1} + Σ_{n=0}^∞ a_n t^n = 0

Σ_{n=2}^∞ n(n − 1) a_n t^n − Σ_{n=2}^∞ 4n(n − 1) a_n t^{n−2} + Σ_{n=1}^∞ 3n a_n t^n + Σ_{n=0}^∞ a_n t^n = 0

Σ_{n=2}^∞ n(n − 1) a_n t^n − Σ_{n=0}^∞ 4(n + 2)(n + 1) a_{n+2} t^n + Σ_{n=1}^∞ 3n a_n t^n + Σ_{n=0}^∞ a_n t^n = 0,

where we shifted the index of summation in the second series by 2, replacing n with n + 2 and starting the sum at n = 0. Since we want to combine everything under a single summation sign, we pull out the n = 0 and n = 1 terms so that every series starts at n = 2:

Σ_{n=2}^∞ n(n − 1) a_n t^n − 4(2)(1) a_2 − 4(3)(2) a_3 t − Σ_{n=2}^∞ 4(n + 2)(n + 1) a_{n+2} t^n + 3a_1 t + Σ_{n=2}^∞ 3n a_n t^n + a_0 + a_1 t + Σ_{n=2}^∞ a_n t^n = 0

or

(a_0 − 8a_2) + (4a_1 − 24a_3) t + Σ_{n=2}^∞ [n(n − 1) a_n + 3n a_n + a_n − 4(n + 2)(n + 1) a_{n+2}] t^n = 0

(a_0 − 8a_2) + (4a_1 − 24a_3) t + Σ_{n=2}^∞ [(n + 1)^2 a_n − 4(n + 2)(n + 1) a_{n+2}] t^n = 0.

Equating all the coefficients to zero we find

a_0 − 8a_2 = 0
4a_1 − 24a_3 = 0
(n + 1)^2 a_n − 4(n + 1)(n + 2) a_{n+2} = 0,  n ≥ 2.

Thus,

a_2 = a_0/8
a_3 = a_1/6
a_{n+2} = (n + 1) a_n / (4(n + 2)),  n ≥ 2.

This last condition is called a recurrence formula: it expresses each coefficient a_{n+2} in terms of the earlier coefficient a_n. From the above, we see that

a_2 = a_0/8,       a_3 = a_1/6,
a_4 = 3a_0/128,    a_5 = a_1/30,
a_6 = 5a_0/1024,   a_7 = a_1/140.

Notice that each even coefficient is expressed in terms of a_0 and each odd coefficient in terms of a_1. The general solution is then

y(t) = a_0 (1 + (1/8) t^2 + (3/128) t^4 + (5/1024) t^6 + ···) + a_1 (t + (1/6) t^3 + (1/30) t^5 + (1/140) t^7 + ···).
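The recurrence in Example 1.8.2 can be spot-checked symbolically: substitute the truncated series with the computed coefficients into the equation and confirm the low-order coefficients cancel. An illustrative sketch, assuming sympy is available:

```python
import sympy as sp

t, a0, a1 = sp.symbols('t a0 a1')

# Coefficients a_0, ..., a_7 as computed from the recurrence formula.
coeffs = [a0, a1, a0/8, a1/6, 3*a0/128, a1/30, 5*a0/1024, a1/140]
y = sum(c * t**k for k, c in enumerate(coeffs))

# Residual of (t^2 - 4)y'' + 3t y' + y for the truncated series.
residual = sp.expand((t**2 - 4)*sp.diff(y, t, 2) + 3*t*sp.diff(y, t) + y)

# The coefficients of t^0, ..., t^5 must all cancel; only leftover terms of
# degree >= 6 (caused by truncating at a_7) survive.
assert all(residual.coeff(t, k) == 0 for k in range(6))
print('coefficients through t^5 cancel, as the recurrence predicts')
```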

Practice Problems

Problem 1.8.1 Identify the singular and ordinary points of the differential equation

y'' + t y' + (t^2 + 2) y = 0.

Problem 1.8.2
Identify the singular and ordinary points of the differential equation

(t^2 − 1) y'' + t y' + (1/t) y = 0.

Problem 1.8.3
Identify the singular and ordinary points of the differential equation

(t^2 − 4) y'' + 3t y' + y = 0.

Problem 1.8.4 The point t0 = 0 is an ordinary point of the differential equation

(t^2 − 4) y'' + 3t y' + y = 0.

Determine a value of R as stated in Theorem 1.8.1.

Problem 1.8.5 Find the series solution near the ordinary point 0 to the differential equation

y'' + t y' + (t^2 + 2) y = 0.

Problem 1.8.6 Find the series solution near the ordinary point 0 to the differential equation

y'' − t y' − t^2 y = 0.

Problem 1.8.7 Find the series solution near the ordinary point 0 to the differential equation

(t^2 + 1) y'' − y' + y = 0.

1.9 Non-Homogeneous nth Order Linear Differential Equations

We now consider the nth order linear non-homogeneous equation

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = g(t)  (1.9.1)

where the functions p_0, p_1, ··· , p_{n−1} and g(t) are continuous for a < t < b. The solution structure established for second order linear non-homogeneous equations applies just as well in the nth order case.

Theorem 1.9.1
Let {y_1(t), y_2(t), ··· , y_n(t)} be a fundamental set of solutions to the homogeneous equation

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = 0

and y_p(t) be a particular solution of the non-homogeneous equation

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = g(t).

The general solution of the non-homogeneous equation is given by

y(t) = y_p(t) + c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t).

Proof.
Let y(t) be any solution to equation (1.9.1). Since y_p(t) is also a solution, we have

(y − y_p)^(n) + p_{n−1}(t)(y − y_p)^(n−1) + ··· + p_1(t)(y − y_p)' + p_0(t)(y − y_p) =

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y − (y_p^(n) + p_{n−1}(t) y_p^(n−1) + ··· + p_1(t) y_p' + p_0(t) y_p) = g(t) − g(t) = 0.

Therefore y−yp is a solution to the homogeneous equation. But {y1, y2, ··· , yn} is a fundamental set of solutions to the homogeneous equation so that there exist unique constants c1, c2, ··· , cn such that y(t)−yp(t) = c1y1(t)+c2y2(t)+ ··· + cnyn(t). Hence,

y(t) = y_p(t) + c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t)

Since the sum c1y1(t) + c2y2(t) + ··· + cnyn(t) represents the general solution to the homogeneous equation, we will denote it by yh so that the general solution of (1.9.1) takes the form

y(t) = yh(t) + yp(t).

It follows from the above theorem that finding the general solution to a non-homogeneous equation consists of three steps:
1. Find the general solution y_h of the associated homogeneous equation.
2. Find a single solution y_p of the original equation.
3. Add together the solutions found in steps 1 and 2.

The superposition of solutions is valid for homogeneous equations but not true in general for non-homogeneous equations. However, a superposition property does hold when one adds solutions of two different non-homogeneous equations. More precisely, we have

Theorem 1.9.2
Let y_1(t) be a solution of y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = g_1(t) and y_2(t) a solution of y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = g_2(t). Then for any constants c_1 and c_2, the function Y(t) = c_1 y_1(t) + c_2 y_2(t) is a solution of the equation

y^(n) + p_{n−1}(t) y^(n−1) + ··· + p_1(t) y' + p_0(t) y = c_1 g_1(t) + c_2 g_2(t)

Proof. We have

L[Y] = c_1 (y_1^(n) + p_{n−1}(t) y_1^(n−1) + ··· + p_1(t) y_1' + p_0(t) y_1)
     + c_2 (y_2^(n) + p_{n−1}(t) y_2^(n−1) + ··· + p_1(t) y_2' + p_0(t) y_2)

     = c_1 g_1(t) + c_2 g_2(t)

Next, we discuss a method for finding a particular solution to a non-homogeneous linear differential equation known as the method of variation of param- eters. The basic assumption underlying the method is that we know a fundamental set of solutions {y1, y2, ··· , yn}. The homogeneous solution is then

y_h(t) = c_1 y_1 + c_2 y_2 + ··· + c_n y_n.

The constants c_1, c_2, ··· , c_n are then replaced by functions u_1(t), u_2(t), ··· , u_n(t), so that the particular solution assumes the form

y_p(t) = u_1 y_1 + u_2 y_2 + ··· + u_n y_n.  (1.9.2)

We find u_1, u_2, ··· , u_n by solving a system of n equations in the n unknowns u_1', u_2', ··· , u_n'. We obtain the system by first imposing the n − 1 constraints

y_1 u_1' + y_2 u_2' + ··· + y_n u_n' = 0
y_1' u_1' + y_2' u_2' + ··· + y_n' u_n' = 0
⋮                                          (1.9.3)
y_1^(n−2) u_1' + y_2^(n−2) u_2' + ··· + y_n^(n−2) u_n' = 0.

This choice of constraints is made to make the successive derivatives of yp(t) have the following simple forms

y_p' = y_1' u_1 + y_2' u_2 + ··· + y_n' u_n
y_p'' = y_1'' u_1 + y_2'' u_2 + ··· + y_n'' u_n
⋮
y_p^(n−1) = y_1^(n−1) u_1 + y_2^(n−1) u_2 + ··· + y_n^(n−1) u_n.

Substituting (1.9.2) into (1.9.1), using (1.9.3) and the fact that each of the functions y_1, y_2, ··· , y_n is a solution of the homogeneous equation, we find

y_1^(n−1) u_1' + y_2^(n−1) u_2' + ··· + y_n^(n−1) u_n' = g(t).   (1.9.4)

Taken together, equations (1.9.3) and (1.9.4) form a set of n linear equations for the n unknowns u_1', u_2', ··· , u_n'. In matrix form the system reads

[ y_1        y_2        ···  y_n       ] [ u_1' ]   [ 0 ]
[ y_1'       y_2'       ···  y_n'      ] [ u_2' ]   [ ⋮ ]
[ ⋮                                    ] [ ⋮   ] = [ 0 ]
[ y_1^(n−1)  y_2^(n−1)  ···  y_n^(n−1) ] [ u_n' ]   [ g ]

Solving this system we find

u_i' = (W_i / W) g,

where 1 ≤ i ≤ n, W is the Wronskian of {y_1, y_2, ··· , y_n} and W_i is the determinant obtained after replacing the ith column of W with the column vector (0, 0, ··· , 0, 1)^T. It follows that

y_p(t) = y_1 ∫ (W_1(t)/W(t)) g(t) dt + y_2 ∫ (W_2(t)/W(t)) g(t) dt + ··· + y_n ∫ (W_n(t)/W(t)) g(t) dt.

Example 1.9.1
Solve y''' + y' = sec t.

Solution.
We first find the homogeneous solution. The characteristic equation is r^3 + r = 0, i.e. r(r^2 + 1) = 0, so the roots are r = 0, r = i, r = −i. We conclude that y_h(t) = c_1 + c_2 cos t + c_3 sin t. We seek y_p(t) = u_1 + u_2 cos t + u_3 sin t, and the Wronskian is

W(t) = | 1   cos t    sin t  |
       | 0  −sin t    cos t  | = 1.
       | 0  −cos t   −sin t  |

So

W_1(t) = | 0   cos t    sin t  |
         | 0  −sin t    cos t  | = 1,
         | 1  −cos t   −sin t  |

W_2(t) = | 1   0    sin t  |
         | 0   0    cos t  | = −cos t,
         | 0   1   −sin t  |

W_3(t) = | 1   cos t   0 |
         | 0  −sin t   0 | = −sin t.
         | 0  −cos t   1 |

Hence,

u_1(t) = ∫ (W_1(t)/W(t)) g(t) dt = ∫ sec t dt = ln |sec t + tan t|
u_2(t) = ∫ (W_2(t)/W(t)) g(t) dt = ∫ (−1) dt = −t
u_3(t) = ∫ (W_3(t)/W(t)) g(t) dt = −∫ (sin t / cos t) dt = ln |cos t|.
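These integrals, and the particular solution they produce, can be spot-checked with a computer algebra system; a sketch (dropping the absolute values, valid on an interval where cos t > 0):

```python
import sympy as sp

t = sp.symbols('t')
u1 = sp.log(sp.sec(t) + sp.tan(t))
u2 = -t
u3 = sp.log(sp.cos(t))

# Check u_i' = (W_i/W) g with W = 1, W1 = 1, W2 = -cos t, W3 = -sin t, g = sec t.
assert sp.simplify(u1.diff(t) - sp.sec(t)) == 0
assert sp.simplify(u2.diff(t) + sp.cos(t)*sp.sec(t)) == 0
assert sp.simplify(u3.diff(t) + sp.sin(t)*sp.sec(t)) == 0

# The resulting particular solution satisfies y''' + y' = sec t (numeric check).
yp = u1 + u2*sp.cos(t) + u3*sp.sin(t)
residual = yp.diff(t, 3) + yp.diff(t) - sp.sec(t)
assert abs(complex(residual.subs(t, 0.3).evalf())) < 1e-9
```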

Finally, the general solution is

y(t) = c_1 + c_2 cos t + c_3 sin t + ln |sec t + tan t| − t cos t + (ln |cos t|) sin t.

Practice Problems

Problem 1.9.1 Consider the non-homogeneous differential equation

t^3 y''' + a t^2 y'' + b t y' + c y = g(t), t > 0.

Determine a, b, c, and g(t) if the general solution is given by y(t) = c_1 t + c_2 t^2 + c_3 t^4 + 2 ln t.

Problem 1.9.2 Consider the non-homogeneous differential equation

y''' + a y'' + b y' + c y = g(t), t > 0.

Determine a, b, c, and g(t) if the general solution is given by y(t) = c_1 + c_2 t + c_3 e^{2t} + 4 sin 2t.

Problem 1.9.3
Find the general solution of the equation y''' − 6y'' + 12y' − 8y = √(2t) e^{2t}.

Problem 1.9.4
(a) Verify that {t, t^2, t^4} is a fundamental set of solutions of the differential equation t^3 y''' − 4t^2 y'' + 8t y' − 8y = 0.
(b) Find the general solution of t^3 y''' − 4t^2 y'' + 8t y' − 8y = 2√t, t > 0.

Problem 1.9.5
(a) Verify that {t, t^2, t^3} is a fundamental set of solutions of the differential equation t^3 y''' − 3t^2 y'' + 6t y' − 6y = 0.
(b) Using the method of variation of parameters, find the general solution of

t^3 y''' − 3t^2 y'' + 6t y' − 6y = t, t > 0.

Problem 1.9.6
Solve y^(4) + 4y'' = 16 + 15e^t.

Problem 1.9.7
Given that y_1(t) = e^{2t} is a solution to the associated homogeneous equation, find the general solution to the differential equation

y''' − 2y'' + y' − 2y = 12 sin 2t.

Problem 1.9.8
Solve using the method of variation of parameters: y''' − y' = 4 + 2 cos t.

Problem 1.9.9
Solve using the method of variation of parameters: y''' − y' = −4e^t.

Problem 1.9.10
Solve using the method of variation of parameters: y''' − y'' = 4e^{−2t}.

Chapter 2

First Order Linear Systems

In this chapter we study systems of first order linear differential equations.

2.1 Existence and Uniqueness of Solution to Initial Value First Order Linear Systems

In this section we study the following initial-value problem

y_1' = p_11(t) y_1 + p_12(t) y_2 + ··· + p_1n(t) y_n + g_1(t)
y_2' = p_21(t) y_1 + p_22(t) y_2 + ··· + p_2n(t) y_n + g_2(t)
⋮
y_n' = p_n1(t) y_1 + p_n2(t) y_2 + ··· + p_nn(t) y_n + g_n(t)

y_1(t_0) = y_1^0, y_2(t_0) = y_2^0, ··· , y_n(t_0) = y_n^0, a < t_0 < b,

where all the functions p_ij(t) and g_i(t) are continuous in a < t < b. The above system can be recast in matrix form as

y'(t) = P(t)y(t) + g(t),  y(t_0) = y_0   (2.1.1)

where

y(t) = (y_1(t), y_2(t), ··· , y_n(t))^T,  g(t) = (g_1(t), g_2(t), ··· , g_n(t))^T,  y_0 = (y_1^0, y_2^0, ··· , y_n^0)^T

and

P(t) = [ p_11(t)  p_12(t)  ···  p_1n(t)
         p_21(t)  p_22(t)  ···  p_2n(t)
         ⋮
         p_n1(t)  p_n2(t)  ···  p_nn(t) ].

We refer to (2.1.1) as a first order linear system. If g(t) is the zero vector in a < t < b then we call y'(t) = P(t)y(t) a first order homogeneous linear system. Otherwise, we call the system a first order non-homogeneous linear system. Next, we discuss the conditions required for (2.1.1) to have a unique solution.
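Before turning to uniqueness, here is a concrete instance of this matrix form: a second order scalar equation is recast by taking the state vector (y, y')^T. The equation y'' + t y' + 2y = sin t is our own illustrative choice.

```python
import numpy as np

# State vector (y1, y2) = (y, y'):  y1' = y2,  y2' = -2*y1 - t*y2 + sin(t),
# i.e. y' = P(t) y + g(t) with the matrices below.
def P(t):
    return np.array([[0.0, 1.0],
                     [-2.0, -t]])

def g(t):
    return np.array([0.0, np.sin(t)])

# Spot check at t = 0: for y = (1.0, 2.0) the right-hand side is (2, -2).
rhs = P(0.0) @ np.array([1.0, 2.0]) + g(0.0)
assert np.allclose(rhs, [2.0, -2.0])
```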

Theorem 2.1.1
If the components of P(t) and g(t) are continuous in a closed interval [c, d] ⊆ (a, b) that contains t_0, then the initial value problem (2.1.1) has a unique solution on the entire interval c ≤ t ≤ d. As a matter of fact, it can be shown that the solution is unique on the entire interval a < t < b.

Proof. We start by reformulating the matrix differential equation in (2.1.1) as an integral equation. Integration of both sides of (2.1.1) yields

∫_{t_0}^t y'(s) ds = ∫_{t_0}^t [P(s)y(s) + g(s)] ds.   (2.1.2)

Applying the Fundamental Theorem of Calculus to the left side of (2.1.2) yields

y(t) = y_0 + ∫_{t_0}^t [P(s)y(s) + g(s)] ds.   (2.1.3)

Thus, a solution of (2.1.3) is also a solution to (2.1.1) and vice versa.

Existence:
To prove existence we again use the method of successive approximations as described in Theorem 1.3.2.

y^0(t) = y_0
y^1(t) = y_0 + ∫_{t_0}^t [P(s)y^0(s) + g(s)] ds
y^2(t) = y_0 + ∫_{t_0}^t [P(s)y^1(s) + g(s)] ds
⋮
y^N(t) = y_0 + ∫_{t_0}^t [P(s)y^{N−1}(s) + g(s)] ds.

Write y^N(t) = (y_{1,N}, y_{2,N}, ··· , y_{n,N})^T.
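These successive approximations (Picard iterates) can be carried out numerically. A sketch for the constant-coefficient system y_1' = y_2, y_2' = −y_1 (our own example), whose exact solution with y(0) = (1, 0)^T is (cos t, −sin t)^T:

```python
import numpy as np

P = np.array([[0.0, 1.0], [-1.0, 0.0]])
g = np.zeros(2)
y0 = np.array([1.0, 0.0])

ts = np.linspace(0.0, 1.0, 2001)
y = np.tile(y0, (ts.size, 1))                    # y^0(t) = y0
for _ in range(25):                              # y^N = y0 + ∫ [P y^{N-1} + g] ds
    integrand = y @ P.T + g
    # cumulative trapezoid rule for the integral from t0 = 0 to each t
    steps = 0.5*(integrand[1:] + integrand[:-1])*np.diff(ts)[:, None]
    y = y0 + np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

# the iterates converge to the exact solution
assert np.allclose(y[:, 0], np.cos(ts), atol=1e-4)
assert np.allclose(y[:, 1], -np.sin(ts), atol=1e-4)
```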

For i = 1, 2, ··· , n, we are going to show that the sequence {y_{i,N}(t)} converges uniformly to a function y_i(t) such that y(t) (with components y_1, y_2, ··· , y_n) is a solution to (2.1.3) and hence a solution to (2.1.1). By continuity, there exist positive constants k_ij, 1 ≤ i, j ≤ n, such that

max_{c ≤ t ≤ d} |p_ij(t)| ≤ k_ij.

This implies that

||P(t)y(t)|| = |Σ_{j=1}^n p_1j y_j| + |Σ_{j=1}^n p_2j y_j| + ··· + |Σ_{j=1}^n p_nj y_j|
             ≤ K' Σ_{j=1}^n |y_j| + K' Σ_{j=1}^n |y_j| + ··· + K' Σ_{j=1}^n |y_j| = K||y||

for all c ≤ t ≤ d, where we define

||y|| = |y_1| + |y_2| + ··· + |y_n|

and where

K' = Σ_{i=1}^n Σ_{j=1}^n k_ij,  K = nK'.

It follows that for 1 ≤ i ≤ n

|y_{i,N} − y_{i,N−1}| ≤ ||y^N − y^{N−1}|| = ||∫_{t_0}^t P(s)(y^{N−1} − y^{N−2}) ds||
                     ≤ ∫_{t_0}^t ||P(s)(y^{N−1} − y^{N−2})|| ds
                     ≤ K ∫_{t_0}^t ||y^{N−1} − y^{N−2}|| ds.

But

||y^1 − y^0|| ≤ ∫_{t_0}^t ||P(s)y_0 + g(s)|| ds ≤ M(t − t_0)

where

M = K||y_0|| + max_{c ≤ t ≤ d} |g_1(t)| + max_{c ≤ t ≤ d} |g_2(t)| + ··· + max_{c ≤ t ≤ d} |g_n(t)|.

An easy induction yields that for 1 ≤ i ≤ n

|y_{i,N} − y_{i,N−1}| ≤ ||y^N − y^{N−1}|| ≤ M K^{N−1} (t − t_0)^N / N!.

Since N! ≥ (N − 1)! and t − t_0 < b − a, we have

||y^N − y^{N−1}|| ≤ M K^{N−1} (t − t_0)^N / (N − 1)! ≤ M K^{N−1} (b − a)^N / (N − 1)!.

Since

Σ_{N=1}^∞ M K^{N−1} (b − a)^N / (N − 1)! = M(b − a) e^{K(b−a)},

the Weierstrass M-test shows that the series Σ_{N=1}^∞ [y_{i,N} − y_{i,N−1}] converges uniformly for all c ≤ t ≤ d. But

y_{i,N}(t) = Σ_{k=1}^{N−1} [y_{i,k+1}(t) − y_{i,k}(t)] + y_{i,1}.

Thus, the sequence {y_{i,N}} converges uniformly to a function y_i(t) for all c ≤ t ≤ d. The function y_i(t) is continuous (a uniform limit of a sequence of continuous functions is continuous). Also, we can interchange the order of taking limits and integration for such sequences. Therefore

y(t) = lim_{N→∞} y^N(t)
     = y_0 + lim_{N→∞} ∫_{t_0}^t (P(s)y^{N−1} + g(s)) ds
     = y_0 + ∫_{t_0}^t lim_{N→∞} (P(s)y^{N−1} + g(s)) ds
     = y_0 + ∫_{t_0}^t (P(s)y + g(s)) ds.

This shows that y(t) is a solution to the integral equation (2.1.3) and therefore a solution to (2.1.1).

Uniqueness:
Uniqueness follows from Gronwall's inequality (see Theorem 1.3.2). Suppose that y(t) and z(t) are two solutions to the initial value problem. Then for all c ≤ t ≤ d we have

||y(t) − z(t)|| ≤ ∫_{t_0}^t K ||y(s) − z(s)|| ds.

Letting u(t) = ||y(t) − z(t)|| we have

u(t) ≤ ∫_{t_0}^t K u(s) ds,

so that by Gronwall's inequality u(t) ≡ 0 and therefore y(t) = z(t) for all c ≤ t ≤ d. This completes the proof of the theorem.

Example 2.1.1
Consider the initial value problem

y_1' = t^{−1} y_1 + (tan t) y_2
y_2' = (ln |t|) y_1 + e^t y_2

y1(3) = 0, y2(3) = 1.

Determine the largest t−interval such that a unique solution is guaranteed to exist.

Solution.
The function p_11(t) = 1/t is continuous for all t ≠ 0. The function p_12(t) = tan t is continuous for all t ≠ (2n + 1)π/2, where n is an integer. The function p_21(t) = ln |t| is continuous for all t ≠ 0. The function p_22(t) = e^t is continuous for all real numbers. Hence, all these functions are continuous on the common domain t ≠ 0 and t ≠ (2n + 1)π/2. Since t_0 = 3, the largest t−interval for which a unique solution is guaranteed to exist is π/2 < t < 3π/2.

Practice Problems

Problem 2.1.1 Consider the initial value problem

(t + 2) y_1' = 3t y_1 + 5 y_2
(t − 2) y_2' = 2 y_1 + 4t y_2

y1(1) = 0, y2(1) = 2.

Determine the largest t−interval such that a unique solution is guaranteed to exist.

Problem 2.1.2
Verify that the functions y_1(t) = c_1 e^t cos t + c_2 e^t sin t and y_2(t) = −c_1 e^t sin t + c_2 e^t cos t are solutions to the linear system

y_1' = y_1 + y_2
y_2' = −y_1 + y_2.

Problem 2.1.3 Consider the initial value problem

y'(t) = Ay(t), y(0) = y_0

where

A = [ 3  2
      4  1 ],  y_0 = (−1, 8)^T.

(a) Verify that y(t) = c_1 (1, 1)^T e^{5t} + c_2 (−1, 2)^T e^{−t} is a solution to the first order linear system.
(b) Determine c_1 and c_2 such that y(t) solves the given initial value problem.

Problem 2.1.4
Rewrite the differential equation (cos t) y'' − 3t y' + √t y = t^2 + 1 in the matrix form y'(t) = P(t)y(t) + g(t).

Problem 2.1.5
Rewrite the differential equation 2y'' + ty + e^{3t} = y''' + (cos t) y' in the matrix form y'(t) = P(t)y(t) + g(t).

Problem 2.1.6
The initial value problem

y'(t) = [ 0   1
         −3   2 ] y + [ 0
                        2 cos 2t ],  y(−1) = (1, 4)^T

was obtained from an initial value problem for a higher order differential equation. What is the corresponding initial value problem?

Problem 2.1.7
The initial value problem

y'(t) = (y_2, y_3, y_4, y_2 + y_3 sin y_1 + y_3^2)^T,  y(1) = (0, 0, −1, 2)^T

was obtained from an initial value problem for a higher order differential equation. What is the corresponding scalar initial value problem?

Problem 2.1.8
Consider the system of differential equations

y'' = t z' + y' + z
z'' = y' + z' + 2t y.

Write the above system in the form y' = P(t)y + g(t) where

y(t) = (y(t), y'(t), z(t), z'(t))^T.

Identify P(t) and g(t).

Problem 2.1.9
Consider the system of differential equations

y'' = 7y' + 4y − 8z + 6z' + t^2
z'' = 5z' + 2z − 6y' + 3y − sin t.

Write the above system in the form y' = P(t)y + g(t) where

y(t) = (y(t), y'(t), z(t), z'(t))^T.

Identify P(t) and g(t).

2.2 Homogeneous First Order Linear Systems

In this section we consider the following system of n homogeneous linear differential equations, known as a first order homogeneous linear system:

y_1' = p_11(t) y_1 + p_12(t) y_2 + ··· + p_1n(t) y_n
y_2' = p_21(t) y_1 + p_22(t) y_2 + ··· + p_2n(t) y_n
⋮
y_n' = p_n1(t) y_1 + p_n2(t) y_2 + ··· + p_nn(t) y_n

where the coefficient functions are all continuous in a < t < b. The above system can be recast in matrix form as

y'(t) = P(t)y(t)   (2.2.1)

where

y(t) = (y_1(t), y_2(t), ··· , y_n(t))^T,  P(t) = [ p_11(t)  p_12(t)  ···  p_1n(t)
                                                   p_21(t)  p_22(t)  ···  p_2n(t)
                                                   ⋮
                                                   p_n1(t)  p_n2(t)  ···  p_nn(t) ].

Example 2.2.1 (a) Rewrite the given system of linear homogeneous differential equations as a homogeneous linear system of the form y0(t) = P(t)y.

y_1' = y_2 + y_3
y_2' = −6y_1 − 3y_2 + y_3
y_3' = −8y_1 − 2y_2 + 4y_3.

(b) Verify that the vector function

y(t) = (e^t, −e^t, 2e^t)^T

is a solution of y'(t) = P(t)y.

Solution.
(a)

y'(t) = [ 0   1   1
         −6  −3   1
         −8  −2   4 ] y(t).

(b) We have y'(t) = (e^t, −e^t, 2e^t)^T and

P(t)y = (−e^t + 2e^t, −6e^t + 3e^t + 2e^t, −8e^t + 2e^t + 8e^t)^T = (e^t, −e^t, 2e^t)^T = y'.

Our first result shows that any linear combination of solutions to (2.2.1) is again a solution.
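The verification in part (b) can also be spot-checked numerically; a short sketch:

```python
import numpy as np

# Example 2.2.1(b): y(t) = (e^t, -e^t, 2e^t)^T should satisfy y' = P y.
P = np.array([[0.0, 1.0, 1.0],
              [-6.0, -3.0, 1.0],
              [-8.0, -2.0, 4.0]])
v = np.array([1.0, -1.0, 2.0])        # y(t) = v * e^t, hence y'(t) = v * e^t

for t in (0.0, 0.7, -1.3):
    y = v*np.exp(t)
    assert np.allclose(P @ y, y)       # P y equals y', which is y itself here
```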

Theorem 2.2.1 If y1, y2, ··· , yr are solutions to (2.2.1) then for any constants c1, c2, ··· , cr, the function y = c1y1 + c2y2 + ··· + cryr is also a solution.

Proof. Differentiating we find

y'(t) = (c_1 y_1 + c_2 y_2 + ··· + c_r y_r)'
      = c_1 y_1' + c_2 y_2' + ··· + c_r y_r'
      = c_1 P(t)y_1 + c_2 P(t)y_2 + ··· + c_r P(t)y_r
      = P(t)(c_1 y_1 + c_2 y_2 + ··· + c_r y_r) = P(t)y.

Next, we pose the following question: Are there solutions {y1, y2, ··· , yn} such that every solution to (2.2.1) can be written as a linear combination of y1, y2, ··· , yn? We call such a set of functions a fundamental set of solutions. With such a set, the general solution is

y = c1y1 + c2y2 + ··· + cnyn.

Our next task is to find a criterion for testing whether n solutions to (2.2.1) form a fundamental set. For this purpose, writing the components of the vectors y_1, y_2, ··· , y_n as

y_j(t) = (y_{1,j}(t), y_{2,j}(t), ··· , y_{n,j}(t))^T,  j = 1, 2, ··· , n,

we define the matrix Ψ(t) whose columns are the vectors y_1, y_2, ··· , y_n. That is,

Ψ(t) = [ y_{1,1}  y_{1,2}  ···  y_{1,n}
         y_{2,1}  y_{2,2}  ···  y_{2,n}
         ⋮
         y_{n,1}  y_{n,2}  ···  y_{n,n} ].

We call Ψ(t) a solution matrix of y' = P(t)y. In this case, Ψ(t) is a solution to the matrix equation Ψ'(t) = P(t)Ψ(t). Indeed,

Ψ'(t) = [y_1'(t) y_2'(t) ··· y_n'(t)]
      = [P(t)y_1(t) P(t)y_2(t) ··· P(t)y_n(t)]
      = P(t)[y_1(t) y_2(t) ··· y_n(t)] = P(t)Ψ(t).

We define the Wronskian of y_1, y_2, ··· , y_n to be the determinant of Ψ. That is, W(t) = det(Ψ(t)).
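Computationally, the Wronskian is just the determinant of the solution matrix; a sympy sketch with data of our own choosing (y_1 = (e^t, e^t)^T and y_2 = (e^{−t}, −e^{−t})^T):

```python
import sympy as sp

t = sp.symbols('t')
# solution matrix whose columns are y1 and y2
Psi = sp.Matrix([[sp.exp(t),  sp.exp(-t)],
                 [sp.exp(t), -sp.exp(-t)]])
W = sp.simplify(Psi.det())
assert W == -2        # nonzero for every t, so {y1, y2} would be a fundamental set
```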

The following theorem provides a condition for the solution vectors y1, y2, ··· , yn to form a fundamental set of solutions.

Theorem 2.2.2
Let {y_1, y_2, ··· , y_n} be a set of n solutions to (2.2.1). If there is a < t_0 < b such that W(t_0) ≠ 0 then the set {y_1, y_2, ··· , y_n} forms a fundamental set of solutions.

Proof. Let u(t) be any solution to (2.2.1). Can we find constants c1, c2, ··· , cn such that

u(t) = c_1 y_1 + c_2 y_2 + ··· + c_n y_n?

By simple matrix algebra we see that

c_1 y_1 + c_2 y_2 + ··· + c_n y_n = Ψ(t)c,  where  c = (c_1, c_2, ··· , c_n)^T.

Thus we need u(t) = Ψ(t)c for a < t < b. In particular,

u(t_0) = Ψ(t_0)c.

Since W(t_0) = det(Ψ(t_0)) ≠ 0, the matrix Ψ(t_0) is invertible, and as a result we find

c = Ψ^{−1}(t_0)u(t_0).

With this choice of c, the functions u(t) and Ψ(t)c are solutions with the same value at t_0, so by uniqueness u(t) = Ψ(t)c on a < t < b.
When the columns of Ψ(t) form a fundamental set of solutions of y'(t) = P(t)y(t), we call Ψ(t) a fundamental matrix.

Example 2.2.2
(a) Verify the given functions are solutions of the homogeneous linear system.
(b) Compute the Wronskian of the solution set. On the basis of this calculation can you assert that the set of solutions forms a fundamental set?
(c) If the given solutions are shown in part (b) to form a fundamental set, state the general solution of the linear homogeneous system. Express the general solution as the product y(t) = Ψ(t)c, where Ψ(t) is a square matrix whose columns are the solutions forming the fundamental set and c is a column vector of arbitrary constants.
(d) If the solutions are shown in part (b) to form a fundamental set, impose the given initial condition and find the unique solution of the initial value problem.

y' = [ −21   −10    2
        22    11   −2
      −110   −50   11 ] y,  y(0) = (3, −10, −16)^T,

y_1(t) = (5e^t, −11e^t, 0)^T,  y_2(t) = (e^t, 0, 11e^t)^T,  y_3(t) = (e^{−t}, −e^{−t}, 5e^{−t})^T.

Solution.
(a) We have y_1'(t) = (5e^t, −11e^t, 0)^T and

P y_1 = (−105e^t + 110e^t, 110e^t − 121e^t, −550e^t + 550e^t)^T = (5e^t, −11e^t, 0)^T = y_1'.

Similarly, P y_2 = (e^t, 0, 11e^t)^T = y_2' and P y_3 = (−e^{−t}, e^{−t}, −5e^{−t})^T = y_3'.
(b) The Wronskian is given by

W(t) = | 5e^t     e^t      e^{−t}  |
       | −11e^t   0       −e^{−t}  | = −11e^t.
       | 0        11e^t    5e^{−t} |

Since W(t) ≠ 0, the set {y_1, y_2, y_3} forms a fundamental set of solutions.
(c) The general solution is

y(t) = c_1 y_1 + c_2 y_2 + c_3 y_3 = [ 5e^t     e^t      e^{−t}
                                       −11e^t   0       −e^{−t}
                                       0        11e^t    5e^{−t} ] (c_1, c_2, c_3)^T.

(d) We have

[ 5    1    1
  −11  0   −1
  0    11   5 ] (c_1, c_2, c_3)^T = (3, −10, −16)^T.

Solving this system we find c_1 = 1, c_2 = −1, and c_3 = −1. Therefore, the solution to the initial value problem is

y(t) = y_1 − y_2 − y_3 = (4e^t − e^{−t}, −11e^t + e^{−t}, −11e^t − 5e^{−t})^T.

The final result of this section is Abel's theorem, which states that the Wronskian of a set of solutions either vanishes nowhere or vanishes everywhere on the interval a < t < b.

Theorem 2.2.3 (Abel's)
Let {y_1(t), y_2(t), ··· , y_n(t)} be a set of solutions to (2.2.1) and let W(t) be the Wronskian of these solutions. Then W(t) satisfies the differential equation

W'(t) = tr(P(t))W(t)

where tr(P(t)) = p_11(t) + p_22(t) + ··· + p_nn(t). Moreover, if a < t_0 < b then

W(t) = W(t_0) e^{∫_{t_0}^t tr(P(s)) ds}.

Proof. Since {y1, y2, ··· , yn} is a set of n solutions to (2.2.1), we have

y_{i,j}' = Σ_{k=1}^n p_ik y_{k,j},  1 ≤ i, j ≤ n.   (2.2.2)

Using the definition of the determinant we can write

W(t) = Σ_σ sgn(σ) y_{1,σ(1)} y_{2,σ(2)} ··· y_{n,σ(n)}

where the sum is taken over all permutations σ of the set {1, 2, ··· , n} and sgn(σ) denotes the signature function. Taking the derivative of both sides and using the product rule we find

W'(t) = Σ_σ sgn(σ) y_{1,σ(1)}' y_{2,σ(2)} ··· y_{n,σ(n)} + Σ_σ sgn(σ) y_{1,σ(1)} y_{2,σ(2)}' ··· y_{n,σ(n)}
      + ··· + Σ_σ sgn(σ) y_{1,σ(1)} y_{2,σ(2)} ··· y_{n,σ(n)}'.

That is, W'(t) is a sum of n determinants, the ith of which is obtained from W(t) by differentiating its ith row while leaving the other rows unchanged. Consider the first of these determinants. By (2.2.2), its first row has entries

y_{1,j}' = Σ_{k=1}^n p_1k y_{k,j},  j = 1, 2, ··· , n.

We evaluate this determinant using elementary row operations: multiply the second row by p_12, the third by p_13, and so on, add these n − 1 rows, and subtract the result from the first row. The first row of the resulting determinant is (p_11 y_{1,1}, p_11 y_{1,2}, ··· , p_11 y_{1,n}), so the determinant equals p_11 W(t).

Proceeding similarly with the other determinants we obtain

W'(t) = p_11 W(t) + p_22 W(t) + ··· + p_nn W(t) = (p_11 + p_22 + ··· + p_nn) W(t) = tr(P(t))W(t).

This is a first-order scalar equation for W(t), whose solution can be found by the method of integrating factors:

W(t) = W(t_0) e^{∫_{t_0}^t tr(P(s)) ds}.

It follows that either W(t) = 0 for all a < t < b or W(t) ≠ 0 for all a < t < b.

Example 2.2.3
(a) Compute the Wronskian of the solution set and verify the set is a fundamental set of solutions.
(b) Compute the trace of the coefficient matrix.
(c) Verify Abel's theorem by showing that, for the given point t_0, W(t) = W(t_0) e^{∫_{t_0}^t tr(P(s)) ds}.

y' = [ 9   5
      −7  −3 ] y,  y_1(t) = (5e^{2t}, −7e^{2t})^T,  y_2(t) = (e^{4t}, −e^{4t})^T,  t_0 = 0,  −∞ < t < ∞.

Solution.
(a) The Wronskian is

W(t) = | 5e^{2t}    e^{4t}  | = 2e^{6t}.
       | −7e^{2t}  −e^{4t}  |

Since W(t) ≠ 0, the set {y_1, y_2} forms a fundamental set of solutions.
(b) tr(P(t)) = 9 − 3 = 6.
(c) W(t) = 2e^{6t} and W(t_0) e^{∫_{t_0}^t tr(P(s)) ds} = 2e^{∫_0^t 6 ds} = 2e^{6t}.

Practice Problems

In Problems 2.2.1 - 2.2.3 answer the following two questions.
(a) Rewrite the given system of linear homogeneous differential equations as a homogeneous linear system of the form y'(t) = P(t)y.
(b) Verify that the given function y(t) is a solution of y'(t) = P(t)y.

Problem 2.2.1

y_1' = −3y_1 − 2y_2
y_2' = 4y_1 + 3y_2

and

y(t) = (e^t + e^{−t}, −2e^t − e^{−t})^T.

Problem 2.2.2

y_1' = y_2
y_2' = −(2/t^2) y_1 + (2/t) y_2

and

y(t) = (t^2 + 3t, 2t + 3)^T.

Problem 2.2.3

y_1' = 2y_1 + y_2 + y_3
y_2' = y_1 + y_2 + 2y_3
y_3' = y_1 + 2y_2 + y_3

and

y(t) = (2e^t + e^{4t}, −e^t + e^{4t}, −e^t + e^{4t})^T.

In Problems 2.2.4 - 2.2.7
(a) Verify the given functions are solutions of the homogeneous linear system.
(b) Compute the Wronskian of the solution set. On the basis of this calculation can you assert that the set of solutions forms a fundamental set?
(c) If the given solutions are shown in part (b) to form a fundamental set, state the general solution of the linear homogeneous system. Express the general solution as the product y(t) = Ψ(t)c, where Ψ(t) is a square matrix whose columns are the solutions forming the fundamental set and c is a column vector of arbitrary constants.
(d) If the solutions are shown in part (b) to form a fundamental set, impose the given initial condition and find the unique solution of the initial value problem.

Problem 2.2.4

y' = [ 9   −4
      15   −7 ] y,  y(0) = (0, 1)^T,  y_1(t) = (2e^{3t} − 4e^{−t}, 3e^{3t} − 10e^{−t})^T,  y_2(t) = (4e^{3t} + 2e^{−t}, 6e^{3t} + 5e^{−t})^T.

Problem 2.2.5

y' = [ −3  −5
        2  −1 ] y,  y(0) = (5, 2)^T,  y_1(t) = (−5e^{−2t} cos 3t, e^{−2t}(cos 3t − 3 sin 3t))^T,

y_2(t) = (−5e^{−2t} sin 3t, e^{−2t}(3 cos 3t + sin 3t))^T.

Problem 2.2.6

y' = [ 1   −1
      −2    2 ] y,  y(−1) = (−2, 4)^T,  y_1(t) = (1, 1)^T,  y_2(t) = (e^{3t}, −2e^{3t})^T.

Problem 2.2.7

 −2 0 0   3   e−2t   0  0 t y =  0 1 4  y, y(0) =  4  , y1(t) =  0  , y2(t) =  2e cos 2t  0 −1 1 −2 0 −et sin 2t

 0  t y3(t) =  2e sin 2t  et cos 2t

In Problems 2.2.8 - 2.2.9, the given functions are solutions of the homogeneous linear system.
(a) Compute the Wronskian of the solution set and verify the set is a fundamental set of solutions.
(b) Compute the trace of the coefficient matrix.
(c) Verify Abel's theorem by showing that, for the given point t_0, W(t) = W(t_0) e^{∫_{t_0}^t tr(P(s)) ds}.

Problem 2.2.8

y' = [ 6   5
      −7  −6 ] y,  y_1(t) = (5e^{−t}, −7e^{−t})^T,  y_2(t) = (e^t, −e^t)^T,  t_0 = −1,  −∞ < t < ∞.

Problem 2.2.9

y' = [ 1    t
       0   −t^{−1} ] y,  y_1(t) = (−1, t^{−1})^T,  y_2(t) = (e^t, 0)^T,  t_0 = −1,  t ≠ 0.

Problem 2.2.10
The functions

y_1(t) = (5, 1)^T,  y_2(t) = (2e^{3t}, e^{3t})^T

are known to be solutions of the homogeneous linear system y' = Py, where P is a real 2 × 2 constant matrix.
(a) Verify the two solutions form a fundamental set of solutions.
(b) What is tr(P)?
(c) Show that Ψ(t) satisfies the homogeneous differential equation Ψ' = PΨ, where

Ψ(t) = [y_1(t) y_2(t)] = [ 5  2e^{3t}
                           1  e^{3t} ].

(d) Use the observation of part (c) to determine the matrix P. [Hint: Compute the matrix product Ψ'(t)Ψ^{−1}(t). It follows from part (a) that Ψ^{−1}(t) exists.] Are the results of parts (b) and (d) consistent?

Problem 2.2.11
The homogeneous linear system

y' = [ 3   1
      −2   α ] y

has a fundamental set of solutions whose Wronskian is constant, W(t) = 4, −∞ < t < ∞. What is the value of α?

2.3 First Order Linear Systems: Fundamental Sets and Linear Independence

The results presented in this section are analogous to the ones established for nth order linear homogeneous differential equations (see Section 1.5). We start by showing that fundamental sets always exist.

Theorem 2.3.1
The first-order linear homogeneous equation y' = P(t)y, a < t < b, where the entries of P are all continuous in a < t < b, has a fundamental set of solutions.

Proof.
Pick a number t_0 such that a < t_0 < b. Consider the following n initial value problems:

y' = P(t)y, y(t_0) = e_1
y' = P(t)y, y(t_0) = e_2
⋮
y' = P(t)y, y(t_0) = e_n

where e_1 = (1, 0, ··· , 0)^T, e_2 = (0, 1, ··· , 0)^T, ··· , e_n = (0, 0, ··· , 1)^T. By the existence and uniqueness theorem (Theorem 2.1.1), these problems have solutions {y_1, y_2, ··· , y_n}. Since W(t_0) = det([e_1 e_2 ··· e_n]) = det(I_n) = 1, where I_n is the n × n identity matrix, we see that the solution set {y_1, y_2, ··· , y_n} forms a fundamental set of solutions.

Next, we establish the converse to Theorem 2.2.2.

Theorem 2.3.2
If {y_1, y_2, ··· , y_n} is a fundamental set of solutions to the first order linear homogeneous system y' = P(t)y, a < t < b, then W(t) ≠ 0 for all a < t < b.

Proof.
It suffices to show that W(t_0) ≠ 0 for some number a < t_0 < b, because by Abel's theorem this implies that W(t) ≠ 0 for all a < t < b. The general solution y(t) = c_1 y_1 + c_2 y_2 + ··· + c_n y_n to y' = P(t)y can be written as the matrix equation

y(t) = c_1 y_1 + c_2 y_2 + ··· + c_n y_n = Ψ(t)c, a < t < b,

where Ψ(t) = [y_1 y_2 ··· y_n] is the fundamental matrix and c = (c_1, c_2, ··· , c_n)^T. In particular,

y(t_0) = Ψ(t_0)c.

Since {y_1, y_2, ··· , y_n} is a fundamental set of solutions, the above matrix equation has a unique solution for c. This is possible only when Ψ^{−1}(t_0) exists, which is equivalent to saying that W(t_0) = det(Ψ(t_0)) ≠ 0. This completes the proof of the theorem.

We next extend the definition of linear dependence and independence to vector functions and show that a fundamental set of solutions is a linearly independent set of vector functions on the t−interval of existence. We say that a set of n × 1 vector functions {f_1(t), f_2(t), ··· , f_r(t)}, where a < t < b, is linearly dependent if one of the vector functions can be written as a linear combination of the remaining functions. Equivalently, this occurs if one can find constants k_1, k_2, ··· , k_r, not all zero, such that

k_1 f_1(t) + k_2 f_2(t) + ··· + k_r f_r(t) = 0, a < t < b.

If the set {f_1(t), f_2(t), ··· , f_r(t)} is not linearly dependent then it is said to be linearly independent in a < t < b. Equivalently, {f_1(t), f_2(t), ··· , f_r(t)} is linearly independent if and only if

k_1 f_1(t) + k_2 f_2(t) + ··· + k_r f_r(t) = 0, a < t < b, implies k_1 = k_2 = ··· = k_r = 0.

Example 2.3.1
Determine whether the given functions are linearly dependent or linearly independent on the interval −∞ < t < ∞:

f_1(t) = (1, t, 0)^T,  f_2(t) = (0, 1, t^2)^T.

Solution.
Suppose that k_1 f_1(t) + k_2 f_2(t) = 0 for all t. Taking t = 1 we obtain

k_1 = 0
k_1 + k_2 = 0
k_2 = 0.

Thus, k_1 = k_2 = 0, so f_1(t) and f_2(t) are linearly independent.

Theorem 2.3.3
The solution set {y_1, y_2, ··· , y_n} is a fundamental set of solutions to y' = P(t)y, where the n × n matrix P(t) is continuous in a < t < b, if and only if the functions y_1, y_2, ··· , y_n are linearly independent.

Proof. Suppose first that {y1, y2, ··· , yn} is a fundamental set of solutions. Then by Theorem 2.3.2 there is a < t0 < b such that W (t0) 6= 0. Suppose that

c1y1(t) + c2y2(t) + ··· + cnyn(t) = 0 for all a < t < b. This can be written as the matrix equation

Ψ(t)c = 0, a < t < b, where c = (c_1, c_2, ··· , c_n)^T.

In particular, Ψ(t_0)c = 0. Since W(t_0) = det(Ψ(t_0)) ≠ 0, Ψ^{−1}(t_0) exists, so c = Ψ^{−1}(t_0)Ψ(t_0)c = Ψ^{−1}(t_0) · 0 = 0. Hence, c_1 = c_2 = ··· = c_n = 0 and y_1, y_2, ··· , y_n are linearly independent.
Conversely, suppose that {y_1, y_2, ··· , y_n} is a linearly independent set, and suppose that {y_1, y_2, ··· , y_n} is not a fundamental set of solutions. Then by Theorem 2.2.2, W(t) = det(Ψ(t)) = 0 for all a < t < b. Choose any a < t_0 < b. Then W(t_0) = 0. But this says that the matrix Ψ(t_0) is not invertible. In terms of matrix theory, this means that Ψ(t_0)c = 0 for some vector c = (c_1, c_2, ··· , c_n)^T ≠ 0.

Now, let y(t) = c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t) for a < t < b. Then y(t) is a solution to the differential equation with y(t_0) = Ψ(t_0)c = 0. But the zero function is also a solution to this initial value problem. By the existence and uniqueness theorem we must have c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t) = 0 for all a < t < b with c_1, c_2, ··· , c_n not all equal to 0. But this means that y_1, y_2, ··· , y_n are linearly dependent, which contradicts our assumption that y_1, y_2, ··· , y_n are linearly independent.

Remark 2.3.1
The fact that y_1, y_2, ··· , y_n are solutions to y' = Py is critical in the above theorem. For example, the vectors

f_1(t) = (1, 0, 0)^T,  f_2(t) = (t, 2, 0)^T,  f_3(t) = (t^2, t, 0)^T

are linearly independent even though det([f_1 f_2 f_3]) ≡ 0.

Example 2.3.2
Consider the functions

f_1(t) = (e^t, 0)^T,  f_2(t) = (t^2, t)^T.

(a) Let Ψ(t) = [f_1(t) f_2(t)]. Determine det(Ψ(t)).
(b) Is it possible that the given functions form a fundamental set of solutions for a linear system y' = P(t)y where P(t) is continuous on a t−interval containing the point t = 0? Explain.
(c) Determine a matrix P(t) such that the given vector functions form a fundamental set of solutions for y' = P(t)y. On what t−interval(s) is the coefficient matrix P(t) continuous? (Hint: The matrix Ψ(t) must satisfy Ψ'(t) = P(t)Ψ(t) and det(Ψ(t)) ≠ 0.)

Solution.
(a) We have

det(Ψ(t)) = | e^t  t^2 | = te^t.
            | 0    t   |

(b) Since det(Ψ(0)) = 0, the given functions do not form a fundamental set for a linear system y' = P(t)y on any t−interval containing 0.
(c) For Ψ(t) to be a fundamental matrix it must satisfy the differential equation Ψ'(t) = P(t)Ψ(t) (see Section 2.2) and the condition det(Ψ(t)) ≠ 0. But det(Ψ(t)) = te^t, and this is nonzero on any interval not containing zero. Thus, our coefficient matrix P(t) must be continuous on either −∞ < t < 0 or 0 < t < ∞. Now, from the equation Ψ'(t) = P(t)Ψ(t) we can find P(t) = Ψ'(t)Ψ^{−1}(t). That is,

P(t) = Ψ'(t)Ψ^{−1}(t) = [ e^t  2t    · (1/(te^t)) [ t  −t^2
                          0    1  ]                 0   e^t ]
     = (1/(te^t)) [ te^t  (2t − t^2)e^t
                    0     e^t           ]
     = t^{−1} [ t  2t − t^2
                0  1        ].

Finally, we show how to generate new fundamental sets from a given one, thereby establishing the fact that a first order linear homogeneous system has many fundamental sets of solutions. We also show how different fundamental sets are related to each other. For this, let us start with a fundamental set {y_1, y_2, ··· , y_n} of solutions to y' = P(t)y. If ȳ_1, ȳ_2, ··· , ȳ_n are n solutions then they can be written as linear combinations of y_1, y_2, ··· , y_n.

That is,

a_11 y_1 + a_21 y_2 + ··· + a_n1 y_n = ȳ_1
a_12 y_1 + a_22 y_2 + ··· + a_n2 y_n = ȳ_2
⋮
a_1n y_1 + a_2n y_2 + ··· + a_nn y_n = ȳ_n,

or in matrix form

[ȳ_1 ȳ_2 ··· ȳ_n] = [y_1 y_2 ··· y_n] A,  where  A = [ a_11  a_12  ···  a_1n
                                                       a_21  a_22  ···  a_2n
                                                       ⋮
                                                       a_n1  a_n2  ···  a_nn ].

That is, Ψ̄(t) = Ψ(t)A, where Ψ̄(t) = [ȳ_1 ȳ_2 ··· ȳ_n].

Theorem 2.3.4
{ȳ_1, ȳ_2, ··· , ȳ_n} is a fundamental set if and only if det(A) ≠ 0, where A is the coefficient matrix of the above matrix equation.

Proof.
Let W̄(t) = det(Ψ̄(t)). Since Ψ̄(t) = Ψ(t)A and W(t) = det(Ψ(t)) ≠ 0, we have W̄(t) = det(Ψ(t)) det(A), so W̄(t) ≠ 0 if and only if det(A) ≠ 0. That is, {ȳ_1, ȳ_2, ··· , ȳ_n} is a fundamental set of solutions if and only if det(A) ≠ 0.

Example 2.3.3
Let

y' = [ 0  1
       1  0 ] y,  Ψ(t) = [ e^t   e^{−t}
                           e^t  −e^{−t} ],  Ψ̄(t) = [ sinh t  cosh t
                                                      cosh t  sinh t ].

(a) Verify that the matrix Ψ(t) is a fundamental matrix of the given linear system.
(b) Determine a constant matrix A such that the given matrix Ψ̄(t) can be represented as Ψ̄(t) = Ψ(t)A.
(c) Use your knowledge of the matrix A and Theorem 2.3.4 to determine whether Ψ̄(t) is also a fundamental matrix, or simply a solution matrix.

Solution. (a) Since  et −e−t  Ψ0(t) = et e−t and  0 1   et e−t   et −e−t  P(t)Ψ(t) = = 1 0 et −e−t et e−t we conclude that Ψ is a solution matrix. To show that Ψ(t) is a fundamental matrix we need to verify that det(Ψ(t)) 6= 0. Since det(Ψ(t)) = −2 6= 0, Ψ(t) is a fundamental matrix. (b) First write

 sinh t cosh t  1  et − e−t et + e−t  Ψ(t) = = . cosh t sinh t 2 et + e−t et − e−t

Thus, the question is to find a, b, c, and d such that

1  et − e−t et + e−t   et e−t   a b   aet + ce−t bet + de−t  = = 2 et + e−t et − e−t et −e−t c d aet − ce−t bet − de−t

Comparing entries we find a = 1/2, b = 1/2, c = −1/2, and d = 1/2. 1 (c) Since det(A) = 2 , Ψ(t) is a fundamental matrix 114 CHAPTER 2. FIRST ORDER LINEAR SYSTEMS Practice Problems

In Problems 2.3.1 - 2.3.4, determine whether the given functions are linearly dependent or linearly independent on the interval −∞ < t < ∞.

Problem 2.3.1

f_1(t) = \begin{pmatrix} t \\ 1 \end{pmatrix}, \quad f_2(t) = \begin{pmatrix} t^2 \\ 1 \end{pmatrix}.

Problem 2.3.2

f_1(t) = \begin{pmatrix} e^t \\ 1 \end{pmatrix}, \quad f_2(t) = \begin{pmatrix} e^{−t} \\ 1 \end{pmatrix}, \quad f_3(t) = \begin{pmatrix} \frac{e^t − e^{−t}}{2} \\ 0 \end{pmatrix}.

Problem 2.3.3

 1   0   0  f1(t) =  t  , f2(t) =  1  , f3(t) =  0  . 0 t2 0

Problem 2.3.4

 1   0   1  2 2 f1(t) =  sin t  , f2(t) =  2(1 − cos t)  , f3(t) =  0  . 0 −2 1

Problem 2.3.5
Let

y' = \begin{pmatrix} 1 & 1 & 1 \\ 0 & −1 & 1 \\ 0 & 0 & 2 \end{pmatrix} y, \quad Ψ(t) = \begin{pmatrix} e^t & e^{−t} & 4e^{2t} \\ 0 & −2e^{−t} & e^{2t} \\ 0 & 0 & 3e^{2t} \end{pmatrix}, \quad \bar{Ψ}(t) = \begin{pmatrix} e^t + e^{−t} & 4e^{2t} & e^t + 4e^{2t} \\ −2e^{−t} & e^{2t} & e^{2t} \\ 0 & 3e^{2t} & 3e^{2t} \end{pmatrix}.

(a) Verify that the matrix Ψ(t) is a fundamental matrix of the given linear system.
(b) Determine a constant matrix A such that the given matrix \bar{Ψ}(t) can be represented as \bar{Ψ}(t) = Ψ(t)A.
(c) Use your knowledge of the matrix A and Theorem 2.3.4 to determine whether \bar{Ψ}(t) is also a fundamental matrix, or simply a solution matrix.

Problem 2.3.6
Let

y' = \begin{pmatrix} 1 & 1 \\ 0 & −2 \end{pmatrix} y, \quad Ψ(t) = \begin{pmatrix} e^t & e^{−2t} \\ 0 & −3e^{−2t} \end{pmatrix},

where the matrix Ψ(t) is a fundamental matrix of the given homogeneous linear system. Find a constant matrix A such that \bar{Ψ}(t) = Ψ(t)A with

\bar{Ψ}(0) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

2.4 Homogeneous Systems with Constant Coefficients: Distinct Eigenvalues

In this section, we consider solving linear homogeneous systems of the form y' = Py, where P is an n × n matrix with constant real entries. Recall that the general solution to this system is given by

y(t) = c_1y_1(t) + c_2y_2(t) + ··· + c_ny_n(t),

where {y_1(t), y_2(t), ···, y_n(t)} is a fundamental set of solutions. So the problem of finding the general solution reduces to the one of finding a fundamental set of solutions. Let us go back and look at how we solved a second order linear homogeneous equation with constant coefficients

y'' + ay' + by = 0. (2.4.1)

To find the fundamental set of solutions we considered trial functions of the form y = e^{rt} and found that r is a solution to the characteristic equation r^2 + ar + b = 0. But (2.4.1) is equivalent to the first order homogeneous linear system

\begin{pmatrix} y \\ y' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ −b & −a \end{pmatrix}\begin{pmatrix} y \\ y' \end{pmatrix}. (2.4.2)

Now, if r is a solution to the characteristic equation r^2 + ar + b = 0 then one can easily check that the vector function

y = \begin{pmatrix} e^{rt} \\ re^{rt} \end{pmatrix} = e^{rt}\begin{pmatrix} 1 \\ r \end{pmatrix}

is a solution to (2.4.2). Motivated by the above discussion, we will consider trial functions for the system

y' = Py (2.4.3)

of the form y = e^{rt}x, where x is a nonzero vector. Substituting this into (2.4.3) we find re^{rt}x = Pe^{rt}x. This can be written as a linear system of the form

(P − rI_n)x = 0, (2.4.4)

where I_n is the n × n identity matrix. Since system (2.4.4) has a nonzero solution x, the matrix P − rI_n cannot be invertible (otherwise x = 0). This means that

p(r) = det(P − rI_n) = 0. (2.4.5)

We call (2.4.5) the characteristic equation associated with the linear system (2.4.3). Its solutions are called eigenvalues. A nonzero vector x satisfying (2.4.4) for an eigenvalue r is called an eigenvector. The pair (r, x) is called an eigenpair. It follows that each eigenpair (r, x) yields a solution of the form y(t) = e^{rt}x. If there are n different eigenpairs then these will yield n different solutions. We will show below that these n different solutions form a fundamental set of solutions and therefore yield the general solution to (2.4.3). Thus, we need to address the following questions:

(1) Given an n × n matrix P, do there always exist eigenpairs? Is it possible to find n different eigenpairs and thereby form n different solutions of (2.4.3)? (2) How do we find these eigenpairs?

As pointed out earlier, the eigenvalues are solutions to equation (2.4.5). But

p(r) = \begin{vmatrix} a_{11} − r & a_{12} & ··· & a_{1n} \\ a_{21} & a_{22} − r & ··· & a_{2n} \\ ⋮ & ⋮ & & ⋮ \\ a_{n1} & a_{n2} & ··· & a_{nn} − r \end{vmatrix} = 0.

The determinant is the sum of elementary products, each having n factors, no two of which come from the same row or column. Thus, one of the terms has the form (a_{11} − r)(a_{22} − r) ··· (a_{nn} − r). From this we see that p(r) is a polynomial of degree n. We call p(r) the characteristic polynomial. By the Fundamental Theorem of Algebra, the equation p(r) = 0 has n roots, and therefore n eigenvalues. These eigenvalues may be zero or nonzero, real or complex, and some of them may be repeated. Now, for each eigenvalue r, we find a corresponding eigenvector by solving the linear system of n equations in n unknowns: (P − rI_n)x = 0.
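In the 2 × 2 case, the characteristic equation is simply the quadratic r^2 − tr(P)r + det(P) = 0, so the eigenvalues can be computed directly from the trace and determinant. The following is a minimal sketch of that computation (complex roots are handled automatically by `cmath`):

```python
import cmath

def eigenvalues_2x2(p):
    """Roots of the characteristic polynomial p(r) = r^2 - tr(P) r + det(P)
    for a 2x2 matrix P, via the quadratic formula."""
    (a, b), (c, d) = p
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2
```

For instance, for P = [[8, 0], [5, 2]] this returns the roots 8 and 2, and for P = [[1, 1], [−1, 1]] it returns the conjugate pair 1 ± i.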

Example 2.4.1
Consider the homogeneous first order system

y' = \begin{pmatrix} 4 & 2 \\ −1 & 1 \end{pmatrix} y.

(a) Show that x_1 = \begin{pmatrix} 1 \\ −1 \end{pmatrix} and x_2 = \begin{pmatrix} −2 \\ 1 \end{pmatrix} are eigenvectors of P. Determine the corresponding eigenvalues.
(b) For each eigenpair found in (a), form a solution set {y_1, y_2} of the system y' = Py.
(c) Calculate the Wronskian and decide if the two solutions form a fundamental set.

Solution.
(a) Since

Px_1 = \begin{pmatrix} 4 & 2 \\ −1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ −1 \end{pmatrix} = \begin{pmatrix} 2 \\ −2 \end{pmatrix} = 2x_1,

x_1 is an eigenvector corresponding to the eigenvalue 2. Similarly,

Px_2 = \begin{pmatrix} 4 & 2 \\ −1 & 1 \end{pmatrix}\begin{pmatrix} −2 \\ 1 \end{pmatrix} = \begin{pmatrix} −6 \\ 3 \end{pmatrix} = 3x_2.

Thus, x_2 is an eigenvector corresponding to the eigenvalue 3.
(b) The two solutions are y_1(t) = e^{2t}\begin{pmatrix} 1 \\ −1 \end{pmatrix} and y_2(t) = e^{3t}\begin{pmatrix} −2 \\ 1 \end{pmatrix}.
(c) The Wronskian is

W(t) = \begin{vmatrix} e^{2t} & −2e^{3t} \\ −e^{2t} & e^{3t} \end{vmatrix} = −e^{5t}.

Since W(t) ≠ 0, the set {y_1, y_2} forms a fundamental set of solutions.
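The verification in this example is mechanical enough to automate. The following sketch checks the two eigenpairs of Example 2.4.1 numerically and evaluates the hand-computed Wronskian W(t) = −e^{5t}:

```python
import math

P = [[4.0, 2.0], [-1.0, 1.0]]

def matvec(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

def is_eigenpair(m, r, x, tol=1e-12):
    # Check Px = r x componentwise
    mx = matvec(m, x)
    return all(abs(mx[i] - r * x[i]) <= tol for i in range(2))

def wronskian(t):
    # det [y1(t) y2(t)] for y1 = e^{2t}(1,-1), y2 = e^{3t}(-2,1)
    return (math.exp(2*t) * math.exp(3*t)
            - (-2 * math.exp(3*t)) * (-math.exp(2*t)))
```

Here `wronskian(t)` simplifies to e^{5t} − 2e^{5t} = −e^{5t}, which never vanishes, matching the conclusion above.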

Example 2.4.2
Find the eigenvalues of the matrix P = \begin{pmatrix} 8 & 0 \\ 5 & 2 \end{pmatrix}.

Solution.
The characteristic polynomial is

p(r) = \begin{vmatrix} 8 − r & 0 \\ 5 & 2 − r \end{vmatrix} = (8 − r)(2 − r).

Thus, the eigenvalues are r = 8 and r = 2.

Example 2.4.3
Suppose that r = 2 is an eigenvalue of the matrix P = \begin{pmatrix} −4 & 3 \\ −4 & 4 \end{pmatrix}. Find an eigenvector corresponding to this eigenvalue.

Solution.
We have (P − 2I_2)x = 0, or

\begin{pmatrix} −6 & 3 \\ −4 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving this system we find x_2 = 2x_1. Thus, an eigenvector is \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

We next list some properties of eigenvalues and eigenvectors.

Theorem 2.4.1 (a) If (r, x) is an eigenpair then for any α 6= 0, (r, αx) is also an eigenpair. This shows that eigenvectors are not unique. (b) A matrix P can have a zero eigenvalue. (c) A real matrix may have one or more complex eigenvalues and eigenvectors.

Proof.
(a) Suppose that x is an eigenvector corresponding to an eigenvalue r of a matrix P. Then for any nonzero constant α we have P(αx) = αPx = r(αx) with αx ≠ 0. Hence, (r, αx) is an eigenpair.
(b) The characteristic equation of the matrix P = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} is r(r − 1) = 0, so that r = 0 is an eigenvalue.
(c) The characteristic equation of the matrix P = \begin{pmatrix} 1 & 1 \\ −1 & 1 \end{pmatrix} is r^2 − 2r + 2 = 0. Its roots are r = 1 + i and r = 1 − i. For r = 1 + i we have the system

\begin{pmatrix} −i & 1 \\ −1 & −i \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

A solution to this system is the vector x_1 = \begin{pmatrix} 1 \\ i \end{pmatrix}. Similarly, for r = 1 − i we have

\begin{pmatrix} i & 1 \\ −1 & i \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

A solution to this system is the vector x_2 = \begin{pmatrix} 1 \\ −i \end{pmatrix}.

Theorem 2.4.2 Eigenvectors x1, x2, ··· , xk corresponding to distinct eigenvalues r1, r2, ··· , rk are linearly independent.

Proof.
Let us prove this by induction on k. The result is clear for k = 1 because eigenvectors are nonzero and a subset consisting of one nonzero vector is linearly independent. Now assume that the result holds for k − 1 eigenvectors. Let x_1, x_2, ···, x_k be eigenvectors corresponding to distinct eigenvalues r_1, r_2, ···, r_k. Assume that there is a linear combination

c1x1 + c2x2 + ··· + ckxk = 0.

Then we have

c_1x_1 + c_2x_2 + ··· + c_kx_k = 0 ⟹ P(c_1x_1 + c_2x_2 + ··· + c_kx_k) = 0
⟹ c_1Px_1 + c_2Px_2 + ··· + c_kPx_k = 0
⟹ c_1r_1x_1 + c_2r_2x_2 + ··· + c_kr_kx_k = 0
⟹ (c_1r_1x_1 + c_2r_2x_2 + ··· + c_kr_kx_k) − (c_1r_kx_1 + c_2r_kx_2 + ··· + c_kr_kx_k) = 0
⟹ c_1(r_1 − r_k)x_1 + c_2(r_2 − r_k)x_2 + ··· + c_{k−1}(r_{k−1} − r_k)x_{k−1} = 0.

But by the induction hypothesis, the vectors x_1, x_2, ···, x_{k−1} are linearly independent, so that c_1(r_1 − r_k) = c_2(r_2 − r_k) = ··· = c_{k−1}(r_{k−1} − r_k) = 0. Since the eigenvalues are all distinct, we must have c_1 = c_2 = ··· = c_{k−1} = 0. In this case we are left with c_kx_k = 0. Since x_k ≠ 0, c_k = 0. This shows that {x_1, x_2, ···, x_k} is linearly independent.

The next theorem states that n linearly independent eigenvectors yield a fundamental set of solutions to the equation y' = Py.

Theorem 2.4.3
Consider the homogeneous system y' = Py, −∞ < t < ∞. Suppose that P has eigenpairs (r_1, x_1), (r_2, x_2), ···, (r_n, x_n), where x_1, x_2, ···, x_n are linearly independent. Then the set of solutions

{e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n}

forms a fundamental set of solutions.

Proof.
We will show that the vectors e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n are linearly independent. Suppose that

c_1e^{r_1t}x_1 + c_2e^{r_2t}x_2 + ··· + c_ne^{r_nt}x_n = 0

for all −∞ < t < ∞. In particular, we can replace t by 0 and obtain

c_1x_1 + c_2x_2 + ··· + c_nx_n = 0.

Since the vectors x_1, x_2, ···, x_n are linearly independent, we must have c_1 = c_2 = ··· = c_n = 0. This shows that e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n are linearly independent. Since each vector is also a solution, by Theorem 2.3.3 the set {e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n} forms a fundamental set of solutions.

Combining Theorem 2.4.2 and Theorem 2.4.3 we obtain

Theorem 2.4.4
Consider the homogeneous system y' = Py, −∞ < t < ∞. Suppose that P has n eigenpairs (r_1, x_1), (r_2, x_2), ···, (r_n, x_n) with distinct eigenvalues. Then the set of solutions

{e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n}

forms a fundamental set of solutions.

Proof.
Since the eigenvalues are distinct, by Theorem 2.4.2 the eigenvectors x_1, x_2, ···, x_n are linearly independent. But then by Theorem 2.4.3 the set of solutions {e^{r_1t}x_1, e^{r_2t}x_2, ···, e^{r_nt}x_n} forms a fundamental set of solutions.

Example 2.4.4
Solve the following initial value problem:

y' = \begin{pmatrix} −2 & 1 \\ 1 & −2 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.

Solution.
The characteristic equation is

\begin{vmatrix} −2 − r & 1 \\ 1 & −2 − r \end{vmatrix} = (r + 1)(r + 3) = 0.

Solving this quadratic equation we find r_1 = −1 and r_2 = −3. Now,

(P + I)x = \begin{pmatrix} −1 & 1 \\ 1 & −1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} −x_1 + x_2 \\ x_1 − x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving this system we find x_1 = x_2. Letting x_1 = 1, then x_2 = 1 and an eigenvector is

x_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.

Similarly,

(P + 3I)x = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ x_1 + x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving this system we find x_2 = −x_1. Letting x_1 = 1, then x_2 = −1 and an eigenvector is

x_2 = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

By Theorem 2.4.4, a fundamental set of solutions is given by {e^{−t}x_1, e^{−3t}x_2}. The general solution is then

y(t) = c_1e^{−t}x_1 + c_2e^{−3t}x_2.

Using the initial condition we find c_1 + c_2 = 3 and c_1 − c_2 = 1. Solving this system we find c_1 = 2 and c_2 = 1. Hence, the unique solution is given by

y(t) = 2e^{−t}x_1 + e^{−3t}x_2 = \begin{pmatrix} 2e^{−t} + e^{−3t} \\ 2e^{−t} − e^{−3t} \end{pmatrix}.

Practice Problems

In Problems 2.4.1 - 2.4.3, a 2 × 2 matrix P and vectors x_1 and x_2 are given.
(a) Decide which, if any, of the given vectors is an eigenvector of P, and determine the corresponding eigenvalue.
(b) For each eigenpair found in part (a), form a solution y_k(t), where k = 1 or k = 2, of the first order system y' = Py.
(c) If two solutions are found in part (b), do they form a fundamental set of solutions for y' = Py?

Problem 2.4.1

P = \begin{pmatrix} 7 & −3 \\ 16 & −7 \end{pmatrix}, \quad x_1 = \begin{pmatrix} 3 \\ 8 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

Problem 2.4.2

P = \begin{pmatrix} −5 & 2 \\ −18 & 7 \end{pmatrix}, \quad x_1 = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

Problem 2.4.3

P = \begin{pmatrix} 2 & −1 \\ −4 & 2 \end{pmatrix}, \quad x_1 = \begin{pmatrix} 1 \\ −2 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

In Problems 2.4.4 - 2.4.6, an eigenvalue of the matrix P is given. Determine a corresponding eigenvector.

Problem 2.4.4

P = \begin{pmatrix} 5 & 3 \\ −4 & −3 \end{pmatrix}, \quad r = −1.

Problem 2.4.5

P = \begin{pmatrix} 1 & −7 & 3 \\ −1 & −1 & 1 \\ 4 & −4 & 0 \end{pmatrix}, \quad r = −4.

Problem 2.4.6

P = \begin{pmatrix} 1 & 3 & 1 \\ 2 & 1 & 2 \\ 4 & 3 & −2 \end{pmatrix}, \quad r = 5.

In Problems 2.4.7 - 2.4.10, find the eigenvalues of the matrix P.

Problem 2.4.7

P = \begin{pmatrix} −5 & 1 \\ 0 & 4 \end{pmatrix}.

Problem 2.4.8

P = \begin{pmatrix} 3 & −3 \\ −6 & 6 \end{pmatrix}.

Problem 2.4.9

P = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 1 & 3 \\ 0 & 2 & 2 \end{pmatrix}.

Problem 2.4.10

P = \begin{pmatrix} 1 & −7 & 3 \\ −1 & −1 & 1 \\ 4 & −4 & 0 \end{pmatrix}.

In Problems 2.4.11 - 2.4.13, the matrix P has distinct eigenvalues. Using Theorem 2.4.4, determine a fundamental set of solutions of the system y' = Py.

Problem 2.4.11

P = \begin{pmatrix} −0.09 & 0.02 \\ 0.04 & −0.07 \end{pmatrix}.

Problem 2.4.12

P = \begin{pmatrix} 1 & 2 & 0 \\ −4 & 7 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

Problem 2.4.13

P = \begin{pmatrix} 3 & 1 & 0 \\ −8 & 6 & 2 \\ −9 & −9 & 4 \end{pmatrix}.

Problem 2.4.14
Solve the following initial value problem.

y' = \begin{pmatrix} 5 & 3 \\ −4 & −3 \end{pmatrix} y, \quad y(1) = \begin{pmatrix} 2 \\ 0 \end{pmatrix}.

Problem 2.4.15
Solve the following initial value problem.

y' = \begin{pmatrix} 4 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & −2 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} −1 \\ 0 \\ 3 \end{pmatrix}.

Problem 2.4.16
Find α so that the vector x is an eigenvector of P. What is the corresponding eigenvalue?

P = \begin{pmatrix} 2 & α \\ 1 & −5 \end{pmatrix}, \quad x = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

Problem 2.4.17
Find α and β so that the vector x is an eigenvector of P corresponding to the eigenvalue r = 1.

P = \begin{pmatrix} α & β \\ 2α & β \end{pmatrix}, \quad x = \begin{pmatrix} −1 \\ 1 \end{pmatrix}.

2.5 Homogeneous Systems with Constant Coefficients: Complex Eigenvalues

We continue the study of finding the general solution of y' = Py, where P is an n × n matrix with real entries. In this section, we consider the case when P possesses complex eigenvalues. We start with the following result.

Theorem 2.5.1
If (r, x) is an eigenpair of P then (\bar{r}, \bar{x}) is an eigenpair of P. Thus, complex eigenvalues always occur in conjugate pairs.

Proof.
Write r = α + iβ. Then we have Px = (α + iβ)x. Take the conjugate of both sides to obtain \bar{P}\bar{x} = (α − iβ)\bar{x}. But P is a real matrix, so that \bar{P} = P. Thus, P\bar{x} = (α − iβ)\bar{x}. This shows that α − iβ is an eigenvalue of P with corresponding eigenvector \bar{x}.

In most applications, real-valued solutions are more meaningful than complex-valued solutions. Our next task is to describe how to convert the complex solutions to y' = Py into real-valued solutions.

Theorem 2.5.2
Let P be a real-valued n × n matrix. If P has eigenvalues r = α + iβ and \bar{r} = α − iβ, where β ≠ 0, and corresponding (complex conjugate) eigenvectors x = a + ib and \bar{x} = a − ib, then y_1 = e^{αt}(a cos βt − b sin βt) and y_2 = e^{αt}(a sin βt + b cos βt) are two linearly independent solutions of y' = Py.

Proof.
By Euler's formula we have

e^{(α+iβ)t}x = e^{αt}(cos βt + i sin βt)(a + ib) = e^{αt}(a cos βt − b sin βt) + ie^{αt}(a sin βt + b cos βt) = y_1 + iy_2

and

e^{(α−iβ)t}\bar{x} = e^{αt}(cos βt − i sin βt)(a − ib) = e^{αt}(a cos βt − b sin βt) − ie^{αt}(a sin βt + b cos βt) = y_1 − iy_2.

We next show that y_1 and y_2 are solutions to y' = Py. Indeed,

[e^{(α+iβ)t}x]' = y_1' + iy_2'

and

Pe^{(α+iβ)t}x = Py_1 + iPy_2.

Since Pe^{(α+iβ)t}x = [e^{(α+iβ)t}x]', we must have Py_1 = y_1' and Py_2 = y_2'. Next, we show that y_1 and y_2 are linearly independent. Suppose that c_1y_1 + c_2y_2 = 0. This is equivalent to

\frac{1}{2}e^{(α+iβ)t}x(c_1 − ic_2) + \frac{1}{2}e^{(α−iβ)t}\bar{x}(c_1 + ic_2) = 0.

Since e^{(α+iβ)t}x and e^{(α−iβ)t}\bar{x} are linearly independent, we must have c_1 − ic_2 = 0 and c_1 + ic_2 = 0. This system leads to c_1 = c_2 = 0. Hence, y_1 and y_2 are linearly independent.

Example 2.5.1
Solve

y' = \begin{pmatrix} −\frac{1}{2} & 1 \\ −1 & −\frac{1}{2} \end{pmatrix} y.

Solution.
The characteristic equation is

\begin{vmatrix} −\frac{1}{2} − r & 1 \\ −1 & −\frac{1}{2} − r \end{vmatrix} = \left(r + \frac{1}{2}\right)^2 + 1 = 0.

Solving this quadratic equation we find r_1 = −\frac{1}{2} + i and r_2 = −\frac{1}{2} − i. For r_1 = −\frac{1}{2} + i,

\left(P − \left(−\tfrac{1}{2} + i\right)I_2\right)x = \begin{pmatrix} −i & 1 \\ −1 & −i \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} −ix_1 + x_2 \\ −x_1 − ix_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Solving this system we find x_2 = ix_1. Letting x_1 = 1, an eigenvector is

x = \begin{pmatrix} 1 \\ i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}.

An eigenvector corresponding to the conjugate eigenvalue r_2 = −\frac{1}{2} − i is then

\bar{x} = \begin{pmatrix} 1 \\ −i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} − i\begin{pmatrix} 0 \\ 1 \end{pmatrix}.

With α = −\frac{1}{2}, β = 1, a = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, and b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, the general solution is then

y(t) = c_1e^{−t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\cos t − \begin{pmatrix} 0 \\ 1 \end{pmatrix}\sin t\right] + c_2e^{−t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\sin t + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\cos t\right] = \begin{pmatrix} e^{−t/2}(c_1\cos t + c_2\sin t) \\ e^{−t/2}(−c_1\sin t + c_2\cos t) \end{pmatrix}.
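The eigenpair of Example 2.5.1, together with Theorem 2.5.1's conjugate pair, can be checked quickly with complex arithmetic; a small sketch:

```python
# Numerical check of the complex eigenpair of Example 2.5.1.
P = [[-0.5, 1.0], [-1.0, -0.5]]

def matvec(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

r, x = -0.5 + 1j, [1, 1j]          # eigenpair found above
rbar = r.conjugate()               # conjugate eigenvalue (Theorem 2.5.1)
xbar = [v.conjugate() for v in x]  # conjugate eigenvector

# Px = r x and P xbar = rbar xbar, componentwise
assert all(abs(pv - r * v) < 1e-12 for pv, v in zip(matvec(P, x), x))
assert all(abs(pv - rbar * v) < 1e-12 for pv, v in zip(matvec(P, xbar), xbar))
```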

Practice Problems

Problem 2.5.1
Find the eigenvalues and the eigenvectors of the matrix

P = \begin{pmatrix} 0 & −9 \\ 1 & 0 \end{pmatrix}.

Problem 2.5.2
Find the eigenvalues and the eigenvectors of the matrix

P = \begin{pmatrix} 3 & 1 \\ −2 & 1 \end{pmatrix}.

Problem 2.5.3
Find the eigenvalues and the eigenvectors of the matrix

P = \begin{pmatrix} 1 & −4 & −1 \\ 3 & 2 & 3 \\ 1 & 1 & 3 \end{pmatrix}.

In Problems 2.5.4 - 2.5.6, one or more eigenvalues and corresponding eigenvectors are given for a real matrix P. Determine a fundamental set of solutions for y' = Py, where the fundamental set consists entirely of real solutions.

Problem 2.5.4
P is a 2 × 2 matrix with an eigenvalue r = i and corresponding eigenvector

x = \begin{pmatrix} −2 + i \\ 5 \end{pmatrix}.

Problem 2.5.5
P is a 2 × 2 matrix with an eigenvalue r = 1 + i and corresponding eigenvector

x = \begin{pmatrix} −1 + i \\ i \end{pmatrix}.

Problem 2.5.6
P is a 4 × 4 matrix with eigenvalue r = 1 + 5i and corresponding eigenvector

x = \begin{pmatrix} i \\ 1 \\ 0 \\ 0 \end{pmatrix}

and eigenvalue r = 1 + 2i with corresponding eigenvector

x = \begin{pmatrix} 0 \\ 0 \\ i \\ 1 \end{pmatrix}.

Problem 2.5.7
Solve the initial value problem

y' = \begin{pmatrix} 0 & −9 \\ 1 & 0 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} 6 \\ 2 \end{pmatrix}.

Problem 2.5.8
Solve the initial value problem

y' = \begin{pmatrix} 3 & 1 \\ −2 & 1 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} 8 \\ 6 \end{pmatrix}.

Problem 2.5.9
Solve the initial value problem

y' = \begin{pmatrix} 1 & −4 & −1 \\ 3 & 2 & 3 \\ 1 & 1 & 3 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} −1 \\ 9 \\ 4 \end{pmatrix}.

2.6 Homogeneous Systems with Constant Coefficients: Repeated Eigenvalues

In this section we consider the case when the characteristic equation possesses repeated roots. A major difficulty with repeated eigenvalues is that in some situations there are not enough linearly independent eigenvectors to form a fundamental set of solutions. We illustrate this in the next example.

Example 2.6.1
Solve the system

y' = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} y.

Solution.
The characteristic equation is

\begin{vmatrix} 1 − r & 2 \\ 0 & 1 − r \end{vmatrix} = (r − 1)^2 = 0

and has a repeated root r = 1. We find an eigenvector as follows:

\begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 2x_2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

It follows that x_2 = 0 and x_1 is arbitrary. Letting x_1 = 1, an eigenvector is

x_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.

Up to scalar multiples, this is the only eigenvector. It yields the solution

y_1 = \begin{pmatrix} e^t \\ 0 \end{pmatrix}.

But we need two linearly independent solutions to form the general solution of the given system, and we only have one. How do we find a second solution y_2(t) such that {y_1, y_2} is a fundamental set of solutions? Let y(t) be a solution. Write

y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}.

Then we have

y_1'(t) = y_1 + 2y_2
y_2'(t) = y_2.

Solving the second equation we find y_2(t) = c_2e^t. Substituting this into the first differential equation we find y_1'(t) = y_1 + 2c_2e^t. Solving this equation using the method of integrating factor we find y_1(t) = c_1e^t + 2c_2te^t. Therefore the general solution to y' = Py is

y(t) = \begin{pmatrix} c_1e^t + 2c_2te^t \\ c_2e^t \end{pmatrix} = c_1e^t\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2\left[e^t\begin{pmatrix} 0 \\ 1 \end{pmatrix} + 2te^t\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right] = c_1y_1(t) + c_2y_2(t).

Thus, a second solution to y' = Py is

y_2(t) = e^t\begin{pmatrix} 0 \\ 1 \end{pmatrix} + 2te^t\begin{pmatrix} 1 \\ 0 \end{pmatrix}.

Finally, letting Ψ(t) = [y_1 \; y_2] we find

W(0) = det(Ψ(0)) = \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1,

so that {y_1, y_2} is a fundamental set of solutions. Finally, note that the second solution can be expressed as y_2(t) = 2te^tx_1 + e^tx_2, where x_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
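The factor 2 produced by the integrating-factor step is easy to lose. A direct check that y_2(t) = (2te^t, e^t)^T satisfies y' = Py, comparing the hand-computed derivative against Py_2 at a few points, confirms it:

```python
import math

def y2(t):
    # Candidate second solution y2(t) = 2t e^t x1 + e^t x2
    return [2 * t * math.exp(t), math.exp(t)]

def y2_prime(t):
    # Derivative of the components above, computed by hand:
    # d/dt [2t e^t] = 2 e^t + 2t e^t,  d/dt [e^t] = e^t
    return [2 * math.exp(t) + 2 * t * math.exp(t), math.exp(t)]

def rhs(t):
    # P y2(t) for P = [[1, 2], [0, 1]]
    v = y2(t)
    return [v[0] + 2 * v[1], v[1]]

for t in (0.0, 0.5, -1.3):
    d, f = y2_prime(t), rhs(t)
    assert abs(d[0] - f[0]) < 1e-9 and abs(d[1] - f[1]) < 1e-9
```

Dropping the factor 2 (i.e. taking y_2 = (te^t, e^t)^T) makes the same check fail, since then y_2' − Py_2 = (−e^t, 0)^T ≠ 0.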

Example 2.6.2
Solve the initial value problem

y' = \begin{pmatrix} 13 & 11 \\ −11 & −9 \end{pmatrix} y, \quad y(0) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

Solution.
The characteristic equation is

\begin{vmatrix} 13 − r & 11 \\ −11 & −9 − r \end{vmatrix} = (r − 2)^2 = 0

and has a repeated root r = 2. We find an eigenvector as follows:

\begin{pmatrix} 11 & 11 \\ −11 & −11 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 11x_1 + 11x_2 \\ −11x_1 − 11x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

It follows that x_2 = −x_1. Letting x_1 = 1, an eigenvector is

x_1 = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

Therefore, one solution of y' = Py is

y_1(t) = \begin{pmatrix} e^{2t} \\ −e^{2t} \end{pmatrix}.

The second solution has the form

y_2(t) = te^{2t}x_1 + e^{2t}x_2,

where x_2 is to be determined. Substituting y_2 into the equation y' = Py we find

(1 + 2t)e^{2t}x_1 + 2e^{2t}x_2 = P(te^{2t}x_1 + e^{2t}x_2).

We can rewrite this equation as

te^{2t}(Px_1 − 2x_1) + e^{2t}(Px_2 − 2x_2 − x_1) = 0.

But the set {e^{2t}, te^{2t}} is linearly independent, so that

Px_1 − 2x_1 = 0
Px_2 − 2x_2 = x_1.

From the second equation we find

\begin{pmatrix} 11 & 11 \\ −11 & −11 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 11x_1 + 11x_2 \\ −11x_1 − 11x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

This shows that 11x_1 + 11x_2 = 1. Thus,

x_2 = \begin{pmatrix} \frac{1 − 11x_2}{11} \\ x_2 \end{pmatrix} = \frac{1}{11}\begin{pmatrix} 1 \\ 0 \end{pmatrix} − x_2\begin{pmatrix} 1 \\ −1 \end{pmatrix}.

Letting the free component x_2 = 0, we find

x_2 = \frac{1}{11}\begin{pmatrix} 1 \\ 0 \end{pmatrix}.

Hence,

y_2(t) = te^{2t}\begin{pmatrix} 1 \\ −1 \end{pmatrix} + \frac{e^{2t}}{11}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} te^{2t} + \frac{e^{2t}}{11} \\ −te^{2t} \end{pmatrix}.

Computing the Wronskian of the two solutions we find

W(0) = \begin{vmatrix} 1 & \frac{1}{11} \\ −1 & 0 \end{vmatrix} = \frac{1}{11} ≠ 0.

Therefore, the two solutions form a fundamental set of solutions and the general solution is given by

y(t) = \begin{pmatrix} e^{2t} & te^{2t} + \frac{e^{2t}}{11} \\ −e^{2t} & −te^{2t} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.

Imposing the initial condition,

y(0) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 & \frac{1}{11} \\ −1 & 0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.

Solving this system we find c_1 = −2 and c_2 = 33. Hence, the unique solution to the initial value problem is

y(t) = \begin{pmatrix} e^{2t} + 33te^{2t} \\ 2e^{2t} − 33te^{2t} \end{pmatrix}.

Multiplicity of an Eigenvalue
As you have seen from the discussion above, when an eigenvalue is repeated one worries as to whether there exist enough linearly independent eigenvectors. These considerations lead to the following definitions. Let P be an n × n matrix and

det(P − rI) = (r − r_1)^{n_1}(r − r_2)^{n_2} ··· (r − r_k)^{n_k}.

The numbers n_1, n_2, ···, n_k are called the algebraic multiplicities of the eigenvalues r_1, r_2, ···, r_k. For example, if det(P − rI) = (r − 2)^3(r − 4)^2(r + 1) then we say that 2 is an eigenvalue of P of multiplicity 3, 4 is of multiplicity 2, and −1 is of multiplicity 1. We define the geometric multiplicity of an eigenvalue to be the number of linearly independent eigenvectors corresponding to the eigenvalue.
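For the matrix of Example 2.6.2 above, both multiplicities can be checked directly: the characteristic polynomial is (r − 2)^2, so the algebraic multiplicity of r = 2 is 2, while P − 2I has rank 1, so there is only one independent eigenvector (geometric multiplicity 1). A small sketch:

```python
P = [[13.0, 11.0], [-11.0, -9.0]]
r = 2.0  # the repeated eigenvalue

# Characteristic polynomial r^2 - tr(P) r + det(P) should equal (r - 2)^2 = r^2 - 4r + 4
tr = P[0][0] + P[1][1]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
assert (tr, det) == (4.0, 4.0)

# P - rI
A = [[P[0][0] - r, P[0][1]], [P[1][0], P[1][1] - r]]

# Geometric multiplicity = 2 - rank(P - rI); for a 2x2 matrix the rank is
# 0 if all entries vanish, 1 if the determinant vanishes, and 2 otherwise.
if all(abs(a) < 1e-12 for row in A for a in row):
    rank = 0
elif abs(A[0][0] * A[1][1] - A[0][1] * A[1][0]) < 1e-9:
    rank = 1
else:
    rank = 2
assert rank == 1  # geometric multiplicity 1 < algebraic multiplicity 2
```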

Example 2.6.3
Find the algebraic and geometric multiplicities of the eigenvalues of the matrix

P = \begin{pmatrix} 2 & 1 & 1 & 1 \\ 0 & 2 & 0 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Solution.
The characteristic equation is given by

\begin{vmatrix} 2 − r & 1 & 1 & 1 \\ 0 & 2 − r & 0 & 1 \\ 0 & 0 & 2 − r & 1 \\ 0 & 0 & 0 & 3 − r \end{vmatrix} = (2 − r)^3(3 − r) = 0.

Thus, r = 2 is an eigenvalue of algebraic multiplicity 3 and r = 3 is an eigenvalue of algebraic multiplicity 1. Next, we find eigenvector(s) associated to r = 2. We have

\begin{pmatrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

Solving this system we find

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = x_1\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + x_2\begin{pmatrix} 0 \\ −1 \\ 1 \\ 0 \end{pmatrix}.

Hence, the linearly independent eigenvectors are

x_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 0 \\ −1 \\ 1 \\ 0 \end{pmatrix}.

It follows that r = 2 has geometric multiplicity 2. Similarly, we find an eigenvector associated to r = 3:

\begin{pmatrix} −1 & 1 & 1 & 1 \\ 0 & −1 & 0 & 1 \\ 0 & 0 & −1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

Solving this system we find

x_3 = \begin{pmatrix} 3 \\ 1 \\ 1 \\ 1 \end{pmatrix}.

It follows that r = 3 has geometric multiplicity 1.

Theorem 2.6.1
Let P be an n × n matrix with eigenvalue r_1. Then the geometric multiplicity of r_1 is less than or equal to the algebraic multiplicity of r_1.

If k_i is the geometric multiplicity of an eigenvalue r_i of an n × n matrix P and n_i is its algebraic multiplicity such that k_i < n_i, then we say that the eigenvalue r_i is defective (it is missing some of its eigenvectors) and we call the matrix P a defective matrix. A matrix that is not defective is said to have a full set of eigenvectors. There are important families of square matrices that always have a full set of eigenvectors, namely, real symmetric matrices and Hermitian matrices, which we discuss next.
The transpose of a matrix P, denoted by P^T, is the matrix in which the rows and columns of P have been interchanged. That is, (P^T)_{ij} = (P)_{ji}. For example, the matrix

P = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}

has the transpose

P^T = \begin{pmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{pmatrix}.

Theorem 2.6.2 (a) If P and Q are n × n matrices then (P + Q)T = PT + QT . (b) If P is an n × m matrix and Q is an m × p matrix then (PQ)T = QT PT .
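Both identities of Theorem 2.6.2 are easy to confirm on concrete matrices; the following is a small sketch with arbitrarily chosen entries (the shapes in part (b) force the order of the factors to reverse):

```python
def transpose(m):
    # (M^T)_{ij} = M_{ji}
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

P = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
Q = [[7, 8], [9, 10], [11, 12]]   # 3 x 2
R = [[1, -2], [0, 3]]             # same shape as S
S = [[4, 1], [2, -1]]

assert transpose(matmul(P, Q)) == matmul(transpose(Q), transpose(P))  # (PQ)^T = Q^T P^T
assert transpose(matadd(R, S)) == matadd(transpose(R), transpose(S))  # (R+S)^T = R^T + S^T
```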

An n × n matrix P with real entries and with the property P = P^T is called a real symmetric matrix. For example, the matrix

P = \begin{pmatrix} 1 & 2 & 3 \\ 2 & −4 & 5 \\ 3 & 5 & 6 \end{pmatrix}

is a real symmetric matrix.
Real symmetric matrices are a special case of a larger class of matrices, known as Hermitian matrices. An n × n matrix P is called Hermitian if P = \bar{P}^T, where \bar{P} is the complex conjugate of P (the conjugate of a complex matrix is the matrix of the conjugates of all its entries). For example,

P = \begin{pmatrix} 3 & 2 + i \\ 2 − i & 1 \end{pmatrix}

is a Hermitian matrix. Note that \bar{P}^T = P^T when P is a real matrix. Also note that a real symmetric matrix is a Hermitian matrix.

Theorem 2.6.3 If P is a real symmetric matrix or Hermitian matrix then its eigenvalues are all real.

The following theorem asserts that every Hermitian or real symmetric matrix has a full set of eigenvectors. Therefore, when we study the homogeneous linear first order system y' = Py, where P is an n × n real symmetric matrix, we know that all solutions forming a fundamental set are of the form e^{rt}x, where (r, x) is an eigenpair.

Theorem 2.6.4
If P is a Hermitian matrix (or a real symmetric matrix) then for each eigenvalue, the algebraic multiplicity equals the geometric multiplicity.

Practice Problems

In Problems 2.6.1 - 2.6.4, we consider the initial value problem y' = Py, y(0) = y_0.
(a) Compute the eigenvalues and the eigenvectors of P.
(b) Construct a fundamental set of solutions for the given differential equation. Use this fundamental set to construct a fundamental matrix Ψ(t).
(c) Impose the initial condition to obtain the unique solution to the initial value problem.

Problem 2.6.1

P = \begin{pmatrix} 3 & 2 \\ 0 & 3 \end{pmatrix}, \quad y_0 = \begin{pmatrix} 4 \\ 1 \end{pmatrix}.

Problem 2.6.2

P = \begin{pmatrix} 3 & 0 \\ 1 & 3 \end{pmatrix}, \quad y_0 = \begin{pmatrix} 2 \\ −3 \end{pmatrix}.

Problem 2.6.3

P = \begin{pmatrix} −3 & −36 \\ 1 & 9 \end{pmatrix}, \quad y_0 = \begin{pmatrix} 0 \\ 2 \end{pmatrix}.

Problem 2.6.4

P = \begin{pmatrix} 6 & 1 \\ −1 & 4 \end{pmatrix}, \quad y_0 = \begin{pmatrix} 4 \\ −4 \end{pmatrix}.

Problem 2.6.5
Consider the homogeneous linear system

y' = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix} y.

(a) Write the three component differential equations of y' = Py and solve these equations sequentially, first finding y_3(t), then y_2(t), and then y_1(t).
(b) Rewrite the component solutions obtained in part (a) as a single matrix equation of the form y = Ψ(t)c. Show that Ψ(t) is a fundamental matrix.

In Problems 2.6.6 - 2.6.8, find the eigenvalues and eigenvectors of P. Give the geometric and algebraic multiplicity of each eigenvalue. Does P have a full set of eigenvectors?

Problem 2.6.6

P = \begin{pmatrix} 5 & 0 & 0 \\ 1 & 5 & 0 \\ 1 & 0 & 5 \end{pmatrix}.

Problem 2.6.7

P = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}.

Problem 2.6.8

P = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 1 & 2 \end{pmatrix}.

Problem 2.6.9
Let P be a 2 × 2 real matrix with an eigenvalue r_1 = a + ib, where b ≠ 0. Can P have a repeated eigenvalue? Can P be defective?

Problem 2.6.10
Determine the numbers x and y so that the following matrix is real and symmetric.

P = \begin{pmatrix} 0 & 1 & x \\ y & 2 & 2 \\ 6 & 2 & 7 \end{pmatrix}.

Problem 2.6.11
Determine the numbers x and y so that the following matrix is Hermitian.

P = \begin{pmatrix} 2 & x + 3i & 7 \\ 9 − 3i & 5 & 2 + yi \\ 7 & 2 + 5i & 3 \end{pmatrix}.

Answer Key

Section 1.1

1.1.1 (a) \begin{pmatrix} 2t − 2 − 3t^2 & 2t^2 + 3t \\ 4 & 2 − 2t − 3t^2 \end{pmatrix}.
(b) \begin{pmatrix} 2 & 2t^2 + t + 2 \\ −4 & −2 \end{pmatrix}.
(c) \begin{pmatrix} −1 \\ 1 \end{pmatrix}.
(d) −(t^3 + 3t^2 + 2t).

1.1.2 A^{−1}(t) = \frac{1}{2t+1}\begin{pmatrix} t + 1 & −t \\ −t & t + 1 \end{pmatrix}.

1.1.3 A^{−1}(t) = \frac{1}{\sin 2t}\begin{pmatrix} \cos t & \cos t \\ −\sin t & \sin t \end{pmatrix}.

1.1.4 \begin{pmatrix} 1 & 0 & 3 \\ 1 & 1 & 0 \end{pmatrix}.

1.1.5 \begin{pmatrix} 0 & 0 \\ −2 & 1 \end{pmatrix}.

1.1.6 A'(t) = \begin{pmatrix} \cos t & 3 \\ 2t & 0 \end{pmatrix} and A''(t) = \begin{pmatrix} −\sin t & 0 \\ 2 & 0 \end{pmatrix}.

1.1.7

y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}, \quad A(t) = \begin{pmatrix} t^2 & 3 \\ \sin t & t \end{pmatrix}, \quad g(t) = \begin{pmatrix} \sec t \\ −5 \end{pmatrix}.


1.1.8 A(t) = \begin{pmatrix} t^2 + 2 & t + 5 \\ \sin t + 1 & t^3 − 2 \end{pmatrix}.

1.1.9 A(t) = \begin{pmatrix} \frac{t^2}{2} − t + 1 & \frac{t^3}{6} + 2t + 1 \\ −4 & 3t + 1 \end{pmatrix}.

1.1.10 \begin{pmatrix} e^t − 1 & 3t^2 \\ \frac{\sin 2πt}{2π} & \frac{1 − \cos 2πt}{2π} \end{pmatrix}.

1.1.11 A(t) = \begin{pmatrix} 0 & t \\ 0 & 0 \end{pmatrix}.

1.1.12 (a) A(t) = \begin{pmatrix} 1 & t \\ t^2 & 0 \end{pmatrix} (b) \frac{d}{dt}A^2(t) = A(t)A'(t) + A'(t)A(t).

1.1.13 Let x_1 = y, x_2 = y', x_3 = y''. Then by letting

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ −\sin 2t & 3t & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 0 \\ 7e^{−t} \end{pmatrix},

the differential equation can be represented by the first order system x'(t) = Ax(t) + b(t).

1.1.14 By letting x_1 = y and x_2 = y' we have

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & −1 \\ −2 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ t − 1 \end{pmatrix}.

1.1.15 By letting x_1 = y and x_2 = y' we have

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & −1 \\ 4 & 4 \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

1.1.16 By letting x_1 = y and x_2 = y' we have

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & −1 \\ t − 1 & k \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

1.1.17 If we write the problem in the matrix form

x' + Ax = b, \quad x(0) = x_0,

then

A = \begin{pmatrix} 0 & −1 \\ t & −5 \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} y \\ y' \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 3t^2 \end{pmatrix}, \quad x_0 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.

1.1.18 y'' − 2y' − 5y = 0.

Section 1.2

1.2.1 (a) For all 0 ≤ x < 1 we have lim_{n→∞} f_n(x) = lim_{n→∞} x^n = 0. Also, lim_{n→∞} f_n(1) = 1. Hence, the sequence {f_n}_{n=1}^∞ converges pointwise to f.
(b) Suppose the contrary. Let ε = 1/2. Then there exists a positive integer N such that for all n ≥ N we have

|f_n(x) − f(x)| < 1/2

for all x ∈ [0, 1]. In particular, we have

|f_N(x) − f(x)| < 1/2

for all x ∈ [0, 1]. Choose (0.5)^{1/N} < x < 1. Then |f_N(x) − f(x)| = x^N > 0.5 = ε, which is a contradiction. Hence, the given sequence does not converge uniformly.

1.2.2 For every real number x, we have

lim_{n→∞} f_n(x) = lim_{n→∞} \frac{nx + x^2}{n^2} = lim_{n→∞} \frac{x}{n} + lim_{n→∞} \frac{x^2}{n^2} = 0.

Thus, {f_n}_{n=1}^∞ converges pointwise to the zero function on ℝ.

1.2.3 For every real number x, we have

−\frac{1}{\sqrt{n + 1}} ≤ f_n(x) ≤ \frac{1}{\sqrt{n + 1}}.

Moreover,

lim_{n→∞} \frac{1}{\sqrt{n + 1}} = 0.

Applying the squeeze rule for sequences, we obtain

lim_{n→∞} f_n(x) = 0

for all x in ℝ. Thus, {f_n}_{n=1}^∞ converges pointwise to the zero function on ℝ.

1.2.4 First of all, observe that f_n(0) = 0 for every n in ℕ. So the sequence {f_n(0)}_{n=1}^∞ is constant and converges to zero. Now suppose 0 < x < 1. Then n^2x^n = n^2e^{n \ln x}. Since \ln x < 0 when 0 < x < 1, it follows that

lim_{n→∞} f_n(x) = 0 for 0 < x < 1.

Finally, f_n(1) = n^2 for all n. So,

lim_{n→∞} f_n(1) = ∞.

Therefore, {f_n}_{n=1}^∞ is not pointwise convergent on [0, 1].

1.2.5 For −π/2 ≤ x < 0 and 0 < x ≤ π/2 we have

lim_{n→∞} (\cos x)^n = 0.

For x = 0 we have f_n(0) = 1 for all n in ℕ. Therefore, {f_n}_{n=1}^∞ converges pointwise to

f(x) = \begin{cases} 0 & \text{if } −π/2 ≤ x < 0 \text{ and } 0 < x ≤ π/2 \\ 1 & \text{if } x = 0. \end{cases}

1.2.6 Let ε > 0 be given. Let N be a positive integer such that N > 1/ε. Then for n ≥ N

\left| x − \frac{x}{n} − x \right| = \frac{|x|}{n} < \frac{1}{n} ≤ \frac{1}{N} < ε.

Thus, the given sequence converges uniformly (and pointwise) to the function f(x) = x.

1.2.7 (a) The pointwise limit is

f(x) = \begin{cases} 0 & \text{if } 0 ≤ x < 1 \\ 1/2 & \text{if } x = 1 \\ 1 & \text{if } 1 < x ≤ 2. \end{cases}

(b) The convergence cannot be uniform because if it were, f would have to be continuous.

1.2.8 (a) Let ε > 0 be given. Note that

\left| f_n(x) − \frac{1}{2} \right| = \frac{|2\cos x − \sin^2 x|}{2(2n + \sin^2 x)} ≤ \frac{3}{4n}.

Since lim_{n→∞} \frac{3}{4n} = 0, we can find a positive integer N such that if n ≥ N then \frac{3}{4n} < ε. Thus, for n ≥ N and all x ∈ ℝ we have

\left| f_n(x) − \frac{1}{2} \right| ≤ \frac{3}{4n} < ε.

This shows that f_n → \frac{1}{2} uniformly on ℝ and also on [2, 7].
(b) We have

lim_{n→∞} \int_2^7 f_n(x)\,dx = \int_2^7 lim_{n→∞} f_n(x)\,dx = \int_2^7 \frac{1}{2}\,dx = \frac{5}{2}.

1.2.9 We have proved earlier that this sequence converges pointwise to the discontinuous function

f(x) = \begin{cases} 0 & \text{if } −π/2 ≤ x < 0 \text{ and } 0 < x ≤ π/2 \\ 1 & \text{if } x = 0. \end{cases}

Therefore, uniform convergence cannot occur for this given sequence.

1.2.10 (a) Using the squeeze rule we find

lim_{n→∞} \sup\{|f_n(x)| : 2 ≤ x ≤ 5\} = 0.

Thus, {f_n}_{n=1}^∞ converges uniformly to the zero function.
(b) We have

lim_{n→∞} \int_2^5 f_n(x)\,dx = \int_2^5 0\,dx = 0.

1.2.11 We have

\left| \frac{\cos(3^n x)}{2^n} \right| ≤ \frac{1}{2^n} = M_n.

Since \sum_{n=1}^∞ \frac{1}{2^n} = \frac{1}{2} \cdot \frac{1}{1 − \frac{1}{2}} = 1, by the Weierstrass M-test the given series converges uniformly on ℝ.
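The M-test bound can be illustrated numerically: the tail of the series \sum_{n=1}^∞ \cos(3^n x)/2^n beyond the N-th term is at most \sum_{n>N} 2^{−n} = 2^{−N} for every x, so the partial sums settle down at a rate independent of x. A small sketch:

```python
import math

def partial_sum(x, n_terms):
    # Partial sums of sum_{n=1}^inf cos(3^n x) / 2^n
    return sum(math.cos(3**n * x) / 2**n for n in range(1, n_terms + 1))

# Tail bound from the M-test: |S(x) - S_N(x)| <= 2^{-N}, uniformly in x.
x = 0.7
gap = abs(partial_sum(x, 30) - partial_sum(x, 20))
assert gap <= 2.0**-20
```

At x = 0 every cosine equals 1, so `partial_sum(0, N)` reduces to the geometric sum \sum_{n=1}^N 2^{−n}, approaching 1 as the computation above predicts.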

1.2.12 Since n^4 + x^4 ≥ n^4 for all x ∈ ℝ, we have

\frac{n}{n^4 + x^4} ≤ \frac{n}{n^4} = \frac{1}{n^3} = M_n.

Since \sum_{n=1}^∞ \frac{1}{n^3} is a p−series with p = 3 > 1, it is convergent. By the Weierstrass M-test the given series converges uniformly on ℝ.

1.2.13 Since $n^2 + x^2 \ge n^2$ for all $x \in \mathbb{R}$, we have
$$\frac{1}{n^2 + x^2} \le \frac{1}{n^2} = M_n.$$
Since $\sum_{n=1}^{\infty} \frac{1}{n^2}$ is a $p$-series with $p = 2 > 1$, it is convergent. By the Weierstrass M-test the given series converges uniformly on $\mathbb{R}$.
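The domination in 1.2.13 is easy to watch numerically: every partial sum, at every $x$, stays below $\sum_{n\ge 1} 1/n^2 = \pi^2/6$. A minimal sketch:

```python
import math

# Weierstrass M-test check for Problem 1.2.13: partial sums of
# sum_{n>=1} 1/(n^2 + x^2) are bounded, uniformly in x, by sum 1/n^2 = pi^2/6.

def partial_sum(x, N):
    return sum(1.0 / (n * n + x * x) for n in range(1, N + 1))

for x in [-10.0, 0.0, 0.5, 3.0]:
    for N in [10, 100, 1000]:
        assert partial_sum(x, N) <= math.pi ** 2 / 6
```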

1.2.14 We have $|a_n \cos(nx)| \le |a_n| = M_n$. Since $\sum_{n=1}^{\infty} |a_n|$ is convergent, the Weierstrass M-test implies that the given series converges uniformly on $\mathbb{R}$.

Section 1.3

1.3.1 Letting $x_1 = y$, $x_2 = y'$, and $x_3 = y''$ we find
$$\mathbf{A}(t) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\cos t & -\ln(t+1) & \frac{1}{t^2-9} \end{pmatrix}, \qquad \mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad \mathbf{b}(t) = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}.$$

1.3.2 Letting $x_1 = y$, $x_2 = y'$, and $x_3 = y''$ we find
$$\mathbf{A}(t) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\tan t & -\frac{1}{t+1} & 0 \end{pmatrix}, \qquad \mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad \mathbf{b}(t) = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \mathbf{x}(0) = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}.$$

1.3.3 Letting $x_1 = y$, $x_2 = y'$, and $x_3 = y''$ we find
$$\mathbf{A}(t) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\cos t & -\ln(t^2+1) & \frac{1}{t^2+9} \end{pmatrix}, \qquad \mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad \mathbf{b}(t) = \begin{pmatrix} 0 \\ 0 \\ t^3 + t - 5 \end{pmatrix}, \qquad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}.$$

1.3.4
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad \mathbf{A} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \sin 2t & 3t & 0 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 0 \\ 0 \\ 7e^{-t} \end{pmatrix}.$$

1.3.5
$$\mathbf{x}(t) = \begin{pmatrix} y \\ y' \\ y'' \end{pmatrix} = \begin{pmatrix} \frac{14}{33}e^{-4t} + \frac{13}{15}e^{2t} - \frac{16}{55}e^{7t} \\ -\frac{56}{33}e^{-4t} + \frac{26}{15}e^{2t} - \frac{112}{55}e^{7t} \\ \frac{224}{33}e^{-4t} + \frac{52}{15}e^{2t} - \frac{784}{55}e^{7t} \end{pmatrix}.$$

1.3.6 $y^{(4)} - y = 0$.

1.3.7 $y''' + 25y' = 0$.

1.3.8
$$\mathbf{x}_0(t) = \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_1(t) = \begin{pmatrix} 1 + 3t \\ 3 \\ -\sin t - 3(t+1)\ln(t+1) + 3t \end{pmatrix}.$$

1.3.9
$$\mathbf{x}_0(t) = \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_1(t) = \begin{pmatrix} 1 + 3t \\ 3 \\ -\sin t - 3t\ln(t^2+1) + 6t - 6\tan^{-1} t \end{pmatrix}.$$

1.3.10 Basis of Induction: For $N = 1$, we have
$$\|\mathbf{x}_1 - \mathbf{x}_0\| \le M(t - t_0) = MK^{1-1}\frac{(t - t_0)^1}{1!}.$$
Induction Hypothesis: Suppose
$$\|\mathbf{x}_k - \mathbf{x}_{k-1}\| \le MK^{k-1}\frac{(t - t_0)^k}{k!}$$
for $k = 1, 2, \cdots, N$.
Induction Step: We want to show that
$$\|\mathbf{x}_{N+1} - \mathbf{x}_N\| \le MK^{N}\frac{(t - t_0)^{N+1}}{(N+1)!}.$$
Indeed, we have
$$\|\mathbf{x}_{N+1} - \mathbf{x}_N\| \le K\int_{t_0}^{t} \|\mathbf{x}_N - \mathbf{x}_{N-1}\|\,ds \le K\int_{t_0}^{t} MK^{N-1}\frac{(s - t_0)^N}{N!}\,ds = MK^{N}\frac{(t - t_0)^{N+1}}{(N+1)!}.$$
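The bound above can be watched in action on the simplest scalar example (an illustration, not one of the exercises): for $x' = x$, $x(0) = 1$, with $M = K = 1$ and $t_0 = 0$, the Picard iterates are exactly the Taylor partial sums of $e^t$:

```python
import math

# Picard iterates for x' = x, x(0) = 1 (illustrative example; K = 1, t0 = 0).
# Each iterate is a polynomial stored by its coefficient list, and
# x_{n+1}(t) = 1 + integral_0^t x_n(s) ds is computed term by term.

def next_iterate(coeffs):
    integral = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integral[0] = 1.0  # add the initial condition x(0) = 1
    return integral

coeffs = [1.0]  # x_0(t) = 1
for _ in range(6):
    coeffs = next_iterate(coeffs)

# x_6(t) = sum_{k=0}^{6} t^k / k!, the 6th Taylor partial sum of e^t.
assert all(abs(c - 1.0 / math.factorial(k)) < 1e-12 for k, c in enumerate(coeffs))

# At t = 1 the error is below 1/7! + 1/8! + ... < 1/6!, consistent with the
# increment bound ||x_k - x_{k-1}|| <= t^k / k! on [0, t].
assert abs(sum(coeffs) - math.e) < 1.0 / math.factorial(6)
```

Each increment $x_{k} - x_{k-1}$ is the single monomial $t^k/k!$, which is the bound of Problem 1.3.10 with $M = K = 1$.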

1.3.11 This follows from Gronwall’s lemma with h(t) = K.

1.3.12 This follows from Gronwall’s lemma with C = 0 and h(t) = 1.

1.3.13 −1 < t < 3.

1.3.14 $-1 < t < \frac{\pi}{2}$.

1.3.15 −∞ < t < ∞.

1.3.16 0 < t < 2.

1.3.17 r = −1, 1, 2.

Section 1.4

1.4.1 W (t) = 2.

1.4.2 W (t) = 1.

1.4.3 $W(t) = \frac{4}{t}$.

1.4.4 $y(t) = 2 - e^{t} + 2e^{-t}$.

1.4.5 y(t) = −(π + 1) + 4t + cos t + 3 sin t.

1.4.6 $y(t) = 2 + 4\ln t - t^2$.

1.4.7 (a) For the given differential equation $p_{n-1}(t) = p_2(t) = 0$, so Abel's theorem predicts $W(t) = W(t_0)$, a constant. For $t_0 = -1$ we have $W(t) = W(-1)e^{-\int_{-1}^{t} 0\,ds} = W(-1)$. From Problem 1.4.1, $W(t) = W(-1) = 2$.
(b) For the given differential equation $p_{n-1}(t) = p_3(t) = 0$, so Abel's theorem predicts $W(t) = W(t_0)$, a constant. For $t_0 = 1$ we have $W(t) = W(1)e^{-\int_{1}^{t} 0\,ds} = W(1)$. From Problem 1.4.2, $W(t) = W(1) = 1$.
(c) For the given differential equation $p_{n-1}(t) = p_2(t) = \frac{1}{t}$, so Abel's theorem predicts $W(t) = W(2)e^{-\int_{2}^{t}\frac{ds}{s}} = W(2)e^{-\ln(t/2)} = \frac{2}{t}W(2)$. From Problem 1.4.3, $W(t) = \frac{4}{t} = \frac{2}{t}W(2)$ with $W(2) = 2$.
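Part (c) is easy to double-check numerically. A small sketch of Abel's identity with $p_2(t) = 1/t$, $t_0 = 2$, and $W(2) = 2$:

```python
import math

# Abel's identity check for Problem 1.4.7(c):
# W(t) = W(2) * exp(-int_2^t ds/s) = W(2) * (2/t), and with W(2) = 2 this is 4/t.

def W_abel(t, t0=2.0, W0=2.0):
    return W0 * math.exp(-(math.log(t) - math.log(t0)))

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    assert abs(W_abel(t) - 4.0 / t) < 1e-12
```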

1.4.8 W (t) = 0.

1.4.9 W (t) = 3.

1.4.10 (a) α = 4 and β = −4 (b) Solution is bounded for t ≥ 0 when β = −4 and α is any real number. There are no values of α and β where the solution is bounded in −∞ < t < ∞.

1.4.11 (a) The Wronskian of $1, t^2, t^4$ is
$$W(t) = \begin{vmatrix} 1 & t^2 & t^4 \\ 0 & 2t & 4t^3 \\ 0 & 2 & 12t^2 \end{vmatrix} = 16t^3.$$
Since $0$ is in the interval $-1 < t < 1$ and $W(0) = 0$, $\{1, t^2, t^4\}$ cannot be a fundamental set, and therefore the general solution cannot be a linear combination of $1, t^2, t^4$.
(b) First notice that $y = 1$ is a solution for any $p_1$ and $p_2$. If $y = t^2$ is a solution, then substitution into the differential equation leads to $tp_1 + p_2 = 0$. Since $y = t^4$ is also a solution, substituting into the equation we find $t^2p_1 + 3tp_2 + 6 = 0$. Solving for $p_1$ and $p_2$ we find $p_1(t) = \frac{3}{t^2}$ and $p_2(t) = -\frac{3}{t}$. Note that both functions fail to be continuous at $t = 0$.

1.4.12 $\{1,\ t - 1,\ \frac{1}{2}(t-1)^2\}$.

Section 1.5

1.5.1 Linearly independent.

1.5.2 Linearly dependent.

1.5.3 Linearly independent.

1.5.4 α 6= ±1.

1.5.5 (a) Linearly independent (b) Linearly dependent (c) Linearly independent.

1.5.6 (a) $\{y_1(t), y_2(t), y_3(t)\} = \{1 - 2t,\ t + 2,\ e^{-(t+2)}\}$ is a solution set.
(b)
$$\mathbf{A} = \begin{pmatrix} 1 & -2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & e^{-2} \end{pmatrix}$$
(c) Since $\det(\mathbf{A}) = 5e^{-2} \ne 0$, $\{y_1(t), y_2(t), y_3(t)\} = \{1 - 2t,\ t + 2,\ e^{-(t+2)}\}$ is a fundamental set of solutions.

1.5.7 (a) $\{y_1(t), y_2(t), y_3(t)\} = \{2t^2 - 1,\ 3,\ \ln t^3\}$ is a solution set.
(b)
$$\mathbf{A} = \begin{pmatrix} -1 & 0 & 2 \\ 3 & 0 & 0 \\ 0 & 3 & 0 \end{pmatrix}$$
(c) Since $\det(\mathbf{A}) = 18 \ne 0$, $\{y_1(t), y_2(t), y_3(t)\} = \{2t^2 - 1,\ 3,\ \ln t^3\}$ is a fundamental set of solutions.
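The determinant in 1.5.7(c) can be verified by a quick cofactor expansion. A short sketch — the basis $\{1, \ln t, t^2\}$ behind the coordinate matrix is an assumption consistent with the rows shown:

```python
# Determinant check for Problem 1.5.7: coordinate matrix of
# {2t^2 - 1, 3, ln t^3} relative to the (assumed) basis {1, ln t, t^2}.

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[-1, 0, 2],   # 2t^2 - 1 = -1*1 + 0*ln t + 2*t^2
     [ 3, 0, 0],   # 3        =  3*1
     [ 0, 3, 0]]   # ln t^3   =  3*ln t

assert det3(A) == 18  # nonzero, so the set is a fundamental set
```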

Section 1.6

1.6.1 $y(t) = c_1e^{-t} + c_2te^{-t} + c_3e^{t}$.

1.6.2 $y(t) = c_1e^{-t/2} + c_2te^{-t/2} + c_3e^{t/2} + c_4te^{t/2}$.

1.6.3 $y(t) = c_1e^{t} + c_2e^{-t/2}\cos\frac{\sqrt{3}\,t}{2} + c_3e^{-t/2}\sin\frac{\sqrt{3}\,t}{2}$.

1.6.4 $y(t) = c_1e^{2t} + c_2e^{-2t} + c_3\cos(2t) + c_4\sin(2t)$.

1.6.5 $y(t) = te^{-t} + t^2e^{-t}$.

1.6.6 $y(t) = c_1e^{t} + c_2e^{-2t} + c_3te^{-2t}$.

1.6.7 $y(t) = c_1e^{2t} + c_2\cos t + c_3\sin t$.

1.6.8 $y^{(4)} + 5y'' + 4y = 0$.
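The equation in 1.6.8 has characteristic polynomial $r^4 + 5r^2 + 4 = (r^2+1)(r^2+4)$, whose roots $\pm i$, $\pm 2i$ generate solutions built from $\cos t$, $\sin t$, $\cos 2t$, $\sin 2t$. A quick root check:

```python
# Roots of the characteristic polynomial of y'''' + 5y'' + 4y = 0 (Problem 1.6.8):
# r^4 + 5r^2 + 4 = (r^2 + 1)(r^2 + 4) vanishes at r = +/- i and r = +/- 2i.

def char_poly(r):
    return r ** 4 + 5 * r ** 2 + 4

for root in [1j, -1j, 2j, -2j]:
    assert abs(char_poly(root)) < 1e-12
```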

1.6.9 $y^{(4)} + 9y'' = 0$.

1.6.10 $y^{(4)} + 4y = 0$.

1.6.11 $n = 5$ and $\{1,\ t,\ \cos t,\ \sin t,\ e^{t}\}$.

1.6.12 $n = 8$ and $\{\sin t,\ \cos t,\ t\sin t,\ t\cos t,\ t^2\sin t,\ t^2\cos t,\ e^{t}\sin t,\ e^{t}\cos t\}$.

1.6.13 $n = 4$ and $\{1,\ t,\ t^2,\ e^{2t}\}$.

Section 1.7

1.7.1 $\sum_{n=0}^{\infty}(-1)^n\frac{(t-1)^{2n}}{(2n)!}$.

1.7.2 $\sum_{n=0}^{\infty}(-1)^n\frac{(t-1)^{2n+3}}{(2n)!}$.

1.7.3 $\sum_{n=0}^{\infty}\frac{(n+2)(t+5)^{2n+3}}{n!}$.

1.7.4 R = ∞.

1.7.5 $R = \frac{1}{5}$.

1.7.6 R = 2.

1.7.7 R = 1.

1.7.8 R = ∞.

1.7.9 R = 1.

1.7.10 $R = \frac{1}{4}$. The interval of convergence is $-\frac{1}{4} \le t \le \frac{1}{4}$.

1.7.11 (a) R = 1. The power series converges for |t| < 1 and diverges for |t| > 1. (b) Converges at t = 1 and diverges at t = −1. The interval of convergence is −1 < t ≤ 1.

1.7.12 $R = \frac{1}{2}$. The series converges for $t = -\frac{1}{2}$ and diverges for $t = \frac{1}{2}$.

1.7.13 (a) $\sum_{n=0}^{\infty}\frac{t^{n+1}}{2^n}$ with radius of convergence $R = 1$. (b) $g(t) = f'(t) = \sum_{n=0}^{\infty}\frac{n+1}{2^n}t^n$ with radius of convergence $1$.

1.7.14 $g(t) = f'(t) = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}t^{2n}$ with radius of convergence $R = \infty$.
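The series for $g$ in 1.7.14 is the Maclaurin series of $\cos t$, so its partial sums should approach $\cos t$ for every $t$, consistent with $R = \infty$. A numeric sketch:

```python
import math

# Partial sums of g(t) = sum_{n>=0} (-1)^n t^(2n) / (2n)!  (Problem 1.7.14),
# the Maclaurin series of cos t; the radius of convergence is infinite.

def g_partial(t, N):
    return sum((-1) ** n * t ** (2 * n) / math.factorial(2 * n) for n in range(N + 1))

for t in [0.0, 1.0, -2.5, 4.0]:
    assert abs(g_partial(t, 30) - math.cos(t)) < 1e-9
```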

1.7.15 (a) $g'(t) = \frac{1}{1-t}$, $|t| < 1$. (b) $g(t) = \ln\left(\frac{1}{1-t}\right)$.

Section 1.8

1.8.1 Every real number is an ordinary point.

1.8.2 The points $1$, $-1$, and $0$ are singular points of the equation; any other real number is an ordinary point of the equation.

1.8.3 The singular points are t = ±2 and any other point is an ordinary point.

1.8.4 R = 2.

1.8.5 $y(t) = a_0\left(1 - t^2 + \frac{1}{4}t^4 - \frac{1}{60}t^6 + \cdots\right) + a_1\left(t - \frac{1}{2}t^3 + \frac{3}{40}t^5 - \frac{1}{1680}t^7 + \cdots\right)$.

1.8.6 $y(t) = a_0\left(1 + \frac{1}{12}t^4 + \frac{1}{90}t^6 + \cdots\right) + a_1\left(t + \frac{1}{6}t^3 + \frac{3}{40}t^5 + \frac{13}{1008}t^7 + \cdots\right)$.

1.8.7 $y(t) = a_0\left(1 - \frac{1}{2}t^2 - \frac{1}{6}t^3 + \frac{1}{12}t^4 + \frac{3}{40}t^5 + \cdots\right) + a_1\left(t + \frac{1}{2}t^2 - \frac{1}{8}t^4 - \frac{1}{40}t^5 + \cdots\right)$.

Section 1.9

1.9.1 $a = -4$, $b = 8$, $c = -8$, and $g(t) = t^3(4t^{-3}) - 4t^2(-2t^{-2}) + 8t(2t^{-1}) - 8(2\ln t) = 28 - 16\ln t$.
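The arithmetic in 1.9.1 can be confirmed in Python. The setup is an assumption read off from the displayed terms: substituting $y = 2\ln t$ into $t^3y''' + at^2y'' + bty' + cy$ with $a = -4$, $b = 8$, $c = -8$:

```python
import math

# Consistency check for Problem 1.9.1 (assumed setup: substitute y = 2 ln t
# into t^3 y''' + a t^2 y'' + b t y' + c y with a = -4, b = 8, c = -8).
# For y = 2 ln t:  y' = 2/t,  y'' = -2/t^2,  y''' = 4/t^3.

def g(t):
    y, y1, y2, y3 = 2 * math.log(t), 2 / t, -2 / t ** 2, 4 / t ** 3
    return t ** 3 * y3 + (-4) * t ** 2 * y2 + 8 * t * y1 + (-8) * y

for t in [0.5, 1.0, 3.0]:
    assert abs(g(t) - (28 - 16 * math.log(t))) < 1e-12
```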

1.9.2 $a = -1$, $b = c = 0$, and $g(t) = -32\cos 2t - 2(-16\sin 2t) = -32\cos 2t + 32\sin 2t$.

1.9.3 $y(t) = c_1e^{2t} + c_2te^{2t} + c_3t^2e^{2t} + \frac{8\sqrt{2}}{105}t^{7/2}e^{2t}$.

1.9.4 $y(t) = c_1t^4 + c_2t^{1/2} + c_3t - \frac{1}{3}t^4 + \frac{1}{3}t^{1/2} - \frac{16}{21}t^2 = c_1t^4 + c_2t^{1/2} + c_3t - \frac{16}{21}t^2$.

1.9.5 $y(t) = c_1t + c_2t^2 + c_3t^3 + \frac{t^3}{2}\ln t + \frac{3}{4}t^3 = c_1t + c_2t^2 + c_3t^3 + \frac{t^3}{2}\ln t$.

1.9.6 $y(t) = c_1 + c_2t + c_3\sin(2t) + c_4\cos(2t) + 2t^2 + 3e^{t}$.

1.9.7 $y(t) = c_1e^{2t} + c_2\cos t + c_3\sin t + \cos 2t + \sin 2t$.

1.9.8 $y(t) = c_1 + c_2e^{-t} + c_3e^{t} - 4t - \sin t$.

1.9.9 $y(t) = c_1 + c_2e^{-t} + c_3e^{t} - 2te^{t}$.

1.9.10 $y(t) = c_1 + c_2t + c_3e^{t} - \frac{1}{3}e^{-2t}$.

Section 2.1

2.1.1 −2 < t < 2.

2.1.2 Taking derivatives we find $y_1'(t) = (c_1 + c_2)e^{t}\cos t + (c_2 - c_1)e^{t}\sin t$ and $y_2'(t) = -(c_1 + c_2)e^{t}\sin t + (c_2 - c_1)e^{t}\cos t$. But $y_1 + y_2 = (c_1 + c_2)e^{t}\cos t + (c_2 - c_1)e^{t}\sin t = y_1'(t)$ and $-y_1 + y_2 = -(c_1 + c_2)e^{t}\sin t + (c_2 - c_1)e^{t}\cos t = y_2'(t)$.

2.1.3 (b) $c_1 = 2$ and $c_2 = 3$. The unique solution to the system is
$$\mathbf{y}(t) = \begin{pmatrix} 2e^{5t} - 3e^{-t} \\ 2e^{5t} + 6e^{-t} \end{pmatrix}.$$

2.1.4
$$\mathbf{y}' = \begin{pmatrix} y \\ y' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -\sqrt{t}\sec t & 3t\sec t \end{pmatrix}\begin{pmatrix} y \\ y' \end{pmatrix} + \begin{pmatrix} 0 \\ (t^2+1)\sec t \end{pmatrix} = \mathbf{P}(t)\mathbf{y} + \mathbf{g}(t).$$
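The conversion pattern above is mechanical. A sketch in Python, assuming the scalar equation behind the displayed system is $y'' - (3t\sec t)y' + (\sqrt{t}\sec t)y = (t^2+1)\sec t$:

```python
import math

# Companion form for Problem 2.1.4 (assumed scalar equation:
# y'' - (3t sec t) y' + (sqrt(t) sec t) y = (t^2 + 1) sec t).
# With y = (y, y')^T the system reads y' = P(t) y + g(t).

def P(t):
    sec = 1.0 / math.cos(t)
    return [[0.0, 1.0],
            [-math.sqrt(t) * sec, 3.0 * t * sec]]

def g(t):
    return [0.0, (t ** 2 + 1.0) / math.cos(t)]

# The first row records the identity (y)' = y'; the second row encodes the equation.
t = 1.0
assert P(t)[0] == [0.0, 1.0]
assert abs(P(t)[1][1] - 3.0 / math.cos(1.0)) < 1e-12
assert abs(g(t)[1] - 2.0 / math.cos(1.0)) < 1e-12
```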

2.1.5
$$\mathbf{y}' = \begin{pmatrix} y \\ y' \\ y'' \end{pmatrix}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ t & -\cos t & 2 \end{pmatrix}\begin{pmatrix} y \\ y' \\ y'' \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ e^{3t} \end{pmatrix} = \mathbf{P}(t)\mathbf{y}(t) + \mathbf{g}(t).$$

2.1.6 $y'' - 2y' + 3y = 2\cos 2t$, $y(-1) = 1$, $y'(-1) = 4$.

2.1.7 $y^{(4)} = y' + y''\sin y + (y'')^2$, $y(1) = y'(1) = 0$, $y''(1) = -1$, $y'''(1) = 2$.

2.1.8
$$\mathbf{y}' = \begin{pmatrix} y' \\ y'' \\ z' \\ z'' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & t \\ 0 & 0 & 0 & 1 \\ 2t & 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} y \\ y' \\ z \\ z' \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \mathbf{P}(t)\mathbf{y} + \mathbf{g}(t).$$

2.1.9
$$\mathbf{y}' = \begin{pmatrix} y' \\ y'' \\ z' \\ z'' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 4 & 7 & -8 & 6 \\ 0 & 0 & 0 & 1 \\ 3 & -6 & 2 & 5 \end{pmatrix}\begin{pmatrix} y \\ y' \\ z \\ z' \end{pmatrix} + \begin{pmatrix} 0 \\ t^2 \\ 0 \\ -\sin t \end{pmatrix} = \mathbf{P}(t)\mathbf{y} + \mathbf{g}(t).$$

Section 2.2

2.2.1 (a)
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}' = \begin{pmatrix} -3 & -2 \\ 4 & 3 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$

2.2.2 (a)
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -\frac{2}{t^2} & \frac{2}{t} \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$

2.2.3 (a)
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}' = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 1 & 2 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}$$

2.2.4 (b) The Wronskian is given by

$$W(t) = \begin{vmatrix} 2e^{3t} - 4e^{-t} & 4e^{3t} + 2e^{-t} \\ 3e^{3t} - 10e^{-t} & 6e^{3t} + 5e^{-t} \end{vmatrix} = 20e^{2t}.$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(c) The general solution is

$$\mathbf{y}(t) = c_1\mathbf{y}_1 + c_2\mathbf{y}_2 = \begin{pmatrix} 2e^{3t} - 4e^{-t} & 4e^{3t} + 2e^{-t} \\ 3e^{3t} - 10e^{-t} & 6e^{3t} + 5e^{-t} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$

(d) We have
$$\begin{pmatrix} -2 & 6 \\ -7 & 11 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

Solving this system we find c1 = −0.3, c2 = −0.1. Therefore the solution to the initial value problem is

$$\mathbf{y}(t) = -0.3\begin{pmatrix} 2e^{3t} - 4e^{-t} \\ 3e^{3t} - 10e^{-t} \end{pmatrix} - 0.1\begin{pmatrix} 4e^{3t} + 2e^{-t} \\ 6e^{3t} + 5e^{-t} \end{pmatrix} = \begin{pmatrix} -e^{3t} + e^{-t} \\ -1.5e^{3t} + 2.5e^{-t} \end{pmatrix}.$$
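The numbers in 2.2.4 can be confirmed quickly: the determinant simplifies to $20e^{2t}$, and Cramer's rule on the $2\times 2$ system gives $c_1 = -0.3$, $c_2 = -0.1$. A sketch:

```python
import math

# Checks for Problem 2.2.4: the Wronskian of y1, y2 and the constants
# solving the initial-value system.

def wronskian(t):
    a = 2 * math.exp(3 * t) - 4 * math.exp(-t)
    b = 4 * math.exp(3 * t) + 2 * math.exp(-t)
    c = 3 * math.exp(3 * t) - 10 * math.exp(-t)
    d = 6 * math.exp(3 * t) + 5 * math.exp(-t)
    return a * d - b * c

for t in [-1.0, 0.0, 2.0]:
    assert abs(wronskian(t) - 20 * math.exp(2 * t)) < 1e-6

# Cramer's rule on [-2 6; -7 11] c = (0, 1)^T.
det = (-2) * 11 - 6 * (-7)
c1 = (0 * 11 - 6 * 1) / det
c2 = ((-2) * 1 - 0 * (-7)) / det
assert abs(c1 - (-0.3)) < 1e-12 and abs(c2 - (-0.1)) < 1e-12
```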

2.2.5 (b) The Wronskian is given by

$$W(t) = \begin{vmatrix} -5e^{-2t}\cos 3t & -5e^{-2t}\sin 3t \\ e^{-2t}(\cos 3t - 3\sin 3t) & e^{-2t}(3\cos 3t + \sin 3t) \end{vmatrix} = -15e^{-4t}$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(c) The general solution is

$$\mathbf{y}(t) = c_1\mathbf{y}_1 + c_2\mathbf{y}_2 = \begin{pmatrix} -5e^{-2t}\cos 3t & -5e^{-2t}\sin 3t \\ e^{-2t}(\cos 3t - 3\sin 3t) & e^{-2t}(3\cos 3t + \sin 3t) \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
(d) We have
$$\begin{pmatrix} -5 & 0 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \end{pmatrix}$$

Solving this system we find $c_1 = -1$, $c_2 = 1$. Therefore the solution to the initial value problem is
$$\mathbf{y}(t) = -\begin{pmatrix} -5e^{-2t}\cos 3t \\ e^{-2t}(\cos 3t - 3\sin 3t) \end{pmatrix} + \begin{pmatrix} -5e^{-2t}\sin 3t \\ e^{-2t}(3\cos 3t + \sin 3t) \end{pmatrix} = \begin{pmatrix} 5e^{-2t}(\cos 3t - \sin 3t) \\ e^{-2t}(2\cos 3t + 4\sin 3t) \end{pmatrix}.$$

2.2.6 (b) The Wronskian is given by

$$W(t) = \begin{vmatrix} 1 & e^{3t} \\ 1 & -2e^{3t} \end{vmatrix} = -3e^{3t}$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(c) The general solution is

$$\mathbf{y}(t) = c_1\mathbf{y}_1 + c_2\mathbf{y}_2 = \begin{pmatrix} 1 & e^{3t} \\ 1 & -2e^{3t} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
(d) We have
$$\begin{pmatrix} 1 & e^{-3} \\ 1 & -2e^{-3} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} -2 \\ 4 \end{pmatrix}.$$
Solving this system we find $c_1 = 0$, $c_2 = -2e^{3}$. Therefore the solution to the initial value problem is
$$\mathbf{y}(t) = \begin{pmatrix} -2e^{3(t+1)} \\ 4e^{3(t+1)} \end{pmatrix}.$$

2.2.7 (b) The Wronskian is given by

$$W(t) = \begin{vmatrix} e^{-2t} & 0 & 0 \\ 0 & 2e^{t}\cos 2t & 2e^{t}\sin 2t \\ 0 & -e^{t}\sin 2t & e^{t}\cos 2t \end{vmatrix} = 2.$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3\}$ forms a fundamental set of solutions.
(c) The general solution is

$$\mathbf{y}(t) = c_1\mathbf{y}_1 + c_2\mathbf{y}_2 + c_3\mathbf{y}_3 = \begin{pmatrix} e^{-2t} & 0 & 0 \\ 0 & 2e^{t}\cos 2t & 2e^{t}\sin 2t \\ 0 & -e^{t}\sin 2t & e^{t}\cos 2t \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}.$$
(d) We have
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix}$$

Solving this system using Cramer's rule we find $c_1 = 3$, $c_2 = 2$, $c_3 = -2$. Therefore the solution to the initial value problem is
$$\mathbf{y}(t) = \begin{pmatrix} 3e^{-2t} \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 4e^{t}\cos 2t \\ -2e^{t}\sin 2t \end{pmatrix} - \begin{pmatrix} 0 \\ 4e^{t}\sin 2t \\ 2e^{t}\cos 2t \end{pmatrix} = \begin{pmatrix} 3e^{-2t} \\ 4e^{t}(\cos 2t - \sin 2t) \\ -2e^{t}(\sin 2t + \cos 2t) \end{pmatrix}.$$

2.2.8 (a) The Wronskian is

$$W(t) = \begin{vmatrix} 5e^{-t} & e^{t} \\ -7e^{-t} & -e^{t} \end{vmatrix} = 2.$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(b) $\mathrm{tr}(\mathbf{P}(t)) = 6 - 6 = 0$.
(c) $W(t) = 2$ and $W(t_0)e^{\int_{t_0}^{t}\mathrm{tr}(\mathbf{P}(s))\,ds} = 2e^{\int_{-1}^{t} 0\,ds} = 2$.

2.2.9 (a) The Wronskian is

$$W(t) = \begin{vmatrix} e^{t} & t^{-1} \\ e^{t} & 0 \end{vmatrix} = -t^{-1}e^{t}.$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(b) $\mathrm{tr}(\mathbf{P}(t)) = 1 - t^{-1}$.
(c) $W(t) = -t^{-1}e^{t}$ and $W(t_0)e^{\int_{t_0}^{t}\mathrm{tr}(\mathbf{P}(s))\,ds} = -e \cdot e^{\int_{1}^{t}(1 - s^{-1})\,ds} = -t^{-1}e^{t}$.

2.2.10 (a) The Wronskian is given by

$$W(t) = \begin{vmatrix} 5 & 2e^{3t} \\ 1 & e^{3t} \end{vmatrix} = 3e^{3t}$$

Since $W(t) \ne 0$, the set $\{\mathbf{y}_1, \mathbf{y}_2\}$ forms a fundamental set of solutions.
(b) Since $W'(t) - \mathrm{tr}(\mathbf{P}(t))W(t) = 0$, we have $9e^{3t} - 3\,\mathrm{tr}(\mathbf{P}(t))e^{3t} = 0$. Thus, $\mathrm{tr}(\mathbf{P}(t)) = 3$.
(c) We have

$$\Psi'(t) = [\mathbf{y}_1'\ \ \mathbf{y}_2'] = [\mathbf{P}(t)\mathbf{y}_1\ \ \mathbf{P}(t)\mathbf{y}_2] = \mathbf{P}(t)[\mathbf{y}_1\ \ \mathbf{y}_2] = \mathbf{P}(t)\Psi(t).$$

(d) From part (c) we have
$$\mathbf{P}(t) = \Psi'(t)\Psi^{-1}(t) = \begin{pmatrix} 0 & 6e^{3t} \\ 0 & 3e^{3t} \end{pmatrix} \cdot \frac{1}{3e^{3t}}\begin{pmatrix} e^{3t} & -2e^{3t} \\ -1 & 5 \end{pmatrix} = \begin{pmatrix} -2 & 10 \\ -1 & 5 \end{pmatrix}.$$

The results in parts (b) and (d) are consistent since tr(P(t)) = −2 + 5 = 3.
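A numeric cross-check of part (d), recomputing $\mathbf{P} = \Psi'\Psi^{-1}$ at a couple of sample times:

```python
import math

# Problem 2.2.10(d): recover P = Psi'(t) Psi^{-1}(t) numerically and confirm
# it is the constant matrix [[-2, 10], [-1, 5]] with trace 3.

def psi(t):
    return [[5.0, 2 * math.exp(3 * t)],
            [1.0, math.exp(3 * t)]]

def psi_prime(t):
    return [[0.0, 6 * math.exp(3 * t)],
            [0.0, 3 * math.exp(3 * t)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

for t in [0.0, 0.7]:
    P = mat_mul(psi_prime(t), inv2(psi(t)))
    expected = [[-2.0, 10.0], [-1.0, 5.0]]
    assert all(abs(P[i][j] - expected[i][j]) < 1e-9 for i in range(2) for j in range(2))
    assert abs(P[0][0] + P[1][1] - 3.0) < 1e-9  # trace agrees with part (b)
```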

2.2.11 α = −3.

Section 2.3

2.3.1 Linearly independent.

2.3.2 Linearly dependent.

2.3.3 Linearly dependent.

2.3.4 Linearly dependent.

2.3.5 (a) $\Psi(t)$ is a fundamental matrix.
(b)
$$\mathbf{A} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$

(c) Ψ(t) is a fundamental matrix.

2.3.6 $\mathbf{A} = \Psi'(0)\Psi^{-1}(0) = -\frac{1}{3}\begin{pmatrix} -3 & -1 \\ 0 & 1 \end{pmatrix}.$

Section 2.4

2.4.1 (a) $\mathbf{x}_1$ is an eigenvector corresponding to the eigenvalue $r_1 = -1$; $\mathbf{x}_2$ is an eigenvector corresponding to the eigenvalue $r_2 = 1$. (b) $\mathbf{y}_1(t) = e^{-t}\mathbf{x}_1$ and $\mathbf{y}_2(t) = e^{t}\mathbf{x}_2$. (c) Yes.

2.4.2 (a) $\mathbf{x}_1$ is an eigenvector corresponding to the eigenvalue $r_1 = 1$; $\mathbf{x}_2$ is not an eigenvector of $\mathbf{P}$. (b) $\mathbf{y}_1(t) = e^{t}\mathbf{x}_1$. (c) The Wronskian is not defined for this problem.

2.4.3 (a) $\mathbf{x}_1$ is an eigenvector corresponding to the eigenvalue $r_1 = 4$; $\mathbf{x}_2$ is an eigenvector corresponding to the eigenvalue $r_2 = 0$. (b) $\mathbf{y}_1(t) = e^{4t}\mathbf{x}_1$ and $\mathbf{y}_2(t) = \mathbf{x}_2$. (c) Yes.

2.4.4 $\mathbf{x} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.$

2.4.5 $\mathbf{x} = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}.$

2.4.6 $\mathbf{x} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$

2.4.7 $r = -5$ and $r = 4$.

2.4.8 r = 0 and r = 9.

2.4.9 r = 5, r = 4, and r = −1.

2.4.10 r = 0, r = 4, and r = −4.

2.4.11 $\left\{ e^{-0.11t}\begin{pmatrix} 1 \\ -1 \end{pmatrix},\ e^{-0.05t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} \right\}.$

2.4.12 $\left\{ e^{t}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},\ e^{3t}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},\ e^{5t}\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} \right\}.$

        1 1 1  2.4.13 e−2t  −5  , et  −2  , e2t  −1  .  −6 −3 0   1   3  2.4.14 y(t) = −e1−t + e3(t−1) . −2 −2  1   2   1  2.4.15 y(t) = e−2t  −3  − et  −3  + 0e4t  0  . 3 0 0 2.4.16 r = −6 and α = 8.

2.4.17 α = 0 and β = −1.

Section 2.5

2.5.1 $\left(-3i,\ \begin{pmatrix} 3 \\ i \end{pmatrix}\right)$, $\left(3i,\ \begin{pmatrix} 3 \\ -i \end{pmatrix}\right)$.

2.5.2 $\left(2 - i,\ \begin{pmatrix} 1 - i \\ -2 \end{pmatrix}\right)$, $\left(2 + i,\ \begin{pmatrix} 1 + i \\ -2 \end{pmatrix}\right)$.

2.5.3 $\left(2,\ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}\right)$, $\left(2 - 3i,\ \begin{pmatrix} 4 - i \\ 3i \\ -1 + i \end{pmatrix}\right)$, $\left(2 + 3i,\ \begin{pmatrix} 4 + i \\ -3i \\ -1 - i \end{pmatrix}\right)$.

2.5.4
$$\mathbf{y}_1(t) = \begin{pmatrix} -2\cos t - \sin t \\ 5\cos t \end{pmatrix}, \qquad \mathbf{y}_2(t) = \begin{pmatrix} \cos t - 2\sin t \\ 5\sin t \end{pmatrix}.$$

2.5.5
$$\mathbf{y}_1(t) = \begin{pmatrix} -e^{t}\cos t - e^{t}\sin t \\ -e^{t}\sin t \end{pmatrix}, \qquad \mathbf{y}_2(t) = \begin{pmatrix} e^{t}\cos t - e^{t}\sin t \\ e^{t}\cos t \end{pmatrix}.$$

2.5.6
$$\mathbf{y}_1(t) = \begin{pmatrix} -e^{t}\sin 5t \\ e^{t}\cos 5t \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{y}_2(t) = \begin{pmatrix} e^{t}\cos 5t \\ e^{t}\sin 5t \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{y}_3(t) = \begin{pmatrix} 0 \\ 0 \\ -e^{t}\sin 2t \\ e^{t}\cos 2t \end{pmatrix}, \quad \mathbf{y}_4(t) = \begin{pmatrix} 0 \\ 0 \\ e^{t}\cos 2t \\ e^{t}\sin 2t \end{pmatrix}.$$

2.5.7 $\mathbf{y}(t) = \begin{pmatrix} 6\cos 3t - 6\sin 3t \\ 2\sin 3t + 2\cos 3t \end{pmatrix}.$

2.5.8 $\mathbf{y}(t) = e^{2t}\begin{pmatrix} 8\cos t + 14\sin t \\ 6\cos t - 22\sin t \end{pmatrix}.$

2.5.9 $\mathbf{y}(t) = e^{2t}\begin{pmatrix} -2 + \cos 3t - 13\sin 3t \\ 3\sin 3t + 9\cos 3t \\ 2 + 2\cos 3t + 4\sin 3t \end{pmatrix}.$

Section 2.6

2.6.1 (a) $r = 3$ has algebraic multiplicity 2. The corresponding eigenvector is
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
(b) $\mathbf{y}_2(t) = e^{3t}\begin{pmatrix} 0 \\ \frac{1}{2} \end{pmatrix} + te^{3t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and
$$\Psi(t) = \begin{pmatrix} e^{3t} & te^{3t} \\ 0 & \frac{e^{3t}}{2} \end{pmatrix}.$$
(c) $\mathbf{y}(t) = e^{3t}\begin{pmatrix} 2t + 4 \\ 1 \end{pmatrix}.$

2.6.2 (a) $r = 3$ has algebraic multiplicity 2. The corresponding eigenvector is
$$\mathbf{x}_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
(b) $\mathbf{y}_2(t) = e^{3t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + te^{3t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and
$$\Psi(t) = \begin{pmatrix} 0 & e^{3t} \\ e^{3t} & te^{3t} \end{pmatrix}.$$
(c) $\mathbf{y}(t) = e^{3t}\begin{pmatrix} 2 \\ 2t - 3 \end{pmatrix}.$

2.6.3 (a) $r = 3$ has algebraic multiplicity 2. The corresponding eigenvector is
$$\mathbf{x}_1 = \begin{pmatrix} 6 \\ -1 \end{pmatrix}.$$
(b) $\mathbf{y}_2(t) = e^{3t}\begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{3t}\begin{pmatrix} 6 \\ -1 \end{pmatrix}$ and
$$\Psi(t) = \begin{pmatrix} 6e^{3t} & (6t - 1)e^{3t} \\ -e^{3t} & -te^{3t} \end{pmatrix}.$$
(c) $\mathbf{y}(t) = e^{3t}\begin{pmatrix} -72t \\ 12t + 2 \end{pmatrix}.$

2.6.4 (a) $r = 5$ has algebraic multiplicity 2. The corresponding eigenvector is
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
(b) $\mathbf{y}_2(t) = e^{5t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + te^{5t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and
$$\Psi(t) = \begin{pmatrix} e^{5t} & (t + 1)e^{5t} \\ -e^{5t} & -te^{5t} \end{pmatrix}.$$
(c) $\mathbf{y}(t) = e^{5t}\begin{pmatrix} 4 \\ -4 \end{pmatrix}.$

2.6.5 (a) $y_1(t) = c_1e^{2t} + c_2te^{2t} + c_3t^2e^{2t}$, $y_2(t) = c_3te^{2t} + c_2e^{2t}$, $y_3(t) = c_3e^{2t}$.
(b) $\left\{ \begin{pmatrix} e^{2t} \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} te^{2t} \\ e^{2t} \\ 0 \end{pmatrix},\ \begin{pmatrix} t^2e^{2t} \\ te^{2t} \\ e^{2t} \end{pmatrix} \right\}$ is a fundamental set.

2.6.6 $r = 5$ is of multiplicity 3 with corresponding eigenvectors
$$\mathbf{x}_1 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Hence, $r = 5$ has algebraic multiplicity 3 and geometric multiplicity 2. Thus, $\mathbf{P}$ is defective.

2.6.7 $r = 5$ is of multiplicity 3 with corresponding eigenvectors
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Hence, $r = 5$ has algebraic multiplicity 3 and geometric multiplicity 3, so $\mathbf{P}$ has a full set of eigenvectors.

2.6.8 $r = 2$ is of multiplicity 4 with corresponding eigenvectors
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.$$
Hence, $r = 2$ has algebraic multiplicity 4 and geometric multiplicity 3, so $\mathbf{P}$ is defective.
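The algebraic-versus-geometric distinction in 2.6.6–2.6.8 can be illustrated with a small assumed matrix (not the one from the exercises): the geometric multiplicity of an eigenvalue $r$ of an $n \times n$ matrix is $n - \mathrm{rank}(\mathbf{P} - r\mathbf{I})$.

```python
# Illustration for Problems 2.6.6-2.6.8 (assumed matrix, not the one from the
# exercises): P below has eigenvalue r = 5 with algebraic multiplicity 3 but
# only two independent eigenvectors, so P is defective.

P = [[5, 1, 0],
     [0, 5, 0],
     [0, 0, 5]]

# P - 5I has rank 1, so the eigenspace (its null space) has dimension 3 - 1 = 2.
M = [[P[i][j] - (5 if i == j else 0) for j in range(3)] for i in range(3)]
nonzero_rows = sum(1 for row in M if any(abs(x) > 0 for x in row))
rank = nonzero_rows  # valid here because M is already in echelon form
geometric_multiplicity = 3 - rank
assert geometric_multiplicity == 2  # < 3 = algebraic multiplicity -> defective
```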

2.6.9 $\mathbf{P}$ has the two distinct eigenvalues $r_1 = a + ib$ and $r_2 = a - ib$. Thus, $\mathbf{P}$ has a full set of eigenvectors, i.e., $\mathbf{P}$ is non-defective.

2.6.10 x = 6 and y = 1.

2.6.11 x = 9 and y = −5.

Index

Algebraic multiplicities, 134
Alternating series test, 70
Analytic, 75
Base functions, 16
Characteristic equation, 55, 117
Characteristic polynomial, 55, 117
Comparison test, 68
Continuity, 9
Convergence, 16
Defective matrix, 136
Derivative of a matrix, 10
Difference of matrices, 6
Dimension of a matrix, 4
Eigenpair, 117
Eigenvalues, 117
Eigenvector, 117
Equal matrices, 5
Euler's formula, 59
First order linear system, 88
Function series, 21
Fundamental matrix, 99
Fundamental set, 39
Fundamental set of solutions, 97
General solution, 97
Geometric multiplicity, 134
Gronwall's lemma, 32
Hermitian matrix, 137
Homogeneous, 26
Homogeneous system, 88
Identity matrix, 4
Interval of convergence, 68
Invertible matrix, 7
Linear, 26
Linearly dependent, 47, 108
Linearly independent, 47, 108
Matrix function, 5
Method of variation of parameters, 81
Non-homogeneous, 26
Non-singular, 7
Nonhomogeneous system, 88
Norm, 8
nth term test, 68
Ordinary point, 75
Partial sums, 66
Picard's iterations, 29
Pointwise convergence, 16, 22
Power series, 66
Principle of Superposition, 38
Radius of convergence, 67
Ratio test, 69
Rational root test, 55
Real symmetric matrix, 137
Recurrence formula, 78
Regular point, 75
Root multiplicity, 57
Signature of a permutation, 102
Singular point, 75
Solution matrix, 98
Square matrix, 4
Successive approximation, 29
Sum of matrices, 5
Taylor series, 75
Trace, 42
Transpose of a matrix, 136
Uniform convergence, 18, 22
Vandermonde determinant, 57
Vector-valued function, 5
Weierstrass M-test, 22
Wronskian, 39, 98
Zero matrix, 4