
Boyce/DiPrima/Meade 11th ed, Ch 7.1: Introduction to Systems of First Order Linear Equations

Elementary Differential Equations and Boundary Value Problems, 11th edition, by William E. Boyce, Richard C. DiPrima, and Doug Meade ©2017 by John Wiley & Sons, Inc.

• A system of simultaneous first order ordinary differential equations has the general form

$$x_1' = F_1(t, x_1, x_2, \dots, x_n)$$
$$x_2' = F_2(t, x_1, x_2, \dots, x_n)$$
$$\vdots$$
$$x_n' = F_n(t, x_1, x_2, \dots, x_n)$$

where each xₖ is a function of t. If each Fₖ is a linear function of x₁, x₂, …, xₙ, then the system of equations is said to be linear; otherwise it is nonlinear. • Systems of higher order differential equations can be defined similarly.

Example 1

• The motion of a certain spring-mass system from Section 3.7 was described by the differential equation

$$u''(t) + \tfrac{1}{8}u'(t) + u(t) = 0$$

• This second order equation can be converted into a system of first order equations by letting x₁ = u and x₂ = u′. Thus

$$x_1' = x_2, \qquad x_2' + \tfrac{1}{8}x_2 + x_1 = 0$$

or

$$x_1' = x_2, \qquad x_2' = -x_1 - \tfrac{1}{8}x_2$$
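• This reduction is exactly the form a numerical solver expects. A minimal sketch using scipy (our addition, not part of the text; the initial conditions are illustrative):

```python
# Integrate u'' + (1/8)u' + u = 0 via the equivalent first order system
# x1' = x2,  x2' = -x1 - (1/8)x2.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x                  # x1 = u, x2 = u'
    return [x2, -x1 - x2 / 8]

sol = solve_ivp(rhs, (0, 20), [2.0, 0.0], max_step=0.05)  # u(0)=2, u'(0)=0
print(sol.y[0, -1])             # u(20): a slowly decaying oscillation
```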

Nth Order ODEs and Linear 1st Order Systems

• The method illustrated in the previous example can be used to transform an arbitrary nth order equation

$$y^{(n)} = F\left(t, y, y', y'', \dots, y^{(n-1)}\right)$$

into a system of n first order equations, first by defining

$$x_1 = y,\quad x_2 = y',\quad x_3 = y'',\quad \dots,\quad x_n = y^{(n-1)}$$

Then

$$x_1' = x_2$$
$$x_2' = x_3$$
$$\vdots$$
$$x_{n-1}' = x_n$$
$$x_n' = F(t, x_1, x_2, \dots, x_n)$$

Solutions of First Order Systems

• A system of simultaneous first order ordinary differential equations has the general form

$$x_1' = F_1(t, x_1, x_2, \dots, x_n)$$
$$\vdots$$
$$x_n' = F_n(t, x_1, x_2, \dots, x_n).$$

It has a solution on I: α < t < β if there exist n functions

$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \dots,\quad x_n = \phi_n(t)$$

that are differentiable on I and satisfy the system of equations at all points t in I. • Initial conditions may also be prescribed to give an IVP:

$$x_1(t_0) = x_1^0,\quad x_2(t_0) = x_2^0,\quad \dots,\quad x_n(t_0) = x_n^0$$

Theorem 7.1.1

• Suppose F₁, …, Fₙ and

$$\partial F_1/\partial x_1, \dots, \partial F_1/\partial x_n, \dots, \partial F_n/\partial x_1, \dots, \partial F_n/\partial x_n$$

are continuous in the region R of t x₁x₂…xₙ-space defined by

$$\alpha < t < \beta,\quad \alpha_1 < x_1 < \beta_1,\quad \dots,\quad \alpha_n < x_n < \beta_n,$$

and let the point $(t_0, x_1^0, x_2^0, \dots, x_n^0)$ be contained in R. Then in some interval (t₀ − h, t₀ + h) there exists a unique solution

$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \dots,\quad x_n = \phi_n(t)$$

that satisfies the IVP

$$x_1' = F_1(t, x_1, x_2, \dots, x_n)$$
$$x_2' = F_2(t, x_1, x_2, \dots, x_n)$$
$$\vdots$$
$$x_n' = F_n(t, x_1, x_2, \dots, x_n)$$

Linear Systems

• If each Fₖ is a linear function of x₁, x₂, …, xₙ, then the system of equations has the general form

$$x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t)$$
$$x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t)$$

• If each of the gₖ(t) is zero on I, then the system is homogeneous; otherwise it is nonhomogeneous.

Theorem 7.1.2

• Suppose p₁₁, p₁₂, …, pₙₙ, g₁, …, gₙ are continuous on an interval I: α < t < β with t₀ in I, and let $x_1^0, x_2^0, \dots, x_n^0$ prescribe the initial conditions. Then there exists a unique solution

$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \dots,\quad x_n = \phi_n(t)$$

that satisfies the IVP, and it exists throughout I:

$$x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t)$$
$$x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t)$$

Boyce/DiPrima/Meade 11th ed, Ch 7.2: Review of Matrices


• For theoretical and computational reasons, we review results of matrix theory in this section and the next. • A matrix A is an m × n rectangular array of elements, arranged in m rows and n columns, denoted

$$\mathbf{A} = (a_{ij}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

• Some examples of 2 × 2 matrices are given below:

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} 1 & 3-2i \\ 4+5i & 6-7i \end{pmatrix}$$

Transpose

• The transpose of A = (aᵢⱼ) is Aᵀ = (aⱼᵢ).

$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \;\Rightarrow\; \mathbf{A}^T = \begin{pmatrix} a_{11} & \cdots & a_{m1} \\ \vdots & \ddots & \vdots \\ a_{1n} & \cdots & a_{mn} \end{pmatrix}$$

• For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \Rightarrow \mathbf{A}^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \Rightarrow \mathbf{B}^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$$

Conjugate

• The conjugate of A = (aᵢⱼ) is Ā = (āᵢⱼ).

$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \;\Rightarrow\; \bar{\mathbf{A}} = \begin{pmatrix} \bar{a}_{11} & \cdots & \bar{a}_{1n} \\ \vdots & \ddots & \vdots \\ \bar{a}_{m1} & \cdots & \bar{a}_{mn} \end{pmatrix}$$

• For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2-3i \\ 3+4i & 4 \end{pmatrix} \;\Rightarrow\; \bar{\mathbf{A}} = \begin{pmatrix} 1 & 2+3i \\ 3-4i & 4 \end{pmatrix}$$

Adjoint

• The adjoint of A is Āᵀ, and is denoted by A*.

$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \;\Rightarrow\; \mathbf{A}^* = \begin{pmatrix} \bar{a}_{11} & \cdots & \bar{a}_{m1} \\ \vdots & \ddots & \vdots \\ \bar{a}_{1n} & \cdots & \bar{a}_{mn} \end{pmatrix}$$

• For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2-3i \\ 3+4i & 4 \end{pmatrix} \;\Rightarrow\; \mathbf{A}^* = \begin{pmatrix} 1 & 3-4i \\ 2+3i & 4 \end{pmatrix}$$

Square Matrices

• A square matrix A has the same number of rows and columns. That is, A is n × n. In this case, A is said to have order n.

$$\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

• For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$

Vectors

• A column vector x is an n × 1 matrix. For example,

$$\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$

• A row vector y is a 1 × n matrix. For example,

$$\mathbf{y} = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$$

• Note here that y = xᵀ, and that in general, if x is a column vector, then xᵀ is a row vector.

The Zero Matrix

• The zero matrix is defined to be 0 = (0), whose dimensions depend on the context. For example,

$$\mathbf{0} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad \mathbf{0} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{0} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \;\dots$$

Matrix Equality

• Two matrices A = (aᵢⱼ) and B = (bᵢⱼ) are equal if aᵢⱼ = bᵢⱼ for all i and j. For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \;\Rightarrow\; \mathbf{A} = \mathbf{B}$$

Matrix – Scalar Multiplication

• The product of a matrix A = (aᵢⱼ) and a constant k is defined to be kA = (kaᵢⱼ). For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \;\Rightarrow\; 5\mathbf{A} = \begin{pmatrix} 5 & 10 & 15 \\ 20 & 25 & 30 \end{pmatrix}$$

Matrix Addition and Subtraction

• The sum of two m × n matrices A = (aᵢⱼ) and B = (bᵢⱼ) is defined to be A + B = (aᵢⱼ + bᵢⱼ). For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \;\Rightarrow\; \mathbf{A} + \mathbf{B} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}$$

• The difference of two m × n matrices A = (aᵢⱼ) and B = (bᵢⱼ) is defined to be A − B = (aᵢⱼ − bᵢⱼ). For example,

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \;\Rightarrow\; \mathbf{A} - \mathbf{B} = \begin{pmatrix} -4 & -4 \\ -4 & -4 \end{pmatrix}$$

Matrix Multiplication

• The product of an m × n matrix A = (aᵢⱼ) and an n × r matrix B = (bᵢⱼ) is defined to be the matrix C = (cᵢⱼ), where

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

• Examples (note AB does not necessarily equal BA):

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\; \mathbf{B} = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix} \;\Rightarrow\; \mathbf{AB} = \begin{pmatrix} 1+4 & 3+8 \\ 3+8 & 9+16 \end{pmatrix} = \begin{pmatrix} 5 & 11 \\ 11 & 25 \end{pmatrix}$$
$$\mathbf{BA} = \begin{pmatrix} 1+9 & 2+12 \\ 2+12 & 4+16 \end{pmatrix} = \begin{pmatrix} 10 & 14 \\ 14 & 20 \end{pmatrix}$$
$$\mathbf{C} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix},\; \mathbf{D} = \begin{pmatrix} 3 & 0 \\ 1 & 2 \\ 0 & -1 \end{pmatrix} \;\Rightarrow\; \mathbf{CD} = \begin{pmatrix} 3+2+0 & 0+4-3 \\ 12+5+0 & 0+10-6 \end{pmatrix} = \begin{pmatrix} 5 & 1 \\ 17 & 4 \end{pmatrix}$$

Example 1: Matrix Multiplication

• To illustrate matrix multiplication and show that it is not commutative, consider the following matrices:

$$\mathbf{A} = \begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & -1 \\ 2 & 1 & 1 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 2 & 1 & -1 \\ 1 & -1 & 0 \\ 2 & -1 & 1 \end{pmatrix}$$

• From the definition of matrix multiplication we have:

$$\mathbf{AB} = \begin{pmatrix} 2-2+2 & 1+2-1 & -1+1 \\ 2-2 & -2+1 & -1 \\ 4+1+2 & 2-1-1 & -2+1 \end{pmatrix} = \begin{pmatrix} 2 & 2 & 0 \\ 0 & -1 & -1 \\ 7 & 0 & -1 \end{pmatrix}$$
$$\mathbf{BA} = \begin{pmatrix} 2-2 & -4+2-1 & 2-1-1 \\ 1 & -2-2 & 1+1 \\ 2+2 & -4-2+1 & 2+1+1 \end{pmatrix} = \begin{pmatrix} 0 & -3 & 0 \\ 1 & -4 & 2 \\ 4 & -5 & 4 \end{pmatrix} \neq \mathbf{AB}$$
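• These computations are easy to check numerically. A short numpy sketch (our addition, not part of the slides):

```python
# Verify AB != BA for the 3x3 matrices of Example 1.
import numpy as np

A = np.array([[1, -2, 1], [0, 2, -1], [2, 1, 1]])
B = np.array([[2, 1, -1], [1, -1, 0], [2, -1, 1]])

print(A @ B)                          # [[2 2 0], [0 -1 -1], [7 0 -1]]
print(B @ A)                          # [[0 -3 0], [1 -4 2], [4 -5 4]]
print(np.array_equal(A @ B, B @ A))   # False: multiplication is not commutative
```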

Vector Multiplication

• The dot product of two n × 1 vectors x and y is defined as

$$\mathbf{x}^T\mathbf{y} = \sum_{k=1}^{n} x_k y_k$$

• The inner product of two n × 1 vectors x and y is defined as

$$(\mathbf{x},\mathbf{y}) = \mathbf{x}^T\bar{\mathbf{y}} = \sum_{k=1}^{n} x_k \bar{y}_k$$

• Example:

$$\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3i \end{pmatrix},\; \mathbf{y} = \begin{pmatrix} -1 \\ 2-3i \\ 5+5i \end{pmatrix} \;\Rightarrow\; \mathbf{x}^T\mathbf{y} = (1)(-1) + (2)(2-3i) + (3i)(5+5i) = -12+9i$$
$$(\mathbf{x},\mathbf{y}) = \mathbf{x}^T\bar{\mathbf{y}} = (1)(-1) + (2)(2+3i) + (3i)(5-5i) = 18+21i$$

Vector Length

• The length of an n × 1 vector x is defined as

$$\|\mathbf{x}\| = (\mathbf{x},\mathbf{x})^{1/2} = \left(\sum_{k=1}^n x_k\bar{x}_k\right)^{1/2} = \left(\sum_{k=1}^n |x_k|^2\right)^{1/2}$$

• Note here that we have used the fact that if x = a + bi, then

$$x\bar{x} = (a+bi)(a-bi) = a^2 + b^2 = |x|^2$$

• Example:

$$\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3-4i \end{pmatrix} \;\Rightarrow\; \|\mathbf{x}\| = (\mathbf{x},\mathbf{x})^{1/2} = \left[(1)(1) + (2)(2) + (3-4i)(3+4i)\right]^{1/2} = \sqrt{1+4+9+16} = \sqrt{30}$$

Orthogonality

• Two n × 1 vectors x and y are orthogonal if (x, y) = 0. • Example:

$$\mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},\; \mathbf{y} = \begin{pmatrix} -11 \\ 4 \\ 1 \end{pmatrix} \;\Rightarrow\; (\mathbf{x},\mathbf{y}) = (1)(-11) + (2)(4) + (3)(1) = 0$$

Identity Matrix

• The multiplicative identity matrix I is an n × n matrix given by

$$\mathbf{I} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

• For any square matrix A, it follows that AI = IA = A. • The dimensions of I depend on the context. For example,

$$\mathbf{AI} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \qquad \mathbf{IB} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$$

Inverse Matrix

• A square matrix A is nonsingular, or invertible, if there exists a matrix B such that AB = BA = I. Otherwise A is singular. • The matrix B, if it exists, is unique; it is denoted by A⁻¹ and is called the inverse of A. • It turns out that A⁻¹ exists iff det A ≠ 0, and A⁻¹ can be found using row reduction (also called Gaussian elimination) on the augmented matrix (A|I); see the example on the next slide. • The three elementary row operations: – Interchange two rows. – Multiply a row by a nonzero scalar. – Add a multiple of one row to another row. Example 2: Finding the Inverse of a Matrix (1 of 2)

• Use row reduction to find the inverse of the matrix A below, if it exists.

$$\mathbf{A} = \begin{pmatrix} 1 & -1 & -1 \\ 3 & -1 & 2 \\ 2 & 2 & 3 \end{pmatrix}$$

• Solution: If possible, use elementary row operations to reduce (A|I),

$$(\mathbf{A}|\mathbf{I}) = \begin{pmatrix} 1 & -1 & -1 & 1 & 0 & 0 \\ 3 & -1 & 2 & 0 & 1 & 0 \\ 2 & 2 & 3 & 0 & 0 & 1 \end{pmatrix},$$

such that the left side is the identity matrix, for then the right side will be A⁻¹. (See next slide.)

Example 2: Finding the Inverse of a Matrix (2 of 2)

 1 1 1 1 0 0  1 1 1 1 0 0     A I  3 1 2 0 1 0  0 2 5  3 1 0     2 2 3 0 0 1 0 4 5  2 0 1  1 1 1 1 0 0  1 0 3/ 2 1/ 2 1/ 2 0      0 1 5/ 2  3/ 2 1/ 2 0  0 1 5/ 2  3/ 2 1/ 2 0     0 4 5  2 0 1 0 0  5 4  2 1

 1 0 3/ 2 1/ 2 1/ 2 0  1 0 0 7 /10 1/10 3/10      0 1 5/ 2  3/ 2 1/ 2 0  0 1 0 1/ 2 1/ 2 1/ 2     0 0  5 4  2 1 0 0 1  4 / 5 2 / 5 1/ 5  7 /10 1/10 3/10 1   • Thus A   1/ 2 1/ 2 1/ 2    4 / 5 2 / 5 1/ 5 Matrix Functions

Matrix Functions

• The elements of a matrix can be functions of a real variable. In this case, we write

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_m(t) \end{pmatrix}, \qquad \mathbf{A}(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}(t) & a_{m2}(t) & \cdots & a_{mn}(t) \end{pmatrix}$$

• Such a matrix is continuous at a point, or on an interval (α, β), if each element is continuous there. Similarly with differentiation and integration:

$$\frac{d\mathbf{A}}{dt} = \left(\frac{da_{ij}}{dt}\right), \qquad \int_a^b \mathbf{A}(t)\,dt = \left(\int_a^b a_{ij}(t)\,dt\right)$$

Example & Differentiation Rules

• Example:

$$\mathbf{A}(t) = \begin{pmatrix} 3t^2 & \sin t \\ \cos t & 4 \end{pmatrix} \;\Rightarrow\; \frac{d\mathbf{A}}{dt} = \begin{pmatrix} 6t & \cos t \\ -\sin t & 0 \end{pmatrix}, \qquad \int_0^{\pi} \mathbf{A}(t)\,dt = \begin{pmatrix} \pi^3 & 2 \\ 0 & 4\pi \end{pmatrix}$$
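• Entrywise differentiation and integration can be reproduced symbolically. A sketch with sympy (our choice of tool; the integration limits 0 to π are the ones assumed in the reconstruction above):

```python
# Differentiate and integrate a matrix function entry by entry.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3*t**2, sp.sin(t)], [sp.cos(t), 4]])

print(A.diff(t))   # Matrix([[6*t, cos(t)], [-sin(t), 0]])
print(A.applyfunc(lambda f: sp.integrate(f, (t, 0, sp.pi))))
                   # Matrix([[pi**3, 2], [0, 4*pi]])
```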

• Many of the rules from calculus apply in this setting. For example:

$$\frac{d(C\mathbf{A})}{dt} = C\,\frac{d\mathbf{A}}{dt}, \quad \text{where } C \text{ is a constant matrix}$$
$$\frac{d(\mathbf{A}+\mathbf{B})}{dt} = \frac{d\mathbf{A}}{dt} + \frac{d\mathbf{B}}{dt}$$
$$\frac{d(\mathbf{AB})}{dt} = \frac{d\mathbf{A}}{dt}\,\mathbf{B} + \mathbf{A}\,\frac{d\mathbf{B}}{dt}$$

Boyce/DiPrima/Meade 11th ed, Ch 7.3: Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors


• A system of n linear equations in n variables,

$$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n,$$

can be expressed as a matrix equation Ax = b:

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

• If b = 0, then the system is homogeneous; otherwise it is nonhomogeneous.

Nonsingular Case

• If the coefficient matrix A is nonsingular, then it is invertible and we can solve Ax = b as follows:

$$\mathbf{Ax} = \mathbf{b} \;\Rightarrow\; \mathbf{A}^{-1}\mathbf{Ax} = \mathbf{A}^{-1}\mathbf{b} \;\Rightarrow\; \mathbf{Ix} = \mathbf{A}^{-1}\mathbf{b} \;\Rightarrow\; \mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$$

• This solution is therefore unique. Also, if b = 0, it follows that the unique solution to Ax = 0 is x = A⁻¹0 = 0.

• Thus if A is nonsingular, then the only solution to Ax = 0 is the trivial solution x = 0.

Example 1: Nonsingular Case (1 of 3)

• From a previous example, we know that the matrix A below is nonsingular, with inverse as given:

$$\mathbf{A} = \begin{pmatrix} 1 & -2 & 3 \\ -1 & 1 & -2 \\ 2 & -1 & -1 \end{pmatrix}, \qquad \mathbf{A}^{-1} = \begin{pmatrix} -3/4 & -5/4 & 1/4 \\ -5/4 & -7/4 & -1/4 \\ -1/4 & -3/4 & -1/4 \end{pmatrix}$$

• Using the definition of matrix multiplication, it follows that the only solution of Ax = 0 is x = 0:

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{0} = \begin{pmatrix} -3/4 & -5/4 & 1/4 \\ -5/4 & -7/4 & -1/4 \\ -1/4 & -3/4 & -1/4 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

Example 1: Nonsingular Case (2 of 3)

• Now let’s solve the nonhomogeneous linear system Ax = b below using A⁻¹:

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= 7 \\ -x_1 + x_2 - 2x_3 &= -5 \\ 2x_1 - x_2 - x_3 &= 4 \end{aligned}$$

• This system of equations can be written as Ax = b, where

$$\mathbf{A} = \begin{pmatrix} 1 & -2 & 3 \\ -1 & 1 & -2 \\ 2 & -1 & -1 \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 7 \\ -5 \\ 4 \end{pmatrix}$$

• Then

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} = \begin{pmatrix} -3/4 & -5/4 & 1/4 \\ -5/4 & -7/4 & -1/4 \\ -1/4 & -3/4 & -1/4 \end{pmatrix}\begin{pmatrix} 7 \\ -5 \\ 4 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix}$$

Example 1: Nonsingular Case (3 of 3)

• Alternatively, we could solve the nonhomogeneous linear system Ax = b using row reduction. To do so, form the augmented matrix (A|b) and reduce, using elementary row operations:

$$(\mathbf{A}|\mathbf{b}) = \begin{pmatrix} 1 & -2 & 3 & 7 \\ -1 & 1 & -2 & -5 \\ 2 & -1 & -1 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -2 & 3 & 7 \\ 0 & -1 & 1 & 2 \\ 0 & 3 & -7 & -10 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -2 & 3 & 7 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & -4 & -4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -2 & 3 & 7 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & 1 & 1 \end{pmatrix}$$
$$\Rightarrow\; \begin{aligned} x_1 - 2x_2 + 3x_3 &= 7 \\ x_2 - x_3 &= -2 \\ x_3 &= 1 \end{aligned} \;\Rightarrow\; \mathbf{x} = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix}$$
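• In practice the system would be solved numerically. A minimal sketch using numpy (our addition; note that `solve` uses an LU factorization rather than forming A⁻¹ explicitly, which is the standard design choice):

```python
# Solve the nonhomogeneous system of Example 1 directly.
import numpy as np

A = np.array([[ 1.0, -2.0,  3.0],
              [-1.0,  1.0, -2.0],
              [ 2.0, -1.0, -1.0]])
b = np.array([7.0, -5.0, 4.0])

x = np.linalg.solve(A, b)
print(x)                   # [ 2. -1.  1.]
```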

Singular Case

• If the coefficient matrix A is singular, then A⁻¹ does not exist, and either a solution to Ax = b does not exist, or there is more than one solution (not unique). • Further, the homogeneous system Ax = 0 has more than one solution. That is, in addition to the trivial solution x = 0, there are infinitely many nontrivial solutions. • The nonhomogeneous case Ax = b has no solution unless (b, y) = 0 for all vectors y satisfying A*y = 0, where A* is the adjoint of A. • In this case, Ax = b has solutions (infinitely many), each of the form x = x⁽⁰⁾ + ξ, where x⁽⁰⁾ is a particular solution of Ax = b, and ξ is any solution of Ax = 0. Example 2: Singular Case (1 of 2)

• Solve the nonhomogeneous linear system Ax = b below using row reduction. Observe that the coefficients are nearly the same as in the previous example:

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= b_1 \\ -x_1 + x_2 - 2x_3 &= b_2 \\ 2x_1 - x_2 + 3x_3 &= b_3 \end{aligned}$$

• We form the augmented matrix (A|b) and use some of the steps in Example 1 to transform the matrix more quickly:

$$(\mathbf{A}|\mathbf{b}) = \begin{pmatrix} 1 & -2 & 3 & b_1 \\ -1 & 1 & -2 & b_2 \\ 2 & -1 & 3 & b_3 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -2 & 3 & b_1 \\ 0 & -1 & 1 & b_1 + b_2 \\ 0 & 0 & 0 & b_1 + 3b_2 + b_3 \end{pmatrix}$$
$$\Rightarrow\; \begin{aligned} x_1 - 2x_2 + 3x_3 &= b_1 \\ -x_2 + x_3 &= b_1 + b_2 \\ 0 &= b_1 + 3b_2 + b_3 \end{aligned}$$

Example 2: Singular Case (2 of 2)

• From the previous slide, if b₁ + 3b₂ + b₃ ≠ 0, there is no solution to the system of equations.

• Requiring that b₁ + 3b₂ + b₃ = 0, assume, for example, that

$$b_1 = 2, \quad b_2 = 1, \quad b_3 = -5$$

• Then the reduced augmented matrix (A|b) becomes:

$$\begin{pmatrix} 1 & -2 & 3 & b_1 \\ 0 & -1 & 1 & b_1+b_2 \\ 0 & 0 & 0 & b_1+3b_2+b_3 \end{pmatrix} \;\Rightarrow\; \begin{aligned} x_1 - 2x_2 + 3x_3 &= 2 \\ -x_2 + x_3 &= 3 \\ 0 &= 0 \end{aligned} \;\Rightarrow\; \begin{aligned} x_1 &= -x_3 - 4 \\ x_2 &= x_3 - 3 \end{aligned}$$
$$\mathbf{x} = \begin{pmatrix} -x_3-4 \\ x_3-3 \\ x_3 \end{pmatrix} = x_3\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} + \begin{pmatrix} -4 \\ -3 \\ 0 \end{pmatrix}$$

• It can be shown that the second term in x is a solution of the nonhomogeneous equation and that the first term is the most general solution of the homogeneous equation, letting x₃ = α, where α is arbitrary.

Linear Dependence and Independence

• A set of vectors x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ is linearly dependent if there exist scalars c₁, c₂, …, cₙ, not all zero, such that

$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + \dots + c_n\mathbf{x}^{(n)} = \mathbf{0}$$

• If the only solution of this equation is c₁ = c₂ = … = cₙ = 0, then x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ is linearly independent.

Example 3: Linear Dependence (1 of 2)

• Determine whether the following vectors are linearly dependent or linearly independent:

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} -4 \\ 1 \\ -11 \end{pmatrix}$$

• We need to solve

$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + c_3\mathbf{x}^{(3)} = \mathbf{0}$$

or

$$\begin{pmatrix} 1 & 2 & -4 \\ 2 & 1 & 1 \\ -1 & 3 & -11 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

Example 3: Linear Dependence (2 of 2)

• We can reduce the augmented matrix (A|b), as before:

$$(\mathbf{A}|\mathbf{b}) = \begin{pmatrix} 1 & 2 & -4 & 0 \\ 2 & 1 & 1 & 0 \\ -1 & 3 & -11 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 2 & -4 & 0 \\ 0 & -3 & 9 & 0 \\ 0 & 5 & -15 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 2 & -4 & 0 \\ 0 & 1 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$\Rightarrow\; \begin{aligned} c_1 + 2c_2 - 4c_3 &= 0 \\ c_2 - 3c_3 &= 0 \end{aligned} \;\Rightarrow\; \mathbf{c} = c_3\begin{pmatrix} -2 \\ 3 \\ 1 \end{pmatrix}, \text{ where } c_3 \text{ can be any number}$$

• So, the vectors are linearly dependent: if c₃ = 1, then

$$-2\mathbf{x}^{(1)} + 3\mathbf{x}^{(2)} + \mathbf{x}^{(3)} = \mathbf{0}$$

• Alternatively, we could show that the following determinant is zero:

$$\det(x_{ij}) = \begin{vmatrix} 1 & 2 & -4 \\ 2 & 1 & 1 \\ -1 & 3 & -11 \end{vmatrix} = 0$$

Linear Independence and Invertibility

• Consider the previous two examples: – The first matrix was known to be nonsingular, and its column vectors were linearly independent. – The second matrix was known to be singular, and its column vectors were linearly dependent. • This is true in general: the columns (or rows) of A are linearly independent iff A is nonsingular iff A⁻¹ exists. • Also, A is nonsingular iff det A ≠ 0; hence the columns (or rows) of A are linearly independent iff det A ≠ 0. • Further, if A = BC, then det(A) = det(B)det(C). Thus if the columns (or rows) of A and B are linearly independent, then the columns (or rows) of C are also.

Linear Dependence & Vector Functions

• Now consider vector functions x⁽¹⁾(t), x⁽²⁾(t), …, x⁽ⁿ⁾(t), where

$$\mathbf{x}^{(k)}(t) = \begin{pmatrix} x_1^{(k)}(t) \\ x_2^{(k)}(t) \\ \vdots \\ x_m^{(k)}(t) \end{pmatrix}, \qquad k = 1, 2, \dots, n, \quad t \in I = (\alpha, \beta)$$

• As before, x⁽¹⁾(t), x⁽²⁾(t), …, x⁽ⁿ⁾(t) are linearly dependent on I if there exist scalars c₁, c₂, …, cₙ, not all zero, such that

$$c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \dots + c_n\mathbf{x}^{(n)}(t) = \mathbf{0} \quad \text{for all } t \in I$$

• Otherwise x⁽¹⁾(t), x⁽²⁾(t), …, x⁽ⁿ⁾(t) are linearly independent on I. See text for more discussion on this.

Eigenvalues and Eigenvectors

• The equation Ax = y can be viewed as a linear transformation that maps (or transforms) x into a new vector y. • Nonzero vectors x that transform into multiples of themselves are important in many applications. • Thus we solve Ax = λx, or equivalently, (A − λI)x = 0. • This equation has a nonzero solution if we choose λ such that det(A − λI) = 0. • Such values of λ are called eigenvalues of A, and the nonzero solutions x are called eigenvectors. Example 4: Eigenvalues (1 of 3)

• Find the eigenvalues and eigenvectors of the matrix A:

$$\mathbf{A} = \begin{pmatrix} 3 & -1 \\ 4 & -2 \end{pmatrix}$$

• Solution: Choose λ such that det(A − λI) = 0, as follows:

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \det\left[\begin{pmatrix} 3 & -1 \\ 4 & -2 \end{pmatrix} - \lambda\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right] = \det\begin{pmatrix} 3-\lambda & -1 \\ 4 & -2-\lambda \end{pmatrix}$$
$$= (3-\lambda)(-2-\lambda) + 4 = \lambda^2 - \lambda - 2 = (\lambda-2)(\lambda+1) \;\Rightarrow\; \lambda_1 = 2,\; \lambda_2 = -1$$

Example 4: First Eigenvector (2 of 3)

• To find the eigenvectors of the matrix A, we need to solve (A − λI)x = 0 for λ = 2 and λ = −1. • Eigenvector for λ = 2: Solve

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \begin{pmatrix} 3-2 & -1 \\ 4 & -2-2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

and this implies that x₁ = x₂. So

$$\mathbf{x}^{(1)} = \begin{pmatrix} x_2 \\ x_2 \end{pmatrix} = c\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \; c \text{ arbitrary} \;\Rightarrow\; \text{choose } \mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

Example 4: Second Eigenvector (3 of 3)

• Eigenvector for λ = −1: Solve

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \begin{pmatrix} 3+1 & -1 \\ 4 & -2+1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 4 & -1 \\ 4 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

and this implies that x₂ = 4x₁. So

$$\mathbf{x}^{(2)} = \begin{pmatrix} x_1 \\ 4x_1 \end{pmatrix} = c\begin{pmatrix} 1 \\ 4 \end{pmatrix}, \; c \text{ arbitrary} \;\Rightarrow\; \text{choose } \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$

Normalized Eigenvectors

• From the previous example, we see that eigenvectors are determined up to a nonzero multiplicative constant. • If this constant is specified in some particular way, then the eigenvector is said to be normalized. • For example, eigenvectors are sometimes normalized by choosing the constant so that ‖x‖ = (x, x)^{1/2} = 1. Algebraic and Geometric Multiplicity

• In finding the eigenvalues λ of an n × n matrix A, we solve det(A − λI) = 0. • Since this involves finding the determinant of an n × n matrix, the problem reduces to finding the roots of an nth degree polynomial.

• Denote these roots, or eigenvalues, by λ₁, λ₂, …, λₙ. • If an eigenvalue is repeated m times, then its algebraic multiplicity is m. • Each eigenvalue has at least one eigenvector, and an eigenvalue of algebraic multiplicity m may have q linearly independent eigenvectors, 1 ≤ q ≤ m; q is called the geometric multiplicity of the eigenvalue. Eigenvectors and Linear Independence

• If an eigenvalue λ has algebraic multiplicity 1, then it is said to be simple, and the geometric multiplicity is 1 also. • If each eigenvalue of an n × n matrix A is simple, then A has n distinct eigenvalues. It can be shown that the n eigenvectors corresponding to these eigenvalues are linearly independent. • If a matrix has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors, since for each repeated eigenvalue we may have q < m. This may lead to complications in solving systems of differential equations. Example 5: Eigenvalues (1 of 5)

• Find the eigenvalues and eigenvectors of the matrix A:

$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

• Solution: Choose λ such that det(A − λI) = 0, as follows:

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \det\begin{pmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{pmatrix} = -\lambda^3 + 3\lambda + 2 = -(\lambda - 2)(\lambda + 1)^2$$
$$\Rightarrow\; \lambda_1 = 2, \quad \lambda_2 = -1, \quad \lambda_3 = -1$$

Example 5: First Eigenvector (2 of 5)

• Eigenvector for λ = 2: Solve (A − λI)x = 0, as follows:

$$\begin{pmatrix} -2 & 1 & 1 & 0 \\ 1 & -2 & 1 & 0 \\ 1 & 1 & -2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1 & -2 & 0 \\ 1 & -2 & 1 & 0 \\ -2 & 1 & 1 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1 & -2 & 0 \\ 0 & -3 & 3 & 0 \\ 0 & 3 & -3 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \begin{aligned} x_1 - x_3 &= 0 \\ x_2 - x_3 &= 0 \end{aligned}$$
$$\mathbf{x}^{(1)} = \begin{pmatrix} x_3 \\ x_3 \\ x_3 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \; c_1 \text{ arbitrary} \;\Rightarrow\; \text{choose } \mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

Example 5: 2nd and 3rd Eigenvectors (3 of 5)

• Eigenvector for λ = −1: Solve (A − λI)x = 0, as follows:

$$\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; x_1 + x_2 + x_3 = 0$$
$$\mathbf{x} = \begin{pmatrix} -x_2 - x_3 \\ x_2 \\ x_3 \end{pmatrix} = x_2\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}, \quad x_2, x_3 \text{ arbitrary}$$
$$\Rightarrow\; \text{choose } \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$$

Example 5: Eigenvectors of A (4 of 5)

• Thus three eigenvectors of A are

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$$

where x⁽²⁾, x⁽³⁾ correspond to the double eigenvalue λ = −1. • It can be shown that x⁽¹⁾, x⁽²⁾, x⁽³⁾ are linearly independent. • Hence A is a 3 × 3 symmetric matrix (A = Aᵀ) with 3 real eigenvalues and 3 linearly independent eigenvectors:

$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

Example 5: Eigenvectors of A (5 of 5)

• Note that we could have chosen

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}$$

• Then the eigenvectors are orthogonal, since

$$(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}) = 0, \quad (\mathbf{x}^{(1)}, \mathbf{x}^{(3)}) = 0, \quad (\mathbf{x}^{(2)}, \mathbf{x}^{(3)}) = 0$$

• Thus A is a 3 × 3 symmetric matrix with 3 real eigenvalues and 3 linearly independent orthogonal eigenvectors.

Hermitian Matrices

• A self-adjoint, or Hermitian, matrix satisfies A = A*, where we recall that A* = Āᵀ.

• Thus for a Hermitian matrix, aᵢⱼ = āⱼᵢ. • Note that if A has real entries and is symmetric (see last example), then A is Hermitian. • An n × n Hermitian matrix A has the following properties: – All eigenvalues of A are real. – There exists a full set of n linearly independent eigenvectors of A. – If x⁽¹⁾ and x⁽²⁾ are eigenvectors that correspond to different eigenvalues of A, then x⁽¹⁾ and x⁽²⁾ are orthogonal. – Corresponding to an eigenvalue of algebraic multiplicity m, it is possible to choose m mutually orthogonal eigenvectors, and hence A has a full set of n linearly independent orthogonal eigenvectors.
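• These properties are exploited by the dedicated symmetric/Hermitian eigensolvers. A numpy sketch on the matrix of Example 5 (our addition; `eigh` returns eigenvalues in ascending order and an orthonormal eigenvector set):

```python
# A real symmetric (hence Hermitian) matrix: eigh gives real eigenvalues
# and mutually orthogonal eigenvectors.
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
vals, vecs = np.linalg.eigh(A)

print(vals)                                    # [-1. -1.  2.]
print(np.allclose(vecs.T @ vecs, np.eye(3)))   # True: columns are orthonormal
```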

Boyce/DiPrima/Meade 11th ed, Ch 7.4: Basic Theory of Systems of First Order Linear Equations

• The general theory of a system of n first order linear equations

$$x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t)$$
$$x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t)$$

parallels that of a single nth order linear equation. • This system can be written as x′ = P(t)x + g(t), where

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad \mathbf{g}(t) = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}, \quad \mathbf{P}(t) = \begin{pmatrix} p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\ p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t) \end{pmatrix}$$

Vector Solutions of an ODE System

• A vector x = φ(t) is a solution of x′ = P(t)x + g(t) if the components of x,

$$x_1 = \phi_1(t),\quad x_2 = \phi_2(t),\quad \dots,\quad x_n = \phi_n(t),$$

satisfy the system of equations on I: α < t < β.

• For comparison, recall that x′ = P(t)x + g(t) represents our system of equations

$$x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t)$$
$$x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t)$$

• Assuming P and g are continuous on I, such a solution exists by Theorem 7.1.2.

Homogeneous Case; Vector Function Notation

• As in Chapters 3 and 4, we first examine the general homogeneous equation x′ = P(t)x. • Also, the following notation for the vector functions x⁽¹⁾, x⁽²⁾, …, x⁽ᵏ⁾, … will be used:

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} x_{11}(t) \\ x_{21}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix}, \quad \mathbf{x}^{(2)}(t) = \begin{pmatrix} x_{12}(t) \\ x_{22}(t) \\ \vdots \\ x_{n2}(t) \end{pmatrix}, \;\dots,\; \mathbf{x}^{(k)}(t) = \begin{pmatrix} x_{1k}(t) \\ x_{2k}(t) \\ \vdots \\ x_{nk}(t) \end{pmatrix}, \;\dots$$

Theorem 7.4.1

• If the vector functions x⁽¹⁾ and x⁽²⁾ are solutions of the system x′ = P(t)x, then the linear combination c₁x⁽¹⁾ + c₂x⁽²⁾ is also a solution for any constants c₁ and c₂.

• Note: By repeatedly applying the result of this theorem, it can be seen that every finite linear combination

$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \dots + c_k\mathbf{x}^{(k)}(t)$$

of solutions x⁽¹⁾, x⁽²⁾, …, x⁽ᵏ⁾ is itself a solution to x′ = P(t)x.

Theorem 7.4.2

• If x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ are linearly independent solutions of the system x′ = P(t)x for each point in I: α < t < β, then each solution x = φ(t) can be expressed uniquely in the form

$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \dots + c_n\mathbf{x}^{(n)}(t)$$

• If solutions x⁽¹⁾, …, x⁽ⁿ⁾ are linearly independent for each point in I: α < t < β, then they are fundamental solutions on I, and the general solution is given by the expression above.

The Wronskian and Linear Independence

• The proof of Theorem 7.4.2 uses the fact that if x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ are linearly independent on I, then det X(t) ≠ 0 on I, where

$$\mathbf{X}(t) = \begin{pmatrix} x_{11}(t) & \cdots & x_{1n}(t) \\ \vdots & \ddots & \vdots \\ x_{n1}(t) & \cdots & x_{nn}(t) \end{pmatrix}$$

• The Wronskian of x⁽¹⁾, …, x⁽ⁿ⁾ is defined as W[x⁽¹⁾, …, x⁽ⁿ⁾](t) = det X(t). • It follows that W[x⁽¹⁾, …, x⁽ⁿ⁾](t) ≠ 0 on I iff x⁽¹⁾, …, x⁽ⁿ⁾ are linearly independent for each point in I.

Theorem 7.4.3

• If x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ are solutions of the system x′ = P(t)x on I: α < t < β, then the Wronskian W[x⁽¹⁾, …, x⁽ⁿ⁾](t) is either identically zero on I or else is never zero on I. • This result relies on Abel’s formula for the Wronskian:

$$\frac{dW}{dt} = \left(p_{11} + p_{22} + \dots + p_{nn}\right)W \;\Rightarrow\; W(t) = c\,e^{\int\left[p_{11}(t) + p_{22}(t) + \dots + p_{nn}(t)\right]dt}$$

where c is an arbitrary constant (refer to Section 3.2). • This result enables us to determine whether a given set of solutions x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ are fundamental solutions by evaluating W[x⁽¹⁾, …, x⁽ⁿ⁾](t) at any point t in α < t < β.

Theorem 7.4.4

• Let

$$\mathbf{e}^{(1)} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \mathbf{e}^{(2)} = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad \mathbf{e}^{(n)} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$

• Let x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ be solutions of the system x′ = P(t)x, α < t < β, that satisfy the initial conditions

$$\mathbf{x}^{(1)}(t_0) = \mathbf{e}^{(1)}, \;\dots,\; \mathbf{x}^{(n)}(t_0) = \mathbf{e}^{(n)},$$

respectively, where t₀ is any point in α < t < β. Then x⁽¹⁾, x⁽²⁾, …, x⁽ⁿ⁾ form a fundamental set of solutions of x′ = P(t)x.

Theorem 7.4.5

• Consider the system x′ = P(t)x, where each element of P is a real-valued continuous function. If x = u(t) + iv(t) is a complex-valued solution of this equation, then its real part u(t) and its imaginary part v(t) are also solutions of the equation.

Boyce/DiPrima/Meade 11th ed, Ch 7.5: Homogeneous Linear Systems with Constant Coefficients


• We consider here a homogeneous system of n first order linear equations with constant, real coefficients:

$$x_1' = a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n$$
$$x_2' = a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n$$
$$\vdots$$
$$x_n' = a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n$$

• This system can be written as x′ = Ax, where

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

Equilibrium Solutions

• Note that if n = 1, then the system reduces to

$$x' = ax \;\Rightarrow\; x(t) = ce^{at}$$

• Recall that x = 0 is the only equilibrium solution if a ≠ 0. • Further, x = 0 is an asymptotically stable solution if a < 0, since other solutions approach x = 0 in this case. • Also, x = 0 is an unstable solution if a > 0, since other solutions depart from x = 0 in this case. • For n > 1, equilibrium solutions are similarly found by solving Ax = 0. We assume det A ≠ 0, so that x = 0 is the only solution. Determining whether x = 0 is asymptotically stable or unstable is an important question here as well. Phase Plane

• When n = 2, the system reduces to

$$x_1' = a_{11}x_1 + a_{12}x_2$$
$$x_2' = a_{21}x_1 + a_{22}x_2$$

• This case can be visualized in the x₁x₂-plane, which is called the phase plane. • In the phase plane, a direction field can be obtained by evaluating Ax at many points and plotting the resulting vectors, which will be tangent to solution vectors. • A plot that shows representative solution trajectories is called a phase portrait. • Examples of phase planes, direction fields, and phase portraits will be given later in this section.

Solving Homogeneous System

• To construct a general solution to x′ = Ax, assume a solution of the form x = ξe^{rt}, where the exponent r and the constant vector ξ are to be determined. • Substituting x = ξe^{rt} into x′ = Ax, we obtain

$$r\boldsymbol{\xi}e^{rt} = \mathbf{A}\boldsymbol{\xi}e^{rt} \;\Rightarrow\; r\boldsymbol{\xi} = \mathbf{A}\boldsymbol{\xi} \;\Rightarrow\; (\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0}$$

• Thus to solve the homogeneous system of differential equations x′ = Ax, we must find the eigenvalues and eigenvectors of A. • Therefore x = ξe^{rt} is a solution of x′ = Ax provided that r is an eigenvalue and ξ is an eigenvector of the coefficient matrix A.

Example 1 (1 of 2)

• Find the general solution of the system

$$\mathbf{x}' = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}\mathbf{x}$$

• The most important feature of this system is that the coefficient matrix is a diagonal matrix. Thus, by writing the system in scalar form, we obtain

$$x_1' = 2x_1, \qquad x_2' = -3x_2$$

• Each of these equations involves only one of the unknown variables, so we can solve the two equations separately. In this way we find that

$$x_1 = c_1e^{2t}, \qquad x_2 = c_2e^{-3t}$$

where c₁ and c₂ are arbitrary constants.

Example 1 (2 of 2)

• Then, writing the solution in vector form, we have

$$\mathbf{x} = \begin{pmatrix} c_1e^{2t} \\ c_2e^{-3t} \end{pmatrix} = c_1\begin{pmatrix} e^{2t} \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 0 \\ e^{-3t} \end{pmatrix} = c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-3t}$$

• Now we define two solutions x⁽¹⁾ and x⁽²⁾ so that

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}e^{2t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-3t}$$

• The Wronskian of these solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{2t} & 0 \\ 0 & e^{-3t} \end{vmatrix} = e^{-t}$$

which is never zero. Therefore, x⁽¹⁾ and x⁽²⁾ form a fundamental set of solutions.

Example 2: Direction Field (1 of 9)

• Consider the homogeneous equation x′ = Ax below:

$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}$$

• A direction field for this system is given below. • Substituting x = ξe^{rt} in for x, and rewriting the system as (A − rI)ξ = 0, we obtain

$$\begin{pmatrix} 1-r & 1 \\ 4 & 1-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Example 2: Eigenvalues (2 of 9)

• Our solution has the form x = ξe^{rt}, where r and ξ are found by solving

$$\begin{pmatrix} 1-r & 1 \\ 4 & 1-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

• Recalling that this is an eigenvalue problem, we determine r by solving det(A − rI) = 0:

$$\begin{vmatrix} 1-r & 1 \\ 4 & 1-r \end{vmatrix} = (1-r)^2 - 4 = r^2 - 2r - 3 = (r-3)(r+1)$$

• Thus r₁ = 3 and r₂ = −1.

Example 2: First Eigenvector (3 of 9)

• Eigenvector for r₁ = 3: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} 1-3 & 1 \\ 4 & 1-3 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} -2 & 1 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} -2 & 1 & 0 \\ 4 & -2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -1/2 & 0 \\ 4 & -2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -1/2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \xi_1 - \tfrac{1}{2}\xi_2 = 0$$
$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} \tfrac{1}{2}\xi_2 \\ \xi_2 \end{pmatrix} = c\begin{pmatrix} 1/2 \\ 1 \end{pmatrix}, \; c \text{ arbitrary} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

Example 2: Second Eigenvector (4 of 9)

• Eigenvector for r₂ = −1: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} 1+1 & 1 \\ 4 & 1+1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} 2 & 1 & 0 \\ 4 & 2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1/2 & 0 \\ 4 & 2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1/2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \xi_1 + \tfrac{1}{2}\xi_2 = 0$$
$$\boldsymbol{\xi}^{(2)} = \begin{pmatrix} -\tfrac{1}{2}\xi_2 \\ \xi_2 \end{pmatrix} = c\begin{pmatrix} -1/2 \\ 1 \end{pmatrix}, \; c \text{ arbitrary} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$$

Example 2: General Solution (5 of 9)

• The corresponding solutions x = ξe^{rt} of x′ = Ax are

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}$$

• The Wronskian of these two solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{vmatrix} = -4e^{2t} \neq 0$$

• Thus x⁽¹⁾ and x⁽²⁾ are fundamental solutions, and the general solution of x′ = Ax is

$$\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}$$

Example 2: Phase Plane for x⁽¹⁾ (6 of 9)

• To visualize the solution, consider first x = c₁x⁽¹⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} \;\Rightarrow\; x_1 = c_1e^{3t},\; x_2 = 2c_1e^{3t}$$

• Now, eliminating the exponential,

$$e^{3t} = \frac{x_1}{c_1} = \frac{x_2}{2c_1} \;\Rightarrow\; x_2 = 2x_1$$

• Thus x⁽¹⁾ lies along the straight line x₂ = 2x₁, which is the line through the origin in the direction of the first eigenvector ξ⁽¹⁾. • If the solution is the trajectory of a particle, with position given by (x₁, x₂), then it is in Q1 when c₁ > 0, and in Q3 when c₁ < 0. • In either case, the particle moves away from the origin as t increases.

Example 2: Phase Plane for x⁽²⁾ (7 of 9)

• Next, consider x = c₂x⁽²⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t} \;\Rightarrow\; x_1 = c_2e^{-t},\; x_2 = -2c_2e^{-t}$$

• Then x⁽²⁾ lies along the straight line x₂ = −2x₁, which is the line through the origin in the direction of the second eigenvector ξ⁽²⁾. • If the solution is the trajectory of a particle, with position given by (x₁, x₂), then it is in Q4 when c₂ > 0, and in Q2 when c₂ < 0. • In either case, the particle moves toward the origin as t increases.

Example 2: Phase Plane for General Solution (8 of 9)

• The general solution is x = c₁x⁽¹⁾ + c₂x⁽²⁾:

$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}$$

• As t → ∞, c₁x⁽¹⁾ is dominant and c₂x⁽²⁾ becomes negligible. Thus, for c₁ ≠ 0, all solutions asymptotically approach the line x₂ = 2x₁ as t → ∞.

• Similarly, for c₂ ≠ 0, all solutions asymptotically approach the line x₂ = −2x₁ as t → −∞. • The origin is a saddle point, and is unstable. See graph.

Example 2: Time Plots for General Solution (9 of 9)

• The general solution is x = c₁x⁽¹⁾ + c₂x⁽²⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1e^{3t} + c_2e^{-t} \\ 2c_1e^{3t} - 2c_2e^{-t} \end{pmatrix}$$

• As an alternative to phase plane plots, we can graph x₁ or x₂ as a function of t. A few plots of x₁ are given below. • Note that when c₁ = 0, x₁(t) = c₂e^{−t} → 0 as t → ∞. Otherwise, x₁(t) = c₁e^{3t} + c₂e^{−t} grows unbounded as t → ∞.

• Graphs of x₂ are similarly obtained.

Example 3: Direction Field (1 of 9)

• Consider the homogeneous equation x′ = Ax below:

$$\mathbf{x}' = \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix}\mathbf{x}$$

• A direction field for this system is given below. • Substituting x = ξe^{rt} in for x, and rewriting the system as (A − rI)ξ = 0, we obtain

$$\begin{pmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Example 3: Eigenvalues (2 of 9)

• Our solution has the form x = ξe^{rt}, where r and ξ are found by solving

$$\begin{pmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

• Recalling that this is an eigenvalue problem, we determine r by solving det(A − rI) = 0:

$$\begin{vmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{vmatrix} = (-3-r)(-2-r) - 2 = r^2 + 5r + 4 = (r+1)(r+4)$$

• Thus r₁ = −1 and r₂ = −4.

Example 3: First Eigenvector (3 of 9)

• Eigenvector for r₁ = −1: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} -3+1 & \sqrt{2} \\ \sqrt{2} & -2+1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} -2 & \sqrt{2} \\ \sqrt{2} & -1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} -2 & \sqrt{2} & 0 \\ \sqrt{2} & -1 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -\sqrt{2}/2 & 0 \\ \sqrt{2} & -1 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -\sqrt{2}/2 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} \sqrt{2}\,\xi_2/2 \\ \xi_2 \end{pmatrix} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}$$

Example 3: Second Eigenvector (4 of 9)

• Eigenvector for r₂ = −4: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} -3+4 & \sqrt{2} \\ \sqrt{2} & -2+4 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} 1 & \sqrt{2} \\ \sqrt{2} & 2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} 1 & \sqrt{2} & 0 \\ \sqrt{2} & 2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & \sqrt{2} & 0 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \boldsymbol{\xi}^{(2)} = \begin{pmatrix} -\sqrt{2}\,\xi_2 \\ \xi_2 \end{pmatrix} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}$$

Example 3: General Solution (5 of 9)

• The corresponding solutions x = ξe^{rt} of x′ = Ax are

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t}$$

• The Wronskian of these two solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{-t} & -\sqrt{2}e^{-4t} \\ \sqrt{2}e^{-t} & e^{-4t} \end{vmatrix} = 3e^{-5t} \neq 0$$

• Thus x⁽¹⁾ and x⁽²⁾ are fundamental solutions, and the general solution of x′ = Ax is

$$\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) = c_1\begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t} + c_2\begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t}$$

Example 3: Phase Plane for x⁽¹⁾ (6 of 9)

• To visualize the solution, consider first x = c₁x⁽¹⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t} \;\Rightarrow\; x_1 = c_1e^{-t},\; x_2 = \sqrt{2}\,c_1e^{-t} \;\Rightarrow\; x_2 = \sqrt{2}\,x_1$$

• Thus x⁽¹⁾ lies along the straight line x₂ = 2^{1/2}x₁, which is the line through the origin in the direction of the first eigenvector ξ⁽¹⁾. • If the solution is the trajectory of a particle, with position given by (x₁, x₂), then it is in Q1 when c₁ > 0, and in Q3 when c₁ < 0. • In either case, the particle moves toward the origin as t increases.

Example 3: Phase Plane for x⁽²⁾ (7 of 9)

• Next, consider x = c₂x⁽²⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_2\begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t} \;\Rightarrow\; x_1 = -\sqrt{2}\,c_2e^{-4t},\; x_2 = c_2e^{-4t}$$

• Then x⁽²⁾ lies along the straight line x₂ = −2^{−1/2}x₁, which is the line through the origin in the direction of the second eigenvector ξ⁽²⁾. • If the solution is the trajectory of a particle, with position given by (x₁, x₂), then it is in Q2 when c₂ > 0, and in Q4 when c₂ < 0. • In either case, the particle moves toward the origin as t increases.

Example 3: Phase Plane for General Solution (8 of 9)

• The general solution is x = c₁x⁽¹⁾ + c₂x⁽²⁾:

$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}e^{-t} + c_2\begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}e^{-4t}$$

• As t → ∞, c₁x⁽¹⁾ is dominant and c₂x⁽²⁾ becomes negligible. Thus, for c₁ ≠ 0, all solutions asymptotically approach the origin along the line x₂ = 2^{1/2}x₁ as t → ∞. • Similarly, all solutions are unbounded as t → −∞. • The origin is a node, and is asymptotically stable.

Example 3: Time Plots for General Solution (9 of 9)

• The general solution is x = c₁x⁽¹⁾ + c₂x⁽²⁾:

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1e^{-t} - \sqrt{2}\,c_2e^{-4t} \\ \sqrt{2}\,c_1e^{-t} + c_2e^{-4t} \end{pmatrix}$$

• As an alternative to phase plane plots, we can graph x₁ or x₂ as a function of t. A few plots of x₁ are given below.

• Graphs of x₂ are similarly obtained.

2 x 2 Case: Real Eigenvalues, Saddle Points and Nodes

• The previous two examples demonstrate the two main cases for a 2 x 2 real system with real and different eigenvalues: – The eigenvalues have opposite signs, in which case the origin is a saddle point and is unstable. – The eigenvalues have the same sign, in which case the origin is a node; it is asymptotically stable if the eigenvalues are negative and unstable if the eigenvalues are positive. Eigenvalues, Eigenvectors and Fundamental Solutions

• In general, for an n × n real linear system x′ = Ax, there are three possibilities: – All eigenvalues are real and different from each other. – Some eigenvalues occur in complex conjugate pairs. – Some eigenvalues are repeated.

• If the eigenvalues r₁, …, rₙ are real and different, then there are n corresponding linearly independent eigenvectors ξ⁽¹⁾, …, ξ⁽ⁿ⁾. The associated solutions of x′ = Ax are

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t}, \quad \dots, \quad \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt}$$

• Using the Wronskian, it can be shown that these solutions are linearly independent, and hence form a fundamental set of solutions. Thus the general solution is

$$\mathbf{x} = c_1\boldsymbol{\xi}^{(1)}e^{r_1t} + \dots + c_n\boldsymbol{\xi}^{(n)}e^{r_nt}$$

Hermitian Case: Eigenvalues, Eigenvectors & Fundamental Solutions

• If A is an n × n Hermitian matrix (real and symmetric), then all eigenvalues r₁, …, rₙ are real, although some may repeat. • In any case, there are n corresponding linearly independent and orthogonal eigenvectors ξ⁽¹⁾, …, ξ⁽ⁿ⁾. The associated solutions of x′ = Ax are

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t}, \quad \dots, \quad \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt}$$

and form a fundamental set of solutions.

Example 4: Hermitian Matrix (1 of 3)

• Consider the homogeneous equation x′ = Ax below:

$$\mathbf{x}' = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}\mathbf{x}$$

• The eigenvalues were found previously in Ch 7.3: r₁ = 2, r₂ = −1 and r₃ = −1.

• Corresponding eigenvectors:

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad \boldsymbol{\xi}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$$

Example 4: General Solution (2 of 3)

• The fundamental solutions are

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}e^{-t}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}e^{-t}$$

with general solution

$$\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}e^{-t} + c_3\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}e^{-t}$$

Example 4: General Solution Behavior (3 of 3)

• The general solution is x = c₁x⁽¹⁾ + c₂x⁽²⁾ + c₃x⁽³⁾:

$$\mathbf{x} = c_1\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}e^{-t} + c_3\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}e^{-t}$$

• As t → ∞, c₁x⁽¹⁾ is dominant and c₂x⁽²⁾, c₃x⁽³⁾ become negligible.

• Thus, for c₁ ≠ 0, all solutions x become unbounded as t → ∞,

while for c₁ = 0, all solutions x → 0 as t → ∞.

• The initial points that cause c₁ = 0 are those that lie in the plane determined by ξ⁽²⁾ and ξ⁽³⁾. Thus solutions that start in this plane approach the origin as t → ∞.

Complex Eigenvalues and Fundamental Solns

• If some of the eigenvalues r₁, …, rₙ occur in complex conjugate pairs, but otherwise are different, then there are still n corresponding linearly independent solutions

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t}, \quad \dots, \quad \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt},$$

which form a fundamental set of solutions. Some may be complex-valued, but real-valued solutions may be derived from them. This situation will be examined in Ch 7.6.

• If the coefficient matrix A is complex, then complex eigenvalues need not occur in conjugate pairs, but solutions will still have the above form (if the eigenvalues are distinct) and these solutions may be complex-valued. Repeated Eigenvalues and Fundamental Solns

• If some of the eigenvalues r₁, …, rₙ are repeated, then there may not be n corresponding linearly independent solutions of the form

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1t}, \quad \dots, \quad \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_nt}$$

• In order to obtain a fundamental set of solutions, it may be necessary to seek additional solutions of another form. • This situation is analogous to that for an nth order linear equation with constant coefficients, in which a repeated root gave rise to solutions of the form

$$e^{rt}, \quad te^{rt}, \quad t^2e^{rt}, \;\dots$$

This case of repeated eigenvalues is examined in Section 7.8.

Boyce/DiPrima/Meade 11th ed, Ch 7.6: Complex Eigenvalues


• We consider again a homogeneous system of n first order linear equations with constant, real coefficients,

$$x_1' = a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n$$
$$x_2' = a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n$$
$$\vdots$$
$$x_n' = a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n,$$

and thus the system can be written as x′ = Ax, where

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

Conjugate Eigenvalues and Eigenvectors

• We know that x = ξe^{rt} is a solution of x′ = Ax, provided r is an eigenvalue and ξ is an eigenvector of A.

• The eigenvalues r₁, …, rₙ are the roots of det(A − rI) = 0, and the corresponding eigenvectors satisfy (A − rI)ξ = 0. • If A is real, then the coefficients in the polynomial equation det(A − rI) = 0 are real, and hence any complex eigenvalues must occur in conjugate pairs. Thus if r₁ = λ + iμ is an eigenvalue, then so is r₂ = λ − iμ. • The corresponding eigenvectors ξ⁽¹⁾, ξ⁽²⁾ are conjugates also. To see this, recall A and I have real entries, and hence

$$(\mathbf{A} - r_1\mathbf{I})\boldsymbol{\xi}^{(1)} = \mathbf{0} \;\Rightarrow\; (\mathbf{A} - \bar{r}_1\mathbf{I})\bar{\boldsymbol{\xi}}^{(1)} = \mathbf{0} \;\Rightarrow\; (\mathbf{A} - r_2\mathbf{I})\bar{\boldsymbol{\xi}}^{(1)} = \mathbf{0}$$

Conjugate Solutions

• It follows from the previous slide that the solutions

$$\mathbf{x}^{(1)} = \boldsymbol{\xi}^{(1)}e^{r_1t}, \qquad \mathbf{x}^{(2)} = \boldsymbol{\xi}^{(2)}e^{r_2t}$$

corresponding to these eigenvalues and eigenvectors are conjugates as well, since

$$\mathbf{x}^{(2)} = \boldsymbol{\xi}^{(2)}e^{r_2t} = \bar{\boldsymbol{\xi}}^{(1)}e^{\bar{r}_1t} = \bar{\mathbf{x}}^{(1)}$$

Example 1: Direction Field (1 of 7)

• Consider the homogeneous equation x′ = Ax below:

$$\mathbf{x}' = \begin{pmatrix} -1/2 & 1 \\ -1 & -1/2 \end{pmatrix}\mathbf{x}$$

• A direction field for this system is given below. • Substituting x = ξe^{rt} in for x, and rewriting the system as (A − rI)ξ = 0, we obtain

$$\begin{pmatrix} -\tfrac{1}{2}-r & 1 \\ -1 & -\tfrac{1}{2}-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Example 1: Complex Eigenvalues (2 of 7)

• We determine r by solving det(A − rI) = 0. Now

$$\begin{vmatrix} -\tfrac{1}{2}-r & 1 \\ -1 & -\tfrac{1}{2}-r \end{vmatrix} = \left(\tfrac{1}{2}+r\right)^2 + 1 = r^2 + r + \tfrac{5}{4}$$

• Thus

$$r = \frac{-1 \pm \sqrt{1 - 4(5/4)}}{2} = \frac{-1 \pm 2i}{2} = -\frac{1}{2} \pm i$$

• Therefore the eigenvalues are r₁ = −1/2 + i and r₂ = −1/2 − i.

Example 1: First Eigenvector (3 of 7)

• Eigenvector for r₁ = −1/2 + i: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} -\tfrac{1}{2}-r & 1 \\ -1 & -\tfrac{1}{2}-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} -i & 1 \\ -1 & -i \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} -i & 1 & 0 \\ -1 & -i & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & i & 0 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \boldsymbol{\xi}^{(1)} = \begin{pmatrix} -i\xi_2 \\ \xi_2 \end{pmatrix} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ i \end{pmatrix}$$

• Thus

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

Example 1: Second Eigenvector (4 of 7)

• Eigenvector for r₂ = −1/2 − i: Solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\begin{pmatrix} i & 1 & 0 \\ -1 & i & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -i & 0 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \boldsymbol{\xi}^{(2)} = \begin{pmatrix} i\xi_2 \\ \xi_2 \end{pmatrix} \;\Rightarrow\; \text{choose } \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -i \end{pmatrix}$$

• Thus

$$\boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} - i\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

Example 1: General Solution (5 of 7)

• The corresponding real-valued solutions of x′ = Ax are

$$\mathbf{u}(t) = e^{-t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\cos t - \begin{pmatrix} 0 \\ 1 \end{pmatrix}\sin t\right] = e^{-t/2}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix}$$

$$\mathbf{v}(t) = e^{-t/2}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix}\sin t + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\cos t\right] = e^{-t/2}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$$

• The Wronskian of these two solutions is

$$W[\mathbf{u}, \mathbf{v}](t) = \begin{vmatrix} e^{-t/2}\cos t & e^{-t/2}\sin t \\ -e^{-t/2}\sin t & e^{-t/2}\cos t \end{vmatrix} = e^{-t} \neq 0$$

• Thus u(t) and v(t) are real-valued fundamental solutions of x′ = Ax, with general solution x = c₁u + c₂v.
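• The real-valued solutions can be extracted from a library eigenpair as well. A numpy sketch (our addition; the derivative of u is checked by a central difference, an approximation we choose here for brevity):

```python
# Real-valued solution from a complex eigenpair of A = [[-1/2, 1], [-1, -1/2]].
import numpy as np

A = np.array([[-0.5, 1.0], [-1.0, -0.5]])
r, xi = np.linalg.eig(A)               # r = -0.5 +/- 1j (a conjugate pair)
k = np.argmax(r.imag)                  # pick the eigenvalue with mu > 0
lam, mu = r[k].real, r[k].imag
a, b = xi[:, k].real, xi[:, k].imag    # xi = a + i b

def u(t):                              # real part of xi * e^{rt}
    return np.exp(lam*t) * (a*np.cos(mu*t) - b*np.sin(mu*t))

t0, h = 1.3, 1e-6
print(np.allclose((u(t0+h) - u(t0-h)) / (2*h), A @ u(t0), atol=1e-5))  # True
```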

Example 1: Phase Plane (6 of 7)

• Given below is the phase plane plot for solutions x, with

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} e^{-t/2}\cos t \\ -e^{-t/2}\sin t \end{pmatrix} + c_2\begin{pmatrix} e^{-t/2}\sin t \\ e^{-t/2}\cos t \end{pmatrix}$$

• Each solution trajectory approaches the origin along a spiral path as t → ∞, since the coordinates are products of decaying exponential and sine or cosine factors. • The graph of u passes through (1, 0), since u(0) = (1, 0). Similarly, the graph of v passes through (0, 1). • The origin is a spiral point, and is asymptotically stable.

Example 1: Time Plots (7 of 7)

• The general solution is x = c₁u + c₂v:

$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1e^{-t/2}\cos t + c_2e^{-t/2}\sin t \\ -c_1e^{-t/2}\sin t + c_2e^{-t/2}\cos t \end{pmatrix}$$

• As an alternative to phase plane plots, we can graph x₁ or x₂ as a function of t. A few plots of x₁ are given below, each one a decaying oscillation as t → ∞.

General Solution

• To summarize, suppose r₁ = λ + iμ, r₂ = λ − iμ, and that r₃, …, rₙ are all real and distinct eigenvalues of A. Let the corresponding eigenvectors be

$$\boldsymbol{\xi}^{(1)} = \mathbf{a} + i\mathbf{b}, \quad \boldsymbol{\xi}^{(2)} = \mathbf{a} - i\mathbf{b}, \quad \boldsymbol{\xi}^{(3)}, \; \boldsymbol{\xi}^{(4)}, \dots, \boldsymbol{\xi}^{(n)}$$

• Then the general solution of x′ = Ax is

$$\mathbf{x} = c_1\mathbf{u}(t) + c_2\mathbf{v}(t) + c_3\boldsymbol{\xi}^{(3)}e^{r_3t} + \dots + c_n\boldsymbol{\xi}^{(n)}e^{r_nt}$$

where

$$\mathbf{u}(t) = e^{\lambda t}(\mathbf{a}\cos\mu t - \mathbf{b}\sin\mu t), \qquad \mathbf{v}(t) = e^{\lambda t}(\mathbf{a}\sin\mu t + \mathbf{b}\cos\mu t)$$

Real-Valued Solutions

• Thus for complex conjugate eigenvalues r₁ and r₂, the corresponding solutions x⁽¹⁾ and x⁽²⁾ are conjugates also. • To obtain real-valued solutions, use the real and imaginary parts of either x⁽¹⁾ or x⁽²⁾. To see this, let ξ⁽¹⁾ = a + ib. Then

$$\mathbf{x}^{(1)} = \boldsymbol{\xi}^{(1)}e^{(\lambda+i\mu)t} = (\mathbf{a} + i\mathbf{b})\,e^{\lambda t}(\cos\mu t + i\sin\mu t)$$
$$= e^{\lambda t}(\mathbf{a}\cos\mu t - \mathbf{b}\sin\mu t) + ie^{\lambda t}(\mathbf{a}\sin\mu t + \mathbf{b}\cos\mu t) = \mathbf{u}(t) + i\mathbf{v}(t)$$

where

$$\mathbf{u}(t) = e^{\lambda t}(\mathbf{a}\cos\mu t - \mathbf{b}\sin\mu t), \qquad \mathbf{v}(t) = e^{\lambda t}(\mathbf{a}\sin\mu t + \mathbf{b}\cos\mu t)$$

are real-valued solutions of x′ = Ax, and can be shown to be linearly independent.

Spiral Points, Centers, Eigenvalues, and Trajectories

• In the previous example, the general solution was

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} e^{-t/2}\cos t \\ -e^{-t/2}\sin t \end{pmatrix} + c_2\begin{pmatrix} e^{-t/2}\sin t \\ e^{-t/2}\cos t \end{pmatrix}$$

• The origin was a spiral point, and was asymptotically stable. • If the real part of the complex eigenvalues is positive, then trajectories spiral away, unbounded, from the origin, and hence the origin would be an unstable spiral point. • If the real part of the complex eigenvalues is zero, then trajectories circle the origin, neither approaching nor departing. The origin is then called a center and is stable, but not asymptotically stable; the trajectories are periodic in time. • The direction of trajectory motion depends on the entries in A. Example 2: Second Order System with Parameter (1 of 2)

• The system x′ = Ax below contains a parameter α:

$$\mathbf{x}' = \begin{pmatrix} \alpha & 2 \\ -2 & 0 \end{pmatrix}\mathbf{x}$$

• Substituting x = ξe^{rt} in for x and rewriting the system as (A − rI)ξ = 0, we obtain

$$\begin{pmatrix} \alpha-r & 2 \\ -2 & -r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

• Next, solve for r in terms of α:

$$\begin{vmatrix} \alpha-r & 2 \\ -2 & -r \end{vmatrix} = r(r-\alpha) + 4 = r^2 - \alpha r + 4 \;\Rightarrow\; r = \frac{\alpha \pm \sqrt{\alpha^2 - 16}}{2}$$

Example 2: Eigenvalue Analysis (2 of 2)

• The eigenvalues are given by the quadratic formula above. • For α < −4, both eigenvalues are real and negative, and hence the origin is an asymptotically stable node. • For α > 4, both eigenvalues are real and positive, and hence the origin is an unstable node. • For −4 < α < 0, the eigenvalues are complex with a negative real part, and hence the origin is an asymptotically stable spiral point. • For 0 < α < 4, the eigenvalues are complex with a positive real part, and the origin is an unstable spiral point. • For α = 0, the eigenvalues are purely imaginary and the origin is a center; trajectories are closed curves about the origin and periodic. • For α = ±4, the eigenvalues are real and equal, and the origin is a node (Ch 7.8). Second Order Solution Behavior and Eigenvalues: Three Main Cases

• For second order systems, the three main cases are: – The eigenvalues are real and have opposite signs; x = 0 is a saddle point. – The eigenvalues are real, distinct and have the same sign; x = 0 is a node. – The eigenvalues are complex with nonzero real part; x = 0 is a spiral point. • Other possibilities exist and occur as transitions between two of the cases listed above: – A zero eigenvalue occurs during the transition between saddle point and node. Real and equal eigenvalues occur during the transition between nodes and spiral points. Purely imaginary eigenvalues occur during a transition between asymptotically stable and unstable spiral points. • In each case, the eigenvalues are the roots of a quadratic,

$$r = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Example 3: Multiple Spring-Mass System (1 of 6)

• The equations for the system of two masses and three springs discussed in Section 7.1, assuming no external forces, can be expressed as:

$$m_1\frac{d^2x_1}{dt^2} = -(k_1+k_2)x_1 + k_2x_2 \quad\text{and}\quad m_2\frac{d^2x_2}{dt^2} = k_2x_1 - (k_2+k_3)x_2$$

or

$$m_1y_3' = -(k_1+k_2)y_1 + k_2y_2 \quad\text{and}\quad m_2y_4' = k_2y_1 - (k_2+k_3)y_2$$

where y₁ = x₁, y₂ = x₂, y₃ = x₁′, and y₄ = x₂′.

• Given m₁ = 2, m₂ = 9/4, k₁ = 1, k₂ = 3, and k₃ = 15/4, the equations become

$$y_1' = y_3, \quad y_2' = y_4, \quad y_3' = -2y_1 + \tfrac{3}{2}y_2, \quad y_4' = \tfrac{4}{3}y_1 - 3y_2$$

Example 3: Multiple Spring-Mass System (2 of 6)

• Writing the system of equations in matrix form:

$$\mathbf{y}' = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -2 & 3/2 & 0 & 0 \\ 4/3 & -3 & 0 & 0 \end{pmatrix}\mathbf{y} = \mathbf{A}\mathbf{y}$$

• Assuming a solution of the form y = ξe^{rt}, where r must be an eigenvalue of the matrix A and ξ the corresponding eigenvector, the characteristic polynomial of A is

$$r^4 + 5r^2 + 4 = (r^2+1)(r^2+4),$$

yielding the eigenvalues

$$r_1 = i, \quad r_2 = -i, \quad r_3 = 2i, \quad r_4 = -2i$$

Example 3: Multiple Spring-Mass System (3 of 6)

• For the eigenvalues r₁ = i, r₂ = −i, r₃ = 2i, and r₄ = −2i, the corresponding eigenvectors are

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 3 \\ 2 \\ 3i \\ 2i \end{pmatrix}, \quad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 3 \\ 2 \\ -3i \\ -2i \end{pmatrix}, \quad \boldsymbol{\xi}^{(3)} = \begin{pmatrix} 3 \\ -4 \\ 6i \\ -8i \end{pmatrix}, \quad \boldsymbol{\xi}^{(4)} = \begin{pmatrix} 3 \\ -4 \\ -6i \\ 8i \end{pmatrix}$$

• The products ξ⁽¹⁾e^{it} and ξ⁽³⁾e^{2it} yield the complex-valued solutions:

$$\boldsymbol{\xi}^{(1)}e^{it} = (\cos t + i\sin t)\begin{pmatrix} 3 \\ 2 \\ 3i \\ 2i \end{pmatrix} = \begin{pmatrix} 3\cos t \\ 2\cos t \\ -3\sin t \\ -2\sin t \end{pmatrix} + i\begin{pmatrix} 3\sin t \\ 2\sin t \\ 3\cos t \\ 2\cos t \end{pmatrix} = \mathbf{u}^{(1)}(t) + i\mathbf{v}^{(1)}(t)$$
$$\boldsymbol{\xi}^{(3)}e^{2it} = (\cos 2t + i\sin 2t)\begin{pmatrix} 3 \\ -4 \\ 6i \\ -8i \end{pmatrix} = \begin{pmatrix} 3\cos 2t \\ -4\cos 2t \\ -6\sin 2t \\ 8\sin 2t \end{pmatrix} + i\begin{pmatrix} 3\sin 2t \\ -4\sin 2t \\ 6\cos 2t \\ -8\cos 2t \end{pmatrix} = \mathbf{u}^{(2)}(t) + i\mathbf{v}^{(2)}(t)$$

Example 3: Multiple Spring-Mass System (4 of 6)

• After validating that u⁽¹⁾(t), v⁽¹⁾(t), u⁽²⁾(t), v⁽²⁾(t) are linearly independent, the general solution of the system of equations can be written as

$$\mathbf{y} = c_1\begin{pmatrix} 3\cos t \\ 2\cos t \\ -3\sin t \\ -2\sin t \end{pmatrix} + c_2\begin{pmatrix} 3\sin t \\ 2\sin t \\ 3\cos t \\ 2\cos t \end{pmatrix} + c_3\begin{pmatrix} 3\cos 2t \\ -4\cos 2t \\ -6\sin 2t \\ 8\sin 2t \end{pmatrix} + c_4\begin{pmatrix} 3\sin 2t \\ -4\sin 2t \\ 6\cos 2t \\ -8\cos 2t \end{pmatrix}$$

where c₁, c₂, c₃, c₄ are arbitrary constants.

• Each solution is periodic with period 2π, so each trajectory is a closed curve. The first two terms of the solution describe motions with frequency 1 and period 2π, while the second two terms describe motions with frequency 2 and period π. The motions of the two masses differ relative to one another for solutions involving only the first two terms or only the second two terms.

• Here y₁ and y₂ represent the positions of the masses, and y₃ = y₁′, y₄ = y₂′ are their velocities.

Example 3: Multiple Spring-Mass System (5 of 6)

• To obtain the fundamental mode of vibration with frequency 1, take c₃ = c₄ = 0; this occurs when the initial conditions satisfy 3y₂(0) = 2y₁(0) and 3y₄(0) = 2y₃(0).

• To obtain the fundamental mode of vibration with frequency 2, take c₁ = c₂ = 0; this occurs when the initial conditions satisfy 3y₂(0) = −4y₁(0) and 3y₄(0) = −4y₃(0).

• Plots of y₁ and y₂ and parametric plots (y, y′) are shown for a selected solution with frequency 1.

• Recall that y₁ and y₂ represent the positions of the masses, and y₃ = y₁′, y₄ = y₂′ their velocities.

Example 3: Multiple Spring-Mass System (6 of 6)

• Plots of y₁ and y₂ and parametric plots (y, y′) are shown for a selected solution with frequency 2.
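• The frequency-2 mode is easy to confirm numerically: starting from an initial state proportional to (3, −4, 0, 0), the solution should return to that state after one period π. A scipy sketch (our addition; tolerances are illustrative):

```python
# Integrate the spring-mass system y' = Ay over one period of the
# frequency-2 mode and check that the state returns to its start.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 0.0,  0.0, 1.0, 0.0],
              [ 0.0,  0.0, 0.0, 1.0],
              [-2.0,  1.5, 0.0, 0.0],
              [ 4/3, -3.0, 0.0, 0.0]])

y0 = np.array([3.0, -4.0, 0.0, 0.0])    # pure frequency-2 initial state
sol = solve_ivp(lambda t, y: A @ y, (0, np.pi), y0, rtol=1e-10, atol=1e-10)
print(np.allclose(sol.y[:, -1], y0, atol=1e-6))   # True: period is pi
```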

Boyce/DiPrima/Meade 11th ed, Ch 7.7: Fundamental Matrices


• Suppose that x⁽¹⁾(t), …, x⁽ⁿ⁾(t) form a fundamental set of solutions for x′ = P(t)x on α < t < β. • The matrix

$$\boldsymbol{\Psi}(t) = \begin{pmatrix} x_1^{(1)}(t) & \cdots & x_1^{(n)}(t) \\ \vdots & \ddots & \vdots \\ x_n^{(1)}(t) & \cdots & x_n^{(n)}(t) \end{pmatrix},$$

whose columns are x⁽¹⁾(t), …, x⁽ⁿ⁾(t), is a fundamental matrix for the system x′ = P(t)x. This matrix is nonsingular since its columns are linearly independent, and hence det Ψ ≠ 0. • Note also that since x⁽¹⁾(t), …, x⁽ⁿ⁾(t) are solutions of x′ = P(t)x, Ψ satisfies the matrix differential equation Ψ′ = P(t)Ψ.

Example 1

• Consider the homogeneous equation x′ = Ax below:

$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}$$

• In Example 2 of Chapter 7.5, we found the following fundamental solutions for this system:

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}$$

• Thus a fundamental matrix for this system is

$$\boldsymbol{\Psi}(t) = \begin{pmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{pmatrix}$$

Fundamental Matrices and General Solution

• The general solution of x′ = P(t)x,

$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + \dots + c_n\mathbf{x}^{(n)}(t),$$

can be expressed as x = Ψ(t)c, where c is a constant vector with components c₁, …, cₙ:

$$\mathbf{x} = \boldsymbol{\Psi}(t)\mathbf{c} = \begin{pmatrix} x_1^{(1)}(t) & \cdots & x_1^{(n)}(t) \\ \vdots & \ddots & \vdots \\ x_n^{(1)}(t) & \cdots & x_n^{(n)}(t) \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}$$

Fundamental Matrix & Initial Value Problem

• Consider an initial value problem

$$\mathbf{x}' = \mathbf{P}(t)\mathbf{x}, \qquad \mathbf{x}(t_0) = \mathbf{x}^0,$$

where α < t₀ < β and x⁰ is a given initial vector. • Now the solution has the form x = Ψ(t)c, hence we choose c so as to satisfy x(t₀) = x⁰.

• Recalling that Ψ(t₀) is nonsingular, it follows that

$$\boldsymbol{\Psi}(t_0)\mathbf{c} = \mathbf{x}^0 \;\Rightarrow\; \mathbf{c} = \boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0$$

• Thus our solution x = Ψ(t)c can be expressed as

$$\mathbf{x} = \boldsymbol{\Psi}(t)\boldsymbol{\Psi}^{-1}(t_0)\mathbf{x}^0$$

Recall: Theorem 7.4.4

• Let 1 0 0       0 1 0 e(1)  0, e(2)  0,, e(n)                 0       0 0 1 • Let x(1),…, x(n) be solutions of x' = P(t)x on I: a < t < b that satisfy the initial conditions (1) (1) (n) (n) x (t0 )  e , , x (t0 )  e ,   t0  

Then x(1),…, x(n) are fundamental solutions of x' = P(t)x. Fundamental Matrix & Theorem 7.4.4 • Suppose x(1)(t),…, x(n)(t) form the fundamental solutions given by Thm 7.4.4. Denote the corresponding fundamental matrix by F (t). Then columns of (t) are x(1)(t),…, x(n)(t), and hence 1 0  0   0 1  0 (t )   I 0           0 0  1

• Thus Φ(t0) = I, and hence the general solution to the corresponding initial value problem is

$$\mathbf{x} = \Phi(t)\Phi^{-1}(t_0)\mathbf{x}^0 = \Phi(t)\mathbf{x}^0.$$

• It follows that for any fundamental matrix Ψ(t),

$$\mathbf{x} = \Psi(t)\Psi^{-1}(t_0)\mathbf{x}^0 = \Phi(t)\mathbf{x}^0 \quad\Longrightarrow\quad \Phi(t) = \Psi(t)\Psi^{-1}(t_0).$$

The Fundamental Matrix Φ and Varying Initial Conditions

• Thus, when using the fundamental matrix Φ(t), the general solution to an IVP is

$$\mathbf{x} = \Phi(t)\Phi^{-1}(t_0)\mathbf{x}^0 = \Phi(t)\mathbf{x}^0.$$

• This representation is useful if the same system is to be solved for many different initial conditions, such as a physical system that can be started from many different initial states.
• Also, once Φ(t) has been determined, the solution to each set of initial conditions can be found by matrix multiplication, as indicated by the equation above.
• Thus Φ(t) represents a linear transformation of the initial conditions x⁰ into the solution x(t) at time t.

Example 2: Find Φ(t) for a 2 × 2 System (1 of 5)

• Find Φ(t) such that Φ(0) = I for the system

$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x}.$$

• Solution: First, we must obtain x(1)(t) and x(2)(t) such that

$$\mathbf{x}^{(1)}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \mathbf{x}^{(2)}(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

• We know from previous results that the general solution is

$$\mathbf{x} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.$$

• Every solution can be expressed in terms of the general solution, and we use this fact to find x(1)(t) and x(2)(t).

Example 2: Use General Solution (2 of 5)

• Thus, to find x(1)(t), express it in terms of the general solution,

$$\mathbf{x}^{(1)}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t},$$

and then find the coefficients c1 and c2.
• To do so, use the initial conditions to obtain

$$\mathbf{x}^{(1)}(0) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

or equivalently,

$$\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

Example 2: Solve for x(1)(t) (3 of 5)

• To find x(1)(t), we therefore solve

$$\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\left(\begin{array}{cc|c} 1 & 1 & 1 \\ 2 & -2 & 0 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 1 \\ 0 & -4 & -2 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 1 \\ 0 & 1 & 1/2 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 0 & 1/2 \\ 0 & 1 & 1/2 \end{array}\right) \;\Longrightarrow\; c_1 = \tfrac{1}{2},\; c_2 = \tfrac{1}{2}.$$

• Thus

$$\mathbf{x}^{(1)}(t) = \frac{1}{2}\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + \frac{1}{2}\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t} = \begin{pmatrix} \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} \\ e^{3t} - e^{-t} \end{pmatrix}.$$

Example 2: Solve for x(2)(t) (4 of 5)

• To find x(2)(t), we similarly solve

$$\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\left(\begin{array}{cc|c} 1 & 1 & 0 \\ 2 & -2 & 1 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 0 \\ 0 & -4 & 1 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 0 \\ 0 & 1 & -1/4 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 0 & 1/4 \\ 0 & 1 & -1/4 \end{array}\right) \;\Longrightarrow\; c_1 = \tfrac{1}{4},\; c_2 = -\tfrac{1}{4}.$$

• Thus

$$\mathbf{x}^{(2)}(t) = \frac{1}{4}\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} - \frac{1}{4}\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t} = \begin{pmatrix} \tfrac{1}{4}e^{3t} - \tfrac{1}{4}e^{-t} \\ \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} \end{pmatrix}.$$

Example 2: Obtain Φ(t) (5 of 5)

• The columns of Φ(t) are given by x(1)(t) and x(2)(t), and thus from the previous slides we have

$$\Phi(t) = \begin{pmatrix} \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} & \tfrac{1}{4}e^{3t} - \tfrac{1}{4}e^{-t} \\ e^{3t} - e^{-t} & \tfrac{1}{2}e^{3t} + \tfrac{1}{2}e^{-t} \end{pmatrix}.$$

• Note that Φ(t) is more complicated than the Ψ(t) found in Example 1,

$$\Psi(t) = \begin{pmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{pmatrix},$$

but it is now much easier to determine the solution to any set of initial conditions.

Matrix Exponential Functions

• Consider the following two cases:
  – The solution to x' = ax, x(0) = x0, is x = x0e^{at}, where e⁰ = 1.
  – The solution to x' = Ax, x(0) = x0, is x = Φ(t)x0, where Φ(0) = I.
• Comparing the form and solution in both of these cases, we might expect Φ(t) to have an exponential character.
• Indeed, it can be shown that Φ(t) = e^{At}, where

$$e^{\mathbf{A}t} = \sum_{n=0}^{\infty} \frac{\mathbf{A}^n t^n}{n!} = \mathbf{I} + \sum_{n=1}^{\infty} \frac{\mathbf{A}^n t^n}{n!}$$

is a well-defined matrix function that has all the usual properties of an exponential function. See the text for details.
• Thus the solution to x' = Ax, x(0) = x0, is x = e^{At}x0.
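A brief numerical illustration (assuming NumPy/SciPy; not part of the text): partial sums of the series converge to e^{At}, and e^{At} reproduces the Φ(t) computed in Example 2.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [4.0, 1.0]])
t = 0.3

# Partial sums of I + At + (At)^2/2! + ... converge to e^{At}.
S, term = np.eye(2), np.eye(2)
for n in range(1, 25):
    term = term @ (A * t) / n
    S += term
print(np.allclose(S, expm(A * t)))    # True

# And e^{At} is exactly the Phi(t) of Example 2.
e3, em = np.exp(3*t), np.exp(-t)
Phi = np.array([[0.5*e3 + 0.5*em, 0.25*e3 - 0.25*em],
                [e3 - em,         0.5*e3 + 0.5*em]])
print(np.allclose(Phi, expm(A * t)))  # True
```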

Coupled Systems of Equations

• Recall that our constant coefficient homogeneous system

$$\begin{aligned} x_1' &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ &\;\;\vdots \\ x_n' &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n, \end{aligned}$$

written as x' = Ax with

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \qquad \mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix},$$

is a system of coupled equations that must be solved simultaneously to find all the unknown variables.

Uncoupled Systems & Diagonal Matrices

• In contrast, if each equation involved only one variable, solved independently of the other equations, then the task would be easier.
• In this case our system would have the form

$$\begin{aligned} x_1' &= d_{11}x_1 + 0x_2 + \cdots + 0x_n \\ x_2' &= 0x_1 + d_{22}x_2 + \cdots + 0x_n \\ &\;\;\vdots \\ x_n' &= 0x_1 + 0x_2 + \cdots + d_{nn}x_n, \end{aligned}$$

or x' = Dx, where D is a diagonal matrix:

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \qquad \mathbf{D} = \begin{pmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{pmatrix}.$$

Uncoupling: Transform Matrix T

• In order to explore transforming our given system x' = Ax of coupled equations into an uncoupled system x' = Dx, where D is a diagonal matrix, we will use the eigenvectors of A.
• Suppose A is n × n with n linearly independent eigenvectors ξ(1),…, ξ(n) and corresponding eigenvalues λ1,…, λn.
• Define the n × n matrices T and D using the eigenvalues and eigenvectors of A:

$$\mathbf{T} = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}, \qquad \mathbf{D} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$

• Note that T is nonsingular, and hence T⁻¹ exists.

Uncoupling: T⁻¹AT = D

• Recall here the definitions of A, T, and D:

$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}, \qquad \mathbf{T} = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}, \qquad \mathbf{D} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$

• Then the columns of AT are Aξ(1),…, Aξ(n), and hence

$$\mathbf{AT} = \begin{pmatrix} \lambda_1\xi_1^{(1)} & \cdots & \lambda_n\xi_1^{(n)} \\ \vdots & & \vdots \\ \lambda_1\xi_n^{(1)} & \cdots & \lambda_n\xi_n^{(n)} \end{pmatrix} = \mathbf{TD}.$$

• It follows that T⁻¹AT = D.
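In floating point this construction is a one-liner; the following NumPy sketch (our addition) builds T and D for the matrix of the earlier examples. NumPy normalizes the eigenvectors, but any nonzero rescaling of the columns of T yields the same D.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])

# Columns of T are eigenvectors of A; D holds the eigenvalues.
lam, T = np.linalg.eig(A)
D = np.diag(lam)

# T^{-1} A T = D, up to floating-point error.
print(np.allclose(np.linalg.inv(T) @ A @ T, D))   # True
```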

Similarity Transformations

• Thus, if the eigenvalues and eigenvectors of A are known, then A can be transformed into a diagonal matrix D, with T⁻¹AT = D.
• This process is known as a similarity transformation, and A is said to be similar to D. Alternatively, we could say that A is diagonalizable.

$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}, \qquad \mathbf{T} = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}, \qquad \mathbf{D} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$

Similarity Transformations: Hermitian Case

• Recall: Our similarity transformation of A has the form T⁻¹AT = D, where D is diagonal and the columns of T are eigenvectors of A.
• If A is Hermitian, then A has n linearly independent orthogonal eigenvectors ξ(1),…, ξ(n), normalized so that (ξ(i), ξ(i)) = 1 for i = 1,…, n, and (ξ(i), ξ(k)) = 0 for i ≠ k.
• With this selection of eigenvectors, it can be shown that T⁻¹ = T*. In this case we can write our similarity transformation as T*AT = D.

Nondiagonalizable A

• Finally, if A is n × n with fewer than n linearly independent eigenvectors, then there is no matrix T such that T⁻¹AT = D.
• In this case, A is not similar to a diagonal matrix, and A is not diagonalizable.

Example 3: Find Transformation Matrix T (1 of 2)

• For the matrix A below, find the similarity transformation matrix T and show that A can be diagonalized:

$$\mathbf{A} = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}.$$

• We already know that the eigenvalues are λ1 = 3, λ2 = -1, with corresponding eigenvectors

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.$$

• Thus

$$\mathbf{T} = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}, \qquad \mathbf{D} = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}.$$

Example 3: Similarity Transformation (2 of 2)

• To find T⁻¹, augment the identity to T and row reduce:

$$\left(\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 2 & -2 & 0 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & -4 & -2 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & 1 & 1/2 & -1/4 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1/2 & 1/4 \\ 0 & 1 & 1/2 & -1/4 \end{array}\right)$$

$$\Longrightarrow\quad \mathbf{T}^{-1} = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & -1/4 \end{pmatrix}.$$

• Then

$$\mathbf{T}^{-1}\mathbf{AT} = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & -1/4 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & -1/4 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ 6 & 2 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix} = \mathbf{D}.$$

• Thus A is similar to D, and hence A is diagonalizable.

Fundamental Matrices for Similar Systems (1 of 3)

• Recall our original system of differential equations x' = Ax.
• If A is n × n with n linearly independent eigenvectors, then A is diagonalizable. The eigenvectors form the columns of the nonsingular transform matrix T, and the eigenvalues are the corresponding diagonal entries of the diagonal matrix D.
• Suppose x satisfies x' = Ax, and let y be the n × 1 vector defined by x = Ty; that is, y = T⁻¹x.
• Since x' = Ax and T is a constant matrix, we have Ty' = ATy, and hence y' = T⁻¹ATy = Dy.
• Therefore y satisfies y' = Dy, the system similar to x' = Ax.
• Both of these systems have fundamental matrices, which we examine next.

Fundamental Matrix for Diagonal System (2 of 3)

• A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}.
• Recalling the definition of e^{Dt}, we have

$$\mathbf{Q}(t) = \sum_{n=0}^{\infty} \frac{\mathbf{D}^n t^n}{n!} = \begin{pmatrix} \sum_{n=0}^{\infty} \frac{\lambda_1^n t^n}{n!} & & 0 \\ & \ddots & \\ 0 & & \sum_{n=0}^{\infty} \frac{\lambda_n^n t^n}{n!} \end{pmatrix} = \begin{pmatrix} e^{\lambda_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{\lambda_n t} \end{pmatrix}.$$

Fundamental Matrix for Original System (3 of 3)

• To obtain a fundamental matrix Ψ(t) for x' = Ax, recall that the columns of Ψ(t) consist of fundamental solutions x satisfying x' = Ax. We also know x = Ty, and hence it follows that

$$\Psi = \mathbf{TQ} = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix} \begin{pmatrix} e^{\lambda_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{\lambda_n t} \end{pmatrix} = \begin{pmatrix} \xi_1^{(1)}e^{\lambda_1 t} & \cdots & \xi_1^{(n)}e^{\lambda_n t} \\ \vdots & & \vdots \\ \xi_n^{(1)}e^{\lambda_1 t} & \cdots & \xi_n^{(n)}e^{\lambda_n t} \end{pmatrix}.$$

• The columns of Ψ(t) give the expected fundamental solutions of x' = Ax.

Example 4: Fundamental Matrices for Similar Systems

• We now use the analysis and results of the last few slides. Applying the transformation x = Ty to the system x' = Ax below, it becomes y' = T⁻¹ATy = Dy:

$$\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\mathbf{x} \quad\longrightarrow\quad \mathbf{y}' = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}\mathbf{y}.$$

• A fundamental matrix for y' = Dy is given by Q(t) = e^{Dt}:

$$\mathbf{Q}(t) = \begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{pmatrix}.$$

• Thus a fundamental matrix Ψ(t) for x' = Ax is

$$\Psi(t) = \mathbf{TQ} = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{pmatrix} = \begin{pmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{pmatrix}.$$

Boyce/DiPrima/Meade 11th ed, Ch 7.8: Repeated Eigenvalues

Elementary Differential Equations and Boundary Value Problems, 11th edition, by William E. Boyce, Richard C. DiPrima, and Doug Meade ©2017 by John Wiley & Sons, Inc.

• We consider again a homogeneous system of n first order linear equations with constant real coefficients x' = Ax.

• If the eigenvalues r1,…, rn of A are real and different, then there are n linearly independent eigenvectors ξ(1),…, ξ(n), and n linearly independent solutions of the form

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi}^{(1)}e^{r_1 t}, \;\ldots,\; \mathbf{x}^{(n)}(t) = \boldsymbol{\xi}^{(n)}e^{r_n t}.$$

• If some of the eigenvalues r1,…, rn are repeated, then there may not be n corresponding linearly independent solutions of the above form.
• In this case, we will seek additional solutions that are products of polynomials and exponential functions.

Example 1: Eigenvalues (1 of 2)

• We need to find the eigenvalues and eigenvectors for the matrix

$$\mathbf{A} = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}.$$

• The eigenvalues r and eigenvectors ξ satisfy the equation (A - rI)ξ = 0, or

$$\begin{pmatrix} 1-r & -1 \\ 1 & 3-r \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

• To determine r, solve det(A - rI) = 0:

$$\det(\mathbf{A} - r\mathbf{I}) = \begin{vmatrix} 1-r & -1 \\ 1 & 3-r \end{vmatrix} = (r-1)(r-3) + 1 = r^2 - 4r + 4 = (r-2)^2.$$

• Thus r1 = 2 and r2 = 2.

Example 1: Eigenvectors (2 of 2)

• To find the eigenvectors, we solve

$$(\mathbf{A} - r\mathbf{I})\boldsymbol{\xi} = \begin{pmatrix} 1-2 & -1 \\ 1 & 3-2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\left(\begin{array}{cc|c} -1 & -1 & 0 \\ 1 & 1 & 0 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right) \quad\Longrightarrow\quad \xi_1 + \xi_2 = 0.$$

• Thus

$$\boldsymbol{\xi} = \begin{pmatrix} \xi_1 \\ -\xi_1 \end{pmatrix} \quad\longrightarrow\quad \text{choose } \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

• Thus there is only one linearly independent eigenvector for the repeated eigenvalue r = 2.
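A quick symbolic check of this defect (our addition): SymPy's eigenvects reports, for each eigenvalue, its algebraic multiplicity and a basis of its eigenspace.

```python
import sympy as sp

A = sp.Matrix([[1, -1], [1, 3]])

# Each entry: (eigenvalue, algebraic multiplicity, basis of eigenvectors)
print(A.eigenvects())
# [(2, 2, [Matrix([[-1], [1]])])]
# The double eigenvalue r = 2 has a one-dimensional eigenspace,
# spanned by (-1, 1), i.e. a multiple of xi(1) = (1, -1).
```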

Example 2: Direction Field (1 of 10)

• Consider the homogeneous equation x' = Ax below:

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x}.$$

• A direction field for this system is given below.
• Substituting x = ξe^{rt} in for x, where r is an eigenvalue of A and ξ is its corresponding eigenvector, the previous example showed the existence of only one eigenvalue, r = 2, with one eigenvector:

$$\boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Example 2: First Solution; and Second Solution, First Attempt (2 of 10)

• The corresponding solution x = ξe^{rt} of x' = Ax is

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}.$$

• Since there is no second solution of the form x = ξe^{rt}, we need to try a different form. Based on the methods for second order linear equations in Ch 3.5, we first try x = ξte^{2t}.
• Substituting x = ξte^{2t} into x' = Ax, we obtain

$$\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t},$$

or

$$2\boldsymbol{\xi}te^{2t} + \boldsymbol{\xi}e^{2t} - \mathbf{A}\boldsymbol{\xi}te^{2t} = \mathbf{0}.$$

Example 2: Second Solution, Second Attempt (3 of 10)

• From the previous slide, we have

$$2\boldsymbol{\xi}te^{2t} + \boldsymbol{\xi}e^{2t} - \mathbf{A}\boldsymbol{\xi}te^{2t} = \mathbf{0}.$$

• In order for this equation to be satisfied for all t, it is necessary for the coefficients of te^{2t} and e^{2t} to both be zero.
• From the e^{2t} term, we see that ξ = 0, and hence there is no nonzero solution of the form x = ξte^{2t}.
• Since te^{2t} and e^{2t} both appear in the above equation, we next consider a solution of the form

$$\mathbf{x} = \boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}.$$

Example 2: Second Solution and its Defining Matrix Equations (4 of 10)

• Substituting x = ξte^{2t} + ηe^{2t} into x' = Ax, we obtain

$$\boldsymbol{\xi}e^{2t} + 2\boldsymbol{\xi}te^{2t} + 2\boldsymbol{\eta}e^{2t} = \mathbf{A}\left(\boldsymbol{\xi}te^{2t} + \boldsymbol{\eta}e^{2t}\right),$$

or

$$2\boldsymbol{\xi}te^{2t} + (\boldsymbol{\xi} + 2\boldsymbol{\eta})e^{2t} = \mathbf{A}\boldsymbol{\xi}te^{2t} + \mathbf{A}\boldsymbol{\eta}e^{2t}.$$

• Equating coefficients yields Aξ = 2ξ and Aη = ξ + 2η, or

$$(\mathbf{A} - 2\mathbf{I})\boldsymbol{\xi} = \mathbf{0} \quad\text{and}\quad (\mathbf{A} - 2\mathbf{I})\boldsymbol{\eta} = \boldsymbol{\xi}.$$

• The first equation is satisfied if ξ is an eigenvector of A corresponding to the eigenvalue r = 2. Thus

$$\boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Example 2: Solving for Second Solution (5 of 10)

• Recall that 1 1  1 A   , ξ    1 3 1

• Thus to solve (A – 2I) h = x for , we row reduce the corresponding augmented matrix:

1 1 1 1 1 1  1 1 1         2  11  1 1 1 1 1 1 0 0 0

 1   0  1  η     η     k  11  1 1 Example 2: Second Solution (6 of 10)

• Our second solution x = ξte^{2t} + ηe^{2t} is now

$$\mathbf{x} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t} + k\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}.$$

• Recalling that the first solution was

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t},$$

we see that our second solution is simply

$$\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t},$$

since the third term of x is a multiple of x(1) and may be dropped.
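Both solutions can be verified directly against x' = Ax; a short SymPy check (our addition):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1], [1, 3]])

x1 = sp.Matrix([1, -1]) * sp.exp(2*t)
x2 = sp.Matrix([1, -1]) * t * sp.exp(2*t) + sp.Matrix([0, -1]) * sp.exp(2*t)

for x in (x1, x2):
    # The residual x' - A x should vanish identically.
    print((x.diff(t) - A*x).applyfunc(sp.simplify))   # Matrix([[0], [0]]) twice
```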

Example 2: General Solution (7 of 10)

• The two solutions of x' = Ax are

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}.$$

• The Wronskian of these two solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{vmatrix} = -e^{4t} \neq 0.$$

• Thus x(1) and x(2) are fundamental solutions, and the general solution of x' = Ax is

$$\mathbf{x}(t) = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right].$$

Example 2: Phase Plane (8 of 10)

• The general solution is

$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right].$$

• Thus x is unbounded as t → ∞, and x → 0 as t → -∞.
• Further, it can be shown that as t → -∞, x → 0 asymptotic to the line x2 = -x1 determined by the first eigenvector.
• Similarly, as t → ∞, x is asymptotic to a line parallel to x2 = -x1.

Example 2: Phase Plane (9 of 10)

• The origin is an improper node, and it is unstable. See the graph.
• The pattern of trajectories is typical for two repeated eigenvalues with only one eigenvector.
• If the eigenvalues are negative, then the trajectories are similar but are traversed in the inward direction. In this case the origin is an asymptotically stable improper node.

Example 2: Time Plots for General Solution (10 of 10)

• Time plots for x1(t) are given below, where we note that the general solution x can be written as follows:

$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}\right] \;\Longrightarrow\; \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1e^{2t} + c_2te^{2t} \\ -(c_1 + c_2)e^{2t} - c_2te^{2t} \end{pmatrix}.$$

General Case for Double Eigenvalues

• Suppose the system x' = Ax has a double eigenvalue r = ρ and a single corresponding eigenvector ξ.
• The first solution is x(1)(t) = ξe^{ρt}, where ξ satisfies (A - ρI)ξ = 0.
• As in Example 2, the second solution has the form

$$\mathbf{x}^{(2)} = \boldsymbol{\xi}te^{\rho t} + \boldsymbol{\eta}e^{\rho t},$$

where ξ is as above and η satisfies (A - ρI)η = ξ.
• Even though det(A - ρI) = 0, it can be shown that (A - ρI)η = ξ can always be solved for η.
• The vector η is called a generalized eigenvector.

Example 2 Extension: Fundamental Matrix (1 of 2)

• Recall that a fundamental matrix Ψ(t) for x' = Ax has linearly independent solutions for its columns.
• In Example 2, our system x' = Ax was

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x},$$

and the two solutions we found were

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}e^{2t}.$$

• Thus the corresponding fundamental matrix is

$$\Psi(t) = \begin{pmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{pmatrix} = e^{2t}\begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}.$$

Example 2 Extension: Fundamental Matrix (2 of 2)

• The fundamental matrix Φ(t) that satisfies Φ(0) = I can be found using Φ(t) = Ψ(t)Ψ⁻¹(0), where

$$\Psi(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad \Psi^{-1}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},$$

and where Ψ⁻¹(0) is found by row reduction:

$$\left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ -1 & -1 & 0 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ 0 & -1 & 1 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{array}\right).$$

• Thus

$$\Phi(t) = e^{2t}\begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = e^{2t}\begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix}.$$

Jordan Forms

• If A is n × n with n linearly independent eigenvectors, then A can be diagonalized using a similarity transformation T⁻¹AT = D. The transform matrix T consists of the eigenvectors of A, and the diagonal entries of D are the eigenvalues of A.
• In the case of repeated eigenvalues and fewer than n linearly independent eigenvectors, A can be transformed into a nearly diagonal matrix J, called the Jordan form of A, with T⁻¹AT = J.

Example 2 Extension: Transform Matrix (1 of 2)

• In Example 2, our system x' = Ax was

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x},$$

with the double eigenvalue r1 = r2 = 2, eigenvector ξ, and generalized eigenvector η:

$$\boldsymbol{\xi} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \boldsymbol{\eta} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

• Choosing k = 0, the transform matrix T formed from ξ and η is

$$\mathbf{T} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}.$$

Example 2 Extension: Jordan Form (2 of 2)

• The Jordan form J of A is defined by T⁻¹AT = J.
• Now

$$\mathbf{T} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad \mathbf{T}^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},$$

and hence

$$\mathbf{J} = \mathbf{T}^{-1}\mathbf{AT} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.$$

• Note that the eigenvalues of A, r1 = 2 and r2 = 2, are on the main diagonal of J, and that there is a 1 directly above the second eigenvalue. This pattern is typical of Jordan forms.
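Computer algebra systems construct Jordan forms directly; for instance, in SymPy (an added sketch; the transform matrix returned may differ from T above by column scaling, but J is the same):

```python
import sympy as sp

A = sp.Matrix([[1, -1], [1, 3]])

# jordan_form() returns (P, J) with A = P J P^{-1}.
P, J = A.jordan_form()
print(J)    # Matrix([[2, 1], [0, 2]])
```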

Boyce/DiPrima/Meade 11th ed, Ch 7.9: Nonhomogeneous Linear Systems

Elementary Differential Equations and Boundary Value Problems, 11th edition, by William E. Boyce, Richard C. DiPrima, and Doug Meade ©2017 by John Wiley & Sons, Inc.

• The general theory of a nonhomogeneous system of equations

$$\begin{aligned} x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t) \\ x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t) \\ &\;\;\vdots \\ x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t) \end{aligned}$$

parallels that of a single nth order linear equation.
• This system can be written as x' = P(t)x + g(t), where

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \qquad \mathbf{g}(t) = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}, \qquad \mathbf{P}(t) = \begin{pmatrix} p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\ p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\ \vdots & \vdots & & \vdots \\ p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t) \end{pmatrix}.$$

General Solution

• The general solution of x' = P(t)x + g(t) on I: α < t < β has the form

$$\mathbf{x} = c_1\mathbf{x}^{(1)}(t) + c_2\mathbf{x}^{(2)}(t) + \cdots + c_n\mathbf{x}^{(n)}(t) + \mathbf{v}(t),$$

where c1x(1)(t) + c2x(2)(t) + ⋯ + cnx(n)(t) is the general solution of the homogeneous system x' = P(t)x, and v(t) is a particular solution of the nonhomogeneous system x' = P(t)x + g(t).

Diagonalization

• Suppose x' = Ax + g(t), where A is an n × n diagonalizable constant matrix.
• Let T be the nonsingular transform matrix whose columns are the eigenvectors of A, and D the diagonal matrix whose diagonal entries are the corresponding eigenvalues of A.
• Define y by x = Ty. Substituting x = Ty into x' = Ax + g(t), we obtain Ty' = ATy + g(t), or y' = T⁻¹ATy + T⁻¹g(t), or y' = Dy + h(t), where h(t) = T⁻¹g(t).
• Note that if we can solve the diagonal system y' = Dy + h(t) for y, then x = Ty is a solution to the original system.

Solving Diagonal System

• Now y' = Dy + h(t) is a diagonal system of the form

$$\begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_n' \end{pmatrix} = \begin{pmatrix} r_1 & 0 & \cdots & 0 \\ 0 & r_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & r_n \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} + \begin{pmatrix} h_1(t) \\ h_2(t) \\ \vdots \\ h_n(t) \end{pmatrix},$$

where r1,…, rn are the eigenvalues of A.
• Thus y' = Dy + h(t) is an uncoupled system of n linear first order equations in the unknowns yk(t), which can be isolated as

$$y_k' = r_ky_k + h_k(t), \qquad k = 1,\ldots,n,$$

and solved separately, using the methods of Section 2.1:

$$y_k = e^{r_kt}\int_{t_0}^{t} e^{-r_ks}h_k(s)\,ds + c_ke^{r_kt}, \qquad k = 1,\ldots,n.$$

Solving Original System

• The solution y to y' = Dy + h(t) has components

$$y_k = e^{r_kt}\int_{t_0}^{t} e^{-r_ks}h_k(s)\,ds + c_ke^{r_kt}, \qquad k = 1,\ldots,n.$$

• For this solution vector y, the solution to the original system x' = Ax + g(t) is then x = Ty.
• Recall that T is the nonsingular transform matrix whose columns are the eigenvectors of A.
• Thus, when multiplied by T, the second term on the right side of yk produces the general solution of the homogeneous equation, while the integral term of yk produces a particular solution of the nonhomogeneous system.

Example 1: General Solution of Homogeneous Case (1 of 5)

• Consider the nonhomogeneous system x' = Ax + g below:

$$\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix} = \mathbf{Ax} + \mathbf{g}(t).$$

• Note: A is a Hermitian matrix, since it is real and symmetric.

• The eigenvalues of A are r1 = -3 and r2 = -1, with corresponding eigenvectors

$$\boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

• The general solution of the homogeneous system is then

$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t}.$$

Example 1: Transformation Matrix (2 of 5)

• Consider next the transformation matrix T of eigenvectors. Using a comment from Section 7.7, and the fact that A is Hermitian, we have T⁻¹ = T* = Tᵀ, provided we normalize ξ(1) and ξ(2) so that (ξ(1), ξ(1)) = 1 and (ξ(2), ξ(2)) = 1. Thus we normalize as follows:

$$\boldsymbol{\xi}^{(1)} = \frac{1}{\sqrt{(1)(1) + (-1)(-1)}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \boldsymbol{\xi}^{(2)} = \frac{1}{\sqrt{(1)(1) + (1)(1)}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

• Then for this choice of eigenvectors,

$$\mathbf{T} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad \mathbf{T}^{-1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.$$

Example 1: Diagonal System and its Solution (3 of 5)

• Under the transformation x = Ty, we obtain the diagonal system y' = Dy + T⁻¹g(t):

$$\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} -3 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix} = \begin{pmatrix} -3y_1 \\ -y_2 \end{pmatrix} + \frac{1}{\sqrt{2}}\begin{pmatrix} 2e^{-t} - 3t \\ 2e^{-t} + 3t \end{pmatrix}.$$

• Then, using the methods of Section 2.1,

$$y_1' + 3y_1 = \sqrt{2}\,e^{-t} - \frac{3}{\sqrt{2}}\,t \;\Longrightarrow\; y_1 = \frac{\sqrt{2}}{2}e^{-t} - \frac{3}{\sqrt{2}}\left(\frac{t}{3} - \frac{1}{9}\right) + c_1e^{-3t},$$

$$y_2' + y_2 = \sqrt{2}\,e^{-t} + \frac{3}{\sqrt{2}}\,t \;\Longrightarrow\; y_2 = \sqrt{2}\,te^{-t} + \frac{3}{\sqrt{2}}(t - 1) + c_2e^{-t}.$$

Example 1: Transform Back to Original System (4 of 5)

• We next use the transformation x = Ty to obtain the solution to the original system x' = Ax + g(t):

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} k_1e^{-3t} + k_2e^{-t} + \tfrac{1}{2}e^{-t} + te^{-t} + t - \tfrac{4}{3} \\[4pt] -k_1e^{-3t} + k_2e^{-t} - \tfrac{1}{2}e^{-t} + te^{-t} + 2t - \tfrac{5}{3} \end{pmatrix},$$

where k1 = c1/√2 and k2 = c2/√2.

Example 1: Solution of Original System (5 of 5)

• Simplifying further, the solution x can be written as

$$\mathbf{x} = k_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + k_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t} + \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.$$

• Note that the first two terms on the right side form the general solution to the homogeneous system, while the remaining terms are a particular solution to the nonhomogeneous system.

Nondiagonal Case

• If A cannot be diagonalized (because of repeated eigenvalues and a shortage of eigenvectors), then it can be transformed to its Jordan form J, which is nearly diagonal.
• In this case the differential equations are not totally uncoupled, because some rows of J have two nonzero entries: an eigenvalue in the diagonal position, and a 1 in the adjacent position to the right of the diagonal position.
• However, the equations for y1,…, yn can still be solved consecutively, starting with yn. Then the solution x to the original system can be found using x = Ty.

Undetermined Coefficients

• A second way of solving x' = P(t)x + g(t) is the method of undetermined coefficients. Assume that P is a constant matrix, and that the components of g are polynomial, exponential, or sinusoidal functions, or sums or products of these.
• The procedure for choosing the form of the solution is usually directly analogous to that given in Section 3.6.
• The main difference arises when g(t) has the form ue^{λt}, where λ is a simple eigenvalue of P. In this case, g(t) matches the solution form of the homogeneous system x' = P(t)x, and as a result it is necessary to take the nonhomogeneous solution to be of the form ate^{λt} + be^{λt}. This form differs from the Section 3.6 analog, ate^{λt}.

Example 2: Undetermined Coefficients (1 of 5)

• Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2 \\ 0 \end{pmatrix}e^{-t} + \begin{pmatrix} 0 \\ 3 \end{pmatrix}t.$$

• Assume a particular solution of the form

$$\mathbf{v}(t) = \mathbf{a}te^{-t} + \mathbf{b}e^{-t} + \mathbf{c}t + \mathbf{d},$$

where the vector coefficients a, b, c, d are to be determined.
• Since r = -1 is an eigenvalue of A, it is necessary to include both ate^{-t} and be^{-t}, as mentioned on the previous slide.

Example 2: Matrix Equations for Coefficients (2 of 5)

• Substituting v(t) = ate^{-t} + be^{-t} + ct + d for x in our nonhomogeneous system x' = Ax + g, we obtain

$$-\mathbf{a}te^{-t} + (\mathbf{a} - \mathbf{b})e^{-t} + \mathbf{c} = \mathbf{A}\mathbf{a}te^{-t} + \mathbf{A}\mathbf{b}e^{-t} + \mathbf{A}\mathbf{c}t + \mathbf{A}\mathbf{d} + \begin{pmatrix} 2 \\ 0 \end{pmatrix}e^{-t} + \begin{pmatrix} 0 \\ 3 \end{pmatrix}t.$$

• Equating coefficients, we conclude that

$$\mathbf{Aa} = -\mathbf{a}, \qquad \mathbf{Ab} = \mathbf{a} - \mathbf{b} - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \qquad \mathbf{Ac} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \qquad \mathbf{Ad} = \mathbf{c}.$$

Example 2: Solving Matrix Equation for a (3 of 5)

• From the first equation, Aa = -a, we see that a is an eigenvector of A corresponding to the eigenvalue r = -1, and hence has the form

$$\mathbf{a} = \begin{pmatrix} \alpha \\ \alpha \end{pmatrix}.$$

• We will see on the next slide that α = 1, and hence

$$\mathbf{a} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

Example 2: Solving Matrix Equation for b (4 of 5)

• Our matrix equations for the coefficients are: 2 0 Aa  a, Ab  a b  , Ac   , Ad  c 0 3 • Substituting a = ( a , a ) T into the second equation,   2  b   2 1  1 0 b    2 Ab      1         1                   b2   1  2 0 1b2    

1 1 b1    2  1 1 b1                    1 1b2     0 0b2   1

• Thus a = 1, and solving for b, we obtain 1 0  0 b  k     choose k  0  b    1 1 1 Example 2: Particular Solution (5 of 5)

• It remains to solve the third and fourth coefficient equations,

$$\mathbf{Ac} = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \qquad \mathbf{Ad} = \mathbf{c}.$$

• Solving the third equation for c, and then the fourth equation for d, it is straightforward to obtain cᵀ = (1, 2) and dᵀ = (-4/3, -5/3).
• Thus our particular solution of x' = Ax + g is

$$\mathbf{v}(t) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.$$
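The coefficient equations themselves are routine linear algebra; a SymPy cross-check (our addition) of the computations above:

```python
import sympy as sp

A = sp.Matrix([[-2, 1], [1, -2]])

# Third and fourth coefficient equations: A c = -(0, 3)^T, A d = c.
c = A.LUsolve(-sp.Matrix([0, 3]))
d = A.LUsolve(c)
print(c.T, d.T)     # Matrix([[1, 2]]) Matrix([[-4/3, -5/3]])

# Second equation, (A + I) b = a - (2, 0)^T with a = (1, 1)^T, is singular:
b1, b2 = sp.symbols('b1 b2')
print(sp.linsolve((A + sp.eye(2), sp.Matrix([-1, 1])), [b1, b2]))
# {(b2 + 1, b2)} -- a one-parameter family; b = (0, -1) is the k = 0 choice
```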

• Comparing this to the result obtained in Example 1, we see that both particular solutions would be the same if we had chosen k = 1/2 for b on the previous slide, instead of k = 0.

Variation of Parameters: Preliminaries

• A more general way of solving x' = P(t)x + g(t) is the method of variation of parameters.
• Assume P(t) and g(t) are continuous on α < t < β, and let Ψ(t) be a fundamental matrix for the homogeneous system.
• Recall that the columns of Ψ are linearly independent solutions of x' = P(t)x, and hence Ψ(t) is invertible on the interval α < t < β, and also Ψ'(t) = P(t)Ψ(t).
• Next, recall that the solution of the homogeneous system can be expressed as x = Ψ(t)c.
• Analogous to Section 3.7, we assume that the particular solution of the nonhomogeneous system has the form x = Ψ(t)u(t), where u(t) is a vector to be found.

Variation of Parameters: Solution

• We assume a particular solution of the form x = Ψ(t)u(t).
• Substituting this into x' = P(t)x + g(t), we obtain

$$\Psi'(t)\mathbf{u}(t) + \Psi(t)\mathbf{u}'(t) = \mathbf{P}(t)\Psi(t)\mathbf{u}(t) + \mathbf{g}(t).$$

• Since Ψ'(t) = P(t)Ψ(t), the above equation simplifies to

$$\mathbf{u}'(t) = \Psi^{-1}(t)\mathbf{g}(t).$$

• Thus

$$\mathbf{u}(t) = \int \Psi^{-1}(t)\mathbf{g}(t)\,dt + \mathbf{c},$$

where the vector c is an arbitrary constant of integration.
• The general solution to x' = P(t)x + g(t) is therefore

$$\mathbf{x} = \Psi(t)\mathbf{c} + \Psi(t)\int_{t_1}^{t} \Psi^{-1}(s)\mathbf{g}(s)\,ds, \qquad t_1 \in (\alpha, \beta) \text{ arbitrary}.$$
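This recipe translates directly into computer algebra. A SymPy sketch (our addition), applied to the system of Example 1 with t1 = 0, builds a particular solution and verifies it:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Data from Examples 1-3 of this section: x' = A x + g(t)
A = sp.Matrix([[-2, 1], [1, -2]])
g = sp.Matrix([2*sp.exp(-t), 3*t])

# Fundamental matrix of the homogeneous system (used again in Example 3)
Psi = sp.Matrix([[sp.exp(-3*t),  sp.exp(-t)],
                 [-sp.exp(-3*t), sp.exp(-t)]])

# Particular solution: x_p = Psi(t) * integral_0^t Psi^{-1}(s) g(s) ds
integrand = (Psi.inv() * g).subs(t, s)
x_p = Psi * integrand.integrate((s, 0, t))

# Check that x_p satisfies the nonhomogeneous system.
print((x_p.diff(t) - A*x_p - g).applyfunc(sp.simplify))   # Matrix([[0], [0]])
```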

Variation of Parameters: Initial Value Problem

• For an initial value problem x' = P(t)x + g(t), x(t0) = x⁽⁰⁾, the general solution is

$$\mathbf{x} = \Psi(t)\Psi^{-1}(t_0)\mathbf{x}^{(0)} + \Psi(t)\int_{t_0}^{t} \Psi^{-1}(s)\mathbf{g}(s)\,ds.$$

• Alternatively, recall that the fundamental matrix Φ(t) satisfies Φ(t0) = I, and hence the general solution is

$$\mathbf{x} = \Phi(t)\mathbf{x}^{(0)} + \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)\mathbf{g}(s)\,ds.$$

• In practice, it may be easier to row reduce matrices and solve the necessary equations than to compute Ψ⁻¹(t) and substitute into equations. See the next example.

Example 3: Variation of Parameters (1 of 3)

• Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.$$

• We have previously found the general solution to the homogeneous case, with corresponding fundamental matrix

$$\Psi(t) = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}.$$

• Using the variation of parameters method, our solution is given by x = Ψ(t)u(t), where u(t) satisfies Ψ(t)u'(t) = g(t), or

$$\begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}\begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.$$

Example 3: Solving for u(t) (2 of 3)

• Solving Ψ(t)u'(t) = g(t) by row reduction,

 e3t et 2et  e3t et 2et        3t t   t t   e e 3t   0 2e 2e  3t 

e3t et 2et  e3t 0 et  3t / 2        t t   t t   0 e e  3t / 2  0 e e  3t / 2

1 0 e2t  3te 3t / 2 u  e2t  3te 3t / 2     1 0 1 1 3te t / 2  u 1 3te t / 2   2 • It follows that æ ö æ 2t 3t 3t ö u1 e / 2 - te / 2 + e / 6 + c1 u(t) = ç ÷ = ç ÷ u ç t t ÷ è 2 ø è t + 3te / 2 - 3e / 2 + c2 ø Example 3: Solving for x(t) (3 of 3)

• Now x(t) = Ψ(t)u(t), and hence we multiply

$$\mathbf{x} = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}\begin{pmatrix} \tfrac{1}{2}e^{2t} - \tfrac{1}{2}te^{3t} + \tfrac{1}{6}e^{3t} + c_1 \\[4pt] t + \tfrac{3}{2}te^{t} - \tfrac{3}{2}e^{t} + c_2 \end{pmatrix}$$

to obtain, after collecting terms and simplifying,

$$\mathbf{x} = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} + \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.$$

• Note that this is the same solution as in Example 1.

Laplace Transforms

• The Laplace transform can be used to solve systems of equations. Here, the transform of a vector is the vector of the transforms of its components, denoted by X(s):

$$\mathbf{X}(s) = \mathcal{L}\{\mathbf{x}(t)\} = \mathcal{L}\left\{\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}\right\} = \begin{pmatrix} \mathcal{L}\{x_1(t)\} \\ \mathcal{L}\{x_2(t)\} \end{pmatrix}.$$

• Then, by extending Theorem 6.2.1, we obtain

$$\mathcal{L}\{\mathbf{x}'(t)\} = s\mathbf{X}(s) - \mathbf{x}(0).$$

Example 4: Laplace Transform (1 of 5)

• Consider again the nonhomogeneous system x' = Ax + g:

$$\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.$$

• Taking the Laplace transform of each term, we obtain

$$s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{AX}(s) + \mathbf{G}(s),$$

where G(s) is the transform of g(t), given by

$$\mathbf{G}(s) = \begin{pmatrix} \dfrac{2}{s+1} \\[6pt] \dfrac{3}{s^2} \end{pmatrix}.$$

Example 4: Transfer Matrix (2 of 5)

• Our transformed equation is

$$s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{AX}(s) + \mathbf{G}(s).$$

• If we take x(0) = 0, then this becomes

$$s\mathbf{X}(s) = \mathbf{AX}(s) + \mathbf{G}(s), \quad\text{or}\quad (s\mathbf{I} - \mathbf{A})\mathbf{X}(s) = \mathbf{G}(s).$$

• Solving for X(s), we obtain

$$\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{G}(s).$$

• The matrix (sI - A)⁻¹ is called the transfer matrix.

Example 4: Finding the Transfer Matrix (3 of 5)

• Then  2 1 s  2 1 A     sI  A     1  2  1 s  2 • Solving for (sI – A)–1, we obtain

1 1 s  2 1 sI  A    (s 1)(s  3)  1 s  2 Example 4: Transfer Matrix (4 of 5)
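The whole Laplace-transform computation can be reproduced in SymPy; an added sketch (SymPy's inverse transform may carry Heaviside(t) factors, which equal 1 for t > 0):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

A = sp.Matrix([[-2, 1], [1, -2]])
G = sp.Matrix([2/(s + 1), 3/s**2])          # G(s) = L{g(t)}

M = (s*sp.eye(2) - A).inv().applyfunc(sp.factor)
print(M)   # the transfer matrix, with denominator (s + 1)*(s + 3)

X = M * G
x = X.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
print(sp.simplify(x[0]))
print(sp.simplify(x[1]))
```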

Example 4: Transfer Matrix (4 of 5)

• Next, X(s) = (sI - A)⁻¹G(s), and hence

$$\mathbf{X}(s) = \frac{1}{(s+1)(s+3)}\begin{pmatrix} s+2 & 1 \\ 1 & s+2 \end{pmatrix}\begin{pmatrix} \dfrac{2}{s+1} \\[6pt] \dfrac{3}{s^2} \end{pmatrix},$$

or

$$\mathbf{X}(s) = \begin{pmatrix} \dfrac{2(s+2)}{(s+1)^2(s+3)} + \dfrac{3}{s^2(s+1)(s+3)} \\[10pt] \dfrac{2}{(s+1)^2(s+3)} + \dfrac{3(s+2)}{s^2(s+1)(s+3)} \end{pmatrix}.$$

Example 4: Transfer Matrix (5 of 5)

• To solve for x(t) = ℒ⁻¹{X(s)}, use partial fraction expansions of both components of X(s), and then Table 6.2.1, to obtain

$$\mathbf{x} = -\frac{2}{3}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + \begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.$$

• Since we assumed x(0) = 0, this solution differs slightly from the previous particular solutions.

Summary (1 of 2)

• The method of undetermined coefficients requires no integration, but is limited in scope and may involve several sets of algebraic equations.
• Diagonalization requires finding the inverse of the transformation matrix and solving uncoupled first order linear equations. When the coefficient matrix is Hermitian, the inverse of the transformation matrix can be found without calculation, which is very helpful for large systems.
• The Laplace transform method involves matrix inversion, matrix multiplication, and inverse transforms. This method is particularly useful for problems with discontinuous or impulsive forcing functions.

Summary (2 of 2)

• Variation of parameters is the most general method, but it involves solving linear algebraic equations with variable coefficients, integration, and matrix multiplication, and hence may be the most computationally complicated method.
• For many small systems with constant coefficients, all of these methods work well, and there may be little reason to select one over another.