
These notes contain course material that does not appear in the book. The topics are • Basic linear algebra and matrix arithmetic • Eigenspace decomposition and generalized eigenvectors • Solving linear systems using generalized eigenvectors • Matrix exponentiation • Integrating factors • Standard form for second order equations

1. Linear Algebra and Matrix Arithmetic

1.1. Basic Definitions. An $m \times n$ matrix has the form

(1) $X = \begin{pmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ x_{31} & x_{32} & x_{33} & \dots & x_{3n} \\ \vdots & & & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{pmatrix}$

A $1 \times n$ matrix

(2) $X = \begin{pmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \end{pmatrix}$

is called a row vector. An $m \times 1$ matrix

(3) $X = \begin{pmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{m1} \end{pmatrix}$

is called a column vector. An $m \times 1$ column vector with real entries can be considered to be a point in the space $\mathbb{R}^m$. Likewise a $1 \times n$ row vector can be considered to be a point in $\mathbb{R}^n$.

1.2. Matrix Multiplication. If $X$ is an $m \times k$ matrix and $Y$ is a $k \times n$ matrix, it is possible to define a matrix product $XY$. To define this, set

(4) $X = \begin{pmatrix} x_{11} & x_{12} & \dots & x_{1k} \\ x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & & \ddots & \vdots \\ x_{m1} & x_{m2} & \dots & x_{mk} \end{pmatrix}, \qquad Y = \begin{pmatrix} y_{11} & y_{12} & \dots & y_{1n} \\ y_{21} & y_{22} & \dots & y_{2n} \\ \vdots & & \ddots & \vdots \\ y_{k1} & y_{k2} & \dots & y_{kn} \end{pmatrix}$

Then $XY$ is the matrix whose $ij$ entry is

(5) $x_{i1}y_{1j} + x_{i2}y_{2j} + \dots + x_{ik}y_{kj}$.

That is, the $ij$ entry of $XY$ is the dot product of the $i$th row of $X$ with the $j$th column of $Y$. If $X$ is an $m \times k$ matrix and $Y$ is a $k \times 1$ column vector, the product $XY$ is

(6) $\begin{pmatrix} x_{11} & x_{12} & \dots & x_{1k} \\ x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & & \ddots & \vdots \\ x_{m1} & x_{m2} & \dots & x_{mk} \end{pmatrix} \begin{pmatrix} y_{11} \\ y_{21} \\ \vdots \\ y_{k1} \end{pmatrix} = \begin{pmatrix} x_{11}y_{11} + x_{12}y_{21} + \dots + x_{1k}y_{k1} \\ x_{21}y_{11} + x_{22}y_{21} + \dots + x_{2k}y_{k1} \\ \vdots \\ x_{m1}y_{11} + x_{m2}y_{21} + \dots + x_{mk}y_{k1} \end{pmatrix}$

so the result is an $m \times 1$ column vector. Thus any $m \times k$ matrix can be considered a map from $\mathbb{R}^k$ to $\mathbb{R}^m$.
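Formula (5) translates directly into code. The following Python sketch (an illustration added to these notes, assuming numpy is installed) implements the definition with explicit loops; in practice numpy's built-in @ operator computes the same product.

\begin{verbatim}
import numpy as np

# Sketch of formula (5): the ij entry of XY is the dot product of the
# i-th row of X with the j-th column of Y.
def mat_mult(X, Y):
    m, k = X.shape
    k2, n = Y.shape
    assert k == k2, "inner dimensions must agree"
    XY = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            XY[i, j] = sum(X[i, p] * Y[p, j] for p in range(k))
    return XY

X = np.array([[1.0, 2.0], [3.0, 4.0]])   # illustrative 2 x 2 matrices
Y = np.array([[0.0, 1.0], [1.0, 0.0]])
print(mat_mult(X, Y))                    # agrees with X @ Y
\end{verbatim}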

Exercises

1) Let $X$ be the matrix

(7) $X = \begin{pmatrix} 1 & -1 & 3 \\ 2 & 0 & -1 \end{pmatrix}$.

For which of the following matrices is it possible to form the product $XY$?

(8) $Y = \begin{pmatrix} 1 & -1 & 3 \\ 2 & 0 & -1 \end{pmatrix} \qquad Y = \begin{pmatrix} -1 & 3 \\ 0 & 1 \\ 3 & -2 \end{pmatrix} \qquad Y = \begin{pmatrix} 1 & -1 & 3 \\ 2 & 0 & -1 \\ 3 & -2 & 1 \end{pmatrix}$.

2) For each matrix $Y$ in problem (1) for which it is possible to form the product $XY$, compute the product. Also compute $YX$ in those cases where it is possible.

3) The matrix

(9) $X = \begin{pmatrix} 1 & -2 & 1 \\ 0 & 0 & -1 \\ -1 & -1 & 1 \end{pmatrix}$

can be considered to be a map from $\mathbb{R}^3$ to $\mathbb{R}^3$. Plot the images of the points

(10) $Y_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad Y_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad Y_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad Y_4 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$

under the map $Y \mapsto XY$.
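For checking hand computations in Exercise 3, a short numpy sketch (an addition to these notes, not part of the exercise) can compute the images before plotting:

\begin{verbatim}
import numpy as np

# Apply the map Y -> XY of Exercise 3 to each of the four points.
X = np.array([[ 1, -2,  1],
              [ 0,  0, -1],
              [-1, -1,  1]])
points = {"Y1": [1, 0, 0], "Y2": [0, 1, 0],
          "Y3": [0, 0, 1], "Y4": [1, 1, 1]}
for name, v in points.items():
    print(name, "->", X @ np.array(v))
\end{verbatim}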

1.3. Matrix Operations. If $X$ is a matrix, its transpose $X^T$ is obtained by interchanging its rows and columns. If $X$ is a square matrix, this is the same as reflecting about the main diagonal. Given

(11) $X = \begin{pmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ x_{31} & x_{32} & x_{33} & \dots & x_{3n} \\ \vdots & & & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{pmatrix}$

we have

(12) $X^T = \begin{pmatrix} x_{11} & x_{21} & x_{31} & \dots & x_{m1} \\ x_{12} & x_{22} & x_{32} & \dots & x_{m2} \\ x_{13} & x_{23} & x_{33} & \dots & x_{m3} \\ \vdots & & & \ddots & \vdots \\ x_{1n} & x_{2n} & x_{3n} & \dots & x_{mn} \end{pmatrix}$

If $X$ is a square matrix, that is, an $n \times n$ matrix for some $n$, then there are two special operations on $X$: the determinant and the trace. If

(13) $X = \begin{pmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ x_{31} & x_{32} & x_{33} & \dots & x_{3n} \\ \vdots & & & \ddots & \vdots \\ x_{n1} & x_{n2} & x_{n3} & \dots & x_{nn} \end{pmatrix}$

is an $n \times n$ matrix, the trace is easy to define:

(14) $\operatorname{Tr} X = x_{11} + x_{22} + \dots + x_{nn}$

The determinant is somewhat more difficult. The determinant of a $2 \times 2$ matrix is defined to be

(15) $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$.

The determinant of a $3 \times 3$ matrix is defined to be

(16) $\begin{vmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{vmatrix} = x_{11} \begin{vmatrix} x_{22} & x_{23} \\ x_{32} & x_{33} \end{vmatrix} - x_{12} \begin{vmatrix} x_{21} & x_{23} \\ x_{31} & x_{33} \end{vmatrix} + x_{13} \begin{vmatrix} x_{21} & x_{22} \\ x_{31} & x_{32} \end{vmatrix}$

Each $2 \times 2$ determinant on the right, taken together with its sign, is called a cofactor. For larger matrices, the definition of the determinant is recursive:

(17) $\begin{vmatrix} x_{11} & x_{12} & \dots & x_{1n} \\ x_{21} & x_{22} & \dots & x_{2n} \\ \vdots & & \ddots & \vdots \\ x_{n1} & x_{n2} & \dots & x_{nn} \end{vmatrix} = x_{11} \begin{vmatrix} x_{22} & \dots & x_{2n} \\ \vdots & \ddots & \vdots \\ x_{n2} & \dots & x_{nn} \end{vmatrix} - x_{12} \begin{vmatrix} x_{21} & x_{23} & \dots & x_{2n} \\ \vdots & & \ddots & \vdots \\ x_{n1} & x_{n3} & \dots & x_{nn} \end{vmatrix} \pm \dots$

where each cofactor is obtained by striking out the row and the column that the coefficient lies in, and taking the determinant of the corresponding $(n-1) \times (n-1)$ matrix. This is the method of cofactor expansion. If $X$ is an $n \times n$ matrix, its cofactor matrix is the matrix where each entry is replaced by that entry's cofactor. The adjugate of an $n \times n$ matrix is the transpose of its cofactor matrix.
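The recursive definition (17) can be implemented directly. Below is a Python sketch (an illustration added to these notes; numpy's built-in np.linalg.det is far more efficient in practice, and the sample matrix is an illustrative assumption rather than one of the exercises):

\begin{verbatim}
import numpy as np

# Sketch of cofactor expansion along the first row, as in (17).
def det(X):
    n = X.shape[0]
    if n == 1:
        return X[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(X, 0, axis=0), j, axis=1)
        total += (-1) ** j * X[0, j] * det(minor)
    return total

def cofactor_matrix(X):
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(X, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * det(minor)
    return C

X = np.array([[1.0, 2.0, 0.0],     # illustrative matrix
              [3.0, -1.0, 4.0],
              [0.0, 2.0, 1.0]])
print(det(X), np.trace(X))
print(cofactor_matrix(X).T)        # adjugate: transpose of cofactor matrix
\end{verbatim}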

Exercises

4) Given

(18) $X = \begin{pmatrix} 1 & 0 & 2 \\ 1 & -1 & 0 \\ 0 & 2 & -2 \end{pmatrix}$,

compute the trace and determinant of $X$. Determine the cofactor matrix, and the adjugate matrix.

5) Given

(19) $X = \begin{pmatrix} 0 & 2 & 2 & -1 \\ 1 & 0 & 1 & 0 \\ 0 & -1 & -2 & 1 \\ 3 & -1 & 0 & 1 \end{pmatrix}$,

compute the trace and determinant of $X$. Determine the cofactor matrix, and the adjugate matrix.

2. Eigenspace Decomposition and Generalized Eigenvectors

2.1. Eigenvalues and Eigenvectors. If $A$ is an $n \times n$ matrix, a column vector $\vec{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$ provided

(20) $A\vec{v} = \lambda\vec{v}$

This can be rearranged as follows:

(21) $A\vec{v} - \lambda\vec{v} = 0 \quad\Longleftrightarrow\quad (A - \lambda I)\vec{v} = 0.$

Either of these equations may be called the eigenvector equation. Now if a matrix $X$ and a nonzero vector $\vec{v}$ satisfy $X\vec{v} = 0$, it must be the case that $\det X = 0$. Therefore, before solving the eigenvector equation we must first solve the eigenvalue equation:

(22) $\det(A - \lambda I) = 0.$

If $A$ is an $n \times n$ matrix, the resulting equation is a polynomial of degree $n$ in $\lambda$. It is called the characteristic equation of the matrix $A$. A polynomial of degree $n$ always has precisely $n$ roots (possibly complex), when counted with multiplicity. If $\lambda$ is an eigenvalue, then it has at least one associated eigenvector. If $\lambda$ is an eigenvalue of multiplicity $k$, it may have up to $k$ linearly independent associated eigenvectors.

2.2. Generalized Eigenvectors. If $\lambda$ is an eigenvalue of multiplicity $k$ but does not have $k$ linearly independent eigenvectors, it will have generalized eigenvectors, that is, solutions of the generalized eigenvector equation

(23) $(A - \lambda I)^k\, \vec{v} = 0.$

An eigenvalue of multiplicity $k$ always has precisely $k$ linearly independent generalized eigenvectors (at least one of which is an actual eigenvector). To find these generalized eigenvectors, one sets $\vec{v}_1 = \vec{v}$ and solves progressively

(24) $\begin{aligned} (A - \lambda I)\,\vec{v}_2 &= \vec{v}_1 \\ (A - \lambda I)\,\vec{v}_3 &= \vec{v}_2 \\ &\ \,\vdots \\ (A - \lambda I)\,\vec{v}_l &= \vec{v}_{l-1}. \end{aligned}$

Each $\vec{v}_i$ so obtained is a generalized eigenvector. We have obtained a chain of generalized eigenvectors associated to the eigenvector $\vec{v}$.
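Numerically, a chain can be found by solving the linear systems in (24) one at a time. The following Python sketch is an added illustration; the $2 \times 2$ matrix below is an assumption chosen for simplicity (eigenvalue $2$ of multiplicity $2$ with only one eigenvector), not an example from class.

\begin{verbatim}
import numpy as np

# Compute a chain of generalized eigenvectors for a defective matrix.
lam = 2.0
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - lam * np.eye(2)

v1 = np.array([1.0, 0.0])                    # ordinary eigenvector: N v1 = 0
v2 = np.linalg.lstsq(N, v1, rcond=None)[0]   # solve (A - lam I) v2 = v1
print(N @ v1)        # [0, 0]
print(N @ v2 - v1)   # [0, 0], so v2 is a generalized eigenvector
\end{verbatim}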

3. Solving Linear Systems of Differential Equations using Generalized Eigenvectors

Let

(25) $\dfrac{d\vec{X}}{dt} = A\,\vec{X}$

be an $n \times n$ system of differential equations. If $\lambda$ is an eigenvalue of $A$ with associated eigenvector $\vec{v}$, then one can check that

(26) $\vec{X}(t) = e^{\lambda t}\,\vec{v}$

solves (25). Now if $\vec{v}$ is an eigenvector and it has the chain of generalized eigenvectors $\vec{v}_1 = \vec{v}, \vec{v}_2, \dots, \vec{v}_l$, then the following $l$ vector-valued functions are also solutions to (25):

(27) $\begin{aligned} \vec{X}_1(t) &= e^{\lambda t}\,\vec{v}_1 \\ \vec{X}_2(t) &= t e^{\lambda t}\,\vec{v}_1 + e^{\lambda t}\,\vec{v}_2 \\ \vec{X}_3(t) &= \tfrac{1}{2!}\,t^2 e^{\lambda t}\,\vec{v}_1 + t e^{\lambda t}\,\vec{v}_2 + e^{\lambda t}\,\vec{v}_3 \\ &\ \,\vdots \\ \vec{X}_l(t) &= \tfrac{1}{(l-1)!}\,t^{l-1} e^{\lambda t}\,\vec{v}_1 + \dots + t e^{\lambda t}\,\vec{v}_{l-1} + e^{\lambda t}\,\vec{v}_l. \end{aligned}$

For each chain of length $l$, we obtain exactly $l$ solutions of the differential equation. Any $n \times n$ matrix $A$ has precisely $n$ linearly independent generalized eigenvectors, so we always obtain precisely $n$ linearly independent solutions to (25) by this method.
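One can verify symbolically that the chain solutions in (27) satisfy the system. The following sketch (an added check, reusing the illustrative defective matrix from the sketch at the end of Section 2, and assuming sympy is available) confirms that $\vec{X}_2(t)$ solves $\vec{X}' = A\vec{X}$:

\begin{verbatim}
import sympy as sp

# Check that X2(t) = t e^{lam t} v1 + e^{lam t} v2 solves X' = A X.
t = sp.symbols('t')
lam = 2
A = sp.Matrix([[2, 1], [0, 2]])
v1 = sp.Matrix([1, 0])
v2 = sp.Matrix([0, 1])

X2 = t * sp.exp(lam * t) * v1 + sp.exp(lam * t) * v2
print(sp.simplify(X2.diff(t) - A * X2))   # the zero vector
\end{verbatim}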

4. Matrix Exponentiation

• Definition. If $A$ is any $n \times n$ matrix, we define $e^A$ using the Taylor series for the exponential function:

(28) $e^A = \displaystyle\sum_{k=0}^{\infty} \frac{1}{k!}\,A^k$

• Nilpotent Matrices. A matrix $A$ is nilpotent if $A^p = 0_{n \times n}$ (the $n \times n$ zero matrix) for some integer $p$. In this case the exponential is relatively easy to compute:

(29) $e^A = \displaystyle\sum_{k=0}^{\infty} \frac{1}{k!}\,A^k = I + A + \frac{1}{2!}A^2 + \dots + \frac{1}{(p-1)!}A^{p-1}$

For example, if

 0 1 2 3   0 0 1 2  (30) A =    0 0 0 1  0 0 0 0

then we have

 0 0 1 3   0 0 0 1   0 0 0 0  2  0 0 0 1  3  0 0 0 0  4  0 0 0 0  A =   ,A =   ,A =    0 0 0 0   0 0 0 0   0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0

so A is nilpotent. Then

$e^A = I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3$

$= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} + \frac{1}{2}\begin{pmatrix} 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} + \frac{1}{6}\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

$= \begin{pmatrix} 1 & 1 & 5/2 & 31/6 \\ 0 & 1 & 1 & 5/2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
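This computation is easy to check numerically. The following sketch (added; it assumes numpy and scipy are installed) sums the truncated series for the matrix $A$ in (30) and compares against scipy's general-purpose expm:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Truncated series for the nilpotent matrix (30); the series stops
# because A^4 = 0.
A = np.array([[0., 1., 2., 3.],
              [0., 0., 1., 2.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])

E = np.eye(4) + A + (A @ A) / 2 + (A @ A @ A) / 6
print(np.allclose(E, expm(A)))   # True
print(E[0, 3])                   # 31/6 = 5.1666...
\end{verbatim}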

• Diagonal Matrices. If $A$ is a diagonal matrix, meaning it has the form

(31) $A = \begin{pmatrix} a_1 & 0 & \dots & 0 \\ 0 & a_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & a_n \end{pmatrix}$

then

(32) $A^k = \begin{pmatrix} (a_1)^k & 0 & \dots & 0 \\ 0 & (a_2)^k & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & (a_n)^k \end{pmatrix}$

so that

(33) $e^A = \displaystyle\sum_{k=0}^{\infty} \frac{1}{k!}\,A^k = \begin{pmatrix} \sum_{k=0}^{\infty} \frac{1}{k!}(a_1)^k & & 0 \\ & \ddots & \\ 0 & & \sum_{k=0}^{\infty} \frac{1}{k!}(a_n)^k \end{pmatrix} = \begin{pmatrix} e^{a_1} & & 0 \\ & \ddots & \\ 0 & & e^{a_n} \end{pmatrix}$
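In code, the diagonal case reduces to exponentiating the diagonal entries; a quick added check (the entries are illustrative, and scipy is assumed available):

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# e^A for diagonal A just exponentiates the diagonal.
a = np.array([1.0, -2.0, 0.5])
print(np.allclose(expm(np.diag(a)), np.diag(np.exp(a))))   # True
\end{verbatim}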

• Derivative Formula. If $A = A(t)$ is a matrix that depends on time, it is usually false that $\frac{d}{dt}e^{A} = \frac{dA}{dt}\,e^{A}$. However, if $A$ is a constant matrix, then we have the formula

(34) $\dfrac{d}{dt}\,e^{At} = A\,e^{At} = e^{At}A$

We proved this in class.

• Differential Equations. Note that $e^{0_{n \times n}} = I_{n \times n}$. Setting

(35) $\vec{X}(t) = e^{At}\,\vec{X}_0$

where $\vec{X}_0$ is some constant vector, we compute:

(36) $\vec{X}(0) = e^{0_{n \times n}}\,\vec{X}_0 = I_{n \times n}\vec{X}_0 = \vec{X}_0$

and also

(37) $\dfrac{d}{dt}\vec{X}(t) = \dfrac{d}{dt}\left(e^{At}\vec{X}_0\right) = \left(\dfrac{d}{dt}\,e^{At}\right)\vec{X}_0 = \left(A\,e^{At}\right)\vec{X}_0 = A\,\vec{X}(t).$

Therefore $\vec{X}(t) = e^{At}\,\vec{X}_0$ satisfies the IVP

(38) $\dfrac{d\vec{X}}{dt} = A\,\vec{X}(t), \qquad \vec{X}(0) = \vec{X}_0.$
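As a numerical sanity check (an added sketch; the matrix $A$ and the initial vector $\vec{X}_0$ below are illustrative assumptions), one can compare $e^{At}\,\vec{X}_0$ against a standard ODE solver:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Compare the closed form X(t) = e^{At} X0 with a numerical solution.
A = np.array([[0., 1.],
              [-2., -3.]])
X0 = np.array([1., 0.])
t1 = 2.0

closed_form = expm(A * t1) @ X0
numerical = solve_ivp(lambda t, x: A @ x, (0.0, t1), X0,
                      rtol=1e-10, atol=1e-12).y[:, -1]
print(closed_form, numerical)   # agree to solver tolerance
\end{verbatim}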

• Fundamental Matrices. Assuming

(39) $\vec{X}_1(t),\ \vec{X}_2(t),\ \dots,\ \vec{X}_n(t)$

is a set of $n$ linearly independent solutions to $\frac{d\vec{X}}{dt} = A\vec{X}(t)$, then the matrix

(40) $\Phi(t) = \begin{pmatrix} \vec{X}_1(t) & \vec{X}_2(t) & \dots & \vec{X}_n(t) \end{pmatrix}$

is called a fundamental matrix of the system. Fundamental matrices are not unique. We proved in class that any fundamental matrix is a matrix solution of the system

(41) $\dfrac{d}{dt}\Phi(t) = A \cdot \Phi(t).$

• Fundamental Matrices and Exponentiation. Consider the matrix IVP

(42) $\dfrac{d}{dt}M = A \cdot M, \qquad M(0) = I_{n \times n}.$

We know two things: that $M(t) = e^{At}$ solves this IVP, and that the solution to this IVP is unique. However, the matrix

(43) $M(t) = \Phi(t)\,(\Phi(0))^{-1}$

is also a solution of the IVP! By uniqueness, it must be the case that

(44) $e^{At} = \Phi(t)\,(\Phi(0))^{-1}$

whenever $\Phi$ is any fundamental matrix.

• Computation of Matrix Exponentials. We now have an elementary (though computationally intense) way to compute the matrix exponentials

(45) $e^{At} \quad \text{and} \quad e^{A}.$

Namely, solve the system $\frac{d\vec{X}}{dt} = A\vec{X}$ and create a fundamental matrix $\Phi(t)$. Then we have

(46) $e^{At} = \Phi(t)\,(\Phi(0))^{-1} \qquad \text{and} \qquad e^{A} = \Phi(1)\,(\Phi(0))^{-1}.$
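The following sketch (added; the matrix is an illustrative one with distinct real eigenvalues) builds a fundamental matrix from the eigenvector solutions $e^{\lambda t}\,\vec{v}$ and confirms (46) numerically:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Build Phi(t) whose columns are e^{lam_i t} v_i, then check (46).
A = np.array([[1., 2.],
              [0., 3.]])
lams, V = np.linalg.eig(A)   # columns of V are eigenvectors

def Phi(t):
    return V @ np.diag(np.exp(lams * t))   # columns: e^{lam_i t} v_i

t1 = 0.7
print(np.allclose(expm(A * t1), Phi(t1) @ np.linalg.inv(Phi(0.0))))  # True
\end{verbatim}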

5. Standard Form for Second Order Equations

The standard form for a second order linear constant-coefficient differential equation is

(47) $\ddot{y} + 2\zeta\dot{y} + \omega_0^2\,y = 0$

The constant $\zeta$ is called the damping coefficient, and $\omega_0$ is called the undamped natural frequency. The characteristic equation is

(48) $r^2 + 2\zeta r + \omega_0^2 = 0$

and the roots are therefore

(49) $r = -\zeta \pm \sqrt{\zeta^2 - \omega_0^2}.$

• Undamped case: $\zeta = 0$. The differential equation is $\ddot{y} + \omega_0^2\,y = 0$ and the general solution is

(50) $y(t) = C_1 \cos(\omega_0 t) + C_2 \sin(\omega_0 t).$

Solutions are periodic with radian frequency $\omega_0$, justifying the terminology.

• Underdamped case: $\zeta < \omega_0$. In this case $\omega_0^2 - \zeta^2$ is positive, and we define

(51) $\omega_1 = \sqrt{\omega_0^2 - \zeta^2}.$

The quantity $\omega_1$ is called the damped natural frequency. The roots of the characteristic equation are

(52) $r = -\zeta \pm i\,\omega_1$

and the general solution is

(53) $y(t) = C_1 e^{-\zeta t} \cos(\omega_1 t) + C_2 e^{-\zeta t} \sin(\omega_1 t).$

The solutions decay with exponential constant $-\zeta$, and otherwise oscillate with radian frequency $\omega_1$.

• Critically damped case: $\zeta = \omega_0$. We obtain a single root $r = -\zeta$ of the characteristic equation. The general solution is

(54) $y(t) = C_1 e^{-\zeta t} + C_2\,t e^{-\zeta t}.$

Solutions are characterized by decay to the equilibrium position as quickly as possible without overshoot.

• Overdamped case: $\zeta > \omega_0$. We obtain two distinct roots of the characteristic equation

(55) $r_1 = -\zeta + \sqrt{\zeta^2 - \omega_0^2}, \qquad r_2 = -\zeta - \sqrt{\zeta^2 - \omega_0^2}.$

The general solution is

(56) $y(t) = C_1 e^{r_1 t} + C_2 e^{r_2 t}.$

Both roots are negative (since $\sqrt{\zeta^2 - \omega_0^2} < \zeta$), and solutions are characterized by slow exponential decay to the equilibrium position.
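The four cases are told apart by comparing $\zeta$ with $\omega_0$. A small added Python sketch (sample values are illustrative) that classifies the regime and reports the relevant roots or frequencies:

\begin{verbatim}
import numpy as np

# Classify the damping regime of y'' + 2 zeta y' + w0^2 y = 0 under the
# convention of these notes (zeta and w0 both carry units of frequency).
def classify(zeta, w0):
    if zeta == 0:
        return "undamped: oscillation at w0 = %g" % w0
    if zeta < w0:
        w1 = np.sqrt(w0**2 - zeta**2)
        return "underdamped: decay rate %g, oscillation at w1 = %g" % (zeta, w1)
    if zeta == w0:
        return "critically damped: double root r = %g" % -zeta
    s = np.sqrt(zeta**2 - w0**2)
    return "overdamped: roots r1 = %g, r2 = %g" % (-zeta + s, -zeta - s)

for zeta in (0.0, 0.5, 1.0, 2.0):   # sample values, with w0 = 1
    print(classify(zeta, 1.0))
\end{verbatim}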