Section 5 summary

Brian Krummel November 4, 2019

1 Terminology

• Eigenvalue
• Eigenvector
• Eigenspace
• Characteristic polynomial
• Characteristic equation
• Similar matrices
• Similarity transformation
• Diagonalizable
• Matrix of a linear transformation relative to a basis
• System of differential equations
• Solution to a system of differential equations
• Solution set of a system of differential equations
• Fundamental solutions
• Initial value problem
• Trajectory
• Decoupled system of differential equations
• Repeller / source
• Attractor / sink
• Saddle point
• Spiral point
• Ellipse trajectory

2 How to find eigenvalues, eigenvectors, and a similar matrix

1. First we find eigenvalues of A by finding the roots of the characteristic equation

det(A − λI) = 0.

Note that if A is upper triangular (or lower triangular), then the eigenvalues λ of A are just the diagonal entries of A and the multiplicity of each eigenvalue λ is the number of times λ appears as a diagonal entry of A.

2. For each eigenvalue λ, find a basis of eigenvectors corresponding to the eigenvalue λ by row reducing A − λI and finding the solutions to (A − λI)x = 0 in parametric vector form. Note that for a general n × n matrix A and any eigenvalue λ of A,

dim(Eigenspace corresp. to λ) = # free variables of (A − λI) ≤ algebraic multiplicity of λ.

3. Assuming A has only real eigenvalues, check if A is diagonalizable. If A has n distinct eigenvalues, then A is definitely diagonalizable. More generally, if

dim(Eigenspace corresp. to λ) = algebraic multiplicity of λ

for each eigenvalue λ of A, then A is diagonalizable and R^n has a basis consisting of n eigenvectors of A. If

dim(Eigenspace corresp. to λ) < algebraic multiplicity of λ

for some eigenvalue λ of A, then A is not diagonalizable.

To diagonalize A, let λ1, λ2, . . . , λn be the eigenvalues of A (possibly repeated) with corresponding eigenvectors v1, v2, . . . , vn which form a basis for R^n. We express A = PDP⁻¹ where

$$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}, \qquad P = \begin{pmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{pmatrix}.$$

For example, suppose

$$A = \begin{pmatrix} 1 & 3 \\ 2 & 2 \end{pmatrix}.$$

To find the eigenvalues, we have

$$\det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 3 \\ 2 & 2-\lambda \end{vmatrix} = (1-\lambda)(2-\lambda) - 6 = \lambda^2 - 3\lambda - 4 = (\lambda - 4)(\lambda + 1) = 0,$$

which gives us the eigenvalues λ = 4, −1. To find the eigenvectors corresponding to λ = 4,

$$A - 4I = \begin{pmatrix} -3 & 3 \\ 2 & -2 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix},$$

so that x1 = x2 and x2 is a free variable, giving us the eigenvectors

$$\mathbf{x} = x_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

for x2 ≠ 0. Similarly, one can find that an eigenvector corresponding to λ = −1 is

$$\begin{pmatrix} -3 \\ 2 \end{pmatrix}.$$

We can write A = PDP⁻¹ where

$$D = \begin{pmatrix} 4 & 0 \\ 0 & -1 \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix}.$$
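This decomposition is easy to check numerically. Below is a minimal NumPy sketch (not part of the original notes) verifying the eigenvalues and the factorization A = PDP⁻¹ for the example above; the array names are ours.

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 2.0]])

# Eigenvalues are the roots of λ^2 - 3λ - 4 = (λ - 4)(λ + 1)
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # approximately [4., -1.] (order may vary)

# Diagonalization A = P D P^{-1} using the eigenvectors (1, 1) and (-3, 2)
D = np.diag([4.0, -1.0])
P = np.array([[1.0, -3.0],
              [1.0,  2.0]])
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```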

3'. If instead A is a 2 × 2 matrix with complex eigenvalue λ = a − ib and corresponding complex eigenvector u + iv, then we can express A as A = PBP⁻¹ where

$$B = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}, \qquad P = \begin{pmatrix} \mathbf{u} & \mathbf{v} \end{pmatrix}.$$

For instance, one can show that the matrix

$$A = \begin{pmatrix} 2 & 9 \\ -1 & 2 \end{pmatrix}$$

has complex eigenvalues λ = 2 ± 3i and that an eigenvector corresponding to λ = 2 − 3i is

$$\begin{pmatrix} 3i \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} + i \begin{pmatrix} 3 \\ 0 \end{pmatrix}.$$

Hence

$$A = \begin{pmatrix} 0 & 3 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 2 & -3 \\ 3 & 2 \end{pmatrix} \begin{pmatrix} 0 & 3 \\ 1 & 0 \end{pmatrix}^{-1}.$$

3''. If A has a mix of real and complex eigenvalues, then we do a combination of Steps 3 and 3'. For instance, if A is a 3 × 3 matrix with a real eigenvalue λ and corresponding real eigenvector w, and with a complex eigenvalue µ = a − ib and corresponding complex eigenvector u + iv, then we can express A as A = PBP⁻¹ where

$$B = \begin{pmatrix} \lambda & 0 & 0 \\ 0 & a & -b \\ 0 & b & a \end{pmatrix}, \qquad P = \begin{pmatrix} \mathbf{w} & \mathbf{u} & \mathbf{v} \end{pmatrix}.$$

Here we place the real eigenvalue λ as a diagonal entry of B and place the corresponding eigenvector w in the corresponding column of P. For the complex eigenvalue µ = a − ib, we have a 2 × 2 block in B and place the real and imaginary parts of the eigenvector u + iv in the corresponding two columns of P.
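The real block form for the complex case can also be checked numerically. Here is a minimal NumPy sketch (ours, not from the notes) for the example A above, with u = (0, 1) and v = (3, 0):

```python
import numpy as np

A = np.array([[2.0, 9.0],
              [-1.0, 2.0]])

# Complex eigenvalues 2 ± 3i; for λ = 2 - 3i an eigenvector is (3i, 1) = u + iv
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)   # approximately [2.+3.j, 2.-3.j] (order may vary)

# Real block form A = P B P^{-1} with the rotation-scaling block B and P = [u v]
B = np.array([[2.0, -3.0],
              [3.0,  2.0]])
P = np.array([[0.0, 3.0],
              [1.0, 0.0]])
print(np.allclose(A, P @ B @ np.linalg.inv(P)))   # True
```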

Example of a non-diagonalizable matrix. The 2 × 2 matrix

$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$

is not diagonalizable: it is an upper triangular matrix with the same diagonal entries and a nonzero entry above the diagonal. The diagonal entry 2 is an eigenvalue with multiplicity two, but one can check that the corresponding eigenspace of A is one-dimensional. In A we can replace the diagonal entry 2 with any number (though the two diagonal entries must be the same) and the entry 1 with any nonzero number to obtain another non-diagonalizable matrix; for instance,

$$A = \begin{pmatrix} 3 & 4 \\ 0 & 3 \end{pmatrix}$$

is also not diagonalizable.
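A quick numerical check of the eigenspace dimension (a sketch of ours, assuming NumPy): the nullity of A − 2I is 2 minus its rank.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# The only eigenvalue is λ = 2, with algebraic multiplicity 2,
# but dim Nul(A - 2I) = 2 - rank(A - 2I) = 1, so A is not diagonalizable.
nullity = 2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(nullity)   # 1
```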

3 System of differential equations

Let A be a 2 × 2 matrix with real entries. (One could also consider an n × n matrix A.) Let's solve x′ = Ax, where x = x(t) : R → R^2 is a vector-valued function parameterized by time t and x′ is its derivative.

How to solve the system of differential equations. First repeat Steps 1 and 2 above to find the eigenvalues and eigenvectors of A. Then:

• If A has real eigenvalues λ1, λ2 with corresponding eigenvectors v1, v2, then the general solution to x′ = Ax is

$$\mathbf{x}(t) = c_1 e^{\lambda_1 t}\,\mathbf{v}_1 + c_2 e^{\lambda_2 t}\,\mathbf{v}_2$$

where c1, c2 are real constants.

• If A has a complex eigenvalue λ = a − ib with corresponding complex eigenvector u + iv, then one complex fundamental solution is

$$\mathbf{x}(t) = e^{(a-ib)t}(\mathbf{u} + i\mathbf{v}).$$

Expanding this fundamental solution,

$$\mathbf{x}(t) = e^{at}(\cos(bt) - i\sin(bt))(\mathbf{u} + i\mathbf{v}) = e^{at}(\cos(bt)\,\mathbf{u} + \sin(bt)\,\mathbf{v}) + i\,e^{at}(-\sin(bt)\,\mathbf{u} + \cos(bt)\,\mathbf{v}).$$

The real and imaginary parts

$$e^{at}(\cos(bt)\,\mathbf{u} + \sin(bt)\,\mathbf{v}), \qquad e^{at}(-\sin(bt)\,\mathbf{u} + \cos(bt)\,\mathbf{v})$$

are real fundamental solutions of x′ = Ax. Therefore, the general solution to x′ = Ax is

$$\mathbf{x}(t) = c_1 e^{at}(\cos(bt)\,\mathbf{u} + \sin(bt)\,\mathbf{v}) + c_2 e^{at}(-\sin(bt)\,\mathbf{u} + \cos(bt)\,\mathbf{v})$$

where c1, c2 are real constants.
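As a numerical check (our sketch, not part of the notes): the solution with x(0) = x₀ is also given by the matrix exponential, x(t) = e^{At}x₀, so the eigenvector formula above can be compared against scipy.linalg.expm. We reuse the earlier example A = [[1, 3], [2, 2]] with eigenpairs (4, (1, 1)) and (−1, (−3, 2)):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0],
              [2.0, 2.0]])
lam1, lam2 = 4.0, -1.0
v1, v2 = np.array([1.0, 1.0]), np.array([-3.0, 2.0])   # eigenvectors for λ = 4, -1

# General solution x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2 for chosen constants c1, c2
c1, c2 = 2.0, -1.0
x0 = c1 * v1 + c2 * v2                  # initial value x(0)

t = 0.7
x_formula = c1 * np.exp(lam1 * t) * v1 + c2 * np.exp(lam2 * t) * v2
x_matrix_exp = expm(A * t) @ x0         # matrix-exponential solution e^{At} x(0)
print(np.allclose(x_formula, x_matrix_exp))   # True
```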

How to plot the trajectories of solutions. Assuming A is a diagonalizable 2 × 2 matrix with real eigenvalues:

1. Draw the eigenspaces corresponding to each eigenvalue.

2. Determine in which direction x(t) travels along each eigenspace. If the eigenvalue λ > 0 then x(t) moves away from the origin. If the eigenvalue λ < 0 then x(t) moves towards the origin.

3. Fill in the rest of the trajectories based on whether you have a node or saddle point.

If instead A is a 2 × 2 matrix with complex conjugate eigenvalues λ = a ± ib, where b > 0:

1. Plot the ellipse whose axes are the real and imaginary parts of a complex eigenvector.

2. Determine whether x(t) moves towards or away from the origin. If the real part a > 0 then x(t) spirals away from the origin. If a < 0 then x(t) spirals towards the origin. If a = 0 then x(t) rotates about the origin along the ellipse.

3. Fill in the trajectories. Note that x(t) rotates from the real part of the eigenvector to the imaginary part.
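For a computer-drawn picture, one can trace x(t) = e^{At}x₀ for a few initial points and plot the resulting curves. Below is a minimal matplotlib sketch (our illustration; the matrix and initial points are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm

# Example matrix with complex eigenvalues 2 ± 3i, so the origin is a spiral point;
# substitute any 2x2 matrix A to see other trajectory patterns.
A = np.array([[2.0, 9.0],
              [-1.0, 2.0]])

ts = np.linspace(0.0, 2.0, 400)
fig, ax = plt.subplots()

# Trajectory x(t) = e^{At} x0 for a few initial values x0
for x0 in ([1.0, 0.0], [0.0, 1.0], [-1.0, 0.5]):
    traj = np.array([expm(A * t) @ np.array(x0) for t in ts])
    ax.plot(traj[:, 0], traj[:, 1])

ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_title("Trajectories of x' = Ax")
plt.show()
```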

Trajectories of solutions to x′ = Ax. If A is a diagonalizable 2 × 2 matrix with two real eigenvalues λ1, λ2:

• If λ1 > 0 and λ2 > 0 then x(t) moves away from the origin as t → +∞. We call the origin a repeller or source.

• If λ1 < 0 and λ2 < 0 then x(t) moves towards the origin as t → +∞. We call the origin an attractor or sink.

• If λ1 > 0 and λ2 < 0, we call the origin a saddle point.

If instead A is a 2 × 2 matrix with a complex eigenvalue λ = a − ib where b ≠ 0:

• If a > 0 then x(t) spirals away from the origin. We call the origin a spiral point.

• If a < 0 then x(t) spirals towards the origin. We call the origin a spiral point.

• If a = 0 then x(t) rotates around the origin along an ellipse. We call the origin a center point.
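This classification is easy to automate. The helper below is a hypothetical sketch of ours (the function name classify_origin is not from the notes); it inspects the eigenvalues of a 2 × 2 matrix and reports which case applies:

```python
import numpy as np

def classify_origin(A, tol=1e-12):
    """Classify the origin for x' = Ax, where A is a real 2x2 matrix."""
    lam1, lam2 = np.linalg.eigvals(A)
    if abs(lam1.imag) > tol:            # complex conjugate pair a ± ib
        a = lam1.real
        if a > tol:
            return "spiral point (spirals away from the origin)"
        if a < -tol:
            return "spiral point (spirals towards the origin)"
        return "center point (rotates along an ellipse)"
    if lam1.real > 0 and lam2.real > 0:
        return "repeller / source"
    if lam1.real < 0 and lam2.real < 0:
        return "attractor / sink"
    return "saddle point"

print(classify_origin(np.array([[1.0, 3.0], [2.0, 2.0]])))    # saddle point (λ = 4, -1)
print(classify_origin(np.array([[2.0, 9.0], [-1.0, 2.0]])))   # spiral point (λ = 2 ± 3i)
```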

4 Matrix of a linear transformation

Suppose T : V → W is a linear transformation between vector spaces V and W. Let B = {b1, b2, . . . , bn} be a basis for V and D = {d1, d2, . . . , dm} be a basis for W. Suppose that (with n = 2 and m = 3)

T(b1) = a11 d1 + a21 d2 + a31 d3,

T(b2) = a12 d1 + a22 d2 + a32 d3,

where the aij are explicit real numbers. Then the matrix M of T relative to the given bases B and D is

$$M = \begin{pmatrix} [T(\mathbf{b}_1)]_{\mathcal{D}} & [T(\mathbf{b}_2)]_{\mathcal{D}} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}$$

with j-th column equal to the D-coordinate vector of T(bj). (Notice that above we write T(bj) as a linear combination of the di going left-to-right, but the j-th column of M is the coordinates of T(bj) going up-and-down.)

Now suppose that A is an n × n matrix and B = {b1, b2, . . . , bn} is a basis for R^n. We can associate A with the linear transformation T : R^n → R^n given by T(x) = Ax for each x in R^n. The matrix M = [T]_B of T relative to the basis B is given by

$$M = P^{-1} A P, \qquad \text{where } P = \begin{pmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_n \end{pmatrix}.$$

Hence M satisfies A = PMP⁻¹. (This is just like diagonalizing matrices. If A is diagonalizable and B is a basis for R^n consisting of eigenvectors of A, then M would be the diagonal matrix D whose diagonal entries are the corresponding eigenvalues, so that A = PDP⁻¹ with D in place of M.)
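To make the change of basis concrete, here is a small NumPy sketch of ours reusing the earlier example A = [[1, 3], [2, 2]] with the eigenvector basis B = {(1, 1), (−3, 2)}, in which case M = P⁻¹AP comes out diagonal:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 2.0]])

# Basis B placed as the columns of P; here B consists of eigenvectors of A
P = np.array([[1.0, -3.0],
              [1.0,  2.0]])

# Matrix of T(x) = Ax relative to the basis B
M = np.linalg.inv(P) @ A @ P
print(np.round(M, 10))   # diag(4, -1): the eigenvalues of A
```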
