
EIGENVALUES & EIGENVECTORS

We say λ is an eigenvalue of a square matrix A if

Ax = λx for some x ≠ 0. The vector x is called an eigenvector of A, associated with the eigenvalue λ. Note that if x is an eigenvector, then any nonzero multiple αx is also an eigenvector.
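As a numerical sketch (the matrix is chosen only for illustration, not from the notes), numpy can compute eigenpairs and confirm both the defining relation Ax = λx and the scaling property:

```python
import numpy as np

# A small symmetric matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and, as columns of V, the eigenvectors.
lam, V = np.linalg.eig(A)

for j in range(2):
    x = V[:, j]
    # Check the defining relation A x = lambda x ...
    assert np.allclose(A @ x, lam[j] * x)
    # ... and that any nonzero multiple of x is also an eigenvector.
    assert np.allclose(A @ (3.0 * x), lam[j] * (3.0 * x))

print(np.sort(lam))   # the eigenvalues of this A are 1 and 3
```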

EXAMPLES:       −713−16 1 1        13 −10 13   −1  = −36  −1  −16 13 −7 1 1       10−1 1 1        1 −10  1  =0 1  −110 1 1 Knowing the eigenvalues and eigenvectors of a matrix A will often give insight as to what is happening when solving systems or problems involving A. THE CHARACTERISTIC

For an n × n matrix A, solving Ax = λx for a vector x ≠ 0 is equivalent to solving the homogeneous linear system

(A − λI)x = 0

This has a nonzero solution if and only if

det(A − λI) = 0

            [ a1,1 − λ   ···    a1,n   ]
fA(λ) ≡ det [    .       ...      .    ]  = 0
            [  an,1      ···  an,n − λ ]

We can expand this by minors, obtaining

fA(λ) = (a1,1 − λ) ··· (an,n − λ) + terms of degree ≤ n − 2
      = (−1)^n λ^n + (−1)^{n-1} (a1,1 + ··· + an,n) λ^{n-1} + terms of degree ≤ n − 2

We call fA(λ) the characteristic polynomial of A; and

fA(λ) = 0 is called the characteristic equation for A. Since fA(λ) is a polynomial of degree n:

1. The matrix A has at least one eigenvalue.
2. A has at most n distinct eigenvalues.
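As a quick numerical check (a sketch, using the 2 × 2 matrix that appears as an example later in these notes), the monic characteristic polynomial can be built from the eigenvalues; it differs from fA(λ) = det(A − λI) only by the factor (−1)^n:

```python
import numpy as np

# The 2x2 example matrix used later in these notes.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Build the monic characteristic polynomial det(lambda*I - A) from its
# roots, i.e. from the eigenvalues of A.
lam = np.linalg.eigvals(A)
coeffs = np.poly(lam)          # [1, -(lam1 + lam2), lam1 * lam2]

# For a 2x2 matrix this is lambda^2 - (trace A) lambda + det A.
assert np.allclose(coeffs, [1.0, -np.trace(A), np.linalg.det(A)])
print(coeffs)
```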

The multiplicity of λ as a root of fA(λ) = 0 is called the algebraic multiplicity of λ. The number of independent eigenvectors associated with λ is called the geometric multiplicity of λ.

EXAMPLES:

[ 1 0 ]
[ 0 1 ]

has the eigenvalue λ = 1 with both the algebraic and geometric multiplicity equal to 2.

[ 1 1 ]
[ 0 1 ]

has the eigenvalue λ = 1 with algebraic multiplicity 2 and geometric multiplicity 1.

PROPERTIES

Let A and B be similar, meaning that for some nonsingular matrix P,

B = P^{-1} A P

Being similar means that A and B represent the same linear transformation of R^n or C^n, but with respect to different bases. Then

fB(λ) = det(B − λI)
      = det(P^{-1} A P − λI)
      = det(P^{-1} [A − λI] P)
      = det(P^{-1}) det(A − λI) det(P)
      = det(A − λI) = fA(λ)

since det(P^{-1}) det(P) = det(P^{-1} P) = 1. This says that similar matrices have the same eigenvalues. For the eigenvectors, write

Ax = λx
(P^{-1} A P)(P^{-1} x) = λ P^{-1} x
Bz = λz   with z = P^{-1} x

Thus there is a simple connection between the eigenvectors of similar matrices.
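A hedged numerical sketch (the matrices are random, chosen only for illustration): similar matrices share their eigenvalues, trace, and determinant, and eigenvectors transform by z = P^{-1} x:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # a random P is almost surely nonsingular
B = np.linalg.inv(P) @ A @ P         # B is similar to A

# Similar matrices have the same eigenvalues, trace, and determinant.
eA = np.sort_complex(np.linalg.eigvals(A))
eB = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eA, eB)
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))

# If A x = lambda x, then B z = lambda z with z = P^{-1} x.
lam, V = np.linalg.eig(A)
x = V[:, 0]
z = np.linalg.inv(P) @ x
assert np.allclose(B @ z, lam[0] * z)
print("similarity checks passed")
```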

Since fB(λ) = fA(λ) for similar matrices A and B, we have that their coefficients are equal. In particular, note that

fA(0) = det A which is the constant term in fA(λ). Thus similar matrices have the same determinant. In particular, introduce

trace A = a1,1 + ··· + an,n

It is the coefficient of λ^{n-1} in fA(λ), except for sign. Similar matrices have the same trace and determinant. Let λ1, ..., λn be the eigenvalues of A, repeated according to their multiplicity. Then

fA(λ) = (λ1 − λ) ··· (λn − λ)
      = (−1)^n λ^n + (−1)^{n-1} (λ1 + ··· + λn) λ^{n-1} + ··· + (λ1 ··· λn)

Thus

trace A = λ1 + ··· + λn,   det A = λ1 ··· λn

EXAMPLE: The eigenvalues of

A = [ 1 2 ]
    [ 3 4 ]

satisfy

λ1 + λ2 = 5,   λ1 λ2 = −2

CANONICAL FORMS

By use of similarity transformations, we change a matrix A into a similar matrix which is “simpler” in its form. These are often called canonical forms; and there is a large literature on such forms. The ones most used in numerical analysis are:

The Schur normal form
The principal axes form
The Jordan form
The singular value decomposition

The Schur normal form is less well-known, but is a powerful tool in deriving results in matrix algebra.

SCHUR'S NORMAL FORM. Let A be a square matrix of order n with coefficients from C. Then there is a nonsingular matrix U for which

U^* A U = T,   U^* U = I

with T an upper triangular matrix. Since U is unitary, U^* = U^{-1}, and thus T and A are similar.

A proof by induction on n is given in the text, on page 474.
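A numerical illustration (a sketch using scipy, on an arbitrary random matrix): the complex Schur form gives a unitary U with U^* A U = T upper triangular, and the diagonal of T carries the eigenvalues of A:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = U T U^*, with U unitary and T upper triangular.
T, U = schur(A, output='complex')

assert np.allclose(U.conj().T @ U, np.eye(4))    # U is unitary
assert np.allclose(U @ T @ U.conj().T, A)        # A = U T U^*
assert np.allclose(np.tril(T, -1), 0)            # T is upper triangular

# The diagonal entries of T are the eigenvalues of A.
assert np.allclose(np.sort_complex(np.diag(T)),
                   np.sort_complex(np.linalg.eigvals(A)))
print("Schur checks passed")
```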

Note that for a triangular matrix T, the characteristic polynomial is

             [ t1,1 − λ   t1,2   ···    t1,n   ]
fT(λ) = det  [    0       ...            .     ]
             [    .              ...     .     ]
             [    0       ···     0  tn,n − λ  ]

      = (t1,1 − λ) ··· (tn,n − λ)

Thus the diagonal elements ti,i are the eigenvalues of T.

PRINCIPAL AXES THEOREM. Let A be Hermitian (or self-adjoint), meaning

A^* = A

Then there is a nonsingular unitary matrix U for which

U^* A U = D

a diagonal matrix with only real entries. If the matrix A is real (and thus symmetric), the matrix U can be chosen as a real orthogonal matrix.

PROOF: Using the Schur normal form,

U^* A U = T

Now form the conjugate transpose of both sides:

T^* = (U^* A U)^*
    = U^* A^* (U^*)^*
    = U^* A^* U    since (B^*)^* = B in general
    = U^* A U      since A^* = A
    = T

Since T^* = T, the diagonal elements of T are real. Moreover, T = T^* implies that the upper triangular T must be a diagonal matrix, as asserted, and we write

D = diag[λ1, ..., λn]

I will not show here that A real implies U real.
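These conclusions are easy to check numerically; a sketch with numpy's symmetric/Hermitian eigensolver eigh, applied to an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2.0            # a real symmetric (hence Hermitian) matrix

# eigh returns real eigenvalues and an orthogonal matrix of eigenvectors.
lam, U = np.linalg.eigh(A)

assert np.all(np.isreal(lam))                    # eigenvalues are real
assert np.allclose(U.T @ U, np.eye(4))           # U is (real) orthogonal
assert np.allclose(U.T @ A @ U, np.diag(lam))    # U^* A U = D

# Each column of U is an eigenvector: A u_j = lambda_j u_j.
for j in range(4):
    assert np.allclose(A @ U[:, j], lam[j] * U[:, j])
print("principal axes checks passed")
```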

Consequences: Again, assume A (and U) are n × n; and let V = R^n or C^n, depending on whether A is real or complex. Write

U = [u1, ..., un]

with u1, ..., un the orthogonal columns of U. Since these are elements of the n-dimensional space V, {u1, ..., un} is an orthogonal basis of V. In addition, re-write U^* A U = D as

AU = UD

                                [ λ1  0  ···  0  ]
A [u1, ..., un] = [u1, ..., un] [ 0   .       .  ]
                                [ .       .   0  ]
                                [ 0   ··· 0   λn ]

Matching corresponding columns on the two sides of this equation, we have

Auj = λj uj,   j = 1, ..., n

Thus the columns u1, ..., un are orthogonal eigenvectors of A; and they form a basis for V.

For a Hermitian matrix A, the eigenvalues are all real; and there is an orthogonal basis for the associated vector space V consisting of eigenvectors of A.

In dealing with such a matrix A in a problem, the basis {u1, ..., un} is often used in place of the original basis (or coordinate system) used in setting up the problem. With this basis, the use of A is clear:

A (c1 u1 + ··· + cn un) = c1 Au1 + ··· + cn Aun
                        = c1 λ1 u1 + ··· + cn λn un

SINGULAR VALUE DECOMPOSITION. This is a canonical form for general rectangular matrices. Let A have order n × m. Then there are unitary matrices U and V, of respective orders m × m and n × n, for which

V^* A U = F

with F of order n × m and of the form

    [ µ1                 ]
    [    µ2          0   ]
F = [        ...         ]
    [            µr      ]
    [   0            0   ]
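A numerical sketch of this decomposition (note that numpy's convention is A = U Σ V^*, so numpy's factors play swapped roles relative to the V^* A U = F form above; the matrix is random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))     # a rectangular matrix, n = 3, m = 5

# numpy computes A = U_np @ diag(s) @ Vh_np, with square unitary factors.
U_np, s, Vh_np = np.linalg.svd(A, full_matrices=True)

# The singular values are real, ordered, and nonnegative.
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)

# Rebuild the rectangular F of the text: diag(s) padded to shape n x m.
F = np.zeros((3, 5))
F[:3, :3] = np.diag(s)

# In the text's notation V^* A U = F; here V = U_np and U = Vh_np^*.
assert np.allclose(U_np.conj().T @ A @ Vh_np.conj().T, F)
print("SVD checks passed")
```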

The numbers µi are all real, with

µ1 ≥ µ2 ≥ ··· ≥ µr > 0

This is proven in the text (p. 478).

JORDAN CANONICAL FORM

This is probably the best known canonical form of linear algebra textbooks. Begin by introducing the matrices

         [ λ  1  0  ···  0 ]
         [ 0  λ  1       . ]
Jm(λ) =  [ .     ..  ..    ]
         [ .         λ  1  ]
         [ 0  ···    0  λ  ]

The order is m × m, and all elements are zero except for λ on the diagonal and 1 on the superdiagonal. We refer to this matrix as a Jordan block.

Let A have order n, with elements chosen from C. Then there is a nonsingular matrix P for which

            [ Jn1(λ1)    0      ···     0    ]
P^{-1}AP =  [   0     Jn2(λ2)   ...     .    ]
            [   .               ...     0    ]
            [   0       ···      0   Jnr(λr) ]

The numbers λ1, ..., λr are the eigenvalues of A, and they need not be distinct. Note that the Jordan block Jm(λ) satisfies

Jm(0)^m = 0

Such a matrix is called nilpotent. We sometimes write the Jordan canonical form as

P^{-1}AP = D + N

with D containing the diagonal elements of P^{-1}AP and N containing the remaining nonzero elements. In this case, N is nilpotent with

N^q = 0,   q = max{n1, ..., nr} ≤ n.
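A small sketch (numpy, with illustrative block sizes and eigenvalues): build Jordan blocks, verify the nilpotency of Jm(0), and check the D + N splitting:

```python
import numpy as np

def jordan_block(lam, m):
    """m x m Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

J = jordan_block(0.0, 3)
# J_m(0)^m = 0: the Jordan block with eigenvalue 0 is nilpotent ...
assert np.allclose(np.linalg.matrix_power(J, 3), 0)
# ... but no smaller power vanishes.
assert not np.allclose(np.linalg.matrix_power(J, 2), 0)

# Split a block-diagonal Jordan matrix as D + N; N is nilpotent with
# N^q = 0, where q is the largest block size (here q = 3).
A = np.block([[jordan_block(2.0, 3), np.zeros((3, 2))],
              [np.zeros((2, 3)),     jordan_block(5.0, 2)]])
D = np.diag(np.diag(A))
N = A - D
assert np.allclose(np.linalg.matrix_power(N, 3), 0)
print("Jordan block checks passed")
```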

The proof of the Jordan canonical form is very complicated, and it can be found in more advanced books on linear algebra.