
Similar matrices and Jordan form

We’ve nearly covered the entire heart of linear algebra – once we’ve finished singular value decompositions we’ll have seen all the most central topics.

AᵀA is positive definite

A is positive definite if xᵀAx > 0 for all x ≠ 0. This is a very important class of matrices; positive definite matrices appear in the form of AᵀA when computing least squares solutions. In many situations, a rectangular matrix is multiplied by its transpose to get a square, symmetric matrix.

Given a symmetric positive definite matrix A, is its inverse also symmetric and positive definite? Yes, because if the (positive) eigenvalues of A are λ1, λ2, ..., λn, then the eigenvalues 1/λ1, 1/λ2, ..., 1/λn of A⁻¹ are also positive.

If A and B are positive definite, is A + B positive definite? We don’t know much about the eigenvalues of A + B, but we can use the properties xᵀAx > 0 and xᵀBx > 0 to show that xᵀ(A + B)x > 0 for x ≠ 0, and so A + B is also positive definite.

Now suppose A is a rectangular (m by n) matrix. A is almost certainly not symmetric, but AᵀA is square and symmetric. Is AᵀA positive definite? We’d rather not try to find the eigenvalues or the pivots of this matrix, so we ask when xᵀAᵀAx is positive. Simplifying xᵀAᵀAx is just a matter of moving parentheses:

xᵀ(AᵀA)x = (Ax)ᵀ(Ax) = |Ax|² ≥ 0.

The only remaining question is whether Ax = 0 for some nonzero x. If A has rank n (independent columns), then Ax = 0 only when x = 0, so xᵀ(AᵀA)x > 0 for all x ≠ 0 and AᵀA is positive definite. Another nice feature of positive definite matrices is that you never have to do row exchanges when row reducing – there are never 0’s or unsuitably small numbers in their pivot positions.
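As a quick numerical illustration (added here, not part of the original notes), the NumPy sketch below uses an arbitrary 3 by 2 matrix with independent columns to confirm that AᵀA is symmetric positive definite and that xᵀ(AᵀA)x equals |Ax|²:

```python
import numpy as np

# An arbitrary 3 by 2 example with independent columns (chosen for illustration).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

AtA = A.T @ A                    # square, symmetric 2 by 2 matrix

# Both eigenvalues are positive, so A^T A is positive definite.
print(np.linalg.eigvalsh(AtA))

# For any nonzero x, x^T (A^T A) x equals |Ax|^2 > 0.
x = np.array([1.0, -1.0])
print(x @ AtA @ x, np.linalg.norm(A @ x) ** 2)   # both are 3 (up to roundoff)
```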

Similar matrices A and B = M⁻¹AM

Two square matrices A and B are similar if B = M⁻¹AM for some invertible matrix M. This allows us to put matrices into families in which all the matrices in a family are similar to each other. Then each family can be represented by a diagonal (or nearly diagonal) matrix.

Distinct eigenvalues

If A has a full set of eigenvectors we can create its eigenvector matrix S and write S⁻¹AS = Λ. So A is similar to Λ (choosing M to be this S).

If A = [ 2 1 ; 1 2 ] then Λ = [ 3 0 ; 0 1 ], and so A is similar to [ 3 0 ; 0 1 ]. But A is also similar to B = M⁻¹AM:

   M⁻¹       A        M            B
[ 1 −4 ] [ 2  1 ] [ 1  4 ]    [ −2 −15 ]
[ 0  1 ] [ 1  2 ] [ 0  1 ] =  [  1   6 ]

In addition, B is similar to Λ. All these similar matrices have the same eigenvalues, 3 and 1; we can check this by computing the trace and determinant of A and B. Similar matrices have the same eigenvalues!

In fact, the matrices similar to A are all the 2 by 2 matrices with eigenvalues 3 and 1. Some other members of this family are [ 3 7 ; 0 1 ] and [ 1 7 ; 0 3 ]. To prove that similar matrices have the same eigenvalues, suppose Ax = λx. We modify this equation to include B = M⁻¹AM:

AMM⁻¹x = λx
M⁻¹AMM⁻¹x = λM⁻¹x
BM⁻¹x = λM⁻¹x.

The matrix B has the same λ as an eigenvalue; M⁻¹x is the corresponding eigenvector. If two matrices are similar, they have the same eigenvalues and the same number of independent eigenvectors (but probably not the same eigenvectors). When we diagonalize A, we’re finding a diagonal matrix Λ that is similar to A. If two matrices have the same n distinct eigenvalues, they’ll be similar to the same diagonal matrix.
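A short NumPy sketch (an illustration added here, not part of the notes) reproduces this example: it diagonalizes A, forms B = M⁻¹AM with the M above, and checks that A and B share the eigenvalues 3 and 1:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
M = np.array([[1.0, 4.0],
              [0.0, 1.0]])

# Diagonalization: S^{-1} A S = Lambda, so A is similar to diag(3, 1).
lam, S = np.linalg.eig(A)
print(np.linalg.inv(S) @ A @ S)        # diagonal, with 3 and 1 on the diagonal

# Another similar matrix: B = M^{-1} A M.
B = np.linalg.inv(M) @ A @ M
print(B)                               # [[-2. -15.], [ 1.  6.]]

# Similar matrices share eigenvalues, and hence trace and determinant.
print(np.linalg.eigvals(A), np.linalg.eigvals(B))   # 3 and 1 in each case
print(np.trace(B), np.linalg.det(B))                # 4.0 and (approximately) 3.0
```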

Repeated eigenvalues

If two eigenvalues of A are the same, it may not be possible to diagonalize A. Suppose λ1 = λ2 = 4. One family of matrices with eigenvalues 4 and 4 contains only the matrix [ 4 0 ; 0 4 ]. The matrix [ 4 1 ; 0 4 ] is not in this family.

There are two families of similar matrices with eigenvalues 4 and 4. The larger family includes [ 4 1 ; 0 4 ]. Each of the members of this family has only one independent eigenvector.

The matrix [ 4 0 ; 0 4 ] is the only member of the other family, because:

M⁻¹ [ 4 0 ; 0 4 ] M = 4M⁻¹M = [ 4 0 ; 0 4 ]   for any M.
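The sketch below (added for illustration; the particular M is an arbitrary choice) checks both claims numerically: [ 4 1 ; 0 4 ] has a repeated eigenvalue but only one independent eigenvector, while M⁻¹(4I)M comes back as 4I:

```python
import numpy as np

J = np.array([[4.0, 1.0],
              [0.0, 4.0]])

# Repeated eigenvalue 4, but J - 4I has rank 1, so there is only one
# independent eigenvector: J cannot be diagonalized.
print(np.linalg.eigvals(J))                        # [4. 4.]
print(np.linalg.matrix_rank(J - 4 * np.eye(2)))    # 1

# 4I is only similar to itself: M^{-1} (4I) M = 4 M^{-1} M = 4I.
M = np.array([[1.0, 4.0],                          # an arbitrary invertible M
              [0.0, 1.0]])
print(np.linalg.inv(M) @ (4 * np.eye(2)) @ M)      # [[4. 0.], [0. 4.]]
```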

Jordan form

Camille Jordan found a way to choose a “most diagonal” representative from each family of similar matrices; this representative is said to be in Jordan normal form. For example, both [ 4 1 ; 0 4 ] and [ 4 0 ; 0 4 ] are in Jordan form. This form used to be the climax of linear algebra, but not any more. Numerical applications rarely need it.

We can find more members of the family represented by [ 4 1 ; 0 4 ] by choosing diagonal entries to get a trace of 8, then choosing off-diagonal entries to get a determinant of 16:

[ 4  1 ]   [  5  1 ]   [  4  0 ]   [        a              b    ]
[ 0  4 ] , [ −1  3 ] , [ 17  4 ] , [ (8a − a² − 16)/b    8 − a ] .

(None of these are diagonalizable, because if they were they would be similar to [ 4 0 ; 0 4 ]. That matrix is only similar to itself.)
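Here is a small numerical check (not in the original notes) that two of the family members displayed above have trace 8, determinant 16, and only a single independent eigenvector, so they are indeed not diagonalizable:

```python
import numpy as np

# Two members of the trace-8, determinant-16 family displayed above.
members = [np.array([[ 5.0, 1.0],
                     [-1.0, 3.0]]),
           np.array([[ 4.0, 0.0],
                     [17.0, 4.0]])]

for X in members:
    print(np.trace(X), np.linalg.det(X))              # 8.0 and (approximately) 16.0
    # X - 4I has rank 1, so X has a single independent eigenvector
    # for the repeated eigenvalue 4 and is not diagonalizable.
    print(np.linalg.matrix_rank(X - 4 * np.eye(2)))   # 1
```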

What about this one?

A = [ 0  1  0  0 ]
    [ 0  0  1  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]

Its eigenvalues are four zeros. Its rank is 2, so the dimension of its nullspace is 4 − 2 = 2. It will have two independent eigenvectors and two “missing” eigenvectors. When we look instead at

B = [ 0  1  7  0 ]
    [ 0  0  1  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ] ,

its rank and the dimension of its nullspace are still 2, but it’s not as nice as A. B is similar to A, which is the representative of this family. A has a 1 above the diagonal for every missing eigenvector and the rest of its entries are 0. Now consider:

C = [ 0  1  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  1 ]
    [ 0  0  0  0 ] .

Again it has rank 2 and its nullspace has dimension 2. Its four eigenvalues are 0. Surprisingly, it is not similar to A. We can see this by breaking the matrices into their Jordan blocks:

A = [ 0  1  0  0 ]        C = [ 0  1  0  0 ]
    [ 0  0  1  0 ]            [ 0  0  0  0 ]
    [ 0  0  0  0 ] ,          [ 0  0  0  1 ] .
    [ 0  0  0  0 ]            [ 0  0  0  0 ]

A splits into a 3 by 3 Jordan block and a 1 by 1 block, while C splits into two 2 by 2 blocks.
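SymPy can compute Jordan forms exactly; the sketch below (an added illustration, assuming SymPy is available) shows that A’s Jordan blocks have sizes 3 and 1 while C’s have sizes 2 and 2, even though both matrices have rank 2 and a two-dimensional nullspace:

```python
from sympy import Matrix

A = Matrix([[0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]])
C = Matrix([[0, 1, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 0, 0]])

# jordan_form() returns (P, J) with the matrix equal to P * J * P**-1.
_, JA = A.jordan_form()
_, JC = C.jordan_form()
print(JA)   # one 3 by 3 Jordan block plus one 1 by 1 block
print(JC)   # two 2 by 2 Jordan blocks: different block sizes, so not similar

# Same rank (2) and same nullspace dimension (2) for both matrices.
print(A.rank(), C.rank(), len(A.nullspace()), len(C.nullspace()))
```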

A Jordan block Ji has a repeated eigenvalue λi on the diagonal, zeros below the diagonal and in the upper right hand corner, and ones above the diagonal:

Ji = [ λi   1    0   ···   0  ]
     [ 0    λi   1         0  ]
     [ ⋮          ⋱    ⋱   ⋮  ]
     [ 0    0         λi   1  ]
     [ 0    0    ···   0   λi ]

Two matrices may have the same eigenvalues and the same number of eigenvectors, but if their Jordan blocks are different sizes those matrices cannot be similar. Jordan’s theorem says that every square matrix A is similar to a Jordan matrix J, with Jordan blocks Ji on the diagonal:

J = [ J1   0   ···   0  ]
    [ 0    J2  ···   0  ]
    [ ⋮          ⋱   ⋮  ]
    [ 0    0   ···   Jd ] .

In a Jordan matrix, the eigenvalues are on the diagonal and there may be ones above the diagonal; the rest of the entries are zero. The number of blocks is the number of eigenvectors – there is one eigenvector per block.
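To make that block structure concrete, here is a small helper (hypothetical, added for illustration) that builds a Jordan block and assembles a Jordan matrix with scipy.linalg.block_diag; the resulting 5 by 5 matrix has two blocks and therefore only two independent eigenvectors:

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, k):
    """A k-by-k Jordan block: lam on the diagonal, ones just above it."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

# A 5 by 5 Jordan matrix with two blocks: J1 (3 by 3, eigenvalue 4)
# and J2 (2 by 2, eigenvalue 2). One eigenvector per block.
J = block_diag(jordan_block(4.0, 3), jordan_block(2.0, 2))
print(J)

# Rank of J - lambda*I is 4 for each eigenvalue, so each eigenvalue
# contributes exactly one independent eigenvector.
print(np.linalg.matrix_rank(J - 4 * np.eye(5)))   # 4
print(np.linalg.matrix_rank(J - 2 * np.eye(5)))   # 4
```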

To summarize:

• If A has n distinct eigenvalues, it is diagonalizable and its Jordan matrix is the diagonal matrix J = Λ.

• If A has repeated eigenvalues and “missing” eigenvectors, then its Jordan matrix will have n − d ones above the diagonal, where d is the number of independent eigenvectors (one per Jordan block).

We have not learned to compute the Jordan matrix of a matrix which is missing eigenvectors, but we do know how to diagonalize a matrix which has n distinct eigenvalues.

MIT OpenCourseWare
http://ocw.mit.edu

18.06SC Linear Algebra Fall 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.