Diagonalization

Section 5.2 Diagonalization

We have seen that diagonal and triangular matrices are much easier to work with than most matrices. For example, determinants and eigenvalues are easy to compute, and multiplication is much more straightforward. Diagonal matrices are particularly nice: the product of two diagonal matrices can be computed by simply multiplying their corresponding diagonal entries:
$$\begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}
\begin{pmatrix} b_1 & 0 & \cdots & 0 \\ 0 & b_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & b_n \end{pmatrix}
=
\begin{pmatrix} a_1 b_1 & 0 & \cdots & 0 \\ 0 & a_2 b_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_n b_n \end{pmatrix}.$$
Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Definition 1. An $n \times n$ matrix $A$ is similar to a matrix $B$ if there is an invertible matrix $P$ so that
$$B = P^{-1} A P,$$
and the function $c_P$ defined by
$$c_P(A) = P^{-1} A P$$
is called a similarity transformation.

As an example, the matrices
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} -8 & 3 \\ -36 & 13 \end{pmatrix}$$
are similar: with
$$P = \begin{pmatrix} 4 & -1 \\ -3 & 1 \end{pmatrix} \quad \text{and} \quad P^{-1} = \begin{pmatrix} 1 & 1 \\ 3 & 4 \end{pmatrix},$$
you should check that
$$P^{-1} A P = \begin{pmatrix} 1 & 1 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} \begin{pmatrix} 4 & -1 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -8 & 3 \\ -36 & 13 \end{pmatrix} = B,$$
so that $A$ and $B$ are indeed similar.

We are concerned with similarity not for its own sake as an interesting phenomenon, but because of quantities known as similarity invariants. We can get a feel for what similarity invariants are by considering data about the matrices from the previous example: in particular, let's calculate the determinant, trace, and eigenvalues of $A$ and $B$:

property      | A    | B
--------------|------|------
determinant   | 4    | 4
trace         | 5    | 5
eigenvalues   | 1, 4 | 1, 4

As you may have guessed from the table above, certain matrix properties, such as the determinant, trace, and eigenvalues, are "shared" among similar matrices; this is what we mean when we use the phrase similarity invariant. In other words, the determinant of the matrix $A$ is also the determinant of any matrix similar to $A$.
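The similarity check and the invariants in the table can be verified numerically. The following is a quick sketch using NumPy (not part of the original text); the matrices are exactly those of the example above:

```python
import numpy as np

# Matrices from the example above.
A = np.array([[1.0, 0.0],
              [0.0, 4.0]])
B = np.array([[-8.0, 3.0],
              [-36.0, 13.0]])
P = np.array([[4.0, -1.0],
              [-3.0, 1.0]])

# Verify the similarity B = P^{-1} A P.
P_inv = np.linalg.inv(P)
print(np.allclose(P_inv @ A @ P, B))                    # True

# The similarity invariants from the table agree (up to roundoff):
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True (both 4)
print(np.isclose(np.trace(A), np.trace(B)))             # True (both 5)
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))       # True (both 1, 4)
```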
In fact, it is quite easy to check that the determinant is a similarity invariant; to do so, we recall two rules:
$$\det(AB) = \det(A)\det(B) \quad \text{and} \quad \det(P^{-1}) = \frac{1}{\det P}.$$
Any matrix similar to a given matrix $A$ must have the form $P^{-1} A P$, with $P$ an invertible matrix. Let's calculate the determinant of this similar matrix, using the rules above:
$$\det(P^{-1} A P) = (\det P^{-1})(\det A)(\det P) = \left( \frac{1}{\det P} \right)(\det A)(\det P) = \det A.$$
We have just shown that similar matrices share determinants; that is, $\det A = \det(P^{-1} A P)$ for any invertible matrix $P$. In general, if matrices $A$ and $B$ are similar, so that $B = P^{-1} A P$, then they share:

- determinant
- trace
- eigenvalues
- rank
- nullity
- invertibility
- characteristic polynomial
- eigenspace corresponding to a particular (shared) eigenvalue

Based on the example above, you may have already guessed the reason that we care about the idea of similarity: if $A$ and $B$ are similar matrices, and if $A$ is diagonal, then it is much easier to calculate data such as the determinant and eigenvalues for $A$ than for $B$. With this in mind, we introduce the idea of diagonalizability:

Definition 2. An $n \times n$ matrix $A$ is diagonalizable if it is similar to a diagonal matrix. That is, $A$ is diagonalizable if there is an invertible matrix $P$ so that $P^{-1} A P$ is diagonal.

The matrix
$$B = \begin{pmatrix} -8 & 3 \\ -36 & 13 \end{pmatrix}$$
is diagonalizable, since it is similar to the diagonal matrix
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}.$$

Key Point. It is important to note that not every matrix is diagonalizable. Indeed, there are many matrices which are simply not similar to a diagonal matrix. We will examine a few later in this section.

Criteria for Diagonalizability

Given the information we have collected so far, it should be clear that, while diagonal matrices are arguably the easiest type of matrix to work with, diagonalizable matrices are almost as easy. If I want to know the determinant, trace, etc. of a matrix $B$ that is similar to a diagonal matrix $A$, then I simply need to make the (easier) calculations for $A$.
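The derivation above holds for any invertible $P$, not just the one from the example. As a sketch (using arbitrary random matrices of my own choosing, not from the text), we can spot-check the determinant computation, along with two more invariants from the list:

```python
import numpy as np

# det(P^{-1} A P) = (1/det P)(det A)(det P) = det A, checked on
# random matrices rather than a specific example.
rng = np.random.default_rng(seed=1)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # a random P is almost surely invertible

similar = np.linalg.inv(P) @ A @ P
print(np.isclose(np.linalg.det(similar), np.linalg.det(A)))        # True
# Trace and rank, two more invariants from the list, also agree:
print(np.isclose(np.trace(similar), np.trace(A)))                  # True
print(np.linalg.matrix_rank(similar) == np.linalg.matrix_rank(A))  # True
```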
This leads to a few interesting questions: how can we be certain that a given matrix is diagonalizable? And if we know that a matrix is diagonalizable, how do we find the diagonal matrix to which it is similar? The following theorem answers the first question:

Theorem 5.2.1. If $A$ is an $n \times n$ matrix, then the following statements are equivalent:

(a) $A$ is diagonalizable.
(b) $A$ has $n$ linearly independent eigenvectors.

In other words, we can check that a matrix $A$ is diagonalizable by looking at its eigenvectors: if $A$ has $n$ linearly independent eigenvectors, then it is diagonalizable.

Example

In Section 5.1, we saw that the matrix
$$A = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 4 & 2 & 0 \\ 4 & 0 & 0 & 3 \end{pmatrix}$$
has the repeated eigenvalue $\lambda_1 = \lambda_3 = 2$, with two corresponding linearly independent eigenvectors
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ -4 \end{pmatrix} \quad \text{and} \quad \mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}.$$
In addition, the eigenvalues $\lambda_2 = -1$ and $\lambda_4 = 3$ have eigenvectors
$$\mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \\ -\frac{4}{3} \\ 0 \end{pmatrix} \quad \text{and} \quad \mathbf{x}_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix},$$
respectively. You should check that the four eigenvectors above are linearly independent by inspecting the linear combination
$$a\mathbf{x}_1 + b\mathbf{x}_2 + c\mathbf{x}_3 + d\mathbf{x}_4 = \mathbf{0};$$
indeed, it is easy to see that the corresponding system
$$a = 0, \quad b = 0, \quad -\frac{4}{3}b + c = 0, \quad -4a + d = 0$$
has only the trivial solution $a = b = c = d = 0$. Since $A$ is $4 \times 4$ and has $4$ linearly independent eigenvectors, $A$ is diagonalizable.

Example

Determine whether the matrix
$$A = \begin{pmatrix} 4 & -1 \\ 1 & 2 \end{pmatrix}$$
is diagonalizable.

We need to check the eigenvectors of $A$; if $A$ is diagonalizable, then it has two linearly independent eigenvectors. Accordingly, we begin by finding the eigenvalues of $A$, using the characteristic equation:
$$\det(\lambda I - A) = \det \begin{pmatrix} \lambda - 4 & 1 \\ -1 & \lambda - 2 \end{pmatrix} = (\lambda - 4)(\lambda - 2) + 1 = \lambda^2 - 6\lambda + 8 + 1 = \lambda^2 - 6\lambda + 9,$$
so that the characteristic equation for $A$ is
$$\lambda^2 - 6\lambda + 9 = 0.$$
By factoring the equation as $(\lambda - 3)^2 = 0$, we see that its only root is $\lambda = 3$, so that $A$ has a single repeated eigenvalue.
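The characteristic-polynomial computation above can be double-checked numerically; this sketch (not part of the text) confirms that the roots of $\lambda^2 - 6\lambda + 9$ match the eigenvalues of $A$ computed directly:

```python
import numpy as np

A = np.array([[4.0, -1.0],
              [1.0, 2.0]])

# Roots of the characteristic polynomial lambda^2 - 6*lambda + 9:
roots = np.roots([1.0, -6.0, 9.0])    # coefficients in descending order
print(roots)                          # both roots are (numerically) 3

# They match the eigenvalues computed directly from A:
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                    # the repeated eigenvalue 3
```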
Any eigenvector $\mathbf{x}$ corresponding to $\lambda = 3$ must satisfy $A\mathbf{x} = 3\mathbf{x}$:
$$\begin{pmatrix} 4 & -1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3x_1 \\ 3x_2 \end{pmatrix}, \quad \text{that is,} \quad \begin{pmatrix} 4x_1 - x_2 \\ x_1 + 2x_2 \end{pmatrix} = \begin{pmatrix} 3x_1 \\ 3x_2 \end{pmatrix}.$$
Thus we see that
$$4x_1 - x_2 = 3x_1 \quad \text{and} \quad x_1 + 2x_2 = 3x_2,$$
both of which amount to the single equation
$$x_1 - x_2 = 0, \quad \text{or} \quad x_1 = x_2.$$
Parameterizing $x_1$ as $x_1 = t$, we see that any eigenvector of $A$ must have the form
$$\begin{pmatrix} t \\ t \end{pmatrix} = t \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Thus $A$ has only one linearly independent eigenvector; since $A$ is $2 \times 2$, it is not diagonalizable.

The following theorem on eigenvalues and their associated eigenvectors will give us a quick way to check some matrices for diagonalizability:

Theorem 5.2.2. If $\lambda_1, \lambda_2, \ldots, \lambda_k$ are distinct eigenvalues of an $n \times n$ matrix $A$, and $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$ are eigenvectors corresponding to $\lambda_1, \lambda_2, \ldots, \lambda_k$, respectively, then the set $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k\}$ is a linearly independent set.

The theorem says that, for each distinct eigenvalue of a matrix $A$, we are guaranteed another linearly independent eigenvector. For example, if a $4 \times 4$ matrix $A$ has eigenvalues $-10$, $2$, $0$, and $5$, then since it has four distinct eigenvalues, $A$ automatically has four linearly independent eigenvectors. Taken together with Theorem 5.2.1 on diagonalizability, we have the following corollary:

Corollary. Any $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable.

Key Point. We must be extremely careful to note that we can only use the corollary to draw conclusions about an $n \times n$ matrix if the matrix has $n$ distinct eigenvalues. If the matrix does not have $n$ distinct eigenvalues, then it may or may not be diagonalizable. In fact, we have seen two matrices
$$\begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 4 & 2 & 0 \\ 4 & 0 & 0 & 3 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 4 & -1 \\ 1 & 2 \end{pmatrix}$$
with repeated eigenvalues: the first matrix has eigenvalues $-1$, $2$, $2$, and $3$, and the second has eigenvalues $3$ and $3$. The first matrix is diagonalizable, while the second is not.

Finding the Similar Diagonal Matrix

Earlier, we asked how we could go about finding the diagonal matrix to which a diagonalizable matrix $A$ is similar.
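The Key Point can be checked numerically by counting linearly independent eigenvectors, per Theorem 5.2.1. The sketch below (not part of the text) uses the rank of the eigenvector matrix; the rank tolerance is a floating-point judgment call of mine, not something the theory dictates:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-6):
    """Numerical sketch of Theorem 5.2.1: an n x n matrix is diagonalizable
    iff it has n linearly independent eigenvectors.  The rank tolerance is
    a floating-point judgment call."""
    n = A.shape[0]
    _, eigenvectors = np.linalg.eig(A)            # eigenvectors as columns
    return np.linalg.matrix_rank(eigenvectors, tol=tol) == n

# The two matrices from the Key Point, both with repeated eigenvalues.
A4 = np.array([[2.0, 0.0, 0.0, 0.0],
               [0.0, -1.0, 0.0, 0.0],
               [0.0, 4.0, 2.0, 0.0],
               [4.0, 0.0, 0.0, 3.0]])
A2 = np.array([[4.0, -1.0],
               [1.0, 2.0]])

print(is_diagonalizable(A4))   # True: eigenvalue 2 is repeated, yet four
                               # independent eigenvectors exist
print(is_diagonalizable(A2))   # False: only one independent eigenvector
```

Note that a repeated eigenvalue is what makes this rank check necessary; for a matrix with $n$ distinct eigenvalues, the corollary already settles the question.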
If $A$ is diagonalizable, with $P^{-1} A P$ the desired diagonal matrix, then we can rephrase the question above: how do we find $P$? The answer to this question turns out to be quite interesting:

Theorem. Let the $n \times n$ matrix $A$ be diagonalizable, with $n$ linearly independent eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$. Set
$$P = \begin{pmatrix} | & | & & | \\ \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \\ | & | & & | \end{pmatrix}.$$
In other words, $P$ is the matrix whose columns are the $n$ linearly independent eigenvectors of $A$. Then $P^{-1} A P$ is a diagonal matrix whose diagonal entries are the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ that correspond to the eigenvectors forming the successive columns of $P$.

Example

Find the diagonal matrix to which
$$A = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 4 & 2 & 0 \\ 4 & 0 & 0 & 3 \end{pmatrix}$$
is similar.
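The theorem can be illustrated on this very matrix: assembling $P$ from the eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \mathbf{x}_4$ found in the earlier example should produce a diagonal $P^{-1} A P$ with the eigenvalues $2, -1, 2, 3$ on its diagonal. A numerical sketch (not part of the text):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [0.0, -1.0, 0.0, 0.0],
              [0.0, 4.0, 2.0, 0.0],
              [4.0, 0.0, 0.0, 3.0]])

# Columns of P are the eigenvectors found in the earlier example.
P = np.column_stack([
    [1.0, 0.0, 0.0, -4.0],        # x1, eigenvalue 2
    [0.0, 1.0, -4.0 / 3.0, 0.0],  # x2, eigenvalue -1
    [0.0, 0.0, 1.0, 0.0],         # x3, eigenvalue 2
    [0.0, 0.0, 0.0, 1.0],         # x4, eigenvalue 3
])

D = np.linalg.inv(P) @ A @ P
# D is diagonal, with the eigenvalues in the column order of P:
print(np.allclose(D, np.diag([2.0, -1.0, 2.0, 3.0])))   # True
```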
