Diagonalizability

Consider the constant-coefficient linear system
\[ y' = Ay \tag{1} \]
where $A$ is some $n \times n$ matrix. In general, this problem is not easy to solve, since the equations in the system are usually coupled.

Example 1. If $A = \begin{bmatrix} 2 & 1 \\ 4 & -1 \end{bmatrix}$, the system $y' = Ay$ is
\begin{align*}
y_1' &= 2y_1 + y_2 \\
y_2' &= 4y_1 - y_2.
\end{align*}
Because both $y_1$ and $y_2$ appear in both equations, we cannot solve these equations independently.

In some cases, however, the equations in the system are uncoupled.

Example 2. If $A = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}$, the system $y' = Ay$ is
\begin{align*}
y_1' &= 3y_1 \\
y_2' &= -2y_2.
\end{align*}
Since the first equation contains only $y_1$ and the second contains only $y_2$, we may solve the two equations independently of one another. The solution is $y_1 = c_1 e^{3t}$, $y_2 = c_2 e^{-2t}$, or in vector form
\[ y = \begin{bmatrix} c_1 e^{3t} \\ c_2 e^{-2t} \end{bmatrix} = c_1 e^{3t} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^{-2t} \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \]

The matrix $A$ in this example is called a diagonal matrix, since its entries off the main diagonal are all zero. In general, a linear system is uncoupled if and only if its coefficient matrix is a diagonal matrix.

Now suppose we are given a system $y' = Ay$ which is coupled. Is it possible to make a substitution which transforms the system into an uncoupled system? To answer this question, let's consider a substitution of the form
\[ y = Cx, \]
where $C$ is some $n \times n$ invertible matrix. This defines a change of coordinates: the old variable $y$ is replaced by the new variable $x$. To see what effect this has on the system $y' = Ay$, we substitute the change of coordinates into both sides. On the left side we have $y' = Cx'$, and on the right side $Ay = ACx$, so $Cx' = ACx$. Multiplying both sides by the inverse of $C$ gives
\[ x' = C^{-1}ACx. \]
Thus the original system in $y$ with coefficient matrix $A$ has been replaced by the new system in $x$ with coefficient matrix $C^{-1}AC$. In order for this new system to be uncoupled, we need this coefficient matrix to be a diagonal matrix. That is, we want $C^{-1}AC = D$ for some diagonal matrix $D$.
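As a quick numerical sanity check of this change of coordinates (a sketch in Python with NumPy, which is an addition and not part of the original notes), one can verify that if $x' = (C^{-1}AC)x$, then $y = Cx$ satisfies $y' = Ay$; the random $A$ and $C$ below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # stand-in coefficient matrix
C = rng.standard_normal((3, 3))          # a random matrix is invertible with probability 1

B = np.linalg.inv(C) @ A @ C             # coefficient matrix of the x-system

# If x' = Bx, then y = Cx has derivative y' = Cx' = CBx, which should equal Ay = ACx.
x = rng.standard_normal(3)               # an arbitrary state vector
yprime = C @ (B @ x)                     # derivative of y = Cx
assert np.allclose(yprime, A @ (C @ x))  # matches Ay, as claimed
print("y = Cx carries y' = Ay to x' = (C^-1 A C) x")
```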
Definition 1. We say that a matrix $A$ is diagonalizable if there exist an invertible matrix $C$ and a diagonal matrix $D$ such that $C^{-1}AC = D$.

So a system $y' = Ay$ can be uncoupled if and only if $A$ is a diagonalizable matrix. This conclusion leads to the following questions.

• Which matrices are diagonalizable?
• For those matrices $A$ which are diagonalizable, how do we find $C$ and $D$?

To answer these questions, suppose that we have found matrices
\[ C = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ & & \ddots & \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \]
such that $C^{-1}AC = D$. Then multiplying both sides by $C$ gives $AC = CD$. But
\[ AC = \begin{bmatrix} Av_1 & Av_2 & \cdots & Av_n \end{bmatrix} \quad \text{and} \quad CD = \begin{bmatrix} \lambda_1 v_1 & \lambda_2 v_2 & \cdots & \lambda_n v_n \end{bmatrix}, \tag{2} \]
so if we equate the columns of these matrices we find that
\[ Av_1 = \lambda_1 v_1, \quad Av_2 = \lambda_2 v_2, \quad \ldots, \quad Av_n = \lambda_n v_n. \]
Motivated by this, we make the following definition.

Definition 2. A nonzero vector $v$ such that $Av = \lambda v$ for some scalar $\lambda$ is called an eigenvector of $A$ with eigenvalue $\lambda$.

Thus in order for $A$ to be diagonalized by the matrices $C$ and $D$, the columns of $C$ must be eigenvectors whose eigenvalues are the diagonal entries of $D$. Since $C$ is invertible, its columns must be linearly independent. Thus we have shown that if $A$ is diagonalizable, then there exist $n$ linearly independent eigenvectors of $A$. Conversely, if there exist $n$ linearly independent eigenvectors of $A$, then placing these vectors into the columns of a matrix $C$, and placing their eigenvalues into the diagonal matrix $D$, it follows by the same calculation as above that $AC = CD$, so $C^{-1}AC = D$ and $A$ is diagonalizable.

Theorem 1. An $n \times n$ matrix $A$ is diagonalizable if and only if there exist $n$ linearly independent eigenvectors of $A$.
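In practice the matrices $C$ and $D$ can be produced numerically. The sketch below (Python/NumPy, an illustrative aside rather than part of the notes) uses `numpy.linalg.eig`, which returns the eigenvalues of $A$ together with a matrix whose columns are corresponding eigenvectors, and confirms the column-by-column identity $AC = CD$ from (2):

```python
import numpy as np

A = np.array([[2.0, 1.0], [4.0, -1.0]])  # the coupled matrix from Example 1
lams, C = np.linalg.eig(A)               # eigenvalues and eigenvector columns
D = np.diag(lams)

# Equation (2): the k-th column of AC is A v_k, and of CD is lambda_k v_k.
assert np.allclose(A @ C, C @ D)
# The eigenvector columns are independent, so C is invertible and C^-1 A C = D.
assert np.allclose(np.linalg.inv(C) @ A @ C, D)
print("eigenvalues:", np.round(np.sort(lams), 6))   # 3 and -2
```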
Example 3. Let $A = \begin{bmatrix} 2 & 1 \\ 4 & -1 \end{bmatrix}$ and define $v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Then
\[ Av_1 = \begin{bmatrix} 2 & 1 \\ 4 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3v_1, \]
so $v_1$ is an eigenvector with eigenvalue $3$. Next, let $v_2 = \begin{bmatrix} -1 \\ 4 \end{bmatrix}$. Then
\[ Av_2 = \begin{bmatrix} 2 & 1 \\ 4 & -1 \end{bmatrix} \begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ -8 \end{bmatrix} = -2v_2, \]
so $v_2$ is an eigenvector with eigenvalue $-2$. Since $v_1$ and $v_2$ are linearly independent, the matrix $C = \begin{bmatrix} 1 & -1 \\ 1 & 4 \end{bmatrix}$ is invertible, and by the reasoning above,
\[ C^{-1}AC = D = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}. \quad \text{(Check this yourself!)} \]
Thus we have found matrices $C$ and $D$ which diagonalize $A$. Therefore, if we now make the change of variable $y = Cx$, then the system $y' = Ay$ is transformed into the uncoupled system $x' = Dx$:
\begin{align*}
x_1' &= 3x_1 \\
x_2' &= -2x_2.
\end{align*}
The general solution is $x_1 = c_1 e^{3t}$, $x_2 = c_2 e^{-2t}$, or
\[ x = \begin{bmatrix} c_1 e^{3t} \\ c_2 e^{-2t} \end{bmatrix}. \]
Therefore the general solution of the system $y' = Ay$ is
\[ y = Cx = \begin{bmatrix} 1 & -1 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} c_1 e^{3t} \\ c_2 e^{-2t} \end{bmatrix} = c_1 e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 e^{-2t} \begin{bmatrix} -1 \\ 4 \end{bmatrix}. \]

The last statement in this example can be generalized as follows.

Theorem 2. If $\{v_1, v_2, \ldots, v_n\}$ are linearly independent eigenvectors of $A$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, respectively, then the general solution of $y' = Ay$ is
\[ y = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2 + \cdots + c_n e^{\lambda_n t} v_n. \]

Proof. Let $C$ be the matrix with columns $v_1$ through $v_n$ and $D$ the diagonal matrix with diagonal entries $\lambda_1$ through $\lambda_n$. Then using equations (2), we have $AC = CD$ and therefore $C^{-1}AC = D$. So, as above, the substitution $y = Cx$ transforms $y' = Ay$ into $x' = Dx$. The solution of this uncoupled system is
\[ x = \begin{bmatrix} c_1 e^{\lambda_1 t} \\ c_2 e^{\lambda_2 t} \\ \vdots \\ c_n e^{\lambda_n t} \end{bmatrix}, \]
so
\[ y = Cx = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2 + \cdots + c_n e^{\lambda_n t} v_n. \]
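The "check this yourself" step of Example 3, and the claim that Theorem 2's formula solves the system, can both be verified numerically. A NumPy sketch (an aside, not part of the notes; the constants and step size are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [4.0, -1.0]])
C = np.array([[1.0, -1.0], [1.0, 4.0]])        # columns are v1 and v2
D = np.linalg.inv(C) @ A @ C
assert np.allclose(D, np.diag([3.0, -2.0]))    # the diagonalization checks out

# Theorem 2's formula: y(t) = c1 e^{3t} v1 + c2 e^{-2t} v2 should solve y' = Ay.
c1, c2 = 2.0, -1.0                             # arbitrary constants
y = lambda t: c1 * np.exp(3*t) * C[:, 0] + c2 * np.exp(-2*t) * C[:, 1]
t, h = 0.3, 1e-6
dy = (y(t + h) - y(t - h)) / (2*h)             # central-difference estimate of y'(t)
assert np.allclose(dy, A @ y(t), rtol=1e-5)    # y'(t) agrees with A y(t)
```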
We now focus on the task of finding the eigenvalues and eigenvectors of a matrix. We begin with the eigenvalues.

Finding Eigenvalues

Suppose that $v$ is an eigenvector of $A$ with eigenvalue $\lambda$. Then
\[ Av = \lambda v. \]
We can rewrite this equation as $Av - \lambda v = 0$, or, since $Iv = v$ for any vector $v$, $Av - \lambda Iv = 0$. This may be written
\[ (A - \lambda I)v = 0, \]
which means that $v \in N(A - \lambda I)$. Since eigenvectors are nonzero by definition, this implies that the null space of $A - \lambda I$ is nontrivial. This in turn implies that $A - \lambda I$ is not invertible, and therefore
\[ \det(A - \lambda I) = 0. \]
Each step in this chain of implications can be reversed, so we have the following test for eigenvalues.

Theorem 3. Let $A$ be an $n \times n$ matrix. Then $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I) = 0$.

Example 4. Let
\[ A = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}. \]
Then
\[ A - \lambda I = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} 1-\lambda & 2 \\ 4 & 3-\lambda \end{bmatrix}, \]
so
\[ \det(A - \lambda I) = (1-\lambda)(3-\lambda) - 8 = \lambda^2 - 4\lambda - 5 = (\lambda - 5)(\lambda + 1). \]
This expression is zero when $\lambda = 5$ or $\lambda = -1$, so these are the only eigenvalues of $A$.

The expression $p_A(\lambda) = \det(A - \lambda I)$ is in general a polynomial of degree $n$, called the characteristic polynomial of $A$. The eigenvalues of $A$ are the roots of this polynomial. Next let's turn to the task of finding the eigenvectors.

Finding Eigenvectors

Once an eigenvalue $\lambda$ of a matrix $A$ is known, the associated eigenvectors are the nonzero solutions of $Av = \lambda v$. Equivalently, they are the nonzero elements of the null space of $A - \lambda I$. We call
\[ E_\lambda = N(A - \lambda I) \]
the eigenspace associated with the eigenvalue $\lambda$. Every nonzero vector in $E_\lambda$ is an eigenvector with eigenvalue $\lambda$.

Example 5. Let $A = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}$. In the previous example we found that the eigenvalues of $A$ were $5$ and $-1$. For $\lambda = 5$,
\[ A - 5I = \begin{bmatrix} -4 & 2 \\ 4 & -2 \end{bmatrix} \xrightarrow{\text{rref}} \begin{bmatrix} 1 & -\tfrac{1}{2} \\ 0 & 0 \end{bmatrix} \implies E_5 = N(A - 5I) = \operatorname{span}\left( \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right), \]
and we call this spanning vector $v_1$. For $\lambda = -1$,
\[ A - (-1)I = \begin{bmatrix} 2 & 2 \\ 4 & 4 \end{bmatrix} \xrightarrow{\text{rref}} \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \implies E_{-1} = N(A + I) = \operatorname{span}\left( \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right), \]
and we call this spanning vector $v_2$. Thus $v_1$ and all of its nonzero multiples are eigenvectors with eigenvalue $5$, and $v_2$ and all of its nonzero multiples are eigenvectors with eigenvalue $-1$. Since $v_1$ and $v_2$ are linearly independent, the matrix $C = \begin{bmatrix} 1 & -1 \\ 2 & 1 \end{bmatrix}$ is invertible, and we have
\[ C^{-1}AC = D = \begin{bmatrix} 5 & 0 \\ 0 & -1 \end{bmatrix}, \quad \text{(Check!)} \]
so the matrix $A$ is diagonalizable.
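Both computations, the characteristic polynomial of Example 4 and the eigenspaces of Example 5, can be reproduced in a few lines of NumPy (an illustrative sketch, not part of the notes): `numpy.poly` applied to a matrix returns the coefficients of its characteristic polynomial, and a basis for each eigenspace can be read off from the singular value decomposition of $A - \lambda I$.

```python
import numpy as np

A = np.array([[1.0, 2.0], [4.0, 3.0]])
coeffs = np.poly(A)                          # lam^2 - 4 lam - 5
assert np.allclose(coeffs, [1.0, -4.0, -5.0])
assert np.allclose(np.sort(np.roots(coeffs)), [-1.0, 5.0])

def eigenspace_basis(M, lam, tol=1e-8):
    """Basis for E_lam = N(M - lam I): the right-singular vectors of
    M - lam I whose singular values are (numerically) zero."""
    _, s, Vt = np.linalg.svd(M - lam * np.eye(M.shape[0]))
    return Vt[s < tol].T

v1 = eigenspace_basis(A, 5.0)                # proportional to (1, 2)
v2 = eigenspace_basis(A, -1.0)               # proportional to (-1, 1)
assert np.allclose(A @ v1, 5.0 * v1) and np.allclose(A @ v2, -1.0 * v2)
```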
Here is an example of a non-diagonalizable matrix.

Example 6. Let $A = \begin{bmatrix} 2 & 1 \\ -1 & 4 \end{bmatrix}$. The characteristic polynomial of $A$ is $p_A(\lambda) = \lambda^2 - 6\lambda + 9 = (\lambda - 3)^2$, so $\lambda = 3$ is the only eigenvalue of $A$. Since
\[ A - 3I = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix} \xrightarrow{\text{rref}} \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix} \implies E_3 = N(A - 3I) = \operatorname{span}\left( \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right) = \operatorname{span}(v), \]
the only eigenvectors of $A$ are multiples of $v$. Thus there do not exist two linearly independent eigenvectors of $A$, so by Theorem 1, $A$ is not diagonalizable.

Part of the problem in this example was that the $2 \times 2$ matrix $A$ had only one eigenvalue. That is, the number of distinct eigenvalues of $A$ was less than the size of $A$. It turns out that if we avoid this situation, then we are guaranteed diagonalizability.
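Example 6 can also be checked numerically. The sketch below (NumPy, an aside) confirms that $(\lambda - 3)^2$ is the characteristic polynomial and that $A - 3I$ has rank $1$, so the eigenspace $E_3$ is only one-dimensional:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 4.0]])      # the matrix from Example 6
# characteristic polynomial: lam^2 - 6 lam + 9 = (lam - 3)^2
assert np.allclose(np.poly(A), [1.0, -6.0, 9.0])
# A - 3I has rank 1, so its null space E_3 is one-dimensional,
# and A cannot have two linearly independent eigenvectors (Theorem 1).
assert np.linalg.matrix_rank(A - 3 * np.eye(2)) == 1
v = np.array([1.0, 1.0])                     # spans E_3
assert np.allclose(A @ v, 3 * v)
print("only one independent eigenvector: A is not diagonalizable")
```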
