
Lecture 12: Diagonalization

A square matrix $D$ is called diagonal if all entries off the diagonal are zero:

$$D = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix}_{n \times n}. \tag{1}$$

Diagonal matrices are the simplest matrices: a diagonal matrix carries the same information as the vector $(a_1, a_2, \ldots, a_n)$ of its diagonal entries in $\mathbb{R}^n$. Obviously, $D$ has eigenvalues $a_1, a_2, \ldots, a_n$, and for each $i = 1, 2, \ldots, n$,

$$D\vec{e}_i = \begin{bmatrix} a_1 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & a_i & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & a_n \end{bmatrix} \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} = a_i \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} = a_i \vec{e}_i.$$

We thus conclude that for any diagonal matrix the eigenvalues are exactly the diagonal entries, and $\vec{e}_i$ is an eigenvector associated with the eigenvalue $a_i$, the $i$th diagonal entry, for $i = 1, 2, \ldots, n$.

Because diagonal matrices are so simple, one would like to know whether a given matrix is similar to a diagonal matrix. Diagonalization is the process of finding a diagonal matrix that is similar to a given non-diagonal matrix.

Definition 12.1. An $n \times n$ matrix $A$ is called diagonalizable if $A$ is similar to a diagonal matrix $D$.

Example 12.1. Consider

$$A = \begin{bmatrix} 7 & 2 \\ -4 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix}, \qquad P = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}.$$

(a) Verify $A = PDP^{-1}$. (b) Find $D^k$ and $A^k$. (c) Find the eigenvalues and eigenvectors of $A$.

Solution: (a) It suffices to show that $AP = PD$ and that $P$ is invertible. A direct calculation gives

$$\det P = -1 \ne 0 \implies P \text{ is invertible.}$$

Again by direct computation,

$$AP = \begin{bmatrix} 5 & 3 \\ -5 & -6 \end{bmatrix}, \qquad PD = \begin{bmatrix} 5 & 3 \\ -5 & -6 \end{bmatrix}.$$

Therefore $AP = PD$.

(b)

$$D^2 = \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 5^2 & 0 \\ 0 & 3^2 \end{bmatrix}, \qquad D^k = \begin{bmatrix} 5^k & 0 \\ 0 & 3^k \end{bmatrix}.$$

$$A^2 = \left(PDP^{-1}\right)\left(PDP^{-1}\right) = PD\left(P^{-1}P\right)DP^{-1} = PD^2P^{-1}, \qquad A^k = PD^kP^{-1}.$$

(c) The eigenvalues of $A$ are the eigenvalues of $D$, namely $\lambda_1 = 5$ and $\lambda_2 = 3$. For $D$, we know that

$$\vec{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \text{ is an eigenvector for } \lambda_1 = 5: \quad D\vec{e}_1 = 5\vec{e}_1, \tag{2}$$

$$\vec{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \text{ is an eigenvector for } \lambda_2 = 3: \quad D\vec{e}_2 = 3\vec{e}_2. \tag{3}$$

Since $AP = PD$, from (2) and (3), respectively, we see

$$AP\vec{e}_1 = PD\vec{e}_1 = P(5\vec{e}_1) = 5P\vec{e}_1, \qquad AP\vec{e}_2 = PD\vec{e}_2 = P(3\vec{e}_2) = 3P\vec{e}_2.$$

Therefore

$$P\vec{e}_1 = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

is an eigenvector of $A$ associated with the eigenvalue $\lambda = 5$, and

$$P\vec{e}_2 = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$$

is an eigenvector of $A$ associated with the eigenvalue $\lambda = 3$.
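Computations like those in Example 12.1 are easy to check numerically. Below is a minimal sketch in Python/numpy (my own illustration, not part of the lecture) verifying $AP = PD$, the power formula $A^k = PD^kP^{-1}$, and that the columns of $P$ are eigenvectors of $A$.

```python
import numpy as np

# Matrices from Example 12.1.
A = np.array([[7.0, 2.0], [-4.0, 1.0]])
D = np.diag([5.0, 3.0])
P = np.array([[1.0, 1.0], [-1.0, -2.0]])

# (a) A = P D P^{-1} is equivalent to AP = PD with P invertible.
assert abs(np.linalg.det(P)) > 1e-12      # det P = -1, so P is invertible
assert np.allclose(A @ P, P @ D)          # AP = PD

# (b) A^k = P D^k P^{-1}, where D^k is diagonal with entries 5^k and 3^k.
k = 6
Dk = np.diag([5.0**k, 3.0**k])
assert np.allclose(P @ Dk @ np.linalg.inv(P), np.linalg.matrix_power(A, k))

# (c) The columns P e_i of P are eigenvectors of A: A (P e_i) = a_i (P e_i).
for i, lam in enumerate([5.0, 3.0]):
    v = P[:, i]
    assert np.allclose(A @ v, lam * v)
print("All checks pass.")
```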
From this example we observe: if $A$ is diagonalizable, i.e., $A$ is similar to a diagonal matrix $D$ (as in (1)) through an invertible matrix $P$ with $AP = PD$, then $P\vec{e}_i$ is an eigenvector of $A$ associated with $a_i$, for $i = 1, 2, \ldots, n$. This generalization can be verified in a manner analogous to Example 12.1. Moreover, these $n$ eigenvectors $P\vec{e}_1, P\vec{e}_2, \ldots, P\vec{e}_n$ are linearly independent. To see the independence, consider the linear system

$$\sum_{i=1}^{n} \lambda_i P\vec{e}_i = \vec{0}.$$

Suppose $(\lambda_1, \ldots, \lambda_n)$ is a solution. Then

$$P\left(\sum_{i=1}^{n} \lambda_i \vec{e}_i\right) = \vec{0}.$$

Since $P$ is invertible, the above equation leads to

$$\sum_{i=1}^{n} \lambda_i \vec{e}_i = \vec{0}.$$

Since $\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$ are linearly independent, this last system has only the trivial solution $\lambda_i = 0$ for all $i = 1, 2, \ldots, n$. Therefore $P\vec{e}_1, P\vec{e}_2, \ldots, P\vec{e}_n$ are linearly independent. This observation leads to

Theorem 12.1 (Diagonalization). An $n \times n$ matrix $A$ is diagonalizable iff $A$ has $n$ linearly independent eigenvectors. Furthermore, suppose that $A$ has $n$ linearly independent eigenvectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$. Set

$$P = [\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n], \qquad D = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix},$$

where $a_i$ is the eigenvalue associated with $\vec{v}_i$, i.e., $A\vec{v}_i = a_i\vec{v}_i$. Then $P$ is invertible and $P^{-1}AP = D$.

Proof. We have demonstrated that if $A$ is diagonalizable, then $A$ has $n$ linearly independent eigenvectors. We next show that the converse is also true: if $A$ has $n$ linearly independent eigenvectors, then $A$ must be diagonalizable. To this end, we only need to verify that $AP = PD$ with $P$ and $D$ as described above, since $P$ is obviously invertible by the independence of the eigenvectors. The identity $AP = PD$ follows by direct calculation:

$$AP = A[\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n] = [A\vec{v}_1, A\vec{v}_2, \ldots, A\vec{v}_n] = [a_1\vec{v}_1, a_2\vec{v}_2, \ldots, a_n\vec{v}_n],$$

$$PD = P[a_1\vec{e}_1, a_2\vec{e}_2, \ldots, a_n\vec{e}_n] = [a_1 P\vec{e}_1, a_2 P\vec{e}_2, \ldots, a_n P\vec{e}_n].$$

Now,

$$P\vec{e}_1 = [\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n] \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \vec{v}_1, \qquad P\vec{e}_2 = [\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n] \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} = \vec{v}_2, \qquad \ldots$$

This shows $AP = PD$.

Example 12.2. Diagonalize

$$A = \begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix}.$$

Solution: Diagonalization means finding a diagonal matrix $D$ and an invertible matrix $P$ such that $AP = PD$. We follow Theorem 12.1 step by step.

Step 1. Find all eigenvalues. From the computation

$$\begin{aligned} \det(A - \lambda I) &= \det \begin{bmatrix} 1-\lambda & 3 & 3 \\ -3 & -5-\lambda & -3 \\ 3 & 3 & 1-\lambda \end{bmatrix} \\ &= -\lambda^3 - 3\lambda^2 + 4 \\ &= \left(-\lambda^3 + \lambda^2\right) + \left(-4\lambda^2 + 4\right) \\ &= -\lambda^2(\lambda - 1) - 4\left(\lambda^2 - 1\right) \\ &= -(\lambda - 1)\left[\lambda^2 + 4(\lambda + 1)\right] \\ &= -(\lambda - 1)(\lambda + 2)^2, \end{aligned}$$

we see that $A$ has eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -2$ (a double root).

Step 2. Find all eigenvectors: find a basis for each eigenspace $\operatorname{Null}(A - \lambda_i I)$.

For $\lambda_1 = 1$,

$$A - \lambda_1 I = \begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 3 & 3 \\ -3 & -6 & -3 \\ 3 & 3 & 0 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} -3 & -6 & -3 \\ 0 & 3 & 3 \\ 3 & 3 & 0 \end{bmatrix} \xrightarrow{R_3 + R_1 \to R_3} \begin{bmatrix} -3 & -6 & -3 \\ 0 & 3 & 3 \\ 0 & -3 & -3 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$

Therefore the linear system

$$(A - I)\vec{x} = \vec{0}$$

reduces to

$$x_1 - x_3 = 0, \qquad x_2 + x_3 = 0,$$

where $x_3$ is the free variable. Choosing $x_3 = 1$, we obtain an eigenvector

$$\vec{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}.$$

For $\lambda_2 = -2$,

$$A - \lambda_2 I = \begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix} - (-2) \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 3 & 3 \\ -3 & -3 & -3 \\ 3 & 3 & 3 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

The corresponding linear system is

$$x_1 + x_2 + x_3 = 0.$$

It follows that $x_2$ and $x_3$ are free variables. As before, we select $(x_2, x_3)$ to be $(1, 0)$ and $(0, 1)$. Choosing $x_2 = 1$, $x_3 = 0$ gives $x_1 = -x_2 - x_3 = -1$; choosing $x_2 = 0$, $x_3 = 1$ gives $x_1 = -x_2 - x_3 = -1$. We thus obtain two independent eigenvectors for $\lambda_2 = -2$:

$$\vec{v}_2 = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \qquad \vec{v}_3 = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.$$

Step 3. Assemble $D$ and $P$ in matching order; there are several choices of how to pair $D$ and $P$:

$$\text{Choice \#1}: \quad D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix}, \quad P = [\vec{v}_1, \vec{v}_2, \vec{v}_3] = \begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$$

$$\text{Choice \#2}: \quad D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix}, \quad P = [\vec{v}_1, \vec{v}_3, \vec{v}_2] = \begin{bmatrix} 1 & -1 & -1 \\ -1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}$$

$$\text{Choice \#3}: \quad D = \begin{bmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad P = [\vec{v}_2, \vec{v}_3, \vec{v}_1] = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{bmatrix}.$$

Remark. Not every matrix is diagonalizable. For instance, consider

$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad \det(A - \lambda I) = (\lambda - 1)^2.$$

The only eigenvalue is $\lambda = 1$, and it has multiplicity $m = 2$. From

$$A - I = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$

we see that $A - I$ has only one pivot, so $r(A - I) = 1$. By the Dimension Theorem,

$$\dim \operatorname{Null}(A - I) = 2 - r(A - I) = 2 - 1 = 1.$$

In other words, a basis of $\operatorname{Null}(A - I)$ consists of only one vector; for instance, one may choose

$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

as a basis. According to the discussion above, if $A$ were diagonalizable, i.e., $AP = PD$ for a diagonal matrix

$$D = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix},$$

then $P\vec{e}_1$ would be an eigenvector of $A$ associated with $a$, and $P\vec{e}_2$ an eigenvector of $A$ associated with $b$. Furthermore, since $P$ is invertible, $P\vec{e}_1$ and $P\vec{e}_2$ would be linearly independent (why?). But we have just shown that $A$ cannot have two linearly independent eigenvectors, so it is impossible to diagonalize $A$.
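As a numerical sanity check, here is a minimal Python/numpy sketch (my own illustration, not part of the lecture) confirming $AP = PD$ for Choice #1 of Example 12.2, and showing that the matrix in the Remark has no set of two independent eigenvectors.

```python
import numpy as np

# Matrix from Example 12.2 and the hand-computed P, D of Choice #1.
A = np.array([[1.0, 3.0, 3.0],
              [-3.0, -5.0, -3.0],
              [3.0, 3.0, 1.0]])
P = np.array([[1.0, -1.0, -1.0],
              [-1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
D = np.diag([1.0, -2.0, -2.0])

# Theorem 12.1: AP = PD with P invertible, hence P^{-1} A P = D.
assert abs(np.linalg.det(P)) > 1e-12          # det P = 1
assert np.allclose(A @ P, P @ D)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# Matrix from the Remark: the eigenvector matrix returned by np.linalg.eig
# has numerical rank 1, i.e., there is no invertible P of eigenvectors.
B = np.array([[1.0, 1.0], [0.0, 1.0]])
_, V = np.linalg.eig(B)
print(np.linalg.matrix_rank(V))  # prints 1 (< 2): B is not diagonalizable
```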
In general, if $\lambda_0$ is an eigenvalue of multiplicity $m$ (i.e., the characteristic polynomial factors as

$$\det(A - \lambda I) = (\lambda - \lambda_0)^m Q(\lambda), \qquad Q(\lambda_0) \ne 0\,),$$

then

$$\dim\left(\operatorname{Null}(A - \lambda_0 I)\right) \le m.$$

Theorem 12.2. Let $A$ be an $n \times n$ matrix with distinct real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_p$ of multiplicities $m_1, m_2, \ldots, m_p$, respectively. Then:

1. $n_i = \dim\left(\operatorname{Null}(A - \lambda_i I)\right) \le m_i$ and $m_1 + m_2 + \cdots + m_p \le n$.

2. $A$ is diagonalizable if and only if $n_i = m_i$ for each $i = 1, 2, \ldots, p$ and $m_1 + m_2 + \cdots + m_p = n$.
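To connect the theorem with computation, here is a small sketch comparing algebraic and geometric multiplicity for the two matrices seen above; the helper `multiplicities` and its tolerance are my own illustration, not from the lecture.

```python
import numpy as np

def multiplicities(A, lam, tol=1e-6):
    """Return (algebraic, geometric) multiplicity of the eigenvalue lam of A.

    Algebraic multiplicity is counted from the numerically computed spectrum;
    geometric multiplicity is dim Null(A - lam I) = n - rank(A - lam I).
    """
    n = A.shape[0]
    algebraic = int(np.sum(np.abs(np.linalg.eigvals(A) - lam) < tol))
    geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
    return algebraic, geometric

# Example 12.2: n_i = m_i for the double eigenvalue -2, so A is diagonalizable.
A = np.array([[1.0, 3.0, 3.0], [-3.0, -5.0, -3.0], [3.0, 3.0, 1.0]])
print(multiplicities(A, -2.0))  # (2, 2)

# Matrix from the Remark: geometric < algebraic, so it is not diagonalizable.
B = np.array([[1.0, 1.0], [0.0, 1.0]])
print(multiplicities(B, 1.0))   # (2, 1)
```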