
LECTURE NOTES ON GENERALIZED EIGENVECTORS FOR SYSTEMS WITH REPEATED EIGENVALUES

Date: 14.03.2011.

We consider a matrix $A \in \mathbb{C}^{n \times n}$. The characteristic polynomial $P(\lambda) = |\lambda I - A|$ admits in general $p$ complex roots $\lambda_1, \lambda_2, \dots, \lambda_p$ with $p \le n$. Each root has a multiplicity, which we denote by $k_i$, and $P(\lambda)$ can be factored as

$$P(\lambda) = \prod_{i=1}^{p} (\lambda - \lambda_i)^{k_i}.$$

The sum of the multiplicities of all the eigenvalues equals the degree of the polynomial, that is,

$$\sum_{i=1}^{p} k_i = n.$$

Let $E_i$ be the subspace of eigenvectors associated to the eigenvalue $\lambda_i$, that is,

$$E_i = \{ u \in \mathbb{C}^n \text{ such that } Au = \lambda_i u \}.$$

Theorem 1 (from linear algebra). The dimension of $E_i$ is at most the multiplicity of $\lambda_i$, that is,

$$\dim(E_i) \le k_i.$$

If $\dim(E_i) = k_i$ for all $i \in \{1, \dots, p\}$, then we can find a basis of $k_i$ independent eigenvectors for each $\lambda_i$, which we denote by $u^i_1, u^i_2, \dots, u^i_{k_i}$. Since $\sum_{i=1}^{p} k_i = n$, we finally get $n$ linearly independent eigenvectors (eigenvectors with distinct eigenvalues are automatically independent). Therefore the matrix $A$ is diagonalizable and we can solve the system $\frac{dY}{dt} = AY$ by using the basis of eigenvectors. The general solution is given by

(1)  $Y(t) = \sum_{i=1}^{p} e^{\lambda_i t}\,\bigl( a_{1,i} u^i_1 + a_{2,i} u^i_2 + \dots + a_{k_i,i} u^i_{k_i} \bigr)$

for any constant coefficients $a_{j,i}$.

Example: We consider the matrix

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & -2 & 0 \end{pmatrix}.$$

The characteristic polynomial is $P(\lambda) = (\lambda - 2)(\lambda - 1)^2$ and we have two eigenvalues, $\lambda_1 = 2$ (with multiplicity 1) and $\lambda_2 = 1$ (with multiplicity 2).

We compute the eigenvectors for $\lambda_1 = 2$. We have to solve

$$\begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & -2 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$$

It yields two independent relations (hence the dimension of $E_1$ is $n - 2 = 1$), namely

$$x = 0, \qquad y + z = 0.$$

Thus, $u = (0, 1, -1)$ is an eigenvector for $\lambda_1 = 2$.

We compute the eigenvectors for $\lambda_2 = 1$. We have to solve

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & -2 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$$

It yields one independent relation, namely $2y + z = 0$. Hence, the dimension of $E_2$ is equal to $n - 1 = 2$. The two independent vectors

$$u^2_1 = (1, 0, 0), \qquad u^2_2 = (0, 1, -2)$$

form a basis for $E_2$. Finally, the general solution of $\frac{dY}{dt} = AY$ is given by

$$Y(t) = a_1 e^{2t} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + a_2 e^{t} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + a_3 e^{t} \begin{pmatrix} 0 \\ 1 \\ -2 \end{pmatrix}.$$

End of example.
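The computations in this example are easy to double-check with a computer algebra system. The following is only a small sketch using Python with the SymPy library (assuming it is available): it recomputes the characteristic polynomial, the eigenspaces, and the diagonalizability of $A$.

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0],
               [0, 3, 1],
               [0, -2, 0]])

lam = sp.symbols('lam')
# Characteristic polynomial det(lam*I - A), factored
print(sp.factor(A.charpoly(lam).as_expr()))   # (lam - 2)*(lam - 1)**2

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis of E_i)
for ev, mult, basis in A.eigenvects():
    print(ev, mult, len(basis))   # dim(E_i) equals the multiplicity here

# Since dim(E_i) = k_i for every eigenvalue, A is diagonalizable
print(A.is_diagonalizable())      # True
```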
If for some $i$, $\dim(E_i) < k_i$, then we cannot find $k_i$ independent eigenvectors in $E_i$. We say that the eigenvalue $\lambda_i$ is incomplete. In this case, we are not able to find $n$ linearly independent eigenvectors and cannot get an expression like (1) for the solution of the ODE. We have to use generalized eigenvectors.

Example: We consider

$$A = \begin{pmatrix} -2 & 1 \\ 0 & -2 \end{pmatrix}.$$

The characteristic polynomial is $P(\lambda) = (\lambda + 2)^2$ and there is one eigenvalue $\lambda_1 = -2$ with multiplicity 2. We compute the eigenvectors. We have to solve

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0.$$

It yields one independent relation, namely $y = 0$, and therefore the dimension of $E_1$ is 1 and $A$ is not diagonalizable. An eigenvector is given by $u_1 = (1, 0)$.

We know that $Y_1(t) = e^{\lambda_1 t} u_1$ is a solution. Let us look for solutions of the type

$$Y(t) = t e^{\lambda t} u + e^{\lambda t} v$$

for two unknown vectors $u$ and $v$ different from zero. Such a $Y$ is a solution if and only if

$$e^{\lambda t} u + \lambda t e^{\lambda t} u + \lambda e^{\lambda t} v = t e^{\lambda t} A u + e^{\lambda t} A v$$

for all $t$. It implies that we must have

(2)  $Au = \lambda u,$
(3)  $Av = u + \lambda v.$

The first equality implies (because we want $u \ne 0$) that $u$ is an eigenvector and $\lambda$ is an eigenvalue. We take $\lambda = \lambda_1$ and $u = u_1$. Let us compute $v$. We have to solve

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

which yields $y = 1$. We take (for example) $v = (0, 1)$ and we have that

$$Y_2(t) = t e^{\lambda_1 t} u_1 + e^{\lambda_1 t} v = t e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{-2t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

is a solution. Since $Y_1(0)$ and $Y_2(0)$ are independent, a general solution is given by

$$Y(t) = a_1 e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + a_2 \left( t e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{-2t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) = \begin{pmatrix} a_1 e^{-2t} + a_2 t e^{-2t} \\ a_2 e^{-2t} \end{pmatrix}$$

for any constants $a_1, a_2$ in $\mathbb{R}$. End of example.

In the previous example, note that $v$ satisfies

(4)  $(A - \lambda I)^2 v = 0$ and $(A - \lambda I) v \ne 0.$

A vector which satisfies (4) is a generalized eigenvector. Let us now give the general definition of such vectors.

Definition 2. For a given eigenvalue $\lambda$, the vector $u$ is a generalized eigenvector of rank $r$ if

$$(A - \lambda I)^r u = 0, \qquad (A - \lambda I)^{r-1} u \ne 0.$$

Remark: An eigenvector is a generalized eigenvector of rank 1. Indeed, we have $(A - \lambda I) u = 0$ and $u \ne 0$.

Given a generalized eigenvector $u$ of rank $r$, let us define the vectors $v_1, \dots, v_r$ as follows:

(5)  $v_r = (A - \lambda I)^0 u = u,$
     $v_{r-1} = (A - \lambda I)^1 u,$
     $\;\;\vdots$
     $v_1 = (A - \lambda I)^{r-1} u.$

Note that $v_1$ is an eigenvector, as $v_1 \ne 0$ and $(A - \lambda I) v_1 = (A - \lambda I)^r u = 0$. The vectors $v_1, \dots, v_r$ form a chain of generalized eigenvectors of length $r$.

Definition 3. Given an eigenvalue $\lambda$, we say that $v_1, v_2, \dots, v_r$ form a chain of generalized eigenvectors of length $r$ if $v_1 \ne 0$ and

(6)  $v_{r-1} = (A - \lambda I) v_r,$
     $v_{r-2} = (A - \lambda I) v_{r-1},$
     $\;\;\vdots$
     $v_1 = (A - \lambda I) v_2,$
     $0 = (A - \lambda I) v_1.$

Remark: Given a chain $\{v_i\}_{i=1}^r$, the first element, that is, $v_1$, is always an eigenvector. By using the definition (6), we get that

(7)  $(A - \lambda I)^{i-1} v_i = v_1$

and therefore the element $v_i$ is a generalized eigenvector of rank $i$.

Theorem 4. The vectors in a chain of generalized eigenvectors are linearly independent.

Proof. We consider the linear combination

(8)  $\sum_{i=1}^{r} a_i v_i = 0.$

By using the definition (6), we get that

$$v_i = (A - \lambda I)^{r-i} v_r,$$

so that (8) is equivalent to

(9)  $\sum_{i=1}^{r} a_i (A - \lambda I)^{r-i} v_r = 0.$

We want to prove that all the $a_i$ are equal to zero. We are going to use the fact that

(10)  $(A - \lambda I)^m v_r = 0$ for all $m \ge r.$

Indeed,

$$(A - \lambda I)^m v_r = (A - \lambda I)^{m-r} (A - \lambda I)^r v_r = (A - \lambda I)^{m-r} (A - \lambda I) v_1 = 0.$$

Now, we apply $(A - \lambda I)^{r-1}$ to (9) and get

(11)  $\sum_{i=1}^{r} a_i (A - \lambda I)^{2r-i-1} v_r = 0.$

Since $(A - \lambda I)^{2r-i-1} v_r = 0$ for $i \le r - 1$, the equation (11) simplifies and we get

$$a_r (A - \lambda I)^{r-1} v_r = a_r v_1 = 0.$$

Hence, $a_r = 0$ because $v_1 \ne 0$. Now that we know that $a_r = 0$, equation (9) rewrites as

(12)  $\sum_{i=1}^{r-1} a_i (A - \lambda I)^{r-i} v_r = 0.$

We apply $(A - \lambda I)^{r-2}$ to (12) and obtain that

$$\sum_{i=1}^{r-1} a_i (A - \lambda I)^{2r-i-2} v_r = a_{r-1} (A - \lambda I)^{r-1} v_r = a_{r-1} v_1 = 0,$$

because $(A - \lambda I)^{2r-i-2} v_r = 0$ for $i \le r - 2$. Therefore, $a_{r-1} = 0$. We proceed recursively with the same argument and prove that all the $a_i$ are equal to zero, so that the vectors $v_i$ are linearly independent. □
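The chain construction (5), the rank statement (7), and the independence of Theorem 4 can all be verified on a small example. The sketch below uses Python with SymPy (assumed available) and a $3 \times 3$ test matrix chosen purely for illustration, not taken from these notes: it builds a chain from a generalized eigenvector of rank 3 and checks the ranks and the independence.

```python
import sympy as sp

lam = -2
# Illustrative test matrix (a single Jordan block); not from the notes.
A = sp.Matrix([[-2, 1, 0],
               [0, -2, 1],
               [0, 0, -2]])
N = A - lam * sp.eye(3)   # N = A - lam*I

# u is a generalized eigenvector of rank r = 3:
# (A - lam*I)^3 u = 0 while (A - lam*I)^2 u != 0.
u = sp.Matrix([0, 0, 1])
r = 3
assert (N**r) * u == sp.zeros(3, 1)
assert (N**(r - 1)) * u != sp.zeros(3, 1)

# Build the chain as in (5): v_r = u, v_{r-1} = N u, ..., v_1 = N^{r-1} u.
chain = [(N**(r - i)) * u for i in range(1, r + 1)]   # chain[0] = v_1, chain[-1] = v_r

# v_1 is an ordinary eigenvector and each v_i has rank i, cf. (7).
assert N * chain[0] == sp.zeros(3, 1)
for i in range(1, r + 1):
    assert (N**i) * chain[i - 1] == sp.zeros(3, 1)
    assert (N**(i - 1)) * chain[i - 1] != sp.zeros(3, 1)

# The chain is linearly independent (Theorem 4): stacking the vectors gives full rank.
print(sp.Matrix.hstack(*chain).rank())   # prints 3
```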
A chain of generalized eigenvectors allows us to construct solutions of the system of ODEs. Indeed, we have

Theorem 5. Given a chain of generalized eigenvectors of length $r$, we define

$$X_1(t) = v_1 e^{\lambda t},$$
$$X_2(t) = (t v_1 + v_2)\, e^{\lambda t},$$
$$X_3(t) = \left( \frac{t^2}{2} v_1 + t v_2 + v_3 \right) e^{\lambda t},$$
$$\;\;\vdots$$
$$X_r(t) = \left( \frac{t^{r-1}}{(r-1)!} v_1 + \dots + \frac{t^2}{2} v_{r-2} + t v_{r-1} + v_r \right) e^{\lambda t}.$$

The functions $\{X_i(t)\}_{i=1}^{r}$ form $r$ linearly independent solutions of $\frac{dX}{dt} = AX$.

Proof. We have

$$X_j(t) = e^{\lambda t} \left( \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i \right).$$

We use the convention that $v_0 = 0$ and, with this convention, we can check from (6) that $A v_i = v_{i-1} + \lambda v_i$ for $i \in \{1, \dots, r\}$. We have, on one hand,

$$\dot{X}_j(t) = e^{\lambda t} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!} v_i + e^{\lambda t} \lambda \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i$$

and, on the other hand,

$$A X_j(t) = e^{\lambda t} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} A v_i = e^{\lambda t} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} (v_{i-1} + \lambda v_i)$$
$$= e^{\lambda t} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_{i-1} + e^{\lambda t} \lambda \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i = e^{\lambda t} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!} v_i + e^{\lambda t} \lambda \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!} v_i.$$

Hence, $X_j$ is a solution. To prove that the $X_i(t)$ are independent, it is enough to prove that the $X_i(0) = v_i$ are independent. This follows from Theorem 4. □

Conclusion: A chain of generalized eigenvectors of length $r$ gives us $r$ independent solutions.

It turns out that there exist enough chains of generalized eigenvectors to obtain a complete set of independent solutions. This is the content of the following theorem.
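Before turning to that theorem, the solution formulas of Theorem 5 can be sanity-checked symbolically. The sketch below continues the illustrative $3 \times 3$ chain from the previous snippet (again assuming SymPy is available; the matrix and chain are only a test case) and verifies that each $X_j$ satisfies $\dot{X}_j = A X_j$.

```python
import sympy as sp

t = sp.symbols('t')
lam = -2
A = sp.Matrix([[-2, 1, 0],
               [0, -2, 1],
               [0, 0, -2]])
# Chain v_1, v_2, v_3 from the previous sketch.
v = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]

def X(j):
    """X_j(t) = e^{lam t} * sum_{i=1}^{j} t^{j-i}/(j-i)! * v_i, as in Theorem 5."""
    s = sum(((t**(j - i) / sp.factorial(j - i)) * v[i - 1] for i in range(1, j + 1)),
            sp.zeros(3, 1))
    return sp.exp(lam * t) * s

# Check that each X_j solves dX/dt = A X.
for j in range(1, 4):
    Xj = X(j)
    assert sp.simplify(Xj.diff(t) - A * Xj) == sp.zeros(3, 1)
print("all chain solutions satisfy X' = A X")
```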