Generalized Eigenvectors and Jordan Form


We have seen that an $n \times n$ matrix $A$ is diagonalizable precisely when the dimensions of its eigenspaces sum to $n$. So if $A$ is not diagonalizable, there is at least one eigenvalue with a geometric multiplicity (the dimension of its eigenspace) which is strictly less than its algebraic multiplicity. In this handout, we will discuss how one can make up for this deficiency of eigenvectors by finding what are called generalized eigenvectors, which can in turn be used to find the Jordan form of the matrix $A$.

First consider the following non-diagonalizable system.

Example 1. The matrix
$$A = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}$$
has characteristic polynomial $(\lambda - 3)^2$, so it has only one eigenvalue, $\lambda = 3$, and the corresponding eigenspace is $E_3 = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}$. Since $\dim(E_3) = 1 < 2$, the matrix $A$ is not diagonalizable.

Nevertheless, it is still possible to solve the system
$$\frac{dy}{dt} = Ay$$
without much difficulty. Writing out the two equations
$$\frac{dy_1}{dt} = 3y_1 + y_2, \qquad \frac{dy_2}{dt} = 3y_2,$$
we see that the second equation has general solution $y_2 = c_1 e^{3t}$. Plugging this into the first equation gives
$$\frac{dy_1}{dt} = 3y_1 + c_1 e^{3t}.$$
This is a first order linear equation that can be solved by the method of integrating factors. Its solution is $y_1 = c_1 t e^{3t} + c_2 e^{3t}$. Thus the general solution of the system is
$$y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} c_1 t e^{3t} + c_2 e^{3t} \\ c_1 e^{3t} \end{pmatrix} = c_1 e^{3t} \left( t \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) + c_2 e^{3t} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

The matrix $A$ in the previous example is said to be in Jordan form.

Definition 1. A $2 \times 2$ matrix of the form
$$J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$$
for some number $\lambda$ is called a Jordan form matrix.

The following theorem generalizes the calculations of the previous example.

Theorem 1. The general solution of the system
$$\frac{dy}{dt} = Jy, \qquad J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},$$
is
$$y = c_1 e^{\lambda t} \left( t \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) + c_2 e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

In general, suppose $A$ is a $2 \times 2$ matrix with a single repeated eigenvalue $\lambda$ with $\dim(E_\lambda) = 1$.
Then, as we have seen before, making the change of variables $y = Cx$ transforms the system
$$\frac{dy}{dt} = Ay$$
into the system
$$\frac{dx}{dt} = C^{-1}ACx.$$
We now want to find a matrix $C$ such that the coefficient matrix $C^{-1}AC$ for the new system is a Jordan form matrix $J$. That is, we want $AC = CJ$. Writing
$$C = \begin{pmatrix} | & | \\ v_1 & v_2 \\ | & | \end{pmatrix}, \qquad J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},$$
we have
$$AC = \begin{pmatrix} | & | \\ Av_1 & Av_2 \\ | & | \end{pmatrix}, \qquad CJ = \begin{pmatrix} | & | \\ \lambda v_1 & \lambda v_2 + v_1 \\ | & | \end{pmatrix}.$$
Therefore the columns of $C$ must satisfy
$$Av_1 = \lambda v_1, \qquad Av_2 = \lambda v_2 + v_1.$$
Thus the vector $v_1$ is an eigenvector with eigenvalue $\lambda$. Rewriting these equations as
$$(A - \lambda I)v_1 = 0, \qquad (A - \lambda I)v_2 = v_1,$$
it follows that $(A - \lambda I)^2 v_2 = (A - \lambda I)v_1 = 0$. Thus the vector $v_2$ must be in $\ker(A - \lambda I)^2$.

Definition 2. A nonzero vector $v$ which satisfies $(A - \lambda I)^p v = 0$ for some positive integer $p$ is called a generalized eigenvector of $A$ with eigenvalue $\lambda$.

The vectors $v_1$ and $v_2$ form a generalized eigenvector chain, as the following diagram illustrates:
$$v_2 \xrightarrow{\;A - \lambda I\;} v_1 \xrightarrow{\;A - \lambda I\;} 0.$$
Therefore, to find the columns of the matrix $C$ that puts $A$ in Jordan form, we must find a chain of generalized eigenvectors, as follows:

• Find a nonzero vector $v_2$ in $\ker(A - \lambda I)^2$ that is not in $\ker(A - \lambda I)$.
• Set $v_1 = (A - \lambda I)v_2$.

Having found such vectors, we can then write down the solution of the system $\frac{dy}{dt} = Ay$. From above, we know that the solution of the system $\frac{dx}{dt} = Jx$ is
$$x = c_1 e^{\lambda t} \left( t \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) + c_2 e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$
so multiplying this by $C$ we get
$$y = Cx = c_1 e^{\lambda t}(t v_1 + v_2) + c_2 e^{\lambda t} v_1.$$
In summary, we have the following result.

Theorem 2. Let $A$ be a $2 \times 2$ matrix and suppose $v_2 \to v_1 \to 0$ is a chain of generalized eigenvectors of $A$ with eigenvalue $\lambda$. Then the general solution of the system $\frac{dy}{dt} = Ay$ is
$$y(t) = c_1 e^{\lambda t}(t v_1 + v_2) + c_2 e^{\lambda t} v_1.$$

Example 2. Let
$$A = \begin{pmatrix} 2 & 1 \\ -1 & 4 \end{pmatrix}.$$
The characteristic polynomial of $A$ is $\lambda^2 - 6\lambda + 9 = (\lambda - 3)^2$, so $\lambda = 3$ is the only eigenvalue of $A$. Next, we compute
$$A - 3I = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad (A - 3I)^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
Now we choose $v_2$ to be any vector in $\ker(A - 3I)^2$ that is not in $\ker(A - 3I)$.
One such vector is $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. With this choice, we then have $v_1 = (A - 3I)v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The solution of the system $\frac{dy}{dt} = Ay$ is therefore
$$y = c_1 e^{3t} \left( t \begin{pmatrix} 1 \\ 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) + c_2 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

For larger systems it turns out that any chain of generalized eigenvectors of the coefficient matrix can be used to write down a linearly independent set of solutions.

Theorem 3. Suppose $\{v_1, v_2, \ldots, v_k\}$ is a chain of generalized eigenvectors of $A$ with eigenvalue $\lambda$. That is,
$$v_k \xrightarrow{\;A - \lambda I\;} v_{k-1} \xrightarrow{\;A - \lambda I\;} v_{k-2} \xrightarrow{\;A - \lambda I\;} \cdots \xrightarrow{\;A - \lambda I\;} v_2 \xrightarrow{\;A - \lambda I\;} v_1 \xrightarrow{\;A - \lambda I\;} 0.$$
Then
$$y_1(t) = e^{\lambda t} v_1$$
$$y_2(t) = e^{\lambda t} (v_2 + t v_1)$$
$$y_3(t) = e^{\lambda t} \left( v_3 + t v_2 + \frac{t^2}{2!} v_1 \right)$$
$$\vdots$$
$$y_k(t) = e^{\lambda t} \left( v_k + t v_{k-1} + \frac{t^2}{2!} v_{k-2} + \cdots + \frac{t^{k-2}}{(k-2)!} v_2 + \frac{t^{k-1}}{(k-1)!} v_1 \right)$$
is a linearly independent set of solutions of $\frac{dy}{dt} = Ay$.

Proof. We show that $y_k$ is a solution; the proof that the others are solutions is similar. Differentiating gives
$$\frac{dy_k}{dt} = e^{\lambda t} \left[ (\lambda v_k + v_{k-1}) + t(\lambda v_{k-1} + v_{k-2}) + \frac{t^2}{2!}(\lambda v_{k-2} + v_{k-3}) + \cdots + \frac{t^{k-2}}{(k-2)!}(\lambda v_2 + v_1) + \frac{t^{k-1}}{(k-1)!}(\lambda v_1) \right].$$
Multiplying by $A$ gives
$$A y_k = e^{\lambda t} \left[ (A v_k) + t(A v_{k-1}) + \frac{t^2}{2!}(A v_{k-2}) + \cdots + \frac{t^{k-2}}{(k-2)!}(A v_2) + \frac{t^{k-1}}{(k-1)!}(A v_1) \right].$$
The two expressions above are equal, since $A v_1 = \lambda v_1$, $A v_2 = \lambda v_2 + v_1$, and so on.

Next we show that the chain $\{v_1, v_2, \ldots, v_k\}$ is necessarily linearly independent. Suppose
$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0. \tag{1}$$
Multiplying by $(A - \lambda I)^{k-1}$ sends $v_1$ through $v_{k-1}$ to zero and sends $v_k$ to $v_1$, so we are left with $c_k v_1 = 0$. Since $v_1$ is nonzero, this implies $c_k = 0$. Similar reasoning shows that the remaining coefficients must also be zero. Thus the chain of generalized eigenvectors is linearly independent.

Now suppose $c_1 y_1(t) + c_2 y_2(t) + \cdots + c_k y_k(t) = 0$ for all $t$. Then since $y_j(0) = v_j$, at $t = 0$ this equation is precisely equation (1) above, which we just showed implies $c_1 = c_2 = \cdots = c_k = 0$.

The next theorem guarantees that we can always find enough solutions of this form to generate a fundamental set of solutions.
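Before moving on, the chain computed in Example 2 and the solution formula of Theorem 2 are easy to sanity-check numerically. The sketch below uses NumPy; the matrix, eigenvalue, and vectors are exactly those of the example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [-1.0, 4.0]])
N = A - 3 * np.eye(2)          # A - lambda*I with lambda = 3

v2 = np.array([0.0, 1.0])      # chosen in ker(N^2) but not in ker(N)
v1 = N @ v2                    # start of the chain; should be an eigenvector

# y(t) = e^{3t}(t v1 + v2) has derivative e^{3t}(3(t v1 + v2) + v1),
# so y' = A y reduces to the identity A(t v1 + v2) = 3(t v1 + v2) + v1.
t = 0.8
lhs = A @ (t * v1 + v2)
rhs = 3 * (t * v1 + v2) + v1
print(v1, np.allclose(lhs, rhs))
```

Any nonzero $t$ would do in the check, since the identity holds for all $t$ once the chain relations hold.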
Theorem 4. Let $A$ be an $n \times n$ matrix, and suppose $\lambda$ is an eigenvalue of $A$ with algebraic multiplicity $m$. Then there is some integer $p \le m$ such that $\dim(\ker(A - \lambda I)^p) = m$.

Example 3. Let
$$A = \begin{pmatrix} 5 & 1 & -4 \\ 4 & 3 & -5 \\ 3 & 1 & -2 \end{pmatrix}.$$
The characteristic polynomial of $A$ is $p_A(\lambda) = (\lambda - 2)^3$. Since
$$A - 2I = \begin{pmatrix} 3 & 1 & -4 \\ 4 & 1 & -5 \\ 3 & 1 & -4 \end{pmatrix} \xrightarrow{\text{rref}} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix},$$
the eigenspace $E_2 = \ker(A - 2I)$ has dimension 1. Next, since
$$(A - 2I)^2 = \begin{pmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{pmatrix} \xrightarrow{\text{rref}} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
the space $\ker(A - 2I)^2$ has dimension 2. Finally, since
$$(A - 2I)^3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
the space $\ker(A - 2I)^3$ has dimension 3, which matches the algebraic multiplicity of the eigenvalue $\lambda = 2$.

We can now form a chain of 3 generalized eigenvectors by choosing a vector $v_3$ in $\ker(A - 2I)^3$ and defining $v_2 = (A - 2I)v_3$ and $v_1 = (A - 2I)v_2 = (A - 2I)^2 v_3$. To ensure that $v_2$ and $v_1$ are both nonzero, we need $v_3$ to not be in $\ker(A - 2I)^2$ (which in turn implies that $v_3$ is not in $\ker(A - 2I)$). Since $\ker(A - 2I)^3 = \mathbb{R}^3$, we can choose $v_3$ to be any vector not in $\ker(A - 2I)^2$. Since the first column of $(A - 2I)^2$ is nonzero, we may choose
$$v_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \implies v_2 = (A - 2I)v_3 = \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix} \implies v_1 = (A - 2I)v_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
Then
$$y_1(t) = e^{2t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
$$y_2(t) = e^{2t} \left( t \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix} \right)$$
$$y_3(t) = e^{2t} \left( \frac{t^2}{2!} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + t \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right)$$
is a fundamental set of solutions of $\frac{dy}{dt} = Ay$.

There may in general be more than one chain of generalized eigenvectors corresponding to a given eigenvalue. Since the last vector in each chain is an eigenvector, the number of chains corresponding to an eigenvalue $\lambda$ is equal to the dimension of the eigenspace $E_\lambda$.
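The kernel-dimension computation and the chain construction of Example 3 can be reproduced in a few lines of NumPy; each kernel's dimension is obtained from the matrix rank via rank–nullity. This is a sketch of the check, not part of the hand computation above.

```python
import numpy as np

A = np.array([[5.0, 1.0, -4.0],
              [4.0, 3.0, -5.0],
              [3.0, 1.0, -2.0]])
N = A - 2 * np.eye(3)          # A - lambda*I with lambda = 2

# dim ker(N^p) = 3 - rank(N^p); as in the example these grow 1, 2, 3,
# reaching the algebraic multiplicity m = 3 at p = 3 (Theorem 4)
kernel_dims = [3 - np.linalg.matrix_rank(np.linalg.matrix_power(N, p))
               for p in (1, 2, 3)]

# Build the chain from v3 = (1, 0, 0), which lies outside ker(N^2)
v3 = np.array([1.0, 0.0, 0.0])
v2 = N @ v3
v1 = N @ v2
print(kernel_dims, v2, v1)
```

Running this reproduces the kernel dimensions $1, 2, 3$ and the chain vectors $v_2 = (3, 4, 3)^T$, $v_1 = (1, 1, 1)^T$ found above.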