Jordan Canonical Form


Roberto Zanasi, System Theory, A.A. 2015/2016 (Chapter 2: Canonical Forms)

Diagonalizable matrices

• If a matrix A has n real and distinct eigenvalues $\lambda_i$, then it also has n real eigenvectors $v_i$ which are linearly independent:
$$A v_i = \lambda_i v_i, \qquad i \in \{1, 2, 3, \ldots, n\}.$$
In this case the matrix A can be diagonalized using a transformation matrix T having the eigenvectors $v_i$ as its columns:
$$T = [\, v_1 \;\; v_2 \;\; v_3 \;\; \ldots \;\; v_n \,].$$
The transformed matrix $\bar{A} = T^{-1} A T$ is diagonal, and the coefficients on its diagonal are the eigenvalues $\lambda_i$ of matrix A:
$$\bar{A} = T^{-1} A T = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}.$$

• The order of the eigenvalues $\lambda_i$ on the diagonal of $\bar{A}$ is equal to the order of the corresponding eigenvectors $v_i$ within the matrix T.

• A matrix A can be diagonalized if and only if it is possible to find a number of linearly independent eigenvectors $v_i$ equal to the dimension n of matrix A.

• Matrices which are not diagonalizable can still be transformed into the Jordan canonical form, which is the canonical form closest to a diagonal one.

• A matrix A can also be diagonalized when its eigenvalues $\lambda_i$ are complex conjugate. In this case the eigenvectors $v_i$ are complex conjugate and the transformation matrix T is complex.

Jordan canonical form

• Let $\lambda_i$, for $i = 1, \ldots, h$, be the distinct eigenvalues of matrix A and let $r_i$ be the corresponding multiplicities within the characteristic polynomial:
$$\Delta_A(\lambda) = (\lambda - \lambda_1)^{r_1} (\lambda - \lambda_2)^{r_2} \cdots (\lambda - \lambda_h)^{r_h}.$$

• A matrix A transformed into the Jordan canonical form $\bar{A}$ has the following block-diagonal structure:
$$\bar{A} = T^{-1} A T = \begin{bmatrix} J_1 & 0 & \ldots & 0 \\ 0 & J_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & J_h \end{bmatrix}.$$

• To each distinct eigenvalue $\lambda_i$ of matrix A corresponds a Jordan block $J_i$ of dimension equal to the algebraic multiplicity $r_i$ of the eigenvalue $\lambda_i$, that is, the multiplicity of $\lambda_i$ within the characteristic polynomial $\Delta_A(\lambda)$.

• Each Jordan block $J_i$ itself has the structure of a block-diagonal matrix:
$$J_i = \begin{bmatrix} J_{i,1} & 0 & \ldots & 0 \\ 0 & J_{i,2} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & J_{i,m_i} \end{bmatrix}, \qquad \dim J_i = r_i, \quad i = 1, \ldots, h.$$

• On the diagonal of the Jordan block $J_i$ there are $m_i$ Jordan miniblocks $J_{i,j}$, where $m_i$ is the geometric multiplicity of the eigenvalue $\lambda_i$, that is, the number of linearly independent eigenvectors $v_{i,j}$ associated with the eigenvalue $\lambda_i$.

• All the Jordan miniblocks $J_{i,j}$ have the following structure:
$$J_{i,j} = \begin{bmatrix} \lambda_i & 1 & 0 & \ldots & 0 & 0 \\ 0 & \lambda_i & 1 & \ldots & 0 & 0 \\ 0 & 0 & \lambda_i & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & \lambda_i & 1 \\ 0 & 0 & 0 & \ldots & 0 & \lambda_i \end{bmatrix}, \qquad \dim J_{i,j} = \nu_{i,j}, \quad j = 1, \ldots, m_i.$$

• The following relations hold:
$$n = \sum_{i=1}^{h} r_i, \qquad r_i = \sum_{j=1}^{m_i} \nu_{i,j}.$$

• The matrix A is diagonalizable if and only if the dimension $\nu_{i,j}$ of all the Jordan miniblocks $J_{i,j}$ is unitary: $\nu_{i,j} = 1$.

• In this case all the Jordan miniblocks $J_{i,j} = \lambda_i$ reduce to the corresponding eigenvalue $\lambda_i$, and all the Jordan blocks $J_i$ are diagonal, each characterized by a single repeated eigenvalue $\lambda_i$:
$$J_{i,j} = \lambda_i, \qquad J_i = \begin{bmatrix} \lambda_i & 0 & \ldots & 0 \\ 0 & \lambda_i & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \lambda_i \end{bmatrix}, \qquad \dim J_i = r_i, \quad i = 1, \ldots, h.$$

• A matrix A is diagonalizable if and only if the algebraic multiplicity $r_i$ of each eigenvalue $\lambda_i$ is equal to the geometric multiplicity $m_i$, that is, if the number $m_i$ of linearly independent eigenvectors $v_{i,j}$ associated with each eigenvalue $\lambda_i$ is equal to the algebraic multiplicity $r_i$ of $\lambda_i$ within the characteristic polynomial $\Delta_A(\lambda)$ (a small computational check is sketched below).
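The condition $r_i = m_i$ can be checked directly. The following sketch is not part of the original notes: it uses a small hypothetical test matrix and assumes MATLAB's Symbolic Math Toolbox.

---------------------------------------------------------------------------
% Hypothetical 3x3 test matrix: lambda = 2 is a double eigenvalue
A  = sym([2 1 0;
          0 2 0;
          0 0 3]);
n  = size(A, 1);
ev = eig(A);                  % eigenvalues, repeated per algebraic multiplicity
lam = unique(ev);             % distinct eigenvalues lambda_i
for i = 1:length(lam)
    r = nnz(isAlways(ev == lam(i)));      % algebraic multiplicity r_i
    m = n - rank(A - lam(i)*eye(n));      % geometric multiplicity m_i
    fprintf('lambda = %s:  r_i = %d,  m_i = %d\n', char(lam(i)), r, m);
end
% lambda = 2 gives r_i = 2 but m_i = 1, so this A is not diagonalizable:
% its Jordan form contains a single 2x2 miniblock for lambda = 2.
---------------------------------------------------------------------------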
Example. Matrix in Jordan canonical form:
$$\bar{A} = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix} = \begin{bmatrix} -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & -3 & 1 & 0 \\ 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & -3 \end{bmatrix}, \qquad \Delta_A(\lambda) = (\lambda + 1)^2 (\lambda + 3)^3,$$
with $n = 5$, $h = 2$, $\lambda_1 = -1$, $\lambda_2 = -3$, $r_1 = 2$, $r_2 = 3$, $m_1 = 1$, $m_2 = 2$. Here $J_1 = J_{1,1}$ is the upper 2x2 block, while $J_2$ is composed of $J_{2,1}$ (the 2x2 block with eigenvalue $-3$) and $J_{2,2} = -3$.

The matrix $\bar{A}$ has two distinct eigenvalues, $\lambda_1 = -1$ and $\lambda_2 = -3$. The eigenvalue $\lambda_1$ has algebraic multiplicity $r_1 = 2$ and geometric multiplicity $m_1 = 1$. The eigenvalue $\lambda_2$ has algebraic multiplicity $r_2 = 3$ and geometric multiplicity $m_2 = 2$. The Jordan block $J_1$ has dimension $r_1 = 2$ and is composed of only one miniblock, $J_{1,1}$. The second Jordan block $J_2$ has dimension $r_2 = 3$ and is composed of $m_2 = 2$ miniblocks, $J_{2,1}$ and $J_{2,2}$.

• The $m_i$ linearly independent eigenvectors $v_{i,j}$ associated with the eigenvalue $\lambda_i$ can be determined by solving the following homogeneous linear system:
$$(\lambda_i I - A)\, v_{i,j} = 0, \qquad j = 1, \ldots, m_i.$$

• The number $m_i$ of linearly independent eigenvectors $v_{i,j}$ is equal to the number of miniblocks $J_{i,j}$ present within the Jordan block $J_i$.

• In the case $m_i < r_i$, the number of linearly independent eigenvectors $v_{i,j}$ is not sufficient for diagonalizing the matrix. In this case the chains $v_{i,j}^{(k)}$ of generalized eigenvectors, for $i = 1, \ldots, h$, $j = 1, \ldots, m_i$ and $k = 1, \ldots, \nu_{i,j}$, must be determined.

• The generalized eigenvectors $v_{i,j}^{(k)}$ can be determined by solving the following linear equations:
$$(A - \lambda_i I)\, v_{i,j}^{(2)} = v_{i,j}^{(1)} = v_{i,j}$$
$$(A - \lambda_i I)\, v_{i,j}^{(3)} = v_{i,j}^{(2)}$$
$$\vdots$$
$$(A - \lambda_i I)\, v_{i,j}^{(\nu_{i,j})} = v_{i,j}^{(\nu_{i,j} - 1)}$$
The first eigenvector $v_{i,j}^{(1)} = v_{i,j}$ is known. Solving the first equation one obtains the generalized eigenvector $v_{i,j}^{(2)}$. Substituting $v_{i,j}^{(2)}$ and solving the next equation one obtains the generalized eigenvector $v_{i,j}^{(3)}$, and so on (a worked sketch is given below).

• The particular "almost diagonal" form of the Jordan miniblocks $J_{i,j}$ is obtained by inserting the chains of generalized eigenvectors $v_{i,j}^{(k)}$ as columns of the transformation matrix T:
$$T = \left[\; \ldots \;\; v_{i,j}^{(1)} \;\; v_{i,j}^{(2)} \;\; \ldots \;\; v_{i,j}^{(\nu_{i,j})} \;\; \ldots \;\right].$$

• If the geometric multiplicity $m_i$ is equal to the algebraic multiplicity $r_i$, that is if $m_i = r_i$, each chain of generalized eigenvectors $v_{i,j}^{(k)}$ has unit length and is formed by only one eigenvector $v_{i,j}$.
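The chain construction can be illustrated on a small case. The sketch below (a hypothetical 2x2 matrix, not the 5x5 example above; Symbolic Math Toolbox assumed) builds one chain of length two and verifies that it brings A into Jordan form:

---------------------------------------------------------------------------
% lambda = -1 has r = 2 but m = 1: the single eigenvector v1 must be
% extended by one generalized eigenvector v2 solving (A + I) v2 = v1.
A  = sym([ 0  1;
          -1 -2 ]);
v1 = sym([1; -1]);               % eigenvector:             (A + I) v1 = 0
v2 = sym([1;  0]);               % generalized eigenvector: (A + I) v2 = v1
isAlways((A + eye(2))*v1 == 0)   % --> true
isAlways((A + eye(2))*v2 == v1)  % --> true
T  = [v1, v2];                   % the chain enters T as consecutive columns
J  = inv(T)*A*T                  % Jordan form: [-1 1; 0 -1]
---------------------------------------------------------------------------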
• Numeric example in Matlab:

---------------------------------------------------------------------------
clc; echo on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Matrix to be transformed into Jordan canonical form
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A = sym([...
      73/129,    -18/43,     1/129,   383/129,  -1213/258;
    2610/989, -2419/989,  -687/989,  1759/989, -6889/1978;
    2885/989,   655/989, -4039/989,  2310/989,  -3905/989;
    2273/989, -2079/989,  1565/989,  -326/989, -5915/1978;
   4456/2967, -3052/989, 9190/2967, 4778/2967, -13964/2967]);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The command "jordan(A)" transforms matrix A into Jordan canonical form
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[V,AJ] = jordan(A);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AJ   %%% Jordan canonical form of matrix A
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AJ =
[ -3,  1,  0,  0,  0]
[  0, -3,  0,  0,  0]
[  0,  0, -1,  1,  0]
[  0,  0,  0, -1,  0]
[  0,  0,  0,  0, -3]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
V    %%% Matrix of generalized eigenvectors of matrix A
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
V =
[ -522/989,   -242/989, 4760/2967,  1231/989, 6450201/4106690]
[ -406/989, -3334/2967, 2380/2967, 3334/2967,  -142348/293335]
[ -290/989, -3275/2967, 2975/2967, 3275/2967, -903187/1642676]
[ -348/989,   -418/989,  1785/989,   418/989, -186498/2053345]
[ -580/989,  -718/2967, 4760/2967,  718/2967,  966065/821338]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Verification: the columns of V are the generalized eigenvectors of A
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
(A-(-3)*eye(5))*V(:,1)==0        % --> true
(A-(-3)*eye(5))*V(:,2)==V(:,1)   % --> true
(A-(-1)*eye(5))*V(:,3)==0        % --> true
(A-(-1)*eye(5))*V(:,4)==V(:,3)   % --> true
(A-(-3)*eye(5))*V(:,5)==0        % --> true
---------------------------------------------------------------------------

• The free evolution of a discrete-time linear system is:
$$x(k) = A^k x_0 = (T \bar{A} T^{-1})^k x_0 = T \bar{A}^k T^{-1} x_0 = T \begin{bmatrix} J_1^k & 0 & \ldots & 0 \\ 0 & J_2^k & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & J_h^k \end{bmatrix} T^{-1} x_0.$$

• The free evolution of a continuous-time linear system is:
$$x(t) = e^{A t} x_0 = e^{(T \bar{A} T^{-1}) t} x_0 = T e^{\bar{A} t} T^{-1} x_0 = T \begin{bmatrix} e^{J_1 t} & 0 & \ldots & 0 \\ 0 & e^{J_2 t} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & e^{J_h t} \end{bmatrix} T^{-1} x_0.$$

• The power $A^k$ and the exponential $e^{A t}$ of matrix A can therefore be determined once it is known how to compute the power $J^k$ and the exponential $e^{J t}$ of a generic Jordan miniblock J of dimension $\nu$:
$$J = \begin{bmatrix} \lambda & 1 & 0 & \ldots & 0 & 0 \\ 0 & \lambda & 1 & \ldots & 0 & 0 \\ 0 & 0 & \lambda & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & \lambda & 1 \\ 0 & 0 & 0 & \ldots & 0 & \lambda \end{bmatrix} = \lambda I + N.$$

• The matrix J can be expressed as the sum of the diagonal matrix $\lambda I$ and the nilpotent matrix N, which has nonzero (unitary) elements only on the first superdiagonal. In the case $\nu = 5$:
$$N = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

• The powers of matrix N have the following structure:
$$N^2 = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad N^3 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad \ldots$$
that is, the matrix $N^k$ has nonzero elements only on the k-th superdiagonal.

• The matrix N is nilpotent of order $\nu$: it satisfies the relation
$$N^{\nu} = 0, \qquad \text{where } \nu = \dim N.$$

• The k-th power of matrix J can then be expressed through the binomial expansion, which terminates after at most $\nu$ terms since $N^{\nu} = 0$:
$$J^k = (\lambda I + N)^k = \lambda^k I + \binom{k}{1} \lambda^{k-1} N + \binom{k}{2} \lambda^{k-2} N^2 + \ldots + \binom{k}{\nu - 1} \lambda^{k - \nu + 1} N^{\nu - 1}.$$
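The truncation of this expansion after $\nu$ terms can be verified numerically. A minimal sketch, not taken from the original notes (the values lambda = -2, nu = 3, k = 6 are illustrative choices):

---------------------------------------------------------------------------
lambda = -2;  nu = 3;  k = 6;
N  = diag(ones(nu-1, 1), 1);   % nilpotent part: 1's on the first superdiagonal
J  = lambda*eye(nu) + N;       % generic Jordan miniblock of dimension nu
Jk = zeros(nu);
for p = 0:nu-1                 % the sum stops at N^(nu-1) because N^nu = 0
    Jk = Jk + nchoosek(k, p) * lambda^(k-p) * N^p;
end
isequal(Jk, J^k)               % --> true: the truncated binomial sum equals J^k
---------------------------------------------------------------------------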