
Math 314, Lecture 20: Jordan canonical form

Instructor: Tony Pantev

University of Pennsylvania

April 13, 2020

Outline

Generalized eigenvectors
Properties
Nilpotent operators
Jordan canonical form

Generalized eigenvectors (i)

Let $V$ be an $n$-dimensional vector space over $K$ and let $T : V \to V$ be a linear operator which has $n$ eigenvalues counting with multiplicities. In this situation we proved:

(1) There is a basis of $V$ in which the matrix of $T$ is upper triangular.

(2) If for each eigenvalue its algebraic multiplicity is equal to its geometric multiplicity, then $V$ has a basis of eigenvectors for $T$, and hence in this basis the matrix of $T$ is diagonal.

Generalized eigenvectors (ii)

In general $T$ will have fewer than $n$ linearly independent eigenvectors and will not be diagonalizable. Still, we can search for a basis in which the matrix of $T$ is upper triangular and as close to diagonal as possible.

Idea: Since we may not have enough eigenvectors, use generalized eigenvectors to build a basis that simplifies $T$.

Notation: To shorten the notation we will write $T - \lambda$ instead of $T - \lambda \cdot \mathrm{id}_V$, and $A - \lambda$ instead of $A - \lambda I_n$.

Generalized eigenvectors (iii)

Definition: Let $T : V \to V$ be a linear operator on a finite dimensional vector space over $K$ and let $\lambda \in K$. A generalized eigenvector with eigenvalue $\lambda$ is a vector $u \in V$ satisfying $(T - \lambda)^k u = 0$ for some $k \ge 0$. The smallest such $k$ is called the exponent of $u$.

Note:

(1) $u$ is a generalized eigenvector of exponent $0$ (for any $\lambda$) if and only if $u = 0$. Indeed, $(T - \lambda)^0 = \mathrm{id}_V$ by definition.

(2) $u$ is a generalized eigenvector of eigenvalue $\lambda$ and exponent $1$ if and only if $u$ is an eigenvector of eigenvalue $\lambda$.
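
For readers who want to experiment, here is a minimal SymPy sketch (not part of the lecture) checking the definition on a concrete $2 \times 2$ matrix; the matrix $A$ and the vector $u$ are arbitrary illustrative choices.

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [0, 2]])
lam = 2
u = Matrix([0, 1])       # not an eigenvector of A
B = A - lam * eye(2)     # the operator T - lambda

print(B * u)             # Matrix([[1], [0]]): nonzero, so the exponent of u is > 1
print(B**2 * u)          # Matrix([[0], [0]]): u is a generalized eigenvector
                         # of eigenvalue 2 with exponent 2
```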

Generalized eigenvectors (iv)

Example: Let $V$ be the $\mathbb{R}$-vector space of infinitely differentiable $\mathbb{R}$-valued functions on the real line. Let $T : V \to V$ be the operator of differentiation, that is, $T(f) = f'$. Then:
- The eigenvectors of $T$ of eigenvalue $\lambda$ are the functions of the form $a e^{\lambda x}$ for some nonzero $a \in \mathbb{R}$.
- The generalized eigenvectors of eigenvalue $\lambda$ are the functions of the form $p(x) e^{\lambda x}$ for some polynomial $p(x)$.

Moreover, the exponent of the generalized eigenvector $p(x) e^{\lambda x}$ is equal to $\deg p + 1$.

Generalized eigenvectors (v)

Claim: Let $E_\lambda \subset V$ be the subset consisting of all generalized eigenvectors of eigenvalue $\lambda$. Then
(i) $E_\lambda \subset V$ is a $T$-invariant subspace of $V$;
(ii) $E_\lambda \ne 0$ if and only if $\lambda$ is an eigenvalue of $T$.

Proof: To prove (ii) we have to show that if $T$ has a nonzero generalized eigenvector of eigenvalue $\lambda$, then $T$ has an actual eigenvector of eigenvalue $\lambda$.

Generalized eigenvectors (vi)

Let $u \in E_\lambda$, $u \ne 0$, be a generalized eigenvector of eigenvalue $\lambda$ and exponent $d > 0$. Then $(T - \lambda)^d u = 0$ but the vectors

$$u_0 := u, \quad u_1 := (T - \lambda)u, \quad \dots, \quad u_{d-1} := (T - \lambda)^{d-1}u$$

are all nonzero vectors. By definition

$$(T - \lambda)u_{d-1} = (T - \lambda)(T - \lambda)^{d-1}u = (T - \lambda)^{d}u = 0,$$

and so $u_{d-1}$ is an eigenvector of $T$ of eigenvalue $\lambda$. This proves part (ii).

Generalized eigenvectors (vii)

If $u, v \in E_\lambda$ and $k$ is bigger than the exponents of $u$ and $v$, we will have $(T - \lambda)^k u = 0$ and $(T - \lambda)^k v = 0$. In particular, for any numbers $a, b \in K$ we will have

$$(T - \lambda)^k (au + bv) = a(T - \lambda)^k u + b(T - \lambda)^k v = a \cdot 0 + b \cdot 0 = 0.$$

Thus $au + bv \in E_\lambda$, which shows that $E_\lambda$ is a subspace.

Note next that any operator $T$ commutes with itself and commutes with $\mathrm{id}_V$. Therefore $T$ and $T - \lambda = T - \lambda \cdot \mathrm{id}_V$ commute. By induction on $k$ this implies that $T$ and $(T - \lambda)^k$ commute for all $k > 0$.

Generalized eigenvectors (viii)

Finally, let $u \in E_\lambda$ be a generalized eigenvector of eigenvalue $\lambda$ and exponent $d$. Then

$$(T - \lambda)^d (Tu) = \big((T - \lambda)^d T\big)u = \big(T(T - \lambda)^d\big)u = T\big((T - \lambda)^d u\big) = T(0) = 0.$$

Therefore $Tu$ is also a generalized eigenvector of eigenvalue $\lambda$ and exponent $\le d$. Hence $Tu \in E_\lambda$, and since $u$ was arbitrary this shows that $T(E_\lambda) \subset E_\lambda$, i.e. $E_\lambda$ is $T$-invariant. This proves (i) and completes the proof of the Claim. $\square$

Properties (i)

(1) The set of generalized eigenvectors of eigenvalue $\lambda$ and exponent $\le k$ is the kernel of the operator $(T - \lambda)^k$. In particular, if $u$ has exponent $k$, the vector $(T - \lambda)u$ will have exponent $k - 1$.

(2) $E_\lambda$ is the union of the increasing sequence of subspaces

$$\ker(T - \lambda) \subset \ker(T - \lambda)^2 \subset \cdots \subset \ker(T - \lambda)^i \subset \cdots$$

of $V$. Since $V$ is finite dimensional the sequence must stabilize starting at some step, and so there will be some $m$ such that

$$E_\lambda = \ker(T - \lambda)^m.$$

The first $m$ for which this happens is called the depth of the eigenvalue $\lambda$.

Properties (ii)

(3) Choose a basis of $\ker(T - \lambda)$, complete it to a basis of $\ker(T - \lambda)^2$, and continue this way until we get a basis $e_1, \dots, e_r$ of $\ker(T - \lambda)^m = E_\lambda$.

By construction, if a vector $e_j$ in the resulting basis has exponent $k$, then $j > \dim \ker(T - \lambda)^{k-1}$.

But if $e_j$ has exponent $k$ we also have that

$$(T - \lambda)e_j \in \ker(T - \lambda)^{k-1},$$

which means that $(T - \lambda)e_j$ is a linear combination of the first $\dim \ker(T - \lambda)^{k-1}$ vectors in the basis, i.e.

$$T e_j = \lambda e_j + (\text{a linear combination of } e_i\text{'s with } i < j).$$
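
To make the filtration in (2)–(3) concrete, here is a minimal SymPy sketch (not part of the lecture; the matrix $A$ is a made-up example) that computes $\dim \ker(A - \lambda)^k$ for increasing $k$ and reads off the depth.

```python
from sympy import Matrix, eye

# A made-up 4x4 matrix with the single eigenvalue 1.
A = Matrix([[1, 1, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 0, 1]])
lam = 1
B = A - lam * eye(4)      # the operator T - lambda

# dim ker(T - lambda)^k for k = 1, 2, 3, 4
dims = [len((B**k).nullspace()) for k in range(1, 5)]
print(dims)               # [2, 4, 4, 4]: the chain stabilizes at k = 2,
                          # so lambda = 1 has depth 2 and dim E_lambda = 4
```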

Properties (iii)

Conclusion: The matrix of $T|_{E_\lambda}$ in the basis $e_1, \dots, e_r$ is upper triangular with all diagonal entries equal to $\lambda$. This has two important consequences:

(3.i) The characteristic polynomial of $T|_{E_\lambda}$ is $(t - \lambda)^r$, where $r = \dim E_\lambda$.
(3.ii) If $\mu \ne \lambda$, then $T - \mu$ is non-degenerate on $E_\lambda$.

Proof: (3.i) follows since the determinant of an upper triangular matrix is the product of its diagonal entries. Next note that if a non-zero vector $u \in E_\lambda$ is in the kernel of $T - \mu$, then $u$ is an eigenvector of $T|_{E_\lambda}$ of eigenvalue $\mu$. But by (3.i), $\lambda$ is the only eigenvalue of $T|_{E_\lambda}$, and since $\mu \ne \lambda$ this gives a contradiction. This proves (3.ii). $\square$

Properties (iv)

(4) The algebraic multiplicity of an eigenvalue $\lambda$ of a linear operator $T : V \to V$ is equal to $\dim E_\lambda$.

Proof: Let $e_1, \dots, e_r$ be a basis of $E_\lambda$ constructed as in (3). Complete it to a basis $e_1, \dots, e_n$ of $V$. Since $E_\lambda$ is $T$-invariant, it follows that the matrix of $T$ in the basis $e_1, \dots, e_n$ is the block upper triangular matrix

$$\begin{pmatrix} A & B \\ 0 & C \end{pmatrix},$$

where $A$ is an $r \times r$ block which is equal to the matrix of $T|_{E_\lambda}$ in the basis $e_1, \dots, e_r$.

Properties (v)

Thus the characteristic polynomial of $T$ is equal to

$$\det\left(tI_n - \begin{pmatrix} A & B \\ 0 & C \end{pmatrix}\right) = \det\begin{pmatrix} tI_r - A & -B \\ 0 & tI_{n-r} - C \end{pmatrix} = \det(tI_r - A)\,\det(tI_{n-r} - C) = (t - \lambda)^r \det(tI_{n-r} - C).$$

We need to show that $\lambda$ is not a root of $\det(tI_{n-r} - C)$.

Set $W = \mathrm{span}(e_{r+1}, \dots, e_n) \subset V$, and let $S : W \to W$ be the linear operator which in the basis $e_{r+1}, \dots, e_n$ is given by left multiplication by the matrix $C$.

Properties (vi)

Thus $\det(tI_{n-r} - C)$ is the characteristic polynomial of the operator $S$. If we assume that $\lambda$ is a root of $\det(tI_{n-r} - C)$, then we will be able to find a nonzero vector $w \in W$ for which $Sw = \lambda w$. On the other hand we have that $Tw = Sw + Fw$, where $F : W \to E_\lambda$ is the linear map which in the bases $e_{r+1}, \dots, e_n$ and $e_1, \dots, e_r$ is given by the matrix $B$. This gives

$$Tw = \lambda w + u,$$

where $u = Fw \in E_\lambda \subset V$.

Properties (vii)

Then $(T - \lambda)w = u \in E_\lambda$, i.e. $(T - \lambda)w$ is a generalized eigenvector of $T$ with eigenvalue $\lambda$. Explicitly this means that for some $k \ge 0$ we have

$$(T - \lambda)^k (T - \lambda)w = 0.$$

But then $w \in \ker(T - \lambda)^{k+1}$, and so $w$ is also a generalized eigenvector of $T$. This means $w \in E_\lambda \cap W = \{0\}$. This contradicts the assumption that $w$ is nonzero and proves that $\lambda$ is not an eigenvalue of $S$. $\square$
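
Here is a small SymPy sketch (an illustration, not part of the lecture) checking Property (4) on a made-up matrix: the algebraic multiplicity of each eigenvalue agrees with $\dim E_\lambda = \dim \ker(A - \lambda)^n$.

```python
from sympy import Matrix, eye, symbols, factor

t = symbols('t')

# A made-up 3x3 matrix.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
n = A.shape[0]

print(factor(A.charpoly(t).as_expr()))    # (t - 5)*(t - 2)**2

for lam, alg_mult in A.eigenvals().items():
    # dim E_lambda = dim ker(A - lambda)^n (n is at least the depth)
    dim_E = len(((A - lam * eye(n))**n).nullspace())
    print(lam, alg_mult, dim_E)           # algebraic multiplicity == dim E_lambda
```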

Properties (viii)

(5) Let $\lambda_1, \dots, \lambda_k$ be distinct eigenvalues of $T$. Then the generalized eigenspaces $E_{\lambda_1}, \dots, E_{\lambda_k}$ are linearly independent: if $v_1 \in E_{\lambda_1}, \dots, v_k \in E_{\lambda_k}$ are such that

$$v_1 + v_2 + \cdots + v_{k-1} + v_k = 0,$$

then $v_1 = \cdots = v_k = 0$.

Proof: Use induction on $k$. The statement is obvious for $k = 1$. Suppose we know that the statement holds for any $(k-1)$-tuple of distinct eigenvalues. Suppose $\lambda_1, \dots, \lambda_k$ are distinct eigenvalues of $T$ and let $v_1 \in E_{\lambda_1}, \dots, v_k \in E_{\lambda_k}$ be such that $v_1 + \cdots + v_k = 0$.

Properties (ix)

Let $d$ be the exponent of $v_k$. Applying the operator $(T - \lambda_k)^d$ to both sides of

$$v_1 + v_2 + \cdots + v_{k-1} + v_k = 0,$$

we get

$$(T - \lambda_k)^d v_1 + (T - \lambda_k)^d v_2 + \cdots + (T - \lambda_k)^d v_{k-1} = 0.$$

Since all generalized eigenspaces are invariant subspaces for $T$, it follows that $(T - \lambda_k)^d v_i$ is in $E_{\lambda_i}$. By the inductive hypothesis, if $k - 1$ generalized eigenvectors corresponding to distinct eigenvalues add up to zero, then each of the vectors must be zero.

Properties (x)

Hence we get that

$$(T - \lambda_k)^d v_1 = (T - \lambda_k)^d v_2 = \cdots = (T - \lambda_k)^d v_{k-1} = 0.$$

But by Property (3.ii) the operator $(T - \lambda_k)^d$ is non-degenerate on each of the spaces $E_{\lambda_1}, \dots, E_{\lambda_{k-1}}$. Therefore we must have that

$$v_1 = v_2 = \cdots = v_{k-1} = 0.$$

Then the original combination becomes $v_k = 0$, and so all $v_i$ are zero. This completes the proof of Property (5). $\square$

Properties (xi)

(6) Let $T : V \to V$ be a linear operator on an $n$-dimensional space and let $\det(t \cdot \mathrm{id}_V - T) = (t - \lambda_1)^{r_1} \cdots (t - \lambda_s)^{r_s}$ with distinct $\lambda_1, \dots, \lambda_s$. Then $V$ decomposes into a direct sum

$$V = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_s}.$$

Proof: By Property (5) it follows that

$$E_{\lambda_1} + \cdots + E_{\lambda_s} = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_s} \subset V.$$

By Property (4) we have $\dim E_{\lambda_i} = r_i$. Hence

$$\dim(E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_s}) = r_1 + \cdots + r_s = n = \dim V.$$

Hence $E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_s} = V$, which proves the Property. $\square$

Properties (xii)

(7) Let $T : V \to V$ be a linear operator on an $n$-dimensional space and let $\det(t \cdot \mathrm{id}_V - T) = (t - \lambda_1)^{r_1} \cdots (t - \lambda_s)^{r_s}$ with distinct $\lambda_1, \dots, \lambda_s$. Choose bases $\mathcal{B}_k$ of $E_{\lambda_k}$ and combine those bases into a basis $\mathcal{B} = \mathcal{B}_1 \cup \cdots \cup \mathcal{B}_s$ of $V$. Then the matrix $A$ of $T$ in the basis $\mathcal{B}$ is block diagonal of the form

$$A = \begin{pmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_s \end{pmatrix},$$

where $A_k \in \mathrm{Mat}_{r_k \times r_k}(K)$ is the matrix of $T|_{E_{\lambda_k}}$ in the basis $\mathcal{B}_k$.

Proof: Follows from Property (6) and the fact that each $E_{\lambda_i}$ is $T$-invariant. $\square$
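
A minimal SymPy sketch of Property (7), with a made-up $3 \times 3$ matrix: concatenating bases of the generalized eigenspaces into a change-of-basis matrix $P$ produces a block diagonal matrix $P^{-1}AP$.

```python
from sympy import Matrix, eye

# A made-up 3x3 matrix with eigenvalues 2 (multiplicity 2) and 5.
A = Matrix([[2, 1, 1],
            [0, 2, 1],
            [0, 0, 5]])
n = A.shape[0]

# Concatenate bases of the generalized eigenspaces E_lambda = ker(A - lambda)^n
cols = []
for lam in A.eigenvals():
    cols += ((A - lam * eye(n))**n).nullspace()
P = Matrix.hstack(*cols)

print(P.inv() * A * P)   # block diagonal: a 2x2 block for the eigenvalue 2 and
                         # a 1x1 block for 5 (block order follows A.eigenvals())
```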

Nilpotent operators (i)

Property (7) reduces the problem of simplifying a linear operator $T$ to the problem of simplifying the restriction of the operator to each generalized eigenspace. To analyze this better we introduce the following notion:

Definition: A linear operator $N : V \to V$ is called nilpotent if for some positive integer $k$ the power $N^k$ is zero. The smallest such $k$ is called the exponent of $N$.

Nilpotent operators (ii)

Key observation: If $\lambda$ is an eigenvalue of $T$ of depth $m$, then the space $E_\lambda$ of generalized eigenvectors of eigenvalue $\lambda$ is $E_\lambda = \ker(T - \lambda)^m$. Hence the operator

$$N = (T - \lambda)|_{E_\lambda} : E_\lambda \to E_\lambda$$

satisfies $N^m = 0$ and is therefore nilpotent. Since

$$T|_{E_\lambda} = N + \lambda \cdot \mathrm{id}_{E_\lambda},$$

our problem reduces to understanding nilpotent operators.

Nilpotent operators (iii)

Suppose now $V$ is $n$-dimensional and $N : V \to V$ is a nilpotent operator. Then by definition we have:
- Every vector $v \in V$ is a generalized eigenvector of $N$ with eigenvalue $0$.
- The exponent of a vector $v \in V$ is less than or equal to the exponent of $N$.
- There exist vectors in $V$ whose exponent is equal to the exponent of $N$.

Nilpotent operators (iv)

Lemma: Let $v \in V$ be a vector of exponent $m$. Then the vectors $v, Nv, N^2v, \dots, N^{m-1}v$ are linearly independent.

Proof: Suppose we have a non-trivial linear relation

$$c_0 v + c_1 Nv + \cdots + c_{m-1} N^{m-1} v = 0.$$

Let $c_i$ be the first non-zero coefficient in this relation. Then applying $N^{m-i-1}$ to both sides of the relation we get

$$(1) \qquad c_i N^{m-1} v + c_{i+1} N^{m} v + c_{i+2} N^{m+1} v + \cdots = 0.$$

But $N^m v = 0$ and hence (1) is equivalent to $c_i N^{m-1} v = 0$. This is a contradiction since $N^{m-1} v \ne 0$ and $c_i \ne 0$. $\square$
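
A quick SymPy check of the Lemma on a made-up nilpotent matrix (the choices of $N$ and $v$ are for illustration only):

```python
from sympy import Matrix

N = Matrix([[0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]])
v = Matrix([0, 0, 1, 1])      # a vector of exponent 3: N**2 * v != 0, N**3 * v == 0

vectors = [v, N * v, N**2 * v]
print(N**2 * v)                           # Matrix([[1], [0], [0], [0]]): nonzero
print(N**3 * v)                           # the zero vector
print(Matrix.hstack(*vectors).rank())     # 3: v, Nv, N^2 v are linearly independent
```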

Cyclic subspaces (i)

Definition: Let $v \in V$ be a vector of exponent $m$. The subspace $U = \mathrm{span}(v, Nv, N^2v, \dots, N^{m-1}v) \subset V$ is called the cyclic subspace generated by $v$.

Properties of cyclic subspaces:

(1) $U \subset V$ is an $N$-invariant subspace. Indeed, the operator $N$ maps the basis vectors $v, Nv, \dots, N^{m-2}v, N^{m-1}v$ to the vectors $Nv, N^2v, \dots, N^{m-1}v, 0$, which are all again in $U$.

(2) The restriction $N|_U$ is a nilpotent operator of exponent $m$. Indeed, $N^m$ annihilates every vector in the basis, and $N^{m-1}v \ne 0$. Thus $(N|_U)^m = 0$ but $(N|_U)^{m-1} \ne 0$.

Cyclic subspaces (ii)

Properties of cyclic subspaces:

(3) In the basis $\{N^{m-1}v, \dots, Nv, v\}$ the operator $N|_U$ is given by the matrix

$$J(0) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
0 & 0 & 0 & \ddots & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 1 \\
0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix},$$

which is called the nilpotent Jordan block of size $m$.

(4) Every vector of $U = \mathrm{span}(v, Nv, \dots, N^{m-1}v)$ which is not contained in the subspace $NU = \mathrm{span}(Nv, \dots, N^{m-1}v)$ has exponent $m$ and hence generates the whole cyclic subspace $U$ in $V$.
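
The following sketch (again with an arbitrary made-up nilpotent matrix) illustrates Property (3): writing $N$ in the ordered basis $\{N^{m-1}v, \dots, Nv, v\}$ of the cyclic subspace produces exactly the nilpotent Jordan block.

```python
from sympy import Matrix

# A made-up 3x3 nilpotent matrix of exponent 3 and a generator v.
N = Matrix([[0, 2, 1],
            [0, 0, 3],
            [0, 0, 0]])
v = Matrix([0, 0, 1])                   # exponent 3: N**2 * v != 0, N**3 * v == 0

P = Matrix.hstack(N**2 * v, N * v, v)   # the ordered basis {N^2 v, N v, v}
print(P.inv() * N * P)                  # the 3x3 nilpotent Jordan block:
                                        # [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
```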

Cyclic subspaces (iii)

Theorem 1: Let $V$ be an $n$-dimensional space and let $N : V \to V$ be a nilpotent operator. Then $V$ decomposes into a direct sum of cyclic subspaces for $N$. The number of summands is equal to the nullity of $N$.

Proof: We will argue by induction on $n$. Base: If $n = 1$ the statement is clear, since $N$ being nilpotent means that $N = 0$ in this case.

Cyclic subspaces (iv)

Step: Let $n > 1$ and suppose we have proven the statement for nilpotent operators on vector spaces of dimension $< n$. Since $N : V \to V$ is nilpotent, it has a non-trivial eigenvector of eigenvalue $0$, i.e. $\dim \ker N > 0$. Therefore $\dim \mathrm{im}\, N < n$. Choose an $(n-1)$-dimensional subspace $U \subset V$ which contains $\mathrm{im}\, N$. Then $NU \subset \mathrm{im}\, N \subset U$, and so $U$ is $N$-invariant. By the inductive hypothesis we have

$$U = U_1 \oplus \cdots \oplus U_k,$$

where all $U_i$ are cyclic subspaces for $N$.

Cyclic subspaces (v)

Choose a vector $v \notin U$. Since $Nv \in \mathrm{im}\, N \subset U = U_1 \oplus \cdots \oplus U_k$, we have $Nv = u_1 + \cdots + u_k$ with $u_i \in U_i$. If for some $i$ we have $u_i = Nv_i \in NU_i$ (with $v_i \in U_i$), then replacing $v$ by $v' = v - v_i$ does not change the property that $v' \notin U$, but ensures that the $i$-th component in the decomposition of $Nv'$ is zero while the other components remain the same. Therefore, without loss of generality we may assume that for every $i$ either $u_i = 0$ or $u_i \notin NU_i$.

Cyclic subspaces (vi)

If $u_i = 0$ for all $i = 1, \dots, k$, then $Nv = 0$ and so the cyclic subspace generated by $v$ is the line $\mathrm{span}(v)$. But

$$V = \mathrm{span}(v) \oplus U_1 \oplus \cdots \oplus U_k,$$

which provides the desired decomposition of $V$.

Suppose now that $Nv \ne 0$. Since $Nv = \sum_i u_i$, it follows that

$$(\text{exponent of } Nv) = \max_i (\text{exponent of } u_i).$$

Relabeling the $U_i$ we may assume that the exponent $m$ of $Nv$ is equal to the exponent of $u_1$. Then the exponent of $v$ will be equal to $m + 1$ and we claim that

$$V = \mathrm{span}(v, Nv, \dots, N^m v) \oplus U_2 \oplus \cdots \oplus U_k.$$

Cyclic subspaces (vii)

First note that since $u_1 \in U_1 \setminus NU_1$, the vector $u_1$ generates the whole cyclic subspace $U_1$ and hence $\dim U_1 = m$. But

$$\dim V = n = 1 + \dim U = 1 + \sum_i \dim U_i = (1 + m) + \dim U_2 + \cdots + \dim U_k.$$

So to show that $V = \mathrm{span}(v, Nv, \dots, N^m v) \oplus U_2 \oplus \cdots \oplus U_k$ it is enough to check that

$$\mathrm{span}(v, Nv, \dots, N^m v) \cap (U_2 \oplus \cdots \oplus U_k) = \{0\}.$$

Cyclic subspaces (viii)

Suppose we have a vector

$$w = c_0 v + c_1 Nv + \cdots + c_m N^m v \in U_2 \oplus \cdots \oplus U_k \subset U.$$

Since $v \notin U$ and $\mathrm{im}\, N \subset U$, we must have $c_0 = 0$. Taking the $U_1$-component of $w$ we get

$$c_1 u_1 + c_2 N u_1 + \cdots + c_m N^{m-1} u_1 = 0.$$

But $u_1, Nu_1, \dots, N^{m-1} u_1$ is a basis of $U_1$, and therefore $c_1 = \cdots = c_m = 0$. This completes the step of the induction and shows that $V$ decomposes into a direct sum of cyclic subspaces of $N$.

Cyclic subspaces (ix)

Finally, note that if $V = V_1 \oplus \cdots \oplus V_s$ is a decomposition into a direct sum of cyclic subspaces for $N$, then we have that

$$\ker N = \ker N|_{V_1} \oplus \cdots \oplus \ker N|_{V_s}.$$

But for any cyclic subspace $U = \mathrm{span}(u, Nu, \dots, N^{k-1}u)$ we have $\ker N|_U = \mathrm{span}(N^{k-1}u)$, and so $\dim \ker N|_U = 1$.

Therefore $\dim \ker N = \sum_{i=1}^{s} \dim \ker N|_{V_i} = s$, which completes the proof of the Theorem. $\square$
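
A small SymPy illustration of the last statement of Theorem 1 on a made-up nilpotent matrix: the number of Jordan blocks (equivalently, of cyclic summands) equals the nullity of $N$. This assumes SymPy's built-in jordan_form, which returns $(P, J)$ with $N = PJP^{-1}$.

```python
from sympy import Matrix

# A made-up 5x5 nilpotent matrix (cyclic summands of sizes 3 and 2).
N = Matrix([[0, 1, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0]])

nullity = len(N.nullspace())              # dim ker N
P, J = N.jordan_form()                    # SymPy convention: N == P * J * P.inv()
# Count the Jordan blocks of J: a new block starts at every column whose
# superdiagonal entry is zero, plus the very first column.
blocks = 1 + sum(1 for j in range(1, J.cols) if J[j - 1, j] == 0)
print(nullity, blocks)                    # 2 2
```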

Jordan blocks (i)

Suppose now $T : V \to V$ is any operator and $E_\lambda$ is a generalized eigenspace of $T$.

Let $U \subset E_\lambda$ be a cyclic subspace for the nilpotent operator $(T - \lambda)|_{E_\lambda}$, generated by a vector $u \in E_\lambda$ of exponent $k$. Then in the basis $(T - \lambda)^{k-1}u, \dots, (T - \lambda)u, u$ of $U$ the operator $T|_U$ has matrix

$$J(\lambda) = J(0) + \lambda I_k = \begin{pmatrix}
\lambda & 1 & 0 & \cdots & 0 & 0 \\
0 & \lambda & 1 & \cdots & 0 & 0 \\
0 & 0 & \lambda & \ddots & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
0 & 0 & 0 & \cdots & \lambda & 1 \\
0 & 0 & 0 & \cdots & 0 & \lambda
\end{pmatrix}.$$

Terminology: The matrix $J(\lambda)$ is called the $k \times k$ Jordan block of type $\lambda$.

Jordan blocks (ii)

To spell out the structure of the simplification of $T$ this gives, we introduce the following notion:

Definition: We say that an $n \times n$ matrix $J$ is a Jordan matrix, or a matrix in Jordan form, if $J$ is block diagonal of the form

$$J = \begin{pmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_\ell \end{pmatrix},$$

where each $J_i = J(\lambda_i)$ is a Jordan block of type $\lambda_i \in K$ and some size $d_i \times d_i$.

Jordan blocks (iii)

Remark:

(1) In a Jordan form $J = \mathrm{diag}(J(\lambda_1), \dots, J(\lambda_\ell))$ the numbers $\lambda_1, \dots, \lambda_\ell$ will not be distinct in general.

(2) If $J = \mathrm{diag}(J(\lambda_1), \dots, J(\lambda_\ell))$ with $J(\lambda_i)$ of size $d_i \times d_i$, then $d_1 + \cdots + d_\ell = n$, and since each Jordan block is upper triangular, the characteristic polynomial of $J$ is equal to $(t - \lambda_1)^{d_1} (t - \lambda_2)^{d_2} \cdots (t - \lambda_\ell)^{d_\ell}$.

Example: Up to reordering of the blocks, the $3 \times 3$ Jordan forms are

$$\begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}, \qquad
\begin{pmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix}, \qquad
\begin{pmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 1 \\ 0 & 0 & \lambda_1 \end{pmatrix}.$$

Jordan decomposition theorem (i)

Theorem 2: Vector space version: Let $T : V \to V$ be a linear operator on an $n$-dimensional space and suppose the characteristic polynomial of $T$ splits into a product of $n$ linear factors. Then there exists a basis $\mathcal{B}$ of $V$ in which the matrix of $T$ has Jordan form.

Matrix version: Let $A \in \mathrm{Mat}_{n \times n}(K)$ be a matrix whose characteristic polynomial splits into a product of $n$ linear factors. Then there is an invertible matrix $P$ such that $P^{-1}AP$ has Jordan form.

Proof: Follows immediately from Property (6) of generalized eigenvectors and Theorem 1. $\square$
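
In practice the Jordan form of a concrete matrix can be computed directly; here is a minimal SymPy sketch on a made-up matrix (SymPy's jordan_form returns $(P, J)$ with $A = PJP^{-1}$).

```python
from sympy import Matrix

# A made-up 3x3 matrix, not in Jordan form, whose characteristic
# polynomial (t - 2)**2 * (t - 3) splits over the rationals.
A = Matrix([[2, 1, -1],
            [0, 2,  1],
            [0, 0,  3]])

P, J = A.jordan_form()        # SymPy convention: A == P * J * P.inv()
print(J)                      # a 2x2 Jordan block for 2 and a 1x1 block for 3,
                              # up to the order of the blocks
print(P.inv() * A * P == J)   # True
```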

Jordan decomposition theorem (ii)

Combined with the fundamental theorem of algebra, the Jordan decomposition theorem implies the following:

Corollary: Vector space version: Let $T : V \to V$ be a linear operator on an $n$-dimensional complex vector space. Then there exists a basis $\mathcal{B}$ of $V$ in which the matrix of $T$ has Jordan form.

Matrix version: Let $A \in \mathrm{Mat}_{n \times n}(\mathbb{C})$ be any matrix. Then there is an invertible complex matrix $P$ such that $P^{-1}AP$ has Jordan form.

Terminology: The basis $\mathcal{B}$ in which the matrix of $T$ has Jordan form is called a Jordan basis for $T$.

Jordan decomposition theorem (iii)

From the proof of Theorem 1 we see that the construction of a Jordan basis for a nilpotent operator involves many choices, so the Jordan basis is not unique. However, it turns out that the Jordan matrix of the operator is essentially unique.

Theorem 3: Let $T : V \to V$ be a linear operator on an $n$-dimensional space and suppose the characteristic polynomial of $T$ splits into a product of $n$ linear factors. Then the Jordan matrix of $T$ is unique up to a reordering of the Jordan blocks.

Proof: Recall that the Jordan matrix for $T$ is built separately in each generalized eigenspace, and for each eigenvalue $\lambda$ of $T$, the Jordan form of $T$ on $E_\lambda$ is the same as the Jordan form of the nilpotent operator $(T - \lambda)|_{E_\lambda}$.

Jordan decomposition theorem (iv)

So to prove the theorem we have to show that if $N : V \to V$ is a nilpotent operator, then the Jordan matrix of $N$ is uniquely determined by $N$ and does not depend on choices.

To construct a Jordan basis for $N$ we took a decomposition of $V$ into a direct sum $V = U_1 \oplus \cdots \oplus U_s$ of cyclic subspaces.

The sizes of the Jordan blocks of $N$ are the dimensions of these subspaces, so we need to show that these dimensions only depend on $N$ and not on the choice of decomposition.

Jordan decomposition theorem (v)

To keep track of things, set $d_i = \dim U_i$ and, if necessary, relabel the $U_i$ so that $d_1 \ge d_2 \ge \cdots \ge d_s$. We can represent the corresponding Jordan basis of $N$ by a diagram:

[Dot diagram: one column of $d_i$ dots for each cyclic summand $U_i$, with columns arranged left to right in decreasing height, so that row $k$ contains one dot for each $U_i$ with $d_i \ge k$.]

Jordan decomposition theorem (vi)

The dots in the $i$-th column of the diagram correspond to the basis $N^{d_i - 1}u_i, \dots, Nu_i, u_i$ of the space $U_i$ (listed from top to bottom), generated by a vector $u_i$ of exponent $d_i$, and the arrows correspond to the action of $N$.

From this description we see that if we apply the operator $N$ to the diagram, it just sends everything in the first row to zero, i.e. erases the dots in the first row and keeps all the other dots. The resulting diagram is just the diagram of the Jordan basis in $\mathrm{im}\, N$. By the same reasoning, applying $N^k$ erases the top $k$ rows of the diagram and keeps the remaining dots.

Jordan decomposition theorem (vii)

Therefore we conclude that:
- the number of dots in the first row $= \dim \ker N$;
- the number of dots in the second row $= \dim \ker N^2 - \dim \ker N$;
- the number of dots in the $k$-th row $= \dim \ker N^k - \dim \ker N^{k-1}$.

This shows that the dot diagram is uniquely determined by $N$ and is independent of the choices of the cyclic subspace decomposition. This proves the Theorem. $\square$
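
The row counts above give an algorithm for recovering the block sizes of a nilpotent matrix from the dimensions $\dim \ker N^k$ alone; here is a SymPy sketch on a made-up example.

```python
from sympy import Matrix

# Recover the Jordan block sizes of a made-up nilpotent matrix from dim ker N^k.
N = Matrix([[0, 1, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0]])
n = N.shape[0]

kernel_dims = [0] + [len((N**k).nullspace()) for k in range(1, n + 1)]
# number of dots in row k of the diagram:
rows = [kernel_dims[k] - kernel_dims[k - 1] for k in range(1, n + 1)]
# the number of blocks of size >= k equals the number of dots in row k,
# so the block sizes are the column heights of the dot diagram:
block_sizes = [sum(1 for r in rows if r >= j + 1) for j in range(rows[0])]
print(rows)         # [2, 2, 1, 0, 0]
print(block_sizes)  # [3, 2]
```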
