
Notation for the n × n Jordan block: Jλ(n). A block diagonal matrix of such blocks: Jλ(n1, n2, . . . , nk).
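To make the notation concrete, here is a small numpy sketch (the helper names jordan_block and jordan_blocks are mine, not from the text) that builds Jλ(n) with λ on the diagonal and 1 on the superdiagonal, and assembles Jλ(n1, . . . , nk) as a block diagonal matrix of such blocks.

```python
import numpy as np

def jordan_block(lam, n):
    """The n x n Jordan block J_lambda(n): lam on the diagonal,
    1 on the superdiagonal, 0 elsewhere."""
    return lam * np.eye(n) + np.eye(n, k=1)

def jordan_blocks(lam, sizes):
    """The block diagonal matrix J_lambda(n1, ..., nk)."""
    total = sum(sizes)
    J = np.zeros((total, total))
    pos = 0
    for n in sizes:
        J[pos:pos + n, pos:pos + n] = jordan_block(lam, n)
        pos += n
    return J
```

For example, jordan_block(2.0, 3) is the 3 × 3 matrix with 2 on the diagonal and 1 just above it.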

Property of a linear transformation f with matrix representation J0(n): if the basis is {v1, v2, . . . , vn} then we have

f(vn) = vn−1, f^2(vn) = vn−2, . . . , f^{n−1}(vn) = v1, f^n(vn) = 0.
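This shift action can be checked numerically. In the sketch below (my own illustration, not part of the text), the standard basis vectors e1, . . . , en play the role of v1, . . . , vn; the check also confirms that J0(n)^n = 0.

```python
import numpy as np

n = 4
J0 = np.eye(n, k=1)   # J_0(n): ones on the superdiagonal
e = np.eye(n)         # e[:, j] is the basis vector v_{j+1}

# f(v_j) = v_{j-1}: the block shifts each basis vector down one index.
for j in range(1, n):
    assert np.array_equal(J0 @ e[:, j], e[:, j - 1])

# f(v_1) = 0, and iterating n times annihilates everything: f^n = 0.
assert np.array_equal(J0 @ e[:, 0], np.zeros(n))
assert np.array_equal(np.linalg.matrix_power(J0, n), np.zeros((n, n)))
```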

So the basis can be expressed in the form

{f^{n−1}(u1), f^{n−2}(u1), . . . , f^0(u1)} where u1 = vn. More generally, a linear transformation with matrix representation J0(n1, n2, . . . , nk) has a basis of the form

{f^{n1−1}(u1), . . . , f^0(u1), f^{n2−1}(u2), . . . , f^0(u2), . . . , f^{nk−1}(uk), . . . , f^0(uk)} where f^{n1}(u1) = f^{n2}(u2) = ··· = f^{nk}(uk) = 0. Clearly the transformation is nilpotent.

Lemma: If f : V → V is a nilpotent transformation of a finite-dimensional vector space V, then V has a basis of the form

{f^{n1−1}(u1), . . . , f^0(u1), f^{n2−1}(u2), . . . , f^0(u2), . . . , f^{nk−1}(uk), . . . , f^0(uk)} where f^{n1}(u1) = f^{n2}(u2) = ··· = f^{nk}(uk) = 0.

Hence f has a matrix representation of the form J0(n1, n2, . . . , nk).

Proof: By induction on the dimension of V. Trivial in dimension 1. Assume true for dimension ≤ n and consider dimension n + 1. Let W = f(V). Since f is nilpotent, W ≠ V. If W = 0 then the zero matrix represents f. Otherwise, W has dimension smaller than the dimension of V and f : W → W is nilpotent. By the induction hypothesis, W has a basis of the form

{f^{n1−1}(v1), . . . , f^0(v1), f^{n2−1}(v2), . . . , f^0(v2), . . . , f^{nk−1}(vk), . . . , f^0(vk)}

where f^{n1}(v1) = f^{n2}(v2) = ··· = f^{nk}(vk) = 0.

Find ui so that f(ui) = vi for each i ≤ k. Extend the vectors f^{n1−1}(v1), . . . , f^{nk−1}(vk), which belong to the kernel of f, to a basis f^{n1−1}(v1), . . . , f^{nk−1}(vk), u_{k+1}, . . . , u_p of the kernel. We claim that the desired basis for V is

f^{n1}(u1), . . . , f^0(u1), . . . , f^{nk}(uk), . . . , f^0(uk), f^0(u_{k+1}), . . . , f^0(u_p).

We must show that these vectors are linearly independent and that there are the right number of them. They are linearly independent: Suppose

∑_{i=1}^{k} ∑_{j=0}^{ni} αij f^j(ui) + ∑_{i=k+1}^{p} βi f^0(ui) = 0V.

Applying f we obtain

∑_{i=1}^{k} ∑_{j=0}^{ni} αij f^j(vi) = 0V.

Therefore αij = 0 for j < ni. This implies

∑_{i=1}^{k} αi,ni f^{ni}(ui) + ∑_{i=k+1}^{p} βi f^0(ui) = 0V.

In other words,

∑_{i=1}^{k} αi,ni f^{ni−1}(vi) + ∑_{i=k+1}^{p} βi f^0(ui) = 0V.

By the linear independence of the basis of the kernel, the remaining coefficients are equal to zero.

There are the right number of them: The dimension of W is n1 + ··· + nk. Hence the rank of f is n1 + ··· + nk. We also have that the nullity of f is p. By the Rank-Nullity theorem, the dimension of V is n1 + ··· + nk + p. This is the number of linearly independent vectors we have produced.
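The counting can be sanity-checked numerically on a concrete nilpotent matrix. In the sketch below (the block sizes (3, 2, 2) and the helper j0_blocks are my own example, not from the text), each block of size m contributes m − 1 to the rank and one vector to the kernel, so the rank plus the nullity recovers the dimension of V, as Rank-Nullity asserts.

```python
import numpy as np

def j0_blocks(sizes):
    """Block diagonal nilpotent matrix J_0(sizes): each block
    has ones on its superdiagonal, zeros elsewhere."""
    total = sum(sizes)
    J = np.zeros((total, total))
    pos = 0
    for m in sizes:
        J[pos:pos + m, pos:pos + m] = np.eye(m, k=1)
        pos += m
    return J

sizes = (3, 2, 2)
J = j0_blocks(sizes)
rank = np.linalg.matrix_rank(J)   # dim f(V)
nullity = J.shape[0] - rank       # dim ker f

# A block of size m contributes m - 1 to the rank and one kernel vector.
assert rank == sum(m - 1 for m in sizes)
assert nullity == len(sizes)
assert rank + nullity == J.shape[0]
```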

Theorem: Every linear transformation f : V → V of a finite-dimensional vector space V over the complex numbers has a block-diagonal matrix representation where each diagonal block is of the form Jλ(n1, . . . , nk).

Proof: First, get the matrix into triangular form with the eigenvalues along the diagonal. Second, change basis so that the matrix is in block diagonal form with constant diagonals. Third, show each block A can be placed in Jordan Normal Form.

Proof of the third step: Suppose the eigenvalue corresponding to A is λ. Then A − λI is the matrix representation of a nilpotent transformation. So the restriction of f − λe to the corresponding subspace is nilpotent. Hence f − λe has a matrix representation of the form J0(n1, . . . , nk), and this implies that f has a matrix representation of the form Jλ(n1, . . . , nk).
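The key fact in the third step, that subtracting λI from a triangular block with constant diagonal λ leaves a nilpotent matrix, can be illustrated numerically (the particular 3 × 3 matrix below is my own example, not the text's).

```python
import numpy as np

lam = 5.0
# An upper triangular block with constant diagonal lam; the entries
# above the diagonal are arbitrary.
A = np.array([[lam, 2.0, -1.0],
              [0.0, lam,  3.0],
              [0.0, 0.0,  lam]])

# A - lam*I is strictly upper triangular, hence nilpotent: its n-th
# power vanishes for an n x n block.
N = A - lam * np.eye(3)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)
```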
