
NOTES ON LINEAR ALGEBRA

LIVIU I. NICOLAESCU

CONTENTS

1. Multilinear forms and determinants
1.1. Multilinear maps
1.2. The symmetric group
1.3. Symmetric and skew-symmetric forms
1.4. The determinant of a matrix
1.5. Additional properties of determinants
1.6. Examples
1.7. Exercises
2. Spectral decomposition of linear operators
2.1. Invariants of linear operators
2.2. The determinant and the characteristic polynomial of an operator
2.3. Generalized eigenspaces
2.4. The Jordan normal form of a complex operator
2.5. Exercises
3. Euclidean spaces
3.1. Inner products
3.2. Basic properties of Euclidean spaces
3.3. Orthonormal systems and the Gram-Schmidt procedure
3.4. Orthogonal projections
3.5. Linear functionals and adjoints on Euclidean spaces
3.6. Exercises
4. Spectral theory of normal operators
4.1. Normal operators
4.2. The spectral decomposition of a normal operator
4.3. The spectral decomposition of a real symmetric operator
4.4. Exercises
5. Applications
5.1. Symmetric bilinear forms
5.2. Nonnegative operators
5.3. Exercises
6. Elements of linear analysis
6.1. Normed vector spaces
6.2. Convergent sequences
6.3. Completeness
6.4. Continuous maps

Date: Started January 7, 2013. Completed on . Last modified on April 28, 2014. These are notes for the Honors Algebra Course, Spring 2013.

6.5. Series in normed spaces
6.6. The exponential of a matrix
6.7. The exponential of a matrix and systems of linear differential equations
6.8. Closed and open subsets
6.9. Compactness
6.10. Exercises
References

1. MULTILINEAR FORMS AND DETERMINANTS

In this section, we will deal exclusively with finite dimensional vector spaces over the field F = R, C. If U1, U2 are two F-vector spaces, we will denote by Hom(U1, U2) the space of F-linear maps U1 → U2.

1.1. Multilinear maps.

Definition 1.1. Suppose that U1, . . . , Uk, V are F-vector spaces. A map

Φ : U1 × · · · × Uk → V

is called k-linear if for any 1 ≤ i ≤ k, any vectors ui, vi ∈ Ui, any vectors uj ∈ Uj, j ≠ i, and any λ ∈ F we have

Φ(u1, . . . , ui−1, ui + vi, ui+1, . . . , uk) = Φ(u1, . . . , ui−1, ui, ui+1, . . . , uk) + Φ(u1, . . . , ui−1, vi, ui+1, . . . , uk),

Φ(u1, . . . , ui−1, λui, ui+1, . . . , uk) = λΦ(u1, . . . , ui−1, ui, ui+1, . . . , uk).

In the special case U1 = U2 = · · · = Uk = U and V = F, the resulting map

Φ : U × · · · × U → F   (k factors)

is called a k-linear form on U. When k = 2, we will refer to 2-linear forms as bilinear forms. We will denote by T^k(U^*) the space of k-linear forms on U. □

Example 1.2. Suppose that U is an F-vector space and U^* is its dual, U^* := Hom(U, F). We have a natural bilinear map

⟨−, −⟩ : U^* × U → F,   U^* × U ∋ (α, u) ↦ ⟨α, u⟩ := α(u).

This bilinear map is called the canonical pairing between the vector space U and its dual. □

Example 1.3. Suppose that A = (aij)1≤i,j≤n is an n × n matrix with real entries. Define

ΦA : R^n × R^n → R,   ΦA(x, y) = Σ_{i,j} aij xi yj,   x = (x1, . . . , xn)^T, y = (y1, . . . , yn)^T.

To show that ΦA is indeed a bilinear form we need to prove that for any x, y, z ∈ R^n and any λ ∈ R we have

ΦA(x + z, y) = ΦA(x, y) + ΦA(z, y), (1.1a)

ΦA(x, y + z) = ΦA(x, y) + ΦA(x, z), (1.1b)

ΦA(λx, y) = ΦA(x, λy) = λΦA(x, y). (1.1c)

To verify (1.1a) we observe that

ΦA(x + z, y) = Σ_{i,j} aij(xi + zi)yj = Σ_{i,j}(aij xi yj + aij zi yj) = Σ_{i,j} aij xi yj + Σ_{i,j} aij zi yj = ΦA(x, y) + ΦA(z, y).

The equalities (1.1b) and (1.1c) are proved in a similar fashion. Observe that if e1, . . . , en is the natural basis of R^n, then ΦA(ei, ej) = aij. This shows that ΦA is completely determined by its action on the basis vectors e1, . . . , en. □

Proposition 1.4. For any bilinear form Φ ∈ T^2((R^n)^*) there exists an n × n real matrix A such that Φ = ΦA, where ΦA is defined as in Example 1.3. □

The proof is left as an exercise.

1.2. The symmetric group. For any finite sets A, B we denote by Bij(A, B) the collection of bijective maps ϕ : A → B. We set

S(A) := Bij(A, A).

We will refer to S(A) as the symmetric group on A and to its elements as permutations of A. Note that if ϕ, σ ∈ S(A) then ϕ ◦ σ, ϕ−1 ∈ S(A). The composition of two permutations is often referred to as the product of the permutations. We denote by 1, or 1A, the identity permutation that does not permute anything, i.e., 1A(a) = a, ∀a ∈ A.

For any finite set S we denote by |S| its cardinality, i.e., the number of elements of S. Observe that

Bij(A, B) ≠ ∅ ⟺ |A| = |B|.

In the special case when A is the discrete interval A = In = {1, . . . , n} we set

Sn := S(In).

The collection Sn is called the symmetric group on n objects. We will indicate the elements ϕ ∈ Sn by diagrams of the form

( 1   2   · · ·  n  )
( ϕ1  ϕ2  · · ·  ϕn ).

Proposition 1.5. (a) If A, B are finite sets and |A| = |B|, then

|Bij(A, B)| = |Bij(B, A)| = |S(A)| = |S(B)|.

(b) For any positive integer n we have |Sn| = n! := 1 · 2 · 3 · · · n.

Proof. (a) Observe that we have a bijective correspondence

Bij(A, B) ∋ ϕ ↦ ϕ−1 ∈ Bij(B, A),

so that |Bij(A, B)| = |Bij(B, A)|. Next, fix a bijection ψ : A → B. We get a correspondence

Fψ : Bij(A, A) → Bij(A, B),   ϕ ↦ Fψ(ϕ) = ψ ◦ ϕ.

This correspondence is injective because

Fψ(ϕ1) = Fψ(ϕ2) ⇒ ψ ◦ ϕ1 = ψ ◦ ϕ2 ⇒ ψ−1 ◦ (ψ ◦ ϕ1) = ψ−1 ◦ (ψ ◦ ϕ2) ⇒ ϕ1 = ϕ2.

This correspondence is also surjective. Indeed, if φ ∈ Bij(A, B) then ψ−1 ◦ φ ∈ Bij(A, A) and

Fψ(ψ−1 ◦ φ) = ψ ◦ (ψ−1 ◦ φ) = φ.

Thus, Fψ is a bijection, so that |S(A)| = |Bij(A, B)|.

Finally we observe that |S(B)| = |Bij(B, A)| = |Bij(A, B)| = |S(A)|. This takes care of (a).

To prove (b) we argue by induction. Observe that |S1| = 1 because there exists a single bijection {1} → {1}. We assume that |Sn−1| = (n − 1)! and we prove that |Sn| = n!. For each k ∈ In we set

Sn^k := { ϕ ∈ Sn ; ϕ(n) = k }.

A permutation ϕ ∈ Sn^k is uniquely determined by its restriction to In \ {n} = In−1, and this restriction is a bijection In−1 → In \ {k}. Hence

|Sn^k| = |Bij(In−1, In \ {k})| = |Sn−1|,

where at the last equality we used part (a). We deduce

|Sn| = |Sn^1| + · · · + |Sn^n| = |Sn−1| + · · · + |Sn−1|   (n terms) = n|Sn−1| = n(n − 1)!,

where at the last step we invoked the inductive assumption. □

Definition 1.6. An inversion of a permutation σ ∈ Sn is a pair (i, j) ∈ In × In with the following properties:
• i < j;
• σ(i) > σ(j).
We denote by |σ| the number of inversions of the permutation σ. The signature of σ is then the quantity

sign(σ) := (−1)^|σ| ∈ {−1, 1}.

A permutation σ is called even/odd if sign(σ) = ±1. We denote by Sn^± the collection of even/odd permutations. □

Example 1.7. (a) Consider the permutation σ ∈ S5 given by

σ = ( 1 2 3 4 5 )
    ( 5 4 3 2 1 ).

The inversions of σ are

(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5),

so that |σ| = 4 + 3 + 2 + 1 = 10, sign(σ) = 1.

(b) For any i ≠ j in In we denote by τij the permutation defined by the equalities

τij(k) = { k,  k ≠ i, j,
         { j,  k = i,
         { i,  k = j.

A transposition is defined to be a permutation of the form τij for some i < j. Observe that

|τij| = 2|j − i| − 1, so that sign(τij) = −1, ∀i ≠ j. (1.2)

□
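The inversion count of Definition 1.6 is easy to mechanize. The following sketch (ours; plain Python, and the names inversions and sign are our own) recovers |σ| = 10, sign(σ) = 1 for the permutation of Example 1.7(a) and checks formula (1.2) on a transposition:

```python
from itertools import combinations

def inversions(sigma):
    # sigma is given in one-line notation: (sigma(1), ..., sigma(n))
    return sum(1 for i, j in combinations(range(len(sigma)), 2)
               if sigma[i] > sigma[j])

def sign(sigma):
    return (-1) ** inversions(sigma)

sigma = (5, 4, 3, 2, 1)          # the permutation of Example 1.7(a)
assert inversions(sigma) == 10 and sign(sigma) == 1

tau = (3, 2, 1, 4, 5)            # the transposition tau_{13} in S_5
assert inversions(tau) == 2 * abs(3 - 1) - 1 == 3 and sign(tau) == -1
```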

Proposition 1.8. (a) For any σ ∈ Sn we have

sign(σ) = ∏_{1≤i<j≤n} (σ(j) − σ(i))/(j − i). (1.3)

(b) For any ϕ, σ ∈ Sn we have

sign(ϕ ◦ σ) = sign(ϕ) · sign(σ). (1.4)

(c) sign(σ−1) = sign(σ).

Proof. (a) Observe that the ratio (σ(j) − σ(i))/(j − i) is negative if and only if (i, j) is an inversion. Thus the number of negative ratios (σ(j) − σ(i))/(j − i), i < j, is equal to the number of inversions of σ, so that the product

∏_{1≤i<j≤n} (σ(j) − σ(i))/(j − i)

has the same sign as (−1)^|σ| = sign(σ). On the other hand, since σ is a bijection, as (i, j) runs through the pairs i < j the unordered pairs {σ(i), σ(j)} run exactly once through all the unordered pairs of distinct elements of In. Hence

∏_{1≤i<j≤n} |σ(j) − σ(i)|/(j − i) = 1,

so the product in (1.3) has absolute value 1, and therefore equals sign(σ). This proves (1.3).

(b) Using (1.3) twice we deduce

sign(ϕ ◦ σ) = ∏_{1≤i<j≤n} (ϕ(σ(j)) − ϕ(σ(i)))/(j − i) = ∏_{1≤i<j≤n} (ϕ(σ(j)) − ϕ(σ(i)))/(σ(j) − σ(i)) · ∏_{1≤i<j≤n} (σ(j) − σ(i))/(j − i) = sign(ϕ) · sign(σ).

(c) follows from (b) applied to ϕ = σ−1, since sign(σ−1) sign(σ) = sign(1) = 1. □

1.3. Symmetric and skew-symmetric forms.

Definition 1.9. Let U be an F-vector space, F = R or F = C.
(a) A k-linear form Φ ∈ T^k(U^*) is called symmetric if for any u1, . . . , uk ∈ U and any permutation σ ∈ Sk we have

Φ(uσ(1), . . . , uσ(k)) = Φ(u1, . . . , uk).

We denote by S^k U^* the collection of symmetric k-linear forms on U.
(b) A k-linear form Φ ∈ T^k(U^*) is called skew-symmetric if for any u1, . . . , uk ∈ U and any permutation σ ∈ Sk we have

Φ(uσ(1), . . . , uσ(k)) = sign(σ) Φ(u1, . . . , uk).

We denote by Λ^k U^* the space of skew-symmetric k-linear forms on U. □

Example 1.10. Suppose that Φ ∈ Λ^n U^* and u1, . . . , un ∈ U. The skew-symmetry implies that for any i < j we have

Φ(u1, . . . , ui−1, ui, ui+1, . . . , uj−1, uj, uj+1, . . . , un) = −Φ(u1, . . . , ui−1, uj, ui+1, . . . , uj−1, ui, uj+1, . . . , un).

Indeed, we have

Φ(u1, . . . , ui−1, uj, ui+1, . . . , uj−1, ui, uj+1, . . . , un) = Φ(u_{τij(1)}, . . . , u_{τij(n)})

and sign(τij) = −1. In particular, this implies that if i ≠ j but ui = uj, then

Φ(u1, . . . , un) = 0. □

Proposition 1.11. Suppose that U is an n-dimensional F-vector space and e1, . . . , en is a basis of U. Then for any scalar c ∈ F there exists a unique skew-symmetric n-linear form Φ ∈ Λ^n U^* such that Φ(e1, . . . , en) = c.

Proof. To understand what is happening we consider first the special case n = 2. Thus dim U = 2. If Φ ∈ Λ^2 U^* and u1, u2 ∈ U we can write

u1 = a11 e1 + a21 e2,   u2 = a12 e1 + a22 e2,

for some scalars aij ∈ F, i, j ∈ {1, 2}. We have

Φ(u1, u2) = Φ(a11 e1 + a21 e2, a12 e1 + a22 e2)
= a11 Φ(e1, a12 e1 + a22 e2) + a21 Φ(e2, a12 e1 + a22 e2)
= a11 a12 Φ(e1, e1) + a11 a22 Φ(e1, e2) + a21 a12 Φ(e2, e1) + a21 a22 Φ(e2, e2).

The skew-symmetry of Φ implies that

Φ(e1, e1) = Φ(e2, e2) = 0,   Φ(e2, e1) = −Φ(e1, e2).

Hence

Φ(u1, u2) = (a11 a22 − a21 a12) Φ(e1, e2).

If dim U = n and u1, . . . , un ∈ U, then we can write

u1 = Σ_{i1=1}^n a_{i1,1} e_{i1}, . . . , uk = Σ_{ik=1}^n a_{ik,k} e_{ik},

and

Φ(u1, . . . , un) = Φ( Σ_{i1=1}^n a_{i1,1} e_{i1}, . . . , Σ_{in=1}^n a_{in,n} e_{in} ) = Σ_{i1,...,in=1}^n a_{i1,1} · · · a_{in,n} Φ(e_{i1}, . . . , e_{in}).

Observe that if the indices i1, . . . , in are not pairwise distinct then

Φ(e_{i1}, . . . , e_{in}) = 0.

Thus, in the above sum we get contributions only from pairwise distinct choices of indices i1, . . . , in. Such a choice corresponds to a permutation σ ∈ Sn, σ(k) = ik. We deduce that

Φ(u1, . . . , un) = Σ_{σ∈Sn} aσ(1)1 · · · aσ(n)n Φ(eσ(1), . . . , eσ(n)) = ( Σ_{σ∈Sn} sign(σ) aσ(1)1 · · · aσ(n)n ) Φ(e1, . . . , en).

Thus, Φ ∈ Λ^n U^* is uniquely determined by its value on (e1, . . . , en). Conversely, the map

(u1, . . . , un) ↦ c Σ_{σ∈Sn} sign(σ) aσ(1)1 · · · aσ(n)n,   uk = Σ_{i=1}^n aik ei,

is indeed n-linear and skew-symmetric. The proof of this fact is notationally bushy, but it does not involve any subtle idea, so I will skip it. Instead, I'll leave the proof in the case n = 2 as an exercise. □

1.4. The determinant of a matrix. Consider the vector space F^n with its canonical basis

e1 = (1, 0, . . . , 0)^T, e2 = (0, 1, . . . , 0)^T, . . . , en = (0, 0, . . . , 1)^T.

According to Proposition 1.11 there exists a unique n-linear skew-symmetric form Φ on F^n such that

Φ(e1, . . . , en) = 1.

We will denote this form by det and we will refer to it as the determinant form on F^n. The proof of Proposition 1.11 shows that if u1, . . . , un ∈ F^n,

uk = (u1k, u2k, . . . , unk)^T, k = 1, . . . , n,

then

det(u1, . . . , un) = Σ_{σ∈Sn} sign(σ) uσ(1)1 uσ(2)2 · · · uσ(n)n. (1.6)

Note that, summing instead over the inverse permutations ϕ = σ−1 and using sign(σ−1) = sign(σ),

det(u1, . . . , un) = Σ_{ϕ∈Sn} sign(ϕ) u1ϕ(1) u2ϕ(2) · · · unϕ(n). (1.7)

Definition 1.12. Suppose that A = (aij)1≤i,j≤n is an n × n-matrix with entries in F, which we regard as a linear operator A : F^n → F^n. The determinant of A is the scalar

det A := det(Ae1, . . . , Aen),

where e1, . . . , en is the canonical basis of F^n, and Aek is the k-th column of A,

Aek = (a1k, a2k, . . . , ank)^T, k = 1, . . . , n. □

Thus, according to (1.6) and (1.7) we have

det A = Σ_{σ∈Sn} sign(σ) aσ(1)1 · · · aσ(n)n = Σ_{σ∈Sn} sign(σ) a1σ(1) · · · anσ(n). (1.8)

Remark 1.13. Consider a typical summand in the first sum in (1.8), aσ(1)1 · · · aσ(n)n. Observe that the n entries

aσ(1)1, aσ(2)2, . . . , aσ(n)n

lie on different columns of A and thus occupy all the n columns of A. Similarly, these entries lie on different rows of A. A collection of n entries such that no two lie on the same row or the same column is called a rook placement.[1] Observe that in order to describe a rook placement, you need to indicate the position of the entry on the first column, by indicating the row σ(1) on which it lies, then you need to indicate the position of the entry on the second column, etc. Thus, the sum in (1.8) has one term for each rook placement. □

If A† denotes the transpose of the n × n-matrix A, with entries a†ij = aji, we deduce that

det A† = Σ_{σ∈Sn} sign(σ) a†σ(1)1 · · · a†σ(n)n = Σ_{σ∈Sn} sign(σ) a1σ(1) · · · anσ(n) = det A. (1.9)

Example 1.14. Suppose that A is a 2 × 2 matrix

A = [ a11 a12 ]
    [ a21 a22 ].

Then

det A = a11a22 − a12a21. □

Proposition 1.15. If A is an upper triangular n × n-matrix, then det A is the product of its diagonal entries. A similar result holds if A is lower triangular.

[1] If you are familiar with chess, a rook controls the row and the column at whose intersection it is situated.

Proof. To keep the ideas as transparent as possible, we carry out the proof in the special case n = 3. Suppose first that A is upper triangular,

A = [ a11 a12 a13 ]
    [ 0   a22 a23 ]
    [ 0   0   a33 ],

so that

Ae1 = a11e1,   Ae2 = a12e1 + a22e2,   Ae3 = a13e1 + a23e2 + a33e3.

Then

det A = det(Ae1, Ae2, Ae3) = det(a11e1, a12e1 + a22e2, a13e1 + a23e2 + a33e3)
= det(a11e1, a12e1, a13e1 + a23e2 + a33e3) + det(a11e1, a22e2, a13e1 + a23e2 + a33e3)
= a11a12 det(e1, e1, a13e1 + a23e2 + a33e3) + a11a22 det(e1, e2, a13e1 + a23e2 + a33e3),

where the first term vanishes because two of its arguments coincide. Continuing,

det A = a11a22 ( det(e1, e2, a13e1) + det(e1, e2, a23e2) + det(e1, e2, a33e3) )
= a11a22a33 det(e1, e2, e3) = a11a22a33,

where the first two determinants in the parentheses vanish for the same reason. This proves the proposition when A is upper triangular. If A is lower triangular, then its transpose A† is upper triangular and we deduce

det A = det A† = a†11a†22a†33 = a11a22a33. □

Recall that we have a collection of elementary column (row) operations on a matrix. The next result explains the effect of these operations on the determinant of a matrix.

Proposition 1.16. Suppose that A is an n × n-matrix. The following hold.
(a) If the matrix B is obtained from A by multiplying the elements of the i-th column of A by the same nonzero scalar λ, then det B = λ det A.
(b) If the matrix B is obtained from A by switching the columns i and j, i ≠ j, then det B = − det A.
(c) If the matrix B is obtained from A by adding to the i-th column the j-th column, j ≠ i, then det B = det A.
(d) Similar results hold if we perform row operations of the same type.

Proof. (a) We have

det B = det(Be1, . . . , Ben) = det(Ae1, . . . , λAei, . . . , Aen) = λ det(Ae1, . . . , Aei, . . . , Aen) = λ det A.

(b) Observe that for any σ ∈ Sn we have

det(Aeσ(1), . . . , Aeσ(n)) = sign(σ) det(Ae1, . . . , Aen) = sign(σ) det A.

Now observe that the columns of B are

Be1 = Aeτij(1), . . . , Ben = Aeτij(n),

and sign(τij) = −1, so that det B = − det A.

For (c) we observe that

det B = det(Ae1, . . . , Aei−1, Aei + Aej, Aei+1, . . . , Aej, . . . , Aen)
= det(Ae1, . . . , Aei−1, Aei, Aei+1, . . . , Aej, . . . , Aen) + det(Ae1, . . . , Aei−1, Aej, Aei+1, . . . , Aej, . . . , Aen)
= det A,

where the second determinant vanishes because two of its columns coincide. Part (d) follows by applying (a), (b), (c) to the transpose of A, observing that the rows of A are the columns of A†, and then using the equality det C = det C†. □

The above results represent one efficient method for computing determinants: by performing elementary row operations we can reduce a square matrix to upper triangular form, keeping track of how each operation changes the determinant. A sketch of this method follows.
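This is a minimal implementation sketch (ours; it uses exact Fraction arithmetic, and the function name is our own). Row swaps flip the sign by Proposition 1.16(b), while subtracting a multiple of another row leaves the determinant unchanged, by combining parts (a) and (c):

```python
from fractions import Fraction

def det_by_row_reduction(A):
    # Reduce A to upper triangular form; then apply Proposition 1.15:
    # the determinant is the product of the diagonal entries.
    A = [[Fraction(x) for x in row] for row in A]   # exact working copy
    n, sgn = len(A), 1
    for k in range(n):
        pivot = next((r for r in range(k, n) if A[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)            # no pivot => det A = 0
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            sgn = -sgn                    # row swap flips the sign
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            A[r] = [a - m * b for a, b in zip(A[r], A[k])]
    result = Fraction(sgn)
    for k in range(n):
        result *= A[k][k]
    return result

assert det_by_row_reduction([[2, 1], [4, 5]]) == 6
```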

Here is a first application of determinants.

Proposition 1.17. Suppose that A is an n × n-matrix with entries in F. Then the following statements are equivalent.
(a) The matrix A is invertible.
(b) det A ≠ 0.

Proof. A matrix A is invertible if and only if by performing elementary row operations we can reduce it to an upper triangular matrix B whose diagonal entries are nonzero, i.e., det B ≠ 0. Under elementary row operations the determinant changes by a nonzero factor, so that det A ≠ 0 ⟺ det B ≠ 0. □

Corollary 1.18. Suppose that u1, . . . , un ∈ F^n. The following statements are equivalent.
(a) The vectors u1, . . . , un are linearly independent.
(b) det(u1, . . . , un) ≠ 0.

Proof. Consider the linear operator A : F^n → F^n given by Aei = ui, i = 1, . . . , n. We can tautologically identify it with a matrix, and we have

det(u1, . . . , un) = det A.

Now observe that (u1, . . . , un) are linearly independent if and only if A is invertible and, according to the previous proposition, this happens if and only if det A ≠ 0. □

1.5. Additional properties of determinants.

Proposition 1.19. If A, B are two n × n-matrices, then

det AB = det A det B. (1.10)

Proof. We have

det AB = det(ABe1, . . . , ABen) = det( Σ_{i1=1}^n bi1,1 Aei1, . . . , Σ_{in=1}^n bin,n Aein ) = Σ_{i1,...,in=1}^n bi1,1 · · · bin,n det(Aei1, . . . , Aein).

In the above sum, the only nontrivial terms correspond to choices of pairwise distinct indices i1, . . . , in. Such a choice describes a permutation σ ∈ Sn, σ(k) = ik. We deduce

det AB = Σ_{σ∈Sn} bσ(1)1 · · · bσ(n)n det(Aeσ(1), . . . , Aeσ(n)) = det A Σ_{σ∈Sn} sign(σ) bσ(1)1 · · · bσ(n)n = det A det B,

where we used det(Aeσ(1), . . . , Aeσ(n)) = sign(σ) det A. □

Corollary 1.20. If A is an invertible matrix, then

det A−1 = 1/det A.

Proof. Indeed, we have A · A−1 = 1 so that det A det A−1 = det 1 = 1. □

Proposition 1.21. Suppose that m, n are positive integers and S is an (m + n) × (m + n)-matrix that has the block form

S = [ A C ]
    [ 0 B ],

where A is an m × m-matrix, B is an n × n-matrix and C is an m × n-matrix. Then

det S = det A · det B.

Proof. We denote by sij the (i, j)-entry of S, i, j ∈ Im+n. From the block description of S we deduce that

j ≤ m and i > m ⇒ sij = 0. (1.11)

We have

det S = Σ_{σ∈Sm+n} sign(σ) ∏_{i=1}^{m+n} sσ(i)i.

From (1.11) we deduce that in the above sum the nonzero terms correspond to permutations σ ∈ Sm+n such that

σ(i) ≤ m, ∀i ≤ m. (1.12)

If σ is such a permutation, then its restriction to Im is a permutation α of Im, and its restriction to Im+n \ Im is a permutation of this set, which we regard as a permutation β of In. Conversely, given α ∈ Sm and β ∈ Sn we obtain a permutation σ = α ∗ β ∈ Sm+n satisfying (1.12), given by

α ∗ β(i) = { α(i),          i ≤ m,
           { m + β(i − m),  i > m.

Observe that sign(α ∗ β) = sign(α) sign(β), and we deduce

det S = Σ_{α∈Sm, β∈Sn} sign(α ∗ β) ∏_{i=1}^{m+n} sα∗β(i)i = ( Σ_{α∈Sm} sign(α) ∏_{i=1}^m sα(i)i ) ( Σ_{β∈Sn} sign(β) ∏_{j=1}^n sm+β(j),j+m ) = det A det B. □

Definition 1.22. If A is an n × n-matrix and i, j ∈ In, we denote by A(i, j) the matrix obtained from A by removing the i-th row and the j-th column. □

Corollary 1.23. Suppose that the j-th column of an n × n-matrix A is sparse, i.e., all the elements on the j-th column, with the possible exception of the element on the i-th row, are equal to zero. Then

det A = (−1)^{i+j} aij det A(i, j).

Proof. Observe that if i = j = 1 then A has the block form

A = [ a11   ∗       ]
    [ 0     A(1, 1) ]

and the result follows from Proposition 1.21. We can reduce the general case to this special case by permuting rows and columns of A. If we switch the j-th column with the (j − 1)-th column we can arrange that the (j − 1)-th column is the sparse column. Iterating this procedure we deduce after (j − 1) such switches that the first column is the sparse column. By performing (i − 1) row switches we can arrange that the nontrivial element on this sparse column is situated on the first row. Thus, after a total of i + j − 2 row and column switches, each of which changes the sign of the determinant, we obtain a new matrix A′ with the block form

A′ = [ aij   ∗       ]
     [ 0     A(i, j) ].

We have

(−1)^{i+j} det A = (−1)^{i+j−2} det A = det A′ = aij det A(i, j),

so that det A = (−1)^{i+j} aij det A(i, j). □

Corollary 1.24 (Row and column expansion). Fix j ∈ In. Then for any n × n-matrix A we have

det A = Σ_{i=1}^n (−1)^{i+j} aij det A(i, j) = Σ_{k=1}^n (−1)^{j+k} ajk det A(j, k).

The first equality is referred to as the j-th column expansion of det A, while the second equality is referred to as the j-th row expansion of det A.

Proof. We prove only the column expansion. The row expansion is obtained by applying the column expansion to the transpose matrix. For simplicity we assume that j = 1. We have

det A = det(Ae1, Ae2, . . . , Aen) = det( Σ_{i=1}^n ai1 ei, Ae2, . . . , Aen ) = Σ_{i=1}^n ai1 det(ei, Ae2, . . . , Aen).

Denote by Ai the matrix whose first column is the basis vector ei, and whose other columns are the corresponding columns of A, namely Ae2, . . . , Aen. We can rewrite the last equality as

det A = Σ_{i=1}^n ai1 det Ai.

The first column of Ai is sparse, and the submatrix Ai(i, 1) is equal to the submatrix A(i, 1). We deduce from the previous corollary that

det Ai = (−1)^{i+1} det Ai(i, 1) = (−1)^{i+1} det A(i, 1).

This completes the proof of the column expansion formula. □
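The column expansion yields a simple recursive determinant routine. Here is a sketch (ours; the helper names minor and det_expand are our own, and 0-based indices replace the 1-based ones of the text, so the sign (−1)^{i+1} becomes (−1)^i):

```python
def minor(A, i, j):
    # A(i, j): remove row i and column j (0-based indices)
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det_expand(A):
    # expansion along the first column (Corollary 1.24 with j = 1):
    #   det A = sum_i (-1)^{i+1} a_{i1} det A(i, 1)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det_expand(minor(A, i, 0))
               for i in range(n))

assert det_expand([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```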

Corollary 1.25. If k ≠ j then

Σ_{i=1}^n (−1)^{i+j} aik det A(i, j) = 0.

Proof. Denote by A′ the matrix obtained from A by replacing its j-th column with the k-th column of A. Thus, in the new matrix A′ the j-th and the k-th columns are identical, so that det A′ = 0. On the other hand, A′(i, j) = A(i, j) and a′ij = aik. Expanding det A′ along the j-th column we deduce

0 = det A′ = Σ_{i=1}^n (−1)^{i+j} a′ij det A′(i, j) = Σ_{i=1}^n (−1)^{i+j} aik det A(i, j). □

Definition 1.26. For any n × n matrix A we define the adjoint matrix Ǎ to be the n × n-matrix with entries

ǎij = (−1)^{i+j} det A(j, i), ∀i, j ∈ In. □

From Corollary 1.24 we deduce that for any j we have

Σ_{i=1}^n ǎji aij = det A,

while Corollary 1.25 implies that for any j ≠ k we have

Σ_{i=1}^n ǎji aik = 0.

The last two identities can be rewritten in the compact form

A Ǎ = Ǎ A = (det A) 1. (1.13)

If A is invertible, then from the above equality we conclude that

A−1 = (1/det A) Ǎ. (1.14)

Example 1.27. Suppose that A is a 2 × 2 matrix

A = [ a11 a12 ]
    [ a21 a22 ].

Then

det A = a11a22 − a12a21,

A(1, 1) = [a22], A(1, 2) = [a21], A(2, 1) = [a12], A(2, 2) = [a11],

ǎ11 = det A(1, 1) = a22,   ǎ12 = − det A(2, 1) = −a12,

ǎ21 = − det A(1, 2) = −a21,   ǎ22 = det A(2, 2) = a11,

so that

Ǎ = [ a22  −a12 ]
    [ −a21 a11  ]

and we observe that

A Ǎ = [ det A  0     ]
      [ 0      det A ]. □

Proposition 1.28 (Cramer's Rule). Suppose that A is an invertible n × n-matrix and u, x ∈ F^n are two column vectors such that Ax = u, i.e., x is a solution of the linear system

a11x1 + a12x2 + · · · + a1nxn = u1
a21x1 + a22x2 + · · · + a2nxn = u2
 ⋮
an1x1 + an2x2 + · · · + annxn = un.

Denote by Aj(u) the matrix obtained from A by replacing the j-th column with the column vector u. Then

xj = det Aj(u) / det A, ∀j = 1, . . . , n. (1.15)

Proof. By expanding along the j-th column of Aj(u) we deduce

det Aj(u) = Σ_{k=1}^n (−1)^{j+k} uk det A(k, j). (1.16)

On the other hand,

(det A) x = (Ǎ A) x = Ǎ (Ax) = Ǎ u.

Hence

(det A) xj = Σ_{k=1}^n ǎjk uk = Σ_k (−1)^{k+j} uk det A(k, j) = det Aj(u),

where the last equality is (1.16). □

1.6. Examples. To any list of complex numbers (x1, . . . , xn) we associate the n × n matrix

V (x1, . . . , xn) = [ 1         1         · · ·  1        ]
                    [ x1        x2        · · ·  xn       ]
                    [ ⋮         ⋮                ⋮        ]
                    [ x1^{n−1}  x2^{n−1}  · · ·  xn^{n−1} ].   (1.17)

This matrix is called the Vandermonde matrix associated to the list of numbers (x1, . . . , xn). We want to compute its determinant. Observe first that

det V (x1, . . . , xn) = 0

if the numbers x1, . . . , xn are not distinct, because in that case two columns of V coincide. Observe next that

det V (x1, x2) = det [ 1  1  ]
                     [ x1 x2 ] = x2 − x1.

Consider now the 3 × 3 situation. We have

det V (x1, x2, x3) = det [ 1    1    1   ]
                         [ x1   x2   x3  ]
                         [ x1²  x2²  x3² ].

Subtract from the 3rd row the second row multiplied by x1 to deduce

det V (x1, x2, x3) = det [ 1   1            1           ]
                         [ x1  x2           x3          ]
                         [ 0   x2(x2 − x1)  x3(x3 − x1) ].

Subtract from the 2nd row the first row multiplied by x1 to deduce

det V (x1, x2, x3) = det [ 1   1            1           ]
                         [ 0   x2 − x1      x3 − x1     ]
                         [ 0   x2(x2 − x1)  x3(x3 − x1) ]

= det [ x2 − x1      x3 − x1     ]
      [ x2(x2 − x1)  x3(x3 − x1) ]

= (x2 − x1)(x3 − x1) det [ 1  1  ]
                         [ x2 x3 ] = (x2 − x1)(x3 − x1) det V (x2, x3) = (x2 − x1)(x3 − x1)(x3 − x2).

We can write the above equalities in a more compact form

det V (x1, x2) = ∏_{1≤i<j≤2} (xj − xi),   det V (x1, x2, x3) = ∏_{1≤i<j≤3} (xj − xi). (1.18)

In general, a similar row-operation argument (see Exercise 1.10) yields

det V (x1, . . . , xn) = (x2 − x1) · · · (xn − x1) det V (x2, . . . , xn). (1.19)

We have the following general result.

Proposition 1.29. For any integer n ≥ 2 and any complex numbers x1, . . . , xn we have

det V (x1, . . . , xn) = ∏_{1≤i<j≤n} (xj − xi). (1.20)

Proof. We will argue by induction on n. The case n = 2 is contained in (1.18). Assume now that (1.20) is true for n − 1. This means that

det V (x2, . . . , xn) = ∏_{2≤i<j≤n} (xj − xi).

Using (1.19) we deduce

det V (x1, . . . , xn) = (x2 − x1) · · · (xn − x1) ∏_{2≤i<j≤n} (xj − xi) = ∏_{1≤i<j≤n} (xj − xi). □

Corollary 1.30. If x1, . . . , xn are distinct complex numbers, then for any complex numbers r1, . . . , rn there exists a polynomial P of degree ≤ n − 1, uniquely determined by the conditions

P (x1) = r1, . . . , P (xn) = rn. (1.21)

Proof. The polynomial P must have the form

P (x) = a0 + a1x + · · · + an−1x^{n−1},

where the coefficients a0, . . . , an−1 are to be determined. We will do this using (1.21), which can be rewritten as a system of linear equations in which the unknowns are the coefficients a0, . . . , an−1:

a0 + a1x1 + · · · + an−1x1^{n−1} = r1
a0 + a1x2 + · · · + an−1x2^{n−1} = r2
 ⋮
a0 + a1xn + · · · + an−1xn^{n−1} = rn.

The matrix of this system is the transpose V (x1, . . . , xn)† of the Vandermonde matrix (1.17). Because the numbers x1, . . . , xn are distinct, we deduce from (1.20) that

det V (x1, . . . , xn)† = det V (x1, . . . , xn) ≠ 0.

Hence the above linear system has a unique solution a0, . . . , an−1. □

1.7. Exercises.

Exercise 1.1. Prove that the map in Example 1.2 is indeed a bilinear map. □

Exercise 1.2. Prove Proposition 1.4. □

Exercise* 1.3. (a) Show that for any i ≠ j in In we have τij ◦ τij = 1In.

(b) Prove that for any permutation σ ∈ Sn there exists a sequence of transpositions τi1j1, . . . , τimjm, m < n, such that

τimjm ◦ · · · ◦ τi1j1 ◦ σ = 1In.

Conclude that any permutation is a product of transpositions. □

Exercise 1.4. Decompose the permutation

( 1 2 3 4 5 )
( 5 4 3 2 1 )

as a composition of transpositions. □

Exercise 1.5. Suppose that Φ ∈ T^2(U^*) is a symmetric bilinear form. Define Q : U → F by setting

Q(u) = Φ(u, u), ∀u ∈ U.

Show that for any u, v ∈ U we have

Φ(u, v) = (1/4)( Q(u + v) − Q(u − v) ). □

Exercise 1.6. Prove that the map

Φ : R² × R² → R,   Φ(u, v) = u1v2 − u2v1,

is bilinear and skew-symmetric. □

Exercise 1.7. (a) Show that a bilinear form Φ : U × U → F is skew-symmetric if and only if

Φ(u, u) = 0, ∀u ∈ U.

Hint: Expand Φ(u + v, u + v) using the bilinearity of Φ.

(b) Prove that an n-linear form Φ ∈ T^n(U^*) is skew-symmetric if and only if for any i ≠ j and any vectors u1, . . . , un ∈ U such that ui = uj we have

Φ(u1, . . . , un) = 0.

Hint: Use the trick in part (a) and Exercise 1.3. □

Exercise 1.8. Compute the determinant of the following 5 × 5-matrix:

[ 1 2 3 4 5 ]
[ 2 1 2 3 4 ]
[ 3 2 1 2 3 ]
[ 4 3 2 1 2 ]
[ 5 4 3 2 1 ].

Exercise 1.9. Fix complex numbers x and h. Compute the determinant of the matrix

[ 1   −1   0   0  ]
[ x    h  −1   0  ]
[ x²  hx   h  −1  ]
[ x³  hx²  hx  h  ].

Can you generalize this example? □

Exercise 1.10. Prove the equality (1.19). □

Exercise 1.11. (a) Consider a degree (n − 1) polynomial

P (x) = an−1x^{n−1} + an−2x^{n−2} + · · · + a1x + a0,   an−1 ≠ 0.

Compute the determinant of the following matrix:

V = [ 1         1         · · ·  1        ]
    [ x1        x2        · · ·  xn       ]
    [ ⋮         ⋮                ⋮        ]
    [ x1^{n−2}  x2^{n−2}  · · ·  xn^{n−2} ]
    [ P (x1)    P (x2)    · · ·  P (xn)   ].

(b) Compute the determinants of the following n × n matrices:

A = [ 1              1                · · ·  1              ]
    [ x1             x2               · · ·  xn             ]
    [ ⋮              ⋮                       ⋮              ]
    [ x1^{n−2}       x2^{n−2}         · · ·  xn^{n−2}       ]
    [ x2x3 · · · xn  x1x3x4 · · · xn  · · ·  x1x2 · · · xn−1 ],

and

B = [ 1                          1                               · · ·  1                          ]
    [ x1                         x2                              · · ·  xn                         ]
    [ ⋮                          ⋮                                      ⋮                          ]
    [ x1^{n−2}                   x2^{n−2}                        · · ·  xn^{n−2}                   ]
    [ (x2 + x3 + · · · + xn)^{n−1}  (x1 + x3 + x4 + · · · + xn)^{n−1}  · · ·  (x1 + x2 + · · · + xn−1)^{n−1} ].

Hint. To compute det B it is wise to write S = x1 + · · · + xn, so that x2 + x3 + · · · + xn = S − x1, x1 + x3 + · · · + xn = S − x2, etc. Next observe that (S − x)^k is a polynomial of degree k in x. □

Exercise 1.12. Suppose that A is a skew-symmetric n × n matrix, i.e., A† = −A. Show that det A = 0 if n is odd. □

Exercise 1.13. Suppose that A = (aij)1≤i,j≤n is an n × n matrix with complex entries.

(a) Fix complex numbers x1, . . . , xn, y1, . . . , yn and consider the n × n matrix B with entries

bij = xi yj aij.

Show that det B = (x1y1 · · · xnyn) det A.

(b) Suppose that C is the n × n matrix with entries

cij = (−1)^{i+j} aij.

Show that det C = det A. □

Exercise 1.14. (a) Suppose we are given three sequences of numbers a = (ak)k≥1, b = (bk)k≥1 and c = (ck)k≥1. To these sequences we associate a sequence of Jacobi matrices

Jn = [ a1  b1  0   0   · · ·  0     0  ]
     [ c1  a2  b2  0   · · ·  0     0  ]
     [ 0   c2  a3  b3  · · ·  0     0  ]
     [ ⋮   ⋮   ⋮   ⋮          ⋮     ⋮  ]
     [ 0   0   0   0   · · ·  cn−1  an ].   (J)

Show that

det Jn = an det Jn−1 − bn−1cn−1 det Jn−2. (1.22)

Hint: Expand along the last row.

(b) Suppose that above we have

ck = 1, bk = 2, ak = 3, ∀k ≥ 1.

Compute det J1, det J2. Using (1.22) determine det J3, det J4, det J5, det J6, det J7. Can you detect a pattern? □

Exercise 1.15. Suppose we are given a sequence of polynomials with complex coefficients (Pn(x))n≥0, deg Pn = n for all n ≥ 0,

Pn(x) = an x^n + · · · ,   an ≠ 0.

Denote by V n the space of polynomials with complex coefficients and degree ≤ n.

(a) Show that the collection {P0(x), . . . , Pn(x)} is a basis of V n.

(b) Show that for any x1, . . . , xn ∈ C we have

det [ P0(x1)    P0(x2)    · · ·  P0(xn)   ]
    [ P1(x1)    P1(x2)    · · ·  P1(xn)   ]
    [ ⋮         ⋮                ⋮        ]
    [ Pn−1(x1)  Pn−1(x2)  · · ·  Pn−1(xn) ] = a0 a1 · · · an−1 ∏_{i<j} (xj − xi). □

Exercise 1.16. To any polynomial P (x) = c0 + c1x + . . . + cn−1x^{n−1} of degree ≤ n − 1 with complex coefficients we associate the n × n circulant matrix

CP = [ c0    c1    c2    · · ·  cn−2  cn−1 ]
     [ cn−1  c0    c1    · · ·  cn−3  cn−2 ]
     [ cn−2  cn−1  c0    · · ·  cn−4  cn−3 ]
     [ ⋮     ⋮     ⋮            ⋮     ⋮    ]
     [ c1    c2    c3    · · ·  cn−1  c0   ].

Set

ρ = e^{2πi/n},   i = √−1,

so that ρ^n = 1. Consider the n × n Vandermonde matrix Vρ = V (1, ρ, . . . , ρ^{n−1}) defined as in (1.17).

(a) Show that for any j = 1, . . . , n − 1 we have

1 + ρ^j + ρ^{2j} + · · · + ρ^{(n−1)j} = 0.

(b) Show that

CP · Vρ = Vρ · Diag( P (1), P (ρ), . . . , P (ρ^{n−1}) ),

where Diag(a1, . . . , an) denotes the diagonal n × n-matrix with diagonal entries a1, . . . , an.

(c) Show that

det CP = P (1)P (ρ) · · · P (ρ^{n−1}). □

(d) Suppose that P (x) = 1 + 2x + 3x² + 4x³, so that CP is a 4 × 4-matrix with integer entries and thus det CP is an integer. Find this integer. Can you generalize this computation?

Exercise 1.17. Consider the n × n-matrix

A = [ 0 1 0 0 · · · 0 0 ]
    [ 0 0 1 0 · · · 0 0 ]
    [ ⋮ ⋮ ⋮ ⋮       ⋮ ⋮ ]
    [ 0 0 0 0 · · · 0 1 ]
    [ 0 0 0 0 · · · 0 0 ].

(a) Find the matrices A², A³, . . . , A^n.
(b) Compute (I − A)(I + A + · · · + A^{n−1}).
(c) Find the inverse of I − A. □

Exercise 1.18. Let

P (x) = x^d + ad−1x^{d−1} + · · · + a1x + a0

be a polynomial of degree d with complex coefficients. We denote by S the collection of sequences of complex numbers, i.e., functions

f : {0, 1, 2, . . .} → C,   n ↦ f(n).

This is a complex vector space in a standard fashion. We denote by SP the subcollection of sequences f ∈ S satisfying the recurrence relation

f(n + d) + ad−1f(n + d − 1) + · · · + a1f(n + 1) + a0f(n) = 0, ∀n ≥ 0. (RP)

(a) Show that SP is a vector subspace of S.

(b) Show that the map I : SP → C^d which associates to f ∈ SP its initial values If,

If = ( f(0), f(1), . . . , f(d − 1) )^T ∈ C^d,

is an isomorphism of vector spaces.

(c) For any λ ∈ C we consider the sequence fλ defined by

fλ(n) = λ^n, ∀n ≥ 0.

(Above it is understood that λ^0 = 1.) Show that fλ ∈ SP if and only if P (λ) = 0, i.e., λ is a root of P.

(d) Suppose P has d distinct roots λ1, . . . , λd ∈ C. Show that the collection of sequences fλ1, . . . , fλd is a basis of SP.

(e) Consider the Fibonacci sequence (f(n))n≥0 defined by

f(0) = f(1) = 1,   f(n + 2) = f(n + 1) + f(n), ∀n ≥ 0.

Thus, f(2) = 2, f(3) = 3, f(4) = 5, f(5) = 8, f(6) = 13, . . . . Use the results (a)–(d) above to find a short formula describing f(n). □

Exercise 1.19. Let b, c be two distinct complex numbers. Consider the n × n Jacobi matrix

Jn = [ b+c  b    0    0    · · ·  0    0   ]
     [ c    b+c  b    0    · · ·  0    0   ]
     [ 0    c    b+c  b    · · ·  0    0   ]
     [ ⋮    ⋮    ⋮    ⋮           ⋮    ⋮   ]
     [ 0    0    · · · 0    c     b+c  b   ]
     [ 0    0    0    0    · · ·  c    b+c ].

Find a short formula for det Jn. Hint: Use the results in Exercises 1.14 and 1.18. □

2. SPECTRAL DECOMPOSITION OF LINEAR OPERATORS

2.1. Invariants of linear operators. Suppose that U is an n-dimensional F-vector space. We denote by L(U) the space of linear operators (maps) T : U → U. We already know that once we choose a basis e = (e1, . . . , en) of U we can represent T by a matrix

A = M(e, T ) = (aij)1≤i,j≤n,

where the elements of the k-th column of A describe the coordinates of T ek in the basis e, i.e.,

T ek = a1ke1 + · · · + anken = Σ_{j=1}^n ajkej.

A priori, there is no good reason for choosing the basis e = (e1, . . . , en) over another basis f = (f1, . . . , fn). With respect to this new basis the operator T is represented by another matrix

B = M(f, T ) = (bij)1≤i,j≤n,   T fk = Σ_{j=1}^n bjkfj.

The basis f is related to the basis e by a transition matrix

C = (cij)1≤i,j≤n,   fk = Σ_{j=1}^n cjkej.

Thus, the k-th column of C describes the coordinates of the vector fk in the basis e. Then C is invertible and

B = C−1AC. (2.1)

The space U has lots of bases, so the same operator T can be represented by many different matrices. The question we want to address in this section can be loosely stated as follows. Find bases of U so that, in these bases, the operator T is represented by "very simple" matrices. We will not define what a "very simple" matrix is, but we will agree that the more zeros a matrix has, the simpler it is. We already know that we can find bases in which the operator T is represented by upper triangular matrices. These have lots of zero entries, but it turns out that we can do much better than this.

The above question is closely related to the concept of invariant of a linear operator. An invariant is, roughly speaking, a quantity naturally associated to the operator that does not change when we change bases.

Definition 2.1. (a) A subspace V ⊂ U is called an invariant subspace of the linear operator T ∈ L(U) if T v ∈ V, ∀v ∈ V.

(b) A nonzero vector u0 ∈ U is called an eigenvector of the linear operator T if and only if the one-dimensional subspace spanned by u0 is an invariant subspace of T. □

Example 2.2. (a) Suppose that T : U → U is a linear operator. Its null space or kernel

ker T := { u ∈ U ; T u = 0 }

is an invariant subspace of T. Its dimension, dim ker T, is an invariant of T because in its definition we have not mentioned any particular basis. We have already encountered this dimension under a different guise.

If we choose a basis e = (e1, . . . , en) of U and use it to represent T as an n × n matrix A = (aij)1≤i,j≤n, then dim ker T is equal to the nullity of A, i.e., the dimension of the vector space of solutions of the linear system

Ax = 0,   x = (x1, . . . , xn)^T ∈ F^n.

The range

R(T ) = { T u ; u ∈ U }

is also an invariant subspace of T. Its dimension dim R(T ) can be identified with the rank of the matrix A above. The rank-nullity theorem implies that

dim ker T + dim R(T ) = dim U. (2.2)

(b) Suppose that u0 ∈ U is an eigenvector of T. Then T u0 ∈ span(u0), so that there exists λ ∈ F such that T u0 = λu0. □

2.2. The determinant and the characteristic polynomial of an operator. Assume again that U is an n-dimensional F-vector space. A more subtle invariant of an operator T ∈ L(U) is its determinant. This is a scalar det T ∈ F. Its definition requires a choice of a basis of U, but the end result is independent of any choice of basis. Here are the details.

Fix a basis e = (e1, . . . , en) of U. We use it to represent T as an n × n matrix A = (aij)1≤i,j≤n. More precisely, this means that

T ej = Σ_{i=1}^n aijei, ∀j = 1, . . . , n.

If we choose another basis of U, f = (f1, . . . , fn), then we can represent T by another n × n matrix B = (bij)1≤i,j≤n, i.e.,

T fj = Σ_{i=1}^n bijfi, j = 1, . . . , n.

As we have discussed above, the basis f is obtained from e via a change-of-basis matrix C = (cij)1≤i,j≤n, i.e.,

fj = Σ_{i=1}^n cijei, j = 1, . . . , n.

Moreover, the matrices A, B, C are related by the transition rule (2.1), B = C−1AC. Thus

det B = det(C−1AC) = det C−1 det A det C = det A.

The upshot is that the matrices A and B have the same determinant. Thus, no matter what basis of U we choose to represent T as an n × n matrix, the determinant of that matrix is independent of the basis used. This number, denoted by det T, is an invariant of T called the determinant of the operator T. Here is a simple application of this concept.

Corollary 2.3. ker T ≠ 0 ⟺ det T = 0. □

More generally, for any x ∈ F consider the operator x1 − T : U → U, defined by

(x1 − T )u = xu − T u, ∀u ∈ U.

We set

PT (x) = det(x1 − T ).

Proposition 2.4. The quantity PT (x) is a polynomial of degree n = dim U in the variable x.

Proof. Choose a basis e = (e1, . . . , en). In this basis T is represented by an n × n matrix A = (aij)1≤i,j≤n, and the operator x1 − T is represented by the matrix

xI − A = [ x − a11  −a12     −a13     · · ·  −a1n    ]
         [ −a21     x − a22  −a23     · · ·  −a2n    ]
         [ ⋮        ⋮        ⋮               ⋮       ]
         [ −an1     −an2     −an3     · · ·  x − ann ].

As explained in Remark 1.13, the determinant of this matrix is a sum of products of certain choices of n entries of this matrix, namely the entries that form a rook placement. Since there are exactly n entries in this matrix that contain the variable x, we see that each product associated to a rook placement of entries is a polynomial in x of degree ≤ n. There exists exactly one rook placement such that each of its entries contains the variable x. This placement is easily described: it consists of the entries situated on the diagonal of this matrix, and the product associated to these entries is

(x − a11) · · · (x − ann).

Any other rook placement contains at most (n − 1) entries that involve the variable x, so the corresponding product of these entries is a polynomial of degree at most n − 1. Hence

det(xI − A) = (x − a11) · · · (x − ann) + polynomial of degree ≤ n − 1.

Hence PT (x) = det(xI − A) is a polynomial of degree n in x. □

Definition 2.5. The polynomial PT (x) is called the characteristic polynomial of the operator T. □

Recall that a number λ ∈ F is called an eigenvalue of the operator T if and only if there exists u ∈ U \ 0 such that T u = λu, i.e., (λ1 − T )u = 0. Thus λ is an eigenvalue of T if and only if ker(λ1 − T ) ≠ 0. Invoking Corollary 2.3 we obtain the following important result.

Corollary 2.6. A scalar λ ∈ F is an eigenvalue of T if and only if it is a root of the characteristic polynomial of T, i.e., PT (λ) = 0. □

The collection of eigenvalues of an operator T is called the spectrum of T and it is denoted by spec(T ). If λ ∈ spec(T ), then the subspace ker(λ1 − T ) ⊂ U is called the eigenspace corresponding to the eigenvalue λ. From the above corollary and the fundamental theorem of algebra we obtain the following important consequence.

Corollary 2.7. If T : U → U is a linear operator on a complex vector space U, then spec(T ) ≠ ∅. □

We say that a linear operator T : U → U is triangulable if there exists a basis e = (e1, . . . , en) of U such that the matrix A representing T in this basis is upper triangular. We will refer to A as a triangular representation of T. Triangular representations, if they exist, are not unique. We already know that any linear operator on a complex vector space is triangulable.

Corollary 2.8. Suppose that T : U → U is a triangulable operator. Then for any basis e = (e1, . . . , en) of U such that the matrix A = (aij)1≤i,j≤n representing T in this basis is upper triangular, we have

PT (x) = (x − a11) · · · (x − ann).

Thus, the eigenvalues of T are the elements along the diagonal of any triangular representation of T. □
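Numerically, the characteristic polynomial and the spectrum are one call away. A sketch (ours; it assumes numpy, where np.poly applied to a square matrix returns the coefficients of det(xI − A) and np.linalg.eigvals returns its roots; the test matrix is our own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # already upper triangular

# coefficients of P_A(x) = det(x*1 - A), highest degree first
coeffs = np.poly(A)                   # here: x^2 - 5x + 6
assert np.allclose(coeffs, [1.0, -5.0, 6.0])

# the eigenvalues are the roots of P_A, i.e. the diagonal entries
assert np.allclose(sorted(np.linalg.eigvals(A)), [2.0, 3.0])
```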

2.3. Generalized eigenspaces. Suppose that T : U → U is a linear operator on the n-dimensional F-vector space U. Suppose that spec(T ) ≠ ∅. Choose an eigenvalue λ ∈ spec(T ).

Lemma 2.9. Let k be a positive integer. Then

ker(λ1 − T )^k ⊂ ker(λ1 − T )^{k+1}.

Moreover, if ker(λ1 − T )^k = ker(λ1 − T )^{k+1}, then

ker(λ1 − T )^k = ker(λ1 − T )^{k+1} = ker(λ1 − T )^{k+2} = ker(λ1 − T )^{k+3} = · · · .

Proof. Observe that if (λ1 − T )^k u = 0, then

(λ1 − T )^{k+1} u = (λ1 − T )(λ1 − T )^k u = 0,

so that ker(λ1 − T )^k ⊂ ker(λ1 − T )^{k+1}. Suppose that ker(λ1 − T )^k = ker(λ1 − T )^{k+1}. To prove that ker(λ1 − T )^{k+1} = ker(λ1 − T )^{k+2} it suffices to show that ker(λ1 − T )^{k+1} ⊃ ker(λ1 − T )^{k+2}. Let v ∈ ker(λ1 − T )^{k+2}. Then

(λ1 − T )^{k+1} (λ1 − T )v = 0,

so that (λ1 − T )v ∈ ker(λ1 − T )^{k+1} = ker(λ1 − T )^k, so that

(λ1 − T )^k (λ1 − T )v = 0,

i.e., v ∈ ker(λ1 − T )^{k+1}. We have thus shown that ker(λ1 − T )^{k+1} = ker(λ1 − T )^{k+2}. The remaining equalities ker(λ1 − T )^{k+2} = ker(λ1 − T )^{k+3} = · · · are proven in a similar fashion. □

Corollary 2.10. For any m ≥ n = dim U we have

ker(λ1 − T )^m = ker(λ1 − T )^n, (2.3a)

R(λ1 − T )^m = R(λ1 − T )^n. (2.3b)

Proof. Consider the sequence of positive integers

d1(λ) = dimF ker(λ1 − T ), . . . , dk(λ) = dimF ker(λ1 − T )^k, . . . .

Lemma 2.9 shows that

d1(λ) ≤ d2(λ) ≤ · · · ≤ n = dim U.

Thus there must exist k such that dk(λ) = dk+1(λ). We set

k0 = min{ k ; dk(λ) = dk+1(λ) }.

Thus

d1(λ) < · · · < dk0(λ) ≤ n,

so that k0 ≤ n. On the other hand, since dk0(λ) = dk0+1(λ), we deduce from Lemma 2.9 that

ker(λ1 − T )^{k0} = ker(λ1 − T )^m, ∀m ≥ k0.

Since n ≥ k0 we deduce

ker(λ1 − T )^n = ker(λ1 − T )^{k0} = ker(λ1 − T )^m, ∀m ≥ k0.

This proves (2.3a). To prove (2.3b) observe that if m > n, then

R(λ1 − T )^m = (λ1 − T )^n( (λ1 − T )^{m−n} U ) ⊂ (λ1 − T )^n( U ) = R(λ1 − T )^n.

On the other hand, the rank-nullity formula (2.2) implies that

dim R(λ1 − T )^n = dim U − dim ker(λ1 − T )^n = dim U − dim ker(λ1 − T )^m = dim R(λ1 − T )^m.

This proves (2.3b). □

Definition 2.11. Let T : U → U be a linear operator on the n-dimensional F-vector space U. Then for any λ ∈ spec(T ) the subspace ker(λ1 − T )^n is called the generalized eigenspace of T corresponding to the eigenvalue λ, and it is denoted by Eλ(T ). We will denote its dimension by mλ(T ), or mλ, and we will refer to it as the multiplicity of the eigenvalue λ. □
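The multiplicity mλ(T ) is computable from ranks alone, via the rank-nullity formula (2.2). A sketch (ours; it assumes numpy, and the function name multiplicity is our own):

```python
import numpy as np

def multiplicity(A, lam):
    # m_lambda = dim ker (lam*1 - A)^n, n = dim U  (Definition 2.11);
    # by rank-nullity, dim ker M = n - rank M.
    n = A.shape[0]
    M = np.linalg.matrix_power(lam * np.eye(n) - A, n)
    return n - np.linalg.matrix_rank(M)

# eigenvalue 2 with multiplicity 2, eigenvalue 5 with multiplicity 1
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
assert multiplicity(A, 2.0) == 2
assert multiplicity(A, 5.0) == 1
```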

Proposition 2.12. Let T ∈ L(U ), dimF U = n, and λ ∈ spec(T ). Then the generalized eigenspace Eλ(T ) is an invariant subspace of T.

Proof. We need to show that T Eλ(T ) ⊂ Eλ(T ). Let u ∈ Eλ(T ), i.e., (λ1 − T )^n u = 0. Clearly

λu − T u = (λ1 − T )u ∈ ker(λ1 − T )^{n+1} = Eλ(T ),

where the last equality follows from Corollary 2.10. Since λu ∈ Eλ(T ) we deduce that

T u = λu − (λu − T u) ∈ Eλ(T ). □

Theorem 2.13. Suppose that T : U → U is a triangulable operator on the n-dimensional F-vector space U. Then the following hold.

(a) For any λ ∈ spec(T ) the multiplicity mλ is equal to the number of times λ appears along the diagonal of a triangular representation of T.
(b)

det T = ∏_{λ∈spec(T )} λ^{mλ(T )}, (2.4a)

PT (x) = ∏_{λ∈spec(T )} (x − λ)^{mλ(T )}, (2.4b)

Σ_{λ∈spec(T )} mλ(T ) = deg PT = dim U = n. (2.4c)

Proof. To prove (a) we will argue by induction on n. For n = 1 the result is trivially true. For the inductive step we assume that the result is true for any triangulable operator on an (n − 1)-dimensional F-vector space V, and we will prove that the same is true for triangulable operators acting on an n-dimensional space U. Let T ∈ L(U ) be such an operator. We can then find a basis e = (e1, . . . , en) of U such that, in this basis, the operator T is represented by the upper triangular matrix

A = [ λ1  ∗   ∗    · · ·  ∗    ]
    [ 0   λ2  ∗    · · ·  ∗    ]
    [ ⋮   ⋮             ⋮     ]
    [ 0   · · ·  0  λn−1  ∗    ]
    [ 0   · · ·  0  0     λn   ].

Suppose that λ ∈ spec(T ). For simplicity we assume λ = 0; otherwise, we carry out the discussion below for the operator T − λ1. Let ν be the number of times 0 appears on the diagonal of A; we have to show that

ν = dim ker T^n.

Denote by V the subspace spanned by the vectors e1, . . . , en−1. Observe that V is an invariant subspace of T, i.e., T V ⊂ V. If we denote by S the restriction of T to V we can regard S as a linear operator S : V → V. The operator S is triangulable because in the basis (e1, . . . , en−1) of V it is represented by the upper triangular matrix

B = [ λ1  ∗   · · ·  ∗    ]
    [ 0   λ2  · · ·  ∗    ]
    [ ⋮   ⋮          ⋮    ]
    [ 0   · · ·  0   λn−1 ].

Denote by µ the number of times 0 appears on the diagonal of B. The induction hypothesis implies that

µ = dim ker S^{n−1} = dim ker S^n.

Clearly µ ≤ ν. Note that ker S^n ⊂ ker T^n, so that

µ = dim ker S^n ≤ dim ker T^n.

We distinguish two cases.

1. λn ≠ 0. In this case we have µ = ν, so it suffices to show that ker T^n ⊂ V. Indeed, if that were the case, we would conclude that ker T^n ⊂ ker S^n, and thus

dim ker T^n = dim ker S^n = µ = ν.

We argue by contradiction. Suppose that there exists u ∈ ker T^n such that u ∉ V. Thus, we can find v ∈ V and c ∈ F \ 0 such that

u = v + c en.

Note that T^n v ∈ V and

T en = λn en + (vector in V ).

Thus

T^n (c en) = c λn^n en + (vector in V ),

so that

T^n u = c λn^n en + (vector in V ) ≠ 0.

This contradiction completes the discussion of Case 1.

2. λn = 0. In this case we have ν = µ + 1, so we have to show that dim ker T^n = µ + 1. We need an auxiliary result.

Lemma 2.14. There exists u ∈ U \ V such that T^n u = 0, so that

dim(V + ker T^n) ≥ dim V + 1 = n. (2.5)

Proof. Set vn := T en. Observe that vn ∈ V because λn = 0. From (2.3b) we deduce that R(S^{n−1}) = R(S^n), so that there exists v0 ∈ V such that

S^{n−1} vn = S^n v0.

Set u := en − v0. Note that u ∈ U \ V,

T u = vn − T v0 = vn − S v0,

and

T^n u = T^{n−1}(vn − S v0) = S^{n−1} vn − S^n v0 = 0. □

Now observe that

n = dim U ≥ dim(V + ker T^n) ≥ n,

where the last inequality is (2.5), so that dim(V + ker T^n) = n. We conclude that

n = dim(V + ker T^n) = dim(ker T^n) + dim V − dim(V ∩ ker T^n) = dim(ker T^n) + (n − 1) − µ,

where we used dim V = n − 1 and V ∩ ker T^n = ker S^n, which has dimension µ. This shows that

dim ker T^n = µ + 1 = ν.

This proves (a). The equalities (2.4a), (2.4b), (2.4c) follow easily from (a). □

In the remainder of this section we will assume that F is the field of complex numbers, C. Suppose that U is a complex vector space and T ∈ L(U ) is a linear operator. We already know that T is triangulable, and we deduce from the above theorem the following important consequence.

Corollary 2.15. Suppose that T is a linear operator on the complex vector space U. Then

det T = ∏_{λ∈spec(T )} λ^{mλ(T )},   PT (x) = ∏_{λ∈spec(T )} (x − λ)^{mλ(T )},   Σ_{λ∈spec(T )} mλ(T ) = dim U. □

For any polynomial with complex coefficients

p(x) = a0 + a1x + · · · + anx^n ∈ C[x]

and any linear operator T on a complex vector space U we set

p(T ) = a0 1 + a1 T + · · · + an T^n.

Note that if p(x), q(x) ∈ C[x], and if we set r(x) = p(x)q(x), then r(T ) = p(T )q(T ).

Theorem 2.16 (Cayley-Hamilton). Suppose T is a linear operator on the complex vector space U. If PT (x) is the characteristic polynomial of T, then

PT (T ) = 0.

Proof. Fix a basis e = (e1, . . . , en) in which T is represented by the upper triangular matrix

A = [ λ1  ∗   ∗    · · ·  ∗   ]
    [ 0   λ2  ∗    · · ·  ∗   ]
    [ ⋮   ⋮             ⋮    ]
    [ 0   · · ·  0  λn−1  ∗   ]
    [ 0   · · ·  0  0     λn  ].

Note that

PT (x) = det(x1 − T ) = ∏_{j=1}^n (x − λj),

so that

PT (T ) = ∏_{j=1}^n (T − λj 1).

For j = 1, . . . , n we define

Uj := span{e1, . . . , ej},

and we set U0 = {0}. Note that for any j = 1, . . . , n we have

(T − λj 1)Uj ⊂ Uj−1.

Thus

PT (T )U = ∏_{j=1}^n (T − λj 1) Un = ∏_{j=1}^{n−1} (T − λj 1) ( (T − λn 1)Un ) ⊂ ∏_{j=1}^{n−1} (T − λj 1) Un−1 ⊂ ∏_{j=1}^{n−2} (T − λj 1) Un−2 ⊂ · · · ⊂ (T − λ1 1)U1 ⊂ {0}.

In other words, PT (T )u = 0, ∀u ∈ U. □
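Before the worked example below, here is a quick numerical check of the Cayley-Hamilton theorem on the matrix of Example 2.17 (a sketch, ours; it assumes numpy):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-2.0, -1.0]])            # the matrix of Example 2.17

# characteristic polynomial P_A(x) = x^2 - 2x + 1
c = np.poly(A)
assert np.allclose(c, [1.0, -2.0, 1.0])

# Cayley-Hamilton: P_A(A) = A^2 - 2A + I = 0
P_of_A = sum(ck * np.linalg.matrix_power(A, len(c) - 1 - k)
             for k, ck in enumerate(c))
assert np.allclose(P_of_A, np.zeros((2, 2)))
```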

Example 2.17. Consider the 2 × 2-matrix

A = [ 3   2  ]
    [ −2  −1 ].

Its characteristic polynomial is

PA(x) = det(xI − A) = det [ x − 3  −2    ]
                          [ 2      x + 1 ] = (x − 3)(x + 1) + 4 = x² − 2x − 3 + 4 = x² − 2x + 1.

The Cayley-Hamilton theorem shows that A² − 2A + I = 0. Let us verify this directly. We have

A² = [ 5   4  ]
     [ −4  −3 ]

and

A² − 2A + I = [ 5   4  ]   − 2 [ 3   2  ]   + [ 1 0 ]   = 0.
              [ −4  −3 ]       [ −2  −1 ]     [ 0 1 ]

We can rewrite the last equality as A² = 2A − I, so that A^{n+2} = 2A^{n+1} − A^n. We can rewrite this as

A^{n+2} − A^{n+1} = A^{n+1} − A^n = A^n − A^{n−1} = · · · = A − I.

Hence

A^n = (A^n − A^{n−1}) + (A^{n−1} − A^{n−2}) + · · · + (A − I) + I = n(A − I) + I = nA − (n − 1)I. □

2.4. The Jordan normal form of a complex operator. Let U be a complex n-dimensional vector space and T : U → U a linear operator. For each eigenvalue λ ∈ spec(T ) we denote by Eλ(T ) the corresponding generalized eigenspace, i.e.,

u ∈ Eλ(T ) ⟺ ∃k > 0 : (T − λ1)^k u = 0.

From Proposition 2.12 we know that Eλ(T ) is an invariant subspace of T. Suppose that the spectrum of T consists of ℓ distinct eigenvalues,

spec(T ) = { λ1, . . . , λℓ }.

Proposition 2.18. We have a direct sum decomposition

U = Eλ1(T ) ⊕ · · · ⊕ Eλℓ(T ).

Proof. It suffices to show that

U = Eλ1(T ) + · · · + Eλℓ(T ), (2.6a)

dim U = dim Eλ1(T ) + · · · + dim Eλℓ(T ). (2.6b)

The equality (2.6b) follows from (2.4c) since

dim U = mλ1(T ) + · · · + mλℓ(T ) = dim Eλ1(T ) + · · · + dim Eλℓ(T ),

so we only need to prove (2.6a). Set

V := Eλ1(T ) + · · · + Eλℓ(T ) ⊂ U.

We have to show that V = U. Note that since each of the generalized eigenspaces Eλ(T ) is an invariant subspace of T, so is their sum V. Denote by S the restriction of T to V, which we regard as an operator S : V → V. If λ ∈ spec(T ) and v ∈ Eλ(T ) ⊂ V, then

(S − λ1)^k v = (T − λ1)^k v = 0

for some k ≥ 0. Thus λ is also an eigenvalue of S and v is also a generalized eigenvector of S. This proves that spec(T ) ⊂ spec(S), and

Eλ(T ) ⊂ Eλ(S), ∀λ ∈ spec(T ).

In particular, this implies that

dim U = Σ_{λ∈spec(T )} dim Eλ(T ) ≤ Σ_{µ∈spec(S)} dim Eµ(S) = dim V ≤ dim U.

This shows that dim V = dim U and thus V = U. □

For any λ ∈ spec(T ) we denote by Sλ the restriction of T to the generalized eigenspace Eλ(T ). Since this is an invariant subspace of T we can regard Sλ as a linear operator

Sλ : Eλ(T ) → Eλ(T ).

Arguing as in the proof of the above proposition we deduce that Eλ(T ) is also a generalized eigenspace of Sλ. Thus, the spectrum of Sλ consists of the single eigenvalue λ, and

Eλ(T ) = Eλ(Sλ) = ker(λ1 − Sλ)^{dim Eλ(T )} = ker(λ1 − Sλ)^{mλ(T )}.

Thus, for any u ∈ Eλ(T ) we have

(λ1 − Sλ)^{mλ(T )} u = 0,

i.e.,

(Sλ − λ1)^{mλ(T )} = (−1)^{mλ(T )} (λ1 − Sλ)^{mλ(T )} = 0.

Definition 2.19. A linear operator N : U → U is called nilpotent if N^k = 0 for some k > 0. □

If we set Nλ := Sλ − λ1, we deduce that the operator Nλ is nilpotent.

Definition 2.20. Let N : U → U be a nilpotent operator on a finite dimensional complex vector space U. A tower of N is an ordered collection T of vectors

u1, u2, . . . , uk ∈ U

satisfying the equalities

N u1 = 0, N u2 = u1, . . . , N uk = uk−1.

The vector u1 is called the bottom of the tower, the vector uk is called the top of the tower, while the integer k is called the height of the tower. □

In Figure 1 we depicted a tower of height 4: N maps u4 ↦ u3 ↦ u2 ↦ u1 ↦ 0. Observe that the vectors in a tower are generalized eigenvectors of the corresponding nilpotent operator.

FIGURE 1. Pancaking a tower of height 4.

Towers interact in a rather pleasant way.

Proposition 2.21. Suppose that N : U → U is a nilpotent operator on a complex vector space U and T1, . . . , Tr are towers of N with bottoms b1, . . . , br. If the bottom vectors b1, . . . , br are linearly independent, then the following hold.
(i) The towers T1, . . . , Tr are mutually disjoint, i.e., Ti ∩ Tj = ∅ if i ≠ j.
(ii) The union T = T1 ∪ · · · ∪ Tr is a linearly independent family of vectors.

Proof. Denote by ki the height of the tower Ti and set

k = k1 + · · · + kr.

We will argue by induction on k, the sum of the heights of the towers. For k = 1 the result is trivially true. Assume the result is true for all collections of towers with total height < k and linearly independent bottoms, and let us prove that it is true for collections of towers with total height = k.

Denote by V the subspace spanned by the union T. It is an invariant subspace of N, and we denote by S the restriction of N to V. We regard S as a linear operator S : V → V. Denote by T′i the tower obtained by removing the top of the tower Ti, and set (see Figure 2)

T′ = T′1 ∪ · · · ∪ T′r.

Note that

R(S) = span(T′). (2.7)

The collection of towers T′1, . . . , T′r has total height

k′ = (k1 − 1) + · · · + (kr − 1) = k − r < k.

Moreover, the collection of its bottoms is a subcollection of {b1, . . . , br}, so it is linearly independent.

FIGURE 2. A group of 5 towers. The tops are shaded and about to be removed.

The inductive assumption implies that the towers T′1, . . . , T′r are mutually disjoint and their union T′ is a linearly independent family of vectors. Hence T′ is a basis of R(S), and thus

dim R(S) = k′ = k − r.

On the other hand,

S bj = N bj = 0, ∀j = 1, . . . , r.

Since the vectors b1, . . . , br are linearly independent we deduce that dim ker S ≥ r. From the rank-nullity theorem we deduce that

dim V = dim ker S + dim R(S) ≥ r + (k − r) = k.

On the other hand,

dim V = dim span(T) ≤ |T| ≤ k.

Hence |T| = dim V = k. This proves that the towers T1, . . . , Tr are mutually disjoint and their union is linearly independent. □

Theorem 2.22 (Jordan normal form of a nilpotent operator). Let N : U → U be a nilpotent operator on an n-dimensional complex vector space U. Then U has a basis consisting of a disjoint union of towers of N.

Proof. We will argue by induction on the dimension n of U. For n = 1 the result is trivially true. We assume that the result is true for any nilpotent operator on a space of dimension < n and we prove it is true for any nilpotent operator N on a space U of dimension n.

Observe that V = R(N ) is an invariant subspace of N. Moreover, since ker N ≠ 0, we deduce from the rank-nullity formula that

dim V = dim U − dim ker N < dim U.

Denote by M the restriction of N to V. We view M as a linear operator M : V → V. Clearly M is nilpotent. The induction assumption implies that there exists a basis of V consisting of mutually disjoint towers of M, T1, . . . , Tr. For any j = 1, . . . , r we denote by kj the height of Tj, by bj the bottom of Tj and by tj the top of Tj. By construction

dim R(N ) = k1 + · · · + kr.

Since tj ∈ V = R(N ) there exists uj ∈ U such that (see Figure 3)

tj = N uj.

FIGURE 3. Towers in R(N ).

Next observe that the bottoms b1, . . . , br belong to ker N and are linearly independent, because they form a subfamily of the linearly independent family T1 ∪ · · · ∪ Tr. We can therefore extend the family {b1, . . . , br} to a basis of ker N,

b1, . . . , br, v1, . . . , vs,   r + s = dim ker N.

We obtain new towers T̂1, . . . , T̂r, T̂r+1, . . . , T̂r+s defined by (see Figure 3)

T̂1 := T1 ∪ {u1}, . . . , T̂r := Tr ∪ {ur},   T̂r+1 := {v1}, . . . , T̂r+s := {vs}.

The sum of the heights of these towers is

(k1 + 1) + (k2 + 1) + · · · + (kr + 1) + (1 + · · · + 1)   (s terms)
= (k1 + · · · + kr) + (r + s) = dim R(N ) + dim ker N = dim U.

By construction, their bottoms are linearly independent, and Proposition 2.21 implies that they are mutually disjoint and their union is a linearly independent collection of vectors. The above computation shows that the number of elements in the union of these towers is equal to the dimension of U. Thus, this union is a basis of U. □

Definition 2.23. A Jordan basis of a nilpotent operator N : U → U is a basis of U consisting of a disjoint union of towers of N. □

Example 2.24. (a) Suppose that the nilpotent operator N : U → U admits a Jordan basis consisting of a single tower

e1, . . . , en.

Denote by Cn the matrix representing N in this basis. We use this basis to identify U with C^n, so that

e1 = (1, 0, . . . , 0)^T, e2 = (0, 1, . . . , 0)^T, . . . , en = (0, 0, . . . , 1)^T.

From the equalities

N e1 = 0, N e2 = e1, N e3 = e2, . . .

we deduce that the first column of Cn is trivial, the second column is e1, the 3-rd column is e2, etc. Thus Cn is the n × n matrix

Cn = [ 0 1 0 · · · 0 0 ]
     [ 0 0 1 · · · 0 0 ]
     [ ⋮ ⋮ ⋮       ⋮ ⋮ ]
     [ 0 0 0 · · · 0 1 ]
     [ 0 0 0 · · · 0 0 ].

The matrix Cn is called a (nilpotent) Jordan cell of size n.

(b) Suppose that the nilpotent operator N : U → U admits a Jordan basis consisting of mutually disjoint towers T1, . . . , Tr of heights k1, . . . , kr. For j = 1, . . . , r we set

Uj = span(Tj).

Observe that Uj is an invariant subspace of N, Tj is a basis of Uj, and we have a decomposition

U = U1 ⊕ · · · ⊕ Ur.

The restriction of N to Uj is represented in the basis Tj by the Jordan cell Ckj, so that in the basis T = T1 ∪ · · · ∪ Tr the operator N has the block-matrix description

[ Ck1  0    0    · · ·  0   ]
[ 0    Ck2  0    · · ·  0   ]
[ ⋮    ⋮    ⋮           ⋮   ]
[ 0    0    0    · · ·  Ckr ]. □

We want to point out that the sizes of the Jordan cells correspond to the heights of the towers in a Jordan basis. While there may be several Jordan bases, the heights of the towers are the same in all of them; see Remark 2.26. In other words, these heights are invariants of the nilpotent operator. □

Definition 2.25. The Jordan invariant of a nilpotent operator N is the nonincreasing list of the sizes of the Jordan cells that describe the operator in a Jordan basis. □

Remark 2.26 (Algorithmic construction of a Jordan basis). Here is how one constructs a Jordan basis of a nilpotent operator N : U → U on a complex vector space U of dimension n. (i) Compute N 2, N 3,... and stop at the moment m when N m = 0. Set 2 m R0 = U,R1 = R(N),R2 = R(N ),...,Rm = R(N ) = {0}.

Observe that R1,R2,...,Rm are invariant subspaces of N, satisfying

R0 ⊃ R1 ⊃ R2 ⊃ · · · ,

(ii) Denote by Nk the restriction of N to Rk, viewed as an operator Nk : Rk → Rk. Note that Nm−1 = 0, N0 = N and

Rk = R(Nk−1), ∀k = 1, . . . , m.

Set rk = dim Rk so that dim ker Nk = rk − rk+1. (iii) Construct a basis Bm−1 of Rm−1 = ker Nm−1. Bm−1 consists of rm−1 vectors. (iv) For each b ∈ Bm−1 find a vector t(b) ∈ U such that b = N m−1t(b).

For each b ∈ Bm−1 we thus obtain a tower of height m

  Tm−1(b) = { b = N^{m−1}t(b), N^{m−2}t(b), ..., Nt(b), t(b) }, b ∈ Bm−1.

(v) Extend Bm−1 to a basis Bm−2 = Bm−1 ∪ Cm−2

of ker Nm−2 ⊂ Rm−2. (vi) For each b ∈ Cm−2 ⊂ Rm−2 find t = t(b) ∈ U such that N^{m−2}t = b. For each b ∈ Cm−2 we thus obtain a tower of height m − 1

  Tm−2(b) = { b = N^{m−2}t(b), ..., Nt(b), t(b) }, b ∈ Cm−2.

(vii) Extend Bm−2 to a basis Bm−3 = Bm−2 ∪ Cm−3

of ker Nm−3 ⊂ Rm−3. (viii) For each b ∈ Cm−3 ⊂ Rm−3, find t(b) ∈ U such that N^{m−3}t(b) = b. For each b ∈ Cm−3 we thus obtain a tower of height m − 2

  Tm−3(b) = { b = N^{m−3}t(b), ..., Nt(b), t(b) }, b ∈ Cm−3.

(ix) Iterate the previous two steps. (x) In the end we obtain a basis

B0 = Bm−1 ∪ Cm−2 ∪ · · · ∪ C0

of ker N0 = ker N, vectors t(b), b ∈ Cj, and towers

  Tj(b) = { b = N^j t(b), ..., Nt(b), t(b) }, j = 0, . . . , m − 1, b ∈ Cj.

These towers form a Jordan basis of N.

(xi) For uniformity set Cm−1 = Bm−1, and for any j = 1, . . . , m denote by cj the cardinality of Cj−1. In the above Jordan basis the operator N will be a direct sum of c1 cells of dimension 1, c2 cells of dimension 2, etc. We obtain the identities

n = c1 + 2c2 + ··· + mcm,

r1 = c2 + 2c3 + ··· + (m − 1)cm, r2 = c3 + 2c4 + ··· + (m − 2)cm, and in general

  rj = cj+1 + 2cj+2 + ··· + (m − j)cm, ∀j = 0, . . . , m − 1, (2.8)

where r0 = n. If we treat the equalities (2.8) as a linear system with unknowns c1, . . . , cm, we see that the matrix of this system is upper triangular with only 1's along the diagonal. It is thus invertible, so the numbers c1, . . . , cm are uniquely determined by the numbers rj, which are invariants of the operator N. This shows that the sizes of the Jordan cells are independent of the chosen Jordan basis. If you are interested only in the sizes of the Jordan cells, all you have to do is find the integers m, r1, . . . , rm−1 and then solve the system (2.8). Exercise 2.13 explains how to explicitly express the cj-s in terms of the rj-s. ut

Remark 2.27. To find the Jordan invariant of a nilpotent operator N on a complex n-dimensional space proceed as follows.
(i) Find the smallest integer m such that N^m = 0.
(ii) Find the ranks rj of the matrices N^j, j = 0, . . . , m − 1, where N^0 := I.
(iii) Find the nonnegative integers c1, . . . , cm by solving the linear system (2.8).
(iv) The Jordan invariant of N is the list

  m, ..., m, (m − 1), ..., (m − 1), ..., 1, ..., 1,

where m appears cm times, (m − 1) appears cm−1 times, ..., and 1 appears c1 times. ut
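A minimal sketch of this recipe (Python with NumPy; our illustration, not part of the notes). Instead of solving (2.8) by back-substitution it uses the equivalent closed formula cj = rj−1 − 2rj + rj+1, which follows from (2.8) by taking second differences; expressing the cj-s through the rj-s in closed form is exactly the content of Exercise 2.13.

```python
import numpy as np

def jordan_invariant(N: np.ndarray) -> list:
    """Cell sizes of a nilpotent matrix N, in nonincreasing order (Remark 2.27)."""
    n = N.shape[0]
    r = [n]                                  # r_j = rank(N^j), with r_0 = n
    P = np.eye(n)
    for _ in range(n + 1):
        P = P @ N
        r.append(np.linalg.matrix_rank(P))
    m = next(j for j, rj in enumerate(r) if rj == 0)   # smallest m with N^m = 0
    cells = []
    for j in range(1, m + 1):
        c = r[j - 1] - 2 * r[j] + r[j + 1]   # c_j = number of cells of size j
        cells.extend([j] * c)
    return sorted(cells, reverse=True)
```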

If T : U → U is an arbitrary linear operator on a complex n-dimensional space U with spectrum spec(T ) = {λ1, . . . , λm}, then we have a direct sum decomposition

U = Eλ1 (T ) ⊕ · · · ⊕ Eλm (T ), where Eλj (T ) denotes the generalized eigenspace of T corresponding to the eigenvalue λj. The generalized eigenspace Eλj (T ) is an invariant subspace of T and we denote by Sλj the restriction of T to Eλj (T ). The operator Nλj = Sλj − λj1 is nilpotent.

A Jordan basis of U is a basis obtained as a union of Jordan bases of the nilpotent operators Nλ1, ..., Nλm. The matrix representing T in a Jordan basis is a direct sum of elementary Jordan λ-cells

                      [ λ 1 0 ··· 0 0 ]
                      [ 0 λ 1 ··· 0 0 ]
  Cn(λ) = Cn + λI =   [ : : :     : : ]
                      [ 0 0 0 ··· λ 1 ]
                      [ 0 0 0 ··· 0 λ ]

Definition 2.28. The Jordan invariant of a complex operator T is a collection of lists, one list for every eigenvalue of T . The list Lλ corresponding to the eigenvalue λ is the Jordan invariant of the nilpotent operator Nλ, the restriction of T − λ1 to Eλ(T ), arranged in nonincreasing order. ut

Example 2.29. Consider the 4 × 4 matrix

      [ 1  0 0 0 ]
  A = [ 1 −1 1 0 ]
      [ 2 −4 3 0 ]
      [ 0  0 0 1 ]

viewed as a linear operator C^4 → C^4. Expanding along the first row and then along the last column we deduce that

  PA(x) = det(xI − A) = (x − 1)^2 det [ x + 1   −1   ] = (x − 1)^4.
                                      [   4   x − 3  ]

Thus A has a single eigenvalue λ = 1 which has multiplicity 4. Set

               [ 0  0 0 0 ]
  N := A − I = [ 1 −2 1 0 ]
               [ 2 −4 2 0 ]
               [ 0  0 0 0 ]

The matrix N is nilpotent. In fact we have N^2 = 0. Upon inspecting N we see that each of its columns is a multiple of the first column. This means that the range of N is spanned by the vector

  u1 := Ne1 = (0, 1, 2, 0)†,

where e1, ..., e4 denotes the canonical basis of C^4. The vector u1 forms a tower of height 1 in R(N), which we can extend to a taller tower of N

T1 = (u1, u2), u2 = e1.

Next, we need to extend the basis {u1} of R(N) to a basis of ker N. The rank-nullity theorem tells us that dim R(N) + dim ker N = 4, so that dim ker N = 3. Thus, we need to find two more vectors v1, v2 so that the collection {u1, v1, v2} is a basis of ker N. To find ker N we need to solve the linear system Nx = 0, x = (x1, x2, x3, x4)†, which we do using Gauss elimination, i.e., row operations on N. Observe that

      [ 0  0 0 0 ]   [ 1 −2 1 0 ]   [ 1 −2 1 0 ]
  N = [ 1 −2 1 0 ] ~ [ 2 −4 2 0 ] ~ [ 0  0 0 0 ]
      [ 2 −4 2 0 ]   [ 0  0 0 0 ]   [ 0  0 0 0 ]
      [ 0  0 0 0 ]   [ 0  0 0 0 ]   [ 0  0 0 0 ]

Hence

  ker N = { x ∈ C^4 ; x1 − 2x2 + x3 = 0 },

and thus a basis of ker N is

  f1 = (2, 1, 0, 0)†, f2 = (−1, 0, 1, 0)†, f3 := (0, 0, 0, 1)†.

Observe that u1 = f1 + 2f2, and thus the collection {u1, f2, f3} is also a basis of ker N. We now have a Jordan basis of N consisting of the towers

T1 = {u1, u2}, T2 = {f2}, T3 = {f3}. In this basis the operator is described as a direct sum of three Jordan cells: a cell of dimension 2, and two cells of dimension 1. Thus the Jordan invariant of A consists of a single list L1 corresponding to the single eigenvalue 1. More precisely

L1 = 2, 1, 1. ut
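The outcome of Example 2.29 is easy to confirm numerically. The following sketch (ours, Python with NumPy) assembles the Jordan basis {u1, u2, f2, f3} found above, column by column, and verifies that it conjugates N into a direct sum of one 2-dimensional cell and two 1-dimensional (zero) cells.

```python
import numpy as np

A = np.array([[1,  0, 0, 0],
              [1, -1, 1, 0],
              [2, -4, 3, 0],
              [0,  0, 0, 1]], dtype=float)
N = A - np.eye(4)

u1 = np.array([0., 1., 2., 0.])   # N e1, bottom of the tall tower
u2 = np.array([1., 0., 0., 0.])   # e1, top of the tall tower
f2 = np.array([-1., 0., 1., 0.])  # height-1 towers completing a basis of ker N
f3 = np.array([0., 0., 0., 1.])

S = np.column_stack([u1, u2, f2, f3])   # Jordan basis as columns
J = np.linalg.inv(S) @ N @ S            # N in the Jordan basis
print(np.round(J, 12))                  # C_2 in the top-left corner, zeros elsewhere
```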

2.5. Exercises.

Exercise 2.1. Denote by U 3 the space of polynomials of degree ≤ 3 with real coefficients in one variable x. We denote by B the canonical basis of U 3, B = {1, x, x^2, x^3}.

(a) Consider the linear operator D : U 3 → U 3 given by

  U 3 ∋ p ↦ Dp = dp/dx ∈ U 3.

Find the matrix that describes D in the canonical basis B. (b) Consider the operator ∆ : U 3 → U 3 given by (∆p)(x) = p(x + 1) − p(x). Find the matrix that describes ∆ in the canonical basis B. (c) Show that for any p ∈ U 3 the function

  x ↦ (Lp)(x) = ∫_0^∞ e^{−t} (dp(x + t)/dx) dt

is also a polynomial of degree ≤ 3 and then find the matrix that describes L in the canonical basis B. ut

Exercise 2.2. (a) For any n × n matrix A = (aij)1≤i,j≤n we define the trace of A as the sum of the diagonal entries of A,

  tr A := a11 + ··· + ann = ∑_{i=1}^n aii.

Show that if A, B are two n × n matrices then tr AB = tr BA.

(b) Let U be an n-dimensional F-vector space, and e, f be two bases of U. Suppose that T : U → U is a linear operator represented in the basis e by the matrix A and in the basis f by the matrix B. Prove that tr A = tr B. (The common value of these traces is called the trace of the operator T and it is denoted by tr T .) Hint: Use part (a) and (2.1).
(c) Consider the operator A : F^2 → F^2 described by the matrix

  A = [ a11 a12 ]
      [ a21 a22 ]

Show that PA(x) = x^2 − (tr A)x + det A.
(d) Let T be a linear operator on the n-dimensional F-vector space. Prove that the characteristic polynomial of T has the form

  PT (x) = det(x1 − T ) = x^n − (tr T )x^{n−1} + ··· + (−1)^n det T . ut

Exercise 2.3. Suppose T : U → U is a linear operator on the F-vector space U, and V 1, V 2 ⊂ U are invariant subspaces of T . Show that V 1 ∩ V 2 and V 1 + V 2 are also invariant subspaces of T . ut

Exercise 2.4. Consider the Jacobi matrix

       [ 2 1 0 0 ··· 0 0 ]
       [ 1 2 1 0 ··· 0 0 ]
  Jn = [ 0 1 2 1 ··· 0 0 ]
       [ : : : :     : : ]
       [ 0 0 0 0 ··· 1 2 ]

(a) Let Pn(x) denote the characteristic polynomial of Jn,

Pn(x) = det(xI − Jn).

Show that

  P1(x) = x − 2, P2(x) = x^2 − 4x + 3 = (x − 1)(x − 3),

Pn(x) = (x − 2)Pn−1(x) − Pn−2(x), ∀n ≥ 3.

(b) Show that all the eigenvalues of J4 are real and distinct, and then conclude that the matrices 1, J4, J4^2, J4^3 are linearly independent. ut

Exercise 2.5. Consider the n × n matrix A = (aij)1≤i,j≤n, where aij = 1 for all i, j, which we regard as a linear operator C^n → C^n. Consider the vector

  c = (1, 1, ..., 1)† ∈ C^n.

(a) Compute Ac. (b) Compute dim R(A), dim ker A and then determine spec(A). (c) Find the characteristic polynomial of A. ut

Exercise 2.6. Find the eigenvalues and the eigenvectors of the circulant matrix described in Exercise 1.16. ut

Exercise 2.7. Prove the equality (2.7) that appears in the proof of Proposition 2.21. ut

Exercise 2.8. Let T : U → U be a linear operator on the finite dimensional complex vector space U. Suppose that m is a positive integer and u ∈ U is a vector such that T^{m−1}u ≠ 0 but T^m u = 0. Show that the vectors u, T u, ..., T^{m−1}u are linearly independent. ut

Exercise 2.9. Let T : U → U be a linear operator on the finite dimensional complex vector space U. Show that if dim ker T^{dim U − 1} ≠ dim ker T^{dim U}, then T is a nilpotent operator and dim ker T^j = j, ∀j = 1, ..., dim U. Can you construct an example of an operator T satisfying the above properties? ut

Exercise 2.10. Let S, T be two linear operators on the finite dimensional complex vector space U. Show that ST is nilpotent if and only if TS is nilpotent. ut

Exercise 2.11. Find a Jordan basis of the linear operator C^4 → C^4 described by the 4 × 4 matrix

      [ 1 0 −1 −1 ]
  A = [ 0 1  1  1 ]
      [ 0 0  1  0 ]
      [ 0 0  0  1 ]   ut

Exercise 2.12. Find a Jordan basis of the linear operator C^7 → C^7 given by the matrix

      [  3 2 1 1  0 −3  1 ]
      [ −1 2 0 0  1 −1  1 ]
      [  1 1 3 0 −6  5 −1 ]
  A = [  0 2 1 4  1 −3  1 ]   ut
      [  0 0 0 0  3  0  0 ]
      [  0 0 0 0 −2  5  0 ]
      [  0 0 0 0 −2  0  5 ]

Exercise 2.13. (a) Find the inverse of the m × m matrix

      [ 1 2 3 ··· m − 1   m   ]
      [ 0 1 2 ··· m − 2 m − 1 ]
  A = [ : : :       :     :   ]
      [ 0 0 0 ···   1     2   ]
      [ 0 0 0 ···   0     1   ]

(b) Find the Jordan normal form of the above matrix. ut

3. EUCLIDEAN SPACES

In the sequel F will denote either the field R of real numbers, or the field C of complex numbers. Any complex number z has the form z = a + bi, i = √−1, so that i^2 = −1. The real number a is called the real part of z and it is denoted by Re z. The real number b is called the imaginary part of z and it is denoted by Im z. The conjugate of a complex number z = a + bi is the complex number z̄ = a − bi. In particular, any real number is equal to its conjugate. Note that

  z + z̄ = 2 Re z, z − z̄ = 2i Im z.

The modulus or absolute value of a complex number z = a + bi is the real number

  |z| = √(a^2 + b^2).

Observe that

  |z|^2 = a^2 + b^2 = (a + bi)(a − bi) = z z̄.

In particular, if z ≠ 0 we have

  1/z = (1/|z|^2) z̄.

3.1. Inner products. Let U be an F-vector space. Definition 3.1. An inner product on U is a map

h−, −i : U × U → F, U × U ∋ (u1, u2) ↦ hu1, u2i ∈ F,

satisfying the following conditions. (i) Linearity in the first variable, i.e., ∀u, v, u2 ∈ U, x, y ∈ F we have

  hxu + yv, u2i = xhu, u2i + yhv, u2i.

(ii) Conjugate linearity in the second variable, i.e., ∀u1, u, v ∈ U, x, y ∈ F we have

hu1, xu + yvi = x̄hu1, ui + ȳhu1, vi.

(iii) Hermitian property, i.e., ∀u, v ∈ U, the number hv, ui is the complex conjugate of hu, vi.
(iv) Positive definiteness, i.e., hu, ui ≥ 0, ∀u ∈ U, and hu, ui = 0 ⇐⇒ u = 0.

A vector space U equipped with an inner product is called an Euclidean space. ut

Example 3.2. (a) The standard real n-dimensional Euclidean space. The vector space R^n is equipped with a canonical inner product

  h−, −i : R^n × R^n → R.

More precisely, if u = (u1, ..., un)†, v = (v1, ..., vn)† ∈ R^n, then

  hu, vi = u1v1 + ··· + unvn = ∑_{k=1}^n uk vk = u† · v.

You can verify that this is indeed an inner product, i.e., it satisfies the conditions (i)-(iv) in Definition 3.1.
(b) The standard complex n-dimensional Euclidean space. The vector space C^n is equipped with a canonical inner product

  h−, −i : C^n × C^n → C.

More precisely, if u = (u1, ..., un)†, v = (v1, ..., vn)† ∈ C^n, then

  hu, vi = u1v̄1 + ··· + unv̄n = ∑_{k=1}^n uk v̄k.

You can verify that this is indeed an inner product.
(c) Denote by Pn the vector space of polynomials with real coefficients and degree ≤ n. We can define an inner product on Pn,

  h−, −i : Pn × Pn → R,

by setting

  hp, qi = ∫_0^1 p(x)q(x) dx, ∀p, q ∈ Pn.

You can verify that this is indeed an inner product.
(d) Any finite dimensional F-vector space U admits an inner product. Indeed, if we fix a basis (e1, ..., en) of U, then we can define h−, −i : U × U → F by setting

  h u1e1 + ··· + unen , v1e1 + ··· + vnen i = u1v̄1 + ··· + unv̄n. ut
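As a quick numerical illustration (ours, Python with NumPy; not part of the notes), here is the canonical inner product of Example 3.2(b) together with checks of the Hermitian property (iii) and the positive definiteness (iv).

```python
import numpy as np

def inner(u: np.ndarray, v: np.ndarray) -> complex:
    """Canonical inner product <u, v> = sum_k u_k conj(v_k) of Example 3.2(b)."""
    return complex(np.sum(u * np.conj(v)))

rng = np.random.default_rng(0)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# (iii) Hermitian property: <v, u> is the conjugate of <u, v>
assert np.isclose(inner(v, u), np.conj(inner(u, v)))
# (iv) positive definiteness: <u, u> is real and positive for u != 0
assert np.isclose(inner(u, u).imag, 0) and inner(u, u).real > 0
```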

3.2. Basic properties of Euclidean spaces. Suppose that (U, h−, −i) is an Euclidean space. We define the norm or length of a vector u ∈ U to be the nonnegative number

  kuk := √hu, ui.

Example 3.3. In the standard Euclidean space R^n of Example 3.2(a) we have

  kuk = √( u1^2 + ··· + un^2 ). ut

Theorem 3.4 (Cauchy-Schwarz inequality). For any u, v ∈ U we have

|hu, vi| ≤ kuk · kvk.

Moreover, the equality is achieved if and only if the vectors u, v are linearly dependent, i.e., collinear.

Proof. If hu, vi = 0, then the inequality is trivially satisfied. Hence we can assume that hu, vi ≠ 0. In particular, u, v ≠ 0. From the positive definiteness of the inner product we deduce that for any x ∈ F we have

  0 ≤ ku − xvk^2 = hu − xv, u − xvi = hu, u − xvi − xhv, u − xvi
    = hu, ui − x̄hu, vi − xhv, ui + x x̄ hv, vi = kuk^2 − x̄hu, vi − xhv, ui + |x|^2 kvk^2.

If we let

  x0 = (1/kvk^2) hu, vi, (3.1)

then

  x̄0 hu, vi + x0 hv, ui = 2 |hu, vi|^2 / kvk^2, |x0|^2 kvk^2 = |hu, vi|^2 / kvk^2,

and thus

  0 ≤ ku − x0vk^2 = kuk^2 − 2 |hu, vi|^2/kvk^2 + |hu, vi|^2/kvk^2 = kuk^2 − |hu, vi|^2/kvk^2. (3.2)

Thus |hu, vi|^2 ≤ kuk^2 kvk^2, so that

  |hu, vi| ≤ kuk · kvk.

Note that if 0 = |hu, vi| = kuk · kvk, then at least one of the vectors u, v must be zero. If 0 ≠ |hu, vi| = kuk · kvk, then by choosing x0 as in (3.1) we deduce as in (3.2) that

ku − x0vk = 0.

Hence u = x0v. ut

Remark 3.5. The Cauchy-Schwarz theorem is a rather nontrivial result, which in skilled hands can produce remarkable consequences. Observe that if U is the standard real Euclidean space of Example 3.2(a), then the Cauchy-Schwarz inequality implies that for any real numbers u1, v1, . . . , un, vn we have

  ∑_{k=1}^n uk vk ≤ √( ∑_{k=1}^n uk^2 ) · √( ∑_{k=1}^n vk^2 ).

If we square both sides of the above inequality we deduce

  ( ∑_{k=1}^n uk vk )^2 ≤ ( ∑_{k=1}^n uk^2 ) · ( ∑_{k=1}^n vk^2 ). (3.3) ut

Observe that if u, v are two nonzero vectors in an Euclidean space (U, h−, −i), then the Cauchy-Schwarz inequality implies that

  Rehu, vi / (kuk · kvk) ∈ [−1, 1].

Thus there exists a unique θ ∈ [0, π] such that

  cos θ = Rehu, vi / (kuk · kvk).

This θ is called the angle between the two nonzero vectors u, v. We denote it by ∠(u, v). In particular, we have, by definition,

  cos ∠(u, v) = Rehu, vi / (kuk · kvk). (3.4)

Note that if the two vectors u, v were perpendicular in the classical sense, i.e., ∠(u, v) = π/2, then Rehu, vi = 0. This justifies the following notion.

Definition 3.6. Two vectors u, v in an Euclidean vector space (U, h−, −i) are said to be orthogonal if hu, vi = 0. We will indicate the orthogonality of two vectors u, v using the notation u ⊥ v. ut

In the remainder of this subsection we fix an Euclidean space (U, h−, −i).

Theorem 3.7 (Pythagora). If u ⊥ v, then

  ku + vk^2 = kuk^2 + kvk^2.

Proof. We have

  ku + vk^2 = hu + v, u + vi = hu, ui + hu, vi + hv, ui + hv, vi = kuk^2 + kvk^2,

since hu, vi = hv, ui = 0. ut

Theorem 3.8 (Triangle inequality). ku + vk ≤ kuk + kvk, ∀u, v ∈ U.

Proof. Observe that the inequality can be rewritten equivalently as

  ku + vk^2 ≤ ( kuk + kvk )^2.

Observe that

  ku + vk^2 = hu + v, u + vi = hu, ui + hu, vi + hv, ui + hv, vi = kuk^2 + kvk^2 + 2 Rehu, vi
  ≤ kuk^2 + kvk^2 + 2|hu, vi| ≤ kuk^2 + kvk^2 + 2kuk · kvk = ( kuk + kvk )^2,

where at the last inequality we used Cauchy-Schwarz. ut

3.3. Orthonormal systems and the Gramm-Schmidt procedure. In the sequel U will denote an n-dimensional Euclidean F-vector space. We will denote the inner product on U by h−, −i.

Definition 3.9. A family of nonzero vectors {u1, ..., uk} ⊂ U is called orthogonal if ui ⊥ uj, ∀i ≠ j. An orthogonal family {u1, ..., uk} ⊂ U is called orthonormal if kuik = 1, ∀i = 1, . . . , k. A basis of U is called orthogonal (respectively orthonormal) if it is an orthogonal (respectively orthonormal) family. ut

Proposition 3.10. Any orthogonal family in U is linearly independent.

Proof. Suppose that {u1, ..., uk} ⊂ U is an orthogonal family. If x1, . . . , xk ∈ F are such that

x1u1 + ··· + xkuk = 0, then taking the inner product with uj of both sides in the above equality we deduce

0 = x1hu1, uji + ··· + xj−1huj−1, uji + xjhuj, uji + xj+1huj+1, uji + ··· + xkhuk, uji

(using hui, uji = 0, ∀i ≠ j)

  = xj kujk^2.

Since uj ≠ 0 we deduce xj = 0. This happens for any j = 1, . . . , k, proving that the family is linearly independent. ut

Theorem 3.11 (Gramm-Schmidt). Suppose that {u1, ..., uk} ⊂ U is a linearly independent family. Then there exists an orthonormal family {e1, ..., ek} ⊂ U such that

  span{u1, ..., uj} = span{e1, ..., ej}, ∀j = 1, . . . , k.

Proof. We will argue by induction on k. For k = 1, if {u1} ⊂ U is a linearly independent family, then u1 ≠ 0 and we set

  e1 := (1/ku1k) u1.

Clearly {e1} is an orthonormal family spanning the same subspace as {u1}. Suppose that the result is true for any linearly independent family consisting of (k − 1) vectors. We need to prove that the result is true for linearly independent families consisting of k vectors. Let {u1, ..., uk} ⊂ U be such a family. The induction assumption implies that we can find an orthonormal system {e1, ..., ek−1} such that

  span{u1, ..., uj} = span{e1, ..., ej}, ∀j = 1, . . . , k − 1.

Define

  vk := huk, e1ie1 + ... + huk, ek−1iek−1,

f k := uk − vk. Observe that

  vk ∈ span{e1, ..., ek−1} = span{u1, ..., uk−1}.

Since {u1, ..., uk} is a linearly independent family we deduce that

  uk ∉ span{e1, ..., ek−1},

so that f k = uk − vk ≠ 0. We can now set

  ek := (1/kf kk) f k.

By construction kekk = 1. Also note that if 1 ≤ j < k, then

hf k, eji = huk − vk, eji = huk, eji − hvk, eji
  = huk, eji − h huk, e1ie1 + ... + huk, ek−1iek−1 , ej i
  = huk, eji − huk, ejihej, eji = 0.

This proves that {e1, ..., ek−1, ek} is an orthonormal family. Finally observe that

  uk = vk + f k = vk + kf kk ek.

Since vk ∈ span{e1, ..., ek−1} we deduce

  uk ∈ span{e1, ..., ek−1, ek},

and thus

  span{u1, ..., uk} = span{e1, ..., ek}. ut

Remark 3.12. The strategy used in the proof of the above theorem is as important as the theorem itself. The procedure we used to produce the orthonormal family {e1, ..., ek} goes by the name of the Gramm-Schmidt procedure. To understand how it works we consider a simple case, when U is the space R^2 equipped with the canonical inner product. Suppose that

  u1 = (3, 4)†, u2 := (5, 0)†.

Then

  ku1k^2 = 3^2 + 4^2 = 9 + 16 = 25,

so that ku1k = 5 and we set

  e1 := (1/ku1k) u1 = (3/5, 4/5)†.

Next,

  v2 = hu2, e1ie1 = 3e1, f 2 = u2 − v2 = (5, 0)† − (9/5, 12/5)† = (16/5, −12/5)† = 4 · (4/5, −3/5)†.

We see that kf 2k = 4 and thus

  e2 = (4/5, −3/5)†. ut
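The procedure translates into code line by line. A short sketch (ours, Python with NumPy; the name gram_schmidt is not from the notes) that reproduces the computation of Remark 3.12:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent family, as in Theorem 3.11."""
    basis = []
    for u in vectors:
        # subtract the projection of u onto the span of the e_j found so far
        f = u - sum(((u @ e) * e for e in basis), start=np.zeros_like(u))
        basis.append(f / np.linalg.norm(f))   # f != 0 by linear independence
    return basis

u1 = np.array([3.0, 4.0])
u2 = np.array([5.0, 0.0])
e1, e2 = gram_schmidt([u1, u2])
print(e1, e2)   # [0.6 0.8] [ 0.8 -0.6], matching Remark 3.12
```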

The Gramm-Schmidt theorem has many useful consequences. We will discuss a few of them.

Corollary 3.13. Any finite dimensional Euclidean vector space (over F = R, C) admits an orthonormal basis.

Proof. Apply Theorem 3.11 to a basis of the vector space. ut

The orthonormal bases of an Euclidean space have certain computational advantages. Suppose that

e1,..., en is an orthonormal basis of the Euclidean space U. Then the coordinates of a vector u ∈ U in this basis are easily computed. More precisely, if

u = x1e1 + ··· + xnen, x1, . . . , xn ∈ F, (3.5) then xj = hu, eji ∀j = 1, . . . , n. (3.6) Indeed, the equality (3.5) implies that

hu, eji = hx1e1 + ··· + xnen, eji = xjhej, eji = xj,

where at the last step we used the orthonormality condition, which translates to

  hei, eji = 1 if i = j, hei, eji = 0 if i ≠ j.

Applying Pythagora's theorem we deduce

  kx1e1 + ··· + xnenk^2 = |x1|^2 + ··· + |xn|^2 = |hu, e1i|^2 + ··· + |hu, eni|^2. (3.7)

Example 3.14. Consider the orthonormal basis e1, e2 of R^2 constructed in Remark 3.12. If u = (2, 1)†, then the coordinates x1, x2 of u in this basis are given by

  x1 = hu, e1i = 6/5 + 4/5 = 2, x2 = hu, e2i = 8/5 − 3/5 = 1,

so that u = 2e1 + e2. ut

Corollary 3.15. Any orthonormal family {e1, ..., ek} in a finite dimensional Euclidean vector space can be extended to an orthonormal basis of that space.

Proof. According to Proposition 3.10, the family {e1, ..., ek} is linearly independent. Therefore, we can extend it to a basis of U,

  {e1, ..., ek, uk+1, ..., un}.

If we apply the Gramm-Schmidt procedure to the above linearly independent family we obtain an orthonormal basis that extends our original orthonormal family; Exercise 3.5 asks you to verify this claim. ut

3.4. Orthogonal projections. Suppose that (U, h−, −i) is a finite dimensional Euclidean F-vector space. If X is a subset of U then we set

  X⊥ = { u ∈ U; hu, xi = 0, ∀x ∈ X }.

In other words, X⊥ consists of the vectors orthogonal to all the vectors in X. For this reason we will often write u ⊥ X to indicate that u ∈ X⊥. The following result is left to the reader.

Proposition 3.16. (a) The subset X⊥ is a vector subspace of U. (b) X ⊂ Y ⇒ X⊥ ⊃ Y⊥. (c) X⊥ = (span(X))⊥. ut

Theorem 3.17. If V is a subspace of U, then U = V ⊕ V ⊥.

Proof. We need to check two things. V ∩ V ⊥ = 0, (3.8a) U = V + V ⊥. (3.8b) Proof of (3.8a). If x ∈ V ∩ V ⊥ then 0 = hx, xi which implies that x = 0. Proof of (3.8b). Fix an orthonormal basis of V ,

B := {e1,..., ek}. Extend it to an orthonormal basis of U,

e1, ..., ek, ek+1, ..., en.

By construction,

  ek+1, ..., en ∈ B⊥ = (span(B))⊥ = V⊥,

so that

  span{ek+1, ..., en} ⊂ V⊥.


Clearly any vector u ∈ U can be written as a sum of two vectors

  u = v + w, v ∈ span(B) = V , w ∈ span{ek+1, ..., en} ⊂ V⊥. ut

Corollary 3.18. If V is a subspace of U, then V = (V⊥)⊥.

Proof. Theorem 3.17 implies that for any subspace W of U we have dim W⊥ = dim U − dim W . If we let W = V⊥ we deduce that dim(V⊥)⊥ = dim U − dim V⊥. If we let W = V we deduce dim V⊥ = dim U − dim V . Hence dim V = dim(V⊥)⊥, so it suffices to show that V ⊂ (V⊥)⊥, i.e., we have to show that any vector v in V is orthogonal to any vector w in V⊥. Since w ∈ V⊥ we have w ⊥ v so that v ⊥ w. ut

Suppose that V is a subspace of U. Then any u ∈ U admits a unique decomposition

  u = v + w, v ∈ V , w ∈ V⊥.

We set PV u := v. Observe that if

  u0 = v0 + w0, u1 = v1 + w1, v0, v1 ∈ V , w0, w1 ∈ V⊥,

then

  (u0 + u1) = (v0 + v1) + (w0 + w1), with v0 + v1 ∈ V and w0 + w1 ∈ V⊥,

and we deduce

  PV (u0 + u1) = v0 + v1 = PV u0 + PV u1.

Similarly, if λ ∈ F and u ∈ U, then λu = λv + λw and we deduce PV (λu) = λv = λPV u. We have thus shown that the map

PV : U → U, u ↦ PV u

is a linear operator. It is called the orthogonal projection onto the subspace V . Observe that

  R(PV ) = V , ker PV = V⊥. (3.9)

Note that PV u is the unique vector v in V with the property that (u − v) ⊥ V .

Proposition 3.19. Suppose V is a subspace of U and e1,..., ek is an orthonormal basis of V . Then

PV u = hu, e1ie1 + ··· + hu, ekiek, ∀u ∈ U.

Proof. It suffices to show that the vector

  w = u − ( hu, e1ie1 + ··· + hu, ekiek )

is orthogonal to all the vectors e1, ..., ek, because then it will be orthogonal to any linear combination of these vectors. We have

  hw, eji = hu, eji − h hu, e1ie1 + ··· + hu, ekiek , ej i.

Since e1, ..., ek is an orthonormal basis of V we deduce that hei, eji = 1 if i = j and hei, eji = 0 if i ≠ j. Hence

  h hu, e1ie1 + ··· + hu, ekiek , ej i = hu, ejihej, eji = hu, eji,

so that hw, eji = 0. ut
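Proposition 3.19 is exactly how projections are computed in practice. A small sketch (ours, Python with NumPy): project a vector onto the span of an orthonormal family and check that u − PV u is orthogonal to V , the characterization noted after (3.9).

```python
import numpy as np

def project(u: np.ndarray, onb) -> np.ndarray:
    """P_V u = sum_j <u, e_j> e_j, where onb is an orthonormal basis of V."""
    return sum((u @ e) * e for e in onb)

e1 = np.array([1.0, 0.0, 0.0])          # V = span{e1, e2} inside R^3
e2 = np.array([0.0, 0.6, 0.8])
u = np.array([2.0, 1.0, 1.0])

p = project(u, [e1, e2])
assert np.isclose((u - p) @ e1, 0.0)    # u - P_V u is orthogonal to V
assert np.isclose((u - p) @ e2, 0.0)
```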

Theorem 3.20. Let V be a subspace of U. Fix u0 ∈ U. Then

ku0 − PV u0k ≤ ku0 − vk, ∀v ∈ V ,

and we have equality if and only if v = PV u0. In other words, PV u0 is the vector in V closest to u0.

Proof. Set v0 := PV u0, w0 := u0 − PV u0 ∈ V⊥. Then for any v ∈ V we have

u0 − v = (v0 − v) + w0.

Since (v0 − v) ⊥ w0 we deduce from Pythagora's Theorem that

  ku0 − vk^2 = kv0 − vk^2 + kw0k^2 ≥ kw0k^2 = ku0 − PV u0k^2.

Hence ku0 − PV u0k ≤ ku0 − vk, and we have equality if and only if v = v0 = PV u0. ut

Proposition 3.21. Let V be a subspace of U. Then

  PV^2 = PV, and kPV uk ≤ kuk, ∀u ∈ U.

Proof. By construction PV v = v, ∀v ∈ V . Hence

  PV (PV u) = PV u, ∀u ∈ U,

because PV u ∈ V . Next, observe that for any u ∈ U we have PV u ⊥ (u − PV u) so that

  kuk^2 = kPV uk^2 + ku − PV uk^2 ≥ kPV uk^2. ut

3.5. Linear functionals and adjoints on Euclidean spaces. Suppose that U is a finite dimensional F-vector space, F = R, C. The dual of U is the F-vector space of linear functionals on U, i.e., linear maps α : U → F. The dual of U is denoted by U∗. The vector space U∗ has the same dimension as U and thus they are isomorphic. However, there is no distinguished isomorphism between these two vector spaces! We want to show that if we fix an inner product on U, then we can construct in a concrete way an isomorphism between these spaces. Before we proceed with this construction let us first observe that there is a natural bilinear map

  B : U∗ × U → F, B(α, u) = α(u).

Theorem 3.22 (Riesz representation theorem: the real case). Suppose that U is a finite dimensional real vector space equipped with an inner product h−, −i : U × U → R. To any u ∈ U we associate the linear functional u∗ ∈ U∗ defined by the equality

  B(u∗, x) = u∗(x) := hx, ui, ∀x ∈ U.

Then the map U ∋ u ↦ u∗ ∈ U∗ is a linear isomorphism.

Proof. Let us show that the map u ↦ u∗ is linear. For any u, v, x ∈ U we have

  (u + v)∗(x) = hx, u + vi = hx, ui + hx, vi = u∗(x) + v∗(x) = (u∗ + v∗)(x),

which shows that (u + v)∗ = u∗ + v∗. For any u, x ∈ U and any t ∈ R we have

  (tu)∗(x) = hx, tui = thx, ui = tu∗(x),

i.e., (tu)∗ = tu∗. This proves the claimed linearity. To prove that it is an isomorphism, we need to show that it is both injective and surjective.

Injectivity. Suppose that u ∈ U is such that u∗ = 0. This means that u∗(x) = 0, ∀x ∈ U. If we let x = u we deduce

  0 = u∗(u) = hu, ui = kuk^2,

so that u = 0.

Surjectivity. We have to show that for any α ∈ U∗ there exists u ∈ U such that α = u∗. Let n = dim U. Fix an orthonormal basis e1, ..., en of U. The linear functional α is uniquely determined by its values on ei, αi = α(ei). Define

  u := ∑_{k=1}^n αk ek.

Then

  u∗(ei) = hei, ui = hei, α1e1 + ··· + αiei + ··· + αneni

= hei, α1e1i + ··· + hei, αieii + ··· + hei, αneni = αi.

Thus

  u∗(ei) = α(ei), ∀i = 1, . . . , n,

so that α = u∗. ut

Theorem 3.23 (Riesz representation theorem: the complex case). Suppose that U is a finite dimensional complex vector space equipped with an inner product h−, −i : U × U → C. To any u ∈ U we associate the linear functional u∗ ∈ U∗ defined by the equality

  B(u∗, x) = u∗(x) := hx, ui, ∀x ∈ U.

Then the map U ∋ u ↦ u∗ ∈ U∗ is bijective and conjugate linear, i.e., for any u, v ∈ U and any z ∈ C we have

  (u + v)∗ = u∗ + v∗, (zu)∗ = z̄ u∗.

Proof. Let us first show that the map u ↦ u∗ is conjugate linear. For any u, v, x ∈ U we have

  (u + v)∗(x) = hx, u + vi = hx, ui + hx, vi = u∗(x) + v∗(x) = (u∗ + v∗)(x),

which shows that (u + v)∗ = u∗ + v∗. For any u, x ∈ U and any z ∈ C we have

  (zu)∗(x) = hx, zui = z̄hx, ui = z̄ u∗(x),

i.e., (zu)∗ = z̄ u∗. This proves the claimed conjugate linearity. We now prove the bijectivity claim.

Injectivity. Suppose that u ∈ U is such that u∗ = 0. This means that u∗(x) = 0, ∀x ∈ U. If we let x = u we deduce

  0 = u∗(u) = hu, ui = kuk^2,

so that u = 0.

Surjectivity. We have to show that for any α ∈ U∗ there exists u ∈ U such that α = u∗. Let n = dim U. Fix an orthonormal basis e1, ..., en of U. The linear functional α is uniquely determined by its values on ei, αi = α(ei). Define

  u := ∑_{k=1}^n ᾱk ek.

Then

  u∗(ei) = hei, ui = hei, ᾱ1e1 + ··· + ᾱiei + ··· + ᾱneni

= hei, ᾱ1e1i + ··· + hei, ᾱieii + ··· + hei, ᾱneni = hei, ᾱieii = αihei, eii = αi.

Thus

  u∗(ei) = α(ei), ∀i = 1, . . . , n,

so that α = u∗. ut

Suppose that U and V are two F-vector spaces equipped with inner products h−, −iU and respectively h−, −iV . Next, assume that T : U → V is a linear map.

Theorem 3.24. There exists a unique linear map S : V → U satisfying the equality

hT u, viV = hu,SviU , ∀u ∈ U, v ∈ V . (3.10)

This map is called the adjoint of T with respect to the inner products h−, −iU , h−, −iV and it is denoted by T∗. The equality (3.10) can then be rewritten

  hT u, viV = hu, T∗viU , hT∗v, uiU = hv, T uiV , ∀u ∈ U, v ∈ V . (3.11)

Proof. Uniqueness. Suppose there are two linear maps S1,S2 : V → U satisfying (3.10). Thus

0 = hu,S1viU − hu,S2viU = hu, (S1 − S2)viU , ∀u ∈ U, v ∈ V .

For fixed v ∈ V we let u = (S1 − S2)v and we deduce from the above equality

  0 = h (S1 − S2)v, (S1 − S2)v iU = k(S1 − S2)vk^2,

so that (S1 − S2)v = 0, for any v in V . This shows that S1 = S2, thus proving the uniqueness part of the theorem.

Existence. Any v ∈ V defines a linear functional

Lv : U → F, Lv(u) = hT u, viV .

Thus there exists a unique vector Sv ∈ U such that

  Lv = (Sv)∗ ⇐⇒ hT u, viV = hu, SviU , ∀u ∈ U.

One can verify easily that the correspondence V ∋ v ↦ Sv ∈ U described above is a linear map; see Exercise 3.13. ut

Example 3.25. Let T : U → V be as in the statement of Theorem 3.24. Assume m = dimF U, n = dimF V . Fix an orthonormal basis

e := {e1, ..., em} of U and an orthonormal basis f := {f 1, ..., f n} of V . With respect to these bases the operator T is represented by an n × m matrix

      [ a11 a12 ··· a1m ]
  A = [ a21 a22 ··· a2m ]
      [  :   :        : ]
      [ an1 an2 ··· anm ]

while the adjoint operator T∗ : V → U is represented by an m × n matrix

       [ a∗11 a∗12 ··· a∗1n ]
  A∗ = [ a∗21 a∗22 ··· a∗2n ]
       [  :    :         :  ]
       [ a∗m1 a∗m2 ··· a∗mn ]

The j-th column of A describes the coordinates of the vector T ej in the basis f,

T ej = a1jf 1 + ··· + anjf n.

We deduce that

  hT ej, f iiV = aij.

Hence

  hf i, T ejiV = āij.

On the other hand, the i-th column of A∗ describes the coordinates of T∗f i in the basis e, so that

  T∗f i = a∗1ie1 + ··· + a∗miem,

and we deduce that

  hT∗f i, eji = a∗ji.

On the other hand, we have

āij = hf i, T ejiV = hT∗f i, ejiU = a∗ji,

where we used (3.11) at the middle step. Thus, A∗ is the conjugate transpose of A. In other words, the entries of A∗ are the conjugates of the corresponding entries of the transpose of A, A∗ = Ā† (the bar denoting entrywise conjugation). ut
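A numerical sanity check of Example 3.25 (ours, Python with NumPy): with respect to the standard inner products, the matrix of the adjoint is the conjugate transpose, so the defining equality (3.10) can be tested directly on random vectors.

```python
import numpy as np

def inner(u, v):
    """Canonical inner product of Example 3.2(b)."""
    return np.sum(u * np.conj(v))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A_star = A.conj().T                     # conjugate transpose = matrix of T*

u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# <T u, v>_V == <u, T* v>_U, the defining equality (3.10)
assert np.isclose(inner(A @ u, v), inner(u, A_star @ v))
```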

Definition 3.26. Let U be a finite dimensional Euclidean F-space with inner product h−, −i. A linear operator T : U → U is called selfadjoint or symmetric if T = T ∗, i.e.,

hT u1, u2i = hu1, T u2i, ∀u1, u2 ∈ U. ut

Example 3.27. (a) Consider the standard real Euclidean n-dimensional space R^n. Any real n × n matrix A can be identified with a linear operator TA : R^n → R^n. The operator TA is selfadjoint if and only if the matrix A is symmetric, i.e.,

aij = aji, ∀i, j, or, equivalently, A = A† = the transpose of A.
(b) Consider the standard complex Euclidean n-dimensional space C^n. Any complex n × n matrix A can be identified with an operator TA : C^n → C^n. The operator TA is selfadjoint if and only if the matrix A is Hermitian, i.e., aij = āji, ∀i, j, or, equivalently, A = A∗ = the conjugate transpose of A.
(c) Suppose that V is a subspace of the finite dimensional Euclidean space U. Then the orthogonal projection PV : U → U is a selfadjoint operator, i.e.,

hPV u1, u2i = hu1,PV u2i, ∀u1, u2 ∈ U.

Indeed, let u1, u2 ∈ U. They decompose uniquely as

  u1 = v1 + w1, u2 = v2 + w2, v1, v2 ∈ V , w1, w2 ∈ V⊥.

Then PV u1 = v1 so that

hPV u1, u2i = hv1, v2 + w2i = hv1, v2i + hv1, w2i = hv1, v2i, since w2 ⊥ v1.

Similarly, PV u2 = v2 and we deduce

hu1, PV u2i = hv1 + w1, v2i = hv1, v2i + hw1, v2i = hv1, v2i, since w1 ⊥ v2. ut

Proposition 3.28. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is a selfadjoint operator. Then spec T ⊂ R. In other words, the eigenvalues of a selfadjoint operator are real.

Proof. Let λ be an eigenvalue of T and u ≠ 0 an eigenvector of T corresponding to the eigenvalue λ. We have T u = λu so that

  λkuk^2 = hλu, ui = hT u, ui ⇒ λ = (1/kuk^2) hT u, ui.

On the other hand, the Hermitian property of the inner product shows that hu, T ui is the complex conjugate of hT u, ui. Since T is selfadjoint we have hu, T ui = hT u, ui, so that hT u, ui is equal to its own conjugate. Hence the inner product hT u, ui is a real number. From the equality

  λ = (1/kuk^2) hT u, ui

we deduce that λ is a real number as well. ut

Corollary 3.29. If A is an n × n complex matrix such that A = A∗, then all the roots of the characteristic polynomial PA(λ) = det(λ1 − A) are real.

Proof. The roots of PA(λ) are the eigenvalues of the linear operator TA : C^n → C^n defined by A. Since A = A∗ we deduce that TA is selfadjoint with respect to the natural inner product on C^n so that all its eigenvalues are real. ut

Theorem 3.30. Suppose that U, V are two finite dimensional Euclidean F-vector spaces and T : U → V is a linear operator. Then the following hold.
(a) (T∗)∗ = T .
(b) ker T = R(T∗)⊥.
(c) ker T∗ = R(T )⊥.
(d) R(T ) = (ker T∗)⊥.
(e) R(T∗) = (ker T )⊥.

Proof. (a) The operator (T∗)∗ is a linear operator U → V . We need to prove that for any u ∈ U we have

  x := (T∗)∗u − T u = 0.

Because (T∗)∗ is the adjoint of T∗ we deduce from (3.11) that

  h(T∗)∗u, viV = hu, T∗viU .

Because T∗ is the adjoint of T we deduce from (3.11) that

  hu, T∗viU = hT u, viV .

Hence, for any v ∈ V we have

  0 = h(T∗)∗u, viV − hT u, viV = hx, viV .

By choosing v = x we deduce x = 0.

(b) We need to prove that u ∈ ker T ⇐⇒ u ⊥ R(T∗). Let u ∈ ker T , i.e., T u = 0. To prove that u ⊥ R(T∗) we need to show that u ⊥ T∗v, ∀v ∈ V . For v ∈ V we have, by (3.11),

  hu, T∗vi = hT u, vi = 0,

so that u ⊥ T∗v for any v ∈ V . Conversely, let us assume that u ⊥ T∗v for any v ∈ V . We have to show that x = T u = 0. Observe that x ∈ V so that u ⊥ T∗x. We deduce

  0 = hu, T∗xi = hT u, xi = hx, xi = kxk^2.

Hence x = 0.
(c) Set S := T∗. From (b) we deduce

  ker T∗ = ker S = R(S∗)⊥.

From (a) we deduce S∗ = (T∗)∗ = T and (c) is now obvious. Part (d) follows from (c) and Corollary 3.18, while (e) follows from (b) and Corollary 3.18. ut

Corollary 3.31. Suppose that U is a finite dimensional Euclidean vector space, and T : U → U is a selfadjoint operator. Then ker T = R(T )⊥, R(T ) = (ker T )⊥, U = ker T ⊕ R(T ). ut

Proposition 3.32. (a) Suppose that U, V , W are finite dimensional Euclidean F-spaces. If T : U → V , S : V → W are linear operators, then

  (ST )∗ = T∗S∗.

(b) Suppose that T : U → V is a linear operator between two finite dimensional Euclidean F-spaces. Then T is invertible if and only if the adjoint T∗ : V → U is invertible. Moreover, if T is invertible, then

  (T^{−1})∗ = (T∗)^{−1}.

(c) Suppose that S, T : U → V are linear operators between two finite dimensional Euclidean F-spaces. Then

  (S + T )∗ = S∗ + T∗, (zS)∗ = z̄S∗, ∀z ∈ F. ut

The proof is left to you as Exercise 3.15.

Proposition 3.33. Suppose that U is a finite dimensional Euclidean F-space and S, T : U → U are two selfadjoint operators. Then

  S = T ⇐⇒ hSu, ui = hT u, ui, ∀u ∈ U.

Proof. The implication ” ⇒ ” is obvious so it suffices to prove that hSu, ui = hT u, ui, ∀u ∈ U ⇒ S = T. We set A := S − T . Then A is a selfadjoint operator and it suffices to show that hAu, ui = 0, ∀u ∈ U ⇒ A = 0. We distinguish two cases. (a) F = R. For any u, v ∈ U we have

0 = hA(u + v), u + vi = hAu, u + vi + hAv, u + vi = hAu, ui + hAu, vi + hAv, ui + hAv, vi = hAu, vi + hv, Aui = 2hAu, vi,

where we used hAu, ui = hAv, vi = 0, and hAv, ui = hv, Aui = hAu, vi (A is selfadjoint and F = R). Hence hAu, vi = 0, ∀u, v ∈ U. If in the above equality we let v = Au we deduce kAuk^2 = 0, ∀u ∈ U, i.e., A = 0.
(b) F = C. For any u, v ∈ U we have

  0 = hA(u + v), u + vi = hAu, ui + hAu, vi + hAv, ui + hAv, vi = hAu, vi + hv, Aui = 2 RehAu, vi,

since hAu, ui = hAv, vi = 0 and hv, Aui is the complex conjugate of hAu, vi. Hence

  RehAu, vi = 0, ∀u, v ∈ U. (3.12)

Similarly, for any u, v ∈ U we have

  0 = hA(u + iv), u + ivi = hAu, u + ivi + ihAv, u + ivi = hAu, ui − ihAu, vi + ihAv, ui − i^2 hAv, vi

  = −ihAu, vi + ihv, Aui = −i( hAu, vi − the conjugate of hAu, vi ) = 2 ImhAu, vi.

Hence

  ImhAu, vi = 0, ∀u, v ∈ U. (3.13)

Putting together (3.12) and (3.13) we deduce that hAu, vi = 0, ∀u, v ∈ U. If we now let v = Au in the above equality we deduce as in the real case that Au = 0, ∀u ∈ U. ut

Definition 3.34. Let U, V be two finite dimensional Euclidean F-vector spaces. A linear operator T : U → V is called an isometry if for any u ∈ U we have

kT ukV = kukU . ut

Proposition 3.35. A linear operator T : U → V between two finite dimensional Euclidean vector spaces is an isometry if and only if

  T∗T = 1U . ut

The proof is left to you as Exercise 3.16. Definition 3.36. Let U be a finite dimensional Euclidean F-space. A linear operator T : U → U is called an orthogonal operator if T is an isometry. We denote by O(U) the space of orthogonal operators on U. ut

Proposition 3.37. Let U be a finite dimensional Euclidean F-space. Then

  T ∈ O(U) ⇐⇒ T∗T = TT∗ = 1U .

Proof. The implication " ⇐ " follows from Proposition 3.35. To prove the opposite implication, assume that T is an orthogonal operator. Hence kT uk = kuk, ∀u ∈ U. This implies in particular that ker T = 0, so that T is invertible. If we let u = T^{−1}v in the above equality we deduce

  kT^{−1}vk = kvk, ∀v ∈ U.

Hence T^{−1} is also an isometry so that

  (T^{−1})∗ T^{−1} = 1U .

Using Proposition 3.32 we deduce (T^{−1})∗ = (T∗)^{−1}. Hence

  (T∗)^{−1} T^{−1} = 1U .

Taking the inverses of both sides of the above equality we deduce

  1U = ( (T∗)^{−1} T^{−1} )^{−1} = (T^{−1})^{−1} ((T∗)^{−1})^{−1} = TT∗. ut
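For a concrete instance of Proposition 3.37 (our sketch, Python with NumPy): a rotation of R^2 is an isometry, and in the real case T∗ is the transpose, so T∗T = TT∗ = 1 can be checked directly.

```python
import numpy as np

theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation of R^2 by theta

u = np.array([2.0, -1.0])
assert np.isclose(np.linalg.norm(T @ u), np.linalg.norm(u))   # isometry

# Proposition 3.37: T* T = T T* = identity
assert np.allclose(T.T @ T, np.eye(2))
assert np.allclose(T @ T.T, np.eye(2))
```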

3.6. Exercises. Exercise 3.1. Prove the claims made in Example 3.2 (a), (b), (c). ut

Exercise 3.2. Let (U, h−, −i) be an Euclidean space. (a) Show that for any u, v ∈ U we have

  ku + vk^2 + ku − vk^2 = 2( kuk^2 + kvk^2 ).

(b) Let u0,..., un ∈ U. Prove that

ku0 − unk ≤ ku0 − u1k + ku1 − u2k + ··· + kun−1 − unk. ut

Exercise 3.3. Show that for any complex numbers z1, . . . , zn we have

  ( |z1| + ··· + |zn| )^2 ≤ n( |z1|^2 + ··· + |zn|^2 )

and

  |z1|^2 + ··· + |zn|^2 ≤ ( |z1| + ··· + |zn| )^2.

Exercise 3.4. Consider the space P3 of polynomials with real coefficients and degrees ≤ 3 equipped with the inner product

  hP, Qi = ∫_0^1 P (x)Q(x) dx, ∀P, Q ∈ P3.

Construct an orthonormal basis of P3 by using the Gramm-Schmidt procedure applied to the basis of P3 given by

  E0(x) = 1, E1(x) = x, E2(x) = x^2, E3(x) = x^3. ut

Exercise 3.5. Fill in the missing details in the proof of Corollary 3.15. ut

Exercise 3.6. Suppose that T : U → U is a linear operator on a finite dimensional complex Euclidean vector space. Prove that there exists an orthonormal basis of U such that, in this basis, T is represented by an upper triangular matrix. ut

Exercise 3.7. Prove Proposition 3.16. ut

Exercise 3.8. Consider the standard Euclidean space R^3. Denote by e1, e2, e3 the canonical orthonormal basis of R^3 and by V the subspace generated by the vectors

  v1 = 12e1 + 5e3, v2 = e1 + e2 + e3.

Find the matrix representing the orthogonal projection PV : R^3 → R^3 in the canonical basis e1, e2, e3. ut

Exercise 3.9. Consider the space P3 of polynomials with real coefficients and degrees ≤ 3. Find p ∈ P3 such that p(0) = p′(0) = 0 and

  ∫_0^1 ( 2 + 3x − p(x) )^2 dx

is as small as possible. Hint: Observe that the set

  V = { p ∈ P3; p(0) = p′(0) = 0 }

is a subspace of P3. Then compute PV , the orthogonal projection onto V with respect to the inner product

  hp, qi = ∫_0^1 p(x)q(x) dx, p, q ∈ P3.

The answer will be PV q, q = 2 + 3x ∈ P3. ut

Exercise 3.10. Suppose that (U, h−, −i) is a finite dimensional real Euclidean space and P : U → U is a linear operator such that

  P^2 = P, kP uk ≤ kuk, ∀u ∈ U. (P)

Show that there exists a subspace V ⊂ U such that P = PV = the orthogonal projection onto V . Hint: Let V = R(P ) = P (U), W := ker P . Using (P ) argue by contradiction that V ⊂ W ⊥ and then conclude that P = PV . ut

Exercise 3.11. Let P2 denote the space of polynomials with real coefficients and degree ≤ 2. Describe the polynomial p0 ∈ P2 uniquely determined by the equalities

  ∫_{−π}^{π} cos x · q(x) dx = ∫_{−π}^{π} p0(x)q(x) dx, ∀q ∈ P2. ut

Exercise 3.12. Let k be a positive integer, and denote by Pk the space of polynomials with real coefficients and degree ≤ k. For k = 2, 3, 4, describe the polynomial pk ∈ Pk uniquely determined by the equalities

  q(0) = ∫_{−1}^{1} pk(x)q(x) dx, ∀q ∈ Pk. ut

Exercise 3.13. Finish the proof of Theorem 3.24. (Pay special attention to the case when F = C.) ut

Exercise 3.14. Let P2 denote the space of polynomials with real coefficients and degree ≤ 2. We equip it with the inner product

  hp, qi = ∫_0^1 p(x)q(x) dx.

Consider the linear operator T : P2 → P2 defined by

  T p = dp/dx, ∀p ∈ P2.

Describe the adjoint of T . ut

Exercise 3.15. Prove Proposition 3.32. ut

Exercise 3.16. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is an orthogonal operator. Prove that

  λ ∈ spec(T ) ⇒ |λ| = 1. ut

Exercise 3.17. Let U be a finite dimensional Euclidean F-vector space and T : U → U a linear operator. Prove that the following statements are equivalent.
(i) The operator T : U → U is orthogonal.
(ii) hT u, T vi = hu, vi, ∀u, v ∈ U.
(iii) For any orthonormal basis e1, ..., en of U, the collection T e1, ..., T en is also an orthonormal basis of U.

ut

4. SPECTRAL THEORY OF NORMAL OPERATORS

4.1. Normal operators. Let U be a finite dimensional complex Euclidean space. A linear operator T : U → U is called normal if

  T∗T = TT∗.

Example 4.1. (a) A selfadjoint operator T : U → U is a normal operator. Indeed, we have T = T∗ so that T∗T = T^2 = TT∗.
(b) An orthogonal operator T : U → U is a normal operator. Indeed Proposition 3.37 implies that T∗T = TT∗ = 1U . ut

(c) If T : U → U is a normal operator and λ ∈ C then λ1U − T is also a normal operator. Indeed,

  (λ1U − T )∗ = (λ1U )∗ − T∗ = λ̄1U − T∗,

and we have

  (λ1U − T )∗ (λ1U − T ) = (λ1U − T ) · (λ1U − T )∗. ut

Proposition 4.2. If T : U → U is a normal operator then so are any of its powers T^k, k > 0.

Proof. We invoke Proposition 3.32 and deduce (T^k)∗ = (T∗)^k. Since T∗T = TT∗, we can move each factor T∗ past all the factors T , one at a time:

  (T^k)∗ T^k = (T∗)^k T^k = (T∗)^{k−1} T^k T∗ = (T∗)^{k−2} T^k (T∗)^2 = ··· = T^k (T∗)^k = T^k (T^k)∗. ut

Definition 4.3. Let U be a finite dimensional Euclidean F-space. A linear operator T : U → U is called orthogonally diagonalizable if there exists an orthonormal basis of U such that, in this basis, the operator T is represented by a diagonal matrix. ut

We can unravel a bit the above definition and observe that a linear operator T on an n-dimensional Euclidean F-space is orthogonally diagonalizable if and only if there exists an orthonormal basis e1,..., en and numbers a1, . . . , an ∈ F such that

T ek = akek, ∀k. Thus the above basis is rather special: it is orthonormal, and it consists of eigenvectors of T . The numbers ak are eigenvalues of T . Note that the converse is also true. If U admits an orthonormal basis consisting of eigenvectors of T , then T is orthogonally diagonalizable.

Proposition 4.4. Suppose that U is a complex Euclidean space of dimension n and T : U → U is orthogonally diagonalizable. Then T is a normal operator.

Proof. Fix an orthonormal basis e = (e1, ..., en) of U such that, in this basis, the operator T is represented by the diagonal matrix

  D = Diag(a1, . . . , an), a1, . . . , an ∈ C.

The computations in Example 3.25 show that the operator T∗ is represented in the basis e by the matrix D∗. Clearly

  DD∗ = D∗D = Diag(|a1|^2, . . . , |an|^2). ut

4.2. The spectral decomposition of a normal operator. We want to show that the converse of Proposition 4.4 is also true. This is a nontrivial and fundamental result of linear algebra.

Theorem 4.5 (Spectral Theorem for Normal Operators). Let U be an n-dimensional complex Euclidean space and T : U → U a normal operator. Then T is orthogonally diagonalizable, i.e., there exists an orthonormal basis of U consisting of eigenvectors of T .

Proof. The key fact behind the Spectral Theorem is contained in the following auxiliary result.

Lemma 4.6. Let λ ∈ spec(T ). Then

  ker(λ1U − T )^2 = ker(λ1U − T ).

We first complete the proof of the Spectral Theorem assuming the validity of the above result. Invoking Lemma 2.9 we deduce that

  ker(λ1U − T ) = ker(λ1U − T )^2 = ker(λ1U − T )^3 = ···,

so that the generalized eigenspace of T corresponding to an eigenvalue λ coincides with the eigenspace ker(λ1U − T ),

  Eλ(T ) = ker(λ1U − T ).

Suppose that

  spec(T ) = { λ1, . . . , λℓ }.

From Proposition 2.18 we deduce that

U = ker(λ11U − T ) ⊕ · · · ⊕ ker(λℓ1U − T ). (4.1)

The next crucial observation is contained in the following elementary result.

Lemma 4.7. Suppose that λ, µ are two distinct eigenvalues of T , and u, v ∈ U are eigenvectors,

  T u = λu, T v = µv.

Then

  T∗u = λ̄u, T∗v = µ̄v,

and u ⊥ v.

Proof. Let Sλ = T − λ1U so that Sλu = 0. Note that Sλ∗ = T∗ − λ̄1U , so that we have to show that Sλ∗u = 0. As explained in Example 4.1(c), the operator Sλ is normal. We deduce that

  0 = Sλ∗Sλu = SλSλ∗u.

Hence

  0 = hSλSλ∗u, ui = hSλ∗u, Sλ∗ui = kSλ∗uk^2.

This proves that T∗u = λ̄u. A similar argument shows that T∗v = µ̄v. From the equality T u = λu we deduce

λhu, vi = hT u, vi = hu, T∗vi = hu, µ̄vi = µhu, vi.

Hence (λ − µ)hu, vi = 0. Since λ ≠ µ we deduce hu, vi = 0. ut

From the above result we conclude that the direct summands in (4.1) are mutually orthogonal. Set dk = dim ker(λk1U − T ). We fix an orthonormal basis

e(k) = { e1(k), ..., edk(k) }

of ker(λk1U − T ). By construction, the vectors in this basis are eigenvectors of T . Since the spaces ker(λk1U − T ) are mutually orthogonal we deduce from (4.1) that the union of the orthonormal bases e(k) is an orthonormal basis of U consisting of eigenvectors of T . This completes the proof of the Spectral Theorem, modulo Lemma 4.6. ut

Proof of Lemma 4.6. The operator S = λ1U − T is normal so that the conclusion of the lemma follows if we prove that for any normal operator S we have

ker S^2 = ker S.

Note that ker S ⊂ ker S^2, so that it suffices to show that ker S^2 ⊂ ker S. Let u ∈ U such that S^2u = 0. We have to show that Su = 0. Note that

0 = (S∗)^2 S^2 u = S∗S∗SSu = S∗SS∗Su.

Set A := S∗S. Note that A is selfadjoint, A = A∗, and we can rewrite the above equality as 0 = A^2u. Hence

  0 = hA^2u, ui = hAu, Aui = kAuk^2.

The equality Au = 0 now implies

0 = hAu, ui = hS∗Su, ui = hSu, Sui = kSuk^2.

This completes the proof of Lemma 4.6. ut
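A numerical illustration of Theorem 4.5 (ours, Python with NumPy): any matrix of the form Q D Q∗ with Q unitary and D diagonal is normal, and the columns of Q form an orthonormal basis of eigenvectors, exactly as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# a unitary matrix Q via the QR factorization of a random complex matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # arbitrary eigenvalues
A = Q @ np.diag(d) @ Q.conj().T                            # a normal operator

assert np.allclose(A @ A.conj().T, A.conj().T @ A)         # A A* = A* A
for k in range(n):                                         # A q_k = d_k q_k
    assert np.allclose(A @ Q[:, k], d[k] * Q[:, k])
```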

4.3. The spectral decomposition of a real symmetric operator. We begin with the real counterpart of Proposition 4.4.

Proposition 4.8. Suppose that U is a real Euclidean space of dimension n and T : U → U is orthogonally diagonalizable. Then T is a symmetric operator.

Proof. Fix an orthonormal basis e = (e1, ..., en) of U such that, in this basis, the operator T is represented by the diagonal matrix

  D = Diag(a1, . . . , an), a1, . . . , an ∈ R.

Clearly D = D∗ and the computations in Example 3.25 show that T is a selfadjoint operator. ut

We can now state and prove the real counterpart of Theorem 4.5.

Theorem 4.9 (Spectral Theorem for Real Symmetric Operators). Let U be an n-dimensional real Euclidean space and T : U → U a symmetric operator. Then T is orthogonally diagonalizable, i.e., there exists an orthonormal basis of U consisting of eigenvectors of T .

Proof. We argue by induction on n = dim U. For n = 1 the result is trivially true. We assume that the result is true for real symmetric operators acting on Euclidean spaces of dimension < n and we prove that it holds for a symmetric operator T on a real n-dimensional Euclidean space U. To begin with, let us observe that T has at least one real eigenvalue. Indeed, if we fix an orthonormal basis e1, ..., en of U, then in this basis the operator T is represented by a symmetric n × n real matrix A. As explained in Corollary 3.29, all the roots of the characteristic polynomial det(λ1 − A) are real, and they coincide with the eigenvalues of T . Fix one such eigenvalue λ ∈ spec(T ) ⊂ R and denote by Eλ the corresponding eigenspace

Eλ := ker(λ1 − T ) ⊂ U.

Lemma 4.10. The orthogonal complement Eλ⊥ is an invariant subspace of T , i.e.,

  u ⊥ Eλ ⇒ T u ⊥ Eλ.

Proof. Let u ∈ Eλ⊥. We have to show that T u ⊥ Eλ, i.e., T u ⊥ v, ∀v ∈ Eλ. Given such a v we have T v = λv. Next observe that u ⊥ v since u ∈ Eλ⊥. Hence

  hT u, vi = hu, T vi = λhu, vi = 0. ut

The restriction Sλ of T to Eλ⊥ is a symmetric operator Eλ⊥ → Eλ⊥ and the induction hypothesis implies that we can find an orthonormal basis of Eλ⊥ such that, in this basis, the operator Sλ is represented by a diagonal matrix Dλ. Fix an arbitrary orthonormal basis e of Eλ. The union of e with the above basis of Eλ⊥ is an orthonormal basis of U. In this basis T is represented by the block matrix

  [ λ1Eλ  0  ]
  [  0    Dλ ]

The above matrix is clearly diagonal. ut
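In numerical practice Theorem 4.9 is packaged in library routines such as numpy.linalg.eigh. A short sketch (ours): orthogonally diagonalize a random real symmetric matrix and verify that the resulting basis is orthonormal and diagonalizes it.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # a real symmetric matrix

w, Q = np.linalg.eigh(A)             # eigenvalues w, orthonormal eigenvectors as columns
assert np.allclose(Q.T @ Q, np.eye(4))        # the basis is orthonormal
assert np.allclose(Q.T @ A @ Q, np.diag(w))   # and T is diagonal in it
```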

4.4. Exercises. Exercise 4.1. Let U be a finite dimensional complex Euclidean vector space and T : U → U a normal operator. Prove that for any complex numbers a0, . . . , ak the operator

  a01U + a1T + ··· + akT^k

is a normal operator. ut

Exercise 4.2. Let U be a finite dimensional complex vector space and T : U → U a normal operator. Show that the following statements are equivalent. (i) The operator T is orthogonal. (ii) If λ is an eigenvalue of T , then |λ| = 1. ut

Exercise 4.3. (a) Prove that the product of two orthogonal operators on a finite dimensional Euclidean space is an orthogonal operator. (b) Is it true that the product of two selfadjoint operators on a finite dimensional Euclidean space is also a selfadjoint operator? ut

Exercise 4.4. Suppose that U is a finite dimensional Euclidean space and P : U → U is a linear operator such that P 2 = P . Show that the following statements are equivalent. (i) P is the orthogonal projection onto a subspace V ⊂ U. (ii) P ∗ = P . ut

Exercise 4.5. Suppose that U is a finite dimensional complex Euclidean space and T : U → U is a normal operator. Show that R(T ) = R(T∗). ut

Exercise 4.6. Does there exist a symmetric operator T : R^3 → R^3 such that

  T (1, 1, 1)† = (0, 0, 0)†, T (1, 2, 3)† = (1, 2, 3)†? ut

Exercise 4.7. Show that a normal operator on a complex Euclidean space is selfadjoint if and only if all its eigenvalues are real. ut

Exercise 4.8. Suppose that T is a normal operator on a complex Euclidean space U such that T^7 = T^5. Prove that T is selfadjoint and T^3 = T . ut

Exercise 4.9. Let U be a finite dimensional complex Euclidean space and T : U → U a selfadjoint operator. Suppose that there exists a vector u, a complex number µ, and a number ε > 0 such that

  kT u − µuk < εkuk.

Prove that there exists an eigenvalue λ of T such that |λ − µ| < ε. ut

5. APPLICATIONS

5.1. Symmetric bilinear forms. Suppose that U is a finite dimensional real vector space. Recall that a symmetric bilinear form on U is a bilinear map

  Q : U × U → R

such that Q(u1, u2) = Q(u2, u1). We denote by Sym(U) the space of symmetric bilinear forms on U. Suppose that e = (e1, ..., en) is a basis of the real vector space U. This basis associates to any symmetric bilinear form Q ∈ Sym(U) a symmetric matrix

A = (aij)1≤i,j≤n, aij = Q(ei, ej) = Q(ej, ei) = aji.

Note that the form Q is completely determined by the matrix A. Indeed, if

  u = ∑_{i=1}^n ui ei, v = ∑_{j=1}^n vj ej,

then

  Q(u, v) = Q( ∑_i ui ei, ∑_j vj ej ) = ∑_{i,j=1}^n ui vj Q(ei, ej) = ∑_{i,j=1}^n aij ui vj.

The matrix A is called the symmetric matrix associated to the symmetric form Q in the basis e. Conversely, any symmetric n × n matrix A defines a symmetric bilinear form QA ∈ Sym(R^n) by

  QA(u, v) = ∑_{i,j=1}^n aij ui vj, ∀u, v ∈ R^n.

If h−, −i denotes the canonical inner product on R^n, then we can rewrite the above equality in the more compact form

  QA(u, v) = hu, Avi.

Definition 5.1. Let Q ∈ Sym(U) be a symmetric bilinear form on the finite dimensional real space U. The quadratic form associated to Q is the function

ΦQ : U → R, ΦQ(u) = Q(u, u), ∀u ∈ U. ut

Observe that if Q ∈ Sym(U), then we have the polarization identity

  Q(u, v) = (1/4)( ΦQ(u + v) − ΦQ(u − v) ).

This shows that a symmetric bilinear form is completely determined by its associated quadratic form.

Proposition 5.2. Suppose that Q ∈ Sym(U) and

e = {e1, ..., en}, f = {f 1, ..., f n}

are two bases of U. Denote by S the matrix describing the transition from the basis e to the basis f. In other words, the j-th column of S describes the coordinates of f j in the basis e, i.e.,

  f j = ∑_{i=1}^n sij ei.

Denote by A (respectively B) the matrix associated to Q by the basis e (respectively f). Then B = S†AS. (5.1)

Proof. We have

  bij = Q(f i, f j) = Q( ∑_{k=1}^n ski ek, ∑_{ℓ=1}^n sℓj eℓ ) = ∑_{k,ℓ=1}^n ski sℓj Q(ek, eℓ) = ∑_{k,ℓ=1}^n ski akℓ sℓj.

If we denote by s†ij the entries of the transpose matrix S†, s†ij = sji, we deduce

  bij = ∑_{k,ℓ=1}^n s†ik akℓ sℓj.

The last equality shows that bij is the (i, j)-entry of the matrix S†AS. ut

We strongly recommend the reader to compare the change of base formula (5.1) with the change of base formula (2.1).

Theorem 5.3. Suppose that Q is a symmetric bilinear form on a finite dimensional real vector space U. Then there exists at least one basis of U such that the matrix associated to Q by this basis is a diagonal matrix.

Proof. We will employ the spectral theory of real symmetric operators. For this reason we fix an Euclidean inner product h−, −i on U and then choose a basis e of U which is orthonormal with respect to the above inner product. We denote by A the symmetric matrix associated to Q by this basis, i.e., aij = Q(ei, ej).

This matrix defines a symmetric operator TA : U → U by the formula

  TAej = ∑_{i=1}^n aij ei.

Let us observe that

  Q(u, v) = hu, TAvi, ∀u, v ∈ U. (5.2)

To verify the above equality we first notice that both sides are bilinear in u, v, so that it suffices to check the equality in the special case when the vectors u, v belong to the basis e. We have

  hei, TAeji = h ei, ∑_k akj ek i = aij = Q(ei, ej).

The spectral theorem for real symmetric operators implies that there exists an orthonormal basis f of U such that the matrix B representing TA in this basis is diagonal. If S denotes the matrix describing the transition from the basis e to the basis f, then the equality (2.1) implies that

  B = S^{−1}AS.

Since the bases e and f are orthonormal, we deduce from Exercise 3.17 that the matrix S is orthogonal, i.e.,

  S†S = SS† = 1.

Hence S^{−1} = S† and we deduce that B = S†AS. The above equality and Proposition 5.2 imply that the diagonal matrix B is also the matrix associated to Q by the basis f. ut
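The proof of Theorem 5.3 is constructive, and it can be traced in code (ours, Python with NumPy; eigh plays the role of the spectral theorem):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # matrix of a symmetric form Q in the basis e

w, S = np.linalg.eigh(A)             # S orthogonal; its columns give the basis f
B = S.T @ A @ S                      # change of base formula (5.1), S^t = S^{-1}
assert np.allclose(B, np.diag(w))    # the matrix of Q in the basis f is diagonal
```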

Theorem 5.4 (The law of inertia). Let U be a real vector space of dimension n and Q a symmetric bilinear form on U. Suppose that e, f are bases of U such that the matrices associated to Q by these bases are diagonal (such bases are called diagonalizing bases of Q). Then these matrices have the same number of positive (respectively negative, respectively zero) entries on their diagonal.

Proof. We will show that these two matrices have the same number of positive entries and the same number of negative entries on their diagonals. Automatically then they must have the same number of trivial entries on their diagonals. We take care of the positive entries first. Denote by A the matrix associated to Q by e and by B the matrix associated by f. We denote by p the number of positive entries on the diagonal of A and by q the number of positive entries on the diagonal of B. We have to show that p = q. We argue by contradiction and we assume that p ≠ q, say p > q. We can label the elements in the basis e so that

aii = Q(ei, ei) > 0, ∀i ≤ p, ajj = Q(ej, ej) ≤ 0, ∀j > p. (5.3) Observe that since A is diagonal we have

Q(ei, ej) = 0, ∀i ≠ j. (5.4)

bkk = Q(f k, f k) > 0, ∀k ≤ q, bℓℓ = Q(f ℓ, f ℓ) ≤ 0, ∀ℓ > q. (5.5)

Q(f i, f j) = 0, ∀i ≠ j. (5.6)

Denote by V the subspace spanned by the vectors ei, i = 1, . . . , p, and by W the subspace spanned by the vectors f q+1, ..., f n. From the equalities (5.3), (5.4), (5.5), (5.6) we deduce that

  Q(v, v) > 0, ∀v ∈ V \ 0, (5.7a)

Q(w, w) ≤ 0, ∀w ∈ W \ 0. (5.7b)

On the other hand, we observe that dim V = p, dim W = n − q. Hence

  dim V + dim W = n + p − q > n = dim U,

so that there exists a vector u ∈ (V ∩ W ) \ 0. The vector u cannot simultaneously satisfy both inequalities (5.7a) and (5.7b). This contradiction implies that p = q. Using the above argument for the form −Q we deduce that A and B have the same number of negative elements on their diagonals. ut


The above theorem shows that no matter what diagonalizing basis of Q we choose, the diagonal matrix representing Q in that basis will have the same number of positive, negative, and zero elements on its diagonal. We will denote these common numbers by µ+(Q), µ−(Q) and respectively µ0(Q). These numbers are called the indices of inertia of the symmetric form Q. The integer µ−(Q) is called the Morse index of the symmetric form Q and the difference

σ(Q) = µ+(Q) − µ−(Q)

is called the signature of the form Q.

Definition 5.5. A symmetric bilinear form Q ∈ Sym(U) is called positive definite if

  Q(u, u) > 0, ∀u ∈ U \ 0.

It is called positive semidefinite if

  Q(u, u) ≥ 0, ∀u ∈ U.

It is called negative (semi)definite if −Q is positive (semi)definite. ut

We observe that a symmetric bilinear form Q on an n-dimensional real space U is positive definite if and only if µ+(Q) = n = dim U.

Definition 5.6. A real symmetric n × n matrix A is called positive definite if and only if the associated symmetric bilinear form QA ∈ Sym(R^n) is positive definite. ut
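(A numerical aside, not part of the original text: by the law of inertia, the orthonormal eigenbasis produced by a symmetric eigensolver is one diagonalizing basis, so the indices of inertia can be read off from the signs of the eigenvalues. A Python/NumPy sketch with hypothetical names:)

import numpy as np

def inertia(A, tol=1e-12):
    # Indices of inertia (mu_plus, mu_minus, mu_zero) of the symmetric
    # bilinear form Q_A; by the law of inertia the signs of the
    # eigenvalues of A give the answer.
    lam = np.linalg.eigvalsh(A)
    return (int((lam > tol).sum()), int((lam < -tol).sum()),
            int((np.abs(lam) <= tol).sum()))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
mu_plus, mu_minus, mu_zero = inertia(A)
print(mu_plus, mu_minus, mu_zero)            # 2 0 0, so Q_A is positive definite
print("signature:", mu_plus - mu_minus)      # 2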

5.2. Nonnegative operators. Suppose that U is a finite dimensional Euclidean F-vector space. Definition 5.7. A linear operator T : U → U is called nonnegative if the following hold. (i) T is selfadjoint, T ∗ = T . (ii) hT u, ui ≥ 0, for all u ∈ U. The operator is called positive if it is nonnegative and hT u, ui = 0⇐⇒u = 0. ut

Example 5.8. Suppose that T : U → U is a linear operator. Then the operator S := T∗T is nonnegative. Indeed, it is selfadjoint and

hSu, ui = hT∗T u, ui = hT u, T ui = kT uk² ≥ 0.

Note that S is positive if and only if ker S = 0, i.e., if and only if S is injective. ut

Definition 5.9. Suppose that T : U → U is a linear operator on the finite dimensional F-space U. A square root of T is a linear operator S : U → U such that S2 = T . ut

Theorem 5.10. Let T : U → U be a linear operator on the n-dimensional Euclidean F-space. Then the following statements are equivalent.
(i) The operator T is nonnegative.
(ii) The operator T is selfadjoint and all its eigenvalues are nonnegative.
(iii) The operator T admits a nonnegative square root.
(iv) The operator T admits a selfadjoint square root.
(v) There exists an operator S : U → U such that T = S∗S.

Proof. (i) ⇒ (ii) The operator T being nonnegative is also selfadjoint. Hence all its eigenvalues are real. If λ is an eigenvalue of T and u ∈ ker(λ1U − T ) \ 0, then

λkuk² = hT u, ui ≥ 0.

This implies λ ≥ 0. (ii) ⇒ (iii) Since T is selfadjoint, there exists an orthonormal basis e = {e1,..., en} of U such that, in this basis, the operator T is represented by the diagonal matrix

A = Diag(λ1, . . . , λn),

where λ1, . . . , λn are the eigenvalues of T . They are all nonnegative so we can form a new diagonal matrix

B = Diag(√λ1, . . . , √λn).

The matrix B defines a selfadjoint linear operator S on U, namely the operator represented by the matrix B in the basis e. More precisely,

S ei = √λi ei, ∀i = 1, . . . , n.

If u = Σ_{i=1}^n ui ei, then

Su = Σ_{i=1}^n √λi ui ei, hSu, ui = Σ_{i=1}^n √λi |ui|² ≥ 0.

Hence S is nonnegative. From the obvious equality A = B² we deduce T = S², so that S is a nonnegative square root of T . The implication (iii) ⇒ (iv) is obvious because any nonnegative square root of T is automatically a selfadjoint square root. To prove the implication (iv) ⇒ (v) observe that if S is a selfadjoint square root of T then T = S² = S∗S. The implication (v) ⇒ (i) was proved in Example 5.8. ut
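(The construction in the step (ii) ⇒ (iii) is effective and easy to code. A sketch in Python/NumPy, assuming a real symmetric nonnegative matrix; the helper name is hypothetical:)

import numpy as np

def nonnegative_sqrt(T, tol=1e-12):
    # Follows the proof of (ii) => (iii): diagonalize T in an orthonormal
    # eigenbasis, take square roots of the (nonnegative) eigenvalues,
    # and conjugate back.
    lam, S = np.linalg.eigh(T)
    assert np.all(lam >= -tol), "T must have nonnegative eigenvalues"
    return S @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ S.T

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
T = M.T @ M                        # T = M*M is nonnegative, cf. Example 5.8
R = nonnegative_sqrt(T)
assert np.allclose(R @ R, T)       # R is a square root of T
assert np.allclose(R, R.T)         # and it is selfadjoint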

Proposition 5.11. Let U be a finite dimensional Euclidean F-space. Then any nonnegative operator T : U → U admits a unique nonnegative square root.

Proof. We have an orthogonal decomposition

U = ⊕_{λ∈spec(T)} ker(λ1U − T ),

so that any vector u ∈ U can be written uniquely as

u = Σ_{λ∈spec(T)} uλ, uλ ∈ ker(λ1U − T ). (5.8)

Moreover

T u = Σ_{λ∈spec(T)} λ uλ.

Suppose that S is a nonnegative square root of T . If µ ∈ spec(S) and u ∈ ker(µ1U − S), then

T u = S²u = S(Su) = S(µu) = µ²u.

Hence µ² ∈ spec(T ) and

ker(µ1U − S) ⊂ ker(µ²1U − T ).

We have a similar orthogonal decomposition

U = ⊕_{µ∈spec(S)} ker(µ1U − S) ⊂ ⊕_{µ∈spec(S)} ker(µ²1U − T ) ⊂ ⊕_{λ∈spec(T)} ker(λ1U − T ) = U.

This implies that

spec(T ) = { µ²; µ ∈ spec(S) }, ker(µ1U − S) = ker(µ²1U − T ), ∀µ ∈ spec(S).

Since all the eigenvalues of S are nonnegative we deduce that for any λ ∈ spec(T ) we have √λ ∈ spec(S). Thus, if u is decomposed as in (5.8),

u = Σ_{λ∈spec(T)} uλ,

then

Su = Σ_{λ∈spec(T)} √λ uλ.

The last equality determines S uniquely. ut

Definition 5.12. If T is a nonnegative operator, then its unique nonnegative square root is denoted by √T. ut

5.3. Exercises.

Exercise 5.1. Let Q be a symmetric bilinear form on an n-dimensional real vector space U. Prove that there exists a basis of U such that the matrix associated to Q by this basis is diagonal and all the entries belong to {−1, 0, 1}. ut

Exercise 5.2. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with indices of inertia µ+, µ−, µ0. We say Q is positive definite on a subspace V ⊂ U if Q(v, v) > 0, ∀v ∈ V \ 0.

(a) Prove that if Q is positive definite on a subspace V , then dim V ≤ µ+. (b) Show that there exists a subspace of dimension µ+ on which Q is positive definite. Hint: (a) Choose a diagonalizing basis e = {e1,..., en} of Q. Assume that Q(ei, ei) > 0, for i = 1, . . . , µ+ and Q(ej, ej) ≤ 0 for j > µ+. Argue by contradiction that

dim V + dim span{ej; j > µ+} ≤ dim U = n. ut

Exercise 5.3. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with indices of inertia µ+, µ−, µ0. Define

Null(Q) := { u ∈ U; Q(u, v) = 0, ∀v ∈ U }.

Show that Null (Q) is a vector subspace of U of dimension µ0. ut

Exercise 5.4 (Jacobi). For any n × n matrix M we denote by Mi the i × i matrix determined by the first i rows and columns of M. Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a basis e = {e1,..., en} of U. Denote by A the matrix associated to Q by the basis e and assume that

∆i := det Ai 6= 0, ∀i = 1, . . . , n.

(a) Prove that there exists a basis f = (f_1, . . . , f_n) of U with the following properties.
(i) span{e1, . . . , ei} = span{f_1, . . . , f_i}, ∀i = 1, . . . , n.
(ii) Q(f_k, ei) = 0, ∀1 ≤ i < k ≤ n, Q(f_k, ek) = 1, ∀k = 1, . . . , n.
Hint: For fixed k, express the vector f_k in terms of the vectors ei,

f_k = Σ_{i=1}^n s_ik ei,

and then show that the conditions (i) and (ii) above uniquely determine the coefficients s_ik, again with k fixed.
(b) If f is the basis found above, show that

Q(f_k, f_i) = 0, ∀i ≠ k, Q(f_k, f_k) = ∆_{k−1}/∆_k, ∀k = 1, . . . , n, ∆0 := 1.

(c) Show that the Morse index µ−(Q) is the number of sign changes in the sequence

1, ∆1, ∆2,..., ∆n. ut

Exercise 5.5 (Sylvester). For any n × n matrix M we denote by Mi the i × i matrix determined by the first i rows and columns of M. Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a basis e = {e1,..., en} of U and denote by A the matrix associated to Q by the basis e. Prove that the following statements are equivalent.

(i) Q is positive definite. (ii) det Ai > 0, ∀i = 1, . . . , n. ut
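(A quick numeric illustration of Sylvester's criterion from Exercise 5.5, not a proof: compute the leading principal minors det Ai and check their signs. A Python/NumPy sketch with hypothetical names:)

import numpy as np

def sylvester_positive_definite(A, tol=1e-12):
    # Sylvester's criterion: A is positive definite iff all the leading
    # principal minors det(A_i) are positive.
    n = A.shape[0]
    minors = [np.linalg.det(A[:i, :i]) for i in range(1, n + 1)]
    return all(d > tol for d in minors), minors

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
ok, minors = sylvester_positive_definite(A)
print(ok, minors)                  # True, [2.0, 3.0, 4.0]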

Exercise 5.6. (a) Let f : [0, 1] → [0, ∞) be a continuous function which is not identically zero. For any k = 0, 1, 2, . . . we define the k-th moment of f to be the real number

µk := µk(f) = ∫_0^1 x^k f(x) dx.

Prove that the symmetric (n + 1) × (n + 1) matrix

A = (aij)_{0≤i,j≤n}, aij = µ_{i+j},

is positive definite. Hint: Associate to any vector u = (u0, u1, . . . , un)ᵀ the polynomial Pu(x) = u0 + u1x + ··· + un x^n and then express the integral

∫_0^1 Pu(x) Pv(x) f(x) dx

in terms of A.
(b) Prove that the symmetric n × n matrix

B = (bij)_{1≤i,j≤n}, bij = 1/(i + j),

is positive definite.
(c) Prove that the symmetric (n + 1) × (n + 1) matrix

C = (cij)_{0≤i,j≤n}, cij = (i + j)!, where 0! := 1, n! = 1 · 2 ··· n,

is positive definite. Hint: Show that for any k = 0, 1, 2, . . . we have

∫_0^∞ x^k e^{−x} dx = k!,

and then use the trick in (a). ut

Exercise 5.7. (a) Show that the 2 × 2 symmetric matrix

A = ( 2 −1
     −1  2 )

is positive definite.
(b) Let T : R² → R² be a linear operator. Denote by QA the symmetric bilinear form on R² defined by the above matrix A. Since QA is positive definite, it defines an inner product h−, −iA on R²,

hu, viA = QA(u, v) = hu, Avi,

where h−, −i denotes the canonical inner product on R². Denote by T# the adjoint of T with respect to the inner product h−, −iA, i.e.,

hT u, viA = hu, T# viA, ∀u, v ∈ R². (#)

Show that T# = A^{−1} T∗ A, where T∗ is the adjoint of T with respect to the canonical inner product h−, −i. What does this formula tell you in the special case when T is described by the symmetric matrix

( 1 2
  2 3 ) ?

Hint: In (#) use the equalities hx, yiA = hx, Ayi, hT x, yi = hx, T∗ yi, ∀x, y ∈ R². ut

Exercise 5.8. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is an invertible operator. Prove that √(T∗T) is invertible and the operator S = T (√(T∗T))^{−1} is orthogonal. ut

Exercise 5.9. Suppose that U is a finite dimensional real Euclidean space and Q ∈ Sym(U) is a positive definite symmetric bilinear form. Prove that there exists a unique positive operator T : U → U such that

Q(u, v) = hT u, T vi, ∀u, v ∈ U. ut

6. ELEMENTS OF LINEAR TOPOLOGY

6.1. Normed vector spaces. As usual, F will denote one of the fields R or C.

Definition 6.1. A norm on the F-vector space U is a function k − k : U → [0, ∞) satisfying the following conditions.
(i) (Nondegeneracy) kuk = 0 ⇐⇒ u = 0.
(ii) (Homogeneity) kcuk = |c| · kuk, ∀u ∈ U, c ∈ F.
(iii) (Triangle inequality) ku + vk ≤ kuk + kvk, ∀u, v ∈ U.
A normed F-space is a pair (U, k − k) where U is an F-vector space and k − k is a norm on U. ut

The distance between two vectors u, v in a normed space (U, k − k) is the quantity dist(u, v) := ku − vk. Note that the triangle inequality implies that

dist(u, w) ≤ dist(u, v) + dist(v, w), ∀u, v, w ∈ U, (6.1a)

| kuk − kvk | ≤ ku − vk, ∀u, v ∈ U. (6.1b)

Example 6.2. (a) The absolute value | − | : R → [0, ∞) is a norm on the one-dimensional real vector space R.
(b) The absolute value | − | : C → [0, ∞), |x + iy| = √(x² + y²), is a norm on the one dimensional complex vector space C.
(c) If U is a Euclidean F-space with inner product h−, −i then the function | − | : U → [0, ∞), |u| = √(hu, ui), is a norm on U called the norm determined by the inner product.
(d) Consider the space R^N with canonical basis e1, . . . , eN. The function

| − |2 : R^N → [0, ∞), |u1e1 + ··· + uN eN|2 = √(|u1|² + ··· + |uN|²)

is the norm associated to the canonical inner product on R^N. The functions

| − |1, | − |∞ : R^N → [0, ∞)

defined by

|u1e1 + ··· + uN eN|∞ := max_{1≤k≤N} |uk|, |u|1 = Σ_{k=1}^N |uk|,

are also norms on R^N. The proof of this fact is left as an exercise. The norm | − |1 is sometimes known as the taxicab norm. One can define similarly norms | − |2 and | − |∞ on C^N.
(e) Let U denote the space C([0, 1]) of continuous functions u : [0, 1] → R. Then the function

k − k∞ : C([0, 1]) → [0, ∞), kuk∞ := sup_{x∈[0,1]} |u(x)|,

is a norm on U. The proof is left as an exercise.

(f) If (U0, k − k0) and (U1, k − k1) are two normed F-spaces, then their product U0 × U1 is equipped with a natural norm

k(u0, u1)k := ku0k0 + ku1k1, ∀u0 ∈ U0, u1 ∈ U1. ut

Theorem 6.3 (Minkowski). Consider the space R^N with canonical basis e1, . . . , eN. Let p ∈ (1, ∞). Then the function

| − |p : R^N → [0, ∞), |u1e1 + ··· + uN eN|p = ( Σ_{k=1}^N |uk|^p )^{1/p}

is a norm on R^N.

Proof. The nondegeneracy and homogeneity of | − |p are obvious. The tricky part is the triangle inequality. We will carry out the proof of this inequality in several steps. Define q ∈ (1, ∞) by the equality

1 = 1/p + 1/q ⇐⇒ q = p/(p − 1).

Step 1. Young's inequality. We will prove that if a, b are nonnegative real numbers then

ab ≤ a^p/p + b^q/q. (6.2)

The inequality is obviously true if ab = 0 so we assume that ab ≠ 0. Set α := 1/p and define

f : (0, ∞) → R, f(x) = x^α − αx + α − 1.

Observe that f′(x) = α(x^{α−1} − 1). Hence f′(x) = 0 ⇐⇒ x = 1. Moreover, since α < 1 we deduce f″(1) = α(α − 1) < 0, so 1 is a global max of f(x), i.e.,

f(x) ≤ f(1) = 0, ∀x > 0.

Now let x = a^p/b^q. We deduce

(a^p/b^q)^{1/p} − (1/p)(a^p/b^q) ≤ 1 − 1/p = 1/q.

Multiplying both sides by b^q we obtain

a b^{q − q/p} − a^p/p ≤ b^q/q ⇒ a b^{q − q/p} ≤ a^p/p + b^q/q.

This is precisely (6.2) since q − q/p = q(1 − 1/p) = 1.

Step 2. Hölder's inequality. We prove that for any x, y ∈ R^N we have

|hx, yi| ≤ |x|p |y|q, (6.3)

where hx, yi is the canonical inner product of the vectors x, y ∈ R^N,

hx, yi = x1y1 + ··· + xN yN.

Clearly it suffices to prove the inequality only in the case x, y ≠ 0. Using Young's inequality with

a = |xk|/|x|p, b = |yk|/|y|q

we deduce

|xk yk| / (|x|p · |y|q) ≤ (1/p) |xk|^p/|x|p^p + (1/q) |yk|^q/|y|q^q, ∀k = 1, . . . , N.

Summing over k we deduce

(1/(|x|p · |y|q)) Σ_{k=1}^N |xk yk| ≤ (1/(p|x|p^p)) Σ_{k=1}^N |xk|^p + (1/(q|y|q^q)) Σ_{k=1}^N |yk|^q = 1/p + 1/q = 1.

Hence

Σ_{k=1}^N |xk yk| ≤ |x|p · |y|q.

Now observe that

|hx, yi| = | Σ_{k=1}^N xk yk | ≤ Σ_{k=1}^N |xk yk|.

Step 3. Conclusion. We have

|u + v|p^p = Σ_{k=1}^N |uk + vk|^p = Σ_{k=1}^N |uk + vk| · |uk + vk|^{p−1}
≤ Σ_{k=1}^N |uk| · |uk + vk|^{p−1} + Σ_{k=1}^N |vk| · |uk + vk|^{p−1}.

Using Hölder's inequality and the equality q(p − 1) = p we deduce

Σ_{k=1}^N |uk| · |uk + vk|^{p−1} ≤ ( Σ_{k=1}^N |uk|^p )^{1/p} · ( Σ_{k=1}^N |uk + vk|^{q(p−1)} )^{1/q} = |u|p · |u + v|p^{p/q}.

Similarly, we have

Σ_{k=1}^N |vk| · |uk + vk|^{p−1} ≤ |v|p · |u + v|p^{p/q}.

Hence

|u + v|p^p ≤ |u + v|p^{p/q} ( |u|p + |v|p ),

so that, since p − p/q = 1,

|u + v|p = |u + v|p^{p − p/q} ≤ |u|p + |v|p. ut
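(The two inequalities just proved are easy to spot-check numerically. A Python/NumPy sketch, not part of the original argument, testing (6.3) and the triangle inequality on random vectors:)

import numpy as np

def norm_p(u, p):
    # |u|_p = (sum |u_k|^p)^(1/p); p = np.inf gives |u|_infty
    if np.isinf(p):
        return np.max(np.abs(u))
    return np.sum(np.abs(u) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(5), rng.standard_normal(5)
p = 3.0
q = p / (p - 1.0)                  # conjugate exponent: 1/p + 1/q = 1

# Hoelder's inequality (6.3): |<x, y>| <= |x|_p |y|_q
assert abs(np.dot(x, y)) <= norm_p(x, p) * norm_p(y, q) + 1e-12

# Minkowski's inequality: |x + y|_p <= |x|_p + |y|_p
assert norm_p(x + y, p) <= norm_p(x, p) + norm_p(y, p) + 1e-12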

Definition 6.4. Suppose that k − k0 and k − k1 are two norms on the same vector space U. We say that k − k1 is stronger than k − k0, and we denote this by k − k0 ≺ k − k1, if there exists C > 0 such that

kuk0 ≤ Ckuk1, ∀u ∈ U. ut

The relation ≺ is transitive. This means that if k − k0, k − k1, k − k2 are three norms such that

k − k0 ≺ k − k1, k − k1 ≺ k − k2, then k − k0 ≺ k − k2.

Definition 6.5. Two norms k − k0 and k − k1 on the same vector space U are called equivalent, and we denote this by k − k0 ∼ k − k1, if

k − k0 ≺ k − k1 and k − k1 ≺ k − k0. ut

Observe that two norms k − k0 and k − k1 are equivalent if and only if there exist positive constants c, C such that

k − k0 ∼ k − k1 ⇐⇒ ∃c, C > 0 : c kuk1 ≤ kuk0 ≤ C kuk1, ∀u ∈ U. (6.4)

The relation "∼" is an equivalence relation on the space of norms on a given vector space.

Proposition 6.6. The norms | − |p, p ∈ [1, ∞], on R^N are all equivalent to the norm | − |∞.

Proof. Indeed, for u ∈ R^N we have

|u|p = ( Σ_{k=1}^N |uk|^p )^{1/p} ≤ ( N max_{1≤k≤N} |uk|^p )^{1/p} = N^{1/p} |u|∞,

so that | − |p ≺ | − |∞. Conversely,

|u|∞ = max_{1≤k≤N} |uk| = ( max_{1≤k≤N} |uk|^p )^{1/p} ≤ ( Σ_{k=1}^N |uk|^p )^{1/p} = |u|p. ut
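(The proof gives explicit equivalence constants, |u|∞ ≤ |u|p ≤ N^{1/p} |u|∞, which can be checked directly; a small Python/NumPy sketch:)

import numpy as np

rng = np.random.default_rng(3)
N, p = 6, 2.5
u = rng.standard_normal(N)

u_inf = np.max(np.abs(u))
u_p = np.sum(np.abs(u) ** p) ** (1.0 / p)

# |u|_inf <= |u|_p <= N^(1/p) |u|_inf, as in the proof of Proposition 6.6
assert u_inf <= u_p <= N ** (1.0 / p) * u_inf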

Proposition 6.7. The norm | − |∞ on R^N is stronger than any other norm k − k on R^N.

Proof. Let

C = max_{1≤k≤N} kekk,

where e1, . . . , eN is the canonical basis in R^N. We have

kuk = ku1e1 + ··· + uN eNk ≤ |u1| ke1k + ··· + |uN| keNk ≤ C( |u1| + ··· + |uN| ) ≤ CN |u|∞. ut

Corollary 6.8. For any p ∈ [1, ∞], the norm | − |p on R^N is stronger than any other norm k − k on R^N.

Proof. Indeed, the norm | − |p is stronger than | − |∞, which is stronger than k − k. ut

6.2. Convergent sequences.

Definition 6.9. Let U be a vector space. A sequence ( u(n))n≥0 of vectors in U is said to converge to u∗ in the norm k − k if

lim_{n→∞} ku(n) − u∗k = 0,

i.e., ∀ε > 0 ∃ν = ν(ε) > 0 such that ku(n) − u∗k < ε, ∀n ≥ ν. We will use the notation

u∗ = lim_{n→∞} u(n)

to indicate that the sequence u(n) converges to u∗. ut

Proposition 6.10. A sequence of vectors u(n) = ( u1(n), . . . , uN(n) )ᵀ ∈ R^N converges to u∗ = ( u∗1, . . . , u∗N )ᵀ ∈ R^N in the norm | − |∞ if and only if

lim_{n→∞} uk(n) = u∗k, ∀k = 1, . . . , N.

Proof. We have

lim_{n→∞} |u(n) − u∗|∞ = 0 ⇐⇒ lim_{n→∞} max_{1≤k≤N} |uk(n) − u∗k| = 0 ⇐⇒ lim_{n→∞} |uk(n) − u∗k| = 0, ∀k = 1, . . . , N. ut

Proposition 6.11. Let (U, k − k) be a normed space. If the sequence ( u(n))n≥0 in U converges to u∗ ∈ U in the norm k − k, then lim ku(n)k = ku∗k. n→∞

Proof. We have (6.1b) ∗ ∗ ku(n)k − ku k ≤ ku(n) − u k → 0 as n → ∞. ut

The proof of the next result is left as an exercise.

Proposition 6.12. Let (U, k−k) be a normed F-vector space. If the sequences ( u(n))n≥0, (v(n))n≥0 converge in the norm k − k to u∗ and respectively v∗, then their sum u(n) + v(n) converges in the ∗ ∗ norm k − k to u + v . Moreover, for any scalar λ ∈ F, the sequence λu(n) converges in the norm k − k to the vector λu∗. ut

Proposition 6.13. Let U be a vector space and k − k0, k − k1 be two norms on U. The following statements are equivalent.

(i) k − k0 ≺ k − k1. (ii) If a sequence converges in the norm k − k1, then it converges to the same limit in the norm k − k0 as well.

Proof. (i) ⇒ (ii). We know that there exists a constant C > 0 such that kuk0 ≤ Ckuk1, ∀u ∈ U. We have to show that if the sequence ( u(n))n≥0 converges in the norm k − k1 to u∗, then it also converges to u∗ in the norm k − k0. We have

ku(n) − u∗k0 ≤ Cku(n) − u∗k1 → 0 as n → ∞.

(ii) ⇒ (i). We argue by contradiction and we assume that for any n > 0 there exists v(n) ∈ U \ 0 such that

kv(n)k0 ≥ nkv(n)k1. (6.5)

Set

u(n) := (1/kv(n)k0) v(n), ∀n > 0.

Multiplying both sides of (6.5) by 1/kv(n)k0 we deduce

1 = ku(n)k0 ≥ nku(n)k1, ∀n > 0.

Hence u(n) → 0 in the norm k − k1 and thus u(n) → 0 in the norm k − k0. Using Proposition 6.11 we deduce ku(n)k0 → 0. This contradicts the fact that ku(n)k0 = 1 for any n > 0. ut

Corollary 6.14. Two norms on the same vector space are equivalent if and only if any sequence that converges in one norm converges to the same limit in the other norm as well. ut

Theorem 6.15. Any norm k − k on R^N is equivalent to the norm | − |∞.

Proof. We already know from Proposition 6.7 that k − k ≺ | − |∞, so it suffices to show that | − |∞ ≺ k − k. Consider the "sphere"

S := { u ∈ R^N ; |u|∞ = 1 }.

We set

m := inf_{u∈S} kuk.

We claim that

m > 0. (6.6)

Let us observe that the above inequality implies that | − |∞ ≺ k − k. Indeed, for any u ∈ R^N \ 0 we have

ū := (1/|u|∞) u ∈ S,

so that kūk ≥ m.

Multiplying both sides of the above inequality with |u|∞ we deduce

kuk ≥ m|u|∞ ⇒ |u|∞ ≤ (1/m) kuk, ∀u ∈ R^N \ 0 ⇐⇒ | − |∞ ≺ k − k.

Let us prove the claim (6.6). Choose a sequence u(n) ∈ S such that

lim_{n→∞} ku(n)k = m. (6.7)

Since u(n) ∈ S, the coordinates uk(n) of u(n) satisfy the inequalities

uk(n) ∈ [−1, 1], ∀k = 1, . . . , N, ∀n.

The Bolzano-Weierstrass theorem implies that we can extract a subsequence ( u(ν)) of the sequence ( u(n)) such that each sequence ( uk(ν)) converges to some u∗k ∈ R, for any k = 1, . . . , N. Let u∗ ∈ R^N be the vector with coordinates u∗1, . . . , u∗N. Using Proposition 6.10 we deduce that u(ν) → u∗ in the norm | − |∞. Since |u(ν)|∞ = 1, ∀ν, we deduce from Proposition 6.11 that |u∗|∞ = 1, i.e., u∗ ∈ S. On the other hand, k − k ≺ | − |∞ and we deduce that u(ν) converges to u∗ in the norm k − k. Hence, by (6.7),

ku∗k = lim_{ν→∞} ku(ν)k = m.

Since |u∗|∞ = 1 we deduce that u∗ ≠ 0, so that m = ku∗k > 0. ut

Corollary 6.16. On a finite dimensional real vector space U any two norms are equivalent.

Proof. Let N = dim U. We can then identify U with R^N and we can invoke Theorem 6.15, which implies that any two norms on R^N are equivalent. ut

Corollary 6.17. Let U be a finite dimensional real vector space. A sequence in U converges in some norm if and only if it converges to the same limit in any norm. ut

6.3. Completeness.

Definition 6.18. Let (U, k − k) be a normed space. A sequence ( u(n))n≥0 of vectors in U is said to be a Cauchy sequence (in the norm k − k) if

lim_{m,n→∞} ku(m) − u(n)k = 0,

i.e., ∀ε > 0 ∃ν = ν(ε) > 0 such that

dist( u(m), u(n) ) = ku(m) − u(n)k < ε, ∀m, n > ν. ut

Proposition 6.19. Let (U, k − k) be a normed space. If the sequence ( u(n)) is convergent in the norm k − k, then it is also Cauchy in the norm k − k.

Proof. Denote by u∗ the limit of the sequence ( u(n)). For any ε > 0 we can find ν = ν(ε) > 0 such that

ku(n) − u∗k < ε/2, ∀n > ν.

If m, n > ν(ε) then

ku(m) − u(n)k ≤ ku(m) − u∗k + ku∗ − u(n)k < ε/2 + ε/2 = ε. ut

Definition 6.20. A normed space (U, k − k) is called complete if any Cauchy sequence is also convergent. A Banach space is a complete normed space. ut

Proposition 6.21. Let k − k0 and k − k1 be two equivalent norms on the same vector space U. Then the following statements are equivalent.

(i) The space U is complete in the norm k − k0. (ii) The space U is complete in the norm k − k1.

Proof. We prove only that (i) ⇒ (ii); the opposite implication is completely similar. Thus, we know that U is k − k0-complete and we have to prove that it is also k − k1-complete. Suppose that ( u(n)) is a Cauchy sequence in the norm k − k1. We have to prove that it converges in the norm k − k1. Since k − k0 ≺ k − k1 we deduce that there exists a constant C > 0 such that

ku(n) − u(m)k0 ≤ Cku(n) − u(m)k1 → 0 as m, n → ∞.

Hence the sequence ( u(n)) is also Cauchy in the norm k − k0. Since U is k − k0-complete, we ∗ deduce that the sequence ( u(n)) converges in the norm k − k0 to some vector u ∈ U. Using the fact that k − k1 ≺ k − k0 we deduce from Proposition 6.13 that the sequence converges ∗ to u in the norm k − k1 as well. ut

Theorem 6.22. Any finite dimensional normed space (U, k − k) is complete.

Proof. For simplicity we assume that U is a real vector space of dimension N. Thus we can identify it with R^N. Invoking Proposition 6.21 we see that it suffices to prove that U is complete with respect to some norm equivalent to k − k. Invoking Theorem 6.15 we see that it then suffices to prove that R^N is complete with respect to the norm | − |∞.

Suppose that ( u(n)) is a sequence in R^N which is Cauchy with respect to the norm | − |∞. As usual, we denote by u1(n), . . . , uN(n) the coordinates of u(n) ∈ R^N. From the inequalities

| uk(m) − uk(n)| ≤ | u(m) − u(n) |∞, ∀k = 1,...,N, we deduce that for any k = 1,...,N the sequence of coordinates ( uk(n)) is a Cauchy sequence of ∗ real numbers. Therefore it converges to some real number uk. Invoking Proposition 6.10 we deduce ∗ N ∗ ∗ that the sequence ( u(n)) converges to the vector u ∈ R with coordinates u1, . . . , uN . ut

6.4. Continuous maps. Let (U, k − kU ) and (V , k − kV ) be two normed vector spaces and S ⊂ U.

Definition 6.23. A map f : S → V is said to be continuous at a point s∗ ∈ S if for any sequence ( s(n)) of vectors in S that converges to s∗ in the norm k − kU the sequence f( s(n)) of vectors in V converges in the norm k − kV to the vector f(s∗). The function f is said to be continuous (on S) if it is continuous at any point s∗ in S. ut

Remark 6.24. The notion of continuity depends on the choice of norms on U and V only through the notion of convergence in these norms. Hence, if we replace these norms with equivalent ones, the notion of continuity does not change. ut

Example 6.25. (a) Let (U, k − k) be a normed space. Proposition 6.11 implies that the function f : U → R, f(u) = kuk, is continuous.
(b) Suppose that U, V are two normed spaces, S ⊂ U and f : S → V is a continuous map. Then the restriction of f to any subset S′ ⊂ S is also a continuous map f|S′ : S′ → V . ut

Proposition 6.26. Let U, V and S be as above, f : S → V a map and s∗ ∈ S. Then the following statements are equivalent.
(i) The map f is continuous at s∗.
(ii) For any ε > 0 there exists δ = δ(ε) > 0 such that if s ∈ S and ks − s∗kU < δ, then kf(s) − f(s∗)kV < ε.

Proof. (i) ⇒ (ii) We argue by contradiction and we assume that there exists ε0 > 0 such that for any n > 0 there exists sn ∈ S such that

ksn − s∗kU < 1/n and kf(sn) − f(s∗)kV ≥ ε0.

From the first inequality above we deduce that the sequence sn converges to s∗ in the norm k − kU. The continuity of f implies that

lim_{n→∞} kf(sn) − f(s∗)kV = 0.

The last equality contradicts the fact that kf(sn) − f(s∗)kV ≥ ε0 for any n.

(ii) ⇒ (i) Let s(n) be a sequence in S that converges to s∗. We have to show that f( s(n)) converges to f(s∗). Let ε > 0. By (ii), there exists δ(ε) > 0 such that if s ∈ S and ks − s∗kU < δ(ε), then kf(s) − f(s∗)kV < ε. Since s(n) converges to s∗, we can find ν = ν(ε) such that if n ≥ ν(ε), then

ks(n) − s∗kU < δ(ε). In particular, for any n ≥ ν(ε) we have

kf( s(n)) − f(s∗)kV < ε.

This proves that f( s(n)) converges to f(s∗). ut

Definition 6.27. A map f : S → V is said to be Lipschitz if there exists L > 0 such that

kf(s1) − f(s2)kV ≤ Lks1 − s2kU , ∀s1, s2 ∈ S.

Observe that if f : S → V is a Lipschitz map, then

sup { kf(s1) − f(s2)kV / ks1 − s2kU ; s1, s2 ∈ S, s1 ≠ s2 } < ∞.

The above supremum is called the Lipschitz constant of f.

Proposition 6.28. A Lipschitz map f : S → V is continuous.

Proof. Denote by L the Lipschitz constant of f. Let s∗ ∈ S and let ( s(n)) be a sequence in S that converges to s∗. Then

kf( s(n)) − f(s∗)kV ≤ Lks(n) − s∗kU → 0 as n → ∞. ut

Proposition 6.29. Let (U, k − kU ) and (V , k − kV ) be two normed vector spaces and T : U → V a linear map. Then the following statements are equivalent. (i) The map T : U → V is continuous on U. (ii) There exists C > 0 such that

kT ukV ≤ CkukU , ∀u ∈ U. (6.8) (iii) The map T : U → V is Lipschitz.

Proof. (i) ⇒ (ii) We argue by contradiction. We assume that for any positive integer n there exists a vector un ∈ U such that

kT unkV ≥ nkunkU .

Set

ūn := (1/(nkunkU)) un.

Then

kūnkU = 1/n, (6.9a)

kT u¯nkV ≥ nku¯nkU = 1. (6.9b)

The equality (6.9a) implies that u¯n → 0. Since T is continuous we deduce kT u¯nkV → 0. This contradicts (6.9b). (ii) ⇒ (iii) For any u1, u2 ∈ U we have

kT u1 − T u2kV = kT (u1 − u2)kV ≤ Cku1 − u2kU . This proves that T is Lipschitz. The implication (iii) ⇒ (i) follows from Proposition 6.28. ut

Definition 6.30. Let (U, k − kU ) and (V , k − kV ) be two normed vector spaces and T : U → V a linear map. The norm of T , denoted by kT k or kT kU,V , is the infimum of all constants C > 0 such that (6.8) is satisfied. If there is no such C we set kT k := ∞. Equivalently

kT k = sup_{u∈U\0} kT ukV / kukU = sup_{kukU =1} kT ukV .

We denote by B(U, V ) the space of linear operators T : U → V such that kT k < ∞, and we will refer to such operators as bounded. When U = V and k − kU = k − kV we set B(U) := B(U, U). ut

Observe that we can rephrase Proposition 6.29 as follows.

Corollary 6.31. Let (U, k − kU ) and (V , k − kV ) be two normed vector spaces. A linear map T : U → V is continuous if and only if it is bounded. ut

Let us observe that if T : U → V is a continuous operator, then

kT k ≤ C ⇐⇒ kT ukV ≤ CkukU , ∀u ∈ U. (6.10)

We have the following useful result whose proof is left as an exercise.

Proposition 6.32. Let (U, k − k) be a normed F-vector space and consider the space B(U) of continuous linear operators U → U. If S, T ∈ B(U) and λ ∈ F, then

kλSk = |λ| · kSk, kS + T k ≤ kSk + kT k, kS ◦ T k ≤ kSk · kT k.

In particular, the map B(U) ∋ T 7→ kT k ∈ [0, ∞) is a norm on the vector space of bounded linear operators U → U. For T ∈ B(U) the quantity kT k is called the operator norm of T . ut
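(For matrices acting on R^N with the Euclidean norm, the operator norm is the largest singular value, which NumPy exposes as np.linalg.norm(A, 2); this lets us spot-check the inequalities of Proposition 6.32. A sketch, not part of the notes:)

import numpy as np

rng = np.random.default_rng(4)
S, T = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

def op(A):
    # operator norm w.r.t. the Euclidean norm = largest singular value
    return np.linalg.norm(A, 2)

assert op(3.0 * S) <= 3.0 * op(S) + 1e-12    # homogeneity (in fact equality)
assert op(S + T) <= op(S) + op(T) + 1e-12    # triangle inequality
assert op(S @ T) <= op(S) * op(T) + 1e-12    # kS o Tk <= kSk kTk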

Corollary 6.33. If (Sn) and (Tn) are sequences in B(U) which converge in the operator norm to S ∈ B(U) and respectively T ∈ B(U), then the sequence (SnTn) converges in the operator norm to ST . Proof. We have

kSnTn − ST k = k(SnTn − STn) + (STn − ST )k ≤ k(Sn − S)Tnk + kS(Tn − T )k

≤ k(Sn − S)k · kTnk + kSk · k(Tn − T )k. Observe that k(Sn − S)k → 0, kTnk → kT k, kSk · k(Tn − T )k → 0. Hence k(Sn − S)k · kTnk + kSk · k(Tn − T )k → 0, and thus kSnTn − ST k → 0. ut

Theorem 6.34. Let (U, k − kU ) and (V , k − kV ) be two finite dimensional normed vector spaces. Then any linear map T : U → V is continuous.

Proof. For simplicity we assume that the vector spaces are real vector spaces, N = dim U, M = dim V . By choosing bases in U and V we can identify them with the standard spaces U ≅ R^N, V ≅ R^M. The notion of continuity is based only on the notion of convergence of sequences and, as we know, on finite dimensional spaces the notion of convergence of sequences is independent of the norm used. Thus we can assume that the norm on U and V is | − |∞.

The operator T can be represented by an M × N matrix

A = (aij)_{1≤i≤M, 1≤j≤N}.

Set

C := Σ_{k,j} |akj|.

In other words, C is the sum of the absolute values of all the entries of A. If u = (u1, . . . , uN)ᵀ ∈ R^N, then the vector v = T u ∈ R^M has coordinates

vk = Σ_{j=1}^N akj uj, k = 1, . . . , M.

We deduce that for any k = 1, . . . , M we have

|vk| ≤ Σ_{j=1}^N |akj| · |uj| ≤ max_{1≤j≤N} |uj| Σ_{j=1}^N |akj| = |u|∞ Σ_{j=1}^N |akj| ≤ C|u|∞.

Hence, for any u ∈ R^N we have

|T u|∞ = |v|∞ = max |vk| ≤ C|u|∞. 1≤k≤M This proves that T is continuous. ut
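(The constant C produced in the proof, the sum of the absolute values of all entries of A, is generally not optimal, but the bound |T u|∞ ≤ C|u|∞ it gives is easy to verify numerically; a Python/NumPy sketch:)

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 5))        # an M x N matrix, M = 3, N = 5
C = np.sum(np.abs(A))                  # sum of |entries|, as in the proof

for _ in range(100):
    u = rng.standard_normal(5)
    assert np.max(np.abs(A @ u)) <= C * np.max(np.abs(u)) + 1e-12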

6.5. Series in normed spaces. Let (U, k − k) be a normed space. Definition 6.35. A series in U is a formal infinite sum of the form X u(n) = u(0) + u(1) + u(2) + ··· , n≥0 where u(n) is a vector in U for any n ≥ 0. The n-th partial sum of the series is the finite sum

Sn = u(0) + u(1) + ··· + u(n).

The series is called convergent if the sequence Sn is convergent. The limit of this sequence is called the sum of the series. The series is called absolutely convergent if the series of nonnegative real numbers X ku(n)k n≥0 is convergent. ut

Proposition 6.36. Let (U, k − k) be a Banach space, i.e., a complete normed space. If the series in U

Σ_{n≥0} u(n) (6.11)

is absolutely convergent, then it is also convergent. Moreover

k Σ_{n≥0} u(n) k ≤ Σ_{n≥0} ku(n)k. (6.12)

Proof. Denote by Sn the n-th partial sum of the series (6.11), and by S⁺n the n-th partial sum of the series

Σ_{n≥0} ku(n)k.

Since U is complete, it suffices to show that the sequence (Sn) is Cauchy. For m < n we have

kSn − Smk = ku(m + 1) + ··· + u(n)k ≤ ku(m + 1)k + ··· + ku(n)k = S⁺n − S⁺m.

Since the sequence (S⁺n) is convergent, we deduce that it is also Cauchy, so that

lim_{m,n→∞} (S⁺n − S⁺m) = 0.

This implies

lim_{m,n→∞} kSn − Smk = 0.

The inequality (6.12) is obtained by letting n → ∞ in the inequality

ku(0) + ··· + u(n)k ≤ ku(0)k + ··· + ku(n)k. ut

Corollary 6.37 (Comparison trick). Let X u(n) n≥0 be a series in the Banach space (U, k−k). If there exists a convergent series of positive real numbers X cn, n≥0 such that ku(n)k ≤ cn, ∀n ≥ 0, (6.13) P then the series n≥0 u(n) is absolutely convergent and thus convergent. P Proof. The inequality (6.13) implies that the series of nonnegative real numbers n≥0 ku(n)k is convergent. ut

6.6. The exponential of a matrix. Suppose that A is an N × N complex matrix. We regard it as a linear operator A : C^N → C^N. As such, it is a continuous map, in any norm we choose on C^N. For simplicity, we will work with the Euclidean norm

|u| = |u|2 = √( |u1|² + ··· + |uN|² ).

The operator norm of A is then the quantity

kAk = sup_{u∈C^N\0} |Au| / |u|.

We denote by B_N the vector space of continuous linear operators C^N → C^N equipped with the operator norm defined above. The space is a finite dimensional complex vector space (dim_C B_N = N²) and thus it is complete. Consider the series in B_N

Σ_{n≥0} (1/n!) A^n = 1 + (1/1!)A + (1/2!)A² + ··· .

Observe that kA^nk ≤ kAk^n, so that

(1/n!) kA^nk ≤ (1/n!) kAk^n.

The series of nonnegative numbers

Σ_{n≥0} (1/n!) kAk^n = 1 + (1/1!)kAk + (1/2!)kAk² + ···

is convergent, with sum e^{kAk}. Thus the series Σ_{n≥0} (1/n!) A^n is absolutely convergent and hence convergent.

Definition 6.38. For any complex N × N matrix A we denote by e^A the sum of the convergent (and absolutely convergent) series

1 + (1/1!)A + (1/2!)A² + ··· .

The N × N matrix e^A is called the exponential of A. ut

Note that keAk ≤ ekAk. Example 6.39. Suppose that A is a diagonal matrix,

A := Diag(λ1, . . . , λN). Then

A^n = Diag(λ1^n, . . . , λN^n)

and thus

e^A = Diag( Σ_{n≥0} (1/n!) λ1^n, . . . , Σ_{n≥0} (1/n!) λN^n ) = Diag( e^{λ1}, . . . , e^{λN} ). ut
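(The exponential series converges quickly, so truncating it already gives a good approximation of e^A; the sketch below compares the partial sums with the closed form of Example 6.39 for a diagonal matrix. Python/NumPy, with a hypothetical helper name:)

import numpy as np

def expm_series(A, n_terms=30):
    # partial sum 1 + A/1! + ... + A^n/n! of the exponential series
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, n_terms):
        term = term @ A / n            # term now equals A^n / n!
        S = S + term
    return S

A = np.diag([1.0, 2.0, -0.5])
assert np.allclose(expm_series(A), np.diag(np.exp([1.0, 2.0, -0.5])))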

Before we explain the uses of the exponential of a matrix, we describe a simple strategy for computing it.

Proposition 6.40. Let A be a complex N × N matrix and S an invertible N × N matrix. Then

S^{−1} e^A S = e^{S^{−1}AS}. (6.14)

Proof. Set A_S := S^{−1}AS. We have to prove that S^{−1} e^A S = e^{A_S}. Set

S_n := 1 + (1/1!)A + ··· + (1/n!)A^n, S′_n := 1 + (1/1!)A_S + ··· + (1/n!)A_S^n.

Observe that for any positive integer k

A_S^k = (S^{−1}AS)(S^{−1}AS) ··· (S^{−1}AS) = S^{−1} A^k S.

Moreover, for any two N × N matrices B, C we have S^{−1}(B + C)S = S^{−1}BS + S^{−1}CS. We deduce that

S^{−1} S_n S = S′_n, ∀n ≥ 0.

Since S_n → e^A, we deduce from Corollary 6.33 that

S′_n = S^{−1} S_n S → S^{−1} e^A S.

On the other hand, S′_n → e^{A_S}, and this implies (6.14). ut

We can rewrite (6.14) in the form

e^A = S e^{S^{−1}AS} S^{−1}. (6.15)

If we can choose S cleverly so that S^{−1}AS is not too complicated, then we have a good shot at computing e^{S^{−1}AS}, which then leads via (6.15) to a description of e^A. Here is one such instance. Suppose that A is normal, i.e., A∗A = AA∗. The spectral theorem for normal operators then implies that there exists an invertible matrix S such that

S^{−1}AS = Diag(λ1, . . . , λN),

where λ1, . . . , λN ∈ C are the eigenvalues of A. Moreover, the matrix S also has an explicit description: its j-th column is an eigenvector of Euclidean norm 1 of A corresponding to the eigenvalue λj. The computations in Example 6.39 then lead to the formula

e^A = S Diag( e^{λ1}, . . . , e^{λN} ) S^{−1}. (6.16)
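(For a normal matrix the recipe (6.16) is directly computable: diagonalize, exponentiate the eigenvalues, conjugate back. A sketch for a real symmetric, hence normal, matrix, compared against SciPy's expm:)

import numpy as np
from scipy.linalg import expm             # reference implementation, for comparison

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # real symmetric, hence normal
lam, S = np.linalg.eigh(A)                # columns of S: orthonormal eigenvectors
expA = S @ np.diag(np.exp(lam)) @ S.T     # e^A = S Diag(e^{lambda_j}) S^{-1}
assert np.allclose(expA, expm(A))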

6.7. The exponential of a matrix and systems of linear differential equations. The usual exponential function e^x satisfies the very important property

e^{x+y} = e^x e^y, ∀x, y.

If A, B are two N × N matrices, then the equality e^{A+B} = e^A e^B no longer holds, because the multiplication of matrices is not commutative. Something weaker does hold.

Theorem 6.41. Suppose that A, B are complex N × N matrices that commute, i.e., AB = BA. Then

e^{A+B} = e^A e^B.

Proof. The proof relies on the following generalization of Newton's binomial formula, whose proof is left as an exercise.

Lemma 6.42 (Newton's binomial formula: the matrix case). If A, B are two commuting N × N matrices, then for any positive integer n we have

(A + B)^n = Σ_{k=0}^n \binom{n}{k} A^{n−k} B^k = A^n + \binom{n}{1} A^{n−1}B + \binom{n}{2} A^{n−2}B² + ··· ,

where \binom{n}{k} is the binomial coefficient

\binom{n}{k} := n! / (k!(n − k)!), ∀0 ≤ k ≤ n. ut

We set

S_n(A) = 1 + (1/1!)A + ··· + (1/n!)A^n, S_n(B) = 1 + (1/1!)B + ··· + (1/n!)B^n,
S_n(A + B) = 1 + (1/1!)(A + B) + ··· + (1/n!)(A + B)^n.

We have

S_n(A) → e^A, S_n(B) → e^B, S_n(A + B) → e^{A+B} as n → ∞,

and we will prove that

lim_{n→∞} kS_n(A)S_n(B) − S_n(A + B)k = 0,

which then implies immediately the desired conclusion. Grouping the terms of the product according to the total degree k = i + j, we obtain

S_n(A)S_n(B) = ( 1 + (1/1!)A + ··· + (1/n!)A^n )( 1 + (1/1!)B + ··· + (1/n!)B^n )
= Σ_{k=0}^{n} Σ_{i=0}^{k} (1/(i!(k−i)!)) A^i B^{k−i} + Σ_{k=n+1}^{2n} Σ_{i+j=k, 0≤i,j≤n} (1/(i! j!)) A^i B^j
= Σ_{k=0}^{n} (1/k!) Σ_{i=0}^{k} (k!/(i!(k−i)!)) A^i B^{k−i} + R_n,

where, by Lemma 6.42, the first sum equals Σ_{k=0}^{n} (1/k!)(A + B)^k = S_n(A + B), and

R_n := Σ_{k=n+1}^{2n} Σ_{i+j=k, 0≤i,j≤n} (1/(i! j!)) A^i B^j.

Hence S_n(A)S_n(B) − S_n(A + B) = R_n, and we deduce

kR_nk ≤ Σ_{k=n+1}^{2n} Σ_{i+j=k, 0≤i,j≤n} (1/(i! j!)) kAk^i kBk^j ≤ Σ_{k=n+1}^{2n} (1/k!) Σ_{i=0}^{k} (k!/(i!(k−i)!)) kAk^i kBk^{k−i}
= Σ_{k=n+1}^{2n} (1/k!) (kAk + kBk)^k.

Set C := kAk + kBk, so that

kR_nk ≤ Σ_{k=n+1}^{2n} (1/k!) C^k. (6.17)

Fix a positive integer n0 such that n0 > 2C. For k > n0 we have

k! = 1 · 2 ··· n0 · (n0 + 1) ··· k ≥ n0^{k−n0},

so that

1/k! ≤ 1/n0^{k−n0}, ∀k > n0.

We deduce that if n > n0, then

Σ_{k=n+1}^{2n} (1/k!) C^k = C^{n0} Σ_{k=n+1}^{2n} (1/k!) C^{k−n0} ≤ C^{n0} Σ_{k=n+1}^{2n} (C/n0)^{k−n0}
≤ C^{n0} Σ_{k=n+1}^{2n} (1/2)^{k−n0} = (2C)^{n0} Σ_{k=n+1}^{2n} (1/2)^k = (2C)^{n0} ( 1/2^{n+1} + ··· + 1/2^{2n} )
= ((2C)^{n0}/2^{n+1}) ( 1 + 1/2 + ··· + 1/2^{n−1} ) ≤ 2 (2C)^{n0}/2^{n+1} = (2C)^{n0}/2^n.

Using (6.17) we deduce

kR_nk ≤ (2C)^{n0}/2^n → 0 as n → ∞. ut
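(Theorem 6.41 is easy to illustrate numerically: powers of one matrix always commute, so A = M and B = M² form a commuting pair; for a generic non-commuting pair the identity fails. A sketch using SciPy's expm:)

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3))
A, B = M, M @ M                      # powers of M commute: AB = BA
assert np.allclose(A @ B, B @ A)
assert np.allclose(expm(A + B), expm(A) @ expm(B))

# a generic pair does not commute, and the identity almost surely fails:
C, D = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # typically False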

6.8. Closed and open subsets. Definition 6.43. (a) Let U be a vector space. A subset C ⊂ U is said to be closed in the norm k − k on U if for any sequence ( u(n)) of vectors in C which converges in the norm k − k to a vector u∗ ∈ U we have u∗ ∈ C . (b) A subset O in U is said to be open in the norm k − k if the complement U \ O is closed in the norm k − k. ut

Remark 6.44. The notion of closed set with respect to a norm is based only on the notion of convergence with respect to a norm. Thus, if a set is closed in a given norm, it is closed in any equivalent norm. In particular, the notion of closed set in a finite dimensional normed space is independent of the norm used. A similar observation applies to open sets. ut

Proposition 6.45. Suppose that (U, k − k) is a normed vector space and O ⊂ U. Then the following statements are equivalent.
(i) The set O is open in the norm k − k.
(ii) For any u ∈ O there exists ε > 0 such that any v ∈ U satisfying kv − uk < ε belongs to O. ut

Proof. (i) ⇒ (ii) We argue by contradiction. We assume that there exists u∗ ∈ O so that for any n > 0 there exists vn ∈ U satisfying

kvn − u∗k < 1/n but vn ∉ O.

Set C := U \ O, so that C is closed and vn ∈ C for any n > 0. The inequality kvn − u∗k < 1/n implies vn → u∗. Since C is closed we deduce that u∗ ∈ C . This contradicts the initial assumption that u∗ ∈ O = U \ C .

(ii) ⇒ (i) We have to prove that C = U \ O is closed, given that O satisfies (ii). Again we argue by contradiction. Suppose that there exists a sequence ( u(n)) in C which converges to a vector u∗ ∈ U \ C = O. Then there exists ε > 0 such that any v ∈ U satisfying kv − u∗k < ε does not belong to C . This contradicts the fact that u(n) → u∗, because at least one of the terms u(n) satisfies ku(n) − u∗k < ε, and yet it belongs to C . ut

Definition 6.46. Let (U, k − k) be a normed space. The open ball of center u and radius r is the set

B(u, r) := { v ∈ U; kv − uk < r }. ut

We can rephrase Proposition 6.45 as follows.

Corollary 6.47. Suppose that (U, k − k) is a normed vector space. A subset O ⊂ U is open in the norm k − k if and only if for any u ∈ O there exists ε > 0 such that B(u, ε) ⊂ O. ut

The proof of the following result is left as an exercise.

Theorem 6.48. Suppose that (U, k − kU ), (V , k − kV ) are normed spaces and f : U → V is a map. Then the following statements are equivalent.
(i) The map f is continuous.
(ii) For any subset O ⊂ V that is open in the norm k − kV the preimage f^{−1}(O) ⊂ U is open in the norm k − kU.
(iii) For any subset C ⊂ V that is closed in the norm k − kV the preimage f^{−1}(C ) ⊂ U is closed in the norm k − kU. ut

6.9. Compactness. We want to discuss a central concept in modern mathematics which is often used in proving existence results.

Definition 6.49. Let (U, k − k) be a normed space. A subset K ⊂ U is called compact if any sequence of vectors in K has a subsequence that converges to some vector in K. ut

Remark 6.50. Since the notion of compactness is expressed only in terms of convergence of se- quences we deduce that if a set is compact in some norm, it is compact in any equivalent norm. In particular, on finite dimensional vector spaces the notion of compactness is independent of the norm. ut

Example 6.51 (Fundamental Example). For any real numbers a < b the closed interval [a, b] is a compact subset of R. ut

The next existence result illustrates our claim of the usefulness of compactness in establishing existence results. Theorem 6.52 (Weierstrass). Let (U, k − k) be a normed space and f : K → R a continuous ∗ function defined on the compact subset K ⊂ U. Then there exist u∗, u ∈ K such that ∗ f(u∗) ≤ f(u) ≤ f(u ), ∀u ∈ K, i.e., ∗ f(u∗) = inf f(u), f(u ) = sup f(u). u∈K u∈K In particular, f is bounded both from above and from below. Proof. Let m := inf f(u) ∈ [−∞, ∞). u∈K There exists a sequence ( u(n)) in K such that4 lim f(u(n)) = m. n→∞

⁴Such a sequence is called a minimizing sequence of f.

Since K is compact, there exists a subsequence ( u(ν)) of ( u(n)) that converges to some u∗ ∈ K. Observing that lim f( u(ν) ) = lim f( u(n) ) = m ν→∞ n→∞ we deduce from the continuity of f that

f(u∗) = lim f( u(ν) ) = m. ν→∞ ∗ ∗ The existence of a u such that f(u ) = supu∈K f(u) is proved in a similar fashion. ut

Definition 6.53. Let (U, k − k) be a normed space. A set S ⊂ U is called bounded in the norm k − k if there exists M > 0 such that ksk ≤ M, ∀s ∈ S. In other words, S is bounded if and only if

sup_{s∈S} ksk < ∞. ut

Let us observe again that if a set S is bounded in a norm k − k, it is also bounded in any other equivalent norm.

Proposition 6.54. Let (U, k − k) be a normed space and K ⊂ U a compact subset. Then K is both bounded and closed.

Proof. We first prove that K is bounded. Indeed, Example 6.25 shows that the function f : K → R, f(u) = kuk, is continuous and Theorem 6.52 implies that sup_{u∈K} f(u) < ∞. This proves that K is bounded.

To prove that K is also closed, consider a sequence ( u(n)) in K which converges to some u∗ in U. We have to show that in fact u∗ ∈ K. Indeed, since K is compact, the sequence ( u(n)) contains a subsequence which converges to some point in K. On the other hand, since ( u(n)) is convergent, the limit of this subsequence must coincide with the limit of u(n), which is u∗. Hence u∗ ∈ K. ut

Theorem 6.55. Let (U, k−k) be a finite dimensional normed space, and K ⊂ U. Then the following statements are equivalent. (i) The set K is compact. (ii) The set K is bounded and closed.

Proof. The implication (i) ⇒ (ii) is true for any normed space so we only have to concentrate on the opposite implication. For simplicity we assume that U is a real vector space of dimension N. By fixing a basis of U we can identify it with R^N. Since the notions of compactness, boundedness and closedness are independent of the norm we use on R^N, we can assume that k − k is the norm | − |∞. Since K is bounded we deduce that there exists M > 0 such that

|u|∞ ≤ M, ∀u ∈ K. In particular we deduce that

uk ∈ [−M,M], ∀k = 1,...,N, ∀u ∈ K, N where as usual we denote by u1, . . . , uN the coordinates of a vector u ∈ R . Suppose now that ( u(n)) is a sequence in K. We have to show that it contains a subsequence that converges to a vector in K. We deduce from the above that

uk(n) ∈ [−M,M], ∀k = 1,...,N, ∀n.

Since the interval [−M,M] is a compact subset of R we can find a subsequence ( u(ν)) of ( u(n)) such that each of the sequences ( uk(ν)), k = 1, . . . , N, converges to some u∗k ∈ [−M,M]. Denote by u∗ the vector in R^N with coordinates u∗1, . . . , u∗N. Invoking Proposition 6.10 we deduce that the subsequence ( u(ν)) converges in the norm | − |∞ to u∗. Since K is closed and the sequence ( u(ν)) is in K, we deduce that its limit u∗ is also in K. ut

6.10. Exercises.

Exercise 6.1. (a) Consider the space R^N with canonical basis e1, . . . , eN. Prove that the functions | − |1, | − |∞ : R^N → [0, ∞) defined by

|u1e1 + ··· + uN eN|∞ := max_{1≤k≤N} |uk|, |u|1 = Σ_{k=1}^N |uk|

are norms on R^N.
(b) Let U denote the space C([0, 1]) of continuous functions u : [0, 1] → R. Prove that the function

k − k∞ : C([0, 1]) → [0, ∞), kuk∞ := sup_{x∈[0,1]} |u(x)|,

is a norm on U. ut

Exercise 6.2. Prove Proposition 6.12. ut

Exercise 6.3. (a) Prove that the space U of continuous functions u : [0, 1] → R equipped with the norm

kuk = max_{x∈[0,1]} |u(x)|

is a complete normed space.
(b) Prove that the above vector space U is not finite dimensional.
For part (a) you need to use the concept of uniform convergence of functions. For part (b) consider the linear functionals Ln : U → R, Ln(u) = u(1/n), ∀u ∈ U, n > 0, and then show they are linearly independent. ut

Exercise 6.4. Prove Proposition 6.32. ut

Exercise 6.5. Prove Theorem 6.48. ut

Exercise 6.6. Let (U, k − k) be a normed space.

(a) Show that if C1, . . . , Cn are closed subsets of U, then so is their union C1 ∪ · · · ∪ Cn.
(b) Show that if (Ci)i∈I is an arbitrary family of closed subsets of U, then their intersection ∩i∈I Ci is also closed.
(c) Show that if O1, . . . , On are open subsets of U, then so is their intersection O1 ∩ · · · ∩ On.
(d) Show that if (Oi)i∈I is an arbitrary family of open subsets of U, then their union ∪i∈I Oi is also open. ut

Exercise 6.7. Suppose that (U, k − k) is a normed space. Prove that for any u ∈ U and any r > 0 the ball B(u, r) is an open subset. ut

Exercise 6.8. (a) Show that the subset [0, ∞) ⊂ R is closed.
(b) Use (a), Theorem 6.48 and Exercise 6.6 to show that the set

PN := { µ ∈ R^N ; µ1 + ··· + µN = 1, µk ≥ 0, ∀k = 1, . . . , N }

is closed. Draw a picture of PN for N = 1, 2, 3.
Hint: Use Theorem 6.34 to prove that the maps f : R^N → R and ℓk : R^N → R given by

f(u) = u1 + ··· + uN , ℓk(u) = uk

are continuous. ut

Exercise 6.9. Consider the 2 × 2 matrix

J = ( 0 −1
      1  0 ).

(a) Compute J², J³, J⁴, J⁵.
(b) Let t ∈ R. Compute e^{tJ}. Describe the behavior of the point

u(t) = e^{tJ} (1, 0)ᵀ ∈ R²

as t varies in the interval [0, 2π]. ut

Exercise 6.10. Prove Newton’s binomial formula, Lemma 6.42, using induction on n. ut 100 LIVIU I. NICOLAESCU


DEPARTMENT OF MATHEMATICS, UNIVERSITY OF NOTRE DAME, NOTRE DAME, IN 46556-4618.
E-mail address: [email protected]
URL: http://www.nd.edu/˜lnicolae/