
Advanced Linear Algebra (MiM 2019, Spring)

P. I. Katsylo

Contents

1 A Review of Linear Algebra
  1.1 Algebraic Fields
  1.2 Vector Spaces and Subspaces
  1.3 Dual Spaces and Linear Maps
  1.4 Change of Basis
  1.5 Forms
  1.6 Problems

2 Tensors
  2.1 Tensor Product of Vector Spaces
  2.2 Symmetric Powers of Vector Spaces
  2.3 Exterior Powers of Vector Spaces
  2.4 Tensor Products of Linear Maps
  2.5 Canonical Forms of Tensors
  2.6 *Multilinear Maps
  2.7 *Tensor Fields
  2.8 Problems

3 Operators
  3.1 Realification and Complexification
  3.2 Jordan Normal Form
  3.3 Operators in Real Inner Product Spaces
  3.4 Operators in Hermitian Spaces
  3.5 Operator Decompositions
  3.6 *Normed Spaces
  3.7 *Functions of Operators
  3.8 Problems

4 Rings and Modules
  4.1 Rings
  4.2 Modules
  4.3 Tensor Products of Modules
  4.4 *A Proof of Cayley-Hamilton's Theorem
  4.5 *Tor and Ext
  4.6 Problems

Chapter 1. A Review of Linear Algebra

1.1 Algebraic Fields

Definition 1.1.1. An algebraic field or, simply, a field is a set k with fixed elements 0_k = 0 (zero) and 1_k = 1 (identity), where 0 ≠ 1, and two operations: an addition

  k × k → k, (λ, µ) ↦ λ + µ,

and a multiplication

  k × k → k, (λ, µ) ↦ λµ,

such that for all λ, µ, γ ∈ k we have

1. λ + µ = µ + λ (commutativity of the addition);
2. (λ + µ) + γ = λ + (µ + γ) (associativity of the addition);
3. λ + 0 = λ;
4. for every λ ∈ k there exists −λ ∈ k such that λ + (−λ) = 0;
5. λµ = µλ (commutativity of the multiplication);
6. (λµ)γ = λ(µγ) (associativity of the multiplication);
7. 1λ = λ;
8. for every λ ∈ k* := k ∖ {0} there exists λ⁻¹ ∈ k such that λλ⁻¹ = 1;
9. λ(µ + γ) = λµ + λγ (distributivity).

In a field k we introduce two more operations: the subtraction

  k × k → k, (λ, µ) ↦ λ − µ := λ + (−µ),

and the division

  k × k* → k, (λ, µ) ↦ λµ⁻¹.

Loosely speaking, a field is a set with the zero element 0, the identity element 1, and four arithmetic operations — addition, subtraction, multiplication, and division — with the natural properties.

Examples 1.1.2. The basic examples of fields are

• Q – the field of rational numbers,
• R – the field of real numbers,
• C – the field of complex numbers,
• Fp = {0, 1, ..., p − 1} – the field of integers modulo a prime p.
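The field axioms above can be checked mechanically for Fp. The sketch below (the function names fp_add, fp_mul, fp_inv are ours, not from the notes) implements the operations of Fp, with the multiplicative inverse of axiom 8 computed via Fermat's little theorem, a^(p−1) = 1 for a ≠ 0.

```python
# A minimal model of the field F_p = {0, 1, ..., p-1}, p prime.
P = 7  # any prime modulus

def fp_add(a, b, p=P):
    # addition in F_p
    return (a + b) % p

def fp_mul(a, b, p=P):
    # multiplication in F_p
    return (a * b) % p

def fp_inv(a, p=P):
    # Fermat's little theorem: a^(p-1) = 1, so a^(p-2) is the inverse of a != 0
    if a % p == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    return pow(a, p - 2, p)
```

For example, in F7 the inverse of 3 is 5, since 3 · 5 = 15 ≡ 1 (mod 7).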

Consider the field C of complex numbers. The map

  C → C, z = a + b√−1 ↦ z̄ = a − b√−1

is an automorphism of C. This means that

• 0̄ = 0, 1̄ = 1;
• (z + w)‾ = z̄ + w̄;
• (zw)‾ = z̄ w̄.

The complex number z̄ is called the conjugate of z.

A field k is called algebraically closed whenever for every nonconstant polynomial f(t) over k there is t₀ ∈ k such that f(t₀) = 0; that is, every nonconstant polynomial over k has a root in k. If a field k is algebraically closed, then every nonconstant polynomial over k can be decomposed into a product of polynomials of degree one.

Exercise 1.1.3. Prove that

1. the fields Q and R are not algebraically closed;
2. no finite field is algebraically closed.

Theorem 1.1.4 (Fundamental Theorem of Algebra). The field C of complex num- bers is algebraically closed.

For a proof, see standard textbooks.

1.2 Vector Spaces and Subspaces

Definition 1.2.1. Let k be a field (elements of k are called scalars). A vector space over k is a set V (elements of V are called vectors) with a fixed vector 0 ∈ V (the zero vector) and two compositions: an addition of vectors

  V × V → V, (v, u) ↦ v + u, (1.2.1)

and a multiplication of vectors by scalars

  k × V → V, (λ, v) ↦ λv,

such that for all λ, µ ∈ k and v, u, w ∈ V we have

1. (v + u) + w = v + (u + w) (associativity of the addition);
2. v + u = u + v (commutativity of the addition);
3. v + 0 = v;
4. for every v ∈ V there exists −v ∈ V such that v + (−v) = 0;
5. λ(v + u) = λv + λu (distributivity);
6. (λ + µ)v = λv + µv (distributivity);
7. (λµ)v = λ(µv);
8. 1v = v.

Vector spaces over R are called real vector spaces and vector spaces over C are called complex vector spaces.

Examples 1.2.2.

• The zero space {0} (consists of the zero vector only).
• R^1 – the "usual" 1-dimensional line with an origin 0 and a basis (i).
• R^2 – the "usual" 2-dimensional plane with an origin 0 and a basis (i, j).
• R^3 – the "usual" 3-dimensional space with an origin 0 and a basis (i, j, k).
• The n-dimensional coordinate space over a field k,

    k^n = { (a^1, ..., a^n)^T | a^i ∈ k }.

  In particular, the real and the complex n-dimensional coordinate vector spaces.
• The space k[t] of polynomials in the variable t with coefficients in the field k.
• The space k[t]⩽n of polynomials of degree ⩽ n in the variable t with coefficients in the field k.
• The space Mat_{n×m}(k) of n × m matrices with entries in the field k.

Consider vector spaces V1, ..., Vn over the same field. Then the set-theoretical direct product

  V1 × ... × Vn = { (v1, ..., vn) | vi ∈ Vi }

with the compositions

• (v1, ..., vn) + (v1′, ..., vn′) := (v1 + v1′, ..., vn + vn′);
• λ(v1, ..., vn) := (λv1, ..., λvn)

is a vector space. It is called the direct product of V1, ..., Vn and denoted by V1 × ... × Vn (the same as the set-theoretical direct product). It is also called the direct sum of V1, ..., Vn and denoted by V1 ⊕ ... ⊕ Vn.

Remark 1.2.3. Suppose that J is an index set and {Vj}_{j∈J} is a set of vector spaces. Then we have the following spaces.

• The direct product

    ⨉_{j∈J} Vj := { {vj ∈ Vj}_{j∈J} }

  of the spaces {Vj}_{j∈J}. In particular, if Vj = V for all j, then we have V^{⨉J} := ⨉_{j∈J} V.

• The direct sum

    ⊕_{j∈J} Vj := { {vj ∈ Vj}_{j∈J} | vj ≠ 0 for finitely many j }

  of the spaces {Vj}_{j∈J}. In particular, if Vj = V for all j, then we have V^{⊕J} := ⊕_{j∈J} V.

Note that the direct product of an infinite number of vector spaces is not the direct sum of those vector spaces.

Let V be a vector space over a field k. A subset W ⊂ V is called a vector subspace or, simply, a subspace of V whenever 0 ∈ W and

  w1, ..., wn ∈ W, λ1, ..., λn ∈ k imply λ1w1 + ... + λnwn ∈ W.

Clearly, a subspace is a vector space over the same field.

Examples 1.2.4. In a vector space V,

1. we always have two subspaces:
   • {0} – the zero subspace;
   • V itself;
2. vectors v1, ..., vn ∈ V span the subspace

     Span(v1, ..., vn) := { λ1v1 + ... + λnvn | λi ∈ k } ⊂ V.

Example 1.2.5. A system of linear equations

  a_1^1 x1 + a_1^2 x2 + ... + a_1^n xn = 0,
  ..........................................
  a_m^1 x1 + a_m^2 x2 + ... + a_m^n xn = 0,     (1.2.2)

where a_i^j ∈ k, defines the subspace

  { (x1, ..., xn)^T ∈ k^n | x1, ..., xn satisfy (1.2.2) }.

Examples 1.2.6. If W1, ..., Wn are subspaces of a vector space V, then

1. their intersection W1 ∩ ... ∩ Wn is a subspace of V;
2. their sum

     W1 + ... + Wn := { w1 + ... + wn | wi ∈ Wi }

   is a subspace of V; the sum W1 + ... + Wn is called direct whenever

     w1 + ... + wn = 0, wi ∈ Wi imply wi = 0 for 1 ⩽ i ⩽ n.

In linear algebra,+ a totally+ = ordered∈ set of vectors is called= a list⩽ of⩽ vectors.

Definition 1.2.7. Suppose that (vi) is a list of vectors of a vector space V. Then

• A finite sum of the form

    λ_{i1} v_{i1} + λ_{i2} v_{i2} + ... + λ_{il} v_{il}, where λ_i ∈ k,

  is called a linear combination of the list (vi); the scalars λ_{i1}, λ_{i2}, ..., λ_{il} are called the coefficients of this linear combination.

• The list (vi) is called linearly dependent whenever some nontrivial linear combination of (vi) is equal to 0.

A list consisting of one vector is linearly dependent whenever this vector is the zero vector. A list of vectors is linearly dependent whenever some vector of the list is a linear combination of the others. In particular, if some vector of the list is the zero vector, then the list is linearly dependent.

A list of vectors (ei) of V is called a basis of V whenever every vector v ∈ V can be uniquely represented as a linear combination of (ei).

Theorem 1.2.8. Every vector space (even an infinite-dimensional one) has a basis.

Remark 1.2.9. A basis of an infinite-dimensional space is called a Hamel basis of that space.

Examples 1.2.10.

• In R^1 we have the basis (i), in R^2 we have the basis (i, j), and in R^3 we have the basis (i, j, k).
• In k^n we have the standard basis

    e1 = (1, 0, ..., 0)^T, e2 = (0, 1, ..., 0)^T, ..., en = (0, 0, ..., 1)^T.

• In k[t]⩽n we have the basis (1, t, t^2, ..., t^n).
• In k[t] we have the Hamel basis (1, t, t^2, ...).

Lemma 1.2.11. A list of vectors (ei) of a vector space V is a basis of V whenever

1. every vector v ∈ V can be represented as a linear combination of (ei);
2. (ei) is linearly independent.

Definition 1.2.12. The number of vectors in a basis (ei) of a vector space V (a natural number or the symbol ∞) is called the dimension of V and is denoted by dim(V).

Lemma 1.2.13. The dimension of a vector space is well-defined.

We have dim(k^n) = n, dim(k[t]⩽n) = n + 1, dim(k[t]) = ∞.

Lemma 1.2.14. Suppose that V is an n-dimensional vector space and a list of vectors (ei) of V consists of n vectors. Then (ei) is a basis of V whenever the list (ei) is linearly independent.
1.3 Dual Spaces and Linear Maps

Let V be a vector space over a field k. A function s : V → k is called linear whenever

  s(λ1v1 + ... + λkvk) = λ1 s(v1) + ... + λk s(vk) for all λi ∈ k, vi ∈ V.

Example 1.3.1. An element x ∈ k defines the linear function

  s_x : k[t]⩽n → k, s_x(f) = f(x).

The set of all linear functions on V is denoted by V*; it is a vector space under the following compositions:

1. the sum of s1, s2 ∈ V* is

     s1 + s2 : V → k, (s1 + s2)(v) = s1(v) + s2(v);

2. the product of λ ∈ k and s ∈ V* is

     λs : V → k, (λs)(v) = λ s(v).

The space V* is called the vector space dual to V.

Remark 1.3.2. For vector spaces with additional structures there are definitions of their duals such that the duals are vector spaces with the same structures. For example, duals of normed, topological, Banach, ... vector spaces are normed, topological, Banach, ... vector spaces respectively.

Suppose that V is a finite-dimensional vector space and (ei) = (e1, ..., en) is a basis of V. A basis (x^i) = (x^1, ..., x^n) of V* is called dual to (ei) whenever

  x^i(ej) = δ^i_j := 1 if i = j, 0 if i ≠ j.

Lemma 1.3.3. There is a unique basis (x^i) of V* dual to (ei). Namely,

  x^i(v) = the ith coordinate of v in the basis (ei).

Corollary 1.3.4. dim(V*) = dim(V).
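Lemma 1.3.3 has a concrete numerical form (the basis below is a sample chosen for illustration): if the basis vectors e_i are the columns of an invertible matrix E, then the dual functionals x^i are the rows of E^{-1}, since the i-th row of E^{-1} applied to e_j gives δ^i_j.

```python
import numpy as np

# Columns of E are the basis vectors e1, e2 of R^2.
E = np.array([[2.0, 1.0],
              [1.0, 1.0]])
X = np.linalg.inv(E)          # row i of X is the dual functional x^i

# Duality: x^i(e_j) = delta^i_j, i.e. X @ E is the identity matrix.
assert np.allclose(X @ E, np.eye(2))

# x^i(v) returns the i-th coordinate of v in the basis (e_i).
v = np.array([3.0, 5.0])
coords = X @ v
assert np.allclose(E @ coords, v)
```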

Example 1.3.5. Consider the space k^n. A row s = (s1, ..., sn), where si ∈ k, defines the linear function (we denote it by the same symbol)

  s : k^n → k, (a^1, ..., a^n)^T ↦ s1 a^1 + ... + sn a^n.

Conversely, a linear function on k^n is defined by a uniquely determined row of length n with entries in k. Due to this we identify the dual vector space (k^n)* and the space of rows of length n with entries in k. Clearly, the basis

  x^1 = (1, 0, 0, ..., 0),
  x^2 = (0, 1, 0, ..., 0),
  ...............,
  x^n = (0, 0, 0, ..., 1)

of (k^n)* is dual to the standard basis of k^n.

Remark 1.3.6. In analysis, when we consider an open subset U ⊂ R^n with coordinate functions (x^1, ..., x^n), the standard notation of the standard basis of the tangent space T_{x0}(U) of U at x0 ∈ U is (∂/∂x^1, ..., ∂/∂x^n), and the standard notation of the dual basis of T_{x0}(U)* is (dx^1, ..., dx^n).

Let V and U be vector spaces over a field k. A map ϕ : V → U is called linear whenever

  ϕ(λ1v1 + ... + λkvk) = λ1 ϕ(v1) + ... + λk ϕ(vk) for all λi ∈ k, vi ∈ V.

Linear maps are also called linear transformations. The set of linear maps of a vector space V to a vector space U is denoted by Hom(V, U). If V = U, then Hom(V, V) is also denoted by End(V); elements of End(V) are called operators. Clearly,

  Hom(V, k) ≃ V*.

Note that Hom(V, U) is a vector space under the following compositions:

1. the sum of ϕ1, ϕ2 ∈ Hom(V, U) is

     ϕ1 + ϕ2 : V → U, (ϕ1 + ϕ2)(v) = ϕ1(v) + ϕ2(v);

2. the product of λ ∈ k and ϕ ∈ Hom(V, U) is

     λϕ : V → U, (λϕ)(v) = λ ϕ(v).

Example 1.3.7. A matrix

  Φ = (Φ^i_j) ∈ Mat_{m×n}(k)

defines the linear map (we denote it by the same symbol)

  Φ : k^n → k^m, a ↦ Φa.

Conversely, a linear map of the form k^n → k^m is defined by a uniquely determined m × n matrix. So, we identify

  Hom(k^n, k^m) = Mat_{m×n}(k).

Example 1.3.8. Suppose that V, U are vector spaces, ϕ : V → U is a linear map, and W is a subspace of V. Then we have the linear map

  ϕ|_W : W → U, ϕ|_W(w) = ϕ(w);

it is called the restriction of ϕ to W.

Let V, U be vector spaces and ϕ : V → U be a linear map. Denote

  Ker(ϕ) := { v ∈ V | ϕ(v) = 0 } (the kernel of ϕ),
  Im(ϕ) := { ϕ(v) | v ∈ V } (the image of ϕ).

Clearly, Ker(ϕ) is a subspace of V and Im(ϕ) is a subspace of U. The dimension dim(Im(ϕ)) is called the rank of ϕ and is denoted by rk(ϕ). If V is finite-dimensional, then

  dim(Im(ϕ)) + dim(Ker(ϕ)) = dim(V).

Exercise 1.3.9. Consider the following vector spaces and linear maps:

  V0 --ϕ1--> V1 --ϕ2--> ... --ϕn--> Vn

Prove that

  dim(Ker(ϕn ∘ ... ∘ ϕ2 ∘ ϕ1)) ⩽ Σ_{1⩽i⩽n} dim(Ker(ϕi)).

A linear map ϕ : V → U is called an isomorphism whenever ϕ is bijective. In this case the set-theoretical inverse map ϕ⁻¹ : U → V is a linear map.

Lemma 1.3.10. Suppose that V and U are finite-dimensional vector spaces. Then a linear map ϕ : V → U is an isomorphism whenever dim(V) = dim(U) and Ker(ϕ) = 0.
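The formula dim(Im(ϕ)) + dim(Ker(ϕ)) = dim(V) can be checked numerically (the matrix below is an arbitrary sample, not one from the notes); the kernel can be read off from the singular value decomposition.

```python
import numpy as np

# A linear map R^3 -> R^2: the second row is twice the first, so rank is 1.
Phi = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0]])

rank = np.linalg.matrix_rank(Phi)   # dim Im(phi)
null = Phi.shape[1] - rank          # dim Ker(phi), by rank-nullity
assert rank + null == Phi.shape[1]  # = dim V = 3

# Rows of Vt beyond the rank span Ker(phi).
_, _, Vt = np.linalg.svd(Phi)
for w in Vt[rank:]:
    assert np.allclose(Phi @ w, 0)
```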

Quotient spaces.

Let V be a vector space and W be a subspace of V. Define the quotient space V/W in the following way:

• (as a set) V/W := { v + W | v ∈ V }, the zero vector is 0 + W = W;
• (addition of vectors) (v1 + W) + (v2 + W) = (v1 + v2) + W;
• (multiplication of vectors by scalars) λ(v + W) = λv + W.

We have the surjective linear map

  π : V → V/W, π(v) = v + W

(called the quotient map) with Ker(π) = W. If V is finite-dimensional, then

  dim(V) = dim(V/W) + dim(W).

Roughly speaking, vectors of V/W are vectors of V considered up to addition of vectors of W. Suppose that W is spanned by {wi}_{i∈I}. Then we can consider vectors of V/W as vectors of V modulo the relations

  wi = 0 for all i ∈ I.

For a vector space U, we have the bijection

  { linear maps ϕ : V → U such that ϕ(wi) = 0 for all i ∈ I } ↔ { linear maps ϕ̄ : V/W → U },

  ϕ ↦ ϕ̄, where ϕ̄(v + W) := ϕ(v);
  ϕ ↤ ϕ̄, where ϕ(v) := ϕ̄(π(v)).

1.4 Change of Basis

Let V be a finite-dimensional vector space, and let (ei) = (e1, ..., en) and (ẽi) = (ẽ1, ..., ẽn) be bases of V. Decompose the vectors of the basis (ẽi) in the basis (ei):

  ẽ1 = A^1_1 e1 + A^2_1 e2 + ... + A^n_1 en,
  ẽ2 = A^1_2 e1 + A^2_2 e2 + ... + A^n_2 en,
  ...............,
  ẽn = A^1_n e1 + A^2_n e2 + ... + A^n_n en

(in one formula, ẽi = A^j_i ej) and organize the coefficients into the matrix A = (A^j_i) (the ith column of A is formed by the coefficients of the decomposition of ẽi). The matrix A is called the transition matrix from the basis (ei) to the basis (ẽi).

Lemma 1.4.1. Suppose that (e′i) = (e′1, ..., e′n) is one more basis of V. Then

  (transition matrix from (ei) to (e′i)) = (transition matrix from (ei) to (ẽi)) · (transition matrix from (ẽi) to (e′i)).

Corollary 1.4.2. The transition matrix from the basis (ẽi) to the basis (ei) is A⁻¹.
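Lemma 1.4.1 and Corollary 1.4.2 can be checked with sample matrices (chosen here for illustration): transition matrices compose by matrix multiplication, and the reverse transition is the inverse matrix.

```python
import numpy as np

# Transition matrix from the standard basis of R^2 to a basis (e~_i):
# the i-th column holds the coordinates of e~_i in the old basis.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
# Transition matrix from (e~_i) to a third basis (e'_i).
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Lemma 1.4.1: standard basis -> (e'_i) is the product of the two steps.
AB = A @ B

# Corollary 1.4.2: going back from (e~_i) is A^{-1}, so A^{-1} @ AB
# recovers the (e~_i) -> (e'_i) transition.
assert np.allclose(np.linalg.inv(A) @ AB, B)
```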

Let (x^i) = (x^1, ..., x^n) be the basis dual to (ei) and (x̃^i) = (x̃^1, ..., x̃^n) be the basis dual to (ẽi) of the dual space V*.

Lemma 1.4.3. The transition matrix from the basis (x^i) to the basis (x̃^i) is (A⁻¹)^T. That is,

  x̃^1 = (A⁻¹)^1_1 x^1 + (A⁻¹)^1_2 x^2 + ... + (A⁻¹)^1_n x^n,
  x̃^2 = (A⁻¹)^2_1 x^1 + (A⁻¹)^2_2 x^2 + ... + (A⁻¹)^2_n x^n,
  ...............,
  x̃^n = (A⁻¹)^n_1 x^1 + (A⁻¹)^n_2 x^2 + ... + (A⁻¹)^n_n x^n

(in one formula, x̃^i = (A⁻¹)^i_j x^j).

Let ψ : V → V be an operator. Decompose the images ψ(ei) in the basis (ei):

  ψ(e1) = Ψ^1_1 e1 + Ψ^2_1 e2 + ... + Ψ^n_1 en,
  ψ(e2) = Ψ^1_2 e1 + Ψ^2_2 e2 + ... + Ψ^n_2 en,
  ...............,
  ψ(en) = Ψ^1_n e1 + Ψ^2_n e2 + ... + Ψ^n_n en

(in one formula, ψ(ei) = Ψ^j_i ej) and organize the coefficients into the matrix Ψ = (Ψ^j_i) (the ith column of Ψ is formed by the coefficients of the decomposition of ψ(ei)). The matrix Ψ is called the matrix of ψ in the basis (ei).

Example 1.4.4. The matrix of the operator λI_V in any basis of V is the scalar matrix λI.

Lemma 1.4.5. The matrix of ψ in the basis (ẽi) is

  Ψ̃ = A⁻¹ΨA.

This formula can be written as

  Ψ̃^i_j = (A⁻¹)^i_k Ψ^k_l A^l_j.
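A quick numerical check of Lemma 1.4.5 (the operator and transition matrix below are sample data): conjugating Ψ by a transition matrix leaves all basis-independent quantities — trace, determinant, eigenvalues — unchanged.

```python
import numpy as np

Psi = np.array([[2.0, 1.0],
                [0.0, 3.0]])   # matrix of an operator in the basis (e_i)
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])     # invertible transition matrix to (e~_i)

# Lemma 1.4.5: matrix of the same operator in the new basis.
Psi_new = np.linalg.inv(A) @ Psi @ A

# Basis-independent data of the operator is preserved.
assert np.isclose(np.trace(Psi_new), np.trace(Psi))
assert np.isclose(np.linalg.det(Psi_new), np.linalg.det(Psi))
assert np.allclose(np.sort(np.linalg.eigvals(Psi_new)),
                   np.sort(np.linalg.eigvals(Psi)))
```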

Let U be another finite-dimensional vector space, (fi) = (f1, ..., fm) be a basis of U, and ϕ : V → U be a linear map. Decompose the images ϕ(ei) in the basis (fi):

  ϕ(e1) = Φ^1_1 f1 + Φ^2_1 f2 + ... + Φ^m_1 fm,
  ϕ(e2) = Φ^1_2 f1 + Φ^2_2 f2 + ... + Φ^m_2 fm,
  ...............,
  ϕ(en) = Φ^1_n f1 + Φ^2_n f2 + ... + Φ^m_n fm

(in one formula, ϕ(ei) = Φ^j_i fj) and organize the coefficients into the matrix Φ = (Φ^j_i) (the ith column of Φ is formed by the coefficients of the decomposition of ϕ(ei)). The matrix Φ is called the matrix of ϕ in the bases (ei) and (fi). The rank of Φ (def= the maximal number of linearly independent rows of Φ = the maximal number of linearly independent columns of Φ) is equal to the rank of the map ϕ.

Let (f̃i) be another basis of the space U and C be the transition matrix from the basis (fi) to the basis (f̃i).

Lemma 1.4.6. The matrix of ϕ in the bases (ẽi) and (f̃i) is

  Φ̃ = C⁻¹ΦA.

This formula can be written as

  Φ̃^i_j = (C⁻¹)^i_k Φ^k_l A^l_j.

Let (x^i) be the basis of V* dual to (ei) and (y^i) be the basis of U* dual to (fi).

Lemma 1.4.7. The matrix of the map ϕ^t : U* → V* (see Problem 1.6.6) in the bases (y^i) and (x^i) is Φ^T. That is,

  ϕ^t(y^1) = Φ^1_1 x^1 + Φ^1_2 x^2 + ... + Φ^1_m x^m,
  ϕ^t(y^2) = Φ^2_1 x^1 + Φ^2_2 x^2 + ... + Φ^2_m x^m,
  ...............,
  ϕ^t(y^n) = Φ^n_1 x^1 + Φ^n_2 x^2 + ... + Φ^n_m x^m

(in one formula, ϕ^t(y^i) = Φ^i_j x^j).

Consider a linear map

  Φ : k^n → k^m, a ↦ Φa, where Φ ∈ Mat_{m×n}(k).

Then, by the previous lemma,

  Φ^t : (k^m)* → (k^n)*, s ↦ sΦ.

1.5 Forms

Bilinear forms.

Let V and U be vector spaces. A map

  β : V × U → k

is called a bilinear form on V × U whenever β(v, u) is linear in v and linear in u.

Examples 1.5.1.

1. Consider R^n with the Euclidean inner product ⟨·,·⟩. Then the map

     R^n × R^n → R, (v, u) ↦ ⟨v, u⟩

   is a bilinear form. Note that the map

     C^n × C^n → C, (v, u) ↦ ⟨v, u⟩,

   where ⟨·,·⟩ is the Hermitian inner product on C^n × C^n, is not a bilinear form, because ⟨v, u⟩ is antilinear, not linear, in u.

2. The map

     R^2 × R^2 → R,

     ((a1, a2)^T, (b1, b2)^T) ↦ det( a1 b1 ; a2 b2 ) = a1 b2 − a2 b1

   is a bilinear form.

3. The map

     Mat_n(k) × Mat_n(k) → k, (A, B) ↦ tr(AB)

   is a bilinear form.

4. Suppose that B = (bij) ∈ Mat_{n×m}(k); then the map

     k^n × k^m → k, (v, u) ↦ v^T B u

   is a bilinear form.

The set of bilinear forms on V × U is a vector space under the following operations:

1. the sum of forms β1, β2 is

     β1 + β2 : V × U → k, (β1 + β2)(v, u) = β1(v, u) + β2(v, u);

2. the product of a scalar λ ∈ k and a form β is

     λβ : V × U → k, (λβ)(v, u) = λ β(v, u).

Suppose that V and U are finite-dimensional, (ei) is a basis of V, and (fi) is a basis of U. We have

  β(Σ_i x^i ei, Σ_j y^j fj) = Σ_{i,j} β(ei, fj) x^i y^j = Σ_{i,j} bij x^i y^j,

where bij = β(ei, fj). The matrix

  B = (bij), 1 ⩽ i ⩽ n, 1 ⩽ j ⩽ m,

is called the matrix of the bilinear form β in the bases (ei) and (fi). In the coordinates x = Σ x^i ei, y = Σ y^j fj we have

  β(x, y) = x^T B y.

Lemma 1.5.2 (Change of basis). Suppose that (ẽi) is another basis of V, A is the transition matrix from (ei) to (ẽi), (f̃i) is another basis of U, C is the transition matrix from (fi) to (f̃i), and B̃ is the matrix of β in the bases (ẽi) and (f̃i). Then we have

  B̃ = A^T B C.
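Lemma 1.5.2 can be verified entry by entry (all matrices below are sample data): the (i, j) entry of A^T B C equals β evaluated on the i-th new basis vector of V and the j-th new basis vector of U.

```python
import numpy as np

Bm = np.array([[1.0, 2.0],
               [0.0, 1.0]])   # matrix of beta in the bases (e_i), (f_i)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # columns = new basis of V in old coordinates
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])    # columns = new basis of U in old coordinates

B_new = A.T @ Bm @ C          # Lemma 1.5.2

# Direct evaluation beta(e~_i, f~_j) = e~_i^T B f~_j must agree.
for i in range(2):
    for j in range(2):
        assert np.isclose(A[:, i] @ Bm @ C[:, j], B_new[i, j])
```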

Symmetric bilinear forms.

Let V be a vector space. A map

  q : V × V → k

is called a symmetric bilinear form on V × V whenever

1. q is a bilinear form,
2. q(v, u) = q(u, v) for all v, u ∈ V.

Example 1.5.3. Suppose that Q is a symmetric n × n matrix; then the map

  k^n × k^n → k, (v, u) ↦ v^T Q u

is a symmetric bilinear form.

Suppose that V is finite-dimensional and (ei) is a basis of V. Then

  q(Σ_i x^i ei, Σ_j y^j ej) = Σ_{i,j} q(ei, ej) x^i y^j = Σ_{i,j} qij x^i y^j,

where qij = q(ei, ej). The matrix

  Q = (qij), 1 ⩽ i, j ⩽ n,

is called the matrix of q in the basis (ei). In the coordinates x = Σ x^i ei, y = Σ y^j ej we have

  q(x, y) = x^T Q y.

Exercise 1.5.4. Check that the matrix Q is symmetric.

Lemma 1.5.5 (Change of basis). Suppose that (ẽi) is another basis of V, A is the transition matrix from (ei) to (ẽi), and Q̃ is the matrix of q in the basis (ẽi). Then

  Q̃ = A^T Q A.

Skew-symmetric bilinear forms.

Let V be a vector space. A map

  ω : V × V → k

is called a skew-symmetric bilinear form on V × V whenever

1. ω is a bilinear form,
2. ω(v, u) = −ω(u, v) for all v, u ∈ V.

Example 1.5.6. Suppose that Ω is a skew-symmetric n × n matrix; then the map

  k^n × k^n → k, (v, u) ↦ v^T Ω u

is a skew-symmetric bilinear form.

Suppose that V is finite-dimensional and (ei) is a basis of V. Then

  ω(Σ_i x^i ei, Σ_j y^j ej) = Σ_{i,j} ω(ei, ej) x^i y^j = Σ_{i,j} ωij x^i y^j,

where ωij = ω(ei, ej). The matrix

  Ω = (ωij), 1 ⩽ i, j ⩽ n,

is called the matrix of ω in the basis (ei). In the coordinates x = Σ x^i ei, y = Σ y^j ej we have

  ω(x, y) = x^T Ω y.

Exercise 1.5.7. Check that the matrix Ω is skew-symmetric.

Lemma 1.5.8 (Change of basis). Suppose that (ẽi) is another basis of V, A is the transition matrix from (ei) to (ẽi), and Ω̃ is the matrix of ω in the basis (ẽi). Then

  Ω̃ = A^T Ω A.

1.6 Problems

Problem 1.6.1. Find dimensions and bases of the subspaces

1. { (x1, x2, x3, x4, x5)^T ∈ F2^5 | x1 + x2 + x4 = 0, x1 + x3 + x5 = 0 } ⊂ F2^5;

2. Span((1, 2, 1, 0)^T, (2, 3, 0, 2)^T, (3, 5, 1, 2)^T) ⊂ F7^4.

Problem 1.6.2. Suppose that n ∈ N and x1, ..., xm ∈ k. Prove that the linear functions

  si : k[t]⩽n → k, si(f) = f(xi), 1 ⩽ i ⩽ m,

are linearly independent whenever m ⩽ n + 1 and x1, ..., xm are pairwise different.

Problem 1.6.3. Prove that the lists of vectors below are bases and find the dual bases:

(1) ((1, 1)^T, (3, −1)^T) ⊂ R^2;
(2) ((√2 + 1, 1)^T, (1, √2 − 1)^T) ⊂ C^2;
(3) ((−1, 2, 1)^T, (0, 2, 1)^T, (−1, 0, 1)^T) ⊂ F3^3.

Problem 1.6.4. Suppose that k is a field. Consider the vector space

  k^⨉N := { (x0, x1, x2, ...) | xi ∈ k }

and its subspace

  k^⊕N := { (x0, x1, x2, ...) ∈ k^⨉N | xi ≠ 0 for finitely many i }.

Prove that

1. k^⊕N ≠ k^⨉N;
2. (k^⊕N)* ≃ k^⨉N;
3. k^⨉N / k^⊕N ≃ k^⨉N.

Problem 1.6.5. Find the rank, kernel, and image of the linear maps

1. the map F5^4 → F5^3 given by the matrix with rows (1, 2, 1, 4), (1, 0, 4, 1), (2, 2, 0, 0);

2. ϕ : F3[t]⩽2 → F3[t]⩽4, f(t) ↦ t^3 f′(t) − t f″(1) + f(t + 1).

Problem 1.6.6. Suppose that V, U are finite-dimensional vector spaces and ϕ : V → U is a linear map. Prove that the map

  ϕ^t : U* → V*, r ↦ ϕ^t(r), where ϕ^t(r)(v) = r(ϕ(v)),

is a well-defined linear map (it is called the transpose of ϕ).

Problem 1.6.7. Suppose that V is a finite-dimensional vector space. Prove that the map

  α : V → (V*)*, v ↦ α(v) : V* → k, where α(v)(s) = s(v),

is a natural isomorphism between vector spaces ("natural" means that, for any operator ϕ : V → V, we have α ∘ ϕ = (ϕ^t)^t ∘ α).

Using the isomorphism α from the previous problem, we identify a space V and its double dual (V*)*.

Remark. For a vector space V there is no isomorphism α : V → V* such that, for any operator ϕ : V → V, we have α ∘ ϕ = ϕ^t ∘ α. In other words, there is no natural isomorphism V ≃ V*.

Problem 1.6.8. Suppose that V, U are finite-dimensional vector spaces and ϕ : V → U is a linear map. Prove that (ϕ^t)^t = ϕ.

Problem 1.6.9. Consider a vector space V over a field k with a basis (e1, ..., e5), its subspace

  W = Span(e1 + e3 − e5, e2 − e4 + e5),

and the quotient space V/W. Construct an isomorphism ϕ : V/W → k^3.

An operator ϕ : V → V is called a projection onto a subspace V1 ⊂ V parallel to a subspace V2 ⊂ V whenever

1. V = V1 ⊕ V2,

2. ϕ(v1 + v2) = v1 for all v1 ∈ V1, v2 ∈ V2.

Thus, a projection ϕ is a projection onto Im(ϕ) parallel to Ker(ϕ).

Problem 1.6.10. Consider an operator ϕ : V → V. Prove that

  ϕ is a projection ⟺ ϕ² = ϕ ⟺ ϕ|_{Im(ϕ)} = I_{Im(ϕ)}.
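The criterion ϕ² = ϕ of Problem 1.6.10 is easy to check numerically; the matrix below (a sample chosen here, not from the notes) projects R^2 onto the line V1 = Span((1, 0)^T) parallel to V2 = Span((1, 1)^T).

```python
import numpy as np

# Projection onto span(1, 0) parallel to span(1, 1).
phi = np.array([[1.0, -1.0],
                [0.0,  0.0]])

# Idempotence: phi^2 = phi.
assert np.allclose(phi @ phi, phi)

v1 = np.array([1.0, 0.0])   # vector in V1 is fixed
v2 = np.array([1.0, 1.0])   # vector in V2 is sent to 0
assert np.allclose(phi @ v1, v1)
assert np.allclose(phi @ v2, 0)
```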

Chapter 2. Tensors

In this chapter we define tensor products of vector spaces and prove some of their properties. The vector spaces we consider in this chapter are assumed to be defined over k = Q, over k = R (real vector spaces), or over k = C (complex vector spaces).

2.1 Tensor Product of Vector Spaces

Tensor product of two vector spaces

Let V and U be vector spaces. Consider the infinite-dimensional vector space with the basis (v ⊗ u | v ∈ V, u ∈ U). Vectors of this space are formal expressions of the form

  λ1 v1 ⊗ u1 + ... + λl vl ⊗ ul, where l ∈ N*, λi ∈ k, vi ∈ V, ui ∈ U,

with the natural addition and multiplication by scalars; the zero vector is the expression 0 ⊗ 0. By definition, the tensor product V ⊗ U is the quotient of this space modulo the relations

  (1) (v1 + v2) ⊗ u = v1 ⊗ u + v2 ⊗ u,
  (2) v ⊗ (u1 + u2) = v ⊗ u1 + v ⊗ u2,      (2.1.1)
  (3) λ(v ⊗ u) = (λv) ⊗ u = v ⊗ (λu).

Elements of V ⊗ U are called tensors. The standard notation of the class of an expression λ1 v1 ⊗ u1 + ... + λl vl ⊗ ul in V ⊗ U is not

  [λ1 v1 ⊗ u1 + ... + λl vl ⊗ ul]

but

  λ1 v1 ⊗ u1 + ... + λl vl ⊗ ul

(the same as the expression itself). Thus, every tensor T ∈ V ⊗ U can be written as

  T = λ1 v1 ⊗ u1 + ... + λl vl ⊗ ul.

Two tensors are equal if one of them can be transformed into the other by using (2.1.1).
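In the coordinate spaces k^n and k^m, a decomposable tensor v ⊗ u can be modeled by the outer product of the coordinate vectors; the relations (2.1.1) then become matrix identities. A numpy sketch (the vectors are sample data, not from the notes):

```python
import numpy as np

v1, v2 = np.array([1.0, 2.0]), np.array([0.0, 1.0])
u = np.array([3.0, 4.0, 5.0])

# Relation (1): (v1 + v2) (x) u = v1 (x) u + v2 (x) u.
assert np.allclose(np.outer(v1 + v2, u),
                   np.outer(v1, u) + np.outer(v2, u))

# Relation (3): (lam v) (x) u = lam (v (x) u) = v (x) (lam u).
lam = 7.0
assert np.allclose(np.outer(lam * v1, u), lam * np.outer(v1, u))
assert np.allclose(np.outer(lam * v1, u), np.outer(v1, lam * u))

# The matrix of a nonzero decomposable tensor has rank 1.
assert np.linalg.matrix_rank(np.outer(v1, u)) == 1
```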

Example 2.1.1.

  (v1 + v2) ⊗ (2u1 − u2) + v2 ⊗ u2
  = v1 ⊗ (2u1 − u2) + v2 ⊗ (2u1 − u2) + v2 ⊗ u2
  = 2v1 ⊗ u1 − v1 ⊗ u2 + 2v2 ⊗ u1 − v2 ⊗ u2 + v2 ⊗ u2
  = 2v1 ⊗ u1 − v1 ⊗ u2 + 2v2 ⊗ u1.

Exercise 2.1.2. Prove that if one of the factors of a tensor product is the zero space, then the tensor product is the zero space.

A tensor v ⊗ u is called the tensor product of the vectors v ∈ V, u ∈ U. A tensor is called decomposable whenever it is a tensor product of some vectors.

From the definition it follows that the tensor product V ⊗ U is spanned by decomposable tensors. Thus, to define a linear map

  ϕ : V ⊗ U → W,

where W is a vector space, we have to define the images of decomposable tensors such that these images are consistent with the relations (2.1.1). That is, we have to define ϕ(v ⊗ u) for all v ∈ V, u ∈ U such that

  ϕ((v1 + v2) ⊗ u) = ϕ(v1 ⊗ u) + ϕ(v2 ⊗ u),
  ϕ(v ⊗ (u1 + u2)) = ϕ(v ⊗ u1) + ϕ(v ⊗ u2),      (2.1.2)
  λϕ(v ⊗ u) = ϕ((λv) ⊗ u) = ϕ(v ⊗ (λu)).

Roughly speaking, (2.1.2) means that ϕ(v ⊗ u) is linear in v and linear in u.

Example 2.1.3. If s : V → k and r : U → k are linear functions, then

  V ⊗ U → k, v ⊗ u ↦ s(v) r(u)

is a well-defined linear function.

Suppose that the spaces V and U are finite-dimensional, (e1, ..., en) is a basis of V and (f1, ..., fm) is a basis of U.

Lemma 2.1.4. (ei ⊗ fj) is a basis of V ⊗ U.

Proof. First, let us prove that every tensor of the tensor product is a linear combination of the tensors ei ⊗ fj. Note that every tensor is a finite sum of decomposable tensors. Therefore, it is sufficient to present every decomposable tensor as a linear combination of the tensors ei ⊗ fj. Suppose that T = v ⊗ u, where v = x^i ei ∈ V, u = y^i fi ∈ U. Using (2.1.1), we remove the parentheses:

  (x^1 e1 + ... + x^n en) ⊗ (y^1 f1 + ... + y^m fm)
  =(1) (x^1 e1) ⊗ (y^1 f1 + ... + y^m fm) + ... + (x^n en) ⊗ (y^1 f1 + ... + y^m fm)
  =(2) (x^1 e1) ⊗ (y^1 f1) + ... + (x^1 e1) ⊗ (y^m fm) + ... + (x^n en) ⊗ (y^1 f1) + ... + (x^n en) ⊗ (y^m fm)
  =(3) x^1 y^1 e1 ⊗ f1 + ... + x^1 y^m e1 ⊗ fm + ... + x^n y^1 en ⊗ f1 + ... + x^n y^m en ⊗ fm.

The rank of a tensor T V U is= defined to be the rank of its matrix in some bases. Exercise 2.1.7. Prove that rank∈ ⊗ of tensor is well-defined. Lemma 2.1.8. Suppose that V and U are finite-dimensional vector spaces. Then we have the isomorphism V ∗ U ∗ bilinear forms on V U , V U k, ⊗s r → { .× } v, u s v r u × → ⊗ ↦ œ ¡ ( )22↦ ( ) ( ) There are canonical between some tensor products. Some of them are discribed in the following theorem.

Theorem 2.1.9. For vector spaces V, U, V1,...,Vk we have the isomorphisms

  V ⊗ U → U ⊗ V, v ⊗ u ↦ u ⊗ v; (2.1.3)

  (V1 ⊕ ... ⊕ Vk) ⊗ U → (V1 ⊗ U) ⊕ ... ⊕ (Vk ⊗ U),
  (v1, ..., vk) ⊗ u ↦ (v1 ⊗ u, ..., vk ⊗ u); (2.1.4)

  V* ⊗ U → Hom(V, U), s ⊗ u ↦ ϕ, where ϕ(v) = s(v) u; (2.1.5)

  V* ⊗ U* → (V ⊗ U)*, s ⊗ r ↦ ϕ, where ϕ(v ⊗ u) = s(v) r(u). (2.1.6)

Tensor product of finitely many vector spaces

Let V1, ..., Vk be vector spaces. Consider the set of formal expressions of the form

  λ1 v1^(1) ⊗ ... ⊗ vk^(1) + ... + λl v1^(l) ⊗ ... ⊗ vk^(l), where l ∈ N*, λi ∈ k, vj^(i) ∈ Vj,

as a vector space with the natural addition and multiplication by scalars; the zero vector is the expression 0 ⊗ ... ⊗ 0.¹ By definition, the tensor product V1 ⊗ ... ⊗ Vk is the quotient of this space modulo the relations

  v1 ⊗ ... ⊗ vi−1 ⊗ (vi + vi′) ⊗ vi+1 ⊗ ... ⊗ vk
    = v1 ⊗ ... ⊗ vi−1 ⊗ vi ⊗ vi+1 ⊗ ... ⊗ vk + v1 ⊗ ... ⊗ vi−1 ⊗ vi′ ⊗ vi+1 ⊗ ... ⊗ vk,   (2.1.7)
  λ(v1 ⊗ ... ⊗ vk) = v1 ⊗ ... ⊗ vi−1 ⊗ (λvi) ⊗ vi+1 ⊗ ... ⊗ vk.

Elements of V1 ⊗ ... ⊗ Vk are called tensors. The standard notation of the class of an expression

  λ1 v1^(1) ⊗ ... ⊗ vk^(1) + ... + λl v1^(l) ⊗ ... ⊗ vk^(l)

in V1 ⊗ ... ⊗ Vk is

  λ1 v1^(1) ⊗ ... ⊗ vk^(1) + ... + λl v1^(l) ⊗ ... ⊗ vk^(l)

(the same as the expression itself). Thus every tensor T ∈ V1 ⊗ ... ⊗ Vk can be written as

  T = λ1 v1^(1) ⊗ ... ⊗ vk^(1) + ... + λl v1^(l) ⊗ ... ⊗ vk^(l).

Two tensors are equal if one of them can be transformed into the other by using (2.1.7).

¹This vector space can be defined formally as the direct sum ⊕_{vi ∈ Vi} k·(v1 ⊗ ... ⊗ vk).

Example 2.1.10.

  (v1 + v2) ⊗ (−u) ⊗ (w1 − w2) + v2 ⊗ u ⊗ w1
  = −v1 ⊗ u ⊗ (w1 − w2) − v2 ⊗ u ⊗ (w1 − w2) + v2 ⊗ u ⊗ w1
  = −v1 ⊗ u ⊗ w1 + v1 ⊗ u ⊗ w2 − v2 ⊗ u ⊗ w1 + v2 ⊗ u ⊗ w2 + v2 ⊗ u ⊗ w1
  = −v1 ⊗ u ⊗ w1 + v1 ⊗ u ⊗ w2 + v2 ⊗ u ⊗ w2.

Exercise 2.1.11. Prove that if one of the factors of a tensor product is the zero space, then the tensor product is the zero space.

A tensor $v_1 \otimes \dots \otimes v_k$ is called the tensor product of the vectors $v_i \in V_i$. A tensor is called decomposable whenever it is a tensor product of some vectors. From the definition it follows that the tensor product $V_1 \otimes \dots \otimes V_k$ is spanned by decomposable tensors. Thus, to define a linear map
$$\varphi \colon V_1 \otimes \dots \otimes V_k \to W,$$
where $W$ is a vector space, we have to define the images of decomposable tensors so that these images are consistent with the relations (2.1.7). That is, we have to define $\varphi(v_1 \otimes \dots \otimes v_k)$ for all $v_i \in V_i$ such that
$$\varphi(v_1 \otimes \dots \otimes v_{i-1} \otimes (v_i + v_i') \otimes v_{i+1} \otimes \dots \otimes v_k) = \varphi(v_1 \otimes \dots \otimes v_i \otimes \dots \otimes v_k) + \varphi(v_1 \otimes \dots \otimes v_i' \otimes \dots \otimes v_k),$$
$$\lambda\,\varphi(v_1 \otimes \dots \otimes v_k) = \varphi(v_1 \otimes \dots \otimes v_{i-1} \otimes (\lambda v_i) \otimes v_{i+1} \otimes \dots \otimes v_k). \tag{2.1.8}$$
Roughly speaking, (2.1.8) means that $\varphi(v_1 \otimes \dots \otimes v_k)$ is linear in every $v_i$. The properties of tensor products of finitely many spaces are similar to the corresponding properties of tensor products of two spaces.

Suppose that $V_1, \dots, V_k$ are finite-dimensional vector spaces and $(e_j^{(i)})$ is a basis of $V_i$, $1 \leqslant i \leqslant k$.

Lemma 2.1.12. $(e_{j_1}^{(1)} \otimes \dots \otimes e_{j_k}^{(k)})$ is a basis of the tensor product $V_1 \otimes \dots \otimes V_k$.
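In the basis of Lemma 2.1.12 a tensor is stored as a $k$-dimensional coordinate array, and its coordinates transform multilinearly, one transition matrix per factor. A numerical sketch with numpy's `einsum` (the matrices here are random stand-ins; the assumed convention is that each transition matrix acts on one index):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Ttil = rng.normal(size=(n, n, n))   # coordinates of T in the bases (e~^(i))

# hypothetical transition matrices A^(i) from (e^(i)) to (e~^(i)); +2I keeps them invertible
A1, A2, A3 = [rng.normal(size=(n, n)) + 2 * np.eye(n) for _ in range(3)]

# T^{j1 j2 j3} = A1[j1,l1] A2[j2,l2] A3[j3,l3] Ttil^{l1 l2 l3}
T = np.einsum('ja,kb,lc,abc->jkl', A1, A2, A3, Ttil)

# transforming back with the inverse matrices recovers the original coordinates
inv = np.linalg.inv
back = np.einsum('ja,kb,lc,abc->jkl', inv(A1), inv(A2), inv(A3), T)
assert np.allclose(back, Ttil)
```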

Thus, a tensor $T \in V_1 \otimes \dots \otimes V_k$ can be decomposed as
$$T = T^{j_1,\dots,j_k}\, e_{j_1}^{(1)} \otimes \dots \otimes e_{j_k}^{(k)}.$$

The coefficients $T^{j_1,\dots,j_k}$ are called the coordinates of $T$ in the basis $(e_{j_1}^{(1)} \otimes \dots \otimes e_{j_k}^{(k)})$.

Lemma 2.1.13 (Change of basis). Suppose that $(\tilde e_j^{(i)})$ is another basis of the space $V_i$ and $A^{(i)}$ is the transition matrix from the basis $(e_j^{(i)})$ to the basis $(\tilde e_j^{(i)})$. Then
$$T^{j_1,\dots,j_k} = [A^{(1)}]^{j_1}_{l_1} \cdots [A^{(k)}]^{j_k}_{l_k}\, \tilde T^{l_1,\dots,l_k},$$
where $\tilde T^{j_1,\dots,j_k}$ are the coordinates of the tensor $T$ in the basis $(\tilde e_{j_1}^{(1)} \otimes \dots \otimes \tilde e_{j_k}^{(k)})$.

Theorem 2.1.14. For vector spaces $V_1, \dots, V_k$ we have the isomorphisms

$$V_1 \otimes \dots \otimes V_k \to V_{\sigma(1)} \otimes \dots \otimes V_{\sigma(k)}, \qquad v_1 \otimes \dots \otimes v_k \mapsto v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(k)}, \tag{2.1.9}$$
where $\sigma \in S_k$;

$$(V_1 \otimes V_2) \otimes V_3 \to V_1 \otimes (V_2 \otimes V_3), \qquad (v_1 \otimes v_2) \otimes v_3 \mapsto v_1 \otimes (v_2 \otimes v_3); \tag{2.1.10}$$

$$V_1^* \otimes \dots \otimes V_k^* \to (V_1 \otimes \dots \otimes V_k)^*, \qquad s_1 \otimes \dots \otimes s_k \mapsto \bigl(v_1 \otimes \dots \otimes v_k \mapsto s_1(v_1) \cdots s_k(v_k)\bigr). \tag{2.1.11}$$

2.2 Symmetric Powers of Vector Spaces

Let $V$ be a vector space. By definition, the $k$-th tensor power of $V$ is
$$V^{\otimes k} := \begin{cases} k & \text{if } k = 0, \\ \underbrace{V \otimes \dots \otimes V}_{k} & \text{if } k > 0. \end{cases}$$

An element $\sigma \in S_k$ defines the isomorphism (we denote it by the same symbol)
$$\sigma \colon V^{\otimes k} \to V^{\otimes k}, \qquad v_1 \otimes \dots \otimes v_k \mapsto v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(k)}.$$

For example,
$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 3 & 2 \end{pmatrix} \colon V^{\otimes 4} \to V^{\otimes 4}, \qquad v_1 \otimes v_2 \otimes v_3 \otimes v_4 \mapsto v_4 \otimes v_1 \otimes v_3 \otimes v_2.$$

Clearly,
$$\sigma(\tau(T)) = (\sigma \circ \tau)(T) \quad \text{for all } \sigma, \tau \in S_k,\ T \in V^{\otimes k}.$$

A tensor $T \in V^{\otimes k}$ is called symmetric whenever
$$\sigma(T) = T \quad \text{for all } \sigma \in S_k.$$

Example 2.2.1. The simplest symmetric tensor is the zero tensor. The following tensors are symmetric:
$$e_1 \otimes e_1 - 2 e_1 \otimes e_3 - 2 e_3 \otimes e_1 \in (\mathbb{R}^6)^{\otimes 2},$$
$$e_2 \otimes e_2 \otimes e_5 + e_2 \otimes e_5 \otimes e_2 + e_5 \otimes e_2 \otimes e_2 \in (\mathbb{C}^5)^{\otimes 3}.$$

The tensor
$$T = e_2 \otimes e_2 + e_3 \otimes e_1 - e_1 \otimes e_3 \in (\mathbb{R}^3)^{\otimes 2}$$
is not symmetric because
$$\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}(T) = e_2 \otimes e_2 + e_1 \otimes e_3 - e_3 \otimes e_1 \neq T.$$

Consider the subset

$$S^k(V) := \{ T \in V^{\otimes k} \mid \sigma(T) = T \text{ for all } \sigma \in S_k \}$$
of symmetric tensors. Clearly, the subset $S^k(V)$ is a subspace of $V^{\otimes k}$. Consider the map
$$\mathrm{sym} \colon V^{\otimes k} \to V^{\otimes k}, \qquad T \mapsto \frac{1}{k!} \sum_{\sigma \in S_k} \sigma(T).$$

Lemma 2.2.2. For a tensor $T \in V^{\otimes k}$ and an element $\tau \in S_k$ we have
$$\tau(\mathrm{sym}(T)) = \mathrm{sym}(\tau(T)) = \mathrm{sym}(T).$$

Corollary 2.2.3. The map $\mathrm{sym}$ is a projection onto $S^k(V)$.

For vectors $v_1, \dots, v_k \in V$ define their symmetric product
$$v_1 \cdot \ldots \cdot v_k := \mathrm{sym}(v_1 \otimes \dots \otimes v_k) = \frac{1}{k!} \sum_{\sigma \in S_k} v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(k)} \in S^k(V).$$
A symmetric tensor $T$ is called decomposable whenever it is a symmetric product of some vectors.

Lemma 2.2.4. We have
$$v_1 \cdot \ldots \cdot v_{i-1} \cdot (\lambda v_i + \lambda' v_i') \cdot v_{i+1} \cdot \ldots \cdot v_k = \lambda\, v_1 \cdot \ldots \cdot v_{i-1} \cdot v_i \cdot v_{i+1} \cdot \ldots \cdot v_k + \lambda'\, v_1 \cdot \ldots \cdot v_{i-1} \cdot v_i' \cdot v_{i+1} \cdot \ldots \cdot v_k, \tag{2.2.1}$$
$$v_{\sigma(1)} \cdot \ldots \cdot v_{\sigma(k)} = v_1 \cdot \ldots \cdot v_k$$
for all $i = 1, \dots, k$; $\lambda, \lambda' \in k$; $v_j, v_j' \in V$; $\sigma \in S_k$.
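In coordinates, $\sigma$ permutes the axes of the $k$-dimensional coordinate array, so $\mathrm{sym}$ is an average over all axis permutations. A minimal numpy sketch of Corollary 2.2.3 (illustration only; the tensor is random):

```python
import numpy as np
from itertools import permutations

def sym(T):
    """Project a k-tensor (k-dimensional numpy array) onto S^k(V):
    the average of T over all permutations of its axes."""
    perms = list(permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4, 4))
S = sym(T)

# sym is a projection: applying it twice changes nothing,
assert np.allclose(sym(S), S)
# and the result is symmetric: invariant under any axis permutation.
assert np.allclose(np.transpose(S, (1, 0, 2)), S)
```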

Suppose that $V$ is finite-dimensional and $(e_1, \dots, e_n)$ is a basis of $V$.

Lemma 2.2.5. The list $(e_{i_1} \cdot \ldots \cdot e_{i_k} \mid i_1 \leqslant \dots \leqslant i_k)$ is a basis of $S^k(V)$.

From this lemma it follows that
$$\dim(S^k(V)) = \binom{\dim(V) + k - 1}{k},$$
and a tensor $q \in S^k(V)$ can be decomposed:
$$q = \sum_{i_1 \leqslant \dots \leqslant i_k} f^{i_1,\dots,i_k}\, e_{i_1} \cdot \ldots \cdot e_{i_k}.$$

Remark 2.2.6. We can define the symmetric powers of $V$ without using tensors. Namely, we define the $k$-th symmetric power $S^k(V)$ as the space of formal expressions of the form
$$f_1 v_{11} \cdot \ldots \cdot v_{1k} + \dots + f_l v_{l1} \cdot \ldots \cdot v_{lk}, \quad \text{where } l \in \mathbb{N},\ f_i \in k,\ v_{ij} \in V,$$
modulo the relations from (2.2.1). One can prove that this definition is consistent with the definition via tensors. If $V = k^{n*}$ and $(x^1, \dots, x^n)$ is the standard basis of $k^{n*}$, then $S^k(k^{n*})$ is the "usual" space of homogeneous polynomials of degree $k$ in the variables $x^1, \dots, x^n$.

2.3 Exterior Powers of Vector Spaces

Let $V$ be a vector space. A tensor $T \in V^{\otimes k}$ is called skew-symmetric whenever
$$\sigma(T) = \mathrm{sgn}(\sigma)\, T \quad \text{for all } \sigma \in S_k.$$

Example 2.3.1. The simplest skew-symmetric tensor is the zero tensor. The following tensors are skew-symmetric:
$$e_1 \otimes e_3 - e_3 \otimes e_1 + 5 e_2 \otimes e_3 - 5 e_3 \otimes e_2 \in (\mathbb{R}^7)^{\otimes 2},$$
$$e_2 \otimes e_3 \otimes e_5 + e_3 \otimes e_5 \otimes e_2 + e_5 \otimes e_2 \otimes e_3 - e_2 \otimes e_5 \otimes e_3 - e_3 \otimes e_2 \otimes e_5 - e_5 \otimes e_3 \otimes e_2 \in (\mathbb{C}^5)^{\otimes 3}.$$

The tensor
$$T = e_1 \otimes e_1 + e_2 \otimes e_1 - e_1 \otimes e_2 \in (\mathbb{R}^2)^{\otimes 2}$$
is not skew-symmetric because
$$\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}(T) = e_1 \otimes e_1 + e_1 \otimes e_2 - e_2 \otimes e_1 \neq \mathrm{sgn}\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} T = -T.$$

Consider the subset

$$\Lambda^k(V) := \{ T \in V^{\otimes k} \mid \sigma(T) = \mathrm{sgn}(\sigma)\, T \text{ for all } \sigma \in S_k \}$$
of skew-symmetric tensors. Clearly, the subset $\Lambda^k(V)$ is a subspace of $V^{\otimes k}$. Consider the map
$$\mathrm{alt} \colon V^{\otimes k} \to V^{\otimes k}, \qquad T \mapsto \frac{1}{k!} \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)\, \sigma(T).$$

Lemma 2.3.2. For a tensor $T \in V^{\otimes k}$ and an element $\tau \in S_k$ we have
$$\tau(\mathrm{alt}(T)) = \mathrm{alt}(\tau(T)) = \mathrm{sgn}(\tau)\, \mathrm{alt}(T).$$

Corollary 2.3.3. The map $\mathrm{alt}$ is a projection onto $\Lambda^k(V)$.

For vectors $v_1, \dots, v_k \in V$ define their exterior product
$$v_1 \wedge \dots \wedge v_k := \mathrm{alt}(v_1 \otimes \dots \otimes v_k) = \frac{1}{k!} \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)\, v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(k)} \in \Lambda^k(V).$$
A skew-symmetric tensor $T$ is called decomposable whenever it is an exterior product of some vectors.

Lemma 2.3.4. We have
$$v_1 \wedge \dots \wedge v_{i-1} \wedge (\lambda v_i + \lambda' v_i') \wedge v_{i+1} \wedge \dots \wedge v_k = \lambda\, v_1 \wedge \dots \wedge v_{i-1} \wedge v_i \wedge v_{i+1} \wedge \dots \wedge v_k + \lambda'\, v_1 \wedge \dots \wedge v_{i-1} \wedge v_i' \wedge v_{i+1} \wedge \dots \wedge v_k, \tag{2.3.1}$$
$$v_{\sigma(1)} \wedge \dots \wedge v_{\sigma(k)} = \mathrm{sgn}(\sigma)\, v_1 \wedge \dots \wedge v_k$$
for all $i = 1, \dots, k$; $\lambda, \lambda' \in k$; $v_j, v_j' \in V$; $\sigma \in S_k$.
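The skew-symmetrizer and the exterior product can be sketched the same way as $\mathrm{sym}$, with a sign attached to each axis permutation (illustration only, using small concrete vectors):

```python
import numpy as np
from itertools import permutations

def sgn(p):
    # sign of a permutation, computed by counting inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T):
    """Project a k-tensor onto Lambda^k(V)."""
    perms = list(permutations(range(T.ndim)))
    return sum(sgn(p) * np.transpose(T, p) for p in perms) / len(perms)

def wedge(*vs):
    """v1 ^ ... ^ vk := alt(v1 ⊗ ... ⊗ vk)."""
    T = vs[0]
    for v in vs[1:]:
        T = np.multiply.outer(T, v)
    return alt(T)

e = np.eye(4)
w = wedge(e[0], e[1])
assert np.allclose(w, -w.T)                 # skew-symmetric
assert np.allclose(wedge(e[0], e[0]), 0)    # v ^ v = 0
```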

Suppose that $V$ is finite-dimensional and $(e_1, \dots, e_n)$ is a basis of $V$.

Lemma 2.3.5. The list of vectors $(e_{i_1} \wedge \dots \wedge e_{i_k} \mid i_1 < \dots < i_k)$ is a basis of $\Lambda^k(V)$.

Corollary 2.3.6. 1. $\dim(\Lambda^k(V)) = \binom{\dim(V)}{k}$ for $0 < k \leqslant \dim(V)$; 2. $\Lambda^k(V) = 0$ for $k > \dim(V)$.

Remark 2.3.7. We can define the exterior powers of $V$ without using tensors. Namely, we define the $k$-th exterior power $\Lambda^k(V)$ as the space of formal expressions of the form
$$\lambda_1 v_{11} \wedge \dots \wedge v_{1k} + \dots + \lambda_l v_{l1} \wedge \dots \wedge v_{lk}, \quad \text{where } l \in \mathbb{N},\ \lambda_i \in k,\ v_{ij} \in V,$$
modulo the relations (2.3.1). One can prove that this definition is consistent with the definition via tensors.

Remark 2.3.8. In analysis, when we consider an open subset $U \subset \mathbb{R}^n$ with coordinates $x^1, \dots, x^n$, elements of $\Lambda^k((T_{x_0}(U))^*)$, where $T_{x_0}(U)$ is the tangent space of $U$ at $x_0 \in U$, are called $k$-forms. We have the basis of $\Lambda^k((T_{x_0}(U))^*)$:
$$(dx^{i_1} \wedge \dots \wedge dx^{i_k} \mid i_1 < \dots < i_k).$$
So, a $k$-form $\omega$ can be decomposed:
$$\omega = \sum_{i_1 < \dots < i_k} \omega_{i_1,\dots,i_k}\, dx^{i_1} \wedge \dots \wedge dx^{i_k}.$$

2.4 Tensor Products of Linear Maps

Let $V_1, \dots, V_k, U_1, \dots, U_k$ be vector spaces and $\varphi_i \colon V_i \to U_i$, $i = 1, \dots, k$, be linear maps. Define the linear map
$$\varphi_1 \otimes \dots \otimes \varphi_k \colon V_1 \otimes \dots \otimes V_k \to U_1 \otimes \dots \otimes U_k, \qquad v_1 \otimes \dots \otimes v_k \mapsto \varphi_1(v_1) \otimes \dots \otimes \varphi_k(v_k). \tag{2.4.1}$$
It is easy to see that $\varphi_1 \otimes \dots \otimes \varphi_k$ is well-defined; it is called the tensor product of the maps $\varphi_1, \dots, \varphi_k$.

Let $V$ be a vector space and $\varphi \colon V \to V$ be an operator. We have the endomorphism
$$\varphi^{\otimes k} := \underbrace{\varphi \otimes \dots \otimes \varphi}_{k} \colon V^{\otimes k} \to V^{\otimes k}.$$

Lemma 2.4.1. We have the inclusions
1. $\varphi^{\otimes k}(\Lambda^k(V)) \subset \Lambda^k(V)$;
2. $\varphi^{\otimes k}(S^k(V)) \subset S^k(V)$.

From this lemma it follows that the operators
$$\Lambda^k(\varphi) \colon \Lambda^k(V) \to \Lambda^k(V), \quad T \mapsto \varphi^{\otimes k}(T); \qquad S^k(\varphi) \colon S^k(V) \to S^k(V), \quad T \mapsto \varphi^{\otimes k}(T)$$
are well-defined.
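In coordinates (with the basis tensors ordered lexicographically), the matrix of $\varphi_1 \otimes \varphi_2$ is the Kronecker product of the matrices of $\varphi_1$ and $\varphi_2$, which numpy provides as `np.kron`. A sketch checking (2.4.1) on a decomposable tensor (random matrices, illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.normal(size=(2, 3))   # matrix of phi1 : V1 -> U1
Q = rng.normal(size=(4, 2))   # matrix of phi2 : V2 -> U2

v1 = rng.normal(size=3)
v2 = rng.normal(size=2)

# coordinates of v1 ⊗ v2, flattened in lexicographic (row-major) order
v12 = np.multiply.outer(v1, v2).ravel()

# (phi1 ⊗ phi2)(v1 ⊗ v2) = phi1(v1) ⊗ phi2(v2); in coordinates this is np.kron(P, Q)
lhs = np.kron(P, Q) @ v12
rhs = np.multiply.outer(P @ v1, Q @ v2).ravel()
assert np.allclose(lhs, rhs)
```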

2.5 Canonical Forms of Tensors

In this section we consider finite-dimensional vector spaces. There is the following General problem. Given a tensor T , find a “simplest” presentation of T in coordinates (the canonical form of T ). Here are some special cases of this problem.

1. $q \in S^2(V)$, $k = \mathbb{C}$. In this case, the canonical form is
$$q = e_1^2 + \dots + e_k^2,$$
where $k \geqslant 0$ and $(e_i)$ is some basis of $V$. Thus $k$ is the rank of $q$, and the matrix of $q$ in the basis $(e_i)$ is
$$\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}.$$
If $k = \dim(V)$, then $q$ is called nondegenerate.

2. $q \in S^2(V)$, $k = \mathbb{R}$. In this case, the canonical form is
$$q = e_1^2 + \dots + e_k^2 - e_{k+1}^2 - \dots - e_{k+l}^2,$$
where $k, l \geqslant 0$ and $(e_i)$ is some basis of $V$. Thus $k + l$ is the rank of $q$, and the matrix of $q$ in the basis $(e_i)$ is
$$\begin{pmatrix} I_k & 0 & 0 \\ 0 & -I_l & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
The pair of integers $(k, l)$ is called the signature of $q$. If $k + l = \dim(V)$, then $q$ is called nondegenerate.

3. $\omega \in \Lambda^2 V$, $k$ an arbitrary field. In this case, the canonical form is
$$\omega = e_1 \wedge e_2 + e_3 \wedge e_4 + \dots + e_{2k-1} \wedge e_{2k},$$
where $k \geqslant 0$ and $(e_i)$ is some basis of $V$.

4. $\varphi \in V^* \otimes V = \mathrm{Hom}(V, V)$, $k = \mathbb{C}$. In this case, the canonical form is
$$\varphi = \Phi^j_i\, e_j \otimes x^i,$$
where $(e_i)$ is a basis of $V$, $(x^i)$ is the dual basis of the dual space $V^*$, and $(\Phi^j_i)$ is a Jordan matrix.

5. $f \in S^3(V)$, where $\dim(V) = 3$, $k = \mathbb{C}$. In this case, if $f$ is "in general position," then there exists a basis $(e_1, e_2, e_3)$ of $V$ such that
$$f = \lambda_1 (e_1^3 + e_2^3 + e_3^3) + \lambda_2\, e_1 e_2 e_3,$$
where $\lambda_1, \lambda_2 \in \mathbb{C}$.
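For cases (1) and (2), the canonical form of a real quadratic form can be found numerically: by Sylvester's law of inertia, the signature $(k, l)$ is the number of positive and negative eigenvalues of the symmetric matrix of $q$. A small sketch (the quadratic form here is a made-up example):

```python
import numpy as np

# q = 2*x1*x2 - 2*x3^2, with matrix Q such that q(x) = x^T Q x
Q = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., -2.]])

ev = np.linalg.eigvalsh(Q)           # eigenvalues of a symmetric matrix
k = int(np.sum(ev > 1e-12))          # number of +1's in the canonical form
l = int(np.sum(ev < -1e-12))         # number of -1's
print((k, l))                        # the signature of q
```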

Remark 2.5.1. In some cases the canonical form is uniquely determined (examples (1), (2), and (3) above), and in some cases it is not uniquely determined (examples (4) and (5) above).

Remark 2.5.2. For tensors of a "general" space, there are no "good" canonical forms. For example, there are no "good" canonical forms for tensors of the space $S^4(V)$, where $\dim(V) = 3$, $k = \mathbb{C}$.

Exercise 2.5.3. Find the canonical form of the tensor
$$x^1 x^4 - (x^3)^2 \in S^2(\mathbb{C}^{4*}),$$
where $(x^i)$ is the standard basis of $\mathbb{C}^{4*}$.

Exercise 2.5.4. Find the canonical form of the tensor
$$x^1 x^5 - x^2 x^4 + (x^3)^2 \in S^2(\mathbb{R}^{5*}),$$
where $(x^i)$ is the standard basis of $\mathbb{R}^{5*}$.

Exercise 2.5.5. Find the canonical form of the tensor
$$x^1 \wedge x^2 + x^1 \wedge x^5 - x^2 \wedge x^4 + x^3 \wedge x^3 + x^3 \wedge x^6 \in \Lambda^2(F_3^{6*}),$$
where $(x^i)$ is the standard basis of $F_3^{6*}$.

2.6 *Multilinear Maps

Definition 2.6.1. If V1,...,Vk,W are vector spaces, then

1. A map $\varphi \colon V_1 \times \dots \times V_k \to W$ is called multilinear whenever $\varphi(v_1, \dots, v_k)$ is linear in $v_i$ for all $1 \leqslant i \leqslant k$.

2. A multilinear map of the form $\varphi \colon V_1 \times \dots \times V_k \to k$ is called a multilinear form.

The concept of a multilinear form is a generalization of the concept of a bilinear form.

Example 2.6.2. The map
$$\mathrm{Mat}_{k \times l} \times \mathrm{Mat}_{l \times m} \times \mathrm{Mat}_{m \times n} \to \mathrm{Mat}_{k \times n}, \qquad (A, B, C) \mapsto ABC$$
is trilinear.

Lemma 2.6.3. Consider the map

$$\alpha \colon V_1 \times \dots \times V_k \to V_1 \otimes \dots \otimes V_k, \qquad (v_1, \dots, v_k) \mapsto v_1 \otimes \dots \otimes v_k.$$

Then we have:

1. $\alpha$ is a multilinear map.

2. $\mathrm{Span}(\alpha(V_1 \times \dots \times V_k)) = V_1 \otimes \dots \otimes V_k$.

3. Suppose that $\varphi \colon V_1 \times \dots \times V_k \to W$ is a multilinear map; then there exists a unique linear map $\varphi_\alpha \colon V_1 \otimes \dots \otimes V_k \to W$ such that $\varphi = \varphi_\alpha \circ \alpha$. The following commutative diagram illustrates this statement:
$$\begin{array}{ccc} V_1 \times \dots \times V_k & \overset{\alpha}{\longrightarrow} & V_1 \otimes \dots \otimes V_k \\ & {\scriptstyle \varphi}\searrow & \downarrow {\scriptstyle \varphi_\alpha} \\ & & W \end{array}$$

Lemma 2.6.4. We have the isomorphisms∶ × × →

∗ ∗ V1 ... Vk multilinear forms on V1 ... Vk , V1 ... Vk k, s1⊗ ...⊗ sk → { × × . } v1, . . . , vk s1 v1 . . . sk vk × × → ⊗ ⊗ ↦ œ ¡ Definition 2.6.5. If k N∗, W is( a vector space,) ↦ ( and) V is( a vector) space. Then

∈ 31 1. A multilinear map ϕ V ×k W is called symmetric whenever

ϕ σ v∶1, . . . ,→ vk ϕ v1, . . . , vk for all σ Sk.

2. A symmetric map( of( the form))ϕ=V (×k k is called) a symmetric∈ form on V ×k.

The concept of symmetric multilinear∶ form→ is a generalization of the concept bilinear symmetric form. Example 2.6.6. The following maps are symmetric: 1. the Euclidean

n n R R R, v, u v u;

2. the map ⋅ ∶ × → ( ) ↦ ⋅

Matn k Matn k Matn k , A, B AB BA.

Lemma 2.6.7. Consider( ) the× map( ) → ( ) ( ) ↦ +

γ sym α V ×k Sk V , where α is the universal multilinear∶= map.○ ∶ Then→ we( have:) 1. γ is a symmetric map. 2. Span γ V ×k Sk V .

×k 3. Suppose( ( that))ϕ= V( ) W is a symmetric map; then there exists a unique k linear map ϕγ S V W such that ϕ ϕγ γ. The following commutative diagram illustrates∶ this→ statement: ∶ → = ○ γ V ×k / SkV

ϕ ϕ  γ W

The map γ from this lemma is called the universal symmetric map. Clearly, the set of symmetric maps of the form ϕ V ×k W is a vector space.

Lemma 2.6.8. We have the isomorphisms ∶ → Sk V ∗ symmetric forms on V ×k , V ×k k, ( ) → { } s ... s 1 1 k ⎧ v , . . . , v s v . . . s v ⎫ ⎪ 1 k → k! σ(1) 1 σ(k) k ⎪ ⎪ σ∈Sk ⎪ ⋅ ⋅ ↦ ⎨ ⎬ ( ) ↦ ( ) ( ) ⎪ 32 Q ⎪ ⎩⎪ ⎭⎪ Suppose that V is a finite-dimensional vector space and k N∗. A map p V k is called a homogeneous form of degree k on V whenever ∈ ∶ → p v q v, . . . , v ,

k ( ) = ( ) where q V ×k k is a symmetric form.´¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¶ Clearly, the set of homogeneous forms p V k of degree k is a vector space. The concept of homogeneous form is a generalization∶ of→ the concept of quadratic form. ∶ → Lemma 2.6.9. Suppose that V is a finite-dimensional vector space and k N∗. Then the map ∈ homogeneous forms symmetric forms on V ×k , p V k of degree k

{ q v1, . . . , vk} → œp v q v, . . . , v ¡ ∶ → is an isomorphism between vector( spaces.) ↦ ( ) = ( )

Definition 2.6.10. If $k \in \mathbb{N}^*$, $W$ is a vector space, and $V$ is a vector space, then

1. A multilinear map $\varphi \colon V^{\times k} \to W$ is called skew-symmetric whenever
$$\varphi(\sigma(v_1, \dots, v_k)) = \mathrm{sgn}(\sigma)\, \varphi(v_1, \dots, v_k) \quad \text{for all } \sigma \in S_k.$$

2. A skew-symmetric map of the form $\varphi \colon V^{\times k} \to k$ is called a skew-symmetric form.

The concept of a skew-symmetric form is a generalization of the concept of a skew-symmetric bilinear form.

Example 2.6.11. The following maps are skew-symmetric:

1. the cross product
$$\times \colon \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3, \qquad (v, u) \mapsto v \times u;$$

2. the Lie bracket
$$[\cdot, \cdot] \colon \mathrm{Mat}_n(k) \times \mathrm{Mat}_n(k) \to \mathrm{Mat}_n(k), \qquad (A, B) \mapsto AB - BA;$$

3. the map
$$(k^n)^{\times n} \to k, \qquad \left(\begin{pmatrix} a^1_1 \\ \vdots \\ a^n_1 \end{pmatrix}, \dots, \begin{pmatrix} a^1_n \\ \vdots \\ a^n_n \end{pmatrix}\right) \mapsto \det\begin{pmatrix} a^1_1 & \dots & a^1_n \\ \vdots & \ddots & \vdots \\ a^n_1 & \dots & a^n_n \end{pmatrix}.$$

Lemma 2.6.12. Consider the map
$$\beta := \mathrm{alt} \circ \alpha \colon V^{\times k} \to \Lambda^k(V),$$
where $\alpha$ is the universal multilinear map. Then we have:

1. $\beta$ is a skew-symmetric map.

2. $\mathrm{Span}(\beta(V^{\times k})) = \Lambda^k(V)$.

3. Suppose that $\varphi \colon V^{\times k} \to W$ is a skew-symmetric map; then there exists a unique linear map $\varphi_\beta \colon \Lambda^k(V) \to W$ such that $\varphi = \varphi_\beta \circ \beta$. The following commutative diagram illustrates this statement:
$$\begin{array}{ccc} V^{\times k} & \overset{\beta}{\longrightarrow} & \Lambda^k(V) \\ & {\scriptstyle \varphi}\searrow & \downarrow {\scriptstyle \varphi_\beta} \\ & & W \end{array}$$

The map $\beta$ from this lemma is called the universal skew-symmetric map. Clearly, the set of skew-symmetric maps of the form $\varphi \colon V^{\times k} \to W$ is a vector space.

Lemma 2.6.13. We have the isomorphism
$$\Lambda^k(V^*) \to \{\text{skew-symmetric forms on } V^{\times k}\}, \qquad s_1 \wedge \dots \wedge s_k \mapsto \Bigl((v_1, \dots, v_k) \mapsto \frac{1}{k!} \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)\, s_1(v_{\sigma(1)}) \cdots s_k(v_{\sigma(k)})\Bigr).$$

2.7 *Tensor Fields

Let $U$ be an open subset of $\mathbb{R}^n$. A scalar field on $U$ is a map $v \colon U \to \mathbb{R}$. A vector field on $U$ is a map $v \colon U \to \mathbb{R}^n$; for example, electric and magnetic fields. The natural generalization of scalar and vector fields is tensor fields. A tensor field is a map of the form
$$v \colon U \to (\mathbb{R}^n)^{\otimes p} \otimes (\mathbb{R}^{n*})^{\otimes q}, \quad \text{where } p, q \in \mathbb{N}.$$
For example, a Riemannian metric on $U$ is a positive-definite tensor field $g \colon U \to S^2(\mathbb{R}^{n*})$. The point is that many physical laws are of the form $D(f) = 0$, where $f$ is a tensor field which describes a physical object and $D$ is some universal differential operation. Maxwell's equations in electrodynamics and Einstein's equation in gravitational theory are of this form. Here is an example of a universal differential operation.

Example 2.7.1. Consider the space $\mathbb{R}^n$ with the coordinates $(x^1, \dots, x^n)$ and the dual space $\mathbb{R}^{n*}$ with the dual basis $(dx^1, \dots, dx^n)$. Consider the space $\Omega^k(U)$ of tensor fields of the form
$$\omega \colon U \to \Lambda^k(\mathbb{R}^{n*}).$$

A tensor field $\omega \in \Omega^k(U)$ can be written in the form

$$\omega = w_{i_1,\dots,i_k}(x)\, dx^{i_1} \wedge \dots \wedge dx^{i_k}, \quad \text{where } w_{i_1,\dots,i_k}(x) \in C^\infty(U).$$

The exterior derivative is a universal differential operation. It is defined as
$$d \colon \Omega^k(U) \to \Omega^{k+1}(U), \qquad w_{i_1,\dots,i_k}(x)\, dx^{i_1} \wedge \dots \wedge dx^{i_k} \mapsto \frac{\partial w_{i_1,\dots,i_k}(x)}{\partial x^l}\, dx^l \wedge dx^{i_1} \wedge \dots \wedge dx^{i_k}.$$

Maxwell's equations in electrodynamics. Consider the 4-dimensional space-time $\mathbb{R}^4$ with coordinates $(t, x, y, z)$ and the corresponding basis $(dt, dx, dy, dz)$ of $\mathbb{R}^{4*}$. Consider an open subset $U \subset \mathbb{R}^4$. For Maxwell's equations we have the following data.

($\rho$) The density of charge $\rho \colon U \to \mathbb{R}$, where $\rho(t, x, y, z)$ is the density of charge at the point $(x, y, z)$ at the time $t$.

(J) The current density $J \colon U \to \mathbb{R}^3$, $J(t, x, y, z) = (J_x(t, x, y, z), J_y(t, x, y, z), J_z(t, x, y, z))$, where $J(t, x, y, z)$ is the current density at the point $(x, y, z)$ at the time $t$.

(E) The electric field $E \colon U \to \mathbb{R}^3$, $E(t, x, y, z) = (E_x(t, x, y, z), E_y(t, x, y, z), E_z(t, x, y, z))$, where $E(t, x, y, z)$ is the electric field at the point $(x, y, z)$ at the time $t$.

(B) The magnetic field $B \colon U \to \mathbb{R}^3$, $B(t, x, y, z) = (B_x(t, x, y, z), B_y(t, x, y, z), B_z(t, x, y, z))$, where $B(t, x, y, z)$ is the magnetic field at the point $(x, y, z)$ at the time $t$.

For Maxwell's equations we need the following tensor fields or, simply, tensors:
$$F := -E_x\, dt \wedge dx - E_y\, dt \wedge dy - E_z\, dt \wedge dz + B_x\, dy \wedge dz + B_y\, dz \wedge dx + B_z\, dx \wedge dy \in \Omega^2(U)$$
(Faraday's tensor),
$$\star F := B_x\, dt \wedge dx + B_y\, dt \wedge dy + B_z\, dt \wedge dz + E_x\, dy \wedge dz + E_y\, dz \wedge dx + E_z\, dx \wedge dy \in \Omega^2(U)$$
(the dual of Faraday's tensor), and
$$\varphi := \rho\, dx \wedge dy \wedge dz - J_x\, dt \wedge dy \wedge dz - J_y\, dt \wedge dz \wedge dx - J_z\, dt \wedge dx \wedge dy \in \Omega^3(U).$$

= ∧ ∧ − ∧ ∧ − 35 ∧ ∧ − ∧ ∧ ∈ ( ) Exercise 2.7.2. Check that the matrix of Faraday’s tensor in the basis dt, dx, dy, dz of the space R4∗ is 0 Ex Ey Ez ( ) Ex 0 Bz By ⎛Ey −Bz −0− Bx ⎞ ⎜E B B −0 ⎟ ⎜ z y x ⎟ Maxwell’s equations are ⎜ − ⎟ ⎝ − ⎠ d F 0, d F ϕ. Einstein’s equation in gravitational theory. ( ) = (⋆ ) = Consider R4 with coordinates x x0, x1, x2, x3 R4, the standard basis ∂ ∂ ∂ ∂ =, ( , , ) ∈ ∂x0 ∂x1 ∂x2 ∂x3 of R4 and the dual basis ‹  dx0, dx1, dx2, dx3 of the dual space R4∗. Consider an open subset U R4. For Einstein’s equation we have the following data. ( ) (g) The ⊂ 2 4∗ i j g U S R , g x gij x dx dx and its dual ∶ → ( ) ( ) = ( ) ∂ ⊗ ∂ g−1 U S2 R4 , g−1 x gij x , ∂xi ∂xj ij where g is the inverse∶ to→ gij( matrix.) ( ) = ( ) ⊗ (T) The -energy tensor ( ) ( 2 ) 4∗ i j T U S R ,T x Tij x dx dx . Einstein’s equation is ∶ → ( ) ( ) = ( ) ⊗ 1 8πG R g R Λg T , µν 2 µν µν c4 µν where − + = ˆ Λ is the cosmological constant; ˆ G is Newton’s gravitational constant; ˆ c is the speed of light; ˆ ∂Γρ ∂Γρ R µν µρ Γρ Γσ Γσ Γρ , µν ∂xρ ∂xν µν ρσ µρ νσ where = 1 − ∂g + ∂g −∂g Γµ gµν νρ νσ ρσ ; ρσ 2 ∂xσ ∂xρ ∂xν µν ˆ R g Rµν. = ‹ + − 

= 36 2.8 Problems

Problem 2.8.1. Prove that a tensor T V U can be represented in the form

k∈ ⊗ T vi ui, i=1 = Q ⊗ where v1, . . . , vk V and u1, . . . , uk U are linearly independent vectors.

Problem 2.8.2.∈ Consider the 3-dimensional∈ space R3. Prove that the following linear maps are well-defined:

1. ϕ R3 R3 R, ϕ v u v u, (the dot product of v and u);

2. ϕ ∶ R3 ⊗ R3 → R3, ϕ( v⊗ u) = v⋅ u (the cross-product of v and u).

Problem∶ 2.8.3.⊗ →Prove that( ⊗ the) following= × “linear maps” are not well-defined:

1 1 1 1. R R R , a1 b1 a1 3b1 ;

⊗ → ( ) ⊗ ( ) ↦ (− + ) 2 1 1 a1 2. R R R , b1 a1a2b1 ; a2 ⊗ → Œ ‘ ⊗ ( ) ↦ ( ) 2 2 1 a1 b1 3. R R R , a1b2 a2b1 1 . a2 b2 ⊗ → Œ ‘ ⊗ Œ ‘ ↦ ( + + ) 2 1 2 a1 2a1 4. R R R , b1 . a2 a2 ⊗ → Œ ‘ ⊗ ( ) ↦ Œ ‘ Problem 2.8.4. Suppose that v1, v2, v−3 V are linearly independent vectors. Prove that the tensor v1 v2 v2 v3 V V is not decomposable. ∈ Problem 2.8.5. Find⊗ + decomposable⊗ ∈ ⊗ tensors on the list 1. T ij i j;

2. T ij = i + j;

3. T ij = ij−.

Problem= 2.8.6. Suppose that V and U are vector spaces. 1. For 0 v V prove that

(a) the≠ map∈ ϕ U V U, ϕ u v u is linear; (b) Ker ϕ 0; ∶ → ⊗ ( ) = ⊗ ( ) = 37 (c) the image of ϕ is a subspace of V U of dimension dim U (it is denoted by Span v U). ⊗ ( ) 2. For linearly independent( ) ⊗ v1, . . . vd V prove that the sum

Span v1 U∈ ... Span vd U

of subspaces is direct. ( ) ⊗ + + ( ) ⊗ Problem 2.8.7. Suppose that V is a finite-dimensional vector space. Define the tensor i ∗ δ x ei V V, where e is a basis of V , xi is the dual basis of the dual space V ∗. Prove that δ i ∶= ⊗ ∈ ⊗ is well-defined. ( ) ( ) ∗ Consider a tensor product V1 ... Vk such that Vj Vi , where i j. Then we have the well-defined linear map ⊗ ⊗ = < V1 ... Vk V1 ... Vˆi ... Vˆj ... Vk, v1 ... vk vj vi v1 ... vˆi ... vˆj ... vk; ⊗ ⊗ → ⊗ ⊗ ⊗ ⊗ ⊗ ⊗ it is called a contruction⊗ ⊗. ↦ ( ) ⊗ ⊗ ⊗ ⊗ ⊗ ⊗ Problem 2.8.8. Find all possible contractions of the tensor

1 2 3 n∗ n∗ n n x x x e1 e1 e2 R R R R .

Problem 2.8.9.( Prove+ ) lemma⊗ ⊗ 2.2.2⊗ ( and− its) corollary.∈ ⊗ ⊗ ⊗ Problem 2.8.10. Prove lemma 2.2.5.

Problem 2.8.11. Prove lemma 2.3.2 and its corollary.

Problem 2.8.12. Prove lemma 2.3.5.

Problem 2.8.13. Suppose that V is a finite-dimensional vector space. Prove that

1. V ⊗2 Λ2 V S2 V ;

⊗ 2. V k = Λk(V ) ⊕ Sk(V) for dim V 2, k 3.

Problem≠ 2.8.14.( ) ⊕Suppose( ) that V (is a) ⩾ finite-dimensional⩾ vector space. Construct explicitly the isomorphisms

1. Λ2 V U Λ2 V S2 U S2 V Λ2 U ;

2. S2 (V ⊗U ) ≃S2 (V ) ⊗S2 (U ) ⊕Λ2 (V ) ⊗Λ2 (U ).

( ⊗ ) ≃ ( ) ⊗ ( ) ⊕ ( ) ⊗ ( ) 38 Chapter 3 Operators

Vector spaces we consider in this chapter are assumed finite-dimensional real or complex. We describe operators in general vector spaces, real inner product spaces, and Hermitian spaces.

3.1 Realification and Complexification

A complex vector space U, considered as a real vector space, is called the realification of U and denoted by U R. If e1, . . . , en is a basis of U, then e1, 1e1, . . . , en, 1en is a basis of U R (problem 3.8.1). From this it follows that √ √ ( ) ( − − ) R dimR U 2 dimC U .

Definition 3.1.1. Suppose that V( is a) = real vector( ) space. An operator J V V 2 such that J IV is called a complex structure on V . ∶ → Example 3.1.2. Consider a complex vector space U and an operator = − U U, u 1u. √ In a natural way, this operator defines→ the↦ complex− structure on U R. A real vector space V with a complex structure J can be considered as a complex vector space with the multiplication by complex numbers

a b 1 v av bJ v , where v V, a b 1 C. √ √ In this situation,( + the− space) ⋅ ∶=V considered+ ( ) as a complex∈ vector+ space− ∈ is denoted by V J .

Lemma 3.1.3. Suppose that V is a real vector space. Then

C 1. V V R C with the multiplication by complex numbers

∶= ⊗ λ v µ v λµ , where v V and λ, µ C, is a complex vector⋅ ( ⊗ space) ∶= (it⊗ is( called) the complexification∈ of∈ V ). 39 2. If ei is a basis of V , then ei 1 is a basis of V C.

C 3. dim( C)V dimR V . ( ⊗ )

Consider( Rn) =and its( complexification) Rn C. We have the isomorphism of com- plex vector spaces (a1 ) λa1 n n R C C , λ ⎛an⎞ ⎛λan⎞ ( ) → n⎜ ⋮ ⎟ ⊗ ↦n ⎜ ⋮ ⎟ Using this isomorphism, we identify R ⎝C and⎠ C . ⎝ ⎠ Lemma 3.1.4. Suppose that V,U are( real) vector spaces and ϕ V U is a linear map. Then ∶ → 1. The map ϕC V C U C, v λ ϕ v λ between complex vector spaces is linear. ∶ → ⊗ ↦ ( ) ⊗

2. If V and U are finite-dimensional, ei is a basis of V , and fi is a basis of U, then the matrix of ϕC in the bases ei 1 and fi 1 is equal to the matrix of ϕ in the bases ei and fi . ( ) ( ) ( ⊗ ) ( ⊗ ) Remark 3.1.5. For a complex( ) vector( ) space U, there is no natural complex conjugation map ¯ U U with natural properties

1. u⋅ ∶1 u→2 u1 u2,

2. λu+ λu=. +

But if U =is the complexification of a real vector space V , that is, U V C, then we have the natural complex conjugation map = ¯ V C V C, v λ v λ.

⋅ ∶ → ⊗ ↦ ⊗ 3.2 Jordan Normal Form

Suppose that ϕ V V is an operator. A subspace U V is called ϕ-invariant or, simply, invariant whenever ϕ U U. For example, 0 , V , Im ϕ and Ker ϕ are invariant subspaces.∶ → ⊂ If ϕ V V is an operator( ) and⊂ U V is its invariant{ } subspace,( ) then we( ) have the operator ∶ → ϕU U U,⊂ u ϕ u ; it is called the restriction of ϕ to U. ∶ → ↦ ( )

40 Lemma 3.2.1. Suppose that ϕ V V is an operator and U, W are its invariant subspaces such that V U W . Consider basis e1, . . . , el of U and basis f1, . . . , fm of W . Then the matrix of ϕ in the∶ basis→ e1, . . . , el, f1, . . . , fm of V is the matrix = ⊕ ( ) ( ) Φ ( 0 ) U , 0 ΦW Œ ‘ where ΦU is the matrix of ϕU in the basis e1, . . . , el and ΦW is the matrix of ϕW in the basis f1, . . . , fm . ( ) ( )

Definition 3.2.2. ˆ The characteristic polynomial of a matrix Φ Matn×n k is the polynomial pΦ t det tI Φ . ∈ ( ) Clearly, the characteristic polynomial of an operator ϕ V V is a monic ( ) ∶= ( − ) polynomial of dergee dim V . ∶ → ˆ The characteristic polynomial( ) of an operator ϕ V V is the characteristic polynomial of its matrix in some basis (see problem 3.8.6). ∶ → ˆ The roots of the characteristic polynomial of an operator ϕ are called eigen- values of ϕ. The multiset1 of eigenvalues of ϕ is called the spectrum of ϕ and denoted by Spec ϕ .

ˆ Suppose that λ is( an) eigenvalue of an operator ϕ V V . Then

1. the subspace ∶ →

Vλ Ker ϕ λIV v V ϕ v λv

is called the eigenspace∶= of(ϕ −corresponding) = { ∈ toS the( ) eigenvalue= } λ;

2. a nonzero vector v Vλ is called an eigenvector of ϕ corresponding to the eigenvalue λ. ∈ Exercise 3.2.3. Suppose that ϕ V V is an operator and λ Spec ϕ . Prove that the eigenspace Vλ is a nonempty invariant subspace of V . ∶ → ∈ ( ) Exercise 3.2.4. Suppose that Φ Matn×m k and Ψ Matm×n k . Prove that m n t pΦΨ t t pΨΦ t . ∈ ( ) ∈ ( ) Example( ) = 3.2.5.(Consider) an operator 1 2 Φ 2 2. 4 1 R R

1 = Œ ‘ ∶ → By definition, a multiset is a set with multiplicities− of its elements.

41 Then t 1 2 p t det t2 9, Spec Φ 3, 3 . Φ 4 t 1 − − 1 ( ) = 1 Œ ‘ = − ( ) = { − } The vectors and are− eigenvectors+ of ϕ corresponding to the eigenvalues 3 1 2 and 3 respectivelyŒ ‘ andŒ thus‘ − 1 1 − 2 , 2 . R 3 1 R −3 2 ( ) = ⟨Œ ‘⟩ ( ) = ⟨Œ ‘⟩ 1 1 3 0 − The matrix of Φ in the basis ( , ) is 1 2 0 3 Œ ‘ Œ ‘ Œ ‘ Jordan− Normal Form− Definition 3.2.6. 1. An n n matrix of the form

× λ 1 λ 1 0 ⎛ ⎞ ⎜ ⎟ ⎜ 1 ⎟ ⎜ ⎟ ⎜ 0 ⋱ ⋱ λ 1⎟ ⎜ ⎟ ⎜ ⋱ λ⎟ ⎜ ⎟ ⎜ ⎟ is called a Jordan block. ⎝ ⎠ 2. A of the form

J1 0 J 2 , ⎛ ⎞ ⎜ ⎟ ⎜ 0 Jk⎟ ⎜ ⎟ ⎜ ⋱ ⎟ where J1,...,Jk are Jordan blocks,⎝ is called a⎠ Jordan matrix. 3. A Jordan basis of an operator ϕ V V is a basis of V such that the matrix of ϕ in this basis is a Jordan matrix. ∶ → 4. The Jordan form of an operator is the Jordan matrix of the operator in a Jordan basis. Exercise 3.2.7. Prove that for the operator

a1 cos α sin α a1 2 2, , R R a2 sin α cos α a2 → Œ ‘ ↦ Œ ‘ Œ ‘ where α πk with k Z, there is no a Jordan− basis.

≠ ∈ 42 Exercise 3.2.8. Suppose that V is a vector space and ϕ V V is an operator such that there exists a Jordan basis of ϕ. Prove that ∶ → 1. A Jordan basis of ϕ is not uniquely determined.

2. The Jordan form of ϕ is uniquely (up to of Jordan blocks) deter- mined.

Exercise 3.2.9. Find Jordan form of the operator

0 0 1 0 0 0 0 0 0 0 1 0 0 0 ⎛0 0 0 0 1 0 0⎞ ⎜ ⎟ 7 7 ⎜0 0 0 0 0 1 0⎟ k k . ⎜ ⎟ ⎜0 0 0 0 0 0 1⎟ ⎜ ⎟ ⎜0 0 0 0 0 0 0⎟ ∶ → ⎜ ⎟ ⎜0 0 0 0 0 0 0⎟ ⎜ ⎟ ⎜ ⎟ Theorem 3.2.10 (Jordan⎝ Normal Form Theorem)⎠ . Suppose that ϕ V V is an operator. Then ∶ → there exists the characteristic polynomial of ϕ . a Jordan basis of ϕ is completely decomposable

Corollaryœ 3.2.11. Suppose that¡ V⇔isœ a complex vector space; then for every¡ operator ϕ V V there exists a Jordan basis.

∶ Also,→ there is the Normal Form Theorem for operators in real vector spaces. Here is its special case:

Theorem 3.2.12. Suppose that V is a real vector space and ϕ V V is an operator such that the Jordan form of the complexification ϕC V C V C is a . Then there exist a basis of V such that the matrix of ϕ in this∶ basis→ is a matrix of the form ∶ → R1 0 ⎛ Rn ⎞ ⎜ ⎟ , ⎜ ⋱ λ1 ⎟ ⎜ ⎟ ⎜ 0 ⎟ ⎜ ⎟ ⎜ λm⎟ ⎜ ⋱ ⎟ where ⎜ ⎟ ⎝ ⎠ ai bi Ri Mat2×2 R , λj R, bi ai = Œ ‘ ∈ ( ) ∈ −

43 3.3 Operators in Real Inner Product Spaces

In this section we consider real vector spaces. Let V be a real vector space. An inner product on V is a map

, V V R such that ⟨⋅ ⋅⟩ ∶ × →

1. v, v R⩾0 and v, v 0 whenever v 0 (positive-definetness); ′ ′ ′ ′ 2. ⟨λv ⟩ λ∈ v , u λ⟨ v, u⟩ = λ v , u (linearity= by the first argument); ′ ′ ′ ′ 3. ⟨v, λu+ λ u ⟩ = λ⟨v, u⟩ + λ ⟨v, u ⟩ (linearity by the second argument);

4. ⟨v, u + u, v⟩ =(symmetry).⟨ ⟩ + ⟨ ⟩

A real⟨ vector⟩ = ⟨ space⟩ with a fixed inner product is called a real . Remark 3.3.1. In the definition above, properties 2–4 mean that , is a symmetric bilinear form on V V , and property 1 means that the corresponding quadratic form is positive-definite. ⟨⋅ ⋅⟩ × A basis ei of a real inner product space V is called orthonormal whenever

( ) 1, if i j, ei, ej δij ⎧0, if i j. ⎪ = ⟨ ⟩ = ∶= ⎨ Example 3.3.2. The is defined⎪ to≠ be the coordinate vector space ⎩⎪ Rn with the Euclidean inner product (also called the dot product)

x1 y1 ⊺ x, y x y x1y1 ... xnyn, where x , y . ⎛xn⎞ ⎛yn⎞ ⟨ ⟩ = = + + = ⎜ ⋮ ⎟ = ⎜ ⋮ ⎟ The standard basis of the Euclidean space Rn is orthonormal.⎝ ⎠ ⎝ ⎠ Example 3.3.3 (A generalization of the previous example). Reacall that an n n Q is called positive definite iff of their upper left k k submatrices are positive. Now, a positive define real symmetric n n matrix× Q defines the inner product × × n n ⊺ , R R R, x, y x Qy.

Example 3.3.4 (Frobenius⟨⋅ ⋅⟩ ∶ inner× product)→ . For( V) ↦Matn×m R we have the Frobe- nius inner product , F , where = ( ) ⊺ ⟨⋅ ⋅⟩ Φ, Ψ F tr ΦΨ .

⟨ ⟩ =44 ( ) Lemma 3.3.5. Suppose that V is a real inner product space and U is a subspace of V . Then the map U U R, v, u v, u is an inner product on U. It is called the restriction of the inner product , to the × → ( ) ↦ ⟨ ⟩ subspace U. ⟨⋅ ⋅⟩ Lemma 3.3.6. Suppose that V is a real inner product space and U is a subspace of V . Then

1. U ⊥ v V v, u 0 for all u U is a subspace of V ;

⊥ 2. V ∶=U{ U∈ S⟨ ⟩ = ∈ }

Lemma= 3.3.7.⊕ Suppose that V is a real inner product space. Then the maps V V ∗, v v, , where v, u v, u , (3.3.1)

Hom→ V,V ↦ ⟨Bilinear⋅⟩ forms⟨ on⋅⟩( V) = ⟨V , ⟩ (3.3.2) ϕ βϕ, where βϕ v, u ϕ v , u ( ) → { × } are isomorphisms between vector spaces. ↦ ( ) = ⟨ ( ) ⟩

Using isomorphisms (3.3.1) and (3.3.2), we identify a real inner product space V and its dual V ∗, and operators on a real inner product space V and bilinear forms on V V .

× Exercise 3.3.8. Suppose that V is a real inner product space and e1, . . . , en is an . Check that ( ) ∗ 1. The dual to e1, . . . , en basis of V coincides with the corresponding under ∗ the identification (3.3.1) basis of V , that is, coincides with e1, ,..., e1, . ( ) 2. If ϕ is an operator in V and βϕ is the corresponding under(⟨ the⋅ identification⟩ ⟨ ⋅⟩) (3.3.2) bilinear form, then the matrix Φ of ϕ in e1, . . . , en coincides with the transpose of the matrix BΦ of βϕ in e1, . . . , en : ( ) ⊺ Φ( BΦ. )

= Let V,U be a real inner product spaces and ϕ V U be a linear map. A linear map ϕ∗ U V is called the adjoint of ϕ whenever ∶ → ∗ ∶ → ϕ v , u U v, ϕ u V for all v V, u U.

Lemma 3.3.9. Suppose⟨ ( ) that⟩ V,U= ⟨ are( real)⟩ inner product∈ spaces∈ and ϕ V U is a linear map. Then ∶ → 45 1. there exist a unique adjoint of ϕ.

2. the matrix Φ∗ of ϕ∗ in an orthonormal basis of V and in an orthonormal basis of U is the transpose of the matrix Φ of ϕ in that bases:

Φ∗ Φ⊺.

n = Exercise 3.3.10. Consider R with the inner product defined by a positive define symmetric matrix Q and Rm with the inner product defined by a positive define symmetric matrix S (Example 3.3.3). Prove that the adjoint of an operator Φ Rn Rm is th operator Q−1Φ⊺S Rm Rn. Remark 3.3.11. Let V,U be a real inner product spaces and ϕ V U be a linear∶ map.→ The following commutative∶ diagram→ explains the between the adjoint and the transpose of ϕ: ∶ →

≅ U / U ∗

ϕ∗ ϕt

 ≅  V / V ∗

So that, under the identifications V V ∗ and U U ∗,

adjoint of ≅ϕ = transpose≅ of ϕ

Exercise 3.3.12. Suppose that V,U are real inner product spaces. Prove that

1. 0∗ 0;

∗ 2. IV= IV ; ∗ ∗ 3. (λϕ) = λϕ for every λ R and every operator ϕ V U; ∗ ∗ ∗ 4. (ϕ )ψ= ϕ ψ for operators∈ ϕ, ψ V U. ∶ →

Definition( + ) 3.3.13.= +Let V be a real inner product∶ → space. An operator ϕ V V is called ∶ → - normal whenever ϕϕ∗ ϕ∗ϕ.

∗ ∗ - orthogonal whenever ϕϕ= ϕ ϕ IV or, equivalently,

ϕ v =, ϕ u = v, u for all v, u V.

- symmetric (also called self-adjoint) whenever ϕ∗ = ϕ.

∗ - skew-symmetric whenever ϕ ϕ. =

Exercise 3.3.14. Prove that orthogonal, symmetric, and skew-symmetric operators are normal.

Exercise 3.3.15. Suppose that V is a real inner product space, ϕ: V → V is an operator, and Φ is the matrix of ϕ in an orthonormal basis. Check that

- ϕ is normal whenever ΦΦ⊺ = Φ⊺Φ;

- ϕ is orthogonal whenever ΦΦ⊺ = Φ⊺Φ = I;

- ϕ is symmetric whenever Φ⊺ = Φ;

- ϕ is skew-symmetric whenever Φ⊺ = −Φ.
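These matrix criteria are easy to test numerically; a small numpy sketch (with illustrative matrices) checks that orthogonal, symmetric, and skew-symmetric matrices are all normal:

```python
import numpy as np

a = np.pi / 5
R = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])           # a rotation: orthogonal
assert np.allclose(R @ R.T, np.eye(2))            # R R^T = I
assert np.allclose(R @ R.T, R.T @ R)              # hence normal

Sym = np.array([[2., 1.], [1., 3.]])
assert np.allclose(Sym.T, Sym)                    # symmetric
assert np.allclose(Sym @ Sym.T, Sym.T @ Sym)      # symmetric => normal

Skew = np.array([[0., -5.], [5., 0.]])
assert np.allclose(Skew.T, -Skew)                 # skew-symmetric
assert np.allclose(Skew @ Skew.T, Skew.T @ Skew)  # skew-symmetric => normal
```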

Exercise 3.3.16. Suppose that V is a real inner product space, ϕ: V → V is an operator, and βϕ is the corresponding bilinear form. Check that

1. ϕ is symmetric whenever βϕ is symmetric;

2. ϕ is skew-symmetric whenever βϕ is skew-symmetric.

Theorem 3.3.17. Suppose that V is a real inner product space and ϕ: V → V is a normal operator. Then there exists an orthonormal basis of V such that the matrix of ϕ in that basis is a matrix of the form

diag(R1, . . . , Rn, λ1, . . . , λm),
where λj ∈ R,
Ri = ri [[cos αi, −sin αi], [sin αi, cos αi]], ri > 0, and αi ≠ πk with k ∈ Z.

Proof. We argue by induction on dim(V). The case dim(V) = 1 is evident, and the case dim(V) = 2 is Problem 3.8.28. Suppose that dim(V) ⩾ 3. Consider the complexification ϕC: VC → VC and one of its eigenvalues λ. By the statement of Problem 3.8.5, there is a ϕ-invariant subspace U ⊂ V of dimension 1 or 2. By the statement of Problem 3.8.26, the subspace U⊥ is ϕ-invariant. Note that V = U ⊕ U⊥. Now we use Lemma 3.2.1, induction, and the statement of Problem 3.8.27.

Corollary 3.3.18. Suppose that V is a real inner product space and ϕ: V → V is an operator. Then

1. If ϕ is orthogonal, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a matrix of the form
diag(R1, . . . , Rn, λ1, . . . , λm),
where λj = ±1 and
Ri = [[cos αi, −sin αi], [sin αi, cos αi]].

2. If ϕ is symmetric, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a diagonal matrix.

3. If ϕ is skew-symmetric, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a matrix of the form
diag(Ω1, . . . , Ωn, 0, . . . , 0),
where
Ωi = [[0, −ωi], [ωi, 0]], ωi > 0.

Corollary 3.3.19. Suppose that V is a real inner product space and β: V × V → R is a symmetric bilinear form. Then there exists an orthonormal basis of V such that the matrix of β in that basis is diagonal.
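Part 2 of Corollary 3.3.18 is exactly what numerical libraries implement as the symmetric eigendecomposition; a numpy sketch with an illustrative symmetric matrix:

```python
import numpy as np

# A real symmetric matrix (illustrative data).
S = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
w, P = np.linalg.eigh(S)                     # eigenvalues, orthonormal eigenvectors
assert np.allclose(P.T @ P, np.eye(3))       # columns: an orthonormal basis
assert np.allclose(P.T @ S @ P, np.diag(w))  # S is diagonal in that basis
```

The same orthonormal basis diagonalizes the symmetric bilinear form of Corollary 3.3.19.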

3.4 Operators in Hermitian Spaces

In this section we consider complex vector spaces. Let V be a complex vector space. A function s: V → C is called conjugate linear (also called antilinear) whenever
s(λ1v1 + . . . + λkvk) = λ̄1s(v1) + . . . + λ̄ks(vk) for all λi ∈ C, vi ∈ V.

Exercise 3.4.1. Prove that conjugate linear functions on Cn are exactly the functions of the form
Cn → C, (x1, . . . , xn)⊺ ↦ λ1x̄1 + . . . + λnx̄n,
where λi ∈ C.

The set of conjugate linear functions on a complex vector space V is a complex vector space under the following compositions:

1. the sum of conjugate linear functions s1, s2 is the function

s1 s2 V C, s1 s2 v s1 v s2 v ;

2. the product of a scalar+ ∶λ →C and a( conjugate+ )( ) linear= ( ) function+ ( ) s is the function

λs∈ V C, λs v λs v .

The space of conjugate linear functions on V is called the conjugate dual of V; it is denoted by V̄∗. Let V, U be complex vector spaces and ϕ: V → U be a linear map. It can be easily checked that the map
ϕt: Ū∗ → V̄∗, s ↦ s ∘ ϕ
is linear; it is called the conjugate transpose of ϕ.

Let V be a complex vector space. A map
β(·, ·): V × V → C
is called sesquilinear iff

1. β(λv + λ′v′, u) = λβ(v, u) + λ′β(v′, u) (linearity in the first argument);

2. β(v, λu + λ′u′) = λ̄β(v, u) + λ̄′β(v, u′) (conjugate linearity in the second argument).

A sesquilinear form β(·, ·) on V × V is called conjugate symmetric iff β(v, u) is the complex conjugate of β(u, v).

Exercise 3.4.2. Prove that a sesquilinear form β(·, ·) on V × V is conjugate symmetric iff β(v, v) ∈ R for all v ∈ V.

An Hermitian inner product on V × V is a conjugate symmetric sesquilinear map
⟨·, ·⟩: V × V → C
such that
⟨v, v⟩ ⩾ 0 for all v ∈ V, and ⟨v, v⟩ = 0 only for v = 0 (positive-definiteness).

A complex vector space with a fixed Hermitian inner product is called an Hermitian space (also called a unitary space). A basis (ei) of an Hermitian space is called orthonormal whenever ⟨ei, ej⟩ = δij.

⟨ 49⟩ = Example 3.4.3. The n-dimensional coordinate vector space Cn is an Hermitian inner product space with respect to the standard Hermitian inner product

⟨x, y⟩ = x⊺ȳ = x1ȳ1 + . . . + xnȳn, where x = (x1, . . . , xn)⊺, y = (y1, . . . , yn)⊺.
The standard basis of Cn is orthonormal. It is easy to note that the restriction of the standard Hermitian inner product ⟨·, ·⟩Cn to Rn × Rn is the standard inner product on Rn.

ˆ the conjugate transpose of a complex matrix H is defined to be H̄⊺;

ˆ a complex matrix H is called Hermitian iff H̄⊺ = H;

ˆ an Hermitian n × n matrix H is called positive definite iff the determinants of its upper left k × k submatrices are positive.

Now, a complex positive definite Hermitian n × n matrix H defines the Hermitian inner product
⟨·, ·⟩: Cn × Cn → C, (x, y) ↦ x⊺Hȳ.

Example 3.4.5 (Frobenius inner product). For V = Matn×m(C) we have the Frobenius inner product ⟨·, ·⟩F, where
⟨Φ, Ψ⟩F = tr(ΦΨ̄⊺).
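The Frobenius inner product can be computed either as a trace or entrywise; a quick numpy check with random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
Psi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
ip_trace = np.trace(Phi @ Psi.conj().T)     # tr(Phi conj(Psi)^T)
ip_entries = np.sum(Phi * Psi.conj())       # sum of Phi_ij conj(Psi_ij)
assert np.isclose(ip_trace, ip_entries)

# <Phi, Phi>_F is the squared Frobenius norm, a nonnegative real number.
assert np.isclose(np.trace(Phi @ Phi.conj().T).real,
                  np.linalg.norm(Phi, 'fro') ** 2)
```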

Lemma 3.4.6. Suppose that V ⟨is an⟩ Hermitian= ( ) space and U is a subspace of V . Then the map U U C, v, u v, u is an Hermitian inner product on U U. It is called the restriction of the Hermitian × → ( ) ↦ ⟨ ⟩ inner product , to the subspace U. × Lemma 3.4.7.⟨⋅ Suppose⋅⟩ that V is an Hermitian space and U is a subspace of V . Then

1. The subset U ⊥ v V v, u 0 for all u U is a subspace of V . It is called the orthogonal supplement of U. ∶= { ∈ S⟨ ⟩ = ∈ } 2. V U U ⊥.

= ⊕

50 Lemma 3.4.8. Suppose that V is an Hermitian space. Then the maps ∗ V V , v v, , where v, u v, u , (3.4.1)

Hom→ V,V ↦Sesquilinear⟨ ⋅⟩ forms⟨ ⋅⟩( on) =V⟨ V⟩ , (3.4.2) ϕ βϕ, where βϕ v, u ϕ v , u ( ) → { × } are isomorphisms between vector spaces. ↦ ( ) = ⟨ ( ) ⟩

Using isomorphisms (3.4.1) and (3.4.2), we identify an Hermitian space V with its conjugate dual V̄∗, and operators on an Hermitian space V with sesquilinear forms on V × V.

Let V, U be Hermitian spaces and ϕ: V → U be a linear map. A linear map ϕ∗: U → V is called the Hermitian adjoint of ϕ whenever
⟨ϕ(v), u⟩U = ⟨v, ϕ∗(u)⟩V for all v ∈ V, u ∈ U.

Lemma 3.4.9. Suppose that V, U are Hermitian spaces and ϕ: V → U is a linear map. Then

1. there exists a unique Hermitian adjoint of ϕ;

2. the matrix Φ∗ of ϕ∗ in an orthonormal basis of V and in an orthonormal basis of U is the conjugate transpose of the matrix Φ of ϕ in those bases:

Φ∗ = Φ̄⊺.

Exercise 3.4.10. Consider Cn with the Hermitian inner product defined by a positive definite Hermitian matrix H and Cm with the Hermitian inner product defined by a positive definite Hermitian matrix S (Example 3.4.4). Prove that the adjoint of an operator Φ: Cn → Cm is the operator H−1Φ̄⊺S: Cm → Cn.

Remark 3.4.11. Let V, U be Hermitian spaces and ϕ: V → U be a linear map. The following commutative diagram explains the connection between the Hermitian adjoint and the conjugate transpose of ϕ:

≅ ∗ U / U

ϕ∗ ϕt   ≅ ∗ V / V

Thus, under the identification V ≅ V̄∗,

Hermitian adjoint of ϕ = conjugate transpose of ϕ.

Exercise 3.4.12. Suppose that V, U are Hermitian spaces. Prove that

1. 0∗ 0;

2. IV∗ = IV;

3. (λϕ)∗ = λ̄ϕ∗ for every λ ∈ C and every linear map ϕ: V → U;

4. (ϕ + ψ)∗ = ϕ∗ + ψ∗ for all linear maps ϕ, ψ: V → U.

Definition( + ) 3.4.13.= +Let V be an Hermitian space and∶ →ϕ V V be an operator. ∗ ∗ 1. ϕ is called normal whenever ϕϕ ϕ ϕ. ∶ → ∗ ∗ 2. ϕ is called unitary whenever ϕϕ = ϕ ϕ IV or, equivalently,

ϕ v , ϕ u = v, u = for all v, u V.

3. ϕ is called Hermitian⟨ or( self-adjoint) ( )⟩ = ⟨ whenever⟩ ϕ∗ ϕ∈.

∗ 4. ϕ is called skew-Hermitian whenever ϕ ϕ. =

Exercise 3.4.14. Prove that unitary, Hermitian,= − and skew-Hermitian operators are normal. Exercise 3.4.15. Suppose that V is an Hermitian space, ϕ V V is an operator, Φ is the matrix of ϕ in an orthonormal basis. Check that ⊺ ⊺ ∶ → - ϕ is normal whenever ΦΦ Φ Φ; ⊺ ⊺ - ϕ is unitary whenever ΦΦ = Φ Φ I; ⊺ - ϕ is Hermitian whenever Φ = Φ; = ⊺ - ϕ is skew-Hermitian whenever= Φ Φ.

Exercise 3.4.16. Suppose that V is an Hermitian= − space, ϕ V V is an operator, and βϕ is the corresponding sesquilinear form. Check that ∶ → ∗ 1. ϕ is Hermitian whenever βϕ is conjugate symmetric;

∗ 2. ϕ is skew-Hermitian whenever βϕ v, u βϕ u, v for all v, u V , that is, βϕ is conjugate skew-symmetric. ( ) = − ( ) ∈ Theorem 3.4.17. Suppose that V is an Hermitian space and ϕ V V is a normal operator. Then there exists an orthonormal basis of V such that the matrix of ϕ in that basis is a diagonal matrix. ∶ →

52 Proof. We prove by induction. Case dim V 1 is evident. Suppose that dim V 2. Consider an eigenvalue λ of ϕ, a corresponding eigenvector v(λ of) =ϕ, and the ϕ-invariant subspave U Span vλ . By statement of Problem 3.8.26, the subspace( ) ⩾ U ⊥ is ϕ-invariant. Note that V U U ⊥. Now we use Lemma 3.2.1, induction, and statement of Problem 3.8.27.= ( ) = ⊕ Corollary 3.4.18. Suppose that V is an Hermitian space and ϕ V V is an operator. Then ∶ → 1. If ϕ is unitary, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a diagonal matrix with complex numbers on diagonal.

2. If ϕ is Hermitian, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a diagonal matrix with real numbers on diagonal.

3. If ϕ is skew-Hermitian, then there is an orthonormal basis of V such that the matrix of ϕ in that basis is a diagonal matrix with purely imaginary numbers on diagonal.
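Corollary 3.4.18 can be observed numerically; a numpy sketch with an illustrative Hermitian matrix H (and the skew-Hermitian matrix iH):

```python
import numpy as np

H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])          # an illustrative Hermitian matrix
assert np.allclose(H.conj().T, H)
w, U = np.linalg.eigh(H)                   # real eigenvalues, unitary U
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U.conj().T @ H @ U, np.diag(w))  # diagonal in that basis

K = 1j * H                                 # skew-Hermitian: conj(K)^T = -K
assert np.allclose(K.conj().T, -K)
mu = np.linalg.eigvals(K)
assert np.allclose(mu.real, 0)             # purely imaginary spectrum
```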

3.5 Operators Decompositions

In this section we consider the LU, QR, and SVD decompositions. From the SVD we deduce properties of the Moore–Penrose inverse. To make the layout more transparent, we consider only the simplest versions of the theorems, for real vector spaces.

LU decomposition. The LU decomposition is a corollary of the Gaussian elimination algorithm.

Theorem 3.5.1. Suppose that

A = (ai,j)1⩽i,j⩽n ∈ Matn×n(R), where
det(a1,1) ≠ 0, det [[a1,1, a1,2], [a2,1, a2,2]] ≠ 0, . . . , det(A) ≠ 0.
Then we have the decomposition A = LU with a uniquely determined lower triangular matrix L with ones on the diagonal and an invertible upper triangular matrix U.
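A minimal sketch of the elimination behind Theorem 3.5.1 (no pivoting, so it assumes the nonvanishing leading minors that the theorem requires); the 2 × 2 matrix is taken from Example 3.5.2:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle elimination without pivoting; valid when all leading
    principal minors of A are nonzero (the hypothesis of Theorem 3.5.1)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # elimination multiplier
            U[i, :] -= L[i, j] * U[j, :]  # zero out entry (i, j)
    return L, U

A = np.array([[1., 2.], [-1., 1.]])
L, U = lu_nopivot(A)
assert np.allclose(L, [[1, 0], [-1, 1]])
assert np.allclose(U, [[1, 2], [0, 3]])
assert np.allclose(L @ U, A)
```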

Example 3.5.2. Here are matrices and their LU decompositions:
(2) = (1)(2),
[[1, 2], [−1, 1]] = [[1, 0], [−1, 1]] · [[1, 2], [0, 3]],
[[1, 2, 1], [−1, −1, 0], [0, −1, 1]] = [[1, 0, 0], [−1, 1, 0], [0, −1, 1]] · [[1, 2, 1], [0, 1, 1], [0, 0, 2]].

QR decomposition.

Theorem 3.5.3 (QR decomposition). Suppose that A is an n × n invertible real matrix. Then there exist a unique n × n orthogonal matrix Q and a unique upper triangular matrix R with positive diagonal elements such that A = QR.

The QR decomposition is a corollary of the Gram–Schmidt orthonormalization algorithm.

Example 3.5.4. Here are matrices and their QR decompositions:

(2) = (1)(2),
[[1, 2], [−1, 1]] = [[√2/2, √2/2], [−√2/2, √2/2]] · [[√2, √2/2], [0, 3√2/2]],
[[1, 2, 1], [−1, −1, 0], [0, −1, 1]] = [[√2/2, √6/6, √3/3], [−√2/2, √6/6, √3/3], [0, −√6/3, √3/3]] · [[√2, 3√2/2, √2/2], [0, √6/2, −√6/6], [0, 0, 2√3/3]].

SVD (singular value decomposition). We start from the geometric version of the SVD.

Theorem 3.5.5. Suppose that V and U are real inner product spaces and ϕ: V → U is a linear map. Then there are orthonormal bases (e1, e2, . . .) of V and (f1, f2, . . .) of U such that
ϕ(ei) = λifi if 1 ⩽ i ⩽ m, and ϕ(ei) = 0 if i ⩾ m + 1,
where m ⩽ min{dim V, dim U} and λ1 ⩾ . . . ⩾ λm ⩾ 0.

Elements⩽ λ{i in this theorem} are⩾ called⩾ singular⩾ values.

54 Proof. Firstly, it is enough to find an orthonormal basis e1, e2,... of V such that

hi 0, for 1 i m,( ) ϕ ei ⎧0, for m 1 i, ⎪ ≠ ⩽ ⩽ ( ) = ⎨ ⎪ + ⩽ where 0 m n, h1, . . . , hm is an⎩⎪ orthogonal list of vectors. Secondly, consider the preimage under ϕ of the inner product on U, that is, consider⩽ the⩽ symmetric( bilinear) form

β V V R, β v1, v2 ϕ v1 , ϕ v2

And, by Corollary 3.3.19,∶ × we take→ an orthonormal( ) = ⟨ basis( ) e(1, e)⟩2,... of V such that

β ei, ej 0 for all i j. ( )

( ) = ≠ The standard matrix version of SVD is the following:

Corollary 3.5.6. Suppose that Φ is an n × k real matrix. Then there exist an orthogonal n × n matrix P, an orthogonal k × k matrix R, and an n × k matrix Σ with Σi,i = λi for 1 ⩽ i ⩽ m and all other entries zero, where m ⩽ min{n, k} and λ1 ⩾ . . . ⩾ λm ⩾ 0, such that Φ = PΣR.
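Corollary 3.5.6 is what np.linalg.svd computes. A sketch using a 2 × 3 matrix consistent with Example 3.5.7 below (the exact sign placement of its entries is an assumption):

```python
import numpy as np

Phi = np.array([[2., -1., 1.],
                [1.,  2., 2.]])
P, sigma, R = np.linalg.svd(Phi)        # Phi = P Sigma R
assert np.allclose(P @ P.T, np.eye(2))  # P orthogonal
assert np.allclose(R @ R.T, np.eye(3))  # R orthogonal
assert np.allclose(sigma, [np.sqrt(10), np.sqrt(5)])  # singular values

Sigma = np.zeros((2, 3))
Sigma[[0, 1], [0, 1]] = sigma           # lambda_i in the leading diagonal slots
assert np.allclose(P @ Sigma @ R, Phi)
```

The singular values √10 and √5 are the square roots of the nonzero eigenvalues of Φ⊺Φ, matching the eigenvalues 10, 5, 0 computed in the example.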

3 2 Example⩽ 3.5.7.{ Consider} ⩾ R ⩾and R⩾ with the standard= bases e1, e2, e3 and f1, f2 and the standard inner products. Consider the linear map ( ) ) 2 1 1 Φ 3 2. 1 2 2 R R − = Œ ‘ ∶ → The preimage of the inner product on R2 is the symmetric bilinear form on R3 with the matrix 5 0 4 Φ⊺Φ 0 5 3 ⎛4 3 5⎞ The eigenvalues of Φ⊺Φ are 10, 5, 0 with∶= ⎜ the corresponding⎟ eigenvectors ⎝ ⎠ 4 3 4 3 , 4 , 3 . ⎛5⎞ ⎛−0 ⎞ ⎛−5 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜− ⎟ ⎝ ⎠ ⎝ 55⎠ ⎝ ⎠ We have the corresponding orthonormal basis √ √ 2 2 3 4 2 √5 5 10√ e˜ 3 2 , e˜ 4 , e˜ 3 2 1 √10 2 5 3 √10 ⎛ 2 ⎞ ⎛−0 ⎞ ⎛− 2 ⎞ ⎜ 2 ⎟ ⎜ 2 ⎟ ( = ⎜ ⎟ = ⎜ ⎟ = ⎜− ⎟ 3 ⎝ ⎠ of R . Now we calculate: ⎝ ⎠ ⎝ ⎠ √ 5 2 ˜ ˜ √5 Φ˜e1 λ1f1, where λ1 10, f1 , 2√ 2 2 5 √ 5 = Œ √ ‘ = = = Œ√ ‘ 2 2 5 Φ˜e λ f˜ where λ 5, f˜ √5 , 2 1 2 2 2 2 5 − √ − 5 = Œ ‘ = 0 = = Œ ‘ Φ˜e . 3 0 ˜ ˜ The matrix of Φ in the bases e˜1, e˜2, e˜3 and= Œ ‘f1, f2 is

( )10 0( 0 ) Φ˜ √0 5 0 √ The matrix SVD deconposition is = Œ ‘ √ √ √ √ √ 2 2 3 2 2 2 1 1 5 2 5 10 0 0 5 10 2 √5 √5 3 4 0 . 1 2 2 2 5 5 0 5 0 √5 5√ √ 5 5 √ ⎛ 4 2 3 2 2⎞ − − ⎜ 10 10 2 ⎟ Œ ‘ = Œ ‘ ⋅ Œ √ ‘ ⋅ ⎜ − ⎟ Moore-Penrose inverse. ⎝− − ⎠ Let V , U be real inner product spaces and ϕ V U be a linear map. A linear mapÆψ V U is called Moore–Penrose inverse of ϕ iff ∶ → 1. ϕψϕ∶ →ϕ,

2. ψϕψ = ψ;

3. (ϕψ)∗ = ϕψ;

4. (ψϕ)∗ = ψϕ.

Theorem 3.5.8. Suppose that V, U are real inner product spaces and ϕ: V → U is a linear map. Then there is a unique Moore–Penrose inverse of ϕ.

This theorem is a corollary of the SVD.
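The unique Moore–Penrose inverse of Theorem 3.5.8 is implemented in numpy as np.linalg.pinv; a sketch checking the four defining identities on the matrix of Example 3.5.9:

```python
import numpy as np

Phi = np.array([[5., 0., 0., 0.],
                [0., 2., 0., 0.],
                [0., 0., 0., 0.]])
Psi = np.linalg.pinv(Phi)               # Moore-Penrose inverse, via the SVD
expected = np.array([[0.2, 0., 0.],
                     [0., 0.5, 0.],
                     [0., 0., 0.],
                     [0., 0., 0.]])
assert np.allclose(Psi, expected)

# The four Penrose identities:
assert np.allclose(Phi @ Psi @ Phi, Phi)
assert np.allclose(Psi @ Phi @ Psi, Psi)
assert np.allclose((Phi @ Psi).T, Phi @ Psi)
assert np.allclose((Psi @ Phi).T, Psi @ Phi)
```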

Example 3.5.9. Consider R4 and R3 with the standard bases (e1, e2, e3, e4) and (f1, f2, f3) and the standard inner products. Consider the linear map
Φ = [[5, 0, 0, 0], [0, 2, 0, 0], [0, 0, 0, 0]]: R4 → R3.
The Moore–Penrose inverse of Φ is
Ψ = [[1/5, 0, 0], [0, 1/2, 0], [0, 0, 0], [0, 0, 0]]: R3 → R4.

Let V be a finite-dimensional vector space defined over k, where k = R or k = C. Recall that a norm on V is a map
∥·∥: V → R, v ↦ ∥v∥,
such that for all v, u ∈ V, λ ∈ k, we have

1. ∥v∥ ⩾ 0, and ∥v∥ = 0 ⇔ v = 0;
2. ∥λv∥ = |λ| ∥v∥;
3. ∥v + u∥ ⩽ ∥v∥ + ∥u∥.

Examples 3.6.1. 1. Manhattan norm ∥·∥1: kn → R, where ∥x∥1 = ∑_{1⩽i⩽n} |xi|.
2. Euclidean norm ∥·∥2: kn → R, where ∥x∥2 = (∑_{1⩽i⩽n} |xi|²)^{1/2}.
3. Maximum norm ∥·∥∞: kn → R, where ∥x∥∞ = max_{1⩽i⩽n} |xi|.
4. (a generalization of the previous examples) ∥·∥p: kn → R, where p ⩾ 1 and ∥x∥p = (∑_{1⩽i⩽n} |xi|^p)^{1/p}.

Examples 3.6.2. Consider a nonempty segment [a, b] ⊂ R. We have the following norms:
1. ∥·∥1: R[t]⩽n → R, where

∥f∥1 = ∫_a^b |f(t)| dt.
2. ∥·∥2: R[t]⩽n → R, where ∥f∥2 = (∫_a^b |f(t)|² dt)^{1/2}.
3. Uniform norm ∥·∥∞: R[t]⩽n → R, where ∥f∥∞ = max_{t∈[a,b]} |f(t)|.
4. (a generalization of the previous examples) ∥·∥p: R[t]⩽n → R, where p ⩾ 1 and ∥f∥p = (∫_a^b |f(t)|^p dt)^{1/p}.

Examples 3.6.3. 1. Operator norm ∥·∥: Hom(U, W) → R, where U, W are normed spaces and
∥ϕ∥ = sup_{0≠u∈U} ∥ϕ(u)∥W / ∥u∥U.
2. (Frobenius norm) ∥·∥F: Matn×m(k) → R, where
∥Φ∥F = (tr(ΦΦ⊺))^{1/2} if k = R, and ∥Φ∥F = (tr(ΦΦ̄⊺))^{1/2} if k = C.
Note that ∥Φ∥F² = ∑_{i,j} |Φi,j|².

Exercise 3.6.4. Find the maximum of the function
f: Matn×n(C) → R, f(Φ) = |det(Φ)|
on the ball B = {Φ ∈ Matn×n(C) | ∥Φ∥F ⩽ 1}.

Lemma 3.6.5. Suppose that ⟨·, ·⟩ is an inner product on V. Then
∥·∥: V → R, ∥v∥ := ⟨v, v⟩^{1/2}
is a norm.

What kind of norms can be obtained from inner products? The answer is in the lemma below.

Lemma 3.6.6. Suppose that ∥·∥ is a norm on V. Then the following conditions are equivalent:
1. ∥·∥ is defined by some inner product on V;

2. ∥·∥ satisfies the parallelogram identity:
∥v + u∥² + ∥v − u∥² = 2∥v∥² + 2∥u∥² for all v, u ∈ V.
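The parallelogram criterion separates the norms of Examples 3.6.1; a numpy sketch (with illustrative vectors) showing that ∥·∥2 satisfies the identity while ∥·∥∞ does not:

```python
import numpy as np

x = np.array([3., -4., 12.])
n1 = np.linalg.norm(x, 1)          # Manhattan norm: 19
n2 = np.linalg.norm(x, 2)          # Euclidean norm: 13
ninf = np.linalg.norm(x, np.inf)   # maximum norm: 12
assert np.isclose(n1, 19) and np.isclose(n2, 13) and np.isclose(ninf, 12)
assert ninf <= n2 <= n1            # the three norms bound one another

# The Euclidean norm satisfies the parallelogram identity ...
u = np.array([5., 1., 1.])
lhs = np.linalg.norm(x + u) ** 2 + np.linalg.norm(x - u) ** 2
assert np.isclose(lhs, 2 * np.linalg.norm(x) ** 2 + 2 * np.linalg.norm(u) ** 2)
# ... while the maximum norm fails it for this pair of vectors,
# so the maximum norm comes from no inner product.
lhs_inf = np.linalg.norm(x + u, np.inf) ** 2 + np.linalg.norm(x - u, np.inf) ** 2
rhs_inf = 2 * np.linalg.norm(x, np.inf) ** 2 + 2 * np.linalg.norm(u, np.inf) ** 2
assert not np.isclose(lhs_inf, rhs_inf)
```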

Theorem 3.6.7. Every two norms ∥·∥1 and ∥·∥2 on a finite-dimensional vector space V are equivalent; that is, there are real numbers c1, c2 > 0 such that
∥v∥2 ⩽ c1∥v∥1 and ∥v∥1 ⩽ c2∥v∥2 for all v ∈ V.

3.7 *Functions of Operators

In this section we describe some types of functions of operators.

Invariants of Operators. A function f: Matn×n(k) → k is called an invariant iff
f(A−1ΦA) = f(Φ) for all Φ ∈ Matn×n(k), A ∈ GLn(k).

Example 3.7.1. The functions
Matn×n(k) → k, Φ ↦ tr(Φm), m = 1, 2, . . .
are polynomial invariants.

Example 3.7.2. Consider the decomposition

pΦ(t) = det(tI − Φ) = t^n − p1(Φ)t^{n−1} + p2(Φ)t^{n−2} − . . . + (−1)^n pn(Φ),
where
pm: Matn×n(k) → k, m = 1, . . . , n
are polynomial functions. Then the functions p1, . . . , pn are polynomial invariants.

Theorem 3.7.3. Suppose that f: Matn×n(k) → k is a polynomial invariant. Then there is a polynomial s(t1, . . . , tn) in variables t1, . . . , tn such that
f(Φ) = s(p1(Φ), . . . , pn(Φ)).
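The invariance of p1, . . . , pn (and of tr(Φm)) under conjugation can be checked numerically; a numpy sketch with random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = rng.normal(size=(3, 3))
A = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # generically invertible
Conj = np.linalg.inv(A) @ Phi @ A             # a conjugate of Phi

# Characteristic polynomial coefficients are polynomial invariants:
assert np.allclose(np.poly(Phi), np.poly(Conj))
# So are the power traces tr(Phi^m):
for m in (1, 2, 3):
    assert np.isclose(np.trace(np.linalg.matrix_power(Phi, m)),
                      np.trace(np.linalg.matrix_power(Conj, m)))
```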

Remark 3.7.4. The set of polynomial( ) = ( invariant( ) functions( )) of the form f Matn×n k k is a with natural and multiplication. Theorem 3.7.3 claims that p1, . . . , pn are generators of this ring. ∶ ( ) → 59 Let V be a vector space over k. An invariant function f Matn×n k k, where n dim V , well-define the function (we denote it by the same symbol) ∶ ( ) → = f ( Hom) V,V k, ϕ f Φ , where Φ is the matrix of ϕ in a basis ei of V. ∶ ( ) → For an operator ϕ ↦Hom( )V,V , the value f ϕ , where f Matn×(n k) k is an invariant function, is called an invariant of the operator ϕ. Roughly speaking, The- orem 3.7.3 claims that∈ p1 (ϕ , . .) . , pn ϕ is a complete( ) list of∶ polynomial( ) → invariants of an operator ϕ V V . ( ( ) ( )) Covariants of Operators. ∶ → A map F Matn×n k Matn×n k is called a covariant iff F A−1ΦA A−1F Φ A for all Φ Mat k , A GL k . ∶ ( ) → ( ) n×n n Theorem 3.7.5. Suppose( that F ) Mat= n×n(k ) k is a polynomial∈ ( covariant.) ∈ Then( ) n−1 F Φ f0 Φ I∶ f1 Φ Φ( ) ...→ fn−1 Φ Φ , where fi Matn×n k ( k),=i (0, 1), .+ . . , n( )1 are+ polynomial+ ( ) invariants.

Let V∶ be a vector( ) → space= over k. A covariant− F Matn×n k Matn×n k , where n dim V , well-defines the map (we denote it by the same symbol) ∶ ( ) → ( ) = (F )Hom V,V Hom V,V , Φ F Φ , where Φ and F Φ are the matrices ∶ ( ) → ( ) of ϕ and F Φ resp. in a basis ei of V. ↦ ( ) ( ) The map F is called covariant. ( ) ( ) Analytical Functions of Operators. Let U be an open subset of C. Consider – U – the ring of analytical functions on U, and

–O Mat( n)×n C U – the set of n n complex matrices Φ with Spec Φ U. We have the( map) × ( ) ⊂

U Matn×n C U Matn×n C , f z , Φ f Φ , (3.7.1) where f Φ (substitutionO( ) × of( Φ) into→ f z ) is( defined) ( ( in) the) ↦ following( ) way: find a polynomial r z of degree n 1 such that f z r z is divisible by pΦ z in U and put ( ) ( ) ( ) ⩽ − f Φ r (Φ).− ( ) ( ) O( )
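When the eigenvalues of Φ are distinct, the polynomial r(z) is just the Lagrange interpolant of f on the spectrum, and the substitution recipe can be carried out numerically; a sketch for f(z) = z−1 and an illustrative matrix:

```python
import numpy as np

Phi = np.array([[2., 1.],
                [0., 3.]])              # illustrative; Spec(Phi) = {2, 3}
spec = np.linalg.eigvals(Phi)
# r interpolates f(z) = 1/z on the spectrum (coefficients: highest degree first).
coeffs = np.polyfit(spec, 1.0 / spec, len(spec) - 1)

def poly_at_matrix(c, M):
    """Evaluate a polynomial (coefficients c, highest degree first) at M."""
    R = np.zeros_like(M)
    for ck in c:
        R = R @ M + ck * np.eye(M.shape[0])  # Horner's scheme with matrices
    return R

# Substituting Phi into r recovers f(Phi) = Phi^{-1}.
assert np.allclose(poly_at_matrix(coeffs, Phi), np.linalg.inv(Phi))
```

With repeated eigenvalues one needs the Hermite interpolant matching derivatives as well, which is why r is only required to make f − r divisible by pΦ.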

Theorem 3.7.6. The map (3.7.1) satisfies( ) = ( the) following properties. 60 ˆ If f z U , Φ Matn×n C U , and A GLn C , then

( ) ∈ O( ) ∈ f( A)−1ΦA A∈−1f Φ( A.)

ˆ ( ) = ( ) If f z , g z U , λ, µ C, and Φ Matn×n C U , then

( ) ( ) ∈ O( ) λf∈ µg Φ ∈ λf Φ ( µg) Φ

and a similar statement( holds+ for)( convergent) = ( ) series+ ( of) functions. ˆ If f z , g z U and Φ Matn×n C U , then

( ) ( ) ∈ O( ) ∈ fg Φ ( )f Φ g Φ

and a similar statement holds( for)( convergent) = ( ) ( infinite) products of functions. Example 3.7.7. Consider

U = C ∖ {0}, Φ = [[3, 4, 0], [2, 1, 0], [1, −1, 4]].
Then Spec(Φ) = {4, 5, −1} ⊂ U. Let us take f(z) = z−1 and calculate f(Φ) = Φ−1. To find r(z), we have the conditions
r(4) = f(4) = 1/4, r(5) = f(5) = 1/5, r(−1) = f(−1) = −1.
From this we find r(z) = −(1/20)(z² − 8z + 11), and thus
f(Φ) = r(Φ) = (1/20)(−Φ² + 8Φ − 11I) = (1/20) [[−4, 16, 0], [8, −12, 0], [3, −7, 5]].
From Theorem 3.7.6 it follows that the matrix f(Φ) is the “usual” inverse of Φ.

Exercise 3.7.8. Calculate Γ([[12, 4], [−25, −8]]), where Γ(z) is the gamma function.

3.8 Problems

Problem 3.8.1. Suppose that e1, . . . , en is a basis of a complex vector space U.

1. Prove that e1, 1e1, . . .( , en, 1en) is a basis of the realification U R. √ √ 2. Find the matrix( of− the corresponding− ) complex structure (example 3.1.2) in that basis.

61 Problem 3.8.2. Suppose that V is a real vector space. Prove that every vector of its complexification V C can be uniquelly represented as v 1 u 1, where v, u V . √ Problem 3.8.3. Prove lemma 3.1.3. ⊗ + ⊗ − ∈ Problem 3.8.4. Prove lemma 3.1.4.

Problem 3.8.5. Suppose that V is a real vector space, ϕ V V is an operator, ϕC V C V C is its complexification, and λ a b 1 C is an eigenvalue of ϕC. Prove that √ ∶ → ∶ → = + − ∈ 1. If λ R, then λ is an eigenvalue of ϕ;

2. If λ ∈ R and v 1 u 1 V C is an eigenvector corresponding to λ, then √ (a) ∈~v and u are⊗ linearly+ ⊗ independent;− ∈ (b) ϕ v av bu, ϕ u bv au.

In particular, Span v, u is( a)2=-dimensional− (ϕ-invariant) = + subspace of V .

Problem 3.8.6. Prove( that) the characteristic polynomial of a linear operator is well-defined. That is, if ϕ V V is an operator, Φ and Φ˜ are matrices of ϕ in some bases of V , then ∶ det→tI Φ det tI Φ˜ .

n n−1 Problem 3.8.7. Suppose that Φ( Mat− n)×=n k ,(pΦ−t ) t pn−1t ... p0. Prove that ∈ ( ) ( ) = + + + 1. pn−1 tr Φ ;

1 2 2 2. pn−2 = −2 (tr )Φ tr Φ ; n 3. p0 = 1(( det( ))Φ .− ( ))

Problem= ( 3.8.8.− ) Find( ) invariant subspaces of the operator 1 0 0 0 0 2 0 0 4 4 R R . ⎛0 0 3 0⎞ ⎜0 0 0 4⎟ ⎜ ⎟ ∶ → ⎜ ⎟ Problem 3.8.9. Find invariant⎝ subspaces⎠ of the operator

1 1 0 0 0 1 0 0 4 4 R R . ⎛0 0 2 1⎞ ⎜0 0 0 2⎟ ⎜ ⎟ ∶ → ⎜ ⎟ ⎝ 62⎠ Problem 3.8.10. Suppose that ϕ V V is an operator and

Spec∶ ϕ → λ1, . . . , λn , where n dim V , λi λj for i (j.) Suppose= { that }ei is an eigenvector of ϕ corre- sponding to λi, 1 i n. Prove that = ( ) ≠ ≠ 1. ei is a Jordan⩽ ⩽ basis for ϕ;

2.( the) Jordan matrix of ϕ is a diagonal matrix with the diagonal entries λ1, . . . , λn. Problem 3.8.11. Consider the operator

2 2 Φ 2 2. 1 3 R R

1. Find a Jordan basis and the= JordanŒ ‘ ∶ matrix→ of Φ.

2. Calculate Φn, where n N.

Problem 3.8.12. Find a Jordan∈ basis and the Jordan form of the operator 2 1 Φ 2 2. 1 2 C C = Œ ‘ ∶ → Problem 3.8.13. Find a Jordan basis− and the Jordan form of the operator 3 4 0 3 3 Φ 2 1 0 R R . ⎛ 1 1 4⎞ = ⎜ ⎟ ∶ → Problem 3.8.14. Find a Jordan⎝ basis− and⎠ the Jordan form of the operator 0 4 0 1 0 0 0 1 9 2 5 5 Φ ⎛0 0 0− 0 5 ⎞ R R . ⎜0 0 0 0 4⎟ ⎜ ⎟ = ⎜ ⎟ ∶ → ⎜0 0 0 0 0 ⎟ ⎜ − ⎟ The minimal polynomial of⎝ an operator ϕ⎠is a nonzero polynomial f t k t of minimal degree such that f ϕ 0. For example, the minimal polynomial of the identity operator is t 1 and the minimal polynomial of a nontrivial projection( ) ∈ [ is] t2 t. ( ) = − Problem− 3.8.15. Suppose that V is a finite-dimensional vector space and ϕ V V is an operator. Prove the following statements. ∶ → 1. There exists a unique (up to a scalar factor) minimal polynomial of ϕ.

63 2. If f t is a polynomial such that f ϕ 0, then the minimal polynomial di- vides f t ; in particular, the minimal polynomial divides the characteristic polynomial( ) of ϕ. ( ) = ( ) Problem 3.8.16. Prove that the minimal polynomial of n n diagonal matrix with different diagonal elements is equal to its characteristic polynomial. × Problem 3.8.17. Consider a complex vector space V with a basis e1, . . . , en and the operator

ei+1, if i n, ( ) ϕ V V, ei ⎧e1, if i n. ⎪ ≠ ∶ → ↦ ⎨ 1. Find eigenvalues and corresponding eigenvectors⎪ = of ϕ. ⎩⎪ 2. Prove that the minimal polynomial of ϕ and its characteristic polynomial are both equal to tn 1.

Problem 3.8.18. Consider− an n n Jordan block J with λ on the diagonal.

1. Find eigenvalues and corresponding× eigenvectors of J. 2. Prove that the minimal polynomial of J and its characteristic polynomial are both equal to t λ n.

n Problem 3.8.19. (Consider− ) a Jordan matrix Φ with pΦ t t λi i , where 1⩽i⩽m λ λ for i j Prove that i j ( ) = ∏ ( − ) ≠1. ni is the≠ sum of sizes of Jordan blocks with λi on the diagonal. l 2. the minimal polynomial of Φ is t λi i , where li is the size of a maximal 1⩽i⩽m Jordan block with λ on the diagonal. i ∏ ( − ) An operator is called diagonalizable whenever its matrix in some basis is diagonal.

Problem 3.8.20. Suppose that for an operator ϕ there is a Jordan basis. Prove that

1. ϕ is diaginalizable whenever its minimal polynomial has no multiple roots;

2. if ϕm I, then ϕ is diaginalizable.

n Problem 3.8.21.= Suppose that Φ Matn×n C is such that the Φ is bounded. Prove that Spec Φ z z 1 . ∈ ( ) ( ) A matrix P Matn×n R( )is⊂ a{ calledSS S ⩽right} stochastic, iff

(a) Pi,j 0 for∈ all 1 i,( j ) n,

⩾ ⩽ ⩽ 64 (b) Pi,j 1 for all 1 i n. 1⩽j⩽n ∑ = ⩽ ⩽ Problem 3.8.22. Suppose that P Matn×n R is a right stochastic matrix. Con- sider P as a complex matrix. Prove that ∈ ( ) 1. 1 Spec P z z 1 ;

2. for∈ the( Jordan) ⊂ { normalSS S ⩽ } form of P , every Jordan block corresponding to λ Spec P , where λ 1, is a block of size 1 1 . ∈ Problem( 3.8.23.) ProveS S = lemma 3.3.7. × Problem 3.8.24. Suppose that V is a finite-dimensional real vector space with an inner product , V . Prove that

1. ⟨⋅ ⋅⟩ C C , V C V V C, v λ, u µ V C λµ v, u V is an Hermitian inner product on V C V C. ⟨⋅ ⋅⟩ ∶ × → ⟨ ⊗ ⊗ ⟩ = ⟨ ⟩ ∗ ∗ 2. For an operator ϕ V V we have ϕ×C ϕ C.

Problem 3.8.25. Suppose∶ that→ V is an inner( ) product= ( ) space. Prove that a projection ϕ V V is a normal operator whenever ϕ is an orthogonal projection.

Problem∶ → 3.8.26. Suppose that V is an inner product space, ϕ V V is a normal operator, and U is a ϕ-invariant subspace. Prove that U ⊥ is ϕ-invariant subspace. ∶ → Problem 3.8.27. Suppose that V is an inner product space, ϕ V V is a normal operator, and U is a ϕ-invariant subspace. Prove that ϕU is a normal operator. ∶ → Problem 3.8.28. Suppose that V is a 2-dimensional real inner product space and ϕ V V is a normal operator. Prove that

∶1.→ if ϕ have no real eigenvalue, then there is an orthonormal basis of V such that the matrix of ϕ in this basis is

cos α sin α r , where r 0 and α πk with k ; sin α cos α Z Œ ‘ > ≠ ∈ 2. if ϕ have a real− eigenvalue, then there is an orthonormal basis of V such that the matrix of ϕ in this basis is

λ1 0 , where λi R. 0 λ2 Œ ‘ ∈ Problem 3.8.29. Prove corollary 3.3.19.

65 Problem 3.8.30. Prove LU decomposition theorem (theorem 3.5.1). Problem 3.8.31. Prove QR decomposition theorem (theorem 3.5.3). Problem 3.8.32. Prove SVD decomposition theorem (theorem 3.5.5). Problem 3.8.33. Prove theorem 3.5.8. Problem 3.8.34. Prove lemma 3.6.5. Problem 3.8.35. Suppose that V is a real or complex vector space and is a norm on V . Prove that the first condition in lemma 3.6.6 implies the second condition. Y⋅Y Problem 3.8.36. Prove that in examples 3.6.1, norms 2 and 5 can be obtained from inner products, and norms 1, 2, and 4 cannot be obtained from inner products. Problem 3.8.37. Suppose that V and U are finite-dimensional vector spaces and ϕ V V and ψ U U are operators. Prove that 1. rk ϕ ψ rk ϕ rk ψ . ∶ → ∶ → 2. If( the⊗ characteristic) = ( ) ( polynomials) of ϕ and ψ are completely decomposable, than the spectrum of ϕ ψ is µiλj , where µi and λi are the spectrums of ϕ and ψ respectively. ⊗ { } { } { } 3. tr ϕ ψ tr ϕ tr ψ .

dim(U) dim(V ) 4. det( ϕ⊗ ψ) = det( )ϕ( ) det ψ . Problem 3.8.38. Suppose that ϕ V V and ψ U U are operators with the ( ⊗ ) = ( ) ( ) Jordan forms ∶ → 1 1∶ 0 → 1 1 and 0 1 1 0 1 ⎛0 0 1⎞ respectively. Find the JordanŒ form‘ of the operator⎜ ϕ ψ⎟. ⎝ ⎠ Problem 3.8.39. Suppose that ϕ V V is a operator with the Jordan form ⊗ 2 1 0 ∶ → 0 2 0 . ⎛0 0 3⎞ 2⎜ ⎟ 2 Find the Jordan form of operators S⎝ ϕ and⎠Λ ϕ . Problem 3.8.40. Suppose that V is an n-dimensional vector space, ϕ V V is ( ) ( ) an operator with a completely decomposable characteristic polynomial, and µi is the spectrum of ϕ. Prove that ∶ → k { } 1. The spectrum of S ϕ is µi1 . . . µik 1⩽i1,...,ik⩽n, 2. The spectrum of Λk ϕ is µ . . . µ . ( ) { i1 ik} 1⩽i1<...

Rings are the natural generalization of fields, and modules over rings are the natural generalization of vector spaces over fields. In this section we present an introduction to rings and to modules over rings.

4.1 Rings

Definition 4.1.1. A ring is a set R with a fixed element 0 ∈ R (the zero) and two compositions: an addition
R × R → R, (r, s) ↦ r + s
and a multiplication
R × R → R, (r, s) ↦ rs
satisfying the following axioms:

1. (r + s) + t = r + (s + t) for all r, s, t ∈ R;

2. r + s = s + r for all r, s ∈ R;

3. r + 0 = r for all r ∈ R;

4. for every r ∈ R there exists −r ∈ R such that r + (−r) = 0;

5. r(s + t) = rs + rt and (s + t)r = sr + tr for all r, s, t ∈ R.

Examples 4.1.2. ˆ Every field is a ring. For example, R, C, and Fp are rings.

ˆ Z – the ring of integers.

ˆ Z/nZ – the ring of integers modulo n.

ˆ R~t – the in the variable t over a ring R. It is defined to be the set of expressions of the form [ ] n r0 r1t ... rnt , where n N, ri R

+ + + 67 ∈ ∈ with the natural addition

n m r0 r1t ... rnt s0 s1t ... smt 2 r0 s0 r1 s1 t r2 s2 t ... ( + + + ) + ( + + + ) = and the multiplication( + ) + ( + ) + ( + ) + n m r0 r1t ... rnt s0 s1t ... smt 2 r0s0 r0s1 r1s0 t r0s2 r1s1 r2s0 t ... ( + + + )( + + + ) = of polynomials.( In particular,) + ( + we have) + ( the polynomial+ + ring) +k t in the variable t over a field k. [ ] ˆ R t – the ring of formal power series in the variable t over a ring R. It is defined to be the set of series of the form [[ ]] 2 i r0 r1t r2t ... rit , where ri R i⩾0 with the natural addition+ + + = Q ∈

2 2 r0 r1t r2t ... s0 s1t s2t ... 2 r0 s0 r1 s1 t r2 s2 t ... ( + + + ) + ( + + + ) = and the multiplication( + ) + ( + ) + ( + ) + 2 2 r0 r1t r2t ... s0 s1t s2t ... 2 r0s0 r0s1 r1s0 t r0s2 r1s1 r2s0 t ... ( + + + )( + + + ) = of power series.( ) + ( + ) + ( + + ) + ˆ R t – the ring of formal Laurent series in the variable t over a ring R. It is defined to be the set of series of the form (( )) m m+1 m+2 i rmt rm+1t rm+2t ... rit , where m Z, ri R i⩾m with the natural+ addition+ and the+ multiplication= Q of Laurent∈ series.∈

ˆ Matn R – the of n n matrices with entries in a ring R with the addition and the multiplication of matrices. ( ) × Definition 4.1.3. ˆ A ring R is called associative whenever

r st rs t for all r, s, t R.

ˆ A ring R is called commutative( ) = (whenever) ∈

rs sr for all r, s R.

ˆ A ring R is called a ring with identity whenever there exists an identity element 1 ∈ R such that
1r = r1 = r for all r ∈ R.

ˆ An element r of a ring R with identity is called invertible whenever there exists r−1 ∈ R such that
rr−1 = r−1r = 1.
The set of invertible elements of R is denoted by R∗.

Remark 4.1.4. There are associative and commutative, associative and noncommutative, nonassociative and commutative, and nonassociative and noncommutative rings, with and without identity.
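As a concrete illustration of invertible elements, a short Python sketch enumerates the unit group (Z/12Z)∗; it uses the fact, not proved here, that r is invertible modulo n iff gcd(r, n) = 1:

```python
from math import gcd

n = 12
# Invertible elements (units) of the ring Z/12Z.
units = [r for r in range(1, n) if gcd(r, n) == 1]
assert units == [1, 5, 7, 11]

# Each unit r has an inverse r^{-1} with r * r^{-1} = 1 (mod n).
inverses = {r: pow(r, -1, n) for r in units}   # modular inverse (Python >= 3.8)
assert all((r * s) % n == 1 for r, s in inverses.items())

# A non-unit such as 2 has no inverse: 2*s mod 12 is never 1.
assert all((2 * s) % n != 1 for s in range(n))
```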

Definition 4.1.5. A ring R is called an algebra over a field k whenever R is a vector space over k and
λ(rs) = (λr)s = r(λs) for all λ ∈ k, r, s ∈ R.

For example, k[t], k[[t]], k((t)), and Matn(k), where k is a field, are k-algebras.

Tensor algebra of a vector space. Let $V$ be a vector space over a field $k$. Consider the space
$$T(V) = k \oplus V \oplus V^{\otimes 2} \oplus V^{\otimes 3} \oplus \dots$$
as an algebra with the multiplication $\otimes$, where the product of
$$v_1 \otimes \dots \otimes v_k \in T^k(V) \quad \text{and} \quad u_1 \otimes \dots \otimes u_l \in T^l(V)$$
is
$$v_1 \otimes \dots \otimes v_k \otimes u_1 \otimes \dots \otimes u_l \in T^{k+l}(V).$$

Lemma 4.1.6. The algebra $T(V)$ is well-defined.

The algebra $T(V)$ is called the tensor algebra of the space $V$. Clearly, the tensor algebra of the $0$-dimensional vector space $V = 0$ is $k$.

Consider a $1$-dimensional vector space $V$. Fix a basis $\{e\}$ of $V$. Then $V^{\otimes k}$ is a $1$-dimensional vector space with the basis $\{e^{\otimes k}\}$, where
$$e^{\otimes k} := \underbrace{e \otimes \dots \otimes e}_{k}.$$
We have the isomorphism
$$k[t] \to T(V), \quad f_0 + f_1 t + f_2 t^2 + \dots \mapsto f_0 + f_1 e + f_2 e^{\otimes 2} + \dots$$

Symmetric algebra of a vector space. Let $V$ be a vector space. Consider the space

$$S(V) = k \oplus S^1(V) \oplus S^2(V) \oplus \dots$$

The space $S(V)$ is an algebra with the multiplication $\cdot$, where the product of
$$f \in S^k(V) \quad \text{and} \quad g \in S^l(V)$$
is
$$f \cdot g := \mathrm{sym}(f \otimes g) \in S^{k+l}(V).$$

Lemma 4.1.7. The algebra $S(V)$ is well-defined.

The algebra $S(V)$ is called the symmetric algebra of $V$; it is associative, commutative, and with identity. The algebra $S(V)$ is not a subalgebra of $T(V)$; however, the map $\mathrm{sym} : T(V) \to S(V)$ is an algebra homomorphism.

Exterior algebra of a vector space. Let $V$ be a vector space. Consider the vector space

$$\Lambda(V) = k \oplus \Lambda^1(V) \oplus \dots \oplus \Lambda^{\dim(V)}(V).$$

The space $\Lambda(V)$ is an algebra over $k$ with the multiplication $\wedge$, where the product of
$$\omega_1 \in \Lambda^k(V) \quad \text{and} \quad \omega_2 \in \Lambda^l(V)$$
is
$$\omega_1 \wedge \omega_2 := \mathrm{alt}(\omega_1 \otimes \omega_2) \in \Lambda^{k+l}(V).$$

Lemma 4.1.8. The algebra $\Lambda(V)$ is well-defined.

The algebra $\Lambda(V)$ is called the exterior algebra of $V$; it is associative, noncommutative for $\dim(V) \geqslant 2$, and with identity. The algebra $\Lambda(V)$ is not a subalgebra of $T(V)$; however, the map $\mathrm{alt} : T(V) \to \Lambda(V)$ is an algebra homomorphism. We have
$$\dim(\Lambda(V)) = 2^{\dim(V)}.$$

A form $\omega \in \Lambda(V)$ is called homogeneous whenever $\omega \in \Lambda^k(V)$ for some $k$; in this case $k$ is called the degree of $\omega$ and is denoted by $\deg(\omega)$. We have
$$\omega_1 \wedge \omega_2 = (-1)^{\deg(\omega_1)\deg(\omega_2)}\, \omega_2 \wedge \omega_1$$
for homogeneous forms $\omega_1, \omega_2 \in \Lambda(V)$.
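The sign rule for homogeneous forms can be checked combinatorially on wedges of basis vectors: a basis $k$-form corresponds to a strictly increasing tuple of indices, and concatenating two tuples and sorting determines the sign. A minimal sketch (the tuple representation and the helper name `wedge` are illustrative, not from the notes):

```python
from itertools import combinations

def wedge(a, b):
    """Wedge two basis forms, given as strictly increasing index tuples.
    Returns (sign, sorted_tuple); sign 0 means the product vanishes."""
    c = a + b
    if len(set(c)) < len(c):          # a repeated basis vector kills the form
        return 0, ()
    # sign = parity of the permutation that sorts the concatenation
    inversions = sum(1 for i, j in combinations(range(len(c)), 2) if c[i] > c[j])
    return (-1) ** inversions, tuple(sorted(c))

# graded commutativity: w1 ^ w2 = (-1)^(deg w1 * deg w2) w2 ^ w1
w1, w2 = (1, 3), (2,)                  # degrees 2 and 1
s12, basis12 = wedge(w1, w2)
s21, basis21 = wedge(w2, w1)
assert basis12 == basis21 == (1, 2, 3)
assert s12 == (-1) ** (len(w1) * len(w2)) * s21
```

The inversion count is exactly the parity of the shuffle that moves the second block of indices past the first, which is where the factor $(-1)^{\deg(\omega_1)\deg(\omega_2)}$ comes from.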

4.2 Modules

Let $R$ be an associative, commutative ring with identity.

Definition 4.2.1. An $R$-module is defined to be a set $M$ with a fixed element $0 \in M$ (the zero) and two compositions: an addition
$$M \times M \to M, \quad (m, l) \mapsto m + l,$$
and a multiplication by elements of $R$
$$R \times M \to M, \quad (r, m) \mapsto r \cdot m,$$
satisfying the following axioms:

1. $m + l = l + m$;
2. $(m + l) + n = m + (l + n)$;
3. $m + 0 = m$;
4. for every $m \in M$ there exists $-m \in M$ such that $m + (-m) = 0$;
5. $r \cdot (m + l) = r \cdot m + r \cdot l$;
6. $(r + s) \cdot m = r \cdot m + s \cdot m$;
7. $(rs) \cdot m = r \cdot (s \cdot m)$;
8. $1 \cdot m = m$

for all $r, s \in R$ and $m, l, n \in M$.

The first four properties mean that $M$ is an abelian group with the neutral element $0$ and the composition $+$.

Examples 4.2.2. 1. An abelian group $M$ can be considered as a $\mathbb{Z}$-module with the multiplication
$$r \cdot m = \mathrm{sgn}(r)\underbrace{(m + \dots + m)}_{|r|}, \quad \text{where } r \in \mathbb{Z},\ m \in M.$$

2. A vector space over a field $k$ is a $k$-module.

3. The set $\{0\}$ (consisting of one element, the zero) with the multiplication
$$r \cdot 0 = 0, \quad \text{where } r \in R,$$
is an $R$-module; it is called the trivial $R$-module.

4. The ring $R$ itself with the multiplication
$$r \cdot m = rm, \quad \text{where } r \in R,\ m \in R,$$
is an $R$-module; it is denoted by $R^1$.

5. The set
$$R^k := \left\{ \begin{pmatrix} r_1 \\ \vdots \\ r_k \end{pmatrix} \;\middle|\; r_i \in R \right\}$$
with the addition
$$\begin{pmatrix} r_1 \\ \vdots \\ r_k \end{pmatrix} + \begin{pmatrix} s_1 \\ \vdots \\ s_k \end{pmatrix} := \begin{pmatrix} r_1 + s_1 \\ \vdots \\ r_k + s_k \end{pmatrix}$$
and the multiplication
$$r \cdot \begin{pmatrix} r_1 \\ \vdots \\ r_k \end{pmatrix} = \begin{pmatrix} rr_1 \\ \vdots \\ rr_k \end{pmatrix}$$
is an $R$-module.

Example 4.2.3. Suppose that $V$ is a vector space over a field $k$ and $\varphi \in \mathrm{End}(V)$. Then we have the $k[t]$-module $V_\varphi$, where

• $V_\varphi = V$ as a set,
• the multiplication is $f(t) \cdot v = f(\varphi)(v)$.

Exercise 4.2.4. Consider the ring $k[t]$, where $k$ is a field, and a $k[t]$-module $V$. Suppose that $V$, considered as a vector space over $k$, is finite-dimensional. Prove that $V = V_\varphi$ for some $\varphi \in \mathrm{End}(V)$.

Example 4.2.5. Suppose that $M_1, \dots, M_k$ are $R$-modules. Then their direct product is an $R$-module with respect to the addition
$$(m_1, \dots, m_k) + (l_1, \dots, l_k) = (m_1 + l_1, \dots, m_k + l_k)$$
and the multiplication
$$r \cdot (m_1, \dots, m_k) = (r \cdot m_1, \dots, r \cdot m_k),$$
where $r \in R$, $m_i \in M_i$; it is called the direct sum of the $R$-modules $M_1, \dots, M_k$ and denoted by $M_1 \oplus \dots \oplus M_k$. We have
$$\underbrace{R \oplus \dots \oplus R}_{n} \simeq R^n.$$

Definition 4.2.6. Suppose that $M$ is an $R$-module. A subset $L \subset M$ is called an $R$-submodule whenever
$$m_1, \dots, m_k \in L \text{ and } r_1, \dots, r_k \in R \text{ imply } r_1 \cdot m_1 + \dots + r_k \cdot m_k \in L.$$

Example 4.2.7. Suppose that $M$ is an $R$-module. Then

1. $\{0\}$ is an $R$-submodule; it is called the trivial submodule.
2. $M$ itself is an $R$-submodule.
3. An element $m \in M$ defines the submodule
$$Rm := \{ r \cdot m \mid r \in R \};$$
it is called the cyclic submodule generated by $m$.
4. (A generalization of the previous example) A subset $X \subset M$ generates the submodule
$$\{ r_1 \cdot x_1 + \dots + r_k \cdot x_k \mid k \in \mathbb{N},\ r_i \in R,\ x_i \in X \};$$
it is called the submodule generated by $X$.

Definition 4.2.8. Suppose that $M$ and $L$ are $R$-modules. A map $\varphi : M \to L$ is called an $R$-module homomorphism or, simply, a homomorphism if
$$\varphi(r_1 \cdot m_1 + \dots + r_k \cdot m_k) = r_1 \cdot \varphi(m_1) + \dots + r_k \cdot \varphi(m_k)$$
for all $r_i \in R$, $m_i \in M$.

An $R$-module homomorphism is called an $R$-module isomorphism whenever it is a bijective map and the set-theoretic inverse map is an $R$-module homomorphism; in this case $M$ is called isomorphic to $L$ and we write $M \simeq L$.

Lemma 4.2.9. 1. The map
$$\Phi : R^n \to R^k, \quad \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{pmatrix} \mapsto \begin{pmatrix} \Phi^1_1 & \Phi^1_2 & \dots & \Phi^1_n \\ \Phi^2_1 & \Phi^2_2 & \dots & \Phi^2_n \\ \vdots & \vdots & \ddots & \vdots \\ \Phi^k_1 & \Phi^k_2 & \dots & \Phi^k_n \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{pmatrix}, \quad \text{where} \quad \Phi = \begin{pmatrix} \Phi^1_1 & \Phi^1_2 & \dots & \Phi^1_n \\ \Phi^2_1 & \Phi^2_2 & \dots & \Phi^2_n \\ \vdots & \vdots & \ddots & \vdots \\ \Phi^k_1 & \Phi^k_2 & \dots & \Phi^k_n \end{pmatrix} \in \mathrm{Mat}_{k \times n}(R),$$
is an $R$-module homomorphism;
2. every $R$-module homomorphism of the form $R^n \to R^k$ is a left multiplication by some $\Phi \in \mathrm{Mat}_{k \times n}(R)$.

Suppose that $M, N$ are $R$-modules. Consider the set $\mathrm{Hom}(M, N)$ of $R$-module homomorphisms $\varphi : M \to N$. Then $\mathrm{Hom}(M, N)$ is an $R$-module under the following compositions:

1. the sum of homomorphisms $\varphi_1, \varphi_2 \in \mathrm{Hom}(M, N)$ is the homomorphism
$$\varphi_1 + \varphi_2 : M \to N, \quad (\varphi_1 + \varphi_2)(m) = \varphi_1(m) + \varphi_2(m);$$
2. the product of $\lambda \in R$ and a homomorphism $\varphi \in \mathrm{Hom}(M, N)$ is the homomorphism
$$\lambda\varphi : M \to N, \quad (\lambda\varphi)(m) = \lambda\varphi(m).$$

In particular, for an $R$-module $M$ we have the $R$-module $M^* := \mathrm{Hom}(M, R)$.

Exercise 4.2.10. Calculate $\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}/18\mathbb{Z}, \mathbb{Z}/24\mathbb{Z})$.

Lemma 4.2.11. Suppose that $M$ is an $R$-module, and $L$ is a submodule of $M$. Then

1. the set
$$M/L := \{ m + L \mid m \in M \},$$
with the zero element $0 + L$, the addition
$$(m + L) + (l + L) = m + l + L,$$
and the multiplication
$$r \cdot (m + L) = r \cdot m + L,$$
is an $R$-module; it is called the quotient of $M$ by $L$.
2. The map
$$\pi : M \to M/L, \quad \pi(m) = m + L,$$
is a surjective $R$-module homomorphism; it is called the quotient homomorphism.
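Exercise 4.2.10 above can be explored numerically: a homomorphism $\mathbb{Z}/18\mathbb{Z} \to \mathbb{Z}/24\mathbb{Z}$ is determined by the image $x$ of $1$, and $x$ must satisfy $18x \equiv 0 \pmod{24}$. A brute-force sketch (the count agrees with $\gcd(18, 24) = 6$, consistent with the standard description of homomorphisms between cyclic groups):

```python
from math import gcd

a, b = 18, 24
# phi: Z/aZ -> Z/bZ is determined by x = phi(1), subject to a*x ≡ 0 (mod b)
valid_images = [x for x in range(b) if (a * x) % b == 0]

# the valid images form a cyclic subgroup of Z/bZ with gcd(a, b) elements
assert len(valid_images) == gcd(a, b)
assert valid_images == [x for x in range(b) if x % (b // gcd(a, b)) == 0]
```

Here the valid images are exactly the multiples of $24/\gcd(18,24) = 4$ in $\mathbb{Z}/24\mathbb{Z}$.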

4.3 Tensor Products of Modules

Let $R$ be an associative and commutative ring, and let $M$ and $L$ be $R$-modules. Consider the set of formal expressions of the form
$$r_1 m_1 \otimes l_1 + \dots + r_k m_k \otimes l_k, \quad \text{where } k \in \mathbb{N}^*,\ r_i \in R,\ m_i \in M,\ l_i \in L,$$
as an $R$-module with the natural addition and multiplication by elements of $R$; the zero element is the expression $0 \otimes 0$.¹ By definition, the tensor product $M \otimes L$ is the quotient of this $R$-module modulo the relations
$$\begin{aligned} (1)\ & (m_1 + m_2) \otimes l = m_1 \otimes l + m_2 \otimes l, \\ (2)\ & m \otimes (l_1 + l_2) = m \otimes l_1 + m \otimes l_2, \\ (3)\ & (r \cdot m) \otimes l = r(m \otimes l) = m \otimes (r \cdot l). \end{aligned} \tag{4.3.1}$$

¹This $R$-module can be defined formally by $\bigoplus_{m \in M,\, l \in L} R(m \otimes l)$.

The notation for the class of an expression $r_1 m_1 \otimes l_1 + \dots + r_k m_k \otimes l_k$ in $M \otimes L$ is
$$r_1 m_1 \otimes l_1 + \dots + r_k m_k \otimes l_k$$
(the same as for the expression itself). Two elements of $M \otimes L$ are equal if one of them can be transformed into the other by using (4.3.1). From the definition it follows that
$$M \otimes \{0\} = \{0\} \otimes L = \{0\}.$$

Example 4.3.1. In $(\mathbb{Z}/10\mathbb{Z}) \otimes_{\mathbb{Z}} (\mathbb{Z}/3\mathbb{Z})$ we have
$$m \otimes l = (21 \cdot m) \otimes l = m \otimes (21 \cdot l) = m \otimes 0 = 0$$
for every $m \in \mathbb{Z}/10\mathbb{Z}$, $l \in \mathbb{Z}/3\mathbb{Z}$. Thus $(\mathbb{Z}/10\mathbb{Z}) \otimes_{\mathbb{Z}} (\mathbb{Z}/3\mathbb{Z}) = \{0\}$.
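The scalar $21$ used in the example is exactly a solution of $u \equiv 1 \pmod{10}$, $u \equiv 0 \pmod 3$, which exists because $\gcd(10, 3) = 1$; a short search (an illustrative sketch) recovers it:

```python
from math import gcd

a, b = 10, 3
assert gcd(a, b) == 1
# find u with u ≡ 1 (mod a) and u ≡ 0 (mod b);
# the Chinese remainder theorem guarantees a unique such u modulo a*b
u = next(x for x in range(a * b) if x % a == 1 and x % b == 0)
assert u == 21
# then m ⊗ l = (u·m) ⊗ l = m ⊗ (u·l) = m ⊗ 0 = 0 for all m, l
```

The same argument shows $(\mathbb{Z}/a\mathbb{Z}) \otimes_{\mathbb{Z}} (\mathbb{Z}/b\mathbb{Z}) = \{0\}$ whenever $\gcd(a, b) = 1$.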

From the definition it follows that the elements of the form $m \otimes l$, where $m \in M$, $l \in L$, are generators of $M \otimes L$. Thus to define an $R$-module homomorphism
$$\varphi : M \otimes L \to W,$$
where $W$ is an $R$-module, we have to define the images $\varphi(m \otimes l) \in W$, where $m \in M$, $l \in L$, such that these images are consistent with (4.3.1), that is,
$$\begin{aligned} (1)\ & \varphi((m_1 + m_2) \otimes l) = \varphi(m_1 \otimes l) + \varphi(m_2 \otimes l), \\ (2)\ & \varphi(m \otimes (l_1 + l_2)) = \varphi(m \otimes l_1) + \varphi(m \otimes l_2), \\ (3)\ & r\varphi(m \otimes l) = \varphi((r \cdot m) \otimes l) = \varphi(m \otimes (r \cdot l)). \end{aligned} \tag{4.3.2}$$

Roughly speaking, (4.3.2) means that $\varphi(m \otimes l)$ is linear in $m$ and linear in $l$.

4.4 *A Proof of Cayley-Hamilton's Theorem

Consider the ring $\mathrm{Mat}_n(k)$ of $n \times n$ matrices with entries in a field $k$. Recall the following.

• The determinant is the function of the form
$$\det : \mathrm{Mat}_n(k) \to k$$
that is defined inductively by the formulas:

1. for a $1 \times 1$ matrix $A = (a_{1,1})$, $\det(A) = a_{1,1}$;
2. if $A = (a_{i,j})$ is an $n \times n$ matrix, where $n > 1$, then
$$\det(A) = \sum_{1 \leqslant j \leqslant n} (-1)^{1+j} a_{1,j} M_{1,j},$$
where $M_{i,j}$ is the determinant of the submatrix of $A$ formed by deleting the $i$th row and the $j$th column.
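The inductive definition translates directly into code; a minimal sketch over exact integer arithmetic (function names are illustrative):

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-based indices)."""
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row,
    mirroring the inductive definition above."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

assert det([[2, 0], [1, 3]]) == 6
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```

The sign $(-1)^j$ with 0-based $j$ matches the $(-1)^{1+j}$ of the 1-based formula.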

The determinant satisfies the following properties:

1. $\det(I) = 1$;
2. $\det(AB) = \det(A)\det(B)$ for every $A, B \in \mathrm{Mat}_n(k)$;
3. $\det(A)$ is a skew-symmetric form of the rows of $A \in \mathrm{Mat}_n(k)$.

• The characteristic polynomial of a matrix $A \in \mathrm{Mat}_n(k)$ is
$$p_A(t) = \det(tI - A) \in k[t].$$

• The adjugate of a matrix $A \in \mathrm{Mat}_n(k)$ is the matrix
$$\mathrm{adj}(A) := \begin{pmatrix} M_{1,1} & -M_{2,1} & M_{3,1} & \dots \\ -M_{1,2} & M_{2,2} & -M_{3,2} & \dots \\ M_{1,3} & -M_{2,3} & M_{3,3} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \in \mathrm{Mat}_n(k).$$

Lemma 4.4.1. Suppose that $k$ is a field and $A = (a_{i,j}) \in \mathrm{Mat}_n(k)$. Then

1. for any $m$, $\det(A) = \sum_{1 \leqslant i \leqslant n} (-1)^{m+i} a_{m,i} M_{m,i} = \sum_{1 \leqslant i \leqslant n} (-1)^{m+i} a_{i,m} M_{i,m}$;
2. $A \cdot \mathrm{adj}(A) = \mathrm{adj}(A) \cdot A = \det(A) I$.
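Lemma 4.4.1(2) is easy to verify numerically; a sketch that builds $\mathrm{adj}(A)$ entrywise from cofactors (reusing a recursive determinant) and checks $A \cdot \mathrm{adj}(A) = \mathrm{adj}(A) \cdot A = \det(A) I$:

```python
def minor(A, i, j):
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adj(A):
    """Adjugate: (adj A)[i][j] = (-1)^(i+j) * M_{j,i} (transposed cofactors)."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
d = det(A)
dI = [[d if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(A, adj(A)) == dI == matmul(adj(A), A)
```

For this particular $A$ one has $\det(A) = 1$, so $\mathrm{adj}(A)$ is also the inverse of $A$.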

Let $R$ be an associative, commutative ring with identity. Consider the set $\mathrm{Mat}_n(R)$ of $n \times n$ matrices with entries in $R$. The set $\mathrm{Mat}_n(R)$ is an associative ring with identity. Similarly to the matrices with entries in a field, we have the following definitions and facts.

• The determinant is the function of the form
$$\det : \mathrm{Mat}_n(R) \to R$$
that is defined inductively by the formulas:

1. for a $1 \times 1$ matrix $A = (a_{1,1})$, $\det(A) = a_{1,1}$;
2. if $A = (a_{i,j})$ is an $n \times n$ matrix, where $n > 1$, then
$$\det(A) = \sum_{1 \leqslant j \leqslant n} (-1)^{1+j} a_{1,j} M_{1,j},$$
where $M_{i,j}$ is the determinant of the submatrix of $A$ formed by deleting the $i$th row and the $j$th column.

The determinant satisfies the following properties:

1. $\det(I) = 1$;
2. $\det(AB) = \det(A)\det(B)$ for every $A, B \in \mathrm{Mat}_n(R)$;
3. $\det(A)$ is a skew-symmetric form of the rows of $A \in \mathrm{Mat}_n(R)$.

• The characteristic polynomial of a matrix $A \in \mathrm{Mat}_n(R)$ is
$$p_A(t) = \det(tI - A) \in R[t].$$

• The adjugate of a matrix $A \in \mathrm{Mat}_n(R)$ is the matrix
$$\mathrm{adj}(A) := \begin{pmatrix} M_{1,1} & -M_{2,1} & M_{3,1} & \dots \\ -M_{1,2} & M_{2,2} & -M_{3,2} & \dots \\ M_{1,3} & -M_{2,3} & M_{3,3} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \in \mathrm{Mat}_n(R).$$

Lemma 4.4.2. Suppose that $R$ is an associative, commutative ring with identity and $A = (a_{i,j}) \in \mathrm{Mat}_n(R)$. Then

1. for any $m$, $\det(A) = \sum_{1 \leqslant i \leqslant n} (-1)^{m+i} a_{m,i} M_{m,i} = \sum_{1 \leqslant i \leqslant n} (-1)^{m+i} a_{i,m} M_{i,m}$;
2. $A \cdot \mathrm{adj}(A) = \mathrm{adj}(A) \cdot A = \det(A) I$.

Exercise 4.4.3. Suppose that $R$ is an associative and commutative ring with identity and $A \in \mathrm{Mat}_n(R)$. Prove that $\mathrm{tr}(\mathrm{adj}(A)) = p_{n-1}(A)$.

Cayley-Hamilton's theorem. Suppose that $R$ is an associative, commutative ring with identity, $A \in \mathrm{Mat}_n(R)$, and
$$p_A(t) = t^n + (-1)p_1(A)t^{n-1} + \dots + (-1)^n p_n(A)$$
is the characteristic polynomial of $A$. Then
$$p_A(A) = A^n + (-1)p_1(A)A^{n-1} + \dots + (-1)^n p_n(A)I = 0.$$

Proof. Consider the ring $R((t^{-1}))$ of formal Laurent series in the variable $t^{-1}$. Elements of $R((t^{-1}))$ are formal series
$$\sum_{i \leqslant m} r_i t^i, \quad \text{where } m \in \mathbb{Z},\ r_i \in R.$$

The matrix $tI - A$ is an invertible element of $\mathrm{Mat}_n(R((t^{-1})))$:
$$(tI - A)^{-1} = t^{-1}I + t^{-2}A + t^{-3}A^2 + \dots \in \mathrm{Mat}_n(R((t^{-1}))).$$

By Lemma 4.4.2(2),
$$\mathrm{adj}(tI - A)(tI - A) = \det(tI - A)I = p_A(t)I.$$

Thus
$$\mathrm{adj}(tI - A) = p_A(t)(tI - A)^{-1},$$
or
$$\mathrm{adj}(tI - A) = (t^n + (-1)p_1(A)t^{n-1} + \dots + (-1)^n p_n(A))(t^{-1}I + t^{-2}A + t^{-3}A^2 + \dots). \tag{4.4.1}$$

Now we have
$$0 = \left\{\begin{matrix}\text{the coefficient of } t^{-1} \\ \text{of the left side of (4.4.1)}\end{matrix}\right\} = \left\{\begin{matrix}\text{the coefficient of } t^{-1} \\ \text{of the right side of (4.4.1)}\end{matrix}\right\} = A^n + (-1)p_1(A)A^{n-1} + \dots + (-1)^n p_n(A)I = p_A(A)$$
(the entries of $\mathrm{adj}(tI - A)$ are polynomials in $t$, so the coefficient of $t^{-1}$ on the left side is $0$). $\square$
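The theorem can be tested directly on an integer matrix: compute $p_A(t)$ as $\det(tI - A)$ with polynomial entries, then evaluate at $A$. A brute-force sketch (polynomials are coefficient lists, lowest degree first; all names are illustrative):

```python
def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pdet(M):
    """Determinant of a matrix whose entries are polynomials (coefficient lists)."""
    if len(M) == 1:
        return M[0][0]
    acc = [0]
    for j in range(len(M)):
        sub = [row[:j] + row[j+1:] for row in M[1:]]
        term = pmul(M[0][j], pdet(sub))
        acc = padd(acc, [(-1) ** j * c for c in term])
    return acc

A = [[2, 1, 0], [0, 1, -1], [3, 0, 1]]
n = len(A)
# entries of tI - A: constant -A[i][j], plus a t term on the diagonal
tIA = [[[-A[i][j], 1] if i == j else [-A[i][j]] for j in range(n)] for i in range(n)]
p = pdet(tIA)                       # coefficients of p_A(t)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# evaluate p_A(A) by Horner's rule with matrix arithmetic: P = P*A + c*I
P = [[0] * n for _ in range(n)]
for c in reversed(p):
    P = matmul(P, A)
    for i in range(n):
        P[i][i] += c
assert P == [[0] * n for _ in range(n)]   # Cayley-Hamilton: p_A(A) = 0
```

For this $A$ the characteristic polynomial works out to $t^3 - 4t^2 + 5t + 1$, and the final matrix is indeed zero.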

Remark 4.4.4. (In the notations of Cayley-Hamilton's theorem and its proof.) We have
$$p_{tI-A}(\lambda) = \det(\lambda I - (tI - A)) = \det(-((t - \lambda)I - A)) = (-1)^n p_A(t - \lambda) = (-1)^n (t - \lambda)^n + (-1)^{n+1} p_1(A)(t - \lambda)^{n-1} + \dots + p_n(A),$$
and thus
$$p_{n-1}(tI - A) = (-1)^{n-1} \cdot \{\text{the coefficient of } \lambda \text{ in } p_{tI-A}(\lambda)\} = n t^{n-1} - (n-1)p_1(A)t^{n-2} + (n-2)p_2(A)t^{n-3} - \dots + (-1)^{n-1} p_{n-1}(A).$$
On the other hand, by exercise 4.4.3,
$$p_{n-1}(tI - A) = \{\text{the trace of the left side of (4.4.1)}\} = \{\text{the trace of the right side of (4.4.1)}\} = \mathrm{tr}\big((t^n + (-1)p_1(A)t^{n-1} + \dots + (-1)^n p_n(A))(t^{-1}I + t^{-2}A + t^{-3}A^2 + \dots)\big).$$
Equating the coefficients of the monomials $t^{n-2}, t^{n-3}, \dots$ of the middle expression with the coefficients of these monomials in the last expression, we obtain the matrix version of Newton's identities
$$\begin{aligned} p_1(A) &= \mathrm{tr}(A), \\ 2p_2(A) &= p_1(A)\,\mathrm{tr}(A) - \mathrm{tr}(A^2), \\ 3p_3(A) &= p_2(A)\,\mathrm{tr}(A) - p_1(A)\,\mathrm{tr}(A^2) + \mathrm{tr}(A^3), \\ 4p_4(A) &= p_3(A)\,\mathrm{tr}(A) - p_2(A)\,\mathrm{tr}(A^2) + p_1(A)\,\mathrm{tr}(A^3) - \mathrm{tr}(A^4), \\ &\dots \end{aligned}$$
where $p_k(A) = 0$ for $k > n$.
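For a concrete matrix the identities determine $p_1(A), p_2(A), \dots$ from traces of powers alone; since $p_n(A) = \det(A)$ (set $t = 0$ in $p_A(t)$), this gives a trace-only determinant computation. A sketch using exact rational arithmetic (variable names are illustrative):

```python
from fractions import Fraction

A = [[3, 1, 0], [2, 0, 1], [1, 1, 1]]
n = len(A)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(n))

# traces of powers tr(A), tr(A^2), ..., tr(A^n)
powers, traces = A, [trace(A)]
for _ in range(n - 1):
    powers = matmul(powers, A)
    traces.append(trace(powers))

# Newton's identities: k*p_k = p_{k-1}*tr(A) - p_{k-2}*tr(A^2) + ... ± tr(A^k)
p = [Fraction(1)]                     # p_0 = 1
for k in range(1, n + 1):
    s = sum((-1) ** (i - 1) * p[k - i] * traces[i - 1] for i in range(1, k + 1))
    p.append(s / k)

# check p_1 and p_n against direct computations
def det(X):
    if len(X) == 1:
        return X[0][0]
    return sum((-1) ** j * X[0][j] * det([r[:j] + r[j+1:] for r in X[1:]])
               for j in range(len(X)))

assert p[1] == trace(A)
assert p[n] == det(A)
```

For this $A$ the recursion yields $p_1 = 4$, $p_2 = 0$, $p_3 = -4$, and indeed $\det(A) = -4$.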

4.5 *Tor and Ext

Let $R$ be a commutative associative ring with identity and $M, N$ be $R$-modules. Consider a resolution
$$\dots \xrightarrow{\ \varphi_3\ } R^{\oplus d_2} \xrightarrow{\ \varphi_2\ } R^{\oplus d_1} \xrightarrow{\ \varphi_1\ } R^{\oplus d_0} \xrightarrow{\ \varphi_0\ } M \longrightarrow 0 \tag{4.5.1}$$
of $M$. It generates the sequence
$$\dots \xrightarrow{\ \hat\varphi_3\ } R^{\oplus d_2} \otimes N \xrightarrow{\ \hat\varphi_2\ } R^{\oplus d_1} \otimes N \xrightarrow{\ \hat\varphi_1\ } R^{\oplus d_0} \otimes N \xrightarrow{\ \hat\varphi_0\ } 0,$$
where $\hat\varphi_0 = 0$ and $\hat\varphi_n = \varphi_n \otimes I_N$ for $n \geqslant 1$, and the sequence
$$\dots \xleftarrow{\ \tilde\varphi_3\ } \mathrm{Hom}(R^{\oplus d_2}, N) \xleftarrow{\ \tilde\varphi_2\ } \mathrm{Hom}(R^{\oplus d_1}, N) \xleftarrow{\ \tilde\varphi_1\ } \mathrm{Hom}(R^{\oplus d_0}, N) \xleftarrow{\ \tilde\varphi_0\ } 0,$$
where $\tilde\varphi_0 = 0$ and $(\tilde\varphi_n(h))(a) = h(\varphi_n(a))$ for $n \geqslant 1$. It is not hard to check that for $n \geqslant 0$ we have
$$\mathrm{Im}(\hat\varphi_{n+1}) \subset \mathrm{Ker}(\hat\varphi_n)$$
and
$$\mathrm{Im}(\tilde\varphi_n) \subset \mathrm{Ker}(\tilde\varphi_{n+1}).$$
Thus, for $n \geqslant 0$ we can define
$$\mathrm{Tor}^R_n(M, N) := \mathrm{Ker}(\hat\varphi_n)/\mathrm{Im}(\hat\varphi_{n+1})$$
and
$$\mathrm{Ext}^n_R(M, N) := \mathrm{Ker}(\tilde\varphi_{n+1})/\mathrm{Im}(\tilde\varphi_n).$$
It is not hard to check that the $R$-modules $\mathrm{Tor}^R_n(M, N)$ and $\mathrm{Ext}^n_R(M, N)$ are well-defined (they do not depend on a choice of resolution (4.5.1)).

Exercise 4.5.1. Check that for vector spaces $V, U$ over a field $k$ we have

$$\mathrm{Tor}^k_n(V, U) = 0 \quad \text{and} \quad \mathrm{Ext}^n_k(V, U) = 0 \quad \text{for } n \geqslant 1.$$

Exercise 4.5.2. Suppose that $R$ is a commutative and associative ring with identity and $M, N$ are $R$-modules. Prove that
$$\mathrm{Tor}^R_0(M, N) = M \otimes N \quad \text{and} \quad \mathrm{Ext}^0_R(M, N) = \mathrm{Hom}(M, N).$$
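For finite cyclic $\mathbb{Z}$-modules the definition can be checked by hand: a free resolution of $\mathbb{Z}/a\mathbb{Z}$ is $0 \to \mathbb{Z} \xrightarrow{\cdot a} \mathbb{Z} \to \mathbb{Z}/a\mathbb{Z} \to 0$, so after tensoring with $\mathbb{Z}/b\mathbb{Z}$, $\mathrm{Tor}_0$ is the cokernel and $\mathrm{Tor}_1$ the kernel of multiplication by $a$ on $\mathbb{Z}/b\mathbb{Z}$, and higher Tor vanish. A brute-force sketch with illustrative values $a = 4$, $b = 6$ (both groups come out with $\gcd(a, b)$ elements):

```python
from math import gcd

a, b = 4, 6   # illustrative values; Z/aZ is resolved by 0 -> Z --(·a)--> Z -> Z/aZ -> 0
# tensoring the resolution with Z/bZ gives the map x -> a*x on Z/bZ;
# Tor_0 = cokernel and Tor_1 = kernel of that map
image = {(a * x) % b for x in range(b)}
kernel = [x for x in range(b) if (a * x) % b == 0]

assert len(kernel) == gcd(a, b)           # |Tor_1| = gcd(a, b)
assert b // len(image) == gcd(a, b)       # |Tor_0| = |(Z/bZ) / image| = gcd(a, b)
```

The same counting applies to any pair $a, b$, which is a useful sanity check for the exercises below.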

Exercise 4.5.3. Calculate $\mathrm{Tor}^{\mathbb{Z}}_n(\mathbb{Z}/12\mathbb{Z}, \mathbb{Z}/18\mathbb{Z})$ for all $n \geqslant 0$.

Exercise 4.5.4. Calculate $\mathrm{Ext}^n_{\mathbb{Z}}(\mathbb{Z}/8\mathbb{Z}, \mathbb{Z}/16\mathbb{Z})$ for all $n \geqslant 0$.

Exercise 4.5.5. Calculate $\mathrm{Tor}^{\mathbb{Z}/6\mathbb{Z}}_n(2\mathbb{Z}/6\mathbb{Z}, 3\mathbb{Z}/6\mathbb{Z})$ for all $n \geqslant 0$.

Exercise 4.5.6. Calculate $\mathrm{Ext}^n_{\mathbb{Z}/24\mathbb{Z}}(\mathbb{Z}/8\mathbb{Z}, \mathbb{Z}/6\mathbb{Z})$ for all $n \geqslant 0$.

Exercise 4.5.7. Calculate $\mathrm{Tor}^{\mathbb{R}[t]}_n(\mathbb{R}^2_A, \mathbb{R}^2_B)$ for all $n \geqslant 0$, where
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$

Exercise 4.5.8. Calculate $\mathrm{Ext}^n_{\mathbb{R}[t]}(\mathbb{R}^2_A, \mathbb{R}^3_B)$ for all $n \geqslant 0$, where
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

4.6 Problems

Problem 4.6.1. Prove that $\mathbb{Z}$ and $\mathbb{Z}/n\mathbb{Z}$ are not algebras over any field.

Problem 4.6.2. Suppose that $V$ is a finite-dimensional vector space, $0 \neq v \in V$, and $\omega \in \Lambda^k(V)$. Prove that
$$v \wedge \omega = 0 \iff \omega = v \wedge \omega' \text{ for some } \omega' \in \Lambda^{k-1}(V).$$

Problem 4.6.3. Suppose that $V$ is a finite-dimensional vector space, $0 \neq \omega \in \Lambda^k(V)$, and $U = \{u \in V \mid u \wedge \omega = 0\}$. Prove that

1. $\dim(U) \leqslant k$;
2. for every $0 \leqslant m \leqslant k$ we have: $\dim(U) \geqslant m$ iff $\omega = u_1 \wedge \dots \wedge u_m \wedge \omega'$ for some $u_1, \dots, u_m \in V$, $\omega' \in \Lambda^{k-m}(V)$.

Problem 4.6.4. Suppose that $V$ is a finite-dimensional vector space and $\omega \in \Lambda^2(V)$. Prove that $\omega \wedge \omega = 0$ whenever $\omega = v_1 \wedge v_2$ for some $v_1, v_2 \in V$.

Problem 4.6.5. Suppose that $R$ is a commutative and associative ring with identity. Prove that $\sum_{i \geqslant 0} r_i t^i \in R[[t]]$ is invertible in $R[[t]]$ whenever $r_0$ is invertible in $R$.

Problem 4.6.6. Find the first 4 terms of the series $(5 + t + 4t^2)^{-1} \in (\mathbb{Z}/6\mathbb{Z})[[t]]$.

Problem 4.6.7. Find the coefficient of $t^{100}$ of the series $(2 + t + 4t^2)^{-1} \in \mathbb{F}_5[[t]]$.

Problem 4.6.8. Suppose that $R$ is a commutative and associative ring with identity. Consider a formal Laurent series $r = \sum_{i \geqslant m} r_i t^i \in R((t))$. Prove that

1. if $r_m$ is invertible in $R$, then $r$ is invertible in $R((t))$;
2. $2 + t \in (\mathbb{Z}/6\mathbb{Z})((t))$ is invertible in $(\mathbb{Z}/6\mathbb{Z})((t))$ (while $2$ is not invertible in $\mathbb{Z}/6\mathbb{Z}$).

Problem 4.6.9. Suppose that $R$ is an associative and commutative ring with identity, and $A, B \in \mathrm{Mat}_n(R)$. Prove that $p_{AB}(t) = p_{BA}(t)$.
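The inversion recurrence behind Problems 4.6.5-4.6.8 is short: if $s = r^{-1}$ then $r_0 s_0 = 1$ and $\sum_{i \leqslant k} r_i s_{k-i} = 0$ for $k \geqslant 1$, so each $s_k$ is determined once $r_0$ is invertible. A sketch over $\mathbb{Z}/m\mathbb{Z}$ (the function name is illustrative):

```python
def series_inverse(r, mod, terms):
    """First `terms` coefficients of the inverse of sum(r[i] t^i) in (Z/mod Z)[[t]],
    assuming r[0] is invertible mod `mod`."""
    inv_r0 = next(u for u in range(mod) if (r[0] * u) % mod == 1)
    s = [inv_r0]
    for k in range(1, terms):
        # solve r0*s_k + r1*s_{k-1} + ... + rk*s_0 = 0 (mod mod) for s_k
        acc = sum(r[i] * s[k - i] for i in range(1, min(k, len(r) - 1) + 1))
        s.append((-inv_r0 * acc) % mod)
    return s

# invert 1 + t in (Z/6Z)[[t]]: the inverse is 1 - t + t^2 - ..., i.e. 1, 5, 1, 5, ...
assert series_inverse([1, 1], 6, 4) == [1, 5, 1, 5]
```

The same recurrence, applied to $5 + t + 4t^2$ over $\mathbb{Z}/6\mathbb{Z}$, produces the terms asked for in Problem 4.6.6.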

Problem 4.6.10. Prove Lemma 4.2.9.

Problem 4.6.11. Prove that $\mathbb{Q} \otimes_{\mathbb{Z}} A = 0$ for a finite abelian group $A$.

Problem 4.6.12. Suppose that $R$ is a commutative and associative ring with identity and $M$ is an $R$-module. Prove that the map
$$M \to R \otimes_R M, \quad m \mapsto 1 \otimes m,$$
is an isomorphism of $R$-modules.

Problem 4.6.13. Suppose that $n, m \in \mathbb{N}^*$; prove that
$$(\mathbb{Z}/n\mathbb{Z}) \otimes_{\mathbb{Z}} (\mathbb{Z}/m\mathbb{Z}) \simeq \mathbb{Z}/\gcd(m, n)\mathbb{Z}.$$

Problem 4.6.14. Consider $k[t]$-modules $V_\varphi$ and $U_\psi$, where $\gcd(p_\varphi(t), p_\psi(t)) = 1$. Prove that $\mathrm{Hom}_{k[t]}(V_\varphi, U_\psi) = 0$ and $V_\varphi \otimes_{k[t]} U_\psi = 0$.

Problem 4.6.15. Calculate $\mathrm{Hom}_{\mathbb{R}[t]}(\mathbb{R}^2_A, \mathbb{R}^3_B)$ and $\mathrm{Hom}_{\mathbb{R}[t]}(\mathbb{R}^3_B, \mathbb{R}^2_A)$, where

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

Problem 4.6.16. Calculate $\mathrm{Hom}_{\mathbb{R}[t]}(\mathbb{R}^5_A, \mathbb{R}^4_B)$, where

$$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$

Problem 4.6.17. Calculate $\mathbb{R}^2_A \otimes_{\mathbb{R}[t]} \mathbb{R}^3_B$ and $\mathbb{R}^3_B \otimes_{\mathbb{R}[t]} \mathbb{R}^2_A$, where

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

Problem 4.6.18. Calculate $\mathbb{R}^5_A \otimes_{\mathbb{R}[t]} \mathbb{R}^4_B$, where

$$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$
