
Part IB — Linear Algebra (Theorems)

Based on lectures by S. J. Wadsley
Notes taken by Dexter Chua

Michaelmas 2015

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Definition of a vector space (over R or C), subspaces, the space spanned by a subset. Linear independence, bases, dimension. Direct sums and complementary subspaces. [3]

Linear maps, isomorphisms. Relation between rank and nullity. The space of linear maps from U to V , representation by matrices. Change of basis. Row rank and column rank. [4]

Determinant and trace of a square matrix. Determinant of a product of two matrices and of the inverse matrix. Determinant of an endomorphism. The adjugate matrix. [3]

Eigenvalues and eigenvectors. Diagonal and triangular forms. Characteristic and minimal polynomials. Cayley-Hamilton Theorem over C. Algebraic and geometric multiplicity of eigenvalues. Statement and illustration of Jordan normal form. [4]

Dual of a finite-dimensional vector space, dual bases and maps. Matrix representation, rank and determinant of dual map. [2]

Bilinear forms. Matrix representation, change of basis. Symmetric forms and their link with quadratic forms. Diagonalisation of quadratic forms. Law of inertia, classification by rank and signature. Complex Hermitian forms. [4]

Inner product spaces, orthonormal sets, orthogonal projection, V = W ⊕ W⊥. Gram-Schmidt orthogonalisation. Adjoints. Diagonalisation of Hermitian matrices. Orthogonality of eigenvectors and properties of eigenvalues. [4]


Contents

0 Introduction

1 Vector spaces
  1.1 Definitions and examples
  1.2 Linear independence, bases and the Steinitz exchange lemma
  1.3 Direct sums

2 Linear maps
  2.1 Definitions and examples
  2.2 Linear maps and matrices
  2.3 The first isomorphism theorem and the rank-nullity theorem
  2.4 Change of basis
  2.5 Elementary matrix operations

3 Duality
  3.1 Dual space
  3.2 Dual maps

4 Bilinear forms I

5 Determinants of matrices

6 Endomorphisms
  6.1 Invariants
  6.2 The minimal polynomial
    6.2.1 Aside on polynomials
    6.2.2 Minimal polynomial
  6.3 The Cayley-Hamilton theorem
  6.4 Multiplicities of eigenvalues and Jordan normal form

7 Bilinear forms II
  7.1 Symmetric bilinear forms and quadratic forms
  7.2 Hermitian form

8 Inner product spaces
  8.1 Definitions and basic properties
  8.2 Gram-Schmidt orthogonalization
  8.3 Adjoints, orthogonal and unitary maps
  8.4 Spectral theory


0 Introduction


1 Vector spaces

1.1 Definitions and examples

Proposition. In any vector space V , 0v = 0 for all v ∈ V , and (−1)v = −v, where −v is the additive inverse of v.

Proposition. Let U, W be subspaces of V . Then U + W and U ∩ W are subspaces.

1.2 Linear independence, bases and the Steinitz exchange lemma

Lemma. S ⊆ V is linearly dependent if and only if there are distinct s0, ··· , sn ∈ S and λ1, ··· , λn ∈ F such that
  λ1 s1 + ··· + λn sn = s0.

Proposition. If S = {e1, ··· , en} is a subset of V over F, then it is a basis if and only if every v ∈ V can be written uniquely as a finite linear combination of elements in S, i.e. as
  v = λ1 e1 + ··· + λn en.

Theorem (Steinitz exchange lemma). Let V be a vector space over F, S = {e1, ··· , en} a finite linearly independent subset of V , and T a spanning subset of V . Then there is some T′ ⊆ T of order n such that (T \ T′) ∪ S still spans V . In particular, |T | ≥ n.

Corollary. Let {e1, ··· , en} be a linearly independent subset of V , and suppose {f1, ··· , fm} spans V . Then there is a re-ordering of the {fi} such that {e1, ··· , en, fn+1, ··· , fm} spans V .

Corollary. Suppose V is a vector space over F with a basis of order n. Then
(i) Every basis of V has order n.
(ii) Any linearly independent set of order n is a basis.
(iii) Every spanning set of order n is a basis.
(iv) Every finite spanning set contains a basis.
(v) Every linearly independent subset of V can be extended to a basis.

Lemma. If V is a finite-dimensional vector space over F and U ⊆ V is a proper subspace, then U is finite-dimensional and dim U < dim V .

Proposition. If U, W are subspaces of a finite-dimensional vector space V , then
  dim(U + W ) = dim U + dim W − dim(U ∩ W ).

Proposition. If V is a finite-dimensional vector space over F and U ⊆ V is a subspace, then dim V = dim U + dim V/U.


1.3 Direct sums


2 Linear maps

2.1 Definitions and examples

Lemma. If U and V are vector spaces over F and α : U → V , then α is an isomorphism iff α is a bijective linear map.

Proposition. Let α : U → V be an F-linear map. Then (i) If α is injective and S ⊆ U is linearly independent, then α(S) is linearly independent in V . (ii) If α is surjective and S ⊆ U spans U, then α(S) spans V . (iii) If α is an isomorphism and S ⊆ U is a basis, then α(S) is a basis for V .

Corollary. If U and V are finite-dimensional vector spaces over F and α : U → V is an isomorphism, then dim U = dim V .

Proposition. Suppose V is an F-vector space of dimension n < ∞. Then, writing e1, ··· , en for the standard basis of Fⁿ, there is a bijection
  Φ : {isomorphisms Fⁿ → V } → {(ordered) bases (v1, ··· , vn) for V },
defined by α ↦ (α(e1), ··· , α(en)).

2.2 Linear maps and matrices

Proposition. Suppose U, V are vector spaces over F and S = {e1, ··· , en} is a basis for U. Then every function f : S → V extends uniquely to a linear map U → V .

Corollary. If U and V are finite-dimensional vector spaces over F with bases (e1, ··· , em) and (f1, ··· , fn) respectively, then there is a bijection
  Matn,m(F) → L(U, V ),
sending A to the unique linear map α such that α(ei) = a1i f1 + ··· + ani fn.

Proposition. Suppose U, V, W are finite-dimensional vector spaces over F with bases R = (u1, ··· , ur), S = (v1, ··· , vs) and T = (w1, ··· , wt) respectively. If α : U → V and β : V → W are linear maps represented by A and B respectively (with respect to R, S and T ), then βα is linear and represented by BA with respect to R and T .

2.3 The first isomorphism theorem and the rank-nullity theorem

Theorem (First isomorphism theorem). Let α : U → V be a linear map. Then ker α and im α are subspaces of U and V respectively. Moreover, α induces an isomorphism
  ᾱ : U/ ker α → im α
  u + ker α ↦ α(u).


Corollary (Rank-nullity theorem). If α : U → V is a linear map and U is finite-dimensional, then
  r(α) + n(α) = dim U.

Proposition. If α : U → V is a linear map between finite-dimensional vector spaces over F, then there are bases (e1, ··· , em) for U and (f1, ··· , fn) for V such that α is represented by the matrix
  ( Ir 0 )
  ( 0  0 ),
where r = r(α) and Ir is the r × r identity matrix. In particular, r(α) + n(α) = dim U.

Corollary. Suppose α : U → V is a linear map between vector spaces over F both of dimension n < ∞. Then the following are equivalent:
(i) α is injective;
(ii) α is surjective;
(iii) α is an isomorphism.
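As an illustrative sketch not taken from the notes, the rank-nullity theorem can be checked on a concrete map α : Q⁴ → Q³ by row-reducing its matrix. The matrix below and the helper `row_reduce_rank` are hypothetical choices for illustration; exact rational arithmetic avoids floating-point pivoting issues.

```python
from fractions import Fraction

def row_reduce_rank(A):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q,
    using Fraction so there is no floating-point error."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    rank = 0
    for c in range(cols):
        # find a pivot in column c at or below row `rank`
        pivot = next((r for r in range(rank, rows) if A[r][c] != 0), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        # eliminate the entries below the pivot
        for r in range(rank + 1, rows):
            factor = A[r][c] / A[rank][c]
            A[r] = [a - factor * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

# alpha : Q^4 -> Q^3 represented by a 3x4 matrix; here dim U = 4.
A = [[1, 2, 0, 1],
     [0, 1, 1, 0],
     [1, 3, 1, 1]]          # third row = first + second, so r(alpha) = 2
r = row_reduce_rank(A)
n = 4 - r                    # rank-nullity: n(alpha) = dim U - r(alpha)
print(r, n)                  # 2 2
```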

Lemma. Let A ∈ Mn,n(F) = Mn(F) be a square matrix. The following are equivalent:

(i) There exists B ∈ Mn(F) such that BA = In.

(ii) There exists C ∈ Mn(F) such that AC = In.

If these hold, then B = C. We call A invertible or non-singular, and write A⁻¹ = B = C.

2.4 Change of basis

Theorem. Suppose that (e1, ··· , em) and (u1, ··· , um) are bases for a finite-dimensional vector space U over F, and (f1, ··· , fn) and (v1, ··· , vn) are bases for a finite-dimensional vector space V over F. Let α : U → V be a linear map represented by a matrix A with respect to (ei) and (fi), and by B with respect to (ui) and (vi). Then
  B = Q⁻¹AP,
where P and Q are given by
  ui = P1i e1 + ··· + Pmi em,  vi = Q1i f1 + ··· + Qni fn.

Corollary. If A ∈ Matn,m(F), then there exist invertible matrices P ∈ GLm(F), Q ∈ GLn(F) such that
  Q⁻¹AP = ( Ir 0 )
          ( 0  0 )
for some 0 ≤ r ≤ min(m, n).

Theorem. If A ∈ Matn,m(F), then r(A) = r(Aᵀ), i.e. the row rank is equal to the column rank.


2.5 Elementary matrix operations

Proposition. If A ∈ Matn,m(F), then there exist invertible matrices P ∈ GLm(F), Q ∈ GLn(F) such that
  Q⁻¹AP = ( Ir 0 )
          ( 0  0 )
for some 0 ≤ r ≤ min(m, n).


3 Duality

3.1 Dual space

Lemma. If V is a finite-dimensional vector space over F with basis (e1, ··· , en), then there is a basis (ε1, ··· , εn) for V∗ (called the dual basis to (e1, ··· , en)) such that
  εi(ej) = δij.

Corollary. If V is finite-dimensional, then dim V = dim V∗.

Proposition. Let V be a finite-dimensional vector space over F with bases (e1, ··· , en) and (f1, ··· , fn), and let P be the change of basis matrix, so that
  fi = P1i e1 + ··· + Pni en.

Let (ε1, ··· , εn) and (η1, ··· , ηn) be the corresponding dual bases so that

εi(ej) = δij = ηi(fj).

Then the change of basis matrix from (ε1, ··· , εn) to (η1, ··· , ηn) is (P⁻¹)ᵀ, i.e.
  εi = (Pᵀ)1i η1 + ··· + (Pᵀ)ni ηn.

Proposition. Let V be a finite-dimensional vector space over F and U a subspace. Then
  dim U + dim U⁰ = dim V ,
where U⁰ is the annihilator of U.

3.2 Dual maps

Proposition. Let α ∈ L(V, W ) be a linear map. Then α∗ ∈ L(W∗, V∗) is a linear map.

Proposition. Let V, W be finite-dimensional vector spaces over F and α : V → W a linear map. Let (e1, ··· , en) be a basis for V and (f1, ··· , fm) a basis for W , with (ε1, ··· , εn) and (η1, ··· , ηm) the corresponding dual bases. Suppose α is represented by A with respect to (ei) and (fi). Then α∗ is represented by Aᵀ with respect to the corresponding dual bases.

Lemma. Let α ∈ L(V, W ) with V, W finite-dimensional vector spaces over F. Then
(i) ker α∗ = (im α)⁰.
(ii) r(α) = r(α∗) (which gives another proof that row rank equals column rank).
(iii) im α∗ = (ker α)⁰.

Lemma. Let V be a vector space over F. Then there is a linear map ev : V → (V ∗)∗ given by ev(v)(θ) = θ(v). We call this the evaluation map.


Lemma. If V is finite-dimensional, then ev : V → V ∗∗ is an isomorphism.

Lemma. Let V, W be finite-dimensional vector spaces over F, and identify V with V∗∗ and W with W∗∗ via the evaluation map. Then:
(i) If U ≤ V , then U⁰⁰ = U.
(ii) If α ∈ L(V, W ), then α∗∗ = α.

Proposition. Let V be a finite-dimensional vector space over F and U1, U2 subspaces of V . Then:
(i) (U1 + U2)⁰ = U1⁰ ∩ U2⁰;
(ii) (U1 ∩ U2)⁰ = U1⁰ + U2⁰.


4 Bilinear forms I

Proposition. Suppose (e1, ··· , en) and (v1, ··· , vn) are bases for V such that
  vi = P1i e1 + ··· + Pni en  for all i = 1, ··· , n;
and (f1, ··· , fm) and (w1, ··· , wm) are bases for W such that
  wj = Q1j f1 + ··· + Qmj fm  for all j = 1, ··· , m.
Let ψ : V × W → F be a bilinear form represented by A with respect to (e1, ··· , en) and (f1, ··· , fm), and by B with respect to (v1, ··· , vn) and (w1, ··· , wm). Then
  B = PᵀAQ.

Lemma. Let (ε1, ··· , εn) be the basis for V∗ dual to (e1, ··· , en) of V , and (η1, ··· , ηm) the basis for W∗ dual to (f1, ··· , fm) of W .
If A represents ψ with respect to (e1, ··· , en) and (f1, ··· , fm), then A also represents ψR with respect to (f1, ··· , fm) and (ε1, ··· , εn), and Aᵀ represents ψL with respect to (e1, ··· , en) and (η1, ··· , ηm).

Lemma. Let V and W be finite-dimensional vector spaces over F with bases (e1, ··· , en) and (f1, ··· , fm) respectively, and let ψ : V × W → F be a bilinear form represented by A with respect to these bases. Then ψ is non-degenerate if and only if A is (square and) invertible. In particular, V and W have the same dimension.


5 Determinants of matrices

Lemma. det A = det Aᵀ.

Lemma. If A is an upper triangular matrix, i.e.
  A = ( a11 a12 ··· a1n )
      (  0  a22 ··· a2n )
      (  ⋮   ⋮   ⋱   ⋮  )
      (  0   0  ··· ann ),
then
  det A = a11 a22 ··· ann.

Lemma. det A is a volume form.

Lemma. Let d be a volume form on Fⁿ. Then swapping two entries changes the sign, i.e.

d(v1, ··· , vi, ··· , vj, ··· , vn) = −d(v1, ··· , vj, ··· , vi, ··· , vn).

Corollary. If σ ∈ Sn, then

d(vσ(1), ··· , vσ(n)) = ε(σ)d(v1, ··· , vn)

for any vi ∈ Fⁿ.

Theorem. Let d be any volume form on Fⁿ, and let A = (A⁽¹⁾ ··· A⁽ⁿ⁾) ∈ Matn(F) have columns A⁽ⁱ⁾. Then
  d(A⁽¹⁾, ··· , A⁽ⁿ⁾) = (det A) d(e1, ··· , en),
where {e1, ··· , en} is the standard basis.

Theorem. Let A, B ∈ Matn(F). Then det(AB) = det(A) det(B).

Corollary. If A ∈ Matn(F) is invertible, then det A ≠ 0. In fact, when A is invertible, det(A⁻¹) = (det A)⁻¹.
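The multiplicativity det(AB) = det(A) det(B) can be checked directly from the Leibniz expansion det A = Σ_σ ε(σ) ∏ A_{iσ(i)}. The sketch below is not part of the notes; the two 3×3 matrices are arbitrary choices for illustration.

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of the permutation p (a tuple), via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Leibniz formula: det A = sum over sigma of sign(sigma) * prod_i A[i][sigma(i)]."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 2]]
print(det(A), det(B), det(matmul(A, B)) == det(A) * det(B))   # -13 7 True
```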

Theorem. Let A ∈ Matn(F). Then the following are equivalent:
(i) A is invertible.
(ii) det A ≠ 0.
(iii) r(A) = n.

Lemma. Let A ∈ Matn(F). Then
(i) We can expand det A along the jth column:
  det A = ∑_{i=1}^n (−1)^{i+j} Aij det Âij.


(ii) We can expand det A along the ith row:
  det A = ∑_{j=1}^n (−1)^{i+j} Aij det Âij.

Theorem. If A ∈ Matn(F), then A(adj A) = (det A)In = (adj A)A. In particular, if det A ≠ 0, then
  A⁻¹ = (1/det A) adj A.

Lemma. Let A, B be square matrices. Then for any C, we have
  det ( A C ) = (det A)(det B).
      ( 0 B )

Corollary. For a block upper triangular matrix,
  det ( A1          stuff )
      (     A2            )
      (         ⋱         )
      ( 0             An  )  =  det A1 det A2 ··· det An.
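The adjugate identity A(adj A) = (det A)In can be verified on a small example. This sketch is not from the notes; the 3×3 matrix is an arbitrary choice, and adj A is built entrywise as the transposed cofactor matrix.

```python
def minor(A, i, j):
    """Matrix A with row i and column j deleted (the A-hat_ij of the notes)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Cofactor (Laplace) expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """adj(A) is the transpose of the cofactor matrix: adj(A)[j][i] = (-1)^(i+j) det A-hat_ij."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) for i in range(n)] for j in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
d = det(A)                                        # 8
adj = adjugate(A)
prod_ = [[sum(A[i][k] * adj[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(d, prod_ == [[d, 0, 0], [0, d, 0], [0, 0, d]])   # 8 True
```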


6 Endomorphisms

6.1 Invariants

Lemma. Suppose (e1, ··· , en) and (f1, ··· , fn) are bases for V and α ∈ End(V ). If A represents α with respect to (e1, ··· , en) and B represents α with respect to (f1, ··· , fn), then
  B = P⁻¹AP,
where P is given by
  fi = P1i e1 + ··· + Pni en.

Lemma.

(i) If A ∈ Matm,n(F) and B ∈ Matn,m(F), then tr AB = tr BA.

(ii) If A, B ∈ Matn(F) are similar, then tr A = tr B.

(iii) If A, B ∈ Matn(F) are similar, then det A = det B.

Lemma. If A and B are similar, then they have the same characteristic polynomial.

Lemma. Let α ∈ End(V ) and λ1, ··· , λk be distinct eigenvalues of α. Then the sum
  E(λ1) + ··· + E(λk) = E(λ1) ⊕ ··· ⊕ E(λk)
is a direct sum.

Theorem. Let α ∈ End(V ) and λ1, ··· , λk be distinct eigenvalues of α. Write Ei for E(λi). Then the following are equivalent:
(i) α is diagonalizable.
(ii) V has a basis of eigenvectors for α.
(iii) V = E1 ⊕ ··· ⊕ Ek.
(iv) dim V = ∑_{i=1}^k dim Ei.

6.2 The minimal polynomial

6.2.1 Aside on polynomials

Lemma (Polynomial division). If f, g ∈ F[t] (and g ≠ 0), then there exist q, r ∈ F[t] with deg r < deg g such that f = qg + r.

Lemma. If λ ∈ F is a root of f, i.e. f(λ) = 0, then there is some g such that f(t) = (t − λ)g(t).


Lemma. A non-zero polynomial f ∈ F[t] has at most deg f roots, counted with multiplicity.

Corollary. Let f, g ∈ F[t] have degree < n. If there are λ1, ··· , λn distinct such that f(λi) = g(λi) for all i, then f = g.

Corollary. If F is infinite, then two polynomials f, g ∈ F[t] are equal if and only if they agree at all points.

Theorem (The fundamental theorem of algebra). Every non-constant polynomial over C has a root in C.

6.2.2 Minimal polynomial

Theorem (Diagonalizability theorem). Suppose α ∈ End(V ). Then α is diagonalizable if and only if there exists a non-zero p(t) ∈ F[t] such that p(α) = 0 and p(t) can be factored as a product of distinct linear factors.

Lemma. Let α ∈ End(V ) and p ∈ F[t]. Then p(α) = 0 if and only if Mα(t) is a factor of p(t). In particular, Mα is unique.

Theorem (Diagonalizability theorem 2.0). Let α ∈ End(V ). Then α is diagonalizable if and only if Mα(t) is a product of distinct linear factors.

Theorem. Let α, β ∈ End(V ) both be diagonalizable. Then α and β are simultaneously diagonalizable (i.e. there exists a basis with respect to which both are diagonal) if and only if αβ = βα.
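A standard illustration, not taken from the notes: for the Jordan block A = [[1, 1], [0, 1]], the matrix A − I is non-zero but (A − I)² = 0, so Mα(t) = (t − 1)². This has a repeated factor, so by the diagonalizability theorem A is not diagonalizable. A quick numerical check:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]                # Jordan block J_2(1)
N = [[A[i][j] - (1 if i == j else 0) for j in range(2)] for i in range(2)]   # N = A - I

print(N != [[0, 0], [0, 0]])        # True: (t - 1) does not annihilate A
print(matmul(N, N))                 # [[0, 0], [0, 0]]: (t - 1)^2 does, so M(t) = (t - 1)^2
```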

6.3 The Cayley-Hamilton theorem

Theorem (Cayley-Hamilton theorem). Let V be a finite-dimensional vector space and α ∈ End(V ). Then χα(α) = 0, i.e. Mα(t) | χα(t). In particular, deg Mα ≤ n.

Lemma. An endomorphism α is triangulable if and only if χα(t) can be written as a product of linear factors, not necessarily distinct. In particular, if F = C (or any algebraically closed field), then every endomorphism is triangulable.

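For a 2×2 matrix, χ_A(t) = t² − (tr A)t + det A, so Cayley-Hamilton says A² − (tr A)A + (det A)I = 0. A sketch checking this for one arbitrarily chosen matrix (not an example from the notes):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                       # trace = 5
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # determinant = -2
A2 = matmul(A, A)

# chi_A(A) = A^2 - (tr A) A + (det A) I, computed entry by entry
chi_of_A = [[A2[i][j] - tr * A[i][j] + d * (1 if i == j else 0) for j in range(2)]
            for i in range(2)]
print(chi_of_A)   # [[0, 0], [0, 0]]
```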

Lemma. Let α ∈ End(V ), λ ∈ F. Then the following are equivalent: (i) λ is an eigenvalue of α.

(ii) λ is a root of χα(t).

(iii) λ is a root of Mα(t).


6.4 Multiplicities of eigenvalues and Jordan normal form

Lemma. If λ is an eigenvalue of α, then
(i) 1 ≤ gλ ≤ aλ;
(ii) 1 ≤ cλ ≤ aλ.

Lemma. Suppose F = C and α ∈ End(V ). Then the following are equivalent:
(i) α is diagonalizable.
(ii) gλ = aλ for all eigenvalues λ of α.
(iii) cλ = 1 for all eigenvalues λ of α.

Theorem (Jordan normal form theorem). Every matrix A ∈ Matn(C) is similar to a matrix in Jordan normal form. Moreover, this Jordan normal form matrix is unique up to permutation of the blocks.

Theorem. Let α ∈ End(V ), and let A be a matrix in Jordan normal form representing α. Then the number of Jordan blocks Jn(λ) in A with n ≥ r is
  n((α − λι)^r) − n((α − λι)^{r−1}).

Theorem (Generalized eigenspace decomposition). Let V be a finite-dimensional vector space over C and α ∈ End(V ). Suppose that
  Mα(t) = (t − λ1)^{c1} ··· (t − λk)^{ck},
with λ1, ··· , λk ∈ C distinct. Then
  V = V1 ⊕ ··· ⊕ Vk,
where Vi = ker((α − λiι)^{ci}) is the generalized eigenspace.


7 Bilinear forms II

7.1 Symmetric bilinear forms and quadratic forms

Lemma. Let V be a finite-dimensional vector space over F, and let φ : V × V → F be a bilinear form. Let (e1, ··· , en) be a basis for V , and let M be the matrix representing φ with respect to this basis, i.e. Mij = φ(ei, ej). Then φ is symmetric if and only if M is symmetric.

Lemma. Let V be a finite-dimensional vector space, and φ : V × V → F a bilinear form. Let (e1, ··· , en) and (f1, ··· , fn) be bases of V such that
  fi = P1i e1 + ··· + Pni en.
If A represents φ with respect to (ei) and B represents φ with respect to (fi), then
  B = PᵀAP.

Proposition (Polarization identity). Suppose that char F ≠ 2, i.e. 1 + 1 ≠ 0 in F (e.g. if F is R or C). If q : V → F is a quadratic form, then there exists a unique symmetric bilinear form φ : V × V → F such that q(v) = φ(v, v).
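When char F ≠ 2, the form is recovered from q by φ(u, v) = (q(u + v) − q(u) − q(v))/2. The sketch below, not part of the notes, verifies this over the rationals; the symmetric matrix M is an arbitrary choice.

```python
from fractions import Fraction
from itertools import product

M = [[2, 1], [1, 3]]    # symmetric matrix representing phi on Q^2 (arbitrary choice)

def phi(u, v):
    """phi(u, v) = u^T M v."""
    return sum(M[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def q(v):
    return phi(v, v)

# polarization: phi(u, v) = (q(u + v) - q(u) - q(v)) / 2, since phi is symmetric
for u, v in product(product(range(-2, 3), repeat=2), repeat=2):
    w = [u[0] + v[0], u[1] + v[1]]
    assert phi(u, v) == Fraction(q(w) - q(u) - q(v), 2)
print("polarization identity verified on a grid of sample vectors")
```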

Theorem. Let V be a finite-dimensional vector space over F, and φ : V × V → F a symmetric bilinear form. Then there exists a basis (e1, ··· , en) for V such that φ is represented by a diagonal matrix with respect to this basis.

Theorem. Let φ be a symmetric bilinear form over a complex vector space V . Then there exists a basis (v1, ··· , vm) for V such that φ is represented by
  ( Ir 0 )
  ( 0  0 )
with respect to this basis, where r = r(φ).

Corollary. Every symmetric matrix A ∈ Matn(C) is congruent to a unique matrix of the form
  ( Ir 0 )
  ( 0  0 ).

Theorem. Let φ be a symmetric bilinear form on a finite-dimensional vector space over R. Then there exists a basis (v1, ··· , vn) for V such that φ is represented by
  ( Ip          )
  (    −Iq      )
  (          0  ),
with p + q = r(φ), p, q ≥ 0. Equivalently, the corresponding quadratic form is given by
  q(a1 v1 + ··· + an vn) = a1² + ··· + ap² − (a_{p+1}² + ··· + a_{p+q}²).


Theorem (Sylvester's law of inertia). Let φ be a symmetric bilinear form on a finite-dimensional real vector space V . Then there exist unique non-negative integers p, q such that φ is represented by
  ( Ip  0   0 )
  ( 0  −Iq  0 )
  ( 0   0   0 )
with respect to some basis.

Corollary. Every real symmetric matrix is congruent to precisely one matrix of the form
  ( Ip  0   0 )
  ( 0  −Iq  0 )
  ( 0   0   0 ).
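An illustration not taken from the notes: the real quadratic form q(x, y) = x² + 4xy + y² diagonalizes by completing the square as (x + 2y)² − 3y², exhibiting p = 1, q = 1, so its signature is (1, 1). The algebraic identity behind this diagonalization can be checked exactly on integers:

```python
# q(x, y) = x^2 + 4xy + y^2 has matrix [[1, 2], [2, 1]].
# Completing the square gives q = (x + 2y)^2 - 3 y^2: one positive and one
# negative square, so p = 1, q = 1 in Sylvester's law.
ok = all(x * x + 4 * x * y + y * y == (x + 2 * y) ** 2 - 3 * y * y
         for x in range(-5, 6) for y in range(-5, 6))
print(ok)   # True
```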

7.2 Hermitian form

Lemma. Let φ : V × V → C be a sesquilinear form on a finite-dimensional vector space over C, and (e1, ··· , en) a basis for V . Then φ is Hermitian if and only if the matrix A representing φ is Hermitian (i.e. A = A†).

Proposition (Change of basis). Let φ be a Hermitian form on a finite-dimensional vector space V , and (e1, ··· , en) and (v1, ··· , vn) bases for V such that
  vi = P1i e1 + ··· + Pni en;
and let A, B represent φ with respect to (e1, ··· , en) and (v1, ··· , vn) respectively. Then
  B = P†AP.

Lemma (Polarization identity (again)). A Hermitian form φ on V is determined by the function ψ : v ↦ φ(v, v).

Theorem (Hermitian form of Sylvester's law of inertia). Let V be a finite-dimensional complex vector space and φ a Hermitian form on V . Then there exist unique non-negative integers p and q such that φ is represented by
  ( Ip  0   0 )
  ( 0  −Iq  0 )
  ( 0   0   0 )
with respect to some basis.


8 Inner product spaces

8.1 Definitions and basic properties

Theorem (Cauchy-Schwarz inequality). Let V be an inner product space and v, w ∈ V . Then |(v, w)| ≤ ‖v‖‖w‖.

Corollary (Triangle inequality). Let V be an inner product space and v, w ∈ V . Then ‖v + w‖ ≤ ‖v‖ + ‖w‖.

Lemma (Parseval's identity). Let V be a finite-dimensional inner product space with an orthonormal basis v1, ··· , vn, and v, w ∈ V . Then
  (v, w) = ∑_{i=1}^n (vi, v)(vi, w).
In particular,
  ‖v‖² = ∑_{i=1}^n |(vi, v)|².
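Both inequalities above can be sanity-checked numerically. In this sketch (not from the notes), R² carries the standard dot product, and the orthonormal basis v1 = (1, 1)/√2, v2 = (1, −1)/√2 and the test vectors are arbitrary choices:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

s = 1 / math.sqrt(2)
basis = [(s, s), (s, -s)]          # an orthonormal basis of R^2
v, w = (3.0, 1.0), (-2.0, 5.0)

# Cauchy-Schwarz: |(v, w)| <= ||v|| ||w||
assert abs(dot(v, w)) <= math.sqrt(dot(v, v)) * math.sqrt(dot(w, w))

# Parseval: (v, w) = sum_i (v_i, v)(v_i, w)
parseval = sum(dot(u, v) * dot(u, w) for u in basis)
print(math.isclose(dot(v, w), parseval))   # True
```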

8.2 Gram-Schmidt orthogonalization

Theorem (Gram-Schmidt process). Let V be an inner product space and e1, e2, ··· a linearly independent set. Then we can construct an orthonormal set v1, v2, ··· with the property that
  ⟨v1, ··· , vk⟩ = ⟨e1, ··· , ek⟩
for every k.

Corollary. If V is a finite-dimensional inner product space, then any orthonormal set can be extended to an orthonormal basis.

Proposition. Let V be a finite-dimensional inner product space, and W ≤ V . Then
  V = W ⊕ W⊥.

Proposition. Let V be a finite-dimensional inner product space and W ≤ V . Let (e1, ··· , ek) be an orthonormal basis of W , and let π be the orthogonal projection of V onto W , i.e. π : V → W is the function satisfying ker π = W⊥ and π|W = id. Then
(i) π is given by the formula
  π(v) = ∑_{i=1}^k (ei, v) ei.
(ii) For all v ∈ V , w ∈ W , we have ‖v − π(v)‖ ≤ ‖v − w‖, with equality if and only if π(v) = w. This says π(v) is the point on W closest to v.


[Diagram: a vector v, its orthogonal projection π(v) onto W , and a general w ∈ W .]
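The Gram-Schmidt process stated above can be sketched in code: subtract from each eᵢ its projection onto the previously built vⱼ, then normalize. This sketch is not part of the notes and the input vectors in R³ are arbitrary:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors in R^n
    (classical Gram-Schmidt against an already-orthonormal partial basis)."""
    out = []
    for e in vectors:
        # subtract the orthogonal projection of e onto span(out)
        u = list(e)
        for v in out:
            c = dot(v, e)
            u = [ui - c * vi for ui, vi in zip(u, v)]
        norm = math.sqrt(dot(u, u))
        out.append([ui / norm for ui in u])
    return out

vs = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# check orthonormality: (v_i, v_j) = delta_ij
ok = all(math.isclose(dot(vs[i], vs[j]), 1.0 if i == j else 0.0, abs_tol=1e-12)
         for i in range(3) for j in range(3))
print(ok)   # True
```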

8.3 Adjoints, orthogonal and unitary maps

Lemma. Let V and W be finite-dimensional inner product spaces and α : V → W a linear map. Then there exists a unique linear map α∗ : W → V such that
  (αv, w) = (v, α∗w)   (∗)
for all v ∈ V , w ∈ W .

Lemma. Let V be a finite-dimensional real inner product space and α ∈ End(V ). Then α is orthogonal if and only if α⁻¹ = α∗.

Corollary. α ∈ End(V ) is orthogonal if and only if α is represented by an orthogonal matrix, i.e. a matrix A such that AᵀA = AAᵀ = I, with respect to any orthonormal basis.

Proposition. Let V be a finite-dimensional real inner product space and (e1, ··· , en) an orthonormal basis of V . Then there is a bijection
  O(V ) → {orthonormal bases for V }
  α ↦ (α(e1), ··· , α(en)).

Lemma. Let V be a finite-dimensional complex inner product space and α ∈ End(V ). Then α is unitary if and only if α is invertible and α∗ = α⁻¹.

Corollary. α ∈ End(V ) is unitary if and only if α is represented by a unitary matrix A with respect to any orthonormal basis, i.e. A⁻¹ = A†.

Proposition. Let V be a finite-dimensional complex inner product space. Then there is a bijection
  U(V ) → {orthonormal bases of V }
  α ↦ (α(e1), ··· , α(en)).

8.4 Spectral theory

Lemma. Let V be a finite-dimensional inner product space, and α ∈ End(V ) self-adjoint. Then
(i) α has a real eigenvalue, and all eigenvalues of α are real.
(ii) Eigenvectors of α with distinct eigenvalues are orthogonal.

Theorem. Let V be a finite-dimensional inner product space, and α ∈ End(V ) self-adjoint. Then V has an orthonormal basis of eigenvectors of α.
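A concrete instance, chosen for illustration and not taken from the notes: the real symmetric matrix A = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, −1). These are orthogonal, so normalizing them gives an orthonormal eigenbasis of R², as the spectral theorem promises:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1], [1, 2]]                      # real symmetric, hence self-adjoint
pairs = [(3, [1, 1]), (1, [1, -1])]       # (eigenvalue, eigenvector)

# verify A v = lambda v for each pair
for lam, v in pairs:
    assert matvec(A, v) == [lam * x for x in v]

# eigenvectors for distinct eigenvalues are orthogonal
print(sum(x * y for x, y in zip(pairs[0][1], pairs[1][1])))   # 0
```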


Corollary. Let V be a finite-dimensional inner product space and α self-adjoint. Then V is the orthogonal (internal) direct sum of its α-eigenspaces.

Corollary. Let A ∈ Matn(R) be symmetric. Then there exists an orthogonal matrix P such that PᵀAP = P⁻¹AP is diagonal.

Corollary. Let V be a finite-dimensional real inner product space and ψ : V × V → R a symmetric bilinear form. Then there exists an orthonormal basis (v1, ··· , vn) for V with respect to which ψ is represented by a diagonal matrix.

Corollary. Let V be a finite-dimensional real vector space and φ, ψ symmetric bilinear forms on V such that φ is positive-definite. Then we can find a basis (v1, ··· , vn) for V such that both φ and ψ are represented by diagonal matrices with respect to this basis.

Corollary. Let A, B ∈ Matn(R) be symmetric, with A positive definite (i.e. vᵀAv > 0 for all v ∈ Rⁿ \ {0}). Then there exists an invertible matrix Q such that QᵀAQ and QᵀBQ are both diagonal.

Proposition.

(i) If A ∈ Matn(C) is Hermitian, then there exists a unitary matrix U ∈ Matn(C) such that U⁻¹AU = U†AU is diagonal.

(ii) If ψ is a Hermitian form on a finite-dimensional complex inner product space V , then there is an orthonormal basis for V diagonalizing ψ. (iii) If φ, ψ are Hermitian forms on a finite-dimensional complex vector space and φ is positive definite, then there exists a basis for which φ and ψ are diagonalized.

(iv) Let A, B ∈ Matn(C) be Hermitian, with A positive definite (i.e. v†Av > 0 for v ∈ Cⁿ \ {0}). Then there exists some invertible Q such that Q†AQ and Q†BQ are diagonal.

Theorem. Let V be a finite-dimensional complex inner product space and α ∈ U(V ) unitary. Then V has an orthonormal basis of α-eigenvectors.
