Revised version of 22 March 2017

Oxford University Mathematical Institute

Linear Algebra I

Notes by Peter M. Neumann (Queen’s College)

Preface

These notes are intended as a rough guide to the fourteen-lecture course Linear Algebra I, which is part of the Oxford first-year undergraduate course in mathematics (for Prelims). Please do not expect a polished account. They are lecture notes, not a carefully checked textbook. Nevertheless, I hope they may be of some help. The synopsis for the course is as follows.

• Systems of linear equations. Expression as an augmented matrix (understood simply as an array at this point). Elementary Row Operations (EROs). Solutions by row reduction.
• Abstract vector spaces: definition of a vector space over a field (expected examples R, Q, C). Examples of vector spaces: solution space of homogeneous systems of equations and differential equations; function spaces; polynomials; C as an R-vector space; sequence spaces. Subspaces, spanning sets and spans. (Emphasis on concrete examples, with deduction of properties from axioms set as problems.)
• Linear independence, definition of a basis, examples. Steinitz exchange lemma, and definition of dimension. Coordinates associated with a basis. Algorithms involving EROs to find a basis of a subspace.
• Sums, intersections and direct sums of subspaces. Dimension formula.
• Linear transformations: definition and examples including projections. Kernel and image, rank–nullity formula.
• Algebra of linear transformations. Inverses. Matrix of a linear transformation with respect to a basis. Algebra of matrices. Transformation of a matrix under change of basis. Determining an inverse with EROs. Column space, column rank.
• Bilinear forms. Positive definite symmetric bilinear forms. Inner Product Spaces. Examples: R^n with dot product; function spaces. Comment on (positive definite) Hermitian forms. Cauchy–Schwarz inequality. Distance and angle. Transpose of a matrix. Orthogonal matrices.

The Faculty Teaching Committee has approved the following lists to support the teaching and learning of this material.

Reading List:
(1) T. S. Blyth and E. F. Robertson, Basic Linear Algebra (Springer, London, 1998).
(2) R. Kaye and R. Wilson, Linear Algebra (OUP, 1998), Chapters 1–5 and 8. [More advanced but useful on bilinear forms and inner product spaces.]

Further Reading:
(1) C. W. Curtis, Linear Algebra – An Introductory Approach (Springer, London, Fourth edition, reprinted 1994).
(2) R. B. J. T. Allenby, Linear Algebra (Arnold, London, 1995).
(3) D. A. Towers, A Guide to Linear Algebra (Macmillan, Basingstoke, 1988).
(4) D. T. Finkbeiner, Elements of Linear Algebra (Freeman, London, 1972). [Out of print, but available in many libraries.]
(5) Seymour Lipschutz and Marc Lipson, Linear Algebra (McGraw Hill, London, Third Edition, 2001).

The fact that some of these texts were first published many years ago does not mean that they are out of date. Unlike some of the laboratory sciences, mathematics and mathematical pedagogy at first-year undergraduate level were already sufficiently well developed many, many years ago that they do not change much now. Nevertheless, we are always looking out for good modern expositions. Please let me have suggestions—author(s), title, publisher and date. A set of seven exercise sheets goes with this lecture course. The questions they contain will be found embedded in these notes along with a number of supplementary exercises.

Acknowledgements: I am very grateful to my wife Sylvia, to George Cooper (Balliol), to Jake Lee (St Catz), to Alexander Ober (Hertford), and to Yuyang Shi (Oriel) for drawing my attention to some misprints (now corrected) in earlier versions of these notes. I would much welcome further feedback. Please let me know of any errors, infelicities and obscurities (or reading-list suggestions—see above). Please email me at

[email protected] or write a note to me at The Queen’s College or The Andrew Wiles Building of the Mathematical Institute.


CONTENTS

1. Linear equations and matrices

Linear equations
Matrices
The beginnings of matrix algebra
More on systems of linear equations
Elementary Row Operations (EROs)

2. Vector spaces

Vectors as we know them
Vector Spaces
Subspaces
Examples of vector spaces

3. Bases of vector spaces

Spanning sets
Linear independence
Bases
The Steinitz Exchange Lemma

4. Subspaces of vector spaces

Bases of subspaces
Finding bases of subspaces
An algebra of subspaces
Direct sums of subspaces

5. An introduction to linear transformations

What is a linear transformation?
Some examples of linear transformations
Some algebra of linear transformations
Rank and nullity

6. Linear transformations and matrices

Matrices of linear transformations with respect to given bases
Change of basis
More about matrices: rank
More on EROs: row reduced echelon (RRE) form
Using EROs to invert matrices

7. Inner Product Spaces

Bilinear forms
Inner product spaces
Orthogonal matrices
The Cauchy–Schwarz Inequality
Complex inner product spaces

1 Linear equations; matrices

1.1 Linear equations

The solution of sets of simultaneous linear equations is one of the most widely used techniques of algebra. It is essential in most of pure mathematics, applied mathematics and statistics, and is heavily used in areas such as economics and management, and in other parts of practical or not-so-practical life. Although meteorological prediction, modern cryptography, and other such areas require the solution of systems of many thousands, even millions, of simultaneous equations in a similar number of variables, we'll be less ambitious to begin with. Systems of equations like

$$(\star)\qquad \left\{\begin{aligned} x + 2y &= 3\\ 2x + 3y &= 5\end{aligned}\right.\ ,\qquad \left\{\begin{aligned} x + 2y + 3z &= 6\\ 2x + 3y + 4z &= 9\\ 3x + 4y + 5z &= 12\end{aligned}\right.\qquad\text{and}\qquad \left\{\begin{aligned} y + 2z + 3w &= 0\\ x + 2y + 3z + 4w &= 2\\ 2x + 3y + 4z + 5w &= 0\end{aligned}\right.$$
may be solved (or shown to be insoluble) by a systematic elimination process that you should have come across before arriving at Oxford.

Exercise 1.1. Which (if any) of the following systems of linear equations with real coefficients have no solutions, which have a unique solution, which have infinitely many solutions?

$$\text{(a)}\ \left\{\begin{aligned} 2x + 4y - 3z &= 0\\ x - 4y + 3z &= 0\\ 3x - 5y + 2z &= 1\end{aligned}\right.;\qquad \text{(b)}\ \left\{\begin{aligned} x + 2y + 3z &= 0\\ 2x + 3y + 4z &= 1\\ 3x + 4y + 5z &= 2\end{aligned}\right.;\qquad \text{(c)}\ \left\{\begin{aligned} x + 2y + 3z &= 0\\ 2x + 3y + 4z &= 2\\ 3x + 4y + 5z &= 2\end{aligned}\right..$$

1.2 Matrices

It is immediately apparent that it is only the coefficients that matter. Stripping away the variables from the systems (⋆) of equations we are left with the arrays

1 2 3 0 1 2 3 1 2 , 2 3 4 , and 1 2 3 4 2 3     3 4 5 2 3 4 5

on the left sides of the equations and $\begin{pmatrix}3\\5\end{pmatrix}$, $\begin{pmatrix}6\\9\\12\end{pmatrix}$, and $\begin{pmatrix}0\\2\\0\end{pmatrix}$ on their right sides. Such arrays are known as matrices. In general, an m × n matrix is a rectangular array with m rows and n columns. Conventionally the rows are numbered 1, . . . , m from top to bottom and the columns are numbered 1, . . . , n from left to right. In the Prelim context, the entries in the array will be real or complex numbers. The entry in row i and column j is usually denoted a_{ij} or x_{ij}, or something similar. If the m × n matrix A has entries a_{ij} it is often written (a_{ij}) (or [a_{ij}] or $(a_{ij})_{i=1,\,j=1}^{m,\,n}$, or something similar)∗. A 1 × n matrix is often called a row vector, an m × 1 matrix is called

∗Please forgive my vagueness here and at a few other points in these notes. Mathematicians are inventive people, and it is not possible to list all the notational variations that they create. Clarification must often be sought from context.

a column vector. If A = (a_{ij}) and a_{ij} = 0 for all relevant i, j (that is, for 1 ≤ i ≤ m, 1 ≤ j ≤ n) then we write A = 0, or sometimes A = 0_{m×n}, and refer to it as the zero matrix. Let F be the set from which the entries of our matrices come. Its members are known as scalars. In the Prelim context, usually F = R, the set of real numbers, or F = C, the set of complex numbers (sometimes F = Q, the set of rational numbers). We define

Mm×n(F ) := {A | A is an m × n matrix with entries from F }.

Many authors write F^n for M_{1×n}(F) or F^m for M_{m×1}(F). The context will usually make clear what is intended.

1.3 The beginnings of matrix algebra

There is a natural notion of addition for m × n matrices. If A = (ai j) and B = (bi j) then A + B = (ci j), where ci j = ai j + bi j . Thus addition of matrices—which makes sense when they are of the same size, not otherwise—is “coordinatewise”.

Likewise, there is a natural notion of scalar multiplication. If A = (aij) ∈ Mm×n(F ) and λ is a scalar then λA is the member of Mm×n(F ) in which the (i, j) entry is λaij . The following is a collection of simple facts that are used all the time, unnoticed.

Theorem 1.1. For A, B, C ∈ Mm×n(F) and λ, µ ∈ F:
(1) A + 0_{m×n} = 0_{m×n} + A = A;
(2) A + B = B + A; [addition is commutative]
(3) A + (B + C) = (A + B) + C; [addition is associative]
(4) λ(µA) = (λµ)A;
(5) (λ + µ)A = λA + µA;
(6) λ(A + B) = λA + λB.

I do not propose to give formal proofs of these six assertions. If you are unfamiliar with this kind of reasoning, though, I recommend that you ensure that you can see exactly what a proof should involve:

Exercise 1.2. Write out formal proofs of a few of the assertions in Theorem 1.1.

Multiplication is more complicated. The product AB is defined only if the number of columns of A is the same as the number of rows of B. Thus if A is an m × n matrix then B must be an n × p matrix for some p. Then the product AB will be an m × p matrix, and if A = (a_{ij}), B = (b_{jk}) then AB = (c_{ik}), where
$$c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \cdots + a_{in}b_{nk}, \qquad\text{or in summation notation,}\qquad c_{ik} = \sum_{j=1}^{n} a_{ij}b_{jk}.$$
What this means visually is that to multiply A and B we run along the ith row of A (which contains n numbers a_{ij}) and down the kth column of B (in which there are n numbers b_{jk}), we multiply corresponding numbers and add. Thus (AB)_{ik} comes from row i of
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{i1} & a_{i2} & \cdots & a_{in}\\ \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
\qquad\text{and column } k \text{ of }\qquad
\begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1k} & \cdots & b_{1p}\\ b_{21} & b_{22} & \cdots & b_{2k} & \cdots & b_{2p}\\ \vdots & \vdots & & \vdots & & \vdots\\ b_{n1} & b_{n2} & \cdots & b_{nk} & \cdots & b_{np} \end{pmatrix}.$$
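To see the definition as an algorithm, here is a minimal Python sketch of the naive product (the function name and the list-of-rows representation are my own choices, not notation from the notes):

```python
def mat_mul(A, B):
    """Naive product of an m x n matrix A and an n x p matrix B,
    both stored as lists of rows; returns the m x p product."""
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("columns of A must equal rows of B")
    p = len(B[0])
    # c_ik = sum over j of a_ij * b_jk
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

# The 2x2 matrix from the first system above, applied to the column (1, 1):
print(mat_mul([[1, 2], [2, 3]], [[1], [1]]))   # [[3], [5]]
```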

Exercise 1.3. Calculate the following matrix products:
$$\begin{pmatrix}1&2\\2&3\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix};\qquad \begin{pmatrix}0&1\\2&3\end{pmatrix}\begin{pmatrix}1&2\\2&3\end{pmatrix};\qquad \begin{pmatrix}1&2&3\\2&3&4\\3&4&5\end{pmatrix}\begin{pmatrix}2\\4\\6\end{pmatrix}.$$

Note A. Using the definition, to multiply an m × n matrix and an n × p matrix the computation requires n numerical multiplications and n − 1 numerical additions for each of the mp entries of the product matrix, hence mnp numerical multiplications and m(n − 1)p numerical additions in total. Does it really require so many? It has been known since 1969 that one can do rather better. That is, when m, n, p are all at least 2, the product of two matrices can be calculated using fewer numerical operations than are needed for the naïve method. For large values of m, n, p it is not yet known how much better. This is a lively area of modern research on the boundary between pure mathematics and computer science.
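The 1969 result alluded to here is Strassen's algorithm. It rests on the observation that a 2 × 2 product can be formed with 7 multiplications rather than 8; applied recursively to matrices split into blocks, this beats the naive count for large sizes. A minimal Python sketch of the 2 × 2 scheme (my own illustration; the entries could themselves be matrix blocks):

```python
def strassen_2x2(A, B):
    """Strassen's scheme for a 2x2 product: 7 multiplications instead of 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Reassemble the four entries of the product from the seven products.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [2, 3]], [[0, 1], [2, 3]]))  # [[4, 7], [6, 11]]
```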

Theorem 1.2. Let A, B, C be matrices and λ a scalar. (1) If A ∈ Mm×n(F ), B ∈ Mn×p(F ) and C ∈ Mp×q(F ), then A(BC) = (AB)C [multiplication is associative].

(2) If A, B ∈ Mm×n(F) and C ∈ Mn×p(F), then (A + B)C = AC + BC; and if A ∈ Mm×n(F) and B, C ∈ Mn×p(F), then A(B + C) = AB + AC [distributive laws].

(3) If A ∈ Mm×n(F ) and B ∈ Mn×p(F ) then (λA)B = A(λB) = λ(AB) .

The proofs are not hard, though the computation to prove part (1) may look a little daunting. I suggest that you write out a proof in order to familiarise yourself with what is involved. Later in these lectures we will see a computation-free proof and a natural reason for the associativity of matrix multiplication.

Exercise 1.4. Write out formal proofs of the assertions in Theorem 1.2.

Note B. If A ∈ Mm×n(F) and B ∈ Mn×p(F) then both AB and BA are defined if and only if m = p. Then AB ∈ Mm×m(F) and BA ∈ Mn×n(F). Thus if m ≠ n then AB ≠ BA (they are not even matrices of the same size). Square matrices A and B of the same size are said to commute if AB = BA. For n ≥ 2 there are plenty of pairs A, B of n × n matrices that do not commute.

Exercise 1.5. For each n ≥ 2 find matrices A, B ∈ Mn×n(F) such that AB ≠ BA.

Exercise 1.6. Let A be the 2 × 2 matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$.
(a) Show that A commutes with $\begin{pmatrix}1&0\\0&0\end{pmatrix}$ if and only if A is diagonal (that is, b = c = 0).
(b) Which 2 × 2 matrices A commute with $\begin{pmatrix}0&1\\0&0\end{pmatrix}$?
(c) Use the results of (a) and (b) to find the matrices A that commute with every 2 × 2 matrix.

Note C. The n × n matrix I_n in which the (i, j) entry is 1 if i = j and 0 if i ≠ j:
$$(I_n)_{ij} = \delta_{ij} = \begin{cases}1 & \text{if } i = j,\\ 0 & \text{if } i \ne j,\end{cases}$$
is known as the identity matrix. An n × n matrix A is said to be invertible if there exists an n × n matrix B such that AB = BA = I_n. When this is the case, there is only one such matrix B, and one writes A^{-1} for B.

Theorem 1.3. Let A, B be invertible n × n matrices. Then AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.

I’ll not prove this here—but you’ll find it a valuable exercise. It is a special case of a very general phenomenon. Note the reversal of the factors. Hermann Weyl (in his book Symmetry if I remember correctly) points out how it is familiar in everyday life. To undo the process of putting on socks and then shoes, we take off the shoes first, then the socks.

Exercise 1.7. Prove Theorem 1.3.

Exercise 1.8. Show that if A is an m × n matrix and B is an n × p matrix then AIn = A and InB = B .

Exercise 1.9. Let A be an n×n matrix. Show that if AB = BA = In and AC = CA = In then B = C .

Exercise 1.10. The transpose A^{tr} of an m × n matrix A = (a_{ij}) is the n × m matrix in which the (i, j) entry is a_{ji}. Let A and B be m × n matrices, and let C be an n × p matrix.

(a) Show that (A + B)^{tr} = A^{tr} + B^{tr} and that (λA)^{tr} = λA^{tr} for scalars λ.
(b) Show that (AC)^{tr} = C^{tr}A^{tr}.
(c) Suppose that n = m and that A is invertible. Show that A^{tr} is invertible and that (A^{tr})^{-1} = (A^{-1})^{tr}.

Exercise 1.11. Let A and B denote n × n matrices with real entries. For each of the following assertions, find either a proof or a counterexample.
(a) A² − B² = (A − B)(A + B).
(b) If AB = 0 then A = 0 or B = 0.
(c) If AB = 0 then A and B cannot both be invertible.
(d) If A and B are invertible then A + B is invertible.
(e) If ABA = 0 and B is invertible then A² = 0.
[Hint: where the assertions are false there usually are counterexamples of size 2 × 2.]

Exercise 1.12. Let J_n be the n × n matrix with all entries equal to 1. Let α, β ∈ R with α ≠ 0 and α + nβ ≠ 0. Show that the matrix αI_n + βJ_n is invertible.
[Hint: note that J_n² = nJ_n; seek an inverse of αI_n + βJ_n of the form λI_n + µJ_n where λ, µ ∈ R.]
Find the inverse of $\begin{pmatrix}3&2&2&2\\2&3&2&2\\2&2&3&2\\2&2&2&3\end{pmatrix}$.

1.4 More on systems of linear equations

The system $\sum_{j=1}^{n} a_{ij}x_j = b_i$ (for 1 ≤ i ≤ m) of m linear equations in n unknowns x_1, . . . , x_n may be expressed as the single matrix equation

Ax = b, where A is the m × n matrix (a_{ij}) of coefficients, x is the n × 1 column vector with entries x_1, . . . , x_n, and b is the m × 1 column vector with entries b_1, . . . , b_m. In this notation the systems (⋆) of equations in §1.1 are
$$\begin{pmatrix}1&2\\2&3\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}3\\5\end{pmatrix},\qquad \begin{pmatrix}1&2&3\\2&3&4\\3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}6\\9\\12\end{pmatrix}$$
and
$$\begin{pmatrix}0&1&2&3\\1&2&3&4\\2&3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}0\\2\\0\end{pmatrix}.$$
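As a quick check on the matrix form, here is a small NumPy sketch (NumPy is my choice of illustration, not something the notes rely on) solving the first of these systems:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
b = np.array([3.0, 5.0])

# Solve Ax = b; for this invertible 2x2 system the solution is x = 1, y = 1.
x = np.linalg.solve(A, b)
print(x)   # [1. 1.]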

There is a systematic way to solve such systems. We divide every term of the first equation

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
by a_{11} to get an equivalent equation in which the coefficient of x_1 is 1. That is not possible if a_{11} = 0, but let's set this difficulty aside for a moment and assume that a_{11} ≠ 0. Next, for 2 ≤ i ≤ m we subtract† a_{i1} × Equation (1) from Equation (i). The output from this first set of moves is a new system of linear equations which has a special form. Its first equation may be construed as giving an expression for x_1 in terms of x_2, . . . , x_n, b_1, and the various coefficients a_{1j}. Its other m − 1 equations do not involve x_1. They form a system of m − 1 equations in n − 1 variables. Clearly, this is progress: the moves we have made (dividing the first equation through by a_{11} and then subtracting multiples of the new first equation from each of the others) do not change the solution set; moreover, it is fair to assume that the smaller problem of solving the set of m − 1 equations in the n − 1 variables x_2, . . . , x_n can be solved, and then we'll have the value of x_1 also.

But what if a_{11} = 0? If a_{i1} = 0 for all i in the range 1 ≤ i ≤ m then x_1 did not occur in any of our equations, and from the start we had the simpler problem of a system of m linear equations in n − 1 variables. Therefore we may suppose that there exists r such that 2 ≤ r ≤ m and a_{r1} ≠ 0. Then we simply interchange Equation (1) and Equation (r). The system of equations is unchanged. Only the order in which they are listed has changed. The solution set is unchanged.

This systematic process and variants of it are known as Gaussian elimination, referring to work of C. F. Gauss in the early 1800s. It was used several centuries before that in China and (later) in Europe.

1.5 Elementary Row Operations (EROs)

Now let's return to matrices. The original system Ax = b is completely determined by the so-called augmented coefficient matrix A|b that is obtained from the m × n matrix A by adjoining b as an (n + 1)th column. The operations on our systems of equations are elementary row operations (EROs) on the augmented matrix A|b:

P(r, s) for 1 ≤ r < s ≤ m: interchange row r and row s.

M(r, λ) for 1 ≤ r ≤ m and λ ≠ 0: multiply (every entry of) row r by λ.

S(r, s, λ) for 1 ≤ r ≤ m, 1 ≤ s ≤ m and r ≠ s: add λ times row r to row s.

Note. In my specification of M(r, λ) I used the word ‘multiply’, whereas in Gaussian elimination we actually divided by a11 . But of course, division by a11 is multiplication by 1/a11 . Similarly, in practice we subtract λ times one equation from another, but this is addition of −λ times the one to the other. It is both traditional and convenient to use multiplication and addition instead of division and subtraction in the definition of EROs.

†Can we multiply an equation by a number and subtract one equation from another? Technically not, I suppose. But it should be clear what is meant and this is no place for pedantry.
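Since the EROs are completely mechanical, they are easy to express in code. Here is a minimal Python sketch, with function names matching the notation above (the list-of-rows representation and 1-based row numbers are my own conventions); it is applied to the augmented matrix that appears in Example 1.1 below:

```python
def P(A, r, s):
    """ERO P(r, s): interchange row r and row s (rows numbered from 1)."""
    A[r - 1], A[s - 1] = A[s - 1], A[r - 1]

def M(A, r, lam):
    """ERO M(r, lam): multiply every entry of row r by the non-zero scalar lam."""
    A[r - 1] = [lam * x for x in A[r - 1]]

def S(A, r, s, lam):
    """ERO S(r, s, lam): add lam times row r to row s."""
    A[s - 1] = [x + lam * y for x, y in zip(A[s - 1], A[r - 1])]

# The reduction of Example 1.1 below, step by step:
A = [[0, 1, 2, 3, 0], [1, 2, 3, 4, 2], [2, 3, 4, 5, 0]]
P(A, 1, 2); S(A, 1, 3, -2); S(A, 2, 3, 1); M(A, 3, -1/4)
print(A)   # A is now the echelon form obtained in Example 1.1
```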

From the procedure described above it should be clear that the augmented matrix A|b can be changed using EROs to an m × (n + 1) matrix E|d which has the following form, known as echelon form:

• if row r of E has any non-zero entries then the first of these is 1;

• if 1 ≤ r < s ≤ m and rows r, s of E contain non-zero entries, the first of which are e_{rj} and e_{sk} respectively, then j < k (the leading entries of lower rows occur to the right of those in higher rows);

• if row r of E contains non-zero entries and row s does not (that is, e_{sj} = 0 for 1 ≤ j ≤ n) then r < s—that is, zero rows (if any) appear below all the non-zero rows.

Example 1.1. Here is a simple example of reduction to echelon form using EROs.

0 1 2 3 0 1 2 3 4 2 P (1,2) 1 2 3 4 2 −→ 0 1 2 3 0 2 3 4 5 0 2 3 4 5 0 1 2 3 4 2 S(1,3,−2) −→ 0 1 2 3 0 0 −1 −2 −3 −4 1 2 3 4 2 S(2,3,1) −→ 0 1 2 3 2 0 0 0 0 −4   1 1 2 3 4 2 M(3,− 4 ) −→ 0 1 2 3 2 0 0 0 0 1

Returning to equations, in this example we have manipulated the augmented matrix of the system
$$\left\{\begin{aligned} y + 2z + 3w &= 0\\ x + 2y + 3z + 4w &= 2\\ 2x + 3y + 4z + 5w &= 0\end{aligned}\right.$$
the third of those proposed at (⋆). The significance of the EROs is the following sequence of operations on equations:
(1) interchange the first and second equations;
(2) subtract 2 times the (new) first equation from the third equation;
(3) add the (current) second equation to the (current) third equation;
(4) multiply both sides of the (current) third equation by −1/4.
It is not hard to see that such manipulations do not change the set of solutions. But now the third equation has become 0x + 0y + 0z + 0w = 1, which clearly has no solutions. Therefore the original set of three simultaneous equations had no solutions—it is inconsistent. The point of all this is that it is completely general and sufficiently routine that it can easily be mechanised. A machine can work with the augmented matrix of a system of simultaneous linear equations. If the final few rows of the echelon form are zero they can be deleted (they say that 0 = 0, which is certainly true, but is not helpful). Then all rows are non-zero. If the final row has its leading 1 in the (n + 1)st position (the final position) then the equations are inconsistent. Otherwise we can choose arbitrary values for those variables x_j for which the

jth position does not occur as the leading position of a non-zero row; then, working up from the last equation to the first, if the ith row has its leading 1 at the (i, j) position then the corresponding equation is of the form $x_j + \sum_{k>j} c_k x_k = d_i$, from which the value of x_j may be found.
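The reduction to echelon form described above can be mechanised directly. The following Python sketch is one illustrative way of doing it (the names and the tolerance eps are mine; a serious implementation would take more care over rounding error):

```python
def echelon_form(A, eps=1e-12):
    """Reduce a matrix (list of rows of numbers) to echelon form by EROs."""
    A = [list(map(float, row)) for row in A]   # work on a copy
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a non-zero entry in this column
        r = next((i for i in range(pivot_row, m) if abs(A[i][col]) > eps), None)
        if r is None:
            continue                                        # no pivot here
        A[pivot_row], A[r] = A[r], A[pivot_row]             # ERO P
        lead = A[pivot_row][col]
        A[pivot_row] = [x / lead for x in A[pivot_row]]     # ERO M: leading entry 1
        for i in range(pivot_row + 1, m):                   # ERO S: clear below
            factor = A[i][col]
            A[i] = [x - factor * y for x, y in zip(A[i], A[pivot_row])]
        pivot_row += 1
    return A

# The augmented matrix of the second of the systems from section 1.1:
print(echelon_form([[1, 2, 3, 6], [2, 3, 4, 9], [3, 4, 5, 12]]))
# echelon form: rows (1, 2, 3, 6), (0, 1, 2, 3), (0, 0, 0, 0)
```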

Exercise 1.13. Use elementary row operations (EROs) to find an echelon form for each of the following matrices:

2 4 −3 0 1 2 3 0 1 2 3 0 (a) 1 −4 3 0 ; (b) 2 3 4 1 ; (c) 2 3 4 2 . 3 −5 2 1 3 4 5 2 3 4 5 2

[Compare Exercise 1.1]

Exercise 1.14. For each α ∈ R, find an echelon form for the matrix
$$\begin{pmatrix}1&2&-3&-2&4&1\\2&5&-8&-1&6&2\\1&4&-7&4&0&\alpha\end{pmatrix}.$$

Use your result either to solve the following system of linear equations over R, or to find values of α for which it has no solution:
$$\left\{\begin{aligned} x_1 + 2x_2 - 3x_3 - 2x_4 + 4x_5 &= 1\\ 2x_1 + 5x_2 - 8x_3 - x_4 + 6x_5 &= 2\\ x_1 + 4x_2 - 7x_3 + 4x_4 &= \alpha\end{aligned}\right..$$

Exercise 1.15. Is there an m × n matrix that may be reduced by EROs to different echelon forms?

Exercise 1.16. Show that if the m × n matrices A, B can be reduced to the same matrix E in echelon form, then there is a sequence of EROs that changes A into B .


2 Vector spaces

2.1 Vectors as we know them

You probably came across vectors in two and three dimensions before you came to Oxford. They have magnitude (‘length’) and direction. This is what makes them so useful in geometry and mechanics. If we take our vectors to emanate from the origin then they are determined by their further end. Thus in 2D a vector is determined by the pair (x, y) of coordinates and in 3D a vector is determined by the triple (x, y, z) of coordinates of that point in space. A vector may be multiplied by a scalar λ: if λ > 0 then the effect is simply to magnify it by a factor λ without changing its direction; if λ < 0 the effect is to scale by a factor |λ| and to reverse the direction. Both in 2D and in 3D, vectors may be added geometrically using the parallelogram rule. Algebraically this is simply coordinatewise addition: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and (x1, y1, z1) + (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2). In this chapter a very important and fruitful abstract version of this theory is to be introduced.

2.2 Vector Spaces

As in Section 1.2, we need a set F of numbers of some kind. Technically, all we need is that F is a Field, that is, a system in which addition, subtraction, multiplication and division (by non-zero members) are all defined and in which arithmetic works exactly the way we expect it to. The study of fields is one of the beautiful topics available in the second-year courses. For Prelim Linear Algebra we take F to be R (usually) or F to be C (sometimes).

Definition 2.1. Let F be a field (R or C). A vector space over F is a set V with distinguished element 0V, together with maps V × V → V, (u, v) ↦ u + v (called addition) and F × V → V, (λ, v) ↦ λv (called scalar multiplication) satisfying the following conditions:
(VS1) for all u, v ∈ V, u + v = v + u;
(VS2) for all u, v, w ∈ V, u + (v + w) = (u + v) + w;

(VS3) for all v ∈ V , v + 0V = 0V + v = v;

(VS4) for each v ∈ V there exists w ∈ V such that v + w = w + v = 0V ; (VS5) for all u, v ∈ V and for all λ ∈ F , λ(u + v) = λu + λv; (VS6) for all v ∈ V and for all λ, µ ∈ F ,(λ + µ)v = λv + µv; (VS7) for all v ∈ V and for all λ, µ ∈ F ,(λµ)v = λ(µv); (VS8) for all v ∈ V , 1v = v.

Example 2.1. Vectors as we know them in 2D satisfy conditions (VS1)–(VS8) so form a vector space. This is R^2. Vectors as we know them in 3D satisfy conditions (VS1)–(VS8) so form a vector space, R^3. Generally, n-vectors (x1, x2, . . . , xn) form a vector space R^n (see Example 2.2 below).

Note 1. Elements of V are called vectors, and elements of F are called scalars. When F = R we speak of a real vector space, and when F = C we speak of a complex vector space. In general one speaks of an F vector space. In this course we’ll focus almost entirely on real vector spaces. Nevertheless, almost all of the theory works just as well for any field of scalars. Only in Ch. 7, when we discuss inner products, will it be essential that the field of scalars is R.

Note 2. The existence of 0V ∈ V obviously implies that V ≠ ∅. Also, one may prove that no other element than 0V has the property described in condition (VS3). That is, if z ∈ V and for all v ∈ V, v + z = z + v = v, then z = 0V. I recommend that, if you have never seen the reasoning before, you do it. It is the sort of proof that one should write out once in one’s life and no more than once (unless an examiner insists). Often we write 0 for 0V when no ambiguity can arise.

Note 3. One may prove that in condition (VS4), the element w is uniquely determined by v. That is, if w1, w2 ∈ V and v + w1 = w1 + v = 0V and v + w2 = w2 + v = 0V then w1 = w2 . Again, I recommend that, if you have never seen this proved before, you do it. It is the sort of proof that one should write out once in one’s life and no more than once (unless an examiner insists). This element w is always written −v; it is the ‘additive inverse’ of v.

Here are examples of such proofs. They are exactly the same if R is replaced by any other field F . Note that I write 0, rather than 0R , for the zero scalar. The number 0, which we all know and love, has a right to its traditional notation and if ambiguity threatens, it should always be resolvable from the context.

Theorem 2.1. Let V be a vector space over R. For all λ ∈ R and all v ∈ V:
(1) λ0V = 0V;
(2) 0v = 0V;
(3) (−λ)v = −(λv) = λ(−v);

(4) if λv = 0V then either λ = 0 or v = 0V .

Proof. (1) By (VS3) and (VS5) we have λ0V = λ(0V + 0V) = λ0V + λ0V. Add −(λ0V) to both sides and use (VS2), (VS4) and (VS3). For (2), use a similar idea, based on the fact that 0 = 0 + 0 (in R).
(3) Using (VS5) and (VS4) and part (1) of this theorem, we get that

λv + λ(−v) = λ(v + (−v)) = λ0V = 0V. Adding −(λv) to both sides and using (VS2), (VS4) and (VS3) again we find that λ(−v) = −(λv). Similarly (using the axioms, and part (2) of this theorem) λv + (−λ)v = (λ + (−λ))v = 0v = 0V, and it follows as before that (−λ)v = −(λv).
(4) Suppose that λv = 0V and that λ ≠ 0. Multiplying by λ^{-1} and using (VS8) and (VS7) we get v = 1v = (λ^{-1}λ)v = λ^{-1}(λv) = λ^{-1}0V. Then by part (1) of this theorem, v = 0V. Therefore if λv = 0V then either λ = 0 or v = 0V.

Exercise 2.1. Prove from the vector space axioms listed in Definition 2.1 that if V is a vector space, v, z ∈ V and v + z = v then z = 0V . Exercise 2.2. Prove from the vector space axioms listed in Definition 2.1 that if V is a vector space, if v, w1, w2 ∈ V , v +w1 = w1 +v = 0V and v +w2 = w2 +v = 0V then w1 = w2 . Exercise 2.3. Let V be a vector space over R. Which of those vector space axioms are required in formal proofs of the following theorems?

(i) If λ ∈ R then λ0V = 0V. (ii) For λ, µ ∈ R and v ∈ V, if λv = µv and λ ≠ µ then v = 0V.

2.3 Subspaces

Here we focus on a very important way of getting new vector spaces from known ones.

Definition 2.2. Let V be a vector space. A subspace of V is a non-empty subset that is closed under addition and scalar multiplication, that is, a subset U satisfying the following conditions:
(SS1) U ≠ ∅;
(SS2) for all u, v ∈ U, u + v ∈ U;
(SS3) for all u ∈ U and all scalars λ, λu ∈ U.

Note. The sets {0V } and V are subspaces of a vector space V . The former is called the ‘trivial’ subspace or the ‘zero’ subspace; subspaces other than V are known as ‘proper’ subspaces.

The following is a condition for a subset to be a subspace that is sometimes known as ‘the subspace test’.

Theorem 2.2 [Subspace Test]. If V is a vector space and U is a subset of V then U is a subspace if and only if

(SST): 0V ∈ U and for all u, v ∈ U , and all scalars λ, λu + v ∈ U.

Proof. Suppose first that U is a subspace of a vector space V. Since U ≠ ∅ there exists u ∈ U. Then by (SS3) also 0u ∈ U, that is (by Theorem 2.1(2)) 0V ∈ U. Also, if u, v ∈ U and λ is a scalar then λu ∈ U by (SS3) and so λu + v ∈ U by (SS2). Now suppose conversely that V is a vector space and U a subset satisfying (SST). Then (SS1) holds since 0V ∈ U. Also, taking λ := 1 in the second part of (SST) we see that if u, v ∈ U then u + v ∈ U, and taking v := 0V we see that if u ∈ U and λ is a scalar then λu ∈ U. Thus (SS2) and (SS3) also hold, so U is a subspace.

Notation. If V is a vector space and U is a subspace then we usually write U ≤ V. We reserve U ⊆ V to mean that U is a subset of V which is not known to be a subspace.

Exercise 2.4. Show that R1 (see note to Example 2.2) has no non-zero proper subspaces.

Exercise 2.5. Identify the non-zero proper subspaces of R2 geometrically.

Exercise 2.6. Let V be a vector space and let u1, u2, . . . , um ∈ V . Define

U := {u ∈ V | ∃ α1, α2, . . . , αm scalars : u = α1u1 + α2u2 + ··· + αmum}.

A little less formally, U = {α1u1 + α2u2 + ··· + αmum ∈ V | α1, α2, . . . , αm are scalars}. Show that U is a subspace of V .

Note. This is an important example to which we will return in the next chapter.

Theorem 2.3. Let V be a vector space and let U ≤ V. Then U is a vector space. Also, if W ≤ U then W ≤ V (that is, subspaces of subspaces are subspaces).

Proof. Since U is closed under addition, + (restricted to U) is a map U × U → U, and since U is closed under multiplication by scalars, that multiplication (restricted to U) is a map F × U → U where F is the field of scalars. Conditions (VS1) and (VS2) hold in U since they hold in V; by Theorem 2.2, 0V ∈ U, and this element has the properties required of 0U in (VS3); by Theorem 2.1(3) with λ := −1 ∈ F, if v ∈ U then also −v ∈ U and so (VS4) holds; finally, (VS5), (VS6), (VS7) and (VS8) all hold in U because they hold in V. The second assertion is an immediate consequence of the definition of the concept of subspace.

Exercise 2.7. For each of the following vector spaces and each of the specified subsets, determine whether or not the subset is a subspace. That is, in each case, either verify the conditions defining a subspace (or use the subspace test), or show by an example that one of the conditions does not hold.

(a) V = R4 : (i) {(a, b, c, d) ∈ V | a + b = c + d}; (ii) {(a, b, c, d) ∈ V | a + b = 1}; (iii) {(a, b, c, d) ∈ V | a2 = b2}.

(b) V = Mn×n(R) : (i) the set of upper triangular matrices; (ii) the set of invertible matrices; (iii) the set of matrices which are not invertible.

Note: an n × n matrix (ai j) is said to be upper triangular if ai j = 0 when i > j .

Definition 2.3. Let V be a vector space, let A, B ⊆ V (subsets now, not necessarily subspaces), and let λ be a scalar. We define

A + B := {v ∈ V | ∃a ∈ A, ∃b ∈ B : v = a + b}; λA := {v ∈ V | ∃a ∈ A : v = λa}.

Recall also from the language of sets that A ∩ B = {v ∈ V | v ∈ A and v ∈ B}.

More concisely, A + B = {a + b | a ∈ A, b ∈ B} and λA = {λa | a ∈ A}. I have written the definitions of A + B and λA in a more formal way simply in order to emphasize that, logically, existential quantifiers are involved.

Theorem 2.4. Let V be a vector space and let U, W ≤ V. Then U + W ≤ V and U ∩ W ≤ V.

Although this is a useful, and therefore important, fact, I propose to skip the proof, or rather, to leave the proof as an exercise.

Exercise 2.8. Write out a proof of Theorem 2.4.

Exercise 2.9. Let V be a vector space and let U, W ≤ V. Show that U \ W is never a subspace of V.

Exercise 2.10. Let V be a vector space and let U, W ≤ V. Show that U ∪ W is a subspace of V if and only if U ⊆ W or W ⊆ U.

Exercise 2.11. Let L, M and N be subspaces of a vector space V.
(a) Prove that (L ∩ M) + (L ∩ N) ⊆ L ∩ (M + N);
(b) Give an example of subspaces L, M, N of R^2 where (L ∩ M) + (L ∩ N) ≠ L ∩ (M + N).
(c) Is it true in general that L + (M ∩ N) = (L + M) ∩ (L + N)? Either give a proof, or give a counterexample.

2.4 Examples of vector spaces

A definition is all very well, but it has little value without context and examples. Vector spaces appear everywhere in mathematics. Here is a small selection of examples. You are strongly encouraged to keep a diary for the next few years in which you note each time you come across a new example.

Example 2.2. As has already been mentioned in Example 2.1, the set R^n of n-tuples of real numbers is a real vector space under componentwise addition and scalar multiplication,

(x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn),

λ(x1, . . . , xn) = (λx1, . . . , λxn).

We identify R^2 with the Cartesian plane, and R^3 with three-dimensional space. The case n = 1 is also legitimate: R^1 is a real vector space, identifiable geometrically with the real line R. We usually write the elements of R^n as row vectors so that R^n = M_{1×n}(R). Often it is useful to write n-tuples of real numbers as columns, that is, as members of M_{n×1}(R). Some people write (R^n)^{tr} for this space. Often there is so little difference that there is no need to specify, but there are some situations where it is helpful to distinguish. I might write R^n_{rows} and R^n_{cols} to try to clarify what I mean, but this is not standard notation.

Note. In these notes and in this term’s exercise sheets I shall write R^1 for R considered as a real vector space. Most professional mathematicians use R to denote the set, the additive group, the field, the vector space indiscriminately. Until we are clear which we mean, however, it is probably best to maintain a distinction. Next term you may well find that you must work out from the context what it is that is intended.

Example 2.3. The field C is a real vector space. As such it is just the same as R^2.

Example 2.4. For positive integers m, n, the set Mm×n(R) is a real vector space (see Theorem 1.1).

Example 2.5. Let (H) be a system of homogeneous linear equations with real coefficients:
$$(H):\qquad\left\{\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= 0\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= 0\\ &\ \ \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= 0.\end{aligned}\right.$$

It is the fact that the linear forms are all equated to 0, not to other real numbers, that qualifies such equations as ‘homogeneous’. Let V be the set of solutions of these equations. Then V is a real vector space. To see this, we’ll find it convenient to use matrix notation and a little matrix algebra. Write (H) as

Ax = 0 where A := (aij) ∈ Mm×n(R), x is an n × 1 column vector of variables, and 0 is shorthand for 0n×1 .

Members of V, that is solutions of (H), may be organised as n × 1 column vectors of real numbers. If u, v ∈ V, so that Au = Av = 0, then A(u + v) = Au + Av = 0 + 0 = 0. Also, if v ∈ V and λ ∈ R then A(λv) = λ(Av) = λ0 = 0. What this means is that V is a subspace of R^n and is therefore a vector space in its own right.

Example 2.6. Let n be a natural number. The set of polynomials c_0 + c_1x + ··· + c_nx^n in a variable x with real coefficients and of degree ≤ n is a real vector space R_n[x].

Example 2.7. Let X be any set. Define R^X := {f | f : X → R}, the set of all real-valued functions on X. This is a vector space with the usual pointwise addition of functions and pointwise multiplication of a function by a real number:

for all x ∈ X ,(f + g)(x) = f(x) + g(x) and (λf)(x) = λf(x).

Some special cases are worthy of special consideration. For a natural number n, define [n] := {1, 2, . . . , n}. For a function f : [n] → R, define x_i := f(i). The n-tuple (x_1, x_2, . . . , x_n) obviously determines f. Since addition and scalar multiplication of functions work componentwise, they correspond with addition and scalar multiplication of n-tuples. Thus the function space R^{[n]} is the same as the vector space R^n. Similarly, since [m] × [n] is the set of pairs (i, j) of integers with 1 ≤ i ≤ m, 1 ≤ j ≤ n, and since any function f : [m] × [n] → R determines and is determined by its values at members (i, j) of [m] × [n], the function space R^{[m]×[n]} may be identified with the vector space M_{m×n}(R). Another important case is R^R, the vector space of all real-valued functions of a real variable.

Example 2.8. Sequences (a_n) of real numbers form a vector space. They are added term-by-term, and a sequence may be multiplied term-by-term by a real number. This vector space is naturally identifiable with the function space R^N. In the course Analysis I we make an extensive study of convergence of real sequences. You’ll find that the set of convergent sequences is a vector space, a subspace of R^N, in fact.

Exercise 2.12. Show that the set of all sequences (u_n) of real numbers satisfying the recurrence relation u_{n+1} = u_n + u_{n−1} (for n ≥ 1) is a real vector space (a subspace of the space of all sequences of real numbers).

Exercise 2.13. Let a1, . . . , ap be real numbers. Show that the set of all sequences (un) of real numbers satisfying the homogeneous linear recurrence relation

u_{n+p} + a_1u_{n+p−1} + ··· + a_pu_n = 0 (for n ≥ 0)

is a real vector space (a subspace of the space of all sequences of real numbers).

Example 2.9. The course Analysis II contains formal definitions of what is meant by continuity of a function R → R, and what is meant by differentiability of a function. You will find that {f ∈ R^R | f is continuous} and {f ∈ R^R | f is differentiable} are vector spaces—they are very important subspaces of R^R.

Example 2.10. Consider functions y of the real variable x that are twice differentiable and satisfy the homogeneous linear differential equation y'' + a(x)y' + b(x)y = 0. Such an equation is described as linear (because y and its derivatives only occur to the first power and are not multiplied together), and homogeneous because of the occurrence of 0 on the right hand side (compare Example 2.5). You may well have come across the special class of such equations in which the coefficients a and b are constant before you arrived in Oxford. They are important in mechanics and physics. The point to be made here is that the set S of solutions of a homogeneous linear differential equation is a vector space, a subspace of R^R. For, if w = u + λv where u, v ∈ S and λ ∈ R, then

$$w'' + a(x)w' + b(x)w = (u'' + \lambda v'') + a(x)(u' + \lambda v') + b(x)(u + \lambda v) = \bigl(u'' + a(x)u' + b(x)u\bigr) + \lambda\bigl(v'' + a(x)v' + b(x)v\bigr) = 0 + \lambda 0 = 0.$$

Thus w ∈ S, and so by the subspace test, Theorem 2.2, S is a subspace of R^R. The example here is a differential equation of order 2, meaning that it involves differential coefficients up to the second. But of course, the same theory works just as well for homogeneous linear differential equations of any order.

2.5 Subspaces of R^1, R^2 and R^3

Example 2.11. Subspaces of R^1. Let V := R^1 and let U be a non-trivial subspace. Then there exists a ∈ U, a ≠ 0. For any x ∈ R, if λ := x/a then x = λa ∈ U. Therefore U = V. Thus R^1 has no non-zero proper subspaces.

Example 2.12. Subspaces of R^2. Let V := R^2 and let U be a non-trivial subspace. Then there exists u ∈ U, u ≠ 0, and {λu | λ ∈ R} = Span(u) ⊆ U. Perhaps Span(u) = U: in this case, if u = (a, b) then either a ≠ 0, in which case, if m := b/a then Span(u) = {(x, y) ∈ R^2 | y = mx}, or a = 0, in which case Span(u) = {(0, y) | y ∈ R}. Thus if U is spanned by a single element then it is represented geometrically by a line through the origin in R^2, and moreover, since every line either has a slope or is parallel to the y-axis, every line through the origin represents a subspace. Suppose now that U ≠ Span(u), so that there exists v = (c, d) ∈ U \ Span(u). Applying EROs to the matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ produces new matrices whose rows lie in U. An echelon form of our matrix must be $\begin{pmatrix}1&e\\0&1\end{pmatrix}$, however, and from its two rows we can get the vectors (1, 0) and (0, 1), hence any vector (x, y) ∈ R^2. Thus the only non-zero proper subspaces of R^2 are those represented geometrically by lines through the origin.

Example 2.13. Subspaces of R^3. A similar calculation with EROs may be used to show that the only non-trivial proper subspaces of R^3 are those represented geometrically by lines and by planes through the origin. I leave the proof as an exercise.

Exercise 2.14. Let U be a non-trivial proper subspace of R^3. Show that U is represented geometrically either by a line through the origin or by a plane through the origin of R^3.


3 Bases of vector spaces

3.1 Spanning sets

Let’s start with Exercise 2.6: let V be a vector space and let u1, u2, . . . , um ∈ V . Define

U := {α1u1 + α2u2 + ··· + αmum ∈ V | α1, α2, . . . , αm are scalars}. Here is a proof that U is a subspace of V .

Taking α1 := α2 := ··· := αm := 0 we see that 0V ∈ U . Now let u, v ∈ U , and let λ be a scalar. By definition of the set U , there exist scalars α1, α2, . . . , αm and β1, β2, . . . , βm such that u = α1u1 + α2u2 + ··· + αmum and v = β1u1 + β2u2 + ··· + βmum .

Then u + λv = γ_1u_1 + γ_2u_2 + ··· + γ_mu_m where γ_i := α_i + λβ_i for 1 ≤ i ≤ m. Thus u + λv ∈ U, so U is a subspace by the subspace test, Theorem 2.2.

Definition 3.1. Let V be a real vector space and let u_1, u_2, . . . , u_m ∈ V. A vector of the form α_1u_1 + α_2u_2 + ··· + α_mu_m where α_1, α_2, . . . , α_m ∈ R is known as a linear combination of the given vectors. The space defined above, the set of their linear combinations, is known as their span. There is no standard notation, or rather, there are many. Some authors use Span(u_1, u_2, . . . , u_m), some write Sp(u_1, u_2, . . . , u_m), others use ⟨u_1, u_2, . . . , u_m⟩ (which, however, can be ambiguous when m = 2 since then angle brackets can have a different meaning) or ⟨u_1, u_2, . . . , u_m⟩_R (or, more generally, ⟨u_1, u_2, . . . , u_m⟩_F for vector spaces over a field F). An important extension of the concept is this. If S ⊆ V then Span S or Sp S or ⟨S⟩_F will denote the set of linear combinations α_1u_1 + α_2u_2 + ··· + α_mu_m where α_1, α_2, . . . , α_m ∈ R and u_1, u_2, . . . , u_m ∈ S. It is important that, even if S is an infinite set, each linear combination involves only finitely many of its members. It is a small (but slightly non-trivial) extension of Exercise 2.6, which I leave to the reader, to show that for any set S, the set Span S is a subspace of V. Note that since a sum without summands, $\sum_{u_i \in \emptyset} \alpha_i u_i$, is naturally taken to be 0 (in this context 0V), Span ∅ = {0V}, the trivial subspace.

Exercise 3.1. Write out a proof that for any set S of vectors in a vector space V, the set Span S is a subspace.

Exercise 3.2. Let S and T be subsets of a vector space V . Which of the following state- ments are true? Either give a proof, or find a counterexample. (a) Span (S ∩ T ) = Span (S) ∩ Span (T ); (b) Span (S ∪ T ) = Span (S) ∪ Span (T ); (c) Span (S ∪ T ) = Span (S) + Span (T ).

Definition 3.2. Let V be a real vector space. If there exists a finite set u1, u2, . . . , um of vectors such that Span (u1, u2, . . . , um) = V then V is said to be finite-dimensional.

The reason for this terminology will become clear later in this chapter. Almost all of the vector spaces that are treated in Prelim Linear Algebra courses are finite-dimensional. Most of the vector spaces, such as function spaces R^X and sequence spaces R^N, that are valuable in Analysis are infinite-dimensional.

Exercise 3.3. Find finite spanning sets for R^n and for M_{m×n}(R).

3.2 Linear independence

We begin with a simple example.

Example 3.1. Consider the three vectors (1, 3, 4, 0), (0, 0, 1, 3), (0, 0, 0, 1) in R^4. Call them v_1, v_2, v_3 respectively. Which linear combinations α_1v_1 + α_2v_2 + α_3v_3 represent the zero vector (which I’ll write here as 0, rather than 0_{R^4})? Well, of course if α_1 = α_2 = α_3 = 0 then (by Theorem 2.1 and Definition 2.1 (VS4)—conscientiously referenced here, but for the last time) 0v_1 + 0v_2 + 0v_3 = 0. But are there other linear combinations that yield 0? Well, the equation α_1v_1 + α_2v_2 + α_3v_3 = 0 is

α1(1, 3, 4, 0) + α2(0, 0, 1, 3) + α3(0, 0, 0, 1) = (0, 0, 0, 0).

Comparing first coordinates we see that α1 = 0; from third coordinates we then get that α2 = 0; and finally, from the fourth coordinate we find that α3 = 0. Thus in this case it is only the trivial linear combination that represents 0.

This is an instance of a phenomenon that is so very important that it warrants a name.

Definition 3.3. Let V be a real vector space. Vectors v1, v2, . . . , vm in V are said to be linearly independent if the only way to write 0V as a linear combination of them is the trivial combination; that is, if

α1v1 + α2v2 + ··· + αmvm = 0V =⇒ α1 = α2 = ··· = αm = 0 .

A subset S of V is said to be linearly independent if every finite subset of S is linearly independent. A collection of vectors that is not linearly independent is (not surprisingly) said to be linearly dependent. Thus vectors v1, v2, . . . , vm in V are linearly dependent if and only if there exist scalars α1, α2, . . . , αm , not all of which are 0, such that α1v1 + α2v2 + ··· + αmvm = 0V .
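Linear independence of a list of vectors in R^n can be tested mechanically with the row-reduction machinery of §1.5: the vectors are linearly independent exactly when an echelon form of the matrix whose rows they are has no zero rows, that is, when its rank (in the language of Chapter 6) equals the number of vectors. A small NumPy sketch (numpy.linalg.matrix_rank is my choice of tool here, not something the notes rely on):

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given vectors (as rows) are linearly independent."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# The three vectors of Example 3.1:
print(linearly_independent([(1, 3, 4, 0), (0, 0, 1, 3), (0, 0, 0, 1)]))  # True
```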

Example 3.2. Let V be a real vector space, let v, v1, v2, . . . , vm ∈ V and let S ⊆ V .

• If 0V ∈ S then S is linearly dependent.

• If two of v1, v2, . . . , vm are equal then these m vectors are linearly dependent.

• The singleton set {v} is linearly independent if and only if v 6= 0V .

Exercise 3.4. Write out proofs of the three assertions in Example 3.2.

Exercise 3.5. Which of the following sets of vectors in R3 are linearly independent? (i) {(1, 3, 0), (2, −3, 4), (3, 0, 4)}, (ii) {(1, 2, 3), (2, 3, 1), (3, 1, 2)}.

Exercise 3.6. Let V := RR = {f : R → R}. Which of the following sets of vectors in V are linearly independent? (i) {f, g, h} where f(x) = 5x2 + x + 1, g(x) = 2x + 3 and h(x) = x2 − 1. (ii) {p, q, r} where p(x) = cos2(x), q(x) = cos(2x) and r(x) = 1.

Exercise 3.7. Suppose that u, v, w are linearly independent. (i) Show that u + v, u − v, u − 2v + w are also linearly independent. (ii) Are u + v − 3w, u + 3v − w, v + w linearly independent?

Exercise 3.8. Let {v_1, v_2, . . . , v_n} be a linearly independent set of n vectors in V. Prove that each of the following sets is also linearly independent:

(i) {c_1v_1, c_2v_2, . . . , c_nv_n} where c_i ≠ 0 for 1 ≤ i ≤ n; (ii) {w_1, w_2, . . . , w_n} where w_i = v_i + v_1 for 1 ≤ i ≤ n.

The following little result will prove to be of great use later.

Lemma 3.1. Let v_1, v_2, . . . , v_m be linearly independent vectors in a vector space V and let v_{m+1} be a vector in V such that v_{m+1} ∉ Span(v_1, v_2, . . . , v_m). Then v_1, v_2, . . . , v_m, v_{m+1} are linearly independent.

Proof. Suppose that α1v1 + α2v2 + ··· + αmvm + αm+1vm+1 = 0V , where α1, α2,..., αm, αm+1 are scalars. We need to show that each of these scalars is 0. If αm+1 were not 0 then we could divide by it and we would have

$$v_{m+1} = -\left(\frac{\alpha_1}{\alpha_{m+1}}v_1 + \frac{\alpha_2}{\alpha_{m+1}}v_2 + \cdots + \frac{\alpha_m}{\alpha_{m+1}}v_m\right),$$
which is not possible since v_{m+1} ∉ Span(v_1, v_2, . . . , v_m). Therefore α_{m+1} = 0. But now we are left with the relation α_1v_1 + α_2v_2 + ··· + α_mv_m = 0V, and so α_1 = α_2 = ··· = α_m = 0 since v_1, v_2, . . . , v_m are linearly independent. Thus all the coefficients are 0, as required.

3.3 Bases

Definition 3.4. A basis of a vector space V is a linearly independent spanning set of V .

Example 3.3. Let e_i be the row vector that has coordinate 1 in its ith place, 0 in all other places. The vectors e_1, e_2, . . . , e_n form a basis of R^n. For, if $\sum \alpha_i e_i = 0$ then, examining the ith coordinate, we find that α_i = 0, so e_1, e_2, . . . , e_n are linearly independent. And, since $(a_1, a_2, \ldots, a_n) = \sum a_i e_i$, they span R^n. This is known as the standard basis of R^n.

Exercise 3.9. Write down an analogous standard basis for Mm×n(R).

Theorem 3.2. Let V be a (real) vector space and let S = {v_1, . . . , v_n} ⊆ V. Then S is a basis of V if and only if every vector v ∈ V has a unique expression as a linear combination of the members of S.
Equivalently, S is a basis of V if and only if the map R^n → V given by (a_1, a_2, . . . , a_n) ↦ a_1v_1 + a_2v_2 + ··· + a_nv_n is both onto and one-one, that is, a bijection.

Proof. Suppose first that S is a basis of V. Let v ∈ V: since S is a spanning set there exist a_1, a_2, . . . , a_n ∈ R such that v = a_1v_1 + a_2v_2 + ··· + a_nv_n; moreover, if also b_1, b_2, . . . , b_n ∈ R and v = b_1v_1 + b_2v_2 + ··· + b_nv_n then (b_1 − a_1)v_1 + (b_2 − a_2)v_2 + ··· + (b_n − a_n)v_n = v − v = 0V and so b_i = a_i for 1 ≤ i ≤ n since S is linearly independent. For the converse suppose that every vector v ∈ V has a unique expression as a linear combination of the members of S. Since every vector has such an expression, S is a spanning set for V; also, since the expression of 0V as a linear combination is unique, if α_1v_1 + α_2v_2 + ··· + α_nv_n = 0V then α_1 = α_2 = ··· = α_n = 0, and so S is linearly independent. Thus S is a basis.

When do bases exist? The following result guarantees that every finite-dimensional real vector space has a basis.

19 Theorem 3.3. Let V be a finite-dimensional vector space. Every finite spanning set contains a linearly independent spanning set, that is, a basis of V .

Proof. Let S be a finite spanning set for V. Choose a subset T of S that is of largest size subject to the condition that it is linearly independent (this may, of course, be S itself). Suppose, seeking a contradiction, that T did not span V. Then Span S ⊈ Span T, so S ⊈ Span T and there must exist v ∈ S \ Span T. But then by Lemma 3.1 the set T ∪ {v} would be linearly independent, contrary to the maximality of T. Therefore T must be a spanning set for V, as required.

Exercise 3.10. Let S be a finite spanning set for a vector space V . Let T be a smallest subset of S that spans V . Show that T is linearly independent, hence a basis of V .

Exercise 3.11. Let V := {(x_1, . . . , x_n) ∈ R^n | x_1 + ··· + x_n = 0}. It is a very special case of Example 2.5 that V is a subspace of R^n. Find a basis for V.

Exercise 3.12. Let V := {(xi j) ∈ Mn×n(R) | xi j = xj i for all relevant (i, j)}. It is an- other special case of Example 2.5 that V is a subspace of Mn×n(R)—this is the space of real symmetric n × n matrices. Find a basis for V .

Exercise 3.13. Let V := {(xi j) ∈ Mn×n(R) | xi j = −xj i for all relevant (i, j)}. As a third special case of Example 2.5, V is a subspace of Mn×n(R)—this is the space of real skew-symmetric n × n matrices. Find a basis for V .

3.4 The Steinitz Exchange Lemma

Theorem 3.4 [The Steinitz Exchange Lemma]. Let X be a subset of a vector space V. Let u ∈ Span X and suppose that v ∈ X is such that u ∉ Span(X \ {v}). Then Span((X \ {v}) ∪ {u}) = Span X.

Proof. Since u ∈ Span X there exist v_1, v_2, . . . , v_k ∈ X and scalars α_1, α_2, . . . , α_k such that u = α_1v_1 + α_2v_2 + ··· + α_kv_k. Since u ∉ Span(X \ {v}), for some value of i, v_i = v. Without loss of generality, v_1 = v. Moreover, α_1 ≠ 0. Divide through by α_1 to get that v = βu + β_2v_2 + ··· + β_kv_k where β := 1/α_1 and β_i := −α_i/α_1 for 2 ≤ i ≤ k. Thus any linear combination of members of X that involves v may be replaced by a linear combination that involves βu + β_2v_2 + ··· + β_kv_k instead of v. Therefore if Y is the set obtained by removing v from X and inserting u (exchanging u and v), then Span Y = Span X.

Theorem 3.5. Let V be a vector space, and let S, T be finite subsets of V. If S is linearly independent and T spans V then |S| ≤ |T|.

To prove this, list S as u_1, u_2, . . . , u_m and list T as v_1, v_2, . . . , v_n. Call this second list T_0. Identify the smallest value of i such that u_1 ∈ Span(v_1, v_2, . . . , v_i). Then u_1 ∉ Span(v_1, v_2, . . . , v_{i−1}). Make a list T_1 by removing v_i from T_0 and inserting u_1 at the head of the list. By the Steinitz Exchange Lemma, Span(T_1) = Span(T_0) = Span T. Continue in the same way: if 1 ≤ j < m and T_j is a list in which the first j members are u_j, . . . , u_1, the rest are members of T, and Span(T_j) = Span T, then a member v of T_j may be replaced with u_{j+1}, which may be placed at the start of the new list T_{j+1}, and by the Steinitz Exchange Lemma then Span(T_{j+1}) = Span(T_j) = Span T. The relevant element v of T_j must occur later than the first j members since u_{j+1} ∉ Span(u_j, . . . , u_1), and therefore it is one of the vectors v_i in T. After m such steps m members of T will have been removed and replaced with the m members of S. It follows that m ≤ n, that is, |S| ≤ |T|.

Corollary 3.6. Let V be a finite-dimensional vector space, and let S, T be bases of V. Then |S| = |T|.

For, since S is linearly independent and T is a spanning set, by Theorem 3.5, |S| ≤ |T|. By the same argument, |T| ≤ |S|, and therefore |S| = |T|.

Definition 3.5. Let V be a finite-dimensional vector space. The dimension dim V of V is the size of any basis.

From Example 3.3 we see that dim R^n = n, which is comforting.

Exercise 3.14. Find dim R_n[x], where R_n[x] is as defined in Example 2.6.

Exercise 3.15. Find the dimension of the space of all sequences (u_n) of real numbers satisfying the recurrence relation u_{n+1} = u_n + u_{n−1} for n ≥ 1, introduced in Exercise 2.12.

We observed in Theorem 3.2 that if S is a basis of a real vector space V then the map R^n → V given by (a_1, a_2, . . . , a_n) ↦ a_1v_1 + a_2v_2 + ··· + a_nv_n is both onto and one-one, that is, a bijection. This means that when we have a basis S of a vector space V, each vector v ∈ V may be associated with a unique member of R^n. The entries of this n-vector are known as the coordinates of v. In my experience, it is better not to choose a basis and use coordinates unless there is no other way. Seek coordinate-free methods wherever possible.
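For concreteness, here is a small NumPy sketch (my own illustration, not part of the notes) computing the coordinates of a vector with respect to a basis of R^3: if the basis vectors are written as the columns of an invertible matrix B, the coordinates of v are the unique solution of Bc = v.

```python
import numpy as np

# A hypothetical basis of R^3, written as the columns of B, and a vector v.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])

c = np.linalg.solve(B, v)   # coordinates of v with respect to this basis
print(c)                    # [0. 2. 1.], and indeed 0*b1 + 2*b2 + 1*b3 = v
```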


4 Subspaces of vector spaces

4.1 Bases of subspaces

Recall from §2.3 what is meant by a subspace of a vector space—it is a subset that is a vector space in its own right. Let’s now see what may be said about subspaces of finite-dimensional vector spaces. We begin with a fact which, even after many decades, I still find unexpected and exciting.

Theorem 4.1. Let U be a subspace of a finite-dimensional vector space V. Then U is finite-dimensional and dim U ≤ dim V.

Proof. Let n := dim V. By Theorem 3.5, every linearly independent set in V has at most n members. Choose a largest linearly independent set S contained in U. If u ∈ U and u ∉ Span S then by Lemma 3.1 the set S ∪ {u} would be linearly independent, which it cannot be since it is larger than S. Therefore u ∈ Span S for all u ∈ U, that is, U = Span S. Thus U is finite-dimensional and since |S| ≤ n, dim U ≤ dim V.

Theorem 4.2. Let U be a subspace of a finite-dimensional vector space V. Then every basis of U may be extended to a basis of V—that is, if u_1, . . . , u_m is a basis of U then there is a basis u_1, . . . , u_m, u_{m+1}, . . . , u_n of V.

Proof. Let u_1, . . . , u_m be a basis of U and v_1, . . . , v_n a basis of V. Define S_0 := {u_1, . . . , u_m} and for 1 ≤ i ≤ n define
$$S_i := \begin{cases} S_{i-1} & \text{if } v_i \in \mathrm{Span}(S_{i-1}),\\ S_{i-1} \cup \{v_i\} & \text{if } v_i \notin \mathrm{Span}(S_{i-1}).\end{cases}$$

The set S_0 is linearly independent by definition, and if S_{i−1} is linearly independent then, either trivially or by Lemma 3.1, so is S_i. Therefore S_n is linearly independent. Also, v_i ∈ Span(S_i) and S_i ⊆ S_n, so v_i ∈ Span(S_n) for 1 ≤ i ≤ n. Therefore Span(S_n) = V. Thus S_n is a basis for V, and the construction has ensured that it contains the given basis u_1, . . . , u_m of U as a subset, as required.

Exercise 4.1. Let V be a real vector space of dimension n and let U be a subspace such that dim U ≤ n − 2. Show that there are infinitely many different subspaces W such that U ≤ W ≤ V.

Exercise 4.2. Let V be an n-dimensional real vector space. Let U, W be subspaces of V such that U ≤ W. Show that there is a subspace X of V such that W ∩ X = U and W + X = V.

Exercise 4.3. Let V be an n-dimensional real vector space. Prove that V contains a subspace of dimension r for each r in the range 0 ≤ r ≤ n.

Exercise 4.4. Show that if V is a vector space of dimension n and U_0 < U_1 < ··· < U_k ≤ V (strict containments of subspaces of V) then k ≤ n. Show also that if k = n then dim U_r = r for 0 ≤ r ≤ k.

4.2 Finding bases of subspaces

The previous subsections were devoted to the development of a rather abstract, very powerful theory. In practical applications of linear algebra we usually work in R^n for specific values of n (sometimes quite small, like 2 or 3 or 4, sometimes large as in 100s, or huge as in 1 000 000s). We need methods—algorithmic routines that can be programmed for computation—to find, for example, bases of subspaces described in various ways. An important example is solution spaces of systems of linear equations.

Problem FindBasis. Given a finite set S of vectors in R^n, how can we find a basis of Span S?

There is a routine using EROs (see §1.5) that may easily be programmed. For an m×n matrix X define its row space by RowSp (X) := Span R, where R is the subset of R^n consisting of the rows of X . Interchanging two rows of X obviously does not change RowSp (X); multiplying one of the rows by a non-zero scalar obviously does not change RowSp (X); it is only slightly less obvious that adding a multiple of one row to another does not change RowSp (X). Therefore if a matrix Y is obtained from X by a sequence of EROs then RowSp (Y ) = RowSp (X). Now for Problem FindBasis, let m := |S| and write the members of S as the m rows of an m × n matrix A. Use EROs to reduce A systematically to an echelon form E . We know that RowSp (E) = Span S . But now it is easy to see that the non-zero rows of E are linearly independent. Thus those non-zero rows form a basis for Span S .

Example 4.1. Let S := {(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)} ⊆ R^4 . To find a basis for Span S we create the matrix

( 0 1 2 3 )
( 1 2 3 4 )
( 2 3 4 5 )

and reduce it to echelon form using EROs. Coincidentally, we have already done that in Example 1.1, where, ignoring the fifth column, we found the echelon form

( 1 2 3 4 )
( 0 1 2 3 )
( 0 0 0 0 ).

Thus Span S = Span {(1, 2, 3, 4), (0, 1, 2, 3)} and this is clearly a basis since if a(1, 2, 3, 4) + b(0, 1, 2, 3) = (0, 0, 0, 0) then a = 0 (first coordinate), after which b = 0 (second coordinate).
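The routine described above can be programmed in a few lines. Here is a rough sketch, assuming exact arithmetic via Python's fractions module (the function name row_echelon_basis is my own); it applies only the three kinds of ERO and returns the non-zero rows of an echelon form. Run on the set S of Example 4.1 it reproduces the basis found there.

```python
from fractions import Fraction

def row_echelon_basis(vectors):
    """Return a basis of Span(vectors): the non-zero rows of an echelon form."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    n = len(rows[0])
    pivot_row = 0
    for col in range(n):
        # look for a row at or below pivot_row with a non-zero entry in this column
        pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]       # interchange rows
        lead = rows[pivot_row][col]
        rows[pivot_row] = [x / lead for x in rows[pivot_row]]             # scale to leading 1
        for r in range(pivot_row + 1, len(rows)):                         # clear entries below
            factor = rows[r][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return [row for row in rows if any(x != 0 for x in row)]

basis = row_echelon_basis([(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)])
print(basis)   # the rows (1, 2, 3, 4) and (0, 1, 2, 3), as in Example 4.1 (printed as Fractions)
```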

Exercise 4.5. Find bases for the following subspaces of R3 (a) Span ({(2, 1, 3), (4, −4, −5), (−3, 3, 2)}); (b) Span ({(1, 2, 3), (2, 3, 4), (3, 4, 5), (0, 1, 2)}); (c) Span ({(2, 1, 3), (4, −4, −5), (−3, 3, 2)}).

Exercise 4.6. Recall from Exercise 2.12 that the set of all sequences of real numbers satis- fying the recurrence relation

un+1 = un + un−1 (for n > 1) is a real vector space. Find a basis and write down its dimension.

Exercise 4.7. Recall from Exercise 2.13 that if a1, . . . , ap are real numbers then the set of all sequences (un) of real numbers satisfying the homogeneous linear recurrence relation

un+p + a1 un+p−1 + ··· + ap un = 0 (for n ≥ 0) is a real vector space. Find a basis and write down its dimension.

4.3 An algebra of subspaces and the dimension formula

My “algebra” of subspaces is intended as a collective noun for what appears in the lecture- course synopsis as “Sums, intersections and direct sums” of subspaces. Sums and intersections were defined above in Chapter 2, at Definition 2.3. In Theorem 2.4 it was observed that

If U, W are subspaces of a vector space V then also U + W 6 V and U ∩ W 6 V .

At that point the proof was left as a useful exercise. Now, however, since these concepts are the focus of interest, I should keep faith and give a proof. So let V be a vector space and let U, W ≤ V . Since 0V ∈ U and 0V ∈ W , also 0V ∈ U + W (as 0V + 0V ) and 0V ∈ U ∩ W (by definition of set intersection).

Let x1, x2 ∈ U +W and let λ be a scalar. By definition of the sum, there exist u1, u2 ∈ U , w1, w2 ∈ W such that x1 = u1 + w1 and x2 = u2 + w2 . Then

x1 + λx2 = (u1 + w1) + λ(u2 + w2) = (u1 + λu2) + (w1 + λw2) , and since u1 + λu2 ∈ U , w1 + λw2 ∈ W (because U, W are subspaces of V ), x1 + λx2 ∈ U + W . Therefore U + W ≤ V by the subspace test, Theorem 2.2. Similarly, if x, y ∈ U ∩ W and λ is a scalar, then x + λy ∈ U (since U is a subspace) and x + λy ∈ W (since W is a subspace), so x + λy ∈ U ∩ W (definition of intersection). Therefore, by the subspace test again, U ∩ W ≤ V .

This brings us to a remarkable result that is much used by mathematicians in many contexts—one of which is the Prelim examination.

Theorem 4.3 [Dimension Formula]. Let U, W be subspaces of a finite-dimensional vector space V . Then dim(U + W ) + dim(U ∩ W ) = dim U + dim W .

Proof. Start by choosing a basis u1, . . . , up for the subspace U ∩ W . By Theorem 4.2 it may be extended to bases u1, . . . , up, v1, . . . , vq of U and u1, . . . , up, w1, . . . , wr of W .

Thus p = dim(U ∩ W ), dim U = p + q and dim W = p + r. We’ll see that

(S): u1, . . . , up, v1, . . . , vq, w1, . . . , wr turns out to be a basis of U + W . First, let x ∈ U +W . Then x = u+w where u ∈ U and w ∈ W . Since bases are spanning sets there exist scalars α1, . . . , αp , β1, . . . , βq , γ1, . . . , γp , δ1, . . . , δr such that

u = α1u1 + ··· + αpup + β1v1 + ··· + βqvq and w = γ1u1 + ··· + γpup + δ1w1 + ··· + δrwr .

Therefore x = u + w = (α1 + γ1)u1 + ··· + (αp + γp)up + β1v1 + ··· + βqvq + δ1w1 + ··· + δrwr : thus (S) is a spanning set for U + W . To check that (S) is linearly independent consider a linear relation

α1u1 + ··· + αpup + β1v1 + ··· + βqvq + γ1w1 + ··· + γrwr = 0 .

Rewrite this relation as

α1u1 + ··· + αpup + β1v1 + ··· + βqvq = −(γ1w1 + ··· + γrwr).

Now the vector on the left lies in U , that on the right lies in W . Since they are equal, both lie in U ∩ W , and since u1, . . . , up is a basis for U ∩ W , there are scalars δ1, . . . , δp such that each is equal to δ1u1 + ··· + δpup . Then

δ1u1 + ··· + δpup + γ1w1 + ··· + γrwr = 0 and it follows, since {u1, . . . , up, w1, . . . , wr} is a linearly independent set of vectors, that each of the coefficients is 0. In particular, γ1 = ··· = γr = 0. Then the original relation involves only the elements of the basis of U , and therefore also α1 = ··· = αp = β1 = ··· = βq = 0. Thus the only linear relation between the members of (S) is the trivial one, so it is linearly independent. Therefore dim(U + W ) = p + q + r, and so dim(U + W ) + dim(U ∩ W ) = 2p + q + r. Also, dim U + dim W = (p + q) + (p + r). So dim(U + W ) + dim(U ∩ W ) = dim U + dim W , as required.

Example 4.2. Let V be a vector space of dimension 10, and let X, Y be subspaces of dimension 6. Then dim(X ∩ Y ) ≥ 2. For, X + Y ≤ V , and therefore dim(X + Y ) ≤ 10. From the dimension formula, 10 ≥ 6 + 6 − dim(X ∩ Y ) and so dim(X ∩ Y ) ≥ 2.

Exercise 4.8. Let V := R^4 , and let X := {(x1, x2, x3, x4) ∈ V | x2 + x3 + x4 = 0}, Y := {(x1, x2, x3, x4) ∈ V | x1 + x2 = x3 − 2x4 = 0}. Find bases for X , Y , X ∩ Y and X + Y , and write down their dimensions.

Exercise 4.9. Let V := R^5 , and define subspaces U1 and U2 of V by

U1 := {(x1, . . . , x5) ∈ V | x1 + x2 = 0},
U2 := {(x1, . . . , x5) ∈ V | x1 + x2 + x3 + x4 + x5 = 0 and −x1 + x2 − x3 + x4 − x5 = 0}.

(a) Find a basis of U1 ∩ U2 .

(b) Let B be the basis you found in (a). Extend B to bases of U1 and of U2 .

(c) Now write down a basis of U1 + U2 . [Exploit the proof of the dimension formula].

(d) List the dimensions of U1 , U2 , U1 ∩ U2 and U1 + U2 .

Exercise 4.10. Let V := R^n where n ≥ 2, and let U, W be subspaces of dimension n − 1. (a) Show that if U ≠ W , then dim(U ∩ W ) = n − 2.

(b) Now suppose that n > 3 and let U1,U2,U3 be three distinct subspaces of dimension n − 1. Must it be true that dim(U1 ∩ U2 ∩ U3) = n − 3? Either give a proof, or find a counterexample.

4.4 Direct sums of subspaces

An extreme case of Theorem 4.3 is of considerable importance:

Definition 4.1. Let U, W be subspaces of a vector space V . If U ∩ W = {0V } and U + W = V then V is said to be the direct sum of U and W , and we write V = U ⊕ W . In this case W is said to be a complement of U in V (and vice-versa).

Exercise 4.11. Let L1, L2 be distinct 1-dimensional subspaces of R^2 . Show that R^2 = L1 ⊕ L2 .

Theorem 4.4. Let U, W be subspaces of a finite-dimensional vector space V . Then the following conditions are equivalent: (1) V = U ⊕ W ; (2) Every vector v ∈ V has a unique expression as u + w where u ∈ U and w ∈ W ; (3) dim V = dim U + dim W and V = U + W ;

(4) dim V = dim U + dim W and U ∩ W = {0V };

(5) If u1, . . . , uk is a basis for U and w1, . . . , wm is a basis for W then u1, . . . , uk, w1, . . . , wm is a basis for V .

The equivalence of assertions (1) and (2) is a straightforward consequence of the definition of direct sum; the equivalence of those with assertions (3), (4) and (5) follows directly from the dimension formula. I’ll therefore omit details, but I strongly recommend that you make sure that you can find yourself a proof. In case of difficulty, please feel free to write to me and ask for one.

Warning. A far too common error is the expectation that if V = U ⊕ W then every basis of V is the union of a basis of U and a basis of W . I strongly recommend the following two exercises.

Exercise 4.12. Let V be a finite-dimensional real vector space and let U, W be non-trivial subspaces such that V = U ⊕ W . Show that there are bases of V that contain no vectors from U and no vectors from W . Indeed, show that there are infinitely many such bases of V .

Exercise 4.13. Let U, W be subspaces of a finite-dimensional real vector space V and suppose that V = U ⊕ W . Show that every basis of V is the union of a basis of U and a basis of W if and only if either U = {0V } or W = {0V }.

Exercise 4.14. Let U, W be subspaces of a finite-dimensional real vector space V . Show that V = U ⊕ W if and only if every v ∈ V can be expressed uniquely in the form v = u + w with u ∈ U and w ∈ W .

Exercise 4.15. Let U, W be subspaces of a finite-dimensional real vector space V . Show that V = U ⊕ W if and only if dim V = dim U + dim W and V = U + W .

Exercise 4.16. Let U, W be subspaces of a finite-dimensional real vector space V . Show that V = U ⊕ W if and only if dim V = dim U + dim W and U ∩ W = {0}.

Exercise 4.17. Let V := R^3 and U := {(x1, x2, x3) ∈ V | x1 + x2 + x3 = 0}. For each of the following subspaces W , either prove that V = U ⊕ W , or explain why this is not true: (i) W = {(x, 0, −x) | x ∈ R}; (ii) W = {(x, 0, x) | x ∈ R}; (iii) W = {(x1, x2, x3) ∈ V | x1 = x3}.


5 An introduction to linear transformations

Up to this point the theory has been concerned with individual vector spaces and their sub- spaces. Linear transformations permit comparison of different spaces, and are just as important as the spaces themselves.

5.1 What is a linear transformation?

Definition 5.1. Let V, W be vector spaces (over a field F which we can take to be R in this chapter as elsewhere). A map (function) T : V → W is said to be linear and is called a linear transformation or a linear mapping if the following conditions are satisfied: (LT1) for all u, v ∈ V , T (u + v) = T (u) + T (v); (LT2) for all v ∈ V and all scalars λ, T (λv) = λT (v).

The definition is designed to capture the idea of a “structure-preserving” mapping. The vector-space structure is that of addition and multiplication by scalars, and it is this that should be preserved. In fact, the zero element is also part of the structure of a vector space that is built in from the start. Technically we should have an axiom (LT0) specifying that T (0V ) = 0W , and in other contexts one would do so. Here, however, it is conventional not to do so: this assertion (LT0) follows immediately from each of (LT1) and (LT2), and of course, therefore, from the two of them together.

Theorem 5.1. Let V,W be vector spaces and let T : V → W .

(1) If (LT1) holds then T (0V ) = 0W .

(2) If (LT2) holds then T (0V ) = 0W .

Here is a proof of (1)—the proof of (2) is left as an exercise. Let z := T (0V ) ∈ W . Then z + z = T (0V ) + T (0V ) = T (0V + 0V ) by (LT1). Therefore z + z = T (0V ) = z , and it follows (see, for example, Exercise 2.1) that z = 0W .

Exercise 5.1. Let V,W be vector spaces and let T : V → W . Show that if for all v ∈ V and all scalars λ, T (λv) = λT (v) then T (0V ) = 0W .

It is worth remembering that this gives a simple first test for whether a given mapping T is linear or not. If T (0V ) 6= 0W then T certainly is not linear.

Exercise 5.2. Which of the following maps T : R^2 → R^2 are linear, which are not? As always, give reasons. (i) T (x, y) = (2x + 3y, x − y). (ii) T (x, y) = (2x + 3y, 0). (iii) T (x, y) = (y, x). (iv) T (x, y) = (2x + 3y + 1, x − y − 2). (v) T (x, y) = (x^2 + y^3, x − y). (vi) T (x, y) = (cos(x + y), sin(x − y)).

Exercise 5.3. Which of the following mappings T : R^3 → R^3 are linear transformations? (i) T (x, y, z) = (y, z, 0); (ii) T (x, y, z) = (|x|, −z, 0); (iii) T (x, y, z) = (x − 1, x, y); (iv) T (x, y, z) = (yz, zx, xy).

The definition describes linearity as being the property of preserving the operations of addition and multiplication by scalars. In practice what we usually use, and the way we think about linearity of a transformation, is that it maps linear combinations to linear combinations:

Theorem 5.2. Let V,W be vector spaces and let T : V → W . The following conditions are equivalent: (1) T is linear; (2) for all u, v ∈ V and all scalars α, β , T (αu + β v) = αT (u) + β T (v); (3) for any n ∈ N, if u1, , . . . , un ∈ V and α1, . . . , αn are scalars then T (α1u1 + ··· + αnun) = α1T (u1) + ··· + αnT (un) .

I leave the proof as an exercise: note that (2) follows from (1) by straightforward use of (LT1) followed by (LT2) (twice); (3) may be proved from (2) by induction on n; and (1) follows from (3) as a couple of very special cases.

Exercise 5.4. Write out a proof of Theorem 5.2.

5.2 Some examples of linear transformations

We start with two extreme cases. Although they may seem rather trivial, they are dispropor- tionately important, so are well worth remembering.

Example 5.1. Let V be a vector space. The identity map IV : V → V defined by IV : v 7→ v is a linear transformation. That IV satisfies (LT1) and (LT2) should be obvious.

Example 5.2. Let V,W be vector spaces. The zero map 0 : V → W , defined by 0 : v 7→ 0W for all v ∈ V , is a linear transformation.

The next few examples are particularly important. They will reappear time and time again in one context or another.

Example 5.3. Let m, n be positive integers, let V := R^n_col = Mn×1(R), let W := R^m_col = Mm×1(R) and let A ∈ Mm×n(R). The left-multiplication map LA : V → W , LA(v) = Av, is a linear transformation. This is immediate from Theorem 1.2: condition (LT1) is the second half of part (2) of that theorem and (LT2) is the second half of part (3). Similarly, the right-multiplication map RA : R^m → R^n (spaces of row vectors), given by RA(v) = vA, is a linear transformation.

Example 5.4. More generally, let m, n, p be positive integers, let V := Mn×p(R), let W := Mm×p(R) and let A ∈ Mm×n(R). The left multiplication map V → W , X 7→ AX is a linear transformation.

Example 5.5. Let V be a vector space with subspaces U, W such that V = U ⊕W . For each v ∈ V there are unique vectors u ∈ U , w ∈ W such that v = u+w (see Theorem 4.4(2)). Define P : V → V by P (v) = w. To see that this is linear, let v1, v2 ∈ V and let α1, α2 be scalars. There exist u1, u2 ∈ U and w1, w2 ∈ W such that v1 = u1+w1 and v2 = u2+w2 . Then α1v1 +α2v2 = (α1u1 +α2u2)+(α1w1 +α2w2) and since α1u1 +α2u2 ∈ U and α1w1 +α2w2 ∈ W , this is the relevant expression for α1v1 + α2v2 . Therefore P (α1v1 + α2v2) = α1w1 + α2w2 = α1P (v1) + α2P (v2), as required for linearity of P . This linear mapping P is known as the projection of V onto W along U . Clearly there is a companion linear transformation Q : V → V which is the projection of V onto U along W .
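Example 5.5 can be made concrete numerically. The sketch below (assuming NumPy, with U and W given by bases of column vectors and V = U ⊕ W ) computes P (v) by expressing v in the combined basis and keeping the W -part; the function name is my own.

```python
import numpy as np

def projection_onto_W_along_U(U_basis, W_basis, v):
    """Return P(v) = w, where v = u + w with u in Span(U_basis) and w in Span(W_basis)."""
    B = np.column_stack(U_basis + W_basis)     # invertible precisely because V = U ⊕ W
    coords = np.linalg.solve(B, v)             # coordinates of v in the combined basis
    w_coords = coords[len(U_basis):]           # keep only the W-coordinates
    return np.column_stack(W_basis) @ w_coords

# In R^2 take U = Span{(1, 0)} and W = Span{(1, 1)}; then (3, 2) = (1, 0) + (2, 2):
print(projection_onto_W_along_U([np.array([1., 0.])], [np.array([1., 1.])], np.array([3., 2.])))
# -> [2. 2.]
```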

30 Example 5.6. If A = (aij) ∈ Mn×n(R), its trace is defined by

TraceA := a11 + a22 + ··· + ann , the sum of the entries on the main diagonal of A. The function Trace : Mn×n(R) → R is an important linear mapping.

Example 5.7. Let Rn[x] be the vector space of polynomials of degree (at most) n as in Example 2.6. The differential operator D : f(x) ↦ f′(x), that is

D : a0 + a1x + a2x^2 + ··· + anx^n ↦ a1 + 2a2x + ··· + nanx^(n−1) , is a linear transformation Rn[x] → Rn[x]. It could also be construed as a linear transformation Rn[x] → Rn−1[x].

Example 5.8. Let C^1(R) be the subspace of R^R consisting of functions f : R → R that are differentiable. The differential operator D : f ↦ f′ is a linear transformation C^1(R) → R^R .

Example 5.9. Let C^∞(R) be the subspace of R^R consisting of functions f : R → R that are “infinitely differentiable” in the sense that d^n f/dx^n exists for every n. The differential operator D : f ↦ f′ is a linear transformation C^∞(R) → C^∞(R).

Example 5.10. Let X be any set and let V := R^X , the function space described in Example 2.7. For a ∈ X , the evaluation map E : V → R, E(f) = f(a), is a linear mapping.

Exercise 5.5. Suppose that vectors v1, v2, . . . , vn form a basis of a vector space V , and let w1, w2, . . . , wn be any n vectors in a vector space W . Show that there is a linear transform- ation T : V → W such that T (vi) = wi for 1 6 i 6 n. Show, moreover, that such a linear transformation is unique.

Exercise 5.6. Suppose that vectors v1, v2, . . . , vn form a basis of a vector space V . Show that there are linear transformations TP (r, s), TM(r, λ), TS(r, s, λ) from V to V such that

TP (r, s) (for 1 ≤ r < s ≤ n) interchanges vr and vs , fixes all other vi . TM(r, λ) (for 1 ≤ r ≤ n and λ ≠ 0) maps vr to λvr , fixes all other vi .

TS(r, s, λ) (for r, s ∈ {1, . . . , n}, r ≠ s) maps vs to vs + λvr and fixes vi if i ≠ s.

5.3 Some algebra of linear transformations

There are natural definitions of addition of linear transformations and of multiplication of a linear transformation by a scalar. As always with functions, this is done “pointwise”:

Theorem 5.3. Let V,W be vector spaces. For linear transformations S, T : V → W and scalars λ define S + T and λT by (S + T )(v) := S(v) + T (v) and (λT )(v) := λT (v) for all v ∈ V . With 0 defined as in Example 5.2, the set of linear transformations V → W is a vector space.

Since the proof is a routine matter of checking conditions (VS1)–(VS8) of Definition 2.1, it is omitted. You are advised, though, to check a few cases, just in case an examiner asks you to write one or two out.

Theorem 5.4. Let U, V, W be vector spaces and let S : U → V and T : V → W be linear. Then T ◦ S : U → W is linear.

Proof. Let u1, u2 ∈ U . Then

(T ◦ S)(u1 + u2) = T (S(u1 + u2)) [definition of composition] = T (S(u1) + S(u2)) [S is linear] = T (S(u1)) + T (S(u2)) [T is linear] = (T ◦ S)(u1) + (T ◦ S)(u2) [definition of composition] and so T ◦ S satisfies condition (LT1) of Definition 5.1. The calculation to check that condition (LT2) is satisfied by T ◦ S is very similar and is omitted.

Notation. It is common to write TS for T ◦ S . Some writers, however, seeing the diagram U −→S V −→T W on the page would write ST for this composite map. If ambiguity threatens then it is best to use the T ◦ S notation for clarity.

Exercise 5.7. Let U, V, W be vector spaces, and let S, S1,S2 : U → V and T,T1,T2 : V → W be linear mappings. Write a careful proof that

T (S1 + S2) = TS1 + TS2 and (T1 + T2)S = T1 S + T2S.

Definition 5.2. Let V,W be vector spaces. A linear transformation T : V → W is said to be invertible if there exists a linear transformation S : W → V such that ST = IV and TS = IW , where, recall from Example 5.1, IV , IW are the identity maps on V and W respectively. Then S is known as the inverse of T and is written T −1

As with ordinary functions (see Observation 3.25 of the notes by Alan Lauder, An Intro- duction to University Level Mathematics) and matrices (see Note C on p. 3 above), if a linear transformation is invertible then its inverse is unique, so the notation T −1 is unambiguous.

Theorem 5.5. Let V,W be vector spaces, and let T : V → W be a linear transformation. Then T is invertible if and only if T is a bijective map (one-to-one and onto).

Proof. From basic set theory we know that T is invertible as a function if and only if it is bijective (see Theorem 3.28 of the notes by Alan Lauder, An Introduction to University Level Mathematics). Thus if T is invertible then certainly it is bijective. What is to be proved is that, when T is bijective, given that it is linear, its set-theoretic inverse also is linear. So suppose that T is bijective and let S : W → V be its set-theoretic inverse. Let x, y ∈ W and let α, β be scalars. We need to prove that S(αx + β y) = αS(x) + β S(y). Since T is surjective there exist u, v ∈ V such that x = T (u), y = T (v). Then S(x) = u and S(y) = v by definition of S . Also

S(αx + β y) = S(αT (u) + β T (v)) = S(T (αu + β v)) [since T is linear].

But S(T (αu + β v)) = αu + β v since S was defined to be the (set-theoretic) inverse of T . Therefore S(αx + β y) = αu + β v = αS(x) + β S(y), as required.

Just as with ordinary functions and matrices (see Theorem 1.3), we have the following fact.

Theorem 5.6. Let U, V, W be vector spaces and let S : U → V , T : V → W be invertible linear transformations. Then TS : U → W is invertible and (TS)−1 = S−1 T −1 .

5.4 Rank and nullity

We come now to two sets which carry essential information about a linear transformation. Recall that for any function f : V → W (where now V, W can be any sets) the image is defined by

Im f := {w ∈ W | ∃ v ∈ V : f(v) = w} or, less formally, Im f := {f(v) | v ∈ V } .

Definition 5.3. Let V,W be vector spaces, and let T : V → W be a linear transform- ation. The kernel, also called the null space of T is defined by

Ker T := {v ∈ V | T (v) = 0W }

The kernel gives us quite a precise measure of how far from being injective a linear trans- formation is.

Lemma 5.7. Let V,W be vector spaces, and let T : V → W be a linear transformation. For u, v ∈ V , T (u) = T (v) if and only if u − v ∈ Ker T .

For, T (u) = T (v) ⇔ T (u) − T (v) = 0W ⇔ T (u − v) = 0W ⇔ u − v ∈ Ker T .

Corollary 5.8. Let V,W be vector spaces, and let T : V → W be a linear transform- ation. Then T is injective (one–one) if and only if Ker T = {0V }.

For, if Ker T = {0V } then the lemma tells us that T (u) = T (v) if and only if u = v, which means that T is injective. If, however, Ker T 6= {0V } then there exists u ∈ Ker T , u 6= 0V , and then T (u) = T (0V ), and so T is not injective.

Theorem 5.9. Let V,W be vector spaces, and let T : V → W be a linear transformation. Then (1) Ker T is a subspace of V and Im T is a subspace of W ; (2) if A is a spanning set for V then T (A) is a spanning set for Im T ; (3) if V is finite-dimensional then also Ker T and Im T are finite-dimensional.

Proof. Let v1, v2 ∈ Ker T , so that T (v1) = T (v2) = 0W . For any scalar λ, T (λv1 + v2) = λT (v1) + T (v2) since T is linear. But then T (λv1 + v2) = λ0W + 0W = 0W , that is, λv1 + v2 ∈ Ker T . Hence Ker T is a subspace of V by the subspace test.

Now let w1, w2 ∈ Im T and let λ be a scalar. Then there exist v1, v2 ∈ V such that T (v1) = w1 , T (v2) = w2 . By linearity of T ,

T (λv1 + v2) = λT (v1) + T (v2) = λw1 + w2 .

Thus λw1 + w2 ∈ Im T and so, again by the subspace test, Im T is a subspace of W .

For part (2), let A be a spanning set for V . If w ∈ Im T then w = T (v) for some v ∈ V : and for this element v there exist v1, v2, . . . , vn ∈ A and there exist scalars α1, α2, . . . , αn such that v = α1v1 + α2v2 + ··· + αnvn . Then by Theorem 5.2(3), w = T (v) = α1T (v1) + α2T (v2) + ··· + αnT (vn) ∈ Span (T (A)). Thus T (A) spans Im T , as required.

For (3), suppose now that V is finite-dimensional. Then Ker T is finite-dimensional by Theorem 4.1. Finite-dimensionality of Im T follows immediately from part (2).

Definition 5.4. Let V, W be vector spaces, and let T : V → W be a linear transformation. The rank and nullity of T are defined by

rank T := dim(Im T ) and null T := dim(Ker T ).

Note. Although it is not necessary that W be finite-dimensional, in practice it always will be. There does not seem to be a universally accepted notation for rank and nullity. On the web and in textbooks there are many variants. My usage here seems quite common, but almost equally common is rk(T ) for rank and ν(T ) (Greek letter nu) for nullity, and there are many variants. I am not a fan of ν(T ) since I have difficulty distinguishing ν from v in handwriting—and even in some fonts of type. Although notation is not standard, the names do seem to be the same in all English-speaking countries.

Exercise 5.8. Let V = Rn[x], the vector space of polynomials of degree 6 n. Define D : V → V to be differentiation with respect to x. Find the rank and the nullity of D.

Exercise 5.9. Describe the kernel and image of each of the following linear transformations, and in each case find the rank and the nullity.
(i) T : R^4_col → R^3_col given by T (X) = AX for X ∈ R^4_col , where

A := ( 1 −1  1 1 )
     ( 1  2 −1 1 )
     ( 0  3 −2 0 ).

(ii) V = Mn×n(R), and T : V → R is given by T (A) = trace(A), the sum of the main-diagonal entries of A as defined in Example 5.6.

We come now to one of the classic theorems of the topic.

Theorem 5.10 [Rank-Nullity Theorem]. Let V,W be vector spaces, and let T : V → W be a linear transformation. If V is finite-dimensional then rank T + null T = dim V .

Proof. Choose a basis u1, . . . , un for Ker T , where n := dim Ker T = null T . By Theorem 4.2, this may be extended to a basis u1, . . . , un , v1, . . . , vr of V . Thus dim V = n + r. For 1 ≤ i ≤ r define wi := T (vi). Our aim is to prove that w1, . . . , wr form a basis for Im T . From Theorem 5.9(2) we know that the vectors T (u1), . . . , T (un), T (v1), . . . , T (vr) span Im T . Since u1, . . . , un ∈ Ker T , the first n of those vectors are 0W however, and do not contribute to the span. Therefore Im T is spanned by the remainder, that is, by w1, . . . , wr .

We also need to show that w1, . . . , wr are linearly independent. So consider an arbitrary linear dependence relation α1w1 + ··· + αrwr = 0W , where α1, . . . , αr are scalars. Since T is linear this may be rewritten T (α1v1 + ··· + αrvr) = 0W . This says, however, that α1v1 + ··· + αrvr ∈ Ker T , and therefore this vector may be written as a linear combination of our basis vectors of Ker T : that is, there exist scalars β1, . . . , βn such that

α1v1 + ··· + αrvr = β1u1 + ··· + βnun .

Rewrite this equation as β1u1+···+βnun−α1v1−· · ·−αrvr = 0V . Since u1, . . . , un , v1, . . . , vr are linearly independent, the coefficients must all be 0. In particular, α1 = ··· = αr = 0. Therefore w1, . . . , wr are linearly independent.

Thus w1, . . . , wr form a basis for Im T and so r = rank T . Consequently dim V = n+r = null T + rank T , as the theorem states.
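For the left-multiplication maps LA of Example 5.3 the theorem is easy to use numerically: rank LA is the rank of A, and the theorem then gives null LA = n − rank A at once. A small sketch, assuming NumPy and reusing the matrix of Example 4.1:

```python
import numpy as np

A = np.array([[0., 1., 2., 3.],
              [1., 2., 3., 4.],
              [2., 3., 4., 5.]])     # L_A : R^4_col -> R^3_col, v |-> Av

n = A.shape[1]                       # dimension of the domain
rank = np.linalg.matrix_rank(A)      # rank of L_A (the dimension of its image)
print(rank, n - rank)                # 2 2: rank + nullity = 4 = dim of the domain
```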

This theorem has many uses. I’ll leave some of them as exercises. The following is, however, too striking and too important not to be formulated as part of the theory.

Corollary 5.11. Let V be a finite-dimensional vector space. The following conditions on a linear transformation T : V → V are equivalent: (1) T is invertible; (2) rank T = dim V ; (3) null T = 0. For, we have seen in Theorem 5.5 that T is invertible if and only if it is bijective. So suppose (1) that T is invertible: then it is surjective, so rank T = dim V . Next suppose (2) that rank T = dim V : then by the Rank-Nullity Theorem, null T = 0. Finally suppose (3) that null T = 0, that is, Ker T = {0}: then by Corollary 5.8, T is injective, but also by the Rank-Nullity Theorem, rank T = dim V so T is surjective. Thus if null T = 0 then T is bijective and so by Theorem 5.5, T is invertible.

The last result of this section is well worth remembering. It is very often useful—and will, in fact, be used in our further study of matrices in the next section.

Lemma 5.12. Let T : V → W be a linear transformation of vector spaces and let U 6 V . Then dim U − null T 6 dim T (U) 6 dim U . In particular, if T is injective then dim T (U) = dim U .

Proof. Let S : U → W be the restriction of T to U (so that S(u) = T (u) for all u ∈ U ). Then S is linear and clearly Ker S 6 Ker T , so null S 6 null T . By the Rank-Nullity Theorem, dim Im S = dim U − null S . But of course Im S = T (U). Therefore dim T (U) 6 dim U and dim T (U) > dim U − null T . In particular, if T is injective then null T = 0 and we see that dim T (U) = dim U .

Exercise 5.10. Let U, V, W be finite-dimensional vector spaces, and let S : U → V , T : V → W be linear transformations. Show that Ker S 6 Ker (TS) and Im (TS) 6 Im T . Deduce that null (TS) > null S and rank (TS) 6 rank T . Must it also be true that null (TS) > null T and rank (TS) 6 rank S ?

Exercise 5.11. Let V be a finite-dimensional vector space and let S, T : V → V be linear transformations. Suppose that the composite TS : V → V is invertible. Show that then S is injective (one-one), and deduce that S is invertible. Show also that T is surjective and deduce that T is invertible. Deduce that ST is invertible if and only if TS is invertible.

Exercise 5.12. Let V be a finite-dimensional vector space, and let S, T : V → V be linear transformations. (i) Show that Im (S + T ) 6 Im S + Im T . Deduce that rank (S + T ) 6 rank S + rank T . (ii) Show that null (ST ) 6 null S + null T . [Hint: Focus on the restriction of S to Im T , and consider its image and its kernel.]

Exercise 5.13. Let V be a finite-dimensional vector space, U, W subspaces such that V = U ⊕ W . Let P : V → V be the projection onto U along W , and let Q : V → V be the projection onto W along U (see Example 5.5).

(i) Show that Q = IV − P . (ii) Show that P 2 = P , that Q2 = Q and that PQ = QP = 0.

Exercise 5.14. Let V be a finite-dimensional vector space. Let T : V → V be a linear transformation such that T 2 = T . (Such linear transformations are said to be idempotent). For v ∈ V let u := T v and let w := v − T v , so that v = u + w. Show that u ∈ Im T , w ∈ Ker T and v = u + w. Show also that Im T ∩ Ker T = {0}. Deduce that V = U ⊕ W where U := Im T , W := Ker T , and T is the projection onto U along W .

Exercise 5.15. Let V be an n-dimensional vector space and let T : V → V be a linear transformation. Prove that the following statements are equivalent:

(a) Im T = Ker T , and (b) T^2 = 0, n is even and rank T = n/2.

Exercise 5.16. Let V be the real vector space consisting of all sequences (un) of real numbers satisfying the recurrence relation un+1 = un + un−1 (for n ≥ 1) (see Exercise 2.12). Show that the map T : V → R^2 given by (un) ↦ (u0, u1) is a linear transformation. Calculate its rank and nullity, and hence show that dim V = 2.

6 Linear transformations and matrices

6.1 Matrices of linear transformations with respect to given bases

In Example 5.3 we saw two particularly natural linear transformations made using a matrix A ∈ Mm×n(R). One was LA : R^n_col → R^m_col , given by LA(v) = Av. The other was RA : R^m → R^n given by RA(v) = vA. In this section we'll see that, in quite a strong sense, any linear transformation of (abstract) vector spaces over R (or indeed, over any field F ) may be “coordinatised” and compared directly with one of these.

Definition 6.1. Let V be an n-dimensional vector space over a field F with a basis v1, v2, . . . , vn , and let W be an m-dimensional vector space with a basis w1, w2, . . . , wm . To each linear transformation T : V → W may be assigned an m×n matrix in the following way. Since w1, w2, . . . , wm form a basis of W , for each i ∈ {1, . . . , n} there are uniquely determined scalars aij (for 1 ≤ j ≤ m) such that T (vi) = ai1w1 + ai2w2 + ··· + aimwm . That is,

T (v1) = a11w1 + a12w2 + ··· + a1mwm ,
T (v2) = a21w1 + a22w2 + ··· + a2mwm ,
. . . . . .
T (vn) = an1w1 + an2w2 + ··· + anmwm .

The matrix M(T ) associated with T is not the n × m array of coefficients that we see on the page but its transpose, M(T ) := (aji) ∈ Mm×n(F ). There is a good reason for this transposition, which will emerge in due course.

Note A. It is important to understand that the matrix associated with a linear trans- formation depends on the bases in V and in W . Different bases will usually give different matrices. This phenomenon is important and will be analysed later in this chapter.

Note B. It is also important to understand that in this context our bases are not just sets of vectors, they are ordered—that is to say, they come in sequence as v1, v2, . . . , vn and w1, w2, . . . , wm . Since the matrix depends on these ordered bases it would make sense to use a notation that recognises this dependency. Indeed, many authors would use some such notation as M_{B1}^{B2}(T ) to denote the matrix of T created by means of the ordered bases B1 and B2 of V and of W respectively. Personally, I find this rather cumbersome and I'll try to avoid it when I can. If ambiguity threatens, however, I may be forced into some such usage.

Note C. In the case of a linear transformation T : V → V , we speak of the matrix of T with respect to a basis v1, v2, . . . , vn of V meaning, of course, the matrix with respect to these vectors as basis both of the domain and of the codomain of T .

Example 6.1. As a first example, let's take the linear transformation LA : F^n_col → F^m_col , v ↦ Av of Example 5.3 (generalised from R), where A ∈ Mm×n(F ). There is a natural basis for F^n_col , for which traditional notation is e1, e2, . . . , en , where ei is the n-vector in which all the entries are 0 except the ith , which is 1; similarly, of course, there is a natural basis f1, f2, . . . , fm for F^m_col , where fj is the m-vector in which all the entries are 0 except the jth , which is 1. We check that if A = (aij) then for 1 ≤ i ≤ n, LA(ei) = Aei = (ith column of A), the column vector with entries a1i, a2i, . . . , ami . That is, for 1 ≤ i ≤ n, LA(ei) = a1if1 + a2if2 + ··· + amifm . What is shown on the page is an n × m array of coefficients, in which the (i, j) entry is aji . Therefore M(LA) is its transpose (aij), the matrix A that we started from.
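Definition 6.1 is itself a recipe: the ith column of M(T ) holds the coordinates of T (vi) with respect to w1, . . . , wm . Here is a small sketch of that recipe, assuming NumPy and with T given as a Python function on coordinate vectors; the helper name matrix_of is my own.

```python
import numpy as np

def matrix_of(T, basis_V, basis_W):
    """m x n matrix whose i-th column gives the coordinates of T(v_i) w.r.t. basis_W."""
    W = np.column_stack(basis_W)                           # columns are w_1, ..., w_m
    cols = [np.linalg.solve(W, T(v)) for v in basis_V]     # coordinates of each T(v_i)
    return np.column_stack(cols)

# T(x, y) = (2x + 3y, x - y) on R^2, with the standard basis on both sides:
T = lambda v: np.array([2 * v[0] + 3 * v[1], v[0] - v[1]])
E = [np.array([1., 0.]), np.array([0., 1.])]
print(matrix_of(T, E, E))
# [[ 2.  3.]
#  [ 1. -1.]]
```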

Exercise 6.1. Let A ∈ Mm×n(F ) and let RA : F^m → F^n be defined by RA(v) = vA as in the second part of Example 5.3. Show that, with respect to the natural bases of F^m and F^n , the matrix M(RA) is the transpose A^tr of A.

Exercise 6.2. Let V := Rn[x], the space of polynomials of degree ≤ n with real coefficients, and let D : V → V be the differentiation map. Find the matrix of D with respect to the basis 1, x, x^2, . . . , x^n of V both as its domain and as its codomain.

Theorem 6.1. Let V , W be n- and m-dimensional vector spaces over F with ordered bases BV and BW respectively.

(1) The matrix of 0 : V → W is 0m×n .

(2) The matrix of IV : V → V is In . (3) If S : V → W , T : V → W are linear and α, β are scalars then M(αS + β T ) = αM(S) + β M(T ).

The proof is quite routine, so it is omitted. I suggest, however, that you make sure that you know what it involves:

Exercise 6.3. Write out a careful proof of Theorem 6.1.

Theorem 6.2. Let U, V, W be finite-dimensional vector spaces with ordered bases X of size n, Y of size m and Z of size p respectively. Let S : U → V and T : V → W be linear transformations. Let A be the matrix of T with respect to the bases Y,Z , and let B be the matrix of S with respect to the bases X,Y . Then the matrix of T ◦ S with respect to X and Z is AB .

Proof. Note first that A is a p × m matrix and B is an m × n matrix, so the matrix product AB is defined and is a p × n matrix.

We start by introducing some notation: let X be u1, . . . , un , let Y be v1, . . . , vm , and let Z be w1, . . . , wp ; let ajk be the (j, k) entry of A and let bij be the (i, j) entry of B . What this means is that

S(ui) = Σ_{j=1}^{m} bji vj and T (vj) = Σ_{k=1}^{p} akj wk .

Using the linearity of T we see that

(T ◦ S)(ui) = T (S(ui)) = Σ_{j=1}^{m} bji T (vj) = Σ_{j=1}^{m} bji ( Σ_{k=1}^{p} akj wk ) .

The coefficient of wk in this expression for (T ◦ S)(ui) is Σ_{j=1}^{m} akj bji . Therefore the (k, i) entry of the matrix for T ◦ S with respect to the bases X and Z is this number, which is the (k, i) entry of AB , as required.

Note. This is the modern reason for the formula for multiplication of matrices (the original, early to mid nineteenth century, reason was essentially the same, but expressed in the language of linear substitution of coordinates). We now have the machinery to give a pretty painless proof of the associativity of matrix multiplication, as was promised after Theorem 1.2. It comes naturally out of the associativity of composition of functions:

38 Corollary 6.3. If A ∈ Mm×n(F ), B ∈ Mn×p(F ), C ∈ Mp×q(F ), then

A(BC) = (AB)C.

Proof. The left multiplication maps described in Example 5.3 are linear transformations LA : F^n_col → F^m_col , LB : F^p_col → F^n_col and LC : F^q_col → F^p_col . Their matrices with respect to the natural bases of the column spaces are A, B and C respectively—see Example 6.1. By Theorem 6.2, A(BC) is the matrix of LA ◦ (LB ◦ LC ) : F^q_col → F^m_col and (AB)C is the matrix of (LA ◦ LB) ◦ LC : F^q_col → F^m_col with respect to the natural bases of F^q_col and F^m_col . But S ◦ (T ◦ U ) = (S ◦ T ) ◦ U because composition of functions, when defined, is always associative (whatever those functions may be). Therefore A(BC) = (AB)C .

Here is another important but easy consequence of Theorem 6.2. I leave the proof, and a generalisation, as exercises.

Corollary 6.4. Let V be a finite-dimensional vector space and let T : V → V be an in- vertible linear transformation. Let A be the matrix of T with respect to the basis v1, v2, . . . , vn of V (both as domain and as codomain of T ). Then A is invertible and A−1 is the matrix of T −1 with respect to this basis.

Exercise 6.4. Write out a proof of Corollary 6.4.

Exercise 6.5. Let V,W be vector spaces of the same finite dimension n, and let v1, . . . , vn and w1, . . . , wn be bases of V and of W respectively. Let T : V → W be a linear transform- ation and let A be its matrix with respect to these bases. Show that if T is invertible then A is an invertible matrix and the matrix of T −1 with respect to the same bases is A−1 .

Exercise 6.6. Let V be a finite-dimensional vector space, and let U, W be subspaces such that V = U ⊕W . Let T : V → V be a linear transformation with the property that T (U) ⊆ U and T (W ) ⊆ W .

(a) Show that the matrix of T with respect to a basis of V which is the union of bases of U and of W (recall Theorem 4.4(5)) has block form

( A 0 )
( 0 B ).

(b) What information do those matrices A and B carry?
(c) Let V := R^4 and suppose that T (x1, x2, x3, x4) = (x1 + x2, x1 − x2, x4, 0). Find 2-dimensional subspaces U and W such that T (U ) ⊆ U , T (W ) ⊆ W and V = U ⊕ W .

(d) Find bases BU for U and BW for W , and find the matrix of T with respect to BU ∪BW as in (a).

Exercise 6.7. Let T : R^3 → R^3 be the map defined by T (x, y, z) := (0, x, y). Check that T is linear, and that T^2 ≠ 0 but T^3 = 0. Find the matrix of T with respect to the standard basis for R^3 .

Exercise 6.8. Let V be an n-dimensional vector space, and let T : V → V be a linear map. Suppose that there is a vector v ∈ V such that T^(n−1)(v) ≠ 0 but T^n(v) = 0. Show that the set {v, T (v), T^2(v), . . . , T^(n−1)(v)} is linearly independent, and hence is a basis of V . Find the matrix of T with respect to this basis (in the given order).

6.2 Change of basis

As it has been defined, the matrix of a linear transformation between two vector spaces depends heavily on the bases of those spaces that are used to define it. The natural question arises, how are the matrices of a given linear transformation with respect to different bases related? To answer this question we'll work with the following data set:

(?)
• V , W are finite-dimensional vector spaces over the field F ;
• T : V → W is a linear transformation;
• v1, v2, . . . , vn and v1′, v2′, . . . , vn′ are bases for V ;
• w1, w2, . . . , wm and w1′, w2′, . . . , wm′ are bases for W ;
• vi′ = Σ_{j=1}^{n} pji vj and wi = Σ_{j=1}^{m} qji wj′ ;
• A is the matrix of T with respect to v1, v2, . . . , vn and w1, w2, . . . , wm ;
• B is the matrix of T with respect to v1′, v2′, . . . , vn′ and w1′, w2′, . . . , wm′ ;
• A = (aij) ∈ Mm×n(F ) and B = (bij) ∈ Mm×n(F ).

Notice that with this notation, the n×n matrix (pij) is the matrix of the linear transformation X : V → V for which X(vi) = vi′ , and the m × m matrix (qij) is the matrix of the linear transformation Y : W → W for which Y (wi′) = wi (see Exercise 5.5). Clearly, rank X = n and rank Y = m, so by Corollary 5.11, X and Y are invertible.

Theorem 6.5 [Change of Basis Theorem, Version 1]. With the assumptions and notation of (?), let P := (pij) ∈ Mn×n(F ) and let Q := (qij) ∈ Mm×m(F ). Then B = QAP .

Proof. By definition of the matrix of a linear transformation,

T (vi) = Σ_{j=1}^{m} aji wj and T (vi′) = Σ_{j=1}^{m} bji wj′ for 1 ≤ i ≤ n .

In the first equation we can use the linearity of T and the expression for the vectors vi′ as linear combinations of the vectors vi to obtain expressions for the vectors T (vi′) in terms of the first basis w1, w2, . . . , wm of W :

T (vi′) = T ( Σ_{j=1}^{n} pji vj ) = Σ_{j=1}^{n} pji T (vj) = Σ_{j=1}^{n} pji ( Σ_{k=1}^{m} akj wk ) = Σ_{k=1}^{m} ( Σ_{j=1}^{n} akj pji ) wk .

Here the coefficient of wk in the expression for T (vi′) is the (k, i) entry of the product AP . This should not come as a surprise—AP is the matrix of the composite T ◦ X , where X : V → V is as described in the paragraph before the statement of this theorem. We can now take one further step to rewrite T (vi′) in terms of the basis w1′, w2′, . . . , wm′ of W :

T (vi′) = Σ_{k=1}^{m} ( Σ_{j=1}^{n} akj pji ) wk = Σ_{k=1}^{m} ( Σ_{j=1}^{n} akj pji ) ( Σ_{l=1}^{m} qlk wl′ ) ,

and the coefficient of wl′ in the linear combination on the right of this equation is

Σ_{k=1}^{m} qlk ( Σ_{j=1}^{n} akj pji ) .

This is the (l, i) entry of QAP ; but it is also, by definition, the (l, i) entry of B . Therefore B = QAP , as predicted.

Corollary 6.6 [Change of Basis Theorem, Version 2]. Let V be a finite-dimensional vector space, let T : V → V be a linear transformation, and let v1, . . . , vn and v1′, . . . , vn′ be bases for V . If A, B are the matrices of T with respect to the bases v1, . . . , vn and v1′, . . . , vn′ respectively, then B = P^{−1}AP where P is the change of basis matrix, that is, the n×n matrix (pij) such that vi′ = Σ_{j=1}^{n} pji vj .

For, writing vi = Σ_{j=1}^{n} qji vj′ and taking Q to be the n × n matrix (qij), we find that Q = P^{−1} , so this statement follows immediately from Theorem 6.5.

Note A. If A, B ∈ Mn×n(F ) and there exists an invertible n × n matrix P such that P −1AP = B then A, B are said to be similar. It is important that similar matrices represent one and the same linear transformation of a finite-dimensional vector space to itself, but with respect to different bases of that space.

Note B. The change of basis matrix P is the matrix of the identity map IV : V → V with respect to the basis v1′, . . . , vn′ for V as domain and the basis v1, . . . , vn for V as codomain.
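Corollary 6.6 is easy to experiment with numerically. In the sketch below (assuming NumPy) P has the coordinates of the new basis vectors, written with respect to the old basis, as its columns, and B = P^{−1}AP is the matrix of the same map with respect to the new basis; the map and basis chosen here are illustrative only.

```python
import numpy as np

A = np.array([[2., 3.],
              [1., -1.]])                     # T(x, y) = (2x + 3y, x - y) w.r.t. the standard basis
P = np.column_stack([(1., 1.), (1., -1.)])    # new basis (1, 1), (1, -1), written in old coordinates
B = np.linalg.inv(P) @ A @ P                  # matrix of T w.r.t. the new basis
print(B)

# Sanity check: T(1, 1) = (5, 0), and the first column of B gives its new-basis coordinates:
print(P @ B[:, 0])                            # -> [5. 0.]
```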

Exercise 6.9. Let T : R^3 → R^3 be the linear map T (x, y, z) = (y, −x, z) for all (x, y, z) ∈ R^3 . Let E be the standard basis of R^3 , and let F be the basis f1, f2, f3 where f1 = (1, 1, 1), f2 = (1, 1, 0), f3 = (1, 0, 0). (a) Calculate the matrix A of T with respect to the standard basis E of R^3 . (b) Calculate the matrix B of T with respect to the basis F . (c) Let I be the identity map of R^3 . Calculate the matrix P of I with respect to the bases E, F and the matrix Q of I with respect to the bases F, E , and check that P Q = I3 . (d) Verify that QBP = A.

Exercise 6.10. Recall from Example 5.6 that the trace of an n × n matrix is defined to be the sum a11 + a22 + ··· + ann of its main-diagonal entries.

(a) Show that if X,Y ∈ Mn×n(F ) then trace(XY ) = trace(YX). (b) Deduce that if A and B are similar n × n matrices then traceA = traceB .

6.3 More about matrices: rank

Let A ∈ Mm×n(R). In Section 4.2 we have defined its row space RowSp A to be the subspace of R^n spanned by its rows. Of course, the field of scalars need not be R. This part of the theory is just the same for any field F of scalars.

Definition 6.2. For A ∈ Mm×n(F ) define its row rank by rowrank A := dim RowSp A. Analogously, we define the column space ColSp A to be the subspace of F^m_col spanned by the columns of A and the column rank by colrank A := dim ColSp A.

Recall the transpose: given that A = (aij) ∈ Mm×n(F ), we have defined A^tr = (aji) ∈ Mn×m(F ). Directly from the definitions, ColSp A = RowSp A^tr , so colrank A = rowrank A^tr , and similarly RowSp A = ColSp A^tr , so rowrank A = colrank A^tr .

Theorem 6.7. Let A ∈ Mm×n(F ) and let r := colrank A. There exist invertible matrices P ∈ Mn×n(F ) and Q ∈ Mm×m(F ) such that QAP has the block form

( Ir    0r×s )
( 0t×r  0t×s ),

where s := n − r and t := m − r.

Proof. Consider the linear transformation LA : F^n_col → F^m_col , v ↦ Av of Example 5.3. In Example 6.1 it is shown that its matrix with respect to the natural basis of F^n_col and the natural basis of F^m_col is A. As part of the proof there, it is also shown that Im LA = ColSp A. Therefore rank LA = r. Choose vectors u1, . . . , us ∈ F^n_col that form a basis for Ker LA . As in the proof of the Rank-Nullity Theorem, it is an extra r vectors v1, . . . , vr that are needed to extend to a basis u1, . . . , us, v1, . . . , vr for F^n_col . Defining wi := LA(vi), we then know that w1, . . . , wr form a basis of Im LA . Extend this to a basis w1, . . . , wr, wr+1, . . . , wm of F^m_col . Now consider the matrix of LA with respect to the ordered bases

v1, . . . , vr, u1, . . . , us of F^n_col and w1, . . . , wr, wr+1, . . . , wm of F^m_col .

Since LA(vi) = wi for 1 ≤ i ≤ r and LA(uj) = 0 for 1 ≤ j ≤ s, the matrix of coefficients that is seen on the page has the block form

( Ir    0r×t )
( 0s×r  0s×t )

and therefore the matrix of LA with respect to this new pair of bases has the block form

( Ir    0r×s )
( 0t×r  0t×s ).

Therefore by Theorem 6.5, there exist invertible matrices P ∈ Mn×n(F ) and Q ∈ Mm×m(F ) such that QAP has this block form.

Now we have the tools to prove a fundamental fact about matrices which I still find surprising and exciting, even after well over half a century of familiarity with it. (Write down a random 5 × 7 matrix. Does the column space look as if it had anything at all in common with the row space?)

Theorem 6.8. Let A be an m × n matrix. Then colrank A = rowrank A.

Proof. Let Q be an invertible m × m matrix. We investigate how the row space and the column space of QA are related to RowSp A and ColSp A. Let x1, . . . , xm ∈ F^n_row be the rows of A. The rule for multiplication of matrices tells us that if Q = (qij) then the ith row of QA is qi1x1 + ··· + qimxm . Thus the rows of QA are linear combinations of the rows of A, that is, RowSp (QA) ≤ RowSp A. But since Q has an inverse Q^{−1} , to which what has just been proved for Q may be applied, it follows that RowSp (Q^{−1}(QA)) ≤ RowSp (QA), that is, RowSp A ≤ RowSp (QA). Thus RowSp (QA) = RowSp A.
Now the column space: let y1, . . . , yn ∈ F^m_col be the columns of A. The rule for multiplication of matrices tells us that the columns of QA are Qy1, . . . , Qyn . Thus if U := ColSp A then ColSp (QA) = LQ(U ), where LQ : F^m_col → F^m_col is the left multiplication map. Since Q is invertible LQ has nullity 0 and so by Lemma 5.12, dim LQ(U ) = dim U . Therefore colrank (QA) = colrank A.
A very similar proof shows that if P is an invertible n × n matrix then ColSp (AP ) = ColSp A and rowrank (AP ) = rowrank A. Actually, there is no need to repeat the argument: we may simply apply what has been proved with A replaced by its transpose A^tr and Q replaced by P^tr (recall from Exercise 1.10 that P^tr is invertible with inverse (P^{−1})^tr ). It follows now that if P and Q are invertible n × n and m × m matrices respectively then

rowrank (A) = rowrank (QAP ) and colrank (A) = colrank (QAP ) .

By Theorem 6.7, there exist invertible n × n and m × m matrices P, Q such that QAP = B , where B has the block form

( Ir    0r×s )
( 0t×r  0t×s ).

Clearly, rowrank B = colrank B = r. Therefore rowrank A = colrank A, as the theorem promised.
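The parenthetical challenge before Theorem 6.8—a random 5 × 7 matrix—can be tried mechanically. A minimal sketch assuming NumPy, where matrix_rank applied to A gives the column rank and applied to A^tr gives the row rank:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-9, 10, size=(5, 7)).astype(float)   # a "random" 5 x 7 matrix
print(np.linalg.matrix_rank(A))      # column rank of A
print(np.linalg.matrix_rank(A.T))    # row rank of A -- always the same number
```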

Definition 6.3. Let A be an m × n matrix. The common value of rowrank A and colrank A is known as the rank of A, denoted (of course) rank A.

Note. Let T : V → W be a linear transformation. If the matrix of T with respect to (ordered) bases BV of V and BW of W is A then rank A = rank T .

Exercise 6.11. For each of the following matrices find a basis for the row space and a basis for the column space.

0 0 1 1 2 3 4 1 1 1 2 2 2 , 2 3 4 5 1 2 3       3 3 3 3 4 5 6 1 4 9   ,   . 4 5 6 7 1 8 27     5 6 7 8 1 16 81 6 7 8 9 1 32 243

Exercise 6.12. Let A be an m × n matrix, and let P,Q be invertible m × m and n × n matrices respectively. Show that rank (QA) = rank A and rank (AP ) = rank A.

Exercise 6.13. Let A be an n × n matrix. Show that rank A = n if and only if A is invertible.

Exercise 6.14. Let A be an n × n matrix such that A^2 = 0. Show that rank A ≤ n/2. More generally, show that if A is an n × n matrix such that A^k = 0 then rank A ≤ n/k .

There is an important connection between the (column) rank of matrices and solution sets of homogeneous linear equations.

Theorem 6.9. Let A be an m×n matrix, let x be the n×1 column vector of variables xi , and let S be the solution space of the system Ax = 0 of m homogeneous linear equations in the n variables (that is, S = {v ∈ F^n_col | Av = 0}). Then dim S = n − colrank A.

Proof. Consider the left multiplication map LA : F^n_col → F^m_col , v ↦ Av. Its image is spanned by the vectors Aei , where e1, e2, . . . , en is the standard basis of F^n_col . Since Aei is the ith column of A, that image is ColSp A, and so rank LA = colrank A. But by definition, Ker LA = S . By the Rank-Nullity Theorem, therefore, dim S + colrank A = dim F^n_col = n. Hence dim S = n − colrank A as the theorem states.
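Theorem 6.9 in numbers: the sketch below assumes SciPy's null_space routine, which returns an (orthonormal) basis of S as the columns of an array, so the number of columns is dim S ; the matrix used is illustrative only.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 1., 1., 1.],
              [1., 2., 3., 4.]])               # 2 homogeneous equations in 4 unknowns
S_basis = null_space(A)                        # columns form a basis of S = {v : Av = 0}
print(S_basis.shape[1])                        # dim S = 2
print(A.shape[1] - np.linalg.matrix_rank(A))   # n - colrank A = 4 - 2 = 2, as predicted
```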

Exercise 6.15. Find the dimensions of the solution spaces of the following systems of homogeneous linear equations:
(a) 2x + 4y − 3z = 0, x − 4y + 3z = 0, 3x − 5y + 2z = 0;
(b) x + 2y + 3z = 0, 2x + 3y + 4z = 0, 3x + 4y + 5z = 0;
(c) x + 2y + 3z = 0, 2x + 3y + 4z = 0, 3x + 4y + 5z = 0.

Exercise 6.16. Let A ∈ Mm×n(R) and let T : R^n_col → R^m_col be the linear transformation T (x) = Ax. Prove the following:
(a) if m < n the system of linear equations Ax = 0 always has a non-trivial solution;
(b) if m < n the system of linear equations Ax = b either has no solution or has infinitely many different solutions;
(c) if A has rank m the system of linear equations Ax = b always has a solution;
(d) if A has rank n the system of linear equations Ax = b has at most one solution;
(e) if m = n and A has rank n, the system of linear equations Ax = b has precisely one solution.

Exercise 6.17. Let n ≥ 2 and let Vn := {a0 x^n + a1 x^(n−1) y + ··· + an y^n | a0, a1, . . . , an ∈ R}, the real vector space of homogeneous polynomials of degree n in two variables x and y . Let Bn be the “natural” basis x^n, x^(n−1) y, . . . , x y^(n−1), y^n of Vn . Define L : Vn → Vn−2 by L(f) := ∂²f/∂x² + ∂²f/∂y² .
(a) Check that L is a linear transformation.

(b) Find the matrix of L with respect to the bases Bn and Bn−2 .

(c) Find the rank of this matrix, and hence find the dimension of {f ∈ Vn | L(f) = 0}.

6.4 More on EROs: row reduced echelon (RRE) form

Recall the definition of EROs from Section 1.5: the three types of elementary row operation that may be made to a given matrix are

P (r, s) for 1 6 r < s 6 m: interchange row r and row s.

M(r, λ) for 1 6 r 6 m and λ 6= 0: multiply (every entry of) row r by λ. S(r, s, λ) for r, s ∈ {1, . . . , m} and r 6= s: add λ times row r to row s.

These were applied there to an m × (n + 1) matrix to reduce it to a matrix E in echelon form:

• if row r of E has any non-zero entries then the first of these is 1;

• if 1 6 r < s 6 m and rows r, s of E contain non-zero entries, the first of which are er j and esk respectively, then j < k (the leading entries of lower rows occur to the right of those in higher rows);

• if row r of E contains non-zero entries and row s does not (that is esj = 0 for 1 6 j 6 n) then r < s—that is, zero rows (if any) appear below all the non-zero rows.

The m×(n+1) matrix was the augmented matrix corresponding to a family of m simultaneous (inhomogeneous) linear equations in n variables. Reduction by EROs to echelon form provided a systematic method for solving that system of equations. Let's now apply EROs just to m × n matrices. Let B be an m × n matrix in echelon form. Focus on the leading entry 1 in one of its rows, say erj . For 1 ≤ t ≤ r − 1 subtract etj times row r from row t. These r − 1 operations are EROs. Do this for each of the leading entries 1 in B . The result is a new matrix E∗ which is in what is called row reduced echelon form (RRE form): it is in echelon form and each column that contains the leading entry of a row has all other entries 0.

Example 6.2. Let's work with the matrix of Example 1.1. There we had an augmented matrix of some equations, but let's forget the context and treat it simply as a 3 × 5 matrix in which the last column has no special status. The matrix was

( 0 1 2 3 0 )
( 1 2 3 4 2 )
( 2 3 4 5 0 ),

which was changed by means of EROs to the matrix

( 1 2 3 4 2 )
( 0 1 2 3 2 )
( 0 0 0 0 1 ).

This is in echelon form. Its row reduced echelon form is

( 1 0 −1 −2 0 )
( 0 1  2  3 0 )
( 0 0  0  0 1 ):

the columns that contain a leading entry of a row are the first, the second and the fifth, and all the other entries in these columns are 0.
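Example 6.2 can be reproduced mechanically. Here is a minimal sketch of reduction to RRE form, with exact arithmetic via Python's fractions module and using only the three EROs; the function name rre_form is my own.

```python
from fractions import Fraction

def rre_form(matrix):
    rows = [[Fraction(x) for x in row] for row in matrix]
    m, n = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n):
        pivot = next((r for r in range(pivot_row, m) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]   # a P-type ERO
        lead = rows[pivot_row][col]
        rows[pivot_row] = [x / lead for x in rows[pivot_row]]         # an M-type ERO: leading 1
        for r in range(m):                                            # S-type EROs: clear the column
            if r != pivot_row and rows[r][col] != 0:
                factor = rows[r][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

for row in rre_form([[0, 1, 2, 3, 0], [1, 2, 3, 4, 2], [2, 3, 4, 5, 0]]):
    print([int(x) if x.denominator == 1 else x for x in row])
# (1, 0, -1, -2, 0), (0, 1, 2, 3, 0), (0, 0, 0, 0, 1), as in Example 6.2
```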

Now let's pause for a review. Let A be an m×n matrix. We know that it may be changed by EROs to a matrix B∗ in RRE form. An ERO, however, does not change the row space of a matrix. Consequently RowSp B∗ = RowSp A. Now consider the special case where m = n and A is invertible, that is, rank A = n. Then also B∗ is an n × n matrix of rank n. Then B∗ can have no zero row, so each of its n rows has a leading entry 1, but these occur in different columns, and so the leading entry of row r must be in column r, that is, in the (r, r) place. Moreover, in each row the leading entry 1 is the unique non-zero entry in its column. Therefore B∗ = In . Thus we have the following useful result:

Theorem 6.10. An invertible n × n matrix may be changed by EROs to In .

6.5 Using EROs to invert matrices

Now for each ERO define the corresponding elementary matrix to be the result of applying that ERO to Im . Thus for P (r, s) the corresponding elementary matrix differs from Im only in that the (r, r) entry 1 has moved to the (r, s) place and the (s, s) entry 1 has moved to the (s, r) place; for M(r, λ) the corresponding elementary matrix differs from Im only in that the (r, r) entry 1 has been replaced by λ; and for S(r, s, λ) the corresponding elementary matrix differs from Im only in that in the (s, r) place the entry 0 has been replaced by λ. The value of elementary matrices is this:

Fact. Let A be an m × n matrix, and let B be the matrix obtained from A by application of one of the EROs. Then B = EA, where E is the elementary matrix corresponding to that ERO.

The calculation to see this is quite elementary and I omit it. Simple though it is, the fact has a consequence that has great importance in that it leads to a computationally effective method for calculating inverses of n × n matrices of rank n.

Theorem 6.11. Let A be an n×n invertible matrix. Let X1,X2,...,XN be a sequence of EROs that changes A to In . Let B be the matrix obtained from In by this same sequence of EROs. Then B = A−1 .

Proof. Let Ei be the elementary matrix corresponding to the ERO Xi . Application of X1, X2, . . . , Xk in turn to A produces the matrix Ek ··· E2E1A; applying this sequence to In produces the matrix Ek ··· E2E1 . Applying all N EROs we know that from A we'll have obtained In , so EN ··· E2E1A = In , that is, EN ··· E2E1 = A^{−1} . But applying all N EROs to In we get EN ··· E2E1 which is B (by definition). Therefore B = A^{−1} .

Example 6.3. We calculate the inverse of the 4 × 4 matrix in which the (i, j) entry is j^(i−1) (an example of a so-called Vandermonde matrix). The instruction S(1, k, −1) × 3 means

45 do the EROs S(1, 2, −1), S(1, 3, −1), S(1, 4, −1) in turn. This kind of shorthand works well for mathematicians, less well for computer programs.

1 1 1 1  1 0 0 0 1 2 3 4  0 1 0 0   ,   , 1 4 9 16  0 0 1 0 1 8 27 64 0 0 0 1 1 1 1 1  1 0 0 0 S(1,k,−1)×3 0 1 2 3 −1 1 0 0 −−−−−−−−−−−−−−→   ,   , 0 3 8 15 −1 0 1 0 0 7 26 63 −1 0 0 1 1 1 1 1  1 0 0 0 S(2,3,−3),S(2,4,−7) 0 1 2 3 −1 1 0 0 −−−−−−−−−−−−−→   ,   , 0 0 2 6  2 −3 1 0 0 0 12 42 6 −7 0 1 1 1 1 1  1 0 0 0 M(3, 1 ) 2 0 1 2 3 −1 1 0 0 −−−−−−−−−−−−−→   ,  3 1  , 0 0 1 3  1 − 2 2 0 0 0 12 42 6 −7 0 1 1 1 1 1  1 0 0 0 S(3,4,−12) 0 1 2 3 −1 1 0 0 −−−−−−−−−−−−→   ,  3 1  , 0 0 1 3  1 − 2 2 0 0 0 0 6 −6 11 −6 1 1 1 1 1  1 0 0 0 M(4, 1 ) 6 0 1 2 3 −1 1 0 0 −−−−−−−−−−−−−→   ,  3 1  . 0 0 1 3  1 − 2 2 0 11 1 0 0 0 1 −1 6 −1 6 From this point, the sequence S(4, 3, −3), S(4, 2, −3), S(4, 1, −1), S(3, 2, −2), S(3, 1, −1), S(2, 1, −1) reduces the original matrix to I4 and changes the matrix in the second column to  13 3 1  4 − 3 2 − 6 19 1 −6 2 −4 2   7 1 . Thus we have found that  4 −7 2 − 2  11 1 −1 6 −1 6

−1 1 1 1 1  24 −26 9 −1 1 2 3 4 1 −36 57 −24 3   =   , 1 4 9 16 6  24 −42 21 −3 1 8 27 64 −6 11 −6 1 which may be checked relatively easily by multiplication.

The method may seem tedious, and for human use it certainly can be, but it is ideal for machine computation: it is easy to program and it works fast. If you know about determinants you may know and like the wonderful determinantal formula for A−1. That formula, however, though of great theoretical importance, is of very little use computationally. If one were to use the recursive definition of the determinant naively, or the formula for the determinant in terms of the matrix entries, one would need more than n! arithmetical operations to find A−1. Even for 30 × 30 matrices that would mean well over 10^30 operations, taking (at a very rough calculation using computing speed data from the web) about 10^12 years, that is a million million years, even on the fastest modern computers. By contrast, the method using EROs requires fewer than 2n^3 arithmetical operations, so that a modern computer, even a relatively slow one, can invert 30 × 30 matrices in microseconds.
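A very rough sanity check of these counts (a sketch in Python; the constants depend on exactly how one counts operations):

    import math

    n = 30
    print(math.factorial(n))   # about 2.7 * 10**32, comfortably more than 10**30
    print(2 * n**3)            # 54000 operations for the ERO-based method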

Exercise 6.18. Use EROs to find the inverse of each of the following matrices:

2 3 1 1 0 1 −1 0 0 ; 3 2 1 0 1 ; 1 0 −1 0   . 0 1 1 1 0 0 −1 0 1 1 1

Exercise 6.19. Show that an n × n matrix may be reduced to RRE form by a sequence of at most n^2 EROs.


7 Inner Product Spaces

The use of vectors in geometry and mechanics is greatly enriched by the scalar product x · y and the vector product x ∧ y (often written x × y). The vector product is peculiar to 3-D geometry. There is no natural analogue in 2-D or in higher dimensions, and I propose to say nothing more about it. In contrast, the scalar product is seriously important in any number of dimensions. There is an abstract version of it for vector spaces over many coefficient domains F. In its best form, however, it is most useful for real vector spaces. A variant for complex vector spaces lies at the heart of the mathematics of Quantum Physics. In this chapter, therefore, we'll permit the scalars to come from an arbitrary field F only in the first section. After that we will work over R, with a passing glance at C.

7.1 Bilinear forms

Definition 7.1. Let V be an F -vector space. A bilinear form on V is a scalar-valued function of two vector variables, traditionally (but not always) written h−, −i : V × V → F , satisfying the following two conditions:

(BL1): for any α, β ∈ F and any u, v, w ∈ V , hαu + βv, wi = αhu, wi + βhv, wi; (BL2): for any α, β ∈ F and any u, v, w ∈ V , hu, αv + βwi = αhu, vi + βhu, wi.

Condition (BL1) says that h−, −i is linear in its first variable when the second is kept fixed; (BL2) says that h−, −i is linear in its second variable when the first is kept fixed.

Example 7.1. For x = (x1, x2, . . . , xn) ∈ F^n and y = (y1, y2, . . . , yn) ∈ F^n define hx, yi := x1y1 + x2y2 + ··· + xnyn. It is easy to see that this formula describes a bilinear form. In 3-D, when F = R, this is the scalar product that we are all familiar with; it is often called the 'dot product' and written x · y, also in higher dimensions.

Example 7.2. Let A ∈ Mn×n(F) and for x, y ∈ F^n define hx, yi := xAy^tr. Then h−, −i is a bilinear form on F^n. Note that for this to make sense the 1 × 1 matrix xAy^tr must be identified with the scalar which is its entry. But that is such a natural thing to do that it happens all the time. Note also that the scalar product of Example 7.1 is the special case when A = In: that is, x · y = xy^tr.
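As a quick illustration (a sketch in Python with numpy; the matrix A is just a made-up example, not taken from the notes), the form of Example 7.2 is a one-line computation, and taking A = In recovers the dot product.

    import numpy as np

    def form(A, x, y):
        # The bilinear form <x, y> = x A y^tr of Example 7.2, with x and y as row vectors.
        return x @ A @ y

    A = np.array([[2.0, 1.0], [0.0, 3.0]])   # an arbitrary 2 x 2 real matrix
    x = np.array([1.0, 2.0])
    y = np.array([4.0, -1.0])

    print(form(A, x, y))                  # x A y^tr
    print(form(np.eye(2), x, y), x @ y)   # with A = I_2 this is just the dot product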

Exercise 7.1. Write out a proof that when A ∈ Mn×n(F) the formula hx, yi := xAy^tr defines a bilinear form on F^n.

Exercise 7.2. Show that the sum of two bilinear forms is also a bilinear form.

Definition 7.2. Let v1, v2, . . . , vn be vectors in a vector space V over a field F and let h−, −i be a bilinear form on V . The Gram matrix of the vectors with respect to the form is the n × n matrix (hvi, vji) ∈ Mn×n(F ).

Note. The Gram matrix is named for Jørgen Pedersen Gram [1850–1916], a Danish mathematician and statistician, who was also expert in the insurance business and in forestry.

Theorem 7.1. Let h−, −i be a bilinear form on a finite-dimensional F-vector space V, let v1, v2, . . . , vn be a basis, and let A ∈ Mn×n(F) be the associated Gram matrix. For u, v ∈ V let x = (x1, x2, . . . , xn) ∈ F^n, y = (y1, y2, . . . , yn) ∈ F^n be the unique coordinate vectors such that u = x1v1 + x2v2 + ··· + xnvn, v = y1v1 + y2v2 + ··· + ynvn. Then hu, vi = xAy^tr.

Proof. Using (BL1) to expand the first entry and (BL2) to expand the second entry of the bilinear form, we find that

    hu, vi = hΣ_i xivi, Σ_j yjvji = Σ_i xi hvi, Σ_j yjvji = Σ_i Σ_j xiyj hvi, vji = xAy^tr,

as the theorem states.

Note. The theorem tells us that, in quite a strong sense, Example 7.2 describes all bilinear forms on vector spaces of dimension n.
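Theorem 7.1 can also be checked numerically. The following sketch (Python with numpy; the form, basis and coordinate vectors are invented for illustration) computes a Gram matrix for a basis of R^2 and confirms that xAy^tr reproduces the form.

    import numpy as np

    # An illustrative bilinear form on R^2, as in Example 7.2, with an invented matrix M
    M = np.array([[2.0, 1.0], [1.0, 3.0]])
    form = lambda u, v: u @ M @ v

    # A basis of R^2 (chosen arbitrarily) and its Gram matrix A with entries <v_i, v_j>
    basis = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
    A = np.array([[form(vi, vj) for vj in basis] for vi in basis])

    # Coordinate vectors x, y and the corresponding u, v
    x, y = np.array([2.0, -1.0]), np.array([0.5, 3.0])
    u = x[0] * basis[0] + x[1] * basis[1]
    v = y[0] * basis[0] + y[1] * basis[1]

    assert np.isclose(form(u, v), x @ A @ y)   # Theorem 7.1: <u, v> = x A y^tr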

Definition 7.3. A bilinear form h−, −i : V × V → F is said to be symmetric if hu, vi = hv, ui for all u, v ∈ V .

Exercise 7.3. Show that if A ∈ Mn×n(F) then the bilinear form hx, yi = xAy^tr on F^n defined in Example 7.2 is symmetric if and only if A is a symmetric matrix, that is, A = A^tr.

Exercise 7.4. A bilinear form h−, −i on a vector space V is said to be skew-symmetric if hu, vi = −hv, ui for all u, v ∈ V. Show that if 2 ≠ 0 in the field F of scalars then every bilinear form may be written uniquely as the sum of a symmetric bilinear form and a skew-symmetric bilinear form.

Exercise 7.5. A bilinear form h−, −i on a vector space V is said to be alternating if hu, ui = 0 for all u ∈ V. Show that if 2 ≠ 0 in the field F of scalars then a bilinear form is alternating if and only if it is skew-symmetric.

7.2 Inner product spaces

Definition 7.4. A bilinear form h−, −i on a real vector space V is said to be positive definite if hv, vi > 0 for all v ∈ V \{0}. An inner product on a real vector space V is a positive definite symmetric bilinear form on V , and an inner product space is a real vector space equipped with an inner product. When notation is not specified, this inner product will always be h−, −i.

Definition 7.5. In a real inner product space we define the norm (or 'magnitude' or 'length') of a vector v by ||v|| = √hv, vi. Recall that in 2-D and in 3-D we have x · y = ||x|| ||y|| cos θ, where θ is the angle between the vectors x and y. In the general case we turn this formula round to define angle in an abstract inner product space V: the angle between non-zero vectors x, y ∈ V is defined to be cos^−1( hx, yi / (||x|| ||y||) ). We'll see in Section 7.4 that this makes good sense because |hx, yi| / (||x|| ||y||) ≤ 1 for any two non-zero vectors x, y in an inner product space.
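For a concrete feel for this definition, here is a small sketch (Python with numpy, using the dot product on E^3; the vectors are arbitrary examples).

    import numpy as np

    def angle(x, y):
        # Angle between non-zero vectors, here in E^3 with the dot product as inner product.
        return np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    x = np.array([1.0, 0.0, 0.0])
    y = np.array([1.0, 1.0, 0.0])
    print(angle(x, y))   # pi/4, as expected for these two vectors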

Example 7.3. The dot product of Example 7.1 is an inner product on R^n. For, it is certainly a symmetric bilinear form, and if x = (x1, . . . , xn) then x · x = x1^2 + ··· + xn^2 > 0 except when x1 = ··· = xn = 0.

This inner product space, that is, R^n equipped with the dot product as inner product, is often denoted E^n and referred to as n-dimensional euclidean space. Of course the dot product on column vectors is an inner product also. We'll write E^n_col for R^n_col equipped with this inner product.

Example 7.4. Let V := Rn[x], the vector space of polynomials of degree ≤ n. For f, g ∈ V define hf, gi := ∫_0^1 f(x)g(x) dx. Clearly, h−, −i is bilinear and symmetric. It is positive definite because if f ∈ Rn[x] is non-zero then there are only finitely many values of a in the range 0 ≤ a ≤ 1 for which f(a) = 0, and f(x)^2 is strictly positive elsewhere in this range, so ∫_0^1 f(x)^2 dx > 0.
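Since ∫_0^1 x^(i+j) dx = 1/(i + j + 1), the inner product of Example 7.4 can be computed exactly from coefficients: if f = a0 + a1x + ··· and g = b0 + b1x + ··· then hf, gi = Σ_{i,j} ai bj/(i + j + 1). A sketch in Python (the coefficient-list representation is my own convention, not from the notes):

    def poly_inner(a, b):
        # <f, g> = integral from 0 to 1 of f(x)g(x) dx, for f, g given by coefficient
        # lists a, b with the constant term first.
        return sum(ai * bj / (i + j + 1)
                   for i, ai in enumerate(a)
                   for j, bj in enumerate(b))

    # f(x) = 1 + x, g(x) = x^2:  integral of (x^2 + x^3) over [0, 1] is 1/3 + 1/4 = 7/12
    print(poly_inner([1, 1], [0, 0, 1]))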

Exercise 7.6. Let A = ( a b ; b c ) ∈ M2×2(R), the symmetric 2 × 2 matrix with rows (a, b) and (b, c). Show that the bilinear form defined on R^2 by hx, yi := xAy^tr is an inner product if and only if a > 0 and b^2 < ac.

Exercise 7.7. Let V be a real vector space and let h−, −i, h−, −i1 and h−, −i2 be inner products on V. The norm of a vector v has been defined by ||v|| := √hv, vi. Norms ||v||1 and ||v||2 are defined analogously.
(a) Show that if u, v ∈ V then hu, vi = (1/2)(||u + v||^2 − ||u||^2 − ||v||^2).
(b) Deduce that if ||x||1 = ||x||2 for all x ∈ V then hu, vi1 = hu, vi2 for all u, v ∈ V.

Exercise 7.8. Let V be a 2-dimensional real vector space with basis {e1, e2}. Describe all the inner products h−, −i on V for which he1, e1i = 1 and he2, e2i = 1.

Exercise 7.9. Show that if V is the vector space R2[x] of polynomials of degree at most 2 in x, then the definition hf(x), g(x)i := f(0)g(0) + f(1)g(1) + f(2)g(2) describes an inner product on V .

Theorem 7.2. Let V be a finite-dimensional real inner product space, and let u ∈ V \{0}. Define u⊥ := {v ∈ V | hv, ui = 0}. Then u⊥ is a subspace of V , dim u⊥ = dim V − 1, and V = Span (u) ⊕ u⊥ .

Proof. To see that u⊥ is a subspace of V and that dim u⊥ = dim V − 1 we consider the map V → R given by v 7→ hv, ui. Condition (BL1) guarantees that this is a linear mapping. Clearly, from the definition, u⊥ is its kernel. The image is non-zero since hu, ui > 0, and therefore it must be of dimension 1 (because any non-zero number spans R). By the Rank–Nullity Theorem therefore, dim V = dim u⊥ + 1, so dim u⊥ = dim V − 1.

Now suppose that v ∈ Span (u) ∩ u⊥. Then v = λu for some λ ∈ R and also hv, ui = 0. But then hv, ui = hλu, ui = λhu, ui = 0, and therefore λ = 0 since hu, ui > 0. Thus Span (u) ∩ u⊥ = {0}. Therefore V = Span (u) ⊕ u⊥ by Theorem 4.4(4).
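The direct sum in Theorem 7.2 can be made explicit: every v splits as v = λu + w with λ = hv, ui/hu, ui and w ∈ u⊥. This formula is not spelled out in the proof above, but it is easy to verify, as the following sketch (Python with numpy, using the dot product on E^3 and arbitrary vectors) illustrates.

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([3.0, -1.0, 4.0])

    lam = (v @ u) / (u @ u)   # coefficient of the Span(u) component
    w = v - lam * u           # candidate u-perp component

    assert np.isclose(w @ u, 0.0)        # w lies in u-perp
    assert np.allclose(v, lam * u + w)   # v = lam*u + w: the direct sum decomposition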

Exercise 7.10. Let V be an n-dimensional real vector space with inner product h−, −i, and let U be an m-dimensional subspace of V. Define U⊥ := {v ∈ V | hv, ui = 0 for all u ∈ U}.
(a) Show that U⊥ is a subspace of V and that U⊥ ∩ U = {0}.
(b) Let u1, . . . , um be a basis for U. Define T : V → R^m by T(v) = (x1, . . . , xm), where xi := hv, uii for i = 1, . . . , m. Show that T is a linear mapping.
(c) For T as in part (b), show that Ker T = U⊥ and that rank T = m.
(d) Deduce that dim U⊥ = n − m and that V = U ⊕ U⊥.

Exercise 7.11. Let V := R2[x] with the inner product defined in Exercise 7.9, and let U be the subspace spanned by the polynomials 1 and x. With notation as in Exercise 7.10 find polynomials p(x) ∈ U and q(x) ∈ U⊥ such that x^2 = p(x) + q(x).

Definition 7.6. Vectors v1, v2, . . . , vn in an inner product space are said to form an orthonormal set if hvi, vji = δi,j, where δi,j = 1 if i = j and δi,j = 0 if i ≠ j.

Note. If vectors v1, v2, . . . , vn form an orthonormal set in an inner product space V then they are linearly independent. For, if α1v1 + α2v2 + ··· + αnvn = 0 then, forming the inner product with vi for each i, we find that

0 = h0, vii = hα1v1 + α2v2 + ··· + αnvn, vii = α1hv1, vii + α2hv2, vii + ··· + αnhvn, vii = αi , and therefore each coefficient αi is 0.

Theorem 7.3. Let V be a finite-dimensional real inner product space. If n := dim V then there is an orthonormal basis v1, v2, . . . , vn of V .

Proof. We prove this assertion by induction on the dimension n. If n = 0 there is nothing to prove. If n = 1 let v be any non-zero vector in V and let v1 := v/||v||. Then ||v1||^2 = hv1, v1i = (1/||v||^2)hv, vi = 1, so ||v1|| = 1 and certainly v1 is a basis for V.

Now suppose that n > 1 and that the result is known to be true for inner product spaces of dimension n − 1. Let v be any non-zero vector in V and let v1 := v/||v||, so that, as above, ||v1|| = 1. Let U := v1⊥ as defined in Theorem 7.2. By that theorem, V = Span (v1) ⊕ U. Clearly, the restriction of h−, −i to U (or perhaps more precisely to U × U) is bilinear, symmetric and positive definite. By induction hypothesis, there is an orthonormal basis v2, . . . , vn of U. Then, since U = v1⊥, we have hv1, vii = hvi, v1i = 0 for i = 2, . . . , n. Therefore v1, v2, . . . , vn is an orthonormal basis of V.
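The inductive argument above is close in spirit to the classical Gram–Schmidt process, which builds an orthonormal basis constructively by subtracting from each vector its components along the vectors already found and then normalising. Gram–Schmidt is not presented in these notes; the following Python/numpy sketch (for E^n with the dot product, starting from an arbitrary linearly independent list) is offered only as an illustration.

    import numpy as np

    def gram_schmidt(vectors):
        # Turn a list of linearly independent vectors in E^n into an orthonormal list.
        basis = []
        for v in vectors:
            w = v - sum((v @ b) * b for b in basis)   # remove components along earlier vectors
            basis.append(w / np.linalg.norm(w))       # normalise
        return basis

    vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
    onb = gram_schmidt(vs)
    print(np.round([[bi @ bj for bj in onb] for bi in onb], 10))   # the identity matrix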

Exercise 7.12. Find an orthonormal basis of E^2, one of the vectors of which is (3/5, 4/5).

Exercise 7.13. Find an orthonormal basis of the space R2[x] with the inner product defined in Example 7.4.

7.3 Orthogonal matrices

Definition 7.7. A matrix X ∈ Mn×n(R) is said to be orthogonal if XX^tr = In. Equivalently, X is orthogonal if X^−1 = X^tr.

Observation 7.4. Let X ∈ Mn×n(R). The following conditions are equivalent:
(1) XX^tr = In, that is, X is orthogonal;
(2) X^tr X = In;
(3) the rows of X form an orthonormal basis of E^n;
(4) the columns of X form an orthonormal basis of E^n_col;
(5) for all x, y ∈ E^n, xX · yX = x · y.

The equivalence of (1) and (2) is the fact that if A, B ∈ Mn×n(F) (for any field F), then AB = In if and only if BA = In. For the equivalence of (1) and (3) note that the (i, j) entry of XX^tr is the dot product of the ith row and the jth row of X. Similarly, for the equivalence of (2) and (4) note that the (i, j) entry of X^tr X is the dot product of the ith column and the jth column of X. For the equivalence of (5) with (1)–(4) we use the description of x · y as xy^tr. If (1) holds then for all x, y ∈ E^n

    x · y = xy^tr = xIny^tr = x(XX^tr)y^tr = (xX)(X^tr y^tr) = (xX)(yX)^tr,

that is, x · y = xX · yX. Conversely, if (5) holds then eiX · ejX = ei · ej for i, j ∈ {1, 2, . . . , n}, where e1, e2, . . . , en is the standard basis of R^n. Now eiX is the ith row of X, so if (5) holds then the rows of X form an orthonormal set in E^n, which, since there are n rows, is then an orthonormal basis. Thus (3) may be derived from (5), and so all of (1)–(5) carry the same information.

Note. What is really important here is clause (5). It says that X is orthogonal if and only if the map E^n → E^n given by x 7→ xX preserves the inner product, hence preserves length and angle—it is what is technically called an isometry of euclidean space E^n.
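A quick numerical check of conditions (1), (2) and (5) for a rotation matrix (a sketch in Python with numpy; the angle and the test vectors are arbitrary):

    import numpy as np

    t = 0.7   # an arbitrary angle
    X = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])

    print(np.allclose(X @ X.T, np.eye(2)))   # (1): X X^tr = I
    print(np.allclose(X.T @ X, np.eye(2)))   # (2): X^tr X = I

    x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
    print(np.isclose((x @ X) @ (y @ X), x @ y))   # (5): xX . yX = x . y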

Exercise 7.14. Show that if X and Y are n × n real orthogonal matrices then also X−1 and XY are orthogonal.

Exercise 7.15. (a) Show that the matrix

    (  cos θ  sin θ )
    ( −sin θ  cos θ )

is an orthogonal matrix.
(b) Find all 2 × 2 real orthogonal matrices.

7.4 The Cauchy–Schwarz Inequality

Theorem 7.5 [The Cauchy–Schwarz Inequality]. Let V be a real inner product space and let u, v ∈ V. Then |hu, vi| ≤ ||u|| ||v||. Equality holds if and only if u, v are linearly dependent.

Proof. The assertion is trivially true if u = 0. Assume therefore that u ≠ 0. For t ∈ R define f(t) := ||tu + v||^2. Using the bilinearity and symmetry of h−, −i we find that

    f(t) = htu + v, tu + vi = t^2 hu, ui + 2t hu, vi + hv, vi = ||u||^2 t^2 + 2hu, vi t + ||v||^2.

Since f(t) ≥ 0 for all t ∈ R, the discriminant of this quadratic expression cannot be positive. That is, 4hu, vi^2 ≤ 4||u||^2 ||v||^2. Cancelling 4 and taking square roots we find that |hu, vi| ≤ ||u|| ||v||, as promised.

If equality holds then the equation f(t) = 0 has a (repeated) root, α say. Thus αu + v = 0, so u, v are linearly dependent. Conversely, if u, v are linearly dependent, so that (given that u ≠ 0) v = λu for some λ ∈ R, then |hu, vi| = |λ| hu, ui = |λ| ||u||^2 while

    ||u|| ||v|| = √(hu, ui hv, vi) = √(λ^2 hu, ui^2) = |λ| ||u||^2,

that is, equality holds.

Note. Going back to Definition 7.5, we see that the Cauchy–Schwarz Inequality guaran- tees that the notion of angle in an inner product space makes good sense.
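A numerical illustration of the inequality and of the equality case (a sketch in Python with numpy; the vectors are chosen at random, and the small tolerance only guards against rounding):

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.standard_normal(5)
    v = rng.standard_normal(5)

    lhs = abs(u @ v)
    rhs = np.linalg.norm(u) * np.linalg.norm(v)
    print(lhs <= rhs + 1e-12)   # |<u, v>| <= ||u|| ||v||

    w = -3.0 * u                # w is linearly dependent on u
    print(np.isclose(abs(u @ w), np.linalg.norm(u) * np.linalg.norm(w)))   # equality case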

Exercise 7.16. Let a, b, c be positive real numbers and let a′, b′, c′ be the same numbers listed in any other order. Show that aa′ + bb′ + cc′ ≤ a^2 + b^2 + c^2. Is the assumption that a, b, c are positive really necessary? Can this be generalised to n real numbers?

Exercise 7.17. Let x1, x2, . . . , xn be real numbers. Show that (x1 + x2 + ··· + xn)^2 ≤ n(x1^2 + x2^2 + ··· + xn^2). Under what circumstances does equality occur?

Exercise 7.18. Let a, b, c be real numbers. Show that the maximum value of ax + by + cz when x^2 + y^2 + z^2 = 1 is √(a^2 + b^2 + c^2). At what points is this maximum achieved?

7.5 Complex inner product spaces

In this final section the field of scalars is C. The section is designed to respond to the sentence in the lecture synopsis “Comment on (positive definite) Hermitian forms”.

Definition 7.8. Let V be a complex vector space. A function h−, −i : V × V → C is said to be a sesquilinear form (one-and-a-half linear) if for all u, v, w ∈ V and all α, β ∈ C
(1) hαu + βv, wi = αhu, wi + βhv, wi and
(2) hu, vi = \overline{hv, ui} (the bar denotes complex conjugation).
Note that then hv, vi ∈ R for all v ∈ V. The form is said to be positive definite if hv, vi > 0 for all v ∈ V \ {0}. A complex inner product space is a complex vector space equipped with a positive definite sesquilinear form. Positive definite sesquilinear forms are usually called Hermitian forms in honour of Charles Hermite (1822–1901). Complex inner product spaces are often called Hermitian spaces.
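For concreteness, the standard Hermitian form on C^n (the form of Exercise 7.20 below) looks like this in code; a sketch in Python with numpy, with the conjugation on the second argument made explicit.

    import numpy as np

    def herm(x, y):
        # Standard Hermitian form on C^n: <x, y> = x_1*conj(y_1) + ... + x_n*conj(y_n).
        return np.sum(x * np.conj(y))

    x = np.array([1 + 2j, 3j])
    y = np.array([2 - 1j, 1 + 1j])

    print(herm(x, y))                                   # a complex number in general
    print(herm(x, x))                                   # real and positive for x != 0
    print(np.isclose(herm(x, y), np.conj(herm(y, x))))  # conjugate symmetry, condition (2)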

Some of the properties of Hermitian forms are developed in the following exercises. These go a little beyond the scope of the Prelim Linear Algebra I course. Try them if they interest you; otherwise ignore them for now. Notation is the same in all of them: V is a complex vector space, and h−, −i is a sesquilinear form on V .

Exercise 7.19. Show that hu, αv + βwi = \overline{α}hu, vi + \overline{β}hu, wi.

Exercise 7.20. Show that if h(x1, . . . , xn), (y1, . . . , yn)i = x1\overline{y1} + ··· + xn\overline{yn} then h−, −i is a Hermitian form on C^n.

Exercise 7.21. Now suppose that V is finite-dimensional over C and the form h−, −i is Hermitian. Let u ∈ V and define u⊥ := {v ∈ V | hv, ui = 0}. Show that u⊥ ≤ V and V = Span (u) ⊕ u⊥. (Compare Theorem 7.2.)

Exercise 7.22. Now suppose that V is n-dimensional over C and the form h−, −i is Hermitian. Show that there is a basis v1, v2, . . . , vn of V such that hvi, vji = 1 if i = j and hvi, vji = 0 if i ≠ j. (Compare Theorem 7.3.)

Exercise 7.23. Suppose that V is finite-dimensional over C and the form h−, −i is Hermitian. Let U be a subspace of V and define U⊥ := {v ∈ V | hv, ui = 0 for all u ∈ U}. Show that U⊥ ≤ V and V = U ⊕ U⊥. (Compare Exercise 7.10.)

*****

ΠMN: Queen’s: Revised 22 March 2017
