
and Products for Physicists

Nadir Jeevanjee

NOTE: This is a rough draft and I would appreciate feedback. Please send any and all questions, comments, criticisms, or suggestions to [email protected]. Last updated 1/10/2007.

Copyright © 2006 Nadir Jeevanjee

Contents

Preface

1 Vector Spaces
1.1 Definition and Examples
1.2 Span, Linear Independence and Bases
1.3 Components
1.4 Linear Operators
1.5 Dual Spaces
1.6 Non-Degenerate Hermitian Forms
1.7 Non-Degenerate Hermitian Forms and Dual Spaces
Chapter 1 Problems

2 Tensors
2.1 Definition and Examples
2.2 Change of Basis
2.3 The Tensor Product - Definition and Properties
2.4 Tensor Products of V and V∗
2.5 Applications of the Tensor Product in Classical Physics
2.6 Applications of the Tensor Product in Quantum Physics
2.7 Symmetric and Antisymmetric Tensors
Chapter 2 Problems

3 Groups, Lie Groups and Lie Algebras
3.1 Groups - Definition and Examples
3.2 Homomorphism and Isomorphism
Chapter 3 Problems

Dirac Dictionary

Bibliography

Preface

This text is written at the beginning graduate level, and aims to demystify tensors and provide a unified framework for understanding them in all the different contexts in which they arise in physics. The word tensor is ubiquitous in physics (stress tensor, moment of inertia tensor, field tensor, metric tensor, tensor product, etc. etc.) and yet tensors are rarely defined carefully (if at all), and the definition usually has to do with transformation properties, making it difficult to get a feel for these objects. Furthermore, physics texts at the beginning graduate level usually only deal with tensors in their component form, so students wonder what the difference is between a second rank tensor and a matrix, and why new, enigmatic terminology is introduced for something they've already seen. The irony of this situation is that a proper understanding of tensors doesn't require much more than what most students encounter as undergraduates, and the clarity gained pays dividends far outweighing the modest investment. This text introduces just enough linear algebra to set the stage for tensors, with plenty of examples to keep the discussion grounded. After laying the necessary linear algebraic foundations, we give the modern (component-free) definition of tensors, followed by applications. Exercises and problems are included, and the exercises in particular should be done as they arise, or at least carefully considered, as they often fill out the text and provide good practice in using the definitions. It should be said that this text aims to be simultaneously intuitive and rigorous. Thus, although much of the language (especially in the examples) is informal, almost all the definitions given are precise and are the same as one would find in a pure math text. This may put off the less mathematically inclined reader; I hope, however, that such a reader will work through his or her discomfort and develop the necessary mathematical sophistication, as the results will be well worth it. As for prerequisites, it is assumed the reader has been through the usual undergraduate physics curriculum, including a "mathematical methods for physicists" course (with at least a cursory treatment of vectors and matrices), as well as the standard upper division courses in classical mechanics, quantum mechanics, and relativity. Any

undergraduate versed in those topics, as well as any graduate student in physics, should be able to read this text. To undergraduates who are eager to learn about tensors but haven't yet completed the standard curriculum, I apologize; many of the examples and practically all of the motivation for the text come from those courses, and to assume no knowledge of those topics would preclude discussion of the many applications that motivated me to write this text. Hopefully, such students will return to this text once they have completed their upper-division coursework, and find it useful then. Besides the aforementioned prerequisites I've also indulged in the use of some very basic mathematical shorthand for brevity's sake; a guide is below. Enjoy!

Some Mathematical Shorthand

R : the set of real numbers
C : the set of complex numbers
Z : the set of positive and negative integers
∈ : "is an element of", "an element of"; e.g. 2 ∈ R reads "2 is an element of the real numbers"
∉ : "is not an element of"
∀ : "for all"
⊂ : "is a subset of", "a subset of"
≡ : denotes a definition
A × B : the set {(a, b)} of all ordered pairs where a ∈ A, b ∈ B, referred to as the cartesian product of the sets A and B; extends in the obvious way to n-fold products A1 × . . . × An
Rn : R × . . . × R (n times)
Cn : C × . . . × C (n times)

Chapter 1

Vector Spaces

Since tensors are basically a special class of functions defined on vector spaces, we must have a good foundation in linear algebra before discussing them. In particular, one needs a little bit more linear algebra than is covered in most sophomore or junior level linear algebra/ODE courses. This chapter starts with the familiar material about vectors, bases, linear operators etc. but eventually moves on to slightly more sophisticated topics that are essential for understanding tensors in physics. As we lay this groundwork, hopefully the reader will also find our slightly more abstract viewpoint useful in clarifying the nature of many of the objects in physics he/she has already encountered.

1.1 Definition and Examples

We begin with the definition of an abstract vector space. We're taught as undergraduates to think of vectors as arrows with a head and a tail, or as ordered triples of real numbers, but physics, and especially quantum mechanics, requires a more abstract notion of vectors. Before reading the definition of an abstract vector space, keep in mind that the definition is supposed to distill all the essential features of vectors as we know them (like addition and scalar multiplication) while detaching the notion of a vector space from specific constructs, like ordered n-tuples of real or complex numbers (denoted as Rn and Cn respectively). The mathematical utility of this is that much of what we know about vector spaces depends only on the essential properties of addition and scalar multiplication, not on other properties particular to Rn or Cn. If we work in the abstract framework and then come across other mathematical objects that don't look like Rn or Cn but that are abstract vector spaces, then most everything we know about Rn and Cn will apply to these spaces as well.

6 Physics also forces us to use the abstract definition since many quantum mechanical vector spaces are infinite-dimensional and cannot be viewed as Cn or Rn for any n. An added dividend of the abstract approach is that we will learn to think about vector spaces independently of any basis, which will prove very useful. That said, an abstract vector space is a set V (whose elements are called vectors), together with a set of scalars C (for us, C is always R or C) and operations of addition and scalar multiplication that satisfy the following axioms:

1. v + w = w + v for all v, w in V (Commutativity)
2. v + (w + x) = (v + w) + x for all v, w, x in V (Associativity)
3. There exists a unique vector 0 in V such that v + 0 = v ∀v ∈ V
4. ∀v ∈ V there is a unique vector −v such that v + (−v) = 0
5. c(v + w) = cv + cw ∀v, w ∈ V, c ∈ C (Distributivity)
6. 1v = v for all v in V
7. (c1 + c2)v = c1v + c2v for all scalars c1, c2 and vectors v
8. (c1c2)v = c1(c2v) for all scalars c1, c2 and vectors v

Some parts of the definition may seem tedious or trivial, but they are just meant to ensure that the addition and scalar multiplication operations behave the way we expect them to. In determining whether a set is a vector space or not, one is usually most concerned with defining addition in such a way that the set is closed under addition and that axioms 3 and 4 are satisfied; most of the other axioms are so natural and obviously satisfied that one, in practice, rarely bothers to check them. That said, let's look at some examples from physics, most of which will recur throughout the text.

Example 1.1 Rn

This is the most basic example of a vector space, and the one on which the abstract definition is modeled. Addition and scalar multiplication are defined in the usual way: for v = (v1, v2, . . . , vn), w = (w1, w2, . . . , wn) in Rn, we have

(v1, v2, . . . , vn) + (w1, w2, . . . , wn) = (v1 + w1, v2 + w2, . . . , vn + wn)   (1.1)

and

c(v1, v2, . . . , vn) = (cv1, cv2, . . . , cvn)   (1.2)

and the reader should check that the axioms are satisfied. These spaces, of course, are basic in physics; R3 is the usual 3-dimensional space we live in, R4 is spacetime

in special relativity, and Rn for higher n occurs in classical physics as configuration spaces for multiparticle systems (i.e. R6 is the configuration space in the classic two-body problem, as you need six coordinates to specify the position of two particles in three-dimensional space).

Example 1.2 Cn

This is another basic example - addition and scalar multiplication are defined as for Rn, and the axioms are again straightforward to verify. Note, however, that Cn is a complex vector space, i.e. the set of scalars C in the definition is C so scalar multiplication by complex numbers is defined, whereas Rn is only a real vector space. This seemingly pedantic distinction can often end up being significant. Cn occurs in physics primarily as the ket space for finite-dimensional quantum mechanical systems, such as particles with spin but without translational degrees of freedom. For instance, a spin 1/2 particle fixed in space has ket space C2, and more generally a fixed particle with spin n/2 has ket space Cn+1.

Example 1.3 Mn(R) and Mn(C), n × n matrices with real or complex entries

The vector space structure of Mn(R) and Mn(C) is similar to that of Rn and Cn: denoting the entry in the ith row and jth column of a matrix A as Aij, we define addition and (real) scalar multiplication for A, B ∈ Mn(R) by

(A + B)ij = Aij + Bij (1.3)

(cA)ij = cAij   (1.4)

i.e. addition and scalar multiplication are done component-wise. The same definitions are used for Mn(C), which is of course a complex vector space. The reader can again check that the axioms are satisfied. Though these vector spaces don't appear explicitly in physics very often, they have many important subspaces, one of which we consider in the next example.

Example 1.4 Hn(C), n × n Hermitian matrices with complex entries

Hn(C), the set of all n × n hermitian matrices,1 is obviously a subset of Mn(C), and in fact it is a subspace of Mn(C) in that it forms a vector space itself. To show this it is not necessary to verify all of the axioms, since most of them are satisfied by virtue

1 hermitian matrices being those which satisfy A† ≡ (A^T)∗ = A, where superscript T denotes the transpose and superscript '∗' denotes complex conjugation of the entries.

of Hn(C) being a subset of Mn(C); for instance, addition and scalar multiplication in Hn(C) are just given by the restriction of those operations in Mn(C) to Hn(C), so the commutativity of addition and the distributivity of scalar multiplication over addition follow immediately. What does remain to be checked is that Hn(C) is closed under addition and contains the zero "vector" (in this case, the zero matrix), both of which are easily verified. One interesting thing about Hn(C) is that even though the entries of its matrices can be complex, it does not form a complex vector space; multiplying a hermitian matrix by i yields an anti-hermitian matrix, so Hn(C) is not closed under complex scalar multiplication. As far as physical applications go, we know that physical observables in quantum mechanics are represented by hermitian operators, and if we are dealing with a finite dimensional ket space such as those mentioned in Example 1.2 then observables can be represented as elements of Hn(C). As an example one can take a fixed spin 1/2 particle whose ket space is C2; the angular momentum operators are then represented as L_i = \frac{1}{2}\sigma_i, where the σ_i are the hermitian Pauli matrices

\sigma_x \equiv \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y \equiv \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z \equiv \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.   (1.5)
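As a quick computational aside (not part of the text's development), the two claims above, that the Pauli matrices are hermitian and that multiplying a hermitian matrix by i takes you out of Hn(C), can be checked in a few lines of Python; the helper name is_hermitian is ours, purely for illustration.

import numpy as np

# Quick check: the Pauli matrices of (1.5) are hermitian, while i times a
# hermitian matrix is anti-hermitian, so Hn(C) is not closed under complex
# scalar multiplication.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def is_hermitian(A):
    return np.allclose(A.conj().T, A)

for s in (sigma_x, sigma_y, sigma_z):
    assert is_hermitian(s)            # each sigma_i lies in H2(C)
    assert not is_hermitian(1j * s)   # but i*sigma_i does not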

Example 1.5 L2([a, b]), Square-integrable complex-valued functions on an interval

This example is fundamental in quantum mechanics. A complex-valued function f on [a, b] ⊂ R is said to be square-integrable if

\int_a^b |f(x)|^2 \, dx < \infty.   (1.6)

Defining addition and scalar multiplication in the obvious way,

(f + g)(x) = f(x) + g(x)   (1.7)
(cf)(x) = cf(x),   (1.8)

and taking the zero element to be the function which is identically zero (i.e. f(x) = 0 for all x) yields a complex vector space. (Note that if we considered only real-valued functions then we would only have a real vector space). Verifying the axioms is straightforward though not entirely trivial, as one must show that the sum of two square-integrable functions is again square-integrable (Problem 1). This vector space arises in quantum mechanics as the set of normalizable wavefunctions for a particle in a one-dimensional infinite potential well. Later on we'll consider the more general

scenario where the particle may be unbound, in which case a = −∞ and b = ∞ and the above definitions are otherwise unchanged. This vector space is denoted as L2(R).

Example 1.6 Y^l_m(θ, φ), the Spherical Harmonics

Consider the set Pl(R3) of all complex-coefficient polynomial functions on R3 of fixed degree l, i.e. all linear combinations of functions of the form c x^i y^j z^k where i + j + k = l and c ∈ C. Addition and (complex) scalar multiplication are defined in the usual way and the axioms are again easily verified, so Pl(R3) is a vector space. Now consider the vector subspace Hl(R3) ⊂ Pl(R3) of harmonic degree l polynomials, i.e. degree l polynomials satisfying ∆f = 0, where ∆ is the usual three dimensional Laplacian. The reader may be surprised to learn that the spherical harmonics of degree l are essentially elements of Hl(R3)! To see the connection, consider the subset {x + iy, z, x − iy} ⊂ H1(R3). These functions are linear, hence clearly harmonic since ∆ is a second-order differential operator. Writing them in spherical coordinates with polar angle θ and azimuthal angle φ gives {r e^{iφ} sin θ, r cos θ, r e^{−iφ} sin θ}, which up to normalization and the factor of r give the usual spherical harmonics with l = 1. The l = 2 case is treated in exercise 1.1 below. Spherical harmonics are discussed further throughout this text; for a complete discussion, see [ST].
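For readers who like to verify such claims by machine, here is a small sympy sketch (ours, not the text's) checking that the l = 1 functions are killed by the Laplacian and printing their spherical-coordinate forms; up to Euler's formula the printed expressions are r e^{±iφ} sin θ and r cos θ.

import sympy as sp

# Check that x+iy, z, x-iy are harmonic and substitute spherical coordinates.
x, y, z = sp.symbols('x y z', real=True)
r, theta, phi = sp.symbols('r theta phi', positive=True)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

funcs = [x + sp.I*y, z, x - sp.I*y]
assert all(laplacian(f) == 0 for f in funcs)

to_spherical = {x: r*sp.sin(theta)*sp.cos(phi),
                y: r*sp.sin(theta)*sp.sin(phi),
                z: r*sp.cos(theta)}
for f in funcs:
    print(sp.simplify(f.subs(to_spherical)))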

Exercise 1.1 Consider the functions

(x + iy)^2,  z(x + iy),  x^2 + y^2 − 2z^2,  z(x − iy),  (x − iy)^2  ∈ P2(R3).   (1.9)

Verify that they are in fact harmonic, and then write them in spherical coordinates and divide by r2 to obtain, up to normalization, the familiar spherical harmonics for l = 2.

Non-example GL(n, R), invertible n × n matrices

GL(n, R), the subset of Mn(R) consisting of invertible n × n matrices, is not a vector space though it seems like it could be. Why not?

1.2 Span, Linear Independence and Bases

The notion of a basis is probably familiar to most readers, at least intuitively: it’s a set of vectors out of which we can ‘make’ all the other vectors in a given vector space V . In this section we’ll make this idea precise and describe bases for some of the examples in the previous section.

First, we need the notion of the span of a set S = {v_1, v_2, . . . , v_k} ⊂ V, denoted Span{v_1, v_2, . . . , v_k} or Span S: this is just the set of all vectors of the form c^1 v_1 + c^2 v_2 + . . . + c^k v_k. Such vectors are known as linear combinations of the v_i, so Span S is just the set of all linear combinations of the vectors in S. For instance, if S = {(1, 0, 0), (0, 1, 0)} ⊂ R3, then Span S is just the set of all vectors of the form (c1, c2, 0) with c1, c2 ∈ R. If S has infinitely many elements then the span of S is again all the linear combinations of vectors in S, though in this case the linear combinations can have an arbitrarily large (but finite) number of terms.2

Next we need the notion of linear dependence: a (not necessarily finite) set of vectors S is said to be linearly dependent if there exist distinct vectors v_1, v_2, . . . , v_k in S and scalars c^1, c^2, . . . , c^k, not all of which are 0, such that

c^1 v_1 + c^2 v_2 + . . . + c^k v_k = 0.   (1.10)

What this definition really means is that at least one vector in S can be written as a linear combination of the others, and in that sense is dependent (the reader should take a second to convince himself of this). If S is not linearly dependent then we say it is linearly independent, and in this case no nonzero vector in S can be written as a linear combination of any others. For instance, the set S = {(1, 0, 0), (0, 1, 0), (1, 1, 0)} ⊂ R3 is linearly dependent whereas the set S′ = {(1, 0, 0), (0, 1, 0), (0, 1, 1)} is linearly independent, as the reader can check.

With these definitions in place we can now define a basis for a vector space V as an ordered linearly independent set B ⊂ V whose span is all of V. This means, roughly speaking, that a basis has enough vectors to make all the others, but no more than that. When we say that B = {v_1, . . . , v_k} is an ordered set we mean that the order of the v_i is part of the definition of B, so another basis with the same vectors but a different order is considered inequivalent. The reasons for this will become clear as we progress. One can show3 that all finite bases must have the same number of elements, so we define the dimension of a vector space V, denoted dim V, to be the number of elements of any finite basis. If no finite basis exists, then we say that V is infinite dimensional.
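As a small computational aside (ours), the claims about S and S′ above are easy to test numerically: a finite set of vectors in Rn is linearly independent exactly when the matrix whose rows are those vectors has rank equal to the number of vectors.

import numpy as np

# Rank test for the two example sets from the text.
S       = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])   # claimed dependent
S_prime = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])   # claimed independent

print(np.linalg.matrix_rank(S))         # 2 < 3, so S is linearly dependent
print(np.linalg.matrix_rank(S_prime))   # 3, so S' is linearly independent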

Exercise 1.2 Given a vector v and a finite basis B = {ei}i=1...n, show that the expression of v as a linear combination of the ei is unique.

2 We don't generally consider infinite linear combinations like \sum_{i=1}^{\infty} c^i v_i = \lim_{N \to \infty} \sum_{i=1}^{N} c^i v_i because in that case we would need to consider whether the limit exists, i.e. whether the sum converges in some sense. More on this later.
3 See [HK].

Example 1.7 Rn and Cn

Rn has the following natural basis, also known as the standard basis:

{(1, 0, . . . , 0), (0, 1, . . . , 0), . . . , (0, . . . , 1)}.   (1.11)

The reader should check that this is indeed a basis, and thus that the dimension of Rn is, unsurprisingly, n. The same set serves as a basis for Cn, where of course now the linear combination coefficients c^i are allowed to be complex numbers. Note that although this basis is the most natural, there are infinitely many other perfectly respectable bases out there; the reader should check, for instance, that {(1, 1, 0, . . . , 0), (0, 1, 1, 0, . . . , 0), . . . , (0, . . . , 1, 1), (1, 0, . . . , 0, 1)} is also a basis when n is odd (and, as an instructive check, fails to be one when n is even).

Example 1.8 Mn(R) and Mn(C)

Let Eij be the n × n matrix with a 1 in the ith row, jth column and zeros everywhere else. Then the reader can check that {Eij}_{i,j=1...n} is a basis for both Mn(R) and Mn(C), and that both spaces have dimension n^2. Again, there are other nice bases out there; for instance, the symmetric matrices Sij ≡ Eij + Eji, i ≤ j, and antisymmetric matrices Aij ≡ Eij − Eji, i < j, taken together also form a basis for both Mn(R) and Mn(C).

Exercise 1.3 Let Sn(R),An(R) be the sets of n×n symmetric and antisymmetric matrices, respectively. Show that both are real vector spaces, compute their , and check that dim Sn(R) + dim An(R) = dim Mn(R), as expected.

Example 1.9 H2(C)

Let's find a basis for H2(C). First, we need to know what a general element of H2(C) looks like. In terms of complex components, the condition A = A† reads

\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \bar{a} & \bar{c} \\ \bar{b} & \bar{d} \end{pmatrix}   (1.12)

where the bar denotes complex conjugation. This means that a, d ∈ R and b = c̄, so in terms of real numbers we can write a general element of H2(C) as

\begin{pmatrix} t + z & x - iy \\ x + iy & t - z \end{pmatrix} = tI + xσx + yσy + zσz   (1.13)

where I is the identity matrix and σx, σy, σz are the Pauli matrices defined in (1.5). The reader can easily check that the set B = {I, σx, σy, σz} is linearly independent, and since (1.13) shows that B spans H2(C), B is a basis for H2(C). We also see that H2(C) has (real) dimension 4.
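A short numpy sketch of this decomposition (the specific matrix A below is our own example, and the half-trace formula anticipates the inner product of exercise 1.11, with respect to which {I, σx, σy, σz} is orthonormal):

import numpy as np

# Extract the coefficients t, x, y, z of a hermitian 2x2 matrix in the basis
# {I, sigma_x, sigma_y, sigma_z} as (1/2) Tr(basis_element @ A), then verify (1.13).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A = np.array([[2.0, 3 - 4j], [3 + 4j, -1.0]])              # a hermitian matrix
t, x, y, z = [np.trace(B @ A).real / 2 for B in (I2, sx, sy, sz)]
assert np.allclose(A, t*I2 + x*sx + y*sy + z*sz)           # reproduces (1.13)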

Exercise 1.4 Using the matrices Eij and Aij from example 1.8, construct a basis for Hn(C) and compute its dimension.

Example 1.10 Y^l_m(θ, φ)

We saw in the previous section that the Y^l_m, or more precisely r^l Y^l_m, are elements of the vector space Hl(R3). What's more is that the set {r^l Y^l_m}_{−l≤m≤l} is actually a basis for Hl(R3). In the case l = 1 this is clear since H1(R3) = P1(R3) and clearly {x + iy, x − iy, z} is a basis for P1(R3). For l > 1 proving our claim requires a little more effort; see Problem 2. Another simpler basis for P1(R3) would be the cartesian basis {x, y, z}; physicists use the spherical harmonic basis because those functions are eigenfunctions of the orbital angular momentum operator Lz, which on Hl(R3) is represented by Lz = −i(x ∂/∂y − y ∂/∂x). We shall discuss the relationship between the two bases in detail later.

Not Quite Example L2([−a, a])

From doing 1-D problems in quantum mechanics one already 'knows' that the set {e^{inπx/a}}_{n∈Z} is a basis for L2([−a, a]). There's a problem, however; we're used to taking infinite linear combinations of these basis vectors, but our definition above only allows for finite linear combinations. What's going on here? It turns out that L2([−a, a]) has more structure than your average vector space: it is an infinite dimensional Hilbert space, and for such spaces we have a generalized definition of a basis, one that allows for infinite linear combinations. We will discuss Hilbert spaces in section 1.6.

1.3 Components

One of the most useful things about introducing a basis for a vector space is that it allows us to write elements of the vector space as n-tuples, in the form of either column or row vectors, as follows: given v ∈ V and a basis B = {e_i}_{i=1...n} for V, we can write

v = \sum_{i=1}^{n} v^i e_i   (1.14)

for some numbers v^i, called the components of v with respect to B. We can then represent v by the column vector, denoted [v]_B, as

[v]_B = \begin{pmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{pmatrix}   (1.15)

or the row vector

[v]_B^T = (v^1, v^2, . . . , v^n)   (1.16)

where the superscript T denotes the usual transpose of a vector and where the subscript B just reminds us which basis the components are referred to, and will be dropped if there is no ambiguity. With a choice of basis, then, every n-dimensional vector space can be made to 'look like' Rn or Cn. Writing vectors in this way greatly facilitates computation, as we'll see. One must keep in mind, however, that vectors exist independently of any chosen basis, and that their expressions as row or column vectors depend very much on a choice of basis B. This will be evident in the examples.

Example 1.11 E3, 3-D Euclidean Space

We said in Example 1.1 that R3 represented the 3-dimensional space we inhabit, but this is not quite true. After all, given (x, y, z) ∈ R3, what point of space is this referring to? The space we live in is not quite R3 but rather 3-dimensional Euclidean space E3, defined as follows: pick a point in space, call it the origin. Then there exists a directed line segment (arrow) from the origin to every other point in space. We identify the points in space with the arrows pointing to them from the origin. These arrows can then be added together head to tail and multiplied by real numbers by scaling, yielding a real vector space. If we pick a basis of three vectors, call it K = {x̂, ŷ, ẑ}, we can then, for any point in space, write the corresponding vector v = x x̂ + y ŷ + z ẑ as [v]_K^T = (x, y, z), hence identifying E3 with R3 as described above.

Example 1.12 Rigid Body Motion

One area of physics where the distinction between a vector and its expression as an ordered triple is crucial is rigid body motion. In this setting our vector space is E3 and we usually deal with two bases, an arbitrary but fixed set of space axes K′ = {x̂′, ŷ′, ẑ′}

and a time-dependent set of body axes K = {x̂(t), ŷ(t), ẑ(t)} which is fixed relative to the rigid body. When we write down vectors in E3, like the angular momentum vector L or the angular velocity vector ω, we must keep in mind what basis we are using, as the component expressions will differ drastically depending on the choice of basis. For example, if there is no external torque on a rigid body, [L]_{K′} will be constant whereas [L]_K will in general be time-dependent.

Example 1.13 L2([−a, a])

We know from experience in quantum mechanics that all square-integrable functions on an interval [−a, a] have an expansion4 f = \sum_{m=-\infty}^{\infty} c_m e^{imπx/a} in terms of the 'basis' {e^{imπx/a}}_{m∈Z}. This expansion is known as the Fourier series of f, and we see that the c_m, commonly known as the Fourier coefficients, are nothing but the components of the vector f in the basis {e^{imπx/a}}_{m∈Z}.
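As an illustrative aside (our own, with a sample function of our choosing), the statement "Fourier coefficients are components" can be seen numerically: approximate c_m by the standard integral formula (it reappears as (1.44) below) and partially rebuild f from a few of its components.

import numpy as np

# Fourier components of f(x) = x^2 on [-a, a], and a truncated reconstruction.
a = 1.0
x = np.linspace(-a, a, 4001)
dx = x[1] - x[0]
f = x**2

def c(m):
    # c_m = (1/2a) * integral of e^{-i m pi x / a} f(x) dx, via a Riemann sum
    return np.sum(np.exp(-1j * m * np.pi * x / a) * f) * dx / (2 * a)

f_approx = sum(c(m) * np.exp(1j * m * np.pi * x / a) for m in range(-5, 6))
print(np.max(np.abs(f - f_approx)))   # truncation error from keeping only |m| <= 5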

1.4 Linear Operators

One of the basic notions in linear algebra, fundamental in quantum mechanics, is that of a linear operator. A linear operator on a vector space V is a function T from V to itself satisfying the linearity condition

T(cv + w) = cT(v) + T(w).   (1.17)

Sometimes we write Tv instead of T(v). The reader should check that the set of all linear operators on V forms a vector space, denoted T(V). The reader has doubtless seen many examples of linear operators: for instance, we can interpret a real n × n matrix as a linear operator on Rn that acts on column vectors by matrix multiplication. Thus Mn(R) (and, similarly, Mn(C)) can be viewed as vector spaces whose elements are themselves linear operators. In fact, that was exactly how we interpreted the vector subspace H2(C) ⊂ M2(C) in example 1.4; in that case, we identified elements of H2(C) as the quantum-mechanical angular momentum operators. There are numerous other examples of quantum mechanical linear operators - for instance, the familiar position and momentum operators x̂ and p̂ act on L2([−a, a]) by

x̂f(x) = xf(x)   (1.18)
p̂f(x) = \frac{\hbar}{i} \frac{\partial f}{\partial x}.   (1.19)

Another class of less familiar examples is given below.

4This fact is proved in most real analysis books, see [R].

Example 1.14 T(V) acting on T(V)

We are familiar with linear operators taking vectors into vectors, but they can also be used to take linear operators into linear operators, as follows: Given A, B ∈ T (V ), we can define a linear operator adA ∈ T (T (V )) acting on B by

ad_A(B) ≡ [A, B]   (1.20)

where [· , ·] indicates the commutator. This action of A on T(V) is called the adjoint action or adjoint representation. The adjoint representation has important applications in quantum mechanics; for instance, the Heisenberg picture emphasizes T(V) rather than V and interprets the Hamiltonian as an operator in the adjoint representation. In fact, for any observable A the Heisenberg equation of motion reads5

\frac{dA}{dt} = i\, ad_H(A).   (1.21)
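A tiny sketch of the adjoint action (1.20) in code (the example matrices are ours): with A = σz and B = σx, ad_A(B) reproduces the familiar commutation relation [σz, σx] = 2iσy.

import numpy as np

# ad_A(B) = [A, B] = AB - BA, acting here on 2x2 matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ad(A):
    # ad_A is itself a linear operator, here acting on 2x2 matrices
    return lambda B: A @ B - B @ A

assert np.allclose(ad(sz)(sx), 2j * sy)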

One important property of a linear operator T is whether or not it is invertible, i.e. whether there exists a linear operator T^{−1} such that T T^{−1} = T^{−1} T = I, where I is the identity operator.6 The reader may recall that, in general, an inverse for a map T exists if and only if T is both 1-1, meaning

T (v) = T (w) =⇒ v = w, (1.22) and onto, meaning that ∀ w there exists v such that T (v) = w. If this is unfamiliar the reader should take a moment to convince himself of this. In the case of a linear operator on a vector space, these two conditions are actually equivalent and turn out also (see exercise 1.6 below) to be equivalent to the statement

T (v) = 0 =⇒ v = 0, (1.23) so T is invertible if and only if the only vector it sends to 0 is the zero vector.

Exercise 1.5 Let T be a linear operator on a vector space V . Show that T being 1-1 is equivalent to T being onto. Feel free to introduce a basis to assist you in the proof.

5 Normally there would be a factor of ℏ in the denominator on the right hand side of (1.21), but here and below we set ℏ = 1.
6 Throughout this text, I denotes either the identity operator or the identity matrix; it will be clear from context which is meant.

Exercise 1.6 Suppose T(v) = 0 =⇒ v = 0. Show that this is equivalent to T being 1-1, which by the previous exercise is equivalent to T being 1-1 and onto, which is then equivalent to T being invertible.

An important point to keep in mind is that a linear operator is not the same thing as a matrix; just as with vectors, the identification can only be made once a basis is chosen. For operators on finite-dimensional spaces this is done as follows: choose a basis B = {ei}i=1...n. Then the action of T is determined by its action on the basis vectors,

T(v) = T\left( \sum_{i=1}^{n} v^i e_i \right) = \sum_{i=1}^{n} v^i T(e_i) = \sum_{i,j=1}^{n} v^i T_i^j e_j   (1.24)

where the numbers T_i^j, again called the components of T with respect to B,7 are defined by T(e_i) = \sum_{j=1}^{n} T_i^j e_j. We then have

[v]_B = \begin{pmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{pmatrix} \quad \text{and} \quad [T(v)]_B = \begin{pmatrix} \sum_i v^i T_i^1 \\ \sum_i v^i T_i^2 \\ \vdots \\ \sum_i v^i T_i^n \end{pmatrix}   (1.25)

so if we define the matrix of T in the basis B as

[T]_B = \begin{pmatrix} T_1^1 & T_2^1 & \cdots & T_n^1 \\ T_1^2 & T_2^2 & \cdots & T_n^2 \\ \vdots & \vdots & & \vdots \\ T_1^n & T_2^n & \cdots & T_n^n \end{pmatrix}   (1.26)

we then have

[T(v)]_B = [T]_B [v]_B   (1.27)

as a matrix equation. Thus, [T]_B really does represent T in components, where the action of [T]_B on vectors is by the usual matrix multiplication. Furthermore, if we

7Nomenclature to be justified in the next chapter.

have two linear operators A and B and we define their product (or composition) AB as the linear operator

(AB)(v) ≡ A(B(v)),   (1.28)

the reader can then show that [AB] = [A][B]. Thus, composition of operators becomes matrix multiplication of the corresponding matrices.
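A short numerical sketch of (1.24)-(1.28) (the operator chosen, rotation by π/2 in the plane, is our own example): build [T]_B column by column from T's action on the basis vectors, and check that composition of operators corresponds to multiplication of their matrices.

import numpy as np

def matrix_of(T, basis):
    # column i holds the components of T(e_i), as in (1.26)
    return np.column_stack([T(e) for e in basis])

rot90 = lambda v: np.array([-v[1], v[0]])             # rotation by pi/2 in R^2
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

R  = matrix_of(rot90, basis)
RR = matrix_of(lambda v: rot90(rot90(v)), basis)      # the composed operator
assert np.allclose(RR, R @ R)                         # [AB] = [A][B]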

Exercise 1.7 For two linear operators A and B on a vector space V , show that [AB] = [A][B] in any basis.

Example 1.15 Lz, Hl(R3) and Spherical Harmonics

Recall that H1(R3) is the set of all linear functions on R3 and that {rY^1_m}_{−1≤m≤1} = {x + iy, z, x − iy} and {x, y, z} are both bases for this space. Now consider the familiar angular momentum operator Lz = −i(x ∂/∂y − y ∂/∂x) on this space. The reader can check that

Lz(x + iy) = x + iy   =⇒   (Lz)_1^1 = 1, (Lz)_1^2 = (Lz)_1^3 = 0   (1.29)
Lz(z) = 0   =⇒   (Lz)_2^i = 0 ∀i   (1.30)
Lz(x − iy) = −(x − iy)   =⇒   (Lz)_3^3 = −1, (Lz)_3^1 = (Lz)_3^2 = 0   (1.31)

so in the spherical harmonic basis,

[Lz]_{\{rY^1_m\}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.   (1.32)

This of course just says that the wavefunctions x+iy, z and x−iy have Lz eigenvalues of 1, 0, and −1 respectively. Meanwhile,

Lz(x) = iy (1.33)

Lz(y) = −ix (1.34)

Lz(z) = 0   (1.35)

so in the cartesian basis,

[Lz]_{\{x,y,z\}} = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},   (1.36)

a very different looking matrix.
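The computations (1.29)-(1.36) are also easy to verify symbolically; here is a sympy sketch (the helper name Lz is ours) that applies the differential operator to each basis function and compares with the results quoted above.

import sympy as sp

# Apply Lz = -i(x d/dy - y d/dx) to both the spherical harmonic and cartesian bases.
x, y, z = sp.symbols('x y z')

def Lz(f):
    return -sp.I * (x*sp.diff(f, y) - y*sp.diff(f, x))

# spherical harmonic basis {x+iy, z, x-iy}: eigenvalues 1, 0, -1 as in (1.32)
assert sp.simplify(Lz(x + sp.I*y) - (x + sp.I*y)) == 0
assert Lz(z) == 0
assert sp.simplify(Lz(x - sp.I*y) + (x - sp.I*y)) == 0

# cartesian basis {x, y, z}: the off-diagonal matrix (1.36)
assert Lz(x) == sp.I*y and Lz(y) == -sp.I*x and Lz(z) == 0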

Exercise 1.8 Compute the matrices of Lx = −i(y ∂/∂z − z ∂/∂y) and Ly = −i(z ∂/∂x − x ∂/∂z) acting on H1(R3) in both the cartesian and spherical harmonic bases.

Before concluding this section we should remark that there is much more one can say about linear operators, particularly concerning eigenvectors, eigenvalues and diagonalization. Though these topics are relevant for physics, we will not need them in this text and good references for them abound, so we omit them. The interested reader can consult the first chapter of [SA] for a practical introduction, or [HK] for a thorough discussion.

1.5 Dual Spaces

Another basic construction associated with a vector space, essential for understanding tensors and usually left out of the typical 'mathematical methods for physicists' courses, is that of a dual space. Given a vector space V with scalars C, its dual space V∗ is defined to be the set of C-valued linear functions f on V (referred to as dual vectors or linear functionals), where 'linear' again means f(cv + w) = cf(v) + f(w). It's easily checked that the usual definitions of addition and scalar multiplication and the zero function turn V∗ into a vector space over C. Given a (not necessarily finite) basis {e_i} for V, any linear function f on V is determined entirely by its values on the e_i, since for any v = \sum_{i=1}^{n} v^i e_i

f(v) = f\left( \sum_{i=1}^{n} v^i e_i \right)   (1.37)
     = \sum_{i=1}^{n} v^i f(e_i)   (1.38)
     ≡ \sum_{i=1}^{n} v^i f_i   (1.39)

where the f_i ≡ f(e_i) are, unsurprisingly, referred to as the components of f in the basis {e_i}. To justify this nomenclature, consider a set {e^i} (note the raised indices) of dual vectors defined by

e^i(e_j) = δ^i_j.   (1.40)

If V is finite-dimensional with dimension n, it's easy to check (by evaluating both sides on basis vectors) that we can write8

f = \sum_{i=1}^{n} f_i e^i   (1.41)

so that the f_i really are the components of f. Since f was arbitrary, this means that the e^i span V∗. In exercise 1.9 below the reader will show that the e^i are actually linearly independent, so {e^i}_{i=1...n} is actually a basis for V∗. We sometimes say that the e^i are dual to the e_i. Note that we have shown that V and V∗ always have the same dimension. Note also that for a given v = \sum_{i=1}^{n} v^i e_i,

e^i(v) = \sum_{j=1}^{n} v^j e^i(e_j) = \sum_{j=1}^{n} v^j δ^i_j = v^i   (1.42)

so we can alternatively think of the ith component of a vector as the value of the ith dual vector on that vector.

Exercise 1.9 By carefully working with the definitions, show that the ei defined in (1.40) are linearly independent.

Example 1.16 Dual spaces of Rn, Cn, Mn(R) and Mn(C)

Consider the basis {e_i} of Rn and Cn, where e_i is the vector with a 1 in the ith place and 0's everywhere else; this is just the basis described in Example 1.7. Now consider the element f^j of V∗ which eats a vector in Rn or Cn and spits out the jth component; clearly f^j(e_i) = δ^j_i, so the f^j are just the dual vectors e^j described above. Similarly, for Mn(R) or Mn(C) consider the dual vector f^{ij} defined by f^{ij}(A) = A_{ij}; these vectors are clearly dual to the E_{ij} and thus form the corresponding dual basis. While the f^{ij} may seem a little unnatural or artificial, the reader should note that there is one linear functional on Mn(R) and Mn(C) which is familiar: the trace functional, denoted Tr and defined by

Tr(A) = \sum_{i=1}^{n} A_{ii}.   (1.43)

What are the components of Tr with respect to the f^{ij}?

8 If V is infinite-dimensional then this may not work, as the sum required may be infinite; as mentioned before, care must be taken in defining infinite linear combinations.

Not Quite Example Dual space of L2([−a, a])

We haven't yet properly treated L2([−a, a]) so we clearly cannot yet properly treat its dual, but we would like to point out here that in infinite dimensions, dual spaces get much more interesting. In finite dimensions, we saw above that a basis {e_i} for V induces a dual basis {e^i} for V∗, so in a sense V∗ 'looks' very much like V. This is not true in infinite dimensions - in this case we still have linear functionals dual to a given basis, but these may not span the dual space. Consider the case of L2([−a, a]); the reader can check that {e^n}_{n∈Z} defined by

e^n(f(x)) ≡ \frac{1}{2a} \int_{-a}^{a} e^{-inπx/a} f(x)\, dx   (1.44)

satisfy e^n(e^{imπx/a}) = δ^m_n and are hence dual to {e^{imπx/a}}_{m∈Z}. In fact, these linear functionals just eat a function and spit out its nth Fourier coefficient. There are linear functionals, however, that can't be written as a linear combination of the e^n; one such linear functional is the Dirac Delta Functional δ, defined by

δ(f(x)) ≡ f(0). (1.45)

We will prove this claim and place δ in its proper context in section 1.7.

1.6 Non-Degenerate Hermitian Forms

Non-degenerate Hermitian forms, of which the Euclidean dot product, Minkowski metric and Hermitian scalar product of quantum mechanics are but a few examples, are very familiar to most physicists. We introduce them here not just to formalize their definition but also to make the fundamental but usually unacknowledged connection between these objects and dual spaces. A non-degenerate Hermitian form on a vector space V is a C-valued function (· , ·) which assigns to an ordered pair of vectors v, w ∈ V a scalar, denoted (v, w), having the following properties:

1. (v, w_1 + cw_2) = (v, w_1) + c(v, w_2) (linearity in the second argument)
2. (v, w) = \overline{(w, v)} (Hermiticity; the bar denotes complex conjugation)
3. For each v ≠ 0 ∈ V, there exists w ∈ V such that (v, w) ≠ 0 (non-degeneracy)

Note that conditions 1 and 2 imply that (cv, w) = c̄(v, w), so (· , ·) is conjugate-linear in the first argument. Also note that for a real vector space, condition 2

21 implies that (· , ·) is symmetric, i.e. (v, w) = (w, v)9; in this case, (· , ·) is called a metric. Condition 3 is a little nonintuitive but will be essential in the connection with dual spaces. If, in addition to the above 3 conditions, the Hermitian form obeys

4. (v, v) > 0 for all v ∈ V, v ≠ 0 (positive-definiteness)

then we say that (· , ·) is an inner product, and a vector space with such a Hermitian form is called an inner product space. Note that condition 4 implies 3. Our reason for separating condition 4 from the rest of the definition will become clear when we consider the examples. One very important use of non-degenerate Hermitian forms is to define preferred sets of bases known as orthonormal bases. Such bases B = {e_i} by definition satisfy (e_i, e_j) = ±δ_{ij} and are extremely useful for computation, and ubiquitous in physics for that reason. If (· , ·) is positive-definite (hence an inner product), then orthonormal basis vectors satisfy (e_i, e_j) = δ_{ij} and may be constructed out of arbitrary bases by the Gram-Schmidt process. If (· , ·) is not positive-definite then orthonormal bases may still be constructed out of arbitrary bases, though the process is slightly more involved. See [HK, sec. 8.2, 10.2] for details.
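For concreteness, here is a minimal Gram-Schmidt sketch for the positive-definite case (the inner product is the ordinary dot product on R3 and the starting basis is our own choice); it turns an arbitrary basis into an orthonormal one, as asserted above.

import numpy as np

# Gram-Schmidt: subtract projections onto the vectors already produced, then normalize.
def gram_schmidt(vectors):
    ortho = []
    for v in vectors:
        w = v - sum(np.dot(e, v) * e for e in ortho)
        ortho.append(w / np.linalg.norm(w))
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([0.0, 1.0, 1.0]),
         np.array([1.0, 0.0, 1.0])]
e = gram_schmidt(basis)
gram = np.array([[np.dot(a, b) for b in e] for a in e])
assert np.allclose(gram, np.eye(3))      # (e_i, e_j) = delta_ij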

Exercise 1.10 Let (· , ·) be an inner product. If a set of vectors e_1, . . . , e_k is orthogonal, i.e. (e_i, e_j) = 0 when i ≠ j, show that they are linearly independent. Note that an orthonormal set (i.e. (e_i, e_j) = ±δ_{ij}) is just an orthogonal set in which the vectors have unit length.

Example 1.17 The dot product (or Euclidean metric) on Rn

Let v = (v^1, . . . , v^n), w = (w^1, . . . , w^n) ∈ Rn. Define (· , ·) on Rn by

(v, w) ≡ \sum_{i=1}^{n} v^i w^i.   (1.46)

This is sometimes written as v · w. The reader can check that (· , ·) is an inner product, and that the standard basis given in example 1.7 is an orthonormal basis.

Example 1.18 The Hermitian scalar product on Cn

Let v = (v^1, . . . , v^n), w = (w^1, . . . , w^n) ∈ Cn. Define (· , ·) on Cn by

(v, w) ≡ \sum_{i=1}^{n} \bar{v}^i w^i.   (1.47)

9In this case, (· , ·) is linear in the first argument as well as the second and would be referred to as bilinear.

22 Again, the reader can check that (· , ·) is an inner product, and that the standard basis given in example 1.7 is an orthonormal basis. Such inner products on complex vector spaces are sometimes referred to as Hermitian scalar products and are present on every quantum mechanical vector space. In this example we see the importance of condition 2, manifested in the conjugation of the vi in (1.47); if that conjugation wasn’t there, a vector like v = (i, 0,..., 0) would have (v, v) = −1 and (· , ·) wouldn’t be an inner product.

Exercise 1.11 Let A, B ∈ Mn(C). Define (· , ·) on Mn(C) by

(A, B) = \frac{1}{2} Tr(A†B).   (1.48)

Check that this is indeed an inner product. Also check that the basis {I, σx, σy, σz} for H2(C) is orthonormal with respect to this inner product.

Example 1.19 The Minkowski Metric on 4-D Spacetime

Consider two vectors (often called "events" in the physics literature) v_i = (x_i, y_i, z_i, t_i) ∈ R4, i = 1, 2. The Minkowski metric on spacetime,10 denoted η, is defined to be11

η(v1, v2) ≡ x1x2 + y1y2 + z1z2 − t1t2. (1.49)

η is clearly linear in both its arguments (i.e. bilinear) and symmetric, hence satisfying conditions 1 and 2, and the reader will check condition 3 in exercise 1.12 below. Notice that for v = (1, 0, 0, 1), η(v, v) = 0, so η is not positive-definite, hence not an inner product. This is why we separated condition 4, and considered the more general non-degenerate Hermitian forms instead of just inner products.

Exercise 1.12 Let v = (x, y, z, t). Show that η is non-degenerate by finding another vector w such that η(v, w) 6= 0.

We should point out here that the Minkowski metric can be written in components as a matrix, just as a linear operator can. Taking the standard basis B = {e_i}_{i=1,2,3,4} in R4, we can define the components of η, denoted η_{ij}, as

ηij ≡ η(ei, ej). (1.50)

10 As in example 1.11, we can identify physical spacetime with R4 once we choose a coordinate system.
11 We are, of course, arbitrarily choosing the + + +− signature; we could equally well choose − − −+.

Then, just as was done for linear operators, the reader can check that if we define the matrix of η in the basis B, denoted [η]_B, as the matrix

[η]_B = \begin{pmatrix} η_{11} & η_{21} & η_{31} & η_{41} \\ η_{12} & η_{22} & η_{32} & η_{42} \\ η_{13} & η_{23} & η_{33} & η_{43} \\ η_{14} & η_{24} & η_{34} & η_{44} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}   (1.51)

we can write

η(v_1, v_2) = [v_1]^T [η] [v_2] = (x_1, y_1, z_1, t_1) \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ z_2 \\ t_2 \end{pmatrix}   (1.52)

as some readers may be used to from computations in relativity. Note that the symmetry of η implies that [η]_B is a symmetric matrix for any basis B.
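A numerical version of (1.51)-(1.52) (our own aside, using the ordering (x, y, z, t) and the + + +− convention of the text):

import numpy as np

# Evaluate the Minkowski metric via matrix multiplication.
eta = np.diag([1.0, 1.0, 1.0, -1.0])

def minkowski(v1, v2):
    return v1 @ eta @ v2

v = np.array([1.0, 0.0, 0.0, 1.0])     # the null vector mentioned in example 1.19
w = np.array([1.0, 2.0, 3.0, 0.5])     # an arbitrary second vector (our choice)
print(minkowski(v, v))                 # 0.0: eta is not positive-definite
print(minkowski(v, w))                 # 0.5: yet eta does not degenerate on v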

Example 1.20 The Hermitian scalar product on L2([−a, a])

For f, g ∈ L2([−a, a]), define

(f, g) ≡ \frac{1}{2a} \int_{-a}^{a} \bar{f} g\, dx.   (1.53)

The reader can easily check that this defines an inner product on L2([−a, a]), and that {e^{inπx/a}}_{n∈Z} is an orthonormal set. What's more, this inner product turns L2([−a, a]) into a Hilbert Space, which is an inner product space that is complete. The notion of completeness is a technical one, so we will not give its precise definition, but in the case of L2([−a, a]) one can think of it as meaning roughly that a limit of square-integrable functions is again square-integrable. Making this precise and proving it for L2([−a, a]) is the subject of real analysis textbooks and far outside the scope of this text12, so we'll content ourselves here with just mentioning completeness and noting that it is responsible for many of the nice features of Hilbert spaces, in particular the generalized notion of a basis which we now describe.

Given a Hilbert space H and an orthonormal (and possibly infinite) set {e_i} ⊂ H, the set {e_i} is said to be an orthonormal basis for H if

(ei, f) = 0 ∀i =⇒ f = 0. (1.54)

12See [R] , for instance, for this and for proofs of all the claims made in this example.

The reader can check (see exercise 1.13 below) that in the finite-dimensional case this definition is equivalent to our previous definition of an orthonormal basis. In the infinite-dimensional case, however, this definition differs substantially from the old one in that we no longer require Span{e_i} = H (recall that spans only include finite linear combinations). Does this mean, though, that we now allow arbitrary infinite combinations of the basis vectors? If not, which ones are allowed? For L2([−a, a]), for which {e^{inπx/a}}_{n∈Z} is an orthonormal basis, we mentioned in example 1.13 that any f ∈ L2([−a, a]) can be written as

f = \sum_{n=-\infty}^{\infty} c_n e^{inπx/a}   (1.55)

where

\frac{1}{2a} \int_{-a}^{a} |f|^2\, dx = \sum_{n=-\infty}^{\infty} |c_n|^2 < \infty   (1.56)

(The first equality should be familiar from quantum mechanics and follows from exercise 1.14 below). The converse to this is also true, and this is where the completeness of L2([−a, a]) is essential: if a set of numbers c_n satisfy (1.56), then the series

g(x) ≡ \sum_{n=-\infty}^{\infty} c_n e^{inπx/a}   (1.57)

converges, yielding a square-integrable function g. So L2([−a, a]) is the set of all expressions of the form (1.55), subject to the condition (1.56). Now we know how to think about infinite-dimensional Hilbert spaces and their bases: a basis for a Hilbert space is an infinite set whose infinite linear combinations, together with some suitable convergence condition, form the entire vector space.

Exercise 1.13 Show that the definition (1.54) of a Hilbert space basis is equivalent to our original definition of a basis for a finite-dimensional inner product space V .

Exercise 1.14 Show that for f = \sum_{n=-\infty}^{\infty} c_n e^{inπx/a}, g = \sum_{m=-\infty}^{\infty} d_m e^{imπx/a} ∈ L2([−a, a]),

(f, g) = \sum_{n=-\infty}^{\infty} \bar{c}_n d_n   (1.58)

so that (· , ·) on L2([−a, a]) can be viewed as the infinite-dimensional version of the standard Hermitian scalar product on Cn.

1.7 Non-Degenerate Hermitian Forms and Dual Spaces

We are now ready to explore the connection between dual vectors and non-degenerate Hermitian forms. Given a non-degenerate Hermitian form (· , ·) on a finite-dimensional vector space V, we can associate to any v ∈ V a dual vector ṽ ∈ V∗ defined by

v˜(w) ≡ (v, w). (1.59)

Sometimes we write ṽ = (v, ·). This map we have just defined, call it L, from V to V∗ is conjugate-linear since for v = cx + z, v, z, x ∈ V,

ṽ(w) = (v, w) = (cx + z, w) = c̄(x, w) + (z, w) = c̄ x̃(w) + z̃(w)   (1.60)

so

L(v) = L(cx + z) ≡ ṽ = c̄ x̃ + z̃ = c̄ L(x) + L(z).   (1.61)

In exercise 1.15 below the reader will show that the non-degeneracy of (· , ·) implies that L is 1-1 and onto, so L is an invertible map from V to V∗. The reader will see in the examples below that he is already familiar with L in a couple of different contexts.

Exercise 1.15 Use the non-degeneracy of (· , ·) to show that L is 1-1, i.e. that L(v) = L(w) =⇒ v = w. Combine this with the argument used in exercise 1.6 to show that L is onto as well.

Exercise 1.16 Given a basis {e_i}_{i=1...n}, under what circumstances do we have e^i = ẽ_i for all i?

Example 1.21 Bras and kets in quantum mechanics

Let H be a quantum mechanical Hilbert space with inner product (· , ·). In Dirac notation, a vector ψ ∈ H is written as a ket |ψ⟩ and the inner product (ψ, φ) is written ⟨ψ|φ⟩. What about bras, written as ⟨ψ|? What, exactly, are they? Most quantum mechanics texts gloss over their definition, just telling us that they are in 1-1 correspondence with kets and can be combined with kets as ⟨ψ|φ⟩ to get a scalar. We are also told that the correspondence between bras and kets is conjugate-linear, i.e. that the bra corresponding to c|ψ⟩ is c̄⟨ψ|. From what we have seen in this section, it is now clear that bras really are dual vectors, labeled in the same way as regular vectors, because the map L allows us to identify the two. In short, ⟨ψ| is really just L(ψ), or equivalently (ψ, ·).

26 Example 1.22 Raising and lowering indices in relativity

Consider R4 with the Minkowski metric, let B = {e_μ}_{μ=1−4} and B′ = {e^μ}_{μ=1−4} be the standard basis and dual basis for R4 (we use a greek index to conform with standard physics notation), and let v = \sum_{μ=1}^{4} v^μ e_μ ∈ R4. What are the components of ṽ in terms of the v^μ? Well, as we saw in section 1.5, the components of a dual vector are just given by evaluation on the basis vectors, so

ṽ_μ = ṽ(e_μ) = (v, e_μ) = \sum_ν v^ν (e_ν, e_μ) = \sum_ν v^ν η_{νμ}.   (1.62)

In matrices, this reads

[ṽ]_{B′} = [η]_B [v]_B   (1.63)

so matrix multiplication of a vector by the metric matrix gives the corresponding dual vector in the dual basis. Thus, the map L is implemented in coordinates by [η]. Now, we mentioned above that L is invertible; what does L^{−1} look like in coordinates? Well, by the above, L^{−1} should be given by matrix multiplication by [η]^{−1}, the matrix inverse to [η]. Denoting the components of this matrix by η^{μν} (so that η^{τμ} η_{ντ} = δ^μ_ν) and writing f̃ ≡ L^{−1}(f) where f is a dual vector, we have

[f̃] = [η]^{−1}[f]   (1.64)

or in components

f̃^μ = \sum_ν η^{νμ} f_ν.   (1.65)

Now, in physics one usually works with components of vectors, and in relativity the numbers v^μ are called the contravariant components of v and the numbers v_μ ≡ \sum_ν v^ν η_{νμ} are referred to as the covariant components of v. We see now that the contravariant components of a vector are just its usual components, while its covariant components are actually the components of the associated dual vector ṽ. For a dual vector f, the situation is reversed - the covariant components f_μ are its actual components, and the contravariant components are the components of f̃. Since L allows us to turn vectors into dual vectors and vice-versa, we usually don't bother trying to figure out whether something is 'really' a vector or a dual vector; it can be either, depending on which components we use. The above discussion shows that the familiar process of "raising" and "lowering" indices is just the application of the map L (and its inverse) in components. For an interpretation of [η]^{−1} as the matrix of a metric on R4∗, see the Problems.
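In code, the raising and lowering of (1.62)-(1.65) is nothing but multiplication by [η] and its inverse; a small sketch (the component values are our own example):

import numpy as np

# Lowering an index multiplies by [eta]; raising multiplies by its inverse.
eta = np.diag([1.0, 1.0, 1.0, -1.0])
eta_inv = np.linalg.inv(eta)                 # happens to equal eta here

v_contra = np.array([1.0, 2.0, 3.0, 4.0])    # contravariant components v^mu
v_co = eta @ v_contra                        # covariant components v_mu, as in (1.63)
print(v_co)                                  # [ 1.  2.  3. -4.]
assert np.allclose(eta_inv @ v_co, v_contra) # raising undoes lowering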

Exercise 1.17 Consider R3 with the Euclidean metric. Show that the covariant and contravariant components of a vector in an orthonormal basis are identical. This explains why we never bother with this terminology, nor the concept of dual spaces, in basic physics where R3 is the relevant vector space. Is the same true for R4 with the Minkowski metric?

Example 1.23 L2([−a, a]) and its dual

In our above discussion of the map L we stipulated that V should be finite-dimensional. Why? If you examine the discussion closely, you'll see that the only place where we use the finite-dimensionality of V is in showing that L is onto. Does this mean that L is not necessarily onto in infinite dimensions? Consider the Dirac Delta functional δ ∈ L2([−a, a])∗. Does

δ(g) = g(0) \stackrel{?}{=} (δ(x), g)   (1.66)

for some function δ(x)? If we write g as g(x) = \sum_{n=-\infty}^{\infty} d_n e^{inπx/a}, then simply evaluating g at x = 0 gives

g(0) = \sum_{n=-\infty}^{\infty} d_n \stackrel{?}{=} (δ(x), g)   (1.67)

which, when compared with (1.58), tells us that the function δ(x) must have Fourier coefficients c_n = 1 for all n. Such c_n, however, do not satisfy (1.56) and hence δ(x) cannot be a square-integrable function. So the dual vector δ is not in the image of the map L, hence L is not onto in the case of L2([−a, a]).

Chapter 1 Problems

1. Prove that L2([−a, a]) is closed under addition. You'll need the triangle inequality, as well as the following inequality, valid for all λ ∈ R:

   0 ≤ \int_{-a}^{a} (|f| + λ|g|)^2\, dx.

2. In this problem we show that {r^l Y^l_m} is a basis for Hl(R3). We'll gloss over a few subtleties here; for a totally rigorous discussion see [ST].

a) Let f ∈ Hl(R3). Argue that f can be written as f = r^l Y(θ, φ). Then write ∆f = 0 in spherical coordinates, separating out the angular part of the laplacian (which we denote as ∆_{S^2}) as

   ∆ = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2} ∆_{S^2}.   (1.68)

   You should find

   ∆_{S^2} Y = −l(l + 1)Y.   (1.69)

b) If you have never done so, show that

   ∆_{S^2} = L_x^2 + L_y^2 + L_z^2 ≡ L^2   (1.70)

   so that (1.69) says that Y is an eigenfunction of L^2, as expected. The theory of angular momentum13 then tells us that Hl(R3) has dimension 2l + 1.

c) Exhibit a basis for Hl(R3) by considering the function f^l_l ≡ (x + iy)^l and showing that

   L_z(f^l_l) = l f^l_l,   L_+(f^l_l) ≡ (L_x + iL_y)(f^l_l) = 0.   (1.71)

   The theory of angular momentum then tells us that (L_−)^m f^l_l ≡ f^l_{l−m} satisfies L_z(f^l_{l−m}) = (l − m) f^l_{l−m} and that {f^l_m}_{−l≤m≤l} is a basis for Hl(R3).

d) Writing f^l_m = r^l Y^l_m we see that Y^l_m satisfies L^2 Y^l_m = −l(l + 1)Y^l_m and L_z Y^l_m = m Y^l_m, as expected. Now use this definition of Y^l_m to compute all the spherical harmonics for l = 1, 2 and show that this agrees, up to normalization, with the spherical harmonics as tabulated in any quantum mechanics textbook. If you read example 1.6 and did exercise 1.1 then all you have to do is compute f^1_m, −1 ≤ m ≤ 1 and f^2_m, −2 ≤ m ≤ 2 and show that these functions agree with the ones given there.

13see [SA, G]

3. In discussions of quantum mechanics the reader may have heard the phrase "angular momentum generates rotations". What this means is that if one takes a component of the angular momentum such as Lz and exponentiates it, i.e. if one considers the operator

exp(−iφL_z) ≡ \sum_{n=0}^{\infty} \frac{1}{n!}(−iφL_z)^n   (1.72)
            = I − iφL_z + \frac{1}{2!}(−iφL_z)^2 + \frac{1}{3!}(−iφL_z)^3 + . . .

(the usual power series expansion for e^x) then one gets the operator which represents a rotation about the z axis by an angle φ. Confirm this in one instance by explicitly summing the power series for the operator [Lz]_{\{x,y,z\}} of example 1.15 to get

exp(−iφ[Lz]_{\{x,y,z\}}) = \begin{pmatrix} \cos φ & -\sin φ & 0 \\ \sin φ & \cos φ & 0 \\ 0 & 0 & 1 \end{pmatrix},   (1.73)

the usual matrix for a rotation about the z-axis.
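(As an aside, (1.73) can also be spot-checked numerically for any particular angle; the value φ = 0.7 below is arbitrary and the check uses scipy's matrix exponential rather than the power series the problem asks you to sum by hand.)

import numpy as np
from scipy.linalg import expm

# Exponentiate -i*phi*[Lz] in the cartesian basis and compare with the rotation matrix.
Lz = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])
phi = 0.7
R = expm(-1j * phi * Lz)
expected = np.array([[np.cos(phi), -np.sin(phi), 0],
                     [np.sin(phi),  np.cos(phi), 0],
                     [0,            0,           1]])
assert np.allclose(R, expected)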

4. a) Let A be a linear operator on a finite-dimensional complex vector space V with inner product (· , ·). Define the adjoint of A, denoted A†, by the equation

      (A†v, w) = (v, Aw).   (1.74)

      Show that in an orthonormal basis {e_i}_{i=1...n}, [A†] = [A]†, where the dagger outside the brackets denotes the usual conjugate transpose of a matrix. You may want to prove and use the fact A^i_j = e^i(Ae_j).

   b) If A satisfies A = A†, A is then said to be self-adjoint or hermitian. Show that any eigenvalue of A must be real. Note then that in part a) you showed that in an orthonormal basis the matrix of a hermitian operator is a hermitian matrix. This is not necessarily true in a non-orthonormal basis.

5. Let g be a non-degenerate bilinear form on a vector space V (we have in mind the Euclidean metric on R3 or the Minkowski metric on R4). Pick an arbitrary (not necessarily orthonormal) basis, let [g]^{−1} be the matrix inverse of [g] in this basis, and write g^{μν} for the components of [g]^{−1}. Also let f, h ∈ V∗. Define a non-degenerate bilinear form g̃ on V∗ by

   g̃(f, h) ≡ g(f̃, h̃)   (1.75)

   where f̃ = L^{−1}(f) as in example 1.22. Show that

   g̃^{μν} ≡ g̃(e^μ, e^ν) = g^{μν}   (1.76)

   so that [g]^{−1} is truly a matrix representation of a non-degenerate bilinear form on V∗.

6. In this problem we’ll acquaint ourselves with P (R), the set of polynomials in one variable x with real coefficients. We’ll also meet several bases for this space which the reader should find familiar.

a) P(R) is the set of all functions of the form

   f(x) = c_0 + c_1 x + c_2 x^2 + . . . + c_n x^n   (1.77)

where n is arbitrary. Verify that P(R) is a (real) vector space. Then show that P(R) is infinite-dimensional by showing that, for any finite set S ⊂ P(R), there is a polynomial that is not in Span S. Exhibit a simple infinite basis for P(R).

b) Compute the matrix corresponding to the operator d/dx ∈ T(P(R)) with respect to the basis you found in part a).

c) One can turn P(R) into an inner product space by considering inner products of the form

   (f, g) ≡ \int_a^b f(x) g(x) W(x)\, dx   (1.78)

   where W(x) is a nonnegative weight function. One can then take the basis B = {1, x, x^2, x^3, . . .} and apply the Gram-Schmidt process to get an orthogonal basis. With the proper choice of range of integration [a, b] and weight function W(x), we can obtain (up to normalization) the various orthogonal polynomials one meets in studying the various differential equations that arise in electrostatics and quantum mechanics. (A short computational sketch of case i) is given after this problem.)

i) Let [a, b] = [−1, 1] and W(x) = 1. Consider the set S = {1, x, x^2, x^3} ⊂ B. Apply the Gram-Schmidt process to this set to get (up to normalization) the first four Legendre Polynomials

P0(x) = 1

P_1(x) = x
P_2(x) = \frac{1}{2}(3x^2 − 1)
P_3(x) = \frac{1}{2}(5x^3 − 3x).

The Legendre Polynomials show up in the solutions to the differential equation (1.69), where we make the identification x = cos θ. Since −1 ≤ cos θ ≤ 1, this explains the range of integration in the inner product.

ii) Now let [a, b] = (−∞, ∞) and W(x) = e^{−x^2}. Gram-Schmidt S to get the first four Hermite Polynomials

H0(x) = 1

H_1(x) = 2x
H_2(x) = 4x^2 − 2
H_3(x) = 8x^3 − 12x.

These polynomials arise in the solution to the Schrödinger equation for a one-dimensional harmonic oscillator. Note that the range of integration corresponds to the range of the position variable, as expected.

iii) Finally, let [a, b] = (0, ∞) and W(x) = e^{−x}. Again, Gram-Schmidt S to get the first four Laguerre Polynomials

L0(x) = 1

L_1(x) = −x + 1
L_2(x) = \frac{1}{2}(x^2 − 4x + 2)
L_3(x) = \frac{1}{6}(−x^3 + 9x^2 − 18x + 6).

These polynomials arise as solutions to the radial part of the Schrödinger equation for the Hydrogen atom. In this case x is interpreted as a radial variable, hence the range of integration (0, ∞).
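As promised in part c), here is a small sympy sketch (ours, and only for case i)): Gram-Schmidt on {1, x, x^2, x^3} with the weight-1 inner product on [−1, 1], rescaled so that P_n(1) = 1, reproduces the Legendre polynomials listed above.

import sympy as sp

# Gram-Schmidt with respect to (f, g) = integral_{-1}^{1} f g dx.
x = sp.symbols('x')

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

def gram_schmidt(funcs):
    ortho = []
    for f in funcs:
        g = f - sum(inner(f, e) / inner(e, e) * e for e in ortho)
        ortho.append(sp.expand(g))
    return ortho

polys = gram_schmidt([1, x, x**2, x**3])
legendre = [sp.expand(p / p.subs(x, 1)) for p in polys]
print(legendre)   # [1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2]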

Chapter 2

Tensors

Now that we're familiar with vector spaces we can finally approach our main subject, tensors. We'll give the modern component-free definition, from which will follow the usual transformation laws that used to be the definition. From here on out we will employ the Einstein summation convention, which is that whenever an index is repeated in an expression, once as a superscript and once as a subscript, then summation over that index is implied. Thus an expression like v = \sum_{i=1}^{n} v^i e_i becomes v = v^i e_i. We'll comment on this convention in section 2.2.

2.1 Definition and Examples

A tensor of type (r, s) on a vector space V is a C-valued function T on

\underbrace{V × . . . × V}_{r\ \text{times}} × \underbrace{V^* × . . . × V^*}_{s\ \text{times}}   (2.1)

T (v1 + cw, v2, . . . , vr, f1, . . . , fs) = T (v1, . . . , vr, f1, . . . , fs) + cT (w, v2, . . . , f1, . . . , fs) (2.2) and similarly for all the other arguments. This property is called multilinearity. Note that dual vectors are (1, 0) tensors, and that vectors can be viewed as (0, 1) tensors as follows: v(f) ≡ f(v) where v ∈ V, f ∈ V ∗. (2.3) Similarly, linear operators can be viewed as (1, 1) tensors as

A(v, f) ≡ f(Av). (2.4)

We take (0, 0) tensors to be scalars, as a matter of convention. The reader will show in exercise 2.1 below that the set of all tensors of type (r, s) on a vector space V, denoted T^r_s(V) or just T^r_s, forms a vector space. This should not come as much of a surprise since we already know that vectors, dual vectors and linear operators all form vector spaces. Also, just as linearity implies that dual vectors and linear operators are determined by their values on the basis vectors, multilinearity implies the same thing for general tensors. To see this, let {e_i}_{i=1...n} be a basis for V and {e^i}_{i=1...n} the corresponding dual basis. Then, denoting the ith component of the vector v_p as v_p^i and the jth component of the dual vector f_q as f_{qj}, we have (by repeated application of multilinearity)

T(v_1, ..., v_r, f_1, ..., f_s) = v_1^{i_1} ... v_r^{i_r} f_{1j_1} ... f_{sj_s} T(e_{i_1}, ..., e_{i_r}, e^{j_1}, ..., e^{j_s})   (2.5)
                               ≡ v_1^{i_1} ... v_r^{i_r} f_{1j_1} ... f_{sj_s} T_{i_1,...,i_r}^{j_1...j_s}   (2.6)

where, as before, the numbers

T_{i_1,...,i_r}^{j_1...j_s} ≡ T(e_{i_1}, ..., e_{i_r}, e^{j_1}, ..., e^{j_s})   (2.7)

are referred to as the components of T in the basis {e_i}_{i=1...n}. The reader should check that this definition of the components of a tensor, when applied to vectors, dual vectors, and linear operators, agrees with the definitions given earlier. Also note that (2.7) gives us a concrete way to think about the components of tensors: they are the values of the tensor on the basis vectors.
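As an illustration (the bilinear function below is an arbitrary example, not one from the text), here is how one can read off the components of a (2, 0) tensor on R^3 by evaluating it on basis vectors, and then recover its values via (2.5)-(2.6):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [4., 0., 2.]])

def T(v, w):                 # an arbitrary bilinear function on R^3 x R^3
    return v @ A @ w

e = np.eye(3)                # standard basis e_1, e_2, e_3
T_ij = np.array([[T(e[i], e[j]) for j in range(3)] for i in range(3)])  # eq (2.7)

v, w = np.array([1., -1., 2.]), np.array([0., 5., 3.])
print(T(v, w), np.einsum('i,j,ij->', v, w, T_ij))   # same number, as in (2.5)-(2.6)
```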

Exercise 2.1 By choosing suitable definitions of addition and scalar multiplication, show that T^r_s(V) is a vector space.

If we have a non-degenerate bilinear form on V, then we may change the type of T by precomposing with the map L or L^{−1}. If T is of type (1,1) with components T_i^j, for instance, then we may turn it into a tensor T̃ of type (2,0) by defining T̃(v, w) = T(v, L(w)). This corresponds to lowering the second index, and we write the components of T̃ as T_{ij}, omitting the tilde since the fact that we lowered the second index implies that we precomposed with L. This is in accord with the conventions in relativity, where given a vector v ∈ R^4 we write v_μ for the components of ṽ when we should really write ṽ_μ. From this point on, if we have a non-degenerate bilinear form on a vector space then we permit ourselves to raise and lower indices at will and without comment. In such a situation we often don’t discuss the type of a tensor, speaking instead of its rank, equal to r + s, which obviously doesn’t change as we raise and lower indices.
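In components this index lowering reads T_{ij} = T_i^k g_{kj}; here is a minimal numpy sketch of the bookkeeping (the diagonal metric and the tensor below are arbitrary examples, chosen only for illustration):

```python
import numpy as np

g = np.diag([1., 1., 1., -1.])                  # example metric components g_kj
T_mixed = np.random.rand(4, 4)                  # components T_i^k of a (1,1) tensor
T_lower = np.einsum('ik,kj->ij', T_mixed, g)    # T_ij = T_i^k g_kj, the (2,0) version
print(T_lower.shape)
```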

Example 2.1 Linear operators in quantum mechanics

Thinking about linear operators as (1, 1) tensors may seem a bit strange, but in fact this is what one does in quantum mechanics all the time! Given an operator H on a quantum mechanical Hilbert space spanned by orthonormal vectors {e_i} (which in Dirac notation we would write as {|i⟩}), we usually write H|i⟩ for H(e_i), ⟨j|i⟩ for ẽ^j(e_i) = (e_j, e_i), and ⟨j|H|i⟩ for (e_j, He_i). Thus, (2.4) would tell us that (using basis vectors instead of arbitrary vectors)

H_i^j = H(e_i, e^j)   (2.8)
      = e^j(He_i)   (2.9)
      = ⟨j|H|i⟩   (2.10)

where we converted to Dirac notation in the last equality to obtain the familiar quantum mechanical expression for the components of a linear operator. These components are often referred to as matrix elements, since when we write operators as matrices the elements of the matrices are just the components arranged in a particular fashion, as in (1.26).

Example 2.2 The Moment of Inertia Tensor

The moment of inertia tensor, denoted I, is the symmetric (2,0) tensor which, when evaluated on the angular velocity vector, yields the kinetic energy of a rigid body, i.e.

(1/2) I(ω, ω) = KE.   (2.11)

Alternatively we can raise an index on I and define it to be the linear operator which eats the angular velocity and spits out the angular momentum, i.e.

L = Iω. (2.12)

(2.11) and (2.12) are most often seen in components (referred to a cartesian basis), where they read

KE = (1/2) [ω]^T [I] [ω]   (2.13)
[L] = [I] [ω].   (2.14)

Note that since we raise and lower indices with an inner product and usually use orthonormal bases, the components of I when viewed as a (2,0) tensor and when viewed as a (1,1) tensor are the same, cf. exercise 1.17.

Example 2.3 Metric Tensors

We met the Euclidean metric on Rn in example 1.17 and the Minkowski metric on R4 in example 1.19, and it’s easy to verify that both are (2,0) tensors (why isn’t the Hermitian scalar product of example 1.47 included?). We also have the inverse metrics, defined in Problem 4 of Chapter 1, and the reader can verify that these are (0,2) tensors.

Exercise 2.2 Show that for a metric g on V ,

g_i^j = δ_i^j,   (2.15)

so the (1, 1) tensor associated to g (via g!) is just the identity operator.

2.2 Change of Basis

Now we are in a position to derive the usual transformation laws that historically were taken as the definition of a tensor. Suppose we have a vector space V and two bases for V, B = {e_i}_{i=1...n} and B′ = {e_{i′}}_{i=1...n}. Since B is a basis, each of the e_{i′} can be expressed as e_{i′} = A^j_{i′} e_j for some numbers A^j_{i′}. The same logic dictates that there exist numbers A^{j′}_i (note that here the upper index is primed) such that e_i = A^{j′}_i e_{j′}, and since

e_i = A^{j′}_i e_{j′} = A^{j′}_i A^k_{j′} e_k   (2.16)

we must have

A^{j′}_i A^k_{j′} = δ^k_i.   (2.17)

Considering (2.16) with the primed and unprimed indices switched also yields

A^j_{i′} A^{k′}_j = δ^{k′}_{i′},   (2.18)

so, in a way, A^{j′}_i and A^j_{i′} are inverses of each other. Notice that A^j_{i′} and A^{j′}_i are not the components of tensors, as their indices refer to different bases. How do the corresponding dual bases transform? Let {e^i}_{i=1...n} and {e^{i′}}_{i=1...n} be the bases dual to B and B′. Then the components of e^{i′} with respect to {e^i}_{i=1...n} are

e^{i′}(e_j) = e^{i′}(A^{k′}_j e_{k′}) = A^{k′}_j δ^{i′}_{k′} = A^{i′}_j,   (2.19)

i.e.

e^{i′} = A^{i′}_j e^j.   (2.20)

Likewise,

e^i = A^i_{j′} e^{j′}.   (2.21)

Notice how well the Einstein summation convention and our convention for priming indices work together in the transformation laws. Now we are ready to see how the components of a general (r, s) tensor T transform:

T^{j′_1...j′_s}_{i′_1,...,i′_r} = T(e_{i′_1}, ..., e_{i′_r}, e^{j′_1}, ..., e^{j′_s})   (2.22)
 = T(A^{k_1}_{i′_1} e_{k_1}, ..., A^{k_r}_{i′_r} e_{k_r}, A^{j′_1}_{l_1} e^{l_1}, ..., A^{j′_s}_{l_s} e^{l_s})   (2.23)
 = A^{k_1}_{i′_1} ... A^{k_r}_{i′_r} A^{j′_1}_{l_1} ... A^{j′_s}_{l_s} T(e_{k_1}, ..., e_{k_r}, e^{l_1}, ..., e^{l_s})   (2.24)
 = A^{k_1}_{i′_1} ... A^{k_r}_{i′_r} A^{j′_1}_{l_1} ... A^{j′_s}_{l_s} T^{l_1...l_s}_{k_1...k_r}.   (2.25)

(2.25) is the standard tensor transformation law, which is taken as the definition of a tensor in much of the physics literature; here we have derived it as a consequence of our definition of a tensor as a multilinear function on V and V∗. The two are equivalent, however, as the reader will check in exercise 2.3 below. With the general transformation law in hand, we’ll now look at specific types of tensors and derive their matrix transformation laws; to this end, it will be useful to introduce the matrices

A = ⎛ A^{1′}_1  A^{1′}_2  ⋯  A^{1′}_n ⎞        A^{−1} = ⎛ A^1_{1′}  A^1_{2′}  ⋯  A^1_{n′} ⎞
    ⎜ A^{2′}_1  A^{2′}_2  ⋯  A^{2′}_n ⎟                 ⎜ A^2_{1′}  A^2_{2′}  ⋯  A^2_{n′} ⎟
    ⎜    ⋮         ⋮             ⋮    ⎟                 ⎜    ⋮         ⋮             ⋮    ⎟
    ⎝ A^{n′}_1  A^{n′}_2  ⋯  A^{n′}_n ⎠ ,               ⎝ A^n_{1′}  A^n_{2′}  ⋯  A^n_{n′} ⎠ ,   (2.26)

which by virtue of (2.17) and (2.18) satisfy

AA^{−1} = A^{−1}A = I   (2.27)

as our notation suggests.

Exercise 2.3 Consider a function which assigns to a basis {ei}i=1...n the set of numbers {T l1...ls } which transform according to (2.25) under a change of basis. Show that k1...kr this defines a multilinear function T of type (r, s) on V , and be sure to check that your definition is basis independent (i.e. that the value of T does not depend on which basis {ei}i=1...n you choose).

Example 2.4 Vectors and Dual Vectors

Given a vector v (considered as a (0, 1) tensor as per (2.3)), (2.25) tells us that its components transform as

v^{i′} = A^{i′}_j v^j   (2.28)

while the components of a dual vector f transform as

f_{i′} = A^j_{i′} f_j.   (2.29)

i0 Notice that the components of v transform with the Aj whereas the basis vectors j transform with the Ai0 , so the components of a vector obey the law opposite (‘contra’) to the basis vectors. This is the origin of the term ‘contravariant’. Note also that the components of a dual vector transform in the same way as the basis vectors, hence the term ‘covariant’. It makes sense that the basis vectors and the components of a vector should transform oppositely; v exists independently of any basis for V and i shouldn’t change under a change of basis, so if the ei change one way the v should change oppositely. Similar remarks apply to dual vectors. Incidentally, we can now explain a little bit more about the Einstein summation convention. We knew ahead of time that the components of dual vectors would transform like basis vectors, so we gave them both lower indices. We also knew that the components of vectors would transform like dual basis vectors, so we gave them both upper indices. Since the two transformation laws are opposite, we know (see exercise 2.6 below) that a summation over an upper index and lower index will yield an object that does not transform at all, so the summation represents an object or a process that doesn’t depend upon a choice of basis. For instance, the expression i v ei represents the vector v which is defined without reference to any basis, and i the expression fiv is just f(v), the action of the functional f on the vector v, also defined without reference to any basis. Processes such as these are so important and ubiquitous that it becomes very convenient to omit the summation sign for repeated upper and lower indices, and we thus have the summation convention. In terms of matrices, we can write (2.28) and (2.29) as

[v]_{B′} = A [v]_B   (2.30)
[f]_{B′} = (A^{−1})^T [f]_B   (2.31)

where the superscript T again denotes the transpose of a matrix. The reader showed in exercise 1.17 that if we have an inner product (· , ·) on a real vector space V and an orthonormal basis {e_i}_{i=1...n} then the components of vectors and their corresponding dual vectors are identical, which is why we were able to ignore the distinction between

them for so long. (2.30) and (2.31) seem to contradict this, however, since it looks like the components of dual vectors transform very differently from the components of vectors. How do we explain this? Well, if we change from one orthonormal basis to another, we have

δ_{i′j′} = (e_{i′}, e_{j′}) = A^k_{i′} A^l_{j′} (e_k, e_l) = Σ_{k=1}^{n} A^k_{i′} A^k_{j′}   (2.32)

which in matrices reads

(A^{−1})^T A^{−1} = I   (2.33)

so we must have

(A^{−1})^T = A ⟺ A^{−1} = A^T.   (2.34)

Such matrices are known as orthogonal matrices, and we see here that a transformation from one orthonormal basis to another is always implemented by an orthogonal matrix.¹ For such matrices (2.30) and (2.31) are identical, resolving our contradiction. Incidentally, for a complex inner product space the reader will show that orthonormal basis changes are implemented by matrices satisfying A^{−1} = A†. Such matrices are known as unitary matrices and should be familiar from quantum mechanics.
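A quick numerical check of this resolution (a sketch with an arbitrarily chosen rotation): for an orthogonal A, the vector rule (2.30) and the dual vector rule (2.31) produce identical components.

```python
import numpy as np

t = 0.3                                      # an arbitrary rotation angle
A = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])  # orthogonal: A^{-1} = A^T
v = np.array([1., 2., 3.])                   # components of v in an orthonormal basis
f = v.copy()                                 # components of the corresponding dual vector

v_new = A @ v                                # (2.30)
f_new = np.linalg.inv(A).T @ f               # (2.31)
print(np.allclose(v_new, f_new))             # True, since (A^{-1})^T = A
```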

Exercise 2.4 Show that for any A, (A−1)T = (AT )−1, justifying the sloppiness of our notation above.

Exercise 2.5 Show that for a complex inner product space V , the matrix A implementing an orthonormal change of basis satisfies A−1 = A†.

Exercise 2.6 Show that f(v) = f_i v^i = [f]^T [v] is invariant under a change of basis, as it should be. Prove this using the matrix transformation laws, i.e. show that [f]_B^T [v]_B = [f]_{B′}^T [v]_{B′} for any B, B′.

Example 2.5 Linear Operators

We already noted that linear operators can be viewed as (1,1) tensors as per (2.4). (2.25) then tells us that, for a linear operator T on V ,

T^{j′}_{i′} = A^k_{i′} A^{j′}_l T^l_k   (2.35)

¹See the Problems for more on orthogonal matrices.

which in matrix form reads

[T]_{B′} = A [T]_B A^{−1}   (2.36)

which is the familiar similarity transformation of matrices. This, incidentally, allows us to extend the trace functional from n × n matrices to linear operators as follows: Given T ∈ T(V) and a basis B for V, define the trace of T as

T r(T ) ≡ T r([T ]B). (2.37)

The reader can then use (2.36) to show (see exercise 2.9) that T r(T ) does not depend on the choice of basis B.

Exercise 2.7 Show that for v ∈ V, f ∈ V ∗,T a linear operator on V, f(T v) = [f]T [T ][v] is invariant under a change of basis. Use the matrix transformation laws.

0 3 Exercise 2.8 Let B = {x, y, z}, B = {x + iy, z, x − iy} be bases for H1(R ), and consider the operator Lz for which matrix expressions were found with respect to both bases in i0 j example 1.15. Find the numbers Aj and Ai0 and use these, along with (2.36), to obtain [Lz]B0 from [Lz]B.

Exercise 2.9 Show that (2.36) implies that T r([T ]B) does not depend on the choice of basis B, so that T r(T ) is well-defined.

Example 2.6 (2,0) Tensors

(2,0) tensors g, which include important examples such as the Minkowski metric and the Euclidean metric, transform as follows according to (2.25):

g_{i′j′} = A^k_{i′} A^l_{j′} g_{kl}   (2.38)

or in matrix form

[g]_{B′} = (A^{−1})^T [g]_B A^{−1}.   (2.39)

Notice that if g is an inner product and B and B′ are orthonormal bases then [g]_{B′} = [g]_B = I and (2.39) becomes

I = (A^{−1})^T A^{−1},   (2.40)

again telling us that A must be orthogonal. Also note that if A is orthogonal, (2.39) is identical to (2.36), so we don’t have to distinguish between metric tensors and linear operators (as most of us haven’t in the past!). In the case of the Minkowski

metric η we aren’t dealing with an inner product but we do have orthonormal bases, with respect to which² η takes the form

[η] = ⎛ 1  0  0   0 ⎞
      ⎜ 0  1  0   0 ⎟
      ⎜ 0  0  1   0 ⎟
      ⎝ 0  0  0  −1 ⎠   (2.41)

so if we are changing from one orthonormal basis to another we have

[η] = (A^{−1})^T [η] A^{−1}   (2.42)

or equivalently

[η] = A^T [η] A.   (2.43)

Matrices A satisfying (2.43) are known as Lorentz Transformations. Notice that these matrices are not quite orthogonal, so the components of vectors will transform slightly differently than those of dual vectors under these transformations. This is in contrast to the case of R^n with a positive-definite metric, where if we go from one orthonormal basis to another then the components of vectors and dual vectors transform identically, as the reader showed in exercise 1.17.
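A small numerical sketch (the boost below is a standard x-direction boost, chosen only for illustration) showing that a Lorentz transformation satisfies (2.43) while failing to be orthogonal:

```python
import numpy as np

eta = np.diag([1., 1., 1., -1.])              # [η], with e_t as the 4th basis vector
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
# boost along x, written in the (x, y, z, t) ordering used for η above
A = np.array([[gamma,       0., 0., -gamma*beta],
              [0.,          1., 0.,  0.],
              [0.,          0., 1.,  0.],
              [-gamma*beta, 0., 0.,  gamma]])

print(np.allclose(A.T @ eta @ A, eta))        # True: (2.43) holds
print(np.allclose(A.T @ A, np.eye(4)))        # False: A is not orthogonal
```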

Exercise 2.10 As in previous exercises, show using the matrix transformation laws that g(v, w) = [w]T [g][v] is invariant under a change of basis.

Before we conclude this section, we should remark that when we said that the j Ai0 were not the components of a tensor, we were lying a little; there is a tensor lurking around, namely the linear operator U that takes the old basis vectors into the new, i.e. U(ei) = ei0 ∀ i (the action of U on an arbitrary vector is then given by expanding that vector in the basis B and using linearity). What are the components of this tensor? Well, in the old basis B we have

U_i^j = U(e_i, e^j) = e^j(Ue_i) = e^j(e_{i′}) = e^j(A^k_{i′} e_k) = A^j_{i′}   (2.44)

2 We assume here that the basis vector et satisfying η(et, et) = −1 is the 4th vector in the basis, which isn’t necessary but is somewhat conventional in physics.

41 j so the Ai0 actually are the components of a tensor! Why did we lie, then? Well, the approach we have been taking so far is to try and think about things in a basis- independent way, and although U is a well-defined linear operator, its definition depends entirely on the two bases we’ve chosen, so we may as well work directly with the numbers that relate the bases. Also, using one primed index and one unprimed index makes it easy to remember transformation laws like (2.20) and (2.21) but is not consistent with our notation for the components of tensors. If we write out the components of U as a matrix, the reader should verify that

[e_{i′}]_B = [U]_B [e_i]_B = A^{−1} [e_i]_B   (2.45)

which should be compared to (2.30). (2.45) is called an active transformation, since we use the matrix A (or, rather, its inverse) to change one vector into another, namely e_i into e_{i′}. (2.30), on the other hand, is called a passive transformation, since we use the matrix A not to change the vector v but rather to change the basis to which v is referred, hence changing its components. The notation in most physics texts is not as explicit as ours; one usually sees matrix equations like

r′ = Ar   (2.46)

for both passive and active transformations, and one must rely on context to figure out how the equation is to be interpreted. In the active case, one considers the coordinate system fixed and interprets the matrix A as taking the physical vector r into a new vector r′, where the components of both are expressed in the same coordinate system. In the passive case, the physical vector r doesn’t change but the basis does, so one interprets the matrix A as taking the components of r in the old coordinate system and giving back the components of the same vector r in the new (primed) coordinate system. Notice that in the active case the prime refers to a new vector, and in the passive case to a new coordinate system. Passive transformations are probably the ones encountered most often in classical physics, since a change of cartesian coordinates induces a passive transformation. Active transformations do crop up, though, especially in the case of rigid body motion. In this scenario, one specifies the orientation of a rigid body by the time-dependent orthogonal basis transformation A(t) which relates the space frame K′ to the body frame K(t) (we use here the notation of example 1.12). As we saw above, there corresponds to the time-dependent matrix A(t) a time-dependent linear operator U(t) which satisfies U(t)(e_{i′}) = e_i(t). If K and K′ were coincident at t = 0 and r_0 is the position vector of a point p of the rigid body at that time, then the position of p at a later time is just r(t) = U(t)r_0, which as a matrix equation in K′

would read

[r(t)]_{K′} = A(t) [r_0]_{K′}   (2.47)

or in more common and less precise notation,

r(t) = A(t)r0. (2.48)

In other words, the position of a specific point on the rigid body at an arbitrary time t is given by the active transformation corresponding to the matrix A(t). The duality between passive and active transformations is also present in quantum mechanics. In the Schrodinger picture, one considers observables like the momentum or position operator as acting on the state ket while the basis kets remain fixed. This is the active viewpoint. In the Heisenberg picture, however, one considers the state ket to be fixed and considers the observables to be time-dependent (recall that (1.21) is the equation of motion for these operators). Since the operators are time- dependent, their eigenvectors (which form a basis3) are time-dependent as well, so this picture is the passive one in which the vectors don’t change but the basis does. Just as an equation like (2.46) can be interpreted in both the active and passive sense, a quantum mechanical equation like

⟨x̂(t)⟩ = ⟨ψ| (U† x̂ U) |ψ⟩   (2.49)
        = (⟨ψ|U†) x̂ (U|ψ⟩),   (2.50)

where U is the time-evolution operator for time t, can also be interpreted in two ways: in the active sense of (2.50), in which the U’s act on the vectors and change them into new vectors, and in the passive sense of (2.49), where the U’s act on the operator x̂ by a similarity transformation to turn it into a new operator, x̂(t).

2.3 The Tensor Product - Definition and Properties

One of the most basic operations with tensors, again commonplace in physics but often unacknowledged (or, at best, dealt with in an ad-hoc fashion) is that of the tensor product. Before giving the precise definition, which takes a little getting used to, we give a rough, heuristic description. Given two vector spaces V and W (over the same set of scalars C), we would like to construct a product space, which we denote V ⊗ W , whose elements are in some sense products of vectors v ∈ V and

3For details on why the eigenvectors of hermitian operators form a basis, at least in the finite- dimensional case, see [HK].

43 w ∈ W . We denote these products by v ⊗ w. This product, like any respectable product, should be bilinear in the sense that

(v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w

v ⊗ (w1 + w2) = v ⊗ w1 + v ⊗ w2. (2.51)

Given this property, the product of any two arbitrary vectors v and w can then be expanded in terms of bases {ei}i=1...n and {fj}j=1...m for V and W as

v ⊗ w = (v^i e_i) ⊗ (w^j f_j)   (2.52)
      = v^i w^j e_i ⊗ f_j   (2.53)

so {e_i ⊗ f_j}, i = 1...n, j = 1...m should be a basis for V ⊗ W, which would then have dimension nm. Thus the basis for the product space would be just the product of the basis vectors, and the dimension of the product space would be just the product of the dimensions. Now we make this precise. Given two vector spaces V and W, we define their tensor product V ⊗ W to be the set of all C-valued bilinear functions on V∗ × W∗. Such functions do form a vector space, as the reader can easily check. This definition may seem unexpected or counterintuitive at first, but the reader will see that this definition does yield the vector space described above. Now, given vectors v ∈ V, w ∈ W, we define the tensor product of v and w, written v ⊗ w, to be the element of V ⊗ W defined as follows:

(v ⊗ w)(h, g) ≡ v(h)w(g) ∀ h∈V ∗, g ∈W ∗. (2.54)

The bilinearity of the tensor product is immediate and the reader can probably verify it without writing anything down: just check that both sides of (2.51) are equal when evaluated on any pair of dual vectors. To prove that {ei ⊗fj}, i = 1 . . . n, j = 1 . . . m i j is a basis for V ⊗ W , let {e }i=1...n, {f }i=1...m be the corresponding dual bases and consider an arbitrary T ∈ V ⊗ W. Using bilinearity,

T(h, g) = h_i g_j T(e^i, f^j) = h_i g_j T^{ij}   (2.55)

where T^{ij} ≡ T(e^i, f^j). If we consider the expression T^{ij} e_i ⊗ f_j, then

(T^{ij} e_i ⊗ f_j)(e^k, f^l) = T^{ij} e_i(e^k) f_j(f^l)   (2.56)
                             = T^{ij} δ_i^k δ_j^l   (2.57)
                             = T^{kl}   (2.58)

so T^{ij} e_i ⊗ f_j agrees with T on basis vectors, hence on all vectors by bilinearity, so T = T^{ij} e_i ⊗ f_j. Since T was an arbitrary element of V ⊗ W, V ⊗ W = Span{e_i ⊗ f_j}. Furthermore, the e_i ⊗ f_j are linearly independent as the reader should check, so {e_i ⊗ f_j} is actually a basis for V ⊗ W and V ⊗ W thus has dimension mn. The tensor product has a couple of important properties besides bilinearity. First, it commutes with taking duals, that is

(V ⊗ W)∗ = V∗ ⊗ W∗.   (2.59)

Second, the tensor product is associative, i.e. for vector spaces V_i, i = 1, 2, 3,

(V1 ⊗ V2) ⊗ V3 = V1 ⊗ (V2 ⊗ V3). (2.60)

This property allows us to drop the parentheses and write expressions like V_1 ⊗ ... ⊗ V_n without ambiguity. One can think of V_1 ⊗ ... ⊗ V_n as the set of C-valued multilinear functions on V_1^∗ × ... × V_n^∗. Verifying these two properties is somewhat tedious and the reader is referred to [W] for proofs.
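A tiny numerical illustration of (2.52)-(2.53) (the components below are arbitrary, purely for concreteness): the components of v ⊗ w are the products v^i w^j, i.e. numpy's outer product, and bilinearity holds automatically.

```python
import numpy as np

v = np.array([1., 2.])            # components v^i in a basis {e_i} of V
w = np.array([3., 4., 5.])        # components w^j in a basis {f_j} of W

vw = np.outer(v, w)               # (v ⊗ w)^{ij} = v^i w^j, a 2 x 3 array
print(vw)

# bilinearity, e.g. (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w:
v2 = np.array([0., 7.])
print(np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w)))
```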

Exercise 2.11 If {ei}, {fj} and {gk} are bases for V1,V2 and V3 respectively, convince yourself that {ei⊗fj ⊗gk} is a basis for V1⊗V2⊗V3, and hence that dim V1⊗V2⊗V3 = n1n2n3 where dim Vi = ni. Extend the above to n-fold tensor products.

2.4 Tensor Products of V and V ∗

In the previous section we defined the tensor product for two arbitrary vector spaces V and W. Often, though, we’ll be interested in just the iterated tensor product of a vector space and its dual, i.e. in tensor products of the form

V∗ ⊗ ... ⊗ V∗ ⊗ V ⊗ ... ⊗ V   (2.61)

(with r copies of V∗ and s copies of V). This space is of particular interest because it is actually identical to T^r_s! From the previous section we know that the vector space in (2.61) can be interpreted as the set of multilinear functions on

V × ... × V × V∗ × ... × V∗,   (2.62)

(r copies of V and s copies of V∗), but these functions are exactly T^r_s! Since the space in (2.61) has basis B^r_s = {e^{i_1} ⊗ ... ⊗ e^{i_r} ⊗ e_{j_1} ⊗ ... ⊗ e_{j_s}}, we can conclude that B^r_s is a basis for T^r_s. In fact, we claim that if T ∈ T^r_s has components T^{j_1...j_s}_{i_1...i_r}, then

T = T^{j_1...j_s}_{i_1...i_r} e^{i_1} ⊗ ... ⊗ e^{i_r} ⊗ e_{j_1} ⊗ ... ⊗ e_{j_s}   (2.63)

is the expansion of T in the basis B^r_s. To prove this, we just need to check that both sides agree when evaluated on an arbitrary set of basis vectors; on the left hand side we get T(e_{i_1}, ..., e_{i_r}, e^{j_1}, ..., e^{j_s}) = T^{j_1...j_s}_{i_1,...,i_r} by definition, and on the right hand side we have

(T^{l_1...l_s}_{k_1...k_r} e^{k_1} ⊗ ... ⊗ e^{k_r} ⊗ e_{l_1} ⊗ ... ⊗ e_{l_s})(e_{i_1}, ..., e_{i_r}, e^{j_1}, ..., e^{j_s})
 = T^{l_1...l_s}_{k_1...k_r} e^{k_1}(e_{i_1}) ... e^{k_r}(e_{i_r}) e_{l_1}(e^{j_1}) ... e_{l_s}(e^{j_s})
 = T^{l_1...l_s}_{k_1...k_r} δ^{k_1}_{i_1} ... δ^{k_r}_{i_r} δ^{j_1}_{l_1} ... δ^{j_s}_{l_s}
 = T^{j_1...j_s}_{i_1,...,i_r}   (2.64)

so our claim is true. Thus, for instance, a (2, 0) tensor like the Minkowski metric can be written as η = η_{μν} e^μ ⊗ e^ν. Also, a tensor product like f ⊗ g = f_i g_j e^i ⊗ e^j ∈ T^2_0 thus has components (f ⊗ g)_{ij} = f_i g_j. Notice that we now have two ways of thinking about components: either as the values of the tensor on sets of basis vectors (as in (2.7)) or as the expansion coefficients in the given basis (as in (2.63)). This duality of perspective was pointed out in the case of vectors under (1.42), and it’s essential that the reader be comfortable thinking about components in either way.

Exercise 2.12 Compute the dimension of T^r_s.

Exercise 2.13 Let T1 and T2 be tensors of type (r1, s1) and (r2, s2) respectively on a vector space V . Show that T1 ⊗ T2 can be viewed as an (r1 + r2, s1 + s2) tensor, so that the tensor product of two tensors is again a tensor, justifying the nomenclature.

One important operation on tensors which we are now in a position to discuss is that of contraction, which is the generalization of the trace functional to tensors of arbitrary rank: Given T ∈ T^r_s(V) with expansion

T = T^{j_1...j_s}_{i_1...i_r} e^{i_1} ⊗ ... ⊗ e^{i_r} ⊗ e_{j_1} ⊗ ... ⊗ e_{j_s}   (2.65)

we can define a contraction of T to be any (r − 1, s − 1) tensor resulting from feeding e^i into one of the arguments, e_i into another and then summing over i as implied by the summation convention. For instance, if we feed e_i into the rth slot and e^i into the (r + s)th slot and sum, we get the (r − 1, s − 1) tensor T̃ defined as

T̃(v_1, ..., v_{r−1}, f_1, ..., f_{s−1}) ≡ T(v_1, ..., v_{r−1}, e_i, f_1, ..., f_{s−1}, e^i).   (2.66)

The reader may be suspicious that T̃ depends on our choice of basis but exercise 2.14 below will disabuse him of that. Notice that the components of T̃ are

T̃^{j_1...j_{s−1}}_{i_1...i_{r−1}} = T^{j_1...j_{s−1}l}_{i_1...i_{r−1}l}.   (2.67)

Similar contractions can be performed on any two arguments of T provided one argument eats vectors and the other dual vectors. In terms of components, a contraction can be taken with respect to any pair of indices provided that one is covariant and the other contravariant. Notice that for a linear operator (or (1, 1) tensor) A we have only one option for contraction: Ã = A_i^i = Tr(A). Notice also that if we have two linear operators A and B then their tensor product A ⊗ B ∈ T^2_2 has components

(A ⊗ B)_{ik}^{jl} = A_i^j B_k^l,   (2.68)

and contracting on the first and last index gives a (1, 1) tensor AB whose components are

(AB)_k^j = A_l^j B_k^l.   (2.69)

The reader should check that this tensor is just the composition of A and B, as our notation suggests. What linear operator do we get if we consider the other contraction A_i^j B_j^l?
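Here is a numpy sketch of these contractions (arbitrary matrices; the storage convention matA[j, i] = A_i^j is an assumption chosen so that matrix multiplication corresponds to composition). The same tools let the reader answer the question just posed about the other contraction.

```python
import numpy as np

# Store a linear operator so that matA[j, i] = A_i^j and A(v) has components matA @ v.
matA = np.random.rand(3, 3)
matB = np.random.rand(3, 3)

# The only contraction available for a (1,1) tensor is the trace A_i^i:
print(np.isclose(np.einsum('ii->', matA), np.trace(matA)))

# (A ⊗ B)_{ik}^{jl} = A_i^j B_k^l, stored here with index order [j, i, l, k]:
AtensorB = np.einsum('ji,lk->jilk', matA, matB)

# Contract the first lower index against the last upper index (i = l, summed):
AB = np.einsum('jiik->jk', AtensorB)       # components (AB)_k^j = A_l^j B_k^l
print(np.allclose(AB, matA @ matB))        # True: the contraction is the composition
```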

Exercise 2.14 Show that if {e_i}_{i=1...n} and {e_{i′}}_{i=1...n} are two arbitrary bases, then

T(v_1, ..., v_{r−1}, e_i, f_1, ..., f_{s−1}, e^i) = T(v_1, ..., v_{r−1}, e_{i′}, f_1, ..., f_{s−1}, e^{i′})   (2.70)

so that contraction is well-defined.

Example 2.7 V ∗ ⊗ V

One of the most important examples of tensor products of the form (2.61) is V ∗ ⊗ V , 1 which as we mentioned is the same as T1 , the space of linear operators. How does this identification work, explicitly? Well, given f ⊗ v ∈ V ∗ ⊗ V , we can define a linear operator by (f ⊗ v)(w) ≡ f(w)v. More generally, given

T_i^j e^i ⊗ e_j ∈ V∗ ⊗ V,   (2.71)

T(v) = T_i^j e^i(v) e_j = v^i T_i^j e_j   (2.72)

which is identical to (1.24). This identification of V∗ ⊗ V and linear operators is actually implicit in many quantum mechanical expressions. Let H be a quantum mechanical Hilbert space and let ψ, φ ∈ H so that L(φ) ∈ H∗. The tensor product of L(φ) and ψ, which we would write as L(φ) ⊗ ψ, is written in Dirac notation as |ψ⟩⟨φ| (note the transposition of the factors relative to our convention). If we’re given an

orthonormal basis B = {|i⟩}, the expansion (2.71) of an arbitrary operator H can be written in Dirac notation as

H = Σ_{i,j} H_{ij} |j⟩⟨i|,   (2.73)

an expression which may be familiar from advanced quantum mechanics texts.⁴ In particular, the identity operator can be written as

I = Σ_i |i⟩⟨i|,   (2.74)

which is referred to as the resolution of the identity with respect to the basis {|i⟩}. A word about nomenclature: in quantum mechanics and other contexts the tensor product is often referred to as the direct or outer product. This last term is meant to distinguish it from the inner product, since both the outer and inner products eat a dual vector and a vector (strictly speaking the inner product eats two vectors, but remember that with an inner product we may identify vectors and dual vectors) but the outer product yields a linear operator whereas the inner product yields a scalar.
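A short numerical sketch of this identification (the kets below are arbitrary): |ψ⟩⟨φ| is the matrix of outer products, and summing |i⟩⟨i| over an orthonormal basis gives the identity, as in (2.74).

```python
import numpy as np

psi = np.array([1.0 + 1.0j, 2.0])
phi = np.array([0.5, -1.0j])

ket_bra = np.outer(psi, phi.conj())          # the operator |psi><phi|
w = np.array([3.0, 4.0j])
print(np.allclose(ket_bra @ w, psi * (phi.conj() @ w)))   # |psi><phi|w> = <phi|w> |psi>

basis = np.eye(2)                            # an orthonormal basis {|i>}
identity = sum(np.outer(basis[i], basis[i].conj()) for i in range(2))
print(np.allclose(identity, np.eye(2)))      # resolution of the identity, eq (2.74)
```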

2.5 Applications of the Tensor Product in Classical Physics

Example 2.8 Moment of inertia tensor revisited

We took an abstract look at the moment of inertia tensor in example 2.2; now, armed with the tensor product, we can examine the moment of inertia tensor more concretely. Consider a rigid body and assume that its center of mass is fixed at a point O, so that it has only rotational degrees of freedom. Let O be the origin, pick time-dependent body-fixed axes K = {x̂(t), ŷ(t), ẑ(t)} for E³, and let g denote the Euclidean metric on E³. Recall that g allows us to define a map L from E³ to E³∗. Also, let the ith particle in the rigid body have mass m_i and position vector r_i with [r_i]_K = (x_i, y_i, z_i) relative to O, and let r_i² ≡ g(r_i, r_i). Then the (2, 0) moment of inertia tensor is given by

I_{(2,0)} = Σ_i m_i (r_i² g − L(r_i) ⊗ L(r_i))   (2.75)

4We don’t bother here with index positions since most quantum mechanics texts don’t employ Einstein summation convention, preferring instead to explicitly indicate summation.

while the (1, 1) tensor reads

I_{(1,1)} = Σ_i m_i (r_i² I − L(r_i) ⊗ r_i).   (2.76)

The reader should check that in components (2.75) reads

I_{jk} = Σ_i m_i (r_i² δ_{jk} − (r_i)_j (r_i)_k).   (2.77)

Writing a couple of components explicitly yields

I_{xx} = Σ_i m_i (y_i² + z_i²)
I_{xy} = −Σ_i m_i x_i y_i,   (2.78)

expressions which should be familiar from classical mechanics. So long as the basis is orthonormal, the components I_j^k of the (1, 1) tensor in (2.76) will be the same as for the (2, 0) tensor, as remarked earlier. Note that if we had not used body-fixed axes, the components of r_i (and hence the components of I, by (2.78)) would in general be time-dependent; this is the main reason for using the body-fixed axes in computation.
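The component formula (2.77) is easy to evaluate directly; here is a sketch for a made-up collection of point masses (the masses and positions are illustrative, not taken from the text).

```python
import numpy as np

masses = np.array([1.0, 2.0, 0.5])                  # m_i
positions = np.array([[1.0, 0.0, 0.0],              # [r_i]_K = (x_i, y_i, z_i)
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 0.0]])

I = np.zeros((3, 3))
for m, r in zip(masses, positions):
    I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))   # eq (2.77)

print(I)   # I_xx = sum m_i (y_i^2 + z_i^2), I_xy = -sum m_i x_i y_i, ...
```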

Example 2.9 Maxwell Stress Tensor

In considering the conservation of total momentum (mechanical plus electromagnetic) in classical electrodynamics one encounters the symmetric rank 2 Maxwell Stress Tensor, defined in (2, 0) form as⁵

T_{(2,0)} = E ⊗ E + B ⊗ B − (1/2)(E · E + B · B) g   (2.79)

where E and B are the dual vector versions of the electric and magnetic field vectors. T can be interpreted in the following way: T(v, w) gives the rate at which momentum in the v-direction flows in the w-direction. In components we have

T_{ij} = E_i E_j + B_i B_j − (1/2)(E · E + B · B) δ_{ij},   (2.80)

which is the expression found in most classical electrodynamics textbooks.

⁵Here and below we set all physical constants such as c and ε₀ equal to 1.

Example 2.10 The Electromagnetic tensor

As the reader has probably seen in discussions of relativistic electrodynamics, the electric and magnetic field vectors are properly viewed as components of a rank 2 antisymmetric tensor F, the electromagnetic field tensor.⁶ To write F in component-free notation requires machinery outside the scope of this text,⁷ so we settle for its expression as a matrix in an orthonormal basis, which in (2, 0) form is

[F_{(2,0)}] = ⎛   0     E_x    E_y    E_z ⎞
              ⎜ −E_x     0    −B_z    B_y ⎟
              ⎜ −E_y    B_z     0    −B_x ⎟
              ⎝ −E_z   −B_y    B_x     0  ⎠ .   (2.81)

The Lorentz force law

dp^μ/dτ = q F^μ_ν v^ν   (2.82)

where p = mv is the 4-momentum of a particle, v is its proper velocity and q its charge, can be rewritten without components as

dp/dτ = q F_{(1,1)}(v)   (2.83)

which just says that the proper force on a particle is given by the action of the field tensor on the particle’s proper velocity!

2.6 Applications of the Tensor Product in Quantum Physics

In this section we’ll discuss further applications of the tensor product in quantum mechanics, in particular the oft-unwritten rule that to add degrees of freedom one should take the tensor product of the corresponding Hilbert spaces. Before we get to this, however, we must set up a little more machinery and address an issue that we’ve so far swept under the rug. The issue is that when dealing with spatial degrees of freedom, as opposed to ‘internal’ degrees of freedom like spin, we often encounter

6In this example and the one above we are actually not dealing with tensors but with tensor fields, i.e. tensor-valued functions on space and spacetime. For the discussion here, however, we will ignore the spatial dependence, focusing instead on the tensorial properties. 7One needs the exterior derivative, a generalization of the curl, divergence and gradient operators from vector calculus. See [SC] for a very readable account.

Hilbert spaces like L²([−a, a]) and L²(R) which are most conveniently described by ‘basis’ vectors which are eigenvectors of either the position operator x̂ or the momentum operator p̂. The trouble with these bases is that they are often non-denumerably infinite (i.e. can’t be indexed by the integers, unlike all the bases we’ve worked with so far) and, what’s worse, the ‘basis vectors’ don’t even belong to the Hilbert space! Consider, for example, L²(R). The position operator x̂ acts on functions ψ(x) ∈ L²(R) by

x̂ ψ(x) = x ψ(x).   (2.84)

If we follow the practice of most quantum mechanics texts and treat the Dirac delta functional δ as L(δ(x)) where δ(x), the ‘delta function’, is infinite at 0 and 0 elsewhere, the reader can check (see exercise 2.15) that

x̂ δ(x − x_0) = x_0 δ(x − x_0)   (2.85)

so that δ(x − x_0) is an ‘eigenfunction’ of x̂ with eigenvalue x_0 (in Dirac notation we write the corresponding ket as |x_0⟩). The trouble is that, as we saw in example 1.23, there is no such δ(x) ∈ L²(R)! Furthermore, since the basis {δ(x − x_0)}_{x_0∈R} is indexed by R and not some subset of Z, we must expand ψ ∈ L²(R) by integrating instead of summing. Integration, however, is a limiting procedure and one should really worry about whether a given integral converges and in what sense it converges. Rectifying all this in a rigorous manner is outside the scope of this text, unfortunately, but we do wish to work with these objects, so we content ourselves with the traditional approach: ignore the fact that the delta functions are not elements of L²(R), work without discomfort with the basis {δ(x − x_0)}_{x_0∈R},⁸ and fearlessly expand arbitrary functions ψ in the basis {δ(x − x_0)}_{x_0∈R} as

ψ(x) = ∫_{−∞}^{∞} dx_0 ψ(x_0) δ(x − x_0),   (2.86)

where the above equation can be interpreted both as the expansion of ψ and just as the definition of the delta function. In Dirac notation (2.86) reads

|ψ⟩ = ∫_{−∞}^{∞} dx_0 ψ(x_0) |x_0⟩.   (2.87)

Note that we can think of the numbers ψ(x) as the components of |ψ⟩ with respect to the basis {|x⟩}_{x∈R}. Alternatively, if we define the inner product of our basis vectors to be

⟨x|x_0⟩ ≡ δ(x − x_0)   (2.88)

⁸Working with the momentum eigenfunctions e^{ipx} instead doesn’t help; though these are legitimate functions, they still are not square integrable since ∫_{−∞}^{∞} |e^{ipx}|² dx = ∞!

as is usually done, then using (2.87) we have

ψ(x) = ⟨x|ψ⟩   (2.89)

which gives another interpretation of ψ(x). These two interpretations of ψ(x) are just the ones mentioned below (2.64); that is, the components of a vector can be interpreted either as expansion coefficients (as in (2.87)), or as the value of a given dual vector on the vector, as in (2.89).

Exercise 2.15 By considering the integral

∫_{−∞}^{∞} (x̂ δ(x − x_0)) f(x) dx   (2.90)

(where f is an arbitrary square-integrable function), show formally that

xˆ δ(x − x0) = x0 δ(x − x0). (2.91)

Exercise 2.16 Check that {δ(x − x0)}x0∈R satisfies (1.54).

Exercise 2.17 Verify (2.89).

We mentioned in a footnote on the previous page that one could use momentum eigenfunctions instead of position eigenfunctions as a basis for L2(R). What does the corresponding change of basis look like?

Example 2.11 The Momentum Representation

As is well known from quantum mechanics, the eigenfunctions of the momentum operator p̂ = (ℏ/i) d/dx are the wavefunctions {e^{ipx/ℏ}}_{p∈R}, and these wavefunctions form a basis for L²(R). In fact, the expansion of an arbitrary function ψ ∈ L²(R) in this basis is just the Fourier expansion of ψ, written

ψ(x) = (1/2π) ∫_{−∞}^{∞} dp φ(p) e^{ipx/ℏ}   (2.92)

where the component function φ(p) is known as the Fourier Transform of ψ. One could in fact work exclusively with φ(p) instead of ψ(x), and recast the operators x̂ and p̂ in terms of their action on φ(p) (see exercise 2.18 below); such an approach is known as the momentum representation. Now, what does it look like when we

switch from the position representation to the momentum representation, i.e. when we change bases from {δ(x − x_0)}_{x_0∈R} to {e^{ipx/ℏ}}_{p∈R}? Since the basis vectors are indexed by real numbers p and x_0 as opposed to integers i and j, our change of basis will not be given by a matrix with components A^{i′}_j but rather a function A(x_0, p), which by (2.19) and the fact that both bases are orthonormal is given by the inner product of δ(x − x_0) and e^{ipx/ℏ}. In Dirac notation this would be written as ⟨x_0|p⟩, and we have

A(x_0, p) = ⟨x_0|p⟩ = ∫_{−∞}^{∞} dx δ(x − x_0) e^{ipx/ℏ} = e^{ipx_0/ℏ},   (2.93)

a familiar equation. □

Exercise 2.18 Use (2.92) to show that in the momentum representation, p̂ φ(p) = p φ(p) and x̂ φ(p) = iℏ dφ/dp.

The next issue to address is that of linear operators: having constructed a new Hilbert space H1 ⊗ H2 out of two Hilbert spaces H1 and H2, can we construct linear operators on H1 ⊗ H2 out of the linear operators on H1 and H2? Well, given linear operators Ai on Hi, i = 1, 2, we can define a linear operator A1 ⊗ A2 on H1 ⊗ H2 by

(A1 ⊗ A2)(v ⊗ w) ≡ (A1v) ⊗ (A2w). (2.94) The reader can check that with this definition, (A⊗B)(C ⊗D) = AC ⊗BD. In most quantum mechanical applications either A1 or A2 is the identity, i.e. one considers operators of the form A1 ⊗ I or I ⊗ A2. These are often abbreviated as A1 and A2 even though they’re acting on H1 ⊗ H2. The last subject we should touch upon is that of vector operators, which are defined to be sets of operators that transform as three dimensional vectors under the adjoint action of the total angular momentum operators Ji. That is, a vector operator is a set of operators {Bi}i=1−3 (often written collectively as B) that satisfies

ad_{J_i}(B_j) = [J_i, B_j] = iℏ ε_{ijk} B_k   (2.95)

where ε_{ijk} is the familiar Levi-Civita symbol. The three dimensional position operator r̂ = {x̂, ŷ, ẑ}, momentum operator p̂ = {p̂_x, p̂_y, p̂_z}, and orbital angular momentum operator L = {L_x, L_y, L_z} are all vector operators, as the reader can check.

Exercise 2.19 For spinless particles, J = L = x̂ × p̂. Expressions for the components may be obtained by expanding the cross product or referencing example 1.15 and exercise 1.8. Use these expressions and the canonical commutation relations [x̂_i, p̂_j] = iℏ δ_{ij} to show that x̂, p̂ and L are all vector operators.

Now we are finally ready to consider some examples, in which we’ll take as an axiom that adding degrees of freedom is implemented by taking tensor products of the corresponding Hilbert spaces. The reader will see that this process reproduces familiar results.

Example 2.12 Addition of translational degrees of freedom

Consider a spinless particle constrained to move in one dimension; the quantum mechanical Hilbert space for this system is L²(R) with basis {|x⟩}_{x∈R}. If we consider a second dimension, call it the y dimension, then this degree of freedom has its own Hilbert space L²(R) with basis {|y⟩}_{y∈R}. If we allow the particle both degrees of freedom then the Hilbert space for the system is L²(R) ⊗ L²(R), with basis {|x⟩ ⊗ |y⟩}_{x,y∈R}. An arbitrary ket |ψ⟩ ∈ L²(R) ⊗ L²(R) has expansion

|ψ⟩ = ∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy ψ(x, y) |x⟩ ⊗ |y⟩   (2.96)

with expansion coefficients ψ(x, y). If we iterate this logic, we get in 3 dimensions

|ψ⟩ = ∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy ∫_{−∞}^{∞} dz ψ(x, y, z) |x⟩ ⊗ |y⟩ ⊗ |z⟩.   (2.97)

If we rewrite ψ(x, y, z) as ψ(r) and |x⟩ ⊗ |y⟩ ⊗ |z⟩ as |r⟩ where r = (x, y, z), then we have

|ψ⟩ = ∫ d³r ψ(r) |r⟩   (2.98)

which is the familiar 3D expansion of a ket in terms of position eigenkets. Such a ket is an element of L²(R) ⊗ L²(R) ⊗ L²(R), which is also denoted as L²(R³).⁹

Example 2.13 Two-particle systems

Now consider two spinless particles in three dimensional space, possibly interacting through some sort of potential. The two-body problem with a 1/r potential is a classic example of this. The Hilbert space for such a system is then L²(R³) ⊗ L²(R³), with basis {|r_1⟩ ⊗ |r_2⟩}_{r_i∈R³}. In many textbooks the tensor product symbol is omitted

⁹L²(R³) is actually defined to be the set of all square integrable functions on R³, i.e. functions f satisfying

∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy ∫_{−∞}^{∞} dz |f|² < ∞.   (2.99)

Not too surprisingly, this space turns out to be identical to L²(R) ⊗ L²(R) ⊗ L²(R).

and such basis vectors are written as |r_1⟩ |r_2⟩ or even |r_1, r_2⟩. A ket |ψ⟩ in this Hilbert space then has expansion

|ψ⟩ = ∫ d³r_1 ∫ d³r_2 ψ(r_1, r_2) |r_1, r_2⟩   (2.100)

which is the familiar expansion of a ket in a two-particle Hilbert space. One can interpret ψ(r_1, r_2) as the probability amplitude of finding particle 1 in position r_1 and particle 2 in position r_2 simultaneously.

Example 2.14 Addition of orbital and spin angular momentum

Now consider a spin s particle in three dimensions. As remarked in example 1.2, the ket space corresponding to the spin degree of freedom is C^{2s+1}, and one usually takes a basis {|m⟩}_{−s≤m≤s} of S_z eigenvectors with eigenvalue ℏm. The total Hilbert space for this system is L²(R³) ⊗ C^{2s+1}, and we can take as a basis {|r⟩ ⊗ |m⟩} where r ∈ R³ and −s ≤ m ≤ s. Again, the basis vectors are often written as |r⟩|m⟩ or even |r, m⟩. An arbitrary ket |ψ⟩ then has expansion

|ψ⟩ = Σ_{m=−s}^{s} ∫ d³r ψ_m(r) |r, m⟩   (2.101)

where ψ_m(r) is the probability of finding the particle at position r and with mℏ units of spin angular momentum in the z-direction. These wavefunctions are sometimes written in column vector form

⎛ ψ_s      ⎞
⎜ ψ_{s−1}  ⎟
⎜    ⋮     ⎟   (2.102)
⎜ ψ_{−s+1} ⎟
⎝ ψ_{−s}   ⎠ .

The total angular momentum operator J is given by L ⊗ I + I ⊗ S where L is the orbital angular momentum operator. One might wonder why J isn’t given by L ⊗ S; there is a good answer to this question, but it requires delving into the (fascinating) subject of Lie groups and Lie algebras (see [Ha]), which we won’t do here. In the meantime, the reader can get a partial answer by checking (exercise 2.20 below) that the operators L_i ⊗ S_i don’t satisfy the angular momentum commutation relations whereas the L_i ⊗ I + I ⊗ S_i do.

Exercise 2.20 Check that

[L_i ⊗ I + I ⊗ S_i, L_j ⊗ I + I ⊗ S_j] = iℏ Σ_{k=1}^{3} ε_{ijk} (L_k ⊗ I + I ⊗ S_k).   (2.103)

Also show that

[L_i ⊗ S_i, L_j ⊗ S_j] ≠ iℏ Σ_{k=1}^{3} ε_{ijk} L_k ⊗ S_k.   (2.104)

Be sure to use the bilinearity of the tensor product carefully.
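A numerical check of this exercise (a sketch assuming two spin-1/2 factors, L_i = S_i = σ_i/2, and ℏ = 1, so the expected relation reads [J_i, J_j] = i ε_{ijk} J_k):

```python
import numpy as np

# Pauli matrices; take L_i = S_i = sigma_i / 2 (two spin-1/2 factors, hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
L = S = [sx / 2, sy / 2, sz / 2]

def comm(a, b):
    return a @ b - b @ a

I2 = np.eye(2)
J = [np.kron(L[i], I2) + np.kron(I2, S[i]) for i in range(3)]   # L_i ⊗ I + I ⊗ S_i
K = [np.kron(L[i], S[i]) for i in range(3)]                     # L_i ⊗ S_i

print(np.allclose(comm(J[0], J[1]), 1j * J[2]))   # True: (2.103) holds
print(np.allclose(comm(K[0], K[1]), 1j * K[2]))   # False: (2.104), the naive guess fails
```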

Example 2.15 Addition of spin angular momentum

Next consider two particles of spin s_1 and s_2 respectively, fixed in space so that they have no translational degrees of freedom. The Hilbert space for this system is C^{2s_1+1} ⊗ C^{2s_2+1}, with basis {|m_1⟩ ⊗ |m_2⟩} where −s_i ≤ m_i ≤ s_i, i = 1, 2. Again, such tensor product kets are usually abbreviated as |m_1⟩|m_2⟩ or |m_1, m_2⟩. There are several important linear operators on C^{2s_1+1} ⊗ C^{2s_2+1}:

S_1 ⊗ I — vector spin operator on the 1st particle
I ⊗ S_2 — vector spin operator on the 2nd particle
S ≡ S_1 ⊗ I + I ⊗ S_2 — total vector spin operator
S² ≡ Σ_i S_i S_i — total spin squared operator

(Why aren’t S_1² and S_2² in our list above?) The vectors |m_1, m_2⟩ are clearly eigenvectors of S_{1z} and S_{2z} and hence S_z (we abuse notation as mentioned below (2.95)) but, as the reader will show in exercise 2.21, they are not necessarily eigenvectors of S². However, since the S_i obey the angular momentum commutation relations (as the reader can check), the general theory of angular momentum tells us that we can find a basis for C^{2s_1+1} ⊗ C^{2s_2+1} consisting of eigenvectors of S_z and S². Furthermore, it can be shown that the S² eigenvalues that occur are ℏ²s(s + 1) where

|s_1 − s_2| ≤ s ≤ s_1 + s_2   (2.105)

and for a given s the possible S_z eigenvalues are mℏ where −s ≤ m ≤ s as usual (see [SA] for details). We will write these basis kets as {|s, m⟩} where the above restrictions on s and m are understood, and where we interpret |s, m⟩ as a state with total angular momentum equal to ℏ√(s(s + 1)) and with mℏ units of angular momentum pointing along the z-axis. Thus we have two natural and useful bases for C^{2s_1+1} ⊗ C^{2s_2+1}: B = {|m_1, m_2⟩} and B′ = {|s, m⟩}. What does the transformation

between these two bases look like? Well, by their definition, the A^j_{i′} relating the two bases are given by e^j(e_{i′}); using s, m collectively in lieu of the primed index and m_1, m_2 collectively in lieu of the unprimed index, we have, in Dirac notation,

A^{m_1,m_2}_{s,m} = ⟨m_1, m_2|s, m⟩.   (2.106)

These numbers, the notation for which varies widely throughout the literature, are known as Clebsch-Gordon Coefficients and methods for computing them can be found in any standard quantum mechanics textbook. Let us illustrate the foregoing with an example:

Exercise 2.21 Show that

S² = S_1² ⊗ I + I ⊗ S_2² + 2 Σ_i S_{1i} ⊗ S_{2i}.   (2.107)

The right hand side of the above equation is usually abbreviated as S_1² + S_2² + 2 S_1 · S_2. Use this to show that |m_1, m_2⟩ is not generally an eigenvector of S².

Example 2.16 Entanglement

Consider two Hilbert spaces H_1 and H_2 and their tensor product H_1 ⊗ H_2. Only some of the vectors in H_1 ⊗ H_2 can be written as ψ ⊗ φ; such vectors are referred to as separable states or product states. All other vectors must be written as linear combinations of the form Σ_i ψ_i ⊗ φ_i, and these vectors are said to be entangled, since in this case the measurement of the degrees of freedom represented by H_1 will influence the measurement of the degrees of freedom represented by H_2. The classic example of an entangled state comes from the previous example of two fixed particles with spin; taking s_1 = s_2 = 1/2, we write the standard basis for C² as {|+⟩, |−⟩} and consider the state

|+⟩ |−⟩ − |−⟩ |+⟩.   (2.108)

If an observer measures the first particle to be spin-up then a measurement of the second particle’s spin is guaranteed to be spin-down, and vice-versa, so measuring one part of the system affects what one will measure for the other part. This is the sense in which the system is entangled. Note that for a product state ψ ⊗ φ such a statement cannot be made: a particular measurement of the first particle cannot affect what one measures for the second, since the second particle’s state will be φ no matter what. The reader will check below that (2.108) is not a product state.

Exercise 2.22 Prove that (2.108) cannot be written as ψ ⊗ φ for any ψ, φ ∈ C². Do this by expanding ψ and φ in the given basis and showing that no choice of expansion coefficients for ψ and φ will yield (2.108).
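One way to see this numerically (a sketch of an alternative argument, not the expansion the exercise asks for): arrange the coefficients of a state in the product basis into a matrix; a product state ψ ⊗ φ gives a rank-one matrix of coefficients, while the coefficient matrix of (2.108) has rank 2.

```python
import numpy as np

# Coefficients c_ij of |+>|-> - |->|+> in the basis {|+>|+>, |+>|->, |->|+>, |->|->}
c = np.array([[0., 1.],
              [-1., 0.]])
print(np.linalg.matrix_rank(c))    # 2, so the state cannot be psi ⊗ phi

# For comparison, a product state has a rank-one coefficient matrix:
psi, phi = np.array([1., 2.]), np.array([3., -1.])
print(np.linalg.matrix_rank(np.outer(psi, phi)))   # 1
```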

2.7 Symmetric and Antisymmetric Tensors

Given a vector space V there are certain subspaces of T^r_0(V) and T^0_r(V) which are of particular interest: the symmetric and antisymmetric tensors. A symmetric (r, 0) tensor is an (r, 0) tensor whose value is unaffected by the interchange (or transposition) of any two of its arguments, i.e.

T (v1, . . . , vi, . . . , vj, . . . , vr) = T (v1, . . . , vj, . . . , vi, . . . , vr) (2.109) for any i and j. Symmetric (0, r) tensors are defined similarly. The reader can easily check that the symmetric (r, 0) and (0, r) tensors each form vector spaces, denoted Sr(V ∗) and Sr(V ) respectively. For T ∈ Sr(V ∗), the symmetry condition implies that the components Ti1...ir are invariant under the transposition of any two indices, hence invariant under any rearrangement of the indices (since any rearrangement can be obtained via successive transpositions), so we may consider the expansion

T = T_{i_1...i_r} e^{i_1} ⊗ ... ⊗ e^{i_r}   (2.110)

as a totally symmetric product of dual vectors, hence the notation S^r(V∗). Similar remarks apply, of course, to S^r(V). Notice that for rank 2 tensors, the symmetry condition implies T_{ij} = T_{ji} so that [T]_B for any B is a symmetric matrix. Also note that it doesn’t mean anything to say that a linear operator is symmetric, since a linear operator is a (1, 1) tensor and there is no way of transposing the arguments. One might find that the matrix of a linear operator is symmetric in a certain basis, but this won’t necessarily be true in other bases. If we have a non-degenerate Hermitian form to raise and lower indices then we can, of course, speak of symmetry by turning our linear operator into a (2, 0) or (0, 2) tensor.

Example 2.17 S2(R2∗)

Consider the set {e^1 ⊗ e^1, e^2 ⊗ e^2, e^1 ⊗ e^2 + e^2 ⊗ e^1} ⊂ S²(R^{2∗}) where {e^i}_{i=1,2} is the standard dual basis. The reader can check that this set is linearly independent, and that any symmetric tensor can be written as

T = T_{11} e^1 ⊗ e^1 + T_{22} e^2 ⊗ e^2 + T_{12}(e^1 ⊗ e^2 + e^2 ⊗ e^1)   (2.111)

so this set is a basis for S²(R^{2∗}). In particular, the Euclidean metric g on R² can be written as

g = e^1 ⊗ e^1 + e^2 ⊗ e^2.   (2.112)

Note that g would not take this simple form in a non-orthonormal basis. □

There are many symmetric tensors in physics, almost all of them of rank 2. Many of them we’ve met already: the Euclidean metric on R³, the Minkowski metric on R⁴, the moment of inertia tensor, and the Maxwell stress tensor. The reader should refer to the examples and check that these are all symmetric tensors.

Exercise 2.23 Let V = R^n with the standard basis B. Convince yourself that

[ei ⊗ ej + ej ⊗ ei]B = Sij (2.113) where Sij is the symmetric matrix defined in example 1.8.

Next we turn to antisymmetric tensors. An antisymmetric (or alternating)(r, 0) tensor is one whose value changes sign under transposition of any two of its argu- ments, i.e.

T (v1, . . . , vi, . . . , vj, . . . , vr) = −T (v1, . . . , vj, . . . , vi, . . . , vr). (2.114)

Again, antisymmetric (0, r) tensors are defined similarly and both sets form vec- tor spaces, denoted ΛrV ∗ and ΛrV (for r = 1 we define Λ1V ∗ = V ∗ and Λ1V = V ). Note that (2.114) implies that if any of the vi are equal to each other, then T (v1, . . . , vr) = 0. In fact, the reader will show in exercise 2.24 below that if {v1, . . . , vr} is just linearly dependent, then T (v1, . . . , vr) = 0. The reader will also show that if dim V = n, then the only tensor in ΛrV ∗ and ΛrV for r > n is the 0 tensor. An important operation on antisymmetric tensors is the wedge product: Given f, g ∈ V ∗ we define the wedge product of f and g, denoted f ∧ g, to be the antisym- metric (2, 0) tensor defined by

f ∧ g ≡ f ⊗ g − g ⊗ f. (2.115)

Note that f ∧ g = −g ∧ f, and that f ∧ f = 0. Expanding (2.115) in terms of the e^i gives

f ∧ g = f_i g_j (e^i ⊗ e^j − e^j ⊗ e^i) = f_i g_j e^i ∧ e^j   (2.116)

so that {e^i ∧ e^j}_{i<j} spans Λ²V∗. The wedge product extends to an arbitrary number of dual vectors: f_1 ∧ ... ∧ f_r is the sum of all the tensor products of f_1, ..., f_r taken in every possible order, where each term is assigned a + or − sign according to how many transpositions

of the factors are necessary to obtain it from f_1 ⊗ ... ⊗ f_r; if the number is odd the term is assigned −1, if even a +1. Thus,

f1 ∧ f2 = f1 ⊗ f2 − f2 ⊗ f1 (2.117)

f_1 ∧ f_2 ∧ f_3 = f_1 ⊗ f_2 ⊗ f_3 + f_2 ⊗ f_3 ⊗ f_1 + f_3 ⊗ f_1 ⊗ f_2

−f3 ⊗ f2 ⊗ f1 − f2 ⊗ f1 ⊗ f3 − f1 ⊗ f3 ⊗ f2 (2.118)

and so on. The reader should convince himself that {e^{i_1} ∧ ... ∧ e^{i_r}}_{i_1<...<i_r} is a basis for Λ^r V∗.
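A short numerical sketch (the helper functions below are written just for this illustration, with arbitrary dual vectors on R³): building wedge products as signed sums of tensor products, as in (2.117)-(2.118), and checking antisymmetry.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of indices."""
    sign, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def wedge(*fs):
    """Antisymmetrized tensor product of dual vectors (in components)."""
    n = len(fs[0])
    T = np.zeros((n,) * len(fs))
    for p in permutations(range(len(fs))):
        term = fs[p[0]]
        for idx in p[1:]:
            term = np.multiply.outer(term, fs[idx])
        T += perm_sign(p) * term
    return T

f, g, h = np.random.rand(3), np.random.rand(3), np.random.rand(3)
print(np.allclose(wedge(f, g), np.outer(f, g) - np.outer(g, f)))   # eq (2.115)
W = wedge(f, g, h)
print(np.allclose(W, -W.transpose(1, 0, 2)))   # antisymmetric in the first two slots
```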

Exercise 2.24 Let T ∈ Λ^r V∗. Show that if {v_1, ..., v_r} is a linearly dependent set then T(v_1, ..., v_r) = 0. Use the same logic to show that if {f_1, ..., f_r} ⊂ V∗ is linearly dependent, then f_1 ∧ ... ∧ f_r = 0. If dim V = n, show that any set of more than n vectors must be linearly dependent, so that Λ^r V = Λ^r V∗ = 0 for r > n.

Exercise 2.25 Expand the (2,0) Electromagnetic field tensor of (2.81) in the basis {ei∧ej} where i < j and i, j = 1, 2, 3, 4.

Exercise 2.26 Let dim V = n. Show that the dimension of Λ^r V∗ and Λ^r V is (n choose r) = n!/((n − r)! r!).

Example 2.18 Identical particles

In quantum mechanics we often consider systems which contain identical particles, i.e. particles of the same mass, charge and spin. For instance, we might consider n non-interacting hydrogen atoms moving in a potential well, or the two electrons of the helium atom orbiting around the nucleus. In such cases we would assume that the total Hilbert space Htot would be just the n-fold tensor product of the single particle Hilbert space H. It turns out, however, that nature doesn’t work that way; for certain particles (known as bosons) only states in Sn(H) are observed, while for other particles (known as fermions) only states in ΛnH are observed. All known particles are either fermions or bosons. This restriction of the total Hilbert space to either Sn(H) or ΛnH is known as the symmetrization postulate and has far-reaching consequences. For instance, if we have two fermions, we cannot measure the same values for a complete set of quantum numbers for both particles, since then the state would have to include a term of the form |ψi |ψi and thus couldn’t belong to Λ2H. This fact that two fermions can’t be in the same state is known as the Pauli

Exclusion Principle. As another example, consider two identical spin 1/2 fermions fixed in space, so that H_tot = Λ²C². Λ²C² is one-dimensional with basis vector

|0, 0⟩ = |1/2⟩ |−1/2⟩ − |−1/2⟩ |1/2⟩   (2.119)

where we have used the notation of example 2.15. If we measure S² or S_z for this system we will get 0. This is in marked contrast to the case of two distinguishable spin 1/2 fermions; in this case the Hilbert space is C² ⊗ C² and we have additional possible state kets

|1, 1⟩ = |1/2⟩ |1/2⟩   (2.120)
|1, 0⟩ = |1/2⟩ |−1/2⟩ + |−1/2⟩ |1/2⟩   (2.121)
|1, −1⟩ = |−1/2⟩ |−1/2⟩   (2.122)

which yield nonzero values for S² and S_z. □

The next two examples are a little more mathematical than physical but they are necessary for the discussion of pseudovectors below. Hopefully the reader will also find them of interest in their own right.

Example 2.19 The Levi-Civita tensor

Consider R^n with the standard inner product. Let {e_i}_{i=1...n} be an orthonormal basis for R^n and consider the tensor

ε ≡ e^1 ∧ ... ∧ e^n ∈ Λ^n R^{n∗}.   (2.123)

The reader can easily check that

ε_{i_1...i_n} = {  0  if {i_1, ..., i_n} contains a repeated index
                { −1  if {i_1, ..., i_n} is an odd rearrangement of {1, ..., n}   (2.124)
                { +1  if {i_1, ..., i_n} is an even rearrangement of {1, ..., n}

For n = 3, the reader can also check that ε_{ijk} has the same values as the Levi-Civita symbol (also written as ε_{ijk}), so the Levi-Civita symbol can really be thought of as the components (in the standard basis) of an honest tensor, the Levi-Civita tensor!

Note that Λ^n R^{n∗} is one dimensional, and that ε is the basis for it described under (2.118). The reader may object that our construction of ε seems to depend on a choice of metric and orthonormal basis. The former is true: ε does depend on the metric, and we make no apologies for that. As to whether it depends on a particular choice of orthonormal basis, we must do a little bit of investigating; this will require a brief detour into the subject of determinants.

Exercise 2.27 Check that the ε tensor on R³ satisfies¹⁰

ε_{ijk} = { +1 if {ijk} is a cyclic permutation of {1, 2, 3}
          { −1 if {ijk} is an anticyclic permutation of {1, 2, 3}   (2.125)
          {  0 otherwise.

Is it true for ε on R⁴ that ε_{ijkl} = 1 if {ijkl} is a cyclic permutation of {1, 2, 3, 4}?
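The last question is easy to settle by brute force; the helper below (written just for this illustration) builds the components (2.124) and evaluates them on cyclic index sets for n = 3 and n = 4.

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    """Rank-n array of the components eps_{i_1...i_n} in the standard basis."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        # sign of the permutation p, via its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        eps[p] = -1 if inv % 2 else 1
    return eps

eps3 = levi_civita(3)
print(eps3[0, 1, 2], eps3[1, 2, 0], eps3[2, 0, 1])   # 1.0 1.0 1.0  (cyclic, n = 3)

eps4 = levi_civita(4)
print(eps4[1, 2, 3, 0])   # -1.0: a cyclic permutation of {1,2,3,4} need not give +1
```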

Example 2.20 The Determinant

The reader has doubtless encountered determinants before, and has probably seen them defined iteratively; that is, the determinant of a 2 × 2 square matrix A, denoted |A|, is defined to be

|A| ≡ A_{11}A_{22} − A_{12}A_{21}   (2.126)

and then the determinant of a 3 × 3 matrix B is defined in terms of this, i.e.

|B| ≡ B_{11} |B_{22} B_{23}; B_{32} B_{33}| − B_{12} |B_{21} B_{23}; B_{31} B_{33}| + B_{13} |B_{21} B_{22}; B_{31} B_{32}|   (2.127)

(each |· ; ·| denoting the 2 × 2 determinant with the indicated rows). This expression is known as the cofactor expansion of the determinant, and is not unique; one can expand about any row (or column), not necessarily (B_{11}, B_{12}, B_{13}). In our treatment of the determinant we will take a somewhat more sophisticated approach.¹¹ Take an n × n matrix A and consider its n columns as n column vectors in R^n, so that the 1st column vector A_1 has ith component A_{i1} and so on. Then, constructing the ε tensor using the standard basis and inner product on R^n, we define the determinant of A, denoted |A|, to be

|A| ≡ ε(A_1, ..., A_n)   (2.128)

^{10} A cyclic permutation of {1, ..., n} is any rearrangement of {1, ..., n} obtained by successively moving numbers from the beginning of the sequence to the end. That is, {2, ..., n, 1}, {3, ..., n, 1, 2}, and so on are the cyclic permutations of {1, ..., n}. Anti-cyclic permutations are cyclic permutations of {n, n − 1, ..., 1}.
^{11} For a complete treatment, however, the reader should consult [HK, Ch. 5].

or in components

|A| = Σ_{i_1,...,i_n} ε_{i_1...i_n} A_{1i_1} ... A_{ni_n}.   (2.129)

The reader should check explicitly that this definition reproduces (2.126) and (2.127) for n = 2, 3. The reader can also check in the Problems that many of the familiar properties of determinants (sign change under interchange of columns, invariance under addition of rows, factoring of scalars) follow quite naturally from the definition and the multilinearity and antisymmetry of ε. With the determinant in hand we may now explore to what extent the definition of ε depends on our choice of orthonormal basis. Consider another orthonormal basis {e_{i'} = A^j_{i'} e_j}. If we define an ε' in terms of this basis, we find

ε' = e^{1'} ∧ ... ∧ e^{n'}
   = A^{1'}_{i_1} ... A^{n'}_{i_n} e^{i_1} ∧ ... ∧ e^{i_n}
   = A^{1'}_{i_1} ... A^{n'}_{i_n} ε^{i_1...i_n} e^1 ∧ ... ∧ e^n
   = |A| ε   (2.130)

where in the 3rd equality we used the fact that if e^{i_1} ∧ ... ∧ e^{i_n} doesn't vanish it can always be rearranged to give e^1 ∧ ... ∧ e^n, and the resulting sign change, if any, is accounted for by the Levi-Civita symbol. Now since both {e_i} and {e_{i'}} are orthonormal bases, A must be an orthogonal matrix, so we can use the product rule for determinants |AB| = |A||B| (see the Problems for a simple proof) and the fact that |A^T| = |A| to get

1 = |I| = |AA^T| = |A||A^T| = |A|²   (2.131)

which implies |A| = ±1. Thus ε' = ε if the two orthonormal bases used in their construction are related by an orthogonal transformation A with |A| = 1; such a transformation is called a rotation^{12}, and two bases related by a rotation, or by any transformation with |A| > 0, are said to have the same orientation. If two bases are related by a basis transformation with |A| < 0 then the two bases are said to have the opposite orientation. We can then define an orientation as a maximal^{13} set of bases all having the same orientation, and the reader can show (see Problems)

^{12} No doubt the reader is used to thinking about a rotation as a transformation that preserves distances and fixes a line in space (the axis of rotation). This definition of a rotation is particular to R^3, since even in R^2 a rotation can't be considered to be "about an axis", as ẑ ∉ R^2. For the equivalence of our general definition and the more intuitive definition in R^3, see [Go].
^{13} i.e. could not be made bigger.

that R^n has exactly two orientations. In R^3 these two orientations are the right-handed bases and the left-handed bases. Thus we can say that ε doesn't depend on a particular choice of orthonormal basis, but it does depend on a metric and a choice of orientation, where the orientation chosen is the one determined by the standard basis. For orientation-changing transformations on R^3 one can show that A can be written as A = A'(−I), where A' is a rotation and −I is referred to as the inversion transformation. The inversion transformation plays a key role in the next example.
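As an optional numerical check (not part of the text), the component formula (2.129) can be compared against a library determinant, and (2.131) can be verified for a randomly generated orthogonal matrix; the helper names below are ours.

```python
import numpy as np
from itertools import permutations

def parity(p):
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def det_via_epsilon(A):
    """|A| = sum over i_1...i_n of eps_{i_1...i_n} A_{1 i_1} ... A_{n i_n}, as in (2.129)."""
    n = A.shape[0]
    return sum(parity(p) * np.prod([A[k, p[k]] for k in range(n)])
               for p in permutations(range(n)))

A = np.random.rand(3, 3)
print(np.isclose(det_via_epsilon(A), np.linalg.det(A)))     # True

Q, _ = np.linalg.qr(np.random.rand(3, 3))                   # a random orthogonal matrix
print(np.isclose(abs(det_via_epsilon(Q)), 1.0))             # True: |Q| = +/- 1, as in (2.131)
```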

Example 2.21 Pseudovectors in R3

NOTE: All indices in this example refer to orthonormal bases. The calculations and results below do not apply to non-orthonormal bases.

A pseudovector (or axial vector) is a tensor on R^3 whose components transform like those of a vector under rotations but don't change sign under inversion. Common examples of pseudovectors are the angular velocity vector ω, as well as all cross products, such as the angular momentum vector L = r × p. It turns out that pseudovectors like these are actually elements of Λ²R^3. To show this, we'll first demonstrate that elements of Λ²R^3 (known as bivectors) transform like pseudovectors, and then we'll show why L and ω are naturally interpreted as bivectors. First of all, what does it mean to say that bivectors in R^3 transform 'like' vectors? It means that there exists a 1-1 and onto map J from Λ²R^3 to R^3 such that if we transform the components of α ∈ Λ²R^3 by a rotation first and then apply J, or apply J and then rotate the components, we get the same thing. Thus J allows us to identify Λ²R^3 with R^3, and the fact that J commutes with rotations means that both those spaces behave 'the same' under rotations. The map J is given by

J(α^{23} e_2 ∧ e_3 + α^{31} e_3 ∧ e_1 + α^{12} e_1 ∧ e_2) = α^{23} e_1 + α^{31} e_2 + α^{12} e_3   (2.132)

or in components as

(J(α))^i = (1/2) ε^i_{jk} α^{jk}   (2.133)

so J(α) is a contraction of the ε tensor and α. The reader can check that this map is 1-1 and onto; that it is onto actually follows from the fact that it is 1-1, as in exercise 1.5, using the fact that dim Λ²R^3 = dim R^3 = 3. To show that J commutes with rotations, we need to show that (letting α̃ ≡ J(α) to clean up the notation)

A^{i'}_j α̃^j = (1/2) ε^{i'}_{k'l'} A^{k'}_m A^{l'}_n α^{mn}.   (2.134)

where A is a rotation. On the left hand side J is applied first followed by a rotation, and on the right hand side the rotation is done first, followed by J. We now compute for A orthogonal:

(1/2) ε^{i'}_{k'l'} A^{k'}_m A^{l'}_n α^{mn} = (1/2) ε_{p'k'l'} δ^{i'p'} A^{k'}_m A^{l'}_n α^{mn}
                                             = (1/2) Σ_q ε_{p'k'l'} A^{i'}_q A^{p'}_q A^{k'}_m A^{l'}_n α^{mn}
                                             = (1/2) Σ_q |A| ε_{qmn} A^{i'}_q α^{mn}
                                             = (1/2) |A| ε^q_{mn} A^{i'}_q α^{mn}
                                             = |A| A^{i'}_q α̃^q   (2.135)

where in the second equality we used a variant of (2.32) which comes from writing out AA^T = I in components, in the 3rd equality we used the easily verified fact that ε_{p'k'l'} A^{p'}_q A^{k'}_m A^{l'}_n = |A| ε_{qmn}, and in the 4th equality we raised an index to resume use of the Einstein summation convention, which we could do because covariant and contravariant components are equal in orthonormal bases. Now, for rotations |A| = 1, so in this case (2.135) and (2.134) are identical and α̃ does transform like a vector. For inversion, however, |A| = |−I| = −1, so (2.135) tells us that the components of α̃ do not change sign under inversion, as those of an ordinary vector would. Another way to see this is given in the exercise below.

Exercise 2.28 Use the matrix transformation law for rank 2 tensors (why don’t we have to specify type?) to show that the components of α don’t change sign under inversion.

Thus pseudovectors are really just bivectors. The next question is, why are cross products and the angular velocity vector naturally interpreted as bivectors? In the case of cross products, the answer is simple: given two vectors v, w ∈ R3, their wedge product v ∧ w looks like (using (2.116))

v ∧ w = (v^2 w^3 − v^3 w^2) e_2 ∧ e_3 + (v^3 w^1 − v^1 w^3) e_3 ∧ e_1 + (v^1 w^2 − v^2 w^1) e_1 ∧ e_2   (2.136)

so comparison with (2.132) shows that v × w = J(v ∧ w)! So you've actually met the wedge product a long time ago, you just didn't know it. What about the angular velocity vector ω?
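Before turning to ω, here is a brief numerical illustration (an aside of ours, not the text's) of the two facts just established: J(v ∧ w) reproduces the cross product, and the components of a bivector are unchanged by the inversion −I.

```python
import numpy as np
from itertools import permutations

# Build the epsilon components of (2.124) for n = 3.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    sign = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                sign = -sign
    eps[p] = sign

def J(alpha):
    """(J(alpha))^i = (1/2) eps^i_{jk} alpha^{jk}, valid in an orthonormal basis, cf. (2.133)."""
    return 0.5 * np.einsum('ijk,jk->i', eps, alpha)

def wedge(v, w):
    """Component matrix alpha^{jk} = v^j w^k - v^k w^j of the bivector v ^ w."""
    return np.outer(v, w) - np.outer(w, v)

v, w = np.random.rand(3), np.random.rand(3)
print(np.allclose(J(wedge(v, w)), np.cross(v, w)))    # True: v x w = J(v ^ w)

# Under inversion A = -I a rank 2 tensor's components go to A alpha A^T = alpha,
# so the bivector (and hence J(alpha)) does not change sign.
A = -np.eye(3)
alpha = wedge(v, w)
print(np.allclose(A @ alpha @ A.T, alpha))            # True
```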

Example 2.22 The Angular Velocity Vector

Let K and K' be two orthonormal bases for R^3 as in example 1.12, with K time-dependent. One can think of K as representing an accelerated reference frame, or a frame attached to a rotating rigid body; either way we'll refer to it as the body frame, and K' as the space frame. Now, given a time-dependent vector v(t) ∈ R^3 ∀ t, we'd like to compare the time-derivatives of v in the two frames. In physics texts this is usually accomplished via a decomposition

(dv/dt)_total = (dv/dt)_body + (dv/dt)_rotation   (2.137)

where (dv/dt)_total represents the time-derivative of v as 'seen in' the fixed inertial frame, (dv/dt)_body represents the time-derivative of v as 'seen in' the rotating or accelerating frame, and (dv/dt)_rotation represents the difference between the two, which is attributable solely to the acceleration of K and which involves the angular velocity vector ω. We will obtain such a decomposition, which will make precise the nature of these terms, as well as provide a proper definition of ω in which its bivector nature is manifest. Let A be the (time-dependent) matrix of the basis transformation taking K' to K, so that

[v]_{K'} = A [v]_K.   (2.138)

Then

d/dt [v]_{K'} = (dA/dt) [v]_K + A (d/dt) [v]_K
             = (dA/dt) A^{-1} [v]_{K'} + A (d/dt) [v]_K.   (2.139)

Now, (dA/dt) A^{-1} is actually an antisymmetric matrix:

0 = d/dt (I)
  = d/dt (AA^T)
  = (dA/dt) A^T + A (dA^T/dt)
  = (dA/dt) A^T + ((dA/dt) A^T)^T   (2.140)

so we can define a bivector ω̃ (note that here the '~' denotes the bivector, not the vector) whose components in the space frame are given by [ω̃]_{K'} = (dA/dt) A^{-1}. Then we define the angular velocity vector ω to be

ω ≡ J(ω̃).   (2.141)

It follows from this definition that

(dA/dt) A^{-1} = [    0      −ω^{3'}    ω^{2'} ]
                 [  ω^{3'}      0      −ω^{1'} ]   (2.142)
                 [ −ω^{2'}    ω^{1'}      0    ]

(we use primed indices since we're working with the K' components) so then

(dA/dt) A^{-1} [v]_{K'} = [    0      −ω^{3'}    ω^{2'} ] [ v^{1'} ]
                          [  ω^{3'}      0      −ω^{1'} ] [ v^{2'} ]   (2.143)
                          [ −ω^{2'}    ω^{1'}      0    ] [ v^{3'} ]

                        = [ ω^{2'} v^{3'} − ω^{3'} v^{2'} ]
                          [ ω^{3'} v^{1'} − ω^{1'} v^{3'} ]   (2.144)
                          [ ω^{1'} v^{2'} − ω^{2'} v^{1'} ]

= [ω × v]_{K'}.   (2.145)

If we identify

[(dv/dt)_total]_{K'} ≡ d/dt [v]_{K'}   (2.146)

[(dv/dt)_body]_{K'} ≡ A (d/dt) [v]_K   (2.147)

then (2.139) becomes

(dv/dt)_total = ω × v + (dv/dt)_body   (2.148)

written in components referred to the space frame. This equation should be familiar from upper-division mechanics texts, and in our treatment we see that ω arises naturally as a bivector. Note that we could also write (2.139) in the body frame, which (as the reader can verify) would then tell us that

[ω̃]_K = A^T (dA/dt).   (2.149)

Regardless of the frame in which we write (2.137), the reader should spend some time convincing himself that the identifications (2.146) and (2.147) above really do embody what we mean by the time derivatives of v as 'seen in' the different frames.
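The definition [ω̃]_{K'} = (dA/dt)A^{-1} is easy to test numerically. The sketch below (an aside, with our own illustrative choices: A = R_z(θ(t)) for a frame rotating about the z-axis at constant rate θ̇, and dA/dt approximated by a finite difference) checks that the matrix is antisymmetric and that applying it to a vector agrees with ω × v for ω = θ̇ e_z.

```python
import numpy as np

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

theta_dot = 0.7                    # constant angular speed (arbitrary test value)
t, dt = 1.3, 1e-6
A, A2 = Rz(theta_dot * t), Rz(theta_dot * (t + dt))
dA_dt = (A2 - A) / dt              # finite-difference approximation to dA/dt

omega_tilde = dA_dt @ np.linalg.inv(A)
print(np.allclose(omega_tilde, -omega_tilde.T, atol=1e-5))          # antisymmetric, cf. (2.140)

omega = np.array([0.0, 0.0, theta_dot])
v = np.random.rand(3)
print(np.allclose(omega_tilde @ v, np.cross(omega, v), atol=1e-5))  # matches omega x v, cf. (2.143)-(2.145)
```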

Chapter 2 Problems

1. In this problem we explore the properties of n × n orthogonal matrices. This is the set of real invertible matrices A satisfying A^T = A^{-1}, and is denoted O(n).

a) Is O(n) a subspace of M_n(R)?

b) Show that the product of two orthogonal matrices is again orthogonal, that the inverse of an orthogonal matrix is again orthogonal, and that the identity matrix is orthogonal. These properties show that O(n) is a group, i.e. a set with an associative multiplication operation and identity element such that the set is closed under multiplication and every element has a multiplicative inverse.

c) Show that the columns of an orthogonal matrix A, viewed as vectors in R^n, are mutually orthogonal under the usual inner product. Show the same for the rows. Show that for an active transformation, i.e.

[e_{i'}]_B = A [e_i]_B   (2.150)

where B = {e_i}_{i=1...n} so that [e_i]_B^T = (0, ..., 1, ..., 0) (with the 1 in the ith slot), the columns of A are the [e_{i'}]_B. In other words, the components of the new basis vectors in the old basis are just the columns of A.

d) Show that the orthogonal matrices A with |A| = 1, the rotations, form a subgroup unto themselves, denoted SO(n). Do the matrices with |A| = −1 also form a subgroup?

2. Prove the following basic properties of the determinant directly from the definition (2.128). We will restrict our discussion to operations with columns, though it can be shown that all the corresponding statements for rows are true as well.

a) Any matrix with a column of zeros has |A| = 0.

b) Multiplying a column by a scalar c multiplies the whole determinant by c.

c) The determinant changes sign under interchange of any two columns.

d) Adding one column to another, i.e. sending A_i → A_i + A_j for any i ≠ j, doesn't change the value of the determinant.

3. One can extend the definition of determinants from matrices to more general linear operators as follows: We know that a linear operator T on a vector space V (equipped with an inner product and orthonormal basis {e_i}_{i=1...n}) can be extended to arbitrary tensor products of V by

T(v_1 ⊗ ... ⊗ v_p) = (Tv_1) ⊗ ... ⊗ (Tv_p)   (2.151)

and thus, since Λ^n V ⊂ V ⊗ ... ⊗ V (n times), the action of T extends to Λ^n V similarly, by

T(v_1 ∧ ... ∧ v_n) = (Tv_1) ∧ ... ∧ (Tv_n),  v_i ∈ V ∀ i.   (2.152)

Consider then the action of T on the contravariant version of ε, the tensor ε̃ ≡ e_1 ∧ ... ∧ e_n; we know from exercise 2.26 that Λ^n V is one-dimensional, so T(ε̃) = (Te_1) ∧ ... ∧ (Te_n) is proportional to ε̃, and we define the determinant of T to be this proportionality constant, so that

(T e1) ∧ ... ∧ (T en) ≡ |T | e1 ∧ ... ∧ en. (2.153)

a) Show by expanding the left hand side of (2.153) in components that this more general definition reduces to the old one of (2.129) in the case of V = R^n.

b) Use this definition of the determinant to show that for two linear operators B and C on V,

|BC| = |B||C|. (2.154)

In particular, this result holds when B and C are square matrices.

c) Use b) to show that the determinant of a matrix is invariant under similarity transformations (see example 2.5). Conclude that we could have defined the determinant of a linear operator T as the determinant of its matrix in any basis.

4. Let V be a vector space with an inner product and orthonormal basis {e_i}_{i=1...n}. Prove that a linear operator T is invertible if and only if |T| ≠ 0, as follows:

a) Show that T is invertible if and only if {T (ei)}i=1...n is a linearly independent set (see exercise 1.6 for the ‘if’ part of the statement).

b) Show that |T| ≠ 0 if and only if {T(e_i)}_{i=1...n} is a linearly independent set.

5. The determinant of a matrix A in R^n can actually be interpreted as follows: given the standard basis {e_i}_{i=1...n}, we can imagine the standard n-dimensional cube determined by the points (0, ..., 1, ..., 0) (with the 1 in the ith slot) given by the e_i, whose n-dimensional volume is 1. We can then imagine the cube determined by the Ae_i; the n-dimensional volume of this parallelepiped is then |A|, where the sign of the determinant signifies whether or not the orientation of {Ae_i} is the same as that of {e_i}. Thus the determinant tells us the oriented volume of the n-cube determined by {Ae_i}. Verify this in the cases n = 2, 3, i.e. show that for n = 2 the parallelogram spanned by {Ae_1, Ae_2} has oriented area |A|, and that for n = 3 the three-dimensional parallelepiped spanned by {Ae_1, Ae_2, Ae_3} has oriented volume |A|. Feel free to refer to example 2.21 and to use the cross product formulas you learned in vector calculus. Note that if A is not invertible then the Ae_i are linearly dependent, hence span a space of dimension less than n and thus yield n-dimensional volume 0, so the geometrical picture is consistent with the results of the previous problem.

6. Let B be the standard basis for R^n, O the set of all bases related to B by a basis transformation with |A| > 0, and O' the set of all bases related to B by a transformation with |A| < 0.

a) Using what we've learned in the preceding problems, show that a basis transformation matrix A cannot have |A| = 0.

b) O is by definition an orientation. Show that O' is also an orientation, and conclude that R^n has exactly two orientations. Note that both O and O' contain orthonormal and non-orthonormal bases.

c) For what n is A = −I an orientation-changing transformation?

Chapter 3

Groups, Lie Groups and Lie Algebras

In physics we are often interested in how a particular object behaves under a particular set of transformations; for instance, in the classical physics literature one reads that a dipole moment transforms like a vector under rotations and a quadrupole moment like a (2nd rank) tensor, and that the electric and magnetic fields transform like vectors under rotations but like a 2nd rank antisymmetric tensor under Lorentz transformations. Similarly, in quantum mechanics one is often interested in the "spin" of a ket (which specifies how it transforms under rotations), or its behavior under the time-reversal or space inversion (parity) transformations. This knowledge is particularly useful as it leads to the many famous "selection rules" which greatly simplify evaluation of matrix elements. Transformations are also crucial in quantum mechanics because all physical observables can be considered as "infinitesimal generators" of particular transformations; e.g., the angular momentum operators "generate" rotations (as we discussed briefly in Problem 3 of Chapter 1) and the momentum operator "generates" translations. Like tensors, this material is usually treated in a somewhat ad-hoc way which facilitates computation but obscures the underlying mathematical structures. These underlying structures are known to mathematicians as group theory, Lie theory and representation theory, and are known collectively to physicists as just "group theory". Our aim in this second half of the book is to present the basic facts of this theory as well as its manifold applications to physics, both to clarify and unify the diverse phenomena in physics in which it is involved and also to provide a nice application of what we've learned about tensors. Before we discuss how particular objects transform, however, we must discuss the

transformations themselves. All the transformations we'll be interested in share a few common properties: First, the performance of two successive transformations is always equivalent to the performance of a single, third transformation (just think of rotations and how any two successive rotations about two axes can be considered as a single rotation about a third axis). Second, every transformation has an inverse which undoes it (in the case of rotations, the inverse to any given rotation is a rotation about the same axis in the opposite direction). Sets of transformations like these occur so often in mathematics and physics that they are given a name: groups.

3.1 Groups - Definition and Examples

The definition of a group that we’re about to give may appear somewhat abstract, but is meant to embody the most important properties of sets of transformations. After giving the definition and establishing some basic properties, we’ll proceed shortly to some concrete examples.

A group is a set G together with a multiplication operation, denoted ·, that satisfies the following axioms:

1. (Closure) g, h ∈ G implies g·h ∈ G.

2. (Associativity) For g, h, k ∈ G, g·(h·k) = (g·h)·k.

3. (Existence of the Identity) There exists an element e ∈ G such that g·e = e·g = g ∀ g ∈ G.

4. (Existence of Inverses) ∀ g ∈ G there exists an element h such that g·h = h·g = e.

In the case of transformations, the multiplication operation is obviously just composition; that is, if R and S are 3-d rotations, for instance, then R·S is just S followed by R. Note that we don’t necessarily have R·S = S ·R for all rotations R and S; if we did, we would say that the group is commutative (or abelian). If we do not, as is the case for rotations and most other groups of physical interest, we say the group is non-commutative (or non-abelian). There are several important properties of groups that follow almost immediately from the definition. Firstly, the identity is unique, for if e and f are both elements satisfying axiom 3 then we have

e = e·f   since f is an identity   (3.1)
  = f     since e is an identity.   (3.2)

Secondly, inverses are unique: Let g ∈ G and let h and k both be inverses of g. Then

g·h = e   (3.3)

so multiplying both sides on the left by k gives

k·(g·h) = k
(k·g)·h = k   by associativity
    e·h = k   since k is an inverse of g
      h = k.

We henceforth denote the unique inverse of an element g as g−1. Thirdly, if g ∈ G and h is merely a right inverse for g, i.e.

g·h = e, (3.4) then h is also a left inverse for g and is hence the unique inverse g−1. This is seen as follows:

h·g = (g^{-1}·g)·(h·g)
    = (g^{-1}·(g·h))·g   by associativity
    = (g^{-1}·e)·g       by (3.4)
    = g^{-1}·g
    = e,

so h = g^{-1}. The last few properties concern inverses and can be verified immediately by the reader:

(g^{-1})^{-1} = g   (3.5)
(g·h)^{-1} = h^{-1}·g^{-1}   (3.6)
e^{-1} = e.   (3.7)

Exercise 3.1 Prove the cancellation laws for groups, i.e. that

g1 · h = g2 · h ⇒ g1 = g2 (3.8)

h · g1 = h · g2 ⇒ g1 = g2. (3.9)

Before we get to some examples, we should note that Properties 2 and 3 in the definition above are usually obviously satisfied and rarely does one bother to check them explicitly. The important thing in showing that a set is a group is verifying that it is closed under multiplication and contains all its inverses. Also, as a matter of notation, from now on we will omit the · when writing a product, and simply write gh for g·h.

Example 3.1 SO(2) Special in two dimensions

As the reader may recall from Problem 1 of Chapter 2, SO(2) is defined to be the set of all real orthogonal 2 × 2 matrices with determinant equal to 1 (we remind the reader that "orthogonal" means A^{-1} = A^T). The reader will check in exercise 3.2 that SO(2) is an abelian group and that the general form of an element of SO(2) is

[ cos θ   −sin θ ]
[ sin θ    cos θ ].   (3.10)

The reader will recognize that such a matrix represents a counterclockwise rotation of θ radians in the x − y plane.
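A quick numerical aside (ours): one can check directly that matrices of the form (3.10) are orthogonal with unit determinant, and that multiplying two of them simply adds the angles, which makes the abelian property of SO(2) (see exercise 3.2) transparent.

```python
import numpy as np

def R(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.4, 1.1
print(np.allclose(R(a).T @ R(a), np.eye(2)))    # orthogonal
print(np.isclose(np.linalg.det(R(a)), 1.0))     # determinant 1
print(np.allclose(R(a) @ R(b), R(a + b)))       # angle addition
print(np.allclose(R(a) @ R(b), R(b) @ R(a)))    # abelian
```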

Exercise 3.2 Verify that SO(2) is a group. If the reader has done Problem 1, part d) of Chapter 2 then he has already done this. Also check that SO(2) is abelian. Next, consider an arbitrary matrix

A = [ a  b ]
    [ c  d ]   (3.11)

and impose the orthogonality condition, as well as |A| = 1. Show that (3.10) is the most general solution to these constraints.

Example 3.2 SO(3) Special orthogonal group in three dimensions

This group is defined, just as for SO(2), as the set of all orthogonal 3 × 3 matrices with determinant equal to one. The reader's proof that SO(2) is a group should carry over verbatim to show that SO(3) is a group. As we mentioned in example 2.20, orthogonal matrices with determinant equal to 1 actually represent rotations, and though this is obvious from (3.10) in the two-dimensional case, it is not as obvious in the three-dimensional case. We tend to think of three-dimensional rotations as transformations which preserve distances and fix a line in space (called the axis of rotation). Do elements of SO(3) do this? To find out, let g be the Euclidean metric

on R^3. The distance between two points x, y ∈ R^3 is then |x − y| ≡ √(g(x − y, x − y)), and the statement that R ∈ SO(3) preserves distances means that

|Rx − Ry| = |x − y|.   (3.12)

The reader will verify this in exercise 3.3 below. In fact, the reader will prove the stronger statement that

g(Rx, Ry) = g(x, y)  ∀ x, y ∈ R^3   (3.13)

which can actually be taken as the definition of an orthogonal matrix. These two definitions correspond to the active and passive viewpoints of transformations: our first definition of an orthogonal matrix as one which implements the basis change from one orthonormal basis to another (and hence satisfies R^{-1} = R^T) gives the passive viewpoint, while the second definition, given by (3.13), of orthogonal matrices as linear operators which preserve an inner product, gives the active viewpoint. Now what about the axis of rotation? Does each R ∈ SO(3) have one? An axis of rotation is a line of points that are unaffected by R, so this line actually comprises a one-dimensional eigenspace of R with eigenvalue 1. The question then becomes, does every R ∈ SO(3) have 1 as an eigenvalue? The answer is yes, as the reader will show in Problem 1. This fact is known as Euler's Theorem. Now that we have convinced ourselves that every R ∈ SO(3) is a rotation (and these are, in fact, all rotations), can we find a general form for R? As the reader may know from classical mechanics courses, an arbitrary rotation can be described in terms of the Euler angles, which tell us how to rotate a given orthonormal basis into another of the same orientation (or handedness). In classical mechanics texts^1, it is shown that this can be achieved by rotating the given axes by an angle φ around the original z-axis, then by an angle θ around the new x-axis, and finally by an angle ψ around the new z-axis. If we take the passive point of view, these three rotations take the form

[  cos ψ   sin ψ   0 ]   [ 1     0        0    ]   [  cos φ   sin φ   0 ]
[ −sin ψ   cos ψ   0 ],  [ 0   cos θ    sin θ  ],  [ −sin φ   cos φ   0 ]   (3.14)
[    0       0     1 ]   [ 0  −sin θ    cos θ  ]   [    0       0     1 ]

so multiplying them together gives a general form for R ∈ SO(3):

[ cos ψ cos φ − cos θ sin φ sin ψ    −sin ψ cos φ − cos θ sin φ cos ψ     sin θ sin φ ]
[ cos ψ sin φ + cos θ cos φ sin ψ    −sin φ sin ψ + cos θ cos φ cos ψ    −sin θ cos φ ]   (3.15)
[           sin θ sin ψ                          sin θ cos ψ                  cos θ   ]

^1 such as [Go]
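As a numerical sanity check (an aside of ours), one can multiply the three matrices of (3.14) for sample angles and confirm that the product is indeed an element of SO(3), and that it has 1 as an eigenvalue, as Euler's Theorem (Problem 1) asserts.

```python
import numpy as np

def Rz(a):
    return np.array([[np.cos(a), np.sin(a), 0],
                     [-np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), np.sin(a)],
                     [0, -np.sin(a), np.cos(a)]])

phi, theta, psi = 0.3, 1.2, -0.7
R = Rz(psi) @ Rx(theta) @ Rz(phi)

print(np.allclose(R.T @ R, np.eye(3)))                        # orthogonal
print(np.isclose(np.linalg.det(R), 1.0))                      # determinant 1
print(np.isclose(np.abs(np.linalg.eigvals(R) - 1).min(), 0))  # 1 is an eigenvalue
```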

Exercise 3.3 Assume the standard basis in R^3, so that g(x, y) = [x]^T [y]. Then use orthogonality to show that g(Rx, Ry) = g(x, y). Also, turn the definition around and show that any linear operator R which satisfies g(Rx, Ry) = g(x, y) ∀ x, y ∈ R^3 must satisfy R^{-1} = R^T.

Example 3.3 O(3) Orthogonal group in 3 dimensions

Another transformation of our three-dimensional space that is familiar from physics and that we've mentioned before is the inversion transformation, −I, which sends every vector to minus itself. If we try to add this to the rotations to get a group, we get the orthogonal group O(3), which is defined to be the set of all 3 × 3 orthogonal matrices R, this time without the condition that |R| = 1. Of course, as we pointed out in example 2.20, the orthogonality condition implies that |R| = ±1,^2 so in going from SO(3) to O(3) we are just adding all the orthogonal matrices with |R| = −1 (which of course includes −I). The elements with |R| = 1 are sometimes referred to as proper rotations, and the elements with |R| = −1 as the improper rotations. The proper and improper rotations are disconnected, in the sense that one cannot continuously go from matrices with |R| = 1 to matrices with |R| = −1. This is represented schematically above. One can obtain any of the improper rotations, however, by multiplying a proper rotation by −I; this is easy to see, for if R is an improper rotation then −R is a proper rotation which yields R when multiplied by −I.

Example 3.4 SU(2) Special unitary group in two complex dimensions

This group is defined to be the set of all 2 × 2 complex matrices A which satisfy |A| = 1 and

A† = A^{-1}.   (3.16)

The reader will verify below that SU(2) is a group. The reader will also verify that a generic element of SU(2) looks like

[  α   β ]
[ −β̄   ᾱ ]     α, β ∈ C,  |α|² + |β|² = 1.   (3.17)

^2 This fact can be understood geometrically: since orthogonal matrices preserve distances and angles, they should preserve volumes as well, and as we learned in Problem 5 of Chapter 2 the determinant measures how volume changes under the action of a linear operator, so any volume-preserving operator should have determinant ±1, where the sign is determined by whether or not the orientation is reversed.

We could also use three real parameters with no conditions rather than two complex parameters with a constraint; one such parametrization is

[ e^{i(ψ+φ)/2} cos(θ/2)      i e^{i(ψ−φ)/2} sin(θ/2)  ]
[ i e^{−i(ψ−φ)/2} sin(θ/2)   e^{−i(ψ+φ)/2} cos(θ/2)   ]   (3.18)

where we have used the same symbols for our parameters as we did for the Euler angles. This is no accident, as there is a close relationship between SU(2) and SO(3), which we will discuss in detail later. This relationship underlies the appearance of SU(2) in quantum mechanics, where rotations are implemented on spin 1/2 particles by elements of SU(2). The reader has already checked in exercise 2.5 that matrices satisfying (3.16) represent transformations between orthonormal bases of a complex vector space. As with the orthogonal groups, however, there is another way to interpret the condition (3.16); we could define a unitary matrix as one which, when interpreted as a linear operator on C^n, preserves the Hermitian scalar product on C^n, in the sense of (3.20) below. The reader will check that this condition is equivalent to (3.16), and so we have a passive and an active view of unitary transformations, just as we did for orthogonal transformations.

Exercise 3.4 Verify that SU(2) satisfies the group axioms. Also, consider an arbitrary complex matrix

[ α  β ]
[ γ  δ ]   (3.19)

and impose the unit determinant and unitarity conditions. Show that (3.17) is the most general solution to these constraints.

Exercise 3.5 Let U ∈ T(C^n) satisfy

(Uv, Uw) = (v, w)  ∀ v, w ∈ C^n.   (3.20)

Assume the standard basis on C^n, and show that this condition is equivalent to

[U]† = [U]^{-1}.   (3.21)
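Along the same lines as the earlier checks, here is a brief sketch (ours) confirming numerically that the parametrization (3.18) really does produce unitary matrices with unit determinant, and that they have the form (3.17).

```python
import numpy as np

def su2(psi, theta, phi):
    """The matrix (3.18), written out entry by entry."""
    a = np.exp(1j * (psi + phi) / 2) * np.cos(theta / 2)
    b = 1j * np.exp(1j * (psi - phi) / 2) * np.sin(theta / 2)
    c = 1j * np.exp(-1j * (psi - phi) / 2) * np.sin(theta / 2)
    d = np.exp(-1j * (psi + phi) / 2) * np.cos(theta / 2)
    return np.array([[a, b], [c, d]])

U = su2(0.3, 1.2, -0.7)
print(np.allclose(U.conj().T @ U, np.eye(2)))      # unitary
print(np.isclose(np.linalg.det(U), 1.0))           # determinant 1
# the form (3.17): the lower row is (-conj(beta), conj(alpha))
print(np.isclose(U[1, 0], -np.conj(U[0, 1])), np.isclose(U[1, 1], np.conj(U[0, 0])))
```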

Example 3.5 SO(3, 1)o The restricted Lorentz group

The restricted Lorentz group SO(3,1)_o is defined to be the set of all 4 × 4 real matrices A satisfying |A| = 1, A_{44} > 0, and

[ 1  0  0   0 ]        [ 1  0  0   0 ]
[ 0  1  0   0 ]  = A^T [ 0  1  0   0 ] A.   (3.22)
[ 0  0  1   0 ]        [ 0  0  1   0 ]
[ 0  0  0  −1 ]        [ 0  0  0  −1 ]

The reader will verify in problem 2 below that SO(3,1)_o is a group. Where does its definition come from? Consider R^4 with the Minkowski metric. As noted in example 2.6, any matrix satisfying (3.22) represents a transformation from one orthonormal basis to another, so the component form of the Minkowski metric is unchanged. As Einstein showed, this condition is precisely the one that guarantees the invariance of the speed of light under the change of inertial reference frame described by A. What about the other conditions? A_{44} > 0 just says that the transformation doesn't reverse the direction of time (so that clocks in the new coordinates aren't running backwards), and this, together with |A| = 1, implies that A does not reverse the orientation of the space axes. Thus, SO(3,1)_o represents all the changes from one inertial reference frame to another that don't reverse the orientation of time or space. These transformations are known as restricted Lorentz transformations. A transformation between two reference frames that have parallel axes but may be moving relative to each other is called a pure Lorentz transformation or boost. What does a generic restricted Lorentz transformation look like? As with the previous examples, there is some arbitrariness in our choice of description, but we can proceed as follows: any restricted Lorentz transformation can be written as the product of a rotation and a boost^3, where a rotation in this context looks like

R' = [ R   0⃗ ]
     [ 0⃗   1 ]   (3.23)

where 0⃗ stands for a column (on the right) or row (on the bottom) of three zeros and R ∈ SO(3). The reader can check that such a matrix is a Lorentz transformation, so we see that SO(3,1)_o subsumes SO(3). As for the form of an arbitrary boost, the reader will show later in this book that

L = [ u_x²(cosh u − 1)/u² + 1     u_x u_y (cosh u − 1)/u²      u_x u_z (cosh u − 1)/u²      (u_x/u) sinh u ]
    [ u_y u_x (cosh u − 1)/u²     u_y²(cosh u − 1)/u² + 1      u_y u_z (cosh u − 1)/u²      (u_y/u) sinh u ]   (3.24)
    [ u_z u_x (cosh u − 1)/u²     u_z u_y (cosh u − 1)/u²      u_z²(cosh u − 1)/u² + 1      (u_z/u) sinh u ]
    [ (u_x/u) sinh u              (u_y/u) sinh u               (u_z/u) sinh u                cosh u        ]

is an arbitrary boost, where u⃗ (with u ≡ |u⃗|) is a quantity known as the rapidity. The reader will verify below that u⃗ is related to the relative velocity β⃗ between the frames by

β⃗ = (tanh u / u) u⃗.   (3.25)

^3 This should be intuitively clear; for a discussion see [Go].

and that in terms of β⃗ and γ ≡ (1 − β²)^{−1/2} = cosh u, β ≡ |β⃗|, L takes the form

 2  βx(γ−1) βxβy(γ−1) βxβz(γ−1) β2 + 1 β2 β2 βxγ 2  βyβx(γ−1) βy(γ−1) βyβz(γ−1)   β2 β2 + 1 β2 βyγ  L =  2  . (3.26)  βzβx(γ−1) βzβy(γ−1) βz (γ−1) + 1 β γ   β2 β2 β2 z  βxγ βyγ βzγ γ Note that L has three arbitrary parameters so that our arbitrary restricted Lorentz transformation R0L has six parameters total.

Exercise 3.6 Use L as a passive transformation to obtain new coordinates (x', y', z', t') from the old ones by

[ x' ]     [ x ]
[ y' ] = L [ y ].   (3.27)
[ z' ]     [ z ]
[ t' ]     [ t ]

Show that the origin of the primed frame, defined by x' = y' = z' = 0, moves with velocity (3.25) relative to the old frame, and substitute this into (3.24) to get (3.26).

Example 3.6 O(3, 1) The extended Lorentz group

In the previous example we restricted our changes of inertial reference frame to those which preserved the orientation of space and time. This is sufficient in classical mechanics, but in quantum mechanics we are often interested in the effects of space and time inversion on the various Hilbert spaces we're working with. If we add spatial inversion, also called parity and represented by the matrix

P = [ −1   0   0   0 ]
    [  0  −1   0   0 ]
    [  0   0  −1   0 ]   (3.28)
    [  0   0   0   1 ]

as well as time-reversal, represented by

T = [ 1  0  0   0 ]
    [ 0  1  0   0 ]
    [ 0  0  1   0 ]   (3.29)
    [ 0  0  0  −1 ]

to the restricted Lorentz group, we get the improper or extended Lorentz group O(3,1), which is defined by (3.22) with no restrictions on the determinant or A_{44}.

The reader should verify that P, T ∈ O(3,1), but P, T ∉ SO(3,1)_o. In fact, |P| = |T| = −1, which is no accident; as in the case of the orthogonal group, the defining equation (3.22) restricts the determinant, and the reader can check that (3.22) implies that |A| = ±1. In this case, however, we have four disconnected components instead of two! Obviously those matrices with |A| = 1 must be disconnected from those with |A| = −1, but those which reverse the orientation of the space axes must also be disconnected from those which do not, and those which reverse the orientation of time must be disconnected from those which do not. This is represented schematically above. Note that, as in the case of O(3), multiplication by the transformations P and T takes us to and from the various different components.

Example 3.7 SL(2, C) Special linear group in two complex dimensions

SL(2, C) is defined to be the set of all 2 × 2 complex matrices A with |A| = 1. By now it should be obvious that this set is a group. The general form of A ∈ SL(2, C) is

A = [ a  b ]
    [ c  d ]     a, b, c, d ∈ C,  ad − bc = 1.   (3.30)

The unit determinant constraint means that A is determined by three complex parameters, or six real parameters, just as for SO(3,1)_o. This is no coincidence; in fact, SL(2, C) bears the same relationship to SO(3,1)_o as SU(2) bears to SO(3), in that SL(2, C) implements restricted Lorentz transformations on spin 1/2 particles! We will discuss this in detail later, where we will also show that an arbitrary boost is implemented on a spin 1/2 particle by a matrix P̃ ∈ SL(2, C) of the form

[ cosh(u/2) − (u_z/u) sinh(u/2)         −(1/u)(u_x − i u_y) sinh(u/2)  ]
[ −(1/u)(u_x + i u_y) sinh(u/2)          cosh(u/2) + (u_z/u) sinh(u/2) ].   (3.31)

This, together with the facts that an arbitrary rotation is implemented by an SU(2) matrix R̃ of the form (3.18) (note that SU(2) ⊂ SL(2, C), which makes sense since SO(3,1)_o subsumes SO(3)) and that any Lorentz transformation can be written as a product of a boost and a rotation, yields the general form R̃P̃ for an element of SL(2, C) in terms of the same parameters we used for SO(3,1)_o.

Example 3.8 Z2 The group with two elements

Consider the set Z_2 ≡ {e, g} with associative multiplication law given by g² = e (note that the rest of the possible products are determined by the stipulation that e is the identity). The reader can easily check that this is a group, in fact an abelian

group. Though this group is defined in a much more abstract way than the groups encountered above, we'll see in the next section that concrete representations of this group pop up in a few places in physics.

Example 3.9 Sn The on n letters

This group does not usually occur explicitly in physics but is intimately tied to permutation symmetry, the physics of identical particles, and much of the mathematics we discussed in section 2.7. The symmetric group on n letters (also known as the permutation group), denoted S_n, is defined to be the set of all 1-1 and onto maps of the set {1, 2, ..., n} to itself, where the product is just the composition of maps. The maps are known as permutations. The reader should check that any composition of permutations is again a permutation and that permutations are invertible, so that S_n is a group. This verification is simple, and just relies on the fact that permutations are, by definition, 1-1 and onto. Any permutation σ is specified by the n numbers σ(i), i = 1 ... n, and can conveniently be notated as

[   1      2     ···     n   ]
[ σ(1)   σ(2)    ···   σ(n)  ].   (3.32)

In such a scheme, the identity in S_3 would just look like

[ 1  2  3 ]
[ 1  2  3 ]   (3.33)

while the cyclic permutation σ_1 given by 1 → 2, 2 → 3, 3 → 1 would look like

σ_1 = [ 1  2  3 ]
      [ 2  3  1 ]   (3.34)

and the transposition σ_2 which switches 1 and 2 and leaves 3 alone would look like

σ_2 = [ 1  2  3 ]
      [ 2  1  3 ].   (3.35)

How do we take products of permutations? Well, the product σ1 · σ2 would take on the following values:

(σ1 · σ2)(1) = σ1(σ2(1)) = σ1(2) = 3 (3.36)

(σ1 · σ2)(2) = σ1(1) = 2 (3.37)

(σ_1 · σ_2)(3) = σ_1(3) = 1   (3.38)

so we have

σ_1 · σ_2 = [ 1  2  3 ] · [ 1  2  3 ] = [ 1  2  3 ].   (3.39)
            [ 2  3  1 ]   [ 2  1  3 ]   [ 3  2  1 ]

The reader should take the time to inspect (3.39) and understand how to take such a product of permutations without having to write out (3.36)-(3.38). Though a proper discussion of the applications of S_n to physics must wait until the next section, we can point out here that if we have an n-fold tensor product V ⊗ V ⊗ ... ⊗ V of a vector space V, then S_n acts on this by

σ(v_1 ⊗ v_2 ⊗ ... ⊗ v_n) = v_{σ(1)} ⊗ v_{σ(2)} ⊗ ... ⊗ v_{σ(n)}.   (3.40)

In the case of n identical particles in quantum mechanics, where the total Hilbert space is the n-fold tensor product of the single-particle Hilbert space H, this action effectively interchanges particles, and we will later restate the symmetrization postulate from example 2.18 in terms of this action of S_n on H ⊗ H ⊗ ... ⊗ H.
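For concreteness, here is a small sketch (ours) that encodes permutations as tuples of their values, composes them as in (3.39), and brute-force checks the group axioms for S_3.

```python
from itertools import permutations

def compose(s1, s2):
    """(s1 . s2)(i) = s1(s2(i)); permutations are stored as tuples of 1-based values."""
    return tuple(s1[s2[i] - 1] for i in range(len(s2)))

sigma1 = (2, 3, 1)   # 1 -> 2, 2 -> 3, 3 -> 1, i.e. (3.34)
sigma2 = (2, 1, 3)   # swaps 1 and 2, i.e. (3.35)
print(compose(sigma1, sigma2))   # (3, 2, 1), as in (3.39)

S3 = list(permutations((1, 2, 3)))
identity = (1, 2, 3)
closed = all(compose(a, b) in S3 for a in S3 for b in S3)
has_inverses = all(any(compose(a, b) == identity for b in S3) for a in S3)
print(closed, has_inverses)      # True True
```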

3.2 Homomorphism and Isomorphism

In the last section we claimed that there is a close relationship between SU(2) and SO(3), as well as between SL(2, C) and SO(3, 1)o. We now make this relationship precise, and show that a similar relationship exists between Sn and Z2. We will also define what it means for two groups to be ‘the same’, which will then tie into our abstract discussion of Z2 in the last section. Given two groups G and H, a homomorphism from G to H is a map φ : G → H such that φ(g1g2) = φ(g1)φ(g2) ∀ g1, g2 ∈ G. (3.41) Note that the product in the left hand side of (3.41) takes place in G, whereas the product on the right hand side takes place in H. A homomorphism should be thought of as a map from one group to another which preserves the multiplicative structure. Note that φ need not be 1-1 or onto; if it is onto then φ is said to be a homomorphism onto H, and if in addition it is 1-1 then we say φ is an isomorphism. If φ is an isomorphism then it is invertible and thus sets up a one-to-one correspondence which preserves the group structure, so we regard G and H as ‘the same’ group, just with different labels for the elements, with φ providing the dictionary between the different labeling schemes.

Exercise 3.7 Let φ : G → H be a homomorphism, and let e be the identity in G and e' the identity in H. Show that

φ(e) = e'   (3.42)
φ(g^{-1}) = φ(g)^{-1}  ∀ g ∈ G.   (3.43)

Suppose φ is a homomorphism but not an isomorphism. Is there a way to quantify how far it is from being an isomorphism? Define the kernel of φ to be the set K ≡ {g ∈ G | φ(g) = e'}, where e' is the identity in H. In other words, K is the set of all elements of G that get sent to e' under φ. If φ is an isomorphism, then K = {e}. Also, if we have φ(g_1) = φ(g_2) = h, then

φ(g_1 g_2^{-1}) = φ(g_1) φ(g_2)^{-1} = h h^{-1} = e'   (3.44)

so g_1 g_2^{-1} is in the kernel of φ, i.e. g_1 g_2^{-1} = k for some k ∈ K. Multiplying on the right by g_2 then gives g_1 = k g_2, so we see that any two elements of G that give the same element of H under φ are related by left multiplication by an element of K. Conversely, if we are given g ∈ G and φ(g) = h, then for all k ∈ K, φ(kg) = φ(k)φ(g) = e'φ(g) = φ(g) = h, so if we define Kg ≡ {kg | k ∈ K}, then Kg are precisely those elements (no more, and no less) of G which get sent to h. Thus the size of K tells us how far φ is from being an isomorphism, and the elements of K tell us exactly which elements of G will map to a specific element h ∈ H. Homomorphisms and isomorphisms are ubiquitous in mathematics and occur frequently in physics, as we'll see in the examples below.
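A concrete example of this machinery, foreshadowing the relationship between S_n and Z_2 mentioned at the start of this section: the sign of a permutation gives a homomorphism from S_n onto {+1, −1} (a copy of Z_2), whose kernel is the set of even permutations. The sketch below (ours) verifies the homomorphism property for all of S_3 by brute force.

```python
from itertools import permutations

def compose(s1, s2):
    return tuple(s1[s2[i] - 1] for i in range(len(s2)))

def sgn(s):
    """+1 for an even permutation, -1 for an odd one (count inversions)."""
    sign = 1
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

S3 = list(permutations((1, 2, 3)))
print(all(sgn(compose(a, b)) == sgn(a) * sgn(b) for a in S3 for b in S3))   # True
kernel = [s for s in S3 if sgn(s) == 1]
print(len(kernel), len(S3))   # 3 6: the kernel is half of S_3
```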

Example 3.10 SU(2) and SO(3)

Chapter 3 Problems

1. In this problem we prove Euler's theorem that any R ∈ SO(3) has an eigenvector with eigenvalue 1. Recall that if λ is an eigenvalue of R then there exists a nonzero vector v such that Rv = λv, or (R − λI)v = 0. This means that R − λI is not invertible, which by Problem 4 of Chapter 2 means that |R − λI| = 0. The same problem shows that the converse is true, i.e. that if |R − λI| = 0 then R − λI is not invertible, hence there must be some vector v such that (R − λI)v = 0, or Rv = λv. So, we can prove Euler's theorem if we can show that

|R − I| = 0.   (3.45)

Do this using the orthogonality condition and properties of the determinant. You should not have to work in components.

2. Show that SO(3,1)_o is a group. Remember that SO(3,1)_o is defined by 3 conditions: |A| = 1, A_{44} > 0, and (3.22). Proceed as follows:

a) Show that I ∈ SO(3,1)_o.

b) Show that if A ∈ SO(3,1)_o, then A^{-1} ∈ SO(3,1)_o. Do this as follows:

   i) Verify that |A^{-1}| = 1.

   ii) Show that A^{-1} satisfies (3.22). Use this to deduce that A^T does also.

   iii) Write out the 44 component of (3.22) for both A and A^{-1}. You should get equations of the form

a_0² = 1 + a⃗²   (3.46)
b_0² = 1 + b⃗²   (3.47)

where b_0 = (A^{-1})_{44}. Clearly this implies b_0 ≤ −1 or b_0 ≥ 1. Now, write out the 44 component of the equation AA^{-1} = I. You should find

a_0 b_0 = 1 − a⃗ · b⃗.   (3.48)

If we let a ≡ |a⃗|, b ≡ |b⃗| then the last equation implies

1 − ab < a0b0 < 1 + ab. (3.49)

Assume b_0 ≤ −1 and use (3.47) to derive a contradiction to (3.49), hence showing that b_0 = (A^{-1})_{44} > 0, and that A^{-1} ∈ SO(3,1)_o.

c) Show that if A, B ∈ SO(3,1)_o, then AB ∈ SO(3,1)_o. You may have to do some inequality manipulation to show that (AB)_{44} > 0.

Dirac Dictionary

We summarize here all of the translations given in the text between Dirac notation and standard mathematical notation.

Standard Notation                              Dirac Notation

Vector ψ ∈ H                                    |ψ⟩
Dual vector L(ψ)                                ⟨ψ|
Inner product (ψ, φ)                            ⟨ψ|φ⟩
A(ψ), A ∈ T(H)                                  A|ψ⟩
(ψ, Aφ)                                         ⟨ψ|A|φ⟩
Σ_{i,j} T_i^j e^i ⊗ e_j                         Σ_{i,j} T_{ij} |j⟩⟨i|
e_i ⊗ e_j                                       |i⟩|j⟩ or |i, j⟩

Bibliography

[HK] K. Hoffman and D. Kunze, Linear Algebra, 2nd ed., Prentice Hall, 1971

[ST] S. Sternberg, Group Theory and Physics, Princeton University Press, 1994

[R] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw Hill, 1976

[SA] J.J. Sakurai, Modern Quantum Mechanics, 2nd ed., Addison Wesley Longman, 1994

[G] S. Gasiorowicz, Quantum Physics, 2nd ed., John Wiley and Sons, 1996

[F] G. Folland, Real Analysis, Modern Techniques and Their Applications, 2nd ed., Wiley Interscience, 1999

[SC] B. Schutz, Geometrical Methods of Mathematical Physics, Cambridge University Press, 1980

[W] F. Warner, Foundations of Differentiable Manifolds and Lie Groups, Springer, 1979

[Ha] B. Hall, Lie Groups, Lie Algebras and Representations: An Elementary Introduction, Springer-Verlag, 2003

[Go] H. Goldstein, Classical Mechanics, 2nd ed., Addison-Wesley, 1980

[He] I. Herstein, Topics in Algebra, 2nd ed., Wiley, 1975
