<<

NOTES ON TENSOR PRODUCTS AND THE EXTERIOR ALGEBRA

PATRICK BROSNAN

1. INTRODUCTION

These are a set of notes which attempt to introduce tensor products and exterior algebras in an abstract way: using the various universal properties. They are meant to provide details to the first chapter in the book by Darling on differential forms [Dar94]. Here is a list of other texts that deal with the subject in one way or another:
(i) Rudin. Principles of Mathematical Analysis [Rud76]. Constructs the exterior algebra in a completely ad hoc (and elementary) way, and uses it to prove Stokes’ theorem.
(ii) Warner. Foundations of Differentiable Manifolds and Lie Groups [War83]. Excellent text in my opinion. The part on the exterior algebra is similar to what is in these notes.
(iii) Bourbaki. Algebra. Chapters 1–3 [Bou98]. Similar to Warner but more advanced. My proof of Theorem 55 is modeled on the proof in Bourbaki. One nice thing about this treatment (which you hopefully see from the consequences of Theorem 55) is that it is independent of the existence of determinants. Actually, the exterior algebra is really just a more sophisticated way of packaging the theory of determinants (and minors). Theorem 55 makes it possible to prove that determinants exist without knowing it a priori. On the other hand, Bourbaki is part of a long encyclopedic text which aims to do everything in great generality. So that can make it unsuitable for reading as a textbook.
There is also a book by Spivak called Calculus on Manifolds, which covers differential forms and Stokes’ theorem [Spi65]. There are a lot of nice things about that book, and I have to admit that I basically learned Stokes’ theorem from it along with Rudin’s book. However, there is a major mistake in Spivak’s text: he never puts any differentiability assumption on chains. The problem is really on page 101 in the second display, where he nonsensically defines the integral of a form over a singular chain. Also, Spivak’s book works with the exterior algebra in a different way from the other texts listed above (including [Dar94]). In essence, he defines only the dual of the exterior algebra and not the exterior algebra itself. For these reasons, I don’t really recommend Spivak.

2. REVIEW

Here I’ll review some things from linear algebra and abstract algebra: Math 405 and 403 at UMD. All vector spaces will be real vector spaces in these notes.

2.1. Maps of sets. Suppose S and T are sets. I write Maps(S, T) := { f : S → T : f is a function}. In other words, Maps(S, T) is just the set of all functions (or maps) from S to T. Sometimes we write T^S for Maps(S, T) for short. This is because, if S and T are finite, then the cardinality of Maps(S, T) is |Maps(S, T)| = |T|^|S|.
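For instance (a quick sanity check, not from [Dar94]): if S = {1, 2} and T = {a, b, c}, then a map f ∈ Maps(S, T) is determined by the pair ( f (1), f (2)), and there are 3 · 3 = 9 = |T|^|S| such pairs.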

2.2. Vector space structure. Suppose now that X is a vector space over R. Then Maps(S, X) has the structure of a vector space as follows:

2.2.1. Addition. Given f , g ∈ Maps(S, X), we define ( f + g)(s) := f (s) + g(s), for all s ∈ S.

2.2.2. Scalar multiplication. Given f ∈ Maps(S, X) and x ∈ R, we define (x f )(s) = x f (s), for all s ∈ S.

It is not hard (but just slightly tedious) to verify that, with this structure, Maps(S, X) is a vector space. The 0 element is the 0-function, that is, the function 0 with 0(s) = 0 for all s ∈ S. I’ll skip this.

2.3. Linear transformations. If V and W are vector spaces, then I write Lin(V, W) for the set of all linear transformations from V to W. It is not hard to see that Lin(V, W) is a subspace of Maps(V, W).

2.4. Free vector space. Suppose S is a set. We write Free S = { f ∈ R^S : f (s) = 0 for all but finitely many s ∈ S}. For short, we sometimes write R[S] for Free S. It is easy to see that Free S is a subspace of Maps(S, R). For each s ∈ S, write δs for the function in R^S given by δs(t) = 1 if s = t, and δs(t) = 0 if s ≠ t.

Obviously, δs ∈ Free S.

Claim 1. The set {δs}s∈S is a basis of Free S. In other words, every f ∈ Free S can be written uniquely as ∑s∈S as δs for some as ∈ R.

Proof. Suppose f ∈ Free S. Then the sum ∑s∈S f (s)δs has only finitely many terms. So the sum makes sense. Moreover, it is easy to see that f = ∑s∈S f (s)δs and that this presentation of f as a linear combination of the δs’s is unique. 
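As a quick example (just for orientation): if S = {a, b}, then Free S = Maps(S, R) consists of all combinations xδa + yδb with x, y ∈ R, so Free S ≅ R^2. For infinite S the finiteness condition matters: Free N is the space of sequences with only finitely many nonzero terms, a proper subspace of Maps(N, R).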

We write δ : S → Free S for the map s 7→ δs.

Theorem 2 (Universal property of Free S). Suppose X is a vector space. For T ∈ Maps(Free S, X), write δ∗T : S → X for the map (δ∗T)(s) = T(δs). Then the map δ∗ : Lin(Free S, X) → Maps(S, X) given by T 7→ δ∗T is an isomorphism of vector spaces.

Proof. It’s very easy to check that δ∗ is a linear transformation of vector spaces. So we leave that to the reader. To see that δ∗ is 1-1, note that δ∗T = 0 ⇒ T(δs) = 0 for all s ∈ S. But that implies that T( f ) = 0 for all f ∈ Free S since {δs} is a basis of Free S. So T = 0. To see that δ∗ is onto, suppose Q : S → X is a map. For f ∈ Free S, set T( f ) = ∑_{s : f (s) ≠ 0} f (s)Q(s). This defines a linear transformation T ∈ Lin(Free S, X), and it is clear that T(δs) = Q(s) for all s ∈ S. So (δ∗T)(s) = T(δs) = Q(s) for all s ∈ S. Therefore, δ∗T = Q. 
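In practice, Theorem 2 says that to define a linear map out of Free S it is enough to say where each δs goes. For example (an illustration, not from the text): with S = {a, b} and X = R, the assignment Q(a) = 2, Q(b) = −3 extends uniquely to T ∈ Lin(Free S, R) with T(xδa + yδb) = 2x − 3y.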

Remark 3. Since the map δ : S → Free S given by s 7→ δs is clearly one-one, we sometimes abuse notation and identify each element s ∈ S with the corresponding element δs of Free S. In this way, we can think of S as a subset of Free S. Then Free S is a real vector space with basis S.

2.5. Surjective linear maps. Suppose T : V → W is a map of vector spaces and X is another vector space. We have a map T∗ : Lin(W, X) → Lin(V, X) given by S 7→ S ◦ T. Claim 4. The map T∗ defined above is a linear transformation.

Proof. Obvious. Skip it. 

Claim 5. Suppose T : V → W is onto. Then T∗ : Lin(W, X) → Lin(V, X) is one-one for any vector space X.

Proof. Suppose S ∈ ker T∗ and that T is onto. Then S ◦ T = 0. So S(T(v)) = 0 for all v ∈ V. But T is onto, so this implies that S(w) = 0 for all w ∈ W. Therefore S = 0. 

Remark 6. The converse to Claim 5 also holds, but I won’t use it so I won’t prove it. (For infinite dimensional vector spaces, I believe it requires the axiom of choice.)

Theorem 7. Suppose T ∈ Lin(V, W) is a linear transformation of vector spaces which is onto with kernel K, and suppose that X is another vector space. Then T∗ is one-one with Im(T∗ : Lin(W, X) → Lin(V, X)) = {A ∈ Lin(V, X) : K ⊂ ker A}. In particular, for every A ∈ Lin(V, X) containing K in its kernel, there exists a unique B ∈ Lin(W, X) such that A = B ◦ T.

Proof. We’ve already seen that T∗ is one-one. Suppose A ∈ Im T∗. Then there exists B ∈ Lin(W, X) such that A = B ◦ T. But then, for k ∈ K, Ak = B(T(k)) = B(0) = 0. So K ⊂ ker A. On the other hand, suppose A ∈ Lin(V, X) with K ⊂ ker A. Write U := {(Tv, Av) ∈ W × X : v ∈ V}. It is easy to see that U is a subspace of W × X as it is just the image of the map V → W × X given by v 7→ (Tv, Av). I claim that for each w ∈ W there is a unique x ∈ X such that (w, x) ∈ U. For uniqueness, suppose (w, x), (w, x′) ∈ U. Then there exist v, v′ ∈ V such that (w, x) = (Tv, Av) and (w, x′) = (Tv′, Av′). So w = Tv = Tv′. Therefore, v′ − v ∈ ker T = K. So v′ − v ∈ ker A also by our hypotheses. Therefore x = Av = Av′ = x′. To show existence, note that T is onto. So, for every w ∈ W, there exists a v ∈ V with Tv = w. Then (w, Av) = (Tv, Av) ∈ U. Now let B : W → X be the map sending w ∈ W to the unique x ∈ X with (w, x) ∈ U. It is easy to check that B is linear, and it is also more or less obvious that B ◦ T = A. 

2.6. Quotients. Suppose V is a vector space. If S and T are subsets of V, then we write S + T := {s + t : s ∈ S, t ∈ T}. Similarly, for v ∈ V and T ⊂ V, we write v + T for {v} + T when no confusion can arise. If K is a subspace of V, then a coset of K in V is a subset of V of the form v + K with v ∈ V. Sometimes we write [v] for v + K. We write

V/K := {[v]}v∈V for the set of cosets of K. We have a map πK : V → V/K given by πK(v) = [v]. This map is clearly onto. Sometimes I’ll write π for πK when K is fixed. Proposition 8. With the above notation, we have [v] = [w] ⇔ v − w ∈ K.

Proof. It’s easy to see that v + K = w + K ⇔ (v − w) + K = K ⇔ v − w ∈ K. 

Theorem 9. There exists a unique structure of a real vector space on V/K such that the map π : V → V/K is a linear transformation. Moreover, this linear transformation is onto with kernel K.

Sketch. (Uniqueness) Suppose π : V → V/K is a linear transformation. Then, for [v], [w] ∈ V/K we have [v] + [w] = π(v) + π(w) = π(v + w) = [v + w]. And, for x ∈ R, we have x[v] = xπ(v) = π(xv) = [xv].
(Existence) We want to define addition and scalar multiplication on V/K by the formulas above in the proof of uniqueness. That is, for [v], [w] ∈ V/K and x ∈ R, we want to define [v] + [w] = [v + w]; x[v] = [xv]. The thing to check though is that they are well defined. In other words, we have to show that, if [v] = [v′] and [w] = [w′], then [xv] = [xv′] and [v + w] = [v′ + w′]. This is easy, though: to check the first statement, note that [v] = [v′] ⇒ v′ − v ∈ K ⇒ xv′ − xv ∈ K ⇒ [xv] = [xv′]. And, similarly, if [v] = [v′] and [w] = [w′], then v′ − v, w′ − w ∈ K. So v′ + w′ − v − w ∈ K. And therefore, [v + w] = [v′ + w′].
(Vector space axioms) Now that addition and scalar multiplication are defined, we have to prove that the vector space axioms hold. This is all easy and I’ll leave it to the reader. One thing to note is that the 0 element of V/K is 0 = [0] = 0 + K = k + K = [k] for any k ∈ K. From that it’s clear that the kernel of π is exactly K. 
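A concrete example (standard, and only here for orientation): take V = R^2 and K = {(x, 0) : x ∈ R}. By Proposition 8, [(x, y)] = [(x′, y′)] exactly when y = y′, so the cosets are the horizontal lines in the plane, and [(x, y)] 7→ y is an isomorphism V/K ≅ R.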

2.7. . If V is a vector space, we write V∗ = Lin(V, R). Then if T ∈ Lin(V, W) we get a map T∗ : W∗ → V∗ as in §4. The vector space V∗ is called the dual of V. The map T∗ is called the or sometimes the adjoint of T. n ∗ n If V has a finite basis {vi}i=1 then V has a particular finite basis called the {λi}i=1. This basis has the property that λj(vi) = δij where δij is the Kronecker δ: ( 1, i = j; δij = 0, else. ∗ ∗ Sometimes people write vi for λi, but this is slightly dangerous notation because vi depends not just on vi but on the whole basis {vi}. Still, it’s convenient so everyone does it. Also, some people write (λ, v) for λ(v) when λ ∈ V∗ and v ∈ V. In this notation, the transpose of a linear operator T : V → W satisfies the formula (λ, Tv) = (T∗λ, v) for λ ∈ W∗ and v ∈ V.

2.8. Double duality. There is a very nice map cV : V → V∗∗ defined as follows. For v ∈ V and λ ∈ V∗, c(v)(λ) = λ(v). If T : V → W is a linear transformation, then we set c(T) = T∗∗. It is then easy to see that c(idV) = idV∗∗ and that c(ST) = c(S)c(T) when ST makes sense.

Proposition 10. Suppose dim V < ∞. Then cV : V → V∗∗ is an isomorphism.

Sketch. Take a basis {vi}_{i=1}^n for V and a dual basis {λi}_{i=1}^n. Then it is not too hard to see that {c(vi)} is the dual basis of {λi}. (This is in most linear algebra books used in 405. For example Hoffman-Kunze or Friedberg-Insel-Spence.) 

Remark 11. Often people tacitly identify V with V∗∗ using the isomorphism cV (when V is finite dimensional). In other words, sometimes people think of them as basically the same thing. This turns out to be not too dangerous and it can also make the notation a lot easier.

3. TENSOR PRODUCTS

3.1. Multilinear maps. Suppose V1,..., Vn and X are all vector spaces. A map T : V1 × · · · × Vn → X is said to be multilinear if it is linear in each variable when the other variables are fixed. In other words, for each i, each x, y ∈ R and each set of vectors vi, wi ∈ Vi we have

T(v1,..., vi−1, xvi + ywi, vi+1,..., vn) = xT(v1,..., vi−1, vi, vi+1,..., vn) + yT(v1,..., vi−1, wi, vi+1,..., vn).

We write Mult(V1,..., Vn; X) for the set of all multilinear maps from V1 × · · · × Vn to X. It is (very) easy to see that Mult(V1,..., Vn; X) is a subspace of Maps(V1 × · · · × Vn, X). In the rare cases where it is helpful to emphasize the n, we will write Multn(V1,..., Vn; X) for Mult(V1,..., Vn; X). We can also call a map in this space an n-linear map.
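Some standard examples to keep in mind (not from [Dar94], just for concreteness): the dot product (v, w) 7→ v · w is in Mult(R^n, R^n; R); matrix multiplication (A, B) 7→ AB is bilinear; and the determinant of an n × n matrix, viewed as a function of its n columns, is in Multn(R^n,..., R^n; R). Note that a multilinear map is usually not linear: for the dot product, (2v) · (2w) = 4(v · w) rather than 2(v · w).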

Proposition 12 (Change of target). Suppose V1,..., Vn, X and Y are all vector spaces. Let T ∈ Lin(X, Y) and S ∈ Mult(V1,..., Vn; X) be a linear and a multilinear map respectively. Then T ◦ S ∈ Mult(V1,..., Vn; Y).

Proof. This is very easy and left to the reader. 

Lemma 13. In the situation of Proposition 12, if S ∈ Mult(V1,..., Vn; X), then the map S∗ : Lin(X, Y) → Mult(V1,..., Vn; Y) given by T 7→ T ◦ S is a linear transformation.

Proof. This is also very easy. 

Definition 14. Suppose V1,..., Vn are vector spaces. Set U = Free(V1 × · · · × Vn) and let R be the subspace of U generated by all expressions of the form

(v1,..., vi−1, xvi + ywi, vi+1,..., vn) − x(v1,..., vi−1, vi, vi+1,..., vn) − y(v1,..., vi−1, wi, vi+1,..., vn).

The tensor product V1 ⊗ · · · ⊗ Vn is the quotient space U/R.

In the definition above, I’ve used Remark 3 to think of V1 × · · · × Vn as a basis of Free(V1 × · · · × Vn). The set U is called the set of generators for the tensor product and the set R is called the set of relations.

Write π : U → V1 ⊗ · · · ⊗ Vn for the quotient map and write v1 ⊗ · · · ⊗ vn for π(v1,..., vn) in the tensor product. Write ρ : V1 × · · · × Vn → V1 ⊗ · · · ⊗ Vn for the map taking (v1,..., vn) to v1 ⊗ · · · ⊗ vn. Remark 15. An element of the tensor product in the image of ρ is sometimes called a pure tensor.

Proposition 16. The map ρ above is in Mult(V1, ··· , Vn; V1 ⊗ · · · ⊗ Vn).

Proof. This follows directly from the form of the relations. 

Theorem 17 (Universal property of tensor product). Suppose V1,..., Vn and X are vector spaces. Write ρ∗ : Lin(V1 ⊗ · · · ⊗ Vn, X) → Mult(V1,..., Vn; X) for the map taking a T ∈ Lin(V1 ⊗ · · · ⊗ Vn, X) to T ◦ ρ. Then ρ∗ is a vector space isomorphism.

Proof. This is the same as in class on Monday (Feb 4, 2019). But I’ll write the proof. We have to show that the map ρ∗ is one-one and onto.
One-one. Suppose ρ∗T = 0. Then T(v1 ⊗ · · · ⊗ vn) = 0 for all pure tensors v1 ⊗ · · · ⊗ vn. But, since U is generated by the elements (v1,..., vn), the pure tensors generate the tensor product. So this implies that T = 0.

Onto. Suppose S ∈ Mult(V1,..., Vn; X). Since V1 × · · · × Vn is a basis for U, there is a unique linear map T̂ : U → X such that T̂(v1,..., vn) = S(v1,..., vn) for all n-tuples (v1,..., vn) with vi ∈ Vi. But, since S is multilinear, it is easy to see that R ⊂ ker T̂. Therefore, there exists a T ∈ Lin(U/R, X) such that T(v1 ⊗ · · · ⊗ vn) = S(v1,..., vn). In other words, ρ∗T = S. 
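Here is how Theorem 17 gets used in practice (an illustration, not from the text): the map S : R^2 × R^2 → R given by S((a, b), (c, d)) = ad − bc is bilinear, so there is a unique linear T : R^2 ⊗ R^2 → R with T(v ⊗ w) = S(v, w). The point is that one never has to check by hand that T is well defined on sums of pure tensors; the universal property does that work.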

3.2. Properties of tensor product.

Proposition 18. Suppose V1,..., Vn are vector spaces. Then there is a unique map

V1 ⊗ · · · ⊗ Vn → V1 ⊗ (V2 ⊗ (· · · ⊗ (Vn−1 ⊗ Vn))) taking the pure tensor v1 ⊗ · · · ⊗ vn to v1 ⊗ (v2 ⊗ (· · · ⊗ (vn−1 ⊗ vn))). This map is an isomorphism.

Proof. Easy using the universal property and induction. Let’s skip it. 

Lemma 19 (Foil). Suppose vi ∈ V, wi ∈ W for i = 1, 2 and a, b, c, d ∈ R. Then

(av1 + bv2) ⊗ (cw1 + dw2) = acv1 ⊗ w1 + adv1 ⊗ w2 + bcv2 ⊗ w1 + bdv2 ⊗ w2.

Proof. Follows from repeatedly using the relation in R. 

Proposition 20. Suppose V and W are vector spaces with bases {vi}_{i=1}^n and {wj}_{j=1}^m respectively. Then B = {vi ⊗ wj : 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis of V ⊗ W.

Proof. It is easy to see (using the relations in R and the fact that pure tensors generate V ⊗ W) that B generates V ⊗ W. So we just need to show that B is linearly independent.
For this, let {λi}_{i=1}^n and {µj}_{j=1}^m denote the dual bases of {vi}_{i=1}^n and {wj}_{j=1}^m respectively. Then, for each pair (k, ℓ) with 1 ≤ k ≤ n and 1 ≤ ℓ ≤ m, let Skℓ : V × W → R denote the function (v, w) 7→ λk(v)µℓ(w). It is easy to see that Skℓ ∈ Mult(V, W; R). So, by the universal property, Theorem 17, there is a linear transformation Tkℓ ∈ Lin(V ⊗ W, R) such that Tkℓ(vi ⊗ wj) = λk(vi)µℓ(wj) = δik δjℓ. (Here δik is the Kronecker delta: 1 for i = k and 0 otherwise.)

Now, suppose r = ∑ xij vi ⊗ wj = 0. Then, for each pair (k, ℓ) as above, xkℓ = Tkℓ(r) = 0. It follows that B is linearly independent. 

Here are a few corollaries.

Corollary 21. We have dim V ⊗ W = (dim V)(dim W) when dim V, dim W < ∞. In fact, if V1,..., Vn are finite dimensional vector spaces, then dim V1 ⊗ · · · ⊗ Vn = ∏_{i=1}^n dim Vi.

Proof. The first statement follows directly from Proposition 20. The second follows from the first and Proposition 18. 
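So, for instance, dim(R^2 ⊗ R^3) = 2 · 3 = 6; by Proposition 20 a basis is given by the six pure tensors ei ⊗ fj, where {e1, e2} and { f1, f2, f3} are the standard bases.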

Lemma 22. Suppose V1,..., Vn and W1,..., Wn are vector spaces and Ti ∈ Lin(Vi, Wi). Then there is a unique linear map

T1 ⊗ · · · ⊗ Tn : V1 ⊗ · · · ⊗ Vn → W1 ⊗ · · · ⊗ Wn sending the pure tensor v1 ⊗ · · · ⊗ vn to T1v1 ⊗ · · · ⊗ Tnvn.

Proof. The map V1 × · · · × Vn → W1 ⊗ · · · ⊗ Wn given by (v1,..., vn) 7→ T1v1 ⊗ · · · ⊗ Tnvn is easily seen to be in Mult(V1,..., Vn; W1 ⊗ · · · ⊗ Wn). So the result follows from the universal property. 

Corollary 23 (Distributivity). Suppose V1, V2, W1, W2 are vector spaces. Then we have vector space isomorphisms V1 ⊗ V2 ⊕ W1 ⊗ V2 ≅ (V1 ⊕ W1) ⊗ V2 and V1 ⊗ V2 ⊕ V1 ⊗ W2 ≅ V1 ⊗ (V2 ⊕ W2).

Sketch. To give the first map, let A : V1 → V1 ⊕ W1 and B : W1 → V1 ⊕ W1 denote the inclusions.

Then, using Lemma 22, we get linear transformations A ⊗ idV2 and B ⊗ idV2 . The first map in

Corollary 23 is just T := A ⊗ idV2 + B ⊗ idV2. If all the vector spaces in question are finite dimensional, then it is easy to check using Proposition 20 that the first map is an isomorphism. Alternatively, one can prove the Corollary without using Proposition 20 by using the universal property to define an inverse of T. (If you do this, then you get an alternate proof of Proposition 20 as a corollary.) 

3.3. Adjunction. Write Bilin(V, W; X) for Mult(V, W; X). Functions in Bilin(V, W; X) are called bilinear maps from V × W to X. Suppose T ∈ Lin(V, Lin(W, X)). Define a map φ(T) : V × W → X by setting φ(T)(v, w) = (T(v))(w). It is easy to see, once you parse through all the parentheses, that φ(T) ∈ Bilin(V, W; X). So we get a map φ : Lin(V, Lin(W, X)) → Bilin(V, W; X) sending T to φ(T). It is also easy to see that this map φ is itself a linear transformation.

Proposition 24. The map φ : Lin(V, Lin(W, X)) → Bilin(V, W; X) is a vector space isomorphism.

Proof. This is also easy. Given an f ∈ Bilin(V, W; X), define ψ( f ) ∈ Lin(V, Lin(W, X)) by setting (ψ( f )(v))(w) = f (v, w). You have to check that ψ( f ) actually is in Lin(V, Lin(W, X)) and that f 7→ ψ( f ) actually is a linear transformation inverse to φ, but that is very straightforward. 

Corollary 25. We have an isomorphism Lin(V, Lin(W, X)) ≅ Lin(V ⊗ W, X).

Proof. By the universal property Bilin(V, W; X) ≅ Lin(V ⊗ W, X). 

Corollary 25 is often called the adjunction isomorphism or just adjunction. Note that there is a fixed isomorphism: the one coming from composing φ with the isomorphism coming from the universal property.

Corollary 26. Suppose that dim W < ∞. Then, for any vector space V, we have an isomorphism Lin(V, W) ≅ W ⊗ V∗.

Proof. Adjunction gives us an isomorphism Lin(V ⊗ W∗, R) = Lin(V, Lin(W∗, R)) = Lin(V, W∗∗). But, if dim W < ∞, double duality gives us an isomorphism c : W → W∗∗. 
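To make Corollary 26 concrete (a standard description, though not spelled out above): one choice of isomorphism sends w ⊗ λ ∈ W ⊗ V∗ to the rank-one map v 7→ λ(v)w in Lin(V, W). For V = R^n and W = R^m, the pure tensor ei ⊗ λj goes to the elementary matrix Eij with a 1 in position (i, j) and 0 elsewhere, and these mn matrices form the expected basis of Lin(R^n, R^m).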

4. ALGEBRAS

4.1. R-algebras. An associative R-algebra is an associative ring A equipped with a ring homomorphism σA : R → A such that, for all a ∈ A and x ∈ R, σA(x)a = aσA(x). (In other words, σA maps R to the center of A.)

A homomorphism from an associative R-algebra (A, σA) to another associative R-algebra (B, σB) is a ring homomorphism f : A → B such that f ◦ σA = σB. For us the only algebras we care about are associative R-algebras. So I’ll just call those “algebras” for short. I write Hom(A, B) or HomR(A, B) for the set of homomorphisms from an algebra A to an algebra B.

Remark 27. Any R-algebra A is itself a real vector space. The scalar multiplication sends the pair (x, a) ∈ R × A to σA(x)a.

Examples 28. R, C, the algebra Mn(R) of n × n matrices (with real coefficients) are all examples of associative R-algebras. The ring homomorphisms σA : R → A should be obvious in each case. (It’s the identity for R, the inclusion for C and the map to the scalar diagonal matrices for Mn(R).)

4.2. Graded algebras. A grading of an algebra is a decomposition A = ⊕i∈N Ai such that Ai Aj ⊂ Ai+j for all i, j ∈ N. An element a ∈ A is called homogeneous of degree i (with respect to the grading) if a ∈ Ai. If a ∈ A is any element, then, by the definition of a direct sum, we can write a = ∑ ai uniquely with ai ∈ Ai. The element ai is called the degree i homogeneous part of a.

4.3. Homogeneous ideals. A homogeneous ideal in a graded algebra A = ⊕Ai is a (two-sided) ideal I which is generated by homogeneous elements.

Remark 29. By ideal I always mean two-sided ideal. And two-sided ideals are the only ideals that will appear in these notes.

Lemma 30. Suppose I is a homogeneous ideal. Set Ik = I ∩ Ak. Then I = ⊕Ik.

Proof. Suppose r ∈ I. Then we can find finitely many elements ak, bk ∈ A and fk homogeneous elements of I such that r = ∑ ak fk + ∑ fk bk. By decomposing the ak and bk into their homogeneous parts, we can assume that all the ak and bk are homogeneous; then each term ak fk and fk bk is homogeneous and lies in I. This shows that any r ∈ I can be written as a sum r = ∑ rj where rj ∈ Ij. And the decomposition is unique because A = ⊕Aj. So I = ⊕Ik. 

Corollary 31. Suppose A = ⊕Ak is a graded algebra and I is a homogeneous ideal. Write (A/I)k = (Ak + I)/I ≅ Ak/(Ak ∩ I) = Ak/Ik. Then A/I = ⊕(A/I)k. In other words, A/I is a graded ring.

Proof. Let’s skip it. 

4.4. Tensor algebra. Suppose V is a vector space. Set T0V = R and, for n > 0, set TnV = V ⊗ · · · ⊗ V where there are n copies of V tensored together. So T2V = V ⊗ V, etc. We have an obvious map • : TnV × TmV → Tn+mV sending (v1 ⊗ · · · ⊗ vn, w1 ⊗ · · · ⊗ wm) to v1 ⊗ · · · ⊗ vn ⊗ w1 ⊗ · · · ⊗ wm.
Set TV = ⊕n≥0 TnV. Then, expanding • out by linearity defines a product on all of TV making TV into a graded associative R-algebra. (The structure of an R-algebra comes from the inclusion of T0V.)
For convenience in the next theorem, I’ll write ik : TkV → TV for the inclusion. (But note that I just view TkV as a subspace of TV.) Also, I drop the • from the notation in writing the multiplication in TV.

Theorem 32 (Universal property of the tensor algebra). Suppose A is any (associative R-)algebra. Let i1∗ : Hom(TV, A) → Lin(V, A) be the map sending an f : TV → A to the linear map f ◦ i1. Then i1∗ is one-one and onto.

Remark 33. One interesting consequence of Theorem 32 is that Hom(TV, A) always has the structure of a real vector space (since Lin(V, A) does).

Proof. One-one. Suppose f , g : TV → A are two algebra homomorphisms with f (v) = g(v) for all v ∈ V. Then, for any pure tensor v1 ⊗ · · · ⊗ vn ∈ TnV, we have f (v1 ⊗ · · · ⊗ vn) = f (v1) ··· f (vn) = g(v1) ··· g(vn) = g(v1 ⊗ · · · ⊗ vn). But, since the pure tensors generate TnV as a vector space for each n, they also generate TV as a vector space. So f = g.

Onto. Suppose S : V → A is a linear transformation. The map Sn : Vn → A given by (v1,..., vn) 7→ S(v1)S(v2) ··· S(vn) is then easily seen to be multilinear, i.e., in Mult(V, V,..., V; A) where V appears n times. So it defines a map fn : TnV → A. Expanding out by linearity gives a linear map f : TV → A. It is pretty easy to check that f is an algebra homomorphism. So I leave that to the reader. Then it is obvious that f (v) = S(v) for v ∈ V. In other words, f ◦ i1 = S. So we are done. 
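As a sanity check on Theorem 32 (an example, not from the text): take V one-dimensional with basis {v}. Then TnV is one-dimensional with basis v ⊗ · · · ⊗ v, and TV is isomorphic as an algebra to the polynomial ring R[x], with v corresponding to x. The universal property then says that an algebra homomorphism R[x] → A is the same thing as a choice of the single element S(v) ∈ A, which is the familiar substitution principle for polynomials.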

Definition 34. Suppose A = ⊕Ak and B = ⊕Bk are graded algebras. A graded algebra homomorphism is an algebra homomorphism φ : A → B such that φ(Ak) ⊂ Bk.

Theorem 35. Suppose f ∈ Lin(V, W) where V and W are vector spaces. Then there is a unique algebra homomorphism T( f ) : TV → TW such that, for v ∈ V = T1V, T( f )(v) = f (v). Moreover,
(i) T( f ) is a graded algebra homomorphism. We write Tn( f ) : TnV → TnW for the induced homomorphism on the components;
(ii) T(idV) = idT(V);
(iii) If g : W → X is another linear transformation, then T(g ◦ f ) = T(g) ◦ T( f ).

Proof. Using the universal property for TV, the composition i1 ◦ f : V → W → TW gives rise to a unique algebra homomorphism T( f ) such that T( f )(v) = f (v). Since T( f ) is an algebra homomorphism, it takes the pure tensor v1 ⊗ · · · ⊗ vn ∈ TnV to the pure tensor ( f v1) ⊗ · · · ⊗ ( f vn) ∈ TnW. So, since TnV is generated as a vector space by pure tensors, T( f )[TnV] ⊂ TnW. The rest is easy and I leave it to the reader. 

5. EXTERIOR ALGEBRAS

Definition 36. Suppose V is a vector space. Write I for the ideal in TV generated by the elements v ⊗ v ∈ T2V. The exterior algebra or Grassmann algebra of V is the quotient algebra ∧V := TV/I. For α, β ∈ ∧V, we write α ∧ β for the product.

Remark 37. Since the generators of I are all in T2V, I is homogeneous. So, if we set In := I ∩ TnV, we have I = ⊕n≥0 In. Moreover, the algebra ∧V is graded: ∧V = ⊕ ∧nV with ∧nV = TnV/In. Since I is generated (as an ideal) by elements in degree 2, we have I0 = I1 = 0. So ∧nV = TnV for n < 2. More explicitly, we have ∧0V = R and ∧1V = V. Note that, since TnV is generated as a vector space by pure tensors, ∧nV is generated by the images of pure tensors under the quotient map. In other words, it is generated by expressions of the form v1 ∧ · · · ∧ vn. (Although it is not commonly used terminology, we might call these pure symbols.)

In the remark above, it is really more correct to say that ∧0V is isomorphic to R and ∧1V is iso- morphic to V. But it makes it a lot easier to think and write about the exterior algebra if we think of these as being equalities. This is unproblematic, because the remark gives us fixed isomorphisms. Lemma 38. Suppose v, w ∈ V. Then, in ∧V, we have v ∧ w = −(w ∧ v).

Proof. Since u ∧ u = 0 for any u ∈ V, we have v ∧ w = v ∧ w + (v − w) ∧ (v − w) = v ∧ w + v ∧ v − v ∧ w − w ∧ v + w ∧ w = −w ∧ v. 
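For instance, in ∧V for V = R^2 with basis e1, e2: since the wedge product is bilinear, (ae1 + be2) ∧ (ce1 + de2) = ac e1 ∧ e1 + ad e1 ∧ e2 + bc e2 ∧ e1 + bd e2 ∧ e2 = (ad − bc) e1 ∧ e2, using e1 ∧ e1 = e2 ∧ e2 = 0 and e2 ∧ e1 = −e1 ∧ e2. So the wedge of two vectors in the plane already computes a 2 × 2 determinant; this is the first sign of the connection with determinants mentioned in the introduction.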

Corollary 39. Suppose v1,..., vn ∈ V. Then

(i) if vi = vj for any i ≠ j, then v1 ∧ · · · ∧ vn = 0; (ii) if σ ∈ Sn is a permutation, then

vσ(1) ∧ · · · ∧ vσ(n) = sign(σ)v1 ∧ · · · ∧ vn. Here sign(σ) denotes the sign of the permutation σ.

Proof. Exercise in repeatedly applying Lemma 38. Let’s skip it.  As for the tensor product, there are two universal properties for the exterior product: one for ∧nV and another for the algebra ∧V. To explain the one for ∧nV we need a definition.

Definition 40. Suppose V and X are vector spaces. We write Altn(V, X) for the subspace of Multn(V,..., V; X) consisting of n-linear maps f : Vn → X such that f (v1,..., vn) = 0 if there exists an i < n with vi = vi+1. We have the following Proposition which is analogous to Corollary 39.

Proposition 41. Suppose f ∈ Altn(V, W) and v1,..., vn ∈ V. (i) We have

f (v1,..., vi−1, vi, vi+1, vi+2,..., vn) = − f (v1,..., vi−1, vi+1, vi, vi+2,..., vn).

(ii) If vi = vj for any i ≠ j, then f (v1,..., vn) = 0. (iii) If σ ∈ Sn, then f (vσ(1),..., vσ(n)) = sign(σ) f (v1,..., vn).

Proof. We just prove (i), the rest is analogous to the proof of Corollary 39. I.e., the rest is an exercise in using (i) and the definition of an alternating map. f (v1,..., vn) = f (v1,..., vn) + f (v1,..., vi−1, vi+1 − vi, vi+1 − vi, vi+2,..., vn)

= f (v1,..., vn) − f (v1,..., vi−1, vi, vi+1, vi+2,..., vn) − f (v1,..., vi−1, vi+1, vi, vi+2,..., vn)

= − f (v1,..., vi−1, vi+1, vi, vi+2,..., vn). 

Lemma 42. The map jn : Vn → ∧nV sending (v1,..., vn) to v1 ∧ · · · ∧ vn is in Altn(V, ∧nV).

Proof. If in : Vn → TnV is the map to the tensor algebra from §4.4, then jn = π ◦ in where π : TV → ∧V is the quotient map. So jn is in Multn(V,..., V; ∧nV) since π is linear. And since u ∧ u = 0 for all u ∈ V, jn(v1,..., vn) = 0 if vi = vi+1 for some i. 

Lemma 43. Suppose V, X and Y are vector spaces and T : X → Y is a linear transformation. For S ∈ Altn(V, X), T ◦ S ∈ Altn(V, Y). Moreover, the resulting map S∗ : Lin(X, Y) → Altn(V, Y) sending T to T ◦ S is linear.

Proof. Same as Proposition 12 and Lemma 13. 

Theorem 44. Suppose V and X are vector spaces. Then the map jn∗ : Lin(∧nV, X) → Altn(V, X) given by T 7→ T ◦ jn is an isomorphism.

Proof. The map jn∗ is one-one because ∧nV is generated by pure symbols.
To see that jn∗ is onto, suppose f ∈ Altn(V, X). We have Altn(V, X) ⊂ Multn(V,..., V; X). So, by the universal property of tensor products, we can find a map F : TnV → X such that F(v1 ⊗ · · · ⊗ vn) = f (v1,..., vn). But, from the fact that f is alternating, it follows that F(v1 ⊗ · · · ⊗ vn) = 0 if vi = vi+1 for some i < n. But this easily implies that In ⊂ ker F. So, writing π : TnV → ∧nV for the quotient map, we see that there exists a map S : ∧nV → X such that F = S ◦ π. But then S ◦ jn = f . 
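A good example to keep in mind: the determinant of an n × n matrix, as a function of its n columns, is an alternating multilinear map det ∈ Altn(R^n, R). By Theorem 44 it therefore corresponds to a unique linear map ∧n(R^n) → R sending v1 ∧ · · · ∧ vn to det(v1,..., vn). Compare Theorem 59 below, which runs this connection in the other direction.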

We now move to the universal property for algebras.

Lemma 45. Suppose α ∈ ∧nV and β ∈ ∧mV. Then α ∧ β = (−1)nmβ ∧ α.

Proof. It is not hard to see that it suffices to prove the Lemma for pure symbols α and β (using the fact that the pure symbols generate ∧V as a vector space). But for pure symbols, the Lemma follows by repeatedly applying Lemma 38.  Theorem 46. Suppose V is a vector space, and A is an algebra. Let

j1∗ : Hom(∧V, A) → Lin(V, A) be the map sending an algebra homomorphism f to the linear map f ◦ j1. Then j1∗ is one-one and its image is exactly the set of linear transformations S : V → A such that S(v)2 = 0 for all v ∈ V.

Proof. The algebra ∧V is generated as an algebra over R by ∧1V = V. In other words, every element of ∧V is a (finite) sum of (finite) products of elements of V. From this, it follows that j1∗ is one-one.
To see that j1∗ has the image claimed, suppose S ∈ Lin(V, A) and suppose S(v)2 = 0 for all v ∈ V. Using the universal property of tensor algebras, we see that there exists a (unique) f ∈ Hom(TV, A) such that f (v) = S(v) for all v ∈ V. In particular, f (v)2 = f (v ⊗ v) = 0 for all v ∈ V. Since f is an algebra homomorphism, this implies that I ⊂ ker f . So there is a (unique) algebra homomorphism g : ∧V → A such that g ◦ π = f (where π : TV → ∧V is the quotient map). But then we’re done. 

The next Proposition is analogous to Theorem 35.

Proposition 47. Suppose f ∈ Lin(V, W) where V and W are vector spaces. Then there is a unique algebra homomorphism ∧( f ) : ∧V → ∧W such that ∧( f )(v) = f (v) for v ∈ ∧1V = V. Moreover,
(i) ∧( f ) is a graded algebra homomorphism. We write ∧n( f ) : ∧nV → ∧nW for the induced homomorphism on the components;
(ii) ∧(idV) = id∧(V);
(iii) If g : W → X is another linear transformation, then ∧(g ◦ f ) = ∧(g) ◦ ∧( f ).

Proof. The proof is also analogous to the proof of Theorem 35 and I leave it to the reader. 

Example 48. Suppose V is a one-dimensional vector space with basis B = {v}. Then ∧pV = 0 for all p > 1. To prove this, note that any pure symbol is of the form (x1v) ∧ · · · ∧ (xpv) = (∏_{i=1}^p xi) v ∧ · · · ∧ v, and this is 0 for p > 1 since v ∧ v = 0. Since the pure symbols generate ∧pV as a vector space, it follows that ∧pV = 0. On the other hand, we have ∧0V = R and ∧1V = V. So we know ∧V completely.

6. TENSORS, SUMS AND BASES OF EXTERIOR ALGEBRAS

Suppose V and W are two vector spaces. For a, b, c, d ∈ N, let’s define a map • : (∧aV ⊗ ∧bW) × (∧cV ⊗ ∧dW) → ∧a+cV ⊗ ∧b+dW by sending (α ⊗ β, γ ⊗ δ) to (−1)^{bc} (α ∧ γ) ⊗ (β ∧ δ). If we expand out by linearity, we get a map • : (∧V ⊗ ∧W) × (∧V ⊗ ∧W) → ∧V ⊗ ∧W.

Proposition 49. The product • defined above makes (∧V) ⊗ (∧W) into an algebra C := C(V, W). If we set Ck = ⊕_{p+q=k} ∧pV ⊗ ∧qW, then C = ⊕Ck is graded.

Proof. Associativity seems to be the main thing to check. So I leave it to the reader.  Remark 50. The definition of the product • on C is similar to what Rudin does to give an ad hoc definition of the exterior algebra.

Definition 51. A graded algebra A = ⊕Ak is said to be graded commutative if, for a ∈ Ap, b ∈ Aq, ab = (−1)pqba.

It follows from Lemma 45 that ∧V is graded commutative, and it is not hard to see using the definition above that C = ∧V ⊗ ∧W is as well (using the product •).

Lemma 52. Suppose A is a graded commutative (real) algebra, and k is an odd number. Then, for a ∈ Ak, we have a2 = 0.

Proof. By graded commutativity, a2 = (−1)^{k2} a2 = −a2, since k2 is odd when k is. So 2a2 = 0. But that implies that a2 = 0. 

I want to show that, in fact, C is isomorphic as an algebra to ∧(V ⊕ W). For this, I need maps back and forth.

Using Proposition 47, the inclusions iV : V → V ⊕ W and iW : W → V ⊕ W give us maps ∧(iV) : ∧V → ∧(V ⊕ W) and ∧(iW) : ∧W → ∧(V ⊕ W). Then we define a map f¯ : ∧V × ∧W → ∧(V ⊕ W) by setting f¯(α, β) = (∧(iV)(α)) ∧ (∧(iW)(β)). This map is clearly bilinear, so, by the universal property of tensor products, it gives a map f : ∧V ⊗ ∧W → ∧(V ⊕ W). If we view V and W as being contained in the factors of V ⊕ W, we can see the map f in an easier way: we just have f (α ⊗ β) = α ∧ β for α ∈ ∧V and β ∈ ∧W.

Proposition 53. The map f : ∧V ⊗ ∧W → ∧(V ⊕ W) is an algebra homomorphism.

Proof. This is more or less obvious from the description of f given above (although it is slightly annoying to write down in formulas). 

Now we want to come up with a homomorphism back the other way. This is slightly easier. The degree 1 part of C = ∧V ⊗ ∧W is just V ⊗ R ⊕ R ⊗ W. So it is isomorphic to V ⊕ W in an obvious way (and we will just identify it with V ⊕ W). To be super explicit, the map V → V ⊗ R sending v to v ⊗ 1 is an isomorphism and similarly the map w 7→ 1 ⊗ w is an isomorphism. So we get a map g¯ : V ⊕ W → C1 ⊂ C sending (v, w) to v ⊗ 1 + 1 ⊗ w.

Lemma 54. We have g¯(v, w)2 = 0 for all (v, w) ∈ V ⊕ W.

Proof. Since g¯(v, w) ∈ C1, this follows from Lemma 52. 

Now, from the universal property of the exterior algebra and Lemma 54, it follows that we get a (unique) algebra homomorphism g : ∧(V ⊕ W) → C = (∧V) ⊗ (∧W) such that g(v, w) = g¯(v, w) for (v, w) ∈ V ⊕ W.

Theorem 55. The maps f and g above are inverse algebra homomorphisms. Consequently, we have an isomorphism of algebras ∧(V ⊕ W) ≅ (∧V) ⊗ (∧W).

Proof. The trick to prove this is to compose the maps and apply the universal properties to see that the compositions are the identity. I’ll skip it. 

Corollary 56. Suppose V has basis v1,..., vn. For each I = (i1,..., ip) with 1 ≤ i1 < i2 < ··· < ip ≤ n, write vI = vi1 ∧ · · · ∧ vip. Then the vI, as I ranges over all subsets of {1,..., n}, form a basis for ∧V. Consequently, dim ∧V = 2^n and dim ∧pV is the binomial coefficient (n choose p).

Proof. Let’s first prove, by induction on dim V, that dim ∧V = 2^{dim V}. We know this when dim V = 1 and it’s obvious when dim V = 0. So assume dim V = n > 1. Then, set A = ⟨v1,..., vn−1⟩ and B = ⟨vn⟩. By induction, dim ∧A = 2^{n−1} and dim ∧B = 2. So, since V = A ⊕ B, dim ∧V = dim((∧A) ⊗ (∧B)) = (dim ∧A)(dim ∧B) = 2^{n−1} · 2 = 2^n. 
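For example, if V has basis v1, v2, v3, then ∧V has the 2^3 = 8 basis elements 1, v1, v2, v3, v1 ∧ v2, v1 ∧ v3, v2 ∧ v3, v1 ∧ v2 ∧ v3; the three of degree 2 match the binomial coefficient (3 choose 2) = 3.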

Definition 57. A p-tuple I = (i1,..., ip) with 1 ≤ i1 < ··· < ip ≤ n as above is called a multi-index of cardinality p. We write |I| = p.

Corollary 58. Suppose V is an n-dimensional vector space and f ∈ End V is a linear transformation from V to itself. Then there is a unique scalar δ( f ) ∈ R such that the map ∧n( f ) : ∧n(V) → ∧n(V) is multiplication by δ( f ). We have δ(idV) = 1, and, if g is another element of End V, then δ( f g) = δ( f )δ(g).

Proof. Since V is n-dimensional, ∧nV is 1-dimensional (by Corollary 56). Therefore any map from ∧nV to itself is given by multiplication by an (obviously unique) scalar. This proves the existence (and uniqueness) of δ( f ). The rest follows from Proposition 47.  Theorem 59. We have (in the context of Corollary 58), δ( f ) = det f for f ∈ End V.

Proof. The function det : End V → R is characterized by two properties:

(i) det idV = 1;
(ii) det is an alternating multilinear function of the columns of f (in the matrix representation with respect to a basis of V).
Property (i) holds for δ by Corollary 58. Property (ii) is easy since the map Vn → ∧nV sending (v1,..., vn) to v1 ∧ · · · ∧ vn is multilinear and alternating. It then follows that, if v1,..., vn is an ordered basis of V, then the function taking a linear transformation f to ( f v1) ∧ · · · ∧ ( f vn) is an alternating, multilinear function of the columns. From that, it follows easily that det f = δ( f ). 
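As a worked 2 × 2 check (illustration only): let f ∈ End R^2 send e1 to ae1 + ce2 and e2 to be1 + de2, so f has matrix with columns (a, c) and (b, d). Then ( f e1) ∧ ( f e2) = (ae1 + ce2) ∧ (be1 + de2) = (ad − bc) e1 ∧ e2, so ∧2( f ) is multiplication by ad − bc and δ( f ) = ad − bc = det f , as Theorem 59 predicts.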

Proposition 60. There is a unique linear transformation ⟨ , ⟩ : ∧pV∗ ⊗ ∧pV → R sending the tensor (λ1 ∧ · · · ∧ λp) ⊗ (v1 ∧ · · · ∧ vp) to det[λi(vj)]_{i,j=1}^p. This pairing induces an isomorphism φ : ∧p(V∗) → (∧pV)∗. If {vi}_{i=1}^n is a basis of V and {λi} is the dual basis then, for I and J multi-indices of cardinality p, we have ⟨λI, vJ⟩ = δIJ. (Here δIJ is 0 if I ≠ J and 1 otherwise.)

Proof. To get the pairing, it suffices (by adjunction) to get a linear map φ : ∧pV∗ → Lin(∧pV, R) = (∧pV)∗. This we have to do in steps using the various universal properties.
First, suppose (λ1,..., λp) ∈ (V∗)p. Define a map φ˜(λ1,..., λp) : Vp → R by setting φ˜(λ1,..., λp)(v1,..., vp) = det[λi(vj)]_{i,j=1}^p. It is easy to see that, holding the λ’s fixed, the above expression is alternating and multilinear in the vi’s. So, using the universal property of Theorem 44, we see that φ˜ gives rise to a map φˆ(λ1,..., λp) ∈ Lin(∧pV, R). This then gives us a map φˆ : (V∗)p → Lin(∧pV, R). It is then easy to see that this map φˆ is itself alternating and multilinear. So, now using the universal property again, this time for ∧pV∗, we get a map φ : ∧pV∗ → (∧pV)∗.
We have φ(λI)(vJ) = det[λis(vjt)]_{s,t=1}^p where I = (i1,..., ip) and J = (j1,..., jp). So, it is not hard to see that ⟨λI, vJ⟩ = δIJ. This proves that φ is an isomorphism. 

Remark 61. When V is finite dimensional, people use this isomorphism φ to think of ∧pV∗ and (∧pV)∗ as basically the same thing. This is analogous to the situation with double duals.
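To illustrate the formula ⟨λI, vJ⟩ = δIJ in Proposition 60: with p = 2, I = (1, 2) and J = (1, 2), the matrix [λis(vjt)] is the 2 × 2 identity, so ⟨λ1 ∧ λ2, v1 ∧ v2⟩ = 1; while for J = (1, 3) the second row is zero (λ2(v1) = λ2(v3) = 0), so ⟨λ1 ∧ λ2, v1 ∧ v3⟩ = 0, as the formula predicts.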

7. BILINEAR FORMS, ORIENTATIONS AND THE STAR OPERATOR

7.1. Inner products. Fix a positive definite inner product ⟨ , ⟩ : V × V → R on a finite dimensional vector space V. This is a symmetric bilinear form such that, for each v ∈ V, ‖v‖2 := ⟨v, v⟩ ≥ 0 with strict inequality for all nonzero v. Here symmetric means that ⟨v, w⟩ = ⟨w, v⟩ for all v, w ∈ V. As ⟨ , ⟩ is a bilinear form, it gives rise (via adjunction) to a linear transformation γ : V → V∗, and it is not hard to see that the positive definiteness implies that γ is an isomorphism.
A good example of a positive definite inner product is just the standard dot product on R^n. In this case, if {ei} is the standard basis and {ei∗} is the dual basis, then we just have γ(ei) = ei∗. Hoffman and Kunze is a decent place to read about positive definite inner products.
So on the exterior algebra side, if we use γ to identify V and V∗, Proposition 60 gives us a linear transformation

⟨ , ⟩ : ∧pV ⊗ ∧pV → R. (62)

Fix an orthonormal basis {ei} for V. (Using Gram-Schmidt, every such V has one.) Then it is not hard to see that ⟨eI, eJ⟩ = δIJ (where here I and J are multi-indices). It follows that the pairing (62) is itself a positive definite inner product with orthonormal basis {eI}.

7.2. Orientations. Now suppose dim V = n. Then dim ∧nV = 1. So, under the inner product (62), there are exactly two vectors α ∈ ∧nV with ‖α‖ = 1. If we have an orthonormal basis e1,..., en, these are exactly ±e1 ∧ · · · ∧ en. The choice of one of these vectors Ω of norm 1 is called an orientation of V. By reordering our orthonormal basis {ei}, we can always arrange that Ω = e1 ∧ · · · ∧ en. In fact, we call an (ordered) orthonormal basis v1,..., vn an oriented basis if v1 ∧ · · · ∧ vn = Ω.

The choice of an orientation gives rise to a unique isomorphism ω : ∧nV → R taking Ω to 1. We fix this isomorphism. Using it we get a composition

∧pV ⊗ ∧n−pV → ∧nV → R (63)

where the first map is induced (using the universal property of tensor products) from the (obviously bilinear) wedge product and the second map is ω.

Theorem 64. For each λ ∈ ∧pV, there exists a unique ∗λ ∈ ∧n−pV such that, for any µ ∈ ∧n−pV,

λ ∧ µ = ⟨∗λ, µ⟩Ω. (65)

Moreover, the resulting map ∗ : ∧pV → ∧n−pV is linear.

Sketch. This essentially follows from the fact that (62) is positive definite (and thus non-degenerate). 

The operator ∗ is called the Hodge star operator. Except for the matter of signs, it is easy to understand in terms of multi-indices.

Lemma 66. Suppose {ei} is an oriented orthonormal basis. For any multi-index I, write I′ for the complement. That is, the set of indices in I′ is the complement in {1,..., n} of the set of indices in I. Then

∗eI = ±eI′.

Proof. Suppose |I| = p. Write ∗eI = ∑_{|J|=n−p} aJ eJ where each aJ is a real number. For K a multi-index with |K| = n − p, we have eI ∧ eK = 0 unless K = I′. So if K ≠ I′, aK = ⟨∗eI, eK⟩ = 0.
On the other hand, if K = I′, we have eI ∧ eI′ = εΩ where ε = ±1. So

eI ∧ eI′ = εΩ = ⟨∗eI, eI′⟩Ω = ⟨∑ aJ eJ, eI′⟩Ω = aI′ Ω.

So ε = aI′, and it follows that ∗eI = ε eI′. 

Let’s record the last part of the proof.

Corollary 67. Suppose {ei} is an oriented orthonormal basis. For a multi-index I, let eI ∧ eI′ = εΩ. Then ε = ±1 and

∗eI = ε eI′.

Example 68. This makes it very easy to compute the ∗ operator. Let’s do it for dim V = 3. Then Ω = e1 ∧ e2 ∧ e3. For I = (1), I′ = (2, 3) and ε = 1. For I = (2), I′ = (1, 3) and ε = −1. And for I = (3), I′ = (1, 2) and ε = 1. So ∗e1 = e2 ∧ e3, ∗e2 = −e1 ∧ e3 = e3 ∧ e1, and ∗e3 = e1 ∧ e2.
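Similarly, in dimension 2 with Ω = e1 ∧ e2 (a quick check along the same lines): for I = (1), I′ = (2) and ε = 1, so ∗e1 = e2; for I = (2), I′ = (1) and e2 ∧ e1 = −Ω gives ε = −1, so ∗e2 = −e1. Thus on V itself ∗ acts as rotation by 90 degrees. Also ∗1 = e1 ∧ e2 and ∗(e1 ∧ e2) = 1.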

REFERENCES

[Bou98] Nicolas Bourbaki. Algebra I. Chapters 1–3. Elements of Mathematics (Berlin). Springer-Verlag, Berlin, 1998, pages xxiv+709. ISBN: 3-540-64243-9. Translated from the French; reprint of the 1989 English translation.
[Dar94] R. W. R. Darling. Differential forms and connections. Cambridge University Press, Cambridge, 1994, pages x+256. ISBN: 0-521-46800-0. DOI: 10.1017/CBO9780511805110. URL: https://doi.org/10.1017/CBO9780511805110.
[Rud76] Walter Rudin. Principles of mathematical analysis. Third edition. International Series in Pure and Applied Mathematics. McGraw-Hill Book Co., New York-Auckland-Düsseldorf, 1976, pages x+342.
[Spi65] Michael Spivak. Calculus on manifolds. A modern approach to classical theorems of advanced calculus. W. A. Benjamin, Inc., New York-Amsterdam, 1965, pages xii+144.

[War83] Frank W. Warner. Foundations of differentiable manifolds and Lie groups. Volume 94 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1983, pages ix+272. ISBN: 0-387-90894-3. Corrected reprint of the 1971 edition.

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF MARYLAND, COLLEGE PARK, MD, USA
Email address: pbrosnan@math.umd.edu