
Chapter 22

Tensor Algebras, Symmetric Algebras and Exterior Algebras

22.1 Tensor Products

We begin by defining tensor products of vector spaces over a field and then we investigate some properties of these tensors, in particular the existence of bases and duality. After this, we investigate special kinds of tensors, namely, symmetric tensors and skew-symmetric tensors. Tensor products of modules over a commutative ring with identity will be discussed very briefly. They show up naturally when we consider the space of sections of a tensor product of vector bundles.

Given a linear map, $f \colon E \to F$, we know that if we have a basis, $(u_i)_{i \in I}$, for $E$, then $f$ is completely determined by its values, $f(u_i)$, on the basis vectors. For a multilinear map, $f \colon E^n \to F$, we don't know if there is such a nice property but it would certainly be very useful.

In many respects, tensor products allow us to define multilinear maps in terms of their action on a suitable basis. The crucial idea is to linearize, that is, to create a new vector space, $E^{\otimes n}$, such that the multilinear map, $f \colon E^n \to F$, is turned into a linear map, $f_\otimes \colon E^{\otimes n} \to F$, which is equivalent to $f$ in a strong sense. If, in addition, $f$ is symmetric, then we can define a symmetric tensor power, $\mathrm{Sym}^n(E)$, and every symmetric multilinear map, $f \colon E^n \to F$, is turned into a linear map, $f_\odot \colon \mathrm{Sym}^n(E) \to F$, which is equivalent to $f$ in a strong sense. Similarly, if $f$ is alternating, then we can define a skew-symmetric tensor power, $\bigwedge^n(E)$, and every alternating multilinear map is turned into a linear map, $f_\wedge \colon \bigwedge^n(E) \to F$, which is equivalent to $f$ in a strong sense.

Tensor products can be defined in various ways, some more abstract than others. We tried to stay down to earth, without excess!

Let $K$ be a given field, and let $E_1, \ldots, E_n$ be $n \geq 2$ given vector spaces. For any vector space, $F$, recall that a map, $f \colon E_1 \times \cdots \times E_n \to F$, is multilinear iff it is linear in each of its arguments, that is,

$$f(u_1, \ldots, u_{i-1}, v + w, u_{i+1}, \ldots, u_n) = f(u_1, \ldots, u_{i-1}, v, u_{i+1}, \ldots, u_n) + f(u_1, \ldots, u_{i-1}, w, u_{i+1}, \ldots, u_n)$$
$$f(u_1, \ldots, u_{i-1}, \lambda v, u_{i+1}, \ldots, u_n) = \lambda f(u_1, \ldots, u_{i-1}, v, u_{i+1}, \ldots, u_n),$$
for all $u_j \in E_j$ ($j \neq i$), all $v, w \in E_i$ and all $\lambda \in K$, for $i = 1, \ldots, n$.

The set of multilinear maps as above forms a vector space denoted $L(E_1, \ldots, E_n; F)$ or $\mathrm{Hom}(E_1, \ldots, E_n; F)$. When $n = 1$, we have the vector space of linear maps, $L(E, F)$ or $\mathrm{Hom}(E, F)$. (To be very precise, we write $\mathrm{Hom}_K(E_1, \ldots, E_n; F)$ and $\mathrm{Hom}_K(E, F)$.) As usual, the dual space, $E^*$, of $E$ is defined by $E^* = \mathrm{Hom}(E, K)$.

Before proceeding any further, we recall a basic fact about pairings. We will use this fact to deal with dual spaces of tensors.

Definition 22.1 Given two vector spaces, $E$ and $F$, a map, $(-,-) \colon E \times F \to K$, is a nondegenerate pairing iff it is bilinear and iff $(u, v) = 0$ for all $v \in F$ implies $u = 0$ and $(u, v) = 0$ for all $u \in E$ implies $v = 0$. A nondegenerate pairing induces two linear maps, $\varphi \colon E \to F^*$ and $\psi \colon F \to E^*$, defined by
$$\varphi(u)(y) = (u, y), \qquad \psi(v)(x) = (x, v),$$
for all $u, x \in E$ and all $v, y \in F$.

Proposition 22.1 For every nondegenerate pairing, $(-,-) \colon E \times F \to K$, the induced maps $\varphi \colon E \to F^*$ and $\psi \colon F \to E^*$ are linear and injective. Furthermore, if $E$ and $F$ are finite dimensional, then $\varphi \colon E \to F^*$ and $\psi \colon F \to E^*$ are bijective.

Proof. The maps $\varphi \colon E \to F^*$ and $\psi \colon F \to E^*$ are linear because $u, v \mapsto (u, v)$ is bilinear. Assume that $\varphi(u) = 0$. This means that $\varphi(u)(y) = (u, y) = 0$ for all $y \in F$ and as our pairing is nondegenerate, we must have $u = 0$. Similarly, $\psi$ is injective. If $E$ and $F$ are finite dimensional, then $\dim(E) = \dim(E^*)$ and $\dim(F) = \dim(F^*)$. However, the injectivity of $\varphi$ and $\psi$ implies that $\dim(E) \leq \dim(F^*)$ and $\dim(F) \leq \dim(E^*)$. Consequently $\dim(E) \leq \dim(F)$ and $\dim(F) \leq \dim(E)$, so $\dim(E) = \dim(F)$. Therefore, $\dim(E) = \dim(F^*)$ and $\varphi$ is bijective (and similarly $\dim(F) = \dim(E^*)$ and $\psi$ is bijective).

Proposition 22.1 shows that when $E$ and $F$ are finite dimensional, a nondegenerate pairing induces canonical isomorphisms $\varphi \colon E \to F^*$ and $\psi \colon F \to E^*$, that is, isomorphisms that do not depend on the choice of bases. An important special case is the case where $E = F$ and we have an inner product (a symmetric, positive definite bilinear form) on $E$.

Remark: When we use the term "canonical isomorphism" we mean that such an isomorphism is defined independently of any choice of bases. For example, if $E$ is a finite dimensional vector space and $(e_1, \ldots, e_n)$ is any basis of $E$, we have the dual basis, $(e_1^*, \ldots, e_n^*)$, of $E^*$ (where $e_i^*(e_j) = \delta_{ij}$) and thus, the map $e_i \mapsto e_i^*$ is an isomorphism between $E$ and $E^*$. This isomorphism is not canonical.

On the other hand, if $\langle -, - \rangle$ is an inner product on $E$, then Proposition 22.1 shows that the nondegenerate pairing, $\langle -, - \rangle$, induces a canonical isomorphism between $E$ and $E^*$. This isomorphism is often denoted $\flat \colon E \to E^*$ and we usually write $u^\flat$ for $\flat(u)$, with $u \in E$.

Given any basis, $(e_1, \ldots, e_n)$, of $E$ (not necessarily orthonormal), if we let $g_{ij} = (e_i, e_j)$, then for every $u = \sum_{i=1}^n u_i e_i$, since $u^\flat(v) = \langle u, v \rangle$, for all $v \in E$, we get
$$u^\flat = \sum_{i=1}^n \omega_i e_i^*, \qquad \text{with} \quad \omega_i = \sum_{j=1}^n g_{ij} u_j.$$
If we use the convention that coordinates of vectors are written using superscripts ($u = \sum_{i=1}^n u^i e_i$) and coordinates of one-forms (covectors) are written using subscripts ($\omega = \sum_{i=1}^n \omega_i e_i^*$), then the map, $\flat$, has the effect of lowering (flattening!) indices. The inverse of $\flat$ is denoted $\sharp \colon E^* \to E$. If we write $\omega \in E^*$ as $\omega = \sum_{i=1}^n \omega_i e_i^*$ and $\omega^\sharp \in E$ as $\omega^\sharp = \sum_{j=1}^n (\omega^\sharp)^j e_j$, since
$$\omega_i = \omega(e_i) = \langle \omega^\sharp, e_i \rangle = \sum_{j=1}^n (\omega^\sharp)^j g_{ij}, \qquad 1 \leq i \leq n,$$
we get
$$(\omega^\sharp)^i = \sum_{j=1}^n g^{ij} \omega_j,$$
where $(g^{ij})$ is the inverse of the matrix $(g_{ij})$. The inner product, $(-,-)$, on $E$ induces an inner product on $E^*$, also denoted $(-,-)$, and given by
$$(\omega_1, \omega_2) = (\omega_1^\sharp, \omega_2^\sharp),$$
for all $\omega_1, \omega_2 \in E^*$. Then, it is obvious that
$$(u, v) = (u^\flat, v^\flat), \qquad \text{for all } u, v \in E.$$
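Since, once a basis is chosen, the maps $\flat$ and $\sharp$ are just multiplication by the Gram matrix $(g_{ij})$ and its inverse, the index-lowering and index-raising formulas can be checked numerically. The following is only a small illustrative sketch (Python/NumPy, with an arbitrary positive definite matrix standing in for $(g_{ij})$), not part of the text:

```python
import numpy as np

# A minimal sketch of the musical isomorphisms flat/sharp in a non-orthonormal
# basis of R^3.  G plays the role of (g_ij) = (e_i, e_j); it only needs to be
# symmetric positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
G = A @ A.T + 3 * np.eye(3)       # g_ij, symmetric positive definite
G_inv = np.linalg.inv(G)          # g^ij

u = rng.standard_normal(3)        # components u^i of a vector
omega = G @ u                     # flat: omega_i = sum_j g_ij u^j  (lower the index)
u_back = G_inv @ omega            # sharp: (omega^#)^i = sum_j g^ij omega_j (raise it)
assert np.allclose(u_back, u)     # sharp is the inverse of flat

# The induced inner product on E* is represented by G^{-1}, so
# (u, v) = (u^flat, v^flat) for all u, v:
v = rng.standard_normal(3)
assert np.isclose(u @ G @ v, (G @ u) @ G_inv @ (G @ v))
```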

If $(e_1, \ldots, e_n)$ is a basis of $E$ and $g_{ij} = (e_i, e_j)$, as
$$(e_i^*)^\sharp = \sum_{k=1}^n g^{ik} e_k,$$
an easy computation shows that
$$(e_i^*, e_j^*) = ((e_i^*)^\sharp, (e_j^*)^\sharp) = g^{ij},$$
that is, in the basis $(e_1^*, \ldots, e_n^*)$, the inner product on $E^*$ is represented by the matrix $(g^{ij})$, the inverse of the matrix $(g_{ij})$.

The inner product on a finite dimensional vector space also yields a natural isomorphism between the space, $\mathrm{Hom}(E, E; K)$, of bilinear forms on $E$ and the space, $\mathrm{Hom}(E, E)$, of linear maps from $E$ to itself. Using this isomorphism, we can define the trace of a bilinear form in an intrinsic manner. This technique is used in differential geometry, for example, to define the divergence of a differential one-form.

Proposition 22.2 If $\langle -, - \rangle$ is an inner product on a finite dimensional vector space, $E$, (over a field, $K$), then for every bilinear form, $f \colon E \times E \to K$, there is a unique linear map, $f^\natural \colon E \to E$, such that
$$f(u, v) = \langle f^\natural(u), v \rangle, \qquad \text{for all } u, v \in E.$$
The map, $f \mapsto f^\natural$, is a linear isomorphism between $\mathrm{Hom}(E, E; K)$ and $\mathrm{Hom}(E, E)$.

Proof. For every $g \in \mathrm{Hom}(E, E)$, the map given by
$$f(u, v) = \langle g(u), v \rangle, \qquad u, v \in E,$$
is clearly bilinear. It is also clear that the above defines a linear map from $\mathrm{Hom}(E, E)$ to $\mathrm{Hom}(E, E; K)$. This map is injective because if $f(u, v) = 0$ for all $u, v \in E$, as $\langle -, - \rangle$ is an inner product, we get $g(u) = 0$ for all $u \in E$. Furthermore, both spaces $\mathrm{Hom}(E, E)$ and $\mathrm{Hom}(E, E; K)$ have the same dimension, so our linear map is an isomorphism.

If $(e_1, \ldots, e_n)$ is an orthonormal basis of $E$, then we check immediately that the trace of a linear map, $g$, (which is independent of the choice of a basis) is given by
$$\mathrm{tr}(g) = \sum_{i=1}^n \langle g(e_i), e_i \rangle,$$
where $n = \dim(E)$. We define the trace of the bilinear form, $f$, by
$$\mathrm{tr}(f) = \mathrm{tr}(f^\natural).$$
From Proposition 22.2, $\mathrm{tr}(f)$ is given by
$$\mathrm{tr}(f) = \sum_{i=1}^n f(e_i, e_i),$$
for any orthonormal basis, $(e_1, \ldots, e_n)$, of $E$. We can also check directly that the above expression is independent of the choice of an orthonormal basis.
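To make the basis independence concrete, here is a small numerical check (a sketch, not from the text; Python/NumPy with the standard inner product on $\mathbb{R}^n$, and a randomly chosen orthogonal matrix $Q$ playing the role of a change of orthonormal basis):

```python
import numpy as np

# Trace of a bilinear form f(u, v) = u^T F v on R^n with the standard inner
# product: sum_i f(e_i, e_i) should not depend on which orthonormal basis
# (e_1, ..., e_n) is used.
rng = np.random.default_rng(1)
n = 4
F = rng.standard_normal((n, n))          # matrix of the bilinear form f

def trace_in_basis(F, basis):
    """Sum of f(e_i, e_i) over the columns e_i of `basis`."""
    return sum(basis[:, i] @ F @ basis[:, i] for i in range(basis.shape[1]))

std = np.eye(n)                                       # standard orthonormal basis
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))      # another orthonormal basis

assert np.isclose(trace_in_basis(F, std), np.trace(F))
assert np.isclose(trace_in_basis(F, Q), np.trace(F))  # basis independence
```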

We will also need the following Proposition to show that various families are linearly independent.

Proposition 22.3 Let $E$ and $F$ be two nontrivial vector spaces and let $(u_i)_{i \in I}$ be any family of vectors $u_i \in E$. The family, $(u_i)_{i \in I}$, is linearly independent iff for every family, $(v_i)_{i \in I}$, of vectors $v_i \in F$, there is some linear map, $f \colon E \to F$, so that $f(u_i) = v_i$, for all $i \in I$.

Proof. Left as an exercise.

First, we define tensor products, and then we prove their existence and uniqueness up to isomorphism.

Definition 22.2 A tensor product of $n \geq 2$ vector spaces $E_1, \ldots, E_n$, is a vector space $T$, together with a multilinear map $\varphi \colon E_1 \times \cdots \times E_n \to T$, such that, for every vector space $F$ and for every multilinear map $f \colon E_1 \times \cdots \times E_n \to F$, there is a unique linear map $f_\otimes \colon T \to F$, with
$$f(u_1, \ldots, u_n) = f_\otimes(\varphi(u_1, \ldots, u_n)),$$
for all $u_1 \in E_1, \ldots, u_n \in E_n$, or for short
$$f = f_\otimes \circ \varphi.$$
Equivalently, there is a unique linear map $f_\otimes$ such that the following diagram commutes:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\varphi\;} & T \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle f_\otimes} \\ & & F \end{array}$$

First, we show that any two tensor products $(T_1, \varphi_1)$ and $(T_2, \varphi_2)$ for $E_1, \ldots, E_n$, are isomorphic.

Proposition 22.4 Given any two tensor products $(T_1, \varphi_1)$ and $(T_2, \varphi_2)$ for $E_1, \ldots, E_n$, there is an isomorphism $h \colon T_1 \to T_2$ such that
$$\varphi_2 = h \circ \varphi_1.$$

Proof. Focusing on $(T_1, \varphi_1)$, we have a multilinear map $\varphi_2 \colon E_1 \times \cdots \times E_n \to T_2$, and thus, there is a unique linear map $(\varphi_2)_\otimes \colon T_1 \to T_2$, with
$$\varphi_2 = (\varphi_2)_\otimes \circ \varphi_1.$$
Similarly, focusing now on $(T_2, \varphi_2)$, we have a multilinear map $\varphi_1 \colon E_1 \times \cdots \times E_n \to T_1$, and thus, there is a unique linear map $(\varphi_1)_\otimes \colon T_2 \to T_1$, with
$$\varphi_1 = (\varphi_1)_\otimes \circ \varphi_2.$$
But then, we get
$$\varphi_1 = (\varphi_1)_\otimes \circ (\varphi_2)_\otimes \circ \varphi_1,$$
and
$$\varphi_2 = (\varphi_2)_\otimes \circ (\varphi_1)_\otimes \circ \varphi_2.$$
On the other hand, focusing on $(T_1, \varphi_1)$, we have a multilinear map $\varphi_1 \colon E_1 \times \cdots \times E_n \to T_1$, but the unique linear map $h \colon T_1 \to T_1$, with
$$\varphi_1 = h \circ \varphi_1$$
is $h = \mathrm{id}$, and since $(\varphi_1)_\otimes \circ (\varphi_2)_\otimes$ is linear, as a composition of linear maps, we must have
$$(\varphi_1)_\otimes \circ (\varphi_2)_\otimes = \mathrm{id}.$$
Similarly, we must have
$$(\varphi_2)_\otimes \circ (\varphi_1)_\otimes = \mathrm{id}.$$
This shows that $(\varphi_1)_\otimes$ and $(\varphi_2)_\otimes$ are inverse linear maps, and thus, $(\varphi_2)_\otimes \colon T_1 \to T_2$ is an isomorphism between $T_1$ and $T_2$.

Now that we have shown that tensor products are unique up to isomorphism, we give a construction that produces one.

Theorem 22.5 Given $n \geq 2$ vector spaces $E_1, \ldots, E_n$, a tensor product $(E_1 \otimes \cdots \otimes E_n, \varphi)$ for $E_1, \ldots, E_n$ can be constructed. Furthermore, denoting $\varphi(u_1, \ldots, u_n)$ as $u_1 \otimes \cdots \otimes u_n$, the tensor product $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors $u_1 \otimes \cdots \otimes u_n$, where $u_1 \in E_1, \ldots, u_n \in E_n$, and for every multilinear map $f \colon E_1 \times \cdots \times E_n \to F$, the unique linear map $f_\otimes \colon E_1 \otimes \cdots \otimes E_n \to F$ such that $f = f_\otimes \circ \varphi$, is defined by
$$f_\otimes(u_1 \otimes \cdots \otimes u_n) = f(u_1, \ldots, u_n),$$
on the generators $u_1 \otimes \cdots \otimes u_n$ of $E_1 \otimes \cdots \otimes E_n$.

Proof. Given any set, $I$, viewed as an index set, let $K^{(I)}$ be the set of all functions, $f \colon I \to K$, such that $f(i) \neq 0$ only for finitely many $i \in I$. As usual, denote such a function by $(f_i)_{i \in I}$; it is a family of finite support. We make $K^{(I)}$ into a vector space by defining addition and scalar multiplication by
$$(f_i) + (g_i) = (f_i + g_i)$$
$$\lambda(f_i) = (\lambda f_i).$$

The family, $(e_i)_{i \in I}$, is defined such that $(e_i)_j = 0$ if $j \neq i$ and $(e_i)_i = 1$. It is a basis of the vector space $K^{(I)}$, so that every $w \in K^{(I)}$ can be uniquely written as a finite linear combination of the $e_i$. There is also an injection, $\iota \colon I \to K^{(I)}$, such that $\iota(i) = e_i$ for every $i \in I$. Furthermore, it is easy to show that for any vector space, $F$, and for any function, $f \colon I \to F$, there is a unique linear map, $\overline{f} \colon K^{(I)} \to F$, such that
$$f = \overline{f} \circ \iota,$$
as in the following diagram:
$$\begin{array}{ccc} I & \xrightarrow{\;\iota\;} & K^{(I)} \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle \overline{f}} \\ & & F \end{array}$$
This shows that $K^{(I)}$ is the free vector space generated by $I$. Now, apply this construction to the cartesian product, $I = E_1 \times \cdots \times E_n$, obtaining the free vector space $M = K^{(I)}$ on $I = E_1 \times \cdots \times E_n$. Since every $e_i$ is uniquely associated with some $n$-tuple $i = (u_1, \ldots, u_n) \in E_1 \times \cdots \times E_n$, we will denote $e_i$ by $(u_1, \ldots, u_n)$.

Next, let $N$ be the subspace of $M$ generated by the vectors of the following type:

$$(u_1, \ldots, u_i + v_i, \ldots, u_n) - (u_1, \ldots, u_i, \ldots, u_n) - (u_1, \ldots, v_i, \ldots, u_n),$$
$$(u_1, \ldots, \lambda u_i, \ldots, u_n) - \lambda(u_1, \ldots, u_i, \ldots, u_n).$$

We let $E_1 \otimes \cdots \otimes E_n$ be the quotient $M/N$ of the free vector space $M$ by $N$, let $\pi \colon M \to M/N$ be the quotient map, and set
$$\varphi = \pi \circ \iota.$$
By construction, $\varphi$ is multilinear, and since $\pi$ is surjective and the $\iota(i) = e_i$ generate $M$, since $i$ is of the form $i = (u_1, \ldots, u_n) \in E_1 \times \cdots \times E_n$, the $\varphi(u_1, \ldots, u_n)$ generate $M/N$. Thus, if we denote $\varphi(u_1, \ldots, u_n)$ as $u_1 \otimes \cdots \otimes u_n$, the tensor product $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors $u_1 \otimes \cdots \otimes u_n$, where $u_1 \in E_1, \ldots, u_n \in E_n$.

For every multilinear map $f \colon E_1 \times \cdots \times E_n \to F$, if a linear map $f_\otimes \colon E_1 \otimes \cdots \otimes E_n \to F$ exists such that $f = f_\otimes \circ \varphi$, since the vectors $u_1 \otimes \cdots \otimes u_n$ generate $E_1 \otimes \cdots \otimes E_n$, the map $f_\otimes$ is uniquely defined by

$$f_\otimes(u_1 \otimes \cdots \otimes u_n) = f(u_1, \ldots, u_n).$$
On the other hand, because $M = K^{(E_1 \times \cdots \times E_n)}$ is free on $I = E_1 \times \cdots \times E_n$, there is a unique linear map $\overline{f} \colon K^{(E_1 \times \cdots \times E_n)} \to F$, such that
$$f = \overline{f} \circ \iota,$$
as in the diagram below:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\iota\;} & K^{(E_1 \times \cdots \times E_n)} \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle \overline{f}} \\ & & F \end{array}$$
Because $f$ is multilinear, note that we must have $\overline{f}(w) = 0$, for every $w \in N$. But then, $\overline{f} \colon M \to F$ induces a linear map $h \colon M/N \to F$, such that
$$f = h \circ \pi \circ \iota,$$
by defining $h([z]) = \overline{f}(z)$, for every $z \in M$, where $[z]$ denotes the equivalence class in $M/N$ of $z \in M$:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\pi \circ \iota\;} & K^{(E_1 \times \cdots \times E_n)}/N \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle h} \\ & & F \end{array}$$
Indeed, the fact that $\overline{f}$ vanishes on $N$ insures that $h$ is well defined on $M/N$, and it is clearly linear by definition. However, we showed that such a linear map $h$ is unique, and thus it agrees with the linear map $f_\otimes$ defined by
$$f_\otimes(u_1 \otimes \cdots \otimes u_n) = f(u_1, \ldots, u_n)$$
on the generators of $E_1 \otimes \cdots \otimes E_n$.

What is important about Theorem 22.5 is not so much the construction itself but the fact that it produces a tensor product with the universal mapping property with respect to multilinear maps. Indeed, Theorem 22.5 yields a canonical isomorphism,

$$L(E_1 \otimes \cdots \otimes E_n, F) \cong L(E_1, \ldots, E_n; F),$$
between the vector space of linear maps, $L(E_1 \otimes \cdots \otimes E_n, F)$, and the vector space of multilinear maps, $L(E_1, \ldots, E_n; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in L(E_1 \otimes \cdots \otimes E_n, F)$. Indeed, $h \circ \varphi$ is clearly multilinear, and since by Theorem 22.5, for every multilinear map, $f \in L(E_1, \ldots, E_n; F)$, there is a unique linear map $f_\otimes \in L(E_1 \otimes \cdots \otimes E_n, F)$ such that $f = f_\otimes \circ \varphi$, the map $- \circ \varphi$ is bijective. As a matter of fact, its inverse is the map
$$f \mapsto f_\otimes.$$
Using the "Hom" notation, the above canonical isomorphism is written
$$\mathrm{Hom}(E_1 \otimes \cdots \otimes E_n, F) \cong \mathrm{Hom}(E_1, \ldots, E_n; F).$$
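As a computational aside (not from the text), when $E_1 = \mathbb{R}^m$ and $E_2 = \mathbb{R}^p$, a simple tensor $u \otimes v$ can be modeled by the Kronecker product of coordinate vectors, and the correspondence between linear maps on the tensor product and bilinear maps becomes explicit. A NumPy sketch under this identification:

```python
import numpy as np

# A bilinear form f on R^m x R^p is the same data as a linear form f_otimes
# on R^(m*p), via f(u, v) = f_otimes(u (x) v), with u (x) v modeled by
# np.kron(u, v).
rng = np.random.default_rng(2)
m, p = 3, 4

B = rng.standard_normal((m, p))        # f(u, v) = u^T B v, a bilinear form
f_otimes = B.reshape(m * p)            # the induced linear form on R^(m*p)

u, v = rng.standard_normal(m), rng.standard_normal(p)
assert np.isclose(u @ B @ v, f_otimes @ np.kron(u, v))

# Bilinearity of the map (u, v) |-> u (x) v itself:
u2, lam = rng.standard_normal(m), 2.5
assert np.allclose(np.kron(u + lam * u2, v),
                   np.kron(u, v) + lam * np.kron(u2, v))
```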

Remarks:

(1) To be very precise, since the tensor product depends on the field, $K$, we should subscript the tensor product sign $\otimes$ with $K$ and write
$$E_1 \otimes_K \cdots \otimes_K E_n.$$
However, we often omit the subscript $K$ unless confusion may arise.

(2) For $F = K$, the base field, we obtain a canonical isomorphism between the vector space $L(E_1 \otimes \cdots \otimes E_n, K)$, and the vector space of multilinear forms $L(E_1, \ldots, E_n; K)$. However, $L(E_1 \otimes \cdots \otimes E_n, K)$ is the dual space, $(E_1 \otimes \cdots \otimes E_n)^*$, and thus, the vector space of multilinear forms $L(E_1, \ldots, E_n; K)$ is canonically isomorphic to $(E_1 \otimes \cdots \otimes E_n)^*$. We write
$$L(E_1, \ldots, E_n; K) \cong (E_1 \otimes \cdots \otimes E_n)^*.$$

The fact that the map $\varphi \colon E_1 \times \cdots \times E_n \to E_1 \otimes \cdots \otimes E_n$ is multilinear, can also be expressed as follows:
$$u_1 \otimes \cdots \otimes (u_i + v_i) \otimes \cdots \otimes u_n = (u_1 \otimes \cdots \otimes u_i \otimes \cdots \otimes u_n) + (u_1 \otimes \cdots \otimes v_i \otimes \cdots \otimes u_n),$$
$$u_1 \otimes \cdots \otimes (\lambda u_i) \otimes \cdots \otimes u_n = \lambda(u_1 \otimes \cdots \otimes u_i \otimes \cdots \otimes u_n).$$
Of course, this is just what we wanted! Tensors in $E_1 \otimes \cdots \otimes E_n$ are also called $n$-tensors, and tensors of the form $u_1 \otimes \cdots \otimes u_n$, where $u_i \in E_i$, are called simple (or decomposable) $n$-tensors. Those $n$-tensors that are not simple are often called compound $n$-tensors.

Not only do tensor products act on spaces, but they also act on linear maps (they are functors). Given two linear maps $f \colon E \to E'$ and $g \colon F \to F'$, we can define $h \colon E \times F \to E' \otimes F'$ by
$$h(u, v) = f(u) \otimes g(v).$$
It is immediately verified that $h$ is bilinear, and thus, it induces a unique linear map
$$f \otimes g \colon E \otimes F \to E' \otimes F',$$
such that
$$(f \otimes g)(u \otimes v) = f(u) \otimes g(v).$$

If we also have linear maps $f' \colon E' \to E''$ and $g' \colon F' \to F''$, we can easily verify that the linear maps $(f' \circ f) \otimes (g' \circ g)$ and $(f' \otimes g') \circ (f \otimes g)$ agree on all vectors of the form $u \otimes v \in E \otimes F$. Since these vectors generate $E \otimes F$, we conclude that
$$(f' \circ f) \otimes (g' \circ g) = (f' \otimes g') \circ (f \otimes g).$$

The generalization to the tensor product $f_1 \otimes \cdots \otimes f_n$ of $n \geq 3$ linear maps $f_i \colon E_i \to F_i$ is immediate, and left to the reader.
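In coordinates, the tensor product of two linear maps is represented by the Kronecker product of their matrices; the following numerical sketch (not part of the text, assuming NumPy) illustrates both the defining identity and functoriality:

```python
import numpy as np

# If f and g have matrices A (2 x 3) and B (4 x 5), then f (x) g acts on
# u (x) v ~ np.kron(u, v) through the Kronecker product np.kron(A, B).
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))   # f : R^3 -> R^2
B = rng.standard_normal((4, 5))   # g : R^5 -> R^4
u, v = rng.standard_normal(3), rng.standard_normal(5)

# (f (x) g)(u (x) v) = f(u) (x) g(v)
assert np.allclose(np.kron(A, B) @ np.kron(u, v), np.kron(A @ u, B @ v))

# Functoriality: (f' o f) (x) (g' o g) = (f' (x) g') o (f (x) g)
A2 = rng.standard_normal((6, 2))  # f' : R^2 -> R^6
B2 = rng.standard_normal((3, 4))  # g' : R^4 -> R^3
assert np.allclose(np.kron(A2 @ A, B2 @ B), np.kron(A2, B2) @ np.kron(A, B))
```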

22.2 Bases of Tensor Products

We showed that $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors of the form $u_1 \otimes \cdots \otimes u_n$. However, these vectors are not linearly independent. This situation can be fixed when considering bases, which is the object of the next proposition.

Proposition 22.6 Given $n \geq 2$ vector spaces $E_1, \ldots, E_n$, if $(u_i^k)_{i \in I_k}$ is a basis for $E_k$, $1 \leq k \leq n$, then the family of vectors
$$\big( u_{i_1}^1 \otimes \cdots \otimes u_{i_n}^n \big)_{(i_1, \ldots, i_n) \in I_1 \times \cdots \times I_n}$$
is a basis of the tensor product $E_1 \otimes \cdots \otimes E_n$.

Proof. For each $k$, $1 \leq k \leq n$, every $v^k \in E_k$ can be written uniquely as
$$v^k = \sum_{j \in I_k} v_j^k u_j^k,$$
for some family of scalars $(v_j^k)_{j \in I_k}$. Let $F$ be any nontrivial vector space. We show that for every family
$$(w_{i_1, \ldots, i_n})_{(i_1, \ldots, i_n) \in I_1 \times \cdots \times I_n},$$
of vectors in $F$, there is some linear map $h \colon E_1 \otimes \cdots \otimes E_n \to F$, such that
$$h(u_{i_1}^1 \otimes \cdots \otimes u_{i_n}^n) = w_{i_1, \ldots, i_n}.$$
Then, by Proposition 22.3, it follows that
$$\big( u_{i_1}^1 \otimes \cdots \otimes u_{i_n}^n \big)_{(i_1, \ldots, i_n) \in I_1 \times \cdots \times I_n}$$
is linearly independent. However, since $(u_i^k)_{i \in I_k}$ is a basis for $E_k$, the $u_{i_1}^1 \otimes \cdots \otimes u_{i_n}^n$ also generate $E_1 \otimes \cdots \otimes E_n$, and thus, they form a basis of $E_1 \otimes \cdots \otimes E_n$.

We define the function $f \colon E_1 \times \cdots \times E_n \to F$ as follows:
$$f\Big(\sum_{j_1 \in I_1} v_{j_1}^1 u_{j_1}^1, \ldots, \sum_{j_n \in I_n} v_{j_n}^n u_{j_n}^n\Big) = \sum_{j_1 \in I_1, \ldots, j_n \in I_n} v_{j_1}^1 \cdots v_{j_n}^n\, w_{j_1, \ldots, j_n}.$$
It is immediately verified that $f$ is multilinear. By the universal mapping property of the tensor product, the linear map $f_\otimes \colon E_1 \otimes \cdots \otimes E_n \to F$ such that $f = f_\otimes \circ \varphi$, is the desired map $h$.

In particular, when each $I_k$ is finite and of size $m_k = \dim(E_k)$, we see that the dimension of the tensor product $E_1 \otimes \cdots \otimes E_n$ is $m_1 \cdots m_n$. As a corollary of Proposition 22.6, if $(u_i^k)_{i \in I_k}$ is a basis for $E_k$, $1 \leq k \leq n$, then every tensor $z \in E_1 \otimes \cdots \otimes E_n$ can be written in a unique way as
$$z = \sum_{(i_1, \ldots, i_n) \in I_1 \times \cdots \times I_n} \lambda_{i_1, \ldots, i_n}\, u_{i_1}^1 \otimes \cdots \otimes u_{i_n}^n,$$
for some unique family of scalars $\lambda_{i_1, \ldots, i_n} \in K$, all zero except for a finite number.
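As a small numerical illustration of Proposition 22.6 (not from the text; Python/NumPy, with simple tensors modeled by Kronecker products), the products of basis vectors of $\mathbb{R}^2$ and $\mathbb{R}^3$ form a basis of a space of dimension $2 \cdot 3 = 6$:

```python
import numpy as np
from itertools import product

# Basis of R^2 (x) R^3 modeled concretely: the Kronecker products e_i (x) f_j
# of standard basis vectors form a basis of R^6, so dim = 2 * 3.
E_basis = np.eye(2)
F_basis = np.eye(3)

tensor_basis = np.array([np.kron(e, f) for e, f in product(E_basis, F_basis)])
assert tensor_basis.shape == (6, 6)
assert np.linalg.matrix_rank(tensor_basis) == 6   # independent, hence a basis

# Any z in R^2 (x) R^3 ~ R^6 has unique coordinates in this basis; here the
# basis happens to be the standard one, so the coordinates are z itself.
z = np.arange(6.0)
coeffs = np.linalg.solve(tensor_basis.T, z)
assert np.allclose(coeffs, z)
```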

22.3 Some Useful Isomorphisms for Tensor Products

Proposition 22.7 Given 3 vector spaces $E, F, G$, there exist unique canonical isomorphisms

(1) $E \otimes F \simeq F \otimes E$

(2) $(E \otimes F) \otimes G \simeq E \otimes (F \otimes G) \simeq E \otimes F \otimes G$

(3) $(E \oplus F) \otimes G \simeq (E \otimes G) \oplus (F \otimes G)$

(4) $K \otimes E \simeq E$

such that respectively

(a) $u \otimes v \mapsto v \otimes u$

(b) $(u \otimes v) \otimes w \mapsto u \otimes (v \otimes w) \mapsto u \otimes v \otimes w$

(c) $(u, v) \otimes w \mapsto (u \otimes w, v \otimes w)$

(d) $\lambda \otimes u \mapsto \lambda u$.

Proof. These isomorphisms are proved using the universal mapping property of tensor products. We illustrate the proof method on (2). Fix some $w \in G$. The map
$$(u, v) \mapsto u \otimes v \otimes w$$
from $E \times F$ to $E \otimes F \otimes G$ is bilinear, and thus, there is a linear map $f_w \colon E \otimes F \to E \otimes F \otimes G$, such that $f_w(u \otimes v) = u \otimes v \otimes w$.

Next, consider the map
$$(z, w) \mapsto f_w(z),$$
from $(E \otimes F) \times G$ into $E \otimes F \otimes G$. It is easily seen to be bilinear, and thus, it induces a linear map
$$f \colon (E \otimes F) \otimes G \to E \otimes F \otimes G,$$
such that $f((u \otimes v) \otimes w) = u \otimes v \otimes w$.

Also consider the map
$$(u, v, w) \mapsto (u \otimes v) \otimes w$$
from $E \times F \times G$ to $(E \otimes F) \otimes G$. It is trilinear, and thus, there is a linear map
$$g \colon E \otimes F \otimes G \to (E \otimes F) \otimes G,$$
such that $g(u \otimes v \otimes w) = (u \otimes v) \otimes w$. Clearly, $f \circ g$ and $g \circ f$ are identity maps, and thus, $f$ and $g$ are isomorphisms. The other cases are similar.

Given any three vector spaces, E,F,G, we have the canonical isomorphism

$$\mathrm{Hom}(E, F; G) \cong \mathrm{Hom}(E, \mathrm{Hom}(F, G)).$$
Indeed, any bilinear map, $f \colon E \times F \to G$, gives the linear map, $\varphi(f) \in \mathrm{Hom}(E, \mathrm{Hom}(F, G))$, where $\varphi(f)(u)$ is the linear map in $\mathrm{Hom}(F, G)$ given by

$$\varphi(f)(u)(v) = f(u, v).$$

Conversely, given a linear map, $g \in \mathrm{Hom}(E, \mathrm{Hom}(F, G))$, we get the bilinear map, $\psi(g)$, given by
$$\psi(g)(u, v) = g(u)(v),$$
and it is clear that $\varphi$ and $\psi$ are mutual inverses. Consequently, we have the important corollary:

Proposition 22.8 For any three vector spaces, E,F,G, we have the canonical isomorphism,

$$\mathrm{Hom}(E \otimes F, G) \cong \mathrm{Hom}(E, \mathrm{Hom}(F, G)).$$
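This "currying" isomorphism can be made concrete in coordinates; the following sketch (not from the text, assuming NumPy; the array shapes are the only convention being imposed) treats a linear map on $E \otimes F$ as a three-index array and shows that fixing the $E$-argument yields a linear map from $F$ to $G$:

```python
import numpy as np

# Currying: a linear map h : E (x) F -> G, stored as a tensor H of shape
# (g, e, f), is the same data as a linear map E -> Hom(F, G): u is sent to
# the (g x f) matrix obtained by contracting H against u.
rng = np.random.default_rng(4)
e, f_dim, g = 3, 4, 2
H = rng.standard_normal((g, e, f_dim))     # h(u (x) v)_k = H[k, i, j] u[i] v[j]

u, v = rng.standard_normal(e), rng.standard_normal(f_dim)

curried = np.einsum('kij,i->kj', H, u)     # the element of Hom(F, G) attached to u
assert np.allclose(np.einsum('kij,i,j->k', H, u, v), curried @ v)
```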

22.4 Duality for Tensor Products

In this section, all vector spaces are assumed to have finite dimension. Let us now see how tensor products behave under duality. For this, we define a pairing between $E_1^* \otimes \cdots \otimes E_n^*$ and $E_1 \otimes \cdots \otimes E_n$ as follows: For any fixed $(v_1^*, \ldots, v_n^*) \in E_1^* \times \cdots \times E_n^*$, we have the multilinear map,
$$l_{v_1^*, \ldots, v_n^*} \colon (u_1, \ldots, u_n) \mapsto v_1^*(u_1) \cdots v_n^*(u_n),$$
from $E_1 \times \cdots \times E_n$ to $K$. The map $l_{v_1^*, \ldots, v_n^*}$ extends uniquely to a linear map, $L_{v_1^*, \ldots, v_n^*} \colon E_1 \otimes \cdots \otimes E_n \longrightarrow K$. We also have the multilinear map,
$$(v_1^*, \ldots, v_n^*) \mapsto L_{v_1^*, \ldots, v_n^*},$$
from $E_1^* \times \cdots \times E_n^*$ to $\mathrm{Hom}(E_1 \otimes \cdots \otimes E_n, K)$, which extends to a linear map, $L$, from $E_1^* \otimes \cdots \otimes E_n^*$ to $\mathrm{Hom}(E_1 \otimes \cdots \otimes E_n, K)$. However, in view of the isomorphism,
$$\mathrm{Hom}(U \otimes V, W) \cong \mathrm{Hom}(U, \mathrm{Hom}(V, W)),$$
we can view $L$ as a linear map,
$$L \colon (E_1^* \otimes \cdots \otimes E_n^*) \otimes (E_1 \otimes \cdots \otimes E_n) \to K,$$
which corresponds to a bilinear map,
$$(E_1^* \otimes \cdots \otimes E_n^*) \times (E_1 \otimes \cdots \otimes E_n) \longrightarrow K,$$

via the isomorphism $(U \otimes V)^* \cong L(U, V; K)$. It is easy to check that this bilinear map is nondegenerate and thus, by Proposition 22.1, we have a canonical isomorphism,
$$(E_1 \otimes \cdots \otimes E_n)^* \cong E_1^* \otimes \cdots \otimes E_n^*.$$
This, together with the isomorphism, $L(E_1, \ldots, E_n; K) \cong (E_1 \otimes \cdots \otimes E_n)^*$, yields a canonical isomorphism
$$L(E_1, \ldots, E_n; K) \cong E_1^* \otimes \cdots \otimes E_n^*.$$
We prove another useful canonical isomorphism that allows us to treat linear maps as tensors.

Let $E$ and $F$ be two vector spaces and let $\alpha \colon E^* \times F \to \mathrm{Hom}(E, F)$ be the map defined such that
$$\alpha(u^*, f)(x) = u^*(x) f,$$
for all $u^* \in E^*$, $f \in F$, and $x \in E$. This map is clearly bilinear and thus, it induces a linear map,
$$\alpha_\otimes \colon E^* \otimes F \to \mathrm{Hom}(E, F),$$
such that
$$\alpha_\otimes(u^* \otimes f)(x) = u^*(x) f.$$

Proposition 22.9 If $E$ and $F$ are vector spaces with $E$ of finite dimension, then the linear map, $\alpha_\otimes \colon E^* \otimes F \to \mathrm{Hom}(E, F)$, is a canonical isomorphism.

Proof. Let $(e_j)_{1 \leq j \leq n}$ be a basis of $E$ and, as usual, let $e_j^* \in E^*$ be the linear form defined by
$$e_j^*(e_k) = \delta_{j,k},$$
where $\delta_{j,k} = 1$ iff $j = k$ and $0$ otherwise. We know that $(e_j^*)_{1 \leq j \leq n}$ is a basis of $E^*$ (this is where we use the finite dimension of $E$). Now, for any linear map, $f \in \mathrm{Hom}(E, F)$, for every $x = x_1 e_1 + \cdots + x_n e_n \in E$, we have
$$f(x) = f(x_1 e_1 + \cdots + x_n e_n) = x_1 f(e_1) + \cdots + x_n f(e_n) = e_1^*(x) f(e_1) + \cdots + e_n^*(x) f(e_n).$$
Consequently, every linear map, $f \in \mathrm{Hom}(E, F)$, can be expressed as
$$f(x) = e_1^*(x) f_1 + \cdots + e_n^*(x) f_n,$$
for some $f_i \in F$. Furthermore, if we apply $f$ to $e_i$, we get $f(e_i) = f_i$, so the $f_i$ are unique. Observe that
$$(\alpha_\otimes(e_1^* \otimes f_1 + \cdots + e_n^* \otimes f_n))(x) = \sum_{i=1}^n (\alpha_\otimes(e_i^* \otimes f_i))(x) = \sum_{i=1}^n e_i^*(x) f_i.$$
Thus, $\alpha_\otimes$ is surjective. As $(e_j^*)_{1 \leq j \leq n}$ is a basis of $E^*$, the tensors $e_j^* \otimes f$, with $f \in F$, span $E^* \otimes F$. Thus, every element of $E^* \otimes F$ is of the form $\sum_{i=1}^n e_i^* \otimes f_i$, for some $f_i \in F$. Assume
$$\alpha_\otimes\Big(\sum_{i=1}^n e_i^* \otimes f_i\Big) = \alpha_\otimes\Big(\sum_{i=1}^n e_i^* \otimes f_i'\Big) = f,$$
for some $f_i, f_i' \in F$ and some $f \in \mathrm{Hom}(E, F)$. Then for every $x \in E$,
$$\sum_{i=1}^n e_i^*(x) f_i = \sum_{i=1}^n e_i^*(x) f_i' = f(x).$$
Since the $f_i$ and $f_i'$ are uniquely determined by the linear map, $f$, we must have $f_i = f_i'$ and $\alpha_\otimes$ is injective. Therefore, $\alpha_\otimes$ is a bijection.

Note that in Proposition 22.9, the space $F$ may have infinite dimension but $E$ has finite dimension. In view of the canonical isomorphism
$$\mathrm{Hom}(E_1, \ldots, E_n; F) \cong \mathrm{Hom}(E_1 \otimes \cdots \otimes E_n, F)$$
and the canonical isomorphism $(E_1 \otimes \cdots \otimes E_n)^* \cong E_1^* \otimes \cdots \otimes E_n^*$, where the $E_i$'s are finite-dimensional, Proposition 22.9 yields the canonical isomorphism

$$\mathrm{Hom}(E_1, \ldots, E_n; F) \cong E_1^* \otimes \cdots \otimes E_n^* \otimes F.$$
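Concretely, the isomorphism $\alpha_\otimes \colon E^* \otimes F \to \mathrm{Hom}(E, F)$ sends a sum of simple tensors to a sum of rank-one maps. A minimal sketch (not from the text; Python/NumPy, with covectors and vectors stored as arrays):

```python
import numpy as np

# E* (x) F ~ Hom(E, F): a sum of simple tensors u_i* (x) f_i corresponds to
# the linear map x |-> sum_i u_i*(x) f_i, i.e. a sum of rank-one matrices.
rng = np.random.default_rng(5)
dimE, dimF = 4, 3

us = rng.standard_normal((2, dimE))   # two covectors u_1*, u_2* on E
fs = rng.standard_normal((2, dimF))   # two vectors  f_1,  f_2  in F

M = sum(np.outer(f, u) for u, f in zip(us, fs))   # alpha_otimes(sum u_i* (x) f_i)

x = rng.standard_normal(dimE)
expected = sum((u @ x) * f for u, f in zip(us, fs))   # sum_i u_i*(x) f_i
assert np.allclose(M @ x, expected)
```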

22.5 Tensor Algebras

The tensor product
$$\underbrace{V \otimes \cdots \otimes V}_{m}$$
is also denoted as
$$\bigotimes^m V \quad \text{or} \quad V^{\otimes m}$$
and is called the $m$-th tensor power of $V$ (with $V^{\otimes 1} = V$, and $V^{\otimes 0} = K$). We can pack all the tensor powers of $V$ into the "big" vector space,
$$T(V) = \bigoplus_{m \geq 0} V^{\otimes m},$$
also denoted $T^\bullet(V)$, to avoid confusion with the tangent bundle. This is an interesting object because we can define a multiplication on it which makes it into an algebra called the tensor algebra of $V$. When $V$ is of finite dimension $n$, this space corresponds to the algebra of polynomials with coefficients in $K$ in $n$ noncommuting variables.

Let us recall the definition of an algebra over a field. Let $K$ denote any (commutative) field, although for our purposes, we may assume that $K = \mathbb{R}$ (and occasionally, $K = \mathbb{C}$). Since we will only be dealing with associative algebras with a multiplicative unit, we only define algebras of this kind.

Definition 22.3 Given a field, $K$, a $K$-algebra is a $K$-vector space, $A$, together with a bilinear operation, $\cdot \colon A \times A \to A$, called multiplication, which makes $A$ into a ring with unity, $1$ (or $1_A$, when we want to be very precise). This means that $\cdot$ is associative and that there is a multiplicative identity element, $1$, so that $1 \cdot a = a \cdot 1 = a$, for all $a \in A$. Given two $K$-algebras $A$ and $B$, a $K$-algebra homomorphism, $h \colon A \to B$, is a linear map that is also a ring homomorphism, with $h(1_A) = 1_B$.

For example, the ring, $M_n(K)$, of all $n \times n$ matrices over a field, $K$, is a $K$-algebra.

There is an obvious notion of ideal of a $K$-algebra: An ideal, $\mathfrak{A} \subseteq A$, is a linear subspace of $A$ that is also a two-sided ideal with respect to multiplication in $A$. If the field $K$ is understood, we usually simply say an algebra instead of a $K$-algebra.

We would like to define a multiplication operation on $T(V)$ which makes it into a $K$-algebra. As
$$T(V) = \bigoplus_{i \geq 0} V^{\otimes i},$$
for every $n \geq 0$, there is a natural injection $\iota_n \colon V^{\otimes n} \to T(V)$, and in particular, an injection $\iota_0 \colon K \to T(V)$. The multiplicative unit, $\mathbf{1}$, of $T(V)$ is the image, $\iota_0(1)$, in $T(V)$ of the unit, $1$, of the field $K$. Since every $v \in T(V)$ can be expressed as a finite sum
$$v = \iota_{n_1}(v_1) + \cdots + \iota_{n_k}(v_k),$$
where $v_i \in V^{\otimes n_i}$ and the $n_i$ are natural numbers with $n_i \neq n_j$ if $i \neq j$, to define multiplication in $T(V)$, using bilinearity, it is enough to define multiplication operations, $\cdot \colon V^{\otimes m} \times V^{\otimes n} \longrightarrow V^{\otimes(m+n)}$, which, using the isomorphisms, $V^{\otimes n} \cong \iota_n(V^{\otimes n})$, yield multiplication operations, $\cdot \colon \iota_m(V^{\otimes m}) \times \iota_n(V^{\otimes n}) \longrightarrow \iota_{m+n}(V^{\otimes(m+n)})$. More precisely, we use the canonical isomorphism,
$$V^{\otimes m} \otimes V^{\otimes n} \cong V^{\otimes(m+n)},$$
which defines a bilinear operation,

$$V^{\otimes m} \times V^{\otimes n} \longrightarrow V^{\otimes(m+n)},$$
which is taken as the multiplication operation. The isomorphism $V^{\otimes m} \otimes V^{\otimes n} \cong V^{\otimes(m+n)}$ can be established by proving the isomorphisms
$$V^{\otimes m} \otimes V^{\otimes n} \cong V^{\otimes m} \otimes \underbrace{V \otimes \cdots \otimes V}_{n}$$
$$V^{\otimes m} \otimes \underbrace{V \otimes \cdots \otimes V}_{n} \cong V^{\otimes(m+n)},$$
which can be shown using methods similar to those used to prove associativity. Of course, the multiplication, $V^{\otimes m} \times V^{\otimes n} \longrightarrow V^{\otimes(m+n)}$, is defined so that
$$(v_1 \otimes \cdots \otimes v_m) \cdot (w_1 \otimes \cdots \otimes w_n) = v_1 \otimes \cdots \otimes v_m \otimes w_1 \otimes \cdots \otimes w_n.$$

(This has to be made rigorous by using isomorphisms involving the associativity of tensor products; for details, see Atiyah and Macdonald [9].)

Remark: It is important to note that multiplication in $T(V)$ is not commutative. Also, in all rigor, the unit, $\mathbf{1}$, of $T(V)$ is not equal to $1$, the unit of the field $K$. However, in view of the injection $\iota_0 \colon K \to T(V)$, for the sake of notational simplicity, we will denote $\mathbf{1}$ by $1$. More generally, in view of the injections $\iota_n \colon V^{\otimes n} \to T(V)$, we identify elements of $V^{\otimes n}$ with their images in $T(V)$.

The algebra, $T(V)$, satisfies a universal mapping property which shows that it is unique up to isomorphism. For simplicity of notation, let $i \colon V \to T(V)$ be the natural injection of $V$ into $T(V)$.

Proposition 22.10 Given any $K$-algebra, $A$, for any linear map, $f \colon V \to A$, there is a unique $K$-algebra homomorphism, $\overline{f} \colon T(V) \to A$, so that
$$f = \overline{f} \circ i,$$
as in the diagram below:
$$\begin{array}{ccc} V & \xrightarrow{\;i\;} & T(V) \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle \overline{f}} \\ & & A \end{array}$$

Proof. Left as an exercise (use Theorem 22.5).

Most algebras of interest arise as well-chosen quotients of the tensor algebra $T(V)$. This is true for the exterior algebra, $\bigwedge(V)$ (also called Grassmann algebra), where we take the quotient of $T(V)$ modulo the ideal generated by all elements of the form $v \otimes v$, where $v \in V$, and for the symmetric algebra, $\mathrm{Sym}(V)$, where we take the quotient of $T(V)$ modulo the ideal generated by all elements of the form $v \otimes w - w \otimes v$, where $v, w \in V$.

Algebras such as $T(V)$ are graded, in the sense that there is a sequence of subspaces, $V^{\otimes n} \subseteq T(V)$, such that
$$T(V) = \bigoplus_{n \geq 0} V^{\otimes n}$$
and the multiplication, $\otimes$, behaves well w.r.t. the grading, i.e., $\otimes \colon V^{\otimes m} \times V^{\otimes n} \to V^{\otimes(m+n)}$. Generally, a $K$-algebra, $E$, is said to be a graded algebra iff there is a sequence of subspaces, $E^n \subseteq E$, such that
$$E = \bigoplus_{n \geq 0} E^n$$
($E^0 = K$) and the multiplication, $\cdot$, respects the grading, that is, $\cdot \colon E^m \times E^n \to E^{m+n}$. Elements in $E^n$ are called homogeneous elements of rank (or degree) $n$.

In differential geometry and in physics it is necessary to consider slightly more general tensors.

Definition 22.4 Given a vector space, $V$, for any pair of nonnegative integers, $(r, s)$, the tensor space, $T^{r,s}(V)$, of type $(r, s)$, is the tensor product
$$T^{r,s}(V) = V^{\otimes r} \otimes (V^*)^{\otimes s} = \underbrace{V \otimes \cdots \otimes V}_{r} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{s},$$
with $T^{0,0}(V) = K$. We also define the tensor algebra, $T^{\bullet,\bullet}(V)$, as the direct sum
$$T^{\bullet,\bullet}(V) = \bigoplus_{r,s \geq 0} T^{r,s}(V).$$
Tensors in $T^{r,s}(V)$ are called homogeneous of degree $(r, s)$.

Note that tensors in $T^{r,0}(V)$ are just our "old tensors" in $V^{\otimes r}$. We make $T^{\bullet,\bullet}(V)$ into an algebra by defining multiplication operations,
$$T^{r_1,s_1}(V) \times T^{r_2,s_2}(V) \longrightarrow T^{r_1+r_2,s_1+s_2}(V),$$
in the usual way, namely: For $u = u_1 \otimes \cdots \otimes u_{r_1} \otimes u_1^* \otimes \cdots \otimes u_{s_1}^*$ and $v = v_1 \otimes \cdots \otimes v_{r_2} \otimes v_1^* \otimes \cdots \otimes v_{s_2}^*$, let
$$u \cdot v = u_1 \otimes \cdots \otimes u_{r_1} \otimes v_1 \otimes \cdots \otimes v_{r_2} \otimes u_1^* \otimes \cdots \otimes u_{s_1}^* \otimes v_1^* \otimes \cdots \otimes v_{s_2}^*.$$

Denote by $\mathrm{Hom}(V^r, (V^*)^s; W)$ the vector space of all multilinear maps from $V^r \times (V^*)^s$ to $W$. Then, we have the universal mapping property which asserts that there is a canonical isomorphism
$$\mathrm{Hom}(T^{r,s}(V), W) \cong \mathrm{Hom}(V^r, (V^*)^s; W).$$
In particular,
$$(T^{r,s}(V))^* \cong \mathrm{Hom}(V^r, (V^*)^s; K).$$
For finite dimensional vector spaces, the duality of Section 22.4 is also easily extended to the tensor spaces $T^{r,s}(V)$. We define the pairing
$$T^{r,s}(V^*) \times T^{r,s}(V) \longrightarrow K$$
as follows: If
$$v^* = v_1^* \otimes \cdots \otimes v_r^* \otimes u_{r+1} \otimes \cdots \otimes u_{r+s} \in T^{r,s}(V^*)$$
and
$$u = u_1 \otimes \cdots \otimes u_r \otimes v_{r+1}^* \otimes \cdots \otimes v_{r+s}^* \in T^{r,s}(V),$$
then
$$(v^*, u) = v_1^*(u_1) \cdots v_{r+s}^*(u_{r+s}).$$
This is a nondegenerate pairing and thus, we get a canonical isomorphism,
$$(T^{r,s}(V))^* \cong T^{r,s}(V^*).$$
Consequently, we get a canonical isomorphism,
$$T^{r,s}(V^*) \cong \mathrm{Hom}(V^r, (V^*)^s; K).$$

Remark: The tensor spaces, $T^{r,s}(V)$, are also denoted $T^r_s(V)$. A tensor, $\alpha \in T^{r,s}(V)$, is said to be contravariant in the first $r$ arguments and covariant in the last $s$ arguments. This terminology refers to the way tensors behave under coordinate changes. Given a basis, $(e_1, \ldots, e_n)$, of $V$, if $(e_1^*, \ldots, e_n^*)$ denotes the dual basis, then every tensor $\alpha \in T^{r,s}(V)$ is given by an expression of the form
$$\alpha = \sum_{i_1, \ldots, i_r, j_1, \ldots, j_s} a^{i_1, \ldots, i_r}_{j_1, \ldots, j_s}\, e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e_{j_1}^* \otimes \cdots \otimes e_{j_s}^*.$$
The tradition in classical tensor notation is to use lower indices on vectors and upper indices on linear forms, and in accordance with the Einstein summation convention, the position of the indices on the coefficients is reversed. The Einstein summation convention is to assume that a summation is performed for all values of every index that appears simultaneously once as an upper index and once as a lower index. According to this convention, the tensor $\alpha$ above is written
$$\alpha = a^{i_1, \ldots, i_r}_{j_1, \ldots, j_s}\, e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes e^{j_s}.$$
An older view of tensors is that they are multidimensional arrays of coefficients,

$$\big( a^{i_1, \ldots, i_r}_{j_1, \ldots, j_s} \big),$$
subject to the rules for changes of bases.
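The Einstein convention is essentially what NumPy's einsum implements: repeated indices are summed automatically. A small illustration (not from the text; the tensor and its components are made up for the example):

```python
import numpy as np

# Einstein summation in practice: here a is a type (1, 2) tensor with entries
# a^i_{jk}, u has components u^j, v has components v^k, and the contraction
# w^i = a^i_{jk} u^j v^k is a vector (a type (1, 0) tensor).
rng = np.random.default_rng(6)
a = rng.standard_normal((3, 3, 3))    # a^i_{jk}
u = rng.standard_normal(3)            # u^j
v = rng.standard_normal(3)            # v^k

w = np.einsum('ijk,j,k->i', a, u, v)

# The same computation written with explicit sums:
w_explicit = np.array([sum(a[i, j, k] * u[j] * v[k]
                           for j in range(3) for k in range(3))
                       for i in range(3)])
assert np.allclose(w, w_explicit)
```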

Another operation on general tensors, contraction, is useful in differential geometry.

Definition 22.5 For all $r, s \geq 1$, the contraction, $c_{i,j} \colon T^{r,s}(V) \to T^{r-1,s-1}(V)$, with $1 \leq i \leq r$ and $1 \leq j \leq s$, is the linear map defined on generators by
$$c_{i,j}(u_1 \otimes \cdots \otimes u_r \otimes v_1^* \otimes \cdots \otimes v_s^*) = v_j^*(u_i)\, u_1 \otimes \cdots \otimes \widehat{u_i} \otimes \cdots \otimes u_r \otimes v_1^* \otimes \cdots \otimes \widehat{v_j^*} \otimes \cdots \otimes v_s^*,$$
where the hat over an argument means that it should be omitted.

Let us figure out what is $c_{1,1} \colon T^{1,1}(V) \to \mathbb{R}$, that is $c_{1,1} \colon V \otimes V^* \to \mathbb{R}$. If $(e_1, \ldots, e_n)$ is a basis of $V$ and $(e_1^*, \ldots, e_n^*)$ is the dual basis, every $h \in V \otimes V^* \cong \mathrm{Hom}(V, V)$ can be expressed as
$$h = \sum_{i,j=1}^n a_{ij}\, e_i \otimes e_j^*.$$
As
$$c_{1,1}(e_i \otimes e_j^*) = \delta_{i,j},$$
we get
$$c_{1,1}(h) = \sum_{i=1}^n a_{ii} = \mathrm{tr}(h),$$
where $\mathrm{tr}(h)$ is the trace of $h$, where $h$ is viewed as the linear map given by the matrix, $(a_{ij})$. Actually, since $c_{1,1}$ is defined independently of any basis, $c_{1,1}$ provides an intrinsic definition of the trace of a linear map, $h \in \mathrm{Hom}(V, V)$.
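Numerically, the contraction $c_{1,1}$ is just the trace of the coefficient matrix, and its basis independence can be checked directly (a sketch, not from the text, assuming NumPy):

```python
import numpy as np

# The contraction c_{1,1} on T^{1,1}(V) = V (x) V*: for h = sum a_ij e_i (x) e_j*,
# contracting the upper index against the lower one gives sum_i a_ii = tr(h).
rng = np.random.default_rng(7)
a = rng.standard_normal((5, 5))            # coefficients a_ij of h

assert np.isclose(np.einsum('ii->', a), np.trace(a))

# Basis independence: a change of basis by an invertible P turns the
# coefficient matrix into P^{-1} a P, and the contraction is unchanged.
P = rng.standard_normal((5, 5)) + 5 * np.eye(5)
a_new = np.linalg.inv(P) @ a @ P
assert np.isclose(np.einsum('ii->', a_new), np.trace(a))
```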

Remark: Using the Einstein summation convention, if

$$\alpha = a^{i_1, \ldots, i_r}_{j_1, \ldots, j_s}\, e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes e^{j_s},$$
then
$$c_{k,l}(\alpha) = a^{i_1, \ldots, i_{k-1}, i, i_{k+1}, \ldots, i_r}_{j_1, \ldots, j_{l-1}, i, j_{l+1}, \ldots, j_s}\, e_{i_1} \otimes \cdots \otimes \widehat{e_{i_k}} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes \widehat{e^{j_l}} \otimes \cdots \otimes e^{j_s}.$$

If $E$ and $F$ are two $K$-algebras, we know that their tensor product, $E \otimes F$, exists as a vector space. We can make $E \otimes F$ into an algebra as well. Indeed, we have the multilinear map
$$E \times F \times E \times F \longrightarrow E \otimes F$$
given by $(a, b, c, d) \mapsto (ac) \otimes (bd)$, where $ac$ is the product of $a$ and $c$ in $E$ and $bd$ is the product of $b$ and $d$ in $F$. By the universal mapping property, we get a linear map,
$$E \otimes F \otimes E \otimes F \longrightarrow E \otimes F.$$
Using the isomorphism,
$$E \otimes F \otimes E \otimes F \cong (E \otimes F) \otimes (E \otimes F),$$
we get a linear map,
$$(E \otimes F) \otimes (E \otimes F) \longrightarrow E \otimes F,$$
and thus, a bilinear map,
$$(E \otimes F) \times (E \otimes F) \longrightarrow E \otimes F,$$
which is our multiplication operation in $E \otimes F$. This multiplication is determined by
$$(a \otimes b) \cdot (c \otimes d) = (ac) \otimes (bd).$$
One immediately checks that $E \otimes F$ with this multiplication is a $K$-algebra.

We now turn to symmetric tensors.

22.6 Symmetric Tensor Powers

Our goal is to come up with a notion of tensor product that will allow us to treat symmetric multilinear maps as linear maps. First, note that we have to restrict ourselves to a single vector space, $E$, rather than $n$ vector spaces $E_1, \ldots, E_n$, so that symmetry makes sense. Recall that a multilinear map, $f \colon E^n \to F$, is symmetric iff
$$f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}) = f(u_1, \ldots, u_n),$$
for all $u_i \in E$ and all permutations, $\sigma \colon \{1, \ldots, n\} \to \{1, \ldots, n\}$. The group of permutations on $\{1, \ldots, n\}$ (the symmetric group) is denoted $S_n$. The vector space of all symmetric multilinear maps, $f \colon E^n \to F$, is denoted by $S^n(E; F)$. Note that $S^1(E; F) = \mathrm{Hom}(E, F)$.

We could proceed directly as in Theorem 22.5, and construct symmetric tensor products from scratch. However, since we already have the notion of a tensor product, there is a more economical method. First, we define symmetric tensor powers.

Definition 22.6 An $n$-th symmetric tensor power of a vector space $E$, where $n \geq 1$, is a vector space $S$, together with a symmetric multilinear map $\varphi \colon E^n \to S$, such that, for every vector space $F$ and for every symmetric multilinear map $f \colon E^n \to F$, there is a unique linear map $f_\odot \colon S \to F$, with
$$f(u_1, \ldots, u_n) = f_\odot(\varphi(u_1, \ldots, u_n)),$$
for all $u_1, \ldots, u_n \in E$, or for short
$$f = f_\odot \circ \varphi.$$
Equivalently, there is a unique linear map $f_\odot$ such that the following diagram commutes:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi\;} & S \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle f_\odot} \\ & & F \end{array}$$

First, we show that any two symmetric $n$-th tensor powers $(S_1, \varphi_1)$ and $(S_2, \varphi_2)$ for $E$, are isomorphic.

Proposition 22.11 Given any two symmetric $n$-th tensor powers $(S_1, \varphi_1)$ and $(S_2, \varphi_2)$ for $E$, there is an isomorphism $h \colon S_1 \to S_2$ such that
$$\varphi_2 = h \circ \varphi_1.$$

Proof. Replace tensor product by $n$-th symmetric tensor power in the proof of Proposition 22.4.

We now give a construction that produces a symmetric $n$-th tensor power of a vector space $E$.

Theorem 22.12 Given a vector space $E$, a symmetric $n$-th tensor power $(\mathrm{Sym}^n(E), \varphi)$ for $E$ can be constructed ($n \geq 1$). Furthermore, denoting $\varphi(u_1, \ldots, u_n)$ as $u_1 \odot \cdots \odot u_n$, the symmetric tensor power $\mathrm{Sym}^n(E)$ is generated by the vectors $u_1 \odot \cdots \odot u_n$, where $u_1, \ldots, u_n \in E$, and for every symmetric multilinear map $f \colon E^n \to F$, the unique linear map $f_\odot \colon \mathrm{Sym}^n(E) \to F$ such that $f = f_\odot \circ \varphi$, is defined by
$$f_\odot(u_1 \odot \cdots \odot u_n) = f(u_1, \ldots, u_n),$$
on the generators $u_1 \odot \cdots \odot u_n$ of $\mathrm{Sym}^n(E)$.

Proof. The tensor power $E^{\otimes n}$ is too big, and thus, we define an appropriate quotient. Let $C$ be the subspace of $E^{\otimes n}$ generated by the vectors of the form
$$u_1 \otimes \cdots \otimes u_n - u_{\sigma(1)} \otimes \cdots \otimes u_{\sigma(n)},$$
for all $u_i \in E$, and all permutations $\sigma \colon \{1, \ldots, n\} \to \{1, \ldots, n\}$. We claim that the quotient space $(E^{\otimes n})/C$ does the job.

Let $p \colon E^{\otimes n} \to (E^{\otimes n})/C$ be the quotient map. Let $\varphi \colon E^n \to (E^{\otimes n})/C$ be the map
$$(u_1, \ldots, u_n) \mapsto p(u_1 \otimes \cdots \otimes u_n),$$
or equivalently, $\varphi = p \circ \varphi_0$, where $\varphi_0(u_1, \ldots, u_n) = u_1 \otimes \cdots \otimes u_n$.

Let us denote $\varphi(u_1, \ldots, u_n)$ as $u_1 \odot \cdots \odot u_n$. It is clear that $\varphi$ is symmetric. Since the vectors $u_1 \otimes \cdots \otimes u_n$ generate $E^{\otimes n}$, and $p$ is surjective, the vectors $u_1 \odot \cdots \odot u_n$ generate $(E^{\otimes n})/C$.

Given any symmetric multilinear map $f \colon E^n \to F$, there is a linear map $f_\otimes \colon E^{\otimes n} \to F$ such that $f = f_\otimes \circ \varphi_0$, as in the diagram below:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi_0\;} & E^{\otimes n} \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle f_\otimes} \\ & & F \end{array}$$

However, since $f$ is symmetric, we have $f_\otimes(z) = 0$ for every $z \in C$. Thus, we get an induced linear map $h \colon (E^{\otimes n})/C \to F$, such that $h([z]) = f_\otimes(z)$, where $[z]$ is the equivalence class in $(E^{\otimes n})/C$ of $z \in E^{\otimes n}$:
$$\begin{array}{ccc} E^n & \xrightarrow{\;p \circ \varphi_0\;} & (E^{\otimes n})/C \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle h} \\ & & F \end{array}$$
However, if a linear map $f_\odot \colon (E^{\otimes n})/C \to F$ exists, since the vectors $u_1 \odot \cdots \odot u_n$ generate $(E^{\otimes n})/C$, we must have
$$f_\odot(u_1 \odot \cdots \odot u_n) = f(u_1, \ldots, u_n),$$
which shows that $h$ and $f_\odot$ agree. Thus, $\mathrm{Sym}^n(E) = (E^{\otimes n})/C$ and $\varphi$ constitute a symmetric $n$-th tensor power of $E$.

Again, the actual construction is not important. What is important is that the symmetric $n$-th power has the universal mapping property with respect to symmetric multilinear maps.

Remark: The notation $\odot$ for the commutative multiplication of symmetric tensor powers is not standard. Another notation commonly used is $\cdot$. We often abbreviate "symmetric tensor power" as "symmetric power". The symmetric power, $\mathrm{Sym}^n(E)$, is also denoted $\mathrm{Sym}^n E$ or $S^n(E)$. To be consistent with the use of $\odot$, we could have used the notation $\bigodot^n E$. Clearly, $\mathrm{Sym}^1(E) \cong E$ and it is convenient to set $\mathrm{Sym}^0(E) = K$.

The fact that the map $\varphi \colon E^n \to \mathrm{Sym}^n(E)$ is symmetric and multilinear, can also be expressed as follows:
$$u_1 \odot \cdots \odot (u_i + v_i) \odot \cdots \odot u_n = (u_1 \odot \cdots \odot u_i \odot \cdots \odot u_n) + (u_1 \odot \cdots \odot v_i \odot \cdots \odot u_n),$$
$$u_1 \odot \cdots \odot (\lambda u_i) \odot \cdots \odot u_n = \lambda(u_1 \odot \cdots \odot u_i \odot \cdots \odot u_n),$$
$$u_{\sigma(1)} \odot \cdots \odot u_{\sigma(n)} = u_1 \odot \cdots \odot u_n,$$
for all permutations $\sigma \in S_n$.

The last identity shows that the "operation" $\odot$ is commutative. Thus, we can view the symmetric tensor $u_1 \odot \cdots \odot u_n$ as a multiset.

Theorem 22.12 yields a canonical isomorphism

$$\mathrm{Hom}(\mathrm{Sym}^n(E), F) \cong S^n(E; F),$$
between the vector space of linear maps $\mathrm{Hom}(\mathrm{Sym}^n(E), F)$, and the vector space of symmetric multilinear maps $S^n(E; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in \mathrm{Hom}(\mathrm{Sym}^n(E), F)$. Indeed, $h \circ \varphi$ is clearly symmetric multilinear, and since by Theorem 22.12, for every symmetric multilinear map $f \in S^n(E; F)$, there is a unique linear map $f_\odot \in \mathrm{Hom}(\mathrm{Sym}^n(E), F)$ such that $f = f_\odot \circ \varphi$, the map $- \circ \varphi$ is bijective. As a matter of fact, its inverse is the map
$$f \mapsto f_\odot.$$
In particular, when $F = K$, we get a canonical isomorphism
$$(\mathrm{Sym}^n(E))^* \cong S^n(E; K).$$

Symmetric tensors in $\mathrm{Sym}^n(E)$ are also called symmetric $n$-tensors, and tensors of the form $u_1 \odot \cdots \odot u_n$, where $u_i \in E$, are called simple (or decomposable) symmetric $n$-tensors.

Those symmetric $n$-tensors that are not simple are often called compound symmetric $n$-tensors.

Given two linear maps $f \colon E \to E'$ and $g \colon E \to E'$, we can define $h \colon E \times E \to \mathrm{Sym}^2(E')$ by
$$h(u, v) = f(u) \odot g(v).$$
It is immediately verified that $h$ is symmetric bilinear, and thus, it induces a unique linear map
$$f \odot g \colon \mathrm{Sym}^2(E) \to \mathrm{Sym}^2(E'),$$
such that
$$(f \odot g)(u \odot v) = f(u) \odot g(v).$$

If we also have linear maps $f' \colon E' \to E''$ and $g' \colon E' \to E''$, we can easily verify that
$$(f' \circ f) \odot (g' \circ g) = (f' \odot g') \circ (f \odot g).$$

The generalization to the symmetric tensor product $f_1 \odot \cdots \odot f_n$ of $n \geq 3$ linear maps $f_i \colon E \to E'$ is immediate, and left to the reader.

22.7 Bases of Symmetric Powers

The vectors $u_1 \odot \cdots \odot u_n$, where $u_1, \ldots, u_n \in E$, generate $\mathrm{Sym}^n(E)$, but they are not linearly independent. We will prove a version of Proposition 22.6 for symmetric tensor powers. For this, recall that a (finite) multiset over a set $I$ is a function $M \colon I \to \mathbb{N}$, such that $M(i) \neq 0$ for finitely many $i \in I$, and that the set of all multisets over $I$ is denoted as $\mathbb{N}^{(I)}$. We let $\mathrm{dom}(M) = \{ i \in I \mid M(i) \neq 0 \}$, which is a finite set. Then, for any multiset $M \in \mathbb{N}^{(I)}$, note that the sum $\sum_{i \in I} M(i)$ makes sense, since $\sum_{i \in I} M(i) = \sum_{i \in \mathrm{dom}(M)} M(i)$, and $\mathrm{dom}(M)$ is finite. For every multiset $M \in \mathbb{N}^{(I)}$, for any $n \geq 2$, we define the set $J_M$ of functions $\eta \colon \{1, \ldots, n\} \to \mathrm{dom}(M)$, as follows:
$$J_M = \Big\{ \eta \;\Big|\; \eta \colon \{1, \ldots, n\} \to \mathrm{dom}(M),\ |\eta^{-1}(i)| = M(i),\ i \in \mathrm{dom}(M),\ \sum_{i \in I} M(i) = n \Big\}.$$
In other words, if $\sum_{i \in I} M(i) = n$ and $\mathrm{dom}(M) = \{i_1, \ldots, i_k\}$,¹ any function $\eta \in J_M$ specifies a sequence of length $n$, consisting of $M(i_1)$ occurrences of $i_1$, $M(i_2)$ occurrences of $i_2$, ..., $M(i_k)$ occurrences of $i_k$. Intuitively, any $\eta$ defines a "permutation" of the sequence (of length $n$)
$$\langle \underbrace{i_1, \ldots, i_1}_{M(i_1)}, \underbrace{i_2, \ldots, i_2}_{M(i_2)}, \ldots, \underbrace{i_k, \ldots, i_k}_{M(i_k)} \rangle.$$

¹ Note that we must have $k \leq n$.

Given any $k \geq 1$, and any $u \in E$, we denote
$$\underbrace{u \odot \cdots \odot u}_{k}$$
as $u^{\odot k}$.

We can now prove the following Proposition.

Proposition 22.13 Given a vector space $E$, if $(u_i)_{i \in I}$ is a basis for $E$, then the family of vectors
$$\Big( u_{i_1}^{\odot M(i_1)} \odot \cdots \odot u_{i_k}^{\odot M(i_k)} \Big)_{M \in \mathbb{N}^{(I)},\ \sum_{i \in I} M(i) = n,\ \{i_1, \ldots, i_k\} = \mathrm{dom}(M)}$$
is a basis of the symmetric $n$-th tensor power $\mathrm{Sym}^n(E)$.

Proof. The proof is very similar to that of Proposition 22.6. For any nontrivial vector space $F$, for any family of vectors
$$(w_M)_{M \in \mathbb{N}^{(I)},\ \sum_{i \in I} M(i) = n},$$
we show the existence of a linear map $h \colon \mathrm{Sym}^n(E) \to F$, such that for every $M \in \mathbb{N}^{(I)}$ with $\sum_{i \in I} M(i) = n$, we have
$$h\big(u_{i_1}^{\odot M(i_1)} \odot \cdots \odot u_{i_k}^{\odot M(i_k)}\big) = w_M,$$
where $\{i_1, \ldots, i_k\} = \mathrm{dom}(M)$. We define the map $f \colon E^n \to F$ as follows:
$$f\Big(\sum_{j_1 \in I} v_{j_1}^1 u_{j_1}, \ldots, \sum_{j_n \in I} v_{j_n}^n u_{j_n}\Big) = \sum_{\substack{M \in \mathbb{N}^{(I)} \\ \sum_{i \in I} M(i) = n}} \Big( \sum_{\eta \in J_M} v_{\eta(1)}^1 \cdots v_{\eta(n)}^n \Big) w_M.$$
It is not difficult to verify that $f$ is symmetric and multilinear. By the universal mapping property of the symmetric tensor product, the linear map $f_\odot \colon \mathrm{Sym}^n(E) \to F$ such that $f = f_\odot \circ \varphi$, is the desired map $h$. Then, by Proposition 22.3, it follows that the family
$$\Big( u_{i_1}^{\odot M(i_1)} \odot \cdots \odot u_{i_k}^{\odot M(i_k)} \Big)_{M \in \mathbb{N}^{(I)},\ \sum_{i \in I} M(i) = n,\ \{i_1, \ldots, i_k\} = \mathrm{dom}(M)}$$
is linearly independent. Using the commutativity of $\odot$, we can also show that these vectors generate $\mathrm{Sym}^n(E)$, and thus, they form a basis for $\mathrm{Sym}^n(E)$. The details are left as an exercise.

As a consequence, when $I$ is finite, say of size $p = \dim(E)$, the dimension of $\mathrm{Sym}^n(E)$ is the number of finite sequences $(j_1, \ldots, j_p)$, such that $j_1 + \cdots + j_p = n$, $j_k \geq 0$. We leave as an exercise to show that this number is $\binom{p+n-1}{n}$. Thus, if $\dim(E) = p$, then the dimension of $\mathrm{Sym}^n(E)$ is $\binom{p+n-1}{n}$. Compare with the dimension of $E^{\otimes n}$, which is $p^n$. In particular, when $p = 2$, the dimension of $\mathrm{Sym}^n(E)$ is $n + 1$. This can also be seen directly.

Remark: The number $\binom{p+n-1}{n}$ is also the number of homogeneous monomials
$$X_1^{j_1} \cdots X_p^{j_p}$$
of total degree $n$ in $p$ variables (we have $j_1 + \cdots + j_p = n$). This is not a coincidence! Symmetric tensor products are closely related to polynomials (for more on this, see the next remark).
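The count $\binom{p+n-1}{n}$ is the number of multisets of size $n$ drawn from a set of $p$ basis vectors, which is easy to confirm computationally (a sketch, not from the text, using only the Python standard library):

```python
from math import comb
from itertools import combinations_with_replacement

# dim Sym^n(E) for dim(E) = p is the number of size-n multisets over a basis
# of E, i.e. binomial(p + n - 1, n); by contrast dim E^{(x)n} is p^n.
for p in range(1, 6):
    for n in range(1, 6):
        multisets = list(combinations_with_replacement(range(p), n))
        assert len(multisets) == comb(p + n - 1, n)

# For p = 2 this gives n + 1, as stated in the text.
assert all(comb(2 + n - 1, n) == n + 1 for n in range(1, 10))
```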

Given a vector space $E$ and a basis $(u_i)_{i \in I}$ for $E$, Proposition 22.13 shows that every symmetric tensor $z \in \mathrm{Sym}^n(E)$ can be written in a unique way as
$$z = \sum_{\substack{M \in \mathbb{N}^{(I)} \\ \sum_{i \in I} M(i) = n \\ \{i_1, \ldots, i_k\} = \mathrm{dom}(M)}} \lambda_M\; u_{i_1}^{\odot M(i_1)} \odot \cdots \odot u_{i_k}^{\odot M(i_k)},$$
for some unique family of scalars $\lambda_M \in K$, all zero except for a finite number.

This looks like a homogeneous polynomial of total degree $n$, where the monomials of total degree $n$ are the symmetric tensors
$$u_{i_1}^{\odot M(i_1)} \odot \cdots \odot u_{i_k}^{\odot M(i_k)},$$
in the "indeterminates" $u_i$, where $i \in I$ (recall that $M(i_1) + \cdots + M(i_k) = n$). Again, this is not a coincidence. Polynomials can be defined in terms of symmetric tensors.

22.8 Some Useful Isomorphisms for Symmetric Powers

We can show the following property of the symmetric tensor product, using the proof tech- nique of Proposition 22.7:

$$\mathrm{Sym}^n(E \oplus F) \cong \bigoplus_{k=0}^n \mathrm{Sym}^k(E) \otimes \mathrm{Sym}^{n-k}(F).$$

22.9 Duality for Symmetric Powers

In this section, all vector spaces are assumed to have finite dimension. We define a nondegenerate pairing, $\mathrm{Sym}^n(E^*) \times \mathrm{Sym}^n(E) \longrightarrow K$, as follows: Consider the multilinear map,
$$(E^*)^n \times E^n \longrightarrow K,$$
given by
$$(v_1^*, \ldots, v_n^*, u_1, \ldots, u_n) \mapsto \sum_{\sigma \in S_n} v_{\sigma(1)}^*(u_1) \cdots v_{\sigma(n)}^*(u_n).$$

Note that the expression on the right-hand side is "almost" the determinant, $\det(v_j^*(u_i))$, except that the sign $\mathrm{sgn}(\sigma)$ is missing (where $\mathrm{sgn}(\sigma)$ is the signature of the permutation $\sigma$, that is, the parity of the number of transpositions into which $\sigma$ can be factored). Such an expression is called a permanent. It is easily checked that this expression is symmetric w.r.t. the $u_i$'s and also w.r.t. the $v_j^*$. For any fixed $(v_1^*, \ldots, v_n^*) \in (E^*)^n$, we get a symmetric multilinear map,
$$l_{v_1^*, \ldots, v_n^*} \colon (u_1, \ldots, u_n) \mapsto \sum_{\sigma \in S_n} v_{\sigma(1)}^*(u_1) \cdots v_{\sigma(n)}^*(u_n),$$
from $E^n$ to $K$. The map $l_{v_1^*, \ldots, v_n^*}$ extends uniquely to a linear map, $L_{v_1^*, \ldots, v_n^*} \colon \mathrm{Sym}^n(E) \to K$. Now, we also have the symmetric multilinear map,

$$(v_1^*, \ldots, v_n^*) \mapsto L_{v_1^*, \ldots, v_n^*},$$
from $(E^*)^n$ to $\mathrm{Hom}(\mathrm{Sym}^n(E), K)$, which extends to a linear map, $L$, from $\mathrm{Sym}^n(E^*)$ to $\mathrm{Hom}(\mathrm{Sym}^n(E), K)$. However, in view of the isomorphism,
$$\mathrm{Hom}(U \otimes V, W) \cong \mathrm{Hom}(U, \mathrm{Hom}(V, W)),$$
we can view $L$ as a linear map,
$$L \colon \mathrm{Sym}^n(E^*) \otimes \mathrm{Sym}^n(E) \longrightarrow K,$$
which corresponds to a bilinear map,
$$\mathrm{Sym}^n(E^*) \times \mathrm{Sym}^n(E) \longrightarrow K.$$
Now, this pairing is nondegenerate. This can be done using bases and we leave it as an exercise to the reader (see Knapp [89], Appendix A). Therefore, we get a canonical isomorphism,

$$(\mathrm{Sym}^n(E))^* \cong \mathrm{Sym}^n(E^*).$$
Since we also have an isomorphism
$$(\mathrm{Sym}^n(E))^* \cong S^n(E; K),$$
we get a canonical isomorphism
$$\mathrm{Sym}^n(E^*) \cong S^n(E; K)$$
which allows us to interpret symmetric tensors over $E^*$ as symmetric multilinear maps.

Remark: The isomorphism, $\mu \colon \mathrm{Sym}^n(E^*) \cong S^n(E; K)$, discussed above can be described explicitly as the linear extension of the map given by
$$\mu(v_1^* \odot \cdots \odot v_n^*)(u_1, \ldots, u_n) = \sum_{\sigma \in S_n} v_{\sigma(1)}^*(u_1) \cdots v_{\sigma(n)}^*(u_n).$$

Now, the map from $E^n$ to $\mathrm{Sym}^n(E)$ given by $(u_1, \ldots, u_n) \mapsto u_1 \odot \cdots \odot u_n$ yields a surjection, $\pi \colon E^{\otimes n} \to \mathrm{Sym}^n(E)$. Because we are dealing with vector spaces, this map has some section, that is, there is some injection, $\iota \colon \mathrm{Sym}^n(E) \to E^{\otimes n}$, with $\pi \circ \iota = \mathrm{id}$. If our field, $K$, has characteristic 0, then there is a special section having a natural definition involving a symmetrization process defined as follows: For every permutation, $\sigma$, we have the map,
$$r_\sigma \colon E^n \to E^{\otimes n},$$
given by
$$r_\sigma(u_1, \ldots, u_n) = u_{\sigma(1)} \otimes \cdots \otimes u_{\sigma(n)}.$$
As $r_\sigma$ is clearly multilinear, $r_\sigma$ extends to a linear map, $r_\sigma \colon E^{\otimes n} \to E^{\otimes n}$, and we get a map, $S_n \times E^{\otimes n} \longrightarrow E^{\otimes n}$, namely,
$$\sigma \cdot z = r_\sigma(z).$$
It is immediately checked that this is a left action of the symmetric group, $S_n$, on $E^{\otimes n}$ and the tensors $z \in E^{\otimes n}$ such that
$$\sigma \cdot z = z, \quad \text{for all } \sigma \in S_n$$
are called symmetrized tensors. We define the map, $\iota \colon E^n \to E^{\otimes n}$, by
$$\iota(u_1, \ldots, u_n) = \frac{1}{n!} \sum_{\sigma \in S_n} \sigma \cdot (u_1 \otimes \cdots \otimes u_n) = \frac{1}{n!} \sum_{\sigma \in S_n} u_{\sigma(1)} \otimes \cdots \otimes u_{\sigma(n)}.$$
As the right hand side is clearly symmetric, we get a linear map, $\iota \colon \mathrm{Sym}^n(E) \to E^{\otimes n}$. Clearly, $\iota(\mathrm{Sym}^n(E))$ is the set of symmetrized tensors in $E^{\otimes n}$. If we consider the map, $S = \iota \circ \pi \colon E^{\otimes n} \longrightarrow E^{\otimes n}$, it is easy to check that $S \circ S = S$. Therefore, $S$ is a projection and by linear algebra, we know that
$$E^{\otimes n} = S(E^{\otimes n}) \oplus \mathrm{Ker}\, S = \iota(\mathrm{Sym}^n(E)) \oplus \mathrm{Ker}\, S.$$
It turns out that $\mathrm{Ker}\, S = E^{\otimes n} \cap I = \mathrm{Ker}\, \pi$, where $I$ is the two-sided ideal of $T(E)$ generated by all tensors of the form $u \otimes v - v \otimes u \in E^{\otimes 2}$ (for example, see Knapp [89], Appendix A). Therefore, $\iota$ is injective,
$$E^{\otimes n} = \iota(\mathrm{Sym}^n(E)) \oplus (E^{\otimes n} \cap I) = \iota(\mathrm{Sym}^n(E)) \oplus \mathrm{Ker}\, \pi,$$
and the symmetric tensor power, $\mathrm{Sym}^n(E)$, is naturally embedded into $E^{\otimes n}$.
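In coordinates, the symmetrization $S$ is just the average of a tensor over all permutations of its factors; the projection identity $S \circ S = S$ can be checked on random arrays (a sketch, not from the text; Python/NumPy, with $E = \mathbb{R}^3$ and $n = 3$):

```python
import numpy as np
from itertools import permutations

# The symmetrization map S on E^{(x)n} in characteristic 0: average over all
# permutations of the tensor factors (here modeled by np.transpose).
def symmetrize(t):
    n = t.ndim
    perms = list(permutations(range(n)))
    return sum(np.transpose(t, p) for p in perms) / len(perms)

rng = np.random.default_rng(8)
t = rng.standard_normal((3, 3, 3))

S_t = symmetrize(t)
assert np.allclose(symmetrize(S_t), S_t)                # S o S = S (a projection)
assert np.allclose(S_t, np.transpose(S_t, (1, 0, 2)))   # its image is symmetric
```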

22.10 Symmetric Algebras

As in the case of tensors, we can pack together all the symmetric powers, $\mathrm{Sym}^n(V)$, into an algebra,
$$\mathrm{Sym}(V) = \bigoplus_{m \geq 0} \mathrm{Sym}^m(V),$$
called the symmetric tensor algebra of $V$. We could adapt what we did in Section 22.5 for general tensor powers to symmetric tensors but since we already have the algebra, $T(V)$, we can proceed faster. If $I$ is the two-sided ideal generated by all tensors of the form $u \otimes v - v \otimes u \in V^{\otimes 2}$, we set
$$\mathrm{Sym}^\bullet(V) = T(V)/I.$$

Then, $\mathrm{Sym}^\bullet(V)$ automatically inherits a multiplication operation which is commutative, and since $T(V)$ is graded, that is,
$$T(V) = \bigoplus_{m \geq 0} V^{\otimes m},$$
we have
$$\mathrm{Sym}^\bullet(V) = \bigoplus_{m \geq 0} V^{\otimes m}/(I \cap V^{\otimes m}).$$
However, it is easy to check that
$$\mathrm{Sym}^m(V) \cong V^{\otimes m}/(I \cap V^{\otimes m}),$$
so
$$\mathrm{Sym}^\bullet(V) \cong \mathrm{Sym}(V).$$
When $V$ is of finite dimension, $n$, $\mathrm{Sym}(V)$ corresponds to the algebra of polynomials with coefficients in $K$ in $n$ variables (this can be seen from Proposition 22.13). When $V$ is of infinite dimension and $(u_i)_{i \in I}$ is a basis of $V$, the algebra, $\mathrm{Sym}(V)$, corresponds to the algebra of polynomials in infinitely many variables in $I$. What's nice about the symmetric tensor algebra, $\mathrm{Sym}(V)$, is that it provides an intrinsic definition of a polynomial algebra in any set, $I$, of variables.

It is also easy to see that $\mathrm{Sym}(V)$ satisfies the following universal mapping property:

Proposition 22.14 Given any commutative $K$-algebra, $A$, for any linear map, $f \colon V \to A$, there is a unique $K$-algebra homomorphism, $\overline{f} \colon \mathrm{Sym}(V) \to A$, so that
$$f = \overline{f} \circ i,$$
as in the diagram below:
$$\begin{array}{ccc} V & \xrightarrow{\;i\;} & \mathrm{Sym}(V) \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle \overline{f}} \\ & & A \end{array}$$

Remark: If $E$ is finite-dimensional, recall the isomorphism, $\mu \colon \mathrm{Sym}^n(E^*) \longrightarrow S^n(E; K)$, defined as the linear extension of the map given by
$$\mu(v_1^* \odot \cdots \odot v_n^*)(u_1, \ldots, u_n) = \sum_{\sigma \in S_n} v_{\sigma(1)}^*(u_1) \cdots v_{\sigma(n)}^*(u_n).$$

Now, we also have a multiplication operation, $\mathrm{Sym}^m(E^*) \times \mathrm{Sym}^n(E^*) \longrightarrow \mathrm{Sym}^{m+n}(E^*)$. The following question then arises:

Can we define a multiplication, $S^m(E; K) \times S^n(E; K) \longrightarrow S^{m+n}(E; K)$, directly on symmetric multilinear forms, so that the following diagram commutes:
$$\begin{array}{ccc} \mathrm{Sym}^m(E^*) \times \mathrm{Sym}^n(E^*) & \xrightarrow{\;\odot\;} & \mathrm{Sym}^{m+n}(E^*) \\ \downarrow{\scriptstyle \mu \times \mu} & & \downarrow{\scriptstyle \mu} \\ S^m(E; K) \times S^n(E; K) & \xrightarrow{\;\cdot\;} & S^{m+n}(E; K). \end{array}$$
The answer is yes! The solution is to define this multiplication such that, for $f \in S^m(E; K)$ and $g \in S^n(E; K)$,
$$(f \cdot g)(u_1, \ldots, u_{m+n}) = \sum_{\sigma \in \mathrm{shuffle}(m,n)} f(u_{\sigma(1)}, \ldots, u_{\sigma(m)})\, g(u_{\sigma(m+1)}, \ldots, u_{\sigma(m+n)}),$$
where $\mathrm{shuffle}(m, n)$ consists of all $(m, n)$-"shuffles", that is, permutations, $\sigma$, of $\{1, \ldots, m+n\}$, such that $\sigma(1) < \cdots < \sigma(m)$ and $\sigma(m+1) < \cdots < \sigma(m+n)$. We urge the reader to check this fact.

Another useful canonical isomorphism (of $K$-algebras) is
$$\mathrm{Sym}(E \oplus F) \cong \mathrm{Sym}(E) \otimes \mathrm{Sym}(F).$$

22.11 Exterior Tensor Powers

We now consider alternating (also called skew-symmetric) multilinear maps and exterior tensor powers (also called alternating tensor powers), denoted $\bigwedge^n(E)$. In many respects, alternating multilinear maps and exterior tensor powers can be treated much like symmetric tensor powers except that the sign, $\mathrm{sgn}(\sigma)$, needs to be inserted in front of the formulae valid for symmetric powers. Roughly speaking, we are now in the world of determinants rather than in the world of permanents. However, there are also some fundamental differences, one of which being that the exterior tensor power, $\bigwedge^n(E)$, is the trivial vector space, $(0)$, when $E$ is finite-dimensional and when $n > \dim(E)$. As in the case of symmetric tensor powers, since we already have the tensor algebra, $T(V)$, we can proceed rather quickly. But first, let us review some basic definitions and facts.

Definition 22.7 Let $f \colon E^n \to F$ be a multilinear map. We say that $f$ is alternating iff $f(u_1, \ldots, u_n) = 0$ whenever $u_i = u_{i+1}$, for some $i$ with $1 \leq i \leq n-1$, for all $u_i \in E$, that is, $f(u_1, \ldots, u_n) = 0$ whenever two adjacent arguments are identical. We say that $f$ is skew-symmetric (or anti-symmetric) iff
$$f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}) = \mathrm{sgn}(\sigma) f(u_1, \ldots, u_n),$$
for every permutation, $\sigma \in S_n$, and all $u_i \in E$.

For $n = 1$, we agree that every linear map, $f \colon E \to F$, is alternating. The vector space of all multilinear alternating maps, $f \colon E^n \to F$, is denoted $\mathrm{Alt}^n(E; F)$. Note that $\mathrm{Alt}^1(E; F) = \mathrm{Hom}(E, F)$. The following basic proposition shows the relationship between alternation and skew-symmetry.

Proposition 22.15 Let $f \colon E^n \to F$ be a multilinear map. If $f$ is alternating, then the following properties hold:

(1) For all $i$, with $1 \leq i \leq n-1$,
$$f(\ldots, u_i, u_{i+1}, \ldots) = -f(\ldots, u_{i+1}, u_i, \ldots).$$

(2) For every permutation, $\sigma \in S_n$,
$$f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}) = \mathrm{sgn}(\sigma) f(u_1, \ldots, u_n).$$

(3) For all $i, j$, with $1 \leq i < j \leq n$,
$$f(\ldots, u_i, \ldots, u_j, \ldots) = 0 \quad \text{whenever } u_i = u_j.$$

Moreover, if our field, $K$, has characteristic different from 2, then every skew-symmetric multilinear map is alternating.

Proof. (i) By multilinearity applied twice, we have
$$f(\ldots, u_i + u_{i+1}, u_i + u_{i+1}, \ldots) = f(\ldots, u_i, u_i, \ldots) + f(\ldots, u_i, u_{i+1}, \ldots) + f(\ldots, u_{i+1}, u_i, \ldots) + f(\ldots, u_{i+1}, u_{i+1}, \ldots).$$
Since $f$ is alternating, we get
$$0 = f(\ldots, u_i, u_{i+1}, \ldots) + f(\ldots, u_{i+1}, u_i, \ldots),$$
that is, $f(\ldots, u_i, u_{i+1}, \ldots) = -f(\ldots, u_{i+1}, u_i, \ldots)$.

(ii) Clearly, the symmetric group, $S_n$, acts on $\mathrm{Alt}^n(E; F)$ on the left, via
$$\sigma \cdot f(u_1, \ldots, u_n) = f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}).$$
Consequently, as $S_n$ is generated by the transpositions (permutations that swap exactly two elements), since for a transposition, (ii) is simply (i), we deduce (ii) by induction on the number of transpositions in $\sigma$.

(iii) There is a permutation, $\sigma$, that sends $u_i$ and $u_j$ respectively to $u_1$ and $u_2$. As $f$ is alternating,
$$f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}) = 0.$$
However, by (ii),
$$f(u_1, \ldots, u_n) = \mathrm{sgn}(\sigma) f(u_{\sigma(1)}, \ldots, u_{\sigma(n)}) = 0.$$
Now, when $f$ is skew-symmetric, if $\sigma$ is the transposition swapping $u_i$ and $u_{i+1} = u_i$, as $\mathrm{sgn}(\sigma) = -1$, we get
$$f(\ldots, u_i, u_i, \ldots) = -f(\ldots, u_i, u_i, \ldots),$$
so that
$$2 f(\ldots, u_i, u_i, \ldots) = 0,$$
and in every characteristic except 2, we conclude that $f(\ldots, u_i, u_i, \ldots) = 0$, namely, $f$ is alternating.

Proposition 22.15 shows that in every characteristic except 2, alternating and skew-symmetric multilinear maps are identical. Using Proposition 22.15 we easily deduce the following crucial fact:

Proposition 22.16 Let $f \colon E^n \to F$ be an alternating multilinear map. For any families of vectors, $(u_1, \ldots, u_n)$ and $(v_1, \ldots, v_n)$, with $u_i, v_i \in E$, if
$$v_j = \sum_{i=1}^n a_{ij} u_i, \qquad 1 \leq j \leq n,$$
then

$$f(v_1, \ldots, v_n) = \Big( \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)\, a_{\sigma(1),1} \cdots a_{\sigma(n),n} \Big) f(u_1, \ldots, u_n) = \det(A)\, f(u_1, \ldots, u_n),$$
where $A$ is the $n \times n$ matrix, $A = (a_{ij})$.

Proof. Use property (ii) of Proposition 22.15.
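Proposition 22.16 is easy to verify numerically by taking $f$ to be the determinant of the matrix whose columns are its arguments, which is an alternating multilinear map; the signed sum over permutations is then the Leibniz formula for $\det(A)$. A sketch (not from the text, assuming NumPy):

```python
import numpy as np
from itertools import permutations

# f(v_1, ..., v_n) = det([v_1 | ... | v_n]) is alternating multilinear, and if
# v_j = sum_i a_ij u_i then f(v_1, ..., v_n) = det(A) f(u_1, ..., u_n).
def f(*cols):
    return np.linalg.det(np.column_stack(cols))

rng = np.random.default_rng(9)
n = 3
U = rng.standard_normal((n, n))     # columns u_1, ..., u_n
A = rng.standard_normal((n, n))     # coefficients a_ij
V = U @ A                           # columns v_j = sum_i a_ij u_i

assert np.isclose(f(*V.T), np.linalg.det(A) * f(*U.T))

# The signed sum over permutations is the Leibniz formula for det(A):
leibniz = sum(np.linalg.det(np.eye(n)[list(p)]) *        # sgn of the permutation p
              np.prod([A[p[j], j] for j in range(n)])
              for p in permutations(range(n)))
assert np.isclose(leibniz, np.linalg.det(A))
```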

We are now ready to define and construct exterior tensor powers.

Definition 22.8 An $n$-th exterior tensor power of a vector space, $E$, where $n \geq 1$, is a vector space, $A$, together with an alternating multilinear map, $\varphi \colon E^n \to A$, such that, for every vector space, $F$, and for every alternating multilinear map, $f \colon E^n \to F$, there is a unique linear map, $f_\wedge \colon A \to F$, with
$$f(u_1, \ldots, u_n) = f_\wedge(\varphi(u_1, \ldots, u_n)),$$
for all $u_1, \ldots, u_n \in E$, or for short
$$f = f_\wedge \circ \varphi.$$
Equivalently, there is a unique linear map $f_\wedge$ such that the following diagram commutes:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi\;} & A \\ & \searrow{\scriptstyle f} & \downarrow{\scriptstyle f_\wedge} \\ & & F \end{array}$$

First, we show that any two $n$-th exterior tensor powers $(A_1, \varphi_1)$ and $(A_2, \varphi_2)$ for $E$, are isomorphic.

Proposition 22.17 Given any two $n$-th exterior tensor powers $(A_1, \varphi_1)$ and $(A_2, \varphi_2)$ for $E$, there is an isomorphism $h \colon A_1 \to A_2$ such that
$$\varphi_2 = h \circ \varphi_1.$$

Proof. Replace tensor product by $n$-th exterior tensor power in the proof of Proposition 22.4.

We now give a construction that produces an n-th exterior tensor power of a vector space E.

Theorem 22.18 Given a vector space $E$, an $n$-th exterior tensor power $(\bigwedge^n(E), \varphi)$ for $E$ can be constructed ($n \geq 1$). Furthermore, denoting $\varphi(u_1, \ldots, u_n)$ as $u_1 \wedge \cdots \wedge u_n$, the exterior tensor power $\bigwedge^n(E)$ is generated by the vectors $u_1 \wedge \cdots \wedge u_n$, where $u_1, \ldots, u_n \in E$, and for every alternating multilinear map $f \colon E^n \to F$, the unique linear map $f_\wedge \colon \bigwedge^n(E) \to F$ such that $f = f_\wedge \circ \varphi$, is defined by
$$f_\wedge(u_1 \wedge \cdots \wedge u_n) = f(u_1, \ldots, u_n),$$
on the generators $u_1 \wedge \cdots \wedge u_n$ of $\bigwedge^n(E)$.

Proof sketch. We can give a quick proof using the tensor algebra, $T(E)$. Let $I_a$ be the two-sided ideal of $T(E)$ generated by all tensors of the form $u \otimes u \in E^{\otimes 2}$. Then, let
$$\bigwedge^n(E) = E^{\otimes n}/(I_a \cap E^{\otimes n})$$
and let $\pi$ be the projection, $\pi \colon E^{\otimes n} \to \bigwedge^n(E)$. If we let $u_1 \wedge \cdots \wedge u_n = \pi(u_1 \otimes \cdots \otimes u_n)$, it is easy to check that $(\bigwedge^n(E), \wedge)$ satisfies the conditions of Theorem 22.18.

n

$$\bigwedge(E) = T(E)/I_a = \bigoplus_{n \geq 0} \bigwedge^n(E),$$
the exterior algebra of $E$. This is the skew-symmetric counterpart of $\mathrm{Sym}(E)$ and we will study it a little later.

For simplicity of notation, we may write $\bigwedge^n E$ for $\bigwedge^n(E)$. We also abbreviate "exterior tensor power" as "exterior power". Clearly, $\bigwedge^1(E) \cong E$ and it is convenient to set $\bigwedge^0(E) = K$.

The fact that the map $\varphi \colon E^n \to \bigwedge^n(E)$ is alternating and multilinear, can also be expressed as follows:
$$u_1 \wedge \cdots \wedge (u_i + v_i) \wedge \cdots \wedge u_n = (u_1 \wedge \cdots \wedge u_i \wedge \cdots \wedge u_n) + (u_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge u_n),$$
$$u_1 \wedge \cdots \wedge (\lambda u_i) \wedge \cdots \wedge u_n = \lambda(u_1 \wedge \cdots \wedge u_i \wedge \cdots \wedge u_n),$$
$$u_{\sigma(1)} \wedge \cdots \wedge u_{\sigma(n)} = \mathrm{sgn}(\sigma)\, u_1 \wedge \cdots \wedge u_n,$$
for all $\sigma \in S_n$.

Theorem 22.18 yields a canonical isomorphism
$$\mathrm{Hom}\Big(\bigwedge^n(E), F\Big) \cong \mathrm{Alt}^n(E; F),$$
between the vector space of linear maps $\mathrm{Hom}(\bigwedge^n(E), F)$, and the vector space of alternating multilinear maps $\mathrm{Alt}^n(E; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in \mathrm{Hom}(\bigwedge^n(E), F)$. In particular, when $F = K$, we get a canonical isomorphism
$$\Big(\bigwedge^n(E)\Big)^* \cong \mathrm{Alt}^n(E; K).$$

Tensors $\alpha \in \bigwedge^n(E)$ are called alternating $n$-tensors or alternating tensors of degree $n$ and we write $\deg(\alpha) = n$. Tensors of the form $u_1 \wedge \cdots \wedge u_n$, where $u_i \in E$, are called simple (or decomposable) alternating $n$-tensors. Those alternating $n$-tensors that are not simple are often called compound alternating $n$-tensors. Simple tensors $u_1 \wedge \cdots \wedge u_n \in \bigwedge^n(E)$ are also called $n$-vectors and tensors in $\bigwedge^n(E^*)$ are often called (alternating) $n$-forms.

Given two linear maps $f \colon E \to E'$ and $g \colon E \to E'$, we can define $h \colon E \times E \to \bigwedge^2(E')$ by
$$h(u, v) = f(u) \wedge g(v).$$
It is immediately verified that $h$ is alternating bilinear, and thus, it induces a unique linear map
$$f \wedge g \colon \bigwedge^2(E) \to \bigwedge^2(E'),$$
such that
$$(f \wedge g)(u \wedge v) = f(u) \wedge g(v).$$
If we also have linear maps $f' \colon E' \to E''$ and $g' \colon E' \to E''$, we can easily verify that
$$(f' \circ f) \wedge (g' \circ g) = (f' \wedge g') \circ (f \wedge g).$$

The generalization to the alternating product $f_1 \wedge \cdots \wedge f_n$ of $n \geq 3$ linear maps $f_i \colon E \to E'$ is immediate, and left to the reader.

22.12 Bases of Exterior Powers

Let $E$ be any vector space. For any basis, $(u_i)_{i \in \Sigma}$, for $E$, we assume that some total ordering, $\leq$, on $\Sigma$, has been chosen. Call the pair $((u_i)_{i \in \Sigma}, \leq)$ an ordered basis. Then, for any nonempty finite subset, $I \subseteq \Sigma$, let
$$u_I = u_{i_1} \wedge \cdots \wedge u_{i_m},$$
where $I = \{i_1, \ldots, i_m\}$, with $i_1 < \cdots < i_m$.

Proposition 22.19 Given any vector space, $E$, if $E$ has finite dimension, $d = \dim(E)$, then for all $n > d$, the exterior power $\bigwedge^n(E)$ is trivial, that is $\bigwedge^n(E) = (0)$. Otherwise, for every ordered basis, $((u_i)_{i \in \Sigma}, \leq)$, the family, $(u_I)$, is a basis of $\bigwedge^n(E)$, where $I$ ranges over finite nonempty subsets of $\Sigma$ of size $|I| = n$.

Proof. First, assume that $E$ has finite dimension, $d = \dim(E)$ and that $n > d$. We know that $\bigwedge^n(E)$ is generated by the tensors of the form $v_1 \wedge \cdots \wedge v_n$, with $v_i \in E$. If $u_1, \ldots, u_d$ is a basis of $E$, as every $v_i$ is a linear combination of the $u_j$, when we expand $v_1 \wedge \cdots \wedge v_n$ using multilinearity, we get a linear combination of the form

v v = λ u u , 1 ∧···∧ n (j1,...,jn) j1 ∧···∧ jn (j1￿,...,jn) where each (j ,...,j )issomesequenceofintegersj 1,...,d . As n>d,eachsequence 1 n k ∈{ } (j1,...,jn) must contain two identical elements. By alternation, uj1 ujn =0andso, v v =0.Itfollowsthat n(E)=(0). ∧···∧ 1 ∧···∧ n Now, assume that either dim(￿E)=d and that n d or that E is infinite dimensional. ≤ The argument below shows that the uI are nonzero and linearly independent. As usual, let u∗ E∗ be the linear form given by i ∈

u_i*(u_j) = δ_{ij}.

For any nonempty subset I = {i_1,...,i_n} ⊆ Σ, with i_1 < ··· < i_n, let L_I: ⋀^n(E) → K be the linear map given by

L_I(v_1 ∧ ··· ∧ v_n) = det(u_{i_j}*(v_k)),

for all v_1 ∧ ··· ∧ v_n ∈ ⋀^n(E); this map is well defined since (v_1,...,v_n) ↦ det(u_{i_j}*(v_k)) is alternating multilinear.

Note that when dim(E) = d and n ≤ d, the forms u_{i_1}*,...,u_{i_n}* are all distinct, so the above definition makes sense. Since L_I(u_I) = 1, we conclude that u_I ≠ 0. Now, if we have a linear combination

Σ_I λ_I u_I = 0,

where the above sum is finite and involves nonempty finite subsets I ⊆ Σ with |I| = n, for every such I, when we apply L_I we get

λ_I = 0,

proving linear independence.

As a corollary, if E is finite dimensional, say dim(E) = d, and if 1 ≤ n ≤ d, then we have

dim(⋀^n(E)) = \binom{d}{n},

and if n > d, then dim(⋀^n(E)) = 0.
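In coordinates, this dimension count is easy to check. The following Python sketch (the helper name exterior_basis and the test values d = 5, n = 3 are ours, not from the text) enumerates the index sets I labeling the basis (u_I) of ⋀^n(R^d) and confirms that there are exactly \binom{d}{n} of them, and none at all when n > d.

```python
from itertools import combinations
from math import comb

def exterior_basis(d, n):
    """Index sets I = {i1 < ... < in} labeling the basis vectors e_I of the
    n-th exterior power of a d-dimensional space."""
    return list(combinations(range(1, d + 1), n))

d, n = 5, 3
basis = exterior_basis(d, n)
print(basis[:4])                       # [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4)]
assert len(basis) == comb(d, n)        # dim = C(d, n) = 10
assert exterior_basis(d, d + 1) == []  # n > d: the exterior power is (0)
```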

Remark: When n = 0, if we set u_∅ = 1, then (u_∅) = (1) is a basis of ⋀^0(V) = K.

It follows from Proposition 22.19 that the family (u_I)_I, where I ⊆ Σ ranges over finite subsets of Σ, is a basis of ⋀(V) = ⨁_{n≥0} ⋀^n(V).

As a corollary of Proposition 22.19 we obtain the following useful criterion for linear independence:

Proposition 22.20 For any vector space E, the vectors u_1,...,u_n ∈ E are linearly independent iff u_1 ∧ ··· ∧ u_n ≠ 0.

Proof. If u_1 ∧ ··· ∧ u_n ≠ 0, then u_1,...,u_n must be linearly independent. Otherwise, some u_i would be a linear combination of the other u_j's (with j ≠ i) and then, as in the proof of Proposition 22.19, u_1 ∧ ··· ∧ u_n would be a linear combination of wedges in which two vectors are identical, and thus zero.

Conversely, assume that u_1,...,u_n are linearly independent. Then, we have the linear forms u_i* ∈ E* such that

u_i*(u_j) = δ_{i,j},  1 ≤ i, j ≤ n.

As in the proof of Proposition 22.19, we have a linear map L_{u_1,...,u_n}: ⋀^n(E) → K given by

L_{u_1,...,u_n}(v_1 ∧ ··· ∧ v_n) = det(u_j*(v_i)),

for all v_1 ∧ ··· ∧ v_n ∈ ⋀^n(E). As

L_{u_1,...,u_n}(u_1 ∧ ··· ∧ u_n) = 1,

we conclude that u_1 ∧ ··· ∧ u_n ≠ 0.

Proposition 22.20 shows that, geometrically, every nonzero wedge u_1 ∧ ··· ∧ u_n corresponds to some oriented version of an n-dimensional subspace of E.
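Concretely, over E = R^d the coordinates of u_1 ∧ ··· ∧ u_n over the basis (e_I) are the n × n minors (rows indexed by I) of the d × n matrix whose columns are u_1,...,u_n, so Proposition 22.20 amounts to the familiar rank criterion. A small numerical sketch (the function name wedge_coordinates and the sample vectors are ours):

```python
import numpy as np
from itertools import combinations

def wedge_coordinates(vectors):
    """Coordinates of u1 ∧ ... ∧ un over the basis (e_I): the n×n minors of
    the d×n matrix whose columns are the u_i."""
    M = np.column_stack(vectors)          # d × n
    d, n = M.shape
    return {I: np.linalg.det(M[list(I), :]) for I in combinations(range(d), n)}

u1 = np.array([1., 0., 0., 2.])
u2 = np.array([0., 1., 0., 3.])
u3 = np.array([0., 0., 1., 4.])
coords = wedge_coordinates([u1, u2, u3])
print(any(abs(c) > 1e-12 for c in coords.values()))   # True: independent, wedge ≠ 0
dep = wedge_coordinates([u1, u2, u1 + u2])
print(any(abs(c) > 1e-12 for c in dep.values()))      # False: dependent, wedge = 0
```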

22.13 Some Useful Isomorphisms for Exterior Powers

We can show the following property of the exterior tensor product, using the proof technique of Proposition 22.7:

⋀^n(E ⊕ F) ≅ ⨁_{k=0}^{n} ⋀^k(E) ⊗ ⋀^{n−k}(F).

22.14 Duality for Exterior Powers

In this section, all vector spaces are assumed to have finite dimension. We define a nondegenerate pairing ⋀^n(E*) × ⋀^n(E) → K as follows: Consider the multilinear map

(E*)^n × E^n → K

given by

(v_1*,...,v_n*, u_1,...,u_n) ↦ Σ_{σ∈S_n} sgn(σ) v_{σ(1)}*(u_1) ··· v_{σ(n)}*(u_n) = det(v_j*(u_i)).

It is easily checked that this expression is alternating w.r.t. the ui’s and also w.r.t. the vj∗. n For any fixed (v∗,...,v∗) (E∗) ,wegetanalternatingmultinearmap, 1 n ∈

lv ,...,v :(u1,...,un) det(v∗(ui)), 1∗ n∗ ￿→ j from En to K.Bytheargumentusedinthesymmetriccase,wegetabilinearmap,

n n (E∗) (E) K. × −→ ￿ ￿ Now, this pairing in nondegenerate. This can be done using bases and we leave it as an exercise to the reader. Therefore, we get a canonical isomorphism,

n n ( (E))∗ ∼= (E∗). ￿ ￿ Since we also have a canonical isomorphism

n n ( (E))∗ ∼= Alt (E; K), ￿ we get a canonical isomorphism

n n (E∗) ∼= Alt (E; K) ￿ which allows us to interpret alternating tensors over E∗ as alternating multilinear maps. 22.14. DUALITY FOR EXTERIOR POWERS 621

The isomorphism µ: ⋀^n(E*) ≅ Alt^n(E; K) discussed above can be described explicitly as the linear extension of the map given by

µ(v_1* ∧ ··· ∧ v_n*)(u_1,...,u_n) = det(v_j*(u_i)).
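Numerically, µ is just an n × n determinant. A short Python sketch (representing each covector v_j* by a vector acting through the dot product; the names and random test data are ours) that also illustrates the alternating behaviour in the u_i's:

```python
import numpy as np

def mu(covectors, vectors):
    """mu(v*_1 ∧ ... ∧ v*_n)(u_1, ..., u_n) = det( v*_j(u_i) ), with each
    covector represented by a vector acting through the dot product."""
    A = np.array([[np.dot(v, u) for u in vectors] for v in covectors])
    return np.linalg.det(A)

rng = np.random.default_rng(0)
vs = [rng.standard_normal(4) for _ in range(3)]   # v*_1, v*_2, v*_3
us = [rng.standard_normal(4) for _ in range(3)]   # u_1, u_2, u_3
# Swapping two arguments flips the sign: the resulting form is alternating.
print(np.isclose(mu(vs, [us[1], us[0], us[2]]), -mu(vs, us)))   # True
```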

Remark: Variants of our isomorphism µ are found in the literature. For example, there is a version µ′, where

µ′ = (1/n!) µ,

with the factor 1/n! added in front of the determinant. Each version has its own merits and drawbacks. Morita [114] uses µ′ because it is more convenient than µ when dealing with characteristic classes. On the other hand, when using µ′, some extra factor is needed in defining the wedge operation of alternating multilinear forms (see Section 22.15) and for exterior differentiation. The version µ is the one adopted by Warner [147], Knapp [89], Fulton and Harris [57], and Cartan [29, 30].

If f : E F is any linear map, by transposition we get a linear map, f ￿ : F ∗ E∗, given by → → f ￿(v∗)=v∗ f, v∗ F ∗. ◦ ∈ Consequently, we have

f ￿(v∗)(u)=v∗(f(u)), for all u E and all v∗ F ∗. ∈ ∈ For any p 1, the map, ≥ (u ,...,u ) f(u ) f(u ), 1 p ￿→ 1 ∧···∧ p from En to p F is multilinear alternating, so it induces a linear map, p f : p E p F , defined on generators by → ￿ ￿ ￿ ￿ p f (u u )=f(u ) f(u ). 1 ∧···∧ p 1 ∧···∧ p ￿ ￿ p ￿ p p p Combining and duality, we get a linear map, f ￿ : F ∗ E∗,definedongener- ators by → ￿ p ￿ ￿ ￿ f ￿ (v∗ v∗)=f ￿(v∗) f ￿(v∗). 1 ∧···∧ p 1 ∧···∧ p ￿￿ ￿ Proposition 22.21 If f : E F is any linear map between two finite-dimensional vector spaces, E and F , then →

p p µ f ￿ (ω) (u ,...,u )=µ(ω)(f(u ),...,f(u )),ω F ∗,u,...,u E. 1 p 1 p ∈ 1 p ∈ ￿￿￿ ￿ ￿ ￿ 622 CHAPTER 22. TENSOR ALGEBRAS

Proof .Itisenoughtoprovetheformulaongenerators.Bydefinitionofµ, we have

p µ f ￿ (v∗ v∗) (u ,...,u )=µ(f ￿(v∗) f ￿(v∗))(u ,...,u ) 1 ∧···∧ p 1 p 1 ∧···∧ p 1 p ￿￿ ￿ ￿ ￿ =det(f ￿(vj∗)(ui))

=det(vj∗(f(ui))) = µ(v∗ v∗)(f(u ),...,f(u )), 1 ∧···∧ p 1 p as claimed. p The map f ￿ is often denoted f ∗,althoughthisisanambiguousnotationsincep is p dropped. Proposition 22.21 gives us the behavior of f ∗ under the identification of E∗ and Altp(E; K) via￿the isomorphism µ. ￿ n n As in the case of symmetric powers, the map from E to (E)givenby(u1,...,un) n n ￿→ u1 un yields a surjection, π : E⊗ (E). Now, this map has some section so there ∧···∧ n n → ￿ is some injection, ι: (E) E⊗ ,withπ ι =id.Ifourfield,K,hascharacteristic0, then there is a special section→ having a natural￿ ◦ definition involving an antisymmetrization process. ￿

Recall that we have a left action of the symmetric group S_n on E^{⊗n}. The tensors z ∈ E^{⊗n} such that

σ · z = sgn(σ) z,  for all σ ∈ S_n,

are called antisymmetrized tensors. We define the map ι: E^n → E^{⊗n} by

ι(u_1,...,u_n) = (1/n!) Σ_{σ∈S_n} sgn(σ) u_{σ(1)} ⊗ ··· ⊗ u_{σ(n)}.

As the right hand side is clearly an alternating map, we get a linear map ι: ⋀^n(E) → E^{⊗n}. Clearly, ι(⋀^n(E)) is the set of antisymmetrized tensors in E^{⊗n}. If we consider the map A = ι ∘ π: E^{⊗n} → E^{⊗n}, it is easy to check that A ∘ A = A. Therefore, A is a projection and by linear algebra, we know that

E^{⊗n} = A(E^{⊗n}) ⊕ Ker A = ι(⋀^n(E)) ⊕ Ker A.

It turns out that Ker A = E^{⊗n} ∩ I_a = Ker π, where I_a is the two-sided ideal of T(E) generated by all tensors of the form u ⊗ u ∈ E^{⊗2} (for example, see Knapp [89], Appendix A). Therefore, ι is injective,

E^{⊗n} = ι(⋀^n(E)) ⊕ (E^{⊗n} ∩ I_a) = ι(⋀^n(E)) ⊕ Ker π,

and the exterior tensor power ⋀^n(E) is naturally embedded into E^{⊗n}.
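In coordinates, the projection A = ι ∘ π is the familiar antisymmetrization of an n-fold array, and the idempotence A ∘ A = A can be verified directly. A minimal Python sketch (assuming K = R and representing tensors in (R^d)^{⊗n} as d × ··· × d arrays; the helper names and random test tensor are ours):

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def antisymmetrize(T):
    """A(T) = (1/n!) * sum over sigma of sgn(sigma) * (sigma acting on T),
    where sigma permutes the tensor factors (the axes of the array)."""
    n = T.ndim
    perms = list(permutations(range(n)))
    return sum(sign(p) * np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))             # an arbitrary tensor in (R^3)^{⊗3}
A_T = antisymmetrize(T)
print(np.allclose(antisymmetrize(A_T), A_T))   # True: A ∘ A = A, a projection
```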

22.15 Exterior Algebras

As in the case of symmetric tensors, we can pack together all the exterior powers ⋀^m(V) into an algebra

⋀(V) = ⨁_{m≥0} ⋀^m(V),

called the exterior algebra (or Grassmann algebra) of V. We mimic the procedure used for symmetric powers. If I_a is the two-sided ideal generated by all tensors of the form u ⊗ u ∈ V^{⊗2}, we set

⋀^•(V) = T(V)/I_a.

Then, ⋀^•(V) automatically inherits a multiplication operation, called wedge product, and since T(V) is graded, that is,

T(V) = ⨁_{m≥0} V^{⊗m},

we have

⋀^•(V) = ⨁_{m≥0} V^{⊗m}/(I_a ∩ V^{⊗m}).

However, it is easy to check that

⋀^m(V) ≅ V^{⊗m}/(I_a ∩ V^{⊗m}),

so

⋀^•(V) ≅ ⋀(V).

When V has finite dimension d, we actually have a finite coproduct

⋀(V) = ⨁_{m=0}^{d} ⋀^m(V),

and since each ⋀^m(V) has dimension \binom{d}{m}, we deduce that

dim(⋀(V)) = 2^d = 2^{dim(V)}.

The multiplication ∧: ⋀^m(V) × ⋀^n(V) → ⋀^{m+n}(V) is skew-symmetric in the following precise sense:

Proposition 22.22 For all α ∈ ⋀^m(V) and all β ∈ ⋀^n(V), we have

β ∧ α = (−1)^{mn} α ∧ β.

Proof. Since v ∧ u = −u ∧ v for all u, v ∈ V, Proposition 22.22 follows by induction.

Since α ∧ α = 0 for every simple tensor α = u_1 ∧ ··· ∧ u_n, it seems natural to infer that α ∧ α = 0 for every tensor α ∈ ⋀(V). If we consider the case where dim(V) ≤ 3, we can indeed prove the above assertion. However, if dim(V) ≥ 4, the above fact is generally false! For example, when dim(V) = 4, if u_1, u_2, u_3, u_4 are a basis for V, for α = u_1 ∧ u_2 + u_3 ∧ u_4, we check that

α ∧ α = 2 u_1 ∧ u_2 ∧ u_3 ∧ u_4,

which is nonzero.

The above discussion suggests that it might be useful to know when an alternating tensor is simple, that is, decomposable. It can be shown that for tensors α ∈ ⋀^2(V), α ∧ α = 0 iff α is simple. A general criterion for decomposability can be given in terms of some operations known as left hook and right hook (also called interior products), see Section 22.17.

It is easy to see that ⋀(V) satisfies the following universal mapping property:

Proposition 22.23 Given any K-algebra A, for any linear map f: V → A, if (f(v))^2 = 0 for all v ∈ V, then there is a unique K-algebra homomorphism f̄: ⋀(V) → A so that

f = f̄ ∘ i,

as in the diagram below:

   V ----i----> ⋀(V)
    \           /
     \ f       / f̄
      v       v
         A

When E is finite-dimensional, recall the isomorphism µ: ⋀^n(E*) → Alt^n(E; K), defined as the linear extension of the map given by

µ(v_1* ∧ ··· ∧ v_n*)(u_1,...,u_n) = det(v_j*(u_i)).

Now, we also have a multiplication operation ∧: ⋀^m(E*) × ⋀^n(E*) → ⋀^{m+n}(E*). The following question then arises:

Can we define a multiplication ∧: Alt^m(E; K) × Alt^n(E; K) → Alt^{m+n}(E; K) directly on alternating multilinear forms, so that the following diagram commutes:

   ⋀^m(E*) × ⋀^n(E*) ------∧------> ⋀^{m+n}(E*)
        |                               |
        | µ × µ                         | µ
        v                               v
   Alt^m(E; K) × Alt^n(E; K) --∧--> Alt^{m+n}(E; K).

As in the symmetric case, the answer is yes! The solution is to define this multiplication such that, for f ∈ Alt^m(E; K) and g ∈ Alt^n(E; K),

(f ∧ g)(u_1,...,u_{m+n}) = Σ_{σ ∈ shuffle(m,n)} sgn(σ) f(u_{σ(1)},...,u_{σ(m)}) g(u_{σ(m+1)},...,u_{σ(m+n)}),

where shuffle(m, n) consists of all (m, n)-"shuffles", that is, permutations σ of {1,...,m+n} such that σ(1) < ··· < σ(m) and σ(m+1) < ··· < σ(m+n). For example, when m = n = 1, we have

(f ∧ g)(u, v) = f(u)g(v) − g(u)f(v).

When m = 1 and n ≥ 2, check that

(f ∧ g)(u_1,...,u_{n+1}) = Σ_{i=1}^{n+1} (−1)^{i−1} f(u_i) g(u_1,..., û_i, ..., u_{n+1}),

where the hat over the argument u_i means that it should be omitted.

As a result of all this, the coproduct

Alt(E) = ⨁_{n≥0} Alt^n(E; K)

is an algebra under the above multiplication, and this algebra is isomorphic to ⋀(E*). For the record, we state:

Proposition 22.24 When E is finite dimensional, the maps µ: ⋀^n(E*) → Alt^n(E; K) induced by the linear extensions of the maps given by

µ(v_1* ∧ ··· ∧ v_n*)(u_1,...,u_n) = det(v_j*(u_i))

yield a canonical isomorphism of algebras µ: ⋀(E*) → Alt(E), where the multiplication in Alt(E) is defined by the maps ∧: Alt^m(E; K) × Alt^n(E; K) → Alt^{m+n}(E; K), with

(f ∧ g)(u_1,...,u_{m+n}) = Σ_{σ ∈ shuffle(m,n)} sgn(σ) f(u_{σ(1)},...,u_{σ(m)}) g(u_{σ(m+1)},...,u_{σ(m+n)}),

where shuffle(m, n) consists of all (m, n)-"shuffles", that is, permutations σ of {1,...,m+n} such that σ(1) < ··· < σ(m) and σ(m+1) < ··· < σ(m+n).
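The shuffle formula is straightforward to implement: an (m, n)-shuffle is determined by the m-element subset {σ(1),...,σ(m)} of {1,...,m+n}. A small Python sketch (the names and the two sample 1-forms are ours) that reproduces the case m = n = 1 mentioned above:

```python
from itertools import combinations

def perm_sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge(f, m, g, n):
    """(f ∧ g)(u_1,...,u_{m+n}) = sum over (m,n)-shuffles sigma of
       sgn(sigma) f(u_{sigma(1)},...,u_{sigma(m)}) g(u_{sigma(m+1)},...)."""
    def fg(*u):
        total = 0
        for first in combinations(range(m + n), m):                  # sigma(1..m), increasing
            rest = tuple(i for i in range(m + n) if i not in first)  # sigma(m+1..), increasing
            total += perm_sign(first + rest) * f(*[u[i] for i in first]) * g(*[u[i] for i in rest])
        return total
    return fg

# m = n = 1: (f ∧ g)(u, v) = f(u)g(v) - g(u)f(v)
f = lambda u: 2 * u[0] + u[1]          # linear forms on R^2
g = lambda u: u[0] - 3 * u[1]
h = wedge(f, 1, g, 1)
u, v = (1.0, 2.0), (3.0, -1.0)
print(h(u, v), f(u) * g(v) - g(u) * f(v))   # the same number
```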

Remark: The algebra ⋀(E) is a graded algebra. Given two graded algebras E and F, we can make a new tensor product E ⊗̂ F, where E ⊗̂ F is equal to E ⊗ F as a vector space, but with a skew-commutative multiplication given by

(a ⊗ b) ∧ (c ⊗ d) = (−1)^{deg(b)deg(c)} (ac) ⊗ (bd),

where a ∈ E^m, b ∈ F^p, c ∈ E^n, d ∈ F^q. Then, it can be shown that

⋀(E ⊕ F) ≅ ⋀(E) ⊗̂ ⋀(F).

22.16 The Hodge ∗-Operator

In order to define a generalization of the Laplacian that will apply to differential forms on a Riemannian manifold, we need to define isomorphisms

⋀^k V → ⋀^{n−k} V,

for any vector space V of dimension n and any k, with 0 ≤ k ≤ n. If ⟨−, −⟩ denotes the inner product on V, we define an inner product on ⋀^k V, also denoted ⟨−, −⟩, by setting

⟨u_1 ∧ ··· ∧ u_k, v_1 ∧ ··· ∧ v_k⟩ = det(⟨u_i, v_j⟩),

for all u_i, v_i ∈ V, and extending ⟨−, −⟩ by bilinearity.

It is easy to show that if (e_1,...,e_n) is an orthonormal basis of V, then the basis of ⋀^k V consisting of the e_I (where I = {i_1,...,i_k}, with 1 ≤ i_1 < ··· < i_k ≤ n) is an orthonormal basis of ⋀^k V.

If V is oriented by the basis (e_1,...,e_n), then V* is oriented by the dual basis (e_1*,...,e_n*). If V is an oriented vector space of dimension n, then we can define a linear map

∗: ⋀^k V → ⋀^{n−k} V,

called the Hodge ∗-operator, as follows: For any choice of a positively oriented orthonormal basis (e_1,...,e_n) of V, set

∗(e_1 ∧ ··· ∧ e_k) = e_{k+1} ∧ ··· ∧ e_n.

In particular, for k = 0 and k = n, we have

∗(1) = e_1 ∧ ··· ∧ e_n,
∗(e_1 ∧ ··· ∧ e_n) = 1.

It is easy to see that the definition of ∗ does not depend on the choice of positively oriented orthonormal basis.

The Hodge ∗-operators ∗: ⋀^k V → ⋀^{n−k} V induce a linear bijection ∗: ⋀(V) → ⋀(V). We also have Hodge ∗-operators ∗: ⋀^k V* → ⋀^{n−k} V*.

The following proposition is easy to show:

Proposition 22.25 If V is any oriented vector space of dimension n, for every k, with 0 ≤ k ≤ n, we have

(i) ∗∗ = (−id)^{k(n−k)}.

(ii) ⟨x, y⟩ = ∗(x ∧ ∗y) = ∗(y ∧ ∗x), for all x, y ∈ ⋀^k V.

If (e_1,...,e_n) is an orthonormal basis of V and (v_1,...,v_n) is any other basis of V, it is easy to see that

v_1 ∧ ··· ∧ v_n = √det(⟨v_i, v_j⟩) e_1 ∧ ··· ∧ e_n,

from which it follows that

∗(1) = (1/√det(⟨v_i, v_j⟩)) v_1 ∧ ··· ∧ v_n

(see Jost [83], Chapter 2, Lemma 2.1.3).
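On basis k-vectors, the Hodge ∗-operator sends e_I to ±e_{I^c}, where I^c is the complementary index set and the sign is that of the permutation (I, I^c) of (1,...,n); this is the standard convention consistent with the definition on positively oriented orthonormal bases (the explicit sign is not spelled out in the text). The following Python sketch (our helper names) checks Proposition 22.25 (i) on all basis elements for n = 5:

```python
from itertools import combinations

def perm_sign(seq):
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def hodge_basis(I, n):
    """Hodge star of the basis k-vector e_I (I an increasing tuple) relative to
    an oriented orthonormal basis (e_1,...,e_n): returns (sign, complement)."""
    Ic = tuple(sorted(set(range(1, n + 1)) - set(I)))
    return perm_sign(I + Ic), Ic

n = 5
for k in range(n + 1):
    for I in combinations(range(1, n + 1), k):
        s1, Ic = hodge_basis(I, n)
        s2, I2 = hodge_basis(Ic, n)
        assert I2 == I and s1 * s2 == (-1) ** (k * (n - k))   # ** = (-1)^{k(n-k)} id
print("checked Proposition 22.25 (i) on all basis elements, n =", n)
```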

22.17 Testing Decomposability; Left and Right Hooks

In this section, all vector spaces are assumed to have finite dimension. Say dim(E)=n. Using our nonsingular pairing,

p p , : E∗ E K (1 p n), ￿− −￿ × −→ ≤ ≤ ￿ ￿ defined on generators by

u∗ u∗,v u =det(u∗(v )), ￿ 1 ∧···∧ p 1 ∧···∧ p￿ i j we define various contraction operations,

p p+q q : E E∗ E∗ (left hook) ￿ × −→ ￿ ￿ ￿ and p+q p q : E∗ E E∗ (right hook), ￿ × −→ as well as the versions obtained￿ by replacing￿ E￿by E∗ and E∗∗ by E. We begin with the left or left hook, ￿. Let u p E.Foranyq such that p + q n,multiplicationontherightbyu is a linear map ∈ ≤ ￿ q p+q (u): E E, ∧R −→ ￿ ￿ 628 CHAPTER 22. TENSOR ALGEBRAS

given by v v u ￿→ ∧ where v q E.Thetransposeof (u)yieldsalinearmap, ∈ ∧R ￿ p+q q t ( (u)) :( E)∗ ( E)∗, ∧R −→ p+q ￿ p+q ￿ q q which, using the isomorphisms ( E)∗ ∼= E∗ and ( E)∗ ∼= E∗ can be viewed as amap ￿ p+￿q q ￿ ￿ t ( (u)) : E∗ E∗, ∧R −→ given by ￿ ￿ z∗ z∗ (u), ￿→ ◦∧R p+q where z∗ E∗. ∈ We denote z∗ (u)by ￿ ◦∧R u ￿ z∗. In terms of our pairing, the q-vector u ￿ z∗ is uniquely defined by p q p+q u z∗,v = z∗,v u , for all u E, v E and z∗ E∗. ￿ ￿ ￿ ￿ ∧ ￿ ∈ ∈ ∈ It is immediately verified that ￿ ￿ ￿

(u v) z∗ = u (v z∗), ∧ ￿ ￿ ￿ so defines a left action ￿ p p+q q : E E∗ E∗. ￿ × −→ By interchanging E and E∗ and using￿ the isomorphism,￿ ￿

k k ( F )∗ ∼= F ∗, we can also define a left action ￿ ￿ p p+q q : E∗ E E. ￿ × −→ ￿ ￿ ￿ In terms of our pairing, u∗ ￿ z is uniquely defined by p q p+q v∗,u∗ z = v∗ u∗,z , for all u∗ E∗, v∗ E∗ and z E. ￿ ￿ ￿ ￿ ∧ ￿ ∈ ∈ ∈ In order to proceed any further, we need some￿ combinatorial￿ properties￿ of the basis of p E constructed from a basis, (e1,...,en), of E.Recallthatforany(nonempty)subset, I 1,...,n ,welet ￿⊆{ } e = e e , I i1 ∧···∧ ip 22.17. TESTING DECOMPOSABILITY; LEFT AND RIGHT HOOKS 629

where I = {i_1,...,i_p} with i_1 < ··· < i_p. For any two subsets H, L ⊆ {1,...,n}, we define the sign ρ_{H,L} by ρ_{H,L} = 0 if H ∩ L ≠ ∅, and otherwise

ρ_{H,L} = (−1)^ν, where ν = |{(h, l) ∈ H × L | h > l}|.
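The sign ρ_{H,L} is easy to compute by counting the pairs (h, l) with h > l; the zero value on overlapping sets is our reading of the convention, chosen so that Proposition 22.26 (2) below holds with no restriction. A quick Python sketch (names and test sets are ours):

```python
def rho(H, L):
    """rho_{H,L}: 0 if H and L overlap, else (-1)^{#{(h,l) in H x L : h > l}}."""
    if set(H) & set(L):
        return 0
    return (-1) ** sum(1 for h in H for l in L if h > l)

H, L = {1, 4}, {2, 3}
print(rho(H, L), rho(L, H))                                  # 1 1
assert rho(H, L) * rho(L, H) == (-1) ** (len(H) * len(L))    # Proposition 22.26 (1)
assert rho({2}, {1, 3}) == -1                                # e_2 ∧ e_{1,3} = -e_{1,2,3}
```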

Proposition 22.26 For any basis, (e1,...,en), of E the following properties hold:

(1) If H ∩ L = ∅, |H| = h, and |L| = l, then

ρ_{H,L} ρ_{L,H} = (−1)^{hl}.

(2) For H, L ⊆ {1,...,n}, we have

e_H ∧ e_L = ρ_{H,L} e_{H∪L}.

(3) For the left hook

⌟: ⋀^p E × ⋀^{p+q} E* → ⋀^q E*,

we have

e_H ⌟ e_L* = 0                          if H ⊄ L,
e_H ⌟ e_L* = ρ_{L−H,H} e_{L−H}*         if H ⊆ L.

p p+q q Similar formulae hold for ￿ : E∗ E E. Using Proposition 22.26, we have the × −→ ￿ ￿ ￿ Proposition 22.27 For the left hook,

p p+q q : E E∗ E∗, ￿ × −→ ￿ ￿ ￿ for every u E, we have ∈ s u (x∗ y∗)=( 1) (u x∗) y∗ + x∗ (u y∗), ￿ ∧ − ￿ ∧ ∧ ￿ s where y E∗. ∈ ￿ 630 CHAPTER 22. TENSOR ALGEBRAS

Proof .Wecanprovetheaboveidentityassumingthatx∗ and y∗ are of the form eI∗ and eJ∗ us- ing Proposition 22.26 but this is rather tedious. There is also a proof involving determinants, see Warner [147], Chapter 2.

Thus, ￿ is almost an anti-derivation, except that the sign, ( 1)s is applied to the wrong factor. −

It is also possible to define a right interior product or right hook, ￿, using multiplication on the left rather than multiplication on the right. Then, ￿ defines a right action,

p+q p q : E∗ E E∗, ￿ × −→ ￿ ￿ ￿ such that

p q p+q z∗,u v = z∗ u, v , for all u E, v E,andz∗ E∗. ￿ ∧ ￿ ￿ ￿ ￿ ∈ ∈ ∈ Similarly, we have the right action ￿ ￿ ￿

p+q p q : E E∗ E, ￿ × −→ ￿ ￿ ￿ such that

p q p+q u∗ v∗,z = v∗,z u∗ , for all u∗ E∗, v∗ E∗,andz E. ￿ ∧ ￿ ￿ ￿ ￿ ∈ ∈ ∈ p p+q ￿ q ￿ ￿ Since the left hook, : E E∗ E∗,isdefinedby ￿ × −→ ￿ ￿ p￿ q p+q u z∗,v = z∗,v u , for all u E, v E and z∗ E∗, ￿ ￿ ￿ ￿ ∧ ￿ ∈ ∈ ∈ the right hook, ￿ ￿ ￿ p+q p q : E∗ E E∗, ￿ × −→ by ￿ ￿ ￿ p q p+q z∗ u, v = z∗,u v , for all u E, v E,andz∗ E∗, ￿ ￿ ￿ ￿ ∧ ￿ ∈ ∈ ∈ and v u =( 1)pqu v,weconcludethat ￿ ￿ ￿ ∧ − ∧ pq u z∗ =( 1) z∗ u, ￿ − ￿ p p+q where u E and z E∗. ∈ ∈ Using the￿ above property￿ and Proposition 22.27 we get the following version of Proposi- tion 22.27 for the right hook: 22.17. TESTING DECOMPOSABILITY; LEFT AND RIGHT HOOKS 631

Proposition 22.28 For the right hook

⌞: ⋀^{p+q} E* × ⋀^p E → ⋀^q E*,

for every u ∈ E, we have

(x* ∧ y*) ⌞ u = (x* ⌞ u) ∧ y* + (−1)^r x* ∧ (y* ⌞ u),

where x* ∈ ⋀^r E*.

Thus, ⌞ is an anti-derivation.

For u ∈ E, the right hook z* ⌞ u is also denoted i(u)z*, and called insertion operator or interior product. This operator plays an important role in differential geometry. If we view z* ∈ ⋀^{n+1}(E*) as an alternating multilinear map in Alt^{n+1}(E; K), then i(u)z* ∈ Alt^n(E; K) is given by

(i(u)z*)(v_1,...,v_n) = z*(u, v_1,...,v_n).
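Viewed as alternating multilinear maps, the insertion operator literally plugs u into the first slot. A tiny Python sketch (our names; the volume form on R^3 is used as the sample form):

```python
import numpy as np

def insertion(u, form):
    """Interior product i(u): plug u into the first slot of an alternating
    multilinear form (forms are plain Python functions of vectors here)."""
    return lambda *vs: form(u, *vs)

vol = lambda *vs: np.linalg.det(np.column_stack(vs))   # the volume 3-form on R^3
u = np.array([1.0, 2.0, 3.0])
omega = insertion(u, vol)                               # a 2-form
v, w = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
print(omega(v, w), np.linalg.det(np.column_stack([u, v, w])))   # both 1.0
```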

Note that certain authors, such as Shafarevitch [138], denote our right hook z∗ ￿u (which ￿ is also the right hook in Bourbaki [21] and Fulton and Harris [57]) by u ￿ z∗.

p n p Using the two versions of ￿,wecandefinelinearmapsγ : E − E∗ and p n p → δ : E∗ − E.Foranybasis(e1,...,en)ofE, if we let M = 1,...,n , e = e1 en, → ￿ { ￿ } ∧···∧ and e∗ = e∗ e∗ ,then ￿ 1 ∧￿···∧ n γ(u)=u ￿ e∗ and δ(v)=v∗ ￿ e,

p p for all u E and all v∗ E∗.Thefollowingpropositioniseasilyshown. ∈ ∈ p n p p n p Proposition￿ 22.29 The linear￿ maps γ : E − E∗ and δ : E∗ − E are isomorphims. The isomorphisms γ and δ map decomposable→ vectors to decomposable→ vectors. p ￿ ￿ ￿ ￿ p Furthermore, if z E is decomposable, then γ(z),z =0, and similarly for z E∗. ∈ ￿p ￿ n p p ∈ n p If (e1￿ ,...,en￿ ) is any other basis of E and γ￿ : E − E∗ and δ￿ : E∗ − E ￿ → 1 → ￿ are the corresponding isomorphisms, then γ￿ = λγ and δ￿ = λ− δ for some nonzero λ Ω. ￿ ￿ ￿ ￿∈ Proof . Using Proposition 22.26, for any subset J 1,...,n = M such that J = p,we have ⊆{ } | | γ(eJ )=eJ ￿ e∗ = ρM J,J eM∗ J and δ(eJ∗ )=eJ∗ ￿ e = ρM J,J eM J . − − − − Thus, p(n p) δ γ(eJ )=ρM J,J ρJ,M J eJ =( 1) − eJ . ◦ − − − Asimilarresultholdsforγ δ.Thisimpliesthat ◦ p(n p) p(n p) δ γ =( 1) − id and γ δ =( 1) − id. ◦ − ◦ − 632 CHAPTER 22. TENSOR ALGEBRAS

Thus, γ and δ are isomorphisms. If z p E is decomposable, then z = u u where ∈ 1 ∧···∧ p u1,...,up are linearly independent since z =0,andwecanpickabasisofE of the form ￿ ￿ (u1,...,un). Then, the above formulae show that

γ(z)= u∗ u∗ . ± p+1 ∧···∧ n Clearly γ(z),z =0. ￿ ￿ m If (e1￿ ,...,en￿ )isanyotherbasisofE, because E has dimension 1, we have

e￿ e￿ = λ￿e1 en 1 ∧···∧ n ∧···∧ for some nonnull λ Ω, and the rest is trivial. ∈ We are now ready to tacke the problem of finding criteria for decomposability. We need afewpreliminaryresults. Proposition 22.30 Given z p E, with z =0, the smallest vector space W E such that z p W is generated by∈ the vectors of the￿ form ⊆ ∈ ￿ p 1 ￿ u∗ z, with u∗ − E∗. ￿ ∈ Proof . First, let W be any subspace such that z ￿p(E)andlet(e ,...,e ,e ,...,e )be ∈ 1 r r+1 n abasisofE such that (e1,...,er)isabasisofW .Then,u∗ = e∗,whereI 1,...,n ￿ I I ⊆{ } and I = p 1, and z = eJ ,whereJ 1,...,r and J = p r.Itfollowsimmediately | | − J ⊆{ } | | ￿≤ from the formula of Proposition 22.26 (3) that u∗ ￿ z W . ￿ ∈ Next, we prove that if W is the smallest subspace of E such that z p(W ), then W p 1 ∈ is generated by the vectors of the form u∗ ￿ z,whereu∗ − E∗.Supposenot,thenthe p 1 ∈ ￿ vectors u∗ ￿ z with u∗ − E∗ span a proper subspace, U,ofW .Weprovethatforevery ∈ ￿ subspace, W ￿,ofW ,withdim(W ￿) = dim(W ) 1=r 1, it is not possible that u∗ ￿ z W ￿ p 1 ￿ − − ∈ for all u∗ − E∗.Butthen,asU is a proper subspace of W ,itiscontainedinsome ∈ subspace, W ￿,withdim(W ￿)=r 1andwehaveacontradiction. ￿ − Let w W W ￿ and pick a basis of W formed by a basis (e1,...,er 1)ofW ￿ and w.We ∈ − p p 1 − can write z = z￿ + w z￿￿,wherez￿ W ￿ and z￿￿ − W ￿,andsinceW is the smallest ∧ ∈ ∈ subspace containing z,wehavez￿￿ =0.Consequently,ifwewritez￿￿ = eI in terms of ￿ ￿ ￿ I the basis (e1,...,er 1)ofW ￿,thereissomeeI ,withI 1,...,r 1 and I = p 1, so − ⊆{ − } ￿ | | − that the coefficient λI is nonzero. Now, using any basis of E containing (e1,...,er 1,w), by − Proposition 22.26 (3), we see that

e∗ (w e )=λw, λ = 1. I ￿ ∧ I ± It follows that

e∗ z = e∗ (z￿ + w z￿￿)=e∗ z￿ + e∗ (w z￿￿)=e∗ z￿ + λw, I ￿ I ￿ ∧ I ￿ I ￿ ∧ I ￿ with eI∗ ￿ z￿ W ￿,whichshowsthateI∗ ￿ z/W ￿.Therefore,W is indeed generated by the ∈ p 1∈ vectors of the form u∗ z,whereu∗ − E∗. ￿ ∈ ￿ 22.17. TESTING DECOMPOSABILITY; LEFT AND RIGHT HOOKS 633

Proposition 22.31 Any nonzero z p E is decomposable iff ∈ p 1 (u∗ z) z =0￿, for all u∗ − E∗. ￿ ∧ ∈ Proof .Clearly,z p E is decomposable iffthe smallest￿ vector space, W ,suchthatz p ∈ ∈ W has dimension p.Ifdim(W )=p,wehavez = e1 ep where e1,...,ep form a ￿ p 1 ∧···∧ basis of W .ByProposition22.30,foreveryu∗ − E∗,wehaveu∗ ￿ z W ,soeachu∗ ￿ z ￿ ∈ ∈ is a linear combination of the ei’s and (u∗ ￿ z) z =(u∗ ￿ z) e1 ep =0. ∧ ￿ ∧ ∧···∧ p 1 Now, assume that (u∗ z) z =0forallu∗ − E∗ and that dim(W )=n>p.If ￿ ∧ ∈ e1,...,en is a basis of W ,thenwehavez = λI eI ,whereI 1,...,n and I = p. I ￿ ⊆{ } | | Recall that z =0,andso,someλI is nonzero. By Proposition 22.30, each ei can be written ￿ p 1 ￿ p 1 as u∗ z for some u∗ − E∗ and since (u∗ z) z =0forallu∗ − E∗,weget ￿ ∈ ￿ ∧ ∈ ￿ e z =0 for j =1,...,n. ￿ j ∧

By wedging z = I λI eI with each ej,asn>p,wededuceλI =0forallI,soz =0,a contradiction. Therefore, n = p and z is decomposable. ￿ p 1 In Proposition 22.31, we can let u∗ range over a basis of − E∗,andthen,theconditions are ￿ (e∗ z) z =0 H ￿ ∧ p+1 for all H 1,...,n ,with H = p 1. Since (e∗ z) z E,thisisequivalentto ⊆{ } | | − H ￿ ∧ ∈

e∗ ((e∗ z) z)=0 ￿ J H ￿ ∧

for all H, J 1,...,n ,with H = p 1 and J = p +1.Then,forallI,I￿ 1,...,n ⊆{ } | | − | | ⊆{ } with I = I￿ = p,wecanshowthat | | | |

e∗ ((e∗ e ) e )=0, J H ￿ I ∧ I￿ unless there is some i 1,...,n such that ∈{ }

I H = i ,J I￿ = i . − { } − { } In this case,

eJ∗ (eH∗ ￿ eH i ) eJ i = ρ i ,H ρ i ,J i . ∪{ } ∧ −{ } { } { } −{ } If we let ￿ ￿ ￿i,J,H = ρ i ,H ρ i ,J i , { } { } −{ } we have ￿i,J,H =+1iftheparityofthenumberofj J such that j

Proposition 22.32 (Grassmann-Plücker's Equations) For z = Σ_I λ_I e_I ∈ ⋀^p E, the conditions for z ≠ 0 to be decomposable are

Σ_{i ∈ J−H} ε_{i,J,H} λ_{H∪{i}} λ_{J−{i}} = 0,

for all H, J ⊆ {1,...,n} such that |H| = p − 1 and |J| = p + 1.

Using these criteria, it is a good exercise to prove that if dim(E) = n, then every tensor in ⋀^{n−1}(E) is decomposable. This can also be shown directly.

It should be noted that the equations given by Proposition 22.32 are not independent. For example, when dim(E) = n = 4 and p = 2, these equations reduce to the single equation

λ_{12} λ_{34} − λ_{13} λ_{24} + λ_{14} λ_{23} = 0.

When the field K is the field of complex numbers, this is the homogeneous equation of a quadric in CP^5 known as the Klein quadric. The points on this quadric are in one-to-one correspondence with the lines in CP^3.
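For p = 2 and dim(E) = 4, the single equation above is easy to test numerically: the coordinates of a decomposable 2-vector u ∧ v are the 2 × 2 minors of the matrix [u v], and they satisfy the relation, while e_1 ∧ e_2 + e_3 ∧ e_4 does not. A Python sketch (names and sample data are ours; indices are 0-based):

```python
import numpy as np
from itertools import combinations

def two_vector(u, v):
    """Coordinates lambda_{ij} (i<j) of u ∧ v in R^4: the 2x2 minors of [u v]."""
    M = np.column_stack([u, v])
    return {(i, j): np.linalg.det(M[[i, j], :]) for i, j in combinations(range(4), 2)}

def plucker(lam):
    """lambda_12 lambda_34 - lambda_13 lambda_24 + lambda_14 lambda_23."""
    return lam[(0, 1)] * lam[(2, 3)] - lam[(0, 2)] * lam[(1, 3)] + lam[(0, 3)] * lam[(1, 2)]

rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)
print(np.isclose(plucker(two_vector(u, v)), 0.0))        # True: u ∧ v is decomposable
alpha = {k: 0.0 for k in combinations(range(4), 2)}
alpha[(0, 1)] = alpha[(2, 3)] = 1.0                      # e1∧e2 + e3∧e4
print(plucker(alpha))                                    # 1.0: not decomposable
```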

22.18 Vector-Valued Alternating Forms

In this section, the vector space, E,isassumedtohavefinitedimension.Weknowthat n n there is a canonical isomorphism, (E∗) ∼= Alt (E; K), between alternating n-forms and alternating multilinear maps. As in the case of general tensors, the isomorphisms, ￿ n n Alt (E; F ) ∼= Hom( (E),F) n n ￿ Hom( (E),F) = ( (E))∗ F ∼ ⊗ ￿n n￿ ( (E))∗ ∼= (E∗) ￿ ￿ yield a canonical isomorphism

n n Alt (E; F ) ∼= (E∗) F. ￿ ￿ ⊗ ￿ Note that F may have infinite dimension. This isomorphism allows us to view the tensors in n (E∗) F as vector valued alternating forms,apointofviewthatisusefulindifferential × n geometry. If (f1,...,fr)isabasisofF ,everytensor,ω (E∗) F can be written as some￿ linear combination ∈ × r ￿ ω = α f , i ⊗ i i=1 ￿ 22.18. VECTOR-VALUED ALTERNATING FORMS 635

n with α (E∗). We also let i ∈ ￿ n (E; F )= (E∗) F = (E) F. ⊗ ⊗ n=0 ￿ ￿ ￿ ￿ ￿ ￿￿ ￿ Given three vector spaces, F, G, H,ifwehavesomebilinearmap,Φ:F G H,then we can define a multiplication operation, ⊗ → : (E; F ) (E; G) (E; H), ∧Φ × → as follows: For every pair, (m, n￿), we define￿ the multiplication,￿

m n m+n

Φ : (E∗) F (E∗) G (E∗) H, ∧ ￿ ⊗ ￿ × ￿ ⊗ ￿ −→ ⊗ ￿￿ ￿ ￿￿ ￿ ￿ ￿ ￿ by (α f) (β g)=(α β) Φ(f,g). ⊗ ∧Φ ⊗ ∧ ⊗ As in Section 22.15 (following H. Cartan [30]) we can also define a multiplication, : Altm(E; F ) Altm(E; G) Altm+n(E; H), ∧Φ × −→ directly on alternating multilinear maps as follows: For f Altm(E; F )andg Altn(E; G), ∈ ∈ (f g)(u ,...,u )= sgn(σ)Φ(f(u ,...,u ),g(u ,...,u )), ∧Φ 1 m+n σ(1) σ(m) σ(m+1) σ(m+n) σ shuffle(m,n) ∈ ￿ where shuffle(m, n)consistsofall(m, n)-“shuffles”, that is, permutations, σ,of 1,...m+n , such that σ(1) < <σ(m)andσ(m +1)< <σ(m + n). { } ··· ··· In general, not much can be said about Φ unless Φhas some additional properties. In particular, is generally not associative. We∧ also have the map, ∧Φ n n µ: (E∗) F Alt (E; F ), ￿ ￿ ⊗ −→ ￿ defined on generators by

µ((v∗ v∗) a)(u ,...,u )=(det(v∗(u ))a. 1 ∧···∧ n ⊗ 1 n j i Proposition 22.33 The map n n µ: (E∗) F Alt (E; F ), ￿ ￿ ⊗ −→ ￿ defined as above is a canonical isomorphism for every n 0. Furthermore, given any three ≥ n vector spaces, F, G, H, and any bilinear map, Φ: F G H, for all ω ( (E∗)) F and n × → ∈ ⊗ all η ( (E∗)) G, ∈ ⊗ ￿ µ(α Φ β)=µ(α) Φ µ(β). ￿ ∧ ∧ 636 CHAPTER 22. TENSOR ALGEBRAS

n n Proof .Sincewealreadyknowthat( (E∗)) F and Alt (E; F )areisomorphic,itisenough n ⊗ to show that µ maps some basis of ( (E∗)) F to linearly independent elements. Pick ￿ ⊗ some bases, (e1,...,ep)inE and (fj)j J in F . Then, we know that the vectors, eI∗ fj,where ∈￿ n ⊗ I 1,...,p and I = n form a basis of ( (E∗)) F .Ifwehavealineardependence, ⊆{ } | | ⊗

λ µ(￿e∗ f )=0, I,j I ⊗ j I,j ￿

applying the above combination to each (ei1 ,...,ein )(I = i1,...,in , i1 <

λI,jfj =0, j ￿ and by linear independence of the fj’s, we get λI,j =0,forallI and all j.Therefore,the µ(eI∗ fj)arelinearlyindependentandwearedone.Thesecondpartofthepropositionis easily⊗ checked (a simple computation). AspecialcaseofinterestisthecasewhereF = G = H is a and Φ(a, b)=[a, b], is the Lie of F . In this case, using a , (f ,...,f ), of F if we write ω = α f 1 r i i ⊗ i and η = βj fj, we have j ⊗ ￿ ￿ [ω,η]= α β [f ,f ]. i ∧ j ⊗ i j i,j ￿ Consequently, [η,ω]=( 1)mn+1[ω,η]. − The following proposition will be useful in dealing with vector-valued differential forms:

n Proposition 22.34 If (e1,...,ep) is any basis of E, then every element, ω ( (E∗)) F , can be written in a unique way as ∈ ⊗ ￿

ω = e∗ f ,fF, I ⊗ I I ∈ I ￿

where the eI∗ are defined as in Section 22.12.

n Proof .Since,byProposition22.19,theeI∗ form a basis of (E∗), elements of the form n eI∗ f span ( (E∗)) F . Now, if we apply µ(ω)to(ei1 ,...,ein ), where I = i1,...,in 1,...,p⊗ ,weget ⊗ ￿ { }⊆ { } ￿ µ(ω)(e ,...,e )=µ(e∗ f )(e ,...,e )=f . i1 in I ⊗ I i1 in I

Therefore, the fI are uniquely determined by f. Proposition can also be formulated in terms of alternating multilinear maps, a fact that will be useful to deal with differential forms. 22.19. TENSOR PRODUCTS OF MODULES OVER A COMMMUTATIVE RING 637

n n n Define the product, : Alt (E; R) F Alt (E; F ), as follows: For all ω Alt (E; R) and all f F , · × → ∈ ∈ (ω f)(u ,...,u )=ω(u ,...,u )f, · 1 n 1 n n for all u1,...,un E.Then,itisimmediatelyverifiedthatforeveryω ( (E∗)) F of the form ∈ ∈ ⊗ ￿ ω = u∗ u∗ f, 1 ∧···∧ n ⊗ we have µ(u∗ u∗ f)=µ(u∗ u∗ ) f. 1 ∧···∧ n ⊗ 1 ∧···∧ n · Then, Proposition 22.34 yields

n Proposition 22.35 If (e1,...,ep) is any basis of E, then every element, ω Alt (E; F ), can be written in a unique way as ∈

ω = e∗ f ,fF, I · I I ∈ I ￿

where the eI∗ are defined as in Section 22.12.

22.19 Tensor Products of Modules over a Commutative Ring

If R is a with identity (say 1), recall that a over R (or R-module) is an , M,withascalarmultiplication, : R M M,andalltheaxiomsof avectorspacearesatisfied. · × → At first glance, a module does not seem any different from a vector space but the lack of multiplicative inverses in R has drastic consequences, one being that unlike vector spaces, modules are generally not free, that is, have no bases. Furthermore, a module may have torsion elements,thatis,elements,m M,suchthatλ m =0,eventhoughm =0and λ =0. ∈ · ￿ ￿ Nevertheless, it is possible to define tensor products of modules over a ring, just as in Section 22.1 and the results of this section continue to hold. The results of Section 22.3 also continue to hold since they are based on the universal mapping property. However, the results of Section 22.2 on bases generally fail, except for free modules. Similarly, the results of Section 22.4 on duality generally fail. Tensor algebras can be defined for modules, as in Section 22.5. Symmetric tensor and alternating tensors can be defined for modules but again, results involving bases generally fail. Tensor products of modules have some unexpected properties. For example, if p and q are relatively prime , then

Z/pZ Z/qZ =(0). ⊗Z 638 CHAPTER 22. TENSOR ALGEBRAS
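This vanishing follows from a short Bézout argument (a sketch, not from the text): since gcd(p, q) = 1, there are integers a and b with ap + bq = 1, and then, for any generator x ⊗ y,

```latex
% x in Z/pZ (so px = 0) and y in Z/qZ (so qy = 0):
\[
  x \otimes y \;=\; (ap + bq)\,(x \otimes y)
              \;=\; a\,(\underbrace{px}_{=\,0} \otimes\, y) \;+\; b\,(x \otimes \underbrace{qy}_{=\,0})
              \;=\; 0 .
\]
```

Since the elements x ⊗ y generate Z/pZ ⊗_Z Z/qZ, the whole tensor product is the zero module.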

It is possible to salvage certain properties of tensor products holding for vector spaces by restricting the class of modules under consideration. For example, projective modules,have aprettygoodbehaviorw.r.t.tensorproducts.

AfreeR-module, F ,isamodulethathasabasis(i.e.,thereisafamily,(ei)i I ,of ∈ linearly independent vectors in F that span F ). Projective modules have many equivalent characterizations. Here is one that is best suited for our needs:

Definition 22.9 An R-module, P ,isprojective if it is a summand of a , that is, if there is a free R-module, F ,andsomeR-module, Q,sothat

F = P Q. ⊕

Given any R-module, M,weletM ∗ = HomR(M,R)beitsdual.Wehavethefollowing proposition:

Proposition 22.36 For any finitely-generated projective R-modules, P , and any R-module, Q, we have the isomorphisms:

P ∗∗ ∼= P Hom (P, Q) = P ∗ Q. R ∼ ⊗R Sketch of proof .Weonlyconsiderthesecondisomorphism.SinceP is projective, we have some R-modules, P1,F,with P P = F, ⊕ 1 where F is some free module. Now, we know that for any R-modules, U, V, W , we have

Hom (U V,W) = Hom (U, W ) Hom (V,W) = Hom (U, W ) Hom (V,W), R ⊕ ∼ R R ∼ R ⊕ R so ￿ P ∗ P ∗ = F ∗, Hom (P, Q) Hom (P ,Q) = Hom (F, Q). ⊕ 1 ∼ R ⊕ R 1 ∼ R By tensoring with Q and using the fact that tensor distributes w.r.t. , we get

(P ∗ Q) (P ∗ Q) = (P ∗ P ∗) Q = F ∗ Q. ⊗R ⊕ 1 ⊗ ∼ ⊕ 1 ⊗R ∼ ⊗R Now, the proof of Proposition 22.9 goes through because F is free and finitely generated, so

α :(P ∗ R Q) (P1∗ Q) = F ∗ R Q HomR(F, Q) = HomR(P, Q) HomR(P1,Q) ⊗ ⊗ ⊕ ⊗ ∼ ⊗ −→ ∼ ⊕ is an isomorphism and as αα maps P ∗ R Q to HomR(P, Q), it yields an isomorphism between these two spaces. ⊗

The isomorphism α : P ∗ R Q = HomR(P, Q)ofProposition22.36isstillgivenby ⊗ ⊗ ∼

α (u∗ f)(x)=u∗(x)f, u∗ P ∗,f Q, x P. ⊗ ⊗ ∈ ∈ ∈ 22.20. THE POLYNOMIAL 639

It is convenient to introduce the evaluation map,Evx : P ∗ R Q Q,definedforevery x P by ⊗ → ∈ Ev (u∗ f)=u∗(x)f, u∗ P ∗,f Q. x ⊗ ∈ ∈ In Section 11.2 we will need to consider a slightly weaker version of the universal mapping property of tensor products. The situation is this: We have a commutative R-algebra, S, where R is a field (or even a commutative ring), we have two R-modules, U and V ,and moreover, U is a right S-module and V is a left S-module. In Section 11.2, this corresponds i to R = R, S = C∞(B), U = (ξ)andV =Γ(ξ), where ξ is a . Then, we can A form the tensor product, U R V ,andweletU S V be the quotient module, (U R V )/W , where W is the submodule⊗ of U V generated⊗ by the elements of the form ⊗ ⊗R us v u sv. ⊗R − ⊗R As S is commutative, we can make U V into an S-module by defining the action of S via ⊗S s(u v)=us v. ⊗S ⊗S It is immediately verified that this S-module is isomorphic to the tensor product of U and V as S-modules and the following universal mapping property holds:

Proposition 22.37 For every, R-bilinear map, f : U V Z, if f satisfies the property × → f(us, v)=f(u, sv), for all u U, v V, s S, ∈ ∈ ∈ then f induces a unique R-linear map, f : U V Z, such that ⊗S → f(u, v)=f(u v), for all u U, v V. ⊗S ￿ ∈ ∈

Note that the linear map, f : U￿ V Z,isonlyR-linear, it is not S-linear in general. ⊗S → ￿ 22.20 The Pfaffian Polynomial

Let so(2n)denotethevectorspace(actually,Liealgebra)of2n 2n real skew-symmetric × matrices. It is well-known that every matrix, A so(2n), can be written as ∈

A = PDP￿,

where P is an and where D is a block matrix

D1 D2 D =   ...    Dn     640 CHAPTER 22. TENSOR ALGEBRAS

consisting of 2 2blocksoftheform × 0 a D = i . i a −0 ￿ i ￿ For a proof, see see Horn and Johnson [79], Corollary 2.5.14, Gantmacher [61], Chapter IX, or Gallier [58], Chapter 11. 2 Since det(D )=a and det(A) = det(PDP￿)=det(D)=det(D ) det(D ), we get i i 1 ··· n det(A)=(a a )2. 1 ··· n The Pfaffian is a polynomial function, Pf(A), in skew-symmetric 2n 2n matrices, A,(a polynomial in (2n 1)n variables) such that × − Pf(A)2 =det(A)

and for every arbitrary matrix, B,

Pf(BAB￿)=Pf(A)det(B).

The Pfaffian shows up in the definition of the Euler class of a vector bundle. There is a simple way to define the Pfaffian using some exterior algebra. Let (e_1,...,e_{2n}) be any basis of R^{2n}. For any matrix A ∈ so(2n), let

ω(A) = Σ_{i<j} a_{ij} e_i ∧ e_j.

Then, the Pfaffian Pf(A) is characterized by the equation

⋀^n ω(A) = n! Pf(A) e_1 ∧ e_2 ∧ ··· ∧ e_{2n}.

Clearly, Pf(A) is independent of the basis chosen. If A is the block diagonal matrix D, a simple calculation shows that

ω(D) = −(a_1 e_1 ∧ e_2 + a_2 e_3 ∧ e_4 + ··· + a_n e_{2n−1} ∧ e_{2n})

and that

⋀^n ω(D) = (−1)^n n! a_1 ··· a_n e_1 ∧ e_2 ∧ ··· ∧ e_{2n},

and so

Pf(D) = (−1)^n a_1 ··· a_n.

Since Pf(D)^2 = (a_1 ··· a_n)^2 = det(A), we seem to be on the right track.

Proposition 22.38 For every skew symmetric matrix, A so(2n) and every arbitrary ma- trix, B, we have: ∈ (i) Pf(A)2 =det(A)

(ii) Pf(BAB￿)=Pf(A)det(B).

Proof . If we assume that (ii) is proved then, since we can write A = PDP￿ for some orthogonal matrix, P , and some block diagonal matrix, D, as above, as det(P )= 1and Pf(D)2 =det(A), we get ±

2 2 2 2 Pf(A) =Pf(PDP￿) =Pf(D) det(P ) =det(A),

which is (i). Therefore, it remains to prove (ii). 2n Let fi = Bei,fori =1,...,2n,where(e1,...,e2n)isanybasisofR . Since fi = k bkiek, we have ￿ τ = a f f = b a b e e = (BAB￿) e e , ij i ∧ j ki ij lj k ∧ l kl k ∧ l i,j i,j ￿ ￿ ￿k,l ￿k,l and so, as BAB￿ is skew symmetric and e e = e e ,weget k ∧ l − l ∧ k τ =2ω(BAB￿).

Consequently,

n n n n τ =2 ω(BAB￿)=2 n!Pf(BAB￿) e e e . 1 ∧ 2 ∧···∧ 2n Now, ￿ ￿ n τ = Cf f f , 1 ∧ 2 ∧···∧ 2n for some C R.IfB is singular,￿ then the fi are linearly dependent which implies that f f ∈ f =0,inwhichcase, 1 ∧ 2 ∧···∧ 2n Pf(BAB￿)=0,

as e e e =0.Therefore,ifB is singular, det(B)=0and 1 ∧ 2 ∧···∧ 2n ￿ Pf(BAB￿)=0=Pf(A)det(B).

If B is invertible, as τ = a f f =2 a f f , we have i,j ij i ∧ j i

so n τ =2nn!Pf(A)det(B) e e e 1 ∧ 2 ∧···∧ 2n and as ￿n n τ =2 n!Pf(BAB￿) e e e , 1 ∧ 2 ∧···∧ 2n we get ￿ Pf(BAB￿)=Pf(A)det(B), as claimed.

Remark: It can be shown that the polynomial Pf(A) is the unique polynomial with integer coefficients such that Pf(A)^2 = det(A) and Pf(diag(S,...,S)) = +1, where

S = [  0  1 ]
    [ −1  0 ],

see Milnor and Stasheff [110] (Appendix C, Lemma 9). There is also an explicit formula for Pf(A), namely:

Pf(A) = (1/(2^n n!)) Σ_{σ ∈ S_{2n}} sgn(σ) Π_{i=1}^{n} a_{σ(2i−1) σ(2i)}.

Beware, some authors use a different sign convention and require the Pfaffian to have the value +1 on the matrix diag(S′,...,S′), where

S′ = [ 0 −1 ]
     [ 1  0 ].

For example, if R^{2n} is equipped with an inner product ⟨−, −⟩, then some authors define ω(A) as

ω(A) = Σ_{i<j} ⟨Ae_i, e_j⟩ e_i ∧ e_j.
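The explicit formula above can be implemented by brute force for small matrices (it sums over all (2n)! permutations, so it is only a sanity check, not an efficient algorithm). The following Python sketch (our names and random test data) verifies Pf(A)^2 = det(A) and Pf(BAB^⊤) = Pf(A) det(B) for a random 4 × 4 skew-symmetric A:

```python
import numpy as np
from itertools import permutations
from math import factorial

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def pfaffian(A):
    """Pf(A) = 1/(2^n n!) * sum over S_{2n} of sgn(sigma) * prod a_{sigma(2i-1) sigma(2i)}."""
    m = A.shape[0]
    n = m // 2
    total = 0.0
    for p in permutations(range(m)):
        prod = 1.0
        for i in range(n):
            prod *= A[p[2 * i], p[2 * i + 1]]
        total += perm_sign(p) * prod
    return total / (2 ** n * factorial(n))

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M - M.T                      # a skew-symmetric 4x4 matrix
B = rng.standard_normal((4, 4))
print(np.isclose(pfaffian(A) ** 2, np.linalg.det(A)))                     # True
print(np.isclose(pfaffian(B @ A @ B.T), pfaffian(A) * np.linalg.det(B)))  # True
```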

We will also need another property of Pfaffians. Recall that the ring M_n(C) of n × n matrices over C is embedded in the ring M_{2n}(R) of 2n × 2n matrices with real coefficients, using the injective homomorphism that maps every entry z = a + ib ∈ C to the 2 × 2 matrix

[ a −b ]
[ b  a ].

If A ∈ M_n(C), let A_R ∈ M_{2n}(R) denote the real matrix obtained by the above process. Observe that every skew Hermitian matrix A ∈ u(n) (i.e., with A* = (A̅)^⊤ = −A) yields a matrix A_R ∈ so(2n).

Proposition 22.39 For every skew Hermitian matrix A ∈ u(n), we have

Pf(A_R) = i^n det(A).

Proof . It is well-known that a skew Hermitian matrix can be diagonalized with respect to a , U, and that the eigenvalues are pure imaginary or zero, so we can write

A = U diag(ia1, . . . , ian)U ∗,

for some reals, ai R.Consequently,weget ∈

A_R = U_R diag(D_1,...,D_n) U_R^⊤, where

D_i = [  0  −a_i ]
      [ a_i   0  ]

and

Pf(A_R) = Pf(diag(D_1,...,D_n)) = (−1)^n a_1 ··· a_n,

as we saw before (here we used Pf(BAB^⊤) = Pf(A) det(B) and det(U_R) = 1). On the other hand,

det(A) = det(diag(ia_1,...,ia_n)) = i^n a_1 ··· a_n,

and as (−1)^n = i^n i^n, we get

Pf(A_R) = i^n det(A),

as claimed.
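Proposition 22.39 can be checked numerically for small n. The sketch below (our names and random data) embeds a skew-Hermitian 2 × 2 matrix into M_4(R) using the block [[a, −b], [b, a]] for a + ib, as above, and compares the closed-form 4 × 4 Pfaffian a_{12}a_{34} − a_{13}a_{24} + a_{14}a_{23} with i^n det(A):

```python
import numpy as np

def realify(A):
    """Embed a complex n×n matrix into M_{2n}(R): a+ib -> [[a, -b], [b, a]]."""
    n = A.shape[0]
    R = np.zeros((2 * n, 2 * n))
    for r in range(n):
        for c in range(n):
            a, b = A[r, c].real, A[r, c].imag
            R[2*r:2*r+2, 2*c:2*c+2] = [[a, -b], [b, a]]
    return R

def pf4(A):
    """Pfaffian of a 4×4 skew-symmetric matrix: a12 a34 - a13 a24 + a14 a23."""
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rng = np.random.default_rng(4)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = M - M.conj().T                       # a skew-Hermitian 2×2 matrix (n = 2)
AR = realify(A)                          # a real skew-symmetric 4×4 matrix
print(np.allclose(AR, -AR.T))            # True
n = A.shape[0]
print(np.isclose(pf4(AR), ((1j ** n) * np.linalg.det(A)).real))   # True: Pf(A_R) = i^n det(A)
```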

Madsen and Tornehave [100] state Proposition 22.39 using the factor (−i)^n, which is wrong.

Bibliography

[1] Ralph Abraham and Jerrold E. Marsden. Foundations of . Addison Wesley, second edition, 1978.

[2] George E. Andrews, Richard Askey, and Ranjan Roy. Special Functions. Cambridge University Press, first edition, 2000.

[3] Vincent Arsigny. Processing Data in Lie Groups: An Algebraic Approach. Application to Non-Linear Registration and Diffusion Tensor MRI. PhD thesis, Ecole´ Polytech- nique, Palaiseau, France, 2006. Th`ese de Sciences.

[4] Vincent Arsigny, Olivier Commowick, Xavier Pennec, and Nicholas Ayache. A fast and log-euclidean polyaffine framework for locally affine registration. Technical report, INRIA, 2004, route des Lucioles, 06902 Sophia Antipolis Cedex, France, 2006. Report No. 5865.

[5] Vincent Arsigny, Pierre Fillard, Xavier Pennec, and Nicholas Ayache. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM J. on and Applications,29(1):328–347,2007.

[6] Vincent Arsigny, Xavier Pennec, and Nicholas Ayache. Polyrigid and polyaffine trans- formations: a novel geometrical tool to deal with non-rigid deformations–application to the registration of histological slices. Medical Analysis,9(6):507–523,2005.

[7] Michael Artin. Algebra. Prentice Hall, first edition, 1991.

[8] Andreas Arvanitoyeogos. An Introduction to Lie Groups and the Geometry of Homo- geneous Spaces. SML, Vol. 22. AMS, first edition, 2003.

[9] M. F. Atiyah and I. G. Macdonald. Introduction to Commutative Algebra. Addison Wesley, third edition, 1969.

[10] Michael F. Atiyah. K-. Addison Wesley, first edition, 1988.

[11] Michael F Atiyah, Raoul Bott, and Arnold Shapiro. Clifford modules. ,3, Suppl. 1:3–38, 1964.


[12] Sheldon Axler, Paul Bourdon, and Wade Ramey. Harmonic Function Theory.GTM No. 137. Springer Verlag, second edition, 2001.

[13] Andrew Baker. Matrix Groups. An Introduction to Theory. SUMS. Springer, 2002.

[14] Ronen Basri and David W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence,25(2):228–233,2003.

[15] Marcel Berger. G´eom´etrie 1. Nathan, 1990. English edition: Geometry 1, Universitext, Springer Verlag.

[16] Marcel Berger. A Panoramic View of .Springer,2003.

[17] Marcel Berger and Bernard Gostiaux. G´eom´etrie diff´erentielle: vari´et´es,courbes et surfaces. Collection Math´ematiques. Puf, second edition, 1992. English edition: Dif- ferential geometry, , curves, and surfaces, GTM No. 115, Springer Verlag.

[18] William M. Boothby. An Introduction to Differentiable Manifolds and Riemannian Geometry. Academic Press, second edition, 1986.

[19] Raoul Bott and Tu Loring W. Differential Forms in . GTM No. 82. Springer Verlag, first edition, 1986.

[20] . Alg`ebre, Chapitre 9. El´ements de Math´ematiques. Hermann, 1968.

[21] Nicolas Bourbaki. Algèbre, Chapitres 1-3. Éléments de Mathématiques. Hermann, 1970.

[22] Nicolas Bourbaki. Elements of Mathematics. Lie Groups and Lie Algebras, Chapters 1–3.Springer,firstedition,1989.

[23] Nicolas Bourbaki. Topologie G´en´erale, Chapitres 1-4. El´ements de Math´ematiques. Masson, 1990.

[24] Nicolas Bourbaki. Topologie G´en´erale, Chapitres 5-10.El´ementsdeMath´ematiques. CCLS, 1990.

[25] T. Br¨ocker and T. tom Dieck. Representation of Compact Lie Groups.GTM,Vol.98. Springer Verlag, first edition, 1985.

[26] R.L. Bryant. An introduction to Lie groups and symplectic geometry. In D.S. Freed and K.K. Uhlenbeck, editors, Geometry and Quantum Theory, pages 5–181. AMS, Providence, Rhode Island, 1995.

[27] N. Burgoyne and R. Cushman. Conjugacy classes in linear groups. Journal of Algebra, 44:339–362, 1977.

[28] Elie´ Cartan. Theory of .Dover,firstedition,1966.

[29] Henri Cartan. Cours de Calcul Différentiel. Collection Méthodes. Hermann, 1990.

[30] Henri Cartan. Differential Forms.Dover,firstedition,2006.

[31] Roger Carter, Graeme Segal, and Ian Macdonald. Lectures on Lie Groups and Lie Algebras. Cambridge University Press, first edition, 1995.

[32] Sheung H. Cheng, Nicholas J. Higham, Charles Kenney, and Alan J. Laub. Approx- imating the of a matrix to specified accuracy. SIAM Journal on Matrix Analysis and Applications,22:1112–1125,2001.

[33] Shiing-shen Chern. Complex Manifolds without . Universitext. Springer Verlag, second edition, 1995.

[34] Claude Chevalley. Theory of Lie Groups I. Princeton Mathematical , No. 8. Princeton University Press, first edition, 1946. Eighth printing.

[35] Claude Chevalley. The Algebraic Theory of Spinors and Clifford Algebras. Collected Works, Vol. 2. Springer, first edition, 1997.

[36] Yvonne Choquet-Bruhat and C´ecile DeWitt-Morette. Analysis, Manifolds, and Physics, Part II: 92 Applications. North-Holland, first edition, 1989.

[37] Yvonne Choquet-Bruhat, C´ecile DeWitt-Morette, and Margaret Dillard-Bleick. Anal- ysis, Manifolds, and Physics, Part I: . North-Holland, first edition, 1982.

[38] Morton L. Curtis. Matrix Groups. Universitext. Springer Verlag, second edition, 1984.

[39] James F. Davis and Paul Kirk. Lecture Notes in Algebraic Topology.GSM,Vol.35. AMS, first edition, 2001.

[40] Anton Deitmar. A First Course in Harmonic Analysis. UTM. Springer Verlag, first edition, 2002.

1 [41] C. R. DePrima and C. R. Johnson. The range of A− A∗ in GL(n, C). and Its Applications,9:209–222,1974.

[42] Jean Dieudonn´e. Sur les Groupes Classiques. Hermann, third edition, 1967.

[43] Jean Dieudonn´e. Special Functions and Linear Representations of Lie Groups.Regional Conference Series in Mathematics, No. 42. AMS, first edition, 1980.

[44] Jean Dieudonné. Éléments d'Analyse, Tome V. Groupes de Lie Compacts, Groupes de Lie Semi-Simples. Edition Jacques Gabay, first edition, 2003.

[45] Jean Dieudonn´e. El´ementsd’Analyse,´ Tome VI. Analyse Harmonique. Edition Jacques Gabay, first edition, 2003. [46] Jean Dieudonn´e. El´ementsd’Analyse,´ Tome VII. Equations´ Fonctionnelles Lin´eaires. Premi`ere partie, Op´erateurs Pseudo-Diff´erentiels. Edition Jacques Gabay, first edition, 2003. [47] Jean Dieudonn´e. El´ementsd’Analyse,´ Tome II. Chapitres XII `aXV. Edition Jacques Gabay, first edition, 2005. [48] Dragomir Djokovic. On the exponential map in classical lie groups. Journal of Algebra, 64:76–88, 1980. [49] Manfredo P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice Hall, 1976. [50] Manfredo P. do Carmo. Riemannian Geometry.Birkh¨auser,secondedition,1992. [51] B.A. Dubrovin, A.T. Fomenko, and S.P. Novikov. Modern Geometry–Methods and Applications. Part I. GTM No. 93. Springer Verlag, second edition, 1985. [52] B.A. Dubrovin, A.T. Fomenko, and S.P. Novikov. Modern Geometry–Methods and Applications. Part II. GTM No. 104. Springer Verlag, first edition, 1985. [53] J.J. Duistermaat and J.A.C. Kolk. Lie Groups. Universitext. Springer Verlag, first edition, 2000. [54] Gerald B. Folland. A Course in Abstract Harmonic Analysis.CRCPress,firstedition, 1995. [55] Joseph Fourier. Th´eorie Analytique de la Chaleur. Edition Jacques Gabay, first edition, 1822. [56] William Fulton. Algebraic Topology, A first course. GTM No. 153. Springer Verlag, first edition, 1995. [57] William Fulton and Joe Harris. , A first course. GTM No. 129. Springer Verlag, first edition, 1991. [58] Jean H. Gallier. Geometric Methods and Applications, For Science and .TAM,Vol.38.Springer,firstedition,2000. [59] Jean H. Gallier. and square roots of real matrices. Technical report, University of Pennsylvania, Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104, 2008. Report No. MS-CIS-08-12. [60] S. Gallot, D. Hulin, and J. Lafontaine. Riemannian Geometry. Universitext. Springer Verlag, second edition, 1993. BIBLIOGRAPHY 649

[61] F.R. Gantmacher. The Theory of Matrices, Vol. I. AMS Chelsea, first edition, 1977. [62] Christopher Michael Geyer. Catadioptric Projective Geometry: Theory and Applica- tions. PhD thesis, University of Pennsylvania, 200 South 33rd Street, Philadelphia, PA 19104, 2002. Dissertation. [63] Andr´eGramain. Topologie des Surfaces.CollectionSup.Puf,firstedition,1971. [64] Robin Green. Spherical harmonic lighting: The gritty details. In Archives of the Game Developers’ Conference,pages1–47,2003. [65] Marvin J. Greenberg and John R. Harper. Algebraic Topology: A First Course. Addi- son Wesley, first edition, 1981. [66] Phillip Griffiths and Joseph Harris. Principles of . Wiley Inter- science, first edition, 1978. [67] Cindy M. Grimm. Modeling Surfaces of Arbitrary Topology Using Manifolds.PhD thesis, Department of , Brown University, Providence, Rhode Island, USA, 1996. Dissertation. [68] Cindy M. Grimm and John F. Hughes. Modeling surfaces of arbitrary topology using manifolds. In Proceedings of the 22nd ACM Annual Conference on and Interactive Techniques (SIGRAPH’95), pages 359–368. ACM, August 6-11 1995. [69] Victor Guillemin and Alan Pollack. Differential Topology. Prentice Hall, first edition, 1974. [70] Brian Hall. Lie Groups, Lie Algebras, and Representations. An Elementary Introduc- tion. GTM No. 222. Springer Verlag, first edition, 2003. [71] Allen Hatcher. Algebraic Topology. Cambridge University Press, first edition, 2002. [72] Sigurdur Helgason. Groups and . Geometry, Differential Operators and Spherical Functions. MSM, Vol. 83. AMS, first edition, 2000. [73] Sigurdur Helgason. Differential Geometry, Lie Groups, and Symmetric Spaces.GSM, Vol. 34. AMS, first edition, 2001. [74] Nicholas J. Higham. The and squaring method of the revisited. SIAM Journal on Matrix Analysis and Applications,26:1179–1193,2005. [75] D. Hilbert and S. Cohn-Vossen. Geometry and the Imagination. Chelsea Publishing Co., 1952. [76] Morris W. Hirsch. Differential Topology. GTM No. 33. Springer Verlag, first edition, 1976. 650 BIBLIOGRAPHY

[77] Friedrich Hirzebruch. Topological Methods in Algebraic Geometry.SpringerClassics in Mathematics. Springer Verlag, second edition, 1978.

[78] Harry Hochstadt. The Functions of .Dover,firstedition,1986.

[79] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, first edition, 1990.

[80] Roger Howe. Very basic . American Mathematical Monthly,90:600–623, 1983.

[81] James E. Humphreys. Introduction to Lie Algebras and Representation Theory.GTM No. 9. Springer Verlag, first edition, 1972.

[82] Dale Husemoller. Fiber Bundles. GTM No. 20. Springer Verlag, third edition, 1994.

[83] Jürgen Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer Verlag, fourth edition, 2005.

[84] Charles S. Kenney and Alan J. Laub. Condition estimates for matrix functions. SIAM Journal on Matrix Analysis and Applications,10:191–209,1989.

[85] A.A. Kirillov. representations of orthogonal groups. Technical report, University of Pennsylvania, Math. Department, Philadelphia, PA 19104, 2001. Course Notes for Math 654.

[86] A.A. Kirillov. Lectures on the Orbit Method. GSM, Vol. 64. AMS, first edition, 2004.

[87] A.A. Kirillov (Ed.). Representation Theory and Noncommutative Harmonic Analysis. Encyclopaedia of Mathematical Sciences, Vol. 22. Springer Verlag, first edition, 1994.

[88] Wilhelm Klingenberg. Riemannian Geometry.deGruyter&Co,secondedition,1995.

[89] Anthony W. Knapp. Lie Groups Beyond an Introduction. Progress in Mathematics, Vol. 140. Birkhäuser, second edition, 2002.

[90] Shoshichi Kobayashi and Katsumi Nomizu. Foundations of Differential Geometry, II. Wiley Classics. Wiley-Interscience, first edition, 1996.

[91] Wolfgang K¨uhnel. Differential Geometry. Curves–Surfaces–Manifolds.StudentMath- ematical Library, Vol. 16. AMS, first edition, 2002.

[92] Jacques Lafontaine. Introduction Aux Vari´et´esDiff´erentielles.PUG,firstedition,1996.

[93] . Real and Analysis.GTM142.SpringerVerlag,thirdedition, 1996.

[94] Serge Lang. Undergraduate Analysis. UTM. Springer Verlag, second edition, 1997.

[95] Serge Lang. Fundamentals of Differential Geometry. GTM No. 191. Springer Verlag, first edition, 1999.

[96] Blaine H. Lawson and Marie-Louise Michelsohn. Geometry. Princeton Math. Series, No. 38. Princeton University Press, 1989.

[97] N. N. Lebedev. Special Functions and Their Applications.Dover,firstedition,1972.

[98] John M. Lee. Introduction to Smooth Manifolds. GTM No. 218. Springer Verlag, first edition, 2006.

[99] Pertti Lounesto. Clifford Algebras and Spinors. LMS No. 286. Cambridge University Press, second edition, 2001.

[100] Ib Madsen and Jorgen Tornehave. From to . De Rham Cohomol- ogy and Characteristic Classes. Cambridge University Press, first edition, 1998.

[101] Paul Malliavin. G´eom´etrie Diff´erentielle Intrins`eque. Enseignement des Sciences, No. 14. Hermann, first edition, 1972.

[102] Jerrold E. Marsden and T.S. Ratiu. Introduction to Mechanics and Symmetry. TAM, Vol. 17. Springer Verlag, first edition, 1994.

[103] William S. Massey. Algebraic Topology: An Introduction. GTM No. 56. Springer Verlag, second edition, 1987.

[104] William S. Massey. A Basic Course in Algebraic Topology. GTM No. 127. Springer Verlag, first edition, 1991.

[105] Yukio Matsumoto. An Introduction to Morse Theory.TranslationsofMathematical Monographs No 208. AMS, first edition, 2002.

[106] John W. Milnor. Morse Theory. Annals of Math. Series, No. 51. Princeton University Press, third edition, 1969.

[107] John W. Milnor. On of inner product spaces. Inventiones Mathematicae, 8:83–97, 1969.

[108] John W. Milnor. Topology from the Differentiable Viewpoint. The University Press of Virginia, second edition, 1969.

[109] John W. Milnor. of left invariant metrics on lie groups. Advances in Mathematics,21:293–329,1976.

[110] John W. Milnor and James D. Stasheff. Characteristic Classes. Annals of Math. Series, No. 76. Princeton University Press, first edition, 1974.

[111] R. Mneimn´eand F. Testard. Introduction `ala Th´eorie des Groupes de Lie Classiques. Hermann, first edition, 1997.

[112] Jules Molk. Encyclop´ediedes Sciences Math´ematiquesPures et Appliqu´ees.Tome I (premier ), Arithm´etique. Gauthier-Villars, first edition, 1916.

[113] Mitsuo Morimoto. Analytic Functionals on the Sphere.TranslationsofMathematical Monographs No 178. AMS, first edition, 1998.

[114] Shigeyuki Morita. Geometry of Differential Forms.TranslationsofMathematical Monographs No 201. AMS, first edition, 2001.

[115] James R. Munkres. Topology, a First Course. Prentice Hall, first edition, 1975.

[116] James R. Munkres. Analysis on Manifolds. Addison Wesley, 1991.

[117] Raghavan Narasimham. Compact Riemann Surfaces.LectureinMathematicsETH Z¨urich. Birkh¨auser, first edition, 1992.

[118] Mitsuru Nishikawa. On the exponential map of the group O(p, q)0. Memoirs of the Faculty of Science, Kyushu University, Ser. A,37:63–69,1983.

[119] Barrett O’Neill. Semi-Riemannian Geometry With Applications to Relativity.Pure and Applies Math., Vol 103. Academic Press, first edition, 1983.

[120] Xavier Pennec. Intrinsic on Riemannian Manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision,25:127–154,2006.

[121] Peter Petersen. Riemannian Geometry. GTM No. 171. Springer Verlag, second edition, 2006.

[122] L. Pontryagin. Topological Groups. Princeton University Press, first edition, 1939.

[123] L. Pontryagin. Topological Groups.GordonandBreach,secondedition,1960.

[124] Ian R. Porteous. Topological Geometry. Cambridge University Press, second edition, 1981.

[125] M.M. Postnikov. Geometry VI. Riemannian Geometry.EncyclopaediaofMathematical Sciences, Vol. 91. Springer Verlag, first edition, 2001.

[126] Marcel Riesz. Clifford Numbers and Spinors. Kluwer Academic Press, first edition, 1993. Edited by E. Folke Bolinder and Pertti Lounesto.

[127] Wulf Rossmann. Lie Groups. An Introduction Through Linear Groups. Graduate Texts in Mathematics. Oxford University Press, first edition, 2002.

[128] Joseph J. Rotman. Introduction to Algebraic Topology. GTM No. 119. Springer Verlag, first edition, 1988.

[129] Arthur A. Sagle and Ralph E. Walde. Introduction to Lie Groups and Lie Algebras. Academic Press, first edition, 1973.

[130] Takashi Sakai. Riemannian Geometry. Mathematical Monographs No 149. AMS, first edition, 1996.

[131] Hans Samelson. Notes on Lie Algebras. Universitext. Springer, second edition, 1990.

[132] Giovanni Sansone. .Dover,firstedition,1991.

[133] Hajime Sato. Algebraic Topology: An Intuitive Approach.MathematicalMonographs No 183. AMS, first edition, 1999.

[134] D.H. Sattinger and O.L. Weaver. Lie Groups and Algebras with Applications to Physics, Geometry, and Mechanics. Applied Math. Science, Vol. 61. Springer Verlag, first edition, 1986.

[135] Laurent Schwartz. Analyse II. Calcul Diff´erentiel et Equations Diff´erentielles.Collec- tion Enseignement des Sciences. Hermann, 1992.

[136] Jean-Pierre Serre. Lie Algebras and Lie Groups. Lecture Notes in Mathematics, No. 1500. Springer, second edition, 1992.

[137] Jean-Pierre Serre. Complex Semisimple Lie Algebras. Springer Monographs in Math- ematics. Springer, first edition, 2000.

[138] Igor R. Shafarevich. Basic Algebraic Geometry 1.SpringerVerlag,secondedition, 1994.

[139] Richard W. Sharpe. Differential Geometry. Cartan’s Generalization of Klein’s Erlan- gen Program. GTM No. 166. Springer Verlag, first edition, 1997.

[140] Marcelo Siqueira, Dianna Xu, and Jean Gallier. Construction of C∞-surfaces from triangular meshes using parametric pseudo-manifolds. Technical report, University of Pennsylvania, Levine Hall, Philadelphia, PA 19104, 2008. pdf file available from http://repository.upenn.edu/cis reports/877.

[141] Norman Steenrod. The Topology of Fibre Bundles. Princeton Math. Series, No. 14. Princeton University Press, 1956.

[142] Elias M. Stein and Guido Weiss. Introduction to on Euclidean Spaces. Princeton Math. Series, No. 32. Princeton University Press, 1971.

[143] S. Sternberg. Lectures On Differential Geometry. AMS Chelsea, second edition, 1983.

[144] Michael E. Taylor. Noncommutative Harmonic Analysis.MathematicalSurveysand Monographs, No. 22. AMS, first edition, 1986.

[145] Loring W. Tu. An Introduction to Manifolds. Universitext. Springer Verlag, first edition, 2008.

[146] N.J. Vilenkin. Special Functions and the Theory of Group Representations.Transla- tions of Mathematical Monographs No 22. AMS, first edition, 1968.

[147] Frank Warner. Foundations of Differentiable Manifolds and Lie Groups. GTM No. 94. Springer Verlag, first edition, 1983.

[148] Andr´eWeil. Foundations of Algebraic Topology. Colloquium Publications, Vol. XXIX. AMS, second edition, 1946.

[149] Andr´eWeil. L’Int´egration dans les Groupes Topologiques et ses Applications. Hermann, second edition, 1979.

[150] R.O. Wells. Differential Analysis on Complex Manifolds. GTM No. 65. Springer Verlag, second edition, 1980.

[151] . The Classical Groups. Their Invariants and Representations.Prince- ton Mathematical Series, No. 1. Princeton University Press, second edition, 1946.