
Lie Groups, Lie Algebras, Universal Enveloping Algebras, and the Poincaré-Birkhoff-Witt Theorem

Lucas Lingle August 22, 2012

Abstract. We investigate the fundamental ideas behind Lie groups, Lie algebras, and universal enveloping algebras. In particular, we emphasize the useful properties of the exponential mapping, which allows us to transition between Lie groups and Lie algebras. From there, we discuss universal enveloping algebras, prove their existence and uniqueness, and after introducing the necessary machinery, we prove the Poincaré-Birkhoff-Witt Theorem.

1 Introduction

In the first section, we introduce Lie groups and prove some basic theorems about them. In the second section, we discuss and prove the properties of the exponential mapping. In the third section, we introduce Lie algebras and prove some important facts relating Lie groups to Lie algebras. In the fourth section, we introduce universal enveloping algebras, and prove their existence and uniqueness. In the fifth and final section, we prove the Poincaré-Birkhoff-Witt Theorem and its corollaries.

2 Lie Groups

Definition 2.1. A Lie group G is a group which is also a finite-dimensional smooth manifold, and in which the group operation and inversion are smooth maps.

Definition 2.2. The general linear group over the real numbers, denoted GLn(R), is the set of all n × n invertible real matrices, equipped with the operation of matrix multiplication. Similarly, the general linear group over the complex numbers, denoted GLn(C), is the set of all n × n invertible complex matrices, equipped with the operation of matrix multiplication.

Since the general linear groups only contain invertible matrices, each matrix in $\mathrm{GL}_n(\mathbb{R})$ has an inverse in $\mathrm{GL}_n(\mathbb{R})$, so the general linear groups are closed under inversion. Since the product $AB$ of any two invertible matrices $A$ and $B$ is also invertible, and has entries in the same field as $A$ and $B$, the general linear groups are closed under the group operation. Lastly, since matrix multiplication is associative, the elements of $\mathrm{GL}_n(\mathbb{R})$ associate. Hence, $\mathrm{GL}_n(\mathbb{R})$ is a group. The above logic likewise holds for $\mathrm{GL}_n(\mathbb{C})$. More abstractly, the general linear group of a vector space $V$, written $\mathrm{GL}(V)$, is the automorphism group of $V$, whose elements can be written in matrix form but can also be thought of as operators that form a group under composition.

Definition 2.3. Denote the set of all n × n complex matrices by Mn(C).

Definition 2.4. Let $\{A_m\}$ be a sequence of complex matrices in $M_n(\mathbb{C})$. We say that $\{A_m\}$ converges to a matrix $A$ if each entry of the matrices in the sequence converges to the corresponding entry of $A$. That is, if $(A_m)_{kl}$ converges to $A_{kl}$ for all $1 \le k, l \le n$, we say $\{A_m\}$ converges to $A$.
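Entrywise convergence is easy to observe numerically. The following sketch (in NumPy; the matrix sequence is an illustrative choice of ours, not from the text) checks Definition 2.4 for a simple sequence:

```python
import numpy as np

# An illustrative sequence: A_m = [[1, 1/m], [0, 1]] converges entrywise
# to the identity matrix as m grows.
def A(m):
    return np.array([[1.0, 1.0 / m],
                     [0.0, 1.0]])

limit = np.eye(2)
assert np.allclose(A(10**6), limit, atol=1e-5)   # every entry within 1e-5 of the limit
assert not np.allclose(A(10), limit, atol=1e-5)  # but not yet for small m
```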

Definition 2.5. A matrix Lie group is any subgroup $G$ of $\mathrm{GL}_n(\mathbb{C})$ such that if $\{A_m\}$ is any sequence of matrices in $G$ converging to some matrix $A$, then either $A$ is in $G$ or else $A$ is not invertible. Thus a matrix Lie group is algebraically closed under the group operation inherited from $\mathrm{GL}_n(\mathbb{C})$, and is also a topologically closed subset of $\mathrm{GL}_n(\mathbb{C})$. In other words, a matrix Lie group is a closed subgroup of $\mathrm{GL}_n(\mathbb{C})$.

Definition 2.6. A matrix Lie group $G$ is said to be compact if the following two conditions are satisfied:

1. If $\{A_m\}$ is any sequence of matrices in $G$ and $\{A_m\}$ converges to a matrix $A$, then $A$ is in $G$.

2. There is some $C \in \mathbb{R}$ such that for all matrices $A \in G$, $|A_{ij}| \le C$ for all $1 \le i, j \le n$.

Definition 2.7. A matrix Lie group $G$ is connected if given any two matrices $A, B \in G$, there exists a continuous path $A(t)$, for $a \le t \le b$, so that $A(a) = A$ and $A(b) = B$. Technically, this is what is known as path-connectedness in topology, which in general is not the same as connectedness. However, a matrix Lie group is connected if and only if it is path-connected, and so we shall continue to refer to matrix Lie groups as connected when they are path-connected.

Definition 2.8. A matrix Lie group $G$ that is not connected can be uniquely described as a union of disjoint connected sets. Each such set is called a component of $G$.

Proposition 2.9. If $G$ is a matrix Lie group, then the component of $G$ containing the identity is a subgroup of $G$.

Proof. Let $A$ and $B$ be two matrices in the component of $G$ containing the identity. Then there exist two continuous paths $A(t)$ and $B(t)$, with $A(0) = B(0) = I$, $A(1) = A$, and $B(1) = B$. Then $A(t)B(t)$ is a continuous path from $I$ to $AB$. But $A$ and $B$ are any two elements of the identity component, and their product $AB$ is also in the identity component, since the continuous path given by $A(t)B(t)$ goes from $I$ to $AB$, and such a continuous path can only be formed between elements of the same component. Let $(A(t))^{-1}$ denote the inverse of the matrix given by $A(t)$, for each $t$. Then $(A(t))^{-1}$ goes from $I$ to $A^{-1}$, and by the same logic as above, $A^{-1}$ must be in the identity component as well. Since the identity component is closed under the inherited group operation and under inversion, it is a subgroup of $G$. ∎

Definition 2.10. Let $G$ and $H$ be matrix Lie groups. A map $\Phi : G \to H$ is called a Lie group homomorphism if $\Phi$ is continuous and $\Phi(g_1g_2) = \Phi(g_1)\Phi(g_2)$ for all $g_1, g_2 \in G$. If $\Phi$ is a bijective Lie group homomorphism and $\Phi^{-1}$ is continuous, then $\Phi$ is called a Lie group isomorphism.

3 The Exponential Mapping

Although Lie groups are endowed with some extra structure and thus are an easier form of manifold to study, they themselves can still be difficult to deal with. For this reason, we often deal with a more wieldy object, namely the Lie algebra corresponding to the group. In order to transfer information from the Lie algebra to the Lie group, we use a function called the exponential mapping.

Definition 3.1. Let $X$ be any matrix. Define the matrix exponential by

$$e^X = \sum_{m=0}^{\infty} \frac{X^m}{m!}.$$
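The series can be summed directly on a computer. Below is a minimal NumPy sketch of the partial sums of the definition (the truncation at 30 terms and the test matrix are illustrative choices of ours):

```python
import numpy as np

def matrix_exp(X, terms=30):
    """Approximate e^X by the partial sum of X^m / m! for m = 0, ..., terms - 1."""
    result = np.zeros_like(X, dtype=float)
    term = np.eye(X.shape[0])        # X^0 / 0! = I
    for m in range(terms):
        result = result + term
        term = term @ X / (m + 1)    # next term: X^{m+1} / (m+1)!
    return result

# For the rotation generator X below, e^{tX} is rotation by angle t,
# so e^{pi X} should be -I.
X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
assert np.allclose(matrix_exp(np.pi * X), -np.eye(2), atol=1e-8)
```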

One might wonder if this series even converges. As we will see shortly, the answer is an emphatic yes. First, though, we must introduce a few new concepts.

Definition 3.2. The Hilbert-Schmidt norm of an $n \times n$ matrix $X$ is given by

$$\|X\| = \left( \sum_{j=1}^{n} \sum_{i=1}^{n} |x_{ij}|^2 \right)^{1/2}.$$

It is easy to verify, using the triangle and Cauchy-Schwarz inequalities, that the norm obeys the following:

$$\|X + Y\| \le \|X\| + \|Y\|,$$

$$\|XY\| \le \|X\|\,\|Y\|.$$
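Both inequalities are easy to spot-check numerically. The sketch below (NumPy, with randomly chosen matrices as an illustration of ours) computes the Hilbert-Schmidt norm directly; note that it coincides with what NumPy calls the Frobenius norm.

```python
import numpy as np

def hs_norm(X):
    # Hilbert-Schmidt norm: square root of the sum of squared entry magnitudes
    return np.sqrt(np.sum(np.abs(X) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

assert np.isclose(hs_norm(X), np.linalg.norm(X, 'fro'))     # same as the Frobenius norm
assert hs_norm(X + Y) <= hs_norm(X) + hs_norm(Y) + 1e-12    # triangle inequality
assert hs_norm(X @ Y) <= hs_norm(X) * hs_norm(Y) + 1e-12    # submultiplicativity
```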

Proposition 3.3. For any $n \times n$ real or complex matrix $X$, the series above converges. Furthermore, $e^X$ is a continuous function of $X$.

Proof. Since we are working with matrices having real or complex entries, we know that there is some entry whose absolute value is greatest among the entries. Let $M$ denote the maximum, in absolute value, of all entries of the matrix $X$. Then $|(X)_{ij}| \le M$, and since $X$ is an $n \times n$ matrix, $|(X^2)_{ij}| \le nM^2$, and so on. In general, $|(X^m)_{ij}| \le n^{m-1}M^m$. Then

$$\sum_{m=0}^{\infty} \frac{n^{m-1}M^m}{m!}$$

converges by a simple application of the ratio test. Then since $|(X^m)_{ij}| \le n^{m-1}M^m$, we can use the comparison test. Thus, the sum

$$\sum_{m=0}^{\infty} \frac{|(X^m)_{ij}|}{m!} = \sum_{m=0}^{\infty} \left| \left( \frac{X^m}{m!} \right)_{ij} \right|$$

converges as well. Then by a basic theorem from analysis, we know that since

$$\sum_{m=0}^{\infty} \left( \frac{X^m}{m!} \right)_{ij}$$

converges absolutely, it converges in general. By Definition 2.4, we know the sequence (of partial sums) of matrices converges, and hence

$$\sum_{m=0}^{\infty} \frac{X^m}{m!} = e^X$$

converges. It is easy to see that $e^X$ is continuous. ∎

Now that we see that the exponential mapping is well-behaved, we can prove some important properties about it.

Proposition 3.4. Let $X$ and $Y$ be arbitrary $n \times n$ matrices, and let $M^*$ denote the conjugate transpose of a matrix $M$. Then we have the following:

1. $e^0 = I$,

2. $(e^X)^* = e^{(X^*)}$,

3. $e^X$ is invertible and $(e^X)^{-1} = e^{-X}$,

4. $e^{(\alpha+\beta)X} = e^{\alpha X} e^{\beta X}$ for all $\alpha, \beta \in \mathbb{C}$,

5. if $XY = YX$, then $e^{X+Y} = e^X e^Y = e^Y e^X$,

6. if $C$ is invertible, then $e^{CXC^{-1}} = C e^X C^{-1}$,

7. $\|e^X\| \le e^{\|X\|}$.
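Several of these properties can be spot-checked numerically. A sketch using SciPy's `expm` (the random matrices and the shift that keeps $C$ invertible are illustrative choices of ours):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # shifted so C is safely invertible
Cinv = np.linalg.inv(C)

# Point 3: e^X is invertible with inverse e^{-X}.
assert np.allclose(expm(X) @ expm(-X), np.eye(3))

# Point 5, special case: X commutes with itself, so e^{2X} = e^X e^X.
assert np.allclose(expm(2.0 * X), expm(X) @ expm(X))

# Point 6: e^{C X C^{-1}} = C e^X C^{-1}.
assert np.allclose(expm(C @ X @ Cinv), C @ expm(X) @ Cinv)
```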

Proof. Point 1 is obvious, and Point 2 follows from taking the conjugate transposes term-wise. Points 3 and 4 are special cases of Point 5. For Point 5, we note that since $e^Z$ converges for all $Z$, $e^X e^Y$ is defined for all $X$ and $Y$. Furthermore,

$$e^X e^Y = \left( I + X + \frac{X^2}{2!} + \cdots \right) \left( I + Y + \frac{Y^2}{2!} + \cdots \right).$$

Multiplying out, and collecting terms where the power of $X$ plus the power of $Y$ is $m$, we get

$$e^X e^Y = \sum_{m=0}^{\infty} \sum_{k=0}^{m} \frac{X^k}{k!} \frac{Y^{m-k}}{(m-k)!} = \sum_{m=0}^{\infty} \frac{1}{m!} \sum_{k=0}^{m} \frac{m!}{k!(m-k)!} X^k Y^{m-k}.$$

And since $X$ and $Y$ commute,

$$(X + Y)^m = \sum_{k=0}^{m} \frac{m!}{k!(m-k)!} X^k Y^{m-k}.$$

So we get

$$e^X e^Y = \sum_{m=0}^{\infty} \frac{1}{m!} (X + Y)^m = e^{X+Y}.$$

Point 6 follows immediately, since each term of the matrix exponential can be written as

$$\frac{(CXC^{-1})^m}{m!} = \frac{(CXC^{-1})(CXC^{-1}) \cdots (CXC^{-1})}{m!} = C \left( \frac{X^m}{m!} \right) C^{-1}.$$

For Point 7, notice that for each $m \in \mathbb{N}$, by the Cauchy-Schwarz inequality,

$$\left\| \frac{X^m}{m!} \right\| = \frac{\|X^m\|}{m!} \le \frac{\|X\|^m}{m!}.$$

And since $\|X\|$ is a real number,

$$e^{\|X\|} = \sum_{m=0}^{\infty} \frac{\|X\|^m}{m!}$$

converges. By the comparison test, we know that

$$\sum_{m=0}^{\infty} \left\| \frac{X^m}{m!} \right\|$$

converges as well. It follows from the triangle inequality that

$$S_K := \left\| \sum_{m=0}^{K} \frac{X^m}{m!} \right\| \le \sum_{m=0}^{K} \left\| \frac{X^m}{m!} \right\| \le \sum_{m=0}^{K} \frac{\|X\|^m}{m!} =: L_K.$$

Since the sequence defined by

$$E_K := \sum_{m=0}^{K} \frac{X^m}{m!}$$

converges (to $e^X$), we know that the sequence

$$S_K := \left\| \sum_{m=0}^{K} \frac{X^m}{m!} \right\|$$

converges as well (to $\|e^X\|$). It follows that

$$\lim_{K\to\infty} S_K = \|e^X\| \le e^{\|X\|} = \lim_{K\to\infty} L_K. \qquad ∎$$

Proposition 3.5. Let $X$ be an $n \times n$ complex matrix. Then $e^{tX}$ is a smooth curve in $\mathrm{GL}_n(\mathbb{C})$ and

$$\frac{d}{dt} e^{tX} = X e^{tX} = e^{tX} X.$$

In particular,

$$\left. \frac{d}{dt} e^{tX} \right|_{t=0} = X.$$

Proof. For each $i$ and $j$, we know $(e^{tX})_{ij}$ is given by an everywhere convergent power series, and so we can find $\frac{d}{dt} e^{tX}$ by differentiating the power series for $e^{tX}$ term by term. Everything else follows immediately. ∎

Proposition 3.6. Let $X$ and $Y$ be $n \times n$ complex matrices. Then

$$e^{X+Y} = \lim_{m\to\infty} \left( e^{X/m} e^{Y/m} \right)^m.$$

Though this result is important, we will not prove it here, as it relies on the matrix logarithm, which we have avoided discussing due to space constraints. A good proof can be found in [1].
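Although we omit the proof, the Lie product formula can still be observed numerically. A SciPy sketch (the matrix size, the scaling, and the cutoff $m = 5000$ are illustrative choices of ours); the error decays roughly like $1/m$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = 0.3 * rng.standard_normal((3, 3))
Y = 0.3 * rng.standard_normal((3, 3))

def lie_product(X, Y, m):
    # (e^{X/m} e^{Y/m})^m, the m-th Lie product approximation
    return np.linalg.matrix_power(expm(X / m) @ expm(Y / m), m)

# The approximation approaches e^{X+Y} as m grows.
err = lambda m: np.max(np.abs(lie_product(X, Y, m) - expm(X + Y)))
assert err(5000) < 1e-2
assert err(5000) < err(5)
```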

Definition 3.7. A function $A : \mathbb{R} \to \mathrm{GL}_n(\mathbb{C})$ is a one-parameter subgroup of $\mathrm{GL}_n(\mathbb{C})$ if

1. $A$ is continuous,

2. $A(0) = I$,

3. $A(t + s) = A(t)A(s)$ for all $t, s \in \mathbb{R}$.
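A quick numerical illustration (a SciPy sketch; the rotation generator below is an example of ours): $A(t) = e^{tX}$ satisfies conditions 2 and 3, and condition 1 holds since each entry of $e^{tX}$ is a power series in $t$.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0],
              [1.0, 0.0]])
A = lambda t: expm(t * X)  # A(t) = e^{tX}; here, rotation by angle t

assert np.allclose(A(0.0), np.eye(2))              # condition 2: A(0) = I
assert np.allclose(A(0.3 + 0.4), A(0.3) @ A(0.4))  # condition 3: A(t+s) = A(t)A(s)
```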

Theorem 3.8. If $A$ is a one-parameter subgroup of $\mathrm{GL}_n(\mathbb{C})$, then there exists a unique $n \times n$ complex matrix $X$ so that $A(t) = e^{tX}$.

Though this is an important result that we will use later, we will not prove it; the proof builds upon the concept of the matrix logarithm. Skeptics should consult [1].

4 Lie Algebras

As explained in the previous section, it will be convenient to explore a Lie group's Lie algebra, its tangent space at the identity element. Such inquiry will be quite rewarding, as it will let us discover important and otherwise difficult-to-access information with ease.

Definition 4.1. A finite-dimensional real or complex Lie algebra is a finite-dimensional real or complex vector space $\mathfrak{g}$ together with a map $[\cdot,\cdot]$ from $\mathfrak{g} \times \mathfrak{g}$ into $\mathfrak{g}$ with the following properties:

1. $[\cdot,\cdot]$ is bilinear,

2. $[X,Y] = -[Y,X]$ for all $X, Y \in \mathfrak{g}$,

3. $[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0$ for all $X, Y, Z \in \mathfrak{g}$. This property is called the Jacobi identity.

Definition 4.2. Let $G$ be a matrix Lie group. The Lie algebra of $G$, denoted $\mathfrak{g}$, is the set of all matrices $X$ such that $e^{tX} \in G$ for all real numbers $t$, and we refer to $\mathfrak{g}$ as a matrix Lie algebra.
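As a concrete example (a SciPy sketch of ours, not part of the formal development): the skew-symmetric matrices form the Lie algebra of the rotation group, since $e^{tX}$ is then orthogonal with determinant one for every real $t$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
X = M - M.T  # skew-symmetric: X^T = -X

# e^{tX} lies in SO(3) for every real t: orthogonal, determinant 1.
for t in (-2.0, 0.5, 3.0):
    A = expm(t * X)
    assert np.allclose(A.T @ A, np.eye(3))    # orthogonality
    assert np.isclose(np.linalg.det(A), 1.0)  # determinant one (X has trace 0)
```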

These definitions may not seem to coincide; after proving the next few propositions, it will become clear that matrix Lie algebras satisfy Definition 4.1.

Proposition 4.3. Let $G$ be a matrix Lie group, and $X$ an element of its Lie algebra. Then $e^X$ is an element of the identity component of $G$.

Proof. By the definition of the Lie algebra of a matrix Lie group, we know $e^{tX} \in G$ for all real numbers $t$. We know $A(t) = e^{tX}$ is a continuous path going from $I$ to $e^X$ as $t$ goes from $0$ to $1$, and since $I$ is in the identity component of $G$, we know $e^X$ is as well. ∎

Proposition 4.4. Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. Let $X$ be an element of $\mathfrak{g}$, and $A$ be an element of $G$. Then $AXA^{-1}$ is in $\mathfrak{g}$.

Proof. It follows from Proposition 3.4 that $e^{tAXA^{-1}} = A e^{tX} A^{-1}$. By the definition of the Lie algebra of a matrix Lie group we know that $e^{tX}$ is in $G$ for all real $t$, and since $G$ is closed under inversion, we know $A^{-1}$ is in $G$; thus $A e^{tX} A^{-1} = e^{t(AXA^{-1})} \in G$, which implies $AXA^{-1} \in \mathfrak{g}$ by the definition of the Lie algebra of a matrix Lie group. ∎

Now we are well-positioned to prove that matrix Lie algebras are indeed Lie algebras: as this next theorem tells us, they are vector spaces which we can equip with a bilinear antisymmetric operation satisfying the Jacobi identity.

Theorem 4.5. Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$, and let $X$ and $Y$ be elements of $\mathfrak{g}$. Then

1. $sX \in \mathfrak{g}$ for all real numbers $s$,

2. $X + Y \in \mathfrak{g}$,

3. $XY - YX \in \mathfrak{g}$.

Proof. Part 1. For $X \in \mathfrak{g}$, and all real $t$ and $s$, we know $e^{(ts)X} \in G$. Then since $e^{(ts)X} = e^{t(sX)}$, we know that $e^{t(sX)} \in G$ for all real $t$ and $s$. Then by the definition of the Lie algebra of a matrix Lie group we know $sX \in \mathfrak{g}$ for all real $s$.

Part 2. If $X, Y \in \mathfrak{g}$ commute, then for all real $t$, we know $e^{t(X+Y)} = e^{tX} e^{tY}$. Clearly $e^{tX}$ and $e^{tY}$ are in $G$ for all real $t$, so we know $e^{t(X+Y)}$ is in $G$ as well, which means $X + Y \in \mathfrak{g}$. In the general case, however, we use Proposition 3.6, the Lie product formula:

$$e^{t(X+Y)} = \lim_{m\to\infty} \left( e^{tX/m} e^{tY/m} \right)^m.$$

Because $X, Y \in \mathfrak{g}$, we know that $e^{tX/m}$ and $e^{tY/m}$ are in $G$, and so is $(e^{tX/m} e^{tY/m})^m$, since $G$ is a group. However, since $G$ is a matrix Lie group, the limit of a sequence of elements of $G$ is also in $G$, so long as the limit is invertible. Since $e^{t(X+Y)}$ is invertible, we know $\lim_{m\to\infty} (e^{tX/m} e^{tY/m})^m = e^{t(X+Y)}$ is in $G$. This implies that $X + Y \in \mathfrak{g}$.

Part 3. Using Proposition 3.5, we can see that

$$\left. \frac{d}{dt} e^{tX} Y \right|_{t=0} = XY.$$

Then by the product rule,

$$\left. \frac{d}{dt} e^{tX} Y e^{-tX} \right|_{t=0} = (XY)e^0 + (e^0 Y)(-X) = XY - YX.$$

By Parts 1 and 2, we know that $\mathfrak{g}$ is a vector space. By Proposition 4.4, we know $e^{tX} Y e^{-tX} \in \mathfrak{g}$ for all real $t$. Finally, since $\gamma(t) = e^{tX} Y e^{-tX}$ is a smooth curve through $\mathfrak{g}$, we know the derivative of $\gamma$ with respect to $t$ exists and is always in $\mathfrak{g}$; indeed, the derivative of a curve in a vector space is always in that vector space. Therefore,

$$\left. \frac{d}{dt} \gamma(t) \right|_{t=0} = XY - YX$$

is in $\mathfrak{g}$. ∎

Definition 4.6. Given two $n \times n$ matrices $A$ and $B$, the commutator of $A$ and $B$, denoted $[A, B]$, is defined to be $AB - BA$.

It is easy to verify that the commutator satisfies all the necessary properties that the bracket of a Lie algebra must have. Furthermore, since the commutator of two matrices in a matrix Lie algebra is also in that matrix Lie algebra, we shall use the commutator as our bracket operation when dealing with matrix Lie algebras. It should be noted that $[\cdot,\cdot]$ is used to denote the Lie bracket of any Lie algebra and need not correspond to the commutator. However, since we tend to use the commutator as our Lie bracket for matrix Lie algebras, it inherits the somewhat ambiguous bracket notation.
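A numerical spot-check (a NumPy sketch with illustrative skew-symmetric matrices of our choosing): the commutator of two elements of $\mathfrak{so}(3)$ is again skew-symmetric, and the bracket axioms of Definition 4.1 hold.

```python
import numpy as np

def bracket(X, Y):
    # the commutator [X, Y] = XY - YX
    return X @ Y - Y @ X

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3)); X = X - X.T  # two illustrative elements
Y = rng.standard_normal((3, 3)); Y = Y - Y.T  # of so(3) (skew-symmetric)

Z = bracket(X, Y)
assert np.allclose(Z.T, -Z)  # closure: [X, Y] is again skew-symmetric

# Antisymmetry and the Jacobi identity:
assert np.allclose(bracket(X, Y), -bracket(Y, X))
J = bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X)) + bracket(Z, bracket(X, Y))
assert np.allclose(J, np.zeros((3, 3)))
```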

Definition 4.7. A subalgebra of a real or complex Lie algebra $\mathfrak{g}$ is a subspace $\mathfrak{h}$ of $\mathfrak{g}$ such that $[H_1, H_2] \in \mathfrak{h}$ for all $H_1, H_2 \in \mathfrak{h}$. If $\mathfrak{g}$ is a complex Lie algebra and $\mathfrak{h}$ is a real subspace of $\mathfrak{g}$ closed under brackets, then we say that $\mathfrak{h}$ is a real subalgebra of $\mathfrak{g}$. If $\mathfrak{g}$ and $\mathfrak{h}$ are Lie algebras, then a linear map $\varphi : \mathfrak{g} \to \mathfrak{h}$ is called a Lie algebra homomorphism if $\varphi([X,Y]) = [\varphi(X), \varphi(Y)]$ for all $X, Y \in \mathfrak{g}$. Furthermore, if $\varphi$ is also bijective, then $\varphi$ is called a Lie algebra isomorphism. Lastly, a Lie algebra isomorphism with Lie algebra $\mathfrak{g}$ as both its domain and codomain is called a Lie algebra automorphism.

Theorem 4.8. Let $G$ and $H$ be Lie groups with Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$, respectively. Suppose $\Phi : G \to H$ is a Lie group homomorphism. Then there exists a unique linear map $\varphi : \mathfrak{g} \to \mathfrak{h}$ such that $\Phi(e^X) = e^{\varphi(X)}$, and

1. $\varphi(AXA^{-1}) = \Phi(A)\varphi(X)\Phi(A)^{-1}$, for all $X \in \mathfrak{g}$, $A \in G$,

2. $\varphi([X,Y]) = [\varphi(X), \varphi(Y)]$, for all $X, Y \in \mathfrak{g}$,

3. $\varphi(X) = \left. \frac{d}{dt} \Phi(e^{tX}) \right|_{t=0}$, for all $X \in \mathfrak{g}$.

Proof. Since $\Phi$ is a continuous group homomorphism and $e^{tX}$ is also continuous, we know $\Phi(e^{tX})$ will be a one-parameter subgroup of $H$. By Theorem 3.8, we know there is a unique matrix $Z$ so that $\Phi(e^{tX}) = e^{tZ}$ for all $t \in \mathbb{R}$. Furthermore, we know $Z \in \mathfrak{h}$, since $e^{tZ} = \Phi(e^{tX}) \in H$ for all $t$. Now we simply define $\varphi(X) = Z$ and check that the necessary properties are satisfied.

Step 1: $\Phi(e^X) = e^{\varphi(X)}$. This follows from a few simple facts: we know $e^{tZ} = \Phi(e^{tX})$ for all $t$, and $\varphi(X) = Z$. Thus, $e^{t\varphi(X)} = \Phi(e^{tX})$ for all $t$, and in particular for $t = 1$, we know $e^{\varphi(X)} = \Phi(e^X)$. Now for linearity!

Step 2: $\varphi(sX) = s\varphi(X)$. For all $s, t \in \mathbb{R}$, we have $e^{t\varphi(sX)} = \Phi(e^{t(sX)})$, and $e^{t(s\varphi(X))} = e^{(ts)\varphi(X)} = \Phi(e^{(ts)X})$. By the first equation, we know that $\varphi(sX)$ is the unique matrix so that $e^{t\varphi(sX)} = \Phi(e^{tsX})$. By the second equation, we know $s\varphi(X)$ is the unique matrix so that $e^{t(s\varphi(X))} = \Phi(e^{tsX})$. Hence, $s\varphi(X) = \varphi(sX)$.

Step 3: $\varphi(X + Y) = \varphi(X) + \varphi(Y)$. By Steps 1 and 2, we know that

$$e^{t\varphi(X+Y)} = e^{\varphi(t(X+Y))} = \Phi\left( e^{t(X+Y)} \right).$$

By the Lie product formula from Proposition 3.6, and the fact that Φ is a continuous homomorphism, we have

$$e^{t\varphi(X+Y)} = \Phi\left( \lim_{m\to\infty} \left( e^{tX/m} e^{tY/m} \right)^m \right) = \lim_{m\to\infty} \left( \Phi(e^{tX/m}) \Phi(e^{tY/m}) \right)^m.$$

But then by the relationship between $\Phi$ and $\varphi$, and applying the Lie product formula from Proposition 3.6, we know that

$$\lim_{m\to\infty} \left( \Phi(e^{tX/m}) \Phi(e^{tY/m}) \right)^m = \lim_{m\to\infty} \left( e^{t\varphi(X)/m} e^{t\varphi(Y)/m} \right)^m = e^{t(\varphi(X)+\varphi(Y))}.$$

Thus, $e^{t\varphi(X+Y)} = e^{t(\varphi(X)+\varphi(Y))}$. Using Proposition 3.5, we can differentiate both of these at $t = 0$ to get $\varphi(X + Y) = \varphi(X) + \varphi(Y)$.

Step 4: $\varphi(AXA^{-1}) = \Phi(A)\varphi(X)\Phi(A)^{-1}$. By Steps 1 and 2,

$$e^{t\varphi(AXA^{-1})} = e^{\varphi(tAXA^{-1})} = \Phi\left( e^{tAXA^{-1}} \right).$$

By Proposition 3.4 and Step 1, we know that

$$e^{t\varphi(AXA^{-1})} = \Phi\left( e^{tAXA^{-1}} \right) = \Phi\left( A e^{tX} A^{-1} \right) = \Phi(A)\Phi(e^{tX})\Phi(A^{-1}).$$

And since we know that $\Phi(A^{-1}) = \Phi(A)^{-1}$ for any homomorphism $\Phi$, and since $\Phi(e^{tX}) = e^{t\varphi(X)}$, we know

$$e^{t\varphi(AXA^{-1})} = \Phi(A) e^{t\varphi(X)} \Phi(A)^{-1}.$$

Differentiating at t = 0 we obtain

$$\varphi(AXA^{-1}) = \Phi(A)\varphi(X)\Phi(A)^{-1}.$$

Step 5: $\varphi([X,Y]) = [\varphi(X), \varphi(Y)]$. Recall from the proof of Theorem 4.5 that

$$[X,Y] = \left. \frac{d}{dt} e^{tX} Y e^{-tX} \right|_{t=0}.$$

Hence,

$$\varphi([X,Y]) = \varphi\left( \left. \frac{d}{dt} e^{tX} Y e^{-tX} \right|_{t=0} \right) = \left. \frac{d}{dt} \varphi\left( e^{tX} Y e^{-tX} \right) \right|_{t=0},$$

since a derivative commutes with a linear transformation. Then by Steps 1 and 4,

$$\varphi([X,Y]) = \left. \frac{d}{dt} \Phi(e^{tX}) \varphi(Y) \Phi(e^{-tX}) \right|_{t=0} = \left. \frac{d}{dt} e^{t\varphi(X)} \varphi(Y) e^{-t\varphi(X)} \right|_{t=0}.$$

Of course, we know from the proof of Theorem 4.5 that the far right side of this equation is equal to $\varphi(X)\varphi(Y) - \varphi(Y)\varphi(X)$, so

$$\varphi([X,Y]) = [\varphi(X), \varphi(Y)],$$

and thus $\varphi$ is a Lie algebra homomorphism.

Step 6: $\varphi(X) = \left. \frac{d}{dt} \Phi(e^{tX}) \right|_{t=0}$. To begin with, it is clear that $e^{t\varphi(X)} = \Phi(e^{tX})$, so

$$\left. \frac{d}{dt} \Phi(e^{tX}) \right|_{t=0} = \left. \frac{d}{dt} e^{t\varphi(X)} \right|_{t=0}.$$


By Proposition 3.5, we know $\left. \frac{d}{dt} e^{t\varphi(X)} \right|_{t=0} = \varphi(X)$, so

$$\varphi(X) = \left. \frac{d}{dt} \Phi(e^{tX}) \right|_{t=0}.$$

Step 7: $\varphi$ is the unique linear map such that $\Phi(e^{tX}) = e^{t\varphi(X)}$. Suppose $\psi$ is another such linear map. Then,

$$e^{t\psi(X)} = e^{\psi(tX)} = \Phi(e^{tX}).$$

And so by Step 6,

$$\psi(X) = \left. \frac{d}{dt} \Phi(e^{tX}) \right|_{t=0} = \varphi(X). \qquad ∎$$

Theorem 4.9. Suppose that $G$, $H$, and $K$ are matrix Lie groups, with corresponding Lie algebras $\mathfrak{g}$, $\mathfrak{h}$, and $\mathfrak{k}$. Let $\Phi : H \to K$ and $\Psi : G \to H$ be Lie group homomorphisms, and let $\Lambda : G \to K$ be the composition of $\Phi$ and $\Psi$, so that $\Lambda(A) = \Phi(\Psi(A))$ for all $A$ in $G$. Let $\varphi : \mathfrak{h} \to \mathfrak{k}$, $\psi : \mathfrak{g} \to \mathfrak{h}$, and $\lambda : \mathfrak{g} \to \mathfrak{k}$ be the associated Lie algebra homomorphisms such that $e^{\varphi(X)} = \Phi(e^X)$, $e^{\psi(X)} = \Psi(e^X)$, and $e^{\lambda(X)} = \Lambda(e^X)$. Then for all $X \in \mathfrak{g}$, $\lambda(X) = \varphi(\psi(X))$.

Proof. For any $X \in \mathfrak{g}$,

$$e^{t\lambda(X)} = \Lambda(e^{tX}) = \Phi\left( \Psi(e^{tX}) \right) = \Phi\left( e^{t\psi(X)} \right) = e^{t\varphi(\psi(X))}.$$

Differentiating at t = 0, we know by Proposition 3.5 that

$$\lambda(X) = \left. \frac{d}{dt} e^{t\lambda(X)} \right|_{t=0} = \left. \frac{d}{dt} e^{t\varphi(\psi(X))} \right|_{t=0} = \varphi(\psi(X)). \qquad ∎$$

Definition 4.10. Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. Then for each $A \in G$ define a linear map $\mathrm{Ad}_A : \mathfrak{g} \to \mathfrak{g}$ by the formula $\mathrm{Ad}_A(X) = AXA^{-1}$. This map is called the adjoint mapping.

Proposition 4.11. Let $G$ be a matrix Lie group, with Lie algebra $\mathfrak{g}$. Let $\mathrm{GL}(\mathfrak{g})$ denote the group of all invertible linear transformations of $\mathfrak{g}$. Then for each $A \in G$, $\mathrm{Ad}_A$ is an invertible linear transformation of $\mathfrak{g}$ with inverse $\mathrm{Ad}_{A^{-1}}$, and the map $\mathrm{Ad} : A \mapsto \mathrm{Ad}_A$ is a group homomorphism of $G$ into $\mathrm{GL}(\mathfrak{g})$. Furthermore, for each $A \in G$, $\mathrm{Ad}_A$ satisfies $\mathrm{Ad}_A([X,Y]) = [\mathrm{Ad}_A(X), \mathrm{Ad}_A(Y)]$ for all $X, Y \in \mathfrak{g}$.

Proof. We can see that for any $X \in \mathfrak{g}$, and any $A \in G$,

$$\mathrm{Ad}_A(\mathrm{Ad}_{A^{-1}}(X)) = A(A^{-1}XA)A^{-1} = X = A^{-1}(AXA^{-1})A = \mathrm{Ad}_{A^{-1}}(\mathrm{Ad}_A(X)).$$

And now

$$\mathrm{Ad}(AB) = \mathrm{Ad}_{AB}(\,\cdot\,) = AB(\,\cdot\,)B^{-1}A^{-1} = \mathrm{Ad}_A(\mathrm{Ad}_B(\,\cdot\,)) = \mathrm{Ad}(A)\mathrm{Ad}(B).$$

Since multiplication of matrices is the group operation of $G$ and composition of linear maps is the group operation in $\mathrm{GL}(\mathfrak{g})$, we know that the $\mathrm{Ad}$ operator is a group homomorphism. And lastly, for any $X, Y \in \mathfrak{g}$ and any $A \in G$,

$$\mathrm{Ad}_A([X,Y]) = A(XY - YX)A^{-1} = AXYA^{-1} - AYXA^{-1}.$$

And the far right side of this equality is clearly

$$AXA^{-1}AYA^{-1} - AYA^{-1}AXA^{-1} = \mathrm{Ad}_A(X)\mathrm{Ad}_A(Y) - \mathrm{Ad}_A(Y)\mathrm{Ad}_A(X).$$

Thus $\mathrm{Ad}_A([X,Y]) = [\mathrm{Ad}_A(X), \mathrm{Ad}_A(Y)]$, and so the map $\mathrm{Ad}_A$ is a Lie algebra homomorphism for each $A \in G$. ∎

Since $\mathfrak{g}$ is a real vector space of some dimension $k$, we can pick a basis for $\mathfrak{g}$, and $\mathrm{GL}(\mathfrak{g})$ can be written as a group of matrices with the group operation being matrix multiplication. (This notion is consistent with the traditional meaning of the general linear group of a vector space as the group of automorphisms on that vector space, having composition as the group operation.) Thus we can regard $\mathrm{GL}(\mathfrak{g})$ as a matrix Lie group. It is easy to show that $\mathrm{Ad} : G \to \mathrm{GL}(\mathfrak{g})$ is continuous and so is a Lie group homomorphism. By Theorem 4.8, there is an associated real linear map $\mathrm{ad}$, taking $X$ to $\mathrm{ad}_X$, from the Lie algebra of $G$ to the Lie algebra of $\mathrm{GL}(\mathfrak{g})$ (that is, from $\mathfrak{g}$ to $\mathfrak{gl}(\mathfrak{g})$, the space of all linear operators on $\mathfrak{g}$). Specifically, $\mathrm{ad}$ satisfies

$$e^{\mathrm{ad}_X} = \mathrm{Ad}(e^X).$$
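This relation can be checked numerically by summing the series for $e^{\mathrm{ad}_X}$ applied to $Y$, that is, the nested brackets $(\mathrm{ad}_X)^m(Y)/m!$, and comparing with $\mathrm{Ad}(e^X)(Y) = e^X Y e^{-X}$ (a SciPy sketch with illustrative matrices of our choosing):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
X = 0.3 * rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

# Sum the series e^{ad_X}(Y) = sum_m (ad_X)^m (Y) / m!,
# where (ad_X)^m (Y) is the m-fold nested bracket [X, [X, ..., [X, Y]...]].
term, total = Y.copy(), np.zeros_like(Y)
for m in range(40):
    total = total + term
    term = (X @ term - term @ X) / (m + 1)  # ad_X(term) / (m+1)

# This should agree with Ad(e^X)(Y) = e^X Y e^{-X}.
assert np.allclose(total, expm(X) @ Y @ expm(-X))
```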

Proposition 4.12. Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. Let $\mathrm{Ad} : G \to \mathrm{GL}(\mathfrak{g})$ be the Lie group homomorphism defined above. Let $\mathrm{ad} : \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ be the associated Lie algebra map. Then for all $X, Y \in \mathfrak{g}$, $\mathrm{ad}_X(Y) = [X,Y]$.

Proof. By Theorem 4.8, we know that $\mathrm{ad}$ can be calculated by

$$\mathrm{ad}_X = \left. \frac{d}{dt} \mathrm{Ad}(e^{tX}) \right|_{t=0}.$$

Thus,

$$\mathrm{ad}_X(Y) = \left. \frac{d}{dt} \mathrm{Ad}(e^{tX})(Y) \right|_{t=0} = \left. \frac{d}{dt} e^{tX} Y e^{-tX} \right|_{t=0} = [X,Y]. \qquad ∎$$

Definition 4.13. A representation of a Lie group $G$ on a vector space $V$ is a Lie group homomorphism $\rho : G \to \mathrm{GL}(V)$, sending elements of $G$ to automorphisms of $V$. Similarly, a representation of a Lie algebra $\mathfrak{g}$ on a vector space $V$ is a Lie algebra homomorphism $\rho : \mathfrak{g} \to \mathfrak{gl}(V)$, mapping elements of $\mathfrak{g}$ to endomorphisms of $V$.

Though we will not prove it here, it turns out that all finite-dimensional Lie algebras can be represented with matrices.

Theorem 4.14 (Ado). Every finite-dimensional real Lie algebra is isomorphic to a real subalgebra of $\mathfrak{gl}_n(\mathbb{R})$. Every finite-dimensional complex Lie algebra is isomorphic to a complex subalgebra of $\mathfrak{gl}_n(\mathbb{C})$.

5 Universal Enveloping Algebras

Generally speaking, a Lie algebra $\mathfrak{g}$ does not have any defined notion of associative multiplication. However, if we consider a representation $\rho : \mathfrak{g} \to \mathfrak{gl}(V)$, then the product $\rho(X)\rho(Y)$ is well-defined. (Note that the "product" is actually composition of the operators $\rho(X)$ and $\rho(Y)$, which are endomorphisms of $V$.) With the convenience of multiplication and the structure passed on by the commutator, we will define the notion of the "universal" associative algebra generated by "products" of operators of the form $\rho(X)$ for $X \in \mathfrak{g}$. To make things both more formal and more interesting, we will first introduce a few new concepts:

Definition 5.1. An associative algebra $A$ is a vector space $V$ over a field $K$ equipped with an associative, bilinear vector product $\cdot : V \times V \to V$. If there is some element $1 \in V$ such that $1 \cdot a = a = a \cdot 1$ for every $a \in A$, then we say that $A$ is unital, or "has unit." Often we will describe an associative algebra $A$ as "having unit over $K$" to mean that it is a unital algebra with its vector space over a field $K$. In particular, we will be very interested in an associative algebra called the tensor algebra. But first, we must define the tensor product.

Definition 5.2. Let $V_p$ for $1 \le p \le k$ and $W$ be modules over a commutative ring $R$. A module $T = T_{V_1,\ldots,V_k}$ together with a multilinear map $\otimes : V_1 \times \cdots \times V_k \to T$ is called universal for $k$-multilinear maps on $V_1 \times \cdots \times V_k$ if for every multilinear map $\mu : V_1 \times \cdots \times V_k \to W$ there is a unique linear map $\tilde\mu : T \to W$ such that $\tilde\mu \circ \otimes = \mu$. If such a universal object exists, it will be called a tensor product.

It turns out that the tensor product is unique up to isomorphism.

Proposition 5.3. If (T1, ⊗1) and (T2, ⊗2) are both universal for k-multilinear maps on V1 × · · · × Vk, then there is a unique isomorphism Φ: T1 → T2 such that Φ ◦ ⊗1 = ⊗2.

Proof. By the assumption of universality, we know that there are unique linear maps $\Phi : T_1 \to T_2$ and $\bar\Phi : T_2 \to T_1$ such that $\Phi \circ \otimes_1 = \otimes_2$ and $\bar\Phi \circ \otimes_2 = \otimes_1$. Thus we have $\bar\Phi \circ \Phi \circ \otimes_1 = \otimes_1$, and by the uniqueness part of the universality of $\otimes_1$ it follows that $\bar\Phi \circ \Phi = \mathrm{Id}$. Similarly, $\Phi \circ \bar\Phi \circ \otimes_2 = \otimes_2$, and by the uniqueness part of the universality of $\otimes_2$ it follows that $\Phi \circ \bar\Phi = \mathrm{Id}$. Thus $\bar\Phi = \Phi^{-1}$. ∎

More concretely, the realization of the tensor product of modules $V_1, \ldots, V_k$ is, at least roughly, the set of all linear combinations of symbols of the form $v_1 \otimes \cdots \otimes v_k$, subject to the multilinear relations

$$v_1 \otimes \cdots \otimes (a v_i) \otimes \cdots \otimes v_k = a(v_1 \otimes \cdots \otimes v_i \otimes \cdots \otimes v_k),$$

and

$$v_1 \otimes \cdots \otimes (v_i + v_i') \otimes \cdots \otimes v_k = (v_1 \otimes \cdots \otimes v_i \otimes \cdots \otimes v_k) + (v_1 \otimes \cdots \otimes v_i' \otimes \cdots \otimes v_k).$$

This space is denoted by $V_1 \otimes \cdots \otimes V_k$, and is called the tensor product of the modules $V_1, \ldots, V_k$.

Definition 5.4. The tensor algebra of a vector space $V$ over a field $K$, denoted $T(V)$, is the associative algebra of tensors on $V$, with the tensor product $\otimes$ serving as the associative, bilinear vector product.

For any vector space, we can construct its tensor algebra as follows. Let $T^0V = K$. For any $k \in \mathbb{N}$, define the $k$-th tensor power of $V$, denoted $T^kV$, to be the tensor product of $V$ with itself $k$ times:

$$T^kV = \underbrace{V \otimes V \otimes \cdots \otimes V}_{k \text{ times}}.$$

From here, we simply take the direct sum of the $T^kV$ for $k = 0, 1, 2, \ldots$,

$$T(V) = \bigoplus_{k=0}^{\infty} T^kV = K \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \cdots.$$

Now we are in a suitable position to discuss what is known as the "universal property" of tensor algebras.
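A toy computational model may help fix ideas (an illustration of our own, not from the text): for a 2-dimensional $V$, basis elements of $T^kV$ correspond to words of length $k$ in two symbols, and the tensor product of basis elements is word concatenation, which is associative but not commutative.

```python
from itertools import product

# Toy model of T(V) for dim V = 2 with basis symbols 'x' and 'y':
# a basis of T^k V is the set of words of length k, and the tensor
# product of basis words is concatenation.
basis = ('x', 'y')

def tensor_power_basis(k):
    return [''.join(w) for w in product(basis, repeat=k)]

def tensor(u, v):
    return u + v  # concatenation realizes u (x) v on basis words

assert tensor_power_basis(2) == ['xx', 'xy', 'yx', 'yy']  # dim T^2 V = 4
assert tensor(tensor('x', 'y'), 'x') == tensor('x', tensor('y', 'x'))  # associative
assert tensor('x', 'y') != tensor('y', 'x')  # but not commutative
```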

Proposition 5.5. Let $A$ be any associative algebra with unit over $K$, and let $f : V \to A$ be a linear map. Then there exists a unique algebra homomorphism $\bar f : T(V) \to A$ such that $f = \bar f \circ i$, where $i : V \to T(V)$ is the inclusion of $V = T^1V$ into $T(V)$.

Proof. Let $A$ be any associative algebra with unit over $K$. For any linear map $f : V \to A$, define a linear map $\bar f : T(V) \to A$ by

$$\bar f(v_1 \otimes \cdots \otimes v_k) = f(v_1) \cdots f(v_k).$$

This will be well-defined for any $k \in \mathbb{N}$, and it is easy to see that it is indeed an algebra homomorphism. However, the case $k = 0$ requires clarification. Fortunately, this is easy: since $A$ is an algebra with unit over $K$, we can simply let $\bar f(\ell) = \ell \cdot 1$, where $1$ is the unit element of $A$ and $\ell \in K$. The above definition is clearly the only way to extend $f$ as a homomorphism, since $V$ generates $T(V)$ as a $K$-algebra. ∎

Now for one more detail that will be important later:

Definition 5.6. Let $I$ be the two-sided ideal in $T(V)$ generated by all the elements $X \otimes Y - Y \otimes X$ (where $X, Y \in V$). Define the symmetric algebra of $V$, denoted $\mathrm{Sym}(V)$, by $\mathrm{Sym}(V) = T(V)/I$. In other words, the symmetric algebra of $V$ is just the commutative version of the tensor algebra.

With that out of the way, we are ready to move on to the real topic of this section.

Definition 5.7. Let $\mathfrak{g}$ be a Lie algebra over a field $K$. The universal enveloping algebra of $\mathfrak{g}$ is a pair $(U\mathfrak{g}, i)$, satisfying the following:

1. $U\mathfrak{g}$ is an associative algebra with unit over $K$,

2. $i : \mathfrak{g} \to U\mathfrak{g}$ is linear and $i(X)i(Y) - i(Y)i(X) = i([X,Y])$, for all $X, Y \in \mathfrak{g}$,

3. for any associative algebra $A$ with unit over $K$ and for any linear map $j : \mathfrak{g} \to A$ satisfying $j(X)j(Y) - j(Y)j(X) = j([X,Y])$ for each $X, Y \in \mathfrak{g}$, there exists a unique homomorphism of algebras $\varphi : U\mathfrak{g} \to A$ such that $\varphi \circ i = j$.

Notice that the Lie bracket is not necessarily the commutator (after all, there may be no notion of associative multiplication in $\mathfrak{g}$), but applying $i$ to the bracket of any two $X, Y \in \mathfrak{g}$ must give us the commutator of $i(X)$ and $i(Y)$.

Theorem 5.8. For any Lie algebra $\mathfrak{g}$ over an arbitrary field $K$, there exists a unique universal enveloping algebra $(U\mathfrak{g}, i)$, up to isomorphism.

Proof. Uniqueness: Suppose that the Lie algebra $\mathfrak{g}$ has two universal enveloping algebras $(U\mathfrak{g}, i)$ and $(U\mathfrak{g}', i')$. Then by definition, for each associative $K$-algebra $A$ there exists a unique homomorphism $\lambda_A : U\mathfrak{g} \to A$. In particular, since $U\mathfrak{g}'$ is an associative $K$-algebra, we have a unique homomorphism of algebras $\lambda : U\mathfrak{g} \to U\mathfrak{g}'$. Switching the roles of $U\mathfrak{g}$ and $U\mathfrak{g}'$ and applying the same logic, we know there exists a unique homomorphism of algebras $\mu : U\mathfrak{g}' \to U\mathfrak{g}$. Then $\mu \circ \lambda = 1_{U\mathfrak{g}}$ and $\lambda \circ \mu = 1_{U\mathfrak{g}'}$, which means $\lambda$ is bijective. But a bijective homomorphism of algebras is an isomorphism of algebras. Thus $(U\mathfrak{g}, i)$ is unique up to isomorphism.

Existence: Let $T(\mathfrak{g})$ be the tensor algebra of $\mathfrak{g}$, and let $J$ be the two-sided ideal in $T(\mathfrak{g})$ generated by all $X \otimes Y - Y \otimes X - [X,Y]$, where $X, Y \in \mathfrak{g}$. We claim that $U\mathfrak{g} = T(\mathfrak{g})/J$ satisfies all the necessary conditions delineated in Definition 5.7. Let $\pi : T(\mathfrak{g}) \to U\mathfrak{g}$ be the homomorphism mapping each element of the tensor algebra to its equivalence class in the associative algebra $T(\mathfrak{g})/J$. Clearly,

$$J \subset \bigoplus_{k>0} T^k\mathfrak{g}.$$

It follows that $\pi$ maps $T^0\mathfrak{g} = K$ isomorphically into $T(\mathfrak{g})/J$, and hence $U\mathfrak{g}$ at least contains scalars. Now let $i : \mathfrak{g} \to U\mathfrak{g}$ be the restriction of $\pi$ to $\mathfrak{g} \subset T(\mathfrak{g})$. Let $A$ be any associative algebra with unit over $K$, and let $j : \mathfrak{g} \to A$ be a linear map satisfying $j(X)j(Y) - j(Y)j(X) = j([X,Y])$ for all $X, Y \in \mathfrak{g}$.

The universal property of tensor algebras from Proposition 5.5 gives us a unique algebra homomorphism $\varphi_0 : T(\mathfrak{g}) \to A$ that extends $j$ and sends $1$ to $1$. It follows from the special property of $j$ that $X \otimes Y - Y \otimes X - [X,Y]$ is in $\mathrm{Ker}(\varphi_0)$ for all $X, Y \in \mathfrak{g}$. Thus, since each element of the ideal of $T(\mathfrak{g})$ generated by the terms $X \otimes Y - Y \otimes X - [X,Y]$ gets mapped to zero, we can identify all such elements and a homomorphism shall still exist. In other words, $\varphi_0$ induces a homomorphism $\varphi : U\mathfrak{g} \to A$ such that $\varphi \circ i = j$. The uniqueness of $\varphi$ is evident, since $1$ and $\mathrm{Im}(i)$ together generate $U\mathfrak{g}$. ∎

Now for a simple example. If $\mathfrak{g}$ is an abelian Lie algebra (i.e., $[X,Y] = 0$ for all $X, Y \in \mathfrak{g}$), then $U\mathfrak{g} = T(\mathfrak{g})/J$, where $J$ is the two-sided ideal generated by all $X \otimes Y - Y \otimes X - [X,Y]$. But since $[X,Y]$ is always zero, we simply have $U\mathfrak{g} = \mathrm{Sym}(\mathfrak{g})$.
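The abelian case can also be modeled computationally (an illustrative sketch of our own, in the word model used earlier): since the bracket vanishes, modding out by $X \otimes Y - Y \otimes X$ identifies each word with its sorted rearrangement, leaving the commutative monomials of $\mathrm{Sym}(\mathfrak{g})$.

```python
from itertools import product

# In the abelian case, the ideal J identifies each word with its sorted
# rearrangement, so equivalence classes are commutative monomials.
def normal_form(word):
    return ''.join(sorted(word))  # valid only because the bracket is zero

assert normal_form('yx') == normal_form('xy') == 'xy'

# Degree-2 classes for a 2-dimensional abelian Lie algebra: dim Sym^2 = 3.
classes = {normal_form(''.join(w)) for w in product('xy', repeat=2)}
assert classes == {'xx', 'xy', 'yy'}
```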

6 The Poincaré-Birkhoff-Witt Theorem

It turns out that $\mathfrak{g}$ is mapped injectively into $U\mathfrak{g}$, and hence universal enveloping algebras are indeed "enveloping" in some sense. This becomes quite useful, since we can then think of each $i(X)$ as simply being the corresponding $X \in \mathfrak{g}$; the lack of restrictions that characterizes the universal enveloping algebra allows us to perform feats that would be cumbersome or impossible in the Lie algebra itself. This important result turns out to be a simple corollary of the much stronger Poincaré-Birkhoff-Witt Theorem.

Definition 6.1. A graded algebra $G$ is an associative algebra that can be decomposed into the direct sum of abelian groups $G^k$ (with the group operation being addition of vectors), and is characterized by the fact that if $X \in G^m$ and $Y \in G^p$ then $X \cdot Y \in G^{m+p}$.

For instance, the tensor algebra $T(V)$ of any vector space $V$ is a graded algebra. Clearly if $X \in T^mV$ and $Y \in T^pV$, then $X \otimes Y \in T^{m+p}V$. If we mod out by the two-sided ideal $I$ generated by the $X \otimes Y - Y \otimes X$, for $X, Y \in V$, we get $\mathrm{Sym}(V) = T(V)/I$. This is a graded algebra as well, as we can define a grading by letting $S^mV$ be the image of $T^mV$ in $\mathrm{Sym}(V)$.

Definition 6.2. A filtration is an indexed set $\{Q_i\}$ of subobjects of a given object $Q$, where the index $i$ runs over a totally ordered index set $S$, such that if $i \le j$ then $Q_i \subset Q_j$.

Definition 6.3. A filtered algebra $F$ is an associative algebra over some field $K$ which has a filtration of linear subspaces, indexed by a set $S$, satisfying $\{0\} \subset F_0 \subset F_1 \subset \cdots \subset F$, recovering $F$ via the union

$$\bigcup_{s \in S} F_s = F,$$

and satisfying the following: if $X \in F_m$ and $Y \in F_p$, then $X \cdot Y \in F_{m+p}$.

So far, we know very little about $U\mathfrak{g}$, other than the fact that it contains the scalars which were passed on isomorphically from $K$. For brevity, we shall write $T$ instead of $T(\mathfrak{g})$, and $\mathrm{Sym}$ instead of $\mathrm{Sym}(\mathfrak{g})$. Similarly, we shall write $T^m$ instead of $T^m\mathfrak{g}$, and $S^m$ instead of $S^m\mathfrak{g}$. For algebras other than the tensor algebra itself, we shall also frequently omit the $\otimes$, choosing a dot or simply placing variables next to each other to indicate multiplication. Define a filtration on $T$ by

$$T_m := T^0 \oplus T^1 \oplus \cdots \oplus T^m.$$

Recall that $\pi : T \to U\mathfrak{g}$ is the quotient map. Let $U_m = \pi(T_m)$, and $U_{-1} = 0$. Suppose we have $W \in T_m$, $Z \in T_p$, and define $X = \pi(W) \in U_m$ and $Y = \pi(Z) \in U_p$. Then $W \otimes Z \in T_{m+p}$, which implies that $\pi(W \otimes Z) \in \pi(T_{m+p})$, and hence

$$XY = \pi(W)\pi(Z) = \pi(W \otimes Z) \in \pi(T_{m+p}) = U_{m+p}.$$

Thus, for all $X \in U_m$, $Y \in U_p$, we have $XY \in U_{m+p}$, so the $U_m$'s form a filtration on $U\mathfrak{g}$. Define

$$G^m := U_m/U_{m-1}$$

(this is just a vector space), and let the multiplication in $U\mathfrak{g}$ define a bilinear map $G^m \times G^p \to G^{m+p}$. This operation is well defined, as we shall see momentarily. Suppose we have two representatives $X, X' \in U_p$ of the same equivalence class in $G^p$, and two representatives $Y, Y' \in U_m$ of the same equivalence class in $G^m$. Define $W = X - X' \in U_{p-1}$ and $Z = Y - Y' \in U_{m-1}$. Then

$$XY = (X' + W)(Y' + Z) = X'Y' + (X'Z + WY' + WZ).$$

But surely $WY' \in U_{m+p-1}$, $X'Z \in U_{m+p-1}$, and $WZ \in U_{(m-1)+(p-1)} \subset U_{m+p-1}$. Thus, when we mod out by $U_{m+p-1}$, in accordance with our definition of $G^{m+p}$, all the terms in the parentheses vanish. Having gotten that out of the way, we define

$$G = \bigoplus_{m=0}^{\infty} G^m.$$

This gives us a bilinear map $G \times G \to G$ in accordance with the rules of multiplication for $G^m \times G^p \to G^{m+p}$. It is clear that $G$ is a graded associative algebra with unit.

Since $\pi$ maps the last piece $T^m$ of each $T_m$ into $U_m$, it follows that the composite linear map $\varphi_m : T^m \to U_m \to G^m = U_m/U_{m-1}$ is well defined. And we can write $T_m = T_{m-1} \oplus T^m$. Clearly, $\pi$ maps $T_m$ surjectively onto $U_m = \pi(T_m)$. And surely the mapping from $U_m$ to $U_m/U_{m-1}$ is surjective, and so the map $\Phi_m : T_m \to U_m \to U_m/U_{m-1}$ is surjective as well. And since this map sends everything in $T_{m-1}$ to $U_{m-1}/U_{m-1} = \{0\}$, we know that the restriction of $\Phi_m$ to $T^m$, which we call $\varphi_m : T^m \to U_m \to U_m/U_{m-1}$, hits everything in $U_m/U_{m-1}$, except possibly zero. But we know $0 = 0 \otimes \cdots \otimes 0 \in T^m$, so zero is in the image of $T^m$ under $\varphi_m$. Hence, $\varphi_m$ is surjective onto $G^m$. The maps $\varphi_m$ therefore can be combined to give us a surjective linear map $\varphi : T \to G$ sending $1$ to $1$.

Lemma 6.4. The map φ : T → G is an algebra homomorphism. Moreover, if I is the two-sided ideal generated by X ⊗ Y − Y ⊗ X for X, Y ∈ g, then I ⊂ Ker(φ), and so φ induces a homomorphism ω of Sym = T/I onto G.

Proof. Suppose we have some X ∈ T^m and Y ∈ T^p. It follows that φ(X) ∈ G^m, φ(Y) ∈ G^p, and X ⊗ Y ∈ T^{m+p}, so φ(X ⊗ Y) ∈ G^{m+p}. Then by the definition of the product in G, it follows that φ(X ⊗ Y) = φ(X)φ(Y) for each X, Y ∈ T. Thus, φ is a (surjective) algebra homomorphism.

Let X ⊗ Y − Y ⊗ X (for X, Y ∈ g) be a typical generator of the two-sided ideal I described earlier. Then π(X ⊗ Y − Y ⊗ X) ∈ U_2, by definition. On the other hand, we also know π(X ⊗ Y − Y ⊗ X) = π([X, Y]) ∈ U_1, and so φ(X ⊗ Y − Y ⊗ X) is zero in G^2 = U_2/U_1. Hence I ⊂ Ker(φ), and so if we identify all the elements of the ideal I with the zero vector, we surely shall still have a (surjective) algebra homomorphism ω : Sym → G. □

It turns out that ω is not only a surjective algebra homomorphism, but is also injective, and hence is an isomorphism of algebras. This fundamental result is known as the Poincaré-Birkhoff-Witt Theorem, which we shall prove after introducing the important corollaries it entails.

Theorem 6.5 (Poincaré-Birkhoff-Witt). The homomorphism ω : Sym → G is an isomorphism of algebras.

Corollary 6.6. Let W be a subspace of T^m. Suppose the canonical map T^m → S^m sends W isomorphically onto S^m. Then π(W) is a complement to U_{m−1} in U_m.

Proof. Let g_m be the quotient map from T^m to S^m, and let h_m be the quotient map from U_m to U_m/U_{m−1}. By Lemma 6.4 and the definitions, we know the diagram below is commutative:

         π
  T^m ──────→ U_m
   │            │
  g_m          h_m
   ↓            ↓
  S^m ──────→ G^m
         ω

Since g_m sends W ⊂ T^m isomorphically onto S^m by our supposition, and since ω : Sym → G is an isomorphism by the Poincaré-Birkhoff-Witt Theorem, we know the map ω ◦ g_m sends W isomorphically onto G^m. Since W is mapped isomorphically, it is mapped injectively, and hence Ker(h_m ◦ π) ∩ W = {0}. It follows that Ker(h_m) ∩ π(W) = {0} as well. But the kernel of h_m is just U_{m−1}, and so U_{m−1} ∩ π(W) = {0}.

And since W is mapped isomorphically onto G^m, we know that h_m is an isomorphism from π(W) to h_m(π(W)) = U_m/U_{m−1} = h_m(U_m). By the Rank-Nullity Theorem,

U_m ≅ Ker(h_m) ⊕ Im(h_m).

The kernel of h_m is just U_{m−1}, and in this context Im(h_m) = h_m(U_m) = h_m(π(W)) ≅ π(W). Hence

U_m = U_{m−1} ⊕ π(W). □

Corollary 6.7. The canonical map i : g → Ug is injective.
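The complement argument in this proof is pure linear algebra, and a small numerical check can make it concrete. The matrices below are our own toy example, not objects from the paper: a surjection h : R^4 → R^2 with kernel K = span(e3, e4), and a 2-dimensional subspace W that h maps isomorphically onto R^2, so that R^4 = K ⊕ W, just as U_m = U_{m−1} ⊕ π(W).

```python
import numpy as np

# h: R^4 -> R^2, with Ker(h) = span(e3, e4).
h = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
W = np.array([[1., 0.],      # columns span W = span(e1 + e3, e2)
              [0., 1.],
              [1., 0.],
              [0., 0.]])
K = np.array([[0., 0.],      # columns span Ker(h) = span(e3, e4)
              [0., 0.],
              [1., 0.],
              [0., 1.]])

# h restricted to W is invertible, so Ker(h) ∩ W = {0} ...
assert np.linalg.matrix_rank(h @ W) == 2
# ... and dim K + dim W = 4, so K ⊕ W is all of R^4:
assert np.linalg.matrix_rank(np.hstack([K, W])) == 4
```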

Proof. This is just a special case of Corollary 6.6, with m = 1 and W = T^1 = g. The supposition in Corollary 6.6, that W is sent isomorphically onto S^1, holds because S^1 = g_1(T^1) = T^1. Using the same logic as in the proof of Corollary 6.6, it is clear that W = g must be mapped injectively into U_1. Since U_1 is in the filtration of Ug, we know that g is mapped injectively into Ug itself. □

This result is very important: it allows us to identify each X ∈ g with i(X) ∈ Ug, and hence think of Ug as a bigger algebra “enveloping” the Lie algebra g.

Corollary 6.8. Let (x_1, x_2, x_3, . . .) be any ordered basis of g. Then the elements x_{i_1} · · · x_{i_m} = π(x_{i_1} ⊗ · · · ⊗ x_{i_m}), where m ∈ N and i_1 ≤ i_2 ≤ · · · ≤ i_m, along with 1, form a basis of Ug.

Proof. Let W be the subspace of T^m spanned by all x_{i_1} ⊗ · · · ⊗ x_{i_m}, where i_1 ≤ i_2 ≤ · · · ≤ i_m. It is evident that W maps isomorphically onto S^m, so we know by Corollary 6.6 that π(W) is a complement to U_{m−1} in U_m, and it is easy to see by induction that the union of 1 and the bases of each U_m, for m ∈ N, forms a basis for Ug. □

And now we set out to prove the Poincaré-Birkhoff-Witt Theorem itself. But before we do, we shall introduce some new notation.

Fix an ordered basis (x_λ : λ ∈ Ω) of g. This choice identifies Sym with the polynomial algebra in indeterminates z_λ, where λ ∈ Ω. For each sequence Σ = (λ_1, λ_2, . . . , λ_m) of indices (m is called the length of Σ), let z_Σ = z_{λ_1} · · · z_{λ_m} ∈ S^m and let x_Σ = x_{λ_1} ⊗ · · · ⊗ x_{λ_m} ∈ T^m. We shall call Σ increasing if λ_1 ≤ λ_2 ≤ · · · ≤ λ_m in the given ordering of Ω. By fiat, let ∅ be increasing and set z_∅ = 1. It follows that the set {z_Σ | Σ increasing} is a basis of Sym. Later on, the fact that Sym is a filtered algebra will be of importance, and so we note it now: associated with the grading of

Sym = ⊕_{k=0}^∞ S^k

is a filtration

S_k = S^0 ⊕ · · · ⊕ S^k.

Lastly, in the following lemmas, we shall write λ ≤ Σ if λ ≤ µ for all µ ∈ Σ.
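The increasing sequences just introduced are easy to enumerate by machine, which gives a concrete view both of the basis {z_Σ | Σ increasing} of Sym and of the PBW basis in Corollary 6.8. The sketch below is our own illustration, assuming for concreteness a 3-dimensional Lie algebra with basis indices 1 < 2 < 3; the function name is an assumption, not the paper's notation.

```python
from itertools import combinations_with_replacement
from math import comb

def increasing_sequences(m, n=3):
    """All increasing index sequences of length m on indices 1..n; these
    label the PBW monomials x_{i1} ... x_{im} of Corollary 6.8, and
    equally the basis monomials z_Σ of S^m."""
    return list(combinations_with_replacement(range(1, n + 1), m))

# In degree m there are C(n + m - 1, m) such sequences, the dimension of
# S^m, matching the isomorphism S^m ≅ G^m asserted by the theorem.
for m in range(5):
    assert len(increasing_sequences(m)) == comb(3 + m - 1, m)

print(increasing_sequences(2))
# -> [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```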

Lemma 6.9. For each m ∈ Z^+, there exists a unique linear map f_m : g ⊗ S_m → Sym satisfying:

(A_m) f_m(x_λ ⊗ z_Σ) = z_λ z_Σ for any λ ≤ Σ and any z_Σ ∈ S_m,

(B_m) f_m(x_λ ⊗ z_Σ) − z_λ z_Σ ∈ S_k for any k ≤ m and any z_Σ ∈ S_k,

(C_m) f_m(x_λ ⊗ f_m(x_µ ⊗ z_T)) = f_m(x_µ ⊗ f_m(x_λ ⊗ z_T)) + f_m([x_λ, x_µ] ⊗ z_T) for all z_T ∈ S_{m−1}.

Moreover, the restriction of f_m to g ⊗ S_{m−1} agrees with f_{m−1}.

Proof. First, note that all of the terms in (C_m) make sense once we have proven (B_m). Note further that the restriction of f_m to g ⊗ S_{m−1} must satisfy (A_{m−1}), (B_{m−1}), and (C_{m−1}), so this restricted map must be the same as f_{m−1} due to the asserted uniqueness.

To verify that existence and uniqueness hold for each f_m, we proceed by induction on m. For m = 0, only z_Σ = z_∅ = 1 occurs; thus, we can let f_0(x_λ ⊗ 1) = z_λ and extend linearly to g ⊗ S_0. Evidently, (A_0), (B_0), and (C_0) are satisfied. Furthermore, (A_0) shows that our choice of f_0 is the only possible one.

Assuming the existence of a unique f_{m−1} satisfying (A_{m−1}), (B_{m−1}), and (C_{m−1}), we will show how to extend f_{m−1} to a map f_m. For this purpose, it will suffice to define f_m(x_λ ⊗ z_Σ) where Σ is an increasing sequence of length m. For the case where λ ≤ Σ, condition (A_m) cannot hold unless we define f_m(x_λ ⊗ z_Σ) = z_λ z_Σ. On the other hand, if λ ≤ Σ fails to hold, then λ is greater than some element of Σ. Certainly, then, the first index µ in Σ is strictly less than λ, and Σ = (µ, T), where µ ≤ T and T is of length m − 1. By (A_{m−1}), we know z_Σ = z_µ z_T = f_{m−1}(x_µ ⊗ z_T). Since µ ≤ T, f_m(x_µ ⊗ z_T) = z_µ z_T = z_Σ is already defined, so the left side of (C_m) becomes f_m(x_λ ⊗ z_Σ). On the other hand, (B_{m−1}) with k = m − 1 implies that

f_m(x_λ ⊗ z_T) = f_{m−1}(x_λ ⊗ z_T) = z_λ z_T + y

for a specific y ∈ S_{m−1}, since f_{m−1} is defined uniquely by the induction hypothesis. This shows that the right side of (C_m) is already defined:

z_µ z_λ z_T + f_{m−1}(x_µ ⊗ y) + f_{m−1}([x_λ, x_µ] ⊗ z_T),

where y ∈ S_{m−1}. The preceding remarks show that f_m can be defined, and in only one way. Moreover, (A_m) and (B_m) certainly hold, as does (C_m), as long as µ < λ and µ ≤ T. But [x_µ, x_λ] = −[x_λ, x_µ], so (C_m) also holds for λ < µ, λ ≤ T. When λ = µ, (C_m) also holds trivially.

We now only need to consider the case where neither λ ≤ T nor µ ≤ T is true. Write T = (ν, Ψ), where ν ≤ Ψ, ν < λ, and ν < µ. To keep notation under control, write f_m(x ⊗ z) as xz for any x ∈ g and z ∈ S_m. The induction hypothesis guarantees that x_µ z_T = x_ν(x_µ z_Ψ) + [x_µ, x_ν] z_Ψ, and we know x_µ z_Ψ = z_µ z_Ψ + w for some w ∈ S_{m−2}. Since ν ≤ Ψ and ν < µ, (C_m) already applies to x_λ(x_ν(z_µ z_Ψ)). By induction, we know (C_m) also applies to x_λ(x_ν w), and thus to x_λ(x_ν(x_µ z_Ψ)). Consequently,

x_λ(x_µ z_T) = x_ν(x_λ(x_µ z_Ψ)) + [x_λ, x_ν](x_µ z_Ψ) + [x_µ, x_ν](x_λ z_Ψ) + [x_λ, [x_µ, x_ν]] z_Ψ.

Recall that λ and µ are interchangeable throughout this argument. If we interchange them in the above equation and subtract the two, we obtain x_λ(x_µ z_T) − x_µ(x_λ z_T), which is equivalent to

x_ν(x_λ(x_µ z_Ψ)) − x_ν(x_µ(x_λ z_Ψ)) + [x_λ, [x_µ, x_ν]] z_Ψ − [x_µ, [x_λ, x_ν]] z_Ψ.

But this is the same as

x_ν([x_λ, x_µ] z_Ψ) + [x_λ, [x_µ, x_ν]] z_Ψ + [x_µ, [x_ν, x_λ]] z_Ψ,

which can be written as

[x_λ, x_µ](x_ν z_Ψ) + ([x_ν, [x_λ, x_µ]] + [x_λ, [x_µ, x_ν]] + [x_µ, [x_ν, x_λ]]) z_Ψ.

And thanks to the Jacobi identity, the terms in parentheses vanish, and we obtain [x_λ, x_µ](x_ν z_Ψ) = [x_λ, x_µ] z_T. This proves (C_m) and with it the lemma. □

Great! Just two more to go!

Lemma 6.10. There exists a representation ρ : g → gl(Sym) satisfying:

1. ρ(x_λ) z_Σ = z_λ z_Σ for λ ≤ Σ,

2. ρ(x_λ) z_Σ ≡ z_λ z_Σ (mod S_m), if Σ has length m.

Proof. Lemma 6.9 allows us to define a linear map f : g ⊗ Sym → Sym satisfying (A_m), (B_m), and (C_m) for all m, since f_m restricted to g ⊗ S_{m−1} coincides with f_{m−1} by the uniqueness part. In other words, Sym becomes a g-module by condition (C_m), giving us a representation ρ satisfying conditions 1 and 2 listed above, thanks to conditions (A_m) and (B_m). □
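For a concrete Lie algebra, the recursion behind the maps f_m is just the familiar "straightening" of words into the increasing order of Corollary 6.8. Below is a minimal sketch of this rewriting for g = sl_2, with ordered basis e < h < f and brackets [e, f] = h, [h, e] = 2e, [h, f] = −2f; the data structures and function names are our own, not the paper's. An out-of-order adjacent pair xy is repeatedly replaced by yx + [x, y]; each step either lowers the degree or reduces the number of inversions within the same degree, so the rewriting terminates.

```python
ORDER = {"e": 0, "h": 1, "f": 2}
# BRACKET[(x, y)] = [x, y] as {basis letter: coefficient}, for x > y:
# [h, e] = 2e, [f, h] = 2f (since [h, f] = -2f), [f, e] = -h.
BRACKET = {("h", "e"): {"e": 2}, ("f", "h"): {"f": 2}, ("f", "e"): {"h": -1}}

def _add(d, w, c):
    d[w] = d.get(w, 0) + c
    if d[w] == 0:
        del d[w]

def straighten(elem):
    """Rewrite {word: coeff} into PBW normal form (increasing words)."""
    todo, done = dict(elem), {}
    while todo:
        word, c = todo.popitem()
        for i in range(len(word) - 1):
            x, y = word[i], word[i + 1]
            if ORDER[x] > ORDER[y]:
                # x y  ->  y x + [x, y], applied inside the word
                _add(todo, word[:i] + y + x + word[i + 2:], c)
                for z, k in BRACKET[(x, y)].items():
                    _add(todo, word[:i] + z + word[i + 2:], c * k)
                break
        else:
            _add(done, word, c)   # word is already increasing
    return done

# f·e = e·f + [f, e] = e·f - h in U(sl_2):
assert straighten({"fe": 1}) == {"ef": 1, "h": -1}
```

For instance, `straighten({"he": 1})` returns `{"eh": 1, "e": 2}`, reflecting h·e = e·h + 2e; the lower-degree terms produced along the way are exactly the "mod S_m" correction allowed by condition 2 of Lemma 6.10.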

Lemma 6.11. Let t ∈ T_m ∩ J, where J = Ker(π), and π is the quotient map from T to Ug. Then the homogeneous component t_m of t of degree m lies in I, the kernel of the quotient map taking T to Sym.

Proof. Write t_m as a linear combination of basis elements x_{Σ(i)} for 1 ≤ i ≤ r, where each Σ(i) is of length m. The Lie algebra homomorphism ρ : g → gl(Sym) constructed in Lemma 6.10 extends, by the universal property of Ug = T/J, to an algebra homomorphism ρ′ : T → End(Sym), with J ⊂ Ker(ρ′). So ρ′(t) = 0. Now,

ρ′(x_{Σ(i)}) · 1 = ρ′(x_{Σ(i)_1} ⊗ · · · ⊗ x_{Σ(i)_m}) · 1

by the definition of x_{Σ(i)}. But this becomes

ρ(x_{Σ(i)_1}) · · · ρ(x_{Σ(i)_m}) · 1

due to the fact that ρ′ is an algebra homomorphism and because the restriction of ρ′ to g is ρ. And then by Lemma 6.10, this equals z_{Σ(i)} modulo terms of lower degree. Hence ρ′(t) · 1 is a polynomial whose term of highest degree is the appropriate combination of the z_{Σ(i)} (1 ≤ i ≤ r). Since ρ′(t) = 0, this combination of the z_{Σ(i)} is 0 in Sym, and t_m ∈ I as required. □

Proof of the Poincaré-Birkhoff-Witt Theorem. Let t ∈ T^m, and let π : T → Ug be the quotient map. We must show that π(t) ∈ U_{m−1} implies t ∈ I. But t ∈ T^m and π(t) ∈ U_{m−1} together imply that π(t) = π(t′) for some t′ ∈ T_{m−1}, and so t − t′ ∈ J. Applying Lemma 6.11 to the tensor t − t′ ∈ T_m ∩ J, and using the fact that its homogeneous component of degree m is t, we get t ∈ I. □

Acknowledgments

First and foremost, I would like to thank my mentor, Jared Bass, for all of his help this summer—without his guidance, this paper would not have been possible. And of course, I would like to thank Peter May for creating and running the REU.

