
Representations of Finite Dimensional Lie Algebras

July 21, 2015

Contents

1 Introduction to Lie Algebras ...... 5
   1.1 Basic Definitions ...... 5
   1.2 Definition of the classical simple Lie algebras ...... 10
   1.3 Nilpotent and solvable Lie algebras ...... 11
   1.4 g-modules, basic definitions ...... 14
   1.5 Testing solvability and semisimplicity ...... 17
   1.6 Jordan Decomposition and Proof of Cartan's Criterion ...... 20
       1.6.1 Properties ...... 22
   1.7 Theorems of Levi and Malcev ...... 23
       1.7.1 Weyl's complete irreducibility theorem ...... 26
       1.7.2 Classification of irreducible finite dimensional sl(2, C) modules ...... 31
   1.8 Universal enveloping algebras ...... 33

2 Representations of Lie algebras ...... 45
   2.1 Constructing new representations ...... 45
       2.1.1 Pull-back and restriction ...... 45
       2.1.2 Induction ...... 46
   2.2 Verma Modules ...... 49
   2.3 Abstract Jordan Decomposition ...... 52

3 Structure Theory of Semisimple Complex Lie Algebras ...... 57
   3.1 Root Space Decomposition ...... 58
   3.2 Root Systems ...... 64
       3.2.1 Changing scalars ...... 69
       3.2.2 Bases of root systems ...... 75
       3.2.3 Weyl Chambers ...... 77


       3.2.4 Subsets of roots ...... 80
       3.2.5 Classification of a parabolic subset over a fixed R+(B) ...... 82
   3.3 Borel and parabolic subalgebras of a complex semisimple Lie algebra ...... 82

4 Highest Weight Theory ...... 85
   4.1 Construction of highest weight modules ...... 91
   4.2 Character formula ...... 96
   4.3 Category O ...... 97
   4.4 Cartan matrices and Dynkin diagrams ...... 100
       4.4.1 Classification of irreducible, reduced root systems / Dynkin diagrams ...... 105

Literature

Humphreys:

• Introduction to Lie Algebras and Representation Theory

• Complex Reflection Groups

• Representations of semi simple Lie Algebras

• Knapp: Lie groups: beyond an introduction

• V.S. Varadarajan: Lie Groups, Lie Algebras and their Representations

Chapter 1

Introduction to Lie Algebras

(Lecture 1)

1.1 Basic Definitions

Definition 1.1.1. A Lie algebra is a vector space g over some field k, together with a bilinear map [−, −] : g × g → g such that:

[x, x] = 0 for all x ∈ g    (antisymmetry)    (1.1)
[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for all x, y, z ∈ g    (Jacobi identity)    (1.2)

Remarks. • [−, −] is called a Lie bracket.

•∀ x, y ∈ g, [x, y] = −[y, x](because 0 = [x + y, x + y] = [x, x] + [x, y] + [y, x] + [y, y])

• Often: Say a k-Lie algebra if we have a Lie algebra over k. Examples. V a k-vector space, [v, w] = 0∀v, w ∈ V defines a k-Lie algebra. Any associative k-algebra is naturally a k-Lie algebra by

[a, b] := ab − ba∀a, b ∈ A

Exercise: check (1.2).


In particular: The vector space

gl(n, k) = {A ∈ M_{n×n}(k)} is a Lie algebra via [A, B] = AB − BA (the usual product of matrices). More generally, gl(V) = {f : V → V k-linear}, for V a k-vector space, is a Lie algebra via [f, g] = f ◦ g − g ◦ f for all f, g ∈ gl(V). These are the general linear Lie algebras.
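As a quick sanity check (an illustration, not part of the lecture; the helper name `bracket` is ad hoc), one can verify numerically that the commutator bracket on matrices satisfies (1.1) and (1.2):

```python
import numpy as np

def bracket(a, b):
    """Commutator bracket [a, b] = ab - ba on square matrices."""
    return a @ b - b @ a

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# (1.1): [x, x] = 0
assert np.allclose(bracket(x, x), 0)

# (1.2): Jacobi identity [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jacobi, 0)
print("commutator bracket satisfies antisymmetry and the Jacobi identity")
```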

Let g1, g2 be Lie algebras; then g1 ⊕ g2 is a Lie algebra via

[(x, y), (x′, y′)] = ([x, x′], [y, y′])    (take the Lie bracket componentwise).

The Lie algebra g1 ⊕ g2 is called the direct sum of g1 and g2.

Definition 1.1.2. Given k-Lie algebras g1, g2, a homomorphism f : g1 → g2 of k-Lie algebras is a k-linear map such that f([x, y]) = [f(x), f(y)].

Remarks. • id : g → g is a Lie algebra homomorphism.

• If f : g1 → g2, g : g2 → g3 are Lie algebra homomorphisms, then g ◦ f : g1 → g3 is a Lie algebra homomorphism (because

g ◦ f([x, y]) = g([f(x), f(y)]) = [g ◦ f(x), g ◦ f(y)]).

Hence: k-Lie algebras with Lie algebra homomorphisms form a cate- gory.

Example 1.1.3. Let g be a Lie algebra.

ad : g → gl(g), x ↦ ad(x), where ad(x)(y) = [x, y] for all x, y ∈ g, is a Lie algebra homomorphism. It is called the adjoint representation. (Linearity is clear, since [−, −] is bilinear, and

[ad(x), ad(y)](z) = ad(x) ad(y)(z) − ad(y) ad(x)(z) = [x, [y, z]] − [y, [x, z]] = [[x, y], z] = ad([x, y])(z), where the third equality uses the Jacobi identity.)

Definition 1.1.4. Let g be a k-Lie algebra, l ⊂ g a vector subspace.

• l is called a Lie subalgebra if [x, y] ∈ l for all x, y ∈ l.

• l is called an ideal if [x, y] ∈ l for all x ∈ l, y ∈ g (denoted l C g; equivalently, [x, y] ∈ l for all x ∈ g, y ∈ l).

Given a Lie algebra g,I C g, then the vector space g/I becomes a Lie algebra via [x + I, y + I] = [x, y] + I.

To check this is well defined:

Assume x+I = x0+I, y+I = y0+I =⇒ x0 = x+u, y0 = y+v, u, v ∈ I =⇒ [x0 + I, y0 + I] = [u + x + I, v + y + I] = [u + x, v + y] + I = [u, v] + [u, y] + [x, v] + [x, y] + I = [x, y] + I =⇒ well defined.

Proposition 1.1.5. Let f : g1 → g2 be a Lie algebra homomorphism. Then

1 ker(f) C g1.

2. im(f) is a Lie subalgebra of g2.

3. If I C g1 with ker(f) ⊆ I, then f factors through the quotient, i.e. there is a commutative diagram with f : g1 → g2, the projection g1 → g1/I, x ↦ x + I, and an induced map g1/I → g2.

In particular, g1/ker(f) ≅ im(f) is an isomorphism of Lie algebras.

Proof: Standard.

Remark 1.1.6. There are the usual isomorphism theorems:

a. I, J C g, I ⊆ J; then J/I C g/I and (g/I)/(J/I) ≅ g/J.

b. I, J C g. Then I + J C g, I ∩ J C g, and I/(I ∩ J) ≅ (I + J)/J.

Definition 1.1.7. A Lie algebra g is called simple if g contains no ideals I C g except for I = 0 or I = g, and if [g, g] ≠ 0.

Remark 1.1.8. [g, g] = 0 ⇐⇒ [x, y] = 0 for all x, y ∈ g (i.e. g is abelian). If g is simple, then:

a. Z(g) := {x ∈ g : [x, y] = 0 for all y ∈ g} = 0 (because Z(g) C g and g is not abelian); Z(g) C g because for all x, y ∈ g, z ∈ Z(g): [[x, z], y] = [x, [z, y]] + [z, [y, x]] = 0.

b. [g, g] = g.

[g, g] is obviously an ideal, because for x, y, z ∈ g, [x, [y, z]] ∈ [g, g]. It is called the derived Lie algebra of g. Also, [g, g] is the smallest ideal of g such that g/[g, g] is abelian.

Example 1.1.9. sl(2, k) := {A ∈ Mat(2 × 2, k) : Tr(A) = 0} is a Lie subalgebra of gl(2, k), because Tr([A, B]) = Tr(AB − BA) = 0 for all A, B ∈ sl(2, k).

Fact 1.1.10. sl(2, k) is simple ⇐⇒ char(k) ≠ 2.

Proof. Choose a standard (sl2 ”triple”) as follows:

e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

Then we have (exercise!):

[h, e] = 2e,   [h, f] = −2f,   [e, f] = h.

These relations determine the Lie bracket. If char(k) = 2, then the vector subspace spanned by h is a non-trivial ideal, because [e, h] = −2e = 0, [f, h] = 2f = 0 and [h, h] = 0. Hence g = sl(2, k) is not simple.
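A quick numerical check of these relations (an illustration, not from the notes; the names are the matrices just defined):

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def bracket(a, b):
    """Commutator bracket on matrices."""
    return a @ b - b @ a

assert np.array_equal(bracket(h, e), 2 * e)
assert np.array_equal(bracket(h, f), -2 * f)
assert np.array_equal(bracket(e, f), h)
print("[h,e] = 2e, [h,f] = -2f, [e,f] = h hold")
```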

Remark 1.1.11. gl(n, k) is not simple, because 1 spans a non-trivial ideal (Note Z(gl(n, k)) 6= 0).

Lemma 1.1.12. ker(ad : g → gl(g)) = {x ∈ g : ad(x) = 0} = {x ∈ g : [x, y] = 0 for all y ∈ g} = Z(g).

Proof. Just definitions.

Corollary 1.1.13. Let g be a simple Lie algebra; then g is a linear Lie algebra (i.e. g is a Lie subalgebra of some Lie algebra of matrices, i.e., of gl(V) for some vector space V).

Proof. Since g is simple, then Z(g) = 0, which implies ad is injective, hence g ∼= ad(g) is an isomorphism of Lie algebras, and ad(g) is a Linear lie algebra.

Theorem 1.1.14 (Theorem of Ado). Every finite dimensional Lie algebra is linear.

Proof. Later.

Theorem 1.1.15. (Cartan-Killing Classification) Every complex finite-dimensional simple Lie algebra is isomorphic to exactly one of the following list:

• Classical Lie Algebras:

An sl(n + 1, C); n ≥ 1

Bn so(2n + 1, C); n ≥ 2

Cn sp(2n, C); n ≥ 3

Dn so(2n, C); n ≥ 4 • Exceptional Lie Algebras:

E6

E7

E8

F4

G2

Remark 1.1.16. Every finite dimensional Lie algebra which is a direct sum of simple Lie algebras is called semi-simple.

Remark 1.1.17. (Without further explanations)

{connected, compact Lie groups with trivial center} ←→ (1:1) {semisimple complex finite-dimensional Lie algebras}

G ↦ C ⊗_R Lie(G), where Lie(G) is the tangent space of G at the identity.
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−

(Lecture 2- 6th April 2011)

1.2 Definition of the classical simple Lie al- gebras

• sl(n + 1, C) = {A ∈ gl(n + 1, C) : Tr(A) = 0}

• sp(2n, C) = {A ∈ gl(2n, C) : f(Ax, y) = −f(x, Ay) for all x, y ∈ C^{2n}}, where f is the bilinear form given by f(x, y) := x^t M y, with

M = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.

Explicitly:

Write X ∈ gl(2n, C) in n × n blocks as X = \begin{pmatrix} A & B \\ C & D \end{pmatrix}. The condition f(Xx, y) = −f(x, Xy) for all x, y is equivalent to X^t M + M X = 0, i.e.

\begin{pmatrix} -C^t & A^t \\ -D^t & B^t \end{pmatrix} = \begin{pmatrix} -C & -D \\ A & B \end{pmatrix}.

Thus

sp(2n, C) = \left\{ \begin{pmatrix} A & B \\ C & -A^t \end{pmatrix} : B, C \text{ symmetric} \right\} \implies \dim(sp(2n, C)) = n^2 + n(n+1) = 2n^2 + n.

• so(2n + 1, C) = {X ∈ gl(2n + 1, C) : X^t M + M X = 0}, where

M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & I_n \\ 0 & I_n & 0 \end{pmatrix}.

• so(2n, C) = {X ∈ gl(2n, C) : X^t M + M X = 0}, where

M = \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}.
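A small numerical cross-check of these matrix realizations (an illustration, not from the notes; the helper name is ad hoc): the dimension of {X : X^t M + M X = 0} is the kernel dimension of the linear map X ↦ X^t M + M X, and should reproduce the classical dimensions dim sp(2n, C) = 2n² + n and dim so(m, C) = m(m − 1)/2.

```python
import numpy as np

def dim_annihilator(M):
    """Dimension of {X : X^T M + M X = 0} inside gl(d), where d = M.shape[0]."""
    d = M.shape[0]
    L = np.zeros((d * d, d * d))
    for k in range(d * d):
        X = np.zeros((d, d))
        X[k // d, k % d] = 1.0
        L[:, k] = (X.T @ M + M @ X).reshape(-1)
    return d * d - np.linalg.matrix_rank(L)

for n in (1, 2, 3):
    I, Z = np.eye(n), np.zeros((n, n))
    M_sp  = np.block([[Z, I], [-I, Z]])                               # sp(2n)
    M_so  = np.block([[Z, I], [I, Z]])                                # so(2n)
    M_odd = np.block([[np.ones((1, 1)), np.zeros((1, 2 * n))],
                      [np.zeros((2 * n, 1)), M_so]])                  # so(2n+1)
    assert dim_annihilator(M_sp) == 2 * n * n + n
    assert dim_annihilator(M_so) == n * (2 * n - 1)
    assert dim_annihilator(M_odd) == n * (2 * n + 1)
print("dim sp(2n) = 2n^2+n, dim so(2n) = n(2n-1), dim so(2n+1) = n(2n+1) confirmed for n = 1, 2, 3")
```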

1.3 Nilpotent and solvable Lie algebras

Definition 1.3.1.
• Let A be a k-algebra; then x ∈ A is said to be nilpotent if x^n = 0 for some n ∈ N.
• Let g be a k-Lie algebra; then x ∈ g is said to be ad-nilpotent if ad(x) ∈ gl(g) = End_k(g) is nilpotent.

Definition 1.3.2. Let g be a Lie algebra. Define g^0 := g, g^1 := [g, g], g^i := [g, g^{i−1}], and g^{(0)} := g, g^{(1)} := [g, g], g^{(i)} := [g^{(i−1)}, g^{(i−1)}]. For any Lie algebra g, ··· ⊂ g^1 ⊂ g^0 is called the lower central series and ··· ⊂ g^{(1)} ⊂ g^{(0)} the derived series. The Lie algebra g is said to be nilpotent if g^i = 0 for some i > 0. It is said to be solvable if g^{(i)} = 0 for some i > 0.

Remark 1.3.3. g nilpotent =⇒ g solvable (since g^{(i)} ⊆ g^i for all i).

Examples. • [gl(n, C), gl(n, C)] = sl(n, C) (Property of .)

• [sl(n, C), sl(n, C)] = sl(n, C) =⇒ gl(n, C) and sl(n, C) are neither nilpotent nor solvable.

• N+ := { strictly upper triangular matrices } are nilpotent (the same for strictly lower ones).

• Diagonal matrices are abelian hence nilpotent and solvable.

• Heisenberg Lie algebra K3(R)

• sl(2, k), char(k) = 2 is solvable.
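The series of Definition 1.3.2 can be computed mechanically. A small sympy sketch (function names are ad hoc, not from the notes), applied to the upper triangular 2×2 matrices, which are solvable but not nilpotent:

```python
import itertools
import sympy as sp

def span_basis(mats):
    """Reduce a list of matrices to a basis of their span (row reduction on vectorized matrices)."""
    if not mats:
        return []
    vecs = sp.Matrix([[m[i, j] for i in range(m.rows) for j in range(m.cols)] for m in mats])
    rref, pivots = vecs.rref()
    n = mats[0].rows
    return [sp.Matrix(n, n, list(rref.row(r))) for r in range(len(pivots))]

def bracket_space(basis_a, basis_b):
    """Basis of span of all [a, b] = ab - ba with a, b running over the given bases."""
    return span_basis([a * b - b * a for a, b in itertools.product(basis_a, basis_b)])

def series(basis, derived=False):
    """Dimensions along the lower central series (derived=False) or derived series (derived=True)."""
    dims, current = [len(basis)], basis
    while current:
        current = bracket_space(current if derived else basis, current)
        dims.append(len(current))
        if len(current) == dims[-2]:   # stabilized without reaching 0
            break
    return dims

E = lambda i, j: sp.Matrix(2, 2, lambda a, b: 1 if (a, b) == (i, j) else 0)
b = [E(0, 0), E(0, 1), E(1, 1)]        # upper triangular 2x2 matrices
print("lower central series dims:", series(b))        # stabilizes at 1, never reaches 0
print("derived series dims:      ", series(b, True))  # reaches 0 -> solvable
```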

Proposition 1.3.4. Let g be a Lie algebra. The following statements hold.

1. g nilpotent =⇒ any Lie subalgebra or quotient by an ideal is nilpotent.

2.I C g, g/I nilpotent, I ⊂ Z(g) =⇒ g nilpotent. 3. g 6= {0} nilpotent =⇒ Z(g) 6= {0}.

4. g nilpotent, x ∈ g =⇒ x is ad −nilpotent.

5. I C g =⇒ I^{(i)} C g for all i.

Proof.

1. l ⊂ g Lie subalgebra =⇒ [l, l] ⊂ [g, g] =⇒ l^i ⊂ g^i for all i =⇒ l is nilpotent. Now let f : g1 → g2 be a Lie algebra homomorphism.

Claim. If g1 is nilpotent then so is f(g1):

We have f(g_1^i) = f(g_1)^i: this is clear for i = 0, and for i > 0 we have, by induction,

f(g_1^i) = f([g_1, g_1^{i−1}]) = [f(g_1), f(g_1^{i−1})] = [f(g_1), f(g_1)^{i−1}] = f(g_1)^i.

Hence any homomorphic of gi is nilpotent, so any quotient is.

2. g/I nilpotent =⇒ (g/I)^i = 0 for some i ∈ N. Let can : g → g/I be the canonical projection. Then

(g/I)^i = can(g)^i = can(g^i) = 0 =⇒ g^i ⊂ I ⊂ Z(g) =⇒ g^{i+1} = [g, g^i] = 0.

3. g nilpotent =⇒ gi = {0}, for some i ∈ N which we choose minimal. Then, [g, gi−1] = 0 so the center is not trivial.

4. gi+1 = 0 for some i ∈ N. Let x ∈ g =⇒ ad(x)i(g) ⊂ gi+1 = {0} =⇒ ad(x) is nilpotent.

5. I, J C g ideals. Then by the Jacobi identity, [I, J] C g. Now apply induction on i.

Proposition 1.3.5. Let V 6= 0 be a finite dimensional k-vector space, and g ⊂ gl(V) a Lie subalgebra, such that g contains only nilpotent elements of Endk(V). Then: 1. There exists v ∈ V, v 6= 0, such that gv = {0}.

2. There is a chain of k-vector subspaces {0} ⊂ V_1 ⊂ ··· ⊂ V_d = V with dim(V_i) = i, such that g(V_i) ⊂ V_{i−1}.

3. There is a basis of V such that, viewing elements of g as matrices in this basis, g consists of strictly upper triangular matrices, i.e.

g ⊂ \left\{ \begin{pmatrix} 0 & & * \\ & \ddots & \\ 0 & & 0 \end{pmatrix} \right\}.

(In particular g is nilpotent.)

Proof. We know that if x ∈ g is nilpotent, then ad(x) ∈ End_k(gl(V)) is nilpotent, because

ad(x)^n(y) = \sum_{j=0}^{n} (−1)^{n−j} \binom{n}{j} x^j y x^{n−j}.

1. We proceed by induction on dim(g). If dim(g) = 0, it's clear. Choose l ⊊ g a maximal proper Lie subalgebra. Consider the action of l on g via ad and the induced action of l on g/l. Since ad(l) consists of nilpotent elements, by induction there exists x̄ ≠ 0 in g/l such that ad(l)(x̄) = 0. Hence there exists x ∈ g − l such that [l, x] ⊂ l; thus k·x + l is a Lie subalgebra of g, hence equal to g by maximality of l. Now consider W = {w ∈ V : l·w = 0}. By the induction hypothesis, W ≠ {0}.

Claim: xW ⊂ W, because if y ∈ l, w ∈ W, then yxw = xyw + [y, x]w = 0.

Since x is nilpotent, there exists v ∈ W, v ≠ 0, such that xv = 0, hence g·v = 0 because g = k·x + l.

2. Consider the inclusion p : g → gl(V). The image consists of nilpotent endomorphisms.

Claim. There exists a flag

V0 ⊂ V1 ⊂ · · · ⊂ Vd = V,

where dim(Vi) = i, such that p(g)(Vi) ⊂ Vi−1.

Induction on dim(V): If dim(V) = 1, this is clear. Now let dim(V) > 1. By part 1, there exists v ∈ V, v ≠ 0, such that p(g)v = 0. Let V_1 := k·v, V′ := V/V_1, and consider can : V → V/V_1. By induction V′ has such a flag {0} ⊂ V′_1 ⊂ ···; let V_i := can^{−1}(V′_{i−1}) and proceed by induction.

1.4 g-modules, basic definitions

Theorem 1.4.1 (Engel). Let g be a finite dimensional Lie algebra. Then g is nilpotent if and only if all its elements are ad-nilpotent.

Definition 1.4.2. A representation of a k-Lie algebra g is a k-vector space V with a Lie algebra homomorphism ρ : g → gl(V). A representation is called irreducible or simple if the only subspaces W of V with ρ(g)(W) ⊆ W are {0} and V.

Remark 1.4.3. A representation ρ : g → gl(V) is also called a g-module. Alternatively, a g-module is a k-bilinear map

g × V → V, (x, v) ↦ x · v, such that x · (y · v) − y · (x · v) = [x, y] · v.

Remark 1.4.4. The representation is called indecomposable if there are no non-zero ρ(g)-stable subspaces W, W′ with V = W ⊕ W′. Clear: for g-modules, irreducible implies indecomposable; however, the other implication doesn't hold!

Theorem 1.4.5 (Lie, abstract form). Every irreducible representation of a finite dimensional complex solvable Lie algebra is 1-dimensional.

Theorem 1.4.6 (Lie, concrete form). Let g ⊂ gl(V) be a solvable linear Lie algebra, dim(V) < ∞, V a complex vector space. Then there exists v ∈ V, v ≠ 0, such that g · v ⊆ Cv (i.e. v is a common eigenvector for all the elements of g).

Remark 1.4.7. This is wrong for a general field k. For example, sl(2, k) for char(k) = 2 is solvable, but the 2-dimensional standard representation/vector representation sl(2, k) → gl(k²) is irreducible and not one dimensional.

Proof of Engel. The first implication holds by Proposition 1.3.4. Conversely, if ad(x) is nilpotent for all x ∈ g, then ad(g) ⊂ gl(g) is nilpotent by Proposition 1.3.5, and since ad(g) ≅ g/Z(g) by the isomorphism theorem, g is nilpotent by Proposition 1.3.4 (2).
———————————————————————————————
Lecture 3, 11/04/2011.

Theorem 1.4.8 (Lie). Let g be a finite dimensional solvable Lie subalgebra of gl(V), for V some finite dimensional complex vector space. Then there exists v ∈ V, v ≠ 0, such that g · v ⊆ Cv.

Definition 1.4.9. Let V be a k-vector space and g ⊂ gl(V) a Lie subalgebra. A linear map λ : g → k is called a weight for g if

V_λ := {v ∈ V : xv = λ(x)v for all x ∈ g} ≠ {0},

and then V_λ is called a weight space.

Lemma 1.4.10 (Invariance Lemma). Suppose char(k) = 0 (e.g. k = C as in the Theorem), g ⊂ gl(V) is a Lie subalgebra with V a finite dimensional vector space, I C g, and λ : I → k is a weight for I. Then V_λ is g-stable, i.e. gV_λ ⊂ V_λ (i.e. xv ∈ V_λ for all x ∈ g, v ∈ V_λ). (Proof: exercise.)

Proof of Lie's Theorem, by induction on dim(g).

dim(g) = 1: Clear!

dim(g) > 1: Since g is solvable, [g, g] ⊊ g. Choose a vector subspace U of codimension 1 in g containing [g, g], so g = U ⊕ Cz for some z ∈ g (as vector spaces). Then U is an ideal of g, since [g, U] ⊆ [g, g] ⊆ U. Since Lie subalgebras and ideals inherit solvability, U is solvable, so by the induction hypothesis there exists w ∈ V, w ≠ 0, such that Uw ⊆ Cw. Let λ : U → C be the corresponding weight. By the Invariance Lemma (1.4.10), V_λ ≠ 0 is g-stable, in particular z-stable, i.e. zV_λ ⊆ V_λ. Hence there exists an eigenvector 0 ≠ v ∈ V_λ for z. Let µ be the corresponding eigenvalue. Then v is the vector we are looking for:

x ∈ g = U ⊕ Cz, x = u + βz, u ∈ U, β ∈ C. Then, xv = (u + βz)v = λ(u)v + βµv = (λ(u) + βµ)v.

Remark 1.4.11. One could weaken the assumption for the field to be C by passing to a subfield which contains at least all of the occurring eigenvalues. However, char(k) = 0 is required.

Remark 1.4.12. Let ϕ : g1 → g2 be a Lie algebra homomorphism. Then g1 is solvable if and only if ker(ϕ) and im(ϕ) are both solvable. In particular, for any Lie algebra g:

a. If I, J C g are solvable ideals, then I + J is again a solvable ideal, because

(I + J)/J ≅ I/(I ∩ J)

and it follows since then I + J/J and J are solvable. In particular, there is always a unique maximal proper solvable ideal.

b. Let ICg. Take a 2-dimensional Lie algebra with basis x, y and [x, y] = x solvable, not nilpotent. Then I := span(x) is a 1-dimensional ideal (in particular nilpotent), and the quotient g/I is also 1-dimensional nilpotent but g is not. Corollary 1.4.13. A Lie algebra g is solvable if and only if [g, g] is nilpotent. Proof. The quotient g/[g, g] is abelian, hence solvable, and [g, g] is nilpotent, hence solvable. Thus g is solvable. If g is solvable, then ad(g) ⊂ gl(g) is solvable. By Lie’s Theorem,   ∗ · · · ∗   g . .. . ad( ) ⊂ . . .  0 · · · ∗ 

After some good choice of basis in g, hence

  0 · · · ∗   g g . .. . [ad( ), ad( )] ⊆ . . .  0 ··· 0  and therefore a nilpotent Lie subalgebra. Now, ker(ad|[g,g]) = Z([g, g]), hence, we have a short exact sequence

0 → ker(ad) → [g, g] → ad([g, g]) → 0

Since ker(ad) is an ideal contained in Z(g), by Proposition 1.3.4, [g, g] is nilpotent.

Definition 1.4.14. Let g be a k-Lie algebra. The maximal solvable ideal (which exists by the previous Remark) is called the radical of g and denoted rad(g). The Lie algebra g is called semi-simple if rad(g) = {0}.

1.5 Testing solvability and semisimplicity

Aim. ”Explicit” criterion for solvability/ semi-simplicity. Motivation: V finite dimensional complex vector space, g ⊂ gl(V) a solvable Lie subalgebra. Then,   ∗ · · · ∗   g  ..  ⊆  .  (Lie’s Theorem)  0 ∗ 

So: tr(xy) = 0∀x ∈ g, y ∈ [g, g]. We develop traces to find the criterion we want.

Definition 1.5.1. Let g be a finite dimensional k-Lie algebra. The Killing form is the bilinear form

K(x, y) := Tr(ad(x) ad(y))

Properties of Killing form:

symmetric K(x, y) = K(y, x) (clear!) (1.3) invariance K([x, y], z) = K(x, [y, z])(”associativity”) (1.4)

To see this, note that for matrices X, Y, Z we have:

[X, Y]Z = XYZ − Y(XZ) X[Y, Z] = XYZ − (XZ)Y

In particular:

a. rad(g) = {x ∈ g : K(x, y) = 0∀y ∈ g} C g (use invariance)

⊥ b.I C g, I := {x ∈ g : K(x, y) = 0∀y ∈ I} is again an ideal (use invari- ance).

Remark 1.5.2. The condition 1.4 is called ”invariance” because K corre- sponds to an K ∈ (g ⊗ g)∗ (= dual vector space of g ⊗ g) which is invariant under the natural action of g. Here invariant means sent to zero under the action of g. This makes sense by the following:

Let G be an affine , and V a representation. Then it is also a representation of g = Lie(G). Now, v ∈ V is said to be G-invariant in the sense that gv = v for all g ∈ G =⇒ g-invariant in the sense that xv = 0 for all x ∈ g.

Theorem 1.5.3 (Cartan's solvability criterion). Let k be a field with char(k) = 0, and let g be a finite dimensional k-Lie algebra. Then

g solvable ⇐⇒ K(x, y) = 0 for all x ∈ g, y ∈ [g, g].

Proof. First we need a Lemma: Lemma 1.5.4. (Cartan criterion gl(V)) Let V be a complex vector space, finite dimensional, and g ⊂ gl(V) Lie subalgebra. Then

g solvable ⇐⇒ Tr(xy) = 0∀x ∈ g, y ∈ [g, g].

Assuming the lemma, we can deduce the theorem (for k = C): K(x, y) = 0 for all x ∈ g, y ∈ [g, g] ⇐⇒ Tr(ad(x) ad(y)) = 0 for all x ∈ g, y ∈ [g, g] ⇐⇒ ad(g) solvable, by the Lemma applied to ad(g) ⊂ gl(g).

Now, we have a short exact sequence of Lie algebras:

0 → Z(g) → g → ad g → 0

So, g is solvable by the above.

Theorem 1.5.5 (Characterization of semi-simple Lie algebras). For g a finite dimensional, complex Lie algebra. Then the following are equivalent:

1 g is isomorphic to a direct sum of simple Lie algebras.

2 g is a direct sum of its simple ideals (viewed as Lie algebras.)

3 The Killing form K is non-degenerate.

4 g has no non-zero abelian ideals.

5 g has no non-zero solvable ideals. ——————————————————————————————— Lecture 4, 13/04/2011. Proof of Theorem 1.5.5. First note that the Cartan criterion says that if Kg = 0, then g is solvable.

4 ⇐⇒ 5 Every abelian ideal is solvable. On the other hand, if I C g is solvable, then the last non-zero step in the derived series is an abelian ideal. Claim. Let g be a finite dimensional Lie algebra (over arbitrary k), and I C g an ideal. Then KI = Kg|I×I where KI, Kg are the killing forms of I, g respectively.

Proof. Assume U ⊂ g is a vector subspace, and ϕ : g → g is a linear map. Then Tr(ϕ) = Tr(ϕ|U). Now take U = I and ϕ = ad(x) ad(y) for some x, y ∈ I. Then ad(x) ad(y)(z) ∈ I.

5 =⇒ 3 First, rad(K) = {x ∈ g : K(x, y) = 0∀x ∈ g}. Now, since K is invariant, then rad(K) C g(x ∈ rad(K), z ∈ g =⇒ K([x, z], y) = K([x, [z, y]] = 0)), hence by the Claim, Krad(K) = Krad(K)×rad(K) = 0. By Cartan’s criterion, rad(K) is solvable. Since by assumption there are no non- zero solvable ideals, then rad(K) = 0, so K is non-degenerate.

3 =⇒ 4: Assume {0} ≠ I C g is an abelian ideal; then (ad(x) ad(y))² = 0 for all x ∈ g, y ∈ I, since ad(y) ad(x) ad(y)(z) = [y, [x, [y, z]]] ∈ [I, I] = 0. This means ad(x) ad(y) is nilpotent, so Tr(ad(x) ad(y)) = 0 for x ∈ g, y ∈ I. Hence K is degenerate, a contradiction!

2 =⇒ 1 =⇒ 4 Clear!

5 =⇒ 2: Let I C g. Then I^⊥ := {x ∈ g : K(x, y) = 0 for all y ∈ I} is an ideal of g (x ∈ I^⊥, z ∈ g =⇒ K([x, z], y) = K(x, [z, y]) = 0). By Cartan's criterion, I ∩ I^⊥ is a solvable ideal, hence zero. On the other hand, K is non-degenerate (because 5 =⇒ 3), so g = I ⊕ I^⊥, and hence every

ideal of I or I^⊥ is an ideal of g. By induction on dimension, I and I^⊥ satisfy 2. Hence g is a direct sum of simple ideals. It is left to show that every simple ideal occurs: assume that J ⊂ g is a simple ideal. Then,

J = [J, J] = [J, g] = \bigoplus_{i=1}^{r} [J, I_i],

where g = \bigoplus_{i=1}^{r} I_i is our decomposition into a sum of simple ideals. This means that, since the I_i are simple and J is also simple, J = [J, I_i] = I_i for some i. Hence every simple ideal occurs in the decomposition.

Example 1.5.6. The Killing form of sl(2, k) is given, in the basis (e, h, f) from the proof of Fact 1.1.10, by the matrix

\begin{pmatrix} 0 & 0 & 4 \\ 0 & 8 & 0 \\ 4 & 0 & 0 \end{pmatrix}.

So sl(2, k) is semisimple if and only if char(k) ≠ 2.
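A quick numerical verification of this matrix (an illustration, not from the notes; the helper name `ad_matrix` is ad hoc):

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def ad_matrix(x, basis):
    """Matrix of ad(x) on sl(2) in the ordered basis, by solving [x, b_j] = sum_i c_ij b_i."""
    B = np.column_stack([b.reshape(-1) for b in basis])
    cols = [np.linalg.lstsq(B, (x @ b - b @ x).reshape(-1), rcond=None)[0] for b in basis]
    return np.column_stack(cols)

K = np.array([[np.trace(ad_matrix(x, basis) @ ad_matrix(y, basis)) for y in basis] for x in basis])
print(np.round(K))   # [[0 0 4], [0 8 0], [4 0 0]], i.e. the matrix above; note K = 4 * trace form
```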

Remark 1.5.7. One can show that, for sl(n, C),

K(x, y) = 2n tr(xy) and for gl(n, C):

K(a, b) = 2n tr(ab) − 2 tr(a) tr(b)

1.6 Jordan Decomposition and Proof of Cartan’s Criterion

Lemma 1.6.1. Let V be a finite dimensional C-vector space, and x : V → V a C-linear endomorphism. Then there exist unique C-linear endomorphisms x_s, x_n such that x_s is diagonalizable, x_n is nilpotent, x = x_s + x_n and x_s x_n = x_n x_s. Moreover, every subspace of V stabilized by x is stabilized by x_s and x_n.

Example 1.6.2. i.

\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad x = x_s + x_n.

ii.

\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} + \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}, \qquad x = x_s + x_n.

The latter is not a Jordan decomposition because xn and xs don’t commute.

Proof. Let a_1, ..., a_k be the distinct eigenvalues of x, so the characteristic polynomial is χ_x(t) = \prod_{i=1}^{k} (t − a_i)^{m_i} for some m_i ∈ N. Let V(x, a_i) := ker((x − a_i)^{m_i}) be the generalized eigenspace of x corresponding to a_i. Then,

V = \bigoplus_{i=1}^{k} V(x, a_i).   (1.5)

Hence, by the Chinese remainder Theorem, there exists P(t) ∈ C[t] a poly- nomial satisfying:

P(t) ≡ a_i mod (t − a_i)^{m_i},  1 ≤ i ≤ k;   P(t) ≡ 0 mod t.

Define

xs :=P(x)

xn :=Q(x) where Q(t) = t − P(t) ∈ C[t].

Now, x_s and x_n commute: in fact they commute with everything that commutes with x. By construction, (x_s − a_i)|_{V(x,a_i)} = 0, hence x_s is diagonalizable in a basis adapted to the decomposition (1.5), so x_n = x − x_s must be nilpotent. This proves existence.

To prove uniqueness, assume there exist two such decompositions of x, namely x = x_s + x_n and x = x_s′ + x_n′. We then have:

x_s − x_s′ = x_n′ − x_n.   (1.6)

Since x_s′ and x_n′ commute with x and hence with x_s = P(x) and x_n, the right hand side of (1.6) is nilpotent and the left hand side is diagonalizable; hence both sides must be zero, and uniqueness is proven.
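The existence statement can be illustrated with sympy's Jordan form; this computes x_s and x_n directly from the Jordan normal form rather than via the interpolation polynomial P(t) used in the proof (the function name is ad hoc):

```python
import sympy as sp

def jordan_decomposition(x):
    """Split a square sympy Matrix x as x = xs + xn, xs diagonalizable, xn nilpotent, [xs, xn] = 0."""
    P, J = x.jordan_form()                              # x = P * J * P**-1
    D = sp.diag(*[J[i, i] for i in range(J.rows)])      # diagonal (semisimple) part of J
    xs = P * D * P.inv()
    xn = x - xs
    return xs, xn

x = sp.Matrix([[1, 2], [0, 3]])                         # the matrix from Example 1.6.2 ii.
xs, xn = jordan_decomposition(x)
assert xs * xn == xn * xs and xn**x.rows == sp.zeros(*x.shape)
print(xs, xn)   # xs = x, xn = 0: distinct eigenvalues, so x itself is already semisimple
```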

1.6.1 Properties

1) Functoriality: Assume f : V → W is a linear map of finite dimensional C-vector spaces and x ∈ End(V), y ∈ End(W) satisfy f ◦ x = y ◦ f. Then also

f ◦ x_s = y_s ◦ f   and   f ◦ x_n = y_n ◦ f.

Proof. f ◦ x = y ◦ f implies that f(V(x, λ)) ⊂ V(y, λ). Then go through the definition of x_s, x_n.

2) im(x_s) ⊆ im(x). (This holds since x_s = P(x) and P(0) = 0; this means x_s(v) = x ◦ r(x)(v) for some r(t) ∈ C[t].)

Lemma 1.6.3. If x = xs+xn is the Jordan decomposition of x, then ad(xs)+ ad(xn) is the Jordan decomposition of ad(x).

Remark 1.6.4. This lemma is crucial for defining the "abstract" Jordan decomposition in a complex finite dimensional Lie algebra g.

Proof. We have [ad(x_s), ad(x_n)] = ad([x_s, x_n]) = 0, so ad(x_s), ad(x_n) commute. Further, ad(x_n) is nilpotent because x_n is nilpotent. To see that ad(x_s) is diagonalizable, choose an eigenbasis v_1, ..., v_k of V for x_s, with eigenvalues x_s(v_i) = λ_i v_i. Then E_{ij} is an eigenvector for ad(x_s) with eigenvalue λ_i − λ_j. (Notation: E_{ij} is the ij-th matrix unit with respect to this basis, i.e. (E_{ij})_{kl} = δ_{ik} δ_{jl}.)

Lemma 1.6.5. Let V be a finite dimensional C-vector space. Let A ⊂ B ⊂ End(V) be vector subspaces, and let

T := {z ∈ End(V) : ad(z)(B) ⊂ A}.

Then, if Tr(xz) = 0 for all z ∈ T, then x is nilpotent. (Proof later!) Proof of Cartan’s Criterion (Lemma 1.5.4). Assume Tr(xy) = 0 for all y ∈ g ⊂ gl(V), x ∈ [g, g]. It is enough to show that [g, g] is nilpotent. We use Lemma 1.6.5 in the following way: Take x ∈ [g, g]. We show that ( T r)(xy) = Pn 0 for all y ∈ gl(V) such that [y, g] ⊂ [g, g]. Write x = i=1[xi, yi]; xi, yi ∈ g. Take y ∈ gl(V) as above. Then:

n n X X Tr(xy) = Tr([xi, yi]y) = Tr(xi[yi, y]) = 0 (by hypothesis.) i=1 i=1

1.7 Theorems of Levi and Malcev

Aim. Let g be a finite dimensional Lie algebra. Then rad(g) ⊕ l = g as vector spaces for some Lie subalgebra l, and g = rad(g) ⋊ l as Lie algebras. Then l is called the Levi complement of g. It is unique up to some inner automorphism.

Definition 1.7.1. Let g be a k-Lie algebra. We define

Der(g) := {Derivations of g} = {δ ∈ Endk(g): g → g : δ([a, b]) = [δ(a), b] + [a, δ(b)]}

It is a Lie subalgebra of Endk(V). Example 1.7.2. Let x ∈ g. Then ad(x): g → g is a derivation. (Jacobi identity!) 24 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

Remark 1.7.3. ad(g) := {ad(x) : x ∈ g} C Der(g).

Definition 1.7.4. Assume x ∈ g is such that ad(x) is nilpotent, and char(k) = 0. Then

exp(ad(x)) := \sum_{k \geq 0} \frac{(\operatorname{ad} x)^k}{k!} \in Aut(g) := {f ∈ End_k(g) : f invertible}.

An automorphism f ∈ Aut(g) is called inner, if it is contained in the subgroup generated by the exp(ad x), x ∈ g as above.

Proposition 1.7.5. Let g be a finite dimensional semi-simple Lie algebra over C. Then Der(g) = ad(g) Proof. Since g is semi-simple, Z(g) = 0. Thus g = ad(g) ⊕ Z(g) ∼= ad(g) as Lie algebras. Let I C Der(g) an ideal of Lie algebras. Consider the Killing ⊥ ⊥ ⊥ form KI = KI×I. In particular, I ∩ I = 0 so [I, I ] = 0. Now, let δ ∈ I , hence we have for x, y ∈ g:

0 = [δ, ad(x)](y) = δ([x, y]) − [x, δ(y)] = [δx, y]

Hence ad(δ(x)) = 0 for all x ∈ g, hence δ = 0. This means there are no proper ideals, hence ad(g) ∼= Der(g) as Lie algebras. Definition 1.7.6. Let g be a finite dimensional k-Lie algebra. If

g = I ⊕ l as vector spaces, where I is an ideal and l is a Lie subalgebra, then g is called the semi direct product of I and l, and is denoted by

g = I n l If such a decomposition exists, then

α :l → Der(I)

x 7→ ad(x)|I

(check!) 1.7. THEOREMS OF LEVI AND MALCEV 25

Conversely, assume that we are given α : l → Der(h) a Lie algebra ho- momorphism for some k-Lie algebras h, l. Then g := h ⊕ l becomes a Lie algebra, via:

[(x, y), (x0, y0)] = (α(y)(x0) − α(y0)x + [x, x0], [y, y0]) For α = 0 this is just the direct sum of Lie algebras. Lemma 1.7.7. Let g be a finite dimensional Lie algebra. Then g/ rad(g) is semi-simple.

Proof. We show that rad(g/ rad(g)) is zero. Let I C g/ rad(g) be a solvable ideal. Consider can :g → g/ rad(g).

−1 J := can (I) C g, an ideal containing rad(g). We have the following short exact sequence:

0 - rad(g) - J - J/ rad(g) - 0

Hence, J must be solvable, hence J ⊂ rad(g), so J = rad(g), so there is no non-trivial solvable ideal. Theorem of Levi 1.7.8. Let g be a finite dimensional Lie algebra. If α : g  l is a Lie algebra homomorphism, surjective, and l is semi-simple, then there exists β : l → g, a Lie algebra homomorphism such that α ◦ β = Id |l. Corollary 1.7.9. Any finite dimensional Lie algebra g is a semi-direct prod- uct of it’s radical and a semi-simple Lie algebra; more precisely,

g = rad(g) n g/ rad(g) where can = α : g → g/ rad(g).

Proof of Corollary 1.7.9. The map can : g  g/ rad(g) is a surjective Lie algebra homomorphism. Therefore there exists a Lie algebra homomorphism β : g/ rad(g) → g as in Levi’s Theorem, hence β(l) ⊂ g is a Lie subalgebra, and rad(g) ⊕ β(l) = g as vector spaces. 26 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

Theorem of Malcev 1.7.10. For two Levi complements l and l′ of g, there exists some x ∈ [g, rad(g)], with ad(x) nilpotent, such that exp(ad(x))(l) = l′. Proof: omitted.

Remark 1.7.11. Levi complements are precisely the maximal semi-simple Lie subalgebras (Exercise).

1.7.1 Weyl’s complete irreducibility theorem. Let g be a k-Lie algebra. Let X, W be k-vector spaces, and let

ϕ : g → gl(V) ψ : g → gl(W) be two representations of Lie algebras.

Constructions of ’new’ representations

• Hom_k(V, W) becomes a g-module via (x · f)(v) = x(f(v)) − f(x · v). (Check: [x, y] · f = x · y · f − y · x · f.) In particular, if W = k with x · λ = 0 for all λ ∈ k, x ∈ g, then Hom_k(V, k) = V^* is a g-module via:

x · f(v) = −f(x · v)

• V ⊗k W is a g-module via

x · (v ⊗ w) = x · v ⊗ w + v ⊗ x · w, for x ∈ g, v ∈ V, w ∈ W.

Remark 1.7.12.

Hom_g(V, W) = {f ∈ Hom_k(V, W) : f(x · v) = x · f(v)}
Hom_k(V, W)^g = {f ∈ Hom_k(V, W) : x · f = 0 for all x ∈ g}

(and these two subspaces coincide).

The elements of Hom_g(V, W) are called g-module homomorphisms or homomorphisms of representations from V to W.

Schur’s Lemma 1.7.13. Let V be a finite dimensional complex vector space and g a finite dimensional complex Lie algebra. Assume V is a g-module, irreducible (i.e. there is no vector subspace stable under the action of g). Then, Endg(V) := Homg(V, V) = C Id. Proof: excercise

Remark 1.7.14. It also holds in case dim(V) is countable.

Remark 1.7.15. Schur’s Lemma doesn’t hold for k = R. Let g = R (with zero Lie bracket), and V = C: Consider the representation

R = g → gl(V) = EndR(C) λ 7→ multiplication by λi

This representation is irreducible (check!). However,

EndR(V) = C id 6= R id Hence in this case Schur’s Lemma doesn’t hold.

Definition 1.7.16. Let g be a k-Lie algebra, and V a g-module. Then, V is completely reducible if it is isomorphic to a direct sum of irreducible g-modules.

——————————————————————————————— Lecture 5, 20/04/2011. Today: Proof of Weyl’s Theorem (For k = C) only. Theorem 1.7.17 (Weyl’s Theorem). Let g be a semisimple complex finite dimensional Lie Algebra. Then every finite dimensional representation of g is completely reducible.

Lemma 1.7.18. For a finite dimensional Lie algebra and V a finite dimen- sional representation, the following are equivalent:

• Every subrepresentation W of V has a complement.

• The representation V is completely reducible.

• V is a sum of irreducible representations.

Proof: Omitted.

Example 1.7.19. Consider g = C, with [x, y] = 0 for all x, y ∈ C. The representation

g = C → gl(C²),  λ ↦ \begin{pmatrix} 0 & \lambda \\ 0 & 0 \end{pmatrix},

is not completely reducible. In general, the representation

k → gl(V) (dim_k(V) < ∞),  1 ↦ a,

is completely reducible if and only if a is diagonalizable.

Definition 1.7.20. A finite dimensional Lie algebra is called reductive if the adjoint representation is completely reducible. By Weyl's Theorem, if a finite dimensional Lie algebra g is semisimple, then it is also reductive. The other direction doesn't hold:

Example 1.7.21. The Lie algebra g = gl(n, C) is reductive, but not semisim- ple.

The Casimir operator

Definition 1.7.22. Let g be a finite dimensional Lie algebra. Let β : g × g → k be a non-degenerate invariant bilinear form. Let V be a representation of g, ϕ : g → gl(V). Let {x_1, ..., x_n} be a basis of g and {x^1, ..., x^n} the 'dual' basis with respect to β, i.e. such that β(x_i, x^j) = δ_{ij}.

Then define Cβ ∈ Endk(V) by

C_β : V → V,  v ↦ \sum_{i=1}^{n} x_i x^i v.

Lemma 1.7.23. C_β ∈ End_g(V).

Proof. Let

j aij(y) = β([xi, y], x ); n X [xi, y] = aij(y)xj j=1 j bji(y) = β(xi, [y, x ]); n i X i [y, x ] = bji(y)x i=1 Then,

n X i i yCβ(v) − Cβ(yv) = (yxix − xix y)v; i=1 i i i i (yxix − xix y) = [y, xi]x v − xi[x , y]v−

Remark 1.7.24. If β is the Killing form, then Cβ is called the Casimir operator. Lemma 1.7.25. Let V be a finite dimensional vector space, g a finite di- mensional Lie algebra, g ⊆ gl(V) a Lie subalgebra, char(k) = 0. If g is semisimple, then 1 The form β defined by (x, y) 7→ tr(x, y) is a non-degenerate symmetric bilinear form (a ’trace form’).

2 Let C = Cβ. Then Tr(C) = dim(g) Proof. 1 The form β is clearly symmetric and invariant. Then rad(β) is a solvable ideal in g. Since g is semisimple, rad(β) = {0}. Hence, β is non-degenerate.

2 This is the definition of Cβ.

Example 1.7.26. Let g = sl(2, C), and take the standard basis {e, h, f}. Then the dual basis with respect to the trace form is (f, h/2, e), so

C_{trace form} = ef + \frac{h^2}{2} + fe = \begin{pmatrix} 3/2 & 0 \\ 0 & 3/2 \end{pmatrix} ∈ End_g(C²).
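A quick sympy verification of this dual basis and Casimir element (illustration only; e, h, f are the standard sl(2) matrices):

```python
import sympy as sp

e = sp.Matrix([[0, 1], [0, 0]])
h = sp.Matrix([[1, 0], [0, -1]])
f = sp.Matrix([[0, 0], [1, 0]])

basis = [e, h, f]
dual  = [f, h / 2, e]          # dual basis w.r.t. the trace form beta(x, y) = tr(xy)

# check duality: tr(x_i * x^j) = delta_ij
assert all((basis[i] * dual[j]).trace() == (1 if i == j else 0) for i in range(3) for j in range(3))

C = sum((x * xd for x, xd in zip(basis, dual)), sp.zeros(2, 2))
print(C)                        # Matrix([[3/2, 0], [0, 3/2]]); trace(C) = 3 = dim sl(2), as in Lemma 1.7.25
```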

In contrast:

C_K = \frac{1}{4}(ef + fe) + \frac{h^2}{8} = \frac{1}{4} C_{trace form}.

Lemma 1.7.27. Let g be a semisimple finite dimensional Lie algebra over C, and let V be a finite dimensional representation. Then,

V = Vg ⊕ gV where Vg = {x|gx = 0∀g ∈ g} Proof. By induction on dim(V). Without loss of generality, assume V 6= Vg, where ϕ : g → gl(V) is our representation. Now, ϕ(g) ⊂ gl(V) is a Lie subalgebra; it is semisimple since it is the image under a Lie algebra homomorphism of a semisimple Lie algebra. Consider C = Ctrace form (as in Lemma 1.7.25 above). Then V decomposes into a direct sum of generalized eigenspaces for C (here k = C is being used). Since C ∈ Endg(V), these generalized eigenspaces are g subrepresentations. Hence, if there is more than one, we can decompose V = V1 ⊕ V2 and the statement follows by induction. Hence we may assume there is only one generalized eigenspace. Since Tr(C) = dim(g) 6= 0, the eigenvalue is not zero. Hence, V = CV and Vg = {0}. Hence, V = gV = gV ⊕ Vg and we are done. Proof of Weyl’s Theorem (Theorem 1.7.17). Let U ⊆ V be a subrepresenta- tion for a given finite dimensional representation V of g.

To show: U has a complement which is again a subrepresentation.

Remark 1.7.28. Weyl's Theorem doesn't hold for infinite dimensional representations.

Example 1.7.29. Let g = sl(2, C), V = C[x], with

e ↦ d/dx,   h ↦ −2x d/dx,   f ↦ −x² d/dx.

Check: this defines a homomorphism of Lie algebras. This is an irreducible infinite dimensional representation.
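The check can be done mechanically; a small sympy sketch (the operator names are just the assignments above, applied to polynomials of bounded degree):

```python
import sympy as sp

x = sp.symbols('x')

e = lambda p: sp.diff(p, x)
h = lambda p: -2 * x * sp.diff(p, x)
f = lambda p: -x**2 * sp.diff(p, x)

def bracket(a, b):
    """Commutator of two operators acting on polynomials."""
    return lambda p: sp.expand(a(b(p)) - b(a(p)))

coeffs = sp.symbols('c0:6')
p = sum(coeffs[i] * x**i for i in range(6))     # generic polynomial of degree <= 5

assert sp.expand(bracket(h, e)(p) - 2 * e(p)) == 0
assert sp.expand(bracket(h, f)(p) + 2 * f(p)) == 0
assert sp.expand(bracket(e, f)(p) - h(p)) == 0
print("[h,e] = 2e, [h,f] = -2f, [e,f] = h hold on polynomials of degree <= 5")
```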

1.7.2 Classification of irreducible finite dimensional sl(2, C) modules Theorem 1.7.30.

1. There is a 1:1 correspondence

{finite dimensional irreducible sl(2, C)-modules}/≅ ←→ N,
V ↦ dim(V) − 1,   V(n) ↤ n.

2. With respect to the standard basis element h, V(n) decomposes into one dimensional h-eigenspaces.

3.

V(n) ⊗ V(m) ≅ V(n + m) ⊕ V(n + m − 2) ⊕ ··· ⊕ V(m − n)   (for n ≤ m),

e.g. V(n) ⊗ V(1) ≅ V(n + 1) ⊕ V(n − 1) for n ≥ 1, where V(1) is the two dimensional vector representation.

V(0) = trivial representation = C, with x · λ = 0
V(1) = vector representation
V(2) = adjoint representation

Proof of Theorem 1.7.30. Claim: two irreducible sl(2, C)-modules of the same dimension are isomorphic.

Let V be a finite dimensional irreducible sl(2, C)-module. For µ ∈ C, let

Vµ := {v ∈ V : h · v = µv} be the µ-eigenspace for the fixed basis element h ∈ sl(2, C). Then we have

eVµ ⊆ Vµ+2

fVµ ⊆ Vµ−2

Choose µ such that Vµ 6= {0}, and Vµ+2 = 0. Fix v ∈ Vµ − {0}. Then:

h f^i v = (µ − 2i) f^i v for all i ≥ 0,
e f^i v = i(µ − i + 1) f^{i−1} v for all i ≥ 0

(this can be checked by induction on i). Hence the span of {v, fv, ..., f^r v, ...} is a subrepresentation of V, so equal to it by irreducibility of V.

Choose d ∈ N minimal such that f dv = 0 (Vis finite dimensional). Then,

{f^i v : 0 ≤ i ≤ d − 1} is a basis of V. Further,

0 = e f^d v = d(µ − d + 1) f^{d−1} v =⇒ µ = d − 1. Hence µ is determined by the dimension (µ = dim V − 1), hence uniqueness.

For existence: Let V = k[x, y]. Consider the representation given by

e ↦ x ∂/∂y,   h ↦ x ∂/∂x − y ∂/∂y,   f ↦ y ∂/∂x.

Check: this defines an (infinite dimensional) representation. Consider U_d = the subspace spanned by all monomials of degree d, i.e.

U_d := span{ω_i := x^{d−i} y^i : 0 ≤ i ≤ d}.

Then Ud := V(d) is invariant under sl(2, C) action, and dim(Ud) = d + 1. We further have the formulas:

e ω_i = i ω_{i−1},   f ω_i = (d − i) ω_{i+1},   h ω_i = (d − 2i) ω_i.

For 3., look at the dimensions of the h-eigenspaces.
———————————————————————————————
Lecture 6, 27/04/2011.

Theorem 1.7.33 (Levi). Let α : g  l a surjective Lie algebra homomor- phism over C, where l is semisimple. Then there is a split β : l → g i.e. a Lie algebra homomorphism such that α ◦ β = idl

1.8 Universal enveloping algebras

Let g be any k-Lie algebra. Definition 1.8.1. A universal enveloping algebra of g is a pair (U(g), can), where U(g) is a k-algebra, associative, with 1, and can : g → U(g) is a Lie algebra homomorphism such that it satisfies the following universal property: Given a

ϕ g - A  can p ? p∃p ! unitary alg. homo p p U(g)p p

Examples. • g = {0}, then U(g) = k

{0} - A  p 17→p 1 ? p p p p k p p 34 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

• g = k, with trivial/ zero Lie bracket. Then U(g) = k[x]:

ϕ g - A  17→x x7→p ϕ(1) ? p p p p k[x]p p

Remark 1.8.2. Universal enveloping algebras are unique up to unique iso- morphism of algebras. In fact. If g is any k-Lie algebra, and U(g), U0(g) are both universal enveloping algebras, then we have a commutative diagram:

U(g)  can ∃!α p p p? can0 p g - U0(p g) @ @ ∃!β p can @ p @R ?p p U(pg)

Then β ◦ α = id; α ◦ β = id. Hence α, β are . Aim. g-modules are the same as U(g)-modules Proposition 1.8.3. Let g be a k-Lie algebra. Let V be a k-vector space. Then     U(g)-module 1:1 g-module ←→ structures onV structures onV

Proof. Given a U(g)-module structure on V, i.e. an algebra homomorphism

ϕ : U(g) → Endk(V),

Then ϕ is in particular a Lie algebra homomorphism. We get thus an induced map

− can ϕ ϕ : g → U(g) → Endk(V) = gl(V) 1.8. UNIVERSAL ENVELOPING ALGEBRAS 35

which is a composition of Lie algebra homomorphisms, hence a Lie alge- bra homomorphism; and this defines a g-module structure on V. Now, given a g-module structure φ : g → gl(V) on V, we get, by the uni- versal property of the universal enveloping algebra, a unique unitary algebra ∼ homomorphism φ : U(g) → gl(V ) such that the following diagram commutes: φ g - gl(V)  can p ∼p ? p ∃p !φ p p U(g)p p ∼ In particular φ defines a U(g)-module structure on V. Ir follows from the ∼ − − ∼ universal property that ϕ = ϕ and φ = φ.

Corollary 1.8.4. There is an equivalence (even isomorphism) of categories

{U(g)-modules} ∼= {g-modules}

Proof. The corollary is clear on objects by Proposition 1.8.3. On morphisms, it’s just the identity.

Remark 1.8.5. Let g be any k-Lie algebra. Then k is a 1-dimensional (trivial) representation, via

x · λ = 0 ∀x ∈ g, λ ∈ k

This gives a U(g) module, i.e. a unitary algebra homomorphism

U(g) → Endk(k)

This map is called augmentation map or counit. In fact, U(g) is a (non-commutative) in general.

——————————————————————————————— Lecture 7, 04/05/2011.

Theorem 1.8.6. Every k-Lie algebra g (over any field k) has a universal enveloping algebra U(g). 36 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

Def/Prop 1.8.7. Let V be a k-vector space. Then the tensor algebra T_kV is the k-algebra defined as the k-vector space

T_kV := k ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ··· = \bigoplus_{r≥0} V^{⊗r},

with V^{⊗0} := k and multiplication given by

(v_1 ⊗ ··· ⊗ v_r)(w_1 ⊗ ··· ⊗ w_s) := v_1 ⊗ ··· ⊗ v_r ⊗ w_1 ⊗ ··· ⊗ w_s ∈ V^{⊗(r+s)}.

Extending this linearly turns TkV into an associative, unitary k- algebra. Proof. Multiplication is well defined. In fact, if λ ∈ k = V⊗0, then define

λ · (w1 ⊗ · · · ws) = λw1 ⊗ · · · ws = (w1 ⊗ · · · ⊗ ws)λ, and for λ, µ ∈ k, λ · µ = λµ = µ · λ hence multiplication is well defined. ⊗0 To see that TkV is in fact unitary, let 1 ∈ V = k. This is the unit, by the above multiplication rules.

Proposition 1.8.8 (Universal property of T_kV). Let V be a k-vector space and T_kV its tensor algebra. Let A be an associative unitary k-algebra, ϕ : V → A a k-linear map, and j : V → T_kV the embedding of the second summand. Then there exists a unique homomorphism of unitary algebras ϕ̃ : T_kV → A such that ϕ̃ ◦ j = ϕ.

Proof. The algebra TkV is generated as unitary algebra by elements v ∈ V, 1 ∈ k, hence uniqueness (once existence). To show existence, define ∼ ϕ(v1 ⊗ · · · ⊗ vr) = ϕ(v1) ··· ϕ(vr) and extend linearly. This is the required homomorphism. 1.8. UNIVERSAL ENVELOPING ALGEBRAS 37

Let now V := g be the k-vector space underlying a k-Lie algebra g. Consider Tk(g). Let I be the ideal generated by all the elements of the form x ⊗ y − y ⊗ x − [x, y] ∈ Tk(g) for all x, y ∈ g. Then define

U(g) = Tk(g)/I and

j π can : g → Tk(g) → Tk(g)/I = U(g) where π is the canonical projection.

Theorem 1.8.9. Given a k-Lie algebra g, (U(g), can) is its universal enveloping algebra.

Proof. Let A be a k-algebra, associative and unitary. Let ϕ : g → A be a Lie algebra homomorphism. We have the following maps which make the following diagram commute:

can- g U(g) = Tk(g)/I Q Q Q 6 ϕ Q p j Q ? Qs  A ∼ Tk(g) ∃!ϕ pppppppppppp ∼ Claim. The unitary algebra homomorphism ϕ factors through U(g) (i.e. maps I to zero). Note that once the claim is shown we have the desired induced morphism − ϕ : U(g) → A which marked the diagram above still commute. Now, we have for x, y ∈ g,

∼ ϕ(x ⊗ y − y ⊗ x − [x, y]) = ϕ(x)ϕ(y) − ϕ(y)ϕ(x) − ϕ([x, y]) = 0.

Hence the claim is shown. We still have to check that can is a Lie algebra homomorphism. Let x, y ∈ g; then

can([x, y]) = p ◦ j([x, y]) = p([x, y]) = p(x ⊗ y − y ⊗ x) = p(x) ⊗ p(y) − p(y) ⊗ p(x) = [can(x), can(y)]

Poincaré-Birkhoff-Witt Theorem 1.8.10 (concrete version). Let g be a k-Lie algebra and let {x_i : i ∈ I} be a basis indexed by a totally ordered set (I, <). Then the monomials x̄_{i_1} ··· x̄_{i_r}, for r ∈ N, with x̄_i = can(x_i) and i_1 ≤ ··· ≤ i_r (for r = 0 there is a unique empty monomial, denoted ∅ or 1), form a k-basis of U(g).

Before the proof, we will state the Theorem in a more ’elegant’ way. For this, we need the following:

Definition 1.8.11. Let A be a k-algebra. A filtration of A is a sequence A_{≤0} ⊆ A_{≤1} ⊆ ··· ⊆ A of vector subspaces such that:

1. A = \bigcup_{i≥0} A_{≤i}

2. A_{≤i} A_{≤j} ⊆ A_{≤(i+j)}.

If A has a filtration, then it is called a filtered algebra.

Definition 1.8.12. Let A be a filtered algebra. Then the multiplication in A induces a map

A_{≤i}/A_{≤(i−1)} ⊗ A_{≤j}/A_{≤(j−1)} → A_{≤(i+j)}/A_{≤(i+j−1)}.

Hence, the associated algebra defined as

gr(A) = ⊕1≥0A≤i/A≤(i−1) becomes an associative k- algebra with multiplication by the above induced map.

Remark 1.8.13. The associated graded algebra gr(A) = \bigoplus_{i≥0} A_i (with A_i = A_{≤i}/A_{≤(i−1)}) is a graded algebra.

Example 1.8.14. Consider A = k[X], the polynomial algebra in one variable. Define A_{≤i} to be the span of all polynomials of degree less than or equal to i, i.e.

A_{≤i} = span{1, x, ..., x^i}.

Then A_{≤i} A_{≤j} ⊆ A_{≤(i+j)} and A = \bigcup_{i≥0} A_{≤i}, so this defines a filtration of A. Note: in the remark above, A_i denotes the homogeneous polynomials of degree i, and A = \bigoplus_{i≥0} A_i with A_i A_j ⊆ A_{i+j}.

Remark 1.8.15. Given a graded algebra, i.e. A = \bigoplus_{i≥0} A_i with A_i vector subspaces such that A_i A_j ⊆ A_{i+j}, then A_{≤j} := \bigoplus_{0≤i≤j} A_i defines a filtration on A.

Definition 1.8.16. Let V be a k-vector space, T_kV its tensor algebra. Define S_k(V) := T_kV/J, where J is the ideal generated by x ⊗ y − y ⊗ x for x, y ∈ V. This is called the symmetric algebra associated to V (it's commutative!).

Exercise 1.8.17. If dim(V) = n < ∞, then S(V) ≅ k[x_1, ..., x_n] as k-algebras. In particular, it has a natural filtration as in Example 1.8.14.

Now, the more ’elegant’ statement.

Theorem 1.8.18 (Poincar´e-Birkhoff-Witt’abstract version’). If Tk(g) is filtered by T≤i = span{x1 ⊗ · · · ⊗ xj : j ≤ i} and U(g) is filtered by p(Tk(g)≤i) = U(g)≤i then the following two maps have the same :

proj Sk(g) = Tk(g) ←− Tk(g) = gr(Tk)(g) → gr(U(g)) ∼ In particular, gr(U(g)) = Sk(g).

Proof of concrete version.

− − 1. The monomials xi1 ··· xir ; i1 ≤ · · · ≤ ir span U(g) = T(g)/I. It is hence generated as an algebra by elements from g, hence as vec- − − tor space by elements xi1 ··· xir ; i1 ≤ · · · ≤ ir (with not necessarily 40 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

increasing indices) where we write short xy for x ⊗ y. Using this nota- tion, since, in U(g), xy − yx = [x, y] for all x, y ∈ p(g) ⊂ U(g). This means that for a σ ∈ Sr such that σ(i1) ≤ · · · ≤ σ(ir), we have

xi1 ··· x1r = xσ(i1) ··· xσ(ir) + rest;

where rest ∈ U(g)_{≤(r−1)}. Now for r = 0, the empty monomial ∅ = 1 is in the spanning set, and every element of U(g)_{≤0} is of the form λ·1 for λ ∈ k. We are then done by induction.
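This rewriting step can be made completely explicit for g = sl(2, C). The following sketch (dictionary and function names are ad hoc, not from the notes) normal-orders words in the generators, ordered f < h < e, using the relations [h, e] = 2e, [h, f] = −2f, [e, f] = h:

```python
from collections import defaultdict

ORDER = {'f': 0, 'h': 1, 'e': 2}
# [x, y] for each out-of-order adjacent pair (x, y), as a linear combination of words
BRACKET = {('h', 'f'): {('f',): -2},     # hf = fh - 2f
           ('e', 'f'): {('h',): 1},      # ef = fe + h
           ('e', 'h'): {('e',): -2}}     # eh = he - 2e

def normal_order(element):
    """Rewrite a linear combination {word: coeff} into PBW-ordered form f^a h^b e^c."""
    result, todo = defaultdict(int), dict(element)
    while todo:
        word, coeff = todo.popitem()
        if coeff == 0:
            continue
        pos = next((i for i in range(len(word) - 1)
                    if ORDER[word[i]] > ORDER[word[i + 1]]), None)
        if pos is None:                                  # already ordered
            result[word] += coeff
            continue
        x, y = word[pos], word[pos + 1]
        swapped = word[:pos] + (y, x) + word[pos + 2:]   # xy = yx + [x, y]
        todo[swapped] = todo.get(swapped, 0) + coeff
        for bracket_word, c in BRACKET[(x, y)].items():
            lower = word[:pos] + bracket_word + word[pos + 2:]
            todo[lower] = todo.get(lower, 0) + coeff * c
    return dict(result)

print(normal_order({('e', 'f', 'f'): 1}))   # e f f = f f e + 2 f h - 2 f
```

For example, e·f·f normal-orders to f²e + 2fh − 2f, which is exactly the computation used for the Verma module M(0) in Section 2.2.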

− − 2. The monomials xi1 ··· xir ; i1 ≤ · · · ≤ ir are linearly independent.

ϕ Main idea: Construct a representation U(g) −→ Endk(S) where S = − − k[xi : i ∈ I] such that ϕ(xi1 ··· xir ) i1 ≤ · · · ≤ ir for i1 ≤ · · · ≤ ir are linearly independent endomorphisms. (Proof of this next time!)

Interesting consequence of PBW Corollary 1.8.19. Assume a k-Lie algebra g may be written as g = n ⊕ b, where the sum is as vector spaces, and assume n, b are both Lie subalgebras of g. Then, the multiplication

U(n) ⊗ U(b) −→ U(g)

p(n1 ⊗ · · · ⊗ nr) ⊗ p(b1 ⊗ · · · ⊗ bs) 7−→ p(n1 ⊗ · · · ⊗ nr ⊗ b1 ⊗ · · · ⊗ bs) defines an isomorphism of U(n), U(b)-bimodules. ——————————————————————————————— Lecture 9, 05/05/2011 Recall Corollary 1.8.19. The isomorphism given is, in general, not an isomorphism of algebras. 0 ∗ x 0 Example 1.8.20. Let g = sl(2, ); n = = e, h = = C 0 0 C 0 y 0 0 h, n = = f. We have C − ∗ 0 C

g = n− ⊕ h ⊕ n 1.8. UNIVERSAL ENVELOPING ALGEBRAS 41

However, multiplication cannot be an algebra homomorphism, because, on one side U(n−) ⊗ U(h) ⊗ U(n) = S(n−) ⊗ S(h) ⊗ S(n) is commutative, and U(g) is not. Proof of Corollary 1.8.19. To see that it’s an isomorphism of vector spaces, take a basis of n, {xi : i ∈ I}, and a basis of b, {xj : j ∈ J} so that {xk : k ∈ I∪ J} is a basis of g. Choose an ordering on I∪J such that ik < jm, and the result is a direct consequence of the PBW Theorem. To see that multiplication is an isomorphism of (U(n), U(b))-bimodules, let x ∈ n, can(x) ∈ U(n) (these elements generate U(n) as an algebra). Let

− − X can(x)xi1 ··· xir = αbb b∈PBW basis Then,

− − − − X − − m(can(x)xi1 ··· xir ⊗ xj1 ··· xjs ) = m( αbb ⊗ xj1 ··· xjs ) b X − − = αbbxj1 ··· xjs b − − − − = can(x)(xi1 ··· xir xj1 ··· xjs ) Hence, multiplication is an isomorphism of left U(n)-modules. Multiplication is analogously an isomorphism of right U(b)-modules. Remark 1.8.21. In particular, if M is a U(b) module, then U(g) ⊗ M is a left U(n)-module, even a U(g)-module. Corollary 1.8.22. The canonical map can : g → U(g) from the definition of U(g) is injective, hence g is a vector subspace of U(g).

− Proof. The elements xi = can(xi) where {xi : i ∈ I} is a basis of g are part of a PBW basis, so a basis of g is mapped to linearly independent vectors in U(g). Rest of proof of PBW ’concrete version’. − − To show: monomials xi1 ··· xir are linearly independent!

Fix a basis {xi : i ∈ I} for g. Let S = k[xi] for i ∈ I. For λ = (λ1, ··· λr) ∈ r I , denote Zλ := xλ1···λr ∈ S; r is called the length of λ, denoted |λ| = 42 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS

r, (λ1 ··· λr) increasing if λ1 ≤ · · · λr. Now, we know that {Zλ : λ increasing} r is a basis of S, and S is filtered by s≤r = span(zλ : |λ| ≤ r). For λ ∈ I , we say j ≤ λ if j ≤ λi for all i ≤ |λ|.

Lemma 1.8.23. For each m ∈ N, there exists a linear map

ϕm : g ⊗ S≤m −→ S≤(m+1)

x ⊗ s 7−→ x · s = ϕm(x ⊗ s) such that the following holds:

1. ϕm−1 is the restriction of ϕm to g ⊗ S≤(m − 1)

r 2. xi · Zλ = xixλ1 ··· xλr = ZiZλ; λ ∈ I , i ∈ I; i ≤ λ

m 3. xi · Zλ − ZiZλ ∈ S≤m for λ ∈ I , i ∈ I.

4. x_i · x_j · y − x_j · x_i · y = [x_i, x_j] · y, for all y ∈ S_{≤(m−1)}.

This lemma implies the Theorem: it gives a representation ( follows from 4.) ϕ : g → Endk(S) which is well defined by 1., and by the universal ∼ property of U(g), we get an algebra homomorphism ϕ : U(g) → Endk(S), ∼ ∼ ∼ and we get ϕ(xi1 ··· xir )(1) = zi1 ··· zir ; i1 ≤ · · · ir; the latter are linearly ∼ ∼ ∼ independent, hence xi1 ··· xir must be linearly independent because ϕ is an algebra homomorphism.

——————————————————————————————— Lecture 10, 9.05.2011

Lemma 1.8.24. Let A, B be filtered k-algebras; denote by {A≤i}, {B≤j} the corresponding filtrations. If f :A → B is an algebra homomorphism such that f(A≤r) ⊆ B≤r, then we get an induced algebra homomorphism

gr(f) : gr(A) → gr(B)

Proof. We get induced maps fi :Ai → Bi.

In particular, we get a map gr(p):Tk(g) → gr(U(g)) for p :Tk(g) → U(g) ∼ the canonical projection (gr(Tk) = Tk(g)). 1.8. UNIVERSAL ENVELOPING ALGEBRAS 43

Proof of Theorem 1.8.18. Let J =< x ⊗ y − y ⊗ x : x, y ∈ g >= ker(α). To show: J ⊂ ker(gr(p)). Then get induced algebra homomorphism

∼ p : S(g) → gr(U(g))

∼ such that p ◦ α = gr(p). We have, for x, y ∈ g,

gr(o)(x ⊗ y − y ⊗ x) = [x, y] = 0 ∈ gr(U(g))2

∼ ∼ Hence, J ⊂ ker(gr(p)), so p exists. Besides, p is an isomorphism because it maps a basis given by ordered monomials in an ordered basis of g viewed as elements in S(g) to the corresponding PBW basis of ordered monomials in a basis of g. 44 CHAPTER 1. INTRODUCTION TO LIE ALGEBRAS Chapter 2

Representations of Lie algebras

Aim. Understand representations of g or U(g)-modules. In general: Hopeless! There is not even a classification of irreducible modules (up to isomorphism) for semi-simple complex Lie algebras (apart from g = sl(2, C), sl(3, C))

Reference for sl(2, C): V. Marzorchuk: Lectures on sl(2, C)-modules.

However: it is possible to classify finite dimensional irreducible repre- sentation and so called irreducible highest weight representations. To set this up we need Verma Modules. We will start constructing representations of g from representations of Lie subalgebras of g and vice versa.

2.1 Constructing new representations

2.1.1 Pull-back and restriction Let g, b be Lie algebras (over some field k). Let α : b → g be a Lie algebra homomorphism. Let ρ : g → gl(V) for some k-vector space V be a represen- tation of g. Then ρ ◦ α : b → gl(V) is a representation of b, by pulling ρ back to b.

Special case: If b is a Lie subalgebra of g, and α = incl is the inclusion, then pulling pack is nothing else than restricting. Denote

g resb V = ρ ◦ incl


Note: The map incl : b → g induces, by universal property, a map − incl : U(b) → U(g). By PBW Theorem, this is an inclusion (take a basis of b g and extend to a basis of g). Then resb V corresponds to the usual restriction of the (usual) U(g) module V to the U(b) module V.

2.1.2 Induction Let g be a k-Lie algebra. Let b ⊂ g be a Lie subalgebra. Let ρ : b → gl(V) be a representation, V is the corresponding U(b) module. Then

U(g) indU(b) := U(g) ⊗U(b) V is a U(g)-module, it is called the induced module from U(b) to U(g), and it is a (U(b), U(g))- bimodule by multiplying with U(g) from the left and with U(b) on the right. In particular U(g) ⊗U(b) V is a left U(g)-module.

Proposition 2.1.1 (Universal property of Induction). Let g be a k- Lie algebra. Let b ⊂ g be a Lie subalgebra. Let M, N be U(b) modules, and f :M → N be a U(b)-. Then there exists a unique − f : U(g) ⊗U(b) M → N such that the following diagram commutes:

U(g) ⊗U(b) M

6 − ϕ p f p p p p p R- M p p f p p where ϕ(m) = 1 ⊗ m.

Proof. The map f should satisfy

− − − f(u ⊗ m) = f(u · 1 ⊗ m) = uf(1 ⊗ m) = uf(m)

We can thus define it in this way, extending linearly. The map is hence also automatically unique. − − − To see that f is well defined, we must check: f(ub ⊗ m) = f(u ⊗ bm) for all 2.1. CONSTRUCTING NEW REPRESENTATIONS 47 b ∈ b (since b generates U(b) as an algebra it is enough to take b ∈ b instead of b ∈ U(b)). By definition we have

− − − f(ub ⊗ m) = (ub)f(m) = u(bf(m)) = f(u ⊗ bm)

Proposition 2.1.2 (Adjointness of ⊗ and Hom). There are natural isomorphisms of vector spaces, for M ∈ U(b)-mod and N ∈ U(g)-mod,

Hom_{U(g)}(U(g) ⊗_{U(b)} M, N) ≅ Hom_{U(b)}(M, res^{U(g)}_{U(b)} N) ≅ Hom_{U(b)}(M, Hom_{U(g)}(U(g), N)),

where the mutually inverse maps Φ : f ↦ f̃ and Φ′ : g ↦ g° realizing the outer isomorphism are given by

f̃(m)(u) := f(u ⊗ m) for u ∈ U(g), m ∈ M,
g°(u ⊗ m) := g(m)(u).

Remark 2.1.3. The isomorphism given by Φ, Φ0 holds in general. Let R, S be rings, and let X be a (R, S)-bimodule, M and S-module, N an R-module. Then, ∼ HomR(X ⊗S M, N) = HomS(M, HomR(X, N)) and the maps and proof are exactly the same as in Proposition 2.1.2. Proof. We have: ∼ The map f is a U(g)-module homomorphism:

∼ ∼ f(m)(u1u2) = f(u1u2 ⊗ m)

∼ The map f is a U(b)-module homomorphism:

∼ ∼ ∼ ∼ f(bm)(u) = f(u⊗bm) = f(ub⊗m) = f(m)(ub) = (bf(m))(u) = u1f(u2⊗m) = u1f(m)(u2) 48 CHAPTER 2. REPRESENTATIONS OF LIE ALGEBRAS

The maps Φ, Φ0 are inverse to each other:

◦ ∼ ∼ f(u ⊗ m) = f(m)(u) = f(u ⊗ m)

∼ ◦ ◦ g(m)(u) = g(u ⊗ m) = g(m)(u) ◦ The map g is well defined:

◦ ◦ g(ub ⊗ m) = g(m)(ub) = bg(m)(u) = g(bm)(u) = g(u ⊗ bm)

◦ The map g is a U(g)-module homomorphism:

◦ g(u1u2 ⊗ m) = g(m)(u1u2) = u1g(m)(u2)

Hence we have an isomorphism ∼ HomU(g)(U(g) ⊗U(b) M, N) = HomU(b)(M, HomU(g)(U(g), N))

Now, we still need to check: Claim.

Hom_{U(g)}(U(g), N) ≅ res^{U(g)}_{U(b)} N,  f ↦ f(1), as U(b)-modules.

Let f1, f2 ∈ HomU(b)(U(g), N), such that f1(1) = f2(1), then f1 = f2 because f1(u) = f1(u1) = uf1(1) = uf2(1) = f2(u) since they are both homomorphisms of U(g)-modules. Hence it is injective, and it is clearly surjective.

Example 2.1.4. Let g = gl(n, k) or g = sl(n, k). Then there is a decompo- sition into Lie subalgebras:

g = n ⊕ h ⊕ n+

Where h consists of all the diagonal matrices, n of strictly lower diagonal matrices, and n+ of strictly upper diagonal ones. 2.2. VERMA MODULES 49

Lemma 2.1.5. If h is an abelian (commutative) Lie algebra over C, then every irreducible U(h)-module is one dimensional and the action is given by h·v = λ(h)·v for h ∈ h, v 6= 0 ∈ V for some λ ∈ h∗. Denote the corresponding module by Cλ Next, we will study the (Verma) modules :

M(λ) := U(g) ⊗_{U(b)} C_λ = Ind^g_b C_λ.

Note that the above construction means extending the one dimensional representation C_λ to b = h ⊕ n^+ by letting n · v = 0 for all n ∈ n^+.
———————————————————————————————
Lecture 11, 11.05.2011

2.2 Verma Modules

Remark 2.2.1.

• Good universal properties

• Easy to compare dimensions of h-eigenspaces (character formulas)

• Every finite dimensional irreducible g- module for g a semisimple or is isomorphic to a unique quotient of some .

Definition 2.2.2. Let g = gl(n, k) or g = sl(n, k), k any field. Let h = ∗ { diagonal matrices} ⊆ g. Let λ ∈ h . Let kλ be the one dimensional (irreducible) h-module given by h · v = λ(h)v for h ∈ h, v ∈ kλ. Extend this to a module for b = { upper triangular matrices} ⊂ g by n · v = 0 whenever n is strictly upper triangular. Then

M(λ) = Ind^g_b k_λ = U(g) ⊗_{U(b)} k_λ

is called the Verma module with highest weight λ.

Theorem 2.2.3. (Properties of Verma Modules) ∼ 1. Any Verma module M(λ) is infinite dimensional; in fact M(λ) = U(n−), where n− = { strictly lower triangular matrices} as vector spaces; even as left U(n−)-modules.

2. M(λ) = \bigoplus_{µ∈h^*} M(λ)_µ, where M(λ)_µ is the weight space of M(λ) for the weight µ, i.e.

M(λ)µ := {u ∈ M(λ): h · u = µ(h)u ∀h ∈ h}

3. M(λ) has a unique maximal proper (by inclusion) submodule N. In particular, M(λ)/N is an irreducible U(g)-module, denoted L(λ).

Example 2.2.4. Let g = sl(2, C), where h = span{h} with h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, so h^* is one dimensional. Identify h^* ≅ C via λ ↦ λ(h). Consider λ = 0. Then

M(0) = U(g) ⊗_{U(b)} C ≅ C[f],   f^i h^j e^k ⊗ v ↦ 0 if k > 0 or j > 0, and f^i h^j e^k ⊗ v ↦ f^i if k = j = 0,

is an isomorphism of left U(n_−)-modules, since

0 if k > 0 since e ∈ n f ihjek ⊗ v = f i ⊗ hjekv = + 0 if k = 0 since λ = 0

Hence, by the PBW Theorem, the monomials f i ⊗1 form a basis of M(0). We calculate the action of e, h:

e(f ⊗ 1) = ef ⊗ 1 = fe ⊗ 1 + h ⊗ 1 = 1 ⊗ h · 1 = λ(h) = 0; e(1 ⊗ 1) = e ⊗ 1 = 1 ⊗ e · 1 = 0 e(f 2 ⊗ 1) = ef 2 ⊗ 1 = fef ⊗ 1 + hf ⊗ 1 = fh ⊗ 1 − 2f ⊗ 1 = −2f ⊗ 1

Proceeding similarly for further powers of f and for the action of h, we get the following picture:

1 ⊗ 1,  f ⊗ 1,  f² ⊗ 1,  ···   (basis of M(0); f shifts one step to the right, e and h act as computed above)
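The recursion behind these computations can be automated. A small sympy sketch (helper names are ad hoc; the basis vector f^i ⊗ 1 of M(λ) is encoded by the integer i), using only e·(1⊗1) = 0, h·(1⊗1) = λ(1⊗1) and the relations [e, f] = h, [h, f] = −2f:

```python
import sympy as sp

lam = sp.symbols('lambda')

def add(vec, i, c):
    vec[i] = sp.expand(vec.get(i, 0) + c)

def h_action(vec):
    """h·(f^i ⊗ 1) = (lam - 2i) f^i ⊗ 1, from [h, f] = -2f and h·(1⊗1) = lam·(1⊗1)."""
    return {i: sp.expand(c * (lam - 2 * i)) for i, c in vec.items()}

def e_action(vec):
    """e·(f^i ⊗ 1) via the recursion e·(f·w) = f·(e·w) + h·w, with e·(1⊗1) = 0."""
    out = {}
    for i, c in vec.items():
        if i == 0:
            continue
        inner = {i - 1: 1}                      # w = f^{i-1} ⊗ 1
        for j, cj in e_action(inner).items():   # f·(e·w): shift degree up by one
            add(out, j + 1, c * cj)
        for j, cj in h_action(inner).items():   # + h·w
            add(out, j, c * cj)
    return out

for i in range(4):
    print(i, e_action({i: 1}))
# prints e(f^i ⊗ 1) = i*(lambda - i + 1) f^{i-1} ⊗ 1; at lambda = 0, e(f^2 ⊗ 1) = -2 f ⊗ 1 as above
```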

Note. The f^i ⊗ 1, i ≥ 0, form an eigenbasis for the action of h.

Note. M(0)/N ∼= Trivial one dimensional U(g)-module, i.e., xv = 0 ∀x ∈ g, ∀v ∈ M(0)/N.

Fact 2.2.5. N is also irreducible! Actually, N ∼= M(−2)

Note. M(0) ≇ N ⊕ M(0)/N, because the surjection M(0) ↠ M(0)/N doesn't split. Alternatively, End_{U(g)}(M(0)) = C, because every endomorphism ϕ is determined by ϕ(1 ⊗ 1), as ϕ(u ⊗ 1) = u · ϕ(1 ⊗ 1), and ϕ(1 ⊗ 1) belongs to the 0-eigenspace for h, which is one dimensional and spanned by 1 ⊗ 1; hence ϕ(1 ⊗ 1) = c(1 ⊗ 1) for some scalar c. In particular, Weyl's Theorem doesn't hold for infinite dimensional modules.

Proof of Theorem 2.2.3. 1. To see this, recall that U(g) ≅ U(n_−) ⊗ U(b) is an isomorphism of (U(n_−), U(b))-bimodules. We further have:

M(λ) = U(g) ⊗_{U(b)} k_λ ≅ (U(n_−) ⊗ U(b)) ⊗_{U(b)} k_λ ≅ U(n_−) ⊗ k_λ ≅ U(n_−),

all as U(n−)-modules. ∼ 2. From 1., we know M(λ) = U(n−) ⊗ kλ as a vector space. ∼ ∼ Claim. Take a PBW basis of U(n−) given by monomials xı1 ··· xir for i1 ≤ · · · ir where {xi : i ∈ I} is a basis of n−. This basis forms an eigenbasis of M(λ) for the action of h. For the “empty monomial” 1, we have 1 ⊗ 1 and h · (1 ⊗ 1) = 1 ⊗ h1 = 1 ⊗ λ(h)1 ∈ M(λ)λ, so it is a simultaneous eigenvector for h. Now consider the particular basis {Eij : i ≤ j} of n− where Eij denotes the matrix with (Eij)ij = 1 and zero otherwise. Now assume g = gl(n, k) (for sl(n, k), excercise!). Then a basis of h is given by {Eii}. Compute ∼ ∼ Eii · xı1 ··· xir recursively by using the formulas:

Eii(EabY) ⊗ 1 = EabEiiY ⊗ 1 + [Eii, Eab]Y ⊗ 1

[Eii, Eab] = δaiEab − δibEab

Hence, we get

Eii(EabY) ⊗ 1 = EabEiiY ⊗ 1 + scalars(EabY ⊗ 1)

Hence by induction we get a weight/eigenspace decomposition as wanted. 52 CHAPTER 2. REPRESENTATIONS OF LIE ALGEBRAS

3.

Lemma 2.2.6. Let M be a U(g)-module (g) as above and M = ⊕ Mµ µ∈h∗ be a weight space decomposition. Let N be a U(g)-submodule. Then N also has a weight space decomposition N = ⊕ Mµ. µ∈h∗

Pn Proof. Let x ∈ N, x 6= 0. Then x = i=1 αµi uµi where uµi ∈ Mµi . We

claim that all of the uµi ’s are contained in N. Choose x ∈ N such that n in the last expression is minimal with respect to x belonging to N. Let h ∈ h. Then clearly µ1(h)x, h · x ∈ N. Hence,

n X (hx − µ x) = u0 ∈ N 1 µi i=2

with u0 = (µ (h) − µ (h))u ∈ M . µi 1 i µi µi

3. Let U be a submodule of M(λ). Then U contains M(λ)λ if and only if U = M(λ). Hence the sum of all proper submodules is a proper submodule, hence the existence of one unique maximal submodule I.

2.3 Abstract Jordan Decomposition

Aim. Construct a triangular decomposition

g = n− ⊕ h ⊕ n+ for semisimple/ reductive Lie algebras.

Theorem 2.3.1. Let g be a semisimple complex Lie algebra, x ∈ g. Then:

1. There exists a unique decomposition x = s + n, where ad(s) is diago- nalizable/semisimple, ad(n) is nilpotent and [s, n] = 0.

2. If ρ : g → gl(V) is a finite dimensional representation, then ρ(s) = ρ(x)s, ρ(n) = ρ(x)n (i.e. ρ(x) = ρ(x)s + ρ(x)n) is it’s concrete Jordan decomposition. 2.3. ABSTRACT JORDAN DECOMPOSITION 53

3. Let g, g0 be semisimple complex Lie algebras, and ϕ : g → g0 be a Lie algebra homomorphism. Let x ∈ g, x = s + n as in 1. Then, ρ(x) = ρ(s) + ρ(n) satisfies the conditions in 1. and hence is the abstract Jordan decomposition of ϕ(x). Lemma 2.3.2. Let V be a finite dimensional vector space over C, and g ⊆ gl(V) a semisimple Lie subalgebra. If x = xs + xn is the Jordan de- composition, then xs, xn ∈ g Proof : later. Proof. 1. Let x ∈ g. Consider the concrete Jordan decomposition

ad(x) = ad(x)s + ad(x)n. Since g is semisimple, ad is injective. Hence ad(g) ⊂ gl(g) is a semisim- ple Lie subalgebra. By the above Lemma, ad(x)s, ad(x)n ∈ ad(g). Hence there exist elements s, n ∈ g such that ad(s) = ad(x)s, ad(n) = ad(x)n. By definition, x = s + n and by definition of the concrete Jordan decomposition, s is diagonalizable and n is nilpotent. Also by definition of the concrete Jordan decomposition we have:

ad([s, n]) = [ad(x)s, ad(x)n] = 0 Hence, since ad is injective we have [s, n] = 0. ———————————————————————————————

Lecture 11, 16.05.2011 2. Consider the following diagram

g α - ρ(g) incl- gl(V)

(ad(x))s ad(s) ad(ρ(s)) (ad(ρ))sad(ρ(s)) (ad(ρ))s

? ? ? g - ρ(g) - gl(V) ρ incl

We know that (ad(x))s = ad(s). Since the leftmost diagram commutes for the boldface maps and the other maps as well, and since α is sur- jective, (it is ρ with restricted target), we have ρ(s) = ρ(x)s. Similarly, ρ(n) = ρ(x)n. 54 CHAPTER 2. REPRESENTATIONS OF LIE ALGEBRAS

3. exercise

Proof of Lemma 2.3.2. Let V be a finite dimensional complex vector space, g ⊆ gl(V) a semisimple Lie subalgebra. Let x ∈ g, and x = xs + xn its concrete Jordan decomposition. To show: xs, xn ∈ g. Consider the following set:    a)[y, g] ⊆ g  D := y ∈ gl(V) : b) yW ⊆ W ∀g − invariantW ⊆ V  c) tr(y)|W = 0  Claim 1. The set D is a Lie subalgebra of gl(V). This is clear. Just take y, y0 ∈ D, and it is clear (using Jacobi identity) that [y, y0] ∈ D.

Claim 2. If y ∈ D then ys, yn ∈ D.

a) Let y ∈ D, then ad(y) ∈ gl(g), hence, (ad(y))s = ad(ys) ∈ gl(g) and (ad(y))n = ad(yn) ∈ gl(g).

b) The maps ys, yn preserve every subspace W ⊂ V preserved by y.

c) Follows since yn is nilpotent and hence tr(yn) = 0. Claim 3. g ⊆ D Let y ∈ g then: a)[ y, g] ⊆ g b) yW ⊂ W If W ⊂ V is g-invariant. c) pendiente

Claim 4. g C D This is a consequence of a).

Hence, D = g ⊕ I for some ideal I (Excercise sheet). In particular: [g, I] = {0} (follows from a) and the fact that I is an ideal.) To show: I = {0}. Let ϕ z ∈ I. Take a g-invariant subspace W ⊂ V. Then, w 7−→ zw is a g-module homomorphism since for x ∈ g,[x, z] = 0 (direct sum), hence ϕ(xw) − xϕ(w) = zxw − xzw = 0 2.3. ABSTRACT JORDAN DECOMPOSITION 55

Now, decompose V into a direct sum of irreducible g-modules (Weyl’s The- orem): r M V = Wi i=1

By Schur’s Lemma, the action of z on Wi is given by a scalar λi ∈ C. Terminology:

x ∈ g ⊆ gl(V), x = xs + xn

The element x is called nilpotent or semisimple if xs = 0, respectively xn = 0. Note. {semisimple elements}∩{nilpotent elements} = 0 56 CHAPTER 2. REPRESENTATIONS OF LIE ALGEBRAS Chapter 3

Structure Theory of Semisimple Complex Lie Algebras

Definition 3.0.3. Let g be a complex semisimple finite dimensional Lie algebra. A Lie subalgebra h ⊆ g is called a if: 1. The Lie algebra h is abelian and consists only of semisimple elements. 2. It is maximal, with respect to inclusion, with this property. Remark 3.0.4. In a more general setting, i.e. if h ⊆ g are both finite dimensional Lie algebras over any field k, then h is called a Cartan subalgebra if it is nilpotent. Remark 3.0.5. In the setting of Definition 3.0.3, the always exists a Cartan subalgebra. Proof. Assume that g contains only nilpotent elements. Then, by Engel’s Theorem, g is nilpotent. Since g is semisimple, then it must contain at least one semisimple element different from zero. Claim. If h ⊂ g consists of only ad-semisimple elements, then h is abelian. Assume it is not abelian, then there exists x ∈ h such that ad(x)(y) 6= 0 for some y ∈ h. Hence, for some z ∈ h, ad(x)(z) = λz for some λ ∈ C, λ 6= 0. However, ad2(z)(x) = [z, [z, x]] = [z, λz] = 0 which is a contradiction, because if ad2(x) = 0, then so is ad(y)(x).

Example 3.0.6. For g = sl(n, C), then h = { diagonal matrices with trace zero} is a Cartan subalgebra (Excercise!).

57 58CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS 3.1 Root Space Decomposition

Definition 3.1.1. Let g be a semisimple complex finite dimensional Lie algebra, and let h ⊂ g be a Cartan subalgebra. Then, since h is abelian and contains only diagonalizable elements, there exists a simultaneous eigenspace decomposition of g with respect to the action of h via ad : M g = gλ λ∈h∗ where gλ := {x ∈ g : ad(h)(x) = λ(h)x ∀h ∈ h}. Now, if λ 6= 0 and gλ 6= {0}, then λ ∈ h∗ is called a root of g. The set R of roots is called a root system. The eigenspaces gλ for λ ∈ R are called root spaces. Hence we have

M g = g0 gλ λ∈R where g0 = {x ∈ g :[x, h] = 0} is the centralizer of h ∈ g.

Example 3.1.2. Let g = sl(n, C), where h ⊂ g is the “standard” Cartan subalgebra of all diagonal matrices. Denote by  ∈ h∗ the linear map defined   i a1 ··· 0  .  by i  .  = ai. Then, 0 ··· an M g = h ⊕ gλ λ∈R

R = {i − j : i 6= j}

A basis for the rootspace g − is given by the matrix Eij, since for h =   i j a1 ··· 0  .  h  .  ∈ , 0 ··· an

[h, Eij] = aiEij − ajEij = (i − j)(h)Eij.

Further, {Eii − Ei+1,i+1} is a basis of h. 3.1. ROOT SPACE DECOMPOSITION 59

Note. g0 = h Note. Root spaces are one dimensional!

Picture of root systems MISSING

Theorem 3.1.3. Let g be a semisimple complex finite dimensional Lie alge- bra, h ⊂ g a Cartan subalgebra, and R ⊂ h∗ the corresponding root system. Then,

1. g0 = h

2. dim(gλ) = 1 ∀λ ∈ R

3. For every α ∈ R, there exist xα ∈ gα, yα ∈ g−α, hα ∈ [gα, g−α]

4. For α ∈ R, if µλ ∈ R for some µ ∈ C, then µ ∈ {±1} such that the map

sl(2, C) −→ g e 7−→ xα

f 7−→ yα

h 7−→ hα

is an injective Lie algebra homomorphism.

5.[ gα, gβ] = gα+β for any α, β ∈ R such that α + β ∈ R.

Remark 3.1.4. The triples (xα, hα, yα) for α ∈ R are called “sl(2, C) triples” of g.

Theorem 3.1.5 (Jacobson Morosov).

To prepare for the proof of Theorem 3.1.3 we begin with a

Lemma 3.1.6.

1.[ gλ, gµ] ⊆ gλ+µ

2.K( gλ, gµ) = 0 unless λ = −µ

3.K |g0×g0 is non degenerate. 60CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Proof. Let x ∈ gλy ∈ gµ. 1. For h ∈ h,

[h, [x, y]] = −[x, [y, h]]−[y, [h, x]] = µ(h)[x, y]+λ(h)[x, y] = (µ+λ)(h)[x, y].

2. We have ad(x) ad(y)(gν) ⊂ gλ+µ+ν Hence, since R is finite, ad(x) ad(y) is nilpotent if λ + µ 6= 0, so tr(ad(x) ad(y)) = 0 unless λ = −µ.

3.K( gα, g0) = 0 by 2. Hence, since g is semisimple, K|g0×g0 is non degen- erate.

——————————————————————————————— Lecture 13, 18.05.2011 Proof of Theorem 3.1.3.

Proof of 1. Since h is abelian then h ⊆ g0. Let x ∈ g0, and let x = s+n be it’s abstract Jordan Decomposition. By the properties of Jordan Decomposition, we have tha since ad(x)(h) ⊂ {0}, then also ad(s)(h) = ad(x)s(h) ⊂ {0} and ad(n)(h) = ad(x)n(h) ⊂ {0}. Hence, if x = s + n ∈ g0, then s, n ∈ g0. By maximality of h, this means that s ∈ h. In particular ad(s) = 0, and hence ad(x) = ad(n). By Engel’s Theorem, we get

Claim. The Lie algebra g0 is nilpotent. Hence, by Lie’s Theorem,   0 · · · ∗   g  ..  ad( 0) ⊆  .   0 ··· 0 

Assume there is z ∈ g0 which is ad-nilpotent, then K(z, g0) = 0. By Lemma

3.1.6, we know that K|g0×g0 is non degenerate. Hence z = 0. Hence, g0 is nilpotent, and thus x = s ∈ h. Remark 3.1.7. For g a semisimple finite dimensional complex Lie algebra, a Cartan subalgebra is a Cartan subalgebra in the general definition. 3.1. ROOT SPACE DECOMPOSITION 61

Lemma 3.1.8. Set up as above. Let α ∈ R. Then: a. −α ∈ R

b. dim([gα, g−α]) = 1

c. α doesn’t vanish on the line [gα, g−α] ⊆ g0 = h

Proof. If −α∈ / R, then g−α = {0}; hence K(gα, g−α) = 0, but, since g is semisimple, K is a non-degenerate form on g. By Lemma 3.1.6, K(gα, gβ) = 0 except if α = −β. This is a contradiction, hence −α ∈ R, and a. above holds. Now, let x ∈ gα, x 6= 0, y ∈ g−α, y 6= 0. Invariance of the Killing form gives

K(h, [x, y]) = K([h, x], y) = α(h)K(x, y).

⊥ Hence, ker(α) ⊆ [gα, g−α] , the of [gα, g−α] ⊆ g0 = h, and so ⊥ dim([gα, g−α] ) ≥ dim(h) − 1, hence dim([gα, gα]) ≤ 1

Now let x 6= 0, x ∈ gα, y 6= 0, y ∈ g−α, h = [x, y]. If we assume that α(h) = 0, then x, y, h span a nilpotent Lie subalgebra, hence a solvable one, thus by Lie’s Theorem, ad(x), ad(y), ad(h) are strictly upper triangular matrices for some basis, so ad(h) = ad([x, y]) is a nilpotent endomorphism. hence, by definition of Cartan subalgebra, we get that ad(h) = ad([x, y]) = 0, and by semisimplicty of g this means that h = 0. Hence c. holds. Now, we know that K is not zero on gα × g−α, so there exist non zero x ∈ gα, y ∈ g−α such that K(x, y) 6= 0. Now choose h ∈ h such that α(h) 6= 0, then

K(h, [x, y]) = α(h)K(x, y) 6= 0.

In particular, [x, y] 6= 0, so [gα, g−α] 6= 0, hence dim([gα, g−α]) = 1. Definition 3.1.9. Let g be a semisimple complex finite dimensional Lie ∨ algebra, and let h ⊆ g be a Cartan subalgebra, α ∈ R a root. Define αıh by the properties:

∨ • α ∈ [gα, g−α]

∨ ∨ • < α, α >= α(α) = 2 62CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

∨ This α ∈ h is called the dual root to α or the coroot of α.

Remark 3.1.10. Since α does not vanish on the line [gα, g−α], there exists ∨ α ∈ [gα, g−α] with the desired properties. We restate 3. in the following Claim. There is an embedding

sl(2, C) → gα ⊕ [gα, g−α] ⊕ gα of Lie algebras.

Proof of Claim (3.) Let α ∈ R. Then find x ∈ gα, y ∈ g−α such that h = [x, y] 6= 0, by previous Lemma. Then

∨ ∨ [α, x] = α(α)x = 2x ∨ ∨ [α, y] = −α(α)y = −2y

∨ Hence (x, α, y) define an sl(2, C) triple for α. Hence e 7−→ x ∨ h 7−→ α f 7−→ y defines the desired embedding.

Example 3.1.11. Let g = sl(3, C). Then ∨ (x = Eij, α = Eii − Ejj, y = Eji) form an sl(2)-triple for the root α = (i = j) Exercise 3.1.12. Find more “nonstandard” sl(2, C)- triples. Main Idea. For α ∈ R, we have an embedding

sl(2, C) → gα ⊕ [gα, g−α] ⊕ gα

We can then study g as an sl(2, C) module by letting sl(2, C) act via ad. Many structural problems of g can thus be solved using representation theory of finite dimensional sl(2, C)-modules. 3.1. ROOT SPACE DECOMPOSITION 63

Claim. Let α ∈ R, then Cα ∩ R = {±1}

∨ Proof. Let α ∈ R. Find x ∈ gα, y ∈ g−α, α ∈ [gα, g−α]. Consider the sl(2, C)- module M ∨ M = gtα ⊕ Cα t∈C Decompose it into irreducible finite dimensional sl(2, C)-modules.

∨ As possible weights (eigenvalues of ad(α)) we have, 0, with one dimen- 1 sional weight space gα, or 2t, in which case t ∈ 2 Z. By sl(2, C)- theory, it ∨ follows that the only even weights are 0, 2, −2 because gα ⊕ Cα ⊕ g−α is a summand and any other weight would give a higher dimensional zero weight space. By sl(2, C)-theory, we get that if βt is a weight for an sl(2, C)-module, 1 then 2β is a root. Hence, 2 α is not a root, but then α is not a root, and this means 1 is not a weight. Hence,

M = g−α ⊕ [gα, g−α] ⊕ g−α.

So there are only even weights and they are only {2, 0, −2}. Hence, t ∈ {±1}. In particular dim(gα) = 1. We have now finished proving 2. and 5. of Theorem 3.1.3 and hence all of it.

Remark 3.1.13. The fact that root spaces are one dimensional is a very strong property and special of finite dimensional Lie algebras. (...Kac-Moody Lie Algebras!)

Proposition 3.1.14. Let g be a semisimple complex finite dimensional Lie algebra, and h ⊂ g a Cartan subalgebra, R the associated set of roots. Then

∨ ∨ 1. For α, β ∈ R, < α, β >:= α(β) ∈ Z.

∨ 2. β− < β, α > α ∈ R.

3. R spans h∗.

Note. The root system R is not a basis of h∗.

(Motivation, sl3, reflection of a root is again a root) 64CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

∨ Proof. Assume β = α, then < α, α >= 2 ∈ Z. If α 6= β, Then consider the sl(2, C) representation M M = gβ+iα. i∈Z ∨ Each weight space is one dimensional, and α acts on gβ+iα with eigenvalue ∨ ∨ ∨ (β + iα)(α) =< β, α > +2i. Hence by sl(2, C) theory, we get < β, α >∈ Z. ∨ Also by sl(2, C) theory, − < β, α > is a weight. Hence gβiα 6= 0 for i = − < ∨ ∨ β, α >, so β− < β, α >∈ R. To show 3., it is enough to show \ ker(α) = {0}. α∈R

Let h ∈ h such that α(h) = 0 for all α ∈ R. Then, [h, gα] = 0 for all α ∈ R, so h ∈ Z(g) (the center of g). But g is semisimple, so Z(g) = {0}, and h = 0. ——————————————————————————————— Lecture 14, 23.05.2011

3.2 Root Systems

In this section, V will denote a finite dimensional k-vector space, where k is any field with char(k) = 0.

Definition 3.2.1. An endomorphism s ∈ Endk(V) is called a reflection if 2 s = Id and rank(IdV −s) = 1.

Remark 3.2.2. If dimk(V) = n, and s ∈ Endk(V) is a reflection, then

dimk(Hs := {v ∈ V: s(v) = v}) = n − 1.

This will be the reflection hyperplane of s.

Lemma 3.2.3. An endomorphism s ∈ Endk(V) is a reflection if and only if there exist 0 6= a ∈ V, 0 6= a∗ ∈ V∗ such that a∗(a) = 2 and

s(x) = x − a∗(x)a for all x ∈ V. 3.2. ROOT SYSTEMS 65

∗ ∗ Proof. Let s ∈ Endk(V), and assume there exist a ∈ V, a ∈ V as in the statement of Lemma 3.2.3. Let x ∈ V. Then, s2(x) = s(x) − a∗(x)s(a) = x − a∗(x)a − a∗(a − a∗a(a)a) = x − 2a∗(x)a + a∗(x)a∗(a)a = x and (Id −s)(x) = a∗(x)a Since a∗ 6≡ 0, this implies rank(s − Id) = 1. Now let s ∈ Endk(V) be a reflection, so dim(Im(Id −s)) = 1. Hence, there exist 0 6= a ∈ V, 0 6= a∗ ∈ V∗ such that for all x ∈ V, x − s(x) = a∗(x)a. Since s2 = Id, −a∗(x)a = s(x) − x = s(x − s(x)) = a∗(x)s(a) = a∗(x)(a − a∗(a)a); hence a∗(a) = 2. Example 3.2.4. Let g be a semisimple finite dimensional complex Lie alge- bra, h ⊆ g a Cartan subalgebra, and α ∈ R a root. Then

∗ ∗ s ∨ : h → h α,α ∨ x 7−→ x− < x, α > α and correspondingly s∨ : h → h are reflections. α,α + − Remark 3.2.5. Let s ∈ Endk(V) be a reflection. Then V = Vs ⊕ Vs where ± Vs is the ±1- eigenspace for s, respectively. − Note. The eigenspace Vs is a one dimensional subspace of V spanned by a ∈ V given in Lemma 3.2.3. Lemma 3.2.6. Let (− , −) be a bilinear, non-degenerate symmetric form on V, and let s ∈ Endk(V) be a reflection. Then: 1. The reflection s is orthogonal (i.e. (s(x), s(y)) = (x, y) for all x, y ∈ V) + + ⊥ − − ⊥ if and only if Vs ∩ (Vs ) = 0 and Vs ∩ (Vs ) = 0

2. If (− , −)|H is non-degenerate for some hyperplane H, then there exists a unique reflection s = sH ∈ Endk(V) such that s(h) = h for all h ∈ H and if α ∈ H⊥/{0}, then H⊥ = h · α, and 2(x, α) s(x) = x − α. (α, α) 66CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

In this case, sH is called the reflection in the hyperplane/ orthogonal to the hyperplane H. Proof. Excercise.

−1 Remark 3.2.7. If s ∈ Endk(V) is a reflection, then s ◦ sH ◦ s = ss(H). Definition 3.2.8. A subset R ⊂ V is called a root system if the following holds: R1 The set R is finite, 0 ∈/ V, and R spans V.

∨ ∗ ∨ R2 ∀α ∈ R, ∃α ∈ V such that α(α) = 2 and s ∨(R) ⊆ R α,α

∨ R3 ∀α, β ∈ R, α(β) ∈ Z. The root system R is called reduced if kα ∩ R = {±1} for all α ∈ R. The rank of the root system R ⊂ V is the dimension of V. We have already proven the following: Theorem 3.2.9. Let g be a semisimple complex finite dimensional Lie alge- bra, and h ⊆ g a Cartan subalgebra. Then R subseth∗, the corresponding set of roots of g, is a reduced root system in the sense of the above definition.

∨ Remark 3.2.10. The α in Definition 3.2.8 is uniquely determined: Suppose ∨ ∨ α, β satisfy R2 for α. Then, s ∨(α) = α = s ∨(α), and s(R) = s ∨(R) = α,α α,β α,α s ∨(R). Consider α,β

∨ ∨ (s ◦ t)n(x) = x − n(β(x) − α(x))α.

Since s, t generate a finite subgroup of GL(V), the of s ◦ t is finite. Substitute this in place of n above.

Remark 3.2.11. The reflections s ∨ generate a finite subgroup W of GL(V), α,α since they are all contained in the permutation group of the finite set R (R1, R2 i Definition 3.2.8). Let

G(R) := {w ∈ GL(V) : w(R) ⊂ R}.

Then W(R) ⊆ G(R). 3.2. ROOT SYSTEMS 67

Definition 3.2.12. The group W = WR is called the associated with R ⊂ V.

Example 3.2.13. Let g = sl(3, C), h ⊆ g Cartan subalgebra. Then R = ∼ {i − j : 1 ≤ i, j ≤ 3}, and WR = S3, the on three letters.

We will denote in the future s = s ∨, and W = W when there is no α α,α R risk of confusion.

Remark 3.2.14. Setting as before. Since sα(α) = −α, then −R = R. The identity map idV ∈ G(R), but it doesn’t need to be contained in WR(R)

Remark 3.2.15. If V = V1 ⊕ V2,R1 ⊆ V1, R2 ⊆ V2 are root systems, then R1 ∪ R2 ⊂ V is also a root system (excercise). This is called the direct sum of the root systems R1 and R2. Definition 3.2.16. The root system R is called irreducible if R is not the direct sum of two non empty root systems.

If R = R1 ∪ R2 is the direct sum of two root systems, then WR = WR1 ×

WR2 (excercise) and w|Vi = idVi for each w ∈ Rj, for j 6= i.

Fact 3.2.17. If R ⊂ V is an irreducible root system, then the action of WR on V defines an irreducible WR-module, i.e. a representation of the group WR.

∨ ∨ Proposition 3.2.18. Let R ⊂ V be a root system. Then R = {α : α ∈ R} ⊆ V∗ is a root system. It is called the dual root system of R. Proof. Without loss of generality, may assume R is an irreducible root sys- ∨ ∨ ∨ tem. Now, R is a finite set by definition, 0 ∈/ R because α(α) = 2. Now we ∨ ∨ show that R spans V∗. Assume there exists 0 6= x ∈ V such that α(x) = 0 for all α ∈ R. Then k · x is stable under the action of WR which contradicts V being an irreducible WR-module. Now let α, β ∈ R, and let γ = sα(β); θ = ∨ ∨ ∨ ∨ ∨ s∨(β). Then, θ(γ) = β(β− < β, α > α) − β(α)α(β− < β, α > α). Since α ∨ β(β) = 2, then θ(γ) = 2 and hence sγ,θ = sα ◦ sβ ◦ sα. Using the formula −1 s ◦ sH ◦ s = ss(H), we get that sγ,θ(R) ⊂ R, using. By uniqueness of coroot, ∨ ∨ ∨ ∨ ∨ ∨ we get θ − γ = s∨(β). Hence s∨(R) = R and α = α. α α 68CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Remark 3.2.19. We have a

∨ R → R ∨ α 7→ α

It induces an isomorphism

WR → W∨ ; R w 7→ (tw)−1

∨ even an isomorphism G(R) ∼= G(R).

Why is W(R) interesting? It finally gives a classification of semisimple finite dimensional complex Lie ∨ algebras, up to isomorphism (< α, β >∈ Z). Theorem 3.2.20 (Harish-Chandra). Let U(g) be the universal envelop- ing algebra of a semisimple complex finite dimensional Lie algebra. Let Z(U(g)) ⊆ U(g) be it’s center. There is an isomorphism

Z(U(g)) ∼= S(h)WR

∗ Where the action of WR on h induces an action on S(h).

Remark 3.2.21. The space S(h)WR is the of regular functions on the ∗ ∗ space of orbits h /WR of the WR action on h .

Main Point

Theorem 3.2.22 (Sheppard-Todd). The algebra S(h)WR is again a polyno- ∗ mial ring, and h /WR is an affine . 3.2. ROOT SYSTEMS 69

3.2.1 Changing scalar

Let R ⊆ V be a root system, over a field k. Let VQ denote the Q span of the ∨ ∨ roots α ∈ R, and VQ∗ the Q-span of the coroots α ∈ R.

Note. For k ⊂ L a field extension, we have that L ⊗k R := {1 ⊗ α : α ∈ R} ∨ forms a root system for L ⊗k V, as L-vector space, with dual roots 1 ⊗ α, for α ∈ R.

Proposition 3.2.23. Let R ⊂ V be a root system, notation as above. (V is a k-vectors space, and Q ⊆ k is a field extension.)Then

1. R is a root system in VQ. 2.

k ⊗Q VQ → V λ ⊗ v 7→ λv

is an isomorphism, similarly for VQ

∗ 3.V Q∗ = (VQ)

Proof. 1. By definition, R ⊆ VQ. BY property R3 of root systems, if ∨ ∨ ∨ α, β ∈ R then α(β) ∈ Z. Hence, α(VQ) ⊆ Q, so that α indeed defines a linear functional over Q, so this defines a coroot. The properties of a root system follow from the definition.

2. The map

k −→i V λ ⊗ v 7−→ λv

is surjective since R spans V over k. Also,

∗ ∗ i ∗ V −→ k ⊗Q V ∨ ∨ α 7−→ 1 ⊗ α

is also surjective, so i must be an isomorphism. Hence 2. and 3. hold. 70CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Corollary 3.2.24. Let R ⊂ V be a root system. This defines a root system

RQ ⊆ V(Q) and then RR := {1 ⊗ α : α ∈ R} ⊂ R ⊗Q VQ a root system over R. Remark 3.2.25. The Weyl groups W , W and W are all isomorphic. R RQ RR Recall the bilinear form

X ∨ ∨ γ(x, y) = α(x)α(y). α∈R

If x, y ∈ VQ, then γ(x, y) ∈ Q, hence we get bilinear forms

VQ × VQ −→ Q (x, y) 7−→ γ(x, y) and hence

γR : R ⊗Q VQ × R ⊗Q VQ −→ R (λ ⊗ x, µ ⊗ y) 7−→ λµ ⊗ γ(x, y)

Then, X ∨ 2 γR(x, x) = (α(x)) ∈ R α∈R

∨ and γR(x, x) = 0 if and only if α(x) = 0 for all α ∈ R, which happens if and only if x = 0. We thus get an inner product on the real vector space

R ⊗Q VQ := VR. Conversely, if (− , −) is an inner product on VR which is WR-invariant, then we can angles between roots. Given α, β ∈ R ⊆ VR, let θ = ](α, β) be the angle between α and β. Then (α, β) cos(θ) = kαkkβk

Note. This doesn’t depend on the choice of an inner product. If R is an kαk irreducible root system and α, β ∈ R, then kβk doesn’t depend on the choice of the inner form. 3.2. ROOT SYSTEMS 71

∨ ∨ 2(α,β) 2kβk ∨ Note. We have < β, α >= (α,α) = kαk cos(θ), hence < α, β >< β, α >= 4 cos2(θ) ≥ 0. We get the identity

∨ ∨ 0 ≤< α, β >< β, α >≤ 4 (3.1)

By the properties of root systems we conclude that there are only finitely many ∨ possibilities for the value of < α, β > for α, β ∈ R.

∨ <β,α> (β,β) Note. For α, β ∈ R, (α, β) 6= 0 =⇒ ∨ = (α,α) and <α,β>

∨ ∨ < β, α >= 0 ⇐⇒ < α, β >= 0 ⇐⇒ (α, β) = 0

∨ ∨ List of all possibilities for < α, β >, < β, α > with assumption kαk ≤ kβk.

∨ ∨ kβk2 < α, β > < β, α > θ kαk2 π I 0 0 2 undetermined π II 1 1 3 1 2π II −1 −1 3 1 π III 1 2 4 2 3π III −1 −2 4 2 π IV 1 3 6 3 5π IV −1 −3 6 3 don’t occur if R is reduced 1 4 0 4 ”” −1 −4 π 4 don’t occur if α 6= ±β 2 2 0 1 −2 −2 π 1 72CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Rank two root systems I

II Type A2, Lie algebra: sl(3, C)

ε2 − ε3 = β

ε1 − ε3 = α + β ε2

ε1

ε3 ε1 − ε2 = α

III Type B2, Lie algebra:

ε2 ε1 + ε2

ε1

ε1 − ε2

IV Type G2

ε1 + 2ε2

ε2 2ε1 + ε2

ε1

ε3 ε1 − ε2 = α

Proposition 3.2.26. Let R ⊆ V be a root system. Then P ∨ ∨ 1.( x, y) = α∈R α(x)α(y) defines a G(R) (hence WR)-invariant symmet- ric non on V. 3.2. ROOT SYSTEMS 73

2. If R = R1 ∪ · · · ∪ Rs is a direct sum of irreducible root systems Ri ⊆ Vi and (− , −) is a symmetric G(R) or WR-invariant bilinear form on n

V = Vi , then (− , −)|V1×V1 is, up to scalar, the form defined in 1., i=1 and the spaces Vi are pairwise orthogonal. Remark 3.2.27. Given g a semisimple complex finite dimensional Lie alge- bra, and h ⊆ g a Cartan subalgebra, sometimes it’s useful to identify h with h∗ via the Killing form: h −→ h∗ h 7−→ K(h, −). then, the Killing form induces, via this identification, a symmetric, non degenerate bilinear form on h∗. One can show that under this isomorphism, we get

∨ 2α α = (α, α) ——————————————————————————————— Lecture 15, 30/05/2011 Recall. Let g be a semisimple complex finite dimensional Lie algebra, h ⊆ g ∨ a Cartan subalgebra, R the corresponding root system, R ⊆ h∗∗ the dual root system. We have a bijection ∨ R → R ∨ α 7−→ α. and for each α ∈ R, reflections

s ∨ : h → h α,α ∗ ∗ s∨ : h → h . α,α ∗ ∗ −1 Claim 5. Setting as above. We have s∨ = (s∨ ) = ( (s∨ )) where for a α,α α,α α,α linear map f :W1 → W2, the map ∗ ∗ ∗ f :W2 −→ W1

ϕ 7−→ (w1 7→ ϕ(f(w1))) is the dual map. 74CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

∗ 2 2 Proof. Its clear that ( s ∨) = id because (s ∨) = Id ∗ . We have for α,α α,α h α ∈ h∗, h ∈ h,

∗ ∨ ∨ ∨ ( s ∨)(α)(h) = α(s ∨(h)) = α(h− < h, α > α) = α(h)− < h, α > α(α) = −α(h). α,α α,α

∗ ∨ ∗ Similarly, s ∨(β) = β if β(α) = 0 hence rank( s ∨ − Id) = 1. α,α α,α Remark 3.2.28. The map w 7→ (∗w)−1 defines an isomorphism of groups ∼ ∗ ∗ ∗ WR = W∨ because WR is generated by reflections and s1 s2 = (s1s2). R

Remark 3.2.29. The same holds for G(R) instead of WR. Proof of Proposition 3.2.26. 1. That γ is bilinear and symmetric is clear from its definition. Take g ∈ WR. We have, X ∨ ∨ γ(g(x), g(y)) = α(g(x))α(g(x)) α∈R X ∨ ∨ = ∗g(α(x))∗g(α(y)) α∈R X ∨ ∨ β(x)β(y) β∈R ∨ because W∨ permutes β ∈ R. Hence γ is WR-invariant. To see it is non R ∨ degenerate, let α, β ∈ R. Then, α(β) ∈ Z. Hence γ(x, x) ∈ Z≥0. Also ∨ since α(α) = 2, we have γ(α, α) ≥ 4. In particular, γ 6= 0. Now let ∨ ∨ ∨ x ∈ Vi, y ∈ Vj, i 6= j. Then α(x)α(y) = 0 because α(x) = 0 for every α ∈ Ra, β ∈ Rb, a 6= b. Hence we may assume R ⊆ V is irreducible. By Excercise 25, we are done. 2. (The reason one actually needs a proof of this is that the form (−, −) only coincides with γ on the irreducible components a priori.) Define

Ui := {w(x) − x : x ∈ Vi; w ∈ WRi }.

Then Ui is a WRi -submodule of Vi. Since sα(α) − α = −2α, we get Ui 6= 0, so Ui = Vi since Vi is irreducible. Now let x ∈ Vi, y ∈ Vj for

i 6= j. Then since w|Vi = IdVj forw ∈ WRj , we get (w(x), y) = (w(x), w(y)) so that w(x) − x is orthogonal to y. Remaining part: Exercise. 3.2. ROOT SYSTEMS 75

3.2.2 Bases of root system Definition 3.2.30. Let R ⊂ V be a root system, where V is a finite di- mensional R-vector space. Choose a basis {e1, ··· , en} of V. Define the lexicographic order on elements of V by

n n X X λ = λiei µiei i=1 i=1 ⇐⇒

∃1 ≤ s ≤ n s. th. λi = µi for 1 ≤ i ≤ s and λs+1 > µs+1 Note. This order, of course, depends on the choice of basis! Remark 3.2.31. The lexicographic order defines a total order on elements of V. Definition 3.2.32. Let R ⊆ V as above, a lexicographic order on V. Consider R+ : = {α ∈ R: α 0} := positive roots R− : = {α ∈ R : 0 α} := negative roots It follows from the definition of these two sets that R = R+ ∪ R− and R+ ∩ R−. Let B := {α ∈ R+ : α is not the sum of two positive roots } (3.2) The set B is called the base or basis of the root system R. Elements of B are called simple roots. Note. Everything depends on the choice of ! Remark 3.2.33. The set B is compatible with , i.e. λ µ, λ0 µ0 =⇒ λ + λ0 µ + µ0 Corollary/Definition 3.2.34. Given g, a semisimple complex finite di- mensional Lie algebra and h ⊂ g a Cartan subalgebra, we get a triangular decomposition:

g = n− ⊕ h ⊕ n+ where all the summands are Lie subalgebras, and

n± = ⊕ gα. α∈R± 76CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Proof. As a vector space the sum comes from the definition of the roots. Also, + by definition, h is a subalgebra. Now, let α, β ∈ R , and xα ∈ gα, xβ ∈ gβ. + Then [xα, xβ] ⊆ gα+β; by remark above α + β ∈ R . SImilarly, n− is a subalgebra.

∗ Definition 3.2.35. Let g = n−⊕h⊕n+ as above. Let λ ∈ h and define a one dimensional h-module C by h·v := λ(h)v for h ∈ h, v ∈ C, and call it Cλ. Ex- tend to n+ by x · v := 0 for all x ∈ n+. The Verma module of highest weight λ is

M(λ) = U(g) ⊗U(n+) (C)λ

Theorem 3.2.36. Let R ⊂ V be a root system, V f.d. R-vector space, a lexicographic order, B a basis with respect to ,R+ the corresponding set of positive roots. Then

+ P 1 Any α ∈ R is a sum of elements in B, i.e. α = nibi; ni∈Z, bi ∈ B. missing example sl3

∨ 2. For α, β ∈ B, if α 6= β then < α, β >≤ 0 andα − β∈ / R.

3. Elements of B form a basis of V.

∨ 4. For α ∈ R+/B, there exists β ∈ B such that < α, β > 0 and α−β ∈ R+.

+ + 5. Let α ∈ R , β ∈ B, α 6= β. Then sβ(α) ∈ R . In particular sβ permutes the set R+/{β}.

Remark 3.2.37. Let w ∈ WR. We denote by l(w) the minimal number of reflections sα, α ∈ B needed to write w as a product of such. For sl(3, C), ∼ WR = S3. We can choose B such that {sα : α ∈ B} corresponds to the set of simple transpositions in S3. Then l(w) is the usual length of an element in S3 + − (this holds for any n), and l(w) = #|WR(R) ∩ R |. The correct definition of VQ is the span of the elements in B, for char(k) 6= 0.

+ Proof. 1 Let αm · · · α1 be the ordered elements of R . Now, α1 ∈ B, otherwise, by definition of B, α = α + β for some α, β ∈ R+, so α1 α, β which is a contradiction. We proceed by induction. Assume the statement holds for αj, for 1 ≤ j ≤ i. Assume αi+1 ∈/ B. Then + αi+1 = α + β for some α, β ∈ R , and we are then done by induction. 3.2. ROOT SYSTEMS 77

2 If α−β ∈ R, then −(α−β) = β −α ∈ R, so either α−β or β −α ∈ R+ since α = (β − α) + α and β = (α − β) + β, this means neither αorβ ∨ con belong to B. By “Observations”, < α, β >≤ 0.

3 Since R spans V, any α ∈ R+ is a Z-linear combination of elements in B, and further for any α ∈ R−, −α ∈ R+, we conclude B spans V. To see that the elements if B are linearly independent, assume Ps Pm for λi ∈ R, that v = i=1 λibi = i=s+1 λibi, so by 2, < v, v >=< Ps Pm i=1 λibi, i=s+1 λibi >< 0, so λj = 0 since bi 0.

+ P 4 Let α ∈ R − B, α = nibi, with ni ∈ Z≥0 and bi ∈ B. Then, X 0 < (α, α) = (α, nibi)

∨ so < α, bi >> 0 for some i. By observations, α − bi, bi − α ∈ R, so one + must belong to R , but it cannot be bi − α because otherwise bi is a + sum of two positive roots, so α − bi ∈ R . P 5 Write α = nibi, ni ∈ Z≥0 and bi ∈ B. Since α 6= bi, there exists j ∨ P such that nj 6= 0 and bj 6= β. Hence, sβ(α) = α− < α, β >= mibi where mj = nj for our fixed j. Hence, since by 1, the coefficients of a root are either all positive or all negative, and mj > 0, we conclude + sβ(α) ∈ R .

3.2.3 Weyl Chambers

Let R ⊆ V be a root system, V a f.d. R-vector space and (−, −) the WR- invariant scalar product. Let {e1, ··· en} be an R-basis of V. Let

n X C := {x = λi(x)ei ∈ V: λi(x) > 0∀i} i=1

Given α ∈ R, then the hyperplane associated with the reflection sα is

∨ Hα := {x ∈ V:< x, α >= 0} 78CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Consider

[ [ H := Hα = Hα α∈R α∈R+ Consider V − H. The connected components are called Weyl Chambers. Remark 3.2.38. The connected components don’t depend on (−, −). Also, WR acts on the set of weyl chambers, since sα is continuous, bijective with ∨ continuous inverse. Also WR(H) ⊂ H because < x, α >= 0 ⇐⇒ < ∨ w(x), w(α) >= 0 so x ∈ Hα ⇐⇒ sβ(x) ∈ Hsβ (α). Moreover, w ∈ WR permutes Weyl Chambers.

Lemma 3.2.39. The Weyl group WR acts transitively on Weyl chambers. Proof. Tauvel ——————————————————————————————— Lecture 17, 30/05/2011 Let V be a finite dimensional real vector space, R ⊂ V reduced root system, (−, −) a WR-invariant scalar product. 0 0 Proposition 3.2.40. Let B = {β1, ··· , βl} be a base of R. Let {β1, ··· βn} be the dual base with respect to (−, −), i.e. (βi, βj) = δij. Then, C(B) = {x ∈ V:(x, β) > 0∀β ∈ B}

= {x ∈ V:(x, βi) > 0∀i ∈ 1, l} X = { xiβi : xi ∈ R≥0} l 0 = R≥0βi i=1 Is a Weyl chamber, in particular an open simplicial cone. Proof. Claim. C(B) ⊆ V/H. + Let x ∈ C(B), α ∈ R. Since Hα = H−α, assume w.l.o.g. that α ∈ R , α = n1β1 + ··· + nlβl. Then, X (x, α) = ni(x, βi) > 0 so x∈ / Hα. 3.2. ROOT SYSTEMS 79

Claim. C(B) is connected. This is true if and only if C(B) ⊂ C for a Weyl chamber C. Assume then that C(B) * C. Let x ∈ C − C(B), then there exists i such that (x, βi) < 0. Let

C+ = {y ∈ C:(y, βi) > 0}

C− = {y ∈ C:(y, βj) < 0}; both open in C. Then C− is not empty (x ∈ C−), and C(B) ⊆ C+, so C isn’t connected, a contradiction. Theorem 3.2.41. Let R ⊂ V as before. Then 1

{bases of R} −→∼ {Weyl chambers} B 7→ C(B)

is WR-equivariant. 2 W(R) acts simply transitively on {bases of R} and on {Weyl chambers}. Definition 3.2.42. If a basis B of R is fixed, then C(B) is a fundamental Weyl chamber. Proof of Theorem 3.2.41.

1 Let B be a base of R. Let C0 be any Weyl chamber, then by Lemma 0 there exists w ∈ WR such that w(C(B)) = C . Note that w(B) is a base of R, and w(C(B) = C(w(B))), so the map is surjective and equivariant. 0 Now we check injectivity: Let B = {β1, ··· βl}, B = {α1, ··· αl} be 0 0 0 0 bases of R, and let {β1, ··· , βl}, {α1, ··· , αl} be the dual bases with respect to (−, −), respectively. Assume that

l l 0 0 0 ⊕ R>0βi = C(B) = C(B ) = ⊕ R>0αi. i=1 i=1

0 0 In particular, the set of edge rays, i.e. {R>0βi}, {R>0αi} coincide.

2 Let B be a base of R, and w ∈ WR. To show: w(B) = B implies w = id. Let w = s1 ··· sr be a reduced decomposition of w, where − si = sβi , βi ∈ B. By problem 29, if r ≥ 1, then w(βr) ∈ R , which contradicts our assumption w(B) = B, so r = 0 and thus w = id. 80CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

Notation. We will denote by R+(B) = R+(C) the sums of elements of B that are in R. The set R+(C) determines C:

C = C(B) = {x ∈ V:(x, β) > 0∀β ∈ B} = {x ∈ V:(x, β) > 0∀β ∈ R+(C)}

3.2.4 Subsets of roots

Let R be a reduced root system in a f.d. R-vector space V. Definition 3.2.43. A subset P ⊂ R is closed if ∀α, β ∈ P if α + β ∈ R, then α + β ∈ P. It is parabolic if it is closed and R = P ∪ (−P). Proposition 3.2.44. Let P ⊂ R be closed and such that P∪(−P) = ∅, then there exists a Weyl Chamber C of R such that P ⊂ R+(C). Proof. Assume w.l.o.g. that P 6= ∅.

Claim. If α1, ··· αn ∈ P and n ≥ 1, then α = α1 + ··· + αn 6= 0. We proceed by induction on n. For n = 1, theres nothing to do. If n ≥ 2, then assume α = 0. Thus,

(α1, α2 + ··· + αn) = (α1, −α1) < 0

Hence, there exists j ≥ 2 such that (α1, αj) < 0. Since α1 6= −αj because P ∩ (−P) = ∅ we have, by ’Observations’, that α1 + αj ∈ R, and since P is closed, α1 + αj ∈ P. Then α cannot be zero since α1 + αj 6= 0 and by P induction i6=j,1 αi 6= 0. Claim. There exists α ∈ P,such that∀β ∈ P, (α, β) ≥ 0.

Otherwise, there exist α1, α2 ∈ P such that (α1, α2) < 0, so α1 + α2 ∈ P. Then there exists α3 ∈ P such that (α1 + α2, α3) < 0, so α1 + α2 + α3 ∈ P. Hence for all i we may find a sequence α1, ··· , αi such that α1 +···+αi ∈ P. Since P is finite there exist αj+1, ··· , αk ∈ P with αj+1 + ··· + αk = 0, contradicting the previous claim. Claim. There exists an ordered basis of V such that all elements of P are positive with respect to the lexicographic order. This means, with respect to this basis, P ⊂ R+ = R+(B) = R+(C). 3.2. ROOT SYSTEMS 81

We proceed by induction on l = dim(V). For l = 1, just choose an element of P. Now assume l ≥ 2. Then, by the previous claim, there exists v1 ∈ V − {0} such that (v1, β) ≥ 0∀β ∈ P. Consider the hyperplane

⊥ H := (Rv1) = {x ∈ V:(x, v1) = 0}. We now have that R ∩ H ⊂ span(R ∩ H) is a root system, and P ∩ H ⊂ R ∩ H is closed and (P ∩ H) ∩ (−P ∩ H) = ∅. By induction, there exists an ordered basis (v1, ··· , vr) of spanR(R ∩ H) such that elements of P ∩ H are positive with respect to the lexicographic order. The claim now follows.

Corollary 3.2.45. Let P ⊂ R be a subset. The following are equivalent:

1 There exists a Weyl Chamber C of R such that P = R+(C). If so, the such a C is unique.

2 P is parabolic and P ∩ (−P) = ∅

3 P is closed and R = P ∩ (−P).

Proposition 3.2.46. Let P ⊆ R, then the following are equivalent:

1 P is parabolic.

2 P is closed and there exists a Weyl chamber such that R+(C) ⊆ P.

Proof. If R+(C) ⊂ P for some Weyl chamber C, then R = P∪(−P). Assume now that P is parabolic, and let C be a Weyl chamber such that |P ∩ R+(P)| is maximal. Claim. B ⊂ P Otherwise, let β ∈ B − P. Since R = P ∪ (−P), then −β ∈ P. Let 0 + C := sβ(C). Then, since sβ permutes R (C) − {β}, we have

+ 0 + + R (C ) = sβ(R (C)) = (R (C) − {β}) ∪ {−β}; ◦ P ∩ R+(C) = (P ∩ R+(C) − {β}) ∪ P ∩ {−β}

Since the latter is impossible, then B ⊆ P. 82CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS

——————————————————————————————— Lecture 18, 08/06/2011

3.2.5 Classification of a parabolic subset over a fixed R+(B) Lemma 3.2.47. Let B be a base of R, and R+ := R+(C) = R+(B). Let P be closed, and assume that R+ ⊆ P (so P is parabolic). Let Σ := B ∩ (−P), and let Q be the set of roots that are sums of elements in −Σ = (−B) ∩ P. Then P = R+ ∪ Q. Lemma 3.2.48. Let B be a base of R, let Σ ⊆ B be a subset. and let Q = Q(Σ) as above. Then P := R+ ∪ Q is closed in R (and parabolic). Corollary 3.2.49. Let B be a base of R, notation as above. Then,   parabolic subsets of R ∼ ←→ { subsets of B} = P(B) containing R+ P 7−→ B ∩ (−P) R+ ∪ Q(Σ) ←− Σ [ In particular, the set on the left has 2|B| elements.

3.3 Borel and Parabolic subalgebras of a com- plex semi simple Lie algebra.

Recall.

g − complex semisimple Lie algebra, h ⊆ g Cartan subalgebra R = R(g, h) ⊂ h∗ root system M g = h ⊕ gα, h = g0. α∈R V ∈ h − mod, λ ∈ h∗,

Vλ := {v ∈ V: hv = λ(h)v∀h ∈ h} ∗ P(V) := set of weights of V = {λ ∈ h :Vλ 6= {0}} 3.3. BOREL AND PARABOLIC SUBALGEBRAS OF A COMPLEX SEMI SIMPLE LIE ALGEBRA. 83

Definition 3.3.1. A of (g, h) is a Lie subalgebra of g of the form M b = h gα α∈R+

For some choice of B a basis of R. A parabolic subalgebra of (g, h) is a Lie subalgebra Psubseteqg that contains a Borel subalgebra of (g, h).

Proposition 3.3.2. We have an inclusion preserving bijection:   1:1 Lie subalgebras of g M {closed subsets of R} ←→ P 7→ a(P) := h ⊕ g that contain h α α∈P non-zero weights of a P(a) − {0} ←− a w.r.t.h − action [ Moreover, if P ⊆ R is closed, then:

◦ 1 a(P) is a Borel subalgebra of (g, h) ⇐⇒ R = P ∪ (−P)

2 a(P) is a parabolic subalgebra of (g, h) ⇐⇒ P is parabolic.

Corollary 3.3.3. There are

{Weyl chambers of R} ←→ {bases of R} ←→ {Borel subalgebras of (g, h)}

Proposition 3.3.4. Let B be a base of R, and b the corresponding Borel subalgebra of (g, h). Then,   ∼ parabolic subalgebras of {subsets ofB} = P(B) −→ (g, h) containing b M σ 7→ h ⊕ b gα α∈R− a sum of elements in −Σ ——————————————————————————————— Lecture 19, 20/06/2011 84CHAPTER 3. STRUCTURE THEORY OF SEMISIMPLE COMPLEX LIE ALGEBRAS Chapter 4

Highest Weight Theory

Definition 4.0.5. Let g be a s.s. f.d. complex Lie algebra, and h ⊆ g Cartan subalgebra, V a representation of g, and λ ∈ h∗. Then

V(λ) = Vλ := {v ∈ V: hv = λ(h)v∀h ∈ h} is the λ-weight space of V. The element λ is called a weight of V if Vλ 6= {0}. Let P(V) be the set of all weights of V. Often elements in h∗ are just called (possible) weights.

Example 4.0.6. Let V = g, the adjoint representation. Then P(V) = R ∪ {0}.

Remark 4.0.7. If Cλ denotes the one dimensional representation, where hv := λ(h)v, then for any h-module V, ∼ HomU(h)(Cλ, V) = Vλ f 7→ f(1)

Definition 4.0.8. Let R+ ⊆ R be a choice of positive roots. The dominant weights is defined as

+ + + ∗ ∨ + X = X (R ) = {λ ∈ h :< λ, α >∈ Z≥0∀α ∈ R }

and

+ + ∗ ∨ R ⊆ X ⊆ X = X(R) := {λ ∈ h :< λ, α >∈ Z∀α ∈ R}

85 86 CHAPTER 4. HIGHEST WEIGHT THEORY is called the set of integral weights. We call the set X Q := { hαα : hα ∈ Z} the root , and

+ X Q := { hαα : hα ∈ Z≥0} the positive root lattice. + ∗ ∼ Example 4.0.9. Let g = sl(2, C), then R = {α}, h = C, X(R) = Zω1, α ∨ where ω1 = 2 , since < α, α >= 2. ∨ Remark 4.0.10. In general, the set {αi : αi ∈ B, a basis of R, i = 1, ··· , n}, ∗ ∨ forms a basis of h. Choose ωi ∈ h such that < ωi, αj >= δij for all 0 < i, j ≤ n. Then

∗ X(R) = Zω1 + ··· + Zωn ⊂ h

0 It is actually a free Z-module with basis the ωis. Each ωi is called the i-th fundamental dominant integral weight, and

+ ∗ X (R) = Nω1 + ··· + Nωn ⊂ h Aim.    finite dimensional  irreducible representations ←→1:1 X+(R+)  of g up to isomorphism  V 7→ maximal element in P(V)

Recall. For sl(2, C):  finite dimensional    1:1 1:1 + irreducible representations ←→ Z≥0 ←→ X (R)  of sl(2, C) up to isomorphism V 7→ dim(V) − 1

Definition 4.0.11. Define a partial ordering on X+(R), X(R), Q, Q+ by

λ ≥ µ ⇐⇒ λ − µ ∈ Q+ 87

Example 4.0.12. Let g = sl(2, C). Let V be an irreducible f.d. representa- tion of dimension n + 1. Then P(V) = {nω1, (n − 2)ω2, · · · − nω1}, and the maximal element is nω1, the minimal element is −nω1.

Remark 4.0.13. If V is not necessarily finite dimensional, or irreducible, then

• maximal/minimal element doesn’t necessarily exists (example: the Verma module M(0))

• P(V) could be empty (U(g) as left g-module)

• maximal/minimal elements need not be unique (take V ⊕ V for V a finite dimensional irreducible sl(2, C) module). However it is all true for f.d. irreducible g-modules

Lemma 4.0.14. Let V be a representation of g, setup as above. Let 0 6= v ∈ V. The following are equivalent:

1v ⊆ v ¯ C + + L 2 hv ⊂ Cv; n v = {0} where n = gα α∈R+

3 hv ⊂ Cv and gαv = {0}∀α ∈ B, where B is a basis of R with positive roots R+.

If this holds, then v ∈ V is called a primitive vector of V.

Lemma 4.0.15. Let V be a representation of g, and v ∈ V a primitive − L vector. Let U be the n−-submodule of V generated by v, where n = gα. α∈R− Assume that V is generated by v as a U(g)-module. Then:

1U=V

2 dim(Vµ) < ∞, dim(Vλ) = 1∀µ ∈ P(V) L 3V= Vµ µ∈P(V)

4 ∀µ ∈ P(V), µ ≤ λ 88 CHAPTER 4. HIGHEST WEIGHT THEORY

5 EndU(g)(V) = C idV Example 4.0.16. Let V = M(0) for g = sl(2, C), and h the standard Cartan 0 0 subalgebra. Action of f = : 1 0 • (weight λ = 0)

•Ø

•×

. . Proof. The space U is generated as a vector space by elements of the form − xα1 ··· xαn v with αi ∈ gi for some αi ∈ R , and not necessarily αi 6= αj. Note that n X x · (xα1 ··· xαn v) = xα1 ··· xαj−1 [x, xαj ]xαj+1 ··· xαn · v + xα1 ··· xαn xv j=0

(To see this note that x · xα1 xα2 · v = [x, xα1 ]xα2 v + xα1 xα2 xv and then do induction.) So, if h ∈ h then the above implies that

h · (xα1 ··· xαn v) = (α1(h) + ··· + αn(h) + λ(h))v, in particular, xα1 ··· xαn v is a weight vector of weight λ + α1 + ··· αn. Note that α1 + ··· αn is a sum of negative roots. Now take x ∈ n+, then xv = 0 because v is primitive and M [x, xαj ] ∈ h ⊕ gβ

β>αj so by induction on n, x · (xα1 ··· xαn v) ∈ U. Hence U is a U(g) submodule of V containing v ∈ V, hence, since V is generated by v as a U(g)-module, U = V, hence (1). For (2), we have

r n X dim(Vµ) ≤ |{(p1, ··· pn) ∈ N |λ − piβi = µ}| = P(λ − µ) i=1 for B = {β1, ··· βr}. 89

Remark 4.0.17. P(λ − µ) = the Constant’s partition counting the possibilities of writing λ − µ as a non negative linear combination of basis elements βi ∈ B. By definition P(λ−λ) = dim(Vλ) = 1, (3) is now clear. For (5), let f ∈ EndU(g)(V), then f(v) ∈ Vλ = Cv, and f(uv) = uf(v). Hence f is determined by f(v), and ∼ EndU(g)(V) = Cv v 7→ f(v) 90 CHAPTER 4. HIGHEST WEIGHT THEORY

——————————————————————————————— Lecture 20, 22/06/2011

Lemma 4.0.18. Let V be a simple representation of g, and let λ ∈ P(V). The following are equivalent:

1 ∀µ ∈ P(V), µ ≤ λ

2 λ is a maximal weight

3 If α ∈ B then α + λ∈ / P(V)

In this situation, the U(g)-module is called a highest weight module.

Definition 4.0.19. A U(g)-module V (not necessarily simple) is called a highest weight module if there exists a primitive vector v ∈ Vλ for some λ ∈ P(V) and V is generated as a U(g)-module by v.

Proof. It’s clear that 1 =⇒ 2, and, if λ is maximal, α + λ > λ, so λ + α∈ / P(V), so 2 =⇒ 3. Now, let 0 6= v ∈ Vλ, and assume 3. Then, hv ⊆ Cv and xαv = 0 because λ + α∈ / P(V) for all α ∈ B, xα ∈ gα, hence, v is a primitive vector. So, 3 =⇒ 4. Now assume 4. SInce V is simple, it is generated by the given primitive vector v. By Lemma 4.0.15, we have µ ≤ λ for all µ ∈ P(V). Let V be a simple representation of V such that P(V) has a maximal element λ. Then:

1 dim(Vµ) < ∞ for all µ, and dim(Vλ) = 1.

2V λ = { primitive vectors } 3 Fro all µ ∈ P(V), µ ≤ λ. 4.1. CONSTRUCTION OF HIGHEST WEIGHT MODULES 91 4.1 Construction of highest weight modules

Properties and Universality of Verma Modules Setup. g is a complex s.s. Lie algebra, h ⊆ g a Cartan subalgebra, R+ ⊂ R a choice of positive roots.

g = n− ⊕ h ⊕ n+; b the corresponding Borel subalgebra.

Senote by ≤ the partial ordering on h∗.

Theorem 4.1.1. 1

M(λ) = U(g) ⊗U(b) Cλ

is the Verma module of highest weight λ, and, as U(n−)-modules, ∼ U(n−) = M(λ) u 7→ u(1 ⊗ 1)

as U(n−)-module.

2M( λ) = M(λ)µ, ( dim)(M(λ)λ) = 1. µ∈h∗

3 dim(M(λ)µ) = P(λ − µ), where P is Konstant’s partition function.

In particular M(λ) is a highest weight module of highest weight λ. It is universal in the sense that if µ is another highest weight module of highest weight λ, then there exists a surjection

M(λ)  M Proof. By the PBW Theorem, we have ∼ M(λ) = U(g) ⊗U(b) Cλ = U(n−) ⊗ U(b) ⊗U(b) Cλ as vector space, and as U(n−)-left module, it is isomorphic to U(n−) ⊗ Cλ. + Now, for each α ∈ R , choose x−α ∈ g−α; the PBW Theorem says that U(n−) has as a basis

m1 mn {xm := x−α1 ··· xαn } for mi ∈ Z≥0. 92 CHAPTER 4. HIGHEST WEIGHT THEORY

Pn Then, xm ·vλ is a weight vector of weight λ− i=1 miαi, and by 1, the xm ·vλ form a basis of M(λ). It is now clear that in 2 we even have M M(λ) = M(λ)µ µ∈h∗,µ≤λ

Now, by definition of M(λ), we have that vλ = 1 ⊗ 1 generates M(λ) as a U(g)-module, so M(λ) is in fact a highest weight module. Let M be any U(g) module, and let λ ∈ P(V) be maximal.

Claim. HomU(g)(M(λ), M) 6= 0 Indeed, the adjunction of ⊗ and Hom gives

HomU(g)(M(λ), M) = HomU(g)(U(g) ⊗U(b) Cλ, M) = HomU(b)(Cλ, HomU(g)(U(g), M)) = HomU(b)(Cλ, M) = HomU(h)(Cλ, M) 6= 0

Given by 1 7→ v, v ∈ Mλ. If M is a highest weight module generated by v ∈ Mλ, then fv : M(λ)  M is surjective.

Classification of irreducible/ simple highest weight modules Theorem 4.1.2. 1M( λ), for λ ∈ h∗, has a unique proper maximal sub- module rad(M(λ)) 2 Let L(λ) = M(λ)/ rad(M(λ)). Then there is a bijection   1:1 irreducible highest weight h∗ ←→ U(g)-modules up to isomorphism λ 7→ L(λ)

3 if a simple representation has a highest weight, then it is already a highest weight module. L Proof. 1 Let U ⊆ M(λ) be a U(g)-submodule. Then U = Uµ because, µ∈h∗ since M(λ) is generated by v ∈ M(λ)λ which has dimension one and so any proper submodule is contained in M (λ)µ. In particular, the µ6=λ sum of all proper submodules must be a proper submodule, equal to rad(M(λ)). 4.1. CONSTRUCTION OF HIGHEST WEIGHT MODULES 93

2L( λ) is simple by construction. Claim. L(λ) is a highest weight module and L(λ) = L(µ) =⇒ λ = µ.

Let can : M(λ)  L(λ) be the canonical surjection, then can(1 ⊗ 1) ∈ L(λ)λ is the highest weight vector, so since L(λ) is simple it is generated by can(1 ⊗ 1), so it is a highest weight module. Now assume that M is another simple highest weight module , such that f : M(λ)  M. Then ker(f) ⊂ rad(M(λ)), so by isomorphism theorem we have an induced map L(λ) → M which is non-zero, hence an isomorphism, since M, L(λ) are both simple. We conclude that M(λ) has a unique simple quotient.

Aim. dim(L(λ)) < ∞ ⇐⇒ λ ∈ X+(R+)

Lemma 4.1.3. For α ∈ B,

M(sα · λ) ⊆ M(λ) if sα ◦ λ ≤ λ where Definition 4.1.4.

sα ◦ λ := sα(λ + ρ) − ρ; 1 X ρ = α 2 α∈R+

∗ is called the dot action of WR on h .

∨ Remark 4.1.5. < ρ, α >= 1∀α ∈ B because, for α ∈ B,

∨ 1 X 1 1 1 s (ρ) = ρ− < ρ, α > α = s ( β) + s (α) = β − α = ρ − α α 2 α 2 α 2 2 β6=α

Proof of Lemma 4.1.3. Let sα · λ < λ, α ∈ B ⇐⇒ sα(λ + ρ) − ρ < λ ⇐⇒ ∨ sα(λ) + sα(ρ) − ρ < λ ⇐⇒ sα(λ) + ρ− < ρ, α > α < λ + ρ =⇒ sα(λ) < λ ∨ in our partial ordering. Hence < λ, α >∈ Z≥0. n+1 Claim. Let x−α ∈ g−α. Then ω = x−α vλ 6= 0, where vλ = 1 ⊗ 1 ∈ M(λ), but n+1 xβx−α vλ = 0∀β ∈ B, in other words, ω is a highest weight vector. 94 CHAPTER 4. HIGHEST WEIGHT THEORY

n+1 Proof of Claim. We have that xβω = 0 because [xβ, x−α] = 0 and so xβx−α vλ = n+1 n+1 j j+1 x−α xβvλ = 0. Further, xαx−α vλ = 0 because xαx−αvλ0 = j(n−j +1)x−α vλ ∼ by sl(2, C)-theory, considering sl(2, C) = g−α ⊕ h ⊕ gα and the sl(2, C)- modules gen by vλ, x−αvλ, ··· . By universal property of Verma modules, we have

M(sαλ) → M(λ) 1 ⊗ 1 7→ ω because ω has weight sα · λ. It remains to show that the above map is injective. But for u ∈ U(n−), we have that u(1 ⊗ 1) 7→ uω, the modules M(sα·λ) and M(λ) are both free U(n−) - modules of rank 1, and the latter has no zero divisors by the PBW theorem, so the map must be injective.

Theorem 4.1.6. Let g be a s.s. complex Lie algebra. There is a bijection   1:1 irreducible finite dimensional X+(R+) ←→ representations of g up to isomorphism λ 7→ L(λ) highest weight of V ← v [ 4.1. CONSTRUCTION OF HIGHEST WEIGHT MODULES 95

Question. What is dim(L(λ))? What is dim(L(λ))µ?

Weyl’s dimension formula Theorem 4.1.7. Let g be a s.s. complex finite dimensional Lie algebra, λ ∈ X+(R). Then,

∨ Q < λ + ρ, α > α∈R+ dim(L(λ)) = ∨ Q < ρ, α > α∈R+

Example 4.1.8. Let g = sl(2, C), h ⊂ g standard Cartan subalgebra. Then

+ + 1:1 X (R ) ←→ N nω1 ← n [ L(nω1) is by definition the n+1 dimensional irreducible sl(2, C)-module. Ap- plying the dimension formula we get

∨ ∨ < λ + ρ, α > < λ, α > +1 dim(L(nω1)) = ∨ = = n + 1 < ρ, α > 1

To express dim(L(λ))µ one usually uses . 96 CHAPTER 4. HIGHEST WEIGHT THEORY 4.2 Character formula

Let mλ(µ) = dim(L(λ)µ), and let P be the weight lattice.

Definition 4.2.1. Let f ∈ Maps(P, Z) be a function. Define Supp(f) := {λ ∈ P: f(λ) 6= 0}

Let

H := {f ∈ Maps(P, Z): ∃S ⊆ P finite ∀λ ∈ Supp(f), λ ≤ s for some s ∈ S} We define a ring structure on H by (f + g)(λ) = f(λ) + g(λ) and (fg)(λ) = P µ∈P f(µ)g(λ − µ), which is well defined because of the definition of H. For any λ ∈ P, define

λ e (µ) = δλ,µ.

Proposition 4.2.2. We have 1. eλ ∈ H

2. eλeµ = eλ+µ

3. e0 is a 1 for H Definition 4.2.3. For each finite dimensional U(g) module V, we define

X µ ∗ ch(V) := dim(Vµ)e ∈ Zh µ∈h∗

This is called the (formal) character of V.

1 P Proposition 4.2.4. Let ρ = 2 α∈R+ α. Then Y X Ch(M(λ)) (eα/2 − e−α/2) = Ch(M(λ)) (−1)l(w)ew(ρ) = eλ+ρ α∈R+ w∈W Theorem 4.2.5 (). X X Ch(L(λ)) (−1)l(w)ew(ρ) = (−1)l(w)ew(λ+ρ) w∈W w∈W 4.3. 97

—————————————————————————————— Lecture 04/07/2011

4.3 Category O

Definition 4.3.1. Let g be a semi-simple complex finite dimensional Lie algebra with a fixed Cartan subalgebra h ⊆ g and R+ ⊆ R be a system L of positive roots, hence a Borel subalgebra b = h gα. Then the BGG α∈R+ category O(g) = O(g, h, R+) is the full subcategory of U(g)-modules given by all the U(g)-modules M which satisfy O1 M is finitely generated as a U(g)-module. L O2 There is a weight space decomposition M = Mλ. λ∈h∗ O3 M is locally U(b)-finite, i.e. for every m ∈ M there exists m ∈ N ⊆ M a finite dimensional U(b) invariant vector subspace. Remark 4.3.2. O3 implies that P(M), the set of weights of M, is contained in the set

{finite union of sets of the form λ − Q+}

Example 4.3.3.

• Every finite dimensional U(g)-module is an object in O(g).

• Every Verma module M(λ) is an object of O(g).

Proposition 4.3.4. 1. The category O(g) is noetherian, i.e. every object is a noetherian U(g)- module.

2. O(g) is closed under taking finite direct sums of objects, submodules and quotients.

3. Given an object M in O(g), and E a finite dimensional U(g)-module, then M ⊗ E is an object in O(g). 98 CHAPTER 4. HIGHEST WEIGHT THEORY

4. Given an object M in O(g), then it is finitely generated as an U(n−)- module. Remark 4.3.5. Since M(λ) is in O(g), Proposition 4.3.4 (2) implies that L(λ) is also an object in O(g). Proof. 1. Recall that by the PBW theorem, we have that gr(U(g)) ∼= §(g). In particular, U(g) is noetherian. Since every finitely generated module over a is noetherian, O1 implies 1.

2. Taking finite sums is ok. Submodules of finitely generated modules over a noetherian ring are again finitely generated, hence O1 holds, we already proved that O2 holds, and O3 id direct. L L 3.M , E as in the statement.Let M = Mµ, E = Eν where (M ⊗ E)λ µ∈h∗ µ∈h∗ is spanned by all vectors m ⊗ e where m ∈ Mµ, e ∈ Eν and λ = µ + ν. Let m1, ··· mn be generators of M as a U(g)-module and let v1, ··· vr be a basis of E.

Claim. The set {mi ⊗ vj : 1 ≤ i ≤ n; 1 ≤ j ≤ r} generate M ⊗ E as a U(g)-module. Since M is already finitely generated as a U(g)-module, it follows. Remark 4.3.6. Category O(g, b) is an , i.e. it satisfies: A0 There is a (namely the module {0}).

A1 For every pair M, N of objects, the direct sum and product is again in O(g, b).

A2 Every morphism between objects has kernel and cokernel.

A3 Every monomorphism is part of a kernel. Every epimorphism is part of a cokernel. Theorem 4.3.7. Every object M ∈ O(g, b) has a finite filtration

{0} ⊆ M0 ⊆ M1 ⊆ · · · ⊆ Mr = M

∼ ∗ of U(g)-modules such that Mi/Mi−1 = L(µi) for some µi ∈ h . This is called the Jordan H¨olderseries. 4.3. CATEGORY O 99

Remark 4.3.8. This filtration is not unique: Say M = L(λ) ⊕ L(µ), then {0} ⊆ L(λ) ⊆ M, {0} ⊆ L(µ) ⊆ M are both Jordan H¨olderseries but of course different. However, the sub quotients in a given filtration do not depend on the filtration.

Question. What is [M(λ), L(µ)] = aλµ?

If we know aλ, µ, then, via the following

Fact 4.3.9. If [M(λ) : L(µ)] 6= 0, then µ ∈ WR·λ and aλµ 6= 0 for only finitely 0 many µ s and in particular, M(λ) =L( λ) if λ is minimal in it’s WR-orbit under dot-action.

Example 4.3.10. Let g = sl(2, C).

• λ = 0

•Ø −2

•Ø −4

. .

We get a short exact sequence

M(sα · 0) ,→ M(0)  L(0) and M(sα · 0) = L(sα · 0).

Theorem 4.3.11 (Conjectured by Kazhdan-Lusztig, proven by several peo- ple in the 1980’s).

[M(y · λ) : L(x · λ)] = px,y(1)

−1 Where px,y is a polynomial defined recursively, in the ring Z[q, q ]. It is called a Kazdhan-Lusztig polynomial. 100 CHAPTER 4. HIGHEST WEIGHT THEORY

Warning. In general almost nothing is known about irreducible U(g)-modules not of the form L(λ). Many statements can be reformulated aas statements about objects in O(g, b).

Example 4.3.12. Let M be an irreducible U(g)-module (g a finite dimen- sional f.d. Lie algebra). Almost nothing known!

—————————————————————————————— Lecture 06/07/2011

4.4 Cartan matrices and Dynkin diagrams

Let (V, R) be a reduced root system in an n-dimensional vector space V. Let B = {α1, ··· , αn} be a chosen basis with a fixed ordering.

Definition 4.4.1. The associated to (V, R, B) is the n × n ∨ 2(αi,αj ) matrix A = (aij)1≤i,j≤n, where aij :=< αi, αj >= (αi,αi) Remark 4.4.2. The Cartan matrix depends on the choice of B and the order on the simple roots, however, different orderings just gives a Cartan Matrix which differs from the original by conjugation with a permutation matrix; and similarly if we choose a different basis.

Proposition 4.4.3. The Cartan matrix A of a root system (V, R) satisfies:

1.A ij ∈ Z.

2.A ii = 2.

3.A ij ≤ 0 if i 6= j.

4.A ij = 0 ⇐⇒ Aji = 0.

5. There exists a diagonal matrix D with positive diagonal entries such that S = DA is symmetric.

Definition 4.4.4. A n×n matrix satisfying the above is called an abstract Cartan Matrix. Two Cartan abstract matrices are isomorphic if they differ by conjugation with a permutation matrix. 4.4. CARTAN MATRICES AND DYNKIN DIAGRAMS 101

Proposition 4.4.5. A root system (V, R) is reducible if and only if the corresponding Cartan matrix is block diagonal with more than one block.

Definition 4.4.6. An abstract Cartan matrix is reducible if it is block di- agonal with more than one block after maybe renumbering simultaneously rows and columns.

Definition 4.4.7. Let A be a Cartan matrix associated to a reduced root system (V, R, B) as above. Then we associate to A a finite graph which is called a as follows:

• to each simple root (i.e. to each element of B) we associate a vertex of the graph.

• to vertices corresponding to αi ∈ B and αj ∈ B we put AijAji number of lines connecting the two vertices.

• If αi, αj ∈ B, with i 6= j with vertices connected by at least one line segment, and ||αi|| > ||αj|| then we indicate this by putting >, so means ||αi|| > ||αj||, if i < j and the nodes are ordered accordingly.

Examples.

2 0 g = sl(2, ) ⊕ sl(2, ) A = C C 0 2

 2 −1 g = sl(3, ) A = C −1 2 102 CHAPTER 4. HIGHEST WEIGHT THEORY

[Picture: the root system of sl(3, C), with roots ±(ε1 − ε2), ±(ε2 − ε3), ±(ε1 − ε3) and simple roots α = ε1 − ε2, β = ε2 − ε3, so that α + β = ε1 − ε3.]

g of type G2; A = \begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}

[Picture: the root system of type G2; the labels ε1, ε2, ε1 − ε2 = α, 2ε1 + ε2 and ε1 + 2ε2 mark some of the roots.]

Recall. Given two bases B, B′ of a root system (V, R), there exists a unique w ∈ WR such that w(B) = B′.

Proposition 4.4.8. Let (V, R) be the root system of some finite dimensional semisimple complex Lie algebra, and let B ⊂ R, B′ ⊂ R be two choices of bases (equivalently, two choices of positive roots). Then the corresponding Cartan matrices are isomorphic.

Proof. Let B = {α1, · · · , αn}, B′ = {α1′, · · · , αn′} be ordered in such a way that w(αi) = αi′. Then:

⟨αi′, αj′∨⟩ = 2(αi′, αj′)/(αi′, αi′) = 2(w(αi), w(αj))/(w(αi), w(αi)) = 2(αi, αj)/(αi, αi) = ⟨αi, αj∨⟩,

where the middle equality uses that w preserves the bilinear form (−, −).

Hence the corresponding Cartan matrices agree up to a simultaneous permutation of rows and columns.

Proposition 4.4.9. Let R, R′ be non-isomorphic reduced root systems with bases B, B′ respectively. Then the Cartan matrices are not isomorphic.

Proof. Exercise.

Recall. Let (V, R), (V′, R′) be root systems. They are isomorphic if there exists an isomorphism of vector spaces F : V → V′ such that F(R) = R′ and ⟨F(α), F(β)∨⟩ = ⟨α, β∨⟩.
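For small ranks the statements of Propositions 4.4.8 and 4.4.9 can be tested by brute force; the following sketch (not from the notes) checks whether two Cartan matrices agree up to a simultaneous permutation of rows and columns.

```python
import numpy as np
from itertools import permutations

def cartan_isomorphic(A, B):
    """Brute-force test: does some simultaneous row/column permutation take A to B?"""
    A, B = np.asarray(A), np.asarray(B)
    if A.shape != B.shape:
        return False
    n = len(A)
    return any((A[np.ix_(p, p)] == B).all() for p in permutations(range(n)))

print(cartan_isomorphic([[2, -1], [-2, 2]], [[2, -2], [-1, 2]]))   # True  (same rank 2 system, basis reordered)
print(cartan_isomorphic([[2, -1], [-2, 2]], [[2, -3], [-1, 2]]))   # False (B_2 vs G_2)
```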

Proposition 4.4.10. Let (V, R, B), (V′, R′, B′) be root systems with corresponding bases. Assume there exists a bijection f : B → B′ transforming the Cartan matrix for (V, R, B) into the Cartan matrix for (V′, R′, B′). Then (V, R) is isomorphic to (V′, R′) via an isomorphism F : V → V′ which extends f. In particular, the Cartan matrix determines the root system up to isomorphism.

Proof. Let B = {α1, · · · , αn}, B′ = {α1′, · · · , αn′} be bases of R resp. R′; in particular they are bases of V resp. V′. Let F : V → V′ be the linear map with F(αi) = f(αi) = αi′. Then we have

sF(α)(F(β)) = F(β) − ⟨F(β), F(α)∨⟩ F(α) = F(β − ⟨β, α∨⟩ α) = F(sα(β)) for α, β ∈ B (using that the Cartan matrices agree), hence

sF(α) ◦ F = F ◦ sα and so

WR → WR′, w ↦ F ◦ w ◦ F⁻¹

is an isomorphism of the Weyl groups. Further, since F(β) = sF(α)(F(sα(β))) ∈ R′, we get F(R) ⊂ R′, and ⟨α, β∨⟩ = ⟨F(α), F(β)∨⟩ is an exercise.

Proposition 4.4.11. Let (V, R) be a reduced root system, and let Γ be its Dynkin diagram. Then

1. (V, R) is irreducible ⇐⇒ Γ is connected.

2. Γ has no cycles, i.e. it is a forest. By 1., if (V, R) is irreducible, then Γ is a tree.

Proof. 1. If R = R1 ∪ R2 is reducible with bases B1, B2, then the Dynkin diagrams Γ(Ri) are disjoint, with no edges between them. Conversely, assume Γ(R) is not connected. Then by definition we have a basis B = B1 ∪ B2 of R such that there are no connections between the vertices of B1 and the vertices of B2. Let Vi denote the span of Bi inside V. Since B1 and B2 are orthogonal with respect to (−, −), both Vi are WR-stable, and Ri := R ∩ Vi gives a decomposition R = R1 ∪ R2.

2. Assume β1, · · · , βn ∈ B correspond to the vertices of a cycle, and let γi = βi/‖βi‖. If there is an edge between the vertices of βi and βj, then 4(γi, γj)² = AijAji ≥ 1 and (γi, γj) < 0, so (γi, γj) ≤ −1/2. Since a cycle on n vertices has n edges and the γi are linearly independent, we get

0 < ‖γ1 + · · · + γn‖² = n + 2 Σ_{i<j} (γi, γj) ≤ n + 2[(γ1, γ2) + (γ2, γ3) + · · · + (γn−1, γn) + (γn, γ1)] ≤ n − n = 0,

a contradiction!

Lecture 11/07/2011

4.4.1 Classification of irreducible, reduced root systems / Dynkin diagrams

Theorem 4.4.12. Let (V, R) be a reduced irreducible root system, and let Γ = Γ(R) be the corresponding Dynkin diagram. Then Γ is isomorphic to exactly one diagram from the following list.

An

Bn

Cn

Dn

G2

F4

E6

E7

E8

Remark 4.4.13. It’s easy to see by hand that the Dynkin diagrams listed above are pairwise non-isomorphic.

Proof of Theorem 4.4.12. Recall that if βi, βj, i < j, are simple roots, and if we denote f(i, j) = ‖βi‖²/‖βj‖², then exactly one of the following cases occurs:

I aij = 0 = aji; θ(βi, βj) = π/2.

II aij = −1 = aji; f(i, j) = 1; θ(βi, βj) = 2π/3.

III f(j, i) = 2, f(i, j) = 1/2; aij = −2, aji = −1; ‖βj‖ = √2 ‖βi‖; θ(βi, βj) = 3π/4.

IV f(j, i) = 3, f(i, j) = 1/3; aij = −3, aji = −1; θ(βi, βj) = 5π/6.
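These four cases can be recovered mechanically from the identities 4cos²θ = aij·aji and aij/aji = ‖βj‖²/‖βi‖² = f(j, i); a short sketch (not from the notes):

```python
import numpy as np

# For each admissible pair (a_ij, a_ji) with i != j, recover the angle and the length ratio.
for aij, aji in [(0, 0), (-1, -1), (-2, -1), (-3, -1)]:
    cos2 = aij * aji / 4                                  # cos^2(theta) = a_ij * a_ji / 4
    theta = np.degrees(np.arccos(-np.sqrt(cos2)))         # the angle is right or obtuse
    ratio = None if aji == 0 else aij / aji               # ||beta_j||^2 / ||beta_i||^2 = f(j, i)
    print(f"a_ij={aij}, a_ji={aji}: theta={theta:.0f} deg, f(j,i)={ratio}")
# prints the angles 90, 120, 135, 150 degrees with f(j,i) = None, 1, 2, 3
```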

Now: Classify the Dynkin diagrams.

1. Assume Γ has a triple edge, between vertices i and j say. Then f(j, i) = 3 and (γi, γj) = −√3/2 (where γi = βi/‖βi‖ as before). Assume there is an extra vertex k, w.l.o.g. connected to i. Then (γi, γk) ≤ −1/2 and (γj, γk) ≤ 0, hence

‖√3 γj + 2γi + γk‖² ≤ 3 + 4 + 1 − (2 · √3 · 2 · √3)/2 − (2 · 2)/2 = 8 − 6 − 2 = 0,

a contradiction, since γi, γj, γk are linearly independent. So a connected diagram with a triple edge must be the diagram of G2.

Similarly, one considers the remaining cases:

2. Assume Γ has two double edges: get a contradiction.

3. Assume Γ has a double edge and a branching point: get a contradiction.

4. Assume Γ has a double edge (and hence, by 2. and 3., no further double edge and no branching point): get Bn, Cn, F4.

5. Assume Γ has only simple edges and no branching point: get type An.

6. Assume Γ has more than one branching point: get a contradiction.

7. Assume Γ has only simple edges and exactly one branching point: get Dn, E6, E7, E8.

Problem. Find a simple Lie algebra for each Dynkin Diagram from Theorem 4.4.12.

Answer. The existence and uniqueness theorems of Serre.

Theorem 4.4.14 (Serre). Assume g is a semisimple complex finite dimensional Lie algebra with Cartan matrix A ∈ Matn×n(Z). Then g is isomorphic as a Lie algebra to the Lie algebra with generators {ei, fi, hi : 1 ≤ i ≤ n} and relations:

“sl(2, C)-type relations”

1. [hi, hj] = 0

2. [hi, ej] = aij ej

3. [hi, fj] = −aij fj

4. [ei, fj] = δij hi

“Serre relations”

5. (ad ei)^{−aij+1} ej = 0, for i ≠ j.

6. (ad fi)^{−aij+1} fj = 0, for i ≠ j.

Example 4.4.15. For sl(n, C) the Cartan matrix is given by aii = 2, ai,i+1 = −1 = ai,i−1, and aij = 0 otherwise. Then relation 5 becomes (inside U(g)) ei ej = ej ei if |i − j| > 1, and ei² ej − 2 ei ej ei + ej ei² = 0 if |i − j| = 1; similarly relation 6 for the fi's.
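One can check the Serre relation concretely for sl(3, C), realizing the Chevalley generators e1, e2 as the matrix units E12, E23 (a sketch, not from the notes):

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit E_ij in gl(n, C)."""
    M = np.zeros((n, n), dtype=complex)
    M[i - 1, j - 1] = 1
    return M

ad = lambda x: (lambda y: x @ y - y @ x)     # ad(x)(y) = [x, y]

e1, e2 = E(1, 2), E(2, 3)                    # e_1, e_2 for sl(3, C)

# Relation 5 with a_12 = -1: (ad e_1)^2 e_2 = 0, although [e_1, e_2] = E_13 is nonzero.
print(np.allclose(ad(e1)(ad(e1)(e2)), 0))    # True
print(np.allclose(ad(e1)(e2), 0))            # False
```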

To define a Lie algebra by generators and relations we need a free Lie algebra:

Definition 4.4.16. A free Lie algebra on a set X of (so called) generators is a pair (L, i), where L is a Lie algebra and i is a map i : X → L, such that the following universal property holds: for any Lie algebra g (over the same field as L) and any function ϕ : X → g, there is a unique Lie algebra homomorphism L → g making the following diagram commute:

[Diagram: i : X → L, ϕ : X → g, and the unique homomorphism L → g closing the triangle.]

Remark 4.4.17. The free Lie algebra on X is unique up to isomorphism; existence follows from an explicit construction (for instance, as the Lie subalgebra generated by X inside the tensor algebra of the vector space with basis X).

A Lie algebra generated by X with relations R is then the quotient of the free Lie algebra L by the ideal generated by the relations.

Theorem 4.4.18 (Serre). Given an irreducible Cartan matrix, the Lie algebra with generators and relations as in Theorem 4.4.14 is a simple complex finite dimensional Lie algebra. Two such algebras are isomorphic if and only if their Cartan matrices are isomorphic.

Proof: see Humphreys.

Last Lecture! Last time: we classified irreducible root systems via their Dynkin diagrams, hence obtained a classification of complex finite dimensional simple Lie algebras. On the other hand,

Question. Can we classify the Weyl groups? Does WR determine the root system?

Answer. No: WBn ≅ WCn. Weyl groups are special examples of Coxeter groups.

Goal. Classify those.

Definition 4.4.19. A Coxeter system is a pair (W, S), where

• W is a group,

• S ⊆ W is a set of generators of W, subject only to the following relations:

(ss′)^{m(s,s′)} = e

for every s, s′ ∈ S and some m(s, s′) ∈ N ∪ {∞}, such that m(s, s) = 1, m(s, s′) ≥ 2 for s ≠ s′, and m(s, s′) = ∞ if there is no relation between s and s′. To a Coxeter system we may associate a Coxeter graph by putting a vertex for each s ∈ S, and an edge from s to s′ labelled by m(s, s′). Often one omits the edges labelled by 2 and omits the labels 3.

Examples. 1. Let W = S2 and S = {(12)} ⊂ S2; then S generates S2 with the single relation (12)² = e. More generally, let W = Sn and S = {si := (i, i + 1) : 1 ≤ i ≤ n − 1}, which generates the symmetric group. One can show that the defining relations are

si² = e (i.e. m(si, si) = 1),

si sj = sj si if |i − j| > 1 (i.e. m(si, sj) = 2),

si sj si = sj si sj ⟺ (si sj)³ = e (so m(si, sj) = 3) if |i − j| = 1.

So (Sn, {si : 1 ≤ i ≤ n − 1}) is a Coxeter system.
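A quick computational check of these relations in S4, composing permutations as functions (a sketch, not from the notes):

```python
def compose(p, q):
    """(p ∘ q)(k) = p(q(k)); permutations encoded as tuples of images of 0, ..., n-1."""
    return tuple(p[q[k]] for k in range(len(p)))

def s(i, n=4):
    """Adjacent transposition s_i = (i, i+1) (1-based), as a permutation of {0, ..., n-1}."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

e, s1, s2, s3 = tuple(range(4)), s(1), s(2), s(3)

assert compose(s1, s1) == e                                            # s_i^2 = e
assert compose(s1, s3) == compose(s3, s1)                              # |i - j| > 1: commute
assert compose(s1, compose(s2, s1)) == compose(s2, compose(s1, s2))    # braid relation for |i - j| = 1
print("Coxeter relations hold for the adjacent transpositions of S_4")
```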

2. The universal Coxeter system (W, S) is given by generators from S and relations m(s, s) = 1, m(s, s′) = ∞ if s ≠ s′. If |S| = 2, then the universal Coxeter system has W = the infinite dihedral group D∞.

Remark 4.4.20. This group appears as a “Weyl group” for an infinite dimensional Lie algebra, the affine Lie algebra ŝl(2, C). This Lie algebra is the Lie algebra corresponding, via generators and relations as in Serre’s theorem, to the affine Dynkin diagram of type Â1.

More concrete description: Start with g = sl(2, C). Then consider gloop := g ⊗ C[t, t⁻¹], which is a Lie algebra via [a ⊗ t^n, b ⊗ t^m] := [a, b] ⊗ t^{n+m}. Check: this satisfies the axioms of a Lie bracket. Now add an extra central element c:

gloop ⊕ Cc := g ⊗ C[t, t⁻¹] ⊕ Cc,

[a ⊗ t^n + αc, b ⊗ t^m + βc] := [a, b] ⊗ t^{n+m} + n K(a, b) δ_{n+m,0} c,

where K is an invariant bilinear form on g (e.g. the Killing form).

Now add an extra derivation: δ(a ⊗ t^m + αc) := t d/dt (a ⊗ t^m) = m a ⊗ t^m, which indeed defines a derivation of gloop ⊕ Cc. Now, as a vector space,

ŝl2 = sl2 ⊗ C[t, t⁻¹] ⊕ Cc ⊕ Cd,

where setting [d, A] = δ(A), together with the earlier Lie bracket, makes ŝl2 into a Lie algebra. Then one gets a Weyl group via root theory, and obtains:

W^aff = WA1 ⋉ ZR
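A small computational sketch of the loop-algebra bracket with its central term (not from the lecture; elements are encoded ad hoc as a dictionary {power of t: 2×2 matrix} together with a coefficient of c, and the invariant form K is taken to be the trace form tr(ab)):

```python
import numpy as np

def bracket(x, y):
    """[a⊗t^n + αc, b⊗t^m + βc] = [a,b]⊗t^(n+m) + n·K(a,b)·δ_{n+m,0}·c with K(a,b) = tr(ab)."""
    (xs, _), (ys, _) = x, y
    terms, central = {}, 0.0
    for n, a in xs.items():
        for m, b in ys.items():
            terms[n + m] = terms.get(n + m, np.zeros((2, 2), dtype=complex)) + (a @ b - b @ a)
            if n + m == 0:
                central += n * np.trace(a @ b)
    return terms, central

e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)

terms, central = bracket(({1: e}, 0), ({-1: f}, 0))   # [e⊗t, f⊗t^(-1)]
print(terms[0])       # [e, f] = h = diag(1, -1)
print(central)        # (1+0j): the coefficient of c is 1·tr(ef) = 1
```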

3. (W, S = {s1, s2, s3}) with m(s1, s2) = 3, m(s1, s3) = 2, m(s2, s3) = ∞. Then W ≅ PSL(2, Z) := SL(2, Z)/(±1); namely, define Φ : W → PSL(2, Z) by

s1 ↦ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, s2 ↦ \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}, s3 ↦ \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.

Well defined:

Φ(s1)² = Φ(s2)² = Φ(s3)² = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},

Φ(s1s3)² = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}² = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, which is the identity modulo ±1,

Φ(s1s2)³ = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},

Φ(s2s3)^a = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} (infinite order!).
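These matrix identities are easy to re-check numerically (a sketch, not from the notes):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[-1, 1], [0, 1]])
s3 = np.array([[-1, 0], [0, 1]])
I = np.eye(2, dtype=int)
mp = np.linalg.matrix_power

assert (mp(s1, 2) == I).all() and (mp(s2, 2) == I).all() and (mp(s3, 2) == I).all()
assert (mp(s1 @ s3, 2) == -I).all()      # identity modulo ±1: s1*s3 has order 2 in the quotient
assert (mp(s1 @ s2, 3) == -I).all()      # identity modulo ±1: s1*s2 has order 3 in the quotient
print(mp(s2 @ s3, 5))                    # [[1 5] [0 1]]: s2*s3 has infinite order
```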

Note: \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} and \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} generate SL(2, Z) as a group, hence Φ is surjective. To show that it is injective, one proves that PGL(2, Z) is a free product of subgroups of order 2 and 3 generated by the images above.

Definition 4.4.21. A group W is a Coxeter group if there exists S ⊂ W such that (W, S) is a Coxeter system. Two Coxeter groups are isomorphic as Coxeter groups if there exist S ⊂ W, S′ ⊂ W′ such that (W, S) and (W′, S′) are Coxeter systems, together with an isomorphism ϕ : W → W′ of groups such that ϕ(S) = S′.

Theorem 4.4.22. There exists a classification of finite Coxeter groups up to isomorphism. More precisely, they are classified by the following Coxeter diagrams:

An (n ≥ 1)

Bn (one edge labelled 4)

Dn

F4 (one edge labelled 4)

E6

E7

E8

H3 (one edge labelled 5)

H4 (one edge labelled 5)

I2(m) (one edge labelled m)

Main idea. Realize W as a reflection group. Let V be a vector space with basis {αs : s ∈ S}, and define a bilinear form B on V by:

B(αs, αs′) := −cos(π/m(s, s′)) if m(s, s′) < ∞, and B(αs, αs′) := −1 if m(s, s′) = ∞.

In particular, B(αs, αs) = 1, and if s ≠ s′ then B(αs, αs′) ≤ 0. We get a reflection defined by rs(v) = v − 2B(v, αs)αs, and then a representation of W, called the geometric representation, defined by s ↦ rs; hence we realize W (when finite) as a finite subgroup of GL(V) generated by reflections.
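To close, a sketch (not from the notes) of the geometric representation for the Coxeter system of type A2, i.e. S = {s, s′} with m(s, s′) = 3, checking the expected orders of the reflections and of their product.

```python
import numpy as np

m = np.array([[1, 3],
              [3, 1]])                      # Coxeter matrix: m(s, s) = 1, m(s, s') = 3
B = -np.cos(np.pi / m)                      # B(a_s, a_s') = -cos(pi / m(s, s'))

def reflection(i):
    """Matrix of r_s: v -> v - 2*B(v, a_i)*a_i, written in the basis {a_1, a_2}."""
    R = np.eye(2)
    R[i, :] -= 2 * B[i, :]                  # subtract 2*B(a_j, a_i) from the a_i-coordinate of each basis vector
    return R

r1, r2 = reflection(0), reflection(1)
print(np.allclose(r1 @ r1, np.eye(2)))                              # True: r_s^2 = id
print(np.allclose(np.linalg.matrix_power(r1 @ r2, 3), np.eye(2)))   # True: (r_s r_s')^3 = id
```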