
Finite Dimensional Lie Algebras and their Representations Michaelmas 2006

Prof. I. Grojnowski∗ Chris Almost†

The exam questions will consist of a subset of the in-class exercises.

Contents

Contents

1 Motivation and Some Definitions
  1.1 Lie Groups and Lie Algebras
  1.2 Representations

2 Representations of sl2
  2.1 Classification
  2.2 Consequences
  2.3 Characters

3 Semi-Simple Lie Algebras: Structure and Classification
  3.1 Solvable and Nilpotent Lie Algebras
  3.2 Structure Theory

4 Root Systems
  4.1 Abstract Root Systems
  4.2 Dynkin Diagrams
  4.3 Existence and Uniqueness of an Associated Lie Algebra

5
  5.1 Theorem of the Highest Weight
  5.2 Proof of the Theorem of the Highest Weight
  5.3 Weyl Character Formula

6 Crystals
  6.1 Littelmann Paths

Index

[email protected][email protected]


1 Motivation and Some Definitions

1.1 Lie Groups and Lie Algebras

This class is about the structure and representations of the so-called simple Lie groups

This class is about the structure and representations of the so-called simple Lie groups

SLn = {A ∈ Matn | det A = 1},   SOn = {A ∈ SLn | AAᵀ = I},   Sp2n,

and exactly five others. The only technology we will use is linear algebra. These groups are slightly complicated, and they have a topology on them. To study them we pass to a "linearization," the Lie algebra, and study these instead.

1.1.1 Definition. A linear algebraic group is a subgroup of GLn, for some n, defined by polynomial equations in the coefficients.

1.1.2 Example. All of the groups mentioned so far are linear algebraic groups, as is the collection of upper triangular matrices.

This definition requires an embedding; it is desirable to have a definition which does not. It turns out that the linear algebraic groups are exactly the affine algebraic groups, i.e. the affine algebraic varieties on which multiplication and inversion are regular functions.

1.1.3 Definition. The Lie algebra associated with G is g = T1G, the tangent space to G at the identity.

1.1.4 Example. Let g = ( 1 0 ; 0 1 ) + ε( a b ; c d ), where ε² = 0. Then

det g = (1 + εa)(1 + εd) − ε²bc = 1 + ε(a + d).

Therefore det g = 1 if and only if tr g = 0.
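The dual-number computation above can be checked symbolically; a minimal sketch with sympy, truncating at ε² (the variable names are mine, not from the notes):

```python
import sympy as sp

# Work in the dual numbers E = C[eps]/(eps^2): truncate every expression at eps^2.
a, b, c, d, eps = sp.symbols('a b c d eps')

g = sp.eye(2) + eps * sp.Matrix([[a, b], [c, d]])
det_g = sp.expand(g.det())

# Discard the eps^2 term, since eps^2 = 0 in E.
det_mod = det_g.coeff(eps, 0) + eps * det_g.coeff(eps, 1)

# det(I + eps*A) = 1 + eps*tr(A)
assert sp.simplify(det_mod - (1 + eps * (a + d))) == 0
```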

We are working in the ring E = C[ε]/(ε² = 0) (the so-called dual numbers). If G ⊆ GLn is a linear algebraic group, we can consider

G(E) = {A ∈ Matn(E) | A satisfies the equations defining G}.

There is a natural map π : G(E) → G(C) induced by the ring map π : ε ↦ 0.

1.1.5 Definition. G(E) is the tangent bundle to G, denoted TG, so

g = T₁G = π⁻¹(I) = {A ∈ Matn(C) | I + εA ∈ G(E)}.

1.1.6 Exercise. Show that this definition is equivalent to the usual definition of TG (say, from differential geometry).

1.1.7 Examples.
1. Let G = GLn = {A ∈ Matn | A⁻¹ ∈ Matn}. Then

G(E) = GLn ⊕ Matn·ε = {A + Bε | A⁻¹ ∈ Matn},

since (A + Bε)(A⁻¹ − A⁻¹BA⁻¹ε) = I. Therefore

gln := T₁GLn = Matn, a vector space of dimension n².

2. As an exercise, show that det(I + Xε) = 1 + tr(X)ε. As a corollary,

sln := T₁SLn = {X ∈ Matn | tr(X) = 0}.

3. Let G = On = {A | AᵀA = I}. Then (I + Xε)ᵀ(I + Xε) = I + (X + Xᵀ)ε, so

on := T₁On = {X ∈ Matn(C) | X + Xᵀ = 0}.

Remark. If X ∈ on then tr X = 0 (when 2 ≠ 0), so the Lie algebra of On is contained in the Lie algebra of SLn. Also note that on = son, so the Lie algebra does not keep track of whether the group is connected.

What structure on g comes from the group structure on G? Notice that (I + Aε)(I + Bε) = I + (A + B)ε, so multiplication in G only sees the additive structure of g. Instead we use the commutator map

G × G → G : (P, Q) ↦ PQP⁻¹Q⁻¹,

and consider it infinitesimally. If P = I + Aε and Q = I + Bδ, then P⁻¹ = I − Aε and Q⁻¹ = I − Bδ, so

PQP⁻¹Q⁻¹ = I + (AB − BA)εδ.

Let [A, B] denote AB − BA.

1.1.8 Exercises.
1. Show that (PQP⁻¹Q⁻¹)⁻¹ = QPQ⁻¹P⁻¹ implies [X, Y] = −[Y, X].
2. Show that the fact that multiplication is associative in G implies

[[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0.

1.1.9 Definition. Let K be a field of characteristic not 2 or 3. A Lie algebra g over K is a vector space over K equipped with a bilinear map [·,·] : Λ²g → g, the Lie bracket, such that for all X, Y, Z ∈ g,
1. [X, Y] = −[Y, X] (skew-symmetry); and
2. [[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0 (Jacobi identity).

1.1.10 Example. In most of the following the Lie bracket is inherited from gln. The following are Lie algebras.

1. sln = {A ∈ gln | tr A = 0};
2. son = {A ∈ gln | A + Aᵀ = 0};
3. sp2n = {A ∈ gl2n | JAᵀJ⁻¹ + A = 0}, where J = anti-diag(1, …, 1, −1, …, −1), with n ones followed by n minus-ones.

4. Upper triangular matrices;

5. Strictly upper triangular matrices;

6. Let V be any vector space and define the Lie bracket to be the zero map. This example is known as the Abelian Lie algebra.

1.1.11 Exercises.
1. Check directly that gln = Matn is a Lie algebra with Lie bracket [A, B] = AB − BA, and in fact that this works for any K-algebra;
2. Check that all of the above are truly subalgebras, i.e. vector subspaces closed under the Lie bracket.
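The first exercise can be sanity-checked numerically on random matrices; a minimal sketch (not a proof, just a spot check):

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(A, B):
    """Commutator bracket on gl_n: [A, B] = AB - BA."""
    return A @ B - B @ A

n = 4
X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))

# Skew-symmetry and the Jacobi identity for gl_n = Mat_n.
assert np.allclose(bracket(X, Y), -bracket(Y, X))
jacobi = (bracket(bracket(X, Y), Z) + bracket(bracket(Y, Z), X)
          + bracket(bracket(Z, X), Y))
assert np.allclose(jacobi, 0)
```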

1.1.12 Exercise. Classify all Lie algebras g over K with dimK g ≤ 3.

1.2 Representations

1.2.1 Definition. A Lie algebra homomorphism ϕ : g1 → g2 is a linear map such that ϕ([x, y]₁) = [ϕ(x), ϕ(y)]₂. A Lie algebra representation (or simply representation) of a Lie algebra g on a vector space V is a Lie algebra homomorphism ϕ : g → glV.

1.2.2 Example. If g ⊆ glV then the inclusion is a representation of g on V. For example, Kⁿ is a representation of son ⊆ gln.

1.2.3 Definition. Let x ∈ g and define the adjoint of x to be

ad x : g → g : y ↦ [x, y],

a linear map on g.

1.2.4 Lemma. The adjoint map ad : g → End(g) is linear, and in fact ad is a representation of g on itself, called the adjoint representation.

PROOF: We must check that ad[x, y] = ad x ad y − ad y ad x. For any z,

0 = [[x, y], z] + [[y, z], x] + [[z, x], y]   (Jacobi identity)
0 = [[x, y], z] − [x, [y, z]] + [y, [x, z]]   (skew-symmetry)
[[x, y], z] = [x, [y, z]] − [y, [x, z]]

so ad[x, y](z) = (ad x ad y − ad y ad x)(z) for every z, implying the result. ∎

1.2.5 Definition. The center of g is

Zg = {x ∈ g | [x, y] = 0 for every y ∈ g} = ker(ad : g → End(g)).

Warning: the notation Zg is not used in class (in fact, no notation for the center is used in class).

By the definition of the center, the adjoint homomorphism embeds g into glg if and only if g has trivial center.

1.2.6 Example. The adjoint homomorphism maps g = ( 0 K ; 0 0 ) to zero in glg.

1.2.7 Theorem (Ado). Any finite dimensional Lie algebra g has a finite dimensional faithful representation, i.e. g ↪ gln for some n.

PROOF: Omitted. ∎

1.2.8 Example. Recall that sl2 = {( a b ; c −a )}. Its standard basis is

e = ( 0 1 ; 0 0 ),   f = ( 0 0 ; 1 0 ),   h = ( 1 0 ; 0 −1 ).

Show that [h, e] = 2e, [h, f] = −2f, and [e, f] = h. Whence, a representation of sl2 on Cⁿ is a triple of matrices E, H, F such that [H, E] = 2E, [H, F] = −2F, and [E, F] = H. How do we find such a thing?
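The bracket relations for the standard basis can be verified directly; a minimal sketch:

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

br = lambda A, B: A @ B - B @ A  # commutator bracket

# The defining relations of sl2: [h,e] = 2e, [h,f] = -2f, [e,f] = h.
assert np.array_equal(br(h, e), 2 * e)
assert np.array_equal(br(h, f), -2 * f)
assert np.array_equal(br(e, f), h)
```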

1.2.9 Definition. If G is an algebraic group then an algebraic representation of G is a group homomorphism ρ : G → GLV defined by polynomial equations.

We pull the same trick and substitute E = C[ε]/(ε²) for C to get a homomorphism of groups G(E) → GLV(E). As ρ(I) = I,

ρ(I + Aε) = I + ε(some function of A).

Call this function dρ, so that ρ(I + Aε) = I + ε dρ(A), giving dρ : g → glV.

1.2.10 Exercises.

1. Show that if ρ : G → GLV is a group homomorphism then dρ : g → glV is a Lie algebra homomorphism.

2. Show that dρ truly is the differential of ρ at the identity when ρ is considered as a smooth map between manifolds.

We have a map ρ ↦ dρ from the algebraic representations of G to the Lie algebra representations of g = T₁G. We will discuss this further, but first an example.

1.2.11 Example. Let G = SL2, and let L(n) be the collection of homogeneous polynomials of degree n in variables x and y, so dimK L(n) = n + 1. Indeed, it has a basis {x^n, x^{n−1}y, …, xy^{n−1}, y^n}. Now GL2 acts on L(n) via

ρn : GL2 → GL(L(n)) = GLn+1

by (ρn( a b ; c d )f)(x, y) = f(ax + cy, bx + dy). Then ρ0 is the trivial representation, ρ1 is the standard 2-dimensional representation (i.e. ρ1 is the identity map on GL2), and ρ2( a b ; c d ) has matrix

( a²  ab  b² ; 2ac  ad+bc  2bd ; c²  cd  d² ).

(Footnote: L(n) = Γ(P¹, O(n)) and H¹(P¹, O(n)) = 0 for all n ≥ 0.)

Now of course SL2 < GL2, so the restriction is a representation of SL2. To compute dρn(e)(x^i y^j) (recall e = ( 0 1 ; 0 0 )) we compute:

ρn(1 + εe)x^i y^j = ρn( 1 ε ; 0 1 )x^i y^j = x^i(εx + y)^j = x^i y^j + εj x^{i+1} y^{j−1}.

Therefore dρn(e)(x^i y^j) = j x^{i+1} y^{j−1}. Similarly,

ρn(1 + εf)x^i y^j = ρn( 1 0 ; ε 1 )x^i y^j = (x + εy)^i y^j = x^i y^j + εi x^{i−1} y^{j+1}

and

ρn(1 + εh)x^i y^j = ρn( 1+ε 0 ; 0 1−ε )x^i y^j = (1 + ε)^i x^i (1 − ε)^j y^j = x^i y^j + ε(i − j) x^i y^j,

so

e · x^i y^j = j x^{i+1} y^{j−1},   f · x^i y^j = i x^{i−1} y^{j+1},   h · x^i y^j = (i − j) x^i y^j.

Upon closer inspection, these are operators that we know and love:

dρn(e) = x ∂/∂y,   dρn(f) = y ∂/∂x,   dρn(h) = x ∂/∂x − y ∂/∂y.   (1)
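The formulas in (1) can be checked symbolically on monomials; a minimal sketch with sympy (the particular n and i below are arbitrary choices of mine):

```python
import sympy as sp

x, y = sp.symbols('x y')

# The operators of equation (1), acting on polynomials in x and y.
E = lambda p: x * sp.diff(p, y)
F = lambda p: y * sp.diff(p, x)
H = lambda p: x * sp.diff(p, x) - y * sp.diff(p, y)

n, i = 5, 2
j = n - i
m = x**i * y**j  # the basis monomial x^i y^j

# The action on monomials from the computation above.
assert sp.expand(E(m) - j * x**(i + 1) * y**(j - 1)) == 0
assert sp.expand(F(m) - i * x**(i - 1) * y**(j + 1)) == 0
assert sp.expand(H(m) - (i - j) * m) == 0

# The sl2 relations [H,E] = 2E and [E,F] = H, checked on m.
assert sp.expand(H(E(m)) - E(H(m)) - 2 * E(m)) == 0
assert sp.expand(E(F(m)) - F(E(m)) - H(m)) == 0
```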

1.2.12 Exercises. 1. Check directly that defining the action of e, f , and h on L(2) by (1) gives a representation of sl2.

2. Check that L(2) is the adjoint representation of sl2 (notice that they are both 3-dimensional).

3. If the characteristic of K is zero, check that L(n) is an irreducible representation of sl2, and hence of the group SL2.

1.2.13 Example. Let's compare representations of G and representations of g when G = C* (and hence g = C with [x, y] = 0 for all x, y ∈ C). Recall that the irreducible algebraic representations of G are one dimensional: z ∈ C* acts as multiplication by z^n (n ∈ Z). Moreover, any representation of G splits up into a finite direct sum of one dimensional representations ("finite Fourier series").

In contrast, a representation ρ : g → End(V) of g = C on V is any matrix A = ρ(1) ∈ End(V) (ρ : λ ↦ λA). A subrepresentation W ⊆ V is just an A-stable subspace (so AW ⊆ W). Any linear transformation has an eigenvector, so there is always a 1-dimensional subrepresentation, which implies that the irreducible representations are exactly the 1-dimensional ones. V breaks up into a direct sum of irreducibles (i.e. is completely reducible) if and only if A is diagonalizable. E.g. if A is the Jordan block of size n, then the invariant subspaces are

⟨e₁⟩ ⊆ ⟨e₁, e₂⟩ ⊆ … ⊆ ⟨e₁, …, en⟩,

and V does not break up into a direct sum of subspaces. Representations of gl1 up to isomorphism are given by the theory of Jordan normal forms. Representations of G and g look "really different" in this case. Even the irreducibles are different, since for G they are in 1-1 correspondence with Z (z ↦ z^n), while for g they are in 1-1 correspondence with C (λ ↦ "1 ↦ λ"). (Here Z ↪ C : z ↦ 2πiz.)

In contrast, the map from representations of G to representations of g is an equivalence of categories if G is a simply connected simple algebraic group. In particular, every representation of the Lie algebra sl2 is a direct sum of irreducibles, and the irreducibles are exactly the L(n).

2 Representations of sl2

2.1 Classification

From now on, all representations and Lie algebras are over C unless otherwise specified.

2.1.1 Theorem.
1. There is a unique irreducible representation of sl2 of dimension n + 1 for every n ≥ 0.
2. Every finite dimensional representation of sl2 is isomorphic to a direct sum of irreducible representations.

PROOF (FIRST PART): Let V be a representation of sl2. Define

V_λ := {v ∈ V | hv = λv},

the λ-weight space. It is the space of eigenvectors of h with eigenvalue λ. For example, L(n)_λ = C x^i y^j if i − j = λ, and in particular L(n)_n = C x^n. Suppose that v ∈ V_λ, and consider ev:

h(ev) = (he − eh + eh)v = [h, e]v + ehv = 2ev + λev = (2 + λ)ev.

Whence e·V_λ ⊆ V_{λ+2}. Similarly, f·V_λ ⊆ V_{λ−2}.

A highest weight vector of weight λ is an element of ker(e) ∩ V_λ, i.e. a v ∈ V such that ev = 0 and hv = λv.

Claim. If v is a highest weight vector then W = span{v, fv, f²v, …} is an sl2-invariant subspace of V.

We need to show that W is stable under e, f, h. It is clear that fW ⊆ W. As h f^k v = (λ − 2k) f^k v, it is clear that hW ⊆ W. By definition ev = 0, so

e f v = (ef − fe)v + fev = hv = λv ∈ W.

Similarly,

e f^{k+1} v = (ef − fe + fe) f^k v
           = h f^k v + f e f^k v
           = (λ − 2k) f^k v + k(λ − k + 1) f^k v   (by induction)
           = (k + 1)(λ − k) f^k v,

so e f^k v = k(λ − k + 1) f^{k−1} v ∈ W by induction, so eW ⊆ W and the claim is proved.

Claim. If v is a highest weight vector of weight λ and V is finite dimensional, then λ ∈ {0, 1, 2, …}.

Indeed, the vectors f^k v all lie in different eigenspaces of h, and hence are linearly independent if they are non-zero. If V is finite dimensional then f^k v = 0 for some k. Let k be minimal with this property. Then

0 = e f^k v = k(λ − k + 1) f^{k−1} v,

and k ≥ 1, so λ = k − 1 ∈ {0, 1, 2, …} since f^{k−1} v ≠ 0.

Claim. There exists a highest weight vector if V is finite dimensional.

Let v ∈ V be any eigenvector for h with eigenvalue λ (such things exist because C is algebraically closed). As before, v, ev, e²v, … are all eigenvectors for h with eigenvalues λ, λ + 2, λ + 4, …. Hence they are linearly independent, so there is some k + 1 minimal such that e^{k+1} v = 0. Clearly e^k v ∈ V_{λ+2k} is a highest weight vector, proving the claim.

If V is irreducible and dim_C V = n + 1 then there is a basis {v_0, …, v_n} such that

h v_i = (n − 2i) v_i,   f v_i = v_{i+1} if i < n and f v_n = 0,   e v_i = i(n − i + 1) v_{i−1}.

Indeed, let v_0 be a highest weight vector with weight λ ∈ {0, 1, 2, …}. The string {v_0, f v_0, …, f^λ v_0} spans a subrepresentation of V. As V is irreducible it must be all of V, so λ = dim_C V − 1. Take v_i = f^i v_0. This proves the first part of the theorem. ∎

2.1.2 Exercise. C[x, y] is a representation of sl2 by the formulae in 1.2.11. Show that for λ, µ ∈ C, x^λ y^µ C[x, y] is also a representation, infinite dimensional, and analyze its submodule structure.
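The explicit basis formulas for the irreducible of dimension n + 1 can be assembled into matrices and checked against the sl2 relations; a minimal sketch:

```python
import numpy as np

def sl2_irrep(n):
    """Matrices E, H, F of the (n+1)-dimensional irreducible, in the basis
    v_0, ..., v_n with h v_i = (n-2i) v_i, f v_i = v_{i+1}, e v_i = i(n-i+1) v_{i-1}."""
    H = np.diag([n - 2 * i for i in range(n + 1)]).astype(float)
    F = np.zeros((n + 1, n + 1))
    E = np.zeros((n + 1, n + 1))
    for i in range(n):
        F[i + 1, i] = 1.0                # f v_i = v_{i+1}
        E[i, i + 1] = (i + 1) * (n - i)  # e v_{i+1} = (i+1)(n-i) v_i
    return E, H, F

E, H, F = sl2_irrep(4)
br = lambda A, B: A @ B - B @ A
assert np.allclose(br(H, E), 2 * E)
assert np.allclose(br(H, F), -2 * F)
assert np.allclose(br(E, F), H)
```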

PROOF (SECOND PART): We will show that strings of different lengths "do not interact," and then that strings of the same length "do not interact."

Let V be a finite dimensional representation of sl2. Let

Ω := ef + fe + (1/2)h² ∈ End(V),

the Casimir of sl2. Also notice that Ω = ((1/2)h² + h) + 2fe.

Claim. (1) Ω is central, i.e. Ωe = eΩ, Ωf = f Ω, and Ωh = hΩ.

The proof of this is an exercise. For example,

eΩ = e(ef + fe + (1/2)h²)
   = e(ef − fe) + 2efe + (1/2)heh + (1/2)(eh − he)h
   = eh + 2efe + (1/2)heh − eh
   = 2efe + (1/2)heh
   = (ef + fe + (1/2)h²)e = Ωe.

Claim. (2) If V is an irreducible representation with highest weight n then Ω acts on V by multiplication by (1/2)n² + n.

(It follows from Schur's Lemma that if V is an irreducible representation of sl2 then Ω acts on V as multiplication by a scalar.) Let v be a highest weight vector, so hv = nv and ev = 0, so

Ωv = (((1/2)h² + h) + 2fe)v = ((1/2)n² + n)v.

Hence Ω(f^k v) = f^k(Ωv) = ((1/2)n² + n) f^k v, so Ω acts by multiplication by (1/2)n² + n since {v, fv, …, f^n v} is a basis for V.

Claim. (3) If L(n) and L(m) are both irreducible representations and Ω acts on them by the same scalar ω, then n = m (so L(n) ≅ L(m)) and ω = (1/2)n² + n.

Indeed, L(n) and L(m) are irreducible representations with highest weight n and m, respectively, so by the last claim (1/2)n² + n = ω = (1/2)m² + m. But f(x) = (1/2)x² + x is a strictly increasing function for x > −1, so m = n and the representations are isomorphic.

Let V^ω = {v ∈ V | (Ω − ωI)^{dim V} v = 0} be the generalized eigenspace of Ω associated with ω. The Jordan decomposition for Ω implies that V ≅ ⊕_ω V^ω (as vector spaces).
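Claim (2) can be verified directly on the explicit irreducible matrices (rebuilt inline below for a particular n of my choosing):

```python
import numpy as np

# The (n+1)-dimensional irreducible in the basis v_i = f^i v_0, as in the proof above.
n = 5
H = np.diag([n - 2 * i for i in range(n + 1)]).astype(float)
F = np.zeros((n + 1, n + 1))
E = np.zeros((n + 1, n + 1))
for i in range(n):
    F[i + 1, i] = 1.0
    E[i, i + 1] = (i + 1) * (n - i)

Omega = E @ F + F @ E + 0.5 * H @ H

# Omega is the scalar n^2/2 + n on L(n), and commutes with e.
assert np.allclose(Omega, (0.5 * n**2 + n) * np.eye(n + 1))
assert np.allclose(Omega @ E - E @ Omega, 0)
```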

Claim. (4) Each V^ω is a subrepresentation of V, i.e. the above is a direct sum decomposition as representations of sl2.

Indeed, if x ∈ sl2 and v ∈ V^ω then, since Ω is central,

(Ω − ω)^{dim V} xv = x(Ω − ω)^{dim V} v = 0.

Aside: For a g-module W, a composition series is a sequence of submodules

0 = W_0 < W_1 < … < W_r = W

such that each quotient module W_i/W_{i−1} is an irreducible g-module. For example, with g = C and W = C², acting as 1 ↦ ( 0 1 ; 0 0 ), then ⟨e_1⟩ ⊆ ⟨e_1, e_2⟩ is a composition series (and this is the only one). If the action is instead 1 ↦ ( 0 0 ; 0 0 ) then any line in C² gives rise to a composition series.

Recall that if W is finite dimensional then it has a composition series, and the subquotients are unique up to permutation.

Claim. (5) If V^ω ≠ 0 then ω = (1/2)n² + n for some n, and all the subquotients of a composition series for V^ω are L(n), i.e. V^ω is a bunch of copies of L(n), possibly stuck together in a complicated way.

Indeed, suppose W is an irreducible subrepresentation of V^ω. Then Ω acts on W as multiplication by ω, since ω is the only possible generalized eigenvalue and Ω acts on irreducible representations as scalar multiplication by Schur's Lemma. Thus ω = (1/2)n² + n for some n with W ≅ L(n), by claim (3). By the same reasoning, Ω will act on any irreducible subrepresentation of V^ω/W as multiplication by ω, so any such irreducible subrepresentation must be L(n), for the same n. Whence every subquotient in a composition series is isomorphic to L(n).

To finish the proof we must show that each V^ω is a direct sum of L(n)'s. The key point will be to show that h is "diagonalizable." At this stage we may assume that V = V^ω.

Notice that h acts on V with eigenvalues {n, n − 2, …, 2 − n, −n}. (It is a fact from linear algebra that if h acts on a vector space W and W′ ≤ W is an h-invariant subspace, then the eigenvalues of h on W are the eigenvalues of h on W′ together with the eigenvalues of h on W/W′. The remark follows from this since V has a composition series with subquotients all isomorphic to L(n), and we know from the first part that the eigenvalues of h on L(n) are {n, n − 2, …, 2 − n, −n}.) Hence V_m = 0 if m ∉ {n, n − 2, …, 2 − n, −n}. It follows that h has only one generalized eigenvalue on ker(e), namely n. [Marginal note: Why does this follow?]

Claim. h f^k = f^k (h − 2k) and e f^{n+1} = f^{n+1} e + (n + 1) f^n (h − n).

The first equation is trivial, and the second is proved by induction as follows. Clearly ef = fe + (ef − fe) = fe + h, and

e f^{n+1} = (fe + ef − fe) f^n
          = f^{n+1} e + n f^n (h − n + 1) + f^n (h − 2n)   (by induction)
          = f^{n+1} e + (n + 1) f^n (h − n).

Claim. h acts as multiplication by n on ker(e), and further, V_n = ker(e).

Indeed, if v ∈ V_n (so hv = nv) then ev ∈ V_{n+2} = 0, so V_n ⊆ ker(e). For the converse, consider these statements:

1. If x ∈ ker(e), then (h − n)^{dim V} x = 0, since n is the only generalized eigenvalue of h on ker(e).

2. If x ∈ ker(e), then (h − n + 2k)^{dim V} f^k x = f^k (h − n)^{dim V} x = 0 by the previous claim.

3. If y ∈ ker(e) and y ≠ 0 then f^n y ≠ 0. (Indeed, let (W_i) be a composition series for V, as above. Then there is i such that y ∈ W_i \ W_{i−1}. Let ȳ = y + W_{i−1} ∈ W_i/W_{i−1} ≅ L(n). Then ȳ is a highest weight vector of L(n), so f^n ȳ ≠ 0, hence f^n y ≠ 0.)

4. f^{n+1} x = 0, as f^{n+1} x is in the generalized eigenspace with eigenvalue −n − 2, which is zero.

Hence

0 = e f^{n+1} x = (n + 1) f^n (h − n)x + f^{n+1} ex = (n + 1) f^n (h − n)x,

implying f^n((h − n)x) = 0, so by statement 3, (h − n)x = 0, i.e. hx = nx.

Finally, choose a basis {w_1, …, w_ℓ} of V_n = ker(e). Then

span{w_1, f w_1, …, f^n w_1} ⊕ span{w_2, f w_2, …, f^n w_2} ⊕ … ⊕ span{w_ℓ, f w_ℓ, …, f^n w_ℓ}

is a direct sum decomposition of V into subrepresentations, each isomorphic to L(n). ∎

2.2 Consequences

2.2.1 Exercise. If V and W are representations of a Lie algebra g then the map

g → End(V) ⊗ End(W) = End(V ⊗ W) : x ↦ x ⊗ 1 + 1 ⊗ x

is a homomorphism of Lie algebras. (Recall that if a group G acts on V and W then G acts on V ⊗ W as g ↦ g ⊗ g. An action g → End(V ⊗ W) is obtained from this by differentiating.)

2.2.2 Corollary. If V and W are representations of g then so is V ⊗ W.

Remark. If A is an algebra and V and W are representations of A, then V ⊗ W is a representation of A ⊗ A and not of A. So to make it a representation of A you need additional structure, an algebra map A → A ⊗ A (making A a Hopf algebra, apparently).

2.2.3 Example. Now take g = sl2. Then L(n) ⊗ L(m) is also a representation of sl2, so according to our theorem it should decompose into representations L(k) with some multiplicities, but how? The naive method is to try and find all the highest weight vectors in L(n) ⊗ L(m). As an exercise, find all highest weight vectors in L(1) ⊗ L(n) and L(2) ⊗ L(n).

If v_r denotes a highest weight vector in L(r) then v_n ⊗ v_m is a highest weight vector in L(n) ⊗ L(m). Indeed,

h(v_n ⊗ v_m) = (h v_n) ⊗ v_m + v_n ⊗ (h v_m) = (n + m) v_n ⊗ v_m

and

e(v_n ⊗ v_m) = (e v_n) ⊗ v_m + v_n ⊗ (e v_m) = 0.

This is a start, but L(n) ⊗ L(m) = L(n + m) + lots of other stuff, since

dim(L(n) ⊗ L(m)) = (n + 1)(m + 1) > n + m + 1 = dim L(n + m).

It is possible to write down explicit formulae for all the highest weight vectors in L(n) ⊗ L(m); they are messy but interesting (compute them as a non-compulsory exercise). A better method is to use the weight space decomposition for h, which we will see later.

2.3 Characters

2.3.1 Definition. Let V be a finite dimensional representation of sl2. Then

ch V = Σ_{n∈Z} (dim V_n) z^n ∈ Z[z, z^{−1}]

is the character of V.

2.3.2 Proposition (Properties of Characters).

1. ch V |_{z=1} = dim V.

2. ch L(n) = z^n + z^{n−2} + … + z^{2−n} + z^{−n} = (z^{n+1} − z^{−n−1})/(z − z^{−1}) =: [n + 1].

3. ch V = ch W implies that V ≅ W.

4. ch(V ⊗ W) = ch(V) ch(W).

PROOF:
1. By our theorem, h acts diagonalizably on V, so V = ⊕_{n∈Z} V_n with integer eigenvalues.

2. We saw this in the proof of our theorem.

3. {ch L(n) | n ≥ 0} forms a basis for Z[z, z^{−1}]^{Z/2Z} (the action of Z/2Z is that of swapping z and z^{−1}). Since any V uniquely decomposes as a direct sum ⊕_{n≥0} a_n L(n), we have ch V = Σ_{n≥0} a_n [n + 1], which uniquely determines the a_n.

4. V_n ⊗ W_m ⊆ (V ⊗ W)_{n+m}, so (V ⊗ W)_p = ⊕_{n+m=p} V_n ⊗ W_m. ∎

Back to the example. For example we compute

ch(L(1) ⊗ L(3)) = ch L(1) · ch L(3)
                = (z + z^{−1})(z³ + z + z^{−1} + z^{−3})
                = (z⁴ + z² + 1 + z^{−2} + z^{−4}) + (z² + 1 + z^{−2}),

so L(1) ⊗ L(3) ≅ L(4) + L(2). In general, the Clebsch–Gordan rule says

L(n) ⊗ L(m) = ⊕ L(k),  where k runs over |n − m|, |n − m| + 2, …, n + m (so k ≡ n + m mod 2).
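Because characters determine representations (property 3), the Clebsch–Gordan rule can be verified purely with Laurent polynomials; a minimal sketch with sympy:

```python
import sympy as sp

z = sp.symbols('z')

def ch(n):
    """Character of L(n): z^n + z^(n-2) + ... + z^(-n)."""
    return sum(z**(n - 2 * k) for k in range(n + 1))

# The worked example: ch(L(1) (x) L(3)) = ch L(4) + ch L(2).
assert sp.simplify(sp.expand(ch(1) * ch(3)) - sp.expand(ch(4) + ch(2))) == 0

def clebsch_gordan(n, m):
    """Character of the right-hand side of the Clebsch-Gordan rule."""
    return sum(ch(k) for k in range(abs(n - m), n + m + 1, 2))

# Check the general rule for small n, m.
for n in range(5):
    for m in range(5):
        assert sp.simplify(sp.expand(ch(n) * ch(m))
                           - sp.expand(clebsch_gordan(n, m))) == 0
```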

3 Semi-Simple Lie Algebras: Structure and Classification

3.0.3 Definition. A Lie algebra g is simple if

1. g is non-Abelian;

2. dim g > 1; and

3. the only ideals of g are 0 and g.

g is semi-simple if it is a direct sum of simple Lie algebras.

3.0.4 Examples. 1. C is not a simple or semi-simple Lie algebra.

2. sl2 is a simple Lie algebra.

3.1 Solvable and Nilpotent Lie Algebras

3.1.1 Definition. Let g be a Lie algebra. The commutator algebra of g is

[g, g] = span{[x, y] | x, y ∈ g}.

3.1.2 Exercises.
1. [g, g] is an ideal of g; and

2. g/[g, g] is Abelian.

3.1.3 Definition. The central series of g is defined inductively by g⁰ = g and g^n = [g^{n−1}, g] for n ≥ 1. The derived series is defined by g^(0) = g and g^(n) = [g^(n−1), g^(n−1)] for n ≥ 1.

It is clear that g^(n) ⊆ g^n for all n.

3.1.4 Definition. A Lie algebra g is said to be nilpotent if g^n = 0 for some n > 0 (i.e. if the central series eventually reaches zero), and solvable if g^(n) = 0 for some n > 0 (i.e. if the derived series eventually reaches zero).

Clearly nilpotent implies solvable.

3.1.5 Example. The Lie algebra n of strictly upper triangular matrices is nilpotent, while the Lie algebra b of upper triangular matrices is solvable but not nilpotent. Abelian Lie algebras are nilpotent.
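The claims in this example can be checked by brute force for 3×3 matrices: represent each subspace by a basis and compute spans of brackets via row reduction. A sketch (the helper names are mine):

```python
import itertools
import sympy as sp

n = 3

def Eij(i, j):
    M = sp.zeros(n, n)
    M[i, j] = 1
    return M

def span_basis(mats):
    """Basis (as n x n matrices) of the span of a list of matrices."""
    if not mats:
        return []
    rows = sp.Matrix.vstack(*[sp.Matrix([list(M)]) for M in mats])
    rref, pivots = rows.rref()
    return [sp.Matrix(n, n, list(rref.row(i))) for i in range(len(pivots))]

def bracket_space(S, T):
    return span_basis([A * B - B * A for A, B in itertools.product(S, T)])

b = [Eij(i, j) for i in range(n) for j in range(i, n)]            # upper triangular
strict = [Eij(i, j) for i in range(n) for j in range(i + 1, n)]   # strictly upper

# Derived series of b reaches 0, so b is solvable.
derived = [b]
while derived[-1]:
    derived.append(bracket_space(derived[-1], derived[-1]))
assert [len(s) for s in derived] == [6, 3, 1, 0]

# Central series of the strictly upper triangulars reaches 0: nilpotent.
central = [strict]
while central[-1]:
    central.append(bracket_space(central[-1], strict))
assert [len(s) for s in central] == [3, 1, 0]

# Central series of b stabilizes at the strictly uppers: b is not nilpotent.
c1 = bracket_space(b, b)
c2 = bracket_space(c1, b)
assert len(c1) == 3 and len(c2) == 3
```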

3.1.6 Exercises. 1. Compute the derived and central series of the upper triangular and strictly upper triangular matrices and confirm the claims made in the example.

2. Compute the center of the upper triangular and strictly upper triangular matrices.

3. Let L be a vector space. Show that W = L ⊕ L* is a symplectic vector space (i.e. a vector space together with a non-degenerate alternating form) with

the alternating form ⟨·,·⟩ defined by ⟨L, L⟩ = ⟨L*, L*⟩ = 0 and ⟨v, f⟩ = f(v) for v ∈ L and f ∈ L* (extended to be alternating).

4. Let W be a symplectic vector space. Define the Heisenberg Lie algebra, Heis_W, to be W ⊕ Kc, where c is a fresh variable, with bracket defined by [c, W] = 0 and [x, y] = ⟨x, y⟩c for x, y ∈ W. Show that Heis_W is a nilpotent Lie algebra. Is it Abelian?

5. Show that the simplest Heisenberg algebra (taking L = Cp and L* = Cq) has a representation on C[x] by p ↦ ∂/∂x, q ↦ x, and c ↦ 1.

3.1.7 Proposition.
1. Subalgebras and quotient algebras of solvable (resp. nilpotent) Lie algebras are also solvable (resp. nilpotent).

2. If g is a Lie algebra and h ⊆ g is an ideal, then g is solvable if and only if both h and g/h are solvable;

3. g is nilpotent if and only if the center of g is non-zero and g/Zg is nilpotent.

PROOF: Exercise. (For the third part, suppose g ⊋ g¹ ⊋ … ⊋ g^n = 0. Then [g^{n−1}, g] = 0, so g^{n−1} ≠ 0 is contained in the center of g.) ∎

3.1.8 Corollary. In particular, g is nilpotent if and only if ad g ⊆ glg is nilpotent.

PROOF: ad g ≅ g/Zg. ∎
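Exercise 3.1.6(5), the representation of the simplest Heisenberg algebra on C[x], can be checked symbolically; a minimal sketch (the test polynomial is an arbitrary choice of mine):

```python
import sympy as sp

x = sp.symbols('x')

# Exercise 3.1.6(5): p -> d/dx, q -> multiplication by x, c -> 1 on C[x].
p = lambda g: sp.diff(g, x)
q = lambda g: x * g
c = lambda g: g

f = x**4 + 3 * x + 1  # an arbitrary test polynomial

# [p, q] = c, and c is central.
assert sp.expand(p(q(f)) - q(p(f)) - c(f)) == 0
assert sp.expand(c(p(f)) - p(c(f))) == 0
assert sp.expand(c(q(f)) - q(c(f))) == 0
```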

3.1.9 Theorem (Lie). Let g ⊆ glV (V finite dimensional) be a solvable Lie algebra. Suppose K has characteristic zero and is algebraically closed. Then there is a basis

{v_1, …, v_n} of V such that, with respect to this basis, the matrices of all elements of g are upper triangular.

PROOF: Omitted. ∎

Equivalently, Lie's theorem says that there is a common eigenvector for g, i.e. there is a linear map λ : g → K and v ∈ V such that xv = λ(x)v for all x ∈ g. This is the same as saying that V has a 1-dimensional subrepresentation.

3.1.10 Exercises. 1. Prove the equivalence of the two formulations of Lie’s Theorem (essential exercise).

2. Show that it is necessary that K be algebraically closed and that the characteristic be zero. (Hint: let K = F_p and g be the 3-dimensional Heisenberg algebra Heis = ⟨∂/∂x, x, c⟩. Show that K[x]/x^p is an irreducible representation of g.)

3.1.11 Corollary. Suppose K has characteristic zero. If g is a solvable Lie algebra then [g, g] is a nilpotent Lie algebra.

PROOF: Apply Lie's Theorem to the adjoint representation ad g ⊆ glg. Then ad g is solvable, and Lie's theorem implies that it sits inside the collection of upper triangular matrices for some choice of basis. Hence [ad g, ad g] consists of strictly upper triangular matrices, so it is nilpotent. But ad[g, g] = [ad g, ad g], so ad[g, g] is nilpotent, and therefore [g, g] is nilpotent by 3.1.8. ∎

3.1.12 Exercise. Construct a counterexample in characteristic p.

3.1.13 Theorem (Engel). Let g ⊆ glV (V finite dimensional), and suppose that x : V → V is a nilpotent transformation for all x ∈ g. Then there is a common eigenvector v for g.

PROOF: Omitted. ∎

Here "nilpotent" means that all the generalized eigenvalues are 0. Whence Engel's theorem says that V has a 1-dimensional subrepresentation on which xv = 0 for all x ∈ g.

3.1.14 Corollary. g is nilpotent if and only if ad g consists of nilpotent endomorphisms of g.

PROOF: Exercise. ∎

Cryptic Remark: Solvable and nilpotent Lie algebras are garbage.

3.1.15 Definition. A symmetric bilinear form (a.k.a. inner product) (·,·) : g × g → K is invariant if ([x, y], z) = (x, [y, z]) for all x, y, z ∈ g.

3.1.16 Exercise. If h ⊆ g is an ideal and (·,·) is an invariant inner product on g then

h⊥ := {x ∈ g | (x, h) = 0}

is an ideal. In particular, g⊥ is an ideal.

3.1.17 Example (Trace forms). The most important class of invariant inner products are the so-called trace forms, defined as follows. If (V, ρ) is a representation of g, define (·,·)_V : g × g → K by

(x, y)_V = tr(ρ(x)ρ(y) : V → V).

Clearly (·,·)_V is symmetric and bilinear. Show that it is invariant (you will need to use the fact that ρ is a representation). The Killing form is (x, y)_ad = tr(ad x ad y : g → g).

3.1.18 Theorem (Cartan's Criterion). Let V be a finite dimensional representation of g (i.e. g ↪ glV) and let K have characteristic zero. Then g is solvable if and only if (x, y)_V = 0 for all x ∈ g and y ∈ [g, g].

PROOF: The forward direction is immediate from Lie's Theorem. The other direction is where the content lies, and we omit the proof. ∎

3.1.19 Corollary. g is solvable if and only if (g, [g, g])ad = 0.

PROOF: The forward direction follows from Lie's Theorem. For the reverse direction, Cartan's Criterion implies that g/Zg ≅ ad g is solvable. But then g is solvable as well. ∎

If g is a solvable non-Abelian Lie algebra then all trace forms are degenerate (as g⊥ ⊇ [g, g]). Warning/Exercise: Not every invariant inner product is a trace form.

3.1.20 Exercise. Let h̃ = ⟨c, p, q, d⟩ be defined by [c, h̃] = 0, [p, q] = c, [d, p] = p, and [d, q] = −q (we think of p = ∂/∂x, q = x, and d = −x ∂/∂x). Then h̃ is a Lie algebra. Show that it is solvable, and construct a non-degenerate invariant inner product on it.
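The bracket relations of h̃ can be verified in the suggested differential-operator realization; a minimal sketch (the test polynomial is mine):

```python
import sympy as sp

x = sp.symbols('x')

# The realization suggested in 3.1.20: p = d/dx, q = x, d = -x d/dx, c = 1.
p = lambda g: sp.diff(g, x)
q = lambda g: x * g
d = lambda g: -x * sp.diff(g, x)
c = lambda g: g

f = x**5 + 2 * x**2 + 7
br = lambda A, B: (lambda g: A(B(g)) - B(A(g)))

assert sp.expand(br(p, q)(f) - c(f)) == 0   # [p, q] = c
assert sp.expand(br(d, p)(f) - p(f)) == 0   # [d, p] = p
assert sp.expand(br(d, q)(f) + q(f)) == 0   # [d, q] = -q
assert sp.expand(br(c, d)(f)) == 0          # c is central
```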

3.1.21 Definition. The radical of g is the maximal solvable ideal of g, denoted R(g).

3.1.22 Exercises. 1. Show that the sum of solvable ideals is solvable, and hence R(g) is the sum of all solvable ideals of g (so there is none of this nasty Zorn’s Lemma crap).

2. R(g/R(g)) = 0.

Recall that g is semi-simple (s.s.) if it is a direct sum of simple (non-Abelian) Lie algebras.

3.1.23 Theorem. Suppose the characteristic of K is zero. The following are equivalent.

1. g is semi-simple;

2. R(g) = 0;

3. the Killing form (·,·)_ad is non-degenerate (ironically, this is the "Killing criterion").

Moreover, if g is s.s. then every derivation D : g → g (i.e. linear map D satisfying D[a, b] = [Da, b] + [a, Db]) is inner (i.e. D = ad x for some x ∈ g), but not conversely.

PROOF: Begin by observing that R(g) = 0 if and only if g has no non-zero commutative ideals.

(i) implies (ii): If g is s.s. then it has no non-zero commutative ideals, by definition.

(iii) implies (ii): It suffices to show that if a is a commutative ideal then it is contained in the kernel of the Killing form, i.e. in g⊥. Write g = a + h for h some vector space complement. If a ∈ a then, with respect to this decomposition, ad a has matrix ( 0 * ; 0 0 ), as a is Abelian and an ideal. For x ∈ g, ad x has matrix ( * * ; 0 * ), so (a, x)_ad = tr(ad(a) ad(x)) = 0.

(ii) implies (iii): Let r = g⊥, an ideal. We have a map r → glg via ad, and (x, y)_ad = 0 for all x, y ∈ r, by definition. By Cartan's Criterion r/Z(r) is solvable, and hence r is solvable, so r ⊆ R(g) = 0.

(iii), (ii) imply (i): Suppose (·,·)_ad is non-degenerate. Let a ⊆ g be a non-zero minimal ideal. Then (·,·)_ad|_a is either zero or non-degenerate. Indeed, the kernel of (·,·)_ad|_a is an ideal of g and a is minimal. But if (·,·)_ad|_a = 0 then the Cartan Criterion implies that a is solvable, and R(g) = 0, so there are no non-zero solvable ideals. Therefore (·,·)_ad|_a is non-degenerate, and we have g = a ⊕ a⊥, a direct sum decomposition as ideals with (·,·)_ad|_{a⊥} non-degenerate. a is not solvable, so it must be simple (non-Abelian). By induction on dim_K g, we may write g as a direct sum of simple Lie algebras, so it is s.s.

Finally, if D is a derivation then we get a linear function ℓ : g → K : x ↦ tr_g(D · ad x). Since g is s.s., (·,·)_ad is non-degenerate, so ℓ(x) = (y, x)_ad for a unique y ∈ g. This implies that tr_g((D − ad y) ad x) = 0 for all x ∈ g. Let E = D − ad y, a derivation. We must show E = 0. Notice that

ad(Ex)(z) = [Ex, z] = E[x, z] − [x, Ez] = [E, ad x](z),

whence

(Ex, z)_ad = tr_g(ad(Ex) ad z)
           = tr_g([E, ad x] ad z)
           = tr_g(E [ad x, ad z])
           = tr_g((D − ad y) ad[x, z]) = 0

for all z ∈ g, so Ex = 0 for all x ∈ g. ∎

3.1.24 Exercises.
1. Prove that R(g) ⊇ ker(·,·)_ad ⊇ [R(g), R(g)].

2. Show that g s.s. implies that g can be written uniquely as a direct sum of its minimal ideals.

3. Show that a nilpotent Lie algebra always has a non-inner derivation.

4. Show that the Lie algebra defined by $\mathfrak h = \langle a, b\rangle$, where $[a,b] = b$, has only inner derivations.

Remark. The exact sequence $0 \to R(\mathfrak g) \to \mathfrak g \to \mathfrak g/R(\mathfrak g) \to 0$ shows that every Lie algebra has a maximal s.s. quotient, and is an extension of this by a maximal solvable ideal. In fact we have the following theorem.

3.1.25 Theorem (Levi's Theorem). If $K$ has characteristic zero then the exact sequence splits, i.e. there is a subalgebra $\mathfrak h \hookrightarrow \mathfrak g$ such that $\mathfrak g \cong \mathfrak h \ltimes R(\mathfrak g)$ (indeed, we may take $\mathfrak h \cong \mathfrak g/R(\mathfrak g)$).

PROOF: Omitted. ƒ

3.1.26 Exercise. Show that $R(\mathfrak{sl}_p(\mathbb F_p)) = \mathbb F_p I$, but there does not exist a splitting $\mathfrak g/R(\mathfrak g) \hookrightarrow \mathfrak g$.

3.1.27 Exercises.

1. Let $\mathfrak g$ be a simple Lie algebra and $(\cdot,\cdot)_1$ and $(\cdot,\cdot)_2$ be non-degenerate invariant forms. Then there is $\lambda \in K^*$ such that $(\cdot,\cdot)_1 = \lambda(\cdot,\cdot)_2$. (Apply Schur's Lemma.)

2. (essential exercise) Let $\mathfrak g = \mathfrak{sl}_n(\mathbb C)$ and put $(A,B) = \operatorname{tr}(AB)$, so that $(A,B) = \lambda(A,B)_{\mathrm{ad}}$. Find $\lambda$ (you may assume $\mathfrak g$ is simple).
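As a sanity check on this exercise for $n = 2$ (the relation $(A,B)_{\mathrm{ad}} = 2n\operatorname{tr}(AB)$ used below is a standard fact stated here as an assumption, not quoted from the notes), one can compute the Killing form of $\mathfrak{sl}_2$ directly in the basis $e, h, f$:

```python
import numpy as np

# basis e, h, f of sl2
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def coords(m):
    # coordinates of a trace-zero 2x2 matrix in the basis (e, h, f)
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(x):
    # matrix of ad(x) acting on sl2, columns indexed by (e, h, f)
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

# Killing form and trace form as Gram matrices on the basis
K = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
T = np.array([[np.trace(x @ y) for y in basis] for x in basis])

assert np.allclose(K, 4 * T)            # (A,B)_ad = 2n tr(AB) with n = 2
assert np.linalg.matrix_rank(K) == 3    # the Killing form is non-degenerate
```

The same computation for larger $n$ (with a basis of $\mathfrak{sl}_n$) recovers the general proportionality constant.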

3.2 Structure Theory

3.2.1 Definition. A torus $\mathfrak t \subseteq \mathfrak g$ is an Abelian subalgebra such that $\operatorname{ad} t : \mathfrak g \to \mathfrak g$ is semi-simple (i.e. diagonalizable) for all $t \in \mathfrak t$. A maximal torus is a torus which is maximal with respect to inclusion. A maximal torus is also known as a Cartan subalgebra.

3.2.2 Exercises.

1. Let $G$ be an algebraic group and $T$ a subgroup isomorphic to $\mathbb C^* \times \cdots \times \mathbb C^*$. Then $\mathfrak t = T_1T$ is a torus.

2. (essential exercise) Let $\mathfrak g = \mathfrak{sl}_n$, or
$$\mathfrak{sp}_{2n} = \{A \in \mathfrak{gl}_{2n} \mid JA + A^TJ = 0\}$$
or
$$\mathfrak{so}_n = \{A \in \mathfrak{gl}_n \mid MA + A^TM = 0\},$$
where $M = \operatorname{anti\text{-}diag}(1, \ldots, 1)$. Prove that $\mathfrak t = \{\text{diagonal matrices in } \mathfrak g\}$ is a maximal torus.

3.2.3 Lemma. Let $t_1, \ldots, t_r : V \to V$, where $V$ is finite dimensional, be pairwise commuting, semi-simple, linear maps. Let $\vec\lambda = (\lambda_1, \ldots, \lambda_r) \in \mathbb C^r$, and let $V_{\vec\lambda} = \{v \in V \mid t_iv = \lambda_iv,\ i = 1, \ldots, r\}$ be the simultaneous eigenspace. Then $V = \bigoplus_{\vec\lambda \in \mathbb C^r} V_{\vec\lambda}$.

PROOF: By induction on $r$. The case $r = 1$ is clear since $t_1$ is assumed to be semi-simple, and hence diagonalizable. If $r > 1$ then $V = \bigoplus V_{(\lambda_1,\ldots,\lambda_{r-1})}$ by induction. As $t_r$ commutes with $t_1, \ldots, t_{r-1}$, we have $t_r(V_{(\lambda_1,\ldots,\lambda_{r-1})}) \subseteq V_{(\lambda_1,\ldots,\lambda_{r-1})}$, so decompose each $V_{(\lambda_1,\ldots,\lambda_{r-1})}$ as a direct sum of $t_r$-eigenspaces. □

We now rephrase the above lemma in the language we have developed so far. Let $\mathfrak t = \bigoplus_{i=1}^r \mathbb C t_i$ be an Abelian Lie algebra isomorphic to $\mathbb C^r$. Then $V$ is a representation of $\mathfrak t$, by definition. Since a basis of $\mathfrak t$ acts on $V$ semi-simply, $V = \bigoplus_{\vec\lambda \in \mathbb C^r} V_{\vec\lambda}$ is the decomposition of $V$ as a representation of $\mathfrak t$, and it is a direct sum of irreducibles. Hence all $t \in \mathfrak t$ act semi-simply (as on each $V_{\vec\lambda}$ they act as multiplication by a scalar).

Moreover, it is convenient to think of $\lambda$ as being in the dual space $\mathfrak t^*$. For such a $\lambda \in \mathfrak t^*$, the $\lambda$-weight space of $V$ is $V_\lambda = \{v \in V \mid tv = \lambda(t)v\}$. (In the above lemma we would take $\lambda(\sum_i x_it_i) = \sum_i x_i\lambda_i$.)

3.2.4 Proposition. Let $\mathfrak g$ be a semi-simple Lie algebra and $\mathfrak t$ a maximal torus. Then $\mathfrak t \neq 0$, and the collection of elements of $\mathfrak g$ which commute with $\mathfrak t$ is exactly $\mathfrak t$ itself.

PROOF: Omitted. □

We may write $\mathfrak g = \mathfrak g_0 + \bigoplus_{\lambda \in \mathfrak t^*,\ \lambda \neq 0} \mathfrak g_\lambda$, where for $\lambda \in \mathfrak t^*$,
$$\mathfrak g_\lambda := \{x \in \mathfrak g \mid [t,x] = \lambda(t)x \text{ for all } t \in \mathfrak t\}.$$
This is known as the root-space decomposition of $\mathfrak g$. The proposition implies that when $\mathfrak g$ is s.s., $\mathfrak g_0 = \mathfrak t$.

3.2.5 Definition. If $\lambda \neq 0$ and $\mathfrak g_\lambda \neq 0$ then we say that $\lambda$ is a root of $\mathfrak g$. Let $R = R_{\mathfrak g} := \{\lambda \in \mathfrak t^* \mid \lambda \neq 0,\ \mathfrak g_\lambda \neq 0\}$, the set of roots of $\mathfrak g$.

3.2.6 Example. Let $\mathfrak g = \mathfrak{sl}_n(\mathbb C)$, so that $\mathfrak t$ is the collection of diagonal matrices in $\mathfrak g$. Then for $t \in \mathfrak t$, $[t, E_{ij}] = (t_i - t_j)E_{ij}$. Let $\varepsilon_i : \mathfrak t \to \mathbb C : t \mapsto t_{ii}$. Then $[t, E_{ij}] = (\varepsilon_i - \varepsilon_j)(t)E_{ij}$. Note that $\varepsilon_1 + \cdots + \varepsilon_n = 0$ since $\mathfrak{sl}_n$ is the collection of matrices with trace zero. In this case we have that $\mathfrak g_{\varepsilon_i - \varepsilon_j} = \mathbb C E_{ij}$ is one-dimensional, and $R = \{\varepsilon_i - \varepsilon_j \mid i \neq j\}$. We will see that this holds in general.
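The bracket computation in this example is easy to verify numerically. A minimal sketch for $\mathfrak{sl}_3$, with an arbitrarily chosen trace-zero torus element:

```python
import numpy as np

t = np.diag([1., 2., -3.])   # a trace-zero element of the diagonal torus of sl3
roots = []
for i in range(3):
    for j in range(3):
        if i != j:
            E = np.zeros((3, 3)); E[i, j] = 1.0
            # [t, E_ij] = (t_i - t_j) E_ij: E_ij spans the root space of eps_i - eps_j
            assert np.allclose(t @ E - E @ t, (t[i, i] - t[j, j]) * E)
            roots.append(t[i, i] - t[j, j])

assert len(roots) == 6   # sl3 has the six roots eps_i - eps_j, i != j
```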

3.2.7 Proposition. sln(C) is simple (and hence semi-simple).

The Killing criterion can be used to show that sln(C) is semi-simple (do so as an exercise).

PROOF: Suppose that $\mathfrak r \subseteq \mathfrak{sl}_n$ is a non-zero ideal. Let $r \in \mathfrak r$; by the root-space decomposition, $r = h + \sum_{\alpha \in R} e_\alpha$ for some $h \in \mathfrak t$ and $e_\alpha \in \mathfrak g_\alpha$. Choose such a non-zero $r$ with a minimal number of non-zero terms.

If $h \neq 0$ then choose a generic $h_0$ (i.e. choose $h_0$ so that its diagonal entries are all different, or equivalently so that $\alpha(h_0) \neq 0$ for all $\alpha \in R$). As $\mathfrak r$ is an ideal, $\sum_{\alpha \in R} \alpha(h_0)e_\alpha = [h_0, r] \in \mathfrak r$, and if this is a non-zero element then it has fewer non-zero terms than $r$, contradicting our choice of $r$. Therefore $[h_0, r] = 0$, and so $r = h$ by 3.2.4. As $h \neq 0$, there are $i$ and $j$ such that $c := (\varepsilon_i - \varepsilon_j)(h) \neq 0$, so $[h, E_{ij}] = cE_{ij} \in \mathfrak r$, implying $E_{ij} \in \mathfrak r$. From this we will now show that $\mathfrak r$ contains a basis for $\mathfrak{sl}_n$ and hence $\mathfrak r = \mathfrak{sl}_n$. As $[E_{ij}, E_{jk}] = E_{ik}$ if $i \neq k$ and $[E_{si}, E_{ij}] = E_{sj}$ if $s \neq j$, we have $E_{ab} \in \mathfrak r$ for all $a \neq b$. But now $[E_{i,i+1}, E_{i+1,i}] = E_{ii} - E_{i+1,i+1} \in \mathfrak r$ as well. All these matrices together form a basis for the trace zero matrices.

On the other hand, suppose $h = 0$, so $r = \sum_{\alpha \in R} e_\alpha$. If $r = e_\alpha = cE_{ij}$ with $c \neq 0$ then we are done as before. So suppose $r = cE_\alpha + dE_\beta + \sum_{\gamma \in R \setminus \{\alpha,\beta\}} e_\gamma$, where $c, d \neq 0$. Choose $h_0 \in \mathfrak t$ such that $\alpha(h_0) \neq \beta(h_0)$, so $[h_0, r] = c\alpha(h_0)E_\alpha + d\beta(h_0)E_\beta + \sum \gamma(h_0)e_\gamma$. Therefore a linear combination of $[h_0, r]$ and $r$ is in $\mathfrak r$ and has fewer non-zero terms, contradicting that $r$ was chosen with the minimal number. Therefore $\mathfrak{sl}_n$ is simple. □

3.2.8 Exercises.

1. Show that $A \in \mathfrak{so}_n$ if and only if $A$ is anti-skew-symmetric (i.e. skew-symmetric across the anti-diagonal).

2. Show that $A \in \mathfrak{sp}_{2n}$ if and only if $A = \left(\begin{smallmatrix}A_1&A_2\\A_3&A_4\end{smallmatrix}\right)$ with $A_2$ and $A_3$ anti-skew-symmetric and $A_4 = -A_1^{aT}$ (anti-transpose).

3. Prove that the maximal tori in $\mathfrak{so}_n$ and $\mathfrak{sp}_{2n}$ are exactly the diagonal matrices in those algebras.

4. Show that the root spaces for $\mathfrak{so}_n$ are $\mathbb C(E_{ij} - E_{n-j+1,n-i+1})$, where $i + j < n + 1$ and $i \neq j$. Also show that
$$R = \begin{cases} \{\pm\varepsilon_i \pm \varepsilon_j \mid 1 \le i, j \le \ell,\ i \neq j\} & \text{if } n = 2\ell, \\ \{\pm\varepsilon_i \pm \varepsilon_j,\ \pm\varepsilon_i \mid 1 \le i, j \le \ell,\ i \neq j\} & \text{if } n = 2\ell + 1. \end{cases}$$

5. Show that the root spaces for $\mathfrak{sp}_{2n}$ are $\mathbb C(E_{ij} - E_{2n-j+1,2n-i+1})$, where $1 \le i, j \le n$ and $i \neq j$, and $\mathbb C(E_{ij} + E_{2n-j+1,2n-i+1})$, where $n < i \le 2n$ and $j \le n$, or $i \le n$ and $n < j \le 2n$. Also show that
$$R = \{\pm\varepsilon_i \pm \varepsilon_j,\ \pm 2\varepsilon_i \mid 1 \le i, j \le n,\ i \neq j\}.$$

6. Show that $\mathfrak{sp}_{2n}$ is simple, and that $\mathfrak{so}_n$ is simple if $n > 4$ or $n = 3$; show that $\mathfrak{so}_4 = \mathfrak{so}_3 + \mathfrak{so}_3$ and that $\mathfrak{so}_3 = \mathfrak{sl}_2$. Notice that the root spaces are all one-dimensional.

3.2.9 Theorem (Structure Theorem for S.S. Lie Algebras). Let $\mathfrak g$ be a s.s. Lie algebra, $\mathfrak t$ a maximal torus, and let $\mathfrak g = \mathfrak t + \bigoplus_{\lambda \in R} \mathfrak g_\lambda$ be the root-space decomposition. Then

1. $\mathbb C R = \mathfrak t^*$, i.e. the roots of $\mathfrak g$ span $\mathfrak t^*$;

2. $\dim \mathfrak g_\alpha = 1$ for all $\alpha \in R$;

3. For $\alpha, \beta \in R$: if $\alpha + \beta \in R$ then $[\mathfrak g_\alpha, \mathfrak g_\beta] = \mathfrak g_{\alpha+\beta}$, and if $\alpha + \beta \notin R$ and $\beta \neq -\alpha$ then $[\mathfrak g_\alpha, \mathfrak g_\beta] = 0$;

4. $[\mathfrak g_\alpha, \mathfrak g_{-\alpha}] \subseteq \mathfrak t$ is one-dimensional, and $\mathfrak g_\alpha + [\mathfrak g_\alpha, \mathfrak g_{-\alpha}] + \mathfrak g_{-\alpha}$ is isomorphic to $\mathfrak{sl}_2$.

PROOF:

1. If $R$ does not span $\mathfrak t^*$ then there is $0 \neq h \in \mathfrak t$ such that $\alpha(h) = 0$ for all $\alpha \in R$. But then if $x \in \mathfrak g_\alpha$ then $[h, x] = \alpha(h)x = 0$, so $[h, \mathfrak g_\alpha] = 0$ for all $\alpha \in R$. But $[h, \mathfrak t] = 0$ as $\mathfrak t$ is Abelian, so $[h, \mathfrak g] = 0$, contradicting that $\mathfrak g$ is semi-simple and has no center (since it has no Abelian ideals at all).

2,3,4. Observe that $[\mathfrak g_\lambda, \mathfrak g_\mu] \subseteq \mathfrak g_{\lambda+\mu}$: for $t \in \mathfrak t$, $x \in \mathfrak g_\lambda$, and $y \in \mathfrak g_\mu$,
$$[t,[x,y]] = [[t,x],y] + [x,[t,y]] = \lambda(t)[x,y] + \mu(t)[x,y]$$

by the Jacobi identity.

Claim. $(\mathfrak g_\lambda, \mathfrak g_\mu)_{\mathrm{ad}} = 0$ if $\lambda + \mu \neq 0$. Moreover, if $\mathfrak g$ is s.s. then $(\cdot,\cdot)_{\mathrm{ad}}$ restricted to $\mathfrak g_\lambda + \mathfrak g_{-\lambda}$ is non-degenerate.

The moreover part follows from the first part of the claim because, as $(\cdot,\cdot)_{\mathrm{ad}}$ is non-degenerate if $\mathfrak g$ is s.s., it must be non-degenerate on each $\mathfrak g_\lambda + \mathfrak g_{-\lambda}$. To prove the first part, if $x \in \mathfrak g_\lambda$ and $y \in \mathfrak g_\mu$ then $(\operatorname{ad} x \operatorname{ad} y)^N \mathfrak g_\alpha \subseteq \mathfrak g_{\alpha+N(\lambda+\mu)}$. But $\mathfrak g$ is finite dimensional, so if $\lambda + \mu \neq 0$ then $\mathfrak g_{\alpha+N(\lambda+\mu)} = 0$ for $N$ sufficiently large. This implies that $\operatorname{ad} x \operatorname{ad} y$ is nilpotent, and so has zero trace.

Note in particular that $(\cdot,\cdot)_{\mathrm{ad}}|_{\mathfrak t}$ is non-degenerate. (Warning: $(\cdot,\cdot)_{\mathrm{ad}}|_{\mathfrak t}$ is not equal to the Killing form of $\mathfrak t$, which is obviously zero.) Therefore we have an isomorphism
$$\nu : \mathfrak t \to \mathfrak t^* : t \mapsto \big(t' \mapsto (t,t')_{\mathrm{ad}}\big).$$
This induces an inner product on $\mathfrak t^*$, denoted $(\cdot,\cdot)$, by $(\nu(t), \nu(t')) := (t,t')_{\mathrm{ad}}$.

Claim. $\alpha \in R$ implies $-\alpha \in R$.

Indeed, $(\cdot,\cdot)_{\mathrm{ad}}$ is non-degenerate on $\mathfrak g_\alpha + \mathfrak g_{-\alpha}$. But $(\mathfrak g_\alpha, \mathfrak g_\alpha)_{\mathrm{ad}} = 0$ as $2\alpha \neq 0$, so if $\mathfrak g_\alpha \neq 0$ then it must be that $\mathfrak g_{-\alpha} \neq 0$ also. Moreover, $(\cdot,\cdot)_{\mathrm{ad}}$ defines a perfect pairing between $\mathfrak g_\alpha$ and $\mathfrak g_{-\alpha}$, namely $\mathfrak g_\alpha \cong \mathfrak g_{-\alpha}^*$.

Claim. If $x \in \mathfrak g_\alpha$ and $y \in \mathfrak g_{-\alpha}$ then $[x,y] = (x,y)_{\mathrm{ad}}\,\nu^{-1}(\alpha)$.

It is clear that $[x,y] \in \mathfrak t = \mathfrak g_0$, so to check the claim it is enough to check that $(t,[x,y])_{\mathrm{ad}} = (x,y)_{\mathrm{ad}}\,\alpha(t)$ for all $t \in \mathfrak t$, as $(\nu^{-1}(\alpha), t)_{\mathrm{ad}} = \alpha(t)$ by definition. But both sides are equal to $([t,x],y)_{\mathrm{ad}}$ since $x \in \mathfrak g_\alpha$.

Let $e_\alpha \in \mathfrak g_\alpha$ be non-zero, and pick $e_{-\alpha} \in \mathfrak g_{-\alpha}$ such that $(e_\alpha, e_{-\alpha})_{\mathrm{ad}} \neq 0$. Then $[e_\alpha, e_{-\alpha}] = (e_\alpha, e_{-\alpha})_{\mathrm{ad}}\,\nu^{-1}(\alpha)$, and $[\nu^{-1}(\alpha), e_\alpha] = \alpha(\nu^{-1}(\alpha))e_\alpha = (\alpha,\alpha)_{\mathrm{ad}}e_\alpha$. Therefore $\{e_\alpha, e_{-\alpha}, \nu^{-1}(\alpha)\}$ span a Lie subalgebra $\mathfrak m$ of dimension three.

3.2.10 Key Lemma. $(\alpha,\alpha)_{\mathrm{ad}} \neq 0$.

Suppose otherwise. If $(\alpha,\alpha)_{\mathrm{ad}} = 0$ then $[\mathfrak m, \mathfrak m] = \mathbb C\nu^{-1}(\alpha)$, which implies that $\mathfrak m$ is solvable. Lie's theorem implies that $\operatorname{ad}[\mathfrak m,\mathfrak m]$ acts by nilpotent elements, i.e. that $\operatorname{ad}\nu^{-1}(\alpha)$ acts nilpotently. But $\nu^{-1}(\alpha) \in \mathfrak t$ and all elements of $\mathfrak t$ act semi-simply, so these together imply that $\alpha = 0$. This contradiction proves the claim.

Let $h_\alpha = \frac{2\nu^{-1}(\alpha)}{(\alpha,\alpha)_{\mathrm{ad}}}$, and rescale $e_{-\alpha}$ so that $(e_\alpha, e_{-\alpha})_{\mathrm{ad}} = \frac{2}{(\alpha,\alpha)_{\mathrm{ad}}}$. Then $\{e_\alpha, h_\alpha, e_{-\alpha}\}$ span a copy of $\mathfrak{sl}_2$.

Now we use the representation theory of sl2 to deduce the structure of R, g, etc.

Claim. $\dim \mathfrak g_{-\alpha} = 1$ for all $\alpha \in R$.

Choose $e_\alpha, h_\alpha, e_{-\alpha}$ as above, so that we have a map $\mathfrak g_{-\alpha} \to \mathbb C\nu^{-1}(\alpha) = \mathbb C h_\alpha : x \mapsto [e_\alpha, x]$. If $\dim \mathfrak g_{-\alpha} > 1$ then this map has a kernel, i.e. there is a vector $v$ such that $\operatorname{ad}(e_\alpha)v = 0$ and $\operatorname{ad}(h_\alpha)v = -\alpha(h_\alpha)v = -2v$. But then $v$ is a highest weight vector with negative weight, implying that the $\mathfrak{sl}_2$-submodule generated by $v$ is infinite dimensional, contradicting that $\dim \mathfrak g$ is finite. □

3.2.11 Proposition. With the notation from the proof above,

1. $\frac{2(\alpha,\beta)}{(\alpha,\alpha)} \in \mathbb Z$ for all $\alpha, \beta \in R$;

2. if $\alpha \in R$ and $k\alpha \in R$ then $k = \pm 1$;

3. $\bigoplus_{k \in \mathbb Z}\mathfrak g_{\beta+k\alpha}$ is an irreducible $\mathfrak{sl}_2$-submodule for all $\alpha, \beta \in R$, so the set $\{\beta + k\alpha \in R \mid k \in \mathbb Z\} \cup \{0\}$ is of the form $\beta - p\alpha, \ldots, \beta, \beta + \alpha, \ldots, \beta + q\alpha$, where $p - q = \frac{2(\alpha,\beta)}{(\alpha,\alpha)}$; it is called the "$\alpha$ string through $\beta$";

4. if $\alpha, \beta, \alpha + \beta \in R$ then $[\mathfrak g_\alpha, \mathfrak g_\beta] = \mathfrak g_{\alpha+\beta}$.

PROOF:

3. Let $q = \max\{k \mid \beta + k\alpha \in R\}$. If $v \in \mathfrak g_{\beta+q\alpha}$ is non-zero then $\operatorname{ad} e_\alpha\, v \in \mathfrak g_{\beta+(q+1)\alpha} = 0$ by the choice of $q$, and $\operatorname{ad} h_\alpha\, v = (\beta+q\alpha)(h_\alpha)v$, so $v$ is a highest weight vector for $\mathfrak{sl}_2 = \langle e_\alpha, e_{-\alpha}, h_\alpha\rangle$; its weight is therefore in $\mathbb N$, i.e. $\frac{2(\alpha,\beta)}{(\alpha,\alpha)} + 2q \in \mathbb N$, which implies that $\frac{2(\alpha,\beta)}{(\alpha,\alpha)} \in \mathbb Z$.

1. It follows from the representation theory of $\mathfrak{sl}_2$ that $\beta + q\alpha,\ \beta + (q-1)\alpha,\ \ldots,\ \beta - (q + \frac{2(\alpha,\beta)}{(\alpha,\alpha)})\alpha$ are all roots. We must show that there are no more roots. Let $p = \max\{k \mid \beta - k\alpha \in R\}$ (start at the bottom and squeeze). We now get that $\beta - p\alpha,\ \beta - (p-1)\alpha,\ \ldots,\ \beta + (p - \frac{2(\alpha,\beta)}{(\alpha,\alpha)})\alpha$ are roots. Then $q + \frac{2(\alpha,\beta)}{(\alpha,\alpha)} \le p$ by the definition of $p$, and $p - \frac{2(\alpha,\beta)}{(\alpha,\alpha)} \le q$ by the definition of $q$.

2. If $k\alpha \in R$ then $\frac{2(k\alpha,\alpha)}{(k\alpha,k\alpha)} = \frac{2}{k} \in \mathbb Z$ and $\frac{2(k\alpha,\alpha)}{(\alpha,\alpha)} = 2k \in \mathbb Z$. Hence $k = \pm\frac12, \pm 1, \pm 2$, so it is enough to show that $\alpha \in R$ implies $-2\alpha \notin R$. Suppose $-2\alpha \in R$ and take $v \in \mathfrak g_{-2\alpha}$ non-zero. Then $([e_\alpha, v], e_\alpha)_{\mathrm{ad}} = -(v, [e_\alpha, e_\alpha])_{\mathrm{ad}} = 0$ by invariance. But $[e_\alpha, v] = \lambda e_{-\alpha}$ for some $\lambda$, and $\lambda = 0$ since the Killing form is non-degenerate on $\mathfrak g_{-\alpha} + \mathfrak g_\alpha$. Thus $\operatorname{ad} e_\alpha\, v = 0$ while $h_\alpha v = -4v$: once again a contradiction (negative highest weight).

4. Suppose that $\alpha, \beta, \alpha+\beta$ are roots. Since root spaces are one-dimensional, it is enough to show that $[\mathfrak g_\alpha, \mathfrak g_\beta] \neq 0$. Now $\bigoplus_{k \in \mathbb Z}\mathfrak g_{\beta+k\alpha}$ is an irreducible $\mathfrak{sl}_2$-module, so $\operatorname{ad} e_\alpha : \mathfrak g_{\beta+k\alpha} \to \mathfrak g_{\beta+(k+1)\alpha}$ is an isomorphism if $-p \le k < q$; but $\alpha + \beta \in R$, so $q \ge 1$ and we are done. □

3.2.12 Lemma. $s_\alpha(\beta) := \beta - \frac{2(\alpha,\beta)}{(\alpha,\alpha)}\alpha$ is a root when $\alpha, \beta \in R$.

PROOF: Let $r = \frac{2(\alpha,\beta)}{(\alpha,\alpha)}$. If $r \ge 0$ then $p = q + r \ge r$, and if $r \le 0$ then $q = p - r \ge -r$, so $\beta - r\alpha$ is in the $\alpha$ string through $\beta$ in either case. □

Recall that $R$ spans $\mathfrak t^*$.

3.2.13 Proposition.

1. If $\alpha, \beta \in R$ then $(\alpha,\beta) \in \mathbb Q$.

2. If we choose a basis $\beta_1, \ldots, \beta_\ell$ of $\mathfrak t^*$ with each $\beta_i \in R$, and if $\beta \in R$, then $\beta = \sum_i c_i\beta_i$ where $c_i \in \mathbb Q$ for all $i$.

3. $(\cdot,\cdot)$ is positive definite on the $\mathbb Q$-span of $R$.

PROOF:

1. As $\frac{2(\alpha,\beta)}{(\alpha,\alpha)} \in \mathbb Z$, it is enough to show that $(\alpha,\alpha) \in \mathbb Q$ for all $\alpha \in R$. Let $t, t' \in \mathfrak t$. Then
$$(t,t')_{\mathrm{ad}} = \operatorname{tr}_{\mathfrak g}(\operatorname{ad} t \operatorname{ad} t') = \sum_{\alpha \in R}\alpha(t)\alpha(t')$$
as each $\mathfrak g_\alpha$ is one-dimensional. So if $\lambda, \mu \in \mathfrak t^*$ then
$$(\lambda,\mu) = (\nu^{-1}(\lambda), \nu^{-1}(\mu))_{\mathrm{ad}} = \sum_{\alpha \in R}\alpha(\nu^{-1}(\lambda))\alpha(\nu^{-1}(\mu)) = \sum_{\alpha \in R}(\alpha,\lambda)(\alpha,\mu).$$
In particular, $(\beta,\beta) = \sum_{\alpha \in R}(\alpha,\beta)^2$. Dividing by $(\beta,\beta)^2/4$, we get
$$\frac{4}{(\beta,\beta)} = \sum_{\alpha \in R}\Big(\frac{2(\alpha,\beta)}{(\beta,\beta)}\Big)^2 \in \mathbb Z,$$
which implies $(\beta,\beta) \in \mathbb Q$.

2. Exercise: if $(\cdot,\cdot)$ is a non-degenerate symmetric bilinear form and $\beta_1, \ldots, \beta_\ell$ are linearly independent vectors, then $\det\big((\beta_i,\beta_j)\big)_{ij} \neq 0$. Set $B = \big((\beta_i,\beta_j)\big)_{ij}$, the matrix of inner products. Then $B$ is a matrix with rational entries, $B^{-1}$ exists and is rational, so if $\beta = \sum_i c_i\beta_i$ then $(\beta,\beta_j) = \sum_i c_i(\beta_i,\beta_j)$ and $(c_j)_j = B^{-1}\big((\beta,\beta_j)\big)_j$ is a rational vector.

3. If $\lambda \in \operatorname{span}_{\mathbb Q}(R)$ then $(\lambda,\lambda) = \sum_{\alpha \in R}(\alpha,\lambda)^2 \ge 0$ since it is a sum of squares of rational numbers. □

4 Root Systems

4.1 Abstract Root Systems

Let $V$ be a vector space over $\mathbb R$ (though the theory could be developed over $\mathbb Q$ as well), and let $(\cdot,\cdot)$ be a positive definite symmetric bilinear form. If $\alpha \in V$ is non-zero, define $\alpha^\vee = \frac{2\alpha}{(\alpha,\alpha)}$, so that $(\alpha, \alpha^\vee) = 2$. Define $s_\alpha : V \to V : v \mapsto v - (v, \alpha^\vee)\alpha$, a linear map.

4.1.1 Lemma. $s_\alpha$ is the reflection in the hyperplane perpendicular to $\alpha$. In particular, all its eigenvalues are $1$, except for one which is $-1$, i.e. $s_\alpha^2 = \operatorname{id}$ and $(s_\alpha - 1)(s_\alpha + 1) = 0$. Moreover,
$$s_\alpha \in O(V) = \{\varphi : V \to V \mid (\varphi v, \varphi w) = (v,w) \text{ for all } v, w \in V\}.$$

PROOF: $V = \mathbb R\alpha \oplus \alpha^\perp$, and $s_\alpha v = v$ if $v \in \alpha^\perp$, while $s_\alpha\alpha = \alpha - (\alpha,\alpha^\vee)\alpha = -\alpha$. □

4.1.2 Definition. A root system $R$ in $V$ is a finite set $R \subseteq V$ such that

1. $\mathbb R R = V$ and $0 \notin R$;
2. $(\alpha, \beta^\vee) \in \mathbb Z$ for all $\alpha, \beta \in R$;
3. $s_\alpha R = R$ for all $\alpha \in R$;
(4.) $R$ is reduced if $k\alpha \in R$ implies $k = \pm 1$ for all $\alpha \in R$.

Most of the time we will be concerned with reduced root systems.
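These axioms are easy to verify by brute force for a small example. A minimal sketch for the $A_2$ system $\{e_i - e_j \mid i \neq j\} \subseteq \mathbb R^3$ (the realization used in 3.2.6):

```python
import itertools
import numpy as np

# the A2 root system realized as {e_i - e_j : i != j} inside R^3
roots = [np.eye(3)[i] - np.eye(3)[j]
         for i, j in itertools.permutations(range(3), 2)]

def s(a, b):
    # reflection of b in the hyperplane perpendicular to a: b - (b, a^vee) a
    return b - 2 * np.dot(a, b) / np.dot(a, a) * a

for a in roots:
    for b in roots:
        n = 2 * np.dot(a, b) / np.dot(a, a)
        assert abs(n - round(n)) < 1e-9                      # axiom 2: integrality
        assert any(np.allclose(s(a, b), r) for r in roots)   # axiom 3: s_a R = R

assert len(roots) == 6
```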

4.1.3 Example. Let $\mathfrak g$ be a s.s. Lie algebra over $\mathbb C$ and $\mathfrak t$ a maximal torus. Then $\mathfrak g = \mathfrak t + \bigoplus_{\alpha \in R}\mathfrak g_\alpha$, and $(R,\ \mathbb R R \subseteq \mathfrak t^*)$ is a reduced root system (this example is the whole point of looking at root systems).

4.1.4 Definition. Let $W \subseteq GL_V$ be the group generated by the reflections $s_\alpha$ for $\alpha \in R$, the Weyl group of the root system $R$.

4.1.5 Lemma. The Weyl group $W$ is finite.

PROOF: By property 3, $W$ permutes $R$, so there is a map $W \to S_R$. By property 1 this map is injective. □

4.1.6 Definition. The rank of a root system $(R,V)$ is the dimension of $V$. If $(R,V)$ and $(R',V')$ are root systems then the direct sum root system is $(R \sqcup R',\ V \oplus V')$. An isomorphism of root systems is a bijective linear map (not necessarily an isometry) $\varphi : V \to V'$ such that $\varphi(R) = R'$. A root system which is not isomorphic to a direct sum of root systems is an irreducible root system.

4.1.7 Examples.

1. When the rank is 1, $V = \mathbb R$ and $(x,y) = xy$. Therefore $A_1 = \{\alpha, -\alpha\}$ ($\alpha \neq 0$) is the only root system (up to isomorphism). The Weyl group is $\mathbb Z/2\mathbb Z$.

2. When the rank is 2, $V = \mathbb R^2$ and the inner product is the usual inner product. The root systems are $A_1 \times A_1$, $A_2$, $B_2 = C_2$ and $G_2$ (insert diagrams).

4.1.8 Lemma. If $(R,V)$ is a root system then $(R^\vee, V)$ is a root system.

PROOF: Exercise. □

4.1.9 Definition. A root system is simply laced if all the roots have the same length (i.e. $(\alpha,\alpha)$ is the same for all $\alpha \in R$).

4.1.10 Example. $A_1$, $A_2$, and $A_1 \times A_1$ are simply laced; $B_2$ and $G_2$ are not.

4.1.11 Exercise. If $(R,V)$ is simply laced then $(R,V) \cong (R',V')$, where $R'$ is a root system such that $(\alpha,\alpha) = 2$ for all $\alpha \in R'$ (i.e. $\alpha = \alpha^\vee$ and $|\alpha| = \sqrt 2$). (Hint: if $R$ is irreducible then just rescale.)

4.1.12 Definition. A lattice $L$ is a finitely generated free abelian group $\mathbb Z^n$, equipped with a bilinear form $(\cdot,\cdot) : L \times L \to \mathbb Z$. (We will usually assume that it is positive definite and non-degenerate.)

It follows that $(L \otimes_{\mathbb Z} \mathbb R,\ (\cdot,\cdot))$ is an inner product space, and it is positive definite if $(\cdot,\cdot)$ is positive definite.

4.1.13 Definition. A root of a lattice $L$ is an $\ell \in L$ with $(\ell,\ell) = 2$. Write
$$R_L = \{\ell \in L \mid (\ell,\ell) = 2\} = \{\ell \in L \mid \ell = \ell^\vee\}.$$

If $\alpha \in R_L$ then $s_\alpha(L) \subseteq L$. (Indeed, $s_\alpha\beta = \beta - (\beta,\alpha^\vee)\alpha \in L$.)

4.1.14 Lemma. $R_L$ is a root system in $\mathbb R R_L$ and it is simply laced.

PROOF: $R_L$ is finite since it is the intersection of a compact set (the sphere $\{v \mid (v,v) = 2\}$) with a discrete set ($L$). The rest of the axioms are obvious (exercise). □

4.1.15 Definition. $L$ is generated by roots if $\mathbb Z R_L = L$.

If $L$ is generated by roots then $L$ is an even lattice, i.e. $(\ell,\ell) \in 2\mathbb Z$ for all $\ell \in L$.

4.1.16 Example. Let $L = \mathbb Z\alpha$ with $(\alpha,\alpha) = \lambda$. Then $R_L = \{\pm\alpha\}$ if $\lambda = 2$, and $L$ is generated by roots. If $k^2\lambda \neq 2$ for all $k \in \mathbb Z$ then $R_L = \emptyset$.

4.1.17 Examples.

$A_n$: Consider the square lattice $\mathbb Z^{n+1} = \bigoplus_{i=1}^{n+1}\mathbb Z e_i$, where $(e_i, e_j) = \delta_{ij}$. Let $A_n$ be the sublattice
$$\{\ell \in \mathbb Z^{n+1} \mid (\ell,\ e_1 + \cdots + e_{n+1}) = 0\} = \Big\{\sum_{i=1}^{n+1} a_ie_i \ \Big|\ a_i \in \mathbb Z,\ \sum a_i = 0\Big\}.$$
Then $R_{A_n} = \{e_i - e_j \mid i \neq j\}$ and $|R_{A_n}| = n(n+1)$. If $\alpha = e_i - e_j$ then $s_\alpha$ swaps the $i$th and $j$th coordinates, so $W_{A_n} = \langle s_{ij} \mid i \neq j\rangle = S_{n+1}$.

$D_n$: Consider the square lattice $\mathbb Z^n = \bigoplus_{i=1}^n\mathbb Z e_i$, where $(e_i, e_j) = \delta_{ij}$. Take $R_{D_n} = \{\pm e_i \pm e_j \mid i \neq j\}$ and let
$$D_n = \mathbb Z R_{D_n} = \Big\{\sum_{i=1}^n a_ie_i \ \Big|\ a_i \in \mathbb Z,\ \sum a_i \in 2\mathbb Z\Big\}.$$
Then $|R_{D_n}| = 2n(n-1)$. It is seen that $s_{e_i - e_j}$ swaps the $i$th and $j$th coordinates, and $s_{e_i + e_j}$ swaps them and changes their signs, so $W_{D_n} = (\mathbb Z/2\mathbb Z)^{n-1} \rtimes S_n$ ($(\mathbb Z/2\mathbb Z)^{n-1}$ is the subgroup of even numbers of sign changes). $R_{D_n}$ is irreducible if $n \ge 3$, and $R_{D_3} = R_{A_3}$ and $R_{D_2} = R_{A_1} \times R_{A_1}$.

$E_8$: Let $\Gamma_n = \{(k_1, \ldots, k_n) \mid \text{either } k_i \in \mathbb Z \text{ for all } i \text{ or } k_i \in \frac12 + \mathbb Z \text{ for all } i,\ \text{and } \sum_{i=1}^n k_i \in 2\mathbb Z\}$. Consider $\alpha = (\frac12, \ldots, \frac12)$, so that $(\alpha,\alpha) = \frac n4$. Then $\alpha \in \Gamma_n$, and $\Gamma_n$ is an even lattice only if $8 \mid n$. As an exercise, prove that if $n > 1$ then the roots of $\Gamma_{8n}$ form a root system of type $D_{8n}$ (this requires also proving that $\Gamma_{8n}$ is a lattice). For $n = 1$, show that
$$R_{\Gamma_8} = \{\pm e_i \pm e_j \mid i \neq j\} \cup \{\tfrac12(\pm 1, \ldots, \pm 1) \mid \text{the number of minuses is even}\}.$$
This is the root system of type $E_8$. Notice that $|R_{E_8}| = 2\cdot 7\cdot 8 + 2^7 = 240$. Compute $|W_{E_8}|$ (answer: $|W_{E_8}| = 2^{14}3^55^27$).

$E_6$, $E_7$: As an exercise, prove that if $R$ is a root system and $\alpha \in R$ then $\alpha^\perp \cap R$ is also a root system. In $\Gamma_8$ let $\alpha = (\frac12, \ldots, \frac12)$ and $\beta = e_7 + e_8$. Then $\alpha^\perp \cap R_{\Gamma_8}$ is a root system of type $E_7$, and $(\alpha,\beta)^\perp \cap R_{\Gamma_8}$ is a root system of type $E_6$. Show that $|R_{E_7}| = 126$ and $|R_{E_6}| = 72$, and describe the corresponding root lattices.

As an essential exercise, check this whole example and draw the low dimensional systems (e.g. draw $A_n \subseteq \mathbb Z^{n+1}$ and $R_{A_n}$ for $n = 1, 2$).
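The count $|R_{E_8}| = 240$ can be checked by brute-force enumeration of the two types of roots of $\Gamma_8$ listed above:

```python
import itertools

roots = []

# type 1: +-e_i +- e_j with i < j  (112 roots)
for i, j in itertools.combinations(range(8), 2):
    for si in (1, -1):
        for sj in (1, -1):
            v = [0] * 8
            v[i], v[j] = si, sj
            roots.append(tuple(v))

# type 2: all coordinates +-1/2, with an even number of minus signs  (128 roots)
for signs in itertools.product((1, -1), repeat=8):
    if signs.count(-1) % 2 == 0:
        roots.append(tuple(s * 0.5 for s in signs))

assert all(sum(x * x for x in v) == 2 for v in roots)   # every root has (l,l) = 2
assert len(roots) == 240                                # |R_{E8}| = 2*7*8 + 2^7
```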

root systems is $A_n$ ($n \ge 1$), $D_n$ ($n \ge 4$), $E_6$, $E_7$, and $E_8$. No two of these root systems are isomorphic.

4.1.19 Theorem. The remaining irreducible root systems are $B_2 = C_2$, $B_n$ ($n \ge 3$), $C_n$ ($n \ge 3$), $F_4$, and $G_2$, where $F_4$ is described below, $G_2$ has been drawn, and
$$R_{B_n} = \{\pm e_i,\ \pm e_i \pm e_j \mid i \neq j\} \subseteq \mathbb Z^n$$
and
$$R_{C_n} = \{\pm 2e_i,\ \pm e_i \pm e_j \mid i \neq j\} \subseteq \mathbb Z^n.$$
(Notice that $R_{B_n} = R_{C_n}^\vee$ and $W_{B_n} = W_{C_n} = (\mathbb Z/2\mathbb Z)^n \rtimes S_n$.)

4.1.20 Example. Set $Q = \{(k_1, \ldots, k_4) \mid \text{either all } k_i \in \mathbb Z \text{ or all } k_i \in \frac12 + \mathbb Z\}$. Then
$$F_4 = \{\alpha \in Q \mid (\alpha,\alpha) = 2 \text{ or } (\alpha,\alpha) = 1\}.$$
We have $R_{F_4} = \{\pm e_i,\ \pm e_i \pm e_j\ (i \neq j),\ \tfrac12(\pm e_1 \pm e_2 \pm e_3 \pm e_4)\}$.

We want to choose certain "good" bases of $V$. We make an auxiliary choice of a linear function $f : V \to \mathbb R$ such that $f(\alpha) \neq 0$ for all $\alpha \in R$. A root $\alpha \in R$ is said to be "positive" if $f(\alpha) > 0$ and "negative" if $f(\alpha) < 0$. Write $R^+$ for the collection of positive roots, so that $R = R^+ \sqcup (-R^+)$.

4.1.21 Definition. $\alpha \in R^+$ is a simple root if it is not the sum of two other positive roots. Write $\Pi = \{\alpha_1, \ldots, \alpha_n\}$ for the collection of simple roots.

4.1.22 Proposition.

1. If $\alpha, \beta \in \Pi$ then $\alpha - \beta \notin R$.
2. If $\alpha, \beta \in \Pi$ and $\alpha \neq \beta$ then $(\alpha, \beta^\vee) \le 0$.
3. Every $\alpha \in R^+$ can be written $\alpha = \sum_{\alpha_i \in \Pi} k_i\alpha_i$ with $k_i \in \mathbb N$.
4. Simple roots are linearly independent, so the decomposition above is unique.
5. If $\alpha \in R^+ \setminus \Pi$ then there is $\alpha_i \in \Pi$ such that $\alpha - \alpha_i \in R^+$.
6. $\Pi$ is indecomposable (i.e. it cannot be written as a disjoint union of orthogonal sets) if and only if $R$ is irreducible.

PROOF: Exercise: check all these properties for the above examples, or find a proof from the definition of root system (this is harder). ƒ
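The positivity and simplicity definitions above are easy to experiment with. A small sketch for $A_3$ (the particular generic functional $f$ below is an arbitrary choice of this example, not from the notes):

```python
import itertools
import numpy as np

# roots of A3 realized as e_i - e_j in R^4
n = 4
roots = [tuple(np.eye(n)[i] - np.eye(n)[j])
         for i, j in itertools.permutations(range(n), 2)]

f = np.array([8., 4., 2., 1.])    # a generic linear functional: f(alpha) != 0
pos = [r for r in roots if np.dot(f, r) > 0]

# a positive root is simple iff it is not a sum of two positive roots
sums = {tuple(np.add(a, b)) for a, b in itertools.product(pos, pos)}
simple = [r for r in pos if r not in sums]

assert len(pos) == 6      # A3 has 6 positive roots e_i - e_j, i < j
assert len(simple) == 3   # and 3 simple roots e_i - e_{i+1}
```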

Write $\Pi = \{\alpha_1, \ldots, \alpha_\ell\}$, where $\ell$ is the rank of $R$, i.e. $\dim V$. Set $a_{ij} = (\alpha_i, \alpha_j^\vee)$ and $A = (a_{ij})_{\ell \times \ell}$, the Cartan matrix.

4.1.23 Proposition.

1. $a_{ij} \in \mathbb Z$, $a_{ii} = 2$, and $a_{ij} \le 0$ if $i \neq j$;
2. $a_{ij} = 0$ if and only if $a_{ji} = 0$;
3. $\det A > 0$;
4. All principal subminors have positive determinant.

PROOF: (written down) □

A matrix which satisfies the conditions of 4.1.23 is an abstract Cartan matrix.
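As an illustration (not from the notes), the Cartan matrix of $A_3$ can be computed directly from the simple roots $\alpha_i = e_i - e_{i+1}$:

```python
import numpy as np

n = 4  # type A_{n-1} = sl_n; simple roots alpha_i = e_i - e_{i+1}
simple = [np.eye(n)[i] - np.eye(n)[i + 1] for i in range(n - 1)]

# a_ij = (alpha_i, alpha_j^vee) = 2 (alpha_i, alpha_j) / (alpha_j, alpha_j)
A = np.array([[2 * np.dot(a, b) / np.dot(b, b) for b in simple] for a in simple])

assert np.allclose(A, [[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
assert round(np.linalg.det(A)) == 4   # det of the A_{n-1} Cartan matrix is n
```

Conditions 1–4 of 4.1.23 can be read off from the computed matrix.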

4.2 Dynkin Diagrams

Special graphs called Dynkin diagrams are used to summarize the information in the Cartan matrix. The vertices are the set of simple roots, and $\alpha_i$ is joined to $\alpha_j$ by $a_{ij}a_{ji}$ lines. Note that $a_{ij}a_{ji}$ is always 1 if the root system is simply laced, while $B_n$ and $C_n$ have a double edge, and $G_2$ has a triple edge. If $a_{ij}a_{ji}$ is 2 or 3 then put an arrow in the direction of the shorter root.

$A_n$ ($n \ge 1$): a chain o—o—···—o—o of $n$ vertices.

$B_n$ ($n \ge 2$): a chain o—o—···—o⇒o, with a double edge (arrow towards the last, short, root).

$C_n$ ($n \ge 3$): a chain o—o—···—o⇐o, with a double edge (arrow towards the long root).

$D_n$ ($n \ge 4$): a chain of $n - 2$ vertices with two further vertices attached to its last vertex (one trivalent fork at one end).

$G_2$: o⇛o (a triple edge).

$F_4$: o—o⇒o—o.

$E_6$, $E_7$, $E_8$: a chain of 5, 6, 7 vertices respectively, with one extra vertex attached to the third vertex from one end.

4.2.1 Exercises.

1. Show that these really are the Dynkin diagrams of the named root systems.

2. Show that the Dynkin diagram of $A^T$ is that of $A$, except with any arrows reversed. Also check that $A^T$ is the Cartan matrix of $(R^\vee, V)$.

3. Compute the determinants of the Cartan matrices.

4. For type $A_n$, compute the inverse of the Cartan matrix. (Hint: $A_1$ has matrix $[2]$, $A_2$ has matrix $\left(\begin{smallmatrix}2&-1\\-1&2\end{smallmatrix}\right)$, so the answer is something of the form $\frac{1}{\det A}[\text{positive integers}]$. What do the positive integers represent?)

5. Redo the above exercise for all other types.

Remark. $SL_n = \{A \in GL_n \mid \det A = 1\}$, and $Z_{SL_n} = \{\lambda I \mid \lambda^n = 1\} \cong \mathbb Z/n\mathbb Z$. Here $n = \det A$ for the Cartan matrix $A$ of type $A_{n-1}$. In general, the determinant of the Cartan matrix is $|Z_G|$, where $G$ is a simply connected compact or algebraic group of the given type.

Notice that $R$ is irreducible if and only if the Dynkin diagram is connected.

4.2.2 Theorem. An indecomposable abstract Cartan matrix, after reordering the simple roots, has Dynkin diagram of one of the given types.

PROOF: We will classify the connected Dynkin diagrams of Cartan matrices.

First, it is immediate from 1–4 of the definition of an abstract Cartan matrix (4.1.23) that any subdiagram of a Dynkin diagram is a disjoint union of Dynkin diagrams. All of the rank 2 Cartan matrices are $A = \left(\begin{smallmatrix}2&-a\\-b&2\end{smallmatrix}\right)$, where $4 - ab = \det A > 0$, so $(a,b) \in \{(1,2),(2,1),(1,3),(3,1),(1,1)\}$ (or $a = b = 0$).

Second, Dynkin diagrams contain no cycles. Write $\Pi = \{\alpha_1, \ldots, \alpha_n\}$ and put
$$\alpha = \sum_{i=1}^n \frac{\alpha_i}{\sqrt{(\alpha_i,\alpha_i)}}.$$
Then
$$0 < (\alpha,\alpha) = n + \sum_{i<j}\frac{2(\alpha_i,\alpha_j)}{\sqrt{(\alpha_i,\alpha_i)(\alpha_j,\alpha_j)}} = n - \sum_{i<j}\sqrt{a_{ij}a_{ji}}.$$
Applying this to a cycle subdiagram with $m$ vertices and $m$ edges would give $0 < m - \sum_{\text{edges}}\sqrt{a_{ij}a_{ji}} \le m - m = 0$, a contradiction.

Third, if the diagram is simply laced (i.e. has only simple edges) then it is one of $A$, $D$ or $E$. Indeed, if it is not $A_\ell$ then it must have a branch point. But it cannot have $\tilde D_4$ as a subdiagram, so it can only have trivalent branch points. Further, it has only one branch point, since otherwise it has $\tilde D_\ell$ as a subdiagram. Since it does not contain any $\tilde E$, it must be one of $A$, $D$, or $E$.

Fourth (this part is an exercise), if a Dynkin diagram has $G_2$ as a subdiagram then it is $G_2$. (Hint: $\tilde G_2$ is prohibited and the rest do not have positive determinant.) If there is a double bond then there is only one double bond (because of $\tilde C_n$), and there are no branch points (because of $\tilde B_n$). If the double bond is in the middle then it is $F_4$ (by $\tilde F_4$), and if it is on an end then it is the $B$ or $C$ series. (Warning: we are often using the fact that the direction of the arrows doesn't matter when computing the determinant.) □
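The exclusion of cycles can also be seen at the level of determinants: the "cycle" matrix (the affine $\tilde A_{n-1}$ pattern) has every row summing to zero, hence determinant zero, violating condition 3 of 4.1.23. A quick check:

```python
import numpy as np

def cycle_matrix(n):
    # the Cartan-type matrix of an n-cycle (affine A_{n-1} pattern):
    # 2 on the diagonal, -1 between cyclically adjacent vertices
    A = 2 * np.eye(n)
    for i in range(n):
        A[i, (i + 1) % n] -= 1
        A[(i + 1) % n, i] -= 1
    return A

for n in range(3, 8):
    # every row sums to zero, so the determinant vanishes
    assert abs(np.linalg.det(cycle_matrix(n))) < 1e-9
```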

4.3 Existence and Uniqueness of an Associated Lie Algebra

So far we have accomplished the following:

$\mathfrak g$ a s.s. Lie algebra

→ (choosing $\mathfrak t$ a maximal torus) →

$R$ a root system

→ (choosing $f : V \to \mathbb R$ a linear map) →

$R^+$ positive roots, $\Pi$ simple roots, Cartan matrix $A$,

and we have classified all possible Cartan matrices $A$. But the choices made are inconsequential.

4.3.1 Theorem. All maximal tori are conjugate. More precisely, for $K$ algebraically closed and of characteristic zero, if $\mathfrak t_1$ and $\mathfrak t_2$ are maximal tori of $\mathfrak g$ then there is
$$g \in (\operatorname{Aut}\mathfrak g)^0 = \{g \in GL(\mathfrak g) \mid g : \mathfrak g \to \mathfrak g \text{ is a Lie algebra homomorphism}\}^0$$
(the "$0$" means "connected component of the identity") such that $g(\mathfrak t_1) = \mathfrak t_2$.

4.3.2 Theorem. All $R^+$ are conjugate. More precisely, let $(R,V)$ be a root system and $f_1, f_2 : V \to \mathbb R$ be two linear maps which define positive roots $R^+_{(1)}$ and $R^+_{(2)}$. Then there is a unique $w \in W_R$ such that $wR^+_{(1)} = R^+_{(2)}$, and we have $w\Pi_{(1)} = \Pi_{(2)}$.

4.3.3 Corollary. $\mathfrak g$ determines the Cartan matrix, i.e. the Cartan matrix is independent of the choices made.

There is a one-to-one correspondence between Cartan matrices and s.s. Lie algebras.

4.3.4 Theorem. Let $\mathfrak g_i$ ($i = 1, 2$) be s.s. Lie algebras over $\mathbb C$, with $\mathfrak t_i$ maximal tori and $R_i$, $R_i^+$, $\Pi_i$, $A_i$ choices of associated objects. If, after reordering the indices, $A_1 = A_2$, then there is a Lie algebra isomorphism $\varphi : \mathfrak g_1 \to \mathfrak g_2$ such that $\varphi(\mathfrak t_1) = \mathfrak t_2$, $\varphi(R_1) = R_2$, etc.

4.3.5 Theorem. Let $A$ be an abstract Cartan matrix. Then there is a s.s. Lie algebra $\mathfrak g$ over $\mathbb C$ with Cartan matrix $A$.

Remark.

1. We already know this for $A_n$, $B_n$, $C_n$, and $D_n$ (namely $\mathfrak{sl}_{n+1}$, $\mathfrak{so}_{2n+1}$, $\mathfrak{sp}_{2n}$, and $\mathfrak{so}_{2n}$, respectively).

2. It remains to check that $E_6$, $E_7$, $E_8$, $F_4$, $G_2$ exist. We can do this case by case, e.g. $G_2 = \operatorname{Aut}(\mathbb O)$. There are abstract constructions in the literature. Existence and uniqueness will also follow from the following.

4.3.6 Definition. A generalized Cartan matrix is a matrix $A$ such that

1. $a_{ij} \in \mathbb Z$, $a_{ii} = 2$, and $a_{ij} \le 0$ for $i \neq j$; and
2. $a_{ij} = 0$ if and only if $a_{ji} = 0$.

Let $A$ be such an $n \times n$ matrix. Define $\tilde{\mathfrak g}$ to be the Lie algebra with generators $\{E_i, F_i, H_i \mid i = 1, \ldots, n\}$ and relations

1. $[H_i, H_j] = 0$ for all $i, j$;
2. $[H_i, E_j] = a_{ij}E_j$;
3. $[H_i, F_j] = -a_{ij}F_j$;
4. $[E_i, F_j] = 0$ for $i \neq j$; and
5. $[E_i, F_i] = H_i$ for all $i$.

Let $\mathfrak g$ be the quotient of $\tilde{\mathfrak g}$ by the additional relations

1. $(\operatorname{ad} E_i)^{1-a_{ij}}E_j = 0$ for $i \neq j$; and
2. $(\operatorname{ad} F_i)^{1-a_{ij}}F_j = 0$ for $i \neq j$.

For example, if $a_{ij} = 0$ then $[E_i, E_j] = 0$, and if $a_{ij} = -1$ then $[E_i, [E_i, E_j]] = 0$. These are known as the Serre relations, though they are due to Harish-Chandra and Chevalley.
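A numerical check of these relations in the smallest non-trivial case, $\mathfrak{sl}_3$ (type $A_2$, where $a_{12} = -1$), using the standard matrix realizations $E_1 = E_{12}$, $E_2 = E_{23}$:

```python
import numpy as np

def E(i, j, n=3):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

def br(x, y):
    return x @ y - y @ x

E1, E2 = E(0, 1), E(1, 2)        # Chevalley generators e_1, e_2 of sl3
H1 = np.diag([1., -1., 0.])      # the coroot h_1
a12 = -1                         # Cartan matrix entry for type A2

assert np.allclose(br(H1, E2), a12 * E2)                  # [H_1, E_2] = a_12 E_2
assert not np.allclose(br(E1, E2), np.zeros((3, 3)))      # [E_1, E_2] = E_13 != 0
assert np.allclose(br(E1, br(E1, E2)), np.zeros((3, 3)))  # (ad E_1)^{1-a_12} E_2 = 0
```

The single bracket $[E_1, E_2]$ is non-zero (it spans the root space of $\alpha_1 + \alpha_2$), and the Serre relation kills exactly the next bracket.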

4.3.7 Theorem.

1. Let $A$ be indecomposable. Then $\tilde{\mathfrak g}$ has a unique maximal ideal, and $\mathfrak g$ is its quotient. Therefore $\mathfrak g$ is a simple Lie algebra (which need not be finite dimensional).

2. Hence if $\mathfrak g'$ is a finite dimensional s.s. Lie algebra with Cartan matrix $A$, then the map $\tilde{\mathfrak g} \to \mathfrak g'$ that takes $E_i \mapsto e_{\alpha_i}$, $F_i \mapsto e_{-\alpha_i}$, and $H_i \mapsto \frac{2\nu^{-1}(\alpha_i)}{(\alpha_i,\alpha_i)_{\mathrm{ad}}}$ is surjective, and hence factors as $\tilde{\mathfrak g} \to \mathfrak g \to \mathfrak g'$.

3. $\mathfrak g$ is finite dimensional if and only if $A$ is a Cartan matrix (i.e. all principal subminors have positive determinant). (This last part is equivalent to the Existence and Uniqueness Theorem.)

4.3.8 Definition. $\mathfrak g$ is the Kac–Moody algebra associated to the generalized Cartan matrix $A$.

5 Representation Theory

5.1 Theorem of the Highest Weight

For this section let $\mathfrak g$ be a s.s. Lie algebra, $\mathfrak g = \mathfrak t + \bigoplus_{\alpha \in R}\mathfrak g_\alpha$ the root space decomposition, $R^+$ a choice of positive roots, and $\Pi = \{\alpha_1, \ldots, \alpha_\ell\}$ the corresponding simple roots. Let $V$ be a (finite dimensional) representation of $\mathfrak g$. Set, for $\lambda \in \mathfrak t^*$, $V_\lambda = \{v \in V \mid tv = \lambda(t)v \text{ for all } t \in \mathfrak t\}$, the $\lambda$-weight space of $V$.

Define $\mathfrak n^+ = \bigoplus_{\alpha \in R^+}\mathfrak g_\alpha$ and $\mathfrak n^- = \bigoplus_{\alpha \in R^+}\mathfrak g_{-\alpha}$, so that $\mathfrak g = \mathfrak n^+ + \mathfrak t + \mathfrak n^-$. Notice that $\mathfrak n^+$, $\mathfrak t$, and $\mathfrak n^-$ are all Lie subalgebras of $\mathfrak g$ (and $\mathfrak n^+$ and $\mathfrak n^-$ are nilpotent). Recall also that for each $\alpha \in R$, $\mathfrak g_\alpha + [\mathfrak g_\alpha, \mathfrak g_{-\alpha}] + \mathfrak g_{-\alpha} \cong \mathfrak{sl}_2$, and we write $(\mathfrak{sl}_2)_\alpha := \langle e_\alpha, h_\alpha, e_{-\alpha}\rangle \le \mathfrak g$.

5.1.1 Proposition. For $\mathfrak g$ s.s. and $V$ f.d.,

1. $V = \bigoplus_{\lambda \in \mathfrak t^*} V_\lambda$, i.e. each $t \in \mathfrak t$ acts semi-simply.

2. If $V_\lambda \neq 0$ then $\lambda(h_\alpha) \in \mathbb Z$ for all $\alpha \in R$.

PROOF: For each $\alpha \in R$, $(\mathfrak{sl}_2)_\alpha$ acts on $V$, and $V$ is finite dimensional, so $h_\alpha$ acts semi-simply with eigenvalues in $\mathbb Z$ by the representation theory of $\mathfrak{sl}_2$. From this, 2 is immediate. As for 1, the $h_\alpha$ span $\mathfrak t$ as the roots span $\mathfrak t^*$; in fact $h_{\alpha_1}, \ldots, h_{\alpha_\ell}$ is a basis of $\mathfrak t$. □

5.1.2 Definition. The character of $V$ is $\operatorname{ch}(V) = \sum_\lambda \dim V_\lambda\, e^\lambda$, where $e^\lambda \in \mathbb Z[\mathfrak t^*]$ is a formal symbol intended to remind you of the exponential map.

5.1.3 Examples.

1. Recall that the representations $L(n)$ of $\mathfrak{sl}_2$ have characters
$$\operatorname{ch}(L(n)) = z^n + z^{n-2} + \cdots + z^{2-n} + z^{-n} = \frac{z^{n+1} - z^{-(n+1)}}{z - z^{-1}},$$
where $z = e^{\alpha/2}$ and $R_{\mathfrak{sl}_2} = \{\pm\alpha\}$.

2. Take $\mathfrak g = \mathfrak{sl}_3$ and consider its adjoint representation. Then
$$\operatorname{ch}(\mathfrak g) = \dim\mathfrak t + \sum_{\alpha \in R} e^\alpha = zw + z + w + 2 + w^{-1} + z^{-1} + z^{-1}w^{-1},$$
where $z = e^\alpha$ and $w = e^\beta$, recalling $R_{\mathfrak{sl}_3} = \{\pm\alpha, \pm\beta, \pm(\alpha+\beta)\}$.

5.1.4 Definition. The lattice of roots is $Q := \mathbb Z R$, and the lattice of weights is
$$P := \{\gamma \in \mathbb Q R \mid (\gamma, \alpha^\vee) \in \mathbb Z \text{ for all } \alpha \in R\}.$$

Recall that $(\gamma, \alpha^\vee) = \gamma(h_\alpha)$ by the definition of the normalization of $h_\alpha$, so the second part of 5.1.1 says that if $V_\lambda \neq 0$ then $\lambda \in P$. In particular, $\operatorname{ch}(V) \in \mathbb Z[P]$. Notice that if $\alpha, \beta \in R$ then $(\alpha, \beta^\vee) = \frac{2(\alpha,\beta)}{(\beta,\beta)} \in \mathbb Z$, so $Q \subseteq P$ (i.e. each root is a weight).
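The closed form for $\operatorname{ch}(L(n))$ in Example 5.1.3 is just a finite geometric series, so it can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

z = Fraction(3, 2)   # any test value with z != z^{-1}
for n in range(8):
    # ch L(n) = z^n + z^(n-2) + ... + z^(-n)
    lhs = sum(z**k for k in range(-n, n + 1, 2))
    rhs = (z**(n + 1) - z**(-(n + 1))) / (z - z**(-1))
    assert lhs == rhs
```

Since both sides are rational functions agreeing for infinitely many values, checking them at one generic rational point for each $n$ is already convincing.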

5.1.5 Examples.

1. In the case of $\mathfrak{sl}_2$, $Q = \mathbb Z\alpha$ and $P = \mathbb Z(\frac\alpha2)$ since $(\alpha,\alpha) = 2$. Therefore $|P/Q| = 2$.

2. For $\mathfrak{sl}_3$, $Q = \mathbb Z\alpha + \mathbb Z\beta$ and $P = \mathbb Z\big(\frac{2\alpha+\beta}3\big) + \mathbb Z\big(\frac{\alpha+2\beta}3\big)$, so $|P/Q| = 3$.

5.1.6 Exercises.

1. Prove that $|P/Q| = \det A$, the determinant of the Cartan matrix.

2. Show that P is preserved under the action of the Weyl group on t∗.

5.1.7 Proposition. If $V$ is f.d. then $\dim V_\lambda = \dim V_{w\lambda}$ for all $w \in W$.

PROOF: Since the Weyl group is generated by $\{s_\alpha \mid \alpha \in R\}$ (by definition), it suffices to prove that $\dim V_\lambda = \dim V_{s_\alpha\lambda}$. We give sketches of three possible ways to prove the result.

1. Suppose that $G$ is an algebraic or compact group with Lie algebra $\mathfrak g$, and let $T$ be the torus corresponding to $\mathfrak t$. Then $W = N(T)/T$. (E.g. in the case of $\mathfrak g = \mathfrak{sl}_n$, $G = SL_n$ and $T$ is the collection of diagonal matrices; then $N(T)$ is the collection of "monomial matrices" and it is seen that $W_{\mathfrak{sl}_n} = S_n = N(T)/T$.) Whence for any $w \in W$ there is $\dot w \in N(T)$ such that $w = \dot wT$, so for any $t \in T$ and $v \in V_\lambda$,
$$t\dot wv = \dot w(\dot w^{-1}t\dot w)v = \dot w\,\lambda(\dot w^{-1}t\dot w)v = (w\lambda)(t)\,\dot wv,$$

so w˙(Vλ) = Vwλ.

2. We can mimic the above construction in $\mathfrak g$. Observe that in $\mathfrak{sl}_2$ we may decompose
$$\begin{pmatrix}0&-1\\1&0\end{pmatrix} = \begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&-1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix} = \exp(f)\exp(-e)\exp(f).$$

In general, for each root $\alpha \in R$ define $\dot s_\alpha = \exp(e_{-\alpha})\exp(-e_\alpha)\exp(e_{-\alpha})$. Exercise:

a) Each $e_\alpha$ acts nilpotently on $V$ (as $V$ is f.d.), so $\dot s_\alpha$ is a finite sum on $V$ and is hence well-defined.

b) $\dot s_\alpha^2 =: \varepsilon_\alpha : V_\lambda \to V_\lambda$, where $\varepsilon_\alpha^2 = 1$ and $\varepsilon_\alpha$ is multiplication by a constant on $V_\lambda$. (Which constant?)

c) $\dot s_\alpha(V_\lambda) = V_{s_\alpha\lambda}$.

3. But we don't really need to do any of this. Consider $V$ as a module over $\mathfrak t$ and $(\mathfrak{sl}_2)_\alpha$. By the representation theory of $\mathfrak{sl}_2$ it breaks up as a direct sum of irreducibles, each of which is a string $\mu, \mu - \alpha, \mu - 2\alpha, \ldots, \mu - m\alpha$, where $m = \mu(h_\alpha)$. These strings are $s_\alpha$-invariant by the $\mathfrak{sl}_2$-theory. □

It follows that when $V$ is f.d., $\operatorname{ch}(V) \in \mathbb Z[P]^W$.

5.1.8 Definition. For $\mu, \lambda \in P$, write $\mu \le \lambda$ if $\lambda - \mu = \sum_{i=1}^\ell k_i\alpha_i$ with $k_i \in \mathbb N$.

5.1.9 Exercise. Prove that "$\le$" is a partial order on $P$ and show that $\{\mu \in P \mid \mu \le \lambda\}$ is an obtuse cone in $P$.

5.1.10 Definition. $\mu \in P$ is a highest weight if $V_\mu \neq 0$ and $V_\lambda \neq 0$ implies $\lambda \le \mu$. $v \in V_\mu$ is a singular vector if $v \neq 0$ and $e_\alpha v = 0$ for all $\alpha \in R^+$. (Notice that the weight of $e_\alpha v$ is $\mu + \alpha$, so if $\mu$ is a highest weight then $v$ is singular.) $\mu$ is an extremal weight if $w\mu$ is a highest weight for some $w \in W$.

It is clear that highest weights exist when $V$ is f.d.

5.1.11 Theorem (Theorem of the Highest Weight). Let g be a s.s. Lie algebra over an algebraically closed field K of characteristic zero.

1. a) If V is a f.d. representation of g then V is a direct sum of irreducible representations.
b) Let P⁺ = {λ ∈ P | (λ, α∨) ≥ 0 for all α ∈ R⁺} = {λ ∈ P | (λ, α_i) ≥ 0 for all α_i ∈ Π}, the cone of dominant weights. The f.d. irreducible representations of g are in bijection with P⁺ via λ ↦ L(λ).

2. More precisely, let V be a f.d. irreducible representation of g and let v ∈ V_λ be a singular vector. Then

a) dim Vλ = 1, i.e. Vλ = K v.

b) If V_μ ≠ 0 then μ ≤ λ, i.e. λ is a highest weight.
c) λ(h_{α_i}) ∈ N for all α_i ∈ Π, i.e. λ ∈ P⁺.

Moreover, if V′ is another f.d. irreducible representation of g and v′ ∈ V′ is a singular vector of weight λ then there is a unique isomorphism of g-modules V → V′ sending v to v′. We say that V has highest weight λ and denote this g-module by L(λ).

3. Given λ ∈ P⁺ there is a f.d. representation of g with highest weight λ.

5.1.12 Corollary.
1. For a highest weight λ,

ch(L(λ)) = e^λ + Σ_{μ<λ} a_{μλ} e^μ,

and further

ch(L(λ)) = m_λ + Σ_{μ<λ, μ∈P⁺} ã_{μλ} m_μ,

where m_μ = Σ_{γ∈Wμ} e^γ. The {m_μ | μ ∈ P⁺} are a basis of Z[P]^W, and in fact the {ch(L(λ)) | L(λ) irreducible} are a basis as well.

2. If V and W are f.d. representations of g and ch(V) = ch(W) then V ≅ W.

We will (slowly) prove (most of) this theorem and corollary.

5.1.13 Definition. Let {ω_1, ..., ω_ℓ} be the basis of P dual to the basis of simple co-roots, i.e. (ω_i, α_j∨) = δ_ij for i, j = 1, ..., ℓ. These are the fundamental weights, and we could have defined P⁺ = ⊕_i Nω_i. Yet another way to think of P⁺, the parameters of irreducible representations, is as

P⁺ = {λ ∈ t* | ⟨λ, h_i⟩ ∈ N for all i}.

5.1.14 Examples.
1. Consider the adjoint representation of g on itself. If g = t + ⊕_{α∈R} g_α then the character of this representation is ℓ + Σ_{α∈R} e^α. What is the highest weight? Recall that a highest weight is a weight λ such that λ + α_i ∉ R for all α_i ∈ Π. This implies that λ = θ, the highest root, is the highest weight, and g = L(θ).

2. Representations of sl_n. There is the standard representation on C^n, which has character e^{ε_1} + ··· + e^{ε_n}. The highest weight is ε_1, since ε_2 + (ε_1 − ε_2) = ε_1, ε_3 + (ε_2 − ε_3) = ε_2, etc.

3. If V and W are representations then so is V ⊗ W. Whence V ⊗ V is a representation, but σ : V ⊗ V → V ⊗ V commutes with the g-action, so S²V and ∧²V are g-modules. These need not be irreducible in general, but ∧^s C^n and S^m C^n are irreducible sl_n-modules.

∧^s C^n: for s ≤ n − 1 this has basis {e_{i_1} ∧ ··· ∧ e_{i_s} | i_1 < ··· < i_s}, with associated weights ε_{i_1} + ··· + ε_{i_s}. The action of an element x ∈ g on v_1 ∧ ··· ∧ v_s is Σ_{i=1}^s v_1 ∧ ··· ∧ (x v_i) ∧ ··· ∧ v_s. Notice E_i e_j = δ_{j,i+1} e_{j−1}, so E_i(e_{i_1} ∧ ··· ∧ e_{i_s}) = 0 for all i if and only if e_{i_1} ∧ ··· ∧ e_{i_s} = e_1 ∧ ··· ∧ e_s. Whence the highest weight is ω_s = ε_1 + ··· + ε_s, and ∧^s C^n = L(ω_s) for all 1 ≤ s < n. (Exercise: compute ch L(ω_s).)

S^m C^n: Exercise: show that S^m C^n has basis {e_{i_1} ··· e_{i_m} | i_1 ≤ ··· ≤ i_m}, that the unique highest weight vector is e_1^m (so S^m C^n = L(mω_1) is irreducible), and compute its character.

5.1.15 Exercises.
1. Show that V ⊗ V = S²V + ∧²V.
2. Show that V ⊗ V* → Hom(V, V) : v ⊗ w* ↦ (x ↦ w*(x)v) is an isomorphism of g-modules, and that Hom_g(V, V) is a submodule, which is 1-dimensional if V is irreducible.

3. If g = sl_n then show that (C^n) ⊗ (C^n)* ≅ sl_n ⊕ C (i.e. the adjoint representation plus the trivial representation).

4. Let the algebras so_{2n}, sp_{2n} act on C^{2n}.
a) Compute the highest weight of C^{2n};
b) Show that (C^{2n}) ≅ (C^{2n})* as g-modules;
c) Show that (C^{2n}) ⊗ (C^{2n}) has 3 irreducible components, and find them.

5.2 Proof of the Theorem of the Highest Weight

Let g be any Lie algebra with a non-degenerate invariant symmetric bilinear form (·,·) (e.g. the Killing form if g is s.s.). Let {x_1, ..., x_N} be a basis of g, with dual basis {x^1, ..., x^N}, so that (x_i, x^j) = δ_ij. Define Ω = Σ_{i=1}^N x_i x^i, the Casimir of g, an operator on any representation of g. (Exercise: show that Ω is independent of the choice of basis, though it does depend on the form.)

Claim. Ω is central, i.e. Ωx = xΩ for all x ∈ g.

Indeed,

[Ω, x] = [Σ_i x_i x^i, x] = Σ_i x_i [x^i, x] + Σ_i [x_i, x] x^i,

as [·, x] is a derivation. Now write [x^i, x] = Σ_j a_ij x^j and [x_i, x] = Σ_j b_ij x_j, so

a_ij = ([x^i, x], x_j) = −([x, x^i], x_j) = −(x, [x^i, x_j]) = (x, [x_j, x^i]) = ([x, x_j], x^i) = −([x_j, x], x^i) = −b_ji.

Thus [Ω, x] = 0.

Now let g be s.s., g = t + ⊕_{α∈R} g_α, and (·,·) the Killing form. Choose a basis u_1, ..., u_ℓ of t, with dual basis u^1, ..., u^ℓ of t*, and e_α ∈ g_α, e_{−α} ∈ g_{−α}. We may normalize the e_α so that (e_α, e_{−α}) = 1; uniqueness of the dual basis then implies that the dual of e_α is e_{−α}, and [e_α, e_{−α}] = ν^{−1}(α). Then

Ω = Σ_{i=1}^ℓ u_i u^i + Σ_{α∈R⁺} e_α e_{−α} + Σ_{α∈R⁺} e_{−α} e_α
  = Σ_{i=1}^ℓ u_i u^i + 2 Σ_{α∈R⁺} e_{−α} e_α + Σ_{α∈R⁺} ν^{−1}(α).

Write ρ = (1/2) Σ_{α∈R⁺} α, so that

Ω = Σ_{i=1}^ℓ u_i u^i + 2ν^{−1}(ρ) + 2 Σ_{α∈R⁺} e_{−α} e_α.

Claim. Let V be a g-module with singular vector v ∈ V of weight λ. Then Ωv = (|λ + ρ|² − |ρ|²)v.

Indeed, e_α v = 0 for all α ∈ R⁺ and tv = λ(t)v for all t ∈ t, so

Ωv = ( Σ_{i=1}^ℓ λ(u_i)λ(u^i) + λ(2ν^{−1}(ρ)) ) v = ( (λ, λ) + 2(λ, ρ) ) v = ( (λ+ρ, λ+ρ) − (ρ, ρ) ) v.

Claim. If V is also irreducible then Ω acts as (λ+ρ, λ+ρ) − (ρ, ρ) on all of V.

This claim follows from Schur's Lemma.
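The centrality of Ω can be verified by hand in the smallest case. The sketch below uses the trace form (x, y) = tr(xy) on sl_2 rather than the Killing form (the two differ by a scalar, so Ω is only rescaled), and works inside the 2-dimensional representation:

```python
from fractions import Fraction as Fr

def mul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

# For the trace form, the dual basis of (e, f, h) is (f, e, h/2),
# so Omega = e*f + f*e + h*(h/2) in this representation.
half_h = [[Fr(x, 2) for x in row] for row in h]
terms = [mul(e, f), mul(f, e), mul(h, half_h)]
Omega = [[sum(T[i][j] for T in terms) for j in range(2)] for i in range(2)]

# Omega commutes with the action of every basis element of sl2 ...
for x in (e, f, h):
    assert mul(Omega, x) == mul(x, Omega)

# ... because it acts as a scalar (3/2 here), as Schur's Lemma predicts.
assert Omega == [[Fr(3, 2), 0], [0, Fr(3, 2)]]
```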

5.2.1 Definition. Let g be any Lie algebra over a field K. The universal enveloping algebra U(g) is the free associative algebra generated by g modulo the relations {xy − yx − [x, y] | x, y ∈ g}.

5.2.2 Exercises.
1. An enveloping algebra for g is an associative algebra A and a map ι : g → A such that ιx ιy − ιy ιx = ι[x, y]. Show that U(g) is the initial object in the category of enveloping algebras. For example, if V is a representation of g then A = End(V) is an enveloping algebra. Naturally, Ω ∈ U(g). Let (U(g))_n be the collection of sums of products of at most n elements of g, e.g. (U(g))_0 = K, (U(g))_1 = K + g, etc.

2. Show that U_n U_m ⊆ U_{n+m}, i.e. that U(g) is a filtered algebra. Also show that if x ∈ U_n and y ∈ U_m then xy − yx ∈ U_{n+m−1}.
3. Prove that the monomials x_1^{a_1} ··· x_N^{a_N} (for a fixed basis x_1, ..., x_N of g) span U(g), i.e. that Sg → gr U(g) is surjective.
4. Show that the previous exercise gives a well-defined map Sg → gr U(g).

5.2.3 Theorem (PBW). gr U(g) = ⊕_n U_n/U_{n−1} ≅ Sg. The natural map is an isomorphism.

Equivalently (and in a language I can understand), the PBW theorem states that if x_1, ..., x_N is a basis of g then {x_1^{a_1} ··· x_N^{a_N} | a_1, ..., a_N ∈ N} is a basis of U(g).

Let V be a representation of g and v ∈ V_λ a singular vector of weight λ (so n⁺v = 0 and tv = λ(t)v for all t ∈ t). It is clear that the g-submodule generated by v is U(g)v. If U(g)v = V then we say that V is a highest weight module with highest weight λ.

Claim. If v is a singular vector then U(g)v = U(n⁻)v.

Indeed, since g = n⁻ + t + n⁺ we have U(g) = U(n⁻) ⊗ U(t) ⊗ U(n⁺), so the easy part of the PBW theorem implies that we can write a basis for U(g) as a product of bases for n⁻, t, and n⁺ (in that order). Whence applying to v kills the last bit, leaving only the U(n⁻) part.

Claim. If V is irreducible and f.d. then V is a highest weight module.

Indeed, V f.d. implies a highest weight exists, call it λ. Then each v ∈ V_λ is a singular vector, so U(g)v is a submodule, whence it is V since V is irreducible.

5.2.4 Proposition. Let V be a (not necessarily f.d.) highest weight module for g and v ∈ V a highest weight vector with highest weight Λ. Then

1. t acts diagonalizably, so V = ⊕_{λ∈D(Λ)} V_λ, where D(Λ) = {μ | μ ≤ Λ} = {μ ∈ t* | μ = Λ − Σ k_iα_i, k_i ∈ N};

2. V_Λ = Kv; all weight spaces are f.d.;

3. V is irreducible if and only if all singular vectors are in V_Λ;
4. Ω = |Λ + ρ|² − |ρ|² on V;
5. if v_λ ∈ V is a singular vector then |λ + ρ|² = |Λ + ρ|²;
6. if Λ ∈ QR, then there are only finitely many λ such that V_λ contains a singular vector;
7. V contains a unique maximal proper submodule I, where I is graded with respect to t, i.e. I = ⊕(I ∩ V_λ), and I is the sum of all proper submodules.

PROOF:

1. Write V = U(n⁻)v for some v, so it is spanned by

{e_{−β_1} e_{−β_2} ··· e_{−β_r} v | e_{−β} ∈ g_{−β}, β ∈ R⁺},

and for x = e_{−β_1} e_{−β_2} ··· e_{−β_r} v we have t(x) = (Λ − β_1 − ··· − β_r)(t)·x, whence part 1.

2. Only finitely many sums of positive roots β_1 + ··· + β_r equal a given element of NR⁺, so each weight space is f.d.

3. The third part follows because if v_λ ∈ V_λ is a singular vector then U(g)v_λ = U(n⁻)v_λ =: U is a submodule and a highest weight module. By 1, all weights of U are in D(λ) = {λ − Σ β_i | β_i ∈ R⁺}. But λ ≠ Λ implies that D(λ) ⊊ D(Λ), so in particular Λ ∉ D(λ). Whence U is a proper submodule, and V is not irreducible if there is a singular vector with weight λ ≠ Λ.

Conversely, if U ⊊ V is a proper submodule, then tU ⊆ U, so U is t-graded, its weights are in D(Λ), and U = ⊕_{λ∈D(Λ)} U_λ. Let λ be maximal such that U_λ ≠ 0. If w ∈ U_λ then e_β w ∈ U_{λ+β} = 0 by maximality for β ∈ R⁺. Whence w is a singular vector, and λ ≠ Λ since U ≠ V.

7. Any proper submodule U is t-graded with U = ⊕_{λ∈D(Λ), λ≠Λ} U_λ, and the sum of all such submodules is still of this form, so I is a maximal proper submodule.

4. We know that for any singular vector of weight λ, Ωv_λ = (|λ + ρ|² − |ρ|²)v_λ, so apply this to v ∈ V_Λ (recall that Ω is central).

6. If v_λ is singular then part 5 (which is obvious) implies that |λ + ρ|² = |Λ + ρ|². This defines a sphere in RR (the real span of the roots), a compact set, but D(Λ) is discrete. The weights of singular vectors lie in the intersection of the two, so there are finitely many. □

5.2.5 Lemma. If λ is a dominant weight (i.e. λ ∈ P⁺), λ + μ ∈ P⁺, and λ − μ = Σ_i k_iα_i with k_i ≥ 0 for all i, then |λ + ρ|² = |μ + ρ|² implies λ = μ.

PROOF: We have

0 = (λ + ρ, λ + ρ) − (μ + ρ, μ + ρ) = (λ − μ, λ + μ + 2ρ) = Σ_{i=1}^ℓ k_i(α_i, λ + μ + 2ρ),

and since λ + μ ∈ P⁺ and (α_i, 2ρ) = (α_i, α_i) > 0, each (α_i, λ + μ + 2ρ) > 0. So 0 is written as a combination of the non-negative k_i with strictly positive coefficients, forcing each k_i = 0. □

5.2.6 Definition. Let Λ ∈ t*. A Verma module M(Λ) is an initial object in the category of highest weight modules with highest weight Λ and highest weight vector v_Λ, i.e. if V is any highest weight module with highest weight Λ and highest weight vector v then there is a g-module map M(Λ) → V : v_Λ ↦ v.

Such a map is obviously surjective and unique.

5.2.7 Proposition. Given Λ t∗ there exists a unique Verma module M(Λ) and there is a unique highest weight∈ module L(Λ) which is irreducible.

PROOF: Uniqueness is clear from the definition of Verma module via the universal property. Existence requires the PBW theorem: M(Λ) = U(g) ⊗_{U(b)} k_Λ, where b = t + n⁺ and k_Λ is the 1-dimensional b-module on which tx = Λ(t)x for t ∈ t and n⁺x = 0 (it is Ind_b^g k_Λ). Since U(g) = U(n⁻) ⊗ U(t) ⊗ U(n⁺) = U(n⁻) ⊗ U(b) as vector spaces (by PBW), basically M(Λ) = U(n⁻) (as K-vector spaces).

If V is any highest weight module, V = M(Λ)/J for some submodule J. If it is an irreducible highest weight module, J must be maximal. But the previous proposition implies that the maximal proper submodule of M(Λ) is unique, so L(Λ) is unique. □

5.2.8 Exercise. Show that Hom_g(U(g) ⊗_{U(b)} W, V) = Hom_b(W, Res_b^g V).

5.2.9 Corollary. In particular, any irreducible finite dimensional module is an L(Λ) with Λ(h_i) ∈ N, i.e. Λ ∈ P⁺.

We still need to show that if Λ ∈ P⁺ then L(Λ) is f.d. We also need to show that every f.d. module is a direct sum of irreducibles. The proof has been omitted for lack of time, but all the ingredients are in place. (As with sl_2, the Casimir Ω shows that the different irreducible representations do not interact. The fact that a representation whose composition series consists of a single irreducible is in fact a direct sum follows almost immediately from the fact that t acts diagonalizably.)

5.2.10 Exercises.
1. For g = sl_2, there is an M(λ) for each λ ∈ C (where Hv = λv). Show that M(λ) is irreducible if and only if λ ∉ N. Moreover, if λ ∈ N then M(λ) contains a unique proper submodule, the span of F^{λ+1}v, F^{λ+2}v, ...

2. Show that if g is an arbitrary s.s. Lie algebra and Λ(h_i) ∈ N then F_i^{Λ(h_i)+1} v_Λ is a singular vector, where F_i = e_{−α_i} (working in the sl_2 attached to the simple root α_i).

3. Compute ch M(Λ). Note that for sl_2 the characters are z^λ/(1 − z^{−2}).

5.3 Weyl Character Formula

Recall that W := ⟨s_α | α ∈ R⟩.

5.3.1 Lemma. If α is a simple root then s_α(R⁺ \ {α}) = R⁺ \ {α}. Note that s_αα = −α ∉ R⁺.

PROOF: Write s_i = s_{α_i} for α_i ∈ Π. If α ∈ R⁺ then α = Σ_j k_jα_j where k_j ≥ 0, and

s_iα = Σ_{j≠i} k_jα_j − (k_i + Σ_{j≠i} (α_j, α_i∨)k_j) α_i.

If α ≠ α_i then k_j > 0 for some j ≠ i, and the coefficient of α_j in s_iα is this same k_j > 0. But R = R⁺ ⊔ (−R⁺), so one positive coefficient implies that all coefficients are non-negative, whence s_iα ∈ R⁺. □

If g ∈ O(V) (the orthogonal group) then gs_αg^{−1} = s_{gα}, so in particular ws_αw^{−1} = s_{wα} for all w ∈ W. It is a theorem that all Weyl groups are of the form

W = ⟨s_1, ..., s_ℓ | s_i² = 1, (s_is_j)^{m_ij} = 1⟩,

where the m_ij are given by the Cartan matrix according to the following table.

a_ij a_ji : 0  1  2  3
m_ij      : 2  3  4  6

5.3.2 Exercise. Check for each root system that the above holds (note that it is enough to check for the rank two root systems (why?)). Compare this to the existence and uniqueness of g, which was generated by {e_i, h_i, f_i}, α_i ∈ Π.

5.3.3 Lemma. ρ(h_i) = 1 for i = 1, ..., ℓ, i.e. ρ = Σ_{i=1}^ℓ ω_i.

PROOF: By 5.3.1,

s_iρ = s_i( (1/2)α_i + (1/2) Σ_{α∈R⁺\{α_i}} α ) = −(1/2)α_i + (1/2) Σ_{α∈R⁺\{α_i}} α = ρ − α_i,

but also s_iρ = ρ − ⟨ρ, α_i∨⟩α_i, so ⟨ρ, α_i∨⟩ = 1 for all i, implying ρ = Σ_i ω_i. □

For sl_n,

2ρ = (ε_1 − ε_2) + ··· + (ε_{n−1} − ε_n)
   + (ε_1 − ε_3) + ··· + (ε_{n−2} − ε_n)
   + ··· + (ε_1 − ε_n)
   = (n−1)ε_1 + (n−3)ε_2 + ··· + (3−n)ε_{n−1} + (1−n)ε_n.

5.3.4 Lemma. For λ ∈ t* arbitrary, if M(λ) is the Verma module with highest weight λ then

ch M(λ) = e^λ ∏_{α∈R⁺} (1 − e^{−α})^{−1} = e^λ ∏_{α∈R⁺} (1 + e^{−α} + e^{−2α} + ···).

PROOF: Write R⁺ = {β_1, ..., β_s}. Then {e_{−β_1}^{k_1} ··· e_{−β_s}^{k_s} v_λ} is a basis of M(λ), of weights λ − k_1β_1 − ··· − k_sβ_s, so dim M(λ)_{λ−β} is the number of ways in which β can be written as a sum Σ_i k_iβ_i of positive roots β_i with non-negative coefficients k_i, which is the coefficient of e^{−β} in ∏_{α∈R⁺}(1 − e^{−α})^{−1}. Indeed, ch(V ⊗ W) = ch V · ch W, and ch C[x] = 1/(1 − x). □

Let ∆ = ∏_{α∈R⁺}(1 − e^{−α}), so ch M(λ) = e^λ/∆.

5.3.5 Lemma. w(e^ρ∆) = det(w) e^ρ∆ for all w ∈ W. (Notice that det(w) ∈ {±1} since det s_α = −1 for all α ∈ R.)

PROOF: It is enough to show that s_i(e^ρ∆) = −e^ρ∆, as W is generated by the s_i. But

e^ρ∆ = e^ρ (1 − e^{−α_i}) ∏_{α∈R⁺\{α_i}} (1 − e^{−α}),

s_i(e^ρ∆) = e^{ρ−α_i} (1 − e^{α_i}) ∏_{α∈R⁺\{α_i}} (1 − e^{−α}) = e^ρ (e^{−α_i} − 1) ∏_{α∈R⁺\{α_i}} (1 − e^{−α}) = −e^ρ∆. □
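The weight multiplicities of a Verma module in Lemma 5.3.4 can be enumerated directly in small rank. A sketch for sl_3 (the function name is ours), where R⁺ = {α_1, α_2, α_1 + α_2}:

```python
def verma_mult_sl3(a, b):
    """dim M(lam)_{lam - a*alpha1 - b*alpha2} for sl3: the number of ways
    to write a*alpha1 + b*alpha2 as k1*alpha1 + k2*alpha2 + k3*(alpha1+alpha2)
    with k1, k2, k3 >= 0 (the Kostant partition function)."""
    # once k3 is chosen, k1 = a - k3 and k2 = b - k3 are forced
    return sum(1 for k3 in range(min(a, b) + 1))

assert verma_mult_sl3(0, 0) == 1
assert verma_mult_sl3(1, 1) == 2   # alpha1 + alpha2, or (alpha1 + alpha2)
assert verma_mult_sl3(3, 2) == 3   # k3 in {0, 1, 2}
```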

Recall that P = {λ ∈ t* | λ(h_i) ∈ Z for all α_i ∈ Π} and P⁺ = {λ ∈ t* | λ(h_i) ∈ N for all α_i ∈ Π}.

5.3.6 Theorem (Weyl Character Formula). Let λ ∈ P⁺, so L(λ) is finite dimensional. Then

ch L(λ) = (1/∆) Σ_{w∈W} det(w) e^{w(λ+ρ)−ρ} = Σ_{w∈W} w(e^λ/∆),

i.e. ch L(λ) = Σ_{w∈W} det(w) ch M(w(λ+ρ) − ρ).

PROOF: Omitted, but again, all the ingredients are in place. The second equality is immediate from 5.3.5. □

For sl_2, set z = e^{α/2}, so C[P] = C[z, z^{−1}], ρ = α/2, ∆ = 1 − z^{−2}, W = ⟨s | s² = 1⟩, and sz = z^{−1}. Then

ch L(m·α/2) = (z^m − z^{−m−2})/(1 − z^{−2}) = (z^{m+1} − z^{−(m+1)})/(z − z^{−1}) = z^m + z^{m−2} + ··· + z^{2−m} + z^{−m}.

5.3.7 Corollary. ch L(0) = 1, as L(0) = C, so

∆ = ∏_{α∈R⁺} (1 − e^{−α}) = Σ_{w∈W} det(w) e^{w(ρ)−ρ}.

5.3.8 Exercise. For g = sln, show that the Weyl denominator formula is just the identity (rewriting allowed)

 1 . . . 1   x ... x   1 n  Y det x x ,  . .  = ( j i)  . .  i

i.e. the Vandermonde determinant formula, and that Z[P] = Z[x_1^{±1}, ..., x_n^{±1}] and W = S_n.
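The determinant identity is easy to spot-check numerically; a minimal sketch at sample integer points:

```python
from itertools import permutations

def det(M):
    """Determinant by the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

xs = [2, 3, 5]  # arbitrary sample points
V = [[x ** i for x in xs] for i in range(len(xs))]  # rows 1, x, x^2
lhs = det(V)
rhs = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        rhs *= xs[j] - xs[i]
assert lhs == rhs  # both sides equal 6 here
```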

Given μ ∈ P, define a homomorphism F_μ : C[P] → C((q)) by e^λ ↦ q^{−(λ,μ)}. Then F_0(e^λ) = 1 and F_0(ch L(λ)) = dim L(λ). Let's apply F_μ to the Weyl denominator formula. We get

q^{−(ρ,μ)} ∏_{α∈R⁺} (1 − q^{(α,μ)}) = Σ_{w∈W} det(w) q^{−(w(ρ),μ)} = Σ_{w∈W} det(w) q^{−(ρ,w(μ))},

so

F_μ(ch L(λ)) = ( Σ_{w∈W} det(w) q^{−(w(λ+ρ),μ)} ) / ( q^{−(ρ,μ)} ∏_{α>0} (1 − q^{(α,μ)}) )

if (α, μ) ≠ 0 for all α ∈ R⁺ (e.g. μ = ρ satisfies this condition).

5.3.9 Proposition (q-dimension formula).

dim_q L(λ) := Σ_β dim L(λ)_β q^{−(β,ρ)} = ( q^{−(ρ,λ+ρ)} ∏_{α>0} (1 − q^{(α,λ+ρ)}) ) / ( q^{−(ρ,ρ)} ∏_{α>0} (1 − q^{(α,ρ)}) ).

PROOF: Immediate from the above discussion. □

5.3.10 Corollary (Weyl dimension formula).

dim L(λ) = ∏_{α>0} (α, λ+ρ)/(α, ρ),

PROOF: Use l'Hôpital's rule, letting q → 1. □

5.3.11 Example. For sl_3 (a.k.a. A_2), R⁺ = {α, β, α+β}, ρ = α + β = ω_1 + ω_2, so if λ = m_1ω_1 + m_2ω_2 (m_i ∈ N) then

              α        β        α+β
(·, λ+ρ)   m_1+1    m_2+1    m_1+m_2+2
(·, ρ)        1        1         2

whence dim L(λ) = (1/2)(m_1+1)(m_2+1)(m_1+m_2+2). As an exercise, compute dim_q L(λ), and do this for B_2 and G_2.
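The sl_3 dimension formula above is easy to tabulate; a minimal sketch (the function name is ours):

```python
def dim_sl3(m1, m2):
    """Weyl dimension formula for sl3, highest weight m1*omega1 + m2*omega2."""
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

assert dim_sl3(1, 0) == 3  # the standard representation C^3
assert dim_sl3(0, 1) == 3  # its dual
assert dim_sl3(1, 1) == 8  # the adjoint representation L(theta)
assert dim_sl3(2, 0) == 6  # S^2 C^3
```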

5.3.12 Proposition. The q-dimension Fρ(ch L(λ)) = dimq L(λ) is a (symmetric) unimodal polynomial in q.

PROOF: If V is a representation of sl_2, then ch V is a symmetric unimodal polynomial. If we put H = ν^{−1}(ρ) then F_ρ(ch L(λ)) = Σ_n dim L(λ)_n q^n, where L(λ)_n = {x ∈ L(λ) | Hx = nx}, so we need only extend H to an sl_2. Set E = e_{α_1} + ··· + e_{α_ℓ}. Observe that H = Σ_i c_ih_i for some c_i ∈ Q, as the h_i are a basis of t. Define F = Σ_i c_ie_{−α_i} (where ⟨e_α, h_α = α∨, e_{−α}⟩ are the root sl_2's). Exercise: ⟨E, H, F⟩ is an sl_2. □

5.3.13 Exercise. Deduce that the following polynomials are unimodal.

1. [n choose k]_q = [n]!/([k]![n−k]!), where [n] = (q^n − q^{−n})/(q − q^{−1}) and [n]! = [n][n−1]···[1]. (Hint: g = sl_n and V = S^kC^n, or g = sl_{n+k} and V = ∧^kC^{n+k}.)

2. (1 + q)(1 + q²)···(1 + q^n). (Hint: spin representation of B_n.)

If λ, μ ∈ P⁺, then L(λ) ⊗ L(μ) = Σ_{ν∈P⁺} m^ν_{μλ} L(ν) for some m^ν_{μλ} ∈ N. Let's determine these coefficients using the Weyl character formula. Define a conjugation f ↦ f̄ on Z[P] by e^λ ↦ e^{−λ}, and CT : Z[P] → Z by CT(e^λ) = 1 if λ = 0 and 0 otherwise. Define (·,·) : Z[P] × Z[P] → Q by (f, g) = (1/|W|) CT(f ḡ ∆ ∆̄), and let χ_λ = ch L(λ).

5.3.14 Lemma. (χ_λ, χ_μ) = δ_{λμ} if λ, μ ∈ P⁺.

PROOF: We have

(χ_λ, χ_μ) = (1/|W|) CT( Σ_{x,w∈W} det(x) det(w) e^{w(λ+ρ) − x(μ+ρ)} ),

but λ, μ ∈ P⁺, so w(λ+ρ) = x(μ+ρ) if and only if μ = λ and w = x (exercise). Whence CT(e^{w(λ+ρ)−x(μ+ρ)}) = δ_{w,x}δ_{μλ}, and the |W| surviving terms give (χ_λ, χ_μ) = δ_{λμ}. □
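For Exercise 5.3.13(1), the Gaussian binomials can be generated and checked by machine; a sketch using the Pascal-type recurrence [n choose k]_q = [n−1 choose k−1]_q + q^k [n−1 choose k]_q (normalized here so the lowest coefficient sits in degree 0):

```python
def q_binomial(n, k):
    """Coefficient list (low degree first) of the Gaussian binomial [n choose k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    coeffs = [0] * (k * (n - k) + 1)
    for i, c in enumerate(q_binomial(n - 1, k - 1)):
        coeffs[i] += c
    for i, c in enumerate(q_binomial(n - 1, k)):
        coeffs[i + k] += c          # the q^k [n-1, k] term
    return coeffs

def is_unimodal(cs):
    peak = max(range(len(cs)), key=lambda i: cs[i])
    return (all(cs[i] <= cs[i + 1] for i in range(peak)) and
            all(cs[i] >= cs[i + 1] for i in range(peak, len(cs) - 1)))

assert q_binomial(4, 2) == [1, 1, 2, 1, 1]
for n in range(1, 9):
    for k in range(n + 1):
        cs = q_binomial(n, k)
        assert cs == cs[::-1] and is_unimodal(cs)  # symmetric and unimodal
```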

5.3.15 Corollary. m^ν_{λμ} = (χ_μχ_λ, χ_ν) = dim Hom_g(L(ν), L(μ) ⊗ L(λ)).

6 Crystals

The following chapter consists mostly of results of Kashiwara. Let g be a s.s. Lie algebra, Π = α1,..., α` a choice of simple roots, and P the corresponding weight lattice. { }

6.0.16 Definition. A crystal for g is a set B (not containing 0, by convention) with functions wt : B → P, ẽ_i : B → B ⊔ {0}, and f̃_i : B → B ⊔ {0} such that

1. if ẽ_ib ≠ 0 then wt(ẽ_ib) = wt(b) + α_i, and if f̃_ib ≠ 0 then wt(f̃_ib) = wt(b) − α_i;

2. for all b, b′ ∈ B, ẽ_ib = b′ if and only if b = f̃_ib′; and

3. φ_i(b) − ε_i(b) = ⟨wt(b), α_i∨⟩ for all i and b ∈ B, where

ε_i(b) = max{n ≥ 0 | ẽ_i^n b ≠ 0} and φ_i(b) = max{n ≥ 0 | f̃_i^n b ≠ 0}.

The coloured, directed graph with vertices B and an edge b → b′ coloured by i if ẽ_ib′ = b is the crystal graph, i.e. the i-arrows of the crystal graph point along the action of f̃_i.

6.0.17 Example. With g = sl2, for any n

n → n−2 → n−4 → ··· → 2−n → −n

is a crystal, as is

n → n−2 → n−4 → ···,

where all the edge colours are 1. For sl_2, if b is of weight n − 2k (so wt(b) = (n−2k)ω_1, where ω_1 = (1/2)α), then in the finite crystal ε_1(b) = k and φ_1(b) = n − k, so φ_1(b) − ε_1(b) = n − 2k = ⟨wt(b), α_1∨⟩ as required. Also observe that φ_1(b) + ε_1(b) = n, the length of the string.

6.0.18 Definition. For crystals B_1 and B_2, define B_1 ⊗ B_2 by: B_1 ⊗ B_2 = B_1 × B_2 as sets, wt(b_1 ⊗ b_2) = wt(b_1) + wt(b_2),

ẽ_i(b_1 ⊗ b_2) = ẽ_ib_1 ⊗ b_2 if φ_i(b_1) ≥ ε_i(b_2), and b_1 ⊗ ẽ_ib_2 if φ_i(b_1) < ε_i(b_2),

and

f̃_i(b_1 ⊗ b_2) = f̃_ib_1 ⊗ b_2 if φ_i(b_1) > ε_i(b_2), and b_1 ⊗ f̃_ib_2 if φ_i(b_1) ≤ ε_i(b_2).
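The tensor rule can be tried mechanically for sl_2. In the sketch below (our own encoding, not from the notes), an element of the string B(n) is a pair (n, k) with k the number of times f̃ has been applied, so ε = k and φ = n − k:

```python
def eps(b):
    return b[1]

def phi(b):
    return b[0] - b[1]

def e_tensor(b1, b2):
    """Kashiwara's tensor rule for e~; None plays the role of the crystal 0."""
    if phi(b1) >= eps(b2):
        n, k = b1
        return None if k == 0 else ((n, k - 1), b2)
    n, k = b2
    return None if k == 0 else (b1, (n, k - 1))

m, n = 2, 3
pairs = [((m, i), (n, j)) for i in range(m + 1) for j in range(n + 1)]
highest = [p for p in pairs if e_tensor(*p) is None]  # highest weight vectors
weights = sorted(phi(b1) - eps(b1) + phi(b2) - eps(b2) for b1, b2 in highest)
print(weights)  # [1, 3, 5]
```

The three highest weight vectors have weights 1, 3, 5, so B(2) ⊗ B(3) ≅ B(1) ⊕ B(3) ⊕ B(5), matching the Clebsch-Gordan decomposition of sl_2-representations.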

Try it and see: in the case of sl_2, B_1 ⊗ B_2 breaks up into strings just as sl_2-representations do.

[Diagram from lecture: the crystal graph of a tensor product of two sl_2 strings; the rectangular grid of vertices breaks up into strings, one for each Clebsch-Gordan component.]

6.0.19 Exercises.

1. B_1 ⊗ B_2 is a crystal with these associated functions.
2. Via the obvious map, (B_1 ⊗ B_2) ⊗ B_3 ≅ B_1 ⊗ (B_2 ⊗ B_3) as crystals.

6.0.20 Definition. B∨, the dual crystal of B, is obtained from B by reversing the arrows of the crystal graph. Formally, wt(b∨) = −wt(b), ẽ_i(b∨) = (f̃_ib)∨, f̃_i(b∨) = (ẽ_ib)∨, ε_i(b∨) = φ_i(b), and φ_i(b∨) = ε_i(b).

6.0.21 Exercise. (B_1 ⊗ B_2)∨ ≅ B_2∨ ⊗ B_1∨ via the obvious map.

6.0.22 Theorem. Let L(λ) be a f.d. irreducible highest weight representation with highest weight λ. Then there is a crystal B(λ) in one-to-one correspondence with a basis of L(λ).

1. It follows that ch L(λ) = Σ_{b∈B(λ)} e^{wt(b)}.
2. Moreover, for each simple root α_i there is (sl_2)_i ↪ g, and the decomposition of L(λ) as an sl_2-module is exactly given by the i-strings. In particular, the crystal graph of B(λ) is connected (when colours are ignored), and it is generated by the f̃_i's applied to the unique element of B(λ)_λ.

3. The crystal B(λ) ⊗ B(μ) is precisely the crystal of L(λ) ⊗ L(μ), i.e. it decomposes into connected components, each component a B(ν) for some ν ∈ P⁺, and we get as many B(ν)'s as the multiplicity with which L(ν) occurs in L(λ) ⊗ L(μ).

PROOF: Omitted. □

Remark. B(λ)∨ is the crystal of the g-module L(λ)* = L(μ), a f.d. irreducible module with lowest weight −λ. B ↦ B∨ corresponds to the automorphism of Lie algebras g → g mapping e_i ↦ f_i, f_i ↦ e_i, and h_i ↦ −h_i. (Exercise: compute μ in terms of λ.)

Remark. If h ⊆ g is the subalgebra generated by the (sl_2)_i for i ∈ I ⊆ Π, then the crystal for L(λ)|_h is just B(λ) with the i-arrows for i ∉ I forgotten.

Notice that L(λ) ⊗ L(μ) ≅ L(μ) ⊗ L(λ), so B(λ) ⊗ B(μ) ≅ B(μ) ⊗ B(λ) by the theorem, but B ⊗ B′ ≇ B′ ⊗ B for arbitrary crystals.

6.0.23 Example. For g = sl_n and λ = ω_1 = ε_1, L(λ) = C^n is the standard representation of sl_n, and it has crystal

ε_1 →(1) ε_2 →(2) ··· →(n−1) ε_n

(the k-th vertex has weight λ − α_1 − ··· − α_{k−1} = ε_k). Taking the specific example of n = 3, computing C³ ⊗ C³ is done by multiplying and decomposing.

[Diagram: the crystal graph of C³ ⊗ C³; its nine vertices, with their 1- and 2-arrows, form two connected components, of sizes 6 and 3.]

From the diagram we see that C³ ⊗ C³ ≅ S²C³ ⊕ ∧²C³.

6.0.24 Definition. A crystal B is an integrable crystal if it is the crystal of a f.d. representation of g. b ∈ B is a highest weight vector if ẽ_ib = 0 for all i.

B(λ) has a unique highest weight vector. If B is an integrable crystal then its decomposition into connected components is in one-to-one correspondence with the highest weight vectors in B. We now apply this to B_1 ⊗ B_2.

6.0.25 Lemma. ẽ_i(b_1 ⊗ b_2) = 0 if and only if ẽ_ib_1 = 0 and ε_i(b_2) ≤ φ_i(b_1).

PROOF: If ε_i(b_2) > φ_i(b_1) then ε_i(b_2) > 0, so ẽ_ib_2 ≠ 0. The other direction is clear. □

6.0.26 Corollary. b ⊗ b′ ∈ B(λ) ⊗ B(μ) is a highest weight vector if and only if b ∈ B(λ)_λ is the highest weight vector of B(λ) and ε_i(b′) ≤ ⟨λ, α_i∨⟩ for all i.

PROOF: The condition is ε_i(b′) ≤ φ_i(b) = ⟨λ, α_i∨⟩ (which is the length of the i-string through the highest weight vector b, using condition 3 and ẽ_i(b) = 0). □

Whence we have a Clebsch-Gordan type rule:

L(λ) ⊗ L(μ) = ⊕ L(λ + wt(b′)),

the sum over b′ ∈ B(μ) with ε_i(b′) ≤ ⟨λ, α_i∨⟩ for all i.

Recall that ∧^iC^n has highest weight ω_i = ε_1 + ··· + ε_i, so if λ ∈ P⁺ with λ = k_1ω_1 + ··· + k_{n−1}ω_{n−1}, then L(λ) is a summand of

L(ω_1)^{⊗k_1} ⊗ ··· ⊗ L(ω_{n−1})^{⊗k_{n−1}},

as v_{ω_1}^{⊗k_1} ⊗ ··· ⊗ v_{ω_{n−1}}^{⊗k_{n−1}} is a singular vector in this space (since v_{ω_i} is a highest weight vector in each L(ω_i)), and the weight of this vector is λ = k_1ω_1 + ··· + k_{n−1}ω_{n−1}. But in the specific case of sl_n, ∧^iC^n is a summand of (C^n)^{⊗i}, i.e. B(ω_i) can be determined from B(ω_1)^{⊗i}. Therefore the combinatorial rule for products of crystals, together with the standard n-dimensional crystal, already determines the crystals of all sl_n-modules.

Semi-standard Young tableaux. Write

B(ω_1) = 1 →(1) 2 →(2) ··· →(n−1) n,

and consider b_i := 1 ⊗ 2 ⊗ ··· ⊗ i ∈ B(ω_1)^{⊗i}, which corresponds to e_1 ∧ e_2 ∧ ··· ∧ e_i ∈ ∧^iC^n ⊆ (C^n)^{⊗i}.

6.0.27 Exercises.
1. Show that b_i is a highest weight vector in B(ω_1)^{⊗i}, of highest weight ω_i = ε_1 + ··· + ε_i. Hence the connected component of B(ω_1)^{⊗i} containing b_i is B(ω_i).
2. Show that this component consists precisely of

{a_1 ⊗ a_2 ⊗ ··· ⊗ a_i | 1 ≤ a_1 < a_2 < ··· < a_i ≤ n} ⊆ B(ω_1)^{⊗i}.

Write this vector as the column with entries a_1, ..., a_i (top to bottom), so that the highest weight vector is the column with entries 1, ..., i.
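Exercise 6.0.27(2) can be checked by brute force for small n; a minimal sketch:

```python
from itertools import combinations
from math import comb

n, i = 5, 2
# the column words a1 < a2 < ... < ai with entries in {1, ..., n}
columns = list(combinations(range(1, n + 1), i))
assert len(columns) == comb(n, i)              # = dim wedge^i C^n = dim L(omega_i)
assert min(columns) == tuple(range(1, i + 1))  # highest weight word 1 x 2 x ... x i
```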

Now let λ = Σ_i k_iω_i, and embed

B(λ) ↪ B(ω_1)^{⊗k_1} ⊗ ··· ⊗ B(ω_{n−1})^{⊗k_{n−1}}

by the highest weight vector b_λ ↦ b_1^{⊗k_1} ⊗ ··· ⊗ b_{n−1}^{⊗k_{n−1}}. Represent an element of the right-hand side as a Young tableau of shape λ, with no conditions along rows and entries strictly increasing down each column.

6.0.28 Definition. A semi-standard tableau is a tableau which strictly increases down columns and decreases (not necessarily strictly) along rows.

6.0.29 Theorem.
1. The collection of semi-standard tableaux of shape λ is precisely the connected component B(λ).

2. ẽ_i and f̃_i are given by... (determine this as an exercise).

PROOF: Exercise. ƒ

6.0.30 Example. The crystals of the standard representations of the classical Lie algebras are as follows.

An: We have seen that the standard representation of sln has crystal

1 →(1) 2 →(2) ··· →(n−1) n,

B_n: For so_{2n+1},

1 →(1) 2 →(2) ··· →(n−1) n →(n) 0 →(n) n̄ →(n−1) ··· →(2) 2̄ →(1) 1̄.

Cn: For sp2n,

1 →(1) 2 →(2) ··· →(n−1) n →(n) n̄ →(n−1) ··· →(2) 2̄ →(1) 1̄.

Dn: For so2n,

1 →(1) 2 →(2) ··· →(n−2) n−1, which then branches: an (n−1)-arrow n−1 → n and an n-arrow n−1 → n̄; these merge again via an n-arrow n → n−1‾ and an (n−1)-arrow n̄ → n−1‾, followed by n−1‾ →(n−2) ··· →(2) 2̄ →(1) 1̄.

6.0.31 Exercises. 1. Check that these are the crystals of the standard representations.

2. What subcategory of the category of representations of g do they generate, i.e. what do you get by taking the tensor product any number of times and decomposing (as we did above for A_n)? (Hint: consider the centre of the simply connected group G with Lie algebra g and how it acts on the standard representation, i.e. consider the sublattice of P in which all summands of tensor powers of the standard representation lie. The centres are as follows: Z/2 × Z/2 for D_{2n}, Z/4 for D_{2n+1}, and Z/2 for B_n and C_n. The centre does not act faithfully on the standard representation for B_n, but does for C_n.)

6.0.32 Example (Spin Representation). Recall that a Lie algebra of type B_n has Dynkin diagram with simple roots

α_1 = ε_1 − ε_2, α_2 = ε_2 − ε_3, ..., α_{n−1} = ε_{n−1} − ε_n, α_n = ε_n.

The representation L(ω_n) is the spin representation. As an exercise, show that dim L(ω_n) = 2^n (hint: use the Weyl dimension formula). Set B = {(i_1, ..., i_n) | i_j ∈ {±1}} and define wt(i_1, ..., i_n) = (1/2) Σ_j i_jε_j. Define

ẽ_j(i_1, ..., i_n) = (i_1, ..., i_{j−1}, 1, −1, i_{j+2}, ..., i_n) if (i_j, i_{j+1}) = (−1, 1), and 0 otherwise,

for j < n, and

ẽ_n(i_1, ..., i_n) = (i_1, ..., i_{n−1}, 1) if i_n = −1, and 0 otherwise.

In the case of D_n, the representations V⁺ := L(ω_n) and V⁻ := L(ω_{n−1}) are the half-spin representations. Set

B^± = {(i_1, ..., i_n) | i_j ∈ {±1}, ∏_{j=1}^n i_j = ±1},

with wt and ẽ_1, ..., ẽ_{n−1} as above, and define

ẽ_n(i_1, ..., i_n) = (i_1, ..., i_{n−2}, 1, 1) if (i_{n−1}, i_n) = (−1, −1), and 0 otherwise.
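The sizes of these crystals are immediate to verify; a small sketch using the sign-sequence encoding above:

```python
from itertools import product
from math import prod

n = 4
B = list(product((1, -1), repeat=n))     # the B_n spin crystal
assert len(B) == 2 ** n                  # dim L(omega_n) = 2^n

# the D_n half-spin crystals, split by the product of the signs
B_plus = [b for b in B if prod(b) == 1]
B_minus = [b for b in B if prod(b) == -1]
assert len(B_plus) == len(B_minus) == 2 ** (n - 1)
```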

6.1 Littelmann Paths

6.1.1 Definition. Let P_R denote P ⊗_Z R, the R-vector space spanned by P. A Littelmann path (or simply path) is a piecewise-linear continuous map π : [0, 1] → P_R. Paths are considered up to reparameterization, i.e. define π ∼ π ∘ φ, where φ : [0, 1] → [0, 1] is any piecewise-linear homeomorphism. Let

𝒫 = {π | π(0) = 0, π(1) ∈ P}/∼.

Define a crystal structure on 𝒫 by wt(π) = π(1) and ẽ_i as follows. Let

h = min( {0} ∪ (Z ∩ {⟨π(t), α_i∨⟩ | 0 ≤ t ≤ 1}) ),

the smallest integer in ⟨π([0,1]), α_i∨⟩ ∪ {0}, and define ẽ_i(π) = 0 if h = 0. Otherwise, if h < 0, let t_1 be the smallest time t ∈ [0,1] such that ⟨π(t), α_i∨⟩ = h (i.e. the first time that π reaches h) and let t_0 be the largest t ∈ [0, t_1] such that ⟨π(t), α_i∨⟩ = h + 1. Define the path ẽ_i(π) by

ẽ_i(π)(t) = π(t)                             for 0 ≤ t < t_0,
            π(t_0) + s_{α_i}(π(t) − π(t_0))   for t_0 ≤ t < t_1,
            π(t) + α_i                        for t ≥ t_1.

In words, ẽ_i(π) is the path that is π up to t_0 (i.e. the last time h + 1 is hit before the first time h is hit), the reflection of π by s_{α_i} from t_0 to t_1, and then π again from t_1 to 1, translated by α_i for continuity. (The picture from lecture makes this clear.)

6.1.2 Exercise. Show that ε_i(π) = −h (hint: ⟨α_i, α_i∨⟩ = 2).

6.1.3 Example. In sl2,

ẽ_1(straight path 0 → −α/2) = straight path 0 → α/2,

and applying ẽ_1 again gives the zero of the crystal. Similarly,

ẽ_1(straight path 0 → −α) = the path 0 → −α/2 → 0,

i.e. the path that goes from 0 to −α/2 and back. Applying ẽ_1 again gives the straight path 0 → α, and applying it once more gives the zero of the crystal.

6.1.4 Exercise. What do you get when you hit the straight-line path 0 → −α − β with ẽ_1's and ẽ_2's in the case of sl_3?
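For sl_2 the operator ẽ_1 can be modelled discretely. In the sketch below (our own encoding, not from the notes), a path is a word of ±1 steps, each moving by ±α/2, so ⟨π(t), α∨⟩ changes by ±1 per step and takes its integer values at the step endpoints:

```python
def e_tilde(steps):
    """The sl2 raising operator on a path given by +-1 half-alpha steps.
    Returns None (the crystal 0) when h = 0."""
    heights = [0]
    for s in steps:
        heights.append(heights[-1] + s)
    h = min(heights)   # heights[0] = 0, so this is min(0, integer values of the path)
    if h == 0:
        return None
    t1 = heights.index(h)                                   # first time at h
    t0 = max(t for t in range(t1) if heights[t] == h + 1)   # last time at h+1 before t1
    # reflect the steps between t0 and t1; translating the tail by alpha
    # does not change its steps.
    return steps[:t0] + [-s for s in steps[t0:t1]] + steps[t1:]

# these three lines reproduce Example 6.1.3:
assert e_tilde([-1, -1]) == [-1, 1]   # 0 -> -alpha  becomes  0 -> -alpha/2 -> 0
assert e_tilde([-1, 1]) == [1, 1]     # ... which becomes the straight path 0 -> alpha
assert e_tilde([1, 1]) is None        # ... and then the zero of the crystal
```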

Let π∨ denote the reverse of the path π, i.e. define π∨ by π∨(t) = π(1 − t) − π(1). To complete the definition of the crystal structure, let f̃_i(π) = ẽ_i(π∨)∨.

6.1.5 Exercise. Prove that 𝒫 is a crystal with wt, ẽ_i, and f̃_i defined in this way.

Let 𝒫⁺ = {π ∈ 𝒫 | π([0, 1]) ⊆ P_R⁺}, where P_R⁺ = {x ∈ P_R | ⟨x, α_i∨⟩ ≥ 0 for all i}. Observe that if π ∈ 𝒫⁺ then ẽ_iπ = 0 for all i. Let B_π be the subcrystal of 𝒫 generated by π, so

B_π = {f̃_{i_1} ··· f̃_{i_r} π | r ≥ 0}.

6.1.6 Theorem (Littelmann). If π, π′ ∈ 𝒫⁺ then B_π ≅ B_{π′} if and only if π(1) = π′(1). Furthermore, B_π is isomorphic to the crystal B(π(1)) (i.e. the crystal associated with the highest weight module L(π(1))).

It follows that we need only consider straight line segments 0 → λ and their images under the f̃_i. Littelmann described the paths explicitly in this case.

6.1.7 Definition. For π_1, π_2 ∈ 𝒫, let π_1 * π_2 be their concatenation, i.e.

(π_1 * π_2)(t) = π_1(2t) for 0 ≤ t ≤ 1/2, and π_2(2t − 1) + π_1(1) for 1/2 ≤ t ≤ 1.

6.1.8 Exercise. Prove that * : 𝒫 ⊗ 𝒫 → 𝒫 : π_1 ⊗ π_2 ↦ π_1 * π_2 is a well-defined morphism of crystals.

6.1.9 Corollary. B_{π_1} ⊗ B_{π_2} ≅ B_{π_1 * π_2}, so concatenation gives an explicit model for the tensor product of crystals.

6.1.10 Exercises.

1. Describe the tensor product of representations of sln in terms of Young tableaux. This explicit bit of combinatorics is called the Littlewood-Richardson rule.

2. Compute L(λ) ⊗ L(μ) for sl_3.

Let B be a crystal with ε_i(b), φ_i(b) < ∞ for all i. We may define an action of the Weyl group on B by

s_ib = f̃_i^{⟨wt(b), α_i∨⟩} b   if ⟨wt(b), α_i∨⟩ ≥ 0,
s_ib = ẽ_i^{−⟨wt(b), α_i∨⟩} b  if ⟨wt(b), α_i∨⟩ ≤ 0.

Then s_i² = 1 and clearly wt(s_ib) = s_i(wt(b)).


6.1.11 Proposition (Kashiwara). If B is an integrable crystal then the s_i's satisfy the braid relations, so the above truly gives an action of W on B when extended in the obvious way (since the s_i generate W).

PROOF: It is enough to prove it for rank two Lie algebras, hence for the crystals of all sl_3, sl_2 × sl_2, so_5, and G_2 modules. Do this as an exercise using Littelmann paths. □

Warning: for w ∈ W, wẽ_i ≠ ẽ_jw in general, even when wα_i = α_j.

Index

Abelian Lie algebra, 4; abstract Cartan matrix, 26; adjoint, 4; adjoint representation, 4; algebraic representation, 5; Cartan matrix, 25; Cartan subalgebra, 17; Casimir, 8, 33; center, 4; central series, 13; character, 11, 30; Clebsch-Gordan rule, 12; commutator algebra, 13; composition series, 9; cone of dominant weights, 32; crystal, 41; crystal graph, 41; derived series, 13; direct sum; dual crystal, 42; dual numbers, 2; Dynkin diagrams, 26; enveloping algebra, 34; even lattice, 24; extended Cartan matrix, 27; extremal weight, 32; fundamental weights, 32; generalized Cartan matrix, 29; generated by roots, 24; half-spin representations, 46; Heis, 13; Heisenberg Lie algebra, 13; highest root, 27; highest weight, 32; highest weight module, 35; highest weight vector, 7, 43; Hopf algebra, 11; integrable crystal, 43; invariant, 15; irreducible root system, 23; isomorphism of root systems, 23; Kac-Moody algebra, 29; Killing form, 15; lattice, 23; lattice of roots, 30; lattice of weights, 30; Lie algebra, 2, 3; Lie algebra homomorphism, 4; Lie algebra representation, 4; Lie bracket, 3; linear algebraic group, 2; Littelmann path, 46; Littlewood-Richardson rule, 48; maximal torus, 17; nilpotent, 13; radical, 15; rank of a root system, 23; reduced, 23; representation, 4; root, 18; root of a lattice, 24; root system, 23; root-space decomposition, 18; semi-simple, 12; semi-standard tableaux, 44; Serre relations, 29; simple, 12; simple root, 25; simply laced, 23; simultaneous eigenspace, 17; singular vector, 32; solvable, 13; spin representation, 46; string, 8; tangent bundle, 2; torus, 17; trace forms, 15; universal enveloping algebra, 34; Verma module, 36; weight space, 7; Weyl denominator formula, 39; Weyl group, 23
