Lectures on Infinite Dimensional Lie Algebras
Alexander Kleshchev
Contents
Part one: Kac-Moody Algebras
1 Main Definitions
  1.1 Some Examples
    1.1.1 Special Linear Lie Algebras
    1.1.2 Symplectic Lie Algebras
    1.1.3 Orthogonal Lie Algebras
  1.2 Generalized Cartan Matrices
  1.3 The Lie algebra g̃(A)
  1.4 The Lie algebra g(A)
  1.5 Examples
2 Invariant bilinear form and generalized Casimir operator
  2.1 Symmetrizable GCMs
  2.2 Invariant bilinear form on g
  2.3 Generalized Casimir operator
3 Integrable representations of g and the Weyl group
  3.1 Integrable modules
  3.2 Weyl group
  3.3 Weyl group as a Coxeter group
  3.4 Geometric properties of Weyl groups
4 The Classification of Generalized Cartan Matrices
  4.1 A trichotomy for indecomposable GCMs
  4.2 Indecomposable symmetrizable GCMs
  4.3 The classification of finite and affine GCMs
5 Real and Imaginary Roots
  5.1 Real roots
  5.2 Real roots for finite and affine types
  5.3 Imaginary roots
6 Affine Algebras
  6.1 Notation
  6.2 Standard bilinear form
  6.3 Roots of affine algebra
  6.4 Affine Weyl Group
    6.4.1 A_ℓ
    6.4.2 D_ℓ
    6.4.3 E_8
    6.4.4 E_7
    6.4.5 E_6
7 Affine Algebras as Central Extensions of Loop Algebras
  7.1 Loop Algebras
  7.2 Realization of untwisted algebras
  7.3 Explicit Construction of Finite Dimensional Lie Algebras
8 Twisted Affine Algebras and Automorphisms of Finite Order
  8.1 Graph Automorphisms
  8.2 Construction of Twisted Affine Algebras
  8.3 Finite Order Automorphisms
9 Highest weight modules over Kac-Moody algebras
  9.1 The category O
  9.2 Formal Characters
  9.3 Generators and relations
10 Weyl-Kac Character formula
  10.1 Integrable highest weight modules and Weyl group
  10.2 The character formula
  10.3 Example: L̂(sl_2)
  10.4 Complete reducibility
  10.5 Macdonald's identities
  10.6 Specializations of Macdonald's identities
  10.7 On convergence of characters
11 Irreducible Modules for affine algebras
  11.1 Weights of irreducible modules
  11.2 The fundamental modules for ŝl_2
Bibliography

Part one
Kac-Moody Algebras
1 Main Definitions
1.1 Some Examples

1.1.1 Special Linear Lie Algebras
Let g = sl_n = sl_n(C). Choose the subalgebra h consisting of all diagonal matrices in g. Then, setting α_i^∨ := e_{ii} − e_{i+1,i+1},

α_1^∨, …, α_{n−1}^∨

is a basis of h. Next define ε_1, …, ε_n ∈ h^* by

ε_i : diag(a_1, …, a_n) ↦ a_i.

Then, setting α_i = ε_i − ε_{i+1},

α_1, …, α_{n−1}

is a basis of h^*. Let

a_{ij} = ⟨α_i^∨, α_j⟩.

Then the (n − 1) × (n − 1) matrix A := (a_{ij}) is

\[
A = \begin{pmatrix}
2 & -1 & 0 & 0 & \cdots & 0 & 0\\
-1 & 2 & -1 & 0 & \cdots & 0 & 0\\
0 & -1 & 2 & -1 & \cdots & 0 & 0\\
\vdots & & & \ddots & & & \vdots\\
0 & 0 & 0 & 0 & \cdots & 2 & -1\\
0 & 0 & 0 & 0 & \cdots & -1 & 2
\end{pmatrix}.
\]
This matrix is called the Cartan matrix. Define

X_{ε_i−ε_j} := e_{ij},  X_{−ε_i+ε_j} := e_{ji}  (1 ≤ i < j ≤ n).
Note that

[h, X_α] = α(h) X_α  (h ∈ h),

and

{α_1^∨, …, α_{n−1}^∨} ∪ {X_{ε_i−ε_j} | 1 ≤ i ≠ j ≤ n}

is a basis of g. Set e_i = X_{α_i} and f_i = X_{−α_i} for 1 ≤ i < n. It is easy to check that

e_1, …, e_{n−1}, f_1, …, f_{n−1}, α_1^∨, …, α_{n−1}^∨  (1.1)

generate g and the following relations hold:

\begin{align}
[e_i, f_j] &= \delta_{ij}\alpha_i^\vee, \tag{1.2}\\
[\alpha_i^\vee, \alpha_j^\vee] &= 0, \tag{1.3}\\
[\alpha_i^\vee, e_j] &= a_{ij} e_j, \tag{1.4}\\
[\alpha_i^\vee, f_j] &= -a_{ij} f_j, \tag{1.5}\\
(\operatorname{ad} e_i)^{1-a_{ij}}(e_j) &= 0 \quad (i \neq j), \tag{1.6}\\
(\operatorname{ad} f_i)^{1-a_{ij}}(f_j) &= 0 \quad (i \neq j). \tag{1.7}
\end{align}
A (special case of a) theorem of Serre asserts that g is actually generated by the elements (1.1) subject only to these relations. What is important for us is the fact that the Cartan matrix contains all the information needed to write down this Serre presentation of g. Since the Cartan matrix is all the data we need, it makes sense to find a nicer geometric way to picture the same data. Such a picture is called the Dynkin diagram, and in our case it is:
α_1 --- α_2 --- ⋯ --- α_{n−1}

Here vertices i and i + 1 are connected because a_{i,i+1} = a_{i+1,i} = −1, other pairs are not connected because a_{ij} = 0 for |i − j| > 1, and we do not have to record a_{ii} since it is always equal to 2.
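Relations (1.2)-(1.7) can be verified by direct matrix computation. The sketch below (our own illustration, assuming numpy; the helper names E and br and the small case n = 3 are not from the text) checks all of them for sl_3:

```python
import numpy as np

n = 3  # sl_3: rank 2, Cartan matrix of type A_2

def E(i, j):
    """Matrix unit e_{ij} (0-based indices) in gl_n."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def br(x, y):
    """Lie bracket [x, y] = xy - yx."""
    return x @ y - y @ x

# Chevalley generators of sl_3 and the Cartan matrix A
e = [E(0, 1), E(1, 2)]
f = [E(1, 0), E(2, 1)]
h = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]   # the coroots alpha_i^vee
A = np.array([[2, -1], [-1, 2]])

for i in range(2):
    for j in range(2):
        # (1.2): [e_i, f_j] = delta_ij alpha_i^vee
        target = h[i] if i == j else np.zeros((n, n))
        assert np.allclose(br(e[i], f[j]), target)
        # (1.3)-(1.5)
        assert np.allclose(br(h[i], h[j]), 0)
        assert np.allclose(br(h[i], e[j]), A[i, j] * e[j])
        assert np.allclose(br(h[i], f[j]), -A[i, j] * f[j])
        if i != j:
            # (1.6), (1.7): Serre relations, (ad x_i)^{1 - a_ij}(x_j) = 0
            ad_e, ad_f = e[j], f[j]
            for _ in range(1 - A[i, j]):
                ad_e, ad_f = br(e[i], ad_e), br(f[i], ad_f)
            assert np.allclose(ad_e, 0) and np.allclose(ad_f, 0)

print("relations (1.2)-(1.7) hold for sl_3")
```

The same loop works verbatim for any sl_n once the generator lists and A are enlarged.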
1.1.2 Symplectic Lie Algebras

Let V be a 2n-dimensional vector space and ϕ : V × V → C a non-degenerate symplectic bilinear form on V. Let

g = sp(V, ϕ) = {X ∈ gl(V) | ϕ(Xv, w) + ϕ(v, Xw) = 0 for all v, w ∈ V}.

An easy check shows that g is a Lie subalgebra of gl(V). It is known from linear algebra that over C all non-degenerate symplectic forms are equivalent, i.e. if ϕ′ is another such form then ϕ′(v, w) = ϕ(gv, gw) for some fixed g ∈ GL(V). It follows that
sp(V, ϕ′) = g^{−1}(sp(V, ϕ))g ≅ sp(V, ϕ),

thus we can speak of just sp(V). To think of sp(V) as a Lie algebra of matrices, choose a symplectic basis e_1, …, e_n, e_{−n}, …, e_{−1}, that is, a basis with

ϕ(e_i, e_{−i}) = −ϕ(e_{−i}, e_i) = 1, and all other ϕ(e_i, e_j) = 0.

Then the Gram matrix is

\[
G = \begin{pmatrix} 0 & s\\ -s & 0 \end{pmatrix},
\qquad\text{where}\qquad
s = \begin{pmatrix}
0 & 0 & \cdots & 0 & 1\\
0 & 0 & \cdots & 1 & 0\\
\vdots & & & & \vdots\\
0 & 1 & \cdots & 0 & 0\\
1 & 0 & \cdots & 0 & 0
\end{pmatrix}. \tag{1.8}
\]

It follows that the matrices of sp(V) in the basis of the e_i's are precisely the matrices from the Lie algebra

\[
\mathfrak{sp}_{2n} = \left\{ \begin{pmatrix} A & B\\ C & D \end{pmatrix} \;\middle|\; B = sB^{t}s,\ C = sC^{t}s,\ D = -sA^{t}s \right\},
\]
∼ t so sp(V ) = sp2n. Note that sX s is the transpose of X with respect to the second main diagonal. Choose the subalgebra h consisting of all diagonal matrices in g. Then, ∨ setting αi := eii − ei+1,i+1 − e−i,−i + e−i−1,−i−1, for 1 ≤ i < n and ∨ αn = enn − e−n,−n, ∨ ∨ ∨ α1 , . . . , αn−1, αn is a basis of h. Next, setting αi = εi −εi+1 for 1 ≤ i < n, and αn := 2εn,
α1, . . . , αn−1, αn is a basis of h∗. Let ∨ aij = hαi , αji. 6 Main Definitions Then the Cartan matrix A is the n × n matrix
2 −1 0 0 ... 0 0 0 −1 2 −1 0 ... 0 0 0 0 −1 2 −1 ... 0 0 0 . . . . 0 0 0 0 ... −1 2 −2 0 0 0 0 ... 0 −1 2
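This Cartan matrix can be recomputed mechanically from the ε-coordinates of the simple roots and coroots. The sketch below is our own illustration, assuming numpy; an element of h is recorded by its first n diagonal entries, since by the shape of sp_{2n} the remaining diagonal entries are determined:

```python
import numpy as np

n = 4  # type C_4 as an illustrative case

# alpha_i^vee as vectors in h; alpha_j as linear functionals, i.e. as the
# coefficient vectors of eps_1, ..., eps_n.
coroots = np.zeros((n, n))
roots = np.zeros((n, n))
for i in range(n - 1):
    coroots[i, i], coroots[i, i + 1] = 1, -1   # e_ii - e_{i+1,i+1} - e_{-i,-i} + e_{-i-1,-i-1}
    roots[i, i], roots[i, i + 1] = 1, -1       # eps_i - eps_{i+1}
coroots[n - 1, n - 1] = 1                      # alpha_n^vee = e_nn - e_{-n,-n}
roots[n - 1, n - 1] = 2                        # alpha_n = 2 eps_n

# a_ij = <alpha_i^vee, alpha_j> = alpha_j evaluated on alpha_i^vee
A = np.array([[roots[j] @ coroots[i] for j in range(n)] for i in range(n)], dtype=int)
print(A)  # reproduces the n x n matrix displayed above
```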
Define

X_{2ε_i} = e_{i,−i}  (1 ≤ i ≤ n),
X_{−2ε_i} = e_{−i,i}  (1 ≤ i ≤ n),
X_{ε_i−ε_j} = e_{ij} − e_{−j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i+ε_j} = e_{ji} − e_{−i,−j}  (1 ≤ i < j ≤ n),
X_{ε_i+ε_j} = e_{i,−j} + e_{j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i−ε_j} = e_{−j,i} + e_{−i,j}  (1 ≤ i < j ≤ n).
Note that

[h, X_α] = α(h) X_α  (h ∈ h),

and

{α_1^∨, …, α_n^∨} ∪ {X_α}

is a basis of g. Set e_i = X_{α_i} and f_i = X_{−α_i} for 1 ≤ i ≤ n. It is easy to check that

e_1, …, e_n, f_1, …, f_n, α_1^∨, …, α_n^∨  (1.9)

generate g and the relations (1.2)-(1.7) hold. Again, Serre's theorem asserts that g is actually generated by the elements (1.9) subject only to these relations. The Dynkin diagram in this case is:
α_1 --- α_2 --- ⋯ --- α_{n−1} ⇐ α_n
The vertices n − 1 and n are connected the way they are because a_{n−1,n} = −2 and a_{n,n−1} = −1; in other places we follow the same rules as in the case of sl_n.

1.1.3 Orthogonal Lie Algebras

Let V be an N-dimensional vector space and ϕ : V × V → C a non-degenerate symmetric bilinear form on V. Let

g = so(V, ϕ) = {X ∈ gl(V) | ϕ(Xv, w) + ϕ(v, Xw) = 0 for all v, w ∈ V}.
An easy check shows that g is a Lie subalgebra of gl(V). It is known from linear algebra that over C all non-degenerate symmetric bilinear forms are equivalent, i.e. if ϕ′ is another such form then ϕ′(v, w) = ϕ(gv, gw) for some fixed g ∈ GL(V). It follows that
so(V, ϕ′) = g^{−1}(so(V, ϕ))g ≅ so(V, ϕ),

thus we can speak of just so(V). To think of so(V) as a Lie algebra of matrices, choose a basis e_1, …, e_n, e_{−n}, …, e_{−1} if N = 2n and e_1, …, e_n, e_0, e_{−n}, …, e_{−1} if N = 2n + 1, such that the Gram matrix of ϕ in this basis is

\[
\begin{pmatrix} 0 & s\\ s & 0 \end{pmatrix}
\qquad\text{and}\qquad
\begin{pmatrix} 0 & 0 & s\\ 0 & 2 & 0\\ s & 0 & 0 \end{pmatrix},
\]

respectively, where s is the n × n matrix as in (1.8). It follows that the matrices of so(V) in the basis of the e_i's are precisely the matrices from the Lie algebra

\[
\mathfrak{so}_{2n} = \left\{ \begin{pmatrix} A & B\\ C & D \end{pmatrix} \;\middle|\; B = -sB^{t}s,\ C = -sC^{t}s,\ D = -sA^{t}s \right\}
\]

if N = 2n, and

\[
\mathfrak{so}_{2n+1} = \left\{ \begin{pmatrix} A & 2sx^{t} & B\\ y & 0 & x\\ C & 2sy^{t} & D \end{pmatrix} \;\middle|\; B = -sB^{t}s,\ C = -sC^{t}s,\ D = -sA^{t}s \right\}
\]

if N = 2n + 1 (here x, y are arbitrary 1 × n matrices). We have in all cases that so(V) ≅ so_N.

Choose the subalgebra h consisting of all diagonal matrices in g. We now consider the even and odd cases separately.

First, let N = 2n. Then, setting α_i^∨ := e_{ii} − e_{i+1,i+1} − e_{−i,−i} + e_{−i−1,−i−1} for 1 ≤ i < n and α_n^∨ := e_{n−1,n−1} + e_{nn} − e_{−n+1,−n+1} − e_{−n,−n},

α_1^∨, …, α_{n−1}^∨, α_n^∨

is a basis of h. Next, setting α_i = ε_i − ε_{i+1} for 1 ≤ i < n, and α_n := ε_{n−1} + ε_n,

α_1, …, α_{n−1}, α_n

is a basis of h^*. Let

a_{ij} = ⟨α_i^∨, α_j⟩.
Then the Cartan matrix A is the n × n matrix

\[
A = \begin{pmatrix}
2 & -1 & 0 & \cdots & 0 & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0 & 0\\
\vdots & & \ddots & & & & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1 & -1\\
0 & 0 & 0 & \cdots & -1 & 2 & 0\\
0 & 0 & 0 & \cdots & -1 & 0 & 2
\end{pmatrix}.
\]
Define

X_{ε_i−ε_j} = e_{ij} − e_{−j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i+ε_j} = e_{ji} − e_{−i,−j}  (1 ≤ i < j ≤ n),
X_{ε_i+ε_j} = e_{i,−j} − e_{j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i−ε_j} = e_{−j,i} − e_{−i,j}  (1 ≤ i < j ≤ n).
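One can confirm that these matrices really lie in so_{2n}: writing ϕ(v, w) = v^t G w for the Gram matrix G, the defining condition ϕ(Xv, w) + ϕ(v, Xw) = 0 for all v, w becomes X^t G + G X = 0. The sketch below (our own, assuming numpy; the index helper idx and the case n = 2 are illustrative choices) checks this for the four root vectors in so_4:

```python
import numpy as np

n = 2  # so_4, small enough to inspect by hand

def idx(k):
    """Position of e_k in the ordering e_1, ..., e_n, e_{-n}, ..., e_{-1} (0-based)."""
    return k - 1 if k > 0 else 2 * n + k

def E(a, b):
    """Matrix unit e_{ab} in gl_{2n}, indices from {-n, ..., -1, 1, ..., n}."""
    m = np.zeros((2 * n, 2 * n))
    m[idx(a), idx(b)] = 1.0
    return m

s = np.fliplr(np.eye(n))                                        # antidiagonal matrix from (1.8)
G = np.block([[np.zeros((n, n)), s], [s, np.zeros((n, n))]])    # Gram matrix for N = 2n

def in_so(X):
    """X lies in so(V, phi) iff X^t G + G X = 0."""
    return np.allclose(X.T @ G + G @ X, 0)

i, j = 1, 2
for X in [E(i, j) - E(-j, -i),    # X_{eps_i - eps_j}
          E(j, i) - E(-i, -j),    # X_{-eps_i + eps_j}
          E(i, -j) - E(j, -i),    # X_{eps_i + eps_j}
          E(-j, i) - E(-i, j)]:   # X_{-eps_i - eps_j}
    assert in_so(X)
print("all four root vectors lie in so_4")
```

A bare matrix unit such as e_{12} fails the test, which is why the correction terms e_{−j,−i} etc. are needed.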
Note that

[h, X_α] = α(h) X_α  (h ∈ h),

and

{α_1^∨, …, α_n^∨} ∪ {X_α}

is a basis of g. Set e_i = X_{α_i} and f_i = X_{−α_i} for 1 ≤ i ≤ n. It is easy to check that

e_1, …, e_n, f_1, …, f_n, α_1^∨, …, α_n^∨  (1.10)

generate g and the relations (1.2)-(1.7) hold. Again, Serre's theorem asserts that g is generated by the elements (1.10) subject only to these relations. The Dynkin diagram in this case is:
                              α_{n−1}
                             /
α_1 --- α_2 --- ⋯ --- α_{n−2}
                             \
                              α_n
Let N = 2n + 1. Then, setting α_i^∨ := e_{ii} − e_{i+1,i+1} − e_{−i,−i} + e_{−i−1,−i−1} for 1 ≤ i < n and α_n^∨ := 2e_{nn} − 2e_{−n,−n},

α_1^∨, …, α_{n−1}^∨, α_n^∨

is a basis of h. Next, setting α_i = ε_i − ε_{i+1} for 1 ≤ i < n, and α_n := ε_n,

α_1, …, α_{n−1}, α_n

is a basis of h^*. Let

a_{ij} = ⟨α_i^∨, α_j⟩.

Then the Cartan matrix A is the n × n matrix

\[
A = \begin{pmatrix}
2 & -1 & 0 & \cdots & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0\\
\vdots & & \ddots & & & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1\\
0 & 0 & 0 & \cdots & -2 & 2
\end{pmatrix}.
\]
(It is the transpose of the one in the symplectic case.) Define
X_{ε_i} = 2e_{i,0} + e_{0,−i}  (1 ≤ i ≤ n),
X_{−ε_i} = 2e_{−i,0} + e_{0,i}  (1 ≤ i ≤ n),
X_{ε_i−ε_j} = e_{ij} − e_{−j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i+ε_j} = e_{ji} − e_{−i,−j}  (1 ≤ i < j ≤ n),
X_{ε_i+ε_j} = e_{i,−j} − e_{j,−i}  (1 ≤ i < j ≤ n),
X_{−ε_i−ε_j} = e_{−j,i} − e_{−i,j}  (1 ≤ i < j ≤ n).
Note that

[h, X_α] = α(h) X_α  (h ∈ h),

and

{α_1^∨, …, α_n^∨} ∪ {X_α}

is a basis of g. Set e_i = X_{α_i} and f_i = X_{−α_i} for 1 ≤ i ≤ n. It is easy to check that

e_1, …, e_n, f_1, …, f_n, α_1^∨, …, α_n^∨  (1.11)

generate g and the relations (1.2)-(1.7) hold. Again, Serre's theorem asserts that g is actually generated by the elements (1.11) subject only to these relations. The Dynkin diagram in this case is:
α_1 --- α_2 --- ⋯ --- α_{n−1} ⇒ α_n
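The fact that the B_n Cartan matrix is the transpose of the C_n one can be confirmed in the same computational style as before, since only the n-th root and coroot differ between the two types. The helper cartan and the case n = 4 are our own illustrative choices, assuming numpy:

```python
import numpy as np

n = 4  # compare types B_4 and C_4

def cartan(last_root, last_coroot):
    """Cartan matrix from eps-coordinates; the first n - 1 roots and coroots
    are eps_i - eps_{i+1} in both types B_n and C_n."""
    roots = np.zeros((n, n))
    coroots = np.zeros((n, n))
    for i in range(n - 1):
        roots[i, i], roots[i, i + 1] = 1, -1
        coroots[i, i], coroots[i, i + 1] = 1, -1
    roots[n - 1] = last_root
    coroots[n - 1] = last_coroot
    return np.array([[roots[j] @ coroots[i] for j in range(n)] for i in range(n)], dtype=int)

e_n = np.eye(n)[n - 1]
B = cartan(e_n, 2 * e_n)      # B_n: alpha_n = eps_n, alpha_n^vee = 2e_nn - 2e_{-n,-n}
C = cartan(2 * e_n, e_n)      # C_n: alpha_n = 2 eps_n, alpha_n^vee = e_nn - e_{-n,-n}
assert (B == C.T).all()
print("the B_n Cartan matrix is the transpose of the C_n one")
```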
1.2 Generalized Cartan Matrices
Definition 1.2.1 A matrix A ∈ M_n(Z) is a generalized Cartan matrix (GCM) if

(C1) a_{ii} = 2 for all i;
(C2) a_{ij} ≤ 0 for all i ≠ j;
(C3) a_{ij} = 0 if and only if a_{ji} = 0.

Two GCMs A and A′ are equivalent if they have the same degree n and there is σ ∈ S_n such that a′_{ij} = a_{σ(i),σ(j)}. A GCM is called indecomposable if it is not equivalent to a diagonal sum of smaller GCMs.
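Conditions (C1)-(C3) are straightforward to test mechanically; the predicate below is our own illustrative helper:

```python
def is_gcm(A):
    """Check conditions (C1)-(C3) for a square integer matrix A (list of lists)."""
    n = len(A)
    for i in range(n):
        if A[i][i] != 2:                              # (C1)
            return False
        for j in range(n):
            if i != j:
                if A[i][j] > 0:                       # (C2)
                    return False
                if (A[i][j] == 0) != (A[j][i] == 0):  # (C3)
                    return False
    return True

assert is_gcm([[2, -1], [-1, 2]])       # the Cartan matrix of sl_3
assert is_gcm([[2, -2], [-2, 2]])       # a GCM that is not a classical Cartan matrix
assert not is_gcm([[2, -1], [0, 2]])    # violates (C3)
```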
Throughout we are going to assume that A = (a_{ij})_{1≤i,j≤n} is a generalized Cartan matrix of rank ℓ.
Definition 1.2.2 A realization of A is a triple (h, Π, Π^∨), where h is a complex vector space, Π = {α_1, …, α_n} ⊂ h^*, and Π^∨ = {α_1^∨, …, α_n^∨} ⊂ h, such that

(i) both Π and Π^∨ are linearly independent;
(ii) ⟨α_i^∨, α_j⟩ = a_{ij} for all i, j;
(iii) dim h = 2n − ℓ.

Two realizations (h, Π, Π^∨) and (h′, Π′, (Π′)^∨) are isomorphic if there exists an isomorphism ϕ : h → h′ of vector spaces such that ϕ(α_i^∨) = (α′_i)^∨ and ϕ^*(α′_i) = α_i for i = 1, 2, …, n.
Example 1.2.3 (i) Let

\[
A = \begin{pmatrix} 2 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 2 \end{pmatrix}.
\]

We have n = ℓ = 3. Let e_1, …, e_4 be the standard basis of C^4, ε_1, …, ε_4 the dual basis, and h = {(a_1, …, a_4) | a_1 + ⋯ + a_4 = 0}. Finally, take Π = {ε_1 − ε_2, ε_2 − ε_3, ε_3 − ε_4} and Π^∨ = {e_1 − e_2, e_2 − e_3, e_3 − e_4}.

Another realization comes as follows. Let h = C^3, and let α_i denote the i-th coordinate function. Now take α_i^∨ to be the i-th row of A. It is clear that the two realizations are isomorphic.

(ii) Let

\[
A = \begin{pmatrix} 2 & -1 & -1\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{pmatrix}.
\]

We have n = 3, ℓ = 2. Take h = C^4 and let α_i denote the i-th coordinate function (we only need the first three). Now take α_i^∨ to be the i-th row of the matrix

\[
\begin{pmatrix} 2 & -1 & -1 & 0\\ -1 & 2 & -1 & 0\\ -1 & -1 & 2 & 1 \end{pmatrix}.
\]
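The data of Example 1.2.3(ii) can be checked directly against Definition 1.2.2. The following sketch (our own, assuming numpy) verifies conditions (i)-(iii):

```python
import numpy as np

# Example 1.2.3(ii): n = 3, rank l = 2, so dim h = 2n - l = 4.
A = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
n = 3
l = np.linalg.matrix_rank(A)

# h = C^4; alpha_j is the j-th coordinate function, alpha_i^vee the i-th row
# of the extended matrix from the example.
coroots = np.array([[2, -1, -1, 0],
                    [-1, 2, -1, 0],
                    [-1, -1, 2, 1]])
roots = np.eye(4)[:3]   # the first three coordinate functions on C^4

pairing = np.array([[roots[j] @ coroots[i] for j in range(n)] for i in range(n)])
assert (pairing == A).all()                   # condition (ii)
assert np.linalg.matrix_rank(coroots) == n    # Pi^vee linearly independent
assert np.linalg.matrix_rank(roots) == n      # Pi linearly independent
assert 2 * n - l == 4                         # condition (iii): dim h = 2n - l
print("(h, Pi, Pi^vee) is a realization of A")
```

Note that the coroots alone, i.e. the first three columns (the rows of A), would be linearly dependent; the extra fourth coordinate is what restores independence.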
Proposition 1.2.4 For each A there is a realization, unique up to isomorphism. Moreover, realizations of matrices A and B are isomorphic if and only if A = B.
Proof Assume for simplicity that A is of the form

\[
A = \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix},
\]

where A_{11} is a non-singular ℓ × ℓ matrix. Let

\[
C = \begin{pmatrix} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & I_{n-\ell}\\ 0 & I_{n-\ell} & 0 \end{pmatrix}.
\]

Note that det C = ± det A_{11}, so C is non-singular. Let h = C^{2n−ℓ}. Define α_1, …, α_n ∈ h^* to be the first n coordinate functions, and α_1^∨, …, α_n^∨ to be the first n row vectors of C.

Now, let (h′, Π′, (Π′)^∨) be another realization of A. We complete (α′_1)^∨, …, (α′_n)^∨ to a basis (α′_1)^∨, …, (α′_{2n−ℓ})^∨ of h′. Then the matrix (⟨(α′_i)^∨, α′_j⟩) has the form

\[
\begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\\ B_1 & B_2 \end{pmatrix}.
\]

By linear independence, this matrix has rank n, so it has n linearly independent rows. Since rows ℓ + 1, …, n are linear combinations of rows 1, …, ℓ, the matrix

\[
\begin{pmatrix} A_{11} & A_{12}\\ B_1 & B_2 \end{pmatrix}
\]
is non-singular. We now complete α′_1, …, α′_n to α′_1, …, α′_{2n−ℓ}, so that the matrix (⟨(α′_i)^∨, α′_j⟩) becomes

\[
\begin{pmatrix} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & I_{n-\ell}\\ B_1 & B_2 & 0 \end{pmatrix}.
\]

This matrix is non-singular, so α′_1, …, α′_{2n−ℓ} is a basis of (h′)^*. Since A_{11} is non-singular, by adding suitable linear combinations of the first ℓ rows to the last n − ℓ rows, we may achieve B_1 = 0. Thus it is possible to choose (α′_{n+1})^∨, …, (α′_{2n−ℓ})^∨ so that (α′_1)^∨, …, (α′_{2n−ℓ})^∨ form a basis of h′ and

\[
(\langle(\alpha'_i)^\vee, \alpha'_j\rangle) = \begin{pmatrix} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & I_{n-\ell}\\ 0 & B_2 & 0 \end{pmatrix}.
\]

The matrix B_2 must be non-singular since the whole matrix is non-singular. We now make a further change to (α′_{n+1})^∨, …, (α′_{2n−ℓ})^∨, equivalent to multiplying the above matrix on the left by

\[
\begin{pmatrix} I_{\ell} & 0 & 0\\ 0 & I_{n-\ell} & 0\\ 0 & 0 & (B_2)^{-1} \end{pmatrix}.
\]

Then we obtain

\[
(\langle(\alpha'_i)^\vee, \alpha'_j\rangle) = \begin{pmatrix} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & I_{n-\ell}\\ 0 & I_{n-\ell} & 0 \end{pmatrix}.
\]

This is equal to the matrix C above. Thus the map α_i^∨ ↦ (α′_i)^∨ gives an isomorphism h → h′ whose dual is given by α′_i ↦ α_i. This shows that the realizations (h, Π, Π^∨) and (h′, Π′, (Π′)^∨) are isomorphic.

Finally, assume that ϕ : (h, Π, Π^∨) → (h′, Π′, (Π′)^∨) is an isomorphism of realizations of A and B respectively. Then
b_{ij} = ⟨(α′_i)^∨, α′_j⟩ = ⟨ϕ(α_i^∨), α′_j⟩ = ⟨α_i^∨, ϕ^*(α′_j)⟩ = ⟨α_i^∨, α_j⟩ = a_{ij}.
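The padding construction in the proof can be illustrated concretely. The sketch below (our own, assuming numpy) builds C for the matrix of Example 1.2.3(ii), taking for granted, as in the proof, that the top-left ℓ × ℓ block is non-singular, and checks that det C = ± det A_{11}:

```python
import numpy as np

# The GCM of Example 1.2.3(ii): singular, of rank l = 2.
A = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
n = 3
l = np.linalg.matrix_rank(A)

# Split A into blocks with A11 the non-singular l x l corner.
A11, A12 = A[:l, :l], A[:l, l:]
A21, A22 = A[l:, :l], A[l:, l:]
k = n - l
C = np.block([[A11, A12, np.zeros((l, k))],
              [A21, A22, np.eye(k)],
              [np.zeros((k, l)), np.eye(k), np.zeros((k, k))]])

assert C.shape == (2 * n - l, 2 * n - l)
# det C = +- det A11, so C is non-singular even though A is not.
assert abs(abs(np.linalg.det(C)) - abs(np.linalg.det(A11))) < 1e-9
print("C is non-singular:", abs(np.linalg.det(C)) > 1e-9)
```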
Throughout we assume that (h, Π, Π^∨) is a realization of A. We refer to the elements of Π as simple roots and the elements of Π^∨ as simple coroots, and to Π and Π^∨ as the root basis and coroot basis, respectively. Also set

Q = ⊕_{i=1}^{n} Zα_i,  Q_+ = ⊕_{i=1}^{n} Z_+ α_i.

We call Q the root lattice. The dominance ordering is a partial order ≥ on h^* defined as follows: λ ≥ μ if and only if λ − μ ∈ Q_+. For α = Σ_{i=1}^{n} k_i α_i ∈ Q, the number

ht α := Σ_{i=1}^{n} k_i

is called the height of α.
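In coordinates, an element of Q is just an integer vector of coefficients (k_1, …, k_n), and both notions become one-liners; the helper names below are our own illustrative choices:

```python
# Elements of the root lattice Q, written as integer coefficient vectors
# (k_1, ..., k_n) in the basis of simple roots.

def height(alpha):
    """ht(alpha) = sum of the coefficients k_i."""
    return sum(alpha)

def dominates(lam_minus_mu):
    """lambda >= mu in the dominance order iff lambda - mu lies in Q_+,
    i.e. iff every coefficient of lambda - mu is a non-negative integer."""
    return all(k >= 0 for k in lam_minus_mu)

# alpha = alpha_1 + 2 alpha_2 + alpha_3 has height 4
assert height([1, 2, 1]) == 4
assert dominates([1, 0, 2]) and not dominates([1, -1, 2])
```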
1.3 The Lie algebra g̃(A)
Definition 1.3.1 The Lie algebra g̃(A) is defined as the Lie algebra with generators e_i, f_i (i = 1, …, n) and h, and relations

\begin{align}
[e_i, f_j] &= \delta_{ij}\alpha_i^\vee, \tag{1.12}\\
[h, h'] &= 0 \quad (h, h' \in \mathfrak{h}), \tag{1.13}\\
[h, e_i] &= \langle\alpha_i, h\rangle e_i \quad (h \in \mathfrak{h}), \tag{1.14}\\
[h, f_i] &= -\langle\alpha_i, h\rangle f_i \quad (h \in \mathfrak{h}). \tag{1.15}
\end{align}
It follows from the uniqueness of realizations that g̃(A) depends only on A (this boils down to the following calculation: ⟨α′_i, ϕ(h)⟩ = ⟨ϕ^*(α′_i), h⟩ = ⟨α_i, h⟩). Denote by ñ_+ (resp. ñ_−) the subalgebra of g̃(A) generated by all the e_i (resp. all the f_i).
Lemma 1.3.2 (Weight Lemma) Let V be an h-module such that V = ⊕_{λ∈h^*} V_λ, where the weight space V_λ is defined as {v ∈ V | hv = ⟨λ, h⟩v for all h ∈ h}. Let U be a submodule of V. Then U = ⊕_{λ∈h^*}(U ∩ V_λ).
Proof Any element v ∈ V can be written in the form v = v_1 + ⋯ + v_m, where v_j ∈ V_{λ_j}, and there is h ∈ h such that the λ_j(h) are all distinct. For v ∈ U, we have

\[
h^k(v) = \sum_{j=1}^{m} \lambda_j(h)^k\, v_j \in U \qquad (k = 0, 1, \ldots, m-1).
\]

This is a system of linear equations for the v_j whose coefficient matrix, the Vandermonde matrix of the distinct scalars λ_j(h), is non-singular. It follows that all v_j ∈ U.
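The last step inverts a Vandermonde system: since the λ_j(h) are distinct, the matrix (λ_j(h)^k) is non-singular, so each v_j is a linear combination of the vectors h^k(v), all of which lie in U. A numerical illustration (our own, assuming numpy; the eigenvalues and vectors are arbitrary sample data):

```python
import numpy as np

# Distinct scalars lambda_j(h) and sample weight components v_j (rows of vs)
lams = np.array([0.0, 1.0, -2.0])
m = len(lams)
vs = np.diag([3.0, -1.0, 5.0])

# The vectors h^k(v) = sum_j lambda_j^k v_j for k = 0, ..., m - 1
V = np.vander(lams, m, increasing=True).T   # V[k, j] = lams[j] ** k
hkv = V @ vs

# Since the lambda_j are distinct, V is non-singular, and each v_j is
# recovered as a linear combination of the h^k(v), all of which lie in U.
recovered = np.linalg.solve(V, hkv)
assert np.allclose(recovered, vs)
print("each v_j is recovered from the vectors h^k(v)")
```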
Theorem 1.3.3 Let g̃ = g̃(A). Then

(i) g̃ = ñ_− ⊕ h ⊕ ñ_+.

(ii) ñ_+ (resp. ñ_−) is freely generated by the e_i's (resp. the f_i's).

(iii) The map e_i ↦ f_i, f_i ↦ e_i, h ↦ −h (h ∈ h) extends uniquely to an involution ω̃ of g̃.

(iv) One has the root space decomposition with respect to h: