<<

Lectures on Infinite Dimensional Lie Algebras

Alexander Kleshchev

Contents

Part one: Kac-Moody Algebras page 1 1 Main Definitions 3 1.1 Some Examples 3 1.1.1 Special Linear Lie Algebras 3 1.1.2 Symplectic Lie Algebras 4 1.1.3 Orthogonal Lie Algebras 7 1.2 Generalized Cartan Matrices 10 1.3 The g˜(A) 13 1.4 The Lie algebra g(A) 16 1.5 Examples 20 2 Invariant bilinear form and generalized Casimir operator 26 2.1 Symmetrizable GCMs 26 2.2 Invariant bilinear form on g 27 2.3 Generalized Casimir operator 32 3 Integrable representations of g and the 37 3.1 Integrable modules 37 3.2 Weyl group 39 3.3 Weyl group as a 42 3.4 Geometric properties of Weyl groups 46 4 The Classification of Generalized Cartan Matrices 50 4.1 A trichotomy for indecomposable GCMs 50 4.2 Indecomposable symmetrizable GCMs 58 4.3 The classification of finite and affine GCMs 61 5 Real and Imaginary Roots 68 5.1 Real roots 68 5.2 Real roots for finite and affine types 70 5.3 Imaginary roots 73

iii iv Contents 6 Affine Algebras 77 6.1 Notation 77 6.2 Standard bilinear form 77 6.3 Roots of affine algebra 80 6.4 Affine Weyl Group 84 6.4.1 A` 89 6.4.2 D` 89 6.4.3 89 6.4.4 90 6.4.5 90 7 Affine Algebras as Central extensions of Loop Algebras 91 7.1 Loop Algebras 91 7.2 Realization of untwisted algebras 92 7.3 Explicit Construction of Finite Dimensional Lie Algebras 96 8 Twisted Affine Algebras and of Finite Order 99 8.1 Graph Automorphisms 99 8.2 Construction of Twisted Affine Algebras 108 8.3 Finite Order Automorphisms 114 9 Highest weight modules over Kac-Moody algebras 116 9.1 The category O 116 9.2 Formal Characters 118 9.3 Generators and relations 122 10 Weyl-Kac Character formula 127 10.1 Integrable highest weight modules and Weyl group 127 10.2 The character formula 128 10.3 Example: Lˆ(sl2) 132 10.4 Complete reducibility 134 10.5 Macdonald’s identities 136 10.6 Specializations of Macdonald’s identities 139 10.7 On converegence of characters 141 11 Irreducible Modules for affine algebras 144 11.1 Weights of irreducible modules 144 11.2 The fundamental modules for sbl2 151 Bibliography 155 Part one Kac-Moody Algebras

1 Main Definitions

1.1 Some Examples 1.1.1 Special Linear Lie Algebras

Let g = sln = sln(C). Choose the subalgebra h consisting of all diagonal ∨ matrices in g. Then, setting αi := eii − ei+1,i+1,

∨ ∨ α1 , . . . , αn−1

∗ is a basis of h. Next define ε1 . . . , εn ∈ h by

εi : diag(a1, . . . , an) 7→ ai.

Then, setting αi = εi − εi+1,

α1, . . . , αn−1 is a basis of h∗. Let ∨ aij = hαi , αji.

Then the (n − 1) × (n − 1) matrix A := (aij) is

 2 −1 0 0 ... 0 0 0  −1 2 −1 0 ... 0 0 0       0 −1 2 −1 ... 0 0 0   .  .  .   .     0 0 0 0 ... −1 2 −1 0 0 0 0 ... 0 −1 2

This matrix is called the . Define

Xεi−εj := eij,X−εi+εj := eji (1 ≤ i < j ≤ n)

3 4 Main Definitions Note that

[h, Xα] = α(h)Xα (h ∈ h), and ∨ ∨ {α1 , . . . , αn−1} ∪ {Xεi−εj | 1 ≤ i 6= j ≤ n}

is a basis of g. Set ei = Xαi and fi = X−αi for 1 ≤ i < n. It is easy to check that ∨ ∨ e1, . . . , −1, f1, . . . , fn−1, α1 , . . . αn−1 (1.1) generate g and the following relations hold:

∨ [ei, fj] = δijαi , (1.2) ∨ ∨ [αi , αj ] = 0, (1.3) ∨ [αi , ej] = aijej, (1.4) ∨ [αi , fj] = −aijfj, (1.5)

1−aij (ad ei) (ej) = 0 (i 6= j), (1.6)

1−aij (ad fi) (fj) = 0 (i 6= j). (1.7)

A (special case of a) theorem of Serre claims that g is actually gener- ated by the elements of (1.1) subject only to these relations. What is important for us is the fact that the Cartan matrix contains all the in- formation needed to write down the Serre’s presentation of A. Since the Cartan matrix is all the data we need, it makes sense to find a nicer ge- ometric way to picture the same data. Such picture is called the , and in our case it is:

••••••••... α1 α2 αn−1 Here vertices i and i + 1 are connected because ai,i+1 = ai+1,i = −1, others are not connected because aij = 0 for |i − j| > 1, and we don’t have to record aii since it is always going to be 2.

1.1.2 Symplectic Lie Algebras Let V be a 2n-dimensional and ϕ : V × V → C be a non-degenerate symplectic bilinear form on V . Let g = sp(V, ϕ) = {X ∈ gl(V ) | ϕ(Xv, w) + ϕ(v, Xw) = 0 for all v, w ∈ V }. 1.1 Some Examples 5 An easy check shows that g is a Lie subalgebra of gl(V ). It is known from linear algebra that over C all non-degenerate symplectic forms are equivalent, i.e. if ϕ0 is another such form then ϕ0(v, w) = ϕ(gv, gw) for some fixed g ∈ GL(V ). It follows that

sp(V, ϕ0) = g−1(sp(V, ϕ))g =∼ sp(V, ϕ), thus we can speak of just sp(V ). To think of sp(V ) as a Lie algebra of matrices, choose a symplectic basis e1, . . . , en, e−n, . . . , e−1, that is

ϕ(ei, e−i) = −ϕ(e−i, ei) = 1, and all other ϕ(ei, ej) = 0. Then the Gram matrix is

 0 s G = , −s 0 where 0 0 ... 0 1 0 0 ... 1 0    .  s =  .  . (1.8)   0 1 ... 0 0 1 0 ... 0 0

It follows that the matrices of sp(V ) in the basis of ei’s are precisely the matrices from the Lie algebra

AB sp = { | B = sBts, C = sCts, D = −sAts}, 2n CD

∼ t so sp(V ) = sp2n. Note that sX s is the transpose of X with respect to the second main diagonal. Choose the subalgebra h consisting of all diagonal matrices in g. Then, ∨ setting αi := eii − ei+1,i+1 − e−i,−i + e−i−1,−i−1, for 1 ≤ i < n and ∨ αn = enn − e−n,−n, ∨ ∨ ∨ α1 , . . . , αn−1, αn is a basis of h. Next, setting αi = εi −εi+1 for 1 ≤ i < n, and αn := 2εn,

α1, . . . , αn−1, αn is a basis of h∗. Let ∨ aij = hαi , αji. 6 Main Definitions Then the Cartan matrix A is the n × n matrix

 2 −1 0 0 ... 0 0 0  −1 2 −1 0 ... 0 0 0       0 −1 2 −1 ... 0 0 0   .  .  .   .     0 0 0 0 ... −1 2 −2 0 0 0 0 ... 0 −1 2

Define

X2εi = ei,−i, (1 ≤ i ≤ n)

X−2εi = e−i,i, (1 ≤ i ≤ n)

Xεi−εj = eij − e−j,−i (1 ≤ i < j ≤ n)

X−εi+εj = eji − e−i,−j (1 ≤ i < j ≤ n)

Xεi+εj = ei,−j + ej,−i (1 ≤ i < j ≤ n)

X−εi−εj = e−j,i + e−i,j (1 ≤ i < j ≤ n).

Note that

[h, Xα] = α(h)Xα (h ∈ h), and

∨ ∨ {α1 , . . . , αn } ∪ {Xα}

is a basis of g. Set ei = Xαi and fi = X−αi for 1 ≤ i ≤ n. It is easy to check that

∨ ∨ e1, . . . , en, f1, . . . , fn, α1 , . . . αn (1.9) generate g and the relations (1.2-1.7) hold. Again, Serre’s theorem claims that g is actually generated by the elements of (1.11) subject only to these relations. The Dynkin diagram in this case is:

••••••••... < α1 α2 αn−1 αn

The vertices n−1 and n are connected the way they are because an−1,n = −2 and an,n−1 = −1, and in other places we follow the same rules as in the case sl. 1.1 Some Examples 7 1.1.3 Orthogonal Lie Algebras Let V be an N-dimensional vector space and ϕ : V × V → C be a non-degenerate symmetric bilinear form on V . Let g = so(V, ϕ) = {X ∈ gl(V ) | ϕ(Xv, w) + ϕ(v, Xw) = 0 for all v, w ∈ V }.

An easy check shows that g is a Lie subalgebra of gl(V ). It is known from linear algebra that over C all non-degenerate symmetric bilinear forms are equivalent, i.e. if ϕ0 is another such form then ϕ0(v, w) = ϕ(gv, gw) for some fixed g ∈ GL(V ). It follows that

so(V, ϕ0) = g−1(so(V, ϕ))g =∼ so(V, ϕ), thus we can speak of just so(V ). To think of so(V ) as a Lie alge- bra of matrices, choose a basis e1, . . . , en, e−n, . . . , e−1 if N = 2n and e1, . . . , en, e0, e−n, . . . , e−1 if N = 2n + 1, such that the Gram matrix of ϕ in this basis is 0 0 s 0 s and 0 2 0 , s 0   s 0 0 respectively, where s is the n × n matrix as in (1.8). It follows that the matrices of so(V ) in the basis of ei’s are precisely the matrices from the Lie algebra AB so = { | B = −sBts, C = −sCts, D = −sAts}, 2n CD if N = 2n, and A 2sxt B t t t so2n+1 = { y 0 x  | B = −sB s, C = −sC s, D = −sA s}, C 2syt D if N = 2n + 1 (here x, y are arbitrary 1 × n matrices). We have in all ∼ cases that so(V ) = soN . Choose the subalgebra h consisting of all diagonal matrices in g. We now consider even and odd cases separately. First, let N = 2n. ∨ Then, setting αi := eii − ei+1,i+1 − e−i,−i + e−i−1,−i−1, for 1 ≤ i < n ∨ and αn = en−1,n−1 + enn − e−n+1,−n+1 − e−n,−n,

∨ ∨ ∨ α1 , . . . , αn−1, αn is a basis of h. Next, setting αi = εi − εi+1 for 1 ≤ i < n, and αn := 8 Main Definitions

εn−1 + εn,

α1, . . . , αn−1, αn is a basis of h∗. Let ∨ aij = hαi , αji.

Then the Cartan matrix A is the n × n matrix  2 −1 0 0 ... 0 0 0  −1 2 −1 0 ... 0 0 0     0 −1 2 −1 ... 0 0 0     .   .  .  .     0 0 0 0 ... −2 −1 −1    0 0 0 0 ... −1 2 0  0 0 0 0 ... −1 0 2

Define

Xεi−εj = eij − e−j,−i (1 ≤ i < j ≤ n)

X−εi+εj = eji − e−i,−j (1 ≤ i < j ≤ n)

Xεi+εj = ei,−j − ej,−i (1 ≤ i < j ≤ n)

X−εi−εj = e−j,i − e−i,j (1 ≤ i < j ≤ n).

Note that

[h, Xα] = α(h)Xα (h ∈ h), and ∨ ∨ {α1 , . . . , αn } ∪ {Xα}

is a basis of g. Set ei = Xαi and fi = X−αi for 1 ≤ i ≤ n. It is easy to check that ∨ ∨ e1, . . . , en, f1, . . . , fn, α1 , . . . αn (1.10) generate g and the relations (1.2-1.7) hold. Again, Serre’s theorem claims that g is generated by the elements of (1.11) subject only to these relations. The Dynkin diagram in this case is:

•αn ••••••••... α1 α2 αn−2 αn−1 1.1 Some Examples 9

∨ Let N = 2n+1. Then, setting αi := eii−ei+1,i+1−e−i,−i+e−i−1,−i−1, ∨ for 1 ≤ i < n and αn = 2enn − 2e−n,−n,

∨ ∨ ∨ α1 , . . . , αn−1, αn is a basis of h. Next, setting αi = εi − εi+1 for 1 ≤ i < n, and αn := εn,

α1, . . . , αn−1, αn is a basis of h∗. Let ∨ aij = hαi , αji. Then the Cartan matrix A is the n × n matrix  2 −1 0 0 ... 0 0 0  −1 2 −1 0 ... 0 0 0       0 −1 2 −1 ... 0 0 0   .  .  .   .     0 0 0 0 ... −1 2 −1 0 0 0 0 ... 0 −2 2

(It is transpose to the one in the symplectic case). Define

Xεi = 2ei,0 + e0,−i, (1 ≤ i ≤ n)

X−εi = 2e−i,0 + e0,i, (1 ≤ i ≤ n)

Xεi−εj = eij − e−j,−i (1 ≤ i < j ≤ n)

X−εi+εj = eji − e−i,−j (1 ≤ i < j ≤ n)

Xεi+εj = ei,−j − ej,−i (1 ≤ i < j ≤ n)

X−εi−εj = e−j,i − e−i,j (1 ≤ i < j ≤ n).

Note that

[h, Xα] = α(h)Xα (h ∈ h), and ∨ ∨ {α1 , . . . , αn } ∪ {Xα}

is a basis of g. Set ei = Xαi and fi = X−αi for 1 ≤ i ≤ n. It is easy to check that ∨ ∨ e1, . . . , en, f1, . . . , fn, α1 , . . . αn (1.11) generate g and the relations (1.2-1.7) hold. Again, Serre’s theorem 10 Main Definitions claims that g is actually generated by the elements of (1.11) subject only to these relations. The Dynkin diagram in this case is:

••••••••... > α1 α2 αn−1 αn

1.2 Generalized Cartan Matrices

Definition 1.2.1 A matrix A ∈ Mn(Z) is a generalized Cartan matrix (GCM) if

(C1) aii = 2 for all i; (C2) aij ≤ 0 for all i 6= j; (C3) aij = 0 if and only if aji = 0. Two GCMs A and A0 are equivalent if they have the same degree 0 n and there is σ ∈ Sn such that aij = aσ(i),σ(j). A GCM is called indecomposable if it is not equivalent to a diagonal sum of smaller GCMs.

Throughout we are going to assume that A = (aij)1≤i,j≤n is a gen- eralized Cartan matrix of rank `.

Definition 1.2.2 A realization of A is a triple (h, Π, Π∨) where h is a ∗ ∨ ∨ ∨ complex vector space, Π = {α1, . . . , αn} ⊂ h , and Π = {α1 , . . . , αn } ⊂ h such that (i) both Π and Π∨ are linearly independent; ∨ (ii) hαi , αji = aij for all i, j; (iii) dim h = 2n − `. Two realizations (h, Π, Π∨) and (h0, Π0, (Π0)∨) are isomorphic if there 0 ∨ exists an isomorphism ϕ : h → h of vector spaces such that ϕ(αi ) = 0 ∨ ∗ 0 ((αi) ) and ϕ (αi) = (αi) for i = 1, 2, . . . , n.

 2 −1 0  Example 1.2.3 (i) Let A = −1 2 −1 . We have n = ` = 3. Let 0 −1 2 4 e1, . . . , e4 be the standard basis of C , ε1, . . . , ε4 be the dual basis, and h = {(a1, . . . , a4) | a1 + ··· + a4 = 0}. Finally, take Π = {ε1 − ε2, ε2 − ∨ ε3, ε3 − ε4} and Π = {e1 − e2, e2 − e3, e3 − e4}. 3 Another realization comes as follows. Let h = C , and αi denote the ∨ ith coordinate function. Now take αi to be the ith row of A. It is clear that the two realizations are isomorphic. 1.2 Generalized Cartan Matrices 11  2 −1 −1 (ii) Let A = −1 2 −1 . We have n = 3, ` = 2. Take h = C4 −1 −1 2 and let αi denote the ith coordinate function (we only need the first  2 −1 −1 0 ∨ three). Now take αi to be the ith row of the matrix −1 2 −1 0 . −1 −1 2 1

Proposition 1.2.4 For each A there is a unique up to isomorphism realization. Realizations of matrices A and B are isomorphic if and only if A = B.

A A  Proof Assume for simplicity that A is of the form A = 11 12 A21 A22 where A11 is a non-singular ` × ` matrix. Let   A11 A12 0 C = A21 A22 In−` . 0 In−` 0

2n−` Note det C = ± det A11, so C is non-singular. Let h = C . Define ∗ ∨ ∨ α1, . . . , αn ∈ h to be the first n coordinate functions, and α1 , . . . , αn to be the first n row vectors of C. Now, let (h0, Π0, (Π0)∨ be another realization of A. We complete 0 ∨ 0 ∨ 0 ∨ 0 ∨ 0 (α )1 ,..., (α )n to a basis (α )1 ,..., (α )2n−` of h . Then the matrix 0 ∨ 0 (h(αi) , αji) has form   A11 A12 A21 A22 . B1 B2 By linear independence, this matrix has rank n. Thus it has n linearly independent rows. Since the rows rows ` + 1, . . . , n are linear combina- tions of rows 1, . . . , `, the matrix   A11 A12 B1 B2

0 0 0 0 is non-singular. We now complete α1, . . . , αn to α1, . . . , α2n−`, so that 0 ∨ 0 the matrix (h(αi) , αji) is   A11 A12 0 A21 A22 In−` . B1 B2 0 12 Main Definitions

0 0 0 ∗ This matrix is non-singular, so α1, . . . , α2n−` is a basis of (h ) . Since A11 is non-singular, by adding suitable linear combinations of the first ` rows to the last n − ` rows, we may achieve B1 = 0. Thus it is possible 0 ∨ 0 ∨ 0 ∨ 0 ∨ to choose (α )n+1,..., (α )2n−`, so that (α )1 ,..., (α )2n−` are a basis of h0 and   A11 A12 0 0 ∨ 0 (h(αi) , αji) = A21 A22 In−` . 0 0 B2 0 0 The matrix B2 must be non-singular since the whole matrix is non- 0 ∨ 0 ∨ singular. We now make a further change to (α )n+1,..., (α )2n−` equiv- alent to multiplying the above matrix by   I` 0 0  0 In−` 0  . 0 −1 0 0 (B2) Then we obtain   A11 A12 0 0 ∨ 0 (h(αi) , αji) = A21 A22 In−` . 0 In−` 0 ∨ 0 ∨ This is equal to the matrix C above. Thus the map αi 7→ (αi) gives 0 0 an isomorphism h → h whose dual is given by αi 7→ αi. This shows that the realizations (h, Π, Π∨ and (h0, Π0, (Π0)∨ are isomorphic. Finally, assume that ϕ :(h, Π, Π∨) → (h0, Π0, (Π0)∨) is an isomorphism of realizations of A and B respectively. Then

0 ∨ 0 ∨ 0 ∨ ∗ 0 ∨ bij = h(αi) , αji = hϕ(αi ), αji = hαi , ϕ (αj)i = hαi , αji = aij.

Throughout we assume that (h, Π, Π∨) is a realization of A. We refer to the elements of Π as simple roots and the elements of Π∨ as simple coroots, to Π and Π∨ as root basis and coroot basis, respectively. Also set n n Q = ⊕i=1Zαi,Q+ = ⊕i=1Z+αi. We call Q root . Dominance ordering is a partial order ≥ on h∗ Pn defined as follows: λ ≥ µ if and only if λ−µ ∈ Q+. For α = i=1 kiαi ∈ Q, the number n X ht α := ki i=1 1.3 The Lie algebra g˜(A) 13 is called the height of α.

1.3 The Lie algebra g˜(A)

Definition 1.3.1 The Lie algebra g˜(A) is defined as the algebra with generators ei, fi (i = 1, . . . , n) and h and relations

∨ [ei, fj] = δijαi , (1.12) [h, h0] = 0 (h, h0 ∈ h), (1.13)

[h, ei] = hαi, hiei (h ∈ h), (1.14)

[h, fi] = −hαi, hifi (h ∈ h). (1.15)

It follows from the uniqueness of realizations that g˜(A) depends only 0 ∗ 0 on A (this boils down to the following calculation: hαi, ϕ(h)i = hϕ (αi), hi = hαi, hi). Denote by n˜+ (resp. n˜−) the subalgebra of g˜(A) generated by all ei (resp. fi).

Lemma 1.3.2 (Weight Lemma) Let V be an h-module such that L V = λ∈h∗ Vλ where the weight space Vλ is defined as {v ∈ V | hv = hλ, hiv for all h ∈ h}. Let U be a submodule of V . Then U = L λ∈h∗ (U ∩ Vλ).

Proof Any elementr v ∈ V can be written in the form v = v1 + ··· + vm where vj ∈ Vλj , and theere is h ∈ h such that λj(h) are all distinct. For v ∈ U, we have

m k X k h (v) = λj(h) vj ∈ U (k = 0, 1, . . . , m − 1). j=1 We got a system of linear equations with non-singular matrix. It follows that all vj ∈ U.

Theorem 1.3.3 Let g˜ = g˜(A). Then

(i) g˜ = n˜− ⊕ h ⊕ n˜+.

(ii) n˜+ (resp. n˜−) is freely generated by the ei’s (resp. fi’s).

(iii) The map ei 7→ fi, fi 7→ ei, h 7→ −h (h ∈ h) extends uniquely to an involution ω˜ of g˜. 14 Main Definitions (iv) One has the root space decomposition with respect to h:

M  M  g˜ = g˜−α ⊕ h ⊕ g˜α ,

α∈Q+,α6=0 α∈Q+,α6=0

where g˜α = {x ∈ g˜ | [h, x] = α(h)x for all h ∈ h}. Moreover, each g˜α is finite dimensional, and g˜±α ⊂ n˜± for ±α ∈ Q+, α 6= 0. (v) Among the ideals of g˜ which have trivial intersection with h, there is unique maximal ideal r. Moreover,

r = (r ∩ n˜−) ⊕ (r ∩ n˜+)(direct sum of ideals).

Proof Let V be a complex vector space with basis v1, . . . , vn and let λ ∈ h∗. Define the action of the generators on the tensor algebra T (V ) as follows:

(a) fi(a) = vi ⊗ a for a ∈ T (V ). (b) h(1) = hλ, hi and then inductively on s,

h(vj ⊗ a) = −hαj, hivj ⊗ a + vj ⊗ h(a)

for a ∈ T s−1(V ).

(c) ei(1) = 0 and then inductively on s,

∨ ei(vj ⊗ a) = δijαi (a) + vj ⊗ ei(a)

for a ∈ T s−1(V ).

To see that these formulas define a representation of g˜, let us check the relations. For the first relation:

(eifj − fjei)(a) = ei(vj ⊗ a) − vj ⊗ ei(a) ∨ = δijαi (a) + vj ⊗ ei(a) − vj ⊗ ei(a) ∨ = δijαi (a).

The second relation is obvious since h acts diagonally. For the third relation, apply induction on s, the relation being obvious for s = 0. For s−1 s > 0, take a = vk ⊗ a1 where a1 ∈ T (V ). Then using induction we 1.3 The Lie algebra g˜(A) 15 have

∨ (hej − ejh)(vk ⊗ a1) = h(δjkαj (a1)) + h(vk ⊗ ej(a1))

−ej(−hαk, hivk ⊗ a1) − ej(vk ⊗ h(a1)) ∨ = δjkαj (h(a1)) − hαk, hivk ⊗ ej(a1) ∨ +vk ⊗ h(ej(a1)) + hαk, hiδjkαj (a1) ∨ +hαk, hivk ⊗ ej(a1) − δjkαj (h(a1))

−vk ⊗ ejh(a1) ∨ = vk ⊗ [h, ej](a1) + hαj, hiδjkαj (a1) ∨ = vk ⊗ hαj, hiej(a1) + hαj, hiδjkαj (a1) ∨ = hαj, hi(vk ⊗ ej(a1) + δjkαj (a1))

= hαj, hiej(vk ⊗ a1).

Finally, for the fourth relation:

(hfj − fjh)(a) = h(vj ⊗ a) − vj ⊗ h(a)

= −hαj, hivj ⊗ a + vj ⊗ a − vj ⊗ h(a)

= −hαj, hivj ⊗ a.

Now we prove (i)-(v). (iii) is easy to check using the defining relations.

(ii) Consider the map ϕ : n˜− → T (V ), u 7→ u(1). We have ϕ(fi) = vi, and for any Lie word w(f1, . . . , fn) we have

ϕ(w(f1, . . . , fn)) = w(v1, . . . , vn).

Now, for two words w and w0, we have

0 0 ϕ([w(f1, . . . fn), w (f1, . . . fn)]) = [w(v1, . . . vn), w (v1, . . . vn)] 0 = [ϕ(w(f1, . . . fn)), ϕ(w (f1, . . . fn))], so ϕ is a Lie algebra homomorphism. Now T (V ) = F (v1, . . . , vn), the free associative algebra on v1, . . . , vn. Moreover, the free Lie algebra FL(v1, . . . , vn) lies in T (V ) and is spanned by all Lie words in v1, . . . , vn. Thus FL(v1, . . . , vn) is the of ϕ. But there is a Lie algebra ho- 0 momorphism ϕ : FL(v1, . . . , vn) → n˜−, vi 7→ fi, which is inverse to ϕ, so ϕ is an isomorphism. It follows that the fi generate n˜− freely. The similar result for n˜+ follows by applying the automorphismω ˜. (i) It is clear from relations that g˜ = n˜−+h+n˜+. Let u = n−+h+n+ = 0. Then in T(V) we have 0 = u(1) = n−(1) + hλ, hi. It follows that 16 Main Definitions hλ, hi = 0 for all λ, whence h = 0. Now 0 = n−(1) = ϕ(n−), whence n− = 0. (iv) It follows from the last two defining relations that M n˜± = g˜±α.

α∈Q+, α6=0 Moreover, | ht α| dim g˜α ≤ n . (1.16) L (v) By Lemma 1.3.2, for any ideal i of g˜, we have i = α∈h∗ (g˜α ∩ i). Since h = g˜0, the sum of the ideals which have trivial intersection with h is the unique maximal ideal with this property. It is also clear that the sum in (v) is direct. Finally, [fi, r ∩ n˜+] ⊂ n˜+. Hence [g˜, r ∩ n˜+] ⊂ r ∩ n˜+. Similarly for r ∩ n˜−.

Remark 1.3.4 Note that the formula (b) in the proof of the theorem implies that the natural homomorphism h → g˜ is an injection. This justifies out notation.

1.4 The Lie algebra g(A)

Definition 1.4.1 We define the Kac-Moody algebra g = g(A) to be the quotient g˜(A)/r where r is the ideal from Theorem 1.3.3(v). We refer to A as the Cartan matrix of g, and to n as the rank of g.

In view of Remark 1.3.4, we have a natural embedding h → g(A). The image of this embedding is also denoted h and is called a of g. We keep the same notation for the images of the elements ei, fi, h in g. The elements ei and fi are called Chevalley generators. We have the following root decomposition with respect to h: M g = gα, α∈Q with g0 = h. The number

mult α := dim gα is called the multiplicity of α. The element α ∈ Q is called a root if α 6= 0 and mult α 6= 0. A root α > 0 is called positive, a root α < 0 is called negative. Every root is either positive or negative. We denote 1.4 The Lie algebra g(A) 17 by ∆, ∆+, ∆− the sets of the roots, positive roots, and negative roots, respectively. The subalgebra of g generated by the ei’s (resp. fi’s) is denoted by n+ (resp. n−). From Theorem 1.3.3, we have

g = n− ⊕ h ⊕ n+.

It follows that gα ⊂ n+ if α > 0 and gα ⊂ n− if α < 0. So for α > 0, gα is a span of the elements of the form [... [[ei1 , ei2 ], ei3 ] . . . eis ] such that

αi1 + ··· + αis = α. Similarly for α < 0. It follows that

gαi = Cei, g−αi = Cfi, gsαi = 0 (s 6= ±1). Since every root is either positive or negative, we deduce

Lemma 1.4.2 If β ∈ ∆+ \{αi}, then (β + Zαi) ∩ ∆ ⊂ ∆+. From Theorem 1.3.3(v), r isω ˜-invariant, so we get the Chevalley in- volution

ω : g → g, ei 7→ −fi, fi 7→ −ei, h 7→ −h (h ∈ h). (1.17)

It is clear that ω(gα) = g−α, so mult α = mult(−α) and ∆− = −∆+.

Proposition 1.4.3 Let A1 be an n × n GCM, A2 be an m × m GCM,   A1 0 ∨ and A = be the direct sum matrix. Let (hi, Πi, Πi ) be a 0 A2 ∨ ∨ realization of Ai. Then (h1 ⊕ h2, Π1 t Π2, Π1 t Π2 ) is a realization of A, ∼ and g(A1)⊕g(A2) = g(A), the isomorphism sending (h1, h2) 7→ (h1, h2), (ei, 0) 7→ ei, (0, ej) 7→ en+j, (fi, 0) 7→ fi, (0, fj) 7→ fn+j.

Proof The first statement is obvious. For the second one, observe that generators (h1, h2), (ei, 0), (0, ej), (fi, 0), (0, fj) of g(A1) ⊕ g(A2) satisfy the defining relations of g˜(A). So there exists a surjective homomorphism

π˜ : g˜(A) → g(A1) ⊕ g(A2) which acts on the generators as the inverse of the isomorphism promised in the proposition. Moreover, since g(A1) ⊕ g(A2) has no ideals with intersect h1 ⊕ h2 trivially, it follows thatπ ˜ factors through the surjective homomorphism

π : g(A) → g(A1) ⊕ g(A2). It suffices to show that π is injective. If not, its must be an ideal whose intersection with h is non-trivial. But then dim π(h) < dim h 18 Main Definitions giving a contradiction with the fact that π(h) = h1 ⊕ h2 has dimension dim h1 + dim h2 = dim h. Denote by g0 = g0(A) the subalgebra of g(A) generated by all Chevalley generators ei and fj.

0 ∨ ∨ Proposition 1.4.4 Let h ⊂ h be the span of α1 , . . . , αn . 0 0 (i) g = n− ⊕ h ⊕ n+. (ii) g0 = [g, g].

0 0 Proof (i) It is clear that n− ⊕ h ⊕ n+ ⊂ g . Conversely, if a Lie word in the Chevalley generators is not equal to zero and belongs to h, it follows from the relations that it belongs to h0. (ii) It is clear that g0 is an ideal in g, and it follows from (i) that g/g0 =∼ 0 0 ∨ 1 ∨ h/h is abelian, so [g, g] ⊂ g . Conversely, αi = [ei, fi], ei = [ 2 αi , ei], 1 ∨ 0 and fi = [fi, 2 αi ], so g ⊂ [g, g]. n Let s = (s1, . . . , sn) ∈ Z . The s-grading M g = gj(s) j∈Z of g is obtained by setting M gj(s) = gα P P where the sum is over all α = i kiαi ∈ Q such that i siki = j. Note that

deg ei = − deg fi = −si, deg h = 0. The case s = (1,..., 1) gives the principal grading of g.

Lemma 1.4.5 If an element a of n+ (resp. n−) commutes with all fi (resp. all ei), then a = 0.

Proof Note that in the principal grading g−1 = span(f1, . . . , fn) and g1 = span(e1, . . . , en). So [a, g−1] = 0. Then

X i j (ad g1) (ad h) a i,j≥0 is an ideal of g contained in n+. This ideal must be zero, whence a = 0. 1.4 The Lie algebra g(A) 19 Proposition 1.4.6 The center of g and g0 is

c = {h ∈ h | hαi, hi = 0 for all i = 1, . . . , n}. (1.18) Moreover, dim c = n − `.

P Proof Let c ∈ g be central and c = i ci be decomposition with respect to the principal grading. Then [c, g−1] = 0 implies [ci, g−1] = 0, whence ci = 0 for i > 0 and similarly ci = 0 for i < 0. So c ∈ h, and then 0 0 = [c, ei] = hαi, ciei implies c ∈ c. Converse is clear. Finally, c ⊂ h , since otherwise dim c > n − `.

Lemma 1.4.7 Let I1,I2 be disjoint subsets of {1, . . . , n} such that aij = 0 = a for all i ∈ I , j ∈ I . Let β = P k(s)α (s = 1, 2). If ji 1 2 s i∈Is i i α = β1 + β2 is a root of g, then either β1 or β2 is zero.

∨ ∨ Proof Let i ∈ I1, j ∈ I2. Then [αi , ej] = 0, [αj , ei] = 0, [ei, fj] = 0, [ej, fi] = 0. Using Leibnitz formula and Lemma 1.4.5, we conclude (s) that [ei, ej] = [fi, fj] = 0. Denote by g be the subalgebra generated (1) (2) by all ei, fi for i ∈ Is. We have shown that g and g commute. (1) (2) Now, since gα is contained in the subalgebra generated by g and g it follows that it is contained in one of them.

Proposition 1.4.8 (i) g is a if and only if det A 6= 0 and for each pair of indices i, j the following condition holds:

there are indices i1, . . . , is such that aii1 ai1i2 . . . aisj 6= 0. (1.19) (ii) If the condition (1.19) holds then every ideal of g either contains g0 or is contained in the center.

Proof (i) If det A = 0, then the center of g is non-trivial by Proposi- tion 1.4.6. If (1.19) is violated, then we can split {1, . . . , n} into two non- trivial sunsets I1 and I2 such that aij = aji = 0 whenever i ∈ I1, j ∈ I2. Then g is a direct sum of two ideals by Proposition 1.4.3. Conversely, let det A 6= 0 and (1.19) hold. If i ⊂ g is an ideal, then i contains a non-zero element h ∈ h. By Proposition 1.4.6, c = 0, and hence [h, ej] = aej 6= 0 ∨ for some j. Hence ej ∈ i, and αj = [ej, fj] ∈ i. Now from (1.19) it ∨ follows that ej, fj, αj ∈ i for all i. Since det A 6= 0, h is a span of the ∨ αj ’s, and i = g. (ii) is proved similarly—exercise. 20 Main Definitions We finish with some terminology concerning duality. Note that At is also GCM, and (h∗, Π∨, Π) is its realization. The algebras g(A) and g(At) are called dual to each other. Then the dual root lattice

n ∨ X ∨ Q := Zαi i=1 corresponding to g(A) is the root lattice corresponding to g(At). Also, denote by ∆∨ ⊂ Q∨ the ∆(At) and refer to it as the dual root system of g.

1.5 Examples The following clumsy but easy result will be useful for dealing with examples:

Proposition 1.5.1 Let g be a Lie algebra over C and h be a finite dimensional abelian subalgebra of g with dim h = 2n − `. Suppose ∗ ∨ Π = {α1, . . . , αn} is a linearly independent system of h and Π = ∨ ∨ ∨ {α1 , . . . , αn } a linearly independent system of h satisfying hαi , αji = aij. Suppose also that e1, . . . , en, f1, . . . , fn are elements of g satisfying relations (1.12)-(1.15). Suppose e1, . . . , en, f1, . . . , fn and h generate g and that g has no non-zero ideals i with i ∩ h = 0. Then g is isomorphic to g(A).

Proof There is surjective homomorphism θ : g˜(A) → g. The restriction of θ to h ⊂ g˜(A) is an isomorphism onto h ⊂ g, cf. Remark 1.3.4. So ker θ ∩ h = 0. It follows that ker θ ⊂ r. In fact, ker θ = r, since g has no nonzero ideal i with i ∩ h = 0.

Example 1.5.2 Let

 2 −1 0 0 ... 0 0 0  −1 2 −1 0 ... 0 0 0       0 −1 2 −1 ... 0 0 0  A = An :=  .  .  .   .     0 0 0 0 ... −1 2 −1 0 0 0 0 ... 0 −1 2 1.5 Examples 21 ∼ We claim that g(A) = sln+1. We take h ⊂ sln+1 to be diagonal matrices ∗ of trace 0. Let εi ∈ h be the ith coordinate function, i.e.

εi(diag(a1, . . . , an)) = ai (1 ≤ i ≤ n). Now take

∨ αi = εi − εi+1, αi = eii − ei+1,i+1 (1 ≤ i ≤ n), and

ei = ei,i+1, fi = ei+1,i (1 ≤ i ≤ n). It is easy to see that all assumptions of Proposition 1.5.1 are satisfied. For example, to see that sln+1 does not contain nonzero ideals i with i ∩ h = 0, note that any such ideal would have to be a direct sum of the root subspaces, and it is easy to see that no such is an ideal. In fact, an argument along these lines shows that sln+1 is a simple Lie algebra, i.e. it has no non-trivial ideals. Note that the roots of sln+1 are precisely

εi − εj (1 ≤ i 6= j ≤ n + 1), with the corresponding root spaces gεi−εj = Ceij. Moreover, a similar argument shows that if g is a finite dimensional with Cartan matrix A, then g =∼ g(A). Before doing the next example we explain several general construc- tions. Let g be an arbitrary Lie algebra. A 2-cocycle on g is a bilinear map

ψ : g × g → C satisfying ψ(y, x) = −ψ(x, y)(x, y ∈ g), (1.20) ψ([x, y], z) + ψ([y, z], x) + ψ([z, x], y) = 0 (x, y, z ∈ g). (1.21) If ψ is a 2-cocyle and g˜ = g ⊕ Cc for some formal element c, then g˜ is a Lie algebra with respect to [x + λc, y + µc] = [x, y] + ψ(x, y)c. We refer to g˜ as the central extension of g with respect to the cocycle ψ. Let D : g → g be a Lie algebra derivation, i.e. D is a linear map and D([x, y]) = [D(x), y] + [x, D(y)] (x, y ∈ g). 22 Main Definitions Let

gˆ = g ⊕ Cd for some formal element d. Then gˆ is a Lie algebra with respect to

[x + λd, y + µd] = [x, y] + λd(y) − µd(x).

We refer tog ˆ as the Lie algebra obtained from g by adjoining the deriva- tion D. Sometimes we use the same letter d for both d and D. A typical example of derivation comes as follows. Let g = ⊕j ∈ Zgj be a Lie algebra grading on g. Then the map g sending x to jx for any x ∈ g is a derivation. Let

−1 L = C[t, t ], and for any Lie algebra g define the corresponding

L(g) := L ⊗ g.

This is an infinite dimensional Lie algebra with bracket

[P ⊗ x, Q ⊗ y] = PQ ⊗ [x, y](P,Q ∈ L, x, y ∈ g).

If (·|·) is a bilinear form on g, it can be extended to a L-valued bilinear form

(·|·)t : L(g) × L(g) → L by setting

(P ⊗ x|Q ⊗ y)t = PQ(x|y).

We define the residue function

X i Res : L → C, cit 7→ c−1. i∈Z

Lemma 1.5.3 Let (·|·) be a symmetric invariant bilinear form on g. The function ψ : L(g) × L(g) → C defined by da ψ(a, b) = Res ( |b) dt t

i j is a 2-cocycle on L(g). Moreover, ψ(t ⊗ x, t ⊗ y) = iδi,−j(x|y). 1.5 Examples 23 Proof Note that

i j i−1 j ψ(t ⊗ x, t ⊗ y) = Res (it ⊗ x|t ⊗ y)t = Res iti+j−1(x|y)  i(x|y) if i + j = 0 = 0 otherwise from which (1.20) follows. Moreover, we have

ψ([ti ⊗ x, tj ⊗ y], tk ⊗ z) = ψ(ti+j[x, y], tk ⊗ z)  (i + j)([x, y]|z) if i + j + k = 0 = 0 otherwise

Now, if i + j + k 6= 0, (1.21) is clear. If i + j + k = 0, the required sum is

−k([x, y]|z) − i([y, z]|x) − j([z, x]|y) = −k([x, y]|z) − i([x, y]|z) − j([x, y]|z) = 0 since the form is symmetric and invariant.

If g is a simple finite dimensional Lie algebra it possesses unique up to a scalar non-degenerate symmetric invariant form (·|·), so Lemma 1.5.3 allows us to define a 2-cocycle ψ on L(g), and the previous discussion then allows us to consider the corresponding central extension

L¯(g) = L(g) ⊕ Cc. Moreover, L¯(g) is graded with deg tj ⊗ x = j, deg c = 0. We then have the corresponding derivation

d : L¯(g) → L¯(g), tj ⊗ x 7→ jtj ⊗ x, c 7→ 0.

Finally, by adjoining d to L¯(g) we get the Lie algebra

Lˆ(g) := L(g) ⊕ Cc ⊕ Cd, with operation

m n [t ⊗ x + λc + µd, t ⊗ y + λ1c + µ2d] m+n n m = (t ⊗ [x, y] + µnt ⊗ y − µ1mt ⊗ x) + mδm,−n(x|y)c.

 2 −2 Example 1.5.4 Let A = A(1) := . We claim that 1 −2 2 ∼ g(A) = Lˆ(sl2), 24 Main Definitions sometimes also denoted scl2. First of all recall that the non-degenerate symmetric invariant form on sl2 is just the trace form

(x|y) = tr (xy)(x, y ∈ sl2). Then

(e, f) = 1, (h, h) = 2, (e, e) = (e, h) = (f, h) = (f, f) = 0.

Now set h = Ch ⊕ Cc ⊕ Cd and note that dim h = 2n − `. Next define

∨ ∨ α0 = c − 1 ⊗ h, α1 = 1 ⊗ h

∗ and α0, α1 ∈ h via

∨ ∨ hαi, αi i = 2, hαi, αj i = −2 (0 ≤ i 6= j ≤ 1) and

hα0, ci = 0, hα0, di = 1, hα1, ci = 0, hα1, di = 0. It is clear that we have defined a realization of A. Next set

−1 e0 = t ⊗ f, e1 = 1 ⊗ e, f0 = t ⊗ e, f1 = 1 ⊗ f. It is now easy to check the remaining conditions of Proposition 1.5.1. Indeed,

∨ [ei, fj] = δijαi , [h, ei] = αi(h)ei, [h, fi] = −αi(h)fi (h ∈ h) follow from definitions. Next, scl2 is generated by h, e0, e1, f0, f1: if m is the subalgebra generated by them, then clearly 1 ⊗ sl2 ⊂ m. Set i := {x ∈ sl2 | t ⊗ x ∈ m. We have f ∈ i, so i 6= 0. Also, if x ∈ i, y ∈ sl2, then [x, y] ∈ i, thus i is an ideal of sl2, whence i = sl2, and t ⊗ sl2 ⊂ m. We may now use the relation

[t ⊗ x, tk−1 ⊗ y] = tk ⊗ [x, y](k > 0)

k to deduce by induction on k that t ⊗sl2 ⊂ m for all k > 0. Analogously k t ⊗ sl2 ⊂ m for all k < 0. It remains to show that scl2 has no non-zero ideals i having trivial intersection with h. For this we study root space decomposition of scl2. Let δ ∈ h∗ be defined from

∨ ∨ δ(α1 ) = δ(α2 ) = 0, δ(d) = 1. 1.5 Examples 25 We claim that the roots are precisely

{±α1 + kδ | k ∈ Z} ∪ {kδ | k ∈ Z \{0}}. Indeed,

k k gα1+kδ = C(t ⊗ e), g−α1+kδ = C(t ⊗ f), (k ∈ Z) and k gkδ = C(t ⊗ h)(k ∈ Z \{0}).

Since δ = α1 + α2, positive roots are of the form {(k + 1)α1 + kα1, kα1 + (k + 1)α2,(k + 1)α1 + (k + 1)α2} for k ∈ Z≥0. Let i be a non-zero ideal of scl2 which has trivial intersection with h. It follows from Lemma 1.3.2 that some ti ⊗ x ∈ i where x = e, f or h. Take y to be f, e or h, respectively. Then (x|y) 6= 0, and [ti ⊗ x, t−i ⊗ y] = [x, y] + i(x|y)c ∈ i ∩ h, and hence [x, y] + i(x|y)c = 0. since [x, y] is a multiple of 1 ⊗ h, we must have i = 0, whence [x, y] = 0. But since i = 0 we cannot have x = h, and then [x, y] = 0 is a contradiction. ∗ In conclusion we introduce the element Λ0 ∈ h which is defined from ∨ ∨ Λ0 : α0 7→ 1, α1 7→ 0, d 7→ 0. ∗ Then {α0, α1, Λ0} and {α1, δ, Λ0} are bases of h . 2 Invariant bilinear form and generalized Casimir operator

2.1 Symmetrizable GCMs

A GCM A = (aij) is called symmetrizable if there exists a non-singular diagonal matrix D = diag(ε1, . . . , εn) and a symmetric matrix B such that A = DB. (2.1)

If A is symmetrizable, we also call g = g(A) symmetrizable.

Lemma 2.1.1 Let A be a GCM. Then A is symmetrizable if and only if

ai1i2 ai2i3 . . . aiki1 = ai2i1 ai3i2 . . . ai1ik for all i1, i2, . . . , ik ∈ {1, . . . , n}.

Proof If A is symmetrisable then aij = εibij, hence

ai1i2 ai2i3 . . . aiki1 = di1 . . . dik ai1i2 ai2i3 . . . aiki1 ,

ai2i1 ai3i2 . . . ai1ik = di1 . . . dik bi2i1 bi3i2 . . . bi1ik , and these are equal since B is symmetric. For the converse, we may assume that A is indecomposable. Thus for each i ∈ {1, . . . , n} there exists a sequence 1 = j1, . . . , jt = i with

aj1j2 aj2j3 . . . ajt−1jt 6= 0.

We choose a number ε1 6= 0 in R and define

ajtjt−1 . . . aj2j1 εi = ε1. (2.2) aj1j2 aj2j3 . . . ajt−1jt

26 2.2 Invariant bilinear form on g 27 To see that this definition depends only on i, not on the sequence chosen from 1 to i, let 1 = k1, . . . , ku = i be another such sequence. Then a . . . a a . . . a jtjt−1 j2j1 = kuku−1 k2k1 , aj1j2 aj2j3 . . . ajt−1jt ak1k2 ak2k3 . . . aku−1ku since it is equivalent to

a1k2 ak2k3 . . . akt−1iaijt−1 . . . aj21 = ak21ak3k2 . . . aiku−1 ajt−1jt . . . a1j2 , which is one of the given conditions on the matrix A. Thus εi ∈ R is well defined and εi 6= 0. Let bij = aij/εi. It remains to show that bij = bji or aij/εi = aji/εj. If aij = 0 this is clear since then aji = 0. If aij 6= 0, let 1 = j1, . . . , jt = i be a sequence from 1 to i of the type described above. Then 1 = j1, . . . , jt, j is another such sequence from 1 to j. These sequences may be used to obtain εi and εj respectively, and we have

aji εj = εi, aij as required.

Lemma 2.1.2 Let A be a symmetrizable indecomposable GCM. Then A can be expressed in the form A = DB where D = diag(ε1, . . . , εn), B is symmetric, with ε1, . . . , εn positive and bij ∈ Q. Also D is determined by these conditions up to a scalar multiple.

Proof We choose ε1 to be any positive rational number. Then (2.2) shows that we can choose all εi to be positive rational numbers. Mul- tiplying by a positive scalar we can make all εi positive integers. Also bij = aij/εi ∈ Q. The proof of Lemma 2.1.1 also shows that D is unique up to a scalar multiple.

Remark 2.1.3 If A is symmetrizable, in view of the above lemma, we may and always will assume that ε1, . . . , εn are positive integers and B is a rational matrix.

2.2 Invariant bilinear form on g Let A be a symmetrizable GCM as above. Fix a linear complement h00 to h0 in h: h = h0 ⊕ h00. 28 Invariant bilinear form and generalized Casimir operator Define a symmetric bilinear form (·|·) on h by the following two condi- tions:

∨ (αi |h) = hαi, hiεi (h ∈ h); (2.3) (h0|h00) = 0 (h0, h00 ∈ h00). (2.4) Note that ∨ ∨ (αi |αj ) = bijεiεj. (2.5)

Lemma 2.2.1

(i) The kernel of the restriction (·|·)|h0 is c. (ii) (·|·) is non-degenerate on h.

Proof (i) is clear from (1.18). (ii) It follows from (i) and Proposition 1.4.6 that the kernel of (·|·) is contained in h0. Now if for all h ∈ h we have m m X ∨ X 0 = ( ciαi |h) = h ciεiαi, hi, i=1 i=1 Pm whence i=1 ciεiαi = 0, and so all ci = 0. Since (·|·) is non-degenerate we have an isomorphism ν : h → h∗ such that

hν(h1), h2i = (h1|h2)(h1, h2 ∈ h), and the induced bilinear form (·|·) on h∗. Note from (2.3) that

∨ ν(αi ) = εiαi. (2.6) So, by (2.5), −1 (αi|αj) = bij = aijεi . (2.7)

Since all εi > 0 (Remark 2.1.3), it follows that

(αi|αi) > 0 (1 ≤ i ≤ n). (2.8)

(αi|αj) ≤ 0 (i 6= j). (2.9)

∨ 2 −1 αi = ν (αi). (2.10) (αi|αi) So we get the usual expression for Cartan matrix:

2(αi|αj) aij = . (αi|αi) 2.2 Invariant bilinear form on g 29 Example 2.2.2 (i) If A is as in Example 1.5.2, then the Gram matrix ∨ ∨ of (·|·) in the basis α1 , . . . , αn is A itself. In fact, we may take (·|·) to be the trace form restricted to h. (ii) If A is as in Example 1.5.4, choose h00 := Cd. Then the Gram ∨ ∨ matrix of (·|·) in the basis α0 , α1 , d and the transported form in the basis α0, α1, Λ0 is  2 −2 1  −2 2 0  , 1 0 0

∨ while the Gram matrix of the same forms in the bases α1 , c, d and α1, δ, Λ0 is 2 0 0  0 0 1  . 0 1 0

Theorem 2.2.3 Let g be symmetrizable. Fix decomposition (2.1) for A. Then there exists a non-degenerate symmetric bilinear form (·|·) on g such that (i) (·|·) is invariant, i.e. for all x, y, z ∈ g we have ([x, y]|z) = (x|[y, z]). (2.11)

(ii) (·|·)|h is as above. (iii) (gα, gβ) = 0 of α + β 6= 0.

(iv) (·|·)|gα⊕g−α is non-degenerate for α 6= 0. −1 (v) [x, y] = (x|y)ν (α) for x ∈ gα, y ∈ g−α, α ∈ ∆.

N Proof Set g(N) := ⊕j=−N gj, N = 0, 1,... , where g = ⊕j∈Zgj is the principal grading. Start with the form (·|·) on g(0) = h defined above and extend it to g(1) as follows:

(fj|ei) = (ei|fj) = δijεi, (g0|g±1) = 0, (g±1|g±1) = 0. An explicit check shows that the form (·|·) on g(1) satisfies (2.11) if both [x, y] and [y, z] belong to g(1). Now we proceed by induction to extend the form to an arbitrary g(N), N ≥ 2. By induction we assume that the form has been extended to g(N − 1) so that it satisfies (gi|gj) = 0 for |i|, |j| ≤ N − 1 with i + j 6= 0, and (2.11) for all x ∈ gi, y ∈ gj, z ∈ gk with |i + j|, |j + k| ≤ N − 1. We show that the form can be extended to g(N) with analogous properties. First we require that (gi|gj) = 0 for all |i|, |j| ≤ N with i + j 6= 0. It remains to define (x|y) = (y|x) for 30 Invariant bilinear form and generalized Casimir operator x ∈ gN , y ∈ g−N . Note that y is a linear combination of Lie monomials in f1, . . . , fn of degree N. Since N ≥ 2, each Lie monomial is a bracket of Lie monomials of degrees s and t with s + t = N. It follows that y can be written in the form X y = [ui, vi](ui ∈ g−ai , vi ∈ g−bi ) (2.12) i where ai, bi > 0 and ai + bi = N. The expression of y in this form need not be unique. Now define X (x|y) := ([x, ui]|vi). (2.13) i

The RHS is known since [x, ui] and vi lie in g(N −1). We must therefore show that RHS remains the same if a different expression (2.12) for y is chosen. In a similar way we can write x in the form X x = [wj, zj](wj ∈ gcj , zj ∈ gdj ) j where cj, dj > 0 and cj + dj = N. We will show that X X (wj|[zj, y]) = ([x, ui]|vi). j i

This will imply that the RHS of (2.13) is independent of the given ex- pression for y. In fact it is sufficient to show that

(wj|[zj, [ui, vi]]) = ([[wj, zj], ui]|vi).

Now

([[wj, zj], ui]|vi) = ([[wj, ui], zj]|vi) + ([wj, [zj, ui]]|vi)

= ([wj, ui]|[zj, vi]) − ([zj, ui]]|[wj, vi])

= ([wj, ui]|[zj, vi]) − ([wj, vi]|[zj, ui])

= (wj|[ui, [zj, vi]]) − (wj|[vi, [zj, ui]])

= (wj|[zj, [ui, vi]]).

We must now check (2.11) for all x ∈ gi, y ∈ gj, z ∈ gk with |i + j|, |j + k| ≤ N. We may assume that i + j + k = 0 and at least one of |i|, |j|, |k| is N. We suppose first that just one of |i|, |j|, |k| is N. Then the other two are non-zero. If |i| = N then (2.11) holds by definition of the form on g(N). Similarly for |k| = N. So suppose |j| = N. We may 2.2 Invariant bilinear form on g 31 assume that y has form y = [u, v] where u ∈ ga, v ∈ gb, a + b = |N|, and 0 < |a| < |j|, 0 < |b| < |j|. Then

([x, y]|z) = ([x, [u, v]]|z) = ([[v, x], u]|z) + ([[x, u], v]|z) = ([v, x]|[u, z]) + ([x, u]|[v, z]) = ([x, v]|[z, u]) + ([x, u]|[v, z]) = (x|[v, [z, u]]) + (x|[u, [v, z]]) = (x|[[u, v], z]) = (x|[y, z]).

Now suppose two of |i|, |j|, |k| are equal to N. Then i, j, k are N, −N, 0 in some order. Thus one of x, y, z lies in h. Suppose x ∈ h. We may again assume that y = [u, v]. Then

([x, y]|z) = ([x, [u, v]]|z) = ([[x, u], v]|z) − ([[x, v], u]|z) = ([x, u]|[v, z]) − ([x, v]|[u, z]) (by definition of (·|·) on g(N)) = (x|[u, [v, z]]) − (x|[v, [u, z]]) (by invariance of (·|·) on g(N − 1)) = (x|[[u, v], z]) = (x|[y, z]).

If z ∈ h the result follows by symmetry. Finally, let y ∈ h. Then we may assume that z = [u, v] where u ∈ ga, v ∈ gb, a + b = k, and 0 < |a| < |k|, 0 < |b| < |k|. Then

(x|[y, z]) = (x|[y, [u, v]]) = (x|[u, [y, v]]) + (x|[[y, u], v]) = ([x, u]|[y, v]) + ([x, [y, u]]|v) (by definition of (·|·) on g(N)) = ([[x, u], y]|v) + ([x, [y, u]]|v) (by invariance of (·|·) on g(N − 1)) = ([[x, y], u]|v) = ([x, y]|[u, v]) (by definition of (·|·) on g(N)) = ([x, y]|z).

By induction, we have defined a symmetric bilinear form on g which satisfies (i) and (ii). Let i be the radical of (·|·). Then i is an ideal in g. If i 6= 0 then i ∩ h 6= 0, which contradicts Lemma 2.2.1(ii). Thus (·|·) is non-degenerate. 32 Invariant bilinear form and generalized Casimir operator

The form also satisfies (iii), since for all h ∈ h, x ∈ gα, y ∈ gβ, using invariance, we have

0 = ([h, x]|y) + (x|[h, y]) = (hα, hi + hβ, hi)(x|y).

Now (iv) also follows from the non-degeneracy of the form.

Finally, let α ∈ ∆, x ∈ gα, y ∈ gβ, h ∈ h. Then

([x, y] − (x|y)ν−1(α)|h) = (x|[y, h]) − (x|y)hα, hi = 0, which implies (v).

The form (·|·) constructed in the theorem above is called the standard invariant form on g. It is uniquely determined by the conditions (i) and (ii) of the theorem (indeed, if (·|·)1 is another such form then (·|·) − (·|·)1 is too, but its radical is non-trivial ideal containing h, which is a contradiction). Throughout (·|·) denotes the standard invariant form on symmetriz- able g.

Example 2.2.4 (i) The standard invariant form is just the trace form on sln+1 is the trace form.

(ii) The standard invariant form on sbl2 is given by

m n (t ⊗ x|t ⊗ y) = δm,−ntr (xy),

(Cc + Cd|L(sl2)) = 0, (c|c) = (d|d) = 0, (c|d) = 1.

2.3 Generalized Casimir operator Let g be symmetrizable. By Theorem 2.2.3(iii),(iv), we can choose dual (i) (i) bases {eα } and {e−α} in gα and g−α. Then

X (s) (s) (x|y) = (x|e−α)(y|eα )(x ∈ gα, y ∈ g−α). (2.14) s

Lemma 2.3.1 If α, β ∈ ∆ and z ∈ gβ−α, then in g ⊗ g we have

X (s) (s) X (t) (t) e−α ⊗ [z, eα ] = [e−β, z] ⊗ eβ . (2.15) s t 2.3 Generalized Casimir operator 33 Proof Define a bilinear form (·|·) on g ⊗ g via

(x ⊗ y|x1 ⊗ y1) := (x|x1)(y|y1).

Taje e ∈ gα, f ∈ g−β. It suffices to prove that pairing of both sides of (2.15) with e ⊗ f gives the same result. We have, using (2.14),

X (s) (s) X (s) (s) (e−α ⊗ [z, eα ]|e ⊗ f) = (e−α|e)([z, eα ]|f) s s X (s) (s) = (e−α|e)(eα |[f, z]) s = (e|[f, z]). Similarly, X (t) (t) ([e−β, z] ⊗ eβ |e ⊗ f) = ([z, e]|f), t as required.

Corollary 2.3.2 In the notation of Lemma 2.3.1, we have

X (s) (s) X (t) (t) [e−α, [z, eα ]] = − [[z, e−β], eβ ](in g), (2.16) s t X (s) (s) X (t) (t) e−α[z, eα ] = − [z, e−β]eβ (in U(g)). (2.17) s t

Definition 2.3.3 A g-module V is called restricted if for every v ∈ V we have gαv = 0 for all but finitely many positive roots α. Let ρ ∈ h∗ be any functional satisfying

∨ hρ, αi i = 1 (1 ≤ i ≤ n). Then, by (2.10),

(ρ|αi) = (αi|αi)/2 (1 ≤ i ≤ n). (2.18)

For a restricted g-module V we define a linear operator Ω0 on V as follows: X X (i) (i) Ω0 = 2 e−αeα .

α∈∆+ i One can check that this definition is independent on choice of dual bases. 1 2 Let u1, u2,... and u , u ,... be dual bases of h. Note that

X i ∗ (λ|µ) = hλ, u ihµ, uii (λ, µ ∈ h ). (2.19) i 34 Invariant bilinear form and generalized Casimir operator Indeed,

(λ|µ) = (ν−1(λ)|ν−1(µ)) X −1 i −1 = (ν (λ)|u )(ν (µ)|ui) i X i = hλ, u ihµ, uii. i Also, X i −1 [ u ui, x] = x((α|α) + 2ν (α)) (x ∈ gα). (2.20) i Indeed,

X i X i X i [ u ui, x] = hα, u ixui + u hα, uiix i i i ! X i X i i = hα, u ihα, uiix + x u hα, uii + uihα, u i . i i Define the generalized Casimir operator to be the following linear operator Ω on V :

−1 X i Ω := 2ν (ρ) + u ui + Ω0. i

Example 2.3.4 (i) Let g = sl2. Then we have

Ω = h + h(1/2)h + 2fe = ef + fe + h(1/2)h,

P i i i.e. Ω = v vi for a pair {v } and {vi} of dual bases of sl2. This is a general fact for a finite dimensional simple Lie algebra. i (ii) Let g = sbl2. We can take a pair of dual bases u and ui of h as follows ∨ ∨ {α1 , c, d} and {(1/2)α1 , d, c}, and −1 ∨ 2ν (ρ) = α1 + 4d. Finally,

+∞ +∞ +∞ X −k k X −k k X −k k Ω0 = (t h)(t h) + 2 (t f)(t e) + 2 (t e)(t f). k=1 k=0 k=1 2.3 Generalized Casimir operator 35 For the purposes of he following theorem consider root space decom- position of U(g): M U(g) = Uβ, β∈Q where

Uβ = {u ∈ U(g)|[h, u] = hβ, hiu for all h ∈ h}.

Set 0 0 Uβ = Uβ ∩ U(g ),

0 L 0 so that U(g ) = β∈Q Uβ.

Theorem 2.3.5 Let g be symmetrizable.

0 0 (i) If V be a restricted g -module and u ∈ Uα then −1  [Ω0, u] = −u 2(ρ|α) + (α|α) + 2ν (α) . (2.21)

(ii) If V is a restricted g-module then Ω commutes with the action of g on V .

Proof Note that elements of h commute with Ω since Ω is of weight 0. Now (ii) follows from (i) and (2.20). Next, note that if (i) holds for 0 0 0 u ∈ Uα and u1 ∈ Uβ, then it also holds for uu1 ∈ Uα+β:

[Ω0, uu1] = [Ω0, u]u1 + u[Ω0, u1] −1  = −u 2(ρ|α) + (α|α) + 2ν (α) u1 −1  −uu1 2(ρ|β) + (β|β) + 2ν (β) −1 = −uu1 2(ρ|α) + (α|α) + 2ν (α) +2(α|β) + 2(ρ|β) + (β|β) + 2ν−1(β) −1  = −uu1 2(ρ|α + β) + (α + β|α + β) + 2ν (α + β) .

0 Since the eαi ’s and e−αi ’s generate g , it suffices to check (2.21) for u = eαi and e−αi . We explain the calculation for eαi , the case of e−αi being similar. We have

X X (s) (s) (s) (s) [Ω0, eαi ] = 2 ([e−α, eαi ]eα + e−α[eα , eαi ])

α∈∆+ s X X (s) (s) (s) (s) = 2[e−αi , eαi ]eαi + 2 ([e−α, eαi ]eα + e−α[eα , eαi ]).

α∈∆+\{αi} s 36 Invariant bilinear form and generalized Casimir operator Note using Theorem 2.2.3(v) that

−1 −1 2[e−αi , eαi ]eαi = −2ν (αi)eαi = −2(αi|αi)eαi − 2eαi ν (αi), which is the RHS of (2.21) for u = eαi . So it remains to prove that

X X (s) (s) (s) (s) ([e−α, eαi ]eα + e−α[eα , eαi ]) = 0. (2.22)

α∈∆+\{αi} s

Applying (2.17) to z = eαi , we get

X X (s) (s) (s) (s) ([e−α, eαi ]eα + e−α[eα , eαi ])

α∈∆+\{αi} s X X (s) X X (t) (t) = ([e , e ]e(s) − [e , e ]e . −α αi α −α−αi αi α+αi α∈∆+\{αi} s α∈∆\{αi} t

If α + αi 6∈ ∆, the last term is interpreted as zero. If α − αi 6∈ ∆, (s) then [e−α, eαi ] = 0. Thus we may assume α = β + αi in the first term with β ∈ ∆+ in view of Lemma 1.4.2, which makes that term equal to P P [e(t) , e ]e(t) , which completes the proof of (2.22). β∈∆\{αi} t −β−αi αi β+αi

Corollary 2.3.6 If in the assumptions of Theorem 2.3.5, v ∈ V is a high weight vector of weight Λ then Ω(v) = (Λ + 2ρ|Λ)v. If, additionally, v generates V , then

Ω = (Λ + 2ρ|Λ)IV .

Proof The second statement follows from the first and the theorem. The first statement is a consequence of the definition of Ω and (2.19). 3 Integrable representations of g and the Weyl group

3.1 Integrable modules Let ∨ g(i) = Cei + Cαi + Cfi.

It is clear that g(i) is isomorphic to sl2 with standard basis.

Lemma 3.1.1 (Serre Relations) If i 6= j then

1−aij 1−aij (ad ei) ej = 0, (ad fi) fj = 0. (3.1)

Proof We prove the second equality, the first then follows by application 1−aij of ω. Let v = fj, θij = (ad fi) fj. We consider g as a g(i)-module via ∨ adjoint action. We have eiv = 0 and αi v = −aijv. So, by of sl2,

−aij eiθij = (1 − aij)(−aij − (1 − aij) + 1)(ad fi) fj = 0 (i 6= j).

It is also clear from relations that ekθij = 0 if k 6= i, j or if k = j and aij 6= 0. Finally, if k = j and aij = 0, then

∨ ejθij = [ej, [fi, fj]] = [fi, αj ] = ajifi = 0. It remains to apply Lemma 1.4.5.

Let V be a g-module and x ∈ g. Then x is locally nilpotent on V if for every v ∈ V there is N such that xN v = 0.

Lemma 3.1.2 Let g be a Lie algebra, V be a g-module, and x ∈ g.

Ni (i) If y1, y2,... generate g and (ad x) yi = 0, i = 1, 2,... , then ad x is locally nilpotent on g.

37 38 Integrable representations of g and the Weyl group

(ii) If v1, v2,... generate V as g-module, ad x is locally nilpotent on Ni g, and x vi = 0, i = 1, 2,... , then x is locally nilpotent on V .

Proof Since ad x is a derivation, we have

k X k (ad x)k[y, z] = [(ad x)iy, (ad x)k−iz]. i i=0

This yields (i) by induction on the length of commutators in the yi’s. (ii) follows from the formula

k X k xka = ((ad x)ia)xk−i, (3.2) i i=0 which holds in any associative algebra.

Lemma 3.1.3 Operators ad ei and ad fi are locally nilpotent on g.

Proof Follows from the defining relations, Serre relations, and Lemma 3.1.2(i).

A g-module V is called h-diagonalizable if M V = Vλ, λ∈h∗ where the weight space Vλ is defined to be

Vλ = {v ∈ V |hv = λ(h)v for all h ∈ h}.

If Vλ 6= 0 we call λ a weight of V , and dim Vλ the multiplicity of the weight 0 0 λ denoted multV λ. h -diagonalizable g -modules are defined similarly. A g (resp. g0)-module V is called integrable if it is h (resp. h0)- diagonalizable and all ei, fi act locally nilpotently on V . For example the adjoint g-module is integrable.

Proposition 3.1.4 Let V be an integrable g-module. As a g(i)-module, V decomposes into a direct sum of finite dimensional irreducible h- invariant submodules.

Proof For v ∈ Vλ we have

k ∨ k−1 k eifi v = k(1 − k + hλ, αi i)fi v + fi eiv. 3.2 Weyl group 39 It follows that the subspace

X k m U := Cfi ei v k,m≥0 is (g(i) +h)-invariant. Since ei and fi are locally nilpotent on V , dim U < ∞. By Weyl’s Complete Reducibility Theorem, U is a direct sum of irreducible h-invariant g(i)-submodules (for h-invariance use the fact that k m k0 m0 ∨ fi ei v and fi ei v are of the same αi -weight if and only if they are of the same h-weight). It follows that each v ∈ V lies in a direct sum of finite dimensional h-invariant irreducible g(i)-modules, which implies the proposition.

Proposition 3.1.5 Let V be an integrable g-module, λ ∈ h∗ be a weight of V , and αi a simple root of g. Denote by M the set of all t ∈ Z such that λ + tαi is a weight of V , and let mt := multV (λ + tαi). Then: (i) M is a closed interval [−p, q] of integers, where both p and q are ∨ either non-negative integers or ∞; p − q = hλ, αi i when both p and q are finite; if multV λ < ∞ then p and q are finite.

(ii) The map ei : Vλ+tαi → Vλ+(t+1)αi is an embedding for t ∈ ∨ [−p, −hλ, αi i/2); in particular, the function t 7→ mt is increasing on this interval. ∨ (iii) The function t 7→ mt is symmetric with respect to t = −hλ, αi i/2. (iv) If λ and λ + αi are weights then ei(Vλ) 6= 0. ∨ (v) If λ + αi (resp. λ − αi) is not a weight, then hλ, αi i ≥ 0 (resp. ∨ hλ, αi i ≤ 0). ∨ (vi) λ − hλ, αi iαi is also a weight of V and ∨ multV (λ − hλ, αi iαi) = multV λ.

P Proof Set U := Vλ+kα . This is a (g +h)-module, which in view k∈Z i (i) of Proposition 3.1.4 is a direct sum of finite dimensional h-invariant irreducible g(i)-modules. Let p := − inf M and q := sup M. Then p, q ∈ Z+ since 0 ∈ M. Now everything follows from representation ∨ ∨ theory of sl2 using the fact hλ + tαi, αi i = 0 for t = −hλ, αi i/2.

3.2 Weyl group ∗ For each i = 1, . . . , n define the fundamental reflection ri of h by the formula ∨ ∗ ri(λ) = λ − hλ, αi iαi (λ ∈ h ). 40 Integrable representations of g and the Weyl group

It is clear that ri is a reflection with respect to the hyperplane

∗ ∨ Ti = {λ ∈ h | hλ, αi i = 0}. The W = W (A) of GL(h∗) generated by all fundamental re- ∗ flections is called the Weyl group of g. The action ri on h induces the ∨ dual fundamental reflection ri on h. Hence the Weyl groups of dual Kac-Moody algebras are contragredient linear groups which allows us ∨ to identify them. We will always do this and write ri for ri . A simple ∨ check shows that the dual fundamental reflection ri is given by ∨ ∨ ri (h) = h − hh, αiiαi .

Proposition 3.2.1

(i) Let V be an integrable g-module. Then multV λ = multV w(λ) for any λ ∈ h∗ and w ∈ W . In particular, the set of weights of V is W -invariant. (ii) The root system ∆ is W -invariant and mult α = mult w(α) for all α ∈ ∆, w ∈ W .

Proof Follows from Proposition 3.1.5.

Lemma 3.2.2 If α ∈ ∆+ and ri(α) < 0 then α = αi. In particular, ∆+ \{αi} is invariant with respect to ri.

Proof Follows from Lemma 1.4.2. If a is a locally nilpotent operator on a vector space V , and b is another operator on V such that (ad a)nb = 0 for some N, then (exp a)b(exp −a) = (exp(ad a))(b). (3.3) Indeed, using induction and (3.2), we get

k X k (ad a)k(b) = (−1)j ak−jbaj, j j=0 and so X ai X aj X 1 X k! ( )b( (−1)j ) = (−1)j(aibaj) i! j! k! i!j! i≥0 j≥0 k≥0 i+j=k X 1 = (ad a)k(b). k! k≥0 3.2 Weyl group 41 Lemma 3.2.3 Let π be an integrable representation of g in V . For i = 1, . . . , n set

π ri := (exp π(fi))(exp π(−ei))(exp π(fi)).

Then

π (i) ri (Vλ) = Vri(λ); ad (ii) ri ∈ Aut g; ad (iii) ri |h = ri.

Proof Let v ∈ Vλ. Then

π π π h(ri (v)) = ri (h(v)) = hλ, hiri (v) if hαi, hi = 0. (3.4)

Next we prove that

∨ π ∨ π αi (ri (v)) = −hλ, αi iri (v). (3.5)

This follows from

π −1 ∨ π ∨ (ri ) π(αi )ri = π(−αi ), (3.6) and, in view of (3.3), it is enough to check (3.6) holds for the of sl2. Applying (3.3) one more time, we see that it is enough to check (3.6) for the natural 2-dimensional representation of sl2. But in that representation we have

1 0 1 −1 0 −1 exp f = , exp(−e ) = , rπ = , i 1 1 i 0 1 i 1 0 which implies (3.6) easily. 0 ∨ Now, any h ∈ h can be written in the form h = h + cαi , where c is a 0 constant and hαi, h i = 0. Then using (3.4) and (3.5), we have

π 0 ∨ π π π h(ri (v)) = (hλ, h i − hλ, cαi i)ri (v) = hλ, ri(h)iri (v) = hri(λ), hiri (v), which proves (i). 0 ∨ For (iii), take h ∈ h and write it again in the form h = h + cαi ad 0 0 as above. Then it is clear that ri h = h , and we just have to prove ad ∨ ∨ that ri (αi ) = −αi . This can be done as above calculating with 2 × 2 42 Integrable representations of g and the Weyl group matrices, or, if you prefer, here is another argument.

∨ ∨ (exp ad fi)(αi ) = αi + 2fi; ∨ ∨ ∨ (exp ad (−ei))(αi + 2fi) = αi + 2ei + 2fi − 2αi − 2ei ∨ = −αi + 2fi; ∨ ∨ (exp ad fi)(−αi + 2fi) = −αi − 2fi + 2fi ∨ = −αi . (ii) follows from (3.3) applied to the adjoint representation:

ad ri [x, y] = (exp ad fi)(exp ad (−ei))(exp ad (fi))(ad x)(y)

= (exp ad fi)(exp ad (−ei))(exp ad (fi))(ad x)

×(exp ad (−fi))(exp ad ei)(exp ad (−fi))

×(exp ad fi)(exp ad (−ei))(exp ad (fi))(y) ad ad = ri (x)(ri (y)) ad ad = [ri (x), ri (y)].

Proposition 3.2.4 The bilinear form (·|·) on h∗ is W -invariant.

2 2 2 ∗ Proof Note that |ri(αi)| = | − αi| = |αi| . Now let Λ, Φ ∈ h and write Λ = cαi + λ, Φ = dαi + ϕ where (λ|αi) = (ϕ|αi) = 0, and c, d are constants. Then ri(Λ) = λ − cαi, ri(Φ) = ϕ − dαi, so

(ri(Λ)|ri(Φ) = (λ − cαi|ϕ − dαi) = (λ, ϕ) + (cαi|dαi) = (Λ|Φ).

3.3 Weyl group as a Coxeter group

Lemma 3.3.1 If αi is a simple root and ri1 . . . rit (αi) < 0 then there exists s such that 1 ≤ s ≤ t and r . . . r r = r ... r . . . r . i1 it i i1 cis it

Proof Set βk = rik+1 . . . rit (αi) for k < t and βt = αi. Then βt > 0 and β0 < 0. Hence for some s we have βs−1 < 0 and βs > 0. But

βs−1 = ris βs, so by Lemma 3.2.2, βs = αis , and we get

αis = w(αi), where w = ris+1 . . . rit . (3.7) 3.3 Weyl group as a Coxeter group 43

By Lemma 3.2.3, w =w ˜|h for somew ˜ from the subgroup of Aut g gen- ad erated by the ri . Applyingw ˜ to both sides of the equation [gαi , g−αi ] = α∨, we see that w(α∨) = α∨ . Since hw(α ), w(α∨)i = hα , α∨i = 2, C i C i C is i i i i we now conclude that w(α∨) = α∨ . (3.8) i is −1 It now follows that ris = wriw : wr w−1(λ) = w(w−1(λ) − hw−1(λ), α∨iα ) = λ − hλ, α∨ iα = r (λ). i i i is is is −1 It remains to multiply both sides of ris = wriw by ri1 . . . ris−1 on the left and by ris+1 . . . rit ri on the right.

Decomposition w = ri1 . . . ris is called reduced if s is minimal among all presentations of w as a product of simple reflections ri. Then s is called the length of w and is denoted `(w). Note that det ri = −1, so det w = (−1)`(w) (w ∈ W ). (3.9)

Lemma 3.3.2 Let w = ri1 . . . rit ∈ W be a reduced decomposition and αi be a simple root. Then

(i) `(wri) < `(w) if and only if w(αi) < 0; (ii) (Exchange Condition) If `(wri) < `(w) then there exists s such that 1 ≤ s ≤ t and

ris ris+1 . . . rit = ris+1 . . . rit ri

Proof By Lemma 3.3.1, w(αi) < 0 implies `(wri) < `(w). Now, if w(αi) > 0, then wri(αi) < 0 and it follows that `(w) = `(wriri) < `(wri), completing the proof of (i). (ii) If `(wri) < `(w) then (i) implies w(αi) < 0, and we deduce the required Exchange Condition from Lemma 3.3.1 by multiplying it with ris−1 . . . ri1 on the left and ri on the right.

Lemma 3.3.3 `(w) equals the number of roots α > 0 such that w(α) < 0.

Proof Denote

n(w) := |{α ∈ ∆+ | w(α) < 0}.

It follows from Lemma 3.2.2 that n(wri) = n(w) ± 1, whence n(w) ≤ `(w). We now apply induction on `(w) to prove that `(w) = n(w). If `(w) = 0 then w = 1 (by convention), and clearly n(w) = 0. Assume that 44 Integrable representations of g and the Weyl group

0 `(w) = t > 0, and w = ri1 . . . rit−1 rit . Denote w = ri1 . . . rit−1 . By 0 induction, n(w ) = t − 1. Let β1, . . . , βt−1 be the positive roots which 0 0 are sent to negative roots by w . By Lemma 3.3.2(i), w (αit ) > 0, whence w(αit ) < 0. It follows from Lemma 3.2.2 that rit (β1), . . . , rit (βt−1), αit are distinct positive roots which are mapped to negative roots by w, so n(w) ≥ `(w).

Lemma 3.3.4 (Deletion Condition) Let w = ri1 . . . ris . Suppose `(w) < s. Then there exist 1 ≤ j < k ≤ s such that

w = r ... r ... r . . . r . i1 cij cik is

Proof Since `(w) < s there exists 2 ≤ k ≤ s such that

`(ri1 . . . rik ) < `(ri1 . . . rik−1 ) = k − 1

. Then by Lemmas 3.3.2(i) and 3.3.1,

r . . . r = r ... r . . . r i1 ik i1 cij ik−1 for some 1 ≤ j < k.

Now for 1 ≤ i 6= j ≤ n define

mij :=  2 if aij aji = 0,
        3 if aij aji = 1,
        4 if aij aji = 2,
        6 if aij aji = 3,
        ∞ if aij aji ≥ 4.

Lemma 3.3.5 Let 1 ≤ i 6= j ≤ n. Then the order of (rirj) is mij.

Proof The subspace Rαi + Rαj is invariant with respect to ri and rj, and we can make all calculations in this 2-dimensional space. The matrices of ri and rj in the basis αi, αj are

\begin{pmatrix} −1 & −aij \\ 0 & 1 \end{pmatrix}  and  \begin{pmatrix} 1 & 0 \\ −aji & −1 \end{pmatrix},

respectively. So the matrix of ri rj is

\begin{pmatrix} −1 + aij aji & aij \\ −aji & −1 \end{pmatrix}.

The characteristic polynomial of this matrix is λ² + (2 − aij aji)λ + 1, and now the result is an elementary calculation.
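The following small computation is not part of the original text; it simply illustrates Lemma 3.3.5 numerically. It builds the 2 × 2 matrices from the proof and finds the order of ri rj for the four finite values of aij aji (the function name and the choice of test values are ad hoc).

    import numpy as np

    def order_of_rirj(a_ij, a_ji, max_order=100):
        # Matrices of r_i and r_j in the basis alpha_i, alpha_j, as in the proof.
        ri = np.array([[-1, -a_ij], [0, 1]])
        rj = np.array([[1, 0], [-a_ji, -1]])
        m = ri @ rj
        p = np.eye(2, dtype=int)
        for k in range(1, max_order + 1):
            p = p @ m
            if np.array_equal(p, np.eye(2, dtype=int)):
                return k
        return float("inf")  # infinite order, which happens when a_ij * a_ji >= 4

    # Products 0, 1, 2, 3 give orders 2, 3, 4, 6, reproducing the table for m_ij.
    for a, b in [(0, 0), (-1, -1), (-1, -2), (-1, -3)]:
        print(a * b, order_of_rirj(a, b))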

Proposition 3.3.6 W is generated by r1, . . . , rn subject only to the Coxeter relations

2 ri = 1 (1 ≤ i ≤ n), (3.10)

(ri rj)^{mij} = 1   (1 ≤ i ≠ j ≤ n),   (3.11)

where mij = ∞ means that no relation is imposed. So W is a Coxeter group.

Proof This is a general fact. All we need is Deletion Condition. We need to show that every relation

r_{i_1} · · · r_{i_s} = 1

in W is a consequence of (3.10) and (3.11). We have det ri = −1 for all i, so s = 2q. We apply induction on q. If q = 1 the relation reads r_{i_1} r_{i_2} = 1. Hence r_{i_2} = r_{i_1}^{−1} = r_{i_1}, so our relation is r_{i_1}² = 1, which is one of (3.10). For the inductive step, rewrite the given relation as follows:

ri1 . . . riq riq+1 = ri2q . . . riq+2 . (3.12)

Then `(ri1 . . . riq riq+1 ) < q + 1, so by the Deletion Condition, r . . . r r = r ... r ... r . . . r (3.13) i1 iq iq+1 i1 cij cik iq+1 for some 1 ≤ j < k ≤ q + 1. Now, unless j = 1 and k = q + 1, this is a consequence of a relation with fewer than 2q terms—for example, if j > 1, (3.13) is equivalent to

r . . . r r = r ... r ... r . . . r . i2 iq iq+1 i2 cij cik iq+1 So, by induction, (3.13) can be deduced from the defining relations. The relation r ... r ... r . . . r = r . . . r i1 cij cik iq+1 i2q iq+2 has 2q − 2 terms, so is also a consequence of the defining relations. Therefore (3.12) is a consequence of the defining relations, unless j = 1 and k = q + 1. In the exceptional case (3.13) is

ri1 . . . riq riq+1 = ri2 . . . riq , or

ri1 . . . riq = ri2 . . . riq+1 . (3.14) 46 Integrable representations of g and the Weyl group Now we write (3.12) in the alternative form

ri2 . . . ri2q ri1 = 1. (3.15) In exactly the same way this relation will be a consequence of the defining relations unless

ri2 . . . riq+1 = ri3 . . . riq+2 . (3.16) If this relation is a consequence of the defining relations then (3.12) is also a consequence of the defining relations by the above argument, and we are done. Now, (3.12) is equivalent to

ri3 ri2 ri3 . . . riq riq+1 riq+2 riq+1 . . . ri4 = 1, (3.17) and this will be a consequence of the defining relations unless

r_{i_3} r_{i_2} r_{i_3} · · · r_{i_q} = r_{i_2} r_{i_3} · · · r_{i_q} r_{i_{q+1}}.

We may therefore assume that this is true. But we must also have (3.14). So r_{i_1} = r_{i_3}. Hence the given relation will be a consequence of the defining relations unless r_{i_1} = r_{i_3}. However, equivalent forms of the given relation are also r_{i_2} · · · r_{i_{2q}} r_{i_1} = 1, r_{i_3} · · · r_{i_{2q}} r_{i_1} r_{i_2} = 1, etc. Thus the given relation will be a consequence of the defining relations unless r_{i_1} = r_{i_3} = · · · = r_{i_{2q−1}} and r_{i_2} = r_{i_4} = · · · = r_{i_{2q}}. Thus we may assume that the given relation has the form (r_{i_1} r_{i_2})^q = 1. Then m_{i_1 i_2} divides q, and the relation is a consequence of the Coxeter relation (3.11).

3.4 Geometric properties of Weyl groups ∨ Let (hR, Π, Π ) be a realization of A over R, so that ∨ ∨ (h, Π, Π ) = (C ⊗R hR, Π, Π ). ∨ Note that hR is W -invariant since Q ⊂ hR. The set

C = {h ∈ hR | ⟨αi, h⟩ ≥ 0 for i = 1, . . . , n}

is called the fundamental chamber, the sets of the form w(C) are called chambers, and their union

X := ⋃_{w ∈ W} w(C)

is called the Tits cone. There are corresponding dual objects C∨, X∨, etc. in h∗_R.

Proposition 3.4.1

(i) For h ∈ C, the group Wh := {w ∈ W | w(h) = h} is generated by the fundamental reflections contained in it. (ii) The fundamental chamber is the fundamental domain for the ac- tion of W on X, i.e. every W -orbit intersects C in exactly one point. In particular, W acts regularly on the set of chambers.

(iii) X = {h ∈ hR | hα, hi < 0 for a finite number of α ∈ ∆+}. In particular X is a convex cone. P ∨ (iv) C = {h ∈ hR | h − w(h) = i ciαi , where ci ≥ 0, for any w ∈ W }. (v) The following conditions are equivalent: (a) |W | < ∞;

(b) X = hR; (c) |∆| < ∞; (d) |∆∨| < ∞.

(vi) If h ∈ X then |Wh| < ∞ if and only if h is an interior point of X.

Proof Take w ∈ W and let w = ri1 . . . ris be a reduced decomposition. 0 Take h ∈ C and assume that h = w(h) ∈ C. We have hαis , hi ≥ 0, 0 hence hw(αis ), w(h)i = hw(αis ), h i ≥ 0. It follows from Lemma 3.3.2(i) 0 0 that w(αis ) < 0, hence hw(αis ), h i ≤ 0, and hw(αis ), h i = 0, whence hαis , hi = 0. Hence ris (h) = h. Now for the proof of (i) and (ii) it suffices to apply induction on `(w). 0 (iii) Set X := {h ∈ hR | hα, hi < 0 for a finite number of α ∈ ∆+}. Let h ∈ X0 and w ∈ W . Then hα, w(h)i = hw−1α, hi. Only finitely many positive α’s are sent to negatives by w−1, see Lemma 3.3.3. So X0 is W -invariant, and clearly C ⊂ X0. Therefore X ⊂ X0. To prove the 0 converese embedding, take h ∈ X and set Mh := {α ∈ ∆+ | hα, hi < 0}. By definition Mh is finite. If Mh 6= ∅, then some simple root

αi ∈ Mh. But then it follows from Lemma 3.2.2 that |Mri(h)| < |Mh|. Now induction on |Mh| completes the proof of (iii). (iv) ⊃ is clear. The converse embedding is proved by induction on s = `(w). For s = 0 the result is clear and for s = 1 it is equivalent to the definition of C. Let s > 1 and w = ri1 . . . ris . We have

h − w(h) = (h − ri1 . . . ris−1 (h)) + ri1 . . . ris−1 (h − ris (h)). It follows from (the dual version of) Lemma 3.3.2(i) that r . . . r (α∨ ) ∈ i1 is−1 is ∨ ∨ Q+, which implies that the second summand is in Q+. The first sum- mand is there too by inductive assumption. 48 Integrable representations of g and the Weyl group

0 (v) (a) ⇒ (b). Let h ∈ hR, and choose an element h from the (finite) orbit W ·h for which ht (h0 −h) is maximal. Then h0 ∈ C, whence h ∈ X. (b) ⇒ (c) Take h in the interior of C. Then hα, −hi < 0 for all α ∈ ∆+, and it remains to apply (iii). (c) ⇒ (a) It suffices to prove that the action of W on the roots is faithful. Assume that w(α) = α for all α ∈ ∆, and w = ri1 . . . ris be a reduced decomposition. But then w(αis ) < 0 by Lemma 3.3.2(i). (d) ⇔ (a) is similar to (c) ⇔ (a), but using dual root system.

(vi) In view of (ii) we may assume that h ∈ C. Then by (i), Wh is generated by the fundamental reflections with respect to the roots orthogonal to h. The action of Wh on h induces the action of Wh on 0 h := hR/Rh. Moreover, this induced action allows us to identify Wh with a Weyl group W 0 acting naturally on h0. By (v), this group is finite if and only if its X0 = h0

Example 3.4.2 (i) Let g = sln+1. Then ri acts on ε1, . . . , εn+1 by swapping εi and εi+1, from which it follows that W ≅ Sn+1. Introduce Λ1, . . . , Λn ∈ h∗ as the dual basis to α1∨, . . . , αn∨:

⟨Λi, αj∨⟩ = δij   (1 ≤ i, j ≤ n).

Then C∨ = R≥0Λ1 ⊕ · · · ⊕ R≥0Λn and X∨ = h∗_R.
(ii) Let g = ŝl2. Then h∗_R = Rα1 ⊕ Rδ ⊕ RΛ0 and the sum (Rα1) ⊕ (Rδ ⊕ RΛ0) is orthogonal. Moreover,

r0 : α1 7→ −α1 + 2δ, δ 7→ δ, Λ0 7→ α1 − δ + Λ0; r1 : α1 7→ −α1, δ 7→ δ, Λ0 7→ Λ0, whence

r0r1(λα1 + µδ + νΛ0) = (λ + ν)α1 + (µ − 2λ − ν)δ + νΛ0.   (3.18)

Consider the affine subspace h∗_1 = {λ ∈ h∗_R | ⟨λ, c⟩ = 1} ⊂ h∗_R, which is invariant with respect to the action of W. So W acts on h∗_1 by affine transformations. Elements of h∗_1 are of the form

λα1 + µδ + Λ0 (λ, µ ∈ R).

Moreover, it is clear that r0 and r1 act trivially on δ. So the action of 3.4 Geometric properties of Weyl groups 49

∗ ∗ W on h1 factors through to give an action of W on h1/Rδ which can be identified with Rα1. We will denote the induced affine action of w ∈ W on Rα1 viaw ¯. An easy calculation gives:

r¯1 : λα1 7→ −λα1, r¯0 : λα1 7→ −λα1 + α1, whence

r̄0r̄1(λα1) = λα1 + α1

is a 'shift' by α1. It follows that the image W̄ of W is W̄ = Z ⋊ S2. In fact the map w ↦ w̄ is injective. This follows from the fact that every element of W can be written uniquely in the form r1^ε (r0r1)^k where k ∈ Z and ε = 0 or 1. Thus

W = Z ⋊ S2.

Next,

C = {λα1 + µδ + νΛ0 | 0 ≤ λ ≤ ½ν}.

It follows from (3.18) that

(r0r1)^k C = {λα1 + µδ + νΛ0 | ν ≥ 0, kν ≤ λ ≤ (k + ½)ν},
r1(r0r1)^k C = {λα1 + µδ + νΛ0 | ν ≥ 0, −(k + ½)ν ≤ λ ≤ −kν},

whence

X = {λα1 + µδ + νΛ0 | ν ≥ 0}.

In terms of the affine action, C gets identified with the fundamental alcove

C_af = {λα1 | 0 ≤ λ ≤ ½},

which is the fundamental domain for the affine action of W on Rα1.

4 The Classification of Generalized Cartan Matrices

4.1 A trichotomy for indecomposable GCMs n Let v = (v1, . . . , vn) ∈ R . We write

v ≥ 0 if all vi ≥ 0 and

v > 0 if all vi > 0.

We consider v ∈ Rn as row or column as convenient.

Definition 4.1.1 A GCM A has finite type if the following three con- ditions hold: (i) det A 6= 0; (ii) there exists u > 0 with Au > 0; (iii) Au ≥ 0 implies u > 0 or u = 0. A GCM A has affine type if the following three conditions hold: (i) corank A = 1 (i.e. rank A = n − 1); (ii) there exists u > 0 with Au = 0; (iii) Au ≥ 0 implies Au = 0. A GCM A has indefinite type if the following two conditions hold: (i) there exists u > 0 with Au < 0; (ii) Au ≥ 0 and u ≥ 0 imply u = 0.

Remark 4.1.2 What we really have in mind is this. Let γ = u1α1 + · · · + unαn, and let u = (u1, . . . , un) ∈ Rⁿ be the corresponding column vector. Then Au is the column vector (⟨γ, α1∨⟩, . . . , ⟨γ, αn∨⟩).

Example 4.1.3 Let a, b be positive integers, and A = \begin{pmatrix} 2 & −a \\ −b & 2 \end{pmatrix}. Then A is of finite (resp. affine, resp. indefinite) type if and only if ab ≤ 3 (resp. ab = 4, resp. ab > 4).
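A quick way to see this (a verification added here, not spelled out in the text): take u = (2 + a, 2 + b)^t > 0. Then

Au = \begin{pmatrix} 2(2 + a) − a(2 + b) \\ −b(2 + a) + 2(2 + b) \end{pmatrix} = (4 − ab)\begin{pmatrix} 1 \\ 1 \end{pmatrix},

so Au > 0, Au = 0 or Au < 0 according as ab < 4, ab = 4 or ab > 4. Together with det A = 4 − ab (nonzero, zero of corank 1, or negative in the three cases), this exhibits exactly the witness vector u required in Definition 4.1.1.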

We will prove that an indecomposable GCM has exactly one of the three types above.

i n Lemma 4.1.4 Let v = (vi1, . . . , vin) ∈ R for i = 1, . . . , m. Then there exist x1, . . . , xn ∈ R with n X vijxj > 0 (i = 1, . . . , m) j=1 if and only if

1 m λ1v + ··· + λmv = 0, λ1, . . . , λm ≥ 0 implies λ1 = ··· = λm = 0.

Proof Consider the usual scalar product (x, y) = x1y1 +. . . xnyn for two n vectors x, y ∈ R . Suppose there exists a column vector x = (x1, . . . , xn) i 1 m such that (v , x) > 0 for all i. Suppose λ1v + ··· + λmv = 0 with all λi ≥ 0. Then 1 m λ1(v , x) + ··· + λm(v , x) = 0.

This implies λi = 0 for all i. 1 m Conversely, suppose λ1v + ··· + λmv = 0, λi ≥ 0 implies λi = 0 for all i. Let ( m m ) X i X S := λiv | λi ≥ 0, λi = 1 . i=1 i=1

p 2 2 Define f : S → R by f(y) = ||y|| := y1 + ··· + yn. Then S is a compact subset of Rn and f is a continuous function. Thus f(S) is a compact subset of R. Hence there exists x ∈ S with ||x|| ≤ ||x0|| for all x0 ∈ S. Clearly x 6= 0 since 0 6∈ S by assumption. We will show (vi, x) > 0 for all i as required. In fact we will show more, namely, that (y, x) > 0 for all y ∈ S. Now S is a convex subset of Rn. So for y 6= x we have ty +(1−t)x ∈ S for all 0 ≤ t ≤ 1. By the choice of x,

(ty + (1 − t)x, ty + (1 − t)x) ≥ (x, x),

or

t(y − x, y − x) + 2(y − x, x) ≥ 0.

As t can be made arbitrarily small, this implies (y − x, x) ≥ 0, or (y, x) ≥ (x, x) > 0.

Proposition 4.1.5 Let C be an m × n matrix over R. Suppose u ≥ 0 and Ctu ≥ 0 imply u = 0. Then there exists v > 0 with Cv < 0.

Proof Let C = (cij) and consider the following system of inequalities: n X − cijxj > 0 (i = 1, . . . , m), j=1

xj > 0 (j = 1, . . . , n). We want to use Lemma 4.1.4 to show that this system has a solution. Thus we consider an equation of the form m n X X λi(−ci1,..., −cin) + µjεj = 0, i=1 j=1 n where λi, µj ≥ 0 and εj is the jth coordinate vector in R . Then m X λicij = µj (j = 1, . . . , n). i=1 t Let u = (λ1, . . . , λm). Then C u = (µ1, . . . , µn). Thus we have u ≥ 0 t t and C u ≥ 0. This implies u = 0 and C u = 0. Thus all λi and µj are zero. Hence Lemma 4.1.4 shows that the above inequalities have a solution. Thus there exists v > 0 with Cv < 0. We now consider three classes of GCM A. Let

SF = {A | A has finite type}

SA = {A | A has affine type}

SI = {A | A has indefinite type}.

It is easy to see that no GCM can lie in more than one of these classes. We want to show that each indecomposable GCM lies in one of the three classes.

Lemma 4.1.6 Let A be an indecomposable GCM. Then u ≥ 0 and Au ≥ 0 imply that u > 0 or u = 0.

Proof Suppose u ≥ 0, u ≠ 0 and u ≯ 0. Then we can reorder 1, . . . , n so that u1 = · · · = us = 0 and u_{s+1}, . . . , un > 0. Let A = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}, where P is s × s and S is (n − s) × (n − s). Now all entries of the block Q are ≤ 0 since A is a GCM, and if Q had a negative entry, then Au would have a negative coordinate, giving a contradiction. Thus Q = 0, whence R = 0 by the definition of a GCM. Now A is decomposable, a contradiction.

Now let A be an indecomposable GCM and define

KA = {u | Au ≥ 0}.

KA is a convex cone. We consider its intersection with the convex cone {u | u ≥ 0}. We will distinguish between two cases:

{u | u ≥ 0, Au ≥ 0}= 6 {0}, {u | u ≥ 0, Au ≥ 0} = {0}.

The first of these cases splits into two subcases, as is shown by the next lemma.

Lemma 4.1.7 Suppose {u | u ≥ 0, Au ≥ 0}= 6 {0}. Then just one of the following cases occurs:

KA ⊂ {u | u > 0} ∪ {0}, n KA = {u | Au = 0} and KA is a 1-dimensional subspace of R .

Proof We know there exists u 6= 0 with u ≥ 0 and Au ≥ 0. By Lemma 4.1.6, u > 0. Suppose the first case does not hold. Then there is v 6= 0 with Av ≥ 0 such that some coordinate of v is ≤ 0. If v ≥ 0 then v > 0 by Lemma 4.1.6, so some coordinate of v is negative. We have Au ≥ 0 and Av ≥ 0, hence A(tu + (1 − t)v) ≥ 0 for all 0 ≤ t ≤ 1. Since all coordinates of u are positive and some coordinate of v is negative, there exists 0 < t < 1 with tu + (1 − t)v ≥ 0 and some coordinate of tu + (1 − t)v is zero. But then tu + (1 − t)v = 0 by Lemma 4.1.6. Thus v is a scalar multiple of u. We also have

0 = A(tu + (1 − t)v) = tAu + (1 − t)Av.

Since Au ≥ 0 and Av ≥ 0 this implies Av = Au = 0.

Now let w ∈ KA. Then Aw ≥ 0. Either w ≥ 0 or some coordinate of w is negative. If w ≥ 0 then w > 0 or w = 0 by Lemma 4.1.6. Suppose w > 0. Then by the above argument with u replaced by w, v 54 The Classification of Generalized Cartan Matrices is a scalar multiple of w, hence w is a scalar multiple of u. Now suppose some coordinate of w is negative. Then by the above argument with v replaced by w, w is a scalar multiple of u. Thus in all cases w is a scalar multiple of u. Hence KA = Ru = {u | Au = 0}. Finally, both cases cannot hold simultaneously since in the first case KA cannot contain a 1-dimensional subspace.

We can now identify the first case in the lemma above with the case of matrices of finite type.

Proposition 4.1.8 Let A be an indecomposable GCM. Then the follow- ing conditions are equivalent:

(i) A has finite type;

(ii) {u | u ≥ 0, Au ≥ 0}= 6 {0} and KA ⊂ {u | u > 0} ∪ {0}.

Proof (i) ⇒ (ii) Suppose A is of finite type. Then there exists u > 0 with Au > 0. Hence {u | u ≥ 0, Au ≥ 0} ≠ {0}. Also, det A ≠ 0, so {u | Au = 0} is not a 1-dimensional subspace. Hence (ii) holds by Lemma 4.1.7.
(ii) ⇒ (i) There cannot exist u ≠ 0 with Au = 0, for this would give a 1-dimensional subspace in KA. Thus det A ≠ 0. Now there exists u ≠ 0 with u ≥ 0 and Au ≥ 0. By Lemma 4.1.6, u > 0. If Au > 0, A has finite type. So suppose to the contrary that some coordinate of Au is zero. Choose the numbering of 1, . . . , n so that the first s coordinates of Au are 0 and the last n − s are positive. Let A = \begin{pmatrix} P & Q \\ R & S \end{pmatrix} where P is s × s and S is (n − s) × (n − s). The block Q ≠ 0, since A is indecomposable. We choose the numbering so that the first row of Q is not the zero vector. Then

A \begin{pmatrix} u1 \\ u2 \end{pmatrix} = \begin{pmatrix} P u1 + Q u2 \\ R u1 + S u2 \end{pmatrix},

and P u1 + Q u2 = 0 and R u1 + S u2 > 0. We also have u1, u2 > 0. Thus Q u2 ≤ 0 since the entries of Q are non-positive, and the first coordinate of Q u2 is negative. Hence P u1 ≥ 0 and the first coordinate of P u1 is positive. Since R u1 + S u2 > 0 we can choose ε > 0 such that R(1 + ε)u1 + S u2 > 0. We now consider, instead of our original vector u = \begin{pmatrix} u1 \\ u2 \end{pmatrix}, the vector \begin{pmatrix} (1 + ε)u1 \\ u2 \end{pmatrix} > 0. We have

A \begin{pmatrix} (1 + ε)u1 \\ u2 \end{pmatrix} = \begin{pmatrix} P u1 + Q u2 + εP u1 \\ R u1 + S u2 + εR u1 \end{pmatrix} = \begin{pmatrix} εP u1 \\ R(1 + ε)u1 + S u2 \end{pmatrix}.

The first coordinate and the last n − s coordinates of this vector are positive and the remaining coordinates are ≥ 0. Thus A \begin{pmatrix} (1 + ε)u1 \\ u2 \end{pmatrix} ≥ 0 and the number of non-zero coordinates in this vector is greater than that in Au. We may now iterate this process, obtaining at each stage at least one more non-zero coordinate than we had before. We eventually obtain a vector v > 0 such that Av > 0.

We next identify the second case in Lemma 4.1.7 with that of an affine GCM.

Proposition 4.1.9 Let A be an indecomposable GCM. Then the follow- ing conditions are equivalent: (i) A has affine type;

(ii) {u | u ≥ 0, Au ≥ 0} 6= {0}, KA = {u | Au = 0}, and KA is a 1-dimensional subspace of Rn.

Proof (i) ⇒ (ii) Suppose A is of affine type. Then there exists u > 0 with Au = 0. It follows that {u | u ≥ 0, Au ≥ 0}= 6 {0}. Also λu ∈ KA for all λ ∈ R. It follows from Lemma 4.1.7 that we are in the second case of that lemma. (ii) ⇒ (i) Note first that corank A = 1. Also there exists u 6= 0 with u ≥ 0 and Au ≥ 0. By Lemma 4.1.6, u > 0. So there exists u > 0 with Au ≥ 0. But KA = {u | Au = 0}, so Au = 0. Finally, Au ≥ 0 implies Au = 0.

Proposition 4.1.10 Let A be an indecomposable GCM. Then (i) A has finite type if and only if At has finite type; (ii) A has affine type if and only if At has affine type.

Proof Let A be of finite type. There does not exist v > 0 with Av < 0 (Av < 0 ⇒ A(−v) > 0 ⇒ (−v) > 0 ⇒ v < 0). So by Proposi- tion 4.1.5, there exists u 6= 0 with u ≥ 0 and Atu ≥ 0. So

{u | u ≥ 0,Atu ≥ 0}= 6 {0}. 56 The Classification of Generalized Cartan Matrices By Lemma 4.1.7, either

KAt ⊂ {u | u > 0} ∪ {0}

t or KAt = {u | A u = 0} and this is a 1-dimensional subspace. Now det A 6= 0, so det At 6= 0. Thus the latter case cannot occur. The former case must therefore occur, so by Proposition 4.1.8, At is of finite type. Let A be of affine type. Again, there does not exist v > 0 with Av < 0 (Av < 0 ⇒ A(−v) > 0, which is impossible in the affine case). So by Proposition 4.1.5, there exists u 6= 0 with u ≥ 0 and Atu ≥ 0. So {u | u ≥ 0,Atu ≥ 0}= 6 {0}. By Lemma 4.1.7, either

KAt ⊂ {u | u > 0} ∪ {0}

t or KAt = {u | A u = 0} and this is a 1-dimensional subspace. Now corank A = 1 so corank At = 1. This shows that we cannot have the first possibility. Thus the second possibility holds, and by Proposition 4.1.9, we see that At has affine type. We may now identify the case not appearing in Lemma 4.1.7.

Proposition 4.1.11 Let A be an indecomposable GCM. Then the fol- lowing conditions are equivalent: (i) A has indefinite type; (ii) {u | u ≥ 0, Au ≥ 0} = {0}.

Proof If A has indefinite type then u ≥ 0 and Au ≥ 0 imply u = 0. Conversely, suppose {u | u ≥ 0, Au ≥ 0} = {0}. Then the same condition holds for At, i.e. {u | u ≥ 0,Atu ≥ 0} = {0}. Indeed this follows from Lemma 4.1.7 and Propositions 4.1.8, 4.1.9, 4.1.10. But then Proposition 4.1.5 implies that there exists v > 0 with Av < 0. Thus A has indefinite type.

Theorem 4.1.12 (Trichotomy Theorem) Let A be an indecompos- able GCM. Then exactly one of the following three possibilities holds: A has finite type, A has affine type, or A has indefinite type. Moreover, the type of A is the same as the type of At. Finally, (i) A has finite type if and only if there exists u > 0 with Au > 0. (ii) A has affine type if and only if there exists u > 0 with Au = 0. This u is unique up to a (positive) scalar. 4.1 A trichotomy for indecomposable GCMs 57 (iii) A has indefinite type if and only if there exists u > 0 with Au < 0.

Proof The first two statements have already been proved. We prove the third statement. Let u > 0.
(i) Assume that Au > 0. A cannot have affine type, as then Au ≥ 0 would imply Au = 0. A cannot have indefinite type, as then u ≥ 0 and Au ≥ 0 would imply u = 0. Thus A has finite type. The converse is clear.
(ii) Assume that Au = 0. A cannot have finite type, as then we would have det A = 0. A cannot have indefinite type, as then u ≥ 0 and Au ≥ 0 would imply u = 0. Thus A has affine type. The converse is clear, and the remaining statement follows from Proposition 4.1.9.
(iii) Assume that Au < 0. Then A(−u) > 0. A cannot have finite type, as this would imply −u > 0 or −u = 0. A cannot have affine type, as A(−u) ≥ 0 would then imply A(−u) = 0. Thus A has indefinite type. The converse is clear.

Lemma 4.1.13 Let A be an indecomposable GCM.

(i) If A is of finite type then every principal minor AJ is also of finite type.

(ii) If A is of affine type then every proper principal minor AJ is of finite type.

Proof By passing to an equivalent GCM we may assume that J = {1, . . . , m} for some m ≤ n. Let K = {m + 1, . . . , n}. Write A = \begin{pmatrix} A_J & Q \\ R & S \end{pmatrix}.

(i) We have Au > 0 for some u = \begin{pmatrix} u_J \\ u_K \end{pmatrix} > 0. We have

Au = \begin{pmatrix} A_J u_J + Q u_K \\ R u_J + S u_K \end{pmatrix}.

We have A_J u_J + Q u_K > 0. But Q u_K ≤ 0, so A_J u_J > 0.
(ii) As in (i) we get A_J u_J + Q u_K = 0, and Q u_K ≤ 0 implies A_J u_J ≥ 0. Suppose, if possible, A_J u_J = 0. Then Q u_K = 0, and since u_K > 0 this implies that Q = 0, which contradicts the assumption that A is indecomposable. Hence we have u_J > 0, A_J u_J ≥ 0, A_J u_J ≠ 0. This implies that A_J cannot have affine or indefinite type.

Remark 4.1.14 In proving the results of this section we have never used the full force of the assumption that A is a GCM. Namely, we nowhere needed that aii = 2 and aij ∈ Z.

4.2 Indecomposable symmetrizable GCMs

Proposition 4.2.1 Suppose A is a symmetric indecomposable GCM. Then:

(i) A has finite type if and only if A is positive definite. (ii) A has affine type if and only if A is positive semidefinite of corank 1. (iii) A has indefinite type otherwise.

Proof (i) Let A be of finite type. Then there exists u > 0 with Au > 0. Hence for all λ > 0 we have (A + λI)u > 0. Thus A + λI has finite type by Trichotomy Theorem. (Note that A + λI need not be GCM, but see Remark 4.1.14.) Thus det(A + λI) 6= 0 when λ ≥ 0, that is det(A − λI) 6= 0 when λ ≤ 0. Now the eigenvalues of the real symmetric matrix A are all real. Thus all the eigenvalues of A must be positive. Conversely, suppose A is positive definite. Then det A 6= 0, so A has finite or indefinite type. If A has indefinite type there exists u > 0 with Au < 0. But then utAu < 0, contradicting the fact that A is positive definite. Thus A must have finite type. (ii) Let A have affine type. Then there is u > 0 with Au = 0. The same argument as in (i) shows that all eigenvalues of A are non-negative. But A has corank 1, so 0 appears with multiplicity 1. Conversely, suppose A is positive semidefinite of corank 1. Then det A = 0 so A cannot have finite type. Suppose A has indefinite type. Then there exists u > 0 with Au < 0. Thus utAu < 0, which contradicts the fact that A is positive semidefinite. (iii) follows from (i) and (ii).
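For example (an illustration added here, not in the original), the symmetric GCM \begin{pmatrix} 2 & −1 \\ −1 & 2 \end{pmatrix} has eigenvalues 1 and 3, so it is positive definite and hence of finite type, while \begin{pmatrix} 2 & −2 \\ −2 & 2 \end{pmatrix} has eigenvalues 0 and 4, so it is positive semidefinite of corank 1 and hence of affine type; this is the case a = b = 2 of Example 4.1.3.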

Lemma 4.2.2 Let A be an indecomposable GCM of finite or affine type.

Suppose that a_{i_1 i_2} a_{i_2 i_3} · · · a_{i_{k−1} i_k} a_{i_k i_1} ≠ 0 for some indices i_1, . . . , i_k with k ≥ 3 such that i_1 ≠ i_2, i_2 ≠ i_3, . . . , i_{k−1} ≠ i_k, i_k ≠ i_1. Then A is of the form

 2 −1 0 0 ... 0 0 −1 −1 2 −1 0 ... 0 0 0       0 −1 2 −1 ... 0 0 0   .  . (4.1)  .   .     0 0 0 0 ... −1 2 −1 −1 0 0 0 ... 0 −1 2

Proof Choose integers i1, . . . , ik as in the assumption with minimal possible k. We thus have

a_{i_r i_s} ≠ 0 if (r, s) ∈ {(1, 2), (2, 3), . . . , (k, 1), (2, 1), (3, 2), . . . , (1, k)}.

The minimality of k implies that air ,is = 0 if (r, s) does not lie in the above set.

Let J = {i1, . . . , ik}. Then the principal minor AJ of A has form

A_J = \begin{pmatrix}
2 & −r_1 & 0 & \cdots & 0 & −s_k \\
−s_1 & 2 & −r_2 & \cdots & 0 & 0 \\
0 & −s_2 & 2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 2 & −r_{k−1} \\
−r_k & 0 & 0 & \cdots & −s_{k−1} & 2
\end{pmatrix}   (4.2)

with positive integers ri, si. In particular we see that AJ is indecom- posable. Now AJ must be finite or affine type by Lemma 4.1.13. Thus there exists u = (u1, . . . , uk) > 0 with AJ u ≥ 0. We define the k × k matrix

M := diag(u_1^{−1}, . . . , u_k^{−1}) A_J diag(u_1, . . . , u_k).

Then m_{ij} = u_i^{−1} (A_J)_{ij} u_j. Thus

Σ_j m_{ij} = u_i^{−1} Σ_j (A_J)_{ij} u_j ≥ 0.

In particular, Σ_{ij} m_{ij} ≥ 0. Now we have

 0 0  2 −r1 0 0 ... 0 0 −sk −s0 2 −r0 0 ... 0 0 0   1 2   0 0   0 −s2 2 −r3 ... 0 0 0  M =  .  ,  .   .   0 0   0 0 0 0 ... −sk−2 2 −rk−1 0 0 −rk 0 0 0 ... 0 −sk−1 2

where r′_i = u_i^{−1} r_i u_{i+1}, s′_i = u_{i+1}^{−1} s_i u_i, and u_{k+1} is interpreted as u_1. We note that r′_i, s′_i > 0 and r′_i s′_i = r_i s_i ∈ Z. We also have

Σ_{ij} m_{ij} = 2k − (r′_1 + s′_1) − · · · − (r′_k + s′_k).

Now (r′_i + s′_i)/2 ≥ √(r′_i s′_i) = √(r_i s_i) ≥ 1, hence r′_i + s′_i ≥ 2. Since Σ_{ij} m_{ij} ≥ 0, we deduce that r′_i + s′_i = 2 and r′_i s′_i = 1. Hence r_i s_i = 1, and since r_i, s_i are positive integers, we deduce that r_i = s_i = 1, i.e. A_J is of the form (4.1).

Let v = (1, . . . , 1). Then v > 0 and A_J v = 0. Thus A_J is of affine type by Theorem 4.1.12. Lemma 4.1.13 shows that this can only happen when A_J = A.

Theorem 4.2.3 An indecomposable GCM of finite or affine type is symmetrizable.

Proof If there is a set of integers i1, . . . , ik as in Lemma 4.2.2, then we know that A is of the form (4.1), in particular it is symmetric. Otherwise A is symmetrizable by Lemma 2.1.1.

Theorem 4.2.4 Let A be an indecomposable GCM. Then: (i) A has finite type if and only if all its principal minors have posi- tive determinant. (ii) A has affine type if and only if det A = 0 and all proper principal minors have positive determinant. (iii) A has indefinite type if and only if neither of the above conditions holds.

Proof (i) Suppose A has finite type. Then A is symmetrizable by Theorem 4.2.3, hence A = DB where D = diag(d1, . . . , dn) with di > 0 and B symmetric, see Lemma 2.1.2. Theorem 4.1.12 shows that A and B have the same type. By Lemma 4.1.13 all principal minors of B have finite type, hence by Proposition 4.2.1 they all have positive determinant. Then the same is true for A.
Conversely, let all principal minors of A have positive determinant. Suppose there is a set of integers i1, . . . , ik with k ≥ 3 such that i1 ≠ i2, i2 ≠ i3, . . . , i_{k−1} ≠ i_k, i_k ≠ i_1 and a_{i_1 i_2} a_{i_2 i_3} · · · a_{i_{k−1} i_k} a_{i_k i_1} ≠ 0. As in the proof of the previous theorem, A_J has the form (4.2). Analyzing 2 × 2 and 3 × 3 principal subminors we conclude that A_J is of the form (4.1). But then det A_J = 0, giving a contradiction. Thus there is no such sequence i1, . . . , ik and so A is symmetrizable by Lemma 2.1.1. Hence A = DB where D = diag(d1, . . . , dn) with di > 0 and B symmetric of the same type as A. Now, it follows from the assumption that all principal minors of B have positive determinant, so B is of finite type.
(ii) If A has affine type, then det A = 0 and all proper principal minors have finite type, so they have positive determinants by (i). Conversely, suppose det A = 0 and all proper principal minors have positive determinants. As above, we have two cases:

(a) there is a principal minor of the form (4.1). Since det AJ = 0 we must have A = AJ , which is affine type. (b) A is symmetrizable, in which case we reduce to the symmetric case as above.
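The determinant criterion of Theorem 4.2.4 is easy to turn into a mechanical test. The following sketch is not from the text (the function name and numerical tolerances are ad hoc); it assumes the input is an indecomposable GCM given as an integer matrix and classifies it by checking principal minors.

    import numpy as np
    from itertools import combinations

    def gcm_type(A):
        """Classify an indecomposable GCM via Theorem 4.2.4."""
        A = np.array(A, dtype=float)
        n = A.shape[0]

        def minors_positive(proper_only):
            sizes = range(1, n) if proper_only else range(1, n + 1)
            for k in sizes:
                for J in combinations(range(n), k):
                    if np.linalg.det(A[np.ix_(J, J)]) <= 1e-9:
                        return False
            return True

        if minors_positive(proper_only=False):
            return "finite"
        if abs(np.linalg.det(A)) < 1e-9 and minors_positive(proper_only=True):
            return "affine"
        return "indefinite"

    print(gcm_type([[2, -1], [-1, 2]]))   # A_2: finite
    print(gcm_type([[2, -2], [-2, 2]]))   # A_1^(1): affine
    print(gcm_type([[2, -3], [-2, 2]]))   # ab = 6 > 4: indefinite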

4.3 The classification of finite and affine GCMs To every GCM A we associate the graph S(A), called the Dynkin dia- gram of A, as follows. The vertices of the Dynkin diagram are labelled by 1, . . . , n (or the corresponding simple roots α1, . . . , αn). Let i, j be distinct vertices of S(A). The rules are as follows:

(a) If aijaji = 0, vertices i, j are not joined.

(b) If aij = aji = −1, vertices i, j are joined by a single edge.

(c) If aij = −1, aji = −2, vertices i, j are joined by a double edge with an arrow pointing towards j.

(d) If aij = −1, aji = −3, vertices i, j are joined by a triple edge with an arrow pointing towards j.

(e) If aij = −1, aji = −4, vertices i, j are joined by a quadruple edge with an arrow pointing towards j.

(f) If aij = −2, aji = −2, vertices i, j are joined by a double edge carrying two arrows, one in each direction.

(g) If aij aji ≥ 5, vertices i, j are joined by a bold edge labelled with the ordered pair |aij|, |aji|.

It is clear that the GCM is determined by its Dynkin diagram. Moreover, A is indecomposable if and only if S(A) is connected.

Theorem 4.3.1 Let A be an indecomposable GCM. Then:
(i) A is of finite type if and only if its Dynkin diagram belongs to Figure 4.1. Numbers on the right give det A.
(ii) A is of affine type if and only if its Dynkin diagram belongs to Figures 4.2 and 4.3. All diagrams there have ℓ + 1 vertices. Numeric marks are the coordinates of the unique vector δ = (a0, a1, . . . , aℓ) such that Aδ = 0 and the ai are positive mutually prime integers. Each diagram X_ℓ^(1) in Figure 4.2 is obtained from the diagram X_ℓ in Figure 4.1 by adding a vertex labeled α0 and preserving the labeling of the other vertices.

Proof We first prove that the numeric marks in the diagrams from Figures 4.2 and 4.3 are the coordinates of the unique vector δ = (a0, a1, . . . , aℓ) such that Aδ = 0 and the ai are positive mutually prime integers. Note that Aδ = 0 is equivalent to

2ai = Σ_j mj aj   for all i,

where the sum is over all j which are linked with i; moreover, if the number of edges between i and j is equal to s > 1 and the arrow points to i, then mj = s, otherwise mj = 1. Now check that the marks work in all cases. Now from Theorem 4.1.12 we conclude that all diagrams from Figures 4.2 and 4.3 are affine and δ is unique. Since all diagrams from Figure 4.1 are proper subdiagrams of diagrams from Figures 4.2 and 4.3, Theorem 4.2.4 implies that they are of finite type. It remains to show that if A is of finite (resp. affine) type then S(A) appears in Figure 4.1 (resp. Figures 4.2 and 4.3). We establish this by induction on n. The case n = 1 is clear. Also, using the condition det A ≥ 0 and Theorem 4.2.4, we obtain:

finite diagrams of rank 2 are A2, C2, G2;   (4.3)
affine diagrams of rank 2 are A_1^(1), A_2^(2);   (4.4)

finite diagrams of rank 3 are A3, B3, C3;   (4.5)
affine diagrams of rank 3 are A_2^(1), C_2^(1), G_2^(1), D_3^(2), A_4^(2), D_4^(3).   (4.6)

Next, from Lemma 4.2.2, we have

if S(A) contains a cycle, then S(A) = A_ℓ^(1).   (4.7)

Moreover, by induction and Lemma 4.1.13,

any proper subdiagram of S(A) appears in Figure 4.1.   (4.8)

Now let S(A) be a finite diagram. Then it does not have graphs appearing in Figures 4.2 and 4.3 as subgraphs and does not have cycles. This implies that every branch vertex has type D4, since otherwise we would get an affine subdiagram or a contradiction with (4.8). Using (4.8) again we see that there is at most one branch vertex, in which case it also follows that S(A) is D_ℓ, E6, E7, or E8. Similarly one checks that if S(A) has multiple edges then it must be B_ℓ, C_ℓ, F4, or G2. Finally, a graph without branch vertices, cycles and multiple edges must be A_ℓ.
Let S(A) be affine. In view of (4.7) we may assume that S(A) has no cycles. In view of (4.8), S(A) is obtained from a diagram in Figure 4.1 by adjoining one vertex in such a way that every proper subdiagram is again in Figure 4.1. It is easy to see that in this way we can only get diagrams from Figures 4.2 and 4.3.
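As a numerical illustration of the marks δ (again an addition, not part of the original argument, using numpy and ad hoc names), the following sketch computes the unique positive, mutually prime integer vector in the kernel of a matrix assumed to be an indecomposable GCM of affine type.

    import numpy as np
    from math import gcd
    from functools import reduce

    def affine_marks(A):
        """Positive, mutually prime integer vector delta with A delta = 0 (corank 1 assumed)."""
        A = np.array(A, dtype=float)
        _, _, vt = np.linalg.svd(A)
        v = vt[-1]                                   # null vector of A, up to sign
        v = v / np.min(np.abs(v[np.abs(v) > 1e-9]))  # scale smallest entry to 1
        if v.sum() < 0:
            v = -v
        marks = np.rint(v).astype(int)
        return marks // reduce(gcd, marks)

    print(affine_marks([[2, -2], [-2, 2]]))   # marks (1, 1)
    print(affine_marks([[2, -1], [-4, 2]]))   # 2x2 GCM with off-diagonal entries -1, -4: marks (1, 2)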

Proposition 4.3.2 Let A be an indecomposable GCM. Then the follow- ing conditions are equivalent: (i) A is of finite type.

(ii) A is symmetrizable and the (·|·) on hR is positive definite. (iii) |W | < ∞. (iv) |∆| < ∞. (v) g(A) is a finite dimensional simple Lie algebra.

(vi) There exists α ∈ ∆+ such that α + αi 6∈ ∆ for all i = 1, . . . , n.

Proof (i) ⇒ (ii) follows from Theorems 4.2.3 and 4.2.4.
(ii) ⇒ (iii). In view of Proposition 3.2.4, W is a subgroup of the orthogonal group G := O((·|·)), which is known to be compact. If we can check that W is a discrete subgroup, it will follow from general theory that W is finite. To see that W is discrete it suffices to find an open neighborhood U of the identity e in G with U ∩ W = {e}. Consider the action of G on hR and fix an element h in the interior C of the fundamental chamber. We get a continuous map ϕ : G → G · h. Take U := ϕ^{−1}(C).
(iii) ⇒ (iv) follows from Proposition 3.4.1(v).
(iv) ⇒ (vi) is obvious.
(vi) ⇒ (i). Let α ∈ ∆+ be such that α + αi ∉ ∆ for all i. By Proposition 3.1.5(v), ⟨α, αi∨⟩ ≥ 0 for all i. Write α = u1α1 + · · · + unαn with non-negative coefficients ui. Then u = (u1, . . . , un) ≥ 0, u ≠ 0, and Au ≥ 0. By the Trichotomy Theorem, A is of finite or affine type, and in the latter case we have ⟨α, αi∨⟩ = 0 for all i. But then α ≠ αi, and so α − αi ∈ ∆+ for some i by Lemma 1.4.5, hence α − αi + 2αi = α + αi ∈ ∆+ in view of Proposition 3.1.5(vi), giving a contradiction.
Finally, (i) ⇒ (v) follows from Proposition 1.4.8 and (v) ⇒ (iv) is obvious.

Fig. 4.1. Dynkin diagrams of finite GCMs. (Diagrams not reproduced here; the types and the numbers giving det A are: A_ℓ: ℓ + 1; B_ℓ: 2; C_ℓ: 2; D_ℓ: 4; E6: 3; E7: 2; E8: 1; F4: 1; G2: 1.)

Fig. 4.2. Dynkin diagrams of untwisted affine GCMs. (Diagrams not reproduced here; the types are A_1^(1), A_ℓ^(1) (ℓ ≥ 2), B_ℓ^(1) (ℓ ≥ 3), C_ℓ^(1) (ℓ ≥ 2), D_ℓ^(1) (ℓ ≥ 4), E_6^(1), E_7^(1), E_8^(1), F_4^(1), G_2^(1), each vertex carrying its mark a_i.)

Fig. 4.3. Dynkin diagrams of twisted affine GCMs. (Diagrams not reproduced here; the types are A_2^(2), A_{2ℓ}^(2) (ℓ ≥ 2), A_{2ℓ−1}^(2) (ℓ ≥ 3), D_{ℓ+1}^(2) (ℓ ≥ 2), E_6^(2), D_4^(3), each vertex carrying its mark a_i.)

5 Real and Imaginary Roots

5.1 Real roots A root α ∈ ∆ is called real if there exists w ∈ W such that w(α) is a re re simple root. Denote by ∆ and ∆+ the sets of the real and positive real roots respectively. If A is of finite type, then induction on height shows that every root is real. re Let α ∈ ∆ . Then α = w(αi) for some w and some i. Define the dual real root α∨ ∈ (∆∨)re by setting

α∨ = w(αi∨).

This definition is independent of the choice of the presentation α = w(αi). Indeed, we have to show that the equality u(αi) = αj implies u(αi∨) = αj∨, but this has been proved in Lemma 3.3.1, see (3.8). Thus we have a canonical W-equivariant bijection ∆re → (∆∨)re. For α ∈ ∆re, define the reflection

rα : h∗ → h∗,   λ ↦ λ − ⟨λ, α∨⟩α.

∨ −1 Since hα, α i = 2, it is indeed a reflection. If α = w(αi), then wriw = rα, so we have rα ∈ W .

Proposition 5.1.1 Let α ∈ ∆re. Then:
(i) mult α = 1;
(ii) kα is a root if and only if k = ±1;
(iii) if β ∈ ∆, then there exist non-negative integers p, q with p − q = ⟨β, α∨⟩ such that β + kα ∈ ∆ ∪ {0} if and only if −p ≤ k ≤ q, k ∈ Z;
(iv) suppose that A is symmetrizable and let (·|·) be the standard invariant bilinear form on g. Then

(a) (α|α) > 0;
(b) α∨ = 2ν^{−1}(α)/(α|α);
(c) if α = Σi ki αi, then ki (αi|αi)/(α|α) ∈ Z for all i;
(v) if ±α ∉ Π, then there exists i such that

| ht ri(α)| < | ht α|. (vi) if α > 0 then α∨ > 0.

Proof The proposition is true if α is a simple root, see (2.8), (2.10), and Proposition 3.1.5. Now (i)-(iii) follow from Proposition 3.2.1(ii), and (iv)(a),(b) from Proposition 3.2.4. ∨ P ∨ (iv)(c) follows from the fact that α ∈ i Zαi and the formula

X (αi|αi) α∨ = k α∨, (5.1) (α|α) i i i which in turn follows from (iv)(b). (v) Assume the statement does not hold. We may assume that α > 0. Then −α ∈ C∨, and by Proposition 3.4.1(iv) applied to dual root system, −α + w(α) ≥ 0 for any w ∈ W . Taking w such that w(α) ∈ Π we get a contradiction. (vi) Apply induction on ht α. For ht α > 1 we have by (v) that ∨ ∨ ht riα < ht α, for some i, and riα > 0. By induction, ri(α ) = (riα) > 0, whence α∨ > 0.

Lemma 5.1.2 Assume that A is symmetrizable. Then the set of all P α = i kiαi ∈ Q such that

ki(αi|αi) ∈ (α|α)Z for all i (5.2) is W -invariant.

Proof It suffices to check that riα again satisfies (5.2), i.e.

∨ (ki − hα|αi i)(αi|αi) ∈ (α|α)Z, or

2(α|αi) ∈ (α|α)Z, which follows from (5.2):

X 2(αj|αi) X 2(α|α ) = k (α |α ) = a k (α |α ) ∈ (α|α) . i (α |α ) j j j ji j j j Z j j j j 70 Real and Imaginary Roots

Let A be an indecomposable symmetrizable and (·|·) be a standard invariant bilinear form. Then for a real root α we have (α|α) = (αi|αi), where αi is one of the simple roots. We call α a short (resp. long) root if (α|α) = mini(αi|αi) (resp. (α|α) = maxi(αi|αi)). This definition is independent of the choice of the standard form since α is a linear combination of simple roots. Note that if A is symmetric then all simple roots are of the same length (so they are both short and long). If A is not symmetric and S(A) has m arrows directed in the same direction then A has m+1 different lengths, as the arrow is directed from a longer to a shorter root. Hence if A is not symmetric in Figure 4.1 then every root is either long or short. (2) Moreover, if A is not symmetric and affine and its type is not A2` for ` > 1, then every real root is either short or long. In the exceptional case there are three root lengths for real roots. We use notation

∆re_s, ∆re_l, ∆re_i

to denote the sets of all short, long, and intermediate real roots, respectively. Note that α is a short real root for g(A) if and only if α∨ is a long real root for g(At). Indeed, by Proposition 5.1.1(iv)(b),

(α∨|α∨) = (2ν^{−1}(α)/(α|α) | 2ν^{−1}(α)/(α|α)) = 4/(α|α).   (5.3)

Throughout this chapter: we normalize the form so that (αi|αi) are mutually prime positive integers for each connected component of S(A). In particular, if A is symmetric then (αi|αi) = 1 for all i.

5.2 Real roots for finite and affine types Throughout this section we assume that A is finite or affine type. P P If α = i kiαi ∈ Q then (α|α) = i,j kikj(αi|αj). Now, (αi|αj) ∈ Q 1 for all i, j. Thus there exists a positive d such that (αi|αj) ∈ d Z 1 for all i, j. Thus if (α|α) > 0 then (α|α) ≥ d . Hence there exists m > 0 such that m = min{(α|α)|α ∈ Q and (α|α) > 0}.

P Lemma 5.2.1 Let α = i kiαi ∈ Q.

(i) If (α|α) = m then ±α ∈ Q+.
(ii) If ki(αi|αi) ∈ (α|α)Z for all i then ±α ∈ Q+.

Proof If ±α 6∈ Q+, then α = β−γ for β, γ ∈ Q+ and supp β∩supp γ = ∅. Hence (β|γ) ≤ 0 and

(α|α) = (β|β) + (γ|γ) − 2(β|γ) ≥ (β|β) + (γ|γ).

All proper principal minors of A have finite type, so, considering the connected components β1, . . . , βr of β, we have (β|β) = (β1|β1) + · · · + (βr|βr) > 0. Hence (β|β) ≥ m. Similarly (γ|γ) ≥ m. Hence (α|α) ≥ 2m, which proves (i).
Next, (ii) is clear if (α|α) = 0, so assume that (α|α) > 0. Then

(β|β)/(α|α) = (1/(α|α)) ( Σ_i ki²(αi|αi) + 2 Σ_{i<j} ki kj(αi|αj) )
            = Σ_i ki · ki(αi|αi)/(α|α) + Σ_{i<j} aij kj · ki(αi|αi)/(α|α) ∈ Z,

where the sums are over i, j ∈ supp β. Since (β|β) > 0, it follows that (β|β) ≥ (α|α). Similarly (γ|γ) ≥ (α|α). So (α|α) ≥ (β|β) + (γ|γ) ≥ 2(α|α). This contradiction yields (ii).

Proposition 5.2.2

∆re = {α = Σi ki αi ∈ Q | (α|α) > 0 and ki(αi|αi)/(α|α) ∈ Z for all i}.

Proof The inclusion "⊂" is obvious for short roots, follows from Proposition 5.2.4 for long roots, and from (5.4) for intermediate roots.
Conversely, let α be as in the right hand side. Then w(α) ∈ ±Q+ for any w ∈ W by Lemmas 5.1.2 and 5.2.1(ii). We may assume that α ∈ Q+, and let β = Σ_{i=0}^ℓ k′i αi be an element of {w(α) | w ∈ W} ∩ Q+ with minimal possible height. Since (β|β) > 0, we have Σ_{i=0}^ℓ k′i(αi|β) > 0. As all k′i ≥ 0, there is i with (αi|β) > 0, and ⟨β, αi∨⟩ = 2(αi|β)/(αi|αi) > 0. So ri(β) = β − ⟨β, αi∨⟩αi has smaller height, hence ri(β) ∉ Q+. But by Lemmas 5.2.1(ii) and 5.1.2, ±ri(β) ∈ Q+, so ri(β) ∈ −Q+. Hence β = kαi for some positive integer k. Thus (β|β) = k²(αi|αi). So k′i(αi|αi)/(β|β) = 1/k, whence k = 1, and we are done.

Proposition 5.2.3 Let A be an indecomposable GCM of finite or affine type. Then re ∆s = {α ∈ Q | (α|α) = m}. 72 Real and Imaginary Roots Proof Suppose α ∈ Q satisfies (α|α) = m. By Lemma 5.2.1(i), we may assume that α ∈ Q+. Consider the set

{w(α) | w ∈ W } ∩ Q+. P We choose an element β = kiαi in this set with ht β minimal. Since (β|β) = (α|α) = m, we have X ki(αi|β) = m. i

∨ Since ki ≥ 0 and m > 0 there exists i with (αi|β) > 0. Then hβ, αi i > 0. So ri(β) has smaller height than β, whence si(β) ∈ −Q+, using the previous lemma. It follows that β = rαi for some positive integer r. 2 re re Since (rαi|rαi) ≥ r m, we have r = 1. Hence β ∈ ∆s and α ∈ ∆s also. re Conversely, if α ∈ ∆s then α = w(αi) for some i and (α|α) = (αi|αi). However, we have seen in the previous paragraph that the short simple roots have (αi|αi) = m, so (α|α) = m also.

Note from Proposition 5.2.3 that m is achieved on simple roots, so m is just mini(αi|αi). The following easier result follows immediately from Proposition 5.2.2.

Proposition 5.2.4 Let A be an indecomposable GCM of finite or affine type, and M := max{(α|α) | α ∈ ∆re}.

Then

∆re_l = {α = Σi ki αi ∈ Q | (α|α) = M and ki(αi|αi)/(α|α) ∈ Z for all i}.

Proposition 5.2.5 Let A = A_{2ℓ}^(2) and m′ = (αi|αi) for 1 ≤ i < ℓ. Then

∆re_i = {α ∈ Q | (α|α) = m′}.

Proof Let α = Σ_{i=0}^ℓ ki αi ∈ Q satisfy (α|α) = m′. We just need to check that

ki(αi|αi)/(α|α) ∈ Z for all i.   (5.4)

(αi|αi) 0 Indeed the condition ki (α|α) ∈ Z is obvious for i 6= 0 since (αi|αi) = m 5.3 Imaginary roots 73

0 for i = 1, . . . , ` − 1 and 2m for i = `. It just remains to show that k0 is even. We have

(α|α) = k0²(α0|α0) + 2k0k1(α0|α1) + (Σ_{i=1}^ℓ ki αi | Σ_{i=1}^ℓ ki αi)
       = k0²(α0|α0) + k0 k1 a10 (α1|α1) + Σ_{i=1}^ℓ ki²(αi|αi) + Σ_{1≤i<j≤ℓ} ki kj aij (αi|αi).

Thus (α|α) ∈ k0²(α0|α0) + Zm′. But (α|α) = m′, so k0²(α0|α0) ∈ Zm′. Since (α0|α0) = m′/2, we have k0²/2 ∈ Z, whence k0 is even as required.

5.3 Imaginary roots im im If a root is not real it is called imaginary. Denote by ∆ and ∆+ the sets of the imaginary and positive imaginary roots respectively.

Proposition 5.3.1

im (i) The set ∆+ is W -invariant. im ∨ (ii) For α ∈ ∆+ there exists a unique (positive) root β ∈ −C which is W -conjugate to α. (iii) If A is symmetrizable then the root α is imaginary if and only if (α|α) ≤ 0.

Proof (i) As ∆im_+ ⊂ ∆+ \ Π and the set ∆+ \ {αi} is ri-invariant, it follows that ∆im_+ is W-invariant.
(ii) Let α ∈ ∆im_+ and let β be the element of minimal height in W · α ⊂ ∆+. Then β ∈ −C∨. Indeed, if ⟨β, αi∨⟩ > 0 then riβ ∈ ∆+ has smaller height. Uniqueness of β follows from Proposition 3.4.1(ii).
(iii) Let α ∈ ∆im_+. Since the form is W-invariant, as in (ii) we may assume that α = Σi ki αi ∈ −C∨ with ki ∈ Z+. Then

(α|α) = Σi ki(α|αi) = Σi (ki/2)|αi|²⟨α, αi∨⟩ ≤ 0.

The converse follows from Proposition 5.1.1(iv)(a).

For α = Σi ki αi ∈ Q define the support of α, denoted supp α, as the subdiagram of S(A) consisting of the vertices i such that ki ≠ 0 together with all edges connecting them. By Lemma 1.4.7, supp α is connected. Set

K = {α ∈ Q+ \ {0} | ⟨α, αi∨⟩ ≤ 0 for all i and supp α is connected}.

im Lemma 5.3.2 K ⊂ ∆+ .

P Proof Let α = i kiαi ∈ K. Set

Ωα = {γ ∈ ∆+ | γ ≤ α}.

The set Ωα is finite, and it is non-empty since the simple roots appearing P in decomposition of α belong to Ωα. Let β = i miαi be an element of maximal height in Ωα. Note by definition

β + αi 6∈ ∆+ if ki > mi. (5.5)

Next, supp β = supp α.

∨ Indeed, if some i ∈ supp α \ supp β, we may assume that hβ, αi i < 0, whence β + αi ∈ Ωα by Proposition 3.1.5(v), giving a contradiction.

Let A1 be the principal minor of A corresponding to the subset supp α. ∨ If A1 is of finite type then hα, αi i ≤ 0 for all i implies α = 0 giving a contradiction (see the argument in the proof of Proposition 4.3.2). If A1 is not of finite type, then by Proposition 4.3.2(vi),

P := {j ∈ supp α | kj = mj}= 6 ∅.

We aim to first show that P = supp α, and so α = β ∈ ∆+. Let R be a connected component of subdiagram supp α \ P . By (5.5) and Proposition 3.1.5(v),

∨ hβ, αi i ≥ 0 for all i ∈ R. (5.6)

0 P Set β = i∈R miαi. Then

0 ∨ ∨ X hβ , αi i = hβ, αi i − mjaij. j∈supp α\R

0 ∨ 0 ∨ Now (5.6) implies hβ , αi i ≥ 0 for all i ∈ R and hβ , αj i > 0 for some j ∈ R. 5.3 Imaginary roots 75

Let AR be the principal minor corresponding to the subset R, and u be the column vector with entries mj, j ∈ R. Since

⟨β′, αi∨⟩ = Σ_{j∈R} aij mj   (i ∈ R),

we have u > 0, A_R u ≥ 0, and A_R u ≠ 0. It follows that A_R is not of affine or indefinite type, hence it is of finite type. Now let

0 X α = (ki − mi)αi. i∈R P We have ki − mi > 0 for all i ∈ R, and α − β = i∈supp α\P (ki − mi)αi. Thus for i ∈ R we have

∨ X X 0 ∨ hα − β, αi i = (kj − mj)aij = (ki − mi)aij = hα , αi i, j∈supp α\P j∈R since R is a connected component of supp α \ P . Thus

0 ∨ ∨ ∨ hα , αi i = hα, αi i − hβ, αi i (i ∈ R).

∨ ∨ 0 ∨ Now hα, αi i ≤ 0 since α ∈ K and hβ, αi i ≥ 0 by (5.6), so hα , αi i ≤ 0 for all i ∈ R. Now let u be the column vector with coordinates ki − mi for i ∈ R. Then we have u > 0 and AM u ≤ 0. Since AM has finite type AM (−u) ≥ 0 implies −u > 0 or −u = 0, giving a contradiction. This completes the proof of the fact that α ∈ ∆+. Finally, 2α satisfies all the assumptions of the lemma, so α ∈ ∆+, and im by Proposition 5.1.1(ii), α ∈ ∆+ .

im S Theorem 5.3.3 ∆+ = w∈W w(K).

Proof ”⊃” follows from Lemma 5.3.2 and Proposition 5.3.1(i). The converse embedding holds in view of Proposition 5.3.1(i),(ii) and the fact that supp α is connected for every root α.

Proposition 5.3.4 If α ∈ ∆im_+ and r is a non-zero rational number such that rα ∈ Q, then rα ∈ ∆im.

Proof In view of Proposition 5.3.1(i),(ii) we may assume that α ∈ −C∨ ∩ Q+. Since α is a root, its support is connected, so α ∈ K. Then rα ∈ K im for any r > 0 as in the assumption. By Lemma 5.3.2, rα ∈ ∆+ .

Theorem 5.3.5 Let A be indecomposable. 76 Real and Imaginary Roots

(i) If A is of finite type then ∆im = ∅.
(ii) If A is of affine type then ∆im_+ = {nδ | n ∈ Z>0}, where δ = Σ_{i=0}^ℓ ai αi and the ai are the marks in the Dynkin diagram.
(iii) If A is of indefinite type then there exists a positive imaginary root α = Σi ki αi such that ki > 0 and ⟨α, αi∨⟩ < 0 for all i = 1, . . . , n.

Proof By the definition of types and Remark 4.1.2, the set

{α ∈ Q+ | ⟨α, αi∨⟩ ≤ 0 for all i}

is {0} if A is of finite type, is Z≥0 δ if A is of affine type, and, if A is of indefinite type, contains an element α = Σi ki αi with ki > 0 and ⟨α, αi∨⟩ < 0 for all i = 1, . . . , n. Now apply Theorem 5.3.3.

We call a root α a null-root if α|_{h′} = 0, or equivalently ⟨α, αi∨⟩ = 0 for all i. It follows from Theorem 4.1.12 that α is a null-root if and only if supp α is of affine type and is a connected component of the diagram S(A), and α = kδ for some k ∈ Z. We call a root α isotropic if (α|α) = 0.

Proposition 5.3.6 Let A be symmetrizable. A root α is isotropic if and only if it is W -conjugate to an imaginary root β such that supp β is a subdiagram of affine type in S(A).

Proof Let α be an isotropic root. We may assume that α > 0. Then α ∈ ∆im_+ by Proposition 5.1.1(iv)(a), and α is W-conjugate to an imaginary root β ∈ K such that ⟨β, αi∨⟩ ≤ 0 for all i, thanks to Proposition 5.3.1(ii). Let β = Σ_{i∈P} ki αi and P = supp β. Then

(β|β) = Σ_{i∈P} ki(β|αi) = 0,

where ki > 0 and

(β|αi) = ½|αi|²⟨β, αi∨⟩ ≤ 0

for all i ∈ P. So ⟨β, αi∨⟩ = 0 for all i ∈ P, and P is an affine diagram.
Conversely, let β = kδ be an imaginary root for an affine subdiagram. Then

(β|β) = k²(δ|δ) = k² Σ_i ai(δ|αi) = 0

since ⟨δ, αi∨⟩ = 0 for all i.

6 Affine Algebras

6.1 Notation Throughout we use the following notation in the affine case:

• A is an indecomposable GCM of affine type of order ` + 1 and rank `.

• a0, a1, . . . , aℓ are the marks of the diagram S(A) (note that a0 = 1, unless A = A_{2ℓ}^(2), in which case a0 = 2).
• a0∨, a1∨, . . . , aℓ∨ are the marks of the dual diagram S(A^t) (this diagram is obtained from S(A) by changing the direction of all arrows and preserving the labels of the vertices). Note that in all cases a0∨ = 1.
• The numbers h := Σ_{i=0}^ℓ ai and h∨ := Σ_{i=0}^ℓ ai∨ are the Coxeter and dual Coxeter numbers.
• r ∈ {1, 2, 3} refers to the number r in the type X_N^(r).
• c = Σ_{i=0}^ℓ ai∨ αi∨ is the canonical central element. By Proposition 1.4.6, the center of g is Cc.
• δ = Σ_{i=0}^ℓ ai αi. Then ∆im = {±δ, ±2δ, . . . } and ∆im_+ = {δ, 2δ, . . . }, see Theorem 5.3.5.

6.2 Standard bilinear form We know that A is symmetrizable. Moreover,

A = diag(a0/a0∨, . . . , aℓ/aℓ∨) B   (6.1)


for a symmetric matrix B. Indeed, let δ = (a0, . . . , aℓ)^t and δ∨ = (a0∨, . . . , aℓ∨)^t. If A = DB where D is a diagonal invertible matrix and B is a symmetric matrix, then Bδ = 0, and hence δ^t B = 0. On the other hand, (δ∨)^t A = 0 implies (δ∨)^t DB = 0, whence BDδ∨ = 0, and since dim ker B = 1, we get that Dδ∨ is proportional to δ. Fix an element d ∈ h such that

⟨αi, d⟩ = 0 for i = 1, . . . , ℓ,   ⟨α0, d⟩ = 1.

The element d is defined up to a summand proportional to c and is called the energy element. Note that {α0∨, α1∨, . . . , αℓ∨, d} is a basis of h. Indeed, we must show that d is not a linear combination of α0∨, α1∨, . . . , αℓ∨. Otherwise d = Σ_{i=0}^ℓ ui αi∨, and for u = (u0, . . . , uℓ)^t we would have A^t u ≥ 0 and A^t u ≠ 0, giving a contradiction with the affine type of A^t. Note that

g = [g, g] ⊕ Cd. Following §2.2, define the non-degenerate symmetric bilinear form (·|·) on h by

(αi∨|αj∨) = (aj/aj∨) aij   (0 ≤ i, j ≤ ℓ);
(αi∨|d) = δ_{i,0} a0   (0 ≤ i ≤ ℓ);
(d|d) = 0.

It follows that

(c|αi∨) = 0   (0 ≤ i ≤ ℓ);   (c|c) = 0;

(c|d) = a0.
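These follow directly from the definitions (a verification added here for convenience): since (δ∨)^t A = 0,

(c|αj∨) = Σ_i ai∨(αi∨|αj∨) = (aj/aj∨) Σ_i ai∨ aij = 0,

hence also (c|c) = Σ_j aj∨(c|αj∨) = 0, while (c|d) = Σ_i ai∨(αi∨|d) = a0∨ a0 = a0.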

By Theorem 2.2.3, this form can be uniquely extended to g so that all conditions of that theorem hold. The extended form (·|·) will be referred to as the normalized invariant form. ∗ Next define Λ0 ∈ h by

∨ hΛ0, αi i = δi0, hΛ0, di = 0. Then

{α0, . . . , αℓ, Λ0}

is a basis of h∗. The isomorphism ν : h → h∗ defined by the form (·|·) is given by

ν : αi∨ ↦ (ai/ai∨) αi,   ν : d ↦ a0Λ0.

We also have that ν : c ↦ δ.
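Indeed (a one-line check, added here): ν(c) = Σ_i ai∨ ν(αi∨) = Σ_i ai∨(ai/ai∨)αi = Σ_i ai αi = δ.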

The transported form (·|·) on h∗ has the following properties:

(αi|αj) = (ai∨/ai) aij   (0 ≤ i, j ≤ ℓ);
(αi|Λ0) = δ_{i0} a0^{−1}   (0 ≤ i ≤ ℓ);

(Λ0|Λ0) = 0;

(δ|αi) = 0 (0 ≤ i ≤ `); (δ|δ) = 0;

(δ|Λ0) = 1. It follows that there is an isometry of lattices

Q∨(A) =∼ Q(At). (6.2)

Denote by h̊ (resp. h̊_R) the C-span (resp. R-span) of α1∨, . . . , αℓ∨. The dual notions h̊∗ and h̊∗_R are defined as the similar linear combinations of α1, . . . , αℓ. Then we have decompositions into orthogonal direct sums

h = h̊ ⊕ (Cc + Cd),   h∗ = h̊∗ ⊕ (Cδ + CΛ0).

Set

h_R := h̊_R + Rc + Rd,   h∗_R := h̊∗_R + RΛ0 + Rδ.

By Theorem 4.2.4, the restriction of the bilinear form (·|·) to h̊_R and h̊∗_R (resp. to h̊_R + Rc and h̊∗_R + Rδ) is positive definite (resp. positive semidefinite with kernels Rc and Rδ).
For a subset S ⊂ h∗ denote by S̄ the orthogonal projection of S onto h̊∗. We have

λ − λ̄ = ⟨λ, c⟩Λ0 + ((|λ|² − |λ̄|²)/(2⟨λ, c⟩)) δ   (λ ∈ h∗, ⟨λ, c⟩ ≠ 0).   (6.3)

Indeed, λ − λ̄ = b1Λ0 + b2δ. Applying (·|δ), we deduce that b1 = (λ|δ) =

⟨λ, c⟩. Now, |λ|² = |λ̄|² + 2b1b2, which implies the required expression for b2. The following closely related formula is proved similarly:

λ = λ̄ + ⟨λ, c⟩Λ0 + (λ|Λ0)δ.   (6.4)

Define ρ ∈ h∗ by

∨ hρ, di = 0, hρ, αi i = 1 (0 ≤ i ≤ `). Then (6.4) gives ∨ ρ =ρ ¯ + h Λ0. (6.5)

6.3 Roots of affine algebra

Denote by g̊ the subalgebra of g generated by ei and fi for i = 1, . . . , ℓ. This subalgebra is isomorphic to g(Å), where Å is obtained from A by removing the 0th row and 0th column. This is a finite dimensional simple Lie algebra whose Dynkin diagram comes from S(A) by deleting the 0th vertex, see Proposition 4.3.2. Indeed, let

Π̊ = {α1, . . . , αℓ},   Π̊∨ = {α1∨, . . . , αℓ∨}.

Then (h̊, Π̊, Π̊∨) is a realization of Å, and since [ei, fi] = αi∨, g̊ is generated by ei, fi for i = 1, . . . , ℓ and h̊, and the relations (1.12)–(1.15) hold. So there is a homomorphism from g̃(Å) onto g̊. We claim that g̊ has no non-trivial ideals which have trivial intersection with h̊. Otherwise, if i is such an ideal, let x ∈ i be a non-zero element of weight α ≠ 0. Then α ∈ ∆̊, where

∆̊ = ∆ ∩ h̊∗.   (6.6)

We may assume that α is the smallest positive root for which such x exists. Then

[fi, x] = 0 (i = 1, . . . , `).

But it is also clear from the relations that [f0, x] = 0. By Lemma 1.4.5, x = 0. This contradiction proves that there is a homomorphism from g(Å) onto g̊. Since g(Å) is simple by Proposition 1.4.8(i), this homomorphism must be an isomorphism. We will also use the notations

∆̊+ := ∆̊ ∩ ∆+,   Q̊ = Z∆̊,   Q̊∨ = Z∆̊∨,

Proposition 6.3.1 ◦ (i) If r = 1 then ∆re = {α + nδ | α ∈∆, n ∈ Z}, and α + nδ ∈ ∆re ◦ is short if and only if α ∈∆ is short. (2) (ii) If r = 2 or 3 and A 6= A2` then ◦ re ∆s = {α + nδ | α ∈∆s, n ∈ Z}, ◦ re ∆l = {α + nrδ | α ∈∆l, n ∈ Z}. (2) (iii) If A = A2` for ` > 1 then 1 ◦ ∆re = { (α + (2n − 1)δ) | α ∈∆ , n ∈ }, s 2 l Z ◦ re ∆i = {α + nδ | α ∈∆s, n ∈ Z}, ◦ re ∆l = {α + 2nδ | α ∈∆l, n ∈ Z}. (2) (iv) If A = A2 then 1 ◦ ∆re = { (α + (2n − 1)δ) | α ∈∆, n ∈ }, s 2 Z ◦ re ∆l = {α + 2nδ | α ∈∆, n ∈ Z}. (v) ∆re + rδ = ∆re. ◦ re re (vi) ∆+ = {α ∈ ∆ with n > 0}∪ ∆+.

Proof (v),(vi) follow from (i)-(iv). ◦ ◦ (2) re Suppose that A 6= A2` . Then ∆s⊂ ∆s . Let α ∈∆s. Then (α|α) = m. Hence for n ∈ Z we have (α + nδ|α + nδ) = m. By Proposition 5.2.3, re α + nδ ∈ ∆s . P` re Conversely, let β = i=0 kiαi ∈ ∆s . By Proposition 5.2.3,

(β − k0δ|β − k0δ) = (β|β) = m. 82 Affine Algebras P` Since a0 = 1, we have β − k0δ = i=1(ki − k0ai)αi. So by (6.6) and ◦ Proposition 5.2.3 again, we deduce that β − k0δ ∈∆s, thus the short roots have the required form. ◦ re We now consider the long roots. Note that ∆l⊂ ∆l . Let α = Pn ◦ i=1 kiαi ∈∆l. Then (α + nδ|α + nδ) = (α|α) = M. By Propo- (αi|αi) sition 5.2.4, we have ki (α|α) ∈ Z for i = 1, . . . , `. By the same re (αi|αi) proposition α + nδ ∈ ∆l if and only nai (α|α) ∈ Z for i = 0, . . . , `. ∨ ∨ 2ai 2ai Now (αi|αi) = , so the condition is n ∈ . Note also that ai (α|α) Z (α0|α0) = 2.

First suppose that α0 is a long root, i.e. we are in the case (i). Then ∨ 2ai re (α|α) = 2, and so n (α|α) ∈ Z. Hence α + nδ ∈ ∆l for all n ∈ Z. P` re Conversely, let β = i=0 kiαi ∈ ∆l . By Proposition 5.2.4, (β − (αi|αi) k0δ|β − k0δ) = (β|β) = M, and ki (β|β) ∈ Z for i = 0, . . . , `. We ∨ (αi|αi) 2ai have k0ai ∈ also since (αi|αi) = , and (β|β) = 2. We (β|β) Z ai P` ◦ now conclude that have β − k0δ = i=1(ki − k0ai)αi ∈∆l by (6.6) and Proposition 5.2.4 again, and so the long roots have the required form.

Now suppose that α0 is a short root, i.e. we are in the case (ii). Note that r = (α|α) . Thus (α0|α0) 2a∨ a∨ n i = n i , (α|α) r

∨ Since a0 = 1 this lies in Z for all i = 0, . . . , ` if and only if n is divisible re by r. Thus by Proposition 5.2.4, α + rnδ ∈ ∆l for all n ∈ Z. P` re Conversely, let β = i=0 kiαi ∈ ∆l . By Proposition 5.2.4, (β − (αi|αi) k0δ|β − k0δ) = (β|β) = M, and ki (β|β) ∈ Z for i = 0, . . . , `. In (α0|α0) k0 particular, k0 (β|β) = r ∈ Z. We have (α |α ) k k a i i = a∨ 0 ∈ 0 i (β|β) i r Z

∨ 2ai for i = 1, . . . , `, as (αi|αi) = and (β|β) = 2r. Thus by Propo- ai P` ◦ sition 5.2.4, β − k0δ = i=1(ki − k0ai)αi ∈∆l by (6.6) and Proposi- tion 5.2.4 again, and so the long roots have the required form. The proof of (iii) and (iv) is similar.

Proposition 6.3.1 will also follow from explicit constructions of affine Lie algebras given in the next chapter.

Remark 6.3.2 $\overline\Delta \setminus \{0\} = \mathring\Delta$ in all cases, except $A_{2\ell}^{(2)}$, in which case the root system $\overline\Delta$ is not reduced, and $\mathring\Delta$ is the corresponding reduced root system.

Introduce the element
$$\theta := \delta - a_0\alpha_0 = \sum_{i=1}^{\ell} a_i\alpha_i \in \mathring Q. \tag{6.8}$$
We have
$$(\theta|\theta) = (\delta - a_0\alpha_0 \mid \delta - a_0\alpha_0) = a_0^2(\alpha_0|\alpha_0) = 2a_0.$$
Thus $(\theta|\theta) = M$ if $r = 1$ or $A = A_{2\ell}^{(2)}$, and $(\theta|\theta) = m$ otherwise. In all cases it follows from Propositions 5.2.3 and 5.2.4 that $\theta \in \mathring\Delta_+$. Moreover,
$$\theta^\vee = \frac{2\nu^{-1}(\theta)}{(\theta|\theta)} = \frac{1}{a_0}\nu^{-1}(\theta) = \frac{1}{a_0}\sum_{i=1}^{\ell} a_i^\vee\alpha_i^\vee, \qquad (\theta^\vee|\theta^\vee) = \frac{2}{a_0}, \qquad \alpha_0^\vee = \nu^{-1}(\delta - \theta) = c - a_0\theta^\vee.$$

Proposition 6.3.3 If $r = 1$ or $A = A_{2\ell}^{(2)}$, then $\theta \in (\mathring\Delta_+)_l$ and $\theta$ is the unique root in $\mathring\Delta$ of maximal height ($= h - a_0$). Otherwise $\theta \in (\mathring\Delta_+)_s$ and $\theta$ is the unique root in $\mathring\Delta_s$ of maximal height ($= h - 1$).

Proof One checks that all simple roots in $\mathring\Delta$ of the same length are $\mathring W$-conjugate (this is essentially a type $A_2$ argument). Hence $\mathring\Delta_s$ and $\mathring\Delta_l$ are the orbits of $\mathring W$ on $\mathring\Delta$. Moreover,
$$\langle\theta, \alpha_i^\vee\rangle = \langle\delta - a_0\alpha_0, \alpha_i^\vee\rangle = -a_0a_{i0} \geq 0 \qquad (1 \leq i \leq \ell).$$
Hence $\theta$ is in the fundamental domain of $\mathring W$, which determines the short or long root uniquely. The height of $\theta$ is easy to compute from the definition. Finally, if $\theta'$ is a maximal height root in the $\mathring W$-orbit of roots in $\mathring\Delta$ of the same length as $\theta$, then a standard argument shows that $\theta'$ is in the fundamental chamber, hence $\theta' = \theta$.

If $A$ is a matrix of finite type, we assume that the standard invariant form $(\cdot|\cdot)$ on $\mathfrak g(A)$ is normalized by the condition $(\alpha|\alpha) = 2$ for $\alpha \in \Delta_l$, and call it the normalized invariant form.

Corollary 6.3.4 Let $\mathfrak g$ be an affine algebra of type $X_N^{(r)}$. Then the ratio of the restriction to the subalgebra $\mathring{\mathfrak g}$ of the normalized invariant form on $\mathfrak g$ to the normalized invariant form on $\mathring{\mathfrak g}$ is equal to $r$.

6.4 Affine Weyl Group

Since $\langle\delta, \alpha_i^\vee\rangle = 0$ for all $i$, we have $w(\delta) = \delta$ for all $w \in W$. Denote by $\mathring W$ the subgroup of $W$ generated by $r_1, \dots, r_\ell$. As $r_i(\Lambda_0) = \Lambda_0$ for $i \geq 1$, $\mathring W$ acts trivially on $\mathbb C\Lambda_0 + \mathbb C\delta$. It is also clear that $\mathring{\mathfrak h}^*$ is $\mathring W$-invariant. So the action of $\mathring W$ on $\mathring{\mathfrak h}^*$ is faithful, and we can identify $\mathring W$ with the Weyl group of $\mathring{\mathfrak g}$ also acting on $\mathring{\mathfrak h}^*$. Hence $\mathring W$ is finite. We have
$$r_0r_\theta(\lambda) = \lambda + \langle\lambda, c\rangle\nu(\theta^\vee) - \Bigl(\langle\lambda, \theta^\vee\rangle + \frac12(\theta^\vee|\theta^\vee)\langle\lambda, c\rangle\Bigr)\delta. \tag{6.9}$$
Indeed,
$$\begin{aligned}
r_0r_\theta(\lambda) &= r_0(\lambda - \langle\lambda, \theta^\vee\rangle\theta)\\
&= \lambda - \langle\lambda, \alpha_0^\vee\rangle\alpha_0 - \langle\lambda, \theta^\vee\rangle(\theta - \langle\theta, \alpha_0^\vee\rangle\alpha_0)\\
&= \lambda - \langle\lambda, \alpha_0^\vee\rangle\frac{1}{a_0}(\delta - \theta) - \langle\lambda, \theta^\vee\rangle\theta + \langle\lambda, \theta^\vee\rangle\langle\theta, \alpha_0^\vee\rangle\frac{1}{a_0}(\delta - \theta)\\
&= \lambda + \Bigl(\frac{\langle\lambda, \alpha_0^\vee\rangle}{a_0} - \langle\lambda, \theta^\vee\rangle - \frac{\langle\lambda, \theta^\vee\rangle\langle\theta, \alpha_0^\vee\rangle}{a_0}\Bigr)\theta + \Bigl(-\frac{\langle\lambda, \alpha_0^\vee\rangle}{a_0} + \frac{\langle\lambda, \theta^\vee\rangle\langle\theta, \alpha_0^\vee\rangle}{a_0}\Bigr)\delta\\
&= \lambda + \bigl(\langle\lambda, \alpha_0^\vee\rangle - a_0\langle\lambda, \theta^\vee\rangle - \langle\lambda, \theta^\vee\rangle\langle\theta, \alpha_0^\vee\rangle\bigr)\frac{1}{a_0}\theta - \Bigl(\frac{\langle\lambda, \alpha_0^\vee\rangle}{a_0} - \frac{\langle\lambda, \theta^\vee\rangle\langle\theta, \alpha_0^\vee\rangle}{a_0}\Bigr)\delta\\
&= \lambda + \bigl(\langle\lambda, c - a_0\theta^\vee\rangle - a_0\langle\lambda, \theta^\vee\rangle - \langle\lambda, \theta^\vee\rangle\langle\theta, c - a_0\theta^\vee\rangle\bigr)\nu(\theta^\vee) - \Bigl(\frac{\langle\lambda, c - a_0\theta^\vee\rangle}{a_0} - \frac{\langle\lambda, \theta^\vee\rangle\langle\theta, c - a_0\theta^\vee\rangle}{a_0}\Bigr)\delta,
\end{aligned}$$
which easily implies (6.9). Set
$$t_\alpha(\lambda) = \lambda + \langle\lambda, c\rangle\alpha - \Bigl((\lambda|\alpha) + \frac12(\alpha|\alpha)\langle\lambda, c\rangle\Bigr)\delta \qquad (\lambda \in \mathfrak h^*,\ \alpha \in \mathring{\mathfrak h}^*). \tag{6.10}$$
Then (6.9) is equivalent to $r_0r_\theta = t_{\nu(\theta^\vee)}$.

Proposition 6.4.1 Let $\alpha, \beta \in \mathring{\mathfrak h}^*$, $w \in \mathring W$. Then

(i) $t_\alpha t_\beta = t_{\alpha+\beta}$.
(ii) $wt_\alpha w^{-1} = t_{w(\alpha)}$.

Proof The linear map $t_\alpha : \mathfrak h^* \to \mathfrak h^*$ is uniquely determined by the properties
$$t_\alpha(\lambda) = \lambda - (\lambda|\alpha)\delta \quad \text{if } \langle\lambda, c\rangle = 0, \tag{6.11}$$
$$t_\alpha(\Lambda_0) = \Lambda_0 + \alpha - \frac12(\alpha|\alpha)\delta, \tag{6.12}$$
since $\langle\alpha_i, c\rangle = 0$ and $\langle\Lambda_0, c\rangle = 1$. If $\langle\lambda, c\rangle = 0$ then
$$t_\alpha t_\beta(\lambda) = t_\alpha(\lambda - (\lambda|\beta)\delta) = \lambda - (\lambda|\alpha)\delta - (\lambda|\beta)(\delta - (\delta|\alpha)\delta) = \lambda - (\lambda|\alpha+\beta)\delta = t_{\alpha+\beta}(\lambda),$$
since $(\delta|\alpha) = 0$, and
$$wt_\alpha w^{-1}(\lambda) = w\bigl(w^{-1}(\lambda) - (w^{-1}(\lambda)|\alpha)\delta\bigr) = \lambda - (\lambda|w(\alpha))\delta = t_{w(\alpha)}(\lambda),$$
since $\langle w^{-1}(\lambda), c\rangle = 0$ and $w(\delta) = \delta$. Also
$$t_\alpha t_\beta(\Lambda_0) = t_\alpha\Bigl(\Lambda_0 + \beta - \frac12(\beta|\beta)\delta\Bigr) = \Lambda_0 + \alpha - \frac12(\alpha|\alpha)\delta + \beta - (\beta|\alpha)\delta - \frac12(\beta|\beta)(\delta - (\delta|\alpha)\delta) = \Lambda_0 + (\alpha+\beta) - \frac12(\alpha+\beta \mid \alpha+\beta)\delta = t_{\alpha+\beta}(\Lambda_0),$$
using $(\delta|\alpha) = 0$ again, and
$$wt_\alpha w^{-1}(\Lambda_0) = w\Bigl(\Lambda_0 + \alpha - \frac12(\alpha|\alpha)\delta\Bigr) = \Lambda_0 + w(\alpha) - \frac12(w(\alpha)|w(\alpha))\delta = t_{w(\alpha)}(\Lambda_0),$$
since $w^{-1}(\Lambda_0) = w(\Lambda_0) = \Lambda_0$. This proves (i) and (ii).
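Formula (6.10) and Proposition 6.4.1(i) are easy to test numerically. The following Python sketch (not part of the original text) records a weight $\lambda$ by its finite part $\bar\lambda$, its level $\langle\lambda, c\rangle$ and its $\delta$-coefficient, takes the restriction of $(\cdot|\cdot)$ to $\mathring{\mathfrak h}^*_{\mathbb R}$ to be the Euclidean dot product (an assumption made only for this check), and verifies $t_\alpha t_\beta = t_{\alpha+\beta}$ on random data.

```python
import numpy as np

# A weight lambda is stored as (lam_bar, level, d):
#   lam_bar = finite part in R^l, level = <lambda, c>, d = coefficient of delta.
# The restriction of (.|.) to the finite part is taken to be the dot product.

def t(alpha, weight):
    """Translation t_alpha from formula (6.10)."""
    lam_bar, level, d = weight
    new_bar = lam_bar + level * alpha
    new_d = d - (lam_bar @ alpha + 0.5 * (alpha @ alpha) * level)
    return (new_bar, level, new_d)

rng = np.random.default_rng(0)
l = 4
alpha = rng.standard_normal(l)
beta = rng.standard_normal(l)
lam = (rng.standard_normal(l), rng.standard_normal(), rng.standard_normal())

lhs = t(alpha, t(beta, lam))          # t_alpha t_beta (lambda)
rhs = t(alpha + beta, lam)            # t_{alpha+beta} (lambda)
assert np.allclose(lhs[0], rhs[0]) and np.isclose(lhs[2], rhs[2])
print("t_alpha t_beta = t_{alpha+beta} verified numerically")
```

The level is untouched by `t`, in agreement with the fact that translations preserve each $\mathfrak h_s^*$ introduced below.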

Now define the lattice $M$ in $\mathring{\mathfrak h}^*_{\mathbb R}$. Let $\mathbb Z(\mathring W\cdot\theta^\vee)$ denote the lattice in $\mathring{\mathfrak h}_{\mathbb R}$ generated over $\mathbb Z$ by the finite set $\mathring W\cdot\theta^\vee$, and set
$$M = \nu\bigl(\mathbb Z(\mathring W\cdot\theta^\vee)\bigr).$$

Lemma 6.4.2
(i) If $A$ is symmetric or $r > a_0$ then $M = \mathring Q$.
(ii) In all other cases $M = \nu(\mathring Q^\vee)$.

Proof If $r = 1$ then $\theta^\vee$ is a short root in $\mathring\Delta^\vee$, see Proposition 6.3.3. So $\mathring W\cdot\theta^\vee = \mathring\Delta_s^\vee$. It is known (Exercise 6.9 in Kac) that for the finite type the short roots generate the root lattice, so we have $M = \nu(\mathring Q^\vee)$, which implies the result for $r = 1$. Similarly, if $a_0r = 2$ or $3$, then $\theta^\vee$ is a long root in $\mathring\Delta^\vee$, so $\mathring W\cdot\theta^\vee = \mathring\Delta_l^\vee$, whence $M = \mathring Q$. Finally, for $A_{2\ell}^{(2)}$ we have $\nu(\theta^\vee) = \frac12\theta$. Hence $M = \frac12\mathbb Z\mathring\Delta_l = \nu(\mathring Q^\vee)$ again.

Corollary 6.4.3
(i) If $A$ is not of types $B_\ell^{(1)}, C_\ell^{(1)}, F_4^{(1)}, G_2^{(1)}, A_{2\ell}^{(2)}$, then $M = \mathbb Z\mathring\Delta = \sum_{i=1}^{\ell} \mathbb Z\alpha_i$.
(ii) If $A$ is of types $B_\ell^{(1)}, C_\ell^{(1)}, F_4^{(1)}, G_2^{(1)}$, then
$$M = \mathbb Z\mathring\Delta_l = \sum_{\alpha_i\in\mathring\Delta_l} \mathbb Z\alpha_i + \sum_{\alpha_i\in\mathring\Delta_s} \mathbb Zp\alpha_i,$$
where $p = 3$ for $G_2^{(1)}$ and $2$ in the other cases.
(iii) If $A$ is of type $A_{2\ell}^{(2)}$, then
$$M = \frac12\mathbb Z\mathring\Delta_l = \sum_{i=1}^{\ell-1} \mathbb Z\alpha_i + \mathbb Z\,\frac12\alpha_\ell.$$

The lattice $M$ considered as an abelian group acts on $\mathfrak h^*$ by the formula (6.10). This action is faithful in view of (6.11), (6.12). Denote the corresponding subgroup of $GL(\mathfrak h^*)$ by $T$ and call it the group of translations. In view of (6.9) and Proposition 6.4.1(ii), $T$ is a subgroup of $W$.

Proposition 6.4.4 $W = T \rtimes \mathring W$.

Proof Since $\mathring W$ is finite and $T$ is a free abelian group, we have $\mathring W \cap T = \{1\}$. Moreover, $r_0 = t_{\nu(\theta^\vee)}r_\theta \in T\mathring W$, so $T$ and $\mathring W$ generate $W$. Finally $T$ is normal in $W$ by Proposition 6.4.1(ii).

Observe that $t_{\nu(\theta^\vee)} = r_0r_\theta$ has determinant $1$, and since $T$ is generated by the elements $wt_{\nu(\theta^\vee)}w^{-1}$, all elements of $T$ have determinant $1$. For $s \in \mathbb R$ set
$$\mathfrak h_s^* := \{\lambda \in \mathfrak h^*_{\mathbb R} \mid \langle\lambda, c\rangle = s\}.$$
Note that $\mathfrak h_s^*$ is $W$-invariant, so $W$ acts on $\mathfrak h_s^*$ by affine transformations. The elements of $\mathfrak h_1^*$ are of the form
$$\sum_{i=1}^{\ell} c_i\alpha_i + b\delta + \Lambda_0 \qquad (b, c_i \in \mathbb R).$$
Since $W$ acts trivially on $\delta$, the action of $W$ on $\mathfrak h_1^*$ factors through to give an action of $W$ on $\mathfrak h_1^*/\mathbb R\delta$. Note that the last space can be identified with
$$\mathbb R\alpha_1 + \dots + \mathbb R\alpha_\ell = \mathring{\mathfrak h}^*_{\mathbb R}$$
via
$$\sum_{i=1}^{\ell} c_i\alpha_i + b\delta + \Lambda_0 \mapsto \sum_{i=1}^{\ell} c_i\alpha_i.$$
We use this identification to get an affine action of $W$ on $\mathring{\mathfrak h}^*_{\mathbb R}$. The affine transformation of $\mathring{\mathfrak h}^*_{\mathbb R}$ corresponding to $w \in W$ is denoted by $\operatorname{af}(w)$, so that
$$\operatorname{af}(w)(\bar\lambda) = \overline{w(\lambda)} \qquad (\lambda \in \mathfrak h_1^*).$$

Proposition 6.4.5 Let $w \in \mathring W$, $m \in M$, $\lambda \in \mathring{\mathfrak h}^*_{\mathbb R}$. Then $\operatorname{af}(w)(\lambda) = w(\lambda)$ and $\operatorname{af}(t_m)(\lambda) = \lambda + m$.

Proof The first statement follows from $w(\delta) = \delta$, $w(\Lambda_0) = \Lambda_0$. For the second one, using (6.11) and (6.12), we get
$$\operatorname{af}(t_m)(\lambda) = \overline{t_m(\lambda + \Lambda_0)} = \overline{\lambda - (\lambda|m)\delta + \Lambda_0 + m - \tfrac12(m|m)\delta} = \lambda + m.$$

Corollary 6.4.6 The affine action of $W$ on $\mathring{\mathfrak h}^*_{\mathbb R}$ is faithful.

Proof Suppose $t_mw \in W$, for $m \in M$ and $w \in \mathring W$, acts trivially on $\mathring{\mathfrak h}^*_{\mathbb R}$. Then $\operatorname{af}(t_mw)(0) = 0$ implies $m = 0$, i.e. $t_m = 1$. But $\mathring W$ acts faithfully on $\mathring{\mathfrak h}^*_{\mathbb R}$, so $w = 1$ also.

Corollary 6.4.7 $r_0$ acts on $\mathring{\mathfrak h}^*_{\mathbb R}$ as the reflection in the affine hyperplane
$$T_{\theta,1} := \{\lambda \in \mathring{\mathfrak h}^*_{\mathbb R} \mid (\lambda|\theta) = 1\}.$$

Proof For $\lambda \in \mathring{\mathfrak h}^*_{\mathbb R}$, we have
$$r_0(\lambda) = t_{\nu(\theta^\vee)}r_\theta(\lambda) = r_\theta(\lambda) + \nu(\theta^\vee) = \lambda - \langle\lambda, \theta^\vee\rangle\theta + \nu(\theta^\vee) = \lambda - \bigl((\lambda|\theta) - 1\bigr)\nu(\theta^\vee),$$
and the result follows.

Define the fundamental alcove
$$C_{\mathrm{af}} = \{\lambda \in \mathring{\mathfrak h}^*_{\mathbb R} \mid (\lambda|\alpha_i) \geq 0 \text{ for } 1 \leq i \leq \ell \text{ and } (\lambda|\theta) \leq 1\}.$$

Proposition 6.4.8 $C_{\mathrm{af}}$ is the fundamental domain for the action of $W$ on $\mathring{\mathfrak h}^*_{\mathbb R}$.

Proof Consider the projection $\pi : \mathfrak h_1^* \to \mathring{\mathfrak h}^*_{\mathbb R}$, $\lambda \mapsto \bar\lambda$. It is surjective and $\operatorname{af}(w)\circ\pi = \pi\circ w$ for $w \in W$. Moreover
$$\pi^{-1}(C_{\mathrm{af}}) = C^\vee \cap \mathfrak h_1^*.$$
It remains to note that $C^\vee \cap \mathfrak h_1^*$ is the fundamental domain for the $W$-action on $\mathfrak h_1^*$.
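As an illustration of Proposition 6.4.8, here is a small sketch assuming type $A_1^{(1)}$, so $\ell = 1$ and $\theta = \alpha_1$ with $(\alpha_1|\alpha_1) = 2$: writing $\lambda = x\alpha_1$, the alcove is $0 \le x \le \tfrac12$, $r_1$ acts by $x \mapsto -x$, and $r_0$ acts by $x \mapsto 1 - x$ (Corollary 6.4.7). Any point of the line can be folded into the alcove by these two reflections; the code below is illustrative only.

```python
# Folding a point of R*alpha_1 into the fundamental alcove for type A_1^(1).
# Coordinates: lambda = x*alpha_1, alcove C_af = {0 <= x <= 1/2},
# r_1: x -> -x,   r_0: x -> 1 - x  (reflection in the hyperplane (lambda|theta)=1).

def fold_into_alcove(x, tol=1e-12):
    word = []
    while True:
        if x < -tol:                 # violates (lambda | alpha_1) >= 0
            x = -x
            word.append("r1")
        elif x > 0.5 + tol:          # violates (lambda | theta) <= 1
            x = 1.0 - x
            word.append("r0")
        else:
            return x, word

for start in [3.7, -2.25, 0.4, -0.5]:
    inside, word = fold_into_alcove(start)
    print(f"{start:6.2f} -> {inside:.2f}  via  {' '.join(word) or '(identity)'}")
```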

We complete this section with the list of explicit constructions related to the root systems of finite type. We identify $Q$ and $Q^\vee$ via the non-degenerate form $(\cdot|\cdot)$. Let $\mathbb R^n$ be the standard Euclidean space with standard basis $\varepsilon_1, \dots, \varepsilon_n$.

6.4.1 $A_\ell$

$$Q = Q^\vee = \Bigl\{\sum_i k_i\varepsilon_i \in \mathbb R^{\ell+1} \;\Big|\; k_i \in \mathbb Z,\ \sum_i k_i = 0\Bigr\},$$
$$\Delta = \{\varepsilon_i - \varepsilon_j \mid 1 \leq i \neq j \leq \ell+1\},$$
$$\Pi = \{\alpha_i = \varepsilon_i - \varepsilon_{i+1} \mid 1 \leq i \leq \ell\},$$
$$\theta = \varepsilon_1 - \varepsilon_{\ell+1},$$
$$W \cong S_{\ell+1} = \{\text{all permutations of the } \varepsilon_i\}.$$

6.4.2 $D_\ell$

$$Q = Q^\vee = \Bigl\{\sum_i k_i\varepsilon_i \in \mathbb R^{\ell} \;\Big|\; k_i \in \mathbb Z,\ \sum_i k_i \in 2\mathbb Z\Bigr\},$$
$$\Delta = \{\pm\varepsilon_i \pm \varepsilon_j \mid 1 \leq i < j \leq \ell\},$$
$$\Pi = \{\alpha_1 = \varepsilon_1 - \varepsilon_2, \dots, \alpha_{\ell-1} = \varepsilon_{\ell-1} - \varepsilon_\ell,\ \alpha_\ell = \varepsilon_{\ell-1} + \varepsilon_\ell\},$$
$$\theta = \varepsilon_1 + \varepsilon_2.$$
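The data above are easy to check by machine. The following sketch (illustrative helper code, using the Euclidean form of $\mathbb R^\ell$ and a rank chosen only for the example) verifies for type $D_\ell$ that the listed $\Delta$ has $2\ell(\ell-1)$ roots of squared length $2$, that every root is an integral combination of the listed simple roots, and that $\theta = \varepsilon_1 + \varepsilon_2$ is the unique dominant root.

```python
import itertools
import numpy as np

l = 5                                   # rank; any l >= 3 works here
e = np.eye(l)                           # epsilon_1, ..., epsilon_l

# Roots and simple roots of type D_l as listed above.
Delta = [s * e[i] + t * e[j]
         for i, j in itertools.combinations(range(l), 2)
         for s in (1, -1) for t in (1, -1)]
Pi = [e[i] - e[i + 1] for i in range(l - 1)] + [e[l - 2] + e[l - 1]]
theta = e[0] + e[1]

assert len(Delta) == 2 * l * (l - 1)                    # |Delta| = 2l(l-1)
assert all(np.isclose(a @ a, 2) for a in Delta)         # all roots have length^2 = 2

# every root is an integer combination of the simple roots
P = np.array(Pi).T                                      # columns = simple roots
for a in Delta:
    c = np.linalg.solve(P, a)
    assert np.allclose(c, np.round(c))

# theta is the unique dominant root
dominant = [a for a in Delta if all(a @ b >= -1e-9 for b in Pi)]
assert len(dominant) == 1 and np.allclose(dominant[0], theta)
print("type D_%d data verified" % l)
```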

6.4.3 $E_8$

$$Q = Q^\vee = \Bigl\{\sum_i k_i\varepsilon_i \in \mathbb R^{8} \;\Big|\; \text{all } k_i \in \mathbb Z \text{ or all } k_i \in \tfrac12 + \mathbb Z,\ \sum_i k_i \in 2\mathbb Z\Bigr\},$$
$$\Delta = \{\pm\varepsilon_i \pm \varepsilon_j \mid 1 \leq i < j \leq 8\} \cup \Bigl\{\tfrac12(\pm\varepsilon_1 \pm \dots \pm \varepsilon_8)\ \text{(even number of minuses)}\Bigr\},$$
$$\Pi = \{\alpha_i = \varepsilon_{i+1} - \varepsilon_{i+2} \mid 1 \leq i \leq 6\} \cup \Bigl\{\alpha_7 = \tfrac12(\varepsilon_1 - \varepsilon_2 - \dots - \varepsilon_7 + \varepsilon_8),\ \alpha_8 = \varepsilon_7 + \varepsilon_8\Bigr\},$$
$$\theta = \varepsilon_1 + \varepsilon_2.$$

6.4.4 $E_7$

$$Q = Q^\vee = \Bigl\{\sum_i k_i\varepsilon_i \in \mathbb R^{8} \;\Big|\; \text{all } k_i \in \mathbb Z \text{ or all } k_i \in \tfrac12 + \mathbb Z,\ \sum_i k_i = 0\Bigr\},$$
$$\Delta = \{\pm(\varepsilon_i - \varepsilon_j) \mid 1 \leq i < j \leq 8\} \cup \Bigl\{\tfrac12(\pm\varepsilon_1 \pm \dots \pm \varepsilon_8)\ \text{(four minuses)}\Bigr\},$$
$$\Pi = \{\alpha_i = \varepsilon_{i+1} - \varepsilon_{i+2} \mid 1 \leq i \leq 6\} \cup \Bigl\{\alpha_7 = \tfrac12(-\varepsilon_1 - \varepsilon_2 - \varepsilon_3 - \varepsilon_4 + \varepsilon_5 + \varepsilon_6 + \varepsilon_7 + \varepsilon_8)\Bigr\},$$
$$\theta = \varepsilon_2 - \varepsilon_1.$$

6.4.5 $E_6$

$$Q = Q^\vee = \Bigl\{\sum_{i=1}^{6} k_i\varepsilon_i + \sqrt2\,k_7\varepsilon_7 \in \mathbb R^{7} \;\Big|\; \text{all } k_i \in \mathbb Z \text{ or all } k_i \in \tfrac12 + \mathbb Z,\ \sum_{i=1}^{6} k_i = 0\Bigr\},$$
$$\Delta = \{\varepsilon_i - \varepsilon_j \mid 1 \leq i \neq j \leq 6\} \cup \Bigl\{\tfrac12(\pm\varepsilon_1 \pm \dots \pm \varepsilon_6 \pm \sqrt2\,\varepsilon_7)\ \text{(three minuses among } \varepsilon_1, \dots, \varepsilon_6)\Bigr\} \cup \{\pm\sqrt2\,\varepsilon_7\},$$
$$\Pi = \{\alpha_i = \varepsilon_i - \varepsilon_{i+1} \mid 1 \leq i \leq 5\} \cup \Bigl\{\alpha_6 = \tfrac12(-\varepsilon_1 - \varepsilon_2 - \varepsilon_3 + \varepsilon_4 + \varepsilon_5 + \varepsilon_6 + \sqrt2\,\varepsilon_7)\Bigr\},$$
$$\theta = \sqrt2\,\varepsilon_7.$$

7 Affine Algebras as Central extensions of Loop Algebras

7.1 Loop Algebras

Affine algebras of type $X_\ell^{(1)}$ are called untwisted. In this chapter we will describe an explicit construction of untwisted affine algebras. Recall the material from §1.5. In particular, $L = \mathbb C[t, t^{-1}]$, and let $\varphi$ be a bilinear form on $L$ defined by
$$\varphi(P, Q) = \operatorname{Res}\frac{dP}{dt}\,Q.$$
One checks that
$$\varphi(P, Q) = -\varphi(Q, P), \tag{7.1}$$
$$\varphi(PQ, R) + \varphi(QR, P) + \varphi(RP, Q) = 0 \qquad (P, Q, R \in L). \tag{7.2}$$
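Properties (7.1) and (7.2) can be confirmed directly on Laurent polynomials. The sketch below is illustrative only (Laurent polynomials are stored as dictionaries of coefficients, a representation chosen here for convenience); it computes $\varphi(P,Q) = \operatorname{Res}\frac{dP}{dt}Q$ and tests both identities on random examples.

```python
import random

# A Laurent polynomial P = sum_n c_n t^n is stored as {n: c_n}.

def mul(P, Q):
    R = {}
    for m, a in P.items():
        for n, b in Q.items():
            R[m + n] = R.get(m + n, 0) + a * b
    return R

def ddt(P):
    return {n - 1: n * c for n, c in P.items() if n != 0}

def res(P):                          # coefficient of t^{-1}
    return P.get(-1, 0)

def phi(P, Q):                       # phi(P, Q) = Res (dP/dt) Q
    return res(mul(ddt(P), Q))

def rand_poly():
    return {n: random.randint(-5, 5) for n in range(-4, 5)}

random.seed(1)
for _ in range(100):
    P, Q, R = rand_poly(), rand_poly(), rand_poly()
    assert phi(P, Q) == -phi(Q, P)                                          # (7.1)
    assert phi(mul(P, Q), R) + phi(mul(Q, R), P) + phi(mul(R, P), Q) == 0   # (7.2)
print("(7.1) and (7.2) hold on random Laurent polynomials")
```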

Note that the Cartan matrix $A$ of type $X_\ell^{(1)}$ is the so-called extended Cartan matrix of the simple finite dimensional Lie algebra $\mathring{\mathfrak g} = \mathfrak g(\mathring A)$, where the matrix $\mathring A$ obtained from $A$ by removing the $0$th row and column is of type $X_\ell$. Consider the loop algebra
$$L(\mathring{\mathfrak g}) := L \otimes \mathring{\mathfrak g}.$$
Fix a non-degenerate invariant symmetric bilinear form $(\cdot|\cdot)$ on $\mathring{\mathfrak g}$. It can be extended to an $L$-valued bilinear form $(\cdot|\cdot)_t$ on $L(\mathring{\mathfrak g})$ via
$$(P \otimes x \mid Q \otimes y)_t = PQ\,(x|y).$$
Also set
$$\psi(a, b) = \operatorname{Res}\Bigl(\frac{da}{dt}\,\Big|\,b\Bigr)_t \qquad (a, b \in L(\mathring{\mathfrak g})).$$
We know from Lemma 1.5.3 that $\psi$ is a 2-cocycle on $L(\mathring{\mathfrak g})$, and
$$\psi(t^i \otimes x,\ t^j \otimes y) = i\,\delta_{i,-j}(x|y).$$
As in §1.5, we have a central extension
$$\bar L(\mathring{\mathfrak g}) = L(\mathring{\mathfrak g}) \oplus \mathbb Cc$$
corresponding to $\psi$. Moreover, $\bar L(\mathring{\mathfrak g})$ is graded with $\deg(t^j \otimes x) = j$, $\deg c = 0$. We then have the corresponding derivation
$$d : \bar L(\mathring{\mathfrak g}) \to \bar L(\mathring{\mathfrak g}), \qquad t^j \otimes x \mapsto jt^j \otimes x, \quad c \mapsto 0.$$
Finally, by adjoining $d$ to $\bar L(\mathring{\mathfrak g})$ we get the Lie algebra
$$\hat L(\mathring{\mathfrak g}) := L(\mathring{\mathfrak g}) \oplus \mathbb Cc \oplus \mathbb Cd,$$
with operation
$$[t^m \otimes x + \lambda c + \mu d,\ t^n \otimes y + \lambda'c + \mu'd] = \bigl(t^{m+n} \otimes [x, y] + \mu n\,t^n \otimes y - \mu'm\,t^m \otimes x\bigr) + m\,\delta_{m,-n}(x|y)\,c.$$

7.2 Realization of untwisted algebras ◦ ◦ ◦ ◦ ◦ Let ω be the Chevalley involution of g, ∆⊂ h∗ be the root system of g, ◦ {α1, . . . , α`} be a root base of ∆ and

H1,...,H` ◦ be the coroot base in h, E1,...,E`, F1,...,F` be the Chevalley genera- ◦ tors, θ be the highest root in ∆, and let ◦ M ◦ g= gα ◦ α∈∆∪{0} ◦ be the root space decomposition. Choose F0 ∈gθ so that

◦ 2 (F | ω (F )) = − , 0 0 (θ|θ) and set ◦ E0 = − ω (F0). Then by Theorem 2.2.3(v), we have

∨ [E0,F0] = −θ . (7.3) 7.2 Realization of untwisted algebras 93 ◦ The elements E0,E1,...,E` generate the algebra g since in the adjoint ◦ ◦ representation we have g= U(n+)(E0). ◦ Return to the algebra Lˆ(g). It is clear that Cc is the (1-dimensional) ◦ ◦ center of Lˆ(g), and the centralizer of d in Lˆ(g) is the direct sum of ◦ ◦ Lie algebras Cc ⊕ Cd ⊕ (1⊗ g). From now on we identify g with the ◦ ◦ subalgebra 1⊗ g⊂ Lˆ(g). Further,

◦ h :=h ⊕Cc ⊕ Cd

◦ ◦ is an (`+2)-dimensional abelian subalgebra of Lˆ(g). Continue λ ∈ h∗ to ◦ a linear function on h by setting hλ, ci = hλ, di = 0, so h∗ gets identified with a subspace of h∗. Denote by δ the linear function on h defined from

δ|◦ = 0, hδ, di = 1. h⊕Cc Set

e0 = t ⊗ E0, −1 f0 = t ⊗ F0,

ei = 1 ⊗ Ei (1 ≤ i ≤ `)

fi = 1 ⊗ Fi (1 ≤ i ≤ `).

From (7.3) we get 2 [e , f ] = c − θ∨. (7.4) 0 0 (θ|θ)

Note the following facts on the root decomposition

◦ M ◦ Lˆ(g) = Lˆ(g)α α∈∆∪{0} with respect to h:

◦ Lˆ(g)0 = h, ◦ ∆ = {jδ + γ | j ∈ Z, γ ∈∆} ∪ {jδ | j ∈ Z \{0}}, ◦ ◦ j Lˆ(g)jδ+γ = t ⊗ gγ , ◦ ◦ j Lˆ(g)jδ = t ⊗ h . 94 Affine Algebras as Central extensions of Loop Algebras Set

Π = {α0 := δ − θ, α1, . . . , α`}, 2 Π∨ = {α∨ := c − θ∨, α∨ := 1 ⊗ H , . . . , α∨ := 1 ⊗ H }. 0 (θ|θ) 1 1 ` ` Note in view of Proposition 6.3.3(i) that our element θ agrees with the one introduced in (6.8), whence

∨ A = (hαi , αji)0≤i,j≤`. (7.5) So (h, Π, Π∨) is a realization of A.

Theorem 7.2.1 $\hat L(\mathring{\mathfrak g})$ is the affine Kac-Moody algebra $\mathfrak g(A)$, $\mathfrak h$ is its Cartan subalgebra, $\Pi$ and $\Pi^\vee$ are its root and coroot bases, and $e_0, \dots, e_\ell, f_0, \dots, f_\ell$ are Chevalley generators.

Proof We apply Proposition 1.5.1. All relations are easy to check (or have already been checked). ◦ Further, we will prove that Lˆ(g) has no non-trivial ideals i with i ∩ h = {0}. Indeed, if i is such an ideal, then by the Weight Lemma, ◦ j i ∩ Lˆ(g)α 6= {0} for some α = jδ + γ ∈ ∆. So some t ⊗ x ∈ i for some ◦ ◦ j ∈ Z and x ∈gγ , x 6= 0, γ ∈ ∆ ∪ {0}. By taking y ∈g−γ such that (x|y) 6= 0, we get [tj ⊗ x, t−j ⊗ y] = j(x|y)c + [x, y] ∈ h ∩ i. ◦ Hence j(x|y)c + [x, y] = 0. Since [x, y] ∈h we deduce that j = 0. Since ◦ α = jδ + γ 6= 0, we have γ 6= 0. Then 0 6= [x, y] ∈h ∩i. Contradiction. ◦ Finally we prove that the ei, fi and h generate Lˆ(g). Let g1 be the sub- ◦ algebra in Lˆ(g) generated by the ei, fi and h. Since E1,...,E`,F1,...,F` ◦ ◦ generate g, we deduce 1⊗ g⊂ g1. Let ◦ i = {x ∈g| t ⊗ x ∈ g1}. ◦ Then e0 = t ⊗ E0 ∈ g1, so E0 ∈ i, and i 6= 0. Also, if x ∈ i, y ∈g, then

t ⊗ [x, y] = [t ⊗ x, 1 ⊗ y] ∈ g1, ◦ ◦ ◦ ◦ whence i is an ideal of g. Since g is simple we have i =g or t⊗ g⊂ g1. We may now use the relation [t ⊗ x, tk−1 ⊗ y] = tk ⊗ [x, y] 7.2 Realization of untwisted algebras 95 ◦ k to deduce by induction on k that t ⊗ g⊂ g1 for all k > 0. In an analogous ◦ −1 −k way, starting with f0 = t ⊗ F0 we can show that t ⊗ g⊂ g1 for all k > 0.

Corollary 7.2.2 Let g be a non-twisted affine Lie algebra of rank ` + 1. Then the multiplicity of each imaginary root in g is `. ◦ Let (·|·) be the normalized invariant form on g (see the end of §6.3). ◦ Extend it to Lˆ(g) by ◦ (P ⊗ x|Q ⊗ y) = (Res t−1PQ)(x|y)(x, y ∈g,P,Q ∈ L), ◦ (Cc + Cd|L(g)) = 0, (c|c) = 0, (d|d) = 0, (c|d) = 1. The definition implies

i j (t ⊗ x|t ⊗ y) = δi,−j(x|y). We get a non-degenerate symmetric bilinear form. In order to check invariance, let us consider the only non-trivial case: ([d, P ⊗ x]|Q ⊗ y) = (d|[P ⊗ x, Q ⊗ y]). The left hand side of this equality is dP dP (t ⊗ x|Q ⊗ y) = (Res Q)(x|y), dt dt while the right hand side is dP dP (d|PQ ⊗ [x, y] + (Res Q)(x|y)c) = (Res Q)(x|y). dt dt Finally, the restriction of (·|·) to h agrees with the form defined in §6.2. Note that the element c is the canonical central element and d is the energy element. ◦ ◦ ◦ ◦ ◦ Let g=n− ⊕ h ⊕ n+ be the canonical triangular decomposition of g. ◦ Then the triangular decomposition of Lˆ(g) is ◦ Lˆ(g) = n− ⊕ h ⊕ n+, where ◦ −1 −1 ◦ −1 ◦ n− = (t C[t ] ⊗ (n+ ⊕ h)) ⊕ C[t ]⊗ n−, ◦ ◦ ◦ n+ = (tC[t] ⊗ (n− ⊕ h)) ⊕ C[t]⊗ n+ . 96 Affine Algebras as Central extensions of Loop Algebras ◦ The Chevalley involution of g can be written in terms of ω as follows ◦ ω(P (t) ⊗ x + λc + µd) = P (t−1)⊗ ω (x) − λc − µd. Set X t = Cc + gsδ. s∈Z\{0} Then t is isomorphic to the infinite dimensional Heisenberg algebra with s center Cc. Indeed, t = Cc ⊕s∈Z\{0} t ⊗ h, and the only non-trivial commutation is [ts ⊗ h, t−s ⊗ h0] = s(h|h0).

7.3 Explicit Construction of Finite Dimensional Lie Algebras

Let Q be the root lattice of type A`,D`, or E`, and let (·|·) be the normalized form on Q, i.e. ∆ = {α | (α|α) = 2}.

We then also have $(\alpha|\alpha) \in 2\mathbb Z$ for all $\alpha \in Q$ (explicit check). Let $\varepsilon : Q \times Q \to \{\pm1\}$ be a function satisfying the "bilinearity" condition for all $\alpha, \alpha', \beta, \beta' \in Q$:
$$\varepsilon(\alpha + \alpha', \beta) = \varepsilon(\alpha, \beta)\varepsilon(\alpha', \beta), \qquad \varepsilon(\alpha, \beta + \beta') = \varepsilon(\alpha, \beta)\varepsilon(\alpha, \beta'), \tag{7.6}$$
and the condition
$$\varepsilon(\alpha, \alpha) = (-1)^{(\alpha|\alpha)/2} \qquad (\alpha \in Q). \tag{7.7}$$
We call such $\varepsilon$ an asymmetry function. Substituting $\alpha + \beta$ into the last equation we get
$$\varepsilon(\alpha, \beta)\varepsilon(\beta, \alpha) = (-1)^{(\alpha|\beta)} \qquad (\alpha, \beta \in Q). \tag{7.8}$$
An asymmetry function can be constructed as follows: choose an orientation of the Dynkin diagram, and let
$$\varepsilon(\alpha_i, \alpha_j) = -1 \quad \text{if } i = j \text{ or if } i \to j, \qquad \varepsilon(\alpha_i, \alpha_j) = 1 \quad \text{otherwise, i.e. if } j \to i \text{ or if } i, j \text{ are not joined},$$
and extend by bilinearity. An easy check shows that the required conditions are satisfied.
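The orientation recipe is easy to verify computationally in a small case. The sketch below is an illustration (not from the text): it builds $\varepsilon$ for the orientation $1 \to 2 \to 3$ of the $A_3$ diagram, extends it biadditively via coordinates in the $\alpha$-basis, and checks (7.7) and (7.8) on a box of lattice points.

```python
import itertools
import numpy as np

# Type A_3, simple roots alpha_1, alpha_2, alpha_3, orientation 1 -> 2 -> 3.
C = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]])               # (alpha_i | alpha_j), simply laced
eps_simple = np.ones((3, 3), dtype=int)
for i in range(3):
    eps_simple[i, i] = -1                  # eps(alpha_i, alpha_i) = -1
eps_simple[0, 1] = eps_simple[1, 2] = -1   # eps = -1 along the oriented edges

def form(a, b):
    return int(a @ C @ b)

def eps(a, b):
    """Biadditive extension of eps_simple to the root lattice Q."""
    s = sum(int(a[i]) * int(b[j]) for i in range(3) for j in range(3)
            if eps_simple[i, j] == -1)
    return -1 if s % 2 else 1

for a in itertools.product(range(-3, 4), repeat=3):
    a = np.array(a)
    assert eps(a, a) == (-1) ** ((form(a, a) // 2) % 2)               # (7.7)
    for b in itertools.product(range(-2, 3), repeat=3):
        b = np.array(b)
        assert eps(a, b) * eps(b, a) == (-1) ** (form(a, b) % 2)      # (7.8)
print("(7.7) and (7.8) hold for the orientation 1 -> 2 -> 3 of A_3")
```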

Now let $\mathfrak h$ be the complex hull of $Q$ and extend $(\cdot|\cdot)$ to $\mathfrak h$. Take the direct sum of $\mathfrak h$ with $1$-dimensional vector spaces $\mathbb CE_\alpha$, one for each $\alpha \in \Delta$:
$$\mathfrak g = \mathfrak h \oplus \Bigl(\bigoplus_{\alpha\in\Delta} \mathbb CE_\alpha\Bigr).$$
Define the bracket on $\mathfrak g$ as follows:
$$\begin{cases}
[h, h'] = 0 & \text{if } h, h' \in \mathfrak h,\\
[h, E_\alpha] = (h|\alpha)E_\alpha & \text{if } h \in \mathfrak h,\ \alpha \in \Delta,\\
[E_\alpha, E_{-\alpha}] = -\alpha & \text{if } \alpha \in \Delta,\\
[E_\alpha, E_\beta] = 0 & \text{if } \alpha, \beta \in \Delta,\ \alpha + \beta \notin \Delta\cup\{0\},\\
[E_\alpha, E_\beta] = \varepsilon(\alpha, \beta)E_{\alpha+\beta} & \text{if } \alpha, \beta, \alpha + \beta \in \Delta.
\end{cases} \tag{7.9}$$
Define the symmetric bilinear form on $\mathfrak g$ extending it from $\mathfrak h$ as follows:
$$(h|E_\alpha) = 0 \quad (h \in \mathfrak h,\ \alpha \in \Delta), \qquad (E_\alpha|E_\beta) = -\delta_{\alpha,-\beta} \quad (\alpha, \beta \in \Delta). \tag{7.10}$$

Proposition 7.3.1 g is the simple Lie algebra of type A`,D` or E`, respectively, the form (·|·) being the normalized invariant form.

Proof To check the skew-commutativity it suffices to prove that [Eα,Eβ] = −[Eβ,Eα] when α, β, α + β ∈ ∆. Note that

α ± β ∈ ∆ ⇔ (α|β) = ∓1 (α, β ∈ ∆). (7.11)

Now the required equality follows from (7.8). Next we check Jacobi identity for three basis elements x, y, z. If one of these elements is in h, the Jacobi identity trivially holds. So let x = Eα, y = Eβ, z = Eγ . If α + β, α + γ, β + γ 6∈ ∆ ∪ {0}, the identity holds trivially, so we may assume that α + β ∈ ∆ ∪ {0}. If α + β = 0, consider four cases: (1) α ± γ 6∈ ∆ ∪ {0}; (2) α + γ or α − γ = 0; (3) α + γ ∈ ∆; (4) α − γ ∈ ∆. The Jacobi identity holds in cases (1) and (2) in view of (7.11). In case (3) it reduces to ε(−α, α + γ)ε(α, γ) = (α|γ), which follows from the bilinearity and (7.7). The case (4) is similar. Thus we may assume that α + β, α + γ, β + γ ∈ ∆, for the remaining cases follow either trivially or from bilinearity of ε. So (α|β) = (α|γ) = 98 Affine Algebras as Central extensions of Loop Algebras (β|γ) = −1, whence |α + β + γ|2 = 0, so α + β + γ = 0 using positive definiteness of the form. So Jacobi identity boils down to ε(β, γ)(β + γ) = −ε(α, β)(α + β) + ε(α, γ)(α + γ), which holds by bilinearity again. Thus g is a Lie algebra. Let

∨ Π = Π = {α1, . . . , α`}, ei = Eαi , fi = −E−αi . We can now apply Proposition 1.5.1. To check that the form is invariant is straightforward. 8 Twisted Affine Algebras and Automorphisms of Finite Order

8.1 Graph Automorphisms

Let $A$ be of finite type $X_N$. Let $\sigma$ be a permutation of $\{1, \dots, N\}$ such that $a_{\sigma(i)\sigma(j)} = a_{ij}$. Such a $\sigma$ can be thought of as an automorphism of the Dynkin diagram of $A$. Let $\mathfrak g = \mathfrak g(A)$. It is clear that such a diagram automorphism defines an automorphism of $\mathfrak g$, denoted by the same letter $\sigma$ and called a graph automorphism of $\mathfrak g$:
$$\sigma : \mathfrak g \to \mathfrak g, \qquad e_i \mapsto e_{\sigma(i)}, \quad f_i \mapsto f_{\sigma(i)}, \quad \alpha_i^\vee \mapsto \alpha_{\sigma(i)}^\vee.$$
The interesting graph automorphisms are listed in Figure 8.1. Our first main goal is to determine the fixed point subalgebra $\mathfrak g^\sigma$. We consider the linear action of $\sigma$ on $V := \mathfrak h^*_{\mathbb R}$ given by
$$\sigma(\alpha_i) = \alpha_{\sigma(i)} \qquad (1 \leq i \leq N).$$
For each orbit $J$ of $\sigma$ on $\{1, \dots, N\}$ define
$$\alpha_J := \frac{1}{|J|}\sum_{j\in J}\alpha_j. \tag{8.1}$$
Then the $\alpha_J$ form a basis of $V^\sigma$ as $J$ runs over the $\sigma$-orbits on $\{1, \dots, N\}$. Note that $\alpha_J$ is the orthogonal projection of $\alpha_j$ ($j \in J$) onto $V^\sigma$. We see from Figure 8.1 that the following situations for the orbits $J$ are possible:
(1) $|J| = 1$;
(2) $|J| = 2$, $J = \{j, j'\}$, $\alpha_j + \alpha_{j'} \notin \Delta$;
(3) $|J| = 3$, $J = \{j, j', j''\}$, $\alpha_j + \alpha_{j'},\ \alpha_j + \alpha_{j''},\ \alpha_{j'} + \alpha_{j''} \notin \Delta$;
(4) $|J| = 2$, $J = \{j, j'\}$, $\alpha_j + \alpha_{j'} \in \Delta$.

These are referred to as orbits of types A1, A1 × A1, A1 × A1 × A1, and A2, respectively.


[Figure 8.1 shows the relevant Dynkin diagrams together with their non-trivial diagram automorphisms:
$A_{2\ell}$: $\alpha_i \mapsto \alpha_{2\ell+1-i}$;
$A_{2\ell-1}$: $\alpha_i \mapsto \alpha_{2\ell-i}$;
$D_{\ell+1}$: $\alpha_i \mapsto \alpha_i$ ($1 \leq i < \ell$), $\alpha_\ell \mapsto \alpha_{\ell+1}$, $\alpha_{\ell+1} \mapsto \alpha_\ell$;
$E_6$: $\alpha_i \mapsto \alpha_{6-i}$ ($1 \leq i \leq 5$), $\alpha_6 \mapsto \alpha_6$;
$D_4$: $\alpha_1 \mapsto \alpha_3$, $\alpha_3 \mapsto \alpha_4$, $\alpha_4 \mapsto \alpha_1$, $\alpha_2 \mapsto \alpha_2$.]

Fig. 8.1. Graph automorphisms

Lemma 8.1.1 The vectors αJ , αK for distinct σ-orbits J, K form a base 8.1 Graph Automorphisms 101 of a root system of rank 2 as follows:

JK Type of root system (i) •• •• ¨• (ii) •H¨ ••> H• ¨• (iii) •H¨ • ••> H• •• (iv) •• •• •• (v) ••> •• Finally, if no node in J is connected to any node in K then the type of the root system is A1 × A1.

Proof This is an easy calculation. Suppose for example that we have case (v) with roots numbered

1 •• 2 4 •• 3 Then α + α α + α α = 1 4 , α = 2 3 . J 2 K 2

So

1 (α |α ) = (α |α ), J J 2 1 1 1 (α |α ) = (α |α ), K K 4 1 1 1 (α |α ) = − (α |α ), J K 4 1 1

This is what was claimed.

σ Corollary 8.1.2 Let Π be the set of vectors αJ for all σ-orbits J on 102 Twisted Affine Algebras and Automorphisms of Finite Order {1,...,N}. Then Πσ is a base of a root system of the following type:

Type Π Order of σ Type Πσ A2` 2 B` A2`−1 2 C` D`+1 2 B` D4 3 G2 E6 2 F4 Now let ∆σ be the root system in V σ with base Πσ and Weyl group W σ and Cartan matrix Aσ. In view of Lemma 8.1.1 and Corollary 8.1.2, we know the type of this Cartan matrix and:

Lemma 8.1.3 Let $I, J$ be distinct $\sigma$-orbits on $\{1, \dots, N\}$. Then
$$a^\sigma_{IJ} = \begin{cases}
\sum_{i\in I} a_{ij} \ \text{ for any } j \in J & \text{if } I \text{ has type } A_1,\ A_1\times A_1 \text{ or } A_1\times A_1\times A_1,\\[2pt]
2\sum_{i\in I} a_{ij} \ \text{ for any } j \in J & \text{if } I \text{ has type } A_2.
\end{cases}$$
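As a quick illustration of the lemma (and of the $A_{2\ell-1} \rightsquigarrow C_\ell$ row of Corollary 8.1.2), the following sketch folds the Cartan matrix of type $A_3$ under the diagram flip $1 \leftrightarrow 3$. The orbit and type detection here are ad hoc helpers written only for this example, not notation from the text.

```python
import numpy as np

# Cartan matrix of type A_3; nodes 1, 2, 3 are 0-indexed below.
A = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]])

# Orbits of the diagram flip sigma: 1 <-> 3, 2 fixed.
orbits = [[0, 2], [1]]                 # J_1 = {1,3} (type A_1 x A_1), J_2 = {2}

def orbit_type_A2(I):
    # a 2-element orbit has type A_2 exactly when its two nodes are joined
    return len(I) == 2 and A[I[0], I[1]] != 0

def folded_entry(I, J):
    j = J[0]                           # any j in J gives the same value
    s = sum(A[i, j] for i in I)
    return 2 * s if orbit_type_A2(I) else s

# Off-diagonal entries follow Lemma 8.1.3; the same formula happens to
# return 2 on the diagonal for these orbits.
A_sigma = np.array([[folded_entry(I, J) for J in orbits] for I in orbits])
print(A_sigma)   # [[2, -2], [-1, 2]] -- type C_2 (= B_2), matching Corollary 8.1.2
```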

Lemma 8.1.4 There is an isomorphism W σ → W 1 := {w ∈ W | wσ = σ σw} under which the fundamental reflection rJ ∈ W corresponding to αJ maps to (w0)J ∈ W , the element of maximal length in the Weyl group WJ generated by the ri, i ∈ J.

1 σ Proof Observe first that W acts on V . Next, we claim that (w0)J ∈ 1 −1 −1 W for each J. Indeed, σrjσ = rσ(j) implies σWJ σ = WJ , so σ induces the length preserving automorphism of WJ , hence (w0)J is invariant.

Now, note that (w0)J |V σ = rJ . Indeed, using the defining property of the longest element we have 1 X 1 X (w ) (α ) = (w ) ( α ) = − α = −α 0 J J 0 J |J| j |J| j J j∈J j∈J

σ Moreover, if v ∈ V and (αJ |v) = 0 then (αj|v) = 0 for all j ∈ J, whence (w0)J (v) = v. 1 Next, we show that the elements (w0)J generate W . Take w 6= 1 in 1 W . Then there exists a simple root αj with w(αj) < 0. Let J be the σ-orbit of j. Then w(αi) < 0 for all i ∈ J. Now (w0)J changes the signs of all roots in ∆J but of none in ∆ \ ∆J . Hence `(w(w0)J ) < `(w). Now apply induction on the length. We may now define a homomorphism W 1 → W σ by restricting the 1 σ action of w ∈ W from V to V , which maps (w0)J to rJ and so is 8.1 Graph Automorphisms 103 surjective. To see that the homomorphism is injective, take w 6= 1 in 1 W . We saw that there exists a σ-orbit J such that w(αi) < 0 for all i ∈ J, whence w(αJ ) 6= αJ . From now on we identify W σ and W 1. For each α ∈ ∆ denote by ασ its orthogonal projection into V σ.

Lemma 8.1.5 (i) For each α ∈ ∆, ασ is a positive multiple of a root in ∆σ. (ii) Let ∼ be the equivalence relation on ∆ given by α ∼ β ⇔ ασ is a positive multiple of βσ. Then the equivalence classes are the + σ subsets of ∆ of the form w(∆J ) where w ∈ W and J is a σ-orbit on {1,...,N}. (iii) There is a bijection between equivalence classes in ∆ and roots in σ + ∆ given by w(∆J ) 7→ w(αJ ).

+ Proof We first show that each α ∈ ∆ lies in w(∆J ) for some w ∈ σ −1 σ W and some σ-orbit J. We have σw0σ = w0, so w0 ∈ W . By σ Lemma 8.1.4 the elements (w0)J generate W , and so we can write + − w0 = (w0)J1 ... (w0)Jr . Let α ∈ ∆ . Then w0(α) ∈ ∆ . Thus there exists i such that

+ (w0)Ji+1 ... (w0)Jr (α) ∈ ∆ , but − (w0)Ji (w0)Ji+1 ... (w0)Jr (α) ∈ ∆ . Hence (w ) ... (w ) (α) ∈ ∆+ , 0 Ji+1 0 Jr Ji that is α ∈ (w ) ... (w ) (∆+ ), 0 Jr 0 Ji+1 Ji and −α ∈ (w ) ... (w ) (w ) (∆+ ). 0 Jr 0 Ji+1 0 Ji Ji σ + Now consider the projection α for α ∈ ∆J . If J has type A1,A1 ×A1 + σ or A1 × A1 × A1 then ∆J = ΠJ , so α = αJ . If J has type A2, then + ΠJ = {αj, αj0 } and ∆J = {αj, αj0 , αj + αj0 }, and  α if α = α or α 0 , ασ = J j j 2αJ if α = αj + αj0 . 104 Twisted Affine Algebras and Automorphisms of Finite Order

+ σ Thus for α ∈ ∆J , we know that α is a positive multiple of αJ . Hence + σ σ for α ∈ w(∆J ) with w ∈ W we know that α is a positive multiple of σ w(αJ ) ∈ ∆ , proving (i). + σ We now know that the elements of each set w(∆J ) for w ∈ W lie in + 0 + the same equivalence class. Suppose w(∆J ) and w (∆K ) lie in the same 0 σ 0 σ class for w, w ∈ W and orbits J, K. Then w(αJ ) = w (αK ) ∈ ∆ 0−1 0−1 or w w(αJ ) = αK . Consider the root w w(αj) ∈ ∆ for j ∈ J. 0−1 σ 0−1 The root has the property that (w w(αj)) = αK . So w w(αj) is a non-negative linear combination of the αk for k ∈ K. Hence 0−1 + 0−1 + + w w(ΠJ ) ⊂ ∆K , and so w w(∆J ) ⊂ ∆K . By symmetry we also 0 −1 + + + have w w (∆K ) ⊂ ∆J . Hence we have equality, that is w(∆J ) = 0 + w (∆K ), which completes the proof of (ii). σ σ Now, any root in ∆ has form w(αJ ) for some w ∈ W and some σ- orbit J. The set of the roots α ∈ ∆ such that ασ is a positive multiple of + + w(αJ ) is w(∆J ), as shown above. Thus w(∆J ) 7→ w(αJ ) is a bijection between equivalence classes of ∆ and elements of ∆σ, giving (iii).

Theorem 8.1.6 Let A be of finite type, and σ be a graph automorphism of g = g(A). Then gσ is isomorphic to g(Aσ).

∨ Proof For each σ-orbit J on {1,...,N} we define elements eJ , fJ , αJ of gσ by X X ∨ X ∨ eJ = ej, fJ = fj, αJ = αj j∈J j∈J j∈J if J is of type A1,A1 × A1 or A1 × A1 × A1, and √ √ X X ∨ X ∨ eJ = 2 ej, fJ = 2 fj, αJ = 2 αj j∈J j∈J j∈J

∨ σ if J is of type A2. Then the αJ form a basis of h . One checks us- σ ∨ ∨ ing Lemma 8.1.3 that h together with Π = {αJ }, Π = {αJ } give a realization of Aσ and the relations (1.12-1.15) hold. σ ∨ Thus the subalgebra g1 of g generated by the elements eJ , fJ , αJ is a quotient of g˜(Aσ). Since the dimension of hσ is the same as the dimension σ of the Cartan subalgebra of g˜(A ), we deduce that g1 is the quotient of g˜(Aσ) by an ideal whose intersection with the Cartan subalgebra is trivial. But we know that among such ideals there is the largest one r so that g˜(Aσ)/r =∼ g(Aσ), the simple finite dimensional Lie algebra of type Aσ. Moreover, the root spaces of g(Aσ) are 1-dimensional. Now σ σ it follows that g1 = g and it is isomorphic to g(A ) by dimension 8.1 Graph Automorphisms 105 considerations. Indeed, consider the decomposition of ∆ into equivalence classes given by Lemma 8.1.5. For each equivalence class S let

gS = ⊕α∈Sgα.

σ P σ σ Then σ(gS) = gS, and g = h ⊕ S gS. Now dim gS ≤ 1 for each equivalence class S. This is clear if S has type A1, A1 × A1 or A1 × A1 × A1. Suppose S has type A2. Then S = {α, β, α + β} with σ(gα) = gβ, σ(gβ) = gα, σ(gα+β) = gα+β. Take non-zero eα ∈ gα, eβ ∈ gβ. −1 −1 Then σ(eα) = λeβ, σ(eβ) = λ eα. Hence σ([eα, eβ]) = [λeβ, λ eα] = σ −[eα, eβ]. It follows that gS = C(eα + λeβ). We thus have

dim gσ ≤ dim hσ + |∆/ ∼ | = dim hσ + |∆σ| = dim g(Aσ), which completes the proof.

If σ is of order r (recall that r = 2 or 3) set η = e2πi/r, and g(i) be the ηi-eigenspace of σ on g for 0 ≤ i < r. Note that g(0) = gσ, and M g = g(i), 0≤i

Proposition 8.1.7 g(i) is an irreducible gσ-module.

Proof If i = 0 this is clear. Let i 6= 0. Suppose first that A = A2`−1,D`+1 or E6 (and so r = 2, i = 1). Let {α, β} be a 2-element orbit of σ on ∆ and Eα,Eβ be the corresponding root elements such (1) that σ(Eα) = Eβ. Then Eα − Eβ ∈ g , and, moreover, such elements yield a basis of g(1) as we run through all 2-element orbits. The roots α, β ∈ h∗ have the same restriction to hσ, and this restriction is the σ σ weight of Eα − Eβ with respect to h . The highest weight of the g - module g(1) thus comes from the highest 2-element orbit. Explicit check shows that the highest 2-element orbits are:

for A2`−1:(α1 + ··· + α2`−2, α2 + ··· + α2`−1);

for D`+1:(α1 + ··· + α`−1 + α`, α1 + ··· + α`−1 + α`);

for E6:(α1 + 2α2 + 2α3 + α4 + α5 + α6, α1 + α2 + 2α3 + 2α4 + α5 + α6).

σ Moreover, in this three cases the subalgebra g has type C`,B`, or F4, respectively. For the standard labellings of the corresponding Dynkin 106 Twisted Affine Algebras and Automorphisms of Finite Order diagrams, the highest weights for the gσ-module g(1) are:

for C`: α1 + 2α2 + ··· + 2α`−1 + α` = ω2;

for B`: α1 + α2 + ··· + α`−1 + α` = ω1;

for F4: α1 + 2α2 + 3α3 + 2α4 = ω4.

Note that in all cases we get the highest short root θ0 as the highest weight. Now

 2 2` − ` − 1, for C`, (1) σ  dim g = dim g − dim g = 2` + 1, for B`,  26, for F4, which according to Weyl’s dimension formula is the dimension of the irreducible module with the highest weight θ0. The argument for D4 and A2` is similar. To get a precise multiplication table for the non-simply-laced finite di- mensional Lie algebras, it is convenient to change our notation. Roughly speaking we drop indices σ from objects related to the fixed points of σ (so ∆σ becomes ∆) and use primes 0 to distinguish the objects corre- sponding the big Lie algebra g (so ∆ becomes ∆0. To be more precise, ∆0 is the root system of type (XN , r) = (D`+1, 2), (A2`−1, 2), (E6, 2), (D4, 3) 0 0 0 0 0 (r) with roots α ∈ ∆ , simple roots α1 . . . , αN , etc. Let g = g(XN ) be the corresponding Lie algebra, and σ the graph automorphism of g0 as before. We already know that g := g0σ is a simple Lie algebra of type

B`,C`,F4,G2, respectively. In all four cases fix an orientation of the Dynkin diagram XN which is σ-invariant, and let ε(α, β) be the corresponding asymme- try function, which is then also σ-invariant. This gives us an explicit realization 0 0 M 0 g = h ⊕ CEα0 α0∈∆0 as in (7.9). It is easy to see that

0 0 0 0 0 0 µ : α 7→ σ(α ),Eα0 7→ Eσ(α0) (α ∈ ∆ ) (8.2) is an automorphism of g0 which agrees with the graph automorphism σ on the generators, so σ = µ. Note that there are no σ-orbits of type A2, since we are staying away from type A2`. Moreover, there is a bijection between σ-orbits on ∆0 and the root system ∆ := ∆σ, given by mapping 8.1 Graph Automorphisms 107

0 0σ 1 P an orbit J = {α ,... } to α = |J| α∈J α. So we can (and will) identify the σ-orbits on ∆0 with elements of ∆. P 0 For α ∈ ∆ denote Eα = α0∈α Eα0 (note we have identified elements 0 α ∈ ∆ with σ-orbits on ∆ ). Similarly, for simple roots α1, . . . , α` ∈ ∆ 1 P 0 we have αi = ( 0 α )—this is just the formula (8.1) in our new |αi| α ∈αi 0σ notation. Let h = h , the subspace with basis α1, . . . , α`. Then

0σ M g = g = h ⊕ CEα. α∈∆ Moreover, the normalized invariant form (·|·)0 on g0 is σ-invariant and so it restricts to the invariant form (·|·) on g, which is non-degenerate and invariant. Moreover (·|·) is normalized since we already know that the single orbit elements α0 correspond to the long roots α ∈ ∆, and so 0 0 0 (α|α) = (α |α ) = 2 for α ∈ ∆l.

Proposition 8.1.8 Let ∆ = ∆s ∪ ∆l be a non-simply-laced root system of finite type in Euclidean space hR with root lattice Q. Set r = 2 if ∆ = B`, C`, or F4, and r = 3 if ∆ = G2. Let us h be the complex hull of hR and extend (·|·) to h. Let M g = h ⊕ ( CEα). α∈∆ Define the bracket on g as follows:

 [h, h0] = 0 if h, h0 ∈ h   [h, Eα] = (h|α)Eα if h ∈ h, α ∈ ∆   [E ,E ] = −α if α ∈ ∆  α −α l  [E ,E ] = −rα if α ∈ ∆  α −α s [Eα,Eβ] = 0 if α, β ∈ ∆, α + β 6∈ ∆ ∪ {0}  0 0  [Eα,Eβ] = (p + 1)ε(α , β )Eα+β if α, β, α + β ∈ ∆ where p ∈ Z≥0   is maximal with α − pβ ∈ ∆ and   α0 ∈ α, β0 ∈ β are representatives   such that α0 + β0 ∈ α + β

The normalized bilinear form (·|·) on g is given by extending (·|·) as follows:  −δα,−β if α, β ∈ ∆l (h, Eα) = 0, (Eα,Eβ) = −rδα,−β if α, β ∈ ∆s

Proof We just need to check the relations. The first one is obvious. For 108 Twisted Affine Algebras and Automorphisms of Finite Order the second, working in g0 we get

X 0 0 [h, Eα] = (h|α )Eα0 = (h|α)Eα, α0∈α since orthogonal projection of every α0 ∈ α to h equals α. The third and fourth relations follow from

X 0 X 0 X 0 0 X 0 [Eα,E−α] = [ Eα0 , E−β0 ] = [Eα0 ,E−α0 ] = − α . α0∈α β0∈α α0∈α α0∈α The fifth relation comes from the following (easy) fact: if α+β 6∈ ∆∪{0} then α0 + β0 6∈ ∆0 ∪ {0}. For the last relation, we have

X 0 X 0 X 0 0 0 [Eα,Eβ] = [ Eα0 , Eβ0 ] = ε(α , β )Eα0+β0 . α0∈α β0∈β α0∈α,β0∈β,α0+β0∈∆0 Now note that α0 + β0 ∈ α + β and ε(α0, β0) is the same for any repre- sentatives α0 ∈ α, β0 ∈ β such that α0 + β0 ∈ ∆0. Next check explicitly that each Eα0+β0 appears (p + 1) times.

8.2 Construction of Twisted Affine Algebras In this section we construct explicit realization of affine algebras of types (r) ◦ XN with r > 1, referred to as twisted affine algebras. Let g be a finite dimensional Lie algebra of type XN , and σ be its graph automorphism ◦ of order r. Then σ extends to a graph automorphism of Lˆ(g) denoted again by σ and given by ◦ σ : c 7→ c, d 7→ d, ti ⊗ x 7→ ti ⊗ σ(x)(x ∈g). ◦ Set η = e2πi/r and define a twisted graph automorphism of Lˆ(g) by ◦ τ : c 7→ c, d 7→ d, ti ⊗ x 7→ η−iti ⊗ σ(x)(x ∈g).

Proposition 8.2.1 We have ˆ τ ∼ (2) L(g(A2`−1)) = g(A2`−1), ˆ τ ∼ (2) L(g(A2`)) = g(A2` ), ˆ τ ∼ (2) L(g(D`+1)) = g(D`+1), ˆ τ ∼ (2) L(g(E6)) = g(E6 ), ˆ τ ∼ (3) L(g(D4)) = g(D4 ). 8.2 Construction of Twisted Affine Algebras 109 ◦ ◦ ◦ (2) g Proof We will skip the proof for A2` . Let = g(A), where A is of type A2`−1,D`+1,E6 or D4. If r = 2 then we have ◦ ◦ ◦ τ X 2n σ X 2n+1 (1) Lˆ(g) = (t ⊗ (g) ) ⊕ (t ⊗ (g) ) ⊕ Cc ⊕ Cd, n∈Z k∈Z whereas if r = 3, then ◦ ◦ ◦ ◦ τ X 3n σ X 3n+1 (1) X 3n+2 (2) Lˆ(g) = (t ⊗(g) )⊕ (t ⊗(g) )⊕ (t ⊗(g) )⊕Cc⊕Cd. n∈Z n∈Z n∈Z

Let E1,...,EN ,F1,...,FN ,H1,...,HN be the standard generators of ◦ ◦ g. Pick a representative θ0 ∈∆ of the highest 2- or 3-element σ-orbit on ◦ ∆, cf. the proof of Proposition 8.1.7. Specifically we pick

for A2`−1: θ0 = α1 + α2 + ··· + α2`−2;

for D`+1: θ0 = α1 + ··· + α`−1 + α`;

for E6: θ0 = α1 + 2α2 + 2α3 + α4 + α5 + α6;

for D4: θ0 = α2 + α1 + α3. ◦ ◦ g g ∨ Choose elements Eθ0 ∈ θ0 ,Fθ0 ∈ −θ0 so that [Eθ0 ,Fθ0 ] = θ0 , and sim- ◦ ∨ ˆ g τ ilarly Eσ(θ0),Fσ(θ0). Now choose the elements ei, fi, αi ∈ L( ) as fol- lows:

A2`−1:

1 2 ` − 1 •••••••... HH ¨•` •••••••... ¨ 2` − 1 2` − 2 ` + 1 ∨ ei = 1 ⊗ (Ei + E2`−i), fi = 1 ⊗ (Fi + F2`−i), αi = 1 ⊗ (Hi + H2`−i) (1 ≤ i < `), ∨ e` = 1 ⊗ E`, f` = 1 ⊗ F`, α` = 1 ⊗ H` −1 ∨ ∨ ∨ e0 = t ⊗ (Fθ0 − Fσ(θ0)), f0 = t ⊗ (Eθ0 − Eσ(θ0)), α0 = 1 ⊗ (−θ0 − (σ(θ0)) ) + 2c.

D`+1:

¨•` •••••••... ¨H 1 2 ` − 1 H•` + 1 ∨ ei = 1 ⊗ Ei, fi = 1 ⊗ Fi, αi = 1 ⊗ Hi (1 ≤ i < `), ∨ e` = 1 ⊗ (E` + E`+1), f` = 1 ⊗ (F` + F`+1), α` = 1 ⊗ (H` + H`+1) −1 ∨ ∨ ∨ e0 = t ⊗ (Fθ0 − Fσ(θ0)), f0 = t ⊗ (Eθ0 − Eσ(θ0)), α0 = 1 ⊗ (−θ0 − (σ(θ0)) ) + 2c. 110 Twisted Affine Algebras and Automorphisms of Finite Order

E6:

1 2 •• HH3 6 ¨•• ••¨ 5 4

∨ ei = 1 ⊗ (Ei + E6−i), fi = 1 ⊗ (Fi + F6−i), αi = 1 ⊗ (Hi + H6−i) (1 ≤ i ≤ 2), ∨ e3 = 1 ⊗ E3, f3 = 1 ⊗ F3, α3 = 1 ⊗ H3 ∨ e4 = 1 ⊗ E6, f4 = 1 ⊗ F6, α4 = 1 ⊗ H6 −1 ∨ ∨ ∨ e0 = t ⊗ (Fθ0 − Fσ(θ0)), f0 = t ⊗ (Eθ0 − Eσ(θ0)), α0 = 1 ⊗ (−θ0 − (σ(θ0)) ) + 2c.

D4:

1•H 2 3••¨H 4•¨ ∨ e1 = 1 ⊗ E2, f1 = 1 ⊗ F2, α1 = 1 ⊗ H2, ∨ e2 = 1 ⊗ (E1 + E3 + E4), f2 = 1 ⊗ (F1 + F3 + F4), α2 = 1 ⊗ (H1 + H3 + H4) 2 −1 2 2 2 e0 = t ⊗ (Fθ0 + η Fσ(θ0) + ηFσ (θ0)), f0 = t ⊗ (Eθ0 + ηEσ(θ0) + η Eσ (θ0)), ∨ ∨ ∨ 2 ∨ α0 = 1 ⊗ (−θ0 − (σ(θ0)) − (σ (θ0) ) + 3c. Let ◦ h = 1⊗ h ⊕Cc ⊕ Cd. Note that

σ τ ∨ ∨ ∨ h = h = span(α0 , α1 , . . . , α` ) ⊕ Cc ⊕ Cd. σ ∗ ∗ Define the elements α1, . . . , α` ∈ (h ) to be the restriction from h ∗ of the roots α1, . . . , α` ∈ h , respectively, in types A2`−1 and D`+1. σ ∗ Define the elements α1, α2, α3, α4 ∈ (h ) to be the restriction from ∗ ∗ h of the roots α1, α2, α3, α6 ∈ h , respectively, in type E6. Define σ ∗ ∗ the elements α1, α2 ∈ (h ) to be the restriction from h of the roots ∗ α2, α1 ∈ h , respectively in type D4. Also, in all cases, we define α0 to ∗ be the restriction from h of δ − θ0. We next claim that (hσ, Π, Π∨) is a realization of the Cartan matrix 0 (r) A of type XN . We know from Theorem 8.1.6 that ∨ 0 hαi , αji = aij 8.2 Construction of Twisted Affine Algebras 111

∨ for i, j ≥ 1. The ` × ` matrix with entries hαi , αji for 1 ≤ i, j ≤ ` is ∨ ∨ non-singular, so α1 , . . . , α` and α1, . . . , α` are linearly independent. We have hd, αii = δi0, whence α0, α1, . . . , α` are linearly independent. Also ∨ ∨ ∨ ∨ c appears in α0 , whence α0 , α1 , . . . , α` are linearly independent. For the remaining entries of the Cartan matrix, note that

` X ∨ ∨ ai αi = rc, i=0

(r) where ai are the marks of the diagram XN . We also note that

` X aiαi = δ| ◦σ . (h ) i=0

Now

` ` ∨ ∨ X X 0 0 0 hαi , α0i = hαi , δ − ajαji = − aijaj = a0ai0 = ai0 j=1 j=1 for i = 1, . . . , `. Moreover,

` ` ∨ X ∨ ∨ X 0 ∨ ∨ 0 0 hα0 , αji = hrc − ai αi , αji = − aijai = a0 a0j = a0j i=1 i=1 for j = 1, . . . , `. Finally,

∨ ∨ ∨ ∨ hα0 , α0i = h−1 ⊗ θ0 − 1 ⊗ σ(θ0) − · · · + rc, −θ0 + δi = hθ0 , θ0i = 2,

∨ since hσ(θ0) , θ0i = 0. We next verify relations (1.12-1.15). The relation (1.12) is easy and the relation (1.13) is obvious. For (1.14,1.15), if i, j ≥ 1, then we already know that

∨ 0 ∨ 0 [αi , ej] = aijej, [αi , fj] = −aijfj (8.3)

For j = 1, . . . , ` we have

` ` ` ∨ X ∨ ∨ X ∨ ∨ X ∨ 0 0 [α0 , ej] = [rc− ai αi , ej] = − ai [αi , ej] = − ai aijej = a0jej, i=1 i=1 i=1 112 Twisted Affine Algebras and Automorphisms of Finite Order

∨ 0 and similarly we get [α0 , fj] = −a0jfj for j = 1, . . . , `. Also, for i = 1, . . . , ` we have

∨ ∨ −1 [αi , e0] = [αi , t ⊗ (Fθ0 + η Fσ(θ0) + ... )] ∨ −1 = t ⊗ hαi , −θ0i(Fθ0 + η Fσ(θ0) + ... ) ` ∨ X = hαi , − ajαjie0 j=1 ` X = −( aijaj)e0 j=1 = ai0e0.

∨ Similarly we have [αi , f0] = −ai0f0. We also have ∨ ∨ ∨ −1 [α0 , e0] = [1 ⊗ (−θ0 − σ(θ0) − ... ), t ⊗ (Fθ0 + η Fσ(θ0) + ... )] −1 = 2t ⊗ (Fθ0 + η Fσ(θ0) + ... )]

= 2e0.

∨ Similarly we have [α0 , f0] = −2f0. Ffinally, for h = c and d the relations [h, ei] = hh, αiiei and [h, fi] = −hh, αiifi are easy to check. We next prove that the elements e0, e1, . . . , e`, f0, f1, . . . , f` together ◦ τ τ with h generate Lˆ(g) . Denote by g1 the subalgebra generated by ◦ σ this elements. We know that e1, . . . , e`, f1, . . . , f` generate (g) . So the ◦ τ degree 0 part of Lˆ(g) lies in g1.

Suppose first that r = 2. We have e0 = t ⊗ (Fθ0 − Fσ(θ0)) ∈ g1 and ◦ ◦ g (1) g (1) Fθ0 − Fσ(θ0) ∈ ( ) . Now, it is easy to see that the elements y ∈ ( ) ◦ σ for which t ⊗ y ∈ g1 form a non-zero submodule of the (g) -module ◦ ◦ (1) (1) (g) . Since this module is irreducible, we conclude that t ⊗ (g) ⊂ g1. ◦ Now we can find elements x, y ∈ (g)(1) such that [x, y] 6= 0. Then 2 [t ⊗ x, t ⊗ y] = t ⊗ [x, y] is a non-zero element of g1. Now the set of all ◦ ◦ ◦ σ 2 σ σ z ∈ (g) such that t ⊗ z ∈ g1 is an ideal of (g) , and (g) is simple, so ◦ 2 σ t ⊗ (g) ⊂ g1. The relations ◦ [t2 ⊗ x, t2k ⊗ y] = t2k+2 ⊗ [x, y](x, y ∈ (g)σ) ◦ ◦ [t2 ⊗ x, t2k+1 ⊗ y] = t2k+3 ⊗ [x, y](x ∈ (g)σ, y ∈ (g)(1)) ◦ 2k σ can then be used to show by induction on k that t ⊗ (g) ⊂ g1 and ◦ 2k+1 (1) t ⊗ (g) ⊂ g1 for k > 0. The argument for k < 0 and for r = 3 is similar. 8.2 Construction of Twisted Affine Algebras 113 ◦ Finally we must show that Lˆ(g)τ has non non-trivial ideals i with ◦ i ∩ hτ = (0). Decompose Lˆ(g)τ into root spaces with respect to hτ . We ◦ first suppose that r = 2. Then Lˆ(g)τ is the direct sum of hτ and the following weight spaces: ◦ t2k ⊗ (h)σ with weight 2kδ; ◦ t2k+1 ⊗ (h)(1) with weight (2k + 1)δ; 2k Ct ⊗ Eα with weight α + 2kδ where {α} is a one-element orbit ◦ of σ on ∆; 2k Ct ⊗ (Eα + Eσ(α)) with weight α + 2kδ where {α, σ(α)} is a ◦ two-element orbit of σ on ∆; 2k+1 Ct ⊗ (Eα − Eσ(α)) with weight α + (2k + 1)δ where {α, σ(α)} ◦ is a two-element orbit of σ on ∆. By the Weight Lemma, if i 6= 0, then it has a non-zero element x in one of these root spaces. Then we can find an element y in the negative root space such that [x, y] is a non-zero element of hτ . When r = 3 a similar argument can be applied. This time the weight spaces are ◦ t3k ⊗ (h)σ with weight 3kδ; ◦ t3k+1 ⊗ (h)(1) with weight (3k + 1)δ; ◦ t3k+2 ⊗ (h)(2) with weight (3k + 2)δ; 3k Ct ⊗ Eα with weight α + 3kδ where {α} is a one-element orbit ◦ of σ on ∆; 3k 2 Ct ⊗(Eα+Eσ(α)+Eσ2(α)) with weight α+3kδ where {α, σ(α), σ (α)} ◦ is a three-element orbit of σ on ∆; 3k+1 −1 Ct ⊗ (Eα + η Eσ(α) + ηEσ2(α)) with weight α + (3k + 1)δ ◦ where {α, σ(α), σ2(α)} is a three-element orbit of σ on ∆. 3k+2 −1 Ct ⊗ (Eα + ηEσ(α) + η Eσ2(α)) with weight α + (3k + 2)δ ◦ where {α, σ(α), σ2(α)} is a three-element orbit of σ on ∆.

Corollary 8.2.2 The multiplicities of the imaginary roots are as follows: (2) (i) Type A2` : the multiplicity of any kδ is `; (2) (ii) Type A2`−1: the multiplicity of any 2kδ is ` and the multiplicity of any (2k + 1)δ is ` − 1; 114 Twisted Affine Algebras and Automorphisms of Finite Order

(2) (iii) Type D`+1: the multiplicity of any 2kδ is ` and the multiplicity of any (2k + 1)δ is 1; (2) (iv) Type E6 : the multiplicity of any 2kδ is 4 and the multiplicity of any (2k + 1)δ is 2; (3) (v) Type D4 : the multiplicity of any 3kδ is 2 and the multiplicity of any (3k + 1)δ and (3k + 2)δ is 1.

◦ Proof By the previous theorem, the multiplicity of rkδ equals dim(h)σ, ◦ and the multiplicity of (2k + 1)δ in cases (i)-(iv) equals dim(h)(1), etc.
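The dimensions entering this proof are easy to tabulate: by the construction of this section, the multiplicity of $j\delta$ equals the dimension of the $e^{2\pi ij/r}$-eigenspace of $\sigma$ on $\mathring{\mathfrak h}$. The sketch below is illustrative only; the permutations are read off from Figure 8.1, and the specific ranks are chosen just for the example. It reproduces the numbers of Corollary 8.2.2.

```python
import numpy as np

def eigdims(perm, r):
    """Dimensions of the eigenspaces of sigma on span(alpha_1,...,alpha_N)
    for the eigenvalues exp(2*pi*i*k/r), k = 0, ..., r-1."""
    N = len(perm)
    S = np.zeros((N, N))
    for i, j in enumerate(perm):
        S[j, i] = 1.0                  # sigma(alpha_i) = alpha_{perm[i]}
    vals = np.linalg.eigvals(S)
    return [int(sum(abs(vals - np.exp(2j * np.pi * k / r)) < 1e-8))
            for k in range(r)]

cases = {
    "A_{2l-1}^(2), l=4": (list(reversed(range(7))), 2),  # alpha_i -> alpha_{2l-i}
    "A_{2l}^(2),   l=4": (list(reversed(range(8))), 2),  # alpha_i -> alpha_{2l+1-i}
    "D_{l+1}^(2),  l=4": ([0, 1, 2, 4, 3], 2),           # swap the fork nodes
    "E_6^(2)":           ([4, 3, 2, 1, 0, 5], 2),        # alpha_i <-> alpha_{6-i}
    "D_4^(3)":           ([2, 1, 3, 0], 3),              # alpha_1->alpha_3->alpha_4->alpha_1
}
for name, (perm, r) in cases.items():
    print(name, eigdims(perm, r))
# prints [4, 3], [4, 4], [4, 1], [4, 2], [2, 1, 1] -- the multiplicities of
# r*k*delta and (r*k + j)*delta listed in Corollary 8.2.2.
```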

8.3 Finite Order Automorphisms

Let g be a simple finite dimensional Lie algebra of type XN . Let µ be a diagram automorphism of g of order r. Let Ei,Fi,Hi (i = 0, 1,...,N) be the elements of g introduced in §7.2, and let α0, α1, . . . αN be the roots attached to E0,E1,...,EN , respectively. Recall that the elements E0,E1,...,EN generate g, and that there exists a unique linear de- P` pendence i=0 aiαi = 0 such that the ai are positive relatively prime (r) numbers. Recall also that the vertices of the diagram XN are in one- to-one correspondence with the Ei and that the ai are the labels at this diagram.

Lemma 8.3.1 Every ideal of the Lie algebra L(g, µ) is of the form P (tr)L(g, µ), where P (t) ∈ L. In particular, a maximal ideal is of the form (1 − (at)r)L(g, µ), where a ∈ C×.

Proof Let i be a non-trivial ideal of L(g, µ), and

X j x = t P¯j,s(t) ⊗ a¯j,s ∈ i ¯j,s ¯ where 0 ≤ j < r is such that j ≡ j (mod r), P¯j,s(t) ∈ L, and a¯j,s ∈ g¯j are linearly independent. We show that

r Q(t )P¯j,s(t)L(g, µ) ⊂ i

for all $Q(t) \in L$. Let $\mathfrak h_{\bar 0} = \mathfrak h^\mu$ be the Cartan subalgebra of $\mathfrak g_{\bar 0} = \mathfrak g^\mu$. We can assume that $x$ is an eigenvector for $\mathfrak h_{\bar 0}$ with weight $\alpha \in \mathfrak h_{\bar 0}^*$. If $\alpha \neq 0$, taking $[x, t^j \otimes a_{-\bar j}]$ with $a_{-\bar j}$ of weight $-\alpha$, instead of $x$, we reduce the problem to the case $\alpha = 0$ and $\bar j = 0$, i.e. $a_{\bar j, s} \in \mathfrak h_{\bar 0}$. Let $\gamma \in \mathfrak h_{\bar 0}^*$ be a root of $\mathfrak g_{\bar 0}$ such that $\langle\gamma, a_{\bar j, s}\rangle \neq 0$. Then the element $y$

Theorem 8.3.2 Let $s = (s_0, s_1, \dots, s_\ell)$ be a sequence of non-negative relatively prime integers; put $m = r\sum_{i=0}^{\ell} a_is_i$. Then

(i) The formulas
$$\sigma_{s;r} : E_j \mapsto e^{2\pi is_j/m}E_j \qquad (0 \leq j \leq \ell)$$
define (uniquely) an $m$th order automorphism $\sigma_{s;r}$ of $\mathfrak g$.
(ii) Up to conjugation by an automorphism of $\mathfrak g$, the automorphisms $\sigma_{s;r}$ exhaust all $m$th order automorphisms of $\mathfrak g$.
(iii) The elements $\sigma_{s;r}$ and $\sigma_{s';r'}$ are conjugate by an automorphism of $\mathfrak g$ if and only if $r = r'$ and the sequence $s$ can be transformed into the sequence $s'$ by an automorphism of the diagram $X_N^{(r)}$.

Proof See Kac. 9 Highest weight modules over Kac-Moody algebras

9.1 The category O For an h-diagonalizable g-module V we denote by P (V ) the set of weights of V . For λ ∈ h∗ denote D(λ) = {µ ∈ h∗ | µ ≤ λ}. The category O is defined as follows. Its objects are g-modules V which are h-diagonalizable with finite dimensional weight spaces and ∗ such that there exists a finite number of elements λ1, . . . , λs ∈ h such that

P (V ) ⊂ D(λ1) ∪ · · · ∪ D(λs). The morphisms in O are homomorphisms of g-modules. By the Weight Lemma, any submodule or quotient module of a module from category O is also in O. Also, a sum and a tensor product of a finite number of modules from O is again in O. Finally, every module from O is restricted. A highest weight vector of weight Λ is a Λ-weight vector v in a g- module V such that n+v = 0. A g-module is a highest weight module with highest weight Λ ∈ h∗ if it is generated by a highest weight vector of weight Λ. If V is such a module and vΛ ∈ V is a highest weight vector of weight Λ, then

M ∗ V = U(n−)vΛ,V = Vλ,VΛ = C · vΛ, dim Vλ < ∞ (λ ∈ h ). λ≤Λ In particular V ∈ O. Now form the Verma module

M(Λ) = U(g) ⊗U(b+) CΛ, where b+ = n+ ⊕ h and CΛ is the 1-dimensional b+-module with the

116 9.1 The category O 117 trivial action of n+ and the action of h with the weight Λ. Let us write

vΛ := 1 ⊗ 1 ∈ M(Λ).

This is a highest weight vector and M(Λ) is a highest weight module with highest weight Λ. By PBW, M(Λ) restricted to U(n−) is a free module of rank 1 on basis vΛ.

Lemma 9.1.1 Suppose V is a highest weight module with highest weight Λ. Then there is a unique up to scalars surjective homomorphism from M(Λ) onto V .

Proof By adjointness of tensor and Hom, ∼ Homg(M(λ),V ) = Homb+ (CΛ,V ). It is clear that the last Hom-space is 1-dimensional.

Proposition 9.1.2 M(Λ) has a unique maximal submodule M 0(Λ), and the quotient L(Λ) := M(Λ)/M 0(Λ) is an irreducible g-module. Moreover, every irreducible module in the category O is isomorphic to one and only one L(Λ), Λ ∈ h∗. Finally, Endg(L(Λ)) = C · IL(Λ).

Proof If $M$ is a proper submodule of $M(\Lambda)$ then $M_\Lambda = 0$, hence the sum of all proper submodules of $M(\Lambda)$ still has the trivial $\Lambda$-weight space, so is still proper. This proves the existence of a unique maximal submodule, whence $L(\Lambda)$ is irreducible. Next, let $L$ be an irreducible module in $\mathcal O$. Pick a maximal weight $\Lambda$ of $L$, and let $0 \neq v \in L_\Lambda$. Since $\Lambda$ is maximal, $v$ is a highest weight vector, and it generates $L$ by irreducibility. So by Lemma 9.1.1, $L$ is a quotient of $M(\Lambda)$, whence $L \cong L(\Lambda)$. The last claim is easy to check using the fact that $\dim L(\Lambda)_\Lambda = 1$.
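For $\mathfrak g = \mathfrak{sl}_2$ and $\Lambda = n \in \mathbb Z_{\geq 0}$ the maximal submodule $M'(\Lambda)$ can be located by hand, and the computation is easily mechanized. The sketch below is illustrative; it uses only $[e, f] = h$, $hf^kv = (n-2k)f^kv$ and $ev = 0$ to find the singular vectors $f^kv$ in $M(n)$.

```python
# Verma module M(n) over sl_2 with basis v, f v, f^2 v, ...
# The coefficients c_k defined by  e f^k v = c_k f^{k-1} v  satisfy
#   c_k = c_{k-1} + (n - 2(k - 1)),   c_0 = 0,
# because e f^k v = f (e f^{k-1} v) + h f^{k-1} v.

def singular_indices(n, depth=40):
    c, out = 0, []
    for k in range(1, depth + 1):
        c = c + (n - 2 * (k - 1))        # c_k
        if c == 0:
            out.append(k)
    return out

for n in range(5):
    print(f"n = {n}: e f^k v = 0 exactly for k in {singular_indices(n)}")
# For each n >= 0 the only singular index is k = n + 1: f^{n+1} v is a highest
# weight vector of weight -n - 2, it generates the maximal submodule M'(n),
# and L(n) = M(n)/M'(n) has dimension n + 1.
```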

A vector v in a g-module V is called primitive (of weight λ) if v is a weight vector (of weight λ) and there exists a submodule U ⊂ V such that v + U is a highest weight vector in V/U. Every module V ∈ O is generated by its primitive vectors. Indeed, let V 0 be the submodule of V generated by the primitive vectors. If V 0 6= V , then V/V 0 contains a highest weight vector, any preimage of which is a primitive vector. Actually, even more is true: 118 Highest weight modules over Kac-Moody algebras Lemma 9.1.3 Any module V ∈ O is generated by its primitive vectors as an n−-module.

Proof Note first that a weight vector v ∈ V is not primitive if and only if v ∈ U(n−)U0(n+)v, where U0(g) stands for the augmentation ideal of U(g). Indeed, for a weight vector v

U(n−)U0(n+)v = U(n−)U(n+)n+v = U(n−)U(n+)U(h)n+v = U(g)n+v is the g-submodule generated by n+v. Now it follows that every non-primitive vector is obtained by appli- cation of some elements from n− to elements of higher weights. This implies the lemma using boundedness from above of P (V ).

9.2 Formal Characters Unfortunately, a module V in O need not have a composition series. So we cannot define things like multiplicities [V : L(Λ)] in the usual way. The following provide a substitute for this:

Lemma 9.2.1 Let V ∈ O and λ ∈ h∗. Then there exists a filtration

V = Vt ⊃ Vt−1 ⊃ · · · ⊃ V0 = 0 and a subset J ⊂ {1, . . . , t} such that ∼ (i) if j ∈ J then Vj/Vj−1 = L(λj) for some λj ≥ λ; (ii) if j 6∈ J then (Vj/Vj−1)µ = 0 for every µ ≥ λ.

Proof Let
$$a(V, \lambda) = \sum_{\mu\geq\lambda}\dim V_\mu.$$
This is a well-defined non-negative integer. We prove the lemma by induction on $a(V, \lambda)$. If $a(V, \lambda) = 0$ then $0 = V_0 \subset V_1 = V$ is the required filtration, with $J = \emptyset$. If $a(V, \lambda) > 0$, let $\mu$ be a maximal weight of $V$ such that $\mu \geq \lambda$. Choose a non-zero weight vector $v \in V_\mu$ and let $U = U(\mathfrak g)\cdot v$. Clearly $U$ is a highest weight module. Hence it has a unique maximal submodule $U'$. Now we have $0 \subset U' \subset U \subset V$ with $U/U' \cong L(\mu)$ and $\mu \geq \lambda$. Since $a(U', \lambda)$ and $a(V/U, \lambda)$ are both less than $a(V, \lambda)$, we can now proceed by induction.

Lemma 9.2.2 Let $V \in \mathcal O$, $\mu \in \mathfrak h^*$ and let $\lambda$ be such that $\lambda \leq \mu$. Consider the corresponding filtration from Lemma 9.2.1. Then the number of times $\mu$ appears among the $\{\lambda_j \mid j \in J\}$ is independent of the choice of filtration and also of the choice of $\lambda$.

Proof We first observe that a filtration with respect to λ is also a filtration with respect to µ when µ ≥ λ. Also, the multiplicity of L(µ) in such filtration is the same whether it is regarded as a filtration with respect to λ or µ. Thus to prove the lemma it will be sufficient to take two filtrations with respect to µ and show that L(µ) has the same multiplicity in each. The following variant of the proof of the Jordan- Holder theorem achieves this. Let

$$V = V_0 \supset V_1 \supset \dots \supset V_{l_1} = 0, \tag{9.1}$$
$$V = V_0' \supset V_1' \supset \dots \supset V_{l_2}' = 0 \tag{9.2}$$
be two such filtrations of lengths $l_1$ and $l_2$. We use induction on $\min(l_1, l_2)$. If $\min(l_1, l_2) = 1$ then either $V$ is irreducible and the two filtrations are identical, or $\mu$ is not a weight of $V$ and $L(\mu)$ does not appear in either filtration. Thus suppose $\min(l_1, l_2) > 1$. Assume first that $V_1 = V_1'$. Then consider the two filtrations

V1 ⊃ · · · ⊃ Vl1 = 0, V 0 ⊃ · · · ⊃ V 0 = 0 1 l2 of V1. By induction they give the same multiplicity of L(µ), and the filtrations for V are obtained by adding the additional factor V/V1, which is the same for both. So we are done in this case. 0 Next, assume that V1 6= V1 . Suppose first that one contains the other, 0 say V1 ⊂ V1 . Then V/V1 is not irreducible, so µ is not a weight of V/V1. 0 Thus neither V/V1 nor V/V1 is isomorphic to L(µ). Let

$$V_1 \supset U_1 \supset \dots \supset U_m = 0$$
be a filtration of $V_1$ of the required type with respect to $\mu$. We then consider the filtrations

$$V \supset V_1 \supset U_1 \supset \dots \supset U_m = 0, \tag{9.3}$$
$$V \supset V_1' \supset V_1 \supset U_1 \supset \dots \supset U_m = 0. \tag{9.4}$$
These are filtrations of the required type with respect to $\mu$. Moreover, $L(\mu)$ has the same multiplicity in (9.1) and (9.3), since they have the same leading term $V_1$. Similarly $L(\mu)$ has the same multiplicity in (9.2) and (9.4). Finally, $L(\mu)$ has the same multiplicity in (9.3) and (9.4), since none of $V/V_1$, $V/V_1'$, $V_1'/V_1$ is isomorphic to $L(\mu)$. Thus $L(\mu)$ has the same multiplicity in (9.1) and (9.2), as required.

Finally, we assume that neither of $V_1, V_1'$ is contained in the other. Let $U = V_1 \cap V_1'$ and choose a filtration of $U$ of the required type with respect to $\mu$:

U ⊃ U1 ⊃ · · · ⊃ Um = 0.

We then consider the filtrations

$$V \supset V_1 \supset U \supset U_1 \supset \dots \supset U_m = 0, \tag{9.5}$$
$$V \supset V_1' \supset U \supset U_1 \supset \dots \supset U_m = 0. \tag{9.6}$$
These are filtrations of the required type with respect to $\mu$. This is clear since
$$V_1/U \cong (V_1 + V_1')/V_1', \qquad V_1'/U \cong (V_1 + V_1')/V_1.$$
Now $L(\mu)$ has the same multiplicity in (9.1), (9.5) and in (9.2), (9.6), since they have the same leading term. It is therefore sufficient to show that $L(\mu)$ has the same multiplicity in (9.5) and (9.6). These filtrations differ only in the first two factors. If $V_1 + V_1' = V$ then we have

0 0 and we are done. If V1 +V1 6= V then V/V1 and V/V1 are not irreducible. 0 In this case µ is not a weight of V/V1 and V/V1 , so it is not a weight 0 0 of V1/U. Thus none of V/V1, V1/U, V/V1 , V1 /U is isomorphic to L(µ). This completes the proof.

Now, let V ∈ O and µ ∈ h∗. Fix λ ∈ h∗ such that λ ≤ µ and construct a filtration as in Lemma 9.2.1. Denote by [V : L(µ)] the number of times µ appears among the {λj | j ∈ J} and call it the multiplicity of L(µ) in V . In view of Lemma 9.2.2, the multiplicity is well-defined. Given a module V ∈ O, we have by definition that all its weight spaces are finite dimensional. The idea of the formal character is to record the dimensions of each of these weight spaces in one ”book-keeping devise” or a generating function. Since there may be infinitely many weights in P (V ), we are going to have to work with certain formal infinite sums. So let E be the C-algebra whose elements are series of the form X cλe(λ) λ∈h∗ 9.2 Formal Characters 121 where cλ ∈ C and cλ = 0 for λ outside the union of a finite number of sets of the form D(µ). The sum of two such series and the multiplication by a scalar are defined in the usual way. The product of two such series also makes sense if we use the rule e(λ)e(µ) = e(λ + µ) and note that to calculate the coefficient of a given e(ν) in the product of two elements in E only involves calculating a finite sum. Under these operations E becomes a commutative associative C-algebra with identity e(0). Given a module V ∈ O, define its formal character to be X ch V := (dim Vλ)e(λ) ∈ E. λ∈h∗ By definition, we have

ch (V ⊕ W ) = ch V + ch W, ch (V ⊗ W ) = ch V ch W.

Proposition 9.2.3 Let V ∈ O. Then X ch V = [V : L(λ)]ch L(λ). λ∈h∗

Proof Let X ϕ(V ) = ch V − [V : L(λ)]ch L(λ). λ∈h∗ Note that ϕ(V ) ∈ E. Moreover, ϕ(L(λ)) = 0 and, given a SES of modules

0 → V1 → V2 → V3 → 0 we have ϕ(V2) = ϕ(V1) + ϕ(V3). Now let us focus on a particular λ and V . By Lemma 9.2.1, there exists a filtration

V = Vt ⊃ Vt−1 ⊃ · · · ⊃ V0 = 0 ∼ such that, setting Wi := Vi/Vi−1, we have either Wi = L(λi) for some λi ≥ λ or that (Wi)λ = 0. In the former case ϕ(Wi) = 0. In the latter case ϕ(Wi) has e(λ)-coefficient 0. This was for all λ, so ϕ(V ) = 0.

Lemma 9.2.4 For any λ ∈ h∗ the formal character of the Verma module is given by Y ch M(λ) = e(λ) (1 − e(−α))− mult α.

α∈∆+ 122 Highest weight modules over Kac-Moody algebras

Proof Follows from PBW-theorem and freeness of M(λ) over U(n−).
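Concretely, Lemma 9.2.4 says that $\dim M(\lambda)_{\lambda-\beta}$ is Kostant's partition function of $\beta$. The sketch below is illustrative, assuming finite type $A_2$ (positive roots $\alpha_1, \alpha_2, \alpha_1 + \alpha_2$, each of multiplicity one): it computes these numbers both by expanding the product $\prod_{\alpha>0}(1 - e(-\alpha))^{-1}$ and by direct counting, and checks that they agree.

```python
import itertools
from collections import Counter

# Positive roots of type A_2 in the basis (alpha_1, alpha_2); all have mult 1.
positive_roots = [(1, 0), (0, 1), (1, 1)]
N = 6                                      # truncation: beta with coordinates <= N

# Brute force: dim M(lambda)_{lambda - beta} = number of ways to write beta
# as a non-negative integral combination of positive roots.
kostant = Counter()
for mults in itertools.product(range(N + 1), repeat=len(positive_roots)):
    beta = tuple(sum(m * r[i] for m, r in zip(mults, positive_roots))
                 for i in range(2))
    if max(beta) <= N:
        kostant[beta] += 1

# Same numbers as coefficients of prod_{alpha>0} (1 - e(-alpha))^{-1},
# expanded as a truncated series in e(-alpha_1), e(-alpha_2).
series = Counter({(0, 0): 1})
for r in positive_roots:
    new = Counter()
    for exp, c in series.items():
        k = 0
        while exp[0] + k * r[0] <= N and exp[1] + k * r[1] <= N:
            new[(exp[0] + k * r[0], exp[1] + k * r[1])] += c
            k += 1
    series = new

assert all(series[b] == kostant[b] for b in series)
print("dim M(lambda)_{lambda - 2alpha_1 - 2alpha_2} =", series[(2, 2)])   # 3
```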

Assume that A is symmetrizable and let (·|·) be the standard form on g. Recall from Corollary 2.3.6 that if V is a g-module with highest weight Λ, then 2 2 Ω = (|Λ + ρ| − |ρ| )IV .

Proposition 9.2.5 Let V be a g-module with highest weight Λ. Then X ch V = cλch M(λ), (9.7) λ≤Λ, |λ+ρ|2=|Λ+ρ|2 where cλ ∈ Z, cΛ = 1.

Proof In view of Proposition 9.2.3, we may assume that V = L(Λ). Using the same proposition, for any µ we deduce X ch M(µ) = cµ,ν ch L(ν), ν≤µ for some non-negative integers cµ,ν with cµ,µ = 1. We know that cµ,ν 6= 0 if and only if M(µ) contains a primitive vector of weight ν. Using the 2 2 action of the Casimir we deduce that cµ,ν = 0 unless |µ + ρ| = |ν + ρ| . Set B(Λ) = {λ ≤ Λ | |λ + ρ|2 = |Λ + ρ|2}, and order elements of this set, λ1, λ2,... so that λi ≥ λj implies i ≤ j. Then X ch M(λi) = ci,jch L(λj), j with ci,i = 1 and ci,j = 0 for j > i. So we can solve this system of linear equations to complete the proof of the lemma.

9.3 Generators and relations

Recall from chapter 1 that g = g˜/r, and r = r+ ⊕ r− where r± are ideals of g˜ defined by r± = r ∩ n˜±. Set rα = r ∩ g˜α. We deal with some general lemmas first.

Lemma 9.3.1 Let θ : g → g0 be a surjective homomorphism of Lie algebras with kernel r. Let ϕ : U(g) → U(g0) be the corresponding homo- morphism between enveloping algebras. Then the kernel of ϕ is rU(g). 9.3 Generators and relations 123 Proof Since r is an ideal of g, rU(g) is a two-sided ideal of U(g). More- over, rU(g) ⊂ ker ϕ. Conversely, we have a homomorphism

α : U(g)/rU(g) → U(g0) induced by ϕ. We consider the Lie algebra U(g)/rU(g) (with respect to the commutator bracket). Define a map

g0 → U(g)/rU(g)

0 0 0 as follows. Given x ∈ g , choose x1 ∈ g with θ(x1) = x . Then x1 gives rise tox ¯1 ∈ U(g)/rU(g). This map is well-defined and is a Lie algebra homomorphism. By the universal property there is a map

β : U(g0) → U(g)/rU(g) extending the constructed homomorphism of Lie algebras. It is readily checked that α and β are inverse homomorphisms, and thus isomor- phisms.

The two-sided ideal U0(g) := gU(g) of U(g) is called the augmentation ideal of U(g).

2 Lemma 9.3.2 g ∩ (U0(g) ) = [g, g].

Proof Since g ⊂ U0(g) and [x, y] = xy − yx in U(g), the embedding 2 g ∩ (U0(g) ) ⊃ [g, g] is clear. Conversely, let g¯ = g/[g, g]. We have a 2 natural homomorphism U(g) → U(g¯) under which g ∩ (U0(g) ) maps to 2 g¯ ∩ (U0(g¯) ). Now g¯ is abelian, so U(g¯) is a polynomial algebra. In such 2 a polynomial algebra, it is evident that g¯ ∩ (U0(g¯) ) = 0. It follows that 2 2 g ∩ (U0(g) ) lies in the kernel of g → g¯, and so g ∩ (U0(g) ) ⊂ [g, g].

Lemma 9.3.3 Let r be a subalgebra of the Lie algebra g. Then r ∩ rU0(g) = [r, r].

Proof Since r ⊂ U0(r) and [x, y] = xy − yx in U(r), we have

[r, r] ⊂ r ∩ rU0(r) ⊂ r ∩ rU0(g).

Conversely, let {ri} be a basis of r and extend it to a basis {ri, uj} of g. Q mi nj P P The monomials ri uj with mi + nj > 0 form a basis of U0(g) P P P and those with mi + nj ≥ 2 and mi ≥ 1 form a basis of rU0(g). Hence, each element of r ∩ rU0(g) is a linear combination of monomials 124 Highest weight modules over Kac-Moody algebras

Q mi nj P P ri uj with nj = 0 and mi ≥ 2. Hence 2 r ∩ rU0(g) ⊂ r ∩ U0(r) = [r, r], where the last equality comes from Lemma 9.3.2

Proposition 9.3.4 The ideal r+ (resp. r−) is generated as an ideal in n˜+ (resp. n˜−) by those rα (resp. r−α) for which α ∈ Q+ \ Π and 2(ρ|α) = (α|α).

Proof We define a Verma module M˜ (λ) over g˜ by M˜ (λ) = U(g˜) ⊗ , U(b˜+) Cλ ˜ ˜ where b+ = n˜+ ⊕ h and Cλ is the 1-dimensional b+-module with the trivial action of n˜+ and the action of h with the weight λ. As for the usual Verma modules, one proves that M˜ (λ) has a unique maximal proper 0 submodule M˜ (λ) and that as a U(n˜−)-module, M˜ (λ) is a free module on the generatorv ˜λ := 1 ⊗ 1. Consider the special case λ = 0. Writev ˜ :=v ˜0 = 1 ⊗ 1 ∈ M˜ (0). Since n˜− is a free Lie algebra on f1, . . . , fn, U(n˜−) is a free associative algebra on f1, . . . , fn. So as vector spaces,

U(n˜−) = C1 ⊕ U(n˜−)f1 ⊕ · · · ⊕ U(n˜−)fn.

Thus U(n˜−)f1 ⊕ · · · ⊕ U(n˜−)fn is a U(n˜−)-submodule of codimension 1 in U(n˜−). It corresponds to the subspace n M U(n˜−)fiv˜ i=1 of codimension 1 in M˜ (0). Moreover, this subspace is a g-submodule Ln ˜ isomorphic to i=1 M(−αi) since the vectors fiv˜ are easily checked to be highest weight vectors of weight −αi, i = 1, . . . , n. It follows that n n 0 M ∼ M M˜ (0) = U(n˜−)fiv˜ = M˜ (−αi). i=1 i=1

Tensoring with U(g)⊗U(g˜) we get an isomorphism of U(g)-modules n ˜ 0 ∼ M U(g) ⊗U(g˜) M (0) = M(−αi). (9.8) i=1 Let π : g˜ → g be the canonical homomorphism. We define a map ˜ 0 λ1 : r− → U(g) ⊗U(g˜) M (0), a 7→ 1 ⊗ a(˜v). 9.3 Generators and relations 125

This is a g˜-module homomorphism, where g˜ acts on r− via the adjoint action. Indeed, for x ∈ g˜, a ∈ r−, we have

λ1([x, a]) = 1 ⊗ (xa − ax)(˜v) = π(x) ⊗ a(˜v) = x(λ1(a)) since π(a) = 0. A similar calculation shows that λ1([r−, r−]) = 0. So we have a g-module homomorphism n M λ : r−/[r−, r−] → M(−αi) (9.9) i=1 by (9.8). More explicitly λ is described as follows: write a ∈ r− in the P form a = i uifi, where ui ∈ U0(n˜−), and the action of ui on fi is the adjoint action extended to the universal enveloping. Then X λ(a + [r−, r−]) = π(ui)vi, i where vi is the highest weight vector of M(−αi). We claim that λ is injective. Indeed, λ(a + [r−, r−]) = 0 implies P π(ui) = 0 for all i, hence ui ∈ r−U(n−), see Lemma 9.3.1. So uifi ∈ r−U0(n−). So a ∈ r− ∩ r−U0(n−) = [r−, r−] by Lemma 9.3.3. Thus we have an embedding (9.9) in the category O. Now let −α (α ∈ Q+) be a primitive weight of the g-module r−/[r−, r−]. Note that α 6∈ Π since no fi belongs to r−. Using the embedding and the action of Casimir we deduce that (−α + ρ| − α + ρ) = (−αi + ρ| − αi + ρ) for some i. Since 2(ρ|αi) = (αi|αi) by (2.18), we get 2(ρ|α) = (α|α). By Lemma 9.1.3, r−/[r−, r−] is generated as an n−-module by the representatives of those r−α for which α ∈ ∆+ \ Π and 2(ρ|α) = (α|α). We want to deduce from it that r− is generated as an n˜−-module by such rα (equivalently the ideal of n˜− generated by such r−α). Let k be the the n˜−-submodule generated by such r−α. Then k + [r−, r−] = r−. Suppose k 6= r−. Then r−/k is an n˜−-module. Consider the submodule [r−/k, r−/k] of r−/k. This is an n˜−-module whose weights are of the form β + γ where β, γ are weights of r−/k. Thus if α is a weight of r−/k for which | ht α| is minimal then α cannot be a weight of [r−/k, r−/k]. Thus [r−/k, r−/k] 6= r−/k, and this gives k + [r−, r−] 6= r−, a contradiction. This completes the proof for r−. The result for r+ follows by applying the involutionω ˜.

Theorem 9.3.5 Let A be symmetrizable. Then the elements

1−aij (ad ei) ej, i 6= j (i, j = 1, . . . , n), (9.10)

1−aij (ad fi) fj, i 6= j (i, j = 1, . . . , n) (9.11) 126 Highest weight modules over Kac-Moody algebras generate the ideals r+ and r−, respectively.

Proof Denote by g¯ the quotient of g˜ by the ideal generated by all ele- ments (9.10) and (9.11). The natural surjection g˜ → g factors through surjections g˜ → g¯ → g, thanks to Lemma 3.1.1. We have the induced Q-gradation of g¯: M g¯ = g¯α. α∈Q

Let ¯r (resp. ¯r±) denote the image of r (resp. r±) in g¯. We just need to show that ¯r+ = 0 (then ¯r− = 0 too by applyingω ˜). Otherwise, choose the root α of minimal height among the roots α ∈ Q+ \{0} such that P (¯r+)α 6= 0 and let α = kiαi. It is clear that (r+)α must occur in any system of homogeneous generators of r+ as an ideal of n+. It follows from Proposition 9.3.4 that (α|α) = 2(ρ|α). We know that the Weyl group W acts on the weights of g = g˜/r and that weights in the same W -orbit have the same multiplicity. The same argument can be applied to g¯ to give a similar result (the proofs in §3.2 relied only on Serre relations, which hold in g¯). Since

dim g¯α = dim gα + dim¯rα we see that W acts on the weights of ¯r and that weights in the same W -orbit have the same multiplicity. It follows using Lemma 3.2.2 that

(¯r+)riα 6= 0 for any i. Now ht (riα) ≥ ht (α) implies (αi|α) ≤ 0, whence (α|α) ≤ 0. But X X 2(ρ|α) = 2 ki(ρ|αi) = ki(αi|αi) > 0. This contradicts (α|α) = 2(ρ|α). 10 Weyl-Kac Character formula

10.1 Integrable highest weight modules and Weyl group Set

∗ ∨ P = {λ ∈ h | hλ, αi i ∈ Z (i = 1, . . . , n)}, ∨ P+ = {λ ∈ P | hλ, αi i ≥ 0 (i = 1, . . . , n)}, ∨ P++ = {λ ∈ P | hλ, αi i > 0 (i = 1, . . . , n)}.

The set P is called the weight lattice, elements from P (resp. P+, resp. P++) are called integral weights (resp. dominant, resp. regular dominant weights). Note that P contains the root lattice Q. Let V be a highest weight module over g with highest weight vector v. It follows from Lemmas 3.1.2(ii) and 3.1.3 that V is integrable if and Ni only if fi v = 0 for some Ni > 0 (i = 1, . . . , n).

Lemma 10.1.1 The g-module L(Λ) is integrable if and only if Λ ∈ P+.

Proof Follows from the previous paragraph and representation theory of sl2.

Denote by P (Λ) the set of weights of L(Λ). It is clear that P (Λ) ⊂ P if λ ∈ P . The following proposition follows from Lemma 10.1.1 and Proposition 3.2.3.

Proposition 10.1.2 If Λ ∈ P+, then for all w ∈ W we have

multL(Λ) λ = multL(Λ) w(λ).

In particular, P (Λ) is W -invariant.

127 128 Weyl-Kac Character formula

Corollary 10.1.3 If Λ ∈ P+ then any λ ∈ P (Λ) is W -conjugate to a unique µ ∈ P+ ∩ P (Λ).

Proof Follows from Proposition 3.4.1.

We let W act on the complex vector space E˜ of all (possibly infinite) P formal linear combinations λ cλe(λ) by X X w( cλe(λ)) = cλe(wλ). λ λ The space E˜ contains E as a subspace. However, the product of two ele- ments P1,P2 ∈ E˜ doesn’t always make sense. If it does, then w(P1P2) = w(P1)w(P2). Proposition 10.1.2 implies that

w(ch L(Λ)) = ch L(Λ) (w ∈ W, Λ ∈ P+). (10.1) Consider now the element Y R := (1 − e(−α))mult α ∈ E.

α∈∆+ For w ∈ W set `(w) ε(w) := (−1) = deth∗ w. We next claim that

w(e(ρ)R) = ε(w)e(ρ)R (w ∈ W ). (10.2)

Indeed, it is sufficient to check (10.2) for each fundamental reflection ri. Recall that the set ∆+ \{αi} is ri-invariant and mult ri(α) = mult α for α ∈ ∆+. So

ri(e(ρ)R) = (rie(ρ))(riR) Y mult α = e(riρ)ri(1 − e(−αi)) ri(1 − e(−α))

α∈∆+\{αi} Y mult α = e(ρ − αi)(1 − e(αi)) (1 − e(−α))

α∈∆+\{αi} = −e(ρ)R.

10.2 The character formula From now on we assume that A is symmetrizable and (·|·) is the standard bilinear form on g. 10.2 The character formula 129

Lemma 10.2.1 Let λ, Λ ∈ P , λ ≤ Λ, and Λ + λ ∈ P+. Then either ∨ hΛ + λ, αi i = 0 for i ∈ supp (Λ − λ) or (Λ|Λ) > (λ|λ). In particular, if Λ ∈ P++, λ ∈ P+, and λ < Λ, then (Λ|Λ) > (λ|λ).

P Proof We have Λ − λ = i kiαi, ki ∈ Z+. Hence

X (αi|αi) (Λ|Λ) − (λ|λ) = (Λ + λ|Λ − λ) = k hΛ + λ, α∨i. 2 i i i

Since (αi|αi) > 0 the result follows.

Theorem 10.2.2 (Weyl-Kac character formula) Let g be a sym- metrizable Kac-Moody algebra, and let L(Λ) be an irreducible g-module with highest weight Λ ∈ P+. Then P ε(w)e(w(Λ + ρ) − ρ) ch L(Λ) = w∈W . (10.3) Q (1 − e(−α))mult α α∈∆+

Proof Multiplying both sides of (9.7) by e(ρ)R and using Lemma 9.2.4, we get X e(ρ)R ch L(Λ) = cλe(λ + ρ), (10.4) λ≤Λ, |λ+ρ|2=|Λ+ρ|2 for cλ ∈ Z with cΛ = 1. By (10.1) and (10.2), the LHS of the last equation is W -skew-invariant. Hence the coefficients in the RHS have the following property:

cλ = ε(w)cµ if w(λ + ρ) = µ + ρ for some w ∈ W. (10.5)

Let λ be such that cλ 6= 0. Then by (10.5) we have cw(λ+ρ)−ρ 6= 0 for all w ∈ W . Hence it follows from (10.4) that w(λ+ρ) ≤ Λ+ρ for all w ∈ W . Let µ ∈ {w(λ + ρ) − ρ | w ∈ W } be such that ht (Λ − µ) is minimal. 2 2 Then µ + ρ ∈ P+ and |µ + ρ| = |Λ + ρ| . Applying Lemma 10.2.1 to the elements Λ + ρ ∈ P++ and µ + ρ, we deduce that µ = Λ. Thus cλ 6= 0 implies λ = w(Λ + ρ) for some w ∈ W , and in this case cλ = ε(w), see (10.5).

But Λ+ρ ∈ P++, so by Proposition 3.4.1(ii), w(Λ+ρ) = Λ+ρ implies w = 1. Hence finally we obtain X e(ρ)R ch L(Λ) = ε(w)e(w(Λ + ρ) − ρ), w∈W as required. 130 Weyl-Kac Character formula Take Λ = 0 in the Weyl-Kac character formula. Since L(0) is the trivial module, its character is e(0) = 1E . This gives the following de- nominator identity: Y X (1 − e(−α))mult α = ε(w)e(w(ρ) − ρ). (10.6)

α∈∆+ w∈W Substituting into (10.3) we get another form of the Weyl-Kac character formula: P w∈W ε(w)e(w(Λ + ρ) − ρ) ch L(Λ) = P . (10.7) w∈W ε(w)e(w(ρ) − ρ)

Remark 10.2.3 In the proof of the Weyl-Kac formula we never used the fact that L(Λ) is irreducible, but only that L(Λ) is integrable highest weight module with highest weight Λ. This happens if and only if Λ ∈ P+ and ∨ hΛ,αi i+1 fi (vΛ) = 0 (i = 1, . . . , n). (10.8)

Indeed, if L(Λ) is integrable, then clearly Λ ∈ P+. Moreover, if the (10.8) ∨ hΛ,αi i+1 fails then one of the fi (vΛ) is a non-zero highest weight vector of negative highest weight for the corresponding sl2, which contradicts integrability again. The converse follows from Lemmas 3.1.2(ii) and 3.1.3. We make two conclusions: first, if Λ ∈ P+ is dominant and V is an integrable module generated by highest weight vector of weight Λ then V = L(Λ). Second,

∨ X hΛ,αi i+1 L(Λ) = M(Λ)/ (U(n−)fi (vΛ)) (Λ ∈ P+). (10.9) i Consider the expansion Y X (1 − e(−α))− mult α = K(β)e(−β), (10.10) ∗ α∈∆+ β∈h defining the function ∗ K : h → Z+ called the (generalized Kostant) partition function. Note that K(β) = 0 unless β ∈ Q+, K(0) = 1, and for β ∈ Q+, K(β) is the number of partitions of β into a sum of positive roots. Now the character formula for Verma modules can be rewritten as follows:

multM(Λ) λ = K(Λ − λ). (10.11) 10.2 The character formula 131 Now, substitute (10.10) to (10.3):

X X X (multL(Λ) λ)e(λ) = ε(w)e(w(Λ + ρ) − ρ) K(β)e(−β) λ≤Λ w∈W β∈h∗ X X = ε(w)K(β)e(−β + w(Λ + ρ) − ρ) w∈W β∈h∗ X X = ε(w)K(w(Λ + ρ) − (λ + ρ))e(λ). w∈W λ∈h∗

Comparing the coefficients at e(λ), we obtain the (generalized) Kostant multiplicity formula:

X multL(Λ) λ = ε(w)K(w(Λ + ρ) − (λ + ρ)). (10.12) w∈W

Assume now that A is of finite type and for any λ ∈ P denote

P P w∈W ε(w)e(w(λ + ρ) − ρ) w∈W ε(w)e(w(λ + ρ)) χ(λ) = P = P . w∈W ε(w)e(w(ρ) − ρ) w∈W ε(w)e(w(ρ))

In particular, for λ ∈ P+, we have χ(λ) = ch L(λ), but it makes sense to consider χ(λ) for any λ ∈ P . It is interesting to specialize e(α) to 1 for all α. The result of such specialization in the expression χ(λ) is denoted d(λ). For example, if λ ∈ P+, then d(λ) = dim L(λ). There is a nice fomula for d(λ) called the Weyl dimension formula:

Q (λ + ρ|α) d(λ) = α∈∆+ . (10.13) Q (ρ|α) α∈∆+

To prove the formula, let E be the ring for g defined in §9.2 and denote P by E0 the subring consisting of all finite sums µ∈P nµe(µ) with nµ ∈ Z. Then we have in E0:

X X ε(w)e(wρ)χ(λ) = ε(w)e(w(λ + ρ)). (10.14) w∈W w∈W

Let A = R[[t]]. Then for each ξ ∈ P we have a ring homomorphism

θξ : E0 → A, e(µ) 7→ exp((ξ|µ)t). 132 Weyl-Kac Character formula We have X  X θξ ε(w)e(wµ) = ε(w) exp((ξ|wµ)t) w∈W w∈W X = ε(w) exp((µ|wξ)t) w∈W X  = θµ ε(w)e(wξ) . w∈W In particular we have X  X  θρ ε(w)e(w(λ + ρ) = θλ+ρ ε(w)e(wρ) w∈W w∈W Y  = θλ+ρ e−ρ (e(α) − 1)

α∈∆+ Y = exp((λ + ρ| − ρ)t) exp((λ + ρ|α)t) − 1)

α∈∆+ Y = tN ( (λ + ρ|α) + ... ),

α∈∆+ where N = |∆+|. By putting λ = 0 we obtain

X N Y θρ( ε(w)e(wρ)) = t ( (ρ|α) + ... ).

w∈W α∈∆+

Thus by applying θρ to (10.14), we obtain

N Y N Y t ( (ρ|α) + ... )θρ(χ(λ)) = t ( (λ + ρ|α) + ... ).

α∈∆+ α∈∆+ By canceling tN and taking the constant term we obtain Y Y (ρ|α)d(λ) = (λ + ρ|α).

α∈∆+ α∈∆+

10.3 Example: Lˆ(sl2)

Consider the denominator identity for the case g = Lˆ(sl2). Remember from Example 1.5.4 that the positive roots are of the form α1 + kδ for k ∈ Z+ and −α1 + nδ, nδ for n ∈ N, and that they all have multiplicity 1. Denote e(−δ) by q and e(−α1) by z. Then the left hand side of (10.6) is Y (1 − qn)(1 − qn−1z)(1 − qnz−1). n>0 10.3 Example: Lˆ(sl2) 133 To compute the right hand side, remember from Example 3.4.2(ii) that ∼ W = ZoS2, where the generators 1 ∈ Z and s ∈ S2 act on weights by the following formulas

1 : α1 7→ α1 − 2δ, δ 7→ δ, Λ0 7→ α1 − δ + Λ0,

s : α1 7→ −α1, δ 7→ δ, Λ0 7→ Λ0.

Now, we can take ρ = α1/2 + 2Λ0. So

1 : ρ 7→ ρ + 2α1 − 3δ,

s : ρ 7→ ρ − α1. We deduce that

2 m : ρ 7→ ρ + 2mα1 − (2m + m)δ, 2 sm : ρ 7→ ρ − (2m + 1)α1 − (2m + m)δ. Now the right hand side of (10.6) is

X X 2 ε(w)e(w(ρ) − ρ) = e(2mα1 − (2m + m)δ) w∈W m∈Z X 2 − e(−(2m + 1)α1 − (2m + m)δ) m∈Z X k(k − 1) = (−1)ke(−kα − δ) 1 2 k∈Z X k k k(k−1) = (−1) z q 2 . k∈Z (1) So the Weyl-Kac denominator identity for the easiest affine type A1 becomes

Y n n−1 n −1 X m m m(m−1) (1 − q )(1 − q z)(1 − q z ) = (−1) z q 2 . n>0 m∈Z This is a highly non-trivial Jacobi’s triple product identity. Let us divide both sides by (1 − z) to get an equivalent form

Y X z−2k − z(2k+1) (1 − qn)(1 − qnz)(1 − qnz−1) = qk(2k+1). 1 − z n>0 k∈Z By various specializations we get more famous identities. For example, take z = 1 in the second form to get X ϕ(q)3 = (4k + 1)qk(2k+1), k∈Z 134 Weyl-Kac Character formula where Y ϕ(q) := (1 − qn). n>0 Another specialization is obtained by applying a homomorphism

s0 s1 θ : C[[e(−α0), e(−α1)]] → C[[q]], e(−α0) 7→ q , e(−α1) 7→ q . Then the first form specializes to Y (1 − q(s0+s1)n)(1 − qs0(n−1)+s1n)(1 − qs0n+s1(n−1)) n>0 X m s m(m−1) +s m(m+1) = (−1) q 0 2 1 2 . m∈Z

Let us take (s0, s1) = (1, 1). We obtain

2 ϕ(q) X 2 = (−1)mqm ϕ(q2) m∈Z or

(1 − q)2(1 − q2)(1 − q3)2(1 − q4)(1 − q5)2(1 − q6) ... = 1 − 2q + 2q4 − 2q9 + 2q16 − ....

This is a classical Gauss identity. Next take (s0, s1) = (2, 1). We obtain

X m m(3m−1) ϕ(q) = (−1) q 2 m∈Z or

(1 − q)(1 − q2)(1 − q3)(1 − q4)(1 − q5)(1 − q6) ... = 1 − q − q2 + q5 + q7 − q12 − q15 + q22 + q26 − ....

This is a classical Euler identity.

10.4 Complete reducibility

Lemma 10.4.1 Let A be symmetrizable and V ∈ O. Assume that for any primitive weights λ, µ of V such that λ > µ one has

2(λ + ρ|λ − µ) 6= (λ − µ|λ − µ). (10.15)

Then V is completely reducible. 10.4 Complete reducibility 135 Proof Every module in O is locally finite over Ω (this follows from the fact that Ω preserves weight spaces). It follows that V is a direct sum of generalized eigenspaces for Ω. We may assume that V is one such eigenspace, i.e. Ω − aI acts locally nilpotently on V for some a ∈ C. Now, let v be a primitive vector of weight λ. Then there is a submodule U such that Ω(v) = (|λ + ρ|2 − |ρ|2)v (mod U) (see Corollary 2.3.6). Hence |λ+ρ|2 −|ρ|2 = a, whence |λ+ρ|2 = |µ+ρ|2 for any two primitive weight λ and µ, which is equivalent to 2(λ + ρ|λ − µ) 6= (λ − µ|λ − µ), which contradicts (10.15). So we have proved that for two primitive weights λ and µ of V , the inequality λ ≥ µ implies λ = µ. This property actually implies complete 0 0 reducibility. Indeed, let V = ⊕λ∈LVλ be the space of singular vectors in V (i.e. vectors killed by n+), where L is the set of singular weights. 0 Let v be a nonzero vector from Vλ . Then U(g)v is irreducible. Indeed, if this is not the case, there is a non-trivial submodule U ( U(g)v, whose maximal µ weight would be singular and µ < λ. Therefore the U(g)- submodule V 0 generated by V 0 is completely reducible. It remains to show that V 0 = V . If this is not the case, consider a singular vector 0 0 0 v + V of weight µ in V/V . We have ei(v) ∈ V for all i, and ei(v) 6= 0 for some i. But then, in view of Lemma 9.1.3, there exists a primitive weight λ ≥ µ + αi > µ, giving a contradiction.

Theorem 10.4.2 Let g be symmetrizable. Then every integrable module from the category O is a direct sum of modules L(Λ), Λ ∈ P+.

Proof In view of Lemma 10.4.1, it suffices to check that if λ > µ are primitive weights and β := λ − µ, then

2hλ + ρ, ν−1(β)i= 6 (β|β).

Since the module is integrable, we have

∨ hλ, αi i ∈ Z+ (i = 1, . . . , n) for every primitive weight λ. But then

2hλ + ρ, ν−1(β)i − (β|β) = hλ + (λ − β) + 2ρ, ν−1(β)i = hλ + µ + 2ρ, ν−1(β)i > 0.

Corollary 10.4.3 136 Weyl-Kac Character formula (i) A g-module V ∈ O is integrable if and only if V is a direct sum of modules L(Λ) with Λ ∈ P+. (ii) Tensor product of a finite number of integrable highest weight modules is a direct sum of modules L(Λ) with Λ ∈ P+.

10.5 Macdonald’s identities From now on we assume that g is affine. Recall the Kac’s denominator formula (10.6): Y X (1 − e(−α))mult α = ε(w)e(w(ρ) − ρ).

α∈∆+ w∈W

◦ We first assume that g is an untwisted affine algebra, i.e. g = Lˆ(g). Then ◦ re ∆ = {α + nδ | α ∈∆, n ∈ Z} ∪ {nδ | n ∈ Z, n 6= 0}. ◦ ◦ ∆+ = {α + nδ | α ∈∆, n > 0}∪ ∆+ ∪{nδ | n > 0}. The left hand side of the denominator formula can be expressed as   Y Y ` Y (1 − e(−α)) 1 − e(−nδ) 1 − e(−α − nδ) . ◦ n>0 ◦ α∈∆+ α∈∆ We also recall that ◦ W = T W ,T = {tα | α ∈ M}, where 1 t (λ) = λ + hλ, ciα − (λ|α) + (α|α)hλ, ciδ, α 2 and ( P` α for types A(1), D(1), E(1); M = i=1 Z i ` ` ` P α + P p α for types B(1), C(1), F (1), G(1). αi long Z i αi short Z i ` ` 4 2 In calculating the right hand side of the denominator formula we recall that ◦ ◦ ∗ ∗ h =h ⊕Cc ⊕ Cd, h = (h) ⊕ Cδ ⊕ CΛ0. Accordingly λ ∈ h∗ can be written as

◦ λ =λ +hλ, ciΛ0 + hλ, diδ 10.5 Macdonald’s identities 137

◦ ◦ where λ∈ (h)∗. We also recall that ρ ∈ h∗ satisfies

∨ hρ, di = 0, hρ, αi i = 1 (0 ≤ i ≤ `).

P` ∨ ∨ Since hρ, ci = i=0 ai = h , we have

◦ ∨ ρ =ρ +h Λ0

◦ ◦ where ρ∈ (h)∗ satisfies

◦ ∨ hρ, αi i = 1 (1 ≤ i ≤ `). We now consider the right hand side of the denominator formula. Let ◦ ◦ ◦ w ∈ W have form w =w tα where w∈W and α ∈ M. Then ◦ w(ρ) − ρ = w tα(ρ) − ρ ◦ 1 = w ρ + h∨α − (ρ|α) + (α|α)h∨δ − ρ 2 ◦ ◦ 1 = w (ρ) − ρ + h∨ w (α) − (ρ|α) + (α|α)h∨δ 2 ◦ ◦ ◦ 1 = w (ρ) − ρ + h∨ w (α) − (ρ |α) + (α|α)h∨δ. 2 ◦ since w (Λ0) = Λ0 and hΛ0, αi = 0 for α ∈ M. Now the last expression equals ◦ ◦ ◦ ◦ ∨ ∨ ◦ ◦ ◦ (ρ +h α| ρ +h α) − (ρ | ρ) w (h∨α+ ρ)− ρ − δ. 2h∨ Denote c(λ) = (λ + ρ|λ + ρ) − (ρ|ρ).

◦ If λ ∈ (h)∗ then ◦ ◦ ◦ ◦ c(λ) = (λ+ ρ |λ+ ρ) − (ρ | ρ).

◦ We also write for λ ∈ (h)∗,

P ◦ ◦ ◦ ◦ ε(w)e(w(λ+ ρ)− ρ) χ (λ) = w∈W . P ◦ ◦ ◦ ε(w)e(w(ρ)− ρ) w∈W ◦ When λ ∈ (h)∗ is dominant integral, χ(λ) is the character of the ir- ◦ reducible g-module L(λ). However c(λ) and χ(z(λ) are defined for all 138 Weyl-Kac Character formula

◦ ◦ λ ∈ (h)∗. Using the denominator formula for g we have ∨ X X X ◦ ◦ ◦ ◦ −c(h α) ε(w)e(w(ρ) − ρ) = ε(w)e(w (h∨α+ ρ)− ρ)e( δ) 2h∨ w∈W α∈M ◦ ◦ w∈W ∨ X ◦ ◦ ◦ ◦ X ◦ −c(h α) = ε(w)e(w (ρ)− ρ) χ (h∨α)e( δ) 2h∨ ◦ ◦ α∈M w∈W Y X ◦ −c(h∨α) = (1 − e(−α)) χ (h∨α)e( δ). 2h∨ ◦ α∈M α∈∆+ We now put q = e(−δ) and equate the left- and right-hand sides of Kac’s denominator formula. We obtain:

Theorem 10.5.1 (Untwisted Macdonald’s Identity)   Y Y X ◦ ∨ ∨ (1 − qn)` (1 − qne(−α)) = χ (h∨α)qc(h α)/2h . n>0 ◦ α∈M α∈∆ We have seen that in the special case the Macdonald’s identity gives Jacobi’s triple product identity. We next state Macdonald’s identities for the twisted affine algebras. The right hand side looks the same as before, the only change being that the appropriate lattice M should be taken in each case, see Lemma 6.4.2.

Theorem 10.5.2 We have L = R, where

X ◦ ∨ ∨ R = χ (h∨α)qc(h α)/2h α∈M and L is given as follows: (2) A2` :   Y n ` Y n Y 2n−1 α  2n  (1−q ) (1−q e(−α)) 1−q 2 e(− ) 1−q e(−α) 2 n>0 ◦ ◦ α∈∆s α∈∆l (2) A2`−1: Y  Y Y  (1−q2n)`(1−q2n−1)`−1 (1−qne(−α)) (1−q2ne(−α)) n>0 ◦ ◦ α∈∆s α∈∆l 10.6 Specializations of Macdonald’s identities 139

(2) D`+1: Y  Y Y  (1−q2n)`(1−q2n−1) (1−qne(−α)) (1−q2ne(−α)) n>0 ◦ ◦ α∈∆s α∈∆l

(2) E6 : Y  Y Y  (1−q2n)4(1−q2n−1)2 (1−qne(−α)) (1−q2ne(−α)) n>0 ◦ ◦ α∈∆s α∈∆l

(3) D4 : Y  Y Y  (1−q3n)2(1−q3n−1)(1−q3n−2) (1−qne(−α)) (1−q3ne(−α)) , n>0 ◦ ◦ α∈∆s α∈∆l where  (2) (2) (2) (3) P` α for types A ,D ,E ,D  i= Z i 2`−1 `+1 6 4 M = P 1 α + P α for type A(2) αi long 2 Z i αi short Z i 2`  1 (2) 2 Zα1 for type A2

(2) Example 10.5.3 In the case A2 we get the following quintuple product identity: Y (1 − qn)(1 − qnz−1)(1 − qn−1z)(1 − q2n−1z−2)(1 − q2n−1z2) n>0 X 3n −3n+1 n(3n−1) = (z − z )q 2 . n∈Z

10.6 Specializations of Macdonald’s identities ◦ One way to specialize is simply replace e(α) with 1 for all α ∈∆. The ◦ ◦ result of such specialization in the expression χ (λ) is denoted d (λ), which is given essentially by the Weyl dimension formula (10.13): Q ◦ ◦ (λ+ ρ |α) ◦ α∈∆ d (λ) = + . Q ◦ ◦ (ρ |α) α∈∆+ Remember that Y ϕ(q) = (1 − qn) n>0 140 Weyl-Kac Character formula is the Euler function. Now the specialization of the left hand side of the ◦ ◦ untwisted Macdonald identity is ϕ(q)`+|∆| = ϕ(q)dimg. So we get

Theorem 10.6.1 (Macdonald’s ϕ-function identity)

◦ g X ◦ ∨ ∨ ϕ(q)dim = d (h∨α)qc(h α)/2h . α∈M

Example 10.6.2 (1) Type A1 : X ϕ(q)3 = (4n + 1)qn(2n+1). n∈Z (1) Type A2 : X 1 ϕ(q)8 = (6n − 3n + 1)(−3n + 6n + 1) 2 1 2 1 2 2 (n1,n2)∈Z 2 2 3n −3n1n2+3n +n1+n2) ×(3n1 + 3n2 + 2)q 1 2 . (1) Type C2 :

10 X ϕ(q) = (12n1 − 6n2 + 1)(−6n1 + 6n2 + 1) 2 (n1,n2)∈Z 2 2 6n −6n1n2+3n +n1+n2) ×(2n2 + 1)(3n1 + 1)q 1 2 . (1) Type G2 : X 1 ϕ(q)14 = (8n − 12n + 1)(−12n + 24n + 1) 15 1 2 1 2 2 (n1,n2)∈Z ×(3n1 − 3n2 + 1)(12n2 + 5)(−2n1 + 6n2 + 1) 2 2 4n −12n1n2+12n +n1+n2) ×(4n1 + 3)q 1 2 .

Theorem 10.6.3 (Macdonald’s twisted ϕ-function identities) We have L = R where X ◦ ∨ ∨ R = d (h∨α)qc(h α)/2h , α∈M and L is given as follows: (2) 1 2 2 2` 2` −3` 2 2` A2` : ϕ(q ) ϕ(q) ϕ(q ) ; (2) 2`2−`−1 2 2`+1 A2`−1: ϕ(q) ϕ(q ) ; (2) 2`+1 2 2`2−`−1 D`+1: ϕ(q) ϕ(q ) ; 10.7 On converegence of characters 141

(2) 26 2 26 E6 : ϕ(q) ϕ(q ) ; (3) 7 3 7 D4 : ϕ(q) ϕ(q ) .

Example 10.6.4 (2) Type A2 : 1 2 −1 2 2 X 1 n(3n+2) ϕ(q 2 ) ϕ(q) ϕ(q ) = (3n + 1)q 2 . n∈Z (2) Type D4 : X 1 ϕ(q)5ϕ(q2)5 = (8n − 4n + 1)(−8n + 8n + 1) 3 1 2 1 2 2 (n1,n2)∈Z 2 2 8n −8n1n2+4n +2n1+n2) ×(8n1 + 3)(2n2 + 1)q 1 2 .

10.7 On converegence of characters If we replace e(λ) in the formal character by the function

λ hλ,hi e : h → C, h 7→ e , we will get the (”informal”) character

ch V : h → C of the module V ∈ O. Of course, now the questions of convergence arise. Let Y (V ) be the set of elements h ∈ h such that the series converges absolutely. Note that

h ch V (h) = tr V e (h ∈ Y (V )).

Define the complexified Tits cone XC by

XC = {x + iy | x ∈ X, y ∈ hR}. Set X Y = {h ∈ h | (mult α)|e−hα,hi| < ∞},

α∈∆+ YN = {h ∈ h | Rehαi, hi > N for all i = 1, 2 . . . , n} (N ∈ R+).

Note by Proposition 3.4.1(iii) that Y ⊂ XC . We also have [ ¯ XC = w(Y0). (10.16) w∈W

Lemma 10.7.1 Let V be a highest weight module over g. Then 142 Weyl-Kac Character formula (i) Y (V ) is a convex set.

(ii) Y (V ) ⊃ Y ∩ Y0. (iii) Y (V ) ⊃ Yln n.

Proof (i) is clear from the convexity of the function |eλ| (a function f is called convex if its domain D is a convex set and f(tx + (1 − t)y) ≤ tf(x)+(1−t)f(y) for any x, y ∈ D and t ∈ [0, 1]. Now each |eλ| is defined on h, and if the series ch V converges at h1 and h2 then the convexity property guarantees that ch V converges at th1 + (1 − t)h2 (actually to a convex function)). Moreover, since V is a quotient of some M(Λ), we have

multV λ ≤ K(Λ − λ), which gives

X hλ,hi hΛ,hi X −hβ,hi (multV λ)|e | ≤ |e | K(β)|e | ∗ λ∈h β∈Q+ Y = |ehΛ,hi| (1 − |e−hα,hi|)− mult α,

α∈∆+ provided h ∈ Y0. The product converges for h ∈ Y . This proves (ii). Now (iii) follows from (ii) since Yln n ⊂ Y0 and also Yln n ⊂ Y in view of (1.16).

Lemma 10.7.2 Let T ⊂ XC be an open convex W -invariant set. Then [  T ⊂ convex hull w(T ∩ Y0) . w∈W

S ¯ Proof Note that T0 := w∈W w(Y0 \ Y0) is nowhere dense (interior of closure is empty) in X . Hence every h ∈ T lies in the convex hull of S C T \ T0 = w∈W (T ∩ Y0) applied to T = Int XC. For a convex set R in a real vector space denote by Int R the interior of R.

Proposition 10.7.3 Let Λ ∈ P+. Then (i) Y (L(Λ)) is a solid (i.e. has non-empty interior) convex W -

invariant set, which for every x ∈ Int XC contains tx for all sufficiently large t ∈ R. (ii) ch L(Λ) is a holomorphic function on Int Y (L(Λ)). (iii) Y (L(Λ)) ⊃ Int Y . 10.7 On converegence of characters 143

P w(Λ+ρ) (iv) The series w∈W ε(w)e converges absolutely on Int XC to a holomorphic function, and diverges absolutely on h \ Int XC. (v) Provided that A is symmetrizable, ch L(Λ) can be extended from

Y (L(Λ)) ∩ XC to a meromorphic function on Int XC.

Proof Set T = Int Y . Then T is open, convex (see the proof of Lemma 10.7.1(i)), and W -invariant. By Lemma 10.7.1(ii), we have Y (L(Λ)) ⊃ Y ∩Y0. Furthermore, Lemma 10.7.1(i) and Proposition 10.1.2 imply that Y (L(Λ) is a convex W -invariant set. Now (iii) follows from Lemma 10.7.2. 0 To finish the proof of (i), we have to show that X := {x ∈ Int XC | tx ∈ Y (L(Λ)) for all sufficiently large t ∈ R} coincides with Int XC. But 0 again X is W -invariant, convex, and contains Y0 by Lemma 10.7.1(iii). 0 S So X contains the convex hull of w∈W w(Y0) = Int XC, the last equal- ity being true by Lemma 10.7.2. The convexity of |eλ| implies that the absolute convergence is uniform on compact sets. This implies (ii). (iv) By Proposition 3.4.1(ii), all w(Λ + ρ) − (Λ + ρ) are distinct, and also w(Λ + ρ) − (Λ + ρ) ∈ −Q+. Hence we have for all h ∈ Y0: X X | ε(w)ehw(Λ+ρ)−(Λ+ρ),hi| ≤ |e−hα,hi| < ∞.

w∈W α∈Q+

Thus the region of absolute convergence of our series contains Y0 and is convex and W -invariant, so it contains Int XC, as above. On the other 0 re hand, let h ∈ h \ XC. Then the set ∆ := {α ∈ ∆+ | Rehα, hi ≤ 0} is infinite by Proposition 3.4.1(iii),(vi), and for every α ∈ ∆0 we have |ehrα(Λ+ρ),hi| > |ehΛ+ρ,hi|, proving divergence at h. (v) follows from (iv) and the . 11 Irreducible Modules for affine algebras

Throughout g is affine.

11.1 Weights of irreducible modules ∗ ∗ Let λ ∈ h . Since Λ0, Λ1,..., Λ`, δ form a basis of h , we can write

λ = s0Λ0 + s1Λ2 + ··· + s`Λ` + sδ (ci, c ∈ C).

Note that λ ∈ P if and only if all ci ∈ Z, and λ ∈ P+ if and only if all ci ∈ Z≥0. Let λ ∈ P+. Then every weight µ of L(Λ) is of the form λ − m0α0 − m1α1 −· · ·−m`α` for some mi ∈ Z≥0. Since hαi, ci = 0 we have hµ, ci = P` ∨ ∨ hλ, ci for any weight µ of L(λ). Now, hλ, ci = i=0 ai hλ, αi i ∈ Z≥0. This non-negative integer hλ, ci is referred to as the level of the module L(λ).

Proposition 11.1.1 If L(λ) has level zero, then λ = sδ for some s ∈ C and dim L(λ) = 1.

Proof The first statement is clear and from the Weyl-Kac character formula we get ch L(sδ) = e(sδ).

From now on we concentrate on higher levels.

Theorem 11.1.2 Let λ ∈ P+ and hλ, ci > 0. Then µ ∈ P is a weight of L(λ) if and only if there exists w ∈ W such that w(µ) ∈ P+ and w(µ) ≤ λ.

Proof Assume that µ is a weight of L(λ). We know that then w(µ) is

144 11.1 Weights of irreducible modules 145 also a weight of L(λ) for all w ∈ W . Take w for which the height of ∨ λ − w(µ) is minimal. The minimality shows that hw(µ), αi i ≥ 0, i.e. w(µ) ∈ P+. Conversely, assume that µ ∈ P+ and µ ≤ λ. We have to prove that P` µ is a weight of L(λ). Let µ = λ − α where α = i=0 kiαi. We may assume α 6= 0. We first show that every connected component of supp α contains an ∨ i with hλ, αi i > 0. Otherwise there exists a connected component S of ∨ supp α with hλ, αi i = 0 for all i ∈ S. We have L(λ)µ ⊂ U(n−)−αvλ, and by the PBW theorem, U(n−)−α is spanned by the monomials of the form Q ekβ where P k β = α and each β involves simple β∈∆+ −β β roots which lie in the same connected component of supp α. Now, the e−β with simple roots in different connected components commute with each other, so we may bring the e−β with simple roots in S to the right of the above product. But for such β we have e−βvλ = 0. It follows that U(n−)−αvλ = 0, giving a contradiction. Now let Ψ be defined by

Ψ = {γ ∈ Q+ | γ ≤ α, λ − γ is a weight of L(λ)}. The set Ψ is finite. Let β ∈ Ψ be an element of maximal height. Then P β ≤ α. We need to show that β = α. Let β = miαi. We have mi ≤ ki for all i. Let I = {0, 1, . . . , ` and J = {i ∈ I | ki = mi}. Again, we need to show that J = I. If not, consider the non-empty subset of I given by supp α \ (supp α ∩ J). This set splits into connected components. Let M be one of them and take i ∈ M. Then λ − β is a weight of L(λ) but ∨ ∨ λ−β −αi is not. Thus hλ−β, αi i ≥ 0. Also lanµ, αi i ≥ 0 since µ ∈ P+ ∨ and so hλ − α, αi i ≥ 0. Thus we have ∨ ∨ ∨ hα, αi i ≤ hλ, αi i ≤ hβ, αi i. P Let γ = j∈M (kj − mj)αj. We have kj − mj > 0 for all j ∈ M. We also have ∨ X hγ, αi i = (kj − mj)aij. j∈M

∨ ∨ However hγ, αi i = hα − β, αi i since supp (α − β) = supp α \ J and M is ∨ a connected component of supp α \ J. Thus hγ, αi i ≤ 0 for each i ∈ M. Let AM be be the principal minor corresponding to M. Let u be the column vector with entries ki − mi for i ∈ M. Then we have u > 0 and Au ≤ 0. It follows that AM does not have finite type, i.e. M = I. Thus supp α = I and J = ∅. But then for all i ∈ I, λ − β is a weight 146 Irreducible Modules for affine algebras

∨ of L(λ) but λ − β − αi is not. Thus hλ − β, αi i ≤ 0 for all i ∈ I. ∨ ∨ ∨ Hence hα, αi i ≤ hλ, αi i ≤ hβ, αi i for all i ∈ I. We now have u > 0 and Au ≤ 0. Since A is affine we deduce that Au = 0. This shows ∨ ∨ ∨ ∨ that hα, αi i = hβ, αi i for all i ∈ I. Hence hα, αi i = hλ, αi i for all ∨ i, i.e. hµ, αi i = 0. But then we have hµ, ci = 0, and so hλ, ci = 0, contradiction.

Corollary 11.1.3 If µ is a weight of L(λ) then µ − δ is also a weight.

Proof Since µ is a weight there exists w ∈ W such that w(µ) ∈ P+. Then w(µ − δ) = w(µ) − δ ∈ P+. Since w(µ) − δ ≤ λ it follows from the theorem that w(µ) − δ is a weight of L(λ).

It follows from the corollary that µ − iδ is a weight for all positive integers i. On the other hand, there exist only finitely many positive integers i such that µ + iδ ≤ λ.

Definition 11.1.4 A weight µ of L(λ) is called an maximal weight if µ + δ is not a weight.

Corollary 11.1.5 For each weight µ of L(λ) there are a unique maximal weight ν and a unique non-negative integer i such that µ = ν − iδ.

Proof Consider the sequence µ, µ + iδ, µ + 2δ, . . . . There exists i such that µ + iδ is a weight of L(λ) but µ + (i + 1)δ is not. Let ν = µ + iδ. Then ν is an maximal weight of L(λ) and µ = ν − iδ. If µ = ν0 − i0δ where ν0 is an maximal weight and i0 is a non-negative integer we show that ν = ν0 and i = i0. Otherwise we may assume that i < i0. Then ν0 = ν + (i0 − i)δ is a weight. Then ν + δ is also a weight. Contradiction.

A of weights of L(λ) is a set ν, ν − δ, ν − 2δ, . . . where ν is an maximal weight. Each weight lies in a unique string of weights.

Lemma 11.1.6 The set of maximal weights of L(λ) is invariant under the Weyl group.

Proof Let w ∈ W . Then µ is a weight if and only if w(µ) is a weight. Thus if µ is an maximal weight then w(µ) is a weight but w(µ) + δ = w(µ + δ) is not a weight. 11.1 Weights of irreducible modules 147 Corollary 11.1.7 Each maximal weight of L(λ) has form w(µ) where w ∈ W and µ is a dominant maximal weight. Recall the fundamental alcove ◦ C = {λ ∈ h∗ | (λ|α ) ≥ 0 for 1 ≤ i ≤ ` and (λ|θ) ≤ 1}. af R i We also recall that ◦ ∗ ∗ h = h ⊕ (CΛ0 ⊕ Cδ), and for λ ∈ h∗ we have ¯ −1 λ = λ + hλ, ciΛ0 + a0 hλ, diδ ◦ where λ¯ ∈ h∗. Let Q¯ be the set of λ¯ given by λ in the root lattice Q.

Proposition 11.1.8 Let λ ∈ P+ have level k > 0. Then the projec- tion map µ 7→ µ¯ gives a bijection between the set of dominant maximal ¯ weights of L(λ) and (λ + Q¯) ∩ kCaf .

P Proof Let µ be a dominant weight of L(λ). Then µ = λ − i miαi for ¯ P ¯ ¯ mi ∈ Z≥0. Henceµ ¯ = λ − ( i miαi) and soµ ¯ ∈ λ + Q. −1 ∨ Now µ =µ ¯ + kΛ0 + a0 hµ, diδ. Since µ ∈ P+ we have hµ, αi i ≥ 0 for ∨ ∨ i = 0, . . . , `. Now hΛ0, αii = hδ, αi i = 0 for i = 1, . . . , `. So hµ,¯ αi i ≥ 0 and hence (¯µ|αi) ≥ 0 for i = 1, . . . , `. We also have ∨ ∨ (¯µ|θ) = (µ|θ) = (µ|δ − a0α0) = hµ, ci − hµ, α0 i = k − hµ, α0 i. ∨ Since hµ, α0 i ≥ 0 we have (¯µ|θ) ≤ k. Thusµ ¯ ∈ kCaf . Hence the ¯ projection maps dominant maximal weights of L(λ) into (λ + Q¯) ∩ kCaf . ¯ We next show that this map is surjective. Let ν ∈ (λ + Q¯) ∩ kCaf . −1 −1 −1 Sinceα ¯i = αi for i = 1, . . . , ` andα ¯0 = −a0 θ + a0 δ = −a0 θ we have ¯ −1 ν = λ + k1α1 + ··· + k`α` − k0a0 θ (ki ∈ Z).

Since θ = a1α1 + ··· + a`α` we have ¯ −1 ν = λ + (m − k0a0 )θ − (ma1 − k1)α1 − · · · − (ma` − k`)α`.

Choose m ∈ Z with m ≥ ki/ai for i = 0, . . . , `. Then ¯ −1 ν = λ + (m0a0 )θ − m1α1 − · · · − m`α` where mi = mai − ki are non-negative integers for i = 0, 1, . . . , `. Let P` µ = λ − i=0 miαi. Then ¯ −1 µ¯ = λ + (m0a0 )θ − m1α1 − · · · − m`α` = ν. 148 Irreducible Modules for affine algebras

We next show that µ ∈ P+. For i = 1, . . . , ` we have

∨ ∨ ∨ hµ, αi i = hµ,¯ αi i = hν, αi i ≥ 0. Also ∨ ∨ hµ, α0 i = hµ,¯ c − a0θ i = k − (¯µ|θ) = k − (ν|θ) ≥ 0.

Hence µ ∈ P+ and also µ ≤ λ, so µ is a weight of L(λ). Replacing µ by the maximal weight in the chain of weights containing µ we may assume that µ is a dominant maximal weight. Thus our map is surjective. To show that the map is injective, let µ, µ0 be dominant maximal ¯0 −1 weights of L(λ) withµ ¯ = µ . Writing µ =µ ¯ + kΛ0 + a0 hµ, diδ and 0 ¯0 −1 0 0 −1 0 µ = µ + kΛ0 + a0 hµ , diδ, we get µ − µ = a0 (hµ, di − hµ , di)δ. Now 0 0 −1 0 λ − µ, λ − µ ∈ Q, hence µ − µ ∈ Q and a0 (hµ, di − hµ , di)δ ∈ Q. This −1 0 0 shows that a0 (hµ, di − hµ , di) ∈ Z. Thus µ = µ + rδ for r ∈ Z. Since µ, µ0 are both maximal we must have r = 0, i.e. µ = µ0.

Corollary 11.1.9 The set of dominant maximal weights of L(λ) is fi- nite.

◦ Proof Q¯ is a lattice in h∗ and λ¯ + Q¯ is a coset of that lattice. On the ¯ other hand the set kCaf is bounded. Hence the intersection (λ+Q¯)∩kCaf must be finite.

We now have a procedure for describing all weights of L(λ). First ¯ determine the finite set (λ + Q¯) ∩ kCaf where k = hλ, ci. For each element ν in this finite set there is a unique dominant weight µ of L(λ) withµ ¯ = ν. This gives the set of all dominant maximal weights. By applying elements of the Weyl group to these we obtain all maximal weights. Finally, by subtracting positive integral multiples of δ from the maximal weights we obtain all weights of L(λ). We next consider the weights in a string µ, µ − δ, µ − 2δ, . . . . We wish to show that the multiplicities of these weights form an increasing function as we move down the string. In order to do this we introduce the subalgebra M t = gmδ. m∈Z Thus t is spanned by h and the root spaces of the imaginary roots. This algebra has a triangular decomposition

t = t− ⊕ h ⊕ t+, 11.1 Weights of irreducible modules 149 P where t± = ±i>0 giδ. One can define the category O of t-modules in the usual manner. One can also define Verma modules for t:

X ∗ M(λ) = U(t)/(U(t)t+ + U(t)(x − hλ, xi)(λ ∈ h ). x∈h Consider the expression

X X (j) (j) Ω0 = 2 e−iδeiδ i>0 j

(j) (j) where {eiδ } is a basis of giδ and {e−iδ} is the dual basis of g−iδ. Thus

(j) (k) (j) (k) (eiδ |e−iδ) = δjk, [eiδ , e−iδ] = δjkic.

Although the expression for Ω0 is an infinite sum the action of Ω0 on any t-module in category O is well defined, since all but a finite number of the terms will act as zero.

Lemma 11.1.10 Let λ ∈ h∗ and M(λ) be the associated Verma module for t. Let u ∈ U(t)mδ where m ∈ Z\{0}. Then Ω0u−uΩ0 acts on M(λ) in the same way as −2mhλ, ciu.

(j) (k) (k) Proof Note that a basis element erδ commutes with all eiδ , e−iδ except (j) for e−rδ. So

(j) (j) (j) (j) (j) (j) (j) (j) Ω0u − uΩ0 = 2(e−rδerδ erδ − erδ e−rδerδ ) = −2rcerδ = −2rhλ, cierδ

(j) on M(λ). Thus the lemma holds if u is a basis element emδ. It follows that the lemma also holds if u ∈ gmδ. Next suppose that u = u1u2 where on M(λ) we have

Ω0ui − uiΩ0 = −2rihλ, ciui (i = 1, 2). Then on M(λ)

Ω0u − uΩ0 = Ω0u1u2 − u1u2Ω0

= u1Ω0u2 − 2r1hλ, ciu − u1Ω0u2 − 2r2hλ, ciu

= −2(r1 + r2)hλ, ciu. The required result now follows from the PBW theorem.

Proposition 11.1.11 Let hλ, ci= 6 0. Then the Verma module M(λ) for t is irreducible. 150 Irreducible Modules for affine algebras Proof Suppose if possible that M(λ) has a proper submodule K. Let v be a highest weight vector of K. Then v ∈ M(λ)λ−mδ for some positive integer m. Thus v = uvλ for some u ∈ U(t−)−mδ. By the previous lemma,

(Ω0u − uΩ0)vλ = −2mhλ, ciuvλ.

Thus Ω0v − uΩ0vλ = −2mhλ, civ. Now Ω0vλ = 0 and Ω0v = 0 since vλ and v are highest weight vectors. By assumption this implies v = 0 giving a contradiction.

We now restrict the g-module L(λ) to t.

Proposition 11.1.12 Suppose λ ∈ P+ with hλ, ci > 0. Then the t-module L(λ) is completely reducible. Its irreducible components are Verma modules for t.

Proof Let

U = {v ∈ L(λ) | t+v = 0}, and pick a basis B of U consisting of weight vectors. Suppose that v ∈ B has weight µ. Then we have a surjective homomorphism M(µ) → U(t)v. Now hµ, ci = hλ, ci > 0, so M(µ) is irreducible and the homomorphism P M(µ) → U(t)v is an isomorphism. Let V = v∈B U(t)v. This sum of modules is a direct sum. Indeed, consider X U(t)v ∩ U(t)v0. v0∈B,v06=v Since U(t)v is irreducible the intersection is either trivial or U(t)v. In P 0 the latter case v ∈ v0∈B,v06=v U(t)v . This is impossible since

X 0 X 0 U ∩ U(t)v = Cv . v0∈B,v06=v v06=v

Thus V = ⊕v∈BU(t)v. We wish to show that V = L(λ). If not consider the t-module L(λ)/V . Let µ be a weight of L(λ)/V such that µ+iδ is not a weight for any i > 0. Then t+L(λ)µ ⊂ Vµ. Now consider the map Ω0 : L(λ) → L(λ). Since the action of Ω0 preserves weight spaces (it is ”of weight 0”), we have Ω0 : L(λ) → L(λ). So L(λ)µ decomposes as a direct sum of generalized eigenspaces

L(λ)µ = ⊕ζ∈C(L(λ)µ)ζ . 11.2 The fundamental modules for sbl2 151

Since L(λ)µ 6⊂ V , there exists ζ ∈ C such that (L(λ)µ)ζ 6⊂ V . Choose v ∈ (L(λ)µ)ζ with v 6∈ V . Then

k (Ω0 − ζ1) v = 0 for k large enough, and Ω0v ∈ V since t+L(λ)µ ⊂ V . If ζ 6= 0 the polynomials (t − ζ)k and t are coprime so we can deduce v ∈ V giving contradiction. Hence ζ = 0.

Now t+v 6= 0 since v 6∈ V and hence v 6∈ U. So there exists m > 0 0 0 and u ∈ U(t+)mδ with uv 6= 0 and t+uv = 0. Let v = uv. Then v 6= 0 0 and Ω0v = 0. Now all weights ν of L(λ) satisfy hν, ci = hλ, ci so by Lemma 11.1.10, we have

Ω0uv − uΩ0v = −2mhλ, ciuv, that is 0 (Ω0 + 2mhλ, ci)v = uΩ0v. It follows that

2 0 2 (Ω0 + 2mhλ, ci) v = (Ω0 + 2mhλ, ci)uΩ0v = uΩ0v, and continuing thus we obtain

k 0 k (Ω0 + 2mhλ, ci) v = uΩ0 v = 0. k But the polynomials (t + 2mhλ, ci) and t are coprime. Thus (Ω0 + k 0 0 0 2mhλ, ci) v = 0 and Ω0v = 0 imply v = 0 giving a contradiction.

Proposition 11.1.13 Let µ be a weight of L(λ) where λ ∈ P+ with hλ, ci > 0. Then dim L(λ)µ−δ ≥ dim L(λ)µ.

Proof Choose a non-zero element x ∈ g−δ and consider the action of x on the t-module L(λ). Since L(λ) is a direct sum of Verma modules, it is free as a module over U(t−), so x acts on it injectively. Thus we get an injective map L(λ)µ → L(λ)µ−δ.

11.2 The fundamental modules for sbl2

By symmetry it suffices to determine the character of L(Λ0). Note that ¯ ∨ ∨ α¯0 = −α1 and the lattice Q = Zα1, θ = α1, θ = α1 . The fundamental alcove is given by ◦ ◦ C = {λ ∈ h∗ | hλ, α i ≥ 0, hλ, θ∨i ≤ 1} = {λ ∈ h∗ | 0 ≤ hλ, α i ≤ 1}. af R 1 R 1 152 Irreducible Modules for affine algebras Thus ¯ ¯ (Λ0 + Q) ∩ Caf = {mα1 | m ∈ Z, 0 ≤ 2m ≤ 1} = {0}.

Thus L(Λ0) has only one dominant maximal weight which must be the highest weight Λ0. The other maximal weights are transforms of Λ0 ◦ under the affine Weyl group W . The stabilizer of Λ0 in W is W = hr1i. So the maximal weights have the form

2 tmα1 (Λ0) = Λ0 + mα1 − m δ (m ∈ Z)

The set of all weights of L(Λ0) is

2 {Λ0 + mα1 − m δ − kδ | m ∈ Z, k ∈ Z≥0}.

2 The weights Λ0 + mα1 − m δ have multiplicity 1 and the multiplicity 2 Λ0 +mα1 −m δ −kδ is independent of m. To determine the multiplicity of these weights consider Weyl-Kac formula P ε(w)e(w(Λ0 + ρ) − ρ) ch L(Λ ) = w∈W . 0 Q (1 − e(−α))mult α α∈∆+ Now X X X ◦ ◦ ε(w)e(w(Λ0 + ρ) − ρ) = ε(w)e(w tnα1 (Λ0 + ρ) − ρ). w∈W ◦ ◦ n∈ w∈W Z

1 Now ρ = 2 α1 + 2Λ0, hence 1 t (Λ + ρ) = 3Λ + (3n + )α − (3n2 + n)δ, nα1 0 0 2 1 so 2 tnα1 (Λ0 + ρ) − ρ = Λ0 + 3nα1 − (3n + n)δ. Also 1 r t (Λ + ρ) = 3Λ − (3n + )α − (3n2 + n)δ, 1 nα1 0 0 2 1 so 2 r1tnα1 (Λ0 + ρ) − ρ = Λ0 − (3n + 1)α1 − (3n + n)δ. Thus

X X  2 ε(w)e(w(Λ0+ρ)−ρ) = e(Λ0) e(3nα1)−e(−(3n+1)α1) e(−(3n +n)δ). w∈W n∈Z 11.2 The fundamental modules for sbl2 153

1/2 We write e(−α1) = z and e(−δ) = q . Then our expression is

X −3n 3n+1 n(3n+1) e(Λ0) z − z q 2 . n∈Z Now we can factor this expression using Macdonald’s identity for type (2) A2` . So we get

Y n n −1 n−1 2n−1 −2 2n−1 2 e(Λ0) (1 − q )(1 − q z )(1 − q z)(1 − q z )(1 − q z ) n>0 Y n n −1 n 2n−1 −2 2n−1 2 = e(Λ0)(1 − z) (1 − q )(1 − q z )(1 − q z)(1 − q z )(1 − q z ) n>0 Y n n −1 2n−1 −1 n 2n−1 = e(Λ0)(1 − z) (1 − q )(1 − q z )(1 − q 2 z )(1 − q z)(1 − q 2 z) n>0 2n−1 −1 2n−1 ×(1 + q 2 z )(1 + q 2 z) Y k/2 −1 k/2 Y n 2n−1 −1 2n−1 = e(Λ0)(1 − z) (1 − q z )(1 − q z) (1 − q )(1 + q 2 z )(1 + q 2 z). k>0 n>0

(1) We now make use of the Macdonald’s identity for type A1 :

Y n n−1 0 n 0−1 X m 0m m(m−1) (1 − q )(1 − q z )(1 − q z ) = (−1) z q 2 . n>0 m∈Z

0 −1 1 Taking z = −z q 2 we obtain

2 Y n 2n−1 −1 2n−1 X −m m (1 − q )(1 + q 2 z )(1 + q 2 z) = z q 2 . n>0 m∈Z Hence

2 Q k/2 −1 k/2 P −m m e(Λ0)(1 − z) (1 − q z )(1 − q z) z q 2 ch L(λ) = k>0 m∈Z Q k/2 −1 k/2 k/2 (1 − z) k>0(1 − q z )(1 − q z)(1 − q ) P 2 n∈ e(Λ0 + nα1 − n δ) = QZ k>0(1 − e(−kδ)) X 2 X = e(Λ0 + nα1 − n δ) p(k)e(−kδ) n∈Z k≥0 X X 2 = p(k)e(Λ0 + nα1 − n δ − kδ). n∈Z k≥0 Hence:

Proposition 11.2.1 The weights of the fundamental module 154 Irreducible Modules for affine algebras Proof Bibliography

[C] R. Carter, Lie Algebras of Finite and Affine Type. [K] V. Kac, Infinite Dimensional Lie Algebras.

155