ROOT SYSTEMS

GEORGE H. SEELINGER

1. Introduction

A root system is a configuration of vectors in some Euclidean space satisfying nice properties concerning reflections [Wik17]. In particular, root systems are acted upon by associated Weyl groups of reflections. While root systems have some interesting features in their own right, they are intimately related to the structure of semisimple Lie algebras. However, this write-up seeks simply to discuss what is known about root systems in their own right. A lot of the definitions and treatments are lifted from [Hum72], and thus I do not lay any claim to the originality of the statements of definitions, theorems, etc. I view these notes mainly as a good companion to be read along with [Hum72].

2. Definitions and Basic Concepts

Given a vector α ∈ E, we can define a function σα : E → E that reflects every vector over the hyperplane perpendicular to α. Explicitly, we get

σα(β) = β − (2(β, α)/(α, α)) α

where β ∈ E and (·, ·) is the standard inner product on E. To see this works, note that σα(α) = −α as desired, and if (β, α) = 0, then σα(β) = β, leaving all the vectors in the hyperplane fixed.

2.1. Definition. For notational convenience, we will say

⟨β, α⟩ = 2(β, α)/(α, α).

It is important to note that ⟨·, ·⟩ is linear only in the first variable.

2.2. Definition. [Hum72, p 42] A root system is a set of vectors Φ in Euclidean space E such that
• Φ is finite, span Φ = E, and 0 ∉ Φ.
• If α ∈ Φ, then the only multiples of α in Φ are ±α.
• If α ∈ Φ, then σα(Φ) = Φ.
• If α, β ∈ Φ, then ⟨β, α⟩ ∈ Z.
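As a quick sanity check, the following Python sketch (my own illustration, not from [Hum72]) implements the reflection σα and the pairing ⟨·, ·⟩ for vectors given as tuples, and verifies the two properties just noted:

```python
def dot(u, v):
    """Standard inner product on Euclidean space."""
    return sum(a * b for a, b in zip(u, v))

def pairing(beta, alpha):
    """<beta, alpha> = 2(beta, alpha) / (alpha, alpha); linear in beta only."""
    return 2 * dot(beta, alpha) / dot(alpha, alpha)

def reflect(alpha, beta):
    """sigma_alpha(beta) = beta - <beta, alpha> alpha."""
    c = pairing(beta, alpha)
    return tuple(b - c * a for a, b in zip(alpha, beta))

alpha = (1.0, 0.0)
print(reflect(alpha, (1.0, 0.0)))   # sigma_alpha(alpha) = -alpha: (-1.0, 0.0)
print(reflect(alpha, (0.0, 2.0)))   # orthogonal vectors are fixed: (0.0, 2.0)
```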

Date: February 2017.

2.3. Example. Consider the collection of vectors in R² of equal length with angle from the origin given in multiples of 60 degrees. Such a collection is a root system, as can be easily checked.
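The claim that this collection is a root system can also be checked numerically. The sketch below (my own check, using a small tolerance for floating-point error) verifies the closure-under-reflection and integrality axioms for these six vectors; the remaining axioms are immediate by inspection:

```python
import math

# The six unit vectors at multiples of 60 degrees.
roots = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pairing(beta, alpha):
    return 2 * dot(beta, alpha) / dot(alpha, alpha)

def reflect(alpha, beta):
    c = pairing(beta, alpha)
    return tuple(b - c * a for a, b in zip(alpha, beta))

def find(v):
    """True if v coincides with some root, up to rounding error."""
    return any(max(abs(a - b) for a, b in zip(v, r)) < 1e-9 for r in roots)

# Each sigma_alpha permutes the roots, and all pairings are integers.
assert all(find(reflect(a, b)) for a in roots for b in roots)
assert all(abs(pairing(b, a) - round(pairing(b, a))) < 1e-9
           for a in roots for b in roots)
print("A2 axioms verified")
```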

Figure 1. Root system A2 from [Wik17]

These hyperplane reflections form a “nice” action on the root system. Furthermore, they generate a group, since each of the described reflections is its own inverse. Thus, we have the following definition.

2.4. Definition. The Weyl group of a root system is the group generated by the reflections σα, with group operation given by composition. Explicitly,

W = ⟨{σα | α ∈ Φ} | σα² = 1 for all α ∈ Φ⟩

It is a straightforward exercise to show the following fact.

2.5. Lemma. [Hum72, p 43] If σ ∈ GL(E) leaves Φ invariant and σα ∈ W is a reflection, then

σσασ⁻¹ = σσ(α)

and also ⟨σ(β), σ(α)⟩ = ⟨β, α⟩.

2.6. Definition. We say a root system is reducible if there exist root systems Φ1, Φ2 ⊆ Φ such that Φ = Φ1 ⊔ Φ2 and span(Φ) = span(Φ1) ⊕ span(Φ2). Any root system that is not reducible is called irreducible. In general, we restrict our attention to irreducible root systems, since they are the basic building blocks of root systems. Now, we also have a convenient piece of notation.

2.7. Definition. For α ∈ Φ, a root system, we say the coroot α∨ is given by

α∨ = 2α/(α, α)

It is worth noting that Φ∨ is also a root system, with an associated Weyl group that is isomorphic to the Weyl group of Φ.

2.8. Definition. We define the rank of a root system Φ to be the dimension of the Euclidean space it spans.
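Lemma 2.5 can be checked numerically as well. The sketch below (my own illustration) verifies the conjugation formula σσασ⁻¹ = σσ(α) where σ is a rotation by 60 degrees, which leaves the A2 root system invariant:

```python
import math

def refl(alpha):
    """Matrix of sigma_alpha = I - 2 alpha alpha^T / (alpha, alpha)."""
    x, y = alpha
    n = x * x + y * y
    return ((1 - 2 * x * x / n, -2 * x * y / n),
            (-2 * x * y / n, 1 - 2 * y * y / n))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def apply(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

t = math.pi / 3   # rotation by 60 degrees preserves A2
rot = ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))
rot_inv = ((math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t)))

alpha = (1.0, 0.0)
lhs = mul(rot, mul(refl(alpha), rot_inv))   # sigma sigma_alpha sigma^{-1}
rhs = refl(apply(rot, alpha))               # sigma_{sigma(alpha)}
assert all(abs(a - b) < 1e-9 for ra, rb in zip(lhs, rhs) for a, b in zip(ra, rb))
print("conjugation formula verified")
```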

The axioms for a root system are fairly restrictive, and so it is relatively easy to classify all the possible root systems that span R¹ and R². Namely, since (α, β) = ‖α‖ ‖β‖ cos θ,

⟨α, β⟩⟨β, α⟩ = 4 cos² θ ∈ Z≥0

Thus, we have the following.

2.9. Lemma. Let α, β ∈ Φ with β ≠ ±α. Then ⟨α, β⟩⟨β, α⟩ ∈ {0, 1, 2, 3}.

Proof. This follows immediately from ⟨α, β⟩ and ⟨β, α⟩ being integers and 4 cos² θ ≤ 4, with equality only when β = ±α. □

Thus, roots can only ever have certain angles between them. To see this, examine Table 1 on page 45 of [Hum72]. However, these restrictions lead us to a useful application for determining when sums and differences of roots are roots.

2.10. Lemma. [Hum72, p 45] Let α, β ∈ Φ be nonproportional roots.
• If (α, β) > 0, then α − β ∈ Φ.
• If (α, β) < 0, then α + β ∈ Φ.

Proof. Note (α, β) > 0 ⟺ ⟨α, β⟩ > 0. By the table of values, the acuteness of the angle requires that either ⟨α, β⟩ or ⟨β, α⟩ is 1. Thus, by the axioms of the root system, ⟨α, β⟩ = 1 ⟹ σβ(α) = α − β ∈ Φ, or ⟨β, α⟩ = 1 ⟹ β − α ∈ Φ ⟹ α − β ∈ Φ. The second statement follows by applying the first to α and −β. □

An application of this lemma is to justify the existence of unbroken “strings” of roots of the form β + nα for n ∈ Z, q ≤ n ≤ r. [TODO: Write this out more formally.]

2.11. Definition. [Hum72, p 47] Given ∆ ⊆ Φ, we say that ∆ is a simple system or base if
• ∆ is a basis for span Φ
• Every β ∈ Φ can be written as β = Σ_{α∈∆} kα α with all kα ∈ Z≥0 or all kα ∈ Z≤0.

We note that ∆ need not be unique, and so whenever any definition depends on ∆, there is a choice being made that must be accounted for.

2.12. Definition. The height of β ∈ Φ relative to ∆ is given by

ht(β) = Σ_{α∈∆} kα

where β = Σ_{α∈∆} kα α. Using this notion of height, we can define positive roots relative to ∆ as those with kα ≥ 0 for all α ∈ ∆, with a similar notion for negative roots. Note that this partitions Φ = Φ+ ⊔ Φ−. We also note that we can define a partial order on the roots in Φ with a fixed ∆.

2.13. Definition. [Hum72, p 47] We say µ ≺ λ if and only if λ − µ is a sum of positive roots (equivalently, of simple roots) or λ = µ.

We note also the following relationship between simple roots.

2.14. Lemma. [Hum72, p 47] If ∆ is a base of Φ, then (α, β) ≤ 0 for α ≠ β in ∆, and α − β is not a root.

Proof. For α, β ∈ ∆, suppose (α, β) > 0. Then, α − β ∈ Φ by Lemma 2.10, which violates the second axiom of a base. □

In essence, this lemma gives us that the angle between two distinct simple roots is obtuse or right.

2.15. Definition. Let us define the half sum of positive roots as

ρ := (1/2) Σ_{β≻0} β.

This vector (not necessarily a root) has some important properties, such as the following.

2.16. Proposition. [Hum72, p 50] σα(ρ) = ρ − α for all α ∈ ∆.

Now, we wish to show the following existence theorem.

2.17. Theorem. Every root system has a base.

To do this, we will make use of the following fact.

2.18. Proposition. The union of finitely many hyperplanes cannot be all of Rⁿ for n ≥ 2.

As a result of this proposition, we may choose a vector in our Euclidean space E which does not lie in any hyperplane perpendicular to a root.

2.19. Definition. We call a vector z ∈ E regular if z ∈ E \ ⋃_{α∈Φ} Pα, where Pα is the hyperplane orthogonal to the root α, and singular otherwise.

2.20. Definition. Given a vector z ∈ E, we define

Φ+(z) := {α ∈ Φ | (z, α) > 0}

We also say α ∈ Φ+(z) is decomposable if α = β1 + β2 for some βi ∈ Φ+(z). Otherwise, it is indecomposable.

2.21. Proposition. If z ∈ E is regular, then Φ = Φ+(z) ∪ Φ+(−z).

Thus, following [Hum72], we rewrite our theorem above in a more explicit manner.

2.22. Theorem. Let z ∈ E be regular. Then, the set ∆(z) of all indecomposable roots in Φ+(z) is a base of Φ, and every base of Φ has the form ∆(z′) for some regular z′ ∈ E.

Proof. Let dim E ≥ 2; otherwise, the theorem is immediate. Now, β ∈ Φ ⟹ β ∈ Φ+(z) or −β ∈ Φ+(z) by the proposition above, so we need only show that every β ∈ Φ+(z) can be written as

β = Σ_{α∈∆(z)} kα α,   kα ∈ Z≥0

Assume this is not the case, and pick a β ∈ Φ+(z) failing it for which (z, β) is as small as possible. Certainly β ∉ ∆(z) (else we are done), so there are β1, β2 ∈ Φ+(z) such that

β = β1 + β2 ⟹ (z, β) = (z, β1) + (z, β2)

with (z, β1), (z, β2) > 0, so each (z, βi) < (z, β). By minimality of (z, β), each βi is a nonnegative integral linear combination of elements of ∆(z), but then so is β = β1 + β2, a contradiction.

Now, we need only show the elements of ∆(z) are linearly independent. First note that (α, β) ≤ 0 for distinct α, β ∈ ∆(z) (cf. Lemma 2.14): otherwise, by Lemma 2.10, α − β ∈ Φ, and then either α − β ∈ Φ+(z), making α = β + (α − β) decomposable, or β − α ∈ Φ+(z), making β decomposable. Thus, suppose Σ_{α∈∆(z)} rα α = 0 for rα ∈ R. Separating the positive and negative coefficients, set

x := Σ_{α : rα>0} rα α = Σ_{β : rβ<0} (−rβ) β

Then

(x, x) = Σ_{rα>0} Σ_{rβ<0} rα (−rβ)(α, β) ≤ 0 ⟹ x = 0

and so

0 = (x, z) = Σ_{α : rα>0} rα (α, z)

where each (α, z) > 0 since α ∈ Φ+(z) by definition, so it must be that all the rα = 0, and similarly all the rβ = 0, thus giving linear independence.

Thus, ∆(z) is a base of Φ = Φ+(z) ∪ Φ+(−z): it spans Φ+(z) and thus all of Φ, and it satisfies the base axioms by the above.

To show every base of Φ has the form ∆(z) for some regular z ∈ E, we start with a given base ∆ and pick a z ∈ E so that (z, α) > 0 for all α ∈ ∆, which is possible since the intersection of the positive open half-spaces associated with any basis is non-empty. Thus, z is regular, Φ+ ⊆ Φ+(z), and Φ− ⊆ Φ+(−z), so Φ+ = Φ+(z). Since ∆ contains only indecomposable elements, ∆ ⊆ ∆(z), but |∆| = |∆(z)| = ℓ ⟹ ∆ = ∆(z). □

2.23. Remark. In plain geometric language, the linear independence argument in the above proof shows that any set of vectors lying strictly on one side of a hyperplane in Euclidean space and forming pairwise obtuse (or right) angles must be linearly independent.
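The construction in Theorem 2.22 is easy to carry out by hand for B2; the following sketch (my own illustration) picks a regular z, forms Φ+(z), and extracts the indecomposable elements:

```python
# The B2 root system: (+-1, 0), (0, +-1), (+-1, +-1).
roots = [(1, 0), (-1, 0), (0, 1), (0, -1)] + \
        [(a, b) for a in (1, -1) for b in (1, -1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

z = (2, 1)                    # regular: (z, alpha) != 0 for every root
assert all(dot(z, a) != 0 for a in roots)

plus = [a for a in roots if dot(z, a) > 0]          # Phi+(z)

def decomposable(a):
    return any(tuple(x + y for x, y in zip(b1, b2)) == a
               for b1 in plus for b2 in plus)

base = [a for a in plus if not decomposable(a)]     # Delta(z)
print(sorted(base))   # [(0, 1), (1, -1)]: a base of B2
```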

(a) Root system B2   (b) Root system C2   (c) Root system D2 (not irreducible)

Figure 2. Classical irreducible root systems when n = 2 from [Wik17]

3. Standard Examples

Above, we saw the root system A2. Classically, some standard root systems are of types An, Bn, Cn, and Dn, all of rank n. When n = 2, each of these has simple system ∆ = {α, β}, which is suggestively given by the labeling in the diagrams. Note that in B2 and C2, the two simple roots differ in length by a factor of √2. More formally, we define these systems for any n ∈ N.

3.1. Definition. Let V be the subspace of Rⁿ⁺¹ on which the coordinates sum to 0, and let Φ be the set of vectors in V that have length √2 and integer coordinates. We call Φ the An root system.

3.2. Definition. Let Φ be all vectors in Rⁿ having length 1 or √2 and integer coordinates. We call Φ the Bn root system.

3.3. Definition. Let Φ be all vectors in Rⁿ having length √2 and integer coordinates, as well as all vectors 2v where v is a vector having length 1 and integer coordinates. Then, we call Φ the Cn root system.

3.4. Definition. Let Φ be all vectors in Rⁿ having length √2 and integer coordinates. We call Φ the Dn root system.

It turns out that all these root systems are irreducible, but they are not the only ones. There are other so-called “exceptional” types, such as the following.

3.5. Definition. The configuration of roots in Figure 3 is called G2.

3.6. Remark. Note that the short roots in G2 form the system A2, and similarly for the long roots.
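As a sanity check on Definitions 3.1–3.4, the following sketch (my own code) generates the classical root systems by brute force and verifies the standard root counts |An| = n(n+1), |Bn| = |Cn| = 2n², and |Dn| = 2n(n−1); these counts are standard facts, not taken from the notes above:

```python
from itertools import product

def vectors(n, bound):
    """All integer vectors in R^n with coordinates in [-bound, bound]."""
    return list(product(range(-bound, bound + 1), repeat=n))

def norm2(v):
    return sum(x * x for x in v)

def A(n):   # in R^{n+1}: coordinate sum 0 and squared length 2
    return [v for v in vectors(n + 1, 1) if sum(v) == 0 and norm2(v) == 2]

def B(n):   # squared length 1 or 2
    return [v for v in vectors(n, 1) if norm2(v) in (1, 2)]

def C(n):   # squared length 2, plus the doubled unit vectors +-2e_i
    return [v for v in vectors(n, 2) if norm2(v) == 2 or
            (norm2(v) == 4 and sum(1 for x in v if x != 0) == 1)]

def D(n):   # squared length 2
    return [v for v in vectors(n, 1) if norm2(v) == 2]

for n in range(2, 5):
    assert len(A(n)) == n * (n + 1)
    assert len(B(n)) == len(C(n)) == 2 * n * n
    assert len(D(n)) == 2 * n * (n - 1)
print("root counts match")
```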

Using Dynkin diagrams, we will arrive at all the irreducible root systems. While it might take a little work to derive directly at this stage, the root

Figure 3. G2 Root System from [Wik17]

An:  Sn+1
Bn:  (Z/2)ⁿ ⋊ Sn
Cn:  (Z/2)ⁿ ⋊ Sn
Dn:  (Z/2)ⁿ⁻¹ ⋊ Sn
G2:  D12, the dihedral group of order 12

Table 1. Weyl groups for the standard irreducible root systems

systems have the following Weyl groups, presented in Table 1, which we will derive later.
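The rank-2 rows of Table 1 can already be confirmed by brute force. The sketch below (my own code) generates each rank-2 Weyl group by closing the set of root reflections under composition and counts the elements; the expected orders are |W(A2)| = |S3| = 6, |W(B2)| = 2²·2! = 8, and |W(G2)| = |D12| = 12:

```python
import math

def sigma(theta):
    """sigma_alpha for a unit root alpha at angle theta: I - 2 a a^T."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return ((-c, -s), (-s, c))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def key(A):
    return tuple(round(x, 6) for row in A for x in row)

def weyl_order(angles):
    """Order of the group generated by the reflections at the given angles."""
    gens = [sigma(t) for t in angles]
    group = {key(g): g for g in gens}
    frontier = list(group.values())
    while frontier:                 # closure under composition
        new = []
        for A in frontier:
            for g in gens:
                C = mul(A, g)
                if key(C) not in group:
                    group[key(C)] = C
                    new.append(C)
        frontier = new
    return len(group)

A2 = [k * math.pi / 3 for k in range(3)]   # one root direction per line
B2 = [k * math.pi / 4 for k in range(4)]
G2 = [k * math.pi / 6 for k in range(6)]
print(weyl_order(A2), weyl_order(B2), weyl_order(G2))  # 6 8 12
```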

4. Weyl Groups

Weyl groups have some interesting and useful properties as groups. Furthermore, an arbitrary Weyl group may have group actions on sets of “weights” (discussed later), not just root systems.

We start with a useful theorem for how Weyl groups interact with root systems and their bases.

4.1. Theorem. [Hum72, Thm 10.3] Let ∆ be a base of Φ.
(a) If z ∈ E is regular, then there exists σ ∈ W such that (σ(z), α) > 0 for all α ∈ ∆.
(b) If ∆′ is another base of Φ, then σ(∆′) = ∆ for some σ ∈ W. In other words, W acts transitively on bases.
(c) If α ∈ Φ, there exists σ ∈ W such that σ(α) ∈ ∆.

(d) W is generated by {σα | α ∈ ∆}.
(e) If σ(∆) = ∆ for σ ∈ W, then σ = 1; that is, W acts simply transitively on bases.

Proof. Let W′ ≤ W be the subgroup generated by {σα | α ∈ ∆}. For (a)–(c), we will show the result using W′, then show W′ = W. □ [TODO: Actually write this proof or just cite it.]

We can define a length function on the elements of a Weyl group W by writing all the elements of W in reduced form. If w = σα1 ⋯ σαn with n minimal, then we say that ℓ(w) = n. We can also characterize this as follows.

4.2. Lemma. [Hum72, p 52] Let n(σ) be the number of positive roots α for which σ(α) ≺ 0. Then, for all σ ∈ W, we have that ℓ(σ) = n(σ).

This length function has many consequences.

4.3. Proposition. Let Φ be a root system. Then,
(a) The number of α ∈ Φ+ for which wα ≺ 0 is precisely ℓ(w). In particular, when α ∈ ∆, we have that σαβ ≻ 0 for all β ≠ α in Φ+. Moreover, w is uniquely determined by the set of α ≻ 0 for which wα ≺ 0.
(b) If w ∈ W, then ℓ(w) = ℓ(w⁻¹). Thus, ℓ(w) = |Φ+ ∩ w(Φ−)|.

4.1. Chevalley-Bruhat Ordering (Optional). Now, we can also define a partial order on W called the “Chevalley-Bruhat ordering,” following the approach in [Hum08, p 5]. Let

T := ⋃_{w∈W} wSw⁻¹

where S is the set of all simple reflections σα, α ∈ ∆, in W. Then, T is the set of all reflections. Now, if w, w′ ∈ W, then we say w′ → w if w = tw′ for some t ∈ T and ℓ(w′) < ℓ(w). We extend this relation to a partial ordering of W, saying w′ < w if there is some sequence of relations w′ = w0 → w1 → ⋯ → wn = w for some wi ∈ W. This partial order satisfies some useful properties.

4.4. Proposition. [Hum08, p 5] Let w, w′ ∈ W.
(a) w′ < w ⟹ ℓ(w′) < ℓ(w).
(b) w′ ≤ w if and only if w′ occurs as a subexpression of one (hence any) reduced expression s1 ⋯ sn, si ∈ S, for w.
(c) Adjacent elements in the Bruhat ordering differ in length by 1.
(d) If w′ < w and s ∈ S, then w′s ≤ w or w′s ≤ ws (or both).
(e) If ℓ(w1) + 2 = ℓ(w2), the number of elements w ∈ W satisfying w1 < w < w2 is 0 or 2.

4.5. Remark. When W = Sn, the Chevalley-Bruhat order is the (strong) Bruhat order on permutations.

5. Cartan Matrices

If we fix an ordering of our simple roots, we can encode the relationships between our simple roots using a matrix.

5.1. Definition. [Hum72, p 55] Given a root system Φ and an ordered set of simple roots (α1, . . . , αn), we can construct a matrix M = (⟨αi, αj⟩)i,j. Such a matrix is called a Cartan matrix of Φ, and its entries are called Cartan integers.

Cartan matrices are useful because they reveal some structure about root systems that may not be apparent from the original definition. Furthermore, Cartan matrices are actually unique up to ordering of the simple roots, and the Cartan integers allow us to reconstruct Φ. Note, too, that all the diagonal entries must necessarily be 2's, since ⟨α, α⟩ = 2(α, α)/(α, α) = 2.

5.2. Example. For A2 with simple system ∆ = {α, β}, we compute that

⟨α, β⟩ = 2|α||β| cos(2π/3)/|α|² = −1.

We get the same result for ⟨β, α⟩ since cos(θ) is an even function and |α| = |β|. Thus, we get our Cartan matrix as

[  2 −1 ]
[ −1  2 ]
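These pairings can also be computed directly from coordinates. The sketch below (my own check) computes the Cartan matrix of A2 from the simple roots e1 − e2 and e2 − e3 of Definition 3.1; integer division is exact here because Cartan integers are integers:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cartan(simple):
    """M[i][j] = <alpha_i, alpha_j> = 2(alpha_i, alpha_j)/(alpha_j, alpha_j)."""
    return [[2 * dot(a, b) // dot(b, b) for b in simple] for a in simple]

A2 = [(1, -1, 0), (0, 1, -1)]
print(cartan(A2))   # [[2, -1], [-1, 2]]
```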

5.3. Example. For B2 with simple system ∆ = {α, β}, where β is the short root (so |α| = √2|β|), we compute that

⟨α, β⟩ = 2|α||β| cos(3π/4)/|β|² = 2√2 cos(3π/4) = −2

We also compute that

⟨β, α⟩ = 2|β||α| cos(3π/4)/|α|² = 2(−√2/2)/√2 = −1

and so we get our Cartan matrix as

[  2 −2 ]
[ −1  2 ]

5.4. Example. Similar computations for C2 reveal that the Cartan matrix is given by

[  2 −1 ]
[ −2  2 ]

From these examples, we can see some important common facts about the structure of Cartan matrices.

5.5. Proposition. Given a Cartan matrix A of a root system Φ, we see it has the following properties.

(a) Ai,i = 2 for all i.
(b) Ai,j ∈ {0, −1, −2, −3} if i ≠ j.
(c) If Ai,j = −2 or −3, then Aj,i = −1.

(d) Ai,j = 0 ⟺ Aj,i = 0.

Proof. (a) follows since ⟨αi, αi⟩ = 2(αi, αi)/(αi, αi) = 2. (b) and (c) follow immediately from Lemma 2.9 and the fact that any two distinct roots in ∆ form an obtuse or right angle. Finally, (d) follows since ⟨αi, αj⟩ = 0 tells us that αi and αj are orthogonal, and thus the product in the other order will be 0 as well. □

Now, we prove the important proposition.

5.6. Proposition. [Hum72, Prop 11.1] Let Φ ⊆ E and Φ′ ⊆ E′ be root systems with bases ∆ = {α1, . . . , αℓ} and ∆′ = {α1′, . . . , αℓ′} respectively. If ⟨αi, αj⟩ = ⟨αi′, αj′⟩ for 1 ≤ i, j ≤ ℓ, then
• the map αi ↦ αi′ extends (uniquely) to an isomorphism φ: E → E′,
• φ maps Φ onto Φ′,
• and ⟨φ(α), φ(β)⟩ = ⟨α, β⟩ for all α, β ∈ Φ.
Therefore, the Cartan matrix of Φ determines Φ up to isomorphism.

Proof. Since ∆ and ∆′ are bases of E, E′, there is a unique vector space isomorphism φ: E → E′ sending αi to αi′. If α, β ∈ ∆, then

σφ(α)(φ(β)) = σα′(β′)
            = β′ − ⟨β′, α′⟩α′
            = φ(β) − ⟨β, α⟩φ(α)   (by the hypothesis ⟨αi, αj⟩ = ⟨αi′, αj′⟩)
            = φ(β − ⟨β, α⟩α)
            = φ(σα(β))

that is to say, the following square commutes for any α ∈ ∆:

E  --φ-->  E′
|σα        |σφ(α)
E  --φ-->  E′

Now, since W, W′ are generated by simple reflections, σ ↦ φ ∘ σ ∘ φ⁻¹ is an isomorphism between W and W′ that sends σα ↦ σφ(α) for α ∈ ∆. Furthermore, any β ∈ Φ is conjugate under W to a simple root; that is, there exist α ∈ ∆ and σ ∈ W such that β = σ(α). Thus,

φ(β) = (φ ∘ σ ∘ φ⁻¹)(φ(α)),  with φ ∘ σ ∘ φ⁻¹ ∈ W′ and φ(α) ∈ ∆′, so φ(β) ∈ Φ′

Thus, φ(Φ) ⊆ Φ′ ⟹ φ(Φ) = Φ′. Finally, since any root can be obtained from a simple root by applying an element of the Weyl group, and the isomorphism σ ↦ φ ∘ σ ∘ φ⁻¹ matches up the two Weyl groups, it must be that all Cartan integers are preserved under the isomorphism. □

We now conclude by computing Cartan matrices for the various classes of root systems we discussed earlier.

5.7. Example. Consider the root system An. All vectors are the same length, and any choice of simple system ∆ will require an angle of 2π/3 with some other simple vector. Thus, up to reordering of simple roots, we get the following:

      [  2 −1            ]
      [ −1  2 −1         ]
An:   [    −1  2  ⋱      ]
      [        ⋱  ⋱  −1  ]
      [           −1   2 ]

where everything not filled in represents the appropriate number of 0's. These computations are completely analogous to the A2 example worked out above.

For Bn, the situation is slightly different because our simple system ∆ must have at least one short root and at least one long root. We will take the simple system ∆ = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en}. Then, by direct computation, the angle between ei − ei+1 and ei+1 − ei+2 is 2π/3, and so

⟨ei − ei+1, ei+1 − ei+2⟩ = −1

just like in the An case. However, en−1 − en forms an angle of 3π/4 with en, thus giving

⟨en−1 − en, en⟩ = 2√2 cos(3π/4) = 2√2 · (−√2/2) = −2

and thus ⟨en, en−1 − en⟩ = −1, and we arrive at the Cartan matrix

      [ An−2      −En−2,1 ]
Bn:   [ −E1,n−2     B2    ]

in block form, where Ei,j denotes the matrix with a 1 in position (i, j) and 0's elsewhere; concretely, this is the tridiagonal matrix with 2's on the diagonal and −1's off the diagonal, except that the (n−1, n) entry is −2.

The Cn situation is symmetric, with the choice ∆ = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {2en}, giving us

      [ An−2      −En−2,1 ]
Cn:   [ −E1,n−2     C2    ]

For Dn, we make the choice ∆ = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en−1 + en}. As above, the first n − 3 roots will behave like the An case, but we compute the following:

⟨en−2 − en−1, en−1 + en⟩ = 2 · (−1)/(√2 · √2) = −1
⟨en−1 + en, en−2 − en−1⟩ = 2 · (−1)/(√2 · √2) = −1
⟨en−1 − en, en−1 + en⟩ = 0

Thus, our Cartan matrix has the form

      [  2 −1                ]
      [ −1  2  ⋱             ]
Dn:   [     ⋱   2  −1  −1    ]
      [        −1   2   0    ]
      [        −1   0   2    ]
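The matrices above can be double-checked by computing ⟨αi, αj⟩ directly from the simple systems chosen in this section. The following sketch (my own code) does this for small ranks:

```python
def e(i, n):
    """Standard basis vector e_i (1-indexed) in R^n."""
    return tuple(1 if j == i - 1 else 0 for j in range(n))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cartan(simple):
    """M[i][j] = <alpha_i, alpha_j>; exact since Cartan integers are integers."""
    return [[2 * dot(a, b) // dot(b, b) for b in simple] for a in simple]

def simple_B(n):
    return [sub(e(i, n), e(i + 1, n)) for i in range(1, n)] + [e(n, n)]

def simple_C(n):
    return [sub(e(i, n), e(i + 1, n)) for i in range(1, n)] + \
           [tuple(2 * x for x in e(n, n))]

def simple_D(n):
    return [sub(e(i, n), e(i + 1, n)) for i in range(1, n)] + \
           [add(e(n - 1, n), e(n, n))]

print(cartan(simple_B(3)))   # [[2, -1, 0], [-1, 2, -2], [0, -1, 2]]
print(cartan(simple_C(3)))
print(cartan(simple_D(4)))
```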

6. Dynkin Diagrams

In general, it can be hard to determine the Weyl group of a root system by inspection. Instead, we can encode the root system, via the Cartan matrix, as a graph that reveals symmetries of roots. We let every simple root αi have a vertex, and connect the vertices for αi and αj by ⟨αi, αj⟩ · ⟨αj, αi⟩ edges. Note that the axioms on root systems limit the result of this product to 0, 1, 2, 3. This gives us a Coxeter graph.

6.1. Example. Consider the root systems A1 × A1, A2, B2, and G2. Then we have the following Coxeter graphs:

A1 × A1 A2 B2 G2

For A1 × A1, we note that any two simple roots must be orthogonal, since the two A1's form a 90 degree angle. Thus, if ∆ = {α, β}, then ⟨α, β⟩ = 0 and ⟨β, α⟩ = 0. So, there is no edge connecting the two vertices.

In contrast, consider A2. Taking ∆ = {α, β} as denoted where A2 was defined, we check

⟨α, β⟩ = 2(α, β) = 2 cos(2π/3) = −1 = 2(β, α) = ⟨β, α⟩

and thus ⟨α, β⟩⟨β, α⟩ = 1.

Next, consider B2, taking ∆ = {α, β} as in the B2 Cartan matrix example, with β the short root. We check

⟨α, β⟩ = 2(α, β)/(β, β) = 2√2 cos(3π/4) = −2

⟨β, α⟩ = 2(β, α)/(α, α) = (2/√2) cos(3π/4) · √2 · ... = 2(−√2/2)/√2 = −1

Thus, ⟨α, β⟩⟨β, α⟩ = 2. A similar computation for G2, whose simple roots form an angle of 5π/6, gives ⟨α, β⟩⟨β, α⟩ = 4 cos²(5π/6) = 3.

However, two different root systems can have the same Coxeter graph. Luckily, with a slight fix, we can use diagrams such as these to distinguish different root systems.

6.2. Definition. We define the Dynkin diagram of a root system Φ to be the Coxeter graph with directed edges such that, given double or triple edges between two roots, the edges point away from the longer root and towards the shorter root.

We then have the following theorem. 6.3. Theorem. Up to ordering of simple roots, every Cartan matrix has a unique Dynkin diagram and vice versa.

Proof. [TODO: Add proof or delete environment.] □

6.4. Theorem. Φ is irreducible if and only if its Coxeter graph is connected.

6.5. Corollary. A root system Φ in Euclidean space E decomposes (uniquely) as the union of irreducible root systems Φi (in subspaces Ei ⊆ E) such that E = E1 ⊕ E2 ⊕ · · · ⊕ Ek.

7. Irreducible Root Systems

Now that we have a structure theorem for root systems, we wish to arrive at a classification of irreducible root systems. To do this, one first figures out all the possible connected Coxeter graphs using elementary geometry/graph theory. We will only summarise here.

7.1. Theorem. If Φ is an irreducible root system of rank ℓ, its Dynkin diagram is one of the following with ℓ vertices:

• Aℓ (ℓ ≥ 1): a chain of ℓ vertices joined by single edges.
• Bℓ (ℓ ≥ 2): a chain with a double edge at the end, directed toward the (short) last root.
• Cℓ (ℓ ≥ 3): a chain with a double edge at the end, directed away from the (long) last root.
• Dℓ (ℓ ≥ 4): a chain ending in a fork of two vertices.
• E6, E7, E8: a chain of 5, 6, or 7 vertices with one extra branch vertex.
• F4: a chain of 4 vertices with a directed double edge in the middle.
• G2: two vertices joined by a triple edge.

where the restrictions on ℓ are to avoid duplication of diagrams.

Proof. See [Hum72, pp 58–63] for a full proof. In summary, we rely on the concept of an admissible configuration in Rⁿ, namely, a collection of unit vectors in an open half-space such that any two vectors form an angle of π/2, 2π/3, 3π/4, or 5π/6.

7.2. Lemma. If v1, . . . , vn form an admissible configuration, then they are linearly independent.

7.3. Lemma. The Coxeter graph of an admissible configuration is acyclic, ignoring multiplicity of edges.

7.4. Lemma. Any vertex in the Coxeter graph has degree at most 3, counting multiplicity.

7.5. Lemma. If v1, . . . , vn form an admissible configuration with vi, vj connected by a single edge, then the configuration formed by removing vi and vj and replacing them with the single vector vi + vj, corresponding to deleting the two nodes and replacing them with one in the Coxeter diagram, forms another admissible configuration.

7.6. Lemma. A Coxeter graph of an admissible configuration cannot contain any of the following subgraphs:

(diagrams of the excluded subgraphs omitted)

□

Thus, we have a complete classification of Dynkin diagrams, which completely determines all possible Cartan matrices. Furthermore, since we can recover a root system from its Cartan matrix, we have effectively classified all irreducible root systems. Thus, in a sense, root systems are very well understood objects.

8. Automorphisms of a Root System

As discussed before, the Weyl group is the group generated by all the reflections across hyperplanes orthogonal to the roots of a root system. However, this is typically only a subgroup of all automorphisms of a root system.

8.1. Lemma. Given a root system Φ, W ⊴ Aut Φ, where Aut Φ is the group of all invertible linear maps of E carrying Φ onto itself.

Proof. It was left as an exercise to prove 2.5, that is, for σ ∈ Aut Φ and α ∈ Φ,

σσασ⁻¹ = σσ(α)

Thus, the conjugate of any element of the Weyl group is in the Weyl group, and since W ≤ Aut Φ, it must be that W ⊴ Aut Φ. □

Next, we define the following subgroup of Aut Φ.

8.2. Definition. Let Φ be a root system with simple system ∆. Then, we define

Γ := {σ ∈ Aut Φ | σ(∆) = ∆}

8.3. Proposition. Let W be the Weyl group of a root system Φ. Then,

Aut Φ ≅ W ⋊ Γ

Proof. Let τ ∈ Γ ∩ W. Then τ(∆) = ∆ because τ ∈ Γ, and so by 4.1(e), since τ ∈ W, we get τ = 1. Next, consider τ ∈ Aut Φ. Then, τ(∆) is another simple system of Φ, so there exists a σ ∈ W such that στ(∆) = ∆ by 4.1. Thus, στ ∈ Γ, and together with σ ∈ W this gives τ ∈ W Γ. □

Now, recall that for all τ ∈ Aut Φ, ⟨α, β⟩ = ⟨τ(α), τ(β)⟩ for all α, β ∈ Φ by 2.5.

8.4. Proposition. For a root system Φ, every τ ∈ Γ corresponds to a diagram automorphism of the Dynkin diagram associated to Φ.

Proof. As noted above, for τ ∈ Γ, ⟨α, β⟩ = ⟨τ(α), τ(β)⟩ and τ(∆) = ∆, so τ will permute the vertex labels of the Dynkin diagram while still preserving the number of edges between them. Similarly, a diagram automorphism will permute ∆ while preserving the number of edges, thus giving an element of Γ. Thus, we have the desired correspondence. □

Thus, given that we know all the possible Dynkin diagrams, to fully understand the automorphisms, we need only understand the Weyl group structure of our root systems. To summarise our understanding so far, we have the following presentation of the Weyl group.

8.5. Proposition. Let Φ be a root system with Weyl group W, simple system ∆ = {α1, . . . , αn}, and Cartan matrix A. Then,

W = ⟨ σα, α ∈ ∆ | σα² = 1, and (σασβ)^m = 1 for β ≠ ±α ⟩

where

m = 2, 3, 4, 6 according as ⟨α, β⟩⟨β, α⟩ = 0, 1, 2, 3;

that is to say, W is generated by the reflections of the simple roots, with relations given by the Dynkin diagram of Φ.

Proof. The generators come from 4.1. Since α, β ∈ ∆, we can derive the following implications for θ, the angle between α and β:

⟨α, β⟩⟨β, α⟩ = 0 ⟹ θ = π/2
⟨α, β⟩⟨β, α⟩ = 1 ⟹ θ = 2π/3
⟨α, β⟩⟨β, α⟩ = 2 ⟹ θ = 3π/4
⟨α, β⟩⟨β, α⟩ = 3 ⟹ θ = 5π/6

Thus, from a geometric perspective, σασβ is a rotation by 2θ on the root system, and the relations immediately follow. □

This immediately tells us the information we need to understand the Weyl groups of irreducible root systems.

8.6. Example. Let Φ = An with simple system ∆ = {e1 − e2, . . . , en − en+1}. Then, the Weyl group generators act on the set {e1, . . . , en+1} by

σei−ei+1(ek) = ek − (2(ek, ei − ei+1)/(ei − ei+1, ei − ei+1))(ei − ei+1) = ek − (δk,i − δk,i+1)(ei − ei+1)

and so σei−ei+1 sends

ei ↦ ei+1,  ei+1 ↦ ei,  ek ↦ ek for k ≠ i, i + 1

Thus, we get a surjection f : W ↠ Sn+1 via σei−ei+1 ↦ (i, i + 1). But if some σ ∈ W has σ(ek) = ek for all k, then σ(∆) = ∆ and thus σ = id, so ker f = {id} and thus W ≅ Sn+1.

8.7. Example. Let Φ = Bn with simple system ∆ = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en}. Then, Φ = {±ei ± ej} ∪ {±ei}. Let σ ∈ W act on the set of pairs of vectors {±e1, . . . , ±en}. Then, σ induces a permutation of the n pairs, and so we have a homomorphism f : W → Sn. Observe that σei ∈ ker f, since all the ei are orthogonal. Furthermore, if τ ∈ ker f, then, by the orthogonality of the ei's, we can factor (not necessarily uniquely) τ = σei1 ⋯ σeik. [TODO: I am skeptical about this step.] Thus, ker f = ⟨σei | 1 ≤ i ≤ n⟩ with all generators commuting, so ker f ≅ (Z/2Z)ⁿ. Using a geometric argument, one sees that f(σei−ei+1) = (i, i + 1); since these transpositions generate Sn, f is surjective, and we have a short exact sequence of groups

1 → (Z/2Z)ⁿ → W →f Sn → 1

From the An example above, we also know that ⟨σe1−e2, . . . , σen−1−en⟩ ≅ Sn ≤ W, so we have a splitting g : Sn → W given by

(i, i + 1) ↦ σei−ei+1

Thus, our split short exact sequence gives us W ≅ (Z/2Z)ⁿ ⋊ Sn. The Cn case is analogous by duality, since Cn = Bn∨ and a root system and its coroot system have isomorphic Weyl groups.

8.8. Example. For Φ = Dn = {±ei ± ej | 1 ≤ i < j ≤ n} with

∆ = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en−1 + en},

consider the action of σei−ei+1 on the set of roots. By direct computation,

ei ↦ ei+1,  ei+1 ↦ ei,  ek ↦ ek for k ≠ i, i + 1

and thus we have a straightforward embedding of the symmetric group Sn ↪ W via (i, i + 1) ↦ σei−ei+1. We also compute that σen−1+en sends

en−1 ↦ −en,  en ↦ −en−1

Thus, σen−1−en σen−1+en will send en−1 ↦ −en−1 and en ↦ −en. From this element, we are able to create elements which change (only) an even number of signs on the standard basis vectors of Rⁿ but do not permute them. Because of this even-ness restriction, there can only be 2ⁿ⁻¹ such possible transformations, all of which are realized. Thus, looking at f : W → Sn, where f(w) is the permutation w induces on the set {±e1, . . . , ±en}, we get a (split) short exact sequence

1 → ker f → W → Sn → 1

where ker f must consist of those 2ⁿ⁻¹ sign-changing transformations that do not permute the basis vectors. Since these actions commute, we have ker f ≅ (Z/2Z)ⁿ⁻¹, and so W ≅ (Z/2Z)ⁿ⁻¹ ⋊ Sn by the splitness of the short exact sequence.
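The order formulas just derived can be confirmed by brute force for small rank. The sketch below (my own check) generates W(D3) by closing the simple reflections under matrix multiplication; since every root α here has (α, α) = 2, the reflection σα(v) = v − (v, α)α has an integer matrix, so the computation is exact:

```python
def refl(alpha):
    """Matrix of sigma_alpha for a root with (alpha, alpha) = 2."""
    n = len(alpha)
    return tuple(tuple((1 if i == j else 0) - alpha[i] * alpha[j]
                       for j in range(n)) for i in range(n))

def mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

simple = [(1, -1, 0), (0, 1, -1), (0, 1, 1)]   # a base of D3
group = {refl(a) for a in simple}
frontier = list(group)
while frontier:                                # closure under composition
    new = []
    for A in frontier:
        for a in simple:
            C = mul(A, refl(a))
            if C not in group:
                group.add(C)
                new.append(C)
    frontier = new
print(len(group))  # 24 = 2^{3-1} * 3!, consistent with Table 1 (and W(A3) = S4)
```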

Figure 4. Weyl Chambers for Root System A2 from [WMM18]

9. Weyl Chambers

In understanding the geometry of root systems, we have made great use of the set of hyperplanes orthogonal to each root. However, it is worth considering the complement of this collection of hyperplanes. Such a complement will be disconnected, and so we can consider its connected components.

9.1. Definition. Given a root system Φ of Euclidean space E, we call the connected components of E \ ⋃_{α∈Φ} Pα the (open) Weyl chambers of E, where Pα is the hyperplane orthogonal to α.

9.2. Example. In Figure 4, the root system A2 is drawn with associated hyperplanes given by the dashed lines. Thus, there are 6 Weyl chambers. The shaded chamber corresponds to the choice of simple system {α1, α2} indicated in the figure.

9.3. Proposition. Given a root system Φ of Euclidean space E,
(a) Each regular γ ∈ E belongs to precisely one Weyl chamber.
(b) Weyl chambers are in natural one-to-one correspondence with bases.
(c) The Weyl group acts naturally on the set of Weyl chambers, and this action is free and transitive.

Proof. Part (a) holds because γ ∈ E is regular precisely when it does not lie in any hyperplane Pα. For part (b), if two regular vectors γ, γ′ are in the same Weyl chamber, they are both on the same sides of all the Pα's. Therefore, recalling the notation in the proof of 2.22, ∆(γ) = ∆(γ′). Thus, each Weyl chamber associates to a unique base and, furthermore, by 2.22, each base is of the form ∆(z) for some regular z ∈ E, which must lie in precisely one Weyl chamber. Thus, we have established our correspondence.

For part (c), the action of the Weyl group is given by reflecting the Weyl chamber across the appropriate hyperplane. The transitivity and freeness of this action comes from the correspondence with a choice of base along with Theorem 4.1. □

9.4. Definition. Given a choice of ∆, we call the Weyl chamber corresponding to this choice, consisting of all v ∈ E such that (v, α) > 0 for all α ∈ ∆, the fundamental Weyl chamber relative to ∆.

Geometrically, this says that the fundamental Weyl chamber consists of those vectors forming an acute angle with all of the simple roots. At first glance, it is not apparent what value the Weyl chamber perspective gives over the choice-of-base perspective, but there are some useful techniques that use closure arguments, such as the following.

9.5. Theorem. Fix a Weyl chamber C. Then, for all v ∈ E, the Weyl group orbit of v contains exactly one point in the closure C̄ of C.

Proof. By construction, every vector v ∈ E must lie in the closure of some chamber and, by 9.3(c), W acts transitively on the chambers. Thus, each orbit of W on E intersects C̄.

Now, let v1, v2 ∈ C̄ with w(v1) = v2 for some w ∈ W. If ℓ(w) = 0, then w = 1 and so v1 = v2. Let us proceed by induction, assuming that if v1, v2 ∈ C̄ and w(v1) = v2 for some w ∈ W with ℓ(w) < n, then v1 = v2. Then, if w(v1) = v2 with ℓ(w) = n > 0, we know there is at least one αi ∈ ∆ such that w(αi) ∈ Φ−. Therefore, since v1, v2 ∈ C̄ and the pairing is W-invariant, we have

0 ≤ ⟨v1, αi⟩ = ⟨v2, w(αi)⟩ ≤ 0 ⟹ ⟨v1, αi⟩ = 0

Therefore, σαi(v1) = v1, and thus wσαi(v1) = v2. However, the only positive root that becomes negative under the action of σαi is αi. Therefore, the positive roots made negative by w and by wσαi are the same except for αi, which is made negative only by w. Therefore,

ℓ(wσαi) = ℓ(w) − 1

and so v1 = v2 by the inductive hypothesis. □

An immediate consequence of the second part of this proof is the following.

9.6. Lemma. [Hum72, Lemma 10.3B p 52] If v1, v2 ∈ C̄ are such that w(v1) = v2 for some w ∈ W, then w is a product of simple reflections which fix v1. In particular, v1 = v2.
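The proof of Theorem 9.5 is effectively an algorithm: given any v, repeatedly reflect across a wall separating v from the fundamental chamber until none remains; each step strictly increases (v, z) for a fixed regular z in the chamber, so the process terminates. The sketch below (my own illustration, for A2 realized inside R³) finds the unique representative in the closed fundamental chamber this way:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(alpha, v):
    # exact integer arithmetic: every root alpha used here has (alpha, alpha) = 2
    c = 2 * dot(v, alpha) // dot(alpha, alpha)
    return tuple(x - c * a for a, x in zip(alpha, v))

def chamber_rep(v, simple):
    """Reflect v across a separating wall until v lies in the closed chamber."""
    while True:
        neg = [a for a in simple if dot(v, a) < 0]
        if not neg:
            return v
        v = reflect(neg[0], v)

simple = [(1, -1, 0), (0, 1, -1)]       # a base of A2 inside R^3
print(chamber_rep((0, 3, 1), simple))   # -> (3, 1, 0)
```

For A2 this amounts to sorting the coordinates into decreasing order, which is a pleasant way to see the orbit-representative statement concretely.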

10. Weights

Let ΛW be the collection of vectors λ ∈ E such that ⟨λ, α⟩ ∈ Z for all α ∈ Φ. We call the elements of ΛW weights. We note that ΛW is a group under vector addition, since ⟨λ1 + λ2, α⟩ = ⟨λ1, α⟩ + ⟨λ2, α⟩.

10.1. Definition. We say the root lattice is the subgroup of ΛW generated by Φ. We denote it ΛR.

Note that ΛR is actually a lattice since it is a Z-span of a basis of E.

10.2. Definition. Fix a base ∆ ⊆ Φ. We say that λ ∈ ΛW is dominant if the integers ⟨λ, α⟩ are nonnegative for all α ∈ ∆. If they are all positive, then λ is strongly dominant. We denote by ΛW⁺ the set of all dominant weights.

10.3. Definition. Let ∆ = {α1, . . . , αn}. Then, we know that {α1∨, . . . , αn∨} forms a basis of E. If we take the dual basis to this basis, {ω1, . . . , ωn}, we can check that the ωi are all dominant weights, which we will call the fundamental dominant weights (relative to ∆). In other words, the fundamental dominant weights are such that ⟨ωi, αj⟩ = δi,j.

10.4. Definition. We say that ΛW/ΛR is the fundamental group of Φ. [TODO: Check this definition with the coroots.]

To see that ΛW/ΛR is a finite group, we can view both ΛW and ΛR as Z-modules. Then, ΛW ≅ Zᵐ where m is the rank of ΛW. By definition, ΛR is a submodule of ΛW of the same rank, and both are free, so the quotient will be some finite group. More concretely, one notes that the Cartan matrix is a change of basis matrix, so to write the ωi in terms of the αi, we can invert the Cartan matrix. Thus,

10.5. Proposition. Given the Cartan matrix M of a root system Φ,
(a) M expresses the simple roots as linear combinations of the fundamental weights: αi = Σj Mi,j ωj.
(b) The index of ΛR in ΛW is given by det M.

Proof. Write αi = Σj ci,j ωj for ci,j ∈ Z. Then, we see

(10a)  Mi,k = ⟨αi, αk⟩ = Σj ci,j ⟨ωj, αk⟩ = ci,k

giving us

αi = Σj ci,j ωj = Σj Mi,j ωj

Thus, the Cartan matrix is a change of basis matrix writing the αi's in terms of the ωi's. To go the other direction, we simply invert M, which is a matrix with entries in Z, so M⁻¹ = (1/det M) adj(M), and thus det M · ωi ∈ ΛR. □

10.6. Example. Consider the root system A_2 with ∆ = {α, β}. Then, since the Cartan matrix is

M = [  2  −1 ]
    [ −1   2 ]

we have

α = 2ω_1 − ω_2        ω_1 = (2/3)α + (1/3)β
                 ⟹
β = −ω_1 + 2ω_2       ω_2 = (1/3)α + (2/3)β
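The change of basis in 10.5 and the computation in 10.6 are easy to check mechanically. Below is a minimal sketch in Python (the code and variable names are my own, not from [Hum72]), using the standard library's fractions module for exact arithmetic:

```python
from fractions import Fraction

# Cartan matrix of A2; by Proposition 10.5, its rows give the simple roots
# in the fundamental-weight basis.
M = [[2, -1],
     [-1, 2]]

# det M is the index of the root lattice in the weight lattice.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Exact inverse of a 2x2 integer matrix; its rows express the fundamental
# weights in the simple-root basis.
Minv = [[Fraction(M[1][1], det), Fraction(-M[0][1], det)],
        [Fraction(-M[1][0], det), Fraction(M[0][0], det)]]

print(det)      # 3 = [Λ_W : Λ_R] for A2
print(Minv[0])  # ω1 = 2/3·α + 1/3·β
print(Minv[1])  # ω2 = 1/3·α + 2/3·β
```

This recovers exactly the coefficients of Example 10.6, and det M = 3 matches the fact that the fundamental group of A_2 is ℤ/3.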

[Compute the index for each type.]

Recall the partial order on roots (2.13). We can extend this to a partial order on weights as follows.

10.7. Definition. Given λ, µ ∈ Λ_W, we say µ ≺ λ if and only if λ − µ is a sum of positive roots.

In some ways, dominant weights serve as a good notion of a base for the weights, by virtue of the following proposition.

10.8. Proposition. [Hum72, p 68] Each weight is conjugate under W to one and only one dominant weight. If λ is dominant, then wλ ≺ λ for all w ∈ W, and if λ is strongly dominant, then wλ = λ only when w = 1.

Proof. This result follows from 9.5 and 9.6, since all the fundamental weights lie in the closure of the fundamental Weyl chamber. □

However, dominant weights are not necessarily maximal with respect to the partial order in general.
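Proposition 10.8 suggests a procedure for finding the dominant conjugate of a weight: as long as some ⟨λ, α_i⟩ is negative, apply σ_i. Here is a sketch for A_2 (the coordinates and function names are my own, not from [Hum72]), writing weights in the fundamental-weight basis so that ⟨λ, α_i⟩ is simply the i-th coordinate, and the ω-coordinates of α_i are the i-th row of the Cartan matrix:

```python
# Simple roots of A2 in fundamental-weight coordinates (rows of the Cartan matrix).
CARTAN_A2 = [(2, -1), (-1, 2)]

def dominant_conjugate(weight):
    """Apply simple reflections at negative coordinates until the weight is dominant.

    In ω-coordinates, σi(λ) = λ - <λ, αi>·αi, and <λ, αi> is just the i-th coordinate.
    """
    c = list(weight)
    while True:
        for i, ci in enumerate(c):
            if ci < 0:
                c = [cj - ci * aij for cj, aij in zip(c, CARTAN_A2[i])]
                break
        else:
            return tuple(c)  # all coordinates nonnegative: dominant

# λ = 3α + β from Example 10.9: in ω-coordinates, 3(2,-1) + (-1,2) = (5, -1).
print(dominant_conjugate((5, -1)))  # → (4, 1), i.e. σβ(λ) = 3α + 2β
```

The loop terminates because each reflection replaces the weight by λ + |⟨λ, α_i⟩|·α_i, a strictly higher element of its finite W-orbit.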

10.9. Example. Consider the weight lattice for A_2 with ∆ = {α, β}, the standard choice. Then λ = 3α + β has

⟨λ, β⟩ = −3 + 2 = −1,

so λ is not a dominant weight. However, λ − α has

⟨λ − α, β⟩ = −2 + 2 = 0,    ⟨λ − α, α⟩ = 4 − 1 = 3,

and thus is a dominant weight. Therefore λ − α ≺ λ, since λ − (λ − α) = α ∈ ∆, but λ is not dominant.

However, given a fixed dominant weight, the space of all such examples is finite.

10.10. Lemma. [Hum72, p 70] Let λ be a dominant weight. Then the number of dominant weights µ such that µ ≺ λ is finite.

Proof. Observe that, since λ + µ is dominant and λ − µ is a sum of positive roots,

0 ≤ (λ + µ, λ − µ) = (λ, λ) − (µ, µ),

so µ ∈ {x ∈ E | (x, x) ≤ (λ, λ)}, which is compact. Furthermore, the set of dominant weights is discrete, so there can only be a finite number of such µ. □

10.11. Proposition. [Hum72, p 70] Recall ρ = (1/2) Σ_{α ≻ 0} α (2.15). As discussed before, ρ ∈ Λ_W. Furthermore,

(a) ρ is a (strongly) dominant weight and, in fact, ρ = ω_1 + ··· + ω_n.

(b) For µ ∈ Λ_W⁺ and ν = w⁻¹(µ) for w ∈ W, we have (ν + ρ, ν + ρ) ≤ (µ + ρ, µ + ρ), with equality only if ν = µ.

Proof. For (a), observe

σ_i ρ = ρ − α_i  ⟹  (ρ − α_i, α_i) = (σ_i ρ, α_i) = (ρ, σ_i α_i) = (ρ, −α_i)  ⟹  2(ρ, α_i) = (α_i, α_i).

Thus ⟨ρ, α_i⟩ = 1, so by (10a)

ρ = Σ_i ⟨ρ, α_i⟩ ω_i = Σ_i ω_i.

For (b), observe

(ν + ρ, ν + ρ) = (w(ν + ρ), w(ν + ρ))
             = (µ + wρ, µ + wρ)
             = (µ + ρ, µ + ρ) − 2(µ, ρ − wρ)
             ≤ (µ + ρ, µ + ρ)

since µ ∈ Λ_W⁺ and ρ − wρ is a sum of positive roots by 10.8, with equality only if (µ, ρ − wρ) = 0. However, this gives us

(µ, ρ) = (µ, wρ) = (w⁻¹µ, ρ) = (ν, ρ)  ⟹  (µ − ν, ρ) = 0,

where the middle equality holds by the W-invariance of the inner product. But µ − ν is a sum of positive roots and ρ is strongly dominant, so it must be that µ = ν. □

Finally, we give a discussion of the notion of saturated sets of weights. This idea is important for those interested in understanding how [Hum72] proves results on irreducible finite-dimensional representations of semisimple Lie algebras over an algebraically closed field of characteristic 0 in Section 21.

10.12. Definition. We call a subset S ⊆ Λ_W saturated if, for all λ ∈ S, α ∈ Φ, and i between 0 and ⟨λ, α⟩, the weight λ − iα ∈ S. We say a saturated set of weights S has a highest weight λ if λ ∈ S and µ ≺ λ for all µ ∈ S.

10.13. Proposition. Given a saturated set of weights S, we have the following.

(a) S is stable under W, that is, σ_α λ = λ − ⟨λ, α⟩α ∈ S for all λ ∈ S, α ∈ Φ.
(b) If S has a highest weight, then S must be finite.
(c) If S has highest weight λ, µ ∈ Λ_W⁺, and µ ≺ λ, then µ ∈ S.
(d) (Optional) If S has highest weight λ and µ ∈ S, then (µ + ρ, µ + ρ) ≤ (λ + ρ, λ + ρ), with equality only if µ = λ.

Proof. Part (a) is immediate from the definition of S being saturated. For part (b), note that the number of dominant weights less than the highest weight λ ∈ S must be finite by 10.10. Since W is finite, each such dominant weight has a finite W-orbit, and by (a) together with 10.8 every element of S lies in one of these orbits, so S is finite.

For (c), we take an arbitrary µ′ = µ + Σ_{α∈∆} k_α α ∈ S with k_α ∈ ℤ⁺. One reduces the k_α one at a time, remaining in S at each step, until reaching µ ∈ S.

For (d), we can reduce to µ being dominant using part (c). If µ = λ − π for π a sum of positive roots, we have

(λ + ρ, λ + ρ) − (µ + ρ, µ + ρ) = (λ + ρ, λ + ρ) − (λ + ρ − π, λ + ρ − π)
                               = (λ + ρ, π) + (π, µ + ρ)
                               ≥ (λ + ρ, π)
                               ≥ 0,

where the inequalities follow because µ + ρ and λ + ρ are dominant. Because λ + ρ is strongly dominant, equality holds only if π = 0. □

The proposition above gives us a good picture of what a saturated set of weights S looks like if it has a highest weight. Namely, S consists of all dominant weights lower than or equal to the highest weight λ, together with the conjugates of all these weights under W.

10.14. Corollary. [Hum72, p 71] Given λ ∈ Λ_W⁺, there exists precisely one saturated set of weights S such that λ is the highest weight of S.

Proof. Given the exposition above, for λ ∈ Λ_W⁺ we see that at most one saturated set of weights can exist with λ the highest weight. Conversely, given λ ∈ Λ_W⁺, let us construct a saturated set of weights for which λ is a highest weight.

(a) Take S to be the set of all dominant weights below λ and their W-conjugates.
(b) Observe that S is stable under W. Moreover, wµ ≺ µ for all dominant weights µ and all w ∈ W by 10.8. Thus,

S = {µ ∈ Λ_W | wµ ≺ λ for all w ∈ W}.

Take α ∈ Φ and µ ∈ S. For µ′ = µ − kα with k between 0 and ⟨µ, α⟩, we wish to show µ′ ∈ S.

(c) Since µ′ is on the α-chain of µ, w(µ′) is on the w(α)-chain from w(µ) to w(µ − ⟨µ, α⟩α) = w(µ) − ⟨µ, α⟩w(α). Thus, we have either

wµ ≺ wµ′ ≺ wµ − ⟨µ, α⟩w(α)  or  wµ − ⟨µ, α⟩w(α) ≺ wµ′ ≺ wµ.

(d) Note, however, that wµ and wµ − ⟨µ, α⟩w(α) are both in the W-orbit of µ, so by the description of S above, they are both lower than λ.

Therefore, wµ′ ≺ λ for all w ∈ W, giving us µ′ ∈ S by our description of S. □
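The construction in the proof of 10.14 can be carried out concretely. Below is a brute-force sketch for A_2 (the coordinates, search bound, and helper names are my own, not from [Hum72]): enumerate the dominant weights µ with λ − µ a nonnegative integral sum of the simple roots, then take the union of their W-orbits.

```python
from itertools import product

# Simple roots of A2 in fundamental-weight coordinates: α = (2, -1), β = (-1, 2).
ALPHA, BETA = (2, -1), (-1, 2)

def reflect(c, root_row, i):
    # σi(λ) = λ - <λ, αi>·αi; in ω-coordinates <λ, αi> is the i-th coordinate.
    return (c[0] - c[i] * root_row[0], c[1] - c[i] * root_row[1])

def orbit(c):
    """W-orbit of a weight under the group generated by the two simple reflections."""
    seen, frontier = {c}, [c]
    while frontier:
        x = frontier.pop()
        for y in (reflect(x, ALPHA, 0), reflect(x, BETA, 1)):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

def saturated_set(lam, search=10):
    """All W-conjugates of dominant µ with λ - µ a nonnegative sum of α and β.

    The bound `search` is an ad hoc box; it suffices for small λ.
    """
    S = set()
    for ka, kb in product(range(search), repeat=2):  # λ - µ = ka·α + kb·β
        mu = (lam[0] - ka * ALPHA[0] - kb * BETA[0],
              lam[1] - ka * ALPHA[1] - kb * BETA[1])
        if mu[0] >= 0 and mu[1] >= 0:  # µ dominant
            S |= orbit(mu)
    return S

# Highest weight ω1 + ω2 = α + β: the saturated set is the six roots together with 0.
print(len(saturated_set((1, 1))))  # → 7
```

For λ = ω_1 + ω_2 the only dominant weights below λ are λ itself and 0, so S is the W-orbit of λ (the six roots) together with 0, matching the picture described after 10.13.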



11. Appendix: Summary Information

Throughout, we write ε_i for the i-th standard basis vector of ℝⁿ. Refer to Section 3 for the definitions of these root systems.

11.1. Type A_n.

11.1. Proposition. The root system A_n ⊆ ℝ^{n+1} has

(a) Φ = {ε_i − ε_j | 1 ≤ i ≠ j ≤ n + 1} and |Φ| = n(n + 1).
(b) The canonical choice of simple roots is

∆ = {ε_i − ε_{i+1} | 1 ≤ i < n + 1}

and thus Φ⁺ = {ε_i − ε_j | 1 ≤ i < j ≤ n + 1}.
(c) From the action on {ε_1, . . . , ε_{n+1}}, one can infer W = S_{n+1}.
(d) The fundamental weights are given by ω_i = ε_1 + ··· + ε_i for 1 ≤ i ≤ n, and

ρ = nε_1 + (n − 1)ε_2 + ··· + ε_n.
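The count in 11.1(a) is easy to verify numerically. A small sketch (the helper names are mine, not from [Hum72]), representing vectors in ℝ^{n+1} as tuples:

```python
def e(i, dim):
    """The i-th standard basis vector of R^dim as a tuple (0-indexed)."""
    return tuple(1 if j == i else 0 for j in range(dim))

def type_A(n):
    """The root system An inside R^{n+1}: all differences εi - εj with i != j."""
    dim = n + 1
    return {tuple(a - b for a, b in zip(e(i, dim), e(j, dim)))
            for i in range(dim) for j in range(dim) if i != j}

# |Φ| = n(n + 1), per 11.1(a).
for n in (1, 2, 3, 4):
    assert len(type_A(n)) == n * (n + 1)
print("root counts for A1..A4 agree with 11.1(a)")
```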

11.2. Type B_n.

11.2. Proposition. The root system B_n ⊆ ℝⁿ has

(a) Φ = {±(ε_i + ε_j) | 1 ≤ i < j ≤ n} ∪ {±(ε_i − ε_j) | 1 ≤ i < j ≤ n} ∪ {±ε_i | 1 ≤ i ≤ n}, and so |Φ| = 2n(n − 1) + 2n = 2n².
(b) The canonical choice of simple roots is

∆ = {ε_i − ε_{i+1} | 1 ≤ i < n} ∪ {ε_n}

and thus Φ⁺ = {ε_i + ε_j | 1 ≤ i < j ≤ n} ∪ {ε_i − ε_j | 1 ≤ i < j ≤ n} ∪ {ε_i | 1 ≤ i ≤ n}.
(c) Since the Dynkin diagram has no graph automorphisms, it must be that W = Aut Φ. Furthermore, automorphisms can permute {ε_1, . . . , ε_n} and change any of the signs, thus giving W = (ℤ/2)ⁿ ⋊ S_n.
(d) The fundamental weights are given by

ω_i = ε_1 + ··· + ε_i  (i < n),    ω_n = (1/2)(ε_1 + ··· + ε_n)

and

ρ = (1/2)((2n − 1)ε_1 + (2n − 3)ε_2 + ··· + ε_n).

11.3. Type C_n.

11.3. Proposition. The root system C_n ⊆ ℝⁿ has

(a) Φ = {±(ε_i + ε_j) | 1 ≤ i < j ≤ n} ∪ {±(ε_i − ε_j) | 1 ≤ i < j ≤ n} ∪ {±2ε_i | 1 ≤ i ≤ n}, and so |Φ| = 2n(n − 1) + 2n = 2n², just as in B_n.
(b) The canonical choice of simple roots is

∆ = {ε_i − ε_{i+1} | 1 ≤ i < n} ∪ {2ε_n}

and thus Φ⁺ = {ε_i + ε_j | 1 ≤ i < j ≤ n} ∪ {ε_i − ε_j | 1 ≤ i < j ≤ n} ∪ {2ε_i | 1 ≤ i ≤ n}.
(c) C_n presents a somewhat dual situation to B_n and thus also has Weyl group W = (ℤ/2)ⁿ ⋊ S_n.
(d) The fundamental weights are given by

ω_i = ε_1 + ··· + ε_i  (i ≤ n)

and

ρ = nε_1 + (n − 1)ε_2 + ··· + ε_n.

11.4. Type D_n.

11.4. Proposition. The root system D_n ⊆ ℝⁿ has

(a) Φ = {ε_i − ε_j | 1 ≤ i ≠ j ≤ n} ∪ {±(ε_i + ε_j) | 1 ≤ i < j ≤ n}, and so |Φ| = n(n − 1) + 2 · n(n − 1)/2 = 2n(n − 1).
(b) The canonical choice of simple roots is

∆ = {ε_i − ε_{i+1} | 1 ≤ i < n} ∪ {ε_{n−1} + ε_n}

and thus Φ⁺ = {ε_i ± ε_j | 1 ≤ i < j ≤ n}.
(c) D_n has Weyl group W = (ℤ/2)^{n−1} ⋊ S_n.
(d) The fundamental weights are given by

ω_i = ε_1 + ··· + ε_i  (i < n − 1),
ω_{n−1} = (1/2)(ε_1 + ··· + ε_{n−1} − ε_n),
ω_n = (1/2)(ε_1 + ··· + ε_{n−1} + ε_n),

and thus

ρ = (n − 1)ε_1 + (n − 2)ε_2 + ··· + ε_{n−1}.
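Similarly, the counts in 11.2–11.4 can be checked numerically. A sketch (again, the helpers below are my own, not from [Hum72]), building each Φ from tuples in ℝⁿ:

```python
def e(i, n):
    """The i-th standard basis vector of R^n as a tuple (0-indexed)."""
    return tuple(1 if j == i else 0 for j in range(n))

def add(u, v, s=1):
    return tuple(a + s * b for a, b in zip(u, v))

def neg(u):
    return tuple(-a for a in u)

def type_B(n):
    # ±(εi ± εj) for i < j, together with the short roots ±εi.
    body = {add(e(i, n), e(j, n), s) for i in range(n) for j in range(i + 1, n) for s in (1, -1)}
    short = {e(i, n) for i in range(n)}
    return body | {neg(v) for v in body} | short | {neg(v) for v in short}

def type_C(n):
    # ±(εi ± εj) for i < j, together with the long roots ±2εi.
    body = {add(e(i, n), e(j, n), s) for i in range(n) for j in range(i + 1, n) for s in (1, -1)}
    long_roots = {tuple(2 * a for a in e(i, n)) for i in range(n)}
    return body | {neg(v) for v in body} | long_roots | {neg(v) for v in long_roots}

def type_D(n):
    # ±(εi ± εj) for i < j only.
    body = {add(e(i, n), e(j, n), s) for i in range(n) for j in range(i + 1, n) for s in (1, -1)}
    return body | {neg(v) for v in body}

# |Bn| = |Cn| = 2n^2 and |Dn| = 2n(n-1), per 11.2-11.4.
for n in (2, 3, 4):
    assert len(type_B(n)) == 2 * n * n
    assert len(type_C(n)) == 2 * n * n
    assert len(type_D(n)) == 2 * n * (n - 1)
print("root counts agree with 11.2-11.4")
```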

References

[Hum72] J. E. Humphreys, Introduction to Lie Algebras and Representation Theory, 1972. Third printing, revised.
[Hum08] J. E. Humphreys, Representations of Semisimple Lie Algebras in the BGG Category O, 2008.
[Wik17] Wikipedia, Root system — Wikipedia, The Free Encyclopedia, 2017. [Online; accessed 21-February-2017] https://en.wikipedia.org/w/index.php?title=Root_system&oldid=759729605.

[WMM18] Wikipedia and Mathphysman-Mathematica, The shaded region is the fundamental Weyl chamber for the base (2018). [Online; accessed 7-March-2018] https://en.wikipedia.org/wiki/Weyl_group#/media/File:Weyl_chambers_for_A2.png.
