
From root systems to Dynkin diagrams

Heiko Dietrich

Abstract. We describe root systems and their associated Dynkin diagrams; these notes follow closely the book of Erdmann & Wildon ("Introduction to Lie Algebras", 2006) and lecture notes of Willem de Graaf (Italy). We briefly describe how root systems arise from Lie algebras.

1. Root systems

1.1. Euclidean spaces. Let V be a finite dimensional Euclidean space, that is, a finite dimensional R-space with an inner product (−,−): V × V → R, which is bilinear, symmetric, and positive definite. The length of v ∈ V is ||v|| = √(v, v); the angle α between two non-zero v, w ∈ V is defined by cos α = (v, w)/(||v|| ||w||). If v ∈ V is non-zero, then the hyperplane perpendicular to v is H_v = {w ∈ V | (w, v) = 0}. The reflection in H_v is the linear map s_v: V → V which maps v to −v and fixes every w ∈ H_v; recall that V = H_v ⊕ Span_R(v), hence

  s_v: V → V,  w ↦ w − (2(w, v)/(v, v))·v.

In the following, for v, w ∈ V we write

  ⟨w, v⟩ = 2(w, v)/(v, v);

note that ⟨−, −⟩ is linear only in the first component. An important observation is that each s_u leaves the inner product invariant, that is, if v, w ∈ V, then (s_u(v), s_u(w)) = (v, w). We use this notation throughout these notes.
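These definitions are easy to experiment with. The following sketch (plain Python with exact rational arithmetic; the helper names are ours, not from the text) implements ⟨−,−⟩ and the reflection s_v, and can be used to check the invariance (s_u(v), s_u(w)) = (v, w):

```python
from fractions import Fraction

def ip(v, w):
    # standard inner product (v, w) on Q^n
    return sum(Fraction(a) * b for a, b in zip(v, w))

def bracket(w, v):
    # <w, v> = 2(w, v)/(v, v)
    return 2 * ip(w, v) / ip(v, v)

def reflect(v, w):
    # s_v(w) = w - <w, v> v, the reflection of w in the hyperplane H_v
    c = bracket(w, v)
    return tuple(wi - c * vi for vi, wi in zip(v, w))
```

For instance, reflect((1, 0), (1, 0)) equals (−1, 0), reflect(u, ·) is an involution, and it preserves all inner products.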

1.2. Abstract root systems.

Definition 1.1. A finite subset Φ ⊆ V is a root system for V if the following hold:
(R1) 0 ∉ Φ and Span_R(Φ) = V,
(R2) if α ∈ Φ and λα ∈ Φ with λ ∈ R, then λ ∈ {±1},
(R3) s_α(β) ∈ Φ for all α, β ∈ Φ,
(R4) ⟨α, β⟩ ∈ Z for all α, β ∈ Φ.
The rank of Φ is dim(V). Note that each s_α permutes Φ, and if α ∈ Φ, then −α ∈ Φ.

Lemma 1.2. If α, β ∈ Φ with α ≠ ±β, then ⟨α, β⟩⟨β, α⟩ ∈ {0, 1, 2, 3}.

Proof. By (R4), the product in question is an integer. If v, w ∈ V \ {0}, then (v, w)² = ||v||² ||w||² cos²(θ) where θ is the angle between v and w, thus ⟨α, β⟩⟨β, α⟩ = 4cos²(θ) ≤ 4. If cos²(θ) = 1, then θ is a multiple of π, so α and β are linearly dependent, a contradiction. □

If α, β ∈ Φ with α ≠ ±β and ||β|| ≥ ||α||, then |⟨β, α⟩| ≥ |⟨α, β⟩|; by the previous lemma, all possibilities are listed in Table 1.

  ⟨α, β⟩   ⟨β, α⟩   θ       (β, β)/(α, α)
    0        0      π/2     –
    1        1      π/3     1
   −1       −1      2π/3    1
    1        2      π/4     2
   −1       −2      3π/4    2
    1        3      π/6     3
   −1       −3      5π/6    3

Table 1. Angles between root vectors.
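The rows of Table 1 can be confirmed numerically: place α along the x-axis, put β at angle θ with the listed squared length ratio, and compute both brackets. A small Python check (helper names are ours; for θ = π/2 the ratio is unconstrained and we simply pick 1):

```python
import math

def brackets(a, b):
    # returns (<a,b>, <b,a>) for vectors in R^2, rounded to integers
    ip = lambda u, v: u[0] * v[0] + u[1] * v[1]
    return (round(2 * ip(a, b) / ip(b, b)), round(2 * ip(b, a) / ip(a, a)))

def row(theta, ratio2):
    # alpha of length 1 along the x-axis, beta at angle theta
    # with (beta, beta)/(alpha, alpha) = ratio2
    alpha = (1.0, 0.0)
    r = math.sqrt(ratio2)
    beta = (r * math.cos(theta), r * math.sin(theta))
    return brackets(alpha, beta)
```

For example, row(3 * math.pi / 4, 2) returns (−1, −2), the B_2 row of the table.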

Let α, β ∈ Φ with (α, β) ≠ 0 and (β, β) ≥ (α, α). Recall that s_β(α) = α − ⟨α, β⟩β ∈ Φ, and ⟨α, β⟩ = ±1, depending on whether the angle between α and β is acute or obtuse, see Table 1. Thus, if the angle is > π/2, then α + β ∈ Φ; if the angle is < π/2, then α − β ∈ Φ.

Example 1.3. We construct all root systems Φ of R². Suppose α ∈ Φ is of shortest length and choose β ∈ Φ such that the angle θ ∈ {π/2, 2π/3, 3π/4, 5π/6} between α and β is as large as possible. This gives root systems of type A_1 × A_1, A_2, B_2, and G_2, respectively:

[Figure: the rank 2 root systems of types A_1 × A_1, A_2, B_2, and G_2, each drawn in the plane with the roots α and β indicated.]
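The pictures can be verified against Definition 1.1. The following sketch (our own helper, exact arithmetic, rank 2 only) checks axioms (R1)-(R4) for a finite set of vectors in Q²:

```python
from fractions import Fraction

def is_root_system(vectors):
    # check (R1)-(R4) of Definition 1.1 for a finite subset of Q^2
    roots = {tuple(Fraction(x) for x in r) for r in vectors}
    ip = lambda v, w: sum(a * b for a, b in zip(v, w))
    if any(r == (0, 0) for r in roots):
        return False                                  # (R1): 0 is not a root
    if all(a[0] * b[1] - a[1] * b[0] == 0 for a in roots for b in roots):
        return False                                  # (R1): roots must span R^2
    for a in roots:
        # (R2): the only multiples of a in the set are +a and -a
        if len([b for b in roots if a[0] * b[1] - a[1] * b[0] == 0]) != 2:
            return False
        for b in roots:
            n = 2 * ip(b, a) / ip(a, a)               # <b, a>
            if n.denominator != 1:
                return False                          # (R4): <b, a> is integral
            if tuple(bi - n * ai for ai, bi in zip(a, b)) not in roots:
                return False                          # (R3): s_a(b) is a root
    return True

# the eight roots of B_2 from the picture
B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]
```

Deleting any single root from B2 breaks (R2) or (R3), as the checker confirms.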

Definition 1.4. A root system Φ is irreducible if it cannot be partitioned into two non-empty subsets Φ_1 and Φ_2 such that (α, β) = 0 for all α ∈ Φ_1 and β ∈ Φ_2.

Lemma 1.5. If Φ is a root system, then Φ = Φ_1 ∪ ... ∪ Φ_k, where each Φ_i is an irreducible root system for the space V_i = Span_R(Φ_i) ≤ V; in particular, V = V_1 ⊕ ... ⊕ V_k.

Proof. For α, β ∈ Φ write α ∼ β if and only if there exist γ_1, ..., γ_s ∈ Φ with α = γ_1, β = γ_s, and (γ_i, γ_{i+1}) ≠ 0 for 1 ≤ i < s; then ∼ is an equivalence relation on Φ. Let Φ_1, ..., Φ_k be the equivalence classes of this relation. Clearly, (R1), (R2), and (R4) are satisfied for each Φ_k and V_k = Span_R(Φ_k). To prove (R3), consider α ∈ Φ_k and β ∈ Φ_k; if (α, β) = 0, then s_α(β) = β ∈ Φ_k. If (α, β) ≠ 0, then (α, s_α(β)) ≠ 0 since s_α leaves the inner product invariant; thus, s_α(β) ∼ α, and s_α(β) ∈ Φ_k. In particular, Φ_k is an irreducible root system of V_k. Clearly, every root appears in some V_i, and the sum of the V_i spans V. If v_1 + ... + v_k = 0 with each v_i ∈ V_i, then 0 = (v_1 + ... + v_k, v_j) = (v_j, v_j) for all j, that is, v_j = 0 for all j. □
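The decomposition of Lemma 1.5 is algorithmic: the classes of ∼ are the connected components of the graph on Φ joining non-orthogonal roots. A small sketch (the function name is ours):

```python
def components(roots):
    # connected components of the non-orthogonality relation of Lemma 1.5
    ip = lambda v, w: sum(a * b for a, b in zip(v, w))
    comps = []
    for r in roots:
        # merge r with every existing component containing a root
        # non-orthogonal to r
        linked = [c for c in comps if any(ip(r, s) != 0 for s in c)]
        merged = {r}.union(*linked)
        comps = [c for c in comps if c not in linked] + [merged]
    return comps
```

The A_1 × A_1 system splits into two components, while B_2 is irreducible.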

1.3. Bases of root systems. Let Φ be a root system for V.

Definition 1.6. A subset Π ⊆ Φ is a base (or root basis) for Φ if the following hold:
(B1) Π is a vector space basis for V,
(B2) every α ∈ Φ can be written as α = Σ_{β∈Π} k_β·β with either all k_β ∈ N or all k_β ∈ −N.
A root α ∈ Φ is positive with respect to Π if the coefficients in (B2) are positive; otherwise α is negative. The roots in Π are called simple roots; the reflections s_β with β ∈ Π are simple reflections.

We need the notion of root orders to prove that every root system has a base.

Definition 1.7. A root order is a partial order ">" on V such that every α ∈ Φ satisfies α > 0 or −α > 0, and ">" is compatible with addition and multiplication by positive scalars.

Lemma 1.8. Let Φ be a root system of V.
a) Let {v_1, ..., v_ℓ} be a basis of V and write v > 0 if and only if v = Σ_{i=1}^ℓ k_i·v_i and the first non-zero k_i is positive; define v > w if v − w > 0. Then ">" is a root order, the lexicographic root order with respect to the ordered basis {v_1, ..., v_ℓ}.
b) Choose v_0 ∈ V outside the (finitely many) hyperplanes H_α, α ∈ Φ. For u, v ∈ V write u > v if and only if (u, v_0) > (v, v_0). Then ">" is a root order, the root order defined by v_0.

Let ">" be any root order; call α ∈ Φ positive if α > 0, and negative otherwise; α is simple if α > 0 and it cannot be written as a sum of two positive roots. The proof of the following theorem shows that the set Π of all simple roots is a base for Φ:

Theorem 1.9. Every root system has a base.

Proof. Let ">" be a root order with set of simple roots Π; we show that Π is a base.
First, let α, β ∈ Π with α ≠ β: If α − β is a positive root, then α = (α − β) + β, and α is not simple; if α − β is a negative root, then β = (β − α) + α is not simple; thus, α − β ∉ Φ. This implies that the angle between α and β is at least π/2, hence (α, β) ≤ 0, as seen in Table 1.
Second, Π is linearly independent: if not, then there exist pairwise distinct α_1, ..., α_k ∈ Π with α_1 = Σ_{i=2}^k k_i·α_i = β_+ + β_−, where β_+ and β_− are the sums of all k_i·α_i with k_i positive and negative, respectively. By construction, (β_+, β_−) ≥ 0 since each (α_i, α_j) ≤ 0. Note that β_+ ≠ 0 since α_1 > 0, thus (β_+, β_+) > 0 and (α_1, β_+) = (β_+, β_+) + (β_−, β_+) > 0. On the other hand, (α_1, α_j) ≤ 0 and the definition of β_+ imply that (α_1, β_+) ≤ 0, a contradiction.
Finally, we show that every positive root α ∈ Φ is a linear combination of Π with coefficients in N. Clearly, this holds if α ∈ Π. If α ∉ Π, then α = β + γ for positive roots β, γ ∈ Φ with α > β, γ. By induction, β and γ are linear combinations of Π with coefficients in N. □
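The proof of Theorem 1.9 is constructive. With the lexicographic root order of Lemma 1.8a) one can extract a base directly: the simple roots are the positive roots that are not sums of two positive roots. A sketch in Python (the B_2 data and names are ours):

```python
def simple_roots(roots):
    # positive w.r.t. the lexicographic root order:
    # first non-zero coordinate > 0
    pos = [r for r in roots if next(x for x in r if x != 0) > 0]
    # sums of two positive roots
    sums = {tuple(a + b for a, b in zip(u, v)) for u in pos for v in pos}
    return sorted(r for r in pos if r not in sums)

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (-1, -1), (1, -1), (-1, 1)]
```

For B_2 this yields the base {(0, 1), (1, −1)}; indeed (1, 0) = (1, −1) + (0, 1) and (1, 1) = (1, −1) + 2·(0, 1), so every positive root is an N-combination of the two simple roots.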

Corollary 1.10. Let Π = {α_1, ..., α_ℓ} be a root basis of Φ.
a) If ">" is the lexicographic order on V with respect to the basis Π of V, then Π is the set of simple roots with respect to ">"; thus, every base is defined by some root order.
b) If v_0 ∈ V with (v_0, α) > 0 for all α ∈ Π, then Π is the set of simple roots with respect to the root order defined by v_0.

Proof. a) This is obvious.
b) Denote by ">" the root order defined by v_0. Let α_j ∈ Π; clearly, α_j > 0. Suppose α_j = β + γ for some β, γ ∈ Φ with β, γ > 0. Write β = Σ_{i=1}^ℓ k_i·α_i and γ = Σ_{i=1}^ℓ h_i·α_i, thus k_j + h_j = 1 and k_i = −h_i if i ≠ j. Recall that either k_1, ..., k_ℓ ≥ 0 or k_1, ..., k_ℓ ≤ 0; by definition, each (v_0, α_k) > 0, thus (v_0, β) > 0 implies k_1, ..., k_ℓ ≥ 0. Analogously, (v_0, γ) > 0 forces h_1, ..., h_ℓ ≥ 0. Thus, if i ≠ j, then h_i = −k_i implies h_i = k_i = 0. Now β = k_j·α_j and γ = h_j·α_j yield k_j, h_j ∈ {±1}. But h_j + k_j = 1, which is not possible. Thus α_j must be simple. □

The proof of Theorem 1.9 also implies the following.

Corollary 1.11. If α, β ∈ Φ are distinct simple roots, then (α, β) ≤ 0.

If Π is a root base of Φ, then α ∈ Φ is positive with respect to Π if α is positive with respect to the root order which defines Π; write Φ+ and Φ− for the sets of positive and negative roots, respectively. Note that Φ− = −Φ+ and Φ = Φ+ ∪ Φ−.

We remark that root bases can also be constructed geometrically: Fix a hyperplane in V which intersects Φ trivially; label the roots on one side of the hyperplane positive, the others negative. Define Π to be the set of positive roots which are nearest to the hyperplane.

1.4. Weyl groups. Let Φ be a root system with ordered root base Π = {α_1, ..., α_ℓ}.

Definition 1.12. The Weyl group of Φ is the subgroup of linear transformations of V generated by all reflections s_α with α ∈ Φ, that is, W = W(Φ) = ⟨s_α | α ∈ Φ⟩.

Lemma 1.13. The Weyl group W of Φ is finite.

Proof. By (R3), there is a group homomorphism ϕ: W → Sym(Φ). Since Φ contains a basis of V, the kernel of ϕ is trivial, hence W ≅ ϕ(W) ≤ Sym(Φ) is finite. □

Theorem 1.14. Let W_0 be the subgroup of W generated by the simple reflections s_{α_1}, ..., s_{α_ℓ}.

a) Each s_{α_i} permutes the positive roots other than α_i.
b) If β ∈ Φ, then β = g(α) for some g ∈ W_0 and α ∈ Π.
c) We have W = W_0.

Proof. a) Let β ∈ Φ+ with β ≠ α_i; write β = Σ_{m=1}^ℓ k_m·α_m with all k_m ≥ 0. Since β ≠ α_i, there must be k_j > 0 for some j ≠ i. The coefficient of α_j in s_{α_i}(β) = β − ⟨β, α_i⟩α_i still is k_j > 0, hence s_{α_i}(β) is positive.
b) We first consider β ∈ Φ+ and show that β = g(α) for some g ∈ W_0 and α ∈ Π. The assertion follows by induction on the height of β, defined as

  ht(β) = Σ_{γ∈Π} k_γ  where  β = Σ_{γ∈Π} k_γ·γ.

If ht(β) = 1, then choose g = 1 and α = β ∈ Π; if ht(β) ≥ 2, then, by (R2), at least two k_γ must be strictly positive. Suppose (β, γ) ≤ 0 for all γ ∈ Π. Then (β, β) = Σ_{γ∈Π} k_γ(β, γ) ≤ 0, a contradiction to β ≠ 0. Thus, there is γ ∈ Π with (β, γ) > 0, and so

  ht(s_γ(β)) = ht(β) − ⟨β, γ⟩ < ht(β).

Recall that s_γ(β) is positive; by the induction hypothesis, s_γ(β) = g′(α) for some g′ ∈ W_0 and α ∈ Π, hence β = g(α) with g = s_γ ∘ g′ ∈ W_0. Negative roots are dealt with analogously.
c) We have to show that s_β ∈ W_0 for every β ∈ Φ. Part b) shows that β = g(α) for some g ∈ W_0 and α ∈ Π, and one can show that s_β = g ∘ s_α ∘ g⁻¹, which lies in W_0. □

Corollary 1.15. The root system Φ is completely determined by a base Π.

Theorem 1.16. If Π and Π′ are two root bases of Φ, then g(Π) = Π′ for some g ∈ W_0.

Proof. Consider Π and Π′ as the simple roots with respect to root orders defined by v_0 ∈ V and v_0′ ∈ V, respectively. The Weyl vector with respect to Π is ρ = (1/2)·Σ_β β where β runs over all roots which are positive with respect to Π; similarly, ρ′ is defined with respect to Π′. Since s_α with α ∈ Π′ permutes the positive roots other than α, we have s_α(ρ′) = ρ′ − α. Since W(Φ) is finite, we can choose w ∈ W(Φ) such that (w(v_0), ρ′) is maximal. Now, if α ∈ Π′, then

  (w(v_0), ρ′) ≥ (s_α(w(v_0)), ρ′)    (by the choice of w)
              = (w(v_0), s_α(ρ′))     (since s_α² = 1 and s_α preserves the inner product)
              = (w(v_0), ρ′ − α)
              = (w(v_0), ρ′) − (w(v_0), α).

Thus, (w(v_0), α) ≥ 0 for all α ∈ Π′. If (w(v_0), α) = 0, then (v_0, w⁻¹(α)) = 0, which is impossible as (v_0, β) ≠ 0 for all β ∈ Φ by the definition of v_0. It follows from Corollary 1.10 that Π′ is the set of simple roots with respect to the root order defined by w(v_0). If β ∈ Π, then (w(β), w(v_0)) = (β, v_0) > 0. Thus, both w(Π) and Π′ are root bases with respect to the root order defined by w(v_0). It follows that w(Π) = Π′. □
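The identity s_α(ρ) = ρ − α used in the proof of Theorem 1.16 can be checked directly for B_2 (the coordinates and helper names below are our own):

```python
from fractions import Fraction

def ip(v, w):
    # standard inner product
    return sum(a * b for a, b in zip(v, w))

def reflect(a, b):
    # s_a(b) = b - <b, a> a
    n = Fraction(2) * ip(b, a) / ip(a, a)
    return tuple(bi - n * ai for ai, bi in zip(a, b))

# positive roots of B_2 w.r.t. the base {(0, 1), (1, -1)}
pos = [(0, 1), (1, -1), (1, 0), (1, 1)]
# Weyl vector: half the sum of the positive roots
rho = tuple(Fraction(sum(c), 2) for c in zip(*pos))
```

Here rho = (3/2, 1/2), and reflecting it in either simple root subtracts exactly that root.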

1.5. Cartan matrices. Let Φ be a root system with ordered base Π = {α_1, ..., α_ℓ}.

Definition 1.17. The Cartan matrix of Φ with respect to Π is the ℓ × ℓ matrix C = (C_ij)_{1≤i,j≤ℓ} where C_ij = ⟨α_i, α_j⟩ ∈ {0, ±1, ±2, ±3}.

Note that each diagonal entry of a Cartan matrix is 2. Since (s_α(u), s_α(v)) = (u, v) for all α ∈ Φ, Theorem 1.16 shows that the Cartan matrix of Φ depends only on the ordering adopted for the chosen base Π, and not on the base itself. If C and C′ are two Cartan matrices of a root system Φ, then they are equivalent (and we write C ∼ C′) if and only if there is a permutation σ ∈ Sym(ℓ) with C_ij = C′_{σ(i),σ(j)} for all 1 ≤ i, j ≤ ℓ. We show that a Cartan matrix basically determines the root system; we first need more notation.

Definition 1.18. Let Φ and Φ′ be root systems of V and V′, respectively. Then Φ and Φ′ are isomorphic (and we write Φ ≅ Φ′) if there is a vector space isomorphism ϕ: V → V′ such that ϕ(Φ) = Φ′ and ⟨α, β⟩ = ⟨ϕ(α), ϕ(β)⟩ for all α, β ∈ Φ.

An isomorphism of root systems preserves angles between root vectors; it does not necessarily preserve distances, as the map v ↦ λv induces an isomorphism between Φ and {λα | α ∈ Φ}.

Lemma 1.19. Let Φ and Φ′ be root systems with Cartan matrices C and C′, respectively. Then Φ ≅ Φ′ if and only if C ∼ C′.

Proof. Let Π and Π′ be root bases which define C and C′, respectively. First, suppose there is an isomorphism ϕ: Φ → Φ′ of root systems. Since ϕ(Π) is a base of Φ′, there is w ∈ W(Φ′) with ϕ(Π) = w(Π′), see Theorem 1.16. Clearly, Π and ϕ(Π) define the same Cartan matrix C, and the Cartan matrix of w(Π′) is equivalent to the Cartan matrix C′ of Π′, thus C ∼ C′.
Second, suppose C ∼ C′. Up to reordering the simple roots, we can assume that C = C′, defined by Π = {α_1, ..., α_ℓ} and Π′ = {α_1′, ..., α_ℓ′}; thus, ⟨α_i, α_j⟩ = ⟨α_i′, α_j′⟩ for all i, j. Let ϕ: V → V′ be the linear map defined by ϕ(α_i) = α_i′ for all i. By definition, this is a vector space isomorphism which satisfies ϕ(Π) = Π′ and ⟨α, β⟩ = ⟨ϕ(α), ϕ(β)⟩ for all α, β ∈ Φ. It remains to show that ϕ(Φ) = Φ′.
If v ∈ V and α_i ∈ Π, then ⟨v, α_i⟩ = ⟨ϕ(v), α_i′⟩ follows from the definition of ϕ and the fact that ⟨−, −⟩ is linear in the first component. This implies ϕ(s_{α_i}(v)) = ϕ(v) − ⟨v, α_i⟩α_i′ = s_{α_i′}(ϕ(v)). Thus, the image under ϕ of the orbit of v ∈ V under the Weyl group W(Φ) is contained in the orbit of ϕ(v) under W(Φ′). Since Φ = {w(α) | w ∈ W_0, α ∈ Π}, see Theorem 1.14b), and ϕ(Π) = Π′, it follows that ϕ(Φ) ⊆ Φ′. The same argument applied to ϕ⁻¹ shows ϕ⁻¹(Φ′) ⊆ Φ, hence ϕ(Φ) = Φ′. In conclusion, Φ ≅ Φ′. □
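Computing a Cartan matrix from an ordered base is immediate. A sketch (function name ours; ordering the base of B_2 short root first reproduces the matrix of Example 1.23, while swapping the order gives an equivalent matrix):

```python
from fractions import Fraction

def cartan_matrix(base):
    # C_ij = <a_i, a_j> = 2(a_i, a_j)/(a_j, a_j) for an ordered base;
    # the entries are integers by axiom (R4)
    ip = lambda v, w: sum(a * b for a, b in zip(v, w))
    n = lambda a, b: Fraction(2) * ip(a, b) / ip(b, b)
    return [[int(n(a, b)) for b in base] for a in base]
```

For the B_2 base {(0, 1), (1, −1)} this returns [[2, −1], [−2, 2]].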

1.6. Dynkin diagrams. Let Φ be a root system with ordered base Π = {α_1, ..., α_ℓ}.

Definition 1.20. The Dynkin diagram of Φ (with respect to Π) has vertices α_1, ..., α_ℓ, and there are d_ij = ⟨α_i, α_j⟩⟨α_j, α_i⟩ ∈ {0, 1, 2, 3} edges between α_i and α_j with i ≠ j; if ||α_j|| > ||α_i||, then these edges are directed, pointing to the shorter root α_i. The same graph, but without directions, is the Coxeter graph of Φ.

If there is a single edge between α and β, then ||α|| = ||β|| and the edge is undirected, see Table 1; if there are multiple edges between them, then ||α|| ≠ ||β|| and the edges are directed.

Theorem 1.21. Two root systems are isomorphic if and only if their Dynkin diagrams are the same (up to relabeling the vertices).

Proof. By Lemma 1.19, isomorphic root systems have equivalent Cartan matrices, and the entries of a Cartan matrix define the Dynkin diagram. Thus, up to relabeling the simple roots, the Dynkin diagrams are the same. Conversely, from a Dynkin diagram one can recover the values ⟨α_i, α_j⟩ for all 1 ≤ i, j ≤ ℓ; recall that ⟨α_i, α_j⟩ ≤ 0 for i ≠ j, and Table 1 determines the angle between α_i and α_j, and the ratio of their lengths. In particular, the Cartan matrix is determined. Together with Lemma 1.19, it follows that identical Dynkin diagrams define identical Cartan matrices, which define isomorphic root systems. □

Theorem 1.22. A root system Φ is irreducible if and only if its Dynkin diagram is connected.

Example 1.23. Consider the root system of type B_2 from Example 1.3. We have Φ = {±α, ±β, ±(α + β), ±(2α + β)} with base Π = {α, β}. The angle between α and β is 3π/4, and ||β|| > ||α||. Table 1 shows that ⟨α, β⟩ = −1 and ⟨β, α⟩ = −2. Thus, the associated Cartan matrix and Dynkin diagram are

  C = [  2  −1 ]
      [ −2   2 ]    and    B_2:  β ⇒ α  (a double edge, directed to the shorter root α).

Conversely, from such a diagram we read off that ||β|| > ||α|| and ⟨α, β⟩⟨β, α⟩ = 2; Table 1 shows ⟨α, β⟩ = −1 and ⟨β, α⟩ = −2; recall that both values must be negative by Corollary 1.11. In particular, the angle between α and β is 3π/4, and ||β|| = √2·||α||. Note that we have recovered the Cartan matrix and, using Corollary 1.15, we can recover the root system by constructing the closure of {±α, ±β} under simple reflections; the latter can be translated into an efficient algorithm (for arbitrary Dynkin diagrams). In conclusion, we have seen how a root system is (up to isomorphism) uniquely determined by its Dynkin diagram.
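The closure procedure just described fits in a few lines: work with integer coefficient vectors k relative to the simple roots, where s_{α_j} replaces k_j by k_j − Σ_i k_i·C_ij. A sketch (the function name is ours):

```python
def roots_from_cartan(C):
    # close {+-simple roots} under the simple reflections; roots are
    # stored as integer coefficient vectors w.r.t. the simple roots
    l = len(C)
    simples = [tuple(int(i == j) for j in range(l)) for i in range(l)]
    roots = set(simples) | {tuple(-x for x in k) for k in simples}
    frontier = list(roots)
    while frontier:
        k = frontier.pop()
        for j in range(l):
            n = sum(k[i] * C[i][j] for i in range(l))   # <beta, alpha_j>
            image = k[:j] + (k[j] - n,) + k[j + 1:]     # s_j applied to beta
            if image not in roots:
                roots.add(image)
                frontier.append(image)
    return roots
```

Starting from the Cartan matrix of Example 1.23 this recovers all 8 roots of B_2, including (2, 1), i.e. 2α + β; the Cartan matrices of A_2 and G_2 yield 6 and 12 roots, respectively.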

2. Irreducible root systems

By Lemma 1.5, it suffices to study irreducible root systems; the associated Dynkin diagrams are classified in the following theorem.

Theorem 2.1. The Dynkin diagram of an irreducible root system is either a member of one of the four families A_n (n ≥ 1), B_n (n ≥ 2), C_n (n ≥ 3), D_n (n ≥ 4) as shown in Table 2, where each diagram has n vertices, or one of the five exceptional diagrams E_6, E_7, E_8, G_2, F_4 as shown in Table 3. Each of the diagrams listed in Tables 2 and 3 occurs as the Dynkin diagram of some irreducible root system.

[Diagrams of types A_n, B_n, C_n, D_n]

Table 2. Four infinite families of connected Dynkin diagrams.

[Diagrams of types G_2, F_4, E_6, E_7, E_8]

Table 3. Five exceptional Dynkin diagrams.

Sketch of Proof. Recall that the Coxeter diagram of an irreducible root system is the (connected) Dynkin diagram with all edges considered as undirected; the first step of the proof is to classify all connected Coxeter diagrams. For this, we consider admissible sets of a Euclidean space V with inner product (−,−), that is, sets A = {v_1, ..., v_n} of linearly independent unit vectors with (v_i, v_j) ≤ 0 and 4(v_i, v_j)² ∈ {0, 1, 2, 3} if i ≠ j. The associated graph Γ_A has vertices v_1, ..., v_n, and d_ij = 4(v_i, v_j)² edges between v_i and v_j if i ≠ j. Every Coxeter diagram is the graph Γ_A for some admissible set A (normalise the simple roots to unit length). We determine the structure of Γ_A for an admissible set A; we assume that Γ_A is connected and proceed as follows:

a) The number of pairs of vertices in Γ_A joined by at least one edge is at most |A| − 1: the vector v = v_1 + ... + v_n ≠ 0 satisfies 0 < (v, v) = n + 2·Σ_{i<j}(v_i, v_j), and so n > −2·Σ_{i<j}(v_i, v_j). If v_i and v_j are joined, then 4(v_i, v_j)² ≥ 1, hence 2(v_i, v_j) ≤ −1; thus the number of joined pairs is less than n.
b) Γ_A contains no cycles: the vertices of a cycle would form an admissible set A′ whose graph has at least |A′| pairs of joined vertices, contradicting a) applied to A′.

c) No vertex of Γ_A lies on more than three edges (counting multiplicities): Let w be a vertex of Γ_A with adjacent vertices w_1, ..., w_k. Since there are no cycles, (w_i, w_j) = 0 for i ≠ j. Let U = Span_R(w_1, ..., w_k, w), and extend {w_1, ..., w_k} to an orthonormal basis of U, say by adjoining w_0. Clearly, (w, w_0) ≠ 0 and w = Σ_{i=0}^k (w, w_i)·w_i. By assumption, w is a unit vector, so 1 = (w, w) = Σ_{i=0}^k (w, w_i)². Since (w, w_0)² > 0, this shows that Σ_{i=1}^k (w, w_i)² < 1, hence the number of edges at w is Σ_{i=1}^k 4(w, w_i)² < 4. In particular, as A is admissible and (w, w_i) ≠ 0, we have (w, w_i)² ≥ 1/4 for 1 ≤ i ≤ k, so also k ≤ 3.
d) If Γ_A has a triple edge, then Γ_A is the Coxeter graph of type G_2: This follows from c) and the fact that Γ_A is assumed to be connected.

e) Suppose Γ_A has a subgraph which is a line along w_1, ..., w_k with single edges between w_i and w_{i+1}; let A′ = (A \ {w_1, ..., w_k}) ∪ {w} where w = w_1 + ... + w_k. Then A′ is admissible, and the graph Γ_{A′} is obtained from Γ_A by shrinking the line to a single vertex: Clearly, A′ is linearly independent, so we only need to verify the conditions on the inner products. By assumption, 2(w_i, w_{i+1}) = −1 for 1 ≤ i ≤ k−1 and (w_i, w_j) = 0 for i ≠ j otherwise, thus (w, w) = k + 2·Σ_{i=1}^{k−1}(w_i, w_{i+1}) = k − (k−1) = 1. Suppose v ∈ A with v ≠ w_i for 1 ≤ i ≤ k; since there are no cycles, v is joined to at most one w_i. Thus, either (v, w) = 0 or (v, w) = (v, w_i), and then 4(v, w)² ∈ {0, 1, 2, 3}, so A′ is an admissible set; also Γ_{A′} is as claimed.
f) A branch vertex is a vertex which is adjacent to three or more edges; by c), a branch vertex is adjacent to exactly three edges. The graph Γ_A has no more than one double edge, not both a double edge and a branch vertex, and no more than one branch vertex: If Γ_A contains two or more double edges, then it has a subgraph which is a line along w_1, ..., w_k with single edges between w_2, ..., w_{k−1}, and double edges between w_1 and w_2, and between w_{k−1} and w_k. By e), we obtain an admissible set {w_1, v, w_k} with two edges between w_1 and v, and two edges between v and w_k, contradicting c). The other two parts of the claim are proved in a similar way.

g) If Γ_A has a subgraph which is a line along w_1, ..., w_k with single edges, then (w, w) = k(k+1)/2 where w = w_1 + 2w_2 + ... + kw_k: The shape of the subgraph implies 2(w_i, w_{i+1}) = −1 for 1 ≤ i ≤ k−1, and (w_i, w_j) = 0 for i ≠ j otherwise; the claim follows since

  (w, w) = Σ_{i=1}^k i² − Σ_{i=1}^{k−1} i(i+1) = k² − Σ_{i=1}^{k−1} i = k(k+1)/2.

h) If Γ_A has a double edge, then Γ_A is a Coxeter graph of type B_n or F_4: Such a Γ_A is a line along w_1, ..., w_p, u_q, u_{q−1}, ..., u_1 with single edges and one double edge between w_p and u_q. By g), (w, w) = p(p+1)/2 and (u, u) = q(q+1)/2 for w = Σ_{i=1}^p i·w_i and u = Σ_{i=1}^q i·u_i. From the graph, 4(w_p, u_q)² = 2 and (w_i, u_j) = 0 otherwise, hence (w, u)² = (p·w_p, q·u_q)² = p²q²/2. As w and u are linearly independent, the Cauchy-Schwarz inequality implies (w, u)² < (w, w)(u, u), which yields 2pq < (p+1)(q+1), hence (p−1)(q−1) = pq − p − q + 1 < 2. So either q = 1 or p = q = 2.
i) If Γ_A has a branch vertex, then Γ_A is of type D_n for some n ≥ 4, or E_6, E_7, or E_8: Such a graph consists of three lines v_1, ..., v_p, z and w_1, ..., w_q, z and x_1, ..., x_r, z, connected at the branch vertex z; we can assume p ≥ q ≥ r. We have to show that either q = r = 1, or q = 2, r = 1, and p ≤ 4. Let v = Σ_{i=1}^p i·v_i, w = Σ_{i=1}^q i·w_i, and x = Σ_{i=1}^r i·x_i. Note that v, w, x are pairwise orthogonal and U = Span_R(v, w, x, z) has orthonormal basis {x̂, v̂, ŵ, z_0} for a suitable z_0, where û = u/||u||. Write z = (z, v̂)v̂ + (z, ŵ)ŵ + (z, x̂)x̂ + (z, z_0)z_0; as z is a unit vector and (z, z_0) ≠ 0, we get (z, v̂)² + (z, ŵ)² + (z, x̂)² < 1. By g), the lengths of v, w, and x are known. Also, (z, v)² = (z, p·v_p)² = p²/4, and similarly (z, w)² = q²/4 and (z, x)² = r²/4. Substituting these in the previous inequality gives

  2p²/(4p(p+1)) + 2q²/(4q(q+1)) + 2r²/(4r(r+1)) < 1.

This is equivalent to (p+1)⁻¹ + (q+1)⁻¹ + (r+1)⁻¹ > 1. Since (p+1)⁻¹ ≤ (q+1)⁻¹ ≤ (r+1)⁻¹ ≤ 1/2, we have 1 < 3/(r+1), and hence r < 2, so r = 1. Repeating this argument gives q < 3, so q = 1 or q = 2. If q = 2, then p < 5. On the other hand, if q = 1, then there is no restriction on p.
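The final inequality in i) can also be checked by enumeration; with exact arithmetic, the only solutions with q ≥ 2 are the arm lengths of E_6, E_7, E_8, while q = r = 1 gives the D-type family. A quick sketch (function name ours):

```python
from fractions import Fraction

def branch_solutions(pmax):
    # arm lengths p >= q >= r >= 1 with 1/(p+1) + 1/(q+1) + 1/(r+1) > 1
    return [(p, q, r)
            for p in range(1, pmax + 1)
            for q in range(1, p + 1)
            for r in range(1, q + 1)
            if Fraction(1, p + 1) + Fraction(1, q + 1) + Fraction(1, r + 1) > 1]
```

However large pmax is chosen, the q ≥ 2 solutions stay (2, 2, 1), (3, 2, 1), (4, 2, 1).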
We have proved that the Coxeter diagram of an irreducible root system is a Coxeter diagram of type A_n, B_n, C_n, D_n, E_6, E_7, E_8, F_4, or G_2; this proves that every connected Dynkin diagram occurs in Tables 2 and 3. That every diagram in Tables 2 and 3 is indeed the Dynkin diagram of some root system follows from a direct construction. We omit the proof here; see Section 3 for more details. □

3. Root systems of Lie algebras

In this section we give a very brief description of how the finite dimensional simple Lie algebras over C can be classified by Dynkin diagrams. A Lie algebra over the complex numbers is a C-vector space g together with a multiplication (Lie bracket)

  [−,−]: g × g → g,  (g, h) ↦ [g, h],

which is bilinear and for all g, h, k ∈ g satisfies [g, g] = 0 and the Jacobi identity

  [g, [h, k]] + [h, [k, g]] + [k, [g, h]] = 0.

The Lie algebra g is simple if its only ideals are {0} and g. Every g ∈ g acts on g via the linear transformation ad(g): g → g, h ↦ [g, h]; call g ∈ g semisimple if ad(g) is diagonalisable. A subalgebra h ≤ g is a Cartan subalgebra if it is abelian, consists of semisimple elements of g, and is maximal with these properties; up to conjugacy, g has a unique Cartan subalgebra.

Let h ≤ g be a Cartan subalgebra of a finite dimensional Lie algebra g over the complex numbers. Since h consists of pairwise commuting diagonalisable endomorphisms of g (via ad), there exists a C-basis of g such that, with respect to this basis, every ad(h), h ∈ h, is a diagonal matrix. Denote by h* the dual space of h, that is, the space of linear maps h → C. The root space decomposition of g with respect to h is

  g = h ⊕ ⊕_{α∈h*} g_α  where  g_α = {x ∈ g | ∀h ∈ h: [h, x] = α(h)·x}.

Let Φ ⊆ h* be the set of non-zero α ∈ h* with g_α ≠ {0}, thus g = h ⊕ ⊕_{α∈Φ} g_α. Each such g_α is 1-dimensional, spanned by a common eigenvector for each ad(h), h ∈ h. It turns out that V = Span_R(Φ) can be furnished with an inner product (−,−) such that Φ is a root system of V. (We omit the technical details here¹; proving that Φ satisfies the axioms of a root system is technical and requires significant effort.) This root system is irreducible if and only if g is simple. Moreover, there is a one-to-one correspondence between the isomorphism types of finite dimensional simple Lie algebras over the complex numbers and the isomorphism types of irreducible root systems. Thus, such Lie algebras can be classified up to isomorphism by the different types of connected Dynkin diagrams. In particular, it turns out that for each of the Dynkin diagrams in Tables 2 and 3 there exists a Lie algebra whose root system has this Dynkin diagram. This result completes the proof of Theorem 2.1. The Dynkin diagrams of type

A_n, B_n, C_n, D_n correspond to the classical Lie algebras sl_{n+1}(C), so_{2n+1}(C), sp_{2n}(C), and so_{2n}(C), respectively.

4. More general: Coxeter groups

A Coxeter group is a group generated by finitely many involutions (elements of order 2), satisfying specific relations. More precisely, a Coxeter group is a group satisfying a presentation

  ⟨w_1, ..., w_k | (w_i·w_j)^{n_ij} = 1⟩

where n_ii = 1 for all i and n_ji = n_ij ≥ 2 if i ≠ j. The corresponding Coxeter matrix is the symmetric k × k matrix with integer entries n_ij, 1 ≤ i, j ≤ k. The associated Coxeter diagram has vertices w_1, ..., w_k and, if n_ij ≥ 3, an edge between w_i and w_j; if n_ij ≥ 4, then this edge is labelled n_ij. Finite Coxeter groups can, up to isomorphism, be classified by their Coxeter diagrams; the list of possible Coxeter diagrams contains the Coxeter diagrams of the Dynkin diagrams in Tables 2 and 3. The Weyl group of a root system is a so-called reflection group (a group generated by hyperplane reflections of a Euclidean space), which is a special type of Coxeter group.
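For the Weyl group of B_2, the Coxeter matrix is [[1, 4], [4, 1]]: each generator squares to the identity and the product of the two simple reflections has order 4. This can be verified with 2×2 integer matrices (our own encoding of the two simple reflections of B_2 with respect to the base {(0, 1), (1, −1)}):

```python
def mat_mul(A, B):
    # product of 2x2 matrices given as tuples of rows
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def order(M):
    # multiplicative order of a finite-order 2x2 integer matrix
    I, P, n = ((1, 0), (0, 1)), M, 1
    while P != I:
        P, n = mat_mul(P, M), n + 1
    return n

s1 = ((1, 0), (0, -1))  # reflection in the hyperplane of alpha_1 = (0, 1)
s2 = ((0, 1), (1, 0))   # reflection in the hyperplane of alpha_2 = (1, -1)
```

Here s1·s2 is a rotation by a right angle, so (s1·s2)⁴ = 1, matching the off-diagonal entry n_12 = 4; the group generated is dihedral of order 8.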

¹ The Killing form of g is defined by κ(g, h) = tr(ad(g) ∘ ad(h)) where g, h ∈ g; it is a bilinear symmetric form, and non-degenerate if and only if g is semisimple. Also the restriction of κ to h × h is non-degenerate, hence it defines an isomorphism ϕ: h → h*, h ↦ κ(h, −). For α ∈ Φ ⊆ h* let t_α ∈ h with κ(t_α, −) = α(−). Now, if α, β ∈ Φ, then (α, β) = κ(t_α, t_β) = α(t_β) defines a real-valued inner product on V = Span_R(Φ); note that if x_β ∈ g_β, then ad(t_θ)(x_β) = β(t_θ)·x_β, which implies that (θ, θ) = κ(t_θ, t_θ) = Σ_{β∈Φ} β(t_θ)² = Σ_{β∈Φ} (β, θ)² ≥ 0 is real. If (θ, θ) = 0, then β(t_θ) = 0 for all roots β, hence t_θ = 0 and θ = 0.