From root systems to Dynkin diagrams

Heiko Dietrich

Abstract. We describe root systems and their associated Dynkin diagrams; these notes follow closely the book of Erdmann & Wildon ("Introduction to Lie algebras", 2006) and lecture notes of Willem de Graaf (Italy). We briefly describe how root systems arise from Lie algebras.

1. Root systems

1.1. Euclidean spaces. Let $V$ be a finite-dimensional Euclidean space, that is, a finite-dimensional $\mathbb{R}$-space with an inner product $(-,-)\colon V \times V \to \mathbb{R}$ which is bilinear, symmetric, and positive definite. The length of $v \in V$ is $\|v\| = \sqrt{(v,v)}$; the angle $\alpha$ between two non-zero $v, w \in V$ is defined by $\cos\alpha = \frac{(v,w)}{\|v\|\,\|w\|}$.

If $v \in V$ is non-zero, then the hyperplane perpendicular to $v$ is $H_v = \{w \in V \mid (w,v) = 0\}$. The reflection in $H_v$ is the linear map $s_v\colon V \to V$ which maps $v$ to $-v$ and fixes every $w \in H_v$; recall that $V = H_v \oplus \mathrm{Span}_{\mathbb{R}}(v)$, hence
$$ s_v\colon V \to V, \qquad w \mapsto w - \frac{2(w,v)}{(v,v)}\,v. $$
In the following, for $v, w \in V$ we write
$$ \langle w, v \rangle = \frac{2(w,v)}{(v,v)}; $$
note that $\langle -,- \rangle$ is linear only in the first component. An important observation is that each $s_u$ leaves the inner product invariant, that is, if $v, w \in V$, then $(s_u(v), s_u(w)) = (v,w)$. We use this notation throughout these notes.

1.2. Abstract root systems.

Definition 1.1. A finite subset $\Phi \subseteq V$ is a root system for $V$ if the following hold:
(R1) $0 \notin \Phi$ and $\mathrm{Span}_{\mathbb{R}}(\Phi) = V$,
(R2) if $\alpha \in \Phi$ and $\lambda\alpha \in \Phi$ with $\lambda \in \mathbb{R}$, then $\lambda \in \{\pm 1\}$,
(R3) $s_\alpha(\beta) \in \Phi$ for all $\alpha, \beta \in \Phi$,
(R4) $\langle \alpha, \beta \rangle \in \mathbb{Z}$ for all $\alpha, \beta \in \Phi$.

The rank of $\Phi$ is $\dim(V)$. Note that each $s_\alpha$ permutes $\Phi$, and if $\alpha \in \Phi$, then $-\alpha \in \Phi$.

Lemma 1.2. If $\alpha, \beta \in \Phi$ with $\alpha \neq \pm\beta$, then $\langle\alpha,\beta\rangle\langle\beta,\alpha\rangle \in \{0,1,2,3\}$.

Proof. By (R4), the product in question is an integer. If $v, w \in V \setminus \{0\}$, then $(v,w)^2 = \|v\|^2\|w\|^2\cos^2(\theta)$, where $\theta$ is the angle between $v$ and $w$; thus $\langle\alpha,\beta\rangle\langle\beta,\alpha\rangle = 4\cos^2(\theta) \le 4$. If $\cos^2(\theta) = 1$, then $\theta$ is a multiple of $\pi$, so $\alpha$ and $\beta$ are linearly dependent, a contradiction.

If $\alpha, \beta \in \Phi$ with $\alpha \neq \pm\beta$ and $\|\beta\| \ge \|\alpha\|$, then $|\langle\beta,\alpha\rangle| \ge |\langle\alpha,\beta\rangle|$; by the previous lemma, all possibilities are listed in Table 1.

$\langle\alpha,\beta\rangle$   $\langle\beta,\alpha\rangle$   $\theta$      $(\beta,\beta)/(\alpha,\alpha)$
 0     0     $\pi/2$     -
 1     1     $\pi/3$     1
-1    -1     $2\pi/3$    1
 1     2     $\pi/4$     2
-1    -2     $3\pi/4$    2
 1     3     $\pi/6$     3
-1    -3     $5\pi/6$    3

Table 1. Angles between root vectors.

Let $\alpha, \beta \in \Phi$ with $(\alpha,\beta) \neq 0$ and $(\beta,\beta) \ge (\alpha,\alpha)$. Recall that $s_\beta(\alpha) = \alpha - \langle\alpha,\beta\rangle\beta \in \Phi$, and $\langle\alpha,\beta\rangle = \pm 1$, depending on whether the angle between $\alpha$ and $\beta$ is obtuse or acute, see Table 1. Thus, if the angle is $> \pi/2$, then $\alpha + \beta \in \Phi$; if the angle is $< \pi/2$, then $\alpha - \beta \in \Phi$.

Example 1.3. We construct all root systems $\Phi$ of $\mathbb{R}^2$. Suppose $\alpha \in \Phi$ is of shortest length and choose $\beta \in \Phi$ such that the angle $\theta \in \{\pi/2, 2\pi/3, 3\pi/4, 5\pi/6\}$ between $\alpha$ and $\beta$ is as large as possible. This gives root systems of type $A_1 \times A_1$, $A_2$, $B_2$, and $G_2$, respectively:

[Figure: the four rank-2 root systems $A_1 \times A_1$, $A_2$, $B_2$, and $G_2$, each drawn with the chosen roots $\alpha$ and $\beta$.]
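For concreteness, here is one standard coordinate realization of the $A_2$ case (the specific vectors are chosen here only for illustration): take $V = \mathbb{R}^2$ with the usual inner product and
$$ \alpha = (1,0), \qquad \beta = \Bigl(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\Bigr), \qquad \Phi = \{\pm\alpha,\ \pm\beta,\ \pm(\alpha+\beta)\}. $$
Then $(\alpha,\alpha) = (\beta,\beta) = 1$ and $(\alpha,\beta) = -\tfrac{1}{2}$, so $\langle\alpha,\beta\rangle = \langle\beta,\alpha\rangle = -1$ and $\theta = 2\pi/3$, matching the row of Table 1 with $\theta = 2\pi/3$. Since the angle is obtuse, $\alpha + \beta \in \Phi$; indeed $s_\alpha(\beta) = \beta - \langle\beta,\alpha\rangle\alpha = \alpha + \beta$ and $s_\beta(\alpha) = \alpha + \beta$, and a short computation shows that $s_\alpha$, $s_\beta$, and $s_{\alpha+\beta}$ permute $\Phi$ and that all $\langle\gamma,\delta\rangle$ with $\gamma, \delta \in \Phi$ are integers, so (R1)-(R4) hold.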
Definition 1.4. A root system $\Phi$ is irreducible if it cannot be partitioned into two non-empty subsets $\Phi_1$ and $\Phi_2$ such that $(\alpha,\beta) = 0$ for all $\alpha \in \Phi_1$ and $\beta \in \Phi_2$.

Lemma 1.5. If $\Phi$ is a root system, then $\Phi = \Phi_1 \cup \ldots \cup \Phi_k$, where each $\Phi_i$ is an irreducible root system for the space $V_i = \mathrm{Span}_{\mathbb{R}}(\Phi_i) \le V$; in particular, $V = V_1 \oplus \ldots \oplus V_k$.

Proof. For $\alpha, \beta \in \Phi$ write $\alpha \sim \beta$ if and only if there exist $\gamma_1, \ldots, \gamma_s \in \Phi$ with $\alpha = \gamma_1$, $\beta = \gamma_s$, and $(\gamma_i, \gamma_{i+1}) \neq 0$ for $1 \le i < s$; then $\sim$ is an equivalence relation on $\Phi$. Let $\Phi_1, \ldots, \Phi_k$ be the equivalence classes of this relation. Clearly, (R1), (R2), and (R4) are satisfied for each $\Phi_k$ and $V_k = \mathrm{Span}_{\mathbb{R}}(\Phi_k)$. To prove (R3), consider $\alpha, \beta \in \Phi_k$: if $(\alpha,\beta) = 0$, then $s_\alpha(\beta) = \beta \in \Phi_k$; if $(\alpha,\beta) \neq 0$, then $(\alpha, s_\alpha(\beta)) \neq 0$ since $s_\alpha$ leaves the inner product invariant, thus $s_\alpha(\beta) \sim \alpha$ and $s_\alpha(\beta) \in \Phi_k$. In particular, $\Phi_k$ is an irreducible root system of $V_k$. Clearly, every root lies in some $V_i$, and the sum of the $V_i$ spans $V$. If $v_1 + \ldots + v_k = 0$ with each $v_i \in V_i$, then, since roots in different classes are orthogonal, $0 = (v_1 + \ldots + v_k, v_j) = (v_j, v_j)$ for all $j$, that is, $v_j = 0$ for all $j$.

1.3. Bases of root systems. Let $\Phi$ be a root system for $V$.

Definition 1.6. A subset $\Pi \subseteq \Phi$ is a base (or root basis) for $\Phi$ if the following hold:
(B1) $\Pi$ is a vector space basis for $V$,
(B2) every $\alpha \in \Phi$ can be written as $\alpha = \sum_{\beta \in \Pi} k_\beta \beta$ with either all $k_\beta \in \mathbb{N}$ or all $k_\beta \in -\mathbb{N}$.

A root $\alpha \in \Phi$ is positive with respect to $\Pi$ if the coefficients in (B2) are positive; otherwise $\alpha$ is negative. The roots in $\Pi$ are called simple roots; the reflections $s_\beta$ with $\beta \in \Pi$ are simple reflections.

We need the notion of root orders to prove that every root system has a base.

Definition 1.7. A root order is a partial order "$>$" on $V$ such that every $\alpha \in \Phi$ satisfies $\alpha > 0$ or $-\alpha > 0$, and "$>$" is compatible with addition and scalar multiplication.

Lemma 1.8. Let $\Phi$ be a root system of $V$.
a) Let $\{v_1, \ldots, v_\ell\}$ be a basis of $V$ and write $v > 0$ if and only if $v = \sum_{i=1}^{\ell} k_i v_i$ and the first non-zero $k_i$ is positive; define $v > w$ if $v - w > 0$. Then "$>$" is the lexicographic root order with respect to the ordered basis $\{v_1, \ldots, v_\ell\}$.
b) Choose $v_0 \in V$ outside the (finitely many) hyperplanes $H_\alpha$, $\alpha \in \Phi$. For $u, v \in V$ write $u > v$ if and only if $(u, v_0) > (v, v_0)$. Then "$>$" is the root order defined by $v_0$.

Let "$>$" be any root order; call $\alpha \in \Phi$ positive if $\alpha > 0$, and negative otherwise; $\alpha$ is simple if $\alpha > 0$ and it cannot be written as a sum of two positive roots. The proof of the following theorem shows that the set $\Pi$ of all simple roots is a base for $\Phi$:

Theorem 1.9. Every root system has a base.

Proof. Let "$>$" be a root order with set of simple roots $\Pi$; we show that $\Pi$ is a base. First, let $\alpha, \beta \in \Pi$ with $\alpha \neq \beta$: if $\alpha - \beta$ is a positive root, then $\alpha = (\alpha - \beta) + \beta$ and $\alpha$ is not simple; if $\alpha - \beta$ is a negative root, then $\beta = -(\alpha - \beta) + \alpha$ is not simple; thus $\alpha - \beta \notin \Phi$. This implies that the angle between $\alpha$ and $\beta$ is at least $\pi/2$, hence $(\alpha,\beta) \le 0$, as seen in Table 1.

Second, $\Pi$ is linearly independent: if not, then there exist pairwise distinct $\alpha_1, \ldots, \alpha_k \in \Pi$ with $\alpha_1 = \sum_{i=2}^{k} k_i\alpha_i = \beta_+ + \beta_-$, where $\beta_+$ and $\beta_-$ are the sums of all $k_i\alpha_i$ with $k_i$ positive and negative, respectively. By construction, $(\beta_+, \beta_-) \ge 0$ since each $(\alpha_i, \alpha_j) \le 0$. Note that $\beta_+ \neq 0$ since $\alpha_1 > 0$, thus $(\beta_+, \beta_+) > 0$ and $(\alpha_1, \beta_+) = (\beta_+, \beta_+) + (\beta_-, \beta_+) > 0$. On the other hand, $(\alpha_1, \alpha_j) \le 0$ and the definition of $\beta_+$ imply that $(\alpha_1, \beta_+) \le 0$, a contradiction.

Finally, we show that every positive root $\alpha \in \Phi$ is a linear combination of $\Pi$ with coefficients in $\mathbb{N}$. Clearly, this holds if $\alpha \in \Pi$. If $\alpha \notin \Pi$, then $\alpha = \beta + \gamma$ for positive roots $\beta, \gamma \in \Phi$ with $\alpha > \beta, \gamma$. By induction, $\beta$ and $\gamma$ are linear combinations of $\Pi$ with coefficients in $\mathbb{N}$.
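To make this concrete, return to the explicit $A_2$ system above, with $\alpha = (1,0)$ and $\beta = \bigl(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\bigr)$; the vector $v_0 = (1,2)$ is one convenient choice lying outside all hyperplanes $H_\gamma$, $\gamma \in \Phi$. With respect to the root order defined by $v_0$,
$$ (\alpha, v_0) = 1, \qquad (\beta, v_0) = \sqrt{3} - \tfrac{1}{2}, \qquad (\alpha+\beta, v_0) = \sqrt{3} + \tfrac{1}{2} $$
are all positive, so the positive roots are $\alpha$, $\beta$, and $\alpha+\beta$. Of these, $\alpha+\beta$ is a sum of two positive roots while $\alpha$ and $\beta$ are not, so the simple roots are $\Pi = \{\alpha, \beta\}$; this is indeed a base, since every root of $\Phi$ is $\pm(k_1\alpha + k_2\beta)$ with $k_1, k_2 \in \{0,1\}$ not both zero. Note also that $(v_0, \alpha) > 0$ and $(v_0, \beta) > 0$, as in part b) of the corollary below.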
Corollary 1.10. Let $\Pi = \{\alpha_1, \ldots, \alpha_\ell\}$ be a root basis of $\Phi$.
a) If "$>$" is the lexicographic order on $V$ with respect to the basis $\Pi$ of $V$, then $\Pi$ is the set of simple roots with respect to "$>$"; thus, every base is defined by some root order.
b) If $v_0 \in V$ with $(v_0, \alpha) > 0$ for all $\alpha \in \Pi$, then $\Pi$ is the set of simple roots with respect to the root order defined by $v_0$.

Proof. a) This is obvious. b) Denote by "$>$" the root order defined by $v_0$. Let $\alpha_j \in \Pi$; clearly, $\alpha_j > 0$. Suppose $\alpha_j = \beta + \gamma$ for some $\beta, \gamma \in \Phi$ with $\beta, \gamma > 0$.