
This week’s lectures

1. A tale of two matrices
2. Cluster complexes and their parametrizations
3A. Generalized associahedra
3B. Generalized Cartan matrices and Kac-Moody root systems
4A. Combinatorial frameworks for cluster algebras
4B. Coxeter groups
5. Cambrian frameworks for cluster algebras

Lecture 1: A tale of two matrices

Nathan Reading

NC State University

Cluster Algebras and Cluster Combinatorics MSRI Summer Graduate Workshop, August 2011

Introduction
Exchange matrices and cluster algebras
Cartan matrices and root systems

Introduction

The heart of a cluster algebra: an exchange matrix B
◮ Integer entries
◮ 0's on the diagonal
◮ Strictly sign-skew-symmetric
◮ Skew-symmetrizable
(A "normalized" cluster algebra with "skew-symmetrizable" B.)

The heart of a root system: a Cartan matrix A
◮ Integer entries
◮ 2's on the diagonal
◮ Strictly sign-symmetric, with off-diagonal entries $\le 0$
◮ Symmetrizable
(A Kac-Moody root system with "symmetrizable" Cartan matrix.)

Introduction (continued)

Our starting point: Strong superficial resemblance between exchange matrices B and Cartan matrices A. More precisely: Given a B, we easily recover an A: Put 2’s on diagonal; make all nonzero off-diagonal entries negative.

$$
B = \begin{pmatrix} 0 & 0 & -3 & 3 \\ 0 & 0 & 1 & 0 \\ 1 & -2 & 0 & 1 \\ -1 & 0 & -1 & 0 \end{pmatrix}
\;\longmapsto\;
A = \begin{pmatrix} 2 & 0 & -3 & -3 \\ 0 & 2 & -1 & 0 \\ -1 & -2 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{pmatrix}
$$
This is a many-to-one map. Think: B is A plus additional data (an "orientation"). We will call A the Cartan companion of B.
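To make the recipe concrete, here is a minimal Python sketch (ours, not part of the lecture); the function name `cartan_companion` is our own.

```python
def cartan_companion(B):
    """Cartan companion of an exchange matrix B: put 2's on the diagonal
    and make every off-diagonal entry -|b_ij|."""
    n = len(B)
    return [[2 if i == j else -abs(B[i][j]) for j in range(n)] for i in range(n)]

B = [[0, 0, -3, 3],
     [0, 0, 1, 0],
     [1, -2, 0, 1],
     [-1, 0, -1, 0]]
print(cartan_companion(B))
# [[2, 0, -3, -3], [0, 2, -1, 0], [-1, -2, 2, -1], [-1, 0, -1, 2]]
```

The map forgets the signs of the off-diagonal entries of B, which is why it is many-to-one.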

Introduction (concluded)

Is the resemblance coincidental?

No! It appears to be essential, and it is also useful:
◮ generalized associahedra and cluster algebras of finite type.
◮ Cambrian combinatorics.
◮ double Bruhat cells.
◮ quiver representations (via Gabriel's Theorem and generalizations).
◮ ...
This week, we'll talk about the first two of these.

Cluster algebras, in the vaguest sense

Setting: A field $\mathcal{F}$ of rational functions in $n$ variables.
Start with elements $x_1, \dots, x_n$ of $\mathcal{F}$ and some "combinatorial data."
This $(\{\text{Data}\}, x_1, \dots, x_n)$ is called the initial seed.
The rational functions $x_1, \dots, x_n$ are the cluster variables.
Mutation: an operation that takes a seed and gives a new seed.
◮ There are n "directions" for mutation.
◮ Mutation does two things: it switches out one cluster variable, replacing it with a new one, and it alters the combinatorial data. The result is a new seed.
◮ The combinatorial data tells you how to do mutations.
◮ Mutation is involutive.

Cluster algebras, in the vaguest sense (continued)

Do all possible sequences of mutations, and collect all the cluster variables which appear.

[Diagram: seeds connected by mutation edges radiating out from the initial seed, e.g. the clusters $(x_1, x_2, x_3)$, $(x_1', x_2, x_3)$, $(x_1, x_2', x_3)$, and so on.]

The cluster algebra for the given initial seed is the subalgebra of $\mathcal{F}$ generated by all cluster variables. (Subalgebra: we get to multiply and add/subtract arbitrarily, but no division.)

Exchange matrices


An exchange matrix is a skew-symmetrizable $n \times n$ matrix $B = (b_{ij})$ with integer entries. Skew-symmetrizable means that there exist positive, real $\delta_1, \dots, \delta_n$ such that
$$\delta_i b_{ij} = -b_{ji} \delta_j \quad \text{for all } i, j \in [n].$$
That is, if $D = \operatorname{diag}(\delta_1, \dots, \delta_n)$, then $DB$ is skew-symmetric:
$$DB = -B^T D.$$
This implies strict sign-skew-symmetry, which implies 0's on the diagonal.
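The definition is easy to test by machine. Below is a minimal sketch (ours, not from the lecture) that propagates the forced ratios $\delta_j/\delta_i = -b_{ij}/b_{ji}$ and then verifies $DB = -B^T D$; the function name is our own.

```python
from fractions import Fraction

def skew_symmetrizer(B):
    """Return positive rationals (delta_1, ..., delta_n) with
    delta_i * b_ij = -b_ji * delta_j for all i, j, or None if none exist."""
    n = len(B)
    delta = [None] * n
    for start in range(n):
        if delta[start] is not None:
            continue
        delta[start] = Fraction(1)
        stack = [start]
        while stack:                       # propagate forced ratios along nonzero entries
            i = stack.pop()
            for j in range(n):
                if B[i][j] == 0:
                    continue
                if B[j][i] == 0 or B[i][j] * B[j][i] > 0:
                    return None            # fails strict sign-skew-symmetry
                forced = -delta[i] * Fraction(B[i][j], B[j][i])
                if delta[j] is None:
                    delta[j] = forced
                    stack.append(j)
                elif delta[j] != forced:
                    return None            # inconsistent ratios
    ok = all(delta[i] * B[i][j] == -B[j][i] * delta[j]   # final check: D B skew-symmetric
             for i in range(n) for j in range(n))
    return delta if ok else None

B = [[0, 0, -3, 3], [0, 0, 1, 0], [1, -2, 0, 1], [-1, 0, -1, 0]]
print(skew_symmetrizer(B))   # Fractions 1, 6, 3, 3 (any positive rescaling also works)
```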

The coefficient semifield

A semifield $\mathbb{P} = (P, \oplus, \cdot)$:
$(P, \cdot)$ is an abelian ("multiplicative") group.
$\oplus$ is an "auxiliary" addition: commutative, associative, and multiplication distributes over $\oplus$.
Informally: You can always divide, but you can't subtract.
Group rings over (the multiplicative group of) $\mathbb{P}$: $\mathbb{Z}\mathbb{P}$ is the set of formal linear combinations of elements of $\mathbb{P}$, with coefficients in $\mathbb{Z}$. (Similarly, $\mathbb{Q}\mathbb{P}$.) Addition is by the obvious definition; multiplication is by linearly extending the group product. The following exercise implies that $\mathbb{Z}\mathbb{P}$ and $\mathbb{Q}\mathbb{P}$ are domains.
Exercise 1a
Show that $\mathbb{P}$ is torsion-free as a multiplicative group. Why doesn't your argument prove a similar result about fields?

The ambient field

Let $\mathcal{F}$ be (a field isomorphic to) the field of rational functions in $n$ independent variables, with coefficients in $\mathbb{Q}\mathbb{P}$. What does this mean? Elements are $\frac{p}{q}$, where $p$ and $q$ are both finite sums of terms
$$r\, y\, x_1^{e_1} \cdots x_n^{e_n}$$
where $r \in \mathbb{Q}$, $y \in \mathbb{P}$, and the $e_i$'s are nonnegative integers.

Note that $\oplus$ is not a part of the algebraic structure of $\mathcal{F}$.

Labeled seeds

A labeled seed is a triple $(x, y, B)$, where
◮ $B$ is an $n \times n$ exchange matrix,
◮ $y = (y_1, \dots, y_n)$ is a tuple of elements of $\mathbb{P}$ called coefficients, and
◮ $x = (x_1, \dots, x_n)$ is a tuple (or "cluster") of algebraically independent elements of $\mathcal{F}$ called cluster variables.
The pair $(y, B)$ is the "combinatorial data" alluded to earlier, called a Y-seed.

The most important definition is seed mutation, in which:
Using B, we define a new exchange matrix.
Using B and y, we define a new coefficient tuple.
Using B, y and x, we define a new cluster.

Matrix mutation

Let $B = (b_{ij})$ be an exchange matrix. Write $[a]_+$ for $\max(a, 0)$. The mutation of $B$ in direction $k$ is the matrix $B' = \mu_k(B)$ with
$$b'_{ij} = \begin{cases} -b_{ij} & \text{if } k \in \{i, j\}; \\ b_{ij} + \operatorname{sgn}(b_{kj})\,[\,b_{ik} b_{kj}\,]_+ & \text{otherwise.} \end{cases}$$
Example: a $4 \times 4$ exchange matrix $B$ and $\mu_3(B)$; a concrete instance is worked out in the sketch below.
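Here is a minimal sketch (ours, not from the slides) of the formula, applied to the $4 \times 4$ exchange matrix $B$ from the Cartan-companion slide; direction 3 is index 2 because Python is 0-indexed.

```python
def sgn(a):
    return (a > 0) - (a < 0)

def mutate_matrix(B, k):
    """mu_k(B): negate row and column k; elsewhere add sgn(b_kj) * [b_ik * b_kj]_+."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + sgn(B[k][j]) * max(B[i][k] * B[k][j], 0)
             for j in range(n)] for i in range(n)]

B = [[0, 0, -3, 3], [0, 0, 1, 0], [1, -2, 0, 1], [-1, 0, -1, 0]]
B3 = mutate_matrix(B, 2)             # mutate in direction 3
print(B3)                            # [[0, -6, 3, 3], [1, 0, -1, 1], [-1, 2, 0, -1], [-1, -2, 1, 0]]
print(mutate_matrix(B3, 2) == B)     # True: mutation in a fixed direction is an involution
```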

Exercise 1b
Show that $\mu_k(B)$ is an exchange matrix.
Exercise 1c
Show that matrix mutation can be equivalently defined by
$$b'_{ij} = \begin{cases} -b_{ij} & \text{if } k \in \{i, j\}; \\ b_{ij} + [-b_{ik}]_+\, b_{kj} + b_{ik}\, [b_{kj}]_+ & \text{otherwise.} \end{cases}$$

Coefficient mutation

Let $(y, B)$ be a Y-seed. The mutation of $(y, B)$ in direction $k$ is the Y-seed $(y', B') = \mu_k(y, B)$, where $B' = \mu_k(B)$ and $y'$ is the tuple $(y_1', \dots, y_n')$ given by

$$y_j' = \begin{cases} y_k^{-1} & \text{if } j = k; \\ y_j\, y_k^{[b_{kj}]_+}\, (y_k \oplus 1)^{-b_{kj}} & \text{if } j \ne k. \end{cases}$$

Recall that $[a]_+$ means $\max(a, 0)$.
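A minimal sketch (ours) of this formula. The auxiliary addition is passed in as a function, since it depends on the chosen semifield; the illustration uses the positive rationals with ordinary addition, which is one valid choice of $\mathbb{P}$. The new exchange matrix is just $\mu_k(B)$, as in the previous sketch.

```python
from fractions import Fraction

def mutate_coefficients(y, B, k, oplus):
    """y-part of the Y-seed mutation mu_k(y, B):
    y'_k = y_k^{-1}, and y'_j = y_j * y_k^{[b_kj]_+} * (y_k (+) 1)^{-b_kj} for j != k."""
    new_y = list(y)
    new_y[k] = 1 / y[k]
    for j in range(len(y)):
        if j != k:
            new_y[j] = y[j] * y[k] ** max(B[k][j], 0) * oplus(y[k], 1) ** (-B[k][j])
    return new_y

y = [Fraction(2), Fraction(1, 3)]
B = [[0, 2], [-1, 0]]
print(mutate_coefficients(y, B, 0, lambda a, b: a + b))
# [Fraction(1, 2), Fraction(4, 27)]
```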

Mutation of clusters (Exchange relations)

Let $(x, y, B)$ be a labeled seed. The mutation of $(x, y, B)$ in direction $k$ is the labeled seed $(x', y', B') = \mu_k(x, y, B)$, where $(y', B')$ is the mutation of $(y, B)$ and where $x'$ is the cluster $(x_1', \dots, x_n')$ with $x_j' = x_j$ for $j \ne k$, and
$$x_k' = \frac{y_k \prod_i x_i^{[b_{ik}]_+} + \prod_i x_i^{[-b_{ik}]_+}}{(y_k \oplus 1)\, x_k}.$$
These relations are called exchange relations.
Exercise 1d
Show that each mutation $\mu_k$ is an involution on labeled seeds.
Remark: What do the signs in $B$ really mean? All of the exponents in the exchange relations are positive! The signs in $B$ really only indicate which term in the exchange relation $x_i$ belongs in. (That's not quite true in the coefficient mutation, but it becomes true in the most important special case, as we'll discuss tomorrow.)
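As a quick illustration (ours, anticipating the example two slides ahead): take $n = 2$, $B = \begin{pmatrix} 0 & 2 \\ -1 & 0 \end{pmatrix}$, and trivial coefficients $y = (1, 1)$ in $\mathbb{P} = \{1\}$. Mutation in direction 1 exchanges $x_1$ for
$$x_1' = \frac{y_1 \prod_i x_i^{[b_{i1}]_+} + \prod_i x_i^{[-b_{i1}]_+}}{(y_1 \oplus 1)\, x_1} = \frac{1 + x_2}{x_1},$$
since $b_{21} = -1$ gives $[b_{21}]_+ = 0$ and $[-b_{21}]_+ = 1$, while $y_1 = 1$ and $y_1 \oplus 1 = 1$ in $\mathbb{P} = \{1\}$.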

The regular n-ary tree

In the vague description of cluster algebras, we said “collect all the cluster variables which appear.” We need a more precise way to “collect” them.

The $n$-regular tree $\mathbb{T}_n$ is the tree (graph with no cycles) with $n$ edges emanating from each vertex. Each edge is labeled 1, 2, ..., or $n$, with each vertex having exactly one edge of each label.
n = 1: a single edge, with label 1.
n = 2: an infinite path, with labels alternating 1 and 2.
n = 3: an infinite "fractal tree." [Figure: a portion of $\mathbb{T}_3$, with edge labels 1, 2, 3.]

Patterns (Cluster patterns and Y-patterns)

Think of Tn as a way of keeping track of (and labeling) every possible sequence of mutations, given that the mutations are involutive.

Choose a vertex t0 of Tn. Given an initial labeled seed, we will recursively define a map from vertices of Tn to labeled seeds. The vertex t will map to the labeled seed Σt = (xt , yt , Bt ).

Send $t_0$ to the initial labeled seed $(x, y, B)$. If $t$ and $t'$ are joined by an edge labeled $k$, then we require that $\Sigma_t$ and $\Sigma_{t'}$ are related by the mutation $\mu_k$.

This assignment is called a cluster pattern. The assignment $t \mapsto (y_t, B_t)$ is called a Y-pattern.

Some notation with a bit more detail

To specify individual cluster variables, coefficients, and matrix entries in the seed Σt = (xt , yt , Bt ), we will write

$x_t = (x_{1;t}, \dots, x_{n;t})$, $y_t = (y_{1;t}, \dots, y_{n;t})$, and $B_t = (b^t_{ij})$.

Thus if $t$ and $t'$ are joined by an edge labeled $k$, the requirement is

$$b^{t'}_{ij} = \begin{cases} -b^t_{ij} & \text{if } k \in \{i, j\}; \\ b^t_{ij} + \operatorname{sgn}(b^t_{kj})\,[\,b^t_{ik} b^t_{kj}\,]_+ & \text{otherwise.} \end{cases}$$

$$y_{j;t'} = \begin{cases} y_{k;t}^{-1} & \text{if } j = k; \\ y_{j;t}\, y_{k;t}^{[b^t_{kj}]_+}\, (y_{k;t} \oplus 1)^{-b^t_{kj}} & \text{if } j \ne k. \end{cases}$$

$$x_{k;t'} = \frac{y_{k;t} \prod_i x_{i;t}^{[b^t_{ik}]_+} + \prod_i x_{i;t}^{[-b^t_{ik}]_+}}{(y_{k;t} \oplus 1)\, x_{k;t}}.$$

The cluster algebra (Finally!)

Choose an initial seed $(x, y, B)$. That is: choose $x$ to be an algebraically independent $n$-tuple of elements of $\mathcal{F}$. (Almost always: we may as well choose $x = (x_1, \dots, x_n)$.) Choose arbitrary initial coefficients $y = (y_1, \dots, y_n)$.

Construct the cluster pattern with $\Sigma_{t_0} = (x, y, B)$. The cluster algebra $\mathcal{A}(x, y, B)$ associated to the initial labeled seed $(x, y, B)$ is the algebra generated by the set

$$\{\, x_{i;t} : i = 1, \dots, n \text{ and } t \text{ is a vertex of } \mathbb{T}_n \,\}.$$

Example

Let $B = \begin{pmatrix} 0 & 2 \\ -1 & 0 \end{pmatrix}$ and take $\mathbb{P} = \{1\}$, so that $y = (1, 1)$.

Alternately mutating in directions 1 and 2 (the exchange matrix simply alternates between $B$ and $-B$), the clusters form a cycle:
$$(x_1, x_2) \overset{\mu_1}{\longleftrightarrow} \Bigl(\tfrac{x_2+1}{x_1},\, x_2\Bigr) \overset{\mu_2}{\longleftrightarrow} \Bigl(\tfrac{x_2+1}{x_1},\, \tfrac{x_1^2+(x_2+1)^2}{x_1^2 x_2}\Bigr) \overset{\mu_1}{\longleftrightarrow} \Bigl(\tfrac{x_1^2+x_2+1}{x_1 x_2},\, \tfrac{x_1^2+(x_2+1)^2}{x_1^2 x_2}\Bigr) \overset{\mu_2}{\longleftrightarrow} \Bigl(\tfrac{x_1^2+x_2+1}{x_1 x_2},\, \tfrac{x_1^2+1}{x_2}\Bigr) \overset{\mu_1}{\longleftrightarrow} \Bigl(x_1,\, \tfrac{x_1^2+1}{x_2}\Bigr) \overset{\mu_2}{\longleftrightarrow} (x_1, x_2).$$

Cluster variables: $x_1,\ x_2,\ \tfrac{x_2+1}{x_1},\ \tfrac{x_1^2+(x_2+1)^2}{x_1^2 x_2},\ \tfrac{x_1^2+x_2+1}{x_1 x_2},\ \tfrac{x_1^2+1}{x_2}$.
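The following sketch (ours; Exercise 1e below asks you to do this by hand) re-runs the mutations symbolically with sympy, specializing the exchange relation to trivial coefficients.

```python
import sympy as sp

def sgn(a):
    return (a > 0) - (a < 0)

def mutate(B, xs, k):
    """Mutate the seed (B, xs) in direction k (0-indexed), with trivial coefficients:
    x_k' = (prod_i x_i^{[b_ik]_+} + prod_i x_i^{[-b_ik]_+}) / x_k."""
    n = len(B)
    pos = sp.prod([xs[i] ** max(B[i][k], 0) for i in range(n)])
    neg = sp.prod([xs[i] ** max(-B[i][k], 0) for i in range(n)])
    new_xs = list(xs)
    new_xs[k] = sp.cancel((pos + neg) / xs[k])
    new_B = [[-B[i][j] if k in (i, j)
              else B[i][j] + sgn(B[k][j]) * max(B[i][k] * B[k][j], 0)
              for j in range(n)] for i in range(n)]
    return new_B, new_xs

x1, x2 = sp.symbols('x1 x2')
B, xs = [[0, 2], [-1, 0]], [x1, x2]
cluster_variables = {x1, x2}
for k in [0, 1, 0, 1, 0, 1]:        # mu_1, mu_2, mu_1, ... around the cycle
    B, xs = mutate(B, xs, k)
    cluster_variables.update(xs)
print(len(cluster_variables))       # 6, as listed above
```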

Exercises

Exercise 1e
Verify the previous example by hand.
Exercise 1f
Redo the example with $\mathbb{P}$ a general semifield, and initial coefficients $y = (y_1, y_2)$. Find all labeled seeds $(x_t, y_t, B_t)$ in the exchange pattern. Are there still finitely many distinct labeled seeds?
Exercise 1g
Show that, for any cluster in a cluster pattern, no cluster variable occurs twice. (Hint: prove something much, much stronger. In fact, if you read the directions very strictly in a previous exercise, you should have proved the stronger thing already.)
Exercise 1h
Suppose $t$ and $t'$ are joined by an edge labeled $k$. Let $x_k$ and $x_k'$ be the cluster variables in $x_t$ and $x_{t'}$ that are related by the exchange relation. Show that $x_k \ne x_k'$. (Hint: if you did the previous exercise the way I have in mind, this becomes easy.)

Cluster algebras in the mathematical universe

◮ Defined by Fomin and Zelevinsky, 2000, motivated by the study of total positivity of matrices (and more generally, totally positive varieties).
◮ Primordial example (the pentagon recurrence) may have been known to Abel, who studied the pentagonal identity for the dilogarithm.
◮ Have since been discovered/applied in various areas, including
  • algebraic and geometric combinatorics
  • algebraic geometry (Grassmannians, tropical analogues)
  • discrete dynamical systems (rational recurrences)
  • higher Teichmüller theory
  • PDE (KP solitons)
  • Poisson geometry
  • representation theory of quivers
  • Y-systems in the thermodynamic Bethe Ansatz
◮ Main tools currently being applied include representation theory of quivers, triangulations of compact surfaces, combinatorics of root systems/Coxeter groups, and more.

Questions?

Stand and stretch. (2 minutes)

Reflections

Given a nonzero vector $\alpha$, the reflection in the hyperplane orthogonal to $\alpha$ is $\sigma_\alpha$, given by
$$\sigma_\alpha(x) = x - 2\,\frac{\langle \alpha, x\rangle}{\langle \alpha, \alpha\rangle}\,\alpha.$$

[Figure: a vector $\beta$, its reflection $\sigma_\alpha(\beta)$, and the vector $2\frac{\langle\alpha, x\rangle}{\langle\alpha,\alpha\rangle}\,\alpha$ being subtracted.]

Define $\alpha^\vee = 2\,\frac{\alpha}{\langle\alpha,\alpha\rangle}$. Then $\sigma_\alpha(x) = x - \langle \alpha^\vee, x\rangle\,\alpha$.
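A minimal numeric sketch (ours, not from the slides) of $\alpha^\vee$ and $\sigma_\alpha$, checked on two roots of $A_2$ realized inside $\mathbb{R}^3$ as differences of standard basis vectors (as in Exercise 1k below).

```python
from fractions import Fraction

def coroot(alpha):
    """alpha^vee = 2 * alpha / <alpha, alpha>."""
    norm2 = sum(a * a for a in alpha)
    return tuple(Fraction(2 * a, norm2) for a in alpha)

def reflect(alpha, x):
    """sigma_alpha(x) = x - <alpha^vee, x> * alpha."""
    c = sum(av * xi for av, xi in zip(coroot(alpha), x))
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

alpha, beta = (1, -1, 0), (0, 1, -1)            # two roots of A_2, of the form e_j - e_i
print(reflect(alpha, beta))                     # equals (1, 0, -1): again a root
print(reflect(alpha, reflect(alpha, beta)) == beta)   # True: reflections are involutions
```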

Root systems

A (finite crystallographic) root system is a collection Φ of nonzero vectors (called roots) such that:

(i) For each root $\beta$, the reflection $\sigma_\beta$ permutes $\Phi$.
(ii) Given a line $L$ through the origin, either $L \cap \Phi$ is empty or $L \cap \Phi = \{\pm\beta\}$ for some $\beta$.
(iii) $\langle \alpha^\vee, \beta \rangle \in \mathbb{Z}$ for each $\alpha, \beta \in \Phi$.
Recall $\alpha^\vee = 2\,\frac{\alpha}{\langle\alpha,\alpha\rangle}$, and $\sigma_\alpha(x) = x - \langle\alpha^\vee, x\rangle\,\alpha$. So (iii) says that reflections defined by roots are described by integer matrices in terms of any basis of roots.

If we don’t require (iii), we get a (not-necessarily “crystallographic”) finite root system. These don’t seem to have much to do with cluster algebras. We will require (iii) but we won’t adopt the adjective “crystallographic.”

Root systems of rank 2

The rank of a root system is the dimension of its linear span. These are the root systems of rank 2 (up to scaling and rotation):

[Figures: the rank-2 root systems $A_1 \times A_1$, $A_2$, $B_2$, and $G_2$.]

Exercises

Exercise 1i Verify that the collections of vectors shown on the previous slide are indeed root systems.

It is acceptable to do Exercise 1i visually, without writing anything. Condition (ii) is easy. To get (i) and (iii), you can just check that, for any $\alpha, \beta \in \Phi$, the vector $\sigma_\alpha(\beta)$ is in $\Phi$ and differs from $\beta$ by an integer multiple of $\alpha$.
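The check just described is also easy to automate. Here is a minimal sketch (ours, not from the slides) that runs it on a finite list of integer vectors; condition (ii) is still left to the eye.

```python
from fractions import Fraction

def check_roots(Phi):
    """For all alpha, beta in Phi, verify that sigma_alpha(beta) lies in Phi and
    that <alpha^vee, beta> is an integer (so beta and its image differ by an
    integer multiple of alpha).  This covers conditions (i) and (iii)."""
    S = {tuple(v) for v in Phi}
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    for alpha in S:
        for beta in S:
            c = Fraction(2 * dot(alpha, beta), dot(alpha, alpha))   # <alpha^vee, beta>
            image = tuple(b - c * a for a, b in zip(alpha, beta))
            if c.denominator != 1 or image not in S:
                return False
    return True

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]
print(check_roots(B2))                 # True
print(check_roots(B2 + [(2, 1)]))      # False: adding (2, 1) breaks the conditions
```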

Exercise 1j Show that the four root systems shown on the previous page are the only root systems of rank 2, up to scaling and rotation.

To get you started on Exercise 1j, note that up to scaling and rotation, we may as well have the root $\alpha = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ in $\Phi$. What vectors $\beta$ have the property that $\langle\alpha^\vee, \beta\rangle$ and $\langle\beta^\vee, \alpha\rangle$ are both integers?

A rank-3 example

[Figure: the $B_3$ root system.]

More rank-3 examples

[Figures: the $A_3$ and $C_3$ root systems.]

(A3, B3 and C3 are the only “irreducible” examples.)

Exercises

Exercise 1k
1. Show that $\{\, e_j - e_i : i, j \in [n+1],\ i \ne j \,\}$ is a rank-$n$ root system. (It is called $A_n$.)
2. Show that $\{\, \pm e_i : i \in [n] \,\} \cup \{\, \pm e_j \pm e_i : i, j \in [n],\ i < j \,\}$ is a rank-$n$ root system. (It is called $B_n$.)
3. Show that $\{\, \pm 2e_i : i \in [n] \,\} \cup \{\, \pm e_j \pm e_i : i, j \in [n],\ i < j \,\}$ is a rank-$n$ root system. (It is called $C_n$.)
4. Show that $\{\, \pm e_j \pm e_i : i, j \in [n],\ i < j \,\}$ is a rank-$n$ root system. (It is called $D_n$.)
These will be repetitive, so re-use your work. A different one:
Exercise 1l
Show that the following is a rank-4 root system. (It is called $F_4$.)
$$\{\, \pm e_j \pm e_i : i, j \in [4],\ i < j \,\} \cup \{\, \pm e_i : i \in [4] \,\} \cup \bigl\{\, \tfrac{1}{2}(\pm e_1 \pm e_2 \pm e_3 \pm e_4) \,\bigr\}$$

Why root systems? I

Root systems were defined and classified in the work of Cartan and Killing, starting in the 1890’s, as part of the classification of complex simple Lie algebras (and then of simple Lie groups). This eventually led to Weyl’s study of the associated reflection groups. The point:

finite-dimensional complex simply-connected semisimple Lie groups
↕
finite-dimensional complex semisimple Lie algebras
↕
root systems

We’ll see that root systems are completely classified (the famous Cartan-Killing classification).

Why root systems? II

Finite reflection groups (finite transformation groups generated by reflections) correspond to not-necessarily-crystallographic root systems (i.e. with condition (iii) omitted). The symmetry groups of regular polytopes are reflection groups, so primordial examples (symmetry groups of the Platonic solids) were known to the Greeks, without the notion of a "group." Finite reflection groups appear, around 1890, in Kantor's classification of subgroups of the Cremona group of birational transformations of the complex projective plane. Finite reflection groups correspond to the abstractly defined finite Coxeter groups. These were defined and classified by Coxeter in 1935. The classification extends the Cartan-Killing classification. Reflection groups appear in many places, e.g. quadratic/modular forms, low-dimensional topology/singularity theory, etc. More later... For now, some pictures.

Finite Reflection group examples

[Figures: the finite reflection groups of types $A_2$ and $I_2(5)$.]

Finite Reflection group examples (continued)

[Figures: the finite reflection groups of types $B_3$ and $H_3$.]

Positive and negative roots

Choose a linear functional not zero on any root. Why can we do this? We just need to avoid finitely many hyperplanes in dual space. By this choice, the functional is strictly positive or strictly negative on every root. So we’ll use the functional to decompose Φ into positive and negative roots.

$$\Phi = \Phi_+ \cup \Phi_- \quad \text{(disjoint union)}$$
This choice is unique up to symmetry. Why? Dual hyperplanes to $\Phi$ are a very symmetric collection.

Simple roots

Define $\Pi$ to be the unique minimal set of positive roots such that any positive root is in the positive linear span of $\Pi$. The roots in $\Pi$ are called the simple roots. Why can we do this? This is basic convexity theory. We're asking for the extreme rays of a finitely generated cone.
Some other facts:
◮ The simple roots are linearly independent.
◮ The number of simple roots equals the rank of $\Phi$.
◮ The angles between simple roots are never acute.
◮ $\Pi$ determines $\Phi$: any root in $\Phi$ can be obtained from some root in $\Pi$ by applying a sequence of reflections with respect to a root in $\Pi$.
These facts are not hard, but need proof.
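Both constructions are easy to experiment with. The sketch below (ours, not from the slides) picks a functional that is generic enough for small integer root systems, and then extracts the simple roots as the positive roots that are not a sum of two positive roots; that criterion is equivalent to the extreme-ray description for crystallographic root systems, but we state it here without proof.

```python
from itertools import combinations

def positive_and_simple(roots):
    """Split a root system (given as integer tuples) into positive roots using a
    linear functional, and extract the simple roots."""
    n = len(roots[0])
    weights = [1000 ** i for i in range(n)]     # generic enough for small integer entries
    f = lambda v: sum(w * x for w, x in zip(weights, v))
    assert all(f(r) != 0 for r in roots), "functional vanishes on a root; change weights"
    positive = [tuple(r) for r in roots if f(r) > 0]
    sums = {tuple(p + q for p, q in zip(u, v)) for u, v in combinations(positive, 2)}
    simple = [r for r in positive if r not in sums]
    return positive, simple

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]
pos, simp = positive_and_simple(B2)
print(pos)    # [(1, 0), (0, 1), (1, 1), (-1, 1)]
print(simp)   # [(1, 0), (-1, 1)]
```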

Exercise

Exercise 1m For each part of Exercise 1k, we propose a set of positive roots below. Verify that this is a valid choice. Also, find the corresponding set of simple roots.

1. $\{\, e_j - e_i : i, j \in [n+1],\ i < j \,\}$
2. $\{\, e_i : i \in [n] \,\} \cup \{\, e_j \pm e_i : i, j \in [n],\ i < j \,\}$
3. $\{\, 2e_i : i \in [n] \,\} \cup \{\, e_j \pm e_i : i, j \in [n],\ i < j \,\}$
4. $\{\, e_j \pm e_i : i, j \in [n],\ i < j \,\}$

Examples: Positive and simple roots

[Figures: positive and simple roots $\alpha_1, \alpha_2$ for the rank-2 root systems $A_1 \times A_1$, $A_2$, $B_2$, and $G_2$.]

Examples: Positive and simple roots

[Figures: positive and simple roots $\alpha_1, \alpha_2, \alpha_3$ for the rank-3 root systems $B_3$ and $C_3$.]

Cartan matrices (Finally!)

The Cartan matrix for a root system $\Phi$ is the matrix $\bigl(\langle \alpha^\vee, \beta \rangle\bigr)_{\alpha, \beta \in \Pi}$. The entries are the coefficients that show up when you reflect a simple root with respect to another simple root.
The Cartan matrix completely determines $\Phi$. Why?
Because if you know $\langle \alpha^\vee, \beta \rangle$ and $\langle \beta^\vee, \alpha \rangle$, then you know the angle between $\alpha$ and $\beta$ and the relative lengths of $\alpha$ and $\beta$. Therefore, you know $\Pi$ up to scaling and rotation, which determines $\Phi$ up to scaling and rotation.
This isn't really quite right, but it is the right intuition, and it's wrong in a very precise, limited way, that we'll see next.
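A direct translation of the definition (ours, not from the slides), assuming the convention that the $(i, j)$ entry is $\langle \alpha_i^\vee, \alpha_j \rangle = 2\langle\alpha_i, \alpha_j\rangle / \langle\alpha_i, \alpha_i\rangle$; the example uses the $B_2$ simple roots found in the earlier sketch.

```python
from fractions import Fraction

def cartan_matrix(simple_roots):
    """A[i][j] = <alpha_i^vee, alpha_j> = 2 <alpha_i, alpha_j> / <alpha_i, alpha_i>."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[Fraction(2 * dot(ai, aj), dot(ai, ai)) for aj in simple_roots]
            for ai in simple_roots]

print(cartan_matrix([(1, 0), (-1, 1)]))
# [[2, -2], [-1, 2]]  (entries come out as Fractions with denominator 1)
```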

Cartan matrices of rank-two root systems

[Figures: the rank-2 root systems with simple roots $\alpha_1, \alpha_2$ and their Cartan matrices.]

$$A_1 \times A_1: \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \qquad A_2: \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} \qquad B_2: \begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix} \qquad G_2: \begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}$$

Cartan matrices (continued)


When simple roots are orthogonal, the Cartan matrix reflects that, but may not determine their relative lengths. When two simple roots $\alpha_1, \alpha_2 \in \Pi$ are not orthogonal, how does the Cartan matrix determine their angle and relative lengths?

$$\langle\alpha^\vee, \beta\rangle = 2\,\frac{\langle\alpha, \beta\rangle}{\langle\alpha, \alpha\rangle}, \qquad \langle\beta^\vee, \alpha\rangle = 2\,\frac{\langle\alpha, \beta\rangle}{\langle\beta, \beta\rangle}.$$

$$\frac{\langle\alpha, \alpha\rangle}{\langle\beta, \beta\rangle} = \frac{\langle\beta^\vee, \alpha\rangle}{\langle\alpha^\vee, \beta\rangle}.$$

The angle $\theta$ between $\alpha$ and $\beta$ is non-acute and satisfies
$$\cos^2(\theta) = \frac{\langle\alpha, \beta\rangle^2}{\langle\alpha, \alpha\rangle\,\langle\beta, \beta\rangle} = \frac{1}{4}\,\langle\beta^\vee, \alpha\rangle\,\langle\alpha^\vee, \beta\rangle.$$
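As a quick worked check (ours, not on the original slide), take the $B_2$ Cartan matrix above, so $\langle\alpha_1^\vee, \alpha_2\rangle = -2$ and $\langle\alpha_2^\vee, \alpha_1\rangle = -1$:
$$\cos^2(\theta) = \tfrac{1}{4}(-1)(-2) = \tfrac{1}{2} \;\Longrightarrow\; \theta = \tfrac{3\pi}{4}, \qquad \frac{\langle\alpha_1, \alpha_1\rangle}{\langle\alpha_2, \alpha_2\rangle} = \frac{\langle\alpha_2^\vee, \alpha_1\rangle}{\langle\alpha_1^\vee, \alpha_2\rangle} = \frac{-1}{-2} = \frac{1}{2},$$
so $\alpha_2$ is longer than $\alpha_1$ by a factor of $\sqrt{2}$, matching the $B_2$ picture and the Dynkin-diagram conventions below.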

Cartan matrices (concluded)

A root system $\Phi$ is reducible if it is the disjoint union of two subsets $\Phi_1$ and $\Phi_2$ such that every root in $\Phi_1$ is orthogonal to every root in $\Phi_2$. In this case, $\Phi_1$ and $\Phi_2$ are both root systems, and we can obtain simple roots for $\Phi$ by taking the union of simple roots $\Pi_1$ for $\Phi_1$ and $\Pi_2$ for $\Phi_2$. The correct statement is: The Cartan matrix determines angles. The Cartan matrix determines relative lengths within each irreducible component of $\Phi$. Furthermore, the Cartan matrix determines the irreducible components. We find them by simultaneously permuting rows and columns to put the Cartan matrix in block-diagonal form. For example:

$$\begin{pmatrix} 2 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 2 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} \;\text{``=''}\; A_2 \times A_1$$

Some properties of Cartan matrices

Let $A = (a_{ij})$ be an $n \times n$ Cartan matrix. Then
(i) $a_{ii} = 2$ for every $i \in [n]$;
(ii) $a_{ij} \le 0$ for $i \ne j$;
(iii) $a_{ij} = 0$ if and only if $a_{ji} = 0$;
(iv) there exist positive, real $\delta_1, \dots, \delta_n$ such that
$$\delta_i a_{ij} = a_{ji} \delta_j \quad \text{for all } i, j \in [n].$$
Recall that $A$ is the matrix $[\,\langle \alpha_i^\vee, \alpha_j \rangle\,]_{\alpha_i, \alpha_j \in \Pi}$.
Condition (i): By definition $\alpha^\vee = 2\,\frac{\alpha}{\langle\alpha,\alpha\rangle}$.
Condition (ii): Angles between simple roots are never acute.
Condition (iii): Scaling doesn't affect orthogonality.
Condition (iv): Take $\delta_i = \frac{\langle\alpha_i, \alpha_i\rangle}{2}$. Then $\delta_i a_{ij}$ and $a_{ji}\delta_j$ both equal $\langle\alpha_i, \alpha_j\rangle$. Condition (iv) says that $A$ is symmetrizable.

Dynkin diagrams for Cartan matrices

[Figures: the rank-2 root systems with simple roots $\alpha_1, \alpha_2$, their Cartan matrices, and the corresponding Dynkin diagrams.]

$$A_1 \times A_1: \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \qquad A_2: \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} \qquad B_2: \begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix} \qquad G_2: \begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}$$

Dynkin diagrams for Cartan matrices (continued)

Vertices ↔ Simple roots
Edges or non-edges:
◮ no edge if the roots are orthogonal
◮ a single edge if the angle is $\frac{2\pi}{3}$ (same length)
◮ a double edge if the angle is $\frac{3\pi}{4}$ (length ratio is $\sqrt{2}$)
◮ a triple edge if the angle is $\frac{5\pi}{6}$ (length ratio is $\sqrt{3}$)

Convention: Arrows point downhill (from longer root to shorter root).

Note: Irreducible components of root systems correspond to connected components of Dynkin diagrams.
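A small sketch (ours, not from the slides) that reads this data off a Cartan matrix of finite type. The rule "number of edges between $i$ and $j$ equals $a_{ij}a_{ji}$" follows from the $\cos^2$ formula above, and the length-ratio rule from the previous slide; the function name is our own.

```python
def dynkin_edges(A):
    """Edges of the Dynkin diagram of a finite-type Cartan matrix A, as triples
    (i, j, multiplicity), listing the longer root first whenever the edge is multiple.
    Multiplicity = a_ij * a_ji; length ratio <a_i,a_i>/<a_j,a_j> = a_ji / a_ij."""
    n = len(A)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            m = A[i][j] * A[j][i]
            if m == 0:
                continue                      # orthogonal simple roots: no edge
            longer_first = abs(A[j][i]) >= abs(A[i][j])
            edges.append((i, j, m) if longer_first else (j, i, m))
    return edges

print(dynkin_edges([[2, -2], [-1, 2]]))                       # [(1, 0, 2)]: B2, alpha_2 long
print(dynkin_edges([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))    # [(0, 1, 1), (1, 2, 1)]: A3
```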

Dynkin diagrams for irreducible finite root systems

What are the possible Cartan matrices for finite root systems (AKA Cartan matrices of finite type)? Exactly those whose Dynkin diagrams have connected components on the following list. This is the famous Cartan-Killing classification.

$A_n$ ($n \ge 1$): a path of $n$ vertices, all single edges.

$B_n$ ($n \ge 2$): a path of $n$ vertices with a double edge at one end.

$C_n$ ($n \ge 3$): a path of $n$ vertices with a double edge at one end, the arrow on the double edge pointing the other way from $B_n$.

$D_n$ ($n \ge 4$): a path of $n-2$ vertices with two extra vertices attached to one end (a fork).

More on next page...

Dynkin diagrams for irreducible finite root systems (continued)

$E_6$, $E_7$, $E_8$: a path of $5$, $6$, or $7$ vertices, all single edges, with one extra vertex attached to the third vertex from one end.

$F_4$: a path of $4$ vertices with a double edge in the middle.

$G_2$: two vertices joined by a triple edge.


Exercises, in order of priority

There are more exercises than you can be expected to complete in a day. Please work on them in the order listed. Exercises on the first line constitute a minimum goal. It would be profitable to work all of the exercises eventually.

1b, 1d, 1e, 1i, 1k.1, 1k.2–4, 1m, 1f, 1g, 1h, 1j, 1l, 1c, 1a.
