
Representations of sln(C), the Weyl Group, the Root Lattice, and Linear Braids

Kie Seng Nge Supervisor: Assoc. Prof. Anthony Licata

October, 2017

A thesis submitted for the degree of Bachelor of Science (Advanced) (Honours) of the Australian National University

Dedicated to my supportive parents.

Declaration

Except where otherwise stated, this thesis is my own work prepared under the supervision of Assoc. Prof. Anthony Licata.

Kie Seng Nge

Acknowledgements

First and foremost, I would like to give thanks to the almighty God for the strength and wisdom He has given me throughout the completion of this paper.

To my supervisor, Anthony Licata, I would like to express my heartfelt thanks for the opportunity to work with him on my Honours thesis. His unceasing guidance and insight in his research area have kept me motivated throughout the learning process. The inspiration and knowledge from him have been invaluable. Not forgetting, thanks to David Smyth, who assisted me with the relevant background reading during Anthony's absence.

Besides that, I would like to take this opportunity to thank our Honours convenor, Joan Licata, for coordinating the Honours program, and our Honours conference coordinator, Bryan Wang, for organising the mandatory Honours conference. I also want to thank Arnab Saha, who is always willing to share his knowledge of number theory and his career experience with me, helping me to map out my future path.

In addition, I would like to express my utmost gratitude to all lecturers who have mentored me throughout my pursuit of Bachelor of Science (Advanced) (Honours) in Pure Mathematics. They are John Urbas, Vigleik Angeltveit, Jim Borger, Amnon Neeman, Scott Morrison, Brett Parker, Jesse Burke, Bregje Pauwels, Ben Andrews, Qi-Rui Li, Martin Ward, Michael Norrish, Tim Trudgian, Pierre Portal, Vladimir Mangazeev, Dayal Wickramasinghe, Griffith Ware, Peter Bouwknegt and Linda Stals.

Many thanks to postdoctoral fellow Changwei Xiong and former Honours student Suo Jun Tan, whom I consulted whenever I had difficulty with the study content. Furthermore, I would like to thank my best friend, James Bailie, as well as other peers – An Ran Chen, Edmund Heng, and Joanne Zheng – for constantly exchanging ideas with me and keeping every semester enjoyable. There were also times when we struggled, but these memories are unique and will last forever.

Apart from the above, my family deserve a special mention. They are my powerful

backing; they are always by my side, whether I am up or down. Without their unwavering support, I might not have made it this far in both academics and life in general. Also, thank you to my family members in Christ for their moral and spiritual support.

Last but not least, I thank my beloved country, Malaysia for recognizing my abilities and accepting me as a MybrainSc scholar, which has rendered me a burden-free academic life here by settling my expenses and tuition fees at the Australian National University (ANU).

Contents

Acknowledgements ...... vii

Introduction ...... 1

1 Representations of the Special Linear Lie Algebra sln(C) ...... 5
1.1 What is a Representation of sln(C)? ...... 6
1.1.1 Definitions ...... 6
1.1.2 Semisimplicity of sln(C) ...... 9
1.1.3 General theory of semisimple Lie algebra ...... 11
1.2 The Adjoint Representation ...... 12
1.2.1 Cartan decomposition of sln(C) ...... 13
1.2.2 Representation of sl2(C) ...... 14
1.2.3 The highest weight vector in a representation ...... 17
1.3 Full Picture of a Representation ...... 19
1.3.1 Finding all weights ...... 20
1.3.2 Multiplicity of the weights in an irreducible representation ...... 23
1.4 Classifying the Finite Dimensional Irreducible Representations ...... 26

2 Weyl Group Acting on the Root Lattice ...... 29
2.1 ...... 30
2.1.1 ...... 30
2.1.2 Properties of roots ...... 32
2.2 Reflections in the Root Lattice ...... 36
2.2.1 Definition of a Weyl group ...... 36
2.2.2 An isomorphism between W(sln(C)) and Sn ...... 37
2.3 An example – S4 ...... 39

3 Combinatorics in the Weyl Group of sln(C) ...... 43
3.1 Coxeter System ...... 44
3.1.1 Definitions ...... 44
3.1.2 A bijection between reflections and roots ...... 45
3.1.3 Minimal length expressions ...... 49
3.2 Expressions in Weyl Group of sln(C) ...... 53
3.2.1 Reduced expression in W(sln(C)) ...... 53
3.2.2 The longest element in Weyl group of sln(C) ...... 56
3.3 Root Sets for Expressions in the Weyl Group ...... 57

4 From the Weyl Group of sln(C) to the Braid Group Bn ...... 61
4.1 Definitions ...... 61
4.2 Positive and Negative Root Sets for Elements of the Braid Group ...... 63
4.3 Separated Root Sets and Linear Braids ...... 65

5 Presentations for the Braid Group and their Length Functions ...... 75
5.1 The Abstract Group, B̃n ...... 76
5.2 The Abstract Group, B̂n ...... 77
5.3 Relation between ℓlinear and ℓGarside ...... 80

Bibliography ...... 82

Introduction

Chapter 1 is devoted to studying the finite dimensional representations of the semisimple Lie algebra sln(C) systematically. First, we define a representation of sln(C). Next, we study the case of sl2(C) thoroughly. The finite dimensional irreducible representations of sl2(C) act as building blocks for the representations of sln(C). Subsequently, we prove the most important theorem (Theorem 1.4.1) in this chapter; this theorem classifies the representations of sln(C) by their highest weight. With the knowledge of the highest weight of a representation, we can reveal all weights contained in the representation together with their corresponding multiplicities. While exploring the representations of sln(C), we discover an interesting group, called the Weyl group W, acting on the root lattice generated by the weights.

In Chapter 2, we study the symmetry group of the root lattice of sln(C). The Weyl group is generated by the reflections in the root lattice. We also spend a substantial effort showing that the set of roots in the adjoint representation of sln(C) forms a root system. A mechanism, the Killing form, is introduced to tackle this; the Killing form of sln(C) puts forward a notion of length and angle in the root lattice. The four key features of roots in Theorem 2.1.12 are what constitute an abstract root system. We then finish this chapter by proving the isomorphism between the Weyl group of sln(C) and the symmetric group Sn (Theorem 2.2.10). This can be easily observed by suitably labelling the roots. A relevant example, S4, is provided to demonstrate the claim.

In Chapter 3, we begin to explore the combinatorics in the Weyl group W ≅ Sn of sln(C). With the identification of the Weyl group with the symmetric group in Chapter 2, the Weyl group W of sln(C) possesses the properties that hold in a more general class of groups, namely Coxeter groups; the symmetric group Sn is one of the classic examples. We describe the characteristics of elements in W(sln(C)) in the Coxeter language. The most prominent theorem in this chapter relates the combinatorics of words to the action of W(sln(C)) on the root lattice (Corollary 3.2.3). Before the chapter ends, we derive a few traits of the left and right root sets of reduced expressions.

So far, no new knowledge has been tabled in the first three chapters. Our original work starts in Chapter 4.

In Chapter 4, we first introduce linear braids, which are certain distinguished lifts from the Weyl group of sln(C) to the braid group, determined by a splitting of the root set of the Weyl group element (Definition 4.3.9). In particular, we define a notion of separated root sets (Definition 4.3.1). In a pair of separated root sets, the non-negative cones defined by the root sets intersect only at the origin. Geometrically, we can use a hyperplane to divide the two separated root sets, with the two sets lying on opposite sides of the hyperplane.

The notion of a linear braid comes from braid group actions on categories. However, this thesis will not discuss the action of the braid group on categories; it merely motivates our topic. There is a special categorical action of the braid group on the homotopy category of projective modules over the zigzag algebra. More precisely, to each braid β ∈ Bn, there is an object

F(β) ∈ K^b(An-bimod).

The algebra An is a finite-dimensional algebra called the zigzag algebra, and the objects of K^b(An-bimod) are chain complexes of (An, An)-bimodules, considered up to homotopy. This is, in fact, a categorical analogue of the notion of a representation of Bn on a vector space.

Without defining these notions here, the category K^b(An-bimod) is actually triangulated with a canonical t-structure. As a consequence, the category K^b(An-bimod) has a canonically-defined abelian subcategory – the heart of the canonical t-structure – which we denote by C. So, one can ask which braids β have the property that the object F(β) lives in the heart C. The answer is the following:

Theorem 0.0.1 (Licata). The object F(β) lives in the heart of the t-structure if and only if β is linear.

Moreover, Licata conjectures an alternative combinatorial characterisation of linear braids. The following is what he has conjectured:

Conjecture 0.0.2 (Licata). The braid β is linear as in Definition 4.3.9 if and only if β is the product of a positive Garside generator and a negative Garside generator as defined in Definition 4.1.9.

We prove Conjecture 0.0.2 as Theorem 4.3.25 in this thesis. This is the main theorem we achieved in studying linear braids.

Finally, in Chapter 5, there are three natural collections of generators for the braid group to be considered. They are the Artin generators, the Garside generators, and the linear generators, which are the linear braids discovered in the previous chapter. Our main contribution is to propose two new presentations of the braid group, with respect to the Garside generators and the linear generators respectively (Theorem 5.1.1 and Theorem 5.2.1). After that, we analyse the length functions with respect to these generators by pinpointing the nexus between the length function in the Garside generators and the length function in the linear generators.

Chapter 1

Representations of the Special Linear Lie Algebra sln(C)

In the mid nineteenth century, the mathematical object now known as the Lie group arose when the Norwegian mathematician Sophus Marius Lie (1842–1899) started to explore all possible group actions on manifolds locally. Lie groups are smooth manifolds studied significantly in differential geometry. The Lie algebra, so named by Hermann Weyl in the 1930s, is simply an algebraic object dismissing the topological complexity of a Lie group. Although it seems at first that we drop too much detail, this algebraic structure embraces most of the topology of the group. From elementary Lie algebra theory, semisimple Lie algebras are sometimes regarded as atomic objects, and their representations are also interesting. In this thesis, we will focus first on the representations of the complex semisimple Lie algebra sln(C).

In Section 1.1, we introduce the classic object sln(C). It is a complex semisimple Lie algebra, which is simply a vector space with some extra properties. Moreover, we study a few representations of sln(C) by looking at how its elements act as linear maps on a fixed vector space. We also show that sln(C) is semisimple, from the definition of semisimplicity concerning ideals in a Lie algebra. Its semisimplicity allows the Jordan decomposition to be preserved under any representation. This is extremely important when we are studying representations of sln(C).

In Section 1.2, we analyse a distinguished representation of sln(C) – its adjoint representation. However, to do this, we must first start off with the irreducible finite dimensional representations of sl2(C), since these are the building blocks of the adjoint representation of sln(C) for n > 2. From here, we are able to generalize a couple of observations about the representations of sl2(C) to those of sln(C) (Theorem 1.2.22). This section is also the first place we introduce the idea of a root space.

In Section 1.3, having found a highest weight vector v in a finite-dimensional representation of sln(C), we wish to obtain the rest of its weight vectors. This is possible since we are only considering finite dimensional representations. In the searching process, we discover that the weights actually form a lattice on which a group generated by reflections acts. The Weyl group is first tracked down when it is used to fill the weight diagram of a finite dimensional representation. However, the number of distinct weights and the dimension of the representation do not match up; in truth, some weights have multiplicity more than one. Therefore, we use Young tableaux to assist us in finding the multiplicities of weights in an irreducible representation.

Lastly, Section 1.4 is one of the core parts of this thesis. We manage to classify the finite dimensional irreducible representations of sln(C) solely by their highest weight vectors (Theorem 1.4.1). In particular, we are capable of classifying every irreducible representation of sl2(C) as a symmetric power of its standard representation.

1.1 What is a Representation of sln(C)?

In this section, we define a representation of the semisimple Lie algebra sln(C). We start off by defining sln(C) (Definition 1.1.3), as well as a representation of a Lie algebra (Definition 1.1.8). To better understand the abstract idea of a representation of sln(C), we provide some common examples which are often useful to keep in mind. Furthermore, the concept of an ideal in a Lie algebra (Definition 1.1.13) is important to show that sln(C) is a semisimple Lie algebra. Finally, we invoke a famous result regarding representations of semisimple Lie algebras – the Preservation of Jordan Decomposition Theorem – on which the argument for developing the adjoint representation of sln(C) is based.

1.1.1 Definitions

We introduce all the essential definitions needed to answer the question in the title of this section. Understanding the adjoint representation of sln(C) is critical for the rest of Chapter 1.

Definition 1.1.1. A Lie algebra g is a vector space over a field F together with a skew-symmetric bilinear map [ , ] : g × g → g, called the Lie bracket, satisfying the Jacobi identity, that is, for all X, Y, Z in g, [X, [Y,Z]] + [Y, [Z,X]] + [Z, [X,Y]] = 0.

Remark 1.1.2. We would like to remind the reader of some properties of the Lie bracket [X,Y] (also known as the commutator of X and Y), namely, for all a, b ∈ F and all X, Y, Z ∈ g,

(L1) (Bilinearity) [aX + bY, Z] = a[X,Z] + b[Y,Z] and [Z, aX + bY] = a[Z,X] + b[Z,Y],

(L2) (Antisymmetry) [X,X] = 0,

(L3) The bracket operation satisfies the Jacobi identity.

In addition, if we apply L2 on X + Y , then we get the following:

(L2’) (Anticommutativity) [X,Y ] = −[Y,X].

Notice that L2 can be recovered from L2' by taking Y = X, provided the characteristic of F is not 2.

Definition 1.1.3. The set of n × n matrices with entries in a field F having trace 0 is called the special linear Lie algebra of order n, denoted by sln(F). In set notation,

sln(F) = { X = (aij) ∈ Mn(F) | Tr(X) = a11 + a22 + · · · + ann = 0 }.

Lemma 1.1.4. Let X = (xij)1≤i,j≤n and Y = (yij)1≤i,j≤n be n × n matrices. Then,

T r(XY ) = T r(YX).

Proof. Take any two n × n matrices X and Y. Then

Tr(XY) = Σ_{i=1}^{n} Σ_{k=1}^{n} x_{ik} y_{ki} = Σ_{k=1}^{n} Σ_{i=1}^{n} y_{ki} x_{ik} = Tr(YX). QED

Now, we are going to introduce the core example of Lie algebra in this chapter.

Example 1.1.5. To start, we pick a basis for sl2(C). Consider the following basis elements (rows separated by semicolons):

E = ( 0 1 ; 0 0 ), F = ( 0 0 ; 1 0 ), H = ( 1 0 ; 0 −1 )

with the relations [H,E] = 2E, [H,F] = −2F, [E,F] = H. One subtle thing to note is that this Lie algebra is a vector space of dimension three. It is enough to check the Jacobi identity on the basis elements. Moreover, the set is closed under the bracket operation since for all X, Y ∈ sl2(C), Tr(XY) = Tr(YX), so

Tr([X,Y]) = Tr(XY − YX) = Tr(XY) − Tr(YX) = 0.
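As a quick illustrative check (mine, not part of the thesis), the commutator relations above can be verified numerically:

```python
# Numerical check of the sl2(C) commutator relations from Example 1.1.5.
import numpy as np

E = np.array([[0, 1], [0, 0]])
F = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def bracket(X, Y):
    """Lie bracket [X, Y] = XY - YX of two matrices."""
    return X @ Y - Y @ X

assert np.array_equal(bracket(H, E), 2 * E)   # [H, E] = 2E
assert np.array_equal(bracket(H, F), -2 * F)  # [H, F] = -2F
assert np.array_equal(bracket(E, F), H)       # [E, F] = H
# The bracket of two matrices is always traceless, so sl2(C) is closed:
assert np.trace(bracket(E, F)) == 0
```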

Example 1.1.6. Similarly, we can see that sln(C) is a Lie algebra of dimension n² − 1. First, let us pick a convenient basis to work with. Denote by Ei,j the n × n matrix with 1 at the (i, j)-th entry and 0 at every other entry. One can verify that, for i ≠ j, the matrices Ei,j, each with a single 1 in an off-diagonal entry, together with the matrices

Hi,i+1 := Ei,i − Ei+1,i+1,

with 1 and −1 in consecutive diagonal entries, form a basis of sln(F) by the Spanning Set Theorem in [Lay12]; there are n² − n + (n − 1) = n² − 1 of them and they are linearly independent. By an argument similar to the one showing that sl2(C) is closed, we can see that sln(C) is closed under the bracket operation too.
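A small sketch (mine, not the thesis's) that builds this basis and confirms the count, the tracelessness, and the linear independence:

```python
# Build the standard basis of sl_n described in Example 1.1.6 and verify
# that it has n^2 - 1 traceless, linearly independent elements.
import numpy as np

def sl_basis(n):
    basis = []
    for i in range(n):
        for j in range(n):
            if i != j:
                Eij = np.zeros((n, n))
                Eij[i, j] = 1                      # off-diagonal E_{i,j}
                basis.append(Eij)
    for i in range(n - 1):
        Hi = np.zeros((n, n))
        Hi[i, i], Hi[i + 1, i + 1] = 1, -1         # H_{i,i+1} = E_{i,i} - E_{i+1,i+1}
        basis.append(Hi)
    return basis

B = sl_basis(4)
assert len(B) == 4**2 - 1                          # n^2 - n off-diagonal plus n - 1 diagonal
assert all(np.trace(X) == 0 for X in B)
# Linear independence: flatten each matrix and check the rank of the stack.
assert np.linalg.matrix_rank(np.array([X.ravel() for X in B])) == 15
```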

Definition 1.1.7. Let g1 and g2 be Lie algebras over a field F. A Lie algebra homomorphism is a linear map ϕ : g1 → g2 such that for all X, Y ∈ g1, ϕ([X,Y]) = [ϕ(X), ϕ(Y)].

Definition 1.1.8. A representation of a Lie algebra g on a vector space V is a Lie algebra homomorphism ρ : g → End(V) from g to the Lie algebra of endomorphisms of V.

Example 1.1.9. Consider the standard representation of sl2(C), where V = C² is the two-dimensional complex vector space. Then we can take ρ to be the identity map sending X ∈ sl2(C) to itself, where the commutator relation is carried over trivially.

Example 1.1.10. Consider another representation of sl2(C), where V = C^n is the n-dimensional complex vector space. With Example 1.1.5 in mind, take the eigenvalue λ of H with the biggest real part and pick a vector v in the eigenspace Vλ. With respect to the basis {v, F(v), ..., F^{n−1}(v)} of V, we can define ρ as follows: ρ(H) is diagonal with entries n − 1, n − 3, ..., 3 − n, 1 − n; ρ(F) has 1's on the subdiagonal and 0's elsewhere; and ρ(E) has the superdiagonal entries 1 · (n − 1), 2 · (n − 2), ..., (n − 1) · 1, that is, m(n − m) for m = 1, ..., n − 1, and 0's elsewhere. This is done as Problem 1.55 in [EGHLSVY11].
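A sketch of these matrices (my own construction following Example 1.1.10) together with a check that they really satisfy the sl2 relations:

```python
# The matrices of H, E, F in the basis {v, F(v), ..., F^{n-1}(v)} of the
# n-dimensional irreducible representation of sl2(C), as in Example 1.1.10.
import numpy as np

def irrep(n):
    rho_H = np.diag([n - 1 - 2 * m for m in range(n)])          # n-1, n-3, ..., 1-n
    rho_F = np.diag(np.ones(n - 1), k=-1)                       # subdiagonal 1's
    rho_E = np.diag([m * (n - m) for m in range((1), n)], k=1)  # superdiagonal m(n-m)
    return rho_H, rho_E, rho_F

H, E, F = irrep(5)
br = lambda X, Y: X @ Y - Y @ X
assert np.array_equal(br(H, E), 2 * E)
assert np.array_equal(br(H, F), -2 * F)
assert np.array_equal(br(E, F), H)
```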

Example 1.1.11. Consider the adjoint representation of sln(C); that is, we take V = sln(C) itself. Then we can take ρ to be the map sending X ∈ sln(C) to ad(X) := [X, · ] ∈ End(sln(C)). To check that ad is really a Lie algebra homomorphism, we need to check that for all X, Y ∈ V, ad([X,Y]) = [ad(X), ad(Y)].

Take an arbitrary Z ∈ sln(C):

[ad(X), ad(Y)](Z) = (ad(X)ad(Y) − ad(Y)ad(X))(Z)
= ad(X)ad(Y)(Z) − ad(Y)ad(X)(Z)
= [X, [Y,Z]] − [Y, [X,Z]] by definition,
= [X, [Y,Z]] + [Y, [Z,X]] by L1 and L2',
= −[Z, [X,Y]] by L3,
= [[X,Y], Z] = ad([X,Y])(Z),

as desired.
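A numerical spot check (mine, not part of the thesis) of the homomorphism property, representing ad(X) as an n² × n² matrix acting on vectorised matrices via Kronecker products:

```python
# Verify ad([X,Y]) = [ad(X), ad(Y)] for random traceless matrices.
import numpy as np

rng = np.random.default_rng(0)

def traceless(n):
    X = rng.standard_normal((n, n))
    return X - np.trace(X) / n * np.eye(n)

def ad(X):
    """Matrix of ad(X) = [X, .] on row-major-vectorised n x n matrices."""
    n = X.shape[0]
    return np.kron(X, np.eye(n)) - np.kron(np.eye(n), X.T)

X, Y = traceless(3), traceless(3)
lhs = ad(X @ Y - Y @ X)               # ad([X, Y])
rhs = ad(X) @ ad(Y) - ad(Y) @ ad(X)   # [ad(X), ad(Y)]
assert np.allclose(lhs, rhs)
```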

1.1.2 Semisimplicity of sln(C)

The definition of a semisimple Lie algebra involves the concept of an ideal of a Lie algebra. We will soon see that sln(C) is not only semisimple, but is, in fact, simple.

Definition 1.1.12. A subspace h ⊂ g of a Lie algebra g is a subalgebra if it satisfies the condition [X,Y] ∈ h for all X, Y ∈ h.

Definition 1.1.13. A Lie subalgebra h ⊂ g of a Lie algebra g is an ideal if it satisfies the condition [X,Y ] ∈ h for all X ∈ h,Y ∈ g.

Definition 1.1.14. The center Z(g) of a Lie algebra g is the subspace of g of elements X ∈ g such that [X,Y ] = 0 for all Y ∈ g. Then, g is abelian if all brackets are zero, or equivalently Z(g) = g.

Definition 1.1.15. A Lie algebra g is simple if dim g > 1 and it contains no non-trivial ideals.

Definition 1.1.16. Let g denote a finite dimensional Lie algebra. The lower central series of subalgebras D_n g is defined inductively by

D_1 g = [g, g] and D_n g = [g, D_{n−1} g],

giving the decreasing sequence

g ⊇ D_1 g ⊇ D_2 g ⊇ · · · .

In a similar manner, the derived series {D^n g} is defined inductively by

D^1 g = [g, g] and D^n g = [D^{n−1} g, D^{n−1} g],

giving the decreasing sequence

g ⊇ D^1 g ⊇ D^2 g ⊇ · · · .

Definition 1.1.17. A Lie algebra g is nilpotent if D_n g = 0 for some n. A Lie algebra g is solvable if D^n g = 0 for some n.

Remark 1.1.18. Notice that D^n g ⊂ D_n g for every n. This implies that any nilpotent Lie algebra is solvable too. When n = 1, Dg := D^1 g = D_1 g is called the commutator subalgebra.

Definition 1.1.19. A Lie algebra g is semisimple if g has no non-zero solvable ideals.

Remark 1.1.20. Since a simple Lie algebra has no non-trivial ideals, it is automatically semisimple. Furthermore, every semisimple Lie algebra is not solvable and hence not nilpotent, given Remark 1.1.18. Besides, any semisimple Lie algebra has a trivial center, because a non-trivial center is obviously a solvable ideal.

Proposition 1.1.21. The Lie algebra sl2(C) is semisimple.

Proof. The strategy is to prove that sl2(C) is simple; the fact that sl2(C) is semisimple will then follow from Remark 1.1.20. Since we want to show that sl2(C) has no non-trivial ideal, it suffices to show that the ideal generated by any nonzero element is the whole Lie algebra. From the commutator relations in Example 1.1.5, we know immediately that E and F are contained in the ideal generated by H, by considering (1/2)[H,E] and (1/2)[H,F], so the ideal generated by H is the whole Lie algebra. It is also not difficult to infer that H is contained in the ideal generated by E or by F, by looking at [E,F], so those ideals are the whole Lie algebra too. Finally, consider the ideal generated by an arbitrary nonzero element aE + bF + cH for some a, b, c ∈ C, by breaking into three cases:

(i) if a ≠ 0, then

[[aE + bF + cH, F], F] = [[aE, F] + [bF, F] + [cH, F], F] = [aH − 2cF, F] = −2aF,

(ii) if b ≠ 0, then

[[aE + bF + cH, E], E] = [−bH + 2cE, E] = −2bE,

(iii) if c ≠ 0, then

[[aE + bF + cH, E], H] = [−bH + 2cE, H] = −4cE.

In each case, by the preceding paragraph, the resulting element generates the entire Lie algebra. QED
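The three bracket computations in this proof can be spot-checked numerically; the following sketch (mine, with arbitrarily chosen nonzero coefficients) is illustrative only:

```python
# Check the three case computations in the proof of Proposition 1.1.21.
import numpy as np

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
br = lambda X, Y: X @ Y - Y @ X

a, b, c = 1.3, -0.7, 2.1               # arbitrary nonzero coefficients
G = a * E + b * F + c * H

assert np.allclose(br(br(G, F), F), -2 * a * F)   # case (i)
assert np.allclose(br(br(G, E), E), -2 * b * E)   # case (ii)
assert np.allclose(br(br(G, E), H), -4 * c * E)   # case (iii)
```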

Proposition 1.1.22. The Lie algebra sln(C) is semisimple.

Proof sketch. A similar strategy as in the case of sl2(C) is used here.

Take a nonzero element in a non-trivial ideal of sln(C). Following the notation in Example 1.1.6, by suitably bracketing with the Ei,j for i ≠ j, we can see that the ideal contains some basis element Ei,j. Finally, we can see readily that the ideal is the whole of sln(C) by further bracketing. Hence, sln(C) is simple. QED

Proposition 1.1.23. The adjoint representation of sln(C) as described in Example 1.1.11 is faithful.

Proof. Note that the kernel of the adjoint representation is the center. But we know from Remark 1.1.20 that sln(C) has a trivial center, as we have just shown in the previous proposition that it is semisimple. Hence, the adjoint representation of sln(C) is faithful. QED

1.1.3 General theory of semisimple Lie algebra

Furthermore, the semisimplicity of sln(C) allows us to decompose any representation of sln(C) into a direct sum of irreducible representations, which we will do explicitly in a later section. This is quoted as Theorem 9.12 in [FH91].

Complete Reducibility Theorem. Let V be a representation of the semisimple Lie algebra g and W ⊂ V a subspace invariant under the action of g. Then there exists a subspace W′ ⊂ V complementary to W and invariant under g.

In addition, representations of semisimple Lie algebras preserve the Jordan decomposition. Let us recall the Jordan Decomposition Theorem.

Theorem 1.1.24 (Jordan Decomposition Theorem). Any endomorphism X of a complex vector space V can be uniquely written in the form X = Xs + Xn, where Xs is diagonalizable, Xn is nilpotent, and the two commute.

Proof. Please refer to Theorem 4.3 in [BR02]. QED

Remark 1.1.25. However, the preservation of the Jordan decomposition under a representation does not hold in general. Consider the representation ρ : C → End(C²),

ρ : t ↦ ( t t ; 0 t ).

For t ≠ 0, the image ρ(t) is neither diagonalizable nor nilpotent. The situation is completely different if the Lie algebra g is semisimple. This is quoted as Theorem 9.20 in [FH91].

Preservation of Jordan Decomposition Theorem. Let g be a semisimple Lie algebra. For any element X ∈ g, there exist Xs and Xn ∈ g such that X = Xs + Xn, and for any representation ρ : g → End(V), we have

ρ(X)s = ρ(Xs) and ρ(X)n = ρ(Xn).
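As a sanity check on the counterexample of Remark 1.1.25 (my own sketch, not part of the thesis), one can confirm numerically that the matrix ( t t ; 0 t ) is neither nilpotent nor diagonalizable for t ≠ 0:

```python
# Check that A = [[t, t], [0, t]] is neither nilpotent nor diagonalizable.
import numpy as np

t = 1.0
A = np.array([[t, t], [0.0, t]])

# Not nilpotent: its only eigenvalue is t != 0, so no power of A vanishes.
assert not np.allclose(np.linalg.matrix_power(A, 5), 0)

# Not diagonalizable: the eigenvalue t has algebraic multiplicity 2 but its
# eigenspace ker(A - tI) is only one dimensional.
eigenspace_dim = 2 - np.linalg.matrix_rank(A - t * np.eye(2))
assert eigenspace_dim == 1
```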

1.2 The Adjoint Representation

In this section, we decompose the adjoint representation of sln(C) (Example 1.1.11) into root spaces through the Cartan decomposition of sln(C). Before doing this, we begin by probing into the Cartan decomposition itself in Section 1.2.1. After that, we study the finite dimensional irreducible representations of sl2(C) very closely in Section 1.2.2. Then, in Section 1.2.3, we generalize the idea of the highest weight vector in a representation of sln(C) for larger n by suggesting an ordering on the set of roots.

12 1.2.1 Cartan decomposition of sln(C)

Let us return to the discussion of the finite dimensional representations of sln(C). From Example 1.1.6, we found that the matrices Ei,i − Ei+1,i+1 span the (n − 1)-dimensional subspace h ⊂ sln(C) of all diagonal matrices. This subspace h is, in fact, a Cartan subalgebra, in the sense that it is a maximal abelian diagonalizable subalgebra of sln(C). Let us examine how h acts on sln(C) through the adjoint representation.

From standard linear algebra, commuting diagonalizable linear operators are simultaneously diagonalizable. Moreover, any representation of sln(C) preserves the Jordan decomposition due to its semisimplicity, as discussed in the previous section. As a consequence, the property of h being diagonalizable is inherited by the action of h on V; in other words, h acts diagonally on V!

By the above analysis, we have the following definition.

Definition 1.2.1. Let g = sln(C). Suppose ρ : g → End(V ) is a representation of g and h is a Cartan subalgebra of g. The weight for the action of h is a linear functional α ∈ h∗ such that there exists some X ∈ V \{0} satisfying for all H ∈ h,

ρ(H)(X) = α(H)X.

All X's that satisfy the equation for a fixed α form a subspace called the weight space associated to α, denoted by Vα. If h∗ consists of only constant maps, then the weight is sometimes referred to as an eigenvalue, and the corresponding weight space is just an eigenspace. The dimension of Vα is the multiplicity of α in the representation.

Definition 1.2.2. Take V = g itself. We often refer to a non-zero weight α of the adjoint representation of g as a root of the Lie algebra g, that is, a linear functional α which satisfies, for all H ∈ h and all X ∈ gα,

ad(H)(X) := [H,X] = α(H)X.

Denote the set of roots by ∆. Subsequently, the corresponding weight spaces gα are called the root spaces.

Remark 1.2.3. Note that ∆ ⊊ h∗. There is a group acting on ∆. We will reveal the group later.

In the light of this, we are able to decompose g = sln(C) into a direct sum of eigenspaces associated to the roots α. This root space decomposition is called the Cartan decomposition,

sln(C) = h ⊕ (⊕α∈∆ gα),

where

gα = {X ∈ g | for all H ∈ h, ad(H)(X) = α(H)X},

which matches

sln(C) = h ⊕ (⊕i≠j CEi,j).

Remark 1.2.4. Note that h is the space with zero weight, although zero is conventionally not regarded as a root. Also, h preserves each gα.

Example 1.2.5. For n = 3, it is easy to verify that

sl3(C) = CH1,2 ⊕ CH2,3 ⊕ CE1,2 ⊕ CE1,3 ⊕ CE2,1 ⊕ CE2,3 ⊕ CE3,1 ⊕ CE3,2
= h ⊕ gL1−L2 ⊕ gL1−L3 ⊕ gL2−L1 ⊕ gL2−L3 ⊕ gL3−L1 ⊕ gL3−L2.
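The eigenvector property behind this decomposition can be checked directly; the following sketch (mine, not part of the thesis) does so for sl3(C):

```python
# For sl_3(C), each E_{i,j} with i != j is a simultaneous eigenvector of
# ad(H) for every traceless diagonal H, with eigenvalue (L_i - L_j)(H) = a_i - a_j.
import numpy as np

n = 3
a = np.array([0.4, -1.1, 0.7])        # arbitrary entries with a1 + a2 + a3 = 0
H = np.diag(a)

for i in range(n):
    for j in range(n):
        if i != j:
            Eij = np.zeros((n, n))
            Eij[i, j] = 1
            adH_Eij = H @ Eij - Eij @ H            # ad(H)(E_{i,j}) = [H, E_{i,j}]
            assert np.allclose(adH_Eij, (a[i] - a[j]) * Eij)
```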

1.2.2 Representation of sl2(C)

In this part, we focus on the irreducible finite dimensional representations of sl2(C) and use the highest weight vector to form a basis of the representation. Here, we are able to deduce the direct relation between the weights and the dimension of the representation. A couple of definitions are required before we proceed.

Definition 1.2.6. Let ρ be a representation of a Lie algebra g acting on the space V. A subspace W of V is called invariant if ρ(X)w ∈ W for all w ∈ W and all X ∈ g. A subrepresentation of a representation V is a vector subspace W of V which is invariant under g. An invariant subspace W is called proper if W ≠ {0} and W ≠ V. A representation V is irreducible if there is no proper nonzero invariant subspace W of V.

Example 1.2.7. From the previous part, if we consider the adjoint representation of sl2(C), then

sl2(C) = h ⊕ g−2 ⊕ g2 = CH ⊕ CF ⊕ CE,

which can be directly inferred from the commutator relations. This is one of its irreducible finite dimensional representations.

Now, let V be any finite dimensional irreducible representation of sl2(C). We are now going to explore the representation theory of sl2(C), which lays down the foundation for that of sln(C). The span of H is a Cartan subalgebra in sl2(C). Since the action of H on V is diagonalizable by the Preservation of Jordan Decomposition Theorem in Section 1.1.3, we can decompose

V = ⊕α Vα

into eigenspaces Vα, where the α's are the eigenvalues. But we know more than that: the eigenvalues are integers and are congruent to one another mod 2. These are immediate consequences of the following theorems.

14 Theorem 1.2.8. The weights in a finite dimensional irreducible representation V of sl2(C) form an unbroken string of numbers of the form α, α − 2, ..., α − 2n for some α ∈ C.

Proof. Suppose v ∈ Vγ for some γ ∈ C, that is, H(v) = γv.

We can see that E(v) ∈ Vγ+2 because

H(E(v)) = E(H(v)) + [H,E](v) = γE(v) + 2E(v) = (γ + 2)E(v).

By the same token, F(v) ∈ Vγ−2 as

H(F(v)) = F(H(v)) + [H,F](v) = γF(v) − 2F(v) = (γ − 2)F(v).

Then the subspace W = ⊕n∈Z Vγ+2n is invariant under the action of sl2(C). Since V is irreducible, V = W. As V is finite dimensional, the sequence of weights must terminate; we choose the first term as α. QED

Remark 1.2.9. In short, we can summarise the action of sl2(C) on the chain of weight spaces Vα, Vα−2, Vα−4, ..., with Vα the first element in the sequence: H preserves each Vγ, while E moves Vγ−2 up to Vγ and F moves Vγ down to Vγ−2.

The upshot of this is that the finite dimensionality of V guarantees the presence of a maximal weight α in the sense that E(v) = 0 for all v ∈ Vα.

Theorem 1.2.10. An n-dimensional irreducible representation V of sl2(C) is spanned by {v, F(v), F^2(v), ..., F^{n−1}(v)}, where v lies in the highest weight space.

Proof. The existence of v is guaranteed by the finite-dimensionality of V. Start with any eigenvector v0 of H and apply E to v0 repeatedly. Since V is finite dimensional, the process of applying E multiple times to v0 must terminate after k + 1 steps for some natural number k. Finally, we can take v = E^k(v0).

Using the irreducibility of V, it suffices to show that the subspace W = Span{v, F(v), F^2(v), ..., F^{n−1}(v)} is invariant under sl2(C); then it follows from the irreducibility of V and v ≠ 0 that W = V. By the nature of W, F preserves W, since F sends F^m(v) to F^{m+1}(v), which is among the spanning set elements. In addition, H also preserves W, as the spanning vectors are also eigenvectors of H by the calculation in Theorem 1.2.8.

Next, we examine the action of E on W. First, we have E(v) = 0, which is clearly in W. Then,

E(F(v)) = F(E(v)) + [E,F](v) = 0 + H(v) = nv,

where n is the eigenvalue of v. Furthermore,

E(F^2(v)) = F(E(F(v))) + [E,F](F(v)) = nF(v) + H(F(v)) = (n + (n − 2))F(v).

Inductively, we get

E(F^m(v)) = (n + (n − 2) + ... + (n − 2(m − 1)))F^{m−1}(v) = (mn − m(m − 1))F^{m−1}(v) = m(n − m + 1)F^{m−1}(v),

as desired. QED

Remark 1.2.11. By the finite-dimensionality of V again, we have a lower bound on the eigenvalues, in the sense that one of them has the smallest real part, so there exists a smallest m such that F^m(v) = 0. Subsequently, we obtain 0 = E(F^m(v)) = m(n − m + 1)F^{m−1}(v). Since F^{m−1}(v) ≠ 0 by the choice of m, n must be m − 1, which is a non-negative integer. By virtue of Theorem 1.2.10, we can conclude that the eigenvalues are integer-valued, differ by 2 from one another, and are symmetric about the origin.
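The inductive identity E(F^m(v)) = m(n − m + 1)F^{m−1}(v) can be spot-checked with the explicit matrices of Example 1.1.10; in this sketch (mine, not part of the thesis), the representation is n-dimensional, so the eigenvalue of v, written n in the proof, becomes lam = n − 1:

```python
# Verify E(F^m(v)) = m(lam - m + 1) F^{m-1}(v), lam the highest weight.
import numpy as np

n = 6
lam = n - 1
F = np.diag(np.ones(n - 1), k=-1)                     # F lowers: F^m(v) = e_m
E = np.diag([m * (n - m) for m in range(1, n)], k=1)  # E raises with factor m(n-m)

v = np.zeros(n)
v[0] = 1                                              # highest weight vector
for m in range(1, n):
    Fm_v = np.linalg.matrix_power(F, m) @ v
    Fm1_v = np.linalg.matrix_power(F, m - 1) @ v
    assert np.allclose(E @ Fm_v, m * (lam - m + 1) * Fm1_v)
```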

Corollary 1.2.12. The number of irreducible representations Ui in an arbitrary representation V = ⊕i Ui of sl2(C) is exactly the sum of the multiplicities of 0 and 1 as eigenvalues of H; in other words, the sum of the dimensions of the 0 and 1 eigenspaces in the decomposition V = ⊕α Vα.

Generalization 1.2.13. The following are quoted as facts in [FH91], generalizing what we have just shown in the case of sl2(C). We will verify them in Section 2.1.2 after we introduce the Killing form of a Lie algebra.

Fact 1.2.14. For $\mathfrak{g} = \mathfrak{sl}_n(\mathbb{C})$:

(i) every root space $\mathfrak{g}_\alpha$ is one dimensional;

(ii) $\Delta$ is symmetric about the origin, that is, if $\alpha \in \Delta$ is a root, then $-\alpha \in \Delta$ is a root as well.

1.2.3 The highest weight vector in a representation

We will now formalise the definition of a highest weight vector. Before that, we have to define an ordering on the roots, since the word "highest" suggests that the highest weight vector varies with the ordering we choose. From this vector, we can obtain the other weight vectors in the irreducible representation.

Construction 1.2.15. For the general theory of $\mathfrak{sl}_n(\mathbb{C})$, let
$$\mathfrak{h} = \left\{ \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix} \in M_n(\mathbb{C}) \;\middle|\; a_1 + a_2 + \cdots + a_n = 0 \right\}$$
be its Cartan subalgebra. We define the space $\mathfrak{h}^*$ of linear functionals on $\mathfrak{h}$ by
$$\mathfrak{h}^* = \mathbb{C}\{L_1, L_2, \ldots, L_n\} \,/\, (L_1 + L_2 + \cdots + L_n),$$
where
$$L_i \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix} = a_i, \quad \text{for } i = 1, 2, \ldots, n.$$

Next, we see that $\mathfrak{g}_\gamma$ has an adjoint action on $\mathfrak{g}$. Taking any $E \in \mathfrak{g}_\alpha$ and $F \in \mathfrak{g}_\gamma$, we learn that $\operatorname{ad}(\mathfrak{g}_\gamma) : \mathfrak{g}_\alpha \to \mathfrak{g}_{\alpha+\gamma}$, because, for any $H \in \mathfrak{h}$,
$$[H,[F,E]] = -[F,[E,H]] - [E,[H,F]] = -[F,-[H,E]] - [E,\gamma(H)F] = [F,\alpha(H)E] + [\gamma(H)F,E] = (\alpha(H) + \gamma(H))[F,E].$$

Now, we choose a root ordering by picking a linear functional $l$ such that
$$l\left(\sum_{i=1}^n a_i L_i\right) = \sum_{i=1}^n c_i a_i$$
with $\sum_{i=1}^n c_i = 0$ and $c_1 > c_2 > \cdots > c_n$. This defines the positive root spaces to be the $\mathfrak{g}_{L_i - L_j}$ for $i < j$, dividing the set of all roots $\Delta = \Delta^+ \cup \Delta^-$ into a positive root set and a negative root set. We will adopt this convention throughout the thesis.

Remark 1.2.16. Notice that we are making a choice of root ordering here.

Example 1.2.17. Figure 1.2.1 depicts the weight diagram for $\mathfrak{sl}_3(\mathbb{C})$. The thick magenta line divides $\Delta$ into a positive root set $\Delta^+ = \{L_2 - L_3,\, L_1 - L_3,\, L_1 - L_2\}$ and a negative root set $\Delta^- = \{L_2 - L_1,\, L_3 - L_1,\, L_3 - L_2\}$. At the same time, the arrows show the action of $\operatorname{ad}(\mathfrak{g}_{L_2-L_1})$ on the root spaces.


Figure 1.2.1: The weight diagram for sl3(C) and the adjoint action of gL2−L1 on the root spaces.
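The choice of ordering can be checked with a short computation. Below we take the coefficients $5, -1, -4$ from the footnote to Example 1.2.17 (assuming its $a, b, c$ are the $c_1, c_2, c_3$ of the ordering; any $c_1 > c_2 > c_3$ summing to zero works) and confirm that the roots with positive value of $l$ are exactly the $L_i - L_j$ with $i < j$. The helper names are our own.

```python
import itertools

# Coefficients of the linear functional l: they sum to 0 and c1 > c2 > c3.
c = [5, -1, -4]

def l(root):
    """Evaluate l on a root written in coordinates over L1, L2, L3."""
    return sum(ci * ri for ci, ri in zip(c, root))

roots = []
for i, j in itertools.permutations(range(3), 2):
    r = [0, 0, 0]
    r[i], r[j] = 1, -1               # the root L_{i+1} - L_{j+1}
    roots.append(((i + 1, j + 1), r))

positive = sorted((i, j) for (i, j), r in roots if l(r) > 0)
assert positive == [(1, 2), (1, 3), (2, 3)]   # exactly the L_i - L_j with i < j
```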

Definition 1.2.18. A positive (respectively negative) root α ∈ ∆ is simple or primitive if it cannot be expressed as a sum of two positive (respectively negative) roots.

Definition 1.2.19. Let $V$ be any representation of $\mathfrak{g}$. A nonzero vector $v \in V$ is called a highest weight vector of $V$ if it is both a weight vector for the action of $\mathfrak{h}$ and in the kernel of $\mathfrak{g}_\alpha$ for all $\alpha \in \Delta^+$.

Footnotes to Example 1.2.17: (1) As described before, the dividing line depends on the linear functional chosen; the thick line is obtained by choosing $a = 5$, $b = -1$ and $c = -4$. (2) Note that there are only six one-dimensional root spaces for the adjoint representation of $\mathfrak{sl}_3(\mathbb{C})$.

Remark 1.2.20. This is exactly the condition fulfilled by the highest weight vector of $\mathfrak{sl}_2(\mathbb{C})$.

Generalization 1.2.21. The next theorem is a generalization of the one in the case of $\mathfrak{sl}_2(\mathbb{C})$.

Theorem 1.2.22. The following statements are true for g = sln(C):

(i) Every finite-dimensional representation V of sln(C) possesses a highest weight vector v.

(ii) The subspace $W$ of $V$ generated by the images of a highest weight vector $v$ under successive applications of root spaces $\mathfrak{g}_\alpha$ for $\alpha \in \Delta^-$ is an irreducible representation.

(iii) An irreducible representation possesses a unique highest weight vector up to scalars.

Proof. The strategy is the same as before. First, for (i), the existence of $v$ follows by picking a vector from the $V_\alpha$ with $\alpha$ having the "largest" value of the linear functional $l$. After that, for (ii), we just need to show that the subspace $W$ of $V$ spanned by the images of $v$ under the negative root spaces is preserved by all of $\mathfrak{sl}_n(\mathbb{C})$ and hence must be an irreducible representation. Let $w_n$ denote any word of length $n$ or less in the elements of $\mathfrak{g}_\gamma$ for $\gamma \in \Delta^-$, and let $W_n$ denote the vector space spanned by all $w_n v$. We argue by induction that $E W_n \subset W_n$ for $E \in \mathfrak{g}_\gamma$ and $\gamma \in \Delta^+$. Write $w_n = F w_{n-1}$ where $F \in \mathfrak{g}_\alpha$ and $\alpha \in \Delta^-$. Then $E F w_{n-1} = F E w_{n-1} + [E,F] w_{n-1} \subset W_n$ by the inductive hypothesis plus $[E,F] \in \mathfrak{h}$. Hence $W$, being the union of the $W_n$, is a subrepresentation. Suppose, for contradiction, that $W$ is not irreducible, which means $W = W' \oplus W''$. By Fact 1.2.14(i), the one-dimensional weight space $W_\alpha$ must belong to $W'$ or $W''$, and thus one of the spaces is zero and the other equals $W$. Finally, for (iii), suppose $v$ and $w$ are highest weight vectors in an irreducible representation $V_\alpha$ but not scalar multiples of each other. Then, by part (ii), we can form two irreducible subrepresentations generated by $v$ and $w$ under successive applications of negative root spaces. Since $V_\alpha$ is an irreducible representation, the subrepresentations must agree, and hence $v$ and $w$ must be scalar multiples of each other. QED

1.3 Full Picture of a Representation

In this section, we study the root lattice of a representation and the weight lattice of the Lie algebra. They are two fundamentally different lattices, with one sitting inside the other. In Section 1.3.1, we introduce a reflection, or rather an involution, on the weight lattice. The set of involutions generates an interesting group called the Weyl group. A point worth noticing here is that everything we do depends on the choice of ordering. Section 1.3.2 is devoted to the multiplicity of the weights of a representation. This is done by treating a representation as a tableau.

1.3.1 Finding all weights

First, we want to know how the border vectors behave in a representation of $\mathfrak{g} = \mathfrak{sl}_n(\mathbb{C})$. Take a representative $E$ in $\mathfrak{g}_\alpha$, where $\alpha \in \Delta^-$, and apply it repeatedly to the highest weight vector $v$. The question is when this string of vectors vanishes.

Construction 1.3.1. By Fact 1.2.14(ii), $\mathfrak{g}_{-\alpha}$ exists. The commutator $[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}]$ together with $\mathfrak{g}_\alpha$ and $\mathfrak{g}_{-\alpha}$ forms a subalgebra of $\mathfrak{g}$. Moreover, the commutator has dimension at most one, since $\mathfrak{g}_\alpha$ is one dimensional by Fact 1.2.14(i). As the adjoint action of the commutator sends each of $\mathfrak{g}_\alpha$ and $\mathfrak{g}_{-\alpha}$ into itself, we get the following direct sum:
$$\mathfrak{s}_\alpha := \mathfrak{g}_\alpha \oplus \mathfrak{g}_{-\alpha} \oplus [\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}].$$

The claim is $\mathfrak{s}_\alpha \cong \mathfrak{sl}_2(\mathbb{C})$. Picking a basis $E_\alpha \in \mathfrak{g}_\alpha$ and $F_{-\alpha} \in \mathfrak{g}_{-\alpha}$ determines the unique $H_\alpha \in [\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}]$ which has eigenvalue $2$ on $\mathfrak{g}_\alpha$ and $-2$ on $\mathfrak{g}_{-\alpha}$. In other words, the two criteria $H_\alpha \in [\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}]$ and $\alpha(H_\alpha) = 2$ characterize $H_\alpha$ uniquely. The adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$ therefore inherits the integrality of the eigenvalues under the action of $H_\alpha$; that is, the eigenvalues are integral linear combinations of the $L_i$.

Definition 1.3.2. The weight lattice of $\mathfrak{sl}_n(\mathbb{C})$ is the $\mathbb{Z}$-submodule of $\mathfrak{h}^*$ generated by $\{L_1, \ldots, L_{n-1}\}$.

Definition 1.3.3. The root lattice of $\mathfrak{sl}_n(\mathbb{C})$ is the $\mathbb{Z}$-submodule of $\mathfrak{h}^*$ generated by the roots of $\mathfrak{sl}_n(\mathbb{C})$.

Form the weight lattice $\Lambda_W$ of $\mathfrak{sl}_n(\mathbb{C})$ inside the space of linear functionals $\mathfrak{h}^*$. Then we see that $\Delta$ generates a root lattice $\Lambda_\Delta \subset \Lambda_W$, where explicitly
$$\Lambda_W = \mathbb{Z}\{L_1, \cdots, L_n\} \,/\, \left(\sum_{i=1}^n L_i\right) \quad \text{and} \quad \Lambda_\Delta = \left\{ \sum_i a_i L_i \;\middle|\; a_i \in \mathbb{Z} \text{ and } \sum_i a_i = 0 \right\} \,/\, \left(\sum_{i=1}^n L_i\right).$$
Note that the weight lattice and the root lattice are not the same: the weight lattice depends on the Lie algebra, whereas the root lattice depends on the adjoint representation of the Lie algebra.

Returning to the preceding question, we introduce the following definition.

Definition 1.3.4. For a root $\alpha$, an involution is defined as follows:
$$t_\alpha(\gamma) = \gamma - \frac{2\gamma(H_\alpha)}{\alpha(H_\alpha)}\alpha = \gamma - \gamma(H_\alpha)\alpha$$
(as $\alpha(H_\alpha) = 2$), which is a reflection in the plane
$$\Omega_\alpha = \{ l \in \mathfrak{h}^* : \langle H_\alpha, l \rangle = l(H_\alpha) = 0 \}.$$

Remark 1.3.5. Interestingly, these involutions tα generate a group which we call the Weyl group W of the Lie algebra sln(C).
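The fact that each $t_\alpha$ is an involution preserving the set of roots can be verified directly for $\mathfrak{sl}_3(\mathbb{C})$: writing a weight in $L$-coordinates, $\gamma(H_{L_i - L_j})$ is the difference of the $i$-th and $j$-th coordinates, so $t_{L_i - L_j}$ simply swaps them. A small sketch (function names are ours):

```python
import itertools

def reflect(gamma, i, j):
    """t_alpha(gamma) = gamma - gamma(H_alpha) alpha for alpha = L_{i+1} - L_{j+1},
    with gamma a tuple of coordinates over L_1, ..., L_n (0-indexed i, j)."""
    g = list(gamma)
    c = g[i] - g[j]          # gamma(H_alpha)
    g[i] -= c                # subtract gamma(H_alpha) * alpha ...
    g[j] += c                # ... which just swaps the two coordinates
    return tuple(g)

# The roots of sl3 in L-coordinates:
roots = set()
for i, j in itertools.permutations(range(3), 2):
    r = [0, 0, 0]
    r[i], r[j] = 1, -1
    roots.add(tuple(r))

for i, j in itertools.combinations(range(3), 2):
    # each t_alpha is an involution ...
    assert all(reflect(reflect(r, i, j), i, j) == r for r in roots)
    # ... that leaves the root set invariant
    assert {reflect(r, i, j) for r in roots} == roots
```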

Definition 1.3.6. The closed Weyl chamber $\mathcal{W}$ associated to the ordering of the roots is the set of weights $\alpha$ in the real span of the roots satisfying $\alpha(H_\gamma) \geq 0$ for every $\gamma \in \Delta^+$.

Remark 1.3.7. With respect to Construction 1.2.15,
$$\mathcal{W} = \left\{ \sum a_i L_i \;\middle|\; a_1 \geq a_2 \geq \cdots \geq a_n \right\}.$$

Geometrically, the vectors
$$L_1,\quad L_1 + L_2,\quad L_1 + L_2 + L_3,\quad \ldots,\quad L_1 + L_2 + L_3 + \cdots + L_{n-1}$$
generate the edges of the cone $\mathcal{W}$ over an $(n-2)$-simplex.

Notation 1.3.8. For an arbitrary $(n-1)$-tuple of natural numbers $(a_1, \cdots, a_{n-1}) \in \mathbb{N}^{n-1}$, denote by
$$\Gamma_{a_1,\cdots,a_{n-1}} = \Gamma_{a_1 L_1 + a_2(L_1+L_2) + \cdots + a_{n-1}(L_1+\cdots+L_{n-1})}$$
the irreducible representation of $\mathfrak{sl}_n(\mathbb{C})$ with highest weight
$$a_1 L_1 + a_2(L_1 + L_2) + \cdots + a_{n-1}(L_1 + \cdots + L_{n-1}).$$

Example 1.3.9. Figure 1.3.1 demonstrates how we utilize the involutions $t_{L_1-L_2}$, $t_{L_1-L_3}$ and $t_{L_2-L_3}$ to find all the border vectors in the case of $\mathfrak{sl}_3(\mathbb{C})$. The yellow shaded part is $\mathcal{W}$.


Figure 1.3.1: The Λ∆ of the representation Γα for sl3(C) in ΛW .

At the same time, we apply the same sl2(C) analysis to the border vectors to fill up the inner diagram forming unbroken strings of eigenvalues.

Generalization 1.3.10. Generally, we can rewrite the eigenspace decomposition $V = \bigoplus V_\lambda$ by grouping the summands into equivalence classes of eigenvalues,
$$V = \bigoplus_{[\lambda]} V_{[\lambda]} = \bigoplus_{[\lambda]} \bigoplus_{n \in \mathbb{Z}} V_{\lambda + n\alpha},$$
as subrepresentations of $V$ for $\mathfrak{s}_\alpha$. Choose $\lambda$ and $m \geq 0$ such that the set of weights forms the unbroken string
$$\lambda,\ \lambda + \alpha,\ \lambda + 2\alpha,\ \cdots,\ \lambda + m\alpha$$
with the string of integers
$$\lambda(H_\alpha),\ (\lambda + \alpha)(H_\alpha),\ (\lambda + 2\alpha)(H_\alpha),\ \cdots,\ (\lambda + m\alpha)(H_\alpha),$$
which implies $m = -\lambda(H_\alpha)$, as $(\lambda + m\alpha)(H_\alpha) = \lambda(H_\alpha) + 2m$ and the string is symmetric about zero. In other words, all the weights in $\Gamma_\gamma$ are congruent to the highest weight $\gamma$ modulo the root lattice $\Lambda_\Delta$ and lie completely in the convex hull of the images of $\gamma$ under the action of the Weyl group.

1.3.2 Multiplicity of the weights in an irreducible representation

This subsection focuses on the multiplicities of the weights in an irreducible representation and on the dimension of an irreducible representation. To find the multiplicity of a weight, we can write it as the sum of the highest weight and a few primitive negative roots. Then the multiplicity of the weight is exactly the number of linearly independent vectors obtained by applying the primitive negative roots to the highest weight vector in different orders.

We will first introduce the definition of the tensor product of representations, n-th symmetric power of a representation, and n-th exterior power of a representation.

Definition 1.3.11. Suppose $\rho_V : \mathfrak{g} \to \operatorname{End}(V)$ and $\rho_W : \mathfrak{g} \to \operatorname{End}(W)$ are two representations of the Lie algebra $\mathfrak{g}$. The tensor product of the two representations $V$ and $W$ is the space $V \otimes W$ with
$$\rho_{V \otimes W} : \mathfrak{g} \to \operatorname{End}(V \otimes W), \qquad X \mapsto \big( v \otimes w \mapsto \rho_V(X)(v) \otimes w + v \otimes \rho_W(X)(w) \big),$$
or, in other words, $\rho_{V \otimes W}(X) = \rho_V(X) \otimes \operatorname{Id} + \operatorname{Id} \otimes \rho_W(X)$.

Remark 1.3.12. We can view a symmetric power of a vector space $V$,
$$\operatorname{Sym}^n V := \{ v \in V^{\otimes n} \mid w(v) = v \text{ for all permutations } w \in S_n \},$$
and an exterior power of a vector space $V$,
$$\textstyle\bigwedge^n V := \{ v \in V^{\otimes n} \mid w(v) = \operatorname{sgn}(w)\, v \text{ for all permutations } w \in S_n \},$$
as subsets of the tensor power of the vector space $V$. The $n$-th symmetric power of a representation $V$ of a Lie algebra $\mathfrak{g}$ is the space $\operatorname{Sym}^n V \hookrightarrow V^{\otimes n}$ with the inherited Lie algebra homomorphism
$$\rho_{\operatorname{Sym}^n V} : \mathfrak{g} \to \operatorname{End}(\operatorname{Sym}^n V).$$
The $n$-th exterior power of a representation $V$ of a Lie algebra $\mathfrak{g}$ is the space $\bigwedge^n V \hookrightarrow V^{\otimes n}$ with the inherited Lie algebra homomorphism
$$\rho_{\bigwedge^n V} : \mathfrak{g} \to \operatorname{End}\!\left(\textstyle\bigwedge^n V\right).$$
One can check that the action of $\mathfrak{g}$ on $V^{\otimes n}$ preserves $\operatorname{Sym}^n V$ and $\bigwedge^n V$.
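Definition 1.3.11 can be checked numerically: realising $\rho_{V \otimes W}(X) = \rho_V(X) \otimes \operatorname{Id} + \operatorname{Id} \otimes \rho_W(X)$ via Kronecker products, the bracket is again preserved. A sketch for the standard representation of $\mathfrak{sl}_2(\mathbb{C})$ (helper names ours):

```python
import numpy as np

# The standard representation of sl2(C):
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

def tensor_action(X, dV=2, dW=2):
    """rho_{V (x) W}(X) = rho_V(X) (x) Id + Id (x) rho_W(X), here with
    rho_V = rho_W = the standard representation."""
    return np.kron(X, np.eye(dW)) + np.kron(np.eye(dV), X)

def br(X, Y):
    return X @ Y - Y @ X

# X -> rho_{V (x) W}(X) is again a Lie algebra homomorphism:
for X, Y in [(E, F), (H, E), (H, F)]:
    assert np.allclose(br(tensor_action(X), tensor_action(Y)),
                       tensor_action(br(X, Y)))
```

On $\mathbb{C}^2 \otimes \mathbb{C}^2$ the diagonal of `tensor_action(H)` reads off the weights $2, 0, 0, -2$, as expected from $\mathbb{C}^2 \otimes \mathbb{C}^2 \cong \operatorname{Sym}^2\mathbb{C}^2 \oplus \bigwedge^2\mathbb{C}^2$.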

Example 1.3.13. To illustrate, we show that the weight $L_1 + L_2 + L_3$ has multiplicity two in the irreducible representation $\Gamma_{1,1,0}$ of $\mathfrak{sl}_4$ with highest weight
$$1 \cdot L_1 + 1 \cdot (L_1 + L_2) + 0 \cdot (L_1 + L_2 + L_3) = 2L_1 + L_2,$$
contained in $V \otimes \bigwedge^2 V$. We notice that
$$L_1 + L_2 + L_3 = 2L_1 + L_2 + (L_2 - L_1) + (L_3 - L_2).$$
Choosing $v = e_1 \otimes (e_1 \wedge e_2)$ as the generator of the highest weight space, we ought to calculate
$$\begin{aligned} \mathfrak{g}_{L_2-L_1}(\mathfrak{g}_{L_3-L_2}(v)) &= \mathbb{C} \cdot E_{2,1}(E_{3,2}(e_1 \otimes (e_1 \wedge e_2))) \\ &= \mathbb{C} \cdot E_{2,1}\big(E_{3,2}(e_1) \otimes (e_1 \wedge e_2) + e_1 \otimes E_{3,2}(e_1 \wedge e_2)\big) \\ &= \mathbb{C} \cdot E_{2,1}\big(0 + e_1 \otimes (E_{3,2}(e_1) \wedge e_2) + e_1 \otimes (e_1 \wedge E_{3,2}(e_2))\big) \\ &= \mathbb{C} \cdot E_{2,1}(e_1 \otimes (e_1 \wedge e_3)) \\ &= \mathbb{C} \cdot \big(e_2 \otimes (e_1 \wedge e_3) + e_1 \otimes (e_2 \wedge e_3)\big), \end{aligned}$$
and
$$\begin{aligned} \mathfrak{g}_{L_3-L_2}(\mathfrak{g}_{L_2-L_1}(v)) &= \mathbb{C} \cdot E_{3,2}(E_{2,1}(e_1 \otimes (e_1 \wedge e_2))) \\ &= \mathbb{C} \cdot E_{3,2}\big(e_2 \otimes (e_1 \wedge e_2) + e_1 \otimes (e_2 \wedge e_2)\big) \\ &= \mathbb{C} \cdot \big(e_3 \otimes (e_1 \wedge e_2) + e_2 \otimes (e_1 \wedge e_3)\big). \end{aligned}$$
From the two equations, $\mathfrak{g}_{L_2-L_1}(\mathfrak{g}_{L_3-L_2}(v))$ and $\mathfrak{g}_{L_3-L_2}(\mathfrak{g}_{L_2-L_1}(v))$ are clearly linearly independent of each other.
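The hand computation above can be double-checked mechanically. The sketch below encodes $V \otimes \bigwedge^2 V$ for $\mathfrak{sl}_4$ as dictionaries and applies the elementary matrices $E_{2,1}$ and $E_{3,2}$ by the Leibniz rule; all names are ours, and the output matches the hand computation.

```python
def E(i, j):
    """The elementary matrix E_{i,j}: sends e_j to e_i and kills e_a for a != j
    (indices are 1-based, matching the text)."""
    return lambda a: i if a == j else None

def act(op, vec):
    """Apply op to a vector in V (x) Wedge^2 V by the Leibniz rule; vectors are
    encoded as {(a, (b, c)): coeff}, meaning e_a (x) (e_b ^ e_c) with b < c."""
    out = {}
    def add(a, b, c, coeff):
        if b == c:
            return                          # e_b ^ e_b = 0
        if b > c:
            b, c, coeff = c, b, -coeff      # reorder: e_c ^ e_b = -(e_b ^ e_c)
        out[(a, (b, c))] = out.get((a, (b, c)), 0) + coeff
    for (a, (b, c)), coeff in vec.items():
        if op(a) is not None:
            add(op(a), b, c, coeff)         # act on the V factor
        if op(b) is not None:
            add(a, op(b), c, coeff)         # act on the first wedge factor
        if op(c) is not None:
            add(a, b, op(c), coeff)         # act on the second wedge factor
    return {k: v for k, v in out.items() if v != 0}

v = {(1, (1, 2)): 1}                        # e_1 (x) (e_1 ^ e_2)
w1 = act(E(2, 1), act(E(3, 2), v))          # e_2 (x) (e_1 ^ e_3) + e_1 (x) (e_2 ^ e_3)
w2 = act(E(3, 2), act(E(2, 1), v))          # e_3 (x) (e_1 ^ e_2) + e_2 (x) (e_1 ^ e_3)
assert w1 == {(2, (1, 3)): 1, (1, (2, 3)): 1}
assert w2 == {(3, (1, 2)): 1, (2, (1, 3)): 1}
assert w1 != w2    # two independent vectors of weight L1 + L2 + L3
```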

Alternatively, we can rely on the Schur functor $\mathbb{S}_\lambda$ to determine the multiplicities of weights. In order to achieve this, we need to introduce Young tableaux.

Definition 1.3.14. A Young diagram, sometimes called a Young frame or Ferrers diagram, associated to a partition $\lambda = (\lambda_1, \ldots, \lambda_k)$ of $n$ (so $n = \lambda_1 + \cdots + \lambda_k$ and $\lambda_1 \geq \cdots \geq \lambda_k$) is a collection of boxes, or cells, arranged in left-justified rows, with a weakly decreasing number $\lambda_i$ of boxes in the $i$th row. The conjugate diagram, with conjugate partition $\lambda'$, is obtained by flipping the Young diagram associated to $\lambda$ over its main diagonal from upper left to lower right, that is, by interchanging the rows and columns of the Young diagram.

Definition 1.3.15. A Young tableau, or simply tableau, is a numbering of the boxes of a Young diagram that is weakly increasing across each row and strictly increasing down each column. A tableau is standard if the entries are the numbers from $1$ to $n$, each occurring exactly once.

Definition 1.3.16. Given a standard tableau with partition $\lambda$, we define two subgroups of the symmetric group:
$$P = \{ g \in S_n \mid g \text{ preserves each row} \} \quad \text{and} \quad Q = \{ g \in S_n \mid g \text{ preserves each column} \}.$$
After that, in the group algebra $\mathbb{C}S_n$, let
$$a_\lambda = \sum_{g \in P} e_g \quad \text{and} \quad b_\lambda = \sum_{g \in Q} \operatorname{sgn}(g) \cdot e_g$$
be the two elements corresponding to these subgroups. To this end, we introduce the Young symmetrizer $c_\lambda = a_\lambda \cdot b_\lambda \in \mathbb{C}S_n$.

Definition 1.3.17. For any finite-dimensional complex vector space $V$, the Schur functor (or Weyl module, or simply Weyl's construction) $\mathbb{S}_\lambda V$ corresponding to $\lambda$ is the image $\operatorname{Im}(c_\lambda|_{V^{\otimes n}})$ of $c_\lambda$ restricted to $V^{\otimes n}$.

The following theorem gives an explicit construction of an irreducible representation.

Theorem 1.3.18. The representation $\mathbb{S}_\lambda(\mathbb{C}^n)$ is the irreducible representation of $\mathfrak{sl}_n(\mathbb{C})$ with highest weight $\lambda_1 L_1 + \lambda_2 L_2 + \cdots + \lambda_n L_n$.

Proof. Please refer to the proof of Proposition 15.15 in [FH91]. QED

Remark 1.3.19. The proof of the theorem above in [FH91] tells us that the trace of a matrix with eigenvalues $x_1, \ldots, x_n$ acting on $\mathbb{S}_\lambda(\mathbb{C}^n)$ is the Schur polynomial
$$S_\lambda(x_1, \ldots, x_n) = \sum_\mu K_{\lambda\mu} M_\mu,$$
where the Kostka number $K_{\lambda\mu}$ gives the number of ways of filling the tableau of shape $\lambda$ with the natural number $i$ occurring $\mu_i$ times, and $M_\mu$ is the monomial $x_1^{\mu_1} \cdots x_i^{\mu_i}$. From here, the numbers $K_{\lambda\mu}$ elucidate the multiplicities of the possible weight spaces in the irreducible representation with highest weight $\lambda_1 L_1 + \lambda_2 L_2 + \cdots + \lambda_n L_n$.
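For small shapes, the Kostka numbers $K_{\lambda\mu}$ can be computed by brute-force enumeration of the fillings just described. In particular $K_{(2,1),(1,1,1)} = 2$, matching the multiplicity found in Example 1.3.13 (whose highest weight $2L_1 + L_2$ corresponds to the shape $(2,1)$). A sketch, with our own helper:

```python
def kostka(shape, content):
    """Count semistandard Young tableaux of the given shape in which the entry i
    occurs exactly content[i-1] times (rows weakly increase, columns strictly
    increase) -- a brute-force check, suitable only for small shapes."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    maxval = len(content)
    count = 0
    def fill(idx, tab, used):
        nonlocal count
        if idx == len(cells):
            count += 1
            return
        r, c = cells[idx]
        for v in range(1, maxval + 1):
            if used[v - 1] == content[v - 1]:
                continue                            # entry v used up
            if c > 0 and tab[(r, c - 1)] > v:
                continue                            # rows weakly increase
            if r > 0 and tab[(r - 1, c)] >= v:
                continue                            # columns strictly increase
            tab[(r, c)] = v
            used[v - 1] += 1
            fill(idx + 1, tab, used)
            used[v - 1] -= 1
            del tab[(r, c)]
    fill(0, {}, [0] * maxval)
    return count

assert kostka((2, 1), (1, 1, 1)) == 2   # multiplicity two, as in Example 1.3.13
assert kostka((2, 1), (2, 1)) == 1
```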

Remark 1.3.20. Moreover, we also know the dimension of $\mathbb{S}_\lambda(\mathbb{C}^n)$. Given an irreducible representation $\Gamma_{a_1,\ldots,a_{n-1}}$ with highest weight
$$a_1 L_1 + a_2(L_1 + L_2) + \ldots + a_{n-1}(L_1 + \ldots + L_{n-1}),$$
we can compute
$$\dim \Gamma_{a_1,\ldots,a_{n-1}} = \dim \mathbb{S}_\lambda(\mathbb{C}^n) = \prod_{1 \leq i < j \leq n} \frac{(a_i + \cdots + a_{j-1}) + j - i}{j - i}.$$
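The product formula can be evaluated directly; for instance, it gives dimension $8$ for the adjoint representation $\Gamma_{1,1}$ of $\mathfrak{sl}_3(\mathbb{C})$ and $20$ for the representation $\Gamma_{1,1,0}$ of Example 1.3.13. A sketch, with our own helper, using exact rational arithmetic:

```python
from fractions import Fraction

def dim_irrep(a):
    """Dimension of Gamma_{a_1,...,a_{n-1}} for sl_n via the product formula of
    Remark 1.3.20; the product of fractions always collapses to an integer."""
    n = len(a) + 1
    d = Fraction(1)
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            d *= Fraction(sum(a[i - 1:j - 1]) + (j - i), j - i)
    assert d.denominator == 1
    return int(d)

assert dim_irrep((1,)) == 2        # the standard representation of sl2
assert dim_irrep((1, 1)) == 8      # the adjoint representation of sl3
assert dim_irrep((1, 1, 0)) == 20  # Gamma_{1,1,0} of Example 1.3.13
```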

1.4 Classifying the Finite Dimensional Irreducible Representations

Eventually, we are able to classify the irreducible, finite-dimensional representations of $\mathfrak{sl}_n(\mathbb{C})$ by their highest weight vectors. For the case of $\mathfrak{sl}_2(\mathbb{C})$, we can say exactly what all the finite-dimensional irreducible representations are isomorphic to (Corollary 1.4.2).

Theorem 1.4.1. With respect to the ordering of the roots, for any $\alpha$ in the intersection of the Weyl chamber $\mathcal{W}$ and the weight lattice $\Lambda_W$, there exists a unique irreducible, finite-dimensional representation $\Gamma_\alpha$ of $\mathfrak{sl}_n$ with highest weight $\alpha$ satisfying $\alpha(H_\gamma) \geq 0$ for each $\gamma \in \Delta^+$; this gives a bijection between $\mathcal{W} \cap \Lambda_W$ and the set of irreducible representations of $\mathfrak{sl}_n$. Moreover, the weights of $\Gamma_\alpha$ consist of those elements of the weight lattice congruent to $\alpha$ modulo the root lattice $\Lambda_\Delta$ and lying in the convex hull of the set of points in $\mathfrak{h}^*$ conjugate to $\alpha$ under the Weyl group.

Proof. Note that the standard representation $V \cong \mathbb{C}^n$ of $\mathfrak{sl}_n(\mathbb{C})$ has highest weight $L_1$, the exterior power $\wedge^k V$ is irreducible with highest weight $L_1 + \ldots + L_k$, and the symmetric power $\operatorname{Sym}^k V$ is irreducible with highest weight $kL_1$. The exterior power $\wedge^k V$ and the symmetric power $\operatorname{Sym}^k V$ are irreducible because their weights occur with multiplicity $1$.

To prove the existence part, we can see that the irreducible representation $\Gamma_{a_1,\cdots,a_{n-1}}$ with highest weight $(a_1 + \cdots + a_{n-1})L_1 + \cdots + a_{n-1}L_{n-1}$ shows up inside the tensor product
$$\operatorname{Sym}^{a_1} V \otimes \operatorname{Sym}^{a_2}(\wedge^2 V) \otimes \cdots \otimes \operatorname{Sym}^{a_{n-1}}(\wedge^{n-1} V).$$
The highest weight is obtained by taking the sum of the highest weights of the irreducible representations appearing in the tensor product.

Finally, if $V$ and $W$ are two finite-dimensional irreducible representations of $\mathfrak{sl}_n(\mathbb{C})$ with highest weight vectors $v$ and $w$ having the same weight $\alpha$, then the vector $(v, w) \in V \oplus W$ is a highest weight vector of weight $\alpha$ in that representation. Let $U \subset V \oplus W$ be the subrepresentation generated by $(v, w)$. Since $U$ is irreducible by construction, it follows that the two projection maps $\pi_1 : U \to V$ and $\pi_2 : U \to W$ are isomorphisms, demonstrating the uniqueness part of the theorem. QED

In particular, we can see a beautiful implication for classifying all irreducible representations of $\mathfrak{sl}_2(\mathbb{C})$. The following is a special case of Theorem 1.4.1.

Corollary 1.4.2. Any irreducible representation of $\mathfrak{sl}_2(\mathbb{C})$ is a symmetric power of the standard representation $V \cong \mathbb{C}^2$.

Proof. For the trivial one-dimensional representation $\mathbb{C}$, the span of the single basis vector gives the representation $V^{(0)}$ with eigenvalue $0$.

Next, if $e$ and $f$ are the standard basis for the standard representation $\mathbb{C}^2$, then $H(e) = e$ and $H(f) = -f$, rendering $V = \mathbb{C}^2 = \mathbb{C}f \oplus \mathbb{C}e = V_{-1} \oplus V_1 = V^{(1)}$ a representation with highest eigenvalue $1$. Similarly, we have a basis $\{e^2, ef, f^2\}$ for the symmetric square $\operatorname{Sym}^2 \mathbb{C}^2$. After simple calculations such as

$$H(e \cdot e) = H(e) \cdot e + e \cdot H(e) = 2e \cdot e, \qquad H(e \cdot f) = H(e) \cdot f + e \cdot H(f) = 0, \qquad H(f \cdot f) = H(f) \cdot f + f \cdot H(f) = -2f \cdot f,$$

we find that $V = \operatorname{Sym}^2 \mathbb{C}^2 = \mathbb{C}f^2 \oplus \mathbb{C}ef \oplus \mathbb{C}e^2 = V_{-2} \oplus V_0 \oplus V_2 = V^{(2)}$.

For the general case where $V = \operatorname{Sym}^n(\mathbb{C}^2)$, its basis is $\{e^n, e^{n-1}f, \ldots, f^n\}$, and we compute
$$\begin{aligned} H(e^{n-i}f^i) &= H(e^{n-i})f^i + e^{n-i}H(f^i) \\ &= (n-i)e^{n-i-1}H(e)f^i + i\, e^{n-i}H(f)f^{i-1} \\ &= (n-i)e^{n-i}f^i - i\, e^{n-i}f^i = (n - 2i)e^{n-i}f^i, \end{aligned}$$
giving the eigenvalues $n, n-2, \ldots, -n$. From Theorem 1.2.10, we know that a representation whose eigenvalues of $H$ occur with multiplicity $1$ is irreducible. Therefore, any irreducible representation $V^{(n)}$ with highest eigenvalue $n$ is the $n$th symmetric power of the standard representation $V \cong \mathbb{C}^2$. QED

Chapter 2

Weyl Group Acting on the Root Lattice

In this chapter, we would like to pass from the semisimple Lie algebra $\mathfrak{sl}_n(\mathbb{C})$ to a root system via a choice of diagonalizable Cartan subalgebra $\mathfrak{h}$. Previously, we learned that the root space decomposition of the semisimple Lie algebra $\mathfrak{sl}_n(\mathbb{C})$ results from the adjoint action of the Cartan subalgebra $\mathfrak{h}$. Subsequently, we can define the so-called "simple roots", which are not sums of positive roots, using an appropriate ordering of the underlying vector space of a root system (Definition 2.2.4). These roots serve as a basis of the root lattice.

The purpose of this chapter is to develop the relation between the Weyl group and the symmetric group. In contrast to Chapter 1, instead of viewing the Weyl group from the point of view of the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$, we have the Weyl group generated by the reflections of a root system. Suppose $\Delta$ is an abstract root system, satisfying a few axioms, in a finite-dimensional vector space $E$. In principle, we can form a group
$$\mathcal{W} = \mathcal{W}(\Delta)$$
of invertible linear transformations on $E$ generated by the reflections $t_\alpha$ for $\alpha \in \Delta$. Notice that the involution $t_\alpha$ in Chapter 1 is a reflection. Here, $\mathcal{W}$ is the Weyl group of $\Delta$. In our case, we associate the Weyl group $\mathcal{W}$ with the complex semisimple Lie algebra $\mathfrak{sl}_n(\mathbb{C})$, so it is sensible to write the Weyl group as $\mathcal{W}(\mathfrak{sl}_n(\mathbb{C}))$. We will mostly be concerned with the Weyl group $\mathcal{W}$ of $\mathfrak{sl}_n(\mathbb{C})$ in this thesis.

In Section 2.1, we uncover a root system from the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$. In order to achieve this, we have to define the Killing form to get a notion of length and angle in the representation, or more accurately, in the weight diagram formed by the weights, which are linear functionals. Although an involution has already been defined in Section 1.3, we give a second, equivalent definition of the involution in terms of the Killing form. The properties of the roots in the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$ (Theorem 2.1.12) are the axioms of an abstract root system.

In Section 2.2, we see the Weyl group of $\mathfrak{sl}_n(\mathbb{C})$ as a particular instance of the general Weyl group of an abstract root system. To speed up calculations involving the involutions, we provide a third equivalent definition of the involutions in $\mathcal{W}(\mathfrak{sl}_n(\mathbb{C}))$. Theorem 2.2.10 is the most important theorem we prove in this chapter. In particular, it provides an isomorphism between the Weyl group of $\mathfrak{sl}_n(\mathbb{C})$ and the symmetric group $S_n$, a group we are extremely familiar with. We can even write down a presentation of the symmetric group $S_n$.

In Section 2.3, we present $S_4 \cong \mathcal{W}(\mathfrak{sl}_4(\mathbb{C}))$ as an illustration of a symmetric group, or Weyl group, acting on the root lattice of $\mathfrak{sl}_4(\mathbb{C})$. It serves as a perfect model for understanding the action. A few important observations can be made before we proceed to Chapter 3; we will see later that they occur in a more general setting.

2.1 Root System

In this section, we wish to identify a root system in the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$. The Killing form is the key tool we use to establish the main properties enjoyed by the roots. Another main feature of a semisimple Lie algebra is the nondegeneracy of its Killing form. This attribute plays an important role in obtaining the properties we need for a root system to hold in the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$.

2.1.1 Killing form

We first define an inner product on the Lie algebra. Besides this, we define the involution in terms of the Killing form, which is sensible from the point of view of an abstract root system.

Definition 2.1.1. Let $\mathfrak{g}$ be a Lie algebra. The Killing form $B$ of $\mathfrak{g}$ is an inner product defined by associating to each pair of elements $X, Y \in \mathfrak{g}$ the trace of the composition of their adjoint actions on $\mathfrak{g}$, that is,
$$B(X,Y) = \operatorname{Tr}(\operatorname{ad}(X) \circ \operatorname{ad}(Y) : \mathfrak{g} \to \mathfrak{g}).$$

Remark 2.1.2. It is obvious that the Killing form is a symmetric bilinear form. This is because of the identity $\operatorname{Tr}(XY) = \operatorname{Tr}(YX)$ for any endomorphisms $X, Y$ of a vector space, and the bilinearity of the adjoint map. Moreover, for all $X, Y, Z \in \mathfrak{g}$, the Killing form is associative with respect to the bracket operation, that is, $B([X,Y],Z) = B(X,[Y,Z])$. Indeed, for any endomorphisms $X, Y, Z$ of a vector space,
$$\operatorname{Tr}((XY - YX)Z) = \operatorname{Tr}(X(YZ - ZY))$$
since $\operatorname{Tr}(YXZ - XZY) = \operatorname{Tr}([Y, XZ]) = 0$.

Definition 2.1.3. Suppose $V$ is a finite-dimensional vector space and $B(\cdot,\cdot)$ is an inner product on $V \times V$. The radical of $B$ is defined by
$$\operatorname{rad} B := \{ v \in V \mid B(u, v) = 0 \text{ for all } u \in V \}.$$
Then $B$ is nondegenerate if $\operatorname{rad} B = 0$.

Remark 2.1.4. Let $V$ be a finite-dimensional vector space. Define $\varphi : V \to V^*$ by $\langle \varphi(v), u \rangle = B(v, u)$, where $\langle \cdot, \cdot \rangle$ is the pairing of the dual $V^*$ with $V$. Notice that $\ker \varphi = \operatorname{rad} B$. Thus, $\varphi$ is an isomorphism if and only if $B$ is nondegenerate.

Proposition 2.1.5. A Lie algebra g is semisimple if and only if its Killing form B is nondegenerate.

Proof. Please refer to the proof of Proposition C.10 in [Ful97]. QED
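For $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{C})$ the proposition can be verified concretely: computing $\operatorname{ad}$ on the ordered basis $(E, H, F)$ gives the Gram matrix of $B$, which is visibly nondegenerate (and $B(H, H) = 8$). A numerical sketch, with helper names of our own:

```python
import numpy as np

# Basis of sl2(C):
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [E, H, F]

def ad_matrix(X):
    """Matrix of ad(X) = [X, -] on the ordered basis (E, H, F)."""
    cols = []
    for Y in basis:
        Z = X @ Y - Y @ X
        # A traceless 2x2 matrix Z = aE + bH + cF looks like [[b, a], [c, -b]]:
        cols.append([Z[0, 1], Z[0, 0], Z[1, 0]])
    return np.array(cols).T

def killing(X, Y):
    return np.trace(ad_matrix(X) @ ad_matrix(Y))

gram = np.array([[killing(X, Y) for Y in basis] for X in basis])
assert np.isclose(killing(H, H), 8)     # B(H, H) = 8 for sl2
assert abs(np.linalg.det(gram)) > 1e-9  # B is nondegenerate: sl2 is semisimple
```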

Remark 2.1.6. Recall the adjoint representation of $\mathfrak{g} = \mathfrak{sl}_n(\mathbb{C})$. Following from Remark 2.1.4 and Proposition 2.1.5, we have an isomorphism $\varphi : \mathfrak{h} \to \mathfrak{h}^*$ associating to $H$ the functional $l_H(X) = \langle H, X \rangle$. To see this, suppose first that $H_\alpha := [E_\alpha, F_\alpha]$ is the commutator of elements $E_\alpha \in \mathfrak{g}_\alpha$ and $F_\alpha \in \mathfrak{g}_{-\alpha}$. Then
$$\begin{aligned} B(H_\alpha, H_\alpha) &= B([E_\alpha, F_\alpha], H_\alpha) && \text{by construction,} \\ &= B(E_\alpha, [F_\alpha, H_\alpha]) && \text{by associativity of the bracket,} \\ &= B(E_\alpha, \alpha(H_\alpha) F_\alpha) && \text{by definition,} \\ &= \alpha(H_\alpha) B(E_\alpha, F_\alpha) && \text{by bilinearity,} \\ &= 2B(E_\alpha, F_\alpha) && \text{by construction,} \\ &\neq 0. \end{aligned}$$

Indeed, if $B(E_\alpha, F_\alpha) = 0$, then $B(H_\alpha, H) = 0$ for every $H \in \mathfrak{h}$, contradicting the fact that $B$ is nondegenerate on $\mathfrak{h}$. Let $T_\alpha$ be the unique element of $\mathfrak{h}$ such that, for all $H \in \mathfrak{h}$,
$$B(T_\alpha, H) = \alpha(H).$$

Hence, with a similar calculation as before, for all $H \in \mathfrak{h}$,
$$B(T_\alpha, H) = \alpha(H) = B(H_\alpha, H)/B(E_\alpha, F_\alpha) = B\big(H_\alpha/B(E_\alpha, F_\alpha), H\big),$$
which implies
$$T_\alpha = H_\alpha / B(E_\alpha, F_\alpha) = 2H_\alpha / B(H_\alpha, H_\alpha).$$

We now define the Killing form $B^*$ on $\mathfrak{h}^*$ by $B^*(\gamma, \alpha) = B(T_\gamma, T_\alpha)$.

Remark 2.1.7. For every root $\alpha$,
$$B^*(\alpha, \alpha) = B(T_\alpha, T_\alpha) = B\!\left(\frac{2H_\alpha}{B(H_\alpha,H_\alpha)}, \frac{2H_\alpha}{B(H_\alpha,H_\alpha)}\right) = \frac{4}{B(H_\alpha,H_\alpha)^2}\, B(H_\alpha,H_\alpha) = \frac{4}{B(H_\alpha,H_\alpha)}.$$

Definition 2.1.8. For a root $\alpha$, the involution $t_\alpha$ can also be expressed in terms of roots by the formula
$$t_\alpha(\gamma) = \gamma - \frac{2B^*(\gamma, \alpha)}{B^*(\alpha, \alpha)}\, \alpha.$$

Remark 2.1.9. This definition indeed agrees with Definition 1.3.4. To check, by Remark 2.1.7,
$$\frac{2B^*(\gamma, \alpha)}{B^*(\alpha, \alpha)} = \frac{2B(T_\gamma, T_\alpha)}{B(T_\alpha, T_\alpha)} = 2\,\frac{B(H_\alpha,H_\alpha)}{4}\, B\!\left(T_\gamma, \frac{2H_\alpha}{B(H_\alpha,H_\alpha)}\right) = B(T_\gamma, H_\alpha) = \gamma(H_\alpha),$$
as desired.

2.1.2 Properties of roots

Here, we are verifying the validity of Construction 1.3.1 and addressing Fact 1.2.14 at the same time. The main theorem, Theorem 2.1.12, shows that the roots of the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$ constitute a root system.

The following proposition shows the orthogonality of root spaces and determines the dimension of the subalgebra $[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}]$ for $\alpha \in \Delta$.

Proposition 2.1.10. Suppose g = sln(C).

(i) The subspaces $\mathfrak{g}_\alpha$ and $\mathfrak{g}_\gamma$ are orthogonal if $\alpha + \gamma \neq 0$.

(ii) If $\alpha \in \Delta$, then $[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}]$ is one dimensional, with basis $T_\alpha$.

Proof. (i) Note that for $X_\alpha \in \mathfrak{g}_\alpha$, $Y_\gamma \in \mathfrak{g}_\gamma$ and $H \in \mathfrak{h}$, by Remark 2.1.2,
$$0 = B([H, X_\alpha], Y_\gamma) + B(X_\alpha, [H, Y_\gamma]) = (\alpha(H) + \gamma(H)) B(X_\alpha, Y_\gamma).$$
If we choose $H$ with $\alpha(H) + \gamma(H) \neq 0$, then $B(X_\alpha, Y_\gamma) = 0$ for all $X_\alpha \in \mathfrak{g}_\alpha$ and all $Y_\gamma \in \mathfrak{g}_\gamma$, proving that $\mathfrak{g}_\alpha$ and $\mathfrak{g}_\gamma$ are orthogonal.

(ii) For $E_\alpha \in \mathfrak{g}_\alpha$, $F_{-\alpha} \in \mathfrak{g}_{-\alpha}$ and $H \in \mathfrak{h}$, we have
$$B([E_\alpha, F_{-\alpha}], H) = B(E_\alpha, [F_{-\alpha}, H]) = \alpha(H) B(E_\alpha, F_{-\alpha}) = B(E_\alpha, F_{-\alpha}) B(T_\alpha, H),$$
which gives
$$B\big([E_\alpha, F_{-\alpha}] - B(E_\alpha, F_{-\alpha}) T_\alpha,\, H\big) = 0.$$
Thus, by the nondegeneracy of $B$ on $\mathfrak{h}$, $[E_\alpha, F_{-\alpha}] = B(E_\alpha, F_{-\alpha}) T_\alpha$, as desired. QED

We prove this theorem in response to Construction 1.3.1.

Theorem 2.1.11. Suppose g = sln(C).

(i) ∆ is symmetric about the origin, that is, if α ∈ ∆ is a root, then −α ∈ ∆ is a root as well.

(ii) If $\alpha \in \Delta$ and $E_\alpha \in \mathfrak{g}_\alpha$, then there exists $F_{-\alpha} \in \mathfrak{g}_{-\alpha}$ such that $E_\alpha$, $F_{-\alpha}$, and $H_\alpha = [E_\alpha, F_{-\alpha}]$ span a three-dimensional subalgebra $\mathfrak{s}_\alpha$ of $\mathfrak{g}$ isomorphic to $\mathfrak{sl}_2(\mathbb{C})$ via
$$E_\alpha \mapsto \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad F_{-\alpha} \mapsto \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad H_\alpha \mapsto \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

(iii) Every root space $\mathfrak{g}_\alpha$ is one dimensional.

Proof. (i) Suppose $\alpha \in \Delta$. If $-\alpha \notin \Delta$, then, by Proposition 2.1.10(i), $B(\mathfrak{g}_\alpha, \mathfrak{g}_\gamma) = 0$ for every root space $\mathfrak{g}_\gamma$ and $B(\mathfrak{g}_\alpha, \mathfrak{h}) = 0$, contradicting the nondegeneracy of $B$.

(ii) By part (i) and Proposition 2.1.10(ii), we can find $F_{-\alpha} \in \mathfrak{g}_{-\alpha}$ and $H_\alpha = [E_\alpha, F_{-\alpha}] \neq 0$ with $\alpha(H_\alpha) \neq 0$. By suitably adjusting the scalars, they generate a subalgebra $\mathfrak{s}_\alpha$ isomorphic to $\mathfrak{sl}_2(\mathbb{C})$. Using a key fact in Proposition 8.3(e) in [Hum80], we know that we can choose $F_{-\alpha}$ such that $B(E_\alpha, F_{-\alpha}) = 2/B(T_\alpha, T_\alpha)$. Let $H_\alpha = 2T_\alpha/B(T_\alpha, T_\alpha)$. We get
$$[E_\alpha, F_{-\alpha}] = B(E_\alpha, F_{-\alpha}) T_\alpha = \frac{2}{B(T_\alpha, T_\alpha)} \cdot \frac{B(T_\alpha, T_\alpha) H_\alpha}{2} = H_\alpha$$
by virtue of Proposition 2.1.10(ii). Furthermore, by the uniqueness of $T_\alpha$ and the fact that $T_\alpha \in \mathfrak{h}$,
$$[H_\alpha, E_\alpha] = \left[\frac{2T_\alpha}{B(T_\alpha, T_\alpha)}, E_\alpha\right] = \frac{2}{\alpha(T_\alpha)}[T_\alpha, E_\alpha] = \frac{2\alpha(T_\alpha)}{\alpha(T_\alpha)} E_\alpha = 2E_\alpha,$$
as desired. By the same token, $[H_\alpha, F_{-\alpha}] = -2F_{-\alpha}$. Moreover,
$$\alpha(H_\alpha) = B(T_\alpha, H_\alpha) = B\!\left(T_\alpha, \frac{2T_\alpha}{B(T_\alpha, T_\alpha)}\right) = 2.$$
Hence, $E_\alpha$, $F_{-\alpha}$, and $H_\alpha$ span a three-dimensional subalgebra isomorphic to $\mathfrak{sl}_2(\mathbb{C})$.

(iii) By part (i), we can pick $E_\alpha \in \mathfrak{g}_\alpha$, $F_{-\alpha} \in \mathfrak{g}_{-\alpha}$, and $H_\alpha = [E_\alpha, F_{-\alpha}]$. Then it is not hard to realize that the subrepresentation
$$\mathfrak{g}' = \mathbb{C}F_{-\alpha} \oplus \mathbb{C}H_\alpha \oplus \bigoplus_{n > 0} \mathfrak{g}_{n\alpha}$$
is invariant under the adjoint action of $F_{-\alpha}$, $H_\alpha$, and $E_\alpha$.

Next, we claim that $\operatorname{ad}(H_\alpha)$ has trace $0$ in its action on $\mathfrak{g}'$. To see this,
$$\operatorname{Tr}(\operatorname{ad}(H_\alpha)) = \operatorname{Tr}(\operatorname{ad}([E_\alpha, F_{-\alpha}])) = \operatorname{Tr}([\operatorname{ad}(E_\alpha), \operatorname{ad}(F_{-\alpha})]) = 0.$$
However, by the action of $H_\alpha$ on $\mathfrak{g}'$, its trace is
$$-2 + 0 + \sum_{n=1}^{\infty} 2n \dim \mathfrak{g}_{n\alpha} = 0.$$
It follows that
$$\dim \mathfrak{g}_{n\alpha} = \begin{cases} 1, & \text{if } n = 1, \\ 0, & \text{if } n = 2, 3, 4, \ldots. \end{cases}$$
Thus, we obtain $\mathfrak{g}_\alpha = \mathbb{C}E_\alpha$, which is one dimensional. QED

This is the main theorem showing the four properties of the roots in the adjoint representation of $\mathfrak{sl}_n(\mathbb{C})$, which are then used to generalise the notion of a root system.

Theorem 2.1.12. Suppose g = sln(C).

(i) The roots ∆ span h∗.

(ii) The roots of sln(C) are invariant under the Weyl group.

(iii) If $\alpha$ is a root, then the only multiples of $\alpha$ which are roots are $\pm\alpha$.

(iv) For any $\alpha, \gamma \in \Delta$, $\langle \alpha, \gamma \rangle := \dfrac{2B^*(\alpha, \gamma)}{B^*(\gamma, \gamma)} \in \mathbb{Z}$.

Proof. (i) If $\Delta$ fails to span $\mathfrak{h}^*$, then there exists a nonzero $H \in \mathfrak{h}$ such that $\alpha(H) = 0$ for every root $\alpha$. Then $\operatorname{ad}(H) = 0$, that is, $H$ is in the center of $\mathfrak{g}$. But $\mathfrak{g}$ is semisimple, so $H = 0$, a contradiction.

(ii) Suppose $\gamma$ and $\alpha$ are roots of $\mathfrak{g}$. It suffices to show that the roots congruent to $\lambda$ modulo $\alpha$ are invariant under the reflection $t_\alpha$. Consider the subrepresentation
$$U = \bigoplus_{n \in \mathbb{Z}} V_{\lambda + n\alpha}$$
of the subalgebra $\mathfrak{s}_\alpha$. For some $m \geq n$, look at the string of roots
$$\lambda + n\alpha,\ \lambda + (n+1)\alpha,\ \ldots,\ \lambda + m\alpha$$
with the string evaluated at $H_\alpha$:
$$\lambda(H_\alpha) + 2n,\ \lambda(H_\alpha) + 2(n+1),\ \ldots,\ \lambda(H_\alpha) + 2m.$$
By virtue of Theorem 2.1.11(ii) and the symmetry of such strings about zero, $\lambda(H_\alpha)$ must be $-(m + n) \in \mathbb{Z}$. Finally, for $k \leq m - n$,
$$\begin{aligned} t_\alpha(\lambda + (n+k)\alpha) &= \lambda + (n+k)\alpha - (\lambda + (n+k)\alpha)(H_\alpha)\,\alpha && \text{by definition,} \\ &= \lambda + (n+k)\alpha - \lambda(H_\alpha)\alpha - (n+k)\alpha(H_\alpha)\alpha \\ &= \lambda + (n+k)\alpha + (m+n)\alpha - 2(n+k)\alpha && \text{by the above,} \\ &= \lambda + (m-k)\alpha, \end{aligned}$$
which is still in $U$.

(iii) Suppose $\alpha$ is a root. We want to show that $\gamma = n\alpha$ is a root if and only if $n = \pm 1$. The "if" direction follows from Theorem 2.1.11(i). For the "only if" direction, assume that $\gamma = n\alpha$ is a root. Applying $\gamma$ to the unique elements $H_\gamma$ and $H_\alpha$, we get two equations:
$$\gamma(H_\gamma) = n\alpha(H_\gamma) = 2 \;\Rightarrow\; \alpha(H_\gamma) = \frac{2}{n}, \qquad \gamma(H_\alpha) = n\alpha(H_\alpha) \;\Rightarrow\; \gamma(H_\alpha) = 2n,$$
and by part (ii), $\frac{2}{n}$ and $2n$ must both be integers. This restricts the possibilities for $n$ to $\pm 1$, $\pm 2$ or $\pm\frac{1}{2}$. However, $2\alpha$ is not in $\Delta$ by Theorem 2.1.11(iii), and $\frac{1}{2}\alpha$ cannot be in $\Delta$ either, since $\alpha$ would then be twice a root. Thus, $n$ can only be $\pm 1$.

(iv) This follows from part (ii) together with Remark 2.1.9. QED

2.2 Reflections in the Root Lattice

In this section, we view the Weyl group purely as a reflection group acting on a subset of a Euclidean space. Finally, we find that $\mathcal{W}(\mathfrak{sl}_n(\mathbb{C}))$ is isomorphic to $S_n$ (Theorem 2.2.10).

2.2.1 Definition of a Weyl group

We introduce the axioms of an abstract root system.

Definition 2.2.1. A Euclidean space $E$ is a finite-dimensional real vector space equipped with a positive definite symmetric bilinear form $\langle \alpha, \gamma \rangle$.

Definition 2.2.2. A subset $\Delta$ of the Euclidean space $E$ is called a root system in $E$ if the following axioms are satisfied:

(R1) ∆ is finite, spans E, and does not contain 0.

(R2) If α ∈ ∆, the only multiples of α in ∆ are ±α.

(R3) If α ∈ ∆, the reflection tα leaves ∆ invariant.

(R4) If $\alpha, \gamma \in \Delta$, then $\langle \gamma, \alpha \rangle \in \mathbb{Z}$.

Remark 2.2.3. In the previous section, we verified that our root system of the adjoint representation of sln(C) satisfies the axioms above.

Definition 2.2.4. A subset P of ∆ is called a base if

(B1) $P$ is a basis of $E$,

(B2) Each root γ can be written as a sum of α ∈ P with all non-negative or all non-positive integral coefficients.

The roots in P are then called simple.

Remark 2.2.5. This definition agrees with Definition 1.2.18.

Theorem 2.2.6. Every abstract root system $\Delta$ has a base $P$.

Proof. Please refer to the proof of Theorem 10.1 in [Hum72]. QED

Now, let us formalise the definition of the Weyl group. Recall that a reflection in a Euclidean space $E$ is an invertible linear transformation fixing a hyperplane of codimension one pointwise and sending every vector orthogonal to that hyperplane to its negative.

Definition 2.2.7. Let ∆ be a root system in E. The Weyl group W of ∆ is generated by the reflections tα for α ∈ ∆.

In Section 1.2.1, we obtained a root set from the adjoint representation of the semisimple Lie algebra $\mathfrak{sl}_n(\mathbb{C})$. The reflections of the lattice formed by the roots generate the Weyl group of $\mathfrak{sl}_n(\mathbb{C})$. Denote the positive simple root $L_i - L_{i+1}$ by $\alpha_i$. One can verify that these roots do, in fact, form a base. To ease the manipulations involving the reflections of Definition 1.3.4, we redefine the following.

Definition 2.2.8. To simplify our computation later, we define
$$\langle \alpha_i, \alpha_j \rangle := \begin{cases} 2, & \text{if } i = j, \\ -1, & \text{if } |i - j| = 1, \\ 0, & \text{if } |i - j| > 1, \end{cases}$$
and extend the definition bilinearly. With this convention, we redefine the reflecting hyperplane for $\alpha$ as
$$\Omega_\alpha = \{ \gamma \in E \mid \langle \gamma, \alpha \rangle = 0 \}$$
and the reflection as
$$t_\alpha(\gamma) = \gamma - \langle \gamma, \alpha \rangle \alpha.$$

Remark 2.2.9. It is easy to show that these definitions agree with those in Definition 1.3.4.
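As a quick sanity check on this pairing and reflection formula, the following sketch (our own code, not from the thesis; the names `pairing` and `reflect` are our assumptions) implements Definition 2.2.8 for sl4(C), with roots written as integer vectors in the basis of simple roots.

```python
# A minimal sketch of the pairing of Definition 2.2.8 for sl_4(C).
# Roots are integer vectors in the basis (alpha_1, alpha_2, alpha_3);
# the pairing is extended bilinearly from <alpha_i, alpha_j>.

def pairing(g, a):
    """Bilinear extension of <alpha_i, alpha_j> = 2, -1, 0 for |i-j| = 0, 1, >1."""
    n = len(g)
    A = [[2 if i == j else -1 if abs(i - j) == 1 else 0
          for j in range(n)] for i in range(n)]
    return sum(g[i] * a[j] * A[i][j] for i in range(n) for j in range(n))

def reflect(a, g):
    """t_alpha(gamma) = gamma - <gamma, alpha> alpha."""
    c = pairing(g, a)
    return tuple(gi - c * ai for gi, ai in zip(g, a))

a1, a2 = (1, 0, 0), (0, 1, 0)
print(reflect(a1, a1))   # t_{alpha_1}(alpha_1) = -alpha_1
print(reflect(a1, a2))   # t_{alpha_1}(alpha_2) = alpha_1 + alpha_2
```

The same two helpers are enough to reproduce every reflection computation in the proofs below.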

2.2.2 An isomorphism between W(sln(C)) and Sn

We will see that the Weyl group W of the complex semisimple Lie algebra sln(C) is, in fact, the symmetric group Sn. The isomorphism between them makes calculations of the action of W on the root lattice easier.

Theorem 2.2.10. The Weyl group W(sln(C)), generated by the reflections t_{α_i} associated to the positive simple roots α_i, is isomorphic to Sn.

Proof. Denote the transposition (i i + 1) by si. First, note that Sn has a presentation

⟨s_i, 1 ≤ i ≤ n − 1 | s_i² = e, (s_i s_{i+1})³ = e, (s_i s_j)² = e for |i − j| > 1⟩.

We will show that the Weyl group satisfies the three relations in Sn. To achieve this, we compute, for all γ ∈ ∆,

t_{α_i}²(γ) = t_{α_i} t_{α_i}(γ)

= t_{α_i}(γ − ⟨γ, α_i⟩α_i)

= γ − ⟨γ, α_i⟩α_i − ⟨γ − ⟨γ, α_i⟩α_i, α_i⟩α_i

= γ − ⟨γ, α_i⟩α_i − ⟨γ, α_i⟩α_i + 2⟨γ, α_i⟩α_i, as ⟨α_i, α_i⟩ = 2,

= γ = e(γ),

which implies t_{α_i}² = e, where e is the identity element of the Weyl group.

For each γ ∈ ∆ and |i − j| > 1,

t_{α_i} t_{α_j}(γ) = t_{α_i}(γ − ⟨γ, α_j⟩α_j)

= γ − ⟨γ, α_j⟩α_j − ⟨γ − ⟨γ, α_j⟩α_j, α_i⟩α_i

= γ − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_i, as ⟨α_j, α_i⟩ = 0,

= γ − ⟨γ, α_i⟩α_i − ⟨γ − ⟨γ, α_i⟩α_i, α_j⟩α_j, as ⟨α_i, α_j⟩ = 0,

= t_{α_j}(γ − ⟨γ, α_i⟩α_i)

= t_{α_j} t_{α_i}(γ), as desired.

Moreover, for each γ ∈ ∆ and |i − j| = 1, using ⟨α_i, α_j⟩ = −1,

t_{α_i} t_{α_j} t_{α_i}(γ) = t_{α_i}(γ − ⟨γ, α_i⟩α_i − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_j)

= γ − ⟨γ, α_i⟩α_i − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_j − ⟨γ − ⟨γ, α_i⟩α_i − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_j, α_i⟩α_i

= γ − ⟨γ, α_i⟩α_i − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_j − (⟨γ, α_i⟩ − 2⟨γ, α_i⟩ + ⟨γ, α_j⟩ + ⟨γ, α_i⟩)α_i

= γ − ⟨γ, α_i⟩α_i − ⟨γ, α_j⟩α_j − ⟨γ, α_i⟩α_j − ⟨γ, α_j⟩α_i,

where the final expression is symmetric in α_i and α_j. (One can also verify that t_{α_i} t_{α_j} t_{α_i}(γ) = t_{α_j} t_{α_i} t_{α_j}(γ) by the analogous calculation on t_{α_j} t_{α_i} t_{α_j}(γ).) Hence, this implies that

t_{α_i} t_{α_j} t_{α_i} = t_{α_j} t_{α_i} t_{α_j}.

Given these relations, we can construct a surjective group homomorphism g from Sn to W by sending s_i to t_{α_i}. Note that W acts on ∆; in other words, this action induces a group homomorphism f from W to the group Perm(∆) of bijective maps from ∆ to itself. Now, the composition Sn →(g) W →(f) Perm(∆) is injective. Indeed, the only element w such that w(α) = α for all α ∈ ∆ is the identity: by Theorem 3.2.1, which will be proved in a later chapter, since w(α) ∈ ∆⁺ for all α ∈ ∆⁺, there is no simple reflection with ℓ(w t_{α_i}) < ℓ(w), and so w = e. Thus f ∘ g, and hence g, is injective. We conclude that g is an isomorphism, that is, W ≅ Sn. QED
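The three relations verified in this proof can also be checked by machine. The following sketch (our own code, not from the thesis; the helper names are assumptions) realises s_i as the transposition (i i+1) acting on {0, ..., n−1} and verifies the defining relations of S_4.

```python
# A quick check that the transpositions s_i = (i i+1) in S_4 satisfy the
# three defining relations of the presentation of S_n.

def s(i, n):
    """The transposition (i i+1), 0-indexed, as a permutation tuple."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """(p o q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(len(p)))

def power(p, m):
    r = tuple(range(len(p)))
    for _ in range(m):
        r = compose(r, p)
    return r

n = 4
e = tuple(range(n))
gens = [s(i, n) for i in range(n - 1)]
assert all(compose(g, g) == e for g in gens)                 # s_i^2 = e
assert all(power(compose(gens[i], gens[i + 1]), 3) == e
           for i in range(n - 2))                            # (s_i s_{i+1})^3 = e
assert all(power(compose(gens[i], gens[j]), 2) == e
           for i in range(n - 1) for j in range(n - 1)
           if abs(i - j) > 1)                                # (s_i s_j)^2 = e
print("all Coxeter relations of S_4 verified")
```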

2.3 An example – S4

Let’s study how S4 acts on the roots of the adjoint representation of sl4(C). Understanding this example will help us absorb the later material better.

First, we have the presentation of S4:

⟨s_i, 1 ≤ i ≤ 3 | s_i² = e, (s_i s_{i+1})³ = e, (s_i s_j)² = e for |i − j| > 1⟩,

with elements

{e, (12), (23), (34), (13), (14), (24), (12)(34), (13)(24), (14)(23), (123), (234), (124), (134), (132), (143), (142), (243), (1234), (1324), (1342), (1243), (1423), (1432)}.

Then, we pick out the generators s_1 = (12), s_2 = (23), and s_3 = (34). We also denote the simple roots by α_1 = L_1 − L_2, α_2 = L_2 − L_3, and α_3 = L_3 − L_4. Now, an action of S4 is set up such that an element w ∈ S4 acts on the root lattice by permuting the subscripts i of the L_i. It is not hard to establish that t_{α_1} = s_1, t_{α_2} = s_2, and t_{α_3} = s_3.

Figure 2.3.1 shows the root lattice of sl4(C). We have drawn coloured lines on the root lattice to indicate the hyperplanes across the lattice; these hyperplanes are precisely where the reflections happen. We can immediately see that there are six reflections in W(sl4(C)).

[Figure 2.3.1 here: the root lattice of sl4(C), showing the twelve roots ±α_1, ±α_2, ±α_3, ±(α_1 + α_2), ±(α_2 + α_3), ±(α_1 + α_2 + α_3) — equivalently the L_i − L_j for i ≠ j — together with the six reflecting hyperplanes Ω_{α_1}, Ω_{α_2}, Ω_{α_3}, Ω_{α_1+α_2}, Ω_{α_2+α_3}, Ω_{α_1+α_2+α_3}.]

Figure 2.3.1: The root lattice of sl4(C).

The table below demonstrates the action of each word w ∈ S4 on the roots. Notice that we write every w in S4 as an expression of minimal length. This table is worth having because a few interesting observations can be made from it; they will be proved as theorems in the next chapter.

We first observe that the word s_3 s_2 s_1 s_2 s_3 s_2 is the longest element of S4, and it is unique. Moreover, it is a product of six generators, corresponding to the six positive roots in the adjoint representation of sl4(C). In general, the minimal length of a word w in S4 is exactly the number of positive roots that are sent to negative roots by the action of w (Corollary 3.2.3).
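This observation can be illustrated computationally. The sketch below (our own code, with assumed helper names) multiplies out the word s3 s2 s1 s2 s3 s2 in S_4 and counts its inversions, i.e. the pairs i < j whose order is reversed, which correspond to positive roots L_i − L_j sent to negative roots.

```python
# An illustrative computation of the longest word in S_4: the word
# s3 s2 s1 s2 s3 s2 as a permutation, and its number of inversions.

def s(i, n=4):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]   # s_i swaps positions i, i+1 (1-indexed)
    return tuple(p)

def compose(p, q):
    return tuple(p[q[k]] for k in range(len(p)))

def inversions(p):
    """Pairs i < j with p(i) > p(j); these count positive roots sent negative."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

w0 = tuple(range(4))
for i in [3, 2, 1, 2, 3, 2]:          # the word s3 s2 s1 s2 s3 s2
    w0 = compose(w0, s(i))
print(w0, inversions(w0))             # the order-reversing permutation (14)(23)
```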

Furthermore, we see that the reflections are the elements of the form w s_i w⁻¹ for some s_i and w in S4. The fact that they are reflections can be shown directly from the computations in the table, or seen from their action in Figure 2.3.1.

Before proving that these observations hold in a more general setting, we need a sufficient language to describe the elements in the Weyl group. This brings us to Chapter 3.

(12): reduced expression s1; reflection. α1 ↦ −α1, α2 ↦ α1+α2, α3 ↦ α3, α1+α2 ↦ α2, α2+α3 ↦ α1+α2+α3, α1+α2+α3 ↦ α2+α3.
(23): reduced expression s2; reflection. α1 ↦ α1+α2, α2 ↦ −α2, α3 ↦ α2+α3, α1+α2 ↦ α1, α2+α3 ↦ α3, α1+α2+α3 ↦ α1+α2+α3.
(34): reduced expression s3; reflection. α1 ↦ α1, α2 ↦ α2+α3, α3 ↦ −α3, α1+α2 ↦ α1+α2+α3, α2+α3 ↦ α2, α1+α2+α3 ↦ α1+α2.
(13): reduced expression s1s2s1; reflection. α1 ↦ −α2, α2 ↦ −α1, α3 ↦ α1+α2+α3, α1+α2 ↦ −(α1+α2), α2+α3 ↦ α2+α3, α1+α2+α3 ↦ α3.
(24): reduced expression s2s3s2; reflection. α1 ↦ α1+α2+α3, α2 ↦ −α3, α3 ↦ −α2, α1+α2 ↦ α1+α2, α2+α3 ↦ −(α2+α3), α1+α2+α3 ↦ α1.
(14): reduced expression s3s2s1s2s3; reflection. α1 ↦ −(α2+α3), α2 ↦ α2, α3 ↦ −(α1+α2), α1+α2 ↦ −α3, α2+α3 ↦ −α1, α1+α2+α3 ↦ −(α1+α2+α3).
(12)(34): reduced expression s1s3. α1 ↦ −α1, α2 ↦ α1+α2+α3, α3 ↦ −α3, α1+α2 ↦ α2+α3, α2+α3 ↦ α1+α2, α1+α2+α3 ↦ α2.
(13)(24): reduced expression s2s1s3s2. α1 ↦ α3, α2 ↦ −(α1+α2+α3), α3 ↦ α1, α1+α2 ↦ −(α1+α2), α2+α3 ↦ −(α2+α3), α1+α2+α3 ↦ −α2.
(14)(23): reduced expression s3s2s1s2s3s2. α1 ↦ −α3, α2 ↦ −α2, α3 ↦ −α1, α1+α2 ↦ −(α2+α3), α2+α3 ↦ −(α1+α2), α1+α2+α3 ↦ −(α1+α2+α3).
(123): reduced expression s1s2. α1 ↦ α2, α2 ↦ −(α1+α2), α3 ↦ α1+α2+α3, α1+α2 ↦ −α1, α2+α3 ↦ α3, α1+α2+α3 ↦ α2+α3.
(132): reduced expression s2s1. α1 ↦ −(α1+α2), α2 ↦ α1, α3 ↦ α2+α3, α1+α2 ↦ −α2, α2+α3 ↦ α1+α2+α3, α1+α2+α3 ↦ α3.
(124): reduced expression s1s2s3s2. α1 ↦ α2+α3, α2 ↦ −α3, α3 ↦ −(α1+α2), α1+α2 ↦ α2, α2+α3 ↦ −(α1+α2+α3), α1+α2+α3 ↦ −α1.
(142): reduced expression s2s3s2s1. α1 ↦ −(α1+α2+α3), α2 ↦ α1+α2, α3 ↦ −α2, α1+α2 ↦ −α3, α2+α3 ↦ α1, α1+α2+α3 ↦ −(α2+α3).
(134): reduced expression s1s2s1s3. α1 ↦ −α2, α2 ↦ α2+α3, α3 ↦ −(α1+α2+α3), α1+α2 ↦ α3, α2+α3 ↦ −α1, α1+α2+α3 ↦ −(α1+α2).
(143): reduced expression s3s1s2s1. α1 ↦ −(α2+α3), α2 ↦ −α1, α3 ↦ α1+α2, α1+α2 ↦ −(α1+α2+α3), α2+α3 ↦ α2, α1+α2+α3 ↦ −α3.
(234): reduced expression s2s3. α1 ↦ α1+α2, α2 ↦ α3, α3 ↦ −(α2+α3), α1+α2 ↦ α1+α2+α3, α2+α3 ↦ −α2, α1+α2+α3 ↦ α1.
(243): reduced expression s3s2. α1 ↦ α1+α2+α3, α2 ↦ −(α2+α3), α3 ↦ α2, α1+α2 ↦ α1, α2+α3 ↦ −α3, α1+α2+α3 ↦ α1+α2.
(1234): reduced expression s1s2s3. α1 ↦ α2, α2 ↦ α3, α3 ↦ −(α1+α2+α3), α1+α2 ↦ α2+α3, α2+α3 ↦ −(α1+α2), α1+α2+α3 ↦ −α1.
(1243): reduced expression s1s3s2. α1 ↦ α2+α3, α2 ↦ −(α1+α2+α3), α3 ↦ α1+α2, α1+α2 ↦ −α1, α2+α3 ↦ −α3, α1+α2+α3 ↦ α2.
(1324): reduced expression s1s2s3s1s2. α1 ↦ α3, α2 ↦ −(α2+α3), α3 ↦ −α1, α1+α2 ↦ −α2, α2+α3 ↦ −(α1+α2+α3), α1+α2+α3 ↦ −(α1+α2).
(1342): reduced expression s2s1s3. α1 ↦ −(α1+α2), α2 ↦ α1+α2+α3, α3 ↦ −(α2+α3), α1+α2 ↦ α3, α2+α3 ↦ α1, α1+α2+α3 ↦ −α2.
(1423): reduced expression s2s1s3s2s1. α1 ↦ −α3, α2 ↦ −(α1+α2), α3 ↦ α1, α1+α2 ↦ −(α1+α2+α3), α2+α3 ↦ −α2, α1+α2+α3 ↦ −(α2+α3).
(1432): reduced expression s3s2s1. α1 ↦ −(α1+α2+α3), α2 ↦ α1, α3 ↦ α2, α1+α2 ↦ −(α2+α3), α2+α3 ↦ α1+α2, α1+α2+α3 ↦ −α3.

Table 2.3.1: The action of S4 ≅ W(sl4(C)) on the roots of sl4(C). The first six elements listed are the reflections.

Chapter 3

Combinatorics in the Weyl Group of sln(C)

In this chapter, we derive basic combinatorial properties of the Weyl group W. Beginning with the work of W. Killing and E. Cartan, the Weyl group of a semisimple Lie algebra has played a crucial role in representation theory.

In 1934, H. S. M. Coxeter showed that every reflection group is a Coxeter group. Here, a reflection group refers to a group generated by reflections of a finite-dimensional Euclidean space. Since W(sln(C)) ≅ Sn, it is easy to see that the Weyl group is a prime example of a Coxeter group. We try to understand Coxeter groups algebraically, geometrically and combinatorially. Perhaps the geometric aspect will become more apparent in the next chapter when dealing with the root set.

In Section 3.1, we define a Coxeter system, which generalises the Weyl group. Therefore, any property that holds in a Coxeter system holds equally in the Weyl group. Lemma 3.1.12 constructs a homomorphism to signed reflections in W(sln(C)), which is related to the action of the reflections on the root set ∆ of sln(C) by Theorem 3.1.13. We also prove several useful properties of minimal length expressions.

In Section 3.2, we want to understand more about the elements in the Weyl group. We have a striking result: the length of an element is the number of distinct positive roots it sends to negative roots (Corollary 3.2.3). After that, we show the existence of the longest word w_0 in W(sln(C)). Its most important attribute is that w_0 can be obtained from any w ∈ W(sln(C)) by multiplication by another element; in other words, w_0 = w w′ for any w ∈ W(sln(C)) and some w′ ∈ W(sln(C)).

In Section 3.3, we define root sets for expressions of elements in the Weyl group. We then explore the root sets of reduced expressions.

3.1 Coxeter System

The Coxeter system supplies sufficient combinatorial structure for the Weyl group. We introduce Coxeter systems in a similar way to [BB05].

3.1.1 Definitions

To begin with, we introduce essential definitions.

Definition 3.1.1. Given a group W and a set S, the pair (W, S) is called a Coxeter system if the group W has a presentation

Generators: S,
Relations: (s s′)^{m(s,s′)} = 1, with m(s, s′) ∈ N and m(s, s′) < ∞,

where m(s, s) = 1 and m(s, s′) = m(s′, s) ≥ 2 for s ≠ s′ in S. If no such relation occurs for a pair s, s′, the convention is m(s, s′) = ∞. In this case, the group W is the Coxeter group and S is the generating set.

Remark 3.1.2. There is a universal property of the Coxeter group W: given a group G and a map f : S → G such that

(f(s) f(s′))^{m(s,s′)} = e for all (s, s′) with m(s, s′) ≠ ∞,

there exists a unique extension of f to a group homomorphism f : W → G.

Example 3.1.3. The symmetric group Sn is a typical example of a Coxeter group.

Note that Sn has a presentation

⟨s_i, 1 ≤ i ≤ n − 1 | s_i² = e, (s_i s_{i+1})³ = e, (s_i s_j)² = e for |i − j| > 1⟩,

where s_i denotes the transposition (i i + 1).

As evidenced by Theorem 2.2.10 earlier, the Weyl group W of sln(C) is isomorphic to Sn. Recall that t_{α_i} is the reflection associated to the simple positive root α_i. When we view W(sln(C)) as Sn, we regard the simple reflection t_{α_i} as s_i through the isomorphism in Theorem 2.2.10. Therefore, any element t_α for α ∈ ∆ can be expressed in terms of the s_i for notational brevity.

Throughout this chapter, it is convenient to keep W(sln(C)) or Sn in mind whenever we talk about a general Coxeter system (W, S).

Notation 3.1.4. An element w ∈ W has at least one expression w, i.e. a word in the generators S representing it.

Remark 3.1.5. Given a Coxeter system (W, S), denote by S* the free monoid generated by S, in other words, the set of finite words in the alphabet S. There is a surjective map ψ from S* to W. Therefore, an expression w is a lift of the element w.

Example 3.1.6. In S4, s3s2s1s2 is an expression, and s3s1s2s1, s1s3s2s1, and s3s2s1s2 all represent the same element w. So all three of them are expressions for w of length 4. We will make the definition of length precise in Definition 3.1.15.

Definition 3.1.7. Given a Coxeter system (W, S), define the reflection set T as follows:

T := {w s w⁻¹ | s ∈ S, w ∈ W},

whose elements are called reflections.

Remark 3.1.8. The set T contains all elements that are conjugate to an element in the generating set S of W. We can verify that t² = e for t ∈ T. Clearly S ⊆ T, so the elements of S are sometimes called simple reflections.

3.1.2 A bijection between reflections and roots

In this part, we try to link the reflections in W(sln(C)) with the roots of sln(C). More precisely, Theorem 3.1.13 gives the required bijection.

Construction 3.1.9. Given an expression s_{i_1} s_{i_2} . . . s_{i_n} ∈ S*, we introduce a couple of definitions: for 1 ≤ j ≤ n,

t_{i_j} := (s_{i_1} s_{i_2} . . . s_{i_{j−1}}) s_{i_j} (s_{i_1} s_{i_2} . . . s_{i_{j−1}})⁻¹ ∈ T,

the ordered n-tuple

T̂(s_{i_1} s_{i_2} . . . s_{i_n}) := (t_{i_1}, t_{i_2}, . . . , t_{i_n}),

and

n(s_{i_1} s_{i_2} . . . s_{i_n} : t) := the number of times t appears in T̂(s_{i_1} s_{i_2} . . . s_{i_n}).

There are a few interesting properties pertaining to the t_{i_j}, such as

t_{i_j} s_{i_1} s_{i_2} . . . s_{i_n} = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n} (with s_{i_j} omitted)

and

s_{i_1} s_{i_2} . . . s_{i_j} = t_{i_j} t_{i_{j−1}} · · · t_{i_1}.
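The construction above can be sketched in code as follows (our own illustration for S_4; the helper names `T_hat`, `compose`, `inverse` are assumptions, not from the thesis). Each t_{i_j} is the conjugate of s_{i_j} by the prefix of the word before it.

```python
# A sketch of Construction 3.1.9 in S_4, with permutations as tuples:
# t_{i_j} = (s_{i_1} ... s_{i_{j-1}}) s_{i_j} (s_{i_1} ... s_{i_{j-1}})^{-1}.

def s(i, n=4):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    return tuple(p[q[k]] for k in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for k, v in enumerate(p):
        q[v] = k
    return tuple(q)

def T_hat(word, n=4):
    """The ordered tuple (t_{i_1}, ..., t_{i_n}) attached to an expression."""
    out, prefix = [], tuple(range(n))
    for i in word:
        out.append(compose(compose(prefix, s(i, n)), inverse(prefix)))
        prefix = compose(prefix, s(i, n))
    return out

ts = T_hat([3, 2, 1, 2])         # the expression s3 s2 s1 s2
n_t = ts.count(ts[0])            # n(s3 s2 s1 s2 : t_{i_1})
print(ts, n_t)
```

Since s3 s2 s1 s2 is reduced, the entries of the tuple are pairwise distinct, illustrating Proposition 3.1.11 below.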

Moreover, we want to study the group S(Ω) of all permutations of the set Ω = T × {−1, 1}.

Definition 3.1.10. For each s ∈ S, define a bijective function π_s from Ω to itself by

π_s(t, ε) := (s t s, ε η(s : t)),

where

η(s : t) := −1 if s = t, and +1 if s ≠ t.

Proposition 3.1.11. If s_{i_1} s_{i_2} . . . s_{i_n} is an expression with n minimal, then for all 1 ≤ j < k ≤ n, the entries of T̂(s_{i_1} s_{i_2} . . . s_{i_n}) satisfy t_{i_j} ≠ t_{i_k}.

Proof. Suppose, for a contradiction, that t_{i_j} = t_{i_k} for some j < k. Then t_{i_j} t_{i_k} = e, and so

s_{i_1} s_{i_2} . . . s_{i_n} = t_{i_j} t_{i_k} s_{i_1} s_{i_2} . . . s_{i_n} = s_{i_1} . . . ŝ_{i_j} . . . ŝ_{i_k} . . . s_{i_n},

contradicting the minimality of n. QED

Lemma 3.1.12. (i) The mapping s ↦ π_s extends uniquely to an injective homomorphism w ↦ π_w from W to S(Ω).

(ii) For all t ∈ T, π_t(t, ε) = (t, −ε).

Proof. (i) First, we want to utilise the universal property. For the pair (s, s), with m(s, s) = 1, we compute

π_s²(t, ε) = π_s(s t s, ε η(s : t)) = (s s t s s, ε η(s : t) η(s : s t s)) = (t, ε),

which implies that π_s² = id_Ω and π_s ∈ S(Ω).

Next, suppose (s, s′) with m(s, s′) = p ≠ ∞. For the sake of notational simplicity, we denote

s_i = s′ if i is odd, and s_i = s if i is even.

We want to show that (π_s π_{s′})^p = id_Ω. Now, we claim that

(π_s π_{s′})^p(t, ε) = (s_{2p} · · · s_1 t s_1 · · · s_{2p}, ε ∏_{i=1}^{2p} η(s_i : s_{i−1} · · · s_1 t s_1 · · · s_{i−1})) = (t, ε).

The first component is clear since s_{2p} · · · s_1 = (s s′)^p = e by assumption. Note that, for the second component,

ε ∏_{i=1}^{2p} η(s_i : s_{i−1} · · · s_1 t s_1 · · · s_{i−1}) = ε (−1)^{n(s_1 s_2 ... s_{2p} : t)}.

To make sure that this is equal to ε, we need to show that n(s_1 s_2 ... s_{2p} : t) is even. Consider T̂(s′ s s′ s · · · s′ s) (with 2p letters); we find that t_{p+i} = t_i for 1 ≤ i ≤ p, as (s s′)^p = e. Thus, if t appears in T̂(s′ s s′ s · · · s′ s), it must appear an even number of times, as desired.

By the universal property, we can extend the mapping s ↦ π_s uniquely to a homomorphism w ↦ π_w from W to S(Ω). Let w = s_{i_n} s_{i_{n−1}} . . . s_{i_1} be an arbitrary expression. We see that

π_w(t, ε) = π_{s_{i_n}} π_{s_{i_{n−1}}} · · · π_{s_{i_1}}(t, ε)
= (s_{i_n} · · · s_{i_1} t s_{i_1} · · · s_{i_n}, ε ∏_{j=1}^{n} η(s_{i_j} : s_{i_{j−1}} · · · s_{i_1} t s_{i_1} · · · s_{i_{j−1}}))
= (w t w⁻¹, ε (−1)^{n(s_{i_1} s_{i_2} ... s_{i_n} : t)}).

This suggests that we define

η(w⁻¹ : t) := (−1)^{n(s_{i_1} s_{i_2} ... s_{i_n} : t)},

which is well-defined since we have just shown that the parity of n(s_{i_1} s_{i_2} ... s_{i_n} : t) depends only on w and t. Hence, we can write our homomorphism as

π_w(t, ε) = (w t w⁻¹, ε η(w⁻¹ : t)).

This homomorphism is injective. Suppose w ≠ e, and write w = s_{i_n} s_{i_{n−1}} . . . s_{i_1} with n minimal. Then, by Proposition 3.1.11, we know that

n(s_{i_1} s_{i_2} . . . s_{i_n} : t_{i_j}) = 1 for 1 ≤ j ≤ n,

since T̂(s_{i_1} s_{i_2} . . . s_{i_n}) = (t_{i_1}, t_{i_2}, . . . , t_{i_n}) and t_{i_k} ≠ t_{i_l} for k ≠ l. It follows that

π_w(t_{i_j}, ε) = (w t_{i_j} w⁻¹, −ε) for 1 ≤ j ≤ n,

and so π_w ≠ id_Ω; hence the homomorphism is injective.

(ii) We prove this by induction. Suppose t = s_{i_1}. Obviously,

π_t(t, ε) = (t t t, ε η(t : t)) = (t, −ε).

Now, let t = s_{i_1} s_{i_2} . . . s_{i_n} . . . s_{i_2} s_{i_1}. Then, by the inductive hypothesis together with part (i),

π_t(t, ε) = π_{s_{i_1}} π_{s_{i_2} ... s_{i_n} ... s_{i_2}} π_{s_{i_1}}(s_{i_1} s_{i_2} . . . s_{i_n} . . . s_{i_2} s_{i_1}, ε)
= π_{s_{i_1}} π_{s_{i_2} ... s_{i_n} ... s_{i_2}}(s_{i_2} . . . s_{i_n} . . . s_{i_2}, ε η(s_{i_1} : s_{i_1} s_{i_2} . . . s_{i_n} . . . s_{i_2} s_{i_1}))
= π_{s_{i_1}} π_{s_{i_2} ... s_{i_n} ... s_{i_2}}(s_{i_2} . . . s_{i_n} . . . s_{i_2}, ε)
= π_{s_{i_1}}(s_{i_2} . . . s_{i_n} . . . s_{i_2}, −ε)
= (s_{i_1} s_{i_2} . . . s_{i_n} . . . s_{i_2} s_{i_1}, −ε η(s_{i_1} : s_{i_2} . . . s_{i_n} . . . s_{i_2}))
= (t, −ε),

as desired. QED

Theorem 3.1.13. Suppose Ω = T × {+1, −1} with π_w : Ω → Ω as above, and let ∆ be the root system of sln(C). The map φ : Ω → ∆ defined by

(t, +1) ↦ γ ∈ ∆⁺ such that t_γ = t,
(t, −1) ↦ γ ∈ ∆⁻ such that t_γ = t,

is a bijection satisfying, for all (t, ε) ∈ Ω and w ∈ W,

φ(π_w(t, ε)) = w(φ(t, ε)).

Proof. The map φ is surjective because every root is associated with a reflection. Furthermore, the map φ is injective because the adjoint representation is faithful.

To check that φ(π_w(t, ε)) = w(φ(t, ε)) for all (t, ε) ∈ Ω and w ∈ W, it suffices to check it on the generators s_i, that is, for all s_i,

φ(π_{s_i}(t, ε)) = s_i(φ(t, ε)).

Write γ = φ(t, ε) ∈ ∆^ε. Note that

φ(π_{s_i}(t, ε)) = φ(s_i t s_i, ε η(s_i : t)),

which is the root ±s_i(γ) associated to the reflection s_i t s_i, taken with sign ε η(s_i : t), while

s_i(φ(t, ε)) = s_i(γ).

By definition,

η(s_i : t) = −1 if t = s_i, and +1 if t ≠ s_i,

and

s_i(γ) ∈ ∆^{−ε} if γ = ±α_i, while s_i(γ) ∈ ∆^{ε} if γ ≠ ±α_i.

But γ = ±α_i if and only if t = s_i. Thus the left hand side and the right hand side are the same root, and we obtain

φ(π_{s_i}(t, ε)) = s_i(φ(t, ε))

for all s_i. QED

Remark 3.1.14. This theorem gives the precise bijection between signed reflections and the set of roots, which intertwines the action of W on both sets.

3.1.3 Minimal length expressions

We make the definition of length of words in W(sln(C)) precise. The strong exchange condition describes exactly what form the words can take under a certain condition (Corollary 3.1.18), whereas the deletion condition tells us that any expression can be reduced.

Definition 3.1.15. Given a Coxeter system (W, S), every element w ∈ W can be expressed as a product of generators:

w = s_{i_1} s_{i_2} . . . s_{i_n} where s_{i_j} ∈ S.

If n is minimal among all such expressions for w, we write

ℓ(w) = n,

and n is called the Bruhat length of w. In this case, s_{i_1} s_{i_2} . . . s_{i_n} is a reduced expression for w.

Remark 3.1.16. The length function on the word is always non-negative. If e is the identity element in W , we may define `(e) := 0.

Theorem 3.1.17 (Strong Exchange Condition). Suppose w = s_{i_1} s_{i_2} . . . s_{i_n} with s_{i_j} ∈ S and t ∈ T. If ℓ(t w) < ℓ(w), then

t w = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n}

for some 1 ≤ j ≤ n.

Proof. Note that the condition

t w = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n}

is equivalent to η(w : t) = −1. If this is the case, then

t = s_{i_1} s_{i_2} . . . s_{i_j} . . . s_{i_2} s_{i_1}

for some j, and it follows that t w = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n}.

Now suppose that ℓ(t w) < ℓ(w). We want to show that, in the formula from Lemma 3.1.12,

η(w : t) = (−1)^{n(s_{i_1} s_{i_2} ... s_{i_n} : t)} = −1.

In order to achieve a contradiction, assume η(w : t) = 1. Then,

π_{(t w)⁻¹}(t, ε) = π_{w⁻¹} π_t(t, ε) by Lemma 3.1.12(i),
= π_{w⁻¹}(t, −ε) by Lemma 3.1.12(ii),
= (w⁻¹ t w, −ε η(w : t))
= (w⁻¹ t w, −ε) by assumption.

This implies that η(t w : t) = −1, which means n(t w : t) is odd. Consequently, we obtain

ℓ(t · t w) < ℓ(t w) ⇐⇒ ℓ(w) < ℓ(t w),

contradicting our assumption. QED

Corollary 3.1.18. If w = s_{i_1} s_{i_2} . . . s_{i_n} is reduced and t ∈ T, then the following are equivalent:

(1) ℓ(t w) < ℓ(w),

(2) t w = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n}, for some 1 ≤ j ≤ n,

(3) t = s_{i_1} s_{i_2} . . . s_{i_j} . . . s_{i_2} s_{i_1}, for some 1 ≤ j ≤ n.

Furthermore, the index i_j appearing in (2) and (3) is uniquely determined.

Proof. (1) ⇒ (2) It follows from the strong exchange property as proven before. (2) ⇒ (1) This is obvious. (2) ⇔ (3) It is also easy to see.

The uniqueness of the index ij follows immediately from Proposition 3.1.11. QED

Inspired by these theorems, we establish the following definition.

Definition 3.1.19. We call

TL(w) := {t ∈ T | ℓ(t w) < ℓ(w)}

the set of left associated reflections to w, and

TR(w) := {t ∈ T | ℓ(w t) < ℓ(w)}

the set of right associated reflections to w.

Remark 3.1.20. Note that TR(w) = TL(w⁻¹).

Corollary 3.1.21. Let w = s_{i_1} s_{i_2} . . . s_{i_n} be reduced. Then,

TL(w) = {s_{i_1} s_{i_2} . . . s_{i_j} . . . s_{i_2} s_{i_1} | 1 ≤ j ≤ n}

with |TL(w)| = ℓ(w).

Proof. Write w = s_{i_1} s_{i_2} . . . s_{i_n} with ℓ(w) = n. By Corollary 3.1.18,

TL(w) = {s_{i_1} s_{i_2} . . . s_{i_j} . . . s_{i_2} s_{i_1} | 1 ≤ j ≤ n}.

Moreover, by Proposition 3.1.11, the elements of TL(w) are all distinct. QED
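A small computation along the lines of this corollary (our own sketch, with assumed helper names) builds T_L(w) for the reduced word w = s1 s2 s3 in S_4 and checks that |T_L(w)| = ℓ(w), using inversion counts as a proxy for length.

```python
# T_L(w) for the reduced word w = s1 s2 s3 in S_4: conjugating each letter
# by the prefix before it, as in Corollary 3.1.21.

n = 4

def s(i):
    """s_i swaps positions i, i+1 (1-indexed) of {0, ..., n-1}."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    return tuple(p[q[k]] for k in range(n))

def inverse(p):
    q = [0] * n
    for k, v in enumerate(p):
        q[v] = k
    return tuple(q)

def inversions(p):
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

word = [1, 2, 3]
TL, prefix = set(), tuple(range(n))
for i in word:
    # t_j = (s_{i_1} ... s_{i_{j-1}}) s_{i_j} (s_{i_1} ... s_{i_{j-1}})^{-1}
    TL.add(compose(compose(prefix, s(i)), inverse(prefix)))
    prefix = compose(prefix, s(i))

w = prefix
assert len(TL) == inversions(w) == 3                 # |T_L(w)| = l(w)
assert all(inversions(compose(t, w)) < inversions(w) for t in TL)
print(sorted(TL))
```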

Corollary 3.1.22. For all s ∈ S and w ∈ W , the following hold:

(1) s ∈ TL(w) if and only if some reduced expression for w begins with the letter s.

(2) s ∈ TR(w) if and only if some reduced expression for w ends with the letter s.

Proof. (1) (⇐) If some reduced expression for w begins with the letter s, then s clearly satisfies the definition of TL(w).

(⇒) Suppose ℓ(s w) < ℓ(w) with w = s_{i_1} s_{i_2} . . . s_{i_n}. Then, by Corollary 3.1.18,

s w = s_{i_1} . . . ŝ_{i_j} . . . s_{i_n}

for some i_j. Since s is a Coxeter generator, s² = e and hence

w = s s_{i_1} . . . ŝ_{i_j} . . . s_{i_n},

a reduced expression for w beginning with s, as desired.

(2) Apply the proof of (1) to the word w⁻¹. QED

Definition 3.1.23. Given a Coxeter system (W, S) and w ∈ W, the left descent set DL(w) is defined as the intersection TL(w) ∩ S. Similarly, the right descent set DR(w) is defined as the intersection TR(w) ∩ S.

Proposition 3.1.24 (Deletion Condition). If w = s_{i_1} s_{i_2} . . . s_{i_n} and ℓ(w) < n, then

w = s_{i_1} . . . ŝ_{i_j} . . . ŝ_{i_k} . . . s_{i_n}

for some 1 ≤ j < k ≤ n.

Proof. Pick j maximal so that s_{i_j} s_{i_{j+1}} . . . s_{i_n} is not reduced. Obviously,

ℓ(s_{i_j} s_{i_{j+1}} . . . s_{i_n}) < ℓ(s_{i_{j+1}} . . . s_{i_n}).

So, by Theorem 3.1.17,

s_{i_j} s_{i_{j+1}} . . . s_{i_n} = s_{i_{j+1}} . . . ŝ_{i_k} . . . s_{i_n}

for some j < k ≤ n. Now, multiplying by s_{i_1} s_{i_2} . . . s_{i_{j−1}} on both sides, we get

s_{i_1} s_{i_2} . . . s_{i_n} = s_{i_1} . . . ŝ_{i_j} . . . ŝ_{i_k} . . . s_{i_n},

as desired. QED

Definition 3.1.25. A subexpression of an expression s_{i_1} s_{i_2} . . . s_{i_n} in a Coxeter group W is an expression of the form s_{i_{j_1}} s_{i_{j_2}} · · · s_{i_{j_k}} where 1 ≤ j_1 < · · · < j_k ≤ n.

Corollary 3.1.26. (i) Any expression s_{i_1} s_{i_2} . . . s_{i_n} for w contains a reduced expression for w as a subexpression, obtainable by deleting an even number of letters.

(ii) Suppose w = s_{i_1} s_{i_2} . . . s_{i_n} = s′_{i_1} s′_{i_2} . . . s′_{i_n} are two reduced expressions. Then the set of letters appearing in the word s_{i_1} s_{i_2} . . . s_{i_n} equals the set of letters appearing in s′_{i_1} s′_{i_2} . . . s′_{i_n}.

(iii) S is a minimal generating set for W; no Coxeter generator can be expressed in terms of the others.

Proof. (i) This is an easy consequence of Proposition 3.1.24.

(ii) Define S̄ := {s_{i_1}, . . . , s_{i_n}}, the set of letters of the first expression. Suppose, for a contradiction, that some letter of the second expression is not in S̄, and choose j minimal such that s′_{i_j} ∉ S̄. Then

s′_{i_1} s′_{i_2} . . . s′_{i_j} . . . s′_{i_2} s′_{i_1} = s_{i_1} s_{i_2} . . . s_{i_k} . . . s_{i_2} s_{i_1}

for some k, by Corollary 3.1.18. So

s′_{i_j} = s′_{i_{j−1}} . . . s′_{i_1} s_{i_1} s_{i_2} . . . s_{i_k} . . . s_{i_2} s_{i_1} s′_{i_1} . . . s′_{i_{j−1}},

which expresses s′_{i_j} in terms of letters of S̄, since by the minimality of j every letter on the right is in S̄. This contradicts the assumption that s′_{i_j} ∉ S̄.

(iii) This follows immediately from (ii) when w is a single letter. QED

Definition 3.1.27. Given a Coxeter system (W, S), denote by α_{s,s′} the alternating expression s s′ s s′ s . . . of length m(s, s′). Then the deletion of a factor of the form s s is called a nil-move, while the replacement of a factor α_{s,s′} by α_{s′,s} is called a braid-move.

Theorem 3.1.28. (Expression Property) Let (W, S) be a Coxeter group and w ∈ W .

(i) Any expression si1 si2 . . . sin for w can be transformed into a reduced expression for w by a sequence of nil-moves and braid-moves.

(ii) Every two reduced expressions for w can be connected via a sequence of braid-moves.

Proof. Please refer to the proof of Theorem 3.3.1 in [BB05]. QED

Remark 3.1.29. This theorem will be our powerful tool for manipulating expressions in our Coxeter system. In later chapters, it will be extremely useful in proofs.

3.2 Expressions in Weyl Group of sln(C)

The aim of this section is to understand the expressions in W. The first part shows that the longest element w_0 in W is unique and has the special properties mentioned in Remark 3.2.12. The second part deals with finding the length of a reduced expression in W.

3.2.1 Reduced expression in W(sln(C))

Given an arbitrary expression for an element of W, we can find its length by the methods introduced in this subsection.

Theorem 3.2.1. For all w ∈ W and α ∈ ∆⁺, ℓ(w t_α) < ℓ(w) if and only if w(α) ∈ ∆⁻.

Proof. First, note that the condition ℓ(w t_α) < ℓ(w) is equivalent to w = u t_α for some u with ℓ(u) < ℓ(w): if no such u exists, ℓ(w t_α) cannot be less than ℓ(w), while if w = u t_α with ℓ(u) < ℓ(w), then obviously ℓ(w t_α) = ℓ(u) < ℓ(w). The proof proceeds by induction on ℓ(w) = k.

Base case: if k = 1, then w = s_i for some i, and the statement is clearly true.

Inductive hypothesis: when ℓ(w) = n, ℓ(w t_α) < ℓ(w) ⇔ w(α) ∈ ∆⁻.

Inductive step: assume ℓ(w) = n + 1.

(⇐) Suppose w(α) ∈ ∆⁻. We want to show that ℓ(w t_α) < ℓ(w). Write w = u s_{i_j} for some s_{i_j} ∈ S with ℓ(u) = n. Therefore, w(α) = u s_{i_j}(α) ∈ ∆⁻. Now, s_{i_j}(α) = −α if and only if α = α_{i_j}. If α = α_{i_j}, then t_α = s_{i_j}, and hence

ℓ(w t_α) = ℓ(u s_{i_j} s_{i_j}) = ℓ(u) < ℓ(w).

Otherwise, γ := s_{i_j}(α) ∈ ∆⁺ and w(α) = u(γ) ∈ ∆⁻. By the inductive hypothesis,

ℓ(u t_γ) < ℓ(u).

But, by Proposition 3.2.5, t_γ = s_{i_j} t_α s_{i_j}. Then,

ℓ(u) > ℓ(u t_γ) = ℓ(u s_{i_j} t_α s_{i_j}) = ℓ(w t_α s_{i_j}),

which implies ℓ(w t_α) < ℓ(u) + 1 = n + 1 = ℓ(w), as desired.

(⇒) Suppose ℓ(w t_α) < ℓ(w). We want to show that w(α) ∈ ∆⁻. Write w = u t_α with ℓ(u) < ℓ(w). Then,

w(α) = u t_α(α) = u(−α) = −u(α).

We claim that u(α) ∈ ∆⁺, so that w(α) = −u(α) ∈ ∆⁻. Indeed, if u(α) ∈ ∆⁻, then by the inductive hypothesis u = v t_α for some v with ℓ(v) < ℓ(u). But then w = v t_α t_α = v, so ℓ(w) = ℓ(v) < ℓ(u) < ℓ(w), a contradiction. QED

Remark 3.2.2. The essence of this theorem is that the length of a word equals the number of distinct positive roots sent to negative roots through its action. This is proven in the following corollary.

Next, we associate to each group element w the set of positive roots {α ∈ ∆⁺ | w(α) ∈ ∆⁻}.

Corollary 3.2.3. For all w ∈ W,

|{α ∈ ∆⁺ | w(α) ∈ ∆⁻}| = |{t_α ∈ T | ℓ(w t_α) < ℓ(w)}| = ℓ(w).

Proof. The first equality follows from Theorem 3.2.1. The second equality follows from Corollary 3.1.21 applied to w⁻¹. QED

Remark 3.2.4. Corollary 3.1.21 and Corollary 3.2.3 tell us that the sets

TR(w), {t_α ∈ T | ℓ(w t_α) < ℓ(w)}, and {α ∈ ∆⁺ | w(α) ∈ ∆⁻}

have the same cardinality ℓ(w).

Recall from Definition 2.2.8 that the reflection associated to a root α is defined by t_α(γ) := γ − ⟨γ, α⟩α.

Proposition 3.2.5. Suppose w ∈ W and γ ∈ ∆. If γ = w(α) for α ∈ ∆⁺, then

t_γ = w t_α w⁻¹.

Proof. For any λ ∈ ∆,

w t_α w⁻¹(λ) = w(t_α(w⁻¹(λ)))
= w(w⁻¹(λ) − ⟨w⁻¹(λ), α⟩α)
= λ − ⟨λ, w(α)⟩w(α)
= λ − ⟨λ, γ⟩γ
= t_γ(λ). QED

Theorem 3.2.6. There is a bijective correspondence between the set T of reflections in W(sln(C)) and the set ∆⁺ of positive roots.

Proof. We wish to define a mapping ρ : ∆⁺ → T by sending γ = w(α), with w ∈ W and α a simple root, to t_γ = w t_α w⁻¹.

First, note that the map ρ is well-defined. If w(α) = u(α′), then for any λ ∈ ∆,

t_{w(α)}(λ) = w t_α w⁻¹(λ)
= w(w⁻¹(λ) − ⟨w⁻¹(λ), α⟩α)
= λ − ⟨λ, w(α)⟩w(α)
= λ − ⟨λ, u(α′)⟩u(α′)
= u(u⁻¹(λ) − ⟨u⁻¹(λ), α′⟩α′)
= u t_{α′} u⁻¹(λ)
= t_{u(α′)}(λ).

Note also that the only reflection that acts trivially on the roots is the identity. To see this, suppose t = s_{i_1} . . . s_{i_{j−1}} s_{i_j} s_{i_{j−1}} . . . s_{i_1} acts trivially. Then, for all λ ∈ ∆,

λ = t(λ) = s_{i_1} . . . s_{i_{j−1}} s_{i_j} s_{i_{j−1}} . . . s_{i_1}(λ)
⇐⇒ s_{i_{j−1}} . . . s_{i_1}(λ) = s_{i_j} s_{i_{j−1}} . . . s_{i_1}(λ)
⇐⇒ γ = s_{i_j}(γ), with γ = s_{i_{j−1}} . . . s_{i_1}(λ).

But s_{i_j}(α_{i_j}) = −α_{i_j} by definition. Therefore s_{i_j} must be the identity element e, and it follows that t = e. In particular, this tells us that t_{w(α)} = t_{u(α′)} as elements of W, as desired.

Moreover, ρ is surjective, as every element of T can be written as

s_{i_1} . . . s_{i_{j−1}} s_{i_j} s_{i_{j−1}} . . . s_{i_1}

with j minimal. Writing w = s_{i_1} . . . s_{i_{j−1}}, if w(α_{i_j}) ∈ ∆⁻, then by Theorem 3.2.1, w has an expression ending with s_{i_j}, which contradicts the minimality of j. Therefore w(α_{i_j}) must lie in ∆⁺, and ρ(w(α_{i_j})) is the given reflection.

Besides, ρ is injective: if α and α′ are different positive roots, then t_α ≠ t_{α′} by axiom (R2) of a root system. QED
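For n = 4 this bijection is easy to check by machine. In the sketch below (our own code, not from the thesis), the positive root L_i − L_j corresponds to the transposition (i j).

```python
# A check of the bijection of Theorem 3.2.6 for n = 4: positive roots
# L_i - L_j (i < j) correspond to the transpositions (i j), and distinct
# positive roots give distinct reflections.

n = 4

def t(i, j):
    """The reflection t_{L_i - L_j} as the transposition (i j), 0-indexed."""
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

pos_roots = [(i, j) for i in range(n) for j in range(i + 1, n)]
refl = {t(i, j) for i, j in pos_roots}
assert len(pos_roots) == len(refl) == n * (n - 1) // 2
print(len(refl), "reflections <->", len(pos_roots), "positive roots")
```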

3.2.2 The longest element in the Weyl group of sln(C)

We mostly use results from [Hum94] to establish the desired properties of the longest element. These are explained in Remark 3.2.12.

Theorem 3.2.7. The Weyl group W(sln(C)) acts transitively on the set of positive root sets.

Proof. Please refer to the proof of Theorem 1.4 in [Hum94]. QED

Remark 3.2.8. Note that a positive root set is determined by our ordering on the roots. If ∆⁺ is a positive root set, then under a different ordering ∆⁻ could equally be a positive root set.

Theorem 3.2.9. The following conditions on w ∈ W(sln(C)) are equivalent:

(1) w = e

(2) w∆+ = ∆+.

Proof. Please refer to the proof of Theorem 1.8 in [Hum94]. QED

Proposition 3.2.10. Let w_0 be an element such that for each w ∈ W(sln(C)),

ℓ(w_0) = ℓ(w) + ℓ(w⁻¹ w_0).

Suppose S is the generating set of W(sln(C)). Then w_0 satisfies the following conditions:

(a) w0 is unique,

(b) w0 is an involution, and

(c) w0Sw0 = S.

Proof. (a) Suppose w_0 and v_0 are elements such that for all w ∈ W,

ℓ(w_0) = ℓ(w) + ℓ(w⁻¹ w_0) and ℓ(v_0) = ℓ(w) + ℓ(w⁻¹ v_0).

Now, consider

ℓ(w_0) = ℓ(v_0) + ℓ(v_0⁻¹ w_0) and ℓ(v_0) = ℓ(w_0) + ℓ(w_0⁻¹ v_0).

The first equation tells us that ℓ(w_0) ≥ ℓ(v_0), whereas the second tells us that ℓ(v_0) ≥ ℓ(w_0). For the two equations to hold simultaneously, we must have ℓ(v_0⁻¹ w_0) = 0 and ℓ(w_0⁻¹ v_0) = 0, so that ℓ(w_0) = ℓ(v_0). But ℓ(v_0⁻¹ w_0) = 0 means v_0⁻¹ w_0 = e, which implies w_0 = v_0.

(b) We want to show that w_0² = e or, in other words, that w_0 is its own inverse. This follows immediately from the fact that w_0⁻¹ is also a longest element in W, together with part (a).

(c) Suppose s ∈ S. Then, since s² = e and w_0² = e,

ℓ(w_0 s w_0) = ℓ((s w_0)⁻¹ w_0)
= ℓ(w_0) − ℓ(s w_0) by the defining property,
= ℓ(w_0) − (ℓ(w_0) − ℓ(s)) by the defining property,
= ℓ(s) = 1,

as desired. This shows that w_0 s w_0 has length 1; in particular, w_0 s w_0 ∈ S. Thus w_0 S w_0 ⊆ S. The reverse inclusion follows because w_0⁻¹ = w_0. Hence w_0 S w_0 = S. QED

Definition 3.2.11. The longest element in the Weyl group of sln(C) is the element w_0 satisfying

ℓ(w_0) = ℓ(w) + ℓ(w⁻¹ w_0) for each w ∈ W(sln(C)).

Remark 3.2.12 (The Existence of the Longest Element). Theorem 3.2.7 and Theorem 3.2.9 together substantiate the simple transitivity of the permutation action of W on the set of positive root sets. The significance of these theorems is that there must exist a unique word w_0 with ℓ(w_0) = |∆⁺|.

By Theorem 3.2.1, w_0 is distinguished by ℓ(w_0 t_α) < ℓ(w_0) for all α ∈ ∆⁺. To find an expression for w_0, we can multiply simple reflections on the left successively until the process ceases. Alternatively, we can multiply any reduced expression by simple reflections on the right until we can no longer do so. Finally, w_0 is obtained.

From here, we can see that w_0 = w w′ for any w ∈ W and some w′ ∈ W, and that w_0 satisfies ℓ(w_0) = ℓ(w) + ℓ(w′). In other words, for all w ∈ W, ℓ(w_0 w) = ℓ(w_0) − ℓ(w).
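In S_n the longest element is the order-reversing permutation. The following sketch (our own code, with assumed names) checks for n = 4 that its length is |∆⁺| = 6, that it is an involution, and that w_0 S w_0 = S.

```python
# The longest element of S_4 is the order-reversing permutation; a quick
# check of its length, its involutivity, and of w0 S w0 = S.

n = 4
w0 = tuple(reversed(range(n)))                      # k -> n-1-k

def s(i):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    return tuple(p[q[k]] for k in range(n))

def inversions(p):
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

assert inversions(w0) == n * (n - 1) // 2           # l(w0) = |Delta^+| = 6
assert compose(w0, w0) == tuple(range(n))           # w0 is an involution
for i in range(1, n):
    assert compose(compose(w0, s(i)), w0) == s(n - i)   # w0 s_i w0 = s_{n-i}
print("w0 =", w0)
```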

3.3 Root Sets for Expressions in the Weyl Group

This section relates the reduced expressions in W to their root sets, which we define below.

Definition 3.3.1. Let W denote the Weyl group of sln(C), ∆ the roots, ∆⁺ the positive roots, and Σ the simple roots. For an expression s_{i_1} . . . s_{i_n}, we define the left root set of the expression

Root_L(s_{i_1} s_{i_2} . . . s_{i_n}) = {α_{i_1}, s_{i_1} · α_{i_2}, . . . , s_{i_1} s_{i_2} . . . s_{i_{n−1}} · α_{i_n}}.

Here w · α is the root given by acting on α with the Weyl group element w. The right root set Root_R(s_{i_1} s_{i_2} . . . s_{i_n}) is defined analogously, reading from the right rather than from the left.
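The left root set can be computed mechanically. The sketch below (our own code for sl4, with assumed helper names) realises s_i on simple-root coordinates via the Cartan matrix and lists Root_L for a reduced word.

```python
# A sketch of the left root set of Definition 3.3.1 for sl_4: roots are
# vectors in the simple-root basis, and s_i acts by
# s_i(gamma) = gamma - <gamma, alpha_i> alpha_i.

def act(i, g):
    """Simple reflection s_i (1-indexed) applied to a root g = (c1, c2, c3)."""
    A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]          # Cartan matrix of A_3
    c = sum(g[k] * A[k][i - 1] for k in range(3))      # <gamma, alpha_i>
    return tuple(g[k] - c * (1 if k == i - 1 else 0) for k in range(3))

def root_L(word):
    """{alpha_{i_1}, s_{i_1} alpha_{i_2}, ..., s_{i_1}...s_{i_{n-1}} alpha_{i_n}}."""
    out = []
    for j, i in enumerate(word):
        g = tuple(1 if k == i - 1 else 0 for k in range(3))   # alpha_{i_j}
        for p in reversed(word[:j]):                          # apply the prefix
            g = act(p, g)
        out.append(g)
    return out

print(root_L([1, 2, 1]))   # reduced word for the reflection (13): all positive
```

For a reduced word the listed roots are all positive, illustrating part (iii) of the proposition below.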

We have the following basic facts.

Proposition 3.3.2. (i) The left and right root sets are related by

RootL(si1 si2 . . . sin ) = RootR(sin sin−1 . . . si1 ).

(ii) If si1 si2 . . . sin is a reduced expression, then sin sin−1 . . . si1 is an another reduced expression.

(iii) If s_{i_1} s_{i_2} . . . s_{i_n} is reduced, then the left root set Root_L(s_{i_1} s_{i_2} . . . s_{i_n}) ⊂ ∆⁺.

(iv) The left root set RootL(si1 . . . sin ) consists entirely of positive roots only if si1 . . . sin is a reduced expression.

(v) If two expressions si1 si2 . . . sin = sj1 sj2 . . . sjn are reduced, then the corresponding left root sets are equal:

RootL(si1 si2 . . . sin ) = RootL(sj1 sj2 . . . sjn ).

(vi) For a reduced expression si1 si2 . . . sin for w, we have

α ∈ RootL(si1 si2 . . . sin ) ⇐⇒ `(tαw) < `(w),

where `(w) denotes Bruhat length.

Proof. (i) This follows immediately from the definitions of the left and right root sets.

RootL(si1si2 . . . sin) = {αi1, si1 · αi2, . . . , si1si2 . . . sin−1 · αin}

= RootR(sinsin−1 . . . si1).

(ii) It is immediate.

(iii) We have RootL(si1si2 . . . sin) ⊂ ∆⁺ because si1si2 . . . sin is reduced. Otherwise, if there exists some 1 ≤ k ≤ n such that

si1si2 . . . sik−1(αik) ∈ ∆⁻,

then si1si2 . . . sik−1 has an expression ending with sik, contradicting the assumption that si1si2 . . . sin is reduced.

(iv) We prove the contrapositive. Suppose first that w is not reduced, that is, w has an expression si1 · · · sjsj · · · sim with at least one pair of equal letters sj adjacent to each other, for some j. This is possible because any expression for w can be reduced by a sequence of nil-moves and braid-moves by Theorem 3.1.28, and we may as well perform all possible braid-moves first to bring a pair of equal letters next to each other. Consider the left root set of w = si1 . . . siksjsjsik+3 . . . sim. Note that si1 . . . sik · αj and si1 . . . siksj · αj = −si1 . . . sik · αj are both in the left root set of w. Since the two roots have opposite signs, the left root set of w must contain a negative root.

(v) By Theorem 3.1.28, it suffices to look at two types of single braid moves one by one. Suppose we have two expressions w1 = si1 . . . sim skslsim+3 . . . sin , w2 = si1 . . . sim slsksim+3 . . . sin connected by the braid move sksl = slsk for |l − k| > 1. Then, as sk · αl = αl and sl · αk = αk,

RootL(w1) = {αi1, si1 · αi2, . . . , si1 . . . sim · αk, si1 . . . simsk · αl, . . . , si1 . . . sin−1 · αin}

= {αi1, si1 · αi2, . . . , si1 . . . sim · αk, si1 . . . sim · αl, . . . , si1 . . . sin−1 · αin}

= {αi1, si1 · αi2, . . . , si1 . . . sim · αl, si1 . . . sim · αk, . . . , si1 . . . sin−1 · αin}

= {αi1, si1 · αi2, . . . , si1 . . . sim · αl, si1 . . . simsl · αk, . . . , si1 . . . sin−1 · αin}

= RootL(w2).

Second, suppose we have two words w1 = si1 . . . simskslsksim+4 . . . sin and w2 = si1 . . . simslskslsim+4 . . . sin connected by the braid move skslsk = slsksl for |l − k| = 1. It suffices to see that, since sk · αl = αk + αl = sl · αk, sksl · αk = αl and slsk · αl = αk,

{si1si2 . . . sim · αk, si1si2 . . . simsk · αl, si1si2 . . . simsksl · αk}

= {si1si2 . . . sim · αk, si1si2 . . . sim · (αk + αl), si1si2 . . . sim · αl}

= {si1si2 . . . sim · αl, si1si2 . . . sim · (αk + αl), si1si2 . . . sim · αk}

= {si1si2 . . . sim · αl, si1si2 . . . simsl · αk, si1si2 . . . simslsk · αl},

showing that the two expressions have the same root set.

(vi) (=⇒) Suppose α ∈ RootL(si1 si2 . . . sin ). Then, by definition,

α = si1si2 . . . sik−1 · αik for some 1 ≤ k ≤ n. By Proposition 3.2.5,

tαw = si1si2 . . . sik−1 tαik sik−1 . . . si2si1 w

= si1si2 . . . sik−1 sik sik−1 . . . si2si1 si1si2 . . . sin

= si1si2 . . . sik−1 sik+1 . . . sin,

where clearly ℓ(tαw) < ℓ(w).

(⇐=) Suppose ℓ(tαw) < ℓ(w). By Corollary 3.1.18, we see that

tα = si1si2 . . . sij . . . si2si1 for some 1 ≤ j ≤ n.

Hence, by Proposition 3.2.5 again,

tα = t_{si1si2 . . . sij−1 · αij},

which implies

α = si1si2 . . . sij−1 · αij ∈ RootL(si1si2 . . . sin),

as desired. QED

Chapter 4

From the Weyl Group of sln(C) to the Braid Group Bn

The aim of this chapter is to establish a correct notion of linear braid, which will lead to a proof of Conjecture 0.0.2, also known as Theorem 4.3.25. We remind the reader of the isomorphism between W(sln(C)) and Sn (Theorem 2.2.10). From now on, we will frequently refer to the Weyl group of sln(C) as W.

In Section 4.1, we introduce the braid group. We identify a few kinds of elements, such as the positive expressions, the negative expressions, and the Garside generators. Note that there is a canonical surjective homomorphism π from the braid group Bn to the Weyl group W, sending both σi and σi⁻¹ to si. Subsequently, it makes sense to define a positive lift and a negative lift of an expression for a word in W.

In Section 4.2, we arrange the roots of an expression for a word w in W into a positive set and a negative set, according to a certain rule on the expression for a braid which is a lift of w. We then record some easy consequences of the definition.

In Section 4.3, we start by giving a proper definition of a linear braid. A few important results are proven here. Corollary 4.3.14 states that all linear braids must be minimal length lifts from the Weyl group whereas Corollary 4.3.15 states that all reduced expressions for π(β) have lifts to expressions for β. Finally, the most prominent theorem proven in this thesis is Theorem 4.3.25.

4.1 Definitions

Let us make some definitions to set up the chapter.

Definition 4.1.1. The braid group Bn is the group with presentation

⟨σi, 1 ≤ i ≤ n − 1 | σiσi+1σi = σi+1σiσi+1, σiσj = σjσi for |i − j| > 1⟩.

The generators σi are known as the Artin generators.

Remark 4.1.2. Notice that one difference between Bn and Sn is that the generators of Bn are not their own inverses.

Notation 4.1.3. An element β in the braid group Bn is called a braid. A braid β has at least one expression β. Similarly to the symmetric group, we differentiate the braids and their expressions with distinct symbols.

Example 4.1.4. In B3, β = σ1σ2⁻¹σ1 has only one expression β = σ1σ2⁻¹σ1. But β = σ1σ2σ1 has two expressions, namely σ1σ2σ1 and σ2σ1σ2.

Definition 4.1.5. A positive expression is of the form

β+ = σi1σi2 . . . σim for some i1, . . . , im,

and a negative expression is of the form

β− = σi1⁻¹σi2⁻¹ . . . σim⁻¹ for some i1, . . . , im.

Definition 4.1.6. Every element β ∈ Bn can be expressed as a product of generators:

β = σi1^ε1 σi2^ε2 . . . σim^εm with ε1, . . . , εm ∈ {±1}.

If m is minimal among all such expressions β for β, we write

ℓ(β) = m,

and m is the length (sometimes written ℓArtin) of β.

Definition 4.1.7. A preimage βw of w ∈ W under the surjective homomorphism π is called a lift. A lift βw is positive if it has a positive expression. A lift βw is negative if it has a negative expression.

Remark 4.1.8. Every w ∈ W has a positive lift and a negative lift. Any expression β is a lift of w = π(β) in W.

Definition 4.1.9. A positive (respectively negative) lift βw is a positive (respectively negative) Garside generator if its positive (respectively negative) expression can be written minimally such that ℓArtin(βw) = ℓ(w). By convention, the identity element e in Bn is considered as both a positive and a negative Garside generator.

Remark 4.1.10. Note that the Artin generators of Bn are also Garside generators.

Notation 4.1.11. Motivated by Remark 4.1.8, given an element w ∈ W, we associate to it the positive Garside generator βw+ and the negative Garside generator βw−.

Example 4.1.12. Let s1s2 be the expression for w ∈ W. Then βw+ has a positive expression σ1σ2, and βw− has a negative expression σ1⁻¹σ2⁻¹.

On the other hand, σ1σ1σ2 cannot be an expression for any positive Garside generator βw+, since ℓ(π(σ1σ1σ2)) = 1 ≠ 3, the length of the expression.

Remark 4.1.13. Note that a positive Garside generator βw+ has a canonical minimal length positive expression βw+, although it need not be unique. The canonical positive expression can be written as a product of ℓ(π(β)) generators. For example, σ1σ2σ1 and σ2σ1σ2 are two different expressions representing the same positive Garside generator. The same story applies to a negative Garside generator.

4.2 Positive and Negative Root Sets for Elements of the Braid Group

Now, let Bn denote the braid group, with Artin generators σi±1. Given an expression

β = σi1^ε1 σi2^ε2 . . . σik^εk,

we define π(β) = si1 . . . sik to be the corresponding expression in W.

Definition 4.2.1. We define the left positive and negative root sets of the expression β by partitioning the left root set RootL(π(β)) according to the signs εl:

Root+L(β) = {si1si2 . . . sil−1 · αil | εl = 1},

Root−L(β) = {si1si2 . . . sil−1 · αil | εl = −1}.

The right positive and negative root sets Root±R(β) are defined analogously, by partitioning RootR(π(β)) according to the signs εl.
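This partition is again mechanical to compute. In the sketch below (naming is mine, not from the thesis), a braid expression is a list of pairs (i, ε) with ε = ±1, roots are vectors in Z^n with αi = ei − ei+1, and the two sets are read off from the signs:

```python
def alpha(i, n):
    v = [0] * n; v[i - 1], v[i] = 1, -1; return tuple(v)

def act_word(word, root):
    for i in reversed(word):
        r = list(root); r[i - 1], r[i] = r[i], r[i - 1]; root = tuple(r)
    return root

def signed_root_sets(expr, n):
    """Root+L and Root−L of a braid expression [(i1, ε1), ..., (ik, εk)]."""
    pos, neg = set(), set()
    word = [i for i, _ in expr]
    for l, (i, eps) in enumerate(expr):
        root = act_word(word[:l], alpha(i, n))  # s_{i1}···s_{i(l−1)} · α_{il}
        (pos if eps == 1 else neg).add(root)
    return pos, neg

# σ1 σ2⁻¹ in B3: Root+L = {α1}, Root−L = {s1·α2} = {α1 + α2}
pos, neg = signed_root_sets([(1, 1), (2, -1)], 3)
print(pos, neg)  # {(1, -1, 0)} {(1, 0, -1)}
```

The underlying word is the same as for π(β); only the sign of each exponent decides which of the two sets receives the corresponding root.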

For an expression β = σi1^ε1 σi2^ε2 . . . σin^εn, let

r(β) = σin^εn σin−1^εn−1 . . . σi1^ε1

denote the reverse of β, let

β⁻¹ = σin^−εn σin−1^−εn−1 . . . σi1^−ε1,

and let

−β = σi1^−ε1 σi2^−ε2 . . . σin^−εn.

Note that β⁻¹ = r(−β) = −r(β).

Proposition 4.2.2. We have the following properties of root sets:

(i) Root±L(β) = Root±R(r(β)).

(ii) Root±L(β) = Root∓R(β⁻¹).

(iii) Root±L(β) = Root∓L(−β).

Proof. Write β = σi1^ε1 σi2^ε2 . . . σin^εn. These follow easily from the definitions. We have

Root+R(r(β)) = {si1si2 . . . sil−1 · αil | εl = 1} by definition
= Root+L(β),

Root+R(β⁻¹) = {si1si2 . . . sil−1 · αil | εl = −1} by the construction of β⁻¹
= Root−L(β),

Root+L(−β) = {si1si2 . . . sil−1 · αil | εl = −1} by the construction of −β
= Root−L(β).

Their negative counterparts can be proved similarly. QED

Proposition 4.2.3. Let β1, β2 be expressions in the braid group, and let β1 β2 denote their concatenation. Then

Root±L(β1β2) = Root±L(β1) ∪ π(β1) · Root±L(β2), where π(β1)· denotes the action of the element of the Weyl group on roots.

Proof. Write β1 = σi1^ε1 σi2^ε2 . . . σim^εm and β2 = σj1^δ1 σj2^δ2 . . . σjn^δn. Then π(β1) = si1si2 . . . sim and π(β2) = sj1sj2 . . . sjn. Compute

RootL(β1β2) = {αi1, si1 · αi2, . . . , si1si2 . . . sim−1 · αim,
si1 . . . sim · αj1, si1si2 . . . simsj1 · αj2, . . . , si1si2 . . . simsj1sj2 . . . sjn−1 · αjn}

= {αi1, si1 · αi2, . . . , si1si2 . . . sim−1 · αim,
π(β1) · αj1, π(β1) · (sj1 · αj2), . . . , π(β1) · (sj1sj2 . . . sjn−1 · αjn)}

= RootL(β1) ∪ π(β1) · RootL(β2).

To obtain the proposition, we just sort the left roots into the positive and negative root sets by imposing the appropriate conditions on the signs of the exponents. QED
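The concatenation formula can be checked numerically; the sketch below (helper names are mine, with root_L returning a set) verifies RootL(β1β2) = RootL(β1) ∪ π(β1)·RootL(β2) for a sample pair of words in sl4(C):

```python
def alpha(i, n):
    v = [0] * n; v[i - 1], v[i] = 1, -1; return tuple(v)

def act_word(word, root):
    for i in reversed(word):
        r = list(root); r[i - 1], r[i] = r[i], r[i - 1]; root = tuple(r)
    return root

def root_L(word, n):
    """Left root set of the expression, as a set of vectors in Z^n."""
    return {act_word(word[:k], alpha(word[k], n)) for k in range(len(word))}

w1, w2, n = [1, 3, 2], [2, 1], 4
lhs = root_L(w1 + w2, n)                                  # RootL(β1 β2)
rhs = root_L(w1, n) | {act_word(w1, r) for r in root_L(w2, n)}  # RootL(β1) ∪ π(β1)·RootL(β2)
assert lhs == rhs
```

The check works for any pair of words, since each root of the concatenation coming from a letter of the second word is the π(β1)-image of the corresponding root of β2.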

4.3 Separated Root Sets and Linear Braids

Definition 4.3.1. We say that two subsets X, Y ⊂ ∆ of roots are separated if

∑_{α∈X} nα α = ∑_{γ∈Y} mγ γ with nα, mγ ≥ 0 for all α, γ =⇒ nα = mγ = 0 for all α, γ.

If X or Y is an empty set, then the sum will be zero by convention.

Remark 4.3.2. So in a pair of separated subsets, the non-negative cones defined by the subsets intersect only at the origin.
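For small root sets, separatedness can be decided by searching for a non-trivial non-negative relation. The brute-force sketch below (my own naming) only searches integer coefficients up to a fixed bound, so a True answer is a heuristic rather than a proof; for the rank-2 examples appearing later in this chapter, however, small witnesses always exist when the sets fail to be separated.

```python
from itertools import product

def separated(X, Y, bound=3):
    """Heuristically test whether root sets X, Y (tuples in Z^n) are separated:
    search for a non-trivial relation sum n_a·a = sum m_c·c with 0 <= n_a, m_c <= bound."""
    X, Y = list(X), list(Y)
    n = len(X[0]) if X else (len(Y[0]) if Y else 0)
    for ns in product(range(bound + 1), repeat=len(X)):
        for ms in product(range(bound + 1), repeat=len(Y)):
            if all(c == 0 for c in ns + ms):
                continue  # the trivial relation is allowed
            lhs = [sum(c * v[k] for c, v in zip(ns, X)) for k in range(n)]
            rhs = [sum(c * v[k] for c, v in zip(ms, Y)) for k in range(n)]
            if lhs == rhs:
                return False  # found a non-trivial relation: cones overlap
    return True

a1, a2, a12 = (1, -1, 0), (0, 1, -1), (1, 0, -1)  # α1, α2, α1+α2 in sl_3
print(separated({a1}, {a12}))      # True: n·α1 = m·(α1+α2) forces n = m = 0
print(separated({a1, a2}, {a12}))  # False: α1 + α2 = α1+α2
```

The second example is exactly the failure of separatedness behind the non-linear braids in Table 4.3.1.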

We come to the fundamental definitions in this chapter.

Definition 4.3.3. An expression β = σi1^ε1 σi2^ε2 . . . σin^εn is left linear if Root+L(β) and Root−L(β) are separated. Similarly, β is right linear if Root+R(β) and Root−R(β) are separated.

Generally, the left root set and the right root set of a braid expression are not equal. The aim of the following proposition is to find the relation between the left and right root sets of the same expression β.

Proposition 4.3.4. Let β be an expression in the braid group. Then,

π(r(β)) · RootL(β) = −RootR(β).

From this proposition, we can pass between the left root set and the right root set of an expression.

Proof. Write β = σi1^ε1 σi2^ε2 . . . σin^εn. Take any α ∈ RootL(β). Then α = si1si2 . . . sij−1 · αij for some 1 ≤ j ≤ n. Consider

π(r(β)) · α = sinsin−1 . . . si1 si1si2 . . . sij−1 · αij

= sinsin−1 . . . sij · αij

= sinsin−1 . . . sij+1 · (−αij)

= −sinsin−1 . . . sij+1 · αij.

Hence π(r(β)) · α ∈ −RootR(β), which implies π(r(β)) · RootL(β) ⊆ −RootR(β).

To show the other inclusion, it is equivalent to show −RootL(β) ⊇ π(β) · RootR(β). But this can be done using a similar argument as before. QED

To show the other inclusion, it is equivalent to show −RootL(β) ⊇ π(β) · RootR(β). But, this can be done using a similar argument as before. QED

Remark 4.3.5. A similar argument can show that −RootL(β) = π(β) · RootR(β).

The following proposition allows us to move freely between the left root set and right root set of an expression β.

Proposition 4.3.6. Suppose β is an expression in the braid group. If

∑_{α∈Root+L(β)} nα α = ∑_{γ∈Root−L(β)} mγ γ with nα, mγ ≥ 0 for all α, γ,

then there exist nα′, mγ′ for all α′, γ′ such that

∑_{α′∈Root+R(β)} nα′ α′ = ∑_{γ′∈Root−R(β)} mγ′ γ′.

Proof. We just apply π(r(β)) to the equation ∑_{α∈Root+L(β)} nα α = ∑_{γ∈Root−L(β)} mγ γ. By the computation in Proposition 4.3.4, the equation becomes

∑_{α′∈Root+R(β)} −nα α′ = ∑_{γ′∈Root−R(β)} −mγ γ′.

So, trivially,

∑_{α′∈Root+R(β)} nα α′ = ∑_{γ′∈Root−R(β)} mγ γ′,

where we can simply take nα′ = nα and mγ′ = mγ. QED

Remark 4.3.7. Proposition 4.3.6 is equally true if we start with an equation concerning only the right positive and negative root sets, that is, if

∑_{α∈Root+R(β)} nα α = ∑_{γ∈Root−R(β)} mγ γ with nα, mγ ≥ 0 for all α, γ,

then there exist nα′, mγ′ for all α′, γ′ such that

∑_{α′∈Root+L(β)} nα′ α′ = ∑_{γ′∈Root−L(β)} mγ′ γ′.

To see this, apply Remark 4.3.5.

Corollary 4.3.8. An expression β is left linear if and only if it is right linear.

Proof. This is an immediate consequence of Proposition 4.3.6 and Remark 4.3.7, by considering the contrapositive of each statement. QED

Therefore, it makes sense to define the following. Definition 4.3.9. We say a braid β is linear if β has a left (or, equivalently, right) linear expression.

S3       B3             Root+L(β)           Root−L(β)           Garside generator   Linear braid
s1       σ1             {α1}                ∅                   Yes                 Yes
         σ1⁻¹           ∅                   {α1}                Yes                 Yes
s2       σ2             {α2}                ∅                   Yes                 Yes
         σ2⁻¹           ∅                   {α2}                Yes                 Yes
s1s2     σ1σ2           {α1, α1+α2}         ∅                   Yes                 Yes
         σ1⁻¹σ2⁻¹       ∅                   {α1, α1+α2}         Yes                 Yes
         σ1σ2⁻¹         {α1}                {α1+α2}             No                  Yes
         σ1⁻¹σ2         {α1+α2}             {α1}                No                  Yes
s2s1     σ2σ1           {α2, α1+α2}         ∅                   Yes                 Yes
         σ2⁻¹σ1⁻¹       ∅                   {α2, α1+α2}         Yes                 Yes
         σ2σ1⁻¹         {α2}                {α1+α2}             No                  Yes
         σ2⁻¹σ1         {α1+α2}             {α2}                No                  Yes
s1s2s1   σ1σ2σ1         {α1, α1+α2, α2}     ∅                   Yes                 Yes
         σ1⁻¹σ2σ1       {α1+α2, α2}         {α1}                No                  Yes
         σ1⁻¹σ2⁻¹σ1     {α2}                {α1, α1+α2}         No                  Yes
         σ1⁻¹σ2⁻¹σ1⁻¹   ∅                   {α1, α1+α2, α2}     Yes                 Yes
         σ1σ2⁻¹σ1⁻¹     {α1}                {α1+α2, α2}         No                  Yes
         σ1σ2σ1⁻¹       {α1, α1+α2}         {α2}                No                  Yes
         σ1⁻¹σ2σ1⁻¹     {α1+α2}             {α1, α2}            No                  No
         σ1σ2⁻¹σ1       {α1, α2}            {α1+α2}             No                  No
s2s1s2   σ2σ1σ2         {α2, α1+α2, α1}     ∅                   Yes                 Yes
         σ2⁻¹σ1σ2       {α1+α2, α1}         {α2}                No                  Yes
         σ2⁻¹σ1⁻¹σ2     {α1}                {α2, α1+α2}         No                  Yes
         σ2⁻¹σ1⁻¹σ2⁻¹   ∅                   {α2, α1+α2, α1}     Yes                 Yes
         σ2σ1⁻¹σ2⁻¹     {α2}                {α1+α2, α1}         No                  Yes
         σ2σ1σ2⁻¹       {α2, α1+α2}         {α1}                No                  Yes
         σ2⁻¹σ1σ2⁻¹     {α1+α2}             {α2, α1}            No                  No
         σ2σ1⁻¹σ2       {α2, α1}            {α1+α2}             No                  No

Table 4.3.1: The braids in B3.

Example 4.3.10. Table 4.3.1 lists the linear braids in the braid group B3. It also indicates which braids are Garside generators. Note that the table lists expressions in B3, so some of the expressions represent the same element of B3. We can see that the linear braids of length 3 have two expressions. This is verified in a more general setting later (Corollary 4.3.15).
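Combining the signed root set computation with the separatedness test gives a mechanical linearity check that reproduces the table. As an illustration (all helper names are mine, and the separatedness search is the bounded heuristic described earlier), the sketch below checks one linear and one non-linear row of the s1s2s1 block:

```python
from itertools import product

def alpha(i, n):
    v = [0] * n; v[i - 1], v[i] = 1, -1; return tuple(v)

def act_word(word, root):
    for i in reversed(word):
        r = list(root); r[i - 1], r[i] = r[i], r[i - 1]; root = tuple(r)
    return root

def signed_root_sets(expr, n):
    """Root+L and Root−L of a braid expression [(i1, ε1), ..., (ik, εk)]."""
    pos, neg = set(), set()
    word = [i for i, _ in expr]
    for l, (i, eps) in enumerate(expr):
        (pos if eps == 1 else neg).add(act_word(word[:l], alpha(i, n)))
    return pos, neg

def separated(X, Y, bound=3):
    """Bounded search for a non-trivial non-negative relation between X and Y."""
    X, Y = list(X), list(Y)
    dim = len((X + Y)[0])
    for ns in product(range(bound + 1), repeat=len(X)):
        for ms in product(range(bound + 1), repeat=len(Y)):
            if any(ns + ms) and \
               all(sum(c * v[k] for c, v in zip(ns, X)) ==
                   sum(c * v[k] for c, v in zip(ms, Y)) for k in range(dim)):
                return False
    return True

def left_linear(expr, n):
    return separated(*signed_root_sets(expr, n))

print(left_linear([(1, 1), (2, 1), (1, -1)], 3))   # σ1σ2σ1⁻¹: True
print(left_linear([(1, -1), (2, 1), (1, -1)], 3))  # σ1⁻¹σ2σ1⁻¹: False
```

The non-linear case fails because of the relation α1 + α2 = (α1) + (α2) between its two root sets, exactly as recorded in the table.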

Here are some basic facts about linear braids.

Definition 4.3.11. A subexpression of an expression β = σi1^ε1 σi2^ε2 . . . σin^εn is an expression of the form σir^εr σir+1^εr+1 . . . σis^εs, where 1 ≤ r ≤ s ≤ n.

Proposition 4.3.12. Any subexpression of a left linear expression β is left linear.

Proof. Let σi1^ε1 σi2^ε2 . . . σin^εn be a left linear expression β. Suppose, to the contrary, that there exists some subexpression σij^εj σij+1^εj+1 . . . σik^εk, for some 1 ≤ j ≤ k ≤ n, which is not left linear. Then, by definition,

Root+L(σij^εj . . . σik^εk) and Root−L(σij^εj . . . σik^εk)

are not separated. It follows that

π(σi1^ε1 . . . σij−1^εj−1) · Root+L(σij^εj . . . σik^εk) and π(σi1^ε1 . . . σij−1^εj−1) · Root−L(σij^εj . . . σik^εk)

are not separated. Note that

RootL(β) = RootL(σi1^ε1 . . . σij−1^εj−1) ∪ π(σi1^ε1 . . . σij−1^εj−1) · RootL(σij^εj . . . σik^εk) ∪ π(σi1^ε1 . . . σik^εk) · RootL(σik+1^εk+1 . . . σin^εn).

Since the argument above then shows that Root+L(β) and Root−L(β) are not separated, β is not a left linear expression, a contradiction. QED

Theorem 4.3.13. Let β be a linear braid. Then any left linear expression σi1^ε1 . . . σim^εm for β can be transformed into a left linear expression of minimal length by a sequence of moves

σiσi⁻¹ = σi⁻¹σi = e,

σiσjσi = σjσiσj, σiσjσi⁻¹ = σj⁻¹σiσj, σiσj⁻¹σi⁻¹ = σj⁻¹σi⁻¹σj, σi⁻¹σj⁻¹σi⁻¹ = σj⁻¹σi⁻¹σj⁻¹, if |i − j| = 1,

σiσj = σjσi, σiσj⁻¹ = σj⁻¹σi, σi⁻¹σj⁻¹ = σj⁻¹σi⁻¹, if |i − j| > 1.

Proof. Given any left linear expression β = σi1^ε1 . . . σim^εm, look at the expression si1 . . . sim for π(β), which is not necessarily reduced. By Theorem 3.1.28, there exists a sequence of nil-moves and braid-moves transforming the expression for π(β) into a reduced expression.

We claim that every such move lifts to a valid move in Bn. Note that a nil-move sisi = 1 lifts to either σiσi⁻¹ = 1 or σi⁻¹σi = 1. If it were lifted to σiσi (or σi⁻¹σi⁻¹), then β would not be linear in the first place by Proposition 4.3.12, as it would contain a subexpression that is not left linear.

Consider the following braid moves. For |i − j| = 1, sisjsi = sjsisj is lifted to

σiσjσi = σjσiσj, σi⁻¹σj⁻¹σi⁻¹ = σj⁻¹σi⁻¹σj⁻¹, σiσjσi⁻¹ = σj⁻¹σiσj or σiσj⁻¹σi⁻¹ = σj⁻¹σi⁻¹σj,

and for |i − j| > 1, sisj = sjsi is lifted to

σiσj = σjσi, σi⁻¹σj⁻¹ = σj⁻¹σi⁻¹, σi⁻¹σj = σjσi⁻¹ or σiσj⁻¹ = σj⁻¹σi.

If sisi+1si = si+1sisi+1 or sisj = sjsi for |i − j| > 1 is not lifted to one of these linear cases, then β contains a subexpression which is not left linear, which means β is not left linear by Proposition 4.3.12, contradicting the assumption that β is left linear. Since every nil-move and braid-move on the expression for π(β) lifts to one of the moves stated in the theorem, every left linear expression for β can be transformed into a left linear expression of minimal length by a sequence of such moves. QED

As a corollary, we see that linear braids are all minimal length lifts from W.

Corollary 4.3.14. Suppose β = σi1^ε1 σi2^ε2 . . . σin^εn is a left linear expression for the linear braid β. Then ℓArtin(β) = ℓ(π(β)). (In other words, linear braids must be minimal length lifts from the Weyl group.)

Proof. First, we see that it is always the case that ℓArtin(β) ≥ ℓ(π(β)), since the Weyl group has an extra relation, namely si² = e.

We also have ℓArtin(β) ≤ ℓ(π(β)) by Theorem 4.3.13, because any moves introduced to reduce the expression for π(β) lift to cancellation moves in the expression for β.

Combining the two inequalities, we obtain `Artin(β) = `(π(β)), as desired. QED

Corollary 4.3.15. Let β be a linear braid, and let π(β) ∈ W be the image of β in the

Weyl group W. Then for all reduced expressions si1si2 . . . sin for π(β), there exist signs η1, η2, . . . , ηn such that β = σi1^η1 σi2^η2 . . . σin^ηn. (That is, all reduced expressions for π(β) have lifts to expressions for β.)

Proof. By Corollary 4.3.14, β = σh1^δ1 σh2^δ2 . . . σhn^δn for some h1, h2, . . . , hn and signs δ1, δ2, . . . , δn, where sh1sh2 . . . shn is a reduced expression for π(β). If si1si2 . . . sin is another reduced expression for π(β), then si1si2 . . . sin and sh1sh2 . . . shn are connected by a sequence of braid-moves by Theorem 3.1.28. By the proof of Theorem 4.3.13, these moves lift to valid braid-moves on the expression for the linear braid β, so there exist signs η1, η2, . . . , ηn such that β = σi1^η1 σi2^η2 . . . σin^ηn, for any reduced expression si1si2 . . . sin for π(β). QED

Suppose β1 and β2 are linear braids. We now look for a condition on the positive and negative root sets which tells us when the product β1β2 is linear.

Proposition 4.3.16. Suppose β1 and β2 are left linear expressions for linear braids. Then β1β2 is linear if and only if Root+L(β2) ∪ Root−R(β1) and Root+R(β1) ∪ Root−L(β2) are separated.

Proof. From Proposition 4.2.3, we know that β1 β2 is left linear if and only if

Root+L(β1) ∪ π(β1) · Root+L(β2) and Root−L(β1) ∪ π(β1) · Root−L(β2)

are separated. By Proposition 4.3.4, we know that this is true if and only if

−Root+R(β1) ∪ Root+L(β2) and −Root−R(β1) ∪ Root−L(β2)

are separated. But this is equivalent to saying that

Root+L(β2) ∪ Root−R(β1) and Root+R(β1) ∪ Root−L(β2)

are separated. QED

If we consider the product of a positive Garside generator and a negative Garside generator, we have the following special case of Proposition 4.3.16.

Corollary 4.3.17. Suppose βx+ is a positive Garside generator and βy− is a negative Garside generator. Then βx+βy− and βy−βx+ are linear.

Proof. To show that they are linear, we need to exhibit a left linear expression. Consider, as guaranteed by the definition, the minimal length positive expression for βx+ and the minimal length negative expression for βy−. By Proposition 4.3.16, to show that βx+βy− is linear, it suffices to check that

Root+L(βy−) ∪ Root−R(βx+) and Root+R(βx+) ∪ Root−L(βy−)

are separated. By construction of βx+ and βy−, the set Root+L(βy−) ∪ Root−R(βx+) is empty. In addition, Root+R(βx+) ∪ Root−L(βy−) consists entirely of positive roots by Proposition 3.3.2(iii). Hence, the separatedness condition is satisfied and βx+βy− is a left linear expression. A similar argument applies to βy−βx+. QED

Proposition 4.3.18. Suppose w0 is the longest element in W. If w0 = xy with ℓ(w0) = ℓ(x) + ℓ(y), then βw0+ = βx+βy+ and βw0− = βx−βy−.

Proof. This is clear from the definition of Garside generators. QED

70 Proposition 4.3.19. The following are equivalent:

(1) β has an expression βx+βy− for some x and y in W.

(2) β has an expression βu−βv+ for some u and v in W.

Proof. (1) =⇒ (2) Let w0 be the element of longest length in the Weyl group W and βw0+ be the associated positive Garside generator. By the property of the element of longest length, we can find z ∈ W such that z = w0x⁻¹, with positive Garside generator βz+, and v ∈ W such that v = w0y, with positive Garside generator βv+. Then, by Proposition 4.3.18,

β = βx+βy−

= (βz+)⁻¹βz+βx+βy−

= (βz+)⁻¹βw0+βy−

= (βz+)⁻¹βv+,

where we have written β in the required arrangement. This proves that (1) =⇒ (2). To get (1) ⇐= (2), we apply a similar argument. QED

Definition 4.3.20. We define a set EL+(β) for a braid as follows:

EL+(β) := {σi | σiβ is not linear},

and similarly, we define a set ER−(β) for a braid as follows:

ER−(β) := {σi⁻¹ | βσi⁻¹ is not linear}.

Recall the definition of the descents of Weyl group W. Let S be the generating set for W. For w ∈ W, the left descent set DL(w) := TL(w) ∩ S and the right descent set DR(w) := TR(w) ∩ S.

Definition 4.3.21. Analogously, a left descent set DL(β) for a braid is defined as

DL(β) := {σj | β has a minimal length expression β = σjσi1^ε1 σi2^ε2 . . . σim^εm},

and a right descent set DR(β) for a braid is defined as

DR(β) := {σk | β has a minimal length expression β = σi1^ε1 σi2^ε2 . . . σim^εm σk}.

Lemma 4.3.22. Let β be a left linear expression for β, and let βx+βy− be a subexpression of it with DR(βx+) ∩ DL(βy−) = ∅, such that there is no internal cancellation of terms in the subexpression. Suppose βx+βy− = βu−βv+ with DR(βu−) ∩ DL(βv+) = ∅. Then the expression β′ given by the replacement of βx+βy− by βu−βv+ in the expression β for β is left linear.

Proof. We prove by induction on the lengths of βx+ and βy−. It actually suffices to check the statement when βx+ = σi and βy− = σj⁻¹. If σiσj⁻¹ = σj⁻¹σi, the claim is obvious. Otherwise, σiσj⁻¹ = σj⁻¹σi⁻¹σjσi, so that β = wσiσj⁻¹z and β′ = wσj⁻¹σi⁻¹σjσiz. Then,

Root+L(β) = Root+L(w) ∪ π(w)·{αi} ∪ π(wσiσj⁻¹) · Root+L(z),

Root−L(β) = Root−L(w) ∪ π(w)·{αi + αj} ∪ π(wσiσj⁻¹) · Root−L(z),

Root+L(β′) = Root+L(w) ∪ π(w)·{αi, −αj} ∪ π(wσiσj⁻¹) · Root+L(z), and

Root−L(β′) = Root−L(w) ∪ π(w)·{αj, αi + αj} ∪ π(wσiσj⁻¹) · Root−L(z).

Note that these sets differ only in that β′ has two more roots than β in its root sets.

Suppose that Root+L(β′) and Root−L(β′) are not separated. Acting by (π(w))⁻¹, we get an equation relating Root+L(β′) and Root−L(β′) of the form

r+ + n1 αi + n2 (−αj) = r− + m1 αj + m2 (αi + αj)

for n1, n2, m1, m2 ∈ N, where r+ is some non-negative linear combination of the other roots in Root+L(β′) and r− is some non-negative linear combination of the other roots in Root−L(β′). After rearranging, we get

r+ + (n1 + m1 + n2) αi = r− + (m1 + n2 + m2)(αi + αj).

After applying π(w) to the equation above, it yields a non-trivial relation between Root+L(β) and Root−L(β), which means β is not linear, a contradiction. QED

Proposition 4.3.23. Given a braid β with an expression β = βx+βy− with DR(βx+) ∩ DL(βy−) = ∅, such that there is no internal cancellation of terms in the expression, where βx+ = σj1σj2 . . . σjt is a positive Garside generator from W and βy− = σl1⁻¹σl2⁻¹ . . . σlk⁻¹ is a negative Garside generator from W.

Then the elements of EL+(β) correspond under the map π exactly to the elements of DL(π(βx+)); in other words, π(EL+(β)) = DL(π(βx+)).

Proof. To show that DL(π(βx+)) ⊂ π(EL+(β)), suppose si ∈ DL(π(βx+)), that is, π(βx+) has a reduced expression starting with si. We want to show that si ∈ π(EL+(β)), that is, σi or σi⁻¹ is in EL+(β). Since π(βx+) has an expression starting with si, βx+ also has a minimal length positive expression starting with σi, that is, βx+ = σiβ′+ for some β′+ with ℓ(β′+) = ℓ(βx+) − 1. Then, σiβ has an expression

σiβx+βy− = σiσiβ′+βy−,

which is not left linear as αi − αi = 0. But, by Lemma 4.3.22, σiβ cannot have a left linear expression at all, so σi ∈ EL+(β) and the claim is proved.

For the converse, suppose σiβx+βy− is not left linear. Then σiβx+ cannot be a positive Garside generator, for otherwise σiβx+βy− would be linear by Corollary 4.3.17. Subsequently, σiβx+ is not a minimal length positive lift from W. Then,

ℓ(π(σiβx+)) < ℓ(π(βx+)).

As π(σiβx+) = siπ(βx+), we obtain si ∈ DL(π(βx+)) by Corollary 3.1.22. QED

Remark 4.3.24. Given the same conditions as in Proposition 4.3.23, one can similarly show that π(ER−(β)) = DR(π(βy−)).

The following is one of the main theorems in this document. It relates the linear braids to the Garside generators.

Theorem 4.3.25. The following are equivalent:

(1) β is linear.

(2) β has an expression βx+βy− for some x and y in W.

(3) β has an expression βu−βv+ for some u and v in W.

Proof. (2) ⇐⇒ (3) It follows from Proposition 4.3.19.

(1) =⇒ (2) We will prove by induction on the length of β. Suppose β is linear.  Base case: If β = σi is of length 1, then it is in the desired form. + − Inductive hypothesis: Suppose that when `(β) ≤ n we can write β = βx βy , where β+ = σ σ . . . σ is a positive Garside generator from W and β− = σ−1σ−1 . . . σ−1 x j1 j2 jt y l1 l2 lk is a negative Garside generator from W when β = σ1 σ2 . . . σn is a minimal length i1 i2 in expression for `(β). Inductive step: Suppose now β is a linear braid with `(β) = n + 1 having a minimal length expression β = σ1 σ2 . . . σn+1 . i1 i2 in+1 Assume first  = 1. Then, β = σ β0 with σ2 . . . σn+1 being an expression β0 and 1 i1 i2 in+1 `(β0) < `(β). Notice that any subexpression of β is left linear by Proposition 4.3.12. β0 is also a minimal length expression since otherwise β would not be a minimal length + − 0 expression at first. So, by inductive hypothesis, βx βy is an expression for β . Note + 0 + that σi1 ∈/ EL (β ). If not, β is not linear. By Proposition 4.3.23, σi1 ∈/ DL(βx ). Hence, + + − σi1 βx is a positive Garside generator and β has an expression σi1 βx βy which is a product of a positive Garside generator and a negative Garside generator. On the other hand, if  = −1, we can use a similar induction argument on (1) =⇒ (3). − + − + So that, we can write β = βu βv where βu is a negative Garside generator and βv is a

73 positive Garside generator. By the earlier proof of (2) ⇐⇒ (3), we can get (2) eventually.

(2) =⇒ (1) This follows from Corollary 4.3.17.

QED

Chapter 5

Presentations for the Braid Group and their Length Functions

Naturally, we have three families of generators to consider for the braid group. Recall the definition of the braid group Bn:

Bn := ⟨σi, 1 ≤ i ≤ n − 1 | σiσi+1σi = σi+1σiσi+1, σiσj = σjσi for |i − j| > 1⟩.

Those σi's are known as the Artin generators. The other two generating candidates, namely the Garside generators and the linear braids, were met in Chapter 4. In B3, say, we are claiming that the sets

{σ1, σ1⁻¹, σ2, σ2⁻¹, σ1σ2, σ1⁻¹σ2⁻¹, σ2σ1, σ2⁻¹σ1⁻¹, σ1σ2σ1, σ1⁻¹σ2⁻¹σ1⁻¹, σ2σ1σ2, σ2⁻¹σ1⁻¹σ2⁻¹}

and

{σ1, σ1⁻¹, σ2, σ2⁻¹, σ1σ2, σ1⁻¹σ2⁻¹, σ1σ2⁻¹, σ1⁻¹σ2, σ2σ1, σ2⁻¹σ1⁻¹, σ2σ1⁻¹, σ2⁻¹σ1, σ1σ2σ1, σ1⁻¹σ2σ1, σ1⁻¹σ2⁻¹σ1, σ1⁻¹σ2⁻¹σ1⁻¹, σ2σ1σ2, σ2⁻¹σ1σ2, σ2⁻¹σ1⁻¹σ2, σ2⁻¹σ1⁻¹σ2⁻¹}

both generate the braid group. Obviously, this should be done by imposing the right relations. So, we define two abstract groups B̃n and B̂n to treat the isomorphisms formally. Finally, we relate the length function in the Garside generators and the length function in the linear generators.

In Section 5.1, we define a formal symbol for the generators of our abstract group B̃n, which is presented with a single family of relations. Then, by the isomorphism in Theorem 5.1.1, we indeed see that the generators of B̃n are the positive Garside generators.

In the same flavour, we see that the generators of B̂n are the linear braids. A point worth noting, though, is that the abstract group B̂n has far more generators and relations in its presentation than the original presentation of the braid group and than B̃n.

Section 5.3 is the final section of this thesis. Although we don't know much about the length function ℓArtin, Theorem 5.3.2 is able to recover ℓlinear from ℓGarside and vice versa, with only a minimal condition on the braid.

For this chapter, we will keep using the convention that W always refers to the Weyl group of sln(C).

5.1 The Abstract Group, B̃n

First, we want to build an abstract group B̃n. Suppose w is an element of the Weyl group W. The generators of B̃n will be symbols w⁺. Define the presentation of B̃n as follows:

Generators: {w⁺ | w ∈ W},
Relations: x⁺y⁺ = z⁺ if xy = z with ℓ(x) + ℓ(y) = ℓ(z), for x, y, z ∈ W,

that is,

B̃n := ⟨w⁺ | x⁺y⁺ = z⁺ if xy = z with ℓ(x) + ℓ(y) = ℓ(z) for x, y, z ∈ W⟩.

Theorem 5.1.1. The group B̃n is isomorphic to the braid group Bn.

Proof. First, we define a group homomorphism Φ from Bn to B̃n by

Φ : Bn → B̃n, σi ↦ si⁺.

It follows that Φ : σi⁻¹ ↦ (si⁺)⁻¹.

Moreover, we check that the relations which hold in the braid group Bn also hold in B̃n. For |i − j| > 1,

Φ(σiσj) = Φ(σi)Φ(σj)
= si⁺sj⁺
= (sisj)⁺
= (sjsi)⁺ by the relation in W
= sj⁺si⁺
= Φ(σj)Φ(σi)
= Φ(σjσi),

and for |i − j| = 1,

Φ(σiσjσi) = Φ(σi)Φ(σj)Φ(σi)
= si⁺sj⁺si⁺
= (sisj)⁺si⁺
= (sisjsi)⁺
= (sjsisj)⁺ by the relation in W
= (sjsi)⁺sj⁺
= sj⁺si⁺sj⁺
= Φ(σj)Φ(σi)Φ(σj)
= Φ(σjσiσj),

as desired.

Next, we define a group homomorphism

Ψ : B̃n → Bn

by sending w⁺ to the positive Garside generator βw+ = σi1σi2 . . . σin if w has a reduced expression si1 · · · sin. To check that Ψ is well defined, take any two reduced expressions for the element w, say w1 = si1si2 · · · sin and w2 = sj1sj2 · · · sjn. From Theorem 3.1.28, we know that they are connected by a sequence of braid-moves. Hence, their positive lifts are equal through the corresponding braid relations.

Finally, it is easy to see that Φ and Ψ are inverses of each other from the construction, so we establish an isomorphism between Bn and B̃n. QED

Remark 5.1.2. The generators of B̃n are exactly the positive Garside generators.

Definition 5.1.3. Every element β ∈ Bn can be expressed as a product of Garside generators and their inverses:

β = (w1⁺)^ε1 (w2⁺)^ε2 . . . (wn⁺)^εn with ε1, . . . , εn ∈ {±1}.

If n is minimal among all such expressions β for β, we write

ℓGarside(β) = n,

and n is the Garside length, ℓGarside, of β. By convention, ℓGarside(e) = 0.

5.2 The Abstract Group, B̂n

Let si1si2 . . . sik be a (not-necessarily reduced) expression for an element w of the Weyl group W, with left root set RootL(w). Let A = (A⁺, A⁻) be a separated splitting of RootL(w), so that

RootL(w) = A⁺ ∪ A⁻,

and (A⁺, A⁻) is separated. Associated to the splitting A, we have a linear lift of w to an element of the braid group, as we have discussed before.

Now, consider an abstract group $\widehat{B}_n$ presented as follows. The generators are $w_A$, where $w$ is an expression in the Weyl group generators and $A$ is a separated splitting of $\mathrm{Root}_L(w)$. The relations are

$(\widehat{R}1)$ $(s_i)_{(\{\alpha_i\},\emptyset)} (s_i)_{(\emptyset,\{\alpha_i\})} = e$ and $(s_i)_{(\emptyset,\{\alpha_i\})} (s_i)_{(\{\alpha_i\},\emptyset)} = e$, where $\alpha_i$ is the simple root associated to the simple reflection $s_i$;

$(\widehat{R}2)$ for $|i - j| > 1$, $(s_i s_j)_{(\{\alpha_i,\alpha_j\},\emptyset)} = (s_j s_i)_{(\{\alpha_j,\alpha_i\},\emptyset)}$;

$(\widehat{R}3)$ for $|i - j| = 1$, $(s_i s_j s_i)_{(\{\alpha_i,\alpha_i+\alpha_j,\alpha_j\},\emptyset)} = (s_j s_i s_j)_{(\{\alpha_j,\alpha_i+\alpha_j,\alpha_i\},\emptyset)}$;

$(\widehat{R}4)$ $u_A v_{A'} = (uv)_{A \cup u(A')}$ whenever $A \cup u(A')$ is separated.

Note here that the concatenated word $uv$ is an element of $W$. Also, if $A = (A^+, A^-)$ and $A' = (A'^+, A'^-)$, then
\[ A \cup w(A') := (A^+ \cup w(A'^+),\; A^- \cup w(A'^-)). \]
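The action $w(A')$ in this definition can also be computed in type $A$: the permutation $w$ sends the root $e_i - e_j$ to $e_{w(i)} - e_{w(j)}$, and when $\ell(uv) = \ell(u) + \ell(v)$ the roots in $u(\mathrm{Root}_L(v))$ stay positive. A Python sketch under that assumption (my own illustration, with roots encoded as ordered pairs as above):

```python
def act(w, root):
    """Apply a permutation w (one-line tuple) to the positive root e_i - e_j,
    assuming the image w(e_i - e_j) = e_{w(i)} - e_{w(j)} is again positive."""
    i, j = root
    a, b = w[i - 1], w[j - 1]
    assert a < b, "w sent the root (%d, %d) negative" % (i, j)
    return (a, b)

def union_splitting(A, B):
    """A cup B componentwise, for splittings A = (A+, A-) and B = (B+, B-)."""
    return (A[0] | B[0], A[1] | B[1])

# u = s_1 = (2, 1, 3) sends alpha_2 = (2, 3) to alpha_1 + alpha_2 = (1, 3),
# so Root_L(s_1 s_2) = {alpha_1} cup s_1({alpha_2}) = {(1, 2), (1, 3)}.
A = ({(1, 2)}, set())                     # a splitting of Root_L(s_1)
uA2 = ({act((2, 1, 3), (2, 3))}, set())   # u applied to a splitting of Root_L(s_2)
print(union_splitting(A, uA2))            # splitting with A+ = {(1,2),(1,3)}, A- empty
```

This is the bookkeeping behind relation $(\widehat{R}4)$: the subscript of $(uv)_{A \cup u(A')}$ is assembled from the two factor subscripts.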

Let $S^*$ be the free monoid on the generating set $S$ of $W$. Therefore,
\[ \widehat{B}_n := \langle w_A,\ w \in S^* \text{ and } A \text{ a separated splitting of } \mathrm{Root}_L(w) \mid \widehat{R}1, \widehat{R}2, \widehat{R}3, \widehat{R}4 \rangle. \]

So, $\widehat{B}_n$ has more generators (and more relations) in its presentation than $B_n$ does. Nonetheless, we have the following theorem.

Theorem 5.2.1. The group $\widehat{B}_n$ is isomorphic to the braid group $B_n$.

Proof. The idea of the proof is to define a group homomorphism
\[ \Phi \colon \widehat{B}_n \to B_n \]
by sending $w_A$ to the linear lift $\beta_w$ whose left root set splitting is $A$. The claim is that $\Phi$ is an isomorphism.

First, we show that $\Phi$ is a well-defined group homomorphism. Under the map $\Phi$,
\[ (s_i)_{(\{\alpha_i\},\emptyset)} (s_i)_{(\emptyset,\{\alpha_i\})} \mapsto \sigma_i \sigma_i^{-1} = e \quad \text{and} \quad (s_i)_{(\emptyset,\{\alpha_i\})} (s_i)_{(\{\alpha_i\},\emptyset)} \mapsto \sigma_i^{-1} \sigma_i = e, \]
so both sides of $(\widehat{R}1)$ are sent to the identity in $B_n$. Moreover, for $|i - j| > 1$,

\[ \Phi((s_i s_j)_{(\{\alpha_i,\alpha_j\},\emptyset)}) = \sigma_i \sigma_j = \sigma_j \sigma_i = \Phi((s_j s_i)_{(\{\alpha_j,\alpha_i\},\emptyset)}), \]
and for $|i - j| = 1$,
\[ \Phi((s_i s_j s_i)_{(\{\alpha_i,\alpha_i+\alpha_j,\alpha_j\},\emptyset)}) = \sigma_i \sigma_j \sigma_i = \sigma_j \sigma_i \sigma_j = \Phi((s_j s_i s_j)_{(\{\alpha_j,\alpha_i+\alpha_j,\alpha_i\},\emptyset)}). \]

Suppose $u = s_{i_1} s_{i_2} \cdots s_{i_n}$ with a separated splitting $A$ and $v = s_{j_1} s_{j_2} \cdots s_{j_m}$ with a separated splitting $A'$. Then there exist $\epsilon_1, \epsilon_2, \ldots, \epsilon_n, \eta_1, \eta_2, \ldots, \eta_m \in \{\pm 1\}$ such that
\[ \Phi(u_A v_{A'}) = \sigma_{i_1}^{\epsilon_1} \sigma_{i_2}^{\epsilon_2} \cdots \sigma_{i_n}^{\epsilon_n} \sigma_{j_1}^{\eta_1} \sigma_{j_2}^{\eta_2} \cdots \sigma_{j_m}^{\eta_m} = \Phi((uv)_{A \cup u(A')}), \]
since if $A \cup u(A')$ is a separated splitting, then the image of $(uv)_{A \cup u(A')}$ under $\Phi$ is linear, and hence its subexpressions have separated splittings of their root sets by Proposition 4.3.12.

Next, we want to define its inverse
\[ \Psi \colon B_n \to \widehat{B}_n, \]
sending $\sigma_i$ to $(s_i)_{(\{\alpha_i\},\emptyset)}$. Then, we need to check that $\Psi$ is well defined. For $|i - j| > 1$,

\begin{align*}
\Psi(\sigma_i \sigma_j) &= \Psi(\sigma_i)\Psi(\sigma_j) = (s_i)_{(\{\alpha_i\},\emptyset)} (s_j)_{(\{\alpha_j\},\emptyset)} \\
&= (s_i s_j)_{(\{\alpha_i,\alpha_j\},\emptyset)} = (s_j s_i)_{(\{\alpha_j,\alpha_i\},\emptyset)} \\
&= (s_j)_{(\{\alpha_j\},\emptyset)} (s_i)_{(\{\alpha_i\},\emptyset)} = \Psi(\sigma_j)\Psi(\sigma_i) = \Psi(\sigma_j \sigma_i),
\end{align*}
and for $|i - j| = 1$,

\begin{align*}
\Psi(\sigma_i \sigma_j \sigma_i) &= \Psi(\sigma_i)\Psi(\sigma_j)\Psi(\sigma_i) \\
&= (s_i)_{(\{\alpha_i\},\emptyset)} (s_j)_{(\{\alpha_j\},\emptyset)} (s_i)_{(\{\alpha_i\},\emptyset)} \\
&= (s_i s_j)_{(\{\alpha_i,\alpha_i+\alpha_j\},\emptyset)} (s_i)_{(\{\alpha_i\},\emptyset)} \\
&= (s_i s_j s_i)_{(\{\alpha_i,\alpha_i+\alpha_j,\alpha_j\},\emptyset)} \\
&= (s_j s_i s_j)_{(\{\alpha_j,\alpha_i+\alpha_j,\alpha_i\},\emptyset)} \\
&= (s_j s_i)_{(\{\alpha_j,\alpha_i+\alpha_j\},\emptyset)} (s_j)_{(\{\alpha_j\},\emptyset)} \\
&= (s_j)_{(\{\alpha_j\},\emptyset)} (s_i)_{(\{\alpha_i\},\emptyset)} (s_j)_{(\{\alpha_j\},\emptyset)} \\
&= \Psi(\sigma_j)\Psi(\sigma_i)\Psi(\sigma_j) = \Psi(\sigma_j \sigma_i \sigma_j),
\end{align*}
as desired.

The fact that they are inverses of each other is clear from the construction. QED

Definition 5.2.2. Every element $\beta \in B_n$ can be expressed as a product of linear generators:
\[ \beta = l_1 l_2 \cdots l_n. \]
If $n$ is minimal among all such expressions for $\beta$, we write $\ell_{\mathrm{linear}}(\beta) = n$ and call $n$ the linear length of $\beta$. By convention, $\ell_{\mathrm{linear}}(e) := 0$.

Recall from the previous chapter that the left descent set $D_L(\beta)$ of a braid is defined as
\[ D_L(\beta) := \{\sigma_j \mid \beta \text{ has a minimal length expression } \beta = \sigma_j \sigma_{i_1}^{\epsilon_1} \sigma_{i_2}^{\epsilon_2} \cdots \sigma_{i_m}^{\epsilon_m}\}, \]
and the right descent set $D_R(\beta)$ of a braid is defined as
\[ D_R(\beta) := \{\sigma_k \mid \beta \text{ has a minimal length expression } \beta = \sigma_{i_1}^{\epsilon_1} \sigma_{i_2}^{\epsilon_2} \cdots \sigma_{i_m}^{\epsilon_m} \sigma_k\}. \]
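These braid descent sets are modelled on the descent sets of the Weyl group $W = S_n$, where a left (resp. right) descent of a permutation is a simple reflection that can start (resp. end) a reduced expression. A quick Python sketch of the Weyl-group version (my own illustration, not part of the text):

```python
def right_descents(w):
    """D_R(w) for a permutation w in one-line notation:
    the indices i with w(i) > w(i+1), i.e. multiplying by s_i
    on the right shortens w."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def left_descents(w):
    """D_L(w) = D_R(w^{-1}): the s_i starting some reduced expression of w."""
    inv = [0] * len(w)
    for pos, val in enumerate(w):
        inv[val - 1] = pos + 1
    return right_descents(tuple(inv))

print(right_descents((3, 1, 2)))   # {1}
print(left_descents((3, 1, 2)))    # {2}
```

For example, $(3, 1, 2) = s_2 s_1$ in $S_3$: its reduced expression starts with $s_2$ and ends with $s_1$, matching the computed descent sets.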

5.3 Relation between $\ell_{\mathrm{linear}}$ and $\ell_{\mathrm{Garside}}$

The next proposition concerns an arbitrary braid.

Proposition 5.3.1. Any braid $\beta$ has an expression
\[ \beta = \beta_1^+ \beta_2^- = \beta_3^- \beta_4^+ \]
for some positive expressions $\beta_1^+, \beta_4^+$ and some negative expressions $\beta_2^-, \beta_3^-$.

Proof. Note that we can decompose $\beta = (w_1^+)^{\epsilon_1} (w_2^+)^{\epsilon_2} \cdots (w_k^+)^{\epsilon_k}$, $\epsilon_i \in \{\pm 1\}$, as a product of Garside generators. By repeated application of Proposition 4.3.19, we get
\[ \beta = \beta_1^+ \beta_2^- = \beta_3^- \beta_4^+ \]
for some positive expressions $\beta_1^+, \beta_4^+$ and some negative expressions $\beta_2^-, \beta_3^-$. QED

Theorem 5.3.2. Write $\beta = \beta_1^+ \beta_2^-$ as a product of a positive expression and a negative expression with $D_R(\beta_1) \cap D_L(\beta_2) = \emptyset$, so that there is no internal cancellation of terms in the expression for the braid $\beta$. Then
\[ \ell_{\mathrm{linear}}(\beta) = \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2)). \]

Proof. First, we will show that
\[ \ell_{\mathrm{linear}}(\beta) \le \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2)). \]
Suppose $\ell_{\mathrm{Garside}}(\beta_1) = m$ and $\ell_{\mathrm{Garside}}(\beta_2) = n$. Without loss of generality, assume $m \le n$. Then we can write, for some Garside generators $x_1^+, x_2^+, \ldots, x_m^+$, $(y_1^+)^{-1}, (y_2^+)^{-1}, \ldots, (y_n^+)^{-1}$,
\[ \beta = x_1^+ x_2^+ \cdots x_m^+ (y_1^+)^{-1} (y_2^+)^{-1} \cdots (y_n^+)^{-1}. \]
By repeated application of Proposition 4.3.19, we get
\[ \beta = x_1^+ (u_1^+)^{-1} v_2^+ (u_2^+)^{-1} \cdots v_m^+ (u_m^+)^{-1} (y_{m+1}^+)^{-1} \cdots (y_n^+)^{-1} \]
for some Garside generators $(u_1^+)^{-1}, \ldots, (u_m^+)^{-1}, v_2^+, \ldots, v_m^+$. From Theorem 4.3.25, we see that the first $m$ pairs of Garside generators are linear. Since the remaining $n - m$ Garside generators are clearly linear, we have
\[ \ell_{\mathrm{linear}}(\beta) \le n. \]

On the other hand, if $m > n$, a similar argument shows that
\[ \ell_{\mathrm{linear}}(\beta) \le m. \]
Therefore, $\ell_{\mathrm{linear}}(\beta) \le \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2))$. Next, we will show that

\[ \ell_{\mathrm{linear}}(\beta) \ge \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2)). \]

Suppose $\ell_{\mathrm{linear}}(\beta) = k$, so we can write, for some linear generators $l_1, l_2, \ldots, l_k$,
\[ \beta = l_1 l_2 \cdots l_k. \]

By Theorem 4.3.25, we can decompose each linear generator into a product of a positive and a negative Garside generator, that is,
\[ \beta = x_1^+ (y_1^+)^{-1} x_2^+ (y_2^+)^{-1} \cdots x_k^+ (y_k^+)^{-1}. \]
Again, by Proposition 4.3.19, we obtain
\[ \beta = x_1^+ v_2^+ \cdots v_k^+ (u_1^+)^{-1} (u_2^+)^{-1} \cdots (u_k^+)^{-1}, \]
a product of positive and negative Garside generators. From here, it is clear that $\ell_{\mathrm{Garside}}(\beta_1) \le k$ and $\ell_{\mathrm{Garside}}(\beta_2) \le k$. Hence,

\[ \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2)) \le \ell_{\mathrm{linear}}(\beta). \]

Together with the previous result, we have

\[ \ell_{\mathrm{linear}}(\beta) = \max(\ell_{\mathrm{Garside}}(\beta_1), \ell_{\mathrm{Garside}}(\beta_2)). \]

QED
