
Extensions of Borovoi’s theorem

David Hume (690100)

March 31, 2009

I warrant that the content of this dissertation is the direct result of my own work and that any use made in it of published or unpublished material is fully and correctly referenced.

Signed ...... Date ......

Contents

1 Introduction
  1.1 Manifolds
  1.2 Lie Groups and Lie Algebras
  1.3 Properties of Lie algebra
  1.4 Characterisation of semisimple Lie algebras
  1.5 Connected Lie groups
  1.6 Representations of the Lie algebra sl2(C)

2 Complex semisimple Lie algebras
  2.1 The classification theorem
  2.2 Example: sl_{n+1}(C), for n ≥ 1
  2.3 Cartan subalgebras and roots
  2.4 Abstract Root Systems
  2.5 Classification of connected abstract Dynkin diagrams
  2.6 Other examples: so_n(C) and sp_n(C)
  2.7 Existence and Uniqueness theorems

3 Semisimple Lie groups
  3.1 Example: SL_n(C), for n ≥ 2
  3.2 Haar measures
  3.3 Centralisers of Tori
  3.4 Compact real forms
  3.5 Cartan decomposition
  3.6 Iwasawa decomposition

4 (Twin) Buildings and (Twin) BN-pairs
  4.1 Buildings as W-metric spaces
  4.2 Buildings as chamber complexes
  4.3 Solomon-Tits theorem
  4.4 BN-pairs
  4.5 Twin buildings and twin BN-pairs
  4.6 Semisimple Lie groups with BN-pairs and twin BN-pairs

5 Borovoi's theorem
  5.1 Introductory Category theory
  5.2 Final topologies
  5.3 A topological version of Borovoi's theorem

6 Locally kω topological groups
  6.1 Locally kω topological spaces
  6.2 Locally kω groups

7 Kac-Moody groups
  7.1 Kac-Moody algebras
  7.2 Kac-Moody groups
  7.3 Unitary forms

8 Phan amalgams
  8.1 Borovoi's theorem for complex Kac-Moody groups
  8.2 Standard Phan amalgams
  8.3 Classification

9 Conclusion

Chapter 1

Introduction

Sophus Lie almost single-handedly began the theory of Lie groups as it is now known, in 1873. He intended to parallel, for differentiable manifolds, Évariste Galois's theory of field extensions. The main results of his work with F. Engel are today given by three theorems and their converses. Many authors refer to these as Lie's first, second and third theorems. They allow passage between certain transformation groups and their corresponding families of 'infinitesimal transformations', now known as a Lie group and its corresponding Lie algebra. Lie did not work directly with any of the classical groups, like SLn, instead studying them as groups of transformations. The idea of treating such groups systematically did not really begin until the work of Weyl. Like Sophus Lie, Weyl was inspired by pioneering work in another area, in this case the extensions of representation theory made by Schur from finite groups to other groups, such as orthogonal and unitary groups. This led to many studies concerning compact connected groups between 1924 and 1926. The next major change of focus to greet Lie's theory was that made by Élie Cartan in 1930. Firstly, he gave a proof of Lie's third theorem for global groups, rather than locally defined groups of transformations. He also wrote the first book concerning global Lie groups, which changed the emphasis of this subject; the second chapter of this 60-page book gives rise to what is now called Cartan's theorem. He also introduced the term Lie group for the first time. The term Lie algebra was first used in lecture notes by Weyl. The first chapter covers the basics of Lie theory, introducing Lie groups, Lie algebras and providing many of the fundamental results to this area of study.

The classification of complex semisimple Lie algebras is the focus of the second chapter. The original classification was mainly due to the work of Killing and Cartan in the 1890s. Many of the results were announced by Killing, but his proofs were often missing or incomplete. Cartan, with the assistance of both Engel and in particular his thesis student Umlauf, gave the first rigorous treatment of this classification. This differs greatly from any classification proof that would be given today, as it did not include a notion of a simple root; it was Weyl who introduced the lexicographical orderings needed to define simple roots. Van der Waerden provided another simplification in 1933, and another step towards the modern proof was given by Dynkin, who provided the collection of diagrams which now bear his name. Cartan did give the original proof of what is now known as the Existence theorem, but he did so case by case. A general approach was given by Witt over ten years later, provided that the result held for rank at most 4. The first complete general approach was due to Chevalley in 1948, introducing a free Lie algebra and factoring out by some ideal. In 1966, Serre improved this method by redefining the ideal more concretely using what are now called the Serre relations. His method also simplified the proof of the Isomorphism theorem, which again was originally proved by Cartan.

Much of the material covered by chapter 3 is again due to the work of Cartan and Weyl. Cartan classified the real forms of complex simple Lie algebras and by inspection saw that each complex simple Lie algebra admits exactly one compact real form. The development of the Serre relations and the new proof of the Isomorphism theorem gave a simpler proof of the existence of compact real forms. The existence and uniqueness of a

Cartan involution (up to conjugacy) is also, unsurprisingly, a work of Cartan. The Cartan decomposition on the Lie group level is due to Mostow in 1949. Cartan gave a decomposition of certain semisimple Lie groups, called the KAK decomposition. The simply connected subgroup N was introduced by Iwasawa in 1949 and the resulting decomposition was called the Iwasawa decomposition.

Jacques Tits’ book “Buildings of spherical type and finite BN-pairs”, was first published in 1974 and introduced spherical buildings and BN-pairs, as the name suggests. Chapter 4 follows this theory, giving two equivalent constructions of buildings. The correspondence between spherical buildings and BN-pairs is explored, before attention is turned to twin buildings and twin BN-pairs.

Categories were first introduced by Samuel Eilenberg and Saunders Mac Lane in the early 1940s, in work concerning algebraic topology. In his 1984 paper, "Generators and relations in compact Lie groups," Mikhail Borovoi proved what is now known as Borovoi's theorem; we will present the recent extension made in [GGH09], which proves a stronger statement concerning the topology of compact Lie groups. This represents our first extension of this theorem.

V. Kac and R. Moody independently introduced Kac-Moody algebras in their respective texts [Ka68] and [Mo68]. These algebras are in some way infinite dimensional analogues of complex semisimple Lie algebras. To these algebras we can associate corresponding Kac-Moody groups and ask whether such groups possess an analogue of Borovoi's theorem. The answer to this question is not new; however, the method of [GGH09], which preserves information concerning the relative topologies, yields this additional topological information.

Using the results in chapter 2, a new approach to groups of this form was started. The method was to take some Dynkin diagram, classify those groups possessing the structure necessary to be considered as groups with that Dynkin diagram, and determine when such groups are in fact entirely determined by certain small subgroups. The final chapter gives sufficient conditions on such a group to ascertain that it is in fact isomorphic (as an amalgam) to the Phan amalgam of a unitary form of some Kac-Moody group.

Borovoi's theorem and its many extensions provide one of the most useful tools in group theory: the ability to determine a group completely from certain small subgroups. Variations and improvements on these results lead to a better understanding of the exact information needed to completely determine a group of Lie type.

1.1 Manifolds

A function f : X → Y is described as smooth (written f ∈ C∞(X)) if and only if all partial derivatives of f exist and are continuous on the whole of X.

Definition 1.1.1 Manifolds Let (X, T ) be a topological space. X is called a manifold (of dimension n) if and only if there exists a set of pairs {(Uα, φα) | α ∈ A} such that

(i) {Uα | α ∈ A} is an open cover of X,

(ii) for each α there is a nonempty open subset V_α of R^n, such that φ_α : U_α → V_α is a homeomorphism,

(iii) φα(x) = φβ(x) for all x ∈ Uα ∩ Uβ.

If, in addition, the set of pairs can be chosen such that each φα is a diffeomorphism then X is called a differentiable manifold.

The set of pairs {(U_α, φ_α) | α ∈ A} is called an atlas of the manifold. When we refer to a topological space as a manifold, we implicitly assume that its atlas is extended to include all pairs satisfying the three properties above. Often an n-dimensional manifold is simply referred to as an n-manifold. It is an immediate consequence of the definition that if X is an n-manifold and f : X → Y is a homeomorphism then Y is an n-manifold. Moreover, if X is a differentiable manifold and f is smooth, then Y is also a differentiable manifold. Finally, if X is a manifold and x ∈ X then there is some connected neighbourhood U of x which is homeomorphic to a connected open subset of R^n; shrinking U if necessary, we may assume its image is an open ball, and every open ball in R^n is homeomorphic to R^n itself. Therefore every manifold is locally homeomorphic to R^n and every differentiable manifold is locally diffeomorphic to R^n. From this we can deduce that all manifolds are locally compact.

Example 1.1.2 Examples of manifolds

(i) R^n with the usual Euclidean topology, E, is an n-manifold with open cover {U_1 = R^n}, where ι_1 : R^n → R^n is the identity map.
(ii) Let V be an n-dimensional real vector space, then there is some vector space isomorphism ψ : V → R^n, so we may equip V with the topology T which makes ψ a homeomorphism. Therefore V is also an n-manifold with respect to the topology T. As ψ is linear, we deduce that V is a differentiable manifold. In particular the spaces C^n and H^n (where H is the field of quaternions) are both differentiable manifolds (with respect to the Euclidean topology) of dimension 2n and 4n respectively.

(iii) Let C^n := {x = (x_1, . . . , x_{n+1}) ∈ R^{n+1} | max{|x_1|, . . . , |x_{n+1}|} = 1}, so C^n is the surface of an (n+1)-dimensional cube. C^n is an n-manifold but is not a differentiable manifold, as no open neighbourhood of (1, . . . , 1) in C^n is diffeomorphic to any open subset of R^n.
(iv) If (R^n, T) is a topological space where T is such that some x ∈ R^n has no open neighbourhood homeomorphic to an open subset of (R^n, E), then R^n is not a manifold with respect to this topology.

Proposition 1.1.3 Open subsets of manifolds
With respect to the subspace topology an open subset of an n-manifold is an n-manifold and an open subset of a differentiable n-manifold is a differentiable n-manifold.

Proof: If Y ⊆ X is open in T then {(U_α ∩ Y) | α ∈ A} forms an open cover of Y. Moreover, as φ_α is a homeomorphism, it is an open map (the image of any open set is open) and hence φ_α(U_α ∩ Y) is homeomorphic to an open subset of R^n. Thus {(U_α ∩ Y, φ_α) | α ∈ A} is an atlas for Y. If X is differentiable, then each φ_α is a diffeomorphism on (U_α ∩ Y) as required. 

Example 1.1.4 Closed subsets of manifolds Closed subsets of manifolds may or may not be manifolds. For example if n ≥ 2, the sphere

S^{n−1} = {x ∈ R^n | |x| = 1} is an (n − 1)-manifold and proposition 1.1.3 tells us that the open set

B_1(0) = {x ∈ R^n | |x| < 1} is an n-manifold. However, the closed unit ball
B̄_1(0) = S^{n−1} ∪ B_1(0) = {x ∈ R^n | |x| ≤ 1}
is not a manifold, as no open neighbourhood of any x in this set with |x| = 1 is homeomorphic to any open subset of R^k, for any k ∈ N.

Example 1.1.5 GLn(F) is a differentiable manifold
M_n(C), the set of all n × n matrices with complex entries, is a complex vector space of dimension n^2, so is a differentiable 2n^2-manifold by example 1.1.2(ii). Consider the determinant map det : M_n(C) → C. As C \ {0} is an open subset of C and the determinant map is continuous, it follows that
GL_n(C) = {M ∈ M_n(C) | det(M) ≠ 0} = det^{-1}(C \ {0})
is an open subset of M_n(C). Therefore, GL_n(C) is a differentiable manifold, by proposition 1.1.3. Two similar arguments verify that GL_n(R) and GL_n(H) are both differentiable manifolds, of dimensions n^2 and 4n^2 respectively. 

Definition 1.1.6 Derivations and the tangent bundle
Let M be a manifold and let p ∈ M. A derivation at p is a linear map D : C^∞(M) → C which obeys the Leibniz product rule
D(f.g)(p) = D(f)g(p) + f(p)D(g) for all f, g ∈ C^∞(M), where (f.g)(p) := f(p).g(p).
The set of all derivations at p will be denoted by T_p(M). The tangent bundle of M is TM = ⊔_{x∈M} T_x(M).

Definition 1.1.7 (Smooth) vector fields
A function X : M → TM is called a (smooth) vector field if and only if X ∈ C^∞(M) and X(p) ∈ T_p(M) for all p ∈ M.

Example 1.1.8 Tangent vectors
One collection of derivations is provided by curves on a manifold. A curve is simply a smooth map γ : I → M where I ⊆ R is an open interval containing 0. Let p ∈ M and let γ be any curve in M with γ(0) = p. As γ is smooth,
d/dt γ(t)|_{t=0} = γ'(0) is a well defined vector in R^n.
γ'(0) is called a tangent vector to the manifold M at the point p. To see how this defines a derivation, suppose f : M → R is a smooth function and consider
D_γ f = d/dt f(γ(t))|_{t=0} = γ'(0)f'(γ(0)).
As γ(0) = p is fixed, D_γ f depends only on the tangent vector γ'(0). We can immediately see from the definition that each curve passing through p defines a derivation at p, by the product rule of differentiation. The next goal will be to prove that for a differentiable manifold all derivations at p can be obtained from the tangent vectors of curves. Before this we need to introduce the notion of a differential and state two important properties which differentiable manifolds inherit from R^n.

Definition 1.1.9 Differentials
Let M and N be differentiable manifolds and let f : M → N be smooth. The differential of f at p ∈ M is the linear map df_p : T_p(M) → T_{f(p)}(N) with the property that given any function g ∈ C^∞(U), where U is an open neighbourhood of f(p) in N, the formula df_p(v)(g) = v(g ◦ f) holds. It is not obvious from this definition that such a differential necessarily exists. We will show that differentials do exist and that there is a neat formula for df_p.

Theorem 1.1.10 Two key theorems for differentiable manifolds
(i) Inverse function theorem
Let M and N be differentiable manifolds. A map φ : M → N is locally diffeomorphic at p ∈ M if and only if its differential dφ : T_p(M) → T_{φ(p)}(N) is a vector space isomorphism.
(ii) Existence and uniqueness theorem for ODEs
Let X be a smooth vector field on a differentiable manifold M and let p ∈ M. There exists an open neighbourhood U of p, an open interval I containing 0 and a smooth mapping ψ : I × U → M such that for each q ∈ U the curve ψ_q : I → M given by ψ_q(t) = ψ(t, q) is the unique curve that satisfies ∂ψ_q/∂t = X_{ψ_q(t)} subject to the initial condition ψ_q(0) = q.
For a proof of (i) the reader is referred to [Wa71] pages 22-29, and to [Co01], theorem 2.8.4 (pages 72-73) and appendix C (pages 379-385) for a proof of (ii). 

Theorem 1.1.11 Derivations and tangent vectors
Let M be a differentiable n-manifold and let p ∈ M.
(i) The set of all tangent vectors at p forms a vector space over R of dimension n.

(ii) T_p(M) is an n-dimensional real vector space.
(iii) D is a derivation at p if and only if there is a curve γ such that
Df = d/dt f(γ(t))|_{t=0} for all f ∈ C^∞(M) and γ(0) = p.

Proof: Let p ∈ U_α ⊆ M where U_α is the open neighbourhood of p given by the definition and φ_α is the corresponding diffeomorphism from U_α to R^n. (The remarks following definition 1.1.1 show such a diffeomorphism always exists.)
(i) The set of tangent vectors inherits its vector space properties from R^n. Let γ_1 and γ_2 be curves which define tangent vectors at p and let λ ∈ R.

To make T_p(M) a vector space we define
(γ_1' + γ_2')(t) = φ_α^{-1}[φ_α(γ_1(t)) + φ_α(γ_2(t))]  and  (λγ_1')(t) = φ_α^{-1}[λφ_α(γ_1(t))].
If γ is a curve in U_α then φ_α ◦ γ is a smooth map from R to R^n, thus we associate an element of R^n to each tangent vector γ'(0) as follows.
Let γ be a curve in U_α which has tangent vector γ'(0) at p. Set x_γ = d/dt(φ_α ◦ γ)(0) ∈ R^n.
We claim that the map T_p(M) → R^n given by γ ↦ x_γ is bijective, which will suffice to prove the result.

To show injectivity, suppose γ_1(0) = γ_2(0) = p and
d/dt(φ_α ◦ γ_1)(0) = γ_1'(0).φ_α'(γ_1(0)) = γ_2'(0).φ_α'(γ_2(0)) = d/dt(φ_α ◦ γ_2)(0).
We know φ_α'(γ_1(0)) = φ_α'(γ_2(0)) as φ_α is a smooth function. Therefore γ_1'(0) = γ_2'(0).
To show surjectivity, consider a connected local neighbourhood U of p. By choosing canonical local coordinate vectors e_i for i from 1 to n, it is clear that there is some ε > 0 such that p + (kε)e_i ∈ U for all k ∈ (−1, 1) and all i.
Set γ_i : (−ε, ε) → M with γ_i(x) = p + x.e_i. Then γ_i'(0) = e_i and therefore this vector space contains n linearly independent vectors, justifying the claim.

(ii) Every differentiable manifold is locally diffeomorphic to R^n. Hence, for any point p ∈ M, T_p(M) ≅ T_{φ_α(p)}(R^n) ≅ R^n, by the inverse function theorem on manifolds (1.1.10(i)).
(iii) It is immediately clear from example 1.1.8 that every curve with the required properties defines a derivation at p by basic properties of calculus.

Consider the subspace of T_p(M) which contains all derivations which have such a curve associated to them. As the dimension of T_p(M) is equal to the dimension of this subspace, it follows that this subspace is the whole of T_p(M), which completes the proof. 

As a result of this Tx(M) is called the tangent space of M at x.

Theorem 1.1.11 provides a useful method for calculating the differential of a smooth function f at p ∈ M. Given any curve γ with the properties that γ(0) = p and γ'(0) = v, the differential of f at p is given by
df_p(v) = d/dt f(γ(t))|_{t=0}.
Definition 1.1.12 Connectedness properties
Let M be an n-manifold. M is said to be
(i) connected if and only if it cannot be partitioned into two disjoint open sets,
(ii) path-connected if and only if given any two distinct points x, y ∈ M there is a continuous function f : [0, 1] → M with f(0) = x and f(1) = y,
(iii) simply connected if and only if it is path connected and for each continuous map f : S^1 → M there is another continuous map

F : B̄_1(0) → M, with the property that F|_{S^1} = f.

To illustrate the definition we prove that if M is path-connected then it is connected. Suppose otherwise, and let x and y lie in two disjoint open sets U and V which partition M. By assumption, M is path-connected, so there is a continuous function f : [0, 1] → M with f(0) = x and f(1) = y. As f is continuous, f^{-1}(U) and f^{-1}(V) are disjoint open subsets of [0, 1] which partition this interval. But [0, 1] is connected, so this is a contradiction. 
Definition 1.1.13 Submanifolds
Let M be a manifold and let N ⊆ M. N is called an immersed submanifold of M if and only if
(i) N is itself a manifold,

(ii) the differential of the inclusion map dι : T_p(N) → T_p(M) is injective for all p ∈ N. If, in addition, the topology on N is the subspace topology inherited from M, then N is called an embedded (or regular) submanifold.

1.2 Lie Groups and Lie Algebras

Definition 1.2.1 Lie groups
Let G be a group and let T be a topology on G. G is called a Lie group if and only if
(i) G is a differentiable manifold,
(ii) (G, T) is a smooth topological group, so the function θ : G × G → G given by θ(g, h) = gh^{-1} is smooth,
(iii) T is Hausdorff and second countable.

A subgroup H of G is called a Lie subgroup of G if and only if H is an immersed submanifold of G.

As every Lie group is a differentiable manifold, all of the results in section 1.1 apply to Lie groups. We also note that every Lie subgroup of a Lie group must be a Lie group itself.

One trivial example of a Lie group is R^n under addition. A more important example is the general linear group GL_n over any of the fields R, C, or H. A proof of this fact for R and C can be found in [Kn02], on pages 12-15.

Theorem 1.2.2 Cartan’s theorem Let G be a Lie group and let H be a subset of G equipped with the subspace topology inherited from G. H is a Lie subgroup of G if and only if it is a closed subgroup of G.

Proof: See [Co01], proposition 5.3.1 and theorem 5.3.2, pages 173-175, or alternatively [Wa71], theorem 3.21. 

Example 1.2.3 G0 is a normal Lie subgroup of G

A simple consequence of Cartan's theorem is that given any Lie group G, the connected component of the identity, G_0, is a normal Lie subgroup of G. To show G_0 is closed we note that the closure of G_0 is itself connected, so G_0 must be closed as it is the maximal connected subset of G (by inclusion) containing the identity. It remains to show that G_0 is a normal subgroup of G.
Firstly, we note that as G is a topological group, multiplication by an element from the group is a homeomorphism (it is a continuous bijection whose inverse is multiplication by g^{-1}, which is also continuous). In particular the image of a closed connected subset under group multiplication is closed and connected.
Let g, h ∈ G_0, then the image of G_0 under right multiplication by h is a closed connected subset of G which contains h (as 1 ∈ G_0) and gh. Therefore, gh ∈ G_0.
Similarly, the image of G_0 under right multiplication by g^{-1} contains both g^{-1} and 1, so g^{-1} ∈ G_0. By Cartan's theorem G_0 is a Lie subgroup of G.
Finally, we note that group conjugation is the composition of two group multiplication functions. Let g ∈ G_0 and h ∈ G. Then h^{-1}G_0h is a closed connected subset of G which contains h^{-1}1h = 1 and h^{-1}gh, so h^{-1}gh ∈ G_0. Thus G_0 ⊴ G.

Example 1.2.4 Important Lie subgroups of the general linear groups

Returning to the example G = GLn(R) and using Cartan’s theorem we see that

SL_n(R) = {M ∈ GL_n(R) | det M = 1} = det^{-1}({1})

is a Lie subgroup of GL_n(R), as the determinant map is continuous and {1} is a closed subset of R. Hence SL_n(R) is a Lie group. As det : GL_n(C) → C is continuous and {1} is a closed subset of C, we deduce that SL_n(C) is a Lie subgroup of GL_n(C) and therefore a Lie group. The general linear groups have many other important Lie subgroups; those of particular interest to this project include the special unitary group SU, the special orthogonal group SO and the symplectic group Sp. The special unitary group is the subgroup

SU_n(C) = {X ∈ GL_n(C) | X*X = I_n and det(X) = 1}, where X* denotes the conjugate transpose of X.
SU_n(C) = ψ^{-1}(I_n), where ψ : SL_n(C) → M_n(C), mapping X to X*X, is a continuous map; so SU_n(C) is a closed subgroup of SL_n(C).
The condition X*X = I_n ensures that each entry of a matrix X ∈ SU_n(C) has modulus at most 1. Thus, applying the Heine-Borel theorem to each entry in turn we see that SU_n(C) is a compact subgroup of SL_n(C).

7 A similar argument shows that the special orthogonal group

SO_n(R) = {X ∈ GL_n(R) | X*X = X^tX = I_n and det(X) = 1}
and the symplectic group
Sp_n(H) = {X ∈ GL_n(H) | X*X = 1}
are compact Lie subgroups of SL_n(R) and GL_n(H) respectively.
Definition 1.2.5 Left invariant vector fields
Let G be a Lie group, with a ∈ G and let X be a vector field on G.

X is said to be left-invariant if and only if X ◦ L_a = dL_a ◦ X, where L_a : G → G is the diffeomorphism given by left multiplication by a; in other words, L_a(g) = ag for all g ∈ G. The set of all left-invariant vector fields on G is denoted by g.

As a shorthand, X(p) will be denoted by Xp.

Left invariant vector fields have the key property that they are determined by their value at e = 1G, since Xa = dLa(Xe), leading to the following proposition.

Proposition 1.2.6 Left-invariant vector fields and derivations

The function ψ : g → Te(G) given by ψ(X) = Xe is a linear vector space isomorphism.

A vector field X may be thought of as a smooth endomorphism on C^∞(G), where given some f ∈ C^∞(G), Xf(p) is defined pointwise by X_p(f) for each p ∈ G. With this in mind, define [X, Y] = XY − YX. Then it is clear that g is closed under the binary operation [·, ·].

Definition 1.2.7 Lie algebras

An F-algebra is a vector space g over F with a distributive multiplication [·, ·] : g × g → g which has the property a[x, y] = [ax, y] = [x, ay] for all a ∈ F. Such an F-algebra g is called a Lie algebra if it satisfies the additional properties
(i) [X, X] = 0 for all X ∈ g.

(ii)[ aX + bY, Z] = a[X,Z] + b[Y,Z] for all X,Y,Z ∈ g and all a, b ∈ F. (iii) g satisfies the Jacobi identity, [X, [Y,Z]] + [Y, [Z,X]] + [Z, [X,Y ]] = 0 for all X,Y,Z ∈ g. A Lie subalgebra h is a subspace of g with the property that [h, h] = {[X,Y ] | X,Y ∈ h} ⊆ h

The multiplication [·, ·] in a Lie algebra is called the Lie bracket.

We will only consider Lie algebras when F = R, C or H. Under this assumption condition (i) is equivalent to requiring that [X, Y] = −[Y, X] for all X, Y ∈ g. Applying this to (ii) shows that the Lie bracket is bilinear.

Example 1.2.8 Examples of Lie algebras

(i) Cn is a Lie algebra with respect to the multiplication [a, b] = 0 for all a, b ∈ Cn. A Lie algebra whose Lie bracket always evaluates to 0 is called abelian.

(ii) The F-algebra of all n × n matrices, Mn(F), is a Lie algebra with respect to the Lie bracket [X,Y ] = XY − YX.

(iii) Any F-algebra of smooth functions from F to itself is a Lie algebra with respect to the Lie bracket [f, g](x) = (fg − gf)(x) = f(x)g(x) − g(x)f(x).

8 We give the following important examples of matrix Lie algebras.

The Lie algebra sl_n(R) = {X ∈ M_n(R) | Tr(X) = 0} is a Lie subalgebra of M_n(R), as Tr(AB − BA) = Tr(AB) − Tr(BA) = Tr(AB) − Tr(AB) = 0.

Similarly, sln(C) = {X ∈ Mn(C) | Tr(X) = 0} is a Lie subalgebra of Mn(C). Next, we introduce two more Lie algebras which will be important examples in chapter 2.

so_n(F) := {X ∈ M_n(F) | X + X^t = 0}, for n ≥ 3, and
sp_n(C) := {X ∈ M_{2n}(C) | X^tJ + JX = 0} for n ≥ 1, where J = (0 I_n; −I_n 0) is a 2n × 2n block matrix.

Notice that for n ≥ 2, so_n(F) ⊊ sl_n(F), as 0 = Tr(0) = Tr(X + X^t) = Tr(X) + Tr(X^t) = 2Tr(X), so Tr(X) = 0.

Finally, we present the real Lie algebra su_n(C) := {X ∈ M_n(C) | X + X* = 0 and Tr(X) = 0}.

Again, su_n(C) ⊊ sl_n(C).
Theorem 1.2.9 The Lie algebra of a Lie group
Let G be a Lie group and let g be the set of left-invariant vector fields on G. Then g is a Lie algebra with respect to the bracket operation [X, Y] = XY − YX, so that [X, Y]_p(f) = X_p(Y(f)) − Y_p(X(f)).

Proof: There are a number of properties to check: Firstly, if X,Y are left invariant vector fields then so is aX + bY for all a, b ∈ F as TG is an F vector space and differentials are linear. [X,X] = XX − XX = 0, and g is closed under bracket multiplication as

dLa[X,Y ]pf = [X,Y ]p(f ◦ La) = Xp(Y (f ◦ La)) − Yp(X(f ◦ La))

= X_p(dL_a(Y))f − Y_p(dL_a(X))f = X_pY(f) − Y_p(X(f)) = [X, Y]_p(f).
Next we show that the bracket is linear in the first variable:
[aX + bY, Z] = (aX + bY)Z − Z(aX + bY) = aXZ + bYZ − aZX − bZY = a(XZ − ZX) + b(YZ − ZY) = a[X, Z] + b[Y, Z].
Finally we verify that the Jacobi identity holds,
[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = (XYZ − XZY − YZX + ZYX) + (YZX − YXZ − ZXY + XZY) + (ZXY − ZYX − XYZ + YXZ) = 0.
g is called the Lie algebra of G. By proposition 1.2.6, T_e(G) is isomorphic to g when equipped with the Lie bracket [u, v] = [X^u, X^v]_e, where the vector field X^u is defined by X^u_g = (dL_g)_e(u).

Example 1.2.10 The Lie algebras of GL_n(F) and SL_n(F)
In the following example we define E_{ij} = (y_{mn}) to be the matrix given by y_{mn} = 1 if m = i and n = j, and y_{mn} = 0 otherwise.

Recall that we are only considering the cases F = R, C or H.

Let X ∈ GL_n(F) and notice that there is some ε > 0 such that X + t.E_{ij} ∈ GL_n(F) for all i, j and all t ∈ (−ε, ε). Hence the curves γ_{ij} : (−ε, ε) → GL_n(F) given by γ_{ij}(t) = X + t.E_{ij} all have the properties γ_{ij}(0) = X and γ_{ij}'(0) = E_{ij}.

9 As the tangent space is an F vector space this is sufficient to prove that TX (GLn(F)) = Mn(F).

For the example SLn(F), with F = R or C, we use a slight variation on the method given in [Kn02].

The key result needed here is that given a curve γ : (−ε, ε) → M_n(F) with the property that γ(0) = I_n, then
d/dt[det γ(t)]|_{t=0} = Tr(γ'(0)).

To prove this result we write the curve as a set of matrix coordinates, γ(t) = (γij(t)) then expand the determinant along the top row.

det γ(t) = Σ_{i=1}^{n} (−1)^{i+1} γ_{1i}(t) det γ_i(t),
where γ_i(t) is the (n − 1) × (n − 1) matrix remaining from γ(t) when the first row and ith column are removed. Applying the product rule to each term in the sum gives
γ_{1i}'(0) det γ_i(0) + γ_{1i}(0) · d/dt[det γ_i(t)]|_{t=0}.

If i ≠ 1 then det γ_i(0) = γ_{1i}(0) = 0, therefore
d/dt[det γ(t)]|_{t=0} = γ_{11}'(0) + d/dt[det γ_1(t)]|_{t=0}.
Repeating this process gives the required result.
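This identity is easy to check numerically. The following is a minimal sketch in Python (assuming numpy is available; the matrix A and the curve γ(t) = I_n + tA are illustrative choices, not taken from the text), comparing a finite-difference approximation of d/dt[det γ(t)]|_{t=0} with Tr(γ'(0)).

    import numpy as np

    # Check that for a curve gamma with gamma(0) = I_n, the derivative of
    # det(gamma(t)) at t = 0 equals Tr(gamma'(0)).
    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))          # gamma'(0) for the curve below

    def gamma(t):
        return np.eye(n) + t * A             # a curve with gamma(0) = I_n

    h = 1e-6
    numerical = (np.linalg.det(gamma(h)) - np.linalg.det(gamma(-h))) / (2 * h)
    print(numerical, np.trace(A))            # the two values agree to ~1e-6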

Using this fact we can deduce that the Lie algebra of SL_n(F) is sl_n(F).
Let γ : I → SL_n(F) be a curve with γ(0) = I. Then Tr(γ'(0)) = 0, as the determinant is constant along the curve.

To show every element of sln(F) is contained in the tangent space consider the curves

γ_{ij} : (−1, 1) → SL_n(F) given by γ_{ij}(t) = I_n + t.E_{ij} for all i ≠ j.

γ_{ij}'(0) = E_{ij}, so as the tangent space is a vector space, all possible non-diagonal entries are accounted for. As a result it remains to deduce that given any diagonal matrix with trace 0, there is a suitable curve. Let A = (a_{ij}) be a diagonal matrix with trace 0. Define

γ_A : (−1, 1) → SL_n(F) such that γ_A(t) = (b_{ij}), where b_{ij} = e^{a_{ij}t} whenever i = j, and b_{ij} = 0 otherwise.

Then γ_A'(0) = A and the example is complete.

Without proof we state here that the Lie algebras of SUn(C) and SOn(R) are sun(C) and son(R) respectively.

Definition 1.2.11 One parameter subgroups

Let G be a Lie group. A one parameter subgroup of G is a smooth homomorphism φ : (R, +) → G. Note that a one-parameter subgroup is simply a curve with the additional properties (i) φ(s + t) = φ(s)φ(t), (ii) φ(0) = e and (iii) φ(−t) = φ(t)^{-1}.

The derivative of a one parameter subgroup is given by
φ'(t) = lim_{h→0} (1/h)[φ(t + h) − φ(t)] = φ(t) lim_{h→0} (1/h)[φ(h) − φ(0)] = Aφ(t),
where A = lim_{h→0} (1/h)[φ(h) − φ(0)]. This limit exists because G is a differentiable manifold.
Theorem 1.2.12 One parameter subgroups and tangent spaces
The map φ ↦ dφ(1) defines an isomorphism between the set of all one parameter subgroups of G and T_e(G).

Proof: Let v ∈ T_e(G) and set X_g(v) = [dL_g(e)](v). Note that X is a smooth left invariant vector field. By theorem 1.1.10(ii), let φ : I → G be the unique curve with the properties φ(0) = e and dφ_t = X_{φ(t)}.
The curve φ is a homomorphism, as φ_t(s) = φ(s + t) and φ^t(s) = φ(s)φ(t) both satisfy the two properties above, so by uniqueness, φ(s + t) = φ(s)φ(t). φ can be extended to a one parameter subgroup φ_v : R → G by defining φ_v(t) = φ(t/n)^n for suitably large n.
As φ is a homomorphism, φ(t/n)^n = φ(t/m)^m for all m, n ∈ N \ {0}, so φ_v is a well defined homomorphism. Moreover, φ_v is smooth because φ is smooth, and the inverse map is given by v ↦ φ_v, so this map is a bijection. 

The following corollary is a direct consequence of the method used in the previous proof.

Corollary 1.2.13 For each X ∈ g there is a unique one parameter subgroup φ_X : R → G such that φ_X'(0) = X.
This corollary allows us to define a map which will relate Lie algebras to Lie groups.
Definition 1.2.14 The exponential map

Let G be a Lie group and let g be the Lie algebra of G. The map exp : g → G defined by exp(X) = φX (1) is called the exponential map.

Example 1.2.15 The Lie groups of M_n(F) and sl_n(F)
To calculate the exponential map for a Lie algebra of matrices, it suffices to find the unique curve φ : I → GL_n(F) with the properties
φ(0) = I and φ'(t) = d/dt[φ(t)] = Xφ(t).
The next proposition states that the matrix exponential function
t ↦ e^{tX} = Σ_{i=0}^{∞} (tX)^i/i!
satisfies these properties. This will prove the unsurprising result that the exponential map for matrix groups is, in fact, the matrix exponential function.
Proposition 1.2.16 Properties of the matrix exponential function

Let X, Y ∈ M_n(F) for F = R or C.
(i) e^X = Σ_{n=0}^{∞} (1/n!) X^n,
(ii) if XY = YX then e^X e^Y = e^{X+Y},
(iii) t ↦ e^{tX} is a smooth curve with 0 ↦ I_n,
(iv) d/dt(e^{tX}) = Xe^{tX},
(v) det(e^X) = e^{Tr(X)}; in particular, e^X ∈ GL_n(F).
Proof: See [Kn02] pages 7-8. 
From part (iv) of this proposition we deduce that the Lie groups of M_n(F) and sl_n(F) are Lie subgroups of GL_n(F) and SL_n(F) respectively.
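Parts (ii) and (v) are easily illustrated numerically. The following minimal sketch (in Python, assuming numpy and scipy are available; the random trace-zero matrix X is a hypothetical choice, not part of the text) checks that exp maps a traceless matrix into SL_n(C) and that t ↦ e^{tX} is a one parameter subgroup.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    n = 3
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X -= (np.trace(X) / n) * np.eye(n)       # project onto trace-zero matrices

    print(np.trace(X))                        # ~0
    print(np.linalg.det(expm(X)))             # ~1, consistent with det(e^X) = e^{Tr X}

    # The curve t -> e^{tX} is a one parameter subgroup: e^{(s+t)X} = e^{sX} e^{tX}.
    s, t = 0.3, 0.7
    print(np.allclose(expm((s + t) * X), expm(s * X) @ expm(t * X)))   # True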

11 1.3 Properties of Lie algebra

Definition 1.3.1 Homomorphisms Let G, H be Lie groups and let g, h be their respective Lie algebras. A map φ : G → H is called a Lie group homomorphism if and only if it is a smooth group homomorphism.

A map ψ : g → h is called a Lie algebra homomorphism if and only if it is a vector space homomorphism with the property ψ([X,Y ]) = [ψ(X), ψ(Y )] for all X,Y ∈ g.

As should be expected, an injective homomorphism (of either structure) is called a monomorphism, a surjective homomorphism is called an epimorphism and a bijective homomorphism is called an isomor- phism. If H = G then a Lie group homomorphism φ : G → G is called an endomorphism and an isomorphism is called an . Similarly, if h = g then a Lie algebra homomorphism is called an endomorphism and a bijective endo- morphism is called an automorphism. The sets End(G) and Aut(G), End(g) and Aut(g) consist of all endomorphisms and on a Lie group G or Lie algebra g respectively.

Example 1.3.2 The conjugation automorphism

The map I_g : G → G given by I_g(h) = g^{-1}hg is a smooth (and hence continuous) automorphism of G. It is a group homomorphism by direct calculation, and it is smooth since multiplication by a fixed element is a smooth map of G and the composition of two smooth maps is itself smooth.

Proposition 1.3.3 Relating homomorphisms Let G, H be Lie groups and let g, h be their respective Lie algebras and suppose φ : G → H is a Lie group homomorphism. Then

(i) The differential dφe : g → h is a Lie algebra homomorphism.

(ii) φ(exp(X)) = exp(dφe(X)).

Proof: For part (i) see [Wa71] theorem 3.14(b).

Let X ∈ g. The map t ↦ φ(exp(tX)) is a smooth curve in H whose tangent at 0 is dφ_e(X). Since φ is a homomorphism, this map is a one parameter subgroup of H.

However, the map t ↦ exp(t(dφ_e(X))) is also a one parameter subgroup of H whose tangent at 0 is dφ_e(X). By corollary 1.2.13, φ(exp(tX)) = exp(t(dφ_e(X))) for all t ∈ R.

Therefore, φ(exp(X)) = exp(dφe(X)).  Definition 1.3.4 Ideals of a Lie algebra Let g be a Lie algebra and let h be a subspace of g. h is called an ideal of g if and only if [h, g] ⊆ h.

Example 1.3.5 Some basic ideals

(i) {0} and g are always ideals of any Lie algebra g. (ii) If a is an abelian Lie algebra, then all its subalgebras are ideals, as the bracket is equal to the zero map.

(iii) sln(F) is an ideal in Mn(F), since Tr[A, B] = 0 for all A, B ∈ Mn(F). Proposition 1.3.6 Constructing ideals Given two ideals a, b of a Lie algebra g, the following are also ideals.

(i) a + b, (ii) a ∩ b and (iii) [a, b].

Proof: (i) Let X ∈ a, Y ∈ b and Z ∈ g, then [X + Y,Z] = [X,Z] + [Y,Z] ∈ a + b. (ii) If X ∈ a ∩ b, then [X,Z] ∈ a and [X,Z] ∈ b by definition. (iii) By the Jacobi identity [[a, b], g] ⊆ −[[b, g], a] − [[g, a], b].

Using antisymmetry, [[a, b], g] ⊆ [a, [b, g]] + [[a, g], b] ⊆ [a, b] + [a, b] ⊆ [a, b].  Theorem 1.3.7 Relating ideals and homomorphisms a is an ideal of g if and only if there is some Lie algebra homomorphism ψ : g → h with ker(ψ) = a.

Proof: Let X ∈ ker(ψ) and let Y ∈ g. Then ψ[X,Y ] = [ψ(X), ψ(Y )] = [0, ψ(Y )] = 0, so [X,Y ] ∈ ker(ψ). If a is an ideal in g then the quotient Lie algebra g/a is the Lie algebra given by the operations

(X + a) + (Y + a) = X + Y + a and [X + a,Y + a] = [X,Y ] + a.

Using this we define the quotient map ψ : g → g/a by ψ(X) = X + a and notice that ψ is a Lie algebra homomorphism with kernel a. 
Definition 1.3.8 Commutator Series and Lower Central Series
Let g be a Lie algebra and define a sequence of ideals of g as follows:
g^0 = g, g^1 = [g, g], g^k = [g^{k−1}, g^{k−1}].

The decreasing sequence g^0 ⊇ g^1 ⊇ . . . is called the commutator series for g. Similarly, the sequence of ideals of g defined by

g_0 = g, g_1 = [g, g], g_k = [g, g_{k−1}]
leads to the decreasing sequence g_0 ⊇ g_1 ⊇ . . . , which is called the lower central series for g.

g is called solvable if g^k = {0} for some k, and nilpotent if g_k = {0} for some k. A non-zero solvable Lie algebra g has a non-zero abelian ideal, namely the last non-zero g^k; similarly, a non-zero nilpotent Lie algebra g always has non-zero centre, containing the last non-zero g_k.
For each k, g^k ⊆ g_k, so every nilpotent algebra is solvable.
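For a concrete matrix Lie algebra the commutator series can be computed mechanically. The sketch below (in Python, assuming numpy; the choice of the upper triangular algebra and the helper names E, bracket and span_basis are illustrative, not from the text) computes the dimensions of the terms g^k for the Lie algebra of 3 × 3 upper triangular matrices; the series reaches 0, so this algebra is solvable.

    import numpy as np
    from itertools import product

    def E(i, j, n=3):
        M = np.zeros((n, n)); M[i, j] = 1.0
        return M

    def bracket(X, Y):
        return X @ Y - Y @ X

    def span_basis(mats, tol=1e-10):
        # Return a basis (as matrices) of the span of a list of matrices.
        if not mats:
            return []
        flat = np.array([M.ravel() for M in mats])
        _, s, vt = np.linalg.svd(flat)
        rank = int((s > tol).sum())
        return [vt[r].reshape(mats[0].shape) for r in range(rank)]

    b = [E(i, j) for i in range(3) for j in range(3) if i <= j]   # upper triangular
    series = [span_basis(b)]
    while series[-1]:
        prev = series[-1]
        brackets = [bracket(X, Y) for X, Y in product(prev, prev)]
        series.append(span_basis(brackets))

    print([len(step) for step in series])   # e.g. [6, 3, 1, 0]: the algebra is solvable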

Theorem 1.3.9 Engel’s theorem on nilpotent Lie algebras

Let V ≠ 0 be a finite dimensional vector space over a field F and let g be a Lie algebra of nilpotent endomorphisms of V. Then (i) g is a nilpotent Lie algebra, (ii) there is some v ∈ V \ {0} such that X(v) = 0 for all X ∈ g, (iii) there is a basis of V such that all X ∈ g are strict upper triangular matrices with respect to this basis.

Proof: See [Kn02], pages 46-47. One key corollary of Engel’s theorem is that if g is a Lie algebra and ad(X) is nilpotent for all X ∈ g then g is nilpotent.

Proposition 1.3.10 Solvable and nilpotent Lie algebras Let g be a Lie algebra and let φ : g → h be a Lie algebra homomorphism. Then (i) If g is solvable then any subalgebra of g is solvable and im(φ) is also solvable. (ii) If a is an ideal of g and both g/a and a are solvable, then g is solvable. (iii) Parts (i) and (ii) hold when solvable is replaced by nilpotent. (iv) There is a unique solvable ideal r of g containing all solvable ideals of g.

Proof: Parts (i) to (iii) are trivial so attention is concentrated on (iv). As the Lie algebra is a finite dimensional vector space, it suffices to show that the sum of two solvable ideals is itself solvable. Therefore suppose a, b are solvable ideals and let h = a + b. As a is a solvable ideal in h we may take the quotient algebra h/a = (a + b)/a ≅ b/(a ∩ b) by the second isomorphism theorem. As b and a ∩ b are both solvable it follows that h/a is solvable and hence h is solvable.  The ideal r constructed in proposition 1.3.10(iv) is called the radical of g and is denoted by rad(g).

Definition 1.3.11 Simple and semisimple Lie algebras A Lie algebra g is called simple if g is non-abelian and has exactly two ideals, namely 0 and g. A Lie algebra g is called semisimple if it has no nonzero solvable ideals, in other words, rad(g) = 0.

Semisimple Lie algebras are the focus of chapter 2, so it will be useful to state some basic properties of such algebras at this stage.

Proposition 1.3.12 Properties of simple and semisimple Lie algebras (i) If g is a simple Lie algebra then [g, g] = g. (ii) Every simple Lie algebra is semisimple. (iii) Every has centre 0. (iv) If g is a Lie algebra, then g/rad(g) is semisimple.

Proof: (i) g is simple so [g, g] = 0 or g. If [g, g] = 0 then g is abelian, which is a contradiction. (ii) rad(g) is an ideal, so rad(g) = 0 or g. If rad(g) = g then g is solvable, so [g, g] ≠ g, contradicting (i).

(iii) The centre Z_g is an abelian ideal (and hence solvable), so Z_g ⊆ rad(g) = 0. 
(iv) Let π : g → g/rad(g) be the quotient homomorphism and suppose h is a solvable ideal in g/rad(g). Then a = π^{-1}(h) is an ideal in g. π(a) = h is solvable and ker π|_a is solvable as it is contained in rad(g). Hence a is solvable by 1.3.10(ii) and therefore contained in rad(g), so h = {0}. Therefore g/rad(g) is semisimple. 
A Lie group G is said to be simple, semisimple, solvable or nilpotent if and only if its Lie algebra g is simple, semisimple, solvable or nilpotent respectively.

Definition 1.3.13 Adjoint representations Let G be a Lie group and let g be its Lie algebra. The adjoint representation of G is the map Ad : G → Aut(g) given by

Ad(g) = d(Ig)e.

The adjoint representation of g is the map ad : g → End(g) given by

ad(X) = (d Ad)e(X).

The adjoint representation of g has a more usable formulation, given by the following proposition.

Proposition 1.3.14 ad(X)(Y ) = [X,Y ]

Proof: See [Wa71], proposition 3.47.  Definition 1.3.15 The Killing form Let g be a Lie algebra and pick X,Y ∈ g. The Killing form B on g is given by B(X,Y ) = Tr(ad(X)ad(Y )). Moreover, we define rad(B) = {X ∈ g | B(X,Y ) = 0 for all Y ∈ g}. If g is a Lie algebra and rad(B) = {0} on g then the Killing form of g is called nondegenerate. Let m ⊆ g then m⊥ := {X ∈ g | B(X,M) = 0 for all M ∈ m}.

This definition is meaningful as ad(X)ad(Y ) is a linear transformation on g and is therefore represented by a dim(g) × dim(g) matrix and its trace can be calculated.

Proposition 1.3.16 Properties of the Killing form Let g be a Lie algebra. (i) The Killing form is a symmetric bilinear form on g. (ii) For all X,Y,Z in g, B(X, [Y,Z]) = B([X,Y ],Z).

(iii) If a ∈ Aut(g), then B(aX, aY ) = B(X,Y ).

Proof: The first part follows because the adjoint map is linear and the trace function is symmetric and linear. For the second part, expand both sides and use the symmetry of the trace to obtain the desired result. Finally we note that ad(aX) = a(ad(X))a^{-1}, as

ad(aX)Y = [aX, Y] = a[X, a^{-1}Y] = (a(ad(X))a^{-1})(Y).

Therefore, B(aX, aY) = Tr(ad(aX)ad(aY)) = Tr(a(ad(X))a^{-1}a(ad(Y))a^{-1}) = Tr(ad(X)ad(Y))

= B(X,Y ).  The result of part (ii) can also be written in the form B([X,Y ],Z) = −B(Y, [X,Z]).
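For a concrete algebra the Killing form can be computed directly from the definition. The following sketch (in Python, assuming numpy; it borrows the basis {e, f, h} of sl_2(C) introduced in section 1.6, and the helper names coords, ad and killing are illustrative) computes B on sl_2(C), checks the symmetry and invariance properties of proposition 1.3.16, and observes that B is nondegenerate, which by corollary 1.4.2 below means sl_2(C) is semisimple.

    import numpy as np

    e = np.array([[0, 1], [0, 0]], dtype=complex)
    f = np.array([[0, 0], [1, 0]], dtype=complex)
    h = np.array([[1, 0], [0, -1]], dtype=complex)
    basis = [e, f, h]

    def bracket(X, Y):
        return X @ Y - Y @ X

    def coords(M):
        # coordinates of a trace-free 2x2 matrix in the basis {e, f, h}
        return np.array([M[0, 1], M[1, 0], M[0, 0]])

    def ad(X):
        # matrix of ad(X) = [X, .] with respect to the basis {e, f, h}
        return np.column_stack([coords(bracket(X, B)) for B in basis])

    def killing(X, Y):
        return np.trace(ad(X) @ ad(Y))

    gram = np.array([[killing(X, Y) for Y in basis] for X in basis])
    print(gram.real)                      # [[0,4,0],[4,0,0],[0,0,8]]: symmetric
    print(np.linalg.det(gram) != 0)       # nondegenerate, so sl_2(C) is semisimple
    print(np.allclose(killing(bracket(e, f), h), killing(e, bracket(f, h))))  # invariance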

1.4 Characterisation of semisimple Lie algebras

Chapters 2 and 3 are primarily concerned with semisimple Lie algebras and groups. In this section we provide a sufficient condition for a finite dimensional Lie algebra to be semisimple and use this to characterise all semisimple Lie algebras in terms of simple Lie algebras.

Theorem 1.4.1 Cartan’s criterion for solvability A Lie algebra g is solvable if and only if B(X,Y ) = 0 for all X ∈ g and all Y ∈ [g, g].

The proof of theorem 1.4.1 can be found in [Kn02] on pages 51-54. To prove the following corollary we need an intermediate result, namely that for a Lie algebra g, rad(B) ⊆ r = rad(g).

It suffices to show that rad(B) is a solvable ideal. Let H ∈ rad(B) and let X1,X2 ∈ g. Then

B([X1,H],X2) = −B(H, [X1,X2])

15 and as a result of this [X1,H] ∈ rad(B) and therefore rad(B) is an ideal of g. Pick s such that g = rad(B) ⊕ s. This is possible as any subspace has a basis which can be extended to a basis of the whole space, s is just the span of the vectors added by this process. If X ∈ rad(B), then in terms of the basis discussed, the adjoint map extends the zero map over s, by proposition 1.3.14. Thus the trace of ad(X) is the same as if it were restricted to the ideal rad(B). Hence, if C is the Killing form on rad(B) then C(X,Y ) = Tr(ad(X)ad(Y )) = B(X,Y ) = 0 and so by theorem 1.4.1, rad(B) is solvable.  Corollary 1.4.2 Cartan’s criterion for semisimplicity A Lie algebra g is semisimple if and only if the Killing form on g is nondegenerate.

Proof: If B is degenerate then rad(B) 6= 0 and therefore rad(g) 6= 0 by the previous result, so g is not semisimple. Conversely, if g is not semisimple then rad(g) 6= 0. However, rad(g) is solvable so there is an l such that (rad(g))l = 0 and (rad(g))l−1 = a is a nonzero abelian ideal in g. Set s so that g = a ⊕ s as a vector space. If X ∈ a and Y ∈ g then ad(X) is zero on a as a is an abelian ideal. Also the adjoint map extends the zero map over s (as in the preceding comments). Hence, Tr(ad(X)ad(Y )) = 0 and a ⊆ rad(B), so B is degenerate.  Theorem 1.4.3 Jordan decomposition

Let F be an algebraically closed field and let V be a finite dimensional vector space over F. Then each ψ ∈ EndF(V ) can be uniquely decomposed as ψ = s + n where s is diagonalisable, n is nilpotent and sn = ns.

Proof: See [Hu78], section 4.2. 
Theorem 1.4.4 Characterisation of semisimple Lie algebras
A Lie algebra g is semisimple if and only if g = ⊕_{j=1}^{m} g_j where each g_j is a simple ideal of g. If g is semisimple then this decomposition is unique and the only ideals of g are the sums of the g_j.
Proof: Suppose g = ⊕_{j=1}^{m} g_j where each g_j is a simple ideal of g. Let P_i be the projection of g onto g_i, let a be any ideal in g and set a_i = P_i(a). a_i is an ideal in g_i as

[Pi(A),Xi] = Pi([A, Xi]) ∈ Pi(a) = ai for all A ∈ a. gi is simple so ai = 0 or ai = gi. If ai = gi, then

gi = [gi, gi] = [gi, ai] ⊆ [g, a] ⊆ a.

As g = ⊕_{j=1}^{m} g_j, it follows that a = a ∩ g = ⊕_{i=1}^{m} (a ∩ g_i) = ⊕_{g_i⊆a} g_i.
This proves the uniqueness of the decomposition and the structure of the ideals. Also
[⊕_{g_i⊆a} g_i, ⊕_{g_j⊆a} g_j] = ⊕_{g_i⊆a} [g_i, g_i] = ⊕_{g_i⊆a} g_i = a.

Therefore a is solvable if and only if a = 0. Thus rad(g) = 0, so by Cartan’s criterion for semisimplicity, g is semisimple.

Suppose g is semisimple. Let a be a minimal (by inclusion) non-zero ideal of g. Write a⊥ = {X ∈ g | B(X,A) = 0 for all A ∈ a}. a⊥ is an ideal, as

B([X,H],A) = B(H, −[X,A]) ∈ B(H, a) = 0 for all A ∈ a,X ∈ g and H ∈ a⊥.

Therefore rad(B|_{a×a}) = a ∩ a^⊥ = 0 or a, by the minimality of a.

16 Suppose rad(B|a×a) = a, then B(A1,A2) = 0 for all A1,A2 ∈ a. a is an ideal, so [a, a] ⊆ a. By theorem 1.4.1, a is solvable, contradicting the semisimplicity of g. Thus a ∩ a⊥ = 0. The Killing form is non-degenerate on a, so g = a ⊕ a⊥ as a vector space and both a and a⊥ are ideals of g. Also, the non-degeneracy shows a is nonabelian. Suppose b ⊆ a, then [b, a⊥] = 0, so b is an ideal of g. Moreover, b = 0 or a by the minimality of a. Thus a is simple. Finally, note that any ideal of a⊥ is an ideal of g and therefore rad(a⊥) = 0 so a⊥ is semisimple, by ⊥ corollary 1.4.2. We repeat the argument with a and complete the proof by induction on dim(g). 

Proposition 1.4.5 The Lie algebra of Aut(g) is Der(g), the space of all derivations on g. If g is semisimple then Der(g) = ad(g).

Proof: This is a special case of [Wa71], theorem 3.54. 
Later, we will need a way of changing the base field of a Lie algebra between R and C. With this goal in mind, we make the following definition.
Definition 1.4.6 Complexifications and real forms
Given a real Lie algebra g we can define a tensor product g^C = g ⊗ C as follows. Let {X_1, . . . , X_n} be a basis for g over R and define X ⊗ c = Σ_{i=1}^{n} ca_iX_i, where X = Σ_{i=1}^{n} a_iX_i. To make g^C into a Lie algebra define [X ⊗ c, Y ⊗ d] = [X, Y] ⊗ cd. g^C is called the complexification of g.

Note that {X_1, . . . , X_n} is a basis for g^C over C and {X_1, . . . , X_n, iX_1, . . . , iX_n} is a basis for g^C over R. So dim_C(g^C) = dim_R(g) and dim_R(g^C) = 2 dim_R(g). Conversely, given a complex Lie algebra g we define g^R to be the vector space g considered over the field R instead of C. The key result which is obtained from this new basis is (g^C)^R = g ⊕ ig. Consequently, given a complex Lie algebra g, any real Lie algebra h which satisfies g^R = h ⊕ ih is called a real form of g. As should be expected, any real Lie algebra is a real form of its complexification.

1.5 Connected Lie groups

This section contains many of the most important and fundamental results relating Lie algebras and Lie groups. Many of them have involved proofs which are omitted. We will make use of these results (often without directly referring to them) throughout the remaining chapters. Theorem 1.5.1 Equality of Lie group homomorphisms Let G be a connected Lie group and let ψ, φ : G → H be two Lie group homomorphisms whose differentials dψ, dφ : g → h are identical. Then ψ = φ.

Proof: See [Wa71], theorem 3.16. 
Proposition 1.5.2 Generating connected Lie groups
Let G be a connected Lie group and let U be an open neighbourhood of 1_G. Then G = ⋃_{n=1}^{∞} U^n, where U^n is the set of all n-fold products of elements of U.

Proof: Set V = (U ∩ U^{-1}). The set U^{-1} is also an open neighbourhood of 1_G as the inversion map is continuous. Therefore V is an open neighbourhood of 1_G. Define
H = ⋃_{n=1}^{∞} V^n ⊆ ⋃_{n=1}^{∞} U^n.
H is a subgroup of G and is open in G as each element h ∈ H lies in the open neighbourhood hV ⊆ H. Thus all cosets of H in G are open; in particular, G \ H is open, so H is a closed subgroup of G. As G is connected and H is a non-empty open and closed subset, it follows that H = G. 

17 Theorem 1.5.3 Existence and uniqueness of connected Lie subgroups Let G be a Lie group with Lie algebra g. Let h be a Lie subalgebra of g. There is a unique connected Lie subgroup H of G whose Lie algebra is h.

Proof: See [Wa71], theorem 3.19.  Corollary 1.5.4 Subgroup-subalgebra correspondence Let G be a Lie group with Lie algebra g. There is a 1-1 correspondence between the connected Lie subgroups of G and the subalgebras of g. Moreover, the correspondence maps normal subgroups to ideals.

Proof: This follows immediately from theorem 1.5.3.

We will use this corollary regularly. At this time we use it to define the connected Lie subgroup Int(g) to be the unique connected subgroup of Aut(g) with Lie algebra ad(g). If g is semisimple then by proposition 1.4.5, Int(g) = Aut(g)0.

Theorem 1.5.5 Uniqueness of Lie group homomorphisms Let G and H be Lie groups with Lie algebras g and h respectively. If G is simply connected then for each Lie algebra homomorphism ψ : g → h there exists a unique Lie group homomorphism φ : G → H with dφ = ψ.

Proof: See [Wa71], theorem 3.27.

Theorem 1.5.6 Campbell-Baker-Hausdorff formula Let G be a connected Lie group with Lie algebra g. The element Z ∈ g satisfying the equation exp(Z) = exp(X) exp(Y ) is given by the formula

Z = Σ_{n=1}^{∞} ((−1)^{n−1}/n) Σ_{r_i+s_i≥1, 1≤i≤n} (r_1!s_1! . . . r_n!s_n!)^{-1} (Σ_{i=1}^{n}(r_i + s_i))^{-1} [X^{r_1}Y^{s_1} . . . X^{r_n}Y^{s_n}],

where the notation [X^{r_1}Y^{s_1} . . . X^{r_n}Y^{s_n}] is a shorthand for the iterated bracket

[X, [X, . . . [X, [Y, [Y, . . . [Y, . . . [X, [X, . . . [X, [Y, [Y, . . . , Y]]] . . . ]],
with r_1 copies of X, then s_1 copies of Y, and so on, ending with r_n copies of X and s_n copies of Y.

We note first that this iterated bracket is 0 if s_n > 1, or if s_n = 0 and r_n > 1. The first three terms of this expression are given by
Z = X + Y + (1/2)[X, Y] + (1/12)([X, [X, Y]] − [Y, [X, Y]]) + . . . .
A proof of this result can be found in [Kn02] on pages 669-682.
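The displayed terms can be compared with log(exp(X) exp(Y)) numerically. The sketch below (in Python, assuming scipy; the small random matrices are hypothetical choices) truncates the series after the terms written above, so the discrepancy should be of the order of the omitted higher brackets.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(2)
    X = 0.05 * rng.standard_normal((3, 3))
    Y = 0.05 * rng.standard_normal((3, 3))

    def br(A, B):
        return A @ B - B @ A

    Z_exact = logm(expm(X) @ expm(Y))
    Z_series = X + Y + 0.5 * br(X, Y) + (br(X, br(X, Y)) - br(Y, br(X, Y))) / 12.0

    print(np.max(np.abs(Z_exact - Z_series)))   # small: the omitted terms are higher order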

1.6 Representations of the Lie algebra sl2(C)

In this section we concentrate on the Lie algebra sl_2(C). The set {e, f, h} is a basis for the Lie algebra sl_2(C), where
e = (0 1; 0 0), f = (0 0; 1 0) and h = (1 0; 0 −1).

Notice that [e, f] = h, [h, e] = 2e and [h, f] = −2f and hence [sl2(C), sl2(C)] = sl2(C).
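These bracket relations are easily verified by direct matrix computation; a minimal sketch in Python (assuming numpy) follows.

    import numpy as np

    e = np.array([[0, 1], [0, 0]])
    f = np.array([[0, 0], [1, 0]])
    h = np.array([[1, 0], [0, -1]])
    br = lambda X, Y: X @ Y - Y @ X

    print(np.array_equal(br(e, f), h))        # [e, f] = h
    print(np.array_equal(br(h, e), 2 * e))    # [h, e] = 2e
    print(np.array_equal(br(h, f), -2 * f))   # [h, f] = -2f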

Definition 1.6.1 Let ψ : g → gl(V ) be a representation of sl2(C) on a finite dimensional complex vector space V .

(i) An invariant subspace under ψ is a subspace U ⊆ V such that ψ(X)U ⊆ U for all X ∈ sl2(C). (ii) ψ is called irreducible if the only invariant subspaces are 0 and V .

(iii) If ψ' : sl_2(C) → gl(V') is a representation of sl_2(C) on a finite dimensional complex vector space V', then ψ and ψ' are said to be equivalent if there is an isomorphism E : V → V' such that Eψ(X) = ψ'(X)E for all X ∈ sl_2(C).

Theorem 1.6.2 Irreducible representations of sl2(C) For each m ∈ N there is a unique (up to equivalence) irreducible complex-linear representation π : sl_2(C) → gl_m(C). Moreover, there is a basis {v_0, . . . , v_{m−1}} for V = C^m such that

(i) π(h)vi = (m − 1 − 2i)vi,

(ii) π(e)v0 = 0,

(iii) π(f)vi = vi+1, where vm := 0,

(iv) π(e)vi = i(m − i)vi−1, for 1 ≤ i ≤ m − 1.

Proof: We prove that properties (i)-(iv) define an irreducible complex-linear representation as claimed. The proof of uniqueness is omitted and can be found in [Kn02] on page 63.

Define π(h), π(e), π(f) as given by (i) - (iv) and extend linearly to define π on sl2(C). Simple calcu- lations show that π([h, e]) = [π(h), π(e)], π([h, f]) = [π(h), π(f)] and π([e, f]) = [π(e), π(f)] so π is a complex-linear representation.

To prove the representation is irreducible, suppose U is a nonzero invariant subspace of V. Since π(h) has m distinct eigenvalues, U contains an eigenvector of π(h), hence v_i ∈ U for some i. By (iv) we can apply π(e) repeatedly to v_i to show v_0, . . . , v_{i−1} ∈ U, and by (iii) we can apply π(f) repeatedly to v_i to show v_{i+1}, . . . , v_{m−1} ∈ U, so U = V. 
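The matrices π(h), π(e), π(f) described in (i)-(iv) can be written down explicitly and the relations checked by machine. The sketch below (in Python, assuming numpy; the function name irrep is illustrative) builds them for a given m and verifies that π respects the brackets of sl_2(C).

    import numpy as np

    def irrep(m):
        # the m-dimensional representation of theorem 1.6.2 in the basis {v_0, ..., v_{m-1}}
        H = np.diag([m - 1 - 2 * i for i in range(m)]).astype(float)
        E = np.zeros((m, m)); F = np.zeros((m, m))
        for i in range(1, m):
            E[i - 1, i] = i * (m - i)     # pi(e) v_i = i(m-i) v_{i-1}
            F[i, i - 1] = 1.0             # pi(f) v_{i-1} = v_i
        return E, F, H

    br = lambda X, Y: X @ Y - Y @ X
    E, F, H = irrep(5)
    print(np.allclose(br(E, F), H))        # pi([e, f]) = [pi(e), pi(f)]
    print(np.allclose(br(H, E), 2 * E))    # pi([h, e]) = [pi(h), pi(e)]
    print(np.allclose(br(H, F), -2 * F))   # pi([h, f]) = [pi(h), pi(f)]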

Later results will rely heavily on the uniqueness of such representations.

Chapter 2

Complex semisimple Lie algebras

The purpose of this chapter is to introduce complex semisimple Lie algebras and classify them up to isomorphism. This is achieved by relating such Lie algebras bijectively to a collection of diagrams called Dynkin diagrams. These diagrams are then classified using Cartan matrices. The main reference for this chapter is [Kn02] chapters 1 and 2.

2.1 The classification theorem

Theorem 2.1.1 Classification theorem To each complex semisimple Lie algebra we can associate a Dynkin diagram in which every connected component is isomorphic to exactly one such diagram from figure 2.1.2. Moreover, if g and g' are two complex semisimple Lie algebras, then g and g' are isomorphic if and only if they have isomorphic Dynkin diagrams.

Figure 2.1.2 Dynkin diagrams

2.2 Example: sln+1(C), for n ≥ 1 Restricting the classification theorem just to this case we obtain the following theorem.

Theorem 2.2.1 g := sln+1(C) = {A ∈ Mn+1(C) | Tr(A) = 0} is the unique (up to isomorphism) complex simple Lie algebra with Dynkin diagram An.

We will make no attempt to prove the uniqueness in this section.

The first task is to show that g is actually a complex Lie algebra. This result was established by showing that g is the Lie algebra of SL_{n+1}(C); in this section we will prove the result directly. Note that g is certainly a complex vector space and is closed under the multiplication [A, B] = AB − BA, since Tr(AB − BA) = Tr(AB) − Tr(BA) = Tr(AB) − Tr(AB) = 0. It remains to show that this bracket is antisymmetric and satisfies the Jacobi identity. The first is obvious from the definition, and
[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = A(BC − CB) − (BC − CB)A + B(CA − AC) − (CA − AC)B + C(AB − BA) − (AB − BA)C = 0.
We will define the following elements of g. For i ≠ j we define E_{i,j} to be zero except in the (i, j)th entry, where it takes value 1, and we define H_{i,j} to be zero except in the ith diagonal entry, where it takes value 1, and the jth diagonal entry, where it takes value −1.

Notice that B := {E_{i,j} | i ≠ j, 1 ≤ i, j ≤ n + 1} ∪ {H_{i,i+1} | 1 ≤ i ≤ n} is a basis for g as a vector space. It is easy to verify that g is semisimple using Cartan's criterion for semisimplicity, as the Killing form is non-degenerate on g. Later we will use results from this section to show that g is actually a simple Lie algebra.

Set h to be the Lie subalgebra of g with basis {H_{i,i+1} | 1 ≤ i ≤ n}; this subalgebra is abelian and thus has the key property that ad_g(h) = {ad(H) : g → g | H ∈ h} is simultaneously diagonalisable with respect to the basis B. Moreover, if we define a collection of linear functionals e_i : h → C, where e_i(H) is the ith diagonal entry of H, then we see that given two distinct indices i and j,

(ad(H))Ei,j = [H,Ei,j] = ei(H)Ei,j − ej(H)Ei,j = (ei − ej)(H)Ei,j.

We can think of E_{i,j} as a simultaneous eigenvector of ad_g(h) whose eigenvalue is the linear functional e_i − e_j ∈ h^*. Using these simultaneous eigenvectors we can decompose g as follows:
g = h ⊕ ⊕_{i≠j} g_{e_i−e_j}, where g_{e_i−e_j} := {X ∈ g | ad(H)(X) = (e_i − e_j)(H)X for all H ∈ h}.

Similarly, we could define g_0 := {X ∈ g | ad(H)(X) = 0 for all H ∈ h} and it is clear that g_0 = h. The decomposition of g above is called the root-space decomposition, h is called a Cartan subalgebra and the linear functionals e_i − e_j for i ≠ j are called roots; the set of roots will be denoted by Φ. For each root e_i − e_j we will call E_{i,j} the root vector associated to the root e_i − e_j. The set of roots spans h^*, as the set of e_i spans h^*.
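The simultaneous eigenvector property can be checked directly for a random element of h. The sketch below (in Python, assuming numpy; the choice n = 3 and the trace-zero diagonal matrix H are hypothetical) verifies that ad(H)E_{i,j} = (e_i − e_j)(H)E_{i,j} for all i ≠ j.

    import numpy as np

    n = 3                                          # working in sl_4(C)
    rng = np.random.default_rng(3)
    d = rng.standard_normal(n + 1)
    H = np.diag(d - d.mean())                      # a trace-zero diagonal matrix

    def E(i, j):
        M = np.zeros((n + 1, n + 1)); M[i, j] = 1.0
        return M

    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                lhs = H @ E(i, j) - E(i, j) @ H    # ad(H) E_{ij}
                rhs = (H[i, i] - H[j, j]) * E(i, j)
                assert np.allclose(lhs, rhs)
    print("ad(H) acts diagonally on the root vectors E_{ij}")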

We define h_0 to be the real form of h consisting of those diagonal matrices with real entries. The linear functionals e_i (considered now as elements of (h_0)^*) span (h_0)^*, and their sum is the trace function on h_0, which is the zero functional as h_0 ⊆ g. There are n + 1 vectors e_i (which span (h_0)^*) and (h_0)^* has dimension n, so it suffices to impose one suitable restriction on the coefficients in the sum Σ_{j=1}^{n+1} a_je_j to obtain a unique expression for each functional. Such a condition is given by Σ_{j=1}^{n+1} a_j = 0.
A non-zero functional φ = Σ_{j=1}^{n+1} a_je_j (with Σ_{j=1}^{n+1} a_j = 0) is said to be positive if and only if the first non-zero a_j is positive. If φ is positive then we say that −φ is negative. Notice that a sum of positive elements is positive and a sum of negative elements is negative. With respect to this ordering the positive elements of Φ behave as follows:

e1 − en+1 > e1 − en > ··· > e1 − e2 > e2 − en+1 > e2 − en > ··· > e2 − e3 > ··· > en−1 − en+1 > en−1 − en > en − en+1 > 0.

We say that a positive root is simple if it cannot be expressed as the sum of at least two other positive roots. The set of simple roots of g is Π = {e_i − e_{i+1} | 1 ≤ i ≤ n}.

Although this step is not essential for constructing the Dynkin diagram of g, it is very informative and will be a crucial ingredient when proving theorem 2.1.1 and in later chapters (for example in definition 7.1.7).

Consider the subspace sli,j of g with basis {Ei,j,Ej,i,Hi,j} for some 1 ≤ i < j ≤ n + 1. Then the map

ψ : sli,j → sl2(C) given by

ψ(E_{i,j}) = (0 1; 0 0), ψ(E_{j,i}) = (0 0; 1 0) and ψ(H_{i,j}) = (1 0; 0 −1)
is a Lie algebra isomorphism with respect to the Lie brackets of g and sl_2(C). As this holds whenever i < j, we can see that g is spanned by copies of sl_2(C).

In order to obtain the Dynkin diagram we require a bilinear form ⟨·, ·⟩ on h*. This is given by taking the C-linear extension of

hei − ej, ek − eli = B(Hi,j,Hk,l) = (ei − ej)(Hk,l) = (ek − el)(Hi,j), where B is the Killing form of g. This defines a bilinear form on h∗ as Φ spans h∗ and B is a symmetric bilinear form.

With respect to this inner product hei − ej, ei − eji = 2 whenever i 6= j.

Let ei − ej ∈ Φ (where ei − ej is restricted to the space (h0)*). We define a root reflection

si,j(ψ) = ψ − (2⟨ψ, ei − ej⟩ / ⟨ei − ej, ei − ej⟩) (ei − ej).

For each root this is a reflection in the hyperplane orthogonal to R(ei − ej) which permutes Φ. The Weyl group W is the finite subgroup of the orthogonal group generated by the set of reflections S := {sa | a ∈ Π}.

Now that we have the collection of root reflections in place we may define the Cartan matrix of Φ. This n × n matrix A = (aij) is given by

aij = 2⟨ei − ei+1, ej − ej+1⟩ / ⟨ei − ei+1, ei − ei+1⟩.

We note immediately that aii = 2 for all i. It also follows that aij = −1 whenever |i − j| = 1 and aij = 0 otherwise. So in the case n = 3, the Cartan matrix of sl4(C) is

( 2 −1 0 ; −1 2 −1 ; 0 −1 2 ).
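The Cartan matrix can be computed directly from the simple roots. In the sketch below (illustrative; cartan_matrix is an ad hoc helper, and the Euclidean inner product on the ei coordinates is used as a stand-in for ⟨·, ·⟩, to which it is proportional) we recover the matrix displayed above for sl4(C).

import numpy as np

def cartan_matrix(simple_roots):
    # a_ij = 2<a_i, a_j>/<a_i, a_i> for simple roots given as vectors
    a = np.array(simple_roots, dtype=float)
    n = len(a)
    return np.array([[2 * a[i] @ a[j] / (a[i] @ a[i]) for j in range(n)] for i in range(n)])

e = np.eye(4)
A3 = [e[i] - e[i + 1] for i in range(3)]        # simple roots of sl_4(C)
print(cartan_matrix(A3))
# [[ 2. -1.  0.]
#  [-1.  2. -1.]
#  [ 0. -1.  2.]]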

Finally we are in a position to define the Dynkin diagram of g. The Dynkin diagram is the graph with vertex set Π and edges between two vertices ei − ei+1 and ej − ej+1 if and only if aijaji 6= 0.

Every edge is assigned a label equal to 3 and each vertex ei − ei+1 is given a label equal to c⟨ei − ei+1, ei − ei+1⟩, where c > 0 is some constant which does not depend on i, chosen minimally so that all labels take integer values. In this case ⟨ei − ei+1, ei − ei+1⟩ = 2 for all i, so we choose c = 1/2. We can see from diagram 2.1.2 that no vertex is labelled. It is conventional that labels are only written on adjacent vertices which have distinct labels. Often an edge connecting vertices with distinct labels is displayed as a directed edge towards the vertex of smaller label.

Thus g has Dynkin diagram An. In the chapter on buildings we will see that Dynkin diagrams are in

fact examples of Coxeter diagrams, where the group associated to the Coxeter diagram is, in fact, the Weyl group of root reflections. This will enable us to prove that g has Weyl group Sn+1.

In this section, many things have been assumed, most commonly that the choices made (Cartan subal- gebra, ordering of Φ, choice of simple roots Π) have no impact on the resulting Cartan matrix, Dynkin diagram and Weyl group. The real triumph of the classification theorem is in proving these assumptions are justified. As with this section the place to start is with Cartan subalgebras and roots.

2.3 Cartan subalgebras and roots

This section abstracts the constructions from the previous section, introducing Cartan subalgebras and root systems. We will then show how each pair (g, h) where g is a complex semisimple Lie algebra and h is a Cartan subalgebra of g gives rise to an abstract root system.

Definition 2.3.1 Cartan subalgebras Let g be a complex semisimple Lie algebra. A Lie subalgebra h of g is called a Cartan subalgebra if it is maximal by inclusion amongst the abelian Lie subalgebras a of g.

If a Lie subalgebra a of g is abelian, it follows that any two linear transformations from adg(a) commute. To see this, notice that for any X ∈ g,

(ad(A) ad(A′))(X) = [A, [A′, X]]
 = −[A′, [X, A]] − [X, [A, A′]]   (by the Jacobi identity)
 = −[A′, [X, A]]                  (since a is abelian)
 = [A′, [A, X]]
 = (ad(A′) ad(A))(X).

We know from basic representation theory that any finite set of commuting diagonalisable linear transformations is simultaneously diagonalisable and therefore the underlying space has a basis of simultaneous eigenvectors whose eigenvalues are constant. However, adg(a) is certainly not finite. Instead we use the fact that a is finite dimensional, with basis {H1,...,Hk} say. Thus, under the linear transformations {ad(H1),..., ad(Hk)}, g has a basis of simultaneous eigenvectors whose eigenvalues are constant.

Suppose X ∈ g lies in one of these simultaneous eigenspaces, so for each i there is some constant ci such that [Hi, X] = ciX. Let H ∈ a; then H = Σ_{i=1}^{k} aiHi and therefore

[H, X] = Σ_{i=1}^{k} ai[Hi, X] = (Σ_{i=1}^{k} aici) X,

so X lies in a simultaneous eigenspace under the set of transformations adg(a), with eigenvalue the linear functional on a which sends H = Σ_{i=1}^{k} aiHi to Σ_{i=1}^{k} aici. The same construction applies equally well to infinite dimensional Lie algebras providing every maximal (by inclusion) abelian subalgebra is finite dimensional.

Proposition 2.3.2 Let g be a complex semisimple Lie algebra and let h be a Cartan subalgebra of g. Then

g = ⊕_{a∈h*} ga,  where ga = {X ∈ g | [H, X] = a(H)X for all H ∈ h}.

Furthermore, g0 = h and [ga, gb] ⊆ ga+b for all a, b ∈ h*.

Proof: The fact that all X ∈ g lie in some ga has already been covered.

Suppose that X ∈ ga ∩ gb. Then a(H)X = [H,X] = b(H)X for all H ∈ h, so a = b.

Let X ∈ ga, Y ∈ gb and H ∈ h. Then

(ad(H) − (a(H) + b(H))I)[X, Y]
 = [H, [X, Y]] − a(H)[X, Y] − b(H)[X, Y]
 = [(ad(H) − a(H)I)X, Y] + [X, (ad(H) − b(H)I)Y]
 = 0,  as X ∈ ga and Y ∈ gb.

g0 is precisely the subspace of g consisting of all elements of g which commutes with h, as h was chosen maximally it follows that g0 = h. 

The decomposition given by proposition 2.3.2 is called the root-space decomposition of g. A linear functional a ∈ h* \ {0} is called a root if and only if ga ≠ {0}. The set of roots will be denoted by Φ(g, h) to show its dependence on the choice of h; when this is clear from the context we will simply write Φ. Nonzero members of ga are called root vectors for the root a. As was mentioned at the start of this section, the next important result will be to prove that the set of roots Φ given by the root-space decomposition forms an abstract root system. With this in mind we give the following definition and theorem.

Definition 2.3.3 Abstract Root Systems Let (V, h·, ·i) be finite dimensional real vector space equipped with a bilinear form. An abstract root system in V is a finite subset Φ ⊆ V \{0} with the following properties: (i)Φ spans V ,

(ii) if a, b ∈ Φ then sa(b) = b − (2⟨b, a⟩/⟨a, a⟩) a ∈ Φ,

(iii) if a, b ∈ Φ then 2⟨b, a⟩/⟨a, a⟩ ∈ Z.

An abstract root system is said to be reduced if a ∈ Φ implies 2a ∉ Φ.

Two abstract root systems (V, Φ) and (V′, Φ′) are isomorphic if there is a vector space isomorphism ψ : V → V′ with ψ(Φ) = Φ′.
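For concreteness, the following sketch (illustrative only; is_root_system is an ad hoc helper working with vectors in Euclidean space) checks axioms (i)-(iii) of definition 2.3.3 for a root system of type A2, and shows how the check fails when the set is not closed under the root reflections.

import numpy as np
from itertools import product

def is_root_system(roots, tol=1e-9):
    roots = [np.asarray(r, dtype=float) for r in roots]
    if np.linalg.matrix_rank(np.array(roots)) != len(roots[0]):     # (i) Φ spans V
        return False
    def in_phi(v):
        return any(np.allclose(v, r, atol=tol) for r in roots)
    for a, b in product(roots, repeat=2):
        n = 2 * (b @ a) / (a @ a)
        if abs(n - round(n)) > tol or not in_phi(b - n * a):         # (iii) and (ii)
            return False
    return True

a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3) / 2])        # two roots of the same length at 120 degrees
A2 = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]
print(is_root_system(A2))                    # True
print(is_root_system([a1, -a1, a2]))         # False: not closed under the root reflections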

Theorem 2.3.4 The root system of a complex semisimple Lie algebra g with respect to a Cartan subalgebra h forms a reduced abstract root system in (h0)*.

The remainder of this section will essentially be a proof of theorem 2.3.4.

Proposition 2.3.5 Basic properties of root spaces Let g be a complex semisimple Lie algebra, let h be a Cartan subalgebra of g and let Φ = Φ(g, h).

(i) If a, b ∈ Φ ∪ {0} and a + b 6= 0, then B(ga, gb) = 0.

(ii) If a ∈ Φ ∪ {0} then B is non-singular on ga × g−a. (iii) If a ∈ Φ then −a ∈ Φ.

(iv) B|h×h is nondegenerate. (v)Φ spans h∗.

Proof:

(i) ad(ga)ad(gb) maps gc onto gc+a+b by proposition 2.3.2, so when this transformation is written as a matrix whose basis is compatible with the root-space decomposition each diagonal entry will be zero. Hence, B(ga, gb) = Tr(ad(ga)ad(gb)) = 0. (ii) By Cartan’s criterion for semisimplicity, (1.4.2), B is non-degenerate on g, in particular the sets B(X, g) are all nontrivial for X ∈ ga \{0}. Part (i) shows that B(X, gb) = 0 for all b 6= −a so B(X, g−a) 6= 0.

(iii) Suppose −a 6∈ Φ then g−a = 0 and hence B(ga, g−a) = 0 which contradicts part (ii).

(iv) By (ii), B(X, g0) 6= 0 for all X ∈ g0 \{0}, but g0 = h so B|h×h is nondegenerate.

(v) If H ∈ h is such that a(H) = 0 for all a ∈ Φ, then the root-space decomposition tells us that ad(H) is nilpotent on g by the definition of the ga spaces. Since h is abelian, ad(H)ad(H′) is nilpotent for all H′ ∈ h. By Engel's theorem (1.3.9), B(H, h) = Tr(ad(H)ad(h)) = 0, so by part (iv), H = 0 and therefore Φ spans h*.

By part (iv) of this proposition we can see that for each a ∈ Φ there is some Ha ∈ h such that a(H) = B(H,Ha) for all H ∈ h. Also, by proposition 2.3.2 we may choose a root vector Ea in ga with the property that [H,Ea] = a(H)Ea for all H ∈ h. Later in the chapter we will show that the subspace of g with basis {Ea,E−a,Ha} is isomorphic to sl2(C).

Lemma 2.3.6 Let g be a complex semisimple Lie algebra, let h be a Cartan subalgebra of g and let Φ = Φ(g, h). Let Ea and Ha be as defined above.

(i) If a ∈ Φ and X ∈ g−a then [Ea,X] = B(Ea,X)Ha.

(ii) If a, b ∈ Φ then b(Ha) is a rational multiple of a(Ha).

(iii) If a ∈ Φ then a(Ha) 6= 0.

Proof:

(i) By proposition 2.3.5(iii), [ga, g−a] ⊆ g0, so [Ea, X] ∈ h. For H ∈ h, B([Ea, X], H) = B(X, [H, Ea]) = a(H)B(X, Ea)

= B(H,Ha)B(X,Ea) = B(Ha,H)B(Ea,X)

= B(B(Ea,X)Ha,H).

Therefore, B([Ea,X] − B(Ea,X)Ha,H) = 0 for all H ∈ h.

By proposition 2.3.5(iv) it follows that [Ea, X] − B(Ea, X)Ha = 0, as required.

(ii) By proposition 2.3.5(ii) there is some X−a ∈ g−a such that B(Ea, X−a) = 1, so by part (i), [Ea, X−a] = Ha. Fix b ∈ Φ and let g′ = ⊕_{n∈Z} g_{b+na}. This subspace is invariant under ad(Ha). We will now compute the trace of ad(Ha) on g′ in two ways.

Note ad(Ha) acts on g_{b+na} with the single generalised eigenvalue (b + na)(Ha), and summing over all values of n we obtain

Tr(ad(Ha)) = Σ_{n∈Z} (b + na)(Ha) dim(g_{b+na}),   (†)

as the trace is equal to the sum of the eigenvalues and the multiplicity of the eigenvalue (b + na)(Ha) is dim(g_{b+na}). On the other hand, by proposition 2.3.5(iii), g′ is invariant under ad(Ea) and ad(X−a). Using the fact that [Ea, X−a] = Ha we see that

Tr(ad(Ha)) = Tr(ad(Ea)ad(X−a) − ad(X−a)ad(Ea)) = 0.

Thus (†) = 0. As only finitely many of the spaces g_{b+na} are nonzero, rearranging gives b(Ha) Σ_{n∈Z} dim(g_{b+na}) = −a(Ha) Σ_{n∈Z} n dim(g_{b+na}), and the first sum is a positive integer, so b(Ha) is a rational multiple of a(Ha).

(iii) If a(Ha) = 0 then by part (ii) b(Ha) = 0 for all b ∈ Φ. By proposition 2.3.5(v), Φ spans h*, so every member of h* vanishes on Ha and therefore Ha = 0, which contradicts 2.3.5(iv) as a ≠ 0.

The following theorem shows that the example g = sln+1(C) is more similar to the general case than may be expected.

Proposition 2.3.7 Let a ∈ Φ, then dim(ga) = 1. Also na 6∈ Φ for any integer n ≥ 2.

Proof: Again choose X−a ∈ g−a with B(Ea, X−a) = 1 and notice that [Ea, X−a] = Ha. Set

g″ = CEa ⊕ CHa ⊕ ⊕_{n<0} g_{na}.

Then g″ is invariant under ad(Ea) and ad(Ha) by proposition 2.3.5(iii), and is invariant under ad(X−a) by proposition 2.3.5(iii) and lemma 2.3.6(i). Therefore

Tr(ad(Ha)) = a(Ha) + 0 + Σ_{n<0} n·a(Ha) dim(g_{na}) = 0,

so by rearranging we see that Σ_{n=1}^{∞} n dim(g_{−na}) = 1.

Consequently, dim(g−a) = 1 and dim(g−na) = 0 for all n ≥ 2, since a ∈ Φ if and only if −a ∈ Φ. 

We note the following corollary of theorem 2.3.7. Corollary 2.3.8 Let g be a complex semisimple Lie algebra with Cartan subalgebra h.

(i) The action of adg(h) on g is simultaneously diagonalisable.

(ii) B|h×h is given by B(H, H′) = Σ_{a∈Φ} a(H)a(H′).

(iii) The vector pairs {Ea,E−a} may be chosen so that B(Ea,E−a) = 1. Proof: (i) h is abelian and each root-space is one dimensional, so g must have a simultaneous basis of eigen- vectors and hence it is simultaneously diagonalisable.

(ii) Let {Hi} be a basis for h, so by proposition 2.3.7, {Hi} ∪ {Ea | a ∈ Φ} is a basis for g on which each ad(H) acts diagonally by (i). Therefore ad(H)ad(H′) acts diagonally and its eigenvalues are 0 and the a(H)a(H′). Hence

B(H, H′) = Tr(ad(H)ad(H′)) = Σ_{a∈Φ} a(H)a(H′).

(iii) By proposition 2.3.5(ii) B is non-singular on ga × g−a. These spaces are one dimensional so the result holds.  The preceding results from this section are sufficient to show that any complex semisimple Lie algebra is built out of copies of sl2(C) in a certain way.

To see this, let Ea,E−a be the normalised root vectors given by corollary 2.3.8(iii). Lemma 2.3.6(i) gives us the following bracket relations.

[Ha,Ea] = a(Ha)Ea, [Ha,E−a] = −a(Ha)E−a and [Ea,E−a] = Ha

Then we may normalise these vectors by setting

H′a = (2/a(Ha)) Ha,   E′a = (2/a(Ha)) Ea,   and   E′−a = E−a,

so

[H′a, E′a] = 2E′a,   [H′a, E′−a] = −2E′−a,   and   [E′a, E′−a] = H′a.

Define e, f, h ∈ sl2(C) as follows:

e = ( 0 1 ; 0 0 ),   f = ( 0 0 ; 1 0 )   and   h = ( 1 0 ; 0 −1 ).

These satisfy the bracket relations

[h, e] = 2e, [h, f] = −2f, [e, f] = h
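These relations are immediate to verify by hand, and the following short Python/numpy check (purely illustrative) confirms them for the matrices e, f and h defined above.

import numpy as np

e = np.array([[0, 1], [0, 0]]); f = np.array([[0, 0], [1, 0]]); h = np.array([[1, 0], [0, -1]])
br = lambda A, B: A @ B - B @ A
print(np.array_equal(br(h, e), 2 * e), np.array_equal(br(h, f), -2 * f), np.array_equal(br(e, f), h))
# True True True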

Consequently, using results from section 1.6 we know that a function which maps

H′a ↦ h,   E′a ↦ e,   and   E′−a ↦ f

can be extended linearly to an isomorphism of span{Ha, Ea, E−a} onto sl2(C). For this reason we will define the Lie subalgebra sla := span{Ha, Ea, E−a} ≅ sl2(C). Thus, by the root-space decomposition, g is spanned by embedded copies of sl2(C).

Definition 2.3.9 Root Strings
Let a ∈ Φ and let b ∈ Φ ∪ {0}. The a string containing b is the subset of Φ ∪ {0} consisting of all elements of the form b + na for some n ∈ Z.

We define a bilinear form on h* by ⟨a, b⟩ = B(Ha, Hb) = a(Hb) = b(Ha) for a, b ∈ h*, with Ha and Hb as defined in 2.3.5(iv). Then we can regard (h*, ⟨·, ·⟩) as a bilinear inner product space. To investigate this further we study the action of such a copy of sl2(C) on g. In particular we consider the adjoint representation of the copy of sl2(C) generated by the root a ∈ Φ on the invariant subspace g′ = ⊕_{n∈Z} g_{b+na} given by the a string containing b.

Proposition 2.3.10 Let a ∈ Φ, b ∈ Φ ∪ {0}.
(i) The a string containing b has the form {b + na | −p ≤ n ≤ q} where p, q ≥ 0 and p − q = 2⟨b, a⟩/⟨a, a⟩ ∈ Z,
(ii) If b ≠ na for all n ∈ Z, then the adjoint representation of sla on g′ := ⊕_{n∈Z} g_{b+na} is irreducible.

Proof: Omitted, see [Kn02] pages 144-146.

There are three key corollaries to this result, the last of which will be crucial for the proof of theorem 2.3.4.

Corollary 2.3.11 If a, b ∈ Φ ∪ {0} with a + b 6= 0 then [ga, gb] = ga+b.

Proof: Without loss we may assume that a 6= 0. By proposition 2.3.2, [ga, gb] ⊆ ga+b. We consider two cases. Case 1: b = na for some n ∈ Z \ {−1}, then proposition 2.3.7 shows b = 0 or b = a.

If b = a then ga+b = 0 by proposition 2.3.7 and equality must hold. If b = 0 then the equality [h, ga] = ga holds as ga does not commute with h and thus the space [h, ga] is at least 1 dimensional, so must be equal to ga.

Case 2: b ≠ na for all n ∈ Z. Then by proposition 2.3.10(ii), sla acts irreducibly on g′ = ⊕_{n∈Z} g_{b+na}. The result then follows from theorem 1.6.2.

Corollary 2.3.12 Let a, b ∈ Φ be such that b + na ≠ 0 for all n. Let Ea, E−a, Eb be any root vectors for a, −a, b respectively and let p, q be as defined in 2.3.10(i). Then

[E−a, [Ea, Eb]] = (q(1 + p)/2) a(Ha) B(Ea, E−a) Eb.

Proof: Both sides of this equation are linear in Ea and E−a so we may normalise these two vectors so that B(Ea, E−a) = 1. Identifying the span of {Ha, Ea, E−a} with sl2(C) we can rewrite the desired formula as

(⟨a, a⟩/2) [f, [e, Eb]] =? (q(1 + p)/2) a(Ha) Eb.

As a(Ha) = ⟨a, a⟩ this formula simplifies to

[f, [e, Eb]] =? q(1 + p) Eb.

By proposition 2.3.10, the adjoint representation of span{e, f, h} on g′ is irreducible. Comparing these facts with theorem 1.6.2, we see that the vector E_{b+qa} corresponds to a multiple of v0. Moreover, as Eb is a multiple of (ad(f))^q E_{b+qa}, Eb corresponds to a multiple of vq. Using parts (iii) and (iv) from theorem 1.6.2, we see that

(ad(f))(ad(e))Eb = q(N − q + 1)Eb,  where N = dim(g′) − 1 = (q + p + 1) − 1.

Therefore q(N − q + 1) = q(1 + p), as required.

Corollary 2.3.13 Let V be the R-linear span of Φ in h*. Then V is a real form of the vector space h* and the restriction of ⟨·, ·⟩ to V × V is a positive definite inner product. Moreover, if h0 denotes the R-linear span of all Ha for a ∈ Φ, then h0 is a real form of h, the members of V are exactly those linear functionals that are real on h0, and the restriction of those linear functionals from h to h0 is an isomorphism of real vector spaces from V onto (h0)*.

Proof: Omitted, see [Kn02] pages 147-148.

Definition 2.3.14 Let a ∈ Φ. The root reflection sa : (h0)* → (h0)* is given by

sa(b) = b − (2⟨b, a⟩/⟨a, a⟩) a,  for any b ∈ (h0)*.

The root reflection is an orthogonal transformation on (h0)*: sa(ka) = −ka for all k ∈ R and sa(b) = b whenever b is in the orthogonal complement of a. The subgroup W = W(g, h) of the orthogonal group on (h0)* generated by the set of root reflections S := {sa | a ∈ Φ} is called the Weyl group of g with respect to the Cartan subalgebra h.

Proposition 2.3.15 Let a, b ∈ Φ; then sa(b) ∈ Φ.

Proof: Let p, q be as defined in proposition 2.3.10. Then

sa(b) = b − (2⟨b, a⟩/⟨a, a⟩) a = b − (p − q)a = b + (q − p)a.

Since −p ≤ q − p ≤ q, b + (q − p)a is in the a string containing b. Hence sa(b) ∈ Φ ∪ {0}. Because sa is an orthogonal transformation, sa(b) 6= 0, so sa(b) ∈ Φ. 

Proof of theorem 2.3.4: By corollary 2.3.13, (h0)* is a bilinear inner product space spanned by Φ. Part (ii) of the definition follows from proposition 2.3.15 and part (iii) from proposition 2.3.10(i). Finally, by proposition 2.3.7, Φ is reduced.

2.4 Abstract Root Systems

We now know that every pair (g, h) admits a reduced abstract root system ((h0)*, Φ). In this section we will show how each abstract root system defines an abstract Cartan matrix and how this matrix can be used to form a Dynkin diagram.

Definition 2.4.1 Strings Let Φ be a reduced abstract root system. If a ∈ Φ and b ∈ Φ ∪ {0}, then the a string containing b is the set {φ ∈ Φ | φ = b + na for some n ∈ Z}. Proposition 2.4.2 Let Φ be a reduced abstract root system. (i) If a ∈ Φ then −a ∈ Φ. (ii) If a ∈ Φ then the only members of Φ proportional to a are ±a.

(iii) If a, b ∈ Φ with b ≠ ±a, then 2⟨b, a⟩/⟨a, a⟩ ∈ {0, ±1, ±2, ±3}.

(iv) If a, b ∈ Φ are nonproportional with ⟨a, a⟩ < ⟨b, b⟩, then 2⟨b, a⟩/⟨b, b⟩ ∈ {0, ±1}.
(v) If a, b ∈ Φ and ⟨a, b⟩ > 0 then a − b ∈ Φ ∪ {0}; moreover, if ⟨a, b⟩ < 0 then a + b ∈ Φ ∪ {0}.
(vi) If a, b ∈ Φ and a − b, a + b ∉ Φ ∪ {0}, then ⟨a, b⟩ = 0.
(vii) If a ∈ Φ and b ∈ Φ ∪ {0} then the a string containing b has at most 4 elements and equals the set {b + na | −p ≤ n ≤ q}, where p − q = 2⟨b, a⟩/⟨a, a⟩.

28 Proof:

(i) sa(a) = a − (2⟨a, a⟩/⟨a, a⟩)a = a − 2a = −a, so −a ∈ Φ.

(ii) Let a ∈ Φ and let ca ∈ Φ. Then

2⟨ca, a⟩/⟨a, a⟩ = 2c ∈ Z   and   2⟨a, ca⟩/⟨ca, ca⟩ = 2/c ∈ Z.

As Φ is reduced, c ≠ ±2 and c ≠ ±1/2; hence c = ±1.

(iii) By the Schwarz inequality on inner products

(2⟨b, a⟩/⟨a, a⟩)(2⟨a, b⟩/⟨b, b⟩) ≤ 4,

with equality if and only if b = ca for some c. This case is covered by part (ii). If the inequality is strict then

(2⟨b, a⟩/⟨a, a⟩)(2⟨a, b⟩/⟨b, b⟩) ≤ 3.

(iv) In this case |2⟨b, a⟩/⟨a, a⟩| ≥ |2⟨a, b⟩/⟨b, b⟩| and their product is at most three, so 2⟨a, b⟩/⟨b, b⟩ ∈ {0, ±1}.

(v) The cases b = ±a are both trivial, so assume a and b are not proportional. If ⟨a, a⟩ ≤ ⟨b, b⟩ then sb(a) = a − (2⟨a, b⟩/⟨b, b⟩)b = a − b by part (iv).

If instead ⟨a, a⟩ ≥ ⟨b, b⟩ then sa(b) = b − a, so a − b ∈ Φ by part (i). To prove the second part simply replace a by −a in this argument.
(vi) This follows immediately from part (v).
(vii) Let −p and q be the smallest and largest values of n such that b + na ∈ Φ ∪ {0}. If this set has a gap then there exist values r and s such that r < s − 1 and b + ra, b + sa ∈ Φ ∪ {0}, with b + (r + 1)a, b + (s − 1)a ∉ Φ ∪ {0}. By part (v), ⟨b + ra, a⟩ ≥ 0 and ⟨b + sa, a⟩ ≤ 0. Subtracting these inequalities we see that (r − s)⟨a, a⟩ ≥ 0 and thus r − s ≥ 0, which is a contradiction.

Considering the root reflection

sa(b + na) = b + na − (2⟨b + na, a⟩/⟨a, a⟩) a = b − (n + 2⟨b, a⟩/⟨a, a⟩) a,

we deduce that the inequality −p ≤ n ≤ q implies that −q ≤ n + 2⟨b, a⟩/⟨a, a⟩ ≤ p. Setting n = −p and n = q in turn we see that 2⟨b, a⟩/⟨a, a⟩ ≤ p − q ≤ 2⟨b, a⟩/⟨a, a⟩. Thus they are equal.

It remains to calculate the length of the string. We may assume q = 0, so the length of the string is p + 1, where p = 2⟨b, a⟩/⟨a, a⟩ ≤ 3 by part (iii).

Definition 2.4.3 Positive elements
We define a notion of positivity on (h0)* by fixing a basis {H1,...,Hm} for h0. Then we say φ > 0 if there is some k such that φ(Hi) = 0 for all i < k and φ(Hk) > 0. We can then extend > to an ordering on V where a > b if and only if a − b > 0.

Such an ordering is often called lexicographic.

Lemma 2.4.4 If a, b are distinct simple roots then a − b is not a root, i.e. ha, bi ≤ 0.

Proof: Assume a − b is a root. If a − b > 0 then a = (a − b) + b so a is not a simple root. If instead a − b < 0 then b = a + (b − a) so b is not simple. Finally, if a − b = 0 then a, b are not distinct. Thus a − b is not a root so by proposition 2.4.2(v), ha, bi ≤ 0. 

Proposition 2.4.5 A basis of simple roots
Set m = dim(h0)*. There are m simple roots a1, . . . , am and they are linearly independent. Moreover, if b ∈ Φ and b = Σ_{i=1}^{m} ciai then each ci ∈ Z and either ci ≥ 0 for all i or ci ≤ 0 for all i.

Proof: Let b > 0 be in Φ. If b is not simple then write b = b1 + b2 with b1, b2 > 0. Continue decomposing until b is written as a sum of simple roots. We can list the decomposition as tuples (b, b1, component of b1, . . . ) with each entry a component of the preceding one. No tuple has more entries than there are positive roots (if a positive root a appeared twice we would have a = a + φ for some sum of positive roots φ, which is impossible). Thus b = Σ_{i=1}^{m} ciai where each ci is a non-negative integer and all the ai are simple roots. Hence the set of simple roots spans Φ.

To prove linear independence suppose c1a1 + ··· + csas − cs+1as+1 − ··· − cmam = 0 with all cj ∈ R_0^+ = {x ∈ R | x ≥ 0}.

Set b = c1a1 + ··· + csas = cs+1as+1 + ··· + cmam. Then

0 ≤ ⟨b, b⟩ = ⟨ Σ_{j=1}^{s} cjaj , Σ_{k=s+1}^{m} ckak ⟩ = Σ_{j=1}^{s} Σ_{k=s+1}^{m} cjck⟨aj, ak⟩ ≤ 0,

where the last inequality holds by lemma 2.4.4. Hence ⟨b, b⟩ = 0, so b = 0 and cj = 0 for all j since a positive combination of positive roots cannot equal zero.

As Φ spans (h0)*, it is clear that the set {a1, . . . , am} is a basis of (h0)*. For the remainder of the section we fix a reduced root system Φ and an ordering < on Φ. Let Π be the set of simple roots with respect to <.

Definition 2.4.6 Cartan matrices

Write Π = {a1, . . . , an}. The n × n matrix A = (Aij) given by Aij = 2⟨aj, ai⟩/⟨ai, ai⟩ is called the Cartan matrix of Φ with respect to Π.

Proposition 2.4.7 Properties of the Cartan matrix
Let A be the Cartan matrix of Φ with respect to Π. Then

(i) Aij ∈ Z for all i, j,

(ii) Aii = 2 for all i,

(iii) Aij ≤ 0 for i 6= j,

(iv) Aij = 0 if and only if Aji = 0, (v) There is a diagonal matrix D with positive entries such that DAD−1 is symmetric positive definite. Remark: Any matrix which satisfies the five properties above is called an abstract Cartan matrix and two abstract Cartan matrices A, B are said to be isomorphic if there is a permutation matrix P such that P AP −1 = B. Proof: (i)-(iv) are all obvious from the definition.

(v) Set D = (Dij), where Dij = 0 whenever i ≠ j and Dii = |ai| := ⟨ai, ai⟩^{1/2}. Then

(DAD⁻¹)ij = 2⟨ ai/|ai| , aj/|aj| ⟩.

This matrix is positive definite as given any c = (c1, . . . , cn),

c(⟨φi, φj⟩)cᵀ = ⟨ Σ_i ciφi , Σ_i ciφi ⟩ > 0, for any basis {φ1, . . . , φn} of (h0)*.

By proposition 2.4.5, the vectors {a1/|a1|, . . . , an/|an|} form a basis of (h0)*, so the result holds.

Proposition 2.4.8 Let A be an abstract Cartan matrix.
(i) The matrix Ak obtained from A by deleting the k-th row and column is also an abstract Cartan matrix.

(ii) If i 6= j, then AijAji < 4 and Aij ∈ {0, −1, −2, −3}.

Proof: Part (i) follows from the definition by deleting the same row and column from the matrix D. Let A′ be the abstract Cartan matrix obtained from A by deleting all but the i-th and j-th rows and columns. As A′ is an abstract Cartan matrix there is a diagonal matrix D′ such that D′A′(D′)⁻¹ is positive definite, so in particular

det( ( di 0 ; 0 dj ) ( 2 Aij ; Aji 2 ) ( di⁻¹ 0 ; 0 dj⁻¹ ) ) > 0.

Therefore AijAji < 4. Aij = 0 if and only if Aji = 0 and Aij,Aji ≤ 0 are integers, so Aij ∈ {0, −1, −2, −3}. 

Definition 2.4.9 Dynkin diagrams

Let A = (Aij) be an n × n abstract Cartan matrix. The Dynkin diagram of A is the graph on n vertices {A1,...,An} defined as follows.

Each vertex Ai has an attached weight c⟨Ai, Ai⟩, where c > 0 is a constant independent of i, chosen minimally such that every weight takes an integer value. Given two distinct vertices Ai and Aj we connect them with an edge if AijAji ≠ 0 and attach a weight wij to that edge, where

wij = 3 if AijAji = 1,   wij = 4 if AijAji = 2,   and   wij = 6 if AijAji = 3.

Conventionally, edge labels of value 3 are omitted from the Dynkin diagram. Also, vertex weights are often omitted, replaced by directed edges when adjacent roots have distinct weights, always directed towards the shorter root. Figure 2.1.2 includes both of these pieces of information for clarity. Another common way of labelling the edges is to draw AijAji edges between the two vertices Ai and Aj.
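Passing from an abstract Cartan matrix to the labelled edges of its Dynkin diagram is purely mechanical; the sketch below (illustrative; dynkin_edges is an ad hoc helper) implements the rule of definition 2.4.9 for two rank-2 examples.

def dynkin_edges(A):
    # edges (i, j, label) of the Dynkin diagram of an abstract Cartan matrix A
    label = {1: 3, 2: 4, 3: 6}
    n = len(A)
    return [(i, j, label[A[i][j] * A[j][i]])
            for i in range(n) for j in range(i + 1, n) if A[i][j] * A[j][i] != 0]

B2 = [[2, -1], [-2, 2]]         # one edge with A_12 A_21 = 2, i.e. labelled 4
G2 = [[2, -1], [-3, 2]]         # one edge with A_12 A_21 = 3, i.e. labelled 6
print(dynkin_edges(B2), dynkin_edges(G2))    # [(0, 1, 4)] [(0, 1, 6)]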

Notice that given any abstract Dynkin diagram, we may recover the abstract Cartan matrix A which defined it. Given any two vertices Ai,Aj with no edge between them, we deduce that AijAji = 0 and since zeroes are symmetric in the Cartan matrix, Aij = Aji = 0.

Now suppose there is an edge between Ai and Aj where the weight of the vertices Ai and Aj are wi and wj respectively. The weight of the edge on the Dynkin diagram tells us the value of AijAji and since DAD−1 is symmetric, we see that

(wi)^{1/2} Aij (wj)^{−1/2} = (wj)^{1/2} Aji (wi)^{−1/2}.

In other words, wiAij = wjAji. These two pieces of information, added to the fact that Aij,Aji ∈ {−1, −2,... }, are sufficient to deduce the values Aij and Aji.

Theorem 2.4.10 Complex simple Lie algebras Let g be a complex semisimple Lie algebra. g is simple if and only if its Dynkin diagram is connected.

The condition that the Dynkin diagram is connected is equivalent to requiring that the Cartan matrix cannot be decomposed into a block diagonal matrix with more than one block, and this is equivalent to requiring that the root system Φ(g, h) is irreducible, so that it does not admit a nontrivial orthogonal decomposition. We will implicitly use the characterisation of semisimple Lie algebras (theorem 1.4.4) in the proof.

Proof: Suppose g is a nontrivial direct sum of ideals g = g′ ⊕ g″. Let a ∈ Φ and decompose the corresponding root vector as Ea = E′a + E″a. Let H ∈ h; then

0 = [H, Ea] − a(H)Ea = ([H, E′a] − a(H)E′a) + ([H, E″a] − a(H)E″a).

g′ and g″ are both ideals of g and have trivial intersection, so the two terms on the right hand side of this equation must both be 0. Thus E′a and E″a are both contained in the root-space ga. As dim(ga) = 1 by proposition 2.3.7, E′a = 0 or E″a = 0. Thus ga ⊆ g′ or ga ⊆ g″. Define

Φ′ = {a ∈ Φ | ga ⊆ g′}   and   Φ″ = {a ∈ Φ | ga ⊆ g″}.

Therefore Φ is the disjoint union of Φ′ and Φ″. Finally,

a′(Ha″) Ea′ = [Ha″, Ea′] ∈ [Ha″, g′] = [[Ea″, E−a″], g′] ⊆ [g″, g′] = 0.

Thus a′(Ha″) = 0, so Φ′ and Φ″ are mutually orthogonal.

Suppose Φ is not irreducible, so Φ = Φ′ ∪ Φ″ exhibits Φ as a nontrivial union of two mutually orthogonal (and hence disjoint) sets. Define

g′ = Σ_{a∈Φ′} (CHa + ga + g−a)   and   g″ = Σ_{a∈Φ″} (CHa + ga + g−a).

Then g′ and g″ are both subspaces of g and g = g′ ⊕ g″ as a vector space. To complete the proof it suffices to show that g′ and g″ are both ideals of g. It is apparent that both are closed under the Lie bracket, so are Lie subalgebras of g.

As Φ′ and Φ″ are mutually orthogonal, [Ha′, Ea″] = a″(Ha′)Ea″ = 0 for all a′ ∈ Φ′ and a″ ∈ Φ″. Suppose [ga′, ga″] ≠ 0. Then the root a′ + a″ is not orthogonal to a′ or to a″, contradicting the decomposition of Φ. Thus [ga′, ga″] = 0.

Combining these two results, we see that [g′, ga″] = 0 = [g″, ga′], thus [g′, g] ⊆ g′ and [g″, g] ⊆ g″ as required.

There are two remarks to make about this proof. Firstly, theorem 2.4.10 also says that g is simple if and only if its Weyl group is not equal to the direct product of the Weyl groups of two other non-trivial complex semisimple Lie algebras. The second remark is that this result proves that sln+1(C) is indeed a simple Lie algebra, as was stated in section 2.2.

2.5 Classification of connected abstract Dynkin diagrams

Theorem 2.5.1 Classification Up to isomorphism the connected abstract Dynkin diagrams are precisely those in figure 2.1.2, namely,

(i) An for n ≥ 1,

(ii) Bn for n ≥ 2,

(iii) Cn for n ≥ 3,

(iv) Dn for n ≥ 4,

(v) E6, E7, E8, F4 and G2.

We have constructed many of the results needed to prove this, however, we will not give the full proof but will instead demonstrate how the results we have already obtained allow us to restrict the possibilities substantially. Trivially, yet also importantly, a connected abstract Dynkin diagram has finite vertex set.

Given a connected Dynkin diagram whose i-th and j-th vertices are connected by a single edge (so have the same weight), we can obtain a new Dynkin diagram by collapsing these two vertices down to a single vertex, giving it the common weight and retaining all edges with exactly one end vertex from i and j. Using the fact that AijAji ≤ 3, this construction allows us to deduce that G2 is the only connected Dynkin diagram with an edge labelled 6.

Using the fact that AijAji ≤ 3, we can show that a Dynkin diagram on n vertices contains at most n − 1 pairs of adjacent vertices (vertices with an edge connecting them). Let a = Σ_{i=1}^{n} |ai|⁻¹ ai. Then

0 < |a|² = Σ_{i,j} ⟨ ai/|ai| , aj/|aj| ⟩
         = Σ_i ⟨ ai/|ai| , ai/|ai| ⟩ + 2 Σ_{i<j} ⟨ ai/|ai| , aj/|aj| ⟩
         = n − Σ_{i<j} (AijAji)^{1/2}.

Whenever (AijAji)^{1/2} ≠ 0 it is at least 1. We therefore deduce that n − Σ_{i,j adjacent} 1 > 0, so each connected Dynkin diagram is a tree.

These observations significantly limit the possibilities for a connected Dynkin diagram. One more crucial observation is that no vertex sends out more than three edges (where the Dynkin diagram is labelled by multiple edges). A complete proof of theorem 2.5.1 can be found in [Kn02] on pages 170-179.

To illustrate the theory we give two further examples which, together with section 2.2, give explicit complex simple Lie algebras corresponding to each of the four infinite families of connected Dynkin diagrams, An, Bn, Cn and Dn.

2.6 Other examples: son(C) and spn(C)

Recall that the special orthogonal and symplectic Lie algebras are given by

son(C) := {X ∈ Mn(C) | Xᵗ + X = 0}   and

spn(C) := {X ∈ M2n(C) | XᵗJ + JX = 0},   where J = ( 0 In ; −In 0 ).

We will show that, for suitable n, so2n+1(C) is a complex simple Lie algebra of type Bn, spn(C) is a complex simple Lie algebra of type Cn and so2n(C) is a complex simple Lie algebra of type Dn.

We begin with so2n+1(C), for n ≥ 2. The Cartan subalgebra h of so2n+1(C) is the subalgebra consisting of all block-diagonal matrices of the form

H = diag( ( 0 ih1 ; −ih1 0 ), ( 0 ih2 ; −ih2 0 ), . . . , ( 0 ihn ; −ihn 0 ), 0 ),

that is, n diagonal 2×2 blocks ( 0 ihk ; −ihk 0 ) followed by a final zero entry.

We define the linear functionals ej so that ej(H) = hj, for 1 ≤ j ≤ n. h0 is the subalgebra of h in which hi ∈ R for all i.

The set of roots with respect to this Cartan subalgebra is Φ = {±ei ± ej | i 6= j} ∪ {±ek | 1 ≤ k ≤ n}.

It is clear by construction that {ek | 1 ≤ k ≤ n} is an orthogonal basis for the real vector space (h0)* in which every element has the same length. Thus, every element φ ∈ (h0)* can be written as Σ_{i=1}^{n} ciei. We define an element of (h0)* to be positive if the first non-zero ci is positive. With respect to this ordering, the following is a suitable choice of simple roots.

Π:= {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en}.

Direct calculations show that the vertices {ei − ei+1 | 1 ≤ i ≤ n − 1} behave exactly as they do for the An example, so they form a path in the Dynkin diagram in which every vertex has weight 2 and each edge has label 3. The vertex en has weight 1, so in this case we choose the constant c = 1, and it is clear that only the vertex en−1 − en can form an edge with the vertex en. Moreover,

(2⟨en−1 − en, en⟩ / ⟨en−1 − en, en−1 − en⟩) · (2⟨en−1 − en, en⟩ / ⟨en, en⟩) = (−1)·(−2) = 2.

So the edge connecting en−1 −en to en is labelled 4. Thus so2n+1(C) has Dynkin diagram Bn, as claimed.

Since the Dynkin diagram of so2n+1(C) is connected, we deduce that it is a complex simple Lie algebra, by theorem 2.4.10.

Next we consider spn(C), for n ≥ 3. It can be shown that the set h of all diagonal matrices of the form

H = diag(h1, . . . , hn, −h1, . . . , −hn)

is a maximal abelian subalgebra of spn(C), so is a Cartan subalgebra. As with the previous example we define ej(H) = hj, where H is the matrix given above. We define h0 to be the set of all matrices in h where each hi ∈ R. Again, {ek | 1 ≤ k ≤ n} is an orthogonal basis for the real vector space (h0)* in which every element has the same length. We use this basis to define an ordering on (h0)* in the same way as for so2n+1(C).

The set of roots with respect to this Cartan subalgebra is Φ = {±ei ± ej | i 6= j} ∪ {±2ei | 1 ≤ i ≤ n}. The set Π = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {2en} is a simple root system for Φ. Direct calculations show that this defines the Dynkin diagram Cn. Again by theorem 2.4.10, we deduce that spn(C) is a complex simple Lie algebra.

We finish this section with so2n(C) for n ≥ 4. All elements of the Cartan subalgebra of so2n(C) are obtained by removing the last row and column from each matrix in the Cartan subalgebra of so2n+1(C). Unsurprisingly, we define ej(H) = hj and h0 to be the subalgebra of h in which hi ∈ R for all i. The set of roots with respect to this Cartan subalgebra is Φ = {±ei ± ej | i ≠ j}. We notice that {ek | 1 ≤ k ≤ n} is an orthogonal basis for the real vector space (h0)* in which every element has the same length and define a notion of positivity accordingly. This ordering leads to the following set of simple roots Π = {ei − ei+1 | 1 ≤ i ≤ n − 1} ∪ {en−1 + en}. We deduce immediately that each vertex has weight 2, so we set c = 1/2. The vertices {ei − ei+1 | 1 ≤ i ≤ n − 1} form a path of length n − 1 in the Dynkin diagram, so it remains to consider the edges with end vertex en−1 + en. Since ⟨en−1 + en, en−1 − en⟩ = 0, en−1 + en is connected only to the vertex en−2 − en−1 and that edge is labelled 3. Thus the simple Lie algebra so2n(C) has Dynkin diagram Dn.
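The claims of this section are easy to confirm numerically. The sketch below (illustrative; it reuses an ad hoc cartan_matrix helper and, as before, uses the Euclidean inner product in the ei coordinates as a stand-in for ⟨·, ·⟩) prints, for each choice of simple roots, the adjacent pairs together with the product AijAji, exhibiting the single label-4 edge of Bn and Cn and the trivalent vertex of Dn.

import numpy as np

def cartan_matrix(simple_roots):
    a = np.array(simple_roots, dtype=float)
    n = len(a)
    return np.array([[2 * a[j] @ a[i] / (a[i] @ a[i]) for j in range(n)] for i in range(n)])

e = np.eye(4)
systems = {
    "B3 (so_7)": [e[0] - e[1], e[1] - e[2], e[2]],
    "C3 (sp_3)": [e[0] - e[1], e[1] - e[2], 2 * e[2]],
    "D4 (so_8)": [e[0] - e[1], e[1] - e[2], e[2] - e[3], e[2] + e[3]],
}
for name, roots in systems.items():
    A = cartan_matrix(roots)
    edges = [(i, j, int(A[i, j] * A[j, i]))
             for i in range(len(A)) for j in range(i + 1, len(A)) if A[i, j] * A[j, i] != 0]
    print(name, edges)
# B3 and C3 give [(0, 1, 1), (1, 2, 2)]; D4 gives [(0, 1, 1), (1, 2, 1), (1, 3, 1)]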

2.7 Existence and Uniqueness theorems

This section offers an insight into how the proof of theorem 2.1.1 is completed. So far, we have proved that every complex semisimple Lie algebra admits a Dynkin diagram and section 2.5 states that each

connected component of this diagram is listed in figure 2.1.2. It remains to prove that each of the Dynkin diagrams E6, E7, E8, F4 and G2 is in fact the Dynkin diagram of some complex semisimple Lie algebra and that two complex semisimple Lie algebras with isomorphic Dynkin diagrams are isomorphic.

The first two uniqueness theorems deal with two of the more choices that were made during the process, showing that they have no effect on the Dynkin diagram.

Theorem 2.7.1 Uniqueness theorem 1
If h and h′ are two Cartan subalgebras of g then there is some α ∈ Int(g) with α(h) = h′.

Proof: In section 3.3, we will prove that the real forms of h and h′ correspond to maximal tori in the corresponding semisimple Lie group G. Maximal tori are conjugate in G, so there is some element of Ad(G) which performs this conjugation. Since d(Ad) = ad, this map acts as an element α ∈ Int(g) on g and α(h) = h′.

Theorem 2.7.2 Uniqueness theorem 2
Different enumerations of Π lead to Cartan matrices which are conjugate by a permutation matrix. Such Cartan matrices define isomorphic Dynkin diagrams.

Proof: This follows immediately from the definitions and the construction of the Dynkin diagram. 

We now build to the next result, which essentially states that the Lie algebra is in some sense uniquely determined by the way in which the various copies of sl2(C) contained in the Lie algebra interact.

Definition 2.7.3 Standard generators
Let g be a complex semisimple Lie algebra and let h be some Cartan subalgebra of g. Let Φ be the set of roots corresponding to this choice of h and let Π = {a1, . . . , al} be a simple system for Φ. Let B be a nondegenerate symmetric invariant bilinear form on g that is positive definite on the real form h0 of h, where the roots are real. Finally, let A = (Aij)_{i,j=1}^{l} be the Cartan matrix of Φ.

The set X = {hi, ei, fi | 1 ≤ i ≤ l} is called a set of standard generators of g relative to h, Φ, B, Π and A, where

hi = (2/⟨ai, ai⟩) H_{ai},

and ei and fi are non-zero root vectors for ai and −ai respectively, normalised so that B(ei, fi) = 2/⟨ai, ai⟩.

Proposition 2.7.4 Let X be a set of standard generators of g relative to h, Φ, B, Π and A. Then X generates g as a Lie algebra.

Proof: Clearly X is contained in g, so the Lie subalgebra generated by X is contained in g. Hence we only need to show that g is contained in this subalgebra. The hi's form a basis for h, as the ai's form a basis of its dual space h*. Let a = Σ_{i=1}^{l} niai be a positive root and let ea be any nonzero root vector. We show that ea lies in the subalgebra generated by X by using induction on the level of a (which is Σ_{i=1}^{l} ni). Moreover, we show that ea is a multiple of some iterated bracket of the ei's. (A similar argument shows that each negative root vector fa is a multiple of some iterated bracket of the fi's.)

If the level is 1, then a = aj for some j, and as root-spaces are one dimensional, ea is a multiple of ej. Assume the result holds for all roots of level less than n and suppose a is a root of level n > 1. Since 0 < ⟨a, a⟩ = Σ_{i=1}^{l} ni⟨a, ai⟩, we have ⟨a, aj⟩ > 0 for some j.

By proposition 2.4.2(v), b = a − aj is a root and by proposition 2.4.5, b > 0.

If eb is a nonzero root vector for b then the induction hypothesis shows that eb is a multiple of an iterated bracket of the ei’s. Using corollary 2.3.11, we deduce that ea is a multiple of [eb, ej] for some nonzero root vector ej of aj. 

35 Proposition 2.7.5 Key properties of a set of standard generators

The set X = {ei, fi, hi | 1 ≤ i ≤ l} obeys the following relations for all i, j.

(i)[ hi, hj] = 0,

(ii)[ ei, fj] = δijhi,

(iii)[ hi, ej] = Aijej,

(iv)[ hi, fj] = −Aijfj,

(v) (ad(ei))^{−Aij+1} ej = 0 whenever i ≠ j, and

(vi) (ad(fi))^{−Aij+1} fj = 0 whenever i ≠ j.
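As a quick sanity check (not part of the original argument), the sketch below verifies relations (i)-(vi) numerically for the obvious matrix generators of sl3(C); this realisation is a convenient choice for illustration rather than the B-normalised generators of definition 2.7.3.

import numpy as np
from itertools import product

def E(i, j):
    M = np.zeros((3, 3)); M[i, j] = 1; return M

e = [E(0, 1), E(1, 2)]; f = [E(1, 0), E(2, 1)]
h = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]
A = np.array([[2, -1], [-1, 2]])                  # Cartan matrix of sl_3(C)

br = lambda X, Y: X @ Y - Y @ X
ad_pow = lambda X, Y, k: Y if k == 0 else br(X, ad_pow(X, Y, k - 1))

ok = True
for i, j in product(range(2), repeat=2):
    ok &= np.allclose(br(h[i], h[j]), 0)                                # (i)
    ok &= np.allclose(br(e[i], f[j]), h[i] if i == j else 0 * h[i])     # (ii)
    ok &= np.allclose(br(h[i], e[j]), A[i, j] * e[j])                   # (iii)
    ok &= np.allclose(br(h[i], f[j]), -A[i, j] * f[j])                  # (iv)
    if i != j:
        ok &= np.allclose(ad_pow(e[i], e[j], 1 - A[i, j]), 0)           # (v)
        ok &= np.allclose(ad_pow(f[i], f[j], 1 - A[i, j]), 0)           # (vi)
print(ok)    # True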

These relations are called the Serre relations, we will meet them again in section 7.1. Proof: (i) h is abelian by definition.

(ii) The case i = j follows by lemma 2.3.6. When i 6= j, ai − aj cannot be a root by proposition 2.4.5.

(iii) [hi, ej] = aj(hi)ej = (2/⟨ai, ai⟩) aj(H_{ai}) ej = Aij ej.

(iv) Same reasoning as (iii).

(v) When i ≠ j the ai string containing aj is given by aj, aj + ai, . . . , aj + q·ai, since aj − ai is not a root. Hence p = 0 for this root string and thus −q = p − q = 2⟨aj, ai⟩/⟨ai, ai⟩ = Aij.

This means that 1 − Aij = q + 1 and a := aj + (1 − Aij)ai is not a root. As (ad(ei))^{−Aij+1} ej ∈ ga, the result follows.

(vi) Same reasoning as (v).

Definition 2.7.6 Free Lie algebras
Let X be a set. A free Lie algebra on X is a pair (F, σ) consisting of a complex Lie algebra F (which is infinite dimensional) and a function σ : X → F with the following universal property. Whenever l is a complex Lie algebra and ψ : X → l is a function, then there is a unique Lie algebra homomorphism ψ̃ : F → l such that ψ̃ ∘ σ = ψ.

Proposition 2.7.7 Existence and Uniqueness of free Lie algebras If X is a non-empty set then there is a free Lie algebra on X which is unique up to isomorphism. Moreover, the image of X in F (under σ) generates F.

Proof: This requires the Poincaré-Birkhoff-Witt theorem, which is outside the scope of this document. For a proof see [Kn02] pages 217-229.

Let X be a set of standard generators for a complex semisimple Lie algebra g with respect to h, Φ, B, Π and A. Let F be the free Lie algebra on X and let R be the ideal of F generated by the Serre relations. We obtain a unique Lie algebra homomorphism ψ̃ of F into g from the natural mapping ψ between X and g. By proposition 2.7.5, R ⊆ ker ψ̃. Hence this descends to a Lie algebra homomorphism F/R → g, which is onto by proposition 2.7.4. This is called the canonical homomorphism of F/R onto g relative to X = {hi, ei, fi | 1 ≤ i ≤ l}.

Theorem 2.7.8 Let g be a complex semisimple Lie algebra and let X = {hi, ei, fi | 1 ≤ i ≤ l} be a set of standard generators. Let F be the free Lie algebra of X and let R be the ideal generated by the Serre relations. Then the canonical homomorphism from F/R onto g is an isomorphism.

It is clear by definition that there is a homomorphism from F to g which is zero on R, and therefore it descends to a homomorphism from F/R to g. The proof requires two lemmas.

Lemma 2.7.9 Let A = (Aij) be an abstract Cartan matrix, let F be the free Lie algebra on X = {hi, ei, fi | 1 ≤ i ≤ l} and let R be the ideal generated by the Serre relations (i) to (iv).

Define g = F/R and denote the images of the generators in g by ei, fi and hi respectively.

Also, let h = span {hi | 1 ≤ i ≤ l} and let e and f be the subalgebras of g generated by {ei | 1 ≤ i ≤ l} and {fi | 1 ≤ i ≤ l} respectively. Then g = h ⊕ e ⊕ f.

Proof: Omitted, see [Kn02] pages 189-190.

Lemma 2.7.10 Let A = (Aij) be an abstract Cartan matrix, let F be the free Lie algebra on X = {hi, ei, fi | 1 ≤ i ≤ l} and let R be the ideal generated by the six Serre relations. Define g′ = F/R and suppose span{hi | 1 ≤ i ≤ l} is mapped injectively from F into g′. Write h′i for the image of hi in g′. Then g′ is a finite dimensional complex semisimple Lie algebra and the subspace h′ = span{h′i | 1 ≤ i ≤ l} is a Cartan subalgebra of g′. Finally, the linear functionals aj ∈ (h′)* given by aj(h′i) = Aij form a simple system within the root system associated to the Cartan subalgebra h′, and the Cartan matrix associated to this system is A.

Proof: Omitted, see [Kn02] pages 191-196.

Proof of theorem 2.7.8: By the definition of the free Lie algebra, σ maps X to a linearly independent subset of F. (If not, then setting ψ = 0 in the definition will not lead to a unique ψ̃.) The map φ : F → g can be written as the composition of maps from F to g′ = F/R and from g′ to g, so both of these maps must be injective on span{hi} and span{h′i} respectively. Because of this we may apply lemma 2.7.10, so we know that g′ is a finite dimensional complex semisimple Lie algebra with a Cartan subalgebra given by span{h′i | 1 ≤ i ≤ l}. The map F → g is onto by proposition 2.7.4 and hence the map g′ → g is also onto. Thus g is isomorphic to a quotient of g′. If a is a nontrivial simple ideal in g′ then it follows from the definition that h′ ∩ a is a Cartan subalgebra of a. h′ is mapped injectively under the quotient map from g′ to g, so h′ ∩ a does not map to 0 and therefore a does not map to 0. Thus the homomorphism g′ → g has trivial kernel and the theorem holds.

We now proceed to the isomorphism and existence theorems which complete the proof of theorem 2.1.1.

Theorem 2.7.11 Isomorphism theorem
Let g and g′ be complex semisimple Lie algebras with respective Cartan subalgebras h and h′ and root systems Φ and Φ′. Suppose there is a vector space isomorphism ψ : h → h′ such that its transpose ψᵗ : (h′)* → h* has the property ψᵗ(Φ′) = Φ. For each a ∈ Φ set a′ = (ψᵗ)⁻¹(a) ∈ Φ′ and fix a simple system Π for Φ. For each a ∈ Π select nonzero root vectors Ea ∈ ga and E_{a′} ∈ g_{a′}. Then there is a unique Lie algebra isomorphism ψ̃ : g → g′ such that ψ̃|h = ψ and ψ̃(Ea) = E_{a′} for each a ∈ Π.

Proof of uniqueness: If ψ1 and ψ2 are two such isomorphisms then ψ0 = ψ2⁻¹ ∘ ψ1 is an automorphism of g which preserves h and the root vectors for each simple root. Let {hi, ei, fi} be the triple associated to the simple root ai; it is clear that ψ0 preserves each ei and hi. Moreover, ψ0(fi) must be a root vector for −ai and hence must be a multiple of fi, say cifi.

Applying ψ0 to the Serre relation [ei, fi] = hi tells us that ci = 1. Thus ψ0 preserves a generating set for g so ψ0 is the identity automorphism and ψ1 = ψ2. 

Proof of existence: The linear map (ψᵗ)⁻¹ is given by (ψᵗ)⁻¹(a) = a′ = a ∘ ψ⁻¹. By assumption this map carries Φ to Φ′ and hence maps root strings to root strings. By proposition 2.3.10(i),

2⟨b, a⟩/⟨a, a⟩ = 2⟨b′, a′⟩/⟨a′, a′⟩ for all a, b ∈ Φ.   (†)

Write Π = {a1, . . . , al} and let Π′ = (ψᵗ)⁻¹(Π) = {a′1, . . . , a′l}. Define hi and h′i to be the respective members of h and h′ with

aj(hi) = 2⟨aj, ai⟩/⟨ai, ai⟩   and   a′j(h′i) = 2⟨a′j, a′i⟩/⟨a′i, a′i⟩.

By (†), aj(hi) = a′j(h′i) and hence (ψᵗ)⁻¹(aj)(h′i) = aj(hi) = aj(ψ⁻¹(h′i)). Therefore ψ(hi) = h′i for all i.

Set ei = E_{ai} and let e′i = E_{a′i}. Define fi ∈ g to be a root vector for −ai with [ei, fi] = hi and define f′i ∈ g′ to be a root vector for −a′i with [e′i, f′i] = h′i. Then X = {ei, fi, hi | 1 ≤ i ≤ l} and X′ = {e′i, f′i, h′i | 1 ≤ i ≤ l} are standard sets of generators for g and g′ respectively. Let F and F′ be the free Lie algebras of X and X′ and let R and R′ be the ideals in F and F′ generated by the Serre relations.

Define ϕ : X → F′ by ϕ(hi) = h′i, ϕ(ei) = e′i and ϕ(fi) = f′i. The universal mapping property of F shows that ϕ extends to a unique Lie algebra homomorphism ϕ : F → F′. By (†), ϕ(R) ⊆ R′. Therefore ϕ descends to a Lie algebra homomorphism from F/R to F′/R′, which we denote by ϕ̃.

The canonical maps ψ1 : F/R → g and ψ2 : F′/R′ → g′ are isomorphisms by theorem 2.7.8. They also satisfy the following conditions.

(i) ψ1⁻¹(hi) = hi mod R,
(ii) ψ1⁻¹(E_{ai}) = ei mod R,
(iii) ψ2(h′i mod R′) = h′i and
(iv) ψ2(e′i mod R′) = E_{a′i}.

Therefore ψ̃ = ψ2 ∘ ϕ̃ ∘ ψ1⁻¹ is a Lie algebra homomorphism from g to g′ with ψ̃(hi) = h′i and ψ̃(E_{ai}) = E_{a′i} for all i. As ψ̃(hi) = h′i it follows that ψ̃|h = ψ. By assumption ψ̃ : h → h′ is an isomorphism. Using the same argument which completed the proof of theorem 2.7.8 we can deduce that ψ̃ is injective. Finally, as dim(g) = dim(h) + |Φ| = dim(h′) + |Φ′| = dim(g′), it follows that ψ̃ is an isomorphism.

Corollary 2.7.12 If g and g′ are two complex semisimple Lie algebras with isomorphic root systems then they are isomorphic.

Proof: Apply theorem 2.7.11 to the map which sends the root vector Ea for each simple root a to any nonzero root vector for the corresponding simple root of g′.

Theorem 2.7.13 Existence theorem

If A = (Aij) is an abstract Cartan matrix then there is a complex semisimple Lie algebra g whose root system has A as its Cartan matrix.

Proof: See [Bo05], chapter VII, theorem 4.3.1 for a proof of this result. 

Chapter 3

Semisimple Lie groups

Now that we have a classification of all complex semisimple Lie algebras we focus on their associated Lie groups. The first key result is to relate compact Lie groups to semisimple Lie groups, by constructing a compact real form for every complex semisimple Lie algebra. By constructing maximal tori, we will obtain an alternative definition of the Weyl group to the one given in the previous chapter. Together these results will allow us to give two important decompositions of semisimple Lie groups, the Cartan and Iwasawa decompositions. The second of these will be critical in the chapter on buildings. The main reference for this chapter is [Kn02] chapters 4 and 6.

3.1 Example: SLn(C), for n ≥ 2

Recall that SLn(C) = {A ∈ Mn(C) | det(A) = 1}. The Lie algebra of SLn(C) is sln(C) by example 1.2.10.

The set h of all diagonal matrices in g := sln(C) is a Cartan subalgebra of g, so applying the exponential function to such matrices, we see that the set T of all diagonal matrices in G := SLn(C) is a maximal abelian subgroup of G, as a strictly larger abelian subgroup would generate a strictly larger abelian subalgebra, which is impossible as h is maximal by inclusion. The normaliser N of T in G is precisely the set of monomial matrices in G (matrices of determinant 1 which have exactly one non-zero entry in each row and each column). The quotient group N/T is generated by the cosets

diag( ( 0 1 ; −1 0 ), 1, . . . , 1 ) T,   diag( 1, ( 0 1 ; −1 0 ), 1, . . . , 1 ) T,   . . . ,   diag( 1, . . . , 1, ( 0 1 ; −1 0 ) ) T.

Associating these cosets with the involutions (1 2), (2 3), . . . , (n − 1 n) on the set {1, . . . , n}, we see that N/T ≅ Sn, the symmetric group on n letters. We note that this coincides with the Weyl group of sln(C), the Lie algebra of SLn(C).
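To make the coset representatives concrete, the sketch below (illustrative; w is an ad hoc helper) constructs them for SL3(C), confirms that each has determinant 1, and checks that conjugation by the i-th representative swaps the i-th and (i+1)-th diagonal entries of an element of T, which is exactly the action of the transposition (i i+1).

import numpy as np

def w(i, n):
    # block-diagonal representative of the transposition (i, i+1) in SL_n(C)
    M = np.eye(n)
    M[i:i + 2, i:i + 2] = [[0, 1], [-1, 0]]
    return M

n = 3
t = np.diag([2.0, 3.0, 1 / 6.0])                  # an element of T (determinant 1)
for i in range(n - 1):
    g = w(i, n)
    assert abs(np.linalg.det(g) - 1) < 1e-12      # g lies in SL_n(C)
    print(i, np.round(np.diag(g @ t @ np.linalg.inv(g)), 3))   # diagonal entries i, i+1 swapped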

We will return to this example later in the section.

3.2 Haar measures

This section defines Haar measures and uses them to construct an inner product on the Lie algebra of a compact Lie group with desirable properties. This inner product will be used throughout the rest of the chapter.

A Lie group G is compact if it is compact as a topological group. Moreover, a real Lie algebra g0 is said to be compact if the Lie group Int(g0) is compact.

Definition 3.2.1 Haar measures
Let G be a compact topological group with topology T and let σ(T) be the σ-algebra generated by T. A measure µ : σ(T) → [0, 1] is called a normalised Haar measure if it satisfies the following properties.
(i) µ(Ex) = µ(xE) = µ(E) for all x ∈ G and all E ∈ σ(T) (µ is G-invariant),
(ii) µ(U) > 0 for all nonempty U ∈ T,
(iii) µ(G) = 1,
(iv) if µ* is the outer measure of µ then for all X ⊆ G there is some B ∈ σ(T) such that B ⊆ X and µ(B) = µ*(X).

Normalised Haar measures exist (see [Ne99], pages 37-40), so we may define a space of Haar integrable functions,

L²(G) := { f : G → C | ||f||₂² := ∫_G |f(x)|² dµ(x) < ∞ }.

Let ∼ be the equivalence relation on L²(G) given by f ∼ g if and only if µ{x ∈ G | f(x) ≠ g(x)} = 0; we then identify L²(G) with the quotient L²(G)/∼. However, we still speak of elements of L²(G) as functions rather than equivalence classes of functions. We require this equivalence relation on L²(G) as it will be necessary to define inner products on this space. Without this equivalence there would not be a unique 0 function, so no inner product could exist.

We will require an inner product upon which the two maps Ad(G) and ad(G) act as orthogonal and anti-symmetric transformations respectively. The next two proposition guarantee the existence of such an inner product.

Definition 3.2.2 Unitary representations Let G be a topological group and let π : G → GL(V ) be a representation of G on a finite dimensional inner product space (V, h·, ·i). π is said to be unitary if hπ(g)x, π(g)yi = hx, yi for all g ∈ G.

Proposition 3.2.3 Let φ be a finite dimensional representation of a compact topological group G on V. Then there is a Hermitian inner product ⟨·, ·⟩ on V such that φ is unitary.

Proof: Let (·, ·) be any Hermitian inner product on V (clearly one exists, namely the complex dot product). Define

⟨u, v⟩ = ∫_G (φ(x)u, φ(x)v) dµ(x).

Here, integration is over G with respect to a normalised Haar measure µ. ⟨·, ·⟩ is a Hermitian inner product on V, inheriting these properties from (·, ·), and φ is unitary with respect to it since

⟨φ(y)u, φ(y)v⟩ = ∫_G (φ(x)φ(y)u, φ(x)φ(y)v) dµ(x) = ∫_G (φ(xy)u, φ(xy)v) dµ(x) = ⟨u, v⟩.

Proposition 3.2.4 Existence of a suitable inner product

Let G be a compact Lie group and let g0 be its Lie algebra. Then the real vector space g0 admits an inner product h·, ·i that is invariant under Ad(G), so hAd(g)u, Ad(g)vi = hu, vi, relative to which members of Ad(G) act by orthogonal transformations, while members of ad(G) act by anti-symmetric transformations.

Proof: Ad(G) is a representation of G on g0, so by proposition 3.2.3 an inner product exists in which each member of Ad(G) acts as a unitary, and thus orthogonal, transformation. As real inner products are bilinear, this inner product is invariant under Ad(G). If we differentiate the equation ⟨Ad(exp tX)u, Ad(exp tX)v⟩ = ⟨u, v⟩ at t = 0 we see that ⟨(ad(X))u, v⟩ = −⟨u, (ad(X))v⟩ for all X ∈ g0, so ad(X) acts anti-symmetrically.
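The averaging argument in propositions 3.2.3 and 3.2.4 has a finite-group analogue in which the Haar integral becomes an average over the group, and that version is easy to run. The sketch below (illustrative only) averages an arbitrary Hermitian inner product over S3, acting on C3 by permutation matrices, and checks that the resulting form is invariant.

import numpy as np
from itertools import permutations

perm_mats = [np.eye(3)[list(p)] for p in permutations(range(3))]    # S_3 acting on C^3

rng = np.random.default_rng(0)
M = rng.random((3, 3)) + 1j * rng.random((3, 3))
H0 = M.conj().T @ M            # an arbitrary Hermitian inner product (u, v) = u* H0 v

# average over the group: the finite analogue of integrating against Haar measure
H = sum(g.conj().T @ H0 @ g for g in perm_mats) / len(perm_mats)

print(all(np.allclose(g.conj().T @ H @ g, H) for g in perm_mats))   # True: <gu, gv> = <u, v>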

We note one important corollary to this result.

Corollary 3.2.5 If G is a compact Lie group with Lie algebra g0, then the Killing form of g0 is negative semidefinite.

Proof: Define the inner product ⟨·, ·⟩ as in proposition 3.2.4; via this proposition we know that ad(X) is anti-symmetric for all X ∈ g0. Therefore the eigenvalues of ad(X) are purely imaginary, and thus the eigenvalues of (ad(X))² are nonpositive. If B is the Killing form, it follows that B(X, X) = Tr((ad(X))²) ≤ 0, as required.
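For a concrete instance of corollary 3.2.5, the sketch below (illustrative; coords and the basis choice are ad hoc) computes the Killing form of the compact Lie algebra su(2) in a standard basis and confirms that it is negative definite.

import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
X = [-0.5j * s for s in sigma]         # basis of su(2) with [X_1, X_2] = X_3 cyclically

br = lambda A, B: A @ B - B @ A
flat = np.column_stack([x.flatten() for x in X])

def coords(M):                         # expand M in the basis X
    sol, *_ = np.linalg.lstsq(flat, M.flatten(), rcond=None)
    return sol.real

ad = [np.column_stack([coords(br(x, y)) for y in X]) for x in X]
K = np.array([[np.trace(a @ b).real for b in ad] for a in ad])
print(K)                               # -2 times the identity
print(np.linalg.eigvalsh(K))           # all eigenvalues negative: the Killing form is negative definite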

The following result gives something resembling an inverse to corollary 3.2.5, which will be useful when proving that an abstract Lie algebra is compact.

Proposition 3.2.6 If the Killing form of a real Lie algebra g0 is negative definite, then g0 is a compact Lie algebra.

Proof: If B is negative definite, then it is nondegenerate, so by Cartan’s criterion for semisimplicity

1.4.2, g0 is semisimple. The uniqueness theorems from chapter 2 prove that Int(g0) = (Aut_R(g0))_0. Consequently, Int(g0) is a closed subgroup of GL(g0). On the other hand, the negation of the Killing form is an inner product on g0 in which every member of ad(g0) acts as an anti-symmetric transformation. Therefore the corresponding connected Lie group Int(g0) acts by orthogonal transformations, so Int(g0) is a closed subgroup of the orthogonal group and thus it is compact.

3.3 Centralisers of Tori

In this section, let G be a compact connected Lie group, let g0 be the Lie algebra of G and denote the complexification of g0 by g. We fix an inner product on g0 with the properties guaranteed by proposition 3.2.4 and write B for its negation, so B is negative semidefinite, by corollary 3.2.5.

Definition 3.3.1 Tori and maximal tori Let G be a compact connected Lie group. A subgroup T is called a torus if it is a compact, connected, abelian Lie subgroup of G. A torus T is said to be maximal if it is maximal by inclusion.

Tori are compact, connected, abelian groups which are also differentiable manifolds. It follows from this that they are in fact products of the circle group S1 = {z ∈ C | |z| = 1}.

Proposition 3.3.2 The maximal tori in G are exactly the analytic subgroups corresponding to maximal abelian subalgebras of g0.

Proof: As G is connected we will use the subgroup-subalgebra correspondence (1.5.4) in this proof. If T is a maximal torus in G and t0 is its Lie algebra, we need to show that t0 is maximal abelian. Suppose not; then let h0 be an abelian Lie algebra which strictly contains t0. The corresponding connected Lie subgroup H will be abelian and will contain T as a strict subset, which is a contradiction.

Conversely, suppose t0 is a maximal abelian subalgebra in g0. The corresponding connected Lie subgroup T must be abelian. If T were not closed then its closure would have a strictly larger abelian Lie algebra, contradicting the maximality of t0. Hence T is a maximal torus.

Consider a compact connected Lie group G.

Let T be a maximal torus in G and let t0 be its Lie algebra. We know that g0 = Z_{g0} ⊕ [g0, g0], where Z_{g0} is the centre of g0 and [g0, g0] is semisimple. Since t0 is maximally abelian in g0, t0 = Z_{g0} ⊕ t′0, where t′0 is maximally abelian in [g0, g0].

If we drop the subscript 0's to indicate complexifications, we know that g = Zg ⊕ [g, g], where [g, g] is semisimple by proposition 1.3.12(iv). Moreover, t = Zg ⊕ t′, where t′ is maximal abelian in [g, g]. Therefore t′ is a Cartan subalgebra of the complex semisimple Lie algebra [g, g], so using the root-space decomposition we may write

g = Zg ⊕ t′ ⊕ ⊕_{a∈Φ([g,g],t′)} [g, g]a.

Using this, it is clear that t is a Cartan subalgebra of g. If we extend the members a of Φ([g, g], t′) to t by defining them to be 0 on Zg, then we may write the root-space decomposition of g relative to ad(t) as

g = t ⊕ ⊕_{a∈Φ(g,t)} ga,  where ga = {X ∈ g | [H, X] = a(H)X for all H ∈ t}.

We say a is a root if a ≠ 0 and ga ≠ 0. The nonzero members of ga are called root vectors for the root a and we refer to this equation as the root-space decomposition of g relative to t. In particular the set Φ(g, t) has all the properties expected of an abstract root system, except that it does not necessarily span t*. The presence of the groups G and T gives us more information about the root-space decomposition. Ad(T) acts by orthogonal transformations on g0 relative to the inner product chosen. If we extend this inner product on g0 to a Hermitian inner product on g, then Ad(T) acts as a commuting family of unitary transformations. Using basic representation theory it is clear that Ad(T) has a simultaneous eigenspace decomposition given by the root-space decomposition. The action of Ad(T) on the 1-dimensional space ga is a 1-dimensional representation of T, so is necessarily of the form Ad(t)X = ξa(t)X for t ∈ T, where ξa : T → S1 is a continuous homomorphism from T into the group of complex numbers of modulus 1.

ξa is called a multiplicative character. The differential of ξa is a|t0 . Thus a|t0 is imaginary valued and the roots are real valued on it0. Lemma 3.3.3 Let T be a maximal torus in G, let t be the complexification of its corresponding Lie algebra and let Φ be the set of roots. If there is a H ∈ t such that a(H) 6= 0 for all a ∈ Φ then the centraliser of H in g is t.

Proof: Let X ∈ Cg(H), and let X = H′ + Σ_{a∈Φ} Xa (with H′ ∈ t) be the decomposition of X according to the root-space decomposition. Then

0 = [H, X] = 0 + Σ_{a∈Φ} [H, Xa] = Σ_{a∈Φ} a(H)Xa.

Thus a(H)Xa = 0 for all a. As a(H) ≠ 0, Xa = 0, so X = H′ ∈ t.

Theorem 3.3.4 If G is a compact connected Lie group then any two maximal abelian subalgebras of g0 are conjugate under Ad(G).

0 0 Proof: Let t0 and mft0 be two maximal abelian subalgebras with T and T as the corresponding maximal tori. There are only finitely many roots relative to t0 and therefore the union of their kernels 0 0 0 0 (which are all hyperplanes in t ) cannot contain t . Thus there is some X ∈ t0 such that Cg(X) = t 0 by lemma 3.3.3. Similarly, we can find Y ∈ t0 such that Cg(X) = t . Choose a g0 ∈ G such that B(Ad(g)X,Y ) is minimised when g = g0. For any Z ∈ g0,

r 7→ B(Ad(exp rZ)Ad(g0)X,Y ) is a smooth function of r which is minimised when r = 0. If we differentiate at the point r = 0 we see that 0 = B((ad(Z))Ad(g0)X,Y ) = B([Z, Ad(g0)X],Y ) = B(Z, [Ad(g0)X,Y ])

Z is arbitrary, so [Ad(g0)X,Y ] = 0. Thus Ad(g0)X ∈ Cg0 (T ) = t0. But t0 is abelian, so

0 t0 ⊆ Cg0 (Ad(g0)X) = Ad(g0)Cg0 (X) = Ad(g0)t0

0 Both sides of this equation are abelian, so equality holds as t0 is maximal. Thus t0 = Ad(g0)t0.  Corollary 3.3.5 If G is a compact connected Lie group then any two maximal tori are conjugate.

0 0 Proof: Let T and T be two maximal tori with respective maximal abelian subalgebras t0 and t0. Then 0 0 −1 0 g0 t0 = Ad(g0)t0 for some g0 ∈ G by theorem 3.3.4. Hence T = g0T g0 = (T ) .  Theorem 3.3.6 Let G be a compact connected Lie group and let T be a maximal torus in G. Then every element in G is the conjugate of some element in T . In other words, for each x ∈ G there are elements y ∈ G and t ∈ T such that x = yty−1 = ty.

42 The proof is extensive and can be found in [Kn02] on pages 255-258.  Corollary 3.3.7 Let G be a compact connected Lie group with Lie algebra g. (i) Every element of G lies in some maximal torus.

(ii) The centre of G, ZG lies in every maximal torus. (iii) The exponential map from g to G is onto.

Proof: (i) Let x ∈ G and let T be a maximal torus of G, then there are elements y ∈ G and t ∈ T such that x = ty by theorem 3.3.6, moreover x ∈ T y which is another maximal torus of G.

(ii) Let z ∈ ZG and let T be a maximal torus of G. Then there are elements y ∈ G and t ∈ T such that z = yty−1. Thus z = zy−1y = y−1zy = t so z ∈ T . (iii) The exponential map is onto for any maximal torus, so by part (i), the exponential map is onto for G. 

Lemma 3.3.8 Let A be a compact abelian Lie group such that A/A0 is cyclic, where A0 is the identity component of A. Then A contains an element a whose powers are dense in A.

Proof: A0 is a torus (a product of circle groups) so we can choose some

2πxi dim(A0) a0 = (exp )i=1 ∈ A0 such that each xk is transcendental over Q. Then the powers of a0 are dense in A0. Let N = |A/A0| and N let b ∈ A be a representative of a generating coset in A/A0. Since b ∈ A0, we can find some c ∈ A0 N N with b c = a. Then a = bc is dense in A0 and contains one element in each coset, thus it is dense in A. 

Theorem 3.3.9 Let G be a compact connected Lie group and let S be some torus in G. If g ∈ CG(S) then there is some torus S0 containing both g and S. ∞ [ Proof: Let A be the closure of gnS. n=−∞

Then the identity component A0 of A is a torus. Since A0 is open in A,

∞ ∞ [ n [ n g A0 is an open subgroup of A containing g S. n=−∞ n=−∞

S∞ n Hence n=−∞ g A0 = A. By compactness this open cover of A admits a finite subcover, so there is some non-zero power of g in A0. Let N denote the smallest such positive power, then A/A0 is cyclic of order N so we may apply lemma 3.3.8 to find some a ∈ A whose powers are dense in A. Using corollary 3.3.7(iii), a = exp(X) for 0 some X ∈ g0. Then the closure of {exp(rX) | r ∈ R} is a torus S containing A, and thus also containing S and g.  Corollary 3.3.10 If G is a compact connected Lie group, the centraliser of a torus is connected. If T is a maximal torus in G then T = CG(T ).

Proof: By theorem 3.3.9 CG(T ) is the union of the tori which contain T and therefore is a torus, if T is maximal, then T = CG(T ). 

We now recap the ideas presented so far in this section.

Let G be a compact connected Lie group, g0 its Lie algebra, T a maximal torus of G and t0 its Lie algebra. Let B be the negation of an invariant inner product on G. We indicate the complexification of

43 a Lie algebra by dropping the subscript 0. Let Φ(g, t) be the set of roots of g with respect to the Cartan

subalgebra t. The centre Zg0 of g0 is contained in t0, and all roots vanish on Zg.

We know that the roots are purely imaginary valued on t0. We define tR = it0 which is a real form of t in ∗ which all the roots are real. We may then regard all roots as lying in (tR) . Then Φ(g, t) is an abstract root system in the subspace of tR corresponding to the semisimple Lie algebra [g, g].

The negative definite form B on t0 then corresponds to a positive definite form on tR. Thus given ∗ ∗ λ ∈ (tR) , let Hλ be the member of tR such that λ(H) = B(H,Hλ) for all H ∈ (tR) . ∗ ∗ The resulting linear map λ 7→ Hλ is a vector space isomorphism from (tR) to tR. Let (iZg0 ) be the ∗ ∗ subspace of (tR) corresponding to iZg0 . The inner product on tR induces an inner product on (tR) which we will denote by h·, ·i. With respect to this inner product Φ(g, t) spans the orthogonal complement of ∗ (iZg0 ) and Φ(g, t) is an abstract reduced root system in this orthogonal complement. Also,

hλ, µi = λ(Hµ) = µ(Hλ) = B(Hλ,Hµ)

For a ∈ Φ(g, t) we define the root reflection as in the semisimple case, namely

2hλ, ai s (λ) = λ − a. a ha, ai

∗ sa is the identity transformation on (iZg0 ) and is the usual root reflection in its orthogonal complement. We define the Weyl group W (Φ(g, t)) as being the group generated by the involutions sa for a ∈ Φ. This we think of as the algebraically defined Weyl group.

Definition 3.3.11 Analytic Weyl group Let G be a compact connected Lie group and let T be a maximal torus of G. The analytic Weyl group is given by W (G, T ) = NG(T )/CG(T ) = NG(T )/T. (The last equality is due to corollary 3.3.10).

The group W (G, T ) acts by automorphisms of T , hence by invertible linear transformations on t0 and ∗ ∗ the associated spaces tR = it0, t, (tR) and t . Moreover, only 1W acts as the identity. The following theorem explains why both W (g, h) and W (G, T ) are referred to as the Weyl group.

Theorem 3.3.12 Let G be a compact connected Lie group with maximal torus T . The analytic Weyl ∗ group W (G, T ) and the algebraic Weyl group W (Φ(g, t) coincide when considered as acting on (tR) . Proof: Omitted, see [Kn02] pages 262-264.

Theorem 3.3.12 agrees with the observation made in section 3.1 that the analytic Weyl group of SLn(C) is Sn and this finally proves that the (algebraic) Weyl group of sln(C) is also Sn.

3.4 Compact real forms

We aim to show that every complex semisimple Lie algebra has a compact real form. Let g be a complex semisimple Lie algebra and let h be a Cartan subalgebra of g. Let Φ = Φ(g, h) be the corresponding root system and let B be the Killing form of g.

For each pair {a, −a} ⊆ Φ, we fix root vectors Ea ∈ ga,E−a ∈ g−a such that B(Ea,E−a) = 1, by lemma 2.3.6(i) [Ea,E−a] = Ha. Let a, b ∈ Φ, if a + b ∈ Φ then we define Ca,b such that [Ea,Eb] = Ca,bEa+b. If a + b 6∈ Φ we set Ca,b = 0. We collect a few preliminary results in the following lemma.

Lemma 3.4.1

(i) Ca,b = −Cb,a.

(ii) If a, b, c ∈ Φ and a + b + c = 0 then Ca,b = Cb,c = Cc,a.

44 (iii) Let a, b, a + b ∈ Φz, and let {b + na | − p ≤ n ≤ q} be the a string containing b. Then 1 C C = − q(1 + p)ha, ai. a,b −a,−b 2

Proof: (i) The Lie bracket is anti-symmetric.

(ii) By the Jacobi identity, [[Ea,Eb],Ec] + [[Eb,Ec],Ea] + [[Ec,Ea],Eb] = 0.

Thus Ca,b[E−c,Ec] + Cb,c[E−a,Ea] + Cc,a[E−b,Eb] = Ca,bHc + Cb,cHa + Cc,aHb = 0.

Substituting Hc = −Ha − Hb and using the linear independence of the set {Ha,Hb} we get the desired result.

1 (iii) By corollary 2.3.12 [E−a, [Ea,Eb]] = 2 q(1 + p)ha, aiB(Ea,E−a)Eb 1 [E−a, [Ea,Eb]] = C−a,a+bCa,bEb and B(Ea,E−a) = 1, so C−a,a+bCa,b = 2 q(1 + p)ha, ai

(−a) + (a + b) + (−b) = 0 so by parts (i) and (ii) C−a,a+b = C−b,−a = −C−a,−b.  Theorem 3.4.2 Let g be a complex semisimple Lie algebra and let h be a Cartan subalgebra of g and let Φ be the set of roots. Given a ∈ Φ, there is a root vector Xa ∈ ga such that for all a, b ∈ Φ,

 N whenever a + b ∈ Φ [X ,X ] = H and [X ,X ] = a,b a −a a a b 0 otherwise

Moreover, Na,b = −N−a,−b. Moreover, for any such choice of system {Xa} of root vectors, 1 N 2 = q(1 + p)ha, ai, where {b + na | − p ≤ n ≤ q} is the a string containing b. a,b 2 Proof: The transpose of the linear map ψ : h → h given by ψ(h) = −h carries Φ to Φ and thus extends to an automorphism ψe of g by the isomorphism theorem 2.7.11. ψe(Ea) ∈ g−a so there exists a constant c−a such that ψe(Ea) = c−aE−a (as the space ga is 1 dimensional). By proposition 1.3.16, the Killing form is invariant under automorphisms so B(ψX,e ψYe ) = B(X,Y ) for all X,Y ∈ g.

Setting X = Ea and Y = E−a we see that

c−aca = c−acaB(E−a,Ea) = B(φEe a, φEe −a) = B(Ea,E−a) = 1.

2 Hence we may choose da for each a ∈ Φ such that dad−a = 1 and (da) = −ca, for example by writing iθ −1 −iθ ca = re , so that c−a = r e ; then taking

1 i θ − 1 −i θ da = r 2 ie 2 and da = r 2 ie 2 .

These satisfy both the conditions we required. Define Xa = daEa and notice that

[Xa,X−a] = dad−a[Ea,E−a] = Ha and

−1 ψe(Xa) = daψe(Ea) = dac−aE−a = d−ac−aE−a = −d−aE−a = −X−a

Define Na,b such that [Xa,Xb] = Na,bXa+b whenever a + b ∈ Φ and Na,b = 0 otherwise.

Then −Na,bX−a−b = ψe(Na,bXa+b) = ψe[Xa,Xb] = [ψXe a, ψXe b] = [−X−a, −X−b] = N−a,−bX−a−b, so it follows that Na,b = −N−a,−b. 2 The formula for Na,b follows by repeating the argument in lemma 3.4.1(iii), replacing all the root vectors Ea by Xa. 

45 Definition 3.4.3 Split real forms Let g be a complex semisimple Lie algebra and let h be some Cartan subalgebra of g. Define

h0 = {H ∈ h | a(H) ∈ R for all a ∈ Φ}

A real form of g which contains h0 for some Cartan subalgebra h is called a split real form of g.

Proposition 3.4.4 Any complex semisimple Lie algebra contains a split real form.

L 2 1 Proof: Write g0 = h0 ⊕ a∈Φ RXa. The formula Na,b = 2 q(1 − p)ha, ai > 0 shows that Na,b ∈ R. It is clear that gR = g0 ⊕ ig0, when considered as real vector spaces so g0 is a split real form of g.  Definition 3.4.5 Compact real forms A compact real form of a complex semisimple Lie algebra g is a real form of g which is also a compact Lie algebra.

Theorem 3.4.6 Every complex semisimple Lie algebra admits a compact real form u0.

Proof: The proof shows that the following choice of u0 is suitable. X u0 = (R(iHa) + R(Xa − X−a) + Ri(Xa − X−a)) . a∈Φ

This follows by showing that gR = u0 + iu0 as real vector spaces, u0 is compact and closed under the

bracket operation. To show compactness we deduce that the restriction of the Killing form B|u0×u0 is negative definite. The full details can be found in [Kn02], pages 353-354. 

Using the formula given in proposition 3.4.4 it is clear that sln(R) is a split real form of sln(C). It is less obvious that the formula given by theorem 3.4.6 defines sun(C) as a compact real form of sln(C).

3.5 Cartan decomposition

Definition 3.5.1 Cartan involution

Let g0 be a real semisimple Lie algebra and let θ : g0 → g0 be an involution (an automorphism with 2 θ = 1). Let B be the Killing form and define the symmetric bilinear form Bθ by Bθ(X,Y ) = −B(X, θY ). If Bθ is positive definite then θ is called a Cartan involution.

Proposition 3.5.2 Let g be a complex semisimple Lie algebra and let u0 be a compact real form of g (guaranteed by theorem 3.4.6). Let τ be the conjugation of g with respect to u0. If g is regarded as the real Lie algebra gR then τ is a Cartan involution of gR.

R Proof: τ is certainly an involution. Moreover, the Killing forms Bg of g and BgR of g are related by

the formula BgR (Z1,Z2) = 2 Re(Bg(Z1,Z2)),

Write Z = X + iY where X,Y ∈ u0, then

Bg(Z, τZ) = Bg(X + iY, X − iY ) = Bg(X,X) + Bg(Y,Y ) = Bu0 (X,X) + Bu0 (Y,Y ) < 0 unless Z = 0.

R Hence, (BgR )τ (Z1,Z2) = −BgR (Z1, τZ2) = −2 Re(Bg(Z1, τZ2)) is positive definite on g and thus τ is a Cartan involution. 

Lemma 3.5.3 Let g0 be a real finite-dimensional Lie algebra and let ρ be an automorphism of g0 that

is diagonalisable with positive eigenvalues d1, . . . , dm and corresponding eigenspaces (g0)dj . r r For r ∈ R define ρ to be the linear transformation that is given by multiplication by (dj) on (g0)dj . r r Then {ρ } is a one-parameter subgroup of Aut(g0). If g0 is semisimple then {ρ } lies in Int(g0).

46 Proof: Let X ∈ (g0)di and let Y ∈ (g0)dj , then ρ[X,Y ] = [ρX, ρY ] = didj[X,Y ] ∈ (g0)didj , since ρ is an automorphism. r r r r r r r r Therefore ρ [X,Y ] = (didj) [X,Y ] = [di X, dj Y ] = [ρ X, ρ Y ], so ρ is an automorphism. Thus {ρ } is a one-parameter subgroup of Aut(g0) and hence lies in the identity component Aut(g0)0. If g0 is semisimple then Aut(g0)0 = Int(g0). 

Theorem 3.5.4 Let g0 be a real semisimple Lie algebra, let θ be a Cartan involution and let σ be any −1 involution. Then there is a ψ ∈ Int(g0) such that ψθψ commutes with σ.

1 2 r The proof shows that setting ψ = ρ 4 suffices where ρ = (σθ) and lemma 3.5.3 is used to show that {ρ } is a one parameter subgroup of Int(g0). The full proof can be found in [Kn02] on page 357. 

Corollary 3.5.5 If g0 is a real semisimple Lie algebra then g0 has a Cartan involution.

Proof: Let g be the complexification of g0 and use theorem 3.4.6 to choose a compact real form u0 of g. Let σ and τ be the conjugations of g with respect to g0 and u0 respectively. Notice σ and τ are both involutions on gR. By proposition 3.5.2, τ is a Cartan involution and by theorem 3.5.4 there is some R −1 −1 ψ ∈ Int(g ) such that ψτψ commutes with σ. Set θ = (ψτψ )|g0 , then θ is the required Cartan involution. −1 −1 −1 To see this notice that g0 = {X ∈ g | σX = X} and that σ(ψτψ X) = ψτψ (σX) = ψτψ X. −1 1 −1 Moreover, Bθ(X,Y ) = −Bg0 (X, θY ) = −Bg(X, ψτψ Y ) = 2 (BgR )ψτψ (X,Y ).

Thus Bθ is positive definite on g0 and θ is a Cartan involution. 

Corollary 3.5.6 If g0 is a real semisimple Lie algebra then any two Cartan involutions of g0 are conjugate via Int(g0).

Proof: Let θ and θ0 be two Cartan involutions. Take σ = θ0 as in theorem 3.5.4 then there is some −1 0 −1 ψ ∈ Int(g0) such that ψθψ commutes with θ . ψθψ is another Cartan involution of g0 so we may assume that θ and θ0 commute. Since θ and θ0 commute, they have a compatible eigenspace decomposition into ±1 eigenspaces. By symmetry it suffices to show that no X lies in the +1 eigenspace of θ and the −1 eigenspace of θ0. Suppose for a contradiction that θX = X, and θ0X = −X. Then,

0 −B(X,X) = −B(X, θX) = Bθ(X,X) > 0 and B(X,X) = −B(X, θ X) = Bθ0 (X,X) > 0

which is a contradiction.  Corollary 3.5.7 If g is a complex semisimple Lie algebra then any two compact real forms of g are conjugate via Int(g).

Proof: Each compact real form defines a conjugation on g which is a Cartan involution by proposition 3.5.2. Applying 3.5.6 to gR these two conjugations are conjugate by an element of Int(gR) = Int(g). 

Corollary 3.5.8 If A = (Aij) is an abstract Cartan matrix, then there is a unique (up to isomorphism) compact semisimple Lie algebra g0 whose complexification g has a root system with A as its Cartan matrix.

Proof: The existence of such a g is given by theorem 2.7.13 and the uniqueness by corollary 2.7.12. Passing from g to g0 is enabled by theorem 3.4.6 and corollary 3.5.7. 

Corollary 3.5.9 If g is a complex semisimple Lie algebra then the only Cartan involutions of gR are the conjugations with respect to the compact real forms of g.

Proof: By theorem 3.4.6 and proposition 3.5.2 there is a Cartan involution of gR given by conjugation with respect to some compact real form of g. Any other Cartan involution is conjugate to this one by corollary 3.5.6 and hence is also the conjugation with respect to a compact real form of g. 

47 A Cartan involution θ of g0 yields an eigenspace decomposition g0 = k0 ⊕p0 of g0 into 1 and -1 eigenspaces respectively. Moreover, since θ is an involution:

[k0, k0] ⊆ k0, [k0, p0] ⊆ p0, [p0, p0] ⊆ k0. (†)

Thus it follows that k0, p0 are orthogonal with respect to Bg0 and also with respect to Bθ. In fact, if

X ∈ k0,Y ∈ p0 then ad X ad Y carries k0 to p0 and p0 to k0. Thus it must have trace 0 so Bg0 (X,Y ) = 0 and also Bθ(X,Y ) = 0 since θY = −Y .

Bθ is positive definite, so the eigenspaces k0, p0 have the property that:  negative definite on k0 Bg0 is (‡) positive definite on p0

A decomposition of g0 into two subalgebras which satisfies (†) and (‡) is called a Cartan decomposition of g0. Conversely, a Cartan decomposition determines a Cartan involution θ by setting

θ(x + y) = x − y where z = x + y is the decomposition of z in k0 + ip0.

θ respects brackets and Bθ is positive definite as required.

If g0 = k0 ⊕ p0 is a Cartan decomposition of g0 then k0 ⊕ ip0 is a compact real form of g0, moreover if h0 and q0 are the +1 and -1 eigenspaces of an involution σ, then σ is a Cartan subalgebra of g0 only if h0 ⊕ iq0 is a compact real form of g.

So the most general Cartan decomposition of gR is gR = u0 ⊕ iu0 where u0 is a compact real form of g.

Corollaries 3.5.5 and 3.5.6 show that an arbitrary real semisimple Lie algebra g0 has a unique Cartan decomposition up to conjugacy by Int(g0).

Lemma 3.5.10 If g0 is a real semisimple Lie algebra and θ is a Cartan involution, then

∗ (ad(X)) = −ad(θX) for all X ∈ g0,

∗ where the adjoint operator ( · ) is defined relative to the inner product Bθ.

Proof: Bθ((ad(θ(X)Y ),Z) = −B([θ(X),Y ], θ(Z)) = B(Y, [θ(X), θ(Z)])

= B(Y, θ[X,Z]) = −Bθ(Y, (ad(X))Z) ∗ = −Bθ((ad(X)) Y,Z).

The linearity of Bθ in the first variable gives the result. 

Proposition 3.5.11 If g0 is a real semisimple Lie algebra, then g0 is isomorphic to a Lie algebra of real matrices that is closed under transposition. For any Cartan involution θ of g0 this isomorphism may be chosen so that θ is carried to the negation of the transpose.

Proof: Let θ be a Cartan involution of g0 and define the inner product Bθ on g0. Since g0 is semisimple, ∼ the natural association X 7→ ad(X) shows g0 = ad(g0). Therefore, the matrices of ad(g0) in an orthonormal basis relative to Bθ will be the required Lie algebra of matrices. The closure of ad(g0) under the adjoint map follows from lemma 3.5.10 and the fact that g0 is closed under θ. 

Corollary 3.5.12 If g0 is a real semisimple Lie algebra and θ is a Cartan involution then any θ stable subalgebra s0 of g0 is reductive.

Proof: By proposition 3.5.11 g0 may be regarded as a real Lie algebra of real matrices which is closed under transposition and where θ is mapped to the negation of the transpose. Then s0 is a Lie subalgebra of matrices closed under transpose and so the result follows from the fact that any such real Lie algebra of matrices is reductive. 

We can now lift these results from the Lie algebra to the Lie group.

48 Proposition 3.5.13 If G is a semisimple Lie group and Z is its centre then G/Z has trivial centre.

Remark: We know that Z is discrete as it is a closed subgroup of G whose Lie algebra is 0, since G is semisimple.

Proof: Let g0 be the Lie algebra of G. Given x ∈ G, Ad(x) represents conjugation by x and Ad(x) = 1 if and only if x ∈ Z. Thus G/Z =∼ Ad(G). If g ∈ Ad(G) is central then gAd(x) =Ad(x)g for all x ∈ G.

Differentiating gives g(ad(X)) = (ad(X))g for all X ∈ g0. Applying both sides of this equation to some −1 Y ∈ g0 tells us that g([X,Y ]) = [X, gY ]. Thus replacing Y by g Y we see that [gX, Y ] = [X,Y ]. Interchanging X and Y gives [X, gY ] = [X,Y ] and hence g([X,Y ]) = [X,Y ].

g0 is semisimple, so [g0, g0] = g0. Therefore g is the identity map on g0. Thus Ad(G) has trivial centre. 

Theorem 3.5.14 Let G be a semisimple Lie group, let θ be a Cartan involution of its Lie algebra g0 and let g0 = k0 ⊕ p0 be the corresponding Cartan decomposition. Let K be the connected Lie subgroup of g with Lie algebra k0. Then (i) there is a Lie group involution Θ of G with differential θ, (ii) {g ∈ G | Θ(g) = g} = K,

(iii) the mapping K × p0 → G given by (k, X) 7→ k exp X is a diffeomorphism, (iv) K is closed, (v) Z = Z(G) ≤ K, (vi) K is compact if and only if Z is finite, (vii) when Z is finite, K is a maximal compact subgroup of G.

The automorphism Θ guaranteed by this theorem is called the global Cartan involution and the decom- position in (iii) is called the global Cartan decomposition. Proof: Omitted, see [Kn02] pages 362-367.

3.6 Iwasawa decomposition

As with the Cartan decomposition, we begin with a decomposition of Lie algebras before lifting the result to Lie groups. For this section it is wise to change notation briefly.

G will denote a semisimple Lie group and its Lie algebra will be denoted by g (not by g0 as previously). If θ is a Cartan involution of g we will write g = k ⊕ p as the corresponding Cartan decomposition and K for the connected Lie subgroup of G with Lie algebra k. B will be some nondegenerate symmetric invariant bilinear form on g such that B(θX, θY ) = B(X,Y ) for all X,Y ∈ g and Bθ (defined by Bθ(X,Y ) = B(X, θY )) is positive definite. It follows from this that B is negative definite on the compact real form k ⊕ ip. Thus B is negative definite on a maximal abelian subspace of k ⊕ ip and hence given any Cartan subalgebra of gC, B is positive definite on the subspace where all roots are real-valued. Bθ is an inner product on g which we use to define orthogonality and adjoints.

Definition 3.6.1 Restricted root-space decomposition Let a be a maximal abelian subspace of p, such a thing exists because p is finite-dimensional. Since ad(X)∗ = −ad(θX) by lemma 3.5.10, the set {ad(H) | H ∈ a} is a commuting family of self-adjoint transformations of g. Then g is the orthogonal direct sum of simultaneous eigenspaces with all the corresponding eigenvalues being real. Fixing such an eigenspace with eigenvalue λH under ad(H), then the equation ad(H)X = λH X shows λH is linear in H. Hence the simultaneous eigenvalues are members of the dual space a∗. Given λ ∈ a∗ we write

gλ = {X ∈ g | (ad(H))X = λ(H)X for all H ∈ a}.

49 If gλ 6= 0 and λ 6= 0 we call λ a restricted root of g or a root of (g, a). The set of restricted roots is denoted by Σ. Any non-zero gλ is called a restricted root-space and its members are called restricted root vectors of the restricted root λ. Proposition 3.6.2 The restricted roots and restricted root-spaces have the following properties: L (i) g = g0 ⊕ λ∈Σ gλ,

(ii)[ gλ, gµ] ⊆ gλ+µ,

(iii) θ(gλ) = g−λ and hence λ ∈ Σ if and only if −λ ∈ Σ,

(iv) g0 = a ⊕ m where a and m are orthogonal and m = Ck(a). Remark: The decomposition given by (i) is called the restricted root-space decomposition of g. Proof: (i) Follows by construction. (ii) Holds by the Jacobi identity.

(iii) Consider X ∈ gλ, then [X, θX] = θ[θH, X] = −θ[H,X] = −λ(H)θX.

(iv) By part (iii), θg0 = g0, hence g0 = (k ∩ g0) ⊕ (p ∩ g0). Since a ⊆ p ∩ g0 and a is maximal abelian in p, a = p ∩ g0. Also k ∩ g0 = Ck(a).  Let Σ+ be the set of positive roots with respect to some suitable notion of positivity (for example a L lexicographical ordering). Define n = λ∈Σ+ gλ. By proposition 3.6.2(ii), n is a nilpotent subalgebra of g. Proposition 3.6.3 Iwasawa decomposition of Lie algebras With the notation given previously, g = k ⊕ a ⊕ n. Moreover, a is abelian, n is nilpotent and a ⊕ n is a solvable Lie subalgebra of g. Finally, [a ⊕ n, a ⊕ n] = n.

Proof: We know that a is abelian and n is nilpotent. Since [a, gλ] = gλ for each λ 6= 0, we see that [a, n] = n and that a ⊕ n is a solvable subalgebra with [a ⊕ n, a ⊕ n] = n. To prove that k + a + n is a direct sum it suffices to show that k ∩ (a ⊕ n) = 0. Let X ∈ k ∩ (a ⊕ n). Then θ(X) = X ∈ a ⊕ θ(n). Since a ⊕ n ⊕ θ(n) is a direct sum by proposition 3.6.2 parts (i) and (iii), X ∈ a, but then X ∈ k ∩ p = 0.

Finally, k ⊕ a ⊕ n ⊇ g because we can write X ∈ g using some H ∈ a some X0 ∈ n and elements Xλ ∈ gλ by X X X + X = H + X0 + Xλ = (X0 + (X−λ + θX−λ)) + H + ( λ ∈ Σ (Xλ − θX−λ)) λ∈Σ λ∈Σ+ which lies in k + a + n. 

In order to prove the Iwasawa decomposition of Lie groups we first prove two lemmas. Lemma 3.6.4 Let H be an connected Lie group with Lie algebra h and suppose that h is a vector space direct sum of Lie subalgebras h = s⊕t. If S and T denote the connected subgroups of G with Lie algebras s and t respectively, then the multiplication map φ(s, t) = st of S × T into H is everywhere regular, i.e. det(dφ) 6= 0. Proof: Let X ∈ s,Y ∈ t. Then

−1 φ(s0 exp(rX), t0) = s0 exp(rX)t0 = s0t0 exp(Ad(t0 )rX) and φ(s0, t0 exp(rY )) = s0t0 exp(rY ). −1 Therefore, dφ(X) = Ad(t0 )X and dφ(Y ) = Y . So dφ must be block triangular in matrix form, thus

−1 det Adh(t0 ) det Adt(t0) det dφ = −1 = 6= 0. det Adt(t0 ) det Adh(t0)

50 Lemma 3.6.5 There is a basis {Xi} of g such that the matrices representing ad(g) have the following properties. (i) the matrices of ad(k) are anti-symmetric, (ii) the matrices of ad(a) are diagonal with real entries, (iii) the matrices of ad(n) are strictly upper triangular.

Proof: Let {Xi} be an orthonormal basis of g compatible with the restricted root-space decomposition and with the property that if Xi ∈ gλi ,Xj ∈ gλj with i < j then λi ≥ λj. ∗ For X ∈ k, ad(X) = −ad(θX) = −ad(X) by lemma 3.5.10. Since each Xi is a restricted root vector or is in g0, the matrices of ad(a) are necessarily diagonal with real entries and the matrices of ad(n) are strictly upper triangular by proposition 3.6.2(ii).  Theorem 3.6.6 Iwasawa decomposition for Lie groups Let G be a semisimple Lie group, let g = k ⊕ a ⊕ n be the Iwasawa decomposition of the Lie algebra g of G. Let A and N be the connected Lie subgroups of G with Lie algebras a and n respectively. Then the multiplication map K × A × N → G given by (k, a, n) 7→ kan is a diffeomorphism. Moreover A and N are simply connected.

Proof: Let G = Ad(G) which can be regarded as the closed subgroup Aut(g)0 of GL(g). We prove the theorem for G and then lift the result to G.

Impose the inner product Bθ on g. We write matrices for elements of G and ad(g) relative to the basis given by lemma 3.6.5. Let K = Adg(K), A = Adg(A) and N = Adg(N). Lemma 3.6.5 tells us what matrices in these sets look like. Namely, matrices in K are rotation matrices, A contains diagonal ma- trices with positive entries and N contains upper unitriangular matrices. K is compact by proposition 3.5.13 and theorem 3.5.14(vi). The subgroup of GL(g) consisting of diagonal matrices with positive entries is abelian and simply con- nected and A is an connected Lie subgroup of it. Therefore A is closed in GL(g) and hence in G, similarly N is closed in G. The map A × N → GL(g) given by (a, n) 7→ an is injective since we can recover a from the diagonal entries of an. Clearly, this function is onto from A×N → AN. Since A, N are both closed a simple limit argument shows that AN is closed and has the Lie algebra a ⊕ n. Thus by lemma 3.6.4, A × N → AN is a diffeomorphism. K is compact and thus the image of the map K × A × N → K × AN → G is the product of a compact set with a closed set and thus it is closed. Also, the image is open since the map is everywhere regular by lemma 3.6.4. By dimensional considerations from the Iwasawa decomposition of g it follows that the image is all of G. Thus the multiplication map is smooth, regular and onto. Finally, K ∩ (A × N) = {1} since a rotation matrix with positive eigenvalues is always 1. Hence the multiplication map K × A × N → G is a diffeo- morphism. Let e : G → G = Ad(G) be the natural (smooth) homomorphism e(g) = Ad(g). We may define an inverse of e locally, so we may write this map locally as

(k, a, n) 7→ (e(k), e(a), e(n)) 7→ e(k)e(a)e(n) = e(kan) 7→ kan

and therefore the multiplication map is smooth and everywhere regular. Since A and N are simply connected e|A and e|N are injective covering maps to A and N respectively, from this we can deduce that A and N are simply connected. To show the multiplication map is onto, let g ∈ G and write e(g) = kan. Set

−1 −1 a = (eA) (a) ∈ A, and n = (eN ) (n) ∈ N and let k ∈ e−1(k). Then e(kan) = kan, so e(g(kan)−1) = 1.

51 Thus g(kan)−1 = z ∈ Z(G). By theorem 3.5.14(v) z ∈ K. Thus g = (zk)an so g ∈ K × A × N. To show the map is injective it suffices to show that K ∩ AN = {1}. If x ∈ K ∩ AN then e(x) ∈ K ∩ AN = {1}, so e(x) = 1. Write x = an ∈ AN. Then 1 = e(x) = e(an) = e(a)e(n) so e(a) = e(n) = 1 by the adjoint case. Therefore, as e is injective on A and N, a = n = 1 and thus x = 1. 

Proposition 3.6.7 If t is a maximal abelian subspace of m = Ck(a) then h = a ⊕ t is a Cartan subalgebra of g

Proof: It suffices to show that hC is maximal abelian in gC. We know hC is abelian. If Z = X + iY commutes with hC then so do X and Y , thus without loss of generality we may consider only X. X commutes with a and hence lies in a ⊕ m, therefore θ(X) ∈ a ⊕ m. X + θ(X) ∈ k so it lies in m and commutes with t so lies in t, while X − θ(X) ∈ a so X ∈ a ⊕ t, so hC is maximal abelian.  Definition 3.6.8 Borel subgroups Let G be a Lie group and let g be its Lie algebra. A connected Lie subgroup B of G is called a Borel subgroup if there is a Cartan subalgebra h of g and an ordering of the corresponding set of roots Φ such that M b = h ⊕ ga a∈Φ+ + is the Lie algebra of B, where ga = {X ∈ g | ad(H)X = a(H)X for all H ∈ h} and Φ is the set of positive roots of Φ.

Uniqueness results in chapter 2 tell us that different choices of Cartan subalgebras and orderings lead to positive root systems which are conjugates of each other. Thus we deduce that all Borel subgroups of G are conjugate.

Proposition 3.6.9 Let B be a Borel subgroup of a semisimple Lie group G. Then G = KB = {kb | k ∈ K, b ∈ B}.

Proof: It is clear that b ⊇ a ⊕ n so the result follows from the Iwasawa decomposition for Lie groups. Trivially this tells us that the partition of G into double cosets K\G/B has precisely one element, K1GB = G.

It is clear that the set of all upper triangular matrices in sln(C) is a Borel subalgebra of sln(C). The matrix exponential function maps upper triangular matrices in sln(C) to upper triangular matrices, as the set of upper triangular matrices is closed under addition and multiplication. It is therefore apparent that the set of upper triangular matrices of SLn(C) contains a Borel subgroup of SLn(C). Moreover, the previous proposition tells us that this subgroup should contain a maximal torus, namely the set of all diagonal matrices. Using this, we can deduce that the set of all upper triangular matrices is indeed a Borel subgroup of G. Similarly, we could choose the set of all lower triangular matrices as another Borel subgroup of G. These two subgroups are conjugate via the matrix

 0 0 1   ..  J =  0 . 0  .   1 0 0

52 Chapter 4

(Twin) Buildings and (Twin) BN-pairs

Historically groups are understood by considering the ways in which they can act on objects. This section introduces buildings and shows how semisimple Lie groups act on these objects. Following this we continue into the theory of twin buildings and consider necessary properties for a group to act on these objects in an informative way. The main references for this chapter are [Bo02] and [AB08].

4.1 Buildings as W -metric spaces

Definition 4.1.1 Coxeter groups

A symmetric matrix M = (mij) ∈ Mn(Z) is called a Coxeter matrix if and only if mii = 1 for all i and mij ∈ {2, 3,... }∪{∞} whenever i 6= j. The Coxeter group W (of type M) is the group with presentation

mij W = hs1, . . . , sn | (sisj) whenever mij 6= ∞i

Let S = {s1, . . . sn}, then the pair (W, S) is called a Coxeter system of rank |S|. Note that this presen- tation implies that 1W 6∈ S. If (W, S) is a Coxeter system then we define the Coxeter graph of (W, S) to be the graph with vertex set S and edges between si and sj if and only if mij > 2 that is, these two elements of S do not commute. We give each edge the label mij. It is clear that this graph is simple, as no vertex can send an edge to itself.

Example 4.1.2 The algebraic Weyl group

Let g be a complex semisimple Lie algebra with Cartan subalgebra h and let A = (Aij) be the corre- sponding abstract Cartan matrix. We know that the algebraic Weyl group W (g, h) is generated by the set of reflections S = {Sa | a ∈ Π}, where Π is some system of simple roots, subject to the relations mij (sisj) , where  1, if i = j mij = 2, 3, 4 or 6 if AijAji = 0, 1, 2 or 3 respectively. From this we deduce that W (g, h) is a Coxeter group whose rank is equal to the rank of g. Moreover, the Coxeter diagram of W coincides with the Dynkin diagram of g (ignoring the labels on vertices). Therefore figure 2.1.2 provides a presentation of the Weyl group of each complex semisimple Lie group.

Example 4.1.3 Symmetric and dihedral groups

This example covers the Weyl group of type An−1, namely the symmetric group on n letters, Sn, which is a Coxeter group of rank n − 1 with presentation

 3  (sisj) = 1 whenever |i − j| = 1 Sn = si = (i i+1), i ∈ {1, 2, . . . , n − 1} 2 (sisj) = 1 otherwise

53 (2 3)(3 4)...(n−1 n) Note that these generators are sufficient to generate Sn as (1 n) = (12) . We provide an example to show that there are Coxeter groups which are not algebraic Weyl groups. The , D2n, is a Coxeter group of rank 2 with presentation

2 2 n D2n = ha, b | a = b = (ab) i

For n ≥ 4, D2n is not the Weyl group of any complex semisimple Lie algebra as its Coxeter diagram is not present in figure 2.1.2. In the case n = 3 we may enumerate the vertices of a regular hexagon clockwise from 1 to 6 and choose a = (2 6)(3 5) and b = (1 2)(3 6)(4 5). These are both involutions and ab = (1 2 3 4 5 6) has order 6.

Lemma 4.1.4 The Cayley graph of (W, S) Let (W, S) be a Coxeter system and define the Cayley graph Γ to be the graph with vertex set W and an −1 edge from w1 to w2 if and only if w1(w2) ∈ S. Then (W, d) is a metric space, where −1 −1 d(w1, w2) := l(w1(w2) ) is the smallest integer q ≥ 0 such that w1(w2) is the product of a sequence of q elements of S. The map l : W → Z is called the length function of W . Proof: It will suffice to show that the Cayley graph is a simple (undirected) graph and that in this graph d(w, w0) corresponds to the length of a shortest path from w to w0 as this is a well known metric space. −1 −1 −1 −1 S is a collection of involutions, so S = S . Therefore, if w1w2 = s ∈ S, then w2w1 = s = s ∈ S. Clearly there is a path from w to w0 in this graph of length l(w(w0)−1) and any shorter path would define a shorter sequence of elements from S whose product is w(w0)−1 so we can deduce that this is indeed a shortest path.  Proposition 4.1.5 Equivalent definitions of Coxeter systems Let W be a group generated by a set S of involutions. The following are equivalent. (i)( W, S) is a Coxeter system.

(ii) Whenever w ∈ W , s ∈ S and l(sw) ≤ l(w), then given any reduced decomposition (s1, . . . , sq) of w there is some j ∈ {1, . . . , q} such that ss1s2 . . . sj−1 = s1s2 . . . sj.

Condition (ii) of this proposition is called the exchange condition.

Proof: See [Bo02], pages 7-11. 

We will need some key properties of Coxeter groups, which are collected in the next proposition.

Proposition 4.1.6 Let (W, S) be a Coxeter system of rank n. (i) Let w ∈ W and s ∈ S, then l(sw) = l(w) ± 1.

(ii) If |W | < ∞ then there is a unique w0 ∈ W such that l(w0) ≥ l(w) for all w ∈ W .(w0 is called the longest word of W ).

Proof: Using lemma 4.1.4, we see that d(sw, w) = l(s) = 1 ≥ |d(w, 1) − d(sw, 1)| = |l(w) − l(sw)|, so it suffices to prove that l(w) 6= l(sw).

Suppose l(sw) ≤ l(w) and that (s1, . . . , sq) is a reduced decomposition of w. As (W, S) satisfies the exchange condition, there is some j, such that w = ss1 . . . sj−1sj+1 . . . sq, so sw = s1 . . . sj−1sj+1 . . . sq and l(sw) < l(w) as required.

For part (ii) we define an ordering on (W, S) where w1 ≤ w2 if and only if there is some reduced decomposition (s1 . . . sq) for w2 such that w1 = (sn1 . . . snk ) with 1 ≤ n1 < ··· < nk ≤ q. This ordering is called the Bruhat ordering and W is a poset with respect to this ordering. As |W | < ∞ this ordering must have some maximal element, w0 and it is clear that such a w0 can be chosen to be maximal by length. Then l(w0) ≥ l(w) for all w ∈ W .

54 0 Suppose w0 and w0 are both maximal elements with respect to the Bruhat ordering and that l(w0) = 0 l(w0). 0 0 0 Let (s1 . . . sq) and (s1 . . . sq) be reduced decompositions of w0 and w0 respectively. 0 0 0 By definition l(s1w0) ≤ l(w0), so by the exchange condition we could have chosen s1 = s1. 0 0 0 0 0 0 0 0 0 0 If l(s2s2s3 . . . sq) > l(s2s3 . . . sq) = l(w0) − 1 then we can deduce that s2 = s1 = s1, as l(sw0) < l(w0) for all s ∈ S. But this contradicts the fact that (s1 . . . sq) is a reduced decomposition for w0. 0 0 0 0 0 0 Thus l(s2s2s3 . . . sq) ≤ l(s2s3 . . . sq) so we can use the exchange condition again and deduce that we 0 0 could have chosen s2 = s2. Repeating this procedure we see that w0 = w0.  Definition 4.1.7 Buildings Let (W, S) be the Coxeter system with associated Coxeter matrix M. A (of type (W, S)) is a pair (∆, δ) where ∆ is a set and δ : ∆ × ∆ → W is a map where the following hold for any pair x, y ∈ ∆ with δ(x, y) = w (i) w = 1 if and only if x = y,

(ii) if z ∈ ∆ is such that δ(y, z) = s ∈ S then δ(x, z) ∈ {w, ws}, (iii) for each s ∈ S there is a z ∈ ∆ such that δ(x, z) = ws and δ(y, z) = s. The elements of ∆ are called chambers and δ is called the distance function. A building is called spherical if |W | < ∞.

If (∆, δ) is a building, then the chamber graph associated to this building is the graph with vertex set ∆ and edges between all pairs of vertices x, y ∈ ∆ with the property that δ(x, y) ∈ S. Paths in the chamber graph are called galleries and a shortest path between two chambers is called a minimal gallery.

Example 4.1.8 Σ(W, S)

Let (W, S) be a Coxeter system. Set ∆ = W and define δ : ∆ × ∆ → W by δ(x, y) = xy−1. We claim (∆, δ) is a building of type (W, S) and that the associated chamber graph is the Cayley graph of (W, S). Proof: Property (i) clearly holds by the uniqueness of inverses. (ii) holds because δ(x, z) = δ(x, y)δ(y, z) = ws ∈ {w, ws}.

For (iii) simply choose z = sy ∈ W . 

A set A ⊆ ∆ is called an apartment if and only if it is isometrically isomorphic to Σ(W, S).

Definition 4.1.9 s-panels Let (∆, δ) be a building of type (W, S). For x ∈ ∆ and s ∈ S the set

Ps(x):= {y ∈ ∆ | δ(x, y) ∈ {1, s}} is called the s-panel of ∆. A building is said to be thin if every panel has cardinality 2 and thick if every panel has cardinality at least 3. A panel P is said to belong to an apartment A if A ∩ P 6= ∅.

Continuing with example 4.1.8 we can see that Σ(W, S) is a thin building. As we will see in section 4.4, thick buildings are the ones which will be most important to the study of semisimple Lie groups.

55 4.2 Buildings as chamber complexes

Definition 4.2.1 Simplicial complexes A simplicial complex is a partially ordered set (poset) (X, ≤) satisfying (i) every pair x, y ∈ X have a greatest lower bound,

(ii) for each x ∈ X there is some r ∈ N such that ∼ X≤x = {y ∈ X | y ≤ x} = P{1, . . . , r},

here P(X) denotes the power set of X. Elements of simplicial complexes are called simplices.

For each x ∈ X the rank of x, rank(x) is the value r given by property (ii). ∼ As X≤x = P{1, . . . , r} for some r we may visualise the set X≤x as a complete graph Γ on r vertices, in which each element y ≤ x defines a complete subgraph of Γ on at most r vertices. Considering each vertex (copy of K1) as a 0 dimensional point, each copy of K2 as a line, K3 as a triangle, K4 as a tetrahedron and so on, it is sensible to define the dimension of the point x as the dimension of its realisation as a complete graph, so dim(x):= r − 1.

Example 4.2.2 Three simplicial complexes

(i) The poset consisting of all finite subsets of N under the inclusion ordering is a simplicial complex as every pair of subsets have a greatest lower bound, namely their intersection. If Y is a finite ∼ ∼ subset of N of size k, then X≤Y = P(Y ) = P{1, . . . , k} has rank k and dimension k − 1.

(ii) Similarly, one can take an r dimensional vector space V with basis B = {v1, . . . , vr} and create a poset of all subspaces of V with basis B0 ⊆ B ordered by set inclusion. The greatest lower bound 0 ∼ 0 of two subspaces is their intersection. Moreover, if W := span(B ) then X≤W = P(B ).

n n (iii) The projective plane P is the poset of all flags of subspaces (1 ( V0 ( V1 ( ··· ( Vk (R ). n n 0 0 0 n The ordering on P is (1 ( V0 ( V1 ( ··· ( Vk (R ) ≤ (1 ( V0 ( V1 ( ··· ( Vl (R ) if and 0 only if for each i ∈ {0, . . . k} there is some j ∈ {0, . . . , l} with Vi = Vj . The greatest lower bound of two flags is the flag consisting of all common subspaces. Finally, the n set of all flags bounded from above by a given flag (1 ( V0 ( V1 ( ··· ( Vk (R ) is isomorphic to the power set of {V0,V1,...,Vj}. Similar examples can be made for a projective plane over any field F. Definition 4.2.3 Chamber complexes Let (X, ≤) be a simplicial complex. (X, ≤) is called a chamber complex if (i) maximal simplices (chambers) have constant dimension, (the set of chambers is denoted Ch(X)),

(ii) the chamber graph Γ = (Ch(X),E) (defined by xy ∈ E if and only if x ∩ y has co-dimension 1) is connected. A path in the chamber graph is called a gallery.

The notion of a co-dimension is well defined for chamber complexes as they have a well defined dimension, given by the dimension of each maximal chamber. A chamber complex is called thin if every panel (simplex of co-dimension 1) is contained in at most 2 chambers and thick if every panel is contained in at least 3 chambers.

Example 4.2.4 Continuing with example 4.2.2

56 (i) The poset consisting of all finite subsets of N is not a chamber complex as every simplex is contained in a strictly larger simplex. However, the poset given by the power set of some finite set A is a chamber complex with one chamber. (ii) A more interesting extension to this idea is the poset of all proper subsets of a finite set A. This will be a chamber complex consisting of |A| chambers of rank |A| − 1, corresponding to the sets A \{a} for each a ∈ A. Moreover, this complex is thin as each panel, which is a set A \{a, b} of size |A| − 2 is only contained in the chambers A \{a} and A \{b}. (iii) The projective plane is a thick chamber complex. This is because a panel consists of a nested n sequence (V0 = 1 ( V1 ( ··· ( Vn − 2 (F = Vn−1). Suppose i is the unique value with dim(Vi/Vi−1) 6= 1 then this panel is contained in every chamber of the form

n (V0 = 1 ( ··· ( Vi−1 ( W ( Vi ( ··· (F = Vn−1) and there are certainly more than 2 choices for W . If |F | = ∞, then there are infinitely many choices for W , so each panel is contained in infinitely many chambers.

Definition 4.2.5 Coxeter complex Let (W, S) be a Coxeter system, the associated Coxeter complex Σ = Σ(W, S) is given as follows. Then Σ = {whT i | w ∈ W, T ⊆ S} with order relation ≤ given by

w1hT1i ≤ w2hT2i if and only if StabW (w1hT1i) ⊆ StabW (w2hT2i). For each T ⊆ S we call hT i a special subgroup of W , and the cosets of W/hT i are called special cosets. Sometimes we will write wWT for whT i. The chambers of Σ are given by {w} for each w ∈ W , the panels by wh{s}i = {w, ws} for each w ∈ W and each s ∈ S. The vertices are given by sets whS \{s} for each w ∈ W and s ∈ S and the unique empty simplex is W (empty because StabW (W ) = ∅). Example 4.2.6 Σ(W, S)

Let (W, S) be a Coxeter system, then Σ(W, S) is a thin chamber complex of rank |S|. The above comments prove that Σ(W, S) is a chamber complex. Let wh{s}i = {w, ws} be a panel. This is contained in precisely two chambers, namely {w} and {ws}. Therefore the claim holds.

Definition 4.2.7 Simplicial automorphisms Let X and Y be simplicial complexes. An order preserving map X → Y is called simplicial if it does not increase dimension locally and maps vertices to vertices.

Definition 4.2.8 Links and residues Let (X, ≤) be a simplicial complex and let x ∈ X.

The link of x is given by LkX (x) = {y ∈ X | x ∩ y = ∅, and x, y ≤ z for some z}

The residue of x is X≥x = {y ∈ X | x ≤ y}.

It is apparent from the definition that LkX (x) is a subcomplex of X.

Proposition 4.2.9 LkX (x) and X≥x are isomorphic as posets.

Proof: Define ψ : X≥x → LkX (x) by ψ(y) is the maximal (with respect to ≤) subsimplex of y which satisfies ψ(y) ∩ x = ∅.  Theorem 4.2.10 Let c, c0 ∈ Ch(Σ(W, S)) and write

Ch(a) = {x ∈ Ch(Σ) | d(c, x) < d(c0, x)} and Ch(−a) = {x ∈ Ch(Σ) | d(c, x) > d(c0, x)}.

Then Ch(Σ) = Ch(a)t˙ Ch(−a).

57 Here the distance metric d(x, y) is given by the length of a shortest xy-path in the Chamber graph. Proof: It is clear from the above remark that this union is disjoint, so it suffices to show that there are no x ∈ Ch(Σ) with d(c, x) = d(c0, x) but this is just a consequence of the fact that l(sw) = l(w) ± 1 for all s ∈ S and all w ∈ W .  P P We make two remarks here. First a = ≤ch(a) is called a half-space or root and −a = ≤ch(−a) is the root opposite to a, secondly, the set a ∩ −a is called a wall.

Definition 4.2.11 Buildings A building is a simplicial complex ∆ which is the union of subcomplexes Σ (called apartments), which satisfy the following three conditions. (B0) Each apartment is a Coxeter complex, (B1) if A, B ∈ ∆, then there is an apartment Σ such that A, B ∈ Σ, (B2) if A, B ∈ ∆ and Σ, Σ0 are apartments with A, B ∈ Σ ∩ Σ0 then there is a simplicial isomorphism φ :Σ → Σ0 which fixes A and B pointwise.

There are two other conditions which are equivalent to (B2) (given that (B0) and (B1) already hold) which will be of use. These are (B20) if A ∈ ∆,C ∈ Ch(∆) and Σ, Σ0 are apartments with A, C ∈ Σ ∩ Σ0 then there is a simplicial isomorphism φ :Σ → Σ0 which fixes A and C pointwise. (B200) if Σ and Σ0 are apartments which contain a common chamber C, then there is a simplicial isomorphism φ :Σ → Σ0 which fixes Σ ∩ Σ0 pointwise. All apartments are isomorphic by (B2), so we may reasonably talk about the dimension of a building as being the dimension of any of its chambers. A consequence of this property is that all apartments have the same Coxeter matrix. However, the set of apartments is not part of the structure of a building. We define a building to be spherical if and only if the Coxeter group used to define its apartments is finite.

Example 4.2.12 Two buildings

(i)Σ( W, S) is a thin building of dimension |S|−1. Property (B0) holds by example 4.2.6 and properties (B1) and (B2) hold vacuously, as Σ(W, S) has only one apartment.

(ii) The projective plane Pn is a thick building of dimension n − 1. To avoid confusion, from this point onwards, buildings in the sense of definition 4.2.11 will be denoted by ∆, and those in the sense of definition 4.1.7 by (Ch(∆), δ).

Theorem 4.2.13 The chamber complex and W -metric definitions of a building are equivalent and all the defined notions for each type of building agree.

Proof: Let ∆ be a building in the sense of definition 4.2.11. Each apartment in this building is a copy of Σ(W, S) for some Coxeter system (W, S). The set of all chambers of Σ(W, S) forms a building in the sense of definition 4.1.7 as was seen in example 4.1.8. Define a metric δ : Ch(D) × Ch(D) → W by δ(C,D) = w where w is the distance between C and D in any apartment containing both these chambers. This is well defined as any two apartments containing C and D are isomorphic by condition (B2). The remaining properties follow from the fact that (Ch(Σ(W, S)), δ) is a building in the sense of definition 4.1.7.

The reverse implication is more difficult, so the reader is referred to [AB08], section 5.6.  Using the first implication we can define a metric d on a building ∆ where

d(x, y) = min{l(δ(C,D)) | x ≤ C, y ≤ D}.

58 4.3 Solomon-Tits theorem

Later results will require a building to be simply connected. With this in mind we will now state the Solomon-Tits theorem. A complete proof would need knowledge of basic homotopy theory. Since this is not assumed, these details will be stated, but their proofs will be omitted. Using theorem 4.2.13 we will interchange between the two definitions.

Definition 4.3.1 Bouquet of spheres

Let (Xi)i∈I be a collection of topological spaces and for each i ∈ I, let xi ∈ Xi. The wedge sum of the X is i , _ G Xi := Xi ∼, i∈I i∈I

where ∼ is the equivalence relation, a ∼ b if and only if a = xi and b = xj for some i, j ∈ I. The point W xi is called the base point of Xi. These base points are equivalent in i∈I Xi so we denote the unique base point of this set by x. n n+1 W If each Xi is homeomorphic to the sphere S = {x ∈ R | |x| = 1}, then i∈I Xi is called a bouquet of n-spheres.

Proposition 4.3.2 A bouquet of n-spheres is simply connected if and only if n ≥ 2

Proof: Sn is simply connected if and only if n ≥ 2. Let X be a bouquet of spheres. It is clear that X is path connected. Consider a continuous curve f : S1 → X. If the image f(S1) avoids the base point of X then it is entirely contained in some copy of Sn so is in the same homotopy class as the identity curve f(x) = x. Here we are using the fact that Sn is simply connected, so its fundamental group is trivial. Suppose now that x ∈ f(S1). If f(x) = x for all x then f is the identity curve, so suppose there is some a ∈ S1 with f(a) 6= x. As f is continuous there is a connected open subset U of S1, such that x 6∈ f(U), but f(x) = x for all x ∈ U \ U. But then f(U) is contained in some copy of Sn so is in the same homotopy class as the curve f 0 which is the identity map on U and equal to f elsewhere. As this process can be done for any point not mapped to x, we deduce that f lies in the same homotopy class as the identity curve. Thus X has trivial fundamental subgroup and is therefore simply connected. 

To ease the notation in the following results we denote the set ∆≤A = {X ∈ ∆ | X ≤ A} by A.

Lemma 4.3.3 Let ∆ be a building. Fix some chamber C ∈ Ch(∆) and some integer d ≥ 1. Let D be a set of chambers with the following two properties. (i) d(C,D) ≤ d for all D ∈ D, (ii) if d(C,D) < d then D ∈ D. Let ∆0 be the subcomplex of ∆ generated by D and let D be a chamber of ∆ with D 6∈ D and d(C,D) = d. Then [ D ∩ ∆0 = A, A∈P where P is the set of all panels A of D with d(C,A) < d (i.e. A lies on some chamber strictly closer to C than D is). Moreover, the set P contains all the panels of D if and only if ∆ is spherical and of diameter d.

Proof: Firstly, we claim that D ∩ ∆0 = {B < D | d(C,B) < d}. The inclusion D ∩ ∆0 ⊇ {B < D | d(C,B) < d} is obvious, by assumption (ii). Suppose B ∈ D ∩ ∆0. Then there is some chamber D0 ∈ D with B < D0 and d(C,B) ≤ d(C,D0) ≤ d = d(C,D), by assumption (i). If d(C,B) = d, then d(C,B) = d(C,D0) = d(C,D), so D = D0, which contradicts the fact that D 6∈ D and D0 ∈ D. Secondly, we claim that if B < D and d(C,B) < d then there is some panel A of D with the property

59 that B ≤ A and d(C,A) < d. This claim can be verified by considering a minimal gallery from C to D and choosing the panel which connects D to the chamber preceding it in this minimal gallery. Combining the two claims obtains the main result. If P contains all the panels of D and δ(C,D) = w then l(sw) < l(w) for all s ∈ S. This can only happen if W is finite and w is the longest word in W . If ∆ is spherical and of diameter d, then each panel A of D lies in some chamber DA with d(C,DA) < d(C,D) = D, so A ∈ P.  Theorem 4.3.4 Solomon-Tits theorem Let ∆ be a building of rank n. Then ∆ is a bouquet of (n − 1) spheres.

By proposition 4.3.2, the Solomon-Tits theorem says that ∆ is simply connected if and only if it has rank at least 3.

Proof: Pick some chamber C ∈ Ch(∆), choose d = l(w0), where w0 is the unique longest word of (W, S) and set D = {X ∈ Ch(∆) | d(X,C) < d}. Let ∆0 be the subcomplex of ∆ generated by D. 0 S By lemma 4.3.3, we know that D ∩ ∆ = A∈P A, where P is the set of all the panels of D. Using this it can be proved that any continuous curve on ∆0 is homotopy equivalent to a continuous curve on the single chamber C. Thus ∆ has the same fundamental subgroup as the simplicial complex obtained by contracting all chambers of ∆0 to a single chamber. Reintroducing the chambers at distance d from C, we notice that each of these chambers shares a panel with exactly one chamber of ∆ which is closer to C than it. This chamber must lie in ∆0 by construction, so this opposite chamber has the fundamental subgroup of a sphere with base point given by a chosen point in the contraction of ∆0. We chose a generic opposite chamber so it follows that the homotopy type of ∆ is that of a bouquet of (n − 1)-spheres (as n − 1 is the dimension of each chamber). 

4.4 BN-pairs

Definition 4.4.1 BN-pairs

Let G be a group and let B,N be subgroups of G with B ∩ N E N. Define T := B ∩ N, W := N/T and let S be a generating set for W . The tuple (G, B, N, S) is called a BN-pair if (i) G is generated by B ∪ N, (ii) for each w ∈ W and s ∈ S, wBs ⊆ BwB ∪ BwsB, (iii) for each s ∈ S, sBs−1 6⊆ B. If (G, B, N, S) is a BN-pair then we say that G admits a BN-pair.

It is natural to ask what additional structure can be guaranteed on a group which admits a BN-pair. The following proposition begins to answer this question.

Proposition 4.4.2 Let (G, B, N, S) be a BN-pair. Then (i) there is a bijection between W and the double cosets B\G/B. In particular G has the Bruhat decomposition G G = BwB, w∈W

(ii)( W, S) is a Coxeter system whenever |S| < ∞, in particular, s = s−1 for all s ∈ S,  BwsB, if l(ws) > l(w) (iii) BwBsB = BwB ∪ BwsB, if l(ws) < l(w)

Note that BwB = B(T n)B = (BT )nB = BnB is a well defined double coset.

Proof: See [Br88] pages 107-110. 

60 Definition 4.4.3 Group action on a building A group G is said to act on a building (Ch(∆), δ) of type (W, S) if there is a δ-preserving homomor- phism φ : G → Sym(∆). In other words, φ is a homomorphism and for all g ∈ G and all x, y ∈ ∆ δ((gφ)(x), (gφ)(y)) = δ(x, y). G is said to act transitively on (Ch(∆), δ) if for any pair x, y ∈ ∆ there is some g ∈ G such that (gφ)(x) = y. Finally, G acts strongly transitively on (Ch(∆), δ) if for each w ∈ W , the action of G is transitive on the set of pairs (∆ × ∆)w := {(x, y) ∈ ∆×∆ | δ(x, y) = w}. The next theorem will be of great importance to us, as it will show that groups which admit a BN-pair act strongly transitively on an associated thick building. Moreover, the proof is constructive so enables us to study the group via the building. This will be a necessary tool in later chapters.

Theorem 4.4.4 Let G be a group which admits a BN-pair. Then there is an associated thick building on which G acts strongly transitively.

Proof: We define the building (Ch(∆), δ) using the Bruhat decomposition guaranteed by proposition 4.4.2. Let ∆:= B\G be the set of left cosets of G with respect to B and define δ : ∆ × ∆ → W by δ(gB, hB) = w if and only if Bg−1hB = BwB.

The Bruhat decomposition tells us that such an operation is well defined. We must check that δ satisfies the three conditions necessary in definition 4.1.7 and show that G acts strongly transitively on (Ch(∆), δ). (i) Suppose δ(gB, hB) = 1, then Bg−1hB = B, so g−1h ∈ B and h ∈ gB which implies that gB = hB. (ii) Now assume δ(gB, hB) = w and suppose kB ∈ G/B is such that δ(hB, kB) = s ∈ S, then Bg−1hB = BwB and Bh−1kB = BsB. Therefore, Bg−1kB ⊆ (Bg−1hB)(Bh−1kB) = BwBBsB = BwBsB ⊆ BwB ∪ BwsB. Thus Bg−1kB = BwB or BwsB as required. (iii) Suppose for every choice of kB with δ(hB, kB) = s ∈ S, Bg−1kB = BwB. Then BwBsB ⊆ BwB and therefore ws ∈ BwB contradicting the Bruhat decomposition. Therefore there is some coset kB with δ(hB, kB) = s ∈ S and Bg−1kB = BwsB, so δ(gB, kB) = ws as required. It remains to prove that G acts strongly transitively on this building. Obviously, G acts transitively on B\G. It therefore suffices to show that there exists some b ∈ B such that bgB = hB whenever δ(B, gB) = δ(B, hB) = w. By definition BgB = BhB = BwB, so there is clearly some b ∈ B with the required property.  Due to the equivalence of the two definitions of buildings 4.2.13 and the fact that the two definitions of strongly transitive group actions agree, we deduce from this theorem that G acts strongly transitively on the equivalent building ∆. Equivalently, we could have defined the building as the set of all right cosets {Bg | g ∈ G} and defined

δ(Bg, Bh) = w if and only if Bgh−1B = BwB.

Theorem 4.4.5 Let G be a group which acts strongly transitively on some thick building ∆. Then G admits a BN-pair.

Proof: We consider G as a strongly transitive automorphism group on ∆ and fix some chamber C in some apartment Σ. Define the three subgroups B = {g ∈ G | gC = C} N = {g ∈ G | gΣ = Σ} T = {g ∈ G | g fixes Σ pointwise}

It is clear that T E N as it is the kernel of the homomorphism α : N → W induced by the action of N on the apartment Σ = Σ(W, S). Moreover, (G, B, N, S) is a BN-pair for G. The full details of the proof can be found in [Br88], pages 102-106. 

61 4.5 Twin buildings and twin BN-pairs

The previous section shows how we may associate a building to any group which admits a BN-pair, in particular to semisimple Lie groups. This will be useful in the proof of Borovoi’s theorem in section 5.3. In section 8.1 we will extend this result to Kac-Moody groups which admit a twin BN-pair. For this we will need to generalise the results from the previous section to determine a natural structure on which these groups act. It will be shown that twin buildings are the natural objects to consider.

Definition 4.5.1 Twin building
A triple ((Ch(∆+), δ+), (Ch(∆−), δ−), δ∗) consisting of two buildings (Ch(∆+), δ+) and (Ch(∆−), δ−) of type (W, S) and a codistance function δ∗ : (Ch(∆+) × Ch(∆−)) ∪ (Ch(∆−) × Ch(∆+)) → W is called a twin building of type (W, S) if for each ε ∈ {+, −} and each pair (x, y) ∈ Ch(∆ε) × Ch(∆−ε) with δ∗(x, y) = w,

(i) δ∗(x, y) = δ∗(y, x)^{-1},

(ii) if x' ∈ Ch(∆ε) is such that δε(x', x) = s ∈ S and l(ws) < l(w), then δ∗(x', y) = ws,

(iii) for any s ∈ S there is some x' ∈ Ch(∆ε) such that δε(x', x) = s and δ∗(x', y) = ws.

Two chambers x ∈ Ch(∆+) and y ∈ Ch(∆−) are said to be opposite if δ∗(x, y) = 1.

Example 4.5.2 A spherical twin building of type (W, S)

Let (Ch(∆), δ) be a spherical building of type (W, S) and let w0 be the unique longest word in W with respect to S. Define (Ch(∆+), δ+), (Ch(∆−), δ−) to be two disjoint copies of (Ch(∆), δ) and define the codistance δ∗ by
δ∗ : Ch(∆+) × Ch(∆−) → W, δ∗(x, y) = δ+(x, y)w0,
δ∗ : Ch(∆−) × Ch(∆+) → W, δ∗(x, y) = w0 δ−(x, y).
Then ((Ch(∆+), δ+), (Ch(∆−), δ−), δ∗) is a twin building of type (W, S). In light of this we may view twin buildings as a generalisation of spherical buildings.
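As a small illustration of the longest word used in this construction (our own sketch, assuming the spherical Coxeter group is W = Sn of type An−1, where the length of a permutation is its number of inversions), the following Python fragment finds w0 and checks that right multiplication by any simple reflection shortens it.

# longest element of the symmetric group S_n, viewed as a spherical Coxeter group
from itertools import permutations

n = 4  # a hypothetical small rank

def length(w):
    """Coxeter length of w in S_n = number of inversions."""
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

w0 = max(permutations(range(n)), key=length)
print(w0, length(w0))              # (3, 2, 1, 0) and n(n-1)/2 = 6

def apply_simple(w, i):
    """Right multiplication by the simple reflection s_i swaps positions i, i+1."""
    v = list(w)
    v[i], v[i + 1] = v[i + 1], v[i]
    return tuple(v)

print(all(length(apply_simple(w0, i)) < length(w0) for i in range(n - 1)))  # True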

Definition 4.5.3 Group action on a twin building
A group G is said to act on a twin building ∆ = ((Ch(∆+), δ+), (Ch(∆−), δ−), δ∗) if it acts simultaneously on the two sets Ch(∆+) and Ch(∆−) and also preserves δ+, δ− and δ∗. In particular this means that for all x, x' ∈ Ch(∆+), y, y' ∈ Ch(∆−) and g ∈ G,

(i) δ+((gφ)x, (gφ)x') = δ+(x, x'),

(ii) δ−((gφ)y, (gφ)y') = δ−(y, y'),

(iii) δ∗((gφ)x, (gφ)y) = δ∗(x, y),

where φ is the associated group action.

To make sure the twin building is a suitable structure for the group to act on we require stronger properties on such an action, namely, that it is strongly transitive. There are many equivalent definitions of this, two of which will be collected in the next proposition.

Proposition 4.5.4 Equivalent definitions of strongly transitive
Let G be a group which acts on a twin building ∆ = ((Ch(∆+), δ+), (Ch(∆−), δ−), δ∗). The following are equivalent.

(i) For each w ∈ W, G acts transitively on the set {(x, y) ∈ Ch(∆+) × Ch(∆−) | δ∗(x, y) = w}.

(ii) For ε ∈ {+, −}, G acts transitively on ∆ε and, for each w ∈ W and x ∈ ∆ε, Bε acts transitively on the set {y ∈ ∆−ε | δ∗(x, y) = w}.

Proof: See [AB08], lemma 6.70. 
We say a group G acts strongly transitively on a twin building if and only if the action satisfies either of the properties given in proposition 4.5.4.

Definition 4.5.5 Twin BN-pairs
Let G be a group and let B+, B− and N be subgroups of G such that B+ ∩ N = B− ∩ N = T ⊴ N and there is some subset S ⊆ W := N/T such that (G, Bε, N, S) is a BN-pair of G for ε ∈ {+, −}. Then the system (G, B+, B−, N, S) is called a twin BN-pair if

(i) for each w ∈ W and s ∈ S with l(ws) < l(w), BεwB−εsB−ε = BεwsB−ε,

(ii) for each s ∈ S, sB− ∩ B+ = ∅.

Proposition 4.5.6 Properties of groups which admit twin BN-pairs
Let (G, B+, B−, N, S) be a twin BN-pair, then

(i) for each ε ∈ {+, −}, each s ∈ S and w ∈ W, BεwB−εsB−ε ⊆ BεwsB−ε ∪ BεwB−ε,

(ii) G has a Birkhoff decomposition, so G = ⊔_{w∈W} BεwB−ε for each ε ∈ {+, −}.

Proof: (i) We consider the case ε = + and l(w) < l(ws); the other three cases can be handled similarly. Recall that as (G, B−, N, S) is a BN-pair, B−sB−sB− = B− ∪ B−sB−. Then
B+wB−sB− = (B+wB−)sB−
= (B+wsB−sB−)sB− by definition 4.5.5(i)
= B+ws(B−sB−sB−)
= B+ws(B− ∪ B−sB−) as (G, B−, N, S) is a BN-pair
= B+wsB− ∪ B+wsB−sB−
= B+wsB− ∪ B+wB− by definition 4.5.5(i).

(ii) We will prove the case ε = +. First we claim that B+wB− ≠ B+B− whenever 1_W ≠ w ∈ W. Assume for a contradiction that this is false for some w. As w ≠ 1 there is some s such that l(ws) < l(w). Then by part (i), B+wsB− = B+B−sB− = B+B− ∪ B+sB−. The left hand side is one double coset and the right hand side is the union of two distinct double cosets by definition 4.5.5(ii), which is a contradiction. Next we show that B+vB− = B+wB− implies that v = w, by induction on min{l(v), l(w)}, which we assume to be l(v). The case l(v) = 0 is covered by the previous claim, so assume l(v) > 0 and choose s ∈ S such that l(vs) < l(v). If B+vB− = B+wB−, then using the same trick as earlier, we deduce that B+vsB− = B+wB−sB−. There is only one double coset on the left hand side, so it follows that the right hand side is simply the double coset B+wsB−. By the induction hypothesis, vs = ws, so v = w.
Finally, we show G = ∪_{w∈W} B+wB−. By part (i) the union is closed under left multiplication by both B+ and N, so is closed under left multiplication by B+NB+, which is the whole of G by the Bruhat decomposition. 

Theorem 4.5.7 Let G be a group which admits a twin BN-pair (G, B+, B−, N, S). Then G acts strongly transitively on an associated twin building ∆ = ((Ch(∆+), δ+), (Ch(∆−), δ−), δ∗).

Proof: We know that the two BN-pairs act strongly transitively on associated thick buildings (Ch(∆+), δ+) and (Ch(∆−), δ−) respectively. It remains to show that there is a suitable codistance function δ∗. For ε ∈ {+, −}, set δ∗(gBε, hB−ε) = w if and only if Bεg^{-1}hB−ε = BεwB−ε. This is well defined as G has a Birkhoff decomposition. There are three properties on δ∗ to check.
(i) It is clear from the definition that δ∗(x, y) = δ∗(y, x)^{-1}.

(ii) Now assume δ∗(gB+, hB−) = w and suppose kB− is such that δ−(hB−, kB−) = s ∈ S, with l(ws) < l(w). Then B+g^{-1}hB− = B+wB− and B−h^{-1}kB− = B−sB−. Therefore, B+g^{-1}kB− ⊆ (B+g^{-1}hB−)(B−h^{-1}kB−) = B+wB−B−sB− = B+wB−sB− = B+wsB−, as l(ws) < l(w). Both the left hand side and right hand side are double cosets in the Birkhoff decomposition, so B+g^{-1}kB− = B+wsB− as required.

(iii) Suppose for every choice of kB− with δ−(hB−, kB−) = s ∈ S we had B+g^{-1}kB− = B+wB−. Then B+wB−sB− ⊆ B+wB− and therefore ws ∈ B+wB−, contradicting the Birkhoff decomposition. Therefore there is some coset kB− with δ−(hB−, kB−) = s ∈ S and B+g^{-1}kB− = B+wsB−, so δ∗(gB+, kB−) = ws, as required. 
Let G be a group which admits an abstract root system (in the sense of definition 2.3.4 without the condition that Φ spans (h0)∗). A pair of roots {a, b} ⊆ Φ is said to be prenilpotent if there are elements w1, w2 ∈ W such that {w1(a), w1(b)} ⊆ Φ+ and {w2(a), w2(b)} ⊆ Φ−, or equivalently that the intersections of half spaces a ∩ b and (−a) ∩ (−b) are both nonempty. For a prenilpotent pair {a, b} we define

[a, b]:= {c ∈ Φ | a ∩ b ⊆ c and (−a) ∩ (−b) ⊆ (−c)} and set (a, b):= [a, b] \{a, b}.

Definition 4.5.8 Twin root datum (Root group datum)
Let G be a group with abstract root system Φ = Φ+ ⊔ Φ−. A system (G, (Ua)a∈Φ) is called a twin root datum if it satisfies the following axioms. (For ease of notation we define Uε = ⟨Ua | a ∈ Φε⟩.)

(i) Each Ua is a non-trivial subgroup of G.

(ii) G = ⟨Ua | a ∈ Φ⟩.

(iii) If b ∈ Π (the set of simple roots), then Ub ⊄ U−.

(iv) If a ∈ Φ and u ∈ Ua \ {1}, then there exist elements u1, u2 ∈ U−a such that u1uu2 conjugates Ub to U_{sa(b)} for each b ∈ Φ.

(v) If the pair {a, b} ⊆ Φ is prenilpotent then the commutator group [Ua, Ub] ⊆ ⟨Uc | c ∈ (a, b)⟩.

(vi) If b ∈ Π then there is some b' in the root subsystem generated by b such that Ua ⊆ Ub' for all a in the root system generated by b.

It can be shown that the elements chosen in part (iv) of this definition are uniquely determined so the map µ(u) = u1uu2 is well defined.

The groups Ua are called root subgroups of G, explaining the description of such an object as a root group datum. The following proposition justifies our choice of describing this object as a twin root datum.

Proposition 4.5.9 Groups with a twin root datum admit a twin BN-pair
If G is a group which admits a twin root datum (G, (Ua)a∈Φ) then (G, B+, B−, N, S) is a twin BN-pair, where
T := ∩_{a∈Φ} NG(Ua),
N := ⟨T ∪ {µ(u) | u ∈ Ua \ {1}, a ∈ Φ}⟩ and Bε = TUε.

We follow the proof given in [Ma07].
Proof: U+ ⊆ B+ and T ⊆ N, so G is certainly generated by B+ ∪ N. Next we wish to prove that B+ ∩ N = T and T is normal in N. T is the intersection of the normalisers of the Ua, so it follows that tUat^{-1} = Ua for all a ∈ Φ and all t ∈ T. Definition 4.5.8(iv) tells us that the action of conjugating by elements of N permutes the root subgroups. However, B+ ∩ N is the subgroup of N consisting of all those elements generated by T and the positive root subgroups, so must fix the positive root subgroups under the conjugation action. Therefore B+ ∩ N is normal in N and B+ ∩ N = T. The group W := N/T acts on the set of root groups by conjugation. Again by property (iv), for each a ∈ Φ there is some element w ∈ W such that for some suitable b,

w^2 · Ub = w · U_{sa(b)} = U_{sa(sa(b))} = Ub.

Thus W is generated by involutions. The remaining two properties are easy consequences of the definition of an abstract root system. The same arguments hold with B+ replaced by B− so we have shown that G admits two BN-pairs. The remaining axioms for a twin BN-pair are more technical and are therefore omitted. They may be found in [Ti87]. 

4.6 Semisimple Lie groups with BN-pairs and twin BN-pairs

In this section we will show that certain semisimple Lie groups admit BN-pairs and twin BN-pairs and therefore act strongly transitively on associated buildings and twin buildings respectively. We will follow the method given in [AB08], section 7.9.2, specifying the construction to Lie groups.

Let G be a connected semisimple Lie group, whose Lie algebra g is a complex semisimple Lie algebra. Such semisimple Lie groups will be referred to as complex semisimple Lie groups. Such groups admit an abstract root system Φ(g, h) where h is a Cartan subalgebra of g.

To each root a ∈ Φ there is a corresponding copy of sl2(C) contained in g, which is generated by the set {Ea,E−a,Ha}, where Ea ∈ ga, E−a ∈ g−a and Ha ∈ h.

The exponential mapping takes this set to a copy of SL2(C) in G which will be denoted by Ga.

We denote by Ua the image of the set of all upper unitriangular matrices in SL2(C) contained in Ga. We claim that (G, (Ua)a∈Φ) is a twin root datum. As with the definition, we will define the sets Uε = ⟨Ua | a ∈ Φε⟩ for ease of notation.
(i) Each Ua is clearly nontrivial as Ua ≅ (C, +).

(ii) Ga = ⟨Ua, U−a⟩, so all copies of Ga are contained in ⟨Ua | a ∈ Φ⟩. G is connected, so the exponential map is onto by proposition 3.3.7(iii). Therefore ⟨Ua | a ∈ Φ⟩ = G.

(iii) U− is generated by lower triangular matrices, so if b ∈ Π ⊆ Φ+, then Ub is a group of upper triangular matrices. U− ≠ G, so it follows that Ub ⊄ U−.

Finally, we note that as the root system generated by any root b is simply the pair {b, −b}, property (vi) holds trivially. For the remaining two properties we introduce some additional notation.

We define xa : C → Ua to be the obvious isomorphism and define the following elements of G,

ma(z) := xa(z)x−a(−z^{-1})xa(z), ma := ma(1), and ha(z) := ma(z)(ma)^{-1},

for all a ∈ Φ and all z ∈ C \ {0}. From [St68], lemma 28(a), we see that ha : C^× → G is in fact a homomorphism, so {ha(z) | z ∈ C^×} is a subgroup of G which is isomorphic to the multiplicative group C^×.

We now determine conjugation relations on the elements ma and ha(z). The first two relations are

ma xb(z)(ma)^{-1} = x_{sa(b)}(±z) and

ha(z) xb(z') ha(z)^{-1} = xb(z^{⟨b,a∨⟩} z')
for all a, b ∈ Φ, all z ∈ C^× and all z' ∈ C. (Here, a∨ := 2a/⟨a, a⟩.) Proofs of both these results can be found in [St68] on page 30. We will soon see that the exact nature of the ± in the first equation is made irrelevant for our purposes.

Combining these two results with the definition of ha(z), we obtain the new formula

ma(z) xb(z') ma(z)^{-1} = x_{sa(b)}(z^{⟨sa(b),a∨⟩} z'). (†)

Setting a = b we see that ha(z) xa(z') ha(z)^{-1} = xa(z^2 z'), and therefore we obtain the first important commutator formula

[ha(z), xa(z')] = xa((z^2 − 1)z').

From this we deduce that [ha(z), Ua] = Ua, provided z ∉ {0, ±1}.
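These relations can be checked symbolically inside a single rank one subgroup Ga ≅ SL2(C), taking xa(z) and x−a(z) to be the upper and lower unitriangular matrices as above. The following sympy sketch (our own verification of this 2 × 2 model only, not part of the argument) confirms that ha(z) is the diagonal torus element diag(z, z^{-1}) and that the two displayed formulas hold.

import sympy as sp

z, w = sp.symbols("z w", nonzero=True)

def x_pos(u):
    return sp.Matrix([[1, u], [0, 1]])       # x_a(u), upper unitriangular

def x_neg(u):
    return sp.Matrix([[1, 0], [u, 1]])       # x_{-a}(u), lower unitriangular

def m(u):                                     # m_a(u) = x_a(u) x_{-a}(-u^{-1}) x_a(u)
    u = sp.sympify(u)
    return x_pos(u) * x_neg(-1 / u) * x_pos(u)

def h(u):                                     # h_a(u) = m_a(u) m_a(1)^{-1}
    return m(u) * m(1).inv()

# h_a(z) is the diagonal torus element diag(z, z^{-1})
print(sp.simplify(h(z) - sp.diag(z, 1 / z)))                         # zero matrix

# h_a(z) x_a(w) h_a(z)^{-1} = x_a(z^2 w), the case b = a of the relation above
print(sp.simplify(h(z) * x_pos(w) * h(z).inv() - x_pos(z**2 * w)))   # zero matrix

# commutator [h_a(z), x_a(w)] = x_a((z^2 - 1) w)
comm = h(z) * x_pos(w) * h(z).inv() * x_pos(w).inv()
print(sp.simplify(comm - x_pos((z**2 - 1) * w)))                     # zero matrix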

Secondly, we deduce conjugation relations on the generators xa(z). Let a and b be two nonproportional roots and let z, z' ∈ C; then the commutator relationship has the form
[xa(z), xb(z')] = ∏_{i,j∈N, ia+jb∈Φ} x_{ia+jb}(c_{ij} z^i (z')^j). (‡)

Certain details are omitted here, as an ordering must be chosen on the elements in the product (as they do not commute). The integers cij depend on a, b and the order chosen. We can now complete the proof that G admits a twin root datum.

Property (v) follows from (‡) and the fact that {ia + jb | i, j ∈ N} ∩ Φ ⊆ (a, b). Finally, to prove property (iv), let u = xa(z) ∈ Ua \ {1}; then choosing u1 = m−a(−z^{-1}) and u2 = (u1)^{-1} will suffice, by (†). 

This result combined with proposition 4.5.9 is sufficient to show that every complex semisimple Lie group admits a twin BN-pair and therefore admits a BN-pair and acts strongly transitively on both an associated building and an associated twin building. We will use this result in the next chapter.

Chapter 5

Borovoi’s theorem

The first application we make of the theory developed in the preceding chapters is to prove a topological version of Borovoi’s theorem. The main reference for this chapter is [GGH09], sections 1-3.

5.1 Introductory Category theory

Definition 5.1.1 Categories

A pair I = (ob(I), mor(I)) is called a category if the following properties hold.
(i) Every f ∈ mor(I) has a unique source object a ∈ ob(I) and a unique target object b ∈ ob(I). This is denoted by f : a → b. The collection of all f ∈ mor(I) with f : a → b is denoted by mor(a, b).
(ii) If f ∈ mor(a, b) and g ∈ mor(b, c) then there is some h ∈ mor(a, c) with h = g ◦ f. (mor(I) is closed under composition.)
(iii) If f ∈ mor(a, b), g ∈ mor(b, c) and h ∈ mor(c, d) then h ◦ (g ◦ f) = (h ◦ g) ◦ f. (Composition in mor(I) is associative.)

(iv) For every x ∈ ob(I) there is some ιx ∈ mor(x, x) such that for every f ∈ mor(a, b), ιb ◦ f = f = f ◦ ιa.

ob(I) is called the class of objects of I and mor(I) is called the class of morphisms of I. A category is called small if ob(I) is a set.

Example 5.1.2 Important categories of groups

G is the category of all groups and group homomorphisms. Group homomorphisms are associative and closed under composition. Part (iv) of the definition follows by taking the identity automorphism of each group.

TG denotes the category of all topological groups under continuous (group) homomorphisms. The axioms of a category are easily verified in this case, using the observations made for the category G. Similarly, HTG is the category of Hausdorff topological groups under continuous group homomorphisms, LCG is the category of locally compact σ-compact groups under continuous group homomorphisms and LIE is the category of Lie groups under Lie group homomorphisms. It is easy to verify that these are indeed categories.

Example 5.1.3 The (small) category of a poset

Let (P, ≤) be a partially ordered set. Then we can form a category I with ob(I) = P and given two elements a, b ∈ P we define a unique morphism ι : a → b if and only if a ≤ b. It is important that the morphism a → b is unique when it exists to ensure that parts (ii)-(iv) of the definition hold. We should also note that part (iv) of the definition fails if the ordering is strict. One particular example which will be important is the category formed from the poset of all 1 and 2 element subsets of a finite set, under

set inclusion. Later, we may refer to a poset as a small category, implicitly using the association constructed in this example.
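For concreteness, the following Python sketch (our own illustration; the underlying set S = {1, 2, 3} and all names are hypothetical) builds this small category for the poset of 1 and 2 element subsets of a finite set and checks parts (ii) and (iv) of definition 5.1.1 directly.

# the small category of 1- and 2-element subsets of a finite set under inclusion
from itertools import combinations

S = {1, 2, 3}  # a hypothetical finite set

objects = [frozenset(c) for k in (1, 2) for c in combinations(S, k)]
# there is exactly one morphism a -> b when a is contained in b
morphisms = [(a, b) for a in objects for b in objects if a <= b]

def compose(f, g):
    """Composite g . f of morphisms f : a -> b and g : b -> c."""
    (a, b), (b2, c) = f, g
    assert b == b2, "morphisms are not composable"
    return (a, c)

# closure under composition and existence of identities (parts (ii) and (iv))
assert all(compose(f, g) in morphisms
           for f in morphisms for g in morphisms if f[1] == g[0])
assert all((a, a) in morphisms for a in objects)
print(len(objects), len(morphisms))   # 6 objects, 12 morphisms (6 of them identities)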

Definition 5.1.4 Diagrams

Let I be a small category. A diagram of I in a category A is a covariant functor δ : I → A. Specifically, this means that

δ(f) : δ(a) → δ(b) ∈ mor(A) whenever f : a → b ∈ mor(I).

Example 5.1.5 A diagram in the category of groups

Let W be a Coxeter group with presentation

W = ⟨ S = {s1, s2, . . . , sn} | (sisj)^{mij} for all i ≠ j, (si)^2 ⟩,
and let I be the small category corresponding to the poset (P(S), ⊆), where P(S) denotes the power set of S. There is a natural diagram δ : I → G where

δ(S') = ⟨ S' = {s'1, s'2, . . . , s'k} ⊆ S | (s'is'j)^{m'ij} for all i ≠ j, (s'i)^2 ⟩.
The set inclusion S' ⊆ S'' defines a monomorphism δ(S') → δ(S''). A proof of this can be approached from the viewpoint of Coxeter diagrams, as the subdiagram of W generated by the vertices of S' is itself a subdiagram of the Coxeter diagram generated by the vertices of S''.

Definition 5.1.6 Cones and colimits

Let δ : I → A be a diagram. A cone over δ is a pair (G, (ψi)i∈ob(I)), where G ∈ ob(A) and ψi : δ(i) → G are morphisms which satisfy

ψj ◦ δ(a) = ψi for all i, j ∈ ob(I) and all a ∈ mor(i, j).

A cone (G, (ψi)i∈ob(I)) is called a colimit of δ if for each cone (H, (ϕi)i∈ob(I)) there is a unique morphism ϕ : G → H such that ϕ ◦ ψi = ϕi.

Example 5.1.7 Completing example 5.1.5

By construction, we see that (W, ι_{S'}) is a cone over the diagram δ, where ι_{S'} : δ(S') → W is the identity monomorphism. Moreover, given any other cone (H, ϕ_{S'}) there must be a monomorphism ϕ : W → H with the property that ϕ ◦ ι_{S'} = ϕ_{S'}. Thus (W, ι_{S'}) is a colimit of δ. The next proposition will prove that this colimit is unique up to isomorphism.

Proposition 5.1.8 Existence of colimits in categories of groups

Colimits exist for any diagram in the categories G, TG and HTG and are unique up to isomorphism. That is, given any two colimits (G, (ψi)i∈ob(I)) and (H, (ϕi)i∈ob(I)) of δ there is some (homeomorphic) isomorphism φ : G → H such that φ ◦ ψi = ϕi (equivalently ψi = φ^{-1} ◦ ϕi) for all i ∈ ob(I).

Proof: Let δ : I → G be a diagram. Then the group

G = (⊔_{i∈ob(I)} δ(i)) / ∼

is a colimit of δ, where ∼ is the equivalence relation a ∼ b if and only if there exist elements i, j ∈ I such that {a, b} ⊆ δ(i) ∩ δ(j).

Using this result, a diagram δ : I → TG clearly has a colimit (G, (λi)i∈ob(I)) in G. There is a unique finest topology O on G which makes each λi continuous, and ((G, O), (λi)i∈ob(I)) is a colimit of the diagram δ in TG.

Similarly, a diagram δ : I → HTG has a colimit ((G, O), (λi)i∈ob(I)) in TG. Let H be the quotient of G by the closure of {1G} and let q : G → H be the canonical quotient homomorphism. Then ((H, O'), (q ◦ λi)i∈ob(I)) is a colimit of δ in HTG, where O' is the quotient topology.

To prove uniqueness suppose (G, ψi) and (H, ϕi) are two colimits of δ : I → G. Then there exist maps ψ : G → H and ϕ : H → G such that

ψ ◦ ψi = ϕi and ϕ ◦ ϕi = ψi for all i ∈ ob(I).

Therefore, (ϕ ◦ ψ) ◦ ψi = ψi and (ψ ◦ ϕ) ◦ ϕi = ϕi for all i ∈ ob(I). From this we deduce that ϕ ◦ ψ = ιG and ψ ◦ ϕ = ιH, so G ≅ H and setting φ := ψ proves that the colimits are isomorphic. As the topologies chosen are the unique finest topologies, it follows that colimits are unique in the other two categories as well. 

5.2 Final group topologies

Definition 5.2.1 Final group topologies

Let X be a set and let (fi)i∈I be a collection of functions fi : Xi → X where the Xi are topological spaces. The final topology on X with respect to (fi) is the finest topology under which each fi is continuous; equivalently, U ⊆ X is open if and only if fi^{-1}(U) is open in Xi for each i ∈ I. If X is a group, the finest group topology making each fi continuous is called the final group topology.

Definition 5.2.2 Direct limit topologies
A directed set is a pair (I, ≤) where I is a set and ≤ is a reflexive, transitive relation such that every pair of elements of I has an upper bound.

Let {Ai | i ∈ I} be a family of topological spaces indexed by a directed set I and suppose there are continuous functions fij : Ai → Aj for all i ≤ j with the following properties:

(i) fii is the identity homeomorphism on Ai,

(ii) fik = fjk ◦ fij.

Then (Ai, fij)i,j∈I is called a direct system over I. The underlying set A of the direct limit is given by

A = lim→(Ai) := (∪_i Ai) / ∼

where xi ∼ xj if and only if xi ∈ Ai, xj ∈ Aj and fik(xi) = fjk(xj) for some k ∈ I.
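To make the identification ∼ concrete, here is a minimal sketch (our own example, not from the text): for the direct system An = Z with bonding maps fn,m(x) = 2^{m−n}x for n ≤ m, whose direct limit is the group of dyadic rationals, two pairs (n, x) and (m, y) are identified exactly when they agree after mapping into some common Ak.

# direct system A_n = Z with bonding maps f_{n,m}(x) = 2^(m-n) * x for n <= m;
# an element of the direct limit is a class of pairs (n, x) with x in A_n.

def f(n, m, x):
    """Bonding map A_n -> A_m for n <= m."""
    assert n <= m
    return x * 2 ** (m - n)

def equivalent(p, q):
    """(n, x) ~ (m, y) iff f_{n,k}(x) == f_{m,k}(y) for some k; k = max(n, m) suffices here."""
    (n, x), (m, y) = p, q
    k = max(n, m)
    return f(n, k, x) == f(m, k, y)

# (0, 3) and (2, 12) both represent the dyadic rational 3 = 12 / 2**2
print(equivalent((0, 3), (2, 12)))   # True
print(equivalent((0, 3), (2, 13)))   # False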

The underlying set of a direct limit is constructed in much the same way as the colimit was constructed for δ in proposition 5.1.8.

Definition 5.2.3 Some topological properties
Let X be a topological space.
(i) Let C ⊆ X. A point x ∈ X is called an interior point of C if there is some open neighbourhood U of x contained in C.
(ii) X is called a Baire space if and only if any countable union of closed sets with empty interior has empty interior.
(iii) X is said to be σ-compact if and only if it is the union of countably many compact sets.

Lemma 5.2.4 Let G and H be Hausdorff topological groups and let f : G → H be a surjective continuous group homomorphism. If G is σ-compact and H is a Baire space, then f is an open map and H is locally compact.
Proof: Since G is σ-compact it follows that G = ∪_{n∈N} Kn where each Kn is compact in G. By surjectivity and continuity, H = ∪_{n∈N} f(Kn) and each f(Kn) is compact.

H is a nonempty open subset of itself, so as H is Baire, f(Kn) has nonempty interior for some n. H is therefore locally compact about an interior point of f(Kn), so as H is a topological group it must be locally compact about every point. Let ψ : G/ker(f) → H be the continuous group isomorphism induced by f and let q : G → G/ker(f) be the quotient homomorphism. Then ψ^{-1} ◦ f = q, which is continuous. Hence ψ is a homeomorphic isomorphism, so f is an open map. 

Proposition 5.2.5 Let G be a Hausdorff topological group and let (fi)i∈I be a countable family of continuous maps fi : Xi → G, such that Xi is σ-compact for each i ∈ I and ∪_{i∈I} fi(Xi) generates G. Then G is locally compact if and only if it is a Baire space. In this case the given locally compact topology on G is the final group topology with respect to (fi)i∈I.

Proof: Every locally compact space is a Baire space. Conversely, write T for the given topology on G. The final group topology O is at least as fine as T, so is Hausdorff. Since (G, T) is generated by its σ-compact subset ∪_{i∈I} fi(Xi), the topological group (G, T) is σ-compact. Also, as (G, O) is Baire, lemma 5.2.4 shows us that the identity map (G, T) → (G, O) is a homeomorphic isomorphism, so (G, O) is locally compact. 

Corollary 5.2.6 Let δ : I → LCG be a diagram where I := ob(I) is countable, and write Gi := δ(i). If there exists a locally compact Hausdorff group topology O on G making λi : Gi → (G, O) continuous for each i ∈ I and such that (G, (λi)i∈I) is a colimit of δ in G, then ((G, O), (λi)i∈I) is a colimit of δ in the categories of topological groups, Hausdorff topological groups and locally compact groups. Furthermore, if each Gi is a σ-compact Lie group and (G, O) is a Lie group, then ((G, O), (λi)i∈I) is a colimit of δ in the category LIE of Lie groups under Lie group homomorphisms.

Proof: By proposition 5.2.5 we know that ((G, O), (λi)i∈ob(I)) is a colimit of δ in the category of topological groups, so as this group is also Hausdorff and locally compact it follows that it is the colimit in these categories as well. Finally, if all these groups are Lie groups, then this is also a colimit in LIE.

5.3 A topological version of Borovoi’s theorem

Let K be a simply connected compact semisimple Lie group, with Lie algebra g0, and let TK be a maximal torus of K with Lie algebra t0. Let G be a connected semisimple Lie group with Lie algebra g = (g0)C and let T be a maximal torus of G containing TK. Let Φ = Φ(G, T) be the corresponding root system and let Π be a system of simple roots of Φ. To each root a ∈ Π we may associate some compact semisimple Lie group Ka ≅ SU2(C) of rank one such that T normalises Ka. For any two distinct simple roots a, b ∈ Π we denote by Kab the group generated by Ka and Kb, and by Φab we denote its root system relative to the torus Tab = T ∩ Kab. Kab is a compact semisimple Lie group of rank two and {a, b} is a simple system for Φab.

The pair (Ka,Kb) is called a standard pair of Kab as is any image of (Ka,Kb) under a homomorphism from Kab onto a central quotient of Kab. Standard pairs are conjugate in G, a result which can be determined using the fact that maximal tori are conjugate (3.3.5).

Definition 5.3.1 Fundamental domains
Let A be a poset and let G be a group which acts on A. F ⊆ A is called a fundamental domain for the action of G on A if the following hold.

(i) b ∈ F whenever a ∈ F and b ≤ a,

(ii) A = G(F) = {g(a) | a ∈ F, g ∈ G},

(iii) G(a) ∩ F = {a} for all a ∈ F.

In particular, if G is a group acting strongly transitively on its associated thick building ∆ ≅ G/B, and σ is the chamber corresponding to the chosen subgroup B of G, then ∆≤σ is a fundamental domain for the action of G on ∆.

Lemma 5.3.2 Tits' lemma
Let A be a poset, let G be a group which acts on A and let F be a fundamental domain for the action of G on A. Let I be a small category with ob(I) = F and morphisms given by a → b whenever b ≤ a ∈ F. Then the poset A is simply connected if and only if G is the colimit of the diagram δ : I → G, where δ(a) = Ga and δ(a → b) = (Ga ↪ Gb); here Ga denotes the stabiliser of a in G. (A poset is simply connected if and only if the associated simplicial complex is.)

Proof: See [Ti86]. 

Theorem 5.3.3 Topological version of Borovoi's theorem

Let K be a simply connected compact semisimple Lie group with topology O, let TK be a maximal torus of K, let Φ = Φ(G, T) be the corresponding root system and let Π be a simple system for Φ. Let I be a small category whose objects are the one and two element subsets of Π; this is often written as ob(I) = (Π choose 1) ∪ (Π choose 2). The morphisms of I are precisely the inclusions {a} → {a, b} for all a, b ∈ Π. Let δ : I → LCG be a diagram with δ({a}) = Ka, δ({a, b}) = Kab and δ({a} → {a, b}) = Ka ↪ Kab. Then ((K, O), (ιi)i∈ob(I)) is a colimit of δ in the category LIE of Lie groups.
Proof: If the rank of K is at most 2 the result holds trivially, so we may assume that |Π| ≥ 3. Let B be a Borel subgroup containing the maximal torus T of G and construct the associated building ∆ ≅ G/B. The set ∆≤σ, where σ is the chamber of ∆ corresponding to B, is a fundamental domain for the action of G on ∆. Since K is a compact real form of G, the Iwasawa decomposition tells us that G = KB, so ∆≤σ is also a fundamental domain for the action of K on ∆.

The stabilisers in K of the subsimplices of codimensions 1 and 2 are precisely the groups KaTK and KabTK respectively. Tits’ lemma and the Solomon-Tits’ theorem (4.3.4) together allow us to deduce that K is the colimit of the diagram

δ : I → G, where δ(a) = StabK (a) and δ(a → b) = (StabK (a) ,→ StabK (b)). This can be done as buildings of rank at least three are simply connected. To get a result concerning the groups KaTK and KabTK we must apply induction on |Π|, taking stabilisers over more and more roots, until we reach the stabilisers of subsimplices of codimension 1 and 2. We can do this as there are exactly two elements of Π remaining when we obtain the groups KaTK and KabTK , so at all previous times (when we may be hoping to apply Tits’ lemma) there were at least three, so the rank of the associated building was at least three and therefore simply connected. From this we deduce that the group K is a colimit in the category of abstract groups of the diagram

δ' : I → LCG with δ'({a}) = KaTK, δ'({a, b}) = KabTK and δ'({a} → {a, b}) = (KaTK ↪ KabTK).

This is still some way from what we need to prove. However, it is known that the torus TK can be reconstructed from the rank two tori Tab (see [GLS95], lemma 29.3), so K is in fact a colimit of the diagram δ in the category of abstract groups. Applying corollary 5.2.6 to δ completes the proof. 

Chapter 6

Locally kω topological groups

kω spaces play an important role in the theory of Kac-Moody groups, as it will be shown that a Kac-Moody group, endowed with the Kac-Peterson topology, is a kω topological group. The information contained in this chapter will be used to prove this result and allow us to deduce that the category KOG of kω groups and continuous homomorphisms is the natural setting in which to attempt any extension of Borovoi's theorem to compact real forms of complex Kac-Moody groups.

6.1 Locally kω topological spaces

Definition 6.1.1 kω spaces

Let X be a Hausdorff topological space. X is called a kω space if there exists a sequence of compact subsets K1 ⊆ K2 ⊆ · · · ⊆ X such that X = ∪_{n∈N} Kn and U ⊆ X is open if and only if U ∩ Kn is open in Kn (with the subspace topology) for each n ∈ N. The sequence (Kn) is called a kω sequence for X.

A Hausdorff topological space X is said to be locally kω if every point has an open neighbourhood which is kω in the subspace topology.

Example 6.1.2 Examples of kω and locally kω spaces

(i) Every kω space is locally kω.

(ii) Every compact Hausdorff topological space is a kω space.

(iii) Any countable discrete topological space is a kω space and any discrete topological space is locally kω.

(iv) R^n is a noncompact kω space with kω sequence ([−k, k]^n)_{k∈N}.

Lemma 6.1.3 Every quotient map f : X → Y between kω spaces is compact covering. In other words, for each compact set K ⊆ Y there is a compact set L ⊆ X with the property that K ⊆ f(L).

Proof: Follows from [Gl05], lemma 1.7(d). 

Proposition 6.1.4 More kω spaces

(i) Locally compact Hausdorff spaces are locally kω.

(ii) σ-compact, locally compact Hausdorff spaces are kω.

(iii) Closed subsets of kω spaces are kω in the subspace topology.

(iv) Finite products of kω spaces are kω in the product topology.

(v) Countable disjoint unions of kω spaces are kω.

(vi) Hausdorff quotients of kω spaces are kω.

(vii) Open subsets of locally kω spaces are locally kω in the subspace topology.

(viii) Closed subsets of locally kω spaces are locally kω in the subspace topology.

(ix) Finite products of locally kω spaces are locally kω in the product topology.

(x) Hausdorff quotients of locally kω spaces under open quotient maps are locally kω.

Proof: (i) Each point is contained in an open neighbourhood which is compact in the subspace topology and therefore is kω. As the space is Hausdorff it is locally kω.

(ii) The space is locally kω as it is locally compact and is covered by countably many compact sets. Using these facts we can build a kω sequence for the space. Thus the space is kω, since we assumed it was Hausdorff.

(iii) Intersecting the closed subset with each of the terms in the kω sequence yields a kω sequence for the closed subspace.

(iv) Create a kω sequence for the product space by taking the pointwise direct product of the kω sequences.

(v) To find a kω sequence for this space we define (K_n^j)_{n∈N} to be a kω sequence of the jth element of the disjoint union. We then form a kω sequence (Kn) for the disjoint union of these spaces via the rule Kn = ⊔_{i=1}^n K_n^i.

(vi) The quotient map is compact covering, by lemma 6.1.3, so the image of a kω sequence can be enlarged to a kω sequence in the quotient space. The enlarging must be carefully chosen to ensure that all open sets are open in each subspace, but this can be accomplished. The underlying space is also Hausdorff, by assumption, so is kω.

For a proof of part (vii) see [GGH09] page 11. The other locally kω properties follow relatively easily from the corresponding kω ones. 

We will use these properties of kω spaces repeatedly. Now we present one additional lemma which will be needed for the propositions which complete this introductory section.

Lemma 6.1.5 Let X be locally kω and let Y ⊆ X be σ-compact. Then Y has an open neighbourhood U ⊆ X which is a kω space.

Proof: Y is σ-compact, so there exists a sequence (Un) of open kω-spaces contained in X such that Y ⊆ U := ∪_{n∈N} Un. U is a Hausdorff quotient of the kω space ⊔_{n∈N} Un, so is itself a kω-space by proposition 6.1.4 (v) and (vi). 

The final two results from this section prove that kω spaces are an excellent setting in which to study direct limits, a result which will be crucial when extending Borovoi's theorem.

Proposition 6.1.6 Let X1 ⊆ X2 ⊆ ... be a sequence of kω spaces such that the inclusion map Xn ↪ Xn+1 is continuous for each n ∈ N. Then the final topology O on X := ∪_{n∈N} Xn with respect to the inclusion maps Xn ↪ X is kω and makes (X, O) the direct limit of the Xn in the categories of topological spaces, Hausdorff topological spaces and kω spaces.

This proposition also holds if the condition that X and Xn are kω spaces is replaced by them being locally kω spaces and the conclusion that (X, O) is a direct limit in the category of kω spaces is replaced by that of locally kω spaces.

Proof: It suffices to show that (X, O) is kω and is the direct limit in the category of topological spaces. The rest then follows trivially. As each Xj is a kω space, let (K_n^j)_{n∈N} be a kω sequence for Xj. Replacing K_n^j by ∪_{i=1}^j K_n^i wherever necessary, we may assume that K_n^i ⊆ K_n^j whenever i ≤ j. Define Kn := K_n^n for all n ∈ N. Each Kn is compact and the inclusions Kn ↪ Kn+1 are topological embeddings, so using [Gl03] proposition 3.6(a) we can see that the final topology T on X with respect to the inclusion maps Kn ↪ X is Hausdorff and therefore (X, T) is a kω space, as (Kn) forms a kω sequence for X. It now remains to show that T = O.

As the inclusion maps Kn ↪ Xn ↪ (X, O) are continuous, O ⊆ T by the definition of a final topology (5.2.1). Now suppose U ∈ T. Then U ∩ Km is open in Km for all m ∈ N. Given j, n ∈ N, define k := max{j, n} and note that U ∩ K_n^j = (U ∩ Kk) ∩ K_n^j, which is open in K_n^j as the inclusion map K_n^j ↪ Kk is continuous. Hence U ∩ Xj is open in Xj for each j and U ∈ O, as required. 

For the locally kω case, see [GGH09] page 12.

From this proposition we may deduce that the direct limit space of each countable direct system of kω spaces is itself a kω space.

Proposition 6.1.7 Let X1 ⊆ X2 ⊆ ... and Y1 ⊆ Y2 ⊆ ... be sequences of topological spaces with continuous inclusion maps. Set X := ∪_{n∈N} Xn = lim→(Xn) and Y := ∪_{n∈N} Yn = lim→(Yn), endowed with the direct limit topology. Denote by lim→(Xn × Yn) the set ∪_{n∈N} (Xn × Yn) equipped with the direct limit topology over the inclusion maps Xn × Yn ↪ X × Y. The natural map

β : lim→(Xn × Yn) → lim→(Xn) × lim→(Yn), given by (x, y) ↦ (x, y),

is a continuous bijection. If each Xn and Yn is a locally kω space, then β is a homeomorphism.

Proof: The inclusion Xn × Yn ↪ X × Y is continuous for each n ∈ N by assumption, so β is continuous. It is also a bijection, as the operations of taking direct limits and products commute in the category of sets. To prove the second part we will show that for each pair (x, y) ∈ X × Y there are open neighbourhoods U ⊆ X of x and V ⊆ Y of y such that X × Y and lim→(Xn × Yn) induce the same topology on U × V. Without loss of generality we may assume (x, y) ∈ X1 × Y1.

Each Xn and Yn is locally kω, so we let U1 and V1 be open kω neighbourhoods of x in X1 and y in Y1 respectively. Using lemma 6.1.5 we construct non-descending (by inclusion) sequences of open kω spaces (Un) and (Vn) where each Un ⊆ X and each Vn ⊆ Y. Define U := ∪_{n∈N} Un and V := ∪_{n∈N} Vn.

By proposition 6.1.4, parts (v) and (vi) as applied in the proof of lemma 6.1.5, U and V are both kω spaces with respect to the topology induced by X and Y respectively. We define kω sequences (K_n^j) for Uj and (L_n^j) for Vj, as was done in the proof of proposition 6.1.6, and obtain kω sequences (Kn) for U and (Ln) for V by setting Kn = K_n^n and Ln = L_n^n. Proposition 3.3 of [Gl03] shows that direct products and countable direct limits behave well on compact spaces, so we may deduce that

Uj × Vj = lim→(K_n^j) × lim→(L_n^j) = lim→(K_n^j × L_n^j).

Therefore (K_n^j × L_n^j) is a kω sequence for Uj × Vj, and it follows that (Kn × Ln) is a kω sequence for the open subset lim→(Un × Vn) of lim→(Xn × Yn) equipped with the subspace topology. Using proposition 3.3 of [Gl03] again we see that

U × V = lim→(Kn) × lim→(Ln) = lim→(Kn × Ln) = lim→(Un × Vn).

−→lim(Un × Vn) is simply the set U × V equipped with the subspace topology induced from lim−→(Xn × Yn). (For a more detailed discussion of this fact, see [Gl03] lemma 3.1). Thus X ×Y and−→ lim(Xn ×Yn) induce the same topology on U × V , so the two spaces are locally homeomorphic about each point. Thus β is a homeomorphism. 

6.2 Locally kω groups

Definition 6.2.1 kω and locally kω groups

A topological group is said to be a kω group if it is also a kω space. Similarly, a locally kω group is a topological group which is also a locally kω space.

Proposition 6.2.2 The following conditions are equivalent for a topological group G.

(i) G is a locally kω group,

(ii) G has an open subgroup H which is a kω group,

(iii) G = lim−→(Xn) as a topological space for some non-descending sequence X1 ⊆ X2 ⊆ ... of closed locally compact subsets Xn of G equipped with the induced topology.

Proof: See [GGH09] proposition 5.3. 

Proposition 6.2.3 Let G1 ⊆ G2 ⊆ ... be a non-descending sequence of kω groups such that the inclusion maps Gn ↪ Gn+1 are continuous homomorphisms. G := ∪_{n∈N} Gn permits a group structure which makes each inclusion Gn ↪ G a homomorphism.

The final topology with respect to these inclusion maps makes G a kω group and also the direct limit −→lim(Gn) in the categories of topological spaces and topological groups. Proof: G is trivially the direct limit in the category of topological spaces and by proposition 6.1.6, G is a kω group when equipped with this topology. It remains to show that G is a topological group.

Let λn : Gn → Gn be the inversion map on Gn and λ : G → G the inversion map on G. Then λ = lim→(λn) and λ is continuous as each λn is continuous. The direct limit map λ = lim→(λn) is the map from G to G which extends each of the maps λn : Gn → Gn; it is well defined as Gn ⊆ G, Gn is closed under taking inverses and the inverse of an element in Gn is the same as the inverse of that element in Gm for all m ≥ n. The corresponding map for group multiplication is also shown to be well-defined by similar observations.

Let µn : Gn × Gn → Gn be the group multiplication map on Gn and µ : G × G → G the group multiplication map on G. Identifying the topological spaces G × G with lim−→(Gn × Gn) by applying proposition 6.1.7, µ is given by lim−→(µn) : lim−→(Gn × Gn) → −→lim(Gn) which is continuous. Therefore G is a topological group. 

An important remark concerning the previous theorem is that it also holds for locally kω spaces, as both propositions 6.1.6 and 6.1.7 have analogues for these spaces. We now give an important corollary which relates more specifically to the setting of amalgams of Kac-Moody groups.

Corollary 6.2.4 Let I be a countable directed set (see definition 5.2.2) and let S := ((Gi)i∈I , (fij)i≥j) be a direct system of kω groups and continuous homomorphisms. Then S has a direct limit (G, fi)i∈I in the category of Hausdorff topological groups. The topology on G is the final group topology with respect to the family (fi) and the Hausdorff topological space G is the direct limit of S in the category of Hausdorff topological spaces. Furthermore, G is a kω group and thus is a direct limit of S in the category of kω groups as well.

Proof: Without loss of generality we may assume that I = (N, ≤). Set Nm to be the closure in Gm of ∪_{n≥m} ker(fn,m). Set Qm := Gm/Nm and let qm : Gm → Qm be the respective quotient morphism. Proposition 6.1.4(vi) tells us that each Qm is a kω group.

Let gn,m : Qm → Qn be the continuous homomorphism defined by the relation gn,m ◦ qm = qn ◦ fn,m. Therefore, the direct limit of S in the category of Hausdorff topological spaces is precisely the direct limit of ((Qn)n∈N, (gn,m)n≥m). By proposition 6.2.3, the direct limit of the latter system is a kω group. 

Proposition 6.2.5 Let G be a group and let (fi)i∈I be a countable family of maps fi : Xi → G such that each Xi is a kω space and ∪_{i∈I} fi(Xi) generates G. If the final group topology on G with respect to the family (fi) is Hausdorff, then G is a kω group with respect to this topology.

The proof of this proposition requires knowledge of the Markov free topological group and therefore is omitted. [Ma50] and [MMO73] provide the details necessary to complete a proof. 

Unlike the previous results, we do not claim that an analogue of this result holds for locally kω groups.

Corollary 6.2.6 Let δ : I → KOG be a diagram in the category KOG of kω groups under continuous homomorphisms, where I := ob(I) is countable. Write Gi := δ(i) for all i ∈ ob(I) and ψa := δ(a): Gi → Gj for all a ∈ mor(i, j). Let (G, (λi)i∈I ) be a colimit of δ in the category of Hausdorff topological groups. Then G is a kω group and (G, (λi)i∈I ) is a colimit of δ in the category of kω groups.

Proof: This follows directly from proposition 6.2.5.  This corollary is an analogue of corollary 5.2.6 in the category KOG. We will see that it also plays a significant role when extending Borovoi’s theorem.

With this introduction to kω groups complete we proceed to the study of Kac-Moody groups, where the motivation for this chapter will become clear.

Chapter 7

Kac-Moody groups

7.1 Kac-Moody algebras

We will introduce Kac-Moody algebras and show that they are in some senses a natural generalisation of complex semisimple Lie algebras to include infinite dimensional algebras. The theory will be illustrated by the key example sln(C[t, t^{-1}]), where C[t, t^{-1}] is the infinite dimensional algebra of Laurent polynomials with complex coefficients. The main sources for this chapter are [Ka85] and [GGH09], section 6.

Definition 7.1.1 Generalised Cartan matrices

An n × n square matrix A = (aij) is said to be a generalised Cartan matrix if aii = 2 and, for each i, j ∈ {1, . . . , n} with i ≠ j,

(i) aij ∈ Z0^− = {z ∈ Z | z ≤ 0},

(ii) aij = 0 if and only if aji = 0.

We can see from chapter 2 that all abstract Cartan matrices are generalised Cartan matrices, justifying the term 'generalised'.

Given a generalised Cartan matrix A we can define a Dynkin diagram ΓA for this matrix in exactly the same way as for an abstract Cartan matrix, with the one additional condition that if AijAji ≥ 4 then we label the edge between the ith and jth vertices +∞. This project will only consider 2-spherical generalised Cartan matrices, that is, n × n generalised Cartan matrices whose 2 × 2 submatrices
( Aii  Aij )
( Aji  Ajj )
for 1 ≤ i < j ≤ n are abstract Cartan matrices in the sense of definition 2.4.7. In terms of Dynkin diagrams, this means that the restriction of ΓA to any two vertices is isomorphic to A1 ⊕ A1, A2, B2 or G2, so in particular AijAji ≤ 3 for all i ≠ j.

Example 7.1.2 A 2-spherical generalised Cartan matrix

 2 −1 0 ··· 0 −1   −1 2 −1 0     .. .   0 −1 2 . .  The matrix A =    . .. ..   . . . −1 0     0 −1 2 −1  −1 0 ··· 0 −1 2 is a generalised Cartan matrix but not an abstract Cartan matrix, as the Dynkin diagram it generates is not one of those from figure 2.1.2. We deduce from this that A fails property (v) of definition 2.4.7.

The Dynkin diagram of A is called Ãn; it has (n + 1) vertices, arranged in a cycle.
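The defining conditions are easy to test mechanically. The following Python sketch (our own; the size n = 4 and the function names are illustrative assumptions) builds the matrix of example 7.1.2, checks the generalised Cartan matrix axioms and the 2-spherical bound AijAji ≤ 3, and observes that det A = 0, consistent with A not being an abstract Cartan matrix of finite type.

import numpy as np

def affine_A(n):
    """Circulant generalised Cartan matrix whose Dynkin diagram is an (n+1)-cycle."""
    m = n + 1
    A = 2 * np.eye(m, dtype=int)
    for i in range(m):
        A[i][(i + 1) % m] = -1
        A[i][(i - 1) % m] = -1
    return A

def is_gcm(A):
    m = len(A)
    return (all(A[i][i] == 2 for i in range(m))
            and all(A[i][j] <= 0 for i in range(m) for j in range(m) if i != j)
            and all((A[i][j] == 0) == (A[j][i] == 0) for i in range(m) for j in range(m)))

A = affine_A(4)
print(is_gcm(A))                                            # True
print(all(A[i][j] * A[j][i] <= 3                            # 2-spherical
          for i in range(len(A)) for j in range(len(A)) if i != j))
print(round(np.linalg.det(A)))                              # 0: not of finite type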

Definition 7.1.3 Realisations of generalised Cartan matrices
A realisation of an n × n generalised Cartan matrix A of rank l is a triple (h, Π, Π∨) where h is a vector space over C, Π = {c1, c2, . . . , cn} ⊆ h∗ and Π∨ = {h1, h2, . . . , hn} ⊆ h, which satisfies the following properties.
(i) Π and Π∨ are linearly independent in h∗ and h respectively,

(ii) ci(hj) = aij for all i and j, (iii) dim(h) = 2n − l.

We say two realisations (h1, Π1, Π1∨) and (h2, Π2, Π2∨) are isomorphic if there is a vector space isomorphism ψ : h1 → h2 such that ψ∗(Π2) = Π1 and ψ(Π1∨) = Π2∨, where ψ∗ is the dual of ψ.

Proposition 7.1.4 Let A be an n × n generalised Cartan matrix. A has a unique realisation up to isomorphism. Moreover, the realisations of two matrices A and B are isomorphic if and only if A can be obtained from B by a permutation of the index set {1, . . . , n}.

Proof: See [Ka85] proposition 1.1.  Using this proposition we may define the Kac-Moody algebra corresponding to a generalised Cartan matrix A.

Definition 7.1.5 The derived Lie algebra g0(A)
Let A = (aij) be an n × n generalised Cartan matrix. The derived Lie algebra of type A, denoted g0(A), is the Lie algebra generated by the set of elements {ei, fi, hi | 1 ≤ i ≤ n} subject to the relations:

(i) [hi, ej] = Aijej,

(ii)[ hi, fj] = −Aijfj,

(iii)[ hi, hj] = 0,

(iv)[ ei, fj] = −δijhi.

We note here that the hi's can only generate h as a vector space if the generalised Cartan matrix has rank n. However, the set {ei, fi, hi | 1 ≤ i ≤ n} generates the same Lie algebra as that generated by h ∪ {ei, fi | 1 ≤ i ≤ n}. By defining the bracket to be antisymmetric, we can see that g0(A) is indeed a Lie algebra. Using the results from chapter 2 we can deduce that g0(A) is finite dimensional if and only if A is an abstract Cartan matrix as defined by proposition 2.4.7. h is a maximal simultaneously diagonalisable abelian subalgebra of g0(A), so is a Cartan subalgebra of g0(A). (In the infinite dimensional case, we require that h is maximal amongst the simultaneously diagonalisable abelian Lie subalgebras of g. Corollary ??(i) shows that such a condition is unnecessary in the finite-dimensional case.) Using the method discussed immediately after definition 2.3.1, we can define a root-space decomposition of g0(A) relative to ad(h):

g0(A) = ⊕_{a∈h∗} ga = h ⊕ ⊕_{a∈Φ} ga,

where ga = {X ∈ g | [H,X] = a(H)X for all H ∈ h}. Notice that h = g0.

We will use the same terminology that was used in chapter 2, so for a 6= 0, ga is called a root-space if it is nontrivial and in such cases a is called a root. Elements of ga are called root vectors of the root a. The set of roots is denoted by Φ.

Returning to the definition of a realisation, we note that Π is always a subset of Φ, explaining why Π is sometimes referred to as the root basis.

Again the Weyl group is the group W generated by the root reflections S := {sa | a ∈ Π}. Unlike in the finite dimensional case, we cannot guarantee that W Π = Φ. To distinguish this we partition Φ = ΦR ∪ΦI where ΦR := W Π. ΦR is called the set of real roots and ΦI the set of imaginary roots.

Proposition 7.1.6 An important ideal of the derived Lie algebra The collection of all ideals of g0(A) which intersect h trivially has a unique maximal element, a.

Definition 7.1.7 The Kac-Moody algebra g(A) Let A be a generalised Cartan matrix with derived Lie algebra g0(A). Let a be the ideal guaranteed by proposition 7.1.6. Then the Kac-Moody algebra is defined as

g(A) = g0(A)/a.

The root-space decomposition on g0(A) induces a decomposition on g(A). We will outline an alternative construction of the Kac-Moody algebra g(A) in the case where A is symmetrisable, so A = DS where D is an invertible diagonal matrix and S is a symmetric matrix. In this case we can obtain the Kac-Moody algebra directly from definition 7.1.7 by imposing the two additional conditions (ad ei)^{1−aij}(ej) = 0 = (ad fi)^{1−aij}(fj). The Gabber-Kac theorem [Ka85](9.11) shows that these two constructions give equivalent Kac-Moody algebras.

Example 7.1.8 sln(C[t, t^{-1}]), for n ≥ 3

Let g := sln(C[t, t^{-1}]) := {X ∈ Mn(C[t, t^{-1}]) | Tr(X) = 0}, where C[t, t^{-1}] is the integral domain of all complex valued Laurent polynomials of the form

∑_{|k|≤N} ck t^k for some N.

We wish to show that for n ≥ 3, g is in fact isomorphic to the Kac-Moody algebra g(A), where A = (Aij) is the n × n generalised Cartan matrix of the shape given in example 7.1.2. Firstly, we will form a set of generators contained in g for the derived Lie algebra g0(A). For each i ∈ Zn = {1, . . . , n}, let ei be the matrix which is 0 in all but the (i, i + 1) entry, where it takes value t. Similarly, let fi be the matrix which is 0 in all but the (i + 1, i) entry, where it takes value −t^{-1}. We also define hi to be zero except in the ith diagonal entry, where it takes value 1, and in the (i + 1)th diagonal entry, where it takes value −1 (all indices being read modulo n).

{ei, fi, hi | 1 ≤ i ≤ n} as a Lie algebra.

We now need to verify that the four rules in definition 7.1.5 hold.

(i)[ hi, ei] = hiei − eihi = ei − (−ei) = 2ei = Aiiei, also

[hi, ei+1] = −ei+1 − 0 = Ai,i+1ei+1 and [hi, ei−1] = 0 − ei−1 = Ai,i−1ei−1. The remaining case holds trivially.

(ii) [hi, fi] = hifi − fihi = −fi − (fi) = −2fi = −Aiifi; similarly, [hi, fi±1] = fi±1 = −Ai,i±1fi±1 and the final case is also simple.

(iii) The span of {hi | 1 ≤ i ≤ n} is an abelian subalgebra of g, so this part holds.

(iv) [ei, fi] = eifi − fiei. This matrix takes value t(−t^{-1}) = −1 on the ith diagonal entry and value −(−t^{-1})t = 1 on the (i + 1)th diagonal entry. Therefore [ei, fi] = −hi = −δijhi.

If i ≠ j, then eifj = fjei = 0, so [ei, fj] = 0 = −δijhi. This proves that g is a quotient of the derived Lie algebra g0(A); however, we can also show that this construction obeys the two additional conditions (ad ei)^{1−Aij}(ej) = 0 = (ad fi)^{1−Aij}(fj), so using the fact that A is symmetric and the Gabber-Kac theorem again, we can deduce that g is a quotient of the Kac-Moody algebra g(A). If i = j there is nothing to prove, so we assume i ≠ j.

(i) For j = i ± 1, (ad ei)^{1−Aij}(ej) = [ei, [ei, ej]] = 0. To see this we can restrict our attention to 3 × 3 submatrices obtained by deleting all but the ith, jth, (i + 1)th and (j + 1)th rows and columns. From this we see that [ei, ej] takes value ±t^2 in the top right entry of this 3 × 3 matrix and is zero everywhere else. Therefore, ei[ei, ej] = [ei, ej]ei = 0 and the result follows.

(ii) For j 6∈ {i − 1, i, i + 1},[ei, ej] = 0, so [ei, [ei, ej]] = [ei, 0] = 0.

(iii) The case for fi is constructed similarly, with the 3 × 3 matrix obtained from [fi, fi±1] taking value ±t^{-2} in the bottom left entry and 0 elsewhere.
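The bracket computations in this example can be verified symbolically. The following sympy sketch (our own check for the case n = 3, with indices read modulo 3 and A taken to be the 3 × 3 circulant generalised Cartan matrix with −1 off the diagonal) confirms relations (i), (ii) and (iv) of definition 7.1.5 together with the Serre relations.

import sympy as sp

t = sp.symbols("t")
n = 3
E = lambda i, j: sp.Matrix(n, n, lambda r, c: 1 if (r, c) == (i % n, j % n) else 0)

e = [t * E(i, i + 1) for i in range(n)]            # e_i = t E_{i,i+1}
f = [-E(i + 1, i) / t for i in range(n)]           # f_i = -t^{-1} E_{i+1,i}
h = [E(i, i) - E(i + 1, i + 1) for i in range(n)]  # h_i = E_{ii} - E_{i+1,i+1}
A = sp.Matrix(n, n, lambda i, j: 2 if i == j else -1)
zero = sp.zeros(n)
bracket = lambda X, Y: sp.expand(X * Y - Y * X)

# relations (i), (ii) and (iv) of definition 7.1.5
assert all(bracket(h[i], e[j]) - A[i, j] * e[j] == zero for i in range(n) for j in range(n))
assert all(bracket(h[i], f[j]) + A[i, j] * f[j] == zero for i in range(n) for j in range(n))
assert all(bracket(e[i], f[j]) - (-h[i] if i == j else zero) == zero
           for i in range(n) for j in range(n))

# Serre relations (ad e_i)^{1-A_ij}(e_j) = 0 = (ad f_i)^{1-A_ij}(f_j) for i != j
assert all(bracket(e[i], bracket(e[i], e[j])) == zero
           for i in range(n) for j in range(n) if i != j)
assert all(bracket(f[i], bracket(f[i], f[j])) == zero
           for i in range(n) for j in range(n) if i != j)
print("all relations verified")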

7.2 Kac-Moody groups

Kac-Moody groups generalise semisimple Lie groups; however, the most general definition of them is highly abstract. We will introduce these objects, relating them to their corresponding Kac-Moody algebras, and prove that they admit a twin root datum. Together with results from section 4.5 this will show that Kac-Moody groups act strongly transitively on associated twin buildings. This will provide useful information about the structure of such groups.

Let A be an algebra. A vector space V is called an A-module if there is an algebra homomorphism θ : A → GL(V ) which commutes with the addition and multiplication in the algebra.

Definition 7.2.1 Types of vector space endomorphism
Let V be a vector space. An endomorphism T : V → V is said to be
(i) locally finite if every v ∈ V lies in some finite-dimensional T-invariant subspace of V,

(ii) locally nilpotent if for each v ∈ V there is some n ∈ N such that T n(v) = 0 and (iii) semisimple if V admits a basis of eigenvectors of T .

Notice that locally nilpotent and semisimple endomorphisms are certainly locally finite. Let A be a locally finite endomorphism of V. We can form a corresponding one-parameter subgroup of automorphisms of V,

exp(tA) = ∑_{n≥0} (t^n/n!) A^n, where t ∈ C.

Definition 7.2.2 Locally finite elements
Let g be a (possibly infinite dimensional) complex Lie algebra and let V be a g-module which affords the action π. An element X ∈ g is said to be π-locally finite if π(X) is a locally finite endomorphism on V.

The set of all ad-locally finite elements of g is denoted by Fg and the subalgebra of g generated by Fg is denoted by gf . g is said to be integrable if g = gf and a homomorphism φ : h → g between integrable Lie algebras is called integrable if φ(Fh) ⊆ Fg.

In a similar way we define Fg,π to be the set of all π-locally finite elements of Fg.

A g-module (V, π) is called integrable if Fg,π = Fg. Naturally, the g-module (g, ad) is integrable. We are now in a position to define the Kac-Moody group associated to g.
Let G∗ be the free group with generating set Fg. For each integrable g-module (V, dπ), we define a G∗-module (V, π̃) by

π̃(X) = exp(dπ(X)) := ∑_{n≥0} (1/n!)(dπ(X))^n for X ∈ Fg.

Set G = G∗/(∩ ker(π̃)), where the intersection is taken over all dπ corresponding to integrable g-modules. The G∗-module (V, π̃) is naturally a G-module (V, π). We call (V, dπ) the differential of (V, π) and call G the group associated to g.
The image of an element x ∈ Fg under the canonical homomorphism G∗ → G is denoted by exp(x). Thus, by definition,

π(exp(x)) = exp(dπ(x)) := ∑_{n≥0} (1/n!)(dπ(x))^n

for any integrable g-module (V, dπ). Also notice that {exp(tx) | t ∈ C} is a 1-parameter subgroup of G.

We define FG = {exp(x) | x ∈ Fg} and say a G-module (V, dπ) is differentiable if all elements of FG act as locally finite endomorphisms on V and exp(tx) is analytic on any G-invariant finite dimensional subspace of V .

Definition 7.2.3 Kac-Moody groups Let A be a generalised Cartan matrix and let g0(A) be the derived Lie algebra. The Kac-Moody group G(A) associated to A is the group associated to g0(A) by the method previously given.

To each real root a ∈ ΦR we define an associated root group Ua := exp(ga). In 1983, Dale Peterson and Victor Kac proved that there is a 1 − 1 correspondence between integrable g0(A) modules and G(A) modules, ([KP83] pages 141 − 166).

Theorem 7.2.4 Kac-Moody groups admit a twin root datum

We will show that (G, {Ua | a ∈ ΦR}, H) suffices, where H := ∩_{a∈ΦR} NG(A)(Ua). To ease the notation we write Uε := ⟨Ua | a ∈ ΦR^ε⟩. We follow the proof given in [Ma07].

Proof: The subgroups Ua are nontrivial and generate G, so properties (i) and (ii) hold.
Let b ∈ Π ⊆ Φ+; then Ub can be described as the image of the subgroup of upper unitriangular matrices of SL2(C) under ψi for some i. U− is generated by the images of lower unitriangular matrices, thus Ub is not contained in U−, so part (iii) also follows.

Let {a, b} be a pair of prenilpotent roots. For ease of notation, define g(a,b) := ⟨gc | c ∈ (a, b)⟩, the vector space generated by these root spaces. We obtain a decomposition

g(a,b) = ⊕_{c∈(a,b)} gc

from the root space decomposition. Define U(a,b) to be the Lie group of g(a,b). The multiplication in U(a,b) is given by the Campbell-Baker-Hausdorff formula (1.5.6). Therefore the Lie algebra of the commutator subgroup [Ua, Ub] is contained in g(a,b). Using the exponential mapping we can deduce that [Ua, Ub] is contained in the subgroup generated by the Uc with c ∈ (a, b). Therefore property (v) holds.

Proving part (iv) is more difficult. We use the fact that the span of {ei, fi, hi} is a subalgebra sli isomorphic to sl2(C). The automorphism ri := exp(ad(ei)) exp(ad(−fi)) exp(ad(ei)) acts on sli as the Cartan involution presented in chapter 3. From this we deduce that ri is a reflection on g(A). The remaining details of the proof of part (iv) are more technical and are therefore omitted.

Φ is reduced, so Φ ∩ (Ca) = {±a} for all a ∈ Φ, thus (vi) holds vacuously. 

Combining this result with previous ones from section 4.5, we see that G(A) has the Bruhat and Birkhoff decompositions

G(A) = ⊔_{w∈W} B+wB+ = ⊔_{w∈W} B−wB− = ⊔_{w∈W} B+wB−.

Moreover, each Kac-Moody group acts strongly transitively on an associated twin building.

For each real root a ∈ ΦR, the Lie algebra g(a) generated by g±a in g0(A) is isomorphic to sl2(C) and contained in Fg0(A). Therefore the inclusion ψa : sl2(C) ≅ g(a) ↪ g0(A) is integrable and thus induces a map ψa : SL2(C) → G(A) whose image is denoted by Ga. It can be shown that Ga = ⟨Ua, U−a⟩ and that ψa is an isomorphism onto its image.

Ga is called the rank one subgroup of the real root a. Similarly, if a1, . . . , an ∈ ΦR we define G_{a1,...,an} = ⟨G_{a1}, . . . , G_{an}⟩ to be the associated rank n subgroup of these real roots. We now construct a topology on G(A).

Definition 7.2.5 Kac-Peterson topology

The final group topology with respect to the inclusion maps {ψa : SL2(C) → G(A) | a ∈ ΦR} where SL2(C) has its connected Lie group topology, is called the Kac-Peterson topology on G(A) and is denoted by TKP .

The aim of the remainder of this section is to prove the following theorem.

Theorem 7.2.6 The topological group (G(A), TKP ) is a kω-group.

We simplify the proof by first giving two lemmas.

Lemma 7.2.7 Let Π = {a1, . . . an} be any choice of simple roots for the Kac-Moody algebra g(A),

then TKP is the final group topology with respect to the inclusions ψai : SL2(C) → G(A) for 1 ≤ i ≤ n.

Proof: The Weyl group W = NG(A)(H)/H. By construction H normalises every root subgroup so it follows that W acts by conjugation on the root subgroups. In particular, for any root a and any w ∈ W ,

wUaw^{-1} = U_{wa} and thus wGaw^{-1} = G_{wa}.

By definition, every real root is of the form wai for some w ∈ W and some ai ∈ Π. Conjugation is continuous, so it follows that all rank one subgroup inclusions are continuous. Therefore TKP is the final

group topology with respect to the inclusions ψai : SL2(C) → G(A) for 1 ≤ i ≤ n.  Lemma 7.2.8 Let (V, dψ) be an integrable g0(A)-module, let ψ : G(A) → Aut(V ) be the associated representation of G(A) and let a ∈ ΦR. Every v ∈ V is contained in a finite dimensional Ga-submodule V0 of V and the orbit map Ga → V given by g 7→ ψ(g)v is continuous onto V0. Moreover, for each v ∈ V and real roots a1, . . . , an, the map

G_{a_1} × · · · × G_{a_n} → V defined by (g_1, . . . , g_n) ↦ ψ(g_1g_2 · · · g_n)v

has image contained in some finite dimensional subspace of V and is continuous onto this space.

Proof: V is integrable and g_(a) ⊆ F g′(A) is finite dimensional. [Ka85], proposition 3.6(a) (or [Ka84], lemma (b) on page 170) tells us that each v ∈ V is contained in a finite dimensional g_(a)-submodule V_0 of V. By definition

ψ(exp(X))w = Σ_{k=0}^{∞} (dψ(X))^k w / k!  ∈ V_0

for all X ∈ g_(a) and all w ∈ V_0. Thus V_0 is a G_a-submodule and π : G_a → GL(V_0) given by g ↦ ψ(g)|_{V_0} is the smooth representation of G_a with dπ(X) = dψ(X) for all X ∈ g_(a). Therefore the orbit map G_a → V_0, g ↦ ψ(g)v = π(g)v, is continuous.
We now proceed by induction on n, noting that the case n = 1 has been completed above. By the inductive hypothesis we assume that ψ(G_{a_2}G_{a_3} · · · G_{a_n})v is contained in some finite dimensional vector space V_0 ⊆ V and that the orbit map θ : G_{a_2} × G_{a_3} × · · · × G_{a_n} → V_0 defined by θ(g_2, . . . , g_n) = ψ(g_2 · · · g_n)v is continuous.

Each element of V is contained in a finite dimensional g_(a_1)-submodule, so the g_(a_1)-module V_1 ⊆ V generated by V_0 is finite dimensional.

Consider the map π : G_{a_1} → GL(V_1) with π(g) = ψ(g)|_{V_1}. The case n = 1 tells us that the orbit map g ↦ π(g)w is continuous for each w ∈ V_1, so π is continuous. The evaluation map ev : GL(V_1) × V_1 → V_1, (B, x) ↦ B(x), is continuous, so it follows that the orbit map

ev ∘ (π × θ) : G_{a_1} × G_{a_2} × · · · × G_{a_n} → V_1

is continuous, as required. □

Proof of theorem 7.2.6: Fix a system of simple roots Π = {a_1, . . . , a_n} and for each word I := (i_1, . . . , i_k) over {1, . . . , n} define G_I as the image of the product map p_I : G_{a_{i_1}} × · · · × G_{a_{i_k}} → G(A), (g_1, . . . , g_k) ↦ g_1g_2 · · · g_k.

Define the topology TI on GI to be the quotient topology with respect to pI (the finest topology under which pI is continuous).

We claim (G_I, T_I) is a k_ω space. SL_2(C) is a k_ω group, so by proposition 6.1.4(iv), SL_2(C)^k ≅ G_{a_{i_1}} × · · · × G_{a_{i_k}} is a k_ω space, and by part (v) of the same proposition it suffices to show that (G_I, T_I) is Hausdorff.
To see this let x, y ∈ G_I be distinct. Then there is some integrable g′(A)-module (V, dψ) whose associated representation ψ : G(A) → Aut(V) distinguishes x from y, in other words ψ(x) ≠ ψ(y). So for some v ∈ V, ψ(x)v ≠ ψ(y)v.

Define f : G_I → V by f(g) = ψ(g)v. By lemma 7.2.8, f ∘ p_I is continuous, so f is also continuous. Since f(x) ≠ f(y) we can choose disjoint open neighbourhoods of f(x) and f(y), and their preimages under f are disjoint open neighbourhoods of x and y in G_I. Hence G_I is Hausdorff, as required.

Define W_n to be the poset of all words over the set {1, . . . , n} with respect to the ordering I ≤ J if and only if I is a subsequence of J. For I ≤ J there is an obvious inclusion map G_I ↪ G_J. Let T_0 be the final topology on the union G(A) = ∪_{I∈W_n} G_I with respect to the system (G_I, T_I)_{I∈W_n}.

Consider the sequence I1 ≤ I2 ≤ ... in Wn defined by I1 = (1), In = (1, . . . , n), In+1 = (1, . . . , n, 1) and so on. Then

G_{I_1} ⊆ G_{I_2} ⊆ · · · ⊆ G(A), with union G(A), and as each G_{I_m} is k_ω we deduce that (G(A), T_0) is k_ω. It remains to show that (G(A), T_0) is a topological group and that T_0 = T_KP.

The concatenation map G_{(i_1, . . . , i_k)} × G_{(j_1, . . . , j_l)} → G_{(i_1, . . . , i_k, j_1, . . . , j_l)} is continuous, so the map G_{(i_1, . . . , i_k)} × G_{(j_1, . . . , j_l)} → G(A) is also continuous. By proposition 6.1.7, group multiplication is continuous. Setting l = k and i_m = j_{(k+1)−m} shows that taking inverses is continuous as well. Thus (G(A), T_0) is a k_ω group.

Finally, notice that the maps ψ_{a_i} : SL_2(C) → G(A) are continuous with respect to T_0 (as they correspond to words of length 1), so by lemma 7.2.7, T_0 ⊆ T_KP. On the other hand, each p_I is continuous with respect to T_KP, since T_KP is a group topology making the inclusions ψ_a continuous; hence each inclusion (G_I, T_I) → (G(A), T_KP) is continuous and T_KP ⊆ T_0. Thus T_0 = T_KP. □

We note one important corollary to this result.

Corollary 7.2.9 Let (V, dψ) be an integrable g′(A)-module and let ψ : G(A) → Aut(V) be the corresponding representation of G(A). Then ψ is continuous with respect to the Kac-Peterson topology on G(A) and the topology of pointwise convergence on Aut(V).

Proof: (We continue with the notation introduced in the proof of theorem 7.2.6.) The restriction of ψ to each G_I is continuous by lemma 7.2.8. Theorem 7.2.6 tells us that G(A) = lim_→ G_I, so the result follows. □

7.3 Unitary forms

We will say a subalgebra h of g′(A) is integrable if and only if the inclusion h ↪ g′(A) is an integrable homomorphism in the sense of definition 7.2.2.

Definition 7.3.1 A real subalgebra g′_0(A) ⊆ g′(A) is called a real form if g′(A) = g′_0(A) ⊕ ig′_0(A). A real form g′_0(A) defines an involution σ on g′(A) given by

σ(X_0 + iX_1) := X_0 − iX_1 for X_0, X_1 ∈ g′_0(A).
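For example (the classical rank one picture, included as an illustration and not taken from the text): sl_2(R) is a real form of sl_2(C), since sl_2(C) = sl_2(R) ⊕ i·sl_2(R), and the associated involution is entrywise complex conjugation σ(X) = X̄; likewise su_2 = {X ∈ sl_2(C) | X̄^T = −X} is a real form, with associated involution σ(X) = −X̄^T. The latter is the model for the unitary forms discussed below.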

Proposition 7.3.2 Let g′_0(A) ⊆ g′(A) be a real form of g′(A), so that g′(A) = g′_0(A) ⊕ ig′_0(A) ≅ g′_0(A) ⊗_R C. Let σ : g′(A) → g′(A) be the associated involution. Then g′_0(A) ⊆ g′(A) is an integrable subalgebra, σ is integrable and the induced group involution σ_G : G(A) → G(A) is continuous.

Proof: Denote by ad and ad_0 the adjoint representations of g′(A) and g′_0(A) respectively. Let X ∈ g′_0(A) be ad_0-locally finite and let Y ∈ g′(A). We can write Y = Y_0 + iY_1 with Y_0, Y_1 ∈ g′_0(A). There exist finite dimensional ad_0(X)-submodules V_0 and V_1 of g′_0(A) containing Y_0 and Y_1 respectively. Thus Y ∈ V_0 ⊕ iV_1. Given Z = Z_0 + iZ_1 ∈ V_0 ⊕ iV_1, we obtain

ad(X)(Z) = [X, Z_0 + iZ_1] = [X, Z_0] + i[X, Z_1] = ad_0(X)(Z_0) + i·ad_0(X)(Z_1) ∈ V_0 ⊕ iV_1.

Thus V_0 ⊕ iV_1 is a finite dimensional ad(X)-invariant subspace which contains Y. Since Y was chosen arbitrarily, we deduce that X is ad-locally finite.
Now suppose X is ad-locally finite. For Y ∈ g′(A), let V be a finite dimensional ad(X)-invariant subspace of g′(A) containing σ(Y). Set V′ := σ(V) and let Z′ ∈ V′ with Z′ = σ(Z) for some Z ∈ V. Then

ad(σ(X))(Z′) = [σ(X), σ(Z)] = σ([X, Z]) = σ(ad(X)(Z)) ∈ σ(V) = V′.

Moreover, Y = σ(σ(Y)) ∈ V′ and thus V′ is a finite dimensional ad(σ(X))-invariant subspace containing Y. Again Y was chosen arbitrarily, so we deduce that σ(X) is ad-locally finite and thus σ is integrable.
The induced involution σ_G : G(A) → G(A) is determined by the rule σ_G(exp(X)) = exp(σ(X)). Thus the restriction of σ_G to any rank one subgroup of G(A) is smooth and hence continuous, and so σ_G is continuous by definition of the Kac-Peterson topology. □

Given a real form g′_0(A) of g′(A), the group G_0(A) := G(A)^{σ_G} = {g ∈ G(A) | σ_G(g) = g} is called the real form of G(A) associated with g′_0(A). The topology on G_0(A) is the subspace topology with respect to (G(A), T_KP). We note one immediate corollary of the preceding results.

Corollary 7.3.3 Every real form of a complex Kac-Moody group is a kω group.

The proof follows from proposition 7.3.2, proposition 6.1.4(iii) and theorem 7.2.6. 

We recall that every complex semisimple Lie algebra g has a compact real form k which is the fixed point set of the Cartan involution (see proposition 3.4.6). This is used to define the compact real form K of a semisimple Lie group G, from which we obtain the Iwasawa decomposition of G. The Cartan involution restricted to each rank 1 subgroup of G is given by the inverse conjugate transpose, g ↦ (ḡ^T)^{−1}.
A similar construction can be made for Kac-Moody groups. Kac and Peterson, [KP85], proved that an involution can be defined on the derived Lie algebra g′(A) which extends the Cartan involution on each rank 1 subalgebra g_(a) ≅ sl_2(C). This involution is denoted by ω, and the subalgebra k(A) := {X ∈ g′(A) | ω(X) = X} ⊆ g′(A) is called the unitary form of g′(A). The real form K(A) of G(A) corresponding to the unitary form k(A) of g′(A) is called the unitary form of G(A).

The involution ω : G(A) → G(A) obtained by integrating ω fixes every rank one subgroup setwise and satisfies

K_a := G_a^ω = G_a ∩ K(A).

Moreover, K(A) is generated by the compact Lie groups K_a for a ∈ Φ_R, and each K_a ≅ SU_2(C), the compact real form of SL_2(C).
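To make the rank one picture explicit (a standard computation, assuming ω acts on the copy ψ_a(SL_2(C)) as the inverse conjugate transpose g ↦ (ḡ^T)^{−1}): the fixed point set is then

K_a ≅ SU_2(C) = { (α β; −β̄ ᾱ) : α, β ∈ C, |α|² + |β|² = 1 },

a compact group whose maximal torus is the circle of diagonal matrices diag(e^{iθ}, e^{−iθ}).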

Corollary 7.3.3 tells us that K(A) is a k_ω group with respect to the subspace topology induced from (G(A), T_KP); this topology will also be called the Kac-Peterson topology and denoted by T_KP. The following proposition is an analogue of theorem 7.2.6 for unitary forms.

Proposition 7.3.4 Let Π = {a_1, . . . , a_n} be any choice of simple roots for g(A). The Kac-Peterson topology T_KP on K(A) is the final group topology with respect to the inclusions ϕ_{a_i} : SU_2(C) ↪ K(A) for i = 1, . . . , n.

Proof: Each Kai has its compact connected Lie group topology so let T0 be the final group topology on K(A) with respect to the inclusions ϕai : Kai ,→ K(A). We wish to show that T0 = TKP .

Each of the maps ϕai is continuous with respect to TKP and therefore TKP ⊆ T0, so T0 is Hausdorff.

Using the proof of theorem 7.2.6 as a template we associate a product map ϕI to each word I = (i1, . . . , ik) over {1, . . . , n},

ϕI : Ka1 × · · · × Kak → K(A), (g1, . . . , gk) 7→ g1g2 . . . gk

where aj := aij for ease of notation. The image of ϕI is denoted by KI .

ϕI is continuous with respect to both topologies as both topologies define topological groups on K(A).

Moreover, as these topologies are both Hausdorff and Ka1 × · · · × Kak is compact it follows that ϕI is a quotient map for both T0 and TKP . Thus these topologies coincide on each KI .

Using the method from the proof of theorem 7.2.6 we can deduce that lim_→(K_I) = (K(A), T_0), and also that lim_→(G_I ∩ K(A)) = (K(A), T_0). It is clear that K_I ⊆ G_I ∩ K(A) and that the topologies coincide on each K_I, so it suffices to prove that for each word I ∈ W_n there is some J ∈ W_n such that G_I ∩ K(A) ⊆ K_J. If this is the case then the topologies coincide on G_I ∩ K(A) for all I. Using theorem 6.3 we deduce that T_0 is the subspace topology on K(A) induced from (G(A), T_KP), completing the proof.

Denote by s_j the root reflection at a_j. Then S = {s_1, . . . , s_n} generates the Weyl group W. We will say that a word I is special if (s_{i_1}, . . . , s_{i_k}) is a reduced expression for the Weyl group element w(I) = s_{i_1} · · · s_{i_k} ∈ W.
Suppose I is special. Using the Bruhat decomposition for SL_2(C), we have G_{a_i} ⊆ B^+ ∪ B^+s_iB^+. Extending this, we know that

G_I ⊆ (B^+ ∪ B^+s_{i_1}B^+) · · · (B^+ ∪ B^+s_{i_k}B^+) ⊆ ∪_{w ≤ w(I)} B^+wB^+.

Here ≤ denotes the Bruhat ordering on W, where w_1 ≤ w_2 if and only if some reduced expression of w_2 contains a reduced expression of w_1 as a subsequence.
By [KP85], proposition 5.1, K(A) ∩ B^+wB^+ ⊆ K_I T, where T := H ∩ K(A) is a compact torus. Thus

GI ∩ K(A) ⊆ KI T.

Moreover, T ⊆ K_{a_1} · · · K_{a_n}, so defining J = (i_1, . . . , i_k, 1, 2, . . . , n) to be the concatenation of I with (1, . . . , n), we see that G_I ∩ K(A) ⊆ K_J. This completes the claim for special I ∈ W_n.

If I ∈ W_n is arbitrary, then define W_I to be the set of all special subwords of I. W_I is finite, so there is some special word I′ ∈ W_n such that w(I′) ≥ w(J) for all J ∈ W_I. Thus

G_I ⊆ ∪_{w ≤ w(I′)} B^+wB^+.

Defining J to be the concatenation of I′ with (1, 2, . . . , n) we see that G_I ∩ K(A) ⊆ K_J. □
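A toy example may clarify the subword bookkeeping used above (rank two, with Π = {a_1, a_2} and S = {s_1, s_2}; this is for orientation only). The word I = (1, 2, 1) is special whenever s_1s_2s_1 is a reduced expression, and its special subwords realise the elements 1, s_1, s_2, s_1s_2, s_2s_1 and s_1s_2s_1 of W. By contrast I = (1, 1) is never special, since (s_1, s_1) is not reduced; its special subwords are the empty word and (1), with maximal element w(I′) = s_1, so the argument yields G_{(1,1)} ⊆ B^+ ∪ B^+s_1B^+, which is also clear directly since G_{(1,1)} = G_{a_1}.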

Chapter 8

Phan amalgams

8.1 Borovoi’s theorem for complex Kac-Moody groups

Definition 8.1.1 Amalgams

Let G be the category of groups under group homomorphisms and let I be the category given by a poset (J, ≤).

A diagram δ : I → G is called an amalgam if δ(a) is a group monomorphism for all a ∈ mor(I). A cone (G, (ψi)i∈ob(I)) over an amalgam δ in G is called an enveloping group and the corresponding colimit is called a universal enveloping group.

Often an amalgam is written just as a sequence of subgroups when the poset and the monomorphisms are clear from the context. Borovoi’s theorem, 5.3.3, shows that a semisimple Lie group G is the universal enveloping group of its rank 1 and rank 2 subgroups. In this chapter we will prove an analogous result for Kac-Moody groups. To do this we will need to impose some ‘finiteness’ condition on the Dynkin diagram Γ of a Kac-Moody group G(A). A suitable choice to make is that these diagrams are 2-spherical. This means that the subgraph of Γ induced by any pair of vertices is isomorphic to either A1 ⊕ A1, A2, B2 or G2 (see figure 2.1.2).
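For example (standard generalised Cartan matrices, recorded only to illustrate the condition): the matrices

A = (2 −1; −1 2) (type A_2) and A = (2 −2; −1 2) (type B_2)

are 2-spherical, their only rank two subdiagram being of finite type, whereas the affine matrix A = (2 −2; −2 2) (type Ã_1) is not 2-spherical, since its rank two subdiagram is already of affine type.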

Theorem 8.1.2 Borovoi’s theorem for complex Kac-Moody groups Let K(A) be the unitary form of some complex Kac-Moody group G(A) associated with a symmetrisable generalised Cartan matrix of finite size and two-spherical type (W, S). Let ΦR be the set of real roots and let Π be a system of fundamental roots.

Define a diagram δ from I to the category of k_ω groups as follows. Choose I to be the small category whose objects are the one- and two-element subsets of Π and whose morphisms are {a} → {a, b} for all a, b ∈ Π with a ≠ b. Define δ({a}) = K_a, δ({a, b}) = K_{ab} and δ({a} → {a, b}) = (K_a ↪ K_{ab}).

Then ((K(A), T_KP), (ι_J)_J), where J ranges over the one- and two-element subsets of Π and ι_J is the inclusion map, is a colimit of the diagram δ in the categories of abstract groups, Hausdorff topological groups and k_ω groups.

Proof: The proof that (K(A), (ι_J)_J) is a colimit of δ in the category of abstract groups is exactly the same as the proof of theorem 5.3.3, using the fact that G(A) acts strongly transitively on an associated twin building and that we can define a fundamental chamber with respect to the action of K(A). Applying extensions of Tits' lemma and the Solomon-Tits theorem to twin buildings completes the proof. (K(A), T_KP) is Hausdorff by proposition 7.3.4 and is a k_ω group by proposition 7.3.4 and corollary 6.2.6. □

Amalgams of the type considered in theorem 8.1.2 are called standard amalgams. Our final major result will be to prove that under certain circumstances this amalgam is unique up to isomorphism.

8.2 Standard Phan amalgams

Definition 8.2.1 Topological weak Phan systems
Let Γ be a two-spherical Dynkin diagram with vertex set Π and let G be a topological group. A family (K_j)_{j∈Π} of subgroups of G is called a topological weak Phan system of type Γ over C in G if the following properties hold.
(i) G is generated by ∪_{j∈Π} K_j,

(ii) each Kj is isomorphic (as a topological group) to a central quotient of a simply connected, compact, semisimple Lie group of rank one,

(iii) given any two distinct elements i, j ∈ Π, the subgroup K_{ij} := ⟨K_i, K_j⟩ of G is isomorphic as a topological group to a central quotient of the simply connected, compact, semisimple Lie group of rank two corresponding to the rank two subdiagram of Γ induced by the vertices i and j,

(iv) (K_i, K_j) or (K_j, K_i) is a standard pair in K_{ij}.

This definition requires Γ to be two-spherical, otherwise condition (iii) can never hold.

Definition 8.2.2 Phan amalgams
Let Γ be a two-spherical Dynkin diagram with vertex set Π. An amalgam A = (K_J), with J ranging over the one- and two-element subsets of Π, is called a Phan amalgam of type Γ over C if, for all distinct a, b ∈ Π, the group K_{ab} is a central quotient of a simply connected compact semisimple Lie group whose type is given by the subgraph of Γ induced by the vertices a and b, and its morphisms are the maps K_a ↪ K_{ab} which embed (K_a, K_b) as a standard pair into K_{ab}.
It is clear from the definition that every topological weak Phan system gives rise to a Phan amalgam. The converse is not true, as the definition of a Phan amalgam makes no mention of the amalgam possessing an enveloping group. However, if there exists a group G into which A can be embedded, then the image of A in G is a weak Phan system in G. To distinguish this case, we call such Phan amalgams strongly noncollapsing.

Definition 8.2.3 Irreducible Phan amalgams
A Phan amalgam A is said to be irreducible if and only if its Dynkin diagram is connected.
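To illustrate definition 8.2.1 in the smallest interesting case (a standard example, stated for orientation rather than taken from the text): let G = SU_3(C) and let K_1, K_2 ≅ SU_2(C) be the block subgroups sitting in the upper-left and lower-right 2×2 blocks respectively. Then K_1 and K_2 generate G, each is a simply connected compact rank one group, K_{12} = ⟨K_1, K_2⟩ = SU_3(C) is the simply connected compact semisimple group of type A_2, and (K_1, K_2) is a standard pair with respect to the diagonal maximal torus; so (K_1, K_2) is a topological weak Phan system of type A_2 over C in SU_3(C).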

Definition 8.2.4 Amalgam isomorphisms
Let A = (P_1 ↩ (P_1 ∩ P_2) ↪ P_2), with connecting monomorphisms ι_1 and ι_2, and A′ = (P_1′ ↩ (P_1′ ∩ P_2′) ↪ P_2′), with connecting monomorphisms ι_1′ and ι_2′, be two amalgams of abstract groups. The amalgams are said to be of the same type if there exist isomorphisms ψ_i : P_i → P_i′ with the property that (ψ_i ∘ ι_i)(P_1 ∩ P_2) = ι_i′(P_1′ ∩ P_2′) for i = 1, 2.
If there is also an isomorphism ψ_12 : P_1 ∩ P_2 → P_1′ ∩ P_2′ such that ψ_1 ∘ ι_1 = ι_1′ ∘ ψ_12 and ψ_2 ∘ ι_2 = ι_2′ ∘ ψ_12 (that is, the two obvious squares commute), then the amalgams A and A′ are said to be isomorphic. We denote an amalgam isomorphism by a triple (ψ_1, ψ_12, ψ_2). Here each map ι denotes an inclusion (the identity on elements); the maps differ only in their domain and codomain.
If the amalgams consist of topological groups, they are said to be isomorphic if and only if there is an amalgam isomorphism (ψ_1, ψ_12, ψ_2) for the underlying amalgams of abstract groups in which each of the three maps is also a homeomorphism.

Lemma 8.2.5 Goldschmidt's lemma

Let A = (P_1 ↩ (P_1 ∩ P_2) ↪ P_2), with connecting monomorphisms ι_1 and ι_2, be an amalgam of topological groups. For i = 1, 2 define A_i := Stab_{Aut(P_i)}(P_1 ∩ P_2) and let a_i : A_i → Aut(P_1 ∩ P_2) map f ∈ A_i to its restriction to P_1 ∩ P_2. There is a one-to-one correspondence between isomorphism classes of amalgams of the same type as A and the double cosets a_2(A_2) \ Aut(P_1 ∩ P_2) / a_1(A_1).

Proof: By definition 8.2.4 any amalgam of the same type as A is isomorphic to an amalgam of the form

(P_1 ← (P_1 ∩ P_2) → P_2) with connecting maps ι_1 ∘ b_1 and ι_2 ∘ b_2,

where b_1, b_2 ∈ Aut(P_1 ∩ P_2). This in turn is isomorphic, via the amalgam isomorphism (id, b_1, id), to the amalgam with connecting maps ι_1 and ι_2 ∘ b_2 ∘ b_1^{−1}. Thus it remains to determine when two amalgams with connecting maps

(ι_1, ι_2 ∘ c_1) and (ι_1, ι_2 ∘ c_2), where c_1, c_2 ∈ Aut(P_1 ∩ P_2),

are isomorphic. For this we need an amalgam isomorphism (ψ_1, ψ_12, ψ_2) with the properties ι_1 ∘ ψ_12 = ψ_1 ∘ ι_1 and ι_2 ∘ c_2 ∘ ψ_12 = ψ_2 ∘ ι_2 ∘ c_1. The first equality shows that ψ_12 ∈ a_1(A_1), and the second that c_2 ∘ ψ_12 ∘ c_1^{−1} ∈ a_2(A_2). Therefore such an isomorphism exists if and only if a_1(A_1) ∩ c_2^{−1} a_2(A_2) c_1 ≠ ∅, which is equivalent to a_2(A_2)c_1a_1(A_1) = a_2(A_2)c_2a_1(A_1), that is, to c_1 and c_2 lying in the same double coset. □
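One immediate consequence is worth recording, since it is the mechanism behind the uniqueness arguments of section 8.3: if a_2(A_2)·a_1(A_1) = Aut(P_1 ∩ P_2) — for instance if every automorphism of P_1 ∩ P_2 extends to an automorphism of P_1 preserving P_1 ∩ P_2 — then there is a single double coset, and hence only one isomorphism class of amalgams of the same type as A.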

Definition 8.2.6 Unambiguous Phan amalgams

A Phan amalgam (K_{ab}), and any subamalgam of a Phan amalgam, is said to be unambiguous if every K_{ab} is isomorphic to the corresponding simply connected compact semisimple rank two Lie group itself, rather than to a proper central quotient of it.

8.3 Classification

Theorem 8.3.1 Classification Let n ≥ 2 and let A be a tree-like strongly noncollapsing unambiguous irreducible Phan amalgam of rank n. Then A is uniquely determined up to isomorphism by its Dynkin diagram.

We have already seen from theorem 8.1.2 that the rank one and rank two subgroups of the unitary form K(A) of a Kac-Moody group whose Dynkin diagram is a 2-spherical tree form such an amalgam. We therefore deduce from this theorem that these amalgams are unique up to isomorphism.
Recall that for an amalgam to be strongly noncollapsing its Dynkin diagram must be two-spherical. Moreover, the condition that A is tree-like and irreducible says that the Dynkin diagram is in fact a tree. Critically, this guarantees that among any collection of at least three vertices of the corresponding Dynkin diagram Γ some pair {a, b} induces a subdiagram of type A_1 ⊕ A_1, so the group K_{ab} is isomorphic to the direct product K_a × K_b.

Proof: Consider a tree-like strongly noncollapsing unambiguous irreducible Phan amalgam

A = (K_J), with J ranging over the one- and two-element subsets of Π, of rank n ≥ 2. The proof proceeds by induction on n. As A is 2-spherical, the amalgams of rank two are unique by theorem 2.1.1.

For n = 3, let Π = {a, b, c} and let A and A′ be the amalgams with rank one groups K_a, K_b, K_c and K_a′, K_b′, K_c′, rank two groups K_{ab}, K_{ac}, K_{bc} and K_{ab}′, K_{ac}′, K_{bc}′, and the obvious inclusions K_a ↪ K_{ab}, K_a ↪ K_{ac}, K_b ↪ K_{ab}, K_b ↪ K_{bc}, K_c ↪ K_{ac}, K_c ↪ K_{bc} (and similarly for A′).

As we have assumed A and A′ are both irreducible and tree-like, without loss of generality we may assume that they both have the Dynkin diagram given by the path a — b — c, and that the two edges have labels contained in the set {3, 4, 6}. This condition on the labels holds as the Dynkin diagram is 2-spherical and connected.

By Goldschmidt’s lemma 8.2.5, the amalgams

B = (K_{ab} ↩ K_b = K_{ab} ∩ K_{bc} ↪ K_{bc}) and B′ = (K_{ab}′ ↩ K_b′ = K_{ab}′ ∩ K_{bc}′ ↪ K_{bc}′)

are isomorphic via some amalgam isomorphism ψ. We do not need to write three separate isomorphisms, as every automorphism of K_b is induced by some automorphism of K_{ab} ([HM98], theorem 6.73, gives the details). Therefore we can deduce that ψ(K_b) = K_b′.

The groups K_a and K_b are clearly a standard pair in K_{ab}, and hence (ψ(K_a), ψ(K_b)) forms a standard pair in K_{ab}′ = ψ(K_{ab}). Standard pairs are conjugate, so there exists an automorphism of K_{ab}′ which maps ψ(K_a) onto K_a′ and leaves K_b′ = ψ(K_b) invariant. Relabelling ψ as the composition of this automorphism with the original ψ, we may assume that ψ(K_a) = K_a′.
Define D_j^i := N_{K_j}(K_i), where K_i and K_j are considered as subgroups of K_{ij}. As (K_i, K_j) is a standard pair in K_{ij}, D_j^i is a maximal torus in K_j. If i = a, c, then D_b^i = C_{K_b}(D_i^b).
As A is strongly noncollapsing, let π : A → G define an enveloping group G of A, where π is injective on each K_i. In treating π as acting on the groups K_i we have implicitly used the assumption that A is unambiguous.

π acts as a group monomorphism on each K_i, so it follows that π(D_b^i) = C_{π(K_b)}(π(D_i^b)) for i = a, c. Moreover, π(K_b) and π(D_c^b) are invariant under conjugation by π(D_a^b) = N_{π(K_a)}(π(K_b)), so π(D_b^c) is invariant under π(D_a^b) as well.
Therefore we have deduced that the maximal torus π(D_a^b) of π(K_a) leaves the maximal tori π(D_b^a) and π(D_b^c) of π(K_b) invariant. In this situation, analysis of the rank 2 subgroup K_{ab} yields D_b^a = D_b^c, which we will now denote by D_b. Then N_{K_b}(K_a) = D_b and ψ(D_b) = D_b′ := N_{K_b′}(K_a′) = N_{K_b′}(K_c′), since ψ(K_a) = K_a′ and ψ(K_b) = K_b′ (the last equality follows by applying the same argument to A′).

Returning to the Dynkin diagram, we have identified a root system of type A_2, B_2 or G_2 with respect to which the groups K_b′ = ψ(K_b), K_c′ and ψ(K_c) all occur as compact rank one subgroups. Using [Ca05], plate X, we can deduce that there exists an automorphism of K_{bc}′ which centralises K_b′ and maps K_c′ onto ψ(K_c); again we may compose this automorphism with ψ and assume that ψ(K_c) = K_c′. Finally, ψ(K_{ac}) = K_{ac}′, as K_{ac} ≅ K_a × K_c and K_{ac}′ ≅ K_a′ × K_c′.

Let |Π| ≥ 4 and assume as our inductive hypothesis that the result holds for all amalgams of smaller rank.

Let A = (K_J), with J ranging over the one- and two-element subsets of Π, be a tree-like strongly noncollapsing unambiguous irreducible Phan amalgam of rank |Π|. Now let a, b ∈ Π be two nonadjacent vertices of the corresponding Dynkin diagram Γ with the property that Γ remains connected (and hence a tree) if these two vertices are removed. This is always possible, as the amalgam is tree-like and irreducible with |Π| ≥ 3: for instance, take two distinct leaves of the tree.

The amalgams A_i = (K_J), with J ranging over the one- and two-element subsets of Π \ {i}, for i = a, b, are isomorphic to standard Phan amalgams by the inductive hypothesis. Therefore the colimits H_a of A_a and H_b of A_b are both determined by theorem 8.1.2.
Now suppose A = (K_J) and A′ = (K_J′), with J ranging over the one- and two-element subsets of Π, are two tree-like strongly noncollapsing unambiguous irreducible Phan amalgams of rank |Π| with the same Dynkin diagram Γ. Choose two nonadjacent vertices a, b of this Dynkin diagram such that Γ remains connected when these two vertices are removed. Maintaining the notation, we notice that H_a ≅ H_a′ and H_b ≅ H_b′ by assumption.
Set B = (K_J) and B′ = (K_J′), with J ranging over the one- and two-element subsets of Π \ {a, b}; then again by the inductive hypothesis B ≅ B′, and the colimits H_0 of B and H_0′ of B′ are determined by theorem 8.1.2.
Then the amalgams H_a ↩ H_0 ↪ H_b and H_a′ ↩ H_0′ ↪ H_b′ have the same type. Applying [Ca05], theorem 8.2, and Goldschmidt's lemma (8.2.5), these two amalgams are in fact isomorphic under some map ψ. The standard Phan amalgams ψ(B) and B′ in H_0′ correspond to two choices of a maximal torus in H_0′, so these maximal tori are conjugate by the Iwasawa decomposition. Therefore there is some inner automorphism of H_0′ which maps ψ(K_J) to K_J′ for every one- or two-element subset J of Π \ {a, b}; composing ψ with this inner automorphism we may assume ψ(K_J) = K_J′ for all such J.
Studying H_a′ and H_b′ we see that ψ(K_a) = K_a′ and ψ(K_b) = K_b′. Also ψ(K_{ij}) = K_{ij}′ for i = a, b and j ∈ Π \ {a, b}. By assumption K_{ab} ≅ K_a × K_b, so ψ induces an isomorphism between A and A′, completing the proof. □

Chapter 9

Conclusion

We have seen that every compact simply connected semisimple Lie group K is the universal enveloping group of the amalgam of its rank one and rank two subgroups. Moreover, the Lie group topology on K is determined by the induced topology on these rank one and rank two subgroups. Following this, we extended the result to unitary forms of Kac-Moody groups over two-spherical symmetrisable generalised Cartan matrices, again deducing that they are the universal enveloping group of the Phan amalgam consisting of their rank one and rank two subgroups. Finally, we proved that under suitable conditions, which force the Dynkin diagram of a Phan amalgam to be a tree, the unitary forms of Kac-Moody groups are in fact the universal enveloping groups of the unique Phan amalgam (up to isomorphism) with that particular Dynkin diagram.

More recent research concentrates on what conclusions can be drawn about both Phan amalgams and related Curtis-Tits amalgams under the hypotheses of theorem 8.3.1 when the tree-like condition is removed. In particular, Phan amalgams of type Ã_n are of interest, as they form one of the easiest examples of so-called circular amalgams. We have seen that the rank one and rank two subgroups of sl_{n+1}(C[t, t^{−1}]) define a Phan amalgam which has Dynkin diagram Ã_n, but this amalgam is certainly not unique up to isomorphism.

Recalling that the diagram Ã_n is a cycle on n + 1 vertices,

we can apply the classification theorem 8.3.1 to any copy of A_n contained inside it, so the amalgam is unique up to isomorphism on this path, but then we are left with the question of how the two 'ends' are joined. In his 1999 paper [Mü99], Bernhard Mühlherr considers 2-spherical circular Dynkin diagrams, where the edges can be labelled with one of the values 3, 4 or 6. His method is to consider this cycle as an infinite path and to consider the ways in which it can be folded up.

· · · a_1 · · · a_n a_1 · · · a_n · · ·

where each of the vertices a_1 up to a_n constitutes a copy of SU_2(C) living inside the group. One of the outcomes of this paper is that the amalgam is then unique up to a field automorphism, so a maximal set of non-isomorphic amalgams of this type is in bijective correspondence with Aut(C).

Using this paper and the methods and results it contains, Rieuwert Blok and Corneliu Hoffman ([BH09a] and [BH09b]) extend this idea to the case where a group acts strongly transitively on an associated twin building yet can also interchange the two halves of that twin building. It is immediate in this case that the field automorphism should still play a role, and the question remains as

to how much extra freedom can be achieved. Again we may use the classification theorem (8.3.1) on any subdiagram which is a tree. Considering the ways of joining the last two edges, they proved that the field automorphisms remain and that the last edge can also act as a transpose inverse (like the Cartan involution on SU_2(C)), which swaps the two halves of the building. This transpose inverse flips the defined notion of positivity, meaning that a positive root group (the upper unitriangular matrices in some copy of SL_2(C)) gets mapped to positive root groups as it moves along the path A_n, but when it arrives at the last vertex the transpose inverse maps it to a negative root group (the lower unitriangular matrices in some copy of SL_2(C)). Therefore the corresponding fundamental chamber sees both halves of the building, leading to a natural comparison with a Möbius band. The result of this is that there is a bijective correspondence between non-isomorphic noncollapsing amalgams of this type (noncollapsing meaning that they do in fact define some nontrivial group) and the group Aut(C) × Z_2.

The first major outstanding question which has yet to be resolved is what conclusion can be reached in theorem 8.3.1 if the tree-like condition is removed. Recent results give answers in the case of circular Phan amalgams, but there is no definitive answer for an arbitrary generalised Cartan matrix. This would help to cover more cases of standard Phan amalgams, describing the amount of freedom such amalgams can take. It is known in the case of a cycle that this is given by the automorphism group of the underlying field, so the question is what happens when two cycles meet. Suitable conditions are needed to ensure that the field automorphisms agree, so the expected result should be a bijective correspondence between non-isomorphic amalgams of the same type and some carefully chosen subgroup of Aut(C), which is trivial for connected tree-like Dynkin diagrams and the entire group for cycles.

Another extension that has been made to much of the theory covered by this project is to consider the same constructions over finite fields, replacing Lie groups with finite groups of Lie type. Since the classification of finite simple groups it has been known that simple groups of Lie type form the least well understood of the three infinite families of finite simple groups.

Bibliography

[AB08] Abramenko, P. and Brown, K. S., (2008) Buildings: Theory and Applications. (New York: Springer-Verlag)
[Ar03] Arvanitoyeorgos, A., (2003) An Introduction to Lie Groups and the Geometry of Homogeneous Spaces. (Providence, Rhode Island: American Mathematical Society)
[BH09a] Blok, R. J. and Hoffman, C., (2009) On Tits' twisted Kac-Moody group. (Preprint)
[BH09b] Blok, R. J. and Hoffman, C., (2009) Curtis-Tits groups, Moufang foundations and q-CCR algebras. (Preprint)
[Bo02] Bourbaki, N., (2002) Lie Groups and Lie Algebras, Chapters 4-6. (Berlin: Springer-Verlag)
[Bo05] Bourbaki, N., (2005) Lie Groups and Lie Algebras, Chapters 7-9. (Berlin: Springer-Verlag)
[Br88] Brown, K. S., (1988) Buildings. (New York: Springer-Verlag)
[Ca05] Caprace, P-E., (2005) "Abstract" Homomorphisms of Split Kac-Moody Groups. (Ph.D. Thesis: Université Libre de Bruxelles)
[Co01] Conlon, L., (2001) Differentiable Manifolds. (Boston: Birkhäuser)
[GGH09] Glöckner, H., Gramlich, R. and Hartnick, T., (2009) Final Group Topologies, Kac-Moody Groups and Pontryagin Duality. (Preprint)
[Gl03] Glöckner, H., (2003) Direct limit Lie groups and manifolds. (J. Math. Kyoto Univ. 43, pages 1-26)
[Gl05] Glöckner, H., (2005) Fundamentals of direct limit Lie theory. (Compos. Math. 141, pages 1551-1577)
[GLS95] Gorenstein, D., Lyons, R. and Solomon, R., (1995) The Classification of the Finite Simple Groups, Number 2. Volume 40 (Providence, Rhode Island: American Mathematical Society)
[HM98] Hofmann, K. H. and Morris, S. A., (1998) The Structure of Compact Groups. (Berlin: De Gruyter)
[Hu78] Humphreys, J. E., (1978) Introduction to Lie Algebras and Representation Theory. (New York: Springer-Verlag)
[IS91] Ivanov, A. A. and Shpectorov, S. V., (1991) Geometry of sporadic groups II: Representations and Amalgams. (Encyclopedia of Mathematics and its Applications)
[Ka68] Kac, V. G., (1968) Simple irreducible graded Lie algebras of finite growth. (Akad. Nauk SSSR Ser. Mat. 32, pages 1323-1367)
[Ka84] Kac, V. G., (1984) Constructing groups associated to infinite dimensional Lie algebras. Proceedings of the conference on Infinite-dimensional groups, Berkeley. (MSRI Publ. 4, pages 167-216)
[Ka85] Kac, V. G., (1985) Infinite dimensional Lie algebras, second edition. (Cambridge: Cambridge University Press)

[Kn02] Knapp, A. W., (2002) Lie Groups Beyond an Introduction. (Berlin: Birkhäuser)
[KP83] Kac, V. G. and Peterson, D. H., (1983) Arithmetic and Geometry. (Berlin: Birkhäuser)
[KP85] Kac, V. G. and Peterson, D. H., (1985) Defining relations of certain infinite-dimensional groups. (Astérisque, pages 165-208)
[MMO73] Mack, J., Morris, S. A. and Ordman, E. T., (1973) Free topological groups and the projective dimension of a locally compact abelian group. (Proc. Amer. Math. Soc. 40, pages 303-308)
[Ma50] Markov, A. A., (1950) On free topological groups. (Amer. Math. Soc. Transl. 30, pages 195-272)
[Ma07] Mars, A., (2007) Unitary forms of locally finite Kac-Moody groups. (Diplom thesis: Technische Universität Darmstadt)
[Mo68] Moody, R. V., (1968) A new class of Lie algebras. (Journal of Algebra 10, pages 211-230)

[Mü99] Mühlherr, B., (1999) Locally split and locally finite twin buildings of 2-spherical type. (J. Reine Angew. Math. 511, pages 119-143)
[Ne99] Neumann, J. V., (1999) Invariant measures. (Providence, Rhode Island: American Mathematical Society)
[St68] Steinberg, R., (1968) Lectures on Chevalley groups. (London: Yale University Press)

[Ti86] Tits, J., (1986) Ensembles ordonnés, immeubles et sommes amalgamées. (Bull. Soc. Math. Belg. Sér. A 38, pages 367-387)
[Ti87] Tits, J., (1987) Uniqueness and presentation of Kac-Moody groups over fields. (J. Algebra 105, pages 542-573)

[Va74] Varadarajan, V. S., (1974) Lie groups, Lie algebras and their representations. (Englewood Cliffs, New Jersey: Prentice-Hall)
[Wa71] Warner, F. W., (1971) Foundations of Differentiable Manifolds and Lie Groups. (New York: Springer-Verlag)
