
Killing Forms, W-Invariants, and the Product Map

Cameron Ruether

Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements for the degree of Master of Science in Mathematics¹

Department of Mathematics and Statistics Faculty of Science University of Ottawa

© Cameron Ruether, Ottawa, Canada, 2017

¹The M.Sc. program is a joint program with Carleton University, administered by the Ottawa-Carleton Institute of Mathematics and Statistics.

Abstract

Associated to a split, semisimple linear algebraic group G is a group of invariant quadratic forms, which we denote Q(G). Namely, Q(G) is the group of quadratic forms in characters of a maximal torus which are fixed with respect to the action of the Weyl group of G. We compute Q(G) for various examples of products of the special linear, special orthogonal, and symplectic groups as well as for quotients of those examples by central subgroups. Homomorphisms between these linear algebraic groups induce homomorphisms between their groups of invariant quadratic forms. Since the linear algebraic groups are semisimple, Q(G) is isomorphic to Z^n for some n, and so the induced maps can be described by a set of integers called Rost multipliers. We consider various cases of the Kronecker tensor product map between copies of the special linear, special orthogonal, and symplectic groups. We compute the Rost multipliers of the induced map in these examples, ultimately concluding that the Rost multipliers depend only on the dimensions of the underlying vector spaces.

Dedications

To root systems - You made it all worth Weyl.

Acknowledgements

First among the people I wish to thank is my supervisor, Dr. Kirill Zainoulline. It was his guidance which kept me focused, his excitement which kept me inspired, and his reminders of deadlines which kept me motivated. His time and assistance were invaluable in allowing me to complete this work. He showed me a path up the seemingly unscalable mountain. I feel extremely fortunate to be his student.

I would also like to thank those among the Department of Mathematics and Statistics at the University of Ottawa and the School of Mathematics and Statistics at Carleton University. To the UOttawa Director of Graduate Programs Dr. Benoit Dionne, thank you for helping me navigate my degree. To the professors whose courses I attended, thank you for your excellent lectures. And to the secretarial staff, thank you for having the wealth of resources I regularly drew upon to keep my days running smoothly.

My friends and family have earned their mention here many times over. My Mother, Father, and Brother have been endlessly interested in and supportive of my work in university. I still remember my Father teaching me to multiply four, five, and even six digit numbers at the kitchen table. I have them to thank for my love of mathematics. Among friends I'd like to thank my classmate Curtis Toupin in particular. He has been an immense help in both writing this thesis, and in procrastinating writing this thesis. Shoutout to Beep Beep as well, a group of friends who keep me humble.

Finally, I would like to express my gratitude to those agencies and institutions who have helped fund my activities. Thank you to The Natural Sciences and Engineering Research Council of Canada (NSERC) for the funding I have received from Dr. Zainoulline's NSERC Discovery Grant.² Thank you to the University of Ottawa and to Memorial University for their support towards the costs of my trip to the latter to attend an Atlantic Algebra Centre conference. I am grateful for all the experiences these contributions have allowed me to have.

²NSERC Discovery Grant RGPIN-2015-04469

Contents

Preface

1 Preliminaries
1.1 Root Systems
1.2 Linear Algebraic Groups and the Hopf Algebras
1.3 Lie Algebras and the Killing Forms
1.4 Killing Form and Characters
1.5 Invariants Under the Action of the Weyl Group
1.6 Invariant Quadratic Forms

2 Tensor Product Maps
2.1 The Kronecker Product of Matrices
2.2 The Product of Quadratic Invariants
2.2.1 The SL × SL-case
2.2.2 The Even SO × SO-case
2.2.3 The Odd SO × SO-case
2.2.4 The Mixed SO × SO-case
2.2.5 The Sp × Sp-case
2.2.6 The Even Sp × SO-case
2.2.7 The Odd Sp × SO-case
2.2.8 The General Case
2.3 Quadratic Invariants of Quotient Groups
2.3.1 The SL-case
2.3.2 The SO or Sp-case
2.3.3 The Product of SLs
2.3.4 The Product of SOs and Sps

Preface

In this thesis we study certain invariant quadratic forms associated to split semisimple linear algebraic groups. In particular, we study quadratic forms in characters of a split maximal torus which are invariant under the action of the Weyl group. These invariants have been systematically studied by S. Garibaldi, A. Merkurjev, and J.-P. Serre in their celebrated lecture notes on cohomological invariants and Galois cohomology [GMS]. More precisely, they consider a functor Q which takes an appropriate linear algebraic group G (simply connected, smooth) and produces the group of quadratic invariants Q(G). Then they investigate functorial maps Q(α): Q(G) → Q(G′) where α is a homomorphism between simple, simply connected groups. When the groups are simple, their groups of invariants are infinite cyclic (isomorphic to Z), and so Q(α) is determined by a single integer r(α) called the Rost multiplier of α. Several results about r(α), and examples of computations for various α between common simple linear algebraic groups, can be found in works by A. Merkurjev [Mer], S. Garibaldi and K. Zainoulline [GZ], H. Bermudez and A. Ruozzi [BR], and others. When G is not simple, the computation of Q(G) and of r(α) becomes a much more difficult problem, as the group Q(G) turns into a product of several infinite cyclic groups, hence leading to several associated Rost multipliers. Additionally, Q(G) and Rost multipliers play an important role in the theory of cohomological invariants of linear algebraic groups. They appear in this context in works such as S. Garibaldi's [Gar], S. Baek's [Baek], A. Merkurjev's [Mer], and in his work with A. Neshitov and K. Zainoulline [MNZ]. In a recent paper [BDZ], S. Baek, R. Devyatov, and K. Zainoulline computed Q(G) when G is a quotient of some semisimple but not simple linear algebraic group by a central subgroup.

In this thesis we will be extending such computations to cases dealing with the tensor (Kronecker) product map between various combinations of the special linear, special orthogonal, and symplectic groups. Then, similar to work done by S. Garibaldi and A. Quéguiner-Mathieu in [GQ-M], the computations will also be extended to quotients of these groups by the kernel of the Kronecker product map in addition to other central subgroups. Our main object of study is the group of invariants Q(G) associated to a linear algebraic group G. We work with it in the following form, where T denotes a maximal

torus in the group G:

Q(G) = S^2(T^*)^W.

Q(G) consists of quadratic forms on the character group of the maximal torus which are invariant under the action of the Weyl group W. As mentioned above, when the linear algebraic group G is simple, this group of invariants is isomorphic to Z and hence generated by a single element q, called the normalized Killing form of G. When G is semisimple, the invariants form a free abelian group generated by the normalized Killing forms of the simple components of G. This allows us to describe the maps between groups of invariants which are induced from homomorphisms between groups by a unique collection of integers. We do so in the following cases.

Let V1, V2 be vector spaces with finite dimensions d1, d2 respectively. Denoting the special linear group by SL, the special orthogonal group by SO, and the symplectic group by Sp, let G1, G2, H be linear algebraic groups (with maximal tori T1, T2, TH respectively) in one of the following configurations.

G1   G2   H
SL   SL   SL
SO   SO   SO
Sp   SO   Sp
Sp   Sp   SO

Here Gi = Sp only when Vi is of even dimension. We then have the tensor product map defined by the Kronecker product of matrices

ρ: G1(V1) × G2(V2) → H(V1 ⊗ V2)
(A, B) ↦ A ⊗ B

which induces a map between invariant quadratic forms

ρ^*: S^2(T_H^*)^{W_H} → S^2(T_1^*)^{W_1} × S^2(T_2^*)^{W_2}.

Since each of G1, G2, H is a simple group, their groups of invariants are generated by normalized Killing forms q1, q2, qH respectively. In all considered cases we show that the map ρ^* is given by

ρ^*(q_H) = (d2 q1, d1 q2),

hence it depends only on the dimensions of the underlying vector spaces. Ultimately we show that this behaviour generalizes to the following result.

Theorem. Let V1,V2,...,Vn+m be vector spaces with finite dimensions d1, d2, . . . , dn+m respectively. Let G1,...,Gn+m,H be linear algebraic groups in one of the following configurations.

• G1, . . . , Gn+m, H = SL, all groups are the special linear group,

• G1,...,Gn+m,H = SO, all groups are the special orthogonal group,

• G1,...,Gn = Sp, Gn+1,...,Gn+m = SO, and H = SO where n is even,

• G1,...,Gn = Sp, Gn+1,...,Gn+m = SO, and H = Sp where n is odd.

Then when considering the map

ρn+m : G1(V1) × ... × Gn+m(Vn+m) → H(V1 ⊗ ... ⊗ Vn+m)

(A1, . . . , An+m) ↦ A1 ⊗ . . . ⊗ An+m

the induced map between invariant quadratic forms is given by

ρ^*_{n+m}(q_H) = (d2 · · · d_{n+m} q1, . . . , d1 · · · d̂_i · · · d_{n+m} q_i, . . . , d1 · · · d_{n+m−1} q_{n+m})

where d̂_i indicates omission.

Afterwards we provide descriptions of the group of invariant quadratic forms of G′ = G/μ_k when the k-th roots of unity μ_k are a central subgroup of G, for G one of our three commonly considered linear algebraic groups SL, SO, or Sp. We do the same for the invariant quadratic forms of (G1 × G2)/μ_k when G1, G2 are in one of the above listed configurations. In particular this includes the cases (G1 × G2)/ker(ρ).

To reach these conclusions we require several background facts and constructions, which we present in Chapter 1. We begin with an exposition on abstract root systems in section 1.1. Following [Hum90] we work from the definition of a root system to the classification theorem of simple root systems. It is here that we introduce the first notion of a Weyl group. Section 1.2 is dedicated to describing linear algebraic groups. As in [KMRT], they are presented as functors from the category of algebras over an algebraically closed field to the category of groups. Specifically, linear algebraic groups are functors which are represented by finitely generated Hopf algebras, and so section 1.2 begins with definitions of Hopf algebras and their related notions. This section then provides examples of linear algebraic groups, among which are the special linear, special orthogonal, and symplectic groups; these examples serve to define these groups. Next, a connection is made between root systems and linear algebraic groups via a discussion of Lie algebras. In section 1.3 we define Lie algebras, describe how a Lie algebra arises from a linear algebraic group, and provide a method of computing that Lie algebra. Here we introduce the Killing form, a bilinear form on a Lie algebra defined in terms of its adjoint representation. The section ends with a discussion of how a Lie algebra produces a unique root system, thereby making the connection from linear algebraic group to root system.

However, the root system associated with a linear algebraic group may be obtained in another way, by considering the action of the group on its Lie algebra, and this comprises the majority of section 1.4. This perspective provides another description of the Killing form, one which we use later in this thesis, and so here we summarize the Killing forms of various linear algebraic groups. We reach the discussion of invariant quadratic forms in section 1.5, where the structure of the group of invariant quadratic forms is described: it is a free abelian group of finite rank. The normalized Killing form is introduced at the beginning of section 1.6. The majority of section 1.6 is spent developing the notion of Q(G) from [GMS] to ultimately show that any homomorphism between smooth linear algebraic groups does induce a map between their invariant quadratic forms.

Having covered the preliminaries, Chapter 2 starts by defining the Kronecker product map. This, together with examples of the kernel and of restrictions of the codomain of the Kronecker product map, comprises section 2.1. Section 2.2 contains the proof of the Theorem stated above: first the computations for the examples involving the product of two groups are presented, and then these results are used in an inductive proof of the Theorem. Finally, the descriptions of the groups of invariant quadratic forms of quotient groups are contained in section 2.3. Quotients of single simple groups by roots of unity forming central subgroups are covered first; such quotients of products of special linear groups and of products of special orthogonal and/or symplectic groups follow.

Chapter 1

Preliminaries

Our aim in this chapter is to introduce the definitions and notations of the objects we will work with throughout the paper. We also explain various auxiliary results which will be used to prove the main results of Chapter 2. This chapter also provides brief overviews of the theory of root systems, Lie algebras, linear algebraic groups, and invariant quadratic forms. Most of the material presented here is taken from Humphreys' book on root systems and his book on the representation theory of Lie algebras, [Hum90] and [Hum72] respectively, as well as from Knus, Merkurjev, Rost, and Tignol's The Book of Involutions, [KMRT].

1.1 Root Systems

We introduce the notion of a root system following the book by Humphreys [Hum90]. We also introduce the notions of simple systems and Coxeter graphs of root systems, ultimately concluding with the use of Dynkin diagrams to classify root systems. Given a real euclidean vector space V, associated to each nonzero α ∈ V there is a reflection s_α defined by

s_α(λ) = λ − 2 (λ, α)/(α, α) α   ∀λ ∈ V

where (−, −) denotes a symmetric, positive definite, bilinear form on V. Inspired by geometric realizations of finite reflection groups, we wish to consider finite subsets of V which are stable with respect to each associated reflection. Such collections are called root systems.
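As a small numerical illustration (my own sketch, not part of the thesis), the reflection formula can be checked directly for the standard dot product on R^n; the helper names below are hypothetical.

```python
# Sketch of the reflection s_alpha(lam) = lam - 2*(lam, alpha)/(alpha, alpha) * alpha
# with respect to the standard dot product on R^n.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(alpha, lam):
    c = 2 * dot(lam, alpha) / dot(alpha, alpha)
    return tuple(l - c * a for l, a in zip(lam, alpha))

alpha = (1.0, -1.0)
lam = (3.0, 5.0)

# s_alpha negates alpha itself ...
assert reflect(alpha, alpha) == (-1.0, 1.0)
# ... fixes vectors orthogonal to alpha ...
assert reflect(alpha, (2.0, 2.0)) == (2.0, 2.0)
# ... and is an involution: s_alpha applied twice is the identity.
assert reflect(alpha, reflect(alpha, lam)) == lam
```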

Definition 1.1.1. Given a vector space V and a finite set Φ ⊂ V , we say Φ is a root system if

1 1. PRELIMINARIES 2

(R1) Φ ∩ Span{α} = {α, −α} ∀α ∈ Φ,

(R2) sα(Φ) = Φ ∀α ∈ Φ.

Additionally, we say that the root system is crystallographic if we also require

(R3) 2(α, β)/(β, β) ∈ Z ∀α, β ∈ Φ.

Elements of Φ are called roots. Given root systems Φ1, . . . , Φn, subsets of V1, . . . , Vn respectively, Φ := ∪_{i=1}^{n} Φi is a root system in V := ⊕_{i=1}^{n} Vi. We call Φ ⊂ V irreducible if it cannot be written as such a sum, and we call it reduced if V = Span(Φ). Associated to each root system is a finite reflection group generated by the associated reflections

W := hsα | α ∈ Φi ⊆ End(V )

called the Weyl group of Φ. W acts naturally on V , stabilizing Φ. Two root systems Φ ⊂ V ,Φ0 ⊂ V 0 are considered isomorphic if there exists an isomorphism of vector spaces ϕ: V −→∼ V 0 such that ϕ(Φ) = Φ0. All of the information in a root system can be recovered from a small subset of the original roots, called a simple system, elements of which will be called simple roots. Thus it will be these simple systems which we use to classify root systems.

Definition 1.1.2. A subset ∆ ⊂ Φ is called a simple system if

(i) ∆ is a basis for Span(Φ) ⊆ V,

(ii) ∀α ∈ Φ, α = Σ_{γ∈∆} c_γ γ such that each c_γ ∈ R, and either c_γ ≥ 0 ∀γ or c_γ ≤ 0 ∀γ.

Simple systems exist for all reduced root systems [Hum90, Theorem 1.3], and so in the following discussion we assume all root systems are reduced. Simple systems are not unique, and in fact any two simple systems in Φ are conjugate under the action of W .

Theorem 1.1.3. [Hum90, Theorem 1.4] If ∆, ∆0 ⊂ Φ are two simple systems, then there exists w ∈ W such that w(∆) = ∆0. (Here w(∆) := {w(α) | α ∈ ∆}).

The following theorem and lemma demonstrate how the entire root system Φ can be recovered from any simple system ∆ ⊂ Φ.

Theorem 1.1.4. [Hum90, Theorem 1.5] Fix a simple system ∆ ⊂ Φ. Then W is generated by the set of simple reflections {s_α | α ∈ ∆}. I.e. W = ⟨s_α | α ∈ ∆⟩.

Lemma 1.1.5. [Hum90, Corollary 1.5] Given ∆, for all β ∈ Φ, there exists w ∈ W such that w(β) ∈ ∆. In particular W (∆) = Φ.

Hence given a simple system ∆, we can obtain W = hsα | α ∈ ∆i, and then in turn reconstruct Φ = W (∆). Since ∆ is a basis of V , |∆| = dim(V ) and is therefore an invariant of the root system called the rank of the root system.

Remark 1.1.6. In the case that Φ is crystallographic, W stabilizes the lattice L := Span_Z(∆) and is therefore also crystallographic. (G ≤ GL(V) is called crystallographic if it stabilizes some lattice in V.) Furthermore, if we consider some β ∈ Φ, by the above lemma there exists w ∈ W such that w(β) ∈ ∆ and hence in L. This means that β = w^{−1}(w(β)) ∈ L and therefore Φ ⊂ L. In a crystallographic root system, all roots are integral linear combinations of simple roots.
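Both crystallographic conditions can be checked mechanically on a small example; the sketch below (my own, under the assumption that the B_2 system is Φ = {±e_1, ±e_2, ±e_1 ± e_2} with simple roots e_1 − e_2 and e_2) verifies (R3) and the integrality of root coordinates in the simple-root basis.

```python
# Check the crystallographic conditions for the B_2 root system
# Phi = {±e_1, ±e_2, ±e_1 ± e_2}, simple system {e_1 - e_2, e_2}.
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

Phi = [(1, 0), (-1, 0), (0, 1), (0, -1),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]

# (R3): every Cartan number 2(a, b)/(b, b) is an integer.
for a in Phi:
    for b in Phi:
        assert Fraction(2 * dot(a, b), dot(b, b)).denominator == 1

# Every root (x, y) = x*(e_1 - e_2) + (x + y)*e_2 with integer coefficients,
# illustrating that Phi lies in the lattice L = Span_Z(Delta).
for (x, y) in Phi:
    c1, c2 = x, x + y          # integral coefficients
    assert (c1 * 1 + c2 * 0, c1 * (-1) + c2 * 1) == (x, y)
```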

Now, in order to classify root systems we need only classify possible simple systems. To do so we introduce the Coxeter graph of a root system.

Definition 1.1.7. Given a root system Φ with simple system ∆ ⊂ Φ, the Coxeter graph Γ (with vertex set V(Γ) and edge set E(Γ)) is an undirected, edge-labeled graph such that

(i) V (Γ) = ∆,

(ii) The edge (α, β) is included in E(Γ) if m(α, β) := ord(sαsβ) ≥ 3, in which case the edge is labeled with the integer m(α, β).

It is important to note here that the above definition does not depend on our choice of simple system ∆. Had we instead chosen the simple system w(∆) for some w ∈ W, the map ∆ → w(∆) induces an isomorphism of Coxeter graphs since m(wα, wβ) = m(α, β). (This follows from the fact that for all α ∈ Φ and w ∈ W we have w s_α w^{−1} = s_{wα}, which implies (s_{wα} s_{wβ})^n = w (s_α s_β)^n w^{−1}, and therefore ord(s_{wα} s_{wβ}) = ord(s_α s_β).) Given two root systems Φ, Φ′ with isomorphic Coxeter graphs, the graph isomorphism induces an isometry between their vector spaces, and so Φ ≅ Φ′ as root systems [Hum90, Proposition 2.1]. Hence our classification of root systems will be presented in terms of possible Coxeter graphs. Since we need only classify irreducible root systems, we note the following.

Proposition 1.1.8. If Φ is an irreducible root system, then its Coxeter graph Γ is connected.

Proof: Let Φ be a root system, let Γ be its Coxeter graph, and assume that Γ is disconnected. Let Γ1, Γ2 ⊂ Γ be two disjoint subgraphs with Γ1 ∪ Γ2 = Γ, and let ∆1, ∆2 ⊂ ∆ be the respective vertex sets.

Since there is no edge between any α ∈ ∆1 and β ∈ ∆2, we have that m(α, β) = 2. Therefore (α, β) = 0, and so the two sets ∆1, ∆2 are mutually orthogonal. Because ∆ is a basis for V , we have a decomposition

(V, Φ) = (Span(∆1), Φ1) ⊕ (Span(∆2), Φ2)

where Φi = Φ ∩ Span(∆i). Finally note that since the decomposition is orthogonal, Φ2 is fixed pointwise by s_α for any α ∈ ∆1, and vice versa. Therefore the reflections corresponding to roots in Φi permute Φi only, and so Φi ⊂ Span(∆i) is a root system. The decomposition then means that Φ is reducible, and so by contrapositive, irreducible root systems produce connected Coxeter graphs.

Now we are left with deciding whether a given graph can be obtained as the Coxeter graph of a root system. The decision procedure is to determine if the graph is positive definite or not.

Definition 1.1.9. Given an arbitrary undirected, integer edge-labeled graph Γ, let n := |V(Γ)|, and associate to Γ an n × n matrix A, where the entry a_ij := −cos(π / m(v_i, v_j)). Here the v_i denote the vertices of Γ, and m(v_i, v_j) equals the edge label between v_i and v_j if they are adjacent, or 2 if they are not. Since Γ is undirected, a_ij = a_ji and so A defines a symmetric bilinear form, B, on R^n. We call Γ positive definite (positive semidefinite, negative definite, etc.) if B is positive definite (positive semidefinite, etc.).
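This definition is easy to test in practice. The sketch below (my own illustration; the helper names are hypothetical) builds the matrix A for a labeled graph and tests positive definiteness via leading principal minors (Sylvester's criterion), contrasting the A_3 path with the triangle of edges labeled 3, which is positive semidefinite but not definite.

```python
# Build a_ij = -cos(pi / m(v_i, v_j)) from an edge-label dictionary and
# test positive definiteness via leading principal minors.
import math

def coxeter_matrix(labels):
    # labels: dict {(i, j): m} for edges; missing pairs mean m = 2, and m(i, i) = 1.
    n = 1 + max(max(p) for p in labels)
    def m(i, j):
        if i == j:
            return 1
        return labels.get((i, j), labels.get((j, i), 2))
    return [[-math.cos(math.pi / m(i, j)) for j in range(n)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def positive_definite(M):
    return all(det([row[:k] for row in M[:k]]) > 1e-12 for k in range(1, len(M) + 1))

# The A_3 path (two edges labeled 3) is positive definite ...
assert positive_definite(coxeter_matrix({(0, 1): 3, (1, 2): 3}))
# ... but the triangle with all edges labeled 3 (affine type) is not.
assert not positive_definite(coxeter_matrix({(0, 1): 3, (1, 2): 3, (0, 2): 3}))
```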

Proposition 1.1.10. The matrix A associated to the Coxeter graph of a root system is positive definite.

Proof: Let Φ be a root system in V with simple system ∆ = {γ1, . . . , γn} and Coxeter graph Γ. Let A be the matrix associated to Γ, i.e.

A = (−cos(π / m(γ_i, γ_j)))_{i,j=1}^{n}.

By [Hum90, Corollary 1.3], if α ≠ β ∈ ∆, then (α, β) ≤ 0. Therefore by the cosine formula for an inner product, (α, β) = ||α|| ||β|| cos(θ) ≤ 0 implies that θ ∈ [π/2, π), where θ is the angle between the roots α and β.

Since the element s_α s_β ∈ W acts as a rotation with order m(α, β), it rotates by an angle of 2π/m(α, β), and therefore the angle between the reflecting hyperplanes is π/m(α, β).

If α ≠ β then θ ≥ π/2, meaning that θ = π − π/m(α, β) and cos(θ) = −cos(π/m(α, β)). If α = β then θ = 0 and m(α, β) = 1, which again means cos(θ) = −cos(π/m(α, β)). Hence

(α, β) = ||α|| ||β|| (−cos(π / m(α, β))).

Now let v = (c1, . . . , cn) ≠ 0 ∈ R^n and let w = Σ_{i=1}^{n} (c_i/||γ_i||) γ_i ∈ V. We compute

v^T A v = Σ_{i,j=1}^{n} c_i c_j a_ij = Σ_{i,j=1}^{n} c_i c_j (−cos(π / m(γ_i, γ_j)))

and since w ≠ 0,

0 < (w, w) = Σ_{i,j=1}^{n} (c_i/||γ_i||)(c_j/||γ_j||) (γ_i, γ_j)
= Σ_{i,j=1}^{n} (c_i/||γ_i||)(c_j/||γ_j||) ||γ_i|| ||γ_j|| (−cos(π / m(γ_i, γ_j)))
= Σ_{i,j=1}^{n} c_i c_j (−cos(π / m(γ_i, γ_j))) = v^T A v.

Hence the matrix A associated to a root system is positive definite.

We are now able to classify irreducible root systems.

Theorem 1.1.11. [Hum90, Theorem 2.7] The only connected, positive definite graphs are the following.

[Coxeter graphs omitted: A_n (a path); B_n (a path with one edge labeled 4); D_n (a path with one branch point); E_6, E_7, E_8 (paths with one branch point); F_4 (a path with one edge labeled 4); H_3 and H_4 (paths with one edge labeled 5); I_2(m) (a single edge labeled m).]

Here the letters name the class of the root system/graph, and the subscripted number its rank. As is usually done, we have omitted the edge labels when the edge has m(α, β) = 3.

The graphs above classify irreducible root systems satisfying (R1) and (R2). There is a similar classification of crystallographic root systems found by refining our current classification. First, some consequences of enforcing (R3).

Proposition 1.1.12. [Hum90, Proposition 2.8] If Φ, and hence W, is crystallographic, then each integer m(α, β) is in {2, 3, 4, 6} when α ≠ β ∈ ∆.

Proposition 1.1.13. If Φ is irreducible and crystallographic, then each root must have one of at most two lengths. That is, |{||α|| | α ∈ Φ}| ≤ 2.

Proof: Let α, β ∈ ∆. Recall that (R3) states 2(α, β)/(β, β) ∈ Z for all α, β ∈ Φ. Therefore

2(α, β)/(β, β) = 2 ||α|| ||β|| (−cos(π/m(α, β))) / ||β||^2 = (||α||/||β||) (−2cos(π/m(α, β))) ∈ Z

and symmetrically

(||β||/||α||) (−2cos(π/m(α, β))) ∈ Z.

By the previous proposition there are only a small number of possibilities for cos(π/m(α, β)).

Case 1: m(α, β) = 2 ⇒ −2cos(π/m(α, β)) = 0, and no information may be gained.

Case 2: m(α, β) = 3 ⇒ −2cos(π/m(α, β)) = −1. Therefore both ||α||/||β|| and ||β||/||α|| are themselves positive integers. Hence they are equal to 1, and ||α|| = ||β||.

Case 3: m(α, β) = 4 ⇒ −2cos(π/m(α, β)) = −√2. So there exist n, m ∈ Z such that

||α||/||β|| = n/√2, ||β||/||α|| = m/√2 ⇒ nm/2 = 1.

Without loss of generality this means that n = 1, m = 2, and ||β|| = √2 ||α||.

Case 4: m(α, β) = 6 ⇒ −2cos(π/m(α, β)) = −√3. By a similar argument this produces ||β|| = √3 ||α||.

Finally it is enough to notice that in each positive definite graph there are at most two edge labels. Hence each pair of simple roots connected by a path of edges labeled 3 are the same length, and those pairs connected by an edge with a larger label have lengths related as described above. This ensures that there are at most two root lengths among the simple roots of an irreducible, crystallographic root system. Then by Lemma 1.1.5, and the fact that all w ∈ W are orthogonal transformations, for all α ∈ Φ there exists γ ∈ ∆ such that ||α|| = ||γ||.
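The case analysis above hinges on the four values of −2cos(π/m); a quick numerical confirmation (my own check, not part of the proof):

```python
# The crystallographic labels m = 2, 3, 4, 6 give
# -2*cos(pi/m) = 0, -1, -sqrt(2), -sqrt(3), forcing length ratios 1, sqrt(2), sqrt(3).
import math

values = {m: -2 * math.cos(math.pi / m) for m in (2, 3, 4, 6)}
expected = {2: 0.0, 3: -1.0, 4: -math.sqrt(2), 6: -math.sqrt(3)}
for m in values:
    assert abs(values[m] - expected[m]) < 1e-12
```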

Based on these restrictions, Coxeter graphs of type H3, H4, and I2(m) for all m ∉ {3, 4, 6} do not arise from crystallographic root systems. We modify the remaining Coxeter graphs to include the additional information about root lengths. For each edge in the Coxeter graph with label greater than 3, we replace it with a directed edge pointing toward the shorter root. It is also customary to replace the explicit edge labels of 4 and 6 with double and triple edges respectively. The resulting graphs are called Dynkin diagrams, and they form our classification of crystallographic root systems.

Theorem 1.1.14. The only graphs arising from irreducible crystallographic root systems are the following.

[Dynkin diagrams omitted: A_n; B_n and C_n (a double edge, oriented oppositely in the two cases); D_n; E_6; E_7; E_8; F_4 (a double edge); G_2 (a triple edge).]

Here single, double, and triple edges represent edge labels of 3, 4, and 6 respec- tively.

The above theorem states slightly more than we have shown: it indicates that there are in fact root systems giving rise to each of the listed Dynkin diagrams. Producing examples of such root systems is straightforward; simply choose vectors in euclidean space with appropriate dot products and restrict to their span. The first four examples below are found in [KMRT], while the remainder are taken from [Bou]. Let e_i be the i-th standard basis vector of R^n.

A_n:
Vector space: V = {(x_1, . . . , x_{n+1}) ∈ R^{n+1} | Σ_{i=1}^{n+1} x_i = 0}
Root system: Φ = {e_i − e_j | 1 ≤ i ≠ j ≤ n + 1}
Simple system: ∆ = {e_1 − e_2, e_2 − e_3, . . . , e_n − e_{n+1}}

B_n:
Vector space: V = R^n
Root system: Φ = {±e_i ± e_j, ±e_k | 1 ≤ i < j ≤ n, 1 ≤ k ≤ n}
Simple system: ∆ = {e_1 − e_2, e_2 − e_3, . . . , e_{n−1} − e_n, e_n}

C_n:
Vector space: V = R^n
Root system: Φ = {±e_i ± e_j, ±2e_k | 1 ≤ i < j ≤ n, 1 ≤ k ≤ n}
Simple system: ∆ = {e_1 − e_2, e_2 − e_3, . . . , e_{n−1} − e_n, 2e_n}

D_n:
Vector space: V = R^n
Root system: Φ = {±e_i ± e_j | 1 ≤ i < j ≤ n}
Simple system: ∆ = {e_1 − e_2, e_2 − e_3, . . . , e_{n−1} − e_n, e_{n−1} + e_n}

E_6:
Vector space: V = R^8
Root system: Φ = {±e_i ± e_j, ±(1/2)((−1)^{d_1}, . . . , (−1)^{d_5}, −1, −1, 1) | (i), (ii)}
Simple system: ∆ = {e_1 + e_2, e_2 − e_1, e_3 − e_2, . . . , e_5 − e_4, (1/2, −1/2, . . . , −1/2, 1/2)}
with (i): 1 ≤ i < j ≤ 5 and (ii): Σ_{i=1}^{5} d_i is even.

E_7:
Vector space: V = R^8
Root system: Φ = {±e_i ± e_j, ±(e_7 − e_8), ±(1/2)((−1)^{d_1}, . . . , (−1)^{d_6}, 1, −1) | (i), (ii)}
Simple system: ∆ = {e_1 + e_2, e_2 − e_1, e_3 − e_2, . . . , e_6 − e_5, (1/2, −1/2, . . . , −1/2, 1/2)}
with (i): 1 ≤ i < j ≤ 6 and (ii): Σ_{i=1}^{6} d_i is odd.

E_8:
Vector space: V = R^8
Root system: Φ = {±e_i ± e_j, (1/2)((−1)^{d_1}, . . . , (−1)^{d_8}) | (i), (ii)}
Simple system: ∆ = {e_1 + e_2, e_2 − e_1, e_3 − e_2, . . . , e_7 − e_6, (1/2, −1/2, . . . , −1/2, 1/2)}
with (i): 1 ≤ i < j ≤ 8 and (ii): Σ_{i=1}^{8} d_i is even.

F_4:
Vector space: V = R^4
Root system: Φ = {±e_i ± e_j, ±e_k, (±1/2, ±1/2, ±1/2, ±1/2) | 1 ≤ i < j ≤ 4, 1 ≤ k ≤ 4}
Simple system: ∆ = {(1, −1, 0, 0), (0, 1, −1, 0), (0, 0, 1, 0), (−1/2, −1/2, −1/2, −1/2)}

G_2:
Vector space: V = {(x, y, z) ∈ R^3 | x + y + z = 0}
Root system: Φ = {±(e_i − e_j), ±(2, −1, −1), ±(−1, 2, −1), ±(−1, −1, 2) | 1 ≤ i < j ≤ 3}
Simple system: ∆ = {(1, −1, 0), (−1, 2, −1)}

1.2 Linear Algebraic Groups and the Hopf Algebras

Our approach to linear algebraic groups will be through the use of group schemes as covered in [KMRT]. In order to introduce group schemes we must first introduce Hopf algebras. Throughout this section F will be a field.

Definition 1.2.1. By a Hopf algebra over F we mean an F-bialgebra A together with an antipode i: A → A. Specifically, if

m: A ⊗_F A → A

u: F → A

are the multiplication and unit maps of A as an F-algebra, then the comultiplication and counit maps

c: A → A ⊗_F A

ε: A → F

are F-algebra homomorphisms such that the following diagrams commute.

[Commutative diagrams omitted. In equations, they assert the coassociativity of c, the counit property, and the antipode property:

(c ⊗ Id) ∘ c = (Id ⊗ c) ∘ c,

(ε ⊗ Id) ∘ c = Id (identifying F ⊗_F A = A),

m ∘ (i ⊗ Id) ∘ c = u ∘ ε.]

The Hopf algebra is called commutative if m(a ⊗ b) = m(b ⊗ a) for all a, b ∈ A, and it is called cocommutative if c(a) = Σ x ⊗ y = Σ y ⊗ x for all a ∈ A.

Definition 1.2.2. If A and B are Hopf algebras over F, then f : A → B is a Hopf algebra homomorphism if it respects the Hopf algebra structures on A and B

f ◦ mA = mB ◦ (f ⊗ f),

f ◦ uA = uB,

(f ⊗ f) ◦ cA = cB ◦ f,

εA = εB ◦ f,

f ◦ iA = iB ◦ f. That is, f is an algebra homomorphism, coalgebra homomorphism, and preserves the antipode.

Definition 1.2.3. If A is an F-Hopf algebra, J ⊂ A is called a Hopf ideal if

m(A ⊗F J + J ⊗F A) ⊆ J,

c(J) ⊆ A ⊗_F J + J ⊗_F A,

ε(J) = 0,

i(J) ⊆ J.

That is, J is an algebra ideal, a coalgebra ideal, and closed under the antipode.

For every Hopf ideal J ⊂ A there is a natural Hopf algebra structure on A/J, and the quotient map q: A → A/J is a surjective Hopf algebra homomorphism. Also, as is to be expected, if f: A → B is a Hopf algebra homomorphism, then ker(f) ⊆ A is a Hopf ideal and A/ker(f) ≅ Img(f) ⊆ B. Hopf algebras are present in the theory of group schemes because their structure allows us to define a group operation on the set of morphisms from the Hopf algebra to an F-algebra.

Let A be an F-Hopf algebra, R be an F-algebra, and let Hom_F(A, R) denote the set of F-algebra homomorphisms from A to R. For f, g ∈ Hom_F(A, R) we define their product to be

f · g := m_R ∘ (f ⊗ g) ∘ c_A.

With this multiplication Hom_F(A, R) is a group. Its identity element is e := u_R ∘ ε_A, and inverses are given by f^{−1} := f ∘ i_A.

Hence Hom_F(A, −): Alg_F → Grp is a functor from the category of F-algebras to the category of groups. These functors are precisely those which we will call group schemes.

Definition 1.2.4. An affine group scheme G over F is a functor

G: AlgF → Grp

which is isomorphic to HomF(A, −) for some F-Hopf algebra A.

By Yoneda's Lemma, if a group scheme G is isomorphic to Hom_F(A, −) for a Hopf algebra A, then A is the unique such Hopf algebra (up to isomorphism). Denote this Hopf algebra by F[G]. G is said to be represented by F[G]. If G is a group scheme over F and R ∈ Alg_F, then the group G(R) is called the group of R points of G. A group scheme G is said to be commutative if G(R) is commutative for all R ∈ Alg_F.

Definition 1.2.5. Let G, H be group schemes over F. A group scheme homomorphism ρ: G → H is a natural transformation of functors between G and H. Yoneda's Lemma ensures that ρ is completely determined by a unique Hopf algebra homomorphism ρ^*: F[H] → F[G], called the comorphism of ρ.

If ρ: G → H is a group scheme homomorphism, we will denote by ρ_R: G(R) → H(R) the group homomorphism between the groups of R points. Since the study of group schemes is equivalent to the study of their representing Hopf algebras, we would like to restrict our attention to those group schemes represented by easy to handle Hopf algebras. This leads to the notion of algebraic groups.

Definition 1.2.6. A group scheme G over F is called an algebraic group if the representing Hopf algebra F[G] is finitely generated as an F-algebra.

The following are some standard examples of algebraic groups.

Example 1.2.7. The trivial group scheme, 1, defined as

1: Alg_F → Grp
R ↦ {1}

is represented by F[1] = F, the trivial Hopf algebra. Since F ⊗F F = F, the Hopf algebra structure is given by taking m, u, c, ε, i to all be the identity on F.

Example 1.2.8. Let V be a finite dimensional F-vector space. The functor

V: Alg_F → Grp
R ↦ V ⊗_F R

is represented by F[V] = S(V^*), the symmetric algebra of the dual of V. It is finitely generated by {f_1, f_2, . . . , f_n}, a basis of V^*. The Hopf algebra structure on S(V^*) is given by

m(f_i ⊗ f_j) = f_i f_j, u(a) = a · 1,

c(fi) := fi ⊗ 1 + 1 ⊗ fi,

ε(fi) := 0,

i(fi) := −fi. A particular case of this functor is when V = F, in which case we denote the functor Ga, called the additive group.

G_a: Alg_F → Grp
R ↦ F ⊗_F R = R.

The group operation induced by the Hopf structure defined above is simply addition in R. That is, if (R, ·, +) is an F-algebra, then G_a(R) = (R, +) is the additive group of R. The representing Hopf algebra of the additive group, F[G_a] = S(F^*) = F[t], is the ring of polynomials over F.

Example 1.2.9. Let A be a unital, associative F-algebra of dimension n. A has a faithful representation as a subalgebra of M_n(F) (n × n matrices with entries in F) and hence has a norm map N: A → F induced by the determinant of the representatives. N can be considered as an element of S^n(A^*) ⊂ S(A^*) and so we can form an algebra B := S(A^*)[1/N]. If we equip B with a comultiplication induced by the map A^* → A^* ⊗_F A^* dual to the multiplication m_A: A ⊗_F A → A (that is, for f ∈ A^*, c(f) = f ∘ m_A), then Hom_F(B, R) becomes a group for all R ∈ Alg_F. This condition then implies that B has a unique counit and antipode making it a Hopf algebra [KMRT, Remark 20.1].

Denoting GL_1(A) := Hom_F(B, −), and the set of units of A by A^×, we get a group scheme acting as follows.

GL_1(A): Alg_F → Grp,   R ↦ (A ⊗_F R)^×.

The general linear group is the case of the above functor when A = End_F(V), the algebra of endomorphisms of a finite dimensional F-vector space V. In this case we denote GL_1(End_F(V)) = GL(V), and the groups of R-points are given by

GL(V): Alg_F → Grp,   R ↦ GL(V ⊗_F R)

where GL(V ⊗_F R) is the group of invertible R-linear endomorphisms of V ⊗_F R. A further convention is to denote GL(F^n) = GL_n(F).

The counterpart to the additive group is the multiplicative group G_m := GL_1(F)

G_m: Alg_F → Grp,   R ↦ R^×.

G_m sends an F-algebra (R, ·, +) to the multiplicative group of its units (R^×, ·). In this case the representing Hopf algebra is F[G_m] = F[t, t⁻¹], the ring of Laurent polynomials over F. The Hopf algebra structure is then simply the ring structure alongside c(t) = t ⊗ t, ε(t) = 1, i(t) = t⁻¹.

Deviating slightly from the content in [KMRT], we will discuss group scheme analogues of some classical linear algebraic groups (subgroups of GL_n(F) which are also algebraic varieties, that is, they are the zero locus of some family of polynomials on M_n(F)). The special linear group SL_n(F), the orthogonal and special orthogonal groups O_n(F) and SO_n(F) respectively, as well as the symplectic group Sp_2n(F), are examples. Each of these groups has a realization as a group scheme which is a subgroup of GL_n(F). First we use [KMRT] to define what we mean by a subgroup of a group scheme, and following that we give the group schemes corresponding to the listed examples.

Definition 1.2.10. Let H, G be group schemes over F. H is called a (closed) subgroup of G if there exists a Hopf ideal J ⊂ F[G] such that F[H] = F[G]/J. This produces a homomorphism of group schemes ρ: H → G induced by the quotient map q: F[G] → F[H]. ρ is called a (closed) embedding. Furthermore, since q is surjective, each ρ_R: H(R) → G(R) is injective, allowing us to identify H(R) as a subgroup of G(R) for all R ∈ Alg_F.

Example 1.2.11. The special linear group is typically defined as

SL_n(F) := {A ∈ GL_n(F) | det(A) = 1}.

The representing algebra of GL_n(F) is F[GL_n(F)] = F[x_ij, 1/det] (where the x_ij form a dual basis of End_F(V)*), and it contains the determinant polynomial det = det([x_ij]_{i,j=1}^n). The ring ideal generated by det − 1 is also a Hopf ideal in F[x_ij, 1/det], and so we can consider the group scheme represented by the quotient. It follows that

Hom_F( F[x_ij, 1/det] / ⟨det − 1⟩, R ) = {A ∈ GL_n(R) | det(A) = 1} = SL_n(R).

We call this group scheme the special linear group

SLn(F) : AlgF → Grp

R 7→ SLn(R)

and it is a subgroup of GLn(F).

The remainder of the listed examples are built similarly. The defining polynomials of the variety are used to generate a Hopf ideal, which then corresponds to a subgroup scheme of GL_n(F).

Example 1.2.12. The orthogonal and special orthogonal groups are given as follows:

O_n(F) = {A ∈ GL_n(F) | A^T A = I},

SO_n(F) = {A ∈ GL_n(F) | A^T A = I, det(A) = 1}.

The Hopf ideal generated by the polynomials which ensure that the columns of the resulting matrices are orthonormal, that is,

J := ⟨ −δ_ij + Σ_{k=1}^n x_ki x_kj | 1 ≤ i, j ≤ n ⟩

(δ_ij is the Kronecker delta function), and the same Hopf ideal with det − 1 as an added generator,

J′ := ⟨ det − 1, −δ_ij + Σ_{k=1}^n x_ki x_kj | 1 ≤ i, j ≤ n ⟩,

will yield the representing algebras of O_n(F), the orthogonal group scheme, and SO_n(F), the special orthogonal group scheme, respectively.

F[O_n(F)] = F[x_ij, 1/det] / J,

F[SO_n(F)] = F[x_ij, 1/det] / J′.

Example 1.2.13. When our vector space is even dimensional, i.e. we are working with F^2n, we can use the following matrix to define the symplectic group:

K := ( 0  I_n ; −I_n  0 )

where I_n is the n × n identity matrix and the blocks are n × n. Using K, the symplectic group is

Sp_2n(F) := {A ∈ GL_2n(F) | A^T K A = K}.

Just as with the orthogonal groups, we can generate a Hopf ideal from the relevant relations:

J := ⟨ 1 + Σ_{k=1}^n (x_{n+k,i} x_{k,n+i} − x_{k,i} x_{n+k,n+i}) | 1 ≤ i ≤ n ⟩.

Then the symplectic group scheme, Sp2n(F), is the group scheme represented by

F[Sp_2n(F)] = F[x_ij, 1/det] / J.
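To make the ideal relations concrete, here is a small sanity check of our own (not from the text; the matrices `R` and `S` are our choices): the orthogonal generators −δ_ij + Σ_k x_ki x_kj vanish at a rational point of O_2, and the symplectic generator vanishes at a point of Sp_2 = SL_2.

```python
from fractions import Fraction as Fr

# A rational point of O_2(Q): the 3-4-5 rotation matrix.
R = [[Fr(3, 5), Fr(-4, 5)], [Fr(4, 5), Fr(3, 5)]]

# Each generator -delta_ij + sum_k x_ki x_kj of the Hopf ideal J vanishes
# at R, i.e. R's columns are orthonormal.
for i in range(2):
    for j in range(2):
        gen = -(1 if i == j else 0) + sum(R[k][i] * R[k][j] for k in range(2))
        assert gen == 0

# A point of Sp_2(Q) (the case n = 1, where Sp_2 = SL_2): any matrix of
# determinant 1 satisfies A^T K A = K.
S = [[Fr(2), Fr(3)], [Fr(1), Fr(2)]]  # det = 2*2 - 3*1 = 1

# The symplectic generator with n = i = k = 1 reads
# 1 + (x_{21} x_{12} - x_{11} x_{22}) = 1 - det(S), which is 0 here.
assert 1 + (S[1][0] * S[0][1] - S[0][0] * S[1][1]) == 0
```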

Remark 1.2.14. The orthogonal and symplectic groups can be described more generally as groups of matrices which preserve a bilinear form B on V. The group

G = {A ∈ End(V ) | B(Ax, Ay) = B(x, y) ∀x, y ∈ V } is called the orthogonal group when B is symmetric, and the symplectic group when B is antisymmetric. If Ω is a symmetric matrix, it defines a symmetric bilinear form via B(x, y) = xT Ωy, and therefore an orthogonal group denoted

SO(V, Ω) = {A ∈ End(V ) | AT ΩA = Ω}.

Similarly, for any antisymmetric matrix J we have a corresponding symplectic group Sp(V, J) = {A ∈ End(V) | A^T J A = J}.

In the case that the bilinear form in question is non-degenerate, the choice of representing matrix, Ω or J, is simply equivalent to a choice of basis for V. This is because all non-degenerate symmetric (resp. antisymmetric) matrices over an algebraically closed field are conjugate in M_{dim(V)}(F).

Lemma 1.2.15. Let Ω be a symmetric matrix representing a non-degenerate bilinear form B on V. Then there exists a matrix P such that P^T Ω P = I.

Proof: Let dim(V) = n. First we note that there exists x_0 ∈ V such that B(x_0, x_0) ≠ 0 (otherwise B(x, x) = 0 for all x ∈ V, which is equivalent to B being antisymmetric). Let B(x_0, x_0) = c and choose α ∈ F such that α² = c⁻¹. Set v_1 := αx_0. Therefore

B(v_1, v_1) = B(αx_0, αx_0) = α² B(x_0, x_0) = c⁻¹ c = 1.

We may then induct on the dimension of V by considering the subspace {v_1}^⊥, which is of lesser dimension. B restricted to {v_1}^⊥ remains a symmetric, nondegenerate bilinear form and so produces a basis {v_2, ..., v_n} of {v_1}^⊥ such that B(v_i, v_j) = δ_ij (an equivalent condition to being conjugate to the identity, as below). This basis can then be extended to one of V sharing the same property, namely {v_1, v_2, ..., v_n}, in which B(v_i, v_j) = δ_ij.

Finally, by setting P = [v_1 v_2 ... v_n] (basis vectors inserted as columns), we have that

P^T Ω P = [B(v_i, v_j)]_{i,j=1}^n = I.
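The inductive proof is effectively an algorithm. Below is a minimal numerical sketch of it, run over C (where every nonzero c has an α with α² = c⁻¹); the helper names `bil` and `orthonormal_basis` are ours, not from the text.

```python
import cmath

def bil(Omega, x, y):
    """Evaluate the bilinear form B(x, y) = x^T Omega y."""
    n = len(Omega)
    return sum(x[i] * Omega[i][j] * y[j] for i in range(n) for j in range(n))

def orthonormal_basis(Omega):
    """Follow the proof of Lemma 1.2.15: return P (as a list of rows)
    whose columns v_1, ..., v_n satisfy B(v_i, v_j) = delta_ij."""
    n = len(Omega)
    basis = [[1 + 0j if i == j else 0j for j in range(n)] for i in range(n)]
    cols = []
    while basis:
        # find x_0 with B(x_0, x_0) != 0 (one exists, since B stays
        # nondegenerate and symmetric on the complement)
        k = next((i for i, v in enumerate(basis)
                  if abs(bil(Omega, v, v)) > 1e-9), None)
        if k is None:
            # all "diagonal" values vanish: b_0 + b_j works for a suitable j
            k = 0
            j = next(j for j in range(1, len(basis))
                     if abs(bil(Omega, basis[0], basis[j])) > 1e-9)
            x0 = [a + b for a, b in zip(basis[0], basis[j])]
        else:
            x0 = basis[k]
        alpha = 1 / cmath.sqrt(bil(Omega, x0, x0))  # alpha^2 = c^{-1} over C
        v = [alpha * a for a in x0]
        cols.append(v)
        # pass to {v}^perp: project out v, dropping the vector used to build it
        basis = [[wi - bil(Omega, v, w) * vi for wi, vi in zip(w, v)]
                 for i, w in enumerate(basis) if i != k]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Demo: a symmetric nondegenerate form with zero diagonal, so the
# "x_0 = b_0 + b_j" fallback in the proof is actually exercised.
Omega = [[0, 1], [1, 0]]
P = orthonormal_basis(Omega)
for i in range(2):
    for j in range(2):
        v_i = [P[r][i] for r in range(2)]
        v_j = [P[r][j] for r in range(2)]
        assert abs(bil(Omega, v_i, v_j) - (1 if i == j else 0)) < 1e-6
```

The second basis vector produced here is purely imaginary, which is exactly why the lemma needs α² = c⁻¹ to be solvable (e.g. an algebraically closed field): over R this Ω is not congruent to the identity.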

Lemma 1.2.16. Let J be an antisymmetric matrix representing a non-degenerate bilinear form B on V (dim(V ) = 2n). Then there exists a matrix P such that P T JP = K.

Proof: Fix a vector v_1 ∈ V. Since B is non-degenerate, there exists x_0 ∈ V such that B(v_1, x_0) = 1. Set v_{n+1} = x_0. Next, since B remains non-degenerate on Span{v_1, v_{n+1}}^⊥, we may use induction to conclude that it has a basis {v_2, ..., v_n, v_{n+2}, ..., v_{2n}} such that B(v_i, v_{n+i}) = 1 for 2 ≤ i ≤ n, and B(v_i, v_j) = 0 otherwise.

Extending this to the set {v_1, ..., v_2n}, we have a basis of V where

B(v_i, v_j) =   1   if 1 ≤ i ≤ n and j = n + i,
               −1   if n + 1 ≤ i ≤ 2n and j = i − n,
                0   else.

Therefore, if we once again take P to be the matrix with the vectors v_1, ..., v_2n as columns,

P^T J P = [B(v_i, v_j)]_{i,j=1}^{2n} = ( 0  I_n ; −I_n  0 ) = K.

This conjugacy of the representing matrices induces a conjugacy of the groups relative to these forms. Let Ω_1, Ω_2 be two non-degenerate symmetric (resp. antisymmetric) matrices such that P^T Ω_1 P = Ω_2 for some P. If A ∈ SO(V, Ω_2) (resp. Sp(V, Ω_2)), then

A^T Ω_2 A = Ω_2
⇒ A^T (P^T Ω_1 P) A = P^T Ω_1 P
⇒ (P^T)⁻¹ A^T P^T Ω_1 P A P⁻¹ = Ω_1
⇒ (P A P⁻¹)^T Ω_1 (P A P⁻¹) = Ω_1

and so P A P⁻¹ ∈ SO(V, Ω_1) (resp. Sp(V, Ω_1)). Therefore

P (SO(V, Ω_2)) P⁻¹ = SO(V, Ω_1),

and similarly in the symplectic case. Both groups are the special orthogonal (or symplectic) group of V; they are simply expressed in different bases.

The above four subgroup schemes of GL_n(F) each corresponded to a quotient algebra of F[GL_n(F)]. Other classical examples of matrix groups which are not subgroups of GL_n(F) can be found in quotients of GL_n(F). Due to duality, these quotient groups will correspond to Hopf subalgebras of F[GL_n(F)]. The two examples we will discuss are group scheme analogues of PGL_n(F) and PSp_2n(F), the projective general linear group and the projective symplectic group.

Example 1.2.17. Just as projective space is the result of quotienting a vector space by the action of scalars, so too is the projective general linear group acquired by taking the quotient of GL_n(F) by scalar multiples of the identity:

PGL_n(F) := GL_n(F) / {αI | α ∈ F^×}.

As outlined in [SR, Example 2.16], consider the subalgebra of F[GL_n(F)]

A := ⊕_{r=0}^∞ F[x_ij]_{nr} / det^r,

where by F[x_ij]_{nr} we denote the set of homogeneous polynomials of degree nr in the variables x_ij. Since deg(det^r) = nr, these fractions are invariant with respect to scaling all variables, and so intuitively the group represented by A will also have scaling-invariant elements. We can then define the projective general linear group scheme

PGL_n(F): Alg_F → Grp,   R ↦ PGL_n(R),

which is represented by F[PGL_n(F)] = A.

Example 1.2.18. As a final collection of examples of linear algebraic groups, for each subgroup scheme of GL_n(F) there is a corresponding subgroup scheme of PGL_n(F). Let H ≤ GL_n(F) be such that F[H] = F[GL_n(F)]/J, where J ⊂ F[GL_n(F)] is a Hopf ideal. We can then consider the commutative diagram of Hopf algebras

F[PGL_n(F)] --q*--> F[GL_n(F)] --i*--> F[GL_n(F)]/J = F[H],   with Img(i* ∘ q*) ⊆ F[H],

where i* is the comorphism of the inclusion map i: H → GL_n(F), and q* is the comorphism of the quotient map q: GL_n(F) → PGL_n(F). Via duality, we also have a commutative diagram of group schemes

H --i--> GL_n(F) --q--> PGL_n(F),

with the composite q ∘ i factoring through Hom_F(Img(i* ∘ q*), −) → PGL_n(F),

where we call PH := Hom_F(Img(i* ∘ q*), −) the projective version of H.

For instance, if we choose H = Sp_2n(F), then PH = PSp_2n(F), and the group of F-points is the classical projective symplectic group

PSp_2n(F)(F) = Sp_2n(F) / {±I}.

1.3 Lie Algebras and the Killing Forms

Associated to any linear algebraic group is a Lie algebra. The structure of this Lie algebra is the bridge connecting linear algebraic groups to root systems, and therefore allows us to classify linear algebraic groups. Definition 1.3.1. A Lie algebra L is a vector space over a field F with an additional binary operation [−, −]: L × L → L called the Lie bracket such that

(i) [−, −]: L × L → L is bilinear,

(ii) [x, x] = 0 ∀x ∈ L,

(iii) [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 ∀x, y, z ∈ L (called the Jacobi identity).

A Lie subalgebra is simply a subspace which is closed under the bracket, and a Lie ideal is a subalgebra I such that for all x ∈ I, y ∈ L we have [x, y] ∈ I.

Definition 1.3.2. A Lie algebra L is called simple if it contains no nontrivial ideals (if I ⊆ L is an ideal, then I = 0 or I = L), and a Lie algebra is called semisimple if it is a direct sum of simple Lie algebras. An algebraic group will be called simple (resp. semisimple) in the case that its corresponding Lie algebra is simple (resp. semisimple).

If G is an algebraic group scheme over F with representing Hopf algebra A, then the definition of its Lie algebra, Lie(G), as given in [KMRT] relies on derivations of the Hopf algebra A. Definition 1.3.3. Let M be an A-module. A derivation, D, of A into M is an F-linear map D : A → M such that

D(ab) = aD(b) + bD(a) for all a, b ∈ A. The set of all such derivations is denoted Der(A, M). In particular, D ∈ Der(A, A) is called left-invariant if

c ◦ D = (Id ⊗D) ◦ c.

Definition 1.3.4. The Lie algebra associated to G is given by

Lie(G) = {D ∈ Der(A, A) | D is left-invariant}, with bracket given by [D_1, D_2] = D_1 ∘ D_2 − D_2 ∘ D_1, where the operations on the right are the usual composition and difference of linear maps.

In practice, it is easiest to compute the Lie algebra of a group scheme not by finding left-invariant derivations of A, but by instead computing the kernel of the group homomorphism between certain groups of points of G. The setup is as follows. By F[δ] we denote the F-algebra of dual numbers.

F[δ] = {a + bδ | a, b ∈ F, δ² = 0}.

There exists a unique F-algebra homomorphism k : F[δ] → F defined by k(δ) = 0. We are interested in the kernel of the group homomorphism G(k): G(F[δ]) → G(F), to which we will add a vector space structure.

Clearly, if f ∈ ker(G(k)), we can regard it as f = ε + f_1δ, where ε: A → F is the counit of A and f_1: A → F is a derivation in Der(A, F). Then ker(G(k)) is an F-vector space with operations

f + g := f · g   for all f, g ∈ ker(G(k)) (multiplication in the group G(F[δ])),
α(ε + f_1δ) := ε + αf_1δ   for all f ∈ ker(G(k)), α ∈ F.
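As a concrete illustration of this kernel description (our own sketch; the class name `Dual` and helpers are not from the text), one can compute Lie(SL_2) directly: an element of SL_2(F[δ]) mapping to the identity in SL_2(F) has the form I + δA, and det(I + δA) = 1 + δ Tr(A), so the kernel consists exactly of the traceless matrices sl_2.

```python
class Dual:
    """Elements a + b*delta of F[delta], with delta^2 = 0."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __sub__(self, o):
        return Dual(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        # (a + b*delta)(c + d*delta) = ac + (ad + bc)*delta, since delta^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def lift(A):
    """The element I + delta*A of GL_2(F[delta]), a point of ker(G(k))."""
    return [[Dual(1 if i == j else 0, A[i][j]) for j in range(2)]
            for i in range(2)]

# det(I + delta*A) = 1 + delta*Tr(A): the delta^2 terms vanish, so the
# equation det = 1 cutting out SL_2 becomes Tr(A) = 0 on the kernel,
# recovering Lie(SL_2) = sl_2.
A = [[3, -2], [5, 4]]
assert det2(lift(A)) == Dual(1, 3 + 4)

B = [[3, -2], [5, -3]]  # traceless
assert det2(lift(B)) == Dual(1, 0)  # lands in SL_2(F[delta])
```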

By [KMRT, Proposition 21.1], with these operations there exists a natural F-vector space isomorphism Lie(G) ≅ ker(G(k)), given by D ↔ ε + (ε ∘ D)δ. Furthermore, there is a method of computing the Lie bracket on Lie(G) from the elements of ker(G(k)). To compute [ε + f_1δ, ε + g_1δ], consider the F-algebra

F[δ, δ′] = {a + bδ + cδ′ | δ² = δ′² = 0}

and the two elements f′ = ε + f_1δ and g′ = ε + g_1δ′ in the group G(F[δ, δ′]). The commutator f′g′f′⁻¹g′⁻¹ will be an element of the form ε + hδδ′, and setting [ε + f_1δ, ε + g_1δ] = ε + hδ makes the isomorphism above one of Lie algebras.

A convenient result is [KMRT, Corollary 21.2], which states that if G is an algebraic group scheme, then dim_F(Lie(G)) < ∞. This allows us to limit our further discussion of Lie algebras to finite dimensional ones without losing any relevant cases. Carrying out the above construction on the examples of linear algebraic groups mentioned previously yields these Lie algebras:

G       | Lie(G)                                            | [−, −]
1       | {0}                                               | [0, 0] = 0
V       | V                                                 | [A, B] = 0
G_a     | F                                                 | [A, B] = 0
G_m     | F                                                 | [A, B] = 0
GL(V)   | gl(V) = End(V)                                    | [A, B] = AB − BA
PGL(V)  | pgl(V) = End(V) / {c Id | c ∈ F^×}                | [A, B] = AB − BA
SL(V)   | sl(V) = {A ∈ End(V) | Tr(A) = 0}                  | [A, B] = AB − BA
O(V)    | o(V) = {A ∈ End(V) | A^T = −A}                    | [A, B] = AB − BA
SO(V)   | so(V) = {A ∈ End(V) | A^T = −A, Tr(A) = 0}        | [A, B] = AB − BA
Sp(W)   | sp(W) = {A ∈ End(W) | (KA)^T = KA}                | [A, B] = AB − BA

where V is any F-vector space and W is a vector space of even dimension.

Let L be a Lie algebra over a field F. Denote the adjoint representation of L by ad: L → GL(L). We define a symmetric bilinear form on L called the Killing form:

K: L × L → F,   (x, y) ↦ Tr(ad(x) ad(y)).

The Killing form has a number of convenient properties, some of which we will state here.

Proposition 1.3.5. The Killing form is associative with respect to the Lie bracket. That is K([x, y], z) = K(x, [y, z]).

Proof: This follows from a simple calculation. Both sides of the equation become

Tr(ad(x) ad(y) ad(z)) − Tr(ad(y) ad(x) ad(z)).

In particular this means that the radical of the Killing form

Rad(K) := {x ∈ L | K(x, y) = 0 ∀y ∈ L}

is an ideal of L. The Killing form is called nondegenerate when Rad(K) = 0, and this is equivalent to the semisimplicity of L.

Theorem 1.3.6. [Hum72, Theorem 5.1] Let L be a nonzero Lie algebra. Then L is semisimple ⇔ K is nondegenerate.

This next property will be relevant to our future discussion of invariants under group actions on L.

Proposition 1.3.7. The Killing form respects automorphisms of L. For all σ ∈ Aut(L) and x, y ∈ L,

K(σ(x), σ(y)) = K(x, y).

Proof: Let z ∈ L and consider ad(σ(x)) ad(σ(y))(z):

ad(σ(x)) ad(σ(y))(z) = [σ(x), [σ(y), z]]
                     = [σ(x), σ([y, σ⁻¹(z)])]
                     = σ([x, [y, σ⁻¹(z)]])
                     = (σ ad(x) ad(y) σ⁻¹)(z).

Therefore ad(σ(x)) ad(σ(y)) = σ ad(x) ad(y) σ⁻¹, which means that when we evaluate the Killing form,

K(σ(x), σ(y)) = Tr(ad(σ(x)) ad(σ(y))) = Tr(σ ad(x) ad(y) σ⁻¹) = Tr(ad(x) ad(y) σ⁻¹σ) = Tr(ad(x) ad(y)) = K(x, y).

Clearly this makes the Killing form an element of the ring of invariant polynomials on L with respect to any group of automorphisms. In particular, the Killing form is an invariant of the action of the Weyl group.

Example 1.3.8. The following are the Killing forms of some classical Lie algebras: the general linear, special linear, special orthogonal, and symplectic algebras respectively.

L         | K(x, y)
gl(n, F)  | 2n Tr(xy) − 2 Tr(x) Tr(y)
sl(n, F)  | 2n Tr(xy)
so(n, F)  | (n − 2) Tr(xy)
sp(2n, F) | (2n + 2) Tr(xy)
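The sl entry of this table is easy to verify numerically. The sketch below (helper names are ours) computes the Killing form of sl_2 directly as Tr(ad(x) ad(y)) in the basis (E, H, F) and checks it against 2n Tr(xy) with n = 2.

```python
def mmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(x, y):
    xy, yx = mmul(x, y), mmul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(2)] for i in range(2)]

# the standard basis (E, H, F) of sl_2
E, H, F = [[0, 1], [0, 0]], [[1, 0], [0, -1]], [[0, 0], [1, 0]]
BASIS = [E, H, F]

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (E, H, F)."""
    return [m[0][1], m[0][0], m[1][0]]

def ad(x):
    """Matrix of ad(x) = [x, -] acting on sl_2, columns coords([x, b])."""
    cols = [coords(bracket(x, b)) for b in BASIS]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def killing(x, y):
    """K(x, y) = Tr(ad(x) ad(y))."""
    ax, ay = ad(x), ad(y)
    return sum(ax[i][k] * ay[k][i] for i in range(3) for k in range(3))

def tr_prod(x, y):
    return sum(mmul(x, y)[i][i] for i in range(2))

# K = 2n Tr(xy) with n = 2:
assert killing(H, H) == 8 == 4 * tr_prod(H, H)
x, y = [[1, 2], [3, -1]], [[0, 5], [-7, 0]]
assert killing(x, y) == 4 * tr_prod(x, y)
```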

The Killing form is of importance in the following discussion since it is essentially the inner product with respect to which the roots of a Lie algebra form a root system as in 1.1. We follow the exposition of this theory as in [Hum72]. To begin, let L be a finite dimensional semisimple Lie algebra. To construct the roots we require the notion of toral subalgebras.

Definition 1.3.9. An element x ∈ L is called semisimple if its adjoint representation, ad(x) ∈ GL(L), is a diagonalizable matrix.

Definition 1.3.10. A subalgebra T ⊂ L is called toral if all elements of T are semisimple.

In particular, non-zero toral subalgebras exist in semisimple Lie algebras.

Lemma 1.3.11. [Hum72, 8.1] A toral subalgebra T ⊂ L is abelian, i.e. [T, T] = 0.

Now, let H be a maximal toral subalgebra of L. Since it is abelian, ad_L(H) is a commuting family of semisimple endomorphisms of L, and hence they are simultaneously diagonalizable. Therefore, for each α in H* = Hom_F(H, F) we have subspaces

L_α = {x ∈ L | [h, x] = α(h)x ∀h ∈ H}. These form a decomposition of L:

L = ⊕_{α∈H*} L_α.

The set Φ = {α ∈ H* | L_α ≠ 0, α ≠ 0} is called the set of roots of L, and the associated L_α are called root spaces. These root spaces, alongside those elements of L which are zero-eigenvectors for the elements of H (its centralizer C_L(H) = {x ∈ L | [x, H] = 0}), form a decomposition of L called the root space decomposition:

L = C_L(H) ⊕ (⊕_{α∈Φ} L_α).

It turns out that by [Hum72, Proposition 8.2] we have H = C_L(H), and by [Hum72, Proposition 8.4], for each α ∈ Φ, dim_F(L_α) = 1. [Hum72, Corollary 8.1] gives us that the restriction of the Killing form to H remains non-degenerate. This allows us to transfer the Killing form to H* via the isomorphism

H ≅ H*,   h ↦ K(h, −).

We denote the inverse image of α ∈ H* by t_α ∈ H, and then define the bilinear form on H* by (α, β) = K(t_α, t_β). This leads us to the following formula for the restriction of the Killing form to H.

Lemma 1.3.12. Let λ, µ ∈ H*. Then

(λ, µ) = K(t_λ, t_µ) = Σ_{α∈Φ} α(t_λ)α(t_µ).

Proof: By the root space decomposition, L has an F-basis

{w1, . . . , wm} ∪ {vα | α ∈ Φ} where {w1, . . . , wm} is a basis of H and each vα ∈ Lα.

We then consider the form of the matrix ad(tλ) ad(tµ) with respect to this basis. For each wi, since it is in H

ad(tλ) ad(tµ)(wi) = [tλ, [tµ, wi]] = [tλ, 0] = 0 and for each vα, since it is in Lα and tλ, tµ ∈ H

ad(tλ) ad(tµ)(vα) = [tλ, [tµ, vα]] = [tλ, α(tµ)vα] = α(tλ)α(tµ)vα.

So the matrix of ad(t_λ) ad(t_µ) with respect to this basis is diagonal: zero on the block spanned by the w_i, with the entry α(t_λ)α(t_µ) on each root vector v_α,

ad(t_λ) ad(t_µ) = diag(0, ..., 0, α(t_λ)α(t_µ), β(t_λ)β(t_µ), ..., γ(t_λ)γ(t_µ)),

where α, β, ..., γ run over the elements of Φ. From this it becomes clear that

K(t_λ, t_µ) = Tr(ad(t_λ) ad(t_µ)) = Σ_{α∈Φ} α(t_λ)α(t_µ),

and the result follows.

1.4 Killing Form and Characters

Instead of considering a maximal toral subalgebra and the adjoint action on its Lie algebra, we may also find the root system of a linear algebraic group G by considering the adjoint action of a maximal torus T ⊆ G on the Lie algebra Lie(G). To define an algebraic torus we first require two preliminary definitions. We again reference [KMRT] in the following.

Definition 1.4.1. An algebraic group scheme G is called diagonalizable if

G = Hom_F(F⟨H⟩, −), where H is a concrete group and F⟨H⟩ is its group algebra equipped with the Hopf algebra structure c(h) = h ⊗ h, i(h) = h⁻¹, and ε(h) = 1.

An algebraic group scheme is of multiplicative type if its restriction to F_sep-algebras, G_sep, is a diagonalizable group scheme.

Definition 1.4.2. A torus T is an algebraic group scheme of multiplicative type such that H is a free abelian group of finite rank. A torus is called split if it is already a diagonalizable group scheme, i.e. T = Hom_F(F⟨H⟩, −). In this case T is isomorphic to the group scheme of diagonal matrices, G_m × ... × G_m, where the number of factors is the rank of H.

Remark 1.4.3. If T is a torus, the character group is defined to be T* := Hom_F(T, G_m). In the case that T is split, T = G_m × ... × G_m with n factors, where n is the rank of H. T is then represented by the Hopf algebra F[t_1, t_1⁻¹, t_2, t_2⁻¹, ..., t_n, t_n⁻¹], with structure c(t_i) = t_i ⊗ t_i, ε(t_i) = 1, and i(t_i) = t_i⁻¹. The group T* is then isomorphic to the group of comorphisms

Hom_F( F[t, t⁻¹], F[t_1, t_1⁻¹, ..., t_n, t_n⁻¹] ). These are all maps ϕ (which are uniquely determined by ϕ(t)) such that

• ϕ(t) is invertible,

• c(ϕ(t)) = ϕ(t) ⊗ ϕ(t).

Note that the only elements x of F[t_1, t_1⁻¹, ..., t_n, t_n⁻¹] exhibiting the property that c(x) = x ⊗ x are the monomials t_1^{d_1} t_2^{d_2} ... t_n^{d_n}, where d_i ∈ Z. Then, since the Hopf algebra structure on F[t, t⁻¹] induces a group structure on comorphisms defined by pointwise multiplication, we have that for

ϕ_1 ≡ t_1^{d_1} ... t_n^{d_n} and ϕ_2 ≡ t_1^{e_1} ... t_n^{e_n},

ϕ_1 · ϕ_2 = t_1^{d_1+e_1} ... t_n^{d_n+e_n}. Therefore there is clearly an isomorphism T* ≅ Z^n given by

ϕ ≡ t_1^{d_1} ... t_n^{d_n} ↦ (d_1, ..., d_n).

Definition 1.4.4. Let G be a linear algebraic group. A maximal torus T ⊆ G is a torus which is a subgroup of G and is not contained in any larger toral subgroup. If a maximal torus of G is split, G is also called split.

Note that all maximal tori in a group G are conjugate [Hall, Theorem 11.9]. Let G be a split semisimple algebraic group and let T ⊆ G be a split maxi- mal torus. Consider the adjoint representation ad: G → GL(Lie(G)) as defined in [KMRT, Example 22.19.2].

Let R ∈ Alg_F, g ∈ G(R), and x ∈ Lie(G) ⊗_F R. Since

Lie(G) ⊗_F R = ker(G(R[δ]) → G(R)),

x is a map x: F[G] → R[δ] such that x(α) = ε(α) + f(α)δ, where ε is the counit of F[G] and f ∈ Der(F[G], R) = {f: F[G] → R | f(ab) = ε(b)f(a) + ε(a)f(b)}. Then g acting on x is

(ad(R)(g))(x): F[G] → R[δ],   α ↦ ε(α) + (gfg⁻¹)(α)δ.

Here the expression gfg⁻¹ is a multiplication of maps from F[G] using its Hopf algebra structure, and gfg⁻¹ ∈ Der(F[G], R). We can restrict this representation to the maximal torus T, and since T is split, ad|_T: T → GL(Lie(G)) is a representation of a diagonalizable group. Therefore, by the theory of representations of diagonalizable groups [KMRT, Section 22.20], there is a decomposition

Lie(G) = ⊕_α V_α,

where the sum is taken over all weights α ∈ T*, and

V_α := {x ∈ Lie(G) | ad(g)(x) = α(g)x ∀g ∈ T} is called the weight space.

Definition 1.4.5. α ∈ T* is a weight of the representation ad|_T if the weight space V_α ≠ 0. The non-zero weights are called roots of G (with respect to T). The set of roots is denoted Φ(G).

Theorem 1.4.6. [KMRT, Theorem 25.1] The set Φ(G) is a root system in T* ⊗_Z R. The root system does not depend (up to isomorphism) on the choice of maximal torus T.

Once we have the root system of an algebraic group, we can distinguish an element of S²(T*) as the Killing form of the root system. Motivated by the previous section, we define the Killing form to be

K = Σ_{α∈Φ(G)} α².

In these next examples, we compute the root systems and Killing forms of the common linear algebraic groups. We do so by choosing a basis of V and a maximal torus in G which prove convenient in Chapter 2. Note that we will be working with the traditional definitions of these linear algebraic groups. This suffices since Lie(G) is an F-vector space and the action of G on Lie(G) is determined by ad(F): G(F) → GL(Lie(G)), where G(F) is precisely the traditional group.

Example 1.4.7. Let dim(V) = n + 1 and choose any basis of V. Following [KMRT, 25.9], we give the root system of SL(V) = {A ∈ End(V) | det(A) = 1}.

We will denote by diag(λ_1, λ_2, ..., λ_{n+1}) the diagonal matrix with entries λ_1, ..., λ_{n+1} along the diagonal and zeroes elsewhere.

Choose the maximal torus

T = { diag(λ_1, ..., λ_{n+1}) | Π_{i=1}^{n+1} λ_i = 1 } ⊆ SL(V),

which has character group

T* = ⟨χ_i | 1 ≤ i ≤ n + 1, χ_1χ_2...χ_{n+1} = 1⟩, where χ_i(diag(λ_1, ..., λ_{n+1})) := λ_i. Note that

T* ≅ Z^{n+1} / ⟨e_1 + e_2 + ... + e_{n+1}⟩,   χ_i ↔ e_i,

and is therefore generated by {ei | 1 ≤ i ≤ n}. The Lie algebra of SL(V ) is

Lie(SL(V )) = {A ∈ End(V ) | Tr(A) = 0}

= Span({E_ij | 1 ≤ i ≠ j ≤ n + 1} ∪ {E_ii − E_{i+1,i+1} | 1 ≤ i ≤ n}).

T acts on Lie(SL(V)) by conjugation. If A ∈ T and B ∈ Lie(SL(V)), then A · B = ABA⁻¹. The weight space decomposition is

Weight space                          | Weight
Diagonal matrices                     | 1 ↔ 0
F⟨E_ij⟩ for 1 ≤ i ≠ j ≤ n + 1         | χ_iχ_j⁻¹ ↔ e_i − e_j

This produces a root system of type An

Φ(SL(V)) = {e_i − e_j | 1 ≤ i ≠ j ≤ n + 1}.

The Killing form is then

K = Σ_{i,j=1, i≠j}^{n+1} (e_i − e_j)² = 2(n + 1) Σ_{i=1}^{n+1} e_i² = 4(n + 1) Σ_{1≤i≤j≤n} e_i e_j.
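One can check the weight computation in this example directly: conjugating E_ij by a torus element scales it by t_i/t_j, i.e. by the character χ_iχ_j⁻¹. A small sketch for SL_3 (the helper names and the chosen torus element are ours):

```python
from fractions import Fraction

n = 3
t = [Fraction(2), Fraction(3), Fraction(1, 6)]  # a diagonal torus element of SL_3
assert t[0] * t[1] * t[2] == 1                  # determinant 1

def E(i, j):
    """The elementary matrix E_ij."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def conj(t, m):
    """diag(t) * m * diag(t)^{-1}, computed entrywise."""
    return [[t[r] * m[r][c] / t[c] for c in range(n)] for r in range(n)]

# diag(t) E_ij diag(t)^{-1} = (t_i / t_j) E_ij: the weight of F<E_ij> is
# chi_i chi_j^{-1}, i.e. the root e_i - e_j.
for i in range(n):
    for j in range(n):
        if i != j:
            expected = [[t[i] / t[j] * v for v in row] for row in E(i, j)]
            assert conj(t, E(i, j)) == expected
```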

Example 1.4.8. Let dim(V ) = 2n and choose a basis such that the symmetric bilinear form is represented by the matrix with ones on the second diagonal only.

 0 1   . .  Ω =  .  1 0

The special orthogonal group relative to Ω is

SO(V, Ω) = {A ∈ End(V ) | AT ΩA = Ω, det(A) = 1}.

This group has a maximal torus

T := { diag(t_1, t_2, ..., t_n, t_n⁻¹, ..., t_2⁻¹, t_1⁻¹) | t_i ∈ F^× },

which then has character group T* = ⟨χ_i | 1 ≤ i ≤ n⟩ ≅ Z^n via χ_i ↔ e_i, where

χ_i(diag(t_1, t_2, ..., t_n, t_n⁻¹, ..., t_2⁻¹, t_1⁻¹)) = t_i.

Relative to our choice of basis,

Lie(SO(V, Ω)) = {A ∈ End(V) | ΩA^TΩ = −A}.

The matrix ΩA^TΩ is simply the reflection of A along its second diagonal. Denote this by A^C. If A = [a_ij]_{i,j=1}^{2n}, then A^C = [a_{2n+1−j,2n+1−i}]_{i,j=1}^{2n}. Therefore

Lie(SO(V, Ω)) = {A ∈ End(V) | A^C = −A}

= Span{E_ij − E_{2n+1−j,2n+1−i} | 1 ≤ i ≤ 2n, 1 ≤ j ≤ 2n + 1 − i}.

T again acts on Lie(SO(V, Ω)) by conjugation, producing the following decomposition.

Weight space                                      | Weight
Diagonal matrices                                 | 1 ↔ 0
F⟨E_ij − E_{2n+1−j,2n+1−i}⟩, 1 ≤ i ≠ j ≤ n        | χ_iχ_j⁻¹ ↔ e_i − e_j
F⟨E_{i,2n+1−j} − E_{j,2n+1−i}⟩, 1 ≤ i < j ≤ n     | χ_iχ_j ↔ e_i + e_j
F⟨E_{2n+1−i,j} − E_{2n+1−j,i}⟩, 1 ≤ i < j ≤ n     | χ_i⁻¹χ_j⁻¹ ↔ −e_i − e_j

The root system produced is of type Dn

Φ(SO(V, Ω)) = {±e_i ± e_j | 1 ≤ i < j ≤ n}.

Its Killing form is

K = Σ_{1≤i<j≤n} [(e_i + e_j)² + (−e_i + e_j)² + (e_i − e_j)² + (−e_i − e_j)²] = 4(n − 1) Σ_{i=1}^n e_i².

Example 1.4.9. Now let dim(V) = 2n + 1, with the symmetric bilinear form again represented by the antidiagonal matrix Ω of ones. The special orthogonal group relative to Ω is

SO(V, Ω) = {A ∈ End(V) | A^T Ω A = Ω, det(A) = 1}.

We choose the maximal torus

T := { diag(t_1, t_2, ..., t_n, 1, t_n⁻¹, ..., t_2⁻¹, t_1⁻¹) | t_i ∈ F^× }.

The character group of T is again T* = ⟨χ_i | 1 ≤ i ≤ n⟩ ≅ Z^n; however, now

χ_i(diag(t_1, t_2, ..., t_n, 1, t_n⁻¹, ..., t_2⁻¹, t_1⁻¹)) = t_i.

The Lie algebra also appears similar:

Lie(SO(V, Ω)) = {A ∈ End(V) | A^C = −A}

= Span{E_ij − E_{2n+2−j,2n+2−i} | 1 ≤ i ≤ 2n + 1, 1 ≤ j ≤ 2n + 2 − i}.

The difference in dimension of V produces some additional weight spaces in the decomposition of Lie(SO(V, Ω)).

Weight space                                      | Weight
Diagonal matrices                                 | 1 ↔ 0
F⟨E_ij − E_{2n+2−j,2n+2−i}⟩, 1 ≤ i ≠ j ≤ n        | χ_iχ_j⁻¹ ↔ e_i − e_j
F⟨E_{i,2n+2−j} − E_{j,2n+2−i}⟩, 1 ≤ i < j ≤ n     | χ_iχ_j ↔ e_i + e_j
F⟨E_{2n+2−i,j} − E_{2n+2−j,i}⟩, 1 ≤ i < j ≤ n     | χ_i⁻¹χ_j⁻¹ ↔ −e_i − e_j
F⟨E_{i,n+1} − E_{n+1,2n+2−i}⟩, 1 ≤ i ≤ n          | χ_i ↔ e_i
F⟨E_{n+1,i} − E_{2n+2−i,n+1}⟩, 1 ≤ i ≤ n          | χ_i⁻¹ ↔ −e_i

The root system is of type Bn

Φ(SO(V, Ω)) = {±e_i ± e_j, ±e_k | 1 ≤ i < j ≤ n, 1 ≤ k ≤ n}, and the Killing form is

K = Σ_{1≤i<j≤n} [(e_i + e_j)² + (−e_i + e_j)² + (e_i − e_j)² + (−e_i − e_j)²] + Σ_{i=1}^n [e_i² + (−e_i)²] = (4n − 2) Σ_{i=1}^n e_i².

 1  .  . .   0       1  0 Ωn Ψ =   =  −1  −Ωn 0  .   . . 0  −1

The symplectic group is then

Sp(V, Ψ) = {A ∈ End(V) | A^T Ψ A = Ψ},

in which we choose the same maximal torus as in 1.4.8,

T := { diag(t_1, t_2, ..., t_n, t_n⁻¹, ..., t_2⁻¹, t_1⁻¹) | t_i ∈ F^× },

and so the character group T* = ⟨χ_i | 1 ≤ i ≤ n⟩ is also the same. Computing the Lie algebra yields

Lie(Sp(V, Ψ)) = {A ∈ End(V) | ΨA^TΨ = A}.

For A = [a_ij]_{i,j=1}^{2n} ∈ End(V), the equality ΨA^TΨ = A is equivalent to the following relations. For 1 ≤ i, j ≤ n:

a_ij = −a_{2n+1−j,2n+1−i}.

For 1 ≤ i ≤ j ≤ n:

a_{i,2n+1−j} = a_{j,2n+1−i},
a_{2n+1−i,j} = a_{2n+1−j,i}.

Therefore we can express the Lie algebra as

Lie(Sp(V, Ψ)) = Span( {E_ij − E_{2n+1−j,2n+1−i} | 1 ≤ i, j ≤ n}

∪ {E_{i,2n+1−j} + E_{j,2n+1−i}, E_{2n+1−i,j} + E_{2n+1−j,i} | 1 ≤ i < j ≤ n}

∪ {E_{i,2n+1−i} | 1 ≤ i ≤ 2n} ).

The weight space decomposition is

Weight space                                        | Weight
Diagonal matrices                                   | 1 ↔ 0
F⟨E_ij − E_{2n+1−j,2n+1−i}⟩ for 1 ≤ i ≠ j ≤ n       | χ_iχ_j⁻¹ ↔ e_i − e_j
F⟨E_{i,2n+1−j} + E_{j,2n+1−i}⟩ for 1 ≤ i < j ≤ n    | χ_iχ_j ↔ e_i + e_j
F⟨E_{2n+1−i,j} + E_{2n+1−j,i}⟩ for 1 ≤ i < j ≤ n    | χ_i⁻¹χ_j⁻¹ ↔ −e_i − e_j
F⟨E_{i,2n+1−i}⟩ for 1 ≤ i ≤ n                       | χ_i² ↔ 2e_i
F⟨E_{2n+1−i,i}⟩ for 1 ≤ i ≤ n                       | χ_i⁻² ↔ −2e_i

The resulting root system is of type Cn

Φ(Sp(V, Ψ)) = {±e_i ± e_j, ±2e_k | 1 ≤ i < j ≤ n, 1 ≤ k ≤ n}, with Killing form

K = Σ_{1≤i<j≤n} [(e_i + e_j)² + (−e_i + e_j)² + (e_i − e_j)² + (−e_i − e_j)²] + Σ_{i=1}^n [(2e_i)² + (−2e_i)²] = 4(n + 1) Σ_{i=1}^n e_i².

To summarize, the Killing forms associated to the special linear, special orthog- onal, and symplectic groups are below.

Group                   | Killing Form
SL(V), dim(V) = n + 1   | 4(n + 1) Σ_{1≤i≤j≤n} e_i e_j
SO(V), dim(V) = 2n      | 4(n − 1) Σ_{i=1}^n e_i²
SO(V), dim(V) = 2n + 1  | (4n − 2) Σ_{i=1}^n e_i²
Sp(V), dim(V) = 2n      | 4(n + 1) Σ_{i=1}^n e_i²
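The orthogonal and symplectic rows of this summary can be checked mechanically from K = Σ_{α∈Φ} α²: summing the coefficient matrices αα^T over the roots of types D_n, B_n, C_n must give a multiple of the identity, with the coefficients 4(n − 1), 4n − 2, and 4(n + 1) respectively. A sketch (the function names are ours):

```python
def quad_matrix(roots, n):
    """Coefficient matrix of sum of alpha^2 (roots given as vectors in Z^n)."""
    M = [[0] * n for _ in range(n)]
    for a in roots:
        for i in range(n):
            for j in range(n):
                M[i][j] += a[i] * a[j]
    return M

def e(i, n, c=1):
    v = [0] * n
    v[i] = c
    return v

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def type_D(n):  # {+-e_i +- e_j | i < j}
    return [add(e(i, n, s), e(j, n, t))
            for i in range(n) for j in range(i + 1, n)
            for s in (1, -1) for t in (1, -1)]

def type_B(n):  # D_n roots plus the short roots +-e_k
    return type_D(n) + [e(k, n, s) for k in range(n) for s in (1, -1)]

def type_C(n):  # D_n roots plus the long roots +-2e_k
    return type_D(n) + [e(k, n, 2 * s) for k in range(n) for s in (1, -1)]

def scalar(c, n):
    return [[c if i == j else 0 for j in range(n)] for i in range(n)]

n = 4
assert quad_matrix(type_D(n), n) == scalar(4 * (n - 1), n)  # SO(V), dim V = 2n
assert quad_matrix(type_B(n), n) == scalar(4 * n - 2, n)    # SO(V), dim V = 2n+1
assert quad_matrix(type_C(n), n) == scalar(4 * (n + 1), n)  # Sp(V), dim V = 2n
```

The A_n row is not checked this way because Σ e_i² there lives in the quotient Z^{n+1}/⟨e_1 + ... + e_{n+1}⟩, where the relation e_1 + ... + e_{n+1} = 0 must be imposed first.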

1.5 Invariants Under the Action of the Weyl Group

At the end of Section 1.3 we obtained an equation for the Killing form of a Lie algebra in terms of its roots. This allows us to consider the Killing form as an element of the symmetric algebra of H*, denoted S(H*). That is,

K = Σ_{α∈Φ} α² ∈ S²(H*).

Then, since all elements of the Weyl group W permute the elements of Φ, the Killing form is invariant under the induced action on S(H*). In order to study all such polynomial invariants, i.e. the ring S(H*)^W, we introduce certain divided difference operators, one for each simple reflection corresponding to a chosen basis of H*.

Definition 1.5.1. Let {α_1, ..., α_n} ⊆ Φ be a basis of H* and let {s_1, ..., s_n} ⊆ W be the associated simple reflections. The k-th divided difference operator is

∆_k: S(H*) → S(H*),   q ↦ (q − s_k(q)) / α_k.

Observe that q − sk(q) is divisible by αk for all k, hence we have

∆_k: S^{m+1}(H*) → S^m(H*).

It is clear from the definition how these operators relate to polynomial invariants. ∆k(q) = 0 if and only if sk(q) = q, and therefore

q ∈ S(H*)^W ⇔ ∆_k(q) = 0 for all k ∈ {1, ..., n}.

Additionally these operators have a few properties which are somewhat reminis- cent of differential operators.

Lemma 1.5.2. Let x, y ∈ S(H∗). Then

• (∆_k)²(x) = 0,

• ∆_k(xy) = ∆_k(x)y + s_k(x)∆_k(y).
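These properties can be sanity-checked by evaluating ∆_k pointwise as (q − s_k(q))/α_k at points where α_k does not vanish. A sketch in type A_2, where s_k swaps two adjacent coordinates and α_k = x_k − x_{k+1} (the function names and the sample polynomial `g` are ours):

```python
from fractions import Fraction

def s(k, p):
    """Simple reflection of type A acting on points: swap coordinates k, k+1."""
    q = list(p)
    q[k], q[k + 1] = q[k + 1], q[k]
    return tuple(q)

def delta(k, f, p):
    """The divided difference (f - s_k(f)) / alpha_k, evaluated at the point p."""
    return (f(p) - f(s(k, p))) / (p[k] - p[k + 1])

def K(p):  # the Killing-form polynomial of A_2: sum over the roots e_i - e_j
    return sum((p[i] - p[j]) ** 2 for i in range(3) for j in range(3) if i != j)

def g(p):  # an arbitrary, non-invariant polynomial
    return p[0] ** 2 * p[1] + 5 * p[2]

p = tuple(Fraction(v) for v in (2, 7, -3))
for k in (0, 1):
    # K is W-invariant, so every Delta_k kills it.
    assert delta(k, K, p) == 0
    # (Delta_k)^2 = 0.
    dg = lambda q, k=k: delta(k, g, q)
    assert delta(k, dg, p) == 0
    # Twisted Leibniz rule: Delta_k(xy) = Delta_k(x)y + s_k(x)Delta_k(y).
    Kg = lambda q: K(q) * g(q)
    assert delta(k, Kg, p) == delta(k, K, p) * g(p) + K(s(k, p)) * delta(k, g, p)
```

Exact rational arithmetic (`Fraction`) is used so the equalities hold on the nose rather than up to floating-point error.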

Using these divided difference operators we can easily argue the following fact.

Proposition 1.5.3. Let L be a semisimple Lie algebra with maximal toral subalgebra H. Then there are no non-zero W-fixed linear forms on H*. That is, S¹(H*)^W = 0.

Proof: Let q := Σ_{i=1}^n c_iα_i be a generic linear form on H*. Then

∆_k(q) = Σ_{i=1}^n c_i · 2(α_i, α_k)/(α_k, α_k) = 2(Σ_{i=1}^n c_iα_i, α_k)/(α_k, α_k).

If ∆_k(q) = 0 for all k ∈ {1, ..., n}, then (Σ_{i=1}^n c_iα_i, α_k) = 0, and therefore (Σ_{i=1}^n c_iα_i, Σ_{k=1}^n d_kα_k) = 0 for all choices of d_k. Hence (q, λ) = 0 for all λ ∈ H*, but since the bilinear form is lifted from the Killing form and is nondegenerate on H*, this forces q = 0.

We are also able to speak about the fixed quadratic elements of S(H*). We have seen that the Killing form is such a fixed element, and it turns out that among quadratic forms on H*, the Killing form is essentially unique in this regard.

Theorem 1.5.4. Let Φ be an irreducible root system in H*. The Weyl group W of the root system acts on Φ, which contains a basis of H*, and hence the action extends naturally to S(H*). Under this action the only fixed quadratic forms are multiples of the Killing form:

S²(H*)^W = {cK | c ∈ F}.

Proof: The proof hinges on the following lemma from [Hum90], which is a result from the representation theory of groups.

Lemma 1.5.5. [Hum90, Lemma 6.4(c)] Let G be a group, E a finite dimensional R-vector space, and ρ: G → GL(E) a group representation such that the only elements of GL(E) commuting with all of ρ(G) are the scalars. Then, if β, β′ are two symmetric, non-degenerate, G-invariant bilinear forms on E, there exists some c ∈ R such that β = cβ′.

The assumption that E be an R-vector space is used by Humphreys in parts (a) and (b) of Lemma 6.4; however, part (c) may be generalized to E being a vector space over an arbitrary field F, and the result still holds. We will employ the lemma in this case. Since the elements of S²(H*) are quadratic forms on H, we wish to apply the lemma to W acting on H. W is already a subset of GL(H*), and since H* is isomorphic to H via the map α ↔ t_α, the Weyl group also acts on H as a subgroup of GL(H). (For s ∈ W, s(t_α) = t_{s(α)}.) The representation ρ is then simply the restriction to W of the isomorphism GL(H*) → GL(H). So we aim to show that the centralizer of the image of the Weyl group consists only of the scalars.

To start, choose a simple system $\{s_1, \dots, s_n\} \subseteq W$. Then $\{\alpha_1, \dots, \alpha_n\}$ is a basis of $H^*$, and letting $t_k := t_{\alpha_k}$, $\{t_1, \dots, t_n\}$ is a basis of $H$.

Let $A \in GL(H)$ be such that $Aw = wA$ for all $w \in W$. In particular $As_k = s_kA$ for all $k \in \{1, \dots, n\}$.

$\Rightarrow As_k(t_k) = s_kA(t_k)$

$\Rightarrow -A(t_k) = s_k(At_k)$

$\Rightarrow At_k \in \{x \in H \mid s_k(x) = -x\} = \{ct_k \mid c \in F\}$

$\Rightarrow \forall k, \exists c_k \in F$ such that $At_k = c_kt_k$.

Hence all simple coroots are eigenvectors of the matrix $A$. Now let $i \neq j$ and consider $s_iA(t_j) = As_i(t_j)$.

$\Rightarrow s_i(c_jt_j) = A(t_j - m_{ij}t_i)$, where $m_{ij} = \dfrac{2(t_i, t_j)}{(t_i, t_i)} = \dfrac{2(\alpha_i, \alpha_j)}{(\alpha_i, \alpha_i)}$

$\Rightarrow c_jt_j - c_jm_{ij}t_i = c_jt_j - m_{ij}c_it_i$

$\Rightarrow c_j(m_{ij}t_i) = c_i(m_{ij}t_i)$

$\Rightarrow c_j = c_i$ if $m_{ij} \neq 0$.

However, we also know that $m_{ij} = \frac{2(\alpha_i, \alpha_j)}{(\alpha_i, \alpha_i)} \neq 0$ whenever $\alpha_i, \alpha_j$ are adjacent in the Dynkin diagram of the root system. So $c_i = c_j$ for all adjacent roots, and then since irreducible root systems have connected Dynkin diagrams, we know that there exists $c \in F$ such that $At_k = ct_k$ for all $k$. Finally, the fact that $\{t_1, \dots, t_n\}$ form a basis for $H$ implies that $A = cI$. Hence the lemma applies to $W$ acting on $H$.

Now, to briefly avoid overlapping notation, there is the Killing bilinear form on $H$, which we will denote $K'(-, -)$, and there is the associated quadratic form
$$K = \sum_{\alpha \in \Phi} \alpha^2 \in S^2(H^*).$$
For $x \in H$, $K(x) = K'(x, x)$. Let $q \in S^2(H^*)^W$. Then $q'(x, y) := q(x + y) - q(x) - q(y)$ is a bilinear form on $H$ with $q'(x, x) = 2q(x)$. Since $q$ is fixed by the action of $W$, so is $q'$, and therefore if $q'$ is nondegenerate the lemma applies. That is, there exists $c \in F$ such that $q' = cK'$. In particular,
$$q'(x, x) = cK'(x, x) \Rightarrow 2q(x) = cK(x)$$

and so $q \in \{cK \mid c \in F\}$. If $q'$ is degenerate, then consider the set

$$D := \{x \in H \mid q'(x, y) = 0 \ \forall y \in H\} \neq \{0\}.$$

For $x \in D$, $y \in H$, and $w \in W$, since $q'$ is $W$-invariant,

$$0 = q'(x, w^{-1}(y)) = q'(w(x), y)$$

and so $w(x) \in D$. Then, because the elements of $W$ are automorphisms of $H$, $D$ is a $W$-stable subspace. However, in the case that $\Phi$ is irreducible, the only such stable subspaces are all of $H$ or $\{0\}$. (Each $s_k$ fixes pointwise the hyperplane perpendicular to the element $t_k$, and the intersection of these $n$ pairwise non-parallel hyperplanes in $n$-dimensional space is the origin.) Thus $D \neq \{0\}$ implies that $D = H$, and this forces $q' = 0$, and $q = 0$ in turn. So $q \in \{cK \mid c \in F\}$ still.

Remark 1.5.6. This result does not require any of the preceding Lie structure and applies to any irreducible root system $\Phi$ in a vector space $V^*$. When the Killing form is defined as $K = \sum_{\alpha \in \Phi} \alpha^2$, the associated bilinear form on $V$ is
$$K'(x, y) = K(x + y) - K(x) - K(y) = 2\sum_{\alpha \in \Phi} \alpha(x)\alpha(y).$$
This will be non-degenerate by comparison with the Killing form of a Lie algebra. That is, there exists a Lie algebra $L$ with the same root system type as $\Phi$. It will have a maximal toral subalgebra $H$, and a root system $\Phi' \subseteq H^*$ isomorphic to $\Phi \subseteq V^*$. The Killing form of $L$ is non-degenerate and corresponds via the root system isomorphism to $\frac{1}{2}K'$. Hence $K'$ is also non-degenerate.

Corollary 1.5.7. Let $\Phi$ be a root system in a vector space $V^*$ which is not irreducible. If $\Phi_i \subseteq V_i^*$ are the irreducible components ($\Phi = \bigcup_{i=1}^n \Phi_i$ and $V^* = \bigoplus_{i=1}^n V_i^*$) with Killing forms $K_i$, then
$$S^2(V^*)^W = \bigoplus_{i=1}^n S^2(V_i^*)^{W_i} = \bigoplus_{i=1}^n \{cK_i \mid c \in F\}$$
where $W_i$ is the Weyl group of the root system $\Phi_i \subseteq V_i^*$, and as such the whole Weyl group is $W = W_1 \times \dots \times W_n$.

Proof: Let $q \in S^2(V^*)^W$ and set $q'$ to be the associated bilinear form on $V$. Since $q'$ is fixed by $W$, it is fixed by all subgroups $W_i$ and hence $q'|_{V_i \times V_i} \in S^2(V_i^*)^{W_i}$. So $q'|_{V_i \times V_i} = c_iK_i'$ for some $c_i \in F$, where $K_i'$ is the bilinear form associated to $K_i$.

Let $i \neq j$ and take $v_0 \in \Phi_i$, $w_0 \in \Phi_j$. The reflection $s_{w_0} \in W_j \leq W$ fixes $v_0$ while negating $w_0$. Because $q'$ is $W$-invariant, this means

$$q'(v_0, w_0) = q'(s_{w_0}(v_0), s_{w_0}(w_0)) = q'(v_0, -w_0) = -q'(v_0, w_0)$$

and so $q'(v_0, w_0) = 0$. Since the sub-root systems span their respective subspaces, $q'(v, w) = 0$ whenever $v \in V_i$, $w \in V_j$, $i \neq j$.

Next let $x, y \in V$. They have decompositions $x = \sum_{i=1}^n v_i$ and $y = \sum_{i=1}^n w_i$, where $v_i, w_i \in V_i$. By definition,
$$K_i'(x, y) = 2\sum_{\alpha \in \Phi_i} \alpha(x)\alpha(y).$$

However, for all $\alpha \in \Phi_i$, $\alpha(V_j) = 0$ if $i \neq j$. This implies that

$$K_i'(x, y) = 2\sum_{\alpha \in \Phi_i} \alpha(v_i)\alpha(w_i) = K_i'(v_i, w_i).$$

Finally, consider

\begin{align*}
q'(x, y) &= q'\Big(\sum_{i=1}^n v_i, \sum_{i=1}^n w_i\Big) \\
&= \sum_{i,j=1}^n q'(v_i, w_j) \\
&= \sum_{i=1}^n q'(v_i, w_i) + \sum_{\substack{i,j=1 \\ i \neq j}}^n q'(v_i, w_j) \\
&= \sum_{i=1}^n c_iK_i'(v_i, w_i) + 0 \\
&= \sum_{i=1}^n c_iK_i'(x, y).
\end{align*}

Hence $q' = \sum_{i=1}^n c_iK_i'$, meaning $q = \sum_{i=1}^n c_iK_i$. The reverse inclusion holds since each $W_i$ fixes $K_i$ and acts trivially on $V_j$ for $i \neq j$, fixing the $K_j$ also.

1.6 Invariant Quadratic Forms

We may ask about invariants of the Weyl group which are somewhat removed from the Lie algebra setting, and consider the action of the Weyl group on quadratic forms in characters. Namely, let $G$ be a split simple linear algebraic group with maximal torus $T$. Consider a root system $\Phi(G) \subseteq T^*$ with Weyl group $W$ and the subgroup of invariant quadratic forms $S^2(T^*)^W$. The group of characters $T^*$ is a lattice, and it may be extended to a vector space by considering $T^* \otimes_{\mathbb{Z}} \mathbb{Q}$. Then $\Phi(G)$ is a root system in $T^* \otimes_{\mathbb{Z}} \mathbb{Q}$, and $S^2(T^* \otimes_{\mathbb{Z}} \mathbb{Q})^W$ consists of linear combinations of Killing forms of the irreducible components of $\Phi(G)$. We may identify our original invariants as the subgroup $S^2(T^*)^W \leq S^2(T^* \otimes_{\mathbb{Z}} \mathbb{Q})^W$ of elements with strictly integer coefficients. In the case that $\Phi(G)$ is irreducible, $S^2(T^* \otimes_{\mathbb{Z}} \mathbb{Q})^W$ is 1-dimensional, and so $S^2(T^*)^W$ is an infinite cyclic group generated by an element we denote by $q_0$, called the normalized Killing form. Of course, when $\Phi(G)$ is not irreducible, the group $S^2(T^*)^W$ will be generated by the normalized Killing forms of the irreducible components. To summarize,

Proposition 1.6.1. Let $G$ be a split semisimple linear algebraic group with maximal torus $T$. Let $\Phi(G) = \bigcup_{i=1}^n \Phi_i$ be the root system decomposed into its irreducible components, and let $W$ denote the Weyl group of $\Phi(G)$. If $q_i$ is the normalized Killing form of $\Phi_i$, then
$$S^2(T^*)^W = \bigoplus_{i=1}^n \mathbb{Z}\langle q_i \rangle.$$

Below we summarize the Killing forms and normalized Killing forms of our standard simple linear algebraic groups.

$SL(V)$, $\dim(V) = n+1$: $T^* = \langle e_i \mid 1 \leq i \leq n \rangle$; Killing form $4(n+1)\sum_{i \leq j} e_ie_j$; normalized Killing form $\sum_{i \leq j} e_ie_j$.

$SO(V)$, $\dim(V) = 2n$: $T^* = \langle e_i \mid 1 \leq i \leq n \rangle$; Killing form $4(n-1)\sum_{i=1}^n e_i^2$; normalized Killing form $\sum_{i=1}^n e_i^2$.

$SO(V)$, $\dim(V) = 2n+1$: $T^* = \langle e_i \mid 1 \leq i \leq n \rangle$; Killing form $4(n-2)\sum_{i=1}^n e_i^2$; normalized Killing form $\sum_{i=1}^n e_i^2$.

$Sp(V)$, $\dim(V) = 2n$: $T^* = \langle e_i \mid 1 \leq i \leq n \rangle$; Killing form $4(n+1)\sum_{i=1}^n e_i^2$; normalized Killing form $\sum_{i=1}^n e_i^2$.

These groups of quadratic invariants turn out to display functorial properties.
A map of linear algebraic groups $\varphi : G \to H$ induces a map $\varphi^* : S^2(T_H^*) \to S^2(T_G^*)$. In order to demonstrate this, we follow the exposition of [GMS].

Definition 1.6.2. Given an algebraic group $G$, a loop in $G$ is a group homomorphism $\mathbb{G}_m \to G$, and so the set of all loops in $G$ is $G_* := \mathrm{Hom}_F(\mathbb{G}_m, G)$.

Some needed properties of loops are

1) If $f_1, f_2 \in G_*$ are two loops with commuting images, then the pointwise product $f_1f_2$ is again a loop. In particular, if $f_1, \dots, f_n$ are loops with pairwise commuting images and $k_1, \dots, k_n \in \mathbb{Z}$, then $f_1^{k_1}f_2^{k_2} \dots f_n^{k_n}$ is again a loop.

2) If $g \in G(F)$ and $f \in G_*$, then ${}^gf$ is again a loop, where for $\alpha \in R \in \mathrm{Alg}_F$, $({}^gf(R))(\alpha) = g\,f(R)(\alpha)\,g^{-1}$.

Our object of interest is the group Q(G) consisting of particular maps defined on these loops.

Definition 1.6.3. To a linear algebraic group G we associate the abelian group

$$Q(G) := \{q : G_* \to \mathbb{Z} \mid \text{(i) and (ii) hold}\}$$

(i): $q({}^gf) = q(f)$ for all $f \in G_*$ and $g \in G(F)$.

(ii): For all $f_1, \dots, f_n \in G_*$ with pairwise commuting images, the map

$$\mathbb{Z}^n \to \mathbb{Z}$$

$$(k_1, \dots, k_n) \mapsto q(f_1^{k_1}f_2^{k_2} \dots f_n^{k_n})$$

is a quadratic form on $\mathbb{Z}^n$.

The group operation in $Q(G)$ is simply pointwise addition of images in $\mathbb{Z}$.

Lemma 1.6.4. Let $T$ be a split torus, i.e. $T \cong \mathbb{G}_m \times \dots \times \mathbb{G}_m$ (with $n$ factors). Then the group $Q(T)$ is isomorphic to $S^2(T^*)$, where both are groups of quadratic forms on the lattice $T_*$.

Proof: Note that since T is abelian, all loops have commuting images and so the set of loops is simply the cocharacter group.

$$T^* = \langle \chi_i \mid 1 \leq i \leq n \rangle \cong \mathbb{Z}^n, \qquad \chi_i \leftrightarrow e_i$$

$$T_* = \langle \Delta_i \mid 1 \leq i \leq n \rangle \cong \mathbb{Z}^n, \qquad \Delta_i \leftrightarrow d_i$$

where $\chi_i$ is the projection onto the $i$th factor, and $\Delta_i$ is the injection into the $i$th factor with the identity in all other factors.

When the relations $\chi_i(\Delta_i(t)) = t$ and, for $i \neq j$, $\chi_i(\Delta_j(t)) = 1$ are expressed additively, we have that $e_i(d_j) = \delta_{ij}$, and so $T^*$ and $T_*$ are dual lattices. Then, naturally, the elements of $S^2(T^*)$ are quadratic forms on $T_*$. We can show that these elements of $S^2(T^*)$ also belong to $Q(T)$. Let $q \in S^2(T^*)$, so $q = \sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}e_ie_j$ for some $c_{ij} \in \mathbb{Z}$.

Since T is abelian, for all f ∈ T∗ and g ∈ T (F), if α ∈ R ∈ AlgF then

$$({}^gf(R))(\alpha) = g\,f(R)(\alpha)\,g^{-1} = gg^{-1}f(R)(\alpha) = f(R)(\alpha),$$
implying ${}^gf = f$, and so clearly $q({}^gf) = q(f)$, satisfying (i) from the definition of $Q(T)$.

Next take $f_1, \dots, f_r \in T_*$ and let $f_i = \sum_{k=1}^n b_{ik}d_k$. Consider the map $\mathbb{Z}^r \to \mathbb{Z}$ such that $(l_1, \dots, l_r) \mapsto q(l_1f_1 + \dots + l_rf_r)$, which is the map from (ii) expressed additively:
$$q(l_1f_1 + \dots + l_rf_r) = \Big(\sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}e_ie_j\Big)\Big(l_1\sum_{k=1}^n b_{1k}d_k + \dots + l_r\sum_{k=1}^n b_{rk}d_k\Big).$$

After some rearranging this becomes
$$(l_1, \dots, l_r) \mapsto \sum_{k,m=1}^r \Big(\sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}b_{ki}b_{mj}\Big) l_kl_m,$$
which is a quadratic form on $\mathbb{Z}^r$. Hence $q \in Q(T)$.
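The rearrangement above is just the pullback of a quadratic form along a linear map of lattices, which is easy to sanity-check numerically. A minimal sketch, assuming nothing beyond the formulas above; the matrices `C` and `B` and all dimensions are made-up test data, not notation from the text:

```python
import numpy as np

# Condition (ii) in coordinates: pulling a quadratic form q back along the
# map l |-> l_1 f_1 + ... + l_r f_r gives another quadratic form, with the
# coefficients predicted by the rearranged double sum.
rng = np.random.default_rng(0)
n, r = 3, 2
C = np.triu(rng.integers(-3, 4, size=(n, n)))   # c_ij for i <= j
B = rng.integers(-3, 4, size=(r, n))            # f_k = sum_i b_ki * Delta_i

def q(x):
    """q = sum_{i<=j} c_ij x_i x_j evaluated at integer coordinates x."""
    return sum(C[i, j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def coef(k, m):
    """Predicted coefficient of l_k l_m: sum_{i<=j} c_ij b_ki b_mj."""
    return sum(C[i, j] * B[k, i] * B[m, j] for i in range(n) for j in range(i, n))

l = rng.integers(-5, 6, size=r)
assert q(B.T @ l) == sum(coef(k, m) * l[k] * l[m]
                         for k in range(r) for m in range(r))
```

The equality holds exactly in integer arithmetic for every choice of `l`, which is precisely the statement that the composite map is a quadratic form on $\mathbb{Z}^r$.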

For the reverse inclusion, consider $z \in Q(T)$. Choose the elements $\Delta_1, \dots, \Delta_n \in T_*$ to construct the quadratic map $\mathbb{Z}^n \to \mathbb{Z}$ given by $(l_1, \dots, l_n) \mapsto z(l_1d_1 + \dots + l_nd_n)$. Since this is a quadratic map, it takes the form

$$z(l_1d_1 + \dots + l_nd_n) = \sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}l_il_j$$

for some $c_{ij} \in \mathbb{Z}$. However, this map agrees with $\sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}e_ie_j \in S^2(T^*)$ on all points of $\mathbb{Z}^n$, and therefore
$$z = \sum_{\substack{i,j=1 \\ i \leq j}}^n c_{ij}e_ie_j \in S^2(T^*),$$

so the result follows.

Our goal in introducing the group Q(G) was to use its functorial properties. If ϕ : G → H is a homomorphism of linear algebraic groups, there is an associated map between loops

$$\varphi_* : G_* \to H_*, \qquad f \mapsto \varphi \circ f.$$

This in turn gives rise to the map

$$Q(\varphi) : Q(H) \to Q(G)$$

$$q \mapsto q \circ \varphi_*,$$

which is a group homomorphism. Thus, if we have a split maximal torus $T$ in a linear algebraic group $G$, it comes with an inclusion map $i : T \hookrightarrow G$ which induces the homomorphism

$$Q(i) : Q(G) \to Q(T) \cong S^2(T^*).$$

Due to condition (i) of the definition of $Q(G)$, the image of this homomorphism is a subset of $S^2(T^*)^W$. The Weyl group is usually defined as the group generated by the reflections with respect to the root system $\Phi(G) \subseteq T^*$; however, there is a notion of the Weyl group of $G$ with respect to a subtorus $T'$, namely

$$W_{T'} := N_{G(F)}(T'(F)) \,/\, C_{G(F)}(T'(F)),$$

the normalizer of the torus modulo the centralizer of the torus in the group of $F$-points. This Weyl group acts on the torus, and hence on the cocharacters, by conjugation: for $s \in W_T$ and $\Delta \in T_*$, $s(\Delta) := {}^g\Delta$ where $g \in N_{G(F)}(T(F))$ is any representative of $s$. There is then an analogous action on the group of characters $T^*$: for $\chi \in T^*$, $(s\chi)(\Delta) := \chi({}^g\Delta)$. However, in the case that $T$ is a maximal torus, these groups and their actions are isomorphic [Hall]. So we can argue our claim that $Q(i)$ maps into $S^2(T^*)^W$.

Let $q \in Q(G)$, $s \in W$ with representative $g \in N_{G(F)}(T(F))$, and let $\Delta \in T_*$. Then
$$s(Q(i)(q))(\Delta) = (Q(i)(q))(s\Delta) = (Q(i)(q))({}^g\Delta) = q(i \circ {}^g\Delta).$$

Since $i$ is simply an inclusion map, and ${}^g\Delta$ is conjugation in $G$, we have that $i \circ {}^g\Delta = {}^g(i \circ \Delta)$, and so

$$q(i \circ {}^g\Delta) = q({}^g(i \circ \Delta)) = q(i \circ \Delta) = (Q(i)(q))(\Delta),$$
and therefore $s(Q(i)(q)) = Q(i)(q)$, i.e. $Q(i)(q) \in S^2(T^*)^W$.

We will denote this homomorphism by β : Q(G) → S2(T ∗)W .

Proposition 1.6.5. [GMS, Part 2, Proposition 7.2]

β : Q(G) → S2(T ∗)W is an isomorphism of groups.

Proof: We start by showing that β is injective.

If $f \in G_*$ is a loop, then its image is either isomorphic to $\mathbb{G}_m$ or is trivial. In both cases the image is contained in a maximal torus conjugate to $T$, and so there is an element $g \in G(F)$ such that the image of ${}^gf$ is a subgroup of $T$, i.e. ${}^gf \in T_*$.

Assume $q_1, q_2 \in Q(G)$ are such that $\beta(q_1) = \beta(q_2)$. This assumption says that for all $\Delta \in T_*$, $q_1(i \circ \Delta) = q_2(i \circ \Delta)$; they agree on loops in $T$. Therefore, for all $f \in G_*$,

$$q_1(f) = q_1({}^gf) = q_2({}^gf) = q_2(f)$$

and so $q_1 = q_2$; $\beta$ is injective. To show that $\beta$ is surjective, we construct an extension of a map on loops in $T$ to a map on loops in $G$. Being an extension, $\beta$ will then simply restrict it to the original map. Let $q \in S^2(T^*)^W$. For $f \in G_*$ and $g \in G(F)$ as above, define $\tilde q \in Q(G)$ by $\tilde q(f) := q({}^gf)$. If $\tilde q$ is well-defined, then $\beta$ is surjective. This definition of $\tilde q$ depends on $g$, which is an artifact of our choice of maximal torus containing the image of $f$. Let $S, S'$ be two maximal tori containing the image of $f$, each of which is conjugate to $T$ via the elements $g, g' \in G(F)$ (${}^gS = T$ and ${}^{g'}S' = T$). Since $S, S'$ are tori containing the image of $f$, they are both contained in its centralizer $H := C_G(\mathrm{Img}(f))$, where they are still maximal and hence conjugate via an element $h \in H$ (${}^hS = S'$). Then we have

$${}^{g'hg^{-1}}T = {}^{g'h}S = {}^{g'}S' = T,$$

meaning $g'hg^{-1} = w \in N_{G(F)}(T(F))$. In particular, $g'h = wg$. Finally, $q({}^{g'}f) = q({}^{g'h}f) = q({}^{wg}f)$, and since $q$ is invariant under the action of $W$ caused by $w$, $q({}^{wg}f) = q({}^gf)$. This means $\tilde q$ is well-defined, and so $\beta$ is surjective.

Remark 1.6.6. Let $\varphi : G \to H$ be a map between linear algebraic groups with maximal tori $T_G, T_H$ and Weyl groups $W_G, W_H$. We have the map
$$Q(\varphi) : S^2(T_H^*)^{W_H} \to S^2(T_G^*)^{W_G},$$

where for $q \in S^2(T_H^*)^{W_H}$, $Q(\varphi)(q) = q \circ \varphi_*$. We also have the map $\varphi^* : H^* \to G^*$, which restricts to $\varphi^*|_{T_H^*} : T_H^* \to T_G^*$, which in turn extends to

$$\varphi^*|_{T_H^*} : S^2(T_H^*) \to S^2(T_G^*), \qquad q \mapsto q \circ \varphi.$$

However, note that in the latter instance $S^2(T_H^*)$ is being considered as multiplicative maps $T_H \to \mathbb{G}_m$. These are naturally considered as additive maps by evaluating on cocharacters, taking
$$(q \circ \varphi)(f) = q \circ \varphi \circ f \in \mathbb{G}_{m*} \cong \mathbb{Z}$$
for $f \in T_{G*}$. But then, since $q \circ \varphi \circ f = q \circ \varphi_*(f)$, our two maps are identified: $Q(\varphi) = \varphi^*$. So for any homomorphism between linear algebraic groups, its dual map preserves quadratic invariants.

Chapter 2

Tensor Product Maps

In this chapter we study the behaviour of invariant quadratic forms with respect to the Kronecker tensor product map. To start, we introduce the Kronecker tensor product map itself before discussing the kernels and codomains of the map when it is restricted to combinations of the special linear, special orthogonal, and symplectic groups. We then directly compute the behaviour of the induced map on invariant quadratic forms in several cases, culminating in our main result, Theorem 2.2.1, which states that this behaviour depends only on the dimensions of the underlying vector spaces. Some (but not all) computations and proofs of this chapter can be found in [GMS] and [BDZ], which motivated our investigations.

2.1 The Kronecker Product of Matrices

The Kronecker tensor product of two matrices (each representing an endomorphism of a vector space) produces a matrix which represents an endomorphism of the tensor product of the two vector spaces. Here we define this notion precisely for the invertible endomorphisms of the general linear group.

Let $V_1, V_2$ be finite dimensional vector spaces over an algebraically closed field $F$. Consider the following tensor product map:

$$V_1 \oplus V_2 \to V_1 \otimes_F V_2$$

$$(v_1, v_2) \mapsto v_1 \otimes v_2.$$

It induces a map $\rho : GL(V_1) \times GL(V_2) \to GL(V_1 \otimes V_2)$ by requiring that, for $(A_1, A_2) \in GL(V_1) \times GL(V_2)$,

$$\rho((A_1, A_2))(v_1 \otimes v_2) = A_1v_1 \otimes A_2v_2.$$


Hence $\rho((A_1, A_2)) = A_1 \otimes A_2$, where the expression on the right is the Kronecker product of matrices.

Definition 2.1.1. Let $V_1, V_2$ be $n$-dimensional and $m$-dimensional vector spaces respectively. Let $A \in GL(V_1)$ and $B \in GL(V_2)$. The Kronecker product of $A$ and $B$ is $A \otimes B \in GL(V_1 \otimes V_2)$, where $(A \otimes B)(v \otimes w) := Av \otimes Bw$. If $A = [a_{ij}]_{i,j=1}^n$ with respect to a basis $\{v_1, \dots, v_n\}$ of $V_1$, and $B = [b_{ij}]_{i,j=1}^m$ with respect to a basis $\{w_1, \dots, w_m\}$ of $V_2$, then

$$A \otimes B := [a_{ij}B]_{i,j=1}^n$$

with respect to the basis

$$\{v_1 \otimes w_1, v_1 \otimes w_2, \dots, v_1 \otimes w_m, v_2 \otimes w_1, \dots, v_2 \otimes w_m, \dots, v_n \otimes w_m\} \subseteq V_1 \otimes V_2.$$

That is, $A \otimes B$ is an $nm \times nm$ block matrix whose blocks are of the form $a_{ij}B$.

Example 2.1.2. If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, $B = \begin{pmatrix} e & f \\ g & h \end{pmatrix}$, then

$$A \otimes B = \begin{pmatrix} aB & bB \\ cB & dB \end{pmatrix} = \begin{pmatrix} ae & af & be & bf \\ ag & ah & bg & bh \\ ce & cf & de & df \\ cg & ch & dg & dh \end{pmatrix}.$$
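Example 2.1.2 is easy to check numerically. The sketch below is an illustration (the helper `kron_blocks` and the sample matrices are ours; `np.kron` is NumPy's built-in Kronecker product): it assembles the block matrix $[a_{ij}B]$ directly from Definition 2.1.1, compares it against `np.kron`, and verifies the defining property $(A \otimes B)(v \otimes w) = Av \otimes Bw$.

```python
import numpy as np

def kron_blocks(A, B):
    """Assemble the block matrix [a_ij * B] of Definition 2.1.1."""
    n, m = A.shape[0], B.shape[0]
    out = np.zeros((n * m, n * m))
    for i in range(n):
        for j in range(n):
            out[i * m:(i + 1) * m, j * m:(j + 1) * m] = A[i, j] * B
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# the block-by-block construction agrees with NumPy's Kronecker product
assert np.array_equal(kron_blocks(A, B), np.kron(A, B))

# (A (x) B)(v (x) w) = Av (x) Bw, using np.kron of vectors for v (x) w
v, w = np.array([1.0, -2.0]), np.array([3.0, 0.5])
assert np.allclose(np.kron(A, B) @ np.kron(v, w), np.kron(A @ v, B @ w))
```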

Since our map ρ defined by the Kronecker product is a group homomorphism, naturally we ask about its kernel.

Proposition 2.1.3. We have

$$\ker(\rho) = \{(cI, c^{-1}I) \mid c \in F^\times\}.$$

Proof: The kernel of $\rho$ is all pairs $(A, B) \in GL(V_1) \times GL(V_2)$ such that $A \otimes B = I$. Let $A = [a_{ij}]_{i,j=1}^n$ and $B = [b_{ij}]_{i,j=1}^m$ with respect to bases as above. Then, if $A \otimes B = [a_{ij}B]_{i,j=1}^n = I_{nm}$, we know

i) $a_{ij} = 0$ whenever $i \neq j$,

ii) $a_{ii}b_{jj} = 1$ for all $i, j$,

iii) $b_{ij} = 0$ whenever $i \neq j$.

Hence $A, B$ are both diagonal matrices, and since $a_{11}b_{jj} = 1$ for all $j$, this means $B = a_{11}^{-1}I$. Then $a_{ii}b_{11} = 1$ for all $i$ implies $A = a_{11}I$. Hence $\ker(\rho) \subseteq \{(cI, c^{-1}I) \mid c \in F^\times\}$, with the other inclusion being clear.

We may also consider restrictions of ρ to distinguished linear algebraic subgroups of GL(V ) such as SL(V ), SO(V ), etc. We do this as follows. Let G1 ,→ GL(V1) and G2 ,→ GL(V2) be linear algebraic subgroups. Define the restriction of ρ to G1 × G2 to be the composition

$$\rho|_{G_1 \times G_2} : G_1 \times G_2 \hookrightarrow GL(V_1) \times GL(V_2) \xrightarrow{\ \rho\ } GL(V_1 \otimes V_2).$$

Proposition 2.1.4. We have

$$\ker(\rho|_{G_1 \times G_2}) = \{(cI, c^{-1}I) \mid cI \in G_1,\ c^{-1}I \in G_2\}.$$

Proof: This merely follows from the fact that the kernel of a restricted homomorphism is the intersection of the original kernel with the domain. That is,

$$\ker(\rho|_{G_1 \times G_2}) = \ker(\rho) \cap (G_1 \times G_2) = \{(cI, c^{-1}I) \mid cI \in G_1,\ c^{-1}I \in G_2\}.$$

In the following examples, let V be an n-dimensional vector space over F. Example 2.1.5. Consider the restriction of ρ to

SL(V ) = {A ∈ GL(V ) | det(A) = 1}.

The kernel is then

$$\ker(\rho|_{SL(V) \times SL(V)}) = \{(cI, c^{-1}I) \mid cI, c^{-1}I \in SL(V)\} = \{(cI, c^{-1}I) \mid c^n = 1\}.$$

Hence $\ker(\rho|_{SL(V) \times SL(V)})$ is isomorphic to the group of $n$th roots of unity, $\mu_n \subseteq F$.

Example 2.1.6. The special orthogonal group is

$$SO(V) = \{A \in GL(V) \mid \det(A) = 1,\ A^T = A^{-1}\}.$$

Therefore
$$\ker(\rho|_{SO(V) \times SO(V)}) = \{(cI, c^{-1}I) \mid c^n = 1,\ c^2 = 1\} = \begin{cases} \{(I, I)\} & n \text{ odd} \\ \{(I, I), (-I, -I)\} & n \text{ even.} \end{cases}$$

To consider restrictions to the symplectic group, we assume that $V$ is $2n$-dimensional.

Example 2.1.7.
$$Sp(V) = \{A \in GL(V) \mid A^TKA = K\} \text{ where } K = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.$$
Since $(cI)^TK(cI) = c^2K$, a scalar matrix $cI$ lies in $Sp(V)$ if and only if $c^2 = 1$. Hence
$$\ker(\rho|_{Sp(V) \times Sp(V)}) = \{(cI, c^{-1}I) \mid c^2 = 1\} = \{(I, I), (-I, -I)\}.$$

In Proposition 2.1.4 there was no requirement that the linear algebraic subgroups $G_1, G_2$ be equal, and so we can also consider examples involving two of the common subgroups.

Example 2.1.8.
$$\ker(\rho|_{SL(F^n) \times SO(F^m)}) = \{(cI, c^{-1}I) \mid c^n = 1,\ c^{-m} = 1,\ c^{-2} = 1\} = \begin{cases} \{(I, I)\} & \gcd(n, m) \text{ odd} \\ \{(I, I), (-I, -I)\} & \gcd(n, m) \text{ even.} \end{cases}$$

In general, the above tensor product map can be extended to a product of $n$ general linear groups $GL(V_i)$. That is, we consider

$$\rho_n : GL(V_1) \times \dots \times GL(V_n) \to GL(V_1 \otimes \dots \otimes V_n)$$

$$(A_1, A_2, \dots, A_n) \mapsto A_1 \otimes A_2 \otimes \dots \otimes A_n,$$

where $A_1 \otimes \dots \otimes A_n$ is as we would expect:

$$(A_1 \otimes \dots \otimes A_n)(v_1 \otimes \dots \otimes v_n) = A_1v_1 \otimes \dots \otimes A_nv_n.$$

Proposition 2.1.9. We have

$$\ker(\rho_n) = \{(c_1I, c_2I, \dots, c_nI) \mid c_1c_2 \dots c_n = 1\}.$$

Proof: We argue inductively assuming the result holds for all ρm with m < n.

Let $(A_1, \dots, A_n) \in \ker(\rho_n)$. This means that $(A_1 \otimes \dots \otimes A_{n-1}, A_n)$ is in the kernel of $\rho : GL(V_1 \otimes \dots \otimes V_{n-1}) \times GL(V_n) \to GL(V_1 \otimes \dots \otimes V_n)$.

$\Rightarrow (A_1 \otimes \dots \otimes A_{n-1}, A_n) = (c_n^{-1}I, c_nI)$ for some $c_n \in F^\times$

$\Rightarrow (c_nA_1) \otimes A_2 \otimes \dots \otimes A_{n-1} = I$

$\Rightarrow (c_nA_1, A_2, \dots, A_{n-1}) \in \ker(\rho_{n-1})$

$\Rightarrow (c_nA_1, A_2, \dots, A_{n-1}) = (a_1I, c_2I, \dots, c_{n-1}I)$ such that $a_1c_2 \dots c_{n-1} = 1$

$\Rightarrow (A_1, A_2, \dots, A_{n-1}, A_n) = (c_n^{-1}a_1I, c_2I, \dots, c_{n-1}I, c_nI)$

with $c_n^{-1}(a_1c_2 \dots c_{n-1})c_n = c_n^{-1}c_n = 1$.

Hence, setting $c_1 := c_n^{-1}a_1$, we obtain that $(A_1, \dots, A_n) = (c_1I, \dots, c_nI)$ with $c_1c_2 \dots c_n = 1$. This implies that

ker(ρn) ⊆ {(c1I, c2I, . . . , cnI) | c1c2 . . . cn = 1}

with the other inclusion again being clear.

Finally, we consider restrictions of $\rho_n$ to $G_1 \times G_2 \times \dots \times G_n$, where each $G_i \hookrightarrow GL(V_i)$ is a linear algebraic subgroup. The restricted map, which we shall denote by

$$\rho_n' : G_1 \times \dots \times G_n \to GL(V_1 \otimes \dots \otimes V_n),$$

has the following kernel.

Proposition 2.1.10. We have

$$\ker(\rho_n') = \{(c_1I, \dots, c_nI) \mid c_1c_2 \dots c_n = 1,\ c_iI \in G_i\}.$$

Proof: Just as in the two-subgroup case, the identity stems directly from the fact that
$$\ker(\rho_n') = \ker(\rho_n) \cap (G_1 \times \dots \times G_n)$$
and the result follows.

Example 2.1.11. Let $\dim(V_i) = d_i$ and let $G_i = SL(V_i)$ with the obvious embedding into $GL(V_i)$. Then

$$\ker(\rho_n') = \{(c_1I, \dots, c_nI) \in \mu_{d_1} \times \dots \times \mu_{d_n} \mid c_1c_2 \dots c_n = 1\}.$$

Having considered the kernels of these maps, we move on to considering their images. We are able to restrict the codomain significantly, demonstrating that ρ is a map between distinguished linear algebraic subgroups. First, a useful fact for doing so.

Lemma 2.1.12. Let $V_1, V_2$ be two $F$-vector spaces with $\dim(V_1) = n$ and $\dim(V_2) = m$, and let $A \in GL(V_1)$, $B \in GL(V_2)$ be two matrices. Then

$$\det(A \otimes B) = \det(A)^m \det(B)^n.$$

Proof: First we decompose the matrix as $A \otimes B = (A \otimes I_m)(I_n \otimes B)$. The factor $A \otimes I_m$ is the $nm \times nm$ block matrix $[a_{ij}I_m]_{i,j=1}^n$, which in $m(m-1)/2$ row transpositions and a further $m(m-1)/2$ column transpositions can be rearranged into the block-diagonal form

$$\begin{pmatrix} A & & \\ & \ddots & \\ & & A \end{pmatrix}$$

with $m$ copies of $A$ on the diagonal, which clearly has a determinant of $\det(A)^m$. Since we performed an even number of row/column transpositions in total, $m(m-1)$, the determinant of our original matrix agrees with this determinant, and so we have

$$\det(A \otimes I_m) = \det(A)^m.$$

The determinant of the other factor is much more straightforward. We have

$$\det(I_n \otimes B) = \det\begin{pmatrix} B & & \\ & \ddots & \\ & & B \end{pmatrix} = \det(B)^n.$$

Finally,
$$\det(A \otimes B) = \det(A \otimes I_m)\det(I_n \otimes B) = \det(A)^m\det(B)^n,$$
and so we obtain the result.
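Lemma 2.1.12 is also easy to confirm numerically. The following is a spot check of the identity, not part of the proof; the random test matrices are ours:

```python
import numpy as np

# Check det(A (x) B) = det(A)^m * det(B)^n for dim V1 = n, dim V2 = m.
rng = np.random.default_rng(1)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A) ** m * np.linalg.det(B) ** n
assert np.isclose(lhs, rhs, rtol=1e-6)
```

Note the asymmetry of the exponents: each determinant is raised to the dimension of the *other* factor, exactly as the block-diagonal decomposition in the proof predicts.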

This lemma is all that is required to restrict the codomain of ρ|SL(V1)×SL(V2). If (A, B) ∈ SL(V1)×SL(V2), then det(A) = det(B) = 1 which implies that det(A⊗B) = 1. Hence

ρ|SL(V1)×SL(V2) : SL(V1) × SL(V2) → SL(V1 ⊗ V2) and the argument easily extends via induction, showing that

ρn : SL(V1) × ... × SL(Vn) → SL(V1 ⊗ ... ⊗ Vn).

In the case of $\rho$ restricted to special orthogonal groups, if $(A, B) \in SO(V_1) \times SO(V_2)$, then by definition $A^TA = I_n$, $B^TB = I_m$, and $\det(A) = \det(B) = 1$. Therefore
$$(A \otimes B)^T(A \otimes B) = (A^T \otimes B^T)(A \otimes B) = (A^TA) \otimes (B^TB) = I_n \otimes I_m = I_{nm}$$
and $\det(A \otimes B) = 1$. From this it is clear that $A \otimes B \in SO(V_1 \otimes V_2)$, and so

$$\rho|_{SO(V_1) \times SO(V_2)} : SO(V_1) \times SO(V_2) \to SO(V_1 \otimes V_2),$$

and this extends similarly:

$$\rho_n : SO(V_1) \times \dots \times SO(V_n) \to SO(V_1 \otimes \dots \otimes V_n).$$

The last of the common linear algebraic subgroups we have been considering, $Sp(V)$, does not conform in such a straightforward manner. Recall that
$$Sp(V) = \{A \in GL(V) \mid A^TKA = K\} \text{ where } K = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.$$

It is also the case that if $A \in Sp(V)$ then $\det(A) = 1$. Therefore, if

$$(A_1, \dots, A_n) \in Sp(V_1) \times \dots \times Sp(V_n),$$
then
$$(A_1 \otimes \dots \otimes A_n)^T(K \otimes \dots \otimes K)(A_1 \otimes \dots \otimes A_n) = A_1^TKA_1 \otimes \dots \otimes A_n^TKA_n = K \otimes \dots \otimes K$$
and $\det(A_1 \otimes \dots \otimes A_n) = 1$. So the image of $\rho_n$ lies in the group of matrices of determinant 1 preserving the bilinear form corresponding to $K \otimes \dots \otimes K$. This is either $SO(V_1 \otimes \dots \otimes V_n, K \otimes \dots \otimes K)$ if the form is symmetric, or $Sp(V_1 \otimes \dots \otimes V_n, K \otimes \dots \otimes K)$ if the form is antisymmetric, and this depends on the parity of $n$. Since
$$(K \otimes \dots \otimes K)^T = K^T \otimes \dots \otimes K^T = (-K) \otimes \dots \otimes (-K) = (-1)^n(K \otimes \dots \otimes K),$$
the image lies in $SO$ when $n$ is even, and in $Sp$ when $n$ is odd.

2.2 The Product of Quadratic Invariants

As we have seen already, a map of linear algebraic groups $\varphi : H \to G$ restricts to

a map between maximal tori, $\varphi|_{T_H} : T_H \to T_G$. This induces a dual map between character groups $\varphi^* : T_G^* \to T_H^*$, which in turn extends to a map on quadratic forms $\varphi^* : S^2(T_G^*) \to S^2(T_H^*)$ mapping Weyl-group-fixed points to Weyl-group-fixed points:

$$\varphi^* : S^2(T_G^*)^{W_G} \to S^2(T_H^*)^{W_H}, \qquad q \mapsto q \circ \varphi_*.$$
In the case that $H, G$ are split semisimple groups, we have shown that their groups of quadratic invariants are free abelian groups of finite rank generated by the normalized Killing forms of their simple components. Therefore any such map is described entirely by the images of these generators. We now describe this behaviour for the map induced by the Kronecker product map $\rho$ in our familiar examples.

2.2.1 The SL × SL-case

Let dim(V1) = n + 1 and dim(V2) = m + 1. Consider

ρ : SL(V1) × SL(V2) → SL(V1 ⊗ V2).

Let $T_1 \subseteq SL(V_1)$, $T_2 \subseteq SL(V_2)$ be maximal tori as in Example 1.4.7. Denote the maximal torus in $SL(V_1 \otimes V_2)$ by

$$T_{(n+1)(m+1)} = \{\mathrm{diag}(t_1, t_2, \dots, t_{(n+1)(m+1)}) \mid t_1t_2 \dots t_{(n+1)(m+1)} = 1\}$$
and
$$T_{(n+1)(m+1)}^* \cong \mathbb{Z}^{(n+1)(m+1)}/\langle f_1 + f_2 + \dots + f_{(n+1)(m+1)}\rangle$$

with generating elements $\bar f_i$ for $1 \leq i \leq (n+1)(m+1) - 1$. Then, since $T_1 \times T_2$ is a maximal torus in $SL(V_1) \times SL(V_2)$, we compute

ρ|T1×T2 : T1 × T2 → T(n+1)(m+1).

Let $A = \mathrm{diag}(a_1, \dots, a_{n+1}) \in T_1$, $B = \mathrm{diag}(b_1, \dots, b_{m+1}) \in T_2$. We have

$$\rho(A, B) = \mathrm{diag}(a_1b_1, a_1b_2, \dots, a_1b_{m+1},\ a_2b_1, a_2b_2, \dots, a_2b_{m+1},\ \dots,\ a_{n+1}b_1, a_{n+1}b_2, \dots, a_{n+1}b_{m+1}).$$

To find $\rho^*(\bar f_i)$, we first decompose $i = k(m+1) + r$ in such a way that $0 \leq k \leq n$, $1 \leq r \leq m+1$; observe that this decomposition is unique. Then we obtain

$$\rho^*(\bar f_i)(A, B) = \bar f_i(\rho(A, B)) = a_{k+1}b_r$$

and so
$$\rho^*(\bar f_i) = (\bar e_{k+1}, \bar e_r) \in T_1^* \times T_2^*.$$
To apply these arguments to the invariants $S^2(T_{(n+1)(m+1)}^*)^W$, note that by Example 1.4.7,

$$\Phi(SL(V_1 \otimes V_2)) = \{\bar f_i - \bar f_j \mid 1 \leq i \neq j \leq (n+1)(m+1)\}.$$

Hence the Killing form and normalized Killing form are

$$K_{SL(V_1 \otimes V_2)} = 2(n+1)(m+1)\sum_{i=1}^{(n+1)(m+1)} \bar f_i^2 = 4(n+1)(m+1)\sum_{\substack{i,j=1 \\ i \leq j}}^{(n+1)(m+1)-1} \bar f_i\bar f_j,$$

$$q_{SL(V_1 \otimes V_2)} = \sum_{\substack{i,j=1 \\ i \leq j}}^{(n+1)(m+1)-1} \bar f_i\bar f_j = \frac{1}{2}\sum_{i=1}^{(n+1)(m+1)} \bar f_i^2.$$

Since $\rho^*(\bar f_i)$ depends on $r$ and $k$ in the decomposition of $i$, we express the normalized Killing form as a double sum over these variables:

$$q_{SL(V_1 \otimes V_2)} = \frac{1}{2}\sum_{k=0}^n\sum_{r=1}^{m+1} \bar f_{k(m+1)+r}^2$$

\begin{align*}
\Rightarrow \rho^*(q_{SL(V_1 \otimes V_2)}) &= \frac{1}{2}\sum_{k=0}^n\sum_{r=1}^{m+1} \rho^*(\bar f_{k(m+1)+r})^2 \\
&= \frac{1}{2}\sum_{k=0}^n\sum_{r=1}^{m+1} (\bar e_{k+1}, \bar e_r)^2 \\
&= \frac{1}{2}\sum_{k=1}^{n+1}\sum_{r=1}^{m+1} (\bar e_k^2, \bar e_r^2) \\
&= \frac{1}{2}\Big((m+1)\sum_{k=1}^{n+1} \bar e_k^2,\ (n+1)\sum_{r=1}^{m+1} \bar e_r^2\Big) \\
&= \Big((m+1)\frac{1}{2}\sum_{k=1}^{n+1} \bar e_k^2,\ (n+1)\frac{1}{2}\sum_{r=1}^{m+1} \bar e_r^2\Big) \\
&= \big((m+1)q_{SL(V_1)},\ (n+1)q_{SL(V_2)}\big).
\end{align*}

The Killing form is simply a multiple of the normalized Killing form, and so we obtain

\begin{align*}
\rho^*(K_{SL(V_1 \otimes V_2)}) &= 4(n+1)(m+1)\,\rho^*(q_{SL(V_1 \otimes V_2)}) \\
&= 4(n+1)(m+1)\big((m+1)q_{SL(V_1)},\ (n+1)q_{SL(V_2)}\big) \\
&= \big((m+1)^2 \cdot 4(n+1)q_{SL(V_1)},\ (n+1)^2 \cdot 4(m+1)q_{SL(V_2)}\big) \\
&= \big((m+1)^2K_{SL(V_1)},\ (n+1)^2K_{SL(V_2)}\big).
\end{align*}
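The computation above can be sanity-checked numerically by identifying cocharacters of the diagonal $SL$ torus with integer exponent vectors summing to 0, on which the normalized Killing form evaluates as half the sum of squares. This sketch and its helper names are ours (an illustration under that identification, not the author's method):

```python
import numpy as np

# Check rho*(q_{SL(V1 (x) V2)}) = ((m+1) q_{SL(V1)}, (n+1) q_{SL(V2)}).
rng = np.random.default_rng(2)
n, m = 3, 4                      # dim V1 = n + 1, dim V2 = m + 1

a = rng.integers(-5, 6, size=n + 1)
a[-1] -= a.sum()                 # force sum(a) == 0: a cocharacter of T1
b = rng.integers(-5, 6, size=m + 1)
b[-1] -= b.sum()

q = lambda v: int(np.sum(v * v)) // 2   # normalized Killing form of SL

ab = np.add.outer(a, b).ravel()  # exponents of the image torus element
assert q(ab) == (m + 1) * q(a) + (n + 1) * q(b)
```

The cross terms vanish precisely because the exponent vectors sum to 0, which is where the determinant-one condition of $SL$ enters.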

2.2.2 The Even SO × SO-case

Let dim(V1) = 2n and dim(V2) = 2m. We repeat the above process for the map

ρ : SO(V1, Ω2n) × SO(V2, Ω2m) → SO(V1 ⊗ V2, Ω4nm).

Let $T_1 \subseteq SO(V_1, \Omega_{2n})$, $T_2 \subseteq SO(V_2, \Omega_{2m})$ be maximal tori as described in 1.4.8. The larger torus is
$$T_{4nm} = \{\mathrm{diag}(t_1, \dots, t_{2nm}, t_{2nm}^{-1}, \dots, t_1^{-1}) \mid t_i \in F^\times\}$$
and
$$T_{4nm}^* = \langle f_i \mid 1 \leq i \leq 2nm \rangle \cong \mathbb{Z}^{2nm}.$$

The root system $\Phi(SO(V_1 \otimes V_2, \Omega_{4nm}))$ is of type $D_{2nm}$,

$$\Phi(SO(V_1 \otimes V_2, \Omega_{4nm})) = \{\pm f_i \pm f_j \mid 1 \leq i < j \leq 2nm\},$$

and so has Killing form and normalized Killing form

$$K_{SO(V_1 \otimes V_2, \Omega_{4nm})} = 4(2nm - 1)\sum_{i=1}^{2nm} f_i^2, \qquad q_{SO(V_1 \otimes V_2, \Omega_{4nm})} = \sum_{i=1}^{2nm} f_i^2.$$

Let $A = \mathrm{diag}(a_1, \dots, a_n, a_n^{-1}, \dots, a_1^{-1}) \in T_1$, $B = \mathrm{diag}(b_1, \dots, b_m, b_m^{-1}, \dots, b_1^{-1}) \in T_2$. We then have

$$\rho(A, B) = \mathrm{diag}(a_1b_1, \dots, a_1b_m, a_1b_m^{-1}, \dots, a_1b_1^{-1},\ a_2b_1, \dots, a_2b_1^{-1},\ \dots,\ a_nb_1, \dots, a_nb_1^{-1},\ a_n^{-1}b_1, \dots, a_n^{-1}b_1^{-1},\ \dots,\ a_1^{-1}b_1, \dots, a_1^{-1}b_1^{-1}).$$

If $i = k(2m) + r$ is such that $0 \leq k \leq n - 1$, $1 \leq r \leq 2m$, then
$$\rho^*(f_i) = \begin{cases} (e_{k+1}, e_r) & 1 \leq r \leq m \\ (e_{k+1}, -e_{2m+1-r}) & m + 1 \leq r \leq 2m. \end{cases}$$

\begin{align*}
\Rightarrow \rho^*(q_{SO(V_1 \otimes V_2, \Omega_{4nm})}) &= \sum_{i=1}^{2nm} \rho^*(f_i)^2 \\
&= \sum_{k=0}^{n-1}\Big[\sum_{r=1}^m \rho^*(f_{k(2m)+r})^2 + \sum_{r=m+1}^{2m} \rho^*(f_{k(2m)+r})^2\Big] \\
&= \sum_{k=0}^{n-1}\Big[\sum_{r=1}^m (e_{k+1}, e_r)^2 + \sum_{r=m+1}^{2m} (e_{k+1}, -e_{2m+1-r})^2\Big] \\
&= \sum_{k=1}^n\Big[\sum_{r=1}^m (e_k^2, e_r^2) + \sum_{r=1}^m (e_k^2, e_r^2)\Big] \\
&= 2\sum_{k=1}^n\sum_{r=1}^m (e_k^2, e_r^2) \\
&= 2\Big(m\sum_{k=1}^n e_k^2,\ n\sum_{r=1}^m e_r^2\Big) \\
&= \big((2m)q_{SO(V_1, \Omega_{2n})},\ (2n)q_{SO(V_2, \Omega_{2m})}\big).
\end{align*}

Applying the map to the Killing form gives

$$\rho^*(K_{SO(V_1 \otimes V_2, \Omega_{4nm})}) = \Big(\frac{4nm^2 - 2m}{n-1}K_{SO(V_1, \Omega_{2n})},\ \frac{4n^2m - 2n}{m-1}K_{SO(V_2, \Omega_{2m})}\Big).$$
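The multipliers $(2m, 2n)$ of the normalized Killing forms can be verified numerically in the same spirit as the $SL$ case: the torus exponents of $A \otimes B$ are $a_k + b_r$ and $a_k - b_r$, and the normalized Killing form is the plain sum of squares. A hedged sketch with made-up test data:

```python
import numpy as np

# Check rho*(q) = ((2m) q1, (2n) q2) for the even SO x SO case.
rng = np.random.default_rng(3)
n, m = 3, 4                      # dim V1 = 2n, dim V2 = 2m
a = rng.integers(-5, 6, size=n)  # exponents a_1..a_n of the SO(2n) torus
b = rng.integers(-5, 6, size=m)

q = lambda v: int(np.sum(v * v))  # normalized Killing form: sum of squares

vals = np.concatenate([np.add.outer(a, b).ravel(),
                       np.subtract.outer(a, b).ravel()])
assert q(vals) == 2 * m * q(a) + 2 * n * q(b)
```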

2.2.3 The Odd SO × SO-case

We now consider the orthogonal case when both vector spaces have odd dimension. Let $\dim(V_1) = 2n + 1$, $\dim(V_2) = 2m + 1$.

ρ : SO(V1, Ω2n+1) × SO(V2, Ω2m+1) → SO(V1 ⊗ V2, Ω(2n+1)(2m+1)).

Again let $T_1 \subseteq SO(V_1, \Omega_{2n+1})$, $T_2 \subseteq SO(V_2, \Omega_{2m+1})$ be the maximal tori as per Example 1.4.9. The maximal torus in $SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})$ is

$$T_{(2n+1)(2m+1)} = \{\mathrm{diag}(t_1, t_2, \dots, t_{2nm+n+m}, 1, t_{2nm+n+m}^{-1}, \dots, t_2^{-1}, t_1^{-1}) \mid t_i \in F^\times\}.$$

Let $A = \mathrm{diag}(a_1, \dots, a_n, 1, a_n^{-1}, \dots, a_1^{-1}) \in T_1$, $B = \mathrm{diag}(b_1, \dots, b_m, 1, b_m^{-1}, \dots, b_1^{-1}) \in T_2$, and let $\rho(A, B) := \mathrm{diag}(c_1, \dots, c_{(2n+1)(2m+1)})$.

If $i = k(2m+1) + r$ with $0 \leq k \leq 2n$, $1 \leq r \leq 2m+1$, then

$$c_i = \begin{cases}
a_{k+1}b_r & 0 \leq k \leq n-1,\ 1 \leq r \leq m \\
a_{k+1} & 0 \leq k \leq n-1,\ r = m+1 \\
a_{k+1}b_{2m+2-r}^{-1} & 0 \leq k \leq n-1,\ m+2 \leq r \leq 2m+1 \\
b_r & k = n,\ 1 \leq r \leq m \\
1 & k = n,\ r = m+1 \\
b_{2m+2-r}^{-1} & k = n,\ m+2 \leq r \leq 2m+1 \\
a_{2n+1-k}^{-1}b_r & n+1 \leq k \leq 2n,\ 1 \leq r \leq m \\
a_{2n+1-k}^{-1} & n+1 \leq k \leq 2n,\ r = m+1 \\
a_{2n+1-k}^{-1}b_{2m+2-r}^{-1} & n+1 \leq k \leq 2n,\ m+2 \leq r \leq 2m+1
\end{cases}$$

and so, for $1 \leq i \leq 2nm + n + m$,

$$\rho^*(f_i) = \begin{cases}
(e_{k+1}, e_r) & 0 \leq k \leq n-1,\ 1 \leq r \leq m \\
(e_{k+1}, 0) & 0 \leq k \leq n-1,\ r = m+1 \\
(e_{k+1}, -e_{2m+2-r}) & 0 \leq k \leq n-1,\ m+2 \leq r \leq 2m+1 \\
(0, e_r) & k = n,\ 1 \leq r \leq m.
\end{cases}$$

From Example 1.4.9,

$$\Phi(SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})) = \{\pm f_i \pm f_j,\ \pm f_k \mid 1 \leq i < j \leq 2nm+n+m,\ 1 \leq k \leq 2nm+n+m\}$$

and

$$K_{SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})} = 4(2nm + n + m - 2)\sum_{i=1}^{2nm+n+m} f_i^2,$$
$$q_{SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})} = \sum_{i=1}^{2nm+n+m} f_i^2.$$

This implies

\begin{align*}
\rho^*(q_{SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})}) &= \sum_{i=1}^{2nm+n+m} \rho^*(f_i)^2 \\
&= \sum_{k=0}^{n-1}\Big[\sum_{r=1}^m \rho^*(f_{k(2m+1)+r})^2 + \rho^*(f_{k(2m+1)+(m+1)})^2 + \sum_{r=m+2}^{2m+1} \rho^*(f_{k(2m+1)+r})^2\Big] + \sum_{r=1}^m \rho^*(f_{n(2m+1)+r})^2 \\
&= \sum_{k=0}^{n-1}\Big[\sum_{r=1}^m (e_{k+1}, e_r)^2 + (e_{k+1}, 0)^2 + \sum_{r=m+2}^{2m+1} (e_{k+1}, -e_{2m+2-r})^2\Big] + \sum_{r=1}^m (0, e_r)^2 \\
&= \sum_{k=1}^n\Big[(e_k^2, 0) + 2\sum_{r=1}^m (e_k^2, e_r^2)\Big] + \Big(0, \sum_{r=1}^m e_r^2\Big) \\
&= \Big((2m+1)\sum_{k=1}^n e_k^2,\ 2n\sum_{r=1}^m e_r^2\Big) + \Big(0, \sum_{r=1}^m e_r^2\Big) \\
&= \Big((2m+1)\sum_{k=1}^n e_k^2,\ (2n+1)\sum_{r=1}^m e_r^2\Big) \\
&= \big((2m+1)q_{SO(V_1, \Omega_{2n+1})},\ (2n+1)q_{SO(V_2, \Omega_{2m+1})}\big)
\end{align*}

and for the Killing form

$$\rho^*(K_{SO(V_1 \otimes V_2, \Omega_{(2n+1)(2m+1)})}) = \Big(\frac{4nm^2 + 4nm + 2m^2 - 3m + n - 2}{n-2}K_{SO(V_1, \Omega_{2n+1})},\ \frac{4n^2m + 4nm + 2n^2 - 3n + m - 2}{m-2}K_{SO(V_2, \Omega_{2m+1})}\Big).$$
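As in the even case, the multipliers $(2m+1, 2n+1)$ admit a quick numerical check: the $2nm + n + m$ independent exponents of $A \otimes B$ are $a_k \pm b_r$, $a_k$, and $b_r$, and summing their squares must give $(2m+1)q(a) + (2n+1)q(b)$. A hedged sketch with made-up test data:

```python
import numpy as np

# Check rho*(q) = ((2m+1) q1, (2n+1) q2) for the odd SO x SO case.
rng = np.random.default_rng(4)
n, m = 2, 3                      # dim V1 = 2n + 1, dim V2 = 2m + 1
a = rng.integers(-5, 6, size=n)
b = rng.integers(-5, 6, size=m)

q = lambda v: int(np.sum(v * v))

vals = np.concatenate([np.add.outer(a, b).ravel(),
                       np.subtract.outer(a, b).ravel(), a, b])
assert q(vals) == (2 * m + 1) * q(a) + (2 * n + 1) * q(b)
```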

2.2.4 The Mixed SO × SO-case

The remaining combination of orthogonal groups is when one vector space has even dimension and the other has odd dimension. Let $\dim(V_1) = 2n$ and $\dim(V_2) = 2m + 1$. We consider

$$\rho : SO(V_1, \Omega_{2n}) \times SO(V_2, \Omega_{2m+1}) \to SO(V_1 \otimes V_2, \Omega_{2n(2m+1)}).$$

The maximal tori $T_1 \subseteq SO(V_1, \Omega_{2n})$, $T_2 \subseteq SO(V_2, \Omega_{2m+1})$ are as in the previous examples. Since $V_1 \otimes V_2$ is even dimensional, the larger maximal torus is

$$T_{2n(2m+1)} = \{\mathrm{diag}(t_1, \dots, t_{n(2m+1)}, t_{n(2m+1)}^{-1}, \dots, t_1^{-1}) \mid t_i \in F^\times\}$$

and

$$T_{2n(2m+1)}^* = \langle \chi_i \mid 1 \leq i \leq n(2m+1) \rangle \cong \mathbb{Z}^{n(2m+1)}, \qquad \chi_i \leftrightarrow f_i.$$

As always, it is sufficient to understand $\rho^*(f_i)$. Let $A = \mathrm{diag}(a_1, \dots, a_n, a_n^{-1}, \dots, a_1^{-1}) \in T_1$ and $B = \mathrm{diag}(b_1, \dots, b_m, 1, b_m^{-1}, \dots, b_1^{-1}) \in T_2$. If $\rho(A, B) = A \otimes B = \mathrm{diag}(c_1, \dots, c_{2n(2m+1)})$, then for $i = k(2m+1) + r$, where $0 \leq k \leq 2n - 1$, $1 \leq r \leq 2m+1$,

$$c_i = \begin{cases}
a_{k+1}b_r & 0 \leq k \leq n-1,\ 1 \leq r \leq m \\
a_{k+1} & 0 \leq k \leq n-1,\ r = m+1 \\
a_{k+1}b_{2m+2-r}^{-1} & 0 \leq k \leq n-1,\ m+2 \leq r \leq 2m+1 \\
a_{2n-k}^{-1}b_r & n \leq k \leq 2n-1,\ 1 \leq r \leq m \\
a_{2n-k}^{-1} & n \leq k \leq 2n-1,\ r = m+1 \\
a_{2n-k}^{-1}b_{2m+2-r}^{-1} & n \leq k \leq 2n-1,\ m+2 \leq r \leq 2m+1.
\end{cases}$$

Therefore, for $1 \leq i \leq n(2m+1)$ (so $0 \leq k \leq n-1$),

$$\rho^*(f_i) = \begin{cases}
(e_{k+1}, e_r) & 1 \leq r \leq m \\
(e_{k+1}, 0) & r = m+1 \\
(e_{k+1}, -e_{2m+2-r}) & m+2 \leq r \leq 2m+1.
\end{cases}$$

We know by example 1.4.8 that SO(V1 ⊗ V2, Ω2n(2m+1)) has Killing form and normalized Killing form
\[
K_{SO(V_1\otimes V_2,\,\Omega_{2n(2m+1)})} = 4(n(2m+1)-1)\sum_{i=1}^{n(2m+1)} f_i^2, \qquad q_{SO(V_1\otimes V_2,\,\Omega_{2n(2m+1)})} = \sum_{i=1}^{n(2m+1)} f_i^2.
\]

So we compute

\begin{align*}
\rho^*(q_{SO(V_1\otimes V_2,\,\Omega_{2n(2m+1)})}) &= \sum_{i=1}^{n(2m+1)} \rho^*(f_i)^2 \\
&= \sum_{k=0}^{n-1}\left(\sum_{r=1}^{m} \rho^*(f_{k(2m+1)+r})^2 + \rho^*(f_{k(2m+1)+(m+1)})^2 + \sum_{r=m+2}^{2m+1} \rho^*(f_{k(2m+1)+r})^2\right) \\
&= \sum_{k=0}^{n-1}\left(\sum_{r=1}^{m} (e_{k+1}, e_r)^2 + (e_{k+1}, 0)^2 + \sum_{r=m+2}^{2m+1} (e_{k+1}, -e_{2m+2-r})^2\right) \\
&= \sum_{k=1}^{n}\left((e_k, 0)^2 + 2\sum_{r=1}^{m} (e_k, e_r)^2\right) \\
&= \sum_{k=1}^{n}\left((2m+1)e_k^2,\ 2\sum_{r=1}^{m} e_r^2\right) \\
&= \left((2m+1)\sum_{k=1}^{n} e_k^2,\ 2n\sum_{r=1}^{m} e_r^2\right) \\
&= \left((2m+1)\,q_{SO(V_1,\Omega_{2n})},\ 2n\,q_{SO(V_2,\Omega_{2m+1})}\right).
\end{align*}

As for the Killing forms, we obtain

\[
\rho^*(K_{SO(V_1\otimes V_2,\,\Omega_{2n(2m+1)})}) = \left(\frac{4nm^2+4nm-2m+n-1}{n-1}\, K_{SO(V_1,\Omega_{2n})},\ \frac{8n^2m+4n^2-4n}{2m-1}\, K_{SO(V_2,\Omega_{2m+1})}\right).
\]

2.2.5 The Sp × Sp-case

Let dim(V1) = 2n and dim(V2) = 2m. Consider the product of symplectic groups

ρ : Sp(V1, Ψ2n) × Sp(V2, Ψ2m) → SO(V1 ⊗ V2, Ψ2n ⊗ Ψ2m).

In this case we cannot immediately invoke previous results about the root structure of SO(V1 ⊗ V2, Ψ2n ⊗ Ψ2m), since this is an orthogonal group with respect to a different orthogonal form, Ψ2n ⊗ Ψ2m ≠ Ω4nm. We take a brief aside to derive the Lie algebra and root structure.

\[
\operatorname{Lie}(SO(V_1\otimes V_2,\,\Psi_{2n}\otimes\Psi_{2m})) = \{A \in \operatorname{End}(V_1\otimes V_2) \mid A^{T}(\Psi_{2n}\otimes\Psi_{2m}) = -(\Psi_{2n}\otimes\Psi_{2m})A\}.
\]

To analyze this condition we define the function h(i),

\[
h(i) = \begin{cases} (-1)^{\lceil i/m \rceil + 1} & 1 \le i \le 2nm \\ (-1)^{\lceil i/m \rceil} & 2nm + 1 \le i \le 4nm \end{cases}
\]
and note that
\[
\Psi_{2n} \otimes \Psi_{2m} =
\begin{pmatrix}
 & & h(4nm) \\
 & \iddots & \\
h(1) & &
\end{pmatrix},
\]
i.e. the antidiagonal matrix with entries h(1), ..., h(4nm) read from the bottom-left corner.
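The claim that Ψ2n ⊗ Ψ2m is this antidiagonal matrix of signs can be tested directly. The sketch below assumes, as in the earlier chapters, that Ψ_{2l} is the antidiagonal Gram matrix with +1 in the top l antidiagonal entries and −1 in the bottom l:

```python
def psi(l):
    """Antidiagonal symplectic Gram matrix of size 2l (assumed convention:
    +1 on the top half of the antidiagonal, -1 on the bottom half)."""
    N = 2 * l
    M = [[0] * N for _ in range(N)]
    for i in range(N):
        M[i][N - 1 - i] = 1 if i < l else -1
    return M

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    p, q = len(A), len(B)
    return [[A[i // q][j // q] * B[i % q][j % q] for j in range(p * q)]
            for i in range(p * q)]

def h(i, n, m):
    """The sign function h(i) defined above (1-indexed)."""
    if i <= 2 * n * m:
        return (-1) ** (-(-i // m) + 1)   # ceil(i/m) + 1
    return (-1) ** (-(-i // m))           # ceil(i/m)

for n, m in [(1, 1), (2, 1), (2, 3), (3, 2)]:
    M = kron(psi(n), psi(m))
    size = 4 * n * m
    for i in range(1, size + 1):
        for j in range(1, size + 1):
            expected = h(i, n, m) if j == size + 1 - i else 0
            assert M[i - 1][j - 1] == expected
```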

If we set A = [a_{ij}]_{i,j=1}^{4nm}, then

\[
-(\Psi_{2n}\otimes\Psi_{2m})A^{T}(\Psi_{2n}\otimes\Psi_{2m}) = \left[-h(i)h(j)\,a_{4nm+1-j,\,4nm+1-i}\right]_{i,j=1}^{4nm},
\]

meaning that A ∈ Lie(SO(V1 ⊗ V2, Ψ2n ⊗ Ψ2m)) is equivalent to

\[
a_{ij} = -h(i)h(j)\,a_{4nm+1-j,\,4nm+1-i} \quad \text{for all } 1 \le i, j \le 4nm,
\]

and so Lie(SO(V1 ⊗ V2, Ψ2n ⊗ Ψ2m)) is spanned by the set

\[
\{E_{ij} - h(i)h(j)E_{4nm+1-j,\,4nm+1-i} \mid 1 \le i, j \le 4nm\}.
\]

The maximal torus in SO(V1 ⊗ V2, Ψ2n ⊗ Ψ2m) does not differ from the usual one,
\[
T_{4nm} = \{\operatorname{diag}(t_1, \ldots, t_{2nm}, t_{2nm}^{-1}, \ldots, t_1^{-1}) \mid t_i \in F^{\times}\}
\]

and T∗_{4nm} = ⟨χ_i | 1 ≤ i ≤ 2nm⟩. We then obtain a weight space decomposition almost identical to our previous decomposition with respect to Ω4nm.

Weight space | Weight
Diagonal matrices | 1 ↔ 0
F⟨E_{ij} − h(i)h(j)E_{4nm+1−j,4nm+1−i}⟩, 1 ≤ i ≠ j ≤ 2nm | χ_i χ_j^{−1} ↔ e_i − e_j
F⟨E_{i,4nm+1−j} − h(i)h(j)E_{j,4nm+1−i}⟩, 1 ≤ i, j ≤ 2nm | χ_i χ_j ↔ e_i + e_j
F⟨E_{4nm+1−i,j} − h(i)h(j)E_{4nm+1−j,i}⟩, 1 ≤ i, j ≤ 2nm | χ_i^{−1} χ_j^{−1} ↔ −e_i − e_j

This produces a root system of type D2nm

\[
\Phi(SO(V_1 \otimes V_2,\, \Psi_{2n}\otimes\Psi_{2m})) = \{\pm e_i \pm e_j \mid 1 \le i < j \le 2nm\}.
\]

It was expected that the root systems of SO relative to the two orthogonal forms Ω4nm and Ψ2n ⊗ Ψ2m would be isomorphic, but in fact they are equal as subsets of T∗. In addition, the maximal tori in SO(V_i, Ω) and Sp(V_i, Ψ) are also equal. Therefore we may use our previous calculations of the Killing form, normalized Killing form, images of the roots under ρ∗, etc. from SO(V1 ⊗ V2, Ω4nm):
\[
K_{SO(V_1\otimes V_2,\,\Psi_{2n}\otimes\Psi_{2m})} = 4(2nm-1)\sum_{i=1}^{2nm} f_i^2, \qquad q_{SO(V_1\otimes V_2,\,\Psi_{2n}\otimes\Psi_{2m})} = \sum_{i=1}^{2nm} f_i^2.
\]
This implies
\begin{align*}
\rho^*(q_{SO(V_1\otimes V_2,\,\Psi_{2n}\otimes\Psi_{2m})}) &= \left(2m\sum_{k=1}^{n} e_k^2,\ 2n\sum_{r=1}^{m} e_r^2\right) \\
&= \left(2m\,q_{Sp(V_1,\Psi_{2n})},\ 2n\,q_{Sp(V_2,\Psi_{2m})}\right).
\end{align*}
On the level of Killing forms the result does differ slightly from the case involving only orthogonal groups. We have
\[
\rho^*(K_{SO(V_1\otimes V_2,\,\Psi_{2n}\otimes\Psi_{2m})}) = \left(\frac{4nm^2-2m}{n+1}\, K_{Sp(V_1,\Psi_{2n})},\ \frac{4n^2m-2n}{m+1}\, K_{Sp(V_2,\Psi_{2m})}\right).
\]

2.2.6 The Even Sp × SO-case

Let dim(V1) = 2n, dim(V2) = 2m. Consider

ρ : Sp(V1, Ψ2n) × SO(V2, Ω2m) → Sp(V1 ⊗ V2, Ψ4nm).

Since both the torus
\[
T_{4nm} = \{\operatorname{diag}(t_1, \ldots, t_{2nm}, t_{2nm}^{-1}, \ldots, t_1^{-1}) \mid t_i \in F^{\times}\}
\]
and the normalized Killing form
\[
q_{Sp(V_1\otimes V_2,\,\Psi_{4nm})} = \sum_{i=1}^{2nm} f_i^2
\]
are the same in Sp(V1 ⊗ V2, Ψ4nm) as they are for SO(V1 ⊗ V2, Ω4nm) (as is also the case for Sp(V1, Ψ2n) and SO(V1, Ω2n)), we may again directly apply our previous calculations. Therefore
\begin{align*}
\rho^*(q_{Sp(V_1\otimes V_2,\,\Psi_{4nm})}) &= \left(2m\sum_{k=1}^{n} e_k^2,\ 2n\sum_{r=1}^{m} e_r^2\right) \\
&= \left(2m\,q_{Sp(V_1,\Psi_{2n})},\ 2n\,q_{SO(V_2,\Omega_{2m})}\right).
\end{align*}

The Killing form is then given by
\[
\rho^*(K_{Sp(V_1\otimes V_2,\,\Psi_{4nm})}) = \left(\frac{4nm^2+2m}{n+1}\, K_{Sp(V_1,\Psi_{2n})},\ \frac{4n^2m+2n}{m-1}\, K_{SO(V_2,\Omega_{2m})}\right).
\]

2.2.7 The Odd Sp × SO-case

Let dim(V1) = 2n, dim(V2) = 2m + 1. Consider

ρ : Sp(V1, Ψ2n) × SO(V2, Ω2m+1) → Sp(V1 ⊗ V2, Ψ2n(2m+1)).

The argument is analogous to the one used above. Since the tori and normalized Killing forms of Sp(V1, Ψ2n) and Sp(V1 ⊗ V2, Ψ2n(2m+1)) are the same as their orthogonal counterparts, we know that

\[
\rho^*(q_{Sp(V_1\otimes V_2,\,\Psi_{2n(2m+1)})}) = \left((2m+1)\,q_{Sp(V_1,\Psi_{2n})},\ 2n\,q_{SO(V_2,\Omega_{2m+1})}\right)
\]
and so
\[
\rho^*(K_{Sp(V_1\otimes V_2,\,\Psi_{2n(2m+1)})}) = \left(\frac{4nm^2+4nm+2m+n+1}{n+1}\, K_{Sp(V_1,\Psi_{2n})},\ \frac{8n^2m+4n^2+4n}{2m-1}\, K_{SO(V_2,\Omega_{2m+1})}\right).
\]

2.2.8 The General Case

All our computations can be summarized as follows.

Let dim(V1) = d1 and dim(V2) = d2. Suppose G1,G2, and G3 are the following groups (choosing Sp only when the appropriate di is even).

G1 G2 G3
SL SL SL
SO SO SO
Sp SO Sp
Sp Sp SO

Then the Kronecker product map ρ: G1(V1) × G2(V2) → G3(V1 ⊗ V2) induces a map between invariant quadratic forms such that

\[
\rho^*\colon S^2(T_3^*)^{W_3} \to S^2(T_1^*)^{W_1} \oplus S^2(T_2^*)^{W_2}
\]

q3 ↦ (d2 q1, d1 q2)

where Ti ⊆ Gi are the maximal tori, and Wi are the respective Weyl groups. This result can be generalized to an arbitrary product of groups.

Theorem 2.2.1. Let V1, ..., Vn be vector spaces such that dim(Vi) = di. Consider linear algebraic groups G1, ..., Gn, H in one of the following configurations

• G1,...,Gn,H = SL

• G1, ..., G2m = Sp, G2m+1, ..., Gn = SO, H = SO

• G1, ..., G2m+1 = Sp, G2m+2, ..., Gn = SO, H = Sp,

where 2m (resp. 2m + 1) ≤ n, and Gi = Sp only when di is even. Consider the Kronecker product map

ρ|n : G1(V1) × ... × Gn(Vn) → H(V1 ⊗ ... ⊗ Vn)

and let q_i ∈ S²(T_i∗)^{W_i} and q_H ∈ S²(T_H∗)^{W_H} be the normalized Killing forms. Then

\[
\rho|_n^*(q_H) = \left((d_2 \cdots d_n)q_1,\ (d_1 d_3 \cdots d_n)q_2,\ \ldots,\ (d_1 \cdots \hat{d_i} \cdots d_n)q_i,\ \ldots,\ (d_1 \cdots d_{n-1})q_n\right)
\]

where \hat{d_i} denotes the omission of the factor d_i.

Proof: We proceed inductively on the number of factors, with the base case of two factors already verified. Note that our map ρ|n factors as

\[
\rho|_n\colon G_1(V_1) \times \cdots \times G_n(V_n) \xrightarrow{\ \rho|_{n-1} \times \mathrm{Id}\ } H'(V_1 \otimes \cdots \otimes V_{n-1}) \times G_n(V_n) \xrightarrow{\ \rho'\ } H(V_1 \otimes \cdots \otimes V_n).
\]

Here H′ is the appropriate group from among SL, SO, Sp depending on the groups G1, ..., G_{n−1}. The map ρ′ is a tensor product map from two groups to one, and is therefore included in our previous examples, meaning

\[
\rho'^*(q_H) = (d_n q_{H'},\ \dim(V_1 \otimes \cdots \otimes V_{n-1})\,q_n) = (d_n q_{H'},\ (d_1 \cdots d_{n-1})q_n).
\]

Then by assumption, since the map

ρ|_{n−1} : G1(V1) × ⋯ × G_{n−1}(V_{n−1}) → H′(V1 ⊗ ⋯ ⊗ V_{n−1}) involves one less factor, we have

\[
\rho|_{n-1}^*(q_{H'}) = \left((d_2 \cdots d_{n-1})q_1,\ \ldots,\ (d_1 \cdots \hat{d_i} \cdots d_{n-1})q_i,\ \ldots,\ (d_1 \cdots d_{n-2})q_{n-1}\right).
\]

Combining these two equalities we obtain

\begin{align*}
\rho|_n^*(q_H) &= \left((\rho|_{n-1} \times \mathrm{Id})^* \circ \rho'^*\right)(q_H) \\
&= (\rho|_{n-1} \times \mathrm{Id})^*\left(d_n q_{H'},\ (d_1 \cdots d_{n-1})q_n\right) \\
&= \left(d_n \rho|_{n-1}^*(q_{H'}),\ (d_1 \cdots d_{n-1})q_n\right) \\
&= \left(d_n\left((d_2 \cdots d_{n-1})q_1,\ \ldots,\ (d_1 \cdots d_{n-2})q_{n-1}\right),\ (d_1 \cdots d_{n-1})q_n\right) \\
&= \left((d_2 \cdots d_n)q_1,\ \ldots,\ (d_1 \cdots \hat{d_i} \cdots d_n)q_i,\ \ldots,\ (d_1 \cdots d_{n-1})q_n\right)
\end{align*}
and the result follows.
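The inductive step of the proof can be mirrored directly in code. The sketch below computes the multiplier tuple both from the closed-form statement of Theorem 2.2.1 and by the recursive factorization used in the proof, and checks that they agree:

```python
from functools import reduce

def multipliers(dims):
    """Closed form of Theorem 2.2.1: the i-th multiplier is the product
    of all d_j with j != i."""
    total = reduce(lambda x, y: x * y, dims, 1)
    return [total // d for d in dims]

def multipliers_inductive(dims):
    """The same tuple, built as in the proof: factor off the last group,
    apply the two-factor case, and recurse on the first n-1 factors."""
    if len(dims) == 1:
        return [1]
    *head, dn = dims
    prod_head = reduce(lambda x, y: x * y, head, 1)
    # rho'^* multiplies the recursive tuple by d_n and appends d_1...d_{n-1}
    return [dn * c for c in multipliers_inductive(head)] + [prod_head]

for dims in ([2, 3], [2, 3, 5], [4, 4, 2, 6], [3, 5, 7, 2, 2]):
    assert multipliers(dims) == multipliers_inductive(dims)
```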

2.3 Quadratic Invariants of Quotient Groups

Let G be a split semisimple linear algebraic group with maximal torus T. If H is a subgroup of T which is normal in G, then we can consider the following short exact sequences:
\[
1 \to H \to G \to G/H \to 1,
\]

\[
1 \to H \xrightarrow{\ \mathrm{inc}\ } T \to T/H \to 1.
\]

When we consider the character groups of T and T /H we again get an exact sequence

\[
0 \to (T/H)^* \to T^* \xrightarrow{\ \mathrm{inc}^*\ } H^* \to 0,
\]

meaning that the character group (T/H)∗ is simply a subgroup of T∗; namely, it is the kernel of inc∗.

There are two Weyl groups at work here. Let W be the Weyl group of G with respect to T, acting on T∗, and let W′ be the analogous Weyl group for G/H, which acts on (T/H)∗. Since (T/H)∗ ⊆ T∗, it is also acted upon by W. However, these actions are the same, since W is isomorphic to W′.

Proof: We will use the definition of the Weyl group relative to a torus in terms of normalizers and centralizers. These are
\[
W = N_G(T)/C_G(T),
\]

\[
W' = N_{G/H}(T/H)\,\big/\,C_{G/H}(T/H).
\]
At this point we use that the normalizer (resp. centralizer) in the quotient is the quotient of the normalizer (resp. centralizer):

\[
N_{G/H}(T/H) = N_G(T)/H, \qquad C_{G/H}(T/H) = C_G(T)/H.
\]
Finally this means that
\[
W' = N_{G/H}(T/H)\,\big/\,C_{G/H}(T/H) = \left(N_G(T)/H\right)\big/\left(C_G(T)/H\right) \cong N_G(T)/C_G(T) = W.
\]

Hence the invariants S²((T/H)∗)^W are described by a simple intersection
\[
S^2\left((T/H)^*\right)^W = S^2\left((T/H)^*\right) \cap S^2(T^*)^W.
\]

In the case that G is a split semisimple group, we fully understand the structure of S²(T∗)^W. It is generated by the normalized Killing forms of the simple components, and so the subring of invariant quadratic forms S²((T/H)∗)^W will be generated by certain multiples of those generators. We now compute these invariants in examples where we consider quotients of our typical linear algebraic groups by central subgroups.

2.3.1 The SL-case

Let dim(V ) = n and consider the group SL(V ) with maximal torus T as in 1.4.7. This contains a subgroup isomorphic to the nth roots of unity,

µn ↪ SL(V), ζ ↦ diag(ζ, ..., ζ),

and so we have the short exact sequences

\[
1 \to \mu_n \to T \xrightarrow{\ \varphi\ } T/\mu_n \to 1
\]

\[
0 \to (T/\mu_n)^* \to T^* \xrightarrow{\ \varphi^*\ } \mu_n^* \to 0,
\]
where T∗ ≅ Z^{n−1}.

Remark 2.3.1. Since F is algebraically closed, µn ⊂ F for all n. The character group consists of the exponential maps, i.e.
\[
\mu_n^* = \left\{\sigma^d\colon \mu_n \to F,\ \zeta \mapsto \zeta^d \;\middle|\; 0 \le d \le n-1\right\} \cong \mathbb{Z}/n\mathbb{Z}.
\]

Now to understand the map ϕ∗, we describe how it acts on the generators of T ∗. Let χi be such a generator, and let ζ ∈ µn.

∗ k k k k ϕ (χi )(ζ) = χi (ϕ(ζ)) = χi (diag(ζ, . . . , ζ) = ζ . 2. TENSOR PRODUCT MAPS 67

Therefore ϕ∗(χ_i^k) = σ^k, which means that

\[
\varphi^*(\chi_1^{k_1} \cdots \chi_{n-1}^{k_{n-1}})(\zeta) = \zeta^{k_1 + \ldots + k_{n-1}},
\]
and so the map ϕ∗ expressed additively is
\[
\varphi^*\colon \mathbb{Z}^{n-1} \to \mathbb{Z}/n\mathbb{Z}, \qquad (x_1, x_2, \ldots, x_{n-1}) \mapsto \sum_{i=1}^{n-1} x_i \pmod{n}.
\]
Then, since the character group of the quotient torus is the kernel of ϕ∗, it consists of the points with coordinates summing to a multiple of n:
\[
(T/\mu_n)^* = \left\{(x_1, \ldots, x_{n-1}) \in T^* \;\middle|\; \sum_{i=1}^{n-1} x_i \in n\mathbb{Z}\right\}.
\]

Lemma 2.3.2. Let d ∈ Z and consider the subset
\[
P := \left\{\sum_{i=1}^{n} c_i e_i \in \mathbb{Z}^n \;\middle|\; \sum_{i=1}^{n} c_i \in d\mathbb{Z}\right\}.
\]
Then
\[
S^2(P) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} e_i e_j \in S^2(\mathbb{Z}^n) \;\middle|\; \text{(i), (ii)}\right\}
\]
where

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} \in d^2\mathbb{Z}\),

(ii) ci,n ≡ −c1,i − ... − ci−1,i − 2cii − ci,i+1 − ... − ci,n−1 (mod d) for 1 ≤ i ≤ n − 1.

Proof: P is generated by the elements {e_i − e_n, d e_n | 1 ≤ i ≤ n − 1}, and so S²(P) is generated by the products of pairs of these generators:
\[
S^2(P) = \left\langle e_i e_j - e_i e_n - e_j e_n + e_n^2,\ d e_i e_n - d e_n^2,\ d^2 e_n^2 \;\middle|\; 1 \le i \le j \le n-1 \right\rangle.
\]
Hence it is the subset of S²(Z^n) given by conditions (i) and (ii).
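One direction of Lemma 2.3.2 can be sanity-checked by brute force: every Z-combination of products of pairs of generators of P must satisfy conditions (i) and (ii), while a form outside S²(P) should fail them. A sketch for small n and d (coefficient indices 0-based, with e_n corresponding to index n − 1):

```python
import random

def sym_prod(u, w):
    """Coefficients c[(i,j)], i <= j, of the quadratic form (u.x)(w.x)."""
    n = len(u)
    return {(i, j): (u[i] * w[i] if i == j else u[i] * w[j] + u[j] * w[i])
            for i in range(n) for j in range(i, n)}

def satisfies_lemma(c, n, d):
    # (i): the sum of all coefficients lies in d^2 Z
    if sum(c.values()) % (d * d) != 0:
        return False
    # (ii): c_{i,n} + (row i without column n) + 2*c_{ii} = 0 (mod d)
    for i in range(n - 1):
        row = sum(c[(min(i, j), max(i, j))] for j in range(n - 1) if j != i)
        if (c[(i, n - 1)] + row + 2 * c[(i, i)]) % d != 0:
            return False
    return True

n, d = 5, 3
gens = [[1 if t == i else (-1 if t == n - 1 else 0) for t in range(n)]
        for i in range(n - 1)] + [[0] * (n - 1) + [d]]   # e_i - e_n, d*e_n

random.seed(0)
for _ in range(200):
    c = {(i, j): 0 for i in range(n) for j in range(i, n)}
    for _ in range(6):
        u, w = random.choice(gens), random.choice(gens)
        s = random.randint(-4, 4)
        for key, val in sym_prod(u, w).items():
            c[key] += s * val
    assert satisfies_lemma(c, n, d)

# e_1 e_2 - e_3 e_4 satisfies (i) but violates (ii), so it is not in S^2(P)
bad = {(i, j): 0 for i in range(n) for j in range(i, n)}
bad[(0, 1)] = 1
bad[(2, 3)] = -1
assert not satisfies_lemma(bad, n, d)
```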

Using the description of S²((T/µ_n)∗) as in 2.3.2, namely
\[
S^2\left((T/\mu_n)^*\right) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n-1} c_{ij} e_i e_j \in S^2(T^*) \;\middle|\; \text{(i), (ii)}\right\}
\]

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n-1} c_{ij} \in n^2\mathbb{Z}\),

(ii) ci,n−1 ≡ −c1,i − ... − 2cii − ... − ci,n−2 (mod n) for 1 ≤ i ≤ n − 2,

we can determine the ring of invariants S²((T/µ_n)∗)^W. The normalized Killing form for SL(V) is

\[
q = \sum_{\substack{i,j=1 \\ i \le j}}^{n-1} e_i e_j.
\]
Currently q has the property that

\[
\sum_{\substack{i,j=1 \\ i \le j}}^{n-1} c_{ij} = \sum_{\substack{i,j=1 \\ i \le j}}^{n-1} 1 = \frac{n^2 - n}{2}.
\]

Now if we compute the lowest common multiple of (n² − n)/2 and n², we get
\[
\operatorname{lcm}\left(\frac{n^2-n}{2},\ n^2\right) = \begin{cases} n^3 - n^2 & n \text{ even} \\ \dfrac{n^3-n^2}{2} & n \text{ odd} \end{cases}
\]
and so the least multiple of q which satisfies (i) is 2nq when n is even, and nq when n is odd. In both these cases, all coefficients of the multiple of q are divisible by n and so condition (ii) is satisfied trivially. Therefore
\[
S^2\left((T/\mu_n)^*\right)^W = \begin{cases} \mathbb{Z}\langle 2nq \rangle & n \text{ even} \\ \mathbb{Z}\langle nq \rangle & n \text{ odd.} \end{cases}
\]
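The lcm argument reduces to finding the smallest c ≥ 1 for which c·(n² − n)/2 is divisible by n². A short search confirms the parity dichotomy:

```python
def minimal_multiple(n):
    """Smallest c >= 1 such that c*q satisfies condition (i), i.e.
    c * (n^2 - n)/2 is divisible by n^2."""
    coeff_sum = (n * n - n) // 2
    c = 1
    while (c * coeff_sum) % (n * n) != 0:
        c += 1
    return c

# 2n for even n, n for odd n, as stated above
for n in range(2, 40):
    assert minimal_multiple(n) == (2 * n if n % 2 == 0 else n)
```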

2.3.2 The SO or Sp-case

Let dim(V ) = 2n and let G be either SO(V, Ω) or Sp(V, Ψ). In both these cases we have a short exact sequence

\[
1 \to \mu_2 \to G \to G/\mu_2 \to 1
\]

where µ2 = {±1} is identified with the diagonal matrices {±I}. Furthermore, we are able to choose the same maximal torus T for both groups, and both groups have the same normalized Killing form. Therefore we may consider both groups simultaneously with the short exact sequence

\[
1 \to \mu_2 \to T \to T/\mu_2 \to 1.
\]

This has dual sequence

\[
0 \to (T/\mu_2)^* \to T^* \xrightarrow{\ \varphi^*\ } \mu_2^* \to 0
\]
with T∗ ≅ Z^n and µ2∗ ≅ Z/2Z, where \(\varphi^*(x_1, \ldots, x_n) = \sum_{i=1}^{n} x_i \pmod 2\), and so

\[
(T/\mu_2)^* = \left\{\sum_{i=1}^{n} c_i e_i \;\middle|\; \sum_{i=1}^{n} c_i \in 2\mathbb{Z}\right\}.
\]
Again by 2.3.2, this means that
\[
S^2\left((T/\mu_2)^*\right) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} e_i e_j \in S^2(T^*) \;\middle|\; \text{(i), (ii)}\right\}
\]

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} \in 4\mathbb{Z}\),

(ii) ci,n ≡ c1,i + ... + 2cii + ... + ci,n−1 (mod 2) for 1 ≤ i ≤ n − 1.

The normalized Killing form of G is \(q = \sum_{i=1}^{n} e_i^2\), which has \(\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} = n\). The generator of the quotient's invariant quadratic forms then depends on the class of n (mod 4):
\[
S^2\left((T/\mu_2)^*\right)^W = \begin{cases} \mathbb{Z}\langle q \rangle & n \equiv 0 \pmod 4 \\ \mathbb{Z}\langle 4q \rangle & n \equiv 1 \pmod 4 \\ \mathbb{Z}\langle 2q \rangle & n \equiv 2 \pmod 4 \\ \mathbb{Z}\langle 4q \rangle & n \equiv 3 \pmod 4. \end{cases}
\]

When n ≡ 0 (mod 4) condition (ii) is satisfied by q since for each i, only cii = 1 is non-zero and so the condition requires that 0 ≡ 2 (mod 2). When n ≡ 1, 2, 3 (mod 4), the generator has even coefficients and so (ii) is satisfied trivially.
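The mod-4 classification amounts to finding the smallest c ≥ 1 with c·n divisible by 4, which a direct search confirms:

```python
def minimal_multiple(n):
    """Smallest c >= 1 with c*n divisible by 4: condition (i) for the
    multiple c*q of q = sum e_i^2, whose coefficient sum is n."""
    c = 1
    while (c * n) % 4 != 0:
        c += 1
    return c

expected = {0: 1, 1: 4, 2: 2, 3: 4}
for n in range(1, 50):
    assert minimal_multiple(n) == expected[n % 4]
```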

2.3.3 The Product of SLs

Here we consider quotients by central subgroups of multiple copies of SL. Let dim(V1) = n + 1, dim(V2) = m + 1. Set Ti ⊆ SL(Vi) to be the maximal tori and

consider the short exact sequence

\[
1 \to \mu_k \to T_1 \times T_2 \to (T_1 \times T_2)/\mu_k \to 1, \qquad \zeta \mapsto (\zeta I_{n+1},\ \zeta I_{m+1}),
\]

where k| gcd(n + 1, m + 1). We have the associated dual short exact sequence

\[
0 \to ((T_1 \times T_2)/\mu_k)^* \to T_1^* \times T_2^* \to \mu_k^* \to 0
\]
with T1∗ × T2∗ ≅ Z^{n+m} and µk∗ ≅ Z/kZ, where \(\sum_{i=1}^{n+m} c_i e_i \mapsto \sum_{i=1}^{n+m} c_i \pmod k\). So we have a description of ((T1 × T2)/µk)∗,
\[
((T_1 \times T_2)/\mu_k)^* = \left\{\sum_{i=1}^{n+m} c_i e_i \in \mathbb{Z}^{n+m} \;\middle|\; \sum_{i=1}^{n+m} c_i \equiv 0 \pmod k\right\}
\]

\[
= \left\langle e_i - e_{n+m},\ k e_{n+m} \;\middle|\; 1 \le i \le n+m-1 \right\rangle,
\]
and then by 2.3.2
\[
S^2\left(((T_1 \times T_2)/\mu_k)^*\right) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} e_i e_j \;\middle|\; \text{(i), (ii)}\right\}
\]

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} \in k^2\mathbb{Z}\),

(ii) ci,n+m ≡ −c1,i − ... − 2cii − ... − ci,n+m−1 (mod k) for 1 ≤ i ≤ n + m − 1.

Recall that the normalized Killing forms of SL(V1) and SL(V2) as elements of S²(T1∗ × T2∗)^W are
\[
q_1 = \sum_{\substack{i,j=1 \\ i \le j}}^{n} e_i e_j \qquad \text{and} \qquad q_2 = \sum_{\substack{i,j=n+1 \\ i \le j}}^{n+m} e_i e_j
\]
respectively. The invariants of the quotient are then elements d1q1 + d2q2 with d1, d2 ∈ Z such that (i) and (ii) are satisfied. Consider such a linear combination. We have

\[
\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} = \frac{(n+1)n}{2} d_1 + \frac{(m+1)m}{2} d_2,
\]

so condition (i) is equivalent to (n + 1)nd1 + (m + 1)md2 ≡ 0 (mod 2k²).

For condition (ii), first consider the case when 1 ≤ i ≤ n. Then ci,n+m = 0 and
\[
-c_{1,i} - \ldots - 2c_{ii} - \ldots - c_{i,n+m-1} = -c_{1,i} - \ldots - 2c_{ii} - \ldots - c_{i,n} - 0 = -(n+1)d_1 \equiv 0 \pmod k,
\]
with the last equivalence holding because k | gcd(n + 1, m + 1) and so k | n + 1.

For n + 1 ≤ i ≤ n + m − 1, ci,n+m = d2 and
\[
-c_{1,i} - \ldots - 2c_{ii} - \ldots - c_{i,n+m-1} = 0 - c_{n+1,i} - \ldots - 2c_{ii} - \ldots - c_{i,n+m-1} = -m d_2 \equiv -(-1)d_2 \equiv d_2 \pmod k,
\]
which shows that condition (ii) is trivially satisfied. Hence

\[
S^2\left(((T_1 \times T_2)/\mu_k)^*\right)^W = \left\{d_1 q_1 + d_2 q_2 \;\middle|\; (n+1)n d_1 + (m+1)m d_2 \equiv 0 \pmod{2k^2}\right\}.
\]

This condition is equivalent to the description of these invariants given in [BDZ, Proposition 6.3].
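The equivalence between condition (i) and the stated congruence can be checked exhaustively on small parameters. A sketch for n = 2, m = 5, k = 3 (so that k | gcd(n + 1, m + 1)):

```python
def coeff_sum(n, m, d1, d2):
    """Sum of the coefficients of d1*q1 + d2*q2, where q1 and q2 are the
    normalized Killing forms of SL(n+1) and SL(m+1)."""
    return d1 * (n + 1) * n // 2 + d2 * (m + 1) * m // 2

n, m, k = 2, 5, 3
for d1 in range(-20, 21):
    for d2 in range(-20, 21):
        # condition (i): coefficient sum lies in k^2 Z
        lhs = coeff_sum(n, m, d1, d2) % (k * k) == 0
        # the congruence stated above, modulo 2k^2
        rhs = ((n + 1) * n * d1 + (m + 1) * m * d2) % (2 * k * k) == 0
        assert lhs == rhs
```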

As a slight variation to example 2.3.3, we can consider the group µk embedded into T1 × T2 such that it is a subgroup of the kernel of the tensor product map. That is,
\[
1 \to \mu_k \to T_1 \times T_2 \to (T_1 \times T_2)/\mu_k \to 1, \qquad \zeta \mapsto (\zeta I_{n+1},\ \zeta^{-1} I_{m+1}),
\]
with dual sequence

\[
0 \to ((T_1 \times T_2)/\mu_k)^* \to T_1^* \times T_2^* \to \mu_k^* \to 0
\]
with T1∗ × T2∗ ≅ Z^{n+m} and µk∗ ≅ Z/kZ, where now \(\sum_{i=1}^{n+m} c_i e_i \mapsto \sum_{i=1}^{n} c_i - \sum_{i=n+1}^{n+m} c_i \pmod k\).

This changes the generators of the quotient torus,
\[
((T_1 \times T_2)/\mu_k)^* = \left\langle e_i + e_{n+m},\ e_j - e_{n+m},\ k e_{n+m} \;\middle|\; 1 \le i \le n,\ n+1 \le j \le n+m-1 \right\rangle,
\]
and changes the conditions on quadratic elements:
\[
S^2\left(((T_1 \times T_2)/\mu_k)^*\right) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} e_i e_j \;\middle|\; \text{(i), (ii)}\right\}
\]

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} - \sum_{i=1}^{n}\sum_{j=n+1}^{n+m} c_{ij} + \sum_{\substack{i,j=n+1 \\ i \le j}}^{n+m} c_{ij} \in k^2\mathbb{Z}\),

(ii) ci,n+m ≡ c1,i + ... + 2cii + ... + cin − ci,n+1 − ... − ci,n+m−1 (mod k) for 1 ≤ i ≤ n, and

ci,n+m ≡ c1,i + ... + cin − ci,n+1 − ... − 2cii − ... − ci,n+m−1 (mod k)

for n + 1 ≤ i ≤ n + m − 1.

Now the fixed elements are those d1q1 + d2q2 satisfying (i) and (ii). Considering such an element, we have that

\[
\sum_{\substack{i,j=1 \\ i \le j}}^{n} c_{ij} - \sum_{i=1}^{n}\sum_{j=n+1}^{n+m} c_{ij} + \sum_{\substack{i,j=n+1 \\ i \le j}}^{n+m} c_{ij} = \frac{(n+1)n}{2} d_1 - 0 + \frac{(m+1)m}{2} d_2,
\]

and so we recover the same condition: (n + 1)nd1 + (m + 1)md2 ≡ 0 (mod 2k²).

Furthermore, for 1 ≤ i ≤ n, ci,n+m = 0 and
\[
c_{1,i} + \ldots + 2c_{ii} + \ldots + c_{i,n} - c_{i,n+1} - \ldots - c_{i,n+m-1} = (n+1)d_1 - 0 \equiv 0 \pmod k,
\]
while for n + 1 ≤ i ≤ n + m − 1, ci,n+m = d2 and
\[
c_{1,i} + \ldots + c_{i,n} - c_{i,n+1} - \ldots - 2c_{ii} - \ldots - c_{i,n+m-1} = 0 - m d_2 \equiv d_2 \pmod k.
\]

Once again, condition (ii) is trivially satisfied, and so the ring of invariant quadratic forms is the same as before.

\[
S^2\left(((T_1 \times T_2)/\mu_k)^*\right)^W = \left\{d_1 q_1 + d_2 q_2 \;\middle|\; (n+1)n d_1 + (m+1)m d_2 \equiv 0 \pmod{2k^2}\right\}.
\]

2.3.4 The Product of SOs and Sps

Let dim(V1) = 2n, dim(V2) = 2m and take G1,G2 ∈ {SO, Sp}. Let Ti ⊆ Gi(Vi) be the maximal tori and consider

\[
1 \to \mu_2 \to T_1 \times T_2 \to (T_1 \times T_2)/\mu_2 \to 1, \qquad -1 \mapsto (-I_{2n},\ -I_{2m}),
\]

and dual

\[
0 \to ((T_1 \times T_2)/\mu_2)^* \to T_1^* \times T_2^* \to \mu_2^* \to 0
\]
with T1∗ × T2∗ ≅ Z^{n+m} and µ2∗ ≅ Z/2Z, where \(\sum_{i=1}^{n+m} c_i e_i \mapsto \sum_{i=1}^{n+m} c_i \pmod 2\),

meaning that

\[
((T_1 \times T_2)/\mu_2)^* = \left\langle e_i - e_{n+m},\ 2e_{n+m} \;\middle|\; 1 \le i \le n+m-1 \right\rangle,
\]
\[
S^2\left(((T_1 \times T_2)/\mu_2)^*\right) = \left\{\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} e_i e_j \;\middle|\; \text{(i), (ii)}\right\}
\]

(i) \(\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} \in 4\mathbb{Z}\),

(ii) ci,n+m ≡ c1,i + ... + 2cii + ... + ci,n+m−1 (mod 2) for 1 ≤ i ≤ n + m − 1.

Invariant quadratic forms of G1 × G2 are generated by the two normalized Killing forms, in this case
\[
q_1 = \sum_{i=1}^{n} e_i^2 \qquad \text{and} \qquad q_2 = \sum_{i=n+1}^{n+m} e_i^2.
\]

If an element d1q1 + d2q2 for d1, d2 ∈ Z satisfies (i) then

\[
\sum_{\substack{i,j=1 \\ i \le j}}^{n+m} c_{ij} = \sum_{i=1}^{n} c_{ii} + \sum_{i=n+1}^{n+m} c_{ii} = n d_1 + m d_2 \equiv 0 \pmod 4.
\]

Also for such an element, for all 1 ≤ i ≤ n + m − 1, ci,n+m = 0 and

c1,i + ... + 2cii + ... + ci,n+m−1 = 2cii ≡ 0 (mod 2).

With condition (ii) again being trivially satisfied, this leaves that

\[
S^2\left(((T_1 \times T_2)/\mu_2)^*\right)^W = \left\{d_1 q_1 + d_2 q_2 \;\middle|\; n d_1 + m d_2 \equiv 0 \pmod 4\right\}.
\]

Bibliography

[Baek] S. Baek. Chow Groups of Products of Severi-Brauer Varieties and Invariants of Degree 3. Trans. Amer. Math. Soc. 369 (2017), no. 3, 1757–1771.

[BDZ] S. Baek, R. Devyatov, and K. Zainoulline. The K-theory of Versal Flags and Cohomological Invariants of Degree 3. To appear in Documenta Mathematica (preprint available at arXiv:1612.07278).

[Bou] N. Bourbaki. Éléments de Mathématique, Groupes et Algèbres de Lie, Chapitres 4 à 6. Masson, Paris, 1981.

[BR] H. Bermudez and A. Ruozzi. Degree Three Cohomological Invariants of Groups that are Neither Simply-Connected nor Adjoint. J. Ramanujan Math. Soc. 29 (2014), no. 4, 465–481.

[Gar] S. Garibaldi. Orthogonal Involutions on Algebras of Degree 16 and the Killing Form of E8. Contemporary Mathematics, Volume 493, 2009.

[GMS] S. Garibaldi, A. Merkurjev, and J.-P. Serre. Cohomological Invariants in Galois Cohomology. University Lecture Series 28, AMS, Providence, RI, 2003.

[GQ-M] S. Garibaldi and A. Quéguiner-Mathieu. Restricting the Rost Invariant to the Center. St. Petersburg Math. J. 19 (2008), no. 2, 197–213.

[GZ] S. Garibaldi and K. Zainoulline. The Gamma-Filtration and the Rost Invariant. J. Reine Angew. Math. 696 (2014), 225–244.

[Hall] B.C. Hall. Lie Groups, Lie Algebras, and Representations (an elementary introduction, second edition). Springer, 2015.

[Hum72] J. Humphreys. Introduction to Lie Algebras and Representation Theory. Springer-Verlag New York Inc., 1972.

[Hum90] J. Humphreys. Reflection Groups and Coxeter Groups. Cambridge University Press, 1990.


[KMRT] M.-A. Knus, A. Merkurjev, M. Rost, and J.-P. Tignol. The Book of Involutions. American Mathematical Society, Providence, RI, 1998. With a preface in French by J. Tits.

[Mer] A. Merkurjev. Degree Three Cohomological Invariants of Semisimple Groups. J. Eur. Math. Soc. (JEMS), 18 (2016), no. 2, 657–680.

[MNZ] A. Merkurjev, A. Neshitov, and K. Zainoulline. Invariants of Degree 3 and Torsion in the Chow Group of a Versal Flag. Compositio Mathematica 151 (2015), 1416–1432.

[SR] W.F. Santos and A. Rittatore. Actions and Invariants of Algebraic Groups. CRC Press, 2005.