
Imperial College London

MSci Thesis

Polynomial Representations of the General Linear Group

Author: Misja F.A. Steinmetz
Supervisor: Dr. John R. Britnell

A thesis submitted in fulfilment of the requirements for the degree of Master in Science in the Algebra Section, Department of Mathematics

“This is my own work unless otherwise stated.”
Name: Misja F.A. Steinmetz
Date: June 2014
CID: 00643423

KNSM-laan 50 1019 LL Amsterdam the Netherlands [email protected] [email protected]

Abstract

The main goal of this project will be to describe and classify all irreducible characters of polynomial representations of GL_n(K) for an infinite field K. We achieve this goal in Theorem 4.5.1 at the end of Chapter 4.

Our journey towards this big theorem takes us past many interesting topics in algebra and representation theory. In Chapter 1 we will do some of the necessary groundwork: we will introduce the concepts of coalgebras and bialgebras. In Chapter 2 we will introduce finitary functions and coefficient functions (following Green’s nomenclature [9]). We will use results from Chapter 1 to deduce some initial consequences from these definitions. In Chapter 3 we will introduce the category M_K(n) of finite-dimensional left K GL_n(K)-modules which ‘afford’ polynomial representations. This category will be the main object of study in this and the next chapter. Next we introduce the Schur algebra S_K(n) and prove that left S_K-modules are equivalent to left modules in M_K. In Chapter 4 we introduce weights, weight spaces and formal characters. We use these results to prove our big theorem.

Finally, in Chapter 5 we will look at the rather long and explicit example of the irreducible characters of GL_2(F_q) to give the reader some feeling for dealing with the characters of GL_n(K) when K is a finite field rather than an infinite one. We will construct a complete character table for the aforementioned groups.

Contents


Introduction

1 Elementary Coalgebra Theory
  1.1 The Definition of a Coalgebra
    1.1.1 Examples of Coalgebras
  1.2 The Dual Algebra to a Coalgebra
  1.3 Homomorphisms of Coalgebras
  1.4 Subcoalgebras
  1.5 Comodules
  1.6 Bialgebras
  1.7 Definitions in Module Theory
    1.7.1 Extension of the Ground Field
    1.7.2 Absolute Irreducibility

2 Finitary Functions and Coefficient Functions
  2.1 Basic Representation Theory
  2.2 Finitary functions
    2.2.1 F is a K-coalgebra
  2.3 Coefficient Functions
  2.4 The category mod_A(KΓ)

3 Polynomial Representations and the Schur Algebra
  3.1 The Definition of M_K(n) and M_K(n, r)
  3.2 Examples of Polynomial Representations
  3.3 The Schur Algebra
  3.4 The map e : KΓ → S_K(n, r)
  3.5 The Module E^{⊗r}

4 Weights and Characters
  4.1 Weights
  4.2 Weight Spaces
    4.2.1 Examples of Weight Spaces
  4.3 First Results on Weight Spaces
  4.4 Characters
  4.5 Irreducible modules in M_K(n, r)


5 The irreducible characters of GL_2
  5.1 Conjugacy Classes of GL_2(F_q)
  5.2 Irreducible Characters of V, U_α and V_α
    5.2.1 The Characters of U_α and V_α
  5.3 The Characters of W_{α,β}
  5.4 The Characters of Ind ϕ

Conclusion
Acknowledgements

Introduction

“What fascinated me so extraordinarily in these investigations [representations of groups] was the fact that here, in the midst of a standstill that prevailed in other areas in the theory of forms, there arose a new and fertile chapter of algebra which is also important for number theory and analysis and which is distinguished by great beauty and perfection.” – Issai Schur ([4, p. xii]).

Issai Schur was an astoundingly brilliant 20th-century German-Jewish mathematician, whose life did not end well. He was born in Russia in 1875, but, having been educated in German for most of his life, he moved to the University of Berlin to study mathematics in 1894. Under the supervision of Frobenius and Fuchs, he obtained his PhD there in 1901 with a dissertation titled “Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen” [14], which translates loosely as “about a class of matrices, which can be categorised by a given matrix”. This dissertation contains an unusually large number of original ideas, and almost all important results in Chapter 3 and Chapter 4 of this project were first found in it. Schur became a professor in Berlin in 1919, but was forced to step down from his position under the Nazi regime. He died in 1941 in Tel Aviv, Palestine, after having lived the last years of his life there in poverty.

I decided to start this project with a short biography of Schur’s life because my main goal in this project is to present, prove and explain some of the most important results from Schur’s dissertation. More specifically, the main goal is to describe and classify all irreducible characters of GL_n(K) for an infinite field K. The approach I have taken here is inspired by, but not completely identical to, Schur’s approach. As the quotation above might suggest, we will come across some beautifully elegant parts of mathematics on our journey towards this goal, but the reader should be warned that elegant is not always a synonym for easy!

In Chapter 1 we do some of the algebraic groundwork needed in later chapters. We introduce the concepts of a coalgebra and a bialgebra from scratch. Thereafter we look at some of the theory surrounding these objects, involving concepts such as homomorphisms of coalgebras, subcoalgebras and comodules.

In Chapter 2 we introduce finitary functions. It should be noted that finitary functions appear under many different names in the literature (most notably as representative functions), but I have chosen to follow Green’s nomenclature here (see [9, p. 3]). We proceed in this chapter by using results from Chapter 1 to prove that the space of finitary functions is a K-bialgebra, and we show that the coefficient functions form a subcoalgebra.


In Chapter 3 we let Γ = GL_n(K) (again following Green’s notation) and we define polynomial functions on Γ. We introduce the categories M_K(n) and M_K(n, r) of ‘finite-dimensional left KΓ-modules which afford polynomial representations’, i.e. representations whose coefficient functions are polynomial functions. Next we introduce the Schur algebra S_K(n, r) (the attentive reader may be able to make a wild guess who this has been named after) and we show that left S_K(n, r)-modules are, in fact, equivalent to the modules in M_K(n, r). We conclude the chapter by considering the evaluation map e : KΓ → S_K(n, r) and the module E^{⊗r}, and use these to show that any module V ∈ M_K(n, r) is completely reducible.

In Chapter 4 we introduce weights and weight spaces. After having looked at some initial consequences of these definitions, such as the weight space decomposition of a module in M_K(n, r), we shift our focus to the study of formal characters. We study these characters using some of the theory of symmetric functions, and show that formal characters are very naturally linked to the ordinary characters we know from representation theory. At the end of this chapter we use our results to find all irreducible modules in M_K(n, r).

In Chapter 5 we move away from the heavy theory and adopt a more hands-on approach in constructing all irreducible characters of GL_2(F_q). This is intended to give the reader a feeling for how to deal with characters of GL_n(K), where K is a finite field rather than an infinite one. We construct the character table of GL_2(F_q) by first looking at some fairly standard characters and then by inducing characters from (large) subgroups. It should be noted that in 1955 Green ([7]) found abstract formulae for the characters of GL_n(K), where K is a finite field. However, explicitly constructing these characters often remains challenging.

Many different approaches to this subject are possible. The most famous approach is probably through the representation theory of the symmetric group S_n, which is less complicated to understand. Using the Schur functor and Schur-Weyl duality it is possible to establish a link between the irreducible representations of the symmetric and general linear groups (see e.g. [6]). It has always been my intention to write a project on the representation theory of GL_n(K), which is why I decided to take a more direct approach here. It is interesting to note, however, that the link between the representation theory of S_n and GL_n works both ways, so we could use the results of this project and the Schur functor to deduce many interesting results about the representation theory of the symmetric group (see e.g. [9, Chapter 6]).

The main references I have used for this project are Sweedler ([15]) for Chapter 1, Green ([9]) for Chapters 2, 3 and 4, and Fulton & Harris ([6]) for Chapter 5. In many places I have included my own proofs and examples, or I have adapted proofs to make the argument more lucid. I have always tried to indicate this clearly at the beginning of the proof. Any result which I have not explicitly stated to be my own working was taken from a source that should be clear from context.

Chapter 1

Elementary Coalgebra Theory

Before we embark on the beautiful and complicated theory of polynomial representations of GL_n, we need to do some groundwork first. Many of the proofs that we will use in later chapters rely heavily on a basic understanding of concepts like coalgebras and bialgebras. Because these concepts are most certainly not part of the standard undergraduate curriculum, I have decided to give a brief introduction to coalgebra theory in this chapter. The main reference I have used for this chapter is Sweedler’s book on Hopf algebras [15]. In this chapter I will sometimes use the word space instead of K-vector space, and map instead of K-linear map.

1.1 The Definition of a Coalgebra

Firstly, let us give a new, alternative definition of an algebra. It is not hard to check that the following definition is equivalent to the one we are familiar with.

Definition 1.1.1. Let K be a field. An algebra over K is a triple (A, M, u), where A is a K-vector space, M : A ⊗ A → A is a K-linear map called multiplication, and u : K → A is a K-linear map called the unit map, such that the following two conditions hold:

\[ M \circ (1_A \otimes M) = M \circ (M \otimes 1_A) \qquad \text{(associativity of } M\text{)} \]

\[ M \circ (u \otimes 1_A) : K \otimes A \to A \ \text{ and } \ M \circ (1_A \otimes u) : A \otimes K \to A \ \text{ are the natural isomorphisms} \qquad \text{(unitary property)} \]

where the map K ⊗ A → A is the natural isomorphism sending k ⊗ a ↦ ka, and similarly A ⊗ K → A is the natural isomorphism (see e.g. [2, p. 26]).

The upshot of this definition is that it immediately leads to the definition of a coalgebra by dualising, which is simply to ‘reverse all arrows.’

Definition 1.1.2. A K-coalgebra is a triple (C, Δ, ε) with C a K-vector space, Δ : C → C ⊗ C a K-linear map called diagonalisation or comultiplication, and ε : C → K a K-linear map called the augmentation or counit, such that the following two conditions hold:

\[ (1_C \otimes \Delta) \circ \Delta = (\Delta \otimes 1_C) \circ \Delta \qquad \text{(coassociativity)} \]

\[ (\epsilon \otimes 1_C) \circ \Delta \ \text{ and } \ (1_C \otimes \epsilon) \circ \Delta \ \text{ coincide with the natural isomorphisms } C \to K \otimes C \text{ and } C \to C \otimes K \qquad \text{(counitary property)} \]

where the maps C → C ⊗ K and C → K ⊗ C are the natural isomorphisms as before.

We can understand coassociativity better as (1_C ⊗ Δ) ∘ Δ = (Δ ⊗ 1_C) ∘ Δ. So, informally, we can say that once we have diagonalised once, it does not matter on which tensor factor we diagonalise next.

1.1.1 Examples of Coalgebras

1. Let S be a set and K be a field. We write KS for the set of all formal K-linear combinations of the elements of S; hence KS is a K-vector space with basis S. Now we define Δ : KS → KS ⊗ KS and ε : KS → K by

\[ \Delta : s \mapsto s \otimes s, \qquad \epsilon : s \mapsto 1 \in K, \qquad \text{for all } s \in S, \]

and extend these maps linearly. Then the triple (KS, Δ, ε) is a coalgebra, sometimes referred to as the group-like coalgebra on the set S.
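As a quick check (my own working, in the spirit of the text), both coalgebra axioms can be verified directly on a basis element s ∈ S, and then extended by linearity:

```latex
(1 \otimes \Delta)\Delta(s) = s \otimes s \otimes s = (\Delta \otimes 1)\Delta(s)
\quad \text{(coassociativity)},
\qquad
(\epsilon \otimes 1)\Delta(s) = 1 \otimes s \cong s \cong s \otimes 1
= (1 \otimes \epsilon)\Delta(s)
\quad \text{(counitary property)}.
```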

2. Let (S, ≤) be a partially ordered set which is locally finite (i.e. whenever x ≤ y, there are only finitely many z ∈ S such that x ≤ z ≤ y). The set (Z, ≤) is an example of a locally finite partially ordered set, but any such set will do.

Let T = {(x, y) ∈ S × S : x ≤ y} and let KT be as in the previous example. Then we can define Δ : KT → KT ⊗ KT and ε : KT → K by

\[ \Delta : (x, y) \mapsto \sum_{x \le z \le y} (x, z) \otimes (z, y), \qquad \epsilon : (x, y) \mapsto \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{if } x \ne y, \end{cases} \]

and extend these maps linearly to KT. One can check that (KT, Δ, ε) is a coalgebra.
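As a sanity check (my own working), the counitary property holds because ε kills every middle term of Δ(x, y):

```latex
(1 \otimes \epsilon)\Delta(x, y) = \sum_{x \le z \le y} (x, z) \otimes \epsilon(z, y)
= (x, y) \otimes 1 \cong (x, y),
```

and similarly (ε ⊗ 1)Δ(x, y) ≅ (x, y), since ε(x, z) = 0 unless z = x.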

1.2 The Dual Algebra to a Coalgebra

For a K-vector space V, let V^* = Hom_K(V, K) denote the linear dual. Following Sweedler’s notation [15, p. 7], we will usually write ⟨f, v⟩ instead of f(v), for f ∈ V^* and v ∈ V. We now recall from linear algebra that there is a linear injection given by

\[ \rho : V^* \otimes W^* \to (V \otimes W)^*, \qquad \langle \rho(f \otimes g), v \otimes w\rangle = \langle f, v\rangle\,\langle g, w\rangle, \]

for all f ∈ V ∗, g ∈ W ∗, v ∈ V, w ∈ W.

Furthermore, if L : V → W is a linear map, then as usual L^* : W^* → V^* denotes the unique map induced by ⟨L^*(f), v⟩ = ⟨f, L(v)⟩. Now let us take this discussion back to coalgebras. Suppose (C, Δ, ε) is a coalgebra. Then Δ : C → C ⊗ C and ε : C → K induce Δ^* : (C ⊗ C)^* → C^* and ε^* : K^* → C^*. We may define M : C^* ⊗ C^* → C^* to be the composite

\[ C^* \otimes C^* \xrightarrow{\ \rho\ } (C \otimes C)^* \xrightarrow{\ \Delta^*\ } C^* \]

and u : K → C∗ to be the composite

\[ K \xrightarrow{\ \phi^{-1}\ } K^* \xrightarrow{\ \epsilon^*\ } C^*, \]

where φ : K^* → K is the natural isomorphism sending f ↦ f(1_K); this is indeed an isomorphism since K is a field.

Proposition 1.2.1. [15, p. 9] The triple (C^*, M, u) is an algebra.

Proof. In analogy with our usual notation for multiplication, let us write c^*d^* for M(c^* ⊗ d^*), where c^*, d^* ∈ C^*. If Δ(c) = Σ_i c_i ⊗ d_i for c ∈ C, then one easily checks that

\[ \langle c^* d^*, c\rangle = \sum_i \langle c^*, c_i\rangle\,\langle d^*, d_i\rangle \]

and also, since 1 = u(1K ) by definition, that

\[ \langle 1, c\rangle = \epsilon(c). \]

From these facts it is straightforward to prove that C^* is an algebra. (Note that ε = 1_{C^*}.)
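To make Proposition 1.2.1 concrete (my own example): for the group-like coalgebra KS of Section 1.1.1, the dual algebra (KS)^* is the algebra of K-valued functions on S with pointwise multiplication, since Δ(s) = s ⊗ s gives

```latex
\langle fg, s\rangle = \langle f, s\rangle\,\langle g, s\rangle
\qquad \text{for all } f, g \in (KS)^*,\ s \in S,
```

and the unit is ε, the function taking the constant value 1 on S.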

If we are in the finite-dimensional case, then we can even dualise Proposition 1.2.1. Suppose (A, M, u) is an algebra with A finite dimensional. Then ρ : A^* ⊗ A^* → (A ⊗ A)^* is bijective, and we define Δ : A^* → A^* ⊗ A^* to be the composite

\[ A^* \xrightarrow{\ M^*\ } (A \otimes A)^* \xrightarrow{\ \rho^{-1}\ } A^* \otimes A^* \]
and ε : A^* → K to be the composite

\[ A^* \xrightarrow{\ u^*\ } K^* \xrightarrow{\ \phi\ } K, \]
with φ : K^* → K the natural isomorphism.

Proposition 1.2.2. [15, p. 11] If (A, M, u) is a finite-dimensional algebra, then (A^*, Δ, ε) is a coalgebra.

Proof. (Own working) I will show that (A^*, Δ, ε) satisfies the counitary property. Coassociativity is a little too tedious to write out here, but holds as well. Let f ∈ A^*. Then by construction Δ(f) lies in A^* ⊗ A^*, so we can write Δ(f) = ρ^{-1}M^*(f) = Σ_i c_i ⊗ d_i for some c_i, d_i ∈ A^*. For the counitary property we need to prove that (ε ⊗ 1) ∘ Δ = (1 ⊗ ε) ∘ Δ = id_{A^*}. For any a ∈ A, we find that

\begin{align*}
((\epsilon \otimes 1) \circ \Delta(f))(a) &= \Big((\epsilon \otimes 1)\big(\textstyle\sum_i c_i \otimes d_i\big)\Big)(a) = \sum_i \phi u^*(c_i)\, d_i(a) \\
&= \sum_i u^*(c_i)(1_K)\, d_i(a) = \sum_i c_i(u(1_K))\, d_i(a) = \sum_i \langle \rho(c_i \otimes d_i), 1_A \otimes a\rangle \\
&= \Big\langle \rho\big(\textstyle\sum_i c_i \otimes d_i\big), 1_A \otimes a\Big\rangle = \rho(\Delta(f))(1_A \otimes a) = \rho(\rho^{-1}M^*(f))(1_A \otimes a) \\
&= M^*(f)(1_A \otimes a) = f(a).
\end{align*}

Since this holds for any a ∈ A, we find that (ε ⊗ 1) ∘ Δ(f) ≡ f, and by an analogous argument we find that (1 ⊗ ε) ∘ Δ(f) ≡ f. Hence the counitary property is satisfied and (A^*, Δ, ε) is a coalgebra, as required.

1.3 Homomorphisms of Coalgebras

We first want to write the definition of a homomorphism of algebras in terms of commutative diagrams.

Definition 1.3.1. If A, B are algebras and f : A → B is a linear map, then f is an algebra map (morphism) when the following two conditions hold:

\[ f \circ M_A = M_B \circ (f \otimes f) \qquad \text{(multiplicative)} \]

\[ f \circ u_A = u_B \qquad \text{(unit preserving)} \]

For the definition of homomorphisms of coalgebras, we can just dualise this definition.

Definition 1.3.2. Let C, D be coalgebras and g : C → D a linear map. Then g is a coalgebra map (morphism) if the following two conditions hold:

\[ \Delta_D \circ g = (g \otimes g) \circ \Delta_C \qquad \text{and} \qquad \epsilon_D \circ g = \epsilon_C. \]

Proposition 1.3.3. [15, p. 14] If f : C → D is a coalgebra map, then f ∗ : D∗ → C∗ is an algebra map.

The first half of the following proof (f^* is multiplicative) I took from Sweedler’s book [15, p. 14], but the second half (f^* preserves the unit) is my own work.

Proof. Let us prove first that f^* is multiplicative. We will use the same notation as before, except that we will stop writing the monomorphism ρ and instead treat it as an inclusion. So we need to show that, for a^*, b^* ∈ D^* and c ∈ C, we have ⟨f^*(a^*b^*), c⟩ = ⟨f^*(a^*)f^*(b^*), c⟩. If we suppose Δ(c) = Σ_i c_i ⊗ d_i, then

\begin{align*}
\langle f^*(a^*b^*), c\rangle &= \langle a^*b^*, f(c)\rangle && \text{(by definition of } f^*\text{)} \\
&= \langle a^* \otimes b^*, \Delta f(c)\rangle && \text{(multiplication in the dual algebra)} \\
&= \big\langle a^* \otimes b^*, \textstyle\sum_i f(c_i) \otimes f(d_i)\big\rangle && (f \text{ is a coalgebra map}) \\
&= \textstyle\sum_i \langle a^*, f(c_i)\rangle\langle b^*, f(d_i)\rangle \\
&= \textstyle\sum_i \langle f^*(a^*), c_i\rangle\langle f^*(b^*), d_i\rangle && \text{(by definition of } f^*\text{)} \\
&= \langle f^*(a^*) \otimes f^*(b^*), \Delta(c)\rangle \\
&= \langle f^*(a^*)f^*(b^*), c\rangle
\end{align*}

So we have found that indeed f ∗ is multiplicative.

Now let us try to prove that f^* preserves the unit as well. To do this we need to prove that u_{C^*} ≡ f^* ∘ u_{D^*}, i.e. ε_C^*(φ(1_K)) ≡ f^*(ε_D^*(φ(1_K))), where φ : K → K^* is the natural isomorphism. If c ∈ C, then

\begin{align*}
\langle f^*\epsilon_D^*\phi(1_K), c\rangle &= \langle \epsilon_D^*\phi(1_K), f(c)\rangle && \text{(by definition of } f^*\text{)} \\
&= \langle \phi(1_K), \epsilon_D f(c)\rangle && \text{(by definition of } \epsilon_D^*\text{)} \\
&= \langle \phi(1_K), \epsilon_C(c)\rangle && (f \text{ is a coalgebra map}) \\
&= \langle \epsilon_C^*\phi(1_K), c\rangle && \text{(by definition of } \epsilon_C^*\text{)}
\end{align*}

Hence, since c ∈ C was arbitrary, we see that ε_C^*(φ(1_K)) ≡ f^*(ε_D^*(φ(1_K))), as required.

Proposition 1.3.4. If A, B are finite-dimensional algebras and f : A → B is an algebra map, then f^* : B^* → A^* is a coalgebra map.

For the sake of brevity I have omitted the proof of this proposition.

1.4 Subcoalgebras

Definition 1.4.1. Suppose C is a coalgebra and V a subspace with Δ(V) ⊆ V ⊗ V. Then (V, Δ|_V, ε|_V) is a coalgebra, and V is said to be a subcoalgebra.

Moreover, we see immediately that the inclusion map V ↪ C is a coalgebra map. Also notice that when we define a subalgebra, we need to add the condition that the unit lies in the subalgebra; for subcoalgebras, however, the counit takes care of itself.
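For example (my own remark), in the group-like coalgebra KS of Section 1.1.1 every subset T ⊆ S spans a subcoalgebra KT, since

```latex
\Delta(t) = t \otimes t \in KT \otimes KT \quad \text{for all } t \in T,
\qquad \text{so} \qquad \Delta(KT) \subseteq KT \otimes KT.
```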

Proposition 1.4.2. [15, p. 18] If f : C → D is a coalgebra map, then Im f is a subcoalgebra of D.

Proof. If c ∈ C, write Δ(c) = Σ_i c_i ⊗ d_i. Since f is a coalgebra map, we then have Δ(f(c)) = Σ_i f(c_i) ⊗ f(d_i). Therefore we find that Δ(Im f) ⊆ Im f ⊗ Im f, as required.

1.5 Comodules

As usual in this chapter, let us first try to write the definition of a module in terms of commutative diagrams. The following definition is easily checked to be equivalent to the definition we know already.

Definition 1.5.1. If A is an algebra, then we can define a left A-module as a space N and a map ψ : A ⊗ N → N such that the following two conditions hold:

\[ \psi \circ (u \otimes 1_N) : K \otimes N \to N \ \text{ is the natural isomorphism onto } N, \]

\[ \psi \circ (M \otimes 1_N) = \psi \circ (1_A \otimes \psi) : A \otimes A \otimes N \to N. \]

We usually write a · n instead of ψ(a ⊗ n).

Now we are in a position to dualise this definition to obtain the definition of a right comodule.

Definition 1.5.2. If C is a coalgebra, we define a right C-comodule to be a space M together with a map ω : M → M ⊗ C (called the comodule structure map of M) such that the following two conditions hold:

\[ (1_M \otimes \epsilon) \circ \omega : M \to M \otimes K \ \text{ is the natural isomorphism}, \]

\[ (\omega \otimes 1_C) \circ \omega = (1_M \otimes \Delta) \circ \omega : M \to M \otimes C \otimes C. \]

As a straightforward example of a comodule, one realises that C itself is a right C-comodule with structure map Δ.
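Indeed (spelling this out, as the text leaves it to the reader), with ω = Δ the two comodule axioms are precisely the counitary property and the coassociativity of C:

```latex
(1_C \otimes \epsilon) \circ \Delta \cong \mathrm{id}_C
\quad \text{(counitary property)}, \qquad
(\Delta \otimes 1_C) \circ \Delta = (1_C \otimes \Delta) \circ \Delta
\quad \text{(coassociativity)}.
```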

Definition 1.5.3. If M is a right comodule and N ⊆ M with ω(N) ⊆ N ⊗ C, then N is a subcomodule.

Definition 1.5.4. Let M, N be right comodules. We say that f : M → N is a comodule map if the following condition holds:

\[ \omega_N \circ f = (f \otimes 1_C) \circ \omega_M. \]

It is not difficult to verify that this is dual to the definition of maps between modules.

1.6 Bialgebras

We will shortly give a definition of a bialgebra, which intuitively is something that is an algebra and a coalgebra at the same time. But before we can make this rigorous, we will need the following proposition.

Proposition 1.6.1. [15, p. 51] Suppose (H, M, u) is an algebra and (H, Δ, ε) is a coalgebra. Then the following are equivalent:

1. M and u are coalgebra maps;

2. Δ and ε are algebra maps;

3. (a) Δ(1) = 1 ⊗ 1;
(b) Δ(gh) = Σ_{i,j} c_i^g c_j^h ⊗ d_i^g d_j^h (where Δ(g) = Σ_i c_i^g ⊗ d_i^g etc.);
(c) ε(1) = 1; and
(d) ε(gh) = ε(g)ε(h).

Proof. We immediately see that conditions 2 and 3 are equivalent, as condition 3 is just a restatement of the conditions for Δ and ε to be algebra maps. For the equivalence of conditions 1 and 2 we consider the following four identities a) to d):

a) \[ \Delta \circ M = (M \otimes M) \circ (1_H \otimes T \otimes 1_H) \circ (\Delta \otimes \Delta) \]

(where T : U ⊗ V → V ⊗ U is the bilinear ‘twist’ map, i.e. T (u ⊗ v) = v ⊗ u) Chapter 1. Elementary Coalgebra Theory 11

b) \[ \Delta \circ u = (u \otimes u) \circ \iota, \ \text{where } \iota : K \to K \otimes K \text{ is the natural isomorphism} \]

c) \[ \epsilon \circ M = m \circ (\epsilon \otimes \epsilon), \ \text{where } m : K \otimes K \to K \text{ is the natural isomorphism} \]

d) \[ \epsilon \circ u = 1_K. \]

Identities a) and b) say exactly that Δ is an algebra map, whereas c) and d) say that ε is an algebra map. On the other hand, a) and c) hold if and only if M is a coalgebra map, and b) and d) hold if and only if u is a coalgebra map. Thus condition 1 is equivalent to condition 2.

Definition 1.6.2. Any system (H, M, u, Δ, ε) which satisfies the equivalent conditions above is called a bialgebra, and is denoted (H, M, u, Δ, ε) or simply H.
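A standard example (my own remark at this point in the text): for a group G, the group algebra KG, equipped with the group-like coalgebra structure of Section 1.1.1, is a bialgebra; condition 3 of Proposition 1.6.1 is immediate on basis elements g, h ∈ G:

```latex
\Delta(gh) = gh \otimes gh = (g \otimes g)(h \otimes h) = \Delta(g)\Delta(h),
\qquad \Delta(1) = 1 \otimes 1,
\qquad \epsilon(gh) = 1 = \epsilon(g)\epsilon(h).
```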

Definition 1.6.3. A subspace A of a bialgebra H is called a subbialgebra of H if A is simultaneously a subalgebra of H and a subcoalgebra of H.

Definition 1.6.4. A linear map between bialgebras is a bialgebra map if it is simul- taneously an algebra map and a coalgebra map.

1.7 Definitions in Module Theory

Later in this project, in particular in Chapter 4, we will use some more advanced module theory to prove results about the polynomial representations of GL_n. Hence, for completeness, I will give a review here of some of the definitions that we will use later on. For a more extensive discussion of this topic see [5, pp. 198-205].

1.7.1 Extension of the Ground Field

Definition 1.7.1. Let A be an algebra over the field K, and let L be any extension field of K. We can introduce the algebra

\[ A^L = A \otimes_K L, \]

which is an algebra over L. We can think of A^L as the set of L-linear combinations Σ_i l_i a_i of the elements of A, where addition and multiplication by scalars in L are defined in the natural way.

Definition 1.7.2. Completely analogously we can construct the extended module V^L = V ⊗_K L of a left A-module V. Then V^L naturally becomes an A^L-module via the multiplication rule
\[ \Big(\sum_i \beta_i a_i\Big)\Big(\sum_j \gamma_j v_j\Big) = \sum_{i,j} (\beta_i \gamma_j)(a_i v_j), \]
for all β_i, γ_j ∈ L, a_i ∈ A and v_j ∈ V.

1.7.2 Absolute Irreducibility

Definition 1.7.3. Let K be a field, A a K-algebra and V an irreducible A-module (i.e. it has only trivial submodules). We call V absolutely irreducible if V L is an irreducible AL-module for every extension field L of K.

Theorem 1.7.4. An irreducible A-module V is absolutely irreducible if and only if Hom_A(V, V) ≅ K, that is, if and only if the only A-endomorphisms of V are left multiplications by elements of K.
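A simple example (my own, along standard lines) of an irreducible module that is not absolutely irreducible: take K = R and A = C, regarded as an R-algebra, acting on V = C. Then V is irreducible, but Hom_A(V, V) ≅ C ≠ R, and indeed

```latex
V^{\mathbb{C}} = \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C},
```

which decomposes as a direct sum of two proper submodules, so irreducibility is lost on extending scalars to C.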

Because this is not terribly relevant to my project, I have omitted the proof of this result. For a proof see [5, p. 202].

Chapter 2

Finitary Functions and Coefficient Functions

2.1 Basic Representation Theory

Now that we have discussed some coalgebra theory, which we will need later on, it is time to turn to representation theory: the heart of this project. Before we delve into the details of the theory of polynomial representations of GL_n, I want to briefly go over some basic definitions and results from representation theory.

Definition 2.1.1. Let Γ be a group and V a vector space over a field K. Then a representation τ of Γ is a map τ :Γ → EndK (V ), which satisfies τ(gh) = τ(g)τ(h) for all g, h ∈ Γ.

We recall that the group algebra of Γ over the field K is given by all finite formal K-linear combinations of the elements of Γ; its elements are given by κ of the form κ = Σ_{g∈Γ} κ_g g, where the set {g ∈ Γ : κ_g ≠ 0} is finite. Following Green’s notation [9, p. 2], we will denote this algebra by KΓ. We can extend τ linearly to get a map τ : KΓ → End_K(V). Note that this map satisfies τ(κ + λ) = τ(κ) + τ(λ) and τ(κλ) = τ(κ)τ(λ) for all κ, λ ∈ KΓ.

Proposition 2.1.2. [9, p. 2] A representation τ : Γ → End_K(V) is equivalent to a left KΓ-module (V, τ) via the multiplication rule κv = τ(κ)v for all κ ∈ KΓ and v ∈ V.

Proof. (Own working) We only need to check here that, given a representation, we obtain a valid KΓ-module by the multiplication rule stated in the proposition and vice versa. Suppose first that τ is a representation. Then we easily see that, for all κ, λ ∈ KΓ and v, w ∈ V,

1. κ(v + w) = τ(κ)(v + w) = τ(κ)v + τ(κ)w = κv + κw;

2. (κ + λ)v = τ(κ + λ)v = (τ(κ) + τ(λ))v = κv + λv;

3. (κλ)v = τ(κλ)v = (τ(κ)τ(λ))v = τ(κ)(τ(λ)v) = κ(λv);

4. 1_Γ v = τ(1_Γ)v = 1_V(v) = v.

Hence (V, τ) is a left KΓ-module.

Conversely, suppose that we are given a left KΓ-module V with a multiplication map KΓ × V → V . Then we set τ(g)v = gv for all g ∈ Γ and v ∈ V . We immediately see that τ(gh)v = (gh)v = g(hv) = τ(g)τ(h)v for all v ∈ V. It also follows easily that τ(g) ∈ EndK (V ), hence τ is a representation of Γ.

This proposition seems trivial but is very important for the rest of this project. A consequence will be that we can look at the left KΓ-modules instead of the representations of Γ. We will state here without proof that concepts from the world of representations translate naturally to the world of modules; for example, a subrepresentation naturally gives a submodule and vice versa.
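To make Proposition 2.1.2 tangible, here is a small computational sketch (my own, not from the thesis; all helper names such as `tau`, `conv`, `TAU` and `MUL` are hypothetical). It realises the swap representation of the cyclic group C2 on Q^2, extends τ linearly to the group algebra, and checks the module axiom (κλ)v = κ(λv) on sample elements:

```python
from itertools import product

def mat_vec(A, v):
    """Apply a 2x2 matrix to a vector."""
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

TAU = {'e': [[1, 0], [0, 1]], 'g': [[0, 1], [1, 0]]}  # tau on the group elements of C2
MUL = {('e', 'e'): 'e', ('e', 'g'): 'g', ('g', 'e'): 'g', ('g', 'g'): 'e'}  # C2 multiplication table

def tau(kappa):
    """Linear extension of tau to a group-algebra element kappa = {group element: coefficient}."""
    out = [[0, 0], [0, 0]]
    for s, c in kappa.items():
        for i, j in product(range(2), range(2)):
            out[i][j] += c * TAU[s][i][j]
    return out

def conv(kappa, lam):
    """Multiplication (convolution) in the group algebra K[C2]."""
    out = {'e': 0, 'g': 0}
    for s, c in kappa.items():
        for t, d in lam.items():
            out[MUL[(s, t)]] += c * d
    return out

kappa, lam, v = {'e': 2, 'g': -1}, {'e': 1, 'g': 3}, [5, 7]
lhs = mat_vec(tau(conv(kappa, lam)), v)           # (kappa * lam) . v
rhs = mat_vec(tau(kappa), mat_vec(tau(lam), v))   # kappa . (lam . v)
assert lhs == rhs
```

The final assertion is exactly rule 3 from the proof above, verified numerically for one choice of κ, λ and v.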

2.2 Finitary functions

Definition 2.2.1. We denote by K^Γ the space of all maps Γ → K. With multiplication and addition defined pointwise (i.e. fg : x ↦ f(x)g(x)), this space forms a commutative K-algebra.

Definition 2.2.2. Since Γ is a group, we can define the following two K-algebra maps:

\[ \Delta : K^\Gamma \to K^{\Gamma \times \Gamma}, \quad (\Delta f)(s, t) = f(st); \qquad \epsilon : K^\Gamma \to K, \quad f \mapsto f(1_\Gamma) \in K. \]

Definition 2.2.3. We call f ∈ K^Γ finitary if Δf ∈ K^Γ ⊗ K^Γ, where we consider K^Γ ⊗ K^Γ as a subset of K^{Γ×Γ}. We denote the space of finitary functions f : Γ → K by F = F(K^Γ).

Note that saying f is finitary is equivalent to saying that there exist f_h, f_h' ∈ K^Γ such that Δf = Σ_h f_h ⊗ f_h', with h running over some finite index set, which in turn is equivalent to saying that f(st) = Σ_h f_h(s) f_h'(t) for all (s, t) ∈ Γ × Γ.

Proposition 2.2.4. [9, p. 3] The space of finitary functions F = F(K^Γ) is a K-algebra.

Proof. (Own working) Let us show that F is a K-subalgebra of K^Γ. It is most certainly a vector subspace of K^Γ since, if f, g ∈ F, then Δ(f + g) = Δf + Δg, and therefore Δ(f + g) = Σ_h (f_h ⊗ f_h') + Σ_i (g_i ⊗ g_i'). Hence f + g is finitary. But F is also closed under (pointwise) multiplication: if f, g ∈ F, then fg(st) = f(st)g(st) = (Σ_i f_i(s) f_i'(t))(Σ_j g_j(s) g_j'(t)). Expanding the product, there exist functions h_k, h_k' ∈ K^Γ such that this last expression equals Σ_k h_k(s) h_k'(t) for all s, t ∈ Γ, where k runs over a finite index set. Hence fg ∈ F, and F is a K-subalgebra of K^Γ.
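A motivating example (my own, anticipating Section 2.3): if τ : Γ → GL_n(K) is a matrix representation, then each entry function τ_ab : g ↦ τ(g)_ab is finitary, since τ(st) = τ(s)τ(t) gives

```latex
\tau_{ab}(st) = \sum_{c=1}^{n} \tau_{ac}(s)\,\tau_{cb}(t),
\qquad \text{i.e.} \qquad
\Delta\tau_{ab} = \sum_{c=1}^{n} \tau_{ac} \otimes \tau_{cb} \in K^\Gamma \otimes K^\Gamma.
```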

2.2.1 F is a K-coalgebra

We can prove an even stronger result: F = F(K^Γ) is in fact a K-bialgebra. To get to this result we have to show that (F, Δ, ε) is a K-coalgebra. The proof of this result is much more delicate than the proof that F is a K-algebra, and we need to do some work, in the form of definitions and propositions, before we can prove this remarkable fact.

Definition 2.2.5. We have a left and right action of Γ on K^Γ given by

\[ (x \cdot f)(y) = f(xy) \quad \text{and} \quad (f \cdot x)(y) = f(yx), \qquad \text{for all } f \in K^\Gamma \text{ and } x, y \in \Gamma. \]

This action extends naturally to a left and right action of the group algebra KΓ on K^Γ.

The left and right actions commute, and they turn K^Γ into a two-sided KΓ-module. We will denote the left, right and two-sided KΓ-modules generated by f ∈ K^Γ by KΓf, KfΓ and KΓfΓ respectively.

Proposition 2.2.6. [1, p. 71] The following conditions are equivalent:

(i) dim KΓf < ∞;

(ii) dim KfΓ < ∞;

(iii) dim KΓfΓ < ∞.

Proof. i ⇒ ii: Suppose dim KΓf < ∞ and let {f1, . . . , fn} be a basis. Then for every x, y ∈ Γ we can write

\[ (x \cdot f)(y) = \sum_{i=1}^{n} g_i(x)\, f_i(y) \qquad \text{for some functions } g_i : \Gamma \to K. \]
Since (x · f)(y) = f(xy) = (f · y)(x), we see that

\[ x \cdot f = \sum_{i=1}^{n} g_i(x)\, f_i, \qquad f \cdot y = \sum_{i=1}^{n} f_i(y)\, g_i. \]
Extending everything linearly to x, y ∈ KΓ, we see that KfΓ is contained in the K-linear span of {g_1, . . . , g_n}, and hence dim KfΓ < ∞. The implication ii ⇒ i follows in a similar fashion.

Moreover, iii ⇒ i, ii follows trivially since x · f = x · f · 1_Γ and f · y = 1_Γ · f · y. What is left for us to show is that i ⇒ iii. Suppose, again, that KΓf is finite dimensional and let {f_1, . . . , f_n} be a basis. Then f_i ∈ KΓf, so f_i = x · f for some x ∈ KΓ. Therefore y · f_i = yx · f for all y ∈ KΓ, and hence dim KΓf_i < ∞. Now, by i ⇒ ii, also dim Kf_iΓ < ∞. We let KΓ act on the right of the basis vectors of KΓf; then we get a finite-dimensional KΓ-module. By linearity, it follows that dim KΓfΓ < ∞, which completes the proof.

Definition 2.2.7. We define the K-linear map π : K^Γ ⊗ K^Γ → K^{Γ×Γ} by

\[ \pi(f \otimes g)(x, y) = f(x)\,g(y) \qquad \text{for all } f, g \in K^\Gamma \text{ and } x, y \in \Gamma, \]

extended linearly to the whole of K^Γ ⊗ K^Γ.

Proposition 2.2.8. [1, p. 71] The map π is injective.

Proof. Suppose π(Σ_{i=1}^n f_i ⊗ g_i) ≡ 0. Since we can write a general element of K^Γ ⊗ K^Γ as a K-linear combination of elements h_i ⊗ h_j, where the h_i are basis elements of K^Γ, we may assume without loss of generality that g_1, . . . , g_n are linearly independent over K. Now π(Σ_{i=1}^n f_i ⊗ g_i) = 0 implies that Σ_{i=1}^n f_i(x) g_i(y) = 0 for all x, y ∈ Γ. So, for each fixed x ∈ Γ, the function Σ_{i=1}^n f_i(x) g_i : Γ → K must be the zero function. Since the g_i are linearly independent over K by assumption, this implies that f_i(x) = 0 for all x ∈ Γ and 1 ≤ i ≤ n. Therefore Σ_{i=1}^n f_i ⊗ g_i ≡ 0, as required.

Note that the map π gives us a rigorous (and natural) way of thinking of K^Γ ⊗ K^Γ as lying inside K^{Γ×Γ}. We hinted at this fact before, in the definition of a finitary function, but we need to make it precise now. At this point recall the K-algebra map Δ : K^Γ → K^{Γ×Γ} defined earlier by Δf(x, y) = f(xy) for all f ∈ K^Γ and x, y ∈ Γ, and note that we can reformulate our definition of a finitary function.

Definition 2.2.9. We say that the function f : Γ → K is finitary if Δf ∈ π(K^Γ ⊗ K^Γ).

A third equivalent definition to the concept of a finitary function is given by the following proposition.

Proposition 2.2.10. [1, p. 72] We have Δf ∈ π(K^Γ ⊗ K^Γ) if and only if dim KΓf < ∞.

Proof. ⇒: If Δf ∈ π(K^Γ ⊗ K^Γ), then we can write Δf(x, y) = f(xy) = Σ_{i=1}^n g_i(x) h_i(y). We see immediately that x · f = Σ_{i=1}^n g_i(x) h_i, hence the span of the functions x · f is contained in the span of the functions h_i, so by linearity dim KΓf < ∞.

⇐: Suppose dim KΓf < ∞ and let {f_1, . . . , f_n} be a basis of KΓf. Then x · f = Σ_{i=1}^n g_i(x) f_i for some functions g_i : Γ → K and all x ∈ Γ. Therefore Δf(x, y) = f(xy) = (x · f)(y) = Σ_{i=1}^n g_i(x) f_i(y) for all x, y ∈ Γ, and hence Δf = π(Σ_{i=1}^n g_i ⊗ f_i), as required.

This proposition together with Proposition 2.2.6 gives us a whole set of equivalent definitions for a function to be finitary. Armed with all these definitions, let us now finally prove that the set of finitary functions F = F(K^Γ) is in fact a K-coalgebra with Δ as the comultiplication map. By the previous chapter, this is proved by the following theorem.

Theorem 2.2.11. [1, p. 72] We have ΔF(K^Γ) ⊆ π(F(K^Γ) ⊗ F(K^Γ)).

Before we can go on to prove this, we need the following lemma from linear algebra. For a proof of this lemma see, for example, [1, pp. 72-73].

Lemma 2.2.12. Let S be a set and let V be a finite dimensional K-linear subspace of S K . Then it is possible to pick a basis {f1, . . . , fn} for V and a subset {s1, . . . , sn} of S such that the condition fi(sj) = δij is satisfied, where δ is the Kronecker delta.

Proof of Theorem 2.2.11. Let f ∈ F(K^Γ) and set V_f = KΓfΓ. Propositions 2.2.6 and 2.2.10 imply that dim V_f < ∞. Hence we can pick a basis {f_1, . . . , f_n} of V_f and a subset {x_1, . . . , x_n} of Γ such that f_i(x_j) = δ_ij by Lemma 2.2.12. As before

∆f(x, y) = f(xy) = (f · y)(x) = ∑_{i=1}^n f_i(x)g_i(y),

for some functions g_i ∈ K^Γ. As in the proof of Proposition 2.2.6, we immediately see that dim KΓf_i < ∞ for all i. We can conclude that f_i is finitary for all i. It remains for us to show that the g_i are also finitary. We find that

∆f(x_j, y) = f(x_j y) = (x_j · f)(y) = ∑_i f_i(x_j)g_i(y) = ∑_i δ_ij g_i(y) = g_j(y).

So the functions g_j lie in KΓf and therefore g_i ∈ KΓfΓ for all i, which implies that dim KΓg_i < ∞ for all i and the g_i are finitary too. So ∆f ∈ π(F(K^Γ) ⊗ F(K^Γ)), as required.

We have shown that (F, ∆, ε) is a K-coalgebra. We showed before that F is a K-algebra and that ∆ and ε are K-algebra maps as well, so in fact F is a K-bialgebra. This is an important result that we will use repeatedly in the following chapter.

2.3 Coefficient Functions

The reason finitary functions are important is that they appear as the coefficient functions of finite-dimensional representations of Γ, which will be very important in this project. Let us, however, first explain what coefficient functions are.

Definition 2.3.1. Suppose V is a finite-dimensional K-vector space with basis {v_b : b ∈ B} and suppose we have a representation τ : Γ → End_K(V) (or equivalently a left KΓ-module (V, τ)). Then we define the coefficient functions r_ab : Γ → K (for a, b ∈ B) of (V, τ) by

τ(g)v_b = gv_b = ∑_{a∈B} r_ab(g)v_a, for all g ∈ Γ, b ∈ B.

Definition 2.3.2. The space spanned by the coefficient functions is called the coefficient space of (V, τ) and it is denoted cf(V). We therefore obtain the equation cf(V) = ∑_{a,b} K · r_ab.

Proposition 2.3.3. [9, p. 4] The coefficient space cf(V) is independent of the chosen basis {v_b : b ∈ B} of V.

Proof. (Own working) Suppose we are given two bases {v_i : i ∈ {1, . . . , n}} and {w_i : i ∈ {1, . . . , n}} of our finite-dimensional vector space V. Let us denote the coefficient functions with respect to {v_i} as r_ab and those with respect to {w_i} as s_ab for a, b ∈ {1, . . . , n}. We can define the change-of-basis matrix (a_ij) such that w_i = ∑_j a_ji v_j and its inverse (b_ij) = (a_ij)^{−1}. We find that

τ(g)v_i = τ(g)(∑_{j=1}^n b_ji w_j) = ∑_{j=1}^n b_ji τ(g)w_j = ∑_{j=1}^n ∑_{k=1}^n b_ji s_kj(g) w_k
= ∑_{j,k=1}^n b_ji s_kj(g)(∑_{l=1}^n a_lk v_l) = ∑_{j,k,l=1}^n b_ji s_kj(g) a_lk v_l, for all g ∈ Γ.

So we find that r_li(g) = ∑_{j,k=1}^n b_ji a_lk s_kj(g) for all g ∈ Γ. Hence the coefficient space obtained by spanning the r_ab is contained in the coefficient space obtained by spanning

the s_ab. By swapping the roles of r_ab and s_ab in the argument we see that the latter coefficient space is contained in the former as well; hence the coefficient spaces are equal, as required.

Definition 2.3.4. The matrix R = (r_ab) is called the invariant matrix.

Proposition 2.3.5. [9, p. 4] The invariant matrix gives a matrix representation of Γ.

Proof. (Own working) Suppose g, h ∈ Γ. We need to check that R(gh) = R(g)R(h). But τ(gh) = τ(g)τ(h) implies that r_ab(gh) = ∑_{c∈B} r_ac(g)r_cb(h). If, however, we are multiplying out general n × n matrices (a_ij)(b_ij) = (c_ij), then c_ij = ∑_k a_ik b_kj. So, by the summation we found, we indeed have that (r_ab(gh)) = (r_ab(g))(r_ab(h)).

Another important way to formulate the above proposition is by writing the formula we found in the proof for r_ab(gh) as ∆r_ab = ∑_{c∈B} r_ac ⊗ r_cb. It immediately follows that the coefficient functions are finitary and therefore cf(V) is a subspace of F = F(K^Γ). We can even conclude that cf(V) forms a K-subcoalgebra of F, as ∆cf(V) ⊆ cf(V) ⊗ cf(V).
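As a quick computational sanity check (own working, not taken from [9]): the relation ∆r_ab = ∑_c r_ac ⊗ r_cb, i.e. r_ab(gh) = ∑_c r_ac(g)r_cb(h), can be verified directly for a small group. The Python sketch below does this for Γ = S_3 with its permutation representation.

```python
from itertools import permutations

# Gamma = S_3; a permutation p is stored as a tuple with p[i] = image of i.
Gamma = list(permutations(range(3)))

def compose(g, h):
    """Group multiplication: (g h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

def r(a, b, g):
    """Coefficient functions of the permutation representation
    tau(g) v_b = v_{g(b)}: r_ab(g) = 1 if g(b) = a, else 0."""
    return 1 if g[b] == a else 0

# Delta r_ab = sum_c r_ac (x) r_cb, evaluated at (g, h):
# r_ab(gh) = sum_c r_ac(g) r_cb(h) for all g, h and all a, b.
ok = all(
    r(a, b, compose(g, h)) == sum(r(a, c, g) * r(c, b, h) for c in range(3))
    for g in Gamma for h in Gamma for a in range(3) for b in range(3)
)
```

Here each r_ab takes only finitely many values, so the span KΓ·r_ab is visibly finite-dimensional, in line with the discussion above.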

2.4 The category modA(KΓ)

Definition 2.4.1. If S is a K-algebra, then we denote the category of all finite-dimensional left S-modules by mod(S).

Now we can finally properly define what mod_A(KΓ) means. Let us first suppose that we are given a subcoalgebra A of F = F(K^Γ). This simply means that we pick a K-subspace A of F such that ∆A ⊆ A ⊗ A.

Definition 2.4.2. Let A be as above. We define mod_A(KΓ) to be the full subcategory of mod(KΓ) of all left KΓ-modules (V, τ) such that cf(V) ⊆ A. We call a left KΓ-module (V, τ) A-rational if cf(V) ⊆ A. So mod_A(KΓ) is the category of left A-rational finite-dimensional KΓ-modules.

Chapter 3

Polynomial Representations and the Schur Algebra

In this chapter we will give a first definition of polynomial representations of GLn(K) for an infinite field K and we will define the Schur algebra. Our main goal in this chapter will be to derive some initial results for the Schur algebra. We will use these results in the next chapter to find all irreducible polynomial representations of GLn(K), which is the main goal of this project.

From now on we will assume K is an infinite field, n a positive integer and we will stick to Green’s notation Γ = GLn(K). We want to define the polynomial functions on Γ.

Definition 3.0.3. Suppose µ, ν ∈ n := {1, 2, . . . , n}. We define the function c_µν ∈ K^Γ as the function sending the matrix g ∈ Γ to its µν-th coefficient g_µν.

Definition 3.0.4. We denote by A or A_K(n) the subalgebra of K^Γ generated by the c_µν. We call the elements of the algebra A polynomial functions on Γ.

Proposition 3.0.5. [9, p. 11] Since K is an infinite field, the cµν are algebraically independent over K.

This proposition means that we can consider A to be the algebra of polynomials over K in the n² indeterminates c_µν.

Proof. (Own working) Let us suppose for a contradiction that the c_µν are algebraically dependent. Then there exists a non-zero polynomial P ∈ K[X_11, . . . , X_nn] such that P(c_11, . . . , c_nn) ≡ 0 ∈ K^Γ. This means that P(c_11, . . . , c_nn)(g) = P(g_11, . . . , g_nn) = 0 ∈ K for all g ∈ Γ. Now consider the polynomial Q = P · det, where det ∈ K[X_11, . . . , X_nn] is the determinant polynomial. If g ∈ Γ, then P vanishes at g, and if g is a singular matrix, then det vanishes at g; so Q vanishes at every point of K^{n²}. Since K is an infinite field, a polynomial that vanishes at every point of K^{n²} must be the zero polynomial, so Q = P · det = 0. But K[X_11, . . . , X_nn] is an integral domain and det is non-zero, so P = 0. This contradicts our assumption that P is non-zero, hence the c_µν are algebraically independent, as required.

Definition 3.0.6. For each r ≥ 0, we define A_K(n, r) to be the subspace of A_K(n) consisting of those elements that are expressible as homogeneous polynomials of degree r in the c_µν.


Proposition 3.0.7. [9, p. 11] The K-space A_K(n, r) has dimension C(n² + r − 1, r), the binomial coefficient 'n² + r − 1 choose r'.

Proof. (Own working) The distinct monomials in the c_µν of degree r form a K-basis of the space A_K(n, r). The number of these monomials is just the number of ways in which we can choose r elements from a set of size n², where repetition is allowed and order is unimportant. This number is C(n² + r − 1, r) (see e.g. [13, p. 70]).

In particular, we notice that A_K(n, 0) = K · 1_A, where 1_A is the function sending g 7→ 1_K for all g ∈ Γ. We also notice that the K-algebra A has the standard grading A = A_K(n) = ⊕_{r≥0} A_K(n, r). (Recall that an algebra R = ⊕_{i=0}^∞ R_i is a graded algebra if K ⊂ R_0 and R_iR_j ⊂ R_{i+j}, see e.g. [15, p. 231].)

We need to briefly discuss some notation that we will use throughout the rest of this project. Suppose we are given integers n, r ≥ 1. Then we will write I(n, r) for the set of all functions r → n. We can also denote these functions as vectors: if i ∈ I(n, r), then we can write i = (i_1, . . . , i_r) where i_α ∈ n. We will write the symmetric group on r elements as G(r). This group has a natural right action on I(n, r) by place-permutation in the following way: iπ = (i_{π(1)}, . . . , i_{π(r)}) for π ∈ G(r) and i ∈ I(n, r). For i, j ∈ I(n, r) we will write i ∼ j if they lie in the same G(r)-orbit, that is i ∼ j ⇐⇒ ∃π ∈ G(r) such that j = iπ. Furthermore, we can let G(r) act naturally on the right of I(n, r) × I(n, r) by (i, j)π = (iπ, jπ). We will write (i, j) ∼ (k, l) if they are in the same G(r)-orbit. The following definition and proposition illustrate why this notation is so useful.

Definition 3.0.8. For i, j ∈ I(n, r), we will write ci,j := ci1j1 ci2j2 . . . cirjr .

Proposition 3.0.9. [9, p. 11] If i, j ∈ I(n, r), then ci,j lies in AK (n, r) and AK (n, r) is spanned by the monomials ci,j as a K-space.

Proof. (Own working) It should be intuitively clear that ci,j lies in AK (n, r), since ci,j is a monomial in the cµν with exactly r factors of the form cµν for some ν, µ ∈ n. Furthermore, any element in AK (n, r) can be written as the sum of certain K-multiples of monomials. Any monomial in AK (n, r), however, will have exactly r factors of the form cµν with µ, ν ∈ n, so we can write such a monomial as ci,j with i, j ∈ I(n, r). Hence AK (n, r) is spanned by the ci,j as a K-space, as required.

We conclude that A_K(n, r) and I(n, r) are closely linked. We run into a problem, however, as distinct pairs i, j ∈ I(n, r) can give rise to the same monomial c_i,j. For example, (1, 2) and (2, 1) are distinct elements of I(2, 2), but c_{(1,2),(2,1)} = c_12c_21 = c_21c_12 = c_{(2,1),(1,2)}. This problem is easily solved by the following proposition.

Proposition 3.0.10. [9, p. 12] We have ci,j = ck,l if and only if (i, j) ∼ (k, l).

Proof. (Own working) Let us denote ci,j as in Definition 3.0.8, so we have r (potentially non-distinct) factors of the form cµν. Now ci,j = ck,l if and only if the factors cµν appearing in ci,j are the same as the factors appearing in ck,l, i.e. the factors of ck,l are just some permutation of the factors of ci,j.

Therefore, if c_i,j = c_k,l, then ∃π ∈ G(r) such that the α-th factor of c_i,j equals the π(α)-th factor of c_k,l for all α ∈ r. That just means that i_α = k_{π(α)} and j_α = l_{π(α)}, which is equivalent to saying that (i, j) ∼ (k, l).

Conversely, if (i, j) = (k, l)π, then the α-th factor of ci,j is the same as the π(α)-th factor of ck,l. This means that they have the same factors in a different order, hence ci,j=ck,l.

Corollary 3.0.11. [9, p. 12] The set I(n, r) × I(n, r) has exactly C(n² + r − 1, r) distinct G(r)-orbits.
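This count is easy to confirm by brute force for small parameters (own working): the sketch below enumerates the G(r)-orbits of I(n, r) × I(n, r) under simultaneous place-permutation and compares their number with C(n² + r − 1, r).

```python
from itertools import product, permutations
from math import comb

n, r = 2, 3
I = list(product(range(1, n + 1), repeat=r))  # I(n, r) as tuples

def canon(i, j):
    """Canonical representative of the G(r)-orbit of (i, j): the
    lexicographically smallest pair obtainable by a simultaneous
    place-permutation."""
    return min(
        (tuple(i[p] for p in pi), tuple(j[p] for p in pi))
        for pi in permutations(range(r))
    )

orbits = {canon(i, j) for i in I for j in I}
expected = comb(n * n + r - 1, r)  # C(n^2 + r - 1, r), the dimension of A_K(n, r)
```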

3.1 The Definition of MK(n) and MK(n, r)

In the previous chapter we introduced the maps

∆ : K^Γ → K^{Γ×Γ}, where ∆f ∈ K^{Γ×Γ} : (s, t) 7→ f(st);
ε : K^Γ → K, which sends f 7→ f(1_Γ) ∈ K.

The c_µν (with µ, ν ∈ n) introduced at the beginning of this chapter are indeed functions Γ → K, so we need to figure out how ∆ and ε behave on these functions.

Proposition 3.1.1. [9, p. 12] When we apply ∆ and ε to the functions c_µν, we find the following identities:

∆(c_µν) = ∑_{λ∈n} c_µλ ⊗ c_λν;

ε(c_µν) = δ_µν.

Proof. (Own working) For the first identity we see that if g, h ∈ Γ, then ∆c_µν : (g, h) 7→ c_µν(gh). If we denote the ij-th component of the matrix g by g_ij, then the µν-th component of gh is (gh)_µν = ∑_λ g_µλ h_λν. We know that (∑_{λ∈n} c_µλ ⊗ c_λν)(g, h) = ∑_λ g_µλ h_λν. So the two functions agree on Γ × Γ and they are therefore equal.

For the second identity we see that ε(c_µν) = c_µν(1_Γ) = δ_µν by definition of the identity matrix.
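The first identity is matrix multiplication in disguise; as a concrete check (own working) we can evaluate both sides on arbitrary integer matrices:

```python
import random

n = 3
random.seed(0)
g = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
h = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

def c(mu, nu, m):
    """The coordinate function c_{mu nu}: send a matrix m to its (mu, nu) entry."""
    return m[mu][nu]

# The product matrix gh.
gh = [[sum(g[i][k] * h[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Delta(c_{mu nu})(g, h) = c_{mu nu}(gh) = sum_lambda c_{mu lambda}(g) c_{lambda nu}(h).
ok = all(
    c(mu, nu, gh) == sum(c(mu, lam, g) * c(lam, nu, h) for lam in range(n))
    for mu in range(n) for nu in range(n)
)
```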

Corollary 3.1.2. [9, p. 12] For 'multi-indices' i, j ∈ I(n, r) of length r ≥ 1 we also have

∆(c_i,j) = ∑_{s∈I(n,r)} c_i,s ⊗ c_s,j,

ε(c_i,j) = δ_i,j, where δ_i,j equals 1 if i = j and 0 otherwise.

Proof. (Own working) From the previous chapter we know that ∆ and ε are K-algebra maps, in other words: they are multiplicative maps. Therefore

∆(c_i,j) = ∆(c_{i_1j_1} · · · c_{i_rj_r})
= ∆(c_{i_1j_1}) · · · ∆(c_{i_rj_r})
= (∑_{λ_1∈n} c_{i_1λ_1} ⊗ c_{λ_1j_1}) · · · (∑_{λ_r∈n} c_{i_rλ_r} ⊗ c_{λ_rj_r})
= ∑_{s∈I(n,r)} c_i,s ⊗ c_s,j.

Here the last equality stems from the fact that we can pick exactly one term from each of the r brackets and sum over all these possible products. When we pick s = (λ1, . . . , λr), then (ci1λ1 ⊗ cλ1j1 ) ... (cirλr ⊗ cλrjr ) = (ci1λ1 . . . cirλr ) ⊗ (cλ1j1 . . . cλrjr ) = ci,s ⊗ cs,j. For the second equation we spot that

ε(c_i,j) = ε(c_{i_1j_1} · · · c_{i_rj_r})
= ε(c_{i_1j_1}) · · · ε(c_{i_rj_r})
= δ_{i_1j_1} · · · δ_{i_rj_r} = δ_i,j.

In these propositions, we have shown that for A = A_K(n) we have ∆A ⊆ A ⊗ A, which means that A is a subcoalgebra of F(K^Γ). Since the restrictions of ∆ and ε to A will still be K-algebra maps, this implies that A is also automatically a subbialgebra of F(K^Γ). Furthermore, what these propositions show is that ∆A_K(n, r) ⊆ A_K(n, r) ⊗ A_K(n, r) and hence that A_K(n, r) is also a subcoalgebra of A_K(n) (for the case r = 0 this follows easily from ∆(1_A) = 1_A ⊗ 1_A).

Definition 3.1.3. We shall write M_K(n) for the category mod_{A_K(n)}(KΓ) and M_K(n, r) for the category mod_{A_K(n,r)}(KΓ). In other words, M_K(n) is the category of finite-dimensional left KΓ-modules whose coefficient functions are polynomials in the c_µν and furthermore, for M_K(n, r), we require that the coefficient functions are homogeneous polynomials of degree r in the c_µν.

Informally, we could also say that M_K(n) is the category of finite-dimensional (left) KΓ-modules which afford 'polynomial' representations of Γ = GLn(K).

Theorem 3.1.4. [9, p. 12] Each KΓ-module V ∈ M_K(n) has a decomposition

V = ⊕_{r≥0} V_r,

where for each r ≥ 0, V_r is a submodule of V with cf(V_r) ⊆ A_K(n, r), i.e. V_r ∈ M_K(n, r).

In other words, each polynomial representation of Γ decomposes as a direct sum of homogeneous ones. Because the proof of this theorem is not terribly relevant to the rest of the project I have decided to omit it here. The curious reader may want to have a look at Theorem (1.6c) [8, p. 156], where Green proves a much more general theorem about comodules over a coalgebra R which decomposes as a direct sum R = ⊕_ρ R_ρ. Theorem 3.1.4 follows as a consequence of the theorem proved there by Green.

3.2 Examples of Polynomial Representations

To get a feeling for what these polynomial representations really are, let us look at some examples.

Example 1 For any vector space V of dimension n, the trivial representation π : g 7→ 1_V has as its invariant matrix R the n × n identity matrix for each g ∈ Γ. Therefore the trivial representation is a polynomial representation.

Example 2 (Adapted from [16, p. 1]) Let n = 2 and let E be a 2-dimensional K-vector space with basis e_1, e_2. The symmetric square Sym²E is a representation of GL2(K). It has the basis e_1², e_1e_2, e_2². Let us try to figure out how GL2(K) acts on this basis. If g = (a b; c d) ∈ GL2(K), then this matrix sends

e1 7→ ae1 + be2;

e2 7→ ce1 + de2.

So it will act on the basis e_1², e_1e_2, e_2² as the matrix

⎡ a²  2ab    b² ⎤
⎢ ac  ad+bc  bd ⎥
⎣ c²  2cd    d² ⎦

(where the ρ-th row lists the coefficients of the image of the ρ-th basis vector).

But clearly all entries of this matrix are polynomials in the coefficients of g, hence we have found a polynomial representation of GL2(K). Since all entries of the matrix are homogeneous polynomials of degree 2 in the coefficients of g, we can even conclude that Sym²E ∈ M_K(2, 2).
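We can also verify this example computationally (own working). The sketch below uses the column convention g·e_ν = ∑_µ g_µν e_µ of Section 3.5, so its matrix is the transpose of the array displayed above; it checks that Sym² is multiplicative, i.e. that it really is a representation:

```python
import random

def sym2(m):
    """Matrix of g on Sym^2 E in the basis e1^2, e1e2, e2^2, column convention:
    column beta holds the coordinates of the image of the beta-th basis vector.
    Here g.e1 = a e1 + c e2 and g.e2 = b e1 + d e2 for g = [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return [
        [a * a, a * b, b * b],
        [2 * a * c, a * d + b * c, 2 * b * d],
        [c * c, c * d, d * d],
    ]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

random.seed(1)
g = [[random.randint(-4, 4) for _ in range(2)] for _ in range(2)]
h = [[random.randint(-4, 4) for _ in range(2)] for _ in range(2)]

# Sym^2 is a homomorphism: the matrix of gh is the product of the matrices.
ok = sym2(matmul(g, h)) == matmul(sym2(g), sym2(h))
```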

Non-example 3 (Adapted from [16, p. 2]) Let n = 2 and let ρ : GL2(K) → K^× be the representation defined by ρ(g) = (det g)^{−1}. If ρ is a polynomial representation, then the function GL2(K) → K defined by

g 7→ (det g)^{−1} = (c_11(g)c_22(g) − c_21(g)c_12(g))^{−1}

must be a polynomial in the c_ij. This is clearly impossible, for if we restrict ourselves to matrices of the form diag(α, 1) for α ≠ 0, then we conclude that the function mapping α 7→ 1/α for all α ∈ K^× has to be a polynomial in α. This is a contradiction, since K is an infinite field.

3.3 The Schur Algebra

From Theorem 3.1.4 we see that the only indecomposable modules V ∈ M_K(n) are the homogeneous modules V_r for some r ≥ 0. So we may as well limit our attention to the homogeneous cases A_K(n, r) and M_K(n, r). Let us do this from now on and let r ≥ 0 be fixed.

Definition 3.3.1. Let the Schur algebra S_K(n, r) be defined as the dual space of A_K(n, r):

S_K(n, r) := (A_K(n, r))^* = Hom_K(A_K(n, r), K).

Now we deal with S_K(n, r) as we usually treat a dual space. The basis {c_i,j : i, j ∈ I(n, r)} for A_K(n, r) immediately leads to the definition of the dual basis {ξ_i,j : i, j ∈ I(n, r)} of S_K(n, r), where for i, j ∈ I(n, r) the element ξ_i,j is defined by

ξ_i,j(c_p,q) = 1 if (i, j) ∼ (p, q), and ξ_i,j(c_p,q) = 0 if (i, j) ≁ (p, q), for all p, q ∈ I(n, r).

From Proposition 3.0.10 we instantly see that ξ_i,j = ξ_p,q ⇐⇒ (i, j) ∼ (p, q) and also that the dimension of S_K(n, r) (which equals the dimension of A_K(n, r)) is C(n² + r − 1, r). From Proposition 1.2.1 we know that, since S_K(n, r) is dual to the coalgebra A_K(n, r), it must be an algebra. Multiplication in this algebra is as follows: if c ∈ A_K(n, r) and

∆(c) = ∑_t c_t ⊗ c_t′, then, using the definition of multiplication in the dual space of a coalgebra, the product of ξ, η ∈ S_K(n, r) is defined by

(ξη)(c) = ∑_t ξ(c_t)η(c_t′),

and the unit element of the algebra S_K(n, r) will be denoted ε and is given by ε(c) = c(1_Γ) for all c ∈ A_K(n, r).

Moreover, using Corollary 3.1.2 and this definition of the product, we also find that

(ξη)(c_i,j) = ∑_{s∈I(n,r)} ξ(c_i,s)η(c_s,j),

for any basis element ci,j ∈ AK (n, r).

Proposition 3.3.2 (Multiplication Rule for S_K(n, r)). [9, p. 13] For ξ_i,j, ξ_k,l ∈ S_K(n, r) we have the multiplication rule

ξ_i,jξ_k,l = ∑_{p,q} (Z(i, j, k, l, p, q) · 1_K) ξ_p,q,

where the sum is over a set of representatives (p, q) of G(r)-orbits of I(n, r) × I(n, r), and Z(i, j, k, l, p, q) = Card {s ∈ I(n, r):(i, j) ∼ (p, s) and (k, l) ∼ (s, q)} .

Proof. (Own working) Suppose we pick a basis element c_p,q ∈ A_K(n, r). Then

(ξ_i,jξ_k,l)(c_p,q) = ∑_{s∈I(n,r)} ξ_i,j(c_p,s)ξ_k,l(c_s,q).

The summand on the right hand side is 1 for each s ∈ I(n, r) such that (i, j) ∼ (p, s) and (k, l) ∼ (s, q), and it is 0 otherwise. Hence the right hand side equals Z(i, j, k, l, p, q) · 1_K. Moreover, ξ_p,q(c_{n,m}) is 1 if (p, q) ∼ (n, m) and 0 otherwise, and ∼ is an equivalence relation. So if (n, m) ∼ (p, q), then Z(i, j, k, l, p, q) = Z(i, j, k, l, n, m). Hence the formula holds.

This general multiplication rule (which was first written down in Issai Schur's dissertation [14, p. 20]) has some special cases which are worth noticing.

Proposition 3.3.3. [9, p. 14] For any i, j, k, l ∈ I(n, r) we have:

(a) ξi,jξk,l = 0 unless j ∼ k;

(b) ξi,iξi,j = ξi,j = ξi,jξj,j.

Proof. Let us start with the first equation. If ξ_i,jξ_k,l ≠ 0, then there exist s, p, q ∈ I(n, r) such that (i, j) ∼ (p, s) and (k, l) ∼ (s, q). This implies that j ∼ s and k ∼ s, and hence that j ∼ k, since ∼ is an equivalence relation.

(Own working) Now for the second equation, we need to consider ξ_i,iξ_i,j. We are only interested in the ξ_p,q for which Z(i, i, i, j, p, q) ≠ 0. If this is non-zero, then there exists s ∈ I(n, r) such that (i, i) ∼ (p, s) and (i, j) ∼ (s, q). The former implies that p = s, so the latter becomes (i, j) ∼ (p, q). We conclude there exists exactly one s ∈ I(n, r) satisfying the conditions of Z(i, i, i, j, p, q) (namely s = p) and that this can only happen when (i, j) ∼ (p, q). Using the multiplication rule we find that ξ_i,iξ_i,j = ξ_p,q for some (p, q) ∼ (i, j) and hence that ξ_i,iξ_i,j = ξ_i,j. An analogous argument leads to ξ_i,jξ_j,j = ξ_i,j.
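Both the multiplication rule and these special cases can be checked exhaustively for small parameters (own working). The sketch below evaluates products of basis elements directly via (ξη)(c_{p,q}) = ∑_s ξ(c_{p,s})η(c_{s,q}) and confirms parts (a) and (b) for n = r = 2:

```python
from itertools import product, permutations

n, r = 2, 2
I = list(product(range(n), repeat=r))
perms = list(permutations(range(r)))

def sim(pair_a, pair_b):
    """(i, j) ~ (k, l): the pairs lie in the same G(r)-orbit under
    simultaneous place-permutation."""
    (i, j), (k, l) = pair_a, pair_b
    return any(tuple(i[p] for p in pi) == k and tuple(j[p] for p in pi) == l
               for pi in perms)

def xi(i, j, p, q):
    """The basis element xi_{i,j} evaluated on the monomial c_{p,q}."""
    return 1 if sim((i, j), (p, q)) else 0

def prod_eval(i, j, k, l, p, q):
    """(xi_{i,j} xi_{k,l})(c_{p,q}); the number of contributing s is
    Green's Z(i, j, k, l, p, q)."""
    return sum(xi(i, j, p, s) * xi(k, l, s, q) for s in I)

# (a): xi_{i,j} xi_{k,l} = 0 unless j ~ k (for single multi-indices,
# j ~ k iff they agree as multisets).
ok_a = all(prod_eval(i, j, k, l, p, q) == 0
           for i in I for j in I for k in I for l in I
           if sorted(j) != sorted(k)
           for p in I for q in I)

# (b): xi_{i,i} xi_{i,j} = xi_{i,j} = xi_{i,j} xi_{j,j}, checked on every c_{p,q}.
ok_b = all(prod_eval(i, i, i, j, p, q) == xi(i, j, p, q) == prod_eval(i, j, j, j, p, q)
           for i in I for j in I for p in I for q in I)
```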

The upshot of this last proposition is that we can deduce a useful equation for the ξ_i,i. Firstly, we know from the second equation that ξ_i,i² = ξ_i,iξ_i,i = ξ_i,i. But we also know from the first equation that ξ_i,iξ_j,j = 0 if i ≁ j (and if i ∼ j, then (i, i) ∼ (j, j), so ξ_i,i = ξ_j,j). So the ξ_i,i form a set of mutually orthogonal idempotents.

Proposition 3.3.4. [9, p. 14] For the unit element ε of S_K(n, r) the following equation holds:

ε = ∑_i ξ_i,i,

where i runs over a set of representatives of the G(r)-orbits of I(n, r).

Proof. (Own working) We know that ε : c 7→ c(1_Γ) for all c ∈ A_K(n, r) and we already found in Corollary 3.1.2 that ε(c_i,j) = δ_i,j, where δ_i,j = 1 if i = j and 0 otherwise. Now let us evaluate the right hand side of the equation in the proposition on the basis elements c_p,q. A term in the sum ∑_i ξ_i,i(c_p,q) equals 1 if (i, i) ∼ (p, q) and 0 otherwise. But (i, i) ∼ (p, q) for some i ∈ I(n, r) if and only if p = q. Moreover, since the sum runs over a set of representatives of G(r)-orbits, it is not possible for more than one term of the sum to be equal to 1. On the right hand side we obtain a function which sends c_p,q 7→ 1 if p = q, and c_p,q 7→ 0 otherwise.

So the left hand side and the right hand side agree on basis elements cp,q, hence they represent the same function.

3.4 The map e : KΓ → SK(n, r)

Definition 3.4.1. For each g ∈ Γ we define a map e_g : A_K(n, r) → K by e_g(c) = c(g) for all c ∈ A_K(n, r) (so e_g ∈ S_K(n, r)), and we define the map e : KΓ → S_K(n, r) as the linear extension of the map sending g 7→ e_g for all g ∈ Γ.

Proposition 3.4.2. [9, p. 14] The following two equations hold:

(i) e_{1_Γ} = ε ∈ S_K(n, r);

(ii) e_{gg′} = e_ge_{g′} for all g, g′ ∈ Γ.

Proof. The first equation holds, since this is just the definition of the unit element ε in S_K(n, r). For the second equation we can check that if c ∈ A_K(n, r) and ∆(c) = ∑_t c_t ⊗ c_t′, then, by the definition of multiplication in the Schur algebra, we find that

(e_ge_{g′})(c) = ∑_t e_g(c_t)e_{g′}(c_t′) = ∑_t c_t(g)c_t′(g′) = (∑_t c_t ⊗ c_t′)(g, g′) = ∆(c)(g, g′) = c(gg′) = e_{gg′}(c),

as required (here the second-to-last equality is just the definition of ∆).
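This proof can be made concrete on the basis monomials (own working): by Corollary 3.1.2, (e_g e_g′)(c_{i,j}) = ∑_s e_g(c_{i,s}) e_g′(c_{s,j}), and e_g(c_{i,j}) is simply the product g_{i_1 j_1} · · · g_{i_r j_r}. The sketch below checks e_g e_h = e_{gh} this way for random integer matrices:

```python
import random
from itertools import product

n, r = 2, 2
random.seed(2)
g = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
h = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
gh = [[sum(g[a][k] * h[k][b] for k in range(n)) for b in range(n)] for a in range(n)]

I = list(product(range(n), repeat=r))

def e(m, i, j):
    """e_m(c_{i,j}) = c_{i,j}(m) = prod_alpha m[i_alpha][j_alpha]."""
    out = 1
    for a, b in zip(i, j):
        out *= m[a][b]
    return out

# e_g e_h = e_{gh}: (e_g e_h)(c_{i,j}) = sum_s e_g(c_{i,s}) e_h(c_{s,j}).
ok = all(
    e(gh, i, j) == sum(e(g, i, s) * e(h, s, j) for s in I)
    for i in I for j in I
)
```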

In fact, the above proposition shows that the map e : KΓ → S_K(n, r) is a map of K-algebras. Since we can linearly extend any f ∈ K^Γ uniquely to a map f : KΓ → K, we can think of the map e as the 'evaluation' map, where e(κ) : c 7→ c(κ) for κ ∈ KΓ and where we think of c(κ) as the unique linear extension of c to KΓ.

Proposition 3.4.3. [9, pp. 14-15] For the map e : KΓ → SK (n, r) we have:

(i) e is surjective;

(ii) Let Y = Ker e and let f be any element of K^Γ. Then f ∈ A_K(n, r) if and only if f(Y) = 0.

Proof. (Adapted from [9, p. 15]) (i) Suppose for a contradiction that e is not surjective. Then Im e would be a proper subspace of S_K(n, r) = A_K(n, r)^*, so there would be a non-zero linear functional on S_K(n, r) vanishing on Im e. Since S_K(n, r)^* is naturally isomorphic to A_K(n, r) (the spaces are finite-dimensional), this functional is evaluation at some non-zero c ∈ A_K(n, r); that is, ξ(c) = 0 for all ξ ∈ Im e. In particular e_g(c) = c(g) = 0 for all g ∈ Γ, so c = 0, which is a contradiction.

(ii) Let us start with the 'only if' direction, in which we assume that f ∈ A_K(n, r). Suppose κ ∈ Ker e. Then e(κ) ≡ 0. This is simply equivalent to saying that e(κ)(c) = 0 for all c ∈ A_K(n, r); in particular f(κ) = 0. So f(κ) = 0 for all κ ∈ Y and therefore f(Y) = 0. For the 'if' direction let us assume that f is any element of K^Γ for which we have f(Y) = 0. Using part (i) we have the short exact sequence

0 −→ Y −ι→ KΓ −e→ S_K(n, r) −→ 0,

where ι is simply the inclusion Y ↪ KΓ. Since the linear map f : KΓ → K vanishes on Y = Ker e, it factors through e: there is a well-defined K-linear map y : S_K(n, r) → K given by y(e(κ)) = f(κ) for all κ ∈ KΓ (well defined precisely because f(Y) = 0 and e is surjective). So y ∈ S_K(n, r)^*, and the natural isomorphism S_K(n, r)^* ≅ A_K(n, r) implies that there must be a c ∈ A_K(n, r) such that y(ξ) = ξ(c) for all ξ ∈ S_K(n, r). If we now take ξ = e(κ), then we find that f(κ) = y(e(κ)) = e(κ)(c) = c(κ) for all κ ∈ KΓ. We can therefore conclude that f = c ∈ A_K(n, r), as required.

Note that we have just proved an important result: any ξ ∈ S_K(n, r) can be written as ξ = e_κ for some κ ∈ KΓ!

Proposition 3.4.4. [9, p. 15] Let V ∈ mod(KΓ). Then V ∈ MK (n, r) if and only if YV = 0.

Proof. Fix a basis {v_b} of V. Recall from the previous chapter the invariant matrix R = (r_ab), where the r_ab are the coefficient functions for the chosen basis. Saying that YV = 0 is clearly equivalent to saying that R(Y) = (r_ab(Y)) = 0. But from the previous proposition and the fact that r_ab ∈ K^Γ, we see that r_ab(Y) = 0 is itself equivalent to saying that r_ab ∈ A_K(n, r). Since this must hold for all coefficient functions we obtain an equivalence with the statement that cf(V) ⊆ A_K(n, r), which is just the same as requiring that V ∈ M_K(n, r).

Inspired by these two propositions we can now define the following left action of S_K(n, r) on V ∈ M_K(n, r):

e(κ)v = κv, for all κ ∈ KΓ, v ∈ V,

where κv is just the left action of KΓ on V. These two propositions show that this definition makes sense. Furthermore, we see that through this definition we have actually established an equivalence of the categories M_K(n, r) and mod(S_K(n, r)). It is worth noting that in the notation we used in the previous chapter we found that if we fix a basis {v_b : b ∈ B} of V, then

gv_b = ∑_{a∈B} r_ab(g)v_a, for g ∈ Γ, b ∈ B.

In the action of S_K(n, r) on V this becomes

ξv_b = ∑_{a∈B} ξ(r_ab)v_a, for ξ ∈ S_K(n, r), b ∈ B,

since these two clearly agree when ξ = eg and we can just extend linearly to the whole of KΓ.

3.5 The Module E⊗r

Let our infinite field K be fixed and let us choose Γ = ΓK = GLn(K). We define E to be an n-dimensional K-vector space

E = EK = K · e1 ⊕ · · · ⊕ K · en,

which clearly has basis {e_ν : ν ∈ n}. We let Γ act on this basis naturally, i.e. in the same way as GLn(R) would act on the standard basis {e_i} of R^n. So we have

ge_ν = ∑_{µ∈n} g_µν e_µ = ∑_{µ∈n} c_µν(g) e_µ, for all g ∈ Γ and ν ∈ n.

The corresponding invariant matrix becomes C = (c_µν) and therefore we see that the KΓ-module E lies in M_K(n, 1).

Now we choose r ≥ 1 and we let Γ act naturally on the K-vector space E⊗r = E⊗· · ·⊗E. Firstly, we note that E⊗r has the K-basis

{ei = ei1 ⊗ · · · ⊗ eir : i ∈ I(n, r)}.

When we let Γ act on this basis as before we see that, for j ∈ I(n, r) and g ∈ Γ,

ge_j = ge_{j_1} ⊗ · · · ⊗ ge_{j_r} = ∑_{i∈I(n,r)} g_{i_1j_1} · · · g_{i_rj_r} e_i = ∑_{i∈I(n,r)} c_i,j(g) e_i.

The invariant matrix we obtain from this looks like (c_i,j) = C^{⊗r}. So we find that E^{⊗r} ∈ M_K(n, r). As discussed before, this action is equivalent to the action of S_K(n, r) on E^{⊗r} which is defined by the rule

ξe_j = ∑_{i∈I(n,r)} ξ(c_i,j) e_i, for all ξ ∈ S_K(n, r), j ∈ I(n, r).

We can also define a right action of the symmetric group G(r) on E⊗r by

e_iπ = e_{iπ}, for all i ∈ I(n, r) and π ∈ G(r).

By extending this action linearly we instantly obtain a right action of the group algebra KG(r) on E⊗r.

Proposition 3.5.1. [9, p. 17] The right action of KG(r) commutes with the left action of KΓ on E⊗r.

Proof. (Own working) Since the left action of KΓ is equivalent to the left action of S_K(n, r) on E^{⊗r}, it suffices for us to check that (ξx)π = ξ(xπ) for all x ∈ E^{⊗r}, ξ ∈ S_K(n, r) and π ∈ G(r). But

(ξe_j)π = (∑_{i∈I(n,r)} ξ(c_i,j) e_i)π = ∑_{i∈I(n,r)} ξ(c_i,j) e_{iπ} = ∑_{k∈I(n,r)} ξ(c_{kπ⁻¹,j}) e_k = ∑_{k∈I(n,r)} ξ(c_{k,jπ}) e_k = ξe_{jπ} = ξ(e_jπ),

where the fourth equality follows as (kπ⁻¹, j) ∼ (k, jπ). By linearity the two actions commute.
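This calculation, too, can be checked by brute force (own working): index the coordinates of E^{⊗r} by I(n, r), let g act via the coefficients c_{i,j}(g) = ∏_α g_{i_α j_α}, and let π act by place-permutation. The sketch below verifies (gx)π = g(xπ) on every basis vector:

```python
import random
from itertools import product, permutations

n, r = 2, 3
random.seed(3)
g = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
I = list(product(range(n), repeat=r))

def act_g(vec):
    """Left action of g on E^{(x)r}: (g vec)_i = sum_j c_{i,j}(g) vec_j,
    where c_{i,j}(g) = prod_alpha g[i_alpha][j_alpha]."""
    out = {}
    for i in I:
        total = 0
        for j in I:
            coeff = 1
            for a, b in zip(i, j):
                coeff *= g[a][b]
            total += coeff * vec[j]
        out[i] = total
    return out

def act_pi(vec, pi):
    """Right place-permutation action e_i . pi = e_{i pi},
    with (i pi)_alpha = i_{pi(alpha)}."""
    out = {i: 0 for i in I}
    for i, coeff in vec.items():
        out[tuple(i[pi[a]] for a in range(r))] += coeff
    return out

def basis(j):
    return {i: int(i == j) for i in I}

ok = all(
    act_pi(act_g(basis(j)), pi) == act_g(act_pi(basis(j), pi))
    for j in I for pi in permutations(range(r))
)
```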

We have a much stronger statement as well.

Theorem 3.5.2 (Schur). [9, p. 18] Let ψ : S_K(n, r) → End_K(E^{⊗r}) be the representation afforded by the S_K(n, r)-module E^{⊗r}. Then:

(i) Im ψ = End_{KG(r)}(E^{⊗r});

(ii) Ker ψ = 0.

Hence S_K(n, r) ≅ End_{KG(r)}(E^{⊗r}).

Proof. (Adapted from [9, p. 18]) Every element θ ∈ End_K(E^{⊗r}) has a matrix representation (T_i,j) relative to the basis {e_i : i ∈ I(n, r)} of E^{⊗r}. In this matrix the i, j run independently over I(n, r) and T_i,j ∈ K. Additionally, we see that θ ∈ End_{KG(r)}(E^{⊗r}) if and only if T_i,j = T_{iπ,jπ}, for all i, j ∈ I(n, r) and all π ∈ G(r).

From this observation we can set up a bijection between a K-basis of End_{KG(r)}(E^{⊗r}) and the set Ω of all G(r)-orbits on I(n, r) × I(n, r) in the following way: for ω ∈ Ω define θ_ω as the element of End_K(E^{⊗r}) which has the matrix (T_i,j) as its invariant matrix relative to the basis {e_i}, where T_i,j = 1_K if (i, j) ∈ ω and T_i,j = 0 otherwise. The θ_ω then form a K-basis of End_{KG(r)}(E^{⊗r}).

Now suppose we pick a pair (p, q) ∈ I(n, r) × I(n, r); then (p, q) lies in a G(r)-orbit ω ∈ Ω. Let ξ_p,q be the basis element of S_K(n, r) as before. Then

Claim. ψ(ξp,q) = θω

⊗r P By definition of the left action of SK (n, r) on E , we see that ξp,qej = i ξp,q(ci,j)ei. Since we have chosen our matrix (Ti,j) in the basis {ei}, this implies that the invariant matrix corresponding to this action looks like (ξp,q(ci,j)), where i, j run over I(n, r). This matrix is by definition the invariant matrix of θω in the same basis, hence ψ(ξp,q) = θω and we have proved the claim.

Hence we have proved that ψ maps the basis {ξ_p,q} of S_K(n, r) bijectively onto the basis {θ_ω : ω ∈ Ω} of End_{KG(r)}(E^{⊗r}). Since ψ is linear, it must in fact be an isomorphism onto End_{KG(r)}(E^{⊗r}), as required.

Corollary 3.5.3 (Schur). [9, p. 18] If char K = 0, or if char K = p > r, then SK (n, r) is semisimple. Hence, every V ∈ MK (n, r) is completely reducible.

Proof. (Adapted from [9, p. 18]) Maschke's Theorem restated in the language of modules reads: let G be a finite group and F a field whose characteristic does not divide the order of G. Then FG, the group algebra of G, is semisimple (see e.g. [11]). We can conclude from this that, given the conditions on char K, KG(r) is semisimple, since |G(r)| = r!. Therefore every KG(r)-module, in particular E^{⊗r}, is completely reducible (see e.g. [10, p. 28, Theorem 2.5]). But the endomorphism algebra of a completely reducible module is semisimple (see e.g. [10, p. 66, Proposition 2.27]), so by the isomorphism established in the previous proposition, S_K(n, r) is semisimple. Since the categories M_K(n, r) and mod S_K(n, r) are equivalent and any module over a semisimple algebra is completely reducible, we obtain the required result.

Chapter 4

Weights and Characters

In this fourth chapter we will look at the theory of weights and weight spaces. We will also develop some of the theory of the characters of MK (n, r). At the end of the chapter we will deduce our big theorem: we will describe all irreducible characters of modules in MK (n, r). For this chapter I will mainly use Green’s book [9] as my reference as well as my own input and some smaller references where needed. All results in this chapter are originally due to Schur [14] for the case K = C and have here been generalised to hold for any infinite field.

4.1 Weights

Definition 4.1.1. For n, r ≥ 1 we denote the set of all G(r)-orbits in I(n, r) by Λ(n, r).

Definition 4.1.2. We will call the elements α, β, . . . ∈ Λ(n, r) weights or, more precisely, weights of GLn of dimension r.

Proposition 4.1.3. [9, p. 23] A weight α ∈ Λ(n, r) can be completely specified by a vector (α_1, . . . , α_n) describing the content of any i = (i_1, . . . , i_r) ∈ α, i.e. for each ν ∈ n, we let α_ν be the number of ρ ∈ r such that i_ρ = ν.

Note that we can also consider α to be an unordered partition of r into n parts, where zero parts are allowed.

Proof of Proposition 4.1.3. (Own working) Firstly, if two functions i = (i1, . . . , ir), j = (j1, . . . , jr) ∈ I(n, r) lie in the same weight α, then ∃π ∈ G(r) such that

iπ = (iπ(1), . . . , iπ(r)) = (j1, . . . , jr) = j.

Both functions will therefore most certainly lead to the same vector (α1, . . . , αn).

Conversely, suppose two functions i, j ∈ I(n, r) lead to the same vector (α1, . . . , αn). Then for each ν ∈ n, they have the same number of ρ ∈ r such that iρ = ν. Therefore there exists a π ∈ G(r) such that iπ = j and hence i, j lie in the same weight α ∈ Λ(n, r).
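The correspondence between orbits and content vectors is easy to confirm exhaustively for small parameters (own working):

```python
from itertools import product, permutations

n, r = 3, 4
I = list(product(range(1, n + 1), repeat=r))

def content(i):
    """The weight of i: alpha_nu is the number of places rho with i_rho = nu."""
    return tuple(i.count(nu) for nu in range(1, n + 1))

def same_orbit(i, j):
    """i ~ j iff some place-permutation carries i to j."""
    return any(tuple(i[p] for p in pi) == j for pi in permutations(range(r)))

# i ~ j  <=>  content(i) == content(j), for every pair in I(n, r).
ok = all(same_orbit(i, j) == (content(i) == content(j)) for i in I for j in I)
```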


Given this proposition, we will usually write α = (α_1, . . . , α_n) for an α ∈ Λ(n, r). Apart from the right action of G(r) on I(n, r), we can also define a left action of W = G(n) on I(n, r) by the rule wi = (w(i_1), . . . , w(i_r)) for any w ∈ W and i ∈ I(n, r).

Proposition 4.1.4. [9, p. 23] The right G(r)-action and the left W-action on I(n, r) commute.

Proof. (Own working) For w ∈ W, π ∈ G(r) and i ∈ I(n, r) we have that

(wi)π = (w(i1), . . . , w(ir))π = (w(iπ(1)), . . . , w(iπ(r))) = w(iπ), as required.

Since these two actions commute, it makes sense for us to let W act on the set of G(r)-orbits Λ(n, r).

Proposition 4.1.5. [9, p. 23] The left action of W on α ∈ Λ(n, r) is given by the rule $w^{-1}\alpha = (\alpha_{w(1)}, \dots, \alpha_{w(n)})$.

Proof. (Own working) Suppose that i ∈ α. Then $w^{-1}i = (w^{-1}(i_1), \dots, w^{-1}(i_r))$. If the number ν appeared $\alpha_\nu$ times in the original i, then the number $w^{-1}(\nu)$ appears $\alpha_\nu$ times in $w^{-1}i$. Equivalently, the number ν appears $\alpha_{w(\nu)}$ times in $w^{-1}i$. So we have $w^{-1}\alpha = (\alpha_{w(1)}, \dots, \alpha_{w(n)})$, as required.

Definition 4.1.6. The last proposition implies that each W-orbit of Λ(n, r) contains exactly one weight λ which satisfies λ1 ≥ λ2 ≥ · · · ≥ λn. We shall call such a weight a dominant weight. We will denote the set of dominant weights by Λ+(n, r).

It is intuitively clear that the dominant weights correspond to (ordered) partitions of r into no more than n parts.
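This correspondence can be made explicit: the sketch below (my own helper, not from Green) enumerates Λ+(n, r) directly as weakly decreasing n-tuples of non-negative integers summing to r.

```python
def dominant_weights(n, r):
    """Lambda^+(n, r): weakly decreasing n-tuples of non-negative integers summing
    to r, i.e. partitions of r into at most n parts (zero parts allowed)."""
    result = []

    def build(prefix, remaining, bound):
        if len(prefix) == n:
            if remaining == 0:
                result.append(tuple(prefix))
            return
        # each new part is at most the previous part and at most what is left
        for part in range(min(bound, remaining), -1, -1):
            build(prefix + [part], remaining - part, part)

    build([], r, r)
    return result
```

For example `dominant_weights(2, 2)` yields the two dominant weights (2, 0) and (1, 1) of Λ(2, 2).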

4.2 Weight Spaces

Let us now fix our infinite field K and recall the mutually orthogonal idempotents $\xi_{i,i} \in S_K(n, r)$ for i ∈ I(n, r) defined in Section 3.3. In that section we proved that $\xi_{i,i} = \xi_{j,j}$ if and only if i ∼ j. Inspired by this we shall now adopt the notation $\xi_\alpha$ for α ∈ Λ(n, r). The orthogonal decomposition that we found in Proposition 3.3.4 then becomes

$\epsilon = \sum_{\alpha \in \Lambda(n,r)} \xi_\alpha.$

From the previous chapter we already know that ε is the identity element in $S_K(n, r)$, so when we consider the left action of ε on a V ∈ MK(n, r) we find that

$V = \epsilon V = \bigoplus_{\alpha \in \Lambda(n,r)} \xi_\alpha V.$

Definition 4.2.1. Let $T_n(K)$ be the diagonal subgroup of $\Gamma_K = GL_n(K)$ consisting of all diagonal matrices $x(t) = \mathrm{diag}(t_1, \dots, t_n)$ with $t_1, \dots, t_n \in K^* = K \setminus \{0\}$.

Definition 4.2.2. Let V ∈ MK(n, r) and α ∈ Λ(n, r). We define the α-weight space $V^\alpha$ of V as

$V^\alpha := \{v \in V : x(t)v = t_1^{\alpha_1} \cdots t_n^{\alpha_n} v, \text{ for all } x(t) \in T_n(K)\}.$

Definition 4.2.3. For each α ∈ Λ(n, r), we can define a function $\chi^\alpha : T_n(K) \to K^*$ by $\chi^\alpha(x(t)) = t_1^{\alpha_1} \cdots t_n^{\alpha_n}$. It is clear that $\chi^\alpha$ is in fact a multiplicative character of $T_n(K)$.

Proposition 4.2.4. [9, p. 24] For α ∈ Λ(n, r) and V ∈ MK(n, r), we have that $\xi_\alpha V = V^\alpha$.

Proof. (Adapted from [9, p. 24]) Recall that we defined the evaluation map $e : K\Gamma \to S_K(n, r)$ in Section 3.4.

Claim. For the evaluation map e we have the following identity:

$e_{x(t)} = \sum_{\alpha \in \Lambda(n,r)} t_1^{\alpha_1} \cdots t_n^{\alpha_n}\, \xi_\alpha, \quad \text{for all } x(t) \in T_n(K).$

To prove this identity let us evaluate both sides at some $c_{i,j}$ for i, j ∈ I(n, r). The left hand side then becomes $c_{i,j}(x(t))$, which equals 0 if i ≠ j, since all non-diagonal entries of x(t) are 0, and it equals $t_1^{\alpha_1} \cdots t_n^{\alpha_n}$ if i = j, where α = (α1, . . . , αn) is the weight corresponding to i. When we evaluate the right hand side at $c_{i,j}$, we find the same values, which proves the claim.

If $v \in \xi_\alpha V$, then $v = \xi_\alpha w$ for some w ∈ V and therefore $\xi_\alpha v = \xi_\alpha^2 w = \xi_\alpha w = v$. Moreover, $\xi_\beta v = \xi_\beta \xi_\alpha w = 0$ for all β ∈ Λ(n, r) \ {α}, as the $\xi_\alpha$ are mutually orthogonal. Hence

$x(t)v = e_{x(t)}v = \Big(\sum_{\gamma \in \Lambda(n,r)} t_1^{\gamma_1} \cdots t_n^{\gamma_n}\, \xi_\gamma\Big)(v) = t_1^{\alpha_1} \cdots t_n^{\alpha_n} v,$

which implies that $\xi_\alpha V \subseteq V^\alpha$.

For distinct α, β ∈ Λ(n, r), however, we must have that $V^\alpha \cap V^\beta = 0$, for if $0 \ne v \in V^\alpha \cap V^\beta$, then $x(t)v = t_1^{\alpha_1} \cdots t_n^{\alpha_n} v$ and $x(t)v = t_1^{\beta_1} \cdots t_n^{\beta_n} v$ for all $x(t) \in T_n(K)$. But, since K is an infinite field, this can only happen if α = β, a contradiction.

From the formula

$V = \bigoplus_{\alpha \in \Lambda(n,r)} \xi_\alpha V,$

the fact that $\xi_\alpha V \subseteq V^\alpha \subseteq V$ and the fact that for all distinct α, β ∈ Λ(n, r) we have $V^\alpha \cap V^\beta = 0$, we can conclude that in fact $\xi_\alpha V = V^\alpha$, as required.

Corollary 4.2.5. From the previous proposition we get the following decomposition of V ∈ MK(n, r) for free:

$V = \bigoplus_{\alpha \in \Lambda(n,r)} V^\alpha.$

We will sometimes refer to this as the weight space decomposition of V.

4.2.1 Examples of Weight Spaces

Example 4.2.1. (Own working) An example that may help the reader to get a feel for the weight space decomposition is given by the action of $GL_2(K)$ on the symmetric square $\mathrm{Sym}^2 E$, where E is a 2-dimensional K-vector space (see Example 2 in Section 3.2). If we consider the action of the diagonal matrix $\begin{pmatrix} m & 0 \\ 0 & n \end{pmatrix}$ on the basis $e_1^2,\ e_1 e_2,\ e_2^2$ of $\mathrm{Sym}^2 E$, then we see that it acts as

$\begin{pmatrix} m^2 & 0 & 0 \\ 0 & mn & 0 \\ 0 & 0 & n^2 \end{pmatrix}.$

So the weight space decomposition is in this case simply given by $\mathrm{Sym}^2 E = \mathrm{Sp}(e_1^2) \oplus \mathrm{Sp}(e_1 e_2) \oplus \mathrm{Sp}(e_2^2)$ with corresponding α-vectors given by (2, 0), (1, 1) and (0, 2) respectively. Let us for completeness check that this agrees with the decomposition given by $\mathrm{Sym}^2 E = \bigoplus_{\alpha \in \Lambda(n,r)} \xi_\alpha\, \mathrm{Sym}^2 E$. Take for example the content vector α = (1, 1) corresponding to the class of i = (1, 2) in Λ(2, 2). We saw that $\xi_\alpha = \xi_{i,i}$ sends $c_{i,i} = c_{11} c_{22}$ to 1 and $c_{i,j}$ to 0 otherwise. In Example 2 in Section 3.2 we already saw that a matrix $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL_2(K)$ acts on $e_1^2,\ e_1 e_2,\ e_2^2$ as

$e_1^2 \mapsto a^2 e_1^2 + 2ab\, e_1 e_2 + b^2 e_2^2;$
$e_1 e_2 \mapsto ac\, e_1^2 + (ad + bc)\, e_1 e_2 + bd\, e_2^2;$
$e_2^2 \mapsto c^2 e_1^2 + 2cd\, e_1 e_2 + d^2 e_2^2.$

Now we use the rule $\xi v_b = \sum_{a \in B} \xi(r_{ab}) v_a$ to find that indeed $\xi_\alpha\, \mathrm{Sym}^2 E = \mathrm{Sp}(e_1 e_2)$, since the coefficient $ad + bc$ is the only one of the above containing $c_{11} c_{22}(g)$, and hence the only one not mapped to 0 by $\xi_\alpha$. Similarly, we find that $\xi_\alpha\, \mathrm{Sym}^2 E = \mathrm{Sp}(e_1^2)$ and $\xi_\alpha\, \mathrm{Sym}^2 E = \mathrm{Sp}(e_2^2)$ for α = (2, 0) and (0, 2) respectively. Therefore the two decompositions agree.
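The diagonal action in this example generalises: every monomial basis vector of $\mathrm{Sym}^r E$ is a simultaneous eigenvector of the diagonal matrices, with eigenvalue $t_1^{\alpha_1} \cdots t_n^{\alpha_n}$ read off from its content vector. A small Python check (the helper names are mine):

```python
from itertools import combinations_with_replacement

def sym_basis_weights(n, r):
    """Map each monomial basis element e_{i_1}...e_{i_r} (i_1 <= ... <= i_r) of
    Sym^r(E), dim E = n, to its weight alpha: alpha_nu counts occurrences of nu."""
    return {mono: tuple(mono.count(nu) for nu in range(1, n + 1))
            for mono in combinations_with_replacement(range(1, n + 1), r)}

def diag_eigenvalue(t, alpha):
    """Eigenvalue t_1^{alpha_1} ... t_n^{alpha_n} of diag(t_1, ..., t_n)
    on the alpha-weight space."""
    value = 1
    for t_i, a_i in zip(t, alpha):
        value *= t_i ** a_i
    return value
```

For n = r = 2 this reproduces the three one-dimensional weight spaces of Example 4.2.1 with weights (2, 0), (1, 1) and (0, 2).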

Before we carry on with our theory of weight spaces, let us do another example which will be of great importance later.

Example 4.2.2. (Adapted from [9, p. 24]) For each 0 ≤ r ≤ n, let $V = \Lambda^r E$ be the r-th exterior power. We already know that if we let $e_i$ be the standard basis vectors (not to be confused with $e_\kappa$, the evaluation map at κ ∈ KΓ!), then we get a K-basis of V by considering the $\binom{n}{r}$ elements $e_s = e_{i_1} \wedge \cdots \wedge e_{i_r}$ for any s = {i1, . . . , ir} ⊆ {1, . . . , n} such that $i_1 < i_2 < \cdots < i_r$. Now we immediately see that V is a KΓ-module, since we can let g ∈ Γ act on $e_s$ by $g e_s = (g e_{i_1}) \wedge \cdots \wedge (g e_{i_r})$. Furthermore, the multilinearity of the wedge product together with E = Span({$e_i$}) ∈ MK(n, 1) shows us that V ∈ MK(n, r).

Now returning to the theory of weight spaces, we see that if $x(t) \in T_n(K)$ and we define α(s) = α to be the weight containing (i1, . . . , ir), then x(t) will act by $x(t)e_s = t_{i_1} t_{i_2} \cdots t_{i_r} e_s = t_1^{\alpha_1} t_2^{\alpha_2} \cdots t_n^{\alpha_n} e_s$. The strict ordering of the elements of s implies that distinct s, s′ give distinct weights α(s), α(s′). But the direct sum of all weight spaces is V and therefore all weight spaces must have dimension 1 or 0: $V^{\alpha(s)} = K \cdot e_s$ for any s = {i1, . . . , ir} ⊆ {1, . . . , n} with $i_1 < i_2 < \cdots < i_r$, and $V^\alpha = 0$ for all other α ∈ Λ(n, r).

4.3 First Results on Weight Spaces

As before we will let V ∈ MK(n, r) and α ∈ Λ(n, r) in this section.

Proposition 4.3.1. [9, p. 25] Let w ∈ W = G(n). Then the K-spaces $V^\alpha$ and $V^{w(\alpha)}$ are isomorphic.

Proof. (Adapted from [9, p. 25]) Let us fix w ∈ W. Then we have two bases of E given by {e1, . . . , en} and {ew(1), . . . , ew(n)}. Hence there exists a change of basis matrix which sends the former basis to the latter. Let us call this matrix $n_w$. Clearly $n_w \in GL_n(K)$.

Claim. $n_w\, x(t_1, \dots, t_n)\, n_w^{-1} = x(t_{w(1)}, \dots, t_{w(n)})$, for all $t_1, \dots, t_n \in K^*$.

For the left hand side of the expression we assume that a linear transformation T in the basis {$e_i$} is given by $x(t_1, \dots, t_n)$. We know from the basic theory of change of basis matrices that, since $n_w$ is a change of basis matrix from {$e_i$} to {$e_{w(i)}$}, the expression on the left hand side of the equation just equals the same linear transformation T expressed in the basis {$e_{w(i)}$}. But T sends $e_i$ to $t_i \cdot e_i$, hence T will send $e_{w(i)}$ to $t_{w(i)} \cdot e_{w(i)}$. So in the basis {$e_{w(i)}$}, T is given by $x(t_{w(1)}, \dots, t_{w(n)})$, which proves the claim.

Now we consider the map from $V^\alpha$ onto $V^{w(\alpha)}$ which sends $v \mapsto n_w^{-1} v$. This is well defined since $v \in V^\alpha$ implies that

$x(t_1, \dots, t_n)\, n_w^{-1} v = n_w^{-1}\, x(t_{w(1)}, \dots, t_{w(n)})\, v = t_{w(1)}^{\alpha_1} \cdots t_{w(n)}^{\alpha_n}\, n_w^{-1} v = t_1^{\alpha_{w^{-1}(1)}} \cdots t_n^{\alpha_{w^{-1}(n)}}\, n_w^{-1} v.$

Recall here that W acts on Λ(n, r) by $w(\alpha) = (\alpha_{w^{-1}(1)}, \dots, \alpha_{w^{-1}(n)})$. So $n_w^{-1} v \in V^{w(\alpha)}$. Furthermore, multiplying by $n_w^{-1}$ clearly gives an isomorphism of K-spaces. So $V^\alpha$ and $V^{w(\alpha)}$ are isomorphic, as required.

Proposition 4.3.2. [9, p. 25] Let

$0 \longrightarrow V_1 \xrightarrow{\ f\ } V \xrightarrow{\ g\ } V_2 \longrightarrow 0$

be an exact sequence in MK(n, r). Then, since we always have $V^\alpha \subseteq V$, we can construct a naturally induced sequence of K-spaces

$0 \longrightarrow V_1^\alpha \longrightarrow V^\alpha \longrightarrow V_2^\alpha \longrightarrow 0$

by restricting the maps f, g, and this induced sequence is exact.

Proof. (Own working) Let us write f′ for the restriction of f to $V_1^\alpha$ and g′ for the restriction of g to $V^\alpha$. If v ∈ Im(f′), then v = f(w) for some $w \in V_1^\alpha$. Thus, using the fact that f is a KΓ-module homomorphism, we see that $x(t)v = x(t)f(w) = t_1^{\alpha_1} \cdots t_n^{\alpha_n} f(w) = t_1^{\alpha_1} \cdots t_n^{\alpha_n} v$. Hence $v \in V^\alpha$ and we may conclude that $\mathrm{Im}\, f' \subseteq V^\alpha$. Moreover, a similar argument shows that $\mathrm{Im}\, g' \subseteq V_2^\alpha$, hence the induced sequence is well defined. Now, using Corollary 4.2.5 and what we have just proved, we may conclude that $\mathrm{Im}\, f' = \mathrm{Im}\, f \cap V^\alpha$, $\mathrm{Ker}\, g' = \mathrm{Ker}\, g \cap V^\alpha$ and $\mathrm{Im}\, g' = \mathrm{Im}\, g \cap V_2^\alpha$. From these equalities it follows easily that the induced sequence is exact too.

Now suppose r, s are any non-negative integers and V, W are KΓ-modules belonging to MK(n, r) and MK(n, s) respectively. After a moment of thought we realise that $V \otimes W = V \otimes_K W$, regarded as a KΓ-module in the usual way, will belong to MK(n, r + s).

Proposition 4.3.3. [9, p. 25] Let γ ∈ Λ(n, r + s). Then

$(V \otimes W)^\gamma = \bigoplus_{\alpha,\beta} V^\alpha \otimes W^\beta,$

where the sum is over all α ∈ Λ(n, r), β ∈ Λ(n, s) such that α + β = γ.

I have decided to omit the proof of this result, because it is rather fiddly and generally not very interesting.

The last result we will discuss in this section concerns what happens when we extend the field K to a field L containing K. We can then identify $S_K(n, r)$ with a subset of $S_L(n, r)$ by identifying $\xi^K_{i,j}$ with $\xi^L_{i,j}$ for all i, j ∈ I(n, r). Then we see that $\xi^K_\alpha = \xi^L_\alpha$ for all α ∈ Λ(n, r). Thus, if we make $V_L = V \otimes_K L$ into an $S_L(n, r)$-module by 'extension of scalars' and identify V with the subset $V \otimes 1_L$ of $V_L$, then we find:

Proposition 4.3.4. The weight space $V_L^\alpha = \xi^L_\alpha V_L$ is the L-span of the weight space $V^\alpha = \xi^K_\alpha V$. In particular, $\dim_K V^\alpha = \dim_L V_L^\alpha$.

4.4 Characters

Now let us finally do something that will remind us vaguely of representation theory: let us define characters. Suppose V is a left KΓ-module in MK(n, r). For a given α ∈ Λ(n, r) we construct the monomial $X_1^{\alpha_1} X_2^{\alpha_2} \cdots X_n^{\alpha_n}$. This monomial has degree r, where we take $X_1, \dots, X_n$ to be n commuting indeterminates over Q.

Definition 4.4.1. The character (or formal character) of V ∈ MK(n, r) is defined as the polynomial

$\Phi_V(X_1, \dots, X_n) = \sum_{\alpha \in \Lambda(n,r)} (\dim_K V^\alpha) \cdot X_1^{\alpha_1} \cdots X_n^{\alpha_n}.$

So it is clear that ΦV is an element of the polynomial ring Z[X1,...,Xn], but we also note that, since all the monomials appearing in the sum have degree exactly r, ΦV is homogeneous of degree r.

Proposition 4.4.2. [9, p. 26] For w ∈ W = G(n), we have $\Phi_V(X_{w(1)}, \dots, X_{w(n)}) = \Phi_V(X_1, \dots, X_n)$. In other words, ΦV is a symmetric polynomial.

Proof. (Own working) Let us just calculate what $\Phi_V(X_{w(1)}, \dots, X_{w(n)})$ is:

$\Phi_V(X_{w(1)}, \dots, X_{w(n)}) = \sum_{\alpha \in \Lambda(n,r)} (\dim_K V^\alpha) \cdot X_{w(1)}^{\alpha_1} \cdots X_{w(n)}^{\alpha_n}$

$= \sum_{\alpha \in \Lambda(n,r)} (\dim_K V^\alpha) \cdot X_1^{\alpha_{w^{-1}(1)}} \cdots X_n^{\alpha_{w^{-1}(n)}}$

$= \sum_{\alpha \in \Lambda(n,r)} (\dim_K V^{w(\alpha)}) \cdot X_1^{\alpha_1} \cdots X_n^{\alpha_n}$

$= \sum_{\alpha \in \Lambda(n,r)} (\dim_K V^\alpha) \cdot X_1^{\alpha_1} \cdots X_n^{\alpha_n},$

where the last equality is established by Proposition 4.3.1.

Example 4.4.1. (Own working) Recall Example 4.2.1 from Section 4.2.1. Let us try to find the formal character of $\mathrm{Sym}^2 E$, where E is a 2-dimensional K-vector space. In Example 4.2.1 we found the weight space decomposition $\mathrm{Sym}^2 E = \mathrm{Sp}(e_1^2) \oplus \mathrm{Sp}(e_1 e_2) \oplus \mathrm{Sp}(e_2^2)$ with corresponding α-vectors given by (2, 0), (1, 1) and (0, 2) respectively. Hence all of its weight spaces have dimension 1 and we conclude that its formal character is given by

$\Phi_{\mathrm{Sym}^2 E}(X_1, X_2) = X_1^2 + X_1 X_2 + X_2^2.$

Before we go on with the theory of the characters ΦV , let us quickly recall some defini- tions from the theory of symmetric polynomials.

Definition 4.4.3. [12, p. 11] Let λ be any partition of length ≤ n. The monomial symmetric function $m_\lambda$ is defined by

$m_\lambda(x_1, \dots, x_n) = \sum_\alpha x_1^{\alpha_1} \cdots x_n^{\alpha_n},$

where the sum runs over all distinct permutations α = (α1, . . . , αn) of λ = (λ1, . . . , λn).

Hence we can now also write our character ΦV as

$\Phi_V(X_1, \dots, X_n) = \sum_{\lambda \in \Lambda^+(n,r)} (\dim_K V^\lambda) \cdot m_\lambda(X_1, \dots, X_n).$

Recall here that Λ+(n, r) is the set of dominant weights in Λ(n, r).
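As a sanity check, the character of $\mathrm{Sym}^2 E$ above decomposes as $m_{(2,0)} + m_{(1,1)}$. The evaluation of a monomial symmetric function can be sketched as follows (the function name is mine):

```python
from itertools import permutations

def m_lambda(lam, xs):
    """Monomial symmetric polynomial m_lambda at xs = (x_1, ..., x_n): the sum of
    x_1^{a_1}...x_n^{a_n} over all distinct permutations a of lambda (zero-padded)."""
    n = len(xs)
    padded = tuple(lam) + (0,) * (n - len(lam))
    total = 0
    for alpha in set(permutations(padded)):  # set() keeps only distinct rearrangements
        term = 1
        for x, a in zip(xs, alpha):
            term *= x ** a
        total += term
    return total
```

At (X1, X2) = (2, 3) this gives $m_{(2,0)} = 13$ and $m_{(1,1)} = 6$, whose sum 19 agrees with $\Phi_{\mathrm{Sym}^2 E}(2, 3) = 2^2 + 2 \cdot 3 + 3^2$.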

Next I want to prove a number of short propositions about the behaviour of ΦV. We have done almost all of the work for these propositions already in the previous section.

Proposition 4.4.4. [9, p. 26] Suppose 0 −−→ V1 −−→ V −−→ V2 −−→ 0 is an exact sequence in MK (n, r). Then

$\Phi_V = \Phi_{V_1} + \Phi_{V_2}.$

Proof. (Own working) We know from Proposition 4.3.2 that we have an induced short exact sequence given by $0 \longrightarrow V_1^\alpha \longrightarrow V^\alpha \longrightarrow V_2^\alpha \longrightarrow 0$ and therefore $\dim_K V^\alpha = \dim_K V_1^\alpha + \dim_K V_2^\alpha$. The result now follows simply from the definitions of $\Phi_V$, $\Phi_{V_1}$ and $\Phi_{V_2}$.

Proposition 4.4.5. [9, p. 26] Suppose we are given a composition series of V, say V = V0 ⊃ V1 ⊃ V2 ⊃ · · · ⊃ Vk = 0. Then

$\Phi_V = \sum_{\sigma=1}^{k} \Phi_{V_{\sigma-1}/V_\sigma}.$

Proof. (Own working) We obtain the short exact sequences

$0 \longrightarrow V_i \longrightarrow V_{i-1} \longrightarrow V_{i-1}/V_i \longrightarrow 0, \quad \text{for all } i \in \{1, \dots, k\}$

(where the maps are the natural inclusion and quotient map, respectively). The last proposition tells us that $\Phi_{V_{i-1}} = \Phi_{V_i} + \Phi_{V_{i-1}/V_i}$. Also note that $\Phi_{V_k} = \Phi_0 \equiv 0$. So the sum on the right hand side becomes

$\sum_{\sigma=1}^{k} \big(\Phi_{V_{\sigma-1}} - \Phi_{V_\sigma}\big) = \Phi_{V_0} - \Phi_{V_k} = \Phi_V,$

as required.

Proposition 4.4.6. [9, p. 27] For V ∈ MK (n, r) and W ∈ MK (n, s), we have

$\Phi_{V \otimes W} = \Phi_V\, \Phi_W.$

Proof. (Own working) From Proposition 4.3.3 we see that

$\dim_K (V \otimes W)^\alpha = \sum_{\alpha = \beta + \gamma} (\dim_K V^\beta)(\dim_K W^\gamma).$

The required formula follows easily from this.

Recall at this point that from the theory of symmetric polynomials we have the following results.

Definition 4.4.7. [12, p. 12] For any r ≥ 0, we define the r-th elementary symmetric polynomial $e_r$ as the sum of all products of r distinct variables $x_i$, so that $e_0 = 1$ and for r ≥ 1:

$e_r = \sum_{i_1 < i_2 < \cdots < i_r} x_{i_1} x_{i_2} \cdots x_{i_r} = m_{(1^r)}.$

Note that $e_r = 0$ when r > n.

Definition 4.4.8. The elementary symmetric polynomials are also defined by the generating function

$E(t) = \sum_{r \ge 0} e_r t^r = \prod_{i \ge 1} (1 + x_i t).$

It is easy to check that the two definitions coincide (see e.g. [12, p. 12]).
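The coincidence of the two definitions is easy to test numerically. Here is a sketch (helper names mine) computing $e_r$ once as a sum over r-element subsets and once by expanding the product $\prod_i (1 + x_i t)$:

```python
from itertools import combinations

def e_subsets(r, xs):
    """e_r as the sum of products over all r-element subsets of xs (0 when r > n)."""
    total = 0
    for subset in combinations(xs, r):
        term = 1
        for x in subset:
            term *= x
        total += term
    return total

def e_genfunc(xs):
    """[e_0, e_1, ..., e_n] as coefficients of t^r in E(t) = prod_i (1 + x_i t),
    obtained by multiplying out the linear factors one at a time."""
    coeffs = [1]
    for x in xs:
        # multiply the polynomial by (1 + x t): new[k] = old[k] + x * old[k-1]
        coeffs = [c + x * p for c, p in zip(coeffs + [0], [0] + coeffs)]
    return coeffs
```

For the variables (2, 3, 5) both routes give e0 = 1, e1 = 10, e2 = 31, e3 = 30, and the subset sum is 0 once r exceeds the number of variables.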

Definition 4.4.9. For each partition µ of r (i.e. µ1 ≥ · · · ≥ µr ≥ 0 and µ1 +···+µr = r) we define the symmetric function

eµ(X1,...,Xn) = eµ1 . . . eµr .

Definition 4.4.10. Let Sym(n, r) be the ring of all symmetric functions in Z[X1,...,Xn] which are homogeneous of degree r.

Theorem 4.4.11 (Fundamental Theorem of Symmetric Polynomials). [12, p. 13] The ring Sym(n, r) is equal to the Z-span of the $e_\mu$, where µ runs over the partitions of r as above.

Now we return to the main topic of this section: characters. Recall from Example 4.2.2 that we can impose a KΓ-module structure on ΛrE.

Proposition 4.4.12. [9, p. 27] Let µ be a partition of r as before. The symmetric function $e_\mu(X_1, \dots, X_n)$ is the character of the left KΓ-module $\Lambda^{\mu_1} E \otimes \cdots \otimes \Lambda^{\mu_r} E$.

Proof. (Own working) Let us first show that, for 0 ≤ k ≤ n, $e_k$ is the character of the module $\Lambda^k E$. Let $V = \Lambda^k E$. Then

$\Phi_V(X_1, \dots, X_n) = \sum_{\alpha \in \Lambda(n,k)} (\dim_K V^\alpha) \cdot X_1^{\alpha_1} \cdots X_n^{\alpha_n}.$

But in Example 4.2.2 we found that $\dim_K V^\alpha = 1$ if α corresponds to an s = {i1, . . . , ik} ⊆ {1, . . . , n} with $i_1 < \cdots < i_k$, and $\dim_K V^\alpha = 0$ otherwise. So we are really summing over all possible n-tuples α which have k entries with value 1 and all other entries 0. Let S be the set of these n-tuples. Then

$\Phi_V = \sum_{\alpha \in S} X_1^{\alpha_1} \cdots X_n^{\alpha_n} = e_k.$

Note that when k > n the formula works as well, for in this case $V = \Lambda^k E = 0$ and $e_k = 0$ too. For the general case we now see immediately from Proposition 4.4.6 that

$\Phi_{\Lambda^{\mu_1} E \otimes \cdots \otimes \Lambda^{\mu_r} E} = \Phi_{\Lambda^{\mu_1} E} \cdots \Phi_{\Lambda^{\mu_r} E} = e_{\mu_1} \cdots e_{\mu_r} = e_\mu.$

All this work leads us to the beautiful theorem:

Theorem 4.4.13. [9, p. 27] The additive subgroup of Z[X1,...,Xn] which is generated by all characters ΦV for V ∈ MK (n, r) is Sym(n, r). In particular, this additive group is independent of the field K.

Proof. I have called this a theorem, but the name corollary would be more fitting, for we have already done all of the work required to prove it. We know that V ∈ MK(n, r) leads to $\Phi_V \in \mathrm{Sym}(n, r)$ (Proposition 4.4.2), but we also know that for each $e_\mu \in \mathrm{Sym}(n, r)$ there exists a V ∈ MK(n, r) such that $e_\mu = \Phi_V$ (namely $V = \Lambda^{\mu_1} E \otimes \cdots \otimes \Lambda^{\mu_r} E$ by the previous proposition). Hence the two additive groups are equal.

We have proved a big and important theorem about our formal characters, which we defined at the beginning of this section. The only thing we do not know is how our formal characters relate to the natural characters we know from representation theory. By divine intervention it turns out that these two actually relate in a wonderfully natural way, which we will describe now.

First recall the definition of a natural character.

Definition 4.4.14. The natural character $\varphi_V$ of a representation V = (V, ρ) ∈ MK(n, r) is defined by $\varphi_V(g) = \mathrm{trace}\, \rho(g)$, for all g ∈ Γ.

Here recall that the trace of a linear map from a vector space V → V is just defined as the trace of the matrix representing the linear map in any basis of V . So the natural character of V ∈ MK (n, r) is in fact the trace of the invariant matrix R = (rab) in any basis of V . Therefore, we see that ϕV ∈ AK (n, r) when V ∈ MK (n, r). Now we are ready to state and prove the important theorem about the relation between ϕV and ΦV .

Theorem 4.4.15. [9, p. 27] Let V ∈ MK (n, r) and g ∈ GLn(K). Then ϕV (g) = ΦV (ζ1, . . . , ζn), where ζ1, . . . , ζn are the eigenvalues of g.

Proof. (Adapted from [9, p. 27]) Let L be some field containing K. We know from Proposition 4.3.4 at the end of the last section that if we replace V by a module VL = V ⊗ L which lies in ML(n, r), then dimensions of the weight spaces and therefore the character ΦV is unchanged. This process replaces ϕV by a larger function on ΓL = GLn(L), which must coincide with ϕV on ΓK = GLn(K). Therefore we may assume in our proof that K is algebraically closed.

Let C be the n × n matrix $(c_{\mu\nu})$ and let x be an indeterminate. Then we define $f_1, \dots, f_n \in A_K(n, r)$ by

$\det(xI - C) = x^n - f_1 x^{n-1} + \cdots + (-1)^n f_n. \quad (4.1)$

The second definition of the elementary symmetric polynomials (Definition 4.4.8) and the fact that K is algebraically closed now imply that $f_r(g) = e_r(\zeta_1, \dots, \zeta_n)$ for 1 ≤ r ≤ n (here the $\zeta_i$, the eigenvalues of g, are the roots of equation (4.1), of course). Now by Theorem 4.4.13 we may write

$\Phi_V = \sum_\mu b_\mu\, e_1^{\mu_1} \cdots e_r^{\mu_r} \quad \text{with } b_\mu \in \mathbb{Z},$

where we take the sum over the partitions µ of r as before. We can also define the element $\psi \in A_K(n, r)$ by $\psi = \sum_\mu (b_\mu \cdot 1_K)\, f_1^{\mu_1} \cdots f_r^{\mu_r}$. Then we clearly have, for any $g \in \Gamma_K$,

$\Phi_V(\zeta_1, \dots, \zeta_n) = \psi(g). \quad (4.2)$

Now suppose for a moment that g is diagonalizable, which means there is some $z \in \Gamma_K$ such that $zgz^{-1} = \mathrm{diag}(\zeta_1, \dots, \zeta_n)$. We can choose a basis which respects the decomposition $V = \bigoplus_\alpha V^\alpha$, i.e. for each α ∈ Λ(n, r) we have $\dim_K V^\alpha$ consecutive basis elements of V in our new basis which form a complete basis of $V^\alpha$. By definition of the weight spaces, $\mathrm{diag}(\zeta_1, \dots, \zeta_n)$ acts on such a basis element v by $\mathrm{diag}(\zeta_1, \dots, \zeta_n)v = \zeta_1^{\alpha_1} \cdots \zeta_n^{\alpha_n} v$. So $\mathrm{diag}(\zeta_1, \dots, \zeta_n)$, with respect to our new basis of V, is a diagonal matrix having $\dim V^\alpha$ diagonal entries $\zeta_1^{\alpha_1} \cdots \zeta_n^{\alpha_n}$ for each α ∈ Λ(n, r). Since the trace of this last matrix is $\Phi_V(\zeta_1, \dots, \zeta_n)$, we find by taking traces that

$\varphi_V(g) = \varphi_V(zgz^{-1}) = \Phi_V(\zeta_1, \dots, \zeta_n). \quad (4.3)$

Now we are almost there. We have two polynomial functions in the $n^2$ variables $c_{\mu\nu}$, namely ψ and $\varphi_V$, and by equations (4.2) and (4.3) we know that $\psi(g) = \varphi_V(g)$ for all g in the set of diagonalizable matrices. But the set of diagonalizable matrices over an algebraically closed field is dense in the set of all matrices. So since these two continuous functions agree on a dense subset of $\Gamma_K$, they must agree on all of $\Gamma_K$, which concludes the proof.

Example 4.4.2. (Own working) Let us once more return to our running example in this chapter, namely Example 4.2.1. We saw that the matrix $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL_2(K)$ acts on the basis $e_1^2,\ e_1 e_2,\ e_2^2$ of $\mathrm{Sym}^2 E$ as

$\begin{pmatrix} a^2 & 2ab & b^2 \\ ac & ad + bc & bd \\ c^2 & 2cd & d^2 \end{pmatrix}.$

Hence the natural character of this representation equals $\varphi_{\mathrm{Sym}^2 E}(g) = a^2 + ad + bc + d^2$. The eigenvalues of g are given by $\lambda_{1,2} = \tfrac{1}{2}\big(a + d \pm \sqrt{a^2 + 4bc - 2ad + d^2}\big)$. If we plug these back into the formal character $\Phi_{\mathrm{Sym}^2 E}(X_1, X_2) = X_1^2 + X_1 X_2 + X_2^2$ as found in Example 4.4.1, then we find that indeed $\Phi_{\mathrm{Sym}^2 E}(\lambda_1, \lambda_2) = a^2 + ad + bc + d^2$, as expected.
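The agreement in this example boils down to the identity $\lambda_1^2 + \lambda_1\lambda_2 + \lambda_2^2 = (\lambda_1 + \lambda_2)^2 - \lambda_1\lambda_2$, where $\lambda_1 + \lambda_2 = a + d$ and $\lambda_1\lambda_2 = ad - bc$ are the trace and determinant of g. This avoids the square roots entirely, as the following sketch (helper names mine) shows:

```python
def natural_char_sym2(a, b, c, d):
    """phi_{Sym^2 E}(g): trace of the 3x3 matrix by which g = (a b; c d) acts
    on the basis e1^2, e1e2, e2^2."""
    return a * a + (a * d + b * c) + d * d

def formal_char_at_eigenvalues(a, b, c, d):
    """Phi_{Sym^2 E}(l1, l2) = l1^2 + l1*l2 + l2^2, rewritten as
    (l1 + l2)^2 - l1*l2 using the trace and determinant of g."""
    trace, det = a + d, a * d - b * c
    return trace * trace - det
```

The two functions agree on every 2 x 2 matrix, which is exactly Theorem 4.4.15 for this representation.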

We now quote a famous theorem by Frobenius-Schur (see [5, p. 184]).

Theorem 4.4.16. Let Γ be a group and let K be an algebraically closed field. Suppose T1, . . . , Tk are inequivalent irreducible matrix representations of Γ over K and let

$T_r(g) = \big(f^{(r)}_{ij}(g)\big)_{1 \le i,j \le n_r}, \quad \text{for } g \in \Gamma.$

Then the coordinate functions

$\{f^{(r)}_{ij} : 1 \le i, j \le n_r,\ 1 \le r \le k\}$

are linearly independent over K.

From our big theorem and this last theorem we deduce the following important corollary:

Corollary 4.4.17. [9, p. 28] Suppose that Φ1,..., Φt are the characters of a set of mutually non-isomorphic, absolutely irreducible modules V1,...,Vt ∈ MK (n, r). Then Φ1,..., Φt are linearly independent elements of Sym(n, r).

Proof. By Theorem 4.4.16 we know that in this case the coordinate functions are linearly independent, so most certainly the natural characters φ1, . . . , φt of V1,...,Vt, which are simply sums of coordinate functions, will be linearly independent elements of AK (n, r) as well. If there exists a non-trivial relation z1Φ1 + ··· + ztΦt = 0 (with zi ∈ Z), then by Theorem 4.4.15 we find that (z1 · 1K )ϕ1(g) + ··· + (zt · 1K )ϕt(g) = 0 for all g ∈ ΓK , which contradicts Theorem 4.4.16.

4.5 Irreducible modules in MK(n, r)

Hooray! We have finally done enough groundwork (and a little more) to prove our main big theorem. Let us get started. In this section we will always use the lexicographic ordering when we consider the ordering of weights λ ∈ Λ(n, r) or of monomials $X_1^{\lambda_1} \cdots X_n^{\lambda_n}$. This theorem is due to Schur [14, p. 37] in the case K = C.

Theorem 4.5.1. [9, pp. 28-29] Let n, r be given integers with n ≥ 1 and r ≥ 0. Let K be an infinite field. Then:

(i) For each λ ∈ Λ+(n, r) there exists an absolutely irreducible module $F_{\lambda,K}$ in MK(n, r) whose character $\Phi_{\lambda,K}$ has leading term $X_1^{\lambda_1} \cdots X_n^{\lambda_n}$;

(ii) These $\Phi_{\lambda,K}$, for λ ∈ Λ+(n, r), form a Z-basis of Sym(n, r);

(iii) Every irreducible module V ∈ MK(n, r) is isomorphic to $F_{\lambda,K}$ for exactly one λ ∈ Λ+(n, r).

Proof. (Adapted from [9, p. 29]) (i) Suppose we are given a weight λ = (λ1, . . . , λn) ∈ Λ+(n, r). Since λ1 ≥ · · · ≥ λn we can also consider λ as a partition of $\sum_i \lambda_i$. Let µ = (µ1, . . . , µn) be the partition which has as its i-th entry the number of blocks of size i in the partition conjugate to this partition. Then $\mu_i = \lambda_i - \lambda_{i+1}$ and $\mu_n = \lambda_n$. We saw in the previous section that $V = \Lambda^{\mu_1} E \otimes \cdots \otimes \Lambda^{\mu_n} E$ is a KΓ-module with character $\Phi_V = e_1^{\mu_1} \cdots e_n^{\mu_n}$.

Claim. The leading term of ΦV is $X_1^{\lambda_1} \cdots X_n^{\lambda_n}$.

We know that the leading term of $e_i$ is $X_1 X_2 \cdots X_i$ and that the leading term of a product of polynomials is the product of the leading terms of the factors. Hence we find that ΦV has leading term $X_1^{\mu_1 + \cdots + \mu_n} X_2^{\mu_2 + \cdots + \mu_n} \cdots X_n^{\mu_n} = X_1^{\lambda_1} X_2^{\lambda_2} \cdots X_n^{\lambda_n}$. Now we see from Proposition 4.4.5 that there must be some composition factor U of V whose character has leading term $X_1^{\lambda_1} \cdots X_n^{\lambda_n}$. We can take $F_{\lambda,K}$ to be U. The only thing left to do is to prove that U is in fact absolutely irreducible. By Theorem 1.7.4, stated at the end of Chapter 1, it suffices to show that every $K\Gamma_K$-endomorphism θ of U is equal to left multiplication by a scalar in K. Since we assume that ΦU contains the term $X_1^{\lambda_1} \cdots X_n^{\lambda_n}$, we conclude that $\dim U^\lambda = 1$, and since the $K\Gamma_K$-endomorphism θ must map $U^\lambda$ onto itself, there exists some scalar a ∈ K such that $\theta(u) = a \cdot u$ for $u \in U^\lambda$. But the set $U' = \{u \in U : \theta(u) = a \cdot u\}$ is a submodule of U, hence $U = U'$ and θ is equal to the scalar map $a \cdot 1_U$.

(ii) The monomial symmetric functions $m_\lambda$ (see Definition 4.4.3) for λ ∈ Λ+(n, r) form a Z-basis of Sym(n, r). This fact should be intuitively clear, but the very keen reader could also have another look at [12, p. 11]. Using the discussion after Definition 4.4.3 we actually find that we can write $\Phi_{\lambda,K} = m_\lambda + \sum_{\mu < \lambda} z_{\lambda\mu}\, m_\mu$, where the µ ∈ Λ+(n, r) and < stands for the lexicographic ordering of weights mentioned earlier. Note here that by the definition of $\Phi_{\lambda,K}$ all $m_\mu$ with µ > λ have coefficient 0. Therefore the $\Phi_{\lambda,K}$ also form a basis of Sym(n, r).

(iii) Suppose L is an algebraically closed field containing K. Let us write $F^L_{\lambda,K}$ for the module $F_{\lambda,K} \otimes_K L \in M_L(n, r)$ as discussed before Proposition 4.3.4. Since any extension field of L is also an extension field of K, if $F_{\lambda,K}$ is absolutely irreducible, so is $F^L_{\lambda,K}$. But we also know from Proposition 4.3.4 that $F_{\lambda,K}$ and $F^L_{\lambda,K}$ have the same character $\Phi_{\lambda,K}$. Any irreducible module X ∈ ML(n, r), however, must be isomorphic to some $F^L_{\lambda,K}$, since otherwise the characters ΦX and $\Phi_{\lambda,K}$ for all λ ∈ Λ+(n, r) would be linearly independent by Corollary 4.4.17, contradicting (ii) above.

Now let V be any irreducible module in MK(n, r), and let X be an irreducible submodule of $V_L = V \otimes_K L$. By the discussion above there must then be a λ ∈ Λ+(n, r) such that $F^L_{\lambda,K} \cong X$. Therefore we conclude that $\mathrm{Hom}_{S_L(n,r)}(F^L_{\lambda,K}, V_L) \ne 0$. From this it follows immediately that $\mathrm{Hom}_{S_K(n,r)}(F_{\lambda,K}, V) \ne 0$. So by Schur's Lemma $V \cong F_{\lambda,K}$, which completes the proof of the theorem.

Chapter 5

The irreducible characters of GL2

In this chapter we will look at a rather long, explicit example of characters of general linear groups. We will construct all irreducible characters of the groups GL2(Fq), where q is an odd prime power. The general approach taken in this chapter was inspired by the treatment of the material in Fulton and Harris [6, pp. 67-70]. My treatment here is in much greater detail, however, and all proofs and calculations are my own work; I will therefore not state this explicitly every time. The case where q is a power of two can be handled with very similar methods.

I have included this chapter as a beautiful counterbalance to the heavy theory done in the first part of this project and as an example of how to deal with representations and characters of GLn(K) where K is a finite field, rather than an infinite one. Note that these characters are fully understood (see e.g. [12, pp. 137-156]) in the sense that we have found abstract formulae for the irreducible characters of GLn(K), where K is a finite field. Calculating explicit values of these characters should therefore, in theory, be possible, although calculating them for large groups can, in practice, still be challenging. The theory of the irreducible characters of GL2 over a finite field has many applications, most notably in automorphic forms (see e.g. [3]).

5.1 Conjugacy Classes of GL2(Fq)

Before we can start looking at the characters of our group G = GL2(Fq), we need to understand the conjugacy classes of G first. We notice that for a 2 × 2 matrix g ∈ G we have $q^2 - 1$ possibilities for the first column (any non-zero vector will do) and $q^2 - q$ possibilities for the second column (if we pick any vector for the second column which is not a multiple of the vector in the first column, then the determinant of g is non-zero). So the size of G equals

$|GL_2(\mathbb{F}_q)| = (q^2 - 1)(q^2 - q) = (q + 1)q(q - 1)^2.$
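For prime q this order formula can be verified by brute force over Z/qZ (the helper names are mine; for genuine prime powers one would need a proper finite-field implementation):

```python
from itertools import product

def gl2_order_bruteforce(q):
    """Count invertible 2x2 matrices over the field Z/qZ (q prime) by
    checking that the determinant is non-zero mod q."""
    return sum(1 for a, b, c, d in product(range(q), repeat=4)
               if (a * d - b * c) % q != 0)

def gl2_order_formula(q):
    """(q^2 - 1)(q^2 - q) = (q + 1) q (q - 1)^2."""
    return (q + 1) * q * (q - 1) ** 2
```

For q = 3 both give 48, the order of GL2(F3).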

Now let us try to find the different conjugacy classes of G. We know that the conjugacy classes of G are in 1-1 correspondence with assignments of a partition to each irreducible polynomial, such that the sum over these polynomials of the size of the partition times the degree of the polynomial equals 2.


The first case to consider is when we are given the irreducible polynomial (x − λ) for $\lambda \in \mathbb{F}_q^*$. We can either associate the partition {1, 1} to (x − λ) or the partition {2}. In the former case we obtain the conjugacy class of G represented by the matrix $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$. Since this matrix commutes with all other matrices, this conjugacy class has size 1. In the latter case we obtain a conjugacy class represented by the matrix $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$. If we look at the centralizer of this matrix in G, a simple calculation shows that a matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ commutes with $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ if and only if c = 0 and a = d. Therefore the size of the centralizer of $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ is $(q - 1)q$ and the size of the conjugacy class of this matrix must be $|\mathrm{Conj}| = \frac{|G|}{|\mathrm{Cent}|} = \frac{(q+1)q(q-1)^2}{q(q-1)} = q^2 - 1$. In both cases we have q − 1 choices of $\lambda \in \mathbb{F}_q^*$, so we find q − 1 conjugacy classes of each type.

The next case to consider is when we have two distinct irreducible polynomials of degree 1: (x − λ) and (x − µ) for $\lambda, \mu \in \mathbb{F}_q^*$ with λ ≠ µ. They must then both have the partition {1} associated to them. A matrix representative of this conjugacy class of G is given by $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$. Again a simple calculation shows that the only matrices in G which commute with this matrix are given by $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$ for $a, b \in \mathbb{F}_q^*$. This implies that the size of the centralizer of $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ is $(q - 1)^2$. So the size of the conjugacy class must be $\frac{(q+1)q(q-1)^2}{(q-1)^2} = q(q + 1)$. Moreover, since $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ is conjugate to $\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix}$ by $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, but not to any other matrix of this form, we see that we have $\frac{(q-1)(q-2)}{2} = \binom{q-1}{2}$ such classes. This makes sense because we also have $\binom{q-1}{2}$ polynomials of the form (x − λ)(x − µ).

The last case to consider is the conjugacy class corresponding to an irreducible polynomial of degree 2 with associated partition {1}. Suppose g ∈ G belongs to this class. The eigenvalues λ1, λ2 of g then satisfy an irreducible quadratic over $\mathbb{F}_q$, so $\mathbb{F}_q(\lambda_1) \cong \mathbb{F}_{q^2}$. This implies that there exists $a \in \mathbb{F}_q$ such that $\sqrt{a} \notin \mathbb{F}_q$ (i.e. a equals some odd power of a generator of $\mathbb{F}_q^*$, so in fact a itself generates $\mathbb{F}_q^*$, as q − 1 is even). Hence $\lambda_1 = x + y\sqrt{a}$ for some $x \in \mathbb{F}_q$ and $y \in \mathbb{F}_q^*$. From this and the fact that λ2 must be the image of λ1 under the automorphism fixing $\mathbb{F}_q$, we see that $\lambda_2 = x - y\sqrt{a}$. From this we conclude that the characteristic polynomial of g must be equal to

$(z - \lambda_1)(z - \lambda_2) = z^2 - 2xz + (x^2 - ay^2) = (z - x)^2 - ay^2.$

This is the characteristic polynomial of the matrix $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$, which therefore is a representative of the conjugacy class. A slightly more complicated, but still reasonably straightforward, calculation shows that the only matrices that commute with $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$ are matrices of the form $\begin{pmatrix} \alpha & a\beta \\ \beta & \alpha \end{pmatrix}$ (where in this case we will allow β = 0 unless α = 0, of course). Hence the size of the centralizer of the matrix is $q^2 - 1$ and the size of each conjugacy class is $\frac{q(q+1)(q-1)^2}{q^2 - 1} = q(q - 1)$. Since for a as above and any λ1 we have that $\mathbb{F}_q(\lambda_1) \cong \mathbb{F}_q(\sqrt{a})$, a matrix g in this class will always be similar to a matrix of the form $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$, for $x \in \mathbb{F}_q$, $y \in \mathbb{F}_q^*$. Hence these matrices represent all conjugacy classes of this sort. Since $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$ is conjugate to $\begin{pmatrix} x & -ay \\ -y & x \end{pmatrix}$ by any matrix of the form $\begin{pmatrix} \alpha & -a\beta \\ \beta & -\alpha \end{pmatrix}$, but to no other matrix of the form $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$, we see that there are $\frac{q(q-1)}{2}$ of these classes. This makes sense since we have q(q − 1) monic degree two polynomials not divisible by x, and subtracting the polynomials which are reducible (as seen in the previous cases) we find that there are

$q(q-1) - (q-1) - \frac{(q-1)(q-2)}{2} = \frac{q(q-1)}{2}$

irreducible polynomials of degree 2 over $\mathbb{F}_q$.

We can summarise our results in Table 5.1.

Table 5.1: Conjugacy Classes of GL2(Fq)

Representative                                                         Size of class    No. of classes
$\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$                        $1$              $q - 1$
$\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$                        $q^2 - 1$        $q - 1$
$\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$, $\lambda \ne \mu$          $q^2 + q$        $\frac{(q-1)(q-2)}{2}$
$\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$, $\mu \ne 0$       $q^2 - q$        $\frac{q(q-1)}{2}$

This list is complete, since the number of elements of G it accounts for is

$$(q-1) + (q^2-1)(q-1) + (q^2+q)\frac{(q-1)(q-2)}{2} + (q^2-q)\frac{q(q-1)}{2} = q(q+1)(q-1)^2 = |G|.$$
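The class data above can also be double-checked by brute force for a small field. The following sketch is our illustration, not part of the thesis; the choice $q = 3$ is arbitrary. It enumerates $\mathrm{GL}_2(\mathbb{F}_3)$ and partitions it into conjugacy classes, recovering the counts of Table 5.1 ($48$ elements, $q^2-1 = 8$ classes, with sizes $1, 1, 6, 6, 6, 8, 8, 12$):

```python
from itertools import product

q = 3  # any small prime would do

# All invertible 2x2 matrices over F_q, stored as 4-tuples (a, b, c, d).
G = [m for m in product(range(q), repeat=4)
     if (m[0] * m[3] - m[1] * m[2]) % q != 0]

def mul(m, n):
    a, b, c, d = m
    e, f, g2, h = n
    return ((a * e + b * g2) % q, (a * f + b * h) % q,
            (c * e + d * g2) % q, (c * f + d * h) % q)

def inv(m):
    a, b, c, d = m
    det_inv = pow((a * d - b * c) % q, -1, q)  # modular inverse, Python >= 3.8
    return ((d * det_inv) % q, (-b * det_inv) % q,
            (-c * det_inv) % q, (a * det_inv) % q)

# Partition G into conjugacy classes by direct orbit computation.
seen, classes = set(), []
for g in G:
    if g in seen:
        continue
    orbit = {mul(mul(inv(s), g), s) for s in G}
    seen |= orbit
    classes.append(orbit)

sizes = sorted(len(c) for c in classes)
print(len(G), len(classes), sizes)
```

For $q = 3$ this agrees with the table: two central classes of size $1$, three classes of size $q^2-q = 6$, two of size $q^2-1 = 8$ and one of size $q^2+q = 12$.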

5.2 Irreducible Characters of V, Uα and Vα

To get our first irreducible representation we consider the action of $G = \mathrm{GL}_2(\mathbb{F}_q)$ on the $q+1$ elements of the $1$-dimensional projective space $\mathbb{P}^1(\mathbb{F}_q)$. If we denote $\mathbb{P}^1(\mathbb{F}_q)$ by the set of lines $\{l_1, \cdots, l_{q+1}\}$, then multiplying on the left by $g \in G$ gives a permutation of

this set. Hence if we take a $(q+1)$-dimensional vector space with basis $\{e_{l_1}, \cdots, e_{l_{q+1}}\}$, then we get the permutation representation by letting $g$ act on this basis as

$$g \cdot e_{l_i} = e_{g l_i}, \quad \text{for all } i.$$

Since $\sum_{i=1}^{q+1} e_{l_i}$ is fixed by the action of any $g \in G$, we see that the $1$-dimensional trivial representation is contained in this representation. Let the complementary representation be called $V$.

Proposition 5.2.1. The representation $V$ takes the following values on the different conjugacy classes of $G$:

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_V$: & $q$ & $0$ & $1$ & $-1$
\end{tabular}

and $V$ is irreducible.

Proof. To prove the first statement, let us first try to calculate the character values of the permutation representation as defined above. These values for a matrix $g \in G$ are just the number of $l_i$ fixed by the action of $g$ on $\mathbb{P}^1(\mathbb{F}_q)$. Recall that two lines $(a_1 : b_1)$ and $(a_2 : b_2)$ are equal if and only if $a_1 b_2 = b_1 a_2$. Suppose a line is given by the pair $(z_1 : z_2)$. We look at the actions of the different matrices on this line. First, $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} \cdot (z_1 : z_2) = (\lambda z_1 : \lambda z_2) = (z_1 : z_2)$, so $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ fixes all lines in the space. Hence its character will have value $q+1$. For the next one, $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \cdot (z_1 : z_2) = (\lambda z_1 + z_2 : \lambda z_2)$. So this matrix will fix a line $(z_1 : z_2)$ if and only if $(\lambda z_1 + z_2)z_2 = \lambda z_1 z_2 \iff z_2 = 0$. So this matrix will fix exactly one line, namely the line $(1 : 0)$, and the character has value $1$ on this conjugacy class. If we look at the action of the matrix $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ for $\lambda \neq \mu$, we find that it fixes $(z_1 : z_2)$ if and only if $\lambda z_1 z_2 = \mu z_1 z_2$, so if and only if $z_1 = 0$ or $z_2 = 0$, since $\lambda$ and $\mu$ are not equal. So the character of the permutation representation has value $2$ on this conjugacy class. For the last one, we calculate $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix} \cdot (z_1 : z_2) = (\lambda z_1 + a\mu z_2 : \mu z_1 + \lambda z_2)$, so $(z_1 : z_2)$ is fixed by this matrix if and only if

\begin{align*}
\lambda z_1 z_2 + a\mu z_2^2 &= \mu z_1^2 + \lambda z_1 z_2 \\
\iff a\mu z_2^2 &= \mu z_1^2 \\
\iff a z_2^2 &= z_1^2 \quad (\text{since } \mu \neq 0).
\end{align*}

So if $z_2 = 0$, then $z_1 = 0$, but this is a contradiction as $(0 : 0)$ is not a well-defined line in $\mathbb{P}^1(\mathbb{F}_q)$. If, however, $z_2 \in \mathbb{F}_q^*$, then the equation implies that $\frac{z_1}{z_2} = \sqrt{a}$, which is also a contradiction, since we chose $a$ such that $\sqrt{a} \notin \mathbb{F}_q$. So this matrix does not fix any line, hence its character has value $0$.

We now obtain the required values of the character $\chi_V$ from the equation $\chi_V = \chi_{\mathrm{perm}} - \chi_{\mathrm{triv}}$, so by just subtracting $1$ from all the character values we just found for $\chi_{\mathrm{perm}}$. Now to prove that $V$ is an irreducible representation, we just calculate the inner product of $\chi_V$ with itself:
$$|G|(\chi_V, \chi_V) = (q-1)q^2 + q(q+1)\frac{(q-1)(q-2)}{2} + q(q-1)\frac{q(q-1)}{2}(-1)^2$$

\begin{align*}
&= q(q-1)\left( q + \frac{(q+1)(q-2)}{2} + \frac{q(q-1)}{2} \right) \\
&= q(q-1)\left( q + \frac{2q^2 - 2q - 2}{2} \right) \\
&= q(q-1)(q^2 - 1) \\
&= q(q-1)^2(q+1).
\end{align*}

Hence (χV , χV ) = 1 and V is irreducible, as required.
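As an arithmetic sanity check (ours, not part of the thesis), the class-by-class sum in the proof above can be confirmed to collapse to $|G|$ for a whole range of values of $q$; since the identity is polynomial in $q$, checking many integer values confirms the algebra:

```python
# Verify |G|(χ_V, χ_V) = |G| term by term, for many values of q.
for q in range(2, 50):
    order_G = q * (q + 1) * (q - 1) ** 2
    inner = ((q - 1) * q ** 2                        # central classes, value q
             + q * (q + 1) * (q - 1) * (q - 2) // 2  # split diagonal classes, value 1
             + q * (q - 1) * (q * (q - 1) // 2))     # remaining classes, value (-1)^2
    assert inner == order_G, q
print("ok")
```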

5.2.1 The Characters of Uα and Vα

To obtain our next irreducible character, we consider the group $\mathbb{F}_q^*$. We know this is a cyclic group of size $q-1$, so we also know that there exist exactly $q-1$ irreducible characters $\alpha : \mathbb{F}_q^* \to \mathbb{C}^*$, which are all $1$-dimensional and take roots of unity as values in $\mathbb{C}$. Let $\alpha$ be one of these characters and consider the function $\chi_{U_\alpha} : \mathrm{GL}_2(\mathbb{F}_q) \to \mathbb{C}^*$ sending $g \mapsto \alpha(\det g)$.

Lemma 5.2.2. The function χUα is a character of G.

Proof. We first notice that χUα is well-defined, since det g 6= 0 as g ∈ G. Also,

$$\chi_{U_\alpha}(g_1 g_2) = \alpha(\det(g_1 g_2)) = \alpha(\det(g_1)\det(g_2)) = \alpha(\det g_1)\alpha(\det g_2) = \chi_{U_\alpha}(g_1)\chi_{U_\alpha}(g_2).$$

So χUα is a character of G.

Since $\chi_{U_\alpha}(I_2) = \alpha(1) = 1$ we see that $\chi_{U_\alpha}$ is a $1$-dimensional character of $G$, hence it is irreducible. We can now define a new character $\chi_{V_\alpha}$ of $G$ as the character of $V_\alpha := V \otimes U_\alpha$. Since $V_\alpha$ is a product of an irreducible representation and a $1$-dimensional representation, we find that $V_\alpha$ is irreducible for free. The values of the characters of $U_\alpha$ and $V_\alpha$ are given in the following table:

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_{U_\alpha}$: & $\alpha(\lambda)^2$ & $\alpha(\lambda)^2$ & $\alpha(\lambda)\alpha(\mu)$ & $\alpha(\lambda^2 - a\mu^2)$ \\
$\chi_{V_\alpha}$: & $q\alpha(\lambda)^2$ & $0$ & $\alpha(\lambda)\alpha(\mu)$ & $-\alpha(\lambda^2 - a\mu^2)$
\end{tabular}

Note that if we take $\alpha$ to be the trivial character of $\mathbb{F}_q^*$, then $\chi_{U_\alpha}$ will be the trivial character and $V_\alpha = V$. Also note that if $\alpha \neq \alpha'$ are distinct characters of $\mathbb{F}_q^*$, then there exists some $x \in \mathbb{F}_q^*$ such that $\alpha(x) \neq \alpha'(x)$. From this it follows that $\chi_{U_\alpha}\begin{pmatrix} 1 & 0 \\ 0 & x \end{pmatrix} \neq \chi_{U_{\alpha'}}\begin{pmatrix} 1 & 0 \\ 0 & x \end{pmatrix}$, so $U_\alpha \neq U_{\alpha'}$ and therefore $V_\alpha \neq V_{\alpha'}$. So we have found $2q-2$ distinct irreducible characters of $G$, half of which are $1$-dimensional and the others $q$-dimensional.

5.3 The Characters of Wα,β

We have found all fairly straightforward irreducible characters of G. The next natural way to look for irreducible characters of G is by inducing representations and characters from (large) subgroups of G. Before we start doing this we need a proposition which will help us to calculate the character values of these induced characters.

Proposition 5.3.1. Suppose G is any group with subgroup H and let C be a conjugacy class of G. Let W be a representation of H and Ind W be the representation of G induced by W . Suppose C ∩ H decomposes into conjugacy classes D1,D2,...,Dr of H. Then the value of the character of Ind W on an element of C is

$$\chi_{\operatorname{Ind} W}(C) = \frac{|G|}{|H||C|} \sum_{i=1}^r |D_i| \cdot \chi_W(D_i),$$
where $\chi_W(D_i)$ denotes the value of the character of $W$ on an element of the class $D_i$.

Proof. Suppose the left cosets of H in G are {s1H, . . . , skH}. We know from basic representation theory that

$$\chi_{\operatorname{Ind} W}(g) = \sum \chi_W(s^{-1}gs),$$
where the sum runs over all $s_iH$ for which $g(s_iH) = s_iH$ and we pick some element $s \in s_iH$ for each of these cosets. We notice that $gs_iH = s_iH$ if and only if $s_i^{-1}gs_i \in H$. So if $s$ is as above, then $s = s_ih$ for some $h \in H$. Therefore $s^{-1}gs = (s_ih)^{-1}g(s_ih) = h^{-1}s_i^{-1}gs_ih$. Hence $\chi_W(s^{-1}gs) = \chi_W(s_i^{-1}gs_i)$ and $s^{-1}gs \in H$ if and only if $s_i^{-1}gs_i \in H$. We can rewrite the sum above as
$$\chi_{\operatorname{Ind} W}(g) = \frac{1}{|H|} \sum_{\substack{s \in G \\ s^{-1}gs \in H}} \chi_W(s^{-1}gs).$$

Here we divide by |H|, since we now sum over all s ∈ siH.

Additionally, we assume $g \in C$. This means that $s^{-1}gs \in C$ as $C$ is a conjugacy class in $G$. So if we take $s$ as above, then $s^{-1}gs \in C \cap H$. Conversely, for every $h \in H \cap C$ there exists an $s \in G$ such that $s^{-1}gs = h$. We ask the natural question: for how many $s \in G$ does this hold? We have $s_1^{-1}gs_1 = h = s_2^{-1}gs_2$ if and only if $s_2s_1^{-1} \in \operatorname{Cent}_G(g)$. This happens if and only if $s_2 = cs_1$ for some $c \in \operatorname{Cent}_G(g)$. For different $c \in \operatorname{Cent}_G(g)$ we get different $s_2$ (since $c_1s_1 = c_2s_1 \Rightarrow c_1 = c_2$). So we find that $s^{-1}gs = h$ for exactly $|\operatorname{Cent}_G(g)| = \frac{|G|}{|\operatorname{Conj}_G(g)|} = \frac{|G|}{|C|}$ different $s \in G$. Hence every element of $H \cap C$ is reached $\frac{|G|}{|C|}$ times by the sum above. Since $H \cap C$ decomposes into conjugacy classes $D_1, \ldots, D_r$, we thus find that the sum above can be rewritten as
$$\chi_{\operatorname{Ind} W}(g) = \frac{|G|}{|H||C|} \sum_{i=1}^r |D_i| \cdot \chi_W(D_i),$$
as required.
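Proposition 5.3.1 is easy to exercise computationally on a toy case. The sketch below is our illustration (not from the thesis): it takes $G = S_3$, $H = A_3$ and a non-trivial character $\chi$ of $H$; the induced character comes out as $(2, 0, -1)$ on the classes of sizes $1, 3, 2$, which is the familiar $2$-dimensional irreducible character of $S_3$:

```python
from itertools import permutations
import cmath

def compose(s, t):                     # permutation product: (s∘t)(i) = s[t[i]]
    return tuple(s[i] for i in t)

def inverse(s):
    r = [0] * len(s)
    for i, v in enumerate(s):
        r[v] = i
    return tuple(r)

G = list(permutations(range(3)))       # S_3
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # A_3, cyclic of order 3
w = cmath.exp(2j * cmath.pi / 3)
chi = {(0, 1, 2): 1, (1, 2, 0): w, (2, 0, 1): w ** 2}  # non-trivial character of H

# Conjugacy classes of G, by direct orbit computation.
seen, classes = set(), []
for g in G:
    if g not in seen:
        orbit = {compose(compose(inverse(s), g), s) for s in G}
        seen |= orbit
        classes.append(orbit)

# H is abelian, so H ∩ C splits into singleton H-classes, and the proposition
# reads χ_Ind(C) = |G| / (|H||C|) · Σ_{h ∈ H∩C} χ(h).
ind = {len(C): len(G) / (len(H) * len(C)) * sum(chi[h] for h in C if h in H)
       for C in classes}
for size in sorted(ind):
    print(size, complex(round(ind[size].real, 6), round(ind[size].imag, 6)))
```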

Armed with this proposition, we move our attention to the subgroup B ≤ G, where B is defined as the subgroup of upper triangular matrices in G. So

$$B := \left\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \in G : a, b, c \in \mathbb{F}_q \right\}.$$

Since we have $q-1$ possibilities for the first column and $q^2 - q$ possibilities for the second (it may not lie in the span of the first!), we see that $|B| = q(q-1)^2$. Suppose we have two irreducible characters $\alpha, \beta$ of $\mathbb{F}_q^*$; we define a character of a representation $W'_{\alpha,\beta}$ of $B$ by
$$\begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \mapsto \alpha(a)\beta(c).$$
We leave it as a trivial check for the reader that this is indeed a character of $B$. Let us denote $W_{\alpha,\beta} = \operatorname{Ind}_B^G W'_{\alpha,\beta}$, i.e. the representation of $G$ induced by $W'_{\alpha,\beta}$. We are interested in the values of the characters of $W_{\alpha,\beta}$, so by the above proposition we need to understand how the conjugacy classes of $G$ decompose in $B$.

The matrices of the class $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ lie in $B$ and since their conjugacy class in $G$ has size $1$, it will remain unchanged in $B$. Hence by the proposition
$$\chi_{W_{\alpha,\beta}}\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = \frac{|G|}{|B|} \chi_{W'_{\alpha,\beta}}\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = (q+1)\alpha(\lambda)\beta(\lambda).$$

For the class $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ we immediately notice that this matrix is not conjugate to $\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix}$ anymore in $B$, as $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \notin B$. From

$$\begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix}^{-1} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix} = \begin{pmatrix} \lambda & \frac{\beta}{\alpha}(\lambda - \mu) \\ 0 & \mu \end{pmatrix}$$

and the fact that $\lambda \neq \mu$, we see that the size of the conjugacy class of $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ in $B$ is $q$. In the class of $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ in $G$, the only two elements of this form are $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ and $\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix}$, so we get

\begin{align*}
\chi_{W_{\alpha,\beta}}\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} &= \frac{|G|}{|B||C|} \left( q \cdot \chi_{W'_{\alpha,\beta}}\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} + q \cdot \chi_{W'_{\alpha,\beta}}\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix} \right) \\
&= \frac{q+1}{q(q+1)} \bigl( q\alpha(\lambda)\beta(\mu) + q\alpha(\mu)\beta(\lambda) \bigr) \\
&= \alpha(\lambda)\beta(\mu) + \alpha(\mu)\beta(\lambda).
\end{align*}

For the class $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$, consider

$$\begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix}^{-1} \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix} = \begin{pmatrix} \lambda & \frac{\gamma}{\alpha} \\ 0 & \lambda \end{pmatrix}.$$

So the size of the conjugacy class in $B$ is $q-1$. Moreover, we spot that a conjugate
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
lies in $B$ if and only if $c = 0$, so the matrices above are in fact all $G$-conjugates of this class which lie in $B$. Hence
$$\chi_{W_{\alpha,\beta}}\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} = \frac{q+1}{|C|}(q-1)\chi_{W'_{\alpha,\beta}}\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} = \alpha(\lambda)\beta(\lambda).$$

The class $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ always has empty intersection with $B$, since $\mu \neq 0$. The value of the induced character on this class will therefore be $0$.

Summarizing our results we obtain the following table:

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_{W_{\alpha,\beta}}$: & $(q+1)\alpha(\lambda)\beta(\lambda)$ & $\alpha(\lambda)\beta(\lambda)$ & $\alpha(\lambda)\beta(\mu) + \alpha(\mu)\beta(\lambda)$ & $0$
\end{tabular}

From this table we see easily that $W_{\alpha,\beta} \cong W_{\beta,\alpha}$ and that $W_{\alpha,\alpha} \cong U_\alpha \oplus V_\alpha$. The following result is much less trivial.

Proposition 5.3.2. For α 6= β, Wα,β is irreducible.

Proof. Before we start this proof, recall that since $\alpha$ and $\beta$ are irreducible characters of a cyclic group of size $q-1$, they take $(q-1)$th roots of unity as their values; the trivial character takes the value $1$ everywhere. The absolute value of one of these characters is therefore always $1$. Note that, since $\alpha \neq \beta$, $\gamma := \frac{\alpha}{\beta}$ is a non-trivial character of $\mathbb{F}_q^*$. We will try to evaluate the inner product

$$|G| \cdot (\chi_{W_{\alpha,\beta}}, \chi_{W_{\alpha,\beta}}).$$

The classes of the form $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ each have size $1$, so they contribute to the inner product
$$\sum_{\lambda \in \mathbb{F}_q^*} (q+1)^2 |\alpha(\lambda)|^2 |\beta(\lambda)|^2 = \sum_{\lambda \in \mathbb{F}_q^*} (q+1)^2 = (q-1)(q+1)^2.$$

The classes of the form $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ each have size $q^2 - 1$, so they contribute to the inner product
$$\sum_{\lambda \in \mathbb{F}_q^*} (q^2-1) |\alpha(\lambda)|^2 |\beta(\lambda)|^2 = \sum_{\lambda \in \mathbb{F}_q^*} (q^2-1) = (q-1)(q^2-1).$$
The classes of the form $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ are slightly trickier. These classes all have size $q(q+1)$,

but since $\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix}$ also belongs to this class, we should evaluate the sum

\begin{align*}
\sum_{\substack{\lambda, \mu \in \mathbb{F}_q^* \\ \lambda > \mu}} q(q+1) |\alpha(\lambda)\beta(\mu) + \alpha(\mu)\beta(\lambda)|^2 &= \frac{q(q+1)}{2} \sum_{\substack{\lambda, \mu \in \mathbb{F}_q^* \\ \lambda \neq \mu}} |\alpha(\lambda)\beta(\mu) + \alpha(\mu)\beta(\lambda)|^2 \\
&= \frac{q(q+1)}{2} \sum_{\substack{\lambda, z \in \mathbb{F}_q^* \\ z \neq 0, 1}} |\alpha(\lambda)\beta(\lambda z) + \alpha(\lambda z)\beta(\lambda)|^2 \\
&= \frac{q(q+1)}{2} \sum_{\substack{\lambda, z \in \mathbb{F}_q^* \\ z \neq 0, 1}} |\alpha(\lambda)\beta(\lambda)|^2 |\beta(z) + \alpha(z)|^2 \\
&= \frac{q(q+1)(q-1)}{2} \sum_{z \neq 0, 1} |\gamma(z) + 1|^2.
\end{align*}

Recall here that we defined $\gamma := \frac{\alpha}{\beta}$. We see that $|1 + \gamma(z)|^2 = (1 + \gamma(z))(1 + \overline{\gamma(z)}) = 2(1 + \Re(\gamma(z)))$ and
$$\sum_{z \neq 0, 1} \Re(\gamma(z)) = \Re(-\gamma(1)) = -1,$$
where the first equality is justified since the values of the non-trivial character $\gamma$ over all of $\mathbb{F}_q^*$ sum to $0$. So the total sum becomes
\begin{align*}
\frac{q(q+1)(q-1)}{2} \sum_{z \neq 0, 1} |\gamma(z) + 1|^2 &= q(q+1)(q-1) \sum_{z \neq 0, 1} \bigl( 1 + \Re(\gamma(z)) \bigr) \\
&= q(q+1)(q-1) \Bigl( q - 2 + \sum_{z \neq 0, 1} \Re(\gamma(z)) \Bigr) \\
&= q(q+1)(q-1)(q-3).
\end{align*}

Adding all contributions of the different classes together, we find that
$$(q-1)(q+1)^2 + (q-1)(q^2-1) + q(q+1)(q-1)(q-3) = q(q-1)^2(q+1) = |G|.$$
So the character is indeed irreducible.
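The same inner product can also be verified numerically for one concrete choice of data. The snippet below is our own check, not the author's; the choices $q = 5$, $\alpha$ of order $4$ and $\beta$ trivial are arbitrary. It recomputes $(\chi_{W_{\alpha,\beta}}, \chi_{W_{\alpha,\beta}})$ directly from the character-table row and obtains $1$:

```python
q = 5
gen = 2                                          # 2 generates F_5^*: 2, 4, 3, 1
log = {pow(gen, k, q): k for k in range(q - 1)}  # discrete logarithm base 2
alpha = lambda x: 1j ** log[x]                   # a character of order 4
beta = lambda x: 1                               # the trivial character (≠ α)

total = 0.0
for lam in range(1, q):                          # central classes: size 1
    total += abs((q + 1) * alpha(lam) * beta(lam)) ** 2
for lam in range(1, q):                          # non-semisimple classes: size q^2 - 1
    total += (q * q - 1) * abs(alpha(lam) * beta(lam)) ** 2
for lam in range(1, q):                          # split classes {λ, μ}: size q^2 + q
    for mu in range(lam + 1, q):
        total += (q * q + q) * abs(alpha(lam) * beta(mu) + alpha(mu) * beta(lam)) ** 2
# the last family of classes contributes 0, since χ_{W_{α,β}} vanishes there
order_G = q * (q + 1) * (q - 1) ** 2
print(round(total / order_G, 6))
```

The printed value is $1.0$ up to floating-point rounding, as the proposition predicts.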

Thus we have found $\frac{1}{2}(q-1)(q-2)$ new irreducible characters of $G$.

5.4 The Characters of Ind ϕ

To find more irreducible characters we now turn our attention to the subgroup $K$ consisting of the matrices
$$\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix} \in G,$$
where $a$ is as before (i.e. fixed such that $\sqrt{a} \notin \mathbb{F}_q$), but we will allow $\mu = 0$ now. We leave it as an easy check that these matrices indeed form a subgroup.

Proposition 5.4.1. We have $K \cong \mathbb{F}_{q^2}^*$, so $K$ is a cyclic subgroup of $G$ of size $q^2 - 1$.

Proof. We define a map $K \to \mathbb{F}_{q^2}^*$ by

$$\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix} \mapsto \lambda + \mu\sqrt{a} \in \mathbb{F}_{q^2}^*.$$
Since $\lambda^2 - a\mu^2 = 0 \Rightarrow (\lambda + \mu\sqrt{a}) = 0$ or $(\lambda - \mu\sqrt{a}) = 0 \Rightarrow \lambda = \mu = 0$, we can choose any values for $\lambda$ and $\mu$ as long as they are not simultaneously zero. So the map given above is a bijection. It is also a homomorphism, as

\begin{align*}
\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix} \begin{pmatrix} \lambda' & a\mu' \\ \mu' & \lambda' \end{pmatrix} &= \begin{pmatrix} \lambda\lambda' + a\mu\mu' & a(\lambda\mu' + \mu\lambda') \\ \lambda\mu' + \mu\lambda' & \lambda\lambda' + a\mu\mu' \end{pmatrix} \\
&\mapsto (\lambda\lambda' + a\mu\mu') + (\lambda\mu' + \mu\lambda')\sqrt{a} \\
&= (\lambda + \mu\sqrt{a})(\lambda' + \mu'\sqrt{a}).
\end{align*}

Hence it is an isomorphism.
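For a concrete instance of this isomorphism, one can check directly by machine that the matrices in question form a cyclic group of the right order. This is our example, not the thesis's: we take $q = 3$ and $a = 2$, which is a non-square modulo $3$, and represent each matrix by its pair $(\lambda, \mu)$:

```python
from itertools import product

q, a = 3, 2                      # a = 2 is a non-square modulo 3
K = [(lam, mu) for lam, mu in product(range(q), repeat=2) if (lam, mu) != (0, 0)]

def mul(x, y):
    # [[λ, aμ],[μ, λ]]·[[λ', aμ'],[μ', λ']] has diagonal λλ'+aμμ'
    # and off-diagonal pair λμ'+μλ', matching the computation in the proof.
    (l1, m1), (l2, m2) = x, y
    return ((l1 * l2 + a * m1 * m2) % q, (l1 * m2 + m1 * l2) % q)

def order(x):                    # multiplicative order; (1, 0) is the identity
    y, n = x, 1
    while y != (1, 0):
        y, n = mul(y, x), n + 1
    return n

assert all(mul(x, y) in K for x in K for y in K)   # closed under multiplication
assert len(K) == q * q - 1
assert any(order(x) == q * q - 1 for x in K)       # an element of full order exists
print("K is cyclic of order", q * q - 1)
```

Closure holds because $\det = \lambda^2 - a\mu^2$ is multiplicative and non-zero on $K$, exactly as in the proof above.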

From this proposition it follows that we have exactly $q^2 - 1$ one-dimensional irreducible representations $\varphi : K \cong \mathbb{F}_{q^2}^* \to \mathbb{C}^*$. We will now try to calculate the values of the character of $\operatorname{Ind}\varphi$, the representation of $G$ induced by the representation $\varphi$ of $K$. All classes of the form $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ lie in $K$ and are of size $1$, so

$$\chi_{\operatorname{Ind}\varphi}\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = \frac{|G|}{|K|} \varphi\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = q(q-1)\varphi(\lambda),$$
where in the last equality we have used the isomorphism from the proposition.

The classes of the form $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ never lie in $K$, so
$$\chi_{\operatorname{Ind}\varphi}\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} = 0.$$
Similarly,
$$\chi_{\operatorname{Ind}\varphi}\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} = 0.$$

The only classes which we have not yet looked at are the classes of the form $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$, where $y \neq 0$. We first note that this matrix is not conjugate to $\begin{pmatrix} x & -ay \\ -y & x \end{pmatrix}$ in $K$, since $\begin{pmatrix} \lambda & -a\mu \\ \mu & -\lambda \end{pmatrix}$ does not lie in $K$ for any $\lambda, \mu \in \mathbb{F}_q$. We also know that these two matrices are the only two matrices of this form in the conjugacy class of $\begin{pmatrix} x & ay \\ y & x \end{pmatrix}$. We calculate that
$$\begin{pmatrix} p & aq \\ q & p \end{pmatrix}^{-1} \begin{pmatrix} x & ay \\ y & x \end{pmatrix} \begin{pmatrix} p & aq \\ q & p \end{pmatrix} = \begin{pmatrix} x & ay \\ y & x \end{pmatrix}$$
for any matrix $\begin{pmatrix} p & aq \\ q & p \end{pmatrix}$ in $K$. So the two matrices above each form a conjugacy class of size $1$ in $K$. Hence we find that
\begin{align*}
\chi_{\operatorname{Ind}\varphi}\begin{pmatrix} x & ay \\ y & x \end{pmatrix} &= \frac{q(q-1)}{|C|} \left( \varphi\begin{pmatrix} x & ay \\ y & x \end{pmatrix} + \varphi\begin{pmatrix} x & -ay \\ -y & x \end{pmatrix} \right) \\
&= \varphi(x + y\sqrt{a}) + \varphi(x - y\sqrt{a}) \\
&= \varphi(x + y\sqrt{a}) + \varphi(x + y\sqrt{a})^q,
\end{align*}

where in the last line we have used that $x \mapsto x^q$ is an automorphism of $\mathbb{F}_{q^2}$ which fixes $\mathbb{F}_q$. It must therefore send $x + y\sqrt{a} \mapsto x - y\sqrt{a}$.

Summarising our results, we obtain the following table:

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_{\operatorname{Ind}\varphi}$: & $q(q-1)\varphi(\lambda)$ & $0$ & $0$ & $\varphi(\lambda + \mu\sqrt{a}) + \varphi(\lambda + \mu\sqrt{a})^q$
\end{tabular}

Proposition 5.4.2. For the character $\chi_{\operatorname{Ind}\varphi}$, we have
$$(\chi_{\operatorname{Ind}\varphi}, \chi_{\operatorname{Ind}\varphi}) = \begin{cases} q & \text{if } \varphi = \varphi^q; \\ q-1 & \text{if } \varphi \neq \varphi^q. \end{cases}$$

Proof. When we evaluate the inner product before normalizing, we get

$$\sum_{\lambda \in \mathbb{F}_q^*} q^2(q-1)^2 |\varphi(\lambda)|^2 + \frac{1}{2} \sum_{\substack{\lambda, \mu \in \mathbb{F}_q \\ \mu \neq 0}} q(q-1) |\varphi(\lambda + \mu\sqrt{a}) + \varphi^q(\lambda + \mu\sqrt{a})|^2.$$

Here we need the factor $\frac{1}{2}$ in the second sum, since $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ and $\begin{pmatrix} \lambda & -a\mu \\ -\mu & \lambda \end{pmatrix}$ lie in the same conjugacy class in $G$, so in this sum we are summing over each conjugacy class twice.

The first of these two sums clearly equals $q^2(q-1)^3$, since the values $\varphi$ takes in $\mathbb{C}$ are $(q^2-1)$th roots of unity. We define a new character $\gamma$ of $\mathbb{F}_{q^2}^*$ as $\gamma := \frac{\varphi}{\varphi^q}$. Using the same trick as before, the second sum now becomes

\begin{align*}
\frac{1}{2} q(q-1) \sum_{\mu \neq 0} |\gamma(\lambda + \mu\sqrt{a}) + 1|^2 &= q(q-1) \sum_{\mu \neq 0} \bigl( 1 + \Re(\gamma(\lambda + \mu\sqrt{a})) \bigr) \\
&= q^2(q-1)^2 + q(q-1) \, \Re\Bigl( \sum_{\mu \neq 0} \gamma(\lambda + \mu\sqrt{a}) \Bigr).
\end{align*}

Hence if $\varphi = \varphi^q$, then $\gamma$ is the trivial character, so this sum will simply equal $2q^2(q-1)^2$. In this case we find that
$$(\chi_{\operatorname{Ind}\varphi}, \chi_{\operatorname{Ind}\varphi}) = \frac{q^2(q-1)^3 + 2q^2(q-1)^2}{|G|} = q.$$
If $\varphi \neq \varphi^q$, it is slightly more complicated. Now $\gamma$ is not the trivial character, so we see that
$$\sum_{\mu \neq 0} \gamma(\lambda + \mu\sqrt{a}) = -\sum_{\mu = 0} \gamma(\lambda + \mu\sqrt{a}) = -\sum_{\lambda \in \mathbb{F}_q^*} \gamma(\lambda).$$

To evaluate this sum, we will have to be a bit more explicit. Suppose $\zeta$ is a generator of the cyclic group $\mathbb{F}_{q^2}^*$ and let $\omega$ be a primitive $(q^2-1)$th root of unity. Then we must have $\varphi(\zeta) = \omega^a$ for some integer $0 \leq a \leq q^2-2$. From this it follows that $\gamma(\zeta) = \omega^{a(q-1)}$. Also notice that $\mathbb{F}_q^*$ forms a cyclic subgroup of size $q-1$ inside $\mathbb{F}_{q^2}^*$, so $\zeta^k \in \mathbb{F}_q^*$ iff $q+1 \mid k$. So

$$-\sum_{\lambda \in \mathbb{F}_q^*} \gamma(\lambda) = -\sum_{\substack{k=0 \\ q+1 \mid k}}^{q^2-2} \gamma(\zeta^k) = -\sum_{n=0}^{q-2} \gamma(\zeta^{(q+1)n}) = -\sum_{n=0}^{q-2} \omega^{a(q-1)(q+1)n} = -\sum_{n=0}^{q-2} 1 = -(q-1).$$

So in this case the contribution of the second sum is

$$q(q-1)\bigl\{ q(q-1) - (q-1) \bigr\} = q(q-1)^3.$$

We find that in this case
$$(\chi_{\operatorname{Ind}\varphi}, \chi_{\operatorname{Ind}\varphi}) = \frac{q^2(q-1)^3 + q(q-1)^3}{|G|} = q-1,$$
as required.

We have just proved that none of these characters is irreducible, so the reader might wonder why we have proceeded with this tedious exercise. We have done this because the characters of $\operatorname{Ind}\varphi$, for $\varphi \neq \varphi^q$, together with some of the characters we have seen already, lead to new irreducible characters. Let us therefore count how many distinct $\operatorname{Ind}\varphi$ we have such that $\varphi \neq \varphi^q$. If we write, as above, $\varphi(\zeta) = \omega^a$, then $\varphi^q \neq \varphi$ if and only if $\omega^{a(q-1)} \neq 1 \iff q+1 \nmid a$. The number of $a$ such that $q+1 \mid a$ is exactly $q-1$, hence there are $q^2 - 1 - (q-1) = q(q-1)$ values of $a$ such that $q+1 \nmid a$. So we have $q(q-1)$ characters $\varphi$ such that $\varphi \neq \varphi^q$. We also notice that $\operatorname{Ind}\varphi \cong \operatorname{Ind}\varphi^q$, and that these representations are non-isomorphic otherwise, so we have exactly $\frac{q(q-1)}{2}$ distinct $\operatorname{Ind}\varphi$ such that $\varphi \neq \varphi^q$.
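The counting argument above is elementary, and a short check of our own confirms it for a few values of $q$:

```python
# Writing φ(ζ) = ω^a with 0 ≤ a ≤ q^2 - 2, we have φ = φ^q exactly when
# (q+1) | a, so the number of remaining exponents should be q(q-1).
for q in [2, 3, 4, 5, 7, 8, 9, 11, 13]:
    n = sum(1 for a in range(q * q - 1) if a % (q + 1) != 0)
    assert n == q * (q - 1), q
print("ok")
```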

Proposition 5.4.3. The character $\chi_\varphi := \chi_{V \otimes W_{\alpha,1}} - \chi_{W_{\alpha,1}} - \chi_{\operatorname{Ind}\varphi}$ (where $\varphi$ is a character of $\mathbb{F}_{q^2}^*$ as before, $\varphi \neq \varphi^q$ and $\alpha = \varphi|_{\mathbb{F}_q^*}$) takes the values

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_\varphi$: & $(q-1)\alpha(\lambda)$ & $-\alpha(\lambda)$ & $0$ & $-(\varphi(\lambda + \mu\sqrt{a}) + \varphi(\lambda + \mu\sqrt{a})^q)$
\end{tabular}

and is irreducible.

Proof. We will leave the check that this character takes the values given in the table as a trivial exercise for the reader. It is also not hard to see that the contribution of the first set of classes to the inner product $(\chi_\varphi, \chi_\varphi)$ is
$$\sum_{\lambda \in \mathbb{F}_q^*} (q-1)^2 |\alpha(\lambda)|^2 = (q-1)^3.$$

The second set of classes will contribute
$$(q^2-1) \sum_{\lambda \in \mathbb{F}_q^*} |\alpha(\lambda)|^2 = (q+1)(q-1)^2.$$

Table 5.2: The Irreducible Characters of GL2(Fq)

\begin{tabular}{ccccc}
 & $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ & $\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$ & $\begin{pmatrix} \lambda & a\mu \\ \mu & \lambda \end{pmatrix}$ \\
$\chi_{U_\alpha}$: & $\alpha(\lambda)^2$ & $\alpha(\lambda)^2$ & $\alpha(\lambda)\alpha(\mu)$ & $\alpha(\lambda^2 - a\mu^2)$ \\
$\chi_{V_\alpha}$: & $q\alpha(\lambda)^2$ & $0$ & $\alpha(\lambda)\alpha(\mu)$ & $-\alpha(\lambda^2 - a\mu^2)$ \\
$\chi_{W_{\alpha,\beta}}$: & $(q+1)\alpha(\lambda)\beta(\lambda)$ & $\alpha(\lambda)\beta(\lambda)$ & $\alpha(\lambda)\beta(\mu) + \alpha(\mu)\beta(\lambda)$ & $0$ \\
$\chi_\varphi$: & $(q-1)\varphi(\lambda)$ & $-\varphi(\lambda)$ & $0$ & $-(\varphi(\zeta) + \varphi(\zeta)^q)$
\end{tabular}
where $\zeta = \lambda + \mu\sqrt{a}$.

And the last set of classes contributes
$$\frac{1}{2} \sum_{\substack{\lambda, \mu \in \mathbb{F}_q \\ \mu \neq 0}} q(q-1) |\varphi(\lambda + \mu\sqrt{a}) + \varphi^q(\lambda + \mu\sqrt{a})|^2 = q(q-1)^3,$$
as before.

So we find that
$$(\chi_\varphi, \chi_\varphi) = \frac{(q-1)^3 + (q+1)(q-1)^2 + q(q-1)^3}{|G|} = 1.$$
Hence $\chi_\varphi$ is irreducible, as required.
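As a final piece of bookkeeping (our own cross-check, not the author's), the four families of characters really do account for all of $G$: their number matches the class count $q^2 - 1$, and the sum of the squares of their degrees is $|G|$, as it must be for a complete set of irreducibles:

```python
for q in range(2, 60):
    order_G = q * (q + 1) * (q - 1) ** 2
    num_classes = (q - 1) + (q - 1) + (q - 1) * (q - 2) // 2 + q * (q - 1) // 2
    num_chars = (2 * q - 2) + (q - 1) * (q - 2) // 2 + q * (q - 1) // 2
    # degrees: U_α → 1, V_α → q, W_{α,β} → q + 1, χ_φ → q - 1
    degree_sum = ((q - 1) * 1 ** 2 + (q - 1) * q ** 2
                  + ((q - 1) * (q - 2) // 2) * (q + 1) ** 2
                  + (q * (q - 1) // 2) * (q - 1) ** 2)
    assert num_chars == num_classes == q * q - 1, q
    assert degree_sum == order_G, q
print("ok")
```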

It is clear that the characters $\chi_\varphi = \chi_{\varphi^q}$ and that they are distinct otherwise. So we have found $\frac{q(q-1)}{2}$ new irreducible characters. This means the total number of irreducible characters is
$$2q - 2 + \frac{(q-1)(q-2)}{2} + \frac{q(q-1)}{2},$$
which equals the number of conjugacy classes of $G$. So we have found all irreducible characters of $G$ and its character table is given in Table 5.2.

Conclusion

In summary, the first chapter of this project concerned some elementary coalgebra theory. We dualised the known definitions of an algebra and a module to obtain definitions of a coalgebra and a comodule. We saw that the dual space to a coalgebra is naturally an algebra itself and we looked at some basic objects such as homomorphisms of coalgebras, subcoalgebras and bialgebras.

In the second chapter we introduced finitary functions. We showed that the space of finitary functions F = F (KΓ) is naturally a K-bialgebra. Next we introduced the coefficient functions of a finite-dimensional representation and we showed that these form a K-subcoalgebra of F .

In the third chapter we introduced polynomial functions on $\Gamma = \mathrm{GL}_n(K)$ and the category $M_K(n, r)$. We continued by introducing the Schur algebra $S_K(n, r)$ and the evaluation map $e : K\Gamma \to S_K(n, r)$. We used this map to establish an equivalence of categories between $M_K(n, r)$ and $\operatorname{mod}(S_K(n, r))$, and we applied these results to the module $E^{\otimes r}$. We showed that $S_K(n, r) \cong \operatorname{End}_{KG(r)}(E^{\otimes r})$ and used this to conclude that any $V \in M_K(n, r)$ is completely reducible.

In the fourth chapter we introduced weights and weight spaces. We showed any $V \in M_K(n, r)$ can be decomposed into weight spaces. After having looked at some initial results on weight spaces, we introduced formal characters. We showed formal characters are symmetric polynomials and used this to show that all formal characters together generate $\mathrm{Sym}(n, r)$. We showed formal characters are naturally linked to `normal' characters. At the end of this chapter we used all of these results to deduce our big theorem: a classification of all irreducible modules in $M_K(n, r)$.

In the fifth and final chapter we constructed the different conjugacy classes of GL2(Fq). Next we constructed the characters of Uα and Vα. We went on by constructing the characters of Wα,β by inducing from the subgroup of upper triangular matrices. Lastly, we constructed the characters of Ind ϕ by inducing from the subgroup K (see Section 5.4) and used these to define the irreducible characters χϕ. We ended this chapter by showing that these are all irreducible characters of GL2(Fq) and we gave the complete character table.

It should be noted here that there is a beautiful theory underlying the formal characters $\Phi_{\lambda,K}$ for $K$ a field of characteristic $0$ (see Section 4.5). These symmetric polynomials are known as Schur functions or $S$-functions (for a definition see e.g. [12, p. 40]). It takes a few pages to prove that the definition given by Macdonald [12] is in fact equivalent to the definition of a formal character over a field of characteristic $0$, but it is a wonderful proof that I would have certainly included given more time and space. A sketch of the proof can be found in [9, p. 30].

As mentioned in the introduction, it is possible to use the results of this project to deduce results about the representation theory of the symmetric group. This is not a small topic, but could have made for an interesting last chapter of this project. The interested reader can find such an approach in [9, Chapter 6].

Lastly, I should mention that, as also mentioned in the introduction, the theory of the irreducible characters of GLn(K) for a finite field K is, in an abstract sense, very well understood. The first person to give a complete treatment of this material was Green in 1955 [7]. A slightly simplified treatment of this material, however, can be found in MacDonald ([12, Chapter 4]). This is a very large topic and could in itself be enough to write an entire project on. It is an interesting topic, however, and closely related to what we have been doing here. I therefore encourage the keen reader to have a look at this material!

Acknowledgements

I want to take a few lines here to briefly thank my supervisor Dr. John R. Britnell for helping me out when I was stuck. In particular, John's help with the proofs of Propositions 4.3.2, 5.3.2 and 5.4.2 was substantial and consequential. I would also like to thank John for giving me extremely helpful feedback on many occasions throughout the year. Furthermore, I would like to thank James Hookham for taking the time and effort to proofread this project cover to cover. Lastly, I would like to thank Xander Koster for his invaluable support throughout the year and Caro, Daan and Oma: you will always be a source of inspiration to me.

Bibliography

[1] Abe, E. (1980). Hopf Algebras. Cambridge: Cambridge University Press. Cambridge Tracts in Mathematics.

[2] Atiyah, M. and I. MacDonald (1969). Introduction to . West- view Press.

[3] Bump, D. (1998). Automorphic Forms and Representations. Cambridge: Cambridge University Press.

[4] Curtis, C. (1999). Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer. London: London Mathematical Society.

[5] Curtis, C.W. and I. Reiner (1962). Representation Theory of Finite Groups and Associative Algebras. New York: John Wiley and Sons.

[6] Fulton, W. and J. Harris (1991). Representation Theory: A First Course. New York: Springer.

[7] Green, J. (1955). The characters of the finite general linear groups. Transactions of the American Mathematical Society 80(2), 402–447.

[8] Green, J. (1976). Locally finite representations. Journal of Algebra (41), 137–171.

[9] Green, J. (2007). Polynomial Representations of GLn (2nd ed.). Springer Berlin Heidelberg New York.

[10] Lam, T. (1991). A First Course in Noncommutative Rings. New York: Springer- Verlag. Graduate Texts in Mathematics.

[11] Lang, S. (2002). Algebra (Revised 3rd ed.). New York: Springer-Verlag. Graduate Texts in Mathematics.

[12] Macdonald, I. (1979). Symmetric Functions and Hall Polynomials. Oxford: Clarendon Press.

[13] Merris, R. (2003). Combinatorics (2nd ed.). New Jersey: John Wiley and Sons inc.

[14] Schur, I. (1901). Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen. Gesammelte Abhandlungen I, 1–70. Springer, Berlin, 1973.

[15] Sweedler, M. E. (1969). Hopf Algebras. New York: W.A. Benjamin inc.

[16] Wildon, M. (2008, August). Notes on polynomial representations of general linear groups. http://www.ma.rhul.ac.uk/~uvah099/Maths/PolyRepsRevised.pdf.
