
Modern Group Theory

MAT 4199/5145

Fall 2017

Alistair Savage

Department of Mathematics and Statistics

University of Ottawa

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Contents

Preface

1 Representation theory of finite groups
  1.1 Basic concepts
    1.1.1 Representations
    1.1.2 Examples
    1.1.3 Intertwining operators
    1.1.4 Direct sums and Maschke's Theorem
    1.1.5 The adjoint representation
    1.1.6 Matrix coefficients
    1.1.7 Tensor products
    1.1.8 Cyclic and invariant vectors
  1.2 Schur's lemma and the commutant
    1.2.1 Schur's lemma
    1.2.2 Multiplicities and isotypic components
    1.2.3 Finite-dimensional algebras
    1.2.4 The commutant
    1.2.5 Intertwiners as invariant elements
  1.3 Characters and the projection formula
    1.3.1 The trace
    1.3.2 Central functions and characters
    1.3.3 Central projection formulas
  1.4 Permutation representations
    1.4.1 Wielandt's lemma
    1.4.2 Symmetric actions and Gelfand's lemma
    1.4.3 Frobenius reciprocity for permutation representations
    1.4.4 The structure of the commutant of a permutation representation
  1.5 The group algebra and the Fourier transform
    1.5.1 The group algebra
    1.5.2 The Fourier transform
    1.5.3 Algebras of bi-K-invariant functions
  1.6 Induced representations
    1.6.1 Definitions and examples
    1.6.2 First properties of induced representations
    1.6.3 Frobenius reciprocity
    1.6.4 Mackey's lemma and the intertwining number theorem

2 The theory of Gelfand–Tsetlin bases
  2.1 Algebras of conjugacy invariant functions
    2.1.1 Conjugacy invariant functions
    2.1.2 Multiplicity-free subgroups
  2.2 Gelfand–Tsetlin bases
    2.2.1 Branching graphs and Gelfand–Tsetlin bases
    2.2.2 Gelfand–Tsetlin algebras

3 The Okounkov–Vershik approach
  3.1 The Young poset
    3.1.1 Partitions and conjugacy classes in Sn
    3.1.2 Young diagrams
    3.1.3 Young tableaux
    3.1.4 Coxeter generators
    3.1.5 The content of a tableau
    3.1.6 The Young poset
  3.2 The Young–Jucys–Murphy elements and a Gelfand–Tsetlin basis for Sn
    3.2.1 The Young–Jucys–Murphy elements
    3.2.2 Marked permutations
    3.2.3 Olshanskii's Theorem
  3.3 The spectrum of the YJM elements and the branching graph of Sn
    3.3.1 The weight of a Young basis vector
    3.3.2 The spectrum of the YJM elements
    3.3.3 Spec(n) = Cont(n)
  3.4 The irreducible representations of Sn
    3.4.1 Young's seminormal form
    3.4.2 Young's orthogonal form
    3.4.3 The Young seminormal units
    3.4.4 The Theorem of Jucys and Murphy

4 Further directions
  4.1 Schur–Weyl duality
  4.2 Categorification
    4.2.1 Symmetric functions
    4.2.2 The Grothendieck group
    4.2.3 Categorification of the algebra of symmetric functions
    4.2.4 The Heisenberg algebra
    4.2.5 Categorification of bosonic Fock space
    4.2.6 Categorification of the basic representation
    4.2.7 Going even further

Index

Preface

These are notes for the course Modern Group Theory (MAT 4199/5145) at the University of Ottawa. Since the pioneering works of Frobenius, Schur, and Young more than a hundred years ago, the representation theory of the symmetric group has developed into a huge area of study, with applications to algebra, combinatorics, category theory, and mathematical physics. In this course, we will cover the representation theory of the symmetric group following modern techniques developed by Vershik, Olshanskii, and Okounkov. Using techniques from algebra, combinatorics, and category theory, we will cover the following topics.

• Representation theory of finite groups. We will begin the course with an introduction to the representation theory of finite groups. This will include a discussion of irreducible representations, tensor products, Schur's lemma, characters, permutation representations, group algebras, and Frobenius reciprocity.

• The theory of Gelfand–Tsetlin bases. We will discuss branching rules for representations of symmetric groups and see how such branching rules allow one to obtain particularly nice bases for irreducible representations.

• The Okounkov–Vershik approach. We will discuss the combinatorics of Young tableaux, Jucys–Murphy elements, and the Okounkov–Vershik approach to the representation theory of symmetric groups.

Acknowledgements: These notes closely follow the book [CSST10], which is the recommended textbook for the course.

Alistair Savage

Course website: http://alistairsavage.ca/mat5145

Chapter 1

Representation theory of finite groups

In this chapter we discuss some basic facts about the representation theory of finite groups. While we will focus on the symmetric groups later in the course, we work mostly with arbitrary finite groups in this chapter. We closely follow the presentation in [CSST10, Ch. 1]. Throughout this chapter, G will denote a finite group and V, W will denote finite-dimensional complex vector spaces. Unless otherwise specified, we will always work over the field of complex numbers. So the term vector space means complex vector space.

1.1 Basic concepts

In this section we give the main definitions related to representations of finite groups and discuss some examples.

1.1.1 Representations

Recall that the general linear group

GL(V ) := {T : V → V : T is an invertible linear map}

is a group under composition. Its identity element is the identity map IV . A (linear) representation of G on V is a group homomorphism

σ : G → GL(V ).

The name arises from the fact that elements g of G are "represented" by linear transformations σ(g) of V. When we wish to make the vector space V explicit, we will sometimes denote the representation by (σ, V), or simply by V (with the homomorphism σ understood). The dimension of the representation σ is the dimension of V. A subspace W ≤ V is said to be σ-invariant (or G-invariant, when the representation σ is understood) if

σ(g)W ⊆ W, for all g ∈ G (i.e. σ(g)w ∈ W for all g ∈ G, w ∈ W ).


If this is the case, then (σ|W, W) is also a representation of G. We say that σ|W is a subrepresentation of σ. (Note that we will use the notation ≤ for subspaces and subgroups. We reserve the symbol ⊆ for set inclusion.) Note that the trivial spaces V and {0} are always invariant. A nonzero representation (σ, V) is irreducible if V has no nontrivial invariant subspaces; otherwise we say it is reducible.

If (σ, V) is a representation of G and K ≤ G is a subgroup, the restriction of σ from G to K, denoted Res^G_K σ (or Res^G_K V), is the representation of K on V defined by the restriction σ|K : K → GL(V).

A unitary space is a vector space V endowed with a Hermitian scalar product. Recall that a Hermitian scalar product (or Hermitian inner product) is a map ⟨·, ·⟩_V : V × V → C such that, for all u, v, w ∈ V and α ∈ C, we have

(a) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩,
(b) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩,
(c) ⟨αu, v⟩ = α⟨u, v⟩,
(d) ⟨u, αv⟩ = ᾱ⟨u, v⟩,
(e) ⟨u, v⟩ = \overline{⟨v, u⟩},
(f) ⟨u, u⟩ ≥ 0, with equality only if u = 0,

where z̄ denotes the complex conjugate of z ∈ C. From now on, the term scalar product will mean Hermitian scalar product. Suppose V is a unitary space and T : V → V is a linear operator. The adjoint operator T* is defined by

⟨Tu, v⟩_V = ⟨u, T*v⟩_V, for all u, v ∈ V. (1.1)

(See Exercise 1.1.1.) If V is a unitary space, then a linear operator T : V → V is unitary if it preserves the scalar product, i.e. if

hT u, T viV = hu, viV , for all u, v ∈ V.

(More generally, T : V → W is unitary if ⟨Tu, Tv⟩_W = ⟨u, v⟩_V for all u, v ∈ V.) All unitary operators are invertible (Exercise 1.1.2). Furthermore, T ∈ GL(V) is a unitary operator if and only if T⁻¹ = T* (Exercise 1.1.3). Suppose V is a unitary space. A representation (σ, V) is unitary if σ(g) is a unitary operator for all g ∈ G or, in other words, if σ(g⁻¹) = σ(g)* for all g ∈ G. We say a representation (σ, V) is unitarizable if there exists a scalar product on V with respect to which σ is unitary.

Lemma 1.1.1. Every finite-dimensional representation of a finite group is unitarizable.

Proof. Let (·, ·) be an arbitrary scalar product on V. (See Exercise 1.1.4.) Then define a new scalar product on V by

⟨u, v⟩ = Σ_{g∈G} (σ(g)u, σ(g)v), for all u, v ∈ V.

Then, for all h ∈ G and u, v ∈ V, we have

⟨σ(h)u, σ(h)v⟩ = Σ_{g∈G} (σ(gh)u, σ(gh)v)
             = Σ_{s∈G} (σ(s)u, σ(s)v)   (setting s = gh)
             = ⟨u, v⟩.

Hence the representation is unitary with respect to h·, ·i.

In light of Lemma 1.1.1, we can assume representations are unitary, which we will do from now on. Note that, for infinite groups, it is not true that all representations are unitarizable. See, for example, this Wikipedia entry.
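The averaging trick in the proof of Lemma 1.1.1 is easy to carry out numerically. The following is a minimal sketch (in Python with NumPy; the particular group and representation are our own illustrative choices): it builds a non-unitary representation of the cyclic group of order 3 and verifies that every σ(g) preserves the averaged scalar product.

```python
import numpy as np

# Illustrative sketch of Lemma 1.1.1.  We build a non-unitary representation
# of C_3 on C^2 by conjugating a unitary matrix of order 3 by an invertible
# (non-unitary) matrix A.
w = np.exp(2j * np.pi / 3)
R = np.array([[w, 0], [0, w.conjugate()]])        # unitary, R^3 = I
A = np.array([[1, 1], [0, 1]], dtype=complex)      # invertible, not unitary
sigma = {k: A @ np.linalg.matrix_power(R, k) @ np.linalg.inv(A) for k in range(3)}

# The averaged scalar product is <u, v> = u^* M v with M = sum_g sigma(g)^* sigma(g).
M = sum(sigma[k].conj().T @ sigma[k] for k in range(3))

# Each sigma(g) is unitary for the new product: sigma(g)^* M sigma(g) = M.
for k in range(3):
    assert np.allclose(sigma[k].conj().T @ M @ sigma[k], M)
```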

Exercises.

1.1.1. Prove that, given a linear operator T : V → V , there is a unique linear operator T ∗ : V → V satisfying (1.1).

1.1.2. Prove that all unitary operators are invertible.

1.1.3. Prove that T ∈ GL(V ) is unitary if and only if T −1 = T ∗.

1.1.4. Prove that every finite-dimensional vector space V has a scalar product.

1.1.5. Consider the infinite group Z (under addition). Let V = C², with elements viewed as column vectors. Prove that

σ : Z → GL(V), σ(n)(v) = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix} v,

is a representation of Z on V. Prove that this representation is reducible.

1.1.2 Examples

Example 1.1.2 (Trivial representation). The trivial representation of G is the one-dimensional representation (ιG, C) given by ιG(g) = IC, for all g ∈ G.

Example 1.1.3 (Permutation representation (homogeneous space)). Suppose G acts on a finite set X, and let L(X) denote the vector space of all complex-valued functions on X. Then we can define a representation λ of G on L(X) by

(λ(g)f)(x) = f(g⁻¹x), for all g ∈ G, f ∈ L(X), x ∈ X.

Note that, for all g1, g2 ∈ G, f ∈ L(X), and x ∈ X, we have

(λ(g1g2)f)(x) = f((g1g2)⁻¹x) = f(g2⁻¹g1⁻¹x) = (λ(g2)f)(g1⁻¹x) = (λ(g1)(λ(g2)f))(x).

Hence λ(g1g2) = λ(g1)λ(g2). Also, it is clear that λ(1G) = I_{L(X)}. So λ is indeed a representation. We call it the permutation representation of G on L(X). We can define a scalar product on L(X) by

⟨f1, f2⟩ = Σ_{x∈X} f1(x)\overline{f2(x)}, for all f1, f2 ∈ L(X). (1.2)

With this scalar product, λ is a unitary representation. (See Exercises 1.1.6 and 1.1.7.) For x ∈ X, the Dirac function δx centered at x is defined by

δx(y) = 1 if y = x, and δx(y) = 0 if y ≠ x.

Then {δx : x ∈ X} is an orthonormal basis for L(X). In particular,

f = Σ_{x∈X} f(x)δx, for all f ∈ L(X). (1.3)

Furthermore,

λ(g)δx = δgx, for all g ∈ G, x ∈ X. (1.4)

Example 1.1.4. The group G acts on itself by multiplication on the left:

g · h = gh, for all g, h ∈ G. (1.5)

The associated permutation representation is called the left regular representation of G and is typically denoted λ. Explicitly, we have

(λ(g)f)(h) = f(g⁻¹h), for all g, h ∈ G, f ∈ L(G).

Similarly, G acts on itself by multiplication on the right by the inverse:

g · h = hg⁻¹, for all g, h ∈ G. (1.6)

(We must multiply by the inverse in order for this to be an action. See Exercise 1.1.8.) The associated permutation representation is called the right regular representation of G and is typically denoted ρ. Explicitly, we have

(ρ(g)f)(h) = f(hg), for all g, h ∈ G, f ∈ L(G).
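In the basis of Dirac functions, (1.4) says that each λ(g) is a permutation matrix, so the left regular representation is easy to realize on a computer. A minimal sketch (Python/NumPy; taking G = S3 is our own choice of example):

```python
import numpy as np
from itertools import permutations

# The left regular representation of S_3 in the Dirac basis {delta_g},
# using lambda(g) delta_h = delta_{gh} as in (1.4).
G = list(permutations(range(3)))                         # elements of S_3
index = {g: i for i, g in enumerate(G)}
compose = lambda g, h: tuple(g[h[i]] for i in range(3))  # (g h)(i) = g(h(i))

def lam(g):
    # Permutation matrix sending the basis vector delta_h to delta_{gh}.
    L = np.zeros((6, 6))
    for h in G:
        L[index[compose(g, h)], index[h]] = 1
    return L

# lambda is a homomorphism: lambda(g1 g2) = lambda(g1) lambda(g2).
for g1 in G:
    for g2 in G:
        assert np.array_equal(lam(compose(g1, g2)), lam(g1) @ lam(g2))
```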

Recall that the symmetric group Sn of degree n is the group of all bijections (called permutations) π : {1, 2, . . . , n} → {1, 2, . . . , n}. Recall that a permutation is even (respectively, odd) if it is a product of an even (respectively, odd) number of transpositions.

Example 1.1.5 (The sign representation). The sign representation (or alternating representation) of Sn is the one-dimensional representation (ε, C) defined by

ε(π) = IC if π is even, and ε(π) = −IC if π is odd.

Exercises.

1.1.6. Prove that (1.2) defines a scalar product on L(X).

1.1.7. Prove that the permutation representation λ of Example 1.1.3 is unitary.

1.1.8. Prove that (1.5) and (1.6) define actions of G on the set G.

1.1.3 Intertwining operators

We let Hom(V, W) denote the vector space of all linear maps from V to W. If (σ, V) and (ρ, W) are two representations of G, we define

HomG(V,W ) = HomG(σ, ρ) := {T ∈ Hom(V,W ): T σ(g) = ρ(g)T ∀ g ∈ G}.

We call elements of HomG(V, W) intertwining operators (or simply intertwiners), and we say that they intertwine σ and ρ (or V and W). If σ and ρ are unitary, then

HomG(σ, ρ) → HomG(ρ, σ), T ↦ T*, (1.7)

is an antilinear isomorphism. Here, antilinear means that

(αT1 + βT2)* = ᾱT1* + β̄T2*, for all α, β ∈ C, T1, T2 ∈ HomG(σ, ρ).

To see this, note that (T ∗)∗ = T (so (1.7) is bijective) and

T ∈ HomG(σ, ρ) ⟺ Tσ(g) = ρ(g)T ∀ g ∈ G
⟺ σ(g)*T* = T*ρ(g)* ∀ g ∈ G (taking the adjoint of both sides)
⟺ σ(g⁻¹)T* = T*ρ(g⁻¹) ∀ g ∈ G (since σ and ρ are unitary)
⟺ T* ∈ HomG(ρ, σ).

Two representations (σ, V) and (ρ, W) are equivalent if there is a bijective intertwiner T ∈ HomG(V, W). In this case, we call T an isomorphism of representations and we write σ ∼ ρ or V ≅ W. If, in addition, σ and ρ are unitary representations and T is a unitary operator, then we say that σ and ρ are unitarily equivalent. Recall that a bijective operator T ∈ Hom(V, W) has a unique polar decomposition T = U|T|, where |T| ∈ GL(V) is the (positive definite) square root of the positive operator T*T, and U ∈ Hom(V, W) is unitary. (See, for example, [Roy, Th. 10.5.5] or [Tri, Th. 3.5].)

Lemma 1.1.6. Two unitary representations of a finite group are equivalent if and only if they are unitarily equivalent.

Proof. Since unitarily equivalent representations are equivalent by definition, it suffices to prove the reverse implication. Let (σ, V) and (ρ, W) be unitary representations of a finite group G and suppose they are equivalent. Then there exists a bijection T ∈ HomG(V, W), and T*T ∈ HomG(V, V) ∩ GL(V). Let T = U|T| be the polar decomposition of T. Then, for all g ∈ G, we have

(σ(g⁻¹)|T|σ(g))² = σ(g⁻¹)|T|σ(g)σ(g⁻¹)|T|σ(g)
= σ(g⁻¹)|T|²σ(g) (since σ is a group homomorphism)
= σ(g⁻¹)T*Tσ(g) (by the definition of |T|)
= T*ρ(g⁻¹)ρ(g)T (since T and T* are intertwiners)
= T*T (since ρ is a group homomorphism).

The uniqueness of the positive square root of T*T implies that

σ(g⁻¹)|T|σ(g) = |T|, and hence |T|σ(g) = σ(g)|T|.

Hence |T | ∈ HomG(V,V ). Then, for all g ∈ G, we have

Uσ(g) = T |T |−1σ(g) = T σ(g)|T |−1 = ρ(g)U.

Thus U is a unitary equivalence from σ to ρ.
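The polar decomposition used in this proof can be computed directly from an eigendecomposition of T*T. Here is a minimal numerical sketch (Python/NumPy; a random invertible matrix stands in for the intertwiner T):

```python
import numpy as np

# Sketch of the polar decomposition T = U |T| from the proof of Lemma 1.1.6.
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# |T| is the positive square root of the positive operator T^* T, computed
# from the eigendecomposition of the Hermitian matrix T^* T.
evals, V = np.linalg.eigh(T.conj().T @ T)
absT = V @ np.diag(np.sqrt(evals)) @ V.conj().T
U = T @ np.linalg.inv(absT)

assert np.allclose(U.conj().T @ U, np.eye(3))   # U is unitary
assert np.allclose(U @ absT, T)                 # T = U |T|
```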

Definition 1.1.7. We let Irr(G) denote the set of all (unitary) irreducible representations of G, and we let Gb = Irr(G)/∼ denote the set of equivalence classes of Irr(G). By a slight abuse of notation, we will often identify Gb with a fixed set of representatives of these equivalence classes, that is, a set of irreducible (unitary) pairwise inequivalent representations of G.

Remark 1.1.8. For those students who know a bit of category theory, one can define a category of (finite-dimensional) representations of a fixed group G. The objects are finite-dimensional representations and the morphisms are intertwiners. One can check that the axioms of a category are satisfied. For example, the composition of intertwiners is again an intertwiner, and this composition is associative (see Exercise 1.1.9).

Exercises.

1.1.9. Suppose that (σi,Vi), i = 1, 2, 3, are representations of G, that S ∈ HomG(σ1, σ2), and that T ∈ HomG(σ2, σ3). Prove that the composition TS is an element of HomG(V1,V3).

1.1.10. Suppose (σ, V) and (ρ, W) are representations of G. Prove that if T ∈ HomG(σ, ρ) is invertible, then T⁻¹ ∈ HomG(ρ, σ).

1.1.4 Direct sums and Maschke’s Theorem

Suppose that (σj, Vj), j = 1, 2, . . . , m, are representations of a group G. Their direct sum is the representation (σ, V) = (⊕_{j=1}^m σj, ⊕_{j=1}^m Vj) defined by

σ(g)(vj)_{j=1}^m = (σj(g)vj)_{j=1}^m.

We will often write the element (vj)_{j=1}^m of a direct sum ⊕_{j=1}^m Vj as Σ_{j=1}^m vj. Conversely, suppose (σ, V) is a representation of G such that

• V = ⊕_{j=1}^m Vj (as vector spaces), and
• the subspace Vj is σ-invariant for each j = 1, . . . , m.

Then, for j = 1, . . . , m, we can define

σj(g) = σ(g)|_{Vj} : Vj → Vj, g ∈ G,

and we have

σ = ⊕_{j=1}^m σj.

Theorem 1.1.9 (Maschke's Theorem). Every finite-dimensional representation of a finite group can be decomposed as a direct sum of irreducible representations. Furthermore, the decomposition can be chosen to be orthogonal (i.e. elements of distinct direct summands are orthogonal to each other).

Proof. Let (σ, V) be a finite-dimensional representation of G. As noted after Lemma 1.1.1, we may assume that σ is unitary. We prove the result by induction on the dimension of V. If dim V = 0 or dim V = 1, the result is clear. Now suppose dim V > 1, and that the result has been proved for all representations of dimension less than dim V. If V is irreducible, we are done. Thus, we suppose V is reducible. Then there is a nontrivial σ-invariant subspace W ≤ V. We claim that its orthogonal complement

W⊥ = {v ∈ V : ⟨v, w⟩_V = 0 ∀ w ∈ W}

is also σ-invariant. Indeed, for g ∈ G and v ∈ W⊥, we have

⟨σ(g)v, w⟩_V = ⟨v, σ(g⁻¹)w⟩_V = 0, for all w ∈ W,

since σ(g⁻¹)w ∈ W and v ∈ W⊥. Thus σ(g)v ∈ W⊥ and so W⊥ is σ-invariant. By the above, we have the orthogonal decomposition

V = W ⊕ W ⊥.

Since W is nontrivial, we have dim W, dim W⊥ < dim V. Therefore, by the inductive hypothesis, W and W⊥ have orthogonal decompositions into direct sums of irreducible representations. This completes the proof of the inductive step.

Remark 1.1.10. Maschke’s Theorem relies on our assumptions about the group and our choice of C as the ground field. More generally, Maschke’s Theorem holds for finite groups and representations over a field whose characteristic does not divide the order of the group. See Exercise 1.1.12.

Remark 1.1.11. A representation that cannot be decomposed into a direct sum of two nontrivial representations is said to be indecomposable. Irreducible representations are always indecomposable (for arbitrary groups and arbitrary ground fields). Maschke's Theorem says that, under additional assumptions, the converse is true: indecomposable representations are also irreducible. However, this is not true in general. See Exercise 1.1.11.

Exercises.

1.1.11. Show that the representation of Exercise 1.1.5 cannot be decomposed as a direct sum of irreducible representations. Why does this not violate Maschke’s Theorem?

1.1.12. Let p be a prime number. In this exercise, we work over the field Zp with p elements, instead of over C. Suppose G is a finite group whose order is divisible by p. Consider the left regular representation (λ, L(G)) of G. Let

V = {f ∈ L(G) : Σ_{g∈G} f(g) = 0}.

(a) Prove that V is G-invariant.
(b) Prove that V ∩ W ≠ {0} for every nonzero G-invariant subspace W of L(G). Hint: Let f be a nonzero element of W such that f ∉ V. Define s ∈ L(G) by s(g) = 1 for all g ∈ G. Note that s ∈ V. Prove that s ∈ W by considering Σ_{g∈G} λ(g)f.
(c) Explain how this example shows that Maschke's Theorem does not hold when the characteristic of the field divides the order of the group.

1.1.5 The adjoint representation

Recall that the dual of the vector space V, denoted V′, is the vector space of all linear functions V → C. If V is endowed with a scalar product ⟨·, ·⟩_V, then we have the Riesz map

V → V′, v ↦ θv, where θv(w) := ⟨w, v⟩_V, for all w ∈ V.

This map is antilinear:

θ_{αv+βw} = ᾱθv + β̄θw, for all α, β ∈ C, v, w ∈ V.

Since V is finite-dimensional, this map is also bijective. (This follows from the Riesz representation theorem. See, for example, [Tri, Th. 2.1]. Alternatively, you can prove it directly.)

We have a dual scalar product on V′ defined by

⟨θv, θw⟩_{V′} = ⟨w, v⟩_V, for all v, w ∈ V. (1.8)

If {v1, v2, . . . , vn} is an orthonormal basis for V, then the corresponding dual basis of V′ is {θ_{v1}, . . . , θ_{vn}}. It is characterized by the property that

θ_{vi}(vj) = δ_{i,j}.

Suppose (σ, V) is a representation of G. The adjoint (or conjugate, or contragredient) representation is the representation (σ′, V′) of G defined by

(σ′(g)f)(v) = f(σ(g⁻¹)v), for all g ∈ G, f ∈ V′, v ∈ V. (1.9)

Note that

(σ′(g)θw)(v) = θw(σ(g⁻¹)v)
             = ⟨σ(g⁻¹)v, w⟩_V
             = ⟨v, σ(g)w⟩_V
             = θ_{σ(g)w}(v).

Thus

σ′(g)θw = θ_{σ(g)w}, for all g ∈ G, w ∈ V. (1.10)

It follows that σ is irreducible if and only if σ′ is irreducible (Exercise 1.1.13).

Exercises.

1.1.13. Prove that a representation (σ, V) of G is irreducible if and only if the adjoint representation (σ′, V′) is irreducible.

1.1.6 Matrix coefficients

As we know from linear algebra, if we choose a basis for V, then any linear transformation of V can be represented by a matrix. Let (σ, V) be a unitary representation of G, and let {v1, . . . , vn} be an orthonormal basis of V. Then the matrix coefficients associated with this basis are given by

U_{i,j}(g) = ⟨σ(g)vj, vi⟩_V, g ∈ G, i, j = 1, . . . , n.

For g ∈ G, let U(g) = (U_{i,j}(g))_{i,j=1}^n ∈ M_{n,n}(C) be the matrix whose entries are the matrix coefficients of (σ, V). Recall that if M ∈ M_{n,n}(C), then its conjugate transpose is the matrix M* with entries given by

M*_{i,j} = \overline{M_{j,i}}.

The matrix M is unitary if it is invertible and M⁻¹ = M*.

Lemma 1.1.12. For all g, g1, g2 ∈ G, we have

(a) U(g1g2) = U(g1)U(g2),
(b) U(g⁻¹) = U(g)*,
(c) U(g) is unitary,
(d) the matrix coefficients of the adjoint representation σ′ with respect to the dual basis θ_{v1}, . . . , θ_{vn} are

⟨σ′(g)θ_{vj}, θ_{vi}⟩_{V′} = \overline{U_{i,j}(g)}.

Proof. (a) We have

U_{i,j}(g1g2) = ⟨σ(g1g2)vj, vi⟩_V
= ⟨σ(g1)σ(g2)vj, vi⟩_V (σ is a group homomorphism)
= ⟨σ(g1)(Σ_{k=1}^n ⟨σ(g2)vj, vk⟩_V vk), vi⟩_V (w = Σ_{k=1}^n ⟨w, vk⟩vk for all w ∈ V)
= Σ_{k=1}^n ⟨σ(g1)vk, vi⟩_V ⟨σ(g2)vj, vk⟩_V (the scalar product is linear)
= Σ_{k=1}^n U_{i,k}(g1)U_{k,j}(g2).

(b) We have

U_{i,j}(g⁻¹) = ⟨σ(g⁻¹)vj, vi⟩_V
= ⟨vj, σ(g)vi⟩_V (σ(g⁻¹) = σ(g)*)
= \overline{⟨σ(g)vi, vj⟩_V} (property of the scalar product)
= \overline{U_{j,i}(g)}.

(c) It follows from part (a) that U(g⁻¹) = U(g)⁻¹. Then it follows from part (b) that U(g) is unitary.

(d) We have

⟨σ′(g)θ_{vj}, θ_{vi}⟩_{V′} = ⟨θ_{σ(g)vj}, θ_{vi}⟩_{V′} (by (1.10))
= ⟨vi, σ(g)vj⟩_V (by (1.8))
= \overline{⟨σ(g)vj, vi⟩_V} (property of the scalar product)
= \overline{U_{i,j}(g)}.

It follows from Lemma 1.1.12 that we have a group homomorphism U : G → U(n), g ↦ U(g), where U(n) is the group of n × n unitary matrices (under multiplication). This group homomorphism is called a unitary matrix realization of σ. Note that it depends on the choice of orthonormal basis.

Exercises.

1.1.14. Suppose B = {v1, . . . , vn} and B′ = {w1, . . . , wn} are two orthonormal bases for V. Then we have a "change of basis matrix" A whose (i, j) entry is given by

A_{i,j} = ⟨vj, wi⟩_V.

Let U and U′ be the unitary matrix realizations of a representation σ on V in terms of the bases B and B′, respectively. State and prove an equality relating U(g), U′(g), and A that holds for all g ∈ G.

1.1.7 Tensor products

In this section, we introduce the important notion of a tensor product. Rather than give the most general possible definition (that of a tensor product of bimodules over rings), we give a more direct definition that is suitable for our purposes. Suppose V and W are finite-dimensional unitary spaces over C. The tensor product V ⊗ W is the vector space consisting of all maps

B : V × W → C that are bi-antilinear:

B(α1v1 + α2v2, w) = ᾱ1B(v1, w) + ᾱ2B(v2, w),
B(v, α1w1 + α2w2) = ᾱ1B(v, w1) + ᾱ2B(v, w2),

for all α1, α2 ∈ C, v, v1, v2 ∈ V , and w, w1, w2 ∈ W . For v ∈ V and w ∈ W we define the simple tensor v ⊗ w ∈ V ⊗ W by

(v ⊗ w)(v1, w1) = ⟨v, v1⟩_V ⟨w, w1⟩_W.

The map

V × W → V ⊗ W, (v, w) ↦ v ⊗ w,

is bilinear:

(α1v1 + α2v2) ⊗ (β1w1 + β2w2) = Σ_{i,j=1}^2 αiβj vi ⊗ wj,

for all α1, α2, β1, β2 ∈ C, v1, v2 ∈ V , and w1, w2 ∈ W .

Lemma 1.1.13. If {v1, . . . , vn} is an orthonormal basis for V and {w1, . . . , wm} is an orthonormal basis for W, then

{vi ⊗ wj : 1 ≤ i ≤ n, 1 ≤ j ≤ m} (1.11)

is a basis for V ⊗ W. In particular, dim(V ⊗ W) = (dim V)(dim W).

Proof. Suppose B ∈ V ⊗ W, v ∈ V, and w ∈ W. Then there exist α1, . . . , αn, β1, . . . , βm ∈ C such that

v = Σ_{i=1}^n αi vi and w = Σ_{i=1}^m βi wi.

Then, for i = 1, . . . , n, we have

⟨vi, v⟩_V = Σ_{k=1}^n ᾱk ⟨vi, vk⟩_V = ᾱi.

Similarly, for j = 1, . . . , m, we have ⟨wj, w⟩_W = β̄j. Therefore,

B(v, w) = Σ_{i=1}^n Σ_{j=1}^m ᾱi β̄j B(vi, wj) = (Σ_{i=1}^n Σ_{j=1}^m B(vi, wj) vi ⊗ wj)(v, w).

It follows that B = Σ_{i=1}^n Σ_{j=1}^m B(vi, wj) vi ⊗ wj. So every element of V ⊗ W can be written in a unique way as a linear combination of the elements of (1.11). Thus, these elements form a basis for V ⊗ W.

Remark 1.1.14. It is important to note that not every element of V ⊗ W can be written as a simple tensor. When working with an arbitrary element of a tensor product, one must consider finite sums of simple tensors.

In the notation of Lemma 1.1.13, we define a scalar product on V ⊗ W by

⟨vi ⊗ wk, vj ⊗ wℓ⟩ = ⟨vi, vj⟩_V · ⟨wk, wℓ⟩_W = δ_{i,j}δ_{k,ℓ}, (1.12)

and extending by linearity. Thus the basis (1.11) is orthonormal. Suppose V1, V2, W1, W2 are unitary spaces. If T ∈ Hom(V1, V2) and S ∈ Hom(W1, W2), we define

T ⊗ S ∈ Hom(V1 ⊗ W1, V2 ⊗ W2), (T ⊗ S)(v ⊗ w) = (Tv) ⊗ (Sw), v ∈ V1, w ∈ W1,

and extend by linearity.

Lemma 1.1.15. We have an isomorphism of vector spaces

Hom(V1 ⊗ W1, V2 ⊗ W2) ≅ Hom(V1, V2) ⊗ Hom(W1, W2).

Proof. Since the dimensions of the two spaces are both (dim V1)(dim V2)(dim W1)(dim W2), it suffices to prove that every element of Hom(V1 ⊗ W1, V2 ⊗ W2) is a linear combination of elements of the form T ⊗ S for T ∈ Hom(V1, V2) and S ∈ Hom(W1, W2). For i = 1, 2, let {v_{i,1}, . . . , v_{i,ni}} be a basis of Vi, and let {w_{i,1}, . . . , w_{i,mi}} be a basis of Wi. Choose

a ∈ {1, . . . , n1}, b ∈ {1, . . . , n2}, c ∈ {1, . . . , m1}, d ∈ {1, . . . , m2}.

Define

T_{a,b} ∈ Hom(V1, V2), T_{a,b}(v_{1,i}) = v_{2,b} if i = a, and T_{a,b}(v_{1,i}) = 0 if i ≠ a,
S_{c,d} ∈ Hom(W1, W2), S_{c,d}(w_{1,j}) = w_{2,d} if j = c, and S_{c,d}(w_{1,j}) = 0 if j ≠ c.

Then

(T_{a,b} ⊗ S_{c,d})(v_{1,i} ⊗ w_{1,j}) = v_{2,b} ⊗ w_{2,d} if i = a and j = c, and 0 otherwise.

Since {v_{1,i} ⊗ w_{1,j} : 1 ≤ i ≤ n1, 1 ≤ j ≤ m1} is a basis of V1 ⊗ W1 and {v_{2,b} ⊗ w_{2,d} : 1 ≤ b ≤ n2, 1 ≤ d ≤ m2} is a basis of V2 ⊗ W2, every element of Hom(V1 ⊗ W1, V2 ⊗ W2) can be written as a linear combination of the above maps (for various choices of a, b, c, d). This completes the proof.

Now suppose G1 and G2 are finite groups. Let (σi, Vi) be a representation of Gi for i = 1, 2. The outer tensor product of σ1 and σ2 is the representation σ1 ⊠ σ2 of G1 × G2 given by

σ1 ⊠ σ2 : G1 × G2 → GL(V1 ⊗ V2), (σ1 ⊠ σ2)(g1, g2) = σ1(g1) ⊗ σ2(g2), for all g1 ∈ G1, g2 ∈ G2.

Recall that we have the diagonal embedding (a group homomorphism)

G → G × G, g 7→ (g, g).

Suppose (σ1, V1) and (σ2, V2) are two representations of the same group G. Then the internal tensor product of σ1 and σ2, denoted σ1 ⊗ σ2, is the representation of G given by the composition

G → G × G → GL(V1 ⊗ V2),

where the first map is the diagonal embedding and the second is σ1 ⊠ σ2. In other words, σ1 ⊗ σ2 is given by

(σ1 ⊗ σ2)(g) = σ1(g) ⊗ σ2(g).
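In the basis (1.11), the matrix of T ⊗ S is the Kronecker product of the matrices of T and S, so internal tensor products of representations can be computed with np.kron. A small sketch (Python/NumPy; the two representations of S3 below are our own illustrative choices):

```python
import numpy as np
from itertools import permutations

# Internal tensor product of the sign representation (a 1x1 matrix) with
# the permutation representation of S_3 on C^3, realized via np.kron.
G = list(permutations(range(3)))
compose = lambda g, h: tuple(g[h[i]] for i in range(3))

def sigma(g):
    M = np.zeros((3, 3))
    for x in range(3):
        M[g[x], x] = 1
    return M

def eps(g):  # sign of g, computed from its number of inversions
    inv = sum(g[i] > g[j] for i in range(3) for j in range(i + 1, 3))
    return np.array([[(-1) ** inv]])

tensor = lambda g: np.kron(eps(g), sigma(g))
for g in G:
    for h in G:
        assert np.array_equal(tensor(compose(g, h)), tensor(g) @ tensor(h))
```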

Exercises.

1.1.15. Suppose (σi, Vi) is a representation of Gi for i = 1, 2. Let Wi be a σi-invariant subspace of Vi for i = 1, 2. Prove that W1 ⊗ W2 is a (σ1 ⊠ σ2)-invariant subspace of V1 ⊗ V2. (Here we identify W1 ⊗ W2 with the subspace of V1 ⊗ V2 spanned by {w1 ⊗ w2 : w1 ∈ W1, w2 ∈ W2}.)

1.1.16. Prove that the scalar product defined in (1.12) is indeed a Hermitian scalar product on V ⊗ W.

1.1.8 Cyclic and invariant vectors

Let (σ, V) be a unitary representation of G. For v ∈ V,

⟨σ(g)v : g ∈ G⟩

is a σ-invariant subspace of V, called the subspace generated by v. (We use the notation ⟨ ⟩ to denote the C-span.) If this space is all of V, then we say that v is a cyclic vector. We say that a vector v ∈ V is σ-invariant or fixed if

σ(g)v = v, for all g ∈ G.

We let

V^G = {v ∈ V : σ(g)v = v ∀ g ∈ G}

denote the subspace of all σ-invariant vectors. More generally, if K ≤ G is a subgroup, we let

V^K := {v ∈ V : σ(k)v = v ∀ k ∈ K} (1.13)

be the subspace of K-invariant vectors.

Lemma 1.1.16. Suppose that u ∈ V^G, u ≠ 0. If v ∈ V is orthogonal to u (i.e. ⟨u, v⟩_V = 0), then v is not cyclic.

Proof. For all g ∈ G, we have

⟨u, σ(g)v⟩_V = ⟨σ(g⁻¹)u, v⟩_V
= ⟨u, v⟩_V (since u ∈ V^G)
= 0.

Hence σ(g)v ∈ ⟨u⟩⊥. So

⟨σ(g)v : g ∈ G⟩ ⊆ ⟨u⟩⊥ ⫋ V,

and hence v is not cyclic.

The following corollary will be useful in our study of the representation theory of the symmetric group.

Corollary 1.1.17. Suppose that there exists a cyclic vector v ∈ V , and g ∈ G, λ ∈ C, λ 6= 1, such that σ(g)v = λv. Then V G = {0}.

Proof. Suppose u ∈ V^G. Thus σ(g)u = u. So u and v are eigenvectors of the unitary operator σ(g) with distinct eigenvalues (1 and λ). Hence they are orthogonal. By Lemma 1.1.16, u = 0.

Exercises.

1.1.17. Show that if dim V^G ≥ 2, then V has no cyclic vectors. Hint: Suppose u, w ∈ V^G are nonzero and orthogonal. Then, for v ∈ V \ V^G, we have dim⟨u, w, v⟩ = 3, and so there exists a nonzero u0 ∈ ⟨u, w⟩ ⊆ V^G such that ⟨u0, v⟩ = 0.

1.1.18. Suppose G acts transitively on a set X.

(a) Show that dim L(X)^G = 1.

(b) Show that the vectors δx are cyclic.

1.2 Schur’s lemma and the commutant

In this section we prove Schur’s lemma, which is a very useful result in representation theory, even though its proof is quite simple. We also discuss decompositions of representations into isotypic components, commutants, and how these relate to spaces of intertwiners.

1.2.1 Schur's lemma

Lemma 1.2.1 (Schur's lemma). Suppose (σ, V) and (ρ, W) are irreducible representations of G.

(a) Every nonzero element of HomG(σ, ρ) is an isomorphism.

(b) We have HomG(σ, σ) = CIV .

Proof. (a) Suppose T ∈ HomG(σ, ρ). Then Ker T ≤ V and Im T ≤ W are G-invariant (Exercise 1.2.1). Since V is irreducible, this implies that Ker T = V or Ker T = 0. If Ker T = V , then T = 0. Otherwise, T is injective. Also, if T 6= 0, then Im T 6= 0, and so Im T = W since W is irreducible. So T is also surjective.

(b) Suppose T ∈ HomG(σ, σ). Since C is algebraically closed, T has at least one eigenvalue. So there exists λ ∈ C such that Ker(T − λIV) ≠ 0. This implies that T − λIV is not injective. So, by part (a), T − λIV = 0. Hence T = λIV.

Corollary 1.2.2. If σ and ρ are irreducible representations of G, then

dim HomG(σ, ρ) = 1 if σ ∼ ρ, and 0 if σ ≁ ρ.

Corollary 1.2.3. Every irreducible representation of an abelian group is one-dimensional.

Proof. Suppose (σ, V ) is an irreducible representation of an abelian group G. For every h ∈ G, we have

σ(h)σ(g) = σ(hg) = σ(gh) = σ(g)σ(h), for all g ∈ G.

Thus σ(h) ∈ HomG(σ, σ). By Lemma 1.2.1(b), we have σ(h) = χ(h)IV for some χ(h) ∈ C. Since this holds for all h ∈ G, it follows that every subspace of V is invariant. Since V is irreducible, we must have dim V = 1.

Example 1.2.4 (Representations of cyclic groups). Suppose

Cn = ⟨a : aⁿ = 1⟩

is a cyclic group of order n. By Corollary 1.2.3, every irreducible representation corresponds to a group homomorphism

χ : Cn → GL(C) = C^× := C \ {0}.

Such a map is uniquely determined by χ(a). Since the only relation in the group is aⁿ = 1, we can choose any value for χ(a) such that χ(a)ⁿ = 1, that is, such that χ(a) is an n-th root of unity. So every representation is of the form

χj : Cn → C^×, χj(a^k) = χj(a)^k = exp(2πijk/n), k ∈ Z,

for some j ∈ {0, 1, . . . , n − 1}. Since one-dimensional representations are equivalent if and only if they correspond to the same χ (Exercise 1.2.2), we have

Cbn = {χ0, . . . , χ_{n−1}}.

In particular, the only irreducible representations of C2, up to isomorphism, are the trivial representation χ0 = ιC2 (Example 1.1.2) and the sign representation χ1 = ε (Example 1.1.5).
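The characters χj of Cn are easy to tabulate numerically as n-th roots of unity. The following sketch (Python/NumPy) verifies the orthogonality relations they satisfy (anticipating Corollary 1.3.7):

```python
import numpy as np

# The irreducible characters of C_n from Example 1.2.4:
# chi_j(a^k) = exp(2 pi i j k / n).
n = 6
chi = lambda j, k: np.exp(2j * np.pi * j * k / n)

# Orthogonality: (1/n) sum_k chi_j(a^k) conj(chi_l(a^k)) = delta_{j,l}.
for j in range(n):
    for l in range(n):
        inner = sum(chi(j, k) * np.conj(chi(l, k)) for k in range(n)) / n
        assert np.isclose(inner, 1.0 if j == l else 0.0)
```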

Exercises.

1.2.1. Suppose (σ, V ) and (ρ, W ) are representations of G and T ∈ HomG(V,W ). Prove that Ker T is a σ-invariant subspace of V and that Im T is a ρ-invariant subspace of W .

1.2.2. Prove that two one-dimensional representations (σ, C) and (ρ, C) are equivalent if and only if they are equal.

1.2.2 Multiplicities and isotypic components

A linear transformation E ∈ Hom(V, V) is a projection if it is idempotent: E² = E. If, in addition, Im E is orthogonal to Ker E, we say that E is an orthogonal projection of V onto Im E. It is not hard to verify that a projection E is orthogonal if and only if it is self-adjoint, that is, E = E*. (See Exercise 1.2.3.) Now suppose (σ, V) is a representation of G. By Maschke's Theorem (Theorem 1.1.9), we have an orthogonal decomposition

V = ⊕_{ρ∈Gb} Vρ, where Vρ ≅ Wρ^{⊕mρ} := Wρ ⊕ · · · ⊕ Wρ (mρ summands)

is an orthogonal direct sum of mρ copies of an irreducible representation Wρ in the equivalence class ρ ∈ Gb, for some mρ ∈ N = Z≥0. We adopt the convention that U^{⊕0} = 0 for a vector space U. The summand Vρ is called the ρ-isotypic component of V, and mρ is called the multiplicity of ρ in σ (or of Wρ in V). Let

Gbσ = {ρ ∈ Gb : mρ ≥ 1}

be the set of isomorphism classes of irreducible representations of G that appear with nonzero multiplicity in σ. The inclusions of the summands yield intertwiners

Iρ,j ∈ HomG(Wρ,V ), ρ ∈ Gbσ, 1 ≤ j ≤ mρ.

So we have

V = ⊕_{ρ∈Gbσ} ⊕_{j=1}^{mρ} I_{ρ,j}Wρ. (1.14)

Thus, every v ∈ V can be written uniquely in the form

v = Σ_{ρ∈Gbσ} Σ_{j=1}^{mρ} v_{ρ,j}, v_{ρ,j} ∈ I_{ρ,j}Wρ.

For each ρ ∈ Gbσ and 1 ≤ j ≤ mρ, we have the orthogonal projection

E_{ρ,j} ∈ HomG(V, V), E_{ρ,j}(Σ_{η∈Gbσ} Σ_{k=1}^{mη} v_{η,k}) = v_{ρ,j}.

It follows that

I_V = Σ_{ρ∈Gbσ} Σ_{j=1}^{mρ} E_{ρ,j}. (1.15)

Lemma 1.2.5. For ρ ∈ Gbσ, the intertwiners I_{ρ,1}, . . . , I_{ρ,mρ} form a basis of HomG(Wρ, V). In particular, mρ = dim HomG(Wρ, V).

Proof. Suppose T ∈ HomG(Wρ, V). Then

T = I_V T = Σ_{η∈Gbσ} Σ_{j=1}^{mη} E_{η,j}T.

The domain of E_{η,j}T is Wρ, while its image is contained in I_{η,j}Wη ≅ Wη. Thus, by Schur's lemma (Lemma 1.2.1), we have

E_{η,j}T = 0 if η ≠ ρ, and E_{η,j}T = αj I_{ρ,j} for some αj ∈ C if η = ρ.

Hence,

T = Σ_{j=1}^{mρ} αj I_{ρ,j}.

Since this decomposition is unique, the lemma follows.

Corollary 1.2.6. With notation as in Lemma 1.2.5, we have mρ = dim HomG(V,Wρ).

Proof. This follows from Lemma 1.2.5 and (1.7). Alternatively, it can be proved directly, using an argument analogous to that used to prove Lemma 1.2.5 (Exercise 1.2.4).

The isotypic summands Vρ of V are unique. However, the decomposition of Vρ into a sum of irreducible representations is not unique when mρ > 1; it corresponds to a choice of basis for HomG(Wρ, V).

For ρ ∈ Gbσ and 1 ≤ j, k ≤ mρ, consider the intertwiner T^ρ_{k,j} ∈ HomG(V, V) defined to be the composition

T^ρ_{k,j} : V ↠ I_{ρ,j}Wρ → I_{ρ,k}Wρ ↪ V, (1.16)

where the middle map is I_{ρ,k}I_{ρ,j}⁻¹, the symbols ↠ and ↪ denote the projection onto and inclusion of the given summand of V, and I_{ρ,j}⁻¹ denotes the inverse of I_{ρ,j} when its codomain is replaced by its image (so that it becomes invertible, since it is injective). It follows that

T^ρ_{k,j} T^η_{s,t} = δ_{ρ,η} δ_{j,s} T^ρ_{k,t}. (1.17)

Note also that T^ρ_{j,j} = E_{ρ,j}. Furthermore, it follows from Corollary 1.2.2 that

HomG(I_{ρ,j}Wρ, I_{ρ,k}Wρ) = C T^ρ_{k,j}.

Exercises.

1.2.3. Prove that a projection is orthogonal if and only if it is self-adjoint.

1.2.4. Prove Corollary 1.2.6 directly, using an argument analogous to that used to prove Lemma 1.2.5.

1.2.5. Prove that a representation (σ, V ) is reducible if and only if HomG(σ, σ) has nontrivial idempotents. (The trivial idempotents are 0 and IV .)

1.2.3 Finite-dimensional algebras

An (associative) algebra over C is a vector space A with a bilinear product A × A → A such that A is a ring (possibly without unit) with respect to this product and the vector addition. A subalgebra of A is a subspace B ≤ A that is closed under multiplication. An involution of A is a bijective map A → A, A ↦ A*, such that

• (A*)* = A,
• (αA + βB)* = ᾱA* + β̄B*, and
• (AB)* = B*A*,

for all α, β ∈ C and A, B ∈ A. An algebra with involution is called an involutive algebra or ∗-algebra. An element A in an involutive algebra is self-adjoint if A* = A. The algebra A is commutative if it is commutative as a ring:

AB = BA, for all A, B ∈ A.

The center of A is the commutative algebra

Z(A) = {B ∈ A : AB = BA for all A ∈ A}.

The algebra A is unital if there exists I ∈ A such that

AI = IA = A, for all A ∈ A.

The unit is unique: if I′ is another unit, then

I = II′ = I′.

Furthermore,

I* = I*I = ((I*I)*)* = (I*(I*)*)* = (I*I)* = (I*)* = I.

So the unit is self-adjoint. Suppose A1 and A2 are algebras. A map φ : A1 → A2 is an algebra homomorphism if

• φ is linear,

• φ is multiplicative: φ(AB) = φ(A)φ(B) for all A, B ∈ A1.

If A1 and A2 are involutive, then φ is a ∗-homomorphism if it is an algebra homomorphism and it preserves the involution:

φ(A*) = φ(A)*, for all A ∈ A1.

If, in addition, φ is bijective, then we call it a ∗-isomorphism, and we say that A1 and A2 are ∗-isomorphic. A map φ: A1 → A2 is a ∗-anti-homomorphism if it satisfies the conditions of a ∗- homomorphism, except that we replace the multiplicative property by the anti-multiplicative property φ(AB) = φ(B)φ(A).

If φ is also bijective, it is called a ∗-anti-isomorphism, and we say that A1 and A2 are ∗-anti-isomorphic. Suppose A1, A2, . . . , Ak, k ≥ 2, are algebras. Their direct sum, denoted A1 ⊕ · · · ⊕ Ak, is equal to A1 ⊕ · · · ⊕ Ak as a vector space, with componentwise product:

(A1,...,Ak)(B1,...,Bk) = (A1B1,...,AkBk), for all Ai,Bi ∈ Ai, 1 ≤ i ≤ k.

The algebra generated by a subset B ⊆ A, denoted ⟨B⟩, is the smallest subalgebra of A containing B. Explicitly, ⟨B⟩ is the set of all linear combinations of products of elements of B. The dimension of A is its dimension as a vector space. Suppose A is finite-dimensional, and let {e1, . . . , ed} be a basis of A. Then the structure coefficients c_{i,j,k} ∈ C, 1 ≤ i, j, k ≤ d, defined by

e_i e_j = Σ_{k=1}^d c_{i,j,k} e_k,

uniquely determine the product in A.

Example 1.2.7 (Endomorphism algebra). The endomorphism algebra End(V ) := Hom(V,V ) is a unital ∗-algebra with the usual vector space structure and multiplication (composition of operators). The involution is the map T 7→ T ∗, where T ∗ is the adjoint of T . Moreover, if (σ, V ) is a unitary representation of G, then the subalgebra HomG(V,V ) of End(V ) is also a unital ∗-algebra.

Example 1.2.8 (Matrix algebra). The matrix algebra M_{m,m}(C) of complex m × m matrices is a unital ∗-algebra under matrix multiplication, where the involution is given by the conjugate transpose. If V = Cᵐ, then M_{m,m}(C) ≅ End(V) (∗-isomorphism). The center of M_{m,m}(C) is

Z(M_{m,m}(C)) = {λI : λ ∈ C} ≅ C,

where I is the identity matrix (Exercise 1.2.6).

Exercises.

1.2.6. Prove that Z(Mm,m(C)) = {λI : λ ∈ C}.

1.2.7. Prove that if φ: A1 → A2 is a homomorphism of algebras, then Im φ is a subalgebra of A2. If, in addition, φ is a ∗-homomorphism, prove that Im φ is an involutive algebra, with involution induced by the involution on A2. 1.2.8. A subspace J of an algebra A is a (two-sided) ideal of A if

AB ∈ J and BA ∈ J, for all A ∈ A,B ∈ J.

(a) Prove that if φ: A1 → A2 is a homomorphism of algebras, then Ker φ is an ideal of A1. (b) If J is an ideal of A, define a natural algebra structure on the quotient vector space A/J. (c) If J is an ideal of A that is ∗-invariant (i.e. A∗ ∈ J for all A ∈ J), define a natural involution on the quotient algebra A/J and prove that this gives the quotient the structure of an involutive algebra. ∼ (d) If φ: A1 → A2 is a ∗-homomorphism, prove that Im φ = A1/ Ker φ as ∗-algebras.

1.2.9. Let m ≥ 2 and consider the algebra C[x]/(xᵐ), where (xᵐ) = C[x]xᵐ is the ideal of C[x] generated by xᵐ. Prove that C[x]/(xᵐ) is not isomorphic as an algebra to Cᵐ = ⊕_{k=1}^m C, even though these two algebras are both commutative and both have dimension m. Hint: The image of x in the quotient C[x]/(xᵐ) has a property that no element of Cᵐ has.

1.2.10. An element E of an algebra A is idempotent if E² = E. Of course, every unital algebra has the trivial idempotents 0 and I. Suppose a unital algebra A has a nontrivial central idempotent E, that is, E is a nontrivial idempotent in the center of A. Show that

EAE := {EAE : A ∈ A} and (I − E)A(I − E) := {(I − E)A(I − E): A ∈ A} are subalgebras of A with units. (Here we do not require the units of the subalgebras to be the same as the unit of A.) Furthermore, prove that A = EAE ⊕ (I − E)A(I − E) (direct sum of algebras). Thus, central idempotents allow one to decompose algebras (or rings) as direct sums of smaller algebras (or rings).

1.2.4 The commutant

Suppose (σ, V) is a representation of G.

Definition 1.2.9 (Commutant). The algebra EndG(V ) := HomG(V,V ) is called the com- mutant of (σ, V ).

Recall the elements T^ρ_{k,j} ∈ EndG(V) defined in (1.16).

Theorem 1.2.10. The set

{T^ρ_{k,j} : ρ ∈ Gbσ, 1 ≤ k, j ≤ mρ} (1.18)

is a basis for EndG(V). Furthermore, the map

EndG(V) → ⊕_{ρ∈Gbσ} M_{mρ,mρ}(C), Σ_{ρ∈Gbσ} Σ_{k,j=1}^{mρ} α^ρ_{k,j} T^ρ_{k,j} ↦ ⊕_{ρ∈Gbσ} (α^ρ_{k,j})_{k,j=1}^{mρ}, (1.19)

is an isomorphism of algebras.

Proof. Suppose T ∈ EndG(V). Then, by (1.15),

T = I_V T I_V = (Σ_{ρ∈Gbσ} Σ_{k=1}^{mρ} E_{ρ,k}) T (Σ_{η∈Gbσ} Σ_{j=1}^{mη} E_{η,j}) = Σ_{ρ,η∈Gbσ} Σ_{k=1}^{mρ} Σ_{j=1}^{mη} E_{ρ,k}TE_{η,j}.

Note that Im(E_{ρ,k}TE_{η,j}) ≤ I_{ρ,k}Wρ. Thus, the restriction of E_{ρ,k}TE_{η,j} to I_{η,j}Wη is an intertwining operator from I_{η,j}Wη to I_{ρ,k}Wρ. Therefore, by Corollary 1.2.2, we have

E_{ρ,k}TE_{η,j} = 0 if η ≁ ρ, and E_{ρ,k}TE_{η,j} = α^ρ_{k,j} T^ρ_{k,j} for some α^ρ_{k,j} ∈ C if ρ ∼ η.

This proves that the set (1.18) spans EndG(V). It remains to prove that the set (1.18) is linearly independent. Suppose that

Σ_{ρ∈Gbσ} Σ_{k,j=1}^{mρ} α^ρ_{k,j} T^ρ_{k,j} = 0.

For ρ ∈ Gbσ and 1 ≤ j ≤ mρ, choose a nonzero v ∈ I_{ρ,j}Wρ. Since T^η_{k,ℓ}v = 0 for η ≁ ρ or ℓ ≠ j, we have

Σ_{k=1}^{mρ} α^ρ_{k,j} T^ρ_{k,j} v = 0.

Since the T^ρ_{k,j}v, 1 ≤ k ≤ mρ, belong to different summands in a decomposition of V into irreducible subrepresentations, they are linearly independent. Hence α^ρ_{k,j} = 0 for all 1 ≤ k ≤ mρ. The fact that (1.19) is an isomorphism of algebras follows from (1.17).

Corollary 1.2.11. We have dim EndG(V) = Σ_{ρ∈Gbσ} mρ².
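Corollary 1.2.11 can be checked numerically: the commutant is the solution space of the linear system Tσ(g) = σ(g)T, g ∈ G. The following sketch (Python/NumPy; the permutation representation of S3 on C³, which decomposes as the trivial representation plus the 2-dimensional standard representation, is our own choice of example) computes its dimension and finds Σ mρ² = 1² + 1² = 2.

```python
import numpy as np
from itertools import permutations

# dim End_G(V) for the permutation representation of S_3 on C^3.
G = list(permutations(range(3)))

def sigma(g):
    M = np.zeros((3, 3))
    for x in range(3):
        M[g[x], x] = 1
    return M

# Stack the conditions sigma(g) T - T sigma(g) = 0 as linear equations in the
# 9 entries of T, using vec(AT - TA) = (I kron A - A^T kron I) vec(T).
rows = [np.kron(np.eye(3), sigma(g)) - np.kron(sigma(g).T, np.eye(3)) for g in G]
K = np.vstack(rows)
dim_commutant = 9 - np.linalg.matrix_rank(K)
assert dim_commutant == 2    # = 1^2 + 1^2, as predicted by Corollary 1.2.11
```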

Definition 1.2.12. We say that the representation (σ, V) is multiplicity free if mρ ≤ 1 for all ρ ∈ Gb or, equivalently, if mρ = 1 for all ρ ∈ Gbσ.

Corollary 1.2.13. The representation (σ, V) is multiplicity free if and only if its commutant EndG(V) is commutative.

Proof. This follows from Theorem 1.2.10 and the fact that the matrix algebra Mm,m(C) is commutative if and only if m = 1.

One of the nice properties of multiplicity free representations is that their decomposition into irreducible subrepresentations is unique, since it is the same as their decomposition into isotypic components. Note that, for ρ ∈ Gbσ,

Eρ := Σ_{j=1}^{mρ} E_{ρ,j} = Σ_{j=1}^{mρ} T^ρ_{j,j}

is the projection from V onto the ρ-isotypic component Vρ = ⊕_{j=1}^{mρ} I_{ρ,j}Wρ. The projection Eρ is called the minimal central projection or minimal central idempotent corresponding to ρ. The E_{ρ,j} are called minimal projections or minimal idempotents.

Corollary 1.2.14. The center Z(EndG(V)) is isomorphic, as an algebra, to C^{Gbσ}. Furthermore, the minimal central projections {Eρ : ρ ∈ Gbσ} are a basis for the center.

Proof. This follows from Theorem 1.2.10 and Exercise 1.2.6.

Exercises.

1.2.11. Let (σ, V ) be a representation of G and ρ ∈ Gbσ. Suppose Eρ = A + B for central 2 2 idempotents A, B ∈ EndG(V ) (i.e. A = A, B = B, and A, B ∈ Z(EndG(V ))). Prove that either Eρ = A (hence B = 0) or Eρ = B (hence A = 0). This justifies the term minimal central idempotent.

1.2.12. Let (σ, V) be a representation of G. Prove that E_{ρ,j}, ρ ∈ Gbσ, 1 ≤ j ≤ mρ, cannot be written as a sum of two nontrivial orthogonal idempotents. In other words, prove that if E_{ρ,j} = A + B for A, B ∈ EndG(V) with A² = A, B² = B, and AB = BA = 0 (we say the idempotents A and B are orthogonal if AB = BA = 0), then E_{ρ,j} = A (hence B = 0) or E_{ρ,j} = B (hence A = 0). This justifies the term minimal idempotent (sometimes also called a primitive idempotent).

1.2.13. Suppose (σ, V) and (η, U) are two representations of G with decompositions

V = ⊕_{ρ∈Gbσ} Wρ^{⊕mρ} and U = ⊕_{ρ∈Gbη} Wρ^{⊕nρ}

into irreducible subrepresentations. Prove that we have an isomorphism of vector spaces

HomG(V, U) ≅ ⊕_{ρ∈Gbσ∩Gbη} M_{nρ,mρ}(C).

1.2.5 Intertwiners as invariant elements

We have a canonical (i.e. basis independent) isomorphism of vector spaces

W′ ⊗ V ≅ Hom(W, V), ϕ ⊗ v ↦ T_{ϕ,v}, where (1.20)

T_{ϕ,v}w = ϕ(w)v, for all w ∈ W.

(See Exercise 1.2.14.) It is important to remember here that not all elements of W′ ⊗ V are simple tensors of the form ϕ ⊗ v. However, it is enough to define a linear map on simple tensors, since we then extend it to all of the tensor product by linearity. Suppose that (σ, V) and (ρ, W) are two representations of G. We define a representation η of G on Hom(W, V) by

η(g)T = σ(g)Tρ(g⁻¹), for all g ∈ G, T ∈ Hom(W, V). (1.21)

Lemma 1.2.15. The map (1.20) is an isomorphism from ρ′ ⊗ σ to η.

Proof. For g ∈ G, ϕ ∈ W′, v ∈ V, and w ∈ W, we have

(η(g)T_{ϕ,v})w = σ(g)T_{ϕ,v}ρ(g⁻¹)w
= σ(g)(ϕ(ρ(g⁻¹)w)v)
= ϕ(ρ(g⁻¹)w)σ(g)v
= (ρ′(g)ϕ)(w)σ(g)v
= T_{ρ′(g)ϕ, σ(g)v}w.

Since

(ρ′ ⊗ σ)(g)(ϕ ⊗ v) = (ρ′(g)ϕ) ⊗ (σ(g)v),

this proves that the map (1.20) is an intertwiner. Since it is an isomorphism of vector spaces, it follows that it is an isomorphism of representations.

Corollary 1.2.16. We have

HomG(W, V) = Hom(W, V)^G ≅ HomG(ιG, ρ′ ⊗ σ),

where the isomorphism is one of vector spaces and ιG is the trivial representation of G.

Proof. For T ∈ Hom(W, V), we have

T ∈ HomG(W, V) ⟺ σ(g)T = Tρ(g), for all g ∈ G
⟺ σ(g)Tρ(g⁻¹) = T, for all g ∈ G
⟺ η(g)T = T, for all g ∈ G
⟺ T ∈ Hom(W, V)^G.

Thus HomG(W, V) = Hom(W, V)^G. Note that Hom(W, V)^G is precisely the ιG-isotypic component of Hom(W, V) (Exercise 1.2.15). Thus,

dim Hom(W, V)^G = dim Hom(W, V)_{ιG} = dim HomG(ιG, η) = dim HomG(ιG, ρ′ ⊗ σ)

by Lemmas 1.2.5 and 1.2.15. Therefore, we have an isomorphism of vector spaces

Hom(W, V)^G ≅ HomG(ιG, ρ′ ⊗ σ).

Exercises.

1.2.14. Prove that (1.20) is an isomorphism of vector spaces.

1.2.15. Suppose (σ, V) is a representation of G. Prove that V^G is the ιG-isotypic component of V. (Recall that ιG is the trivial representation of G.)

1.3 Characters and the projection formula

1.3.1 The trace

Suppose {v1, . . . , vn} is an orthonormal basis of V. The trace is the linear map

tr : End(V) → C, tr(T) = Σ_{j=1}^n ⟨Tvj, vj⟩. (1.22)

The trace is independent of the chosen basis. In fact, it is uniquely determined by the properties

(a) tr(TS) = tr(ST ) for all S,T ∈ End(V ), and

(b) tr(IV ) = dim V .

(See Exercise 1.3.1.) If T ∈ End(W) and S ∈ End(V), then T ⊗ S ∈ End(W ⊗ V). Choose orthonormal bases {v1, . . . , vn} and {w1, . . . , wm} for V and W, respectively. Then

{wi ⊗ vj : 1 ≤ i ≤ m, 1 ≤ j ≤ n}

is an orthonormal basis for W ⊗ V, and so

tr(T ⊗ S) = Σ_{i=1}^m Σ_{j=1}^n ⟨(T ⊗ S)(wi ⊗ vj), wi ⊗ vj⟩_{W⊗V}
= Σ_{i=1}^m Σ_{j=1}^n ⟨Twi, wi⟩_W ⟨Svj, vj⟩_V
= tr(T) tr(S).
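A quick numerical sanity check of the computation above (Python/NumPy, with random matrices and the tensor product realized as a Kronecker product):

```python
import numpy as np

# Check tr(T (x) S) = tr(T) tr(S) on random complex matrices.
rng = np.random.default_rng(1)
T = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
assert np.isclose(np.trace(np.kron(T, S)), np.trace(T) * np.trace(S))
```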

Exercises.

1.3.1. Prove that the trace map is uniquely characterized by the fact that it is linear and satisfies the properties

(a) tr(TS) = tr(ST ) for all S,T ∈ End(V ), and

(b) tr(IV ) = dim V . More precisely, show that any map with these properties is given by (1.22) for any choice of orthonormal basis.

1.3.2 Central functions and characters

Definition 1.3.1 (Class function). A function f ∈ L(G) is central (or is a class function) if f(gh) = f(hg) for all g, h ∈ G.

Lemma 1.3.2. A function f ∈ L(G) is central if and only if it is conjugacy invariant:

f(g⁻¹hg) = f(h), for all g, h ∈ G.

In other words, a function is central if and only if it is constant on each conjugacy class.

Proof. First suppose f is central. Then, for all g, h ∈ G, we have

f(g⁻¹hg) = f(hgg⁻¹) = f(h),

and so f is conjugacy invariant. On the other hand, if f is conjugacy invariant, then, for all g, h ∈ G, we have

f(gh) = f(g(hg)g⁻¹) = f(hg),

and so f is central.

Lemma 1.3.2 justifies the terminology class function, since one can think of such a function as a function from the set of conjugacy classes to C.

Definition 1.3.3 (Character of a representation). The character of a representation (σ, V) of G is the function

χ^σ : G → C, χ^σ(g) = tr(σ(g)).

In other words, χ^σ = tr ∘ σ. A character of an irreducible representation is called an irreducible character.

The basic properties of characters are given in the following proposition.

Proposition 1.3.4. Suppose (σ, V) and (ρ, W) are two representations of G.

(a) χ^σ(1G) = dim V.
(b) χ^σ(g⁻¹) = \overline{χ^σ(g)} for all g ∈ G.
(c) χ^σ is a central function.
(d) χ^{σ⊕ρ} = χ^σ + χ^ρ.
(e) If σi is a representation of Gi for i = 1, 2, then

χ^{σ1⊠σ2}(g1, g2) = χ^{σ1}(g1)χ^{σ2}(g2), for all (g1, g2) ∈ G1 × G2.

(f) χ^{σ′}(g) = \overline{χ^σ(g)} for all g ∈ G. (Recall that σ′ is the adjoint of σ.)
(g) χ^{σ⊗ρ} = χ^σ χ^ρ.
(h) If η is the representation of G on Hom(W, V) given by (1.21), then χ^η = \overline{χ^ρ} χ^σ.

Proof. Choose an orthonormal basis {v1, . . . , vd} for V.

(a) We have χ^σ(1G) = tr(σ(1G)) = tr(I_V) = dim V.

(b) For g ∈ G, we have

χ^σ(g⁻¹) = Σ_{i=1}^d ⟨σ(g⁻¹)vi, vi⟩ = Σ_{i=1}^d ⟨vi, σ(g)vi⟩ = \overline{Σ_{i=1}^d ⟨σ(g)vi, vi⟩} = \overline{χ^σ(g)}.

(c) For g, h ∈ G, we have

χ^σ(gh) = tr(σ(gh)) = tr(σ(g)σ(h)) = tr(σ(h)σ(g)) = tr(σ(hg)) = χ^σ(hg).

(d) For g ∈ G, we have

χ^{σ⊕ρ}(g) = tr(σ(g) ⊕ ρ(g)) = tr(σ(g)) + tr(ρ(g)) = χ^σ(g) + χ^ρ(g).

(e) For (g1, g2) ∈ G1 × G2, we have

χ^{σ1⊠σ2}(g1, g2) = tr(σ1(g1) ⊗ σ2(g2)) = tr(σ1(g1)) tr(σ2(g2)) = χ^{σ1}(g1)χ^{σ2}(g2).

(f) Let {θ_{v1}, . . . , θ_{vd}} be the basis of V′ dual to the chosen basis of V. Then

χ^{σ′}(g) = tr(σ′(g))
= Σ_{i=1}^d ⟨σ′(g)θ_{vi}, θ_{vi}⟩_{V′}
= Σ_{i=1}^d ⟨θ_{σ(g)vi}, θ_{vi}⟩_{V′} (by (1.10))
= Σ_{i=1}^d ⟨vi, σ(g)vi⟩_V (by (1.8))
= \overline{Σ_{i=1}^d ⟨σ(g)vi, vi⟩_V}
= \overline{χ^σ(g)}.

(g) This follows from (e).

(h) By Lemma 1.2.15 and parts (f) and (g), we have

χ^η = χ^{ρ′⊗σ} = χ^{ρ′}χ^σ = \overline{χ^ρ}χ^σ.

Exercises.

1.3.2. Compute the character of the sign representation ε of Sn (Example 1.1.5).

1.3.3 Central projection formulas

If E : V → V is a projection, then

dim(Im E) = tr(E). (1.23)

(Exercise 1.3.3.) Recall that we may (and will) assume all representations are unitary by Lemma 1.1.1.

Lemma 1.3.5 (Basic projection formula). Suppose (σ, V) is a representation of G and K ≤ G. Then

E^K_σ := (1/|K|) Σ_{k∈K} σ(k)

is the orthogonal projection of V onto V^K.

Proof. Let E = E^K_σ. For v ∈ V and g ∈ K, we have

σ(g)Ev = (1/|K|) Σ_{k∈K} σ(gk)v = Ev.

Thus Ev ∈ V^K. On the other hand, if v ∈ V^K, then

Ev = (1/|K|) Σ_{k∈K} σ(k)v = (1/|K|) Σ_{k∈K} v = v.

Thus E is a projection from V onto V^K. To see that E is orthogonal, we compute

E* = (1/|K|) Σ_{k∈K} σ(k)* = (1/|K|) Σ_{k∈K} σ(k⁻¹) = E.
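Here is a minimal numerical sketch of Lemma 1.3.5 (Python/NumPy; we take the permutation representation of S3 on C³ and K = G, so the invariant subspace is the line spanned by (1, 1, 1)):

```python
import numpy as np
from itertools import permutations

# Averaging the matrices sigma(g) over G = S_3 gives the orthogonal
# projection onto the invariant vectors, as in Lemma 1.3.5.
G = list(permutations(range(3)))

def sigma(g):
    M = np.zeros((3, 3))
    for x in range(3):
        M[g[x], x] = 1
    return M

E = sum(sigma(g) for g in G) / len(G)
assert np.allclose(E, np.full((3, 3), 1 / 3))         # projects onto <(1,1,1)>
assert np.allclose(E @ E, E) and np.allclose(E, E.T)  # orthogonal projection
```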

Recall that ιG is the trivial representation of a group G, with ιG(g) = IC for all g ∈ G. Under the natural identification of End(C) with C, ιG corresponds to the element of L(G) which is the constant function 1.

Corollary 1.3.6. If (σ, V) is a representation of G and K ≤ G, then

dim V^K = (1/|K|) ⟨χ^{Res^G_K σ}, ιK⟩_{L(K)}.

Proof. We have

dim V^K = tr((1/|K|) Σ_{k∈K} σ(k)) ((1.23) and Lemma 1.3.5)
= (1/|K|) Σ_{k∈K} χ^σ(k) (linearity of tr and definition of χ^σ)
= (1/|K|) ⟨χ^{Res^G_K σ}, ιK⟩_{L(K)} (definition (1.2) of the scalar product on L(K)).

Corollary 1.3.7 (Orthogonality of irreducible characters). Suppose (σ, V) and (ρ, W) are irreducible representations of G. Then

(1/|G|) ⟨χ^σ, χ^ρ⟩_{L(G)} = 1 if σ ∼ ρ, and 0 if σ ≁ ρ.

Proof. Let η be the representation of G on Hom(W, V) given by (1.21). Then

(1/|G|) ⟨χ^σ, χ^ρ⟩_{L(G)} = (1/|G|) ⟨\overline{χ^ρ}χ^σ, ιG⟩_{L(G)}
= (1/|G|) ⟨χ^η, ιG⟩_{L(G)} (Proposition 1.3.4(h))
= dim Hom(W, V)^G (Corollary 1.3.6)
= dim HomG(ρ, σ) (Corollary 1.2.16)
= 1 if σ ∼ ρ, and 0 if σ ≁ ρ (Schur's lemma).
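Corollary 1.3.7 can be verified directly from a character table. The following sketch (Python/NumPy) uses the well-known character table of S3; the labels and the ordering of the conjugacy classes are our own conventions.

```python
import numpy as np

# Conjugacy classes of S_3: {e}, transpositions, 3-cycles, of sizes 1, 3, 2.
sizes = np.array([1, 3, 2])
chars = {                       # character values on the three classes
    'trivial':  np.array([1,  1,  1]),
    'sign':     np.array([1, -1,  1]),
    'standard': np.array([2,  0, -1]),
}
# (1/|G|) <chi_a, chi_b> = delta_{a,b}; the values here are real, so no
# conjugation is needed.
for a in chars:
    for b in chars:
        inner = np.sum(sizes * chars[a] * chars[b]) / 6
        assert inner == (1 if a == b else 0)
```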

Corollary 1.3.8. Let (σ, V) be a representation of G with multiplicities mρ, ρ ∈ Gb. Then

(a) mρ = (1/|G|) ⟨χ^ρ, χ^σ⟩_{L(G)} for all ρ ∈ Gb,
(b) (1/|G|) ⟨χ^σ, χ^σ⟩_{L(G)} = Σ_{ρ∈Gb} mρ², and
(c) (1/|G|) ⟨χ^σ, χ^σ⟩_{L(G)} = 1 if and only if σ is irreducible.

Proof. We leave the proof of this corollary as an exercise (Exercise 1.3.4).

Note that Corollary 1.3.8 implies that σ is determined uniquely, up to equivalence, by its character. Since characters are readily computable, this is a very useful fact.

Lemma 1.3.9 (Fixed points character formula). Suppose G acts on a finite set X, and let (λ, L(X)) be the corresponding permutation representation (Example 1.1.3). Then

χ^λ(g) = |{x ∈ X : gx = x}|.

Proof. Recall that the Dirac functions δx, x ∈ X, form an orthonormal basis for L(X). So we compute

χ^λ(g) = Σ_{x∈X} ⟨λ(g)δx, δx⟩_{L(X)}
= Σ_{x∈X} ⟨δgx, δx⟩_{L(X)} (by (1.4))
= |{x ∈ X : gx = x}|.

Corollary 1.3.10. The multiplicity of an irreducible representation (ρ, Vρ) in the left regular representation (λ, L(G)) is equal to dρ := dim Vρ. In other words,

L(G) ≅ ⊕_{ρ∈Gb} Vρ^{⊕dρ} (as representations of G).

In particular, |G| = Σ_{ρ∈Gb} (dim Vρ)².

Proof. By Lemma 1.3.9, we have

χ^λ(g) = |{h ∈ G : gh = h}| = |G| if g = 1G, and 0 otherwise; that is, χ^λ = |G|δ_{1G}.

Thus, by Corollary 1.3.8(a), we have

mρ = (1/|G|) ⟨χ^ρ, χ^λ⟩_{L(G)} = χ^ρ(1G) = dim Vρ,

where the last equality follows from Proposition 1.3.4(a).

Corollary 1.3.11. Suppose G acts transitively on a finite set X. Choose x0 ∈ X, and let

K = {g ∈ G : gx0 = x0}

be the stabilizer of x0. Then the character of the permutation representation λ of G on X is given by

χ^λ(h) = (|X|/|C|) |C ∩ K|, for all h ∈ G,

where C is the conjugacy class of G containing h.

Proof. Suppose x ∈ X. Since the action of G on X is transitive, there exists s ∈ G such that sx0 = x. Then we have a bijection

{g ∈ C : gx = x} → {g ∈ C : gx0 = x0}, g ↦ s⁻¹gs.

Thus,

|{g ∈ C : gx = x}| = |{g ∈ C : gx0 = x0}| = |C ∩ K|. (1.24)

Then, for h ∈ G, we have

χ^λ(h) = |{x ∈ X : hx = x}| (Lemma 1.3.9)
= (1/|C|) |{(x, g) ∈ X × C : gx = x}| (since χ^λ is central, every g ∈ C has the same number of fixed points as h)
= (1/|C|) Σ_{x∈X} |{g ∈ C : gx = x}|
= (|X|/|C|) |C ∩ K| (by (1.24)).

Definition 1.3.12 (Fourier transform). Suppose (σ, V) is a representation of G and f ∈ L(G). The operator

σ(f) := Σ_{g∈G} f(g)σ(g) ∈ End(V)

is called the Fourier transform of f at σ.

Lemma 1.3.13 (Fourier transform of central functions). If f ∈ L(G) is central and (ρ, W) is an irreducible representation, then

ρ(f) = (1/dρ) ⟨χ^ρ, f̄⟩_{L(G)} I_W.

Proof. First we show that ρ(f) is an intertwiner. For h ∈ G, we have

ρ(f)ρ(h) = Σ_{g∈G} f(g)ρ(gh)
= Σ_{s∈G} f(sh⁻¹)ρ(s) (s = gh)
= Σ_{s∈G} f(h⁻¹s)ρ(s) (f is central)
= Σ_{g∈G} f(g)ρ(hg) (g = h⁻¹s)
= ρ(h)ρ(f).

Thus ρ(f) ∈ EndG(W). By Schur's lemma, there exists c ∈ C such that

ρ(f) = cI_W.

It remains to compute c. We have

c dρ = tr(cI_W) = tr(ρ(f)) = tr(Σ_{g∈G} f(g)ρ(g)) = Σ_{g∈G} f(g)χ^ρ(g) = ⟨χ^ρ, f̄⟩_{L(G)}.

Thus

c = (1/dρ) ⟨χ^ρ, f̄⟩_{L(G)},

as desired.

Corollary 1.3.14. If (σ, V) and (ρ, W) are irreducible representations, then

σ(\overline{χ^ρ}) = (|G|/dσ) I_V if σ ∼ ρ, and 0 otherwise.

Proof. By Lemma 1.3.13 and Corollary 1.3.7, we have

σ(\overline{χ^ρ}) = (1/dσ) ⟨χ^σ, χ^ρ⟩_{L(G)} I_V = (|G|/dσ) I_V if σ ∼ ρ, and 0 otherwise.

Recall from Corollary 1.2.14 that, if (σ, V ) is a representation of G, then the minimal central projections Eρ ∈ EndG(V ), ρ ∈ Gbσ, form a basis for the center Z(EndG(V )). We can now give an explicit expression for these minimal central projections.

Corollary 1.3.15 (Projection onto an isotypic component). Suppose (σ, V) is a representation of G. Then, for (ρ, W) an irreducible representation of G, the orthogonal projection onto the ρ-isotypic component of V is given by

Eρ = (dρ/|G|) σ(\overline{χ^ρ}).

Proof. Let σ = ⊕_{η∈Gbσ} ⊕_{j=1}^{mη} ηj be a decomposition of σ into irreducible subrepresentations, where (ηj, W_{η,j}) is a representation in the equivalence class η for each 1 ≤ j ≤ mη (see (1.14)). Then

(dρ/|G|) σ(\overline{χ^ρ}) = (dρ/|G|) Σ_{η∈Gbσ} Σ_{j=1}^{mη} ηj(\overline{χ^ρ})
= Σ_{j=1}^{mρ} E_{ρ,j} (Corollary 1.3.14)
= Eρ.

We would now like to determine the cardinality of Gb, that is, the number of inequivalent irreducible representations of G. Note that, for f ∈ L(G), we have

f = Σ_{g∈G} f(g)δg = Σ_{g∈G} f(g)λ(g)δ_{1G} = λ(f)δ_{1G}, (1.25)

where the first equality is (1.3) and the second is (1.4).

Proposition 1.3.16. The characters χ^ρ, ρ ∈ Gb, form an orthonormal basis for the space of central functions on G. In particular, |Gb| is equal to the number of conjugacy classes of G.

Proof. The elements χ^ρ, ρ ∈ Gb, are orthogonal by Corollary 1.3.7. Since orthogonal vectors are linearly independent, it suffices to show that the characters span the space of central functions in L(G). For this, it suffices to prove that the orthogonal complement to their span is zero. Suppose f ∈ L(G) is central and

⟨f, χ^ρ⟩_{L(G)} = 0, for all ρ ∈ Gb.

Then

λ(f) = Σ_{ρ∈Gb} dρ ρ(f) (Corollary 1.3.10)
= Σ_{ρ∈Gb} ⟨χ^ρ, f̄⟩_{L(G)} (Lemma 1.3.13)
= Σ_{ρ∈Gb} ⟨f, \overline{χ^ρ}⟩_{L(G)}
= Σ_{ρ∈Gb} ⟨f, χ^{ρ′}⟩_{L(G)} (Proposition 1.3.4(f))
= 0,

since ρ ↦ ρ′ permutes Gb. Therefore, f = 0 by (1.25).

If C denotes the set of all conjugacy classes of G, then the characteristic functions 1_C, C ∈ C, defined by

1_C(g) = 1 if g ∈ C, and 0 if g ∉ C,

form another basis for the space of central functions by Lemma 1.3.2. Thus, the dimension of this space is |C|, and so |Gb| = |C|.

Example 1.3.17. If G is abelian, then every conjugacy class is a singleton, and all elements of L(G) are central. It follows that |Gb| = |G|. For the cyclic group with n elements, we found the n inequivalent irreducible representations in Example 1.2.4.

The next theorem gives a precise relation between the representations of two groups and their product.

Theorem 1.3.18. Suppose G1 and G2 are finite groups. Then we have a bijection of sets

Gc1 × Gc2 → G\1 × G2, (ρ1, ρ2) 7→ ρ1  ρ2.

Proof. For ρ1, σ1 ∈ Gc1 and ρ2, σ2 ∈ Gc2, we have 1 ρ1ρ2 σ1σ2 χ , χ L(G ×G ) |G1 × G2| 1 2 38 Chapter 1. Representation theory of finite groups

1 = hχρ1 χρ2 , χσ1 χσ2 i (Proposition 1.3.4(e)) L(G1×G2) |G1 × G2| 1 1 ρ1 σ1 ρ2 σ2 = hχ , χ iL(G1) hχ , χ iL(G2) (Exercise 1.3.5) |G1| |G2|

= δρ1,σ1 δρ2,σ2 (Corollary 1.3.7).

It then follows from Corollary 1.3.7 and Corollary 1.3.8(c) that

ρ1  ρ2, ρ1 ∈ Gc1, ρ2 ∈ Gc2, are pairwise inequivalent irreducible representations of G1 × G2. Now, by Proposition 1.3.16, |G\1 × G2| is equal to the number of conjugacy classes of G1 × G2, which is the product of the number of conjugacy classes of G1 and the number of conjugacy classes of G2 (Exercise 1.3.6.) By Proposition 1.3.16, this is equal to |Gc1| · |Gc1|. So we have found all of the irreducible representations, completing the proof.

Example 1.3.19. By the fundamental theorem of finite abelian groups (see, for example, [Jud, Th. 13.4]), every finite abelian group is a product of cyclic groups. Thus, Exam- ple 1.2.4 and Theorem 1.3.18 completely classify the irreducible representations of finite abelian groups, up to isomorphism.

Exercises.

1.3.3. Prove (1.23).

1.3.4. Prove Corollary 1.3.8.

1.3.5. Suppose X1 and X2 are finite sets. Any element f ∈ L(X1) can be viewed naturally as an element of L(X1 × X2) by setting f(x1, x2) = f(x1) for (x1, x2) ∈ X1 × X2. Similarly, elements of L(X2) can also be viewed as elements of L(X1 × X2). Prove that

hf1f2, g1g2iL(X1×X2) = hf1, g1iL(X1)hf2, g2iL(X2), for all f1, g1 ∈ L(X1), f2, g2 ∈ L(X2).

1.3.6. Suppose G1 and G2 are finite groups. Show that the conjugacy classes of G1 × G2 are precisely the sets C1 × C2, where Ci is a conjugacy class of Gi for i = 1, 2.

1.3.7. Consider the natural action of the symmetric group Sn on the set X = {1, . . . , n}. Using the theory of characters discussed in this section, compute the multiplicity of the trivial representation in the corresponding permutation representation (λ, L(X)) of Sn.

1.4 Permutation representations

In this section, we study permutation representations (Example 1.1.3) in more detail. 1.4. Permutation representations 39

1.4.1 Wielandt’s lemma Suppose G acts on a finite set X and let (λ, L(X)) denote the corresponding permutation representation (Example 1.1.3). Define a product on L(X,X) by: X (F1F2)(x, y) = F1(x, z)F2(z, y), for all F1,F2 ∈ L(X × X), x, y ∈ X. (1.26) z∈X Under this product and pointwise addition and scalar multiplication, L(X ×X) is an algebra (Exercise 1.4.1). It may be viewed as the algebra of X × X matrices with coefficients in C. Lemma 1.4.1. We have an isomorphism of algebras

L(X × X) → End(L(X)),F 7→ TF , where X (TF f)(x) = F (x, y)f(y). (1.27) y∈X Proof. The proof of this lemma is left as an exercise (Exercise 1.4.1). The group G acts diagonally on X × X: g(x, y) = (gx, gy), for all g ∈ G, x, y ∈ X. Then we have the associated permutation representation on L(X × X). We can also view End(L(X)) = Hom(L(X),L(X)) as a representation of G as in (1.21). Lemma 1.4.2. The isomorphism of Lemma 1.4.1 is an intertwiner and hence an isomor- phism of representations of G.

Proof. Let λX denote the permutation representation on L(X) and let λX×X denote the permutation representation on L(X × X). Furthermore, let η denote the representation of G on End(L(X)) defined in (1.21). Then, for F ∈ L(X × X), f ∈ L(X), g ∈ G, and x ∈ X, we have   −1  (η(g)TF )f (x) = λX (g)TF λX (g )f (x)

X −1 = λX (g) F (x, y)(λX (g )f)(y) y∈X X = λX (g) F (x, y)f(gy) y∈X X = F (g−1x, y)f(gy) y∈X X = F (g−1x, g−1z)f(z)(z = gy) z∈X X  = λX×X (g)F (x, z)f(z) z∈X  = TλX×X (g)F f (x). Thus, the isomorphism of Lemma 1.4.1 intertwines the G-action. 40 Chapter 1. Representation theory of finite groups

∼ G Corollary 1.4.3. We have EndG(L(X)) = L(X × X) as algebras. Proof. By Lemma 1.4.2 and Corollary 1.2.16, we have of algebras G ∼ G L(X × X) = End(L(X)) = EndG(L(X)). Corollary 1.4.4 (Wielandt’s lemma). Suppose G acts on a finite set X, and let λ = L ρ⊕mρ be the decomposition of the associated permutation representation into irre- ρ∈Gbλ ducibles. Then X 2 mρ = number of orbits of G on X × X.

ρ∈Gbλ Proof. Since the characteristic functions of the orbits of G on X × X form a basis of L(X × X)G, we have

number of orbits of G on X × X = dim L(X × X)G

= dim EndG(L(X)) (Corollary 1.4.3) X 2 = mρ (Corollary 1.2.11).

ρ∈Gbλ

Example 1.4.5. Consider the natural action of the symmetric group Sn on X = {1, 2, . . . , n}. We have an orthogonal direct sum decomposition (as representations)

L(X) = V0 ⊕ V1, where (1.28)

V0 = {f ∈ L(X): f(i) = f(j) for all i, j ∈ X}, Pn V1 = {f ∈ L(X): j=1 f(j) = 0}.

On the other hand Sn has precisely two orbits on X × X, given by

Ω0 = {(i, i): i ∈ X} and Ω1 = {(i, j): i, j ∈ X, i 6= j}. Thus, by Wiedlandt’s lemma (Corollary 1.4.4), (1.28) is a decomposition into irreducible subrepresentations.

Exercises.

1.4.1. Show that L(X,X) is an algebra under the product (1.26) and pointwise addition and scalar multiplication. Then prove Lemma 1.4.1.

1.4.2 ([CSST10, Ex. 1.4.2]). Suppose G acts on a finite set X. Show that the permutation reputation of G on L(X × X) is equivalent to the tensor product of the permutation re- presentation L(X) with itself. In other words, show that L(X × X) ∼= L(X) ⊗ L(X) as representations of G. 1.4. Permutation representations 41

1.4.3 ([CSST10, Ex. 1.4.3]). Suppose G acts on finite sets X and Y . Show that

∼ G HomG(L(X),L(Y )) = L(X × Y )

as vector spaces.

1.4.4. Verify the details of Example 1.4.5. More precisely, do the following:

(a) Prove that V0 and V1 are Sn-invariant subspaces of L(X).

(b) Prove that one has an orthogonal direct sum decomposition L(X) = V0 ⊕ V1.

(c) Prove that Ω0 and Ω1 are Sn-orbits for the diagonal action of Sn on X × X.

1.4.5 ([CSST10, Ex. 1.4.6]). Suppose G acts transitively on a finite set X with at least two elements. As in Example 1.4.5, define ( ) X V0 = {f ∈ L(X): f(x) = f(y) for all x, y ∈ X} and V1 = f ∈ L(X): f(x) = 0 . x∈X

We say that the action of G on X is doubly transitive if

∀ x, y, z, u ∈ X such that x 6= y and z 6= u, ∃ g ∈ G such that g(x, y) = (z, u).

Prove that L(X) = V0 ⊕ V1 is the decomposition of L(X) into irreducible subrepresentations if and only if the action of G on X is doubly transitive.

1.4.6 ([CSST10, Ex. 1.4.7]). Suppose G acts on finite sets X and Y . For ρ ∈ Gb, let mρ and 0 mρ denote the multiplicities of ρ in L(X) and L(Y ), respectively. Show that

X 0 mρmρ = number of orbits of G on X × Y. ρ∈Gb

1.4.2 Symmetric actions and Gelfand’s lemma A subset A ⊆ X × X is symmetric if

(x, y) ∈ A =⇒ (y, x) ∈ A.

The action of G on a set X is symmetric if all orbits of G on X × X are symmetric. A function F ∈ L(X × X) is symmetric if

F (x, y) = F (y, x), for all x, y ∈ X.

Proposition 1.4.6 (Gelfand’s lemma; symmetric case). Suppose that the action of G on a finite set X is symmetric. Then EndG(L(X)) is commutative and the permutation represen- tation of G on L(X) is multiplicity free. 42 Chapter 1. Representation theory of finite groups

Proof. Since the action of G is symmetric, any F ∈ L(X × X)G is symmetric since it is G constant on G-orbits. Therefore, for all F1,F2 ∈ L(X × X) and x, y ∈ X, we have X (F1F2)(x, y) = F1(x, z)F2(z, y) z∈X X = F1(z, x)F2(y, z) z∈X

= (F2F1)(y, x) G = (F2F1)(x, y)(F2F1 ∈ L(X × X) by Corollary 1.4.3).

G Hence L(X × X) is commutative. Thus, by Corollary 1.4.3, EndG(L(X)) is commutative. Then the result follows from Corollary 1.2.13. Proposition 1.4.6 corresponds to the following fact for matrices: If A is a subalgebra of Mn,n(C) consisting of symmetric matrices, then A is commutative, since AB = AT BT = (BA)T = BA, for all A, B ∈ A.

1.4.3 Frobenius reciprocity for permutation representations

In this section we will assume that G acts transitively on a finite set X. We fix x0 ∈ X and let K = {g ∈ G : gx0 = x0} denote its stabilizer. Then the map

G/K → X, gK 7→ gx0, is a bijection of G-sets (i.e. it is a bijection of sets that commutes with the G-actions on X and on the set of right G/K). So we may identify X with the space G/K of right cosets in G. We have X 1 X f(x) = f(gx ), for all f ∈ L(X). (1.29) |K| 0 x∈X g∈G

Definition 1.4.7 (Gelfand pair). Suppose G acts transitively on X. We say that (G, K) is a Gelfand pair if the permutation representation L(X) is multiplicity free. If the action of G on X is symmetric, we say that (G, K) is a symmetric Gelfand pair. (Note that, in this case, L(X) is multiplicity free by Gelfand’s lemma (Proposition 1.4.6).)

Example 1.4.8. Consider the natural action of the symmetric group Sn on {1, 2, . . . , n}. Fix k ∈ Z, 0 ≤ k ≤ n/2, and define

Ωn−k,k = {A ⊆ {1, 2, . . . , n} : |A| = k}.

For A ∈ Ωn−k,k and π ∈ Sn, we have

πA = {π(j): j ∈ A} ∈ Ωn−k,k, 1.4. Permutation representations 43

so Sn acts on the set Ωn−k,k. Fix A0 ∈ Ωn−k,k and let K denote its stabilizer. Then ∼ K = Sn−k × Sk (as groups),

{ where the first factor is the symmetric group on A0 := {1, 2, . . . , n}\ A0 and the second factor is the symmetric group on A0. Since the action of Sn on Ωn−k,k is transitive, we may identify Ωn−k,k = Sn/ (Sn−k × Sk) . 0 0 Claim: Two elements (A, B) and (A ,B ) of Ωn−k,k × Ωn−k,k are in the same Sn-orbit (under the diagonal action) if and only if |A ∩ B| = |A0 ∩ B0|. Proof of claim: The “only if” part is clear. Suppose |A ∩ B| = |A0 ∩ B0|. Consider the decomposition

{1, 2, . . . , n} = (A ∪ B){ t A \ (A ∩ B) t B \ (A ∩ B) t (A ∩ B) = (A0 ∪ B0){ t A0 \ (A0 ∩ B0) t B0 \ (A0 ∩ B0) t (A0 ∩ B0).

We can choose π ∈ Sn such that • π(A ∩ B) = A0 ∩ B0, • π(A \ (A ∩ B)) = A0 \ (A0 ∩ B0), and • π(B \ (A ∩ B)) = B0 \ (A0 ∩ B0), since we know that each pair of sets involved have the same cardinality. It follows that π(A, B) = (A0,B0). This proves the claim. Now define

Θj = {(A, B) ∈ Ωn−k,k × Ωn−k,k : |A ∩ B| = j}, 0 ≤ j ≤ k,

so that the decomposition of Ωn−k,k × Ωn−k,k into Sn-orbits is given by

k G Ωn−k,k × Ωn−k,k = Θj. j=0

(We use the fact that k ≤ n/2 here to conclude that the Θj are all nonempty.) Since |A ∩ B| = |B ∩ A|, every orbit is symmetric. So (Sn, Sn−k × Sk) is a symmetric Gelfand pair. Since there are precisely k + 1 orbits of Sn on Ωn−k,k × Ωn−k,k, Wielandt’s lemma (Corol- lary 1.4.4) implies that L(Ωn−k,k) decomposes into k + 1 pairwise inequivalent irreducible Sn-representations. When k = 1, this recovers the result of Example 1.4.5.

Suppose (ρ, W ) is an irreducible representation of G. Define dρ = dim W and suppose that W K is nontrivial. For every v ∈ W K , define a linear map s d T : W → L(X), (T u)(gx ) = ρ hu, ρ(g)vi , for all g ∈ G, u ∈ W. (1.30) v v 0 |X| W 44 Chapter 1. Representation theory of finite groups

This is defined on all of X since the action of G on X is transitive. Furthermore, if g, h ∈ G −1 and gx0 = hx0, then g h ∈ K, and thus s d (T u)(hx ) = ρ hu, ρ(h)vi v 0 |X| W s d = ρ hu, ρ(g)ρ(g−1h)vi |X| W s d = ρ hu, ρ(g)vi (since v ∈ V K ) |X| W

= (Tvu)(gx0).

Hence Tvu is well defined. Theorem 1.4.9 (Frobenius reciprocity for permutation representations). With notation as above, we have the following.

K (a) Tv ∈ HomG(W, L(X)) for all v ∈ W . (b) (Orthogonality relations) For all v, u ∈ W K and w, z ∈ W , we have

hTuw, TvziL(X) = hw, ziW hv, uiW . (c) The map K W → HomG(W, L(X)), v 7→ Tv, (1.31) is an antilinear isomorphism. In particular, the multiplicity of ρ in the permutation representation L(X) is equal to dim W K . Proof. (a) For g, h ∈ G and w ∈ W , we have

−1  (λ(g)Tvw)(hx0) = (Tvw) g hx0 s d = ρ hw, ρ(g−1)ρ(h)vi |X| W s d = ρ hρ(g)w, ρ(h)vi |X| W

= (Tvρ(g)w)(hx0).

Hence λ(g)Tv = Tvρ(g) for all g ∈ G, and so Tv ∈ HomG(W, L(X)). (b) For u, v ∈ W K , define a linear map

Ru,v : W → W, Ru,vw = hw, uiW v, for all w ∈ W.

Choosing an orthonormal basis {w1, . . . , wdρ } for W , we compute

dρ dρ * dρ + X X X tr(Ru,v) = hRu,vwj, wjiW = hwj, uiW hv, wjiW = v, hu, wjiwj = hv, uiW . j=1 j=1 j=1 W 1.4. Permutation representations 45

While Ru,v ∈ End(W ), in general Ru,v is not an element of EndG(W ). However, we can G project it onto EndG(W ) = End(W ) (see Corollary 1.2.16) to get 1 X R := ρ(g)R ρ(g−1) ∈ End (W ). (1.32) |G| u,v G g∈G (Here we use Lemma 1.3.5 and the action on End(W ) given by (1.21).) Since W is irreducible, by Schur’s lemma (Lemma 1.2.1) we have R = cIW for some c ∈ C. Taking the trace of both sides of (1.32), we have

cdρ = tr(cIW ) 1 X = tr ρ(g)R ρ(g−1) |G| u,v g∈G 1 X = tr(R ) |G| u,v g∈G

= hv, uiW . Thus c = 1 hv, ui , and so dρ W 1 R = hv, uiW IW . (1.33) dρ Then, for w, z ∈ W , we have X hTuw, TvziL(X) = (Tuw)(x)(Tvz)(x) x∈X 1 X = (T w)(gx )(T z)(gx ) (by (1.29)) |K| u 0 v 0 g∈G

dρ X = hw, ρ(g)ui hz, ρ(g)vi (since |K| · |X| = |G|) |G| W W g∈G

dρ X = hhw, ρ(g)ui ρ(g)v, zi |G| W W g∈G

dρ X = ρ(g)hρ(g−1)w, ui v, z |G| W W g∈G

dρ X = hρ(g)R ρ(g−1)w, zi |G| u,v W g∈G

= dρhRw, ziW (by (1.32))

= hw, ziW hv, uiW (by (1.33)). (c) The map (1.31) is antilinear since the bilinear form is antilinear in the second ar- gument. We now show that it is bijective. Fix T ∈ HomG(W, L(X)) and consider the composition of linear maps

T f7→f(x0) W −→ L(X) −−−−−→ C, u 7→ (T u)(x0), 46 Chapter 1. Representation theory of finite groups

As discussed in Section 1.1.5, this implies that there exists a unique v ∈ W such that

(T u)(x0) = hu, viW , for all u ∈ W.

Then

−1  (T u)(gx0) = λ(g )T u (x0) −1  = T ρ(g )u (x0)(T ∈ HomG(W, L(X))) −1 = hρ(g )u, viW

= hu, ρ(g)viW , which implies that s |X| T = Tv. dρ

We also have v ∈ W K since, for k ∈ K,

hu, ρ(k)viW = (T u)(kx0) = (T u)(x0) = hu, viW , for all u ∈ W,

and so ρ(k)v = v by the nondegeneracy of the scalar product. Since the vector v was uniquely determined by T , we see that (1.31) is bijective. Finally, by Lemma 1.2.5, the multiplicity of ρ in the permutation representation on L(X) is equal to K dim HomG(W, L(X)) = dim W .

Corollary 1.4.10. The pair (G, K) is a Gelfand pair if and only if dim W K ≤ 1 for every irreducible G-representation W . In particular, (G, K) is a Gelfand pair if and only if

dim W K = 1 ⇐⇒ W is a subrepresentation of L(X).

Exercises.

1.4.7 ([CSST10, Ex. 1.4.11]). (a) Show that, if 0 ≤ h ≤ k ≤ n/2, then Sn has precisely h + 1 orbits on Ωn−k,k × Ωn−h,h. Lk (b) Suppose that L(Ωn−k,k) = j=0 Vk,j is the decomposition of L(Ωn−k,k) into irreducible subrepresentations (see Example 1.4.8). Use part (a) and Exercise 1.4.6 to show that ∼ it is possible to number the representations Vk,0,...,Vk,j in such a way that Vh,j = Vk,j (as representations) for all j = 0, 1, . . . , h and 0 ≤ h ≤ k ≤ n/2. Hint: Every subrepresentation of L(Ωn−h,h) is also a subrepresentation of L(Ωn−k,k). 1.4. Permutation representations 47

1.4.4 The structure of the commutant of a permutation represen- tation

ρ The goal of this subsection is to give an explicit form for the operators Tk,j defined in (1.16). Recall that, by Theorem 1.2.10, these operators give a basis for the commutant EndG(V ). As in Section 1.4.3, we assume that G acts transitively on a finite set X. We fix x0 ∈ X, and let K be the stabilizer of x0, so that we can identify X with G/K. For ρ ∈ Gb, let mρ be the multiplicity of ρ in the permutation representation (λ, L(X)). K For each ρ ∈ Gbλ, by Theorem 1.4.9, we have dim Wρ = mρ, so we can choose an orthonormal basis ρ ρ {v1, . . . , vmρ }

K for Wρ . Let ρ T := T ρ ∈ Hom (W ,L(X)), ρ ∈ G , 1 ≤ j ≤ m , j vj G ρ bλ ρ be the intertwiners defined in (1.30) (which are intertwiners by Theorem 1.4.9(a)), so that s ρ  dρ ρ Tj u (gx0) = u, ρ(g)vj , for all g ∈ G, u ∈ Wρ. (1.34) |X| Wρ

Recall that if U and V are unitary spaces, an isometric immersion of U into V is a linear map T : U → V such that

hT u1, T u2iV = hu1, u2iU , for all u1, u2 ∈ U.

The term immersion comes from the fact that such a map is necessarily injective (Exer- cise 1.4.8).

Lemma 1.4.11. We have that

mρ M M ρ L(X) = Tj Wρ (1.35) j=1 ρ∈Gbλ is an orthogonal decomposition of L(X) into irreducible subrepresentations. Furthermore, ρ every Tj is an isometric immersion of Wρ into L(X).

L Lmρ ρ Proof. It follows from Theorem 1.4.9(b) that ρ∈G j=1 Tj Wρ is an orthogonal decompo- bλ ρ sition (i.e. the summands are orthogonal) and that the Tj are isometric immersions. Then, since

 mρ  mρ mρ M M ρ X X ρ X X dim  Tj Wρ = dim Tj Wρ = dim Wρ = dim L(X), j=1 j=1 j=1 ρ∈Gbλ ρ∈Gbλ ρ∈Gbλ we see that we have the equality (1.35). 48 Chapter 1. Representation theory of finite groups

Now, for ρ ∈ Gbλ and 1 ≤ i, j ≤ mρ, define

ρ ρ dρ ρ ρ φi,j ∈ L(X × X), φi,j(gx0, hx0) = ρ(h)vj , ρ(g)vi , g, h ∈ G. (1.36) |X| Wρ

ρ ρ ρ ρ G Note that φi,j is well defined since vi and vj are K-invariant. Moreover, φi,j ∈ L(X × X) , since, for all s, g, h ∈ G, we have

ρ dρ ρ ρ ρ φi,j(sgx0, shx0) = ρ(s)ρ(h)vj , ρ(s)ρ(g)vi = φi,j(gx0, hx0), |X| Wρ since ρ is unitary. Therefore, as in (1.27) and Corollary 1.4.3, we can define

ρ Φi,j ∈ EndG(L(X)), ρ  X ρ Φi,jf (x) = φi,j(x, y)f(y), for all f ∈ L(X), x ∈ X. y∈X

Lemma 1.4.12. For all g ∈ G and f ∈ L(X), we have p ! ρ dρ ρ X ρ Φi,jf = p Ti ρ(h)f(hx0)vj . |X| |K| h∈G Proof. We compute

ρ  X ρ Φi,jf (gx0) = φi,j(gx0, y)f(y) y∈X 1 X = φρ (gx , hx )f(hx ) |K| i,j 0 0 0 h∈G dρ X ρ ρ = ρ(h)vj , ρ(g)vi f(hx0) (by (1.36)) |K| |X| Wρ h∈G p dρ X ρ ρ = p Ti ρ(h)vj (gx0) · f(hx0) (by (1.34)) |X| |K| h∈G p ! dρ ρ X ρ = p Ti ρ(h)f(hx0)vj (gx0). |X| |K| h∈G

ρ ρ Using Lemma 1.4.12, we can now show that the Φi,j are the operators Ti,j of Theo- rem 1.2.10.

Theorem 1.4.13. For all σ, ρ ∈ Gbλ, 1 ≤ i, j ≤ mρ, and 1 ≤ s, r, ≤ mσ, we have ρ ρ (a) Im Φi,j = Ti Wρ, ρ σ ρ (b) Φi,jΦs,r = δj,sδρ,σΦi,r. ρ ρ ρ (c) Ker Φi,j = L(X) Tj Wρ (here means we remove the summand Tj Wρ from L(X)), and 1.4. Permutation representations 49

ρ Proof. (a) Since Wρ is irreducible, the G-invariant subspace generated by vj ∈ Wρ is all ρ ρ of Wρ. Thus, it follows from Lemma 1.4.12 that Im Φi,j = Ti Wρ. (b) For all g, h ∈ G, we compute the product:

ρ σ  φi,jφs,r (gx0, hx0) 1 X = φρ (gx , tx )φσ (tx , hx ) |K| i,j 0 0 s,r 0 0 t∈G dρdσ X = hρ(t)vρ, ρ(g)vρi hσ(h)vσ, σ(t)vσi (by (1.36)) |X|2|K| j i Wρ r s Wσ t∈G p dρdσ X = T ρρ(g)vρ (tx )(T σρ(h)vσ)(tx ) (by (1.34)) |X| |K| j i 0 s r 0 t∈G pd d = ρ σ T σρ(h)vσ, T ρρ(g)vρ |X| s r j i L(X)

dρ ρ ρ ρ ρ = δσ,ρ vj , vs hρ(h)vr , ρ(g)vi iW (Th. 1.4.9(b), Lem. 1.4.11) |X| Wρ ρ ρ = δσ,ρδj,sφi,r(gx0, hx0) (by (1.36)).

ρ σ ρ ρ Thus φi,jφs,r = δσ,ρδj,sφi,rφi,r. So the result follows from Lemma 1.4.1.

(c) Suppose σ ∈ Gbλ and 1 ≤ k ≤ mσ such that σ 6= ρ or that σ = ρ, j 6= k. Then, by parts (a) and (b), we have ρ σ ρ σ Φi,jTk Wσ = Im Φi,jΦk,k = 0. ρ Since Φi,j is not the zero map by part (a), it must be nonzero on the remaining irreducible ρ summand Tj Wρ.

ρ ρ Corollary 1.4.14. (a) The map Φi,i is the orthogonal projection onto Ti Wρ. Pmρ ρ (b) The map i=1 Φi,i is the orthogonal projection onto the ρ-isotypic component.

Exercises.

1.4.8. Prove that every isometric immersion is injective.

1.4.9. Prove that s d Φρ f (gx ) = ρ f, T ρρ(g)vρ , for all g ∈ G, f ∈ L(X). i,j 0 |X| j i L(X)

Hint: Follow the method of Lemma 1.4.12. 50 Chapter 1. Representation theory of finite groups

1.5 The group algebra and the Fourier transform

In this section we consider the special case of Section 1.4 where X = G. That is, we study the left regular representation (λ, L(G)) of a the finite group G.

1.5.1 The group algebra Recall the left regular representation λ and right regular representation ρ of G on L(G) from Example 1.1.4. In the language of Section 1.4, we take x0 = 1G, so that K = {1G}. We define a product ∗ on L(G) by

X −1 (f1 ∗ f2)(g) = f1(gh )f2(h), for all f1, f2 ∈ L(G), g ∈ G. (1.37) h∈G One can show that L(G) is a unital algebra with this product (Exercise 1.5.1). It is called the group algebra (or the convolution algebra) of G. The unit of this algebra is δ1G . Remark 1.5.1. The group algebra of G is sometimes defined to be set of formal linear com- binations ( ) X CG = αgg : αg ∈ C for all g ∈ G , g∈G with multiplication given by ! ! X X X αgg βhh = (αgβh)(gh). g∈G h∈G g,h∈G We have an obvious isomorphism

=∼ X X CG −→ L(G), αgg 7→ αgδg, g∈G g∈G so this is essentially the same construction.

Note that we can also express the convolution product as

X X −1 (f1 ∗ f2)(g) = f1(s)f2(t) = f1(h)f2(h g), for all f1, f2 ∈ L(G), g ∈ G. s,t∈G:st=g h∈G It follows that X (f1 ∗ f2) = f1(h)λ(h)f2. h∈G For g ∈ G and f ∈ L(G), we have X δg ∗ f = δg(h)λ(h)f2 = λ(g)f. (1.38) h∈G Similarly, −1 f ∗ δg = ρ(g )f. 1.5. The group algebra and the Fourier transform 51

In particular, δg ∗ δh = δgh, for all g, h ∈ G. (1.39) It follows that ρ(g)λ(h) = λ(h)ρ(g), for all g, h ∈ G. In other words, the left and right regular actions commute. Lemma 1.5.2. The map

L(G) → L(G), ψ 7→ ψ,ˇ where ψˇ(g) = ψ(g−1), for all g ∈ G, is an involution on the algebra L(G).

Proof. For ψ1, ψ2 ∈ L(G) and g ∈ G, we have

ˇ ˇ  X ˇ ˇ −1 ψ1 ∗ ψ2 (g) = ψ1(gs)ψ2(s ) s∈G X −1 −1 = ψ1(s g )ψ2(s) s∈G −1 = (ψ2 ∗ ψ1)(g ) ˇ = (ψ2 ∗ ψ1) (g). We leave it as an exercise to verify the remaining properties of an involution (Exercise 1.5.2).

Remark 1.5.3. Note that

f ∈ Z(L(G)) ⇐⇒ δg ∗ f = f ∗ δg, for all g ∈ G ⇐⇒ λ(g)f = ρ(g−1)f, for all g ∈ G ⇐⇒ f(g−1h) = f(hg−1), for all g, h ∈ G ⇐⇒ f is central.

Proposition 1.5.4. The map

L(G) → EndG(L(G)), ψ 7→ Tψ, where

Tψf = f ∗ ψ, for all f ∈ L(G), is a ∗-anti-isomorphism of algebras. Proof. We leave it as an exercise (Exercise 1.5.4) to show that the linear map

EndG(L(G)) → L(G),T 7→ T (δ1G ). (1.40)

is inverse to the linear map ψ 7→ Tψ. For ψ1, ψ2, f ∈ L(G), we have

Tψ1 (Tψ2 f) = (f ∗ ψ2) ∗ ψ1 = f ∗ (ψ2 ∗ ψ1) = Tψ2∗ψ1 f. 52 Chapter 1. Representation theory of finite groups

Thus Tψ1 Tψ2 = Tψ2∗ψ1 , and so the map ψ 7→ Tψ is an anti-multiplicative. Furthermore, for f1, f2, ψ ∈ L(G), we have

hTψf1, f2i = hf1 ∗ ψ, f2i X = (f1 ∗ ψ)(g) f2(g) g∈G X X −1 = f1(gs)ψ(s ) f2(g) g∈G s∈G X X −1 −1 = f1(t)ψ(s ) f2(ts )(t = gs) t∈G s∈G X X ˇ −1 = f1(t) ψ(s)f2(ts ) t∈G s∈G X ˇ = f1(t) (f2 ∗ ψ)(t) t∈G

= f ,T ˇf . 1 ψ 2 L(G)

∗ Hence (Tψ) = Tψˇ.

For every ρ ∈ Gb, fix an orthonormal basis

{vρ, vρ, . . . , vρ } 1 2 dρ

for the representation space Wρ. Then define the corresponding unitary matrix coefficients

ϕρ (g) = ρ(g)vρ, vρ . (1.41) i,j j i Wρ We deduced some basic properties of matrix coefficients in Lemma 1.1.12. In the case that the representations are irreducible, we have the following additional properties.

Lemma 1.5.5. Let ρ and σ be two irreducible representations of G. |G| (a) (Orthogonality relations) ϕρ , ϕσ = δ δ δ . i,j s,t L(G) ρ,σ i,s j,t dρ

ρ σ |G| ρ (b) ϕi,j ∗ ϕs,t = δρ,σδj,sϕi,t. dρ Proof. (a) We have

ρ σ X ρ σ hϕi,j, ϕs,tiL(G) = ϕi,j(g)ϕs,t(g) g∈G X ρ ρ σ σ = ρ(g)v , v hρ(g)vt , v i j i Wρ s Wρ g∈G s s |G| |G| X ρ ρ σ σ = (T ρ v )(g)(T σ v )(g) (see (1.30)) v i vt s dρ dσ j g∈G 1.5. The group algebra and the Fourier transform 53

s s |G| |G| D σ σ ρ ρE = T σ v , T ρ v vt s v i dρ dσ j L(G) |G| = δρ,σδi,sδj,t (Theorem 1.4.9(b)). dρ (b) We have

ρ σ  X ρ σ −1 ϕi,j ∗ ϕs,t (g) = ϕi,j(gh)ϕs,t(h ) h∈G

dρ X X ρ ρ σ = ϕi,k(g)ϕk,j(h)ϕt,s(h) (Lemma 1.1.12) h∈G k=1

dρ |G| X ρ = ϕi,kδρ,σδk,tδj,s (part (a)) dρ k=1

|G| ρ = δρ,σδj,sϕi,t(g). dρ Corollary 1.5.6. The set n ρ o ϕi,j : ρ ∈ G,b 1 ≤ i, j ≤ dρ is an orthogonal basis for L(G). Proof. The given elements are orthogonal, hence linearly independent, by Lemma 1.5.5. By Corollary 1.3.10, dim L(G) = P d2, which is equal to the cardinality of the given set. ρ∈Gb ρ

Exercises.

1.5.1. Prove that L(G) is an algebra under the product ∗ defined in (1.37) and componentwise addition and scalar multiplication. Prove that it is commutative if and only if G is abelian.

1.5.2. Complete the proof of Lemma 1.5.2.

1.5.3. Suppose A is a unital algebra and let Aop denote the opposite algebra. Precisely, Aop is equal to A as a vector space, but the multiplication in Aop is given by

a · b = ba, a, b ∈ A,

where · denotes the multiplication in Aop and juxtaposition denotes the multiplication in A. Note that an algebra anti-homomorphism A → B is the same as an algebra homomorphism A → Bop. Let End A denote the algebra of linear maps from A to A, and define

EndA A = {f ∈ End A : f(ab) = af(b) ∀ a, b ∈ A}. 54 Chapter 1. Representation theory of finite groups

This is a subalgebra of End A. (In fact, EndA A is the algebra of A-module endormorphisms of A, considered as a module over itself.)

∼ op (a) Show that EndA A = A as algebras.

(b) In the case that A is the group algebra L(G), prove that EndL(G) L(G) = EndG(L(G)). Here, EndG(L(G)) refers to the commutant of the left regular representation (λ, L(G)). (c) Using the above facts, give another proof that the map of Proposition 1.5.4 is an anti- isomorphism of algebras. (You are not asked to give a new proof that the map respects the involutions.)

1.5.4. Show that (1.40) is inverse to the map ψ 7→ Tψ of Proposition 1.5.4.

1.5.2 The Fourier transform Suppose (σ, V ) is a representation of G and recall the definition (Definition 1.3.12) of the Fourier transform of elements of L(G) at σ. For f1, f2 ∈ L(G), we have X σ(f1 ∗ f2) = (f1 ∗ f2)(g)σ(g) g∈G X −1 −1 = f1(gh)f2(h )σ(gh)σ(h ) (1.42) g,h∈G

= σ(f1)σ(f2).

The Fourier transform is the algebra homomorphism M M F : L(G) → A(G) := End(Wρ), f 7→ ρ(f). ρ∈Gb ρ∈Gb

We define a scalar product on A(G) by * + M M 1 X T , S = d tr T S∗ . (1.43) ρ σ |G| ρ ρ ρ ρ∈Gb σ∈Gb ρ∈Gb

(See Exercise 1.5.5.) Recall that {vρ, . . . , vρ } is an orthonormal basis for W . For ρ ∈ G and 1 ≤ i, j ≤ d , 1 dρ ρ b ρ define ρ ρ ρ ρ T ∈ A(G),T w = δρ,σ w, v v , w ∈ Wσ, σ ∈ G.b i,j i,j j Wρ i ρ ρ ρ σ Thus, the map Ti,j sends vj to vi and vk to zero for all σ 6= ρ, 1 ≤ k ≤ dσ or σ = ρ, k 6= ij. Theorem 1.5.7. The Fourier transform F is an isometric ∗-isomorphism between the al- gebras L(G) and A(G). Furthermore,

ρ |G| ρ F ϕi,j = Ti,j, for all ρ ∈ G,b 1 ≤ i, j ≤ dρ. (1.44) dρ 1.5. The group algebra and the Fourier transform 55

Proof. A proof of this statement can be found in [CSST10, Th. 1.5.11]. Let B0 ⊆ A(G) be the subalgebra consisting of elements that are diagonal in the bases {vρ, . . . , vρ }. Then B0 is a maximal commutative subalgebra of A(G). (See Exercise 1.5.6.) 1 dρ Define B = F −1(B0), so that B is a maximal commutative subalgebra of L(G). Note that B depends on our choice of bases for the Wρ, ρ ∈ Gb. ρ The primitive idempotent associated with the vector vj is the group algebra element d eρ := ρ ϕρ ∈ L(G). (1.45) j |G| j,j Thus ρ dρ ρ dρ ρ ρ ej (g) = ϕj,j(g) = ρ(g)wj , wj , for all g ∈ G. (1.46) |G| |G| Wρ Proposition 1.5.8. (a) The set

n ρ o ej : ρ ∈ G,b j = 1, 2, . . . , dρ is a vector space basis for B.

(b) For all ρ, σ ∈ Gb, 1 ≤ j ≤ dρ, 1 ≤ i ≤ dσ, we have ρ σ ρ ej ∗ ei = δρ,σδj,iej . ρ (So the ej are orthogonal idempotents.)

(c) For all ρ, σ ∈ Gb, 1 ≤ j ≤ dρ, 1 ≤ i ≤ dσ, we have ρ σ ρ σ(ej )vi = δρ,σδj,ivj . ρ ρ In particular, ρ(ej ): Wρ → Wρ is the orthogonal projection onto Cvj . (d) If f ∈ B satisfies ρ ρ ρ ρ(f)vj = λj vj , for all ρ ∈ Gb, 1 ≤ j ≤ dρ, ρ for some λj ∈ C, then

dρ X X ρ ρ f = λj ej (Fourier inversion formula in B) ρ∈Gb j=1 and ρ ρ ρ f ∗ ej = λj ej . Proof. (a) By (1.44) and (1.45), we have

ρ ρ F(ej ) = Tj,j,

ρ σ which is the diagonal matrix acting as 1 on vj and as 0 on vi for σ 6= ρ or i 6= j. Since such 0 ρ matrices form a basis for B , it follows from Theorem 1.5.7 that the ej form a basis for B. 56 Chapter 1. Representation theory of finite groups

(b) We have

d d eρ ∗ eσ = ρ σ ϕρ ∗ ϕσ j i |G|2 j,j i,i dρ = δ δ ϕρ (Lemma 1.5.5(b)) |G| ρ,σ j,i j,j ρ = δρ,σδj,iej . (c) We have

M ρ σ ρ σ ρ σ ρ ρ ρ ρ σ(ej )vi = Fej vi = Tj,jvi = δρ,σhvi , vj iWρ vj = δρ,σδi,jvj ∈ Wρ. σ∈Gb

Comparing Wσ components, we see that

ρ σ ρ σ(ej )vi = δρ,σδj,ivj . (d) Suppose f ∈ B satisfies

ρ ρ ρ ρ(f)vj = λj vj , for all ρ ∈ Gb, 1 ≤ j ≤ dρ,

ρ for some λj ∈ C. By part (a), we have

dρ X X ρ ρ f = µj ej ρ∈Gb j=1

ρ for some µj ∈ C. Then, for σ ∈ Gb and 1 ≤ i ≤ dσ, we have

σ σ σ λi vi = σ(f)vi

dρ X X ρ ρ σ = µj σ(ej )vi ρ∈Gb j=1

dρ X X ρ ρ = µj δρ,σδj,ivj (part (c)) ρ∈Gb j=1 σ σ = µi vi .

σ σ Thus µi = λi , as desired. Finally, we have   dσ ρ X X σ σ ρ ρ ρ f ∗ ej =  λi ei  ∗ ej = λj ej , σ∈Gb i=1 by part (b). 1.5. The group algebra and the Fourier transform 57

Exercises.

1.5.5. Prove that (1.43) defines a scalar product on A(G).

1.5.6. Let A be the subalgebra of Mn,n(C) consisting of diagonal matrices. Prove that A is a maximal commutative subalgebra of Mn,n(C). In other words, prove that if B is a commutative subalgebra of Mn,n(C) containing A, then B = A. 1.5.7 ([CSST10, Ex. 1.5.18]). (a) Consider the decomposition (1.35) in the case of the group algebra (so X = G with the left regular action). Show that ψ ∈ L(G) be- ρ σ longs to Tj Wρ if and only if ψ ∗ ei = δσ,ρδi,jψ for all σ ∈ Gb and 1 ≤ i ≤ dσ. ρ (b) Show that f ∈ L(G) belongs to B if and only if each subspace Tj Wρ is an eigenspace for P Pdρ ρ ρ the associated convolution operator Tf : ψ 7→ ψ ∗ f; moreover, if f ∈ ρ∈G j=1 λj ej , ρ ρ b then the eigenvalue of Tf corresponding to Tj Wρ is λj .

1.5.3 Algebras of bi-K-invariant functions

Suppose G acts transitively on a finite set X. Fix x0 ∈ X and let K be the stabilizer of x0. Then, as before, we can identify X with G/K. Let S be a set of representatives of the double cosets K\G/K of K in G. In other words, {KsK : s ∈ S} are the equivalences classes for the relation on G defined by

g ∼ h ⇐⇒ ∃ k1, k2 ∈ K such that g = k1hk2

So we have G G = KsK. s∈S For s ∈ S, define

Ωs := Ksx0 = KsKx0 = {ksx0 : k ∈ K},

Θs := G(x0, sx0) = {(gx0, gsx0): g ∈ G} ⊆ X × X. F Lemma 1.5.9. (a) X = s∈S Ωs is the decomposition of X into K-orbits. F (b) X × X = s∈S Θs is the decomposition of X × X into G-orbits (under the diagonal action).

Proof. (a) For g ∈ G, Kgx0 is the K-orbit of gx0. Furthermore, for h ∈ G, we have

Kgx0 = Khx0 ⇐⇒ ∃ k1 ∈ K such that gx0 = k1hx0 −1 ⇐⇒ ∃ k1 ∈ K such that (k1h) gx0 = x0 −1 ⇐⇒ ∃ k1, k2 ∈ K such that (k1h) g = k2

⇐⇒ ∃ k1, k2 ∈ K such that g = k1hk2 ⇐⇒ g ∈ KhK. 58 Chapter 1. Representation theory of finite groups

Since S is a set of representatives of K\G/K, it follows that Ksx0, s ∈ S, are the K-orbits on X.

(b) Suppose x, y ∈ X. Then x = g0x0 for some g0 ∈ G. Since the action of G on X is −1 transitive, we can choose g ∈ G such that gx0 = g0 y. Then

G(x, y) = G(g0x0, g0gx0) = G(x0, gx0).

Thus, every G-orbit on X × X is of the form G(x0, gx0) for some g ∈ G. Furthermore, for g, h ∈ G, we have

G(x0, gx0) = G(x0, hx0) ⇐⇒ ∃ k1 ∈ K such that (x0, gx0) = (x0, k1hx0)

⇐⇒ ∃ k1, k2 ∈ K such that g = k1hk2.

Definition 1.5.10 (Left, right and bi-K-invariant functions). Let K be a subgroup of a finite group G.

• f ∈ L(G) is right K-invariant if f(gk) = f(g) for all g ∈ G, k ∈ K. • f ∈ L(G) is left K-invariant if f(kg) = f(g) for all g ∈ G, k ∈ K. • f ∈ L(G) is bi-K-invariant if it is both left and right K-invariant.

We let L(G/K) denote the subspace of right K-invariant functions and let L(K\G/K) denote the subspace of bi-K-invariant functions. (Note that this notation corresponds to the fact that right K-invariant functions can be viewed as functions on G/K and vice versa, and similarly for bi-K-invariant functions.)

It is straightforward to verify (Exercise 1.5.9) that the space L(G/K) is a left ideal in L(G) and hence, by (1.38), it is invariant under the left regular action. It follows that we can restrict the permutation representation of G on L(G) to obtain a representation of G on L(G/K). Similarly, L(K\G) is a right ideal in L(G). It follows that L(K\G/K) is a subalgebra of L(G).

Theorem 1.5.11. (a) The map

L(X) → L(G/K), f 7→ f,˜ where ˜ f(g) = f(gx0), for all g ∈ G,

is an isomorphism of G-representations. (b) The map

L(X × X)G → L(K\G/K),F 7→ F,˜ where 1 F˜(g) = F (x , gx ), for all g ∈ G, |K| 0 0

is an isomorphism of algebras.

Proof. (a) This follows immediately from the isomorphism of G-sets X ∼= G/K. 1.5. The group algebra and the Fourier transform 59

(b) The map F 7→ F˜ is clearly linear. It then follows from Exercise 1.5.8 that it is an G isomorphism of vector spaces. Now suppose F1,F2 ∈ L(X × X) and let X F (x, y) = F1(x, z)F2(z, y), for all x, y ∈ X, z∈X

G so that F is the product of F1 and F2 in L(X × X) . Then, for all g ∈ G, 1 F˜(g) = F (x , gx ) |K| 0 0 1 X = F (x , hx )F (hx , gx ) (note we sum over G) |K|2 1 0 0 2 0 0 h∈G 1 X = F (x , hx )F (x , h−1gx ) since F ∈ L(X × X)G |K|2 1 0 0 2 0 0 2 h∈G X ˜ ˜ −1 = F1(h)F2(h g) h∈G  ˜ ˜  = F1 ∗ F2 (g).

By Theorem 1.5.11 and Lemma 1.5.9 we have

L(X × X)G ∼= L(K\G/K) ∼= L(X)K , where the first isomorphism is one of algebras, and the second is one of vector spaces. Using the second isomorphism, one can endow L(X)K with the structure of an algebra. Corollary 1.5.12. We have that (G, K) is a Gelfand pair if and only if the algebra L(K\G/K) is commutative. Proof. We have

(G, K) is a Gelfand pair ⇐⇒ L(X) is multiplicity free (definition of Gelfand pair)

⇐⇒ EndG(L(X)) is commutative (Corollary 1.2.13) ⇐⇒ L(X × X)G is commutative (Corollary 1.4.3) ⇐⇒ L(K\G/K) is commutative (Theorem 1.5.11(b)).

We are now able to prove the general form of Gelfand’s lemma (see Proposition 1.4.6). Proposition 1.5.13 (Gelfand’s lemma). Suppose G is a finite group and K ≤ G is a sub- group. Furthermore, suppose there exists an τ of G such that g−1 ∈ Kτ(g)K for all g ∈ G. Then (G, K) is a Gelfand pair.

−1 Proof. If f ∈ L(K\G/K), then f(τ(g)) = f(g ) for all g ∈ G. Thus, for all f1, f2 ∈ L(K\G/K) and g ∈ G, we have

X −1 (f1 ∗ f2)(τ(g)) = f1(τ(g)s)f2 s s∈G 60 Chapter 1. Representation theory of finite groups

X −1 −1 = f1(τ(gh))f2(τ(h )) (h = τ (s)) h∈G X −1 = f1 (gh) f2(h) h∈G X −1 −1 = f2(h)f1 h g h∈G −1 = (f2 ∗ f1) g

= (f2 ∗ f1)(τ(g)),

and so L(K\G/K) is commutative. Then the result follows from Corollary 1.5.12.

We say that (G, K) is a weakly symmetric Gelfand pair if it satisfies the hypotheses of Proposition 1.5.13.

Example 1.5.14. The group G × G acts on G by

−1 (g1, g2) · g = g1gg2 , for all g1, g2, g ∈ G.

The stabilizer of 1G is the diagonal subgroup

G˜ = {(g, g): g ∈ G} ≤ G × G, and thus G ∼= (G × G)/G˜ as G-sets. Now consider the flip automorphism

τ : G × G → G × G, τ(g1, g2) = (g2, g1), for all (g1, g2) ∈ G × G.

Then, for all g1, g2 ∈ G, we have

−1 −1 −1 −1 −1 −1 −1 ˜ ˜ (g1, g2) = g1 , g2 = g1 , g1 (g2, g1) g2 , g2 ∈ Gτ(g1, g2)G.

Therefore, (G × G, G˜) is a weakly symmetric Gelfand pair.

Exercises.

1.5.8 ([CSST10, Ex. 1.5.20]). For every orbit Ω of K on X, set

ΘΩ = {(gx0, gx): g ∈ G, x ∈ Ω}.

Show that the map Ω 7→ ΘΩ is a bijection from the set of K-orbits on X to the set of G-orbits on X × X.

1.5.9. Verify that L(G/K) is a left ideal in L(G) and that L(K\G) is a right ideal in L(G). Then prove that L(K\G/K) is a subalgebra of L(G). 1.6. Induced representations 61

1.5.10 ([CSST10, Ex. 1.5.25]). Show that (G, K) is a symmetric Gelfand pair if and only if −1 g ∈ KgK for all g ∈ G. Note that this corresponds to the case τ = IG in Proposition 1.5.13. 1.5.11 ([CSST10, Ex. 1.5.27]). A group G is ambivalent if g−1 is conjugate to g for every g ∈ G. We adopt the notation of Example 1.5.14. Show that the Gelfand pair (G × G, G˜) is symmetric if and only if G is ambivalent.

1.6 Induced representations

1.6.1 Definitions and examples Let K be a subgroup of G and let (ρ, V ) be a representation of K. Consider the action of K on G given by k · g = gk−1, k ∈ K, g ∈ G.

Definition 1.6.1 (Induced representation). The representation induced by (ρ, V ) is the G-representation (σ, Z) defined by

 −1 Z = FunK (G, V ) := f : G → V : f(gk) = ρ(k )f(g), for all g ∈ G, k ∈ K (1.47) and −1  (σ(g1)f)(g2) = f g1 g2 , for all g1, g2 ∈ G, f ∈ Z. (1.48) (Note that Z is a vector space under pointwise addition and scalar multiplication, and that σ(g)f ∈ Z for all g ∈ G and f ∈ Z.) We introduce the notation

G G IndK ρ = σ and IndK V = Z. One can also give another description of induced representations. Fix a set S of repre- sentatives for the set G/K of right cosets of K in G, so that G G = sK. (1.49) s∈S For every v ∈ V , define ( ρ (g−1) v if g ∈ K, fv : G → V, fv(g) = (1.50) 0 if g∈ / K.

Then fv ∈ Z. Furthermore, the space ˜ V = {fv : v ∈ V } (1.51) is a K-invariant subspace of Z and the map ˜ V → V , v 7→ fv, is an isomorphism of K-representations. (See Exercise 1.6.1.) 62 Chapter 1. Representation theory of finite groups

We claim that M Z = σ(s)V˜ (1.52) s∈S as vector spaces. Indeed, for every f ∈ Z and s ∈ S, let vs = f(s). Then we have X f = σ(s)fvs (1.53) s∈S and this is the unique way to write f as a sum of elements of σ(s)V˜ , s ∈ S. (See Exer- cise 1.6.1.) Furthermore, for all g ∈ G and s ∈ S, by (1.49) there exist unique elements ts ∈ S and ks ∈ K such that gs = tsks. Then we have X X X σ(g)f = σ(gs)fvs = σ(ts)σ(ks)fvs = σ(ts)fρ(ks)vs , s∈S s∈S s∈S where in the last equality we used the fact that the map v 7→ fv is an isomorphism of K-representations. The following lemma is a converse to the above.

Lemma 1.6.2. Let K ≤ G, and let S be a set of representatives of G/K. Let (τ, W ) be a representation of G. Suppose that V ≤ W is a K-invariant subspace and that M W = τ(s)V s∈S

∼ G as vector spaces. Then W = IndK V as representations of G. ˜ Proof. Define V as in (1.51). For each s ∈ S, let vs = f(s). Then we have the isomorphism of vector spaces.

M M ˜ X X W = τ(s)V → σ(s)V = Z, τ(s)vs 7→ σ(s)fvs , vs ∈ V ∀ s ∈ S. (1.54) s∈S s∈S s∈S s∈S

For g ∈ G, we have

! X X X τ(g) τ(s)vs = τ(gs)vs = τ(ts)τ(ks)vs s∈S s∈S s∈S X X X 7→ σ(ts)fτ(ks)vs = σ(ts)σ(ks)fvs = σ(g) σ(s)fvs . s∈S s∈S s∈S

Hence (1.54) is an isomorphism of G-representations.

Recall that the index of the subgroup K ≤ G, is defined to be [G : K] = |G/K|. Since |S| = |G/K|, it follows immediately from Lemma 1.6.2 that

G  dim IndK V = [G : K] dim V. (1.55) 1.6. Induced representations 63

Proposition 1.6.3. Suppose K ≤ G and let X = G/K. Then the permutation represen- G tation of G on L(X) is isomorphic to IndK ι, where (ι, C) is the trivial representation of K.

Proof. By definition,

G IndK C = {f : G → C : f(gk) = f(g), for all g ∈ G, k ∈ K} is the space of right K-invariant functions on G. Thus the proposition follows from Theo- rem 1.5.11(a).

Exercises.

1.6.1. (a) Prove that fv, as defined in (1.50), is an element of Z. (b) Prove that V˜ , as defined in (1.51), is a K-invariant subspace of Z. ˜ (c) Prove that the map V → V , v 7→ fv, is an isomorphism of K-representations. (d) Prove that equality (1.53) holds and that this is the unique way to write f ∈ Z as a sum of elements of σ(s)V˜ , s ∈ S. Hint: f is uniquely determined by its values on S.

1.6.2 ([CSST10, Ex. 1.6.3]). Suppose K is a subgroup of G and let S be a set of representa- tives of G/K. Let (π, W ) be a representation of G, suppose that V ≤ W is K-invariant, and denote by (ρ, V ) the corresponding K-representation. Prove that if W = hπ(s)V : s ∈ Si, G then there exists a surjective intertwiner from IndK ρ to π. G Hint: If we let σ = IndK ρ, then the required surjective intertwiner is the map

σ(s)fv 7→ π(s)v, s ∈ S, v ∈ V,

extended by linearity.

1.6.2 First properties of induced representations Induction is transitive in the following sense.

Proposition 1.6.4 (Transitivity of induction). Suppose K ≤ H ≤ G are subgroups and let (ρ, V ) be a representation of K. Then

G H  ∼ G IndH IndK V = IndK V

as representations of G. 64 Chapter 1. Representation theory of finite groups

H Proof. Let σ = IndK ρ. Consider the linear map G H  IndH IndK V = FunH (G, FunK (H,V )) → Fun(G, V ) := {f : G → V }, ˜ ˜ f 7→ f, where f(g) = f(g)(1G), g ∈ G.

For k ∈ K, g ∈ G, and f ∈ HomH (G, HomK (H,V )), we have ˜ f(gk) = f(gk)(1G) −1  = σ(k )f(g) (1G) (since f ∈ HomH (G, HomK (H,V ))) = f(g)(k) (by (1.48)) −1  = ρ(k ) f(g)(1G) (since f(g) ∈ HomK (H,V )) = ρ(k−1)f˜(g) ˜ Thus, f ∈ FunK (G, V ). G G Let η = IndH σ and π = IndK ρ. Then, for g1, g2 ∈ G and f ∈ HomH (G, HomK (H,V )), we have ˜ ˜ −1  π(g1)f (g2) = f g1 g2 −1  = f g1 g2 (1G)  = η(g1)f (g2)(1G)  = η^(g1)f (g2).

Thus the map f 7→ f˜ is an intertwiner

G H  G IndH IndK V = FunH (G, HomK (H,V )) → FunK (G, V ) = IndK V. Since

G H  H dim IndH IndK V = [G : H] dim IndK V = [G : H][H : K] dim V G = [G : K] dim V = IndK V, to show that f 7→ f˜ is an isomorphism, it suffices to prove that it is injective. Suppose f˜ = 0. Then, for all g ∈ G and h ∈ H, we have −1  ˜ f(g)(h) = σ(h )f(g) (1G) = f(gh)(1G) = f(gh). Thus f = 0, as desired. Theorem 1.6.5 (Frobenius character formula for induced representations). Let (ρ, W ) be a representation of K, where K ≤ G. Then

IndG ρ X ρ −1  χ K (g) = χ s gs , (1.56) s∈S:s−1gs∈K where S is any system of representatives for G/K. Proof. A proof of this result can be found in [CSST10, Th. 1.6.7]. 1.6. Induced representations 65

1.6.3 Frobenius reciprocity The following fundamental result gives a precise relationship between the operations of in- duction and restriction. Theorem 1.6.6 (Frobenius reciprocity). Let G be a finite group, K ≤ G a subgroup, (σ, W ) a representation of G, and (ρ, V ) a representation of K. Then we have an isomorphism of vector spaces

G  G  HomG W, IndK V → HomK ResK W, V ,T 7→ T,b where

Tb: W → V, Tb w = (T w)(1G).

G  G Proof. We first check that Tb ∈ HomK ResK W, V . Let τ = IndK ρ. For k ∈ K and w ∈ W , we have  Tb σ(k)w = T (σ(k)w) (1G)  G  = τ(k)(T w) (1G) T ∈ HomG IndK V = (T w)(k−1) (by (1.48))  = ρ(k) T w(1G) (by (1.47)) = ρ(k)Tb w, as desired. Now consider the map G  G  ˇ HomK ResK W, V → HomG W, IndK V ,U 7→ U, where Uwˇ  (g) = Uσ(g−1)w, for all g ∈ G, w ∈ W.

ˇ G  It is straightforward to verify that U ∈ HomG W, IndK V (Exercise 1.6.3). Let T ∈ G  HomG W, IndK V , and set U = Tb. Then

ˇ  −1 −1  −1  Uw (g) = Tb σ(g )w = T σ(g )w (1G) = τ(g )(Tw) (1G) = (T w)(g). ˇ G  ˇ Thus U = T . Similarly, let U ∈ HomK ResK W, V and set T = U. One can verify that Tb = U (Exercise 1.6.4). Remark 1.6.7. In the language of category theory, Theorem 1.6.6 (together with a “na- turality” statement) says that induction is right adjoint to restriction. Induction is also left adjoint to restriction, but this is an easier result that holds in a much greater level of generality. The fact that induction is right adjoint is an important property of what are called Frobenius extensions. The group algebras of nested finite groups are special cases of a Frobenius extensions.

Corollary 1.6.8. Suppose, in the setting of Theorem 1.6.6 that W and V are irreducible. G G Then the multiplicity of W in IndK V is equal to the multiplicity of V in ResK W . Proof. This follows immediately from Theorem 1.6.6 and Lemma 1.2.5. 66 Chapter 1. Representation theory of finite groups

Exercises.

ˇ G  1.6.3. Prove that U, as defined in the proof of Theorem 1.6.6 is an element of HomG W, IndK V .

G  1.6.4. In the notation of the proof of Theorem 1.6.6, let U ∈ HomK ResK W, V and set T = Uˇ. Prove that Tb = U.

1.6.4 Mackey’s lemma and the intertwining number theorem We now discuss a sort of “commutation” property for induction and restriction. Suppose H and K are two subgroups of G and let (ρ, W ) be a representation of K. Let S be a system of representatives for the double cosets in H\G/K, so that G G = HsK. s∈S For each s ∈ S, let −1 Gs = sKs ∩ H ≤ G, and define a representation (ρs,Ws) of Gs by setting Ws = W and

−1 ρs(t)w = ρ(s ts)w, for all t ∈ Gs, w ∈ Ws.

Theorem 1.6.9 (Mackey’s lemma). With notation as above, we have an isomorphism of H-representations G G ∼ M H ResH IndK ρ = IndGs ρs. s∈S Proof. Let

 0 −1 0 Zs = F : G → W : F (hs k) = δs,s0 ρ(k )F (hs) ∀ h ∈ H, k ∈ K, s ∈ S .

G Comparing to (1.47), we see that Zs is the subspace of Z = IndK W consisting of those functions that vanish outside HsK. Thus M Z = Zs as vector spaces. s∈S

For each s ∈ S, the space Zs is H-invariant (Exercise 1.6.6(a)). Therefore, it suffices to prove that, for all s ∈ S,

∼ H Zs = IndGs Ws as representations of H. Consider the linear map

Zs → Fun(H,W ) = {f : H → W },F 7→ fs, where fs(h) = F (hs), ∀ h ∈ H. (1.57) 1.6. Induced representations 67

−1 For t ∈ Gs, we have s ts ∈ K, and so

−1  −1 −1  −1 fs(ht) = F (hts) = F hss ts = ρ s t s F (hs) = ρs t fs(h).

H Thus fs ∈ FunGs (H,Ws) = IndGs Ws. Hence (1.57) is a linear map

H Zs → IndGs Ws. We will now construct the inverse to (1.57). First consider the map

H IndGs Ws → Fun(G, W ), f 7→ Fs ∈ Fun(G, W ), where (1.58) 0 −1 0 Fs(hs k) = δs,s0 ρ(k )f(h), for all k ∈ K, h ∈ H, s ∈ S.

We must verify that Fs is well defined. If hsk = h1sk1 for some h, h1 ∈ H and k, k1 ∈ K, −1 −1 −1 then t := skk1 s = h h1 ∈ Gs, and so

−1 −1 −1 −1   −1 −1 −1 −1 ρ k1 f(h1) = ρ k ρ s h h1s f(h1) (since k1 = k s h h1s) −1 = ρ k (ρs(t)f(h1)) −1 −1 = ρ k f h1t = ρ k−1 f(h).

0 Thus Fs is well defined. For h ∈ H, k ∈ K, and s ∈ S, we have

0 −1 −1 Fs(hs k) = δs,s0 ρ(k )f(h) = δs,s0 ρ(k )Fs(hs),

so Fs ∈ Zs. Hence (1.58) is a linear map

H IndGs Ws → Zs. It is straightforward to verify that the maps (1.57) and (1.58) are mutually inverse and that (1.57) intertwines the H-action (Exercise 1.6.6(b)). This completes the proof. Let us now consider the special case where (ρ, W ) is the permutation representation on L(X), with X = G/K. Let x0 = K ∈ G/K, so that K is the stabilizer of x0. As above, we let S be a set of representatives for the double cosets H\G/K. By Exercise 1.6.5, Gs is the stabilizer of sx0 for each s ∈ S. Then G X = Ωs, Ωs = Hsx0 = {hsx0 : h ∈ H}, s∈S is the decomposition of X into H-orbits. Now let ρ = ιK be the trivial representation of K. Then ρs is the trivial representation H of Gs, and IndGs ρs is the permutation representation of H on L(Ωs) by Proposition 1.6.3. Thus, Mackey’s lemma gives the decomposition

G M ResH L(X) = L (Ωs) . (1.59) s∈S We finish this chapter with an important application of Mackey’s lemma. 68 Chapter 1. Representation theory of finite groups

Theorem 1.6.10 (Intertwining number theorem). With the same hypotheses as in Mackey’s lemma (Theorem 1.6.9), assume that σ is a representation of H. Then

G G  X H  dim HomG IndH σ, IndK ρ = dim HomGs ResGs σ, ρs . s∈S Proof. We have

G G  dim HomG IndH σ, IndK ρ G G  = dim HomH σ, ResH IndK ρ (Frobenius reciprocity (Theorem 1.6.6)) X H  = dim HomH σ, IndGs ρs (Mackey’s lemma (Theorem 1.6.9)) s∈S X H  = dim HomGs ResGs σ, ρs (Frobenius reciprocity (Theorem 1.6.6)). s∈S

Exercises.

1.6.5 ([CSST10, Ex. 1.6.13]). Identify H\G/H with the set of H-orbits on X = G/K. Prove that Gs is the stabilizer in H of the point xs = sK. (Compare with Lemma 1.5.9.)

1.6.6. (a) Prove that Zs is H-invariant. (b) Verify that (1.57) and (1.58) are mutually inverse, and that (1.57) intertwines the H-action. Chapter 2

The theory of Gelfand–Tsetlin bases

Our goal in this chapter is to develop the theory of Gelfand–Tsetlin bases for group algebras and permutation representations. These are bases that are well suited to the restriction of a representation to certain chains of subgroups. We closely follow the presentation in [CSST10, Ch. 2]. Throughout this chapter we again suppose that G is a finite group.

2.1 Algebras of conjugacy invariant functions

2.1.1 Conjugacy invariant functions Suppose H is a subgroup of G. A function f ∈ L(G) is H-conjugacy invariant if

f h−1gh = f(g), for all h ∈ H, g ∈ G.

We let C(G, H) denote the set of all H-conjugacy invariant functions on G. This is a subalgebra of L(G) (under convolution). Indeed, for f1, f2 ∈ C(G, H), we have

−1  X −1  −1 (f1 ∗ f2) h gh = f1 h ghs f2 s s∈G X −1 −1 −1 = f1 ghsh f2 hs h (since f1, f2 ∈ C(G, H)) s∈G X −1 −1 = f1(gt)f2 t t = hsh t∈G

= (f1 ∗ f2)(g).

Hence f1 ∗ f2 ∈ C(G, H). Note that C(G, G) is the algebra of central functions on G (Defi- nition 1.3.1). Consider the action of G × H on G defined by

−1 (g, h) · g0 = gg0h , for all g, g0 ∈ G, h ∈ H. (2.1)

We denote the associated permutation representation by η, so that

 −1  η(g, h)f (g0) = f g g0h , for all f ∈ L(G), g, g0 ∈ G, h ∈ H. (2.2)

69 70 Chapter 2. The theory of Gelfand–Tsetlin bases

Lemma 2.1.1. (a) The stabilizer of 1G under the action (2.1) is

H˜ = {(h, h): h ∈ H} ≤ G × H.

(b) Let LH˜ \(G × H)/H˜  denote the algebra of bi-H˜ -invariant functions on G × H. Then we have an isomorphism of algebras ˜ ˜  Φ: L H\(G × H)/H → C(G, H), Φ(F )(g) = |H| F (g, 1G).

Proof. Part (a) is clear since

−1 1G = (g, h) · 1G = gh ⇐⇒ g = h.

We now prove part (b). Suppose F ∈ LH˜ \(G × H)/H˜  and let f = Φ(F ). Then

f h−1gh = |H| F h−1gh, h−1h ˜ = |H| F (g, 1G) (since F is bi-H-invariant) = f(g).

Thus f ∈ C(G, H). It is clear that Φ is linear. To see that Φ is injective, note that

−1  ˜ F (g, h) = F gh , 1G (since F is right H-invariant) 1 = f gh−1 (where f = Φ(F )). |H| Thus F is uniquely determined by f. To prove Φ is surjective, let f ∈ C(G, H). Then, if we 1 −1 ˜ define F (g, h) = |H| f (gh ), we have that F is bi-H-invariant, and f = Φ(F ). ˜ ˜  It remains to prove that Φ is multiplicative. Let F1,F2 ∈ L H\(G × H)/H . Then, for all g ∈ G, we have  Φ(F1 ∗ F2) (g) = |H| (F1 ∗ F2)(g, 1G) X X −1 −1 = |H| F1(gs, h)F2 s , h s∈G h∈H X X −1 −1  ˜ = |H| F1(gsh , 1G)F2 hs , 1G (F1,F2 are bi-H-invariant) s∈G h∈H 2 X −1  −1  = |H| F1(gt, 1G)F2 t , 1G sh = t t∈G X −1  = (Φ(F1)(gt)) Φ(F2)(t ) t∈G  = Φ(F1) ∗ Φ(F2) (g).

We now wish to decompose the permutation representation η of (2.2) into irreducible G × H-subrepresentations. Recall that, by Theorem 1.3.18, every irreducible representation of G × H is of the form σ  ρ for some σ ∈ Gb and ρ ∈ Hb. Note also that the adjoint of G G 0 0 ResH σ is ResH σ , where σ is the adjoint of σ (Exercise 2.1.1). 2.1. Algebras of conjugacy invariant functions 71

Theorem 2.1.2. Suppose (σ, V ) ∈ Gb and (ρ, W ) ∈ Hb. Let (ρ0,W 0) ∈ Hb denote the adjoint of (ρ, W ). Then we have an isomorphism of vector spaces

G 0 ˜ HomG×H (σ  ρ, η) → HomH ρ, ResH σ ,T 7→ T, where ˜   T w (v) = T (v ⊗ w) (1G), for all v ∈ V, w ∈ W.

Proof. Note that a linear map T : V ⊗ W → L(G) lies in HomG×H (σ  ρ, η) if and only if   −1  T (σ(g)v ⊗ ρ(h)w) (g0) = T (v ⊗ w) g g0h , ∀ g, g0 ∈ G, h ∈ H, v ∈ V, w ∈ W. (2.3)

Let T ∈ HomG×H (σ  ρ, η). For h ∈ H, v ∈ V , and w ∈ W , we have  ˜   T ρ(h)w (v) = T (v ⊗ ρ(h)w) (1G) = T (v ⊗ w)(h) (by (2.3)) −1  = T (σ(h )v ⊗ w) (1G) (by (2.3))   = T˜ w σ h−1 v   = σ0(h)T˜ w (v) (by (1.9)).

Thus σ0(h)T˜ = T˜ ρ(h), for all h ∈ H, ˜ G 0 and so T ∈ HomH ρ, ResH σ , as desired. To see that the map T 7→ T˜ is injective, note that, by (2.3), we have    −1  ˜ −1  T (v ⊗ w) (g) = T σ g v ⊗ w (1G) = T w σ g v . (2.4)

Thus T is uniquely determined by T˜. Since the map T 7→ T˜ is clearly linear, it remains to show it is surjective. Suppose G 0 S ∈ HomH ρ, ResH σ . Define T ∈ Hom(V ⊗ W, L(G)) by

 −1  T (v ⊗ w) (g0) = (Sw) σ g0 v , for all g0 ∈ G, v ∈ V, w ∈ W. (2.5) Then, for all g ∈ G, h ∈ H, v ∈ V , and w ∈ W , we have

 −1   T (σ(g)v ⊗ ρ(h)w) (g0) = (Sρ(h)w) σ g0 g v 0 −1   G 0 = (σ (h)Sw) σ g0 g v S ∈ HomH ρ, ResH σ −1 −1   = (Sw) σ h g0 g v (by (1.9)) −1  = (T (v ⊗ w)) g g0h .

Hence T satisfies (2.3), and so T ∈ HomG×H (σ  ρ, η). Comparing (2.4) and (2.5), we see that T˜ = S.

G 0 Corollary 2.1.3. The multiplicity of σ  ρ in η is equal to the multiplicity of ρ in ResH σ . Proof. This follows from Lemma 1.2.5 and Theorem 2.1.2. 72 Chapter 2. The theory of Gelfand–Tsetlin bases

For σ ∈ Gb and ρ ∈ Hb, let

G 0 mρ,σ = dim HomH ρ, ResH σ

G 0 denote the multiplicity of ρ in ResH σ . Corollary 2.1.4. The decomposition of η into irreducible subrepresentations of G × H is given by ∼ M M ⊕mρ,σ η = (σ  ρ) . σ∈Gb ρ∈Hb We now consider the case where H = G, so that (η, L(G)) is a representation of G × G. We showed in Example 1.5.14 that (G × G, G˜) is a Gelfand pair. This also follows from the following result.

Corollary 2.1.5. Suppose that H = G. We have a decomposition into G × G-invariant subspaces M L(G) = Mρ, ρ∈Gb where Mρ is the subspace of L(G) spanned by all matrix coefficients

ϕ(g) = hρ(g)v, wiWρ , v, w ∈ Wρ.

0 Furthermore, the action of G × G on the summand Mρ is isomorphic to ρ  ρ. Hence we have an isomorphism of G × G-representations

∼ M 0 η = ρ  ρ. ρ∈Gb Proof. We have ( 0 0 1 if ρ ∼ σ , mρ,σ = dim HomG(ρ, σ ) = 0 otherwise.

0 0 Define T ∈ HomG×G(ρ  ρ, η) as in (2.5) with σ = ρ and S = IWρ . Then, for all v, w ∈ Wρ and g ∈ G, we have

 0 −1  T (θv ⊗ w) (g) = ρ g θv (w)

= θv(ρ(g)w) = hρ(g)w, vi,

where, in the first equality, we use the natural identification of (ρ0)0 with ρ (i.e. of the double dual of a vector space with the vector space itself.) Hence T (θv ⊗ w) ∈ Mρ. 0 2 Since ρ  ρ is irreducible and dim Mρ = dρ (a basis for Mρ is given by the matrix ρ 0 coefficients ϕi,j, 1 ≤ i, j ≤ dρ, by Corollary 1.5.6), it follows that T (Wρ ⊗ Wρ) = Mρ, completing the proof. 2.1. Algebras of conjugacy invariant functions 73

Exercises.

0 G 2.1.1. Prove that if σ is a representation of G with adjoint σ , then the adjoint of ResH σ is G 0 ResH σ . ∼ 0 2.1.2. (a) Using the properties of the matrix coefficients, prove directly that Mρ = Wρ ⊗ Wρ, in the language of the proof of Corollary 2.1.5. Then deduce Corollary 2.1.5. (b) Use part (a) to prove Corollary 2.1.4.

2.1.2 Multiplicity-free subgroups Definition 2.1.6 (Multiplicity-free subgroup). A subgroup H of G is said to be multiplicity G free if, for every σ ∈ Gb, the restriction ResH σ is multiplicity free or, equivalently, if

G  dim HomH ρ, ResH σ ≤ 1, for all ρ ∈ H,b σ ∈ G.b

Theorem 2.1.7. The following conditions are equivalent:

(a) The algebra C(G, H) is commutative. (b) (G × H, H˜ ) is a Gelfand pair. (Recall that H˜ = {(h, h): h ∈ H}.) (c) H is a multiplicity-free subgroup of G.

Proof. The equivalence of (b) and (c) follows from Corollary 2.1.4, Lemma 2.1.1(a), and the definition of a Gelfand pair (Definition 1.4.7). The equivalence of (a) and (b) follows from Lemma 2.1.1(b) and Corollary 1.5.12.

Remark 2.1.8. (a) When H = G, the conditions of Theorem 2.1.7 are always satisfied: •C (G, G) is the space of central functions, which is the centre of the group algebra L(G) (see Remark 1.5.3). Thus it is commutative. • (G × G, G˜) is a Gelfand pair by Example 1.5.14. • Clearly G is a multiplicity-free subgroup of itself.

(b) When H = {1G}, the conditions in Theorem 2.1.7 are equivalent to G being abelian since C(G, {1G}) is commutative if and only if G is commutative. (This follows, for example, from (1.39).)

Proposition 2.1.9. We have that (G × H, H˜ ) is a symmetric Gelfand pair if and only if

∀ g ∈ G ∃ h ∈ H such that hgh−1 = g−1

(that is, every element of G is H-conjugate to its inverse). Moreover, if this is the case, then H is a multiplicity free subgroup of G. 74 Chapter 2. The theory of Gelfand–Tsetlin bases

Proof. By Exercise 1.5.10, the pair (G × H, H˜ ) is symmetric if and only if for all (g, h) ∈ G × H, there exist h1, h2 ∈ H such that

−1 −1 −1 g , h = (g, h) = (h1, h1)(g, h)(h2, h2) = (h1gh2, h1hh2). (2.6)

−1 −1 −1 Taking h = 1G, we obtain h2 = h1 and so g = h1gh1 . To prove the converse implication, suppose that every element of G is H-conjugate to its inverse. Then, for (g, h) ∈ G × H, we can choose t ∈ H such that

gh−1−1 = t gh−1 t−1.

−1 −1 −1 Thus, taking h1 = h t and h2 = h t , we see that

−1 −1 −1 −1 −1 −1 −1 −1 −1 −1 −1 (h1gh2, h1hh2) = h tgh t , h thh t = h hg , h = g , h .

Hence (2.6) is satisfied.

Exercises.

2.1.3. Suppose G is abelian. Show that (G × H, H˜ ) is a symmetric Gelfand pair if and only 2 if every g ∈ G satisfies g = 1G.

2.2 Gelfand–Tsetlin bases

2.2.1 Branching graphs and Gelfand–Tsetlin bases A chain

{1G} = G1 ≤ G2 ≤ · · · ≤ Gn−1 ≤ Gn = G (2.7) of subgroups is said to be multiplicity free if Gk−1 is a multiplicity-free subgroup of Gk for 1 < k ≤ n. Note that, by Remark 2.1.8((b)), if (2.7) is multiplicity free, then G2 is abelian. From now on, we fix a multiplicity-free chain (2.7). The branching graph of this chain is the directed graph with vertex set n G Gck k=1 and edge set n o (ρ, σ) ∈ G × G[ : σ is a subrepresentation of ResGk ρ, 2 ≤ k ≤ n . ck k−1 Gk−1

We will write ρ → σ if (ρ, σ) is an edge of the branching graph. 2.2. Gelfand–Tsetlin bases 75

Suppose (ρ, Vρ) ∈ Gcn. Then M ResGn V = V Gn−1 ρ σ

σ∈G\n−1:ρ→σ is an orthogonal decomposition. Then, for each σ ∈ G[n−1 we have an orthogonal decompo- sition G M Res n−1 V = V . Gn−2 σ θ

θ∈G\n−2:σ→θ

We continue in this way until, after the restriction from G2 to G1, we are left with sums of one-dimensional trivial representations. To keep track of the restrictions, let

T (ρ) = {T : T = (ρ = ρn → ρn−1 → ρn−2 → · · · → ρ2 → ρ1), ρk ∈ Gck, 1 ≤ k ≤ n − 1.} denote the set of all paths T in the branching graph. (We will always be interested in directed paths.) Then we have M M M M Vρ = Vρn−1 = Vρn−2 = ··· = Vρ1 . (2.8) ρn−1: ρn−1: ρn−2: T ∈T (ρ) ρ→ρn−1 ρ→ρn−1 ρn−1→ρn−2

Since ρ1 is the trivial representation of G1 = {1G}, it is one-dimensional. Therefore, for each p T ∈ T (ρ), we can choose vT ∈ Vρ1 with kvT k := hvT , vT i = 1. (Thus vT is defined up to a scalar factor of norm one.) Then (2.8) becomes M Vρ = CvT . (2.9) T ∈T (ρ)

In other words, {vT : T ∈ T (ρ)}

is an orthonormal basis of V , called a Gelfand–Tsetlin basis for Vρ with respect to the multiplicity-free chain (2.7). The branching graph and Gelfand–Tsetlin bases are well adapted to restriction to the subgroups appearing in the corresponding multiplicity free chain. In particular, if θ ∈ Gck for some 1 ≤ k ≤ n − 1, then the multiplicity of θ in ResGn ρ is equal to the number of paths Gk from ρ to θ in the branching graph. Furthermore, we obtain an orthogonal decomposition of the θ-isotypic component of ResGn ρ into irreducible G -subrepresentations (each isomorphic Gk k to Vθ). Namely, we have a unique component Vθ for Vρ for each path from ρ to θ, and the components corresponding to distinct paths are orthogonal. For j = 1, 2, . . . , n and ρ ∈ Gb, we let

Tj(ρ) = {S : S = (ρ = ρn → ρn−1 → · · · → ρj+1 → ρj), ρk ∈ Gck, j ≤ k ≤ n − 1}.

In particular, T1(ρ) = T (ρ). For

T = (ρ = ρn → ρn−1 → · · · → ρ2 → ρ1) ∈ T (ρ), 76 Chapter 2. The theory of Gelfand–Tsetlin bases

we define the j-th truncation of T to be

Tj = (ρ = ρn → ρn−1 → · · · → ρj+1 → ρj) ∈ Tj(ρ).

For 1 ≤ j ≤ n and S ∈ Tj(ρ), we define M   V = V , ρ = ResG ρ . (2.10) S ρ1 S Gj VS T ∈T (ρ): Tj =S

Then (ρS,VS) is an irreducible Gj-representation, and ρS ∼ ρj.

Exercises.

n 2.2.1. Fix a positive integer n and let Gn be the cyclic group of order 2 with generator a. 2n−k For 0 ≤ k ≤ n, let Gk be the subgroup of Gn generated by a .

(a) Prove that {1G} = G0 ≤ G1 ≤ · · · ≤ Gn is a multiplicity-free chain of subgroups. (b) Describe the branching graph for this chain. (c) Draw the branching graph for n = 3.

2.2.2 Gelfand–Tsetlin algebras

G Let H be a subgroup of G. We can extend any f ∈ L(H) by zero to a function fH ∈ L(G) defined by ( f(g) if g ∈ H, f G(g) = H 0 otherwise.

If H ≤ K ≤ G, it is straightforward to verify that, for all f, f1, f2 ∈ L(H) and α1, α2 ∈ C, we have

K G G • fH K = fH , G G G • (f1 ∗ f2)H = (f1)H ∗ (f2)H (Exercise 2.2.2), and G G G • (α1f1 + α2f2)H = α1(f1)H + α2(f2)H .

Thus we can view L(H) as a subalgebra of L(G). By an abuse of notation, we will often G denote the extension fH again by f. Recalling Definition 1.3.12, for all ρ ∈ Gb and f ∈ L(H), we have

G X G X X G  G  ρ fH = fH (g)ρ(g) = f(h)ρ(h) = f(h) ResH ρ (h) = ResH ρ (f). g∈G h∈H h∈H 2.2. Gelfand–Tsetlin bases 77

Definition 2.2.1 (Gelfand–Tsetlin algebra). For 1 ≤ k ≤ n, we let Z(k) denote the center of the group algebra L(Gk), that is, Z(k) is the subalgebra of central functions on Gk (see Remark 1.5.3). The Gelfand–Tsetlin algebra GZ(n) associated with the multiplicity-free chain (2.7) is the subalgebra of L(Gn) generated by the subalgebras

Z(1),Z(2),...,Z(n).

Theorem 2.2.2. The Gelfand–Tsetlin algebra GZ(n) is a maximal commutative subalgebra of L(Gn). Furthermore

GZ(n) = {f ∈ L(G): ρ(f)vT ∈ CvT , for all ρ ∈ Gcn and T ∈ T (ρ)}. (2.11)

In other words, GZ(n) is the subalgebra of L(Gn) consisting of those f ∈ L(Gn) whose Fourier transforms ρ(f), ρ ∈ Gcn act diagonally on the Gelfand–Tsetlin basis of Vρ.

Proof. For fi ∈ Z(i) and fj ∈ Z(j) with i ≤ j, we have fi ∈ L(Gi) ≤ L(Gj), and so fi ∗ fj, since fj ∈ Z(j). Thus GZ(n) is commutative and spanned by the products

f1 ∗ f2 ∗ · · · ∗ fn, fk ∈ Z(k), 1 ≤ k ≤ n.

Let A denote the right-hand side of (2.11). Then A is an algebra by the multiplicative property of the Fourier transform (see (1.42)).

Suppose 1 ≤ j ≤ n. Let fj ∈ Z(j), ρ ∈ Gcn, and T ∈ T (ρ). Let S = Tj. Then X ρ(fj)vT = fj(g)ρ(g)vT g∈G X   = f (g) ResG ρ (g)v (since f ∈ L(G )) j Gj T j j g∈Gj X = fj(g)ρS(g)vT (by (2.10), since vT ∈ VS) (2.12)

g∈Gj

= ρS(fj)vT ∈ CvT (Lemma 1.3.13). Thus Z(j) ⊆ A for each 1 ≤ j ≤ n. Hence GZ(n) ⊆ A.

We now prove that A ⊆ GZ(n). Let ρ ∈ Gcn and

T = (ρ = ρn → ρn−1 → · · · → ρ1) ∈ T (ρ).

By Theorem 1.5.7, we can choose fj ∈ L(Gj), 1 ≤ j ≤ n, such that, for all σ ∈ Gcj, we have ( I if σ = ρ , Vρj j σ(fj) = 0 otherwise.

Let FT = f1 ∗ f2 ∗ · · · ∗ fn. 78 Chapter 2. The theory of Gelfand–Tsetlin bases

As in (2.12), for S ∈ T (ρ), we have ( vT if S = T, ρ(FT )vS = (2.13) 0 otherwise.

Therefore, {FT : T ∈ T (ρ), ρ ∈ Gcn} is a basis for A, and so A ⊆ GZ(n). By Theorem 1.5.7, we have

∼ M ∼ M L(Gn) = Hom(Vρ,Vρ) = Mdρ,dρ (C).

ρ∈Gcn ρ∈Gcn

Thus A is a maximal commutative subalgebra of L(Gn) by Exercise 1.5.6.

Corollary 2.2.3. Every element of the Gelfand–Tsetlin basis of Vρ is a common eigenvector for all ρ(f), f ∈ GZ(n). In particular, it is uniquely determined, up to a scalar factor, by the corresponding eigenvalues.

Proof. The last statement follows from (2.13).

Suppose f1, f2, . . . , fn ∈ GZ(n). By Corollary 2.2.3, for all ρ ∈ Gcn, T ∈ T (ρ), and 1 ≤ i ≤ n, we have ρ ρ ρ(fi)vT = αρ,T,ivT for some αρ,T,i ∈ C, (2.14) ρ where vT is the GZ-vector associated with the path T . When the map

n Gcn × T (ρ) → C , (ρ, T ) 7→ (αρ,T,1, αρ,T,2, . . . , αρ,T,n) ,

ρ is injective, we say that f1, . . . , fn separate the vectors of the GZ-bases {vT : ρ ∈ Gcn,T ∈ T (ρ)}.

Proposition 2.2.4. Let G1 ≤ G2 ≤ · · · ≤ Gn−1 ≤ Gn be a multiplicity-free chain of groups. Then C(Gn,Gn−1) ⊆ GZ(n).

Proof. By Theorem 2.2.2, it suffices to prove that, for all f ∈ C(Gn,Gn−1), the elements vT , T ∈ T (ρ), of the Gelfand-Tsetlin basis are eigenvectors for ρ(f), for all ρ ∈ Gcn. Note that

−1 f ∈ C(Gn,Gn−1) ⇐⇒ f(h gh) = f(g), for all h ∈ Gn−1, g ∈ Gn

⇐⇒ f(gh) = f(hg), for all h ∈ Gn−1, g ∈ Gn

⇐⇒ (f ∗ δh)(g) = (δh ∗ f)(g), for all h ∈ Gn−1, g ∈ Gn

⇐⇒ f ∗ δh = δh ∗ f, for all h ∈ Gn−1.

Let f ∈ C(Gn,Gn−1). Then, for all ρ ∈ Gcn and h ∈ Gn−1, we have

ρ(h)ρ(f) = ρ(δh ∗ f)

= ρ(f ∗ δh) 2.2. Gelfand–Tsetlin bases 79

= ρ(f)ρ(h).

Thus   ρ(f) ∈ End ResGn ρ . Gn−1 Gn−1

Since ResGn is multiplicity free, it then follows from Schur’s lemma that Gn−1

ρ(f)Vσ ⊆ Vσ, for all f ∈ C(Gn,Gn−1), σ ∈ G[n−1,Vσ ≤ Vρ.

Now, note that

C(Gn,Gn−1) ⊆ C(Gn,Gk), for all 1 ≤ k ≤ n − 1.

Therefore, iterating the argument above, we see that every vector vT of the Gelfand-Tsetlin basis is an eigenvector for ρ(f), as desired.

Exercises.

G G G 2.2.2. Suppose H ≤ G. Verify that, for f1, f2 ∈ L(H), we have (f1 ∗ f2)H = (f1)H ∗ (f2)H .

2.2.3 ([CSST10, Ex. 2.2.5]). Prove that if the functions f1, . . . , fn ∈ GZ(n) separate the vectors of the GZ-bases, then the set {δ1G , f1, f2, . . . , fn} generates GZ(n) as an algebra. Hint: For ρ ∈ Gcn and T ∈ T (ρ), define Fρ,T to be the convolution of all

fi − ασ,S,iδ1G , σ ∈ Gcn,S ∈ T (σ), 1 ≤ i ≤ n, such that(σ, S) 6= (ρ, T ) and ασ,S,i 6= αρ,T,i, αρ,T,i − ασ,S,i using the notation of (2.14). (Note that the order of convolution is irrelevant since GZ(n) is commutative.) Show that Fρ,T is given by (2.13). Chapter 3

The Okounkov–Vershik approach

In this chapter, we study the representation theory of the symmetric groups following the approach of Okounkov and Vershik [OV96, VO04, Ver06]. We closely follow the presentation in [CSST10, Ch. 3]. The reference [Py03] may also be helpful to the reader.

3.1 The Young poset

In this section we introduce some algebraic and combinatorial concepts that will be used in our study of the representation theory of the symmetric group.

3.1.1 Partitions and conjugacy classes in Sn Definition 3.1.1. Suppose n is a positive integer. A partition of n is a sequence λ = (λ1, λ2, . . . , λh) of positive integers such that λ1 ≥ λ2 ≥ · · · ≥ λh such that λ1 + λ2 + ··· λh = n. We call h the length of the partition λ, and denote it `(λ). We adopt the convention that λr = 0 for r > h. We write λ ` n to indicate that λ is a partition of n. We also call |λ| := n the size of λ.

Recall that Sn is the symmetric group of all permutations of the set {1, 2, . . . , n}.A permutation γ ∈ Sn is called a cycle of length t if there exist pairwise distinct elements a1, a2, . . . , at ∈ {1, 2, . . . , n} such that

γ(ai) = ai+1, 1 ≤ i ≤ t − 1, γ(at) = a1,

γ(b) = b, b ∈ {1, 2, . . . , n}\{a1, a2, . . . , at}.

We denote this cycle by γ = (a1, a2, . . . , at), or sometimes by

(a1 → a2 → · · · at → a1). (3.1)

A transposition is a cycle of length two. Two cycles γ = (a1, . . . , at) and θ = (b1, . . . , bs) are disjoint if

{a1, . . . , ar} ∩ {b1, . . . , bs} = ∅.

80 3.1. The Young poset 81

If γ and θ are disjoint cycles, then they commute: γθ = θγ. Every π ∈ Sn can be written as a product of disjoint cycles

π = (a1,1, a1,2, . . . , a1,µ1 )(a2,1, a2,2, . . . , a2,µ2 ) ··· (ak,1, ak,2, . . . , ak,µk ), (3.2)

where the list

a1,1, . . . , a1,µ1 , . . . , ak,1, . . . , ak,µk Pk is a permutation of 1, 2, . . . , n. In particular i=1 µi = n. Rearranging the cycles if necessary, we may assume that µ1 ≥ µ2 ≥ · · · ≥ µk > 0. The partition µ = (µ1, . . . , µk) is called the cycle type of π. The expression (3.2) is called a cycle decomposition of π. It is unique up to cyclic permutation of the elements of the cycles and permutation of cycles of equal size. We say that i ∈ {1, 2, . . . , n} is a fixed point of π ∈ Sn if π(i) = i. Thus i is a fixed point of π if and only if i appears in a cycle of length 1 in the cycle decomposition of π. If σ ∈ Sn, then

−1    σπσ = σ(a1,1), . . . , σ(a1,µ1 ) σ(a2,1), . . . , σ(a2,µ2 ) σ(ak,1), . . . , σ(ak,µk ) .

0 It follows that two elements π, π ∈ Sn are conjugate if and only if they have the same cycle type.

Proposition 3.1.2. The conjugacy classes of Sn are parameterized by partitions of n. The conjugacy class associated to λ ` n consists of all permutations of cycle type λ.

Proof. This follows from the above discussion.

Exercises.

3.1.1. Let λ ` n. Deduce an explicit formula for the number of permutations of cycle type λ.

3.1.2 Young diagrams

Let λ = (λ1, . . . , λh) ` n. The Young diagram associated to λ, also called the Young diagram of shape λ, is the array consisting of n boxes, with h left-justified rows, the i-th row containing λi boxes for 1 ≤ i ≤ h. It follows that the Young diagram has λ1 columns. We will often abuse terminology and refer to λ itself as a Young diagram. For example, the Young diagram associated to the partition λ = (5, 4, 1, 1) ` 11 is

. 82 Chapter 3. The Okounkov–Vershik approach

The box in row i (with row 1 at the top of the diagram) and column j (with column 1 on the left of the diagram) will be said to be in position (i, j). We say that a box in position (i, j) is removable if removing this box results in a Young diagram. In other words, it is removable if there are no boxes in position (i + 1, j) and (i, j + 1). Similarly, the position (i, j) is addable if we can add a box in position in position (i, j) and end up with a Young diagram. In other words, it is addable if

λi = j − 1 < λi−1 or (i = h + 1 and j = 1).

Exercises.

3.1.2. For a Young diagram λ, let a(λ) be the number of addable positions of λ and let r(λ) be the number of removable boxes of λ. Show that a(λ) − r(λ) = 1.

3.1.3 Young tableaux Suppose λ ` n.A (bijective) Young tableau of shaped λ is a bijection between the boxes of the Young diagram of shape λ and the set {1, 2, . . . , n}. It is depicted by filling the boxes of the Young diagram with the numbers 1, 2, . . . , n, with exactly one number in each box. For example, 4 5 1 6 7 2 3 8 (3.3) 9 is a Young tableau of shape (5, 3, 1). The plural of Young tableau is Young tableaux. A Young tableau is standard if the numbers in the boxes are increasing along the rows (from left to right) and down the columns (from top to bottom). For instance, the Young tableau of (3.3) is not standard, while

1 3 5 8 9 2 4 7 6

is. Note that in a standard Young tableau, the number 1 is always in position (1, 1), while n is always in a removable box. We denote by Tab(λ) the set of all standard tableaux of shape λ, and we let [ Tab(n) = Tab(λ). λ`n 3.1. The Young poset 83

Exercises.

3.1.3. A Young diagram is called a hook if every row below the first has one box. For example,

, , , and

are hooks. Let λ be a hook. Deduce an explicit expression for the number of standard tableaux of shape λ.

3.1.4 Coxeter generators The elements

si = (i, i + 1) ∈ Sn, i = 1, 2, . . . , n − 1, are called simple transpositions (or adjacent transpositions). They are also called the Coxeter generators of Sn. (This is because of the connection to the more general theory of Coxeter groups.) The term “generator” will be justified in Proposition 3.1.3.

If T is a Young tableaux with n boxes and π ∈ Sn, then πT will denote the tableau obtai- ned from T by replacing i with π(i) for i = 1, 2, . . . , n. For example, if π = (156)(78)(234)(9) and 4 5 1 6 7 2 6 5 1 8 T = 2 3 8 , then πT = 3 4 7 . 9 9

If T is standard, then an admissible transposition for T is a simple transposition si such that siT is standard. Thus, si is admissible for T if and only if i and i + 1 belong neither to the same row, nor the same column of T .

An inversion for π ∈ Sn is a pair (i, j) with 1 ≤ i, j ≤ n such that

i < j and π(i) > π(j).

Let I(π) denote set of all inversions for π and let

`(π) = |I(π)|

denote the number of inversions of π.

The length of π ∈ Sn is the smallest integer k such that π can be written as a product

of k simple transpositions: π = si1 si2 ··· sik .

Proposition 3.1.3. The length of π ∈ Sn is equal to `(π). 84 Chapter 3. The Okounkov–Vershik approach

Proof. We first claim that, for any π ∈ Sn, we have ( `(π) − 1 if (i, i + 1) ∈ I(π), `(πsi) = (3.4) `(π) + 1 if (i, i + 1) ∈/ I(π).

First consider k satisfying 1 ≤ k < i. There are three possibilities:

• π(k) < min{π(i), π(i + 1)}, • π(k) > max{π(i), π(i + 1)}, • min{π(i), π(i + 1)} < π(k) < max{π(i), π(i + 1)}.

In the first case, (k, i) and (k, i + 1) are neither inversions for π nor for πsi. In the second case, (k, i) and (k, i + 1) are inversions for both. In the third case, exactly one of (k, i) and (k, i + 1) is an inversion for π, while only the other one is an inversion for πsi. Thus, in all three cases, the number of inversions in the set {(k, i), (k, i+1)} is the same for π and πsi.A similar argument gives the same result for the case that i+1 < k ≤ n. Since (i, i+1) ∈ I(π) if and only if (i, i + 1) ∈/ I(πsi), the claim follows.

Now suppose π = si1 si2 ··· sik is a minimal representation of π as a product of simple transpositions. Then, by (3.4), we have

`(π) = `(si1 si2 ··· sik−1 ) ± 1 ≤ `(si1 si2 ··· sik−1 ) + 1 ≤ · · · ≤ k.

Therefore, the length of π is greater than or equal to `(π). It remains to prove the reverse inequality, which we do by induction on n. It is clear for n = 2 that π ∈ S2 can be written as a product of `(π) transpositions. (Simply consider the two cases π = (1)(2) and π = (12).) Now suppose n ≥ 3 and that any π ∈ Sn−1 can be written as a product of `(π) transpositions. −1 Fix π ∈ Sn. Let jn = π (n) and let

πn = πsjn sjn+1 ··· sn−1.

Thus πn(n) = n and so, by (3.4), we have

`(πnsn−1) = `(πn) + 1.

Now, since πnsn−1(n − 1) = n, we also have, by (3.4),

`(πnsn−1sn−2) = `(πnsn−1) + 1 = `(πn) + 2.

Continuing in this way, we see that

`(πn) = `(π) − (n − jn).

Now, since πn(n) = n, it can naturally be viewed as an element of Sn−1. By our induction hypothesis, it can be written as a product of `(πn) simple transpositions. But then, since

π = πnsn−1sn−2 ··· sjn , we have that π can be written as a product of `(πn) + (n − jn) = `(π) transpositions. This completes the proof of the induction step. 3.1. The Young poset 85

For λ ` n, let T λ denote the standard tableau of shape λ, where we number the boxes 1, 2, . . . , n from left-to-right starting in the top row, then continuing in the second row, etc. For example, 1 2 3 4 5 if λ = (5, 3, 1) then T λ = 6 7 8 . 9

λ For T ∈ Tab(λ), we let πT ∈ Sn denote the unique permutation such that πT T = T .

Theorem 3.1.4. Suppose T ∈ Tab(λ). Then there exists a sequence of `(πT ) admissible transpositions transforming T into T λ.

Proof. We prove the result by induction on |λ|. If |λ| = 1, then T λ is the only tableau of shape λ and we are done. Now suppose |λ| > 1 and that the result holds for all tableau with fewer than |λ| boxes. Suppose λ = (λ1, λ2, . . . , λk) ` n and T ∈ Tab(λ). Let j denote the entry in the rightmost box of the bottom row of T . If j = n then, since the box is removable, we can consider the 0 0 standard tableau T of shape λ = (λ1, λ2, . . . , λk −1) obtained by removing that box. By our induction hypothesis, there exists a sequence of `(πT 0 ) admissible transpositions transforming 0 λ0 λ T into T . This same sequence transforms T into T and `(πT 0 ) = `(πT ). Now suppose j 6= n. Then sj is admissible for T . Similarly, sj+1 is admissible for sjT , ... , 00 sn−1 is admissible for sn−2 . . . sj1 sjT . Now, T = sn−1sn−2 ··· sjT contains n in the rightmost

box of the bottom row. Therefore, by the previous case, there exists a sequence of `πT 00 simple 00 λ transpositions transforming T into T . It follows from (3.4) that `(πT ) = `πT 00 + n − j, completing the proof of the induction step.

Corollary 3.1.5. For any T,S ∈ Tab(λ), there is a sequence of admissible transpositions transforming S into T .

Remark 3.1.6. Note that the proof of Theorem 3.1.4 gives a standard procedure to write πT as a product of `(πT ) admissible transpositions. We shall use this procedure in what follows.

Exercises.

3.1.4. Show that the simple transpositions in Sn satisfy the relations

2 si = 1, 1 ≤ i ≤ n − 1,

sisj = sjsi, 1 ≤ i, j ≤ n − 1, |i − j| > 1,

sisi+1si = si+1sisi+1, 1 ≤ i ≤ n − 2.

(In fact, the above form a complete set of relations for the simple transpositions. That is, Sn is the group with generators si, 1 ≤ i ≤ n − 1, and relations as given above.) 86 Chapter 3. The Okounkov–Vershik approach

3.1.5 The content of a tableau Suppose T ∈ Tab(λ) for some λ ` n. For 1 ≤ t ≤ n, we let row(t) and col(t) denote the row and column of the box of T containing t. For example,

1 2 4 5 7 if T = 3 6 8 then row(8) = 2, col(8) = 3. 9

Given a box in the Young diagram λ with coordinates (i, j) we define the content of the box to be c(i, j) := j − i. For example, the contents of the boxes in λ = (5, 3, 1) are as indicated:

0 1 2 3 4 -1 0 1 -2

So, essentially, the content of a box corresponds to the diagonal on which it lies. We define the content of the tableau T to be

 n C(T ) := c(row(1), col(1)), c(row(2), col(2)), . . . , c(row(n), col(n)) ∈ Z . For example,

4 5 1 6 7 if T = 2 3 8 then c(T ) = (2, −1, 0, 0, 1, 3, 4, 1, −2). 9 Note that, for a fixed partition λ, the entries of the contents of the tableau of shape λ are the same, but potentially in different orders (i.e. they are permutations of one another).

n Definition 3.1.7 (Cont(n)). Let Cont(n) be the set of all (a1, a2, . . . , an) ∈ C such that

(a) a1 = 0,

(b) {aj + 1, aj − 1} ∩ {a1, a2, . . . , aj−1}= 6 ∅ for all j > 1,

(c) if ai = aj for some i < j then {aj − 1, aj + 1} ⊆ {ai+1, ai+2, . . . , aj−1}.

It follows from the definition that, in fact, Cont(n) ⊆ Zn. For example, Cont(1) = {0} and Cont(2) = {(0, 1), (0, −1)}. (3.5)

n For α, β ∈ Cont(n), we will write α ≈ β if πβ = α for some π ∈ Sn (here Sn acts on Z by permuting the entries). Note that ≈ is an equivalence relation on Cont(n), but that Sn does not act on Cont(n) since there are α ∈ Cont(n) and π ∈ Sn such that πα∈ / Cont(n). For example, if π = (1, 2) ∈ S2, then

α = (0, 1) ∈ Cont(2) but π(0, 1) = (1, 0) ∈/ Cont(2). 3.1. The Young poset 87

Theorem 3.1.8. For any T ∈ Tab(n), we have C(T ) ∈ Cont(n). Furthermore, the map

Tab(n) → Cont(n),T 7→ C(T ),

is a bijection. In addition, for T,S ∈ Tab(n), we have

C(T ) ≈ C(S) ⇐⇒ T and S are tableaux of the same shape.

Proof. A proof of this theorem can be found in [CSST10, Th. 3.1.10].

For α ∈ Cont(n), we say that a simple transposition si is admissible for α if it is admissible for the unique T ∈ Tab(n) such that α = C(T ).

Corollary 3.1.9. For α, β ∈ C(T ), we have α ≈ β if and only if there exists a sequence of admissible transpositions transforming α into β.

Proof. This follows from Corollary 3.1.5 and Theorem 3.1.8.

Corollary 3.1.10. The cardinality of the quotient set Cont(n)/ ≈ is equal to the number of partitions of n.

Proof. This follows from Theorem 3.1.8.

Exercises.

3.1.5. Suppose that α = (α1, α2, . . . , αn) ∈ Cont(n) and 1 ≤ i ≤ n − 1. Prove that si is admissible for α if and only if ai+1 6= ai ± 1.

3.1.6 The Young poset Let Y = {λ : λ ` n, n ∈ Z>0} be the set of all partitions. Equivalently, Y is the set of all Young diagrams. We define a partial ordering on Y by saying that, for µ, λ ∈ Y, µ  λ if and only if the Young diagram of µ is contained in the Young diagram of λ. In other words, if µ = (µ1, . . . , µk) ` n and λ = (λ1, . . . , λh) ` m, then

µ  λ ⇐⇒ m ≥ n and λj ≥ µj for all j = 1, 2, . . . , k.

For example, (4, 3, 1)  (5, 3, 1, 1).

For µ, λ ∈ Y, we say that λ covers µ if µ is obtained from λ by removing a single box. In particular, this implies that µ  λ. 88 Chapter 3. The Okounkov–Vershik approach

The Hasse diagram of Y, also called the Young (branching) graphs, is the oriented graph with vertex set Y and an arrow from λ to µ if and only if λ covers µ. The bottom of the Young graph is as follows:

   Õ   Ó  Ô Ö 

   Ò  Ó

  }

$ |

(One sometimes includes the empty Young diagram ∅ at the bottom of the graph.) A path in the Young graph is a sequence p = λ(n) → λ(n−1) → · · · → λ(1) of partitions λ(k) ` k such that λ(k) covers λ(k−1) for k = 2, 3, . . . , n. We call `(p) := n the length of the path p. We let Πn(Y) denote the set of all paths in the Young graph of length n and let ∞ [ Π(Y) = Πn(Y). n=1 To any standard Young tableau T of shape λ ` n we can associate a path λ = λ(n) → λ(n−1) → · · · → λ(1) by letting λ(k), 1 ≤ k ≤ n, be the Young diagram formed by the boxes of T with labels less than or equal to k. For example, to the standard tableau 1 2 5 6 3 4 7 8 3.2. The Young–Jucys–Murphy elements and a Gelfand–Tsetlin basis for Sn 89

we associate the path

(4, 3, 1) → (4, 3) → (4, 2) → (3, 2) → (2, 2) → (2, 1) → (2) → (1).

In this way we have a natural bijection

Πn(Y) ↔ Tab(n) (3.6)

which gives a bijection ∞ [ Π(Y) ↔ Tab(n). (3.7) n=1 Combining (3.6) with the bijection in Theorem 3.1.8 yields a bijection

Πn(Y) ↔ Cont(n). (3.8)

Proposition 3.1.11. Suppose α, β ∈ Cont(n) correspond to the paths

λ(n) → λ(n−1) → · · · → λ(1) and µ(n) → µ(n−1) → · · · → µ(1),

respectively. Then α ≈ β if and only if λ(n) = µ(n).

Proof. This follows immediately from Theorem 3.1.8.

Exercises.

3.1.6. Suppose λ is a hook (see Exercise 3.1.3). How many paths in the Young graph are there that start at λ?

3.2 The Young–Jucys–Murphy elements and a Gelfand– Tsetlin basis for Sn

Our goal in this section is to prove that the chain

S1 ≤ S2 ≤ · · · ≤ Sn ≤ Sn+1 ≤ · · · is multiplicity free. This allows use to use the techniques of Chapter2. In particular, we will study the Gelfand–Tsetlin algebra associated to this chain. 90 Chapter 3. The Okounkov–Vershik approach

3.2.1 The Young–Jucys–Murphy elements

In the remainder of these notes, we will identity the Dirac function δπ, π ∈ Sn, with π. Thus, elements f ∈ L(Sn) will be written as formal sums X f = f(π)π.

π∈Sn

Similarly, the characteristic function of A ⊆ Sn will be denoted simply by A, so that X A = π. π∈A

In addition, the convolution of f1, f2 ∈ L(Sn) will be denoted f1 ·f2 and written as a product of formal sums:   X  X  f1 · f2 =  f1(σ)f2(θ) π. π∈Sn σ,θ∈Sn: σθ=π

The Young–Jucys–Murphy (YJM) elements of L(Sn) are defined by

X1 = 0,Xk = (1, k) + (2, k) + ··· + (k − 1, k), k = 2, . . . , n.

In some places in the literature, these elements are simply called Jucys–Murphy elements.

Exercises.

3.2.1. Show that the following relations hold in L(Sn):

Xi+1si = siXi + 1, 1 ≤ i ≤ n − 1,

Xjsi = siXj, 1 ≤ i ≤ n − 1, 1 ≤ j ≤ n, j 6= i, i + 1.

3.2.2. Show that XiXj = XjXi for all 1 ≤ i, j ≤ n.

3.2.3. Fix n ≥ 2. The degenerate affine Hecke algebra Hn is the C-algebra generated by elements ti, 1 ≤ i ≤ n − 1, and xj, 1 ≤ j ≤ n, subject to the relations

2 ti = 1, 1 ≤ i ≤ n − 1,

titj = tjti, 1 ≤ i, j ≤ n − 1, |i − j| > 1,

titi+1ti = ti+1titi+1, 1 ≤ i ≤ n − 2.

xi+1ti = tixi + 1, 1 ≤ i ≤ n − 1,

xjti = tixj, 1 ≤ i ≤ n − 1, 1 ≤ j ≤ n, j 6= i, i + 1, 3.2. The Young–Jucys–Murphy elements and a Gelfand–Tsetlin basis for Sn 91

xixj = xjxi, 1 ≤ i, j ≤ n.

0 0 (This means that for any C-algebra A with elements ti, xj satisfying the above relations, 0 0 there is a unique algebra homomorphism Hn → A such that ti 7→ ti and xj 7→ xj.) There is an injective algebra homomorphism L(Sn) ,→ Hn given by si 7→ ti and we use this to view L(Sn) as a subalgebra of Hn. ∼ Let I be the ideal of Hn generated by x1. (See Exercise 1.2.8.) Prove that Hn/I = L(Sn), as algebras (or as rings). You may use the fact that Hn has a basis given by the elements

k1 k2 kn x1 x2 . . . xn π, k1, k2, . . . , kn ∈ Z≥0, π ∈ Sn, (3.9)

where we view the element π ∈ Sn (identified with δπ ∈ L(Sn)) as an element of Hn as explained above.

3.2.2 Marked permutations

Fix `, k ≤ 1. In this section, we let S`+k, S`, and Sk denote the symmetric groups on the sets {1, 2, . . . , ` + k}, {1, 2, . . . , `} and {` + 1, ` + 2, . . . , ` + k}, respectively. Thus

S`, Sk ≤ S`+k and S` ∩ Sk = {1}.

We let Z(`, k) = C(S`+k, S`) denote the algebra of all S`-conjugacy invariant functions in L(S`+k). We wish to analyze the algebra Z(`, k). The first step is to parameterize the orbits of the S`-conjugacy action on S`+k. Recall that, for π ∈ S` and θ = S`+k, the cycle decomposition of πθπ−1 is obtained from the cycle decomposition of θ by replacing 1, 2, . . . , ` with π(1), π(2), . . . , π(`). Thus, the S`-orbit of θ is obtained from its cycle decomposition by permuting in all possible ways the elements 1, 2, . . . , `, leaving the remaining elements ` + 1, ` + 2, . . . , ` + k unchanged. Consider the cycle decomposition of a permutation using our notation (3.1) for cycles:

(a1,1 → a1,2 → · · · → a1,µ1 → a1,1)(a2,1 → · · · → a2,µ2 → a2,1) ··· (ar,1 → · · · → ar,µr → ar,1). (3.10) A marked permutation is a permutation as in (3.10), together with a labeling of all the arrows by nonnegative integers, called tags, potentially will some additional empty cycles also labeled with tags. For example, the permutation (3.10) may be marked as follows:

u1,1 u1,2 u1,µ1−1 u1,µ1 u2,1 u2,µ2−1 u2,µ2 (a1,1 −−→ a2 −−→· · · −−−−→ a1,µ −−−→ a1,1)(a2,1 −−→· · · −−−−→ a2,µ −−−→ a2,1) 1 2 (3.11) ur,1 ur,µr−1 ur,µr v1 v2 vs ··· (ar,1 −−→· · · −−−−→ ar,µr −−−→ ar,1)(−→)(−→) ··· (−→).

The orbits of the conjugacy action of S` on S`+k are in natural one-to-one correspondence with the set of all marked permutations of {` + 1, . . . , ` + k} such that the sum of the tags is equal to `. The orbit corresponding to a given marked permutation is obtained by inserting, in all possible ways, the elements {1, . . . , `} into the marked permutation, with the number of elements added at any particular arrow equal to the label of that arrow. 92 Chapter 3. The Okounkov–Vershik approach

Example 3.2.1. If ` = 12 and k = 8, then the marked permutation

(19 −→2 15 −→1 13 −→0 14 −→0 19)(16 −→2 20 −→1 17 −→1 16)(18 −→2 18)(−→2 ))(−→1 )

corresponds to the orbit consisting of all permutations of the form

(19 → y1 → y2 → 15 → y3 → 13 → 14 → 19)(16 → y4 → y5 → 20 → y6 → 17 → y7 → 16)

· (18 → y8 → y9 → 18)(y10 → y11 → y10)(y12 → y12),

where {y1, y2, . . . , y12} = {1, 2,..., 12}.

We will typically omit trivial cycles of the form (a −→0 a) and (−→1 ). Note that, when omitting such trivial cycles, the sum of the tags is ≤ `, with strict inequality a possibility.

Theorem 3.2.2. Let Sn−1 be the symmetric group on {1, 2, . . . , n − 1}. Then (Sn × Sn−1, Se n−1) is a symmetric Gelfand pair.

Proof. In light of Proposition 2.1.9, it suffices to prove that every π ∈ Sn is Sn−1-conjugate to π−1. Suppose π has cycle decomposition

π = (n = a1,1 → a1,2 → · · · → a1,µ1 → n)(a2,1 → · · · → a2,µ2 → a2,1)

··· (ar,1 → · · · → ar,µr → ar,1).

Then π belongs to the Sn−1-conjugacy class corresponding to the marked permutation

µ −1 µ µ (n −−−→1 n)(−→2 ) ··· (−→r ).

Then

−1 π = (n → a1,µ1 → · · · → a1,1 = n)(a2,1 → a2,µ2 → · · · → a2,1) ··· (ar,1 → ar,µr · · · → ar,1) clearly belongs to the same conjugacy class.

Corollary 3.2.3. The algebra C(Sn, Sn−1) is commutative, and Sn−1 is a multiplicity-free subgroup of Sn. Thus S1 ≤ S2 ≤ · · · ≤ Sn−1 ≤ Sn ≤ · · · is a multiplicity-free chain. Proof. This follows from Theorems 2.1.7 and 3.2.2.

Example 3.2.4. (a) For j = 1, . . . , k, the YJM element X`+j can be written as

`+j−1 1 X 0 0 X`+j = (` + j −→ ` + j) + (` + j −→ h −→ ` + j). (3.12) h=`+1

(Recall our convention that, for A ⊆ S`+k, we denote the characteristic function of A simply by A.) In particular, X`+1,X`+2,...,X`+k ∈ Z(`, k) since these elements are all sums of characteristic functions of orbits under the conjugacy action of S`. 3.2. The Young–Jucys–Murphy elements and a Gelfand–Tsetlin basis for Sn 93

(b) Any σ ∈ Sk forms a one-element orbit of S`. Thus, viewing σ as an element of L(Sk), we have Sk ⊆ Z(`, k).

(c) It is clear that Z(`) := Z(L(S`)) ⊆ Z(`, k). It follows from Example 3.2.4 that

hX`+1,X`+2,...,X`+k, Sk,Z(`)i ⊆ Z(`, k), (3.13)

where, on the left side, the angled brackets denote the subgroup of L(S`+k) generated by the elements/sets inside the brackets.

Exercises.

3.2.4. For σ ∈ Sk, describe the marked permutation corresponding to the S`-orbit {σ} (see Example 3.2.4(b)).

3.2.5. Suppose that Cλ is the conjugacy class of S` corresponding to λ ` ` (see Proposi- tion 3.1.2). Describe the marked permutation corresponding to the S`-orbit formed by Cλ (see Example 3.2.4(c)).

3.2.3 Olshanskii’s Theorem The goal of this subsection is to prove the reverse of the inclusion (3.13). Precisely, we want to prove that Z(`, k) is generated by X`+1,X`+2,...,X`+k, Sk, and Z(`). Let Zh(`, k) be the subspace of Z(`, k) spanned by the S`-conjugacy classes consisting of permutations with at least ` + k − h fixed points (equivalently, moving at most h elements). Then we have a filtration

C1 = Z0(`, k) ⊆ Z1(`, k) ⊆ Z`+k−1(`, k) ⊆ Z`+k(`, k) = Z(`, k). (3.14) We will essentially be interested in “leading terms” with respect to this filtration. More precisely, for f1, f2, f3 ∈ Z(`, k), we will write

f1 · f2 = f3 + lower terms if there exists h such that

f3 ∈ Zh(`, k) − Zh−1(`, k) and f1 · f2 − f3 ∈ Zh−1(`, k).

Lemma 3.2.5. Let i, j ≥ 1. Suppose a1, a2, . . . , ai and b1, b2, . . . , bj are each sequences of distinct elements in {`+1, . . . , `+k}. (Note that we do not require the ak to be distinct from the bk.) Suppose that, in Sk, we have

(a1 → a2 → · · · → ai → a1)(b1 → b2 → · · · → bj → b1) = (c1 → c2 → · · · → ch → c1), 94 Chapter 3. The Okounkov–Vershik approach

with h = |{a1, a2, . . . , ai} ∪ {b1, b2, . . . , bj}| ≤ i + j.

Let u1, u2, . . . , ui, v1, v2, . . . , vj ≥ 0 such that

u1 + u2 + ··· + ui + v1 + v2 + ··· + vj ≤ `. Then, in Z(`, k), we have

 u1 u2 ui−1 ui   v1 v2 vj−1 vj  a1 −→ a2 −→· · · −−→ ai −→ a1 b1 −→ b2 −→· · · −−→ bj −→ v1

 w1 w2 wh−1 wh  = c1 −→ c2 −→· · · −−−→ ch −→ c1 + lower terms,

where  v if c = b and b ∈/ {a , a , . . . , a },  t s t t+1 1 2 i ws = vt + um if cs = bt and bt+1 = am, (3.15)  um if cs = am ∈/ {b1, b2, . . . , bj}. for 1 ≤ s ≤ h. Proof. Consider a product of the form

(a1, x1,1, x1,2, . . . , x1,u1 , a2, x2,1, . . . , x2,u2 , . . . , ai, xi,1, . . . , xi,ui )

· (b1, y1,1, . . . , y1,v1 , . . . , bj, yj,1, . . . , yj,vj ). (3.16)

If the numbers x1,1, . . . , xi,ui , y1,1, . . . , yj,vj are all distinct, which is possible since u1 + ··· + ui + v1 + ··· + vj ≤ `, then (3.16) is equal to a permutation of the form

(c1, z1,1, z1,2, . . . , z1,w1 , c2, . . . , ch, zh,1, . . . , zh,wh ),

where w1, w2, . . . , wh are given by (3.15). Otherwise, the product (3.16) moves fewer than h + w1 + w2 + ··· + wh elements. Example 3.2.6. (a) For a ∈ {` + 1, ` + 2, . . . , ` + k} and 0 ≤ u ≤ `, we have  u   a −→1 a = a −→u a + lower terms.

(b) For a, b ∈ {` + 1, ` + 2, . . . , ` + k}, a 6= b, and 0 ≤ u ≤ `, we have       b −→u b a −→0 b −→0 a = a −→u b −→0 a .

Note that there are no lower order terms in this case.

(c) For a1, a2, . . . , ai ∈ {` + 1, ` + 2, . . . , ` + k}, pairwise distinct, and u1, u2, . . . , ui ≥ 0 with u1 + u2 + ··· ui ≤ `, we have

 ui   ui−1 0   u2 0   u1 0  a1 −→ a1 a1 −−→ ai −→ a1 ··· a1 −→ a3 −→ a1 a1 −→ a2 −→ a1

 u1 u2 u3 ui−1 ui  = a1 −→ a2 −→ a3 −→· · · −−→ ai −→ a1 + lower terms. 3.2. The Young–Jucys–Murphy elements and a Gelfand–Tsetlin basis for Sn 95

Theorem 3.2.7 (Olshanskii’s Theorem). The centralizer algebra Z(`, k) is generated by the Young–Jucys–Murphy elements X`+1,X`+2,...,X`+k, the subgroup Sk, and the center Z(`) of S`. In other words, we have

Z(`, k) = hX`+1,X`+2,...,X`+k, Sk,Z(`)i.

Proof. Let A = hX`+1,X`+2,...,X`+k, Sk,Z(`)i. We have A ⊆ Z(`, k) from (3.13). Keeping in mind the filtration (3.14), we will prove by induction on h that Zh(`, k) ⊆ A for all h = 0, 1, . . . , ` + k. Since Z0(`, k) = C1 ⊆ A, our base case is proved. Now suppose that 1 ≤ h ≤ ` + k, and that the result holds for h − 1. For a, j ∈ {` + 1, ` + 2, . . . , ` + k} with a 6= j, we have

 0 0  a −→ j −→ a ∈ Sk.

Thus, by (3.12), we have

a−1  1  X  0 0  a −→ a = Xa − a −→ j −→ a ∈ A. j=`+1

Now, the typical orbit in Zh(`, k) − Zh−1(`, k) is of the form

u1,1 u1,2 u1,µ1−1 u1,µ1 u2,1 u2,µ2−1 u2,µ2 (a1,1 −−→ a1,2 −−→· · · −−−−→ a1,µ1 −−−→ a1,1)(a2,1 −−→· · · −−−−→ a2,µ2 −−−→ a2,1)

ur,1 ur,µr−1 ur,µr v1 v2 vs ··· (ar,1 −−→· · · −−−−→ ar,µr −−−→ ar,1)(−→)(−→) ··· (−→) (3.17) for pairwise distinct elements

a1,1, a1,2, . . . , a1,µ1 , . . . , ar,1, . . . , ar,µr ∈ {` + 1, ` + 2, . . . , ` + k} and

u1,1, u1,2, . . . , u1,µ1 , . . . , ur,1, . . . , ur,µr , v1, v2, . . . , vs ∈ Z≥0 r µp s X X X such that up,q + vp = h ≤ `. p=1 q=1 p=1

Repeated application of Lemma 3.2.5 (see also Example 3.2.6) gives that

u u  1  1,µ1  1  1,µ1−1  0 0  a1,1 −→ a1,1 a1,µ1 −→ a1,µ1 a1,1 −→ a1,µ1 −→ a1,1 ···

 1 u1,2  0 0   1 u1,1  0 0  ··· a1,3 −→ a1,3 a1,1 −→ a1,3 −→ a1,1 a1,2 −→ a1,2 a1,1 −→ a1,2 −→ a1,1 ··· ···  1 ur,µr  1 ur,µr−1  0 0  ··· ar,1 −→ ar,1 ar,µr −→ ar,µr ar,1 −→ ar,µr −→ ar,1 ··· 96 Chapter 3. The Okounkov–Vershik approach

 1 ur,2  0 0   1 ur,1  0 0  ··· ar,3 −→ ar,3 ar,1 −→ ar,3 −→ ar,1 ar,2 −→ ar,2 ar,1 −→ ar,2 −→ ar,1 ···       ··· −→v1 −→v2 ··· −→vs is equal to (3.17) modulo terms. Thus, by the inductive hypothesis, we have that (3.17) is an element of A, completing the proof. Corollary 3.2.8. The Gelfand–Tsetlin algebra GZ(n) of the multiplicity free chain

S1 ≤ S2 ≤ · · · ≤ Sn is generated by the Young–Jucys–Murphy elements X1,X2,...,Xn.

Proof. For 2 ≤ k ≤ n, let Tk denote the set of all transpositions (not necessarily simple) in Sk. Then Tk ∈ Z(k) (recall that we identify a subset of Sk with its characteristic function). Thus Xk = Tk − Tk−1 ∈ GZ(n).

Hence hX1,X2,...,Xni ∈ GZ(n). We now prove by induction on n that GZ(n) = hX1,X2,...,Xni. The result for n = 1 is trivial. Assume n > 1 and that the result holds for n − 1. Then

Z(n) = C(Sn, Sn)

⊆ C(Sn, Sn−1) = Z(n − 1, 1)

= hZ(n − 1),Xni, (by Theorem 3.2.7). Thus, using the induction hypothesis, we have

GZ(n) = hGZ(n − 1),Z(n)i = hGZ(n − 1),Xni = hX1,X2,...,Xn−1,Xni, completing the proof of the induction step.

Exercises.

3.2.6. Use Corollary 3.2.8 to give an alternate proof that Z(n − 1, 1) = C(Sn, Sn−1) is commutative (see Corollary 3.2.3).

3.3 The spectrum of the Young–Jucys–Murphy ele- ments and the branching graph of Sn In this section, our goal is to show that the spectrum (i.e. set of possible eigenvalues) of the YJM elements is given by Cont(n) (see Definition 3.1.7) and prove that the branching graph of the multiplicity-free chain

S1 ≤ S2 ≤ · · · ≤ Sn ≤ · · · is precisely the Young graph. 3.3. The spectrum of the YJM elements and the branching graph of Sn 97

3.3.1 The weight of a Young basis vector

For ρ ∈ Scn, we have the Gelfand–Tsetlin basis {vT : T ∈ T (ρ)} associated with the mul- tiplicity free chain S1 ≤ S2 ≤ · · · ≤ Sn. In this setting, we also call this basis the Young basis for Vρ. By Theorem 2.2.2 and Corollary 3.2.8, every vT is an eigenvector for ρ(Xj) for all 1 ≤ j ≤ n. We define the weight of vT to be

α(T ) = (a1, a2, . . . , an), where ρ(Xj) = ajvT , 1 ≤ j ≤ n. (3.18)

Since X1,...,Xn generate GZ(n) by Corollary 3.2.8, it follows from Corollary 2.2.3 that vT is determined, up to a scalar factor, by α(T ). It follows from Theorem 1.5.7 that ρ(Xj) is self-adjoint for all Xj and all representations ρ of Sn. (See Exercise 3.3.1.)

Proposition 3.3.1. Suppose ρ ∈ Scn and

T = (ρ = ρn → ρn−1 → · · · → ρ2 → ρ1) ∈ T (ρ).

0 Then ρ(sk)vT is a linear combination of vectors vT 0 with T of the form

0 T = (σ = σn → σn−1 → · · · → σ2 → σ1) ∈ T (ρ), such that σi = ρi for all i 6= k.

Proof. For 1 ≤ j ≤ n, let Vj denote the representation space of ρj. Then

Vj = {ρj(f)vT : f ∈ L(Sj)},

since the right side is a Sj-invariant subspace generated by a nonzero element of Vj. If j > k, then sk ∈ Sj, and so ρj(sk) ∈ Vj. Thus

σj = ρj for all j = k + 1, k + 2, . . . , n.

Now suppose j < k. Then sk and Sj commute. If we define

Wj = {ρj(f)ρ(sk)vT : f ∈ L(Sj)} = ρ(sk)Vj,

then we have an isomorphism of Sj-representations

Vj → Wj ρj(f)vT 7→ ρj(f)ρ(sk)vT .

Therefore ρ(s )v belongs to the ρ -isotypic component of ResSn ρ, and so σ = ρ . k T j Sj j j

Exercises.

3.3.1. Use Theorem 1.5.7 to prove that ρ(Xj) is self-adjoint for all 1 ≤ j ≤ n and all representations ρ of Sn. 98 Chapter 3. The Okounkov–Vershik approach

3.3.2 The spectrum of the YJM elements We define the spectrum of the YJM elements to be

Spec(n) = {α(T ): T ∈ T (ρ), ρ ∈ Scn},

where α(T ) is the weight of vT , as in (3.18). Since the elements of the Young basis are uniquely determined by their weight, we have X | Spec(n)| = dim Vρ.

ρ∈Scn

In particular, Spec(n) is in natural bijection with the set of all paths in the branching graph of the chain S1 ≤ S2 ≤ · · · ≤ Sn. We let Tα denote the path corresponding to α ∈ Spec(n), and we let vα denote the Young basis vector corresponding to Tα. We define an equivalence relation on Spec(n) by declaring that α ∼ β if vα and vβ belong to the same irreducible Sn-representation. Equivalently, α ∼ β if the corresponding paths in the branching graph start at the same vertex. It follows that

| Spec(n)/ ∼ | = Scn . (3.19)

We would now like to deduce an explicit description of Spec(n) and ∼ using the relations (see Exercise 3.2.1)

Xi+1si = siXi + 1, 1 ≤ i ≤ n − 1, (3.20)

Xjsi = siXj, 1 ≤ i ≤ n − 1, 1 ≤ j ≤ n, j 6= i, i + 1. (3.21)

Note that (3.20) is equivalent to

siXi+1 − 1 = Xisi, 1 ≤ i ≤ n − 1, 1 ≤ j ≤ n, j 6= i, i + 1. (3.22)

In what follows, if vα is a vector of the Young basis of an irreducible representation ρ of Sn, we denote ρ(si)vα and ρ(Xi)vα by sivα and Xivα, respectively. (In other words, we will use the notation of modules, which is equivalent to that of representations.)

Proposition 3.3.2. Suppose α = (a1, a2, . . . , an) ∈ Spec(n) and 1 ≤ i ≤ n − 1.

(a) ai 6= ai+1.

(b) ai+1 = ai ± 1 if and only if sivα = ±vα.

(c) If ai+1 6= ai ± 1, then

0 α := siα = (a1, a2, . . . , ai−1, ai+1, ai, ai+2, . . . , an) ∈ Spec(n),

α ∼ α0, and we have 1 vα0 = sivα − vα (3.23) ai+1 − ai 3.3. The spectrum of the YJM elements and the branching graph of Sn 99

(up to a scalar factor). Moreover, the space hvα, vα0 i is invariant under the action of Xi, Xi+1, and si, and in the basis {vα, vα0 }, these operators are given by the matrices

    1 1 ! 1 − 2 ai 0 ai+1 0 ai+1−ai (ai+1−ai) , , and 1 , 0 ai+1 0 ai 1 ai−ai+1 respectively.

Proof. It follows immediately from the definitions of α and vα that

Xivα = aivα and Xi+1vα = ai+1vα.

It also follows from (3.20) and (3.22) that hvα, sivαi is invariant under the action of Xi and Xi+1. It is also clearly invariant under the action of si. Suppose that sivα and vα are linearly independent and ai+1 = ai ± 1. Then a direct computation shows that the only line stable under the action of the subalgebra A of L(Sn) generated by si, Xi, and Xi+1 is the line spanned by sivα ∓ vα. This contradicts the fact that representations of A are completely reducible (since si, Xi, and Xi+1 act by unitary operators). Thus, if ai+1 = ai ± 1, then the vectors sivα and α are proportional, in which case (3.20) implies that aisivα + vα = ai+1sivα, (3.24) 2 2 and so sivα = ±vα. Conversely, if sivα = λvα, then the fact that si = 1 implies that λ = 1, and so λ = ±1. Then (3.24) implies that ai+1 = ai ± 1. This proves (b). Now suppose that ai+1 6= ai ± 1. Then, by the above, we have

dimhvα, sivαi = 2. By (3.20) and (3.22), we have

Xisivα = −vα + ai+1sivα,

Xi+1sivα = vα + aisivα.

Thus the actions of si, Xi, and Xi+1 on hvα, sivαi are represented, with respect to the basis {vα, sivα} by the matrices 0 1 a −1  a 1  , i , and i+1 , 1 0 0 ai+1 0 ai respectively. Now, we know that the actions of Xi and Xi+1 are diagonalizable. This implies that ai 6= ai+1, proving (a). Then we check directly that

0 1 v := sivα − vα ai+1 − ai is an eigenvector of Xi and Xi+1 with eigenvalues ai+1 and ai, respectively. In addition (3.21) implies that 0 0 Xjv = ajv , for all j 6= i, i + 1. 100 Chapter 3. The Okounkov–Vershik approach

Thus 0 α := (a1, a2, . . . , ai−1, ai+1, ai, ai+2, . . . , an) ∈ Spec(n), 0 and v = vα0 is a vector of the Young basis. We leave it as an exercise (Exercise 3.3.2) to verify the formula for the action of si in the basis {vα, vα0 }.

Exercises.

3.3.2. In the notation of the proof of Proposition 3.3.2, prove that the action of si on the space hvα, vα0 i, with respect to the basis {vα, vα0 }, is represented by the matrix

1 1 ! 1 − 2 ai+1−ai (ai+1−ai) . 1 1 ai−ai+1

3.3.3 Spec(n) = Cont(n) Let α = (a1, a2, . . . , an) ∈ Spec(n).

We say that si is an admissible transposition for α if ai+1 6= ai ± 1. In this case, by Proposition 3.3.2(c), we have

siα = (a1, . . . , ai−1, ai+1, ai, ai+2, . . . , an) ∈ Spec(n).

The Coxeter generators satisfy the relations (see Exercise 3.1.4)

sisj = sjsi, 1 ≤ i, j ≤ n − 1, |i − j| > 1, (3.25)

sisi+1si = si+1sisi+1, 1 ≤ i ≤ n − 2. (3.26)

n Lemma 3.3.3. Let α = (a1, a2, . . . , an) ∈ C . If ai = ai+2 = ai+1 ± 1 for some i ∈ {1, 2, . . . , n − 2}, then α∈ / Spec(n).

Proof. Suppose, towards a contradiction, that ai = ai+2 = ai+1 − 1 and α ∈ Spec(n). Then, by Proposition 3.3.2(b), we have

sivα = vα and si+1vα = −vα.

Then, by (3.26), we have

vα = si+1sisi+1vα = sisi+1sivα = −vα,

which contradicts the fact that vα 6= 0. The proof of the case that ai = ai+2 = ai+1 + 1 is similar.

Lemma 3.3.4. (a) For all (a1, a2, . . . , an) ∈ Spec(n), we have a1 = 0. 3.3. The spectrum of the YJM elements and the branching graph of Sn 101

(b) If (a1, a2, . . . , an) ∈ Spec(n), then (a1, a2, . . . , an−1) ∈ Spec(n − 1). (c) We have Spec(2) = {(0, 1), (0, −1)}.

Proof. (a) This follows immediately from the fact that X1 = 0.

(b) This follows from the fact that X1,X2,...,Xn−1 ∈ L(Sn−1) and Xjvα = ajvα for all 1 ≤ j ≤ n − 1.

(c) The group S2 has two irreducible representations: the trivial representation ι and the sign representation ε (see Example 1.1.5). The branching graph of S1 ≤ S2 is

ι ε

 ι0

where ι0 is the trivial representation of S1. Since X2 = (1, 2), we have ( v if v ∈ Vι, X2v = −v if v ∈ Vε.

Lemma 3.3.5. (a) For all n ≥ 1, we have Spec(n) ⊆ Cont(n). (b) If α ∈ Spec(n), β ∈ Cont(n), and α ≈ β, then β ∈ Spec(n) and α ∼ β.

Proof. We prove (a) by induction on n. The case n = 1 is trivial, while the case n = 2 follows from Lemma 3.3.4(c) and (3.5). Suppose that Spec(n − 1) ⊆ Cont(n − 1), and let

α = (a1, a2, . . . , an) ∈ Spec(n).

By Lemma 3.3.4(a), we have a1 = 0, which corresponds to (a) of Definition 3.1.7. By Lemma 3.3.4(b) and our induction hypothesis, to prove that α ∈ Spec(n), it suffices to prove that conditions (b) and (c) of Definition 3.1.7 are satisfied for j = n. First suppose, towards a contradiction, that α does not satisfy Definition 3.1.7(b). In other words, we suppose that

{an − 1, an + 1} ∩ {a1, a2, . . . , an−1} = ∅. (3.27)

By Proposition 3.3.2(c), the transposition (n − 1, n) is admissible for α, that is

(a1, a2, . . . , an−2, an, an−1) ∈ Spec(n).

Then, by Lemma 3.3.4(b) and our induction hypothesis, we have

(a1, a2, . . . , an−2, an) ∈ Spec(n − 1) ⊆ Cont(n − 1).

By (3.27), we have {an − 1, an + 1} ∩ {a1, a2, . . . , an−2} = ∅. 102 Chapter 3. The Okounkov–Vershik approach

But this contradicts Definition 3.1.7(b) for Cont(n − 1). This completes the proof that (b) of Definition 3.1.7 is satisfied for Cont(n). Now suppose, towards a contradiction, that α does not satisfy Definition 3.1.7(c) for j = n. In other words, we suppose that ai = an = a for some i < n and, for instance,

a − 1 ∈/ {ai+1, ai+2, . . . , an−1}.

(The case a + 1 ∈/ {ai+1, ai+2, . . . , an−1} is similar and will be omitted.) We choose i to be maximal with the above properties, so that we also have

a∈ / {ai+1, ai+2, . . . , an−1}. (3.28)

By the inductive hypothesis, we have (a1, a2, . . . , an−1) ∈ Cont(n − 1). Thus a + 1 may appear at most once in ai+1, ai+2, . . . , an−1 since, if it appeared more than once, Defini- tion 3.1.7(c) for Cont(n − 1) would imply that a appears between two occurrences of a + 1 in ai+1, ai+2, . . . , an−1, contradicting (3.28). Suppose

a + 1 ∈/ {ai+1, ai+2, . . . , an−1}.

Then (ai, ai+1, . . . , an) = (a, ∗,..., ∗, a) where all the entries ∗ are different from a, a + 1, and a − 1. Then, by a sequence of n − i − 1 admissible transpositions, we get

α ∼ (. . . , a, a, . . . ) ∈ Spec(n),

which contradicts Proposition 3.3.2(a). Similarly, if a + 1 ∈ {ai+1, ai+2, . . . , an−1}, then (ai, ai+1, . . . , an) = (a, ∗,..., ∗, a + 1, ∗,..., ∗, a), where, as before, each ∗ represents a number not equal to a, a + 1, or a − 1. Then, by a sequence of admissible transpositions, we get

α ∼ (. . . , a, a + 1, a, . . . ) ∈ Spec(n),

which contradicts Lemma 3.3.3. This completes the proof that (c) of Definition 3.1.7 is satisfied for Cont(n), completing the proof of part (a) of the current lemma. Part (b) of the current lemma is an immediate consequence of part (a), Corollary 3.1.9, and Proposition 3.3.2(c).

Theorem 3.3.6. (a) We have Spec(n) = Cont(n). (b) The equivalence relations ∼ and ≈ are the same. (c) The Young graph Y is isomorphic to the branching graph of the multiplicity-free chain S1 ≤ S2 ≤ · · · ≤ Sn ≤ Sn+1 ≤ · · · . 3.3. The spectrum of the YJM elements and the branching graph of Sn 103

Proof. First note that

| Cont(n)/ ≈ | = number of partitions of n (Corollary 3.1.10) = number of conjugacy classes of n (Proposition 3.1.2)

= Scn (Proposition 1.3.16) = | Spec(n)/ ∼ | (by (3.19)).

By Lemma 3.3.5, each equivalence class in Cont(n)/ ≈ is either disjoint from Spec(n) or is contained in a single equivalence class in Spec(n) ∼. In particular, the partition of Spec(n) induced by ≈ is finer than the partition of Spec(n) induced by ∼. Therefore

| Spec(n)/ ∼ | ≤ | Spec(n)/ ≈ | ≤ | Cont(n)/ ≈ | = | Spec(n)/ ∼ |.

It follows that the two inequalities above must be equality, proving (a) and (b). As explained at the beginning of Section 3.3.2, Spec(n) parameterizes the paths in the branching graph. By (3.8), Cont(n) parameterizes the paths in Y. Thus, by part (a), we have a bijective correspondence between the paths in Y and the paths in the branching graph. By Proposition 3.1.11 and the definition of ∼, this yields a bijective correspondence between the vertices of these graphs. This correspondence is clearly a graph isomorphism.

It follows from Theorem 3.3.6 that we have a natural correspondence between Scn and the n-th level of the branching graph Y, that is, the set of all partitions of n.

Definition 3.3.7 (The irreducible representations Sλ). Given a partition λ ` n, we define λ S to be the irreducible representation of Sn spanned by the vectors vα, with α ∈ Spec(n) = Cont(n) corresponding to a standard tableau of shape λ.

Proposition 3.3.8. We have dim Sλ = | Tab(λ)|. In other words, the dimension of Sλ is equal to the number of standard tableaux of shape λ.

Proof. This follows immediately from Definition 3.3.7.

Proposition 3.3.2 will allow us to give explicit formulas for the action of the Coxeter λ generators (and hence any element of Sn) on the irreducible representations S . See Theo- rems 3.4.2 and 3.4.4.

Corollary 3.3.9. Suppose 0 ≤ k < n, λ ` n, and µ ` k. The multiplicity of Sµ in ResSn Sλ Sk is equal to the number of paths in Y from λ to µ.

Proof. We have ResSn Sλ = ResSk+1 ResSk+2 ··· ResSn Sλ. Sk Sk Sk+1 Sn−1 At each step of the right side, the decomposition is multiplicity free and according to the branching graph . Thus, the multiplicity of Sµ in ResSn Sλ is equal to the number of paths Y Sk in Y that start at λ and end at µ. 104 Chapter 3. The Okounkov–Vershik approach

Corollary 3.3.10 (Branching rule). For λ ` n, M ResSn Sλ = Sµ. (3.29) Sn−1 µ`n−1: λ→µ

The sum above runs over all partitions µ ` n − 1 obtained from λ by removing a single box. Moreover, for all µ ` n − 1, we have M IndSn Sµ = Sλ. (3.30) Sn−1 λ`n: λ→µ

Proof. Corollary 3.3.9 immediately implies (3.29). Now suppose µ ` n − 1 and λ ` n. Then we have   dim Hom Sλ, IndSn Sµ Sn Sn−1   = dim Hom ResSn Sλ,Sµ (Frobenius reciprocity (Theorem 1.6.6)) Sn−1 Sn−1   M = dim Hom  Sν,Sµ (by (3.29)) Sn−1   ν`n−1: λ→ν ( 1, if λ → µ, = (Schur’s lemma (Corollary 1.2.2)). 0, otherwise.

Thus (3.30) follows from Corollary 1.2.6.

We have defined two notions of admissible transposition, one for Spec(n), and one for Cont(n) (coming from the notion of admissible transposition for tableaux). The following result states that these coincide.

Lemma 3.3.11. Suppose T ∈ Tab(n) is a standard tableau with content α = C(T ) = (a1, a2, . . . , an) ∈ Spec(n). For 1 ≤ i ≤ n − 1, the simple transposition si is admissible for T if and only if ai+1 6= ai ± 1.

Proof. We have

ai+1 = ai ± 1 ⇐⇒ the box labelled i + 1 is immediately to the right of or immediately below the box labelled i in T ⇐⇒ i and i + 1 belong to the same row or column of T

⇐⇒ si is not admissible for T. 3.4. The irreducible representations of Sn 105

Exercises.

3.3.3. Fix n ≥ 2. Determine the partitions λ, µ ` n such that Sλ is the trivial representation and Sµ is the sign representation (see Example 1.1.5). Hint: Use Proposition 3.3.2.

3.3.4. Fix n ≥ 2. Prove that Sn has exactly two inequivalent one-dimensional irreducible representations.

3.4 The irreducible representations of Sn In this section we will deduce explicit descriptions of the irreducible representations. In particular, we will compute matrix coefficients for the simple transpositions. We will also derive a formula for the primitive idempotents in terms of the YJM elements. Finally, we state a theorem of Jucys and Murphy relating the centre of the group algebra L(Sn) and the YJM elements.

3.4.1 Young’s seminormal form For a partition λ ` n, recall the tableau T λ defined in Section 3.1.4. Furthermore, recall λ that for any T ∈ Tab(λ), there exists a unique permutation πT ∈ Sn such that πT T = T . Recall that the Young vector vT associated to a tableau T is defined up to a scalar factor (of norm one if the Young vectors are normalized).

Proposition 3.4.1. It is possible to choose the vectors vT , T ∈ Tab(n), such that, for every T ∈ Tab(λ), λ ` n, we have

−1 X πT vT λ = vT + γRvR, R∈Tab(λ): `(πR)<`(πT ) for some γR ∈ C.

Proof. We prove the result by induction on `(πT ). At each step in the induction, we will choose the vectors vT for all T with `(vT ) = `. λ If `(πT ) = 1, then πT is an admissible transposition for T . In this case, the result follows from Proposition 3.3.2 (and Lemma 3.3.11). In particular, we choose vT to be the vα0 appearing in (3.23).

Now suppose πT = si1 si2 ··· si`−1 sj is the standard decomposition of πT into a product of admissible transpositions (see Remark 3.1.6). Then πT = πT1 sj, where T1 = sjT is a standard tableau and `(πT1 ) = `(πT ) − 1. By the inductive hypothesis, we have

−1 X (1) π v λ = v + γ v , (3.31) T1 T T1 R R R∈Tab(λ): `(πR)<`(πT1 ) 106 Chapter 3. The Okounkov–Vershik approach

(1) for some γR ∈ C. Since T = sjT1, then as in (3.23) we can choose vT such that 1 sjvT1 = vT + vT1 , (3.32) aj+1 − aj

where (a1, a2, . . . , an) = C(T1) is the content of T1. Then we have

−1 −1 (3.31) X (1) π v λ = s π v λ = s v + γ s v T T j T1 T j T1 R j R R∈Tab(λ): `(πR)<`(πT1 ) (3.32) 1 X (1) = vT + vT1 + γR sjvR aj+1 − aj R∈Tab(λ): `(πR)<`(πT1 )

Then the result follows by using Proposition 3.3.2 to compute the terms sjvR. Theorem 3.4.2 (Young’s seminormal form). Choose the vectors of the Young basis according to Proposition 3.4.1. If T ∈ Tab(λ) has content C(T ) = (a1, a2, . . . , an), then the simple transposition sj acts on vT as follows:

(a) If aj+1 = aj ± 1, then sjvT = ±vT . 0 (b) If aj+1 6= aj ± 1, then, setting T = sjT , we have

 1 vT + vT 0 if `(πT 0 ) > `(πT ),  aj+1−aj sjvT = 1  1  (3.33) vT + 1 − 2 vT 0 if `(πT 0 ) < `(πT ).  aj+1−aj (aj+1−aj )

Proof. Part (a) follows immediately from Proposition 3.3.2(b). Suppose that `(πT 0 ) > `(πT ). By Proposition 3.3.2(c), we have 1 sjvT = cvT 0 + vT ai+1 − ai

for some nonzero c ∈ C (since (3.23) holds up for vα0 up to a scalar factor). Since πT 0 = πT sj, we have, by Proposition 3.4.1

X 0 −1 −1 X vT 0 + γR0 vR0 = πT 0 vT λ = sjπT vT λ = sjvT + γRsjvR R0∈Tab(λ): R∈Tab(λ): `(πR0 )<`(πT 0 ) `(πR)<`(πT ) Thus c = 1. The case `(πT 0 ) < `(πT ) is similar, starting from πT = πT 0 sj and applying Proposi- tion 3.3.2 with α = C(T 0). (See Exercise 3.4.1.)

Corollary 3.4.3. In the orthogonal bases of Proposition 3.4.1 and Theorem 3.4.2 the matrix coefficients of the irreducible representations of Sn are rational numbers. In particular, the coefficients γR in Proposition 3.4.1 are rational numbers. 3.4. The irreducible representations of Sn 107

Exercises.

3.4.1. Complete the proof of Theorem 3.4.2(b) by treating the case `(πT 0 ) < `(πT ).

3.4.2 Young’s orthogonal form The orthogonal bases of Proposition 3.4.1 and Theorem 3.4.2 do not consist of unit vectors in general. Given an arbitrary invariant scalar product k k on Sλ that makes it a unitary representation of Sn (see Lemma 1.1.1) we can, of course, normalize the basis. If λ ` n and {vT : T ∈ Tab(λ)} is the basis as in Proposition 3.4.1 and Theorem 3.4.2, we define

vT wT = ,T ∈ Tab(λ). kvT k

Let T be a standard tableau and let C(T ) = (a1, a2, . . . , an) be its content. For i, j ∈ {1, 2, . . . , n}, the axial distance from j to i in T is the integer aj − ai. Geometrically, we move from j to i in the tableau T , counting each step left or downwards as +1 and each step right or upwards as −1. The resulting integer is aj − ai (and is independent of the path chosen). For example, if we have j , i

then the axial distance from j to i is aj − ai = 5. On the other hand, if we have

j i ,

then the axial distance from j to i is aj − ai = −2.

Theorem 3.4.4 (Young’s orthogonal form). In the orthonormal basis {wT : T ∈ Tab(n)} we have 1 r 1 s w = w + 1 − w , (3.34) j T r T r2 sj T

where r is the axial distance from j + 1 to j in T . In particular, if aj+1 = aj ± 1, we have r = ±1 and sjwT = ±wT .

0 Proof. Let T = sjT and suppose `(πT 0 ) > `(πT ). By Theorem 3.4.2, we have

2 2 1 kvT 0 k = sjvT − vT r 1 1 1 = ks v k2 − hs v , v i − hv , s v i + kv k2 j T r j T T r T j T r2 T     2 1 1 1 1 1 2 = kv k − v + v 0 , v − v , v + v 0 + kv k T r r T T T r T r T T r2 T 108 Chapter 3. The Okounkov–Vershik approach

 1  = 1 − kv k2, r2 T

where we have used the fact that vT ⊥ vT 0 . Thus we have

vT vT 0 vT 0 wT = and wT 0 = = q . kvT k kvT 0 k 1 1 − r2 kvT k

In this basis, the first case in (3.33) becomes (3.34).

The proof in the case `(πT 0 ) < `(πT ) is similar (Exercise 3.4.2).

It follows from Theorem 3.4.4 that

1 r 1 1 r 1 s w = w + 1 − w , s w = − w + 1 − w , (3.35) j T r T r2 sj T j sj T r sj T r2 T

where r is the axial distance from j + 1 to j. Thus, in the bases {wT , wsj T }, the action of sj is given by the orthogonal matrix

 1 q 1  1 − 2 r r . q 1 1  1 − r2 − r

Example 3.4.5. The only standard tableau of shape (n) is

T = T (n) = 1 2 n .

Its content is C(T ) = (0, 1, 2, . . . , n − 1). Thus aj+1 = aj + 1 for all 1 ≤ j ≤ n − 1. By (3.34),

sjwT = wT , for all 1 ≤ j ≤ n − 1.

(n) Hence S is the trivial representation of Sn.

Example 3.4.6. The only standard tableau of shape (1, 1,..., 1) is

1 2 T = T (1,1,...,1) = .

n

Its content is C(T ) = (0, −1, −2,..., −n + 1). Thus aj+1 = aj − 1 for all 1 ≤ j ≤ n − 1. By (3.34),

sjwT = −wT , for all 1 ≤ j ≤ n − 1.

(1,1,...,1) Hence S is the sign representation of Sn. 3.4. The irreducible representations of Sn 109

Example 3.4.7. Consider the representation S(n−1,1). The standard tableaux of shape (n − 1, 1) are T = 1 2 j −1 j +1 n , 2 ≤ j ≤ n. j j For 2 ≤ j ≤ n, we have

C(Tj) = (0, 1, . . . , j − 2, −1, j − 1, j, . . . , n − 2), (3.36)

where the entry −1 is in the j-th position. Let wj = wTj for 2 ≤ j ≤ n. Then the Young orthogonal form becomes

1 r 1 s w = w + 1 − w , (3.37) j j j j j2 j+1 s 1 1 s w = − w + 1 − w , (3.38) j−1 j j − 1 j (j − 1)2 j−1

skwj = wj, k 6= j − 1, j. (3.39) The branching rule gives

ResSn S(n−1,1) = S(n−1) ⊕ S(n−2,1). Sn−1

(n−1,1) We claim that S is isomorphic to the representation V1 of Example 1.4.5. We prove this claim by giving an explicit isomorphism. Let X = {1, 2, . . . , n} and recall that

( n ) X V1 = f ∈ L(X): f(j) = 0 . j=1 For 2 ≤ j ≤ n, define s 1 j − 1 w˜j = 1j−1 − δj, (3.40) pj(j − 1) j

where δj is the Dirac function at j, and 1j = δ1 + δ2 + ··· + δj. One can check (Exercise 3.4.3) that {w˜j : 2 ≤ j ≤ n} is an orthonormal basis for V1. Now, s s s 1 r 1 1 1 j − 1 1 j − 1 j − 1 w˜j + 1 − w˜j+1 = 1j−1 − δj + 1j − δj+1 j j2 jpj(j − 1) j j j j j s 1 j − 1 = 1j−1 − δj+1 = sjw˜j. pj(j − 1) j

Thus, the vectorsw ˜j satisfy (3.37). The proof that they satisfy (3.38) is similar. That fact that they satisfy (3.39) is straightforward. Therefore we have an isomorphism of represen- tations (n−1,1) V1 → S , w˜j 7→ wj, 2 ≤ j ≤ n. 110 Chapter 3. The Okounkov–Vershik approach

Exercises.

3.4.2. Complete the proof of Theorem 3.4.4 in the case `(πT 0 ) < `(πT ). 3.4.3. We adopt the notation of Example 3.4.7.

(a) Verify that {w˜j : 2 ≤ j ≤ n} is an orthonormal basis for V1.

(b) Verify that the vectorsw ˜j satisfy (3.38).

3.4.4. Prove thatw ˜j, as defined in (3.40), corresponds to the path (n − 1, 1) → (n − 2, 1) → · · · → (j, 1) → (j − 1, 1) → (j − 1) → (j − 2) → · · · → (2) → (1) in the Young graph by examining the action of Sn ≥ Sn−1 ≥ · · · ≥ S1. This gives another way of identifyingw ˜j with wj.

3.4.5. Definew ˜j as in (3.40). Show, by direct computation, that  (i − 1)w ˜ for i < j,  j Xjw˜j = −w˜j for i = j,  (i − 2)w ˜j for i > j.

In other words, prove that α(Tj) = (0, 1, 2, . . . , j − 2, −1, j − 1, j, . . . , n − 2) ∈ Spec(n). Compare with (3.36).

3.4.3 The Young seminormal units In this section we will deduce an expression, in terms of YJM elements, for the primitive idempotents of L(Sn) corresponding to the Gelfand–Tsetlin bases for the irreducible repre- sentations. (See Proposition 1.5.8.) For λ ` n, let λ dλ = dim S .

For each T = Tab(n) of shape λ, the primitive idempotent in L(Sn) corresponding to the Gelfand–Tsetlin vector wT is given by (see (1.46))

dλ e (π) = hπw , w i λ , π ∈ S . (3.41) T n! T T S n

Following the notation of this chapter, we will identify eT with the formal sum X eT (π)π.

π∈Sn For S ∈ Tab(n), we let X eT wS = eT (π)πws

π∈Sn 3.4. The irreducible representations of Sn 111

denote the action of eT (more precisely, of its Fourier transform) on wS. Furthermore, eT eS denotes the convolution of eT and eS. By Proposition 1.5.8, for all S,T ∈ Tab(n) we have

eT eS = δT,SeT and (3.42)

eT wS = δT,SwT . (3.43)

Note that (3.43) uniquely characterizes eT among elements of L(Sn). Also, it follows from (3.43) and Theorem 2.2.2 that {eT : T ∈ Tab(n)}

is a basis of the Gelfand–Tsetlin algebra GZ(n). The elements eT , T ∈ Tab(n), are called Young seminormal units. For T ∈ Tab(n), let T ∈ Tab(n − 1) denote the tableau obtained from T by removing the box labelled n. We also denote by aT (j) the j-th component of C(T ); in other words,

C(T ) = (aT (1), aT (2), . . . , aT (n)) . It follows that XkwT = aT (k)wT , for all T ∈ Tab(n), 1 ≤ k ≤ n. The following theorem gives a recursive formula for the Young seminormal units in terms of the YJM elements.

Theorem 3.4.8. We have eT = 1 for the unique T ∈ Tab(1). For n ≥ 2 and T ∈ Tab(n), we have Y Xn − aS(n) eT = eT . (3.44) aT (n) − aS(n) S∈Tab(n): S=T,S6=T

Proof. Lete ˜T denote the element of L(Sn) recursively defined by the right side of (3.44). We will show thate ˜T wS = δT,SwS for all S ∈ Tab(n). By the characterizing property (3.43), this will imply thate ˜T = eT . We proceed by induction on n. The result is clearly true for n = 1. Thus, we assume n ≥ 2 and that the result is true for n − 1. Suppose S 6= T . Then

e˜T wS =e ˜T wS = 0.

The first equality above follows from the fact thate ˜T ∈ L(Sn−1), and so, to compute the action ofe ˜T on wS, we first restrict the irreducible representation containing wS to Sn−1, obtaining the vector wS. The second equality above follows from the induction hypothesis. Now suppose S = T , but S 6= T . Then

XnwS = aS(n)wS,

and soe ˜T wS = 0 since the factor Xn − aS(n) in the right side of (3.44) acts as zero on wS. Finally, note that XnwT = aT (n)wT , and so

Xn − aS(n) wT = wT aT (n) − aS(n) 112 Chapter 3. The Okounkov–Vershik approach for all S ∈ Tab(n) such that S = T and S 6= T . Hence

Y Xn − aS(n) e˜T wT =e ˜T · wT aT (n) − aS(n) S∈Tab(n): S=T,S6=T

=e ˜T wT

=e ˜T wT

= wT , where the final two equalities follow from restriction to Sn−1 and the induction hypothesis.

Corollary 3.4.9. For 1 ≤ k ≤ n, we have X Xk = aT (k)eT . T ∈Tab(n)

Proof. This follows immediately from Theorem 3.4.8 and Proposition 1.5.8(d).

Exercises.

3.4.6 ([CSST10, Ex. 3.4.13]). (a) Let T be the unique standard tableau of shape n (see Example 3.4.5). Show that n 1 Y e = (1 + X ). T n! j j=1 Prove also that e = 1 P π in two ways: (i) by means of the representation T n! π∈Sn theory of Sn, and (ii) as an algebraic identity in Sn. Hint: If π ∈ Sn, then there exists a unique σ ∈ Sn−1 such that σ ∈ Sn−1 and j ∈ {1, 2, . . . , n − 1} such that π = σ(j → n → j). (b) Let T be the unique standard of shape (1n) = (1,..., 1) ` n (see Example 3.4.6). Show that n 1 Y e = (1 − X ). T n! j j=1 As in (a), give two proofs of the fact that e = 1 P (−1)`(π)π. T n! π∈Sn

(c) Let Tj be the standard tableau of Example 3.4.7. Show that

j−1 ! n ! (j − 2)! Y Y e = − (X + 1) · (X − j + 1) · X (X + 2) . Tj n!(n − 2)! i j i i i=1 i=j+1 3.4. The irreducible representations of Sn 113

3.4.4 The Theorem of Jucys and Murphy

Consider the polynomial algebra C[y1, . . . , ym]. The symmetric group Sm acts on C[y1, . . . , ym] by permuting the indeterminates y1, . . . , ym. In other words, for π ∈ Sm, we have a unique algebra isomorphism

C[y1, . . . , ym] → C[y1, . . . , ym], f 7→ π · f, determined by π · yi = yπ(i). The subalgebra

Sm C[y1, . . . , ym] = {f ∈ C[y1, . . . , ym]: π · f = f for all π ∈ Sm} ⊆ C[y1, . . . , ym] is called the algebra of symmetric polynomials. So an element f ∈ C[y1, . . . , ym] is a symme- tric polynomial if and only if it is left unchanged by any permutation of the indeterminates y1, . . . , ym. Recall that Z(n) = Z(L(Sn)) is the center of the group algebra L(Sn). We say an element f ∈ Z(n) is a symmetric polynomial in the YJM elements if

Sn−1 f = p(X2,...,Xn) for some p ∈ C[y1, . . . , yn−1] .

(In fact, recalling that X1 = 0, one can show that this is equivalent to requiring that Sn f = p(X1,X2,...,Xn) for some p ∈ C[y1, . . . , yn] .) Theorem 3.4.10 (Theorem of Jucys and Murphy). The center Z(n) of the group algebra of the symmetric group Sn is precisely the algebra of all symmetric polynomials in the YJM elements X2,X3,...,Xn. Proof. Due to lack of time, we will not prove this theorem in this course. A proof can be found in [CSST10, Th. 4.4.5] or in [Mur83, Th. 1.9]. Chapter 4

Further directions

In this final chapter, we briefly touch on some more advanced topics related to the represen- tation theory of the symmetric group and related algebras. We will omit the proofs of most results.

4.1 Schur–Weyl duality

Schur–Weyl duality is a result that gives a precise relationship between the representation theory of the symmetric group and the representation theory of the general linear group. Fix n ≥ 1. The general linear group GLn(C) is the group of invertible n × n complex matrices, under multiplication. It is an important example of a . The group n GLn(C) acts naturally on the space C (thought of as consisting of column vectors) via matrix multiplication. It then acts on the space

n n n V := C ⊗ C ⊗ · · · ⊗ C (4.1) | {z } k factors by simultaneous matrix multiplication:

n g(v1 ⊗ v2 ⊗ · · · ⊗ vk) = gv1 ⊗ gv2 ⊗ · · · gvk, v1, . . . , vk ∈ C , g ∈ G, extended by linearity. On the other hand, the symmetric group Sk also acts naturally on V by permuting the factors

n π(v1 ⊗ v2 ⊗ · · · ⊗ vk) = vπ−1(1) ⊗ vπ−1(2) ⊗ · · · ⊗ vπ−1(k), v1, . . . , vk ∈ C , π ∈ Sk, extended by linearity. It is straightforward to verify that the actions of GLn(C) and Sk on V commute:

gπv = πgv, for all g ∈ GLn(C), π ∈ Sk, v ∈ V. Thus we have maps to the commutants:

GLn(C) → EndSk V and Sk → EndGLn(C) V.

114 4.2. Categorification 115

Schur–Weyl duality asserts that the images of each of these maps generate the codomain (as an algebra). Furthermore, V decomposes as M V = Sλ ⊗ Lλ, (4.2) λ where the sum is over all Young diagrams (equivalently, partitions) with k boxes and at most n rows. In this decomposition, Sk acts on the first factor, while GLn(C) acts on the second λ factor. The L are pairwise inequivalent irreducible representations of GLn(C) (just as the λ S are pairwise inequivalent irreducible representations of Sk). It follows, for instance, that the multiplicity of Sλ in V is dim Lλ, and that the multiplicity of Lλ in V is dim Sλ.

Example 4.1.1. Suppose that k = 2 and n ≥ 2. We know that S2 has exactly two irreducible representations: the trivial representation and the sign representation. Thus we have

n n 2 n 2 n C ⊗ C = S C ⊕ Λ C , 2 n n n where S C is the space of symmetric tensors (the subspace of C ⊗ C on which S2 acts trivially) and Λ2Cn is the space of antisymmetric tensors (the subspace of Cn ⊗ Cn on which S2 acts via the sign representation). Each of these summands is an irreducible representation of GLn(C).

4.2 Categorification

Categorification is a powerful tool for relating mathematical structures that may appear on the surface to be completely unrelated. It also reveals hidden mathematical structure and provides a method to study and organize the representation theory of important algebras. In this section we will give a very brief overview of some examples of categorification, including categorification of symmetric functions, bosonic Fock space, and representations of certain Lie algebras. For further details we refer the reader to the expository references [Kle05, LS12, Sav17] and to the original research papers [Gei77, LS13, Kho14, RS17]

4.2.1 Symmetric functions

For n ≥ 0, let Symn denote the space of all degree n elements of

C x1, x2,... J K that remain invariant under any permutation of the indeterminates x1, x2,... . For example, we have the n-th power sum

∞ X n pn = xi ∈ Symn, n ∈ Z>0. i=1

We set p0 = 1. The algebra of symmetric functions is ∞ M Sym := Symn, n=0 116 Chapter 4. Further directions together with the natural sum and product of formal power series. Let P denote the set of all partitions, including the empty partition ∅ of 0. For λ = (λ1, λ2, . . . , λk) ∈ P, we define the power sum symmetric function

pλ := pλ1 pλ2 ··· pλk .

One can show that {pλ : λ ∈ P} is a basis for Sym, so that M Sym = Cpλ, λ∈P

It follows that Sym is isomorphic as an algebra to the polynomial algebra in the pn: ∼ Sym = C[p1, p2,... ]. We define an inner product on Sym by declaring

λ1 Y mi(λ) hpλ, pµi = δλ,µzλ, zλ = i mi(λ)!, i=1 where mi(λ) denotes the number of parts of the partition λ = (λ1, λ2, . . . , λk) equal to i. (Note that |λ|!/zλ is the number of partitions of |λ| that have cycle type λ. See Exercise 3.1.1.) For λ ∈ P, a semistandard tableau of shape of λ is a filling T of the boxes of λ (considered as a Young diagram) with elements of Z>0 such that the entries are weakly increasing from left to right along rows and strictly increasing down columns. For example

1 1 2 2 3 5 2 3 3 3 6 3 4 6 is a semistandard tableau of shape (6, 5, 2, 1). For a semistandard tableau T , define

∞ T Y ti x := xi ∈ C x1, x2,... , i=1 J K where ti is the number of occurrences of i in the tableau T . (Note that the above product is actually finite since ti = 0 for all but finitely many values of i.) The Schur function corresponding to λ ∈ P is

X T sλ := x , T where the sum is over all semistandard tableaux of shape λ. It can be shown that the Schur functions form an orthonormal basis for Sym. For n ∈ Z>0, let X hn := s(n) = xi1 xi2 ··· xin .

1≤i1≤i2≤···≤in 4.2. Categorification 117

We define h0 = 1. For every partition λ = (λ1, λ2, . . . , λk) we have the corresponding complete symmetric function

hλ = hλ1 hλ2 ··· hλk .

One can show that {hλ : λ ∈ P} is a basis for Sym. It follows that

Sym = C[h1, h2,... ]. (In fact, the complete symmetric functions have the slightly better property that they gene- rate Sym over Z, which makes them more naturally suited to categorification.)

4.2.2 The Grothendieck group

For the remainder of these notes, we let S0 denote the trivial group, so that S0 = S1. ∅ Clearly, S0 has one irreducible representation, which we denote by S , where ∅ is the empty partition of 0. The (finite-dimensional) representations of Sn, together with homomorphisms of repre- sentations, form a category Rep Sn. For a representation V of Sn, let [V ] denote its iso- morphism class. Consider the free vector space Fn on the set of isomorphism classes of representations of Sn: M Fn := C[V ], (4.3) [V ] where the sum is over all isomorphism classes [V ]. Now define ˜ Fn = h[V1 ⊕ V2] − [V1] − [V2]: V1,V2 reps of Sni ⊆ Fn. (4.4)

The Grothendieck group of Rep Sn is defined to be ˜ K(Sn) := Fn/Fn.

Equivalently, K(Sn) is the quotient of Fn by the relation

[V1 ⊕ V2] = [V1] + [V2], for all representations V1 and V2 of Sn.

We will denote the image in K(Sn) of an isomorphism class [V ] again by [V ]. The group operation on K(Sn) is the vector space addition. One can show that K(Sn) has a basis given by the classes of the irreducible representa- tions. Thus M λ K(Sn) = C[S ]. λ`n Define ∞ M K(S) := K(Sn), n=0 so that M λ K(S) = C[S ]. λ∈P 118 Chapter 4. Further directions

Taking the Grothendieck group of the category Rep Sn is an example of decategorification. It takes a category and produces a vector space. One can take the Grothendieck group of other categories, provided they have enough structure, but that is beyond the scope of our current discussion.

4.2.3 Categorification of the algebra of symmetric functions

Recall that for `, k ∈ Z≥0, we can view S`×Sk as a subgroup of S`+k, where S` permutes the elements {1, 2, . . . , `} and Sk permutes the elements {`+1, `+2, . . . , `+k}. Therefore, given a representation U of S` and a representation V of Sk, we have the outer tensor product representation U  V of S` × Sk (see Section 1.1.7), and hence the induced representation IndS`+k (U V ) S`×Sk 

of S`+k. Since the operations of outer tensor product and induction preserve isomorphism and direct sums, we have an induced bilinear map h i IndS`+k : K(S ) ⊗ K(S ) → K(S ). S`×Sk ` k `+k

Taking these maps for all `, k gives a binary operation on K(S). One can show that this operation is associative and unital. The unit element is class of the trivial representation of K(S0). The following theorem was first proved by Geissinger in [Gei77]. Theorem 4.2.1 (Categorification of the algebra of symmetric functions). The linear map

Φ: K(S) → Sym

λ determined by Φ([S ]) = sλ, λ ∈ P, is an isomorphism of algebras. Theorem 4.2.1 is an example of a categorification. We have a category (namely, the category of representations of symmetric groups) with some extra structure coming from induction. Decategorifying, i.e. passing to the Grothendieck group, recovers the algebra of symmetric functions. Thus, the category of representations of symmetric groups categorifies the algebra of symmetric functions. In fact, if we also consider the restriction functors ResS`+k , we obtain the structure of a coproduct on K(Sym). Then Theorem 4.2.1 can be S`×Sk strengthened to state that we have an isomorphism of Hopf algebras. We can also categorify the inner product on Sym as follows. For U, V representations of Sn, we define

h[U], [V ]i = dim HomSn (U, V ). (4.5) One can show that this does indeed define an inner product on K(S) (we declare elements of K(Sn) to be orthogonal to elements of K(Sm) for n 6= m). For λ, µ ∈ P, it follows from Schur’s lemma that λ µ h[S ], [S ]i = δλ,µ. Thus, the [Sλ], λ ∈ P, form an orthonormal basis for K(S). It follows that the isomorphism Φ of Theorem 4.2.1 is an isometry (i.e. it respects the inner products). 4.2. Categorification 119

4.2.4 The Heisenberg algebra Heisenberg algebras play a fundamental role in mathematics and mathematical physics. The (infinite rank) Heisenberg algebra H is the unital associative C-algebra with generators pn, ∗ pn, n ∈ Z>0, and relations

∗ ∗ ∗ ∗ ∗ ∗ pnpm = pmpn + δn,m1, pnpm = pmpn, pnpm = pmpn, n, m ∈ Z>0. (4.6) The first relation in (4.6) is often called the canonical commutation relation in the physics ∗ literature, where the generators pn and pn correspond to position and momentum operators in a single particle system with a countable infinite number of degrees of freedom. The Heisenberg algebra is also crucial in the study of the quantum harmonic oscillator. There is another, more presentation independent, way to describe the Heisenberg algebra H. Any f ∈ Sym acts on Sym via multiplication. Let f ∗ denote the operator on Sym adjoint to multiplication by f:

hf ∗(g), hi = hg, fhi for all f, g, h ∈ Sym.

∗ Then H is the subalgebra of EndC Sym generated by the operators f and f , for f ∈ Sym. The tautological action of H on Sym is called the bosonic Fock space representation. Any choice of generating set for Sym yields a presentation of H. In particular, if we choose power sums, we recover the presentation (4.6). If we instead choose the complete symmetric ∗ functions, we see that H is the unital associative C-algebra generated by hn, hn, n ∈ Z>0, are relations

min(m,n) ∗ X ∗ ∗ ∗ ∗ , hnhm = hm−rhn−r, hnhm = hmhn, hnhm = hmhn n, m ∈ Z>0. (4.7) r=0

∗ Note, in particular, that h1 and h1 satisfy the canonical commutation relation:

∗ ∗ h1h1 = h1h1 + 1 (4.8)

4.2.5 Categorification of bosonic Fock space By Corollary 3.3.10, for λ ` n, we have

M S M ResSn Sλ = Sµ and Ind n+1 Sλ = Sµ. Sn−1 Sn µ`n−1: µ`n+1: λ→µ µ→λ ∼ Restriction and induction respect isomorphism, in the sense that, if V1 = V2 are isomorphic representations of Sn, then

ResSn V ∼ ResSn V and IndSn+1 V ∼ IndSn+1 V . Sn−1 1 = Sn−1 2 Sn 1 = Sn 2 Thus, restriction and induction induce linear maps (see (4.3)):

[IndSn+1 ]: F → F , [ResSn ]: F → F . Sn n n+1 Sn−1 n n−1 120 Chapter 4. Further directions

Restriction and induction also respect direct sums, in the sense that     ResSn (V ⊕ V ) ∼ ResSn V ⊕ ResSn V , and Sn−1 1 2 = Sn−1 1 Sn−1 2     IndSn+1 (V ⊕ V ) ∼ IndSn+1 V ⊕ IndSn+1 V . Sn 1 2 = Sn 1 Sn 2 It follows that (see (4.4)) [IndSn+1 ](F˜ ) ⊆ F˜ , [ResSn ](F˜ ) ⊆ F˜ . Sn n n+1 Sn−1 n n−1 Therefore, we have induced maps on the quotients: [IndSn+1 ]: K(S ) → K(S ), [ResSn ]: K(S ) → K(S ). Sn n n+1 Sn−1 n n−1 Then we have linear maps ∞ X h S i [Ind]: K(S) → K(S), [Ind] = Ind n+1 Sn n=0 and ∞ X h i [Res]: K(S) → K(S), [Res] = ResSn Sn−1 n=1 where we adopt the convention that [Res] acts as zero on K(S0). Using the combinatorics of the Young graph, one can show that, for all λ ` n, n ≥ 1, we have   IndSn ResSn Sλ ∼ ResSn+1 IndSn+1 Sλ ⊕ Sλ. Sn−1 Sn−1 = Sn Sn (See Exercise 4.2.1.) It follows that [Ind][Res] = [Res][Ind] + 1 (4.9) as linear operators on K(S). Note that these are precisely the canonical commutation ∗ relations satisfied by the generators e1 and e1 of the Heisenberg algebra (see (4.8))! What about other generators? Recall from Section 4.2.3 that we have defined a product (n) on K(S). Thus, for n ∈ Z>0 we have the operator given by multiplication by [S ]: h i a : K(S) → K(S), a ([V ]) = [V ] · [S(n)] = IndSk+n V S(n) ,V a rep of S . n n Sk×Sn  k ∗ Since we also have an inner product on K(S) as in (4.5), we can consider operators an adjoint to an. These can also be described directly using restriction. One can then show that these operators satisfy the relations (4.7):

min(m,n) ∗ X ∗ ∗ ∗ ∗ , anam = am−ran−r, anam = aman, anam = aman n, m ∈ Z>0. (4.10) r=0 It follows that we have an action of the Heisenberg algebra on K(Sym). In other words, we can view K(Sym) as a representation of H Theorem 4.2.2 (Categorification of bosonic Fock space). The map Φ of Theorem 4.2.1 is an isomorphism of representations of the Heisenberg algebra H from K(S) to the bosonic Fock space representation of H on Sym. 4.2. Categorification 121

Exercises.

4.2.1. Prove that for all λ ` n, n ≥ 1, we have   IndSn+1 ResSn Sλ ∼ ResSn IndSn+1 Sλ ⊕ Sλ. Sn Sn−1 = Sn−1 Sn

4.2.6 Categorification of the basic representation The categorification of bosonic Fock space described in Section 4.2.5 can be refined somewhat. Suppose V is a representation of Sn for some n ∈ Z>0. Since the YJM element Xn commutes with S (see Exercise 3.2.1), the action of X on ResSn V gives an S -intertwiner. n−1 n Sn−1 n−1 We know that Xn acts diagonally, with integral eigenvalues. For i ∈ Z, let proji denote the projection onto the eigenspace corresponding to eigenvalue i. If we define

Res V := proj ResSn V, i i Sn−1 it follows that M ResSn V = Res V. Sn−1 i i∈Z

(Note that Resi V = 0 for all but finitely many values of i.) We call Resi i-restriction. This is a functor, with an adjoint functor i-induction, denoted Indi. In terms of the combinatorics of standard tableaux, Resi acts on irreducible representations by removing a box of content i (if such a box exists). That is,

( µ λ S if µ is obtained from λ by removing a box of content i, Resi S = 0 if µ has no removable boxes of content i.

Similarly, Indi adds a box of content i (if possible). As in Section 4.2.5, we have induced maps

[Indi], [Resi]: K(S) → K(S).

Let sl∞ denote the space of trace zero Z × Z matrices with a finite number of nonzero entries. In other words, elements of sl∞ are matrices X = (Xi,j)i,j∈Z such that Xi,j = 0 for all but finitely many pairs (i, j) ∈ × , and such that P X = 0. This is a Z Z i∈Z i,i with Lie bracket [X,Y ] = XY − YX, where juxtaposition denotes matrix multiplication. One can prove that the action of [Indi] and [Resi] on K(S) define an action of the Lie algebra sl∞ on K(S). The particular representation one obtains is called the basic representation. Thus, the representation theory of symmetric groups yields a categorification of the basic representation of sl∞. 122 Chapter 4. Further directions

4.2.7 Going even further The constructions discussed briefly above are just the start of the deep and extremely active area of categorification. We mention here some related constructions. For details on the first two, we refer the reader to the book [Kle05].

Positive characteristic. If we work over a field of positive characteristic p instead of over the complex numbers, we can repeat the categorification of the basic representation sketched in Section 4.2.6. Then the eigenvalues of the YJM elements lie in Z/pZ. We obtain a categorification of the affine Lie algebra slbp. Note that, in positive characteristic, Maschke’s Theorem fails (see Exercise 1.1.12). It is no longer true that every representation of Sn decomposes as a sum of irreducible ones. So here the representation theory is much more complicated. But categorification provides some very useful tools for studying these representations.

Higher level cyclotomic quotients. The group algebra L(Sn) of the symmetric group is a quotient of the degenerate affine Hecke algebra (see Exercise 3.2.3) by the ideal generated by x1. More generally one can take the cyclotomic quotient by the ideal generated by

Y µi (x1 − i) , i∈Z for some µi ∈ Z≥0, i ∈ Z, where all but finitely many of the µi are equal to zero. Such a quotient is called a degenerate cyclotomic Hecke algebra. One can repeat the constructions of Sections 4.2.5 and 4.2.6 in this setting and obtain categorifications of other representations of the Heisenberg algebra and of sl∞ (or of slbp if we work in positive characteristic). The study of these categorification is currently an active area research.

Wreath product algebras. One can replace the group algebra L(Sn) by wreath product algebras. These are algebras of the form

⊗n F ⊗ L(Sn),

⊗n where the multiplication involves the action of Sn on F by permuting the factors (similar to a semidirect product of groups). We have a chain

⊗2 ⊗3 C ⊆ F ⊆ F ⊗ L(S2) ⊆ F ⊗ L(S3) ⊆ · · · . Then one can examine the functors of induction and restriction. There are deep connections to representation, geometry, and algebraic combinatorics. We refer the reader to [RS17] for further details. Index

≈, 86 C(G, H), 69 h i, 18 character, 30 ≤,6 class function, 30 ⊥, 11 col, 86 ∼, 98 commutant, 25 ⊆,6 commutative algebra, 23 complete symmetric function, 117 addable, 82 conjugacy invariant, 69 adjacent transposition, 83 conjugate representation, 13 adjoint,6 conjugate transpose, 13 adjoint representation, 13 Cont(n), 86 admissible, 87 content, 86 admissible transposition, 83, 100, 104 contragredient representation, 13 algebra, 23 convolution, 50 ∗-algebra, 23 convolution algebra, 50 algebra homomorphism, 23 covers, 87 alternating representation,8 Coxeter generator, 83 ambivalent, 61 cycle, 80 ∗-anti-homomorphism, 24 cycle decomposition, 81 ∗-anti-isomorphic, 24 cycle type, 81 ∗-anti-isomorphism, 24 cyclic, 18 antilinear,9 cyclotomic quotient, 122 antisymmetric tensors, 115 associative algebra, 23 dρ, 34 α(T ), 97 decategorification, 118 axial distance, 107 degenerate cyclotomic Hecke algebra, 122 δx,8 basic representation, 121 dimension,5 bi-K-invariant, 58 Dirac function,8 branching graph, 74 direct sum, 11 branching rule for Sn, 104 of algebras, 24 disjoint cycles, 80 × C , 20 doubly transitive, 41 categorification, 118 dual, 12 symmetric functions, 118 dual basis, 13 category, 117 center, 23 EndG(V ), 25 central function, 30, 69 endomorphism algebra, 24

123 124 Index

End(V ), 24 induced representation, 61 equivalent,9 induction even permutation,8 transitivity, 63 inner product fixed point, 81 Hermitian,6 fixed points character formula, 34 internal tensor product, 17 fixed vector, 18 intertwine,9 Fourier inversion formula in B, 55 intertwining number theorem, 68 Fourier transform, 35, 54 intertwining operator,9 Frobenius character formula for induced re- interwiner,9 presentations, 64 G-invariant,5 Frobenius extension, 65 σ-invariant,5 Frobenius reciprocity, 44, 65 invariant vector, 18 inversion, 83 G, 10 b involution, 23 G , 21 bσ involutive algebra, 23 G-invariant,5 I(π), 83 Gelfand pair, 42 Irr(G), 10 weakly symmetric, 60 irreducible character, 30 Gelfand’s lemma, 59 irreducible representation,6 symmetric case, 41 isometric immersion, 47 Gelfand–Tsetlin algebra, 77 isomorphism,9 Gelfand–Tsetlin basis, 75 ∗-isomorphism, 24 general linear group,5, 114 isotypic component, 21 GL(V ),5 Grothendieck group, 117 Jucys–Murphy elements, 90 group algebra, 50 GZ(n), 77 left K-invariant, 58 left regular representation,8 Hasse diagram, 88 length, 88 Heisenberg algebra, 119 length of a partition, 80 Hermitian inner product,6 length of a permutation, 83 Hermitian scalar product,6 Lie algebra, 121 Hom(V,W ),9 Lie group, 114 homomorphism L(X),7 of algebras, 23 ∗-homomorphism, 24 Mackey’s lemma, 66 hook, 83 marked permutation, 91 Maschke’s Theorem, 11 IV ,5 matrix algebra, 24 i-induction, 121 matrix coefficients, 13 i-restriction, 121 minimal central idempotent, 27 ideal, 25 minimal central projection, 27 idempotent, 21, 25 minimal idempotent, 27 indecomposable, 12 minimal projection, 27 , 62 multiplicity, 21 Index 125 multiplicity free, 26, 74 sign representation,8 multiplicity-free subgroup, 73 simple tensor, 15 simple transposition, 83 N, 21 size of a partition, 80 S ,8, 80 odd permutation,8 n Spec(n), 98 Olshanskii’s Theorem, 95 spectrum, 98 operator stabilizer, 34 unitary,6 standard Young tableau, 82 opposite algebra, 53 structure coefficients, 24 orthogonal projection, 21 subalgebra, 23 outer tensor product, 17 subrepresentation,6 P, 116 Sym, 115 partition, 80 symmetric, 41 path, 88 symmetric functions, 115 permutation,8 symmetric Gelfand pair, 42 marked, 91 symmetric group,8 permutation representation,7 symmetric polynomials, 113 polar decomposition,9 symmetric tensors, 115 power sum, 115 λ power sum symmetric function, 116 T , 85 primitive idempotent, 27, 55 Tα, 98 projection, 21 Tab(λ), 82 onto an isotypic component, 36 Tab(n), 82 tag, 91 reducible representation,6 tensor product, 15 removable, 82 internal, 17 representation,5 outer, 17 unitary,6 Theorem of Jucys and Murphy, 113 G ResK ,6 trace, 29 restriction,6 transitivity of induction, 63 Riesz map, 12 transposition, 80 Riesz representation theorem, 12 trivial idempotent, 23 right K-invariant, 58 trivial representation,7 right regular representation,8 truncation, 76 row, 86 unital algebra, 23 Sλ, 103 unitarily equivalent,9 scalar product,6 unitarizable,6 Hermitian,6 unitary Schur function, 116 matrix, 13 Schur’s lemma, 19 unitary matrix realization of σ, 14 self-adjoint, 21, 23 unitary operator,6 semistandard tableau, 116 unitary representation,6 separate, 78 unitary space,6 126 Index

vα, 98 weakly symmetric Gelfand pair, 60 weight, 97 Wielandt’s lemma, 40 wreath product algebras, 122

Young (branching) graphs, 88 Young basis, 97 Young diagram, 81 Young seminormal units, 111 Young tableau, 82 standard, 82 Young’s orthogonal form, 107 Young’s seminormal form, 106 Young–Jucys–Murphy (YJM) elements, 90

Z(`, k), 91 Z(n), 77, 113 Bibliography

[CSST10] T. Ceccherini-Silberstein, F. Scarabotti, and F. Tolli. Representation theory of the symmetric groups, volume 121 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010. The Okounkov-Vershik approach, character formulas, and partition algebras. doi:10.1017/CBO9781139192361.

[Gei77] L. Geissinger. Hopf algebras of symmetric functions and class functions. pages 168–181. Lecture Notes in Math., Vol. 579, 1977.

[Jud] T. W. Judson. Abstract algebra: theory and applications. URL: http:// abstract.ups.edu/.

[Kho14] M. Khovanov. Heisenberg algebra and a graphical calculus. Fund. Math., 225(1):169–210, 2014. Available at https://arxiv.org/abs/1009.3295. doi: https://doi.org/10.4064/fm225-1-8.

[Kle05] A. Kleshchev. Linear and projective representations of symmetric groups, volume 163 of Cambridge Tracts in Mathematics. Cambridge University Press, Cam- bridge, 2005. URL: https://doi.org/10.1017/CBO9780511542800.

[LS12] A. Licata and A. Savage. A survey of Heisenberg categorification via graphical calculus. Bull. Inst. Math. Acad. Sin. (N.S.), 7(2):291–321, 2012.

[LS13] A. Licata and A. Savage. Hecke algebras, finite general linear groups, and Hei- senberg categorification. Quantum Topol., 4(2):125–185, 2013. URL: https: //doi.org/10.4171/QT/37.

[Mur83] G. E. Murphy. The idempotents of the symmetric group and Nakayama’s conjec- ture. J. Algebra, 81(1):258–265, 1983. doi:10.1016/0021-8693(83)90219-3.

[OV96] A. Okounkov and A. Vershik. A new approach to representation theory of symme- tric groups. Selecta Math. (N.S.), 2(4):581–605, 1996. doi:10.1007/PL00001384.

[Py03] P. Py. On representation theory of symmetric groups. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI), 301(Teor. Predst. Din. Sist. Komb. i Algoritm. Metody. 9):229–242, 245–246, 2003. doi:10.1007/ s10958-005-0316-7.

127 128 Bibliography

[Roy] D. Roy. Linear algebra II: Notes for MAT 3141. Translated by A. Sa- vage. URL: http://alistairsavage.ca/mat3141/notes/MAT%203141%20-% 20Linear%20Algebra%20II.pdf.

[RS17] D. Rosso and A. Savage. A general approach to Heisenberg categorification via wreath product algebras. Math. Z., 286(1-2):603–655, 2017. URL: https://doi. org/10.1007/s00209-016-1776-9.

[Sav17] A. Savage. Heisenberg categorification. CMS Notes, 49(3):16–17, 2017. Available at https://cms.math.ca/notes/vault/Notesv49n3.pdf.

[Tri] S. Triel. Linear algebra done wrong. URL: http://www.math.brown.edu/~treil/ papers/LADW/LADW.html.

[Ver06] A. M. Vershik. A new approach to the representation theory of the symmetric groups. III. Induced representations and the Frobenius-Young correspondence. Mosc. Math. J., 6(3):567–585, 588, 2006.

[VO04] A. M. Vershik and A. Yu. Okounkov. A new approach to representation theory of symmetric groups. II. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI), 307(Teor. Predst. Din. Sist. Komb. i Algoritm. Metody. 10):57–98, 281, 2004. doi:10.1007/s10958-005-0421-7.