Schur-Weyl duality

These notes are from a graduate student reading group at the University of Utah happening during the Fall semester 2018.

Contents

1 Introduction
2 Modules and representations of groups
  2.1 Modules
  2.2 Group representations
3 Linear algebra
  3.1 Tensor products
    3.1.1 Application to representations
  3.2 Symmetric products
  3.3 Exterior products
  3.4 Determinant
4 Character theory
  4.1 Decomposition of Representations
  4.2 Character Theory
    4.2.1 Properties of χ
    4.2.2 More Properties of χ
5 Representations of the symmetric group
6 Double Centralizer Theorem
7 Schur-Weyl duality
8 Determinantal rings and applications to commutative algebra
  8.1 Determinantal Rings Are Cohen-Macaulay
  8.2 Relation to Representation Theory
9 Decomposing tensor products of Weyl modules

1 Introduction

Talk by Adam Brown, notes by Sabine Lang

We give here a brief outline of the topics that will be covered during the semester, along with some motivation behind Schur-Weyl duality. The basic idea comes from Lie groups: we want to find all the representations of these Lie groups. In our case, we will focus on only one Lie group: the general linear group. For V = C^n, we define G = GL(V) and want to find all the representations of G, i.e., the continuous homomorphisms π : G → GL(W) for W a complex vector space. We also say that G acts on W in this case. As a starting point, we know one representation: the canonical or standard representation, defined by Id : G → GL(V). Can we use this to construct new representations?

Let us first analyze the operations that we can do with vector spaces. The direct sum of V with itself does not give a new representation: G → GL(V ⊕ V) is given by

g ↦ ( g 0
      0 g ),

which corresponds to two copies of the standard representation. A more interesting operation is the tensor product: G acts on V ⊗ V by g · (v1 ⊗ v2) = (g · v1) ⊗ (g · v2). How can we decompose V ⊗ V into G-invariant subspaces? A first subspace which is preserved by the action of G is the space of alternating vectors Λ^2 V = ⟨{v1 ⊗ v2 | v1 ⊗ v2 = −v2 ⊗ v1}⟩. Can we decompose V ⊗ V as Λ^2 V ⊕ W? Yes! And W is then equal to Sym^2 V = ⟨{v1 ⊗ v2 | v1 ⊗ v2 = v2 ⊗ v1}⟩, which is also preserved by the action of G. We obtain

V ⊗ V = Λ^2 V ⊕ Sym^2 V.

Can we generalize this idea? We can consider V ⊗ ··· ⊗ V (k factors) = V^{⊗k}. However, the decomposition that we had for k = 2 is no longer as simple. For k = 3 already, V^{⊗3} = Λ^3 V ⊕ Sym^3 V ⊕ W with W ≠ 0. Therefore, one of our goals is to decompose V^{⊗k} into G-invariant subspaces.

Let us consider the symmetric group S_k = {bijections from F_k to F_k}, where F_k is a set with k elements. We have an action of S_k on F_k by σ · x = σ(x) for σ ∈ S_k, x ∈ F_k. However, F_k is a set and not a vector space, so this is not a representation.
But we can construct a vector space E = {f : F_k → C}, which has an action of S_k given by (σ · f)(x) = f(σ^{-1}(x)). This is a representation of S_k. Going back to V^{⊗k}, we can use it to construct a representation of S_k: we have an action of the symmetric group on V^{⊗k} given by σ · (v1 ⊗ ··· ⊗ vk) = v_{σ^{-1}(1)} ⊗ ··· ⊗ v_{σ^{-1}(k)} on simple tensors, and extended linearly. Now, we can try to decompose V^{⊗k} into S_k-invariant subspaces. For example, when k = 2, the group S_2 has two elements, the transposition (1 2) and the identity. If v1 ⊗ v2 ∈ Λ^2 V, then (1 2) · v1 ⊗ v2 = v2 ⊗ v1 = −v1 ⊗ v2. Therefore, S_2 acts on Λ^2 V by the sign of the permutation, and we can decompose Λ^2 V into copies of the sign representation. Now if v1 ⊗ v2 ∈ Sym^2 V, then (1 2) · v1 ⊗ v2 = v2 ⊗ v1 = v1 ⊗ v2, and Sym^2 V decomposes into copies of the trivial representation. We conclude that V ⊗ V = Λ^2 V ⊕ Sym^2 V is a

decomposition into invariant subspaces, both for the G-action and for the S_2-action. This turns out to be true in general. This is due to the fact (to be proven later) that the actions of S_k and G commute: for σ ∈ S_k, g ∈ G we have σ · (g · (v1 ⊗ ··· ⊗ vk)) = g · (σ · (v1 ⊗ ··· ⊗ vk)). Therefore, the decompositions into invariant subspaces should agree. This leads to the main theorem of Schur-Weyl duality:

Theorem (Schur-Weyl duality). V^{⊗k} decomposes both as ⊕_{π ∈ Irr(S_k)} E_π and as ⊕_{φ ∈ Irr(GL(V))} U_φ.

We are in particular interested in using S_k (it is a finite group, and there is a combinatorial approach to its representations) to understand GL(V). In conclusion, if we have the canonical representation V of G = GL(V) and an irreducible representation π of S_k, then we can construct an irreducible representation of G. More precisely, we can get all polynomial representations of G that way. We can realize this using the Schur functor

S_π : V ↦ Hom_{S_k}(E_π, V^{⊗k}),

and Hom_{S_k}(E_π, V^{⊗k}) is an irreducible representation of G (or zero). This duality has applications to any object with both a GL(V)- and an S_k-action. For example, for a field k, we can form the polynomial ring

k[ x y
   w z ]

in the entries of a 2 × 2 matrix (think of k[x], but we add a matrix of variables instead of a single variable x). Then the quotient by the determinant, k[x, y, w, z]/(xz − yw), has a natural GL(V)-action. We can use Schur-Weyl duality to decompose it, and compute the syzygies.
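The fact that the S_k- and G-actions on V^{⊗k} commute, stated above and proved later, is easy to check numerically in a small case. Below is a sketch in Python for k = 2 and V = C^2 (integer entries keep the check exact; the helper names are ours):

```python
# A tensor t in V (x) V, for V = C^2, is stored as a 2x2 array t[i][j]
# of coefficients of e_i (x) e_j.

def g_action(g, t):
    """(g (x) g) acting on V (x) V: (g.t)[i][j] = sum_{a,b} g[i][a] g[j][b] t[a][b]."""
    n = len(t)
    return [[sum(g[i][a] * g[j][b] * t[a][b] for a in range(n) for b in range(n))
             for j in range(n)] for i in range(n)]

def swap_action(t):
    """The transposition in S_2 acting on V (x) V by permuting the tensor factors."""
    n = len(t)
    return [[t[j][i] for j in range(n)] for i in range(n)]

g = [[1, 2], [3, 5]]   # an invertible 2x2 matrix (det = -1)
t = [[1, 0], [4, 7]]   # an arbitrary tensor in V (x) V

# The two actions commute: swap(g.t) equals g.(swap(t)).
assert swap_action(g_action(g, t)) == g_action(g, swap_action(t))
```

In matrix language this is just the identity (g t gᵀ)ᵀ = g tᵀ gᵀ, which holds for every g and t, matching the general claim for k = 2.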

2 Modules and representations of groups

Talk by Cameron Zhao, notes by Peter McDonald

Today we are going to introduce some fundamental tools of linear algebra that we will be using later in the semester.

2.1 Modules

Definition 2.1.1. Let R be an associative algebra with unity. An R-module is an abelian group M with an action of R on M satisfying

1. The action preserves addition and multiplication in R and addition in M, i.e., (rs + t) · (a + b) = r · (s · a) + t · a + r · (s · b) + t · b for r, s, t ∈ R and a, b ∈ M

2. 1R · a = a for all a ∈ M

Example 2.1.2. Abelian groups are Z-modules.

Example 2.1.3. Vector spaces over a field k are k-modules.

Example 2.1.4. Consider the vector space k^n for a field k. This is a module over the matrix algebra M_n(k).

Example 2.1.5. Take A ∈ M_n(k). Then k^n can be viewed as a k[x]-module where x acts as A. This gives us a lot of information about A: rational and Jordan canonical forms, minimal polynomial, etc.

Example 2.1.6. If R is an algebra, then R is a left R-module. The submodules of R are precisely the left ideals of R.

Example 2.1.7. Let R be a commutative ring and M a left R-module. Then M is also a right R-module where the action is given by a · r = r · a for r ∈ R and a ∈ M. Then M is a bi-module.

Remark 2.1.8. If R is not commutative then the above is not well-defined, because a · (rs) = (a · r) · s = (r · a) · s = s · (r · a) = (sr) · a ≠ (rs) · a in general.
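Example 2.1.5 can be made concrete: evaluating a polynomial p at the matrix A and applying the result to a vector is exactly the k[x]-module action. A minimal Python sketch (the function names are ours; coefficients are listed from the constant term up):

```python
def mat_vec(A, v):
    """Multiply a square matrix by a vector."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def poly_action(coeffs, A, v):
    """Action of p(x) = coeffs[0] + coeffs[1] x + ... on v, with x acting as A.
    Computes p(A).v by Horner's rule, never forming p(A) itself."""
    result = [0] * len(v)
    for c in reversed(coeffs):
        result = [r + c * vi for r, vi in zip(mat_vec(A, result), v)]
    return result

A = [[0, 1], [1, 0]]   # the swap matrix, so A^2 = I
v = [3, 5]
# x^2 - 1 annihilates every vector, since A^2 - I = 0 (it is the minimal polynomial):
assert poly_action([-1, 0, 1], A, v) == [0, 0]
assert poly_action([1, 1], A, v) == [8, 8]   # (A + I)v
```

The minimal polynomial of A is visible here as the lowest-degree monic p with p(A)·v = 0 for all v.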

2.2 Group representations

Definition 2.2.1. Let G be a group. A representation of G is a group homomorphism ρ : G → GL(V) for some vector space V.

Group representations are precisely modules over a certain algebra, so we can use the tools for studying modules to study representations. What is this algebra?

Definition 2.2.2. The group algebra of G over a field k, denoted k[G], is constructed as the k-vector space spanned by the basis G, i.e., k[G] = span_k(G). Then, every element looks like

Σ_{g ∈ G} c_g · [g].

Multiplication is defined on basis elements by

[g1][g2] = [g1g2], and extended linearly.

Example 2.2.3. C[Z/3] = {c0[0] + c1[1] + c2[2] : ci ∈ C}.

(2[0] + [1])(i√2 [1] + 3[2]) = 2i√2 [0][1] + 6 [0][2] + i√2 [1][1] + 3 [1][2]
                            = 2i√2 [0+1] + 6 [0+2] + i√2 [1+1] + 3 [1+2]
                            = 3[0] + 2i√2 [1] + (6 + i√2)[2]

We can show this is an associative algebra with identity: the identity is 1_k [e], where e ∈ G is the identity element of the group.
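The multiplication above can be implemented directly. The sketch below (our own helper, operating on {residue: coefficient} dictionaries for C[Z/n]) reproduces the computation of Example 2.2.3 up to floating-point error:

```python
import math

def ga_mul(a, b, n=3):
    """Multiply two elements of C[Z/n], stored as {residue: coefficient} dicts."""
    out = {}
    for g1, c1 in a.items():
        for g2, c2 in b.items():
            k = (g1 + g2) % n        # [g1][g2] = [g1 + g2] in Z/n
            out[k] = out.get(k, 0) + c1 * c2
    return out

r2 = math.sqrt(2)
lhs = ga_mul({0: 2, 1: 1}, {1: 1j * r2, 2: 3})
expected = {0: 3, 1: 2j * r2, 2: 6 + 1j * r2}
assert all(abs(lhs[k] - expected[k]) < 1e-12 for k in (0, 1, 2))
```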

Proposition 2.2.4. Representations of the group G over the field k are k[G]-modules.

Proof. If we have a representation ρ : G → GL(V), then it extends linearly to ρ̃ : k[G] → End_k(V). Then V is a k[G]-module. As an example of how k[G] acts on V, consider how c1 g1 + c2 g2 ∈ k[G] acts on v ∈ V:

(c1 g1 + c2 g2) · v = ρ̃(c1 g1 + c2 g2)(v)
                    = ρ̃(c1 g1)(v) + ρ̃(c2 g2)(v)
                    = c1 ρ(g1)(v) + c2 ρ(g2)(v).

Now, given a k[G]-module V, we get a ring homomorphism ρ̃ : k[G] → End_k(V), where ρ̃ maps group elements of G to invertible transformations:

ρ̃([g]) ρ̃([g^{-1}]) = ρ̃([g][g^{-1}]) = ρ̃([e]) = Id.

Because ρ̃ preserves multiplication, ρ = ρ̃|_G is a group homomorphism. Then ρ : G → GL(V) is a representation.

Essentially, instead of thinking of G acting on a set, we are thinking about G as giving us functions on our set. Now we need to make sure that k[G]-homomorphisms are the same as homomorphisms of representations.

Definition 2.2.5. A homomorphism of representations ϕ : V → W is a k-linear map that commutes with the group action, i.e., the following diagram commutes for every g ∈ G:

        ϕ
    V −−−−→ W
    |       |
  g |       | g
    ↓   ϕ   ↓
    V −−−−→ W

This is also called a G-equivariant map.

Definition 2.2.6. An R-module homomorphism ϕ : M → N is a Z-linear map that commutes with the ring action, i.e., r · ϕ(m) = ϕ(r · m) for r ∈ R and m ∈ M. This is also called an R-linear map.

So a k[G]-linear map ϕ : V → W is one such that for all Σ_{g ∈ G} c_g [g] ∈ k[G] the following diagram commutes:

              ϕ
          V −−−−→ W
          |       |
 Σ c_g[g] |       | Σ c_g[g]
          ↓   ϕ   ↓
          V −−−−→ W

Take 1[e] = [e]; specializing the diagram to [e] shows that ϕ is k-linear. Furthermore, specializing to a single [g] shows that ϕ commutes with the action of each g ∈ G, so a k[G]-module homomorphism is also a homomorphism of representations.

3 Linear algebra

Talk by Cameron Zhao, notes by Cameron Zhao

(Some of the contents in this section were skipped in the talk.)

3.1 Tensor products

Recall from last time: we showed that k[G]-modules are the same as representations of G, and that k[G]-module maps are the same as homomorphisms between representations. In other words, the category of k[G]-modules is isomorphic to the category of representations of G.

The direct product M × N of two modules is again a module, where r(m, n) = (rm, rn). It is also called the direct sum, denoted M ⊕ N (note: direct sums and direct products of modules only differ when infinitely many modules are considered). The idea is that when we take the direct sum of two vector spaces, the resulting dimension is the sum of the dimensions of the two spaces. So one can try to construct a vector space whose dimension is the product of the dimensions of two smaller spaces. A natural choice of basis on such a space is {(e_i, f_j)}. This is the tensor product.

This can be done for general modules. It should be an abelian group satisfying some conditions. So to do this, we take elements in the free abelian group generated by M × N and impose the relations we want:

Definition 3.1.1. Let M be a right R-module and N a left R-module. The tensor product is defined to be M ⊗_R N := F(M × N)/I, where F(M × N) is the free abelian group generated by M × N, and I is the subgroup generated by elements of the form

(m1 + m2, n) − (m1, n) − (m2, n),

(m, n1 + n2) − (m, n1) − (m, n2),
(mr, n) − (m, rn).

The image of (m, n) in the quotient is denoted by m ⊗ n. So M ⊗_R N consists of elements of the form Σ_i m_i ⊗ n_i subject to the relations

(m1 + m2) ⊗ n = m1 ⊗ n + m2 ⊗ n,
m ⊗ (n1 + n2) = m ⊗ n1 + m ⊗ n2,
mr ⊗ n = m ⊗ rn.

An element of the form m ⊗ n is called a simple tensor or a rank one tensor. So M ⊗_R N consists of sums of simple tensors. The most important property of the tensor product is its universal property. Many characterizations of the tensor product can be deduced from it.

Definition 3.1.2. A map ϕ : M × N → L is called R-balanced if it is Z-linear in both arguments and ϕ(mr, n) = ϕ(m, rn).

Theorem 3.1.3 (Universal Property of Tensor Product). Let R be an algebra with 1, M a right module, N a left module, and L an abelian group. Then we have a 1-1 correspondence

{R-balanced maps ϕ : M × N → L} ←→ {group homomorphisms φ : M ⊗_R N → L},
ϕ ↦ φ, where φ(m ⊗ n) = ϕ(m, n),

such that the following diagram commutes:

          ι
  M × N −−−−→ M ⊗_R N
        \       |
      ϕ  \      | φ
          ↘    ↓
             L

where ι(m, n) = m ⊗ n.

Proof. Given φ, let ϕ = φ ∘ ι; then ϕ is R-balanced because ι(mr, n) = mr ⊗ n = m ⊗ rn = ι(m, rn). Given ϕ, it extends to a group homomorphism ϕ̃ : F(M × N) → L from the free abelian group F(M × N). We want it to factor through M ⊗_R N, so that it gives a map φ : M ⊗_R N → L. Indeed, the generators of the subgroup defining the tensor product vanish under ϕ̃.

The universal property can be understood in the following way: we want to study bilinear maps, but bilinear maps are not module maps. The universal property states that using the tensor product we can encode all the information of a bilinear map in a module map. A commutative version of this is: bilinear maps M × N → L are the same as linear maps M ⊗_R N → L. Using this we can prove many nice properties of the tensor product:

Proposition 3.1.4.

1. (Extension of scalars) If M is a left R-module and S is an R-algebra, then S ⊗_R M is the "smallest" S-module containing M.

2. (Tensor products are associative) (M ⊗_R N) ⊗_R L ≅ M ⊗_R (N ⊗_R L) for any right module M, bimodule N and left module L.

3. (Tensor products are commutative) If R is commutative, then M ⊗_R N ≅ N ⊗_R M.

4. (Tensor products distribute over direct sums) (M ⊕ N) ⊗_R L ≅ (M ⊗_R L) ⊕ (N ⊗_R L).

5. (Vector spaces) If k is a field and V, W are vector spaces with bases {e_i}, {f_j} respectively, then V ⊗_k W is a vector space with basis {e_i ⊗ f_j}, so dim(V ⊗ W) = (dim V) · (dim W).

Intuitively, when tensoring over fields, the product is not "shrunk" by relations. But over general algebras the product will "shrink". For example:

Example 3.1.5. Let R be a commutative ring, I ⊂ R an ideal, M a module. Then R/I ⊗_R M ≅ M/IM. So if J is another ideal, then R/I ⊗_R R/J ≅ (R/J)/(I · (R/J)) ≅ R/(I + J).

If we take a bimodule M we can tensor it with itself over and over again. Since tensor products are associative, we have formed the tensor powers M^{⊗n}. We can sum up all the tensor powers and form the tensor algebra

T(M) = R ⊕ M ⊕ M^{⊗2} ⊕ ···

where the multiplication is given by concatenation of tensors. This is an associative algebra with 1, and is the "largest" associative algebra containing M.

Note: the countable direct sum is defined to be

⊕_{i=1}^∞ M_i := {(m1, m2, ...) | m_i ∈ M_i, only finitely many entries are nonzero},

whereas the countable direct product is

∏_{i=1}^∞ M_i := {(m1, m2, ...) | m_i ∈ M_i},

without the finiteness assumption.

3.1.1 Application to representations

If A : M1 → M2, B : N1 → N2 are linear maps, then we can construct the tensor product A ⊗ B : M1 ⊗ N1 → M2 ⊗ N2, m ⊗ n ↦ A(m) ⊗ B(n). So if A′, B′ are also linear maps, then (A ⊗ B) ∘ (A′ ⊗ B′) = (A ∘ A′) ⊗ (B ∘ B′). Over fields and finite dimensional vector spaces, A, B can be represented as matrices. The matrix of A ⊗ B is

( a11 B  a12 B  ···  a1l B
   ⋮      ⋮           ⋮
  as1 B  as2 B  ···  asl B ),

assuming A = (a_{ij}), i = 1, ..., s, j = 1, ..., l. In particular if A is s × s and B is l × l, then tr(A ⊗ B) = tr(A) tr(B) and det(A ⊗ B) = (det A)^l (det B)^s.

If ρ : G → GL(V), σ : G → GL(W) are two representations, then for any g ∈ G, ρ(g) ⊗ σ(g) is an invertible transformation on V ⊗_k W. Let (ρ ⊗ σ)(g) = ρ(g) ⊗ σ(g); then ρ ⊗ σ : G → GL(V ⊗_k W) is another representation. We will simply write V ⊗ W. (Note: V ⊗_k W is different from V ⊗_{k[G]} W! For example if V = V_trv is the trivial representation, then V_trv ⊗_k W ≅ W, but V_trv ⊗_{k[G]} W ≅ W_G := W/⟨gw − w | g ∈ G⟩.) Similarly, if σ : H → GL(W) is a representation of another group, then ρ ⊗ σ : G × H → GL(V ⊗_k W) is a representation of G × H. This is denoted by V ⊠ W.

Now we look at Hom sets. Let N* := Hom_R(N, R) be the dual space of N.

Proposition 3.1.6. Let M be a free module, i.e., M ≅ R^{⊕m} for some m. There is a Z-linear isomorphism N ⊗_R M* → Hom_R(M, N), n ⊗ f ↦ (m ↦ n f(m)). If R is commutative, then it is also R-linear.

Proof. For simplicity, we prove it for the case when R = k is a field. Any linear map A : V → W is uniquely determined by its values on basis elements. If span{e_i} = V, span{f_j} = W, then A is uniquely determined by the coefficients c_{ij} where A e_i = Σ_j c_{ij} f_j. So A is the image of Σ_{ij} c_{ij} f_j ⊗ e_i*.

This is very useful. Recall that any linear map A : V → W induces a map A* : W* → V*, f ↦ f ∘ A. If V, W are finite dimensional, then the matrix of A* is the transpose of the matrix of A. If ρ : G → GL(V), σ : G → GL(W) are representations, then ρ* : G → GL(V*), g ↦ (− ∘ ρ(g^{-1})), is a representation. Therefore Hom_k(V, W) = W ⊗_k V* is also a representation.
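As a sanity check on the block-matrix description, the following self-contained Python sketch (our own kron, tr, and cofactor-expansion det helpers) verifies the trace and determinant identities for a 2 × 2 matrix A and a 3 × 3 matrix B:

```python
def kron(A, B):
    """Matrix of A (x) B in the basis {e_i (x) f_j}: block (i,j) is a_ij * B."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

def det(M):
    """Determinant by cofactor expansion along the first row (fine for small M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 2], [3, 4]]                    # s x s with s = 2, det A = -2
B = [[1, 0, 2], [0, 1, 0], [3, 0, 1]]   # l x l with l = 3, det B = -5
K = kron(A, B)
assert tr(K) == tr(A) * tr(B)
assert det(K) == det(A) ** 3 * det(B) ** 2   # (det A)^l (det B)^s
```

Using different sizes for A and B makes the exponents in the determinant identity unambiguous.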
The link of this to characters is that dim_k(Hom_k(V, W)^G) = ⟨χ_V, χ_W⟩, where Hom_k(V, W)^G is the invariant subspace of Hom_k(V, W) consisting of the elements that are fixed under all g ∈ G.

We mentioned that extension of scalars can be done using tensor products. If H ≤ G is a subgroup, then k[H] ⊂ k[G] is a subalgebra. So if V is an H-module, then we can extend the scalars to k[G] by taking the tensor product.

Definition 3.1.7. Let H ≤ G be a subgroup, and V be an H-module. The induced representation from H to G obtained from V is defined to be Ind_H^G V := k[G] ⊗_{k[H]} V. The coinduced representation from H to G obtained from V is Coind_H^G V := Hom_H(k[G], V).

This definition is much cleaner than the one that does not use the tensor product. An important fact about induction is Frobenius reciprocity, which is a direct corollary of the following general fact:

Proposition 3.1.8 (Adjoint pairs). Given a right R-module M, an (R, S)-bimodule N and a right S-module L, there is a unique abelian group isomorphism

η : Hom_S(M ⊗_R N, L) −→ Hom_R(M, Hom_S(N, L)),

given by

η(f)(m)(n) = f(m ⊗ n), for f ∈ Hom_S(M ⊗_R N, L), m ∈ M, n ∈ N.

Pictorially: f is a map M ⊗_R N → L, while η(f) sends each m ∈ M to the map n ↦ f(m ⊗ n) in Hom_S(N, L).

As a result,

Corollary 3.1.9. If H ≤ G is a subgroup, W is a G-module, V is an H-module, then

Hom_H(V, Res_H^G W) ≅ Hom_G(Ind_H^G V, W).

Proof. We only need to show that Res_H^G W ≅ Hom_G(k[G], W) as H-modules. In general, if M is an R-module, then Hom_R(R, M) −→ M, f ↦ f(1), is an isomorphism. So Hom_G(k[G], W) ≅ W as vector spaces. The action of H on k[G] makes Hom_G(k[G], W) an H-module.

3.2 Symmetric products

In general m ⊗ m′ ≠ m′ ⊗ m in M ⊗_R M. But we can make them equal by taking a quotient.

Definition 3.2.1. Let R be commutative, and let M be an R-module. The symmetric power Sym^m M is the quotient of the tensor power M^{⊗m} by the submodule generated by elements of the form

m1 ⊗ ··· ⊗ m_i ⊗ m_{i+1} ⊗ ··· ⊗ m_m − m1 ⊗ ··· ⊗ m_{i+1} ⊗ m_i ⊗ ··· ⊗ m_m.

So in Sym^m M the tensor factors commute. We write m1 ··· m_m instead of m1 ⊗ ··· ⊗ m_m. There is also a universal property for symmetric powers. Many properties of the symmetric powers can be proved using it.

Theorem 3.2.2 (Universal Property). Let M, N be R-modules. Then symmetric multilinear maps M^m → N are the same as R-module homomorphisms Sym^m M → N.

Proposition 3.2.3. Let V be a vector space over k with basis {e1, ..., en}; then {e_{i1} ··· e_{im} | 1 ≤ i1 ≤ i2 ≤ ··· ≤ im ≤ n} is a basis of Sym^m V. As a result, dim_k Sym^m V = (n+m−1 choose m). We also have an isomorphism

Sym^m V −→ k[x1, ..., xn]_m, e_i ↦ x_i,

where k[x1, ..., xn]_m is the degree m part of the polynomial ring.
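A quick enumeration confirms the basis count in Proposition 3.2.3; in Python, itertools.combinations_with_replacement produces exactly the weakly increasing index tuples:

```python
from itertools import combinations_with_replacement
from math import comb

n, m = 4, 3   # dim V = 4, symmetric power Sym^3
# Weakly increasing tuples 1 <= i_1 <= ... <= i_m <= n index the monomial basis.
basis = list(combinations_with_replacement(range(1, n + 1), m))
assert len(basis) == comb(n + m - 1, m)   # dim Sym^3 of a 4-dim space = 20
```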

There is another way to construct symmetric tensors. For any m1 ⊗ ··· ⊗ m_n, simply sum up all of its permutations. Then we get Σ_{σ ∈ S_n} m_{σ(1)} ⊗ ··· ⊗ m_{σ(n)}, which is clearly invariant under permutations. As one can expect:

Proposition 3.2.4. If M ≅ R^l is a free module, then we have an injection

Sym^n M ↪ M^{⊗n}, m1 ··· m_n ↦ Σ_{σ ∈ S_n} m_{σ(1)} ⊗ ··· ⊗ m_{σ(n)}.

In particular this is true for vector spaces. If n! is invertible in R, then the following map is also injective:

Sym^n M ↪ M^{⊗n}, m1 ··· m_n ↦ (1/n!) Σ_{σ ∈ S_n} m_{σ(1)} ⊗ ··· ⊗ m_{σ(n)}.

Moreover, the composition of this latter map with the projection M^{⊗n} ↠ Sym^n M is the identity on Sym^n M.

We can also form the symmetric algebra Sym M := R ⊕ M ⊕ Sym^2 M ⊕ ···. This is the "largest" commutative algebra containing M. An insightful fact is that Sym(V) ≅ k[V*] canonically for a vector space V.

Recall that if A ∈ End_k(V), then A acts on V^{⊗m} by v1 ⊗ ··· ⊗ v_m ↦ A v1 ⊗ ··· ⊗ A v_m. So this action passes down to the quotient Sym^m V. For the same reason, if V is a representation of G, then so is Sym^m V.

Example 3.2.5. GL_2(C) acts on C^2 naturally. So GL_2(C) acts on Sym^2(C^2) ≅ k[x, y]_2. Explicitly,

ρ(g)(a x^2 + b xy + c y^2) = a (gx)^2 + b (gx)(gy) + c (gy)^2.

Under the basis {x^2, xy, y^2}, we can write down the matrix for ρ(g): for g = ( g11 g12 ; g21 g22 ),

ρ(g) = ( g11^2       g11 g12             g12^2
         2 g11 g21   g11 g22 + g12 g21   2 g12 g22
         g21^2       g21 g22             g22^2 ),

which is a polynomial representation.

Lastly, we have the following decomposition.

Proposition 3.2.6. If V, W are finite dimensional vector spaces, then we have a canonical isomorphism

Sym^n(V ⊕ W) −−→ ⊕_{a=0}^n Sym^a V ⊗ Sym^{n−a} W,

v1 ··· v_a w1 ··· w_{n−a} ↤ (v1 ··· v_a) ⊗ (w1 ··· w_{n−a}).

Proof. The map is defined in a coordinate-free way, so it is canonical. To see that it is an isomorphism, note that if {e_i}, {f_j} are bases of V, W respectively, then we have a correspondence of bases

e_{i1} ··· e_{ia} f_{j1} ··· f_{j_{n−a}} ↤ (e_{i1} ··· e_{ia}) ⊗ (f_{j1} ··· f_{j_{n−a}})

for all 0 ≤ a ≤ n, 1 ≤ i1 ≤ ··· ≤ ia ≤ dim V, 1 ≤ j1 ≤ ··· ≤ j_{n−a} ≤ dim W.

Corollary 3.2.7. If V, W are also representations of G, then the above decomposition of vector spaces is also a decomposition of representations.

Proof. We only need that the isomorphism is G-equivariant. Indeed, on basis elements, applying g to e_{i1} ··· e_{ia} f_{j1} ··· f_{j_{n−a}} gives (g e_{i1}) ··· (g e_{ia})(g f_{j1}) ··· (g f_{j_{n−a}}), which corresponds to (g e_{i1} ··· g e_{ia}) ⊗ (g f_{j1} ··· g f_{j_{n−a}}), i.e., to g acting on (e_{i1} ··· e_{ia}) ⊗ (f_{j1} ··· f_{j_{n−a}}). So the square formed by the isomorphism and the two G-actions commutes.

3.3 Exterior products

In the exterior power we require instead that tensors anti-commute, i.e., we want m ⊗ m′ = −m′ ⊗ m.

Definition 3.3.1. Let R be commutative, M an R-module. The exterior power ∧^m M is the quotient of the tensor power M^{⊗m} by the submodule generated by elements of the form

m1 ⊗ · · · ⊗ mi ⊗ mi+1 ⊗ · · · ⊗ mm + m1 ⊗ · · · ⊗ mi+1 ⊗ mi ⊗ · · · ⊗ mm.

We shall write m1 ∧ · · · ∧ mm instead of m1 ⊗ · · · ⊗ mm. There are parallel results for exterior products. Theorem 3.3.2 (Universal Property). Let M,N be R-modules. Then alter- nating multilinear maps M m → N are the same as R-module homomorphisms Vm M → N.

Proposition 3.3.3. Let V be a vector space over k with basis {e1, ..., en}; then {e_{i1} ∧ ··· ∧ e_{im} | 1 ≤ i1 < i2 < ··· < im ≤ n} is a basis of ∧^m V. As a result, dim_k ∧^m V = (n choose m) for m ≤ n, and 0 for m > n.
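Again the count can be checked by enumeration; strictly increasing index tuples are exactly what itertools.combinations produces:

```python
from itertools import combinations
from math import comb

n = 5   # dim V
for m in range(n + 2):   # includes m > n, where the exterior power vanishes
    basis = list(combinations(range(1, n + 1), m))
    assert len(basis) == comb(n, m)   # comb(n, m) = 0 when m > n
```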

Proposition 3.3.4. If M ≅ R^l is a free module, then we have an injection

∧^n M ↪ M^{⊗n}, m1 ∧ ··· ∧ m_n ↦ Σ_{σ ∈ S_n} ε(σ) m_{σ(1)} ⊗ ··· ⊗ m_{σ(n)}.

In particular this is true for vector spaces. If n! is invertible in R, then the following map is also injective:

∧^n M ↪ M^{⊗n}, m1 ∧ ··· ∧ m_n ↦ (1/n!) Σ_{σ ∈ S_n} ε(σ) m_{σ(1)} ⊗ ··· ⊗ m_{σ(n)}.

Moreover, the composition of the latter map with the projection M^{⊗n} ↠ ∧^n M is the identity on ∧^n M.

Proposition 3.3.5. If V, W are finite dimensional vector spaces, then we have a canonical isomorphism

∧^n(V ⊕ W) −−→ ⊕_{a=0}^n ∧^a V ⊗ ∧^{n−a} W,

v1 ∧ ··· ∧ v_a ∧ w1 ∧ ··· ∧ w_{n−a} ↤ (v1 ∧ ··· ∧ v_a) ⊗ (w1 ∧ ··· ∧ w_{n−a}).

If V, W are representations of G, then the above is a decomposition of representations.

3.4 Determinant

It is worth mentioning that the exterior powers can be used to develop a coordinate-free version of linear algebra. The determinant is one instance. Let V be an n-dimensional vector space over k. Notice that dim_k ∧^n V = (n choose n) = 1, so for any linear map A ∈ End_k(V), its exterior power ∧^n A : ∧^n V → ∧^n V is canonically identified with a number. If we choose a basis and write down the formula for ∧^n A, we will see that this number is exactly det A.

4 Character theory

Talk by Sam Swain, notes by Peter McDonald

Recall that a representation of a finite group G is a homomorphism ρ : G → GL(V) where V is a vector space. We often say that V is a representation.

Definition 4.0.1. A subrepresentation of a representation V of a group G is a subspace W of V such that ρ(g)(W) ⊂ W for all g ∈ G.

Definition 4.0.2. A representation is called irreducible if its only subrepresentations are 0 and itself.

4.1 Decomposition of Representations

Let W ⊂ V be a subrepresentation of a representation V of a finite group G. As a vector space, W has a complement W′ such that V = W ⊕ W′. Let π : V → W be the projection map. Then we can define

π′ := (1/|G|) Σ_{g ∈ G} ρ(g) π ρ(g^{-1}).

If we take x ∈ W then ρ(g^{-1})(x) ∈ W because W is a subrepresentation. So

π ρ(g^{-1}) x = ρ(g^{-1}) x,

because π is the identity on W. Then

ρ(g) π ρ(g^{-1}) x = ρ(g) ρ(g^{-1}) x = x.

Then

π′(x) = (1/|G|) Σ_{g ∈ G} x = x,

so π′ is a projection onto W. We now show that ρ(h) π′ = π′ ρ(h) for all h ∈ G. Note that

ρ(h) π′ ρ(h^{-1}) = (1/|G|) Σ_{g ∈ G} ρ(h) ρ(g) π ρ(g^{-1}) ρ(h^{-1})
                 = (1/|G|) Σ_{g ∈ G} ρ(hg) π ρ((hg)^{-1})
                 = π′,

since hg runs over G as g does.

Take W⊥ = ker π′ and x ∈ W⊥. Then π′ ρ(g)(x) = ρ(g) π′(x) = 0, so ρ(g)(x) ∈ W⊥. Then W⊥ is a subrepresentation, and V = W ⊕ W⊥. We should note that this decomposition does not depend on our choice of π.

If we keep breaking up V into these direct summands, we can eventually write V = V1^{⊕m1} ⊕ ··· ⊕ Vk^{⊕mk}. Note, this is not always the case when our group is infinite. Consider the following example:

Example 4.1.1. Consider ρ : R → GL(C^2) given by

ρ(x) = ( 1 x
         0 1 ).

The subspace spanned by (1, 0)^T is invariant under ρ and so is a subrepresentation. However, its complement would be a line spanned by a vector with nonzero second coordinate, such as (0, 1)^T, and no such line is preserved: for instance, ρ(x)(0, 1)^T = (x, 1)^T.
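The averaging construction can be tested on a small example: take G = S_3 permuting the coordinates of C^3, W the line spanned by (1, 1, 1), and start from a projection π onto W that is not G-equivariant. The averaged map comes out equivariant, as the argument above predicts. (A Python sketch with exact rational arithmetic; all helper names are ours.)

```python
from fractions import Fraction
from itertools import permutations

def perm_act(sigma, v):
    """sigma.(v_0,...,v_{n-1}) permutes coordinates: (sigma.v)[sigma(i)] = v[i]."""
    out = [0] * len(v)
    for i, j in enumerate(sigma):
        out[j] = v[i]
    return out

def pi(v):
    """A projection of C^3 onto W = span{(1,1,1)} that is NOT G-equivariant."""
    return [v[0], v[0], v[0]]

def pi_avg(v, n=3):
    """The averaged projection pi' = (1/|G|) sum_sigma sigma o pi o sigma^{-1}."""
    perms = list(permutations(range(n)))
    total = [Fraction(0)] * n
    for s in perms:
        s_inv = tuple(sorted(range(n), key=lambda i: s[i]))   # inverse permutation
        w = perm_act(s, pi(perm_act(s_inv, v)))
        total = [t + Fraction(x) for t, x in zip(total, w)]
    return [t / len(perms) for t in total]

v = [Fraction(1), Fraction(2), Fraction(6)]
# pi' projects onto W (here: onto the mean) and commutes with every sigma.
assert pi_avg(v) == [Fraction(3)] * 3
sigma = (1, 2, 0)
assert pi_avg(perm_act(sigma, v)) == perm_act(sigma, pi_avg(v))
```

For this choice of π the average works out to v ↦ (mean of v) · (1, 1, 1), visibly symmetric in the coordinates.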

Proposition 4.1.2 (Schur's Lemma). Let V and W be irreducible representations. Let ϕ : V → W be an intertwining operator, i.e., ϕ ρ1(g) = ρ2(g) ϕ for all g ∈ G. Then

1. ϕ is either the 0 map or an isomorphism.

2. If ϕ is an isomorphism (with V = W over an algebraically closed field), then ϕ = λ Id.

Proof. 1. We claim that ker ϕ and Im ϕ are subrepresentations of V and W respectively. Take x ∈ ker ϕ. Then ϕ(ρ1(g)(x)) = ρ2(g)(ϕ(x)) = 0, which means that ρ1(g)(x) ∈ ker ϕ and therefore ker ϕ is a subrepresentation. Similarly, w ∈ Im ϕ implies there is some v ∈ V such that ϕ(v) = w. Then ρ2(g)(w) = ϕ(ρ1(g)(v)), which means that ρ2(g)(w) ∈ Im ϕ and Im ϕ is a subrepresentation. By irreducibility, the only possibilities are ker ϕ = 0, Im ϕ = W, or ker ϕ = V, Im ϕ = 0.

2. Assume V = k^n where k is an algebraically closed field. Then ϕ has an eigenvalue λ in k. Then det(ϕ − λ Id) = 0 means ker(ϕ − λ Id) is nontrivial. Since ϕ − λ Id is also an intertwining operator, by (1), ker(ϕ − λ Id) = V and ϕ = λ Id.

If we have an isomorphism between representations ϕ : V → W with

V = V1^{⊕m1} ⊕ ··· ⊕ Vk^{⊕mk},

W = W1^{⊕n1} ⊕ ··· ⊕ Wj^{⊕nj},

then we can restrict ϕ to each irreducible component.

4.2 Character Theory

Another way of extracting irreducible representations from other representations is through the use of characters.

Definition 4.2.1. If G is a finite group and ρ : G → GL(V) is a representation, the character of V, denoted χ_V, is a map χ_V : G → k given by χ_V(g) = tr(ρ(g)).

4.2.1 Properties of χ

1. χV (e) = dim(V ).

2. χ_V(g^{-1}) = \overline{χ_V(g)}, the complex conjugate.

3. χ_V(x g x^{-1}) = χ_V(g)

Proof. 1. Since ρ is a representation, we have ρ(e) = Id. Hence χ_V(e) = tr(Id) = dim(V).

2. We have

χ_V(g) = tr(ρ(g)) = Σ_i λ_i,

where the λ_i are the eigenvalues of ρ(g). We claim that χ_V(g^{-1}) = Σ_i \overline{λ_i}. To see this, note that ρ(g)^{|G|} = ρ(g^{|G|}) = ρ(e) = Id, and the eigenvalues of A^k are the eigenvalues of A raised to the k-th power, so each λ_i is a root of unity. Then |λ_i| = 1 and λ_i^{-1} = \overline{λ_i}. Hence

χ_V(g^{-1}) = tr(ρ(g^{-1})) = Σ_i λ_i^{-1} = Σ_i \overline{λ_i} = \overline{χ_V(g)}.

3. tr(AB) = tr(BA), so tr(ABA^{-1}) = tr(B).

4.2.2 More Properties of χ

If V and W are two representations with respective characters χV and χW , then

1. χV ⊕W = χV + χW

2. χV ⊗W = χV · χW

Proof. 1. χ_{V⊕W}(g) is just the trace of the block matrix

( ρ_V(g)    0
    0    ρ_W(g) ).

The trace of this matrix is tr(ρ_V(g)) + tr(ρ_W(g)) = χ_V(g) + χ_W(g).

2. The matrix of (ρ_V ⊗ ρ_W)(g) is the Kronecker product of ρ_V(g) and ρ_W(g), and we saw in Section 3.1.1 that tr(A ⊗ B) = tr(A) tr(B).

5 Representations of the symmetric group

Talk by Faith Pearson, notes by Faith Pearson

(Additional proofs and examples are covered in these notes that were not covered in the talk.)

Our goal in this section is to classify all of the irreducible representations of the symmetric group. Given any finite group G, the number of irreducible representations of G is equal to the number of conjugacy classes of G. Though this is true for every finite group G, it is not always possible to create an explicit bijection between its irreducible representations and conjugacy classes. However, we will see that each irreducible representation of the symmetric group has a one-to-one correspondence with a combinatorial object called a Young diagram that corresponds to a given conjugacy class. In order to study the irreducible representations of the symmetric group, we must first establish its conjugacy classes.

Definition 5.0.1. Suppose σ ∈ S_n is a product of disjoint cycles σ1 σ2 ··· σk and let λ_i be the length of σ_i. We may assume that λ_i ≥ λ_{i+1} for all i since disjoint cycles commute. Then we define λ(σ) = (λ1, λ2, ..., λk) to be the cycle type of σ.

Definition 5.0.2. For a given σ ∈ S_n, the conjugacy class C_σ is the set of all elements σ′ = τστ^{-1} for some τ ∈ S_n.

We can now determine the conjugacy classes of the symmetric group Sn. We start by noticing that any conjugate of a k-cycle is also a k-cycle.

Lemma 5.0.3. Let α, τ ∈ S_n, where α is a k-cycle (a1, a2, ..., ak). Then τατ^{-1} = (τ(a1), τ(a2), ..., τ(ak)).

Proof. Consider τ(a_i) with 1 ≤ i ≤ k. Then we have τ^{-1}τ(a_i) = a_i, and α(a_i) = a_{i+1} (indices taken mod k). Then τατ^{-1}(τ(a_i)) = τ(a_{i+1}). Now take any j ∈ {1, 2, ..., n} where j ≠ a_i for any i. Then α(j) = j because j is not in the k-cycle defining α, and so τατ^{-1}(τ(j)) = τ(j). Hence τατ^{-1} fixes any number which is not of the form τ(a_i) for some i, and thus

τατ^{-1} = (τ(a1), τ(a2), ..., τ(ak)).

Lemma 5.0.4. The conjugate of a product of cycles is the product of the conjugates of the cycles. That is, for α_i disjoint, we have

τ α1 α2 ··· α_ℓ τ^{-1} = (τ α1 τ^{-1})(τ α2 τ^{-1}) ··· (τ α_ℓ τ^{-1}).

Finally, we can describe the conjugacy classes of the symmetric group.

Proposition 5.0.5. The conjugacy classes of S_n are determined by cycle type. That is, if σ has cycle type (λ1, λ2, ..., λℓ), and if ρ is any other element of S_n with cycle type (λ1, λ2, ..., λℓ), then σ is conjugate to ρ.

Proof. Suppose that σ is a product of disjoint cycles σ = α1 α2 ··· αℓ, with cycle type (λ1, λ2, ..., λℓ), where α_i is a λ_i-cycle. By Lemma 5.0.4,

τστ^{-1} = (τ α1 τ^{-1})(τ α2 τ^{-1}) ··· (τ αℓ τ^{-1}),

and by Lemma 5.0.3, τ α_i τ^{-1} is a λ_i-cycle. For any i ≠ j, α_i and α_j are disjoint. Then since τ is a bijection, τ α_i τ^{-1} and τ α_j τ^{-1} must also be disjoint. Thus the conjugate τστ^{-1} is a product of disjoint λ_i-cycles, and has cycle type (λ1, λ2, ..., λℓ).

Conversely, suppose σ and ρ both have cycle type (λ1, λ2, ..., λℓ). Let σ = α1 α2 ··· αℓ and ρ = β1 β2 ··· βℓ, where α_i and β_i are λ_i-cycles. Then α_i = (a_{i,1}, a_{i,2}, ..., a_{i,λ_i}) and β_i = (b_{i,1}, b_{i,2}, ..., b_{i,λ_i}). We can choose τ to be the map such that τ(a_{i,k}) = b_{i,k}. Because the α_i are mutually disjoint and similarly the β_i are mutually disjoint, and σ and ρ are permutations on {1, 2, ..., n}, τ is well defined and also a permutation. Thus by Lemma 5.0.3, τστ^{-1} = ρ.
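The first direction of this proof is easy to exercise in code. The sketch below (Python; a permutation is a list with perm[i] = σ(i) on {0, ..., n−1}, and all helpers are ours) checks that conjugation preserves cycle type:

```python
def cycle_type(perm):
    """Cycle type of perm, as a weakly decreasing tuple of cycle lengths."""
    n = len(perm)
    seen, lengths = [False] * n, []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:           # walk the cycle through i
                seen[j] = True
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return [p[q[i]] for i in range(len(p))]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

sigma = [1, 2, 0, 4, 3]   # (0 1 2)(3 4): cycle type (3, 2)
tau = [4, 0, 3, 1, 2]     # an arbitrary permutation
conj = compose(compose(tau, sigma), inverse(tau))   # tau sigma tau^{-1}
assert cycle_type(sigma) == (3, 2)
assert cycle_type(conj) == (3, 2)   # conjugation preserves cycle type
```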

Thus, we see that any two permutations in S_n are conjugate if and only if they have the same cycle type. Recall the definition of the group algebra, so we can define a few new representations.

Definition 5.0.6. The group algebra CS_n is the set of all formal sums of the form

Σ_{σ ∈ S_n} z_σ e_σ, for z_σ ∈ C,

where the e_σ are basis elements indexed by σ ∈ S_n. Multiplication is defined on basis elements by

(Σ_i z_i e_{σ_i})(Σ_j y_j e_{σ_j}) = Σ_{i,j} z_i y_j e_{σ_i σ_j},

which we can expand linearly.

Definition 5.0.7. We may also think of CS_n as a complex vector space with basis {e_σ}. Then we can define a representation ρ : S_n → GL(CS_n) ≅ GL(C^{n!}), which is called the regular representation.

Let us define two more representations that will come up again in a later example.

Definition 5.0.8. For S_n, the alternating representation (or sign representation) is C equipped with the action

σ · v = v if σ is an even permutation, and σ · v = −v if σ is an odd permutation,

or equivalently, ρ(σ) = sgn(σ) I for every σ ∈ S_n. Remark that any S_n where n ≥ 2 has the alternating representation, and since this representation is one dimensional, it is irreducible.

Definition 5.0.9. For any n, let {e1, e2, . . . , en} be the standard basis for C^n, and define the action of Sn on C^n by

σ(a1e1 + a2e2 + ··· + anen) = a1eσ(1) + a2eσ(2) + ··· + aneσ(n).

This is a permutation representation of Sn. Remark that the one-dimensional subspace of C^n spanned by e1 + e2 + ··· + en is invariant under the action of Sn, and so its orthogonal complement V = {(x1, x2, . . . , xn) | x1 + x2 + ··· + xn = 0} is also invariant, and therefore a subrepresentation. We call V the standard representation of Sn.
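This invariance is quick to verify numerically; the following sketch (numpy assumed, helper names mine) checks both the fixed line and the sum-zero complement for S4:

```python
import numpy as np
from itertools import permutations

n = 4

def perm_matrix(sigma):
    """Matrix of the permutation representation: M @ e_i = e_{sigma(i)} (0-indexed)."""
    M = np.zeros((n, n))
    for i, si in enumerate(sigma):
        M[si, i] = 1.0
    return M

ones = np.ones(n)                      # spans the invariant line e_1 + ... + e_n
v = np.array([1.0, -2.0, 3.0, -2.0])   # a vector with coordinate sum 0

for sigma in permutations(range(n)):
    M = perm_matrix(sigma)
    assert np.allclose(M @ ones, ones)      # the line C(1, ..., 1) is fixed
    assert abs((M @ v).sum()) < 1e-12       # the sum-zero subspace V is preserved
```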

Definition 5.0.10. A partition of a positive integer n = λ1 + λ2 + ··· + λk is a tuple λ = (λ1, λ2, . . . , λk) of positive integers such that λi ≥ λi+1 for every 1 ≤ i ≤ k − 1.

Recall that in Sn, the number of irreducible representations is equal to the number of conjugacy classes, and each conjugacy class is a cycle-type equivalence class. Now, there is also a bijective correspondence between the set of cycle types and the ways n can be written as a sum of positive integers. For example, S4 has the following five cycle types:

Cycle Notation    Alternate Form    Corresponding Sum
e                 (1)(2)(3)(4)      1+1+1+1
(1 2)             (1 2)(3)(4)       2+1+1
(1 2 3)           (1 2 3)(4)        3+1
(1 2)(3 4)        (1 2)(3 4)        2+2
(1 2 3 4)         (1 2 3 4)         4
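The correspondence with partitions can be enumerated directly; a small recursive sketch (the helper name `partitions` is mine):

```python
def partitions(n, largest=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# the five partitions of 4 match the five cycle types of S4 listed above
assert list(partitions(4)) == [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
```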

A fundamental tool for studying the representations of Sn is the Young diagram.

Definition 5.0.11. A Young diagram is a graphical representation of a partition λ = (λ1, λ2, . . . , λk) as an array of boxes. This array is constructed by drawing a row of λ1 boxes, then beneath it a row of λ2 boxes, and so on until the last row contains λk boxes; each row is as long as or shorter than the one above it.

Example 5.0.12. The Young diagrams corresponding to the partitions of n = 4 are:

− − − −,    − − − / −,    − − / − −,    − − / − / −,    − / − / − / −

(one diagram per partition (4), (3,1), (2,2), (2,1,1), (1,1,1,1), with "/" separating the rows of each diagram).

So we see each conjugacy class of Sn corresponds to a Young diagram, and now we can describe a method that will generate all of the irreducible representations of Sn. (For a proof of why this method works, see Section 4.2 of Fulton and Harris.) To continue, we must define a Young tableau.

Definition 5.0.13. A Young tableau is a Young diagram whose boxes are labeled in any way with each of the numbers 1, . . . , n. For our purposes, we will fill in our Young diagrams in the natural way, starting with 1 in the upper left and increasing by 1 as we move from left to right and top to bottom.

Example 5.0.14. For n = 9, the partition λ = (3, 2, 2, 1, 1) induces the Young diagram,

and its associated Young tableau would be

1 2 3
4 5
6 7
8
9

Given any Young tableau, we can define

Pλ = {σ ∈ Sn | σ preserves each row}, and

Qλ = {σ ∈ Sn | σ preserves each column}.

Example 5.0.15. If we were working in S6, and wanted to find the irreducible representations corresponding to the partition λ = (3, 2, 1), our Young tableau would be

1 2 3
4 5
6

Then we would have

Pλ = {e, (12), (23), (13), (123), (132), (45), (12)(45), (23)(45), (13)(45),

(123)(45), (132)(45)},

Qλ = {e, (14), (16), (46), (146), (164), (25), (14)(25), (16)(25), (46)(25), (146)(25), (164)(25)}.

We can use Pλ and Qλ to define the elements aλ, bλ in the group algebra CSn:

aλ := Σ_{σ∈Pλ} e_σ,    bλ := Σ_{σ∈Qλ} sgn(σ) e_σ.

Finally, we can define a key tool in finding the irreducible representations of Sn. Definition 5.0.16. The Young symmetrizer is

cλ := aλ · bλ.
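The symmetrizer can be computed by brute force in the group algebra. The sketch below (helper names mine; tableaux filled in the natural way; permutations written as 0-indexed tuples with e_p · e_q = e_{p∘q}) reproduces c_(2,1) for S3:

```python
from itertools import permutations

def rows_and_cols(shape):
    """Row sets and column sets of the natural tableau of a partition,
    boxes labeled 0, 1, 2, ... left to right, top to bottom."""
    rows, k = [], 0
    for length in shape:
        rows.append(set(range(k, k + length)))
        k += length
    cols = [set() for _ in range(shape[0])]
    for r in rows:
        for j, box in enumerate(sorted(r)):
            cols[j].add(box)
    return rows, cols

def preserves(sigma, blocks):
    return all({sigma[i] for i in b} == b for b in blocks)

def sign(sigma):
    s, seen = 1, set()
    for i in range(len(sigma)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = sigma[j]
                length += 1
            s *= (-1) ** (length - 1)
    return s

def young_symmetrizer(shape):
    """c_shape = a_shape * b_shape as a dict {permutation tuple: coefficient}."""
    n = sum(shape)
    perms = list(permutations(range(n)))
    rows, cols = rows_and_cols(shape)
    P = [s for s in perms if preserves(s, rows)]
    Q = [s for s in perms if preserves(s, cols)]
    c = {}
    for p in P:
        for q in Q:
            pq = tuple(p[q[i]] for i in range(n))   # e_p * e_q = e_{p∘q}
            c[pq] = c.get(pq, 0) + sign(q)
    return {s: z for s, z in c.items() if z != 0}

# λ = (2, 1) in S3: c = e − e_(13) + e_(12) − e_(132), as in Example 5.0.20 below
c = young_symmetrizer((2, 1))
assert c == {(0, 1, 2): 1, (2, 1, 0): -1, (1, 0, 2): 1, (2, 0, 1): -1}
```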

Theorem 5.0.17. Given Sn, let λ be a partition of n, and let Vλ = CSn · cλ be the left ideal of CSn generated by the Young symmetrizer cλ. Then

1. Vλ is an irreducible representation of Sn.

2. If λ and µ are distinct partitions of n, then Vλ and Vµ are not isomorphic.

3. The Vλ account for all of the irreducible representations of Sn. We will follow Sean McAfee’s proof. The full proof can be found in Sections 4.1 and 4.2 in Fulton and Harris.

Lemma 5.0.18. For all x ∈ CSn, cλ · x · cλ is a scalar multiple of cλ.

Lemma 5.0.19. If λ ≠ µ, then cλ · CSn · cµ = 0.

Assuming these two lemmas, we can now prove Theorem 5.0.17.

Proof. 1. Take Vλ = CSn · cλ for a given Young symmetrizer cλ. Then by Lemma 5.0.18, we have that cλVλ ⊆ Ccλ.

Let W be a nonzero subrepresentation of Vλ. We want to show W = Vλ. First, we claim that cλVλ and cλW are both nonzero. Suppose that cλVλ = 0. Then VλVλ = CSn(cλVλ) = 0.

Considering CSn · cλ as a subrepresentation of CSn, there exists a projection π : CSn → CSn · cλ that commutes with the action of Sn. This projection is right multiplication on the group algebra CSn by the element x := π(1). Since x = 1 · x, this x must be in Vλ. Then by the definition of projection,

x = x² ∈ VλVλ = 0,

and thus x must equal zero, which is a contradiction since the nonzero cλ itself is in CSn · cλ. Therefore we must have cλVλ ≠ 0. The same argument shows cλW ≠ 0: since W ⊆ Vλ = CSn · cλ, we have WW ⊆ CSn(cλW), and the projection of CSn onto the nonzero subrepresentation W is right multiplication by some nonzero x = x² ∈ W, so cλW = 0 would force WW = 0 and hence x = 0, a contradiction.

Since we have that W is a subspace of Vλ, cλVλ ⊆ Ccλ, and that cλW 6= 0, we must have cλW = Ccλ. Therefore,

Vλ = CSn · cλ = CSn(Ccλ) = CSn(cλW) ⊆ W,

where the inclusion on the right follows from the fact that W is a subrepresentation of Vλ, that is, W is invariant under the action of CSn. Thus, we have Vλ = W, which completes the proof that Vλ is an irreducible representation of Sn.

2. Let λ and µ be distinct partitions of n, and let Vλ and Vµ be their corresponding representations. By part 1, we have that cλVλ = Ccλ ≠ 0, and by Lemma 5.0.19, we have that

cλVµ = cλCSncµ = 0.

Thus, Vλ and Vµ cannot be isomorphic: if λ and µ are distinct partitions of n, then Vλ and Vµ are not isomorphic.

3. We know that each partition λ of n is in one-to-one correspondence with a distinct conjugacy class of Sn. We know from part 2 that the Vλ determined by such partitions are pairwise non-isomorphic. Then since the number of conjugacy classes of a finite group is equal to the number of irreducible representations, we have accounted for all of the irreducible representations of Sn.

Therefore we have proved that the subspace CSn · cλ is an irreducible representation of Sn, and distinct partitions λ correspond to distinct irreducible representations. Furthermore, the CSn · cλ account for all irreducible representations of Sn.

Example 5.0.20. Let us use what we have learned to find all of the irreducible representations of S3. There are three Young diagrams, corresponding to the three partitions λ = (3), µ = (2, 1), ω = (1, 1, 1), which are respectively

1 2 3    1 2    1
         3      2
                3

In the first Young tableau, since all of the numbers are in the same row, any permutation will preserve the row. However, the only permutation that will preserve the columns is the identity. Thus, Pλ = S3 and Qλ = {e}. So we have

aλ = e + e(12) + e(23) + e(13) + e(123) + e(132),
bλ = e,
cλ = (e + e(12) + e(23) + e(13) + e(123) + e(132)) · e
   = e + e(12) + e(23) + e(13) + e(123) + e(132).        (1)

Therefore CS3 · cλ = C · cλ = ⟨cλ⟩ is the associated irreducible representation, since multiplying cλ by any basis element of CS3 simply rearranges its addends without changing the sum. Remark that the subspace generated by cλ is one dimensional, and because σ · rcλ = rcλ for any σ ∈ S3 and any r ∈ C, the action of every σ leaves every vector in ⟨cλ⟩ fixed. Therefore ⟨cλ⟩ is the trivial representation.

In the second Young diagram, we have Pµ = {e, (12)} and Qµ = {e, (13)}. Then we obtain

aµ = e + e(12),
bµ = e − e(13),
cµ = (e + e(12))(e − e(13))
   = e − e(13) + e(12) − e(132).        (2)

The associated irreducible representation is CS3 · cµ. To find out what this subspace is, we multiply cµ by the basis elements of CS3, and obtain

e · cµ = e − e(13) + e(12) − e(132),
e(12) · cµ = e(12) − e(132) + e − e(13),
e(13) · cµ = e(13) − e + e(123) − e(23),
e(23) · cµ = e(23) − e(123) + e(132) − e(12),        (3)
e(123) · cµ = e(123) − e(23) + e(13) − e,
e(132) · cµ = e(132) − e(12) + e(23) − e(123).

The matrix whose columns are these six products, written in the basis (e, e(12), e(13), e(23), e(123), e(132)), is

 1   1  −1   0  −1   0
 1   1   0  −1   0  −1
−1  −1   1   0   1   0
 0   0  −1   1  −1   1
 0   0   1  −1   1  −1
−1  −1   0   1   0   1

Since this matrix has rank 2 and its column space is spanned by the first and third columns, CS3 · cµ is the subspace

⟨e − e(13) + e(12) − e(132),  e(13) − e + e(123) − e(23)⟩.

This must be the standard representation, since it is the only two-dimensional irreducible representation of S3.

In the third Young diagram, any permutation in S3 will preserve the column; however, only the identity will fix the rows. So we have Pω = {e} and Qω = S3. Then we obtain

aω = e,
bω = e − e(12) − e(23) − e(13) + e(123) + e(132),
cω = e(e − e(12) − e(23) − e(13) + e(123) + e(132))
   = e − e(12) − e(23) − e(13) + e(123) + e(132).        (4)

Once again, CS3 · cω = C · cω = ⟨cω⟩ is the associated irreducible representation, since multiplying by any basis element of CS3 rearranges the addends of cω, possibly flipping all of their signs. This subspace is also one dimensional, and for any σ ∈ S3 and any r ∈ C, we have σ · rcω = rcω if σ is even and σ · rcω = −rcω if σ is odd. Thus, ⟨cω⟩ is the alternating representation.

Thus, the three irreducible representations of S3 are the trivial representation, the standard representation, and the alternating representation. This method works for any symmetric group; however, the Young symmetrizers grow in size very quickly.
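All three computations can be checked by linear algebra in the regular representation: the span of {e_σ · cλ} has dimension 1, 2, 1 for λ = (3), (2,1), (1,1,1), and 1² + 2² + 1² = 6 = |S3|. A sketch (numpy assumed; conventions and names mine):

```python
import numpy as np
from itertools import permutations

S3 = list(permutations(range(3)))
index = {s: i for i, s in enumerate(S3)}

def compose(s, t):
    return tuple(s[t[i]] for i in range(3))

def sign(s):
    inv = sum(1 for i in range(3) for j in range(i) if s[j] > s[i])
    return -1 if inv % 2 else 1

# the three Young symmetrizers of S3, as dictionaries {permutation: coefficient}
e, t12, t13, c132 = (0, 1, 2), (1, 0, 2), (2, 1, 0), (2, 0, 1)
symmetrizers = {
    (3,):      {s: 1 for s in S3},                  # a = sum over S3, b = e
    (2, 1):    {e: 1, t13: -1, t12: 1, c132: -1},   # (e + e(12))(e - e(13))
    (1, 1, 1): {s: sign(s) for s in S3},            # a = e, b = signed sum
}

dims = {}
for lam, c in symmetrizers.items():
    # row i holds the coordinates of e_sigma_i * c in the basis {e_tau}
    M = np.zeros((6, 6))
    for i, s in enumerate(S3):
        for t, z in c.items():
            M[i, index[compose(s, t)]] += z
    dims[lam] = np.linalg.matrix_rank(M)

assert dims == {(3,): 1, (2, 1): 2, (1, 1, 1): 1}
assert sum(d * d for d in dims.values()) == 6    # sum of squared dims = |S3|
```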

6 Double Centralizer Theorem
Talk by Sabine Lang, notes by Peter McDonald

In this section, let k be an algebraically closed field, A a finite dimensional k-algebra, and ρ : A → End(V) a representation of A.

Definition 6.0.1. The radical of A is the set

Rad(A) := {x ∈ A : xM = 0 for every irreducible left A-module M}.

If Rad(A) = 0, we say that A is semisimple.

Theorem 6.0.2. Up to isomorphism, A has finitely many irreducible representations Vi, with dim(Vi) < ∞ for all i, and A/Rad(A) ≅ ⊕_{i=1}^n End(Vi).

Proof. Let Vi be an irreducible representation of A. If 0 ≠ v ∈ Vi, then 0 ≠ Av ⊆ Vi, hence Av = Vi. So dim(Vi) = dim(Av) < ∞ because A is finite dimensional. By linear algebra, the map

⊕_{i∈I} ρi : A → ⊕_{i∈I} End(Vi)

is surjective. Then

|I| = number of irreducible representations ≤ Σ_{i∈I} dim End(Vi) ≤ dim(A) < ∞.

Finally,

Rad(A) = ker(⊕_{i∈I} ρi),

so by the first isomorphism theorem

A/Rad(A) ≅ ⊕_{i∈I} End(Vi).

Corollary 6.0.3.

dim(A) − dim(Rad(A)) = Σ_{i∈I} dim(Vi)² = Σ_{i∈I} dim End(Vi).

Theorem 6.0.4. If A is finite dimensional, the following are equivalent:

1. A is semisimple.

2. Σ_{i=1}^n dim(Vi)² = dim(A).

3. A ≅ ⊕_i Mat_{d_i}(k).

4. Any finite dimensional representation of A is completely reducible.

5. A is completely reducible as a representation of A.

Definition 6.0.5. Let E be a finite dimensional k-vector space. Consider End(E) and a subalgebra A ⊆ End(E). Then the centralizer of A in End(E) is the set

EndA(E) := {f : E → E | f is linear and a morphism of A-representations}.

Theorem 6.0.6 (Double Centralizer Theorem). Let E be a finite dimensional k-vector space and A ⊆ End(E) a subalgebra such that A is semisimple. Let B = EndA(E). Then

1. EndB(E) = A.

2. B is semisimple.

3. E ≅ ⊕_{i=1}^n (Vi ⊗ Wi) as a representation of A ⊗ B, where the Vi are irreducible representations of A and the Wi are irreducible representations of B. This gives a bijection between the irreducible representations of A and the irreducible representations of B.

Proof. Let V1, . . . , Vn be the irreducible representations of A. Then, because A is semisimple,

A ≅ ⊕_{i=1}^n End(Vi).

We can also decompose E using these representations of A:

E ≅ ⊕_{i=1}^n (Vi ⊗ HomA(Vi, E)).

Let Wi = HomA(Vi, E).

Then

B = EndA(E)
  = HomA(E, E)
  ≅ HomA(⊕_{i=1}^n (Vi ⊗ Wi), E)
  = ⊕_{i=1}^n HomA(Vi ⊗ Wi, E)
  = ⊕_{i=1}^n HomA(Wi ⊗ Vi, E)
  ≅ ⊕_{i=1}^n Hom(Wi, HomA(Vi, E))
  = ⊕_{i=1}^n Hom(Wi, Wi)
  = ⊕_{i=1}^n End(Wi).

We now show that the Wi are irreducible B-modules. Let f, f′ ∈ Wi = HomA(Vi, E). We know Vi is irreducible as an A-module, so given 0 ≠ v ∈ Vi we have Vi = Av, which means that f and f′ are determined by f(v) and f′(v) respectively. Then Af(v) ⊆ E is an A-invariant subspace and there is an invariant complement W, so E = Af(v) ⊕ W. Let T : E → E be defined by

af(v) ↦ af′(v),    w ↦ w for w ∈ W.

Then T ◦ f = f′. T is an A-homomorphism, so T ∈ EndA(E) = B. Hence Wi is an irreducible B-module. Because B ≅ ⊕_{i=1}^n End(Wi), B is semisimple.

We are now ready to show EndB(E) = A. As B-modules,

E ≅ ⊕_{i=1}^n (Wi ⊗ HomB(Wi, E)).

We know the Wi = HomA(Vi, E) are irreducible B-modules. Comparing the two decompositions of E,

E ≅ ⊕_{i=1}^n (Wi ⊗ HomB(Wi, E)) ≅ ⊕_{i=1}^n (Vi ⊗ Wi),

we can deduce that Vi ≅ HomB(Wi, E). (Note: this can be seen in several different ways; one of them is to use the decomposition of E as an A-module, together with the fact that each irreducible representation of A is one of V1, . . . , Vn.) Hence,

E ≅ ⊕_{i=1}^n (Vi ⊗ Wi)

is a decomposition of E as an A ⊗ B-module. Then

EndB(E) = HomB(E, E)
  = HomB(⊕_{i=1}^n (Vi ⊗ Wi), E)
  ≅ ⊕_{i=1}^n Hom(Vi, HomB(Wi, E))
  ≅ ⊕_{i=1}^n End(Vi) ≅ A.

7 Schur-Weyl duality
Talk by Chengyu Du, notes by Sam Swain

Recall the Double Centralizer Theorem from last week:

Theorem 7.0.1. Let E be a finite-dimensional vector space and A ⊆ End(E) a semisimple subalgebra, and define B := EndA(E). Then

1. A = EndB(E).

2. B is semisimple.

3. E ≅ ⊕_{i=1}^r (Vi ⊗_C Wi), where the Vi are irreducible A-modules, the Wi are irreducible B-modules, and Wi = HomA(Vi, E).

We now consider the special case where E = V⊗d with dim(V) = n and

A = CSd = ⊕_{|λ|=d} CSd · cλ.

Theorem 7.0.2 (Schur-Weyl duality). Let A = CSd = ⊕_{|λ|=d} CSd · cλ and E = V⊗d. Then

E = V⊗d ≅ ⊕_{|λ|=d} (Vλ ⊗_C SλV),

where Vλ = CSd · cλ and SλV = Im(cλ : V⊗d → V⊗d).

Proof. A is semisimple, as it is a direct sum of irreducible A-modules. We now define an A-action on V⊗d. For σ ∈ Sd, define

σ(v1 ⊗ v2 ⊗ ··· ⊗ vd) := vσ⁻¹(1) ⊗ ··· ⊗ vσ⁻¹(d),

which we extend linearly to V⊗d. Since dim(V⊗d) = n^d < ∞, we can apply the Double Centralizer Theorem.

In this case B = EndA(V⊗d). Then

End_C(V⊗d) ≅ (V⊗d)* ⊗_C V⊗d ≅ (V*)⊗d ⊗_C V⊗d ≅ (V* ⊗_C V)⊗d ≅ (End(V))⊗d.

Then B consists of the elements of End(V⊗d) that commute with the Sd-action. A general element of End(V⊗d) ≅ (End(V))⊗d is a linear combination of elements of the form

φ = φ1 ⊗ φ2 ⊗ ··· ⊗ φd,    φi ∈ End(V).

In fact, it turns out that B is spanned by the elements of the form φ ⊗ φ ⊗ ··· ⊗ φ with φ ∈ End(V) (this is not trivial, but was not covered in the talk). Then, letting Vλ = CSd · cλ, we have

V⊗d ≅ ⊕_{|λ|=d} (Vλ ⊗_C HomA(Vλ, V⊗d)) ≅ ⊕_{|λ|=d} (Vλ ⊗_C SλV)

due to the fact that

HomA(Vλ, V⊗d) ≅ V⊗d ⊗_A Vλ* ≅ V⊗d ⊗_A Vλ ≅ V⊗d ⊗_A A·cλ ≅ V⊗d · cλ = Im(cλ).

Note that the transition from the third-to-last to the second-to-last step is not trivial. We should also note that Fulton and Harris prove this using the fact that we can think of ⊕_{|λ|=d} (SλV)^{⊕mλ} as an End(V)-module, where mλ = dim Vλ. However, this loses the A-module structure.

Example 7.0.3. Let V = span{e1, e2}, so n = 2 and d = 3. The images of the three Young symmetrizers of S3 in V⊗3 are

Im c(3) ≅ Sym³V,
Im c(2,1) ≅ ?,
Im c(1,1,1) ≅ Λ³V.

Note that Λ³V = 0 because dim(V) = 2 < 3. Now

Im c(3) = Sym³V = span{ e1⊗e1⊗e1,  e2⊗e2⊗e2,  e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1,  e2⊗e2⊗e1 + e2⊗e1⊗e2 + e1⊗e2⊗e2 },

so dim Sym³V = 4. Now dim V⊗3 = 8 and

V⊗3 ≅ ⊕_{|λ|=3} (SλV)^{⊕mλ} = (Sym³V)^{⊕1} ⊕ (?)^{⊕2} ⊕ 0.

From Faith’s talk, we know dim(V(2,1)) = 2, and after lots of computation we can get

Imc(2,1) = span{e1 ⊗ e1 ⊗ e2 − e2 ⊗ e1 ⊗ e1, e2 ⊗ e2 ⊗ e1 − e1 ⊗ e2 ⊗ e2}
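The dimension count 8 = 4 + 2·2 + 0 can be verified numerically by applying each Young symmetrizer to (C²)⊗3 and computing ranks; a sketch (numpy assumed; the symmetrizer coefficients come from the S3 example in Section 5, the rest is my own setup):

```python
import numpy as np
from itertools import permutations

n, d = 2, 3
N = n ** d      # dim V^{⊗3} = 8
S3 = list(permutations(range(d)))

def sign(s):
    inv = sum(1 for i in range(d) for j in range(i) if s[j] > s[i])
    return -1 if inv % 2 else 1

def op(coeffs):
    """Matrix on (C^n)^{⊗d} ≅ C^{n^d} of the group algebra element sum coeffs[σ]·σ,
    where σ acts by permuting tensor axes."""
    M = np.zeros((N, N))
    for k in range(N):
        basis = np.zeros(N)
        basis[k] = 1.0
        T = basis.reshape((n,) * d)
        out = sum(z * np.transpose(T, s) for s, z in coeffs.items())
        M[:, k] = out.reshape(N)
    return M

e, t12, t13, c132 = (0, 1, 2), (1, 0, 2), (2, 1, 0), (2, 0, 1)
c3 = {s: 1 for s in S3}                    # c_(3): symmetrizer
c21 = {e: 1, t13: -1, t12: 1, c132: -1}    # c_(2,1), from the S3 example
c111 = {s: sign(s) for s in S3}            # c_(1,1,1): antisymmetrizer

ranks = {lam: np.linalg.matrix_rank(op(c))
         for lam, c in [((3,), c3), ((2, 1), c21), ((1, 1, 1), c111)]}
assert ranks == {(3,): 4, (2, 1): 2, (1, 1, 1): 0}   # 4 + 2·2 + 0 = 8
```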

8 Determinantal rings and applications to commutative algebra
Talk by Jenny Kenkel, notes by Jenny Kenkel

My objects of interest are the polynomial ring over a field in mn variables, thought of as the entries of an m × n matrix, and the ideal generated by minors:

R = F[x11, . . . , x1n, . . . , xm1, . . . , xmn],    Ir+1 = (size r + 1 minors of the matrix (xij)).

Note: R/Ir+1 is obtained by setting all size r + 1 minors to zero, so it corresponds to matrices of rank at most r. The example I consider the most is

R = F[u, v, w, x, y, z], the entries of the 2 × 3 matrix (u v w / x y z), with
I2 = (∆1 = vz − wy, ∆2 = wx − uz, ∆3 = uy − vx).

Properties of R/I: in general, this ring is nice but not too nice.

• I is prime (nice)

• u∆1 + v∆2 + w∆3 = 0 and x∆1 + y∆2 + z∆3 = 0 (not too nice) • dim(R/I) = 4 as a ring, that is, we can find a chain of prime ideals with four containments (this ring is infinite dimensional as a vector space)

(∆1, ∆2, ∆3) ⊊ (∆1, ∆2, ∆3, v − x) ⊊ (∆1, ∆2, ∆3, v − x, w − y) ⊊ (∆1, ∆2, ∆3, v − x, w − y, u − z) ⊊ (u, v, w, x, y, z)

8.1 Determinantal Rings Are Cohen-Macaulay

The following section is a description of why determinantal rings are interesting to algebraists, and an example of the way commutative algebraists often think of rings.

Local Ring Fact: Define the radical of J, denoted √J, to be

√J = {r | rⁿ ∈ J for some n ≥ 1}.

Then one can always find some ideal generated by dim(S) many elements whose radical is the maximal ideal. Those dim(S) elements are referred to as a system of parameters. In the case of

R = F[u, v, w, x, y, z],    I2 = (∆1 = vz − wy, ∆2 = wx − uz, ∆3 = uy − vx),

I claim that √(u, v − x, w − y, z) = (u, v, w, x, y, z), the maximal (homogeneous) ideal of R/I. Note: the ring R has infinitely many maximal ideals, but only one maximal homogeneous ideal. For many purposes, then, it can be considered a local ring.

Proof. Note that √(u, v − x, w − y, z) ⊆ (u, v, w, x, y, z), so it suffices to show that (u, v, w, x, y, z) ⊆ √(u, v − x, w − y, z).

For purposes of this proof, let n = (u, v − x, w − y, z). Certainly, u, z ∈ n. As u ∈ n, we have that uy ∈ n. Recall that uy − vx = 0 ∈ n, so vx ∈ n. Now,

v(v − x) = v² − vx ∈ n, so v² ∈ n,    and similarly    x(v − x) = vx − x² ∈ n, so x² ∈ n.

In a symmetric argument, since z ∈ n, vz ∈ n and so wy ∈ n. Thus,

w(w − y) = w² − wy ∈ n, so w² ∈ n,    and    y(w − y) = wy − y² ∈ n, so y² ∈ n.

Thus, we have shown that u, v², w², x², y², and z are in n, and so u, v, w, x, y, and z are in √n.

Definition 8.1.1. Regular Sequence

A regular sequence in a ring S is a sequence of elements, x1, . . . , xn such that

• x1 is not a zero divisor in S and x1S 6= S

• x2 is not a zero divisor in S/(x1) and (x1, x2) 6= S

• xi is not a zero divisor in S/(x1, . . . , xi−1) and (x1, . . . , xi) ≠ S.

A very neat property of the ring R/I is that the system of parameters we discussed above is in fact a regular sequence!

Sketch of proof that the system of parameters is a regular sequence: Since I is a prime ideal, R/I is a domain, so there are no zero divisors; in particular, u is not a zero divisor in R/I. Now consider (R/I)/(u). We are setting u = 0, so

R/(I, u) ≅ F[v, w, x, y, z]/(vz − wy, wx, vx).

Certainly w, v, and x are zero divisors in the above ring. But the element v − x is not. One can convince oneself of this fact by multiplying v − x by any variable v, w, x, y, or z and checking that the product is nonzero in the quotient. Now consider (R/I)/(u, v − x). We are setting v − x = 0, or in other words, v = x. So

R/(I, u, v − x) ≅ F[v, w, y, z]/(vz − wy, v², wv).

Certainly w and v are zero divisors in the above ring, but again, we can convince ourselves that w − y is not a zero divisor. Setting w = y,

R/(I, u, v − x, w − y) ≅ F[v, w, z]/(vz − w², v², wv).

Finally,

R/(I, u, v − x, w − y, z) ≅ F[v, w]/(w², v², wv).

Since every non-unit is a zero divisor in the above ring, not only do we have a regular sequence, but we have a maximal regular sequence.

Definition 8.1.2. Cohen-Macaulay. If some (equivalently, every) system of parameters is a regular sequence, then the ring is called Cohen-Macaulay.

Thus, the ring R/I is Cohen-Macaulay (nice!). It is not, however, Gorenstein, a definition beyond the scope of this talk (not too nice!).
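The radical claim proved in this subsection can be double-checked with a Gröbner basis computation; a sympy sketch (not part of the original notes):

```python
from sympy import symbols, groebner, reduced

u, v, w, x, y, z = symbols('u v w x y z')
gens = [u, v - x, w - y, z,                  # the system of parameters
        v*z - w*y, w*x - u*z, u*y - v*x]     # the 2x2 minors D1, D2, D3

G = groebner(gens, u, v, w, x, y, z, order='lex')

# each variable squared lies in (u, v - x, w - y, z) + I, so in R/I the
# radical of the parameter ideal is the maximal homogeneous ideal
for t in (u, v, w, x, y, z):
    _, remainder = reduced(t**2, list(G.exprs), u, v, w, x, y, z, order='lex')
    assert remainder == 0
```

Since `G` is a Gröbner basis, a zero remainder certifies ideal membership, matching the hand computation above.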

8.2 Relation to Representation Theory

Notice that

{2 × 3 matrices with entries in C} = Hom(C³, C²) = C³* ⊗ C²,

where a matrix acts on an element of C³ by multiplication on the left. Define the GL(3) × GL(2) action on C³* ⊗ C² in the following way. Let φ ∈ Hom(C³, C²). Then

(g1, g2) · φ := g2 ∘ φ ∘ g1⁻¹.

After fixing a basis, let Sym(V) denote the symmetric algebra on a basis of V. Then

Sym(Hom(C³, C²)) = Sym({2 × 3 matrices with entries in C}) = C[u, v, w, x, y, z] = R.

Then we can think of R/I as a subrepresentation of R (that is, all matrices of rank 1 or less), or as a quotient representation of R: if M is some matrix that has rank 1 or less, then the action of GL(3) × GL(2) will take this matrix to some other element of rank 1 or less.

Let E be the vector space C³ and F be the vector space C², and let e1, e2, e3 be a basis for E and f1, f2 a basis for F. Then Sym¹(E* ⊗ F) ≅ E* ⊗ F has basis

e1* ⊗ f1    e2* ⊗ f1    e3* ⊗ f1
e1* ⊗ f2    e2* ⊗ f2    e3* ⊗ f2,

and the degree 1 polynomials in R = C[u, v, w, x, y, z] have, as a vector space basis,

u    v    w
x    y    z.

We might expect symmetric algebras to play nicely with tensor products, so that Sym²(E* ⊗ F) might be the same as Sym²(E*) ⊗ Sym²(F). However, in Sym²(E* ⊗ F),

(e1* ⊗ f1)(e2* ⊗ f2) ≠ (e1* ⊗ f2)(e2* ⊗ f1),

but in Sym²(E*) ⊗ Sym²(F) the analogous element for both of the above is

e1*e2* ⊗ f1f2.

In other words, in Sym²(E* ⊗ F) we have uy ≠ vx, but Sym²(E*) ⊗ Sym²(F) behaves just like the degree 2 part of R/I!

Suppose we want to understand the polynomials of degree m in C[u, v, w, x, y, z], that is, the graded piece Sym^m(E* ⊗ F). One way to do so is to understand its decomposition into symmetric products of E* and symmetric products of F. The Cauchy Formula tells us how to do that.

Cauchy Formula. Over a field of characteristic 0, for vector spaces E and F,

Sym^m(E ⊗ F) = ⊕_{|λ|=m} SλE ⊗ SλF,

where Sλ(V) is the Schur functor applied to the vector space V, that is, Sλ(V) = Im(cλ|_{V⊗m}), where cλ is the Young symmetrizer.

9 Decomposing tensor products of Weyl modules
Talk by Peter McDonald, notes by Peter McDonald

Throughout this section, let λ = (λ1, . . . , λk) be a partition of d, let µ = (µ1, . . . , µk) be a partition of m, and let ν = (ν1, . . . , νk) be a partition of d + m. Recall that given λ, we can construct a Young tableau and define the following sets

Pλ = {σ ∈ Sd : σ preserves the rows of the Young tableau}

Qλ = {σ ∈ Sd : σ preserves the columns of the Young tableau}

Letting

aλ = Σ_{σ∈Pλ} e_σ,    bλ = Σ_{σ∈Qλ} sgn(σ) e_σ,

we define the Young symmetrizer

cλ = aλ · bλ.

Considering the action of cλ on V⊗d, we get a Weyl module, which we denote

SλV = cλ(V⊗d),

and which we can use as a building block of our representations. Then Schur-Weyl duality gives us that

V⊗d ≅ ⊕_{|λ|=d} (SλV)^{⊕mλ},

where mλ is the dimension of Vλ, the irreducible representation of Sd corresponding to λ.

Now that we have these Weyl modules, we would like to understand how the tensor product of two Weyl modules behaves. Intuitively, the tensor product of two Weyl modules SλV and SµV can be decomposed into components of V⊗(d+m), but we want to know exactly which components their product corresponds to. In fact, given λ a partition of d and µ a partition of m, we have the isomorphism

SλV ⊗ SµV ≅ ⊕_{|ν|=d+m} Nλµν SνV,

where the Nλµν are numbers determined by the Littlewood-Richardson rule. While we will not prove this formula, we will investigate the Littlewood-Richardson rule and look at examples in the cases µ = (m) and µ = (1, . . . , 1).

Definition 9.0.1. Given an endomorphism g of V, we have an induced endomorphism g′ of SλV. Let χ_{SλV}(g) denote the trace of g′. This is a symmetric polynomial of degree d in k variables, each representing an eigenvalue of g. This polynomial in the indeterminates x1, . . . , xk is known as the Schur polynomial and is denoted Sλ.
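Concretely, Schur polynomials can be computed from the classical bialternant formula s_λ = det(x_i^{λ_j + k − j}) / det(x_i^{k − j}); a sympy sketch (the formula is standard, but the helper name is mine and not from the notes):

```python
from sympy import symbols, Matrix, cancel, expand

def schur(lam, xs):
    """Schur polynomial s_lam in the variables xs, via the bialternant formula."""
    k = len(xs)
    lam = tuple(lam) + (0,) * (k - len(lam))   # pad with zero parts
    num = Matrix(k, k, lambda i, j: xs[i] ** (lam[j] + k - 1 - j)).det()
    den = Matrix(k, k, lambda i, j: xs[i] ** (k - 1 - j)).det()   # Vandermonde
    return expand(cancel(num / den))

x1, x2, x3 = symbols('x1 x2 x3')
s21 = schur((2, 1), (x1, x2, x3))

# s_(2,1)(1, 1, 1) = dim S_(2,1)(C^3) = 8
assert s21.subs({x1: 1, x2: 1, x3: 1}) == 8
# s_(1,1,1) is the elementary symmetric polynomial x1·x2·x3
assert schur((1, 1, 1), (x1, x2, x3)) == x1 * x2 * x3
```

Evaluating at x1 = ··· = xk = 1 recovers the dimension of the Weyl module, since the trace of the identity is computed.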

Remark 9.0.2. {Sλ}_{|λ|=d} is a basis for the symmetric polynomials of degree d.

Given that Schur polynomials form a basis for the symmetric polynomials, we would like a systematic way to express the product of two Schur polynomials in terms of the basis in the appropriate degree. It turns out that

Sλ · Sµ = Σ_ν Nλµν Sν,

so we will need to understand how to calculate these Nλµν.

Proposition 9.0.3 (Pieri's Formula). Sλ · S(m) = Σ_ν Sν, where ν ranges over the partitions of d + m whose Young diagrams are obtained from λ's by adding m boxes, no two in the same column, i.e., all ν = (ν1, . . . , νk) with

ν1 ≥ λ1 ≥ ν2 ≥ λ2 ≥ ··· ≥ νk ≥ λk ≥ 0.

Example 9.0.4. S(2,1) · S(2) = S(4,1) + S(3,2) + S(3,1,1) + S(2,2,1).

Proof. The corresponding Young diagrams are, where the x's denote the two boxes added:

− − x x    − − x    − − x    − −
−          − x      −        − x
                    x        x
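Pieri's rule is easy to enumerate directly from the interlacing condition; a sketch (the helper name `pieri` is mine):

```python
def pieri(lam, m):
    """Partitions ν of |lam| + m obtained from lam by adding m boxes, no two in
    the same column: ν_1 ≥ λ_1 ≥ ν_2 ≥ λ_2 ≥ ... ≥ 0 (Proposition 9.0.3)."""
    lam = list(lam) + [0]                    # allow one new row at the bottom
    results = []

    def extend(i, nu, remaining):
        if i == len(lam):
            if remaining == 0:
                results.append(tuple(p for p in nu if p > 0))
            return
        upper = lam[i - 1] if i > 0 else lam[0] + remaining
        for nu_i in range(lam[i], min(upper, lam[i] + remaining) + 1):
            extend(i + 1, nu + [nu_i], remaining - (nu_i - lam[i]))

    extend(0, [], m)
    return results

# Example 9.0.4: S_(2,1) * S_(2)
assert sorted(pieri((2, 1), 2)) == [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1)]
```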

While this can be combined with the determinantal formula to find the product of any two Schur polynomials, there is an easier way.

Definition 9.0.5. Given λ = (λ1, . . . , λk) a partition of d and µ = (µ1, . . . , µk) a partition of m, a µ-expansion of the Young diagram for λ is obtained by the following:

1. Add µ1 boxes to λ’s Young diagram according to Pieri’s formula, marking the boxes with a 1

2. Add µ2 boxes to the above Young diagram according to Pieri’s formula, marking the boxes with a 2

3. Continue this process for all remaining µi.

We say that a µ-expansion is strict if, when reading the diagram from right to left and top to bottom, at any point in the reading the integer p appears at least as many times as the integer p + 1, for 1 ≤ p ≤ k − 1.

Proposition 9.0.6 (Littlewood-Richardson Rule). Given λ a partition of d, µ a partition of m, and ν a partition of d + m, Nλµν is the number of ways the Young diagram for λ can be expanded to the Young diagram for ν by a strict µ-expansion.

Recall our formula for the product of two Schur polynomials:

Sλ · Sµ = Σ_ν Nλµν Sν.

We compute an example.

Example 9.0.7. S(2,1) · S(2,1) = S(4,2) + S(4,1,1) + S(3,3) + 2S(3,2,1) + S(3,1,1,1) + S(2,2,2) + S(2,2,1,1).

Proof. The strict (2, 1)-expansions of (2, 1) are listed below, with "/" separating the rows of each diagram:

− − 1 1 / − 2,    − − 1 1 / − / 2,    − − 1 / − 1 2,    − − 1 / − 1 / 2,

− − 1 / − 2 / 1,    − − 1 / − / 1 / 2,    − − / − 1 / 1 2,    − − / − 1 / 1 / 2.
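This expansion can be confirmed symbolically: every ν appearing has at most four rows, so the identity already holds for Schur polynomials in four variables. A sympy sketch using the bialternant formula (not part of the original notes):

```python
from sympy import symbols, Matrix, cancel, expand

xs = symbols('x1:5')   # four variables suffice: all ν below have ≤ 4 rows

def schur(lam):
    k = len(xs)
    lam = tuple(lam) + (0,) * (k - len(lam))
    num = Matrix(k, k, lambda i, j: xs[i] ** (lam[j] + k - 1 - j)).det()
    den = Matrix(k, k, lambda i, j: xs[i] ** (k - 1 - j)).det()
    return cancel(num / den)

lhs = expand(schur((2, 1)) * schur((2, 1)))
rhs = expand(schur((4, 2)) + schur((4, 1, 1)) + schur((3, 3)) + 2 * schur((3, 2, 1))
             + schur((3, 1, 1, 1)) + schur((2, 2, 2)) + schur((2, 2, 1, 1)))
assert expand(lhs - rhs) == 0
```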

The fact that these Littlewood-Richardson coefficients come from the multiplication of Schur polynomials is no mistake. Because Schur polynomials correspond to the characters of the Weyl modules, the character of the tensor product of two Weyl modules is the product of their characters. Now that we understand where the coefficients Nλµν come from, we can compute a few decompositions of the tensor products of Weyl modules.

Example 9.0.8.

Sλ ⊗ S(m) = Sλ ⊗ Sym^m V ≅ ⊕_ν Nλµν Sν,

where ν ranges over all partitions of d + m whose Young diagrams are obtained from λ's by adding m boxes, no two in the same column.

Example 9.0.9.

Sλ ⊗ S(1,...,1) = Sλ ⊗ Λ^m V ≅ ⊕_ν Nλµν Sν,

where ν ranges over all partitions of d + m whose Young diagrams are obtained from λ's by adding m boxes, no two in the same row.
