<<

MATH 263A NOTES: ALGEBRAIC AND SYMMETRIC FUNCTIONS

AARON LANDESMAN

CONTENTS 1. Introduction 4 2. 10/26/16 5 2.1. Logistics 5 2.2. Overview 5 2.3. Down to Math 5 2.4. Partitions 6 2.5. Partial Orders 7 2.6. Monomial Symmetric Functions 7 2.7. Elementary symmetric functions 8 2.8. Course Outline 8 3. 9/28/16 9 3.1. Elementary symmetric functions eλ 9 3.2. Homogeneous symmetric functions, hλ 10 3.3. Power sums pλ 12 4. 9/30/16 14 5. 10/3/16 20 5.1. Expected Number of Fixed Points 20 5.2. Random Matrix Groups 22 5.3. Schur Functions 23 6. 10/5/16 24 6.1. Review 24 6.2. Schur Basis 24 6.3. Hall Inner product 27 7. 10/7/16 29 7.1. Basic properties of the Cauchy product 29 7.2. Discussion of the Cauchy product and related formulas 30 8. 10/10/16 32 8.1. Finishing up last class 32 8.2. Skew-Schur Functions 33 8.3. Jacobi-Trudi 36 9. 10/12/16 37 1 2 AARON LANDESMAN

9.1. Eigenvalues of unitary matrices 37 9.2. Application 39 9.3. Strong Szego limit theorem 40 10. 10/14/16 41 10.1. Background on Tableau 43 10.2. KOSKA Numbers 44 11. 10/17/16 45 11.1. Relations of skew-Schur functions to other fields 45 11.2. Characters of the symmetric 46 12. 10/19/16 49 13. 10/21/16 55 13.1. Review 55 13.2. Completing the example from last class 55 13.3. Completing the example; back to the Schur functions 57 14. 10/24/16 58 15. 10/26/16 61 16. 10/28/16 66 16.1. Plane partitions, RSK, and MacMahon’s generating 66 17. 10/31/16 71 17.1. Announcements and Review 71 18. 11/2/16 74 18.1. Overview 74 18.2. P-partitions 74 18.3. The order 75 19. 11/4/16 78 19.1. Review 78 19.2. A possibly non-politically correct example 78 19.3. More on descent 79 19.4. Shuffling Cards 79 20. 11/7/16 81 21. 11/9/16 83 21.1. Algebra of the Ai 83 21.2. Quasi-Symmetric Functions 84 22. 11/11/16 86 22.1. Application to theory 87 22.2. Connection of quasi-symmetric functions to card shuffling 88 22.3. Applications 89 23. 11/14/16 90 23.1. Combinatorial Hopf Algebras 90 23.2. Examples of Hopf Algebras 91 MATH 263A NOTES: AND SYMMETRIC FUNCTIONS3

23.3. What did Hopf do? 92 24. 11/16/16 93 24.1. Definition of combinatorial Hopf algebras 93 24.2. Examples 94 25. 11/18/16 95 25.1. What do Hopf algebras have to do with card shuffling? 95 25.2. Lyndon words 97 25.3. The standard bracketing of Lyndon words 97 26. 11/28/16 98 27. 11/30/16 100 28. 12/2/16 102 28.1. Macdonald 102 28.2. Proof of Theorem 28.3 103 29. 12/5/16 106 29.1. Review 106 29.2. Defining D 107 29.3. Examples of Macdonald polynomials 107 29.4. Understanding the operator D in an alternate manner 108 30. 12/7/16 110 30.1. School 1 110 30.2. School 2 111 30.3. Persi’s next project 112 4 AARON LANDESMAN

1. INTRODUCTION Persi Diaconis taught a course (Math 263A) on Algebraic Combi- natorics and Symmetric Function Theory at Stanford in Fall 2016. These are my “live-TEXed“ notes from the course. Conventions are as follows: Each lecture gets its own “chapter,” and appears in the table of contents with the date. Of course, these notes are not a faithful representation of the course, either in the mathematics itself or in the quotes, jokes, and philo- sophical musings; in particular, the errors are my fault. By the same token, any virtues in the notes are to be credited to the lecturer and not the scribe. Thanks to Lisa Sauermann for taking notes on October 17, when I missed class. 1 Please email suggestions to aaronlandesman@ gmail.com.

1This introduction has been adapted from Akhil Matthew’s introduction to his notes, with his permission. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS5

2. 10/26/16 2.1. Logistics. (1) Math 263A (2) Algebraic Combinatorics (3) Persi Diaconis (4) office hours Tuesday 2-4, 383 D (5) (No email) 2.2. Overview. This is a course in algebraic combinatorics and sym- metric function theory. We’ll talk about what we want to cover in the course. Combinatorics is pretty hard to define. It deals with things like sets Xn which are finite, permutations, partitions, graphs, trees. We might try to estimate |Xn|, functions T : Xn R or

| {x ∈ Xn : T(x) = y} |. → Here’s a slogan: Symmetric function theory “makes math” out of lots of classical combinatorics. We’ll try to cover (1) Chapter I of MacDonald’s book symmetric functions and Hall polynomials (2) More things that weren’t mentioned. . . Remark 2.1. We’ll have many digressions into “why are we studying this” and “what is it good for.” 2.3. Down to Math. Definition 2.2. For n ∈ Z, a weak composition of n is a partition of n into (a1, a2, ...) with k=1 ak = n with ak ≥ 0. Definition 2.3. Let R beP∞ a commutative ring. For example, R = Z, Q, Z[x], ... Suppose we have infinitely many variables

x1, x2, x3, ... then a homogeneous symmetric function of degree n is a formal power α f(x1, x2, ...) = f(x) = cαx α X where (1) α ranges over all weak compositions of n. (2) cα ∈ R α α1 α2 (3) x = x1 x2 ··· (4) f(x1, x2, ...) = f(xσ(1), xσ(2), ... for σ ∈ S .

∞ 6 AARON LANDESMAN

(5) Every term has the same degree.

Example 2.4. (1) f(x) = n=1 xi is a symmetric function. 2 2 2 2 (2) f(x) = x1x2 + x1x3 + ···∞+ x2x2 + x2x3 + ··· is another sym- metric function. P n Definition 2.5. Let ΛR be all symmetric functions of degree n over R. Remark 2.6. We often omit the R subscript when it is understood or clear from context. Remark 2.7. We have Λn · Λm ⊂ Λn+m. n Definition 2.8. Define ΛR := ⊕n=0ΛR. 2.4. Partitions. ∞ Definition 2.9. λ is a partition of n, written λ ` n if

λ = (λ1, λ2, ...) with λ1 ≤ λ2 ≤ · · · and

λi = n. i X Write |λ| = n, `(λ) := the number of nonzero parts of λ . Example 2.10. The partitions of 5 are {5, 41, 311, 32, 2111, 11111, 221} . We will often write

λ = 1n1(λ)2n2(λ) with ni(λ) equal to the number of parts equal to i. For example,

221 = 122. These satisfy

ini(λ) = n. i X Example 2.11. We can also draw young diagrams with dots or young tableaux with boxes. Here is the partition 13223 = 322111 ` 10. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS7

Definition 2.12. If λ is a partition of n, λ0 (the transpose) is what you obtain when flipping the diagram. 2.5. Partial Orders.

Example 2.13. We can define the partial order by λ ≤ µ if λi ≤ µi for all i. Example 2.14. One can write down a partial orders. For example, we can define a partial order on these diagrams by saying one can get from one partition to another by moving dots to adjacent rows so that at every stage one arrives at a partition. Algebraically, the partial j j order is majorization order, where λ ≤ µ if i=1 λi ≥ i=1 µi for all j. P P Fact 2.15. We have λ ≤ µ µ0 ≤ λ0. This isn’t too hard, but it’s a little bit finicky, and we’ll come back to proving it later. ⇐⇒

Example 2.16. Take λ < µ if |λ| < µ or λ1 = µ1, ... , λn = µn, λn+1 < µn+1. This is a lexicographic ordering. 2.6. Monomial Symmetric Functions. Suppose λ is some partition α λ = (λ1, λ2, ...) and mλ = α x , the sum over all distinct permuta- tions of λ. P 2 2 Example 2.17. We have m21 = i

Proof. Apply the preceding lemma.  8 AARON LANDESMAN

2.7. Elementary symmetric functions. Definition 2.20. The elementary symmetric functions

ej = xi1 xi2 ··· xij , i <···

Fact 2.21.PWe’ll seeP that {eλ} as λ ` n form a basis of Λ over Z. Lemma 2.22. We have

eλ = Mλµmµ µX`n where mλµ is the number of matrices with row sums λ and column sums µ.

Proof. Say λ = λ1 ··· λr, µ = µ1 ··· µs.  Example 2.23. [Darwin’s data (see Persi’s paper “sequential Monte Carlo methods for statistical analysis of tables”)] Look at the of all tables with the same row sums and the same column sums (this is the Mµλ) and see where Darwin’s original table fits in. The size of this set is sharp-P complete problem, and nobody knows the answer. Example 2.24. Given a bipartite graph one can represent it as a 0 1 matrix depending on whether vertex i in the first column is con- nected to vertex j in the second column. The degrees of the vertices are the row and column sums. We show that the

(1 + xiyj) = Mλµmλ(x)mµ(y). i j λ µ Y, X, You can pick off the λ, µ coefficient of the left by using a Fast Fourier transform. So, you want to get your hands on the coefficients of xλyµ in the left hand product. 2.8. Course Outline. (1) Next time, we’ll talk about the classical bases eλ, hλ, pλ. (2) We’ll discuss the Hall inner product (3) Schur functions (4) Robin Schensted Knuth correspondence (5) Theory of the . Letting Rm be the class functions of Sn we get ⊕Rm = Λ. In particular, sλ = µ χλ pµ with χ the characters of the symmetric group. (6) Random matrix theory (relates to characters of the unitary group)P MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS9

(7) Combinatorial Hopf algebras - can stick things together by combinatorics and you can also pull them apart. Whenever you can pull things apart and stick them together, you can form a Hopf algebra. It turns out that Λ is the terminal object in Hopf algebra. This ends up explaining all sorts of generat- ing functions. (8) MacDonald polynomials. There will be two course projects. (1) Everybody from now until November 2, choose 10 problems. We’ll try and combine them so we have a set of solutions. (2) At the end, there will be a list of things to do a small final pa- per on. There are lots of interesting applications to algebraic geometry and so on.

3. 9/28/16 Today we’ll talk about various bases for symmetric functions.

3.1. Elementary symmetric functions eλ. We defined the elemen- tary symmetric functions last time as

ei = xi1 xi2 ··· xir . 1≤i

ni(λ) eλ := eλ1 eλ2 ··· = ei . ∞ i=1 Y We have

i E(t) := (1 + xit) = eit , ∞ ∞ i=1 i=0 Y X with e0 = 1. Note that n n i (1 + xit) = eit . i=1 i=1 Y X Proposition 3.1. For any λ, we have

eλ0 = aµλmµ µ≤λ X 0 with the ≤ order the dominating order, with aλλ = 1. Remember λ is the transpose of λ. 10 AARON LANDESMAN

Proof. Say e 0 = e 0 . Writing λ λi α Q(xi1 xi2 ··· )(xi1 xi2 ··· ) ··· = x . α αi with α = α1α2 ··· , x = x and i1 < i2 < ··· < i 0 , j1 < j2 < i λ1 0 ··· < jλ0 . Draw a tableau of shape λ. λ is the length of the ith 2 Q i column of λ.

i1 j1 k1

i2 j2 k2

i3 k3

i4

All the values of r appearing in the Tableau have to be in the first r rows. Hence, the size of the first r rows are λ1 + ··· + λr. This implies α1 + ··· + λr ≤ λ1 + ··· + λr. Now, eλ0 is a symmetric function. Pick any monomial appearing on the right hand side. This shows that µ ≤ λ. The only nonzero mµ have µ ≤ λ. It’s also easy to see that aλλ = 1 using this interpretation as filling in tableau of shape λ (we need to have the ith row filled in completely by i’s, assuming the lower rows were filled only by lower elements). 

Corollary 3.2. The collection eλ form a basis for λZ.

Proof. Apply the previous proposition. The eλ0 form a basis, as the change of basis to mλ is upper triangular, with 1’s on the diagonal. Hence the eλ form a basis since taking the transpose is an involution.  Theorem 3.3 (Fundamental theorem of symmetric functions). We have Λ = Z[e1, ...] and these ei are algebraically independent over Z.

Proof. If the ei were dependent, then there is some polynomial in them which is 0. Writing out that polynomial would give a finite linear combination of eλ which evaluates to 0. This contradicts that eλ form a basis for ΛZ. 

3.2. Homogeneous symmetric functions, hλ. Definition 3.4. Define

hn := mλ. Xλ`n MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS11

Example 3.5. So,

h1 = m1 = e1 = xi i X 2 h2 = m1 + m11 = xi + xixj = xixj. i i

hn = xi1 xi2 ··· xin 1≤i ≤i ≤···≤i 1 X2 n and h0 = 1. Define

i −1 H(t) = hit = (1 − xit) . ∞ ∞ n=1 i=1 X Y Observe E(t)H(−t) = 1. This allows us to inductively compute the coefficients of H. So, n i (3.1) (−1) eihn−i = 0 i=0 X for n ≥ 1. Example 3.6. Take n = 2. We have

e0h2 − e1h1 + e2h0 = 0. Expanding this, we have 2 h2 = e1 − e2 2 2 2 since h0 = e0 = 1, e1 = h1. Verifying this, e1 = ( i xi) = i xi + 2 xixj, and indeed h2 = xixj. i

ei 7 hi. This is a well defined homomorphism→ by Theorem 3.3. → Remark 3.9. By (3.1), we have w2 = id. This implies that ω is an involution, so Λ = Z[h1, ...] which is equivalent to showing that hλ := hλ1 hλ2 ··· form a basis. 12 AARON LANDESMAN

We can write

hλ = Nµλmµ where the Nµλ are always non-negativeX entries. N H = h  h = 0 a < 0 Fix and let i−j 0≤i,j≤N where a if . Define  i−j  E = (−1) ei−j . 0≤i,j≤N

Example 3.10. When N = 2, we get  1 0 0 h1 1 0 h2 h1 1 We have HE = id by (3.1). Fact 3.11. If A is invertible, then any minor of A equal the comple- mentary cofactor of AT . Corollary 3.12 (Jacobi-Trudi identity). Proof. If λ, µ have length ≤ p and λ0, µ0 have length ≤ q with p + q = N + 1, consider the minor of H with row indices λi + p − i for 1 ≤ i ≤ p and column indices µi + p − i for 1 ≤ i ≤ p, then by Fact 3.11, we have     det hλi−µj−i+1 = det eλi−λj−i+1 1≤i,j≤p 1≤i,j≤q Taking µ = 0, we have that

det hλ −i+1 = det e 0 . i λi−i+1 

3.3. Power sums pλ. Definition 3.13. The rth power sum is

r pr := xi i X for i ≥ 1.

Warning 3.14. p0 is not defined! MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS13

Lemma 3.15. Define

r−1 P(t) = prt . ∞ r=1 X H0(t) E0(−t) We have P(t) = H(t) . and P(t) = E(t) . Proof. We just verify the first identity.

r−1 P(t) = prt ∞ r=1 X r r−1 xit r i X X x = i 1 − x t i i X ∂ 1 = log ∂t 1 − xit i X ∂ 1 = log ∂t 1 − xit ∂ Y = log H(t) ∂t H0(t) = . H(t)  Corollary 3.16 (Newton’s identities). We have n nhn = prhn−r. r=1 X and n r−1 nen = (−1) pren−r r=1 X Proof. This follows immediately from expanding the equalities in Lemma 3.15. 

Remark 3.17. We have hn ∈ Q[p1, ... , pn] with pn ∈ Z[h1, ... , hn], 1 2  and h2 = 2 p1 + p2 . Observe that ΛQ = Λ ×Z Q = Q[p1, p2, ...]. Defining pλ = i pλi , we obtain that pλ form a basis and pj are alge- braically independent over Q. Q 14 AARON LANDESMAN

We can also see that recalling ω as the map sending ei 7 hi, we have n ω(pn) = (−1) pn, → and |λ|−`(λ) ω(pλ) = (−1) pλ with |λ| the size of λ and `(λ) equal to the length of λ.

4. 9/30/16 Recall we have Λ the ring of symmetric functions, and the various bases

mλ, eλ, hλ, pλ. Lemma 4.1. We can write 1 hn = pλ ξλ Xλ`n and 1 en = tλ pλ ξλ Xλ`n where

ni ξλ = i ni! i Y |λ|−`(λ) tλ = (−1) . where ni is the number of i’s in λ. Proof. We first claim

r n prt H(t) = hnt = e r=1 r . ∞ n=0 P∞ X To see this, observe it suffices to show r log H(t) = prt /r which is equivalent to showing X 0 H (t) r−1 = prt , H(t) ∞ r=1 X which we proved last time. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS15

The right hand side is

r r prt prt e r=1 r = e r P∞ r Y  r mr prt = /mr! ∞ r r m =0 Y Xr 1 r mr = t pr m ∞ r r mr! n=0 i X m1,...,Xrmr=n Y P n 1 = t pλ . ∞ ξ n=0 λ X Xλ`n  Remark 4.2. Here is some motivation for why we are doing these calculations. Let Sn be the symmetric group. Consider a typical per- mutation σ ∈ Sn. Say it “looks like?” (1) For example, how many fixed points does it have? (2) How many cycles does it have? (3) What is the length of the longest cycle (4) What is the order of the cycle? (the order o(σ) is the smallest k so that σk = id.) Answers: (1) about 1 (2) about log n (3) about .62n (log n)2 (4) about e 2 . 1 Definition 4.3. Let u(σ) := n! denote the uniform distribution for the symmetric group. (1) More precisely, 1 1 # {σ ∈ Sn : fp(σ) = j} ∼ n! ej! where fp(σ) is the number of fixed points of σ. So, 1 P(fp(σ) = 0) ∼ . e This models a Poisson distribution. 16 AARON LANDESMAN

(2) We also have

1 c(σ) − log n # σ ∈ Sn : p ∼ Φ(x) n!  log n  √ x −t2/2 where Φ(x) = − e / 2tdt is the normal distribution, and c(σ) is the number of cycles. R (3) We also have ∞

(log n)2 1 log o(σ) − 2 # σ ∈ Sn : √ ∼ Φ(x). n!  (log n)3   3  These feature only depend on σ having a given cycle class. That is, they only depend on the conjugacy class of σ in Sn. Recall σ is conjugate to τ if and only if they have the same cycle type. Let ai(σ) be the number of cycles in σ of type i.

Remark 4.4. We have the following facts

k iai(σ) = n i=1 X fp(σ) = a1(σ)

c(σ) = ai(σ) i X `(σ) = max ai(σ) > 0 i o(σ) = lcmai(σ).

Lemma 4.5 (Cauchy). We have n! n! # {σ ∈ Sn : σ has cycle type a1 . . . am} = = . ai i=1 n ai! 3λ

Proof. Fix a1, ... , an. Observe that Sn acts transitively.Q The size of n! the cycle class is ai . i n ai!  Q Define the cycle indicator

n 1 a (σ) C (x , ... , x ) = x i , n 1 n n! i σ∈S i=1 Xn Y MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS17 and C0(x) = 1. We have another function, also called the cycle indi- cator

n C(t) = t Cn(x) ∞ n=0 X

Theorem 4.6 (Polya). We have i C(t) = e i=1 t xi/i P∞ Proof. Observe

i i e i=1 t xi/i = et xi /i ∞ P∞ i=1 Y a  i  i 1 = t xi/i ∞ ∞ ai! i=1 a =0 Y Xi ai n xi = t a ∞ i i a ! n=0 i X a1,a2,...X,: i ai=n Y = C(t). P  Remark 4.7. There are similar formulas for cycle factorizations of GLn(Fq), which are q-analogs of the above formulas for Sn. Definition 4.8. Fix θ ∈ (0, ) . The Poisson distribution with pa- rameter theta is ∞ e−θθj P (j) = θ j! for 0 ≤ j ≤ . Define ∞ j Mθ(x) = x Pθ(j) ∞ j=0 X xjθje−θ = ∞ j! j=1 X = e−θ+xθ. 18 AARON LANDESMAN

Definition 4.9. If P(j), 0 ≤ j < is any probability distribution, then we have moments and falling factorial moments ∞

h Mh = j P(j), ∞ j=0 X

Vh = j(j − 1) ··· (j − h + 1)P(j). ∞ j=1 X These two moments are equivalent, we can find either from the other. Differentiating k times, and taking P(j) = Pθ(j) we see

h j−h Mθ(x) = j(j − 1) ··· (j − h + 1)x Pθ(j). ∞ j=0 X h For Pθ(j) we get vh = θ . We have mh(x) = θkexθ−θ Now, setting x = 1, we see k k m (1) = Vk = θ . So, the falling factorial moments are these simple numbers. Example 4.10. Theorem 4.11. For any k = 1, 2, ..., for all n ≥ k, the kth moments of the number of fixed points of σ ∈ Sn satisfies 1 fp(σ)k = kth moment of the Poisson P (j). n! 1 σ∈S Xn Proof. Consider ai(σ) = fp(σ). We want 1 a (σ) (a σ − 1) ··· (a (σ) − k + 1) n! 1 1 1 σ∈S Xn in 1 a (σ) C (x , ... , ) = x i . n 1 n! i σ∈S Xn Y MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS19

Set x1 = x1, x2 = ··· = 1. Then, 1 C (x) = xai(σ). n n! σ∈S Xn Then,

i n tx+ t t cn(x) = e i=2 i ∞ P i X tx−t+ t = e i=1 i ∞ etx−t P = . 1 − t Differentiate k times in x and set x = 1, we get 1 tn a (σ)(A ··· (a (σ) − k − 1) n! 1 1(σ)−1 1 σ∈S X Xn tk = 1 − t = tk + tk+1 + tk+2 + ··· .

One can read off the moments from equating powers of t, since the left hand side is the formula for the moments.  Remark 4.12. If you have two measures, and all moments are equal, then the two measures are close. This is called the method of mo- ments. Since the falling factorial moments in the above theorem are equal, the moments are equal.

The method of moments implies that: Let Pn(j) = PSn {fp(σ) = j} 1 then as n , pn(j) . ej!

The point→ of∞ these calculations→ is the following. Exactly the same calculations show (1)

E(ai(σ))k = jkP1/i(x). ∞ j=0 X So, the number of transpositions has Poisson distribution with parameter 1/2. Here, the subscript k on the expectation means the falling factorial moment. 20 AARON LANDESMAN

(2) The joint distribution `

E(a1(σ)j1 a2(σ)j2 ··· a`(σ)j` ) = E1/k(xjk ) k=1 Y for n ≥ jii. Remark 4.13. TheP fact that the moments are exactly equal is called stabilization. This will lead to some interpretations in terms of char- acters.

5. 10/3/16 5.1. Expected Number of Fixed Points. Recall from last time we have 1 a (σ) C = x i . n n! i σ∈S Xn Y Then we defined

i n x t C(t) = t Cn = e i i . ∞ n=0 P∞ X This has a lot of information in it. Example 5.1. Suppose c(σ) is the number of cycles of σ. Then, n c(σ) = ai(σ). i=1 X Setting all xi = x, we have 1 C (x) = xc(σ). n n! σ X Then, 1 C(t) = (1 − t)x −x = (−t)j ∞ j j=0 X tj = x(x + 1) ··· (x + j − 1). j! j X i since i xt /i is the expansion for − log(1 − t) P MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS21

Therefore, 1 C = x(x + 1) ··· (x + n − 1) n n! x + 1 2 + x n − 1 + x = x ··· 2 3 n = E(xSn ) = Exxi ,

Here Sn denotes the sum, notY the symmetric group. Here P(xi = i−1 1 0) = i and P(xi = 1) = i , and n Sn j E(x ) = x P(Xn = j), j=0 X as in general,

E(f(Sn)) = f(j)P(Sn = j)).

Here we took f(j) = xj. X So, 1 1 AV(S ) = 1 + + ··· + ∼ log n, n 2 n n 1  1 VAR(S ) = 1 − ∼ log n n i i n=1 X c(σ) − log n P p ≤ x Φ(x).  log n  The coefficient of xj is the number of permutations→ with j cycles. These happen to be called sterling numbers of the first kind. For more information, see Shepp, Lloyd, cycles of permutations. Question 5.2. Who cares about all this stuff with fixed points? There was a game played where someone took two decks of cards up to n. People play this game and you get a dollar if the same number comes up. The question is a question of the number of fixed points. Monmort in 1708 proved the number of fixed points has a Poisson distribution, as we proved last time. Note that we may as well call the cards on the first deck 1 . . . n, so the number of matches is just the number of fixed points in a random permutation. We also have a metric d(π, σ) = # {i : π(i) 6= σ(i)} . 22 AARON LANDESMAN

See Diaconis, Goralnick, and Mulman on fixed points of permuta- tions for a classification of possible fixed points of transitive primi- tive actions of the symmetric group. Definition 5.3. The Caley distance between two permutations −1 dc(σ, π) = minimum number of transpositions needed to express πσ . I.e., this is the distance in the Caley graph where the vertices are permutations and the edges join two elements differing by a permu- tations. −1 Exercise 5.4. We have dc(σ, π) = n − c(σπ ), where c(σ) is the num- ber of cycles of σ. Remark 5.5. The above two distance measures are the only two bi- invariant distances that Persi knows of. 5.2. Random Matrix Groups.

Question 5.6. For any of groups, call it Gn, pick g ∈ Gn at random. What does it look like?

Example 5.7. Take Gn = GLn(Fq). The conjugacy classes of a ma- trix are indexed by (f, λ(f)) where f is an irreducible polynomial of degree d(f) and λ(f) a partition of |λ(f)|, with the restriction that d(f)|λ(f)| = n. Here λ is a function fromX irreducible polynomials to partitions of Sn. This is an interesting way to get your hands on the symplectic group. This is Jordan form where we’re not assuming the roots are in the field. There is more information on these for general finite groups of lie type, look at J. Fulman’s thesis: Random matrix theory over finite fields. Remark 5.8. One can do similar things for (1) On(R) (2) Un(C) (3) Sp2n(R) (4) GLn(Zp). As long as the group is compact, it make sense to have a description of conjugacy classes. If you wanted to look at this subject, you might look at Persi’s paper Patterns and Eigenvalues, in bulletin of the ams. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS23

Example 5.9. In the orthogonal group has conjugacy classes indexed by its eigenvalues. So, this is asking: pick a matrix at random, what are its eigenvalues? In fact, G doesn’t have to be compact, but we can stick take an n ap(n) up to x uniformly, and look at n = p p .

Example 5.10. For example, we canQ take ω(n) = p|n 1. This is the number of prime divisors. So, ω(12) = 2. The Erdos Katz theorem P says ω(n) − log log x P p Φ(t).  log log x  This can similarly be done for GLn(R) by→ chopping it off along com- pact subsets, say with all entries up to n, and let n . 5.3. Schur Functions. Recall we have Λ the ring of symmetric func- tions, with four bases mλ, eλ, hλ, pλ. → ∞ n We will work with n variables x1, ... , xn. If α ∈ N , say

α = (α1, ... , αn) , define α α1 αn x := x1 ··· xn . Polynomials in n variables have an action of the symmetric group by permuting variables. Consider α sgn(w)w(x ) =: aα w∈S Xn These are alternating polynomials under the action of the symmetric group. That is,

σaα = sgn σaα.

In particular, aα = 0 if xi = xj for any i 6= j. Therefore,

xi − xj ≡ 0 mod aα. Let δ = (n − 1, n − 2, ... , 1, 0) . Hence, j xi − xj = det(xi)1≤i≤n,0≤j≤n−1 i α2 > ··· > αn ≥ 0. In this case, we obtain that

α1 − 1 ≥ · · · ≥ αn − n. and we can write α = λ + δ. Then, λ+δ aα = aλ+δ = sgn ww(x ) w∈S Xn  λj+(n−j) = det xi . 1≤i,j≤n

Definition 5.11. The Schur function sλ(x1, ... , xn) is aλ+δ sλ(x1, ... , xn) := . aδ Exercise 5.12. If one adds 0’s to λ one gets the same Schur function.

6. 10/5/16 n 6.1. Review. Last time, we had x1, ... , xn and took α ∈ N . Then, we took α aα = aα(a1, ... , xn) = ε(w)w(x ), w∈S Xn where ε(w) is the sign of w. We saw that aα is divisible by aδ := i

f 7 aδf → → MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS25 is bijective. Since the aα form a basis in alternating polynomials the Schur functions form a basis for Λn. Warning 6.1. This map is not degree preserving, but it does show that the dimension of the symmetric functions in n variables of de- gree d is equal to the dimension of alternating functions of degree d + deg aδ. Remark 6.2 (Important secret about Schur functions!). The Schur functions are the characters of the unitary group. That is,

{sλ}λ`n, at most n parts are the irreducible polynomial characters of ∗ Un := {M ∈ Mn×n : MM = I} .

That is, given an irreducible representation ρλ of Un, we have

sλ(z1, ... , zn) = tr(ρλ(M)), 1 if M has eigenvalues z1, ... , zn in S . Theorem 6.3 (Jacobi Trudi identity). We have s = h  λ det λi−i+j 1≤i,j≤n , for any n ≥ `(λ) with the convention that h0 = 1 and hj = 0 for j < 0. Additionally,   sλ = det eλ0−i+j i 1≤i,j≤n for any m ≥ `(λ0). Example 6.4. Consider the matrix   hλ1 hλ1+1 hλ1+2 h h  λ2−1 λ2     ... 

hλn

For example, s(j) = hj, since we have a 1 × 1 matrix. We also get s1n = en. For example,   h3 h4 s3,1 = det = h3h1 − h4. h0 h1 (k) Proof. Let ej be the nth elementary symmetric function in x1 ··· x^k ··· xn. Introduce  n−i (k)  M = (−1) en−i 1≤i,k≤n 26 AARON LANDESMAN

n  αi  and let α = α1, ... , αn ∈ N . Let Aα = xj . Let  Hα = hαi−n+j .

Lemma 6.5. We have

Aα = HαM. Proof. We have

(k) (k) n E (t) = en t ∞ n=0 Xn = (1 + xit) i=Y1,i6=k and recall n −1 n H(t) = (1 − xit) = hnt . ∞ i=1 n=1 Y X Therefore, (k) −1 H(t)E (−t) = (1 − xit) . Now, look at the coefficient of tαn on each side. We have n n−1 (k) αi hαi−n+j(t) en−j = xn j=1 X  Now, take the determinant of both sides in the lemma. We have

aα = det Hα det M. Taking α = δ = (n − 1, ... , 1). We get

det Hδ = det(hn−i−λ+j) = 1, since it is upper triangular with diagonal 1. Therefore, det M = aδ. Our formula says aα = det Hαaδ, using that aα is the determinant of Aα by definition. Hence, aλ+δ sλ = = det (Hλ+δ) . aδ We can prove the other formula directly by this sort of manipulation.  MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS27

6.3. Hall Inner product. In order to define the hall inner product, we will need three expansions of −1 1 − xiyj , i j Y, where xi, yj are two sets of variables. We have the following identi- ties. Lemma 6.6. (1) −1 −1 1 − xiyj = zλ pλ(x)pλ(y). i j λ Y, X (2) −1 1 − xiyj = hλ(x)mλ(y) = hλ(y)mλ(x) i j λ λ Y, X X (3) −1 1 − xiyj = sλ(x)sλ(y). i j λ Y, X Proof. We prove these in order (1) Recall −1 hn = zλ pλ(x). Xλ`n We have also n −1 H(t) = t hn = (1 − xit) . n i X Y Set t = 1, and we obtain

−1 (1 − xiyj) = hn(xy) ∞ i j n=0 Y, X = zλpλ(x, y) λ X = zλpλ(x)pλ(y). λ X Since ! r r r pr = (xiyj) = xi yj . i j i X, X 28 AARON LANDESMAN

(2) We have −1 1 − xiyj = H(yj) i j j Y, Y r = hn(x)yj ∞ j n=0 Y X α = hαy α∈Nn X = hλ(x)mλ(y), λ X where the last equality uses that hα is symmetric, so it doesn’t depend on the ordering of α, only on the partition λ associ- ated to α. (3) The third part is slightly messier, but similar, and we will omit it.  Definition 6.7. We have the Hall inner product h, i on Λ so that

hhµ, mλi = δµλ.

Lemma 6.8. Say uλ, vµ are two bases of Λ. Then,

huλ, vµi = δλµ if and only if −1 1 − xiyj = uλ(x)vλ(y). i j λ Y, X We’ll fill in the proof of this lemma next time.

Lemma 6.9. With respect to the Hall inner product, pλ form an and the Schur functions sλ form an orthonormal basis.

Proof. This is immediate from Lemma 6.8 and Lemma 6.6.  Remark 6.10 (Important secret fact). We have

hf, gi = f(m)g(m)dm, ZUn where by integrating a function we mean integrating the function of the eigenvalues of the corresponding matrix. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS29

7. 10/7/16 7.1. Basic properties of the Cauchy product. Last time we saw −1 −1 (7.1) 1 − xiyj = zλ pλ(x)pλ(y) λ Y X (7.2) = mλ(x)hλ(y) λ X = sλ(x)sλ(y).(7.3) λ X

Remark 7.1. We had a fairly uninspiring manipulation to prove this, but Dan bump’s textbook has a very helpful group theoretic/representation theoretic argument for this. We defined the Cauchy inner product by

hmλ, hµi = δλ,µ. Recall from last time, we stated n Proposition 7.2. If {uλ} , {vλ} are two bases of Λ . The the following are equivalent: (1) We have huλ, vµi = δλµ. −1 (2) The Cauchy product 1 − xiyj = λ uλ(x)vλ(y). Proof. Write Q P

uλ = aλρhρ ρ X vµ = bµσmσ. σ X Observe (1) says

aλρbµρ = δλµ ρ X and (2) says

uλ(x)vλ(y) = mλ(x)hλ(y). λ λ X X because

aλρbλσ = δρσ. λ X 30 AARON LANDESMAN

So, these two rephrasings are equivalent because AB = I BA = I.  Corollary 7.3. We have ⇐⇒

hpλ, pµi = zλδλµ and we have that the sλ are orthonormal.

Proof. Apply Equation 7.1 and Proposition 7.2.  Corollary 7.4. The inner product h, i is symmetric and positive definite.

Proof. Write f = λ aλ. Then, P 2 hf, fi = ai . X 

Recall the map w : Λn Λn with

ω(hλ) = eλ → ω(pλ) = ±pλ.

This implies

hpλ, pµi = hωpλ, ωpµi hu, vi = hω(u), ω(v)i. so ω preserves norms (it is an orthogonal transformation). Now, our three identities yield

 −1 1 + xiyj = ελzλ pλ(x)pλ(y) λ Y X = mλ(x)eλ(y) λ X = sλ(x)sλ0 (y). λ X 7.2. Discussion of the Cauchy product and related formulas. We’ll now have a discussion on the meaning of the above derived formu- las. Question 7.5. What do these formulas mean? MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS31

(1) Let’s begin by working with x1, ... , xn. We have n E(x) = (1 + xit) i=1 Yn n = ent . n=0 X 1 To see what this means, multiply both sides by (1+t)n . We get

n  1 t  n 1 n  t r 1 + xi i = en . 1 + t 1 + t n r 1 + t (1 + t)n−r i=1 i=1 r Y X Now, let’s interpret this probabilistically. Flip a coin n times. t Say the probability of getting heads is 1+t . Let x = (x1, ... , xn) be a binary pattern. Then, the left hand side is

xi x E( xi ) = E(x ) = xiξP(ξ). ξ Y X Y On the other hand, the part of the right hand side n  t r  1 n−r r 1 + t 1 + t is the probability that you get r successes out of n flips. Also,

1 r z ner = EFD (x ) r z = xi p(z). z∈FD X Y is the Fermi Dirac statistic (here z is a random binary vector): one drops r balls into n spot with balls in distinct spots. The fact that this does not depend on t tells us this xi is a suf- ficient statistic, meaning you only have to keep track of the number of 1’s to estimate the probability, not theP whole dis- tribution. (2) Let’s now look at the generating function for the h’s. We have

−1 n H(A) = (1 − xit) = hn(t)t . ∞ n=0 Y X 32 AARON LANDESMAN

Now, multiply both sides by (1 − t)n. We have 1 − t 1 n + r − 1 = h (t) · (1 − t)n tr. 1 − x t n+r−1 r r i r Y X To interpret this, recall the geometric distribution with pt(j) = tj(1 − t) (it measures the chance you get j heads before getting a tail). The left hand side generates x1, x2, ... , xn independent geometric distributions with parameter t. The left hand side is

x zi E(x ) = xi P(x = z). z∈Nn X Y The right hand side is a sort of dot product of the negative binomial distribution with hr. Here, the negative binomial distribution is n + r − 1 tn (1 − x)n = P (x). r n,t This measures the chance we get r heads before the nth tail. The coefficient 1 h (t) n+r−1 r r is the Bose Einstein distribution. That is, this is the generating function for Bose-Einstein . Here, Bose-Einstein statistics measure dropping r unlabeled balls into n boxes so all configurations are equally likely. For example, if you have 10 electrons being put into 2 slots, there are 11 possibilities of the form i, 10 − i for 0 ≤ i ≤ 10. In 1 the Bose-Einstein distribution, all possibilities have chance 11 . Chartarjee and Persi are writing a paper on Bose Einstein dis- tributions. The identities above say that by randomizing r we can make the coefficients independent.

8. 10/10/16 8.1. Finishing up last class. Recall from last time we had

c1 ct eλ(x1, ... , xc) = c(λ)E(x1 ··· xt |Ri = λi) with λ = λ1 ··· λn. Here the normalizing constant c(λ) is some prod- uct of binomial coefficients. Here, P(Xij = 1) = p and cj = j xij, Ri = xij. j P P MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS33

Similarly,

0 c1 cc  hλ(x1, ... , xc) = c (λ)E x1 ··· xc |Ri = λi h where P(Xij = h) = p (1 − p) for 0 ≤ n < . Last class, we proved  ∞ 1 + xiyj = sλ(x)sλ0 (y). i j λ Y, X In Macdonald exercise 1 on orthogonality, he says to set

y1 = y0 . . . yn = t. Then, he obtains the identity n E(t)n = s (x)t|λ| λ λ λ X where X n − c(x) = λ n(x) λ Y where c(x) = column of x - row of x h(x) = hook length of x. Since this holds for all n, we can view n as a variable, and we obtain X E(t)X = s (x)t|λ| λ λ λ X Similarly X H(t)X = s (x)t|λ| λ0 λ λ X n Remark 8.1. These λ generalize the binomial formula since if λ just has one shape, this becomes the usual binomial formula. 8.2. Skew-Schur Functions. Definition 8.2. A Young Tableau is semi-standard is the numbers are weakly increasing from left to right and strictly increasing from top to bottom. 34 AARON LANDESMAN

Example 8.3. An example of a semi-standard young tableau of λ = 4322 is

1 1 2 3 2 3 4 3 4 5 5

a We have s = λ+δ . Combinatorialists think that λ aδ

T sλ(x1, ... , xn) = x t X where T ranges over all semi-standard young Tableau of shape λ.

Example 8.4. All semi-standard young tableau of shape 2, 1 with n = 3 are

1 1 2 1 1 3 1 2 2 1 2 3 1 3 2 1 3 3 2 2 3 2 3 3 MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS35

It turns out we get

2 3 2 2 2 2 s21 = x1x2 + x1x3 + x1x2 + 2x1x2x3 + x1x3 + x2x3 + x2x3 = 2m13 + m21.

Say λ, µ are two partitions we can express any symmetric function as

f = hf, sλisλ. λ X

Define sλ/µ to satisfy

hsλ/µ, sνi = hsλ, sµsνi.

λ Definition 8.5. The Littlewood Richardson coefficients are cµν so that

λ sµsν = cµνsλ. λ X We have

λ sλ/µ = cµνsν ν X λ with zµν = 0 unless |λ| = |µ| + |ν| implies sλ/µ = 0 unless |λ| ≥ |µ|. We will soon show sλ/µ = 0 unless µ ≤ λ, meaning λn ≥ µn for all n.

Example 8.6. If

µ =

λ =

Then λ/µ is the shape of λ with µ removed. 36 AARON LANDESMAN

8.3. Jacobi-Trudi. Theorem 8.7 (Jacobi-Trudi). We have

sλ/µ(x) = ε(w)hλ+δ−w(µ+δ) w∈Sn X   = det hλi−µj−i+j . 1≤i,j≤n Proof. Consider λ sλ/µ(x)sλ(y) = cµνsν(x)sλ(y) λ λν X X = sν(x)sµ(y)sν(y) ν X = sµ(y) sν(x)sν(y) ν X = sµ(y) hν(x)mν(y). ν X So,

sλ/µ(x)sλ(y)aµ+δ(y) = uhν(x)mν(y)aµ+δ(y) λ n X X α+w(µ+δ) = hα(x) y α w∈Sn X X λ+δ Therefore, sλ/µ(y) is the coefficient of y on the right hand side above. Therefore,

sλ/µ(x) = ε(w)hλ+δ−w(µ+δ) w∈Sn X   = det hλi−µj−i+j . 1≤i,j≤n  Remark 8.8. If µ = 0 this reduces to the old Jacobi-Trudi identity. Corollary 8.9. By duality,   s = det e 0 0 . λ/µ λi−µj−i+j

w(sλ/µ) = sλ0/µ0 . Proof. Follows from Jacobi-Trudi. The point is that the duality map ω switches sλ and sλ0 and also switches e’s and h’s.  MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS37

Corollary 8.10. We have sλ/µ = 0 unless µ < λ. Proof. Follows from Jacobi-Trudi, more details omitted. 

Say λr < µr for some r. Then, λi ≤ λr < µr ≤ µj for 1 ≤ j ≤ r ≤ i. This implies λi − µj − i + j < 0 for these i and j. This implies

hλi−µj−i+j has a zero block in the lower left hand (n − r + 1) × r corner.

9. 10/12/16 9.1. Eigenvalues of unitary matrices. Consider the group of unitary matrices ∗ Un = {M ∈ Mn×n : MM = I} . n iθj Say a given unitary matrix has eigenvalues e j=1. The Weyl den- sity of the eigenvalues is  1 f(θ , ... , θ ) = |eiθj − eiθk |2. 1 n 2π)nn! 1≤j

i Theorem 9.1 (Diaconis and Dishahshaham). The probability that Tn(M ) ∈ Bi tends to (as n ) n  → ∞ P zj ∈ Bj , j=1 Y where zj are independent standard complex Gaussians,

2 e−|z| P z ∈ B = dz. j π ZB Remark 9.2. We now describe the proof of this to illustrate the utility of Schur functions. Proof. The idea for this proof is to use the method of moments. 38 AARON LANDESMAN

Definition 9.3. If µn are probability measures on R, we say µn con- verges to µ if for every bounded probability measure f,

f(x)µn(dx) f(x)µ(dx). Z Z If µ has a density, this is equivalent→ to saying µn(ball) µ(ball).

If µn have finite moments with → k µn(k) = x µn(dx) ZR and µn(k) goes to µ(k) as n , this means µn goes to µ, pro- vided µ is determined by its moments. For example, µ is always determined by its moments if → ∞ 1 = 1 2k . ∞ µ(2k) / k=1 X For µn on C, we need ∞

a −b µn(a, b) = z z µndz. ZC Example 9.4. For µ the standard normal moments are

a −b −|z|2 z z e /πdz = δaba! ZC For M ∈ Un, we have n j iθk iθ1 iθn Tn(M ) = e = Pj(e , ... , e ) = Pj(M). k=1 X We have k k bj j aj j (Tn(M )) Tn(M ) dM = PλPµdM Un j=1 j=1 Un Z Y Y Z = hPµ, Pλi where λ, M are two partitions. Using that the Schur functions are the characters of the unitary group,

hsλ, sµi = δλµ. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS39

We know also that

pλ = χν(λ)sν ν X pµ = χν0 (λ)sν0 0 Xν where

χν(λ) is the νth irreducible character of sn at the λth conjugacy class. (We’ll see this soon in the course). Therefore, k k bj j aj j (Tn(M )) Tn(M ) dM = hPµ, Pλi Un j=1 j=1 Z Y Y = χν(λ)χν0 (µ)hsν, sν0 i 0 Xν,ν = χν(λ)χν0 (µ) ν X ai = n ai!δλµ. i Y a where n i ai! is the size of the conjugacy class of λ in Sn (ai are the a cycle lengths). But, note that n i ai! are the moments of the normal Q √ √ distribution z 2z ··· nz . We have an exact equality provided 1 2 nQ n ≥ iai, ibi. i i X X This concludes the proof of the theorem on the distribution of the Un. See the paper by Persi and Evans for more details.  9.2. Application. Remark 9.5. Here is an application: In the talk Persi gave yesterday, take I = (a, b) on S1. Pick a matrix at random and count the number n iθj of eigenvalues in I. This is i=1 δIe . So, the number of eigenvalues in an interval should be proportional to the length of the interval. We can ask how they fluctuate.P #{ eigenvalues in I}−n2π(b−a) Theorem 9.6 (Wieand). Let XI = √ , then log n

XI n(0, 1)

→ 40 AARON LANDESMAN where n(0, 1) denotes the normal distribution with mean 0 and variance 1.

Question 9.7. What are the correlation between XI, XJ? It turns out they are 0 unless J and I have an endpoint agreeing, in which case it is −1/2 if I ∩ J is a point, and 1/2 if I ⊂ J. 9.3. Strong Szego limit theorem. Definition 9.8. A Toeplitz operator is a matrix of the form a b c d e a b c   . f e a b g f e a meaning that the entries are constant on diagonals. Question 9.9. What are the eigenvalues of Toeplitz matrices? Consider |z| = 1, f(z) = f^(j)zj. j∈Z X so that f^(j) = f^(−j). Take the Toeplitz matrix  f^(0) f^(1) ··· f^(n − 1)  ^ .. .. .   f(−1) . . .   . . .  .  . .. .. f^(1)  f^(−n + 1) ··· f^(−1) f^(0)

Then, taking real eigenvalues λ1, ... , λn of M take 1 n µ (m) = δ n n λj j=1 X

f−1 Theorem 9.10 (Weak Szego). We have µn(m) u with u uniform on S1. That is, 1 n 1 2π  → ψ(λ ) ψ f(eiθ) dθ n j 2π j=1 0 X Z where ψ is any bounded continuous→ function. Remark 9.11. This doesn’t look like eigenvalues of random matrices, but this turns out to be the same as saying that the eigenvalues are jointly normal. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS41

10. 10/14/16 Today we’ll talk about skew Schur functions. Definition 10.1. Define

hsλ/µ, sνi := hsλ, sµsνi. where λ sµsν = cµνsλ λ X λ sλ/µ = cµνsν. λ X We have the Jacobi Trudi identities   sλ/µ = det hλi−µj−i+j 1≤i,j≤n with `(λ, `(µ) ≤ n and   s = det h 0 0 λ/µ λi−µj−i+j here with `(λ0), `(µ0) ≤ n.

Lemma 10.2. We have sλ/µ = 0 unless µ ≤ λ, meaning µi ≤ λi for all i. Proof. We have the following manipulations with three sets of vari- ables λ sλ/µ(x)sλ(z)sµ(y) = cµνsν(x)sλ(z)sµ(y) λ µ λ µ ν X, X, , = sν(x)sµ(y)sµ(z)sν(z) µ,ν X = sν(x)sν(z) · sµ(y)sµ(z) ν µ X X −1 = sν(X)sν(z) 1 − yizj ν i j X Y, −1 −1 = 1 − xizj 1 − yizj i j i j Y, Y, = sλ(x, y)sλ(z). λ X This implies

(10.1) sλ(x, y) = sλ/µ(x)sµ(y) µ X 42 AARON LANDESMAN by taking the coefficient of sλ(z) in

sλ/µ(x)sµ(y)sλ(z) = sλ(x, y)sλ(z). λ µ λ X, X  Lemma 10.3. In fact, we have

sλ/µ(x, y) = sλ/ν(x)sν/µ(y). µ≤ν≤λ X Proof. Using (10.1), we have

sλ/µ(x, y)sµ(z) = sλ(x, y, z) µ X by taking xy for x, and z for y in (10.1). Similarly,

sλ(x, y, z) = sλ/µ(x)sν(y, z). ν X This implies

sλ/µ(xy)sµ(z) = sλ(x, y, z) µ X = sλ/ν(x)sν(y, z) ν X = sλ/ν(x)sν/µ(y)sµ(z). ν,µ X now, taking the coefficient of sµ(z), we get

sλ/µ(x, y) = sλ/ν(x)sν/µ(y). µ ν X X  Remark 10.4. More generally, if x1, ... , xn are n sets of variables, n 1 n i (10.2) sλ/µ(x , ... , x ) = sνi/νi−1 (x ) 0 1 n i=1 ν ≤νX≤···≤ν Y where the sum is taken over all of partitions µ = ν0 ≤ · · · ≤ νn = λ. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS43

10.1. Background on Tableau. Definition 10.5. A semi-standard Young tableau (SST) in 1, ... , n is a placement of 1, ... , n into boxes of shape λ which are weakly increasing in each row and strictly increasing in each column. One can make a similar definition for λ/µ. Example 10.6. Taking λ = 6, 4, 2, 2 we have the semi-standard young tableau 1 1 2 2 4 4 2 2 4 4 3 3 4 4

Definition 10.7. A semi-standard young tableau is standard if it uses the numbers 1, ... , n without repetition, and has n boxes.

i In (10.2), take x = xi. The left hand side is sλ/µ(x1, ... , xn) and the right hand side sνi/νi−1 (x) by the Jacobi Trudi restrictions, sλ/µ(x) = 0 unless λ/µ is a row hook shape (meaning at most one box in each |λ|−|µ| column). Then, sλ/µ(x) = x . In general, (10.2) has right hand α1 αn side product x1 ··· xn where i i−1 |ν | − |ν | = αi. and νi are a sequence of row hooks. Corollary 10.8. This tells us

T sλ/µ(x1, ... , xn) = x T X where T is a semi-standard young tableau of shape λ/µ in n variables, with

T Ti x = xi . i Y and Ti is the number of i’s in T. In particular,

T sλ(x1, ... , xn) = x . T SST of shapeXλ in n variables Proof. Follows from the above discussion.  44 AARON LANDESMAN

10.2. KOSKA Numbers. Fix λ ≥ µ, n, ν. we define the Koska num- ber

Kλ/µ,ν to be the number of SST of shape λ/µ with content ν. We have

sλ/µ(x1, ... , xn) = Kλ/µ,νmν. ν X Example 10.9. Take λ = 6, 4, 2, 2. Then 1 1 1 2 2 3 2 2 3 3 3 3 4 4 has content 4443333332222111 = 13243543. Take µ = 4, 4, 2 and the filling of λ/µ 2 3

4 4 has content 4432. We have

hsλ/µ, hνi = Kλ/µ,ν which implies

< sλ, sµhν >= Kλ/µ,ν. So, Pieri’s formula says

sµhν = Kλ/µ,νsλ. λ X Example 10.10. If ν = r is a partition of r with one part, then

Kλ/µ,r = 0 unless λ/µ is an r row hook shape. In this case, Kλ/µ,r = 1. In particular, we have

sµhν = sλ. λ/µ is an r rowX hook shape MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS45

Example 10.11. Say µ = 33, r = 2. What λ are possible if λ/µ is a 2 row hook shape? We can have λ equal to

equal to

or equal to

11. 10/17/16 11.1. Relations of skew-Schur functions to other fields. (1) Persi didn’t show how to prove Toeplitz matrix stuff from the Schur functions. If you want to know more about Toeplitz Schur functions, see Persi’s paper with Dan Bump on Toeplitz Minors. (2) The last thing we did in this class was show how to move between skew Schur functions to semi-standard tableau. That is, we took T sλ/µ(x1, ... , xn) = x T X where T varies over all semi-standard tableau of shape λ/µ in the variables 1, ... , n. (3) m × n tableaus correspond to plane partitions, with blocks coming out of the diagram. If you look at a plane partition at a distance, it looks like a tiling of the outline by Rhombus by Rhombi. So, these three viewpoints turn out to be equivalent. This has an enormous literature. The probabilistic part ends up having a sort of circle, and each of the corners of the circle are in some way of the same type (the Arctic Circle theorem). Vadim Goren proved the first nice theorems about probabilis- tic viewpoints on distributions of number in shapes λ. 46 AARON LANDESMAN

(4) One of the things you hear a lot about in symplectic geometry is the moment map. The distribution you get is called the Deusman Heckman measure, and has something to do with these skew tableau and their probabilistic distribution. Remark 11.1 (Random Remark). The skew Schur shapes index the representations of the affine Hecke algebra. 11.2. Characters of the symmetric group. Let G be a finite group and let A be a commutative ring with unit (often C, R, Z, K). If f, g: G A, define the inner product 1 hf, gi = f(x)g(x−1). → |z| x∈G X

Definition 11.2. Let Gb denote the irreducible characters of G over C, which are an orthonormal basis with respect to the inner product h, i above. Definition 11.3. Given H ⊂ G, we have restriction H ResG(f): H A sending f : G A to the composition H G A. → Definition 11.4. Given H ⊂ G, we have induction → G → → IndH(f) Sending a function f : H A to a function G A by sending f to a sum of functions over cosets (see Serre’s rep theory of finite groups book). → → Definition 11.5. Let G = Sn. Define n ψ : Sn Λ

w 7 Pρ(w) → where if w has cycle type 1a1(w), ... , nan(w) then ρ(w) := iai(w). → Remark 11.6. If u ∈ Sm, v ∈ Sn, acting on different sets ofQ variables, then u × v ∈ Sm × Sn ⊂ Sm+n. (we will think about embedding this as the first m blocks and then the last n blocks). u × v has cycle type ρ(u) ∪ ρ(w), so the map ψ is multiplicative. That is, ψ(u × v) = ψ(u)ψ(w) because the power sums are multiplicative. That is, ψ is a homomor- phism. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS47

n Definition 11.7. Let R denote the class functions on Sn, n −1 R := f: Sn Z : f(u vu) = f(v) , the center of the group algebra. We have → n λ R = hx iλ`mhδλiλ`n. For each λ there is an character, and we call the character xλ. Here, δλ is, for any partition of cycle type λ, we have 1 if ρ(w) = λ δλ(w) = 0 otherwise. n Definition 11.8. Let, R := ⊕n=0R . Here, S0 = e is the group with 1 R0 = Z R f Rm g Rn element and . Now, ∞has a product. If ∈ , ∈ , we let f · g define f · g := IndSm+n f × g Sm×Sn . on each graded piece, and then extend by linearity. Remark 11.9. Under the above definition, R is a commutative, as- sociative (by a standard fact about induction) graded algebra with unit. If f = n=0 fn, g = n=0 gn then ∞ ∞ P hPf, gi := hfn, gni. n X Goal 11.10. We would like to show in the strongest sense R is iso- morphically isometric to Λ. Definition 11.11 (Characteristic map). We have a characteristic map

Ch : R ΛC = Λ ⊗Z C f 7 hf, ψi, → where f ∈ Rn. Note that →1 hf, ψi = f(w)ψ(w) n! w X −1 = f(ρ)pρzρ . Xρ`n where n!/zρ is the number of elements with partition size equal to ρ. One then extends by linearity. Lemma 11.12. In fact, Ch(•) is a ring map. 48 AARON LANDESMAN

Proof. That is, we have Ch(f · g) = hIndSn+m f × g ψi Sm×Sn , Sn+m = hf × g ResSn+m ψi , Sn×Sm Sn×Sm

= hf, ψiSm hg, ψiSn .  Lemma 11.13. For f, g ∈ Rn, we have

hCh(f), Ch(g)i = hf, giSn and for f, g in different graded pieces, hCh(f), Ch(g)i = 0. Proof. We have −1 hCh(f), Ch(g)i = f(ρ)g(ρ)zρ Xρ`n z = f(w)g(w)z−1 ρ ρ(w) n! w X = hf, giSn where here we used that pρ were orthogonal and

hpλ, pµi = zλδλµ.  This starts our goal of showing the characteristic map is a isomor- phism, since being an isometry on each graded piece implies it is injective. Theorem 11.14. The characteristic maps is an isometric isomorphism of R onto ΛC.

Proof. Let ηn be the trivial character of Sn. This is 1 on each permu- tation. So, −1 Ch(ηn) = zλ Pρ = hn, Xρ`n as we proved when we introduced the hλ basis. For λ = λ1, ... , λr ` n, define

ηλ := ηλ1 ··· ηλr Sn = IndS ×···S (triv). λ1 λr Since the map is multiplicative, we have

Ch(ηλ) = hλ. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS49

Finally, define λ n χ := det (ηλn−i+1)1≤i,j≤n ∈ R . λ We’ll finish the proof next time, but Ch(χ ) = sλ.  12. 10/19/16 Today, we’ll back up, and cover more coherently. Definition 12.1. Let G be a finite group. Let Q be a probability dis- tribution meaning Q(g) ≥ 0, with g∈G Q(g) = 1. Example 12.2. This is an example whichP is morally about card shuf- fling by making repeated transpositions. Take G = Sn, let w ∈ Sn denote a permutation. Let 1 n if w = id 2 Q(w) = 2/n if w = (i, j) 0 otherwise

Note, this is a probability distribution because 2 n 2 + = 1. n 2 n2 We have Q ∗ Q(g) = Q(η)Q(gη−1). η X Similarly, Q∗h = Q ∗ Q∗(h−1). 1 We have u(g) = |G| . Theorem 12.3 (Poincare 1912). We have Q∗k u if and only if Q is not concentrated in a coset of a subgroup. (For example if we made the above measure→ vanish on the identity, convolutions would be concentrated on either even or odd permu- tations, since the sign would alternate at each step.) Now, |Q∗h − u| := max |Q∗h(A) − u(A)| A⊂G 1 = |Q∗h(g) − u(g)|. 2 g∈G X 50 AARON LANDESMAN

The math problem is: given g, Q, ε > 0, how large do you need to take k so that |Q∗h − u| < ε. One way to approach these problems is to use Fourier analysis. Let G be a finite group. Definition 12.4. A representation ρ of G is a homomorphism ρ : G GL(V) so that → ρ(st) = ρ(s)ρ(t).

We denote dρ := dim V. Example 12.5. The trivial representation is the representation with dim V = 1 and ρ(g) = 1.

Example 12.6. If G = Sn, V = R, we have the sign representation is the 1-dimensional representation satisfying ρ(w) = sgn(w). n Example 12.7. If G = Sn and V = R with basis

he1, ... , eni, then ρ(w) is a linear map with

ρ(w)(ei) = ew(i).  This is a permutation matrix δiw(i) . (n) Example 12.8. If V = R 2 with basis ei,j for i, j an unordered pair, we have a representation of the symmetric group defined by

ρ(w)ei,j = ew(i),w(j). Definition 12.9. For G an arbitrary group, a representation ρ is irre- ducible if there does not exist a strict nontrivial subspace V0 ⊂ V so that

ρ(g)V0 ⊂ V0 for all g ∈ G.

Example 12.10. The n dimensional representation of Sn above is not irreducible for n ≥ 2. We can consider the subspace

n V0 := v ∈ R : vi = 0  i  X which is a strict nontrivial subspace fixed by the action of Sn. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS51

Definition 12.11. If Q is a probability distribution on G and ρ a G representation, we define the Fourier transform Q^ (ρ) = Q(g)ρ(g). g∈G X Lemma 12.12. We have Q\∗ Q(ρ) = Q^ (ρ)2. Proof. Exercise 12.13. Prove this. Hint: It follows from the definition.  Lemma 12.14. We have 1 if ρ = triv u^(ρ) = 0 if ρ is nontrivial and irreducible, where u is the uniform distribution. Proof. Exercise 12.15. Prove this. Hint: Use Schur’s lemma.  We would like to show Q∗h u by showing Q^ (ρ)h 0. For this we will need Fourier inversion and the Plancherel theorem. Theorem 12.16 (Fourier Inversion)→ . If f : G C is a function,→ we have 1   f(g) = d tr ρ(g−1)f^(ρ) |G| ρ → ^ ρX∈G where G^ is the set of all irreducible representations. Proof. Observe that both sides are linear in f. So, it’s enough to prove this for a basis. We will show it for the basis of δ functions. That is, we only need show for every g0, 1   δ (g) = d tr ρ(g−1)δ (g ) g0 |G| ρ g0 0 ^ ρX∈G 1   = d tr ρ(g−1)ρ(g ) |G| ρ 0 ^ ρX∈G 1   = d tr ρ(g−1g ) |G| ρ 0 ^ ρX∈G 52 AARON LANDESMAN and this is the decomposition of the regular representation, defined 2 −1 by V = L (G) with Tg(f)(x) = f(g x). That is, we are using

Theorem 12.17. For V the regular representation, we have

V = ⊕ρ∈G^ dρVρ.

Taking characters on both sides yields the result, since the charac- ter of the regular representation is 0 if g 6= id and |G| if g = id.  Remark 12.18. In the case G is a , this recovers the usual Fourier inversion on a circle.

Theorem 12.19 (Plancherel). If f, h : G C are two functions, then

1 f(g)h(g−1) = d→tr f^(ρ)h^(ρ) . |G| ρ g ^ X ρX∈G Proof. Both sides are linear in f so we only need verify this for δ func- tions. That is, take f(g) = δg0 (g). We only need verify this is this case. Then,

f^(ρ) = ρ(g0).

Then, we have to check 1 h(g−1) = d tr ρ(g )h^(ρ) . 0 |G| ρ 0 ^ ρX∈G

This holds by Fourier inversion above. 

We would like to apply this to bound |Q∗h − u|.

Lemma 12.20. We have

∗h 2 h 4|Q − u| ≤ dρ|Q^ (ρ) | ^ ρ∈GX,ρ6=triv with

|M|2 = tr(MM∗). MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS53

Proof. We have  2 4|Q∗h − u|2 = |Q∗h(g) − u(g)

≤ |GX| |Q∗h(g) − u(g)|2 g X  h ∗ = dρ tr Q^ (ρ)h(Q^ (ρ) ) ^ ρ∈GX,ρ6=triv h = dρ|Q^ (ρ) |. ^ ρX∈G using Cauchy Schwartz in the second line above, we used Plancherel in the third line and that the Fourier transform of the uniform is 0 for ρ nontrivial.  Lemma 12.21 (Schur’s lemma in disguise). Suppose G is a group with a probability measure Q which is a class function (constant on conjugacy classes). Example 12.22. For example, take

1 n if w = id 2 Q(w) = 2/n if w = (i, j) 0 otherwise

Then, if ρ is irreducible, 

Q^ (ρ) = cρid.

Proof. We have

ρ(g−1)Q^ (ρ)ρ(g) = Q(x)ρ(g−1xg) g X = Q^ (ρ). This says this matrix commutes with the action of the group. So, by Schur’s lemma, the only matrices commuting with the action of the group are scalars.

Remark 12.23. Further, the constant cρ can be computed by taking the trace of both sides.

 54 AARON LANDESMAN

Example 12.24. Take Q on Sn the distribution 1 n if w = id 2 Q(w) = 2/n if w = (i, j) 0 otherwise

We have  1 2 Q^ (ρ) = ρ(I) + ρ(i, j) = cI. n n2 Hence, X   dρ 2 n + χ (1, 2) = cd . n n2 2 ρ ρ This implies

1 n − 1 χρ(1, 2) c = + . n n dρ We want to bound  2k ∗k 2 2 1 n − 1 χρ |Q − u| ≤ dρ + . n n dρ ^ ρ∈GX,ρ6=triv So, we have one term from the sign representation, we get χρ(i, j) = 1, dρ = 1, then  1 n − 12k  2 2k − = 1 − . n n n Another term is from the n − 1 dimensional representation with dρ = (n − 1). We get χρ(1, 2) = n − 3. Then, this term becomes  1 n − 1 n − 32k  2 2k (n − 1)2 + = (n − 1)2 1 − n n n − 1 n 2 n−2k 2 ≤ e log n ≤ e−2c 1 where k = 2 n (log n + c) in the last line above. It turns out that these terms dominate the whole sum, and this yields the desired answer. The answer to our card shuffling question ultimately gives 1 e−c ≤ |Q∗k − u| ≤ 2e−c 2 1 with k = 2 n (log n + c). MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS55

13. 10/21/16 13.1. Review. Last time, we let G be a finite group. We define Q a probability measure, we defined Q∗k, a convolution, we defined the uniform measure u. We also defined the distance between the convolution and the uniform measure, 1 |Q∗k − u| = |Q∗k(g) − u(g)|. 2 g∈G X We found ∗k 2 ^ 2k u|Q − u| ≤ dρ|Q(ρ)|2 , ρ X ∗ with |M|2 = tr(MM ).

13.2. Completing the example from last class. We were looking at the example of G = Sn and 1 n if g = id 2 Q(g) =  n if g = (i, j) 0 otherwise

We had that Q is then a class function, so ^ Q(ρλ) = cρλ I, 1 n − 1 χ(i, j) c = + n n fλ with λ χ (1, 2) = tr ρλ(1, 2) λ λ f = dim ρλ = χ (id). We then had to bound 2k  2  1 n − 1 χλ(1, 2) u|Q∗k − u|2 < fλ + . n n fλ λ`nX,|λ|=n 1 n−1 λ2 To bound this sum say n is about 0, n is about 1. We have f = λ n! χ (1,2) = 1 . If fλ 2 , then, P 2k  2  1 n − 1 χλ(1, 2) fλ + . n n fλ λ`nX,|λ|=n 56 AARON LANDESMAN is roughly 1 2k √ n!p(n) = en log n+ n−2k log 2 2 which is small if k is roughly 10n log n. But, if λ = (n − 1, 1) is this particular partition, we have χn−1,1(1, 2) = n − 3, fn−1,1 = n − 1. Therefore, in this case, 1 n − 1 χn−1,1(1, 2)  2  + = 1 − . n n fn−1,1 n Then,  2 2k (n − 1)2 1 − ≤ e−c n 1 where c is defined so that k = 2 n (log n + c). In order to prove something, we have to know what fλ are and λ 1 + n−1 χ (1,2) what n n fλ are. To calculate these, we have the following lemma from Macdonald.

Lemma 13.1. n! fλ = x∈λ h(x) where hλ is the hook length of x. Q Proof. This is some exercise in Macdonald.  Example 13.2. In the case of the standard representation, we have nn −n2− 3··· 1 1

λ n! and so we see f = n(n−2)! = n − 1. We have χλ(1, 2) 2 λ  λ0 = i − j fλ n(n − 1) 2 2 i j X   X 2 λi = − λ (i − 1), n(n − 1) 2 i i X as is shown in Macdonald. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS57

Remark 13.3. See Persi’s work “use of group representations in prob- ability and statistics.”

1 ∗ −c The final result is that k = 2 n (log n + c) and |Q − u| ≤ 2e . Remark 13.4. Igor Pak has a survey of proofs of the hook length formula. 13.3. Completing the example; back to the Schur functions. We proved

−1 λ sλ = zρ xρpρ ρX`|λ| ai Here zρ = i i ai!, where ρ has ai parts equal to i. Set xi = 1 for 1 ≤ i ≤ n. Then, p (1, ... , 1) = 1i = n. We have p (1) = nρ(λ). Q j λ Then, Macdonald shows P n + c(x) s (1) = λ h(x) x∈λ Y where h(x) is the hook-length of box x, and c(x) is the content of box x, equal to the column of x minus the row of x. So, using the hook length formula, n + c(x) s (1) = λ h(x) x∈λ Y fλ = (n + c(x)) . n! x Y Hence, we obtain −1 λ zρ xρpρ = sλ ρ X fλ = (n + c(x)) . n! x Y n−2 Here 21 denotes a partition for the standard representation of Sn. Now, we can rewrite this as λ xρ (n + c(x)) = n!z−1 n`(ρ). ρ fλ x∈λ Y Xρ`n 58 AARON LANDESMAN

Example 13.5. Therefore, taking e = 2(1n−2, `(ρ) = n − 1, we obtain n! n! = xλ 2(n − 2)! z(21n−2) ρ fλ = c(x) x∈λ X λ  λ  = i − j 2 2 x j X X Remark 13.6. (1) Persi says this was the first problem solved us- ing non-commutative Fourier analysis in probability. (2) This also started the study of cutoff phenomenon, with |Q∗k − u| sharply cutting off to randomness around 1 n log n. n (3) There is a subject called comparison theory, which says that if you know one walk, carefully, then, morally, you know about “any walk” to good approximation. (4) For more information see Persi’s paper “comparison theo- rems for random walks on finite groups.” 14. 10/24/16 Today, we’ll discuss the Robinson-Schensted-Knuth (RSK) algo- rithm. Persi proceeded to explain the rules of solitaire. Remark 14.1. Persi says if you can solve solitaire, Persi will get you on the front page of the New York Times. Solitaire is a hard problem we can’t solve, so in math, we look for an easier problem we can solve. Remark 14.2. One such problem is called patience. Here are the rules. (1) Start with n cards labeled 1, ... , n, shuffled in a random order (2) Turn up one card at a time (3) You can play a low card on a high card (e.g., put a 2 on a 6) (4) If you turn over a card higher than any card showing, you must start a new pile (5) The goal is to get as few piles as possible Example 14.3. Start with 423175968 in a 9 card deck. When we play this game, we get 421, 3, 75, 97, 8. Hence, here, we ended up with 5 piles. Note that 23569 is an increasing subsequence of length 5. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS59

8 9 10 11 12 13 14 15 16 17 18 54 525 1746 2790 2503 1518 632 188 33 11 1

TABLE 1.

So, Persi says, let’s say we win the game with 9 or fewer piles.

Question 14.4. How should you play, what is the optimal strategy, how many piles do you expect?

Lemma 14.5. Let π ∈ Sn. The number of piles if you play as far to the left as possible is `(π), which is the length of the longest increasing subsequence. Further, “play to the left” is optimal. Proof. Say k := `(π). We claim that no mater how we play, cards in an increasing subsequence must go in separate piles. This is just because the only cards that can go on top of a given card are lower cards. Therefore, for any strategy, k is a lower bound on the number of piles. To complete the proof, we only need show that this strategy of playing to the left achieves (at most) k. When we play a card on the last pile in the play to the left strategy, there must have been lower numbers on all previous piles. This means we have constructed an increasing subsequence of length equal to the number of piles.  Remark 14.6. Provably, this play to the left strategy (patience sort- ing) is the fastest algorithm for computing the longest increasing subsequence. Some people say that “patience sorting” is the fastest way for hu- mans to sort n cards. By “patience sorting” we mean that one first plays the card game, and then look for 1 on top of a pile, then 2 on top of a pile, and so on. Example 14.7. Playing the game a few thousand times with 52 cards, Persi found the statistics.

Question 14.8. Say we pick π ∈ Sn at random. What is the distribu- tion of `(π), where `(π) is the length of the longest cycle. This is related to Ulam’s problem on sorting cards using deletion and insertion operations. This yields a metric, saying the distance between two permutations is the minimum number of deletions and insertions. More mathematically, an insertion and deletion operation is a cycle. So, equivalently, if we let S denote the set of all cycles, this 60 AARON LANDESMAN distance is the length function with generating set S for Sn. Call this metric du(σ, τ).

Fact 14.9. It turns out that du(1, π) = n − `(π). (The proof is similar to that of Lemma 14.5.

Theorem 14.10 (Logan-Schepp,√ Kerov-Virscict, 1965). We have that En(`(π)) is about 2 n, where En denotes the expected value over Sn. Fur- ther, 1 (`(π) − E (π))2 ∼ n1/3. n n π∈S Xn Finally, the Baik-Deift-Johansson theorem says √ `(π) − 2 n  P ≤ x A(x) n1/6 with → − (x−t)q2(t)dt A(x)e x , R∞ where q(x) satisfies q00 = xq(x) + 2q2(x) and q(x) ∼ Ai(x) the airy function. Pick M = M∗. Form the GUE. This means that M is an n × n matrix zij = zji with zij iid normal 0, 1/2 and zii normal 0, 1. This is a hermitian matrix, so it has real eigenvalues, λ1, ... , λn. Theorem 14.11 (Tracy-Widom). We have  √  λ1 − 2 n P ≤ x A(x). n1/6

There are quite a few other appearances→ of the Tracy-Widom dis- tribution. For example: (1) The buses in Cuernevaca (is this spelled correctly?)- there were small taxis picking everyone up. The buses got a small amount of regulation. This made the traffic in the city smooth and regular The inter-arrival times looked like Tracy-Widom (2) the shape of a burning piece of paper (3) the distance between parked cars (4) coffee stains MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS61

Definition 14.12. The RSK algorithm assigns to a permutation π a pair P(π), Q(π) a pair of standard Young Tableau of the same shape so that the map π 7 (P(π), Q(π)) is bijective. → Example 14.13. Start with the permutation 423175968. We start play- ing 4. Then we play 24. Then, we get 24, 3. The 1 bumps down the first column. We get 124, 3, 7. We next get 124, 37, 5. Then, 124, 37, 5, 9. Then, 124, 37, 59, 6. Then, the 8 starts a new pile 124, 36, 59, 6, 8. In the end, we get, 1 3 5 6 8 P(π) = 2 7 9 4 Next, we count where we add a box to get Q(π). This results in 1 3 5 7 9 Q(π) = 2 6 8 4 One can run this algorithm backwards to get a bijection. This is a 2 combinatorial proof that n! = λ`n fλ, where fλ is the size of the irrep, and also the number of young tableau of shape λ. P Next time, we’ll examine Staley’s chapter 7 RSK.

15. 10/26/16 Recall that we’re talking about the RSK algorithm, which assigns to a permutation π a pair of standard young tableau (P, Q) of the same shape λ. We made P by playing the patience game discussed in the pre- vious lecture. So, λ1 is the length of the longest increasing subse- quence. We get that λ1 + λ2 is the maximum of the length of the union of the biggest two increasing subsequences (this isn’t precisely the right statement, since they can have some overlap, for the precise statement see Curtis Greene’s work). Remark 15.1. About 30 years ago, all partitions and young tableau can be said in terms of nilpotent orbits of groups and flags in vector spaces. There turns out to be a natural metric between flags taking values in the permutations. You can say RSK in that formulation. 62 AARON LANDESMAN

But, we prefer to describe it this way, since it came from analyzing solitaire. 15.0.1. Row Insertion. Definition 15.2. If P is a semi-standard tableau and k is an integer, we write P k (“insert k into P”). We insert k into the first row in the lowest place possible (or if its bigger, we insert it at the end of the row, and← end the process). If it is inserted into the row, it bumps the current element in the row into the next row. We then continue the process until a number is the biggest in its row, at which point it is added to the end of the row, and the process is terminated. We illustrate this by the following example. Example 15.3. Let 1 1 2 4 5 5 6 2 3 3 6 8 4 4 6 8 6 7 8 9 Now, let’s say we insert a 4. 1 1 2 4 4 5 6 2 3 3 5 8 4 4 6 6 6 7 8 8 9, where the numbers with overlines indicate they were bumped in the path. The insertion path was (15)(24)(34)(43). Lemma 15.4. Insertion has the following properties: (1) “Stuff moves to the left.” That is, if (rs) ∈ P k has (r + 1t) has t ≤ s (2) If j ≤ k then I ((P j) h) lies strictly to the← left of I (P j) Proof. (1) Start with a semi-standard tableau P. This has weakly increasing rows and← strictly← increasing columns. So,← either Pr+1,s > Pr,s or there is no box under r, s. In the first case, when we insert something, it has to be inserted on something strictly bigger. When you move something from Pr,s to row MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS63

r + 1, it can always go in (r + 1, s) and maybe to the left. In cases 2, you can’t go to the right of column s. (2) This argument is similar, see Staley’s book on enumerative combinatorics, volume 2, chapter 7, RSK.  Corollary 15.5. The insertion P k is a semi-standard young tableau. Proof. We first show that the rows are still weakly increasing after the insertion. At each step of the← insertion, we replace a number by another number lower than it. A number a can only bump a larger number b. By the first part of the above lemma, b does not move to the right, and so b is inserted below a strictly smaller number.   Definition 15.6 (RSK algorithm). Start with a matrix A = aij with only finitely many nonzero entries. RSK assigns to A a pair (P, Q) where P, Q are semi-standard young tableau of the same shape. Further, the column sums of A equal the content of P, and the row sums equal the content of Q. Example 15.7. 1 0 2 0 2 0 1 1 0 To A assign   i1 i2 ··· im j1 j2 ··· jm with i1 ≤ · · · ≤ im, if ia + ib then ja ≤ jb, and ij appears aij times. Then, form P via solitaire on the bottom row. So, we take the se- quences 1 1 1 2 2 3 3 1 3 3 2 2 1 2 Now, we play a book keeping arrow, showing how we got through the elements. We get the following sequence of moves 1 then 1 3 64 AARON LANDESMAN then 1 3 3 then 1 2 3 3 then 1 2 2 3 3 then 1 1 2 2 3 3 then 1 1 2 2 2 3 3

We next keep track of the sequence of insertions, adding the next number in the first row to the place where the new box was added. We get the following sequence of moves 1 then 1 1 then 1 1 1 then 1 1 1 2 MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS65 then 1 1 1 2 2 then 1 1 1 2 2 3 then 1 1 1 3 2 2 3. At this point, we see the column sums are the content of the Q tableau and the column sums are the content of the P tableau. Theorem 15.8. We have that RSK is a bijection between finite integer ma- trices with finitely many nonzero entries and pairs (P, Q) of semi-standard young tableau of the same shape with the property that the content of P is the column sums of A and the content of Q is the row sums of A. Proof. We know that P is semi-standard because we built it by play- ing solitaire. Next, we show Q is semi-standard. Q is gotten from the book-keeping array by inserting the first row of   i1 i2 ··· im j1 j2 ··· jm with the ij’s weakly increasing. Therefore, since we’re forming shapes by adding increasing numbers, we obtain that the rows and columns are both weakly increasing. It remains to check he columns are strictly increasing. For the strictness, we use the second part of Lemma 15.4. We now claim that the content of P is the column sums of A and the content of Q is the row sums of A. This follows fairly immedi- ately from the construction. Exercise 15.9. Verify that the statement regarding the row sums and column sums.  Remark 15.10. The correspondence for permutations is given when we take the matrices to be permutation matrices, and then we obtain semi-standard young tableau (with entries 1, ... , n) as a result. 66 AARON LANDESMAN

16. 10/28/16 Recall that last time we discussed a fancy version of RSK using matrices. Here is an application. We give a bijective proof of the Cauchy following equation. Corollary 16.1. We have −1 1 − xiyj = sλ(x)sλ(y) i j λ Y, X Proof. We have

−1 aij 1 − xiyj = xiyj ∞ i j i m a =0 Y, Y, Xij α β A typical term is x y and arises from some collection of aij with αi βj j aij = xi , i aij = yj . This is equivalent to picking a matrix with given row and column sums. Using the RSK bijection, this is P P equivalent to picking P, Q two semi-standard young tableau of the same shape. The result then follows from the combinatorial defi- nition of sλ(x) as the sum over all semi-standard young tableau of shape λ of x raised to the content of λ.  16.1. Plane partitions, RSK, and MacMahon’s generating function. For λ ` n, we have λ1 ≥ · · · ≥ λr > 0 with i λi = n.

Definition 16.2. A plane partition π = πij Pis a collection of indexed by i and j so that

πij ≥ πi,j+1 πi+1,j ≤ πij That is, the rows are decreasing rightward and the columns are de- creasing downward. Example 16.3. The following is a plane partition: 7 5 5 3 2 1 1 1 6 5 5 2 1 1 6 3 2 2

We have that this has 18 parts meaning 18 numbers appearing. We have the size (the sum of all numbers) with |π| = 59. The number of rows is 3. The number of columns is 8. The trace of π is i aii = 14. P MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS67

Definition 16.4. Define P(r, c) to be the set of plane partitions with at most r rows and at most c columns. In particular, P(1, c) is the number of partitions with at most c parts. Theorem 16.5 (Euler). c  −1 qtr(π)x|π| = 1 − qxi i=1 π∈XP(1,c) Y Proof. Exercise 16.6. Prove this by expanding the right hand side.  The following is a higher dimensional case: Theorem 16.7 (MacMahon).  −i x|π| = 1 − xi ∞ i=1 π∈XP(α,α) Y The proof is vintage bijective combinatorics, due to bender and Knuth, though it was originally proved by MacMahon. We want to prove this. We proceed in several steps. (1) First, we describe a way of sticking together two partitions with distinct parts λ, µ with the same number of rows to make a new partition ρ(λ, µ). with ρ(λ, µ) a partition of |λ| + |µ| − `(λ). Example 16.8. Consider λ = 532, µ = 631. (It’s not important that they happen to be partitions of the same number) both with distinct parts and the same number of rows. First, form the shifted diagram of λ. • • • • • • • • • • Similarly, form the shifted diagram of µ, • • • • • • • • • • 68 AARON LANDESMAN

Next, form the shifted diagram of µ, which is essentially the transpose of the diagram for µ.

• • • • • • • • • • • •

Next, glue λ to the shifted diagram to µ.

• • • • • • • • • • • • • • • • • • •

This resulting shape is ρ(λ, µ).

Note that one can recover λ, µ from ρ(λ, µ), and so this map is a bijection. (2) We next extend the above ρ(λ, µ) construction between pairs (P, Q) of reverse semi-standard young tableau (meaning strictly decreasing down columns and weakly decreasing down rows) of the same shape and plane partitions. The bijection is given by sending P, Q to π whose ith column is ρ(Pi, Qi), for P, Q reverse semi-standard young tableau of the same shape with.

Example 16.9. Take

4 4 2 1 P = 3 1 1 2 MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS69

and 5 3 2 2 Q = 4 2 1 1 We then get 4 4 2 1 π(P, Q) = 4 2 2 1 4 2 2 2

(3) Now, replace each row of π(P, Q) by its transpose. Example 16.10. Using the above example 4 4 2 1 π(P, Q) = 4 2 2 1 4 2 2 2 we get π0(P, Q) given by 4 3 2 2 π0(P, Q) = 4 3 1 1 2 2 1 1 1 1 1 1

It’s not hard to see that the resulting map (P, Q) 7 π0 (P, Q) is a bijection, with |π0| = |P| + |Q| + |Sh(P)|. (Here the shape is the sum of the lengths of→ the rows). Furthermore, diag(π0) = Sh(P) = Sh(Q) so tr (π0) = |Sh(P)|. Further, we have invariants 0 0 `1(π ) = max(Q)`2(π ) = max(P) 70 AARON LANDESMAN

where `1 is the number of rows and `2 is the number of columns.  (4) To A = aij we assign a pair of reverse semi-standard young tableau of the same shape.

Example 16.11. Given

2 0 1 0 1 1 0 3 0

From our RSK algorithm, this yields

1 1 1 2 2 3 3 3 1 1 3 2 3 1 2 2

Reversing this, we get

3 3 3 2 2 1 1  2 2 1 3 2 3 1 1

Playing the solitaire game, we get

3 2 2 2 1 2 1

and the second tableau can be computed similarly as the one recording where boxes are placed, gotten by RSK where you put a high card on a low card.

From this matrix A, we get P, Q, given by

|P| = jaij|Q| = iaij i j i j X, X, with

max P = max j : aij 6= 0

max Q = max i : aij 6= 0 |Sh(P)| = |Sh(Q)| = aij

Theorem 16.12.  X r c  −1 qtr πx|π| = 1 − qxi+j−1 i=1 j=1 π∈XP(r,c) Y Y MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS71

Proof. Let Mrc be all r by c matrices. Then, qtr(π)x|π| = qaij x (i+j)aij− aij A=a ∈M P P π∈XP(r,c) Xij r c  σ  a (i+j−1)a =  q ij x ij  i j a ≥0 Y Y Xij  17. 10/31/16 17.1. Announcements and Review. Remark 17.1. Happy Halloween! Last time, we discussed plane partitions. Recall these were collec- tions π = πij so that πij ≥ πi,j+1, πij ≥ πi+1,j. We defined P(r, c) as the collection of plane partitions fitting in an r × c box. Question 17.2. Fix a shape or a pair (r, c), and fix n. Consider all π so that n = i,j πi,j. Choose one uniformly at random. What does this partition “look like”? P Today, we look at the above question for λ ` n (a partition instead of a plane partition). Let P(n) be the number of partitions of n. We have  −1 p(x)xn = 1 − xi ∞ ∞ n=0 i=1 X Y Remark 17.3. Hardy and Ramanujan found √ √ π 2n P(n) ∼ e 3 /4n 3 Remark 17.4. Persi’s test to see how fast something grows is to plug in n = 52. This is about e43. For reference, n! is about e68. Question 17.5. So, pick a partition at random. How many 1’s does it have? Here are some theorems going back to the 1940’s. Remark 17.6. We have a generating function, and a product form. Look at the singularities of the right hand side. This has singularities dense in the unit circle. The circle method was invented to solve this problem. Later, Rademacher found an exact formula, in the sense that given n, there is some formula with finitely many terms, which 72 AARON LANDESMAN you can compute to find P(n), though the number of terms depends on n. The following theorems are proved using these ideas. Theorem 17.7 (Erdos-Lehner). Let λ = 1a1 ··· nan . Then,   πa1 P √ ≤ x ∼ 1 − e−x. 6n √ In particular “a typical partition has around n 1’s. Furthermore,   πkak P √ ≤ x ∼ 1 − e−x. 6n

Theorem 17.8. Let y1(λ) ≥ y2(λ) ≥ · · · yk(λ) be the sequence of largest parts of λ. Then,   π log π −e−x P √ Y1 − √ ≤ x ∼ e . 6n 6n for − < x < . Further,

 −x −kv   x e−e ∞ ∞π log π P √ Yn − √ ≤ x ∼ dv 6n 6n (k − 1) ! Z− Additionally, we have ∞  π  n −xi  P √ (a1, 2a2, . . . nan) ≤ (x1, ... , xn) ∼ 1 − e . 6n i=1 Y Theorem 17.9. Fix q ∈ (0, 1). Take

|λ|  i Qq(λ) = q 1 − q . ∞ i=1 Y a probability measure on all partitions. Under this measure, k   ixi i P (a1 = x1, ... , an = xn) = q 1 − q . i=1 Y Corollary 17.10. Under Qq, the ai (λ) are independent with ij i Qq(ai = j) = q (1 − q ).

Corollary 17.11. Pick λ from Qq, let N = i iai (λ). Then, 1 P (λ|N(λ) = n) = P . P(n) Proof. Use the Borel-Cantelli lemma.  MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS73

Remark 17.12. The above corollary suggests the following algorithm. Pick λ ` 1000. Pick q close to 1 so that Eq (N(λ)) = 1000. To do this, use

Eq (N (λ)) = iEq (ai) ∞ i=1 X qi = i i . ∞ 1 − q i=1 X Now, fix this resulting q, and sample repeatedly from Qq. Just wait until you get λ with N (λ) = 1000. The√ problem with this is that N (λ) has a Gaussian distribution, with n the standard deviation. This algorithm will be too slow to get many random partitions of size 1000. The next topic in this line of work is the shape of a random parti- tion. Question 17.13. What does the shape of a random partition look like? Remark 17.14. That is, choose some random partition of n. If n is large, the√ partition will have some shape. We know the partition has length n log n, and it also has approximately this height. There is then some curve describing the locations of the boxes. The curve we draw has a limit. The limit is √ √ e−πx/ 6 + e−πn/ 6 = 1. A reference for this is Vershik, Statistical mechanics of combinatorial partitions. The main other measure used is the following: Definition 17.15. Let P(n), denote the Plancherel measure, defined by f(λ)2 P(λ) := n! where f(λ) is the dimension of the irreducible representation of Sn of shape λ, which is n! fλ = . i hook lengths hi Q 74 AARON LANDESMAN

The above questions have been asked under this measure, and their answers are also known for this measure. For example, we know √ λ − 2 n  P 1 ≤ x F(x) n1/6 where F is the Tracy Witten distribution. Remark 17.16. One good reference is Andrei Okunkov is “the uses of random partitions” (from 2003).

18. 11/2/16 18.1. Overview. Today, we’ll discuss three related topics: (1) P-partitions (2) Quasi-symmetric functions (3) Shuffling cards 18.2. P-partitions. To some degree, this has disappeared from mod- ern research, but Stanley and others have done some interesting work on it in the past. Definition 18.1. A partition of k into n parts is a sequence 1 ≤ f(1) ≤ n · · · ≤ f(n) with i=1 f(i) = k. Definition 18.2. PA composition of k into n parts 1 ≤ f(1), f(2), ... , f(n) n with i=1 f(i) = k. That is, it is a partition without an order. QuestionP 18.3. What about if we give a partial ordering on the parts of a composition? Example 18.4. Say we have a composition f with f(1) + f(2) + f(3) = n and 1 ≤ f(2) < f(1), f(2) ≤ f(3). How many such partially ordered compositions are there? Definition 18.5. Let P be a partial order on [n] := {1, ... , n}, with relation denoted

f: [n] N+. so that if i

N, we have f(i) < f(j). → Example 18.6. If we take P to be a poset which is a set, we get a usual composition series. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS75

Example 18.7. Take n = 3 and consider the partial order 2

Remark 18.15. Here is an equivalent formulation: Drop n labeled balls into m labeled boxes, and count the number of ways that are consistent with the partial order.

Definition 18.16. The order polynomial, denoted ΩP(m) is the num- ber of such functions as in Remark 18.14. Remark 18.17. If one drops n balls in at random, the chance that the assignment works out with respect to the partition is

ΩP(m) . mn Example 18.18. (1) Say P is poset on 1, ... , n which is a set (no n additional relations), then ΩP(m) = m . (2) If P is is the chain 1 < 2 < ··· < n, we obtain weakly mono- tone functions. Then, n + m − 1 Ω (m) = P n by stars and bars. (3) Say P = P1 ∪ P2. Then,

ΩP(m) = ΩP1 (m)ΩP2 (m).

Definition 18.19. Given a permutation π ∈ Sn, the descent set of the permutation is the set of i so that π(i + 1) < π(i). d(π) is the number of descents. We let D(π) denote the descent set of π. Example 18.20. Taking π = 6431274, this has descent set 1, 2, 3, 6, d(π) = 4, and D(π) = {1, 2, 3, 6}.

Example 18.21. What about Ωπ(m), the complete ordering given by the permutation π? Then, we have m + n − 1 − d(π) Ω (m) = . π n For example, take π = 14253. This has d(π) = 2. This is the number of solutions to 1 ≤ f(1) ≤ f(4) < f(2) ≤ f(5) < f(3). This is the same as counting f ≤ f(1) ≤ f(4) ≤ f(2) − 1 ≤ f(5) − 1 ≤ f(3) − 2 ≤ m − 2. So, this is m − 2 + n − 1 m + n − 1 − d(π) = . n n MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS77

Corollary 18.22. We have

d(π)+1 m x ΩP(m)x = n+1 . ∞ m=0 (1 − x) X π∈XL(P) Proof. By linearity, it’s enough to prove this for all permutations π, by Lemma 18.11   m n + m − 1 − d(π) m Ωπ(m)x = x ∞ ∞ n m=0 m=0 X X xd+1 = . (1 − x)n+1

 Example 18.23. If you take the partial order with no constraints, this becomes

mnxm = xd(π)+1/ (1 − x)n+1 . ∞ m=0 π∈S X Xn For example, if n = 0, we obtain 1 xn = . 1 − x X If you differentiate this, you get x mxm = . (1 − x)2 X If you further differentiate this, you obtain more special cases, and can see this satisfies the same recursion as the descent numbers.

Remark 18.24. The polynomial

n−1 d(π)+1 j+1 An(x) = x = Anjx π∈S j=0 Xn X and the latter are the Eulerian number, where the Anj is the number of permutations of n things with j descents. 78 AARON LANDESMAN

19. 11/4/16 19.1. Review. Last time we were discussing P-partitions. Let P be a partial order on [n]. Let f : [n] [m] so that

i

N j = f(i) < f(j). ⇒ We introduced ⇒ ΩP(m) = # { P-partitions }

Recall that if P was a poset which is a set of size n, then ΩP(m) = 1 mn . If P is a linear ordering corresponding to a permutation π, say π = 3214, then we require f(3) < f(2) < f(1) ≤ f(4). Any P has P(P) = P(π) π∈aL(P)

Example 19.1 (Euler). If P is a set of size n, we get

m n m n+1 ΩP(m)t = m t = An(t)/ (t − t) , ∞ ∞ i=0 m=0 X X where n−1 j+1 An(t) = An,jt j=1 X with An,j the number of permutations in Sn with j descents. For example, if n = 2, we get 2 2 m t + t m t = 3 . ∞ m=0 (1 − t) X If n = 4, we get 2 3 4 4 m t + 11t + 11t + t m t = 5 . m (1 − t) X 19.2. A possibly non-politically correct example.

Question 19.2. Say we have n1 girls and n2 boys. Let n := n1 + n2. They each have a summer job. There are m salary levels. If a salary in {1, ... , m} is assigned at random, to each person, what is the chance that all the girls are paid strictly less than all the boys? MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS79

Let’s rephrase the above question in our poset language. Set a poset with 2 girls and 3 boys, and consider the complete bipartite graph K3,2 where the the 3 corresponds to 3 boys, and the 2 corre- sponds to the two boys. We want to count the number of P-partitions. To use the theory developed above, we need to find all linear exten- sions of this poset L(P). There are n2!n1! such linear extensions. So, the answer is 1 m + n − 1 − (a + b + 1) A A . mn n n1,a n2,b 0≤a≤n1−1, 0≤bX≤n2−1 To enumerate partitions of r, use n f(r) ΩP(q, m) = q i=1 f∈XP(P) Y The coefficient of qr in this counts P partitions with parts at most m, and n parts that sum to r. Here are some references: (1) Ira Gessel has slides on his webpage of a nice talk he gives, “P-partitions and permutation enumeration” (2) Kile Petersen “Descent, peaks, and P-partitions” (3) Richard Stanley, Volume 1 and Volume 2 of “enumerative com- binatorics”

19.3. More on descent. Here are two comments about descents.

Remark 19.3. In Sn, D(π) = {i : π(i + 1) < π(i)} . For example, D(6412375) = {1, 2, 6} ⊂ [n − 1]. If S ⊂ [n − 1], we define As = D(π)=s π. Fact 19.4 (Solomon)Q . u ASAT = cST AU. u≤X[n−1] where c ∈ Z≥0 is a fairly explicit combinatorial constant. 19.4. Shuffling Cards. Definition 19.5. The Gilberg-Shannon-Reeds model for card shuf- fling is the following: Say we have a deck of n cards. The probability 80 AARON LANDESMAN of cutting the cards into c and n − c is n c . 2n If the left has A and the right has B. The change next from the left is A/(A + B). Remark 19.6. If you shuffle cards 8 times perfectly, they come back to where the start. The above is provably the highest entropy model for shuffling. Let Q(π) be the change of the permutation after 1 shuffle. Then,

Q ∗ Q(π) = Q(η)Q(πη−1). η∈S Xn Similarly, one can define Q∗k(π) := Q ∗ Q∗k−1(π). We have the uni- 1 form distribution u(π) = n! . Question 19.7. Given ε > 0, how large should we choose k so that |Q∗k − u| < ε

∗k ∗k where we use the distance |Q − u| := maxA⊂Sn |Q (A) − u(A)|. Here are some other descriptions of Q. (1) Flip a fair coin for each card, and put all 1’s on top. (2) Take the unit interval and place n points down at random. Label them x1, ... , xn. Now do the Baker’s transformation, sending x 7 2x mod 1. Look at the induced permutation. Remark 19.8. We also have a-shuffles, by sending x 7 ax mod 1. So, for example,→ a 3 shuffle is where we put n points down, and stretch each third out. This corresponds to partitioning→ cards into 3 piles, and then put them together randomly. We have Qa ∗ Qb = Qab, using this description. We want to study the k-fold convolution. ∗k ∗k Hence, we have Q = Q2 = Q2k . Fact 19.9 (Diaconis-Bayer). We have 1 a + n − 1 − d(π−1) Q (π) = . a an n So, the mathematics of shuffling is about the same as the mathemat- ics of P partitions, and about the same as the mathematics of quasi- symmetric functions. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS81

20. 11/7/16 Recall that last time we were talking about shuffling. Definition 20.1. We define

sh (a1, ... , an; x1, ... , xm) to be the set of all sequences in a1, ... , an, x1, ... , xm with no repeti- tions such that ai comes before aj for i < j and xi comes before xj for i < j.

Remark 20.2. In Coexeter groups, if Sk × Sn−k is the subgroup of Sn permuting the first k and n − k elements separately. The shortest coset representatives for Sn/Sk × Sn−k are shuffles, where the met- ric is given by the minimum number of transpositions to get to the identity.

Remark 20.3. If P1, P2 are disjoint total orders, (meaning that a poset is a union of two disjoint chains) then L(P) corresponds to shuffles. Recall the Gilbert-Shannon-Reeds shuffling scheme, which shuf- fles by cutting the deck randomly, and then randomly dropping from each cut pile with probability proportional to the size of the deck so far. We defined −1 Qa ∗ Qb = Qa(η)Qb(πη ). where Qa is an a shuffle, cuttingX a pile into a piles. We found Qa ∗ ∗h Qb = Qab and Q2 = Q2h . Remark 20.4 (Cool magic trick to impress your friends!). Persi sends Sarah a deck of cards in random order, and he knows the order of the deck. When you shuffle a deck of cards once, it’s not very ran- dom. But each half of the cards stay in the same order. So, when you shuffle again you have four sequences. After 3 shuffles, you usually have eight rising sequences. When she puts the card into the mid- dle, she gets a 9th rising sequence. Then Persi plays solitaire. Then, you’ll have eight piles, each about an eighth of the deck, and one pile which has a single card. Theorem 20.5 (Bayer). We have n + a − r Q (π) = /an. a n where r = r(π) is equal to the number of rising sequences in π. 82 AARON LANDESMAN

k 1 2 3 4 5 6 7 8 9 |Q∗k − u| 1.000 1.000 1.000 1.000 .924 .62 .32 .16 .08

TABLE 2.

Remark 20.6. Note that we also have r(π) = d(π−1) + 1. So, we also have r + a − 1 − d(π−1) Q (π) = /an. a n Using the total variation distance 1 1 |Q∗k − u| = |Q∗k(π) − | 2 n! = maxX|Q∗k(A) − u(A)|. A⊂Sn

Example 20.7. When n = 52, we get 3 Theorem 20.8. If k = 2 log2 n + c, then  1  |Q∗h − u| = 2φ(2−c) − 1 + o √ n x 2 where φ(x) = √1 e−t /2dt. 2π XR P(x) Remark 20.9. If is∞ a finite set and is a probability. Then, the entropy of P is − x P(x) log P(x). Theorem 20.10. IfPX = Sn and P is any probability supported on single shuffles, then the entropy of P is at most the entropy of Q2. Consider n+2−r n+1 n + 1 Q (id) = n = n = . 2 2n 2n 2n We have that Q2 of any non-identity π will have two rising sequences, 1 yielding 2n . Theorem 20.11. We have ∗k 1 1 |Q − u| = |Q k (π) − | 2 2 2 n! π∈S Xn k n 2 +n−j 1 = B | n − | n,j 2nk n! j=1 X MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS83 where Bn,j are the Eulerian numbers, the number of permutations with j rising sequences, which An,j−1, which is the number of π with j − 1 de- scents.

21. 11/9/16

21.1. Algebra of the Ai. We’ll work in Q [Sn]. Define n + 1 1 Q = I + A . 2 2n 2n 2 We let

A2 = σ −1 Xσ where σ−1 has one descent. We let

Ai := σ. −1 σ hasX i-1 descents

Theorem 21.1. The set n {Ai}i=1 generates a commutative, semisimple subalgebra of Q [Sn] with idempo- tents n en(`) = σ` (n − r, ... , 1 − r) Ar r=1 X where σ` is the `th elementary symmetric function and en(`) are the Euler- ian idempotents.

Remark 21.2. Further, the idempotents above en(`) give a hodge de- composition of hochschild homology. Proof. We know that an A-shuffle corresponds to an element 1 n a + n − r B := A . a an n r r=1 X 2 Let’s consider B2 (the same as Q2 above). Writing B2 = c1A1 + c2A2 with A1 = id We have 2 2 B2 = (c1A1 + c2A2) 2 2 2 = c1A1 + c1c2A2 + c2c1A2 + c2A2 = B2. 84 AARON LANDESMAN

Then, we see B4 is a linear combination of A1, A2, A3, A4. Next, we 2 look at B2B3, which is a linear combination of A1, A2, A3, A2 and A2A3. Hence, A2A3 = αijAi. Similarly, we see B2B3 = B3B2 = B6. We also see A2A3 = A3A2, and then we can keep going by induction P to see AiA3 = A3Ai, and so on. This shows the algebra is commuta- tive. Next, let’s verify semi-simplicity (meaning that it is a direct sum of simple algebras, algebras with no nontrivial ideals). We have n  m  m 1 2 + n − r B = B m = A 2 2 2mn n r r=1 X 1 n 1 m = σ (n − r, ... , 1 − r) A . n! 2`m ` r `=0 r=1 X X Now, B2 acts on this subalgebra by left multiplication as a linear map 1 1 ... 1 1 e (`) with eigenvalues , 2 , , 2n−1 and eigenvectors of the form 2` n , and this turns out to be one equivalent characterization of semisim- ple. We’ll omit the proof that these are the idempotents, but there’s some discussion of them in chuck Weibel’s book.  21.2. Quasi-Symmetric Functions. We work in infinitely many vari- ables x1, x2, .... Definition 21.3. A quasi-symmetric function of degree n is a formal linear combination c(α)xα α n X where α  n means α is a composition of n, with xα = xα1 xα2 ··· xαn i1 i2 in so that the coefficient h i h i xα1 ··· xαn f = xα1 ··· xαn f i1 in j1 jn for all i1 < ··· < in and j1, ... , jn. for all n. 2 Example 21.4. The polynomial i

→ MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS85

3 12 21 111 ∅ {1} {2} {1, 2}

TABLE 3.

If S = s1 < s2 < ··· sk, then α1 = s1, α2 = s2 − s1, ... , αn = n − sk. From α, we let Sα = α1, α1 + α2, ... , α1 + ··· + αn−1. Example 21.6. If n = 3, the compositions of 3 are Definition 21.7. Let Q be the ring of quasi-symmetric functions, i Q = ⊕i=1Q , where Qi are quasi-symmetric function∞ of degree i.

Definition 21.8. Let α = (α1, ... , αn) with no zero parts. Let

α1 αk Mα = x ··· x . i1 ik i <···

Lα := xi1 ··· xik i ≤i ≤···≤i 1 2X k with ij < ij+1 if j ∈ Sα. Example 21.10. If n = 3,

3 M3 = xi i X 2 M21 = xi xj i

L3 = M111 + M12 + M21 + M3 L21 = M111 + M21 L12 = M111 + M12 L111 = M111. Proposition 21.11. We have

Lα = Mβ = Mco(T). β≥α X Sα⊂XT⊂[n−1] Proof. We have

T/Sα Mα = (−1) Lco(T), Sα⊂XT⊂[n−1] {L } so α αn−1 form a basis. 

22. 11/11/16 The purpose of the last few lectures were to get us better acquainted with quasi-symmetric functions, card shuffling, and plane partitions. Now, we’re discussing quasi-symmetric functions. Recall last time we defined

QSym(x1, ...) so that h α i h α i x 1 ··· xαr f = x 1 ··· xαr f, i1 in j1 jn for i1 < ··· < in, j1 < ··· < jn. We have two bases

αj Mα = x Lα = x ··· x ij i1 ik i <···

→ MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS87 with f(p1) ≤ · · · ≤ f(πn) and f(πi) < f(πi+1) if πi > πi+1. Fix π ∈ Sn. We have f x = Lco(π)(x) f∈XP(π) where co(π) is the composition of the descent set of π. Proposition 22.1. Let P be a poset of cardinality n. Then, f KP(x) := x = Lco(π)(x). f∈XP(P) π∈XL(P) Proof. This follows from the above, and from the fundamental lemma that we can write a sum over all functions subject to the constraints of P as a sum over all possible total orderings of all functions subject to those total orderings. 

Definition 22.2 (A twist of notation). Let w ∈ Sn and P is a partial order on [n].A (P, w) partition is a function

f: [n] N+ so that f(π1) ≤ · · · ≤ f(πn), but if s < t and w(s) > w(t) then f(s) < f(t). → Warning 22.3. This is slightly different from the P partition, since we don’t just have conditions on adjacent elements. 22.1. Application to symmetric function theory. Example 22.4. Take a skew symmetric tableau • • • • • • • • • •

We can now tilt this by 135 degrees, to obtain Pλ/µ. Then, we create wλ/µ by labeling the boxes 8 9 10 3 5 7 2 4 6 1 88 AARON LANDESMAN which yields a labeling on the tilted Pλ/µ. Observe that  Pλ/µ, wλ/µ K = s is a semi-standard young tableau. Further, Pλ/µ,wλ/µ λ/µ. So, we can express

sλ/µ = Lco(T) T SSYT ofX shape λ/µ where co(T) is the composition associated with the descent set of T. We say i is a descent of T is i + 1 is in a strictly lower row. Example 22.5. If we have • • • 2 8 • 1 4 5 10 3 6 9 7. the corresponding descent set is 2, 8, 5, 6. This is a subset of 10. This corresponds to the composition 23122, gotten by taking the differ- ences in the parts of the subset. This proves qmaj(T) s (1, q, q2, ... , qn) = T , λ/µ (1 − q) ··· (1 − qn) P where the sum is taking over all SSYT of shape λ/µ and maj(T) = i. i∈XD(T) Remark 22.6. You can then use these definitions to reprove MacMa- hon’s formula that  −i PP(n)xn = 1 − xi . ∞ i=1 X Y 22.2. Connection of quasi-symmetric functions to card shuffling. Let θ = (θ1, ... , θn, ...) where i θi = 1 and θi ≥ 0. Definition 22.7. One obtains aPθ shuffle of n cards by dropping n balls into boxes with probability θi that the ball lands in box i. Then, from the ni cards in the ith barrel, riffle shuffle these by GSR. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS89

Example 22.8. If θ = (1/2, 1/2, 0, ...) then this is usual 2-shuffles. If 1 1  θ = a , ... , a , 0, ... , then this is a-shuffles. We can also have things like θ = (2/3, 1/3, 0, ...) which is a weighted shuffle.

Let Pθ(π) be the chance of obtaining π after a θ shuffle. Fact 22.9. We have

Pθ ∗ Pη(π) = Pθ∗η(π) where  θ ∗ η = θi · ηj, ... , . For example, if θ = η = (α, β, 0, ...) then   θ ∗ η = α2, αβ, αβ, β2, 0, ... . The key theorem is the following:

Theorem 22.10 (Stanley). We have Pθ(π) = Ldes(π−1)(θ). 22.3. Applications. The following application is due to Assaf, Persi, and sound. Fix α ∈ (0, 1). We have a correspondence sending Dα(π) to (α, 1 − α, 0, ...). We can study convolutions by defining ∗k Sep(k) = max 1 − n!Pα (π), π∈Sn and define the norm ` (k) = max |1 − n!P∗k(π)|. π α

Theorem 22.11. We have∞ Sep(k) = 1 − sgn(w) (θi(1 − θ)i)kni(π), w∈S Xn Y and  kni(π) ` (k) = θi + (1 − θ)i − 1, w∈S Xn Y where ∞

ni(π) = # { i cycles in π} . Further, if 2 log n − log 2 + c k = b  c − log θ2 + (1 − θ)2

−c −c then sep(k) ∼ 1 − c−` and ` (k) ∼ c−` − 1. This uses quasi-symmetric function theory. ∞ 90 AARON LANDESMAN

23. 11/14/16 23.1. Combinatorial Hopf Algebras. One useful reference is Com- binatorial Hopf Algebras by Aguiar, Bergeron, and Sottile. Definition 23.1. A Hopf algebra H is an algebra over a field k, mean- ing there is a multiplication map which is k-bilinear, (we will usually take k = R,) inducing the map m : H ⊗ H H

aijxi ⊗ xj 7 aijxixj ij → ij X X with a coproduct → ∆ : H H ⊗ H which is an algebra map, meaning the diagonal is a homomorphism → ∆(xy) = ∆(x)∆(y) and satisfies “coassociativity” meaning that if

∆(x) = xi ⊗ xj ij X then

(1 ⊗ ∆) ∆(x) = (1 ⊗ ∆) xi ⊗ xj ij X ! k l = xi ⊗ xj ⊗ xj ij kl X X ! k l = xi ⊗ xi ⊗ xj ij kl X X = (∆ ⊗ 1) xi ⊗ xj ij X = (∆ ⊗ 1) ∆(x). Further, we will assume Hopf algebras are graded and connected (meaning H0 = k · 1). Remark 23.2. A Hopf algebra is essentially the same as a group scheme. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS91

Remark 23.3. From coassociativity, Hopf algebras satisfy 2 ∆ = xi ⊗ xj ⊗ xk ijk X Remark 23.4. We say H is commutative if multiplication is commu- tative, and H is cocommutative if

∆(x) = xi ⊗ xj = xj ⊗ xi. 23.2. Examples of Hopf Algebras.X X Example 23.5. One example of a Hopf algebra is the free associative algebra on x1, ... , xn given by

k [x1, ... , xn] where elements are of the form aww with w = xi1 ··· xij , with mul- tiplication given by w · w0 = ww0. Further,

∆(xi) = 1 ⊗ xi + xi ⊗ 1. Then,

∆(w) = ∆(xi1 ) ··· ∆(xij )    = 1 ⊗ xi1 + xi1 ⊗ 1 ··· 1 ⊗ xij + xij ⊗ 1

= xia ⊗ xib . a∈S SX⊂[j] Y b∈Y[j]\S Consider the Hopf square map m∆ : H H

w 7 xia ⊗ xib . → a∈S SX⊂[j] Y b∈Y[j]\S This is inverse riffle shuffling!→ You can use this to find the eigenvectors for riffle shuffling, see Persi’s paper Hopf algebras and Markov chains, two examples and a theory, with Amy Pang and Arun Ram. Example 23.6. The ring of symmetric functions is an algebra with the following coproduct. Choose the e basis, for example (or we could have chosen h or p with the same formula as follows, with h or p replacing e,) and define ∆ by n ∆(en) = ei ⊗ en−i. i=0 X 92 AARON LANDESMAN

Question 23.7. What is the Hopf square map m∆. The Hopf square map is Kolmogorov’s model of rock breaking. Remark 23.8. You can develop all of symmetric function theory from the Hopf algebras perspective. Zelevinsky cleaned this up, and the same arguments give you the representations of GLn(Fq). Example 23.9. Consider all finite simple unlabeled graphs. This is an algebra by taking linear combinations so that

HG = agg where product is given by disjointX union. We can grade this by the number of vertices. The coproduct is given by

∆(g) = gs ⊗ gsc , s⊂XV(g) where gs is the induced subgraph. This is both commutative and cocommutative. Example 23.10. Consider labeled graphs, where each vertex is la- beled with a unique identifier from one up to |V(g)|. Then, ele- ments are finite linear combinations and multiplication is given by g1 ··· g2 = g1 g2 with labeling of g2 all higher than labelings of g1. Comultiplication is given by ` ∆(g) = gs ⊗ gsc s⊂XV(g) with labelings for the second graph in the tensor product moved down to 1. 23.3. What did Hopf do? In the 1930s the great mathematicians of the world were trying to figure out the topology of the classical groups. That is, they were trying to compute the cohomology of things like O(n). The homotopy type of the orthogonal group is fairly hopeless, even now, since this is computing homotopy types of spheres. Peo- ple were doing this one group at a time. Cartan would do U(3), and someone else would do O(4). Hopf realized that cohomology has a cup product. Because one is working with a group, one also has a coproduct on the cohomology algebra. Hatcher discusses this story is his book, in the section on H-spaces. Hopf algebras also appear in the development of algebraic groups, and it allows one to work over finite fields and more general fields. The story of more relevance to us is the following. Next time, we will define a combinatorial Hopf algebra with a character ρ. This MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS93 has to do with the zeta function of the poset, in the poset case. The theorem says that there is a terminal object in this category which is quasi-symmetric functions. When the character map to quasi- symmetric functions, the image is the chromatic polynomial. We’ll see this in more detail next class.

Definition 23.11. For G a graph, the chromatic polynomial PG(r) is the number of ways to color a graph with r colors.

Definition 23.12. The Stanley chromatic polynomial is PG(x) is de- fined as follows. Suppose P1, ... , Pc ≥ 0 with c Pi = 1 i=1 X and look at the probability that this coloring is proper. This is a sym- metric function of P1, ... , Pc.

24. 11/16/16 24.1. Definition of combinatorial Hopf algebras. Definition 24.1. A Hopf algebra is a connected graded Hopf algebra with a coproduct ∆ : H H ⊗ H with ∆(x) ∈ n H ⊗ H for x ∈ H , and ∆(xy) = ∆(x)∆(y) i=0 i n−i → n together with a unit, a counit, and an antipode (though the antipode follows from theP other properties). A combinatorial Hopf algebra is a Hopf algebra together with a character ζ : H k meaning ζ(ab) = ζ(a)ζ(b). → Remark 24.2. The characters, hom(H, k) form a group by

φ ∗ ψ(x) = mkφ ⊗ ψ∆(x) = φ(x1) · ψ(x2).

We also have X ∆ φ⊗ψ ζ H − H ⊗ H −−− H0 ⊗ H0 − k Remark 24.3. We have X(H) is abelian, if A is cocommutative. → → → 94 AARON LANDESMAN

24.2. Examples. Definition 24.4. A poset is graded if it has a unique maximal element 1^ and unique minimal element 0^, and all chains from 0^ to 1^ have the same length. The rank of a poset is the length of the maximal chain. The rank of an element is the rank of the poset between 0^ to x (meaning the sub-poset of all elements less than or equal to x). Example 24.5. Here are some examples of graded posets: (1) A chain (2) A cube B3, the boolean algebra on 3 elements, graded by the number of elements in the set. (3) In general Bn, the boolean algebra on n elements which is graded by the number of elements in the set. (4) The poset of divisors of n. This is a product of chains whose lengths are the powers of the distinct primes appearing in the expansion of n. (5) Set partitions of n. For example, when n = 3, we have 1, 2, 3 at the bottom, (12, 3), (13, 2), (1, 23) in the middle and 123. (6) Given a graded poset and two elements, the poset between those two elements is another graded poset. Example 24.6 (Rota’s Hopf algebra). Take R to be the set of all formal linear combinations of graded finite posets. We have a product P · Q is given by the product poset P × Q, with ordering on the product given by (p, q) ≤ (p0, q0) if p ≤ p0 and q ≤ q0. The coproduct is given by sending

∆(p) = [0^, x] ⊗ [x, 1^] ^ ^ 0≤Xx≤1 where [x, y] means the subposet of z with x ≤ z ≤ y. Further, we take ζ(P) = 1 for all P. Definition 24.7. This is the zeta function of the poset ζ−1(x) = µ[0^, x] with µ[x, x] = 1 and x≤z≤y µ[x, z] = 0. These two relations define µ uniquely by inducting from the bottom of the poset. P Definition 24.8. Let P be all isomorphism classes of finite posets, not necessarily graded. We take P · Q to be the disjoint union P Q. The coproduct ∆(P) := I⊂P I ⊗ P \ I, where I ranges over all downward closed ideals meaning x ∈ I, y ≤ x = y ∈ I. Here the character` is defined by ζ(P) = 1P. ⇒ MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS95

Remark 24.9. We have a map J : (P, ζ) (,ζ) by J(P) = {I ⊂ P}. → Example 24.10. Take the Hopf algebra QSym defined by k[x1, ...] with xi non-commuting variables. For Mα, Lα two compositions, the coproduct ∆(Mα) = α=βγ Mα ⊗ Mγ. We define the product by the usual product in the ring. This is graded by degree. P Question 24.11. What is the Hopf square map m∆(x)? We define 1 if α = (n) or empty ζQSym(Mα) = 0 otherwise We have Λ ⊂ QSym is a combinatorial Hopf algebra with the same character. Theorem 24.12. For any combinatorial Hopf algebra (H, ζ), there exists a unique Hopf morphism ψ :(H, ζ) (QSym, ζQSym). For h ∈ Hn, ψ(h) = α a composition of n ζαmα. If α = (α1, ... , αn) a composition of n, we have P → k−1 ∗k ∆ ⊗k ζ H −−− H Hα1 ⊗ · · · Hαn −− k with the middle map being the tensor product of the projections H Hα . → → → i 25. 11/18/16 → Persi predicts this will be the last lecture about Hopf algebras.

25.1. What do Hopf algebras have to do with card shuffling? Re- call a + n − 1 − d(w−1) Q (w) = /an. a n This has the feature that

Qa ∗ Qb = Qab. This is related to P-partitions, as we saw in previous classes. These also relate to quasi-symmetric functions, as we saw. 96 AARON LANDESMAN

Remark 25.1. We can say this in an alternate way: We have a Markov 0 0 −1 chain Ka(w, w ) = Qa(w w ). 0 It’s a theorem of Persi and Ken Brown that Ka(w, w ) is diagonal- izable, but first proved by Phil Hanlen. (The proof uses algebraic topology in a funny way.) The eigenvalues are real, and equal to 1, 1/a, 1/a2, ... , 1/an−1 with multiplicity of 1/an−k = c(n, k), with

c(n, k) := # {w ∈ Sn : w has k cycles.}

Example 25.2. (1) When k = n, we have c(n, n) = 1. n (2) When k = n − 1, we have c(n, n − 1) = 2 , corresponding to transpositions. 0 Question 25.3. What are the eigenvectors of this Ka(w, w ) matrix?

The Hopf machinery gives us the eigenvectors. n Example 25.4. Here, let’s answer the question: What are the 2 eigenvectors with eigenvalue 1/a? We’ll index the eigenvectors by i, j for i < j, and call the wth component fi,j(w). We have

1 if i and j are adjacent in w in order fij(w) = −1 if i and j are adjacent in w and out of order 0 otherwise.

Observe that

fij(w) = 2d(w) − n − 1. 1≤i

25.2. Lyndon words. Say we have an ordered alphabet of non-commuting variables, 1, 2, .... Definition 25.5. A word is Lyndon if it is lexicographically strictly smaller than any cyclic shift. Example 25.6. We have 1213 is Lyndon, but 1212 is not Lyndon, since shifting twice recovers the same word.

Theorem 25.7. Any word w has a unique decomposition w = `1`2 ··· `n with `1 ≥ `2 ≥ · · · ≥ `k, with lexicographic ordering. Example 25.8. We have 32114 has a decomposition as 3, 2, 114. 25.3. The standard bracketing of Lyndon words. If L is Lyndon, it has a unique decomposition ` = `1 ··· `2 into nontrivial Lyndon words with `2 the longest right hand Lyndon factor. Example 25.9. We can write 13245 = 13, 245 with 245 the longest right hand Lyndon factor. Definition 25.10. The standard bracketing λ(`) is recursively de- fined as follows: λ(i) = i

λ(`1, `2) = [λ(`1λ(`2)] with [w, w0] = ww0 − w0w. We have λ(13245) = [λ(13), λ(245)] [13 − 31, 2 (45 − 54) − (45 − 54) 2]

Definition 25.11. For w = `1 · `2 ··· `k, the k Sym(w) = λ(`σ(i)). σ∈S i=1 Xk Y Fact 25.12. If w has no repetitions, then Sym(w) is a sum of words with all distinct letters, with coefficients ±1. 0 Theorem 25.13. On Sn, Ka(w, w ) has n! eigenfunctions fw(•) for w in Sn, where 0 0 fw(w ) = the coefficient of w in Sym(w). 1 k The corresponding eigenvalue is an−k where is the number of left to right minima in w. 98 AARON LANDESMAN

Example 25.14. Taking w = 32145 w has three minimal 3, 2, 1. These are the record values. The number of permutations with k record values is the sterling number c(n, k). Remark 25.15. We (Persi, McGrath, and Pitman) showed that any function of w which only depends on the biggest cycles is “close to random” after one Q2 shuffle. Remark 25.16. Exactly the same description of the right eigenfunc- tions holds for any combinatorial Hopf algebra Markov chain where H is cocommutative. There is a similar description of the left eigenfunctions. Remark 25.17. To learn more, see what Amy Peng has done.

26. 11/28/16

We are familiar with the classical bases mλ, eλ, pλ, hλ, sλ. But, there are also (1) Pλ(x1, ... , xn; t) the Hall-Littlewood polynomials (2) Jα(x1, ... , xn) the Jack polynomials (3) Pλ(x1, ... , xn; t, q) Today, we’ll talk about Hall-Littlewood polynomials. Hall was studying enumerative . If A is an , A is the direct product of its Sylow p subgroups. Let λ = n n |λ| 1 1 ··· r r . Say Mλ = p . Let λ gµν(p) = # { subgroups H ⊂ Mλ of type µ and cotype ν } . λ Now, fix λ, µ, ν and gµν(Γ) has a formula using the symmetric functions {uλ} with λ a partition and u a variable. We can write H to denote the algebra given by elements of the form λ aλuλ. Make H into an algebra by P λ uµuν = gµν(p)uλ. λ X He showed this is a commutative associative algebra freely gener- ated by uΓ . We can now map into Λ, the space of symmetric functions by uΓ 7 (n) er. Hall observed it is better to map uΓ 7 p 2 er. So, this is not a map over Z, but it is a map →

H Λ ⊗Z Z [1/p→] . 1 The image of uλ on the right hand side is Pλ(x1, x2, ... ; t), with t = . → p MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS99

Littlewood defined these pλ(x1, ... , xn; t) directly, and showed they fit together compatibly so that one may take n .

Example 26.1. We have → ∞ 4 2 2 4 3 3 p42(x1, x2; t) = x1x2 + x1x2 + (1 − t)x1x3.

λ Fact 26.2. These gµν(p) and Pλ(x1, ... ; t) are crucial in writing down

(1) the character theory of GLn(Fq) (2) the spherical functions (i.e., the K bi-invariant functions) on GLn(k)/K where k is a local field and K is a maximal compact subgroup. (3) the distribution of the critical group of a random graph (Jason Fulman makes the connection in a nice way between sand- piles and Hall-Littlewood polynomials) (4) K-P-Z found a nonlinear system of partial differential equa- tions. Alexei Borodin and Ivan Corwin, made sense of these equations in terms of Hall-Littlewood polynomials.

Here are some more facts of a different kind.

Fact 26.3. (1) The Pλ(x; t) are an orthogonal basis for Λ with co- −1 efficients for Λ ⊗ Z [t] with hpλ, pµi = ζλ (t)δλµ with

n  −1 λi ζλ(t) = ζλ 1 − x i=1 Y and

di ζλ = i di! i Y a1 a2 λ and λ = 1 2 ··· . Furthermore, Pλ = µ≤λ wµmµ for some wλ with wλ = 1. µ λ P (2) When t = 0, Pλ = sλ. (3) When t = 1, Pλ = mλ.

2 Example 26.4. Take λ = 21 and mλ = c(p ) × c(p), where c(n) := Z/n. When µ = 1, we have H = c(p). We can have ν corresponding to c(p) × c(p) = (1, 1), or ν = 2 corresponding to c(p2). 21 What is g1,11? 100 AARON LANDESMAN

Say p = 3. We have H = {(0, 0) , (0, 1) , (0, 2)} , H = {(0, 0) , (3, 1) , (6, 2)} , H = {(0, 0) , (6, 1) , (3, 2)} . 21 21 Here, we have g1,11(p) = p. We also have g11,2 = 1, with

H = {(0, i)}0≤i

sλsν = cµν6λsλ. λ X λ λ (2) If cµν 6= 0, then gµν has degree n(λ) − n(µ) − n(ν), with n(λ) = n i=1(i − 1)λi. (3) We have gλ = gλ . P µν νµ λ Remark 26.7. These gµν arise also naturally in the study of dvr’s R. ∼ n λi If M is a finitely generated R-module, then M = ⊕i=1p/p . That is, λ these are classified by partitions λ. If |R/p| = q, then gµν(q) is the number of modules of shape µ and cotype ν in mλ.

27. 11/30/16 Definition 27.1. A Gelfond pair is a pair (G, H) for G a group and H a subgroup, so that for any two functions on H, the convolution commutes. Proposition 27.2. We have A is equivalent to G, H Gelfond, and B = L(G/H) = V ⊕ V1 ⊕ · · · ⊕ Vk, where the Vi are multiplicity free.

We have Sn, Sk × Sn−k and L(Sn/Sk × Sn−k = V0 ⊕ · · · ⊕ Vk. Here, Vi corresponds to λ = (n − i, i) for 0 ≤ i ≤ k. Then, G L(G/H) = IndH(1) = V0 ⊕ · · · ⊕ Vk.

Frobenius reciprocity says each Vi has the same 1-dim space of H- fixed vectors, call them s0, s1, .... Then,

si(ux) = si(x) k for all u, x, and si(id) = 1. These si from i = 1 are the spherical functions MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS101

Example 27.3. If G = O(n), H = O(n − 1), then (G, H) is Gelfond n−1 and G/H = S . Then, L(G/H) = ⊕i=0Vi.

Then, we have Sn/Sk × Sn−k with ∞si(x) and x a k-set of [n]. This only depends on x through all |x ∩ {1, 2, ... , n} | = d. Then sn(x) = n Pi (d) are the Dual-Hahn polynomials. If P(x) is left H invariant, the probably P(hx) = P(x), with Pd(i) = x si(x)P(x) with si the ith spherical function. P Remark 27.4. Saxl classified all subgroups H of Sn (and more gener- ally all finite groups of Lie type) where the quotient Sn/H is Gelfond. Apart from some small particular cases, like the alternating groups, the main cases are H equal to Sk × Sn−k and S2 o Sn/2 (the hyperocta- hedral group) when n is even.

Example 27.5. We have GLn(R) and O(n, R) is a Gelfond pair. q Here, in GLn, O(n) acts on Σ with Σ n × n symmetric (with Σ = qΣqT ). Then, the isotropy subgroup of this action is the orthogonal group. Then,

L(GLn/On) = ⊕λVλ where λ ranges over all partitions with at most n parts.

The spherical functions are called zλ. This is a polynomial in the eigenvalues of Σ. Say zλ(Σ) are polynomials in the eigenvectors of Σ. Then, for n x1, ... , xn ∈ R , we can take 1 T Σ^ := (x − x) x − x , n! i j The principle components areX the eigenvectors of Σ^. So, an impor- tant math problem is: what is the probability distribution of the eigenvectors. If we assume our data comes from a normal distri- bution, which has a true covariance. It turns out that zλ(x1, ... , xn) We have

zλ(x1, ... , xn) = zµsλ(µ)Pµ, µX`n where Pµ are the zonal polynomials.

Remark 27.6. Zonal polynomials are spherical functions for GLn/O(n). They were invented to do mathematical statistics. They are a basis for symmetric functions. They are a special case of MacDonald poly- nomials. 102 AARON LANDESMAN

28. 12/2/16 Today we’ll discuss MacDonald polynomials. 28.1. Macdonald Polynomials. For today, we write F := Q(q, t). Definition 28.1. The inner product

hpλ, pµi := zλ(q, t)δλµ where `(λ) 1 − qλi zλ(q, t) := . 1 − tλi r=1 X Remark 28.2. If you’d like, for intuition, you can pretend q > 0, t < 1, which makes these inner products positive. Theorem 28.3. Let < denote the lexicographic order. (If n = 4 then 14 < 212 < 22 < 31 < 4.) For every partition λ, there exist unique unique polynomials

Pλ(x; q, t) ∈ ΛF so that (1)

Pλ = mλ + uλµmµ µ<λ X and (2) If λ 6= µ.

hPλ, Pµi = 0

Definition 28.4. The polynomials Pλ(x, q, t) are the Macdonald poly- nomials. Here are some special cases of the theorem.

Example 28.5. (1) If q = t, we have Pλ = sλ. (2) If q = 0, we have Pλ are the Hall-Littlewood polynomials. (3) Let α > 0. Set q = tα, let q 1 so that t 1. Then, 1 − qm 1 − tαm = α, 1 − tm 1→− tm → so the inner product becomes → `(λ) hpλ, pµi = δλµzλa . These are the Jack symmetric functions. When α = 2, these are the zonal polynomials from the last lecture. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS103

(4) When t = 1, Pλ(x, q, 1) = mλ. (5) When q = 1, we have P(x, 1, t) = eλ0 . We have −1 −1 −1 |λ| zλ(q , t ) = (q t) zλ(q, t). 28.2. Proof of Theorem 28.3. We now prove Theorem 28.3 We have inner products

(txiyj; q) Π(x, y; q, t) = (x y ; q) i j i j Y, ∞ where ∞

(a; q) = (1 − aqr) . ∞ r=1 Y ∞ Lemma 28.6. We have −1 Π (x, y; q, t) = zλ (q, t)Pλ(x)Pλ(y). λ X Proof. Then,

r r −1 log π(x, y; q, t) = . log(1 − xiyjq ) − log(1 − txyjq ) ∞ i j r=0 X, X So,

1 r n n (xiyjq ) (1 − t ) ∞ ∞ n i j i=0 n=1 X, X X Then, n 1 rn n 1 1 − t n n xiyjq (1 − t ) = n xi yj ∞ ∞ n ∞ n 1 − q i j r=0 n=1 i j n=1 X, X X X, X 1 1 − tn = n Pn(x)Pn(y). ∞ n 1 − q n=1 X Therefore, n 1 1−t P (x)P (y) Π(x, y; q, t) = e n 1−qn n n ∞ n=1 Y 1  1 1 − tn mn = n Pn(x)Pn(y) . ∞ m n 1 − q n m =0 Y Xn 104 AARON LANDESMAN

 Here’s another fact: n Lemma 28.7. Let uλ and vλ be two bases of ΛF ¡ working in x1, ... , xn. Then, the following are equivalent: (1) We have

huλ, vλiq,t = δλµ (2) We have

uλ(x)vλ(y) = Π(x, y; q, t) λ X ∗ Proof. Let pλ := zλ(q, t)pλ. Then, ∗ hpλ, pµi = δλµ. Then, expanding, ∗ uλ = aλρpρ. ρ X Also,

vµ = bµσpσ. σ X Then,

huλ, vµiq,t = aλρbµρ. ρ X Then, vλ, uµ are dual bases if and only if

aλρbµρ = δλµ. ρ X  Now, note that the second condition of Lemma 28.7 holds if and only if ∗ uλ(x)vλ(y) = pρ(x)pρ(y), λ ρ X X by Lemma 28.6. This in turn is equivalent to

aλρbλσ = δρσ. λ X  Exercise 28.8. Verify this claimed equivalence. Hint: Take A = aλρ and B = (bµσ), and use that AB = I if and only if BA = I. MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS105
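In matrix form (spelling out the hint): with $A = (a_{\lambda\rho})$ and $B = (b_{\mu\sigma})$, condition (1) of Lemma 28.7 says $AB^T = I$, while the last displayed equation says $A^T B = I$, equivalently $B^T A = I$; so the two conditions are equivalent precisely because $AB = I$ if and only if $BA = I$.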

Now, the idea to finish the proof is to work in $x_1, \dots, x_n$ and construct an $F$-linear map $D : \Lambda^n_F \to \Lambda^n_F$ satisfying:
(1) $D m_\lambda = \sum_{\mu \le \lambda} c_{\lambda\mu} m_\mu$.
(2) $\langle Df, g \rangle_{q,t} = \langle f, Dg \rangle_{q,t}$.
(3) If $\lambda \ne \mu$ then $c_{\lambda\lambda} \ne c_{\mu\mu}$.
In other words, $D$ is self-adjoint and diagonalizable with distinct eigenvalues. The Macdonald polynomials are the eigenvectors of $D$. We'll construct this $D$ next time, probably, but first let's check uniqueness, assuming such a $D$ exists.

Proposition 28.9. For every partition $\lambda$ of length $\le n$, there exists a unique symmetric polynomial $P_\lambda \in \Lambda^n_F$ such that
$$P_\lambda = \sum_{\mu \le \lambda} u_{\lambda\mu} m_\mu$$
with $u_{\lambda\mu} \in F$, $u_{\lambda\lambda} = 1$, and $D P_\lambda = c_{\lambda\lambda} P_\lambda$.

The main content of this is the uniqueness; existence will be postponed until next time, when we construct $D$.

Proof. Suppose both conditions in the proposition statement hold. Then,

$$D P_\lambda = \sum_{\mu \le \lambda} u_{\lambda\mu}\, D m_\mu = \sum_{\nu \le \mu \le \lambda} u_{\lambda\mu} c_{\mu\nu}\, m_\nu.$$
Also,
$$c_{\lambda\lambda} P_\lambda = \sum_\nu c_{\lambda\lambda} u_{\lambda\nu}\, m_\nu.$$
Therefore, equating coefficients of $m_\nu$,
$$c_{\lambda\lambda} u_{\lambda\nu} = \sum_{\nu \le \mu \le \lambda} u_{\lambda\mu} c_{\mu\nu}.$$

This implies

$$\left(c_{\lambda\lambda} - c_{\nu\nu}\right) u_{\lambda\nu} = \sum_{\nu < \mu \le \lambda} u_{\lambda\mu} c_{\mu\nu}$$
if $\nu < \lambda$. Since $c_{\lambda\lambda} \ne c_{\nu\nu}$ and $u_{\lambda\lambda} = 1$, this gives us a recursive way of computing the $u_{\lambda\nu}$'s. Then, orthogonality is free because

$$c_{\lambda\lambda} \langle P_\lambda, P_\mu \rangle_{q,t} = \langle D P_\lambda, P_\mu \rangle_{q,t} = \langle P_\lambda, D P_\mu \rangle_{q,t} = c_{\mu\mu} \langle P_\lambda, P_\mu \rangle_{q,t},$$
so $\langle P_\lambda, P_\mu \rangle_{q,t} = 0$ whenever $c_{\lambda\lambda} \ne c_{\mu\mu}$, i.e., whenever $\lambda \ne \mu$. $\square$

29. 12/5/16

Recall that last time we were trying to define Macdonald polynomials. We continue with this goal today.

29.1. Review. Let $\Lambda_{n,F}$ denote the ring of symmetric functions in $n$ variables over the field $F = \mathbb{Q}(q, t)$. We have an inner product
$$\langle p_\lambda, p_\mu \rangle_{q,t} = \delta_{\lambda\mu}\, z_\lambda \prod_{i=1}^{\ell(\lambda)} \frac{1 - q^{\lambda_i}}{1 - t^{\lambda_i}}.$$
The claim (the existence of Macdonald polynomials) is that for all $\lambda$ there exist unique polynomials $P_\lambda(x; q, t) \in \Lambda_{n,F}$ so that
$$P_\lambda(x; q, t) = m_\lambda + \sum_{\mu < \lambda} u_{\lambda\mu} m_\mu$$
with $u_{\lambda\mu} \in F$, and
$$\langle P_\lambda, P_\mu \rangle_{q,t} = 0 \quad \text{for } \lambda \ne \mu.$$
To construct these polynomials, Macdonald builds an operator
$$D : \Lambda_{n,F} \to \Lambda_{n,F}$$
satisfying:
(1) $D m_\lambda = \sum_{\mu \le \lambda} c_{\lambda\mu} m_\mu$.
(2) $\langle Df, g \rangle_{q,t} = \langle f, Dg \rangle_{q,t}$ for $f, g \in \Lambda_{n,F}$.
(3) For $\lambda \ne \mu$ we have $c_{\lambda\lambda} \ne c_{\mu\mu}$.

The Macdonald polynomials are then taken to be the eigenvectors of $D$.

29.2. Defining D. We now define the operator $D$ explicitly. Let
$$\Delta := \prod_{1 \le i < j \le n} (x_i - x_j) = \sum_{w \in S_n} \varepsilon(w)\, x^{w\delta},$$
where $\delta = (n-1, n-2, \dots, 1, 0)$. Define the shift operators
$$T_{t, x_i} f(x_1, \dots, x_n) = f(x_1, \dots, t x_i, \dots, x_n), \qquad T_{q, x_i} f(x_1, \dots, x_n) = f(x_1, \dots, q x_i, \dots, x_n).$$
Then, define $D$ by
$$D := \frac{1}{\Delta} \sum_{i=1}^n \left(T_{t, x_i} \Delta\right) T_{q, x_i}.$$

Remark 29.1. Observe,
$$D = \frac{1}{\Delta} \sum_{i=1}^n \left(T_{t, x_i} \Delta\right) T_{q, x_i} = \sum_{i=1}^n \prod_{j \ne i} \frac{t x_i - x_j}{x_i - x_j}\, T_{q, x_i}.$$
Then, Macdonald proves the three properties by manipulatorics. Therefore, the Macdonald polynomials exist.

29.3. Examples of Macdonald polynomials.

Example 29.2. We have
$$P_{3,1}(x; q, t) = \frac{(q+1)(t-1)^2 \left(3q^2 t + q^2 + 2qt^2 + 2qt + 2q + t + 2\right)}{24\,(qt-1)^2 (qt+1)}\, p_1(x)^4 + \cdots - \frac{(q+1)(q-1)^2 (t+1)\left(t^2+1\right)}{4\,(qt-1)^2 (qt+1)}\, p_4(x),$$
where the $p_i$ are the power sums, so $p_4(x) = \sum_{i=1}^n x_i^4$.

Remark 29.3. Everybody in this field, save Persi, uses the computer. The package to use is John Stembridge's Maple package (at Michigan); there's also Mike Zabrocki's homepage (at York in Canada), and Sage programs. Persi computed this by asking his friend Bergeron. There's a particular integral form
$$H_\mu(x; q, t) = P_\mu\left(\frac{x}{1 - t}; q, t\right).$$
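Since Sage came up: here is a minimal sketch (to be run inside a Sage session) of how one can compute such expansions. The basis constructors (macdonald().P(), macdonald().Ht(), schur(), monomial()) are SageMath's own; this is an illustration, not the lecture's code.

    Sym = SymmetricFunctions(FractionField(QQ['q,t']))
    P = Sym.macdonald().P()      # Macdonald polynomials P_lambda
    Ht = Sym.macdonald().Ht()    # modified Macdonald polynomials
    s = Sym.schur()
    m = Sym.monomial()
    print(m(P[2]))       # P_(2) expanded in the monomial basis
    print(s(Ht[2, 1]))   # H~_(21) expanded in the Schur basis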

Example 29.4. In this case, we have
$$\widetilde{H}_{3,1}(x) = \frac{1}{24}(q+1)\left(q^2 t + 3q^2 + 2qt + 2q + 3t + 1\right) p_1^4 + \cdots - \frac{1}{4}(q+1)(q-1)^2 (t-1)\, p_4.$$

If we now expand $\widetilde{H}_{3,1}$ in the Schur functions, we obtain
$$\widetilde{H}_{3,1}(x; q, t) = s_4 + \left(q^2 + q + t\right) s_{31} + \left(q^2 + qt\right) s_{22} + \left(q^3 + q^2 t + qt\right) s_{211} + q^3 t\, s_{1^4}.$$
We also have
$$\widetilde{H}_{3}(x; q, t) = s_3 + \left(q + q^2\right) s_{21} + q^3\, s_{1^3},$$
$$\widetilde{H}_{21}(x; q, t) = s_3 + (q + t)\, s_{21} + qt\, s_{1^3},$$
$$\widetilde{H}_{1^3}(x; q, t) = s_3 + \left(t + t^2\right) s_{21} + t^3\, s_{1^3}.$$
In general, we have the Macdonald positivity conjecture:

Theorem 29.5 (Haiman). Define $\widetilde{K}_{\lambda\mu}(q, t)$ by
$$\widetilde{H}_\mu = \sum_\lambda \widetilde{K}_{\lambda\mu}(q, t)\, s_\lambda.$$
Then the $\widetilde{K}_{\lambda\mu}(q, t)$ have positive integral coefficients.

Remark 29.6. The proof involves a lot of algebraic geometry and Hilbert schemes, surprisingly! However, this means no one has been able to read it, since no one knows both!

29.4. Understanding the operator D in an alternate manner.

Definition 29.7. Let $P_k$ be the set of partitions of $k$. Let $q, t > 1$. Define
$$\pi_{q,t}(\lambda) = \frac{Z}{z_\lambda(q, t)}.$$
Here,
$$z_\lambda(q, t) = z_\lambda \prod_{i=1}^{\ell(\lambda)} \frac{1 - q^{\lambda_i}}{1 - t^{\lambda_i}}, \qquad Z = \frac{(q, q)_k}{(t, q)_k},$$
where $(x, y)_k$ is the Pochhammer symbol
$$(x, y)_k := \prod_{i=0}^{k-1} \left(1 - x y^i\right).$$
This determines a probability measure on $P_k$, meaning that the $\pi_{q,t}(\lambda)$ add up to 1. Taking $q = t$, we get
$$\pi_{q,q}(\lambda) = \frac{Z}{z_\lambda}.$$
Taking $q = t^\alpha$ and letting $q$ approach 1, we get
$$\pi_\alpha(\lambda) = \frac{Z}{z_\lambda}\, \alpha^{-\ell(\lambda)}.$$
This is useful in biology, where it is known as the Ewens sampling formula.
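As a quick numerical sanity check (plain Python, not from the lecture; the helpers are mine), one can verify that the $\pi_{q,t}(\lambda)$ sum to 1 over the partitions of $k$, with $Z$ as above:

    from math import factorial, prod

    def partitions(k, max_part=None):
        # partitions of k as weakly decreasing tuples
        if max_part is None:
            max_part = k
        if k == 0:
            yield ()
            return
        for first in range(min(k, max_part), 0, -1):
            for rest in partitions(k - first, first):
                yield (first,) + rest

    def z(lam):
        # z_lambda = prod_i i^{m_i} m_i!, with m_i the multiplicity of i
        mult = {}
        for part in lam:
            mult[part] = mult.get(part, 0) + 1
        return prod(i**m * factorial(m) for i, m in mult.items())

    def z_qt(lam, q, t):
        # z_lambda(q, t) = z_lambda * prod_i (1 - q^{lambda_i})/(1 - t^{lambda_i})
        return z(lam) * prod((1 - q**p) / (1 - t**p) for p in lam)

    def poch(x, y, k):
        # (x, y)_k = prod_{i=0}^{k-1} (1 - x y^i)
        return prod(1 - x * y**i for i in range(k))

    k, q, t = 4, 1.5, 2.0
    Z = poch(q, q, k) / poch(t, q, k)
    print(sum(Z / z_qt(lam, q, t) for lam in partitions(k)))  # approximately 1.0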

Question 29.8. How do we pick $\lambda$ at random using $\pi_{q,t}(\lambda)$?

Finding Macdonald's operator $D$ translates to a Markov chain $M(\lambda, \lambda')$ on $P_k$. That is, from $\lambda$, there is a method of picking $\lambda'$. Each row of the matrix is a probability distribution, and it is easy to run this matrix. Further, $\pi_{q,t}(\lambda)$ is a stationary distribution. That is,
$$\sum_\lambda \pi_{q,t}(\lambda)\, M(\lambda, \lambda') = \pi_{q,t}(\lambda').$$
So, $\pi_{q,t}(\lambda)$ is a left eigenvector with eigenvalue 1 for $M(\lambda, \lambda')$. In order to describe the spectral theory of $M(\lambda, \lambda')$, we have
$$P_\lambda(x; q, t) = \frac{1}{c_\lambda(q, t)} \sum_{\rho \vdash |\lambda|} z_\rho^{-1} \prod_i \left(1 - t^{\rho_i}\right) X^\lambda_\rho(q, t)\, p_\rho,$$
with $X^\lambda_\rho$ defined by
$$X^\lambda_\rho(q, t) := \sum_\mu \chi^\mu_\rho\, K_{\mu\lambda}(q, t).$$
The $K_{\mu\lambda}$ are the Kostka numbers as we defined them previously. Inverting, the $X^\lambda_\rho$ determine the $K_{\mu\lambda}$ via
$$K_{\mu\lambda}(q, t) := \sum_\rho z_\rho^{-1}\, \chi^\mu_\rho\, X^\lambda_\rho(q, t).$$
Finally, the $\chi^\lambda_\mu$ are the characters of the symmetric group corresponding to the partition $\lambda$.

Theorem 29.9. $M(\lambda, \lambda')$ has eigenvalues $\{R_\lambda\}_{\lambda \in P_k}$. Here,
$$R_\lambda = \frac{t}{q^k - 1} \sum_{i=1}^{\ell(\lambda)} \left(q^{\lambda_i} - 1\right) t^{-i}.$$
The first eigenvalue is 1; the second eigenvalue is
$$\frac{t}{q^k - 1} \left( \frac{q^{k-1} - 1}{t} + \frac{q - 1}{t^2} \right).$$
Here, the eigenfunctions are

$$f_\lambda(\rho) = X^\lambda_\rho(q, t) \prod_{i=1}^{\ell(\rho)} \left(1 - q^{\rho_i}\right),$$
which are orthogonal in $L^2(\pi_{q,t})$. In our last lecture, on Wednesday, we'll say more about what this $D$ operator is.

30. 12/7/16

There are two schools applying Macdonald polynomials and other combinatorial techniques.

30.1. School 1. The first school is Garsia, Haiman, Haglund, and others. These people do combinatorics and manipulatorics. There are three main results from this school.
(1) Haglund's formula for $\widetilde{H}_\mu(x; q, t)$. Haglund's formula says that for $\mu \vdash n$,

$$\widetilde{H}_\mu = \sum_{\sigma \in S_n} q^{\mathrm{maj}(\pi(\sigma, \mu))}\, t^{\mathrm{inv}(\pi(\sigma, \mu))}\, L_{\mathrm{ides}(\sigma)}.$$
Here, $L_w$ is the fundamental quasi-symmetric function; $\mathrm{ides}(\sigma)$ is the descent set of $\sigma^{-1}$ (so ides stands for inverse descent); maj is the major index; $\pi(\sigma, \mu)$ is the filling gotten by putting $\sigma$ into the shape of $\mu$ from bottom to top; and inv is described below.

Example 30.1. Suppose $\sigma = 18362457$. We have $\sigma^{-1} = 15367482$. Then, $\mathrm{ides}(\sigma) = \{2, 5, 7\}$. We associate to $\mathrm{ides}(\sigma) = \{2, 5, 7\}$ the composition $(2, 3, 2, 1)$ (the difference set). Then, for $w$ a composition, $L_w$ is the corresponding quasi-symmetric function.
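In code (plain Python; the small helpers are mine, just to make the bookkeeping in Example 30.1 concrete):

    def inverse(sigma):
        # sigma as a tuple of values; returns sigma^{-1}, both 1-indexed
        inv = [0] * len(sigma)
        for pos, val in enumerate(sigma, start=1):
            inv[val - 1] = pos
        return tuple(inv)

    def descents(w):
        # descent set {i : w(i) > w(i+1)}, 1-indexed
        return sorted(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

    def composition(S, n):
        # the composition of n given by consecutive differences of S
        cuts = sorted(S) + [n]
        prev, comp = 0, []
        for c in cuts:
            comp.append(c - prev)
            prev = c
        return comp

    sigma = (1, 8, 3, 6, 2, 4, 5, 7)
    ides = descents(inverse(sigma))
    print(ides, composition(ides, 8))  # [2, 5, 7] [2, 3, 2, 1]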

Next, we explain what $\pi(\sigma, \mu)$ is. Say $\mu = (5, 3, 2, 2)$ is a partition of 12, and draw its diagram. Then, take a permutation of 12; say,
$$\sigma = 12, 2, 1, 3, 4, 7, 6, 11, 10, 9, 8, 5.$$
Then, put this permutation in the shape from bottom to top:

    11 10  9  8  5
     4  7  6
     1  3
    12  2

To compute maj (for Major MacMahon), the major index, we read each column from bottom to top; here
$$\mathrm{maj}(\pi(\sigma, \mu)) = \mathrm{maj}(12, 1, 4, 11) + \mathrm{maj}(2, 3, 7, 10) + \mathrm{maj}(6, 9) + \mathrm{maj}(8) + \mathrm{maj}(5) = 1 + 0 + 0 + 0 + 0.$$
In general, to compute maj we sum maj over all columns, where $\mathrm{maj}(\eta) = \sum_{i \in \mathrm{des}(\eta)} i$. (See the short sketch after this list.)

Finally, we describe inv. For each row, count 1 for each inversion, that is, each pair of entries $s_i, s_j$ with $i < j$ but $s_i > s_j$; except, if there is an inversion at $a < b$ and immediately above $a$ there is a $c$ with $a < c < b$, then don't count that inversion.

(2) Haiman's proof of Macdonald positivity: he showed one can express $\widetilde{H}_\mu(x; q, t) = \sum_\lambda K_{q,t}(\lambda, \mu)\, s_\lambda$, where the $K_{q,t}(\lambda, \mu)$ are polynomials in $q$ and $t$ with positive integer coefficients.

(3) The shuffle conjecture.

Here is a nice quote from Persi about school 1: "They're careful, they write proofs, they write code, they're great!"
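Here is the promised small plain-Python sketch of the maj statistic just defined (the helper names are mine; the test words are the column words from above):

    def des(word):
        # descent set of a word: 1-indexed positions i with word[i] > word[i+1]
        return [i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1]]

    def maj(word):
        # maj(eta) = sum of the positions in des(eta)
        return sum(des(word))

    print(des((12, 1, 4, 11)), maj((12, 1, 4, 11)))  # [1] 1
    print(des((2, 3, 7, 10)), maj((2, 3, 7, 10)))    # [] 0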

Example 30.2. Here is an expression for $\widetilde{H}_{21}(x; q, t)$ in the fundamental quasi-symmetric functions (indexed by descent sets):
$$\widetilde{H}_{21}(x; q, t) = L_\emptyset + q L_{\{1\}} + t L_{\{1\}} + q L_{\{2\}} + t L_{\{2\}} + qt\, L_{\{1,2\}}.$$

30.2. School 2. The second school is the DAHA school. Both schools think the other school is crazy. DAHA stands for "double affine Hecke algebra." This is the school of Macdonald, LLT (three authors), Arun Ram, Cherednik, and so on.

Example 30.3. If $G$ is a group and $H$ is a subgroup, the Hecke algebra is the algebra of bi-invariant functions on $G$,

$$H(G, H) = \{f : G \to \mathbb{C} : f(h_1 g h_2) = f(g) \text{ for all } h_1, h_2 \in H\}.$$
These Hecke algebras come from affine root systems. There are a few common types. We have type A, with $n - 1$ dots in a row; these have coefficients in $q$ and $t$. Similarly, this works for all types. Macdonald polynomials turn out to be type A, and there are generalizations to all types. Roughly, they are the characters of finite dimensional representations of the corresponding Lie groups. The last chapter of Macdonald's book is amazing. There is a classical analysis of lots of families of orthogonal polynomials, such as the Hermite, Hahn, Chebyshev, Laguerre, and Jacobi polynomials, and so on. There's a sort of hierarchy, where higher members specialize to lower members in the limit. There is a highest member of this hierarchy: the Askey–Wilson polynomials lie at the top, and provably so. It's a fact that the $P_\lambda$ associated to the root system $C_1$ are exactly the Askey–Wilson polynomials.

30.3. Persi's next project. Here is the next project Persi's planning to work on. If you add two numbers, one usually has places where one carries the one. A natural question is how often one must carry digits. Say one adds $n$ numbers in base $b$. John Holte looks at fractals, and he got interested in the carries process when the numbers are chosen at random. The carries form a Markov chain (meaning whether you carry at step $m$ only depends on what the carry was at step $m - 1$). We let $M(i, j)$ denote the matrix whose $(i, j)$ entry is the chance that the next carry is $j$, given that the current carry is $i$. This turns out to be an amazing matrix. The eigenvalues are $1, 1/b, \dots, 1/b^{n-1}$. The eigenfunctions are also quite explicit, with Eulerian numbers all over the place. The $1/b^i$ are also the eigenvalues of riffle shuffling, and Persi showed with Jason Fulman that the descent process of repeated riffle shuffling is exactly the same as the carries process.

What does this have to do with this course? Suppose we have a finite abelian group. Observe $\mathbb{Z}/p^2 \supset \mathbb{Z}/p$. These tell us about the carries process. More generally, suppose we have $M_\lambda$ a finite abelian group of type $\lambda$, and $H_\mu \subseteq M_\lambda$ a subgroup of type $\mu$, with quotient $M_\lambda/H_\mu$ of type $\nu$. This splits if and only if the partition splits into two parts.
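As a small numerical illustration (plain Python with numpy; my own sketch, assuming each digit added is uniform in $\{0, \dots, b-1\}$), one can build the carries matrix exactly and check the claimed eigenvalues:

    import numpy as np
    from itertools import product

    def carries_matrix(n, b):
        # distribution of the sum S of n independent uniform base-b digits
        dist = np.zeros(n * (b - 1) + 1)
        for digits in product(range(b), repeat=n):
            dist[sum(digits)] += 1
        dist /= b**n
        # carry-out = floor((carry-in + S)/b); carries live in {0, ..., n-1}
        M = np.zeros((n, n))
        for i in range(n):
            for s, p in enumerate(dist):
                M[i, (i + s) // b] += p
        return M

    M = carries_matrix(3, 10)
    print(sorted(np.linalg.eigvals(M).real, reverse=True))
    # approximately [1.0, 0.1, 0.01], i.e., 1, 1/b, 1/b^2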