MATH 263A NOTES: ALGEBRAIC COMBINATORICS AND SYMMETRIC FUNCTIONS
AARON LANDESMAN
CONTENTS

1. Introduction
2. 9/26/16
   2.1. Logistics
   2.2. Overview
   2.3. Down to Math
   2.4. Partitions
   2.5. Partial Orders
   2.6. Monomial Symmetric Functions
   2.7. Elementary symmetric functions
   2.8. Course Outline
3. 9/28/16
   3.1. Elementary symmetric functions $e_\lambda$
   3.2. Homogeneous symmetric functions, $h_\lambda$
   3.3. Power sums $p_\lambda$
4. 9/30/16
5. 10/3/16
   5.1. Expected Number of Fixed Points
   5.2. Random Matrix Groups
   5.3. Schur Functions
6. 10/5/16
   6.1. Review
   6.2. Schur Basis
   6.3. Hall Inner Product
7. 10/7/16
   7.1. Basic properties of the Cauchy product
   7.2. Discussion of the Cauchy product and related formulas
8. 10/10/16
   8.1. Finishing up last class
   8.2. Skew-Schur Functions
   8.3. Jacobi-Trudi
9. 10/12/16
   9.1. Eigenvalues of unitary matrices
   9.2. Application
   9.3. Strong Szegő limit theorem
10. 10/14/16
   10.1. Background on Tableaux
   10.2. Kostka Numbers
11. 10/17/16
   11.1. Relations of skew-Schur functions to other fields
   11.2. Characters of the symmetric group
12. 10/19/16
13. 10/21/16
   13.1. Review
   13.2. Completing the example from last class
   13.3. Completing the example; back to the Schur functions
14. 10/24/16
15. 10/26/16
16. 10/28/16
   16.1. Plane partitions, RSK, and MacMahon's generating function
17. 10/31/16
   17.1. Announcements and Review
18. 11/2/16
   18.1. Overview
   18.2. P-partitions
   18.3. The order polynomial
19. 11/4/16
   19.1. Review
   19.2. A possibly non-politically correct example
   19.3. More on descents
   19.4. Shuffling Cards
20. 11/7/16
21. 11/9/16
   21.1. Algebra of the $A_i$
   21.2. Quasi-Symmetric Functions
22. 11/11/16
   22.1. Application to symmetric function theory
   22.2. Connection of quasi-symmetric functions to card shuffling
   22.3. Applications
23. 11/14/16
   23.1. Combinatorial Hopf Algebras
   23.2. Examples of Hopf Algebras
   23.3. What did Hopf do?
24. 11/16/16
   24.1. Definition of combinatorial Hopf algebras
   24.2. Examples
25. 11/18/16
   25.1. What do Hopf algebras have to do with card shuffling?
   25.2. Lyndon words
   25.3. The standard bracketing of Lyndon words
26. 11/28/16
27. 11/30/16
28. 12/2/16
   28.1. Macdonald Polynomials
   28.2. Proof of Theorem 28.3
29. 12/5/16
   29.1. Review
   29.2. Defining $D$
   29.3. Examples of Macdonald polynomials
   29.4. Understanding the operator $D$ in an alternate manner
30. 12/7/16
   30.1. School 1
   30.2. School 2
   30.3. Persi's next project
1. INTRODUCTION

Persi Diaconis taught a course (Math 263A) on Algebraic Combinatorics and Symmetric Function Theory at Stanford in Fall 2016. These are my "live-TeXed" notes from the course. Conventions are as follows: each lecture gets its own "chapter," and appears in the table of contents with the date. Of course, these notes are not a faithful representation of the course, either in the mathematics itself or in the quotes, jokes, and philosophical musings; in particular, the errors are my fault. By the same token, any virtues in the notes are to be credited to the lecturer and not the scribe. Thanks to Lisa Sauermann for taking notes on October 17, when I missed class.[1] Please email suggestions to aaronlandesman@gmail.com.
[1] This introduction has been adapted from Akhil Mathew's introduction to his notes, with his permission.
2. 9/26/16

2.1. Logistics.
(1) Math 263A
(2) Algebraic Combinatorics
(3) Persi Diaconis
(4) Office hours: Tuesday 2-4, 383D
(5) (No email)

2.2. Overview. This is a course in algebraic combinatorics and symmetric function theory. We'll talk about what we want to cover in the course. Combinatorics is pretty hard to define. It deals with things like finite sets $X_n$, permutations, partitions, graphs, and trees. We might try to estimate $|X_n|$, functions $T : X_n \to \mathbb{R}$, or
$$|\{x \in X_n : T(x) = y\}|.$$
Here's a slogan: symmetric function theory "makes math" out of lots of classical combinatorics. We'll try to cover
(1) Chapter I of Macdonald's book, Symmetric Functions and Hall Polynomials,
(2) more things that weren't mentioned. . .

Remark 2.1. We'll have many digressions into "why are we studying this" and "what is it good for."

2.3. Down to Math.

Definition 2.2. For $n \in \mathbb{Z}_{\geq 0}$, a weak composition of $n$ is a sequence $(a_1, a_2, \dots)$ with $a_k \geq 0$ and $\sum_{k=1}^\infty a_k = n$.

Definition 2.3. Let $R$ be a commutative ring, for example $R = \mathbb{Z}, \mathbb{Q}, \mathbb{Z}[x], \dots$. Suppose we have infinitely many variables $x_1, x_2, x_3, \dots$. Then a homogeneous symmetric function of degree $n$ is a formal power series
$$f(x_1, x_2, \dots) = f(x) = \sum_\alpha c_\alpha x^\alpha$$
where
(1) $\alpha$ ranges over all weak compositions of $n$,
(2) $c_\alpha \in R$,
(3) $x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots$,
(4) $f(x_1, x_2, \dots) = f(x_{\sigma(1)}, x_{\sigma(2)}, \dots)$ for all $\sigma \in S_\infty$,
(5) every term has the same degree.

Example 2.4.
(1) $f(x) = \sum_{i=1}^\infty x_i$ is a symmetric function.
(2) $f(x) = x_1^2 x_2^2 + x_1^2 x_3^2 + \cdots + x_2^2 x_3^2 + \cdots = \sum_{i<j} x_i^2 x_j^2$ is another symmetric function.

Definition 2.5. Let $\Lambda^n_R$ be all symmetric functions of degree $n$ over $R$.

Remark 2.6. We often omit the subscript $R$ when it is understood or clear from context.

Remark 2.7. We have $\Lambda^n \cdot \Lambda^m \subset \Lambda^{n+m}$.

Definition 2.8. Define $\Lambda_R := \bigoplus_{n=0}^\infty \Lambda^n_R$.

2.4. Partitions.

Definition 2.9. $\lambda$ is a partition of $n$, written $\lambda \vdash n$, if
$$\lambda = (\lambda_1, \lambda_2, \dots) \quad \text{with } \lambda_1 \geq \lambda_2 \geq \cdots \geq 0 \quad \text{and} \quad \sum_i \lambda_i = n.$$
Write $|\lambda| = n$ and $\ell(\lambda) :=$ the number of nonzero parts of $\lambda$.

Example 2.10. The partitions of 5 are $\{5, 41, 32, 311, 221, 2111, 11111\}$.

We will often write
$$\lambda = 1^{n_1(\lambda)} 2^{n_2(\lambda)} \cdots$$
with $n_i(\lambda)$ equal to the number of parts equal to $i$. For example, $221 = 1^1 2^2$. These satisfy
$$\sum_i i \, n_i(\lambda) = n.$$

Example 2.11. We can also draw Young diagrams with dots or Young tableaux with boxes. Here is the partition $1^3 2^2 3 = 322111 \vdash 10$:

• • •
• •
• •
•
•
•
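The bookkeeping above is easy to check by machine. Here is a short sketch (the helper names are my own, not from the notes) that generates all partitions of 5 as in Example 2.10 and verifies $\sum_i i \, n_i(\lambda) = |\lambda|$:

```python
from collections import Counter

def partitions(n, max_part=None):
    """Generate the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

parts5 = list(partitions(5))
print(parts5)  # the 7 partitions of 5, matching Example 2.10

# Check sum_i i * n_i(lambda) = n, where n_i counts the parts equal to i.
for lam in parts5:
    n_i = Counter(lam)
    assert sum(i * m for i, m in n_i.items()) == 5
```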
Definition 2.12. If $\lambda$ is a partition of $n$, the transpose $\lambda'$ is the partition obtained by flipping the Young diagram across its main diagonal.

2.5. Partial Orders.
Example 2.13. We can define a partial order by $\lambda \leq \mu$ if $\lambda_i \leq \mu_i$ for all $i$.

Example 2.14. One can write down other partial orders. For example, we can define a partial order on these diagrams by saying one can get from one partition to another by moving dots to adjacent rows, so that at every stage one has a partition. Algebraically, this partial order is the majorization (dominance) order, where $\lambda \leq \mu$ if
$$\sum_{i=1}^j \lambda_i \leq \sum_{i=1}^j \mu_i \quad \text{for all } j.$$

Fact 2.15. We have $\lambda \leq \mu \iff \mu' \leq \lambda'$. This isn't too hard, but it's a little bit finicky, and we'll come back to proving it later.
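Fact 2.15 can be tested exhaustively for small $n$. The sketch below (helper names mine) checks that $\lambda \leq \mu$ in dominance order if and only if $\mu' \leq \lambda'$, over all pairs of partitions of 8:

```python
def partitions(n, max_part=None):
    """Generate the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    """Transpose of the Young diagram: the column lengths of lam."""
    if not lam:
        return ()
    return tuple(sum(1 for part in lam if part > i) for i in range(lam[0]))

def dominates(mu, lam):
    """True if lam <= mu in dominance order: partial sums of lam <= those of mu."""
    s_l = s_m = 0
    for i in range(max(len(lam), len(mu))):
        s_l += lam[i] if i < len(lam) else 0
        s_m += mu[i] if i < len(mu) else 0
        if s_l > s_m:
            return False
    return True

ps = list(partitions(8))
for lam in ps:
    for mu in ps:
        # lam <= mu  iff  mu' <= lam'  (Fact 2.15)
        assert dominates(mu, lam) == dominates(conjugate(lam), conjugate(mu))
print("Fact 2.15 verified for all", len(ps), "partitions of 8")
```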
Example 2.16. Take $\lambda < \mu$ if $|\lambda| < |\mu|$, or if $|\lambda| = |\mu|$ and $\lambda_1 = \mu_1, \dots, \lambda_n = \mu_n, \lambda_{n+1} < \mu_{n+1}$ for some $n$. This is a lexicographic ordering.

2.6. Monomial Symmetric Functions. Suppose $\lambda$ is some partition $\lambda = (\lambda_1, \lambda_2, \dots)$, and let $m_\lambda = \sum_\alpha x^\alpha$, the sum over all distinct permutations $\alpha$ of $\lambda$.

Example 2.17. We have $m_{21} = \sum_{i \neq j} x_i^2 x_j$.

Every symmetric function of degree $n$ is uniquely an $R$-linear combination of the $m_\lambda$ for $\lambda \vdash n$; that is, the $m_\lambda$ form a basis of $\Lambda^n$.

2.7. Elementary symmetric functions.

Definition 2.20. The elementary symmetric functions are
$$e_j = \sum_{i_1 < i_2 < \cdots < i_j} x_{i_1} x_{i_2} \cdots x_{i_j}.$$

Fact 2.21. We'll see that the $e_\lambda$, as $\lambda \vdash n$, form a basis of $\Lambda$ over $\mathbb{Z}$.

Lemma 2.22. We have
$$e_\lambda = \sum_{\mu \vdash n} M_{\lambda\mu} m_\mu,$$
where $M_{\lambda\mu}$ is the number of 0-1 matrices with row sums $\lambda$ and column sums $\mu$.

Proof. Say $\lambda = \lambda_1 \cdots \lambda_r$, $\mu = \mu_1 \cdots \mu_s$.

Example 2.23 (Darwin's data; see Persi's paper "Sequential Monte Carlo methods for statistical analysis of tables"). Look at the set of all tables with the same row sums and the same column sums (this is the set counted by $M_{\lambda\mu}$) and see where Darwin's original table fits in. Computing the size of this set is a #P-complete problem, and nobody knows the answer.

Example 2.24. Given a bipartite graph, one can represent it as a 0-1 matrix depending on whether vertex $i$ in the first column is connected to vertex $j$ in the second column. The degrees of the vertices are the row and column sums. We have the generating function
$$\prod_{i,j} (1 + x_i y_j) = \sum_{\lambda, \mu} M_{\lambda\mu} m_\lambda(x) m_\mu(y).$$
You can pick off the $\lambda, \mu$ coefficient of the left side by using a fast Fourier transform. So, you want to get your hands on the coefficients of $x^\lambda y^\mu$ in the left hand product.

2.8. Course Outline.
(1) Next time, we'll talk about the classical bases $e_\lambda, h_\lambda, p_\lambda$.
(2) We'll discuss the Hall inner product.
(3) Schur functions.
(4) The Robinson-Schensted-Knuth correspondence.
(5) Character theory of the symmetric group. Letting $R_n$ be the class functions of $S_n$, we get $\bigoplus_n R_n \cong \Lambda$. In particular, $s_\lambda = \sum_\mu z_\mu^{-1} \chi^\lambda_\mu p_\mu$, with $\chi^\lambda$ the characters of the symmetric group.
(6) Random matrix theory (relates to characters of the unitary group).
(7) Combinatorial Hopf algebras: one can stick things together by combinatorics, and one can also pull them apart. Whenever you can pull things apart and stick them together, you can form a Hopf algebra. It turns out that $\Lambda$ is the terminal object in a category of combinatorial Hopf algebras. This ends up explaining all sorts of generating functions.
(8) Macdonald polynomials.

There will be two course projects.
(1) Everybody, from now until November 2: choose 10 problems. We'll try and combine them so we have a set of solutions.
(2) At the end, there will be a list of things to do a small final paper on. There are lots of interesting applications to algebraic geometry and so on.

3. 9/28/16

Today we'll talk about various bases for symmetric functions.

3.1. Elementary symmetric functions $e_\lambda$. We defined the elementary symmetric functions last time as
$$e_r = \sum_{1 \leq i_1 < i_2 < \cdots < i_r} x_{i_1} x_{i_2} \cdots x_{i_r}.$$
Define
$$e_\lambda := e_{\lambda_1} e_{\lambda_2} \cdots = \prod_{i=1}^\infty e_i^{n_i(\lambda)}.$$
We have
$$E(t) := \prod_{i=1}^\infty (1 + x_i t) = \sum_{i=0}^\infty e_i t^i,$$
with $e_0 = 1$. Note that in $n$ variables, $\prod_{i=1}^n (1 + x_i t) = \sum_{i=0}^n e_i t^i$.

Proposition 3.1. For any $\lambda$, we have
$$e_{\lambda'} = \sum_{\mu \leq \lambda} a_{\mu\lambda} m_\mu,$$
with $\leq$ the dominance order, and $a_{\lambda\lambda} = 1$. Remember $\lambda'$ is the transpose of $\lambda$.

Proof. Write $e_{\lambda'} = \prod_i e_{\lambda'_i}$. Expanding this product, a typical term is
$$(x_{i_1} x_{i_2} \cdots)(x_{j_1} x_{j_2} \cdots) \cdots = x^\alpha,$$
with $\alpha = \alpha_1 \alpha_2 \cdots$, $x^\alpha = \prod_i x_i^{\alpha_i}$, and $i_1 < i_2 < \cdots < i_{\lambda'_1}$, $j_1 < j_2 < \cdots < j_{\lambda'_2}$, and so on. Draw a tableau of shape $\lambda$; $\lambda'_i$ is the length of the $i$th column of $\lambda$. Fill the first column with the $i$'s, the second with the $j$'s, and so on:

i1 j1 k1
i2 j2 k2
i3 j3
i4

Since each column is strictly increasing, all the values $\leq r$ appearing in the tableau have to lie in the first $r$ rows. Hence the number of such values is at most the size of the first $r$ rows, which is $\lambda_1 + \cdots + \lambda_r$. This implies
$$\alpha_1 + \cdots + \alpha_r \leq \lambda_1 + \cdots + \lambda_r.$$
Now, $e_{\lambda'}$ is a symmetric function. Pick any monomial appearing on the right hand side; this shows that $\mu \leq \lambda$. The only $m_\mu$ appearing with nonzero coefficient have $\mu \leq \lambda$.
It's also easy to see that $a_{\lambda\lambda} = 1$ using this interpretation as filling in tableaux of shape $\lambda$: to produce the monomial $x^\lambda$, we must fill the $i$th row entirely with $i$'s. □

Corollary 3.2. The collection $\{e_\lambda\}$ forms a basis for $\Lambda_{\mathbb{Z}}$.

Proof. Apply the previous proposition. The $e_{\lambda'}$ form a basis, as the change of basis to the $m_\lambda$ is upper triangular with 1's on the diagonal. Hence the $e_\lambda$ form a basis, since taking the transpose is an involution. □

Theorem 3.3 (Fundamental theorem of symmetric functions). We have $\Lambda = \mathbb{Z}[e_1, e_2, \dots]$, and these $e_i$ are algebraically independent over $\mathbb{Z}$.

Proof. If the $e_i$ were dependent, then there would be some polynomial in them which is 0. Writing out that polynomial would give a finite linear combination of the $e_\lambda$ which evaluates to 0. This contradicts that the $e_\lambda$ form a basis for $\Lambda_{\mathbb{Z}}$. □

3.2. Homogeneous symmetric functions, $h_\lambda$.

Definition 3.4. Define $h_n := \sum_{\lambda \vdash n} m_\lambda$.

Example 3.5. So,
$$h_1 = m_1 = e_1 = \sum_i x_i,$$
$$h_2 = m_2 + m_{11} = \sum_i x_i^2 + \sum_{i<j} x_i x_j = \sum_{i \leq j} x_i x_j.$$
In general,
$$h_n = \sum_{1 \leq i_1 \leq i_2 \leq \cdots \leq i_n} x_{i_1} x_{i_2} \cdots x_{i_n},$$
and $h_0 = 1$. Define
$$H(t) = \sum_{n=0}^\infty h_n t^n = \prod_{i=1}^\infty (1 - x_i t)^{-1}.$$
Observe $E(t)H(-t) = 1$. This allows us to inductively compute the coefficients of $H$. So,
$$(3.1) \qquad \sum_{i=0}^n (-1)^i e_i h_{n-i} = 0$$
for $n \geq 1$.

Example 3.6. Take $n = 2$. We have $e_0 h_2 - e_1 h_1 + e_2 h_0 = 0$. Expanding this, we have
$$h_2 = e_1^2 - e_2,$$
since $h_0 = e_0 = 1$ and $e_1 = h_1$. Verifying this, $e_1^2 = (\sum_i x_i)^2 = \sum_i x_i^2 + 2\sum_{i<j} x_i x_j$, and indeed $h_2 = \sum_{i \leq j} x_i x_j$.

Define $\omega : \Lambda \to \Lambda$ by $e_i \mapsto h_i$. This is a well-defined homomorphism by Theorem 3.3.

Remark 3.9. By the symmetry of (3.1) in the $e$'s and $h$'s, we have $\omega^2 = \mathrm{id}$. This implies that $\omega$ is an involution, so $\Lambda = \mathbb{Z}[h_1, h_2, \dots]$, which is equivalent to showing that $h_\lambda := h_{\lambda_1} h_{\lambda_2} \cdots$ form a basis.

We can write
$$h_\lambda = \sum_\mu N_{\mu\lambda} m_\mu,$$
where the $N_{\mu\lambda}$ are always non-negative integers.

Fix $N$ and let $H = (h_{i-j})_{0 \leq i,j \leq N}$, where $h_a = 0$ if $a < 0$. Define $E = ((-1)^{i-j} e_{i-j})_{0 \leq i,j \leq N}$.

Example 3.10. When $N = 2$, we get
$$H = \begin{pmatrix} 1 & 0 & 0 \\ h_1 & 1 & 0 \\ h_2 & h_1 & 1 \end{pmatrix}.$$
We have $HE = \mathrm{id}$ by (3.1).
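Relation (3.1) is easy to sanity-check numerically in finitely many variables. In the sketch below (helper names mine), $e_n$ and $h_n$ are computed directly from their definitions at a random rational point, and $\sum_{i=0}^n (-1)^i e_i h_{n-i}$ is checked to vanish:

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import prod
import random

# A random rational point in 4 variables.
x = [Fraction(random.randint(1, 20), random.randint(1, 20)) for _ in range(4)]

def e(n):
    """Elementary symmetric e_n: sum over strictly increasing index tuples."""
    return sum((prod(c) for c in combinations(x, n)), Fraction(0))

def h(n):
    """Complete homogeneous h_n: sum over weakly increasing index tuples."""
    return sum((prod(c) for c in combinations_with_replacement(x, n)), Fraction(0))

# Check (3.1): sum_{i=0}^n (-1)^i e_i h_{n-i} = 0 for n >= 1.
for n in range(1, 6):
    assert sum((-1) ** i * e(i) * h(n - i) for i in range(n + 1)) == 0
print("(3.1) holds for n = 1..5 at a random rational point")
```

Note that the identity holds in any (finite) number of variables, since $E(t)H(-t) = 1$ does.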
Fact 3.11. If $A$ is invertible, then any minor of $A$ equals the complementary cofactor of $A^T$.

Corollary 3.12 (Jacobi-Trudi identity). For partitions $\lambda \supseteq \mu$,
$$\det(h_{\lambda_i - \mu_j - i + j}) = \det(e_{\lambda'_i - \mu'_j - i + j}).$$

Proof. If $\lambda, \mu$ have length $\leq p$ and $\lambda', \mu'$ have length $\leq q$ with $p + q = N + 1$, consider the minor of $H$ with row indices $\lambda_i + p - i$ for $1 \leq i \leq p$ and column indices $\mu_i + p - i$ for $1 \leq i \leq p$. Then by Fact 3.11, we have
$$\det(h_{\lambda_i - \mu_j - i + j})_{1 \leq i,j \leq p} = \det(e_{\lambda'_i - \mu'_j - i + j})_{1 \leq i,j \leq q}.$$
Taking $\mu = 0$, we have that $\det(h_{\lambda_i - i + j}) = \det(e_{\lambda'_i - i + j})$. □

3.3. Power sums $p_\lambda$.

Definition 3.13. The $r$th power sum is
$$p_r := \sum_i x_i^r$$
for $r \geq 1$.

Warning 3.14. $p_0$ is not defined!

Lemma 3.15. Define
$$P(t) = \sum_{r=1}^\infty p_r t^{r-1}.$$
We have $P(t) = H'(t)/H(t)$ and $P(-t) = E'(t)/E(t)$.

Proof. We just verify the first identity.
$$P(t) = \sum_{r=1}^\infty p_r t^{r-1} = \sum_i \sum_{r \geq 1} x_i^r t^{r-1} = \sum_i \frac{x_i}{1 - x_i t} = \sum_i \frac{\partial}{\partial t} \log \frac{1}{1 - x_i t} = \frac{\partial}{\partial t} \log \prod_i \frac{1}{1 - x_i t} = \frac{\partial}{\partial t} \log H(t) = \frac{H'(t)}{H(t)}. \qquad \Box$$

Corollary 3.16 (Newton's identities). We have
$$n h_n = \sum_{r=1}^n p_r h_{n-r} \qquad \text{and} \qquad n e_n = \sum_{r=1}^n (-1)^{r-1} p_r e_{n-r}.$$

Proof. This follows immediately from expanding the equalities in Lemma 3.15. □

Remark 3.17. We have $h_n \in \mathbb{Q}[p_1, \dots, p_n]$ and $p_n \in \mathbb{Z}[h_1, \dots, h_n]$; for example, $h_2 = \frac{1}{2}(p_1^2 + p_2)$. Observe that $\Lambda_{\mathbb{Q}} = \Lambda \otimes_{\mathbb{Z}} \mathbb{Q} = \mathbb{Q}[p_1, p_2, \dots]$. Defining $p_\lambda = \prod_i p_{\lambda_i}$, we obtain that the $p_\lambda$ form a basis and the $p_j$ are algebraically independent over $\mathbb{Q}$.

We can also see that, recalling $\omega$ is the map sending $e_i \mapsto h_i$, we have
$$\omega(p_n) = (-1)^{n-1} p_n,$$
and
$$\omega(p_\lambda) = (-1)^{|\lambda| - \ell(\lambda)} p_\lambda,$$
with $|\lambda|$ the size of $\lambda$ and $\ell(\lambda)$ equal to the length of $\lambda$.

4. 9/30/16

Recall we have $\Lambda$, the ring of symmetric functions, and the various bases $m_\lambda, e_\lambda, h_\lambda, p_\lambda$.

Lemma 4.1. We can write
$$h_n = \sum_{\lambda \vdash n} \frac{1}{z_\lambda} p_\lambda \qquad \text{and} \qquad e_n = \sum_{\lambda \vdash n} \varepsilon_\lambda \frac{1}{z_\lambda} p_\lambda,$$
where
$$z_\lambda = \prod_i i^{n_i} n_i!, \qquad \varepsilon_\lambda = (-1)^{|\lambda| - \ell(\lambda)},$$
and $n_i$ is the number of $i$'s in $\lambda$.

Proof. We first claim
$$H(t) = \sum_{n=0}^\infty h_n t^n = e^{\sum_{r=1}^\infty p_r t^r / r}.$$
To see this, observe it suffices to show $\log H(t) = \sum_r p_r t^r / r$, which is equivalent to showing
$$\frac{H'(t)}{H(t)} = \sum_{r=1}^\infty p_r t^{r-1},$$
which we proved last time.
The right hand side is
$$e^{\sum_r p_r t^r / r} = \prod_r e^{p_r t^r / r} = \prod_r \sum_{m_r = 0}^\infty \frac{(p_r t^r / r)^{m_r}}{m_r!} = \sum_{n=0}^\infty t^n \sum_{\lambda \vdash n} \frac{1}{z_\lambda} p_\lambda. \qquad \Box$$

Remark 4.2. Here is some motivation for why we are doing these calculations. Let $S_n$ be the symmetric group, and consider a typical permutation $\sigma \in S_n$. What does it "look like"?
(1) For example, how many fixed points does it have?
(2) How many cycles does it have?
(3) What is the length of the longest cycle?
(4) What is the order of $\sigma$? (The order $o(\sigma)$ is the smallest $k$ so that $\sigma^k = \mathrm{id}$.)
Answers:
(1) about 1,
(2) about $\log n$,
(3) about $0.62\,n$,
(4) about $e^{(\log n)^2/2}$.

Definition 4.3. Let $u(\sigma) := \frac{1}{n!}$ denote the uniform distribution on the symmetric group.

(1) More precisely,
$$\frac{1}{n!} \# \{\sigma \in S_n : \mathrm{fp}(\sigma) = j\} \sim \frac{1}{e \, j!},$$
where $\mathrm{fp}(\sigma)$ is the number of fixed points of $\sigma$. So,
$$P(\mathrm{fp}(\sigma) = 0) \sim \frac{1}{e}.$$
This models a Poisson distribution.

(2) We also have
$$\frac{1}{n!} \# \left\{\sigma \in S_n : \frac{c(\sigma) - \log n}{\sqrt{\log n}} \leq x\right\} \sim \Phi(x),$$
where $\Phi(x) = \int_{-\infty}^x e^{-t^2/2} \, dt / \sqrt{2\pi}$ is the normal distribution function and $c(\sigma)$ is the number of cycles.

(3) We also have
$$\frac{1}{n!} \# \left\{\sigma \in S_n : \frac{\log o(\sigma) - \frac{(\log n)^2}{2}}{\sqrt{(\log n)^3/3}} \leq x\right\} \sim \Phi(x).$$

These features only depend on $\sigma$ having a given cycle type; that is, they only depend on the conjugacy class of $\sigma$ in $S_n$. Recall $\sigma$ is conjugate to $\tau$ if and only if they have the same cycle type. Let $a_i(\sigma)$ be the number of cycles of $\sigma$ of length $i$.

Remark 4.4. We have the following facts:
$$\sum_{i=1}^n i \, a_i(\sigma) = n, \qquad \mathrm{fp}(\sigma) = a_1(\sigma), \qquad c(\sigma) = \sum_i a_i(\sigma),$$
$$\ell(\sigma) = \max\{i : a_i(\sigma) > 0\}, \qquad o(\sigma) = \mathrm{lcm}\{i : a_i(\sigma) > 0\}.$$

Lemma 4.5 (Cauchy). We have
$$\# \{\sigma \in S_n : \sigma \text{ has cycle type } 1^{a_1} 2^{a_2} \cdots n^{a_n}\} = \frac{n!}{\prod_{i=1}^n i^{a_i} a_i!} = \frac{n!}{z_\lambda}.$$

Proof. Fix $a_1, \dots, a_n$. Observe that $S_n$ acts transitively by conjugation on the permutations of this cycle type, and the stabilizer of such a permutation has size $\prod_i i^{a_i} a_i!$, so the size of the cycle class is $n! / \prod_i i^{a_i} a_i!$. □

Define the cycle indicator
$$C_n(x_1, \dots, x_n) = \frac{1}{n!} \sum_{\sigma \in S_n} \prod_{i=1}^n x_i^{a_i(\sigma)},$$
and $C_0(x) = 1$.
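By Lemma 4.5, the coefficient of $\prod_i x_i^{a_i}$ in $C_n$ is exactly $1/z_\lambda$ for $\lambda = 1^{a_1} 2^{a_2} \cdots$. Here is a brute-force check over $S_4$ (helper names mine):

```python
from fractions import Fraction
from itertools import permutations
from collections import Counter
from math import factorial

def cycle_type(perm):
    """Multiplicities a_i of i-cycles in a permutation given in one-line notation."""
    n, seen, counts = len(perm), set(), Counter()
    for start in range(n):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        counts[length] += 1
    return counts

n = 4
# Empirical cycle indicator: coefficient of prod x_i^{a_i} is (# permutations)/n!.
empirical = Counter()
for perm in permutations(range(n)):
    a = cycle_type(perm)
    empirical[tuple(a[i] for i in range(1, n + 1))] += 1

for a_vec, count in empirical.items():
    z = 1
    for i, a_i in enumerate(a_vec, start=1):
        z *= i ** a_i * factorial(a_i)
    # Lemma 4.5 (Cauchy): there are n!/z_lambda permutations of this cycle type.
    assert Fraction(count, factorial(n)) == Fraction(1, z)
print("C_4 matches the 1/z_lambda coefficients for all", len(empirical), "cycle types")
```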
We have another generating function, also called the cycle indicator:
$$C(t) = \sum_{n=0}^\infty t^n C_n(x).$$

Theorem 4.6 (Pólya). We have
$$C(t) = e^{\sum_{i=1}^\infty t^i x_i / i}.$$

Proof. Observe
$$e^{\sum_i t^i x_i / i} = \prod_{i=1}^\infty e^{t^i x_i / i} = \prod_{i=1}^\infty \sum_{a_i = 0}^\infty \frac{1}{a_i!} \left(\frac{t^i x_i}{i}\right)^{a_i} = \sum_{n=0}^\infty t^n \sum_{a_1, a_2, \dots \,:\, \sum_i i a_i = n} \prod_i \frac{x_i^{a_i}}{i^{a_i} a_i!} = C(t),$$
using Lemma 4.5 in the last step. □

Remark 4.7. There are similar formulas for cycle factorizations in $GL_n(\mathbb{F}_q)$, which are $q$-analogs of the above formulas for $S_n$.

Definition 4.8. Fix $\theta \in (0, \infty)$. The Poisson distribution with parameter $\theta$ is
$$P_\theta(j) = \frac{e^{-\theta} \theta^j}{j!}$$
for $0 \leq j < \infty$. Define
$$M_\theta(x) = \sum_{j=0}^\infty x^j P_\theta(j) = \sum_{j=0}^\infty \frac{x^j \theta^j e^{-\theta}}{j!} = e^{-\theta + x\theta}.$$

Definition 4.9. If $P(j)$, $0 \leq j < \infty$, is any probability distribution, then we have moments and falling factorial moments
$$M_h = \sum_{j=0}^\infty j^h P(j), \qquad V_h = \sum_{j=0}^\infty j(j-1)\cdots(j-h+1) P(j).$$
These two sets of moments are equivalent: we can find either from the other.

Differentiating $h$ times and taking $P(j) = P_\theta(j)$, we see
$$M_\theta^{(h)}(x) = \sum_{j=0}^\infty j(j-1)\cdots(j-h+1) \, x^{j-h} P_\theta(j).$$
Since $M_\theta^{(h)}(x) = \theta^h e^{x\theta - \theta}$, setting $x = 1$ gives $V_h = \theta^h$. So, the falling factorial moments of the Poisson distribution are these simple numbers.

Example 4.10.

Theorem 4.11. For any $k = 1, 2, \dots$, and for all $n \geq k$, the $k$th falling factorial moment of the number of fixed points of $\sigma \in S_n$ satisfies
$$\frac{1}{n!} \sum_{\sigma \in S_n} (\mathrm{fp}(\sigma))_k = k\text{th falling factorial moment of the Poisson distribution } P_1(j).$$

Proof. Consider $a_1(\sigma) = \mathrm{fp}(\sigma)$. We want
$$\frac{1}{n!} \sum_{\sigma \in S_n} a_1(\sigma)(a_1(\sigma) - 1)\cdots(a_1(\sigma) - k + 1).$$
In
$$C_n(x_1, \dots) = \frac{1}{n!} \sum_{\sigma \in S_n} \prod_i x_i^{a_i(\sigma)},$$
set $x_1 = x$ and $x_2 = x_3 = \cdots = 1$. Then
$$C_n(x) = \frac{1}{n!} \sum_{\sigma \in S_n} x^{a_1(\sigma)},$$
and
$$\sum_n t^n C_n(x) = e^{tx + \sum_{i=2}^\infty t^i / i} = e^{tx - t + \sum_{i=1}^\infty t^i / i} = \frac{e^{tx - t}}{1 - t}.$$
Differentiating $k$ times in $x$ and setting $x = 1$, we get
$$\sum_n t^n \frac{1}{n!} \sum_{\sigma \in S_n} a_1(\sigma)(a_1(\sigma) - 1)\cdots(a_1(\sigma) - k + 1) = \frac{t^k}{1 - t} = t^k + t^{k+1} + t^{k+2} + \cdots.$$
One can read off the moments by equating powers of $t$, since the left hand side is the generating function for the moments: for $n \geq k$ the coefficient is 1, which is the $k$th falling factorial moment of $P_1$. □

Remark 4.12.
If you have two measures and all moments are equal, then the two measures are close. This is called the method of moments. Since the falling factorial moments in the above theorem are equal, the moments are equal. The method of moments implies the following: let
$$P_n(j) = P_{S_n}\{\mathrm{fp}(\sigma) = j\};$$
then as $n \to \infty$, $P_n(j) \to \frac{1}{e \, j!}$.

The point of these calculations is the following. Exactly the same calculations show:
(1)
$$E[(a_i(\sigma))_k] = \sum_{j=0}^\infty (j)_k P_{1/i}(j).$$
So, for instance, the number of transpositions has a limiting Poisson distribution with parameter $1/2$. Here, the subscript $k$ on the expectation means the falling factorial moment.
(2) The joint distribution satisfies
$$E[(a_1(\sigma))_{j_1} (a_2(\sigma))_{j_2} \cdots (a_\ell(\sigma))_{j_\ell}] = \prod_{k=1}^\ell E_{1/k}[(x)_{j_k}]$$
for $n \geq \sum_i i j_i$.

Remark 4.13. The fact that the moments are exactly equal (for large enough $n$) is called stabilization. This will lead to some interpretations in terms of characters.

5. 10/3/16

5.1. Expected Number of Fixed Points. Recall from last time we have
$$C_n = \frac{1}{n!} \sum_{\sigma \in S_n} \prod_i x_i^{a_i(\sigma)}.$$
Then we defined
$$C(t) = \sum_{n=0}^\infty t^n C_n = e^{\sum_i x_i t^i / i}.$$
This has a lot of information in it.

Example 5.1. Suppose $c(\sigma)$ is the number of cycles of $\sigma$. Then,
$$c(\sigma) = \sum_{i=1}^n a_i(\sigma).$$
Setting all $x_i = x$, we have
$$C_n(x) = \frac{1}{n!} \sum_\sigma x^{c(\sigma)}.$$
Then,
$$C(t) = (1 - t)^{-x} = \sum_{j=0}^\infty \binom{-x}{j} (-t)^j = \sum_j \frac{t^j}{j!} \, x(x+1)\cdots(x+j-1),$$
since $\sum_i x t^i / i$ is $x$ times the power series expansion of $-\log(1-t)$.

Therefore,
$$C_n = \frac{1}{n!} x(x+1)\cdots(x+n-1) = x \cdot \frac{1+x}{2} \cdot \frac{2+x}{3} \cdots \frac{n-1+x}{n} = E(x^{S_n}) = \prod_i E(x^{X_i}).$$
Here $S_n$ denotes the sum $X_1 + \cdots + X_n$, not the symmetric group, where the $X_i$ are independent with $P(X_i = 0) = \frac{i-1}{i}$ and $P(X_i = 1) = \frac{1}{i}$, and
$$E(x^{S_n}) = \sum_{j=0}^n x^j P(S_n = j),$$
as in general $E(f(S_n)) = \sum_j f(j) P(S_n = j)$; here we took $f(j) = x^j$.

So,
$$E(S_n) = 1 + \frac{1}{2} + \cdots + \frac{1}{n} \sim \log n, \qquad \mathrm{Var}(S_n) = \sum_{i=1}^n \frac{1}{i}\left(1 - \frac{1}{i}\right) \sim \log n,$$
and
$$P\left(\frac{c(\sigma) - \log n}{\sqrt{\log n}} \leq x\right) \to \Phi(x).$$
The coefficient of $x^j$ in $x(x+1)\cdots(x+n-1)$ is the number of permutations with $j$ cycles. These happen to be called the Stirling numbers of the first kind.
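The claim that the coefficient of $x^j$ in $x(x+1)\cdots(x+n-1)$ counts permutations with $j$ cycles can be checked directly for small $n$ (helper names mine; the block is self-contained):

```python
from itertools import permutations

def num_cycles(perm):
    """Number of cycles of a permutation given in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

n = 5
# Coefficients of x(x+1)(x+2)...(x+n-1), lowest degree first: these are the
# unsigned Stirling numbers of the first kind.
coeffs = [1]
for k in range(n):
    new = [0] * (len(coeffs) + 1)
    for d, c in enumerate(coeffs):
        new[d] += k * c      # multiply x^d by the constant k
        new[d + 1] += c      # multiply x^d by x
    coeffs = new

# Brute-force count of permutations of n by number of cycles.
brute = [0] * (n + 1)
for perm in permutations(range(n)):
    brute[num_cycles(perm)] += 1

assert coeffs == brute
print(coeffs)  # [0, 24, 50, 35, 10, 1] for n = 5
```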
For more information, see the paper of Shepp and Lloyd on cycles of random permutations.

Question 5.2. Who cares about all this stuff with fixed points?

There was a game played where someone took two decks of cards numbered up to $n$. People play this game, and you get a dollar each time the same number comes up in both decks. The question is then a question about the number of fixed points: we may as well call the cards in the first deck $1, \dots, n$, so the number of matches is just the number of fixed points of a random permutation. Montmort in 1708 proved the number of fixed points has an approximately Poisson distribution, as we proved last time.

We also have a metric
$$d(\pi, \sigma) = \#\{i : \pi(i) \neq \sigma(i)\}.$$
See Diaconis, Fulman, and Guralnick on fixed points of permutations for a classification of possible fixed points of transitive primitive actions of the symmetric group.

Definition 5.3. The Cayley distance between two permutations is
$$d_c(\sigma, \pi) = \text{the minimum number of transpositions needed to express } \pi\sigma^{-1}.$$
That is, this is the distance in the Cayley graph whose vertices are permutations and whose edges join two elements differing by a transposition.

Exercise 5.4. We have $d_c(\sigma, \pi) = n - c(\sigma\pi^{-1})$, where $c(\sigma)$ is the number of cycles of $\sigma$.

Remark 5.5. The above two distances are the only two bi-invariant distances that Persi knows of.

5.2. Random Matrix Groups.

Question 5.6. For any sequence of groups, call it $G_n$, pick $g \in G_n$ at random. What does it look like?

Example 5.7. Take $G_n = GL_n(\mathbb{F}_q)$. The conjugacy classes are indexed by data $(f, \lambda(f))$, where $f$ ranges over irreducible polynomials of degree $d(f)$ and $\lambda$ is a function from irreducible polynomials to partitions, with the restriction that
$$\sum_f d(f)\,|\lambda(f)| = n.$$
This is an interesting way to get your hands on these groups: it is Jordan form where we're not assuming the roots are in the field. There is more information on these for general finite groups of Lie type; look at J.
Fulman's thesis, "Random matrix theory over finite fields."

Remark 5.8. One can do similar things for
(1) $O_n(\mathbb{R})$,
(2) $U_n(\mathbb{C})$,
(3) $Sp_{2n}(\mathbb{R})$,
(4) $GL_n(\mathbb{Z}_p)$.
As long as the group is compact, it makes sense to have a description of conjugacy classes. If you wanted to look at this subject, you might look at Persi's paper "Patterns in Eigenvalues," in the Bulletin of the AMS.

Example 5.9. The orthogonal group has conjugacy classes indexed by eigenvalues. So, this is asking: pick a matrix at random, what are its eigenvalues?

In fact, $G$ doesn't have to be compact. For instance, we can pick an integer $n$ up to $x$ uniformly and look at its factorization $n = \prod_p p^{a_p(n)}$.

Example 5.10. For example, we can take $\omega(n) = \sum_{p \mid n} 1$. This is the number of prime divisors, so $\omega(12) = 2$. The Erdős–Kac theorem says
$$P\left(\frac{\omega(n) - \log\log x}{\sqrt{\log\log x}} \leq t\right) \to \Phi(t).$$
This can similarly be done for $GL_n(\mathbb{R})$ by chopping it off along compact subsets, say with all entries up to $n$, and letting $n \to \infty$.

5.3. Schur Functions. Recall we have $\Lambda$, the ring of symmetric functions, with four bases $m_\lambda, e_\lambda, h_\lambda, p_\lambda$. We will work with $n$ variables $x_1, \dots, x_n$. If $\alpha \in \mathbb{N}^n$, say $\alpha = (\alpha_1, \dots, \alpha_n)$, define
$$x^\alpha := x_1^{\alpha_1} \cdots x_n^{\alpha_n}.$$
Polynomials in $n$ variables have an action of the symmetric group by permuting variables. Consider
$$a_\alpha := \sum_{w \in S_n} \mathrm{sgn}(w) \, w(x^\alpha).$$
These are alternating polynomials under the action of the symmetric group; that is, $\sigma a_\alpha = \mathrm{sgn}(\sigma) \, a_\alpha$. In particular, $a_\alpha = 0$ if $x_i = x_j$ for any $i \neq j$, so $a_\alpha \equiv 0 \pmod{x_i - x_j}$ for each $i < j$. Let
$$\delta = (n-1, n-2, \dots, 1, 0).$$
Hence
$$a_\delta = \det(x_i^{n-j})_{1 \leq i,j \leq n} = \prod_{i<j}(x_i - x_j)$$
(the Vandermonde determinant), and $a_\delta$ divides $a_\alpha$. We may assume $\alpha_1 > \alpha_2 > \cdots > \alpha_n \geq 0$ (otherwise $a_\alpha = 0$); then $\alpha_1 - (n-1) \geq \alpha_2 - (n-2) \geq \cdots \geq \alpha_n$, and we can write $\alpha = \lambda + \delta$ with $\lambda$ a partition. Then
$$a_\alpha = a_{\lambda+\delta} = \sum_{w \in S_n} \mathrm{sgn}(w) \, w(x^{\lambda+\delta}) = \det\left(x_i^{\lambda_j + n - j}\right)_{1 \leq i,j \leq n}.$$

Definition 5.11. The Schur function $s_\lambda(x_1, \dots, x_n)$ is
$$s_\lambda(x_1, \dots, x_n) := \frac{a_{\lambda+\delta}}{a_\delta}.$$

Exercise 5.12. If one adds 0's to $\lambda$, one gets the same Schur function.

6. 10/5/16

6.1. Review. Last time, we had $x_1, \dots, x_n$ and took $\alpha \in \mathbb{N}^n$. Then, we took
$$a_\alpha = a_\alpha(x_1, \dots, x_n) = \sum_{w \in S_n} \varepsilon(w) \, w(x^\alpha),$$
where $\varepsilon(w)$ is the sign of $w$. We saw that $a_\alpha$ is divisible by $a_\delta := \prod_{i<j}(x_i - x_j)$, and that the map
$$\Lambda^n \to \{\text{alternating polynomials}\}, \qquad f \mapsto a_\delta f,$$
is bijective. Since the $a_\alpha$ form a basis of the alternating polynomials, the Schur functions form a basis for $\Lambda^n$.

Warning 6.1. This map is not degree preserving, but it does show that the dimension of the space of symmetric functions in $n$ variables of degree $d$ is equal to the dimension of the space of alternating functions of degree $d + \deg a_\delta$.

Remark 6.2 (Important secret about Schur functions!). The Schur functions are the characters of the unitary group. That is, the $s_\lambda$, for $\lambda$ with at most $n$ parts, are the irreducible polynomial characters of
$$U_n := \{M \in M_{n \times n}(\mathbb{C}) : MM^* = I\}.$$
That is, given an irreducible representation $\rho_\lambda$ of $U_n$, we have
$$s_\lambda(z_1, \dots, z_n) = \mathrm{tr}(\rho_\lambda(M))$$
if $M$ has eigenvalues $z_1, \dots, z_n$ in $S^1$.

Theorem 6.3 (Jacobi-Trudi identity). We have
$$s_\lambda = \det(h_{\lambda_i - i + j})_{1 \leq i,j \leq n}$$
for any $n \geq \ell(\lambda)$, with the convention that $h_0 = 1$ and $h_j = 0$ for $j < 0$. Additionally,
$$s_\lambda = \det(e_{\lambda'_i - i + j})_{1 \leq i,j \leq m}$$
for any $m \geq \ell(\lambda')$.

Example 6.4. Consider the matrix
$$\begin{pmatrix} h_{\lambda_1} & h_{\lambda_1 + 1} & h_{\lambda_1 + 2} & \cdots \\ h_{\lambda_2 - 1} & h_{\lambda_2} & & \\ & & \ddots & \\ & & & h_{\lambda_n} \end{pmatrix}.$$
For example, $s_{(j)} = h_j$, since we have a $1 \times 1$ matrix. We also get $s_{1^n} = e_n$. For example,
$$s_{3,1} = \det \begin{pmatrix} h_3 & h_4 \\ h_0 & h_1 \end{pmatrix} = h_3 h_1 - h_4.$$

Proof. Let $e_j^{(k)}$ be the $j$th elementary symmetric function in $x_1, \dots, \widehat{x_k}, \dots, x_n$ (omitting $x_k$). Introduce
$$M = \left((-1)^{n-i} e_{n-i}^{(k)}\right)_{1 \leq i,k \leq n},$$
and for $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$, let $A_\alpha = (x_j^{\alpha_i})$ and $H_\alpha = (h_{\alpha_i - n + j})$.

Lemma 6.5. We have $A_\alpha = H_\alpha M$.

Proof. We have
$$E^{(k)}(t) = \sum_{j=0}^{n-1} e_j^{(k)} t^j = \prod_{i=1, i \neq k}^n (1 + x_i t),$$
and recall
$$H(t) = \prod_{i=1}^\infty (1 - x_i t)^{-1} = \sum_{n=0}^\infty h_n t^n.$$
Therefore,
$$H(t) E^{(k)}(-t) = (1 - x_k t)^{-1}.$$
Now, look at the coefficient of $t^{\alpha_i}$ on each side:
$$\sum_{j=1}^n h_{\alpha_i - n + j} \, (-1)^{n-j} e_{n-j}^{(k)} = x_k^{\alpha_i}. \qquad \Box$$

Now, take the determinant of both sides in the lemma. We have $a_\alpha = \det H_\alpha \det M$. Taking $\alpha = \delta = (n-1, \dots, 1, 0)$, we get $\det H_\delta = \det(h_{j-i}) = 1$, since this matrix is upper triangular with diagonal 1. Therefore, $\det M = a_\delta$.
Our formula says $a_\alpha = \det H_\alpha \cdot a_\delta$, using that $a_\alpha$ is the determinant of $A_\alpha$ by definition. Hence,
$$s_\lambda = \frac{a_{\lambda+\delta}}{a_\delta} = \det(H_{\lambda+\delta}) = \det(h_{\lambda_i - i + j}).$$
We can prove the other formula directly by this sort of manipulation. □

6.3. Hall Inner product. In order to define the Hall inner product, we will need three expansions of
$$\prod_{i,j} (1 - x_i y_j)^{-1},$$
where $x_i, y_j$ are two sets of variables. We have the following identities.

Lemma 6.6.
(1) $\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda z_\lambda^{-1} p_\lambda(x) p_\lambda(y)$.
(2) $\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda h_\lambda(x) m_\lambda(y) = \sum_\lambda h_\lambda(y) m_\lambda(x)$.
(3) $\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda s_\lambda(x) s_\lambda(y)$.

Proof. We prove these in order.
(1) Recall
$$h_n = \sum_{\lambda \vdash n} z_\lambda^{-1} p_\lambda.$$
We have also
$$H(t) = \sum_n t^n h_n = \prod_i (1 - x_i t)^{-1}.$$
Apply this to the set of variables $\{x_i y_j\}$ and set $t = 1$, obtaining
$$\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_{n=0}^\infty h_n(xy) = \sum_\lambda z_\lambda^{-1} p_\lambda(xy) = \sum_\lambda z_\lambda^{-1} p_\lambda(x) p_\lambda(y),$$
since
$$p_r(xy) = \sum_{i,j} (x_i y_j)^r = \left(\sum_i x_i^r\right)\left(\sum_j y_j^r\right) = p_r(x) p_r(y).$$
(2) We have
$$\prod_{i,j} (1 - x_i y_j)^{-1} = \prod_j H(y_j) = \prod_j \sum_{n=0}^\infty h_n(x) y_j^n = \sum_{\alpha \in \mathbb{N}^n} h_\alpha(x) y^\alpha = \sum_\lambda h_\lambda(x) m_\lambda(y),$$
where the last equality uses that $h_\alpha$ is symmetric, so it doesn't depend on the ordering of $\alpha$, only on the partition $\lambda$ associated to $\alpha$.
(3) The third part is slightly messier, but similar, and we will omit it. □

Definition 6.7. We have the Hall inner product $\langle \cdot, \cdot \rangle$ on $\Lambda$, defined so that $\langle h_\mu, m_\lambda \rangle = \delta_{\mu\lambda}$.

Lemma 6.8. Say $u_\lambda, v_\mu$ are two bases of $\Lambda$. Then $\langle u_\lambda, v_\mu \rangle = \delta_{\lambda\mu}$ if and only if
$$\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda u_\lambda(x) v_\lambda(y).$$
We'll fill in the proof of this lemma next time.

Lemma 6.9. With respect to the Hall inner product, the $p_\lambda$ form an orthogonal basis and the Schur functions $s_\lambda$ form an orthonormal basis.

Proof. This is immediate from Lemma 6.8 and Lemma 6.6. □

Remark 6.10 (Important secret fact). We have
$$\langle f, g \rangle = \int_{U_n} f(M)\, \overline{g(M)} \, dM,$$
where by integrating a function we mean integrating the corresponding function of the eigenvalues of the matrix, with respect to Haar measure.
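As a sanity check on Theorem 6.3, the sketch below (helper names mine) evaluates both sides for $\lambda = (3,1)$ in three variables at a random rational point: the ratio of alternants $a_{\lambda+\delta}/a_\delta$ against $h_3 h_1 - h_4$ from Example 6.4.

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod
import random

# Three distinct rational points (the offsets 0, 10, 20 force distinctness,
# so the Vandermonde a_delta is nonzero).
x = [Fraction(random.randint(1, 9)) + k for k in (0, 10, 20)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def a(alpha):
    """Alternant a_alpha = det(x_i^{alpha_j})."""
    return det3([[xi ** aj for aj in alpha] for xi in x])

def h(n):
    """Complete homogeneous h_n in the variables x."""
    return sum((prod(c) for c in combinations_with_replacement(x, n)), Fraction(0))

lam, delta = (3, 1, 0), (2, 1, 0)
ratio = a(tuple(l + d for l, d in zip(lam, delta))) / a(delta)
# Jacobi-Trudi for lambda = (3,1): s_lambda = det [[h3, h4], [h0, h1]] = h3*h1 - h4.
assert ratio == h(3) * h(1) - h(4)
print("s_{3,1} agrees with the Jacobi-Trudi determinant at a random point")
```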
7. 10/7/16

7.1. Basic properties of the Cauchy product. Last time we saw
$$(7.1) \qquad \prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda z_\lambda^{-1} p_\lambda(x) p_\lambda(y)$$
$$(7.2) \qquad \phantom{\prod_{i,j} (1 - x_i y_j)^{-1}} = \sum_\lambda m_\lambda(x) h_\lambda(y)$$
$$(7.3) \qquad \phantom{\prod_{i,j} (1 - x_i y_j)^{-1}} = \sum_\lambda s_\lambda(x) s_\lambda(y).$$

Remark 7.1. We had a fairly uninspiring manipulation to prove this, but Dan Bump's textbook has a very helpful group-theoretic/representation-theoretic argument for it.

We defined the inner product by $\langle m_\lambda, h_\mu \rangle = \delta_{\lambda\mu}$. Recall from last time, we stated:

Proposition 7.2. If $\{u_\lambda\}, \{v_\lambda\}$ are two bases of $\Lambda^n$, then the following are equivalent:
(1) We have $\langle u_\lambda, v_\mu \rangle = \delta_{\lambda\mu}$.
(2) The Cauchy product satisfies $\prod_{i,j} (1 - x_i y_j)^{-1} = \sum_\lambda u_\lambda(x) v_\lambda(y)$.

Proof. Write
$$u_\lambda = \sum_\rho a_{\lambda\rho} h_\rho, \qquad v_\mu = \sum_\sigma b_{\mu\sigma} m_\sigma.$$
Observe (1) says
$$\sum_\rho a_{\lambda\rho} b_{\mu\rho} = \delta_{\lambda\mu},$$
and (2) says
$$\sum_\lambda u_\lambda(x) v_\lambda(y) = \sum_\lambda m_\lambda(x) h_\lambda(y),$$
which holds if and only if
$$\sum_\lambda a_{\lambda\rho} b_{\lambda\sigma} = \delta_{\rho\sigma}.$$
So, these two rephrasings are equivalent because $AB = I \iff BA = I$. □

Corollary 7.3. We have
$$\langle p_\lambda, p_\mu \rangle = z_\lambda \delta_{\lambda\mu},$$
and we have that the $s_\lambda$ are orthonormal.

Proof. Apply Equation 7.1 and Proposition 7.2. □

Corollary 7.4. The inner product $\langle \cdot, \cdot \rangle$ is symmetric and positive definite.

Proof. Write $f = \sum_\lambda a_\lambda s_\lambda$. Then,
$$\langle f, f \rangle = \sum_\lambda a_\lambda^2. \qquad \Box$$

Recall the map $\omega : \Lambda^n \to \Lambda^n$ with
$$\omega(h_\lambda) = e_\lambda, \qquad \omega(p_\lambda) = \pm p_\lambda.$$
Since the $p_\lambda$ form an orthogonal basis and $\omega$ only changes their signs, this implies $\langle p_\lambda, p_\mu \rangle = \langle \omega p_\lambda, \omega p_\mu \rangle$, and hence $\langle u, v \rangle = \langle \omega(u), \omega(v) \rangle$, so $\omega$ preserves norms (it is an orthogonal transformation). Now, our three identities yield