INTRODUCTION TO ALGEBRAIC COMBINATORICS: (INCOMPLETE) NOTES FROM A COURSE TAUGHT BY JENNIFER MORSE

GEORGE H. SEELINGER

These are a set of incomplete notes from an introductory class on algebraic combinatorics I took with Dr. Jennifer Morse in Spring 2018. Especially early on in these notes, I have taken the liberty of skipping a lot of details, since I was mainly focused on understanding symmetric functions when writing. Throughout I have assumed basic knowledge of the theory of the symmetric group, the theory of polynomial rings, and familiarity with set-theoretic constructions, such as posets. A reader with a strong grasp on introductory enumerative combinatorics would probably have few problems skipping ahead to symmetric functions and referring back to the earlier sections as necessary. I want to thank Matthew Lancellotti, Mojdeh Tarighat, and Per Alexandersson for helpful discussions, comments, and suggestions about these notes. Also, a special thank you to Jennifer Morse for teaching the class on which these notes are based and for many fruitful and enlightening conversations. In these notes, we use French notation for Ferrers diagrams and Young tableaux, so the Ferrers diagram of (5, 3, 3, 1) is

*
* * *
* * *
* * * * *
We also frequently use one-line notation for permutations, so the permutation σ = (4, 3, 5, 2, 1) ∈ S5 has σ(1) = 4, σ(2) = 3, σ(3) = 5, σ(4) = 2, σ(5) = 1.

0. Preliminaries

This section is an introduction to some notions on permutations and partitions. Most of the arguments are given in brief or not at all. A familiar reader can skip this section and refer back to it as necessary.

Date: May 2018.

0.1. Some Statistics.

0.1. Definition. A statistic on permutations is a function stat: Sn → N.

0.2. Definition. Given a permutation σ = (σ1, . . . , σn) in one-line notation, we have the following definitions.
(a) The length of σ, denoted ℓ(σ), is the number of pairs (i, j) such that i < j and σi > σj.
(b) The sign of σ is (−1)^{ℓ(σ)}.
(c) The inversion statistic on σ is defined by inv(σ) = ℓ(σ).
(d) A descent of σ is an index i ∈ [n − 1] such that σi > σi+1. The set of all descents of σ is denoted Des(σ).
(e) The major statistic or major index is given by

maj(σ) = Σ_{i ∈ Des(σ)} i

0.3. Example. Let σ = (3, 1, 4, 2, 5). Then, it has
• Inversions (1, 2), (1, 4), (3, 4). Thus, inv(σ) = 3 = ℓ(σ).
• Descents {1, 3}, and thus maj(σ) = 1 + 3 = 4.

0.4. Remark. Some readers may be more familiar with the definition of ℓ(σ) as the length of a reduced word for σ in terms of simple transpositions of Sn. These two notions are equivalent. This can be made explicit using “Rothe diagrams.”
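These statistics are straightforward to compute; a short sketch in Python (function names are our own), which also confirms the equidistribution of Example 0.7 below:

```python
from itertools import permutations
from collections import Counter

def inversions(sigma):
    """Pairs (i, j) with i < j and sigma_i > sigma_j (1-indexed positions)."""
    n = len(sigma)
    return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
            if sigma[i] > sigma[j]]

def inv(sigma):
    return len(inversions(sigma))

def descents(sigma):
    """Indices i in [n - 1] with sigma_i > sigma_{i+1}."""
    return [i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1]]

def maj(sigma):
    return sum(descents(sigma))

# Example 0.3: sigma = (3, 1, 4, 2, 5).
sigma = (3, 1, 4, 2, 5)
assert inversions(sigma) == [(1, 2), (1, 4), (3, 4)]
assert inv(sigma) == 3 and maj(sigma) == 1 + 3

# Theorem 0.6 (maj is Mahonian), checked for n = 3:
dist = lambda stat, n: Counter(stat(s) for s in permutations(range(1, n + 1)))
assert dist(inv, 3) == dist(maj, 3) == Counter({0: 1, 1: 2, 2: 2, 3: 1})
```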

0.5. Definition. Any statistic which is equidistributed over Sn with the inversion statistic, inv, is called Mahonian.

0.6. Theorem. The major index, maj, is Mahonian.

0.7. Example. Given that

S3 = {(123), (213), (132), (231), (312), (321)}

in one-line notation, we have the corresponding lengths {0, 1, 1, 2, 2, 3} and corresponding descent sets {{}, {1}, {2}, {2}, {1}, {1, 2}}, giving major indices {0, 1, 2, 2, 1, 3}. Thus, on S3, inv and maj are equidistributed.

To prove that maj is Mahonian in general, we will use the characterization of Mahonian given below.

0.8. Theorem. A statistic stat is Mahonian if and only if it satisfies the identity

Σ_{σ ∈ Sn} q^{stat(σ)} = Σ_{σ ∈ Sn} q^{inv(σ)}

0.9. Remark. Note that

Σ_{σ ∈ Sn} q^{inv(σ)} = [n]_q! := (1)(1 + q)(1 + q + q^2) ··· (1 + q + ··· + q^{n−1})

which can be proved using an inductive argument by noting

Σ_{σ ∈ Sn} q^{inv(σ)} = Σ_{σ ∈ S_{n−1}} q^{inv(σ)} + Σ_{σ ∈ Sn, σ(n) ≠ n} q^{inv(σ)} = [n − 1]_q! + Σ_{j=1}^{n−1} Σ_{σ ∈ Sn, σ(n) = j} q^{inv(σ)}

but we will not need this in what follows. To prove maj is Mahonian, we will use the following theorem, although other bijections exist, such as the Carlitz bijection.

0.10. Theorem (Foata Bijection). There exists a bijection ψ : Sn → Sn such that maj(σ) = inv(ψ(σ)).

Proof. Let w = w1w2 ··· wn ∈ Sn in one-line notation. Then, set ψ(w1) = w1. Next, if ψ(w1 ··· wi−1) = v1 ··· vi−1, then we define ψ(w1 ··· wi) by
(a) Add wi on the right.
(b) Place vertical bars in v1 ··· vi−1 following the rules
(i) If wi > vi−1, then place a bar to the right of any vk with wi > vk.
(ii) If wi < vi−1, then place a bar to the right of any vk with wi < vk.
(c) Cyclically permute one step to the right in any run between two bars.

0.11. Example. Let σ = 31426875, which has maj(σ) = 17. Then, we iterate

3
|3|1
|3|1|4
3|14|2 → 3412
3|4|1|2|6
3|4|1|2|6|8
|341268|7 → 8341267
|8|34126|7|5 → 86341275 = ψ(σ)

Indeed, inv(ψ(σ)) = 17. To invert the Foata bijection, one does the following.

(a) For w = w1 ··· wi, place a dot between wi−1 and wi.
(b) Place vertical bars in w1 ··· wi−1 following the rules
(i) If wi > w1, place a bar to the left of any wk with wk < wi and also before wi.
(ii) If wi < w1, place a bar to the left of any wk with wk > wi and also before wi.
(c) Cyclically permute every run between two bars one step to the left.
(d) wi will be the ith entry of the resulting permutation.

0.12. Example. For our example, if we start with 86341275, we carry out

|8|63412|7.5 → 8341267 5
|834126.7 → 341268 7
|3|4|1|2|6.8 → 34126 8
|3|4|1|2.6 → 3412 6
|3|41.2 → 314 2
|3|1.4 → 31 4
3.1 → 3 1
.3 → 3

and thus we recover σ = 31426875.
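The forward direction above can be transcribed directly; a sketch in Python (a direct reading of steps (a)–(c), with helper names our own):

```python
def foata(w):
    """Foata bijection psi: maj(w) = inv(foata(w)) for a permutation w."""
    w = list(w)
    v = [w[0]]
    for x in w[1:]:
        # Bar after each letter smaller (resp. larger) than x, according
        # as x is larger (resp. smaller) than the current last letter.
        smaller = x > v[-1]
        blocks, cur = [], []
        for y in v:
            cur.append(y)
            if (y < x) == smaller:
                blocks.append(cur)
                cur = []
        # Rotate each run one step right, then append x.  (cur is empty
        # for permutations, since the last letter always receives a bar.)
        v = [z for b in blocks for z in [b[-1]] + b[:-1]] + cur + [x]
    return v

def inv(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

# Example 0.11:
w = [3, 1, 4, 2, 6, 8, 7, 5]
assert foata(w) == [8, 6, 3, 4, 1, 2, 7, 5]
assert maj(w) == inv(foata(w)) == 17
```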

0.13. Proposition. The Foata bijection preserves the inverse descent set of σ.

0.14. Definition. The charge statistic on a permutation σ is computed as follows.
(a) Write the permutation in one-line notation counterclockwise around a circle, with a ? between the end and the beginning.
(b) To each entry assign an “index” recursively: the entry 1 receives index 0, and, going from the entry k to the entry k + 1 around the circle,

I_{k+1} = I_k + 1 if the ? is passed on the clockwise path from k to k + 1, and I_{k+1} = I_k otherwise.

Unwinding the circle, this says the index of k + 1 is the index of k, plus 1 exactly when k + 1 lies to the right of k in one-line notation.
(c) Define charge(σ) = Σ_{k=1}^{n} I_k, that is, the sum of the indices.

0.15. Example. The permutation (6, 7, 1, 4, 2, 3, 5) assigns to the entries 1, 2, 3, 4, 5, 6, 7 the indices 0, 1, 2, 2, 3, 3, 4 respectively, so that, reading the indices in position order,

charge = 3 + 4 + 0 + 2 + 1 + 2 + 3 = 15.
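The unwound description of the indices gives a short implementation; a sketch (names ours, with the convention read off from Example 0.15):

```python
def charge_indices(sigma):
    """Index of each value k: value 1 gets 0, and value k+1 gets the index
    of k, plus 1 exactly when k+1 sits to the right of k in one-line
    notation (i.e. when the ? is crossed)."""
    pos = {v: i for i, v in enumerate(sigma)}   # value -> position
    idx = {1: 0}
    for k in range(1, len(sigma)):
        idx[k + 1] = idx[k] + (1 if pos[k + 1] > pos[k] else 0)
    return idx

def charge(sigma):
    return sum(charge_indices(sigma).values())

def cocharge(sigma):
    """Definition 0.16 below: C(n, 2) - charge."""
    n = len(sigma)
    return n * (n - 1) // 2 - charge(sigma)

# Example 0.15:
sigma = (6, 7, 1, 4, 2, 3, 5)
assert [charge_indices(sigma)[v] for v in sigma] == [3, 4, 0, 2, 1, 2, 3]
assert charge(sigma) == 15
```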

0.16. Definition. We define the cocharge of a permutation σ = (σ1, . . . , σn) to be

cocharge(σ) := C(n, 2) − charge(σ)

where C(n, 2) = n(n − 1)/2. We also define

comaj(σ) = Σ_{i ∈ Des(σ)} (n − i)

0.17. Remark. We can also show cocharge can be computed like charge, but with the “opposite” recurrence relation of charge, which is what we will use to show the proposition below.

0.18. Proposition. Given a permutation σ, cocharge(σ) = comaj(σ⁻¹).

Idea of Proof. Let σ be a permutation with a descent at k, that is, σk > σk+1. This adds n − k to comaj(σ). Now, σ sends k ↦ σk and k + 1 ↦ σk+1, so σ⁻¹ sends σk+1 ↦ k + 1 and σk ↦ k; since σk+1 < σk, the entry k + 1 appears to the left of the entry k in the one-line notation of σ⁻¹. Therefore, in the cocharge diagram of σ⁻¹, the k + 1 occurring before the k adds 1 to the index of every entry greater than k, which adds n − k to cocharge(σ⁻¹). □

0.2. Orders on Partitions.

0.19. Definition. Let S be the set of partitions of degree n, i.e., whose parts sum to n. Then, for λ, µ ∈ S, the dominance order on S is given by

λ ⊴ µ ⟺ Σ_{i=1}^{ℓ} λi ≤ Σ_{i=1}^{ℓ} µi for all ℓ

where λ = (λ1, . . . , λm) and µ = (µ1, . . . , µk).

0.20. Example. (1, 1, 1, 1) ⊴ (2, 2) ⊴ (4) and (2, 2, 1) ⊵ (2, 1, 1, 1). Furthermore, (3, 3) and (4, 1, 1) are incomparable since 3 < 4 but 3 + 3 = 6 > 5 = 4 + 1.

0.21. Definition. A partition µ covers λ, denoted µ ⋗ λ, when µ ▷ λ (that is, µ ⊵ λ, µ ≠ λ) and there is no partition ν such that µ ▷ ν ▷ λ.

0.22. Remark. Covering relations are sufficient to define a partial order on a locally finite poset.

0.23. Proposition. Given partitions λ, µ of degree n,

λ ⊴ µ ⟺ λ′ ⊵ µ′

where λ′ is the conjugate partition to λ, and similarly for µ′.

Proof. See [Mac79, 1.11]. □

0.24. Definition. The weak order on Sn is defined by, for σ, γ ∈ Sn,

σ ⋖_w γ ⟺ γ = σsi and ℓ(γ) = ℓ(σ) + 1

where si is the transposition switching i and i + 1.

0.25. Example. In one-line notation,

(123) ⋖ (213) = (123)s1 ⋖ (231) = (213)s2 = s1s2

Note that

s1 = (213) ⋖̸ (312) = s2s1

0.26. Definition. The strong (Bruhat) order on Sn is defined by, for σ, γ ∈ Sn,

σ ⋖_s γ ⟺ γ = στij and ℓ(γ) = ℓ(σ) + 1

where τij is the transposition switching i and j.

0.27. Theorem (Strong Exchange Property). Given σ ⋖_s γ and γ = s_{i1} s_{i2} ··· s_{iℓ}, then σ = s_{i1} s_{i2} ··· ŝ_{ik} ··· s_{iℓ} for some k, where the hat means s_{ik} is deleted from the expression.

0.3. Young’s Poset on Partitions.

0.28. Definition. The Young order on partitions is given by λ ≤ µ ⟺ λ ⊆ µ when viewed as Ferrers diagrams. Equivalently, the covering relations are

λ ⋖ µ ⟺ µ = λ + an addable corner

0.29. Definition. Given a rectangle (m^n), the order ideal generated by (m^n) is {λ ⊆ (m^n)} with the induced containment order. Its rank generating function is given by

Σ_{λ ⊆ (m^n)} q^{|λ|}

where |λ| = Σ_i λi.

0.30. Example. The induced sub-poset of (2^3) = (2, 2, 2) has minimal element ∅ (Hasse diagram omitted), and thus the rank generating function is

1 + q + 2q^2 + 2q^3 + 2q^4 + q^5 + q^6

0.31. Definition. We define the quantum binomial or Gaussian polynomial to be

[m + n choose n]_q := [m + n]_q! / ([m]_q! [n]_q!)

0.32. Example.

[2 choose 1]_q = [2]_q! / ([1]_q! [1]_q!) = (1)(1 + q) / (1 · 1) = 1 + q

0.33. Proposition. Given m, n ∈ N, we have

Σ_{λ ⊆ (m^n)} q^{|λ|} = [m + n choose n]_q

Proof. The proof is given by an inductive argument on m + n. For the inductive step, one breaks up the set {λ ⊆ (m^n)} into the set of all λ’s containing the top left cell and the set of all λ’s that do not, thus giving

Σ_{λ ⊆ (m^n)} q^{|λ|} = q^n Σ_{λ ⊆ ((m−1)^n)} q^{|λ|} + Σ_{λ ⊆ (m^{n−1})} q^{|λ|}  □

1. Tableaux

1.1. Tableaux Basics.

1.1. Proposition. A saturated chain in Young’s poset is given by

∅ ⋖ λ^1 ⋖ λ^2 ⋖ ··· ⋖ λ^ℓ

where λ^i/λ^{i−1} is one cell.
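(Returning to Proposition 0.33 for a moment: the identity can be checked by machine for small boxes. A sketch, with all names ours; polynomials are represented as coefficient lists, and the Gaussian binomial is computed via the standard q-Pascal recurrence [a choose b]_q = [a−1 choose b−1]_q + q^b [a−1 choose b]_q — a standard fact not proved above, used here to avoid polynomial division.)

```python
def box_partitions(m, n):
    """All partitions fitting in the box (m^n): at most n parts, each <= m."""
    def rec(prefix, max_part, rows_left):
        yield prefix
        if rows_left:
            for p in range(1, max_part + 1):
                yield from rec(prefix + [p], p, rows_left - 1)
    yield from rec([], m, n)

def rank_gf(m, n):
    """Coefficient list of the rank generating function of the box (m^n)."""
    coeffs = [0] * (m * n + 1)
    for lam in box_partitions(m, n):
        coeffs[sum(lam)] += 1
    return coeffs

def qbinom(a, b):
    """Gaussian binomial [a choose b]_q via the q-Pascal recurrence."""
    if b == 0 or b == a:
        return [1]
    left, right = qbinom(a - 1, b - 1), qbinom(a - 1, b)
    out = [0] * max(len(left), len(right) + b)
    for i, c in enumerate(left):
        out[i] += c
    for i, c in enumerate(right):
        out[i + b] += c
    return out

# Example 0.30: the box (2^3) gives 1 + q + 2q^2 + 2q^3 + 2q^4 + q^5 + q^6.
assert rank_gf(2, 3) == [1, 1, 2, 2, 2, 1, 1] == qbinom(5, 3)
# Example 0.32:
assert qbinom(2, 1) == [1, 1]    # 1 + q
```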

1.2. Definition. A standard Young tableau of shape λ is given by a saturated chain in Young’s poset where each λ^i/λ^{i−1} is filled with the number i. Alternatively, one can define a standard Young tableau of shape λ ⊢ n to be a filling of λ with {1, 2, . . . , n} such that rows and columns are strictly increasing.

1.3. Example. The chain ∅ ⋖ (1) ⋖ (2) ⋖ (2, 1) ⋖ (3, 1) ⋖ (3, 2) corresponds to the standard Young tableau

3 5
1 2 4

1.4. Definition. Given a partition λ, the hook length of a cell x = (i, j) ∈ λ, denoted hλ(x), is given by the number of cells in row i east of x, plus the number of cells in column j north of x, plus 1.

1.5. Example. (The first diagram of this example, a shape with a shaded cell of hook length 2 + 2 + 1 = 5, is omitted.) The following diagram shows the shape (3, 2) filled in with the hook length of each cell:

2 1
4 3 1
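Hook lengths, and the count n!/∏ hλ(x) that Proposition 1.6 below identifies as the number of standard Young tableaux, are easy to compute; a sketch (French convention, row 0 at the bottom; names ours):

```python
from math import factorial

def hooks(lam):
    """Hook length of each cell (r, c): arm to the east + leg to the north + 1."""
    conj = [sum(1 for part in lam if part > c) for c in range(lam[0])]
    return {(r, c): (lam[r] - c - 1) + (conj[c] - r - 1) + 1
            for r in range(len(lam)) for c in range(lam[r])}

def num_syt(lam):
    """Hook length formula: n! divided by the product of all hook lengths."""
    prod = 1
    for v in hooks(lam).values():
        prod *= v
    return factorial(sum(lam)) // prod

# The shape (3, 2) of Example 1.5, bottom row hooks 4, 3, 1 and top row 2, 1:
assert [hooks((3, 2))[(0, c)] for c in range(3)] == [4, 3, 1]
assert [hooks((3, 2))[(1, c)] for c in range(2)] == [2, 1]
assert num_syt((3, 2)) == 5
```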

1.6. Proposition. Let f^λ be the number of standard Young tableaux of shape λ ⊢ n. Then,

f^λ = n! / ∏_{x ∈ λ} hλ(x)

1.2. Robinson–Schensted–Knuth (RSK) Correspondence. We will discuss a bijection between matrices with entries in N0 with finite support and pairs of semistandard Young tableaux (P, Q) of the same shape.

1.7. Definition. Let T = (T_{i,j}) be a tableau and x be a positive integer. We define row insertion, denoted T ← x, to be the algorithm that takes T and adds a cell labeled x as follows.
(1) Go to row 1. Pick r1 minimal such that T_{1,r1} > x, so that T_{1,r1−1} ≤ x (if T_{1,1} > x, take r1 = 1).
(2) If no such T_{1,r1} exists, add a cell labeled x at the end of the row.
(3) Otherwise, replace T_{1,r1} with x to get T′. Then, do T′ ← T_{1,r1} on row 2.
(4) Repeat this process until either the bumped entry can be placed at the end of the row into which it is bumped, or until it is bumped out of the final row, in which case it forms a new row with one entry.

1.8. Example. Let T be the tableau with rows, bottom to top,

8
5 6 6
1 1 2 4

Then,

T ← 3 =
8
5
4 6 6
1 1 2 3

and

T ← 5 =
8
5 6 6
1 1 2 4 5

Explicitly, in the first case: inserting 3 into the first row bumps the 4 (that row becomes 1 1 2 3); the 4 bumps the 5 out of the second row (which becomes 4 6 6); the 5 bumps the 8 out of the third row (which becomes 5); and the 8 forms a new top row.

Notice that, in both cases, we ended up with a semistandard Young tableau.
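The algorithm of Definition 1.7 can be transcribed directly; a sketch in Python (rows stored bottom-to-top as lists, per the French convention; names ours):

```python
def row_insert(T, x):
    """Return T <- x for a semistandard tableau T (list of rows, bottom row
    first).  The bumped entry moves up one row at each step."""
    T = [list(row) for row in T]        # do not mutate the input
    for row in T:
        # Leftmost entry strictly greater than x, if any.
        j = next((j for j, y in enumerate(row) if y > x), None)
        if j is None:                   # x lands at the end of this row
            row.append(x)
            return T
        row[j], x = x, row[j]           # bump, continue in the next row up
    T.append([x])                       # bumped out of the final row
    return T

# Example 1.8:
T = [[1, 1, 2, 4], [5, 6, 6], [8]]
assert row_insert(T, 3) == [[1, 1, 2, 3], [4, 6, 6], [5], [8]]
assert row_insert(T, 5) == [[1, 1, 2, 4, 5], [5, 6, 6], [8]]
```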

1.9. Lemma. If T is a semistandard Young tableau, then so is T ← x.

Proof. From the algorithm, it is immediate that the rows will still be weakly increasing, so we need only check that the columns are strictly increasing. To do this, we will define the notion of an “insertion path.”

1.10. Definition. The insertion path or bumping route of the row-insertion T ← x is the collection of the cells (1, r1), (2, r2),..., (k, rk) where (r1, . . . , rk) come from the row-bumping algorithm.

1.11. Example. For the insertion T ← 3 of Example 1.8, the bumping path is ((1, 4), (2, 1), (3, 1), (4, 1)) — in the result, these are the cells containing 3, 4, 5, 8.

We want to show that the insertion path (starting at row 1) always moves in a weakly westward direction.

Let us say an entry b, originally in cell (n, m), is bumped into cell (n + 1, m′), and suppose m′ > m. By the construction of the algorithm, m′ is the largest integer such that T_{n+1,m′−1} ≤ b. Since b sat in cell (n, m) and rows weakly increase, T_{n,m′−1} ≥ b and T_{n,m′} ≥ b; by the semistandardness of T (columns strictly increase), this also gives T_{n+1,m′} > b. But the same reasoning gives T_{n+1,m′−1} > T_{n,m′−1} ≥ b, which contradicts the inequality T_{n+1,m′−1} ≤ b above. Hence the insertion path moves weakly westward, and the columns of T ← x remain strictly increasing. □

1.12. Lemma. Let T be a semistandard Young tableau. If j ≤ k, then the insertion path of T ← j is strictly to the left of the insertion path of (T ← j) ← k.

1.13. Remark. This lemma can be strengthened, but we are content with the above. See [Ful97, p 9] for a more complete version. Our proof is adapted from the proof in [Ful97].

Proof of Lemma. Suppose j ≤ k and j bumps an element x from the first row. The element y bumped by k from the first row must lie strictly to the right of the cell where j bumped x, by the semistandardness of T and the construction of the row-bumping algorithm (j lands in the cell that held x, and k in the cell that held y, strictly to its right). Furthermore, we get x ≤ y, and so the argument continues from row to row. □

1.14. Remark. An important fact that underlies the RSK correspondence is that this row-bumping process is invertible, provided one knows the insertion path. For instance, the result of Example 1.11,

8
5
4 6 6
1 1 2 3

can be undone by simply shifting every entry in the path down, recovering the original tableau

8
5 6 6
1 1 2 4

together with the bumped-out entry 3. In fact, one can do this knowing only the location of the cell that was added to the diagram, simply by running the algorithm backwards: the 8 in the new top row reverse-bumps the 5, the 5 reverse-bumps the 4, and the 4 reverse-bumps the 3 out of the first row. In RSK, we will use another tableau to record this information about when boxes were added.

1.15. Definition (Robinson–Schensted). Given a word w = w1w2 ··· wℓ, let P(w) be the insertion tableau

P(w) := ((··· ((w1 ← w2) ← w3) ← ···) ← wℓ−1) ← wℓ

and let Q(w) be the recording tableau, whose entries are 1, . . . , ℓ and where k is placed in the box that is added at the kth step in the construction of P(w).

1.16. Example. Let w = 5482. Then, we get the successive pairs (P, Q):

5 , 1

5     2
4 ,   1

5       2
4 8 ,   1 3

5       4
4       2
2 8 ,   1 3

where the last pair is (P(w), Q(w)).

The fact that this is a correspondence between words and pairs of tableaux comes from the reversibility of the row-bump combined with knowing the inserted cell. In the example above, Q is a standard tableau. We will prove a more general correspondence, which will get us what we need about Q.

1.17. Definition. Given a (possibly infinite) matrix A with entries in N0, finitely many of them nonzero, we define the two-line array

wA :=
i1 ··· im
j1 ··· jm

in which, for any pair (i, j) indexing an entry A_{i,j} of A, there are A_{i,j} columns equal to (i, j), and the columns are ordered so that
• i1 ≤ ··· ≤ im, and
• if ir = is and r ≤ s, then jr ≤ js.

1.18. Example.

A =
1 0 2
0 3 1
0 0 1

→ wA =
1 1 1 2 2 2 2 3
1 3 3 2 2 2 3 3

1.19. Definition (Robinson–Schensted–Knuth). Given a matrix A with entries in N0 with finite support, construct the two-line array wA as above. Then, we construct a sequence of pairs of tableaux where

(P^0, Q^0) = (∅, ∅), P^{t+1} = P^t ← j_{t+1},

and Q^{t+1} = Q^t with i_{t+1} added in the cell that appeared in forming P^{t+1}. Then, we say that A corresponds to (P^m, Q^m).

1.20. Example. Continue with A, wA as in the example above. Writing the rows of each tableau top to bottom, separated by slashes, we get

(P^0, Q^0) = (∅, ∅)
(P^1, Q^1) = ( 1 , 1 )
(P^2, Q^2) = ( 1 3 , 1 1 )
(P^3, Q^3) = ( 1 3 3 , 1 1 1 )
(P^4, Q^4) = ( 3 / 1 2 3 , 2 / 1 1 1 )
(P^5, Q^5) = ( 3 3 / 1 2 2 , 2 2 / 1 1 1 )
(P^6, Q^6) = ( 3 3 / 1 2 2 2 , 2 2 / 1 1 1 2 )
(P^7, Q^7) = ( 3 3 / 1 2 2 2 3 , 2 2 / 1 1 1 2 2 )
(P^8, Q^8) = ( 3 3 / 1 2 2 2 3 3 , 2 2 / 1 1 1 2 2 3 )

1.21. Lemma. Q^t constructed above is a semistandard Young tableau.

Proof. The rows of Q^t are certainly weakly increasing since the first row of wA is weakly increasing by construction. For columns strictly increasing, we observe that if ik = ik+1, then jk ≤ jk+1, and so by Lemma 1.12 the insertion path of M ← jk is strictly to the left of that of (M ← jk) ← jk+1; hence equal entries of Q are never stacked in a column, and the columns must be strictly increasing. □

1.22. Theorem. The RSK correspondence is a bijective correspondence between matrices with entries in N0 with finite support and pairs of semistandard Young tableaux of the same shape.

Proof. To show we have a correspondence, we will construct an inverse algorithm. Given (P, Q) = (P^m, Q^m), we do the following.

(1) Consider Q_{r,s}, the rightmost entry among the largest elements of Q (since Q can have repeated entries). Then, set Q^{m−1} = Q^m with the cell (r, s) removed, and im = Q_{r,s}.
(2) The cell (r, s) is the last spot in the insertion path of P^{m−1} ← jm.
(3) Find P_{r−1,t}, the rightmost element of row r − 1 that is smaller than P_{r,s}, and reverse-bump; continue row by row until an entry jm is pushed out of the first row.
(4) Repeat this process until wA is recovered.

1.23. Example. (Rows written top to bottom, separated by slashes.)
(a) Q = ( 2 2 / 1 1 1 ) implies i5 = 2, with the rightmost 2 the cell to remove.
(b) P = ( 3 3 / 1 2 2 ).
(c) Reverse-bumping the rightmost 3 pushes the rightmost 2 out of the first row, leaving ( 3 / 1 2 3 ), so j5 = 2.

We also want to show that this inverse is well-defined, that is, if ik = ik+1, then jk ≤ jk+1. If ik = Q_{r,s} and ik+1 = Q_{u,v}, then r ≥ u and s < v. Then, it must be that the reverse insertion path out of cell (r, s) stays strictly to the left of the one out of cell (u, v). Thus, in the first row, jk exits weakly to the left of jk+1, and since rows weakly increase, jk ≤ jk+1. □

2. Symmetric Polynomials and Symmetric Functions

2.1. Introduction to Symmetric Polynomials. Let R be a commutative ring with unity and let x1, x2, . . . , xn be indeterminates.

2.1. Definition. The ring of polynomials in n variables with coefficients in R is

R[x1, x2, . . . , xn] = { Σ_α cα x^α | α ∈ N^n }

where x^α = x1^{α1} x2^{α2} ··· xn^{αn}, with multiplication x^α x^β = x^{α+β}.

2.2. Definition. We say f ∈ R[x1, . . . , xn] has homogeneous degree d if

f = Σ_{|α| = d} cα x^α

2.3. Example. 3x1^2 + 19x1x3 has homogeneous degree 2.

2.4. Proposition. Let

R^d[x1, . . . , xn] := {f ∈ R[x1, . . . , xn] | f has homogeneous degree d}

Then, R[x1, . . . , xn] is a graded ring with

R[x1, . . . , xn] = ⊕_{d ≥ 0} R^d[x1, . . . , xn]

Proof. Let f ∈ R^a[x1, . . . , xn] and g ∈ R^b[x1, . . . , xn]. Then, fg ∈ R^{a+b}[x1, . . . , xn]. □

2.5. Proposition. There is a degree-preserving Sn-action on R[x1, . . . , xn] given by

σ.f(x1, . . . , xn) = f(x_{σ(1)}, . . . , x_{σ(n)})

for σ ∈ Sn.

2.6. Example. Let σ = (1, 3, 2) in one-line notation. Then, for

g(x) = 3x1^2 + 19x1x3 − 7x1^4 + 18x2x3 ∈ Z[x1, x2, x3]

we have

σg = 3x1^2 + 19x1x2 − 7x1^4 + 18x2x3

2.7. Definition. The ring of symmetric polynomials in n indeterminates is

Λn := {f ∈ R[x1, . . . , xn] | σf = f for all σ ∈ Sn}

This is a subring of R[x1, . . . , xn] and is sometimes denoted R[x1, . . . , xn]^{Sn} (meaning the fixed points of the Sn-action).

2.8. Example. x1^2 + x2^2 + x3^2 − 3 and 3x1 + 3x2 + 3x3 − 9x1x2x3 are both in Λ3. However, x1 + x2 is not in Λ3 since the permutation (1, 3, 2) (one-line) yields x1 + x3.

2.9. Corollary.

Λn = ⊕_{d ≥ 0} Λn^d where Λn^d := {f ∈ Λn | f has homogeneous degree d}

Furthermore, if R is a field, say k, then Λn is a graded algebra since k[x1, . . . , xn] is a graded algebra.

2.10. Remark. In particular, this makes Λn a graded vector space over k. Thus, we can ask about various k-bases of Λn.

2.11. Definition. For λ ⊢ d, the monomial symmetric polynomial is

mλ(x1, . . . , xn) := Σ_{α ∈ N^n, α ∼ λ} x^α

2.12. Example.
• m1(x1, x2) = x1 + x2 since (1, 0) ∼ (0, 1).
• m211(x1, x2, x3) = x1^2 x2 x3 + x1 x2^2 x3 + x1 x2 x3^2 since (2, 1, 1) ∼ (1, 2, 1) ∼ (1, 1, 2).
• m211(x1, x2) = 0 because no α ∈ N^2 can be rearranged to (2, 1, 1).

2.13. Proposition. {mλ(x1, . . . , xn)}_{λ ⊢ d} is a basis for Λn^d.

Proof. Each element of our set is symmetric, since every monomial coefficient is 1 and every permutation of x^λ occurs in mλ. Also, linear independence is immediate: distinct mλ’s involve disjoint sets of monomials. Thus, it suffices to show that the monomial symmetric polynomials span Λn^d.

Given f ∈ Λn^d, we write

f = Σ_{|α| = d} cα x^α where cα = c_{σ(α)} for all σ ∈ Sn

In particular, cα = cλ for α ∼ λ — in other words, where λ is the partition rearrangement of α. Thus,

f = Σ_{λ ⊢ d} cλ ( Σ_{α ∼ λ} x^α ) = Σ_{λ ⊢ d} cλ mλ(x1, . . . , xn)  □

2.14. Corollary. An immediate corollary to this result is that any basis for Λn^d can be indexed by λ ⊢ d.

2.15. Definition. For α a partition, we establish the convention

x^α := x1^{α1} x2^{α2} ··· xn^{αn}

2.16. Definition. For r ≥ 0, the elementary symmetric polynomial is

er(x1, x2, . . . , xn) := Σ_{1 ≤ i1 < i2 < ··· < ir ≤ n} x_{i1} x_{i2} ··· x_{ir}

and, for λ ⊢ d with λ = (λ1, . . . , λℓ),

eλ(x1, . . . , xn) = e_{λ1}(x1, . . . , xn) e_{λ2}(x1, . . . , xn) ··· e_{λℓ}(x1, . . . , xn)

2.17. Example.

e1(x1, x2, x3) = x1 + x2 + x3 = m1(x1, x2, x3)

e2(x1, x2, x3) = x1x2 + x1x3 + x2x3 = m11(x1, x2, x3)

e21(x1, x2, x3) = (x1x2 + x1x3 + x2x3)(x1 + x2 + x3)
= x1^2 x2 + x1 x2^2 + x1x2x3 + x1^2 x3 + x1x2x3 + x1 x3^2 + x1x2x3 + x2^2 x3 + x2 x3^2
= m21(x1, x2, x3) + 3 m111(x1, x2, x3)
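Such expansions can be verified symbolically by encoding a polynomial as a dictionary from exponent vectors to coefficients; a sketch (helper names ours):

```python
from itertools import combinations, permutations

def poly_mul(p, q):
    """Multiply two polynomials stored as {exponent tuple: coefficient}."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            key = tuple(s + t for s, t in zip(a, b))
            out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

def poly_add(p, q, c=1):
    """p + c*q, dropping zero coefficients."""
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + c * v
    return {k: v for k, v in out.items() if v}

def e(r, n):
    """Elementary symmetric polynomial e_r in n variables."""
    return {tuple(1 if i in S else 0 for i in range(n)): 1
            for S in combinations(range(n), r)}

def m(lam, n):
    """Monomial symmetric polynomial m_lambda in n variables."""
    alpha = tuple(lam) + (0,) * (n - len(lam))
    return {a: 1 for a in set(permutations(alpha))}

# Example 2.17: e_21 = m_21 + 3 m_111 in three variables.
assert poly_mul(e(2, 3), e(1, 3)) == poly_add(m((2, 1), 3), m((1, 1, 1), 3), c=3)
```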

2.18. Proposition. er = m_{1^r}, and {er(x1, . . . , xn)}_{r ∈ N} cannot form a basis of Λn.

Proof. By definition, the elementary symmetric polynomial er is a symmetric polynomial whose monomials have degree r, with no variable occurring more than once in each monomial. Thus, it must be that er = m_{1^r}. Furthermore, by the corollary to the monomial symmetric polynomials forming a basis, {er}_{r ∈ N} cannot form a basis: each er lies in Λn^r, and so there simply are not enough of them in each Λn^d. □

2.19. Proposition. The set {eλ(x1, . . . , xn)}_{λ ⊢ d} is a basis for Λn^d.

Proof. Since this set is indexed by λ ⊢ d, it suffices to show that the set spans Λn^d. Since {mλ} is a basis, it suffices to show

mλ = Σ_µ cλ,µ eµ for some cλ,µ.

Instead, we will show that

eη = Σ_µ Mη,µ mµ, where (Mη,µ)_{η,µ} is a unitriangular matrix.

To do this, we will need the combinatorics of (0, 1)-matrices.

2.20. Definition. Given a (0, 1)-matrix A, let the row sum be

row(A) := (r1, r2, . . . , rn), where ri = number of 1’s in row i

and the column sum be

col(A) := (c1, . . . , cn), where cj = number of 1’s in column j

Finally, let Bαβ be the number of (0, 1)-matrices with row sum vector α and column sum vector β.

2.21. Example. Some matrices with row(A) = (2, 1, 1, 0):

1 1 0 0    1 1 0 0    1 1 0 0    1 1 0 0
1 0 0 0    0 1 0 0    1 0 0 0    1 0 0 0
1 0 0 0    1 0 0 0    0 0 1 0    0 1 0 0
0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0

The second and fourth matrices have col(A) = (2, 2, 0, 0).

Then, we claim

e_{ν′} = mν + Σ_{µ ◁ ν} B_{ν′µ} mµ

To see this, first consider one term. Let A* be the (0, 1)-matrix with row sum ν′ in which all the ones are left-justified.

2.22. Example. If ν′ = (2, 1, 1, 0), then

A* =
1 1 0 0
1 0 0 0
1 0 0 0
0 0 0 0

Note that col(A*) = (3, 1) = ν.

Such a matrix has col(A*) = ν, and it is the unique (0, 1)-matrix with row(A) = ν′ and col(A) = ν: if A has ν1 ones in column one and row(A) = ν′, then the first ℓ(ν′) = ν1 rows each have a one in column one, so all ones in column 1 are north-justified; iterating this process column by column gives the uniqueness result. Any other matrix A with row(A) = ν′ will have column sum strictly smaller in dominance than ν, since the ones in A* are pushed all the way to the left. Thus, let η = ν′.

Then,

eη = e_{η1}(x1, . . . , xn) e_{η2}(x1, . . . , xn) ··· e_{ηℓ}(x1, . . . , xn)
= ( Σ_{i1 < ··· < i_{η1}} x_{i1} ··· x_{i_{η1}} )( Σ_{j1 < ··· < j_{η2}} x_{j1} ··· x_{j_{η2}} ) ··· ( Σ_{z1 < ··· < z_{ηℓ}} x_{z1} ··· x_{z_{ηℓ}} )

Each choice of one monomial from each factor corresponds to a (0, 1)-matrix A with row(A) = η, where the i1, i2, . . . , i_{η1} entries of row 1 are ones, the j1, j2, . . . , j_{η2} entries of row 2 are ones, etc., and thus

eη = Σ_{A a (0,1)-matrix with row(A) = η} x^{col(A)} = Σ_{µ ⊢ d} B_{η,µ} mµ

2.23. Example.

e211 = (x1x2 + x1x3 + x2x3)(x1 + x2 + x3)(x1 + x2 + x3) = Σ_{α = col(A)} x^α

So, for instance,

A =
0 1 1
1 0 0
1 0 0

gives x^{col(A)} = x1^2 x2 x3. □

2.24. Corollary (Fundamental Theorem of Symmetric Polynomials). Λn with coefficients in R is a polynomial ring in the n elementary symmetric polynomials, that is, Λn ≅ R[e1, . . . , en] as commutative rings.

Proof. By the above, for f ∈ Λn^d, we write

f = Σ_{µ ⊢ d} cµ eµ(x1, . . . , xn)

However, since e_{µ1}, e_{µ2}, . . . commute, we may define e^α := e1^{α1} e2^{α2} ··· en^{αn}.

2.25. Example. e3222 = e3 e2 e2 e2 = e1^0 e2^3 e3^1 = e^{(0,3,1)}.

Then, we get that

f = Σ cµ e^α

where α1 is the number of 1’s in µ, α2 is the number of 2’s in µ, etc. This shows that f ∈ R[e1, . . . , en]; thus Λn ⊆ R[e1, . . . , en]. However, since e1, . . . , en are all symmetric, it must be that R[e1, . . . , en] ⊆ Λn. Thus, we get the desired equality. □

2.26. Definition. For r ∈ N, the complete symmetric polynomial is

hr(x1, . . . , xn) := Σ_{1 ≤ i1 ≤ ··· ≤ ir ≤ n} x_{i1} ··· x_{ir}

and, for λ a partition, the homogeneous symmetric polynomials are

hλ := h_{λ1} h_{λ2} ··· h_{λℓ}

2.27. Example.

h1(x1, x2, x3) = x1 + x2 + x3
h2(x1, x2, x3) = x1^2 + x1x2 + x2^2 + x2x3 + x3^2 + x1x3 = m2(x1, x2, x3) + m11(x1, x2, x3)

2.28. Proposition. hr(x1, . . . , xn) = Σ_{λ ⊢ r} mλ

Proof. This follows immediately by grouping the terms of hr appropriately.  We now wish to show the following.

2.29. Proposition. {hλ(x1, . . . , xn)}_{λ ⊢ d} is a basis for Λn^d.

We will give a proof by generating functions.

2.30. Lemma. We have the following generating functions:

E(t) = ∏_i (1 + xi t) = Σ_{k ≥ 0} t^k ek

H(t) = ∏_i 1/(1 − xi t) = Σ_{k ≥ 0} t^k hk

Proof of Lemma. Observe first

E(t) = ∏_i (1 + xi t) = (1 + x1 t)(1 + x2 t) ··· (1 + xn t) = Σ_k t^k Σ_{1 ≤ i1 < ··· < ik ≤ n} x_{i1} x_{i2} ··· x_{ik} = Σ_k t^k ek

Similarly,

1/(1 − xi t) = Σ_{k ≥ 0} x_i^k t^k

and thus

∏_i 1/(1 − xi t) = ∏_i ( Σ_{k ≥ 0} x_i^k t^k ) = Σ_{k ≥ 0} t^k hk  □
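As a sanity check on these generating functions: since E(−t)H(t) = 1, comparing coefficients of t^n forces Σ_{j=0}^{n} (−1)^j h_{n−j} ej = 0 for n > 0 (this is derived in the next proof), which is easy to confirm numerically at any point; a sketch (helper names ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(r, xs):
    """Elementary symmetric polynomial e_r evaluated at the point xs."""
    return sum(prod(c) for c in combinations(xs, r))

def h(r, xs):
    """Complete symmetric polynomial h_r evaluated at the point xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, r))

xs = (2, 3, 5)
for n in range(1, 7):
    # e_j = 0 for j > len(xs), matching E(t) being a finite product.
    assert sum((-1) ** j * h(n - j, xs) * e(j, xs) for j in range(n + 1)) == 0

# The n = 2 case below: e_2 = h_1^2 - h_2.
assert e(2, xs) == h(1, xs) ** 2 - h(2, xs)
```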

Proof of Proposition 2.29. Since H(t) = 1/E(−t), we get

H(t)E(−t) = 1 ⟹ ( Σ_i t^i hi )( Σ_j (−t)^j ej ) = 1 + 0·t + 0·t^2 + ···
⟹ Σ_{j=0}^{n} (−1)^j h_{n−j} ej = 0, for all n > 0

So, we check:

n = 1: h1 − e1 = 0 ⟹ e1 = h1
n = 2: h2 − e1h1 + e2 = 0 ⟹ e2 = e1h1 − h2 = h1^2 − h2

and so we can iterate to show that each er is a sum of hµ’s. Thus, we get that {hµ} spans Λn. □

2.2. Symmetric Functions. Notice that we can always get a homomorphism of rings Λn+1 ↠ Λn by setting xn+1 = 0.

2.31. Example.

e21(x1, x2, x3, x4) = (x1x2 + x1x3 + x1x4 + ···)(x1 + x2 + x3 + x4)
= m21(x1, x2, x3, x4) + 3m111(x1, x2, x3, x4)
↦ e21(x1, x2, x3) = (x1x2 + x1x3 + x2x3)(x1 + x2 + x3)
= m21(x1, x2, x3) + 3m111(x1, x2, x3)

In fact, one can check that mλ(x1, . . . , xn+1) ↦ mλ(x1, . . . , xn), which is nonzero provided ℓ(λ) ≤ n. Thus,

2.32. Proposition. Λn^d ≅ Λ_{n+1}^d as R-modules provided n ≥ d.

Thus, the number of variables we work in is largely irrelevant provided there are sufficiently many. In order to avoid this formal annoyance, we will often work with “infinite symmetric polynomials,” which we construct in the following section.

2.33. Definition. Let R be a commutative ring with unity. Then, for an infinite collection of indeterminates {x1, x2, . . .}, we say R[[x1, x2, . . .]] is the ring of formal power series over R, with elements given as (formal) infinite sums of monomials in the indeterminates, addition given componentwise, and multiplication given by extending polynomial multiplication. Note that multiplication is well-defined since, given a product of infinite series

( Σ_{α ∈ N^∞} cα x^α )( Σ_{β ∈ N^∞} dβ x^β ),

the coefficient of x^γ for γ ∈ N^∞ has only finitely many contributions from the cα’s and dβ’s.

2.34. Definition. Given α, β ∈ N^∞, we say α ∼ β if the underlying multisets of entries are equal (not counting 0).

2.35. Example. (2, 0, 1, 0) ∼ (1, 2, 0).

2.36. Definition. We say f ∈ R[[x1, x2, . . .]] is a symmetric infinite series if the coefficient of x^α is equal to the coefficient of x^β whenever α ∼ β.

2.37. Example. The element Σ_{d ≥ 0} Σ_{i ≥ 1} x_i^d is a symmetric infinite series.

2.38. Definition. A symmetric polynomial in infinitely many variables (usually called a symmetric function) is a symmetric infinite series with bounded degree. The set of these symmetric functions forms a ring of symmetric functions, denoted Λ.

2.39. Example. Let b ∈ N. Then, Σ_{d = 0}^{b} Σ_{i ≥ 1} x_i^d is a symmetric function.

2.40. Remark. For the categorically inclined, the symmetric functions can also be constructed as an inverse limit by taking

Λ^d = lim← Λn^d

in the category of Z-modules and then setting

Λ = ⊕_{k ≥ 0} Λ^k

This requires setting up the necessary maps, but this is not so hard given the above for those who wish to use inverse limits. Note, however, that Λ is not the inverse limit of the Λn in the category of rings; such an inverse limit would be the ring of symmetric infinite series. However, Λ is the inverse limit of the Λn in the category of graded rings. We wish to extend our definitions for bases of symmetric polynomials to our ring of symmetric functions.

2.41. Definition. Let x = (x1, x2, . . .). Then, we define
(a) for λ a partition,

mλ(x) = Σ_{α ∼ λ} x^α

(b) for r ∈ N,

er(x) = Σ_{1 ≤ i1 < i2 < ··· < ir} x_{i1} x_{i2} ··· x_{ir}

(c) for r ∈ N,

hr(x) = Σ_{|λ| = r} mλ = Σ_{1 ≤ i1 ≤ i2 ≤ ··· ≤ ir} x_{i1} x_{i2} ··· x_{ir}

2.42. Example. h2(x) = m(2)(x) + m(1,1)(x) = x1^2 + x2^2 + ··· + x1x2 + x1x3 + ··· + x2x3 + ···

Finally, we wish to introduce one more family of symmetric functions.

2.43. Definition. For any r ∈ N, we define the power sum symmetric functions

pr := Σ_i x_i^r

and, for any partition λ, we define

pλ = ∏_i p_{λi}

2.44. Proposition. The generating function for the pr is

P(t) = Σ_i xi/(1 − xi t)

and it satisfies the identities

P(t) = H′(t)/H(t), P(−t) = E′(t)/E(t)

Proof. By direct computation, we see

P(t) := Σ_{r ≥ 1} pr t^{r−1} = Σ_{i ≥ 1} Σ_{r ≥ 1} x_i^r t^{r−1} = Σ_i xi/(1 − xi t)

Furthermore, if we are clever at manipulating generating functions, we see

P(t) = Σ_i d/dt log( 1/(1 − xi t) ) = d/dt log( ∏_i 1/(1 − xi t) ) = d/dt log H(t) = H′(t)/H(t)

and

P(−t) = Σ_i d/dt log(1 + xi t) = d/dt log( ∏_i (1 + xi t) ) = d/dt log E(t) = E′(t)/E(t)  □

2.45. Corollary (Newton’s Identities). We have the following identities:

n hn = Σ_{r=1}^{n} pr h_{n−r}, n en = Σ_{r=1}^{n} (−1)^{r−1} pr e_{n−r}

Proof. These come from the proposition above by rearranging and expanding. For instance,

H′(t) = P(t)H(t) ⟹ Σ_{i ≥ 1} i hi t^{i−1} = ( Σ_{r ≥ 1} pr t^{r−1} )( Σ_{j ≥ 0} hj t^j )

and so, fixing a specific n and comparing coefficients of t^{n−1}, we have

n hn = Σ_{r+j=n} pr hj

giving us the desired identity. □
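Newton's identities can likewise be confirmed numerically at any point; a sketch (helper names ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(r, xs):
    return sum(prod(c) for c in combinations(xs, r))

def h(r, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, r))

def p(r, xs):
    return sum(x ** r for x in xs)

xs = (2, 3, 5, 7)
for n in range(1, 6):
    assert n * h(n, xs) == sum(p(r, xs) * h(n - r, xs)
                               for r in range(1, n + 1))
    assert n * e(n, xs) == sum((-1) ** (r - 1) * p(r, xs) * e(n - r, xs)
                               for r in range(1, n + 1))
```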

2.46. Theorem. The set {pλ}, over all partitions λ, is a basis for Λ over a field of characteristic 0.

Proof. By Newton’s identities,

n en = Σ_{j=1}^{n} (−1)^{j−1} pj e_{n−j}

If we rewrite this recurrence, we get

en = (1/n) Σ_{j=1}^{n} (−1)^{j−1} e_{n−j} pj

Thus, each en can be written as a rational linear combination of the power sums, giving that Λn ⊆ Q[p1, . . . , pn]. Therefore, the pλ’s form a basis and, because every power sum polynomial is symmetric, Λn ≅ Q[p1, . . . , pn]. □

2.47. Remark. An alternative proof is just to let C = (cλ,µ) be the matrix expressing pλ in terms of the monomial basis. One argues that

pλ = cλλ mλ + Σ_{µ ▷ λ} cλµ mµ

with cλλ ≠ 0. Then, if x1^{µ1} ··· xm^{µm} appears in

pλ = (x1^{λ1} + x2^{λ1} + ···)(x1^{λ2} + x2^{λ2} + ···) ···

then each µi must be a sum of λj’s, thus making µ larger in dominance order than λ. This proof is probably simpler than the one above, but has the disadvantage of not introducing Newton’s identities.

Finally, we note how pn’s are explicitly related to the hn’s and en’s.

2.48. Proposition. Given λ ⊢ n, let mi be the multiplicity of i in λ and set zλ = ∏_i i^{mi} mi!. Then,

hn = Σ_{λ ⊢ n} pλ/zλ

and

en = Σ_{λ ⊢ n} (−1)^{n−ℓ(λ)} pλ/zλ
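This proposition can be checked numerically with exact rational arithmetic before we prove it; a sketch (helper names ours):

```python
from fractions import Fraction
from math import factorial, prod
from collections import Counter
from itertools import combinations, combinations_with_replacement

def partitions(n, max_part=None):
    """All partitions of n, parts weakly decreasing."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for q in range(min(n, max_part), 0, -1):
        for rest in partitions(n - q, q):
            yield (q,) + rest

def z(lam):
    """z_lambda = prod over part sizes i of i^{m_i} * m_i!."""
    return prod(i ** mi * factorial(mi) for i, mi in Counter(lam).items())

def p(r, xs):
    return sum(x ** r for x in xs)

def p_lam(lam, xs):
    return prod(p(i, xs) for i in lam)

def h(r, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, r))

def e(r, xs):
    return sum(prod(c) for c in combinations(xs, r))

xs = (2, 3, 5, 7)
for n in range(1, 6):
    assert sum(Fraction(p_lam(lam, xs), z(lam))
               for lam in partitions(n)) == h(n, xs)
    assert sum(Fraction((-1) ** (n - len(lam)) * p_lam(lam, xs), z(lam))
               for lam in partitions(n)) == e(n, xs)
```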

Proof. We show this by writing each hk as a linear combination of pλ’s. Observe,

Σ_{k ≥ 0} hk(x) t^k = ∏_{i ≥ 1} 1/(1 − xi t)
= exp( Σ_i (− log(1 − xi t)) )
= exp( Σ_i Σ_{k ≥ 1} (xi t)^k / k )
= exp( Σ_{k ≥ 1} pk(x) t^k / k )
= ∏_{k ≥ 1} exp( pk(x) t^k / k )
= ∏_{k ≥ 1} Σ_{mk ≥ 0} (pk(x) t^k)^{mk} / (mk! · k^{mk})
= Σ_λ (1/zλ) pλ t^{|λ|}

Thus, we see that hn = Σ_{λ ⊢ n} pλ/zλ. The relation for en is a similar manipulation of the generating function E(t). □

2.49. Remark. Note that zλ is equal to the cardinality of the centralizer of an element of Sn with cycle type λ. Indeed, an i-cycle (1, 2, . . . , i) is fixed by i elements, and a product of mi disjoint i-cycles can be permuted in mi! ways. So then, why does this number show up here?

2.3. Schur functions.

2.50. Definition. The Vandermonde determinant is

∆ := det of the n × n matrix
x1^{n−1} x1^{n−2} ··· x1 1
x2^{n−1} x2^{n−2} ··· x2 1
⋮
xn^{n−1} xn^{n−2} ··· xn 1

2.51. Theorem. ∆ = ∏_{1 ≤ i < j ≤ n} (xi − xj)

Proof. Note first that deg ∆ = n(n−1)/2, and the right hand side Π_{1≤i<j≤n}(x_i − x_j) also has degree n(n−1)/2. Also, observe that if x_i = x_j, then ∆ = 0, since then two rows of the matrix would be identical. Therefore, (x_i − x_j) divides ∆, so

    ∆ = c · Π_{1≤i<j≤n} (x_i − x_j)

for some constant c; comparing the coefficient of x_1^{n−1} x_2^{n−2} · · · x_{n−1} on both sides gives c = 1. □
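The theorem is easy to spot-check numerically. A small Python sketch (the helper names are my own) compares the Leibniz expansion of the determinant with the product formula at an integer point:

```python
from itertools import permutations

def det(M):
    """Determinant by the Leibniz expansion; fine for tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

def vandermonde(xs):
    """Delta = det(x_i^{n-j}), rows (x^{n-1}, ..., x, 1)."""
    n = len(xs)
    return det([[x ** (n - 1 - j) for j in range(n)] for x in xs])

xs = (2, 3, 5)
prod = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        prod *= xs[i] - xs[j]
assert vandermonde(xs) == prod == -6
```

Any tuple of distinct integers works the same way; with a repeated entry both sides vanish, matching the argument in the proof.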

2.52. Definition. Given α = (α_1, . . . , α_n) ∈ ℕ^n, Jacobi's generalized Vandermonde determinant is ∆_α := det( x_i^{α_j} )_{1≤i,j≤n}.

2.53. Remark. We let ρ := (n − 1, n − 2, . . . , 1, 0). Then, ∆_ρ = ∆, the Vandermonde determinant.

2.54. Proposition. Let α = (α_1, . . . , α_n) ∈ ℕ^n. Then,

(a) α_i = α_j for some i ≠ j =⇒ ∆_α = 0.
(b) If α has distinct parts, then ∆_α is "alternating," that is, σ∆_α = (−1)^{ℓ(σ)} ∆_α for any permutation σ of [n].
(c) ∆_ρ divides ∆_α.

Proof. (a) If α_i = α_j, then our matrix has two equal columns, so its determinant is 0.
(b) This is immediate since s_i ∆_α swaps the ith and (i+1)st rows.
(c) If x_i = x_j, then ∆_α = 0, since our matrix has two equal rows. Thus, (x_i − x_j) must divide ∆_α. However, since i, j were arbitrary, we get that every such difference must divide ∆_α, so ∆_ρ divides ∆_α. □

Thus, now our following definition makes sense.

2.55. Definition. Given a partition λ, Jacobi's bialternant formula for a Schur function is

    s_λ(x_1, . . . , x_n) := ∆_{λ+ρ} / ∆_ρ

which is a polynomial by the above proposition.
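The bialternant formula can be evaluated exactly for small data. The following Python sketch (names mine) computes s_{21}(2, 3, 5) as ∆_{λ+ρ}/∆_ρ and compares it with the monomial expansion of s_{21} worked out in Example 2.58 below:

```python
from itertools import permutations

def det(M):
    """Determinant by the Leibniz expansion; fine for tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

def schur(lam, xs):
    """s_lambda(xs) via the bialternant formula Delta_{lam+rho}/Delta_rho."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    num = det([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return num // den  # exact here: the quotient is a polynomial

xs = (2, 3, 5)
# s_21 = sum_{i != j} x_i^2 x_j + 2 x_1 x_2 x_3
explicit = sum(a * a * b for a in xs for b in xs if a != b) + 2 * xs[0] * xs[1] * xs[2]
assert schur((2, 1), xs) == explicit == 280
```

The same helper will be reused below to spot-check the Pieri, Jacobi-Trudi, and Murnaghan-Nakayama rules.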

2.56. Remark. Both ∆_{λ+ρ} and ∆_ρ are alternating, so ∆_{λ+ρ}/∆_ρ is symmetric.

2.57. Theorem (Littlewood's Combinatorial Definition). For a partition λ ⊢ n,

    s_λ(x_1, . . . , x_n) = Σ_{T∈SSYT(λ) in alphabet [n]} x^{wt(T)}

In infinite variables,

    s_λ(x_1, x_2, . . .) = Σ_{T∈SSYT(λ)} x^{wt(T)}

Proof. Postponed. □

One can instead take the statement of the theorem to be the definition of the s_λ, if so inclined, and prove that it implies the bialternant formula; in fact, we will do this. See 2.74. We wish to show, using this construction, that the Schur functions form a basis for Λ.

2.58. Example. Consider s_{21}(x_1, x_2, x_3). The possible semistandard Young tableaux are

    2      3      3      2      3      3      2      3
    1 1 ,  1 1 ,  2 2 ,  1 2 ,  1 3 ,  2 3 ,  1 3 ,  1 2

and so

    s_{21}(x_1, x_2, x_3) = x_1^2 x_2 + x_1^2 x_3 + x_2^2 x_3 + x_1 x_2^2 + x_1 x_3^2 + x_2 x_3^2 + 2 x_1 x_2 x_3

2.59. Proposition. Given λ a partition of n,

    Σ_{T∈SSYT(λ)} x^{wt(T)}

is a symmetric function.

To prove this fact, we will need the following definition.
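The eight tableaux above can be recovered by brute force. A Python sketch (names mine) enumerates the SSYT of shape (2, 1) in the alphabet [3], storing rows bottom-up in French notation, and tallies the weights:

```python
from itertools import product
from collections import Counter

def ssyt(shape, n):
    """All SSYT of the given shape (row lengths listed bottom-up), entries
    in 1..n: rows weakly increase, columns strictly increase upward."""
    cells = sum(shape)
    out = []
    for vals in product(range(1, n + 1), repeat=cells):
        rows, i = [], 0
        for ln in shape:
            rows.append(vals[i:i + ln])
            i += ln
        ok = all(r[c] <= r[c + 1] for r in rows for c in range(len(r) - 1))
        ok = ok and all(rows[k][c] > rows[k - 1][c]
                        for k in range(1, len(rows)) for c in range(len(rows[k])))
        if ok:
            out.append(rows)
    return out

tabs = ssyt((2, 1), 3)
assert len(tabs) == 8
weights = Counter(tuple(sum(v == m for row in T for v in row) for m in (1, 2, 3))
                  for T in tabs)
assert weights[(1, 1, 1)] == 2   # coefficient of x1*x2*x3
assert weights[(2, 1, 0)] == 1   # coefficient of x1^2*x2
```

The multiplicities `weights[alpha]` are exactly the Kostka numbers K_{(2,1),α} defined next.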

2.60. Definition. Let the Kostka number K_{λα} be the number of semistandard Young tableaux of shape λ with weight α. In other words, |SSYT(λ, α)| = K_{λα}.

Proof of Proposition 2.59. For s_i a simple reflection in S_n, we will show that

    Σ_{T∈SSYT(λ)} x^{wt(T)} = s_i Σ_{T∈SSYT(λ)} x^{wt(T)}

We observe that

    Σ_{T∈SSYT(λ)} x^{wt(T)} = Σ_{α∈ℕ^∞} Σ_{T∈SSYT(λ,α)} x^α = Σ_{α∈ℕ^∞} K_{λα} x^α

Therefore, we need only show that K_{λ,α} = K_{λ,s_iα}. To show this, we will use the Bender-Knuth involution, defined below. Namely, there is an involution (thus a bijection) between SSYT(λ, α) and SSYT(λ, s_iα), so

    K_{λ,α} = |SSYT(λ, α)| = |SSYT(λ, s_iα)| = K_{λ,s_iα} □

2.61. Definition. The Bender-Knuth Involution is a map φi : SSYT(λ, α) → SSYT(λ, siα) given as follows:

(a) We define that entries i and i + 1 in T ∈ SSYT(λ, α) are married if they are in the same column. Otherwise, we say they are single.

2.62. Example. For i = 3, in

    T = 3 4 4 4
        2 2 2 3 3 4 4
        1 1 1 1 2 2 3 3 3

the red entries are married and the pink entries are married. All other entries are single.

(b) φ_i(T) is obtained by replacing, in each row, the single entries (a copies of i followed by b copies of i + 1) by b copies of i followed by a copies of i + 1.

2.63. Example. If T is the example above, then

    φ_3(T) = 3 3 4 4
             2 2 2 3 3 4 4
             1 1 1 1 2 2 3 4 4

2.64. Proposition. φi(T) ∈ SSYT(λ, siα).

Proof. We note that the shape λ is preserved by construction, so we need only show that

(a) columns strictly increase,
(b) rows weakly increase,
(c) the weight is s_iα.

Note that single entries in a row are contiguous and lie between married entries, so rows necessarily must be weakly increasing. That is, in a semistandard tableau T, when applying φ_x with y = x + 1 and z > y > x > w, a sufficiently generic row might look like

        y y z z
        x x x x y y y y
                w w x x

Thus, if x is single, it lies below z > x + 1, and if y is single, it lies above w < x. Therefore, the replacement is still strictly column increasing. Finally, there are an equal number of married i and i + 1 entries, and the number of single i's in each row is replaced by the number of single i + 1's in that row, and vice-versa. Thus, the weight is permuted. □
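The involution of Definition 2.61 can be implemented directly from (a) and (b). The sketch below (representation and names are my own; rows are stored bottom-up, in French notation) verifies, over all SSYT of shape (2, 1) in the alphabet [3], that φ_1 preserves semistandardness, swaps the first two weight entries, and is an involution:

```python
from itertools import product

def ssyt(shape, n):
    """All SSYT of the given shape (row lengths bottom-up), entries in 1..n."""
    cells = sum(shape)
    out = []
    for vals in product(range(1, n + 1), repeat=cells):
        rows, i = [], 0
        for ln in shape:
            rows.append(vals[i:i + ln])
            i += ln
        ok = all(r[c] <= r[c + 1] for r in rows for c in range(len(r) - 1))
        ok = ok and all(rows[k][c] > rows[k - 1][c]
                        for k in range(1, len(rows)) for c in range(len(rows[k])))
        if ok:
            out.append(rows)
    return out

def bender_knuth(rows, i):
    """phi_i: in each row, swap the counts of single i's and single (i+1)'s."""
    T = [list(r) for r in rows]
    def married(r, c):
        v = T[r][c]
        if v == i:       # an i is married to an i+1 directly above it
            return r + 1 < len(T) and c < len(T[r + 1]) and T[r + 1][c] == i + 1
        if v == i + 1:   # an i+1 is married to an i directly below it
            return r >= 1 and c < len(T[r - 1]) and T[r - 1][c] == i
        return False
    plan = []
    for r, row in enumerate(T):
        singles = [c for c, v in enumerate(row) if v in (i, i + 1) and not married(r, c)]
        a = sum(1 for c in singles if row[c] == i)   # number of single i's
        plan.append((singles, a))
    for r, (singles, a) in enumerate(plan):
        b = len(singles) - a                         # number of single (i+1)'s
        for k, c in enumerate(singles):              # b i's, then a (i+1)'s
            T[r][c] = i if k < b else i + 1
    return T

def weight(T, n):
    return tuple(sum(v == m for row in T for v in row) for m in range(1, n + 1))

for T in ssyt((2, 1), 3):
    U = bender_knuth(T, 1)
    w = weight(T, 3)
    assert weight(U, 3) == (w[1], w[0], w[2])          # weight is s_1-swapped
    assert bender_knuth(U, 1) == [list(r) for r in T]  # involution
```

For instance, `bender_knuth([[1, 1], [2]], 1)` returns `[[1, 2], [2]]`: the only single entry is the second 1 of the bottom row, and it becomes a 2.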

Thus, we have shown that Littlewood's combinatorial definition of Schur functions gives a symmetric polynomial. To see the equivalence with Jacobi's bialternant formula, we will actually prove the more general result.

2.65. Theorem. [Ste02, p 2]

    ∆_{λ+ρ} s_{µ/ν} = Σ_{T∈SSYT(µ/ν), T is λ-Yamanouchi} ∆_{λ+wt(T)+ρ}

where s_{µ/ν} and λ-Yamanouchi are defined below.

2.66. Definition. A skew Schur function for skew-shape µ/ν is given by

    s_{µ/ν} = Σ_{T∈SSYT(µ/ν)} x^{wt(T)}

and is symmetric by the Bender-Knuth involution.

2.67. Definition. We say a (skew) semistandard tableau T is a Yamanouchi tableau if, for all j, T_{≥j} (keeping only the columns east of column j − 1) has partition weight. This is also sometimes called a reverse lattice word.

2.68. Example. (a) The tableau

    4 4
    2 2 2 3
    1 1 1 1 1

is not Yamanouchi; it fails at the second column from the east, since wt(T_{≥4}) = (2, 0, 1) is not a partition.

(b) The unique semistandard tableau of shape λ and weight λ is Yamanouchi. In fact, it is the only semistandard tableau of "straight shape" that can be Yamanouchi; it is sometimes called the super-semistandard tableau of shape λ. For instance:

    2        2 2      2 2        3
    1 1  ,   1 1  ,   1 1 1  ,   2 2
                                 1 1 1

(c) Here are some Yamanouchi skew-tableaux of the following shape.

2 2 3 : 1 1 , 1 2 , 1 2 1 1 1

2.69. Definition. A tableau T is λ-Yamanouchi if, for all j, wt(T_{≥j}) + λ is a partition. (Addition is given by componentwise vector addition.)

2.70. Example. Take λ = (3, 2). Then,

    T = 2 2 3
        1 1 1 1

is λ-Yamanouchi since

    wt(T_{≥4}) + λ = (4, 2)
    wt(T_{≥3}) + λ = (5, 2, 1)
    wt(T_{≥2}) + λ = (6, 3, 1)
    wt(T_{≥1}) + λ = (7, 4, 1)

are all partitions.

Proof of 2.65; [Ste02]. We observe that

    ∆_{λ+ρ} s_{µ/ν} = ( Σ_{w∈S_n} (−1)^{ℓ(w)} x^{w(λ+ρ)} ) ( Σ_{T∈SSYT(µ/ν)} x^{wt(T)} )
                    = Σ_{w∈S_n} Σ_{T∈SSYT(µ/ν)} (−1)^{ℓ(w)} x^{w(λ+ρ+wt(T))}    (using that s_{µ/ν} is symmetric)
                    = Σ_{T∈SSYT(µ/ν)} ∆_{λ+ρ+wt(T)}

However, we want to show the result is true summing only over T that are λ-Yamanouchi. To do this, we will show that all the non-λ-Yamanouchi tableaux cancel each other out by finding an appropriate sign-reversing involution on them.

To that effect, say T is a "Bad Guy" if T is not λ-Yamanouchi. For such a T, there exist j, k ∈ [n] such that

    λ_k + wt_k(T_{≥j}) < λ_{k+1} + wt_{k+1}(T_{≥j})

Pick the largest such j and then pick the pair (j, k) with the smallest such k. Then, λ + wt(T_{>j}) is a partition. Since, by column strictness, wt_k(T_{≥j}) − wt_{k+1}(T_{≥j}) can change by at most one when j is incremented or decremented, there must be a k + 1 in column j of T, but no k.

2.71. Example. For an easy example, take λ = ∅ and

    T = 3 4 4 4
        2 2 2 3 3 4 4
        1 1 2 2 3 3 3
                2 2 2
                1 1 1

The shaded part of the tableau is Yamanouchi, but it fails to be Yamanouchi at T_{≥6}. Furthermore, the minimal such k so that our inequality fails is 1, that is,

    3 = 0 + 3 = λ_1 + wt_1(T_{≥6}) < λ_2 + wt_2(T_{≥6}) = 0 + 4 = 4

and λ + wt(T_{>6}) = (3, 3, 3, 1) is a partition. Indeed also, in column 6, there is a k + 1 = 2 entry, but not a k = 1 entry.

Thus, we have

    λ_k + wt_k(T_{≥j}) + 1 = λ_{k+1} + wt_{k+1}(T_{≥j})

which, since ρ_k = ρ_{k+1} + 1, also gives

    s_k(λ + wt(T_{≥j}) + ρ) = λ + wt(T_{≥j}) + ρ

where s_k ∈ S_n swaps k and k + 1. Now, we apply the Bender-Knuth involution φ_k to T_{<j}.

In the example, we apply the Bender-Knuth involution φ_1 to the red part (the columns west of column 6) only, to get

    T* = 3 4 4 4
         1 1 2 3 3 4 4
         1 1 2 2 3 3 3
                 2 2 2
                 1 1 1

Note that the blue part has an equal number of 1's and 2's. T* is still an SSYT, since the Bender-Knuth involution only swaps some k's and (k+1)'s in T_{<j}. Moreover,

    λ + wt(T*) + ρ = λ + wt(T_{≥j}) + s_k(wt(T_{<j})) + ρ
                   = s_k(λ + wt(T_{≥j}) + ρ) + s_k(wt(T_{<j}))
                   = s_k(λ + wt(T) + ρ)

and

    ∆_{λ+wt(T*)+ρ} = ∆_{s_k(λ+wt(T)+ρ)} = −∆_{λ+wt(T)+ρ}

since s_k acts by swapping the k and (k+1)st rows of ∆, which changes the determinant by a sign. Thus, we can cancel the contributions of the Bad Guys from our desired sums.

2.73. Remark. Note, there is still some work to do. A concern is that some "Bad Guys" will actually have T_{<j} fixed by φ_k, that is, wt_k(T_{<j}) = wt_{k+1}(T_{<j}).

In that case, recall

    λ_k + wt_k(T_{≥j}) + 1 = λ_{k+1} + wt_{k+1}(T_{≥j})

So, adding these two expressions together, we get

    λ_k + wt_k(T) + 1 = λ_{k+1} + wt_{k+1}(T)

and thus, importantly,

    (λ + wt(T) + ρ)_k = (λ + wt(T) + ρ)_{k+1} =⇒ ∆_{λ+wt(T)+ρ} = 0

so such fixed tableaux contribute nothing to the sum. □

2.74. Corollary (Corollaries to Theorem 2.65). In the following special cases, we get the following corollaries.

(a) If ν = ∅, λ = ∅, then

    ∆_ρ s_µ = Σ_{T∈SSYT(µ), T Yamanouchi} ∆_{wt(T)+ρ}

Since there is a unique Yamanouchi SSYT of shape µ by 2.68,

    =⇒ ∆_ρ s_µ = ∆_{µ+ρ} =⇒ s_µ = ∆_{µ+ρ}/∆_ρ

(b) If ν = ∅, then

    ∆_{λ+ρ} s_µ = Σ_{T∈SSYT(µ), T λ-Yamanouchi} ∆_{λ+wt(T)+ρ}
    =⇒ s_λ s_µ = Σ_{T of shape µ, T λ-Yamanouchi} s_{λ+wt(T)}    (from dividing by ∆_ρ)

2.75. Remark. The corollaries above have significance outside of combinatorics.

(a) In representation theory, the product rule for Schur functions gives the decomposition of tensor products of Specht modules for S_n into irreducibles.
(b) In algebraic geometry, this is useful for understanding the intersection of Grassmannian subvarieties. (Make this more precise.)

We can rephrase the corollary above to get the following major theorem.

2.76. Theorem (Littlewood-Richardson Rule). Given shapes λ, µ,

    s_λ s_µ = Σ_ν c^ν_{λ,µ} s_ν

where c^ν_{λ,µ}, called the Littlewood-Richardson coefficients, counts the number of tableaux of skew shape ν/λ of weight µ which are Yamanouchi. (Check this removal of λ.)

2.77. Remark. c^ν_{λ,µ} is also the number of tableaux of the form

µ →

← λ

where the bottom right connected component must be super semi-standard and the whole skew shape is Yamanouchi of weight ν.

2.78. Example. Naturally, we expect that, since s_∅ = 1, we get s_∅ s_µ = s_µ. Indeed, c^ν_{∅,µ} is the number of Yamanouchi tableaux of shape ν and weight µ, which by 2.68(b) is 1 if ν = µ and 0 otherwise, recovering s_∅ s_µ = s_µ from the Littlewood-Richardson rule.

For something less trivial, take λ = (2, 1) and µ = (1). Then,

    c^{(3,1)}_{(2,1),(1)} = 1,   c^{(2,2)}_{(2,1),(1)} = 1,   c^{(2,1,1)}_{(2,1),(1)} = 1

since these are the only skew-shapes such that ν/(2, 1) can have one box. Therefore, all the other coefficients are 0 and we get

    s_{21} s_1 = s_{31} + s_{22} + s_{211}

2.79. Theorem (Pieri Rule). We have

    h_r s_λ = Σ_{ν = λ + horizontal r-strip} s_ν

2.80. Remark. If one already has the Littlewood-Richardson rule, the Pieri rule is actually a specialization, since s_{(r)} = h_r and it turns out that c^ν_{(r),µ} is always 0 or 1.

Proof of 2.79. Recall

    h_r s_λ = ( Σ_{T of shape (r)} x^{wt(T)} ) ( Σ_{S of shape λ} x^{wt(S)} )

and we want to get a sum of the form

    Σ_{ν = λ + horizontal r-strip} s_ν = Σ_{ν = λ + horizontal r-strip} Σ_{tableaux I of shape ν} x^{wt(I)}

In other words, we want to find a bijection φ^r_λ from pairs of tableaux (T, S), where T has shape (r) and S has shape λ, to tableaux I of shape λ + horizontal r-strip, such that wt(T) + wt(S) = wt(I).

Let φ^r_λ be RSK. That is, φ^r_λ(T, S) = (I = S ← T) with T = x_1 x_2 · · · x_r. Since x_1 ≤ x_2 ≤ · · · ≤ x_r, then, by Lemma 1.12, the bumping route of x_1 is to the left of the bumping route of x_2 and so on, so I/λ is a horizontal r-strip. For the inverse, it must be that ν/λ is a horizontal r-strip. Therefore, the inverse is straightforward to compute directly. □
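The Pieri rule is easy to spot-check with the bialternant formula: for λ = (1) and r = 2, the only shapes obtained by adding a horizontal 2-strip are (3) and (2, 1), so h_2 s_1 = s_3 + s_{21}. A Python sketch (names mine):

```python
from itertools import permutations, combinations_with_replacement

def det(M):
    """Determinant by the Leibniz expansion; fine for tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

def schur(lam, xs):
    """s_lambda(xs) via the bialternant formula Delta_{lam+rho}/Delta_rho."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    num = det([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return num // den

def h(r, xs):
    """Complete homogeneous symmetric polynomial h_r(xs)."""
    total = 0
    for combo in combinations_with_replacement(xs, r):
        term = 1
        for v in combo:
            term *= v
        total += term
    return total

xs = (2, 3, 5)
# Pieri: h_2 * s_(1) = s_(3) + s_(21)
assert h(2, xs) * schur((1,), xs) == schur((3,), xs) + schur((2, 1), xs)
```

Evaluating at a point does not prove the identity, of course, but any mismatch of coefficients would show up for generic integer inputs.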

Finally, another important relationship between sλ’s and hn’s is given below without proof.

2.81. Theorem (Jacobi-Trudi Identities). Let λ = (λ_1, . . . , λ_ℓ) be a partition of n. Then,

    s_λ = det( h_{λ_i−i+j} )_{1≤i,j≤ℓ} = det [ h_{λ_1}        h_{λ_1+1}   · · ·   h_{λ_1+ℓ−1} ]
                                             [ h_{λ_2−1}      h_{λ_2}     · · ·       ⋮       ]
                                             [    ⋮                        ⋱                  ]
                                             [ h_{λ_ℓ−(ℓ−1)}  · · ·               h_{λ_ℓ}    ]

where h_n = 0 for all n < 0. Similarly,

    s_{λ′} = det( e_{λ_i−i+j} )_{1≤i,j≤ℓ}

2.82. Example. If λ = (4, 3, 3, 1, 1), then

    s_{43311} = det [ h_4  h_5  h_6  h_7  h_8 ]
                    [ h_2  h_3  h_4  h_5  h_6 ]
                    [ h_1  h_2  h_3  h_4  h_5 ]
                    [ 0    0    h_0  h_1  h_2 ]
                    [ 0    0    0    h_0  h_1 ]

2.4. Inner Product Structure of Λ.

2.83. Proposition. Λ is an inner-product space with Hall inner product

    ⟨m_λ, h_µ⟩ = δ_{λ,µ} = { 1 if λ = µ ; 0 otherwise }

2.84. Remark. This is a symmetric form, although this is not obvious from the definition.
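The Jacobi-Trudi identity above can also be spot-checked numerically: for λ = (2, 1) it reads s_{21} = det [ h_2 h_3 ; h_0 h_1 ] = h_2 h_1 − h_3 h_0. A Python sketch (names mine), comparing against the bialternant formula at the point (2, 3, 5):

```python
from itertools import permutations, combinations_with_replacement

def det(M):
    """Determinant by the Leibniz expansion; fine for tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

def schur(lam, xs):
    """s_lambda(xs) via the bialternant formula Delta_{lam+rho}/Delta_rho."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    num = det([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return num // den

def h(r, xs):
    """Complete homogeneous symmetric polynomial h_r(xs); h_0 = 1."""
    total = 0
    for combo in combinations_with_replacement(xs, r):
        term = 1
        for v in combo:
            term *= v
        total += term
    return total

xs = (2, 3, 5)
# Jacobi-Trudi for lambda = (2, 1): det [[h2, h3], [h0, h1]]
jt = h(2, xs) * h(1, xs) - h(3, xs) * h(0, xs)
assert jt == schur((2, 1), xs)
```

The 2×2 determinant is written out by hand here; for larger λ one would feed the matrix (h_{λ_i−i+j}) to `det` directly.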

A natural question about this inner product is how it might behave with respect to the other bases we have encountered. One significant basis with this form is the Schur basis.

2.85. Theorem. Schur functions are an orthonormal basis with respect to ⟨·, ·⟩. In other words,

    ⟨s_λ, s_µ⟩ = δ_{λ,µ}

There are multiple ways to show this, each of which reveals a different perspective.

2.86. Lemma. Given a shape λ,

    h_λ = Σ_µ K_{µ,λ} s_µ

where K_{λ,α} = |SSYT(λ, α)| is the Kostka number (see 2.60).

Proof. From the Pieri rule, we see that

    h_{r_1} s_∅ = s_{(r_1)}

Then,

    h_{r_2} h_{r_1} s_∅ = h_{r_2} s_{(r_1)} = Σ_{ν = (r_1) + horizontal r_2-strip} s_ν

2.87. Example. For instance, if r_1 = 5, r_2 = 3, we get

    h_{r_2} s_{(r_1)} = s_{(8)} + s_{(7,1)} + s_{(6,2)} + s_{(5,3)}

which, if you fill the gray boxes with 1 and the white boxes with 2, gives SSYTs of weight (5, 3). Thus, the sum is precisely

    h_{(5,3)} = Σ_µ K_{µ,(5,3)} s_µ

In general, we see that the process of multiplying many h_{r_i}'s into s_∅ is equivalent to building the different SSYTs of partition weight (r_1, . . . , r_k). In fact, since the weight is a partition weight, this exactly gives the desired equality. □ (I know this is not the best proof; the example really illustrates the point, though.)

2.88. Remark. Note, if we encode the Kostka numbers in a matrix, called the Kostka matrix, then we have the exchange of bases given by

    (K_{λ,µ})_{µ,λ} (s_µ)_µ = (h_λ)_λ

and thus also the other change of basis given by taking the inverse of the Kostka matrix.

A proof of 2.85. Based on our lemma above and Littlewood's combinatorial description of Schur functions, we observe that

    s_λ = Σ_µ K^{−1}_{µ,λ} h_µ    and    s_λ = Σ_µ K_{λ,µ} m_µ

where K^{−1}_{µ,λ} is the (µ, λ)-entry of the inverse Kostka matrix. Therefore,

    ⟨s_λ, s_µ⟩ = ⟨ Σ_α K_{λ,α} m_α , Σ_β K^{−1}_{β,µ} h_β ⟩
              = Σ_α Σ_β K_{λ,α} K^{−1}_{β,µ} ⟨m_α, h_β⟩
              = Σ_α K_{λ,α} K^{−1}_{α,µ}    (since ⟨m_α, h_β⟩ = δ_{α,β})
              = (Id)_{λ,µ}
              = δ_{λ,µ} □

2.89. Corollary. The Schur functions can be characterized as the unique orthonormal basis for Λ that is unitriangularly related to the basis {m_λ}_λ. In fact, one could take the corollary to be the definition of the Schur functions.

2.90. Example. Imagine one knows only the above corollary. Then, we have by the unitriangularity that

    s_{111} = m_{111}
    s_{21}  = m_{21} + u_{21,111} m_{111}
    s_3     = m_3 + u_{3,21} m_{21} + u_{3,111} m_{111}

Then, since s_{111} = m_{111} = h_{111} − 2h_{21} + h_3, we get

    ⟨s_{111}, s_{21}⟩ = 0 =⇒ 0 = ⟨h_{111} − 2h_{21} + h_3, m_{21} + u_{21,111} m_{111}⟩ = u_{21,111} − 2

and so

    s_{21} = m_{21} + 2m_{111}

One could then continue this process to get the unknown coefficients in s_3, but this would be quite tedious.

Another proof technique to see the orthonormality of Schur functions is using “Cauchy kernels.” This process requires us to know more about how the power sum basis behaves with respect to the Hall inner product.

2.91. Definition. The Cauchy kernel in x = (x_1, x_2, . . .) and y = (y_1, y_2, . . .) is defined by

    Ω(x, y) = Π_{i,j≥1} 1/(1 − x_i y_j)

2.92. Proposition. We have the following facts about the Cauchy kernel.

(a) Ω(x, y) = Σ_λ h_λ(x) m_λ(y)
(b) Ω(x, y) = Σ_λ s_λ(x) s_λ(y)

(Add that Ω(x, y) = Σ_λ p_λ(x) p_λ(y)/z_λ. Only problem is that I do not know how to prove this yet...)

Proof. To see part (a), observe

    Π_{i,j≥1} 1/(1 − x_i y_j) = Π_j Σ_n h_n(x⃗) y_j^n

and so, expanding out our product and collecting terms, we see

    Π_j Σ_n h_n(x⃗) y_j^n = 1 + h_1(x⃗)(y_1 + y_2 + · · ·) + h_2(x⃗)(y_1^2 + y_2^2 + · · ·) + h_1(x⃗)h_1(x⃗)(y_1y_2 + y_1y_3 + · · · + y_2y_3 + · · ·) + · · ·

Therefore, collecting the coefficient of each m_λ(y), we have our desired result.

To see part (b), we make use of the RSK correspondence. Note that

    Π_{i,j≥1} 1/(1 − x_i y_j) = Π_{i,j≥1} Σ_{k≥0} (x_i y_j)^k

when expanded out into an infinite series has monomials (each with coefficient 1) indexed by matrices with nonnegative integer entries. On the right-hand side, if we expand

    Σ_λ s_λ(x) s_λ(y) = Σ_λ ( Σ_{T∈SSYT(λ)} x^{wt(T)} )( Σ_{S∈SSYT(λ)} y^{wt(S)} ) = Σ_λ Σ_{T,S∈SSYT(λ)} x^{wt(T)} y^{wt(S)}

the monomials are indexed by pairs of SSYTs of the same shape, each with coefficient 1. Thus, by the RSK correspondence, these two infinite sums are equal. □

2.93. Theorem. Two bases {b_λ} and {b̂_λ} for Λ are dual if and only if Σ_λ b_λ(x) b̂_λ(y) = Ω(x, y).

Proof. Write b_λ(x) = Σ_µ a_{λ,µ} m_µ and b̂_γ(x) = Σ_ν c_{γ,ν} h_ν. Then,

    ⟨b_λ, b̂_γ⟩ = Σ_µ Σ_ν a_{λ,µ} c_{γ,ν} ⟨m_µ, h_ν⟩ = Σ_µ a_{λ,µ} c_{γ,µ}

Thus, the two bases are dual if and only if

    Σ_µ a_{λ,µ} c_{γ,µ} = δ_{λ,γ} ⟺ AC^T = I ⟺ C^T A = I ⟺ A^T C = I ⟺ Σ_µ a_{µ,λ} c_{µ,γ} = δ_{λ,γ}

where A = (a_{λ,µ}) and C = (c_{γ,ν}). However, this happens if and only if

    Σ_{λ,γ} ( Σ_µ a_{µ,λ} c_{µ,γ} ) h_λ(x) m_γ(y) = Σ_{λ,γ} δ_{λ,γ} h_λ(x) m_γ(y) = Σ_λ h_λ(x) m_λ(y) = Ω(x, y) □

Alternative proof of 2.85. Since Ω(x, y) = Σ_λ s_λ(x) s_λ(y), the Schur basis is self-dual by the theorem above. □

3. LLT Polynomials

3.1. Some Notions on Semistandard Young Tableaux. Note that we can consider a semistandard Young tableau T of shape λ on n letters as a function T : {boxes of λ} → {1, . . . , n}, and if T is a standard Young tableau of shape λ on n letters, then this function is bijective.

3.1. Definition. Let T be a standard Young tableau and let T(x) = a and T(y) = a + 1. If x is strictly south of y and weakly to the east of y (that is, c(x) > c(y)), we say that a is a descent of T, and we set

    des(T) := {a | a is a descent of T} ⊆ {1, . . . , n − 1}

3.2. Example.

    T = 7 9
        2 5 8
        1 3 4 6

has des(T) = {1, 4, 6, 8}.

3.3. Proposition. Every semistandard Young tableau T of shape λ has a unique "standardization" S, that is, a unique standard Young tableau S such that

(a) T ∘ S^{−1} is a weakly increasing function, and
(b) the cells of the standardization corresponding to the same letter have increasing contents. Formally, if T∘S^{−1}(j) = T∘S^{−1}(j+1) = · · · = T∘S^{−1}(k) = a, then {j, . . . , k − 1} ∩ des(S) = ∅.

Proof. Consider that if λ is a horizontal strip, there is a unique labeling of the cells of λ to form a standard tableau (with no descents). Similarly, if λ is a vertical strip, there is a unique standard tableau on λ with descents at every position. To get a weakly increasing sequence, one must simply replace the numbers of the SSYT in increasing order. To resolve the multiple ways one can relabel boxes with the same number, one starts from lowest content to highest content. One then gets the desired standardization. For example,

    T = 3 4            has standardization  S = 5 7
        2 2 4                                   3 4 8
        1 1 3 5 6                               1 2 6 9 10

since

    (T ∘ S^{−1})(1) = 1,  (T ∘ S^{−1})(2) = 1,
    (T ∘ S^{−1})(3) = 2,  (T ∘ S^{−1})(4) = 2,
    (T ∘ S^{−1})(5) = 3,  (T ∘ S^{−1})(6) = 3,
    (T ∘ S^{−1})(7) = 4,  (T ∘ S^{−1})(8) = 4,
    (T ∘ S^{−1})(9) = 5,  (T ∘ S^{−1})(10) = 6

is weakly increasing and des(S) = {2, 4, 6}. □

 3.4. Definition. The content of the cell in row i and column j in λ is j − i.

3.5. Example. The diagram is filled such that the number in each cell indicates its content.

    −3 −2 −1  0
    −2 −1  0  1
    −1  0  1  2
     0  1  2  3  4

3.6. Remark. A semistandard Young tableau of weight α can be created from a standard Young tableau on |α| letters where the sequences of cells (1, 2, . . . , α_1), (α_1 + 1, α_1 + 2, . . . , α_1 + α_2), . . . each have increasing content, by replacing the labels {1, 2, . . . , α_1} by 1, {α_1 + 1, . . . , α_1 + α_2} by 2, etc. This is essentially inverse to the process of "standardization."

3.7. Example. Take (123)(456)(78)(9)(10), that is, take weight α = (3, 3, 2, 1, 1), so |α| = 10. Then, from the standard Young tableau below (which has the desired content-increasing condition), we get the following semistandard tableau using this method:

    10              5
     9              4
     4 5 6     ↦    2 2 2
     1 2 3 7 8      1 1 1 3 3

Below, we will seek to construct so-called “semistandard ribbon tableau” from “standard ribbon tableau” in an analogous manner. 3.2. Rim Hook Tabloids. Warning: In this section, partitions λ = (λ1, . . . , λk) will be in increasing order, that is λ1 ≤ λ2 ≤ · · · ≤ λk. 3.8. Definition. (a)A rim hook H of a partition λ is a consecutive sequence of cells on Fλ’s northeast rim such that any two adjacent cells of H have a common edge and removal of H from Fλ leaves a “legal” Ferrers diagram. (b)A special rim hook is a rim hook which starts on the first column of λ, that is, it starts in cell (λ1, 1). (c) The number of rows of the rim hook is called its height, denoted ht(H). (d) The number of cells of the rim hook is called its size, denoted |H|. 3.9. Example. The green shaded boxes form a rim hook. The one on the right is a special rim hook.

3.10. Definition. A special rim hook tabloid T of shape µ and type λ is a filling of F_µ repeatedly with rim hooks of sizes {λ_1, λ_2, . . . , λ_k} such that each rim hook is special.

Note, in this context, rim hooks may be called “ribbons” since they are not rim hooks of the original diagram.

3.11. Definition. An n-ribbon R is a connected skew shape of n squares with no subshape of 2 × 2 squares.

To get a special rim hook tabloid, one uses the following algorithm.

(a) Pick a special rim hook H1 in Fµ and remove the cells of H1 to get Fµ(1). (b) Pick a special rim hook H2 in Fµ(1) and remove it to get Fµ(2). (c) Proceed analogously until there is nothing to remove.

3.12. Definition. The type of a special rim hook tabloid T is λ if (|H1|, |H2|,..., |H`|) arranged in weakly increasing order produces partition λ

3.13. Example. Take µ = (3, 3, 3, 4). The only special rim hook tabloids of type λ = (2, 2, 4, 5) are

  |H1| = 4 |H1| = 5   |H | = 2 |H | = 2 I) 2 II) 2 |H3| = 5 |H3| = 2   |H4| = 2 |H4| = 4

3.14. Definition. The sign of a rim hook H is taken to be (−1)ht(H)−1. The sign of T is the product of signs of rim hooks of T .

3.15. Example. The sign of rim hook tabloid I) in the example above is

3.15. Example. The sign of rim hook tabloid I) in the example above is

    (−1)^{2−1} (−1)^{1−1} (−1)^{2−1} (−1)^{1−1} = 1

(At some point we switch back to the standard convention.)

3.16. Theorem (Murnaghan-Nakayama Rule). Given a shape λ and r ∈ ℕ, we have that

    p_r s_λ = Σ_µ (−1)^{ht(µ/λ)−1} s_µ

where the sum is over all partitions µ such that µ/λ is a rim-hook of size r.

Proof. Let us work in Λ_n. First, consider that

    ∆_{λ+ρ} p_r = Σ_{i=1}^n ∆_{λ+rε_i+ρ}

where λ + rε_i denotes adding r to the ith entry of λ. Then, arrange λ + rε_i + ρ in descending order. If two entries are equal, then ∆_{λ+rε_i+ρ} = 0; otherwise, for i = q, there will be some p ≤ q such that

    λ_{p−1} + ρ_{p−1} > λ_q + ρ_q + r > λ_p + ρ_p
    ⟺ λ_{p−1} + n − p + 1 > λ_q + n − q + r > λ_p + n − p

and so ∆_{λ+rε_q+ρ} = (−1)^{q−p} ∆_ξ where

    ξ = (λ_1+n−1, . . . , λ_{p−1}+n−p+1, λ_q+n−q+r, λ_p+n−p, . . . , λ_{q−1}+n−q+1, λ_{q+1}+n−q−1, . . . , λ_n)

(the q − p entries λ_p+n−p, . . . , λ_{q−1}+n−q+1 have each been shifted down one row). Thus, µ := ξ − ρ is a partition of the form

    µ = (λ_1, . . . , λ_{p−1}, λ_q + p − q + r, λ_p + 1, . . . , λ_{q−1} + 1, λ_{q+1}, . . . , λ_n)

Such a partition is exactly one for which µ/λ is a rim-hook of size r, with ht(µ/λ) = q − p + 1, which accounts for the sign (−1)^{q−p} = (−1)^{ht(µ/λ)−1}. In the example below (the Ferrers diagrams of λ, of λ + 5ε_4 + ρ ↦ ξ, and of µ are omitted), r = 5, p = 2, q = 4.

To complete the proof, divide both sides by ∆_ρ and let n → ∞. □
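The rule can be spot-checked numerically: for λ = (1) and r = 2, the partitions µ with µ/λ a 2-rim-hook are (3) (height 1) and (1, 1, 1) (height 2), so p_2 s_1 = s_3 − s_{111}. A Python sketch (names mine) confirms this at an integer point using the bialternant formula:

```python
from itertools import permutations

def det(M):
    """Determinant by the Leibniz expansion; fine for tiny matrices."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i, pi in enumerate(p):
            term *= M[i][pi]
        total += term
    return total

def schur(lam, xs):
    """s_lambda(xs) via the bialternant formula Delta_{lam+rho}/Delta_rho."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    num = det([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return num // den

xs = (2, 3, 5)
p2 = sum(x * x for x in xs)
# Murnaghan-Nakayama: p_2 * s_(1) = s_(3) - s_(111)
assert p2 * schur((1,), xs) == schur((3,), xs) - schur((1, 1, 1), xs)
```

Note the sign on s_{111}: the vertical 2-rim-hook has height 2, contributing (−1)^{2−1} = −1.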

3.17. Definition. The type of a rim hook tabloid is α = (α1, . . . , α`) if the ribbon labeled i has αi cells. 3.18. Definition. A rim hook tabloid of type α is a sequence of partitions λ0 ⊆ λ1 ⊆ · · · ⊆ λ` i i−1 i i−1 such that λ /λ is a rim hook with αi-cells and the cells of λ /λ are labeled with the letter i

41 3.19. Example. A rim hook tabloid with type α = (3, 4, 5) is 2 3 3 3 2 2 2 3 1 1 1 3 3.20. Definition. Fix n, a > 0. A standard ribbon tableau (sometimes called an n-ribbon tableau) of degree a is a rim hook tabloid of type (n, . . . , n) = | {za } (na). 3.21. Example. A 3-ribbon tableau of degree 6 is given by 4 4 4 3 3 1 3 5 5 5 6 1 1 2 2 2 6 6 3.22. Definition. A ribbon head is the southeastern-most cell in a ribbon R.A ribbon tail is the northwestern-most cell. 3.23. Example. A ribbon is given as the colored cells combined. The tail is the dark colored cell (olive), the head is the light colored cell (lime).

3.26. Definition. Let R be a ribbon. Then, we define the spin of a ribbon to be

    spin(R) := ht(R) − 1

If T is a ribbon tableau, we define

    spin(T) := Σ_{ribbons R∈T} spin(R)

3.3. p-quotients Of Integer Partitions. For later use in understanding n-ribbon tableaux, we will need to understand a certain quotienting operation on partitions. This exposition will follow problem 8 in chapter 1 of Macdonald's book.

3.27. Proposition. Let λ, µ be partitions of length ≤ m such that µ ⊆ λ and such that λ − µ is a rim hook of length p. Let ρ = (m−1, m−2, . . . , 1, 0) and set

    ξ = λ + ρ,    η = µ + ρ

42 3.26. Definition. Let R be a ribbon. Then, we define the spin of a ribbon to be spin(R) := ht(R) − 1 If T is a ribbon tableau, we define X spin(T ) := spin(R) ribbon R∈T 3.3. p-quotients Of Integer Partitions. For later use in understanding n-ribbon tableaux, we will need to understand a certain quotienting oper- ation on partitions. This exposition will follow problem 8 in chapter 1 of MacDonald’s book. 3.27. Proposition. Let λ, µ be partitions of length ≤ m such that µ ⊆ λ and such that λ−µ is a rim hook of length p. Let ρ = (m−1, m−2,..., 1, 0) and set ξ = λ + ρ, η = µ + ρ

Then, η is obtained from ξ by subtracting p from some part ξ_i of ξ and rearranging in descending order.

Proof. Consider that, due to the shift by ρ, ξ and η both have distinct parts, and our rim-hook will no longer be connected vertically.

    λ = (4, 4, 2, 2)  ↦  µ = (4, 1, 1)

Furthermore, our rim hook (before the shift) can only move down vertically if the two rows have the same length and, when it moves horizontally, it must move to the end of the row. Therefore, after the shift, deletion of the rim-hook corresponds to subtracting p from a part of ξ and shifting, resulting in η either way.




3.28. Definition. Given a partition λ = (λ1, . . . , λ`), let mk(λ) be the num- ber of parts of λ of size k. In other words mk(λ) = |{λi ∈ λ | λi = k}|.

3.29. Definition. Let λ be a partition of length ≤ m, let ξ = ρm + λ where ρm = (m − 1, m − 2,..., 1, 0), and let p ≥ 2. Then, we define the p-quotient of λ as follows.

Let m_r = m_r(ξ) = |{ξ_i ∈ ξ | ξ_i ≡ r mod p}|, that is, the number of parts of ξ congruent to r mod p. Write the parts of ξ congruent to r as ξ_i = p ξ^{(r)}_k + r for 1 ≤ k ≤ m_r, where

    ξ^{(r)}_1 > ξ^{(r)}_2 > · · · > ξ^{(r)}_{m_r} ≥ 0

and let λ^{(r)}_k = ξ^{(r)}_k − m_r + k, so that λ^{(r)} = (λ^{(r)}_1, . . . , λ^{(r)}_{m_r}) is a partition. Then, the collection λ̃ = (λ^{(0)}, . . . , λ^{(p−1)}) is called the p-quotient of the partition λ.

3.30. Example. Let λ = (4, 4, 2, 2). Then, ξ = λ + ρ_4 = (7, 6, 3, 2) has distinct parts. If p = 3, then m_0 = 2, m_1 = 1, m_2 = 1 and

    6 = 2·3 + 0, 3 = 1·3 + 0   ⟹   ξ^{(0)} = (2, 1)   ⟹   λ^{(0)} = (2−2+1, 1−2+2) = (1, 1)
    7 = 2·3 + 1                ⟹   ξ^{(1)} = (2)      ⟹   λ^{(1)} = (2−1+1) = (2)
    2 = 0·3 + 2                ⟹   ξ^{(2)} = (0)      ⟹   λ^{(2)} = (0−1+1) = (0)

Thus, we get the 3-quotient λ̃ = ((1, 1), (2), (0)).

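Definition 3.29 translates directly into code; a small Python sketch (function and variable names are my own) reproduces the 3-quotient of (4, 4, 2, 2) computed in the example above:

```python
def p_quotient(lam, p, m=None):
    """The p-quotient of a partition: form xi = lam + rho_m, split the parts
    of xi by residue mod p, divide by p, and restandardize each group."""
    m = len(lam) if m is None else m
    lam = list(lam) + [0] * (m - len(lam))
    xi = [lam[i] + (m - 1 - i) for i in range(m)]   # distinct parts
    quotient = []
    for r in range(p):
        parts = sorted(((x - r) // p for x in xi if x % p == r), reverse=True)
        mr = len(parts)
        lam_r = [parts[k] - mr + (k + 1) for k in range(mr)]
        quotient.append(tuple(v for v in lam_r if v > 0))
    return tuple(quotient)

# Example 3.30: the 3-quotient of (4, 4, 2, 2), with zero parts dropped
assert p_quotient((4, 4, 2, 2), 3, m=4) == ((1, 1), (2,), ())
```

A different choice of m cyclically permutes the tuple, consistent with the remark below.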

Note that, for a different choice of m (here we picked m = 4), we would get a cyclic rearrangement of the 3-quotient. Thus, p-quotients are defined up to cyclic rearrangement. This fact leads [Mac79] to suggest that a p-quotient should really be thought of as a "necklace of partitions".

3.31. Proposition. The p-core of a partition is unique and may be obtained by removing border strips of length p in any order until it is no longer possible.

Proof. By the above, removing a border strip of length p corresponds to subtracting p from some part of ξ = λ + ρ and then rearranging the sequence in descending order. Simply subtracting p from parts of ξ until it is no longer possible is certainly independent of choice, and will result in a unique p-core. □

3.4. LLT Polynomials.

3.32. Definition. Given n > 0 and a shape λ, we define the (spin-squared) LLT polynomial (named after Lascoux, Leclerc, and Thibon) to be

    G^{(n)}_λ(x; t) = Σ_{T a semistandard ribbon tableau of shape λ} t^{spin(T)} x^{wt(T)}

where every ribbon of T is of size n.

3.33. Example. If we take n = 3 and λ = (3, 3), we have the following possible ribbon tableaux of shape λ:

    1 2 2       2 2 2       1 1 1       2 2 2
    1 1 2       1 1 1       1 1 1       2 2 2

    α = (1,1)   α = (1,1)   α = (2,0)   α = (0,2)
    spin = 2    spin = 0    spin = 2    spin = 2

Thus giving us

    G^{(3)}_{(3,3)} = t^2 x_1 x_2 + x_1 x_2 + t^2 x_1^2 + t^2 x_2^2

Also useful is to rewrite this in terms of the monomial and Schur bases:

    G^{(3)}_{(3,3)} = (t^2 + 1) m_{11} + t^2 m_2 = t^2 s_2 + s_{11}

Notice, however, our definition is not clear for a shape λ that cannot be tiled with 3-ribbons at all. To deal with this, we will have to define the following:

3.34. Definition. An n-core is a partition with no removable n-rim-hook. Equivalently, an n-core is a shape where none of its cells have a hook length of n.

3.35. Example. The diagram on the left is not a 3-core, the one on the right is.

3.36. Proposition. For any partition γ, successively removing all removable n-rim hooks in any order yields a unique n-core.

3.37. Definition. Fix n > 0. For any shape λ, let λ̃ be its n-core. We define

    G^{(n)}_λ := G^{(n)}_{λ/λ̃}

3.38. Example. Now, we can deal with our issue above: for a shape λ that cannot be tiled by n-ribbons, G^{(n)}_λ is computed on the skew shape λ/λ̃.
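Proposition 3.36 can be checked computationally with the abacus model for β-numbers: place each β_i = λ_i + (m − i) on the runner β_i mod n and slide every bead down as far as possible. This is a standard equivalent of the ξ-subtraction argument in 3.31; the sketch below (names mine) computes n-cores this way:

```python
def p_core(lam, p):
    """The p-core of a partition via beta-numbers: on each runner mod p,
    push all beads down to the lowest available levels."""
    m = len(lam)
    beta = [lam[i] + (m - 1 - i) for i in range(m)]   # distinct beta-numbers
    new_beta = []
    for r in range(p):
        count = sum(1 for b in beta if b % p == r)
        new_beta.extend(r + p * k for k in range(count))
    new_beta.sort(reverse=True)
    core = [new_beta[i] - (m - 1 - i) for i in range(m)]
    return tuple(v for v in core if v > 0)

assert p_core((3, 3), 3) == ()        # (3,3) is tiled by two 3-ribbons
assert p_core((1,), 3) == (1,)        # a single box is already a 3-core
assert p_core((4, 4, 2, 2), 3) == ()  # consistent with its 3-quotient of size 4
```

Combined with the p-quotient sketch above, one can check the expected size identity |λ| = |λ̃| + n · (total size of the n-quotient).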

3.39. Proposition. For any skew shape λ, G^{(n)}_λ is a symmetric function.

3.40. Proposition. G^{(n)}_λ is "Schur-positive", that is,

    G^{(n)}_λ = Σ_µ c_{µλ}(t) s_µ

such that c_{µλ}(t) ∈ ℕ[t].

3.41. Proposition. When t = 1, G^{(n)}_λ is a product of Schur functions.

3.42. Example. From above,

    G^{(3)}_{(3,3)}(x; 1) = s_2 + s_{11} = s_1 · s_1

3.43. Definition. We define the statistic cospin(T) = (spinmax(µ) − spin(T))/2, where T is a (semistandard) ribbon tableau of skew-shape µ and spinmax(µ) = max{spin(T) | T is a ribbon tableau of skew-shape µ}.

3.44. Definition. We define the (cospin) LLT polynomial to be

    LLT^{(n)}_µ(x, t) = Σ_T x^{wt(T)} t^{cospin(T)}

3.45. Example. To see the difference (and similarity) with the spin LLT family defined above, we see

    LLT^{(3)}_{(3,3)}(x, t) = (t + 1) m_{11} + m_2

since, in this case, spin = 2 =⇒ cospin = 0 and spin = 0 =⇒ cospin = 1.

3.46. Proposition. For any skew shape µ, LLT^{(n)}_µ(x, t) is a symmetric function.

To show this, we will need another statistic.

3.5. Bylund and Haiman's inversion statistic. We define a quotient map which takes a ribbon tableau T to a tuple T̃ of SSYT's as follows.

3.47. Definition. Recall the content of a ribbon R is the content of its head. Write the content of each ribbon R in T as rn + p where 0 ≤ p < n. Then, send R to a square of content r in the (p + 1)st SSYT. (Why does this work?! Probably follows from the "quotienting" aspect; the p-quotient section probably resolves this.)

3.48. Example. We send a 3-ribbon tableau T (drawn in the original as a colored diagram) to

    T̃ = ( 5 5      3 3      6 )
          2 3   ,  1 1   ,  2

where the coloring of the SSYT's is to indicate from which ribbon the number comes. For example, the orange 6 has content −1 = −1 · 3 + 2, and so it is sent to the 3rd tableau in a cell with content −1.

3.49. Definition. We define the Bylund and Haiman inversion statistic on a tuple of SSYTs T̃ = (T̃_1, . . . , T̃_n) to be the number of inversion pairs in T̃, and denote this inv(T̃). We define an inversion pair to be a pair (c, d) of squares containing numbers σ(c) and σ(d) respectively, with c ∈ T̃_i and d ∈ T̃_j where 1 ≤ i < j ≤ n, where either

(a) σ(c) > σ(d) and c and d have the same content, or
(b) σ(c) < σ(d) and the content of c is one more than the content of d (that is, c is one diagonal below d).

Alternatively, for x ∈ T̃_i, define the adjusted content

    c̃(x) := n·c(x) + (i − 1)

where c(x) is the content of x. Then, an inversion pair is a pair (x, y) such that σ(x) < σ(y) and 0 < c̃(x) − c̃(y) < n. (Note that this is a modification of the definition in [HHL+04], where c̃(x) = n·c(x) + s_i.)

3.50. Example. Below, the tuple above is filled in with numbers representing the adjusted contents:

    ( −3 0       −2 1       −1 )
       0 3   ,    1 4   ,    2

To use the first definition of inversion pairs, redraw our tuple like so

      6  2
     3 3  1 1
    5 5  2 3

so the contents line up. Then, we see the inversion pairs of the first type

    (5, 3), (5, 6), (5, 1), (5, 3), (5, 2), (2, 1), (3, 1)

that is to say, the 5 in the upper left hand corner of T̃_1 forms an inversion pair with the 3 in the upper left hand corner of T̃_2; the 5 in the upper right hand corner of T̃_1 forms an inversion pair with the 1 and 3 on the same diagonal in T̃_2 and with the 2 in T̃_3; etc. The inversion pairs of the second type are

    (2, 3), (2, 6), (1, 6)

so, for instance, the 2 in the bottom left hand corner of T̃_1 forms an inversion pair with the 3 in the upper left hand corner of T̃_2.

Thus, inv(T̃) = 9.

3.51. Lemma (Spin Lemma). Let |µ| = 2n. If µ can be tiled by two n-ribbons whose cells have at least one content in common, then there are exactly two tilings of µ and their spins differ by 2.

Proof. Below is a sufficiently generic example illustrating the situation.

2 1

2 1 In this situation, there can only be one contiguous “section” of the shape that has common contents, otherwise the shape cannot be filled with two n-ribbons. Consider such a contiguous section:

We can fill the white boxes two different ways:

In the left configuration, each ribbon has height 1 and in the right config- uration, each ribbon has height 2. Importantly, in the contiguous region, the two ribbons must have the same height since the swap trades a vertical move for a horizontal move on both ribbons.

Any addition of cells with unique contents will add to the overall height of the ribbon tableau, but the contributed height will not be affected by the swap. Thus, the total spins will differ by 2.  3.52. Lemma (Bylund, Haiman). Given a skew shape µ and n > 0, there is a constant e such that for every standard n-ribbon tableau T of shape µ, we have cospin(T) + e = inv(T˜)

Proof. We say that standard tableaux S, S0 of shapeµ ˜ = (µ1, . . . , µn) (that is, S, S0 are a disjoint collection of n standard tableaux of shapes µ1, . . . , µn) differ by a switch if they are identical except for the positions of two con- secutive entries a, a + 1. Then, two tableaux S, S0 differing by a switch can

48 only have inv(S), inv(S0) differing by 1.

Now, assume that T, T0 are standard n-ribbon tableau such that T˜ and T˜0 differ by a switch. We wish to show that cospin(T) − cospin(T0) = inv(T˜) − inv(T˜0). Since T, T0 are identical except for the ribbons labeled a and a + 1, the problem reduces to the case whereµ ˜ has only two non-empty SSYTs and |µ| = 2n; in other words, T, T0 are n-ribbon tableaux tiled by only 2 ribbons each with a distinct content. We now reduce to 2 cases. This is going to be impossible to • (inv(T˜) − inv(T˜0) = 0). For this situation, our switched cells x, y present without cannot have the same content, nor can they be in two distinct SSYTs illustrative dia- with one a column higher than the other. Therefore, µ cannot be grams. tiled with two ribbons whose contents differ by less than n. Thus, in this case, the shape µ is the union of two ribbons whose cells have no contents in common, otherwise the shape could not be connected. Therefore, these ribbons are unique and so spin(T) = spin(T0) and thus cospin(T) = cospin(T0). • (| inv(T˜) − inv(T˜0)| = 1). This situation is possible if and only if |c˜(y) − c˜(x)| < n, which forces the ribbon heads to have contents differing by less than n. Therefore, the two ribbons must have cells with at least one content in common. Thus, by our lemma above, there are exactly 2 tilings of µ whose spin differ by 2, say T, T0 with spin(T) = spin(T0) + 2 Therefore, 1 1 cospin(T) = (spin max(µ)−spin(T)) = (spin max(µ)−spin(T0)−2) = cospin(T0)−1 2 2

 3.53. Lemma (Bylund, Haiman). The lemma above also holds for semis- tandard ribbon tableaux.

3.54. Definition. If S˜ is a tuple of standard tableaux of shapeµ ˜, we call a a descent of S˜ if S˜(x) = a, S˜(y) = a + 1, andc ˜(x) > c˜(y) for boxes x, y in S˜.

3.55. Proposition. Given a tuple of semistandard tableaux T˜, there is a unique “standardization” S˜ such that T˜ ◦ S˜−1 is weakly increasing and, if −1 −1 −1 T˜ ◦S˜ (j) = T˜ ◦S˜ (j+1) = ··· = T˜ ◦S˜ (k), then des(S˜)∩{j, . . . , k−1} = ∅ where a is a descent if there is an x, y such that S˜(x) = a, S˜(y) = a + 1 and c˜(x) > c˜(y).

Proof of Proposition. Observe that ifµ ˜ is a horizontal strip, there is a unique I find this proof standard tableau of shapeµ ˜ with no descents, and conversely, such a tableau relatively uncon- exists only on a horizontal strip. vincing, but the example makes 1 2 3 4 5 6 7 it somewhat clear. Any other attempt at stan- 49 dardization will force an unde- sired descent. Thus, each semistandard tableau T˜ has a unique standardization meeting the properties above. ! ! 5 5 3 3 6 8 9 5 6 10 T˜ = , , has standardization S˜ = , , 2 3 1 1 2 3 7 1 2 4

˜ with des(S) = {2, 4, 7, 9} 

Proof of Lemma. By the definition of an inversion pair, equal entries T˜(x) = T˜(y) = a contribute nothing to inv(T˜). In the standardization S˜, equal en- tries are replaced with entries labeled in increasing order of adjusted content c˜, contributing nothing to inv(S˜). On the other hand, unequal entries of T˜ give rise to entries ordered in the same way in S˜, so inv(T˜) = inv(S˜).

Given a semistandard n-ribbon tableau T, we define its standardization to be the unique standard n-ribbon tableau S such that S˜ is the standardization of T˜. Then, T and S have the same underlying ribbon tiling and so spin(T) = spin(S). Thus, by the previous lemma, inv(T˜) = inv(S˜) = cospin(S) + e = cospin(T) + e

 3.56. Theorem (Bylund, Haiman). For any skew shape µ that can be tiled by n-ribbons, there is a constant e depending on µ such that

e (n) X inv(T˜) wt(T˜) t LLTµ (x, t) = t x T˜ where the sum is over all SSYT of shape µ˜.

Proof. Since cospin(T)+e = inv(T˜) for all T of shape µ by the lemma above and xwt(T) = xwt(T˜) by definition, we have that

e (n) X e+cospin(T) wt(T) X inv(T) wt(T˜) t LLTµ (x, t) = t x = t x T T˜ 

4. Hall-Littlewood Polynomials Throughout our discussions on symmetric functions, most of our examples have been stated and worked out for symmetric functions over reasonably simple fields, such as Q, or some other relatively structured ring, such as a PID like Z. However, there is no reason we cannot consider Λ over other rings. In this section, we will primarily be interested in symmetric functions over Q(t).

50 4.1. Symmetric polynomials over Q(t).

4.1. Example. Consider Λ2 over Q(t). The polynomial 1 − 3t tx2 + tx2 + x x 1 2 1 + t2 1 2 would be in Λ2.

4.2. Definition. Let λ be a partition. We define mi(λ) to be the number of parts in λ of size i. Equivalently, this is the number of rows with i boxes if λ is represented as a Ferrers diagram. 4.3. Example. Let λ = (4, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1). Then,

m1(λ) = 4, m2(λ) = 3, m3(λ) = 2, m4(λ) = 3 λ 4.4. Definition. Let λ be a partition and let n = `(λ). Then, we define Sn to be a subgroup of Sn such that λ σ ∈ Sn ⇐⇒ σ(λ) = λ where σ(λi) = λσ(i). 4.5. Example. Let σ = (1, 3, 2, 5, 4, 8, 7, 6, 12, 11, 10, 9) in one line notation. Then, λ ∼ Sn = SmN × · · · × Sm1 σ 7→ (1, 3, 2) × (2, 1) × (3, 2, 1) × (4, 3, 2, 1) all in one line notation.

4.6. Remark. Recall for σ ∈ Sn,(i, j) is an inversion pair if i < j but λ σ(i) > σ(j). Thus, if σ ∈ Sn, say σ = w1 × w2 × · · · × wN , then N X inv(σ) = inv(wi) i=1 since blocks do not contribute inversions outside of the block.

4.7. Definition. For n ∈ N, let n n Y 1 − ti Y V (t) := = (1 + t + ··· + ti−1) n 1 − t i=1 i=1 and, for a partition λ, let Y Vλ(t) = Vmi(λ)(t) i≥0 4.8. Remark. One should exercise caution since, for a ∈ N, a Y i−1 V(a)(t) = 1 but Va(t) = (1 + t + ··· + t ) i=1

51 4.9. Example. Continuing with λ = (4, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1), we have 2 Vλ(t) = V3 (t)V2(t)V4(t) 4.10. Proposition. Given n ∈ N, X inv(σ) Vn(t) = t

σ∈Sn Proof. The proof is a straightforward application of the quantum factorial.  4.11. Corollary. By the above proposition,

X inv(σ) Vλ(t) = t λ σ∈Sn Proof. We check Y Vλ(t) = Vmi(λ)(t) i≥0 Y X = tinv(wi) by Proposition

i≥0 wi∈Smi X = tinv(w1)+inv(w2)+···+inv(wN )

(w1,··· ,wN )∈Sm1 ×···×SmN X = tinv(σ) by Remark 4.6 λ σ∈Sn  4.12. Definition. Let λ be a partition. We define   X λ Y xi − txj Rλ(x1, . . . , xn; t) := w x  ∈ Q(t)[x1, . . . , xn] xi − xj w∈Sn i

4.13. Remark. There are many things to notice about this Rλ polynomial, and things to be suspicious of, too. (a) The product should be reminiscent of the Vandermonde determinant. Recall j Y X ρ ∆ = det(xi ) = (xi − xj) = sgn(w)w(x ) i

and, more generally, for α = (α1, . . . , α`) with distinct parts,

αj X α ∆α = det(xi ) = sgn(w)w(x ) w∈Sn

Thus, this Rλ polynomial is of a similar form.

52 (b) In the definition of Rλ, we are dividing the expression by xi − xj’s, but these elements are not in our ring. Thus, we must argue that the numerator is somehow divisible by this term. However, recall that a Schur function sλ is

∆λ+ρ ∆λ+ρ sλ = = Q ∈ Λ ∆ i

Q ρ Q  and, since 1≤i

4.14. Example. We compute the small examples of R11(x1, x2) and R2(x1, x2). Thus, we are working over S2 = {id, (12)} (in cycle notation). First,     x1 − tx2 x2 − tx1 R11(x1, x2) = x1x2 + x1x2 x1 − x2 x2 − x1 x1x2 = (x1 − tx2 − x2 + tx1) x1 − x2 x1x2 = (x1 − x2 + t(x1 − x2)) x1 − x2 = x1x2(1 + t)

53 which is (1 + t)s11 in terms of Schur functions. Also, note that 1 − t2 V (t) = V (t) = = 1 + t (11) 2 1 − t Second,

2 x1 − tx2 2 x2 − tx1 R2(x1, x2) = x1 + x2 x1 − x2 x2 − x1 1 3 2 3 2 = (x1 − tx1x2 − x2 + tx1x2) x1 − x2 1 3 3 = (x1 − x2 − tx1x2(x1 − x2)) x1 − x2 2 2 = x1 + x1x2 + x2 − tx1x2 which is s2 − ts11 in terms of Schur functions. Thus, we see that Rλ’s are not necessarily Schur positive. Also, of note,

V(2) = V1(t) = 1

Thus, in both our examples, Rλ has been divisible by Vλ(t).

4.15. Proposition. Rλ(x1, . . . , xn; t) is a symmetric polynomial. Proof. Observe that

Y (xi − txj) = (x1 − tx2)(x1 − tx3)(x1 − tx4) ··· i

Thus, if we take U = (ui,j) to be a (0, 1)-matrix with ui,i = 0 and ui,j +uj,i = 1, we get Y X Y uij uji (xi − txj) = xi (−txj) i

54 where the sum is over all such (0, 1)-matrices U. Thus, the symmetry follows.  Elaborate 4.16. Proposition.   X Rλ(x1, . . . , xn; t) = Vλ(t) sλ + cλ,µ(t)sµ λBµ for some cλ,µ(t) ∈ Q(t).

4.17. Corollary. {Rλ}λ is a basis of symmetric polynomials. 4.18. Remark. If we let 1 Pλ(x1, . . . , xn; t) = Rλ(x1, . . . , xn; t) Vλ(t) then X sλ(x1, . . . , xn) = cλ,µ(t)Pµ(x1, . . . , xn; t) µ with cλ,µ(t) ∈ N[t], that is, each cλ,µ(t) is a polynomial in t with no negative coefficients! This is incredibly remarkable since there are no t’s on the left hand side, so they must all cancel somehow. Furthermore, it is known that X charge(T) cλ,µ(t) = Kλ,µ(t) = t T∈SSYT(λ,µ) where charge(T) is appropriately defined. Some of this material will be elaborated in the next section. 4.2. t-Generalization of Hall Inner Product. Following the remark concluding the previous section, we define the following. 4.19. Definition. Let λ be a partition. Then, we define the Hall-Littlewood polynomials to be   1 X λ Y xi − txj Pλ(x; t) = w x  Vλ(t) xi − xj w∈Sn i

4.20. Proposition. (a) Pλ(x; 0) = sλ (b) Pλ(x; 1) = mλ (c) {Pλ(x; t)} is a basis for Λ[t], that is, Λ over Q(t).

4.21. Definition. Let h·, ·it : Λ[t] × Λ[t] → Q(t) be given by

Y λi hpλ, pµit = δλ,µzλ (1 − t ) i

Q mi where zλ = i i mi!. (See 2.48).

4.22. Proposition. The set {Pλ(x; t)} is characterized by

(a) Orthogonality with respect to h·, ·it.

55 (b) Unitriangularity with respect to Schur basis. That is, X Pλ(x; t) = sλ + cλ,µ(t)sµ λBµ 4.23. Corollary. If we set t = 1, this gives us that

mλ = sλ + lower order terms where the lower order terms come from the combinatorics of special rim hook tabloids.

0 4.24. Definition. Let {Qµ(x; t)} in Λ[t] be the dual basis to {Pλ(x; t)} with respect to h·, ·it. P 4.25. Proposition. Since sλ = µ Kλµ(t)Pµ by 4.18, we get

0 X Qµ = Kλ,µ(t)sλ λ 4.26. Theorem. X charge(T) Kλ,µ(t) = t T∈SSYT(λ,µ)

Of course, to understand this theorem, we must define the charge of a semistandard Young tableau.

4.27. Definition. The reading word of a tableau T is the word created from reading the entries of T from left to right, top to bottom.

4.28. Example. Let 5 4 T = 3 6 8 1 2 7 9 Then, the reading word is 543681279.

4.29. Definition. Recall that the charge of a permutation σ is given by writing σ counterclockwise on a circle and labeling the letters 1, 2, 3,... with an “index” clockwise starting at 0 and increasing if you cross ?. Finally, the charge is the sum of the indices. Then, for T a standard Young tableau, we define charge(T) to be the charge of the reading word of T. Alternatively, one can assign indices I = (I1,I2,...,I`) by setting ( Ix−1 + 1 if x is east of x − 1 I1 = 0,Ix = Ix−1 else

56 4.30. Example. Let T be as above so it has reading word σ = 543681279. Then, we compute

? 5 9 1 4 4 1 3 7 3 1 1 2 2 3 0 6 8 1 giving charge(T) = 16.

4.31. Definition. We define the charge of a semistandard Young tableau as follows. Given a word (tableau) of partition weight,

(a) Extract the “standard” (permutation) subwords. (b) Compute charge on each of these subwords. (c) Sum up each of the separate charges to get the charge.

4.32. Example. Let

5 3 4 4 4 T = 2 2 3 3 3 1 1 1 1 2 2 4

Then, we draw the reading word on a charge circle and find the standard subwords, labeled in different colors and done successively for clarity.

5 ? 4 5 ? 4 5 ? 4 3 2 3 2 3 2 4 2 4 2 4 2 4 1 4 1 4 1 4 1 4 1 4 1 2 1 2 1 2 1 2 1 2 1 2 1 3 3 3 3 3 3 3 → 3 → 3

57 And thus giving us ? ? 5 4 5 ? 4 1 1 4 1 0 1 3 2 0 0 3 0 1 0 1 4 2 2 2 3 4 1 4 1 7→ ? ?

2 1 4 1 1 2 4 1 1 2 2 1 1 0 1 0 3 3 3 3 1 3 1

Thus, the charge would be 2 + 2 + 3 + 3 = 10. 4.33. Example. Our theorem says that   X X charge(T) s21 =  t  Pµ(x; t) partition µ T∈SSYT((2,1),µ)     X charge(T) X charge(T) =  t  P21(x; t) +  t  P111(x; t) T∈SSYT((2,1),(2,1)) T∈SSYT((2,1),(1,1,1)) Thus, we can now use our theorem to compute 0 2 s21 = t P21(x; t) + (t + t)P111(x; t) since   2  has charge 0  1 1      3 has charge 1  1 2      2  has charge 2  1 3

4.34. Remark. It should be noted that all Hall-Littlewood polynomials are actually special cases of LLT polynomials. However, I do not know how to References prove this. [Ful97] W. Fulton, Young Tableaux, 1997. [Hag08] J. Haglund, The q, t-Catalan Numbers and the Space of Diagonal Harmonics, 2008.

58 [HHL+04] J. Haglund, M. Haiman, N. Loehr, J. B. Remmel, and A. Ulyanov, A combi- natorial formula for the character of the diagonal coinvariants (2004). [Mac79] I. G. Macdonald, Symmetric Functions and Hall Polynomials, 1979. 2nd Edi- tion, 1995. [Ste02] J. R. Stembridge, A Concise Proof of the Littlewood-Richardson Rule, The Electronic Journal of Combinatorics (2002).

59