POLYNOMIAL BEHAVIOUR OF KOSTKA NUMBERS

DINUSHI MUNASINGHE

Abstract. Given two standard partitions $\lambda_+ = (\lambda_1^+ \geq \cdots \geq \lambda_s^+)$ and $\lambda_- = (\lambda_1^- \geq \cdots \geq \lambda_r^-)$ we write $\lambda = (\lambda_+, \lambda_-)$ and set $s_\lambda(x_1, \ldots, x_t) := s_{\lambda^t}(x_1, \ldots, x_t)$, where $\lambda^t = (\lambda_1^+, \ldots, \lambda_s^+, 0, \ldots, 0, -\lambda_r^-, \ldots, -\lambda_1^-)$ has $t$ parts. We provide two proofs that the Kostka numbers $k_{\lambda\mu}(t)$ in the expansion
$$s_\lambda(x_1, \ldots, x_t) = \sum_\mu k_{\lambda\mu}(t)\, m_\mu(x_1, \ldots, x_t)$$
demonstrate polynomial behaviour, and establish the degrees and leading coefficients of these polynomials. We conclude with a discussion of two families of examples that introduce areas of possible further exploration.

Contents

1. Introduction
2. Representations of GL_n
   2.1. Representations
   2.2. Characters of GL_n
3. Symmetric Polynomials
   3.1. Kostka Numbers
   3.2. Littlewood-Richardson Coefficients
   3.3. Representations and Characters
4. Restrictions
   4.1. Interlacing Partitions
   4.2. Branching Rules
5. Beyond Polynomial Representations
6. Weight Decompositions
7. Some Notation
8. Stability
   8.1. Kostka Numbers
   8.2. Littlewood-Richardson Coefficients
9. Asymptotics
10. The Polynomial Behaviour of Kostka Numbers
11. Further Asymptotics
12. Remarks
Appendix: Solving Difference Equations
References

1. Introduction

The Kostka numbers, which we will denote $k_{\lambda\mu}$, were introduced by Carl Kostka in his 1882 work relating to symmetric functions. The Littlewood-Richardson coefficients $c_{\lambda\mu}^\nu$ were introduced by Dudley Ernest Littlewood and Archibald Read Richardson in their 1934 paper Group Characters and Algebra. These two combinatorial objects are ubiquitous, appearing in combinatorics and representation theory, which we will address in this work, as well as in algebraic geometry.

Being such natural objects, the Kostka numbers and Littlewood-Richardson coefficients have been studied extensively. There are explicit combinatorial algorithms for computing both types of coefficients, and many well-established properties. There still exist, however, open questions and areas which remain unexplored. One area of study which contains interesting open questions relating to these coefficients is that of asymptotics, which we will explore in this work.

We define the Littlewood-Richardson coefficient $c_{\lambda\mu}^\nu$ as the multiplicity of the Schur polynomial $s_\nu(x_1, \ldots, x_t)$ in the decomposition of the product of Schur polynomials
$$s_\lambda(x_1, \ldots, x_t)\, s_\mu(x_1, \ldots, x_t) = \sum_\nu c_{\lambda\mu}^\nu\, s_\nu(x_1, \ldots, x_t).$$

Similarly, we can define the Kostka number $k_{\lambda\mu}$ as the coefficient of $m_\mu(x_1, \ldots, x_t)$ in the expansion of the Schur polynomial $s_\lambda(x_1, \ldots, x_t)$ as
$$s_\lambda(x_1, \ldots, x_t) = \sum_\mu k_{\lambda\mu}\, m_\mu(x_1, \ldots, x_t)$$
where $m_\mu(x_1, \ldots, x_t)$ denotes the minimal symmetric polynomial generated by the monomial $x^\mu$. In the above formulas, the number of variables, $t$, may be arbitrary. However, $c_{\lambda\mu}^\nu$ and $k_{\lambda\mu}$ become independent of $t$ once it is sufficiently large.

We begin our study by extending the definition of the Schur polynomial $s_\lambda(x_1, \ldots, x_t)$ to arbitrary weakly decreasing sequences of integers $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$ by shifting $\lambda$ to $\lambda + c^t$, i.e.
$$s_\lambda(x_1, \ldots, x_t) = \frac{1}{(x_1 \cdots x_t)^c}\, s_{\lambda + c^t}(x_1, \ldots, x_t).$$
Given two standard partitions $\lambda_+ = (\lambda_1^+ \geq \cdots \geq \lambda_s^+)$ and $\lambda_- = (\lambda_1^- \geq \cdots \geq \lambda_r^-)$ we write $\lambda = (\lambda_+, \lambda_-)$ and set

$$s_\lambda(x_1, \ldots, x_t) := s_{\lambda^t}(x_1, \ldots, x_t)$$
where $\lambda^t = (\lambda_1^+, \ldots, \lambda_s^+, 0, \ldots, 0, -\lambda_r^-, \ldots, -\lambda_1^-)$ has $t$ parts. We consider the expansions
$$(1)\quad s_\lambda(x_1, \ldots, x_t)\, s_\mu(x_1, \ldots, x_t) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(x_1, \ldots, x_t)$$
and
$$(2)\quad s_\lambda(x_1, \ldots, x_t) = \sum_\mu k_{\lambda\mu}(t)\, m_\mu(x_1, \ldots, x_t)$$

where all $\lambda$, $\mu$, and $\nu$ above are pairs $\lambda = (\lambda_+, \lambda_-)$, $\mu = (\mu_+, \mu_-)$, and $\nu = (\nu_+, \nu_-)$. Both sums in (1) and (2) are finite. A result of Penkov and Styrkas, [PS], implies that the coefficients $c_{\lambda\mu}^\nu(t)$ stabilize for sufficiently large $t$. The coefficients $k_{\lambda\mu}(t)$, on the other hand, demonstrate polynomial behaviour. This result has been known for a while; see for example [BKLS] and the references therein. The methods used by Benkart and collaborators apply to representations of groups beyond $GL_n$, but provide little information about the Kostka polynomials themselves.

The purpose of this work is to give two proofs of the fact that $k_{\lambda\mu}(t)$ is a polynomial in $t$. The proofs given are specific to $GL_n$, but will hopefully lead to formulas or algorithms for computing these polynomials explicitly. The first proof uses the stability of the Littlewood-Richardson coefficients, and the second uses a recurrence relation established through branching rules. We end the document with a discussion of two families of examples which will suggest further generalizations and directions of study.

Conventions. Unless otherwise stated, we work over $\mathbb{C}$, representations are finite dimensional, and $\mathbb{N} = \mathbb{Z}_{\geq 0}$. We denote by $[k]$ the set of integers $\{1, \ldots, k\}$.

2. Representations of GLn

Definition 1. We define a polynomial representation of the group $G = GL_n$ to be a group homomorphism
$$\rho : G \to GL(V) \cong GL_N,$$
where $N = \dim(V)$ is the dimension of the representation, such that for every $A = (a_{ij}) \in GL_n$, $\rho(A) = (\rho_{ij}(A))$, where each entry $\rho_{ij}(A)$ is a polynomial in the entries $a_{kl}$ of $A \in GL_n$.

In this work we will refer to polynomial representations simply as representations of GLn. For a more extensive discussion, one can refer to [GW] and [FH].

2.1. Representations. All finite dimensional representations of $GL_n$ are completely reducible; i.e., if $\rho$ is a representation of $GL_n$ such that $N = \dim(V) < \infty$, then there is a decomposition of $V$ as the direct sum of irreducible representations. It is therefore natural to want to parameterize all possible non-isomorphic irreducible representations and, when given a representation, to ask how it can be decomposed as a direct sum of irreducible components. We turn our attention to the Schur-Weyl Duality, which will allow us to build all irreducible representations of $GL_n$.

2.1.1. Schur-Weyl Duality. The Schur-Weyl Duality relates the irreducible representations of the General Linear group $GL_n$ to those of the symmetric group $S_k$. We begin by considering the $k$th tensor power $V^{\otimes k}$ of $V = \mathbb{C}^n$. The General Linear group $GL_n$ acts on $\mathbb{C}^n$ by matrix multiplication, so it acts on $V^{\otimes k}$ by

$$g \cdot (v_1 \otimes \cdots \otimes v_k) = g \cdot v_1 \otimes \cdots \otimes g \cdot v_k$$
for $g \in GL_n$. On the other hand, $S_k$ acts on $V^{\otimes k}$ by permuting the terms of $v_1 \otimes \cdots \otimes v_k \in V^{\otimes k}$. That is to say,

$$\sigma \cdot (v_1 \otimes \cdots \otimes v_k) = v_{\sigma^{-1}(1)} \otimes \cdots \otimes v_{\sigma^{-1}(k)}.$$

These actions commute, so $V^{\otimes k}$ is a $GL_n \times S_k$-module. We thus have the decomposition
$$V^{\otimes k} \cong \bigoplus_\lambda V_\lambda \otimes S^\lambda$$
where the sum runs over all $\lambda$ partitioning $k$, the $S^\lambda$ are the $S_k$-modules known as Specht modules, and the $V_\lambda$ are $GL_n$-modules. In particular, each $V_\lambda$ is a polynomial $GL_n$-module, and is irreducible if $\lambda$ has at most $n$ parts and is the zero module otherwise. Furthermore, every irreducible polynomial $GL_n$-module is isomorphic to a module $V_\lambda$ for $\lambda = (\lambda_1 \geq \cdots \geq \lambda_n)$ with $\lambda_i \in \mathbb{Z}_{\geq 0}$. Two modules $V_\lambda$ and $V_\mu$ are isomorphic if and only if $\lambda = \mu$. This gives us a concrete way of realizing the irreducible representations of $GL_n$. We will not prove these results in this work; proofs can be found in [GW].

Example 1. A simple example is the decomposition of $V^{\otimes 2}$:
$$V \otimes V = \mathrm{Sym}^2 V \oplus \mathrm{Alt}^2 V$$

where $\mathrm{Sym}^2 V$ is the submodule of $V \otimes V$ generated by all elements of the form $u \otimes v + v \otimes u$ for $u, v \in V$, and $\mathrm{Alt}^2 V$ is the submodule of $V \otimes V$ generated by the elements of the form $u \otimes v - v \otimes u$ for $u, v \in V$. While it is clear that $V \otimes V = \mathrm{Sym}^2 V \oplus \mathrm{Alt}^2 V$ as vector spaces, it is not immediately obvious that this is a decomposition into irreducible submodules. This fact can be established directly. However, it follows cleanly from the Schur-Weyl duality: $S_2$ has two irreducible representations, indexed by the corresponding Young diagrams, so we have the decomposition into irreducibles
$$V \otimes V = V_{(2)} \oplus V_{(1,1)},$$

where $S_2$ acts on $V_{(2)}$ via the trivial representation, and on $V_{(1,1)}$ via the sign representation. Now,

$$V_{(2)} = \{x \in V \otimes V \mid \sigma \cdot x = x, \ \forall \sigma \in S_2\}$$
and

$$V_{(1,1)} = \{x \in V \otimes V \mid \sigma \cdot x = \mathrm{sgn}(\sigma)x, \ \forall \sigma \in S_2\},$$
but these are precisely $\mathrm{Sym}^2 V$ and $\mathrm{Alt}^2 V$. Thus, these modules are irreducible $GL_n$-modules by the Schur-Weyl duality.
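The dimension count behind Example 1 is easy to check by machine. The following Python sketch (illustrative only; the helper names `dim_V2` and `dim_V11` are ours, anticipating the tableau description of Section 3) counts weakly increasing pairs for $\mathrm{Sym}^2 V$ and strictly increasing pairs for $\mathrm{Alt}^2 V$, and verifies that the two dimensions add up to $\dim(V \otimes V) = n^2$.

```python
from itertools import combinations, combinations_with_replacement

def dim_V2(n):
    # dim Sym^2(C^n): unordered pairs with repetition, i.e. pairs i <= j
    return sum(1 for _ in combinations_with_replacement(range(1, n + 1), 2))

def dim_V11(n):
    # dim Alt^2(C^n): unordered pairs of distinct indices, i.e. pairs i < j
    return sum(1 for _ in combinations(range(1, n + 1), 2))

for n in range(1, 9):
    assert dim_V2(n) == n * (n + 1) // 2    # triangular-number formula
    assert dim_V11(n) == n * (n - 1) // 2
    assert dim_V2(n) + dim_V11(n) == n * n  # dim(V ⊗ V) = n^2
```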

2.2. Characters of GLn.

Definition 2. We define the character of a representation ρ of G = GLn to be the map:

χV : G → C

such that $\chi_V(g) = \mathrm{tr}(\rho(g))$ for all $g \in G$. Since
$$\chi_V(hgh^{-1}) = \mathrm{tr}(\rho(h)\rho(g)\rho(h)^{-1}) = \mathrm{tr}(\rho(g)) = \chi_V(g),$$

the character χV is invariant under conjugation and hence is constant on the conjugacy classes of G.

Since the diagonalizable matrices are dense in $GL_n$, the values of $\chi_V$ when restricted to the diagonal matrices completely determine the character $\chi_V$ (all diagonalizable matrices are, by definition, conjugate to a diagonal matrix, and characters are constant on conjugacy classes). Furthermore, since the $n$ diagonal entries completely determine a diagonal matrix, we can consider the character $\chi_V$ to actually be a map
$$\chi_V : (\mathbb{C}^\times)^n \to \mathbb{C}.$$

When we consider the representations Vλ of GLn as discussed in Section 2.1.1, it can

be shown that $\chi_{V_\lambda}(x_1, \ldots, x_n) = s_\lambda(x_1, \ldots, x_n)$, i.e. the characters of the representations $V_\lambda$ of $GL_n$ are in fact the Schur polynomials $s_\lambda(x_1, \ldots, x_n)$. We will discuss these polynomials in the next section. It is important to note that two representations are isomorphic if and only if their respective characters are the same. Furthermore, we have

$$\chi_{U \oplus V} = \chi_U + \chi_V \quad \text{and} \quad \chi_{U \otimes V} = \chi_U \chi_V,$$
allowing us to work with the characters when examining direct sums and tensor products of modules.
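These identities can be spot-checked numerically on diagonal matrices, where traces are sums of eigenvalues. The Python sketch below (illustrative, not from the original text) evaluates the characters of $V$, $\mathrm{Sym}^2 V$ and $\mathrm{Alt}^2 V$ of $GL_3$ at a sample diagonal matrix and confirms $\chi_{\mathrm{Sym}^2 V} + \chi_{\mathrm{Alt}^2 V} = \chi_V \cdot \chi_V$, an instance of both identities applied to $V \otimes V = \mathrm{Sym}^2 V \oplus \mathrm{Alt}^2 V$.

```python
from itertools import combinations, combinations_with_replacement

def char_V(xs):
    # character of the natural representation at diag(xs): x_1 + ... + x_n
    return sum(xs)

def char_sym2(xs):
    # Sym^2 V has eigenbasis e_i·e_j (i <= j) with eigenvalue x_i x_j
    return sum(xs[i] * xs[j]
               for i, j in combinations_with_replacement(range(len(xs)), 2))

def char_alt2(xs):
    # Alt^2 V has eigenbasis e_i ∧ e_j (i < j) with eigenvalue x_i x_j
    return sum(xs[i] * xs[j] for i, j in combinations(range(len(xs)), 2))

xs = (2, 3, 5)
# χ_{V⊗V} = χ_V·χ_V splits as χ_{Sym²V} + χ_{Alt²V}
assert char_sym2(xs) + char_alt2(xs) == char_V(xs) ** 2
```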

3. Symmetric Polynomials

In this section we will give an overview of symmetric and Schur polynomials, following the notation of [F]. We begin our discussion of Schur polynomials by defining several combinatorial objects.

Definition 3. We define a partition to be a weakly decreasing sequence of non-negative integers $\lambda = (\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_t)$. We say that the number of parts of the partition is the largest integer $t'$ such that $\lambda_{t'} > 0$, and set
$$|\lambda| := \sum_{i=1}^t \lambda_i.$$

Definition 4. For a given partition $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$ we define the Young diagram of shape $\lambda$ to be a left-justified array of $|\lambda| := \sum_{i=1}^t \lambda_i$ boxes with $\lambda_i$ boxes in the $i$th row.

We often identify the partitions $\lambda$ and $\lambda'$ if their corresponding Young diagrams are the same. In other words, we identify two partitions if their strictly positive parts are the same, regardless of how many parts of size zero occur. We now consider filling these boxes, which we will also refer to as cells, with numbers. We define a filling of a Young diagram to be a numbering of the boxes using positive integers. In particular, we are interested in fillings that satisfy certain properties.

Definition 5. We say that a filling $T$ of a Young diagram $\lambda$ is a Young tableau if the numbering is weakly increasing across the rows of $\lambda$ and strictly increasing down the columns. Given a Young tableau $T$, we say that $T$ has content $\mu = (\mu_1, \ldots, \mu_t) \in \mathbb{N}^t$ if $T$ contains $\mu_i$ $i$'s, $1 \leq i \leq t$.

For a Young tableau $T$ of shape $\lambda$ and content $\mu = (\mu_1, \ldots, \mu_t)$, we denote by $x^T$ the monomial
$$x^T := \prod_{i=1}^t x_i^{\mu_i}.$$
We then define the Schur polynomial in $t$ variables as
$$s_\lambda(x_1, \ldots, x_t) = \sum_T x^T$$
where we sum over all possible tableaux $T$ of shape $\lambda$ filled with integers from 1 to $t$. For ease of notation, we will denote

$$\vec x_t := (x_1, \ldots, x_t).$$

While not obvious from the definition, $s_\lambda(\vec x_t)$ is a symmetric polynomial in $t$ variables for any Young diagram of shape $\lambda$. Furthermore, letting $\lambda$ vary over partitions of $k$ having at most $t$ parts, the Schur polynomials form a basis for the symmetric polynomials in $t$ variables of degree $k$.

Definition 6. Let $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$ be a partition of $k$ having at most $t$ strictly positive parts. We define $m_\lambda(\vec x_t)$ to be the minimal symmetric polynomial generated by the monomial $x_1^{\lambda_1} \cdots x_t^{\lambda_t}$.

3.1. Kostka Numbers.

Definition 7. We denote by $k_{\lambda\mu}(t)$ the number of possible Young tableaux of shape $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$ and content $\mu = (\mu_1 \geq \cdots \geq \mu_t)$ (adjoining parts of size zero where necessary).
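Definition 7 translates directly into a brute-force computation. The following sketch (assumed helper code, not part of the original text) enumerates semistandard fillings cell by cell and counts those with a prescribed content; it also checks that adjoining parts of size zero to $\mu$ leaves the count unchanged, anticipating the stability discussed in Section 8.

```python
def ssyt(shape, max_entry):
    """Enumerate Young tableaux of the given shape with entries in
    1..max_entry: rows weakly increase, columns strictly increase."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]

    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])      # weakly increasing along the row
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)  # strictly increasing down the column
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()

    yield from fill(0, [[] for _ in shape])

def kostka(shape, content, t):
    """k_{λμ}(t): number of tableaux of shape λ and content μ, entries in 1..t."""
    target = list(content) + [0] * (t - len(content))
    count = 0
    for tab in ssyt(shape, t):
        counts = [0] * t
        for row in tab:
            for v in row:
                counts[v - 1] += 1
        if counts == target:
            count += 1
    return count

assert kostka((2, 1), (2, 1), 3) == 1     # only the tableau [[1,1],[2]]
assert kostka((2, 1), (1, 1, 1), 3) == 2  # [[1,2],[3]] and [[1,3],[2]]
# adjoining parts of size zero leaves the count unchanged
assert all(kostka((2, 1), (1, 1, 1), t) == 2 for t in range(3, 7))
```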

We have the relation
$$s_\lambda(\vec x_t) = \sum_\mu k_{\lambda\mu}(t)\, m_\mu(\vec x_t).$$

3.2. Littlewood-Richardson Coefficients. Given two partitions $\lambda$ and $\mu$ and their respective Schur polynomials $s_\lambda(\vec x_t)$ and $s_\mu(\vec x_t)$, we can define the Littlewood-Richardson coefficients $c_{\lambda\mu}^\nu(t)$ through the expansion
$$s_\lambda(\vec x_t)\, s_\mu(\vec x_t) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(\vec x_t).$$

That is to say, when one multiplies two Schur polynomials $s_\lambda(\vec x_t)$ and $s_\mu(\vec x_t)$, the product, being the product of two symmetric polynomials, is again a symmetric polynomial. Since the Schur polynomials of degree $|\lambda| + |\mu|$ in $t$ variables form a basis of the vector space of symmetric polynomials of the corresponding degree, we can rewrite the product as a sum of Schur polynomials, with $\nu$ running over partitions of $|\lambda| + |\mu|$. The Littlewood-Richardson coefficient $c_{\lambda\mu}^\nu(t)$ is then the coefficient of the Schur polynomial $s_\nu(\vec x_t)$ in the expansion.

3.3. Representations and Characters. The correspondence between characters of representations of $GL_n$ and symmetric polynomials allows us to translate statements from one setting to the other. For example,

$$s_\lambda(\vec x_t)\, s_\mu(\vec x_t) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(\vec x_t)$$
gives us
$$V_\lambda \otimes V_\mu = \bigoplus_\nu c_{\lambda\mu}^\nu(t)\, V_\nu.$$
Denoting
$$h_s(x_1, \ldots, x_t) := \sum_{1 \leq i_1 \leq \cdots \leq i_s \leq t} x_{i_1} \cdots x_{i_s}$$
for a given $s \in \mathbb{N}$, we define
$$h_\lambda := \prod_{j=1}^t h_{\lambda_j}(x_1, \ldots, x_t)$$
for a partition $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$. It can be shown that
$$h_s(\vec x_t) = \chi_{V_{(s)}} = \chi_{\mathrm{Sym}^s V}.$$
Our definition of $h_\lambda$ then allows us to directly obtain
$$h_\lambda = \chi_{\mathrm{Sym}^{\lambda_1} V} \cdots \chi_{\mathrm{Sym}^{\lambda_t} V}.$$
The identity
$$h_\mu(\vec x_t) = \sum_\lambda k_{\lambda\mu}\, s_\lambda(\vec x_t)$$
(see [F]) then gives us the result
$$\mathrm{Sym}^{\mu_1} V \otimes \cdots \otimes \mathrm{Sym}^{\mu_t} V = \bigoplus_\lambda k_{\lambda\mu}\, V_\lambda.$$
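As an illustrative sanity check of this last decomposition (assumed code, not from the text), one can compare dimensions of both sides for $GL_2$ with $\mu = (2,1)$, using the fact that $\dim V_\lambda$ for $GL_n$ is the number of Young tableaux of shape $\lambda$ with entries at most $n$.

```python
def ssyt(shape, max_entry):
    # tableaux with weakly increasing rows, strictly increasing columns
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()
    yield from fill(0, [[] for _ in shape])

def dim_V(shape, n):
    # dim V_λ for GL_n = number of tableaux of shape λ with entries ≤ n
    return sum(1 for _ in ssyt(shape, n))

def kostka(shape, content, t):
    target = list(content) + [0] * (t - len(content))
    total = 0
    for tab in ssyt(shape, t):
        counts = [0] * t
        for row in tab:
            for v in row:
                counts[v - 1] += 1
        if counts == target:
            total += 1
    return total

n, mu = 2, (2, 1)
# dim(Sym^2 C^2 ⊗ Sym^1 C^2) = 3 · 2 = 6
lhs = dim_V((2,), n) * dim_V((1,), n)
# partitions of |µ| = 3 with at most n = 2 parts
rhs = sum(kostka(lam, mu, n) * dim_V(lam, n) for lam in [(3,), (2, 1)])
assert lhs == rhs == 6
```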

4. Restrictions

4.1. Interlacing Partitions.

Definition 8. For given partitions $\lambda$ and $\mu$, where $\lambda$ has $t$ parts and $\mu$ has $t-1$ parts, we say that $\mu$ interlaces $\lambda$, denoted $\mu \prec \lambda$, if
$$\lambda_1 \geq \mu_1 \geq \lambda_2 \geq \mu_2 \geq \cdots \geq \lambda_{t-1} \geq \mu_{t-1} \geq \lambda_t.$$

This concept will allow us to obtain an important relation of Schur polynomials.

Theorem 1.
$$s_\lambda(\vec x_t) = \sum_{\mu \prec \lambda} s_\mu(\vec x_{t-1})\, x_t^{|\lambda| - |\mu|}$$

Proof. For a given Young tableau $T$ of shape $\lambda$, we let $T_t$ denote the object obtained by removing all cells with the entry $t$, and $m_T$ the number of cells removed (that is to say, the number of cells of $T$ which contained the entry $t$). We claim that $T_t$ is a Young tableau of shape $\mu$ for some $\mu$ having $r \leq t-1$ parts, filled with entries from $[t-1]$. In fact, $\mu$ interlaces $\lambda$.

First, note that $T_t$ has the shape of a Young diagram, because removing cells with entry $t$ can only remove cells at the ends of rows. Furthermore, a cell with the entry $t$ cannot lie above another cell, as the entry below it would have to exceed $t$ by the strict increase down the columns of $T$; hence the removed cells also lie at the bottoms of their columns. Given that the remaining entries are still strictly increasing down columns and weakly increasing across rows, it is clear that $T_t$ is a Young tableau. To see that $\mu$ interlaces $\lambda$, we first note that since $\mu$ was obtained by removing cells from $\lambda$, we have $\mu_i \leq \lambda_i$ for all $i$. Furthermore, $\mu_i \geq \lambda_{i+1}$. This follows because $\mu_i < \lambda_{i+1}$ would imply that in the initial Young tableau the $i$th row contained a $t$ which lay above an entry less than $t$ in the row below, a contradiction. Thus, $\mu$ interlaces $\lambda$. Conversely, for any Young tableau $T'$ of shape $\mu$ interlacing $\lambda$, we obtain a unique tableau $T$ of shape $\lambda$ by filling in $\lambda \setminus \mu$ with $t$'s. The result is again a tableau, as

$$\lambda_1 \geq \mu_1 \geq \lambda_2 \geq \mu_2 \geq \cdots \geq \lambda_{t-1} \geq \mu_{t-1} \geq \lambda_t$$
ensures that the $t$'s are placed below and to the right of all lower entries. The theorem then follows immediately from the construction of the Schur polynomials as
$$s_\lambda(\vec x_t) = \sum_T x^T. \qquad \square$$
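Theorem 1 is easy to spot-check numerically. The sketch below (assumed code; `ssyt`, `schur_eval` and `interlacings` are our own helpers) evaluates both sides at an integer point for $\lambda = (2,1,0)$ in $t = 3$ variables, enumerating the interlacing partitions $\mu$ directly.

```python
from itertools import product

def ssyt(shape, max_entry):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()
    yield from fill(0, [[] for _ in shape])

def schur_eval(shape, xs):
    # s_λ(xs) = Σ_T x^T over tableaux T with entries ≤ len(xs)
    total = 0
    for tab in ssyt(shape, len(xs)):
        term = 1
        for row in tab:
            for v in row:
                term *= xs[v - 1]
        total += term
    return total

def interlacings(lam):
    # all µ with λ_1 >= µ_1 >= λ_2 >= µ_2 >= ... >= λ_t
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(len(lam) - 1)]
    return list(product(*ranges))

lam, xs = (2, 1, 0), (2, 3, 5)
lhs = schur_eval(lam, xs)
rhs = sum(schur_eval(mu, xs[:-1]) * xs[-1] ** (sum(lam) - sum(mu))
          for mu in interlacings(lam))
assert lhs == rhs  # the branching identity of Theorem 1
```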

4.2. Branching Rules. We now consider how a $GL_t$-module $V_\lambda$ restricts to $GL_{t-1} \times GL_1$. It is natural to consider the embedding $GL_{t-1} \times GL_1 \subset GL_t$ as matrices of the form
$$\begin{pmatrix} A & 0 \\ 0 & c \end{pmatrix}$$
where $A \in GL_{t-1}$ and $c \in \mathbb{C}^\times = GL_1$. It can be shown that any irreducible polynomial representation $\rho'$ of $GL_{t-1} \times GL_1$ can be written in the form $V_\mu \times \mathbb{C}^m$, for $\mu$ a partition having $s \leq t-1$ parts and $m \in \mathbb{N}$, where $\mathbb{C}^m$ denotes the $\mathbb{C}^\times$-module $\mathbb{C}$ with $c$ acting by multiplication by $c^m$. Note that $\mathbb{C}^m \cong \mathbb{C}^{\otimes m}$, where $\mathbb{C}$ denotes the natural representation of $GL_1$ and $\mathbb{C}^{-1}$ denotes its dual. We will continue using the notation $\mathbb{C}^m$. We obtain the decomposition of $V_\lambda$ as a $GL_{t-1} \times GL_1$-module into
$$V_\lambda \cong \bigoplus_{(\mu, m)} r_{\mu m}\, V_\mu \otimes \mathbb{C}^m.$$
To determine the coefficients $r_{\mu m}$ we consider the characters of the representations. Since the left hand side is simply the representation $V_\lambda$, we know that the corresponding character is the Schur polynomial $s_\lambda(\vec x_t)$. On the right hand side, given the additive and multiplicative properties of characters, the corresponding character should be
$$\sum_{(\mu, m)} r_{\mu m}\, s_\mu(x_1, \ldots, x_{t-1})\, x_t^m,$$
giving us the equation
$$s_\lambda(x_1, \ldots, x_t) = \sum_{(\mu, m)} r_{\mu m}\, s_\mu(x_1, \ldots, x_{t-1})\, x_t^m.$$
On the other hand, as previously discussed,
$$s_\lambda(x_1, \ldots, x_t) = \sum_{\mu \prec \lambda} s_\mu(x_1, \ldots, x_{t-1})\, x_t^{|\lambda| - |\mu|}.$$
Since these two equations must agree, we have
$$\sum_{(\mu, m)} r_{\mu m}\, s_\mu(x_1, \ldots, x_{t-1})\, x_t^m = \sum_{\mu \prec \lambda} s_\mu(x_1, \ldots, x_{t-1})\, x_t^{|\lambda| - |\mu|}.$$
Identifying coefficients gives
$$r_{\mu m} = \begin{cases} 1 & \text{if } \mu \text{ interlaces } \lambda \text{ and } m = |\lambda| - |\mu|, \\ 0 & \text{otherwise.} \end{cases}$$
Moving back from characters to representations, we obtain the result that, as $GL_{t-1} \times GL_1$-modules,
$$V_\lambda = \bigoplus_{\mu \prec \lambda} V_\mu \otimes \mathbb{C}^{|\lambda| - |\mu|}.$$

5. Beyond Polynomial Representations

For a partition λ = (λ1 ≥ · · · ≥ λn), we can consider Vλ to be a representation of GLt for t ≥ n by adding t − n parts of size zero to λ.

For $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$ we now define
$$\lambda + c^t := (\lambda_1 + c, \ldots, \lambda_t + c).$$
The corresponding Schur polynomial satisfies
$$s_{\lambda + c^t}(\vec x_t) = (x_1 \cdots x_t)^c\, s_\lambda(\vec x_t).$$
This follows easily from the construction of $s_\lambda(\vec x_t)$ as $\sum_T x^T$, since adjoining $c$ cells to each row is equivalent to attaching a $t \times c$ rectangle to the left of the original tableau. We can extend any filling of the original tableau of shape $\lambda$ to a filling of the tableau of shape $\lambda + c^t$ by filling the $i$th row of the $t \times c$ rectangle with $i$'s. This is in fact the only way that this rectangle can be filled, since the tableau must be strictly increasing down columns.

We can now extend the concept of a Schur polynomial to weakly decreasing sequences of integers
$$\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$$
with $\lambda_i \in \mathbb{Z}$. Letting $d = -\lambda_t$ (so that $\lambda + d^t$ has non-negative parts), we can use the shift above to extend the definition of the Schur polynomial to an arbitrary weakly decreasing sequence of integers via
$$s_\lambda(\vec x_t) = \frac{s_{\lambda + d^t}(\vec x_t)}{(x_1 \cdots x_t)^d}.$$
Extending $m_\lambda(\vec x_t)$ similarly, we can easily verify that $s_\lambda(\vec x_t) = \sum_\mu k_{\lambda\mu}\, m_\mu(\vec x_t)$ continues to hold by setting $k_{\lambda + c^t, \mu + c^t} = k_{\lambda\mu}$.

We now extend the definition of a representation of $GL_n$ to include homomorphisms
$$\rho : G \to GL(V) \cong GL_N,$$
where, for $A = (a_{kl}) \in GL_n$, $\rho(A) = (\rho_{ij}(A))$ with each $\rho_{ij}$ a polynomial in the entries of $A$ and the inverse of its determinant, $\det^{-1}(A)$. In this setting, the relation
$$s_\lambda(x_1, \ldots, x_n) = \sum_{\mu \prec \lambda} s_\mu(x_1, \ldots, x_{n-1})\, x_n^{|\lambda| - |\mu|}$$
still holds, since the character of $\det$ on a diagonal matrix is $x_1 \cdots x_n$, which allows us to shift to a standard partition, apply the identity there, and shift back. Thus, we still have
$$V_\lambda = \bigoplus_{\mu \prec \lambda} V_\mu \otimes \mathbb{C}^{|\lambda| - |\mu|}.$$

Example 2. An important example to consider is $GL_n$ acting on $V = \mathbb{C}^n$. This corresponds to the partition $\lambda = (1, 0, \ldots, 0)$. $V$ has standard basis $e_1, \ldots, e_n$, so we have
$$\rho : GL_n \to GL_n, \quad A \mapsto A.$$

Example 3. The dual module $V^*$ is also a representation. $V^*$ has dual basis $e_1^*, \ldots, e_n^*$, where $e_i^*(e_j) = \delta_{ij}$ for $\delta_{ij}$ the Kronecker delta. We obtain the dual representation
$$\rho^* : GL_n \to GL_n, \quad A \mapsto (A^{-1})^\top.$$
Now, $A^{-1}$ can be written as
$$A^{-1} = \frac{1}{\det A}\, A',$$
where $A'$, the adjugate of $A$, is a matrix whose entries are polynomials in the entries of $A$. Thus, the matrix of this representation has entries that are polynomials in the entries of $A$ and $\det^{-1} A$. The character of the dual representation, $\chi_{V^*}(\vec x_n)$, will be
$$\sum_{i=1}^n \frac{1}{x_i} = m_{(-1)}(\vec x_n).$$
Consider now the representation corresponding to the sequence $\lambda = (0, 0, \ldots, 0, -1)$. Shifting this sequence, we obtain the character
$$s_\lambda(\vec x_n) = \frac{1}{x_1 \cdots x_n}\, s_{(1,1,\ldots,1,0)}(\vec x_n) = m_{(-1)}(\vec x_n).$$
Thus, by comparing characters, we see that the dual representation corresponds to $\lambda = (0, \ldots, 0, -1)$.

6. Weight Decompositions

Let $H \subset GL_n$ denote the subgroup of diagonal matrices. We define the $H$-module $\mathbb{C}^\mu$ to be the one-dimensional vector space $\mathbb{C}$ with $H$ acting by
$$\mathrm{diag}(z_1, \ldots, z_n) \cdot v = z_1^{\mu_1} \cdots z_n^{\mu_n}\, v$$
for all $v \in \mathbb{C}$.

Definition 9. Let $\rho : GL_n \to GL(V) \cong GL_N$ be a representation. We refer to $v \in V$ as a weight vector with weight $\mu = (\mu_1, \ldots, \mu_n) \in \mathbb{Z}^n$ if
$$z \cdot v = z_1^{\mu_1} \cdots z_n^{\mu_n}\, v$$
for all $z = \mathrm{diag}(z_1, \ldots, z_n) \in H$. We define the weight space $V^\mu \subset V$ of a representation $V$ to be the subspace of all vectors in $V$ that are weight vectors with weight $\mu$.

One can show that every representation of $GL_n$ decomposes into the direct sum of its weight spaces
$$V = \bigoplus_\mu V^\mu = \bigoplus_\mu (\dim V^\mu)\, \mathbb{C}^\mu.$$
Moreover, $\dim V^\mu = \dim V^{\sigma(\mu)}$ for any $\sigma \in S_n$. Moving to the character of the irreducible representation $V_\lambda$, we obtain the expansion
$$s_\lambda(\vec x_n) = \sum_\mu k_{\lambda\mu}\, m_\mu(\vec x_n).$$
Comparing these, we are able to conclude that
$$\dim V_\lambda^\mu = k_{\lambda\mu}.$$
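For a concrete check of $\dim V_\lambda^\mu = k_{\lambda\mu}$ and of the $S_n$-invariance of weight multiplicities (a sketch with assumed helper code, not from the text), we can tabulate the contents of the tableaux of shape $(2,1)$ for $GL_3$: each content vector is a weight, and its multiplicity is the dimension of the corresponding weight space.

```python
from collections import Counter

def ssyt(shape, max_entry):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()
    yield from fill(0, [[] for _ in shape])

n, shape = 3, (2, 1)
wt_mult = Counter()
for tab in ssyt(shape, n):
    counts = [0] * n
    for row in tab:
        for v in row:
            counts[v - 1] += 1
    wt_mult[tuple(counts)] += 1  # dim of the weight space V_λ^µ

# dim V_λ^µ = k_λµ, invariant under permuting the entries of µ
assert wt_mult[(2, 1, 0)] == wt_mult[(0, 1, 2)] == wt_mult[(1, 0, 2)] == 1
assert wt_mult[(1, 1, 1)] == 2     # k_{(2,1),(1,1,1)} = 2
assert sum(wt_mult.values()) == 8  # dim V_(2,1) for GL_3
```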

7. Some Notation

So far, we have only considered the Schur polynomials as polynomials in a finite number of variables. However, we can easily extend the concept to infinitely many variables by removing the restriction on the Young tableaux $T$ to contain only integers from 1 to some finite $t$. Through our previous definition of the Schur polynomials, we then obtain
$$s_\lambda = s_\lambda(x_1, x_2, x_3, \ldots) = \sum_T x^T.$$
For partitions $\lambda$, $s_\lambda(x_1, \ldots, x_t, 0, 0, 0, \ldots) = s_\lambda(x_1, \ldots, x_t)$ for all $t \in \mathbb{N}$, $t > 0$. Notationally, we will use $m_\lambda := m_\lambda(x_1, x_2, x_3, \ldots)$ to refer to the polynomial in infinitely many variables generated by the monomial $x^\lambda$. For example,
$$s_{(1)}(\vec x_t) = x_1 + \cdots + x_t = m_{(1)}(\vec x_t)$$
becomes
$$s_{(1)} = \sum_{i \geq 1} x_i = m_{(1)}.$$
Similarly,
$$s_{(2)}(\vec x_t) = \sum_{1 \leq i \leq t} x_i^2 + \sum_{1 \leq i < j \leq t} x_i x_j = m_{(2)}(\vec x_t) + m_{(1,1)}(\vec x_t)$$
becomes $s_{(2)} = m_{(2)} + m_{(1,1)}$. Note, however, that the number of monomials appearing in $m_\lambda(\vec x_t)$ depends on $t$. In three variables,

$$m_{(1,1)}(x_1, x_2, x_3) = x_1 x_2 + x_1 x_3 + x_2 x_3,$$
which has three terms, while
$$m_{(2,1)}(\vec x_t) = \sum_{i \neq j} x_i^2 x_j$$
in three variables gives
$$m_{(2,1)}(x_1, x_2, x_3) = x_1^2 x_2 + x_1 x_2^2 + x_2^2 x_3 + x_2 x_3^2 + x_1^2 x_3 + x_1 x_3^2,$$
which has six terms.
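The number of monomials appearing in $m_\lambda(x_1, \ldots, x_t)$ is just the number of distinct rearrangements of the exponent vector $(\lambda_1, \ldots, \lambda_k, 0, \ldots, 0)$ into $t$ slots, a multinomial coefficient. A small illustrative sketch (the helper `m_num_terms` is ours):

```python
from collections import Counter
from math import factorial

def m_num_terms(lam, t):
    """Number of monomials in m_λ(x_1, ..., x_t): distinct rearrangements
    of the length-t exponent vector (λ_1, ..., λ_k, 0, ..., 0)."""
    parts = list(lam) + [0] * (t - len(lam))
    n = factorial(t)
    for mult in Counter(parts).values():
        n //= factorial(mult)  # divide out repeated exponents
    return n

assert m_num_terms((1, 1), 3) == 3  # x1x2 + x1x3 + x2x3
assert m_num_terms((2, 1), 3) == 6  # the six terms listed above
assert m_num_terms((2,), 3) == 3    # x1^2 + x2^2 + x3^2
```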

8. Stability

8.1. Kostka Numbers. We begin our study of stability with the Kostka numbers. Specifically, we want to know if there is a value $t_0$ such that for partitions $\lambda$ and $\mu$ having $t > t_0$ parts, $k_{\lambda\mu}(t) = k_{\lambda\mu}(t_0)$. Again, we look at the characters
$$s_\lambda(\vec x_t) = \sum_\mu k_{\lambda\mu}(t)\, m_\mu(\vec x_t)$$
corresponding to
$$V_\lambda = \bigoplus_\mu k_{\lambda\mu}\, \mathbb{C}^\mu.$$
The Kostka numbers demonstrate stability as we let $t$ go to infinity. We recognize that for the partition $\mu = (\mu_1 \geq \cdots \geq \mu_t)$, we have

$$x^\mu = x_1^{\mu_1} \cdots x_t^{\mu_t},$$
but as $\mu_k = 0$ for sufficiently large $k$, $x_k^{\mu_k} = 1$ for these $k$. Thus the multiplicities of the monomials $x_1^{\mu_1} \cdots x_t^{\mu_t}$, namely the Kostka numbers $k_{\lambda\mu}(t)$, will stabilize for sufficiently large $t$. This stability can also be seen by looking at the Kostka numbers $k_{\lambda\mu}$ as the number of Young tableaux of shape $\lambda$ and content $\mu$: as the number of parts goes to infinity, the number of possible tableaux stabilizes, as adding parts of size zero to both $\lambda$ and $\mu$ will eventually not create any new Young tableaux of shape $\lambda$ and content $\mu$.

8.2. Littlewood-Richardson Coefficients. Having established the stability of the Kostka numbers for standard partitions, we move to that of the Littlewood-Richardson coefficients, maintaining the restriction that $\lambda$, $\mu$, and $\nu$ be partitions. As previously discussed, for $\lambda$ a partition having $n$ strictly positive parts, we can view $V_\lambda$ as a $GL_t$-module for any $t \geq n$ by adding $t - n$ parts of size zero to $\lambda$. The definition of the Littlewood-Richardson coefficients as the coefficients in the expansion of the product $s_\lambda(\vec x_t)\, s_\mu(\vec x_t)$ is equivalent to
$$V_\lambda \otimes V_\mu = \bigoplus_\nu c_{\lambda\mu}^\nu\, V_\nu.$$

However, this decomposition of the tensor product of representations of $GL_t$ depends on $t$. It is natural to then ask for which values of $t$ the formula is valid. We begin by considering an example.

Example 4. In terms of the basis of symmetric polynomials given by Schur polynomials, we have
$$s_{(2)}\, s_{(2)} = \sum_\nu c_{(2)(2)}^\nu\, s_\nu$$
where we sum over partitions of 4. In terms of the corresponding representations, this becomes
$$V_{(2)} \otimes V_{(2)} = \bigoplus_\nu c_{(2)(2)}^\nu\, V_\nu.$$
The partitions of 4 are $(4)$, $(3,1)$, $(2,2)$, $(2,1,1)$ and $(1,1,1,1)$. However, $(1,1,1,1)$, having 4 parts, does not correspond to a valid irreducible representation of $GL_3$. This can be seen by moving to the corresponding characters. Considering the Schur polynomials in an indeterminate number of variables, we have
$$s_{(2)} = \sum x_i^2 + \sum x_i x_j = m_{(2)} + m_{(1,1)},$$
so that
$$s_{(2)}\, s_{(2)} = \sum x_i^4 + 2\sum x_i^3 x_j + 3\sum x_i^2 x_j^2 + 4\sum x_i^2 x_j x_k + 6\sum x_i x_j x_k x_l = m_{(4)} + 2m_{(3,1)} + 3m_{(2,2)} + 4m_{(2,1,1)} + 6m_{(1,1,1,1)}.$$
However, if we work in only three variables, the final sum, $\sum x_i x_j x_k x_l = m_{(1,1,1,1)}$, disappears, since there are no collections of four distinct indices. We now find the corresponding Littlewood-Richardson coefficients. Corresponding to the partitions of 4, we have the Schur functions
$$s_{(4)} = \sum x_i^4 + \sum x_i^3 x_j + \sum x_i^2 x_j^2 + \sum x_i^2 x_j x_k + \sum x_i x_j x_k x_l = m_{(4)} + m_{(3,1)} + m_{(2,2)} + m_{(2,1,1)} + m_{(1,1,1,1)},$$
$$s_{(3,1)} = \sum x_i^3 x_j + \sum x_i^2 x_j^2 + 2\sum x_i^2 x_j x_k + 3\sum x_i x_j x_k x_l = m_{(3,1)} + m_{(2,2)} + 2m_{(2,1,1)} + 3m_{(1,1,1,1)},$$
$$s_{(2,2)} = \sum x_i^2 x_j^2 + \sum x_i^2 x_j x_k + 2\sum x_i x_j x_k x_l = m_{(2,2)} + m_{(2,1,1)} + 2m_{(1,1,1,1)},$$
$$s_{(2,1,1)} = \sum x_i^2 x_j x_k + 3\sum x_i x_j x_k x_l = m_{(2,1,1)} + 3m_{(1,1,1,1)}$$
and
$$s_{(1,1,1,1)} = \sum x_i x_j x_k x_l = m_{(1,1,1,1)}.$$
We see that
$$s_{(2)}\, s_{(2)} = m_{(4)} + 2m_{(3,1)} + 3m_{(2,2)} + 4m_{(2,1,1)} + 6m_{(1,1,1,1)} = s_{(4)} + s_{(3,1)} + s_{(2,2)}.$$
Thus, we have the Littlewood-Richardson coefficients
$$c_{(2)(2)}^{(4)} = 1, \quad c_{(2)(2)}^{(3,1)} = 1, \quad c_{(2)(2)}^{(2,2)} = 1, \quad c_{(2)(2)}^{(2,1,1)} = 0, \quad c_{(2)(2)}^{(1,1,1,1)} = 0.$$
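The conclusion of Example 4 can be verified numerically in three variables. The sketch below (assumed code; `ssyt` and `schur_eval` are our own helpers) evaluates $s_{(2)}^2$ and $s_{(4)} + s_{(3,1)} + s_{(2,2)}$ at an integer point via tableau enumeration.

```python
def ssyt(shape, max_entry):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()
    yield from fill(0, [[] for _ in shape])

def schur_eval(shape, xs):
    # s_λ(xs) = Σ_T x^T over tableaux with entries ≤ len(xs)
    total = 0
    for tab in ssyt(shape, len(xs)):
        term = 1
        for row in tab:
            for v in row:
                term *= xs[v - 1]
        total += term
    return total

xs = (2, 3, 5)
lhs = schur_eval((2,), xs) ** 2
rhs = schur_eval((4,), xs) + schur_eval((3, 1), xs) + schur_eval((2, 2), xs)
# s_(2)·s_(2) = s_(4) + s_(3,1) + s_(2,2); the terms with coefficient 0 drop out
assert lhs == rhs
```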

We now examine the general situation for partitions having only non-negative parts. For clarity, we denote by $c_{\lambda\mu}^\nu(t)$ the Littlewood-Richardson coefficient for partitions having $t$ parts. To facilitate an inductive argument, we will use the lexicographic order on partitions: we say that $\lambda \succ \mu$ if there exists $i$ such that $\lambda_i > \mu_i$ and $\lambda_j = \mu_j$ for all $j < i$.

Theorem 2. For any partitions $\lambda = (\lambda_1 \geq \cdots \geq \lambda_t)$, $\mu = (\mu_1 \geq \cdots \geq \mu_t)$, and $\nu = (\nu_1 \geq \cdots \geq \nu_t)$ there exists a positive integer $t_0$ such that for $t \geq t_0$, $c_{\lambda\mu}^\nu(t) = c_{\lambda\mu}^\nu(t_0)$.

That is to say, when increasing the number of variables in the Schur polynomials, or equivalently, the number of parts in the corresponding partitions, the Littlewood-Richardson coefficients stabilize. We will henceforth reserve $c_{\lambda\mu}^\nu$ for this stable asymptotic value.

Proof. We claim that there exists $t_0 \in \mathbb{N}$ such that for $t > t_0$, $c_{\lambda\mu}^\nu(t) = c_{\lambda\mu}^\nu(t_0)$. This can be seen by looking at
$$s_\lambda(\vec x_t)\, s_\mu(\vec x_t) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(\vec x_t).$$
As $t \to \infty$, we know that the Kostka numbers stabilize for standard partitions. Using the expansion of the Schur polynomials as
$$s_\lambda(\vec x_t) = \sum_\tau k_{\lambda\tau}(t)\, m_\tau(\vec x_t),$$
we obtain
$$\Big(\sum_\tau k_{\lambda\tau}(t)\, m_\tau(\vec x_t)\Big)\Big(\sum_{\tau'} k_{\mu\tau'}(t)\, m_{\tau'}(\vec x_t)\Big) = \sum_\nu c_{\lambda\mu}^\nu(t) \sum_{\tau''} k_{\nu\tau''}(t)\, m_{\tau''}(\vec x_t).$$
Since the Kostka numbers stabilize, the left hand side of the equation stabilizes. We use an inductive argument based on the lexicographic order for the right hand side. We begin by comparing the highest terms in the expansion
$$s_\lambda(\vec x_t)\, s_\mu(\vec x_t) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(\vec x_t).$$
Since
$$s_\lambda(\vec x_t) = m_\lambda(\vec x_t) + \sum_{\tau \prec \lambda} k_{\lambda\tau}\, m_\tau(\vec x_t),$$
and similarly
$$s_\mu(\vec x_t) = m_\mu(\vec x_t) + \sum_{\tau' \prec \mu} k_{\mu\tau'}\, m_{\tau'}(\vec x_t),$$
the highest term on the left hand side is $x^{\lambda+\mu}$. On the other hand, the highest term on the right hand side is $c_{\lambda\mu}^{\lambda+\mu}(t)\, x^{\lambda+\mu}$. This shows us that
$$c_{\lambda\mu}^{\lambda+\mu}(t) = 1,$$
proving stability for the term of highest lexicographic order in the expansion. We now assume that stability holds for any $\nu' \succ \nu$. We begin with the expansion
$$\Big(\sum_\tau k_{\lambda\tau}\, m_\tau(\vec x_t)\Big)\Big(\sum_{\tau'} k_{\mu\tau'}\, m_{\tau'}(\vec x_t)\Big) = \sum_\nu c_{\lambda\mu}^\nu(t)\, s_\nu(\vec x_t).$$
We can rearrange these sums as
$$\Big(\sum_\tau k_{\lambda\tau}\, m_\tau(\vec x_t)\Big)\Big(\sum_{\tau'} k_{\mu\tau'}\, m_{\tau'}(\vec x_t)\Big) - \sum_{\nu' \succ \nu} c_{\lambda\mu}^{\nu'}(t)\, s_{\nu'}(\vec x_t) = c_{\lambda\mu}^{\nu}(t)\, s_\nu(\vec x_t) + \sum_{\tilde\nu \prec \nu} c_{\lambda\mu}^{\tilde\nu}(t)\, s_{\tilde\nu}(\vec x_t).$$
We now compare the coefficients of $x^\nu$ on both sides, using the expansion
$$\Big(\sum_\tau k_{\lambda\tau}\, m_\tau(\vec x_t)\Big)\Big(\sum_{\tau'} k_{\mu\tau'}\, m_{\tau'}(\vec x_t)\Big) - \sum_{\nu' \succ \nu} c_{\lambda\mu}^{\nu'} \sum_{\tau''} k_{\nu'\tau''}\, m_{\tau''}(\vec x_t) = c_{\lambda\mu}^{\nu}(t)\, s_\nu(\vec x_t) + \sum_{\tilde\nu \prec \nu} c_{\lambda\mu}^{\tilde\nu}(t)\, s_{\tilde\nu}(\vec x_t).$$
On the left hand side, this coefficient is a combination of stable coefficients, hence is itself stable. The coefficient of $x^\nu$ on the right hand side is $c_{\lambda\mu}^\nu(t)$, establishing the result. $\square$

9. Asymptotics

We now begin our study of sequences of the form $\lambda = (\lambda_+, \lambda_-)$. We construct these sequences by taking two partitions $\lambda_+ = (\lambda_1^+ \geq \cdots \geq \lambda_r^+)$ and $\lambda_- = (\lambda_1^- \geq \cdots \geq \lambda_s^-)$, joining them together after negating and reversing the order of the parts of $\lambda_-$ to obtain
$$\lambda^t = (\lambda_1^+ \geq \cdots \geq \lambda_r^+ \geq 0 \geq \cdots \geq 0 \geq -\lambda_s^- \geq \cdots \geq -\lambda_1^-).$$
Throughout our discussion, we assume that the total number of parts, or terms, in $\lambda$, including parts of size 0, is $t$. When we discuss asymptotic behaviour, we will be allowing $t$ to grow large by inserting parts of size zero. We will denote by

$$s_\lambda(\vec x_t) := s_{\lambda^t}(\vec x_t) = s_{(\lambda_1^+, \ldots, \lambda_r^+, 0, \ldots, 0, -\lambda_s^-, \ldots, -\lambda_1^-)}(\vec x_t)$$
the Schur polynomial corresponding to the pair $\lambda$. For example,

$$s_{(1,-1)}(\vec x_t) = m_{(1,-1)}(\vec x_t) + (t-1).$$
We now compute another example.

Example 5. We note that $s_{(-1)} = m_{(-1)}$,

$$s_{(1,-2)} = m_{(1,-2)} + m_{(1,-1,-1)} + (t-1)\, m_{(-1)}$$
and
$$s_{(1,-1,-1)} = m_{(1,-1,-1)} + (t-2)\, m_{(-1)}.$$
Then
$$s_{(-1)}\, s_{(1,-1)} = m_{(-1)}\big(m_{(1,-1)} + (t-1)\big) = m_{(1,-2)} + 2m_{(1,-1,-1)} + 2(t-1)\, m_{(-1)} = s_{(1,-2)} + s_{(1,-1,-1)} + s_{(-1)}.$$
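The identity $s_{(1,-1)}(\vec x_t) = m_{(1,-1)}(\vec x_t) + (t-1)$ can be tested by computing $s_{(1,0,\ldots,0,-1)}$ through the shift of Section 5, namely $s_{(1,0,\ldots,0,-1)} = s_{(2,1,\ldots,1,0)}/(x_1 \cdots x_t)$. A sketch with assumed helper code, using exact rational arithmetic; note the explicit dependence of the constant term on $t$, the first hint of polynomial behaviour.

```python
from fractions import Fraction

def ssyt(shape, max_entry):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, tab):
        if idx == len(cells):
            yield [row[:] for row in tab]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)
        for v in range(lo, max_entry + 1):
            tab[r].append(v)
            yield from fill(idx + 1, tab)
            tab[r].pop()
    yield from fill(0, [[] for _ in shape])

def schur_eval(shape, xs):
    total = 0
    for tab in ssyt(shape, len(xs)):
        term = 1
        for row in tab:
            for v in row:
                term *= xs[v - 1]
        total += term
    return total

def s_1_minus1(xs):
    # shift (1,0,...,0,-1) by 1^t to the partition (2,1,...,1,0), then divide
    t = len(xs)
    shifted = (2,) + (1,) * (t - 2) + (0,)
    denom = 1
    for x in xs:
        denom *= x
    return Fraction(schur_eval(shifted, xs), denom)

def m_1_minus1(xs):
    # m_(1,-1) = Σ_{i≠j} x_i / x_j
    n = len(xs)
    return sum(Fraction(xs[i], xs[j]) for i in range(n) for j in range(n) if i != j)

for xs in [(2, 3, 5), (2, 3, 5, 7)]:
    t = len(xs)
    assert s_1_minus1(xs) == m_1_minus1(xs) + (t - 1)
```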

As a corollary to Theorem 2.3 in [PS], we obtain that for given pairs $\lambda = (\lambda_+, \emptyset)$ and $\mu = (\emptyset, \mu_-)$
$$s_{(\lambda_+,\emptyset)}\, s_{(\emptyset,\mu_-)} = \sum_\nu c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^\nu\, s_\nu$$
where $\nu = (\nu_+, \nu_-)$ and, for $r = |\lambda_+| - |\nu_+| = |\mu_-| - |\nu_-|$,
$$c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^\nu = \sum_{|\gamma| = r} c_{\nu_+\gamma}^{\lambda_+}\, c_{\nu_-\gamma}^{\mu_-}.$$
Firstly, we note that $c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^{(\lambda_+,\mu_-)} = 1$, since $\nu = (\lambda_+, \mu_-)$ gives $|\lambda_+| - |\nu_+| = 0$, which implies that the only summand is the one corresponding to $\gamma = \emptyset$, for which $c_{\lambda_+\emptyset}^{\lambda_+} = c_{\mu_-\emptyset}^{\mu_-} = 1$. Secondly, for any $\nu = (\nu_+, \nu_-) \neq (\lambda_+, \mu_-)$, $|\nu_+| \geq |\lambda_+|$ or $|\nu_-| \geq |\mu_-|$ implies that $c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^\nu = 0$. We then have
$$s_{(\lambda_+,\emptyset)}\, s_{(\emptyset,\mu_-)} = s_{(\lambda_+,\mu_-)} + \sum_\nu c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^\nu\, s_\nu.$$
Having established that the generalized Littlewood-Richardson coefficients $c_{(\lambda_+,\emptyset)(\emptyset,\mu_-)}^\nu$ can be written as combinations of Littlewood-Richardson coefficients for standard partitions, we are able to infer their stability.¹ In the next section, we will use this new stability to deduce the polynomial behaviour of the Kostka numbers.

10. The Polynomial Behaviour of Kostka Numbers

In this section, we will prove that $k_{\lambda\mu}(t)$ is a polynomial in $t$, where $\lambda = (\lambda_+, \lambda_-)$ and $\mu = (\mu_+, \mu_-)$. Furthermore, as an integer-valued polynomial, $k_{\lambda\mu}(t)$ can be written in the binomial basis for integer-valued polynomials,
$$\Big\{ \binom{t+i}{i} \ \Big|\ i \geq 0 \Big\},$$
in addition to the usual basis of polynomials $\{t^i \mid i \geq 0\}$. Our first proof of the polynomial behaviour of Kostka numbers will rely on the results from [PS]. We begin by proving a lemma.

Lemma 1. Let $p(\vec x_t)$ and $q(\vec x_t)$ be symmetric Laurent polynomials in $t$ variables with coefficients that are polynomials in $t$. Then the product $p(\vec x_t)\, q(\vec x_t)$ also has coefficients that are polynomials in $t$.

Proof. We say that $p(\vec x_t)$ involves at most $k$ variables if, when expanded as a sum of monomial symmetric polynomials $m_\gamma$, i.e. $p(\vec x_t) = \sum r_\gamma\, m_\gamma(\vec x_t)$, each $\gamma$ has at most $k$ parts. We say that $q(\vec x_t)$ involves at most $s$ variables, and induct on the total $k + s$. The base case is clear, as if $k + s = 0$, we simply have the product $1 \cdot 1 = 1$.

¹No formula for general $c_{\lambda\mu}^\nu$ can be derived directly from [PS], but these coefficients likely also stabilize. We will not, however, address this question here.

We now assume that the statement holds for all $p'(\vec x_t)$ involving $k'$ variables and $q'(\vec x_t)$ involving $s'$ variables such that $k' + s' < k + s$. On one hand, as a product of symmetric polynomials, we have
$$p(\vec x_t)\, q(\vec x_t) = \sum_\gamma a_\gamma(t)\, m_\gamma(\vec x_t),$$
where $a_\gamma(t)$ is some function. On the other hand, by sorting by the powers of $x_t$, we write
$$p(\vec x_t) = \sum_i p_i(\vec x_{t-1})\, x_t^i, \qquad q(\vec x_t) = \sum_j q_j(\vec x_{t-1})\, x_t^j,$$
where each $p_i(\vec x_{t-1})$ and each $q_j(\vec x_{t-1})$ is a symmetric Laurent polynomial in $x_1, \ldots, x_{t-1}$. Since the inductive assumption applies to any pair $p_i, q_j$ with $(i,j) \neq (0,0)$, we have
$$p(\vec x_t)\, q(\vec x_t) = \Big(\sum_i p_i(\vec x_{t-1})\, x_t^i\Big)\Big(\sum_j q_j(\vec x_{t-1})\, x_t^j\Big) = p_0(\vec x_{t-1})\, q_0(\vec x_{t-1}) + \sum_{(i,j) \neq (0,0)} p_i(\vec x_{t-1})\, q_j(\vec x_{t-1})\, x_t^{i+j}$$
$$= p_0(\vec x_{t-1})\, q_0(\vec x_{t-1}) + \sum_d \Big(\sum_\beta a_{\beta,d}(t-1)\, m_\beta(\vec x_{t-1})\Big) x_t^d.$$
So,
$$(3)\quad \sum_\gamma a_\gamma(t)\, m_\gamma(\vec x_t) = p_0(\vec x_{t-1})\, q_0(\vec x_{t-1}) + \sum_d \Big(\sum_\beta a_{\beta,d}(t-1)\, m_\beta(\vec x_{t-1})\Big) x_t^d.$$

If $\gamma = (\gamma_1, \ldots, \gamma_t) \neq \emptyset$, say $\gamma_t \neq 0$, then only $\sum_d \big(\sum_\beta a_{\beta,d}(t-1)\,m_\beta(\vec{x}_{t-1})\big)x_t^d$ contributes to the coefficient of $\vec{x}_t^{\,\gamma}$, so $a_\gamma(t) = a_{\bar\gamma, \gamma_t}(t-1)$ for $\bar\gamma = (\gamma_1, \ldots, \gamma_{t-1})$, and hence $a_\gamma(t)$ is a polynomial in $t$ by the inductive hypothesis. If $\gamma = \emptyset$, comparing coefficients of $x^\emptyset$, i.e. comparing constant coefficients on either side of (3), we obtain

$$a_\emptyset(t) = a_\emptyset(t-1) + a_{\emptyset,0}(t-1)$$
$$\Rightarrow\quad a_\emptyset(t) - a_\emptyset(t-1) = a_{\emptyset,0}(t-1),$$
giving a difference equation whose right hand side is a polynomial in $t$, and whose solution is therefore a polynomial in $t$ (see Appendix). This gives us that $a_\emptyset(t)$ is also a polynomial, and hence that the entire product $p(\vec{x}_t)q(\vec{x}_t)$ has polynomial coefficients, as desired. □
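To illustrate the lemma concretely (a numerical sketch of ours with hypothetical helper names, not part of the proof): take $p = m_{(1)}(\vec{x}_t) = \sum_i x_i$ and $q = m_{(-1)}(\vec{x}_t) = \sum_j x_j^{-1}$. Their product is $m_{(1,-1)}(\vec{x}_t) + t\, m_\emptyset$, so here $a_\emptyset(t) = t$ is visibly a polynomial in $t$. The check below expands the product over exponent tuples:

```python
from collections import Counter
from itertools import product

def product_coeffs(t):
    """Expand (sum_i x_i) * (sum_j x_j**-1) in t variables; return the
    coefficient of each Laurent monomial, keyed by its exponent tuple."""
    coeffs = Counter()
    for i, j in product(range(t), repeat=2):
        expo = [0] * t
        expo[i] += 1
        expo[j] -= 1
        coeffs[tuple(expo)] += 1
    return coeffs

for t in range(2, 7):
    c = product_coeffs(t)
    # coefficient of m_empty, i.e. of the constant monomial: a_empty(t) = t
    assert c[(0,) * t] == t
    # coefficient of m_(1,-1): the monomial x_1 / x_t appears exactly once
    assert c[(1,) + (0,) * (t - 2) + (-1,)] == 1
print("a_empty(t) = t confirmed for t = 2,...,6")
```

The constant coefficient grows linearly with the number of variables, exactly the polynomial dependence on $t$ that the lemma asserts.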

Theorem 3. $k_{\lambda\mu}(t)$ is a polynomial in $t$.

Proof. We begin by noting that we extend the lexicographic order to an order between pairs $\mu = (\mu^+, \mu^-)$ and $\nu = (\nu^+, \nu^-)$ by saying that $\mu \succeq \nu$ if $\mu^+ \succeq \nu^+$ and $\mu^- \succeq \nu^-$. We will establish the polynomial behaviour using the stability of the Littlewood-Richardson coefficients and induction on the above order. For the base case, $s_\emptyset(\vec{x}_t) = 1$ has coefficients that are polynomials in $t$; namely, all are the zero polynomial except for $k_{\emptyset\emptyset} = 1$. We now assume that $s_{\nu'}(\vec{x}_t)$ is polynomial - meaning it has coefficients that are polynomials in $t$ - for all $\nu' \prec \nu = (\nu^+, \nu^-)$. Consider
$$s_{(\nu^+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu^-)}(\vec{x}_t) = \sum_{\nu'} c^{\nu'}_{\lambda\mu}\, s_{\nu'}(\vec{x}_t), \qquad \text{where } \lambda = (\nu^+, \emptyset),\ \mu = (\emptyset, \nu^-).$$

On the right hand side, we have that $\nu = (\nu^+, \nu^-)$ is the largest element, and it appears with coefficient $c^\nu_{\lambda\mu} = 1$. We can therefore rearrange the equation to obtain
$$s_{(\nu^+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu^-)}(\vec{x}_t) - \sum_{\nu' \prec \nu} c^{\nu'}_{\lambda\mu}\, s_{\nu'}(\vec{x}_t) = s_\nu(\vec{x}_t).$$
By Lemma 1 and the inductive hypothesis, all Schur polynomials on the left hand side of the equation are polynomial, as is the product $s_{(\nu^+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu^-)}(\vec{x}_t)$. Moreover, the Littlewood-Richardson coefficients are stable. Therefore, $s_\nu(\vec{x}_t)$ is also polynomial, establishing the result. □

Neither Theorem 3 nor its proof actually provides much information about the polynomial $k_{\lambda\mu}(t)$. Approaching the problem through branching rules allows us to determine both the degree and the leading coefficient of $k_{\lambda\mu}(t)$.

Theorem 4. The degree of $k_{\lambda\mu}(t)$ is $d = |\lambda^+| - |\mu^+| = |\lambda^-| - |\mu^-|$. Furthermore, the leading coefficient of $k_{\lambda\mu}(t)$ is $\frac{1}{d!}\, k_{\lambda^+\tilde\mu^+}\, k_{\lambda^-\tilde\mu^-}$, where $\tilde\mu^+$ is obtained from $\mu^+$ by adjoining $|\lambda^+| - |\mu^+|$ single cells to the corresponding Young diagram, that is to say, $\tilde\mu^+ = (\mu^+, 1, \ldots, 1)$ (and similarly for $\tilde\mu^-$).

Proof. In order to prove this result, we must first establish a recurrence relation for $k_{\lambda\mu}(t)$. We will do this using the branching rules as discussed in Section 4.2. Recall that we have the decomposition
$$V_\lambda = \bigoplus_{\nu \prec \lambda} V_\nu \otimes \mathbb{C}_{|\lambda| - |\nu|},$$
where $\nu \prec \lambda$ denotes that $\nu$ interlaces $\lambda$ (Section 4.1), which corresponds to the relation of characters
$$s_{\lambda_{t+1}}(\vec{x}_{t+1}) = \sum_{\nu_t \prec \lambda_{t+1}} s_{\nu_t}(\vec{x}_t)\, x_{t+1}^{|\lambda| - |\nu|}.$$
Expanding the Schur polynomials in terms of the Kostka numbers gives
$$\sum_\mu k_{\lambda\mu}(t+1)\, m_\mu(\vec{x}_{t+1}) = \sum_{\nu_t \prec \lambda_{t+1}} \Big( \sum_\mu k_{\nu\mu}(t)\, m_\mu(\vec{x}_t) \Big) x_{t+1}^{|\lambda| - |\nu|}.$$
We now compare coefficients of the terms

$$\vec{x}_{t+1}^{\,\mu} = x_1^{\mu_1} \cdots x_{t+1}^{\mu_{t+1}}$$
on either side of the above equation. To simplify our induction, we look at the terms where $\mu_{t+1} = 0$; that is to say, we are really comparing coefficients for terms of the form

$$x_1^{\mu_1} \cdots x_t^{\mu_t}.$$
This gives us
$$k_{\lambda\mu}(t+1) = \sum_{\nu_t \prec \lambda_{t+1},\ |\nu| = |\lambda|} k_{\nu\mu}(t).$$
We note that $\lambda_t \prec \lambda_{t+1}$. Denoting $a_t = k_{\lambda\mu}(t)$, we have
$$k_{\lambda\mu}(t+1) = k_{\lambda\mu}(t) + \sum_{\nu_t \prec \lambda_{t+1},\ |\nu| = |\lambda|,\ \nu \neq \lambda} k_{\nu\mu}(t),$$
so
$$a_{t+1} = a_t + \sum_{\nu_t \prec \lambda_{t+1},\ |\nu| = |\lambda|,\ \nu \neq \lambda} k_{\nu\mu}(t)$$
$$\Rightarrow\quad a_{t+1} - a_t = \sum_{\nu_t \prec \lambda_{t+1},\ |\nu| = |\lambda|,\ \nu \neq \lambda} k_{\nu\mu}(t). \tag{4}$$

We will now induct on $d = |\lambda^+| - |\mu^+|$ to prove that $k_{\lambda\mu}(t)$ is of degree $d$ with leading coefficient $\frac{1}{d!}\, k_{\lambda^+\tilde\mu^+}\, k_{\lambda^-\tilde\mu^-}$. For the base case, we assume $|\lambda^+| = |\mu^+|$. We note that the right hand side of (4) is then zero, as for any $\nu$ such that $\nu_t \prec \lambda_{t+1}$ with $\nu \neq \lambda$,
$$|\nu^+| < |\lambda^+| = |\mu^+| \;\Rightarrow\; k_{\nu\mu}(t) = 0.$$

Thus, $a_{t+1} - a_t = 0$, so $a_t$ is a constant. To determine this constant, we note that $|\lambda^+| = |\mu^+|$ and $|\lambda^-| = |\mu^-|$, so by looking at the corresponding Young diagrams with the appropriate shifts, since the positive and negative portions of the Young diagram are forced to be filled independently, we obtain $k_{\lambda\mu}(t) = k_{\lambda^+\mu^+}\, k_{\lambda^-\mu^-}$, establishing the result.

We now assume that the result holds for all $\nu = (\nu^+, \nu^-)$ such that $|\nu^+| - |\mu^+| < d$. The right hand side of (4) is then a sum of polynomials of degrees $|\nu^+| - |\mu^+| < d$. To see that the degree of the sum is, in fact, exactly $d - 1$, we note that sequences of the form $\nu = (\nu^+, \nu^-)$ obtained by removing one cell each from suitable locations of $\lambda^+$ and $\lambda^-$ (i.e. at the end of a row, where the removal does not conflict with the weakly decreasing lengths of the rows) will interlace $\lambda$ and will satisfy $|\nu^+| = |\lambda^+| - 1$. Each of the sequences of this form will give a Kostka polynomial $k_{\nu\mu}(t)$ of degree $d - 1$ and positive leading coefficient, so no cancellation will occur. Thus, the degree of the right hand side of (4) is exactly $d - 1$, so Theorem A implies that the degree of $k_{\lambda\mu}(t)$ is $d$.

The leading coefficient of $a_t = k_{\lambda\mu}(t)$ (in the expansion in the binomial basis) will be the sum of all leading coefficients on the right hand side of (4) taken over the $\nu$ with $|\lambda^+| - |\nu^+| = 1$. In other words, it is equal to the sum
$$\sum_{|\lambda^+| - |\nu^+| = 1} C_\nu, \tag{5}$$

where $C_\nu$ denotes the leading coefficient (in the binomial basis) of the Kostka polynomial $k_{\nu\mu}(t)$. By the inductive hypothesis, we have
$$C_\nu = k_{\nu^+\tilde\mu^+}\, k_{\nu^-\tilde\mu^-},$$

which combined with (5) implies that the leading coefficient of $k_{\lambda\mu}(t)$ equals
$$\sum_{\substack{|\lambda^+| - |\nu^+| = 1 \\ |\lambda^-| - |\nu^-| = 1}} k_{\nu^+\tilde\mu^+}\, k_{\nu^-\tilde\mu^-} = \Big( \sum_{|\lambda^+| - |\nu^+| = 1} k_{\nu^+\tilde\mu^+} \Big) \Big( \sum_{|\lambda^-| - |\nu^-| = 1} k_{\nu^-\tilde\mu^-} \Big). \tag{6}$$
To compute the sums on the right hand side of (6), notice that for standard partitions $\tau$ and $\eta$, with the definition of the Kostka number $k_{\tau\eta}$ as the number of possible Young tableaux of shape $\tau$ and content $\eta = (\eta_1, \ldots, \eta_t)$,
$$k_{\tau\eta} = \sum_{\tau'} k_{\tau'\eta'}, \tag{7}$$
where $\tau'$ runs over all Young diagrams obtained from $\tau$ by removing one cell, and $\eta' = \eta - (0, \ldots, 0, 1)$ (assuming that $\eta_t > 0$). This follows because, for each Young tableau of shape $\tau$ and content $\eta$, removing a cell containing the entry $t$ creates a new Young tableau (following much the same argument as in Section 3.1), and given the collection of all possible Young tableaux obtained by the removal of a single cell containing the entry $t$, we can reconstruct a unique "parent" Young tableau. Rewriting (7) as
$$k_{\tau\eta} = \sum_{|\tau| - |\tau'| = 1} k_{\tau'\eta'}$$
and combining this with (5) and (6), we conclude that the leading coefficient of $k_{\lambda\mu}(t)$ in the binomial basis is

$$k_{\lambda^+\tilde\mu^+}\, k_{\lambda^-\tilde\mu^-}. \qquad \square$$
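Theorem 4 can be sanity-checked by brute force for a small example (a verification sketch of ours; the function names are hypothetical, and we assume the same shift to a standard shape that Section 11 uses for its examples). For $\lambda = ((2),(1))$ and $\mu = ((1),\emptyset)$ we have $d = 1$ and predicted leading coefficient $\frac{1}{1!}\,k_{(2),(1,1)}\,k_{(1),(1)} = 1$; shifting every part by $1$ turns $\lambda_t$ into the shape $(3, 1^{t-2})$ with content $(2, 1^{t-1})$:

```python
def kostka(shape, content):
    """Count semistandard Young tableaux of `shape` with `content`
    (content[v] = number of entries equal to v + 1), by backtracking."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]

    def fill(idx, tab, remaining):
        if idx == len(cells):
            return 1
        i, j = cells[idx]
        total = 0
        for v in range(len(remaining)):
            if remaining[v] == 0:
                continue
            if j > 0 and tab[(i, j - 1)] > v + 1:   # rows weakly increase
                continue
            if i > 0 and tab[(i - 1, j)] >= v + 1:  # columns strictly increase
                continue
            tab[(i, j)] = v + 1
            remaining[v] -= 1
            total += fill(idx + 1, tab, remaining)
            remaining[v] += 1
            del tab[(i, j)]
        return total

    return fill(0, {}, list(content))

def k_lambda_mu(t):
    # lambda = ((2),(1)), mu = ((1),()); after shifting by 1 the shape is
    # (3, 1, ..., 1) and the content is (2, 1, ..., 1)
    return kostka([3] + [1] * (t - 2), [2] + [1] * (t - 1))

vals = [k_lambda_mu(t) for t in range(3, 8)]
assert vals == [t - 1 for t in range(3, 8)]   # degree 1, leading coefficient 1
print("k_lambda_mu(t) =", vals)
```

The computed values are $t - 1$, a polynomial of degree $1$ with leading coefficient $1$, as the theorem predicts for this pair.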

11. Further Asymptotics

After proving that the Kostka numbers do exhibit polynomial behaviour, a natural next step is to try to determine what these polynomials actually are. Given partitions $\lambda = (\lambda^+, \lambda^-)$ and $\mu = (\mu^+, \mu^-)$, is it possible to determine the Kostka polynomial $k_{\lambda\mu}(t)$? In Section 10 we were able to determine the degree and leading coefficient of $k_{\lambda\mu}(t)$; however, at this time no general formula for determining $k_{\lambda\mu}(t)$ explicitly is known. We can, however, establish formulas for two simple cases.

Theorem 5.
$$k_{(s,-s),\emptyset}(t) = \binom{t+s-2}{s}.$$

First Proof. This statement follows from the fact that

$$k_{(s,-s),\emptyset}(t) = k_{(2s,\,s^{t-2}),\,s^t},$$
where $k_{(2s,\,s^{t-2}),\,s^t}$ denotes the regular Kostka number obtained by simply counting the number of Young tableaux of shape $(2s, s^{t-2})$ and content $s^t$. In the first row of $(s,-s) + s^t$, we have no choice but to fill the first $s$ spots with $1$'s. We then have an arbitrary choice with repetition of $s$ elements from $\{2, \ldots, t\}$ for the remaining $s$ spots in the first row. These choices completely determine the tableau, as the $(t-2) \times s$ block can then only be filled in one way. Thus, by the formula for choosing $s$ elements with repetition out of $t-1$ possible choices, we have that
$$k_{(s,-s),\emptyset}(t) = \binom{(t-1) + (s-1)}{s} = \binom{t+s-2}{s},$$
as desired. From this formula, we can see that $k_{(s,-s),\emptyset}(t)$ is, for fixed $s$, a polynomial in $t$ of degree $s$ with leading coefficient $\frac{1}{s!}$, as expected from Theorem 4. □

Second Proof. We can also obtain this result from branching rules and induction. Since
$$k_{(s,-s)\emptyset}(t+1) = k_{(s,-s)\emptyset}(t) + \sum_{j=0}^{s-1} k_{(j,-j)\emptyset}(t)$$
and
$$k_{(s+1,-(s+1))\emptyset}(t+1) = k_{(s+1,-(s+1))\emptyset}(t) + k_{(s,-s)\emptyset}(t) + \sum_{j=0}^{s-1} k_{(j,-j)\emptyset}(t),$$
we obtain

$$k_{(s+1,-(s+1))\emptyset}(t+1) = k_{(s+1,-(s+1))\emptyset}(t) + k_{(s,-s)\emptyset}(t+1).$$
Defining

$$p(t,s) := k_{(s,-s)\emptyset}(t),$$
we thus obtain the recurrence relation
$$p(t+1, s+1) = p(t, s+1) + p(t+1, s)$$
with boundary conditions $p(t, 0) = 1$ and $p(2, s) = 1$, as

$$p(2, s) = k_{(s,-s)\emptyset}(2) = k_{(2s,0),(s,s)} = 1.$$
We claim that $p(t,s) = \binom{t+s-2}{s}$. Clearly,
$$\binom{t-2}{0} = 1 = p(t, 0)$$
and
$$\binom{2+s-2}{s} = \binom{s}{s} = 1 = p(2, s).$$
We now induct on $t + s$, assuming that the statement holds for all $t'$ and $s'$ such that $t' + s' < t + s$. By the recurrence relation, $p(t,s) = p(t-1, s) + p(t, s-1)$, which, by the inductive hypothesis, gives
$$p(t,s) = \binom{t-1+s-2}{s} + \binom{t+s-1-2}{s-1} = \binom{t+s-3}{s} + \binom{t+s-3}{s-1}.$$
Using the relation $\binom{n+1}{r+1} = \binom{n}{r+1} + \binom{n}{r}$, we obtain
$$p(t,s) = \binom{t+s-2}{s},$$
the desired result. □

We now look at $\lambda = (1^s, -1^s)$ and $\mu = \emptyset$. To begin our discussion, we introduce the notion of a Dyck path:

Definition 10. A Dyck path is a path of steps of length 1 in the $(x,y)$-plane from the point $(0,0)$ to $(s,s)$, where each step goes either directly up or to the right and does not cross the line $x = y$.

The number of Dyck paths from $(0,0)$ to $(s,s)$ is known to be the $s$th Catalan number,
$$C_s = \frac{1}{s+1}\binom{2s}{s}$$
(see [W]).

Theorem 6.
$$k_{(1^s,-1^s),\emptyset}(t) = \binom{t}{t-s} - \sum_{i \geq 0} C_i \binom{t-(2i+1)}{t-s-i}.$$

Proof. To determine $k_{(1^s,-1^s),\emptyset}(t)$, we will look at $k_{(2^s,\,1^{t-2s}),\,1^t}$. Column one of $(1^s, -1^s) + 1^t = (2^s, 1^{t-2s})$ has $t - s$ spots, and column two has $s$. Choosing the elements in the first column completely determines the Young tableau, as the second column will then be forced to be filled in increasing order. In total, there are $\binom{t}{t-s}$ ways of choosing the first column. However, not all of these choices will allow for a valid Young tableau. We must therefore subtract from the total number of possibilities the number of fillings that are not valid tableaux. Let $a_i$ denote the number in row $i + 1$ of the first column. Thus each filling directly corresponds to a $(t-s)$-tuple $(a_0, \ldots, a_{t-s-1})$.

We note that $a_i < 2(i+1)$ for $0 \leq i \leq s-1$ is a necessary and sufficient condition for the corresponding filling to be a Young tableau. Thus, from the total number of choices for the first column, we must subtract all $(t-s)$-tuples which violate this condition. It is clear that there are $\binom{t-1}{t-s}$ ways to fill the first column with $a_0 > 1$, i.e. with a violation in the first row. Thus we begin by subtracting $\binom{t-1}{t-s}$ from the total $\binom{t}{t-s}$. In general, we claim that there are $C_i \binom{t-(2i+1)}{t-s-i}$ possible fillings that satisfy $a_j < 2j + 2$ for $0 \leq j \leq i-1$, but fail this condition for the first time at $a_i$, in the $(i+1)$st row.

Firstly, there are $C_i$ ways to construct a valid filling for the first $i$ rows. We establish this by filling the $i \times 2$ rectangle with the numbers $1, \ldots, 2i$ in order. For a placement in the left of the two columns, we add a step to the right to our Dyck path, and for a placement in the right column we assign a vertical step. The condition $a_j < 2j + 2$ corresponds to the condition that the Dyck path does not cross the line $x = y$. Thus, for every valid filling of the aforementioned rectangle, we have a corresponding Dyck path and vice versa, so the number of valid $i \times 2$ rectangular tableaux is $C_i$. To introduce a violation at the $(i+1)$st row, i.e. at $a_i$, we must fill the remaining $t - s - i$ spots with choices from the $t - (2i+1)$ elements which would give the desired violation. Subtracting all possible invalid fillings by running over the $t-s$ rows, we obtain the result:

$$k_{(1^s,-1^s),\emptyset}(t) = \binom{t}{t-s} - \sum_{i \geq 0} C_i \binom{t-(2i+1)}{t-s-i}. \qquad \square$$

We can also prove a simpler formula for this Kostka polynomial.

Theorem 7.
$$k_{(1^s,-1^s)\emptyset}(t) = \binom{t+1}{s} - 2\binom{t}{s-1}.$$

Proof. By branching rules,

$$k_{(1^s,-1^s)\emptyset}(t+1) = k_{(1^s,-1^s)\emptyset}(t) + k_{(1^{s-1},-1^{s-1})\emptyset}(t).$$
We define
$$q(t,s) := k_{(1^s,-1^s)\emptyset}(t),$$
so that the branching rule and our argument above give us the recurrence relation
$$q(t+1, s) = q(t, s) + q(t, s-1)$$
with boundary conditions $q(t, 0) = 1$ and $q(2s, s) = C_s$. We note that the variables are constrained by $s \geq 0$ and $t \geq 2s$. Now, by another straightforward inductive argument, we show that
$$q(t,s) = \binom{t+1}{s} - 2\binom{t}{s-1}.$$
It is clear that the base cases are satisfied, as
$$q(t, 0) = 1 = \binom{t+1}{0} - 2\binom{t}{-1}$$
and
$$q(2s, s) = \binom{2s+1}{s} - 2\binom{2s}{s-1} = \binom{2s}{s} - \binom{2s}{s-1} = \frac{(2s)!}{s!\,s!} - \frac{(2s)!}{(s-1)!\,(s+1)!} = \frac{(s+1)(2s)! - s\,(2s)!}{s!\,(s+1)!} = \frac{(2s)!}{s!\,(s+1)!} = C_s.$$
Now, assuming that the statement holds for all $t'$ and $s'$ such that $t' + s' < t + s$, we use the recurrence relation to obtain
$$q(t,s) = q(t-1, s) + q(t-1, s-1) = \binom{t}{s} - 2\binom{t-1}{s-1} + \binom{t}{s-1} - 2\binom{t-1}{s-2}$$
$$\Rightarrow\quad q(t,s) = \binom{t+1}{s} - 2\binom{t}{s-1},$$
as desired. □
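As a numerical cross-check (ours, not part of the text's argument; the helper names are hypothetical), Theorems 6 and 7 can be compared with each other and with a direct count of standard Young tableaux of shape $(2^s, 1^{t-2s})$, which computes $k_{(1^s,-1^s),\emptyset}(t) = k_{(2^s,1^{t-2s}),1^t}$:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def thm6(s, t):
    # only i = 0, ..., s-1 contribute, as in the proof of Theorem 6
    return comb(t, t - s) - sum(
        catalan(i) * comb(t - (2 * i + 1), t - s - i) for i in range(s)
    )

def thm7(s, t):
    return comb(t + 1, s) - 2 * comb(t, s - 1)

def syt_count(shape):
    """Count standard Young tableaux of `shape` by backtracking."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    n = len(cells)

    def fill(idx, tab, used):
        if idx == n:
            return 1
        i, j = cells[idx]
        total = 0
        for v in range(1, n + 1):
            if v in used:
                continue
            if j > 0 and tab[(i, j - 1)] > v:   # rows increase
                continue
            if i > 0 and tab[(i - 1, j)] > v:   # columns increase
                continue
            tab[(i, j)] = v
            used.add(v)
            total += fill(idx + 1, tab, used)
            used.remove(v)
            del tab[(i, j)]
        return total

    return fill(0, {}, set())

for s in range(1, 4):
    for t in range(2 * s, min(2 * s + 3, 8)):
        shape = [2] * s + [1] * (t - 2 * s)
        assert thm6(s, t) == thm7(s, t) == syt_count(shape)
print("Theorems 6 and 7 agree with brute force for small s, t")
```

For instance $s = 2$, $t = 5$ gives $5$ by all three computations.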

12. Remarks

The Kostka numbers $k_{(s,-s)\emptyset}(t)$ and $k_{(1^s,-1^s)\emptyset}(t)$ discussed above are examples of asymptotic behaviour involving two variables, a more general situation than that discussed in the majority of this project, which involved only the variable $t$.

We notice that $k_{(s,-s)\emptyset}(t)$ is a polynomial in $t$ for fixed $s$, and is a polynomial in $s$ for fixed $t$. Furthermore, it can be expressed by a single binomial coefficient.

The Kostka number $k_{(1^s,-1^s)\emptyset}(t)$, on the other hand, while a polynomial in $t$ when $s$ is fixed, is only a polynomial in $s$ for fixed $t - s$. It is, however, still a combination of only two binomial coefficients. A very general problem to study would be to examine Kostka functions

$$p(s_1, \ldots, s_k, t_1, \ldots, t_l) = k_{(\lambda_1^{s_1}, \ldots, \lambda_k^{s_k})(\mu_1^{t_1}, \ldots, \mu_l^{t_l})},$$
where $\lambda = (\lambda_1, \ldots, \lambda_k)$ and $\mu = (\mu_1, \ldots, \mu_l)$ are weakly decreasing sequences of integers, with the hope that $p$ can be expressed as a finite combination of multinomial coefficients.

Appendix: Solving Difference Equations

We record a statement used repeatedly in the text.

Theorem A. Let
$$p(t) = \sum_{k=0}^{d} \alpha_k \binom{t+k}{k},$$
and let $a_t$ be a sequence such that $a_{t+1} - a_t = p(t)$ for all $t \geq 0$. Then
$$a_t = C + \sum_{k=1}^{d+1} (\alpha_{k-1} - \alpha_k) \binom{t+k}{k},$$

where $\alpha_{d+1} = 0$ and $C$ is some constant. In particular, $a_t$ is a polynomial in $t$ of degree $d+1$ and leading coefficient $\frac{p_d}{d+1}$, where $d = \deg(p)$ and $p_d$ is the leading coefficient of $p(t)$.

Proof. We first note that if this difference equation has a solution, it is unique up to a constant. This is easily seen because if

$$b_{t+1} - b_t = a_{t+1} - a_t,$$
then $b_{t+1} - a_{t+1} = b_t - a_t$ for all $t$, so $b_t - a_t = C$ is constant. Consider
$$b_t = \sum_{k=0}^{d} \alpha_k \Big( \binom{t+k+1}{k+1} - \binom{t+k}{k} \Big),$$
which differs from $\sum_{k=1}^{d+1} (\alpha_{k-1} - \alpha_k)\binom{t+k}{k}$ only by the constant $-\alpha_0$. Then
$$b_{t+1} - b_t = \sum_{k=0}^{d} \alpha_k \Big( \binom{t+k+2}{k+1} - \binom{t+k+1}{k} \Big) - \alpha_k \Big( \binom{t+k+1}{k+1} - \binom{t+k}{k} \Big)$$
$$= \sum_{k=0}^{d} \alpha_k \Big( \binom{t+k+1}{k+1} + \binom{t+k+1}{k} - \binom{t+k+1}{k} - \binom{t+k+1}{k+1} + \binom{t+k}{k} \Big)$$
$$= \sum_{k=0}^{d} \alpha_k \binom{t+k}{k} = p(t).$$
Hence $b_t$ solves the difference equation, and by uniqueness up to a constant the stated formula for $a_t$ follows. □
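Theorem A is easy to check numerically (a sketch of ours with arbitrary example coefficients): iterate the difference equation, then compare with the closed form, fixing the constant $C$ from $a_0$ via the telescoping sum $\sum_{k=1}^{d+1}(\alpha_{k-1} - \alpha_k) = \alpha_0$.

```python
from math import comb

alpha = [2, -1, 3]      # p(t) = 2*C(t,0) - C(t+1,1) + 3*C(t+2,2), so d = 2
d = len(alpha) - 1

def p(t):
    return sum(a * comb(t + k, k) for k, a in enumerate(alpha))

# iterate the difference equation a_{t+1} - a_t = p(t) with a_0 = 7
a = [7]
for t in range(30):
    a.append(a[-1] + p(t))

# closed form from Theorem A, with alpha_{d+1} = 0; the telescoping sum of
# (alpha_{k-1} - alpha_k) over k = 1..d+1 is alpha_0, so C = a_0 - alpha_0
ext = alpha + [0]
C = a[0] - alpha[0]

def closed(t):
    return C + sum((ext[k - 1] - ext[k]) * comb(t + k, k) for k in range(1, d + 2))

assert all(a[t] == closed(t) for t in range(31))
print("Theorem A verified for t = 0,...,30; a_t has degree", d + 1)
```

The iterated sequence and the closed form agree exactly, and the solution has degree $d + 1 = 3$, one more than $p$.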

References

[BKLS] G. Benkart, S.-J. Kang, H. Lee, D.-U. Shin, The polynomial behavior of weight multiplicities for classical simple Lie algebras and classical affine Kac-Moody algebras, Contemp. Math. 248.
[F] W. Fulton, Young Tableaux: With Applications to Representation Theory and Geometry, Cambridge University Press, 1997.
[FH] W. Fulton and J. Harris, Representation Theory: A First Course, Springer, 2004.
[GW] R. Goodman and N.R. Wallach, Representations and Invariants of the Classical Groups, Cambridge University Press, 1998.
[PS] I. Penkov and K. Styrkas, Tensor Representations of Classical Locally Finite Lie Algebras, in Developments and Trends in Infinite-Dimensional Lie Theory, 2010.
[W] http://mathworld.wolfram.com/DyckPath.html