The Kostant-Toda Lattice, Combinatorial Algorithms and Ultradiscrete Dynamics
Authors Ramalheira-Tsu, Jonathan
Citation Ramalheira-Tsu, Jonathan. (2020). The Kostant-Toda Lattice, Combinatorial Algorithms and Ultradiscrete Dynamics (Doctoral dissertation, University of Arizona, Tucson, USA).
Publisher The University of Arizona.
Rights Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Link to Item http://hdl.handle.net/10150/648656

The Kostant-Toda Lattice, Combinatorial Algorithms and Ultradiscrete Dynamics
by Jonathan Ramalheira-Tsu
Copyright © Jonathan Ramalheira-Tsu 2020
A Dissertation Submitted to the Faculty of the Department of Mathematics In Partial Fulfillment of the Requirements For the Degree of Doctor of Philosophy In the Graduate College The University of Arizona
2020
THE UNIVERSITY OF ARIZONA GRADUATE COLLEGE
As members of the Dissertation Committee, we certify that we have read the dissertation
The Kostant-Toda Lattice, Combinatorial Algorithms and Ultradiscrete Dynamics
Dedication
Para minha mãe. Dedicated in loving memory to my mum.
Acknowledgements
First and foremost, I would like to express my deepest gratitude to my advisor, Nick Ercolani, for the rich and rewarding mathematical journey that culminated in this dissertation. Your exuberance in sharing your love of mathematics was a constant source of motivation, keeping charged the excitement and momentum in our work. You have truly made me feel supported and challenged, and I really cannot emphasise enough how grateful I am for the privilege and honour of having had you as my advisor.
I wish to thank my committee members: Joceline Lega, Sergey Cherkis and David Glickenstein, for so generously sharing your time and guidance. I have thoroughly enjoyed our interactions; your questions and advice have been crucial in helping to form a great deal of what has been accomplished in this dissertation.
I wish to convey my appreciation for my family. Firstly, to my parents for your love and support. To my mother: your unfaltering faith in my life and mathematical journey have kept me going. Although you are no longer with me, I want you to know how much I miss you and still feel you cheering me on. To my father, thank you for your continuing to be there for me, taking on the mantle of encouraging and supporting me on this journey. I especially thank you for offering such a patient ear when I have needed someone to talk to.
Whilst I may not be able to list everyone else in my family who has supported me during this journey, I will certainly give it a good go! My first thanks go to the aunts and uncles who have supported me throughout the years, either directly in helping me move to Arizona or supporting me during my stay, or simply for the occasional contact and friendship. I wish to thank Tio Amadeu, Tia Antonia, Tio Artur, Auntie Cathy, Auntie Dominique, Uncle George, Uncle Jim, Uncle Jimmy, Tia Luisa, Auntie Mina, Tio Paulo, Auntie Paulette, Auntie Rita and Auntie Suzanne for your help. To Auntie Rita, Auntie Paulette and Auntie Cathy, I thank you for our many entertaining conversations throughout my time at the University of Arizona: you have helped keep me sane, providing lightheartedness when needed. I also wish to thank Cousin Marcelle and Justin for their support in my move to Tucson. Thanks also to Cousin Janina and her family for allowing me to join in with the fun and games at their house, as well as Cousin Nadine and her family for hosting me for my Christmas in the US.
To the rest of my family, I thank you for being part of such a fantastic network of love and care, especially during the difficult times.
Whilst at the University of Arizona, I have been fortunate to have forged great friend- ships, making my time in Tucson so much better. Brandon Tippings, I thank you for being such a wonderful friend, colleague and office mate. Arias Storm Hathway Turner, Breanna Gushiken, Jeff Davis and Michelle Trinh, I thank you all for the great times and close friendship we shared. To Ashley, Bonnie, Elliot and Max (the Klahrs), I thank you for welcoming me into your home for such fun-filled holidays and for providing me with a second family away from home. Lastly, but by no means least, I wish to thank my Tucson dance friends, Erika Raymond, Karen Yee and Terry Daily, to name a few, for helping to provide the fun outside of academia.
Charlotte Dorme and Ruth Lewis, my dearest friends back in the United Kingdom, I thank you for your most cherished friendship and for not letting our busy lives take us out of contact.
Finally, I would like to acknowledge the support for this research from the NSF grant DMS 1615921, for which I am very grateful.
Table of Contents
List of Figures ...... 9

Abstract ...... 11

Chapter 1. Introduction ...... 12
  1.1. The Tropical Semiring and Maslov Dequantisation ...... 13
  1.2. The Robinson-Schensted-Knuth Correspondence ...... 14
  1.3. Geometric RSK and the Discrete-Time Toda Lattice ...... 16
  1.4. The Continuous-Time Toda Lattice and Bäcklund Transformations ...... 18
  1.5. Box-Ball Systems ...... 20
  1.6. The Structure of this Dissertation ...... 22
  1.7. Main Results ...... 25
    1.7.1. Realisation of the Quantisation of the RSK Algorithm as a Discrete Dynamical System ...... 25
    1.7.2. Interpretation of RSK in Terms of Solitonic Particle Dynamics ...... 26
    1.7.3. The Ghost-Box-Ball System ...... 27
    1.7.4. Direct Integrable Systems Construction of Continuous and Discrete Time Geometric RSK ...... 28
    1.7.5. The Full Toda Lattice ...... 29

Chapter 2. General Background ...... 31
  2.1. The Toda Lattice ...... 31
    2.1.1. Definition and Description of the Toda Lattice Dynamical System ...... 31
    2.1.2. Integrability of the Toda Lattice ...... 33
    2.1.3. The Phase-Shift Formulæ for the Toda Lattice ...... 36
    2.1.4. Explicit Solution of the Toda Lattice: Method of Factorisation ...... 38
    2.1.5. Geometry of the Solutions: Embeddings into the Flag Manifold Phase Space ...... 39
    2.1.6. Symes's Discrete-Time Matrix Dynamics and the Discrete-Time Toda Lattice ...... 41
  2.2. Box-Ball Systems ...... 45
    2.2.1. The Box-Ball Evolution ...... 45
    2.2.2. Soliton Behaviour and the Sorting Property ...... 46
    2.2.3. Invariants of the Box-Ball System ...... 47
    2.2.4. The Box-Ball Phase Shift ...... 48
    2.2.5. Coordinates on the Box-Ball System ...... 51
    2.2.6. Ultradiscretisation of Discrete-Time Toda ...... 52
  2.3. The Robinson-Schensted-Knuth Correspondence ...... 54
    2.3.1. Young Tableaux ...... 54
    2.3.2. The Length of the Longest Increasing Subsequence of a Permutation and Patience Sorting ...... 56
    2.3.3. Schensted Insertion ...... 58
    2.3.4. The Robinson-Schensted Correspondence ...... 60
    2.3.5. Semistandard Young Tableaux and the Robinson-Schensted-Knuth Correspondence ...... 63
    2.3.6. The RSK Equations for Schensted Insertion ...... 69
    2.3.7. Kirillov's Geometric Lifting: gRSK ...... 76
    2.3.8. A Matrix Representation of the Geometric RSK ...... 78
    2.3.9. Noumi and Yamada's Observation ...... 81

Chapter 3. Geometric RSK and Toda: The Discrete Picture ...... 83
  3.1. The Factorisation Problem ...... 83
  3.2. Parametrised Factorisations ...... 84
  3.3. Factorisations by Generalised Eigenfunctions ...... 90
  3.4. Geometric RSK as a Degeneration of the Discrete-Time Toda Lattice ...... 93

Chapter 4. RSK and BBS: The Ultradiscrete Picture ...... 103
  4.1. Ultradiscretisation of Geometric RSK ...... 104
  4.2. RSK Insertion and the Box-Ball Coordinates ...... 107
  4.3. The Ghost-Box-Ball System ...... 110
  4.4. Exorcism, Soliton Behaviour and the Invariant Shape ...... 113
  4.5. The Ghost-Box-Ball System and Schensted Insertion ...... 116
    4.5.1. The RSK Walls in the Ghost-Box-Ball System ...... 117
  4.6. The Ghost-Box-Ball Evolution and the Extended Box-Ball Coordinate Dynamics ...... 124
  4.7. Fukuda: Remarks and Distinctions ...... 126

Chapter 5. Geometric RSK and the Toda Lattice: The Continuous-Time Picture ...... 128
  5.1. Geometric Lifting and Path Operators ...... 128
  5.2. Lusztig Parameters and Total Positivity ...... 131
    5.2.1. Lusztig Parameters ...... 131
    5.2.2. The Flow $R_t$ on $P^\lambda$ ...... 132
  5.3. $T_{w_0}$ and the Linear Path $\eta(t) = \lambda t$ ...... 133
    5.3.1. Painlevé Balances ...... 135
    5.3.2. The Birkhoff Decomposition ...... 135
    5.3.3. Passage to General Toda: The Crystal Embedding ...... 136
    5.3.4. Tridiagonality of the Flow Associated to $b(t) = e^{\varepsilon\lambda t}$ ...... 138
  5.4. The gRSK Stroboscope and a Nesting of Toda Lattices ...... 139

Chapter 6. The Full Kostant-Toda Lattice ...... 142
  6.1. Triangular Arrays and the Gelfand-Tsetlin Parametrisation ...... 142
  6.2. Continuous-Time gRSK and Dynamics on T and P ...... 144
  6.3. The Set $T_\lambda$ ...... 145
    6.3.1. The Connection to the Toda Lattice ...... 147
  6.4. Flag Manifolds: The Crystal Embedding and the Companion Embedding ...... 149
  6.5. Extension to the Full Kostant-Toda Lattice ...... 154
    6.5.1. The Map $\hat g_\lambda$ ...... 155
  6.6. The Poisson Structure and Symplectic Geometry of Full Kostant-Toda ...... 166
    6.6.1. The Arhangelskij Normal Form and Parabolic Casimirs ...... 166

Chapter 7. Future Directions ...... 178
  7.1. Some Natural Extensions of this Work ...... 178
  7.2. Generalised Dressing Transformations ...... 179
  7.3. Geometric RSK and Box-Ball Systems for General Semisimple Lie Algebras ...... 180
  7.4. Geometric Quantisation ...... 180

Appendices ...... 182

Appendix A. Fukuda: The Advanced Box-Ball System and the Carrier Algorithm ...... 183
  A.1. The Box-Ball System with Labels ...... 183
  A.2. The Advanced Box-Ball System ...... 184
  A.3. The Carrier Algorithm ...... 185
  A.4. Schensted Insertion in the Advanced Box-Ball System ...... 188

Index ...... 190

References ...... 192
List of Figures
Figure 1.1. Patience sorting a permutation in S8 ...... 14
Figure 1.2. The box-ball time evolution (iterated five times) ...... 21
Figure 1.3. Roadmap of the dissertation, with numbers corresponding to connections between adjacent cells ...... 22
Figure 2.1. A box-ball system time evolution (one time step) ...... 46
Figure 2.2. The sorting property of the box-ball system ...... 47
Figure 2.3. The invariant shape of the box-ball system(s) in Figure 2.2 ...... 48
Figure 2.4. A phase shift interaction between two colliding chains ...... 48
Figure 2.5. The box-ball coordinates on a box-ball system and its time evolution ...... 51
Figure 2.6. Iterative Schensted word insertions building the RSK correspondence ...... 80
Figure 4.1. A single time-step of the ghost-box-ball evolution, split into the subroutines that define it ...... 112
Figure 4.2. A single time-step of the ghost-box-ball evolution (without the intermediate steps) ...... 113
Figure 4.3. Three iterations of the ghost-box-ball algorithm ...... 115
Figure 4.4. Three iterations of the ghost-box-ball evolution (after global exorcism) ...... 115
Figure 4.5. The invariant shape of the ghost-box-ball system(s) in Figure 4.3 ...... 116
Figure 4.6. The initial ghost-box-ball system with its finite regions labelled ...... 118
Figure 4.7. The evolution of a ghost-box-ball system with walls ...... 120
Figure 4.8. The Schensted evolution encoded by Figure 4.7 ...... 120
Figure 5.1. Iterative Schensted word insertions. In the geometric setting, the y's here are precisely the y's in Theorem 5.11 ...... 141
Figure 6.1. The quiver structure on triangular arrays ...... 146
Figure 7.1. Roadmap of the dissertation, with numbers corresponding to connections between adjacent cells ...... 178
Figure A.1. A single time-step of the advanced box-ball system, split into the successive movements of each colour/label ...... 183
Figure A.2. An advanced box-ball system with carrying capacities ...... 184
Figure A.3. The steps of a single time evolution of an advanced box-ball system with carrying capacities ...... 185
Figure A.4. The Carrier Algorithm ...... 185
Figure A.5. Successive applications of the Carrier Algorithm with input sequence from an advanced box-ball system ...... 187
Figure A.6. Schensted insertion encoded in a unit carrying capacity advanced box-ball evolution ...... 189
Abstract
We study the relationship between the algorithm underlying the Robinson-Schensted- Knuth correspondence (Schensted insertion) and the Toda lattice, exploring this in the settings of discrete-time, ultradiscrete, and continuous-time dynamical systems.
Starting with the work of Noumi and Yamada [NY04] and their observation of a similarity between Hirota's [Hi77] discrete-time Toda lattice and Kirillov's [Ki00] geometric lifting of the RSK (geometric RSK) equations for Schensted insertion, we derive solutions to the former in its unbounded setting and provide an explicit embedding of geometric RSK in the discrete-time Toda lattice.
Mimicking the ultradiscretisation of the discrete-time Toda lattice to the soliton cellular automaton known as the box-ball system [To04], we produce an extension of the classical box-ball system for Schensted insertion, which we call the ghost-box-ball system. We study this new cellular automaton in relation to Schensted insertion, demonstrating their equivalence, both on their respective coordinatisations and on the algorithmic level.
O'Connell et al. ([O13], [BBO09], [COSZ14], [O12]) demonstrate an impressive treatment of the relation between a continuous version of geometric RSK and the Toda lattice. Through the introduction of dressing transformations and Painlevé analysis [EFH91], we reformulate some of these connections in a more integrable-systems-theoretic way. In this continuous setting, we also see the general Toda flows arise, and we present results on the Poisson geometry of the full Kostant-Toda lattice to lay the foundation for future probing of these exciting connections between algorithms, combinatorics, and dynamical systems theory.
Chapter 1 Introduction
Recently, a great deal of remarkable progress has been made in the literature to bring together motivations and methods from the study of algorithms, combinatorics, integrable systems theory, and representation theory ([O13], [BBO09], [NY04], [Fu04], [Ga70], [Wo02], to list a few). A consistent theme is the translation of ideas and approaches from one area to another, serving to provide insight from new perspectives. Of particular interest in this dissertation is the idea that an algorithm can effectively be thought of as a discrete dynamical system; continuum limits of the latter may provide insights into the analysis of the former.
The main focus of this dissertation will be an instance of this principle, studied in great detail and on multiple levels. We will see this principle in the context of Schensted insertion, the algorithm at the heart of the famous Robinson-Schensted-Knuth correspondence (cf. 2.3). Noumi and Yamada [NY04] observed a remarkable connection between Schensted insertion (a purely algorithmic/combinatorial process) and the Toda lattice (a similarly well-known integrable dynamical system). The key piece of the bridge was provided by Kirillov [Ki00], in the form of tropicalisation (cf. 1.1). Through the exploration of this correspondence, other intriguing systems will be seen to emerge.
We now preview all of these ideas in the following sections of this introduction, starting with Maslov's [LMRS11] elegant description of tropicalisation in Section 1.1.
1.1 The Tropical Semiring and Maslov Dequantisation
Tropical mathematics ([LMRS11], [Li07], [Vi01]) is the study of the max-plus semiring, which we will now define. In this section, we follow the presentation given by Maslov [LMRS11]. The structure of the semiring $(\mathbb{R}_{\geq 0}, +, \times)$ is carried over to the set $S = \mathbb{R} \cup \{-\infty\}$ by a family of bijections $D_\hbar$, for $\hbar > 0$, given by

$$D_\hbar(x) = \begin{cases} \hbar \ln x & \text{if } x \neq 0 \\ -\infty & \text{if } x = 0 \end{cases}. \qquad (1.1.1)$$

This induces a family of semirings $(S, \oplus_\hbar, \otimes_\hbar)$, parametrised by $\hbar > 0$, with operations given by

$$a \oplus_\hbar b = D_\hbar\left(D_\hbar^{-1}(a) + D_\hbar^{-1}(b)\right) = \begin{cases} \hbar \ln\left(e^{a/\hbar} + e^{b/\hbar}\right) & \text{if } a, b \neq -\infty \\ \max(a, b) & \text{otherwise} \end{cases} \qquad (1.1.2)$$

$$a \otimes_\hbar b = D_\hbar\left(D_\hbar^{-1}(a)\, D_\hbar^{-1}(b)\right) = a + b. \qquad (1.1.3)$$

In the limit $\hbar \to 0$, Maslov 'dequantises' $(\mathbb{R}_{\geq 0}, +, \times)$ to obtain the tropical semiring $(\mathbb{R} \cup \{-\infty\}, \max, +)$, whose addition is the usual max operation and whose multiplication is usual addition, hence the name "max-plus semiring".
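As a concrete numerical illustration (ours, not from the source), one can watch $\oplus_\hbar$ collapse onto $\max$ as $\hbar \to 0$; the sketch below evaluates (1.1.2) in a numerically stable log-sum-exp form:

```python
import math

def oplus(a, b, hbar):
    """Maslov's deformed addition on S = R ∪ {-∞}, written in the
    numerically stable form m + ℏ·log1p(exp(-|a-b|/ℏ))."""
    if a == -math.inf or b == -math.inf:
        return max(a, b)
    m = max(a, b)
    return m + hbar * math.log1p(math.exp(-abs(a - b) / hbar))

def otimes(a, b, hbar):
    """Maslov's deformed multiplication: exactly ordinary addition."""
    return a + b

# As ℏ shrinks, 2 ⊕_ℏ 3 approaches max(2, 3) = 3:
for hbar in (1.0, 0.1, 0.001):
    print(hbar, oplus(2.0, 3.0, hbar))
```

For $\hbar = 1$ the deformed sum exceeds the max by the correction $\ln(1 + e^{-1})$; by $\hbar = 0.001$ the correction is below machine precision.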
Maslov views this construction as an analogue of the correspondence principle from
quantum mechanics, with (R≥0, +, ×) as the quantum object and (R∪{−∞}, max, +) as its classical counterpart.
Tropicalisation takes one into the realm of the piecewise linear. This relatively new field has seen applications in many areas of mathematics, including algebraic geometry, numerical analysis, cryptography, and, as we will see shortly, combinatorics and integrable systems.
1.2 The Robinson-Schensted-Knuth Correspondence
The celebrated Robinson-Schensted-Knuth (RSK) correspondence is a fundamental correspondence (see Section 2.3) between permutation groups and their representation theory. This may be regarded as a kind of discrete nonlinear Fourier transform.
At the heart of the RSK correspondence is an insertion procedure called Schensted insertion, a revamped version of patience sorting [AD99], itself a solitaire-like algorithm. Patience sorting takes a permutation (σ(1), . . . , σ(n)) of n numbers and forms a sequence of piles by taking each number in succession and placing it onto the left-most pile whose top number is greater than the number being placed; if no such pile exists, the number starts a new pile to the right. The process is initialised with a single pile consisting of the first number in the sequence.
For example, applying patience sorting to the permutation (1, 3, 6, 2, 4, 7, 5, 8) in S8, one obtains the following sequence of piles, initialised at the top-left, and terminating at the bottom-left:
Figure 1.1. Patience sorting a permutation in S8.
A key property of patience sorting is that the total number of piles equals the length of the longest increasing subsequence of the permutation. There may be more than one subsequence of this length (e.g. (1, 3, 6, 7, 8) and (1, 2, 4, 5, 8) are both of length 5).
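The pile-forming rule is short enough to state in code. Here is a minimal Python sketch (ours, not from the dissertation) of patience sorting, reproducing the example above:

```python
def patience_sort(seq):
    """Patience sorting: place each number on the left-most pile whose
    top exceeds it; if no such pile exists, start a new pile on the right."""
    piles = []
    for x in seq:
        for pile in piles:
            if pile[-1] > x:
                pile.append(x)
                break
        else:
            piles.append([x])
    return piles

piles = patience_sort((1, 3, 6, 2, 4, 7, 5, 8))
# the number of piles equals the length of the longest increasing subsequence
print(len(piles))   # 5
```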
Although the build-up to Schensted insertion is rather involved, we point out that Schensted insertion can be reduced to a discrete evolution on $\mathbb{N}_0^n \times \mathbb{N}_0^n$, where $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. The foundation for Schensted insertion is a prescription for taking an input pair of sequences
(a, x) = ((a1, . . . , an), (x1, . . . , xn)), which encode words (weakly increasing sequences of positive integers) from an alphabet {1, . . . , n}, applying an extended version of patience sorting on the words, and transforming the input sequences into an output pair of sequences
(b, y) = ((b1, . . . , bn), (y1, . . . , yn)) which encode the result of performing this extension of patience sorting. This encoding is introduced in full detail in Section 2.3.6.
The prescription, as we will see in Section 2.3, is given by recursively defining an n-tuple (η1, . . . , ηn) by
η1 = ξ1 + a1,
ηj = max{ηj−1, ξj} + aj, ∀ j = 2, . . . , n,
where $\xi_j = \sum_{i=1}^{j} x_i$ for $j = 1, \dots, n$, and then solving for y and b via the following:
y1 = η1,
yj = ηj − ηj−1, ∀ j = 2, . . . , n
b1 = 0,
bj = aj + xj − yj, ∀ j = 2, . . . , n.
The ξ and η variables above are auxiliary, serving only to express these equations describing Schensted insertion.
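The recursion above can be implemented directly. The following is a short Python sketch (ours, not from the dissertation) of one insertion step in these coordinates; note that $\sum_j (a_j + x_j) = \sum_j (b_j + y_j)$ follows from the defining equations:

```python
def rsk_step(a, x):
    """One step of Schensted insertion in the (a, x) -> (b, y)
    coordinates (tropical/max-plus form of the RSK equations)."""
    n = len(a)
    xi, s = [], 0
    for xj in x:
        s += xj
        xi.append(s)                   # xi_j = x_1 + ... + x_j
    eta = [xi[0] + a[0]]
    for j in range(1, n):
        eta.append(max(eta[j - 1], xi[j]) + a[j])
    y = [eta[0]] + [eta[j] - eta[j - 1] for j in range(1, n)]
    b = [0] + [a[j] + x[j] - y[j] for j in range(1, n)]
    return b, y

print(rsk_step([1, 2], [3, 4]))
```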
1.3 Geometric RSK and the Discrete-Time Toda Lattice
Kirillov [Ki00] noticed that the RSK equations (cf. Corollary 2.20) were of a max-plus nature, and performed the reverse of dequantisation to obtain the quantised (or de-tropicalised) RSK equations. These equations have now come to be known as the geometric RSK (or gRSK) equations, whose construction we very briefly provide. Treating the Schensted insertion equations as if they were obtained from tropicalisation/dequantisation, one can recover their quantum analogue by performing the following conversion on them: $(\max, +, -, 0) \mapsto (+, \times, \div, 1)$. This calculation, due to Kirillov [Ki00], results in the following equations:
η1 = ξ1a1
ηj = (ηj−1 + ξj)aj ∀ j = 2, . . . , n,
where ξj = x1 ··· xj for j = 1, . . . , n, and then solving for y and b via the following:
y1 = η1
$$y_j = \frac{\eta_j}{\eta_{j-1}} \qquad \forall\, j = 2, \dots, n$$
b1 = 1
$$b_j = a_j \frac{x_j}{y_j} = a_j \frac{\xi_j \eta_{j-1}}{\xi_{j-1} \eta_j} \qquad \forall\, j = 2, \dots, n.$$
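These multiplicative recursions are easy to check numerically. Below is a small Python sketch (ours, not from the dissertation) of the geometric insertion step, verifying the relation $1/a_j + 1/x_{j+1} = 1/y_j + 1/b_{j+1}$ at $j = 2$ on sample data:

```python
def grsk_step(a, x):
    """Geometric (de-tropicalised) RSK step: the tropical recursion with
    (max, +, -, 0) replaced by (+, *, /, 1)."""
    n = len(a)
    xi, p = [], 1.0
    for xj in x:
        p *= xj
        xi.append(p)                   # xi_j = x_1 * ... * x_j
    eta = [xi[0] * a[0]]
    for j in range(1, n):
        eta.append((eta[j - 1] + xi[j]) * a[j])
    y = [eta[0]] + [eta[j] / eta[j - 1] for j in range(1, n)]
    b = [1.0] + [a[j] * x[j] / y[j] for j in range(1, n)]
    return b, y

b, y = grsk_step([2.0, 3.0, 5.0], [7.0, 11.0, 13.0])
print(1 / 3 + 1 / 13, 1 / y[1] + 1 / b[2])   # the two sides agree
```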
Finally, one can eliminate (cf. Lemma 2.21) the supplementary variables $\eta_i$ and $\xi_i$ to obtain the following equations, commonly known as the geometric RSK (gRSK) equations:

$$b_1 = 1$$
$$a_1 x_1 = y_1$$
$$a_j x_j = y_j b_j \qquad \forall\, j = 2, \dots, n \qquad (1.3.1)$$
$$\frac{1}{a_1} + \frac{1}{x_2} = \frac{1}{b_2}$$
$$\frac{1}{a_j} + \frac{1}{x_{j+1}} = \frac{1}{y_j} + \frac{1}{b_{j+1}} \qquad \forall\, j = 2, \dots, n.$$

Remarkably, the system of equations in 1.3.1 can be written equivalently as the following matrix factorisation [NY04]:
$$
\begin{pmatrix}
\bar a_1 & 1 & & \\
 & \bar a_2 & \ddots & \\
 & & \ddots & 1 \\
 & & & \bar a_n
\end{pmatrix}
\begin{pmatrix}
\bar x_1 & 1 & & \\
 & \bar x_2 & \ddots & \\
 & & \ddots & 1 \\
 & & & \bar x_n
\end{pmatrix}
=
\begin{pmatrix}
\bar y_1 & 1 & & \\
 & \bar y_2 & \ddots & \\
 & & \ddots & 1 \\
 & & & \bar y_n
\end{pmatrix}
\begin{pmatrix}
1 & 0 & & \\
 & \bar b_2 & \ddots & \\
 & & \ddots & 1 \\
 & & & \bar b_n
\end{pmatrix},
\qquad (1.3.2)
$$
where variables with bars are reciprocated, e.g. $\bar a_i = \frac{1}{a_i}$.
Given the left-hand side of Equation 1.3.2, the solution (the y’s and b’s), if it exists, is uniquely determined by virtue of the b-matrix, specifically by the 1 and 0 in the top-left.
Noumi and Yamada [NY04] then observed that, modulo certain boundary condi- tions, a change of coordinates could be performed on the gRSK equations to yield Hirota’s [Hi77] integrable discretisation of the famous Toda lattice, one of the most important examples of a completely integrable system, especially due to its natural generalisations to the setting of semisimple Lie algebras and their representations ([Ko78], [EFS93]). Noumi and Yamada’s change of coordinates [NY04], which forms 18
the starting point for our work, is the following:
$$a_i = (I^t_{i+1})^{-1}, \qquad x_i = (V^t_i)^{-1}, \qquad y_i = (V^{t+1}_i)^{-1}, \qquad b_i = (I^{t+1}_i)^{-1}. \qquad (1.3.3)$$
When performed on Hirota’s discretisation of the Toda lattice [Hi77]:
$$I^{t+1}_i = I^t_i + V^t_i - V^{t+1}_{i-1}, \qquad V^{t+1}_i = \frac{I^t_{i+1} V^t_i}{I^{t+1}_i}, \qquad (1.3.4)$$

the result is the following system of equations:
$$a_i x_i = y_i b_i, \qquad \frac{1}{a_i} + \frac{1}{x_{i+1}} = \frac{1}{y_i} + \frac{1}{b_{i+1}}, \qquad (1.3.5)$$

both for all $i \in \mathbb{Z}$. This is clearly seen to be related to Equations 1.3.1, and this observation provides the initial point of contact between the worlds of RSK and the Toda lattice. The exact means by which we recover geometric RSK involves degenerate boundary conditions, and is the focus of Chapter 3.
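For concreteness, the update (1.3.4) can be iterated directly. The following is a minimal Python sketch (ours; the finite open boundary $V^t_0 = V^t_n = 0$ is an assumption made for the sketch) of one discrete-time Toda step. Summing the first equation over $i$ shows that $\sum_i I_i + \sum_i V_i$ is conserved:

```python
def dtoda_step(I, V):
    """One step of Hirota's discrete-time Toda lattice (1.3.4), finite
    open-boundary version: V_0 = V_n = 0. Computed left to right."""
    n = len(I)
    I_new, V_new = [], []
    v_prev = 0.0                             # V^{t+1}_{i-1}; V^{t+1}_0 = 0
    for i in range(n):
        v_now = V[i] if i < n - 1 else 0.0   # V^t_i; V^t_n = 0
        I_i = I[i] + v_now - v_prev          # I^{t+1}_i
        I_new.append(I_i)
        if i < n - 1:
            v_prev = I[i + 1] * V[i] / I_i   # V^{t+1}_i
            V_new.append(v_prev)
    return I_new, V_new

print(dtoda_step([2.0, 3.0, 4.0], [1.0, 1.0]))
```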
1.4 The Continuous-Time Toda Lattice and Bäcklund Transformations
The classical Toda lattice, introduced by Toda [To67], is a dynamical system in continuous time. It models the mechanics of a chain of unit-mass particles repelling each other with an exponential potential. It is a prototype for integrable systems theory, with generalisations from $\mathfrak{gl}_n$ or $\mathfrak{sl}_n$ to other semisimple Lie algebras ([Ko78] and [EFS93]). To solve the Toda lattice, one usually uses the factorisation method (cf. Theorem 2.5), which involves computing the LU decomposition or Gauss decomposition (cf. Definition 2.2). An equivalent formulation is in terms of Bäcklund transformations, whose discrete analogues provide a convenient means of discretisation to yield Hirota's discrete-time Toda lattice.
Its initial formulation was in terms of relative positions and momenta: the j-th particle from the left has relative position $q_j$ and momentum/velocity $p_j$. In the so-called open-ended case, which is the only case we will treat in this dissertation, there are n + 2 particles, labelled 0, 1, . . . , n + 1, with the 0-th pinned at negative infinity and the (n + 1)-st pinned at positive infinity.
Under Flaschka's change of variables [Fl91], given by $a_j = -p_j$ for $j = 1, \dots, n$ and $b_j = e^{q_j - q_{j+1}}$ for $j = 1, \dots, n-1$, the phase space is presented conveniently as matrices of the following form:
$$X = \begin{pmatrix}
a_1 & 1 & & \\
b_1 & a_2 & \ddots & \\
 & \ddots & \ddots & 1 \\
 & & b_{n-1} & a_n
\end{pmatrix}. \qquad (1.4.1)$$
A physical interpretation of the Toda lattice naturally gives rise to an understanding of its asymptotic behaviour that would be harder to access by analysis of the defining equations alone. The particles repel each other until the system is sufficiently spread out to allow the particles to move essentially freely, no longer experiencing any significant repelling forces from their neighbours. Eventually, as time goes to positive infinity, the particles will sort themselves by ascending momenta, and the matrix in Equation 1.4.1 will have $b_j \to 0$ for each j, with the a's taking on the values of the ordered momenta. This phenomenon of sorting by momenta is appropriately referred to as the sorting property of the Toda lattice, and we will see an analogue of this property follow through in the next section, for the
box-ball system.
In the Flaschka representation [Fl91], on symmetric matrices, this long-term sorting amounts to a diagonalisation of the matrix, which is yet another algorithm. The matrix in 1.4.1 can be conjugated by the diagonal matrix
$$D = \begin{pmatrix}
1 & & & \\
 & \sqrt{b_1} & & \\
 & & \sqrt{b_1 b_2} & \\
 & & & \ddots \\
 & & & & \sqrt{b_1 \cdots b_{n-1}}
\end{pmatrix} \qquad (1.4.2)$$
to obtain the symmetric form of the Toda lattice on Jacobi matrices:
$$D^{-1} X D = \begin{pmatrix}
a_1 & \sqrt{b_1} & & \\
\sqrt{b_1} & a_2 & \ddots & \\
 & \ddots & \ddots & \sqrt{b_{n-1}} \\
 & & \sqrt{b_{n-1}} & a_n
\end{pmatrix}. \qquad (1.4.3)$$
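The conjugation identity (1.4.3) is easy to verify numerically. Below is a small Python sketch (ours; the function name and test values are illustrative, not from the dissertation) that builds X and D for given a's and b's and forms $D^{-1}XD$ entrywise, using $(D^{-1}XD)_{ij} = X_{ij}\, d_j / d_i$ for diagonal D:

```python
import math

def symmetrise_toda(a, b):
    """Conjugate the Hessenberg matrix X (diagonal a, subdiagonal b,
    superdiagonal 1) by D = diag(1, sqrt(b1), sqrt(b1 b2), ...),
    returning the symmetric Jacobi matrix D^{-1} X D."""
    n = len(a)
    d = [1.0]
    for bj in b:
        d.append(d[-1] * math.sqrt(bj))
    X = [[0.0] * n for _ in range(n)]
    for i in range(n):
        X[i][i] = a[i]
        if i < n - 1:
            X[i][i + 1] = 1.0
            X[i + 1][i] = b[i]
    # For diagonal D, conjugation just rescales each entry.
    return [[X[i][j] * d[j] / d[i] for j in range(n)] for i in range(n)]

J = symmetrise_toda([1.0, 2.0, 3.0], [4.0, 9.0])
# off-diagonal entries become sqrt(b_j): here sqrt(4) = 2 and sqrt(9) = 3
```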
1.5 Box-Ball Systems
The box-ball system ([TS90], [TTS96], [To04]) is a famous example of a cellular automaton, another being Conway’s game of life [Ga70]. The box-ball system consists of a row of infinitely many boxes, filled with finitely many balls. A simple time evolution rule is given as follows:
(1) Take the left-most ball that has not been moved and move it to the left-most empty box to its right.
(2) Repeat (1) until all balls have been moved precisely once. 21
To illustrate this algorithm, below are six box-ball states (the top is the initial state, followed by its five subsequent time evolutions):
Figure 1.2. The box-ball time evolution (iterated five times).
The box-ball system exhibits a sorting property: chains of balls (consecutive strings of balls) travel as coherent masses (solitons) with velocity equal to the number of balls in a chain. When two solitons collide, the faster one comes out in front, and the two experience a phase shift (a deviation in position from where they would have been, barring the collision, cf. 2.2.4). Asymptotically, with enough iterations of the time evolution, the solitons will order themselves from slowest to fastest, travelling freely after being sorted.
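The evolution rule (1)-(2) above can be sketched in a few lines of Python (ours, not from the dissertation; padding the state with extra empty boxes on the right is an assumption made so every ball has room to land):

```python
def box_ball_step(state):
    """One box-ball time step: take each not-yet-moved ball from the
    left and move it to the left-most empty box to its right."""
    balls = sum(state)
    state = list(state) + [0] * balls      # room for balls to travel
    moved = [False] * len(state)
    for i, s in enumerate(state):
        if s == 1 and not moved[i]:
            j = i + 1
            while state[j] == 1:           # skip past occupied boxes
                j += 1
            state[i], state[j] = 0, 1
            moved[j] = True
    return state

# a 2-soliton overtaking a 1-soliton after enough steps; one step:
print(box_ball_step([0, 1, 1, 0, 1]))
```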
The similarity to the behaviour of the Toda lattice is absolutely no accident: Tokihiro [To04] shows that there is a coordinatisation of the box-ball system, with clear rules prescribing the evolution on these coordinates. Tokihiro also demonstrates the process of ultradiscretisation (cf. 2.2.6) on the discrete-time Toda lattice, and the result is precisely the evolution rule on the box-ball coordinates.
Although we will imminently repeat this, we would be remiss not to whet the reader's appetite with our developments on the connection between Schensted insertion and the box-ball system. In Chapter 4, we offer a multitude of new results on this connection. We mention the two main results now.
1. We show that by expanding the domain of the box-ball coordinate evolution,
one retrieves precisely the Schensted insertion rules.
2. Another rather exciting result is the development of a new variant of the box-ball system that encodes Schensted insertion.
1.6 The Structure of this Dissertation
We provide the following table as a roadmap of the general topics in this dissertation. The numbers between two adjacent cells of the table act as a key for the more detailed outline below the table.
                       de-tropicalisation →        continuum limit →

              discrete space,          continuous space,        continuous space,
              discrete time            discrete time            continuous time

  Algorithm   RSK                 —2—  Geometric RSK       —5—  Continuous
              (Schensted Insertion)                             gRSK
                   |1                       |4                      |7
  Dynamics    (Ghost)             —3—  (Hirota's)          —6—  Toda Lattice
              Box-Ball System          Discrete Toda

                       ← Maslov tropicalisation       ← stroboscope
Figure 1.3. Roadmap of the dissertation, with numbers corresponding to connec- tions between adjacent cells.
Chapter 2 provides most of the background details concerning the above. In Section 2.1.6, we describe a relationship between the continuous-time Toda lattice and the discrete-time Toda lattice (6 in the table). Section 2.3.7 provides Kirillov's geometric lifting of the RSK equations for Schensted insertion to the geometric RSK equations [Ki00] (2 in the table). In Section 2.2.6, we present Tokihiro's ultradiscretisation of discrete-time Toda to yield the box-ball equations [To04] (3 in the table). Section 6.2, whilst not in Chapter 2, is a presentation of background material, namely O'Connell's presentation of a continuous-time analogue of geometric RSK [O13] (5 in the table). Finally, we include an analogy between a certain (spatial) extension of geometric RSK and the discrete-time Toda lattice in what we have come to call "Noumi and Yamada's observation" [NY04]. This last item is not quite 4 in the table, but it is what motivated our eventual description of 4 in the table.
Chapter 3 concerns 4 in the table. In this chapter, we study the relationship between Schensted insertion and Toda on the discrete level, which is the original level to which Noumi and Yamada's observation applies. We present Hirota's discretisation of Toda, which manifests as a stroboscopic dynamics of "factoring and flipping", performed recursively. Central to this dynamics is the method of Gauss elimination/LU decomposition: factoring a matrix into a lower unipotent matrix multiplied on the right by an upper triangular matrix. The original discrete-time Toda lattice concerns infinitely many particles, hence bi-infinite matrices (indexed by pairs of integers, without bound). By imposing boundary conditions, one obtains semi-infinite (indexed by pairs of integers, each bounded from one (the same) direction) and finite versions.
The first part of Chapter 3 is an exploration of the solution of the discrete-time Toda lattice in the semi-infinite and bi-infinite cases only (the solution to the semi-infinite case truncates to the solution to the finite case). For semi-infinite discrete-time Toda, we provide an explicit description of the solution, when it exists, in terms of τ-functions (principal minor determinants). We then show how to piggyback on the semi-infinite solution to obtain a parametrised family of solutions for the bi-infinite discrete-time Toda lattice. We also prove an extension of a result of Murphy [Mu18] on the factorisation of discrete Schrödinger operators, expressing the bi-infinite discrete-time Toda solutions in terms of generalised eigenfunctions.
Chapter 4 concerns 1 in the table. In this chapter, we perform an operation on geometric Schensted insertion called ultradiscretisation, at whose heart is Maslov's tropicalisation. Indeed, we show that this recovers the original RSK equations for Schensted insertion. In the work of Tokihiro [To04], it is shown that this ultradiscretisation procedure, when applied to the discrete-time Toda lattice, yields the famous box-ball system equations. By mimicking Tokihiro's discrete Toda ultradiscretisation for geometric Schensted insertion, we present Schensted insertion in a form lending itself very naturally to comparison with the box-ball system. This natural view of Schensted insertion as a box-ball coordinate dynamics requires one to make sense of zero-length chains of balls and zero-length chains of empty boxes, which is not inherently part of the original box-ball system but makes complete sense at the level of the equations describing the box-ball evolution. On this level, we view Schensted insertion as a degeneration of the box-ball equations. Furthermore, since the elegance of the original box-ball system lies in the simplicity of its description (a recipe for moving balls into empty boxes), we hoped to recapture the extended box-ball equations in a similarly pictographic/procedural manner, which we accomplished in the creation of a new cellular automaton, the ghost-box-ball system. The remainder of Chapter 4 is dedicated to studying the ghost-box-ball system in relation to Schensted insertion. We show that the ghost-box-ball system, at each stage of a single time step, encodes the stages of a single Schensted word insertion. We also show that the underlying evolution, modulo the ghosts, is precisely the original box-ball evolution.
Chapter 5 concerns 7 in the table. In this chapter, we summarise the material in [BBO09] and [O13] pertaining to the connection between a continuous-time analogue of geometric RSK and the classical (tridiagonal) Toda lattice. We begin with the construction of a partial right action of a unitary group on the space of continuous paths through a subgroup, starting with the 2 × 2 case, motivated by Sturm-Liouville equations and dressing transformations. We also review the Lusztig parametrisation, with Berenstein, Fomin and Zelevinsky's [BFZ96] explicit specialisation of this parametrisation to a certain right-translation of the set of upper triangular matrices defined by the positivity of certain minor determinants.
We provide the background on the Painlevé analysis of the Toda lattice, using this and the geometry of flag manifolds to construct a novel proof of a main theorem (corresponding to Theorem 5.9) of O'Connell's paper on geometric RSK and the Toda lattice [O13].
In Chapter 6, we provide O'Connell's [O13] description of the relationship between the continuous-time geometric RSK equations and the Toda lattice, in terms of triangular arrays.
1. By relaxing a condition on triangular arrays, we provide a proof that the general Toda flows drive the dynamics on the triangular arrays.
2. We then provide a description of the Poisson structure and symplectic geometry underlying the full Kostant-Toda lattice through the study of the Arhangelskij normal form (6.6.11) and its associated invariants ([GS99], [Ar79]).
1.7 Main Results
We conclude the introduction with a discussion of the main results of the dissertation.
1.7.1 Realisation of the Quantisation of the RSK Algorithm as a Discrete Dynamical System
Our main result for Chapter 3 is Theorem 3.7, providing a very explicit way of viewing the quantisation of Schensted insertion (geometric RSK) as a time step of the discrete-time Toda lattice. Before our result, this was a problem of interest in the literature, but the closest solution was a result of Noumi and Yamada [NY04] showing an equivalence of geometric RSK and the discrete-time Toda lattice in the bi-infinite setting, rather than the finite setting in which geometric RSK actually lives. Our result takes this relationship to the finite setting, witnessing geometric RSK as lying on the boundary of the discrete-time Toda lattice. This helps to finally make precise what had previously only existed as an observation/analogy in the literature.
In addition to our main result, and in connection with Noumi and Yamada's observation relating to the bi-infinite discrete-time Toda lattice, which is solved by bi-infinite matrix factorisations, we provide two results classifying the LU (lower-upper) decompositions of tridiagonal, bi-infinite Hessenberg matrices. The first result, Theorem 3.2, offers a parametrisation of the factorisations by leveraging the already known solution for semi-infinite and finite matrices. In addition to this parametrised factorisation result, we offer Theorem 3.4 as a classification of these factorisations in terms of generalised eigenfunctions.
1.7.2 Interpretation of RSK in Terms of Solitonic Particle Dynamics
Motivated by our precise connection between geometric RSK and the discrete-time Toda lattice, we searched for the analogous connection on the level of their respective dequantisations (tropicalisations or ultradiscretisations). The main result towards this goal is Theorem 4.2, which shows how Schensted insertion is captured by the coordinate evolution on the box-ball system, a soliton particle dynamics that arises from dequantising the discrete-time Toda lattice. Theorem 4.2 and Corollary 4.3 achieve this goal by demonstrating that the RSK equations for Schensted insertion are given by the box-ball system equations. Key to these results was a rewriting of the RSK equations, without which the relation to the box-ball system equations was obscured.
Prior to this work, a relationship between RSK and an advanced version of the box-ball system was known by Fukuda [Fu04]. However, this advanced box-ball system requires various extra features not automatically possessed by the dequantisation of the discrete-time Toda lattice. Our results work solely with what is obtained from the discrete-time Toda lattice, potentially opening up an avenue for more direct future connections between dynamical systems theory and the representation theory underlying the RSK correspondence.
1.7.3 The Ghost-Box-Ball System
Whilst the connection between Schensted insertion and the coordinate evolution of the box-ball system is quite satisfying, it relinquishes one of the appealing qualities of the box-ball system: its graphical representation as an asymmetric simple exclusion process on boxes and balls. The ghost-box-ball system is our answer to restoring a graphical nature to the extension that was necessary for the classical domain of the box-ball coordinates. This produced a completely new cellular automaton which comes equipped with its own motivation for study due to its creation from the already important algorithm of Schensted insertion. We show in Corollary 4.9 that this new cellular automaton does indeed capture the RSK equation. We further amplify this in Theorem 4.8 which establishes a one-to-one correspondence between each step of Schensted insertion and each step of the ghost-box-ball evolution.
Knowing some of the classical features of the box-ball system, we answer some of the natural questions on what persists in the modification leading to the ghost-box-ball system. The key to leveraging properties of the box-ball system is a transformation we call exorcism. Lemma 4.5 establishes that the ghost-box-ball evolution, under this transformation, is precisely the box-ball evolution. Using this, we show in Lemma 4.6 that the ghost-box-ball system possesses the classical solitonic behaviour of the box-ball system. We also prove in Corollary 4.7 the invariance of a combinatorial signature (the shape) of the ghost-box-ball system. Aside from answering natural questions, these results show that the ghost-box-ball system possesses properties that have historically attracted attention to the classical box-ball system.
Our final work presented in this dissertation on the ghost-box-ball system is an attempt at answering the question of whether the extended box-ball coordinate evolution completely captures our ghost-box-ball evolution for ghost-box-ball systems not arising from RSK. We conjecture that it does (Conjecture 4.10), and prove a stratified version of the conjecture (restricted to certain level sets of an invariant function of the box-ball coordinate evolution) in Theorem 4.11.
1.7.4 Direct Integrable Systems Construction of Continuous and Discrete Time Geometric RSK
The (classical) Toda lattice is a continuous-time dynamical system. O'Connell [O13] studied a continuous-time version of geometric RSK and related this to the Toda lattice. This work, whilst establishing this relationship, was rather involved, requiring one to pass through ideas and constructions from last passage percolation and continuum versions of the Gessel-Lindström-Viennot theorem, as well as continuum analogues of Gelfand-Tsetlin patterns. Additionally, this approach requires the involvement of semi-discrete random polymers, Whittaker vectors and the quantum Toda lattice. Our approach is much simpler in nature, and we demonstrate the simplicity in our proof of Theorem 5.9, which re-derives the key result of O'Connell without needing to pass through such complicated structures. Additionally, Theorem 5.11 establishes a passage from continuous-time geometric RSK back to (discrete-time) geometric RSK, further increasing our mobility through our roadmap table
(Figure 1.3).
An advantage of our approach is that we bring in the geometry of flag manifolds, making available the machinery of [EFH91] and [EFS93] to our perspective. To aid in connecting to the geometry of flag manifolds, we derive explicit formulæ for describing the key components involved in mapping to the flag manifold. Lemma 6.8 provides an explicit description of the matrix used to conjugate a tridiagonal, Hessenberg matrix to its companion matrix, and Lemma 6.9 provides a relatively explicit description of the matrix that conjugates the companion matrix to a particular distinguished member of its coadjoint orbit. Finally, we write a slightly more explicit version of a result of [EFH91] for diagonalising the aforementioned distinguished matrix (when possible).
1.7.5 The Full Toda Lattice
In O'Connell's work [O13], it is shown how the classical (tridiagonal) Toda lattice is obtained from continuous geometric RSK. We show very explicitly in Theorem 6.13 that relaxing a certain condition, imposed by O'Connell, yields the full Toda lattice (not restricted to the tridiagonal case). We capture this in a commutative diagram in Theorem 6.15, which also contains a complete roadmap between O'Connell's various components and the flag manifold and associated embeddings of the Toda flow. This opens up a potential path to generalising the constructions of this dissertation to the full Toda lattice. For example, we hope this will provide a way of finding an analogue of the (ghost-)box-ball system for the full Toda lattice.
In light of this hopeful avenue to full Toda, we lay the groundwork for some of the relevant Poisson geometry. We present (between Lemma 6.18 and Corollary 6.19) a correction to Gekhtman and Shapiro's formula [GS99] for the matrix conjugating a matrix to the so-called Arhangelskij form, along with a proof that this correction works. Additionally, we offer Theorem 6.20 as a very explicit proof that the Arhangelskij form produces invariants of a matrix under the coadjoint orbit of certain parabolic subgroups. These results have been known in the literature (cf., for example, [GS99]). However, we provide more concrete details to help illuminate the path forward.
Chapter 2 General Background
2.1 The Toda Lattice
In this section, we present the relevant background on the Toda lattice: from its basic definition, to its solutions, geometry and discretisation.
2.1.1 Definition and Description of the Toda Lattice Dynamical System
The Toda lattice, due to Toda [To67], is a nearest neighbour interaction dynamical system on a collection of particles, in which two particles exert a repelling force on each other, given by the exponential of the distance between them.
Take $n + 2$ particles of unit mass, labelled $j = 0, 1, \dots, n+1$. Let particle $j$ have position $q_j$ (relative to its equilibrium position) and momentum $p_j$. Because each particle has unit mass, one has
$$\dot{q}_j = p_j \tag{2.1.1}$$
for each $j$. Furthermore, taking the force as described above, one has $\dot{p}_j = e^{q_{j-1} - q_j} - e^{q_j - q_{j+1}}$ for each $j$.
Lastly, boundary conditions of $q_0 = -\infty$ and $q_{n+1} = \infty$ are imposed, which, formally, result in $e^{q_0 - q_1} = e^{q_n - q_{n+1}} = 0$. These boundary conditions are chosen to truncate the lattice to a finite system. Consequently, the Toda lattice is expressed as the following system of differential equations:
$$\dot{q}_j = p_j, \qquad j = 1, \dots, n \tag{2.1.2}$$
$$\dot{p}_j = \begin{cases} -e^{q_1 - q_2} & \text{if } j = 1 \\ e^{q_{j-1} - q_j} - e^{q_j - q_{j+1}} & \text{if } 1 < j < n \\ e^{q_{n-1} - q_n} & \text{if } j = n \end{cases} \tag{2.1.3}$$
When $\mathbb{R}^{2n}$, with coordinates $(p_1, \dots, p_n, q_1, \dots, q_n)$, is equipped with the standard Poisson bracket, the Toda lattice is a Hamiltonian system with Hamiltonian function
$$H(p_1, \dots, p_n, q_1, \dots, q_n) = \frac{1}{2}\sum_{j=1}^{n} p_j^2 + \sum_{j=1}^{n-1} e^{q_j - q_{j+1}}. \tag{2.1.4}$$
Flaschka's change of variables $(p_1, \dots, p_n, q_1, \dots, q_n) \mapsto (a_1, \dots, a_n, b_1, \dots, b_n)$ is given by setting $a_j = -p_j$ for $j = 1, \dots, n$, and $b_j = e^{q_j - q_{j+1}}$ for $j = 1, \dots, n-1$.
In Flaschka variables, the Toda lattice assumes the following simple form:
$$\dot{b}_j = (a_{j+1} - a_j) b_j, \qquad j = 1, \dots, n-1 \tag{2.1.5}$$
$$\dot{a}_j = \begin{cases} b_1 & \text{if } j = 1 \\ b_j - b_{j-1} & \text{if } 1 < j < n \\ -b_{n-1} & \text{if } j = n \end{cases}. \tag{2.1.6}$$
One can arrange the variables neatly into a matrix
$$\begin{pmatrix} a_1 & 1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & b_{n-1} & a_n \end{pmatrix}. \tag{2.1.7}$$
Equations 2.1.5 and 2.1.6 amount to the following matrix differential equation:
$$\frac{d}{dt}\begin{pmatrix} a_1 & 1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & b_{n-1} & a_n \end{pmatrix} = \begin{pmatrix} b_1 & 0 & & \\ (a_2 - a_1) b_1 & b_2 - b_1 & \ddots & \\ & \ddots & \ddots & 0 \\ & & (a_n - a_{n-1}) b_{n-1} & -b_{n-1} \end{pmatrix} \tag{2.1.8}$$
Remark 2.1. The boundary conditions chosen in Section 2.1.1 amount to setting b0 = bn = 0, which are precisely what enables this expression of the Toda lattice as a dynamical evolution on this finite dimensional matrix phase space.
2.1.2 Integrability of the Toda Lattice
Let $X$ be the matrix
$$X = \begin{pmatrix} a_1 & 1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & b_{n-1} & a_n \end{pmatrix}. \tag{2.1.9}$$
Proposition 2.1. Equations 2.1.5 and 2.1.6 are equivalent to the following Lax equation
$$\frac{d}{dt} X = [X, \pi_-(X)] \tag{2.1.10}$$
where $\pi_-(X)$ is the projection of $X$ onto its strictly lower triangular part, i.e. $X$ with the $a$'s and $1$'s replaced by zeroes.
Remark 2.2. If π+(X) is the projection of X onto its upper triangular part, then one could replace the right-hand side of Equation 2.1.10 with [π+(X),X], by the usual properties of the matrix commutator.
One can uniformly shift the particles in the Toda lattice, so we may assume that the average of the a’s is zero, i.e. a1 + ··· + an = 0. Under this assumption, X is a traceless matrix and is therefore an element of the Lie algebra sl(n, R). We will allow for complex numbers and make the following definitions.
Definition 2.1. Let g := sl(n, C) be the Lie algebra of traceless n × n matrices. Let n+ (n−) be the subalgebra of upper (respectively, lower) nilpotent matrices. Similarly, let b+ (b−) be the subalgebra of upper (respectively, lower) triangular matrices with trace zero.
Let π+ be the projection π+ : g → b+ and π− be the projection π− : g → n−.
We now define the Lie group analogues of the above.
Definition 2.2. Let G = SL(n, C) be the Lie group of determinant one n×n matrices.
Let N+ (N−) be the group of upper (respectively, lower) unipotent matrices. Let B+
($B_-$) be the group of upper (respectively, lower) triangular matrices with determinant 1. When $g \in G$ is expressed as $g = nb$, where $n \in N_-$ and $b \in B_+$, one calls $nb$ the LU-decomposition (lower-upper decomposition). For $g \in G$, if $g$ has an LU-decomposition, define $\Pi_-$ and $\Pi_+$ by $g = \Pi_-(g)\Pi_+(g)$, where $\Pi_-(g) \in N_-$ and $\Pi_+(g) \in B_+$.
Remark 2.3. Not all matrices have an LU decomposition, i.e. $G \neq N_- B_+$. However,
N−B+ is dense in G. Furthermore, one can find a neighbourhood of the identity on which this factorisation always exists.
Definition 2.3. Let $\varepsilon_n$ be the matrix in $\mathfrak{g}$ with $1$'s on its superdiagonal and $0$'s elsewhere. For example, when $n = 3$,
$$\varepsilon_3 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
When it is unambiguous to do so, we drop the subscript and simply write ε.
With these definitions in place, we see that the phase space for the matrix form of the
Toda lattice is the subset of tridiagonal matrices in ε + b−, the so-called Hessenberg matrices.
Lemma 2.2. If X solves Equation 2.1.10, then
$$\frac{d}{dt} X^k = [X^k, \pi_-(X)] \tag{2.1.11}$$
for all k ∈ N.
This is shown by induction and the product rule for matrix differentiation.
Proposition 2.3. The Toda flow is isospectral.
Proof. By Lemma 2.2, one has
$$\frac{d}{dt}\operatorname{tr} X^k = \operatorname{tr} \frac{d}{dt} X^k = \operatorname{tr}[X^k, \pi_-(X)] = 0 \tag{2.1.12}$$
since the commutators are traceless. With all trace powers having zero derivative, it follows that the characteristic polynomial, hence the spectrum, of $X$ is conserved by the Toda flow.
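Isospectrality is also easy to observe numerically. The following sketch (our own illustration, not part of the text; the initial matrix, step size and integration time are arbitrary choices) integrates the Lax equation $\frac{d}{dt}X = [X, \pi_-(X)]$ for a traceless tridiagonal Hessenberg matrix with a fourth-order Runge-Kutta scheme and checks that the trace powers, and hence the spectrum, are conserved:

```python
# Illustrative numerical check of Proposition 2.3: integrate the Lax
# equation dX/dt = [X, pi_-(X)] and verify that the trace powers tr X^k
# (hence the characteristic polynomial) are conserved.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B, c=1.0):
    # A + c*B
    return [[A[i][j] + c * B[i][j] for j in range(len(A))] for i in range(len(A))]

def pi_minus(X):
    # projection onto the strictly lower triangular part
    return [[X[i][j] if i > j else 0.0 for j in range(len(X))] for i in range(len(X))]

def lax_rhs(X):
    # the commutator [X, pi_-(X)] = X pi_-(X) - pi_-(X) X
    return madd(matmul(X, pi_minus(X)), matmul(pi_minus(X), X), -1.0)

def rk4_step(X, dt):
    k1 = lax_rhs(X)
    k2 = lax_rhs(madd(X, k1, dt / 2))
    k3 = lax_rhs(madd(X, k2, dt / 2))
    k4 = lax_rhs(madd(X, k3, dt))
    for k, w in ((k1, 1), (k2, 2), (k3, 2), (k4, 1)):
        X = madd(X, k, w * dt / 6)
    return X

def trace_power(X, k):
    P = X
    for _ in range(k - 1):
        P = matmul(P, X)
    return sum(P[i][i] for i in range(len(X)))

# a traceless tridiagonal Hessenberg initial condition (arbitrary values)
X = [[0.5, 1.0, 0.0],
     [1.0, -0.2, 1.0],
     [0.0, 2.0, -0.3]]
before = [trace_power(X, k) for k in (1, 2, 3)]
for _ in range(1000):
    X = rk4_step(X, 0.001)
after = [trace_power(X, k) for k in (1, 2, 3)]
drift = max(abs(u - v) for u, v in zip(before, after))
print(drift)  # tiny: the spectrum is conserved up to integration error
```

Note that the superdiagonal of $X$ remains exactly $1$ along the flow, as predicted by Equation 2.1.8: the commutator has zero superdiagonal.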
Moser [Mo75] showed that, generically, as time goes to +∞ or −∞, the repelling forces will cause the system to expand: the particles will move away from each other
asymptotically. This is exhibited in $q_{j+1} - q_j \to \infty$ for all $j$ as $t \to \pm\infty$. Therefore, asymptotically, each $p_j$ will limit to a constant. In the Flaschka variables, this means that $b_j \to 0$ as $t \to \pm\infty$, and $a_j$ will eventually become constant. Since the constant values for the $a_j$'s will be eigenvalues for the asymptotically attained matrix, and the Toda lattice is isospectral, one must have that $(a_1, \dots, a_n)$ is some permutation of the eigenvalues $(\lambda_1, \dots, \lambda_n)$ of $X$.
Furthermore, as the particles separate, the particles behave more like free particles with momenta given by the eigenvalues. A fast particle will move past a slower one, each undergoing a phase shift (cf. Section 2.1.3), and so at the end, the particles will arrange themselves from slowest to fastest, as t → ∞. Thus, asymptotically, if
λ1 < λ2 < ··· < λn, then
$$X(t) \to \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n) + \varepsilon \quad \text{as } t \to +\infty. \tag{2.1.13}$$
A similar argument holds in reverse time:
$$X(t) \to \operatorname{diag}(\lambda_n, \lambda_{n-1}, \dots, \lambda_1) + \varepsilon \quad \text{as } t \to -\infty. \tag{2.1.14}$$
This property of the Toda lattice is often called the sorting property. In analogy to the box-ball system and its sorting property in Section 1.5, these eigenvalues are the ordered asymptotic speeds of the particles in the Toda lattice, like the speeds of the blocks of adjacent balls in the box-ball system.
2.1.3 The Phase-Shift Formulæ for the Toda Lattice
Returning to the original Hamiltonian variables for the Toda lattice (Equations 2.1.2 and 2.1.3),
$$\dot{q}_j = p_j, \qquad j = 1, \dots, n$$
$$\dot{p}_j = \begin{cases} -e^{q_1 - q_2} & \text{if } j = 1 \\ e^{q_{j-1} - q_j} - e^{q_j - q_{j+1}} & \text{if } 1 < j < n \\ e^{q_{n-1} - q_n} & \text{if } j = n \end{cases}$$
Definition 2.4. Define two sequences of numbers $(\alpha_k^+)_{k=1}^n$ and $(\alpha_k^-)_{k=1}^n$ by
$$\alpha_k^+ = \lim_{t \to +\infty} p_k, \qquad \alpha_k^- = \lim_{t \to +\infty} p_{n-k+1}. \tag{2.1.15}$$
In the notation of 2.1.13, since $a_i = -p_i$ for all $i$, and $a_i \to \lambda_i$ as $t \to +\infty$, we have
$$\alpha_k^+ = -\lambda_k, \qquad \alpha_k^- = -\lambda_{n-k+1}. \tag{2.1.16}$$
The following is a result of Moser.
Theorem 2.4 ([Mo75]). The asymptotic behaviour of the solution to the (Hamiltonian form of the) Toda lattice is given by
$$q_k(t) = \alpha_k^+ t + \beta_k^+ + O(e^{-\delta t}) \tag{2.1.17}$$
$$q_k(-t) = -\alpha_k^- t + \beta_k^- + O(e^{-\delta t}) \tag{2.1.18}$$
for $t \to +\infty$, with $\delta > 0$.
Furthermore,
$$\beta_{n-k+1}^+ = \beta_k^- + \sum_{j \neq k} \varphi_{jk}(\alpha^-), \tag{2.1.19}$$
where
$$\varphi_{jk}(\alpha^-) = \begin{cases} \log(\alpha_j^- - \alpha_k^-)^2 & \text{for } j < k \\ -\log(\alpha_j^- - \alpha_k^-)^2 & \text{for } j > k \end{cases}. \tag{2.1.20}$$
Equation 2.1.19 is what we refer to as the phase shift: asymptotically, comparing the solution at $-\infty$ and $+\infty$, for $\lambda_k$, the difference is given by $\beta_k^+ - \beta_{n-k+1}^-$. The quantity $\varphi_{jk}$ represents the phase shift between two particles with velocities $\alpha_j^-$, $\alpha_k^-$ at $t = -\infty$. Moser [Mo75] interprets Equation 2.1.19 as saying that the interactions take place two at a time, resulting in this overall scattering.
2.1.4 Explicit Solution of the Toda Lattice: Method of Factorisation
We now state, without proof, a method for solving the Toda lattice by the factorisation of matrices.
Theorem 2.5 (The Factorisation Theorem). To solve
$$\frac{d}{dt} X = [X, \pi_-(X)], \qquad X(0) = X_0, \tag{2.1.21}$$
factor $e^{X_0 t} = \Pi_-(e^{X_0 t}) \Pi_+(e^{X_0 t})$, if possible. Then, the solution is given by
$$X(t) = \Pi_-^{-1}(e^{X_0 t}) X_0 \Pi_-(e^{X_0 t}). \tag{2.1.22}$$
We provide an example in lieu of the proof, the latter of which is by direct computation. In fact, the main aspects of the proof are replicated in our later proof of Theorem 6.13.

Example 2.1. Let $X_0 = \begin{pmatrix} a & 1 \\ -a^2 & -a \end{pmatrix}$. Then, for $t \neq -\frac{1}{a}$, one has $e^{X_0 t} = I + X_0 t$ (since $X_0^2 = 0$), hence
$$e^{X_0 t} = \begin{pmatrix} 1 + at & t \\ -a^2 t & 1 - at \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ \frac{-a^2 t}{1 + at} & 1 \end{pmatrix} \begin{pmatrix} 1 + at & t \\ 0 & \frac{1}{1 + at} \end{pmatrix}. \tag{2.1.23}$$
The solution to the Toda lattice with initial condition $X(0) = X_0$ is then given by
$$X(t) = \begin{pmatrix} 1 & 0 \\ \frac{-a^2 t}{1 + at} & 1 \end{pmatrix}^{-1} \begin{pmatrix} a & 1 \\ -a^2 & -a \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \frac{-a^2 t}{1 + at} & 1 \end{pmatrix} = \begin{pmatrix} \frac{a}{1 + at} & 1 \\ \frac{-a^2}{(1 + at)^2} & \frac{-a}{1 + at} \end{pmatrix}. \tag{2.1.24}$$
2.1.5 Geometry of the Solutions: Embeddings into the Flag Manifold Phase Space
In Example 2.1, the solution blows up at precisely the value(s) at which eX0t fails to have an LU-decomposition. We now present the method of [FH91], [EFH91] and
[EFS93], in which the Toda flows are mapped to the flag manifold $G/B_+$, thus compactifying the flow and continuing the solution beyond its singularities. The key tool will be a theorem of Kostant [Ko78].
Definition 2.5. Let $\lambda = (\lambda_1, \dots, \lambda_n)$. Define the isospectral set of Hessenberg matrices with spectrum $\lambda$, denoted $(\varepsilon + \mathfrak{b}_-)_\lambda$ or $\mathcal{F}_\lambda$, by
$$(\varepsilon + \mathfrak{b}_-)_\lambda = \mathcal{F}_\lambda = \{X \in \varepsilon + \mathfrak{b}_- : \sigma(X) = \lambda\}, \tag{2.1.25}$$
where $\sigma(X)$ denotes the spectrum of $X$. We also define the subset of tridiagonal Hessenberg matrices with this spectrum:
$$\mathcal{M}_\lambda = \{X \in \mathcal{F}_\lambda : X \text{ is tridiagonal}\}. \tag{2.1.26}$$
Definition 2.6. Define two distinguished matrices in $\mathcal{F}_\lambda$:
$$\varepsilon_\lambda = \operatorname{diag}(\lambda) + \varepsilon = \begin{pmatrix} \lambda_1 & 1 & & \\ & \lambda_2 & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_n \end{pmatrix} \tag{2.1.27}$$
and the companion matrix
$$c_\lambda = \begin{pmatrix} 0 & 1 & & & \\ & 0 & \ddots & & \\ & & \ddots & 1 & \\ & & & 0 & 1 \\ -c_0 & -c_1 & \cdots & -c_{n-2} & -c_{n-1} \end{pmatrix}, \tag{2.1.28}$$
where $\prod_{i=1}^{n}(x - \lambda_i) = x^n + \sum_{i=0}^{n-1} c_i x^i$ is the characteristic polynomial for $\lambda$.
Remark 2.4. In $\mathfrak{sl}_n$, since the matrices are traceless, one has $\lambda_1 + \cdots + \lambda_n = 0$, so that the bottom-right entry of $c_\lambda$ is zero. We provide the more general definition above for both convenience and generalisability.
Theorem 2.6 ([Ko78]). For each $X \in \mathcal{F}_\lambda$, there exists a unique lower unipotent $L \in N_-$ such that
$$X = L c_\lambda L^{-1}. \tag{2.1.29}$$
The same statement holds (with a different L ∈ N−) when cλ is replaced by ελ.
Definition 2.7. The companion embedding is the map $\kappa_\lambda : \mathcal{F}_\lambda \to G/B_+$ defined as follows: for $X \in \mathcal{F}_\lambda$, if $X = L c_\lambda L^{-1}$, then
$$\kappa_\lambda(X) = L^{-1} \bmod B_+. \tag{2.1.30}$$
Proposition 2.7. [EFS93] Under the companion embedding, the Toda flow becomes linear, given by left multiplication by ecλt.
Proof. Suppose $L_0$ is Kostant's lower unipotent matrix for the initial condition $X_0$, i.e. $X_0 = L_0 c_\lambda L_0^{-1}$. Combining this with Equation 2.1.22, one finds that the Toda lattice solution is
$$X(t) = \Pi_-^{-1}(e^{X_0 t}) L_0 c_\lambda L_0^{-1} \Pi_-(e^{X_0 t}). \tag{2.1.31}$$
Therefore, by Kostant’s theorem, one has
$$\kappa_\lambda(X(t)) = L_0^{-1} \Pi_-(e^{X_0 t}) \bmod B_+. \tag{2.1.32}$$
We now compute $\Pi_-(e^{X_0 t})$ in terms of $c_\lambda$:
$$\Pi_-(e^{X_0 t}) = \Pi_-(e^{L_0 c_\lambda L_0^{-1} t}) = \Pi_-(L_0 e^{c_\lambda t} L_0^{-1}) = L_0 \Pi_-(e^{c_\lambda t} L_0^{-1}). \tag{2.1.33}$$
Thus,
$$\kappa_\lambda(X(t)) = \Pi_-(e^{c_\lambda t} L_0^{-1}) \bmod B_+ = e^{c_\lambda t} L_0^{-1} \bmod B_+. \tag{2.1.34}$$
Returning to Example 2.1, we compute the companion embedding of the Toda flow:

Example 2.2. For $X_0 = \begin{pmatrix} a & 1 \\ -a^2 & -a \end{pmatrix}$, one has $c_\lambda = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and
$$\begin{pmatrix} a & 1 \\ -a^2 & -a \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -a & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix}. \tag{2.1.35}$$
Then, the companion embedding of the Toda flow is given by
$$\kappa_\lambda(X(t)) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix} \bmod B_+ = \begin{pmatrix} 1 + at & t \\ a & 1 \end{pmatrix} \bmod B_+. \tag{2.1.36}$$
Remark 2.5. The flows for the other integrals of Toda, the flows for the higher trace powers, are similarly linearised by mapping to the flag manifold.
2.1.6 Symes’s Discrete-Time Matrix Dynamics and the Discrete-Time Toda Lattice
In 1980, Symes [Sy80] proposed a new approach for Moser’s result (Theorem 2.4), based on matrix factorisations. In this section, the matrix factorisation is the so- called LU decomposition that arises from Gaussian elimination.
We inductively define a two-step discrete evolution on Hessenberg matrices. If at (discrete) time n we have a matrix X(n), we obtain X(n + 1) as follows:
1. Perform Gaussian elimination to factor X(n) = L(n)R(n), with L(n) lower unipotent and R(n) upper triangular.
2. Permute the factors to define X(n + 1) = R(n)L(n).
Remark 2.6. By construction, one has
$$X(n+1) = R(n)L(n) = (L(n)^{-1} X(n)) L(n) = L(n)^{-1} X(n) L(n). \tag{2.1.37}$$
Thus, this discrete evolution is given by conjugating a matrix by its lower unipotent factor. Since the spectrum of a matrix is invariant under conjugation, it follows that the eigenvalues are constants of motion for this discrete evolution. Hence, if
X(n) ∈ (ε + b−)λ = Fλ, then so is X(n + 1).
Furthermore, if X(n) is tridiagonal, then one can show that L(n) is lower bi-diagonal with ones on its diagonal and R(n) is upper bi-diagonal with ones on its superdiag- onal. Therefore, the product X(n + 1) = R(n)L(n) is itself once again a tridiagonal
Hessenberg matrix. Thus, if X(n) ∈ Mλ, then so is X(n + 1).
Symes [Sy80] showed that this discrete evolution extends to a continuous evolution with Lax equation of the same form as the Toda lattice Lax equation (Equation
2.1.10), but with π−(X) replaced by π−(log X):
$$\frac{d}{dt} X = [X, \pi_-(\log X)]. \tag{2.1.38}$$
This is intimately connected to the work of Deift, Nanda and Tomei [DNT83], in which they describe a general framework associating to each real, injective function $G(\lambda)$ on the spectrum of a symmetric tridiagonal matrix $X$ a unique isospectral flow on the space of tridiagonal matrices, convergent to a diagonal matrix as $t \to \pm\infty$. When $G(\lambda) = \lambda$, this yields the classical Toda flow, and when $G(\lambda) = \log(\lambda)$, this produces the so-called QR flow, the symmetric analogue of Symes's factorisation evolution, with the factorisation given by the QR factorisation instead of the LU decomposition.
When working with Hessenberg matrices, Symes showed that his discrete LU evolu- tion extends to the continuous evolution on Hessenberg matrices with Lax equation 2.1.38.
To write the Symes discrete-time evolution out explicitly, let
$$L(t) = \begin{pmatrix} 1 & & & \\ V_1^t & 1 & & \\ & \ddots & \ddots & \\ & & V_{n-1}^t & 1 \end{pmatrix}, \qquad R(t) = \begin{pmatrix} I_1^t & 1 & & \\ & I_2^t & \ddots & \\ & & \ddots & 1 \\ & & & I_n^t \end{pmatrix}; \tag{2.1.39}$$
then Symes's discrete-time evolution produces what has come to be known as the finite discrete-time Toda lattice:
Definition 2.8. The finite discrete-time Toda lattice is the system
$$I_i^{t+1} = I_i^t + V_i^t - V_{i-1}^{t+1}, \qquad i = 1, \dots, n$$
$$V_i^{t+1} = \frac{I_{i+1}^t V_i^t}{I_i^{t+1}}, \qquad i = 1, \dots, n-1 \tag{2.1.40}$$
$$V_0^t = V_n^t = 0$$
which is expressible as
L(t + 1)R(t + 1) = R(t)L(t) (2.1.41)
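As a concrete sketch (our own illustration; the sample values of $I$ and $V$ are arbitrary positive numbers), the recurrence (2.1.40) can be implemented directly and checked against the refactorisation identity (2.1.41):

```python
# One time step of the finite discrete-time Toda lattice (2.1.40),
# checked against the matrix identity L(t+1) R(t+1) = R(t) L(t).

def dtoda_step(I, V):
    # I has length n, V has length n-1; V_0^{t+1} = V_n^t = 0
    n = len(I)
    I_new, V_new = [0.0] * n, [0.0] * (n - 1)
    v_prev = 0.0  # V_{i-1}^{t+1}, computed sequentially
    for i in range(n):
        v_here = V[i] if i < n - 1 else 0.0  # V_i^t, with V_n^t = 0
        I_new[i] = I[i] + v_here - v_prev
        if i < n - 1:
            v_prev = I[i + 1] * V[i] / I_new[i]
            V_new[i] = v_prev
    return I_new, V_new

def build_L_R(I, V):
    # L: unit lower bidiagonal with the V's on the subdiagonal;
    # R: upper bidiagonal with the I's on the diagonal, ones above
    n = len(I)
    L = [[1.0 if i == j else (V[j] if i == j + 1 else 0.0) for j in range(n)]
         for i in range(n)]
    R = [[I[i] if i == j else (1.0 if j == i + 1 else 0.0) for j in range(n)]
         for i in range(n)]
    return L, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I, V = [2.0, 3.0, 4.0], [0.5, 1.5]
I1, V1 = dtoda_step(I, V)
L0, R0 = build_L_R(I, V)
L1, R1 = build_L_R(I1, V1)
lhs, rhs = matmul(L1, R1), matmul(R0, L0)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

Positivity of the $I$'s keeps the divisions in the recurrence well-defined; this is the familiar qd/LR-type step in disguise.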
This, in turn, extends to the (infinite) discrete-time Toda lattice:
Definition 2.9. The discrete-time Toda lattice, originally due to Hirota [Hi77][HT95], is the system
$$I_i^{t+1} = I_i^t + V_i^t - V_{i-1}^{t+1}, \qquad V_i^{t+1} = \frac{I_{i+1}^t V_i^t}{I_i^{t+1}} \tag{2.1.42}$$
for $i, t \in \mathbb{Z}$. Alternatively, this can be represented as the bi-infinite matrix equation
L(t + 1)R(t + 1) = R(t)L(t) (2.1.43) where
$$L(t) = \sum_{i \in \mathbb{Z}} (E_{i,i} + V_i^t E_{i+1,i}), \qquad R(t) = \sum_{i \in \mathbb{Z}} (E_{i,i+1} + I_i^t E_{i,i}), \tag{2.1.44}$$
and $E_{i,j}$ are the usual standard basis matrices, i.e. $(E_{ij})_{pq} = \delta_{ip}\delta_{jq}$.
We conclude this coverage of the discrete-time Toda lattice with the discrete-time analogue of the Factorisation theorem (Theorem 2.5):
Lemma 2.8 ([Su18]). To solve the discrete-time Toda lattice with initial condition $X(0) = X_0$, factor $e^{t \log X_0} = X_0^t = \Pi_-(X_0^t)\Pi_+(X_0^t)$, if possible. Then, the solution is given by
$$X(t) = \Pi_-^{-1}(X_0^t) X_0 \Pi_-(X_0^t), \tag{2.1.45}$$
for all $t \in \mathbb{N} \cup \{0\}$.
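This factorisation solution can be illustrated numerically. The sketch below (our own check; the initial matrix is an arbitrary tridiagonal Hessenberg matrix whose powers admit LU decompositions) compares the formula (2.1.45) at $t = 3$ with three iterations of the Symes LU step $X \mapsto RL$:

```python
# Check of Lemma 2.8: Pi_-^{-1}(X0^t) X0 Pi_-(X0^t) agrees with t
# applications of the discrete LU evolution X -> RL.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lu(X):
    # Doolittle LU without pivoting: X = L R, L unit lower, R upper
    n = len(X)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):
            R[i][j] = X[i][j] - sum(L[i][k] * R[k][j] for k in range(i))
        for i in range(j + 1, n):
            L[i][j] = (X[i][j] - sum(L[i][k] * R[k][j] for k in range(j))) / R[j][j]
    return L, R

def unit_lower_inv(L):
    # invert a unit lower triangular matrix by forward substitution
    n = len(L)
    Y = [[0.0] * n for _ in range(n)]
    for col in range(n):
        for i in range(n):
            s = sum(L[i][k] * Y[k][col] for k in range(i))
            Y[i][col] = float(i == col) - s
    return Y

X0 = [[2.0, 1.0, 0.0],
      [1.0, 3.0, 1.0],
      [0.0, 1.0, 4.0]]

# left side: three Symes LU steps X -> RL
X = X0
for _ in range(3):
    L, R = lu(X)
    X = matmul(R, L)

# right side: factor X0^3 and conjugate X0 by its lower unipotent factor
P = matmul(X0, matmul(X0, X0))
Lp, _ = lu(P)
Y = matmul(unit_lower_inv(Lp), matmul(X0, Lp))

assert all(abs(X[i][j] - Y[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

The agreement reflects the standard LR-algorithm identity $X_0^t = (L_0 L_1 \cdots L_{t-1})(R_{t-1} \cdots R_1 R_0)$, so that $\Pi_-(X_0^t)$ is exactly the accumulated lower unipotent factor.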
2.2 Box-Ball Systems
A cellular automaton is a special type of discrete dynamical system with both discrete time steps and a discrete (in fact finite) number of states. Of particular interest is the box-ball system (BBS), which was introduced in 1990 by Takahashi and Satsuma [TS90].
The process of ultradiscretisation can be used to transform a discrete dynamical system with continuous variables into one with variables taking on a finite number of values. In [To04], it is shown that the box-ball system arises as a result of ultradiscretisation of both the discrete KP equation and the discrete-time Toda lattice. In this section, we define the box-ball system and present a coordinatisation of box-ball systems and the equations these coordinates solve. We finish by following the process of ultradiscretisation of discrete-time Toda to yield the box-ball system equations. This section follows [To04] closely.
2.2.1 The Box-Ball Evolution
The (basic) box-ball system consists of a one-dimensional infinite array of boxes with a finite number of the boxes filled with balls, and no more than one ball in each box (single capacity). A simple evolution rule is provided for a box-ball system state:
(1) Take the left-most ball that has not been moved and move it to the left-most empty box to its right.
(2) Repeat (1) until all balls have been moved precisely once.
Since the algorithm requires one to know which balls have been moved, we can, without technically changing the algorithm, introduce a colour-coding based on whether balls have moved or not. Balls will be blue until they have moved, after which they will become red. When all balls are red, the colours should be reset to blue, ready for the next time step. Or, equivalently, a 0-th step of colouring all balls blue can be prescribed. We will use the latter for a minor benefit in brevity. Below is an example of the evolution with this colour-coding, with each ball move separated into a sub-step:
Figure 2.1. A box-ball system time evolution (one time step).
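The evolution rule above is straightforward to implement. The following sketch (our own illustration) evolves a finite 0/1 state (1 = ball, 0 = empty box), assuming enough empty boxes are padded on the right; the `moved` flags play the role of the red/blue colour-coding:

```python
# One time step of the basic box-ball evolution: repeatedly move the
# leftmost not-yet-moved ball to the leftmost empty box to its right.

def bbs_step(state):
    state = list(state)
    moved = [False] * len(state)  # False = blue (unmoved), True = red (moved)
    while True:
        src = next((i for i, s in enumerate(state) if s == 1 and not moved[i]), None)
        if src is None:
            break  # every ball has moved exactly once
        dst = next(i for i in range(src + 1, len(state)) if state[i] == 0)
        state[src], state[dst] = 0, 1
        moved[dst] = True
    return state

# a chain of 3 adjacent balls travels with velocity 3
print(bbs_step([1, 1, 1, 0, 0, 0, 0, 0, 0]))  # -> [0, 0, 0, 1, 1, 1, 0, 0, 0]
# two isolated balls each travel with velocity 1
print(bbs_step([1, 0, 1, 0, 0, 0]))           # -> [0, 1, 0, 1, 0, 0]
```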
Some authors use ones and zeroes to represent filled boxes and empty boxes, respectively. The numbered representation lends itself to the generalisation of the basic box-ball system to the advanced box-ball system, which we will define and utilise in Appendix A.
2.2.2 Soliton Behaviour and the Sorting Property
The box-ball system is sometimes referred to as the soliton cellular automaton. A chain of n consecutive balls travels with velocity n, so a chain (or soliton) of n particles will have greater velocity than a soliton of length m, if m < n. The solitons collide, and come out of the collision ordered with the longer chain ahead. There may be a mixing phase, but the solitons come out ordered, with a phase shift. Here, by "phase shift", we mean the difference between where the chain ends up after the collision and where the chain would have been if it were not for the collision. For now, we take this for granted, and provide a more detailed analysis shortly in Section 2.2.4, along with a conjectured formula.
In the following figure, we demonstrate how the solitons become ordered after sufficiently many time evolutions. Once sorted, they travel with their respective velocities, never to collide again.
Figure 2.2. The sorting property of the box-ball system.
2.2.3 Invariants of the Box-Ball System
In [TTS96], a method is described for producing a pair of Young tableaux from a box-ball state via the Dyck language, stackable permutations and the RSK correspondence. Not only are the shapes of the two Young tableaux the same, but this shape is also conserved under the box-ball dynamics. This invariant shape can be described in terms of balls and boxes, but it is slightly more convenient to represent the box-ball system as a sequence of 0's (for empty boxes) and 1's (for boxes with balls), which is made explicit in [To04]. The procedure goes as follows:
1. Let p1 be the number of 10's in the sequence.

2. Eliminate all of these 10's, and let p2 be the number of 10's in the resulting sequence.

3. Repeat this process until no 10's remain.

The sequence (p1, p2, . . .) is weakly decreasing, hence a partition of the number of balls. It can be represented by a Young diagram by taking the jth column to have pj boxes.
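This elimination procedure is simple to implement; the sketch below (our own illustration) computes the sequence (p1, p2, . . .) from a 0/1 list:

```python
# Compute the invariant shape (p1, p2, ...) of a box-ball state by
# repeatedly counting and eliminating adjacent "10" pairs.

def bbs_shape(seq):
    seq = list(seq)
    shape = []
    while True:
        # positions of adjacent (1, 0) pairs; such pairs cannot overlap
        pairs = [i for i in range(len(seq) - 1) if seq[i] == 1 and seq[i + 1] == 0]
        if not pairs:
            break
        shape.append(len(pairs))
        drop = set(pairs) | {i + 1 for i in pairs}
        seq = [s for i, s in enumerate(seq) if i not in drop]
    return shape

# blocks of lengths 3 and 1: the column lengths come out as (2, 1, 1),
# whose conjugate (the row lengths) recovers the soliton lengths (3, 1)
print(bbs_shape([1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0]))  # -> [2, 1, 1]
```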
For example, the associated shape to the sequence of box-ball states in Figure 2.2 is
Figure 2.3. The invariant shape of the box-ball system(s) in Figure 2.2
Remark 2.7. The row lengths then act as a signature for the system: if the columns of the tableau are the pi's, then the row lengths give the lengths of the solitons. One can see this heuristically by noting that as time goes to ±∞, the chains will be sufficiently separated by empty boxes so that each chain provides precisely one "10" for each particle comprising it.

2.2.4 The Box-Ball Phase Shift
We have seen how, as t → +∞, the blocks sort themselves by increasing length. The same holds in reverse time: as t → −∞, the blocks are ordered by decreasing length. The asymptotic sequence of lengths is revealed at any finite time using the invariant shape construction. The following example demonstrates why this is needed:
Figure 2.4. A phase shift interaction between two colliding chains.
In the above example, we can discern the asymptotic ordering in the first, second, fourth and fifth rows, simply by counting the numbers of balls in each block of adjacent balls. The middle row (the third) could be misleading, since it reveals a (2, 2) structure for the blocks. If two blocks are spaced far enough apart, then no such obfuscation occurs.

Barring this intricacy (i.e. when there is enough space between adjacent blocks), one can take two blocks, evolve sufficiently many times according to the box-ball evolution, and compare the position of the blocks to where they would have been had it not been for the collision.
In the figure below, we replicate Figure 2.4. However, we use green balls to keep track of where the block of three balls would have been without the collision, and magenta balls to keep track of where the block of one ball would have been.
When the collision has concluded, we see that the three-block is two positions ahead of where it would have been, and the one-block is two positions behind where it would have been. Therefore, we say that the three-block experiences a +2 phase shift, and the one-block experiences a −2 phase shift.
We present here two conjectures on the phase shift formulæ for the box-ball system.
Conjecture 2.9. Take a box-ball system consisting of just two blocks of adjacent balls, subject to the following:
1. The left-most block has k balls.
2. The right-most block has l balls.
3. k > l
4. The two blocks are separated by at least l empty boxes.
After sufficiently many time steps of the box-ball evolution, after the blocks have collided and ordered themselves, the k-block will have experienced a phase shift of +2 min(k, l) = 2l, and the l-block will have experienced a phase shift of −2 min(k, l) = −2l.
This conjecture extends to the following conjectured formula for the phase shifts for box-ball systems with any number of blocks.
Conjecture 2.10. Suppose one has a box-ball system with a total of n blocks, with Q_k-many balls in the k-th block, and with the blocks separated sufficiently far apart that the asymptotic soliton structure can be identified by simply ordering (Q_k)_{k=1}^n. After sufficiently many time-steps have passed (i.e. after the blocks have finished all collisions), the k-th block will have experienced a total phase shift of

    ∑_{j>k} 2 min(Q_j, Q_k) − ∑_{j<k} 2 min(Q_j, Q_k).    (2.2.1)

The first sum is the total (positive) phase shift experienced by the Q_k-block as a result of colliding with slower chains in front, and the second sum is the total (negative) phase shift experienced by the Q_k-block as a result of colliding with faster chains initially behind it.
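As a sanity check on Conjecture 2.9, one can simulate the box-ball dynamics directly on the 0/1-array and compare block positions with free (non-interacting) motion. The following Python sketch (the function name is ours, not from the text) runs the two-block configuration of Figure 2.4: a 3-block followed, after one empty box, by a 1-block.

```python
def bbs_step(state):
    """One box-ball time step: each ball moves exactly once, leftmost first,
    to the nearest empty box on its right. The state must end with enough
    empty boxes for every ball to land."""
    state = list(state)
    moved = [False] * len(state)
    for i in range(len(state)):
        if state[i] == 1 and not moved[i]:
            j = i + 1
            while state[j] == 1:   # skip over occupied boxes
                j += 1
            state[i], state[j] = 0, 1
            moved[j] = True        # a ball moves only once per time step
    return state

# A 3-block at positions 0-2 and a 1-block at position 4.
state = [1, 1, 1, 0, 1] + [0] * 20
for _ in range(2):
    state = bbs_step(state)
balls = [i for i, b in enumerate(state) if b == 1]
# Free motion would put the 1-block at 4 + 2 = 6 (it sits at 4: shift -2)
# and the 3-block's front at 2 + 3*2 = 8 (it sits at 10: shift +2),
# matching +-2 min(k, l) = +-2 for (k, l) = (3, 1).
print(balls)  # [4, 8, 9, 10]
```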
We draw the reader's attention to the similarity between these conjectures and the phase shift formula of Section 2.1.3 for the Toda lattice: phase shifts propagate collision-by-collision. Whilst we defer the proof to future work, the next two sections help to shed some light on some of the analogies between the observed behaviour of the Toda lattice and the observed and conjectured behaviour of the box-ball system.
2.2.5 Coordinates on the Box-Ball System
Suppose at time t, there are N sets of consecutive filled boxes. Let Q_1^t, Q_2^t, ..., Q_N^t denote the lengths of these sets of filled boxes, taken from left to right. Let W_1^t, W_2^t, ..., W_{N−1}^t denote the lengths of the sets of empty boxes between the N sets of filled boxes, again taken from left to right. Lastly, let W_0^t and W_N^t be formally defined to be ∞, reflecting the fact that the empty boxes continue infinitely in both directions.
The following theorem gives evolution equations for these coordinates. They can be found, for example, in [To04].
Theorem 2.11. ([To04]) The coordinates (W_0^t, Q_1^t, W_1^t, ..., Q_N^t, W_N^t) evolve under the box-ball dynamics according to

    W_0^{t+1} = W_N^{t+1} = ∞,    (2.2.2)

    W_n^{t+1} = Q_{n+1}^t + W_n^t − Q_n^{t+1},    n = 1, ..., N − 1,    (2.2.3)

    Q_n^{t+1} = min( W_n^t, ∑_{j=1}^n Q_j^t − ∑_{j=1}^{n−1} Q_j^{t+1} ),    n = 1, ..., N.    (2.2.4)
Example 2.3. Take the initial state in Figure 2.1:
Figure 2.5. The box-ball coordinates on a box-ball system and its time evolution.
Under the time evolution, the coordinates evolve as
    (∞, 3, 3, 1, 2, 2, 1, 1, ∞) ↦ (∞, 3, 1, 1, 3, 1, 1, 2, ∞).    (2.2.5)
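The coordinate evolution of Theorem 2.11 is straightforward to implement; the following Python sketch (the function name is ours) reproduces the transition (2.2.5).

```python
import math

def bbs_coordinate_step(Q, W):
    """One step of the box-ball coordinate evolution (Theorem 2.11).
    Q: block lengths Q_1..Q_N; W: gap lengths W_1..W_{N-1}
    (W_0 = W_N = infinity is implicit)."""
    N = len(Q)
    W_full = W + [math.inf]  # supply W_N = infinity for the last Q-update
    Q_new = []
    for n in range(N):
        # Q_n^{t+1} = min(W_n^t, sum_{j<=n} Q_j^t - sum_{j<n} Q_j^{t+1})
        Q_new.append(min(W_full[n], sum(Q[:n + 1]) - sum(Q_new)))
    # W_n^{t+1} = Q_{n+1}^t + W_n^t - Q_n^{t+1}
    W_new = [Q[n + 1] + W[n] - Q_new[n] for n in range(N - 1)]
    return Q_new, W_new

print(bbs_coordinate_step([3, 1, 2, 1], [3, 2, 1]))  # ([3, 1, 1, 2], [1, 3, 1])
```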
To distinguish between the box-ball evolution and the induced coordinate evolution, we will make the following definitions.
Definition 2.10. Let BBS be the set of states of the box-ball system (with at least one ball) and B_n = {∞} × N^{2n−1} × {∞} the set of coordinates of BBS for a state with n solitons. We define ϱ : BBS → BBS to be the box-ball evolution and χ_n : B_n → B_n to be the evolution on coordinates. We also define a map C : BBS → B := ∪_{n∈N} B_n taking a box-ball system state to its coordinates.
With this definition, we have an immediate corollary of Theorem 2.11.
Corollary 2.12. The following diagram commutes:

               ϱ
    BBS ----------→ BBS
     |               |
    C|               |C
     ↓               ↓
     B  ----------→  B
               χ

where χ : B → B is the map naturally induced by {χ_n : B_n → B_n}_{n∈N}.

2.2.6 Ultradiscretisation of Discrete-Time Toda
Continuing to follow [To04], we perform the process of ultradiscretisation on the discrete-time Toda lattice to see that this results in the box-ball system. Later, in Chapter 4, we present the full procedure for ultradiscretising geometric RSK. For this reason, we omit some of the detailed calculations in the ultradiscretisation of the discrete-time Toda lattice.
Recall the discrete-time Toda lattice:
    V_0^t = V_N^t = 0,
    I_n^{t+1} = I_n^t + V_n^t − V_{n−1}^{t+1},    n = 1, ..., N,    (2.2.6)
    V_n^{t+1} I_n^{t+1} = I_{n+1}^t V_n^t,    n = 1, ..., N − 1.

It can be shown that the discrete-time Toda lattice is equivalent to the following system:

    V_0^t = V_N^t = 0,
    I_n^{t+1} = V_n^t + (I_n^t ··· I_1^t) / (I_{n−1}^{t+1} ··· I_1^{t+1}),    n = 1, ..., N,    (2.2.7)
    V_n^{t+1} I_n^{t+1} = I_{n+1}^t V_n^t,    n = 1, ..., N − 1.
Making the change of variables

    I_n^t = e^{−Q_n^t(ε)/ε},    V_n^t = e^{−W_n^t(ε)/ε},    (2.2.8)

one obtains

    W_n^{t+1}(ε) = Q_{n+1}^t(ε) + W_n^t(ε) − Q_n^{t+1}(ε),    n = 1, ..., N − 1,    (2.2.9)

    Q_n^{t+1}(ε) = −ε log( e^{−W_n^t(ε)/ε} + e^{−( ∑_{j=1}^n Q_j^t(ε) − ∑_{j=1}^{n−1} Q_j^{t+1}(ε) )/ε} ),    n = 1, ..., N,    (2.2.10)

    W_0^t(ε) = W_N^t(ε) = ∞.    (2.2.11)
Finally, assuming the limits

    W_n^t := lim_{ε→0+} W_n^t(ε),    W_n^{t+1} := lim_{ε→0+} W_n^{t+1}(ε),
    Q_n^t := lim_{ε→0+} Q_n^t(ε),    Q_n^{t+1} := lim_{ε→0+} Q_n^{t+1}(ε),    (2.2.12)

exist, one obtains

    W_n^{t+1} = Q_{n+1}^t + W_n^t − Q_n^{t+1},    n = 1, ..., N − 1,    (2.2.13)

    Q_n^{t+1} = min( W_n^t, ∑_{j=1}^n Q_j^t − ∑_{j=1}^{n−1} Q_j^{t+1} ),    n = 1, ..., N,    (2.2.14)

    W_0^t = W_N^t = ∞,    (2.2.15)
which are exactly the box-ball system equations.
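The only analytic step in this limit is the identity −ε log(e^{−A/ε} + e^{−B/ε}) → min(A, B) as ε → 0+, which converts (2.2.10) into (2.2.14). A quick numerical illustration (the function name is ours):

```python
import math

def ud_min(A, B, eps):
    """-eps * log(e^{-A/eps} + e^{-B/eps}); tends to min(A, B) as eps -> 0+."""
    return -eps * math.log(math.exp(-A / eps) + math.exp(-B / eps))

for eps in (1.0, 0.1, 0.01):
    print(ud_min(2.0, 3.0, eps))
# the printed values approach min(2, 3) = 2 as eps decreases
```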
2.3 The Robinson-Schensted-Knuth Correspondence
In this section, we provide the background and some basic motivation behind the Robinson-Schensted-Knuth correspondence and Schensted insertion. We begin with a review of some of the combinatorial objects of interest, the RSK equations describ- ing Schensted word insertion, and Kirillov’s geometric lifting of the (tropical) RSK equations to the geometric RSK (gRSK) equations. We will be following the papers [AD99] by Aldous and Diaconis and [NY04] by Noumi and Yamada, and the book [Ai07] by Aigner.
The coverage of this background is fairly detailed, and proofs and examples have been provided to aid in following the rather algorithmic constructions presented here. However, what is most pertinent to this dissertation are the equations for Schensted insertion (which are described in Corollary 2.20 or their equivalent presentation in Section 2.3.7) and the material thereafter (in Sections 2.3.7 to 2.3.9).
2.3.1 Young Tableaux
Definition 2.11. Let n ∈ N and let λ = (λ_1, λ_2, ..., λ_k) ⊢ n be a partition of n, i.e. λ_i ∈ N for each i, λ_1 ≥ λ_2 ≥ ··· ≥ λ_k, and λ_1 + ··· + λ_k = n. Associated to λ is the Young diagram (or Ferrers diagram) of shape λ, which is composed of λ_1 boxes in the first row, λ_2 boxes in the second row, ..., and λ_k boxes in the k-th row. The boxes are of equal size and aligned in a grid, justified to the left.
Example 2.4. The partition λ = (5, 3, 3, 2, 1), which partitions n = 14, has a Young diagram with rows of 5, 3, 3, 2 and 1 boxes.
Definition 2.12. A Young tableau is a Young diagram with its boxes labelled with n distinct numbers, where n is the number of boxes in the diagram, such that the numbers increase along the rows and down the columns. A standard tableau is a Young tableau in which the n distinct numbers are the numbers 1-through-n.
Example 2.5. For the shape given above, here are three standard tableaux (written row by row):

    1 2 3 4 5      1 6 10 13 14      1 3 6 10 14
    6 7 8          2 7 11            2 5 9
    9 10 11        3 8 12            4 8 13
    12 13          4 9               7 12
    14             5                 11
Definition 2.13. To each box (or cell) of a Young diagram, we can assign a quantity known as the hook length. The hook length of a box is the number of boxes (in its row) to the right of the box, plus the number of boxes (in its column) below the box, plus one (counting the box itself).
Example 2.6. To illustrate this, the diagram below has each of its cells filled in with its hook number:

    9 7 5 2 1
    6 4 2
    5 3 1
    3 1
    1
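Hook lengths are easy to compute programmatically; the following Python sketch (the function name is ours) reproduces the diagram above for λ = (5, 3, 3, 2, 1).

```python
def hooks(shape):
    """Hook lengths of a Young diagram: (arm) + (leg) + 1 for each cell,
    where shape is the weakly decreasing list of row lengths."""
    # height of column j = number of rows reaching past column j
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [[(shape[i] - j - 1) + (cols[j] - i - 1) + 1 for j in range(shape[i])]
            for i in range(len(shape))]

print(hooks([5, 3, 3, 2, 1]))
# [[9, 7, 5, 2, 1], [6, 4, 2], [5, 3, 1], [3, 1], [1]]
```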
Before moving on, we make the following definition that generalises that of standard tableaux:
Definition 2.14. A semistandard tableau (SST) is a Young diagram filled with pos- itive integers that are weakly increasing along rows and strictly increasing down the columns. For ease of writing, let SST denote the set of semistandard tableaux.
Example 2.7. Below are three examples of semistandard tableaux (written row by row):

    1 3 3 5 6 6 8       1 1 1 4       4 4 6 7
                        2 2 2 5       7 7
                        3 3 3 9
2.3.2 The Length of the Longest Increasing Subsequence of a Permutation and Patience Sorting
Definition 2.15. The length of the longest increasing subsequence of a permutation σ ∈ S_n, denoted ℓ(σ), is defined to be the length of the longest increasing subsequence of the sequence (σ(1), σ(2), ..., σ(n)), i.e. ℓ(σ) is the length of the longest increasing subsequence of the bottom row of σ when written in the 2-row permutation notation:

    ( 1    2    ···  n
      σ(1) σ(2) ···  σ(n) ).
For ease of notation, we may write (σ(1), . . . , σ(n)) to represent σ ∈ Sn.
Example 2.8. The permutation σ = (1, 4, 7, 2, 3, 6, 8, 5, 9) has ℓ(σ) = 6, corresponding to the subsequence (σ(1), σ(4), σ(5), σ(6), σ(7), σ(9)) = (1, 2, 3, 6, 8, 9).
Definition 2.16. Patience Sorting is an algorithm by which a sequence of distinct numbers, say i_1, ..., i_n, can be sorted into 'piles':

(1) Set k = 1.

(2) Scan through the top number of each pile. If there are no piles, start a pile with i_k. Otherwise, either i_k is larger than the top number of each pile or it is not. In the case of the former, start a new pile to the right of all piles, and place i_k in this new pile. If i_k is not larger than the top number of each pile, place i_k on the top of the left-most pile whose top number is larger than i_k.

(3) If k < n, replace k ↦ k + 1 and return to (2). Otherwise, the algorithm ends.
Example 2.9. Return to the permutation σ = (1, 4, 7, 2, 3, 6, 8, 5, 9). Patience sorting yields the following piles (tops uppermost):

        2  3  5
    1   4  7  6  8  9
It is no coincidence that the number of piles is equal to the length of the longest increasing subsequence. Clearly, the number of piles is at least ℓ(σ), since, if σ(j) > σ(i) for some j > i, then σ(j) must be put into a pile at least one to the right of σ(i)'s pile. To see that ℓ(σ) is achieved, we can construct a longest increasing subsequence of the permutation by taking one number in each pile as the piles are being formed: whenever a number is placed in a pile other than the first pile, draw an arrow from that number to the top number of the pile immediately to the left of it. This results in a path from the last pile to the first, which picks out a longest increasing subsequence.
Example 2.10. Taking Example 2.9 and including the arrows, we obtain the arrows 4 → 1, 7 → 4, 2 → 1, 3 → 2, 6 → 3, 8 → 6, 5 → 3 and 9 → 8. Following the arrows back from the top of the last pile gives the subsequence (1, 2, 3, 6, 8, 9). An arrow is placed at each step of the patience sorting algorithm: the order in which the arrows were placed is given by taking the permutation (1, 4, 7, 2, 3, 6, 8, 5, 9) without the first entry and taking the sequence of arrows with these numbers as their tails.
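The pile construction and the arrow back-pointers can be sketched together in Python (the function name is ours); on the permutation of Example 2.9 it recovers both the six piles and the longest increasing subsequence of Example 2.10.

```python
import bisect

def patience_sort(seq):
    """Patience sorting of a sequence of distinct numbers. Returns the piles
    (listed bottom to top) and one longest increasing subsequence, recovered
    by following the arrows back from the top of the last pile."""
    piles = []                  # piles[k] is a list, bottom of pile first
    ptr = {}                    # ptr[x] = top of the pile to the left when x was placed
    for x in seq:
        tops = [p[-1] for p in piles]
        k = bisect.bisect_left(tops, x)   # left-most pile whose top exceeds x
        if k == len(piles):
            piles.append([x])             # x exceeds every top: new pile on the right
        else:
            piles[k].append(x)
        if k > 0:
            ptr[x] = piles[k - 1][-1]     # the arrow of Example 2.10
    lis, x = [], piles[-1][-1]            # walk the arrows back to pile 1
    while True:
        lis.append(x)
        if x not in ptr:
            break
        x = ptr[x]
    return piles, lis[::-1]

piles, lis = patience_sort([1, 4, 7, 2, 3, 6, 8, 5, 9])
print(len(piles), lis)   # 6 [1, 2, 3, 6, 8, 9]
```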
2.3.3 Schensted Insertion
In this section, we describe a procedure called Schensted insertion, which takes a permutation and incrementally grows the empty tableau into a standard tableau, passing through intermediate semistandard tableaux along the way. This procedure is essentially a more aesthetically pleasing version of patience sorting. In patience sorting, once a number is placed atop a pile, all numbers below it become irrelevant for future steps. Schensted insertion creates a first row out of the top numbers from patience sorting, and bumped numbers are then essentially patience sorted on the piles that result from stripping away the top numbers. We will now make this precise.
Suppose we have a (standard) Young tableau, T , and we want to insert a number a into T (where a is not a number that is already contained in T ). To do this, we perform a process called ‘bumping’, prescribing how a number is inserted into a row.
(1) To insert a number, a, into an empty row, we simply create a box in the row and place the number into it. If the row is not empty, move onto Step (2).
(2) If the row is not empty, search the row for the smallest number, m, which is larger than the number to be inserted, if there is such a number. Replace this number by a, and ‘bump’ (i.e. remove) m from the row.
(3) If no such number exists, create a new box at the end of the row and fill it with a.
The process of inserting a into a row of T either results in an extension of the row (with nothing being replaced) or a replacement, which ‘bumps’ a number out of the row into which the insertion was performed. 59
To generalise this row insertion to insertion into a Young tableau, we perform the following:
(1) Insert the number into the first row.
(2) If no numbers are bumped, then the insertion is complete.
(3) If a number is bumped, insert that number into the next row.
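The bumping cascade above admits a compact implementation; here is a Python sketch (the function name is ours), using binary search to find the entry to bump. It reproduces the tableau of Example 2.11 below.

```python
import bisect

def schensted_insert(tableau, a):
    """Schensted insertion of a number a into a tableau with distinct entries,
    represented as a list of rows; bumped numbers cascade down the rows."""
    tableau = [row[:] for row in tableau]    # keep the input tableau intact
    for row in tableau:
        k = bisect.bisect_left(row, a)       # left-most entry larger than a
        if k == len(row):
            row.append(a)                    # a exceeds everything: extend the row
            return tableau
        row[k], a = a, row[k]                # bump: a replaces row[k], which moves down
    tableau.append([a])                      # the last bumped number starts a new row
    return tableau

T = []
for x in (4, 3, 1, 2):
    T = schensted_insert(T, x)
print(T)   # [[1, 2], [3], [4]]
```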
Notation 2.17. The result of inserting a into T is sometimes denoted T ← a.
This procedure clearly generalises to 'words'. If one has a sequence of (distinct) numbers (a_1, a_2, ..., a_k) to insert into a tableau, T, not already containing the numbers, simply insert the numbers recursively: ((((T ← a_1) ← a_2) ··· ) ← a_k). We can also build Young tableaux from scratch by starting with the empty tableau, and a standard tableau can be built by inserting the numbers 1-through-n in some particular order. Naïvely, one might think it is sufficient to just build a standard tableau from the permutation (π(1), π(2), ..., π(n)). But, what we will see is that two different permutations may give rise to the same tableau. This next example serves to demonstrate the insertion procedure, as well as demonstrate why we need more than just one tableau.
Example 2.11. Take two permutations π = (4, 3, 1, 2) and σ = (1, 4, 3, 2). For each, we produce the sequence of (semi)standard tableaux (written row by row):

    π : ∅ ,  4 ,  3 ,  1 ,  1 2
                  4    3    3
                       4    4

    σ : ∅ ,  1 ,  1 4 ,  1 3 ,  1 2
                         4      3
                                4
We see that the resulting tableaux are the same, but the transition through the shapes was different for the two permutations. Therefore, just knowing the tableau that one obtains from Schensted insertion on a permutation is not enough to recover the permutation uniquely. However, if one knows the shape transition and resulting tableau, then one can uniquely pin down the permutation. This is essentially the statement of the Robinson-Schensted correspondence, which we will now present and then prove.
2.3.4 The Robinson-Schensted Correspondence
Definition 2.18. For each λ ⊢ n, let D_λ = {T : T is a standard tableau of shape λ} and d_λ = |D_λ|.
Theorem 2.13. (The Robinson-Schensted Correspondence) There exists a one-to-one correspondence between elements of S_n and ∪_{λ⊢n} D_λ × D_λ, i.e. the elements of S_n are in bijection with pairs of standard tableaux of the same shape λ ⊢ n.
To prove Theorem 2.13, we need to define a procedure that extends Schensted inser- tion (of a word into a tableau) to a procedure that ultimately produces the same final tableau, but accompanied by another tableau that can be used to backtrack to the original permutation. That is, we will describe an algorithm which assigns a pair of standard tableaux (P,Q) to a permutation, and proceed to show that this algorithm has an inverse algorithm. The tableau P will be called the insertion tableau and Q will be called the recording tableau.
First, we define the map S_n → ∪_{λ⊢n} D_λ × D_λ.

Fix a permutation σ ∈ S_n. As in the previous section, from (σ(1), σ(2), ..., σ(n)) we build (n + 1) semistandard tableaux, say P_0, P_1, ..., P_n, where P_0 = ∅ and P_i = P_{i−1} ← σ(i) for all 1 ≤ i ≤ n. Then P_σ := P_n is the (standard) Young tableau associated to σ, as in the previous section.
Viewing σ as a 2-row array:

    ( 1    2    ···  n
      σ(1) σ(2) ···  σ(n) ),

we can use the top row to build another standard tableau, but this will not be built via the insertion algorithm. Instead, set Q_0 = ∅, and define Q_i to be the result of adding a box to Q_{i−1} so that Q_i and P_i have the same shape, and then placing i in this new box. At the final stage, we have Q_n, which we will denote by Q_σ. The map S_n → ∪_{λ⊢n} D_λ × D_λ is then defined by

    σ ↦ (P_σ, Q_σ).
This map is invertible. To show this, we construct its inverse ∪_{λ⊢n} D_λ × D_λ → S_n.
Given a pair of standard tableaux (P, Q) of the same shape λ ⊢ n, we can obtain a permutation σ_{(P,Q)} ∈ S_n. This is achieved via the following algorithm:
(1) Identify the box in Q with the largest number, m. Delete this box from Q, and delete the corresponding box (positionally-speaking, that is) in P , making a note of the number, e, that was in this box. If Q was empty, then we are done.
(2) If e was in the first row, set σ_{(P,Q)}(m) = e, and go back to Step (1). Otherwise, go onto Step (3).

(3) Search in the row above e's row for the largest number, f, that is smaller than e. Replace f by e, and repeat Step (2) with f taking the role of e.
We will omit the proof that these maps are inverse to each other in favour of an example, which should be more enlightening in demonstrating the two algorithms.
Example 2.12. Start with (tableaux written row by row)

    P = 1 2 6 7      Q = 1 4 6 7
        3 5              2 8
        4                3
        8                5

The inverse algorithm proceeds as follows.

(1) The largest entry of Q is 8, and the corresponding box of P contains 5; 5 bumps 2 out of the first row, so σ_{(P,Q)}(8) = 2, leaving

    P = 1 5 6 7      Q = 1 4 6 7
        3                2
        4                3
        8                5

(2) σ_{(P,Q)}(7) = 7 and (3) σ_{(P,Q)}(6) = 6, deleting the last two boxes of the first rows, leaving

    P = 1 5      Q = 1 4
        3            2
        4            3
        8            5

(4) The largest entry of Q is now 5, and the corresponding box of P contains 8; 8 bumps 4, 4 bumps 3, and 3 bumps 1 out of the first row, so σ_{(P,Q)}(5) = 1, leaving

    P = 3 5      Q = 1 4
        4            2
        8            3

(5) σ_{(P,Q)}(4) = 5, (6) σ_{(P,Q)}(3) = 3, (7) σ_{(P,Q)}(2) = 4 and (8) σ_{(P,Q)}(1) = 8 follow in the same way. Thus, the pair (P, Q) maps to

    ( 1 2 3 4 5 6 7 8
      8 4 3 5 1 6 7 2 ).
Now, to go in the other direction, start with

    σ = ( 1 2 3 4 5 6 7 8
          8 4 3 5 1 6 7 2 ),

and set (P_0, Q_0) = (∅, ∅). Then, we obtain the following sequence of pairs of Young tableaux, recalling that the sequence of Q tableaux is formed by mimicking the growing shape of the P-tableaux, and inserting successive numbers from the top row of σ into the newly formed boxes (each pair written row by row):

    (P_1, Q_1) = ( 8 , 1 )

    (P_2, Q_2) = ( 4 , 1 )
                   8   2

    (P_3, Q_3) = ( 3 , 1 )
                   4   2
                   8   3

    (P_4, Q_4) = ( 3 5 , 1 4 )
                   4     2
                   8     3

    (P_5, Q_5) = ( 1 5 , 1 4 )
                   3     2
                   4     3
                   8     5

    (P_6, Q_6) = ( 1 5 6 , 1 4 6 )
                   3       2
                   4       3
                   8       5

    (P_7, Q_7) = ( 1 5 6 7 , 1 4 6 7 )
                   3         2
                   4         3
                   8         5

    (P_8, Q_8) = ( 1 2 6 7 , 1 4 6 7 )
                   3 5       2 8
                   4         3
                   8         5

which is the original pair of standard tableaux.
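The forward map σ ↦ (P_σ, Q_σ) can be sketched in a few lines of Python (the function name is ours); it reproduces the pair of Example 2.12.

```python
import bisect

def rs_pair(perm):
    """Forward Robinson-Schensted map: permutation -> (P, Q), each a list of rows.
    P grows by Schensted insertion; Q records, for each step, the row that grew."""
    P, Q = [], []
    for step, a in enumerate(perm, start=1):
        r, placed = 0, False
        while r < len(P):
            row = P[r]
            k = bisect.bisect_left(row, a)   # left-most entry larger than a
            if k == len(row):
                row.append(a)                # row extended: the cascade stops here
                placed = True
                break
            row[k], a = a, row[k]            # bump a's replacement down a row
            r += 1
        if not placed:
            P.append([a])                    # new row at the bottom (index r)
        if r == len(Q):
            Q.append([step])                 # mirror the new box in Q with the step number
        else:
            Q[r].append(step)
    return P, Q

P, Q = rs_pair([8, 4, 3, 5, 1, 6, 7, 2])
print(P)  # [[1, 2, 6, 7], [3, 5], [4], [8]]
print(Q)  # [[1, 4, 6, 7], [2, 8], [3], [5]]
```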
Theorem 2.14. ([NY04]) Given a permutation σ ∈ S_n, ℓ(σ) is equal to the number of columns in P_σ (or, equivalently, in Q_σ).
Proof. This fact follows from the observation in Section 2.3.2 that patience sorting a sequence σ(1), σ(2), ..., σ(n) yields ℓ(σ) piles, paired with the fact that the first row of the tableau P_i contains precisely the top number of each pile in the patience sorting algorithm.
2.3.5 Semistandard Young Tableaux and the Robinson-Schensted-Knuth Correspondence
We have defined both (standard) Young tableaux and semistandard tableaux, yet the Robinson-Schensted correspondence only involves pairs of standard Young tableaux of the same shape. A natural question is what happens to the Robinson-Schensted correspondence when the pairs of tableaux are allowed to be semistandard. The answer is given by the celebrated Robinson-Schensted-Knuth correspondence.
Recall that permutations in Sn are in one-to-one correspondence with n × n permu- tation matrices. One can embed all such matrices, for all n, as semi-infinite matrices by extending by zeroes. The image is a subset of the set of non-negative arrays which are semi-infinite in two directions with entries zero for all but finitely many indices:
A := {A = (aij)1≤i,j<∞ : aij ∈ N0 ∀i, j and aij = 0 for all but finitely many i, j}.
That is, if σ ∈ S_n, then σ ↔ A_σ, where

    (A_σ)_{ij} = 1 if 1 ≤ i ≤ n and j = σ(i), and 0 otherwise.
Lemma 2.15. ([NY04]) The elements of A are in one-to-one correspondence with 'generalised permutations'

    w = ( i_1 i_2 ··· i_m
          j_1 j_2 ··· j_m ),

where i_k ≤ i_{k+1} for all 1 ≤ k ≤ m − 1, and the j's are weakly increasing in their i-blocks (i.e. if i_k = i_{k+1}, then j_k ≤ j_{k+1}).
Proof. Given w as in the statement of the lemma, let a_{ij} be equal to the number of times the column (i, j) appears in w. Conversely, given A ∈ A, arrange a_{ij} copies of the column (i, j) in a 2-by-‖A‖_{1,1} array as prescribed by the conditions described above, where ‖A‖_{1,1} := ∑_{i,j} a_{ij}.
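The direction from matrices to generalised permutations is a direct enumeration; a Python sketch (the function name is ours), checked against Example 2.13 below:

```python
def matrix_to_gp(A):
    """Generalised permutation of a non-negative integer matrix (Lemma 2.15):
    a_ij copies of the column (i, j), taken in lexicographic order of (i, j).
    Returns the top and bottom rows of the 2-row array."""
    top, bottom = [], []
    for i, row in enumerate(A, start=1):
        for j, a in enumerate(row, start=1):
            top += [i] * a
            bottom += [j] * a
    return top, bottom

A = [[0, 1, 0, 0, 1],
     [0, 0, 2, 0, 0],
     [1, 0, 0, 0, 0]]
print(matrix_to_gp(A))   # ([1, 1, 2, 2, 3], [2, 5, 3, 3, 1])
```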
Theorem 2.16. ([NY04]) (The Robinson-Schensted-Knuth Correspondence) The set of non-negative integer matrices is in bijection with the set of pairs of semistandard Young tableaux of the same shape.
By identifying A with the set of generalised permutations, the previous algorithms establish the correspondence. We just need to make a few notes on how to generalise the procedures to generalised permutations and semistandard tableaux. 65
Generalised Permutations to Semistandard Tableaux:
The algorithm is essentially the same as before, with a couple of additional comments to be made on how row insertion works. In the following, we are inserting a:
(1) To insert the number into an empty row, we just create a box in the row and place the number into it. If the row is not empty, move onto Step (2).
(2) If the row is not empty, search the row for the smallest number which is strictly larger than the number to be inserted, if it exists, and call the found number m. If more than one such number exists, take m to be the left-most such number. Replace m by a, and ‘bump’ (i.e. remove) m from the row.
(3) If no such number exists, create a new box at the end of the row and fill it with a.
To insert a word into a (possibly empty) semistandard tableau, we cascade in the same manner as before. Insert the first ‘letter’ (as prescribed above). If a letter is bumped, then insert that letter into the next row. Otherwise, move onto the next letter in the word. Continue in this manner until no letters remain.
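The semistandard (weak) version of row insertion differs from the standard one only in bumping the left-most entry strictly larger than the inserted letter, so repeats of the letter are allowed to its left. A Python sketch (the function name is ours), run on the word 2 5 3 3 1 of Example 2.13 below:

```python
import bisect

def sst_insert(tableau, a):
    """Insertion of a letter a into a semistandard tableau (list of rows):
    bump the left-most entry strictly greater than a, else append."""
    tableau = [row[:] for row in tableau]
    for row in tableau:
        k = bisect.bisect_right(row, a)   # first entry strictly greater than a
        if k == len(row):
            row.append(a)                 # nothing larger: extend the row
            return tableau
        row[k], a = a, row[k]             # bump and cascade to the next row
    tableau.append([a])
    return tableau

T = []
for x in (2, 5, 3, 3, 1):
    T = sst_insert(T, x)
print(T)   # [[1, 3, 3], [2], [5]]
```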
Lemma 2.17. ([NY04]) Word insertion, when performed on a semistandard tableau, yields another semistandard tableau.
Proof. We check this by considering the case when the word is just one letter/number, a. The result will then follow by induction.
We need to check three things: we need the insertion T ← a to have weakly decreasing row lengths (so that the shape is still a Young diagram), to be weakly increasing along the rows, and to be strictly increasing down the columns.
(1) To fail to have weakly decreasing row lengths, one would have to encounter the situation of two adjacent rows having the same row lengths, with a number bumped from the first being appended to the second row. Thus, we need not consider the full tableau, just two rows, R_1 and R_2, of the same length.

Suppose a bumps w from R_1. Then, this implies that w > a, since w is the smallest number in R_1 which is strictly larger than a. Now, for w to be appended to R_2, one would have to have w ≥ w′ for all w′ in R_2. However, consider the number w′′ in R_2 below the original position of w in R_1. Since the tableau was semistandard before, we have w′′ > w. But this then gives w′′ > w > a, which tells us that w cannot possibly be appended to R_2. Thus, row insertion must preserve the property of the row lengths being weakly decreasing.
(2) The fact that the weakly increasing property of rows is preserved by word insertion can be checked at the level of rows, since, if it is shown that insertion of a into a row, R, results in a weakly increasing row, then the result will follow inductively.
There are two possibilities: either a bumps a number in R, or a is appended to R. The latter case is trivial; a is only appended to R if a is greater than or equal to every number in R. Since R started off being weakly increasing, the result follows immediately.
Now, suppose a does bump a number, w, from R. Since w > a, we have no violations to the right of the bumped position. If w′ is the number to the left of the bumped position, then w′ ≤ w. Since w was the left-most instance of the smallest entry that was larger than a, we must have w′ ≤ a, which confirms that there are no issues to the left of the bump.
Therefore, we have that row insertion preserves the weakly increasing property of rows.
(3) Similarly, to see the preservation of column-strictness under word insertion, we only need to consider two rows again. The only thing to consider is whether a row insertion can affect the relation of a row to the row above. The reason for this is that, if a bumps w, we have a < w; thus, if w′ was below w previously, then a < w < w′ gives us a < w′. If a is simply appended to a row, then, by virtue of the row lengths previously being weakly decreasing, there cannot be anything below the new box, so there is nothing to check.

Now we turn our attention to the relation between a row and the row above. The key fact is that a bumped number must go weakly to the left of the box below the bump from the previous row. If we have two rows, R_1 and R_2, with R_1 above R_2, then consider R_1 ← a. If a bump occurs, then a bumps some number w. If w had no box below it, then this fact is trivially true. However, if there were a box below w, containing w′, then we have w′ > w, which means, when it comes to inserting w into R_2, it cannot go to the right of w′ (since we bump the left-most instance of the smallest number greater than w). Now, by (2), we know that a is greater than or equal to everything to its left (in R_1), and we know that a < w (again, since a bumped w), and so we have that w is inserted beneath a number strictly less than it.
Example 2.13. Take the matrix

    A = (a_{ij}) = ( 0 1 0 0 1
                     0 0 2 0 0
                     1 0 0 0 0 ).
The index pairs (i, j), counted with multiplicity aij, are
    (1, 2), (1, 5), (2, 3), (2, 3), (3, 1).
The corresponding generalised permutation is then

    w = ( 1 1 2 2 3
          2 5 3 3 1 ).
Applying the RSK correspondence, one obtains the following sequences of semistandard tableaux, resulting in the pair (P, Q) of semistandard tableaux corresponding to A (or w):

    P :  2 ,  2 5 ,  2 3 ,  2 3 3 ,  1 3 3  = P
                     5      5        2
                                     5

    Q :  1 ,  1 1 ,  1 1 ,  1 1 2 ,  1 1 2  = Q.
                     2      2        2
                                     3
Semistandard Tableaux to Generalised Permutations:
Given a pair of semistandard tableaux, (P, Q), of the same shape λ ⊢ n, we obtain a generalised permutation via the following algorithm:
(1) Identify the box in Q with the largest number, m. If more than one box contains the largest number, pick the right-most instance of it. Delete this box from Q, and delete the corresponding box (positionally-speaking, that is) in P , making a note of the number, e, that was in this box. If Q was empty, then go to Step (4).
(2) If e was in the first row, record the column (m, e) (with m on top and e beneath), and go back to Step (1). Otherwise, go onto Step (3).
(3) Search in the row above e’s row for the largest number, f, that is strictly smaller than e. If more than one instance occurs, pick the right-most one. Replace f by e, and repeat Step (2) with f taking the role of e.
(4) Form all of the vectors into a 2 × n-array, according to the conditions imposed on generalised permutations.
Example 2.14. Take (tableaux written row by row)

    P = 1 1 2 4      Q = 2 2 3 3
        2 3 3            4 5 6
        4                5

which leads to the following series of pairs of tableaux and extracted columns:

    ( 1 1 3 4 , 2 2 3 3 )  extracting (6, 2),
      2 3       4 5
      4         5

    ( 1 3 3 4 , 2 2 3 3 )  extracting (5, 1),
      2         4
      4         5

    ( 2 3 3 4 , 2 2 3 3 )  extracting (5, 1),
      4         4

    ( 2 3 4 4 , 2 2 3 3 )  extracting (4, 3),

    ( 2 3 4 , 2 2 3 )  extracting (3, 4),

    ( 2 3 , 2 2 )  extracting (3, 4),

    ( 2 , 2 )  extracting (2, 3),

    ( ∅ , ∅ )  extracting (2, 2).
Thus, this pair of semistandard tableaux yields the following generalised permutation:
    ( 2 2 3 3 4 5 5 6
      2 3 4 4 3 1 1 2 ).
2.3.6 The RSK Equations for Schensted Insertion
We now turn our attention back to Schensted insertion. We consider insertion of a word v into a tableau T, which produces a new tableau T′. We will deal with the case of v being weakly increasing, i.e. v = 1^{a_1} 2^{a_2} ··· n^{a_n}, where a_i ∈ N_0 for each i ∈ [n] := {1, 2, ..., n}. Thus, T′ is obtained from T by inserting a_1 copies of 1 into T, then a_2 copies of 2, ..., finishing off by inserting a_n copies of n. The result is a new semistandard tableau T′ = T ← v.
Since the rows of a semistandard tableau are weakly increasing, we can identify T ∈ D_λ, where λ = (λ_1, ..., λ_m) ⊢ n and m ≤ n, with a sequence of weakly increasing words (w_i = i^{x_i^i} (i+1)^{x_{i+1}^i} ··· n^{x_n^i})_{i=1}^m. Here, x_j^i is the number of times j appears in row i of T. Thus, ∑_{j=i}^n x_j^i = λ_i.

Note 2.8. Since semistandard tableaux are strictly increasing down their columns, this identification really does capture all of the information of T.
Example 2.15. Inserting the word v = 1^1 2^0 3^2 4^1 = 1 3 3 4 into the tableau (written row by row)

    T = 1 1 2 4
        2 3 3
        4 4

yields T′ = ((((T ← 1) ← 3) ← 3) ← 4). Step by step:

    T ← 1 :  1 1 1 4     ← 3 :  1 1 1 3     ← 3 :  1 1 1 3 3     ← 4 :  1 1 1 3 3 4
             2 2 3              2 2 3 4            2 2 3 4              2 2 3 4
             3 4                3 4                3 4                  3 4
             4                  4                  4                    4

The last tableau is T′.
The procedure T ← v yields a semistandard tableau T′. If we identify T′ with its sequence of row words (w′_i = i^{y_i^i} (i+1)^{y_{i+1}^i} ··· n^{y_n^i})_{i=1}^m, then finding T′ in general (given T and v) amounts to finding how the y's arise from the x's and a's. Since word insertion is performed iteratively on each of the rows of the tableau, it suffices to understand the exponents at the level of rows.

Notation 2.19. We will use labelled arrows to denote word insertion. For word insertion into a tableau, T′ = T ← v, we will write T —v→ T′. For word insertion into a row, r, of a tableau, we write r —v→ r′ (bumping v′), where r′ is the row after the word v has been inserted and v′ is the word that has been bumped out of r.

Once the row operation r —v→ r′ (bumping v′) is understood, T —v→ T′ is understood as the cascade

    w_1 —v_1→ w′_1 (bumping v_2),    w_2 —v_2→ w′_2 (bumping v_3),    ···

where the w_i are the words associated to the rows of T, the w′_i are the words associated to the rows of T′, and v_1 = v.
Explicit Formulæ for Dynamics:
Write w = 1^{x_1} 2^{x_2} ··· n^{x_n} for the row into which we wish to insert the word v = 1^{a_1} 2^{a_2} ··· n^{a_n}, write w′ = 1^{y_1} 2^{y_2} ··· n^{y_n} for the result, and write v′ = 1^{b_1} 2^{b_2} ··· n^{b_n} for the word bumped by the process of the insertion w ← v. Thus, we want to know how (y_1, ..., y_n) and (b_1, ..., b_n) arise from (x_1, ..., x_n) and (a_1, ..., a_n).
In order to simplify the calculations, we introduce new variables by taking partial sums:
ξj = x1 + ··· + xj, ηj = y1 + ··· + yj (2.3.1)
for j = 1, . . . , n. These will serve to reformulate the information in a “max-plus” form (Lemma 2.18), which will be crucial for detropicalisation. 73
The y’s can then be recovered from the η’s as y1 = η1 and yj = ηj − ηj−1 for j > 1.
Clearly, the b’s can be obtained once the y’s are known. For example, bj represents the number of j’s bumped from w. Since w started with xj lots of j’s, and we introduced aj lots of j’s, and ended up with yj lots j’s, the number of bumped j’s must equal bj = xj + aj − yj = aj + ξj − ξj−1 − ηj + ηj−1 for j > 1. For j = 1, note that 1 cannot
be bumped, so y1 = x1 + a1, so that b1 = 0 always.
Lemma 2.18. ([NY04]) When v = k^a and w = 1^{x_1} 2^{x_2} ··· n^{x_n}, we get

    η_j = ξ_j              if j < k,
    η_j = ξ_k + a          if j = k,        (2.3.2)
    η_j = max{η_k, ξ_j}    if j > k.
Proof. If ξj > ηk, then some of the j’s ‘survived’ the bumping. This means the bumping did not get past the last j, and so the number of boxes with numbers ≤ j is still given by ξj, hence ηj = ξj. If ξj = ηk, then the bumping got to the very last j, which still gives the same conclusion of ηj = ξj. However, if ξj < ηk, then this means that the bumping eradicated all instances of j (hence, by the nature of the insertion algorithm, no numbers k < l ≤ j remain) and so ηj = ηk, hence ηj = max{ηk, ξj} for all j > k.
Corollary 2.19. ([NY04]) Inserting v = 1^{a_1} 2^{a_2} ··· n^{a_n} into w = 1^{x_1} 2^{x_2} ··· n^{x_n}, one has

    η_j = max_{1≤k≤j} {x_1 + ··· + x_k + a_k + ··· + a_j}    (2.3.3)

for all j.
Proof. We apply Lemma 2.18 recursively:

    (ξ_1, ξ_2, ξ_3, ..., ξ_j, ...)
      —1^{a_1}→ (ξ_1 + a_1 = η_1, max{η_1, ξ_2}, max{η_1, ξ_3}, ..., max{η_1, ξ_j}, ...)
      —2^{a_2}→ (η_1, max{η_1, ξ_2} + a_2 = η_2, max{η_1, η_2, ξ_3}, ..., max{η_1, η_2, ξ_j}, ...)
      —3^{a_3}→ (η_1, η_2, max{η_1, η_2, ξ_3} + a_3, ..., max{η_1, η_2, η_3, ξ_j}, ...).
Thus, ηj = max{η1, η2, . . . , ηj−1, ξj} + aj for all j > 1 and η1 = ξ1 + a1.
Since the η’s are weakly-increasing, we have
ηj = max{ηj−1, ξj} + aj = max{ηj−1 + aj, ξj + aj} (2.3.4) for all j > 1.
Unpacking this (to remove the η_l's, for l < j), we get

    η_j = max{max{η_{j−2} + a_{j−1}, ξ_{j−1} + a_{j−1}} + a_j, ξ_j + a_j}    (2.3.5)
        = max{η_{j−2} + a_{j−1} + a_j, ξ_{j−1} + a_{j−1} + a_j, ξ_j + a_j}    (2.3.6)
        = ··· = max_{1≤k≤j} {ξ_k + a_k + a_{k+1} + ··· + a_j}    (2.3.7)
        = max_{1≤k≤j} {x_1 + ··· + x_k + a_k + ··· + a_j},    (2.3.8)

which also covers j = 1.
To summarise, we have now found

    y_1 = x_1 + a_1,
    y_j = max_{1≤k≤j} {x_1 + ··· + x_k + a_k + ··· + a_j} − max_{1≤k≤j−1} {x_1 + ··· + x_k + a_k + ··· + a_{j−1}}    for j > 1,

and b_j = x_j + a_j − y_j for all j.
However, for the purpose of convenient calculation, dividing the computation into phases using Equation 2.3.8, one has
Corollary 2.20. ([NY04]) Given input coordinates (a1, . . . , an) and (x1, . . . , xn), one obtains the output coordinates (b1, . . . , bn) and (y1, . . . , yn) as follows:
1. compute ξj = x1 + ··· + xj for j = 1, . . . , n,
2. compute η_j = max{η_{j−1}, ξ_j} + a_j recursively for j = 2, . . . , n, initialising with η_1 = ξ_1 + a_1,
3. the y-coordinates are obtained by taking y1 = η1 and yj = ηj − ηj−1 for j = 2, . . . , n,
4. the b-coordinates are obtained by taking b1 = 0 and bj = aj + xj − yj for j = 2, . . . , n.
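The four phases above translate directly into code. The following is a minimal Python sketch (the function name is ours), which reproduces the data of Example 2.16:

```python
def tropical_insertion(a, x):
    """Single tropical word insertion, following the four phases of
    Corollary 2.20.  Inputs are the letter multiplicities of v and w;
    returns (b, y): the multiplicities of the bumped word and the new row."""
    n = len(x)
    # Phase 1: partial sums xi_j = x_1 + ... + x_j.
    xi = []
    for j in range(n):
        xi.append(x[j] + (xi[-1] if xi else 0))
    # Phase 2: eta_j = max(eta_{j-1}, xi_j) + a_j, seeded with eta_1 = xi_1 + a_1.
    eta = [xi[0] + a[0]]
    for j in range(1, n):
        eta.append(max(eta[-1], xi[j]) + a[j])
    # Phase 3: y_1 = eta_1 and y_j = eta_j - eta_{j-1}.
    y = [eta[0]] + [eta[j] - eta[j - 1] for j in range(1, n)]
    # Phase 4: b_1 = 0 and b_j = a_j + x_j - y_j.
    b = [0] + [a[j] + x[j] - y[j] for j in range(1, n)]
    return b, y

# Example 2.16: a = (1, 2, 0, 1, 1), x = (2, 3, 1, 0, 2).
b, y = tropical_insertion([1, 2, 0, 1, 1], [2, 3, 1, 0, 2])
# y == [3, 4, 0, 1, 1] and b == [0, 1, 1, 0, 2]
```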
Example 2.16. Take w = 1^{2} 2^{3} 3^{1} 4^{0} 5^{2} = 1^{2} 2^{3} 3\, 5^{2} and v = 1^{1} 2^{2} 4^{1} 5^{1}, i.e. x = (2, 3, 1, 0, 2) and a = (1, 2, 0, 1, 1), where emboldened letters denote the vectors of the corresponding variables. According to Equation 2.3.8, we should have
η1 = 3, η2 = max{5, 7} = 7, η3 = max{5, 7, 6} = 7,
η4 = max{6, 8, 7, 7} = 8, η5 = max{7, 9, 8, 8, 9} = 9.
Thus,
y1 = 3, y2 = 7 − 3 = 4, y3 = 7 − 7 = 0, y4 = 8 − 7 = 1, y5 = 9 − 8 = 1.
So,
b1 = 1 + 2 − 3 = 0, b2 = 2 + 3 − 4 = 1, b3 = 0 + 1 − 0 = 1,
b4 = 1 + 0 − 1 = 0, b5 = 1 + 2 − 1 = 2.
To check this, let us perform the word insertion:
\begin{align*}
1\,1\,2\,2\,2\,3\,5\,5 \leftarrow 1\,2\,2\,4\,5 &= \big(1\,1\,1\,2\,2\,3\,5\,5 \leftarrow 2\,2\,4\,5\big), &&\text{bumping } 2, \\
&= \big(1\,1\,1\,2\,2\,2\,5\,5 \leftarrow 2\,4\,5\big), &&\text{bumping } 3, \\
&= \big(1\,1\,1\,2\,2\,2\,2\,5 \leftarrow 4\,5\big), &&\text{bumping } 5, \\
&= \big(1\,1\,1\,2\,2\,2\,2\,4 \leftarrow 5\big), &&\text{bumping } 5, \\
&= 1\,1\,1\,2\,2\,2\,2\,4\,5.
\end{align*}
So, the word w′ = 1^{3} 2^{4} 4^{1} 5^{1} is left, and v′ = 2^{1} 3^{1} 5^{2} is bumped. This gives y = (3, 4, 0, 1, 1) and b = (0, 1, 1, 0, 2), agreeing with the results of the formulæ.
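The same check can be scripted. Below is a small Python sketch of single-row Schensted insertion (names ours), which reproduces the computation above:

```python
def row_insert_word(row, word):
    """Insert the letters of a weakly increasing word into a single row,
    one at a time; return the new row and the word of bumped letters."""
    row = list(row)
    bumped = []
    for letter in word:
        # leftmost entry of the row strictly greater than the new letter
        pos = next((i for i, r in enumerate(row) if r > letter), None)
        if pos is None:
            row.append(letter)       # nothing bumped: letter goes on the end
        else:
            bumped.append(row[pos])  # bump that entry and take its place
            row[pos] = letter
    return row, bumped

# w = 1^2 2^3 3 5^2 as a row, and v = 1 2 2 4 5:
row, bumped = row_insert_word([1, 1, 2, 2, 2, 3, 5, 5], [1, 2, 2, 4, 5])
# row == [1, 1, 1, 2, 2, 2, 2, 4, 5]   (w' = 1^3 2^4 4 5)
# bumped == [2, 3, 5, 5]               (v' = 2 3 5^2)
```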
2.3.7 Kirillov’s Geometric Lifting: gRSK
The formulæ in the previous section involve only the operations max and addition; hence they live in the tropical (max, +) algebra. We will make the change of operations
\[
(\max, +) \longrightarrow (+, \cdot)
\]
to the formulæ in Corollary 2.20, making the necessary change for the `additive' identity (0 → 1), to go from
\begin{align*}
\xi_j &= x_1 + \cdots + x_j \quad \forall\, j = 1, \ldots, n \\
\eta_1 &= \xi_1 + a_1 \\
\eta_j &= \max\{\eta_{j-1}, \xi_j\} + a_j \quad \forall\, j = 2, \ldots, n
\end{align*}
and
\begin{align*}
y_1 &= \eta_1 \\
y_j &= \eta_j - \eta_{j-1} \quad \forall\, j = 2, \ldots, n \\
b_1 &= 0 \\
b_j &= a_j + x_j - y_j = a_j + \xi_j - \xi_{j-1} - \eta_j + \eta_{j-1} \quad \forall\, j = 2, \ldots, n
\end{align*}
to the (de)tropicalised analogue:
\begin{align*}
\xi_j &= x_1 \cdots x_j \quad \forall\, j = 1, \ldots, n \\
\eta_1 &= \xi_1 a_1 \\
\eta_j &= (\eta_{j-1} + \xi_j)\, a_j \quad \forall\, j = 2, \ldots, n
\end{align*}
and
\begin{align*}
y_1 &= \eta_1 \\
y_j &= \frac{\eta_j}{\eta_{j-1}} \quad \forall\, j = 2, \ldots, n \\
b_1 &= 1 \\
b_j &= a_j \frac{x_j}{y_j} = a_j \frac{\xi_j \eta_{j-1}}{\xi_{j-1} \eta_j} \quad \forall\, j = 2, \ldots, n.
\end{align*}
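The detropicalised recursion can likewise be computed directly. The sketch below (our code; positive real inputs assumed, sample data ours) evaluates it and checks the multiplicative relations y_1 = a_1x_1 and a_jx_j = y_jb_j:

```python
def geometric_insertion(a, x):
    """Detropicalised ((+, *) in place of (max, +)) insertion formulae.
    Inputs are positive reals; returns (b, y) with b_1 = 1."""
    n = len(x)
    xi = [x[0]]
    for j in range(1, n):
        xi.append(xi[-1] * x[j])              # xi_j = x_1 x_2 ... x_j
    eta = [xi[0] * a[0]]
    for j in range(1, n):
        eta.append((eta[-1] + xi[j]) * a[j])  # eta_j = (eta_{j-1} + xi_j) a_j
    y = [eta[0]] + [eta[j] / eta[j - 1] for j in range(1, n)]
    b = [1.0] + [a[j] * x[j] / y[j] for j in range(1, n)]
    return b, y

a = [0.5, 2.0, 1.5, 3.0]
x = [1.0, 0.25, 4.0, 2.0]
b, y = geometric_insertion(a, x)
# sanity: y_1 = a_1 x_1 and a_j x_j = y_j b_j for j >= 2
assert abs(y[0] - a[0] * x[0]) < 1e-12
assert all(abs(a[j] * x[j] - y[j] * b[j]) < 1e-12 for j in range(1, 4))
```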
Lemma 2.21. ([NY04]) Returning to the original variables of
x1, . . . , xn, a1, . . . , an, y1, . . . , yn, b1, . . . , bn,
the above formulæ reduce to the following system (x, a) ↦ (y, b):
\begin{align}
b_1 &= 1, \qquad a_1 x_1 = y_1, \qquad a_j x_j = y_j b_j \quad \forall\, j = 2, \ldots, n, \tag{2.3.9} \\
\frac{1}{a_1} + \frac{1}{x_2} &= \frac{1}{b_2}, \qquad \frac{1}{a_j} + \frac{1}{x_{j+1}} = \frac{1}{y_j} + \frac{1}{b_{j+1}} \quad \forall\, j = 2, \ldots, n. \notag
\end{align}
Proof. The first two formulæ are by virtue of η_1 = y_1 and ξ_1 = x_1. For the other formulæ, take η_1 = ξ_1 a_1 and η_j = (η_{j−1} + ξ_j) a_j and rearrange to get
\[
\frac{\eta_1}{\xi_1 a_1} = 1 \tag{$\ast$}
\]
\[
\frac{\eta_j - \eta_{j-1} a_j}{\xi_j a_j} = 1 \quad \forall\, j = 2, \ldots, n. \tag{$\ast_j$}
\]
Equating ($\ast$) and ($\ast_2$) yields
\begin{align*}
\frac{\eta_2 - \eta_1 a_2}{\xi_2 a_2} &= \frac{\eta_1}{\xi_1 a_1} \\
\Rightarrow\; \frac{y_2}{x_2 a_2} - \frac{1}{x_2} &= \frac{1}{a_1} \\
\Rightarrow\; \frac{1}{b_2} - \frac{1}{x_2} &= \frac{1}{a_1},
\end{align*}
resulting in the third formula.
Equating ($\ast_{j+1}$) and ($\ast_j$) for j = 2, . . . , n yields
\begin{align*}
\frac{\eta_{j+1} - \eta_j a_{j+1}}{\xi_{j+1} a_{j+1}} &= \frac{\eta_j - \eta_{j-1} a_j}{\xi_j a_j} \\
\Rightarrow\; \frac{\eta_j}{\xi_j}\left(\frac{y_{j+1}}{x_{j+1} a_{j+1}} - \frac{1}{x_{j+1}}\right) &= \frac{\eta_j}{\xi_j}\left(\frac{1}{a_j} - \frac{1}{y_j}\right) \\
\Rightarrow\; \frac{y_{j+1}}{x_{j+1} a_{j+1}} + \frac{1}{y_j} &= \frac{1}{a_j} + \frac{1}{x_{j+1}} \\
\Rightarrow\; \frac{1}{b_{j+1}} + \frac{1}{y_j} &= \frac{1}{a_j} + \frac{1}{x_{j+1}}.
\end{align*}
2.3.8 A Matrix Representation of the Geometric RSK
Returning to the system of equations presented in Lemma 2.21, and letting bars denote reciprocals (i.e. \bar{x} := 1/x), one obtains the following equations:
\begin{align*}
\bar{a}_1 \bar{x}_1 &= \bar{y}_1, \\
\bar{a}_j \bar{x}_j &= \bar{y}_j \bar{b}_j \quad \forall\, j = 2, \ldots, n, \\
\bar{a}_1 + \bar{x}_2 &= \bar{b}_2, \\
\bar{a}_j + \bar{x}_{j+1} &= \bar{y}_j + \bar{b}_{j+1} \quad \forall\, j = 2, \ldots, n.
\end{align*}
This can be represented in the following form:
\[
\begin{pmatrix}
\bar{a}_1 & 1 & & \\
& \bar{a}_2 & \ddots & \\
& & \ddots & 1 \\
& & & \bar{a}_n
\end{pmatrix}
\begin{pmatrix}
\bar{x}_1 & 1 & & \\
& \bar{x}_2 & \ddots & \\
& & \ddots & 1 \\
& & & \bar{x}_n
\end{pmatrix}
=
\begin{pmatrix}
\bar{y}_1 & 1 & & \\
& \bar{y}_2 & \ddots & \\
& & \ddots & 1 \\
& & & \bar{y}_n
\end{pmatrix}
\begin{pmatrix}
1 & 0 & & \\
& \bar{b}_2 & 1 & \\
& & \ddots & \ddots \\
& & & \bar{b}_n
\end{pmatrix}.
\tag{2.3.10}
\]
As we stated before, the full Schensted insertion procedure can be reduced to single word insertions. Noumi and Yamada [NY04] provide a very explicit generalisation of Equation 2.3.10 to the problem of describing the output insertion tableau from the RSK correspondence, i.e. the tableau obtained by inserting a sequence of words into the empty tableau.
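For concreteness, the single-insertion matrix identity can be verified numerically. The following sketch (our code, based on our reading of Equation 2.3.10: upper bidiagonal factors with 1's on the superdiagonal, with the (1, 2) entry of the \bar{b}-factor zeroed out since \bar{b}_1 = 1) solves the barred system recursively and checks the product identity:

```python
import numpy as np

def bidiag(d, zero_first_super=False):
    """diag(d) plus a superdiagonal of 1's; optionally zero the (1, 2) entry."""
    M = np.diag(np.asarray(d, dtype=float)) + np.diag(np.ones(len(d) - 1), 1)
    if zero_first_super:
        M[0, 1] = 0.0
    return M

rng = np.random.default_rng(0)
n = 5
abar = rng.uniform(0.5, 2.0, n)
xbar = rng.uniform(0.5, 2.0, n)

# Solve the barred system recursively (bbar_1 = 1):
ybar = np.empty(n)
bbar = np.empty(n)
ybar[0], bbar[0] = abar[0] * xbar[0], 1.0
bbar[1] = abar[0] + xbar[1]
for j in range(1, n):
    ybar[j] = abar[j] * xbar[j] / bbar[j]
    if j + 1 < n:
        bbar[j + 1] = abar[j] + xbar[j + 1] - ybar[j]

lhs = bidiag(abar) @ bidiag(xbar)
rhs = bidiag(ybar) @ bidiag(bbar, zero_first_super=True)
assert np.allclose(lhs, rhs)
```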
Let w_1, w_2, . . . , w_m be a sequence of weakly increasing words, and suppose each is coordinatised as w_i = 1^{x^i_1} \cdots n^{x^i_n} for i = 1, . . . , m. As Schensted insertion prescribes:
1. One inserts w_1 (coordinatised by x^1 =: x^{1,1}) into the empty word, which produces a new first row, which we will say is coordinatised by y^{1,1}. Nothing is bumped.
2. One then inserts x^2 =: x^{2,1} into the new first row (y^{1,1}), which produces a new first row (its second form), which we denote by y^{2,1}. A word may be bumped, and we call it x^{2,2}.
Continuing on in this manner, keeping the indexing going, one can capture all of the above concisely in the following diagram:
\[
\begin{array}{ccccccc}
 & x^{1} = x^{1,1} & & x^{2} = x^{2,1} & & x^{3} = x^{3,1} & \\
\varnothing & \longrightarrow & y^{1,1} & \longrightarrow & y^{2,1} & \longrightarrow & \cdots \\
 & & & x^{2,2} & & x^{3,2} & \\
 & & \varnothing & \longrightarrow & y^{2,2} & \longrightarrow & \cdots \\
 & & & & & x^{3,3} & \\
 & & & \varnothing & \longrightarrow & \cdots & \\
 & & & & & & \ddots
\end{array}
\]
Figure 2.6. Iterative Schensted word insertions building the RSK correspondence.
Definition 2.20. For x ∈ R^m, 1 ≤ m ≤ n, define
\[
E(x) = \operatorname{diag}(x) + \varepsilon_n \tag{2.3.11}
\]
where ε_n is as in Definition 2.3.

If x = (1, 1, . . . , 1, x_k, x_{k+1}, . . . , x_n), then define
\[
E_k(x) = \begin{pmatrix} I_{k-1} & 0 \\ 0 & E(x') \end{pmatrix} \tag{2.3.12}
\]
where x' = (x_k, x_{k+1}, . . . , x_n) and E is modified for the size of x', i.e. E(x') = \operatorname{diag}(x_k, \ldots, x_n) + \varepsilon_{n-k+1}.
In [NY04], they prove a result that is equivalent to the following:
Theorem 2.22. ([NY04]) In the geometric analogue of Figure 2.6, one obtains y^{k,m} for 1 ≤ m ≤ k ≤ n by solving the following factorisation problems recursively:
\begin{align*}
E(\bar{x}^1) &= E_1(\bar{y}^{1,1}) \\
E(\bar{x}^2)E(\bar{x}^1) &= E(\bar{x}^{2,1})E_1(\bar{y}^{1,1}) \\
&= E_1(\bar{y}^{2,1})E_2(\bar{y}^{2,2}) \\
E(\bar{x}^3)E(\bar{x}^2)E(\bar{x}^1) &= E(\bar{x}^{3,1})E_1(\bar{y}^{2,1})E_2(\bar{y}^{2,2}) \\
&= E_1(\bar{y}^{3,1})E_2(\bar{x}^{3,2})E_2(\bar{y}^{2,2}) \\
&= E_1(\bar{y}^{3,1})E_2(\bar{y}^{3,2})E_3(\bar{y}^{3,3}).
\end{align*}
In general, at the k-th stage, one has
\[
E(\bar{x}^k)E(\bar{x}^{k-1}) \cdots E(\bar{x}^2)E(\bar{x}^1) = E_1(\bar{y}^{k,1})E_2(\bar{y}^{k,2}) \cdots E_k(\bar{y}^{k,k}).
\]
Remark 2.9. Noumi and Yamada’s proof of Theorem 2.22 in [NY04] is based on detailed path switching arguments that go by the name of Lindström–Gessel–Viennot formulæ [GV85], which are in turn based on the characterisation of totally positive matrices in terms of weighted path counting matrices. Although Theorem 2.22 is presented as a factorisation into increasingly many matrices, each step involves only the permutation and factorisation of a pair of consecutive matrices. This suggests the idea that the full geometric RSK is built from a tower of sequential geometric Schensted insertions, each described by equations of the type seen in 2.3.10.
2.3.9 Noumi and Yamada’s Observation
In [NY04], the following observation was made:
Starting with the discrete-time Toda lattice (Definition 2.9):
\[
I^{t+1}_i = I^t_i + V^t_i - V^{t+1}_{i-1}, \qquad V^{t+1}_i = \frac{I^t_{i+1} V^t_i}{I^{t+1}_i}, \tag{2.3.13}
\]
and performing the change of variables
\[
a_i = (I^t_{i+1})^{-1}, \quad x_i = (V^t_i)^{-1}, \quad y_i = (V^{t+1}_i)^{-1}, \quad b_i = (I^{t+1}_i)^{-1}, \tag{2.3.14}
\]
the discrete-time Toda lattice transforms into the following:
\[
a_i x_i = y_i b_i, \qquad \frac{1}{a_i} + \frac{1}{x_{i+1}} = \frac{1}{y_i} + \frac{1}{b_{i+1}}, \qquad i \in \mathbb{Z}, \tag{2.3.15}
\]
which bears a striking resemblance to Equations 2.3.9.
Chapter 3 Geometric RSK and Toda: The Discrete Picture
In [NY04], Noumi and Yamada made a connection between geometric RSK and the discrete-time Toda lattice:
\[
I^{t+1}_i V^{t+1}_i = I^t_{i+1} V^t_i, \qquad I^{t+1}_i + V^{t+1}_{i-1} = I^t_i + V^t_i \tag{3.0.1}
\]
for i, t ∈ Z. By setting
\[
a_i = (I^t_{i+1})^{-1}, \quad x_i = (V^t_i)^{-1}, \quad y_i = (V^{t+1}_i)^{-1}, \quad b_i = (I^{t+1}_i)^{-1}, \tag{3.0.2}
\]
in Equations 3.0.1, one obtains
\[
a_i x_i = y_i b_i, \qquad \frac{1}{a_i} + \frac{1}{x_{i+1}} = \frac{1}{y_i} + \frac{1}{b_{i+1}} \tag{3.0.3}
\]
for i ∈ Z.
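The observation can be checked mechanically: run one Toda time-step on a window (treating V_0^{t+1} as free data, as befits the bi-infinite lattice), take reciprocals as in (3.0.2), and verify (3.0.3). A minimal sketch (our code; constant initial data chosen for simplicity):

```python
import numpy as np

N = 8
I_t = np.full(N + 1, 1.5)   # I_1^t, ..., I_{N+1}^t
V_t = np.full(N, 1.0)       # V_1^t, ..., V_N^t
V0_new = 0.5                # V_0^{t+1}: free boundary data on the window

I_new = np.empty(N)         # I_i^{t+1}
V_new = np.empty(N)         # V_i^{t+1}
for k in range(N):
    v_prev = V0_new if k == 0 else V_new[k - 1]
    I_new[k] = I_t[k] + V_t[k] - v_prev        # I_i^{t+1} = I_i^t + V_i^t - V_{i-1}^{t+1}
    V_new[k] = I_t[k + 1] * V_t[k] / I_new[k]  # I_i^{t+1} V_i^{t+1} = I_{i+1}^t V_i^t

# Change of variables (3.0.2):
a = 1.0 / I_t[1:]       # a_i = (I_{i+1}^t)^{-1}
x = 1.0 / V_t           # x_i = (V_i^t)^{-1}
y = 1.0 / V_new         # y_i = (V_i^{t+1})^{-1}
b = 1.0 / I_new         # b_i = (I_i^{t+1})^{-1}

# The transformed equations (3.0.3):
assert np.allclose(a * x, y * b)
assert np.allclose(1/a[:-1] + 1/x[1:], 1/y[:-1] + 1/b[1:])
```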
Inspired by this observation, we proceed to explore the connections and direct conse- quences in this chapter. First, we delve into the bi-infinite discrete-time Toda lattice, exploring its solutions, and then we present a solution to the problem of preserving this connection under the restriction to the finite-dimensional setting, which is the natural home of the original geometric RSK correspondence.
3.1 The Factorisation Problem
In this section, we present the factorisation problem that underlies solving the discrete-time Toda lattice, first in the bi-infinite case, then in the semi-infinite case.
Definition 3.1. For a sequence x = (x_i)_{i∈Z}, define two operators, L(x) and U(x), to have the following matrix representations:
\[
L(x) := \sum_{i \in \mathbb{Z}} (E_{ii} + x_i E_{i+1,i}), \qquad U(x) := \sum_{i \in \mathbb{Z}} (x_i E_{ii} + E_{i,i+1}). \tag{3.1.1}
\]
Given input sequences a = (ai)i∈Z, x = (xi)i∈Z, the problem of finding sequences b = (bi)i∈Z and y = (yi)i∈Z such that
L(y)U(b) = U(a)L(x), (3.1.2)
when a_i = I^t_i, x_i = V^t_i, b_i = I^{t+1}_i and y_i = V^{t+1}_i, is the problem of solving the discrete-time Toda lattice for one time-step.
More generally speaking, this is the problem of factoring a bi-infinite, tridiagonal matrix with 1’s on the super-diagonal (a tridiagonal Hessenberg operator),
\[
H = \sum_{i \in \mathbb{Z}} (E_{i,i+1} + g_i E_{ii} + h_i E_{i+1,i}),
\]
into a product of a lower bidiagonal operator and an upper bidiagonal operator.
In general, this factorisation problem does not have a unique solution. In the next two sections (Section 3.2 and Section 3.3), we present two descriptions of the solution set to this problem: the first is a parametric description of the solution set and the second is a generalisation of a result developed by Murphy [Mu18] for one-dimensional discrete Schrödinger operators.
3.2 Parametrised Factorisations
With the factorisation problem now established, we work our way to a new result on the LU-decomposition of bi-infinite Hessenberg operators by leveraging the simple formulation for semi-infinite Hessenberg operators. The first result of this section is a description of the semi-infinite LU-decomposition in terms of τ-functions. The theorem that follows provides a parametrisation of the LU-decompositions in the bi-infinite case. Ultimately, since the semi-infinite solution to the factorisation problem truncates to the solution to the finite factorisation problem, this provides full coverage of the three cases (finite, semi-infinite and bi-infinite) of interest for the discrete-time Toda lattice.
Definition 3.2. For S a semi-infinite (half-line) operator and k ∈ N, define S_k to be the principal k × k submatrix of S, viewing S as a semi-infinite matrix (s_{ij})_{i,j∈N}, and define τ_k(S) = det(S_k).
To state the next lemma, let us extend the definitions of L and U to both semi-infinite sequences and finite sequences.
Definition 3.3. Let x = (x_1, x_2, . . .) be a sequence; then
\[
L(x) := \sum_{i \in \mathbb{N}} (E_{ii} + x_i E_{i+1,i}), \qquad U(x) := \sum_{i \in \mathbb{N}} (x_i E_{ii} + E_{i,i+1}).
\]
If x = (x_1, . . . , x_m) is a finite sequence, then
\[
L(x) := I_{m+1} + \sum_{i=1}^m x_i E_{i+1,i}, \qquad U(x) := \varepsilon_m + \sum_{i=1}^m x_i E_{i,i},
\]
where I_{m+1} is the (m + 1) × (m + 1) identity matrix and ε_m is the m × m matrix with 1’s on its superdiagonal and 0’s elsewhere.
In the semi-infinite setting, the factorisation solution, if it exists, is unique for generic tridiagonal Hessenberg operators. The following lemma seems to be known in the literature, but for completeness, we include a proof here.

Lemma 3.1. Let H = \sum_{i \in \mathbb{N}} (E_{i,i+1} + g_i E_{ii} + h_i E_{i+1,i}), with h_i ≠ 0 for all i; then either there exist unique sequences y = (y_1, y_2, . . .) and b = (b_1, b_2, . . .) such that L(y)U(b) = H or no solution exists. Furthermore, if a solution exists, it is given by
\[
b_i = \frac{\tau_i(H)}{\tau_{i-1}(H)}, \qquad i = 1, 2, \ldots \tag{3.2.1}
\]
\[
y_i = \frac{h_i \tau_{i-1}(H)}{\tau_i(H)}, \qquad i = 1, 2, \ldots \tag{3.2.2}
\]
where τ_0(H) is defined to be 1.
Proof. We start by substituting in and multiplying to obtain
\[
\sum_{i \in \mathbb{N}} (E_{i,i+1} + g_i E_{ii} + h_i E_{i+1,i}) = \sum_{i=1}^{\infty} (E_{i,i+1} + b_i E_{ii} + y_i b_i E_{i+1,i}) + \sum_{j=2}^{\infty} y_{j-1} E_{jj}. \tag{3.2.3}
\]
Comparing coefficients yields the following system of equations:
\begin{align}
b_1 &= g_1 \tag{3.2.4} \\
b_i y_i &= h_i, \qquad i = 1, 2, \ldots \tag{3.2.5} \\
b_{i+1} + y_i &= g_{i+1}, \qquad i = 1, 2, \ldots \tag{3.2.6}
\end{align}
Since h_i ≠ 0 for all i, it is clear that one can solve uniquely for (b_1, y_1, b_2, y_2, . . .) in that order, provided there is no obstruction (a vanishing b_i) to solving Equation 3.2.5.
Completion of the proof boils down to proving 3.2.1, since 3.2.2 follows immediately from this and Equation 3.2.5. We proceed by induction on i.

For i = 1, we have
\[
b_1 = g_1 = \tau_1(H) = \frac{\tau_1(H)}{\tau_0(H)}, \tag{3.2.7}
\]
so that the base case holds trivially.

Assume b_i = \frac{\tau_i(H)}{\tau_{i-1}(H)} for some i ∈ N; then, by Equations 3.2.5 and 3.2.6, one has
\[
b_{i+1} = g_{i+1} - \frac{h_i}{b_i} = \frac{g_{i+1} b_i - h_i}{b_i}. \tag{3.2.8}
\]
Using the induction hypothesis, one then obtains
\[
b_{i+1} = \frac{g_{i+1} \tau_i(H) - h_i \tau_{i-1}(H)}{\tau_i(H)} = \frac{\tau_{i+1}(H)}{\tau_i(H)}, \tag{3.2.9}
\]
where the latter equality follows from a cofactor expansion of Hi+1 along its bottom row.
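On a finite truncation, the lemma can be tested directly (our code; determinants computed with NumPy):

```python
import numpy as np

def lu_by_tau(g, h):
    """Factor the N x N tridiagonal Hessenberg matrix with diagonal g,
    subdiagonal h and unit superdiagonal as L(y)U(b), using the
    tau-function formulae (3.2.1)-(3.2.2)."""
    N = len(g)
    H = np.diag(np.asarray(g, float)) + np.diag(np.ones(N - 1), 1) + np.diag(h, -1)
    tau = [1.0] + [np.linalg.det(H[:k, :k]) for k in range(1, N + 1)]
    b = np.array([tau[i] / tau[i - 1] for i in range(1, N + 1)])
    y = np.array([h[i - 1] * tau[i - 1] / tau[i] for i in range(1, N)])
    return H, b, y

g = [2.0, 3.0, 1.5, 2.5, 4.0]
h = [1.0, 0.5, 2.0, 1.5]
H, b, y = lu_by_tau(g, h)

L = np.eye(5) + np.diag(y, -1)              # unit lower bidiagonal
U = np.diag(b) + np.diag(np.ones(4), 1)     # upper bidiagonal
assert np.allclose(L @ U, H)
```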
Definition 3.4. Let B be a bi-infinite operator, viewed as a bi-infinite matrix B = \sum_{i,j \in \mathbb{Z}} b_{ij} E_{ij}. For (m, n) ∈ Z², define two semi-infinite operators from B:
\[
B^{+}_{m,n} := \sum_{i,j=1}^{\infty} b_{m+i-1,\,n+j-1} E_{ij}, \qquad B^{-}_{m,n} := \sum_{i,j=1}^{\infty} b_{m-i+1,\,n-j+1} E_{ji}. \tag{3.2.10}
\]
To elucidate this definition, suppose H is the bi-infinite Hessenberg operator H = \sum_{i \in \mathbb{Z}} (E_{i,i+1} + g_i E_{ii} + h_i E_{i+1,i}). In matrix form, H is
\[
H = \begin{pmatrix}
\ddots & \ddots & & & \\
& g_{-1} & 1 & & \\
& h_{-1} & g_0 & 1 & \\
& & h_0 & g_1 & 1 \\
& & & \ddots & \ddots
\end{pmatrix} \tag{3.2.11}
\]
where the 0-th row and 0-th column are the ones meeting at g_0. The two instances of interest are the semi-infinite, tridiagonal, Hessenberg operators H^{+}_{1,1} and H^{-}_{0,0}:
\[
H^{+}_{1,1} = \begin{pmatrix}
g_1 & 1 & \\
h_1 & g_2 & \ddots \\
& \ddots & \ddots
\end{pmatrix} \tag{3.2.12}
\]
\[
H^{-}_{0,0} = \begin{pmatrix}
g_0 & 1 & \\
h_{-1} & g_{-1} & \ddots \\
& \ddots & \ddots
\end{pmatrix} \tag{3.2.13}
\]
We can now state the analogous result for bi-infinite operators.
Theorem 3.2. Let a = (a_i)_{i∈Z} and x = (x_i)_{i∈Z} be sequences and define H = U(a)L(x). For ρ ≠ 0, define two operators:
\[
H^{+}_{\rho} = H^{+}_{1,1} - \frac{h_0}{\rho} E_{11} = H^{+}_{1,1} - \frac{a_1 x_0}{\rho} E_{11}
\]
and
\[
H^{-}_{\rho} = H^{-}_{0,0} - \rho E_{11}.
\]
Then, there exists a unique pair of sequences (b, y) with b_0 = ρ satisfying L(y)U(b) = U(a)L(x) if and only if τ_k(H^{\pm}_{\rho}) ≠ 0 for all k ∈ N. If this solution exists, it is given by
\[
y_i = \begin{cases}
\dfrac{a_{i+1} x_i\, \tau_{i-1}(H^{+}_{\rho})}{\tau_i(H^{+}_{\rho})} & i \ge 1 \\[2ex]
\dfrac{a_1 x_0}{\rho} & i = 0 \\[2ex]
\dfrac{\tau_{-i}(H^{-}_{\rho})}{\tau_{-(i+1)}(H^{-}_{\rho})} & i \le -1
\end{cases}
\qquad \text{and} \qquad
b_i = \begin{cases}
\dfrac{\tau_i(H^{+}_{\rho})}{\tau_{i-1}(H^{+}_{\rho})} & i \ge 1 \\[2ex]
\rho & i = 0 \\[2ex]
\dfrac{a_{i+1} x_i\, \tau_{-(i+1)}(H^{-}_{\rho})}{\tau_{-i}(H^{-}_{\rho})} & i \le -1.
\end{cases}
\]
Proof. Define P := L(y)U(b); then P = H is equivalent to the following simultaneous system of equations:
\begin{align}
y_0 b_0 &= a_1 x_0 \tag{3.2.14} \\
P^{+}_{1,1} &= H^{+}_{1,1} \tag{3.2.15} \\
P^{-}_{0,0} &= H^{-}_{0,0} \tag{3.2.16}
\end{align}
We start by setting ρ := b_0, so that y_0 = \frac{a_1 x_0}{\rho}. Then
\begin{align}
P^{+}_{1,1} &= \sum_{i \in \mathbb{N}} (E_{i,i+1} + (b_i + y_{i-1}) E_{ii} + y_i b_i E_{i+1,i}) \tag{3.2.17} \\
&= y_0 E_{11} + \left( \sum_{i \in \mathbb{N}} (E_{i,i+1} + b_i E_{ii} + y_i b_i E_{i+1,i}) + \sum_{j=2}^{\infty} y_{j-1} E_{jj} \right) \tag{3.2.18} \\
&= \frac{a_1 x_0}{\rho} E_{11} + L(y_+) U(b_+), \tag{3.2.19}
\end{align}
where y_+ := (y_1, y_2, . . .) and b_+ := (b_1, b_2, . . .). Thus, Equation 3.2.15 is equivalent to
\[
L(y_+) U(b_+) = H^{+}_{1,1} - \frac{a_1 x_0}{\rho} E_{11} = H^{+}_{\rho}. \tag{3.2.20}
\]
By Lemma 3.1, we have
\[
b_i = \frac{\tau_i(H^{+}_{\rho})}{\tau_{i-1}(H^{+}_{\rho})}, \qquad y_i = \frac{a_{i+1} x_i\, \tau_{i-1}(H^{+}_{\rho})}{\tau_i(H^{+}_{\rho})} \tag{3.2.21}
\]
for i = 1, 2, . . . .
Turning our attention to Equation 3.2.16:
\begin{align}
P^{-}_{0,0} &= \sum_{i \in \mathbb{N}} (E_{i,i+1} + (y_{-i} + b_{-i+1}) E_{ii} + y_{-i} b_{-i} E_{i+1,i}) \tag{3.2.22} \\
&= b_0 E_{11} + \left( \sum_{i \in \mathbb{N}} (E_{i,i+1} + y_{-i} E_{ii} + y_{-i} b_{-i} E_{i+1,i}) + \sum_{j=2}^{\infty} b_{-j+1} E_{jj} \right) \tag{3.2.23} \\
&= b_0 E_{11} + \left( \sum_{i=1}^{\infty} (E_{ii} + b_{-i} E_{i+1,i}) \right) \left( \sum_{j=1}^{\infty} (y_{-j} E_{jj} + E_{j,j+1}) \right) \tag{3.2.24} \\
&= \rho E_{11} + L(b_-) U(y_-), \tag{3.2.25}
\end{align}
where y_- := (y_{-1}, y_{-2}, . . .) and b_- := (b_{-1}, b_{-2}, . . .). Thus, Equation 3.2.16 is equivalent to
\[
L(b_-) U(y_-) = H^{-}_{0,0} - \rho E_{11} = H^{-}_{\rho}. \tag{3.2.26}
\]
By Lemma 3.1, we have
\[
b_i = \frac{a_{i+1} x_i\, \tau_{-(i+1)}(H^{-}_{\rho})}{\tau_{-i}(H^{-}_{\rho})}, \qquad y_i = \frac{\tau_{-i}(H^{-}_{\rho})}{\tau_{-(i+1)}(H^{-}_{\rho})} \tag{3.2.27}
\]
for i = −1, −2, . . . .
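The two-sided formulae can be exercised on a finite window. The sketch below (our code; the τ's are computed by the standard three-term recursion for tridiagonal determinants, and the parameter ranges are chosen so that no minor vanishes) builds b and y from H_ρ^± and checks the factorisation equations b_i y_i = a_{i+1} x_i and b_i + y_{i−1} = a_i + x_i at interior indices:

```python
import numpy as np

def tau_seq(diag, sub, K):
    """Leading principal minors tau_0, ..., tau_K of a tridiagonal matrix
    with the given diagonal and subdiagonal and a unit superdiagonal."""
    tau = [1.0, diag[0]]
    for k in range(1, K):
        tau.append(diag[k] * tau[-1] - sub[k - 1] * tau[-2])
    return tau

rng = np.random.default_rng(2)
N = 5                                   # window of indices -N..N
a = {i: rng.uniform(2.0, 3.0) for i in range(-N, N + 1)}
x = {i: rng.uniform(0.1, 0.3) for i in range(-N, N + 1)}
g = {i: a[i] + x[i] for i in range(-N, N + 1)}    # diagonal of H = U(a)L(x)
h = {i: a[i + 1] * x[i] for i in range(-N, N)}    # subdiagonal of H
rho = 1.0                                         # the parameter b_0

b, y = {0: rho}, {0: h[0] / rho}

# Plus side: H_rho^+ has diagonal (g_1 - h_0/rho, g_2, ...), subdiagonal (h_1, ...).
tp = tau_seq([g[1] - h[0] / rho] + [g[i] for i in range(2, N + 1)],
             [h[i] for i in range(1, N)], N)
for i in range(1, N):
    b[i] = tp[i] / tp[i - 1]
    y[i] = h[i] * tp[i - 1] / tp[i]

# Minus side: H_rho^- has diagonal (g_0 - rho, g_{-1}, ...), subdiagonal (h_{-1}, ...).
tm = tau_seq([g[0] - rho] + [g[-i] for i in range(1, N)],
             [h[-i] for i in range(1, N)], N)
for i in range(1, N):
    y[-i] = tm[i] / tm[i - 1]
    b[-i] = h[-i] * tm[i - 1] / tm[i]

assert all(abs(b[i] * y[i] - h[i]) < 1e-9 for i in range(-(N - 1), N))
assert all(abs(b[i] + y[i - 1] - g[i]) < 1e-9 for i in range(-(N - 2), N))
```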
3.3 Factorisations by Generalised Eigenfunctions
Noumi and Yamada’s original observation (2.3.9) drew a connection between a bi-infinite analogue of geometric RSK and the bi-infinite discrete-time Toda lattice. Whilst we ultimately reduce this connection to the finite setting, exploring the original observation led us to studying solutions to the bi-infinite factorisation problem. In addition to the parametrised solution in Theorem 3.2, we now provide another solution, extending Murphy’s [Mu18] analogous result for one-dimensional discrete Schrödinger operators.
Definition 3.5. We define the following three operators:
\[
d_l = \sum_{j \in \mathbb{Z}} (E_{jj} - E_{j+1,j}), \qquad d_r = \sum_{j \in \mathbb{Z}} (E_{j,j+1} - E_{jj}), \qquad s_r = \sum_{j \in \mathbb{Z}} E_{j+1,j};
\]
the first two are first-order difference operators, and the latter is a shift operator.
The main result of this section is a factorisation by generalised eigenfunction. The result was inspired by, and generalised from, the following result of Murphy [Mu18].
Proposition 3.3. ([Mu18]) Let L = ∆ + u be a discrete Schrödinger operator, and f = \{f_i\} a solution to the equation Lf = λ_0 f for a fixed value λ = λ_0, with f_i ≠ 0. Define (with a slight abuse of notation) sequences µ_l and µ_r by
\[
(\mu_l)_i = \frac{f_i - f_{i-1}}{f_i} = f^{-1} d_l f, \qquad (\mu_r)_i = \frac{f_{i+1} - f_i}{f_i} = f^{-1} d_r f; \tag{3.3.1}
\]
then
\[
L - \lambda_0 = (d_r + \mu_l)(d_l - s_r \mu_r).
\]
Remark 3.1. Taking λ_0 = 0 yields a family of UL-decompositions of the operator. Modulo the LU-vs-UL distinction, this is a result on the factorisation of a subclass of the operators in which we are interested: the discrete Schrödinger operators, matricially speaking, are tridiagonal operators whose super- and sub-diagonals both consist solely of 1’s (∆ = d_r − d_l and u is diagonal).
Theorem 3.4. Let H = U(a)L(x), and suppose H = L(y)U(b) has a solution (b, y). Let f = (f_i)_{i∈Z} be the sequence defined via f_0 = 1, f_1 = −ρ and b_j = −\frac{f_{j+1}}{f_j} for all j ∈ Z. Then f is a generalised eigenfunction for H with eigenvalue 0: Hf = 0, and we have
\[
L(y) = (d_l + s_r \mu_r) \qquad \text{and} \qquad U(b) = (d_r + \mu_l), \tag{3.3.2}
\]
where µ_r and µ_l are diagonal operators defined via
\[
(\mu_r)_j = \frac{f_{j+1} - a_{j+1} x_j f_j}{f_{j+1}} \qquad \text{and} \qquad (\mu_l)_j = \frac{f_j - f_{j+1}}{f_j}. \tag{3.3.3}
\]
Conversely, starting with a generalised 0-eigenfunction f = (f_i)_{i∈Z} such that f_i ≠ 0 for all i, and building µ_r and µ_l as prescribed, yields an LU-decomposition of H. Therefore, the family of LU-decompositions of H is completely classified by the nonvanishing generalised 0-eigenfunctions.
Remark 3.2. The family of generalised eigenfunctions of H is (generically) two- dimensional. However, multiplication by a nonzero scalar does not change the result- ing factorisation, so the eigenfunctions can be normalised, without loss of generality, so that f0 = 1. This normalisation picks out a 1-dimensional subspace of eigenfunc- tions, which, in connection to our parametrised factorisations, is parametrised by our choice of f1, or, equivalently, −f1 = ρ.
Proof. By Theorem 3.2, we know the form of every solution, if it exists. With b_0 = ρ fixed, define the sequence of interest: f_0 = 1, f_1 = −ρ, and define f_j for j ≠ 0, 1 recursively by forcing b_j = −\frac{f_{j+1}}{f_j} for all j.
We compute d_l + s_r µ_r first:
\begin{align*}
d_l + s_r \mu_r &= \sum_{i \in \mathbb{Z}} \left( E_{ii} - E_{i+1,i} + \frac{f_{i+1} - a_{i+1} x_i f_i}{f_{i+1}} E_{i+1,i} \right) \\
&= \sum_{i \in \mathbb{Z}} \left( E_{ii} - \frac{a_{i+1} x_i f_i}{f_{i+1}} E_{i+1,i} \right) \\
&= \sum_{i \in \mathbb{Z}} \left( E_{ii} + \frac{a_{i+1} x_i}{b_i} E_{i+1,i} \right) \\
&= \sum_{i \in \mathbb{Z}} (E_{ii} + y_i E_{i+1,i}) \\
&= L(y).
\end{align*}
The verification that U(b) = (dr + µl) is similarly straightforward, so is omitted.
Next, we prove that f = (f_i)_{i∈Z} is a generalised 0-eigenfunction for H. Recall that b and y satisfy the following system of equations:
\[
b_i y_i = a_{i+1} x_i, \qquad b_i + y_{i-1} = a_i + x_i, \qquad \forall\, i \in \mathbb{Z}. \tag{3.3.4}
\]
Since
\[
(Hf)_i = f_{i+1} + (a_i + x_i) f_i + a_i x_{i-1} f_{i-1} \qquad \forall\, i \tag{3.3.5}
\]
and each f_i ≠ 0, it suffices to show that \frac{(Hf)_i}{f_i} = 0 for all i ∈ Z. Indeed,
\begin{align}
\frac{(Hf)_i}{f_i} &= \frac{f_{i+1}}{f_i} + a_i + x_i + a_i x_{i-1} \frac{f_{i-1}}{f_i} \tag{3.3.6} \\
&= -b_i + a_i + x_i - \frac{a_i x_{i-1}}{b_{i-1}} \tag{3.3.7} \\
&= y_{i-1} - \frac{b_{i-1} y_{i-1}}{b_{i-1}} \tag{3.3.8} \\
&= 0. \tag{3.3.9}
\end{align}
Hence, f is indeed a generalised 0-eigenfunction for H.
Finally, we establish that if f_{i+1} + (a_i + x_i) f_i + a_i x_{i-1} f_{i-1} = 0 and f_i ≠ 0 for all i, then (d_l + s_r µ_r)(d_r + µ_l) = H:
\begin{align}
(d_l + s_r \mu_r)(d_r + \mu_l) &= \left( \sum_{i \in \mathbb{Z}} \left( E_{ii} - \frac{a_{i+1} x_i f_i}{f_{i+1}} E_{i+1,i} \right) \right) \left( \sum_{j \in \mathbb{Z}} \left( E_{j,j+1} - \frac{f_{j+1}}{f_j} E_{jj} \right) \right) \tag{3.3.10} \\
&= \sum_{i \in \mathbb{Z}} \left( E_{i,i+1} - \frac{f_{i+1} + a_i x_{i-1} f_{i-1}}{f_i} E_{ii} + a_{i+1} x_i E_{i+1,i} \right) \tag{3.3.11} \\
&= \sum_{i \in \mathbb{Z}} \left( E_{i,i+1} + \frac{(a_i + x_i) f_i}{f_i} E_{ii} + a_{i+1} x_i E_{i+1,i} \right) \tag{3.3.12} \\
&= H. \tag{3.3.13}
\end{align}
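A quick numerical illustration (our code): on a constant-coefficient window with a_i = x_i = 1 and ρ = 3 (chosen so that the f_i never vanish, since here f_i = (−1)^i(1 + (ρ−1)i)), build f from the recurrence Hf = 0, read off b and y, and check the factorisation equations:

```python
K = 12
rho = 3.0
f = [1.0, -rho]                       # f_0 = 1, f_1 = -rho
for i in range(1, K):
    # Hf = 0  <=>  f_{i+1} = -(a_i + x_i) f_i - a_i x_{i-1} f_{i-1}; here a = x = 1
    f.append(-2.0 * f[-1] - f[-2])

b = [-f[i + 1] / f[i] for i in range(K)]   # b_i = -f_{i+1}/f_i
y = [1.0 / b[i] for i in range(K)]         # y_i = a_{i+1} x_i / b_i

# b_i y_i = a_{i+1} x_i holds by construction; the content of Hf = 0 is:
assert all(abs(b[i] + y[i - 1] - 2.0) < 1e-9 for i in range(1, K))
```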
3.4 Geometric RSK as a Degeneration of the Discrete-Time Toda Lattice
Whilst the observation of Noumi and Yamada (2.3.9) provides a direct change of coordinates to transition between bi-infinite discrete-time Toda and the bi-infinite extension of geometric RSK, it does not provide a direct way to connect to the original (finite-dimensional) geometric RSK. In this section, our main result, Theorem 3.7, demonstrates how geometric RSK is attainable as a degeneration of discrete-time Toda.
Remark 3.3. From this point on, we shall omit the bars on the geometric RSK variables.
Lemma 3.5. Let M be a square matrix with block structure
\[
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}
\]
with A invertible. Then
\[
\det(M) = \det(A) \det(D - C A^{-1} B).
\]
This is a fairly standard result for block matrices. For a reference, see [EFS93].
Definition 3.6. For an n × n matrix M, and for a, b ∈ [n], let M_{(a),(b)} be the (n − 1) × (n − 1) submatrix of M obtained by deleting the a-th column and b-th row of M.
Lemma 3.6. ([NY04]) Let S = U(a)U(x), for a = (a_1, . . . , a_n) and x = (x_1, . . . , x_n), and suppose τ_k(S_{(1),(n)}) ≠ 0 for k = 1, 2, . . . , n − 1. Then a solution to the geometric RSK equations with input (a, x) exists and is given by the unique pair of sequences y = (y_1, . . . , y_n) and b = (b_2, . . . , b_n), where
\[
b_k = \frac{\tau_{k-1}(S_{(1),(n)})}{\tau_{k-2}(S_{(1),(n)})}, \qquad k = 2, \ldots, n, \tag{3.4.1}
\]
\[
y_k = a_k x_k \frac{\tau_{k-2}(S_{(1),(n)})}{\tau_{k-1}(S_{(1),(n)})}, \qquad k = 1, \ldots, n, \tag{3.4.2}
\]
and where, in addition to setting τ_0 = 1, we also set τ_{−1} = 1.
To illustrate the lemma and definition, consider the case of n = 3:
\[
H = \begin{pmatrix} a_1 & 1 & 0 \\ 0 & a_2 & 1 \\ 0 & 0 & a_3 \end{pmatrix}
\begin{pmatrix} x_1 & 1 & 0 \\ 0 & x_2 & 1 \\ 0 & 0 & x_3 \end{pmatrix}
= \begin{pmatrix} a_1 x_1 & a_1 + x_2 & 1 \\ 0 & a_2 x_2 & a_2 + x_3 \\ 0 & 0 & a_3 x_3 \end{pmatrix}. \tag{3.4.3}
\]
Then one has
\[
H_{(1),(3)} = \begin{pmatrix} a_1 + x_2 & 1 \\ a_2 x_2 & a_2 + x_3 \end{pmatrix} \tag{3.4.4}
\]
and the gRSK solution (y_1, y_2, y_3, b_2, b_3) is given by
\begin{align}
y_1 &= a_1 x_1 = a_1 x_1 \frac{\tau_{-1}(H_{(1),(3)})}{\tau_0(H_{(1),(3)})} \tag{3.4.5} \\
b_2 &= a_1 + x_2 = \frac{\tau_1(H_{(1),(3)})}{\tau_0(H_{(1),(3)})} \tag{3.4.6} \\
y_2 &= \frac{a_2 x_2}{a_1 + x_2} = a_2 x_2 \frac{\tau_0(H_{(1),(3)})}{\tau_1(H_{(1),(3)})} \tag{3.4.7} \\
b_3 &= a_2 + x_3 - \frac{a_2 x_2}{a_1 + x_2} = \frac{\tau_2(H_{(1),(3)})}{\tau_1(H_{(1),(3)})} \tag{3.4.8} \\
y_3 &= a_3 x_3 \frac{a_1 + x_2}{(a_1 + x_2)(a_2 + x_3) - a_2 x_2} = a_3 x_3 \frac{\tau_1(H_{(1),(3)})}{\tau_2(H_{(1),(3)})}. \tag{3.4.9}
\end{align}
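The determinant formulae are easy to test numerically for larger n as well. A sketch (our code), checking the barred gRSK system against (3.4.1)–(3.4.2):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
a = rng.uniform(0.5, 2.0, n)
x = rng.uniform(0.5, 2.0, n)

bd = lambda v: np.diag(v) + np.diag(np.ones(len(v) - 1), 1)   # diag(v) + superdiag of 1's
S = bd(a) @ bd(x)
M = np.delete(np.delete(S, n - 1, axis=0), 0, axis=1)  # S_(1),(n): drop 1st col, nth row

tau = {-1: 1.0, 0: 1.0}
for k in range(1, n):
    tau[k] = np.linalg.det(M[:k, :k])

b = {k: tau[k - 1] / tau[k - 2] for k in range(2, n + 1)}                        # (3.4.1)
y = {k: a[k - 1] * x[k - 1] * tau[k - 2] / tau[k - 1] for k in range(1, n + 1)}  # (3.4.2)

# The (barred) geometric RSK system:
assert abs(y[1] - a[0] * x[0]) < 1e-9
assert abs(b[2] - (a[0] + x[1])) < 1e-9
assert all(abs(a[k - 1] * x[k - 1] - y[k] * b[k]) < 1e-9 for k in range(2, n + 1))
assert all(abs(a[j - 1] + x[j] - y[j] - b[j + 1]) < 1e-9 for j in range(2, n))
```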
We now present our main result: the degeneration from the (finite) discrete-time Toda lattice to geometric RSK.
Theorem 3.7. Let (a, x) = ((a_1, . . . , a_n), (x_1, . . . , x_n)), and define two (n+1) × (n+1) matrices U_δ and L via
\[
U_\delta = U\!\left( x_1 - \frac{a_1}{\delta},\, a_1,\, a_2,\, \ldots,\, a_n \right), \qquad L = L(x_1, x_2, \ldots, x_n), \tag{3.4.10}
\]
i.e.
\[
U_\delta = \begin{pmatrix}
x_1 - \frac{a_1}{\delta} & 1 & & \\
& a_1 & \ddots & \\
& & \ddots & 1 \\
& & & a_n
\end{pmatrix}, \qquad
L = \begin{pmatrix}
1 & & & \\
x_1 & 1 & & \\
& \ddots & \ddots & \\
& & x_n & 1
\end{pmatrix}, \tag{3.4.11}
\]
then the LU decomposition problem \tilde{L}_\delta \tilde{U}_\delta = U_\delta L, where \tilde{U}_\delta = U(c^\delta_1, \ldots, c^\delta_{n+1}) and \tilde{L}_\delta = L(z^\delta_1, \ldots, z^\delta_n), satisfies:
\begin{align}
y_1 &= c^\delta_1 z^\delta_1, \tag{3.4.12} \\
b_k &= \lim_{\delta \to 0} c^\delta_k, \qquad k = 2, \ldots, n, \tag{3.4.13} \\
y_k &= \lim_{\delta \to 0} z^\delta_k, \qquad k = 2, \ldots, n, \tag{3.4.14}
\end{align}
where ((1, b_2, . . . , b_n), (y_1, y_2, . . . , y_n)) is the geometric RSK solution with initial data ((a_1, . . . , a_n), (x_1, . . . , x_n)).
Proof. Let H_δ = U_δ L; then by Lemma 3.1, one has
\[
c^\delta_k = \frac{\tau_k(H_\delta)}{\tau_{k-1}(H_\delta)} \tag{3.4.15}
\]
\[
z^\delta_k = a_k x_k \frac{\tau_{k-1}(H_\delta)}{\tau_k(H_\delta)} \tag{3.4.16}
\]
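The degeneration is easy to observe numerically: build H_δ for a small δ, form the ratios (3.4.15)–(3.4.16), and compare with the geometric RSK solution computed independently (here via the recursion of the barred system; our code):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
a = rng.uniform(0.5, 1.5, n)
x = rng.uniform(0.5, 1.5, n)
delta = 1e-7

# U_delta and L of (3.4.11), as (n+1) x (n+1) matrices:
Ud = np.diag(np.concatenate(([x[0] - a[0] / delta], a))) + np.diag(np.ones(n), 1)
Lm = np.eye(n + 1) + np.diag(x, -1)
Hd = Ud @ Lm

tau = [1.0] + [np.linalg.det(Hd[:k, :k]) for k in range(1, n + 1)]
c = [tau[k] / tau[k - 1] for k in range(1, n + 1)]                        # c_k^delta
z = [a[k - 1] * x[k - 1] * tau[k - 1] / tau[k] for k in range(1, n + 1)]  # z_k^delta

# Reference geometric RSK solution (bars dropped), solved recursively:
y = [a[0] * x[0]]
b = [1.0, a[0] + x[1]]
for j in range(1, n):
    y.append(a[j] * x[j] / b[j])
    if j + 1 < n:
        b.append(a[j] + x[j + 1] - y[j])

assert abs(c[0] * z[0] - y[0]) < 1e-9          # y_1 = c_1^d z_1^d exactly
assert np.allclose(c[1:], b[1:], rtol=1e-4)    # b_k = lim c_k^d
assert np.allclose(z[1:], y[1:], rtol=1e-4)    # y_k = lim z_k^d
```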