Arrangement of minors in the positive Grassmannian

by Miriam Farber

Submitted to the Department of Mathematics in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

June 2017

© Massachusetts Institute of Technology 2017. All rights reserved.

Author: Miriam Farber, Department of Mathematics, May 05, 2017 (signature redacted)

Certified by: Alexander Postnikov, Professor, Thesis Supervisor (signature redacted)

Accepted by: Jonathan Kelner, Chairman, Department Committee on Graduate Theses (signature redacted)


Arrangement of minors in the positive Grassmannian

by

Miriam Farber

Submitted to the Department of Mathematics on May 05, 2017, in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Abstract

This thesis consists of three parts. In the first chapter we discuss arrangements of equal minors of totally positive matrices. More precisely, we investigate the structure of equalities and inequalities between the minors. We show that arrangements of equal minors of largest value are in bijection with sorted sets, which earlier appeared in the context of alcoved polytopes and Gröbner bases. Maximal arrangements of this form correspond to simplices of the alcoved triangulation of the hypersimplex, and the number of such arrangements equals the Eulerian number. On the other hand, we prove in many cases that arrangements of equal minors of smallest value are weakly separated sets. Weakly separated sets, originally introduced by Leclerc and Zelevinsky, are closely related to the positive Grassmannian and the associated cluster algebra. However, we also construct examples of arrangements of smallest minors which are not weakly separated, using chain reactions of mutations of plabic graphs. In the second chapter, we investigate arrangements of t-th largest minors and their relations with the alcoved triangulation of the hypersimplex. We show that second largest minors correspond to the facets of the simplices. We then introduce the notion of cubical distance on the dual graph of the triangulation, and study its relations with these arrangements. In addition, we show that arrangements of largest minors induce a structure of a partially ordered set on the entire collection of minors. We use this triangulation of the hypersimplex to describe a 2-dimensional grid structure on this poset. In the third chapter, we obtain new families of quadratic Schur function identities, via examination of several types of networks and the use of the Lindström-Gessel-Viennot lemma. We generalize identities obtained by Kirillov, Fulmek and Kleber, and also prove a conjecture suggested by Darij Grinberg.

Thesis Supervisor: Alexander Postnikov Title: Professor

Acknowledgments

I had a wonderful experience at MIT, and I would like to thank all those who have contributed to this experience. First and foremost, I would like to thank my advisor Alex Postnikov for introducing me to the beautiful topic of algebraic combinatorics and for his guidance and support over my years at MIT. I would like to thank the other two members of my thesis committee, Richard Stanley and Tom Roby. Thanks to Richard for introducing me to interesting problems and for many valuable discussions. Thanks to Tom for his helpful comments, suggestions and discussions. I am very grateful to Abraham Berman from the Technion for introducing me to mathematics and mathematical research. I wouldn't be here without his support during my undergraduate and Master's studies.

Thanks to the preseminar participants: Sam Hopkins, Wuttisak Trongsiriwat, Efrat Engel, Darij Grinberg, Pavel Galashin, and many others, for great discussions and collaborations.

Last but not least, I would like to thank my wonderful family. To my parents and sisters who supported me over the years, and to my husband Ido, for always being there for me, and for his endless support and encouragement.

Contents

1 Arrangements of equal minors in the positive Grassmannian
1.1 Introduction
1.2 From totally positive matrices to the positive Grassmannian
1.3 Arrangements of minors
1.4 Case k = 2: triangulations and thrackles
1.5 Weakly separated sets and sorted sets
1.6 Inequalities for products of minors
1.7 Cluster algebra on the Grassmannian
1.8 Constructions of matrices for arrangements of largest minors
1.9 The case of the nonnegative Grassmannian
1.10 Construction of arrangements of smallest minors which are not weakly separated
  1.10.1 Plabic graphs
  1.10.2 p-Interlaced sets
  1.10.3 Conjecture and results on pairs of smallest minors
  1.10.4 The 2 x 2 honeycomb and an example of an arrangement of smallest minors which is not weakly separated
  1.10.5 Mutation distance and chain reactions
  1.10.6 Square pyramids and the octahedron/tetrahedron moves
1.11 Final remarks
  1.11.1 Mutation distance
  1.11.2 Inequalities between products of minors
  1.11.3 Schur positivity

2 Arrangements of Minors in the Positive Grassmannian and a Triangulation of the Hypersimplex
2.1 Introduction
2.2 The Triangulation of the Hypersimplex
  2.2.1 Sturmfels's construction
  2.2.2 Circuit triangulation
2.3 Arrangements of second largest minors
  2.3.1 The case k = 2: maximal thrackles
  2.3.2 Arrangements of second largest minors - the general case
2.4 Arrangements of t-th largest minors
  2.4.1 Cubical distance in Γ(k,n)
  2.4.2 Partially ordered set of minors
  2.4.3 Arrangements of t-th largest minors - the general case

3 Quadratic Schur function identities and oriented networks
3.1 Introduction
3.2 Schur functions and the Lindström-Gessel-Viennot lemma
3.3 Exposition of identities
3.4 Proof of Theorem 3.3.3
3.5 Proof of Theorem 3.3.5

List of Figures

1-1 A triangulation (left) and a thrackle (right). The edges of the triangulation correspond to the arrangement of smallest minors Δ_12 = Δ_23 = Δ_34 = Δ_45 = Δ_56 = Δ_16 = Δ_13 = Δ_14 = Δ_15 in the positive Grassmannian Gr+(2,6); the edges of the thrackle correspond to the arrangement of largest minors Δ_13 = Δ_14 = Δ_15 = Δ_25 = Δ_26 = Δ_36 = Δ_37 in Gr+(2,7). This thrackle is obtained from the 5-star by adding two leaves.
1-2 All triangulations of n-gons for n = 3, 4, 5, 6 (up to rotations and reflections).
1-3 All maximal thrackles with 3, 4, 5, and 6 vertices (up to rotations and reflections). These thrackles are obtained from the 3-star (triangle) and the 5-star by adding leaves.
1-4 The 10-gon that corresponds to the thrackle (from Figure 1-1) obtained by attaching two leaves 4 and 7 to the 5-star with vertices 1, 2, 3, 5, 6. The vertices v_1, v_2, v_3, v_5, v_6 of the 10-gon correspond to the vertices of the 5-star, and the points v_4 and v_7 on the sides of the 10-gon correspond to the leaves of the thrackle.
1-5 For the regular pentagon, there are the Eulerian number A(4, 2) = 11 rescalings that give maximal sorted subsets. In the first case, all the scalars λ_i equal 1; the remaining cases involve powers of the golden ratio φ = (1+√5)/2 and form two families of 5 rotations each. In total, we get 1 + 5 + 5 = 11 rescalings.
1-6 For the regular hexagon, there are 10 types of allowed rescalings (up to rotations and reflections) shown in this figure. In total, we get the Eulerian number A(5,2) = 6+6+6+6+6+6+3+3+12+12 = 66 rescalings.
1-7 (M1) square move
1-8 (M2) unicolored edge contraction
1-9 (M3) vertex removal
1-10 P({1, 4, 7, 8}, {2, 3, 5, 6}) and its cyclic rotation P({1, 2, 3, 6}, {4, 5, 7, 8})
1-11 The plabic graph G
1-12 The 2 x 2 honeycomb
1-13 The chain reaction in the 3 x 2 honeycomb
1-14 The 4 x 3 honeycomb
1-15 A honeycomb with one layer
1-16 The octahedron move
1-17 The tetrahedron move
1-18 The chain reaction in a 3 x 2 honeycomb, described using octahedron and tetrahedron moves
1-19 First 8 steps in the chain reaction
1-20 Final 6 steps in the chain reaction

2-1 The graph Γ(2,6)
2-2 A minimal circuit in G_{3,8}
2-3 The figure on the left is a minimal circuit in G_{3,8}. The tuple (1,3,4) can be replaced with the tuple (1,2,5) according to Theorem 2.2.5. The figure on the right depicts the situation described in the theorem.
2-4 The figure that corresponds to Example 2.3.2
2-5 A maximal thrackle and the corresponding poset of minors
2-6 An oriented Young graph. Its inner boundary path is formed by the edges labeled from 1' through 7'. Its outer boundary path is formed by the edges labeled from 1 through 7, and all the vertices that appear along the latter path form the collection V.
2-7 The figure that corresponds to Lemma 2.4.8
2-8 The graph on the left is Q1. The graph on the top right is Q2, and the graph on the bottom right is Q3.
2-9 The figure on the left is a circuit in G_{3,8} which we have already seen before. There are 3 detours depicted in dotted lines, and the circuit on the right is obtained from these detours. These two minimal circuits correspond to a pair of maximal sorted sets of cubical distance 1.
2-10 The description of the sequence from the proof of Lemma 2.4.11
2-11 Depiction of the situation described in (2)
2-12 The left figure corresponds to the base case of the inductive proof of property (2). The right figure corresponds to the inductive step of the proof of property (2).
2-13 An example of an oriented Young graph formed in the case A = B. Here k = 3, n = 6, A = B = {1,3,5}, W = {1,2,3}, J = {{1,3,5}, {1,4,5}, {2,4,5}, {3,4,5}, {3,4,6}, {3,5,6}}.

3-1 For λ = (3, 2, 2, 1), t = 1, and n = 5. The rectangles are the sources and the circles are the sinks.
3-2 This figure depicts the set of sources and sinks. Each of them is divided into an x part and a y part, according to the classification above.

Chapter 1

Arrangements of equal minors in the positive Grassmannian

This chapter is based on [8].

1.1 Introduction

In this chapter, we investigate possible equalities and inequalities between minors of totally positive matrices. This study can be viewed as a variant of the matrix completion problem. It is related to the "higher positroid structure" on the Grassmannian. In order to attack this problem, we employ a variety of tools, such as Leclerc-Zelevinsky's weakly separated sets [22, 26], Fomin-Zelevinsky's cluster algebras and cluster mutations [10, 11], combinatorics of the positive Grassmannian and plabic graphs [27], Skandera's inequalities for products of minors [31], alcoved polytopes and triangulations of hypersimplices [20], etc.

One motivation for the current chapter came from the study of combinatorics of the positive Grassmannian. The positive (resp., nonnegative) Grassmannian was described in [27] as the part of the real Grassmannian Gr(k, n) such that all Plücker coordinates (maximal minors of k x n matrices) are strictly positive (resp., nonnegative). The nonnegative part of the Grassmannian Gr(k, n) can be subdivided into positroid cells, which are defined by setting some subset of the Plücker coordinates to zero, and requiring the other Plücker coordinates to be strictly positive. The positroid cells and the corresponding arrangements of zero and positive Plücker coordinates were combinatorially characterized in [27].

One can introduce a finer subdivision of the nonnegative part of the Grassmannian, where the strata are defined by all possible equalities and inequalities between the Plücker coordinates. This is a "higher analog" of the positroid stratification. A natural question is: How to extend the combinatorial constructions from [27] to this "higher positroid stratification" of the Grassmannian?

Another motivation for the study of equal minors came from a variant of the matrix completion problem. This is the problem of completing missing entries of a partial matrix so that the resulting matrix satisfies a certain property (e.g., it is positive definite or totally positive). Completion problems arise in various applications, such as statistics, discrete optimization, data compression, etc. Totally positive matrix completion problems were studied in [3, 15, 16, 17].

Recently, the following variant of the completion problem was investigated in [4] and [9]. It is well-known that one can "slightly perturb" a totally nonnegative matrix (with all nonnegative minors) and obtain a totally positive matrix (with all strictly positive minors). It is natural to ask how to do this in a minimal way. In other words, one would like to find the minimal number of matrix entries that one needs to change in order to get a totally positive matrix. The most degenerate totally nonnegative matrix all of whose entries are positive is the matrix filled with all 1's. The above question for this matrix can be equivalently reformulated as follows: What is the maximal number of equal entries in a totally positive matrix? (One can always rescale all equal matrix entries to 1's.) It is then natural to ask about the maximal number of equal minors in a totally positive matrix.

In [4, 9], it was shown that the maximal number of equal entries in a totally positive n x n matrix is Θ(n^{4/3}), and that the maximal number of equal 2 x 2 minors in a 2 x n totally positive matrix is Θ(n^{4/3}). It was also shown that the maximal number of equal k x k minors in a k x n totally positive matrix is O(n^{k - k/(k+1)}). The construction is based on the famous Szemerédi-Trotter theorem [36] (conjectured by Erdős) about the maximal number of point-line incidences in the plane.

One would like an explicit combinatorial description of all possible collections of equal minors in a totally positive matrix (or collections of equal Plücker coordinates at a point of the positive Grassmannian). In general, this seems to be a hard problem, which is still far from having a complete solution. However, in the special cases of minors of smallest and largest values, the problem leads to nice combinatorial structures.

In this chapter we show that arrangements of equal minors of largest value are exactly sorted sets. Such sets correspond to the simplices of the alcoved triangulation of the hypersimplex [32, 20]. They appear in the study of Gröbner bases [35] and in the study of alcoved polytopes [20].

On the other hand, we show that arrangements of equal minors of smallest value include weakly separated sets of Leclerc-Zelevinsky [22]. Weakly separated sets are closely related to the positive Grassmannian and plabic graphs [26, 27]. In many cases, we prove that arrangements of smallest minors are exactly weakly separated sets.

However, we construct examples of arrangements of smallest minors which are not weakly separated, and make a conjecture on the structure of such arrangements. We construct these examples using certain chain reactions of mutations of plabic graphs, and also visualize them geometrically using square pyramids and octahedron/tetrahedron moves. As a byproduct of this construction, we obtain a new type of inequality for products of minors, which does not follow from Skandera's inequalities.

We present below the general outline of the chapter. In Section 1.2, we discuss the positive Grassmannian Gr+(k, n). In Section 1.3, we define arrangements of minors. As a warm-up, in Section 1.4 we consider the case of the positive Grassmannian Gr+(2, n). In this case, we show that maximal arrangements of smallest minors are in bijection with triangulations of the n-gon, while the arrangements of largest minors are in bijection with thrackles, which are graphs where every pair of edges intersects. In Section 1.5, we define weakly separated sets and sorted sets. They generalize triangulations of the n-gon and thrackles. We formulate our main result (Theorem 1.5.4) on arrangements of largest minors, which says that these arrangements coincide with sorted sets. We also give results (Theorems 1.5.5 and 1.5.6) and Conjecture 1.5.7 on arrangements of smallest minors, which relate these arrangements to weakly separated sets. In Section 1.6, we use Skandera's inequalities [31] for products of minors to prove one direction (⇒) of Theorems 1.5.4 and 1.5.6. In Section 1.7, we discuss the cluster algebra associated with the Grassmannian. According to [26, 27], maximal weakly separated sets form clusters of this cluster algebra. We use Fomin-Zelevinsky's Laurent phenomenon [10] and the positivity result of Lee-Schiffler [23] to prove Theorem 1.5.5 (and thus the second direction of Theorem 1.5.6). In Section 1.8, we prove the other direction (⇐) of Theorem 1.5.4. In order to do this, for any sorted set, we show how to construct an element of the Grassmannian, that is, a matrix with the needed equalities and inequalities between the minors. We actually show that any torus orbit on the positive Grassmannian Gr+(k, n) contains the Eulerian number A(n - 1, k - 1) of such special elements (Theorem 1.8.1). We give examples for Gr+(3, 5) and Gr+(3, 6) that can be described as certain labellings of vertices of the regular pentagon and hexagon by positive numbers. The proof of Theorem 1.8.1 is based on the theory of alcoved polytopes [20]. In Section 1.9, we discuss the case of the nonnegative Grassmannian Gr≥(2, n). If we allow some minors to be zero, then we can actually achieve a larger number (∼ n²/3) of equal positive minors. In Section 1.10, we construct examples of arrangements of smallest minors for Gr+(4, 8) and Gr+(5, 10) which are not weakly separated. We formulate Conjecture 1.10.10 on the structure of pairs of equal smallest minors, and prove it for Gr+(k, n) with k ≤ 5. As a part of the proof, we obtain a new inequality for products of minors (Proposition 1.10.12). Our construction uses plabic graphs, especially honeycomb plabic graphs, which have mostly hexagonal faces. We describe certain chain reactions of mutations (square moves) for these graphs. We also give a geometric visualization of these chain reactions using square pyramids. In Section 1.11, we give a few final remarks.

1.2 From totally positive matrices to the positive Grassmannian

A matrix is called totally positive (resp., totally nonnegative) if all its minors, that is, determinants of square submatrices (of all sizes), are positive (resp., nonnegative). The notion of total positivity was introduced by Schoenberg [29] and by Gantmacher and Krein [13] in the 1930s. Lusztig [24, 25] extended total positivity to the general Lie-theoretic setup and defined the positive part for a reductive Lie group G and a generalized partial flag manifold G/P.

For n ≥ k ≥ 0, the Grassmannian Gr(k, n) (over ℝ) is the space of k-dimensional linear subspaces in ℝⁿ. It can be identified with the space of real k x n matrices of rank k modulo row operations (the rows of a matrix span a k-dimensional subspace in ℝⁿ). The maximal k x k minors of k x n matrices form projective coordinates on the Grassmannian, called the Plücker coordinates. We will denote the Plücker coordinates by Δ_I, where I is a k-element subset in [n] := {1, ..., n} corresponding to the columns of the maximal minor. We use the following conventions:

  Δ_(i_1, i_2, ..., i_k) := Δ_{i_1, i_2, ..., i_k}   for i_1 < ... < i_k,

and Δ_(..., i, ..., j, ...) = -Δ_(..., j, ..., i, ...), so that Δ is defined for tuples whose entries appear in any order.

These coordinates on Gr(k, n) are not algebraically independent; they satisfy the Plücker relations [27]:

  Δ_(p_1, ..., p_k) Δ_(q_1, ..., q_k) = Σ_{i_1 < ... < i_t} Δ_(p_1, ..., q_{k-t+1}, ..., q_k, ..., p_k) Δ_(q_1, q_2, ..., q_{k-t}, p_{i_1}, ..., p_{i_t}).    (1.1)

Here (p_1, ..., q_{k-t+1}, ..., q_k, ..., p_k) denotes the tuple (p_1, ..., p_k) with the entries p_{i_1}, ..., p_{i_t} replaced by q_{k-t+1}, ..., q_k, and vice versa for the other factor.

In [27], the positive Grassmannian Gr+(k, n) was described as the subset of the Grassmannian Gr(k, n) such that all the Plücker coordinates are simultaneously positive: Δ_I > 0 for all I. (Strictly speaking, since the Δ_I are projective coordinates defined up to rescaling, one should say "all Δ_I have the same sign.") Similarly, the nonnegative Grassmannian Gr≥(k, n) was defined by the condition Δ_I ≥ 0 for all I. This construction agrees with Lusztig's general theory of total positivity. (However, it is a nontrivial fact that Lusztig's positive part of Gr(k, n) is the same as Gr+(k, n) defined above.)

The space of totally positive (totally nonnegative) k x m matrices A = (a_ij) can be embedded into the positive (nonnegative) Grassmannian Gr+(k, n) with n = m + k, as follows, see [27]. The element of the Grassmannian Gr(k, n) associated with a k x m matrix A is represented by the k x n matrix

  φ(A) =  ( 1  0  ...  0  0  0   (-1)^{k-1} a_{k1}   (-1)^{k-1} a_{k2}  ...  (-1)^{k-1} a_{km} )
          ( ...                       ...                                      ...            )
          ( 0  0  ...  1  0  0        a_{31}              a_{32}       ...       a_{3m}       )     (1.2)
          ( 0  0  ...  0  1  0       -a_{21}             -a_{22}       ...      -a_{2m}       )
          ( 0  0  ...  0  0  1        a_{11}              a_{12}       ...       a_{1m}       )

Under the map φ, all minors (of all sizes) of the k x m matrix A are equal to maximal k x k minors of the extended k x n matrix φ(A). More precisely, denote by Δ_{I,J}(A) the minor of the k x m matrix A with row set I = {i_1, ..., i_r} ⊂ [k] and column set J = {j_1, ..., j_r} ⊂ [m]; and denote by Δ_K(B) the maximal k x k minor of a k x n matrix B in column set K ⊂ [n], where n = m + k. Then

  Δ_{I,J}(A) = Δ_{([k] \ {k+1-i_1, ..., k+1-i_r}) ∪ {j_1+k, ..., j_r+k}}(φ(A)).    (1.3)

This map is actually a bijection between the space of totally positive k x m matrices and the positive Grassmannian Gr+(k, n). It also identifies the space of totally nonnegative k x m matrices with the subset of the nonnegative Grassmannian Gr≥(k, n) such that the Plücker coordinate Δ_{[k]} is nonzero. Note, however, that the whole nonnegative Grassmannian Gr≥(k, n) is strictly bigger than the space of totally nonnegative k x m matrices, and it has a more subtle combinatorial structure.

This construction allows us to reformulate questions about equalities and inequalities between minors (of various sizes) in terms of analogous questions for the positive Grassmannian, involving only maximal k x k minors (the Plücker coordinates). One immediate technical simplification is that, instead of minors with two sets of indices (for rows and columns), we will use the Plücker coordinates Δ_I with one set of column indices I. More significantly, the reformulation of the problem in terms of the Grassmannian unveils symmetries which are hidden on the level of matrices.

Indeed, the positive Grassmannian Gr+(k, n) possesses a cyclic symmetry. Denote by [v_1, ..., v_n] the point in Gr(k, n) given by n column vectors v_1, ..., v_n ∈ ℝ^k. Then the map

  [v_1, ..., v_n] ↦ [(-1)^{k-1} v_n, v_1, v_2, ..., v_{n-1}]

preserves the positive Grassmannian Gr+(k, n). This defines an action of the cyclic group ℤ/nℤ on the positive Grassmannian Gr+(k, n).

We will see that all combinatorial structures that appear in the study of the positive Grassmannian and arrangements of equal minors have the cyclic symmetry related to this action of ℤ/nℤ.
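A quick numerical sanity check of the twisted cyclic shift (a sketch, not from the thesis): take a point of Gr+(3, 6) given by columns on the moment curve, apply the shift, and confirm that all Plücker coordinates stay positive.

```python
# Sketch (assumed test data): the twisted cyclic shift preserves positivity.
import itertools
import numpy as np

k, n = 3, 6
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
A = np.vstack([t**0, t**1, t**2])          # columns on the moment curve: a point of Gr+(3,6)

def all_pluckers_positive(M):
    return all(np.linalg.det(M[:, list(c)]) > 0
               for c in itertools.combinations(range(n), k))

shifted = np.hstack([((-1) ** (k - 1)) * A[:, [n - 1]], A[:, : n - 1]])
print(all_pluckers_positive(A), all_pluckers_positive(shifted))   # True True
```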

1.3 Arrangements of minors

Definition 1.3.1. LetI = (Zo,1, , I) be an ordered set-partition of the set ([n]) of all k-element subsets in [n]. Let us subdivide the nonnegative GrassmannianGr (k, n) into the strata S1 labelled by such ordered set partitionsI and given by the conditions:

1. A1 = 0 for I E Io,

2. A, = Aj if I, J C 11,

3. A, < Aj if IC I and J EIE with i

An arrangement of minors is an ordered set-partitionI such that the stratum S- is not empty.

Example 1.3.2. Let

  I_0 = ∅,  I_1 = {{3,4}},  I_2 = {{1,4}},  I_3 = {{1,2}, {2,3}, {1,3}, {2,4}}.

Then 𝓘 = (I_0, I_1, I_2, I_3) is an ordered set-partition of ([4] choose 2). Consider the matrix

  A = ( 1  2  1  1/3 )
      ( 1  3  2   1  ),

which satisfies

  Δ_34 = 1/3,  Δ_14 = 2/3,  Δ_12 = Δ_23 = Δ_13 = Δ_24 = 1.

We get that S_𝓘 is nonempty since A ∈ S_𝓘, and hence 𝓘 is an arrangement of minors.
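The arrangement in Example 1.3.2 can be confirmed directly; the following sketch (not part of the thesis) recomputes all 2 x 2 minors of the matrix A and groups the column pairs by minor value.

```python
# Sketch: recompute the minors of the matrix from Example 1.3.2.
from fractions import Fraction
from itertools import combinations

A = [[Fraction(1), Fraction(2), Fraction(1), Fraction(1, 3)],
     [Fraction(1), Fraction(3), Fraction(2), Fraction(1)]]

minors = {}
for i, j in combinations(range(4), 2):                 # column pairs, 0-based
    d = A[0][i] * A[1][j] - A[0][j] * A[1][i]          # 2x2 determinant
    minors.setdefault(d, []).append((i + 1, j + 1))    # record 1-based column labels

for value in sorted(minors):
    print(value, sorted(minors[value]))
# 1/3 [(3, 4)]
# 2/3 [(1, 4)]
# 1   [(1, 2), (1, 3), (2, 3), (2, 4)]
```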

Problem 1.3.3. Describe combinatorially all possible arrangements of minors in Gr≥(k, n). Investigate the geometric and the combinatorial structures of the stratification Gr≥(k, n) = ⋃ S_𝓘.

For k = 1, this stratification is equivalent to the subdivision of the linear space ℝⁿ by the hyperplanes x_i = x_j, which forms the Coxeter arrangement of type A, also known as the braid arrangement. The structure of this Coxeter arrangement is well studied. Combinatorially, it is equivalent to the face structure of the permutohedron. For k ≥ 2, the above problem seems to be quite nontrivial.

Postnikov ([27]) described the cell structure of the nonnegative Grassmannian Gr≥(k, n), which is equivalent to the description of the possible sets I_0. This description already involves quite rich and nontrivial combinatorial structures. It was shown that the possible I_0's are in bijection with various combinatorial objects: positroids, decorated permutations, Le-diagrams, Grassmann necklaces, etc. The stratification of Gr≥(k, n) into the strata S_𝓘 is a finer subdivision of the positroid stratification studied in [27]. It should lead to even more interesting combinatorial objects.

In the present chapter, we mostly discuss the case of the positive Grassmannian Gr+(k, n), that is, we assume that I_0 = ∅. We concentrate on a combinatorial description of the possible sets I_1 and I_l. In Section 1.9 we also discuss some results for the nonnegative Grassmannian Gr≥(k, n).

Definition 1.3.4. We say that a subset 𝒥 ⊂ ([n] choose k) is an arrangement of smallest minors in Gr+(k, n) if there exists a nonempty stratum S_𝓘 such that I_0 = ∅ and I_1 = 𝒥.

We also say that 𝒥 ⊂ ([n] choose k) is an arrangement of largest minors in Gr+(k, n) if there exists a nonempty stratum S_𝓘 such that I_0 = ∅ and I_l = 𝒥.

As a warm-up, in the next section we describe all possible arrangements of smallest and largest minors in the case k = 2. We will treat the general case in the subsequent sections.

1.4 Case k = 2: triangulations and thrackles

In the case k = 2, one can identify 2-element sets I = {i, J} that label the Plucker coordinates A, with the edges {i, j} of the complete graph Kn on the vertices 1, .. , n. A subset in ('1) can be identified with a subgraph G c Kn.

Let us assume that the vertices 1, ... , n are arranged on the circle in the clockwise order.

Definition 1.4.1. For distinct a, b, c, d E [n], we say that the two edges {a, b} and

{ c, d} are non-crossing if the corresponding straight-line chords [a, b] and [c, d] in the circle do not cross each other. Otherwise, if the chords [a, b] and [c, d] cross each other, we say that the edges {a, b} and {c, d} are crossing.

For example, the two edges {1, 4} and {2, 3} are non-crossing; while the edge

{1, 3} and {2, 4} are crossing.

Theorem 1.4.2. A nonempty subgraph G ⊂ K_n corresponds to an arrangement of smallest minors in Gr+(2, n) if and only if every pair of edges in G is non-crossing, or they share a common vertex.

Theorem 1.4.3. A nonempty subgraph H ⊂ K_n corresponds to an arrangement of largest minors in Gr+(2, n) if and only if every pair of edges in H is crossing, or they share a common vertex.

In one direction (⇒), both Theorems 1.4.3 and 1.4.2 easily follow from the 3-term Plücker relation for the Plücker coordinates Δ_ij in Gr+(2, n):

  Δ_ac Δ_bd = Δ_ab Δ_cd + Δ_ad Δ_bc,   for a < b < c < d.

Here all the Δ_ij should be strictly positive. Indeed, if Δ_ac = Δ_bd, then some of the minors Δ_ab, Δ_bc, Δ_cd, Δ_ad should be strictly smaller than Δ_ac = Δ_bd. Thus the pair of crossing edges {a, c} and {b, d} cannot belong to an arrangement of smallest minors. On the other hand, if, say, Δ_ab = Δ_cd, then Δ_ac or Δ_bd should be strictly greater than Δ_ab = Δ_cd. Thus the pair of non-crossing edges {a, b} and {c, d} cannot belong to an arrangement of largest minors. Similarly, the pair of non-crossing edges {a, d} and {b, c} cannot belong to an arrangement of largest minors.

In order to prove Theorems 1.4.3 and 1.4.2 it remains to show that, for any nonempty subgraph of K_n with no crossing (resp., with no non-crossing) edges, there exists an element of Gr+(2, n) with the corresponding arrangement of equal smallest (resp., largest) minors. We will give explicit constructions of 2 x n matrices that represent such elements of the Grassmannian. Before we do this, let us discuss triangulations and thrackles.

When we say that G is a "maximal" subgraph of K_n satisfying some property, we mean that it is maximal by inclusion of edge sets, that is, there is no other subgraph of K_n satisfying this property whose edge set contains the edge set of G.

Clearly, maximal subgraphs G ⊂ K_n without crossing edges correspond to triangulations of the n-gon. Such graphs contain all the "boundary" edges {1, 2}, {2, 3}, ..., {n - 1, n}, {n, 1} together with some n - 3 non-crossing diagonals that subdivide the n-gon into triangles, see Figure 1-1 (the graph on the left-hand side) and Figure 1-2. Of course, the number of triangulations of the n-gon is the famous Catalan number

  C_{n-2} = (1/(n-1)) (2(n-2) choose n-2).

Definition 1.4.4. Let us call subgraphs G ⊂ K_n such that every pair of edges in G is crossing or shares a common vertex thrackles. (Our thrackles are a special case of Conway's thrackles; the latter are not required to have vertices arranged on a circle.)

Figure 1-1: A triangulation (left) and a thrackle (right). The edges of the triangulation correspond to the arrangement of smallest minors Δ_12 = Δ_23 = Δ_34 = Δ_45 = Δ_56 = Δ_16 = Δ_13 = Δ_14 = Δ_15 in the positive Grassmannian Gr+(2,6); the edges of the thrackle correspond to the arrangement of largest minors Δ_13 = Δ_14 = Δ_15 = Δ_25 = Δ_26 = Δ_36 = Δ_37 in Gr+(2,7). This thrackle is obtained from the 5-star by adding two leaves.

Figure 1-2: All triangulations of n-gons for n = 3, 4, 5, 6 (up to rotations and reflections).

For an odd number 2r + 1 ≥ 3, let the (2r + 1)-star be the subgraph of K_{2r+1} such that each vertex i is connected by edges with the vertices i + r and i + r + 1, where the labels of vertices are taken modulo 2r + 1. We call such graphs odd stars. Clearly, odd stars are thrackles.

We can obtain more thrackles by attaching some leaves to vertices of an odd star, as follows. As before, we assume that the vertices 1, ..., 2r + 1 of the (2r + 1)-star are arranged on a circle. For each i ∈ [2r + 1], we can insert some number k_i ≥ 0 of vertices arranged on the circle between the vertices i + r and i + r + 1 (modulo 2r + 1) and connect them by edges with the vertex i. Then we should relabel all vertices of the obtained graph by the numbers 1, ..., n clockwise starting from any vertex, where n = (2r + 1) + Σ k_i. For example, the graph shown in Figure 1-1 (on the right-hand side) is obtained from the 5-star by adding two leaves. More examples of thrackles are shown in Figure 1-3.

Figure 1-3: All maximal thrackles with 3, 4, 5, and 6 vertices (up to rotations and reflections). These thrackles are obtained from the 3-star (triangle) and the 5-star by adding leaves.

We leave the proof of the following claim as an exercise for the reader.

Proposition 1.4.5. Maximal thrackles in K_n have exactly n edges. They are obtained from an odd star by attaching some leaves, as described above. The number of maximal thrackles in K_n is 2^{n-1} - n.

Note that the number 2^{n-1} - n is the Eulerian number A(n - 1, 1), that is, the number of permutations w_1, ..., w_{n-1} of size n - 1 with exactly one descent w_i > w_{i+1}. Theorems 1.4.2 and 1.4.3 imply the following results.

Corollary 1.4.6. Maximal arrangements of smallest minors in Gr+(2, n) correspond to triangulations of the n-gon. They contain exactly 2n - 3 minors. The number of such maximal arrangements is the Catalan number C_{n-2} = (1/(n-1)) (2(n-2) choose n-2).

Corollary 1.4.7. Maximal arrangements of largest minors in Gr+(2, n) correspond to maximal thrackles in K_n. They contain exactly n minors. The number of such maximal arrangements is the Eulerian number A(n - 1, 1) = 2^{n-1} - n.
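The counts in Corollaries 1.4.6 and 1.4.7 can be confirmed by brute force for small n. The sketch below (illustrative, not from the thesis) enumerates inclusion-maximal subgraphs of K_n in which every pair of edges is non-crossing (resp. crossing) or shares a vertex, and compares the counts with C_{n-2} and 2^{n-1} - n.

```python
# Sketch: brute-force check of Corollaries 1.4.6 and 1.4.7 for small n.
from itertools import combinations
from math import comb

def crossing(e, f):
    a, b = sorted(e)
    return (a < f[0] < b) != (a < f[1] < b)

def compatible(e, f, largest):
    if set(e) & set(f):                       # sharing a vertex is always allowed
        return True
    return crossing(e, f) if largest else not crossing(e, f)

def maximal_families(n, largest):
    edges = list(combinations(range(1, n + 1), 2))
    families = []
    def grow(current, candidates):
        extendable = [e for e in candidates
                      if all(compatible(e, f, largest) for f in current)]
        if not extendable:
            families.append(frozenset(current))
            return
        e, rest = extendable[0], extendable[1:]   # take the edge or forbid it
        grow(current + [e], rest)
        grow(current, rest)
    grow([], edges)
    return {F for F in families if not any(F < G for G in families)}

for n in range(3, 7):
    tri = len(maximal_families(n, largest=False))
    thr = len(maximal_families(n, largest=True))
    print(n, tri == comb(2 * (n - 2), n - 2) // (n - 1), thr == 2 ** (n - 1) - n)
```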

Let us return to the proof of Theorem 1.4.2. The following claim is essentially well-known in the context of Fomin-Zelevinsky's cluster algebras [10, 11], more specifically, cluster algebras of finite type A. We will talk more about the connection with cluster algebras in Section 1.7.

Proposition 1.4.8. Let G ⊂ K_n be a graph corresponding to a triangulation of the n-gon. Assign 2n - 3 positive real parameters x_ij to the edges {i, j} of G.

There exists a 2 x n matrix A such that the minors Δ_ij(A) corresponding to the edges {i, j} of G are Δ_ij(A) = x_ij.

All other minors Δ_ab(A) are Laurent polynomials in the x_ij with positive integer coefficients and with at least two monomials. Such a matrix A is unique up to the left SL_2-action, i.e., it is unique regarded as an element of the Grassmannian Gr(2, n).

Proof. We construct A by induction on n. For n = 2, we have

  A = ( 1   0    )
      ( 0  x_12  ).

Now assume that n ≥ 3. For any triangulation of the n-gon, there is a vertex i ∈ {2, ..., n - 1} which is adjacent to only one triangle of the triangulation. This means that the graph G contains the edges {i-1, i}, {i, i+1}, {i-1, i+1}. Let G' be the graph obtained from G by removing the vertex i together with the two adjacent edges {i - 1, i} and {i, i + 1}; it corresponds to a triangulation of the (n - 1)-gon (with vertices labelled by 1, ..., i - 1, i + 1, ..., n). By the induction hypothesis, we have already constructed a 2 x (n - 1) matrix A' = (v_1, ..., v_{i-1}, v_{i+1}, ..., v_n) for the graph G' with the required properties, where v_1, ..., v_{i-1}, v_{i+1}, ..., v_n are the column vectors of A'. Let us take the 2 x n matrix

  A = ( v_1, ..., v_{i-1},  (x_{i,i+1}/x_{i-1,i+1}) v_{i-1} + (x_{i-1,i}/x_{i-1,i+1}) v_{i+1},  v_{i+1}, ..., v_n ).

One can easily check that the matrix A has the required properties. Indeed, all 2 x 2 minors of A whose indices do not include i are the same as the corresponding minors of A'. We have

  Δ_{i-1,i}(A) = (x_{i-1,i}/x_{i-1,i+1}) det(v_{i-1}, v_{i+1}) = x_{i-1,i}

and

  Δ_{i,i+1}(A) = (x_{i,i+1}/x_{i-1,i+1}) det(v_{i-1}, v_{i+1}) = x_{i,i+1}.

Also, for j ≠ i - 1, i + 1, the minor Δ_ij(A) equals

  (x_{i,i+1}/x_{i-1,i+1}) det(v_{i-1}, v_j) + (x_{i-1,i}/x_{i-1,i+1}) det(v_{i+1}, v_j),

which is a Laurent polynomial in the x_ij with positive integer coefficients and at least two terms.

The uniqueness of A modulo the left SL_2-action also easily follows by induction. By the induction hypothesis, the graph G' uniquely defines the matrix A' modulo the left SL_2-action. The columns v_{i-1} and v_{i+1} of A' are linearly independent (because all 2 x 2 minors of A' are strictly positive). Thus the i-th column of A is a linear combination α v_{i-1} + β v_{i+1}. The conditions Δ_{i-1,i}(A) = x_{i-1,i} and Δ_{i,i+1}(A) = x_{i,i+1} imply that β = x_{i-1,i}/det(v_{i-1}, v_{i+1}) = x_{i-1,i}/x_{i-1,i+1} and α = x_{i,i+1}/det(v_{i-1}, v_{i+1}) = x_{i,i+1}/x_{i-1,i+1}.  □

Example 1.4.9. Let us give some examples of matrices A corresponding to triangulations. Assume for simplicity that all x_ij = 1. According to the above construction, these matrices are obtained, starting from the identity 2 x 2 matrix, by repeatedly inserting sums of adjacent columns between these columns. The matrices corresponding to the triangulations from Figure 1-2 (in the same order) are

  ( 1 1 0 )    ( 1 1 1 0 )    ( 1 3 2 1 0 )
  ( 0 1 1 )    ( 0 1 2 1 )    ( 0 1 1 1 1 )

  ( 1 4 3 2 1 0 )    ( 1 3 2 3 1 0 )    ( 1 1 1 2 1 0 )
  ( 0 1 1 1 1 1 )    ( 0 1 1 2 1 1 )    ( 0 1 2 5 3 1 )
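The inductive construction of Proposition 1.4.8 is easy to carry out by machine. The sketch below (not part of the thesis) takes all x_ij = 1 and repeatedly inserts the sum of the first two columns between them, reproducing the fan-triangulation matrices of Example 1.4.9 and checking that exactly the edges of the fan triangulation at vertex 1 attain the smallest minor.

```python
# Sketch: the construction of Proposition 1.4.8 with all x_ij = 1 (fan triangulation at vertex 1).
from fractions import Fraction
from itertools import combinations

def fan_matrix(n):
    """Start from the 2x2 identity and repeatedly insert the sum of the
    first two columns between them (all x_ij = 1)."""
    cols = [(Fraction(1), Fraction(0)), (Fraction(0), Fraction(1))]
    while len(cols) < n:
        cols.insert(1, (cols[0][0] + cols[1][0], cols[0][1] + cols[1][1]))
    return cols

def minor(cols, i, j):                  # 1-based column labels, i < j
    (a, c), (b, d) = cols[i - 1], cols[j - 1]
    return a * d - b * c

n = 5
cols = fan_matrix(n)
print([tuple(map(int, v)) for v in cols])   # [(1,0), (3,1), (2,1), (1,1), (0,1)] as in Example 1.4.9

smallest = {(i, j) for i, j in combinations(range(1, n + 1), 2) if minor(cols, i, j) == 1}
boundary = {(i, i + 1) for i in range(1, n)} | {(1, n)}
fan = {(1, i) for i in range(3, n)}
print(smallest == boundary | fan)           # True: exactly the fan triangulation at vertex 1
```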

We can now finish the proof of Theorem 1.4.2.

Proof of Theorem 1.4.2. Let G ⊂ K_n be a graph with no crossing edges. Pick a maximal graph G̃ ⊂ K_n without crossing edges (i.e., a triangulation of the n-gon) that contains all edges of G. Construct the matrix A for the graph G̃ as in Proposition 1.4.8 with

  x_ij = 1       if {i, j} is an edge of G,
  x_ij = 1 + ε   if {i, j} is an edge of G̃ \ G,

where ε > 0 is a small positive number.

The minors of A corresponding to the edges of G are equal to 1, the minors corresponding to the edges of G̃ \ G are slightly bigger than 1, and all other 2 x 2 minors are bigger than 1 (if ε is sufficiently small), because they are positive integer Laurent polynomials in the x_ij with at least two terms.  □

Let us now prove Theorem 1.4.3.

Proof of Theorem 1.4.3. For a thrackle G, we need to construct a 2 x n matrix B such that all 2 x 2 minors of B corresponding to edges of G are equal to each other, and all other 2 x 2 minors are strictly smaller.

First, we consider the case of maximal thrackles. According to Proposition 1.4.5, a maximal thrackle G is obtained from an odd star by attaching some leaves to its vertices. Assume that it is the (2r + 1)-star with k_i ≥ 0 leaves attached to the i-th vertex, for i = 1, ..., 2r + 1. We have n = (2r + 1) + Σ k_i.

Let m = 2(2r + 1). Consider a regular m-gon with center at the origin. To be more specific, let us take the m-gon with the vertices u_i = (cos(2πi/m), sin(2πi/m)), for i = 1, ..., m. Let us mark k_i points on the side [u_{i+r}, u_{i+r+1}] of the m-gon that subdivide this side into k_i + 1 equal parts, for i = 1, ..., m. (Here the indices are taken modulo m. We assume that k_{i+m/2} = k_i.) Let v_1, ..., v_n, -v_1, ..., -v_n be all vertices of the m-gon and all marked points, ordered counterclockwise starting from v_1 = u_1. (In order to avoid confusion between edges of the graph G and edges of the m-gon, we use the word "side" for the latter.) For example, Figure 1-4 shows the 10-gon (with extra marked points) that corresponds to the thrackle shown on Figure 1-1.

Figure 1-4: The 10-gon that corresponds to the thrackle (from Figure 1-1) obtained by attaching two leaves 4 and 7 to the 5-star with vertices 1, 2, 3, 5, 6. The vertices v_1, v_2, v_3, v_5, v_6 of the 10-gon correspond to the vertices of the 5-star, and the points v_4 and v_7 on the sides of the 10-gon correspond to the leaves of the thrackle.

We claim that the 2 x n matrix B with the column vectors v_1, ..., v_n has the needed equalities and inequalities between minors. Indeed, the minor Δ_ij(B) equals the volume Vol(v_i, v_j) of the parallelogram generated by the vectors v_i and v_j, for i < j. If {i, j} is an edge of G, then (the endpoint of) one of the two vectors, say v_i, is a vertex of the m-gon, and (the endpoint of) the other vector v_j lies on the side of the m-gon which is farthest from the line spanned by the vector v_i. In this case, the volume Vol(v_i, v_j) has the value sin(2πr/m).

Let us now consider the case in which {i, j} is not an edge of G. If one of i and j (say i) is not a leaf of the thrackle (that is, v_i is a vertex of the m-gon), then v_j does not belong to the side of the m-gon which is farthest from the line spanned by v_i, so Vol(v_i, v_j) is smaller than sin(2πr/m). If both i and j are leaves, then one can always strictly increase Vol(v_i, v_j) by replacing v_i with one of the endpoints of the side of the m-gon that contains v_i. Thus Vol(v_i, v_j) is smaller than the maximal possible volume. This proves the theorem for maximal thrackles.

Let us now assume that G is not a maximal thrackle. Pick a maximal thrackle G̃ that contains G. Construct the vectors v_1, ..., v_n for G̃ as described above. Denote by a the largest minor in the matrix, and by b the second largest. Let us show how to slightly modify the vectors v_i so that some minors (namely, the minors corresponding to the edges in G̃ \ G) become smaller, while the minors corresponding to the edges of G remain the same.

Suppose that we want to remove an edge {i, j} of G̃, that is, {i, j} is not an edge of G. If this edge is a leaf of G̃, then one of its vertices, say i, has degree 1, and v_i is a marked point on a side of the m-gon, but not a vertex of the m-gon. If we rescale the vector v_i, that is, replace it by the vector α v_i, then the minor Δ_ij will be rescaled by the same factor α, while all other minors corresponding to edges of G̃ will remain the same. If we pick the factor α to be slightly smaller than 1, then this will make the minor Δ_ij smaller. Actually, this argument shows that we can independently rescale the minors for all leaves of G̃.

Now assume that neither i nor j is a leaf of G̃. Then G̃ contains two non-leaf edges {i, j} and {i, j'} incident to i. If G contains neither of the edges {i, j} and {i, j'}, then we can also rescale the vector v_i by a factor α slightly smaller than 1. This will make both minors Δ_ij and Δ_ij' smaller (and any other minor Δ_iq as well). This rescaling of v_i will also modify the minors for the leaves incident to i. However, as we showed above, we can always independently rescale the minors for all leaves of G̃ and make them equal to any values. If some of those leaves are present in G, we will rescale their value by 1/α. Otherwise we leave them unchanged. Let p be a leaf attached to i that is present in G (it might be that there is no such p). The process we just described will guarantee that Δ_ip = a. The value of Δ_{i'p} (which was originally at most b) was rescaled by 1/α, for any i' ≠ i. However, by taking α close enough to 1, we can guarantee that the resulting minor is still strictly smaller than a. Finally, if G does not contain the edge {i, j} but contains the edge {i, j'}, then we can slightly move the point v_i along one of the two sides of the m-gon incident to it, namely the one which is parallel to the vector v_{j'}, towards the other vertex of this side of the m-gon. This will make the minor Δ_ij smaller but preserve the minor Δ_ij'. We can handle the leaves attached to i in a similar way to the previous case. This shows that we can slightly decrease the values of minors for any subset of edges of G̃, without letting the minors outside of G̃ reach a. This finishes the proof.  □
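For a plain odd star (no leaves attached) the construction above is particularly simple: the columns of B are just the first n vertices of the regular 2n-gon. The sketch below (not from the thesis) checks, for the 5-star, that exactly the star edges attain the largest 2 x 2 minor.

```python
# Sketch: the m-gon construction for the 5-star (r = 2, n = 5, m = 10, no leaves).
from itertools import combinations
from math import cos, sin, pi, isclose

n, r = 5, 2
m = 2 * n
cols = [(cos(2 * pi * i / m), sin(2 * pi * i / m)) for i in range(1, n + 1)]

def minor(i, j):                      # 1-based labels, i < j
    (a, c), (b, d) = cols[i - 1], cols[j - 1]
    return a * d - b * c

values = {(i, j): minor(i, j) for i, j in combinations(range(1, n + 1), 2)}
top = max(values.values())
star = {tuple(sorted((i, (i + r - 1) % n + 1))) for i in range(1, n + 1)}   # edges of the 5-star

print(isclose(top, sin(2 * pi * r / m)))                            # True
print({e for e, v in values.items() if isclose(v, top)} == star)    # True
```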

1.5 Weakly separated sets and sorted sets

In this section, we show how to extend triangulations and thrackles to the case of general k. As before, we assume that the vertices 1,..., n are arranged on the circle in the clockwise order.

Definition 1.5.1. Two k-element sets I, J ∈ ([n] choose k) are called weakly separated if their set-theoretic differences I \ J = {a_1, ..., a_r} and J \ I = {b_1, ..., b_r} are separated from each other by some chord in the circle, i.e., a_1 < ... < a_s < b_1 < ... < b_r < a_{s+1} < ... < a_r (or the same inequalities with a's and b's switched). A subset of ([n] choose k) is called weakly separated if every two elements in it are weakly separated.

Weakly separated sets were originally introduced by Leclerc-Zelevinsky [22] in the study of quasi-commuting quantum minors. It was conjectured in [22] that all maximal (by containment) weakly separated sets have the same number of elements (the Purity Conjecture), and that they can be obtained from each other by a sequence of mutations. The Purity Conjecture was proved independently by Danilov-Karzanov-Koshevoy [1] and by Oh, Postnikov and Speyer in [26].

The latter paper presented a bijection between maximal weakly separated sets and reduced plabic graphs, which appear in the study of the positive Grassmannian [27]. Leclerc-Zelevinsky's Purity Conjecture and the mutation connectedness conjecture follow from the properties of plabic graphs proved in [27].

More precisely, it was shown in [26], cf. [1], that any maximal by containment weakly separated subset of ([n] choose k) has exactly k(n - k) + 1 elements. We will talk more about the connection between weakly separated sets and plabic graphs in Section 1.10.

Definition 1.5.2. Two k-element sets I, J ∈ ([n] choose k) are called sorted if their set-theoretic differences I \ J = {a_1, ..., a_r} and J \ I = {b_1, ..., b_r} are interlaced on the circle, i.e., a_1 < b_1 < a_2 < b_2 < ... < a_r < b_r (or the same inequalities with a's and b's switched).

A subset of ([n] choose k) is called sorted if every two elements in it are sorted.

Sorted sets appear in the study of Gröbner bases [35] and in the theory of alcoved polytopes [20]. Any maximal (by containment) sorted subset in ([n] choose k) has exactly n elements. Such subsets were identified with simplices of the alcoved triangulation of the hypersimplex Δ_{k,n}, see [20, 35]. The number of maximal sorted subsets in ([n] choose k) equals the Eulerian number A(n - 1, k - 1), that is, the number of permutations w of size n - 1 with exactly k - 1 descents, des(w) = k - 1. (Recall that a descent in a permutation w is an index i such that w(i) > w(i + 1).) An explicit bijection between sorted subsets in ([n] choose k) and permutations of size n - 1 with k - 1 descents was constructed in [20].

Remark 1.5.3. For k = 2, a pair {a, b} and {c, d} is weakly separated if the edges {a, b} and {c, d} of K_n are non-crossing or share a common vertex. On the other hand, a pair {a, b} and {c, d} is sorted if the edges {a, b} and {c, d} of K_n are crossing or share a common vertex. Thus maximal weakly separated subsets in ([n] choose 2) are exactly the graphs corresponding to triangulations of the n-gon, while sorted subsets in ([n] choose 2) are exactly the thrackles discussed in Section 1.4.
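Both definitions are straightforward to test by machine. The sketch below (illustrative only) checks weak separation by asking whether the two difference sets occupy at most two cyclic blocks around the circle, checks sortedness by the interlacing condition, and verifies the k = 2 statement of Remark 1.5.3 against the crossing criterion of Section 1.4.

```python
# Sketch: tests for weak separation and sortedness of k-element subsets of [n].
from itertools import combinations

def cyclic_blocks(labels):
    changes = sum(labels[i] != labels[(i + 1) % len(labels)] for i in range(len(labels)))
    return changes if changes else 1

def is_weakly_separated(I, J):
    diff = sorted(set(I) ^ set(J))
    labels = ['a' if x in I else 'b' for x in diff]
    return cyclic_blocks(labels) <= 2 if labels else True

def is_sorted_pair(I, J):
    A, B = sorted(set(I) - set(J)), sorted(set(J) - set(I))
    setA = set(A)
    labels = ['a' if x in setA else 'b' for x in sorted(A + B)]
    return all(labels[i] != labels[i + 1] for i in range(len(labels) - 1))

def crossing(e, f):
    a, b = sorted(e)
    return (a < f[0] < b) != (a < f[1] < b)

n = 6
edges = list(combinations(range(1, n + 1), 2))
ok = all(
    is_weakly_separated(e, f) == (bool(set(e) & set(f)) or not crossing(e, f)) and
    is_sorted_pair(e, f) == (bool(set(e) & set(f)) or crossing(e, f))
    for e, f in combinations(edges, 2))
print(ok)   # True: matches Remark 1.5.3 for k = 2
```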

Here is our main result on arrangements of largest minors.

Theorem 1.5.4. A nonempty subset of ([n] choose k) is an arrangement of largest minors in Gr+(k, n) if and only if it is a sorted subset. Maximal arrangements of largest minors contain exactly n elements. The number of maximal arrangements of largest minors in Gr+(k, n) equals the Eulerian number A(n - 1, k - 1).

Regarding arrangements of smallest minors, we will show the following.

Theorem 1.5.5. Any nonempty weakly separated set in ([n] choose k) is an arrangement of smallest minors in Gr+(k, n).

Theorem 1.5.6. For k = 1, 2, 3, n - 1, n - 2, n - 3, a nonempty subset of ([n] choose k) is an arrangement of smallest minors in Gr+(k, n) if and only if it is a weakly separated subset. For these values of k, maximal arrangements of smallest minors contain exactly k(n - k) + 1 elements.

Note that the symmetry Gr(k, n) ≅ Gr(n - k, n) implies that the cases k = 1, 2, 3 are equivalent to the cases k = n - 1, n - 2, n - 3.

In Section 1.10, we will construct, for k ≥ 4, examples of arrangements of smallest minors which are not weakly separated. We will describe the conjectural structure of such arrangements (Conjecture 1.10.10) and prove it for k = 4, 5, n - 4, n - 5.

These examples show that it is not true in general that all maximal (by containment) arrangements of smallest minors are weakly separated. However, the following conjecture says that maximal by size arrangements of smallest minors are exactly maximal weakly separated sets.

Conjecture 1.5.7. Any arrangement of smallest minors in Gr+(k, n) contains at most k(n - k) + 1 elements. Any arrangement of smallest minors in Gr+(k, n) with k(n - k) + 1 elements is a (maximal) weakly separated set in ([n] choose k).

In order to prove Theorems 1.5.4 and 1.5.6 in one direction (⇒), we need to show that, for a pair of elements I and J in an arrangement of largest (smallest) minors, the pair I, J is sorted (weakly separated).

In order to prove these claims in the other direction (⇐) and also Theorem 1.5.5, it is enough to construct, for each sorted (weakly separated) subset, matrices with the corresponding collection of equal largest (smallest) minors.

In Section 1.6, we discuss inequalities between products of minors and use them to prove Theorems 1.5.4 and 1.5.6 in one direction (⇒). That is, we show that arrangements of largest (smallest) minors should be sorted (weakly separated). In Section 1.7, we prove Theorem 1.5.5 (and hence the other direction (⇐) of Theorem 1.5.6) using the theory of cluster algebras. In Section 1.8, we prove the other direction (⇐) of Theorem 1.5.4 using the theory of alcoved polytopes [20].

1.6 Inequalities for products of minors

As we discussed in Section 1.4, in the case k = 2, in one direction, our results follow from the inequalities for products of minors of the form Aac Abd > Aab Acd and

Aac Abd > Aad Abe, for a < b < c < d.

There are more general inequalities of this form found by Skandera [311.

For I, J E ([]) and an interval [a, b] := {a, a + 1, ... , b} c [n], define

r(I, J; a, b) = |(I \ J) n [a, b] I - |(J \I) n [a, b] .

Notice that the pair I, J is sorted if and only if r(I, J; a, b) < 1 for all a and b. In a sense, r(I, J; a, b) is a measure of "unsortedness" of the pair I, J.

Theorem 1.6.1 (Skandera [31]). For I, J, K, L ∈ ([n] choose k), the products of the Plücker coordinates satisfy the inequality

  Δ_I Δ_J ≥ Δ_K Δ_L

for all points of the nonnegative Grassmannian Gr≥(k, n), if and only if the multiset union of I and J equals the multiset union of K and L, and, for any interval [a, b] ⊂ [n], we have r(I, J; a, b) ≤ r(K, L; a, b).

Remark 1.6.2. Skandera's result [31] is given in terms of minors (of arbitrary sizes) of totally nonnegative matrices. Here we reformulated this result in terms of Plücker coordinates (i.e., maximal minors) on the nonnegative Grassmannian using the map φ : Mat(k, n - k) → Gr(k, n) from Section 1.2. We also used a different notation to express the condition for the sets I, J, K, L. We leave it as an exercise for the reader to check that the above Theorem 1.6.1 is equivalent to [31, Theorem 4.2].

Roughly speaking, this theorem says that the product of minors Δ_I Δ_J should be "large" if the pair I, J is "close" to being sorted; and the product should be "small" if the pair I, J is "far" from being sorted.

Actually, we need a similar result with strict inequalities. It also follows from Skandera's work [31].

Roughly speaking, this theorem says that the product of minors A, Aj should be

"large" if the pair I, J is "close" to being sorted; and the product should be "small" if the pair I, J is "far" from being sorted.

Actually, we need a similar result with strict inequalities. It also follows from

Skandera's work [311.

Theorem 1.6.3 ( [311). Let I, J, K, L E (I]) be subsets such that {I, J} # {K, L}. The products of the Plicker coordinates satisfy the strict inequality

1A AJ > AK AL for all points of the positive GrassmannianGr+(k, n), if and only if the multiset union of I and J equals to the multiset union of K and L; and, for any interval [a, b] c [n], we have r(I, J; a, b) r(K, L; a, b).

Proof. In one direction (⇒), the result directly follows from Theorem 1.6.1. Indeed, the nonnegative Grassmannian Gr≥(k, n) is the closure of the positive Grassmannian Gr+(k, n). This implies that, if Δ_I Δ_J > Δ_K Δ_L on Gr+(k, n), then Δ_I Δ_J ≥ Δ_K Δ_L on Gr≥(k, n).

Let us show how to prove the other direction (⇐) using results of [31]. Every totally positive (nonnegative) matrix A = (a_ij) can be obtained from an acyclic directed planar network with positive (nonnegative) edge weights, cf. [31]. The matrix entries a_ij are sums of products of edge weights over directed paths from the i-th source to the j-th sink of a network.

Theorem 1.6.1 implies the weak inequality Δ_I Δ_J ≥ Δ_K Δ_L. Moreover, [31, Corollary 3.3] gives a combinatorial interpretation of the difference Δ_I Δ_J - Δ_K Δ_L as a weighted sum over certain families of directed paths in a network.

In the case of totally positive matrices, all edge weights, as well as weights of all families of paths, are strictly positive. It follows that, if Δ_I Δ_J - Δ_K Δ_L = 0 for some point of Gr+(k, n), then there are no families of paths satisfying the condition of [31, Corollary 3.3], and thus Δ_I Δ_J - Δ_K Δ_L = 0 for all points of Gr+(k, n). However, the only case when we have the equality Δ_I Δ_J = Δ_K Δ_L for all points of Gr+(k, n) is when {I, J} = {K, L}.  □

Definition 1.6.4. For a multiset S of elements from [n], let Sort(S) be the non-decreasing sequence obtained by ordering the elements of S. Let I, J ∈ ([n] choose k) and let Sort(I ∪ J) = (a_1, a_2, ..., a_{2k}), so that a_1 ≤ a_2 ≤ ... ≤ a_{2k}, where I ∪ J is the multiset union. Define the sorting of the pair I, J to be

  sort_1(I, J) := {a_1, a_3, ..., a_{2k-1}},   sort_2(I, J) := {a_2, a_4, ..., a_{2k}}.

Note that the pair {I, J} is sorted if and only if sort_1(I, J) = I and sort_2(I, J) = J, or vice versa.

Theorem 1.6.3 implies the following corollary.

Corollary 1.6.5. Let I, J ∈ ([n] choose k) be a pair which is not sorted, and let sort_1(I, J), sort_2(I, J) be the sorting of the pair I, J. Then we have the strict inequality

  Δ_{sort_1(I,J)} Δ_{sort_2(I,J)} > Δ_I Δ_J

for points of the positive Grassmannian Gr+(k, n).

Proof. We have r(sort_1(I, J), sort_2(I, J); a, b) ≤ r(I, J; a, b).  □
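Corollary 1.6.5 can be checked numerically on any explicit point of Gr+(k, n); columns on the moment curve (a Vandermonde-type matrix) provide such a point. The sketch below (illustrative, not from the thesis) computes sort_1 and sort_2 of each pair and compares the two products of Plücker coordinates.

```python
# Sketch: numerical check of Corollary 1.6.5 on a moment-curve point of Gr+(3, 6).
from itertools import combinations
import numpy as np

k, n = 3, 6
t = np.arange(1, n + 1, dtype=float)
A = np.vstack([t ** i for i in range(k)])           # a point of Gr+(3, 6)

def plucker(I):                                      # I: tuple of 1-based column labels
    return np.linalg.det(A[:, [i - 1 for i in I]])

def sort_pair(I, J):
    merged = sorted(list(I) + list(J))               # multiset union
    return tuple(merged[0::2]), tuple(merged[1::2])  # (sort_1, sort_2)

violations = 0
for I, J in combinations(combinations(range(1, n + 1), k), 2):
    S1, S2 = sort_pair(I, J)
    if {S1, S2} != {I, J}:                           # the pair is not sorted
        if plucker(S1) * plucker(S2) <= plucker(I) * plucker(J):
            violations += 1
print(violations)    # 0
```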

This result easily implies one direction of Theorem 1.5.4.

Proof of Theorem 1.5.4 in the ⇒ direction. We need to show that a pair I, J which is not sorted cannot belong to an arrangement of largest minors. If a pair I, J which is not sorted belongs to an arrangement of largest minors, we have Δ_I = Δ_J = a, and the inequality Δ_{sort_1(I,J)} Δ_{sort_2(I,J)} > Δ_I Δ_J implies that Δ_{sort_1(I,J)} or Δ_{sort_2(I,J)} is greater than a, a contradiction.  □

Using a similar argument, we can also prove the same direction of Theorem 1.5.6.

Using a similar argument, we can also prove the same direction of Theorem 1.5.6.

Proof of Theorem 1.5.6 in the => direction. The Grassmannian Gr(k, n) can be iden-

tified with Gr(n - k, n) so that the Plucker coordinates A, in Gr(k, n) map to

the Plucker coordinates A[4]\ 1 in Gr(n - k, n). This duality reduces the cases k =

n - 1,n - 2,n - 3 to the cases k = 1,2,3.

The case k = 1 is trivial. The case k = 2 is covered by Theorem 1.4.2. It remains to prove the claim in the case k = 3.

We need to show that a pair I, J E ([]), which is not weakly separated, cannot belong to an arrangement of smallest minors in the positive Grassmannian Gr+(3, n).

If |I nl JI > 2, then I and J are weakly separated. If In JI = 1, say I n J = {e}, then the result follows from the 3-terms Plflcker relation

A{a,c,e} A{b,d,e} - A{a,b,e} A{c,d,e} + Ala,d,e} Ajb,c,e}, for a < b < c < d, as in the k = 2 case (Theorem 1.4.2).

Thus we can assume that I n J = 0. Without loss of generality, we can assume that I U J = {1, 2, 3, 4, 5, 6}. Up to cyclic symmetry, and up to switching I with J,

35 there are only 2 types of pairs I, J which are not weakly separated:

I = {1, 3,5}, J ={2,4,6} and I = {1, 2,4}, J ={3,5,6}.

In both cases, we have strict Skandera's inequalities (Theorem 1.6.3):

A11,3,5} A(2,4,6} > A11,2,3} A14,5,6}

A{1,2,4} IA{3,5,6} > 'A(1,2,3} IA{4,5,6} -

This shows that, if A1 = Aj = a, then there exists AK < a. Thus a pair I, J which is not weakly separated cannot belong to an arrangement of smallest minors. l

1.7 Cluster algebra on the Grassmannian

In this section we prove Theorem 1.5.5 using cluster algebras.

The following statement follows from results of [26, 27].

Theorem 1.7.1. Any maximal weakly separated subset S ⊂ ([n] choose k) corresponds to k(n - k) + 1 algebraically independent Plücker coordinates Δ_I, I ∈ S. Any other Plücker coordinate Δ_J can be uniquely expressed in terms of the Δ_I, I ∈ S, by a subtraction-free rational expression.

In the following proof we use plabic graphs from [27]. See Section 1.10 below for more details on plabic graphs.

Proof. In [26], maximal weakly separated subsets of ([n] choose k) were identified with labels of faces of reduced plabic graphs for the top cell of Gr+(k, n). (This labelling of faces is described in Section 1.10 of the current chapter in the paragraph after Definition 1.10.4.) According to [27], all reduced plabic graphs for the top cell can be obtained from each other by a sequence of square moves, which correspond to mutations of weakly separated sets.

A mutation has the following form. For 1 ≤ a < b < c < d ≤ n and R ⊂ [n] with |R| = k - 2 such that {a, b, c, d} ∩ R = ∅, if a maximal weakly separated set S contains {a, b} ∪ R, {b, c} ∪ R, {c, d} ∪ R, {a, d} ∪ R, and {a, c} ∪ R, then we can replace {a, c} ∪ R in S by {b, d} ∪ R. In terms of the Plücker coordinates Δ_I, I ∈ S, a mutation means that we replace Δ_{{a,c}∪R} by

  Δ_{{b,d}∪R} = ( Δ_{{a,b}∪R} Δ_{{c,d}∪R} + Δ_{{a,d}∪R} Δ_{{b,c}∪R} ) / Δ_{{a,c}∪R}.

Since any J ∈ ([n] choose k) appears as a face label of some plabic graph for the top cell, it follows that any Plücker coordinate Δ_J can be expressed in terms of the Δ_I, I ∈ S, by a sequence of rational subtraction-free transformations of this form.

The fact that the Δ_I, I ∈ S, are algebraically independent follows from dimension considerations. Indeed, we have |S| = k(n - k) + 1, and all Plücker coordinates (which are projective coordinates on the Grassmannian Gr(k, n)) can be expressed in terms of the Δ_I, I ∈ S. If there were an algebraic relation among the Δ_I, I ∈ S, it would imply that dim Gr(k, n) < k(n - k). However, dim Gr(k, n) = k(n - k).  □
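The exchange relation used above (the three-term Plücker relation with a common index set R) can be verified numerically on any explicit point of Gr+(k, n). A small sketch, not from the thesis:

```python
# Sketch: check D_{ac+R} D_{bd+R} = D_{ab+R} D_{cd+R} + D_{ad+R} D_{bc+R}
# on a moment-curve point of Gr+(4, 8), for all valid a < b < c < d and R.
from itertools import combinations
import numpy as np

k, n = 4, 8
t = np.arange(1, n + 1, dtype=float)
A = np.vstack([t ** i for i in range(k)])

def D(S):
    return np.linalg.det(A[:, sorted(i - 1 for i in S)])

ok = True
for R in combinations(range(1, n + 1), k - 2):
    rest = [x for x in range(1, n + 1) if x not in R]
    for a, b, c, d in combinations(rest, 4):
        lhs = D({a, c} | set(R)) * D({b, d} | set(R))
        rhs = D({a, b} | set(R)) * D({c, d} | set(R)) + D({a, d} | set(R)) * D({b, c} | set(R))
        ok &= bool(np.isclose(lhs, rhs))
print(ok)   # True
```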

This construction fits within the general framework of Fomin-Zelevinsky's cluster algebras [10]. For a maximal weakly separated set S ⊂ ([n] choose k), the Plücker coordinates Δ_I, I ∈ S, form an initial seed of the cluster algebra associated with the Grassmannian. It is the cluster algebra whose quiver is the dual graph of the plabic graph associated with S. This cluster algebra was studied by Scott [30].

According to the general theory of cluster algebras, the subtraction-free expressions mentioned in Theorem 1.7.1 are actually Laurent polynomials, see [10]. This property is called the Laurent phenomenon. In [10], Fomin and Zelevinsky conjectured that these Laurent polynomials have positive integer coefficients. This conjecture was recently proven by Lee and Schiffler in [23] for skew-symmetric cluster algebras. Note that the cluster algebra associated with the Grassmannian Gr(k, n) is skew-symmetric.

The Laurent phenomenon and the result of Lee-Schiffler [23] imply the following claim.

Theorem 1.7.2. The rational expressions from Theorem 1.7.1 that express the Δ_J in terms of the Δ_I, I ∈ S, are Laurent polynomials with nonnegative integer coefficients that contain at least 2 terms.

Theorem 1.7.1 implies that any maximal weakly separated subset S uniquely defines a point A_S in the positive Grassmannian Gr+(k, n) such that the Plücker coordinates Δ_I, for all I ∈ S, are equal to each other. Moreover, Theorem 1.7.2 implies that all other Plücker coordinates Δ_J are strictly greater than the Δ_I, for I ∈ S. This proves Theorem 1.5.5 (and hence the other direction (⇐) of Theorem 1.5.6) for the case of maximal weakly separated sets. If S is not maximal, let us complete it to a maximal weakly separated set S′. We can then specify Δ_I = 1 for I ∈ S and Δ_J = 1 + ε for J ∈ S′ \ S and small ε > 0. Since any Δ_K for K ∉ S′ is a Laurent polynomial with nonnegative integer coefficients in the variables Δ_J, J ∈ S′, that contains at least two terms, by taking ε small enough we can guarantee that Δ_K > 1. This completes the proof of Theorems 1.5.5 and 1.5.6.

We can now reformulate Conjecture 1.5.7 as follows.

Conjecture 1.7.3. Any point in Gr+(k, n) with a maximal (by size) arrangement of smallest equal minors has the form A_S, for some maximal weakly separated subset S ⊂ \binom{[n]}{k}.

1.8 Constructions of matrices for arrangements of largest minors

In this section, we prove the other direction (⇐) of Theorem 1.5.4. In the previous sections, we saw that the points in Gr+(k, n) with a maximal arrangement of smallest equal minors have a very rigid structure. On the other hand, the cardinality of a maximal arrangement of largest minors is n, which is much smaller than the conjectured cardinality k(n − k) + 1 of a maximal arrangement of smallest minors. Maximal arrangements of largest minors impose fewer conditions on points of Gr+(k, n) and have a much more flexible structure. Actually, one can get any maximal arrangement of largest minors from any point of Gr+(k, n) by the torus action.

The "positive torus" R_{>0}^n acts on the positive Grassmannian Gr+(k, n) by rescaling the coordinates in R^n. (The group R_{>0}^n is the positive part of the complex torus (C \ {0})^n.) In terms of k × n matrices, this action is given by rescaling the columns of the matrix.

Theorem 1.8.1. (1) For any point A in Gr+(k, n) and any maximal sorted subset S ⊂ \binom{[n]}{k}, there is a unique point A' of Gr+(k, n) obtained from A by the torus action (that is, by rescaling the columns of the k × n matrix A) such that the Plücker coordinates Δ_I, for all I ∈ S, are equal to each other. (2) All other Plücker coordinates Δ_J, J ∉ S, for the point A' are strictly less than the Δ_I, for I ∈ S.

The proof of this result is based on geometric techniques of alcoved polytopes and affine Coxeter arrangements developed in [20].

Before presenting the proof, let us give some examples of 3 × n matrices A = [v_1, v_2, ..., v_n] with maximal arrangements of largest equal minors. Here v_1, ..., v_n are 3-dimensional vectors. Projectively, we can think about the 3-dimensional vectors v_i as points in the (projective) plane. More precisely, let P ≅ R² be an affine plane in R³ that does not pass through the origin 0. A point p in the plane P represents the 3-dimensional vector v from the origin 0 to p. A collection of points p_1, ..., p_n ∈ P corresponds to an element A = [v_1, ..., v_n] of the positive Grassmannian Gr+(3, n) if and only if the points p_1, ..., p_n form the vertices of a convex n-gon, labelled in clockwise order.

Let us now assume that the n-gon formed by the points p_1, ..., p_n is a regular n-gon. Theorem 1.8.1 implies that it is always possible to uniquely rescale (up to a common factor) the corresponding 3-dimensional vectors by positive scalars λ_i in order to get any sorted subset in \binom{[n]}{3}. Geometrically, for a triple I = {i, j, r}, the minor Δ_I equals the area of the triangle with vertices p_i, p_j, p_r times the product of the scalar factors λ_i λ_j λ_r (times a common factor which can be ignored). We want to make the largest area of such rescaled triangles repeat as many times as possible.

Example 1.8.2. For the regular pentagon, there are the Eulerian number A(4, 2) = 11 rescalings of vertices that give maximal sorted subsets in \binom{[5]}{3}. For the regular hexagon there are A(5, 2) = 66 rescalings. Figures 1-5 and 1-6 show all these rescalings up to rotations and reflections.

Figure 1-5: For the regular pentagon, there are the Eulerian number A(4, 2) = 11 rescalings that give maximal sorted subsets in \binom{[5]}{3}. In the first case, all the scalars are 1. In the second case, the λ_i are 1, 1, φ, φ, φ, where φ = (1 + √5)/2 is the golden ratio. (There are 5 rotations of this case.) In the last case, the λ_i are 1, φ, φ², ... (Again, there are 5 rotations.) In total, we get 1 + 5 + 5 = 11 rescalings.

Figure 1-6: For the regular hexagon, there are 10 types of allowed rescalings (up to rotations and reflections) shown in this figure. In total, we get the Eulerian number A(5, 2) = 6 + 6 + 6 + 6 + 6 + 6 + 3 + 3 + 12 + 12 = 66 rescalings.

Our proof of Theorem 1.8.1 relies on results from [20] about hypersimplices and their alcoved triangulations. Let us first summarize these results.

The hypersimplex Δ_{k,n} is the (n − 1)-dimensional polytope

Δ_{k,n} := {(x_1, ..., x_n) | 0 ≤ x_1, ..., x_n ≤ 1; x_1 + x_2 + ... + x_n = k}.

Let e_1, ..., e_n be the coordinate vectors in R^n. For I ∈ \binom{[n]}{k}, let e_I = Σ_{i ∈ I} e_i denote the 0/1-vector with k ones in positions I. For a subset S ⊂ \binom{[n]}{k}, let P_S be the polytope defined as the convex hull of the e_I, for I ∈ S. Equivalently, P_S has the vertices e_I, I ∈ S. The polytope P_S lies in the affine hyperplane H = {x_1 + ... + x_n = k} ⊂ R^n.

For 1 ≤ i ≤ j ≤ n and an integer r, let H_{i,j,r} be the affine hyperplane {x_i + x_{i+1} + ... + x_j = r} ⊂ R^n.

Theorem 1.8.3 ([20], cf. [32, 35]). (1) The hyperplanes H_{i,j,r} subdivide the hypersimplex Δ_{k,n} into simplices. This forms a triangulation of the hypersimplex. (2) Simplices (of all dimensions) in this triangulation of Δ_{k,n} are in bijection with sorted sets in \binom{[n]}{k}. For a sorted set S, the corresponding simplex is P_S. (3) There are the Eulerian number A(n − 1, k − 1) of (n − 1)-dimensional simplices P_S in this triangulation. They correspond to the A(n − 1, k − 1) maximal sorted sets S in \binom{[n]}{k}. In particular, maximal sorted sets in \binom{[n]}{k} have exactly n elements.

The following lemma proves the first part of Theorem 1.8.1.

Lemma 1.8.4. Let A be a point in Gr+(k, n), and let S ⊂ \binom{[n]}{k} be a maximal sorted subset. There is a unique point A' of Gr+(k, n) obtained from A by the torus action such that the Plücker coordinates Δ_I, for all I ∈ S, are equal to each other.

Proof. Let t_1, t_2, ..., t_n > 0 be a collection of n positive real variables, and let A' be the matrix obtained from A by multiplying the i-th column of A by t_i, for each 1 ≤ i ≤ n. The condition that the minors Δ_I(A'), I ∈ S, are all equal to 1 is equivalent to the linear system

Σ_{i ∈ I} z_i = −b_I,  for every I ∈ S,

where z_i = log(t_i) and b_I = log(Δ_I(A)). This n × n system has a unique solution (z_1, ..., z_n) because, according to Theorem 1.8.3, the rows of its matrix are exactly the vertices of the simplex P_S, so the matrix of the system is invertible. The positive numbers t_i = e^{z_i}, i = 1, ..., n, give us the needed rescaling constants. □
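The linear system in this proof is easy to set up explicitly. The following Python sketch is our own illustration of the argument (the function and variable names are not from the thesis): it rescales the columns of a small totally positive matrix so that the minors indexed by a maximal sorted set all become 1.

import numpy as np

def rescale_to_sorted(A, S):
    """Return A' with columns rescaled so that Delta_I(A') = 1 for all I in S."""
    k, n = A.shape
    assert len(S) == n
    # Row of M corresponding to I is the 0/1-vector e_I; right-hand side is -log Delta_I(A).
    M = np.zeros((n, n))
    b = np.zeros(n)
    for row, I in enumerate(S):
        M[row, list(I)] = 1.0
        b[row] = -np.log(np.linalg.det(A[:, list(I)]))
    z = np.linalg.solve(M, b)      # invertible: the rows are vertices of the simplex P_S
    return A * np.exp(z)           # multiply column i by t_i = exp(z_i)

# Example: a totally positive 2 x 4 matrix and the maximal sorted set
# S = {12, 13, 14, 24} (written 0-indexed below).
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 4.0, 8.0]])
S = [(0, 1), (0, 2), (0, 3), (1, 3)]
Aprime = rescale_to_sorted(A, S)
for I in S:
    print(I, np.linalg.det(Aprime[:, list(I)]))   # each value is approximately 1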

In order to prove the second part of Theorem 1.8.1, let us define a distance d(S, J) between a maximal sorted set S and a set J ∈ \binom{[n]}{k}. Such a function will enable us to give an inductive proof.

Let us say that a hyperplane H_{i,j,r} = {x_i + x_{i+1} + ... + x_j = r} separates a simplex P_S and a point e_J if P_S and e_J are in the two disjoint halfspaces formed by H_{i,j,r}. Here we allow H_{i,j,r} to touch the simplex P_S along the boundary, but the point e_J should not lie on the hyperplane.

For J ∈ \binom{[n]}{k} and 1 ≤ i ≤ j ≤ n, let

d_{ij}(S, J) := #{r | the hyperplane H_{i,j,r} separates P_S and e_J}.

Define the distance between J and S as

d(S, J) := Σ_{1 ≤ i ≤ j ≤ n} d_{ij}(S, J).

In other words, d(S, J) is the total number of hyperplanes H_{i,j,r} that separate P_S and e_J.
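The quantities d_{ij}(S, J) can be computed directly from the interval counts |I ∩ [i, j]|. The following Python sketch is our own illustration (helper names are ours, not the thesis'): a hyperplane H_{i,j,r} separates P_S from e_J exactly when r lies strictly beyond the value of e_J but weakly beyond all values of the vertices of P_S.

def d_ij(S, J, i, j):
    """Number of hyperplanes H_{i,j,r} separating P_S from e_J, for 1 <= i <= j <= n."""
    interval = set(range(i, j + 1))
    beta = len(set(J) & interval)                 # e_J lies on H_{i,j,beta}
    vals = [len(set(I) & interval) for I in S]    # values of x_i + ... + x_j on vertices of P_S
    lo, hi = min(vals), max(vals)
    # r separates iff  beta < r <= lo  or  hi <= r < beta
    return max(0, lo - beta) + max(0, beta - hi)

def distance(S, J, n):
    return sum(d_ij(S, J, i, j) for i in range(1, n + 1) for j in range(i, n + 1))

# Example in Delta_{2,4}: the maximal sorted set S = {12, 13, 14, 24}.
S = [{1, 2}, {1, 3}, {1, 4}, {2, 4}]
print(distance(S, {2, 3}, 4))   # positive, since {2,3} is not in S
print(distance(S, {1, 3}, 4))   # 0, since {1,3} is in S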

Lemma 1.8.5. Let J ∈ \binom{[n]}{k} and let S ⊂ \binom{[n]}{k} be a maximal sorted subset. Then d(S, J) = 0 if and only if J ∈ S.

Proof. If J ∈ S, that is, e_J is a vertex of the simplex P_S, then d(S, J) = 0.

Now assume that e_J is not a vertex of P_S, so it lies strictly outside of P_S. Consider the n hyperplanes H_{i,j,r} that contain the n facets of the (n − 1)-simplex P_S. At least one of these hyperplanes separates P_S and e_J, so d(S, J) ≥ 1. □

Recall (Definition 1.6.4) that the sorting sort_1(I, J), sort_2(I, J) of a pair I, J ∈ \binom{[n]}{k} with the multiset union I ∪ J = {a_1 ≤ a_2 ≤ ... ≤ a_{2k}} is given by sort_1(I, J) = {a_1, a_3, ..., a_{2k−1}} and sort_2(I, J) = {a_2, a_4, ..., a_{2k}}.
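For concreteness, here is a small Python sketch of the sorting operation (the helper name is ours): sort the multiset union and split it into odd- and even-indexed entries.

def sorting(I, J):
    union = sorted(list(I) + list(J))    # multiset union a_1 <= ... <= a_{2k}
    sort1 = union[0::2]                  # a_1, a_3, ..., a_{2k-1}
    sort2 = union[1::2]                  # a_2, a_4, ..., a_{2k}
    return sort1, sort2

print(sorting({1, 2, 4}, {3, 5, 6}))     # ([1, 3, 5], [2, 4, 6])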

Lemma 1.8.6. Let S ⊂ \binom{[n]}{k} be a maximal sorted subset, let I ∈ S and J ∈ \binom{[n]}{k}, let sort_1(I, J), sort_2(I, J) be the sorting of I, J, and let 1 ≤ i ≤ j ≤ n. Then

d_{ij}(S, sort_1(I, J)) ≤ d_{ij}(S, J) and d_{ij}(S, sort_2(I, J)) ≤ d_{ij}(S, J).

Proof. In order to show that d_{ij}(S, sort_1(I, J)), d_{ij}(S, sort_2(I, J)) ≤ d_{ij}(S, J), it is enough to show that any hyperplane H_{i,j,r} (for some positive integer r) that separates P_S and e_{sort_1(I,J)} also separates P_S and e_J (and similarly for P_S and e_{sort_2(I,J)}). Let α = |I ∩ [i, j]| and β = |J ∩ [i, j]|, where [i, j] = {i, i + 1, ..., j}. So e_I lies on H_{i,j,α} and e_J lies on H_{i,j,β}.

By the definition of sorting, the numbers |sort_1(I, J) ∩ [i, j]| and |sort_2(I, J) ∩ [i, j]| are equal to ⌊(α + β)/2⌋ and ⌈(α + β)/2⌉ (not necessarily respectively). So e_{sort_1(I,J)} lies on H_{i,j,⌊(α+β)/2⌋} or H_{i,j,⌈(α+β)/2⌉}, and similarly for e_{sort_2(I,J)}. Since both ⌊(α + β)/2⌋ and ⌈(α + β)/2⌉ are weakly between α and β, we get the needed claim. □

Lemma 1.8.7. Let S ⊂ \binom{[n]}{k} be a maximal sorted subset, and let J ∈ \binom{[n]}{k} be such that d(S, J) > 0. Then there exists I ∈ S such that, for the sorting sort_1(I, J), sort_2(I, J) of the pair I, J, we have the strict inequalities d(S, sort_1(I, J)) < d(S, J) and d(S, sort_2(I, J)) < d(S, J).

Proof. According to Lemma 1.8.5, J ∉ S; since S is a maximal sorted set, there exists I ∈ S such that I and J are not sorted. This means that there are 1 ≤ i ≤ j ≤ n such that the numbers α = |I ∩ [i, j]| and β = |J ∩ [i, j]| differ by at least two. (We leave it as an exercise for the reader to check that I and J are sorted if and only if |α − β| ≤ 1 for all 1 ≤ i ≤ j ≤ n.) Therefore, both ⌊(α + β)/2⌋ and ⌈(α + β)/2⌉ are strictly between α and β.

The point e_{sort_1(I,J)} lies on the hyperplane H_{i,j,⌊(α+β)/2⌋} or on H_{i,j,⌈(α+β)/2⌉}. In both cases this hyperplane separates P_S and e_J, but does not separate P_S and e_{sort_1(I,J)}. The same holds for e_{sort_2(I,J)}. This means that we have the strict inequalities d_{ij}(S, sort_1(I, J)) < d_{ij}(S, J) and d_{ij}(S, sort_2(I, J)) < d_{ij}(S, J). Also, according to Lemma 1.8.6, we have the weak inequalities d_{uv}(S, sort_1(I, J)) ≤ d_{uv}(S, J) and d_{uv}(S, sort_2(I, J)) ≤ d_{uv}(S, J) for any 1 ≤ u ≤ v ≤ n. This implies the needed claim. □

We are now ready to prove the second part of Theorem 1.8.1.

Proof. Let A, A' and S be as in Lemma 1.8.4. Rescale A' so that Δ_I(A') = 1 for I ∈ S. We want to show that, for any J ∈ \binom{[n]}{k} such that J ∉ S, we have Δ_J(A') < 1.

The proof is by induction on d(S, J). Start with the base case, that is, with J for which d(S, J) = 1. By Lemma 1.8.7, there exists I ∈ S such that

d(S, sort_2(I, J)) < d(S, J) = 1 and d(S, sort_1(I, J)) < d(S, J) = 1, and hence d(S, sort_2(I, J)) = d(S, sort_1(I, J)) = 0. Therefore, by Lemma 1.8.5, we have sort_1(I, J), sort_2(I, J) ∈ S, and thus Δ_{sort_2(I,J)}(A') = Δ_{sort_1(I,J)}(A') = Δ_I(A') = 1. Applying Corollary 1.6.5, we get that Δ_I(A') Δ_J(A') < Δ_{sort_2(I,J)}(A') Δ_{sort_1(I,J)}(A'), so 1 · Δ_J(A') < 1 · 1, and hence Δ_J(A') < 1, which proves the base case.

Now assume that the claim holds for any set whose distance from S is smaller than d, and let J ∉ S be such that d(S, J) = d. Using again Lemma 1.8.7, we pick I ∈ S for which d(S, sort_2(I, J)), d(S, sort_1(I, J)) < d. By the inductive assumption, Δ_{sort_2(I,J)}(A'), Δ_{sort_1(I,J)}(A') ≤ 1. Therefore, applying Corollary 1.6.5, we get that Δ_I(A') Δ_J(A') < Δ_{sort_2(I,J)}(A') Δ_{sort_1(I,J)}(A') ≤ 1, and since Δ_I(A') = 1, we get Δ_J(A') < 1. We showed that, for all J ∈ \binom{[n]}{k} such that J ∉ S, we have Δ_J(A') < 1, so we are done. □

We can now finish the proof of Theorem 1.5.4.

Proof of Theorem 1.5.4. The (⇒) direction was already proven in Section 1.6.

For the case of maximal sorted sets, Theorem 1.8.1 implies the (⇐) direction of Theorem 1.5.4.

Suppose that the sorted set S' (given in Theorem 1.5.4) is not maximal. Complete it to a maximal sorted set S and rescale the columns of A to get A' as in Theorem 1.8.1 for the maximal sorted set S.

We now want to slightly modify A' so that only the subset of minors Δ_I, for I ∈ S', forms an arrangement of largest minors.

Apply the procedure in the proof of Lemma 1.8.4 to get the matrix A'_ε such that

Δ_I(A'_ε) = 1 for I ∈ S', and Δ_I(A'_ε) = 1 − ε for I ∈ S \ S'.

Clearly, in the limit ε → 0, we have A'_ε → A'. Since all minors Δ_J(A'_ε) are continuous functions of ε, we can take ε > 0 small enough so that all the minors Δ_J(A'_ε), J ∉ S', are strictly less than 1. This completes the proof of Theorem 1.5.4. □

1.9 The case of the nonnegative Grassmannian

The next natural step is to extend the structures discussed above to the case of the nonnegative Grassmannian Gr≥0(k, n). In other words, let us now allow some subset of the Plücker coordinates to be zero, and try to describe the possible arrangements of smallest (largest) positive Plücker coordinates.

Many arguments that we used for the positive Grassmannian will not work for the nonnegative Grassmannian. For example, if some Plücker coordinates are allowed to be zero, then we can no longer conclude from the 3-term Plücker relation that

Δ_{13} Δ_{24} > Δ_{12} Δ_{34}.

Let us describe these structures in the case k = 2. The combinatorial structure of the nonnegative Grassmannian Gr≥0(2, n) is relatively easy. Its positroid cells [27] are represented by 2 × n matrices A = [v_1, ..., v_n], v_i ∈ R², with some (possibly empty) subset of zero columns v_i = 0, and some (cyclically) consecutive columns v_r, v_{r+1}, ..., v_s parallel to each other. One can easily remove the zero columns and assume that A has no zero columns. Then this combinatorial structure is given by a decomposition of the set [n] into a disjoint union of cyclically consecutive intervals [n] = B_1 ∪ ... ∪ B_r. The Plücker coordinate Δ_{ij} is strictly positive if i and j belong to two different intervals B_l, and Δ_{ij} = 0 if i and j are in the same interval.

The following result can be deduced from the results of Section 1.4.

Theorem 1.9.1. Maximal arrangements of smallest (largest) positive minors correspond to triangulations (thrackles) on the r vertices 1, ..., r. Whenever a triangulation (thrackle) contains an edge (a, b), the corresponding arrangement contains all Plücker coordinates Δ_{ij}, for i ∈ B_a and j ∈ B_b.

We can think that the vertices 1, ..., r of a triangulation (thrackle) G have the multiplicities n_a = |B_a|. The total sum of the multiplicities should be Σ n_a = n. The number of minors in the corresponding arrangement of smallest (largest) minors equals the sum

Σ_{(a,b) ∈ E(G)} n_a n_b

over all edges (a, b) of G.

Note that it is no longer true that all maximal (by containment) arrangements of smallest (or largest) equal minors contain the same number of minors.

Theorem 1.9.2. A maximal (by size) arrangement of smallest minors or largest minors in Gr≥0(2, n) contains the following number of elements:

3m² if n = 3m,

m(3m + 2) if n = 3m + 1,

(m + 1)(3m + 1) if n = 3m + 2.

Proof. We start with smallest minors. By Theorem 1.9.1, we can assume that the graph G described above corresponds to a triangulation (since adding an edge to G cannot decrease the expression Σ_{(a,b) ∈ E(G)} n_a n_b), and we would like to maximize Σ_{(a,b) ∈ E(G)} n_a n_b, subject to the constraint Σ n_a = n (keeping in mind that all the variables are nonnegative integers). We will use Lagrange multipliers. Define

f(n_1, n_2, ..., n_r) = Σ_{(a,b) ∈ E(G)} n_a n_b − λ (Σ_{a=1}^{r} n_a − n).

Taking partial derivatives with respect to the variables n_1, n_2, ..., n_r, λ, we get, for every v ∈ V(G), an equality of the form Σ_{(v,b) ∈ E(G)} n_b = λ. We also get Σ n_a = n.

Now consider several cases.

(1) r = 3. In this case, G is a triangle, and the equalities are

n_1 + n_2 = n_1 + n_3 = n_2 + n_3, n_1 + n_2 + n_3 = n.

Thus, if n ≡ 0 (mod 3), the solution is n_1 = n_2 = n_3 = n/3, and n_1 n_2 + n_1 n_3 + n_2 n_3 = n²/3. If n ≡ 1 (mod 3), let n = 3m + 1. Since n_1, n_2, n_3 are integers, the maximal possible value of n_1 n_2 + n_1 n_3 + n_2 n_3 is ⌊n²/3⌋, which we attain by choosing n_1 = n_2 = m, n_3 = m + 1. Finally, if n ≡ 2 (mod 3), let n = 3m + 2. Then by choosing n_1 = n_2 = m + 1, n_3 = m we again obtain n_1 n_2 + n_1 n_3 + n_2 n_3 = ⌊n²/3⌋.

(2) r = 4. In this case, G is K_4 \ e, and the equalities are

n_1 + n_2 + n_4 = n_2 + n_3 + n_4 = n_1 + n_3, n_1 + n_2 + n_3 + n_4 = n.

Hence n_1 = n_3 = n_2 + n_4, and thus, if n ≡ 0 (mod 3), the maximal value is achieved at n_1 = n_3 = n/3, n_2 + n_4 = n/3. We have

n_1 n_3 + n_1 n_2 + n_2 n_3 + n_3 n_4 + n_1 n_4 = n²/9 + (n_2 + n_4)(n_1 + n_3) = n²/9 + (2n/3)(n/3) = n²/3.

Note that for n ≡ 1, 2 (mod 3), the maximal value of n_1 n_3 + n_1 n_2 + n_2 n_3 + n_3 n_4 + n_1 n_4 (subject to the constraints) cannot exceed ⌊n²/3⌋, and thus for r = 4 we obtain at most the same maximal value as in the case r = 3.

(3) r ≥ 5. In this case, let v be a vertex of degree 2 in the triangulation, and let a and b be its neighbors, so a and b are connected. Note that the edge (a, b) is an "inner edge" in the triangulation (since r ≥ 5), and hence it is part of another triangle. Let p ≠ v be the vertex that forms, together with a and b, this additional triangle, so p is connected to both a and b. Since r ≥ 5, the degree of p is at least 3, so there exists a vertex x ∉ {a, b, p, v} that is connected to p. Therefore we get in particular that n_a + n_b ≥ n_a + n_b + n_x, and since all the n_i's are nonnegative (since those are the only cases that we consider) we get n_x = 0. Thus we could equivalently consider a triangulation on r − 1 vertices instead of r vertices (having even fewer constraints, so the maximal value can only increase). Since this process holds for any triangulation on at least 5 vertices, we obtain a reduction to the case r = 4.

After considering all the possible cases, we conclude that the maximal arrangement of smallest minors in Gr (2, n) contains the number of elements stated in the theorem.

Now consider an arrangement of largest minors. In this case, G is a maximal thrackle. It is easy to check that if G contains leaves then there exists a vertex v for which n_v = 0, and hence we get a reduction to a smaller number of vertices. Thus we can assume that G does not contain leaves, and hence G is an odd cycle. In this case, applying Lagrange multipliers we get that n_1 = n_2 = ... = n_r = n/r, and hence the maximal value of the expression Σ_{(a,b) ∈ E(G)} n_a n_b is n²/r. Thus the maximal value is achieved in the case r = 3 (where G is a triangle). We can analyze the cases n ≡ 0, 1, 2 (mod 3) in the same way as above, and hence we are done. □
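As a sanity check of case (1) in this proof, the following short Python sketch (our own, not from the thesis) verifies by brute force that for r = 3 the maximum of n_1 n_2 + n_1 n_3 + n_2 n_3 over nonnegative integers with n_1 + n_2 + n_3 = n equals ⌊n²/3⌋, matching the counts 3m², m(3m + 2), (m + 1)(3m + 1) stated in the theorem.

def max_triangle(n):
    return max(a * b + a * c + b * c
               for a in range(n + 1)
               for b in range(n + 1 - a)
               for c in [n - a - b])

for n in range(3, 12):
    assert max_triangle(n) == n * n // 3
    print(n, max_triangle(n))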

1.10 Construction of arrangements of smallest minors which are not weakly separated

In this section, we discuss properties of pairs of minors which are not weakly separated

but still can be equal and smallest. In order to construct such pairs, we will use plabic

graphs from [27]. A bijection between plabic graphs and weakly separated sets was

constructed in [26].

1.10.1 Plabic graphs

Let us give some definitions and theorems from [27, 26]. See these papers for more details.

Definition 1.10.1. A plabic graph (planar bicolored graph) is a planar undirected

graph G drawn inside a disk with vertices colored black or white. The vertices

on the boundary of the disk, called the boundary vertices, are labeled in clockwise order by [n].

Definition 1.10.2. Let G be a plabic graph. A strand in G is a directed path T such

that T satisfies the following rules of the road: At every black vertex turn right, and

at every white vertex turn left.

Definition 1.10.3. A plabic graph is called reduced if the following holds:

1. (No closed strands) The strands cannot be closed loops in the interior of the

graph.

2. (No self-intersecting strands) No strand passes through itself. The only excep-

tion is that we allow simple loops that start and end at a boundary vertex i.

3. (No bad double crossings) For any two strands α and β, if α and β have two

common vertices A and B, then one strand, say α, is directed from A to B, and

the other strand β is directed from B to A. (That is, the crossings of α and β

occur in opposite orders in the two strands.)

Any strand in a reduced plabic graph G connects two boundary vertices.

Definition 1.10.4. We associate the decorated strand permutation π_G ∈ S_n with a reduced plabic graph G, such that π_G(i) = j if the strand that starts at the boundary vertex i ends at the boundary vertex j. A strand is labelled by i ∈ [n] if it ends at the boundary vertex i (and starts at the boundary vertex π_G^{-1}(i)).

The fixed points of π_G are colored in two colors as follows. If i is a fixed point of π_G, that is π_G(i) = i, then the boundary vertex i is attached to a vertex v of degree 1. The color of i is the color of the vertex v.

Let us describe a certain labeling of faces of a reduced plabic graph G with subsets of [n]. Let i ∈ [n] and consider the strand labelled by i. By Definition 1.10.3(2), this strand divides the disk into two parts. Place i in every face F that lies to the left of strand i. Apply the same process for every i in [n]. We then say that the label of F is the collection of all i's that are placed inside F. Finally, let F(G) be the set of labels that occur on the faces of the graph G. In [27] it was shown that all the faces of G are labeled by the same number of strands, which we denote by k. The following theorem is from [26].

Theorem 1.10.5 ([26]). Each maximal weakly separated collection C ⊂ \binom{[n]}{k} has the form C = F(G) for some reduced plabic graph G with decorated strand permutation

π(i) = i + k (mod n), i = 1, ..., n.

Let us describe three types of moves on a plabic graph:

(M1) Pick a square with vertices alternating in colors, such that all vertices have

degree 3. We can switch the colors of all the vertices as described in Figure 1-7.

Figure 1-7: (M1) square move

(M2) For two adjacent vertices of the same color, we can contract them into one vertex.

See Figure 1-8.

Figure 1-8: (M2) unicolored edge contraction

(M3) We can insert or remove a vertex inside any edge. See Figure 1-9.

Figure 1-9: (M3) vertex removal

The moves do not change reducedness of plabic graphs.

Theorem 1.10.6 ([27]). Let G and G' be two reduced plabic graphs with the same number of boundary vertices. Then G and G' have the same decorated strand permutation π_G = π_{G'} if and only if G' can be obtained from G by a sequence of moves (M1)-(M3).

1.10.2 p-Interlaced sets

Let us associate to each pair I, J of k-element subset in [n] a certain lattice path.

Definition 1.10.7. Let I, J ∈ \binom{[n]}{k} be two k-element sets, and let r = |I \ J| = |J \ I|.

Let (I \ J) ∪ (J \ I) = {c_1 < c_2 < ... < c_{2r−1} < c_{2r}}. Define P = P(I, J) to be the lattice path in Z² that starts at P_0 = (0, 0), ends at P_{2r} = (2r, 0), and consists of up

steps (1, 1) and down steps (1, −1), such that if c_i ∈ I \ J (resp., c_i ∈ J \ I) then the

i-th step of P is an up step (resp., a down step).

For example, the paths P({1, 4, 7, 8}, {2, 3, 5, 6}) and P({1, 2, 3, 6}, {4, 5, 7, 8}) are

shown in Figure 1-10.

Figure 1-10: P({1, 4, 7, 8}, {2, 3, 5, 6}) and its cyclic rotation P({1, 2, 3, 6}, {4, 5, 7, 8})

Clearly, for any pair I, J ∈ \binom{[n]}{k}, there is a cyclic shift I', J' such that the path P(I', J') is a Dyck path, that is, it never goes below y = 0. From now on we will

assume, without loss of generality, that P(I, J) is a Dyck path.

Definition 1.10.8. A peak in the path P = P(I, J) is an index i ∈ [2r − 1] such

that the i-th step of P is an up step and the (i + 1)-st step of P is a down step.

We say that the pair I, J is p-interlaced if the number of peaks in P(I, J) is p.

For example, the pair {1, 2, 3, 6}, {4, 5, 7, 8} is 2-interlaced.

Remark 1.10.9. The pair I, J ∈ \binom{[n]}{k} is weakly separated if and only if it is 1-interlaced. The pair I, J ∈ \binom{[n]}{k} for which |I \ J| = |J \ I| = r is sorted if and only if it is r-interlaced.

For a p-interlaced pair I, J, the length parameters (α_1, β_1, α_2, β_2, ..., α_p, β_p) are defined as the lengths of the 2p straight line segments of P(I, J). (The α_i are the lengths of the maximal runs of up steps, and the β_i are the lengths of the maximal runs of down steps.) For example, the length parameters for the pair {1, 2, 3, 6}, {4, 5, 7, 8} are (3, 2, 1, 2).
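The path P(I, J), its peaks, and its length parameters are easy to compute from the symmetric difference. The following Python sketch (our own helper names) does so for the running example; by Remark 1.10.9, exactly one peak corresponds to weak separation.

from itertools import groupby

def path_steps(I, J):
    diff = sorted(set(I) ^ set(J))
    return [+1 if c in I else -1 for c in diff]   # +1 = up step, -1 = down step

def peaks(steps):
    return sum(1 for a, b in zip(steps, steps[1:]) if (a, b) == (+1, -1))

def length_parameters(steps):
    return [len(list(run)) for _, run in groupby(steps)]

I, J = {1, 2, 3, 6}, {4, 5, 7, 8}
s = path_steps(I, J)
print(peaks(s), length_parameters(s))             # 2 and [3, 2, 1, 2]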

1.10.3 Conjecture and results on pairs of smallest minors

We are now ready to state a conjecture regarding the structure of pairs of minors that can be equal and minimal.

Conjecture 1.10.10. Let I, J ∈ \binom{[n]}{k} be such that P(I, J) is a Dyck path. Then there exists an arrangement of smallest minors S ⊂ \binom{[n]}{k} such that I, J ∈ S if and only if one of the following holds:

1. the pair I, J is 1-interlaced (equivalently, it is weakly separated), or

2. the pair I, J is 2-interlaced and its length parameters (α_1, β_1, α_2, β_2) satisfy α_i ≠ β_j for all i and j.

We now explain the necessity of the condition in part (2) above. Let I, J be a 2-interlaced pair for which α_i = β_j for some i, j ∈ {1, 2}. Then Skandera's inequalities (Theorem 1.6.3) imply that there exist K, L ∈ \binom{[n]}{k} such that Δ_I Δ_J > Δ_K Δ_L. Therefore it is impossible to have I, J ∈ S, and hence the criterion in part (2) is necessary.

Let us provide some evidence for the validity of the conjecture. From Theo-

rem 1.5.6, the conjecture holds for 1 ≤ k ≤ 3. We will show in this section that the

conjecture holds for k = 4, 5 as well, and then suggest a possible way to generalize the

proof for general k. The idea behind the construction is that pairs I, J that appear in

the conjecture are related in a remarkable way via a certain chain of moves of plabic

graphs.

Theorem 1.10.11. Conjecture 1.10.10 holds for k ≤ 5 (or k ≥ n − 5) and any n.

In order to prove this theorem, we will present several examples of matrices with

needed equalities and inequalities between the minors. It is not hard to check directly

that these matrices satisfy the needed conditions. However, it was quite nontrivial to

find these examples. After the proof we will explain a general method that allowed

us to construct such matrices using plabic graphs.

Proof. Because of the duality Gr(k, n) ≅ Gr(n − k, n), the cases k ≥ n − 5 are equivalent to the cases k ≤ 5. The case k ≤ 3 follows from Theorem 1.5.6.

Let us assume that k = 4. If I ∩ J ≠ ∅, then the problem reduces to a smaller k and the result follows from Theorem 1.5.6. Therefore, assume that I ∩ J = ∅.

Without loss of generality we can assume that n = 8. Using the cyclic symmetry of the Grassmannian and the results from the previous sections, there is only one case to consider: I = {1, 2, 3, 6}, J = {4, 5, 7, 8} (all the other cases follow either from Theorem 1.5.5 or from Theorem 1.6.3). The matrix below satisfies Δ_I = Δ_J = 1, and Δ_K > 1 for all other K ∈ \binom{[8]}{4}.

1 0 0 0 -1 -7 -7 -13

3 19 95 33 0 1 0 0 2 2 4 2

5 27 125 43 0 0 1 0 2 2 4

\0 0 0 1 1 1 2

This proves the case k = 4.

Let us now assume that k = 5. If I ∩ J ≠ ∅ then, using a similar construction, we are done. So assume that I ∩ J = ∅. Up to cyclic shifts and exchanging the roles of I and J, there are three cases to consider:

1. I = {1, 2, 3, 4, 7}, J = {5, 6, 8, 9, 10}

2. I = {1, 2, 3, 4, 8}, J = {5, 6, 7, 9, 10}

3. I = {1, 2, 3, 6, 8}, J = {4, 5, 7, 9, 10}

We need to show that the pairs I, J that appear in cases (1) and (2) can be equal and minimal, while the pair that appears in case (3) cannot be equal and minimal. Let Q = −2955617 + √8665656785065. Then the following two matrices provide the constructions for cases (1) and (2), respectively. In each one of them, Δ_I = Δ_J = 1, and Δ_U > 1 for all other U ∈ \binom{[10]}{5}.

1 0 0 0 0 1 6 53 98311+ Q 237904

0 1 0 0 0 -1 -5 -36 -32768 -79343

- 0 0 1 0 0 1 4 20 372 19 +- 186Q

0 0 0 1 0 -1 -3 -5 -6 -7

0 0 0 0 1 1 1 1 1 1


1 0 0 0 0 1 5 25 265 318

0 1 0 0 0 -1 -4 -17 -128 4869

0 0 1 0 0 1 3 10 43 1237612480 0 0 0 1 0 -1 -2 -4 -9 -10

0 0 0 0 1 1 1 1 1 1

We will now consider case (3). Assume by way of contradiction that there exists

M ∈ Gr+(5, 10) for which Δ_I(M) = Δ_J(M) = 1, and all the other Plücker coordinates of M are at least 1. Let G be the plabic graph that appears in Figure 1-11.

Figure 1-11: The plabic graph G

The faces of G form a maximal weakly separated set, and note that one of the faces is labeled by I (the face with the yellow background). We assume that Δ_I = 1, and assign 25 variables to the remaining 25 Plücker coordinates that correspond to the faces of G (G has 26 faces). Among those 25 faces, eight were of particular importance for the proof, and we assign the following variables to the corresponding eight minors:

Δ_{1,6,7,8,9} = C, Δ_{1,5,6,7,8} = D, Δ_{1,2,3,8,9} = B, Δ_{1,2,3,5,8} = A, Δ_{1,2,3,4,5} = X_1, Δ_{6,7,8,9,10} = X_2, Δ_{4,5,6,7,8} = X_3, Δ_{1,2,3,9,10} = X_4

(those variables also appear in the figure, where the relevant labels are written in red).

Recall that we assume that all these variables are greater than or equal to 1. By Theorem 1.7.1 and the discussion afterwards, any other Plücker coordinate can be uniquely expressed as a Laurent polynomial in those 25 variables with positive integer coefficients. Using the software Mathematica, we expressed Δ_J in terms of these 25 variables. The minor Δ_J is a sum of Laurent monomials², and among others, the terms X_1 X_2 · DB/(AC) and X_3 X_4 · AC/(DB) appear in this sum. Note that since all the variables are at least 1, we have

Δ_J ≥ X_1 X_2 · (DB)/(AC) + X_3 X_4 · (AC)/(DB) ≥ DB/(AC) + AC/(DB) > 1.

Therefore, it is impossible to have Δ_I(M) = Δ_J(M) = 1, and we are done. □

² For the sake of brevity, we omit this expression for Δ_J here; the authors can provide it upon request.

The above proof of Theorem 1.10.11 gives rise to a new type of inequalities for products of minors.

Proposition 1.10.12. For all points of the positive Grassmannian Gr+(5, 10), we have

(1) (Δ_{4,5,7,9,10} Δ_{1,2,3,6,8})² > 4 Δ_{1,2,3,4,5} Δ_{6,7,8,9,10} Δ_{4,5,6,7,8} Δ_{1,2,3,9,10}.

Also, for all points of the nonnegative Grassmannian Gr≥0(5, 10), we have

(2) (Δ_{4,5,7,9,10} Δ_{1,2,3,6,8})² ≥ 4 Δ_{1,2,3,4,5} Δ_{6,7,8,9,10} Δ_{4,5,6,7,8} Δ_{1,2,3,9,10}.

Proof. Inequality (1) follows from the last inequality in the proof of Theorem 1.10.11 and the well-known AM-GM inequality a + b ≥ 2√(ab) for positive a and b. Here a = X_1 X_2 · DB/(AC) and b = X_3 X_4 · AC/(DB). Inequality (2) follows from (1) by continuity. □

Remark 1.10.13. The proposition above implies the following two inequalities as well. For all points of the positive Grassmannian Gr+(5, 10), we have

Δ_{4,5,7,9,10} Δ_{1,2,3,6,8} > 2 min{Δ_{1,2,3,4,5} Δ_{6,7,8,9,10}, Δ_{4,5,6,7,8} Δ_{1,2,3,9,10}}.

Also, for all points of the nonnegative Grassmannian Gr≥0(5, 10), we have

Δ_{4,5,7,9,10} Δ_{1,2,3,6,8} ≥ 2 min{Δ_{1,2,3,4,5} Δ_{6,7,8,9,10}, Δ_{4,5,6,7,8} Δ_{1,2,3,9,10}}.

It is easy to verify that Skandera's inequalities (Theorems 1.6.1 and 1.6.3) do not provide a lower bound for Δ_{4,5,7,9,10} Δ_{1,2,3,6,8}, and hence the inequality above does not follow from Skandera's inequalities. It would be interesting to characterize this new type of inequalities for totally positive matrices.

Actually, by a more careful examination of the Laurent polynomial in the proof of Theorem 1.10.11, we can prove the slightly stronger inequality for points of Gr+(5, 10):

(Δ_{4,5,7,9,10} Δ_{1,2,3,6,8})² > 16 Δ_{1,2,3,4,5} Δ_{6,7,8,9,10} Δ_{4,5,6,7,8} Δ_{1,2,3,9,10}.

1.10.4 The 2 × 2 honeycomb and an example of an arrangement of smallest minors which is not weakly separated

We would like to explain how we constructed the matrices in the proof of Theo- rem 1.10.11, using properties of plabic graphs. We think that these properties may be generalized and lead to the proof of Conjecture 1.10.10. In addition, these prop- erties also reveal a quite remarkable structure of plabic graphs that is interesting on its own.

Figure 1-12: The 2 × 2 honeycomb

Let us first consider the case k = 4. Consider the plabic graph G in Figure 1-12. The 12 faces of G form a weakly separated set C = F(G), and one of the faces (the square face) is labelled by I = {1, 2, 3, 6} (which is the minor that appeared in the proof for the case k = 4). Consider the four bounded faces of G. They consist of a square face labeled with I, and three additional hexagonal faces. We call such a plabic graph a 2 × 2 honeycomb. (We will show later how to generalize it.)

One way to complete C = F(G) to a maximal weakly separated set C' in \binom{[8]}{4} is C' = C ∪ {{1,2,3,4}, {4,5,6,7}, {1,6,7,8}, {1,2,7,8}, {1,3,7,8}}.

Assign the variable T to the Plücker coordinates associated with the three hexagonal faces mentioned above, and assign the value 1 to the Plücker coordinates of the rest of the faces in C'. Using the software Mathematica, we expressed all the other Plücker coordinates Δ_K, K ∈ \binom{[8]}{4} \ C', as functions (positive Laurent polynomials) of T. We checked that, for every K ≠ J = {4, 5, 7, 8}, the Laurent polynomial that corresponds to Δ_K has either the summand 1 or the summand T. Therefore, if we require T ≥ 1, then Δ_K ≥ 1 for all K ≠ {4, 5, 7, 8}. Finally, Δ_{4,5,7,8} = 6/T. Therefore, by choosing T = 6, we get an element of Gr+(4, 8) for which Δ_I = Δ_J = 1.

The matrix in the proof is exactly the matrix that corresponds to the construction described above. Moreover, the collection of smallest minors in this matrix consists of the 15 minors that correspond to (C' \ {{2,3,5,6}, {2,3,6,8}, {3,5,6,8}}) ∪ {{4,5,7,8}}.

We verified that this is a maximal arrangement of smallest minors.

Remark 1.10.14. Conjecture 1.5.7 states that, for k = 4, n = 8, any maximal (by size) arrangement of smallest minors is weakly separated and has 17 elements. Here we constructed a maximal (by containment, but not by size) arrangement of smallest minors (C' \ {{2,3,5,6}, {2,3,6,8}, {3,5,6,8}}) ∪ {{4,5,7,8}} that has 15 elements and contains a pair I, J which is not weakly separated.

1.10.5 Mutation distance and chain reactions

Definition 1.10.15. Let I, J ∈ \binom{[n]}{k} be any two k-element subsets of [n]. Define the mutation distance D(I, J) as the minimal number of square moves (M1) needed to transform a plabic graph G that contains I as a face label into a plabic graph G' that contains J as a face label. (The moves (M2) and (M3) do not contribute to the mutation distance.)

Clearly, D(I, J) = 0 if and only if I and J are weakly separated. Indeed, any two weakly separated k-element subsets can appear as face labels in the same plabic graph.

The number D(I, J) measures how far I and J are from being weakly separated.
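Following Remark 1.10.9, the base case D(I, J) = 0 can be tested directly: I and J are weakly separated exactly when I \ J and J \ I each form a single cyclic block among the elements of the symmetric difference (equivalently, some cyclic shift gives a Dyck path with a single peak). A small Python sketch with our own helper names:

def weakly_separated(I, J):
    diff = sorted(set(I) ^ set(J))
    signs = [+1 if c in I else -1 for c in diff]
    # cyclically, count sign changes; at most 2 means one block of each sign
    changes = sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)
    return changes <= 2

print(weakly_separated({1, 2, 3, 6}, {4, 5, 7, 8}))   # False
print(weakly_separated({1, 2, 3, 4}, {3, 4, 5, 6}))   # True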

Problem 1.10.16. How to calculate the mutation distance between any I and J, and

how to find a shortest chain of square moves between plabic graphs containing these

subsets?

Note that this problem can be extended in a general setting of cluster algebras as finding the mutation distance between two given cluster variables.

Below we give several examples of pairs I, J and shortest chains of square moves between plabic graphs containing I and J, respectively.

Example 1.10.17. In the previous subsection, we constructed an arrangement of

smallest minors that included the non weakly separated pair I = {1, 2, 3, 6} and J = {4, 5, 7, 8}. In order to calculate D(I, J), let us describe a shortest chain of square

moves between a pair of plabic graphs that contain I = {1, 2, 3, 6} and J = {4, 5, 7, 8}, respectively. Since I and J are not weakly separated, they cannot appear as face labels

of the same plabic graph. We start with the plabic graph shown in Figure 1-12 (the

2 x 2 honeycomb) that contains I as the label of its square face. We want to transform

it into another plabic that contains J as a face label using the minimal possible number

of square moves. In order to do this, we first apply a square move (Ml) to the face

I = {1, 2, 3, 6}. Then apply square moves to faces {2, 3, 4, 6} and {2, 3, 6, 7} (those faces become squares after appropriatemoves of type (M2), so it is possible to apply

a square move to them). Finally, apply a square move to the face {3,4,6,7}. The

result is exactly J = {4, 5, 7, 8}.

We verified, using a computer, that this is indeed a shortest chain of moves that

"connects" I with J. Moreover, this is the only shortest chain of moves for this pair

of subsets. Therefore, D(I, J) = 4 in this case.

The sequence of moves in the above example can be generalized as follows. Pick a pair I, J with length parameters (α_1, β_1, α_2, β_2) as in case (2) of Conjecture 1.10.10 such that α_2 = 1. Consider the β_1 × β_2 honeycomb. The structure of such a honeycomb should be clear from the examples in Figures 1-12, 1-13, and 1-14. This β_1 × β_2 honeycomb has β_1 β_2 − 1 hexagonal faces, and one square face on the bottom with label I. The square face serves as a "catalyst" for a "chain reaction" of moves. First, we apply a square move (M1) to I. This transforms the neighbouring hexagons into squares (after some (M2) moves). Then we apply square moves to these new squares, which in turn transforms their neighbours into squares, etc. In the end, we obtain a new honeycomb with all hexagonal faces except one square face on the top with label

J.

Example 1.10.18. Figure 1-13 presents an example for the pair I = {1, 2, 3, 4, 8} and J = {5, 6, 7, 9, 10}. The length parameters are (α_1, β_1, α_2, β_2) = (4, 3, 1, 2). In this example, the face A of the first honeycomb has label I and the face F' of the last honeycomb has label J. Figure 1-13 shows a shortest chain of square moves of length D(I, J) = 6 "connecting" I and J.

Conjecture 1.10.19. Let G be a reduced plabic graph with the strand permutation π_G(i) = i + k (mod n) that contains an a × b honeycomb H as a subgraph, such that a, b > 1. Let I be the label of the square face of the honeycomb H, and let J be the label of the square face of the honeycomb H' obtained from H by the chain reaction. Assign the value T to the Plücker coordinates corresponding to the hexagons in the honeycomb H, and the value 1 to the Plücker coordinates of the rest of the faces of G (including Δ_I = 1). Express any other Plücker coordinate Δ_K as a Laurent polynomial in T with positive integer coefficients. Then the degree of the Laurent polynomial, for any Δ_K, K ≠ J, is at least 0; that is, it contains at least one term T^a with a ≥ 0. Also, the degree of the polynomial for Δ_J is at most −1; that is, it only contains terms T^b with b ≤ −1.

This conjecture suggests that there exists a unique positive value of T such that Δ_I = Δ_J = 1, and all the other Plücker coordinates Δ_K > 1. This provides a

construction of a matrix for an arrangement of smallest minors containing I and J, for any pair I, J as in part (2) of Conjecture 1.10.10 with α_2 = 1.

Figure 1-13: The chain reaction in the 3 × 2 honeycomb.

Example 1.10.20. Let us give another example for the case α_2 = 1. The 4 × 3 honeycomb that appears in Figure 1-14 corresponds to the pair I = {1, 2, 3, 4, 5, 6, 11} and J = {7, 8, 9, 10, 12, 13, 14}. The length parameters of P(I, J) are (6, 4, 1, 3). In this case we need D(I, J) = 12 mutations.

Figure 1-14: The 4 × 3 honeycomb.

Figure 1-15: A honeycomb with one layer.

Example 1.10.21. Let us give an example for the case α_2 = 2. Consider the pair I = {1, 2, 3, 4, 8, 9}, J = {5, 6, 7, 10, 11, 12}. The length parameters of P(I, J) are (α_1, β_1, α_2, β_2) = (4, 3, 2, 3). We can obtain the face J via a chain reaction that starts with a plabic graph containing the face I, as follows.

Consider the plabic graph in Figure 1-15. This plabic graph consists of a 2 × 2 honeycomb surrounded by one "layer" of hexagonal faces. In this plabic graph, the square face (denoted by 1) has label I. The chain reaction that enables us to obtain the face J is the following. First, apply a square move to face 1. Then (after some moves of type (M2)) apply square moves to the faces denoted by 2 (in any order). We continue with square moves on the faces denoted by 3 and then the faces denoted by 4. After this iteration, we apply the chain reaction again, this time only on the internal faces (with red labels). Then the face denoted by 3 (in red) will have the label J. We need D(I, J) = 16 square moves.

In order to obtain an arrangement of smallest minors that contains both I and J, one can complete the graph G in Figure 1-15 to a maximal weakly separated set and assign the following values to its Plücker coordinates: assign the value 1 to all the coordinates that do not appear in G, and also to the square face of G. Assign the value T to the coordinates in G that correspond to the "layer." Assign the value T² to the coordinates of the 2 × 2 honeycomb (shown in red), excluding the square face.

We checked, using a computer, that there exists a unique T for which Δ_I and Δ_J are equal and minimal.

1.10.6 Square pyramids and the octahedron/tetrahedron moves

We conclude with a brief discussion of an alternative geometric description for the chain reactions of honeycomb plabic graphs. The objects described below are special cases of membranes from the forthcoming paper [19]. They are certain surfaces associated with plabic graphs.

Define the following map π_I from weakly separated sets to R⁴. Subdivide [n] into a disjoint union of four intervals [n] = T_1 ∪ T_2 ∪ T_3 ∪ T_4 such that T_1 = [1, a], T_2 = [a + 1, b], T_3 = [b + 1, c], T_4 = [c + 1, n], for some 1 ≤ a < b < c < n. Assume that I = T_1 ∪ T_3 ∈ \binom{[n]}{k}. Then [n] \ I = T_2 ∪ T_4. Let π_I : \binom{[n]}{k} → R⁴ be the projection given by

π_I(W) = (|W ∩ T_1|, |W ∩ T_2|, |W ∩ T_3|, |W ∩ T_4|).

For example, π_I(I) = (a, 0, c − b, 0). The image π_I(W) belongs to the 3-dimensional hyperplane {x_1 + x_2 + x_3 + x_4 = k} ≅ R³ in R⁴. For a plabic graph G (whose face labels W ∈ \binom{[n]}{k} form a weakly separated set F(G)), the map π_I maps the elements W ∈ F(G) to integer points on a 2-dimensional surface in R³.
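The projection π_I is straightforward to compute. A small Python sketch (our own names), illustrated with hypothetical interval endpoints:

def pi(W, a, b, c, n):
    """Projection (|W ∩ T1|, |W ∩ T2|, |W ∩ T3|, |W ∩ T4|) for T1=[1,a], T2=[a+1,b], T3=[b+1,c], T4=[c+1,n]."""
    T = [range(1, a + 1), range(a + 1, b + 1), range(b + 1, c + 1), range(c + 1, n + 1)]
    return tuple(len(set(W) & set(t)) for t in T)

# With n = 8 and (a, b, c) = (2, 4, 6) we have I = T1 ∪ T3 = {1, 2, 5, 6}:
print(pi({1, 2, 5, 6}, 2, 4, 6, 8))      # (2, 0, 2, 0) = (a, 0, c - b, 0)
print(pi({1, 4, 5, 8}, 2, 4, 6, 8))      # (1, 1, 1, 1)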

Figure 1-16: the octahedron move

Figure 1-17: the tetrahedron move

The map π_I transforms the moves (M1) and (M2) of plabic graphs into the "octahedron move" and the "tetrahedron move" of the corresponding 2-dimensional surfaces, as shown in Figures 1-16 and 1-17. For example, the "octahedron move" replaces a part of the surface which is the upper boundary of an octahedron by the lower part of the octahedron. (This construction is a special case of a more general construction that will appear in full detail in [19].) As an example, consider the sequence of plabic graphs in the chain reaction shown in Figure 1-13. In this case I = {1, 2, 3, 4, 8} and J = {5, 6, 7, 9, 10}. Let G and H be the first and the last plabic graphs (respectively) in this chain reaction. Then

I ∈ F(G) and J ∈ F(H). The image π_I(F(G)) consists of integer points on the upper boundary of a square pyramid with top vertex π_I(I) (see part (a) of Figure 1-18).

The map π_I transforms the chain reaction shown in Figure 1-13 into the sequence of 2-dimensional surfaces in R³ shown in Figure 1-18. These surfaces are the upper boundaries of the solids obtained from the square pyramid by repeatedly removing little octahedra and tetrahedra, as shown in the figure.

Figure 1-18: The chain reaction in a 3 × 2 honeycomb, described using octahedron and tetrahedron moves. Panels: (a) initial surface; (b) 1 octahedron move; (c) 3 tetrahedron moves; (d) 2 octahedron moves; (e) 4 tetrahedron moves; (f) 2 octahedron moves; (g) 3 tetrahedron moves; (h) 1 octahedron move.

Similarly, Figures 1-19 and 1-20 show the surfaces for the chain reaction that corresponds to the plabic graph from Figure 1-15.

Figure 1-19: First 8 steps in the chain reaction

1.11 Final remarks

1.11.1 Mutation distance

In the current chapter, we presented several pairs of k-tuples and their mutation distance. Further results regarding mutation distance and the associated shortest chain of square moves can be found in [5].

Figure 1-20: Final 6 steps in the chain reaction

1.11.2 Inequalities between products of minors

Proposition 1.10.12 gives rise to a new type of inequality between products of minors in totally positive matrices. It would be interesting to generalize these inequalities to matrices of any size, as well as to find additional types of inequalities.

1.11.3 Schur positivity

Skandera's inequalities [31] for products of minors discussed in Section 1.6, and also results of Rhoades-Skandera [28] on immanants, are related to Schur positivity of expressions of the form s_λ s_μ − s_ν s_ρ in terms of Schur functions. In [21], several Schur positivity results of this form were proved. There are some parallels between the current work on arrangements of equal minors and constructions from [21]. It would be interesting to clarify this link.

Chapter 2

Arrangements of Minors in the Positive Grassmannian and a Triangulation of the Hypersimplex

This chapter is based on [7].

2.1 Introduction

In this chapter, we continue our study of the relationship between equalities and inequalities of minors in the positive Grassmannian and a triangulation of the hypersimplex. In the previous chapter, we posed the following problem: what is the full structure of all the possible equalities and inequalities between minors in totally positive matrices? The only part of this problem that we discussed was the structure of the minors with largest value and smallest value, while the rest of the problem remains open. The description in the previous chapter involved rich combinatorial structures. Arrangements of smallest minors were shown to be related to weakly separated sets. Such sets were originally introduced by Leclerc-Zelevinsky [22] in the study of quasi-commuting quantum minors, and are closely related to the associated cluster algebra of the positive Grassmannian. Arrangements of largest minors were shown to be in bijection with simplices of Sturmfels's triangulation of the hypersimplex, which also appear in the context of Gröbner bases [20]. In this chapter, we discuss the general case and its close relation with the triangulation of the hypersimplex. As in the previous chapter, we restrict ourselves to the case of the positive Grassmannian, which means that all the Plücker coordinates are positive. We begin by extending Definition 1.3.4 from the previous chapter.

Definition 2.1.1. We say that a subset 𝒥 ⊂ \binom{[n]}{k} is an arrangement of t-th largest (smallest) minors in Gr+(k, n) if there exists a nonempty stratum S_ℐ, ℐ = (I_0, I_1, ..., I_l), such that I_0 = ∅ and I_{l−t+1} = 𝒥 (respectively, I_t = 𝒥). If t = 1 we say that such an arrangement is an arrangement of largest (smallest) minors.

Here we are interested in the combinatorial description of arrangements of t-th largest minors for t ≥ 2. For a stratum S_ℐ, the structure of I_t for t < l depends on the structure of I_l, as we will show later.

Definition 2.1.2. Let 𝒥 ⊂ \binom{[n]}{k} be an arrangement of largest minors. We say that 𝒴 ⊂ \binom{[n]}{k} is a (t, 𝒥)-largest arrangement (t ≥ 2) if there exists a nonempty stratum S_ℐ such that I_0 = ∅, I_l = 𝒥 and I_{l−t+1} = 𝒴.

We say that W ∈ \binom{[n]}{k} is a (t, 𝒥)-largest minor if there exists a (t, 𝒥)-largest arrangement 𝒴 such that W ∈ 𝒴.

In particular, if 𝒴 ⊂ \binom{[n]}{k} is a (t, 𝒥)-largest arrangement, then 𝒴 is also an arrangement of t-th largest minors. Example 1.3.2 from the previous chapter implies that {{3, 4}} is a (3, {{1, 2}, {2, 3}, {1, 3}, {2, 4}})-largest arrangement, and that {1, 4} is a (2, {{1, 2}, {2, 3}, {1, 3}, {2, 4}})-largest minor.

We present below the general outline of the chapter, including some selected results. Some of these results involve definitions and notations that will be presented later in the chapter. In Section 2.2 we introduce the hypersimplex Δ_{k,n}, its dual graph Γ_{k,n}, and several of its (equivalent) triangulations. In Section 2.3 we present our main results on arrangements of second largest minors. These include necessary and sufficient conditions on a collection of subsets to form such an arrangement. In particular, we show that while maximal arrangements of largest minors are in bijection with the vertices of Γ_{k,n}, the structure of arrangements of second largest minors is strongly related to the structure of edges in Γ_{k,n}. We start from the case k = 2 and prove the following theorem.

Theorem 2.1.3. Let W ∈ \binom{[n]}{2} and let 𝒥 ⊂ \binom{[n]}{2} be some maximal arrangement of largest minors such that W ∉ 𝒥. The following statements are equivalent.

1. W is a (2, 𝒥)-largest minor.

2. There exists a vertex Q in Γ_{2,n} that is adjacent to 𝒥 such that W ∈ Q.

3. There exists J ∈ 𝒥 such that (𝒥 \ {J}) ∪ {W} is an arrangement of largest minors.

4. There exist four distinct numbers a − 1, a, b, b + 1 (mod n) with a < b such that {{a, b}, {a − 1, b}, {a, b + 1}} ⊂ 𝒥 and W = {a − 1, b + 1}.

In particular, the minors that can be second largest are in bijection with the edges of Γ_{2,n} that are connected to the vertex 𝒥, and the number of such minors is at most n.

We then generalize the result above for any k, and prove:

Theorem 2.1.4. Let W ∈ \binom{[n]}{k} and let 𝒥 ⊂ \binom{[n]}{k} be some arrangement of largest minors such that W ∉ 𝒥. Denote |𝒥| = c. If W is a (2, 𝒥)-largest minor, then one of (1), (2) holds, or equivalently, one of (3), (4) holds:

1. The collection {W} ∪ 𝒥 is sorted.

2. There exists J ∈ 𝒥 such that W and J are not sorted, and (𝒥 \ {J}) ∪ {W} is a sorted collection.

3. V_{{W} ∪ 𝒥} is a c-dimensional simplex in Sturmfels's triangulation of the hypersimplex Δ_{k,n}.

4. There exists a (c − 1)-dimensional simplex V_𝒴 in Sturmfels's triangulation such that e_W is a vertex of V_𝒴, and the simplices V_𝒴, V_𝒥 share a common facet.

Finally, in Section 2.4 we discuss arrangements of t-th largest minors and their relations to a certain notion of distance between simplices in the hypersimplex. We begin by introducing the cubical distance d_cube(W, 𝒥) on Γ_{k,n}, and then state the following conjecture regarding (t, 𝒥)-largest minors.

Conjecture 2.1.5. Let W ∈ \binom{[n]}{k} and let 𝒥 ⊂ \binom{[n]}{k} be some maximal arrangement of largest minors. If d_cube(W, 𝒥) = t, then W is a (≥ t + 1, 𝒥)-largest minor.

We then proceed to prove this conjecture for a wide family of cases, including n > 2t, k = 2, n − 2 (and any n, t), and t = 2, 3 (and any n, k). We conclude the

section by showing that arrangements of tth largest minors must lie within a certain

ball in R' of radius 2-1.

2.2 The Triangulation of the Hypersimplex

In this section, we delve deeper into the subject of the triangulation of the hypersimplex. We recall some definitions from the previous chapter and expand the discussion, introducing more properties of the triangulation. Recall that the hypersimplex

Δ_{k,n} = {(x_1, ..., x_n) | 0 ≤ x_1, ..., x_n ≤ 1; x_1 + x_2 + ... + x_n = k}

has normalized volume equal to the Eulerian number A(n − 1, k − 1), that is, the number of permutations w on n − 1 elements with exactly k − 1 descents. (A bijective proof of this property was given by Stanley in [32].) In [20] four different constructions of a triangulation of the hypersimplex into A(n − 1, k − 1) unit simplices are presented:

Stanley's triangulation [32], the alcove triangulation, the circuit triangulation, and Sturmfels's triangulation [35]. It was shown in [20] that these four triangulations coincide. We now recall Sturmfels's construction introduced in the previous chapter, following the notation of [20]. Afterwards we describe the circuit triangulation as it appears in [20].

2.2.1 Sturmfels's construction

For I ∈ \binom{[n]}{k}, let e_I be the 0/1-vector e_I = (ε_1, ε_2, ..., ε_n) such that ε_i = 1 if i ∈ I, and ε_i = 0 otherwise. In some cases we will use I instead of e_I (if it is clear from the context). For a sorted collection S = {S_1, ..., S_r}, we denote by V_S the (r − 1)-dimensional simplex with the vertices e_{S_1}, ..., e_{S_r}. Note that if S_i = {a_{i1} < a_{i2} < ... < a_{ik}} for all i, then S is sorted if (possibly after reordering the S_i's) we have

a_{11} ≤ a_{21} ≤ ... ≤ a_{r1} ≤ a_{12} ≤ a_{22} ≤ ... ≤ a_{r2} ≤ ... ≤ a_{1k} ≤ a_{2k} ≤ ... ≤ a_{rk}.

Theorem 2.2.1. [35, Theorems 14.2 and 8.3] The collection of simplices V_S, where S varies over all sorted collections of k-element subsets of [n], is a simplicial complex that forms a triangulation of the hypersimplex Δ_{k,n}. Maximal-by-inclusion sorted collections correspond to the maximal simplices in the triangulation, and they are of size n.
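Sortedness of a collection can be tested pairwise via the sorting operation, assuming the standard fact that a collection is sorted exactly when each pair of its elements is sorted. The following Python sketch (our own helper names, not from the thesis) illustrates this on a maximal sorted set in \binom{[4]}{2}.

def pair_sorted(I, J):
    union = sorted(list(I) + list(J))
    s1, s2 = union[0::2], union[1::2]
    return {tuple(s1), tuple(s2)} == {tuple(sorted(I)), tuple(sorted(J))}

def collection_sorted(S):
    return all(pair_sorted(I, J) for i, I in enumerate(S) for J in S[i + 1:])

# The vertices of a maximal simplex of the triangulation of Delta_{2,4}:
print(collection_sorted([{1, 2}, {1, 3}, {1, 4}, {2, 4}]))   # True
print(pair_sorted({1, 2}, {3, 4}))                            # False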

Definition 2.2.2. The dual graph Γ_{k,n} of Sturmfels's triangulation of Δ_{k,n} is the graph whose vertices are the maximal simplices, and two maximal simplices are adjacent by an edge if they share a common facet.

Figure 2-1 depicts the graph Γ_{2,6}. This graph has A(5, 1) = 26 vertices, each corresponding to a maximal thrackle (these are the maximal sorted sets for k = 2) on 6 vertices. We also describe explicitly 6 of the vertices. In particular, the vertices a and b are connected since b can be obtained from a by removing the edge {1, 6} and adding instead the edge {2, 5}. Therefore V_a and V_b share a common facet.

2.2.2 Circuit triangulation

We start by defining the graphs Gk,n and circuits in these graphs. These definitions are taken from [201.

Definition 2.2.3. We define Gk,n to be the directed graph whose vertices are {Ejjj, and where two vertices e = (C1, 62,... ,en) and e' are connected by an edge oriented from e to c' if there exists some i E [n] such that (e, ei 1 ) = (1, 0) and the vector E' is

73 01 01

a 3 6 0

C ~2 32

0 4 5

4 5 4

Figure 2-1: The graph I(2,6)

obtained from c by switching cs, cj+1 (and leaving all the other coordinates unchanged, so the 1 is "shifted" one place to the right). We give such an edge the label, i. When

considering i e [n] we regard it modulo n, and thus if i = n, we have i + 1 = 1.

A circuit in G_{k,n} of minimal length must be of length n, and it is associated with a sequence of shifts of 1's: the first 1 in ε moves to the position of the second 1, the second 1 moves to the position of the third 1, and so on; finally, the last 1 cyclically moves to the position of the first 1. Figure 2-2 is an example of a minimal circuit in G_{3,8}. For convenience, we label the vertices by I instead of e_I. The sequence of labels of edges in a minimal circuit forms a permutation w = w_1 w_2 ... w_n ∈ S_n, and two permutations that are obtained from each other by cyclic shifts correspond to the same circuit. Thus, we can label each minimal circuit in G_{k,n} by its permutation modulo cyclic shifts. For example, the permutation corresponding to the minimal circuit in Figure 2-2 is w = 56178243, and we label this circuit C_w.

Figure 2-2: A minimal circuit in G_{3,8}

The circuit triangulation is described in the following theorem.

Theorem 2.2.4. [20, Theorem 2.6] Each minimal circuit C_w in G_{k,n} determines the simplex Δ_w inside the hypersimplex Δ_{k,n} with the vertex set C_w. The collection of simplices Δ_w (and all their faces) corresponding to all minimal circuits in G_{k,n} forms a triangulation of the hypersimplex, which is called the circuit triangulation.

The vertices of C_w form a maximal sorted set, and every maximal sorted set can be realized via a minimal circuit in the graph G_{k,n}.

The circuit triangulation proves to be a useful tool when studying adjacency of maximal simplices in the hypersimplex and understanding the structure of Γ_{k,n}. In particular, the following theorem implies that the maximal degree of a vertex in Γ_{k,n} is at most n.

Theorem 2.2.5. [20, Theorem 2.9] Let S = {S_1, S_2, ..., S_n} be a sorted subset corresponding to the maximal simplex V_S of Γ_{k,n}. Let t ∈ [n] and S_t = {i_1, i_2, ..., i_k}. Then we can replace S_t in S by another S_t' ∈ \binom{[n]}{k} to obtain an adjacent maximal simplex V_{S'} if and only if the following holds: we must have S_t' = {i_1, ..., i_a', ..., i_b', ..., i_k} for some a ≠ b ∈ [k] with i_a − i_a' = i_b' − i_b = 1 (mod n), and also both k-subsets S_c = {i_1, ..., i_a', ..., i_b, ..., i_k} and S_d = {i_1, ..., i_a, ..., i_b', ..., i_k} must lie in S.

In terms of minimal circuits, S' is obtained by a detour from the minimal circuit

that corresponds to S, as presented in Figure 2-3. Every detour can be defined by a

triple {Sc, St, Sd} (again see Figure 2-3).

2.3 Arrangements of second largest minors

In this section, we describe necessary and sufficient conditions on a collection of

subsets to form an arrangement of second largest minors. Theorem 1.5.4 from the

previous chapter implies that maximal arrangements of largest minors are in bijec-

tion with the vertices of IF(k,n). In this section, we will show that the structure of

arrangements of second largest minors is strongly related to the structure of edges in

F(k,n). Then, in the next section, we discuss necessary conditions for arrangements of

75 ,4,5 .4,6 6

,3,5 '4'7 S -d

,3,4 ,4,7 St 4 :ii b'

1,2,4 8 ,4,8 ~

Figure 2-3: The figure on the left is a minimal circuit in G 3 ,8 . The tuple (1,3,4) can be replaced with the tuple (1,2,5) according to Theorem 2.2.5. The figure on the right depicts the situation described in the theorem.

tth largest minors for any t > 2. As a warm-up, we start our discussion with the case k = 2.

2.3.1 The case k = 2: maximal thrackles

Consider the space Gr+(2, n), and let J C (Inl) be a maximal arrangement of largest minors (hence it corresponds to a maximal thrackle. We will later consider the case

in which no maximality assumption is involved). Given W c ([In), we ask whether

W is a (2, J)-largest minor. That is, whether there exists an element in Gr+(2, n)

in which the collection of largest minors is J and W is second largest. Our theorem

below gives necessary and sufficient conditions on such W.

Theorem 2.3.1. Let W E (I]) and let J c ([ ) be some maximal arrangement of largest minors, such that W V J. The following four statements are equivalent.

1. W is a (2, J)-largest minor.

2. There exists a vertex Q in F(2,n) that is adjacent to J, such that W E Q.

3. There exists J G T such that (3 \ J) U W is an arrangementof largest minors.

4. There exist a # b E [n] such that either { (a, b), (a - 1, b), (a, b + 1)} c J and W = (a- 1,b+ 1), or {(a,b),(a+-1,b),(a,b- 1)} C J and W = (a+1,b-1).

In particular, the minors that can be second largest are in bijection with the edges of

F(2,n) that are connected to vertex T, and the number of such minors is at most n.

76 Theorems 2.2.5 and 1.5.4 imply the equivalence (2) +== (3) <-+ (4). The

equivalence (1) - (2) is a special case of Theorem 2.3.6, which we will prove later

in this section.

We emphasize the relation, implied by our theorem, between arrangements of

second largest minors and the structure of F(2,n). Let J C ('1) be a maximal thrackle, and let

T = {A E Gr+(2, n) I the set of largest minors of A is J}.

Let W c (n]). Theorem 2.3.1(2) implies that there exists A C T for which W is the second largest minor if and only if there exists a vertex Q in F( 2 ,n) that is adjacent to J such that W C Q.

Example 2.3.2. Consider the maximal thrackle J in Figure 2-4 appearing in the

left part on the top. Using part (4) of Theorem 2.3.1, we identify the elements in

(]) that can be second largest minors when J is maximal, and denote them by red lines (and this is the second graph at the top of the figure). Then, on the bottom, we

describe the thrackle which resulted by adding the red line and removing one of the

edges of J. Those three cases correspond to the three edges that are connected to J

in F(2 ,5 ).

1 2 1 2

----5 E 5

Figure 2-4: The figure that corresponds to Example 2.3.2

Remark 2.3.3. Theorem 2.3.1 states that any W that satisfies condition (2) can be a

(2, J )-largest minor. What if we take several such W 's? Must they be second largest minors simultaneously (that is, belong to an arrangement of second largest minors)?

77 The answer is not necessarily. Consider the maximal thrackle

J = {{1, 5}, {1, 4}, {2, 6}, {2, 5}, {3, 6}, {4, 6}}

that is labeled by (c) in Figure 2-1. According to Theorem 2.3.1, the minors that can

be (2, J)-largest are

K = {{1, 6}, {5,6}, {1, 3}, {3,5}, {2,4}}.

We will now show that they cannot form an arrangement of second largest minors.

Assume by way of contradiction that they could form such an arrangement. WLOG we can assume that Az = 1 for any Z E J. Then using 3-term Plucker relations we get

'Aj1,5}{3,6} = A{ 1,3}A{ 5,6} + A{ 1,6}A{ 3,5 }.

The LHS equals 1, and all the minors from the RHS are contained in K. Hence

Az = for any Z E K. Next,

A{1,4}1{2,6} = A{ 1 ,6}A{ 2,4} + A{ 1,2}A{ 4,6 },

and hence A{ 1,21 = !. By symmetry, A 4,5 1 =. Finally, consider the 3-term Plucker relation

1A1,4}A{2,5} = A{ 1,5}A{ 2,4} + A{ 1,2}A{ 4,5}-

The LHS equals 1, while the RHS equals -I + 1, which leads to a contradiction.

2.3.2 Arrangements of second largest minors - the general case

In the previous subsection, we considered the space Gr+(2, n) and discussed arrange- ments of second largest minors when J was maximal. In this subsection, we consider the space Gr+(k, n) and discuss arrangements of second largest minors, with no as- sumption on J. Theorem 2.3.4 summarizes our results. The special case in which j is maximal will be discussed in Theorem 2.3.6.

78 Theorem 2.3.4. Let W c ([~]) and let J C (,) be some arrangement of largest minors such that W $ J. Denote |JI = c. If W is a (2, J)-largest minor, then one of 1,2 holds, or equivalently, one of 3,4 holds:

1. The collection {W} U J is sorted.

2. There exists J c j such that W and J are not sorted, and (J \ J) U {W} is a sorted collection.

3. V{w}uj is a c-dimensional simplex in Sturmfels's triangulation of the hyper-

simplex Ak,n.

4. There exists a (c - 1)-dimensional simplex Vy in Sturmfels's triangulationsuch that ew is a vertex in Vy, and the simplices Vy,V share a common facet.

Before presenting the proof, we will prove the following key lemma:

Lemma 2.3.5. Let W, U, V c ([j) be three different k-subsets, such that the following

three conditions hold:

1. U and V are sorted.

2. W and V are not sorted.

3. W and U are not sorted.

Then the set T {U, V, sorti(W, V), sort2 (W, V), sorti(W, U), sort2 (W, U)} is not sorted.

Proof. First of all, we can assume that W n U n V 0 (otherwise we could remove the common elements and prove the lemma for the resulting subsets, which implies the claim for the original k-subsets as well). In addition, we can also assume that

U U V U W = [n] (since if some i c [n] appears in none of them then we could redefine

U, V, and W to be in (In-1) and ignore this i). Given I E ([n]) and 1 i j n,

79 define Ij = Zj (E)t. For example, if I {1, 3, 5, 7, 8} c (91) then Ej = 101010110

and 137 = 3. By definition, for a pair of k-subsets I, J E(n) we have

{sorti(I, J)ij, sort (I, 2 J)ij} { 2 ' 2

(not necessarily respectively). In particular, I and J are sorted iff Ij and Jij differ by at most 1 for all i and j. In order to prove the lemma, assume by way of contradiction

that T is sorted, and let aij = Ujj -Wij, -W= Vi -Wij for all1

the discussion above, the parameters azi, O3j satisfy the following properties for all

1 < i < j < n (the proof of each one of the properties is given below).

1. IaJij, Ii1jj 2.

2. If ociYj = 2 or I ijI = 2 then aoi = fi3.

3. If 1ai3 1 then aij = f3j or #ij = 0.

4. If |oij = I then aij = Oij or aij = 0.

Property 1: We have

Wj + Uj Uj - Wj + 2Wtj aij 2 2 2

The assumption that T is sorted implies that the pair sorti(W, U), W is sorted, as well as the pair sort 2 (W, U), W. Therefore, Wij differs from both sorti(W, U)ij and sort 2 (W, U)ij by at most 1, which implies property 1. The proof for 3ij is similar.

Property 2: Assume without loss of generality that aij = 2 (the other cases can be handled similarly), so Ugj = Wij + 2. In addition, one of sorti(W, V) i, sort2 (W, V) i equals [L + Wj 2J, and since T is sorted we must have /3j = 2 = acj. Properties 3 and 4: Assume without loss of generality that aij = 1. Combining properties 1 and 2, it is enough to show that O3j 74 -1. We have Ujj = Wij + I and

Vi= W-J + fij, and since T is sorted, f35 7 -1.

80 U and W are not sorted, and hence there exist 1 < i < j < n such that

Ia 3I= IUj - WijI > 1. From property 1 we get that IUj - Wij = 2. Recall that

Uin= W1n, so after appropriate simultaneous rotation of U, V, and W (modulo n), we can assume without loss of generality that there exists 1 < j < n such that

U13 = Wj +2 (and by property 2 V13 = Wij+ 2 as well, so U13 = Vij). From now on, we assume that U, V and W are rotated appropriately, and that j is maximal with respect to this property (that is, there is no j' > j for which U1 ' = Wij' + 2). We divide the proof into a series of claims. Our purpose is to show that the assumption that T is sorted implies U = V, which leads to a contradiction.

Claim 1:{1,j} c U, {1, j} c V, {1,j} n W = 0.

Proof: Assume by way of contradiction that 1 c W. Since W n U n V = 0, then

WLOG 1 ( U. Therefore U23 = W2j + 3, contradicting property 1. Similarly j W, hence {1,j} n W = 0. Since U U V U W = [n], we can assume WLOG that 1 E V. If

1 ( U then U23 = W2 j + 2, while V2j = W2 j + 1, contradicting property 2. Therefore 1 E U and hence 1 E V n U. Similarly j E V n U, so Claim 1 is proven.

Claim 2: For j

Proof: We prove it by induction on t. For t = +1 property 1 implies that t E W

(otherwise either Vj+1 = 3 + W1,j+1 or U1,3+1 = 3 + W1 ,j+1 , a contradiction). If t E V then since W n U n V 0 we have t U. Therefore,

U1,+1 = W1,3+1 + 1 and V1,j+1 = W1,3+1 + 2, a contradiction to property 2. Hence t V, and similarly t V U, and the base case of the induction is proven. Assume that the claim holds for all j + 1 < t < c, and let t = c. By the inductive assumption Vj+1,c-1 = Uj+1,c-1, so applying properties 1 and

2 to al,c_1 , 1,c-1, aj+1,c-1, /j+1,c-1 leads us to one of the following three options:

a) Wj+1,c-i = Vj+=,c-,,

81 b) W+1,ci = Vj+,,c- + 1 Uj+1,c-l + 1,

c) W3+1,C- 1 = V+1,c-1 + 2 Uj+i,c-i + 2.

Case a) contradicts the maximality of j, so consider case b). If c V W then applying property 2 to a1c, /1c implies c E U n V, and hence Uc = W + 2, contradicting the maximality of j. If c E W then applying property 2 to aj+1,c, ij+1,c implies c V U, c V V, so Claim 2 holds. Let us now consider case c). From property 1 for aj+1,c, 3 +1,c we must have c V W, and hence from property 2, c E U n V. Thus the claim is proven.

Claim 3: For 1 < t < j, (Eu)t = (Ev)t.

Proof: We prove it by induction on t. The case t j follows from Claim 1, so assume that the claim is proven for c < t < j, and let t = c. By the inductive hypothesis Uc+ 1,j = Vc+1,j, and from the proof of Claim 2 it follows that

{j + 1,j + 2} C W, { + 1,1j + 2} n U = 0, {j + 1,j -+ 2} n V = 0.

3 Applying properties 1 and 2 on ac+/,j, c+1,j, ac+1,.+2,c+1,j+2 leads us to one of the following options:

a) Uc+1,j = Vc+1,,j = Wc+1,j,

b) Uc+ 1,,j = Vc+1,j = Wc+ 1,, + 1,

c) Uc+1,j = Vc+ 1,j = Wc+ 1,, + 2.

First consider case a). In this case we must have c E U n V and c V W (otherwise we get a contradiction when applying properties 1 and 2 to ac,j+2, !c,j+2 ). In case b), if c E W then the properties of OZc,j+2, /c,j+2 imply c V U, c V V. On the other hand, if c 0 W then the properties of acj, cj imply c E V n U. Finally, in case c) we must have c c W, c V, c V U (by considering acj, Oc,j), so Claim 3 holds.

82 Combining claims 1, 2 and 3 leads us to the conclusion that cu = ev, so U = V, a contradiction. Therefore T is not sorted.

We are now ready to present the proof of Theorem 2.3.4.

Proof. Conditions 1 and 3 are equivalent, as well as conditions 2 and 4. If W is sorted with all the elements in j then condition 1 holds and we are done. Otherwise, we need to show that W is not sorted with exactly one element in j (which implies that condition 2 holds). Assume by way of contradiction that W is not sorted with U and V for some U, V c J. Since W is a (2, J)-largest minor, then there exists A E Gr+(k, n) such that 1 is the largest value of a Plucker coordinate of A, and A,(A) = 1 if and only if I E J. Moreover, if for some I E(') we have Aw(A) < AI(A), then

I E J and AI(A) = 1. Consider the set

T {U, V, sortI(W, V), sort2 (W, V), sort,(W, U), sort2 (W, U)}.

By Lemma 2.3.5 T is not sorted, and hence T is not contained in 3 (since 3 is sorted by Theorem 1.5.4). Without loss of generality, sorti(W, U) J, so

Asorti(w~U)(A) < 1. By Corollary 1.6.5

Aw (A)Au(A) < Asort1(wU)(A)Asort2(WU) (A).

Recall that Au(A) = 1, Asort2(WU) (A) < 1, so

Aw(A) < Asorti(WU)(A)Asort 2 (WU)(A) < Asortjji(w)(A).

In conclusion

Aw(A) < Asorti(WU)(A) < 1, contradicting the fact that the value of Aw(A) is second largest among the Pllcker coordinates of A. Therefore W is not sorted with exactly one element in 3, and condition 2 holds.

83 The theorem above gives a necessary condition on second largest minors. If 3 from Theorem 2.3.4 is maximal, we obtain sufficient conditions as well. The following

generalizes Theorem 2.3.1.

Theorem 2.3.6. Let W C (n]) and let 3 c ( []) be some maximal arrangement of

largest minors such that W V J. The following two statements are equivalent.

1. W is a (2, 3)-largest minor.

2. There exists a vertex Q in IF(k,n) that is adjacent to 3, such that W C Q.

In particular, the minors that can be second largest when 3 are maximal are in

bijection with the edges of ]F(k,n) that are connected to vertex 3, and the number of such minors is at most n.

In order to prove this theorem, we will use Theorem 1.8.1 from the previous chap-

ter, which deals with the action of the positive torus on the positive Grassmannian.

Proof. Theorem 2.3.4 implies (1) -- (2). In order to show that (2) -> (1), we should

construct an element A C Gr+(k, n) for which the following 3 requirements hold:

1. AI(A) < 1 for all IE ([]).

2. A, (A) = I iff I E J.

3. Aw(A) > AI(A) for all I ( 3.

By Theorems 1.5.4 and 2.2.5 there exists St = {iii 2 ,... ,'ik} J such that

W= Z,...,ia .. ,is g for some a < b E[n]

and ' $ 'b, i - '= ' - i= 1( mod n), and also both k-subsets

Sc = { i, . ,I . . ib, .. . ik}, Sd { , . . i, , . . . , ik} are in J. Let B E Gr+(k, n) be some element, and let B' be the element that is obtained from B after multiplying the ith column of B by the variable a, (for all

84 1 < i < n). Then

A w(B') = (IIa- aj,)a' (II 1+1aj,)a (Il =+1ai 3 )Aw (B)

Asc (B')Asd (B') As, (B)A w (B) As,(B') Asc(B)Asd(B)'

By Theorem 1.8.1 we can choose the scalars {a} 1 in such a way that AI(B') = 1

for all I E J. Therefore, for such a set of scalars,

_ As, (B)Aw (B) A w(B') =- Asc(B)Asd(B)

Since i - ' - 1( mod n), assume WLOG that Ia - 1, 'b + 1. Then using three term Plucker relations we get

Aw( B') = 1 - 1 'i , . " .... ." (B)Aii 2,..,b+1,ib,ibI..,i(B) Asc(B)Asd(B)

By Theorems 2.2.5 and 2.3.4, the second largest minor in B' must be obtained from

the circuit of j by a detour. Hence, in order to show that Aw can be second largest, it is enough to show that we can choose the initial matrix B in such a way that

Aw(B') is the biggest among all the minors obtained by a detour. For this purpose

we need to maximize the RHS of (2.1). Let us choose some C E Gr+(k, n), and

denote by {Ci}>_ its columns. Let C' E Gr+(k, n) be an element for which

C, ifj {ia, ib +1};

C = Csa_1 + eCsa, if j ai; (2.2)

Cib + ECi+,I ifj ib+1 for small E > 0. Note that in this case (and in fact for any E > 0) C' C Gr+(k, n).

By setting B = C' and using (2.1) to evaluate Aw(B') (and any other minor that is obtained from a detour) one can verify that Aw(B') = 1 - O(E 2 ) while the other minors obtained from a detour are 1 - O(c) or 1 - 0(1). Therefore by choosing E

85 small enough, we obtain an element

A = B' E Gr+(k,n) that satisfies the requirements stated in the beginning of the proof.

2.4 Arrangements of tth largest minors

Theorem 2.3.6 states that when J is a maximal sorted set, the second largest minor must appear in one of the neighbors of J in F(k,n). A natural question is what can be said regarding tth largest minors for general t, and this is the topic of this section.

In the first part, we will define the notion of cubical distance on F(k,n), and state our conjecture regarding (t, J)-largest minors. In the second part, we will prove special cases of this conjecture, and also discuss the structure of a natural partial order on minors. In the third part, we discuss additional properties of arrangements of tt' largest minors, and among other things show that they must lie within a certain ball in R'.

2.4.1 Cubical distance in F(k,n)

Consider the blue edges in Figure 2-1, and note that they form a square, while the red edges form a 3-dimensional . We say that two distinct vertices 71,,J 2 in F(k,n) are of cubical distance 1 if both of them lie on a certain cube (of any dimension). For example, vertices a and b from Figure 2-1 are of cubical distance 1 since both of them lie on a 1-dimensional cube (which is just an edge). Similarly, a and c are of cubical distance 1 (both of them lie on a square), as well as c and d (both of them lie on a 3-dimensional cube).

Definition 2.4.1. Let J1, 2 c (n) be maximal sorted sets, and let W E ([]). We say that 71, 72 are of cubical distance D, and denote it by dubne(J1, J 2 ) = D, if one can arrive from 1 to 2 by moving along D cubes in F(k,n), and D is minimal with

86 respect to this property. We say that W is of cubical distance D from 31, and denote it

by dcube(31, W) = D, if for any vertex 32 in I?(kn) that contains W, debe(31, 32) > D, and for at least one such 32 this inequality becomes equality.

For example, using the notation of Figure 2-1, dcube(a, d) = 2, dcube(b, d) = 2,

dcmbe(a, e) = 3. We also have dcube(a, {1, 4}) = 1 since {1, 4} E f. Similarly,

dcube(a, {2, 4}) = 2 since {2, 4} V b, f, c, and {2, 4} E d. It can also be shown that

dcube(a, {2,3}) = 3.

Definition 2.4.2. Let 3 C ([n]) be an arrangement of largest minors, and let

W c ([]). We say that W is a (> t, 3)-largest minor if for any arrangement of

minors I =(-o, 1,.,i) such that 11 = 3,_0 = 0 the following holds:

W 11,,11_1, ...,711t+2 -

For example, let J be the maximal sorted set that corresponds to vertex a in

Figure 2-1, and let A c Gr+(2,6) in which the collection of maximal minors is J.

Using Skandera's inequalities (Corollary 1.6.5), it is possible to show that for such

A, A1 5 > A 1 4 > A 2 4 > A 2 3 . Therefore, {2, 3} is a (> 4, 3)-largest minor, since

{ 2, 3}1 V T, -T1- 1, 1 - 2.

Conjecture 2.4.3. Let W c ([n]) and let 3 c ([n]) be some maximal arrangement

of largest minors. If desbe(W, J) = t, then W is a (> t + 1, J)-largest minor.

Note that the examples we gave earlier are special cases of this conjecture. For example, dcube(a,{2,3}) = 3, and indeed {2,3} is a (> 4,3)-largest minor. In many cases, we can prove this conjecture. Our main results in this section are Theo- rems 2.4.4 and 2.4.5, both of which validate the conjecture for a wide class of cases.

Theorem 2.4.4. Conjecture 2.4.3 holds in the following cases:

* n > 2t.

" k 2, n - 2 (and any n, t).

St =2,3 (and any n, k).

Theorem 2.4.5. If W is sorted with at least one element in 3, then Conjecture 2.4.3 holds.

87 2.4.2 Partially ordered set of minors

In this part, we show that arrangements of largest minors induce a structure of a

partially ordered set on the entire collection of minors. The investigation of this

poset leads us to the proof of Theorem 2.4.5. We conclude this part with the proof of Theorem 2.4.4.

Example 2.4.6. Let k = 2, n = 6, and let A E Gr+(2, 6) be an element for which

the minors that appear in Figure 2-5 on the left are maximal. Thus, without loss of generality, we can assume that

A 1 2 = A 13 A 14 = A 1 5 = A 2 5 = A 2 6 1.

{12,13,14,15,25,26}

6 {36} {24} {16}

5 4{46} 35) (23) 4 3

{56} (45) (34)

Figure 2-5: A maximal thrackle and the corresponding poset of minors

By Theorem 1.5.4, all the other minors are strictly smaller than 1. However, there is

much more information that we can obtain on the order of the minors. For example,

using 3-term Plicker relations, we get A 46A 13 < A 14A 36, and hence A46 < A 36 . Once the set of largest minors is fixed, it induces a partial order on the entire collection of

minors. Figure 2-5 depicts the Hasse diagram that corresponds to the example above

(and the relation A 46 < A 3 6 is one of the covering relations in this diagram).

In order to discuss these partially ordered sets more systematically, and to prove

Theorem 2.4.5, we will use the circuit triangulation of the hypersimplex, introduced in Section 2.2. The structure of Gk,, is quite complicated in general. Yet, we found an algorithm that recognizes certain planar subgraphs of Gk,n which induce the partial order.

Definition 2.4.7. An oriented Young graph is the graph that is obtained from a

Young diagram after rotating it by 180 degrees and orienting each horizontal edge from

88 left to right and each vertical edge from bottom to top. We call the vertex that is in the

lower right corner the origin vertex, and denote the upper right (lower left) vertex by

v1 (vo). There are two paths that start at vo, continue along the boundary and end at

v1 . The path that passes through the origin vertex is called the inner boundary path,

and the second path is called the outer boundary path. From now on, we denote the

set of the vertices that appear in the outer boundary path by V. See Figure 2-6 for an example.

7 V1

4 5 6 7'

2 6'

vO 1' 2' 3' 4' U

Figure 2-6: An oriented Young graph. Its inner boundary path is formed by the edges labeled from 1' through 7'. Its outer boundary path is formed by the edges labeled from 1 through 7, and all the vertices that appear along the latter path form the collection V.

Lemma 2.4.8. Let H be an oriented Young subgraph of Gk,,, and let F E Gr+(k, n) for which all the minors indexed by the outer boundary path V are equal and have largest value. Then for any vertex D of H such that D V V, we have

AD(F) < Ac(F) and AD(F) < AA(F), where C is the vertex immediately above D and A is the vertex immediately to the left of D in H (see Figure 2-7).

B J C

A- j -- D

Figure 2-7: The figure that corresponds to Lemma 2.4.8.

Before presenting the proof of the lemma, we would like to present the proof idea of Theorem 2.4.5 using the running example depicted in Figure 2-8. The proof

89 _4 1,_5 --P --- 1,3,5

2 2: 2

8 8 1,2,4 1,2,5 12,41,2,5 8 8

2,4,8 2,5,8 3,4, 8 4 3.4, 8 35. 7: 7 1 2 21 . 2,4,7 2,5,7

2,4,8 25,8 6 6

3,4,7,, 1.4,6 1 2,4,6 4 2,5,6 71 7

2 2 1,3,4 1,3,5

74,8 3,8 3,5,8

,A6 1 ,46 -4

.4,7 4 5,7 2 -- - 2,4,7 3,4,7 3,5,7

t6: 6 2,4,6 4 2,5, 6 6 1,46 2,46 2 3,4,6 4 3,5,6

1,4,6 4 A56

Figure 2-8: The graph on the left is Q1. The graph on the top right is Q2, and the graph on the bottom right is Q3- will show that under the conditions of the theorem, one can find an oriented Young subgraph of Gk,, such that W is the origin vertex and V C J. Then we apply Lemma 2.4.8 and obtain an ordering on the minors. As an example, suppose that the minors corresponding to the circuit C, in Figure 2-2 form an arrangement J of largest minors, and let W = {3, 5, 6}. One can verify that dcae(j, W) < 4. Among the vertices of C, W is sorted with {1, 3, 5}, {1, 4, 5}, {1, 4, 6}, and not sorted with the rest. So the set of vertices that are not sorted with W form a path in C" (and this property also holds in the general case as we will show in Lemma 2.4.13). We would like to construct an alternative path in G 3,8 that starts at {1, 4, 6}, ends at {1, 3, 5}, passes through W, and contains only vertices that are sorted with W. Consider the left graph Q, that appears in Figure 2-8. Q, is a subgraph of the graph G3,8, and the edges that correspond to the circuit C, appear as dotted lines. The part of w that corresponds to the dotted lines is 617824 (we ignore the vertex {1, 4, 5}, as it is sorted with W). Consider the path that starts at {1, 4, 6} and continues along the

90 edges labeled by 124678. Note that after 3 steps in this path, we arrive to the vertex

W. Q3 (see Figure 2-8) is the oriented Young subgraph of G 3 ,8 in which the set V consists of vertices from C and W is the origin vertex. One can check that this is

indeed a subgraph of Q1. Applying Lemma 2.4.8 we get

A{3,5,6} < IA{3,4,6} < IA{3,4,7} < A13,4,8} < A{1,3,4}-

This implies that W is a (> 4 + 1, J)-largest minor, which is consistent with The- orem 2.4.5. A similar claim holds in the case W = {2, 5, 6}, with the corresponding oriented Young subgraph Q2.

The proof of Theorem 2.4.5 will be based on several lemmas. We start by pre- senting the proof of Lemma 2.4.8.

Proof of Lemma 2.4.8. Without loss of generality we can assume that AM(F) = 1 for all M E V, and that 1 is the largest minor of F. Consider the subgraph of H that looks like the graph in Figure 2-7. Then the labelings of its edges (that induced from

Gk,n) must look as in the figure for some i and j. The proof is by induction on the distance d of the vertex D from the vertices in V, where distance is defined as the sum of the lengths of the vertical path and horizontal path that starts at D and ends at a vertex in V. We denote this distance by d(D, V). For example, the distance of vertex u from V in Figure 2-6 is 3+4=7, as the vertical path has 3 edges and the horizontal path has 4 edges. The base case of the induction is the case d = 2. In such a case, A, B, C c V. Moreover, using the labelings in Figure 2-7, A, B, C, D are of the form:

A {aj,a2,... = am , am+2,..., ap,j, a+ 2 ,..., a},

B = {aa2,... ami+1, am+ 2 ,..., ap, j, ap+ 2 ,... ak},

C ={aa2,..., ami+1, am+2,..., ap, +1, a+ 2 ,..., ak},

D ={al,a2,..., ami, am+2,..., aj+1, a+ 2 ,..., a}.

Applying 3-term Plucker relation we get AD(F)AB(F) < AA(F)Ac(F), and since

91 A, B, C E V we have AD(F) < 1. This implies AD(F) < Ac(F) and

AD(F) < AA(F), so we are done with the base case. Suppose now that the distance is d = d(D, V) > 2. Clearly

d(D, V) > d(A, V), d(D, V) > d(C, V) and d(D, V) > d(B, V),

so we can apply the inductive hypothesis on A, B and C, and get

Ac(F) < AB(F), AA(F) AB(F). Hence by the 3-term Plucker relations

AD(F)AB(F) < AA(F)Ac(F) AB(F)Ac(F),

so AD(F) < Ac(F). Similarly,

AD(F)AB(F) < AA(F)Ac(F) < AA(F)AB(F),

so AD(F) < AA(F) and we are done.

Given an oriented Young graph H and a vertex w E H, we denote the position of

w in H by (i, j) where i and j start at 0 and the origin vertex corresponds to (0,0).

For example, in Figure 2-6 the position of v1 is (3, 0), the position of vo is (0, 4) and the position of u is (0, 0). In this section, we sometimes refer to a vertex directly by its position.

Definition 2.4.9. Let H be an oriented Young subgraph of Gk,,, and let u be the

origin vertex. The swapping distance between u and V is max{i + j - 1(i, j) E H}.

For example, the swapping distance of u from V in Figure 2-6 is 4, and it is obtained by taking the vertex that is incident to both edges 3 and 4.

Corollary 2.4.10. Let H be an oriented Young subgraph of Gk,n with outer boundary path V and origin vertex u. Denote by s the swapping distance of u from V. Let

-T, C (n] ) be an arrangement of largest minors such that V c 11 . Then u is a s + 1,E,)-largest minor.

92 Proof. Denote by (i1 ,ji) the vertex in H that maximizes {i+j - 11(ij) c H}. Note that such a vertex must be a corner vertex (that is, there is no vertex above it or to

the left of it). Let us take a path that starts at (ii, j, -1), continues all the way down

(Zi,111 - 1) -+ (si - 11,1 - 1) -+(-2, i- 1) -+..3(0,j- 1)

and then continues all the way to the right

(0, ji - 1) - (0, j, - 2)-...-+ (0, 0).

This path has length equal to the swapping distance (note that one of the two sections

above might be empty, but this claim still holds). When we move along the path the

values of the minors strictly decrease by Lemma 2.4.8. This implies that u is a

(> s + 1, Uj)-largest minor, and we are done. El

Our next lemma relates the swapping distance with the cubical distance, defined in Definition 2.4.1.

Lemma 2.4.11. Let Jc (') be a maximal sorted set, and suppose that there exists

an oriented Young subgraph H of Gk,n such that V c J for the outer boundary path

V. Let u be the origin vertex in H. Then dcube(J, u) is bounded from above by the swapping distance of u from V.

Before presenting the proof of this lemma, we would like to clarify the relationship between the circuit triangulation and cubical distance. Let C, and Cq be two minimal circuits. By Theorem 2.2.4, the vertices of each one of the circuits in Gk,n form a maximal sorted set. We denote these collections by P and Q respectively. We leave it as an exercise for the reader to check that the following claim holds (see also Figure 2-9):

Claim 2.4.12. 1. deube(P, Q) = 1 if and only if Cq is obtainedfrom C, by making

a collection of different detours {S', St', SdzI}, such that for every pair 1 < i < j < m, neither St' nor St' lie in the intersection

{Sc, St", Sdj} n {Ses, St-, SdJ}.

93 2. dcube(P, Q) = t if and only if Cq is obtained from Cp by a sequence of t steps, each one of them of the form described in (1), such that t is minimal with regard to this property.

5 3,6 3 1,4,6 6 3 4,5 5 1,4,6 6

1-,3.. 47 6A5

4 2 2

, 2,4 4 ,7,2,5 .4,8 4 4,

2 2,4 ,4,8 .2,4 ,4,8

Figure 2-9: The figure on the left is a circuit in G3,8 which we have already seen before. There are 3 detours depicted in dotted lines, and the circuit to the right is the circuit that is obtained by these detours. These two minimal circuits correspond to a pair of maximal sorted sets of cubical distance 1.

We will now prove Lemma 2.4.11.

Proof. Denote by s the swapping distance of u from V. In order to prove this lemma, we need to show that there exists a maximal sorted set S c ([]) such that u E S and such that there exists a sequence of s moves that connects S and 3 as described in Claim 2.4.12 (so each of these moves corresponds to a certain set of detours). Consider the set of all corner vertices {wi}__ in V (w is a corner vertex if neither the vertex above w nor the vertex to the left of w are present in V). Each such corner vertex corresponds to a vertex B in a square as in Figure 2-7. So we can make a detour that exchanges the arcs A -+ B and B -+ C with the arcs A -+ D and D -+ C. Those detours satisfy the requirement in Claim 2.4.12, so we can make all of them at the same time. The resulting oriented Young graph has swapping distance s - 1, so after applying this process s times we get a maximal sorted set S that contains u (note that S and 3 are identical on all the vertices outside of V), and that completes the proof. See Figure 2-10 for an example.

Our last lemma deals with induced paths in minimal circuits.

Lemma 2.4.13. Let Ce, be a minimal circuit in Gk,n and let W E ([]), such that W V Ca,. Let B be the set of vertices of C, that are sorted with W. Then the induced subgraph on B in C, is a path (which might be empty).

94 7 V1 7 V, 7 V, 4 5 6 7' > 4 5 6 7' 4 5 6 7' 2 6' 2 6' 2 6' 5 5' 15 2' 3' 4' U (V 2 4' u

7 V 1 7 V1 6 7' 4 56 7' 4 5

2 M' 2 1 ' 1 1_5' V 2' 3' 4' U 1' 2' 4'

Figure 2-10: The description of the sequence from the proof of Lemma 2.4.11

As an example, consider the circuit C, in Figure 2-2, and let W = {3, 5, 6}. Among the vertices of C,, W is sorted with {1, 3, 5}, {1, 4, 5}, {1, 4, 6}, which indeed form a path.

Proof. If W = {c1 ,... , Ck} is sorted with exactly 0 or one elements in C, then the statement is clear. Hence assume that it is sorted with at least two elements in C".

Assume by way of contradiction that the statement of the theorem is wrong. Then it

implies that there exist a = {al, a2 ,... , ak}, b = {bi, b 2 , ... , bk} E ([) that are sorted with W such that the following two claims hold:

1. There exists an element on the path from a to b in Ce, that isn't sorted with W.

2. There exists an element on the path from b to a in C, that isn't sorted with W.

Since the collection {a, b, W} is sorted, then by possibly rotating {1, 2..., n} and

switching the roles of a and b we can assume WLOG that

cl a, l b1 : c2 a2< b2 < ... ck 5 ak< b.

We will show that every element in the path from a to b is sorted with W, and this

will provide the required contradiction. Let d = {di, d 2 , ... , dk} be an element in this

path. Then by the definition of minimal circuit,

a1 < d1 b1 < a 2 d 2 b2,..., ! ak dk < bk.

95 Therefore,

c1 < di < c 2 < d2 ... ck dk, so c and d are sorted, and we are done. L

We are now ready to present the proof of Theorem 2.4.5.

Proof. Suppose that there exists an oriented Young subgraph H of Gk," such that

V C J and W is the origin vertex of H. In such a case, if we denote by s the swapping distance of W from V, then by Lemma 2.4.11 dcube(J, W) < s. On the other hand, Corollary 2.4.10 implies that W is a (> s + 1, J)-largest minor. There- fore in particular W is a (> dcube(J, W) + 1, J)--largest minor, which is exactly the statement of Conjecture 2.4.3. Hence our purpose in this proof is to construct such an H. Denote by Cj the minimal circuit in Gk,n that corresponds to the set 3, and by wo the permutation that is associated with Cj. As we mentioned in the proof of Lemma 2.4.11, H will actually provide us with a minimal circuit CH in Gk,, that contains W (see also Figure 2-8, and the discussion regarding this figure following

Lemma 2.4.8). Thus, in order to find such a subgraph H it is enough to find the permutation WH which corresponds to the minimal circuit CH, and to show that the part on which Cj and CH differ induces a structure of an oriented Young subgraph. For example, in Figures 2-2 and 2-8, if W = {3, 5, 6}, then we have w 5 = 61782435, wH = 12467835, so the part on which Cj and CH differ corresponds to the graph Q3 depicted in Figure 2-8.

We will first give a description of CH, and then prove that it satisfies the require- ments. Since W is sorted with at least one vertex in C5 , then by Lemma 2.4.13 there exist vertices A = {a,,..., ak} and B = {b 1 ,..., bk} in Cr such that W is sorted with all the vertices in the path B - A (including the endpoints), and not sorted with all the vertices in the path A -+ B (excluding the endpoints). We also allow the possibility A = B (in which case W is sorted with exactly one element in CJ; note that W cannot be sorted with all the elements in Cr since J is maximal). Since

A and B are sorted, then by appropriate rotation of the circle {1, 2, ... , n} we can

96 assume that

a, < bi ! a2 < b2< ... < a b. (2.3)

So if A = {1, 4, 6} and B = {1, 3, 5} as in Figure 2-2, then using the order

6 < 7 < 8 < 1 < 2 < 3 < 4 < 5 we have 6 < 1 < 1 < 3 < 4 < 5, and we

"redefine" A to be ={6, 1, 4}. A In the case A = B we set B = {a2 , a3,..., ak,al}.

Let W = {di,. .. ,dk } such that d, < d 2 < ... < d in the order

a, < a, + I1... < n < I < 2 < ...a, - 1.

We claim that the numbers {ai, bi, di}_ 1 satisfy inequality (2.4) below. We will first show how to use this inequality in order to construct CH, and then in the last paragraph of the proof we will prove this inequality.

a1 < di < b1 < a2< d2< b2 . < ak : dk< bk (2.4)

Denote the path from A to B in C5 by Q, and let D = WIW2 ... Wm be the partial permutation that corresponds to Q. In particular, W is a contiguous part of

W7 = W1 w2 .. . WmWm+1 ... wn (for example, in Figure 2-2, if W {3, 5, 6} then D = 617824). Since (2.3) holds, then for every 1 < i < k, the "1" in the position a in

EA is shifted by Q to the "1" in the position bi in 6B. Define

Ai = {ai,ai+ 1, ..., bi - 2, bi - 1} for all 1 < i < k (where n + 1 is identified with 1), and note that Ai = 0 iff ai = bi. Then the set of numbers that appear in D is, in fact,

Uk 1A. We would now like to use property (2.4): for every 1 < i K k define

Dil = {ai, ai + 1,). di - 1}, Di 2 = di, di + ,..,bi - 1}

(this is well defined since ai 5 di < bi). We define WH as follows: Its first part consists of the numbers from Uk Dil, placed according to the order in which they appear in

2 D. Its second part consists of the numbers from U_ 1D , again placed according to the order in which they appear in W-. Finally we place wm+ .. . w,. To make this

97 definition more clear, consider the circuit in Figure 2-2 and let

W = {3,5,6}. Then W = 617824, A = {6,1,4}, B = {1, 3,5}, and we rotate the elements in W so that W {6, 3, 5}. We have:

A1 = {6,7,8}, A 2 ={1, 2}, A 3 = {4},

2 1 2 2 D = 0, D 1 = {6,7,8}, D 2 = {1, 2}, D 2 = 0,D 3 1 = {4}, D 3 = 0.

2 Therefore, uf 1 Di = {1, 2,4},u_ 1 D {6,7,8}. Therefore WH = 12467835, and indeed CH contains W as is shown in the graph Q3 in Figure 2-8.

Let us now describe the inner and outer boundary paths of H. We set vo = A, v, = B, u = W. The inner boundary path consists of two sections: horizontal and vertical. For the horizontal section we place horizontal edges, labeled by the

1 numbers appearing in the first part (Uk D ) of WH (according to the order in which they appear in WH. Note that the last vertex in the horizontal section is W). For the vertical section we place vertical edges that are labeled by the numbers appearing in

2 the second part (U'_ 1 Di ) of WH. Note that the definition of the Di's and the fact that Cj is a circuit in Gk,, implies that the inner boundary path described above is indeed a subgraph of Gk,,. For the outer boundary path, consider the edges of

Cj that are labeled by the numbers in W. Every such number appears in exactly

1 2 one of U_1 Di and Uk_ 1 Di . Every edge that corresponds to the former set will be horizontal, and every edge that corresponds to the latter set will be vertical (see the graph Q3 in Figure 2-8 for an example). Note that since C5 is a subgraph of Gk,,, then the outer boundary path is a subgraph of Gk,, as well. In addition, the inner and the outer boundary paths have the same number of vertical and horizontal edges.

Now, in order to show that the inner and the outer boundary paths described above induce a structure of an oriented Young graph, we need to show that the following holds:

1. The first and the last edges in the outer boundary path are vertical and hori- zontal respectively.

98 2. Once we establish the property above, we already know that the boundary of

H looks like the left part of Figure 2-11. Let us now add internal horizontal

and vertical edges (see the right part of Figures 2-11 and 2-8), such that each

horizontal edge is directed from left to right, and each vertical edge is directed

from bottom to top. We label each horizontal edge by the same labeling as

the horizontal edge from below in the inner boundary path, and we label each

vertical edge by the same labeling as the vertical edge from right in the inner

boundary path. The resulting graph is an oriented Young graph, and we need

to show that this graph is a subgraph of Gk,n. We assume for now that A # B, and deal with the case A = B later.

B B

A W A# -- W

Figure 2-11: Depiction of the situation described in (2).

We start with (1). Assume by way of contradiction that the first edge is horizontal, and denote by Z its other vertex. Then

Z = {ai, a2 ,... , ai_ 1 , ai + 1, aj+1, ... ak} such that ai c Dil, which implies ai < di - 1. Therefore, from (2.4) we have

a1 < di < a2<& d2 ... : ai_1 < di_1 < ai +1 I di :! aj+1 < dj+1 ...< ak< dk.

This implies that W is sorted with Z, and thus contradicts the fact that W is not sorted with all the vertices in the path A -* B (excluding the endpoints) in Cj. We can similarly show that the last edge is horizontal, so property (1) is established.

We will prove property (2) by induction on the length of the first part of WH. If its length equals 1, then there exists an arc from A to W labeled by a, for some 1 < j < k.

Property (1) implies that U = W1 w2 ... wm-laj, and the situation is depicted in the left

99 part of Figure 2-12. We need to show that by labeling every horizontal edge with aj

8i B a, P

Wm11~jWMn1 W U:jWU

2 w W2 w a 2

W, W, W i A W A T > W

Figure 2-12: The left figure corresponds to the base case of the inductive proof of property (2). The right figure corresponds to the inductive step of the proof of property (2).

we get a subgraph of Gk,,. Since C is part of a permutation, aj {w 1 ,W 2 ,. . m-1

In addition we also have a3 + 1 V {w 1 , W 2 ,... - 1} (otherwise a3 + 1 c A, which contradicts the existence of the arc from A to W). Thus by labeling every horizontal edge with aj we indeed get a subgraph of Gk,,, and the base case is proven. Now assume that the length of the first part of WH equals r > 1. The vertex that follows

A in the inner boundary path is of the form

T = { 1 , a2 , ... , a_1, aj + 1, aj+,.. . , ak}.

Then W is of the form W = W 1 ... wuajWu+2 ... Wim. Applying the base case of the induction, we get the situation depicted in the right part of Figure 2-12, where x from the figure satisfies x E {Wu+2, --- Wm}.W Note that the vertices on the red path

(except A and T) on the right part of Figure 2-12 are not sorted with W. Now consider the minimal circuit 0 that starts in A, continues along the red path in the right part of Figure 2-12, and then continues in the same way as C5 . The outer boundary path that corresponds to this circuit is associated with the following part of the permutation:

W1 .WuWu+2 .Wm. (2.5)

The length of the corresponding first part is smaller than r, so we can use the inductive

100 hypothesis and construct the rest of the graph. To complete the proof we just need to verify that the initial vertical segment of the path that corresponds to 0 and starts in A has at least u edges. This follows from (2.5), so the case A 74 B is done.

Now consider the case A = B. Recall that we order the elements in B as follows:

B = {a 2 , a 3 ,. . . , ak, al}. Applying the inductive process described above still leads us to an oriented Young graph. This graph is not a subgraph of Gk,, (as we duplicated one of its vertices), but we can still apply the reasoning from the beginning of the proof and get the asserted claim. See Figure 2-13 for an example.

The last paragraph of the proof will be dedicated to proving equation (2.4). If A = B then this is trivial, so assume that A 74 B. Denote the path B -+ A in Cj by

P := B - T1 -+4 T2 4 ... - T, - A

(so it has r + 2 vertices for some r > 0). Since W is sorted with all the elements in P, there exists a minimal circuit C, in Gk,n that contains W and all the vertices in

P. We will show that P is also a path in C,. Note that showing this will imply that

W is on the path from A to B in C,., which by definition of minimal circuits implies

(2.4). We will start by showing that B is followed by T in C.. Since B is followed by T in Cj, then

T, = {bi,... bU, bu+ + 1, bu+2, ...,7bk} for some u (the +1 is modulo n). Now assume that a vertex M :/ B precedes T in C,. Then WLOG

M = {bi ...,bx-1, bx - 1, bx+1,,..., bu, bu+1 + 1, bu+2, ...,)bk}.

Therefore, M and B are not sorted, contradicting the fact that both of them are on

C,.. We can show similarly that T is followed by Ti+1 for all 1 < i < r - 1 and that T, is followed by A, so (2.4) is proven. El

We conclude this part with the proof of Theorem 2.4.4.

101 . 3,5

4: 4 ~. 5 * 6 -- . ,4,6 ,3,4

2 2 2 1 5 6 4,5 ,4,6 2,4

3 3 3 3

,2,3 ,3,5 , 3,5 5 ,3,6 6

Figure 2-13: An example of an oriented Young graph formed in the case A = B. Here k = 3,n = 6,A = B = {1,3,5},W = {1,2,3}, J = {{1,3,5},{1,4,5},{2,4,5},{3,4,5},{3,4,6},{3,5,6}}.

Proof. Let us start with the case n > 2t. Consider a sequence of t steps going from 3 to a minimal circuit containing W as in Claim 2.4.12 part (2), which uses a minimum number of detours (that is, the minimum required number to arrive to some minimal circuit that contains W). In such a sequence, in the last step (among the t) we only perform one detour at W. This detour depends only on three vertices, which are

W and its two adjacent vertices. Thus, in the second to last step, only these three vertices may have participated in the detour (from minimality). In the third to last step we would have only 5 vertices that may have participated (W, its two neighbors, and their neighbors). Continuing this way for each step, we get that in the first step among the t, only 2t - 1 vertices from J may have participated in detours. Since n> 2t this means that there exists at least one vertex in 3 that never participated in a detour, and thus this vertex belongs to the minimal circuit containing W, and hence it is sorted with W. We can now apply Theorem 2.4.5 and we are done.

Let us consider the case k = 2, and let W = {a, b}. Since 3 is a maximal sorted set, there exists an element A containing a in 3, and similarly there exists an element

B containing b in 3 (otherwise 3 would have at most n - 1 elements). W is sorted with both A and B, so the claim follows from Theorem 2.4.5. The case k = n - 2 follows from the case k = 2 using the symmetry Gr(k, n) ~ Gr(n - k, n).

Next, the case t = 2 follows from Theorem 2.3.4. Finally, consider the case t = 3.

If n < 5 then either k or n - k is at most 2, and the result follows from the discussion above. The case n > 6 follows from Theorem 2.4.5, and this completes the proof. Eli

102 Conjecture 2.4.3 deals with the case in which J is maximal. We will now discuss the general case, in which J can be any sorted collection.

2.4.3 Arrangements of tth largest minors - the general case

Theorem 2.3.4 implies that if W E ([SQ) is a second largest minor, then Ew is "close" to Vj. This notion of distance was discussed in the first chapter, and we recall the relevant concepts here, and expand the definition. This definition allows us to generalize this property for arrangements of t"h largest minors (t > 2).

Definition 2.4.14. Let r be an integer, 1 < i < j < n, and denote by Hi,j,, the affine hyperplane {xi + xi+1 - + xj = r} c R'. Fix a point x E R'. For y E R , we say that Hi,j,, separates y from x if one of the following holds:

" x and y lie in the two open halfspaces formed by Hij,,.

" y lies on Hi,j,, and x does not.

Define dij (xIy) = {r| the hyperplane Hi,j,, separates y from x}|.

Finally, let B,(x) {y I dij(x, y) < r for all 1 < i < j < n}.

For simplicity, we sometimes write dij(I, J) instead of dij(e1 , cj). The notion dij arises naturally in the discussion of sorted sets. In particular, I and J are sorted if and only if dij (eI, ej) < 1 for every 1 < i < j < n.

Theorem 2.4.15. Let J c ([n]) be some arrangement of largest minors, and let Y be a (t, J)-largest arrangement. Then cy c B 2 -1 (J) for any Y E Y, J E J.

Proof. Assume WLOG that the maximal minors equal 1. Fix some pair 1 < i < j < n, Y E Y, J c J. We will prove the theorem by induction on t, starting with the case t = 2. If Y and J are sorted then dij(Y, J) < 1. Now assume that they are not sorted. We will show that there exists N that is sorted with both of them. If |11 > 2 then such N exists by Theorem 2.3.4. Otherwise, Corollary 1.6.5 and the fact that

J E J imply AyAJ < Asorti(YJ)Asort 2 (YJ); hence, Ay < Asorti(yJ)Asort 2 (YJ). Since

Y is a second largest minor, one of sorti(Y, J), sort 2 (Y, J) must belong to J, and

103 we can take that one to be N (by Theorem 2.3.4 it is sorted with Y). For such N, dij (Y, J) dij(Y, N) + dij (N, J) < 2 so the base case of the induction is proven.

Assume now that t > 2, and that the claim is proven for all the numbers up to t - 1. Suppose for contradiction that dij(Y, J) > 2 1-1, so cy lies on Hi,j,, and Ej lies on Hi,jp for some pair of numbers a, 3 that satisfy Ia - 0 1 > 2 ~. As before, since Y and J are not sorted we have Ay < Asorti(yJ)Asort 2 (YJ). Note that at least one of sort,(Y, J), sort2 (Y, J) lies on H , and assume that a > 3 (the other case can be handled similarly). Then 2 > + 1+ +2 Therefore at least one of dij(J, sort1 (Y, J)), dij(J, sort 2(Y, J)) is bigger than 2 t2, which by the inductive hypothesis implies that at least one of sorti(Y, J), sort 2 (Y, J) is not a (t-1, J)-largest minor. Now, since we assumed that 1 is the largest minor, then Ay < Asorti(YJ) and

AY < Asort 2 (,J), and hence Y is not a (t, J)--largest minor, a contradiction. El

Thus, we get that if W is a (t, J)-largest minor, then W must lie within a ball of certain bounded radius around J. We conclude this section with the following corollary.

Corollary 2.4.16. Let Y be an arrangement of tth largest minors, t > 2. Then all the elements cy, Y E Y lie within a ball of radius 2-.

104 Chapter 3

Quadratic Schur function identities and oriented networks

Section 3.4 of this chapter is based on section 4 in [6].

3.1 Introduction

In this chapter, we study quadratic Schur function identities, expanding products of the form SA - S , where Sk o is the Schur function in the variables Xk, Xk+1, - - --

Our results generalize identities that were studied by Kirillov [18] and Fulmek and

Kleber [12]. Several common approaches to generate such identities involve bijec- tions between noncrossing tuples of paths in a network, as well as the use of Plucker relations. Both of them are based on the relationship between Schur functions and noncrossing tuples of paths via the Lindstrdm-Gessel-Viennot lemma. In this chapter we combine both approaches, and come up with several types of networks to generate new types of identities for k = 2, 3. We also pose a conjecture for higher values of k.

In the next section we provide the required background on Schur functions and the

Lindstr6m-Gessel-Viennot lemma. In section 3.3 we present our main results and ex- plain how they generalize previous results from [18] and [12]. In the last two sections we present the proofs of our results.

105 3.2 Schur functions and the Lindstrdm-Gessel-Viennot

lemma

We start with the combinatorial definition of Schur functions, following the notation

of Stanley 133, chapter 7]. Let A = (A 1 , ... , Am) be a partition of n. A Young diagram of shape A is a left-justified array of n boxes, with Ai boxes in row i. A semistandard

Young tableau (SSYT) is a filling of a Young diagram with positive integers, weakly increasing in rows and strictly increasing in columns. Let T be an SSYT, and define the monomial xT := 0 om(i'T)X where m(i, T) is the number of entries equal to i in

T. For a partition A, the Schur function SA is defined as follows: SA(x) := ET xT where the sum is over all SSYTs T of shape A. Throughout this chapter, we use the following notation for specializations of Schur functions:

a-1

A := S 0, 0, . .. , 0, Xa, Xae ,,... a-i [la b] SA' := SA(0,S 0,.. . , 0, Xa, Xa+1,--- , 0, 0, . ..

The next topic that we introduce is the relationship between Schur functions and noncrossing paths. Let G be a weighted oriented graph whose vertices are the points in Z2, and whose arcs are all the pairs of the form ((i-1, j), (i, j)) and ((i, j-1), (i, j)). Horizontal edges are oriented from left to right, and vertical edges are oriented from bottom to top. Denote by w(e) the weights of the arc e in the graph, and set W((i -

1,j), (i,Wj)) := x3 , ((i,j - 1), (i,j)) := 1. We next define the weight of a path and the weight of noncrossing m-tuple of paths in G: For a path P, let wt(P) be the product of the weight of the arcs in P. More generally, for an m-tuple of noncrossing paths N = (PI, P2 ,..., Pm), let wt(H) = wt(P). Let A = {al,..., am} c Z2 be a designated set of vertices that are called sources, and let B = {b, . . , bm} E Z2 be a designated set of vertices that are called sinks. Assume from now on that the points (a,, a2 , ... ,am, bin, bm-1,.. . , bi) are ordered in clockwise order in Z2. We

106 denote by Path({a1, ... , a.}, {b,. ... , bm}) the set of all m-tuple of noncrossing paths

(P, P2 , ... ,Pm) such that for 1 < I < m, P starts at a2 E Z2 and ends at bi E Z2. Given a set of sources A and a set of sinks B, we define the m x m network matrix

A,B = (cij)m. 1 as follows:

cij = wt(P). (3.1) PEPath({ai},{bj})

If A and B are clear from the context, we use C instead of CAB. The Lindstr6m-

Gessel-Viennot (LGV) lemma (see [34, Theorem 2.7.11) states that

det CA,B =- W) HEPath({ai.am},{bi,...,bm})

One of the well known consequences of this lemma is the following correspondence between Schur functions and noncrossing paths (see [33, Theorem 7.16.11):

Theorem 3.2.1. Let A = (A,..., Am) be a partition, and for all 1 < i < m, let ai = (m - i, 1), bi = (A 2 + m - i, n) for some positive integer n. Then SQ' =

ZHEPath({ai,...,am},{bi,...,bm}) wt(fl).

By translation one can obtain the following more general identity:

Proposition 3.2.2. Let A = (A, ... , Am) be a partition, and for all 1 < i < m, let a = (m - i + c, t), bi = (Ai + m - i + c, n) for some positive integers n > t and c. Then S[tn] A = ZnePath({ai,...,am},{bi,...,bm}) WtI)

3.3 Exposition of identities

We begin this section by introducing the following identity due to Kirillov [18], which was later generalized in [12].

Theorem 3.3.1. Let c, r be positive integers. Denote by (c') the rectangularpartition consisting of r rows with constant length c. Then we have the following identity for Schur functions:

107 S2:r) = S(cr-1) . S(cr+l) + S((Cl)r) - S((c+l)r).

In [12] Fulmek and Kleber gave a bijective proof for the following generalization of Kirillov's identity:

Theorem 3.3.2. Let (A,, A2 ,.. A+,) be a partition, where r > 0 is some integer. Then we have the following identity for Schur functions:

S( ,-,Ar) S(A2 ,...,A,+j) S(A,--,Ar) S(A ,...,Ar+) + S(A2 -1,...,Ar+-1) ' S(A+1,...,Ar+1). (3.2)

As the authors mentioned in [12], the identity above can be also proven alge- braically. Consider the network depicted in Theorem 3.2.1 with m = r + 1, and let C be the network matrix. For I, J c [m], we denote by C[IIJ] the submatrix of C on row set I and column set J. Dodgson's condensation formula [2] implies in particular that

det C[[1, r]1[1, r]] det C[[2, r + 1]1[2, r + 1]]

det C[[2, r]1[2, r]] det C + det C [[1, r]1[2, r + 1]] det C [[2, r + 11[1, r]].

Thus, identity (3.2) follows immediately from Proposition 3.2.2, where each one of the six terms in the equation above equals to the corresponding term in identity (3.2).

The network that was used in the proof was quite simple. All the sources lay on one line, as well as all the sinks. In this chapter we will consider more complicated types of networks, which will enable us to obtain two families of identities. We now introduce the first one:

Theorem 3.3.3. Let A = (A,, A2, ... , Ak) and 0

S S[2,0) .S2,oo)3 (A . ,At,At -1_.Ak-i) 1 . 2 ~ A . tA -. ki

+-1(A -1,..A 1 Aj1.At 1,At+ 2 ., Ak)'

This result was also proven in the Ph.D. thesis of Wuttisak Trongsiriwat [37] using a bijective argument. Here we provide an algebraic proof. Substituting k = r+1, t = r

108 in Theorem 3.3.3 gives the following corollary.

Corollary 3.3.4. Let (A, A2 , ... A,+,) be a partition, where r is some positive integer. Then

S(Al, ,Ar+) (Al.,Ar) - (A , .Ar+1) . 1 .. SA,.. .,Ar) + XlS( AI 1 .. ,Ar+1)I (l l..A~ )

We will now present our second family of identities. The special case of rectangular partitions was originally conjectured by Darij Grinberg [14]:

Theorem 3.3.5. Let (A,, A2 ,... A,+,) be a partition, where r > 0 is some integer. Then we have the following identities for Schur functions:

1.S(A 1 ., .Ar) (A,..Ar+i)

2.S,.00 ) SS[2,oor)i . SA,2..Ar ,Ar+i) (~7A-r) + S(A2 1.. .,Ar1,Ar+1)...... ,Ar1)*

Substituting x 1 = 0 in (2) leads us to (1), and substituting x 1 - -2 0 in (2) leads us to Theorem 3.3.2. Thus, Theorem 3.3.5 is a generalization of Theorem 3.3.2. Theorem 3.3.5 generalizes Corollary 3.3.4 as well. This can be seen by considering the coefficient of xi" on both sides of (2). By doing so, we get

r) S[3 ~ S= [3, oo) [)) _SS ,,0o) . ((A,. .,Ar,) ( A.AAr +) (A 2,...-A) 2( -1,.., A1) (A 1,. 1)-

After reordering the sides of the equation, the expression above becomes equivalent to the identity in Theorem 3.3.4. Note that from the discussion above, in order to prove Theorem 3.3.5 it is enough to prove part (2).

109 Figure 3-1: For A = (3, 2, 2,1), t = 1, and n = 5. The rectangles are the sources and the circles are the sinks.

3.4 Proof of Theorem 3.3.3

We start by describing the structure of the network that will be used in the proof. Let vi := (Ak+1- + i, n) for 1 < i < k. Define the set of sources to be

A = (a,,...,a2k-1) := ((k, 1), (k, 2),..., (3, k - 1), (2, k - 1), (2, k), (1, k)), and the set of sinks to be

B = (b, .... , b2k-1) := (V1, V 1 , V 2 , V2 , .. ., V, , . .. V, V ).

(The bar above Vkt denotes omission. So each vi appears twice, except for Vkt that appears once). For an example, see Figure 3-1. Let C be the network matrix. We first note that C satisfies certain rank properties.

Definition 3.4.1. Let k > 2. An interlacing matrix of order 2k -1 is a (2k -1) x (2k-1) matrix M whose rank is at most k, such that the rank of M [[2k-1]|[2, 2k-2]] is at most k - 1.

Proposition 3.4.2. Let C be the network matrix discussed above. Then CT is an interlacing matrix.

Proof. Let U c A, W c B such that IUl = W > k. This implies that W con- tains at least two identical sinks (as there are k non-overlapping sinks), and hence Path(U, W) = 0. Hence by the LGV lemma every k + 1-by-k + 1 minor in C equals

110 0, so CT is of rank at most k. Now, let U C A \ {ai, a 2 k- 1 }, W C B such that jUI = |W| > k. Then again Path(U, W) = 0. This holds since any path that starts at a vertex from U must cross one of {a2 , a 4, ... , a2k-2}, and hence there may be at most

|{a2 , a4 , ... , a2k-2} = k - 1 noncrossing paths starting at U and ending at W. Since JUI > k, this implies that Path(U, W) 0. Thus the rank of CT [[2k - 11 [2, 2k - 2]] is at most k - 1, and we are done.

In the next theorem, we use Plncker relations to show that interlacing matrices satisfy certain 3-term determinantal identity. We will later use the LGV lemma to transform this identity to the 3-term Schur function identity from theorem 3.3.3.

Theorem 3.4.3. Let k > 2 and let M be an interlacing matrix of order 2k - 1. Fix

I, J E ([1) such that {1,2k - 1} J = 0. Set J' := [2,2k-i] \ J and J" = [1, 2k - 2] \ J. Then

det M[71J] det M[Ij J] = det M[7|jT] det M[IIJ'] + det M[- I'J] det M[Ij J"]

where for K C [2k - 1] we define K = [2k - 1] \ K.

Proof. For a number a and a set B = {b, b 2 ,..., b,} such that b1 < b 2 < ... < br, denote by {a + B} (respectively, {a - B}) the set {a + bi, a + b 2 , . .. , a + br} (resp.,

{a - br, a - br_ 1 ,... , a - b 1}). Let # be the map described in (1.2) in chapter 1, so O(M) is a (2k - 1) x (4k - 2) matrix. Then the conclusion of Theorem 3.4.3 is equivalent (via (1.3)) to the following equation on the Plicker coordinates of #(M):

A[2k-1I\{2k-7}u{2k-1+7} A[2k-1]\{2k-I}u{2k-l+J} = (3.3)

A[2k-1]\{2k-7}u{2k-1+T'} A[2k-1]\{2k-I}u{2k-1+J'}

+ A[2k-1]\{2k-7}U{2k-1+T'}A[2k-1]\{2k-I}u{2k-l+J"}-

Because M is interlacing, we have det M[WIV] = 0 in the following two cases:

1. |WI = lVi > k;

2. |WI= lVi> k-1 and 1,2k-1 V.

111 Using (1.3), we get that the Plucker coordinates of O(M) satisfy

(a) Au = 0 for any U = {U1, U 2 ,..., U2 k-1} such that ui < ... < U2k-1

and Uk_1 > 2k - 1;

(b) Au = 0 for any U = {u, U 2 ,..., U2 k-1} such that ul < -. < u2k-1,

2k, 2(2k - 1) U and Uk > 2k - 1.

Now, note that since J' = {1} U J and J" = J U {2k - 1}, equation (3.3) is equivalent to

A{2k-I}u[2k,2(2k-1)]\2k-1+J}A[2k-1]\{2k-I}u{2k-1+J} (3.4)

{2k-I}u{2k}u{2k-1+J}A[2k-1]\{2k-I}u{2k-1+J'}

+ A{2k-I}u{2k-1+J}u{2(2k-1)}A[2k-11\{2k-I}u{2k-1+J"}-

To show that (3.4) holds, we will expand the left hand side using (1.1) where k is

replaced by 2k-1 and t is replaced by k-1. According to the formula, we are summing

over all the (2k-1) ways in which we can put the k - I elements of {2k - 1 + J} in place of some k - 1 elements of

{2k - I} U [2k, 2(2k - 1)] \ {2k - 1 + J}.

We will show that only two summands among the (T- ) summands appear on the

right side of (1.1) may be nonzero, and they are equal to the right side of (3.4). First, consider the summands in which at least one element from {2k - 1 + J} is placed

instead of an element in {2k - I}. Since all of the elements in

[2k, 2(2k - 1)] \ {2k - I + J} are bigger than 2k - 1 and 1{2k - I} = k - 1 we are

in case (a), which means that the resulting summand equals zero. Thus in order

to obtain a nonzero summand, all the k - 1 elements from {2k - 1 + J} must be

placed instead of some k - I elements from [2k, 2(2k - 1)] \ {2k - 1 + J}. There are exactly (kk1) = k such summands since 1[2k, 2(2k - 1)] \ {2k - I + J}I = k, and in each of the summands exactly one element from the set [2k, 2(2k - 1)] \ {2k - 1 + J}

112 is not replaced by an element from {2k - 1 + J}, and all the other are replaced. Note

that since 1, 2k - 1 J,

2k, 2(2k - 1) E [2k, 2(2k - 1)] \ {2k - 1 + J}.

We may choose one of the following to be the element that is not replaced: 2k;

2(2k - 1); or an element from [2k, 2(2k - 1)] \ {2k - 1 + J} not equal to 2k or

2(2k - 1). If we choose 2k, the resulting summand is

A{2k-I}U{2k}U{2k-1+J} A[2k-1]\{2k-I}u{2k-1+J'},

and if we choose 2(2k - 1) the resulting summand is

A{2k-I}U{2k-1+J}u{2(2k-1)} A[2k-1]\{2k-I}u{2k-1+J"}-

If we choose an element which is not equal to 2k or 2(2k - 1), then (b) implies that the k - 2 resulting summands equal zero. Thus we showed that (3.4) holds and we are done. Fl

The last ingredient that we need for the proof of Theorem 3.3.3 is the following lemma.

Lemma 3.4.4. Let A = (A,..., A,) be a partition, and let n be a positive integer, k < min(m, n), and c > 0. For all 1 < i < m, define the set of sinks to be bi =

(Ai + m - i + c, n). Also, define the set of sources {ai}' as follows:

(m - i + c, i), if i < k; (3.5) (M - i + c, k), if k < i

Then S ' rIEPath({ai,...,am},{bi,...,bm}) wt(H).

Proof. Let a, = (m - i + c, 1) for all 1 < i < m. From proposition 3.2.2, we have

S 'n] ErEPath({ni,...,am},{b,...,bm}) Wt(H). Thus, in order to prove the lemma, We will construct a bijection # between

113 Path({ci,..., am}, {bi,.. .,bm}) and Path({ai,...,am},{bi, ... , bm})

that preserves the weight. Let

1I = (P1,... , Pm) c Path({a,. ., am}, {b, .. . , bm),

so P starts at a and ends in bi. Since H is noncrossing, the first step in P2 must be

vertical (otherwise P and P2 will cross). Similarly, the first two steps in P3 must be vertical (otherwise P3 and P2 will cross). More generally, the first j - 1 steps in P must be vertical, for all 1 < j < m. This implies in particular that each Pj passes through a3 , and until it arrives to aj it contains only vertical steps. Let Pj be a path that starts at aj and from this point is identical to P, so in particular Pj$ ends at bj. In addition, since vertical edges are weighted by 1, we have wt(Pj) = wt(Pj). Thus define 0(H) E Path({ai,... , am}, { 1 , ... , bm}) to be #() = (P... , P'). Clearly # is a bijection for which wt(H) = wt(#(H)), so we are done. 0

We are now ready to present the proof of Theorem 3.3.3.

Proof. Let J = { 2, 4, 6, ..., 2k-2}, J' = { 3, 5, 7, .. .,*2k-1}, J"f = { 1, 3, 5, ... , 2k-3}, I = {2, 4, 6, ... , 2k - 2}. Recall that C is the network matrix, so CT is an interlacing matrix. Then according to Theorem 3.4.3,

det T[I7 ] det CTfI|J] det CT[I|J]det CT[I|J'] + det CT[7|T] det CT[IJ"]

Equivalently,

det C[JI7] det C[JII] = det C['7] det C[J'I1] + det C[J"17] det C[J"I].

In order to prove the theorem, we need to show that the following equalities hold:

(a) det C[JI7] = SA ;

(b) det C[JII] = S

(c) det C[J'1I] = X1S(Aj 1,...,Ak-1);

114 (d) det C[J'II] [,o (A 1 +1,...,At+1,At+ 2 ,...,Ak)

(e) det C[J'/] -S ;

(f) det C[J"I)= S(A1,-..,At,At+ 2 -1,-.,Ak-1)-

All the parts except from (c) follow directly from Lemma 3.4.4. For part (c), since a 2 lies immediately above a,, the first step of the path that starts at a, must be to the right (since the paths should be noncrossing), and its weight is xi. From here we can use Lemma 3.4.4 and conclude that det C[J'II] = X1S(A1_1,...,Ak-1). This completes the proof of the theorem.

3.5 Proof of Theorem 3.3.5

In this section we present the proof of Theorem 3.3.5, and conclude with a conjecture that deals with a possible generalization.

Proof. Recall that it is enough to prove part (2). We start by assuming that r > 1.

Let us define a set of sources {ai} 1r and a set of sinks {bi} 2 1 in the following way:

(r, 1), if i = 1;

(r - 1, 2), if i = 2; (3.6) (r - t, 3), if i = 2t + 1 for 1 K t K r - 1;

(r - t, 3), if i = 2t for 2 K t K r.

+ r - t, n), if i = 2t + 1 for 0 K t K r - 1; bi (At+, (3.7) (At+, + r - t, n), if i = 2t for 1 K t K r.

Note that we have a2t = a2t+1 for 2 < t K r - 1, and b2 t = b2 t+1 for 1< t Kr-1. Let

A, = { 1, 2, 5, 7, 9, .. ,2(r21 - 1) - 1, 2 (r - 1) + 1}, A. = {3, 4, 6, 8, . .. , 2r}1,

115 Bx = {1, 3,5,7,9,..., 2(r - 1) - 1, 2(r - 1) +I}, B. = {2,4,6,8,..., 2r}.

Then Ax U A. = Bx U B. = [2r] and Ax n A. = Bx n B. = 0. We let Ax U A. index the sources and Bx U B. index the sinks. Figure 3-2 depicts the set of sources and sinks. Note that since r > 1, JAxI = IA.I = IBxI = 1B.1 = r. Define the matrix C as in (3.1) (for m = 2r). We start by showing that the following

2(r-1)+1 5 n 'C

2r 2(r) 4 2

5

3 g 3 2r' 2(r-1) 4 2 XC2

41 2 3 * . rt1 A-'1

Figure 3-2: This figure depicts the set of sources and sinks. Each of them is divided into x part and 9 part, according to the classification above. four identities hold:

(a) det C[AxIBx] det C[A.|B.] = S( ,- . S3, ) (AAr)(A1 2 . .Ar+i)'

(b) det C[Ax U {2r}IBx U {2r}] det C[A. \ {2r}IB. \ {2r}] sE"] S[3,n]. (AIA2 ,---,Ar,Ar+1) (A2,---,Ar)'

(c) det C[AxIB.] det C[A.|Bx] = sf"I 5[3l,n]. (M 1 ,Ar... 1 -1) ( A+r--,1)'

(d) det C[Ax U {3}IB U {2r}] det C[A. \ {3}IB. \ {2r}] =

X12(Al,.,r11 (A 2 1 ,...,Ar+1)*

116 The first three identities follow directly from the LGV lemma and Lemma 3.4.4. For the fourth identity, note that by the LGV lemma,

det C[A, U {3}B, U {2r}] = wt(O), (3.8) where the sum on the RHS is through all H in

Path({ai, a2, a3, a5, a7, aq, .... , a2(r--)+1}, { bi, b3, b5, . ... , b2(r-1)+1, b2,}).

Let H = (P, P2 , .. . , Pr+1) be a noncrossing tuple that corresponds to some summand on the RHS of (3.8), so P starts at a1 and ends at bl, and P2 starts at a 2 and ends at b3 . Since P2 and P3 are noncrossing, the first step in P2 must be to the right.

Similarly, since P and P2 are noncrossing, the first step of P must be to the right as well. Let a, = a, + (1, 0), a2 = a2 + (1, 0), and define Pj, P2 to be the paths that are obtained from P1 , P2 by removing the first step respectively. Let

H' = (P1, P2, P3 , P4,... ,Pr+)-

Then wt(H) = xix 2wt(H'). This defines a bijection between

Path({ai, .. . a2 , a3 , a5, a7 , a9, , a2(r-1)+1}, {bi, b3 , b 5 ,... , b2 (r-l)+l, b2r}) and

Path({a1 , a 2 , a3 , a5 , a7 ,a,... ,a2(r-1)+1}, {bi, b 3, bs,..., b2 (r-l)+, b2r})

that multiplies the weight of each path in the latter by x 1x 2 . Thus combining the LGV lemma and Lemma 3.4.4 we deduce the fourth identity. Therefore, in order to prove the theorem, we need to verify that

detC[AxIBx]detC[A.IB.] = (3.9)

det C[AX U {2r}IBx U {2r}] det C[A. \ {2r} lB. \ {2r}]+

det C [A IB.] det C [A.IBx] -

det C[Ax U {3}1Bx U {2r}] det C[A. \ {3}1B. \ {2r}].

117 Denote by C the 2r x 4r matrix that is obtained from C via the map described in

(1.2). Denote by {AI}I([4-]) the set of Plucker coordinates of C. Finally, let us introduce the following notations:

T =13, 5, 7, . .. , 2(r - 2) + 1}, U = {2r + 3, 2r + 5, 2r + 7, ..., 2r + 2(r - 1) + 1},

V= {2, 4, 6 ... , 2(r - 2), 2(r - 1)+ 1, 2r}, W = {2r + 2, 2r + 4,..., 2r + 2(r - 1)},

V1 = {2,4,6, ... , 2(r - 2)}, V2 = {2(r - 1) + 1, 2r}.

Note that ITI = r - 2,|UI = r - 1,IVI =r,WI = r - 1, V = V U V2 . In addition, det C [A,|B.] det C [A.IB,] equals

det C[AI{3, 5,..., 2(r - 1) + 1, 2r}] det C[A.I{1, 2,4, ... ,2(r - 1)}]. (3.10)

From now on, whenever we write, {X, Y} (when X, Y are sets or numbers), we just mean X U Y (or X U {Y} if X is a set and Y is a number, and similarly for the other options), while we treat X U Y as a tuple, so the order of the elements is preserved.

Now, using (3.10) and (1.3), we can translate (3.9) to the following equation on the Plucker coordinates of C:

A{l,T,2(r-1),2r+l,U}A{V,W,2r+2r} (3-11)

A{T,2(r-1),2r+,U,2r+2r}A{1,V,W} + A{1,T,2(r-1),U,2r+2r}A{V,2r+1,W} -

A{1,T,2r+1,U,2r+2r} A{V 1 ,2(r-1),V2 ,W}-

In order to prove this equation, we would like to use Plucker relations. Consider the product A{l,T,2(r-1),2r+,U}AVW,2r+2r}. According to the Plucker relations, by exchanging the element 2r + 2r with some element from {1, T, 2(r - 1), 2r + 1, U}, we can write this product as a sum of 2r products (possibly with negative sign) of pairs of Plucker coordinates of the form A1 A11 such that li = |i'l , I f I' = 0 and

I U I' = {1, T, 2(r - 1), 2r + 1, U} U{V, W, 2r + 2r}.

= Recall that a2t a2t+1 for 2 < t < r - 1, and b2 t = b2t+1 for 1 < t < r - 1. Therefore,

118 if I C ([r]) contains the pair {2r + 2t, 2r + 2t + 1} for some 1 < t < r -1 then A, = 0,

and hence A1 A1 , = 0. Similarly, if for some 1 < t < r - 2 both of the elements 2t

and 2t + 1 are not in I then A 1p = 0, and hence A1 AI, = 0.

Using this observation, we will now show that 2r - 3 of these 2r products equal

0, and the other three are exactly the products that appear on the right hand side of (3.11). Let I ([,1) be the set that is obtained from {1, T, 2(r - 1), 2r + 1, U} by removing some element from T (which is thus of the form 2t + 1 for some 1 < t < r -2) and adding the element 2r + 2r instead. Then the elements 2t and 2t + 1 are not in I, and hence A1 A1 , = 0. Similarly, if we exchange 2r + 2r with some element from U, then I contains a pair of elements of the form 2r + 2t, 2r + 2t + 1 for some

1 < t < r - 1. Thus we again get A1 A1 , = 0. In conclusion, we can exchange 2r + 2r only with the elements 1, 2(r - 1), 2r + 1, and this leads to (3.11). We can now let n -+ oc, and this completes the proof for the case r > 1. We now consider the case r = 1. In this case, we need to show that

.S [3,oo , 1,o) -XX2S(-1,A 2 -1>' S(() (A 2 ) = S(AlA 2 ) + S(A 2 1 ) (A) -1 x S -,

Let a, = (1, 1), a 2 = (0, 3) be the sources and let b1 = (A, + 1, n), b 2 = (A 2 , n) be the sinks. Define the 2-by-2 matrix C as in (3.1) (for m = 2). We have 11C22 = det C +C21C12. By the LGV lemma,

C11 = S '[1,C - s], 012 - S! C), 21 =S

Thus, in order to prove the claim we need to show that

det C = SS'",n -11 S[i,2

119 Let u2 = (0, 2), v 2 =(1, 2). We have

wt(H) = (3.12)

HePath({ai,a2},{bi,b2 })

wt(H) - X 2 wt(H).

flCPath({ai,u2},{bi,b2 }) rIEPath({ai,v2 },{bi,b2 })

This holds since the first step of any path that starts at U2 is either up (with

weight 1), or to the right (with weight x 2 ). Also note that for H = (P1 , P2 ) E

Path({ai,v 2 }, {bi, b 2 }), the first step of P must be to the right (as P and P2 are

noncrossing) and the weight of this step is x1 . Thus

E wt(H) =det C,

HEPath({aj,a2 },{bi,b2 })

Swt(fl) S ', fEPath({ai,u 2},{bi,b2 }) and

wt(fl) = XiS'7ln ) HEPath({a 1 ,v2 },{bi,b2 })

Substituting these three expressions in (3.12) and letting n -+ oo concludes the proof for r = 1, and we are done. l

Theorem 3.3.5 deals with products of the form S(A1 ,...,A) 2. ) for k = 2, 3. We conclude this chapter with a conjecture for the case k = 4, when A is a rectangular partition. Note that when we moved from k = 1 to k = 2, 0 terms were added, while the move from k = 2 to k = 3 added one term. Here, two terms are added on the right hand side.

Conjecture 3.5.1. Let c, r > 1. Then

S(cr (Cr) =

S(cr+I) S-1 ) + S((c-)r) - S 2))) - (XiX 2 + X1i 3 + X 2 X 3 )S((c_1yr1) -S _)rl)+

120 [ 4co X 1 X2 X3 (S((c-)r,c-2)S (C+)r-1) - S((c-)r+1) S])r-2,C))

In the expression above we set S, = 0 if one of the parts in p is a negative number.

121 122 Bibliography

[11 V. I. Danilov, A. V. Karzanov, and G. A. Koshevoy. On maximal weakly sepa- rated set-systems. Journal of Algebraic Combinatorics, 32:497 - 531, 2010.

[2] C. L. Dodgson. Condensation of determinants. Proceedings of the Royal Society of London, 15:150 - 155, 1866.

[3] S. M. Fallat, C. R. Johnson, and R. L. Smith. The general totally positive matrix completion problem with few unspecified entries. The Electronic Journal of Linear Algebra, 7:1 - 20, 2000.

[41 M. Farber, M. Faulk, C. R. Johnson, and E. Marzion. Equal entries in totally positive matrices. Linear Algebra and its Applications, 454:91 - 106, 2014.

[5] M. Farber and P. Galashin. Weak separation, pure domains and cluster distance. arXiv:1 612.05387, 2016.

[6] M. Farber, S. Hopkins, and W. Trongsiriwat. Interlacing networks: Birational rsk, the octahedron recurrence, and schur function identities. J. Combin. Theory Ser. A, 133:339 - 371, 2015.

[7] M. Farber and Y. Mandelshtam. Arrangements of minors in the positive grass- mannian and a triangulation of the hypersimplex. arXiv:1509.02600, 2015.

[8] M. Farber and A. Postnikov. Arrangements of equal minors in the positive grassmannian. Advances in Mathematics, 300:788-834, 2016.

[91 M. Farber, S. Ray, and S. Smorodinsky. On totally positive matrices and geo- metric incidences. Journal of Combinatorial Theory, Series A, 128:149 - 161, 2014.

[10] S. Fomin and A. Zelevinsky. Cluster algebras I: Foundations. Journal of the American Mathematical Society, 15:497 - 529, 2002.

[11] S. Fomin and A. Zelevinsky. Cluster algebras II: Finite type classification. In- ventiones Mathematicae, 154:63 - 121, 2003.

[12] M. Fulmek and M. Kleber. Bijective proofs for Schur function identities which imply Dodgson's condensation formula and Plicker relations. Electron. J. Com- bin., 8(1):Research Paper 16, 22 pp. (electronic), 2001.

123 [13] F. Gantmacher and M. Krein. Sur les matrices oscillatores. C.R. Acad. Sci. Paris, 201, 1935.

[14] D. Grinberg. personal communication. April 2014.

[15] C. R. Johnson, B. K. Kroschel, and M. Lundquist. The totally nonnegative completion problem. Fields Institute Communications, American Mathematical Society, Providence, RI, 18:97 - 108, 1998.

[16] C. R. Johnson and C. Negron. Totally positive completions for monotonically labeled block clique graphs. The Electronic Journal of Linear Algebra, 18:146 - 161, 2009.

[17] C. Jordan and R. Torregrosa. The totally positive completion problem. Linear Algebra Appl., 393:259 - 274, 2004.

[18] A. N. Kirillov. Completeness of states of the generalized Heisenberg magnet. Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 134:169 - 189, 1984. Automorphic functions and number theory, II.

[19] T. Lam and A. Postnikov. Polypositroids and membranes. in preparation.

[20] T. Lam and A. Postnikov. Alcoved polytopes I. Discrete & Computational Geometry, 38:453 - 478, 2007.

[21] T. Lam, A. Postnikov, and P. Pylyavskyy. Schur positivity and Schur log- concavity. Amer. J. Math., 129(6):1611 - 1622, 2007.

[22] B. Leclerc and A. Zelevinsky. Quasicommuting families of quantum Plucker coordinates. American Mathematical Society Translations, Ser. 2:181, 1998.

[23] K. Lee and R. Schiffler. Positivity for cluster algebras. arXiv:1306.2415, 2013.

[241 G. Lusztig. Total positivity in reductive groups. Lie Theory and Geometry, Progress in Mathematics, 123:531 - 568, 1994.

[25] G. Lusztig. Total positivity in partial flag manifolds. Representation Theory, 2:70 - 78, 1998.

[26] S. Oh, A. Postnikov, and D. Speyer. Weak separation and plabic graphs. Pro- ceedings of the London Mathematical Society, 110(3):721-754, 2015.

[27] A. Postnikov. Total positivity, Grassmannians, and networks. arXiv:0609764, September 2006.

[28] B. Rhoades and M. Skandera. Temperley-lieb immanants. Annals of Combina- torics, 9:451 - 494, 2005.

[29] I. J. Schoenberg. Ober variationsvermindernde lineare transformationen. Math- ematische Zeitschrift, 32:321 - 328, 1930.

124 [301 J. Scott. Grassmannians and cluster algebras. Proceedings of the London Math- ematical Society, 92(2):345 - 380, 2006.

[31] M. Skandera. Inequalities in products of minors of totally nonnegative matrices. J. Algebraic Combin., 20(2):195 - 211, 2004.

[32] R. P. Stanley. Eulerian partitions of a unit hypercube. M. Aigner, ed., Higher Combinatorics, page 49, 1977.

[33] R. P. Stanley. Enumerative combinatorics. Vol. 2. Cambridge University Press, Cambridge, 1999.

[34] R. P. Stanley. Enumerative combinatorics. Vol. 1. Cambridge University Press, Cambridge, second edition, 2012.

[351 B. Sturmfels. Gr6bner bases and convex polytopes. University Lecture Series, 8. American Mathematical Society, Providence, RI, 1996.

[36] E. Szemeredi and W. Trotter. Extremal problems in discrete geometry. Combi- natorica, 3(3):381 - 392, 1983.

[37] W. Trongsiriwat. Combinatorics of permutation patterns, interlacing networks, and Schur functions. Ph.D. dissertation, MIT, 2015.

125