<<

PU.M.A. Vol. 28 (2020), No. 1, pp. 1–13, DOI:10.1515/puma-2015-0041

Finiteness of the criss-cross for the problem with s-monotone index selection rules

Adrienn Csizmadia∗† Marston Green, UK

(Received: March 7, 2021)

Abstract. The traditional criss-cross algorithm for the linear programming problem is shown to be finite when s-monotone index selection rules are used. The set of s-monotone index selection rules includes, among others, the Last-In-First-Out (LIFO) and the Most-Often-Selected-Variable (MOSV) rules. The advantage of applying s-monotone index selection rules is the flexibility they provide in selecting the pivot position while still preserving the guarantee of finiteness. Such flexibility may be used to improve the numerical stability of the algorithm.

Mathematics Subject Classifications (2015). 90C49, 90C05

Keywords. Criss-cross method, index selection, finiteness

1 Introduction and problem definitions

The linear programming problem (LP) is one of the most studied fields of operations research. Most pivot algorithms (PA) for LPs minimize a merit function such as the sum of infeasibilities or the objective function. An early exception, introduced by Terlaky [24, 25] and independently by Zionts [29] without a finiteness proof, has been the criss-cross algorithm (CCA), in which the proof of finiteness depends on purely combinatorial considerations, as opposed to simplex-type methods, which are always finite if the problem is non-degenerate. This reliance on index selection rules makes it particularly interesting to study the class of index selection rules for which the criss-cross algorithm is finite while still allowing flexible pivot selection. The scope of the method has been extensively broadened in [1, 2, 13, 18, 21, 26]. In linear programming, an often cited way of selecting the pivot position is the minimal index rule. Though such a pivot position is good in theory, it may not necessarily be suitable in practice. It would therefore be desirable to use a pivot rule which guarantees finiteness but also offers more flexibility.

∗Maiden name Adrienn Nagy †e-mail: [email protected]

To that end, the idea of Zhang [28] was to use the Most-Often-Selected-Variable and Last-In-First-Out rules to prove the finiteness of the criss-cross algorithm. The same is proven using the orthogonality theorem of [20]. This paper shows that the recently introduced concept of s-monotone index selection rules [9, 10, 15, 16] can also be applied to guarantee the finiteness of the criss-cross algorithm. This proof first appeared as part of the PhD dissertation [23] of the author.

Throughout the paper, matrices are denoted by italic capital letters, vectors by bold letters, scalars by normal letters, and index sets by capital calligraphic letters. Columns of a matrix are indexed by subscripts, while rows are indexed by superscripts. A ∈ R^{m×n} and M ∈ R^{n×n} denote the original problem matrices for the various problem types, while b and c denote the right hand side and objective vectors, respectively. Let A ∈ R^{m×n}, c ∈ R^n and b ∈ R^m be a matrix and vectors of appropriate dimensions; then

\[
\begin{array}{rl}
\min & c^T x \\
     & A x = b \\
     & x \geq 0
\end{array}
\qquad (1)
\]

is a primal linear programming (P-LP) problem, while the dual linear programming (D-LP) problem can be defined as follows:

\[
\begin{array}{rl}
\max & b^T y \\
     & A^T y \leq c
\end{array}
\qquad (2)
\]

where x ∈ R^n and y ∈ R^m are the primal and dual decision vectors, respectively. Without loss of generality we assume that rank(A) = m.

A regular (nonsingular) m × m submatrix of the constraint matrix is called a basis [15]. For a given basis B, we denote the nonbasic part of A by N; the corresponding sets of indices for the basic and nonbasic parts are denoted by I_B and I_N, respectively. The corresponding short pivot tableau for B is denoted by T := B^{-1}N, while the transformed right hand side and objective (reduced cost) vectors are denoted by b̄ := B^{-1}b and c̄^T := c^T − c_B^T B^{-1}A. We refer to the columns (index j) and rows (index i) of the short pivot tableau corresponding to a given basis B as t_j := B^{-1}a_j and t^{(i)} := (B^{-1}N)^{(i)} [20]. We denote the individual coefficients of the pivot tableau by t_{ij}. The variables corresponding to the column vectors of the basis B are called basic variables.

Define the (column) vectors t^{(i)} and t_j of dimension (n + 2) [20], corresponding to the (primal) basic tableau of the (P-LP) problem, where i ∈ I_B and j ∈ I_N, respectively, in the following way:

\[
\left(t^{(i)}\right)_k = \bar{t}_{ik} =
\begin{cases}
t_{ik} & \text{if } k \in \mathcal{I}_B \cup \mathcal{I}_N \\
\bar{b}_i & \text{if } k = b \\
0 & \text{if } k = c
\end{cases}
\qquad (3)
\]

and

\[
\left(t_j\right)_k = \bar{t}_{kj} =
\begin{cases}
t_{kj} & \text{if } k \in \mathcal{I}_B \\
-1 & \text{if } k = j \\
0 & \text{if } k \in (\mathcal{I}_N \setminus \{j\}) \cup \{b\} \\
\bar{c}_j & \text{if } k = c
\end{cases}
\qquad (4)
\]

where b and c denote the indices associated with the vectors b and c, respectively. Furthermore, define the vectors t^{(c)} and t_b in the following way:

\[
\left(t^{(c)}\right)_k = \bar{t}_{ck} =
\begin{cases}
\bar{c}_k & \text{if } k \in \mathcal{I}_B \cup \mathcal{I}_N \\
1 & \text{if } k = c \\
-c_B^T B^{-1} b & \text{if } k = b
\end{cases}
\qquad (5)
\]

and

\[
\left(t_b\right)_k = \bar{t}_{kb} =
\begin{cases}
\bar{b}_k & \text{if } k \in \mathcal{I}_B \\
-1 & \text{if } k = b \\
0 & \text{if } k \in \mathcal{I}_N \\
-c_B^T B^{-1} b & \text{if } k = c
\end{cases}
\qquad (6)
\]

From now on we assume that c is always a basic index, while b is always a nonbasic index of the (P-LP) problem. The following result is widely used in the proofs of finiteness of pivot algorithms for LP.

Theorem 1 [20] Let a (P-LP) problem be given with rank(A) = m, and assume that I_{B′} and I_{B′′} are two arbitrary bases of the problem. Then

\[
\left(t''^{(i)}\right)^T t'_j = 0 \qquad (7)
\]

for all i ∈ I_{B′′} and for all j ∉ I_{B′}.
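To make the notation concrete, the following small numerical sketch (not part of the original paper) builds the extended vectors of (3)–(6) for a toy (P-LP) instance and checks relation (7) for two bases. The data, the function name extended_vectors and the convention that coordinate n plays the role of the index b and coordinate n + 1 that of c are illustrative assumptions only.

import numpy as np

# Toy (P-LP) instance: min c^T x, Ax = b, x >= 0, with rank(A) = m.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
m, n = A.shape

def extended_vectors(basis):
    """Return {i: t^(i)} for i in I_B together with t^(c), and {j: t_j} for
    j in I_N together with t_b, all of dimension n + 2 as in (3)-(6)."""
    basis = list(basis)
    nonbasis = [j for j in range(n) if j not in basis]
    Binv = np.linalg.inv(A[:, basis])
    T = Binv @ A                            # long tableau B^{-1} A
    b_bar = Binv @ b                        # transformed right hand side
    c_bar = c - A.T @ (Binv.T @ c[basis])   # reduced costs
    obj = c[basis] @ b_bar                  # c_B^T B^{-1} b
    rows, cols = {}, {}
    for pos, i in enumerate(basis):         # rows t^(i), equation (3)
        t = np.zeros(n + 2)
        t[:n], t[n] = T[pos, :], b_bar[pos]
        rows[i] = t
    t = np.zeros(n + 2)                     # row t^(c), equation (5)
    t[:n], t[n], t[n + 1] = c_bar, -obj, 1.0
    rows["c"] = t
    for j in nonbasis:                      # columns t_j, equation (4)
        t = np.zeros(n + 2)
        t[basis], t[j], t[n + 1] = T[:, j], -1.0, c_bar[j]
        cols[j] = t
    t = np.zeros(n + 2)                     # column t_b, equation (6)
    t[basis], t[n], t[n + 1] = b_bar, -1.0, -obj
    cols["b"] = t
    return rows, cols

rows2, _ = extended_vectors([0, 1])         # an arbitrary basis, playing B''
_, cols1 = extended_vectors([2, 3])         # another arbitrary basis, playing B'
for i, ti in rows2.items():                 # Theorem 1: every product vanishes
    for j, tj in cols1.items():
        assert abs(ti @ tj) < 1e-9, (i, j)
print("orthogonality relation (7) verified for all pairs")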

2 The s-monotone index selection rules

The s-monotone index selection rules were introduced in [9, 10] for LP and LCP problems, and later directly applied to QPs in [8]. Pivot based algorithms (like the simplex algorithm [11], the MBU-simplex algorithm [3] or the criss-cross algorithm [24, 25, 29]) often share the following principles:

1. The main flow of the algorithm is defined by a pivot selection rule, which determines the basic characteristics of the algorithm, though the pivot position it defines is not necessarily unique (see for instance [7, 12, 22]); a series of "wrong" choices may even lead to cycling [7, 22].

2. To avoid the possibility of cycling, an index selection rule is used as an anti-cycling strategy (see for instance [6, 7, 27]); it may be flexible [9, 16], but usually, at several bases during the algorithm, it defines the pivot position uniquely.

For several pivot algorithms, like the simplex, MBU-simplex or criss-cross algorithms, proofs of finiteness are often based on the orthogonality theorem [5, 15, 16, 20], considering a minimal cycling example [5, 10, 15, 16]. In a minimal cycling example all variables move during a cycle, if such a cycle exists; by following the movements of the least preferred variable according to the index selection rule [6, 9, 10, 15, 16, 19, 28] and using the orthogonality theorem, we obtain a contradiction. Examples of such pivot and index selection rules include

1. Pivot selection rules for (P-LP):

(a) simplex [11] (pivot column selection: negative reduced cost; pivot element selection: using the ratio test, preserving the non-negativity of the right hand side);

(b) MBU-simplex [3] (pivot column selection: negative reduced cost, choosing the driving variable; pivot element selection: defining driving and auxiliary pivots using primal and then dual ratio tests; monotone in the reduced cost of the driving variable);

(c) criss-cross [25] (pivot column/row selection is based on infeasibility, that is, a negative right hand side or a negative reduced cost; pivot element selection: admissible pivot positions).

2. Index selection rules:

(a) Bland's or the minimal index rule [6],

(b) Last-In-First-Out (LIFO),

(c) Most-Often-Selected-Variable (MOSV).

LIFO and MOSV index selection rules for linear programming problems were first used by S. Zhang [28] to prove the finiteness of the criss-cross algorithm with these anti-cycling index selection rules. Bilen, Csizmadia and Illés [5] proved that variants of the MBU-simplex algorithm are finite with both the LIFO and MOSV index selection rules, while Csizmadia in his PhD thesis [9] and Csizmadia et al. [10] showed that the simplex algorithm is finite when LIFO and MOSV are applied. These results led to the joint generalization of the above mentioned anti-cycling index selection rules. The following general framework for proving the finiteness of several pivot algorithm and index selection rule combinations is introduced, as in [10].

Definition 1 (Possible pivot sequence [10]) A sequence of index pairs

S = {S_k = (i_k, o_k) : i_k, o_k ∈ ℕ, for some consecutive k ∈ ℕ} is called a possible pivot sequence if

(i) n = max{ max_{k∈ℕ} i_k, max_{k∈ℕ} o_k } is finite,

(ii) there exists a (P-LP) problem with n variables and rank(A) = m, and

(iii) the (possibly infinite) pivot sequence is such that the moving variable pairs of (P-LP) correspond to the index pairs of S.

The index pairs of a possible pivot sequence are thus only required to comply with the basic and nonbasic status. It is now easy to show that

Proposition 1 If a possible pivot sequence is not finite then there exists a (sub)set of indices, I∗, that occur infinitely many times in S.

Definition 2 (Pivot index preference [9, 10]) A sequence of vectors s_k ∈ ℕ^n is called a pivot index preference of an index selection rule if, in iteration j, in the case of ambiguity according to a pivot selection rule, the index selection rule selects an index with the highest value in s_j among the candidates.

The concept of the s-monotone index selection rule aims to formalize a common monotonicity property of several index selection rules.

Definition 3 (s-monotone index selection rules [9, 10]) Let n ∈ ℕ be given. An index selection rule is called s-monotone if

1. there exists a pivot index preference s_k ∈ ℕ^n for which

(a) the values in the vector s_{j−1} after iteration j may only change for i_j and o_j, where i_j and o_j are the indices involved in the pivot operation,

(b) the values may not decrease;

2. for any infinite possible pivot sequence S and for any iteration j there exists an iteration r ≥ j such that

(a) the index with minimal value in s_r among I* ∩ I_{B_r} is unique (let it be l), where I_{B_r} is the set of basic indices in iteration r and I* is the set of all indices that appear infinitely many times in S,

(b) in the iteration t > r when index l ∈ I* occurs again in S for the first time, the indices of I* that occurred in S strictly between S_r and S_t have a value in s_t higher than that of the index l.

The goal of the definition of the s-monotone index selection rule is to generalize the concept of flexible index selection rules in a framework that still guarantees finiteness.

Theorem 2 The following index selection rules:

1. the minimal index rule,

2. the Most-Often-Selected-Variable rule, and

3. the Last-In-First-Out index selection rule

are s-monotone index selection rules.

The proof of this theorem can be found in [9, 10]. It would be interesting to find other non-trivial index selection rules that are also s-monotone. A number of such generalizations can be found in [10].
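As an illustration of Definitions 2 and 3 and of Theorem 2, the following Python sketch shows one natural way to maintain the pivot index preference vector s for the three rules above. The class name SMonotoneRule, its methods and the tie-breaking by minimal index are illustrative assumptions, not taken from [9, 10].

class SMonotoneRule:
    """Maintains a pivot index preference vector s (Definition 2); ties in a
    pivot selection rule are broken in favour of the largest s value."""

    def __init__(self, n, kind="lifo"):
        self.kind = kind
        self.iteration = 0
        if kind == "minimal_index":
            # Static preference: a smaller index gets a larger s value; never updated.
            self.s = [n - i for i in range(n)]
        else:
            # "lifo" and "mosv" both start from the all-zero vector.
            self.s = [0] * n

    def choose(self, candidates):
        """Return the candidate with the highest s value (remaining ties are
        broken by the minimal index, purely for determinism)."""
        return max(candidates, key=lambda i: (self.s[i], -i))

    def record_pivot(self, entering, leaving):
        """Update s only for the two indices involved in the pivot; the values
        never decrease, as required by Definition 3."""
        self.iteration += 1
        for i in (entering, leaving):
            if self.kind == "lifo":
                self.s[i] = self.iteration   # the most recently moved index wins
            elif self.kind == "mosv":
                self.s[i] += 1               # the most often moved index wins
            # for "minimal_index" the vector s is never changed

For instance, after rule = SMonotoneRule(6, "mosv") and a few calls to record_pivot, rule.choose({1, 4}) returns whichever of the two indices has taken part in more pivots so far; in all three variants the s values change only for the two indices of a pivot and never decrease, which is exactly the monotonicity required in Definition 3.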

3 The criss-cross algorithm with s-monotone index selection rules

We state the CCA for LP problems.

The criss-cross algorithm with s-monotone index selection rules

Input data: A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, I = {1, . . . , n}, a basis B, an initialized s vector;
Output: an optimal solution, or a certificate that the problem is primal or dual infeasible.

begin
    calculate c̄^T := c^T − (c_B^T B^{-1}) A, the reduced costs;
    calculate b̄ := B^{-1} b, the transformed right hand side;
    I^- := {i ∈ I_N : c̄_i < 0} ∪ {j ∈ I_B : b̄_j < 0};
    while (I^- ≠ ∅) do
        let p ∈ {i ∈ I_B : b̄_i < 0} be arbitrary with a maximal s-value;
        let q ∈ {j ∈ I_N : c̄_j < 0} be arbitrary with a maximal s-value;
        let r ∈ {p, q} be arbitrary with a maximal s-value;
        if (r = p) then
            if (t^{(p)} ≥ 0) then
                STOP: the problem is primal infeasible, certificate t^{(p)};
            else
                let q ∈ {j ∈ I_N : t_{pj} < 0} be arbitrary with a maximal s-value;
            endif
        else (so r = q)
            if (t_q ≤ 0) then
                STOP: the problem is dual infeasible, certificate t_q;
            else
                let p ∈ {i ∈ I_B : t_{iq} > 0} be arbitrary with a maximal s-value;
            endif
        endif
        pivot on the (p, q) element;
        update I^- and I_B := I_B ∪ {q} \ {p};
    endwhile
    STOP: an optimal solution is found;
end
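The following compact NumPy sketch (not part of the original paper) instantiates the pseudocode above with the LIFO preference and minimal-index tie-breaking. It refactorizes the basis from scratch in every iteration and uses a fixed tolerance, so it only illustrates the control flow of the method; the function name, its interface and the small instance at the end are hypothetical.

import numpy as np

def criss_cross(A, b, c, basis, tol=1e-9):
    """Criss-cross sketch with the LIFO s-monotone rule (illustration only)."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    basis = list(basis)
    s = [0] * n                                    # LIFO pivot index preference
    iteration = 0

    def pick(cand):                                # maximal s value, ties by minimal index
        return max(cand, key=lambda i: (s[i], -i))

    while True:
        nonbasis = [j for j in range(n) if j not in basis]
        Binv = np.linalg.inv(A[:, basis])
        b_bar = Binv @ b                           # transformed right hand side
        c_bar = c - A.T @ (Binv.T @ c[basis])      # reduced costs
        prim = [basis[k] for k in range(m) if b_bar[k] < -tol]
        dual = [j for j in nonbasis if c_bar[j] < -tol]
        if not prim and not dual:                  # I^- is empty: optimal basis
            x = np.zeros(n)
            x[basis] = b_bar
            return "optimal", x
        r = pick(prim + dual)
        if r in prim:                              # driving row: p = r
            row = Binv[basis.index(r), :] @ A      # row of the long tableau B^{-1}A
            cand = [j for j in nonbasis if row[j] < -tol]
            if not cand:                           # t^(p) >= 0: primal infeasible
                return "primal infeasible", row
            p, q = r, pick(cand)
        else:                                      # driving column: q = r
            col = Binv @ A[:, r]                   # column t_q = B^{-1} a_q
            cand = [basis[k] for k in range(m) if col[k] > tol]
            if not cand:                           # t_q <= 0: dual infeasible
                return "dual infeasible", col
            p, q = pick(cand), r
        basis[basis.index(p)] = q                  # pivot on the (p, q) element
        iteration += 1
        s[p] = s[q] = iteration                    # LIFO: only the moved indices change

# Hypothetical example: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 3,  x1 + x4 = 2,  x >= 0
status, x = criss_cross([[1, 1, 1, 0], [1, 0, 0, 1]], [3, 2], [-1, -2, 0, 0], basis=[2, 3])
print(status, x)

On this instance the sketch stops with the optimal solution x = (0, 3, 0, 2); for an infeasible instance it would instead return the certifying row or column.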

We are ready to prove the main result of the paper. The structure of the proof follows that of [9, 10, 15, 20].

Theorem 3 Let the problem (P-LP) be given. The criss-cross algorithm is finite when an s-monotone index selection rule is applied.

Proof. The CCA terminates with one of its terminal tableaus: the problem is optimal, primal infeasible or dual infeasible. Let us assume to the contrary that the criss-cross algorithm is not finite with an s-monotone index selection rule. As the number of possible different bases is finite, the algorithm must cycle, in the sense that it must visit the same basis multiple (infinitely many) times.

Let us consider a minimal cycling example, in which all variables move; this is possible because the selection according to the s vector relies only on the values of the variables in s, which are updated only for those variables that move in or out of the basis. This assumption is not restrictive: the part of each cycling example in which the variables move infinitely many times is such an example.

According to the definition of s-monotone rules, there exists an iteration where the index with the minimal s value is unique. Let this index be l. We may assume that the variable x_l is outside the basis (if not, we can consider the dual, which is a symmetric case for the criss-cross algorithm). Let the basis in which x_l enters the basis be B′, and the basis in which it leaves again be B′′. We call a criss-cross iteration primal if the negative valued index is selected from the objective row, and dual if the index with a negative value is selected from the right hand side vector. The following four cases are possible:

(a) primal-primal: xl enters the basis in a primal iteration, and then also leaves in a primal iteration.

(b) primal-dual: xl enters the basis in a primal iteration, and leaves in a dual iteration.

(c) dual-primal: xl enters the basis in a dual iteration, and leaves in a primal iteration.

(d) dual-dual: x_l enters the basis in a dual iteration, and also leaves in a dual iteration.

According to the definition of s-monotone index selection rules, the s value of those variables that move between B′ and B′′ is larger than that of x_l. Let the index set of those variables that have not moved between B′ and B′′ be denoted by K, partitioned into K_B and K_N based on whether k ∈ K is in the basis or not, and let the index set of those variables that have moved at least once be L (also partitioned into L_B and L_N), with the index l not included in either. The corresponding cases, sometimes referred to as almost terminal tableaus [14], are shown in Figure 1, describing the following situations:

(1.) xl enters the basis in a primal iteration.

(2.) xl enters the basis in a dual iteration.

(3.) xl leaves the basis in a primal iteration.

(4.) x_l leaves the basis in a dual iteration.

[Figure 1 appears here in the original: four short pivot tableaus showing the sign structure for the cases (1.) x_l entering at B′ in a primal iteration, (2.) x_l entering at B′ in a dual iteration, (3.) x_l leaving at B′′ in a primal iteration, and (4.) x_l leaving at B′′ in a dual iteration.]

Figure 1: The possible short pivot tableaus specific to the s-monotone rule during the proof.

We will show that none of the tableau combinations described above is possible, by showing that for each case it is possible to select a combination of t vectors that would violate the orthogonality theorem, and therefore that the algorithm cannot cycle. The presentation of the relevant t vectors follows the same structured decomposition according to the index sets of interest:

t = ( K_B | K_N | L_B \ {l} | L_N \ {l} | l | b | c )

that is, every t vector below is written with its coordinates grouped according to this decomposition; ⊕ denotes a non-negative entry, ⊖ a non-positive entry, + a positive entry, − a negative entry and ∗ an arbitrary entry. Here s denotes the index of the driving row of the dual iteration at B′ in which x_l enters the basis (case (2.)), r denotes the index of the entering column of the primal iteration at B′′ in which x_l leaves the basis (case (3.)), and ζ′ and ζ′′ stand for the value −c_B^T B^{-1} b appearing in (5) and (6) at B′ and B′′, respectively; the exact values of ζ′ and ζ′′ play no role in the argument.

From tableau (1.), at B′:

t′^{(c)} = ( 0 … 0 | ⊕ … ⊕ | 0 … 0 | ⊕ … ⊕ | − | ζ′ | 1 )

t′_b = ( ⊕ … ⊕ | 0 … 0 | ⊕ … ⊕ | 0 … 0 | 0 | −1 | ζ′ )

From tableau (2.), at B′:

t′^{(s)} = ( 0 … 0 | ⊕ … ⊕ | 0 … 1 … 0 | ⊕ … ⊕ | − | − | 0 ), where the single 1 stands in the position of s ∈ L_B \ {l}.

From tableau (3.), at B′′:

t′′_r = ( ∗ … ∗ | 0 … 0 | ⊖ … ⊖ | 0 … −1 … 0 | + | 0 | − ), where the −1 stands in the position of r ∈ L_N \ {l}.

From tableau (4.), at B′′:

t′′_b = ( ∗ … ∗ | 0 … 0 | ⊕ … ⊕ | 0 … 0 | − | −1 | ζ′′ )

t′′^{(c)} = ( 0 … 0 | ∗ … ∗ | 0 … 0 | ⊕ … ⊕ | 0 | ζ′′ | 1 )

We are now ready to consider the combinations of the ways x_l may enter the basis at B′ and leave it at B′′.

Case (a), primal-primal: x_l enters the basis in a primal iteration, and then also leaves in a primal iteration. It is easy to see that, in all cases that follow, the contribution of the indices from K is zero.

Consider the vectors t′^{(c)} and t′′_r. As t′_{ci} ≥ 0 and t′′_{ir} ≤ 0 for all i ∈ I \ (K_B ∪ {l, b, c}), and t′_{ci} = 0 for all i ∈ K_B, and t′_{cl} < 0 with t′′_{lr} > 0 and t′′_{br} = 0, and t′_{cc} = 1 with t′′_{cr} < 0, we get that

\[
0 = (t''_r)^T t'^{(c)}
  = \sum_{i \in \mathcal{I} \setminus (\mathcal{K}_B \cup \{l,b,c\})} t'_{ci} t''_{ir}
  + \sum_{i \in \mathcal{K}_B} t'_{ci} t''_{ir}
  + t'_{cl} t''_{lr} + t'_{cb} t''_{br} + t'_{cc} t''_{cr}
  \le t'_{cl} t''_{lr} + t'_{cc} t''_{cr} < 0,
\]

which contradicts the orthogonality theorem.

Case (b), primal-dual: x_l enters the basis in a primal iteration, and leaves in a dual iteration.

Consider the vectors t′^{(c)} and t′′_b, and also t′′^{(c)} and t′_b. As t′_{ci} ≥ 0 and t′′_{ib} ≥ 0 for all i ∈ I \ (K_B ∪ {l, b, c}), and t′_{ci} = 0 for all i ∈ K_B, and t′_{cl} < 0 with t′′_{lb} < 0, and t′_{cb} = ζ′ with t′′_{bb} = −1, and t′_{cc} = 1 with t′′_{cb} = ζ′′, we get

\[
0 = (t''_b)^T t'^{(c)}
  = \sum_{i \in \mathcal{I} \setminus (\mathcal{K}_B \cup \{l,b,c\})} t'_{ci} t''_{ib}
  + \sum_{i \in \mathcal{K}_B} t'_{ci} t''_{ib}
  + t'_{cl} t''_{lb} + t'_{cb} t''_{bb} + t'_{cc} t''_{cb}
  \ge t'_{cl} t''_{lb} + \zeta'' - \zeta'.
\]

Further, as t′_{ib} ≥ 0 and t′′_{ci} ≥ 0 for all i ∈ I \ (K_N ∪ {l, b, c}), and t′_{ib} = 0 for all i ∈ K_N, and t′_{lb} = 0, and t′_{bb} = −1 with t′′_{cb} = ζ′′, and t′_{cb} = ζ′ with t′′_{cc} = 1, we get

\[
0 = (t'_b)^T t''^{(c)}
  = \sum_{i \in \mathcal{I} \setminus (\mathcal{K}_N \cup \{l,b,c\})} t''_{ci} t'_{ib}
  + \sum_{i \in \mathcal{K}_N} t''_{ci} t'_{ib}
  + t''_{cl} t'_{lb} + t''_{cb} t'_{bb} + t''_{cc} t'_{cb}
  = \zeta' - \zeta''.
\]

As the two scalar products must be zero according to the orthogonality theorem, their sum must also equal zero. However,

\[
0 = (t''_b)^T t'^{(c)} + (t'_b)^T t''^{(c)}
  \ge t'_{cl} t''_{lb} + \zeta'' - \zeta' + \zeta' - \zeta''
  = t'_{cl} t''_{lb} > 0,
\]

as t′′_{lb} < 0 and t′_{cl} < 0, contradicting the orthogonality theorem.

Case (c), dual-primal: x_l enters the basis in a dual iteration, and leaves in a primal iteration.

Consider the vectors t′^{(s)} and t′′_r. Using t′_{si} ≥ 0 and t′′_{ir} ≤ 0 for all i ∈ I \ (K_B ∪ {l, b, c}), and t′_{si} = 0 for all i ∈ K_B, and that t′_{sc} = 0 and t′′_{br} = 0, and also t′_{sl} < 0 and t′′_{lr} > 0, we get that

\[
0 = (t''_r)^T t'^{(s)}
  = \sum_{i \in \mathcal{I} \setminus (\mathcal{K}_B \cup \{l,b,c\})} t'_{si} t''_{ir}
  + \sum_{i \in \mathcal{K}_B} t'_{si} t''_{ir}
  + t'_{sl} t''_{lr} + t'_{sc} t''_{cr} + t'_{sb} t''_{br}
  \le t'_{sl} t''_{lr} < 0,
\]

contradicting the orthogonality theorem.

Case (d), dual-dual: x_l enters the basis in a dual iteration, and also leaves in a dual iteration.

Consider the vectors t′^{(s)} and t′′_b. As t′_{si} ≥ 0 and t′′_{ib} ≥ 0 for all i ∈ I \ (K_B ∪ {l, b, c}), and t′_{si} = 0 for all i ∈ K_B, and that t′_{sc} = 0, and also that t′_{sl} < 0, t′′_{lb} < 0, t′_{sb} < 0 and t′′_{bb} = −1, we get that

\[
0 = (t''_b)^T t'^{(s)}
  = \sum_{i \in \mathcal{I} \setminus (\mathcal{K}_B \cup \{l,b,c\})} t'_{si} t''_{ib}
  + \sum_{i \in \mathcal{K}_B} t'_{si} t''_{ib}
  + t'_{sl} t''_{lb} + t'_{sc} t''_{cb} + t'_{sb} t''_{bb}
  \ge t'_{sl} t''_{lb} + t'_{sb} t''_{bb} > 0,
\]

contradicting the orthogonality theorem. Since all cases lead to a contradiction, we have proved that the criss-cross algorithm is finite with s-monotone index selection rules. □

4 Summary

We have shown that the traditional criss-cross algorithm for the linear programming problem is finite when s-monotone index selection rules are applied. Such rules include the Most-Often-Selected-Variable and Last-In-First-Out rules, which offer significant flexibility in selecting the pivot position.

5 Further research

It would be interesting to numerically verify the practical value of the flexibility offered by some s-monotone rules, similarly to the case of the simplex method [10, 17]. We expect that the results shown in this paper can be generalised to oriented matroids, generalising the result of [4, 26]. We also believe that the criss-cross method for hyperbolic programming [18] is finite when s-monotone index selection rules are applied.

References

[1] A. A. Akkeleş, L. Balogh, and T. Illés. A véges criss-cross módszer új variánsai biszimmetrikus lineáris komplementaritási feladatra. Alkalmazott Matematikai Lapok, 21:1–25, 2003.

[2] A. A. Akkeleş, L. Balogh, and T. Illés. New variants of the criss-cross method for linearly constrained convex quadratic programming. European Journal of Operational Research, 157(1):74–86, 2004.

[3] K. M. Anstreicher and T. Terlaky. A monotonic build-up simplex algorithm for linear programming. Operations Research, 42(3):556–561, 1994.

[4] L. Balogh, F. Bilen, and T. Illés. A simple proof of the generalized Farkas lemma for oriented matroids. Pure Mathematics and Applications, 13:423–431, 2002.

[5] F. Bilen, Zs. Csizmadia, and T. Illés. Anstreicher-Terlaky type monotonic simplex algorithms for linear feasibility problems. Optimization Methods and Software, 22(4):679–695, 2007.

[6] R. G. Bland. New finite pivoting rules for the simplex method. Mathematics of Operations Research, 2:103–107, 1977.

[7] V. Chvátal. Linear programming. A Series of Books in the Mathematical Sciences. W. H. Freeman and Company, New York, 1983.

[8] A. Csizmadia, Zs. Csizmadia, and T. Illés. Finiteness of the quadratic primal simplex method when s-monotone index selection rules are applied. Central European Journal of Operations Research, 26, 2018.

[9] Zs. Csizmadia. New pivot based methods in linear optimization, and an application in petroleum industry. PhD thesis, Eötvös Loránd University of Sciences, 2007. Available at www.cs.elte.hu/∼csisza.

[10] Zs. Csizmadia, T. Illés, and A. Nagy. The s-monotone index selection rules for pivot algorithms of linear programming. European Journal of Operational Research, 221(3):491–500, 2012.

[11] G. B. Dantzig. Programming in a linear structure. Comptroller, USAF, Washington D.C., 1948.

[12] G. B. Dantzig. Linear programming and extensions. Princeton University Press, Princeton, N.J., 1963.

[13] K. Fukuda and T. Terlaky. Criss-cross methods: a fresh view on pivot algorithms. Mathematical Programming, 79(1-3, Ser. B):369–395, 1997. Lectures on mathematical programming (ismp97) (Lausanne, 1997).

[14] K. Fukuda and T. Terlaky. On the existence of a short admissible pivot sequence for feasibility and linear optimization problems. Pure Mathematics and Applications, 10(4):431–447, 1999.

[15] T. Illés. Lineáris optimalizálás elmélete és pivot algoritmusai. Technical report, Operációkutatási Tanszék, Eötvös Loránd Tudományegyetem, 2013. Operations Research Report, 2013-03.

[16] T. Illés and K. Mészáros. A new and constructive proof of two basic results of linear programming. Yugoslav Journal of Operations Research, 11(1):15–30, 2001.

[17] T. Illés and A. Nagy. Computational aspects of simplex and MBU-simplex algorithms using different anti-cycling pivot rules. Optimization, 63(1):49–66, 2014.

[18] T. Illés, Á. Szirmai, and T. Terlaky. The finite criss-cross method for hyperbolic programming. European Journal of Operational Research, 114(1):198–214, 1999.

[19] T. Illés and T. Terlaky. Pivot versus interior point methods: pros and cons. European Journal of Operational Research, 140(2):170–190, 2002.

[20] E. Klafszky and T. Terlaky. The role of pivoting in proving some funda- mental theorems of linear algebra. Linear Algebra and its Applications, 151:97–118, 1991.

[21] E. Klafszky and T. Terlaky. Some generalizations of the criss-cross method for quadratic programming. Optimization, 24(1-2):127–139, 1992.

[22] K. G. Murty. Linear and combinatorial programming. Robert E. Krieger Publishing Co. Inc., Melbourne, FL, 1985.

[23] A. Nagy. On the theory and applications of flexible anti-cycling index selection rules for linear optimization problems. PhD thesis, Eötvös Loránd University of Sciences, 2015.

[24] T. Terlaky. Egy új, véges criss-cross módszer lineáris programozási feladatok megoldására. Alkalmazott Matematikai Lapok, 10(3-4):289–296, 1983.

[25] T. Terlaky. A convergent criss-cross method. Optimization, 16(5):683–690, 1985.

[26] T. Terlaky. A finite criss-cross method for oriented matroids. Journal of Combinatorial Theory B, 42(3):319–327, 1987.

[27] T. Terlaky and S. Zhang. Pivot rules for linear programming: a survey on recent theoretical developments. Annals of Operations Research, 46/47(1-4):203–233, 1993. Degeneracy in optimization problems.

[28] S. Zhang. A new variant of criss-cross pivot algorithm for linear programming. European Journal of Operational Research, 116(3):607–614, 1997.

[29] S. Zionts. The criss-cross method for solving linear programming problems. Management Science, 15(7):426–445, 1969.