
The existence of a strongly polynomial time simplex algorithm

Zi-zong Yan, Xiang-jun Li and Jinhai Guo †

Abstract. It is well known that whether there is a polynomial time simplex algorithm for linear programming (LP) is the most challenging open problem in optimization and discrete geometry. This paper gives an affirmative answer to this open question by use of the parametric analysis technique that we recently proposed. We show that there is a simplex algorithm whose number of pivoting steps does not exceed the number of variables of an LP problem.

Keywords: Linear programming, parametric linear programming, complementary slackness property, projection polyhedron, simplex algorithm, pivot rule, polynomial complexity

AMS subject classifications. 90C05, 90C49, 90C60, 68Q25, 68W40

1 Introduction

arXiv:2006.11466v8 [math.OC] 17 Sep 2021

Linear programming (LP) is the problem of minimizing a linear objective function over a polyhedron P ⊂ R^n given by a system of m equalities and n nonnegative variables. As one of the fundamental problems in optimization, it has been a very successful undertaking in the field over the last six decades. Part of this success relies on a very rich interplay between geometric and algebraic properties of the faces of such a polyhedron and the corresponding combinatorial structures of the problem it encodes. The simplex algorithm, invented by Dantzig [8], solves LP problems by using pivot rules and produces an optimal solution. These pivot rules iteratively improve the current feasible solution by moving from one vertex of the polyhedron to an adjacent one, until no more improvement is possible and optimality is proven. All edges that connect these adjacent vertices of the polyhedron form a pivot path. The complexity of the simplex algorithm is then determined by the length of this path, that is, the number of pivot steps.

∗Supported by the National Natural Science Foundation of China (11871118, 11771058). †School of Information and Mathematics, Yangtze University, Jingzhou, Hubei, China ([email protected], [email protected] and [email protected]).

The simplex algorithm belongs to the '10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century', see [11]. The survey by Terlaky and Zhang [60] reviews the various pivot rules of the simplex algorithm and its variants, in which they categorize pivot rules into three types. The first alternative, called a combinatorial pivot rule, takes care of the signs of the variables and of either costs or profits. The algorithms of this type known to the authors are those of Bland [4], Folkman and Lawrence [13], Fukuda [14], etc. The second alternative, called a parametric pivot rule, is closely related to parametric programming, more precisely to the shadow vertex algorithm [7, 21, 44] and to Dantzig's self-dual parametric simplex algorithm [9]. These algorithms can be interpreted as Lemke's algorithm [34] for the corresponding linear complementarity problem [36]. Algorithms of the third type, which have close connections to certain interior point methods (see Karmarkar [31]), allow the iterative points to go inside the polyhedron. They are believed to be able to use more global information and therefore to avoid the myopia of the simplex method; for example, see Roos [48], Todd [61] and Tamura et al. [57].

It is a theoretically interesting and practically relevant question to recognize how a pivot rule [55] quickly leads to the optimal vertex. Connected to this are the diameter problem and the algorithm problem, for example, see [17]. The diameter problem, closely related to the Hirsch conjecture and its variants, is whether there is a short path to the optimal vertices. Unfortunately, Santos' recent counter-example [51] disproves the Hirsch conjecture. For more information on the subject see the papers [6, 23, 40, 41] and the comments [12, 27, 35, 52, 59]. The algorithm problem is whether there is a strongly polynomial time algorithm for LP.
Up to now, no pivoting rule is known to yield such an algorithm, i.e., one whose number of steps depends polynomially only on the number of constraints and the number of variables of an LP problem. In contrast to the excellent practical performance of the simplex algorithm, the worst-case behavior of each analyzed pivot rule is known to grow exponentially; for example, see [2, 21, 22, 33, 42, 43, 44, 46, 49]. Another exponential example was presented by Fukuda and Namiki [15] for linear complementarity problems.

The simplex algorithm performs sufficiently well in practice, but theoretically the complexity of a pivot rule is not clear. To explain the large gap between practical experience and the disappointing worst case, the tools of average-case analysis and smoothed analysis have been devised, and to conquer the worst-case bounds, research has turned to randomized pivot rules. For the average-case analysis, a polynomial upper bound was achieved [6]. In contrast, so far, smoothed analysis has only been done for the shadow-vertex pivot rule introduced by Gass and Saaty [18]. Under reasonable probabilistic assumptions its expected number of pivot steps is polynomial [56]. It is worth noting that none of the existing results exclude the possibility of (randomized) pivot rules being the desired (expected) polynomial-time pivot rules. For more information on randomized pivot rules see the papers [16, 17, 29, 38, 44, 63], etc. Two important advances have been made in polynomial time solvability for the

LP problem since the 1970s: the ellipsoid method developed by Khachian [32] and the interior-point method initiated by Karmarkar [31]. However, the run-time complexity of these two algorithms is only qualified as weakly polynomial. Actually, constructing a strongly polynomial time pivot rule is the most challenging open problem in optimization and discrete geometry [23, 40, 54, 62].

The main goal of this paper is to give an existence result for the above open problem by use of the parametric analysis technique that we recently proposed in [64]. We show that there exists a simplex algorithm whose number of pivoting steps does not exceed the number of variables of an LP problem.

The organization of the rest of the paper is as follows. In Section 2, we recall the strong duality theorem of LP. In Section 3, we review the main results of the paper [64]: we define the set-valued mappings between two projection polyhedrons for parametric LP problems and establish the relationship between perturbing the objective function data (OFD) and perturbing the right-hand side (RHS) of an LP problem without using duality. We then investigate the projected behavior of the set-valued mappings in Section 4. As an application, we present the existence of a strongly polynomial time pivot rule for an LP problem in the final section.

2 Preliminaries

Consider the following linear programming (LP) problem pair in standard form:

min ⟨c, x⟩
s.t. Ax = b, x ≥ 0, (1)

and

max ⟨b, w⟩
s.t. A^T w + y = c, y ≥ 0, (2)

where c ∈ R^n, b ∈ R^m and A ∈ R^{m×n} are given. As usual, the first LP problem (1) is called the primal problem and the second problem (2) is called the dual problem, and the vectors c and b are called the cost and the profit vectors, respectively. By P = {x ∈ R^n | Ax = b, x ≥ 0} and D = {(w, y) ∈ R^m × R^n | A^T w + y = c, y ≥ 0}, we denote the feasible sets of the primal and dual problems, respectively. If the problems (1) and (2) are feasible and if b = Ad, then

⟨d, c − y⟩ = ⟨d, A^T w⟩ = ⟨Ad, w⟩ = ⟨b, w⟩.

Therefore, the dual problem (2) can be equivalently expressed in the form

max ⟨d, c − y⟩
s.t. A^T w + y = c, y ≥ 0. (3)

In the following statements, we always use the problem (3) instead of the dual problem (2). The strong duality theorem provides a sufficient and necessary condition for optimality; for example, see [8, 20, 43, 53].

Theorem 2.1. (Strong duality) If the primal-dual problem pair (1) and (3) are feasible, then the two problems are solvable and share the same optimal objective value.

If the primal and dual programs have optimal solutions and the duality gap is zero, then the Karush-Kuhn-Tucker (KKT) conditions for the primal-dual LP problem pair (1) and (3) are

Ax = b, x ≥ 0, (4a)
A^T w + y = c, y ≥ 0, (4b)
⟨x, y⟩ = 0, (4c)

in which the last equality is called the complementary slackness property. Conversely, if (x∗, y∗) ∈ R^n × R^n is a pair of solutions of the system (4a)-(4c), then (x∗, y∗) is a pair of optimal solutions of the primal-dual problem pair (1) and (3).

The feasible region of an LP problem is a polyhedron. In particular, a polyhedron is called a polytope if it is bounded. A nontrivial face F of a polyhedron is the intersection of the polyhedron with a supporting hyperplane, in which F itself is a polyhedron of some lower dimension. If the dimension of F is k, we call F a k-face of the polyhedron. The empty set and the polyhedron itself are regarded as trivial faces. The 0-faces of a polyhedron are called vertices and the 1-faces are called edges. Two different vertices x1 and x2 are neighbors if x1 and x2 are the endpoints of an edge of the polyhedron. For material on convex polyhedra and for many references see Ziegler's book [65].
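Theorem 2.1 and the optimality conditions (4a)-(4c) can be checked numerically on a toy instance. The following sketch (with illustrative data A, b, c that are not from the paper) solves the primal-dual pair with SciPy's linprog and verifies the zero duality gap and complementary slackness:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the paper): m = 2 equalities, n = 3 variables.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 2.0, 3.0])

# Primal problem (1): min <c, x> s.t. Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# Dual problem (2): max <b, w> s.t. A^T w + y = c, y >= 0,
# i.e. A^T w <= c with w free; linprog minimizes, so negate the objective.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")

x_star = primal.x                 # primal optimal solution
w_star = dual.x                   # dual optimal solution
y_star = c - A.T @ w_star         # dual slack vector

# Strong duality (Theorem 2.1): identical optimal objective values.
assert abs(primal.fun - (-dual.fun)) < 1e-8
# Complementary slackness (4c): <x*, y*> = 0.
assert abs(x_star @ y_star) < 1e-8
```

The same pattern extends to any feasible pair (1)-(2); the assertions fail exactly when one of the two problems is infeasible or unbounded.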

3 Parametric KKT conditions

Describing a pivot path of the simplex algorithm is a difficult task. The key ingredient in our construction is to investigate the relationship between two projection polyhedrons associated with a pair of almost primal-dual problems. Once the relationship is available, the OFD and the RHS perturbations of the primal and the dual problems are closely linked. The concepts and results in this section all come from the paper [64].

Consider the following two parametric LP problems with two independent parametric vectors u, v ∈ R^r:

min ⟨c + M^T u, x⟩
s.t. Ax = b, x ≥ 0, (5)

and

min ⟨d + M^T v, y⟩
s.t. By = a, y ≥ 0, (6)

where A ∈ R^{m×n}, M ∈ R^{r×n}, B ∈ R^{l×n}, c, d ∈ R^n, b ∈ R^m and a ∈ R^l. The following technical assumption is made throughout this paper.

Assumption 1. The three range spaces R(A^T), R(B^T), R(M^T) are orthogonal to each other, and their direct sum is equal to R^n.

Assumption 1 ensures that the perturbation of the objective function is independent of the constraints in problems (5) and (6). A detailed explanation of the assumption can be found in the paper [64]. For brevity, Assumption 1 is sometimes replaced by the following assumption:

Q = [A^T, B^T, M^T] is an orthogonal matrix, (7)

i.e., Q^T Q = I_q, a q × q unit matrix. This assumption applies to all of our results. For the parametric LP problems (5) and (6), their dual problems can be expressed as follows:

max ⟨d, c + M^T u − y⟩
s.t. −A^T w2 + y = c + M^T u, y ≥ 0, (8)

and

max ⟨c, d + M^T v − x⟩
s.t. −B^T w1 + x = d + M^T v, x ≥ 0, (9)

respectively. If the objective functions in problems (8) and (9) are replaced by −⟨d, y⟩ and −⟨c, x⟩, respectively, then their optimal solutions remain unchanged. So the perturbations in (8) and (9) occur in the RHS and not in the OFD. In the following statements, we always assume that b = Ad and a = Bc. In particular, if (d, c) ≥ 0, then d and c are feasible solutions of the problems (5) and (6), respectively. If the assumption (7) holds, then the problem (9) is equivalent to the problem

min ⟨c, x⟩
s.t. Ax = b, Mx = Md + v, x ≥ 0. (10)

That is, these two problems have the same optimal solutions. Indeed, left multiplying both sides of the equality constraint of (9) by A and M, one has Ax = b and Mx = Md + v. On the other hand, the system of linear equations Ax = b and Mx = Md + v has a particular solution d + M^T v, and the corresponding homogeneous system has the general solution B^T w1. In other words, this system has the solutions x = d + M^T v + B^T w1, which leads to the equality constraint of (9). For the same reason, the condition (4b) can be replaced by
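The reduction from (9) to (10) rests entirely on the orthogonal decomposition (7). The following numerical sketch (with randomly generated data, purely for illustration) builds such a Q, splits its rows into A, B, M, and confirms that any x = d + M^T v + B^T w1 automatically satisfies the constraints of (10):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 6, 2, 3                  # illustrative sizes; r = n - m - l = 1
r = n - m - l

# A random orthogonal Q in R^{n x n}; its rows, split into A, B, M,
# realize Assumption 1: R(A^T), R(B^T), R(M^T) are mutually orthogonal
# and their direct sum is R^n.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A, B, M = Q[:m], Q[m:m + l], Q[m + l:]

assert np.allclose(A @ B.T, 0) and np.allclose(A @ M.T, 0) and np.allclose(B @ M.T, 0)

# For x = d + M^T v + B^T w1 (the general solution appearing in (9)),
# orthonormality of the rows gives Ax = Ad = b and Mx = Md + v.
d = rng.standard_normal(n)
v = rng.standard_normal(r)
w1 = rng.standard_normal(l)
x = d + M.T @ v + B.T @ w1

assert np.allclose(A @ x, A @ d)          # Ax = b with b = Ad
assert np.allclose(M @ x, M @ d + v)      # Mx = Md + v
```

Any other choice of orthonormal rows for A, B, M would do; the point is only that the cross products A M^T, B M^T, A B^T vanish.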

By = a, My = Md, y ≥ 0. (11)

Figure 1: Perturbations

Analogously, the problem (8) is equivalent to the problem

min ⟨d, y⟩
s.t. By = a, My = Mc + u, y ≥ 0. (12)

The LP problems and their interrelationships involved in this paper are shown in Figure 1. Let us define two projection polyhedrons associated with the feasible regions of the problems (12) and (10) as follows:

ΘP = {v ∈ R^r | Ax = b, Mx = Md + v, x ≥ 0},
ΘD = {u ∈ R^r | By = a, My = Mc + u, y ≥ 0},

in which ΘP is called a primal projection polyhedron and ΘD is called a dual projection polyhedron. The following result shows that the sets ΘP and ΘD are the projections of the polyhedrons {(v, w1) ∈ R^{r+l} | d + M^T v + B^T w1 ≥ 0} and {(u, w2) ∈ R^{r+m} | c + M^T u + A^T w2 ≥ 0}, respectively.

Corollary 3.1. Suppose that (d, c) ≥ 0.
(1) There is a vector w1 ∈ R^l such that the primal slack vector x = d + M^T v + B^T w1

6 is feasible for the problem (10) if and only if v ∈ ΘP ; (2) There is a vector w2 ∈ Rm such that the dual slack vector y = c + M T u + AT w2 is feasible for the problem (12) if and only if u ∈ ΘD. Proof. These results follow from the decomposition (7).  The problems (5) and (10) represent two different perturbations of the problem (1). The perturbations in the two problems occur in the OFD and in the RHS, respectively. To characterize the relationship, let us define one set-valued mapping as follow Φ(u) = {M(x∗(u) − d)|x∗(u) is an optimal solution of (5) if it exists} for any u ∈ ΘD. Analogously, we may define another set-valued mapping as follow Ψ(v) = {M(y∗(v) − c)|y∗(v) is an optimal solution of (6) if it exists} for any v ∈ ΘP . Here the value of the mapping Φ(u) (or Ψ(v)) could be a set if the optimal solution is not unique corresponding to the parameter u (or v). The similar idea was used by Borrelli et al. [5]. The set-valued mappings Φ closely links two different kinds of perturbations of (1). Geometrical, the OFD perturbation means that the hyperplane H = {x ∈ Rn|hc + M T u, xi = hc + M T u, x∗(u)i} passing through x∗(u) supports the feasible region P = {x ∈ Rn|Ax = b, x ≥ 0}, and the RHS perturbation means that the affine set C = {x ∈ Rn|Mx = Md + v} cuts the region P , in which the affine set C passes through x∗(u) if v ∈ Φ(u). A similar geometric explanation applies the set-valued mappings Ψ. Theorem 3.2. Suppose that (d, c) ≥ 0. (1) For every u ∈ ΘD, Φ(u) is well defined such that Φ(u) ⊂ ΘP ; (2) For every v ∈ ΘP , Ψ(v) is well defined such that Ψ(v) ⊂ ΘD. Proof. We only prove the first result. By Theorem 2.1, for every u ∈ ΘD, the problem (5) is solvable that Φ(u) is well defined. If x∗(u) is an optimal solution of the problem (5) and if v ∈ Φ(u), then x∗(u) is also an optimal solution of the problem (10). Then by Corollary 3.1, one has Φ(u) ⊂ ΘP .  
The set-valued mappings defined above play an important role in the development of strong duality theory and in the sensitivity analysis of conic linear optimization. As noted in the paper [64], either the problem pair (10) and (12) or the problem pair (5) and (6) is almost dual under the set-valued mappings. The corresponding parametric KKT conditions are provided below.

Theorem 3.3. Let u ∈ ΘD and v ∈ ΘP be arbitrary. Then there is a pair of vectors (x̄, ȳ) ∈ R^n × R^n satisfying the parametric KKT conditions

Ax̄ = Ad, Mx̄ = Md + v, x̄ ≥ 0, (13a)
Bȳ = Bc, Mȳ = Mc + u, ȳ ≥ 0, (13b)
⟨x̄, ȳ⟩ = 0, (13c)

if and only if both v ∈ Φ(u) and u ∈ Ψ(v) hold. Furthermore, (x̄, ȳ) is a pair of optimal solutions of the almost primal-dual program pair (10) and (12), or (5) and (6).

Proof. We first show that if the parametric KKT conditions (13a)-(13c) hold, then both v ∈ Φ(u) and u ∈ Ψ(v) hold. Clearly, the above parametric KKT conditions imply that

Mȳ = Mc + u (14)

and

Ax̄ = b, Mx̄ = Md + v, x̄ ≥ 0, Bȳ = a, ȳ ≥ 0, ⟨x̄, ȳ⟩ = 0.

From the KKT conditions (4a)-(4c), the second system implies that (x̄, ȳ) is a pair of optimal solutions of the primal-dual program pair (10) and (6). Finally, the equality (14) implies that u = M(ȳ − c) ∈ Ψ(v). Analogously, one has v ∈ Φ(u).

The converse is easy to prove. If both v ∈ Φ(u) and u ∈ Ψ(v) hold, then Φ(u) and Ψ(v) are well defined, which means the problems (5) and (6) have an optimal solution pair (x∗(u), y∗(v)). Moreover, (x∗(u), y∗(v)) is also a pair of optimal solutions of the problems (10) and (12). Then, applying the KKT conditions (4a)-(4c) to the primal-dual program pair (5) and (12), or (10) and (6), (x̄, ȳ) = (x∗(u), y∗(v)) satisfies the parametric KKT conditions (13a)-(13c). The final claim follows from the KKT conditions (4a)-(4c) trivially. 

Theorem 3.4. Suppose that (d, c) ≥ 0. Then

⋃_{u∈ΘD} Φ(u) = ΘP , (15a)
⋃_{v∈ΘP} Ψ(v) = ΘD. (15b)

Proof. By Theorem 3.2, we have

⋃_{u∈ΘD} Φ(u) ⊂ ΘP . (16)

Next we prove that the reverse inclusion holds. Note that the feasible region of the problem (6) does not depend on the given parametric vector v. Let us number all feasible bases of the problem (6) and let I denote the index set

I = {i | y^i is a feasible basis of the dual problem (6)}.

Then for every i ∈ I, y^i is a dual feasible basis and x^i(v) is the corresponding dual cost (primal basis; perhaps x^i(v) is primal infeasible for some v ∈ R^r) such that the complementarity condition holds, i.e., ⟨x^i(v), y^i⟩ = 0. Define the set

Θi = {v ∈ R^r | x^i(v) ≥ 0}.

Clearly, one has Θi ⊂ ΘP and ⋃_{i∈I} Θi = ΘP . If Θi is not empty, then for every v ∈ Θi, x^i(v) is feasible and is also optimal for the problem (10). Therefore, the set Fi = {x^i(v) | v ∈ Θi} is a nontrivial face of the feasible region of the primal problem (10); and y^i is also optimal for the problem (12) corresponding to the parameter u = u^i = My^i − Mc. Furthermore, by Theorem 3.3, one has Φ(u^i) = Θi, so that the reverse of the inclusion relation (16) holds. 

Corollary 3.5. Suppose that (d, c) ≥ 0, and let u ∈ ΘD. Then v ∈ Φ(u) if and only if u ∈ Ψ(v).

Proof. Assume that v ∈ Φ(u). By Theorem 3.4, one has v ∈ ΘP . Then the problems (5) and (12) are solvable, and the problem (10) is also solvable (see [64, Corollary 3.3]). Furthermore, the problem (6) is solvable and u ∈ Ψ(v). And vice versa. 

4 The projection transformation

In the study of sensitivity analysis in recent years, the invariancy region plays an important role in the development of parametric LP. Adler and Monteiro [1] first investigated the sensitivity analysis of LP problems using the optimal partition approach, in which they identify the range of parameters where the optimal partition remains invariant. Other treatments of parametric analysis for LP problems based on the optimal partition approach were given by Jansen et al. [28], Greenberg [24], Roos et al. [50], Ghaffari-Hadigheh et al. [19], Berkelaar et al. [3], Dehghan et al. [10], Hladík [26], etc.

Definition 4.1. Let V be a simply connected subset of ΘP . Then V is called an invariancy set if for all v1, v2 ∈ V, one of the following statements holds:
(1) Ψ(v1) = Ψ(v2) if V is not a singleton set;
(2) Ψ(v1) is not a singleton set if V is equal to {v1}.

In this approach one can identify the range of parameters where the optimal partition remains invariant. In this definition, the first claim means that the dual optimal objective value remains unchanged, whereas the second claim means that the primal optimal objective value remains unchanged.

Definition 4.2. Let V be an invariancy set of ΘP . Then V is called a transition face if dim(V) < r. In particular, if dim(V) = 0, then V is called a transition point; if dim(V) = 1, then V is called a transition line, and so on.

For the dual projection polyhedron ΘD, the definitions of the invariancy set and the transition face are similar. We refer to an invariancy set as a nontrivial invariancy set if it is not a transition face. In contrast, a transition face is called a trivial invariancy set.

Theorem 4.3. Any two different invariancy regions of a projection polyhedron do not intersect. Proof. This result follows immediately from Definition 4.1. 

Consider the almost primal-dual parametric LP problem pair

min ⟨c + M^T Su, x⟩
s.t. Ax = b, x ≥ 0, (17)

and

min ⟨d + M^T Sv, y⟩
s.t. By = a, y ≥ 0, (18)

where S ∈ R^{r×r} is a symmetric projection matrix, i.e., S^T = S = S^2. Such an almost parametric primal-dual LP pair denotes the projection of the parametric primal-dual LP pair (5) and (6). In particular, if S is an identity matrix, the above two problems become the problems (5) and (6), respectively.

Suppose that (d, c) ≥ 0, and let (x∗(u), y∗(v)) denote a pair of optimal solutions of the problems (5) and (6). Then (x∗(u), y∗(v)) satisfies the complementary slackness property, i.e., ⟨x∗(u), y∗(v)⟩ = 0. If u ∈ Ψ(v) and v ∈ Φ(u), then (x∗(u), y∗(v)) is a pair of optimal solutions of the problems (10) and (12). Moreover, Sv = SMx∗(u) − SMd. On the other hand, the dual of the problem (17) is as follows:

min ⟨d, y⟩
s.t. By = a, My = Mc + Su, y ≥ 0, (19)

in which the second equality constraint can be equivalently transformed into the two equality constraints SMy = SMc + Su and (I_r − S)My = (I_r − S)Mc. Therefore, (x∗(u), y∗(v)) is a pair of feasible solutions of the following primal-dual parametric LP problem pair

min ⟨c, x⟩
s.t. Ax = b, SMx = SMd + Sv, x ≥ 0, (20)

and

min ⟨d, y⟩
s.t. By = a, (I_r − S)My = (I_r − S)Mc, y ≥ 0. (21)

By the complementary slackness property, (x∗(u), y∗(v)) is also a pair of optimal solutions of the primal-dual LP pair (20) and (21). This yields the following result.

Lemma 4.4. Suppose that (d, c) ≥ 0, and let S ∈ R^{r×r} be a symmetric projection matrix. Let (x∗(u), y∗(v)) denote a pair of optimal solutions of the problems (5) and (6).
(1) For every u ∈ ΘD, if v ∈ Φ(u), then x∗(Sv) = x∗(v) is an optimal solution of (20);
(2) For every v ∈ ΘP , if u ∈ Ψ(v), then y∗(Su) = y∗(u) is an optimal solution of (21).
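Lemma 4.4 (and later Theorem 5.5) relies on a symmetric projection matrix S. A rank-1 instance can be built as S = ss^T/⟨s, s⟩ for any nonzero s ∈ R^r; the sketch below (with illustrative data, not a construction from the paper) checks the defining properties S^T = S = S^2 and the collapse of the parameter vector onto a single direction:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4
s = rng.standard_normal(r)

# Rank-1 symmetric projection onto span{s}: S = s s^T / <s, s>.
S = np.outer(s, s) / (s @ s)

assert np.allclose(S, S.T)                  # S^T = S
assert np.allclose(S @ S, S)                # S^2 = S
assert np.linalg.matrix_rank(S) == 1        # rank(S) = 1

# S maps any parameter vector v onto the line spanned by s, which is how
# the r-parameter pair (5)-(6) collapses to a single-parameter pair (17)-(18).
v = rng.standard_normal(r)
Sv = S @ v
assert np.allclose(Sv, ((s @ v) / (s @ s)) * s)
```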

Corollary 4.5. Suppose that (d, c) ≥ 0, and let S ∈ R^{r×r} be a symmetric projection matrix. Let u ∈ ΘD and v ∈ ΘP . Then Sv ∈ Φ(Su) if and only if Su ∈ Ψ(Sv).

Proof. The above discussion illustrates that Su ∈ ΘD and Sv ∈ ΘP . Then this result follows from Corollary 3.5 trivially. 

Theorem 4.6. Suppose that (d, c) ≥ 0, and let S ∈ R^{r×r} be a symmetric projection matrix.
(1) For every u ∈ ΘD, Φ(Su) = SΦ(u) = {Sv | v ∈ Φ(u)};
(2) For every v ∈ ΘP , Ψ(Sv) = SΨ(v) = {Su | u ∈ Ψ(v)}.

Proof. Let us first show that for every u ∈ ΘD,

Φ(Su) ⊃ SΦ(u). (22)

Assume that r > 1, and let x∗(Su) and x∗(Sv) denote the optimal solutions of the problems (17) and (20), respectively, where v ∈ Φ(u). From the definition of the set-valued mappings Φ(Su) and Φ(u), we have

Φ(Su) = {SM(x∗(Su) − d) | x∗(Su) is an optimal solution of the problem (17)}
= {SM(x∗(Sv) − d) | x∗(Sv) is an optimal solution of the problem (20)}
⊃ {SM(x∗(v) − d) | x∗(v) is an optimal solution of the problem (10)}
= S{M(x∗(u) − d) | x∗(u) is an optimal solution of the problem (5)}
= SΦ(u).

On the other hand, by Theorem 3.4, we have

⋃_{Su∈SΘD} Φ(Su) = SΘP = ⋃_{u∈ΘD} SΦ(u),

which yields the opposite inclusion of (22). 

Note that if the rank of S is 1, then the projection transformation turns a nontrivial invariancy set into an open invariancy interval.

5 Polynomial complexity

In this section we focus on single-parameter LP problems. For this type of parametric LP problem, a nontrivial invariancy set is an open interval. Moreover, the endpoints of an invariancy interval are transition points (see also [50]).

Theorem 5.1. Suppose that (d, c) ≥ 0, and let r = 1.
(1) If u ∈ ΘD is given, then Φ(u) = [v̲, v̄] can be identified by solving the following two auxiliary LP problems:

v̲ = min{v | ∃ (x̄, ȳ) s.t. the parametric KKT conditions (13a)-(13c) hold},
v̄ = max{v | ∃ (x̄, ȳ) s.t. the parametric KKT conditions (13a)-(13c) hold}.

(2) If v ∈ ΘP is given, then Ψ(v) = [u̲, ū] can be identified by solving the following two auxiliary LP problems:

u̲ = min{u | ∃ (x̄, ȳ) s.t. the parametric KKT conditions (13a)-(13c) hold},
ū = max{u | ∃ (x̄, ȳ) s.t. the parametric KKT conditions (13a)-(13c) hold}.

Proof. The result follows from Theorem 3.3. 

The open interval (v̲, v̄) is an invariancy interval, and the finite endpoints of the invariancy interval are transition points. Of course, either v̲ = −∞ or v̄ = +∞ is allowed. Similar remarks apply to the set-valued mapping Ψ. Two direct consequences of Theorem 5.1 are as follows.

Corollary 5.2. Suppose that (d, c) ≥ 0, and let r = 1.
(1) If Φ(u) = [v̲, v̄] is an interval, where v̲ ≠ v̄, then all optimal solutions x̄∗(v) of the problem (10) for v ∈ [v̲, v̄] form an edge of the primal feasible region P = {x ∈ R^n | Ax = b, x ≥ 0}, in which x̄∗(v̲) and x̄∗(v̄) are two adjacent vertices of P if v̲ and v̄ are finite. Moreover, the optimal solutions of the dual problems (6) and (12) remain unchanged.
(2) If Ψ(v) = [u̲, ū] is an interval, where u̲ ≠ ū, then all optimal solutions ȳ∗(u) of the problem (12) for u ∈ [u̲, ū] form an edge of the dual feasible region D = {y ∈ R^n | By = a, y ≥ 0}, in which ȳ∗(u̲) and ȳ∗(ū) are two adjacent vertices of D if u̲ and ū are finite. Moreover, the optimal solutions of the primal problems (5) and (10) remain unchanged.

Corollary 5.3. Suppose that (d, c) ≥ 0, and let r = 1.
(1) If Φ(u) = [v̲, v̄] is an interval, where v̲ ≠ v̄, then for any v ∈ (v̲, v̄), Ψ(v) = u.
(2) If Ψ(v) = [u̲, ū] is an interval, where u̲ ≠ ū, then for any u ∈ (u̲, ū), Φ(u) = v.

From Corollary 5.2, a projection interval (ΘP or ΘD) is the union of finitely many open invariancy intervals and transition points; see also Adler and Monteiro [1]. For the primal projection interval ΘP , there is a finite set of transition points v1 < v2 < . . . < vk such that

ΘP = ⋃_{i=1}^{k−1} [vi, vi+1],

in which every open interval (vi, vi+1) is an invariancy interval, and v1 = −∞ and/or vk = +∞ are allowed. Then by Corollary 5.3, for the dual projection interval ΘD, there is a finite set of transition points u1 < u2 < . . . < uk−1 such that Ψ((vi, vi+1)) = ui for i = 1, 2, · · · , k − 1. Moreover, every Ψ(vi) is a closed interval whose interior forms an open invariancy interval of ΘD if vi is finite. In particular, if v1

is finite, then Ψ(v1) = [uk−1, +∞); and if vk is finite, then Ψ(vk) = (−∞, u1]. All in all, if ΘP contains k − 1 open invariancy intervals, then ΘD contains k − 2 to k open invariancy intervals.
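The picture described above, invariancy intervals separated by transition points, can be observed numerically for r = 1 by sweeping the scalar parameter v in problem (10) and recording the support of the optimal solution: the support is constant on each invariancy interval and changes at transition points. A rough illustration on random data satisfying Assumption 1 (not the paper's construction):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, l = 6, 2, 3                        # r = n - m - l = 1: a single parameter v
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A, B, M = Q[:m], Q[m:m + l], Q[m + l:]   # orthonormal rows: Assumption 1

d = np.abs(rng.standard_normal(n))       # d >= 0, so v = 0 lies in Theta_P
c = 1.0 + np.abs(rng.standard_normal(n))
b, Md = A @ d, M @ d

# Sweep v and record the support of the optimal solution of (10).
supports = []
for v in np.linspace(-1.0, 1.0, 201):
    res = linprog(c, A_eq=np.vstack([A, M]),
                  b_eq=np.concatenate([b, Md + v]),
                  bounds=(0, None), method="highs")
    if res.status == 0:                  # v lies in Theta_P
        supports.append(tuple(np.flatnonzero(res.x > 1e-7)))

# Runs of equal supports correspond to invariancy intervals; a change of
# support between consecutive feasible v values marks a transition point.
runs = [s for i, s in enumerate(supports) if i == 0 or s != supports[i - 1]]
assert len(supports) > 0                 # Theta_P is nonempty (it contains v = 0)
```

This grid scan is only an observation device; the exact endpoints of each interval are the quantities v̲, v̄ characterized by the auxiliary LPs of Theorem 5.1.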

Corollary 5.4. If l = r = 1, then ΘP contains at most n transition points and at most n invariancy intervals.

Proof. Since there is only one dual constraint, the parametric LP problem (6) is easily solved by the simplex algorithm. Hence the dual basic variable yj can be chosen by the ratio test

j ∈ J = {j | a/b_{1j} > 0, b_{1j} ≠ 0}.

Clearly, the number of entries of the index set J is less than or equal to n. If the number of entries of J is equal to n, then by Corollary 5.2, ΘP has n finite transition points and n − 1 invariancy intervals. Otherwise, the number of finite transition points is less than n and the number of invariancy intervals is less than or equal to n. 

The main result of this paper is as follows.

Theorem 5.5. If (d, c) ≥ 0, then there is a pivot rule such that the simplex algorithm solves the problem (1) in at most n steps.

Proof. Suppose that l = 1 and S is of rank 1 in the almost primal-dual problem pair (17) and (18). By Corollary 5.4, the set SΘP = {Sv | v ∈ ΘP} contains at most n transition points and n invariancy intervals. Then by Corollary 4.5 and Theorem 4.6, the set SΘD = {Su | u ∈ ΘD} contains at most n transition points and n invariancy intervals, which implies the desired result. 

References

[1] I. Adler and R. Monteiro, A geometric view of parametric linear programming, Algorithmica, 8(1992), 161-176.

[2] D. Avis and V. Chvátal, Notes on Bland's rule, Math. Program. Study, 8(1978), 24-34.

[3] A. B. Berkelaar, B. Jansen, K. Roos and T. Terlaky, Basis- and partition identification for quadratic programming and linear complementarity problems, Math. Program., 86(2)(1999), 261-282.

[4] R. G. Bland, A combinatorial abstraction of linear programming, J. Combin. Theory (Ser. B), 23(1977), 33-57.

[5] F. Borrelli, A. Bemporad and M. Morari, Geometric algorithm for multiparametric linear programming, J. Optim. Theory Appl., 118(3)(2003), 515-540.

[6] K. H. Borgwardt, The Simplex Method: A Probabilistic Analysis, Springer-Verlag, Berlin, 1987.

[7] E. A. Boyd, Resolving degeneracy in combinatorial linear programs: steepest edge, steepest ascent and parametric ascent, Math. Program., 68(1995), 155-168.

[8] G. B. Dantzig, Maximization of a linear function of variables subject to linear inequalities, in: T. C. Koopmans (ed.), Activity Analysis of Production and Allocation, Wiley, New York, 1951, 339-347.

[9] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, N. J., 1963.

[10] M. Dehghan, A. Ghaffari Hadigheh and K. Mirnia, Support set invariancy sensitivity analysis in bi-parametric linear optimization, Adv. Model Optim., 9(1)(2007), 81-89.

[11] J. Dongarra and F. Sullivan, Guest editors' introduction: The top 10 algorithms, Comput. Sci. Eng., 2(2000), 22-23.

[12] F. Eisenbrand, Comments on: Recent progress on the combinatorial diameter of polytopes and simplicial complexes, Top, 21(2013), 468-471.

[13] J. Folkman and J. Lawrence, Oriented matroids, J. Combin. Theory, Ser. B, 25(1978), 199-236.

[14] K. Fukuda, Oriented matroid programming, Ph.D. Thesis, University of Waterloo, Waterloo, Ontario, Canada, 1982.

[15] K. Fukuda and M. Namiki, On extremal behaviors of Murty's least index method, Math. Program., 64(1994), 119-124.

[16] B. Gärtner and V. Kaibel, Two new bounds for the random-edge simplex algorithm, SIAM Journal on Discrete Mathematics, 21(1)(2007), 178-190.

[17] B. Gärtner, M. Henk and G. M. Ziegler, Randomized simplex algorithms on Klee-Minty cubes, Combinatorica, 18(3)(1998), 349-372.

[18] S. Gass and Th. Saaty, The computational algorithm for the parametric objective function, Naval Res. Log. Quarterly, 2(1955), 39-45.

[19] A. Ghaffari Hadigheh, H. Ghaffari Hadigheh and T. Terlaky, Bi-parametric optimal partition invariancy sensitivity analysis in linear optimization, Cent. Eur. J. Oper. Res., 16(2)(2008), 215-238.

[20] A. J. Goldman and A. W. Tucker, Theory of linear programming, in: H. W. Kuhn and A. W. Tucker (eds.), Linear Inequalities and Related Systems, Princeton University Press, Princeton, N. J., 1956, 53-97.

[21] D. Goldfarb, Worst case complexity of the shadow vertex simplex algorithm, Technical Report, Department of Industrial Engineering and Operations Research, Columbia University, 1983.

[22] D. Goldfarb and W. Y. Sit, Worst case behavior of the steepest edge simplex method, Discrete Applied Mathematics, 1(1979), 277-285.

[23] P. Gritzmann and V. Klee, Mathematical programming and convex geometry, in Handbook of convex geometry, North-Holland, Amsterdam, 1993, 627-674.

[24] H. J. Greenberg, The use of the optimal partition in a linear programming solution for postoptimal analysis, Oper. Res. Lett., 15(1994), 179-185.

[25] O. G¨uler and Y. Ye, Convergence behavior of interior-point algorithms, Math. Program., 60(1993), 215-228.

[26] M. Hladík, Multiparametric linear programming: support set and optimal partition invariancy, Eur. J. Oper. Res., 202(1)(2010), 25-31.

[27] J. -B. Hiriart-Urruty, Comments on: Recent progress on the combinatorial diameter of polytopes and simplicial complexes, Top, 21(2013), 472-473.

[28] B. Jansen, K. Roos and T. Terlaky, An interior point method approach to post optimal and parametric analysis in linear programming, Tech. Rep. 92-21, Delft University of Technology, Delft, The Netherlands, 1993.

[29] G. Kalai, A subexponential randomized simplex algorithm, in Proc. 24th ACM Symposium on the theory of computing (STOC), ACM Press 1992, 475-482.

[30] B. Jansen, K. Roos and T. Terlaky, An interior point method approach to postoptimal and parametric analysis in linear programming, in: Proceedings of the Workshop Interior Point Methods, Budapest, Hungary, January 1993.

[31] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4(1984), 373-395.

[32] L. G. Khachian, Polynomial algorithms in linear programming, Zhurnal Vichislitelnoj Matematiki i Matematicheskoi Fiziki, 20(1980), 51-68 (in Russian); English transl.: USSR Comput. Math. Math. Phys., 20(1980), 53-72.

[33] V. Klee and G. J. Minty, How good is the simplex algorithm?, in: Inequalities III (O. Shisha, ed.), Academic Press, New York, 1972, 159-175.

[34] C. E. Lemke, Bimatrix equilibrium points and mathematical programming, Manag. Sci., 11(1965), 681-689.

[35] J. A. De Loera, Comments on: recent progress on the combinatorial diameter of polytopes and simplicial complexes, Top, 21 (2013), 474-481.

[36] I. Lustig, The equivalence of Dantzig’s self-dual parametric algorithm for linear programs to Lemke’s algorithm for linear complementarity problems applied to linear programming, Technical Report SOL 87-4, Department of , Stanford University, Stanford, CA, USA, 1987.

[37] T. L. Magnanti and J. B. Orlin, Parametric linear programming and anti-cycling pivoting rules, Mathematical Programming, 41(1988), 317-325.

[38] J. Matoušek, M. Sharir and E. Welzl, A subexponential bound for linear programming, Algorithmica, 16(1996), 498-516.

[39] L. McLinden, An analogue of Moreau’s proximation theorem, with applications to the nonlinear complementarity problem, Pacific Journal of Mathematics, 88 (1980), 101-161.

[40] N. Megiddo, On the complexity of linear programming, in Advances in economic theory: Fifth world congress, T. Bewley, ed. Cambridge University Press, Cam- bridge, 1987, 225-268.

[41] E. Miranda, The Hirsch conjecture has been disproved, interview of Francisco Santos. In: The newsletter of the European mathematical society, (2012), 31-36.

[42] T. L. Morin, N. Prabhu and Z. Zhang, Complexity of the gravitational method for linear programming, Journal of Optimization Theory and Applications, 3(2001), 633-658.

[43] K. G. Murty, Linear and combinatorial programming, Krieger Publishing Company, Malabar, FL, 1976.

[44] K. G. Murty, Computational complexity of parametric linear programming, Mathematical Programming, 19(1980), 213-219.

[45] C. H. Papadimitriou and K. Steiglitz: Combinatorial optimization: algorithms and complexity, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1982.

[46] K. Paparrizos, Pivoting rules directing the simplex method through all feasible vertices of Klee-Minty examples, Opsearch, 26(2)(1989), 77-95.

[47] R. T. Rockafellar, Convex analysis, Princeton University Press, Princeton, 1970.

[48] C. Roos, A pivoting rule for the simplex method which is related to Karmarkar’s potential function, Manuscript, Faculty of Technical Mathematics and Infor- matics, Delft University of Technology, The Netherlands (1986).

[49] C. Roos, Exponential example for Terlaky’s pivoting rule for the criss-cross simplex method, Mathematical Programming 46 (1990), 79-84.

[50] C. Roos, T. Terlaky and J-Ph Vial, Interior point algorithms for linear opti- mization, Springer, Boston, 2005.

[51] F. Santos, A counterexample to the Hirsch conjecture, Ann. Math., 176(1)(2012), 383-412.

[52] F. Santos, Rejoinder on: Recent progress on the combinatorial diameter of polytopes and simplicial complexes, Top, 21(2013), 482-484.

[53] A. Schrijver, Theory of linear and integer programming, Wiley, New York, 1986.

[54] S. Smale, Mathematical problems for the next century, in Mathematics: fron- tiers and perspectives, American Mathematics Society, Providence, RI (2000), 271-294.

[55] R. Shamir, The efficiency of the simplex method: a survey, Management Sci. 33(1987), 301-334.

[56] D. Spielman and S.-H. Teng, Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time, in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC), ACM Press, New York, 2001, 296-305.

[57] A. Tamura, H. Takehara, K. Fukuda, S. Fujishige and M. Kojima, A dual interior primal simplex method for linear programming, J. Oper. Res. Soc. Japan, 31(1988), 413-430.

[58] T. Terlaky, Interior point methods of mathematical programming, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.

[59] T. Terlaky, Comments on: Recent progress on the combinatorial diameter of polytopes and simplicial complexes, Top, 21(2013), 461-467.

[60] T. Terlaky and S. Zhang, Pivot rules for linear programming: a survey on recent theoretical developments, Ann. Oper. Res., 46(1) (1993), 203-233.

[61] M. J. Todd, A Dantzig-Wolfe-like variant of Karmarkar's interior-point linear programming algorithm, Oper. Res., 38(1990), 1006-1018.

[62] M. J. Todd, The many facets of linear programming, Math. Program., Ser. B, 91(2002), 417-436.

[63] R. Vershynin, Beyond Hirsch conjecture: walks on random polytopes and smoothed complexity of the simplex method, SIAM Journal on Computing, 39(2)(2009), 646-678.

[64] Z. Yan, X. Li and J. Guo, Primal and dual conic representable sets: a fresh view on multiparametric analysis, http://arxiv.org/abs/2006.08104.

[65] G. M. Ziegler, Lectures on polytopes, Graduate Texts in Mathematics, vol. 152, Springer-Verlag, New York, 1995.
