Linear Programming Solvers: the State of the Art


Linear Programming solvers: the state of the art
Julian Hall
School of Mathematics, University of Edinburgh
4th ISM-ZIB-IMI MODAL Workshop: Mathematical Optimization and Data Analysis
Tokyo, 27 March 2019

Overview
- LP background
- Serial simplex
- Interior point methods
- Solvers
- Parallel simplex: for structured LP problems; for general LP problems
- A novel method

Solution of linear programming (LP) problems

    minimize f = c^T x  subject to Ax = b, x >= 0

Background
- Fundamental model in optimal decision-making
- Solution techniques: the simplex method (1947), interior point methods (1984), and novel methods
- Large problems have 10^3-10^8 variables and 10^3-10^8 constraints
- The matrix A is (usually) sparse
- Example: STAIR has 356 rows, 467 columns and 3856 nonzeros

Solving LP problems: necessary and sufficient conditions for optimality

Karush-Kuhn-Tucker (KKT) conditions: x* is an optimal solution if and only if there exist y* and s* such that

    Ax = b          (1)
    A^T y + s = c   (2)
    x >= 0          (3)
    s >= 0          (4)
    x^T s = 0       (5)

- For the simplex algorithm, (1), (2) and (5) always hold
  - Primal simplex algorithm: (3) holds and the algorithm seeks to satisfy (4)
  - Dual simplex algorithm: (4) holds and the algorithm seeks to satisfy (3)
- For interior point methods, (1)-(4) hold and the method seeks to satisfy (5)

Solving LP problems: characterizing the feasible region
- A ∈ R^{m×n} has full rank
- The solution set of Ax = b is an (n−m)-dimensional hyperplane in R^n
- Its intersection with x >= 0 is the feasible region K
- A vertex of K has m basic components, i ∈ B, given by Ax = b, and n−m zero nonbasic components, j ∈ N, where B ∪ N partitions {1, ..., n}
- A solution of the LP occurs at a vertex of K

Solving LP problems: optimality conditions at a vertex

Given B ∪ N, partition A as [B N], x as (x_B, x_N), c as (c_B, c_N) and s as (s_B, s_N)
- If x_N = 0 and x_B = b̂ ≡ B^{-1}b, then Bx_B + Nx_N = b, so Ax = b and (1) holds
- If y = B^{-T}c_B and s_B = 0, then B^T y + s_B = c_B; if s_N = ĉ_N ≡ c_N − N^T y, then N^T y + s_N = c_N, so (2) holds
- Finally, x^T s = x_B^T s_B + x_N^T s_N = 0, so (5) holds
- Need b̂ >= 0 for (3) and ĉ_N >= 0 for (4)

Solving LP problems: simplex and interior point methods

Simplex method (1947)
- Given B ∪ N so that (1), (2) and (5) hold
- Primal simplex method: assume b̂ >= 0 (3); force ĉ_N >= 0 (4)
- Dual simplex method: assume ĉ_N >= 0 (4); force b̂ >= 0 (3)
- Modify B ∪ N: a combinatorial approach
- Cost: O(2^n) iterations in the worst case; practically O(m + n) iterations

Interior point method (1984)
- Replace x >= 0 by a log barrier and solve

      minimize f = c^T x − µ Σ_{j=1}^n ln(x_j)  subject to Ax = b

- KKT condition (5) changes: replace x^T s = 0 by XS = µe, where X and S are diagonal matrices with x and s on the diagonal
- KKT conditions (1)-(4) hold; satisfy (5) by forcing XS = µe as µ → 0
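The KKT conditions (1)-(5) can be checked numerically on a small standard-form LP. The sketch below is not from the slides: it uses SciPy's linprog (whose "highs" method calls the HiGHS solver), the data c, A, b are an invented toy example, and reading the duals from res.eqlin.marginals is an assumption about SciPy's result object.

```python
import numpy as np
from scipy.optimize import linprog

# Toy standard-form LP (invented data): minimize c^T x  s.t.  Ax = b, x >= 0
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")
x = res.x
y = res.eqlin.marginals          # dual values y* (assumed attribute layout)
s = c - A.T @ y                  # reduced costs s* from condition (2)

print(res.fun)                   # optimal objective value
assert np.allclose(A @ x, b)     # (1) primal feasibility
assert np.all(x >= -1e-9)        # (3)
assert np.all(s >= -1e-9)        # (4) dual feasibility
assert abs(x @ s) < 1e-8         # (5) complementarity
```

At the optimum all five conditions hold at once; along the way, primal simplex keeps (1)-(3) and works toward (4), while an interior point method keeps (1)-(4) and drives (5) to zero.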
Interior point methods take an iterative approach, with practically O(√n) iterations.

Simplex method

The simplex algorithm: definition

At a feasible vertex x = (b̂, 0) corresponding to B ∪ N:
1. If ĉ_N >= 0 then stop: the solution is optimal
2. Scan ĉ_j < 0 for q to leave N
3. Let â_q = B^{-1}Ne_q and d = (−â_q, e_q), giving the step x + αd
4. Scan the ratios b̂_i/â_iq over â_iq > 0 for α and p to leave B
5. Exchange p and q between B and N
6. Go to 1

Primal simplex algorithm: choose a column
- Assume b̂ >= 0; seek ĉ_N >= 0
- Scan ĉ_j < 0 for q to leave N

Primal simplex algorithm: choose a row
- Scan b̂_i/â_iq over â_iq > 0 for p to leave B

Primal simplex algorithm: update cost and RHS
- Exchange p and q between B and N
- Update b̂ := b̂ − α_P â_q, where α_P = b̂_p/â_pq
- Update ĉ_N^T := ĉ_N^T + α_D â_p^T, where α_D = −ĉ_q/â_pq

Primal simplex algorithm: data required
- Pivotal row â_p^T = e_p^T B^{-1}N
- Pivotal column â_q = B^{-1}a_q

Why does it work? The objective improves by −(b̂_p ĉ_q)/â_pq each iteration.

Simplex method: computation

Standard simplex method (SSM): the major computational component is the update of the tableau

    N̂ := N̂ − (1/â_pq) â_q â_p^T,  where N̂ = B^{-1}N

- Hopelessly inefficient for sparse LP problems
- Prohibitively expensive for large LP problems

Revised simplex method (RSM): major computational components
- Pivotal row via B^T π_p = e_p (BTRAN) and â_p^T = π_p^T N (PRICE)
- Pivotal column via Bâ_q = a_q (FTRAN)
- Represent B^{-1} (INVERT)
- Update B^{-1} exploiting B̄ = B + (a_q − Be_p)e_p^T (UPDATE)

Serial simplex: hyper-sparsity

Solve Bx = r for sparse r
- Given B = LU, solve Ly = r; Ux = y
- In the revised simplex method, r is sparse: what are the consequences?
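The column choice, ratio test and basis exchange described above can be sketched as a dense textbook primal simplex iteration. This is an illustrative toy, not Hall's implementation: it solves with B directly via numpy rather than maintaining a factorization, uses Dantzig pricing, and assumes the caller supplies a feasible starting basis; the 2×4 example data are invented.

```python
import numpy as np

def primal_simplex(A, b, c, basis, tol=1e-9):
    """Toy dense primal simplex for min c^T x s.t. Ax = b, x >= 0.
    `basis` holds m column indices with B^{-1} b >= 0 (a feasible vertex).
    No anti-cycling, scaling or factorization updates: a sketch only."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        xB = np.linalg.solve(B, b)             # b-hat = B^{-1} b
        y = np.linalg.solve(B.T, c[basis])     # BTRAN: B^T y = c_B
        s = c - A.T @ y                        # reduced costs c-hat
        candidates = [j for j in range(n) if j not in basis and s[j] < -tol]
        if not candidates:                     # c-hat_N >= 0: optimal
            x = np.zeros(n)
            x[basis] = xB
            return x, y
        q = min(candidates, key=lambda j: s[j])     # most negative (Dantzig)
        aq = np.linalg.solve(B, A[:, q])       # FTRAN: pivotal column a-hat_q
        rows = [i for i in range(m) if aq[i] > tol]
        if not rows:
            raise ValueError("LP is unbounded")
        p = min(rows, key=lambda i: xB[i] / aq[i])  # ratio test: p leaves B
        basis[p] = q                           # exchange p and q

# Hypothetical example with slack columns 2, 3 as the starting basis
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x, y = primal_simplex(A, b, c, basis=[2, 3])
print(x)   # optimal vertex
```

Each pass performs exactly the BTRAN, PRICE and FTRAN roles of the revised simplex method, but with dense solves in place of the factorized operations the slides go on to describe.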
- If B is irreducible then x is full
- If B is highly reducible then x can be sparse
- This is the phenomenon of hyper-sparsity: exploit it when forming x, and exploit it when using x

Serial simplex: hyper-sparsity

Inverse of a sparse matrix and solution of Bx = r
- Optimal B for the LP problem stair: B^{-1} has a density of 58%, so B^{-1}r is typically dense
- Optimal B for the LP problem pds-02: B^{-1} has a density of 0.52%, so B^{-1}r is typically sparse when r is sparse

Use the solution of Lx = b
- To illustrate the phenomenon of hyper-sparsity
- To demonstrate how to exploit hyper-sparsity
- Then apply the principles to the other triangular solves in the simplex method

Recall: solve Lx = b using

    function ftranL(L, b, x)
      r = b
      for all j ∈ {1, ..., m} do
        for all i : L_ij ≠ 0 do
          r_i = r_i − L_ij r_j
      x = r

When b is sparse this is inefficient until r fills in.

Better: check r_j for zero

    function ftranL(L, b, x)
      r = b
      for all j ∈ {1, ..., m} do
        if r_j ≠ 0 then
          for all i : L_ij ≠ 0 do
            r_i = r_i − L_ij r_j
      x = r

When x is sparse, few values of r_j are nonzero, so the check for zero dominates; this requires more efficient identification of the set of indices j such that r_j ≠ 0. Gilbert and Peierls (1988); Hall and McKinnon (1998-2005).

Recall: the major computational components
- FTRAN: form â_q = B^{-1}a_q
- BTRAN: form π_p = B^{-T}e_p
- PRICE: form â_p^T = π_p^T N

BTRAN: transposed triangular solves, e.g. L^T x = b has x_i = b_i − l_i^T x
- Hyper-sparsity: l_i^T x is typically zero
- Also store L (and U) row-wise and use the FTRAN code

PRICE:
- Hyper-sparsity: π_p^T is sparse
- Store N row-wise
- Form â_p^T as a combination of the rows of N for the nonzeros in π_p

Hall and McKinnon (1998-2005); COAP best paper prize (2005).

Interior point methods

Interior point methods: traditional

Replace x >= 0 by a log barrier function and solve

    minimize f = c^T x − µ Σ_{j=1}^n ln(x_j)  such that Ax = b

- For small µ this has the same solution as the LP
- Solve for a decreasing sequence of values of µ, moving through the interior of K
- Perform a small number of (expensive) iterations; each solves

      [ −Θ^{-1}  A^T ] [ Δx ]   [ f ]
      [    A      0  ] [ Δy ] = [ d ]    ⟺    GΔy = h

  where Δx and Δy are steps in the primal and dual variables and G = AΘA^T
- The standard technique is to form the Cholesky decomposition G = LL^T and perform triangular solves with L

Interior point methods: forming the Cholesky decomposition G = LL^T and performing triangular solves with L
- G = AΘA^T = Σ_j θ_j a_j a_j^T is generally sparse, so long as dense columns of A are treated carefully
- Much effort has gone into developing efficient serial Cholesky
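The step above — forming the normal-equations matrix G = AΘA^T, factorizing it, and back-solving — can be sketched densely with SciPy. Real interior point codes use sparse Cholesky with a fill-reducing ordering; the random data and the dense cho_factor call here are purely illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))          # a full-rank m-by-n constraint matrix
theta = rng.uniform(0.5, 2.0, size=n)    # positive diagonal of Theta
h = rng.standard_normal(m)

# Normal equations G dy = h with G = A Theta A^T (symmetric positive definite)
G = (A * theta) @ A.T
L_factor = cho_factor(G)                 # Cholesky G = L L^T
dy = cho_solve(L_factor, h)              # two triangular solves with L

assert np.allclose(G @ dy, h)
```

Because Θ changes at every interior point iteration while A does not, the sparsity pattern of G — and hence the symbolic factorization — can be computed once and reused, which is one reason the Cholesky approach dominates in practice.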