MATLAB's Optimization Toolbox Algorithms


Abstract  MATLAB's Optimization Toolbox (version 7.2) includes a family of algorithms for solving optimization problems. The toolbox provides functions for solving linear programming, mixed-integer linear programming, quadratic programming, nonlinear programming, and nonlinear least squares problems. This chapter presents the functions of this toolbox for solving LPs. A computational study is performed; its aim is to compare the efficiency of MATLAB's Optimization Toolbox solvers.

A.1 Chapter Objectives

• Present the linear programming algorithms of MATLAB's Optimization Toolbox.
• Understand how to use the linear programming solver of MATLAB's Optimization Toolbox.
• Present all the different ways to solve linear programming problems using MATLAB.
• Compare the computational performance of the linear programming algorithms of MATLAB's Optimization Toolbox.

A.2 Introduction

MATLAB is a matrix language intended primarily for numerical computing. MATLAB is especially designed for matrix computations, such as solving systems of linear equations or factoring matrices. Users who are not familiar with MATLAB are referred to [3, 7]. Experienced MATLAB users who want to further optimize their codes are referred to [1].

MATLAB's Optimization Toolbox includes a family of algorithms for solving optimization problems. The toolbox provides functions for solving linear programming, mixed-integer linear programming, quadratic programming, nonlinear programming, and nonlinear least squares problems. MATLAB's Optimization Toolbox includes four categories of solvers [9]:

• Minimizers: solvers that attempt to find a local minimum of the objective function. These solvers handle unconstrained optimization, linear programming, quadratic programming, and nonlinear programming problems.
• Multiobjective minimizers: solvers that attempt to either minimize the maximum value of a set of functions or find a location where a set of functions is below some given values.
• Equation solvers: solvers that attempt to find a solution to a nonlinear equation f(x) = 0.
• Least-squares solvers: solvers that attempt to minimize a sum of squares.
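As a quick orientation, the sketch below calls one representative solver from three of these categories. The functions are standard MATLAB/Optimization Toolbox routines, but the objective functions and data are illustrative and not taken from the text.

```matlab
% One representative call per solver category (illustrative data).

% Minimizer: unconstrained minimum of Rosenbrock's function with fminunc.
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
opts  = optimoptions('fminunc', 'Algorithm', 'quasi-newton', 'Display', 'off');
xmin  = fminunc(rosen, [-1; 2], opts);

% Equation solver: root of the scalar nonlinear equation cos(x) - x = 0 with fzero.
xroot = fzero(@(x) cos(x) - x, 1);

% Least-squares solver: nonnegative least squares with lsqnonneg.
C = [1 2; 3 4; 5 6];  d = [7; 8; 9];
xnn = lsqnonneg(C, d);
```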
A complete list of the types of optimization problems that MATLAB's Optimization Toolbox solvers can handle, along with the name of each solver, is given in Table A.1 [9].

Table A.1 Optimization problems handled by MATLAB's Optimization Toolbox

Type of problem                    | Solver                   | Description
Scalar minimization                | fminbnd                  | Find the minimum of a single-variable function on a fixed interval
Unconstrained minimization         | fminunc, fminsearch      | Find the minimum of an unconstrained multivariable function (fminunc), optionally using derivative-free optimization (fminsearch)
Linear programming                 | linprog                  | Solve linear programming problems
Mixed-integer linear programming   | intlinprog               | Solve mixed-integer linear programming problems
Quadratic programming              | quadprog                 | Solve quadratic programming problems
Constrained minimization           | fmincon                  | Find the minimum of a constrained nonlinear multivariable function
Semi-infinite minimization         | fseminf                  | Find the minimum of a semi-infinitely constrained multivariable nonlinear function
Goal attainment                    | fgoalattain              | Solve multiobjective goal attainment problems
Minimax                            | fminimax                 | Solve minimax constraint problems
Linear equations                   | \ (matrix left division) | Solve systems of linear equations
Nonlinear equation of one variable | fzero                    | Find the root of a nonlinear function
Nonlinear equations                | fsolve                   | Solve a system of nonlinear equations
Linear least-squares               | \ (matrix left division) | Find a least-squares solution to a system of linear equations
Nonnegative linear least-squares   | lsqnonneg                | Solve nonnegative least-squares constraint problems
Constrained linear least-squares   | lsqlin                   | Solve constrained linear least-squares problems
Nonlinear least-squares            | lsqnonlin                | Solve nonlinear least-squares problems
Nonlinear curve fitting            | lsqcurvefit              | Solve nonlinear curve-fitting problems in the least-squares sense

MATLAB's Optimization Toolbox offers four LP algorithms:

• an interior point method,
• a primal simplex algorithm,
• a dual simplex algorithm, and
• an active-set algorithm.

This chapter presents the LP algorithms implemented in MATLAB's Optimization Toolbox. Moreover, the Optimization App, a graphical user interface for solving optimization problems, is presented. In addition, LPs are solved using MATLAB in various ways. Finally, a computational study is performed. The aim of the computational study is to compare the execution time of MATLAB's Optimization Toolbox solvers for LPs.

The structure of this chapter is as follows. In Section A.3, the LP algorithms implemented in MATLAB's Optimization Toolbox are presented. Section A.4 presents the linprog solver, while Section A.5 presents the Optimization App, a graphical user interface for solving optimization problems. Section A.6 provides different ways to solve LPs using MATLAB. In Section A.7, a MATLAB code to solve LPs using the linprog solver is provided. Section A.8 presents a computational study that compares the execution time of MATLAB's Optimization Toolbox solvers. Finally, conclusions and further discussion are presented in Section A.9.

A.3 MATLAB's Optimization Toolbox Linear Programming Algorithms

As already mentioned, MATLAB's Optimization Toolbox offers four algorithms for the solution of LPs. These algorithms are presented in this section. The algorithm that linprog uses is selected through the solver options, as sketched below.
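The snippet below is a minimal sketch of selecting one of the four LP algorithms when calling linprog. The data are illustrative, and the available 'Algorithm' strings depend on the toolbox release: the four names listed match the toolbox version covered by this appendix, while newer releases keep only the interior-point and dual simplex options. The problem-structure form of linprog is used to avoid differences in the positional-argument syntax across releases.

```matlab
% Choosing the LP algorithm via optimoptions (illustrative data).
c  = [-5; -4];                    % minimize c'*x
A  = [6 4; 1 2];  b = [24; 6];    % subject to A*x <= b
lb = [0; 0];                      % and x >= 0

% 'dual-simplex' is shown here; 'interior-point', 'simplex', and 'active-set'
% are the other choices in the toolbox version discussed in this appendix.
problem.f       = c;
problem.Aineq   = A;   problem.bineq = b;
problem.lb      = lb;
problem.solver  = 'linprog';
problem.options = optimoptions('linprog', 'Algorithm', 'dual-simplex');

[x, fval, exitflag] = linprog(problem);
```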
A.3.1 Interior Point Method

The interior point method used in MATLAB is based on LIPSOL [12], which is a variant of Mehrotra's predictor-corrector algorithm [10], a primal-dual interior point method presented in Chapter 11. The method starts by applying the following preprocessing techniques [9]:

• All variables are bounded below by zero.
• All constraints are equalities.
• Fixed variables, that is, those with equal lower and upper bounds, are removed.
• Empty rows are removed.
• The constraint matrix is ensured to have full structural rank.
• Empty columns are removed.
• When a significant number of singleton rows exist, the associated variables are solved for and the associated rows are removed.

After preprocessing, the LP problem has the following form:

    min  c^T x
    s.t. Ax = b                                  (LP.1)
         0 ≤ x ≤ u

The upper bounds are implicitly included in the constraint matrix A. After adding the slack variables s, problem (LP.1) becomes:

    min  c^T x
    s.t. Ax = b
         x + s = u                               (LP.2)
         x ≥ 0, s ≥ 0

Vector x consists of the primal variables and vector s consists of the primal slack variables. The dual problem can be written as:

    max  b^T y - u^T w
    s.t. A^T y - w + z = c                       (DP.1)
         z ≥ 0, w ≥ 0

Vectors y and w consist of the simplex multipliers and vector z consists of the reduced costs.

This method is a primal-dual algorithm, meaning that both the primal (LP.2) and the dual (DP.1) problems are solved simultaneously. Let v = [x; y; z; s; w], where [x; z; s; w] ≥ 0. The method first calculates the prediction or Newton direction:

    Δv_p = -F'(v)^{-1} F(v)                      (A.1)

Then, the corrector direction is computed as:

    Δv_c = -F'(v)^{-1} [F(v + Δv_p) - μe]        (A.2)

where μ ≥ 0 is called the centering or barrier parameter and must be chosen carefully, and e is a zero-one vector with the ones corresponding to the quadratic equations in F(v). The two directions are combined with a step length parameter α > 0, and v is updated to obtain the new iterate v⁺:

    v⁺ = v + α(Δv_p + Δv_c)                      (A.3)

where the step length parameter α is chosen so that v⁺ = [x⁺; y⁺; z⁺; s⁺; w⁺] satisfies [x⁺; z⁺; s⁺; w⁺] > 0.

This method computes a sparse direct factorization on a modification of the Cholesky factors of AA^T. If A is dense, the Sherman-Morrison formula [11] is used. If the residual is too large, an LDL factorization of an augmented system form of the step equations is performed to find a solution.

The algorithm then loops until the iterates converge. The main stopping criterion is the following:

    ‖r_b‖ / max(1, ‖b‖) + ‖r_c‖ / max(1, ‖c‖) + ‖r_u‖ / max(1, ‖u‖)
        + |c^T x - b^T y + u^T w| / max(1, |c^T x|, |b^T y - u^T w|) ≤ tol     (A.4)

where r_b = Ax - b, r_c = A^T y - w + z - c, and r_u = x + s - u are the primal residual, the dual residual, and the upper-bound feasibility residual, respectively, c^T x - b^T y + u^T w is the difference between the primal and dual objective values (the duality gap), and tol is the tolerance (a very small positive number).

A.3.2 Primal Simplex Algorithm

This algorithm is a primal simplex algorithm that solves the following LP problem:

    min  c^T x
    s.t. Ax ≤ b
         Aeq x = beq                             (LP.3)
         lb ≤ x ≤ ub

A linprog call matching this form is sketched after the list below. The algorithm applies the same preprocessing steps as the ones presented in Section A.3.1. In addition, the algorithm performs two more preprocessing steps:

• It eliminates singleton columns and their corresponding rows.
• For each constraint equation A_i x = b_i, i = 1, 2, ..., m, the algorithm calculates the lower and upper bounds of the linear combination A_i x as rlb and rub if the lower and upper bounds are finite. If either rlb or rub equals b_i, then the constraint is called a forcing constraint. The algorithm sets each variable corresponding to a nonzero coefficient of A_i x equal to its lower or upper bound, depending on the forcing constraint. Then, the algorithm deletes the columns corresponding to these variables and deletes the rows corresponding to the forcing constraints.
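The following sketch shows how a problem already in the form (LP.3) maps onto linprog's input arguments. The numerical data are illustrative, and the solver's default algorithm is used.

```matlab
% Solving a small LP in the form (LP.3) with linprog (illustrative data).
c   = [1; 2; 3];                          % objective coefficients
A   = [-1 1 1; 1 -3 1];   b   = [20; 30]; % inequality constraints A*x <= b
Aeq = [1 1 1];            beq = 40;       % equality constraint Aeq*x = beq
lb  = zeros(3, 1);        ub  = [40; Inf; Inf];   % bounds lb <= x <= ub

[x, fval, exitflag, output] = linprog(c, A, b, Aeq, beq, lb, ub);
```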
This is a two-phase algorithm. In Phase I, the algorithm finds an initial basic feasible solution by solving an auxiliary piecewise linear programming problem.
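For illustration only, the sketch below sets up a classic artificial-variable Phase I for equality constraints Aeq x = beq, x ≥ 0 and solves the auxiliary LP with linprog. MATLAB's built-in Phase I instead minimizes a piecewise-linear measure of infeasibility, so this is a simplified stand-in for the idea of obtaining an initial feasible point from an auxiliary problem; all data and variable names are made up.

```matlab
% Classic artificial-variable Phase I (illustrative stand-in for the
% piecewise-linear Phase I described above).
Aeq = [1 1 1; 1 -1 0];  beq = [4; 1];
[m, n] = size(Aeq);

% Flip rows so that the right-hand side is nonnegative, then add one
% artificial variable per row: Aeq*x + a = beq, x >= 0, a >= 0.
sgn  = sign(beq);  sgn(sgn == 0) = 1;
Aaux = [diag(sgn) * Aeq, eye(m)];
baux = diag(sgn) * beq;
caux = [zeros(n, 1); ones(m, 1)];   % Phase I objective: sum of artificials
lbv  = zeros(n + m, 1);

[v, infeas] = linprog(caux, [], [], Aaux, baux, lbv, []);
if infeas < 1e-8
    x0 = v(1:n);    % feasible starting point for the original constraints
else
    disp('The original problem is infeasible.');
end
```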