MATLAB's Optimization Toolbox Algorithms
Appendix A  MATLAB's Optimization Toolbox Algorithms

Abstract  MATLAB's Optimization Toolbox (version 7.2) includes a family of algorithms for solving optimization problems. The toolbox provides functions for solving linear programming, mixed-integer linear programming, quadratic programming, nonlinear programming, and nonlinear least squares problems. This chapter presents the available functions of this toolbox for solving LPs. A computational study is performed; its aim is to compare the efficiency of MATLAB's Optimization Toolbox solvers.

A.1 Chapter Objectives

• Present the linear programming algorithms of MATLAB's Optimization Toolbox.
• Understand how to use the linear programming solver of MATLAB's Optimization Toolbox.
• Present all the different ways to solve linear programming problems using MATLAB.
• Compare the computational performance of the linear programming algorithms of MATLAB's Optimization Toolbox.

A.2 Introduction

MATLAB is a matrix language intended primarily for numerical computing. MATLAB is especially designed for matrix computations such as solving systems of linear equations or factoring matrices. Users who are not familiar with MATLAB are referred to [3, 7]. Experienced MATLAB users who want to further optimize their codes are referred to [1].

MATLAB's Optimization Toolbox includes a family of algorithms for solving optimization problems. The toolbox provides functions for solving linear programming, mixed-integer linear programming, quadratic programming, nonlinear programming, and nonlinear least squares problems. MATLAB's Optimization Toolbox includes four categories of solvers [9] (a minimal linprog call is sketched after this list):

• Minimizers: solvers that attempt to find a local minimum of the objective function. These solvers handle unconstrained optimization, linear programming, quadratic programming, and nonlinear programming problems.
• Multiobjective minimizers: solvers that attempt to either minimize the value of a set of functions or find a location where a set of functions is below certain values.
• Equation solvers: solvers that attempt to find a solution to a nonlinear equation f(x) = 0.
• Least-squares solvers: solvers that attempt to minimize a sum of squares.
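As a quick illustration of the linear programming solver before it is discussed in detail, the following minimal sketch calls linprog on a small LP. The problem data are made up for demonstration and do not come from the text; the Optimization Toolbox must be installed.

    % Minimal illustrative call to linprog (requires the Optimization Toolbox).
    % The data below are made up for demonstration purposes only.
    f  = [-5; -4];          % objective coefficients: minimize f'*x
    A  = [6 4; 1 2];        % inequality constraints A*x <= b
    b  = [24; 6];
    lb = zeros(2, 1);       % lower bounds x >= 0
    [x, fval, exitflag] = linprog(f, A, b, [], [], lb, []);
    % exitflag = 1 indicates that linprog converged to a solution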
A complete list of the types of optimization problems that MATLAB's Optimization Toolbox solvers can handle, along with the name of each solver, can be found in Table A.1 [9]:

Table A.1  Optimization problems handled by MATLAB's Optimization Toolbox

Type of problem                     | Solver                    | Description
Scalar minimization                 | fminbnd                   | Find the minimum of a single-variable function on a fixed interval
Unconstrained minimization          | fminunc, fminsearch       | Find the minimum of an unconstrained multivariable function (fminunc) using derivative-free optimization (fminsearch)
Linear programming                  | linprog                   | Solve linear programming problems
Mixed-integer linear programming    | intlinprog                | Solve mixed-integer linear programming problems
Quadratic programming               | quadprog                  | Solve quadratic programming problems
Constrained minimization            | fmincon                   | Find the minimum of a constrained nonlinear multivariable function
Semi-infinite minimization          | fseminf                   | Find the minimum of a semi-infinitely constrained multivariable nonlinear function
Goal attainment                     | fgoalattain               | Solve multiobjective goal attainment problems
Minimax                             | fminimax                  | Solve minimax constraint problems
Linear equations                    | \ (matrix left division)  | Solve systems of linear equations
Nonlinear equation of one variable  | fzero                     | Find the root of a nonlinear function
Nonlinear equations                 | fsolve                    | Solve a system of nonlinear equations
Linear least-squares                | \ (matrix left division)  | Find a least-squares solution to a system of linear equations
Nonnegative linear least-squares    | lsqnonneg                 | Solve nonnegative least-squares constraint problems
Constrained linear least-squares    | lsqlin                    | Solve constrained linear least-squares problems
Nonlinear least-squares             | lsqnonlin                 | Solve nonlinear least-squares problems
Nonlinear curve fitting             | lsqcurvefit               | Solve nonlinear curve-fitting problems in the least-squares sense

MATLAB's Optimization Toolbox offers four LP algorithms:

• an interior point method,
• a primal simplex algorithm,
• a dual simplex algorithm, and
• an active-set algorithm.

This chapter presents the LP algorithms implemented in MATLAB's Optimization Toolbox. Moreover, the Optimization App, a graphical user interface for solving optimization problems, is presented. In addition, LPs are solved using MATLAB in various ways. Finally, a computational study is performed. The aim of the computational study is to compare the execution time of MATLAB's Optimization Toolbox solvers for LPs.

The structure of this chapter is as follows. In Section A.3, the LP algorithms implemented in MATLAB's Optimization Toolbox are presented. Section A.4 presents the linprog solver, while Section A.5 presents the Optimization App, a graphical user interface for solving optimization problems. Section A.6 provides different ways to solve LPs using MATLAB. In Section A.7, a MATLAB code to solve LPs using the linprog solver is provided. Section A.8 presents a computational study that compares the execution time of MATLAB's Optimization Toolbox solvers. Finally, conclusions and further discussion are presented in Section A.9.

A.3 MATLAB's Optimization Toolbox Linear Programming Algorithms

As already mentioned, MATLAB's Optimization Toolbox offers four algorithms for the solution of LPs. These algorithms are presented in this section.

A.3.1 Interior Point Method

The interior point method used in MATLAB is based on LIPSOL [12], which is a variant of Mehrotra's predictor-corrector algorithm [10], a primal-dual interior point method presented in Chapter 11.
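In practice, the interior point method (and the other three LP algorithms) can be requested from linprog through the 'Algorithm' option. The sketch below is hedged: the exact set of accepted algorithm names and the position of the options argument differ between toolbox releases, so the documentation of the installed version should be consulted.

    % Hedged sketch: choosing the LP algorithm via optimoptions.
    % Accepted 'Algorithm' strings depend on the release (for example, the
    % primal simplex and active-set choices were removed in later versions).
    options = optimoptions('linprog', 'Algorithm', 'interior-point');
    % options = optimoptions('linprog', 'Algorithm', 'dual-simplex');

    f  = [-1; -2];  A = [1 1; 2 1];  b = [4; 5];  lb = zeros(2, 1);  % made-up data
    % In recent releases options is the last input; older releases expect an
    % initial point x0 before options.
    [x, fval] = linprog(f, A, b, [], [], lb, [], options);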
The method starts by applying the following preprocessing techniques [9]:

• All variables are bounded below by zero.
• All constraints are equalities.
• Fixed variables, those with equal lower and upper bounds, are removed.
• Empty rows are removed.
• The constraint matrix has full structural rank.
• Empty columns are removed.
• When a significant number of singleton rows exist, the associated variables are solved for and the associated rows are removed.

After preprocessing, the LP problem has the following form:

    min  c^T x
    s.t. Ax = b                                  (LP.1)
         0 ≤ x ≤ u

The upper bounds are implicitly included in the constraint matrix A. After adding the slack variables s, problem (LP.1) becomes:

    min  c^T x
    s.t. Ax = b
         x + s = u                               (LP.2)
         x ≥ 0, s ≥ 0

Vector x consists of the primal variables and vector s consists of the primal slack variables. The dual problem can be written as:

    max  b^T y - u^T w
    s.t. A^T y - w + z = c                       (DP.1)
         z ≥ 0, w ≥ 0

Vectors y and w consist of the simplex multipliers and vector z consists of the reduced costs. This method is a primal-dual algorithm, meaning that both the primal (LP.2) and the dual (DP.1) problems are solved simultaneously. Let us assume that v = [x; y; z; s; w], where [x; z; s; w] ≥ 0. This method first calculates the prediction or Newton direction:

    Δv_p = -F'(v)^(-1) F(v)                      (A.1)

Then, the corrector direction is computed as:

    Δv_c = -F'(v)^(-1) ( F(v + Δv_p) - μ e )     (A.2)

where μ ≥ 0 is called the centering or barrier parameter and must be chosen carefully, and e is a zero-one vector with the ones corresponding to the quadratic equations in F(v). The two directions are combined with a step length parameter α > 0 and v is updated to get a new iterate v⁺:

    v⁺ = v + α (Δv_p + Δv_c)                     (A.3)

where the step length parameter α is chosen so that v⁺ = [x⁺; y⁺; z⁺; s⁺; w⁺] satisfies [x⁺; z⁺; s⁺; w⁺] ≥ 0.

This method computes a sparse direct factorization on a modification of the Cholesky factors of A A^T. If A is dense, the Sherman-Morrison formula [11] is used. If the residual is too large, then an LDL factorization of an augmented system form of the step equations is performed to find a solution. The algorithm then loops until the iterates converge. The main stopping criterion is the following:

    ‖r_b‖ / max(1, ‖b‖) + ‖r_c‖ / max(1, ‖c‖) + ‖r_u‖ / max(1, ‖u‖)
      + |c^T x - b^T y + u^T w| / max(1, |c^T x|, |b^T y - u^T w|)  ≤  tol   (A.4)

where r_b = Ax - b, r_c = A^T y - w + z - c, and r_u = x + s - u are the primal residual, dual residual, and upper-bound feasibility residual, respectively, c^T x - b^T y + u^T w is the difference between the primal and dual objective values (the duality gap), and tol is the tolerance (a very small positive number).

A.3.2 Primal Simplex Algorithm

This algorithm is a primal simplex algorithm that solves the following LP problem:

    min  c^T x
    s.t. Ax ≤ b
         Aeq x = beq                             (LP.3)
         lb ≤ x ≤ ub

The algorithm applies the same preprocessing steps as the ones presented in Section A.3.1. In addition, the algorithm performs two more preprocessing methods:

• Eliminates singleton columns and their corresponding rows.
• For each constraint equation A_i x = b_i, i = 1, 2, ..., m, where A_i denotes the i-th row of the constraint matrix, the algorithm calculates the lower and upper bounds of the linear combination A_i x as rlb and rub, provided that the lower and upper bounds of the variables are finite. If either rlb or rub equals b_i, then the constraint is called a forcing constraint. The algorithm sets each variable corresponding to a nonzero coefficient of A_i x equal to its lower or upper bound, depending on the forcing constraint. Then, the algorithm deletes the columns corresponding to these variables and deletes the rows corresponding to the forcing constraints. A small illustrative sketch of the rlb/rub computation follows this list.
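The rlb/rub test described in the second preprocessing step above can be pictured with a few lines of interval arithmetic. This is only a conceptual sketch, not the toolbox's internal code; the matrices A, b and the bounds lb, ub below are made-up stand-ins for the equality constraint data.

    % Illustrative sketch (not linprog's internal code): detect "forcing"
    % equality constraints A*x = b over finite bounds lb <= x <= ub.
    A  = [1 1; 1 -1];   b = [2; 0];    % made-up equality constraints
    lb = [0; 0];        ub = [1; 1];   % finite variable bounds
    Apos = max(A, 0);   Aneg = min(A, 0);
    rlb = Apos*lb + Aneg*ub;           % smallest attainable value of each row A(i,:)*x
    rub = Apos*ub + Aneg*lb;           % largest attainable value of each row A(i,:)*x
    forcing = (rlb == b) | (rub == b); % here row 1 is forcing: rub(1) = 2 = b(1),
                                       % so x(1) and x(2) are fixed at their upper
                                       % bounds and row 1 can be removed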
This is a two-phase algorithm. In Phase I, the algorithm finds an initial basic feasible solution by solving an auxiliary piecewise linear programming problem.
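To convey the Phase I idea, the following hedged sketch applies the classical artificial-variable Phase I to made-up equality constraints and recovers an initial basic feasible solution when the auxiliary optimum is zero. This is only a conceptual stand-in: linprog's own Phase I minimizes a piecewise linear measure of infeasibility rather than this textbook formulation.

    % Hedged illustration of the Phase I idea using classical artificial
    % variables (linprog's internal Phase I uses a piecewise linear objective,
    % so this is only a conceptual stand-in). Data are made up.
    A  = [1 1 1; 2 1 0];   b = [4; 3];      % equality constraints A*x = b, x >= 0
    [m, n] = size(A);
    S  = diag(sign(b) + (b == 0));          % flip rows so right-hand sides are >= 0
    A1 = [S*A, eye(m)];                     % append one artificial variable per row
    f1 = [zeros(n, 1); ones(m, 1)];         % minimize the sum of the artificials
    [v, sumArt] = linprog(f1, [], [], A1, S*b, zeros(n + m, 1), []);
    if sumArt < 1e-8
        x0 = v(1:n);                        % feasible point for the original constraints
    end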