A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm


Renke Kuhlmann · Christof Büskens

Abstract Interior-point methods have been shown to be very efficient for large-scale nonlinear programming. The combination with penalty methods increases their robustness due to the regularization of the constraints caused by the penalty term. In this paper a primal-dual penalty-interior-point algorithm is proposed that is based on an augmented Lagrangian approach with an $\ell_2$-exact penalty function. Global convergence is maintained by a combination of a merit function and a filter approach. Unlike other filter methods, no separate feasibility restoration phase is required. The algorithm has been implemented within the solver WORHP to study different penalty and line search options and to compare its numerical performance to two other state-of-the-art nonlinear programming algorithms, the interior-point method IPOPT and the sequential quadratic programming method of WORHP.

Keywords Nonlinear Programming · Constrained Optimization · Augmented Lagrangian · Penalty-Interior-Point Algorithm · Primal-Dual Method

Mathematics Subject Classification (2000) 49M05 · 49M15 · 49M29 · 49M37 · 90C06 · 90C26 · 90C30 · 90C51

Renke Kuhlmann, Optimization and Optimal Control, Center for Industrial Mathematics (ZeTeM), Bibliothekstr. 5, 28359 Bremen, Germany. E-mail: [email protected]
Christof Büskens, E-mail: [email protected]

1 Introduction

In this paper we consider the nonlinear optimization problem

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0, \quad x \ge 0 \tag{1.1}$$

with twice continuously differentiable functions $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$, but the methods can easily be extended to the general case with $l \le x \le g$ and $c(x) \le 0$ (cf. [45]). The widely used and very efficient interior-point strategy (cf. [6,21,34]) handles the inequality constraints by adding a barrier term to the objective function $f(x)$ and solving a sequence of barrier problems

$$\min_{x \in \mathbb{R}^n} \varphi_\mu(x) := f(x) - \mu \sum_{i=1}^n \ln x^{(i)} \quad \text{s.t.} \quad c(x) = 0 \tag{1.2}$$

with a decreasing barrier parameter $\mu > 0$. In this paper, we consider an algorithm that penalizes both the inequality box constraints and the nonlinear equality constraints $c(x)$: the former by a log-barrier term and the latter by an augmented Lagrangian term. However, unlike other augmented Lagrangian methods we do not use a quadratic $\ell_2$-norm as the measure of constraint violation, but an exact (non-squared) $\ell_2$-penalty, following the penalty-interior-point approach of Chen and Goldfarb [10,11,12]. The resulting unconstrained reformulation is

$$\min_{x} \; \Phi_{\mu,\lambda,\rho,\tau}(x) := \rho \left( f(x) - \mu \sum_{i=1}^n \ln x^{(i)} \right) + \lambda^\top c(x) + \tau \, \|c(x)\|_2 \tag{1.3}$$

with penalty parameters $\rho \ge 0$ and $\tau > 0$, a barrier parameter $\mu \ge 0$ and Lagrangian multipliers $\lambda \in \mathbb{R}^m$. For improved readability the dependences on $\rho$, $\tau$ and $\lambda$ are omitted when clear from the context, and we write $\Phi_\mu(x) := \Phi_{\mu,\lambda,\rho,\tau}(x)$. The penalty parameter $\tau$ controls the size of the multipliers and is updated until a certain threshold value is reached. The penalty parameter $\rho$ balances the optimization of the Lagrangian function against the constraint violation of problem (1.2). In particular, the algorithm solves a sequence of problems (1.3) with a decreasing penalty parameter $\rho$ until a first-order optimal point of (1.2) is found.
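To make the structure of (1.3) concrete, the following is a minimal sketch in Python (using NumPy) of how the merit function could be evaluated. The callables `f` and `c` and all parameter names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def merit_phi(x, f, c, mu, lam, rho, tau):
    """Evaluate Phi_{mu,lambda,rho,tau}(x) from (1.3):
    rho*(f(x) - mu*sum_i ln x_i) + lambda^T c(x) + tau*||c(x)||_2.
    Requires x > 0 componentwise, since iterates stay strictly interior."""
    cx = c(x)                                  # equality constraint values, shape (m,)
    barrier = f(x) - mu * np.sum(np.log(x))    # log-barrier objective of (1.2)
    return rho * barrier + lam @ cx + tau * np.linalg.norm(cx)
```

Note that for $\rho = 1$, $\lambda = 0$ and sufficiently large $\tau$ this reduces to a classical exact $\ell_2$-penalty function for (1.2), since the last term is the non-squared Euclidean norm.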
However, unlike penalty-interior-point algorithms with a quadratic penalty function (e.g. Armand et al. [1], Armand and Omheni [2,3] or Yamashita and Yabe [47]), the penalty parameter $\rho$ does not have to converge to zero. A first-order optimal point of (1.2) satisfying the Mangasarian-Fromovitz constraint qualification (MFCQ) is a stationary point of the merit function $\Phi_\mu(x)$ if $\rho$ is smaller than a certain threshold value or the duals of (1.3) equal $\rho\lambda$. Using two penalty parameters is mainly motivated by the better accuracy of the implemented algorithm.

It is an important feature of optimization algorithms to detect infeasibility of the given problem. In such a case a first-order optimal point of (1.2) does not exist and the penalty parameter $\rho$ converges to zero, resulting in the optimization of

$$\min_{x \ge 0} \|c(x)\|_2. \tag{1.4}$$

A solution of (1.4) that is infeasible for (1.1) serves as a certificate of infeasibility. The presented algorithm follows the idea of Fletcher [18] and Byrd et al. [8] to place the penalty parameter in front of the objective function or the Lagrangian function, respectively, instead of in front of the measure of constraint violation, which improves solver performance on infeasible problems.

The proposed algorithm shares the following properties with other primal-dual penalty-interior-point algorithms (e.g. [1,10,15]): the step is a guaranteed descent direction for the merit function $\Phi_\mu(x)$, and a rank-deficient Jacobian of the constraints at infeasible non-stationary points can be handled without modification of the Newton system. The latter avoids failures of global convergence such as the one exhibited by the optimization problem in Wächter and Biegler [44].

The augmented Lagrangian methods (e.g. [14,35]) extend the pure (quadratic) $\ell_2$-penalty function. Recently, primal-dual augmented Lagrangian methods have enjoyed increased popularity. They have been studied by Armand and Omheni [2,3], Forsgren and Gill [20], Gertz and Gill [22], Gill and Robinson [23] and Goldfarb et al. [25]. These methods can remove the perturbation of the KKT system caused by the penalty term through an appropriate update of the Lagrangian multipliers $\lambda$. This makes it unnecessary to calculate an additional unperturbed step per iteration as in Chen and Goldfarb [11,12], and naturally leads to a quadratic rate of convergence to first-order optimal points of (1.2) and a superlinear rate in the case of the nonlinear program (1.1). Our update of the Lagrangian multipliers $\lambda$ differs from other augmented Lagrangian based algorithms (e.g. [2,3,13]) in that it does not rely on a criterion that measures the reduction of the constraint violation. Instead, it is based on dual information and is designed to be applied as often as possible when approaching the optimal solution.

For step acceptance, instead of following recent research trends to avoid penalties and a filter as in Liu and Yuan [33] or Gould and Toint [30], we combine the two, the merit function and the filter mechanism, as line search criteria, of which at least one has to indicate progress for a trial iterate. Comparable combinations have been proposed by Chen and Goldfarb [12] and Gould et al. [26,27]. The filter, originally introduced by Fletcher and Leyffer [19], significantly increases the flexibility of the step acceptance and is therefore widely used by nonlinear programming solvers (e.g. [4,9,19,40,45]).
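The combined acceptance test can be pictured as a logical OR of the two criteria. The sketch below is illustrative only; the precise sufficient-decrease condition and filter margins are those specified in Section 2, and the names and constants used here are assumptions:

```python
def is_acceptable(phi_trial, phi_cur, delta_phi,
                  theta_trial, f_trial, filter_entries,
                  eta=1e-4, gamma=1e-5):
    """Accept a trial iterate if EITHER (a) the merit function Phi_mu
    shows Armijo-type sufficient decrease along the descent direction
    (delta_phi < 0 is the predicted decrease), OR (b) the pair
    (constraint violation theta, objective f) is not dominated by any
    stored filter entry, up to small margins. eta and gamma are
    illustrative placeholder constants."""
    merit_ok = phi_trial <= phi_cur + eta * delta_phi
    filter_ok = all(theta_trial <= (1.0 - gamma) * th_j or
                    f_trial <= f_j - gamma * th_j
                    for (th_j, f_j) in filter_entries)
    return merit_ok or filter_ok
```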
Global convergence has been proved for several filter methods and usually depends on an additional algorithm phase: the feasibility restoration. Due to the combination with the merit function, a feasibility restoration phase (which we believe to be a drawback of the filter approach) is not necessary for global convergence. A further advantage is that our filter entries do not depend on parameter choices, e.g. the barrier parameter $\mu$.

Other penalty-interior-point algorithms consider an $\ell_1$-penalty, see e.g. Benson et al. [5], Boman [7], Curtis [15], Fletcher [18], Tits et al. [39], Gould et al. [29] and Yamashita [46]. Many $\ell_1$-penalty-interior-point algorithms reformulate the problem into a smooth one using additional elastic variables. For large-scale nonlinear programming, however, this can be a disadvantage. Closely related are also the stabilized sequential quadratic programming methods, such as the works of Gill and Robinson [24] or Shen et al. [38].

The aim of this paper is to study the convergence properties of the proposed algorithm and its numerical performance. To this end, we implemented the algorithm within the large-scale nonlinear programming solver WORHP. The paper is organized as follows. In Section 2 we describe the algorithm, including the general approach of primal-dual penalty-interior-point algorithms, the step calculation and the line search. The global and local convergence of the presented algorithm are shown in Section 3 and Section 4, respectively. Finally, in Section 5 we perform numerical experiments using the CUTEst test set [28] to show the efficiency of the proposed algorithm and compare it to other solvers, in particular the interior-point method IPOPT [45] and the sequential quadratic programming algorithm of WORHP [9].

Notation

Matrices are written in uppercase and vectors in lowercase. The $i$-th component of a vector $x$ is denoted by $x^{(i)}$. A diagonal matrix with the entries of a vector $x$ on its diagonal has the same name in uppercase, i.e. $X := \operatorname{diag}(x)$. The vector $e$ stands for the vector of all ones with appropriate dimension. The norm $\|\cdot\|$ is the Euclidean norm $\|\cdot\|_2$ unless stated otherwise; e.g. $\|\cdot\|_\infty$ is the maximum norm. The notation $\operatorname{In}(X) = (\lambda_+, \lambda_-, \lambda_0)$ stands for the inertia of a matrix $X$; in particular, $\lambda_+$, $\lambda_-$ and $\lambda_0$ are the numbers of positive, negative and zero eigenvalues, respectively. We denote the gradient of a function $h_1 : \mathbb{R}^n \to \mathbb{R}$ at the point $x_0$ by $\nabla h_1(x_0) \in \mathbb{R}^n$, the Jacobian of a function $h_2 : \mathbb{R}^n \to \mathbb{R}^m$ by $\nabla h_2(x_0) \in \mathbb{R}^{n \times m}$ and the subdifferential of $h_1(x)$ at $x_0$ by $\partial h_1(x_0)$.
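As a small illustration of the inertia notation, the following dense-algebra sketch counts eigenvalue signs. It is for exposition only; large-scale interior-point codes instead read the inertia off a symmetric indefinite ($LDL^\top$) factorization of the KKT matrix:

```python
import numpy as np

def inertia(X, tol=1e-10):
    """Return In(X) = (n_plus, n_minus, n_zero): the numbers of positive,
    negative and zero eigenvalues of a symmetric matrix X. The tolerance
    tol decides when an eigenvalue counts as zero."""
    w = np.linalg.eigvalsh(X)          # eigenvalues of the symmetric matrix
    n_plus = int(np.sum(w > tol))
    n_minus = int(np.sum(w < -tol))
    n_zero = len(w) - n_plus - n_minus
    return (n_plus, n_minus, n_zero)
```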