Semidefinite Programming Relaxations of Non-Convex Problems in Control


SEMIDEFINITE PROGRAMMING RELAXATIONS OF NON-CONVEX PROBLEMS IN CONTROL AND COMBINATORIAL OPTIMIZATION

Stephen Boyd and Lieven Vandenberghe
Information Systems Laboratory, Stanford University

In celebration of Tom Kailath's contributions, on his 60th birthday. Dedicated to Tom Kailath: mentor, model, colleague, teacher, friend.

ABSTRACT

We point out some connections between applications of semidefinite programming in control and in combinatorial optimization. In both fields semidefinite programs arise as convex relaxations of NP-hard quadratic optimization problems. We also show that these relaxations are readily extended to optimization problems over bilinear matrix inequalities.

1 SEMIDEFINITE PROGRAMMING

In a semidefinite program (SDP) we minimize a linear function of a variable x ∈ R^m subject to a matrix inequality:

    minimize    c^T x
    subject to  F(x) ≥ 0,                                        (1.1)

where

    F(x) = F_0 + x_1 F_1 + ... + x_m F_m.

The problem data are the vector c ∈ R^m and m+1 symmetric matrices F_0, ..., F_m ∈ R^{n×n}. The inequality sign in F(x) ≥ 0 means that F(x) is positive semidefinite, i.e., z^T F(x) z ≥ 0 for all z ∈ R^n. We call the inequality F(x) ≥ 0 a linear matrix inequality (LMI).

Semidefinite programs can be regarded as an extension of linear programming where the componentwise inequalities between vectors are replaced by matrix inequalities, or, equivalently, the first orthant is replaced by the cone of positive semidefinite matrices. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming), and finds many applications in engineering and combinatorial optimization; see [Ali95], [BEFB94], [VB96]. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs.
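As a quick numerical illustration of the definitions in Section 1, the following sketch (Python with NumPy; the matrices F_i and the test points are made up for illustration, not taken from the paper) evaluates F(x) and checks the LMI F(x) ≥ 0 via the smallest eigenvalue:

```python
import numpy as np

def lmi_value(x, F):
    """Evaluate F(x) = F_0 + x_1 F_1 + ... + x_m F_m."""
    Fx = F[0].astype(float).copy()
    for xi, Fi in zip(x, F[1:]):
        Fx += xi * Fi
    return Fx

def is_psd(M, tol=1e-9):
    """F(x) >= 0 means every eigenvalue of the symmetric matrix is >= 0."""
    return float(np.linalg.eigvalsh(M).min()) >= -tol

# Illustrative data (m = 2, n = 2; not from the paper):
F = [np.eye(2),               # F_0
     np.diag([1.0, -1.0]),    # F_1
     np.array([[0.0, 1.0],
               [1.0, 0.0]])]  # F_2

print(is_psd(lmi_value([0.0, 0.0], F)))  # F(0) = I is feasible: True
print(is_psd(lmi_value([2.0, 0.0], F)))  # F(x) = diag(3, -1): False
```

The feasible set {x | F(x) ≥ 0} is convex, since each F(x) ≥ 0 is preserved under convex combinations of feasible points.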
As in linear programming, these methods have polynomial worst-case complexity, and perform very well in practice.

2 SEMIDEFINITE PROGRAMMING AND COMBINATORIAL OPTIMIZATION

Semidefinite programs play a very useful role in non-convex or combinatorial optimization. Consider, for example, the quadratic optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., L,                          (1.2)

where f_i(x) = x^T A_i x + 2 b_i^T x + c_i, i = 0, 1, ..., L. The matrices A_i can be indefinite, and therefore problem (1.2) is a very hard, non-convex optimization problem. For example, it includes all optimization problems with polynomial objective function and polynomial constraints (see [NN94, §6.4.4], [Sho87]). For practical purposes, e.g., in branch-and-bound algorithms, it is important to have good and cheaply computable lower bounds on the optimal value of (1.2). Shor and others have proposed to compute such lower bounds by solving the semidefinite program with variables t and τ_i:

    maximize    t
    subject to  [ A_0    b_0     ]       [ A_1    b_1 ]             [ A_L    b_L ]
                [ b_0^T  c_0 - t ] + τ_1 [ b_1^T  c_1 ] + ... + τ_L [ b_L^T  c_L ]  ≥  0,    (1.3)
                τ_i ≥ 0,  i = 1, ..., L.

One can easily verify that this semidefinite program yields lower bounds for (1.2). Suppose x satisfies the constraints in the nonconvex problem (1.2), i.e.,

    f_i(x) = [ x ]^T [ A_i    b_i ] [ x ]
             [ 1 ]   [ b_i^T  c_i ] [ 1 ]  ≤  0

for i = 1, ..., L, and t, τ_1, ..., τ_L satisfy the constraints in the semidefinite program (1.3). Then

    0  ≤  [ x ]^T ( [ A_0    b_0     ]       [ A_1    b_1 ]             [ A_L    b_L ] ) [ x ]
          [ 1 ]   ( [ b_0^T  c_0 - t ] + τ_1 [ b_1^T  c_1 ] + ... + τ_L [ b_L^T  c_L ] ) [ 1 ]
       =  f_0(x) - t + τ_1 f_1(x) + ... + τ_L f_L(x)
       ≤  f_0(x) - t.

Therefore t ≤ f_0(x) for every feasible x in (1.2), as desired. Problem (1.3) can also be derived via Lagrangian duality; for a deeper discussion, see Shor [Sho87], or Poljak, Rendl, and Wolkowicz [PRW94].

Most semidefinite relaxations of NP-hard combinatorial problems seem to be related to the semidefinite program (1.3), or the related one,

    minimize    Tr(X A_0) + 2 b_0^T x + c_0
    subject to  Tr(X A_i) + 2 b_i^T x + c_i ≤ 0,  i = 1, ..., L,     (1.4)
                [ X    x ]
                [ x^T  1 ]  ≥  0,

where the variables are X = X^T ∈ R^{k×k} and x ∈ R^k.
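The chain of inequalities behind the Shor bound rests on the identity that evaluating the weighted sum of block matrices at [x; 1] gives f_0(x) - t + Σ τ_i f_i(x). A small NumPy sketch (random illustrative data, not from the paper) confirms this identity numerically:

```python
import numpy as np

def quad(A, b, c, x):
    """f(x) = x^T A x + 2 b^T x + c."""
    return x @ A @ x + 2 * b @ x + c

def block(A, b, c):
    """The block matrix [A, b; b^T, c]."""
    return np.block([[A, b[:, None]], [b[None, :], np.array([[c]])]])

rng = np.random.default_rng(0)
k, L = 3, 2
As = [(M + M.T) / 2 for M in rng.standard_normal((L + 1, k, k))]  # A_i may be indefinite
bs = list(rng.standard_normal((L + 1, k)))
cs = rng.standard_normal(L + 1)

x = rng.standard_normal(k)   # any point
t = rng.standard_normal()    # any t
tau = rng.random(L)          # multipliers tau_i >= 0

xe = np.append(x, 1.0)       # the stacked vector [x; 1]
M = block(As[0], bs[0], cs[0] - t)
for i in range(L):
    M = M + tau[i] * block(As[i + 1], bs[i + 1], cs[i + 1])

lhs = xe @ M @ xe
rhs = quad(As[0], bs[0], cs[0], x) - t + sum(
    tau[i] * quad(As[i + 1], bs[i + 1], cs[i + 1], x) for i in range(L))
assert abs(lhs - rhs) < 1e-9   # [x;1]^T (sum of blocks) [x;1] = f_0(x) - t + sum_i tau_i f_i(x)
```

When x is feasible and (t, τ) is feasible for (1.3), the left-hand side is nonnegative and each τ_i f_i(x) term is nonpositive, which is exactly the argument in the text.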
It can b e shown that 1.4 is the semide nite programming dual of Shor's relaxation 1.3; the two problems 1.3 and 1.4 yield the same b ound. Note that the constraint X x 0 1.5 T x 1 4 Chapter 1 T is equivalentto X xx . The semide nite program 1.4 can therefore b e directly interpreted as a relaxation of the original problem 1.2, which can b e written as T minimize TrXA +2b x+ c 0 0 0 T sub ject to TrXA +2b x+c 0;i=1;:::;L 1.6 i i i T X = xx : The only di erence b etween 1.6 and 1.4 is the replacement of the non- T T convex constraint X = xx with the convex relaxation X xx . It is also interesting to note that the relaxation 1.4 b ecomes the original problem 1.6 if we add the nonconvex constraint that the matrix on the left hand side of 1.5 is rank one. As an example, consider the 1; 1-quadratic program T T minimize x Ax +2b x 1.7 2 sub ject to x =1;i=1;:::;k; i which is NP-hard. The constraint x 2f1;1gcan b e written as the quadratic i 2 2 equality constraint x = 1, or, equivalently,astwo quadratic inequalities x 1 i i 2 T and x 1. Applying 1.4 we nd that the semide nite program in X = X i and x T minimize TrXA +2b x sub ject to X =1;i=1;:::;k ii 1.8 X x 0 T x 1 yields a lower b ound for 1.7. In a recent pap er on the MAX-CUT problem, which is a sp eci c case of 1.7 where b = 0 and the diagonal of A is zero, Go emans and Williamson have proved that the lower b ound from 1.8 is at most 14 sub optimal see [GW94] and [GW95]. This is much b etter than any previously known b ound. Similar strong results on semide nite programming relaxations of NP-hard problems have b een obtained by Karger, Motwani, and Sudan [KMS94]. The usefulness of semide nite programming in combinatorial optimization was recognized more than twentyyears ago see, e.g., Donath and Ho man [DH73]. 
Many people seem to have developed similar ideas independently. We should however stress the importance of the work by Grötschel, Lovász, and Schrijver [GLS88, Chapter 9], [LS91], who have demonstrated the power of semidefinite relaxations on some very hard combinatorial problems. The recent development of efficient interior-point methods has turned these techniques into powerful practical tools; see Alizadeh [Ali92b, Ali91, Ali92a], Kamath and Karmarkar [KK92, KK93], Helmberg, Rendl, Vanderbei and Wolkowicz [HRVW94]. For a more detailed survey of semidefinite programming in combinatorial optimization, we refer the reader to the recent paper by Alizadeh [Ali95].

3 SEMIDEFINITE PROGRAMMING AND CONTROL THEORY

Semidefinite programming problems arise frequently in control and system theory; Boyd, El Ghaoui, Feron and Balakrishnan catalog many examples in [BEFB94]. We will describe one simple example here. Consider the differential inclusion

    dx/dt = A x(t) + B u(t),    y(t) = C x(t),
    |u_i(t)| ≤ |y_i(t)|,  i = 1, ..., p,                             (1.9)

where x(t) ∈ R^l, u(t) ∈ R^p, and y(t) ∈ R^p. In the terminology of control theory, this is described as a linear system with uncertain, time-varying, unity-bounded, diagonal feedback. We seek an invariant ellipsoid, i.e., an ellipsoid E such that for any x and u that satisfy (1.9), x(T) ∈ E implies x(t) ∈ E for all t ≥ T. The existence of such an ellipsoid implies, for example, that all solutions of the differential inclusion (1.9) are bounded.

The ellipsoid E = { x | x^T P x ≤ 1 }, where P = P^T > 0, is invariant if and only if the function V(t) = x(t)^T P x(t) is nonincreasing for any x and u that satisfy (1.9). In this case we say that V is a quadratic Lyapunov function that proves stability of the differential inclusion (1.9).
We can express the derivative of V as a quadratic form in x(t) and u(t):

    d               [ x(t) ]^T [ A^T P + P A   P B ] [ x(t) ]
    -- V(x(t))  =   [ u(t) ]   [ B^T P        0    ] [ u(t) ] .      (1.10)
    dt

We can express the conditions |u_i(t)| ≤ |y_i(t)| as the quadratic inequalities

    u_i(t)^2 - y_i(t)^2  =  [ x(t) ]^T [ -c_i^T c_i   0    ] [ x(t) ]
                            [ u(t) ]   [ 0            E_ii ] [ u(t) ]  ≤  0,   i = 1, ..., p,   (1.11)

where c_i is the ith row of C, and E_ii is the matrix with all entries zero except the (i, i) entry, which is 1.
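Both identities, (1.10) and (1.11), are straightforward to verify numerically. A NumPy sketch with random illustrative matrices (the dimensions l = 4, p = 2 are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
l, p = 4, 2                      # state and input/output dimensions, as in (1.9)
A = rng.standard_normal((l, l))
B = rng.standard_normal((l, p))
C = rng.standard_normal((p, l))
P = rng.standard_normal((l, l)); P = P @ P.T + np.eye(l)   # P = P^T > 0
x = rng.standard_normal(l)
u = rng.standard_normal(p)
z = np.concatenate([x, u])       # the stacked vector [x; u]

# (1.10): d/dt (x^T P x) = (Ax + Bu)^T P x + x^T P (Ax + Bu)
xdot = A @ x + B @ u
dV = xdot @ P @ x + x @ P @ xdot
M10 = np.block([[A.T @ P + P @ A, P @ B], [B.T @ P, np.zeros((p, p))]])
assert abs(dV - z @ M10 @ z) < 1e-8

# (1.11): u_i^2 - y_i^2 as a quadratic form in [x; u]
y = C @ x
for i in range(p):
    ci = C[i]                                 # ith row of C
    Eii = np.zeros((p, p)); Eii[i, i] = 1.0   # single nonzero (i, i) entry
    M11 = np.block([[-np.outer(ci, ci), np.zeros((l, p))],
                    [np.zeros((p, l)), Eii]])
    assert abs((u[i]**2 - y[i]**2) - z @ M11 @ z) < 1e-8
```

Feasibility of the invariance condition then reduces to an LMI in P via the S-procedure, which is where the semidefinite program appears.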