Applications and Solution Approaches for Mixed-Integer Semidefinite Programming

Tristan Gally, joint work with Marc E. Pfetsch and Stefan Ulbrich
January 11, 2018

Mixed-Integer Semidefinite Programming

A mixed-integer semidefinite program (MISDP) is a problem of the form

$$\sup\; b^T y \quad \text{s.t.}\quad C - \sum_{i=1}^{m} A_i y_i \succeq 0, \qquad y_i \in \mathbb{Z}\;\;\forall\, i \in I,$$

for symmetric matrices $A_i$, $C$. Linear constraints, bounds, and multiple blocks are possible within the SDP constraint. There are efficient solvers for specific applications, but few solvers (and little theory) for the general case.

Contents: Applications; Solution Approaches (Outer Approximation, SDP-based Branch-and-Bound, Warmstarts, Dual Fixing); Solvers for MISDP; Conclusion & Outlook.

Classical Example: Max-Cut

Find a cut $\delta(S)$, with $S \subseteq V$ and $\{i,j\} \in \delta(S)$ iff $i \in S$, $j \in V \setminus S$, that maximizes $\tfrac{1}{2}\sum_{\{i,j\} \in \delta(S)} c_{ij}$ (illustrated on the slides with a small weighted example graph on six nodes).

Using variables $(x_i)_{i \in V} \in \{-1,1\}^n$ with $x_i = 1 \Leftrightarrow i \in S$, this is equivalent to the

Max-Cut MIQP
$$\max\; \sum_{i<j} c_{ij}\,\frac{1 - x_i x_j}{2} \quad \text{s.t.}\quad x_i \in \{-1,1\}\;\;\forall\, i \le n.$$

Rewriting the objective gives
$$\sum_{i<j} c_{ij}\,\frac{1 - x_i x_j}{2} \;=\; \frac{1}{4}\Big(\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_i x_i \;-\; \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_i x_j\Big) \;=\; \frac{1}{4}\, x^T \big(\mathrm{Diag}(C\mathbf{1}) - C\big)\, x.$$

With $X := xx^T$ (and the notation $A \bullet B := \mathrm{Tr}(AB) = \sum_{ij} A_{ij} B_{ij}$), this is equivalent to the

Max-Cut Rk1-MISDP [Poljak, Rendl 1995]
$$\max\; \tfrac{1}{4}\big(\mathrm{Diag}(C\mathbf{1}) - C\big) \bullet X \quad \text{s.t.}\quad \mathrm{diag}(X) = \mathbf{1},\;\; \mathrm{Rank}(X) = 1,\;\; X \succeq 0,\;\; X_{ij} \in \{-1,1\}.$$
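As a quick sanity check of this reformulation (not part of the original slides; the random instance and variable names are illustrative), the following Python snippet builds the objective matrix $\tfrac{1}{4}(\mathrm{Diag}(C\mathbf{1}) - C)$ for a small random weight matrix and verifies that the quadratic form reproduces the cut value of a random $\pm 1$ assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Symmetric edge-weight matrix c_ij with zero diagonal.
C = np.triu(rng.integers(1, 5, size=(n, n)).astype(float), 1)
C = C + C.T

# Objective matrix of the SDP reformulation: (Diag(C*1) - C) / 4.
Q = (np.diag(C.sum(axis=1)) - C) / 4.0

# Random cut encoded as x in {-1, +1}^n, with S = {i : x_i = +1}.
x = rng.choice([-1.0, 1.0], size=n)

cut_value = sum(C[i, j] * (1 - x[i] * x[j]) / 2
                for i in range(n) for j in range(i + 1, n))
print(cut_value, x @ Q @ x)  # the two values coincide
```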
The relaxation is still non-convex because of the rank constraint. However:

Theorem [Laurent, Poljak 1995]. Every integral solution satisfies $\mathrm{Rank}(X) = 1$, so the rank constraint can be dropped without changing the MISDP.

Compressed Sensing

Task: find the sparsest solution of an underdetermined system of linear equations, i.e. a solution of the

$\ell_0$-Minimization
$$\min\; \|x\|_0 \quad \text{s.t.}\quad Ax = b,\;\; x \in \mathbb{R}^n,$$
where $\|x\|_0 := |\mathrm{supp}(x)|$.

Under certain conditions on $A$, this is equivalent to

$\ell_1$-Minimization
$$\min\; \|x\|_1 \quad \text{s.t.}\quad Ax = b,\;\; x \in \mathbb{R}^n.$$

One such condition is the (asymmetric) restricted isometry property (RIP):
$$\alpha_k^2\, \|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; \beta_k^2\, \|x\|_2^2 \qquad \forall\, x : \|x\|_0 \le k.$$

Theorem [Foucart, Lai 2008]. If $Ax = b$ has a solution $x$ with $\|x\|_0 \le k$ and the RIP of order $2k$ holds with
$$\frac{\beta_{2k}^2}{\alpha_{2k}^2} \;<\; 4\sqrt{2} - 3 \;\approx\; 2.6569,$$
then $x$ is the unique solution of both the $\ell_0$- and the $\ell_1$-optimization problem.

The optimal constant $\alpha_k^2$ (and similarly $\beta_k^2$) in the RIP inequality can be computed via the following non-convex rank-constrained MISDP:

RIP-Rk1-MISDP
$$\begin{aligned}
\min\;& \mathrm{Tr}(A^T A X)\\
\text{s.t.}\;& \mathrm{Tr}(X) = 1,\\
& -z_j \le X_{jj} \le z_j \quad \forall\, j \le n,\\
& \textstyle\sum_{j=1}^{n} z_j \le k,\\
& \mathrm{Rank}(X) = 1,\quad X \succeq 0,\quad z \in \{0,1\}^n.
\end{aligned}$$

Theorem [G., Pfetsch 2016]. There always exists an optimal solution of (RIP-MISDP) with $\mathrm{Rank}(X) = 1$, so the rank constraint can again be dropped.
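For illustration only (this sketch is not from the slides): dropping the rank constraint and relaxing $z \in \{0,1\}^n$ to $[0,1]^n$ turns RIP-MISDP into an ordinary SDP that any SDP-capable modeling tool can handle, and its optimal value is a lower bound on $\alpha_k^2$. A possible CVXPY formulation, with an arbitrary random matrix $A$ and assumed solver defaults, could look as follows:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5, 10, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # example sensing matrix

X = cp.Variable((n, n), symmetric=True)
z = cp.Variable(n)  # integrality z_j in {0,1} relaxed to [0,1]

constraints = [
    cp.trace(X) == 1,
    cp.diag(X) <= z,          # -z_j <= X_jj <= z_j
    cp.diag(X) >= -z,
    cp.sum(z) <= k,
    z >= 0, z <= 1,
    X >> 0,                   # rank constraint dropped in the relaxation
]
prob = cp.Problem(cp.Minimize(cp.trace(A.T @ A @ X)), constraints)
prob.solve()                  # uses whichever SDP-capable solver is installed
print("lower bound on alpha_k^2:", prob.value)
```

Solving the exact binary problem requires a dedicated MISDP solver (e.g. SCIP-SDP) or a branch-and-bound scheme around such SDP relaxations.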
Truss Topology Design

Setting: $n$ nodes $V = \{v_i \in \mathbb{R}^d : i = 1,\dots,n\}$, of which $n_f$ are free nodes $V_f \subset V$; $m$ possible bars $E \subseteq \{\{v_i, v_j\} : i \ne j\}$ with $|E| = m$; and a force $f \in \mathbb{R}^{d_f}$ with $d_f = d \cdot n_f$ (illustrated on the slides by a 3x3 ground structure and the resulting optimal structure).

Goal: find cross-sectional areas $x \in \mathbb{R}^m_+$ for the bars that minimize the volume while creating a "stable" truss. Stability is measured by the compliance $\tfrac{1}{2} f^T u$ with node displacements $u$.

TTD-SDP [Ben-Tal, Nemirovski 1997]
$$\min\; \sum_{e \in E} \ell_e\, x_e \quad \text{s.t.}\quad \begin{pmatrix} 2 C_{\max} & f^T \\ f & A(x) \end{pmatrix} \succeq 0, \qquad x_e \ge 0 \;\;\forall\, e \in E,$$
where $E$ is the set of possible bars, $\ell_e$ the length of bar $e$, $x$ the cross-sectional areas, $f$ the external force, $C_{\max}$ an upper bound on the compliance, $A_e$ the bar stiffness matrices, and $A(x) = \sum_{e \in E} A_e\, \ell_e\, x_e$ the stiffness matrix.

In practice, we will not be able to produce or buy bars of every desired size, so the cross-sectional areas are restricted to a discrete set $\mathcal{A}$:

TTD-MISDP [Kočvara 2010, Mars 2013]
$$\begin{aligned}
\min\;& \sum_{e \in E} \sum_{a \in \mathcal{A}} \ell_e\, a\, x_e^a\\
\text{s.t.}\;& \begin{pmatrix} 2 C_{\max} & f^T \\ f & A(x) \end{pmatrix} \succeq 0,\\
& \textstyle\sum_{a \in \mathcal{A}} x_e^a \le 1 \quad \forall\, e \in E,\\
& x_e^a \in \{0,1\} \quad \forall\, e \in E,\ a \in \mathcal{A},
\end{aligned}$$
where $A(x) = \sum_{e \in E} \sum_{a \in \mathcal{A}} A_e\, \ell_e\, a\, x_e^a$.
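The following short derivation is added for context (it is the standard Schur-complement argument, not spelled out on the slides): the semidefinite block in TTD-SDP and TTD-MISDP is exactly a reformulation of the compliance bound, under the usual assumption that $A(x) \succeq 0$.

```latex
\[
\begin{pmatrix} 2C_{\max} & f^T \\ f & A(x) \end{pmatrix} \succeq 0
\;\Longleftrightarrow\;
f \in \operatorname{range}\!\big(A(x)\big) \ \text{ and } \ 2C_{\max} - f^T A(x)^{+} f \ge 0
\;\Longleftrightarrow\;
\tfrac{1}{2} f^T u \le C_{\max},
\]
where the node displacements satisfy $A(x)\,u = f$ and $A(x)^{+}$ denotes the Moore--Penrose
pseudoinverse (generalized Schur complement of the positive semidefinite block $A(x)$).
```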
Recommended publications
  • An Algorithm Based on Semidefinite Programming for Finding Minimax Optimal Designs
    Belmiro P.M. Duarte, Guillaume Sagnol, Weng Kee Wong: An algorithm based on Semidefinite Programming for finding minimax optimal designs. ZIB-Report 18-01 (December 2017), Zuse Institute Berlin. Abstract: An algorithm based on a delayed constraint generation method for solving semi-infinite programs for constructing minimax optimal designs for nonlinear models is proposed. The outer optimization level of the minimax optimization problem is solved using a semidefinite programming based approach that requires the design space be discretized. A nonlinear programming solver is then used to solve the inner program to determine the combination of the parameters that yields the worst-case value of the design criterion. The proposed algorithm is applied to find minimax optimal designs for the logistic model, the flexible 4-parameter Hill homoscedastic model and the general nth-order consecutive reaction model, and shows that it (i) produces designs that compare well with minimax D-optimal designs obtained from the semi-infinite programming method in the literature; (ii) can be applied to semidefinite representable optimality criteria, that include the common A-, E-, G-, I- and D-optimality criteria; (iii) can tackle design problems with arbitrary linear constraints on the weights; and (iv) is fast and relatively easy to use.
  • Multi-Objective Optimization of Unidirectional Non-Isolated DC/DC Converters
    MULTI-OBJECTIVE OPTIMIZATION OF UNIDIRECTIONAL NON-ISOLATED DC/DC CONVERTERS by Andrija Stupar. A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto. © Copyright 2017 by Andrija Stupar. Abstract: Engineers have to fulfill multiple requirements and strive towards often competing goals while designing power electronic systems. This can be an analytically complex and computationally intensive task since the relationship between the parameters of a system's design space is not always obvious. Furthermore, a number of possible solutions to a particular problem may exist. To find an optimal system, many different possible designs must be evaluated. Literature on power electronics optimization focuses on the modeling and design of particular converters, with little thought given to the mathematical formulation of the optimization problem. Therefore, converter optimization has generally been a slow process, with exhaustive search, the execution time of which is exponential in the number of design variables, the prevalent approach. In this thesis, geometric programming (GP), a type of convex optimization, the execution time of which is polynomial in the number of design variables, is proposed and demonstrated as an efficient and comprehensive framework for the multi-objective optimization of non-isolated unidirectional DC/DC converters. A GP model of multilevel flying capacitor step-down converters is developed and experimentally verified on a 15-to-3.3 V, 9.9 W discrete prototype, with sets of loss-volume Pareto optimal designs generated in under one minute.
  • Julia, My New Friend for Computing and Optimization? Pierre Haessig, Lilian Besson
    Julia, my new friend for computing and optimization? Pierre Haessig, Lilian Besson. Master course, France, 2018. HAL Id: cel-01830248, https://hal.archives-ouvertes.fr/cel-01830248, submitted on 4 Jul 2018. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. "Julia, my new friend for computing and optimization?" Intro to the Julia programming language, for MATLAB users. Date: 14th of June 2018. Who: Lilian Besson & Pierre Haessig (SCEE & AUT team @ IETR / CentraleSupélec campus Rennes). Agenda for today [30 min]: 1. What is Julia? [5 min] 2. Comparison with MATLAB [5 min] 3. Two examples of problems solved with Julia [5 min] 4. Longer example on optimization with JuMP [13 min] 5. Links for more information? [2 min]. 1. What is Julia? Open-source and free programming language (MIT license). Developed since 2012 (creators: MIT researchers). Growing popularity worldwide, in research, data science, finance etc. Multi-platform: Windows, Mac OS X, GNU/Linux.
  • Numerical Optimization
    Numerical Optimization. Alberto Bemporad. http://cse.lab.imtlucca.it/~bemporad/teaching/numopt Academic year 2020-2021. Course objectives: Solve complex decision problems by using numerical optimization. Application domains: • Finance, management science, economics (portfolio optimization, business analytics, investment plans, resource allocation, logistics, ...) • Engineering (engineering design, process optimization, embedded control, ...) • Artificial intelligence (machine learning, data science, autonomous driving, ...) • Myriads of other applications (transportation, smart grids, water networks, sports scheduling, health-care, oil & gas, space, ...). What this course is about: • How to formulate a decision problem as a numerical optimization problem? (modeling) • Which numerical algorithm is most appropriate to solve the problem? (algorithms) • What's the theory behind the algorithm? (theory). Course contents: • Optimization modeling – Linear models – Convex models • Optimization theory – Optimality conditions, sensitivity analysis – Duality • Optimization algorithms – Basics of numerical linear algebra – Convex programming – Nonlinear programming. Other references: • Stephen Boyd's "Convex Optimization" courses at Stanford: http://ee364a.stanford.edu http://ee364b.stanford.edu • Lieven Vandenberghe's courses at UCLA: http://www.seas.ucla.edu/~vandenbe/ • For more tutorials/books see http://plato.asu.edu/sub/tutorials.html. Optimization modeling: What is optimization? • Optimization = assign values to a set of decision variables so to optimize a certain objective function • Example: Which is the best velocity to minimize fuel consumption? (figure: fuel [ℓ/km] vs. velocity [km/h])
  • A Framework for Solving Mixed-Integer Semidefinite Programs
    A FRAMEWORK FOR SOLVING MIXED-INTEGER SEMIDEFINITE PROGRAMS. TRISTAN GALLY, MARC E. PFETSCH, AND STEFAN ULBRICH. ABSTRACT. Mixed-integer semidefinite programs arise in many applications and several problem-specific solution approaches have been studied recently. In this paper, we investigate a generic branch-and-bound framework for solving such problems. We first show that strict duality of the semidefinite relaxations is inherited to the subproblems. Then solver components like dual fixing, branching rules, and primal heuristics are presented. We show the applicability of an implementation of the proposed methods on three kinds of problems. The results show the positive computational impact of the different solver components, depending on the semidefinite programming solver used. This demonstrates that practically relevant mixed-integer semidefinite programs can successfully be solved using a general purpose solver. 1. INTRODUCTION. The solution of mixed-integer semidefinite programs (MISDPs), i.e., semidefinite programs (SDPs) with integrality constraints on some of the variables, received increasing attention in recent years. There are two main application areas of such problems: In the first area, they arise because they capture an essential structure of the problem. For instance, SDPs are important for expressing ellipsoidal uncertainty sets in robust optimization, see, e.g., Ben-Tal et al. [10]. In particular, SDPs appear in the context of optimizing robust truss structures, see, e.g., Ben-Tal and Nemirovski [11] as well as Bendsøe and Sigmund [12]. Allowing to change the topological structure then yields MISDPs; Section 9.1 will provide more details. Another application arises in the design of downlink beamforming, see, e.g., [56].
  • Abstracts Exact Separation of K-Projection Polytope Constraints Miguel F
    13th EUROPT Workshop on Advances in Continuous Optimization, 8-10 July, [email protected], http://www.maths.ed.ac.uk/hall/EUROPT15. Abstracts. Exact Separation of k-Projection Polytope Constraints. Miguel F. Anjos. Cutting planes are often used as an efficient means to tighten continuous relaxations of mixed-integer optimization problems and are a vital component of branch-and-cut algorithms. A critical step of any cutting plane algorithm is to find valid inequalities, or cuts, that improve the current relaxation of the integer-constrained problem. The maximally violated valid inequality problem aims to find the most violated inequality from a given family of cuts. $k$-projection polytope constraints are a family of cuts that are based on an inner description of the cut polytope of size $k$ and are applied to $k\times k$ principal minors of a semidefinite optimization relaxation. We propose a bilevel second order cone optimization approach to solve the maximally violated $k$-projection polytope constraint problem. We reformulate the bilevel problem as a single-level mixed binary second order cone optimization problem that can be solved using off-the-shelf conic optimization software. Additionally we present two methods for improving the computational performance of our approach, namely lexicographical ordering and a reformulation with fewer binary variables. All of the proposed formulations are exact. Preliminary results show the computational time and number of branch-and-bound nodes required to solve small instances in the context of the max-cut problem. Global convergence of the Douglas–Rachford method for some nonconvex feasibility problems. Francisco J.
  • TOMLAB – Unique Features for Optimization in MATLAB
    TOMLAB – Unique Features for Optimization in MATLAB. Bad Honnef, Germany, October 15, 2004. Kenneth Holmström, Tomlab Optimization AB, Västerås, Sweden, [email protected]. Professor in Optimization, Department of Mathematics and Physics, Mälardalen University, Sweden. http://tomlab.biz. Outline of the talk: • The TOMLAB Optimization Environment – Background and history – Technology available • Optimization in TOMLAB • Tests on customer supplied large-scale optimization examples • Customer cases, embedded solutions, consultant work • Business perspective • Box-bounded global non-convex optimization • Summary. Background: • MATLAB – a high-level language for mathematical calculations, distributed by MathWorks Inc. • Matlab can be extended by Toolboxes that add to the features in the software, e.g.: finance, statistics, control, and optimization • Why develop the TOMLAB Optimization Environment? – A uniform approach to optimization didn't exist in MATLAB – Good optimization solvers were missing – Large-scale optimization was non-existent in MATLAB – Other toolboxes needed robust and fast optimization – Technical advantages from the MATLAB language • Fast algorithm development and modeling of applied optimization problems • Many built-in functions (ODE, linear algebra, …) • Fast GUI development • Interfaceable with C, Fortran, Java code. History of Tomlab: • President and founder: Professor Kenneth Holmström • The company founded 1986, development started 1989 • Two toolboxes NLPLIB and OPERA by 1995 • Integrated format for optimization 1996 • TOMLAB introduced, ISMP97 in Lausanne 1997 • TOMLAB v1.0 distributed for free until summer 1999 • TOMLAB v2.0 first commercial version fall 1999 • TOMLAB starting sales from web site March 2000 • TOMLAB v3.0 expanded with external solvers /SOL spring 2001 • Dash Optimization Ltd's XpressMP added in TOMLAB fall 2001 • Tomlab Optimization Inc.
  • A Survey of Numerical Methods for Nonlinear Semidefinite Programming
    Journal of the Operations Research Society of Japan, © The Operations Research Society of Japan, Vol. 58, No. 1, January 2015, pp. 24–60. A SURVEY OF NUMERICAL METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMMING. Hiroshi Yamashita (NTT DATA Mathematical Systems Inc.), Hiroshi Yabe (Tokyo University of Science). (Received September 16, 2014; Revised December 22, 2014.) Abstract: Nonlinear semidefinite programming (SDP) problems have received a lot of attention because of a large variety of applications. In this paper, we survey numerical methods for solving nonlinear SDP problems. Three kinds of typical numerical methods are described: augmented Lagrangian methods, sequential SDP methods and primal-dual interior point methods. We describe their typical algorithmic forms and discuss their global and local convergence properties, which include rate of convergence. Keywords: Optimization, nonlinear semidefinite programming, augmented Lagrangian method, sequential SDP method, primal-dual interior point method. 1. Introduction. This paper is concerned with the nonlinear SDP problem: minimize $f(x)$, $x \in \mathbb{R}^n$, subject to $g(x) = 0$, $X(x) \succeq 0$ (1.1), where the functions $f : \mathbb{R}^n \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}^m$ and $X : \mathbb{R}^n \to S^p$ are sufficiently smooth, and $S^p$ denotes the set of $p$th-order real symmetric matrices. We also define $S^p_+$ to denote the set of $p$th-order symmetric positive semidefinite matrices. By $X(x) \succeq 0$ and $X(x) \succ 0$, we mean that the matrix $X(x)$ is positive semidefinite and positive definite, respectively. When $f$ is a convex function, $X$ is a concave function and $g$ is affine, this problem is a convex optimization problem. Here the concavity of $X(x)$ means that $X(\lambda u + (1-\lambda)v) - \lambda X(u) - (1-\lambda)X(v) \succeq 0$ holds for any $u, v \in \mathbb{R}^n$ and any $\lambda$ satisfying $0 \le \lambda \le 1$.
  • COSMO: a Conic Operator Splitting Method for Convex Conic Problems
    COSMO: A conic operator splitting method for convex conic problems. Michael Garstka, Mark Cannon, Paul Goulart (Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK). September 10, 2020. arXiv:1901.10887v2 [math.OC]. Abstract: This paper describes the Conic Operator Splitting Method (COSMO) solver, an operator splitting algorithm for convex optimisation problems with quadratic objective function and conic constraints. At each step the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. A number of benchmarks against other state-of-the-art solvers for a variety of problems show the effectiveness of our approach. Our Julia implementation is open-source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem. 1 Introduction. We consider convex optimisation problems in the form: minimize $f(x)$ subject to $g_i(x) \le 0$, $i = 1,\dots,l$, and $h_i(x) = 0$, $i = 1,\dots,k$ (1), where we assume that both the objective function $f : \mathbb{R}^n \to \mathbb{R}$ and the inequality constraint functions $g_i : \mathbb{R}^n \to \mathbb{R}$ are convex, and that the equality constraints $h_i(x) := a_i^T x - b_i$ are affine. We will denote an optimal solution to this problem (if it exists) as $x^*$. Convex optimisation problems feature heavily in a wide range of research areas and industries, including problems
  • Introduction to Mosek
    Introduction to Mosek. Modern Optimization in Energy, 28 June 2018. Michał Adamaszek, www.mosek.com. MOSEK package overview: • Started in 1999 by Erling Andersen • Convex conic optimization package + MIP • LP, QP, SOCP, SDP, other nonlinear cones • Low-level optimization API: C, Python, Java, .NET, Matlab, R, Julia • Object-oriented API Fusion: C++, Python, Java, .NET • 3rd party: GAMS, AMPL, CVXOPT, CVXPY, YALMIP, PICOS, GPkit • Conda package, .NET Core package • Upcoming v9. Example: 2D Total Variation. Someone sends you the left signal but you receive a noisy f (right): how to denoise/smoothen out/approximate u? minimize $\sum_{ij}(u_{i,j} - u_{i+1,j})^2 + \sum_{ij}(u_{i,j} - u_{i,j+1})^2$ subject to $\sum_{ij}(u_{i,j} - f_{i,j})^2 \le \sigma$. Conic problems. A conic problem in canonical form: min $c^T x$ s.t. $Ax + b \in K$, where $K$ is a product of cones: • linear: $K = \mathbb{R}_{\ge 0}$ • quadratic: $K = \{x \in \mathbb{R}^n : x_1 \ge \sqrt{x_2^2 + \cdots + x_n^2}\}$ • semidefinite: $K = \{X \in \mathbb{R}^{n \times n} : X = FF^T\}$ • exponential cone: $K = \{x \in \mathbb{R}^3 : x_1 \ge x_2 \exp(x_3/x_2),\ x_2 > 0\}$ • power cone: $K = \{x \in \mathbb{R}^3 : x_1^{p-1} x_2 \ge |x_3|^p,\ x_1, x_2 \ge 0\}$, $p > 1$ (examples: $x_1 \ge \sqrt{x_2^2 + x_3^2}$, $2x_1 x_2 \ge x_3^2$, $x_1 \ge x_2 \exp(x_3/x_2)$). Conic representability: lots of functions and constraints are representable using these cones, e.g. $|x|$, $\|x\|_1$, $\|x\|_2$, $\|x\|_\infty$, $\|Ax + b\|_2 \le c^T x + d$, $xy \ge z^2$, $x \ge 1/y$, $x \ge y^p$, $t \ge \sum_i |x_i|^p = \|x\|_p^p$, $t \le \sqrt{xy}$, $t \le (x_1 \cdots x_n)^{1/n}$, geometric programming (GP), $t \le \log x$, $t \ge e^x$, $t \le -x\log x$, $t \ge \log \sum_i e^{x_i}$, $t \ge \log(1 + 1/x)$, $\det(X)^{1/n}$, $t \le \lambda_{\min}(X)$, $t \ge \lambda_{\max}(X)$, convex $(1/2)x^T Q x + c^T x + q$. Challenge: find a natural, practical, important, convex optimization problem which cannot be expressed in conic form.
  • Introduction to Semidefinite Programming (SDP)
    Introduction to Semidefinite Programming (SDP). Robert M. Freund. March 2004. © 2004 Massachusetts Institute of Technology. 1 Introduction. Semidefinite programming (SDP) is the most exciting development in mathematical programming in the 1990's. SDP has applications in such diverse fields as traditional convex constrained optimization, control theory, and combinatorial optimization. Because SDP is solvable via interior point methods, most of these applications can usually be solved very efficiently in practice as well as in theory. 2 Review of Linear Programming. Consider the linear programming problem in standard form: LP: minimize $c \cdot x$ s.t. $a_i \cdot x = b_i$, $i = 1,\dots,m$, $x \in \mathbb{R}^n_+$. Here $x$ is a vector of $n$ variables, and we write "$c \cdot x$" for the inner product "$\sum_{j=1}^{n} c_j x_j$", etc. Also, $\mathbb{R}^n_+ := \{x \in \mathbb{R}^n \mid x \ge 0\}$, and we call $\mathbb{R}^n_+$ the nonnegative orthant. In fact, $\mathbb{R}^n_+$ is a closed convex cone, where $K$ is called a closed convex cone if $K$ satisfies the following two conditions: • If $x, w \in K$, then $\alpha x + \beta w \in K$ for all nonnegative scalars $\alpha$ and $\beta$. • $K$ is a closed set. In words, LP is the following problem: "Minimize the linear function $c \cdot x$, subject to the condition that $x$ must solve $m$ given equations $a_i \cdot x = b_i$, $i = 1,\dots,m$, and that $x$ must lie in the closed convex cone $K = \mathbb{R}^n_+$." We will write the standard linear programming dual problem as: LD: maximize $\sum_{i=1}^{m} y_i b_i$ s.t. $\sum_{i=1}^{m} y_i a_i + s = c$, $s \in \mathbb{R}^n_+$. Given a feasible solution $x$ of LP and a feasible solution $(y, s)$ of LD, the duality gap is simply $c \cdot x - \sum_{i=1}^{m} y_i b_i = (c - \sum_{i=1}^{m} y_i a_i) \cdot x = s \cdot x \ge 0$, because $x \ge 0$ and $s \ge 0$.
  • First-Order Methods for Semidefinite Programming
    FIRST-ORDER METHODS FOR SEMIDEFINITE PROGRAMMING. Zaiwen Wen. Advisor: Professor Donald Goldfarb. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2009. © 2009 Zaiwen Wen, All Rights Reserved. ABSTRACT: Semidefinite programming (SDP) problems are concerned with minimizing a linear function of a symmetric positive definite matrix subject to linear equality constraints. These convex problems are solvable in polynomial time by interior point methods. However, if the number of constraints $m$ in an SDP is of order $O(n^2)$ when the unknown positive semidefinite matrix is $n \times n$, interior point methods become impractical both in terms of the time ($O(n^6)$) and the amount of memory ($O(m^2)$) required at each iteration to form the $m \times m$ positive definite Schur complement matrix $M$ and compute the search direction by finding the Cholesky factorization of $M$. This significantly limits the application of interior-point methods. In comparison, the computational cost of each iteration of first-order optimization approaches is much cheaper, particularly if any sparsity in the SDP constraints or other special structure is exploited. This dissertation is devoted to the development, analysis and evaluation of two first-order approaches that are able to solve large SDP problems which have been challenging for interior point methods. In chapter 2, we present a row-by-row (RBR) method based on solving a sequence of problems obtained by restricting the $n$-dimensional positive semidefinite constraint on the matrix $X$. By fixing any $(n-1)$-dimensional principal submatrix of $X$ and using its (generalized) Schur complement, the positive semidefinite constraint is reduced to a simple second-order cone constraint.