Conic Linear Programming

Yinyu Ye

December 2004, revised January 2015

Preface

This monograph is developed for MS&E 314, "Conic Linear Programming", which I am teaching at Stanford. Information, lecture slides, supporting materials, and computer programs related to this book may be found at the following address on the World-Wide Web:

http://www.stanford.edu/class/msande314

Please report any questions, comments, and errors to the address:

[email protected]

A little story in the development of semidefinite programming (SDP), a major subclass of conic linear programming. One day in 1990, I visited the Computer Science Department of the University of Minnesota and met a young graduate student, Farid Alizadeh. He, working then on combinatorial optimization, introduced me to "semidefinite optimization", or linear programming over the positive semidefinite matrix cone. We had a very extensive discussion that afternoon and concluded that interior-point linear programming algorithms could be applicable to solving SDPs. I suggested that Farid look at the linear programming (LP) interior-point algorithms and develop an SDP (primal) potential reduction algorithm. He worked hard for several months, and one afternoon showed up in my office in Iowa City, about 300 miles from Minneapolis. He had everything worked out, including the potential function, the algorithm, the complexity bound, and even a "dictionary" list between LP and SDP. But he was stuck on one problem: how to keep the symmetry of the scaled directional matrix. We went to a bar nearby on Clinton Street in Iowa City (I paid for him, since I was a third-year professor then and eager to demonstrate that I could take care of my students). After chatting for a while, I suggested that he use the scaling $X^{-1/2}\Delta X^{-1/2}$ to compute the symmetric directional matrix $\Delta$, instead of the $X^{-1}\Delta$ he had been using earlier, where $X$ is the current symmetric positive definite matrix. This way, $X + \alpha\Delta$ would remain symmetric for a step-size scalar $\alpha$. He returned to Minneapolis and moved to Berkeley shortly after, and a few weeks later sent me an e-mail message telling me that everything had worked out beautifully.
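To see why this scaling resolves the symmetry issue, note that $X^{-1/2}$ is itself symmetric whenever $X$ is symmetric positive definite; hence, for any symmetric $\Delta$,
\[
\left(X^{-1/2}\Delta X^{-1/2}\right)^{\mathsf T} \;=\; X^{-1/2}\,\Delta^{\mathsf T}\,X^{-1/2} \;=\; X^{-1/2}\Delta X^{-1/2},
\]
so the scaled matrix stays in the space of symmetric matrices, whereas $(X^{-1}\Delta)^{\mathsf T} = \Delta X^{-1}$ need not equal $X^{-1}\Delta$.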
At the same time, Nesterov and Nemirovskii developed a more general and powerful theory extending interior-point algorithms to solving convex programs, of which SDP is a special case. Boyd and his group presented a wide range of SDP applications and formulations, many of which were incredibly novel and elegant. Then came the primal-dual algorithms of many authors, the SDP approximation algorithm for Max-Cut, and more; SDP eventually established its full popularity.

To Fei, Tim, and Kaylee

Contents

Preface
List of Figures
1 Introduction and Preliminaries
  1.1 Introduction
  1.2 Mathematical Preliminaries
    1.2.1 Basic notations
    1.2.2 Convex sets and cones
    1.2.3 Real functions
    1.2.4 Inequalities
  1.3 Some Basic Decision and Optimization Problems
    1.3.1 System of linear equations
    1.3.2 Linear least-squares problem
    1.3.3 System of linear inequalities
    1.3.4 Linear programming (LP)
    1.3.5 Quadratic programming (QP)
  1.4 Algorithms and Computations
    1.4.1 Complexity of problems
    1.4.2 Convergence rate
  1.5 Basic Computational Procedures
    1.5.1 Gaussian elimination method
    1.5.2 Choleski decomposition method
    1.5.3 The Newton method
    1.5.4 Solving ball-constrained linear problem
    1.5.5 Solving ball-constrained quadratic problem
  1.6 Notes
  1.7 Exercises
2 Conic Linear Programming
  2.1 Conic Linear Programming and its Dual
    2.1.1 Dual of conic linear programming
  2.2 Farkas' Lemma and Duality Theorem of Conic Linear Programming
    2.2.1 Alternative theorem for conic systems
    2.2.2 Duality theorem for conic linear programming
    2.2.3 Optimality conditions of conic linear programming
  2.3 Exact Low-Rank SDP Solutions
    2.3.1 Exact low-rank theorem
  2.4 Approximate Low-Rank SDP Solutions
    2.4.1 Approximate low-rank theorem
    2.4.2 A constructive proof
  2.5 Uniqueness of CLP Optimal Solution
  2.6 Notes
  2.7 Exercises
Bibliography
Index

List of Figures

1.1 A hyperplane and half-spaces.
1.2 Polyhedral and nonpolyhedral cones.
1.3 Illustration of the separating hyperplane theorem; an exterior point $b$ is separated by a hyperplane from a convex set $C$.

Chapter 1

Introduction and Preliminaries

1.1 Introduction

Semidefinite Programming, hereafter SDP, is a natural extension of Linear Programming (LP), which is a central decision model in Management Science and Operations Research. LP plays an extremely important role in the theory and application of Optimization. In one sense it is a continuous optimization problem of minimizing a linear objective function over a convex polyhedron; but it is also a combinatorial problem that involves selecting an extreme point among a finite set of possible vertices. Businesses, large and small, use linear programming models to optimize communication systems, to schedule transportation networks, to control inventories, to adjust investments, and to maximize productivity.

In LP, the variables form a vector which is required to be component-wise nonnegative ($\geq 0$), whereas in SDP they are the components of a symmetric matrix which is constrained to be positive semidefinite ($\succeq 0$). Both have a linear objective function and linear equality constraints as well.

Example 1.1 Consider the following two optimization problems with three variables:

• an LP problem in standard form:
\[
\begin{array}{ll}
\mbox{minimize}   & 2x_1 + x_2 + x_3 \\
\mbox{subject to} & x_1 + x_2 + x_3 = 1, \\
                  & (x_1, x_2, x_3) \geq 0;
\end{array}
\]

• an SDP problem where the dimension of the matrix is two:
\[
\begin{array}{ll}
\mbox{minimize}   & 2x_1 + x_2 + x_3 \\
\mbox{subject to} & x_1 + x_2 + x_3 = 1, \\
                  & \begin{pmatrix} x_1 & x_2 \\ x_2 & x_3 \end{pmatrix} \succeq 0.
\end{array}
\]

One can see that, although the objective functions and equality constraints are identical, the last constraint of each problem represents a different restriction, so that they are really different optimization problems and models. For example, the simplex method for LP is hardly applicable to SDP.
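As a concrete illustration, the two problems of Example 1.1 can be stated almost verbatim in a few lines of Python. This is a minimal sketch assuming the CVXPY modeling package and its default solver, which are not part of this text:

```python
import cvxpy as cp

# LP of Example 1.1: minimize 2*x1 + x2 + x3
# subject to x1 + x2 + x3 = 1 and (x1, x2, x3) >= 0.
x = cp.Variable(3)
lp = cp.Problem(cp.Minimize(2 * x[0] + x[1] + x[2]),
                [cp.sum(x) == 1, x >= 0])
lp.solve()

# SDP of Example 1.1: the same objective and equality constraint,
# but now x1, x2, x3 fill a 2x2 symmetric matrix required to be
# positive semidefinite (x1 = X[0,0], x2 = X[0,1], x3 = X[1,1]).
X = cp.Variable((2, 2), symmetric=True)
sdp = cp.Problem(cp.Minimize(2 * X[0, 0] + X[0, 1] + X[1, 1]),
                 [X[0, 0] + X[0, 1] + X[1, 1] == 1, X >> 0])
sdp.solve()

print(lp.value, x.value)   # optimal value 1, attained at any (0, t, 1-t)
print(sdp.value, X.value)  # optimal value 1, but the PSD cone forces x2 = 0
```

Here both problems happen to attain the optimal value 1, yet their optimal solution sets differ: the LP is minimized by any $(0, t, 1-t)$ with $0 \leq t \leq 1$, while positive semidefiniteness forces $x_2 = 0$ once $x_1 = 0$, leaving only the solution $(0, 0, 1)$.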
However, one thing they have in common is that the interior-point algorithms developed over the past three decades for LP apply naturally to solving SDP. Interior-point algorithms are continuous iterative algorithms. Computational experience with sophisticated procedures suggests that the number of iterations necessary grows much more slowly than the dimension does. Furthermore, they have an established worst-case polynomial iteration bound, providing the potential for dramatic improvement in computational effectiveness.

The goal of the monograph is to provide a textbook for teaching Semidefinite Programming, a modern Linear Programming decision model, and its applications in other scientific and engineering fields. One theme of the monograph is the "mapping" between SDP and LP, so that the reader, with knowledge of LP, can understand SDP with little effort.

The monograph is organized as follows. In Chapter 1, we discuss some necessary mathematical preliminaries. We also present several decision and optimization problems and several basic numerical procedures used throughout the text.

Chapter 2 is devoted to studying the theories and geometries of linear and matrix inequalities, convexity, and semidefinite programming. Almost all interior-point methods exploit rich geometric properties of linear and matrix inequalities, such as "center," "volume," and "potential." These geometries are also helpful for teaching, learning, and research.

Chapter 3 focuses on interior-point algorithms. Here, we select two types of algorithms: the path-following algorithm and the potential reduction algorithm. Each algorithm has three forms: the primal, the dual, and the primal-dual form. We analyze the worst-case complexity bound for each of them, using the real-number computation model in our analysis because of the continuous nature of interior-point algorithms. We also compare the complexity theory with the convergence rate used in numerical analysis.

Not only has the convergence speed of SDP algorithms been significantly improved during the last decade, but the problem domains to which SDP is applicable have also dramatically widened. Chapters 4, 5, and 6 describe some SDP applications and newly established results in Engineering, Combinatorial Optimization, Robust Optimization, Euclidean Geometry Computation, etc.

Finally, we discuss major computational issues in Chapter 7. We discuss several effective implementation techniques frequently used in interior-point SDP software, such as the sparse linear system, the predictor and corrector step, and the homogeneous and self-dual formulation. We also present major difficulties and challenges faced by SDP.

1.2 Mathematical Preliminaries

This section summarizes mathematical background material for linear algebra, linear programming, and nonlinear optimization.

1.2.1 Basic notations

The notation described below will be followed in general. There may be some deviation where appropriate. We write vectors in bold lower case throughout this monograph. Upper-case letters will be used to represent matrices. Greek letters will typically be used to represent scalars.

By $R$ we denote the set of real numbers. $R_+$ denotes the set of nonnegative real numbers, and $R_{++}$ denotes the set of positive numbers. For a natural number $n$, the symbol $R^n$ ($R^n_+$, $R^n_{++}$) denotes the set of vectors with $n$ components in $R$ ($R_+$, $R_{++}$).

A vector is always considered a column vector, unless otherwise stated. For convenience, we sometimes write a column vector $\mathbf{x}$ as
\[
\mathbf{x} = (x_1;\, x_2;\, \ldots;\, x_n)
\]
and a row vector as
\[
\mathbf{x} = (x_1,\, x_2,\, \ldots,\, x_n).
\]

A set of vectors $\mathbf{a}_1, \ldots, \mathbf{a}_m$ is said to be linearly dependent if there are scalars $\lambda_1, \ldots, \lambda_m$, not all zero, such that the linear combination
\[
\sum_{i=1}^{m} \lambda_i \mathbf{a}_i = 0.
\]

The vector inequality $\mathbf{x} \geq \mathbf{y}$ means $x_j \geq y_j$ for $j = 1, 2, \ldots, n$.
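Numerically, linear dependence is conveniently detected through the rank of the matrix whose columns are the given vectors. The sketch below assumes NumPy, with hypothetical example vectors chosen only for illustration:

```python
import numpy as np

# Stack a_1, ..., a_m as the columns of A; the set is linearly
# dependent exactly when some nonzero lambda solves A @ lam = 0,
# i.e. when rank(A) < m.
a1 = np.array([1.0, 2.0, 0.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = np.array([1.0, 4.0, 2.0])   # a3 = a1 + 2*a2, so dependence holds
A = np.column_stack([a1, a2, a3])

m = A.shape[1]
print(np.linalg.matrix_rank(A) < m)  # True: the vectors are dependent
```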