How to Advance in Structural Convex Optimization


Yurii Nesterov
October 2008

Abstract  In this paper we analyze the common features of the recent advances in Structural Convex Optimization: polynomial-time interior-point methods, the smoothing technique, minimization in relative scale, and minimization of composite functions.

Keywords  convex optimization · nonsmooth optimization · complexity theory · black-box model · optimal methods · structural optimization · smoothing technique

Mathematics Subject Classification (2000)  90C06 · 90C22 · 90C25 · 90C60

This paper was written during the visit of the author at IFOR (ETH, Zurich). The author expresses his gratitude to the Scientific Director of this center, Hans-Jacob Lüthi, for his support and excellent working conditions.

Yurii Nesterov, Center for Operations Research and Econometrics (CORE), Université catholique de Louvain (UCL), 34 voie du Roman Pays, 1348 Louvain-la-Neuve, Belgium. Tel.: +32-10-474348, Fax: +32-10-474301, e-mail: [email protected]

Convex Optimization is one of the rare fields of Numerical Analysis which benefits from the existence of a well-developed complexity theory. In our domain, this theory was created in the middle of the seventies in a series of papers by A. Nemirovsky and D. Yudin (see [8] for a full exposition). It consists of three parts:
– Classification and description of problem instances.
– Lower complexity bounds.
– Optimal methods.

In [8], the complexity of a convex optimization problem was linked with its level of smoothness, introduced by Hölder conditions on the first derivatives of the functional components. It was assumed that the only information the optimization methods can learn about a particular problem instance is the values and derivatives of these components at some test points. This data can be reported by a special unit called an oracle, and it is local, which means that it does not change if the function is modified far enough from the test point. This model of interaction between the optimization scheme and the problem data is called the local Black Box. At the time of its development, this concept fitted very well the existing computational practice, where the interface between the general optimization packages and the problem data was established by Fortran subroutines created independently by the users.

The Black-Box framework allows one to speak about lower performance bounds for different problem classes in terms of informational complexity. That is the lower estimate for the number of oracle calls which is necessary for any optimization method in order to guarantee delivering an ε-solution to any problem from the problem class. In this performance measure, we do not include at all the complexity of the auxiliary computations of the scheme.

Let us present these bounds for the most important classes of optimization problems, posed in the form

    min_{x ∈ Q} f(x),    (1)

where Q ⊆ R^n is a bounded closed convex set (‖x‖ ≤ R, x ∈ Q) and the function f is convex on Q.
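To make the black-box model concrete, here is a minimal Python sketch (an illustration, not from the paper): the optimization scheme may query values and gradients of f at test points, and informational complexity simply counts those queries. The class name, the call counter, and the quadratic test function are all assumptions introduced for illustration.

```python
import numpy as np

class FirstOrderOracle:
    """Local black box: answers value/gradient queries at test points.

    Illustrative sketch (not the paper's notation): the method learns
    nothing about f beyond these answers, and the call counter is
    exactly the informational complexity being measured.
    """

    def __init__(self, f, grad_f):
        self._f, self._grad_f = f, grad_f
        self.calls = 0  # number of oracle calls issued so far

    def __call__(self, x):
        self.calls += 1
        return self._f(x), self._grad_f(x)

# Example: f(x) = ||x||^2 / 2, queried by a scheme that sees only the oracle.
oracle = FirstOrderOracle(lambda x: 0.5 * (x @ x), lambda x: x)
x = np.ones(5)
for _ in range(10):   # any black-box scheme reduces to such a query loop
    _, g = oracle(x)
    x = x - 0.1 * g   # plain gradient step; projection onto Q omitted here
print(oracle.calls)   # informational complexity of this run: 10
```

The lower complexity bounds discussed next are statements about the minimal value such a call counter can take for any scheme that is guaranteed to reach an ε-solution.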
In the table below, the first column indicates the problem class, the second one gives an upper bound on the number of oracle calls allowed in the optimization scheme¹, and the last column gives the lower bound on the analytical complexity of the problem class, which depends on the absolute accuracy ε and the class parameters.

[Table (2), listing the problem classes with these upper and lower bounds, did not survive the extraction.]

It is important that these bounds are exact. This means that there exist methods whose efficiency estimates on the corresponding problem classes are proportional to the lower bounds. The corresponding optimal methods were developed in [8,9,19,22,2]. For further reference, we present a simplified version² of the optimal method [9], as applied to problem (1) with f ∈ C^{1,1}:

(3) Choose a starting point y_0 ∈ Q and set x_{-1} = y_0. For k ≥ 0 iterate: [the iteration formulas did not survive the extraction; a hedged reconstruction is sketched at the end of this text].

As we see, the complexity of each iteration of this scheme is comparable with that of the simplest gradient method. However, the rate of convergence of method (3) is much faster.

After a certain period of time, it became clear that, despite its mathematical excellence, the Complexity Theory of Convex Optimization has a hidden drawback. Indeed, in order to apply convex optimization methods, we need to be sure that the functional components of our problem are convex. However, we can check convexity only by analyzing the structure of these functions³: if our function is obtained from the basic convex functions by convex operations (summation, maximum, etc.), we conclude that it is convex. If not, then we have to apply general optimization methods, which usually have no theoretical guarantees of global performance. Thus, the functional components of the problem are not in the black box at the moment we check their convexity and choose a minimization scheme. However, we put them into the black box for the numerical methods. That is the main conceptual contradiction of standard Convex Optimization.

Intuitively, we always hope that the structure of the problem can be used to improve the performance of minimization schemes. Unfortunately, structure is a very fuzzy notion, which is quite difficult to formalize. One possible way to describe the structure is to fix the analytical type of the functional components. For example, we can consider problems with linear constraints only. This can help, but the approach is very fragile: if we add just a single constraint of another type, then we get a new problem class, and all the theory must be redone from scratch.

On the other hand, it is clear that, having the structure at hand, we can play a lot with the analytical form of the problem. We can rewrite the problem in many equivalent settings, using non-trivial transformations of variables or constraints, introducing additional variables, etc. However, this would serve almost no purpose without fixing a clear final goal. So, let us try to understand what it could be.

As usual, it is better to look at classical examples. In many situations, the sequential reformulations of the initial problem can be seen as part of the numerical scheme. We start from a complicated problem P and, step by step, change its structure up to the moment we get a trivial problem (or a problem which we know how to solve):

    P → … → (f*, x*).

A good example of such a strategy is the standard approach for solving a system of linear equations,

    Ax = b.

We can proceed as follows:
1. Check if A is symmetric and positive definite. Sometimes this is clear from the origin of the matrix.
2. Compute the Cholesky factorization of this matrix, A = LL^T, where L is a lower-triangular matrix, and form two auxiliary systems Ly = b, L^T x = y.
3. Solve these systems by sequential exclusion of variables (forward and backward substitution), as in the sketch below.
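The three-step scheme translates directly into a few lines of code. The following sketch (an illustration, not part of the original article) uses SciPy's `cholesky` and `solve_triangular`; the two triangular solves are exactly the "sequential exclusion of variables", and the function name `solve_spd` is an assumption.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def solve_spd(A, b):
    """Solve Ax = b by the three-step scheme described above."""
    # Step 1: check symmetry; positive definiteness is certified by the
    # factorization itself (cholesky raises LinAlgError otherwise).
    if not np.allclose(A, A.T):
        raise ValueError("A must be symmetric")
    # Step 2: Cholesky factorization A = L L^T with L lower-triangular.
    L = cholesky(A, lower=True)
    # Step 3: solve the auxiliary systems Ly = b and L^T x = y by
    # sequential exclusion of variables (forward/back substitution).
    y = solve_triangular(L, b, lower=True)
    return solve_triangular(L.T, y, lower=False)

# Usage on a random symmetric positive definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)
b = rng.standard_normal(4)
print(np.allclose(A @ solve_spd(A, b), b))  # True
```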
Imagine for a moment that we do not know how to solve systems of linear equations. In order to discover the above scheme, we should apply the following

Golden Rules (4)

1. Find a class of problems which can be solved very efficiently.
2. Describe the transformation rules for converting the initial problem into the desired form.
3. Describe the class of problems for which these transformation rules are applicable.

In our example, it is the class of linear systems with triangular matrices.

In Convex Optimization, these rules were already used several times for breaking down the limitations of Complexity Theory. Historically, the first example of this type is the theory of polynomial-time interior-point methods (IPM) based on self-concordant barriers. In this framework, the class of easy problems is formed by problems of unconstrained minimization of self-concordant functions, treated by the Newton method. This know-how is then used in the framework of path-following schemes for solving so-called standard minimization problems. Finally, it can be shown that, by a simple barrier calculus, this approach can be extended onto all convex optimization problems with known structure (see [11,18] for details). The efficiency estimates of the corresponding schemes are of the order of O(υ^{1/2} ln(υ/ε)) iterations of the Newton method, where υ is the parameter of the corresponding self-concordant barrier. Note that for many important feasible sets this parameter is smaller than the dimension of the space of variables. Hence, for pure Black-Box schemes such an efficiency is simply unreachable, in view of the lower complexity bound for class C (see (2)). It is interesting that, formally, the modern IPMs look very similar to the usual Black-Box schemes (Newton method plus a path-following approach), which were developed in the very early days of Nonlinear Optimization [4]. However, this is just an illusion. For the complexity analysis of polynomial-time IPM, it is crucial that they employ the special barrier functions which do not satisfy the local Black-Box assumptions (see [10] for a discussion).

The second example of using the rules (4) needs more explanation. By certain circumstances, these results were discovered with a delay of twenty years. […] difficult. Usually we can work directly with the structure of our problem, at least in the cases when it is created by us. Thus, the basis of the smoothing technique [12,1] is formed by two ingredients: the above observation, and a trivial but […] Thus, up to a logarithmic factor, for obtaining the same accuracy, the methods based on the smoothing technique need only […]

¹ If this upper bound is smaller than O(n), then the dimension of the problem is really very big, and we cannot afford the method to perform this number of calls.
² In method (11)-(13) from [9], we can set a_k = 1 + k/2, since in the proof we need only to ensure a_{k+1}^2 − a_k^2 ≤ a_{k+1}.
³ Numerical verification of convexity is an extremely difficult problem.
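Since the iteration formulas of the optimal method (3) were lost in the extraction, the following Python sketch is only a hedged reconstruction of an accelerated projected-gradient scheme of the kind presented in [9], using the choice a_k = 1 + k/2 from footnote 2 and a Euclidean projection onto Q. The function names, the default iteration count, and the test problem are assumptions, not the author's notation.

```python
import numpy as np

def optimal_method(grad_f, project_Q, L, y0, iters=100):
    """Accelerated (projected) gradient scheme of the kind presented in [9].

    Hedged reconstruction of method (3), not a verbatim transcription:
    one gradient step from an extrapolation point, then a momentum step
    with coefficients built from a_k = 1 + k/2 (footnote 2).
    """
    x_prev = y = np.asarray(y0, dtype=float)
    for k in range(iters):
        # Gradient step from the extrapolation point, projected onto Q.
        x = project_Q(y - grad_f(y) / L)
        # Extrapolation with momentum (a_k - 1) / a_{k+1} = k / (k + 3).
        a_k, a_next = 1 + k / 2.0, 1 + (k + 1) / 2.0
        y = x + ((a_k - 1) / a_next) * (x - x_prev)
        x_prev = x
    return x

# Usage: minimize f(x) = ||x - c||^2 / 2 (so L = 1) over the unit ball.
c = np.array([2.0, 0.0])
project = lambda x: x / max(1.0, np.linalg.norm(x))
print(optimal_method(lambda x: x - c, project, L=1.0, y0=np.zeros(2)))
# -> approximately [1. 0.], the projection of c onto the ball
```

Each iteration issues exactly one gradient call and one projection, consistent with the remark that the per-iteration cost of the scheme is comparable with that of the simplest gradient method.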