A Tutorial on Convex Optimization

Haitham Hindi
Palo Alto Research Center (PARC), Palo Alto, California
email: [email protected]

Abstract— In recent years, convex optimization has become a computational tool of central importance in engineering, thanks to its ability to solve very large, practical engineering problems reliably and efficiently. The goal of this tutorial is to give an overview of the basic concepts of convex sets, functions and convex optimization problems, so that the reader can more readily recognize and formulate engineering problems using modern convex optimization. This tutorial coincides with the publication of the new book on convex optimization, by Boyd and Vandenberghe [7], who have made available a large amount of free course material and links to freely available code. These can be downloaded and used immediately by the audience both for self-study and to solve real problems.

I. INTRODUCTION

Convex optimization can be described as a fusion of three disciplines: optimization [22], [20], [1], [3], [4], convex analysis [19], [24], [27], [16], [13], and numerical computation [26], [12], [10], [17]. It has recently become a tool of central importance in engineering, enabling the solution of very large, practical engineering problems reliably and efficiently. In some sense, convex optimization is providing new indispensable computational tools today, which naturally extend our ability to solve problems such as least squares and linear programming to a much larger and richer class of problems.

Our ability to solve these new types of problems comes from recent breakthroughs in algorithms for solving convex optimization problems [18], [23], [29], [30], coupled with the dramatic improvements in computing power, both of which have happened only in the past decade or so. Today, new applications of convex optimization are constantly being reported from almost every area of engineering, including: control, signal processing, networks, circuit design, communication, information theory, computer science, operations research, economics, statistics, and structural design. See [7], [2], [5], [6], [9], [11], [15], [8], [21], [14], [28] and the references therein.

The objectives of this tutorial are:
1) to show that there are straightforward, systematic rules and facts, which when mastered, allow one to quickly deduce the convexity (and hence tractability) of many problems, often by inspection;
2) to review and introduce some canonical optimization problems, which can be used to model problems and for which reliable optimization code can be readily obtained;
3) to emphasize modeling and formulation; we do not discuss topics like duality or writing custom codes.

We assume that the reader has a working knowledge of linear algebra and vector calculus, and some (minimal) exposure to optimization.

Our presentation is quite informal. Rather than provide details for all the facts and claims presented, our goal is instead to give the reader a flavor for what is possible with convex optimization. Complete details can be found in [7], from which all the material presented here is taken. Thus we encourage the reader to skip sections that might not seem clear and continue reading; the topics are not all interdependent.

Also, in order to keep the paper quite general, we have tried not to bias our presentation toward any particular audience. Hence, the examples used in the paper are simple and intended merely to clarify the optimization ideas and concepts. For detailed examples and applications, the reader is referred to [7], [2], and the references therein.

We now briefly outline the paper. Sections II and III, respectively, describe convex sets and convex functions along with their calculus and properties. In section IV, we define convex optimization problems, at a rather abstract level, and we describe their general form and desirable properties. Section V presents some specific canonical optimization problems which have been found to be extremely useful in practice, and for which efficient codes are freely available. Section VI comments briefly on the use of convex optimization for solving nonstandard or nonconvex problems. Finally, section VII concludes the paper.
Motivation

A vast number of design problems in engineering can be posed as constrained optimization problems of the form
\[
\begin{array}{ll}
\mbox{minimize}   & f_0(x) \\
\mbox{subject to} & f_i(x) \leq 0, \quad i = 1, \ldots, m \\
                  & h_i(x) = 0, \quad i = 1, \ldots, p
\end{array}
\tag{1}
\]
where $x$ is a vector of decision variables, and the functions $f_0$, $f_i$ and $h_i$, respectively, are the cost, inequality constraints, and equality constraints. However, such problems can be very hard to solve in general, especially when the number of decision variables in $x$ is large. There are several reasons for this difficulty: First, the problem "terrain" may be riddled with local optima. Second, it might be very hard to find a feasible point (i.e., an $x$ which satisfies all the equalities and inequalities); in fact the feasible set, which needn't even be fully connected, could be empty. Third, stopping criteria used in general optimization algorithms are often arbitrary. Fourth, optimization algorithms might have very poor convergence rates. Fifth, numerical problems could cause the minimization algorithm to stop altogether or wander.

It has been known for a long time [19], [3], [16], [13] that if the $f_i$ are all convex, and the $h_i$ are affine, then the first three problems disappear: any local optimum is, in fact, a global optimum; feasibility of convex optimization problems can be determined unambiguously, at least in principle; and very precise stopping criteria are available using duality. However, convergence rate and numerical sensitivity issues still remained a potential problem.

It was not until the late 1980s and 1990s that researchers in the former Soviet Union and United States discovered that if, in addition to convexity, the $f_i$ satisfied a property known as self-concordance, then issues of convergence and numerical sensitivity could be avoided using interior point methods [18], [23], [29], [30], [25]. The self-concordance property is satisfied by a very large set of important functions used in engineering. Hence, it is now possible to solve a large class of convex optimization problems in engineering with great efficiency.
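To make the standard form (1) concrete, here is a minimal sketch, not taken from the paper, of how one small convex instance of (1) can be specified and solved with the open-source CVXPY modeling package: the cost is a least-squares objective, the inequality constraints are componentwise nonnegativity, and the single equality constraint is affine. The data A and b and the problem dimensions are arbitrary illustrative assumptions.

# Minimal sketch (not from the paper): a small convex instance of problem (1),
# modeled and solved with the open-source CVXPY package.
import numpy as np
import cvxpy as cp

np.random.seed(0)
m, n = 20, 10
A = np.random.randn(m, n)      # arbitrary illustrative problem data
b = np.random.randn(m)

x = cp.Variable(n)                                   # decision variable x
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex cost f_0(x)
constraints = [x >= 0,                               # inequality constraints f_i(x) <= 0
               cp.sum(x) == 1]                       # affine equality constraint h_1(x) = 0
prob = cp.Problem(objective, constraints)
prob.solve()                                         # CVXPY picks a suitable solver

print("status       :", prob.status)
print("optimal value:", prob.value)
print("optimal x    :", x.value)

Because the cost and inequality constraints here are convex and the equality constraint is affine, the solver returns a global optimum, which is exactly the situation described above.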
II. CONVEX SETS

In this section we list some important convex sets and operations. It is important to note that some of these sets have different representations. Picking the right representation can make the difference between a tractable problem and an intractable one.

We will be concerned only with optimization problems whose decision variables are vectors in $\mathbf{R}^n$ or matrices in $\mathbf{R}^{m \times n}$. Throughout the paper, we will make frequent use of informal sketches to help the reader develop an intuition for the geometry of convex optimization.

A function $f : \mathbf{R}^n \to \mathbf{R}^m$ is affine if it has the form linear plus constant, $f(x) = Ax + b$. If $F$ is a matrix valued function, i.e., $F : \mathbf{R}^n \to \mathbf{R}^{p \times q}$, then $F$ is affine if it has the form
\[
F(x) = A_0 + x_1 A_1 + \cdots + x_n A_n
\]
where $A_i \in \mathbf{R}^{p \times q}$. Affine functions are sometimes loosely referred to as linear.

Recall that $S \subseteq \mathbf{R}^n$ is a subspace if it contains the plane through any two of its points and the origin, i.e.,
\[
x, y \in S, \ \lambda, \mu \in \mathbf{R} \ \Longrightarrow \ \lambda x + \mu y \in S.
\]
Two common representations of a subspace are as the range of a matrix
\[
\mathrm{range}(A) = \{ Aw \mid w \in \mathbf{R}^q \} = \{ w_1 a_1 + \cdots + w_q a_q \mid w_i \in \mathbf{R} \}
\]
where $A = [a_1 \ \cdots \ a_q]$, or as the nullspace of a matrix
\[
\mathrm{nullspace}(B) = \{ x \mid Bx = 0 \} = \{ x \mid b_1^T x = 0, \ldots, b_p^T x = 0 \}
\]
where $B = [b_1 \ \cdots \ b_p]^T$. As an example, let $\mathbf{S}^n = \{ X \in \mathbf{R}^{n \times n} \mid X = X^T \}$ denote the set of symmetric matrices. Then $\mathbf{S}^n$ is a subspace, since symmetric matrices are closed under addition and scalar multiplication. Another way to see this is to note that $\mathbf{S}^n$ can be written as $\{ X \in \mathbf{R}^{n \times n} \mid X_{ij} = X_{ji}, \ \forall i, j \}$, which is the nullspace of the linear function $X \mapsto X - X^T$.

A set $S \subseteq \mathbf{R}^n$ is affine if it contains the line through any two points in it, i.e.,
\[
x, y \in S, \ \lambda, \mu \in \mathbf{R}, \ \lambda + \mu = 1 \ \Longrightarrow \ \lambda x + \mu y \in S.
\]

[Figure: points on the line through $x$ and $y$ corresponding to $\lambda = 1.5$, $\lambda = 0.6$, and $\lambda = -0.5$.]

Geometrically, an affine set is simply a subspace which is not necessarily centered at the origin. Two common representations of an affine set are: the range of an affine function
\[
S = \{ Az + b \mid z \in \mathbf{R}^q \},
\]
or as the solution set of a set of linear equalities:
\[
S = \{ x \mid b_1^T x = d_1, \ldots, b_p^T x = d_p \} = \{ x \mid Bx = d \}.
\]

A set $S \subseteq \mathbf{R}^n$ is a convex set if it contains the line segment joining any of its points, i.e.,
\[
x, y \in S, \ \lambda, \mu \geq 0, \ \lambda + \mu = 1 \ \Longrightarrow \ \lambda x + \mu y \in S.
\]

[Figure: a convex set and a set that is not convex.]

Geometrically, we can think of convex sets as always bulging outward, with no dents or kinks in them.

• $K = \mathbf{S}^n_+$: $X \preceq_K Y$ means $Y - X$ is PSD. These are so common we drop the $K$.

Given points $x_i \in \mathbf{R}^n$ and $\theta_i \in \mathbf{R}$, then $y = \theta_1 x_1 + \cdots + \theta_k x_k$ is said to be a
• linear combination for any real $\theta_i$
• affine combination if $\sum_i \theta_i = 1$
• convex combination if $\sum_i \theta_i = 1$, $\theta_i \geq 0$
• conic combination if $\theta_i \geq 0$
The linear (resp. affine, convex, conic) hull of a set $S$ is the set of all linear (resp. affine, convex, conic) combinations of points from $S$.
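As a small illustration of the combination definitions above, the following sketch (not from the paper; the points, the coefficient choices, and the helper name combination_type are arbitrary illustrative assumptions) classifies a coefficient vector according to which kinds of combination it yields, and forms the corresponding point $y = \theta_1 x_1 + \cdots + \theta_k x_k$.

# Minimal sketch (not from the paper): classifying a combination
# y = theta_1 x_1 + ... + theta_k x_k of points in R^n.
import numpy as np

def combination_type(theta, tol=1e-9):
    """Return the kinds of combination the coefficients theta satisfy."""
    theta = np.asarray(theta, dtype=float)
    kinds = ["linear"]                         # any real theta_i
    if abs(theta.sum() - 1.0) <= tol:
        kinds.append("affine")                 # sum_i theta_i = 1
    if np.all(theta >= -tol):
        kinds.append("conic")                  # theta_i >= 0
    if "affine" in kinds and "conic" in kinds:
        kinds.append("convex")                 # sum_i theta_i = 1 and theta_i >= 0
    return kinds

# Example: three points in R^2 (rows of X) and a few coefficient choices.
X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

for theta in ([0.2, 0.3, 0.5],      # convex (hence affine, conic, linear)
              [1.5, -0.5, 0.0],     # affine but not convex
              [2.0, 1.0, 0.0],      # conic but not affine
              [-1.0, 2.0, -3.0]):   # only linear
    y = np.asarray(theta) @ X       # y = sum_i theta_i x_i
    print(theta, "->", combination_type(theta), "y =", y)

For the three points chosen here, the convex combinations sweep out exactly their convex hull, the triangle with those points as vertices.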