An Introduction to Duality in Convex Optimization
Stephan Wolf
Betreuer: Stephan M. Günther, M.Sc.
Hauptseminar Innovative Internettechnologien und Mobilkommunikation SS2011
Lehrstuhl Netzarchitekturen und Netzdienste
Fakultät für Informatik, Technische Universität München
Email: [email protected]
Seminar FI & IITM SS 2011, Network Architectures and Services, July 2011 (doi: 10.2313/NET-2011-07-2_20)

ABSTRACT
This paper provides a short introduction to Lagrangian duality in convex optimization. First, the topic is motivated by outlining the importance of convex optimization. After that, mathematical optimization classes such as convex, linear and non-convex optimization are defined. Then Lagrangian duality is introduced. Weak and strong duality are explained, and optimality conditions such as complementary slackness and the Karush-Kuhn-Tucker conditions are presented. Finally, three different examples illustrate the power of Lagrangian duality; they are solved using the optimality conditions previously introduced.

The main basis of this paper is the excellent book on convex optimization [5] by Stephen Boyd and Lieven Vandenberghe.

Keywords
mathematical optimization problem, convex optimization, linear optimization, Lagrangian duality, Lagrange function, dual problem, primal problem, strong duality, weak duality, Slater's condition, complementary slackness, Karush-Kuhn-Tucker conditions, constrained least squares problem, water-filling algorithm
1. MOTIVATION
Convex optimization is very important in practice, and its applications are numerous. Important areas are, for example, automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modelling, and statistics and finance (see [5], p. xi). Furthermore, linear optimization, which is a subclass of convex optimization, rests mainly on the theory of convex optimization. (The definition of convex optimization problems and of convexity itself can be found in Section 2.3.)

One advantage of convex optimization problems is that methods exist to solve them very reliably and efficiently, whereas no such methods exist so far for the general non-linear problem. One example is the interior-point method, which can be used to solve general convex optimization problems. Its reliability and efficiency are still an active topic of research, but it is likely that the remaining difficulties will be overcome within a few years (see [5], p. 8).

Another, even more important advantage is the associated dual problem. Each convex optimization problem can be transformed into a dual problem, which provides another perspective and mathematical point of attack. With the dual problem it is often possible to determine the solution of the primal problem analytically, or alternatively to efficiently compute a lower bound on the solution (even for non-convex problems). Furthermore, the dual theory is a source of interesting interpretations, which can be the basis of efficient and distributed solution methods.

Therefore, when tackling optimization problems, it is advisable to be able to use the powerful tool of Lagrangian duality. This paper offers an introduction to the topic by outlining the basics and illustrating them with three examples.

The following section presents an overview of the different optimization classes and explains the difference between convex and linear optimization. After that, Lagrangian duality is introduced and intuitively derived. Furthermore, weak and strong duality are explained, and Slater's condition, which guarantees strong duality for convex optimization problems, is described. Section 4 introduces optimality conditions, concretely the complementary slackness condition and the Karush-Kuhn-Tucker conditions. They are used in Section 5 to demonstrate the power of duality for solving convex optimization problems via the dual: in particular, duality is used to solve a constrained least squares problem and to derive the water-filling method. At the end, a conclusion is drawn and hints for further literature are given.

2. OPTIMIZATION PROBLEMS
There are different kinds of mathematical optimization problems, for example non-convex, convex and linear, as well as constrained and unconstrained optimization problems. These classes do not only differ in their definition, but also in their solvability: the more specific the requirements of an optimization class are, the easier the problems in it usually are to solve.
2.1 The general optimization problem
The standard form of a mathematical optimization problem (or just optimization problem) consists of an optimization variable x = (x_1, ..., x_n) and an objective function f_0 : R^n → R. Furthermore, there are inequality constraint functions f_i : R^n → R and equality constraint functions h_i : R^n → R, which constrain the solution.

The standard form of the problem is:

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

The problem is to find an x that minimizes the objective function f_0 while satisfying the inequality and equality constraints. If the problem has no constraints, it is called unconstrained.

The set on which the objective and constraint functions are all defined is called the domain:

    D = ⋂_{i=0}^{m} dom f_i  ∩  ⋂_{i=1}^{p} dom h_i

A point x ∈ D is feasible if it satisfies the constraints. The problem itself is feasible if there exists at least one feasible point. All feasible points together form the feasible set or constraint set.

A vector x* = (x_1, ..., x_n) which is feasible and minimizes the objective function is called optimal, or a solution. Its corresponding objective value is called the optimal value p*:

    p* = inf { f_0(x) | f_i(x) ≤ 0, i = 1, ..., m  ∧  h_i(x) = 0, i = 1, ..., p }

By definition, p* can be ±∞: p* is +∞ if the problem is infeasible, and −∞ if the problem is unbounded below, meaning that there are feasible points x_k with f_0(x_k) → −∞ for k → ∞.

The other problem classes are subclasses of this general optimization problem; the main difference lies in the class of the objective and constraint functions. Figure 1 shows the Rosenbrock function (the plot is based on a script from [1]). It is a non-convex function which is used as a performance test for optimization algorithms for non-convex problems.

Figure 1: The non-convex Rosenbrock function f(x, y) = (1 − x)^2 + 100(y − x^2)^2.
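To make the standard form concrete, the following sketch (my illustration, not from the paper; it assumes NumPy and SciPy and uses an invented toy problem) encodes an objective f_0, one inequality constraint f_1 and one equality constraint h_1, and computes x* and p* numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem in standard form: minimize f0(x) s.t. f1(x) <= 0, h1(x) = 0.
f0 = lambda x: x[0]**2 + x[1]**2   # objective function
f1 = lambda x: 1 - x[0] - x[1]     # inequality constraint: f1(x) <= 0
h1 = lambda x: x[0] - x[1]         # equality constraint:   h1(x) = 0

# SciPy's "ineq" convention is fun(x) >= 0, so -f1 is passed.
cons = [{"type": "ineq", "fun": lambda x: -f1(x)},
        {"type": "eq",   "fun": h1}]

res = minimize(f0, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x)    # feasible minimizer x*, here (0.5, 0.5)
print(res.fun)  # optimal value p*, here 0.5
```

Note the sign flip for the inequality: the paper's standard form uses f_i(x) ≤ 0, while SciPy expects constraint functions that are non-negative at feasible points.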
2.2 The linear optimization problem
The problem is called a linear program if the objective function f_0 and the inequality and equality constraint functions f_1, ..., f_m, h_1, ..., h_p are linear, which means that they fulfill the following equation for all x, y ∈ R^n and α, β ∈ R:

    f_i(αx + βy) = α f_i(x) + β f_i(y)

An example of a two-dimensional linear function is shown in Figure 2.

Figure 2: A linear function f(x, y) = 3x + 2.5y.

A linear program can also be written as:

    minimize    c^T x + d
    subject to  Gx ⪯ q
                Ax = b

Here ⪯ denotes componentwise inequality. The matrices G ∈ R^{m×n} and A ∈ R^{p×n} specify the linear inequality and equality constraints, and the vector c ∈ R^n together with the constant d ∈ R parameterizes the objective function. The constant d can be left out, as it influences neither the feasible set nor the solution x* (see [5], p. 146); therefore d is omitted in other definitions.

As the negation −f(x) of a linear function f(x) is also linear, a linear maximization problem can easily be transformed into a linear minimization problem: if the objective function c^T x + d is to be maximized, one can instead minimize the objective function −c^T x − d. That is the reason why linear maximization problems are also linear programs.

If at least one constraint function or the objective function is not linear, the problem is called a non-linear program.
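The matrix form above maps directly onto off-the-shelf LP solvers. A small sketch (my illustration; it assumes SciPy and reuses the objective from Figure 2 over an invented feasible set):

```python
import numpy as np
from scipy.optimize import linprog

# minimize c^T x  subject to  G x <= q,  A x = b   (d is omitted, as in the text)
c = np.array([3.0, 2.5])          # the linear objective from Figure 2
G = np.array([[-1.0, 0.0],
              [0.0, -1.0]])       # encodes x >= 0 and y >= 0
q = np.zeros(2)
A = np.array([[1.0, 1.0]])        # encodes x + y = 1
b = np.array([1.0])

res = linprog(c, A_ub=G, b_ub=q, A_eq=A, b_eq=b, bounds=(None, None))
print(res.x, res.fun)             # minimizer (0, 1), optimal value 2.5

# Maximization via negation: maximizing c^T x means minimizing -c^T x.
res_max = linprog(-c, A_ub=G, b_ub=q, A_eq=A, b_eq=b, bounds=(None, None))
print(res_max.x, -res_max.fun)    # maximizer (1, 0), maximum value 3.0
```

`bounds=(None, None)` is needed because `linprog` otherwise assumes x ≥ 0 by default; here the non-negativity is written explicitly into G and q to match the Gx ⪯ q form.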
2.3 The convex optimization problem
The requirement for convex optimization problems is that the equality constraints are still linear, but the inequality constraint functions and the objective function only have to be convex. That means they must fulfill the following inequality for all x, y ∈ R^n and all α, β ∈ R with α + β = 1 and α, β ≥ 0:

    f_i(αx + βy) ≤ α f_i(x) + β f_i(y)

As one can see, this requirement is less restrictive than the previous requirement for linear programs, where equality must hold. Consequently, linear programs can be seen as a subclass of convex optimization problems, and the theory of convex optimization can also be applied to linear programs.

Figure 3 illustrates a convex function. The intuitive characteristic of such functions is that the line segment connecting any two points of the graph always lies above (or on) the graph.

Figure 3: The convex function f(x, y) = x^4 + y^2.

The Lagrangian L : R^n × R^m × R^p → R augments the objective function with a weighted sum of the constraint functions (see [5], p. 215):

    L(x, λ, ν) = f_0(x) + Σ_{i=1}^{m} λ_i f_i(x) + Σ_{i=1}^{p} ν_i h_i(x)

The Lagrange dual function g : R^m × R^p → R is the infimum of the Lagrangian over x (for all λ ∈ R^m, ν ∈ R^p):

    g(λ, ν) = inf_{x ∈ D} L(x, λ, ν)

If the Lagrangian has no lower bound in x, the dual function takes on the value −∞. The main advantage of the Lagrange dual function is that it is concave even if the problem itself is not convex. The reason is that the dual function is the pointwise infimum of a family of affine functions of (λ, ν) (see [5], p. 216).

The basic idea behind Lagrangian duality is to take the constraints and put them into the objective function. The most intuitive way would be to rewrite the problem as the following unconstrained problem:

    minimize  l(x) = f_0(x) + Σ_{i=1}^{m} I_−(f_i(x)) + Σ_{i=1}^{p} I_0(h_i(x))

Here I_− and I_0 (both mapping R to R ∪ {∞}) are the indicator functions of the non-positive reals and of zero, respectively:

    I_−(u) = 0 if u ≤ 0,  ∞ otherwise
    I_0(u) = 0 if u = 0,  ∞ otherwise
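The lower-bound property of the dual function can be checked numerically. A minimal sketch (my illustration; the one-dimensional toy problem and the grid approximation of the infimum are my own choices): for the problem minimize x^2 subject to 1 − x ≤ 0 we have p* = 1, and g(λ) = inf_x (x^2 + λ(1 − x)) = λ − λ^2/4 never exceeds it:

```python
import numpy as np

# Toy problem: minimize f0(x) = x^2 subject to f1(x) = 1 - x <= 0, so p* = 1.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x)

def g(lam):
    """Dual function g(lam) = inf_x L(x, lam), approximated on a grid.
    Analytically the minimizer is x = lam/2 and g(lam) = lam - lam**2 / 4."""
    x = np.linspace(-5.0, 5.0, 2001)
    return np.min(x**2 + lam * (1.0 - x))

lams = np.linspace(0.0, 4.0, 9)
vals = np.array([g(l) for l in lams])
print(vals)                           # every entry is a lower bound on p* = 1

assert np.all(vals <= 1.0 + 1e-9)     # weak duality: g(lam) <= p* for lam >= 0
assert abs(vals.max() - 1.0) < 1e-3   # best bound, attained at lam = 2
```

Maximizing g over λ ≥ 0 recovers p* exactly here; that the gap closes is an instance of strong duality, which holds for this convex problem.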
