Duality Theory of Constrained Optimization


Robert M. Freund

April, 2014

© 2014 Massachusetts Institute of Technology. All rights reserved.

1 The Practical Importance of Duality

Duality is pervasive in nonlinear (and linear) optimization models in a wide variety of engineering and mathematical settings. Some elementary examples of models where duality plays an important role are:

Electrical networks. The current flows are "primal variables" and the voltage differences are the "dual variables" that arise in consideration of optimization (and equilibrium) in electrical networks.

Economic markets. The "primal" variables are production levels and consumption levels, and the "dual" variables are prices of goods and services.

Structural design. The tensions on the beams are "primal" variables, and the nodal displacements are the "dual" variables.

Nonlinear (and linear) duality is extremely useful both as a conceptual and as a computational tool. For example, dual problems and their solutions are used in connection with:

(a) Identifying near-optimal solutions. A good dual solution can be used to bound the values of primal solutions, and so can be used to identify when a primal solution is optimal or near-optimal.

(b) Proving optimality. Using a strong duality theorem, one can prove optimality of a primal solution by constructing a dual solution with the same objective function value.

(c) Sensitivity analysis of the primal problem. The dual variable on a constraint represents the incremental change in the optimal solution value per unit increase in the right-hand side (RHS) of the constraint.

(d) Karush-Kuhn-Tucker (KKT) conditions. The optimal solution to the dual problem is a vector of KKT multipliers.

(e) Convergence of algorithms. The dual problem is often used in the convergence analysis of algorithms.

(f) Discovering and Exploiting Problem Structure.
Quite often, the dual problem has some useful mathematical, geometric, or computational structure that can be exploited in computing solutions to both the primal and the dual problem.

2 The Dual Problem

2.1 Constructing the Dual Problem

Recall the basic constrained optimization model:

    OP:  minimum_x  f(x)
         s.t.  g_1(x) ≤ 0  (or "= 0" or "≥ 0")
               ⋮
               g_m(x) ≤ 0
               x ∈ X.

In this model, we have f(x): R^n → R and g_i(x): R^n → R, i = 1, ..., m. We can always convert the constraints involving g_i(x) to have the format "g_i(x) ≤ 0", and so we will now presume that our problem is of the form:

    OP:  z^* = minimum_x  f(x)
         s.t.  g_i(x) ≤ 0, i = 1, ..., m
               x ∈ X.

We will refer to X as the "ground-set". Whereas in previous developments X was often assumed to be R^n, in our study of duality it will be useful to presume that X itself comprises a structural part of the feasible region of the optimization problem. A typical example is when the feasible region of our problem of interest is the intersection of k inequality constraints:

    F = {x ∈ R^n : g_i(x) ≤ 0, i = 1, ..., k}.

Depending on the structural properties of the functions g_i(·), we might wish to re-order the constraints so that the first m of these k constraints (where of course m ≤ k) are kept explicitly as inequalities, and the last k − m constraints are formulated into the ground-set:

    X := {x ∈ R^n : g_i(x) ≤ 0, i = m + 1, ..., k}.

Then it follows that:

    F = {x ∈ R^n : g_i(x) ≤ 0, i = 1, ..., m} ∩ X,

where now there are only m explicit inequality constraints. More generally, the ground-set X could be of any form, including the following:

(i) X = {x ∈ R^n : x ≥ 0},
(ii) X = K, where K is a convex cone,
(iii) X = {x : g_i(x) ≤ 0, i = m + 1, ..., m + k},
(iv) X = {x : x ∈ Z^n_+},
(v) X = {x ∈ R^n : Nx = b, x ≥ 0}, where "Nx = b" models flows in a network,
(vi) X = {x : Ax ≤ b},
(vii) X = R^n.

We now describe how to construct the dual problem of OP. The first step is to form the Lagrangian function.
For a nonnegative vector u, the Lagrangian function is defined simply as follows:

    L(x, u) := f(x) + u^T g(x) = f(x) + Σ_{i=1}^m u_i g_i(x).

The Lagrangian essentially takes the constraints out of the description of the feasible region of the model, and instead places them in the objective function with multipliers u_i for i = 1, ..., m. It might be helpful to think of these multipliers as costs, prices, or penalties.

The second step is to construct the dual function L^*(u), which is defined as follows:

    L^*(u) := minimum_x  f(x) + u^T g(x)                    (1)
              s.t.  x ∈ X.

Note that L^*(u) is defined for any given u. In order for the dual problem to be computationally viable, it will be necessary to be able to compute L^*(u) efficiently for any given u. Put another way, it will be necessary to be able to solve the optimization problem (1) efficiently for any given u.

The third step is to write down the dual problem D, which is defined as follows:

    D:  v^* = maximum_u  L^*(u)
        s.t.  u ≥ 0.

2.2 Summary: Steps in the Construction of the Dual Problem

Here is a summary of the process of constructing the dual problem of OP. The starting point is the primal problem:

    OP:  z^* = minimum_x  f(x)
         s.t.  g_i(x) ≤ 0, i = 1, ..., m
               x ∈ X.

The dual problem D is constructed in the following three-step procedure:

1. Create the Lagrangian:  L(x, u) := f(x) + u^T g(x).

2. Create the dual function:  L^*(u) := minimum_x f(x) + u^T g(x)  s.t. x ∈ X.

3. Create the dual problem:  D: v^* = maximum_u L^*(u)  s.t. u ≥ 0.

2.3 Example: the Dual of a Linear Problem

Consider the linear optimization problem:

    LP:  minimum_x  c^T x
         s.t.  Ax ≥ b
               x ≥ 0.

We can rearrange the inequality constraints and re-write LP as:

    LP:  minimum_x  c^T x
         s.t.  b − Ax ≤ 0
               x ≥ 0.

Let us define X := {x ∈ R^n : x ≥ 0} and identify the linear inequality constraints "b − Ax ≤ 0" as the constraints "g_i(x) ≤ 0, i = 1, ..., m".
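Before working through this example by hand, it may help to see the three-step procedure of Section 2.2 run end-to-end numerically. The sketch below is an illustration only: the toy objective, constraint, and use of SciPy's general-purpose minimizers are all assumptions, not part of the notes, and X is simply R^2 so the inner minimization in step 2 is unconstrained.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Hypothetical toy primal (not from the notes):
#   minimize  f(x) = (x1 - 2)^2 + (x2 - 2)^2
#   s.t.      g(x) = x1 + x2 - 2 <= 0,   with ground-set X = R^2
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: x[0] + x[1] - 2.0

# Step 1: the Lagrangian L(x, u) = f(x) + u * g(x).
L = lambda x, u: f(x) + u * g(x)

# Step 2: the dual function L*(u) = min_{x in X} L(x, u);
# here X = R^2, so this is an unconstrained minimization in x.
def L_star(u):
    return minimize(lambda x: L(x, u), x0=np.zeros(2)).fun

# Step 3: the dual problem v* = max_{u >= 0} L*(u), solved with a
# derivative-free bounded scalar search (u is one-dimensional here).
res = minimize_scalar(lambda u: -L_star(u), bounds=(0.0, 10.0),
                      method="bounded")
u_opt, v_star = res.x, -res.fun
print(u_opt, v_star)
```

For this toy problem the primal optimum is z^* = 2, attained at x = (1, 1); the search recovers u ≈ 2 and v^* ≈ 2, so there is no duality gap here.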
Therefore the Lagrangian for this problem is:

    L(x, u) = c^T x + u^T (b − Ax) = u^T b + (c − A^T u)^T x.

The dual function L^*(u) is defined to be:

    L^*(u) = min_{x ∈ X} L(x, u) = min_{x ≥ 0} L(x, u).

Let us now evaluate L^*(u). For a given value of u, L^*(u) is evaluated as follows:

    L^*(u) = min_{x ≥ 0}  u^T b + (c − A^T u)^T x
           = { u^T b   if A^T u ≤ c
             { −∞      if A^T u ≰ c.

The dual problem (D) is defined to be:

    (D):  v^* = max_{u ≥ 0} L^*(u),

and is therefore constructed as:

    (D):  v^* = max_u  u^T b
          s.t.  A^T u ≤ c
                u ≥ 0.

2.4 Remarks on Conventions involving ±∞

It is often the case when constructing dual problems that functions such as L^*(u) take on the values ±∞ for certain values of u. Here we discuss conventions in this regard. Suppose we have an optimization problem:

    (P):  z^* = maximum_x  f(x)
          s.t.  x ∈ S.

We could define the function:

    h(x) := { f(x)   if x ∈ S
            { −∞     if x ∉ S.

Then we can rewrite our problem as:

    (P):  z^* = maximum_x  h(x)
          s.t.  x ∈ R^n.

Conversely, suppose that we have a function k(·) that takes on the value −∞ outside of a certain region S, but that k(·) is finite for all x ∈ S. Then the problem:

    (P):  z^* = maximum_x  k(x)  s.t.  x ∈ R^n

is equivalent to:

    (P):  z^* = maximum_x  k(x)  s.t.  x ∈ S.

Similar logic applies to minimization problems over domains where the function values might take on the value +∞.

3 More Examples of Dual Constructions of Optimization Problems

3.1 The Dual of a Convex Quadratic Problem

Consider the following convex quadratic optimization problem:

    QP:  minimum_x  (1/2) x^T Q x + c^T x
         s.t.  Ax ≥ b,

where Q is symmetric and positive definite. To construct a dual of this problem, let us re-write the inequality constraints as b − Ax ≤ 0 and dualize these constraints.
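As a brief numerical aside before carrying out this construction: the primal-dual LP pair derived in Section 2.3 can be checked directly. The data A, b, c below are hypothetical, and SciPy's `linprog` is used for both problems (it minimizes, so the dual objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for LP: minimize c^T x  s.t.  Ax >= b, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# Primal: rewrite Ax >= b as -Ax <= -b for linprog's A_ub convention.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual, as constructed in Section 2.3:
#   maximize b^T u  s.t.  A^T u <= c, u >= 0.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

z_star = primal.fun      # primal optimal value
v_star = -dual.fun       # dual optimal value
print(z_star, v_star)
```

For this data both values come out to 6.8: the LP is feasible and bounded, so the primal and dual optima coincide, illustrating strong duality for linear problems.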
We construct the Lagrangian:

    L(x, u) = (1/2) x^T Q x + c^T x + u^T (b − Ax)
            = u^T b + (c − A^T u)^T x + (1/2) x^T Q x.

The dual function L^*(u) is:

    L^*(u) = min_{x ∈ R^n}  L(x, u)
           = min_{x ∈ R^n}  u^T b + (c − A^T u)^T x + (1/2) x^T Q x
           = u^T b + min_{x ∈ R^n}  (c − A^T u)^T x + (1/2) x^T Q x.

For a given value of u, let x̃ be the optimal value of x in this last expression. Then since the expression is a convex quadratic function of x, it follows that x̃ must satisfy:

    0 = ∇_x L(x̃, u) = (c − A^T u) + Q x̃,

whereby x̃ is given by:

    x̃ = −Q^{−1} (c − A^T u).

Substituting this value of x into the Lagrangian, we obtain:

    L^*(u) = L(x̃, u) = u^T b − (1/2) (c − A^T u)^T Q^{−1} (c − A^T u).

The dual problem (D) is defined to be:

    (D):  v^* = max_{u ≥ 0} L^*(u),

and is therefore constructed as:

    (D):  v^* = max_u  u^T b − (1/2) (c − A^T u)^T Q^{−1} (c − A^T u)
          s.t.  u ≥ 0.
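The closed-form minimizer and dual function just derived can be sanity-checked with a few lines of linear algebra. The data below are hypothetical (any symmetric positive definite Q works); the check confirms the stationarity condition ∇_x L(x̃, u) = 0 and that the closed-form L^*(u) agrees with evaluating the Lagrangian directly at x̃.

```python
import numpy as np

# Hypothetical data; Q must be symmetric positive definite.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

u = np.array([0.5])          # an arbitrary multiplier with u >= 0

# Unconstrained minimizer of the Lagrangian: x~ = -Q^{-1}(c - A^T u).
r = c - A.T @ u
x_tilde = -np.linalg.solve(Q, r)

# Stationarity check: grad_x L(x~, u) = (c - A^T u) + Q x~ should be 0.
grad = r + Q @ x_tilde

# Closed-form dual function: L*(u) = u^T b - (1/2) r^T Q^{-1} r.
L_star = u @ b - 0.5 * r @ np.linalg.solve(Q, r)

# Cross-check by evaluating the Lagrangian at x~ directly.
L_at_xt = 0.5 * x_tilde @ Q @ x_tilde + c @ x_tilde + u @ (b - A @ x_tilde)
print(grad, L_star, L_at_xt)
```

Because x̃ solves the inner problem exactly, the two evaluations of L^*(u) match to machine precision; maximizing this concave quadratic over u ≥ 0 would then give the dual optimum v^*.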