Mathematics: Neelam Patel

Total Pages: 16

File Type: pdf, Size: 1020 KB

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS. A Dissertation Submitted for the Award of the Degree of Master of Philosophy in Mathematics. Neelam Patel, School of Mathematics, Devi Ahilya Vishwavidyalaya (NAAC Accredited Grade "A"), Indore (M.P.), 2012-2013.

Contents (page no.): Introduction, 1; Chapter 1: Preliminaries, 2-7; Chapter 2: Constraint Qualifications, 8-21; Chapter 3: Lagrangian Duality & Saddle Point Optimality Conditions, 22-52; References, 53.

Introduction

The dissertation is a study of constraint qualifications, Lagrangian duality and saddle point optimality conditions. In fact, it is a reading of Chapters 5 and 6 of [1]. The first chapter is about preliminaries: we collect results that are useful in subsequent chapters, such as the Fritz John necessary and sufficient conditions for optimality and the Karush-Kuhn-Tucker (KKT) necessary and sufficient conditions for optimality. In the second chapter we define the cone of tangents T and show that F0 ∩ T = ∅ is a necessary condition for local optimality. The constraint qualifications defined are Abadie's, Slater's, Cottle's, Zangwill's, Kuhn-Tucker's, and the linear independence constraint qualification (LICQ). We shall prove the chain of implications

LICQ ⇒ CQ ⇒ ZCQ ⇒ KTCQ ⇒ AQ
        ⇑
        SQ

where CQ, ZCQ, KTCQ, AQ and SQ denote Cottle's, Zangwill's, Kuhn-Tucker's, Abadie's and Slater's qualifications, respectively (so Slater's qualification implies Cottle's). We derive the KKT conditions under the various constraint qualifications, and further study these qualifications and their interrelationships. In the third chapter, we define the Lagrangian dual problem and give its geometric interpretation. We prove the weak and strong duality theorems. We also develop the saddle point optimality conditions and their relationship with the KKT conditions. Further, some important properties of the dual function, such as concavity, differentiability, and subdifferentiability, are discussed. Special cases of Lagrangian duality for linear and quadratic programs are also discussed.

Chapter 1: Preliminaries

We collect definitions and results which will be useful.

Definition 1.1 (Convex function): Let f : S → R, where S is a nonempty convex set in Rn. The function f is said to be convex on S if f(λx1 + (1-λ)x2) ≤ λf(x1) + (1-λ)f(x2) for each x1, x2 ∈ S and for each λ ∈ (0, 1).

Definition 1.2 (Pseudoconvex function): Let S be a nonempty open set in Rn, and let f : S → R be differentiable on S. The function f is said to be pseudoconvex if for each x1, x2 ∈ S with ∇f(x1)ᵗ(x2 - x1) ≥ 0 we have f(x2) ≥ f(x1).

Definition 1.3 (Strictly pseudoconvex function): Let S be a nonempty open set in Rn, and let f : S → R be differentiable on S. The function f is said to be strictly pseudoconvex if for each x1 ≠ x2 with ∇f(x1)ᵗ(x2 - x1) ≥ 0 we have f(x2) > f(x1).

Definition 1.4 (Quasiconvex function): Let f : S → R, where S is a nonempty convex set in Rn. The function f is said to be quasiconvex if, for each x1, x2 ∈ S, f(λx1 + (1-λ)x2) ≤ max{f(x1), f(x2)} for each λ ∈ (0, 1).

Notation 1.5: F0 = {d : ∇f(x0)ᵗd < 0}. The cone of feasible directions of S at x0 is D = {d : d ≠ 0, x0 + λd ∈ S for all λ ∈ (0, δ) for some δ > 0}.

Theorem 1.6: Consider the problem to minimize f(x) subject to x ∈ S, where f : Rn → R and S is a nonempty set in Rn. Suppose f is differentiable at x0 ∈ S. If x0 is a local minimum, then F0 ∩ D = ∅. Conversely, suppose F0 ∩ D = ∅, f is pseudoconvex at x0, and there exists an ε-neighborhood Nε(x0), ε > 0, such that d = (x - x0) ∈ D for any x ∈ S ∩ Nε(x0). Then x0 is a local minimum of f.

Lemma 1.7: Consider the feasible region S = {x ∈ X : gi(x) ≤ 0 for i = 1,…,m}, where X is a nonempty open set in Rn, and where gi : Rn → R for i = 1,…,m. Given a feasible point x0 ∈ S, let I = {i : gi(x0) = 0} be the index set of the binding or active constraints, and assume that the gi for i ∈ I are differentiable at x0 and that the gi for i ∉ I are continuous at x0. Define the sets

G0 = {d : ∇gi(x0)ᵗd < 0 for each i ∈ I}   [cone of interior directions at x0]
G0′ = {d ≠ 0 : ∇gi(x0)ᵗd ≤ 0 for each i ∈ I}

Then we have G0 ⊆ D ⊆ G0′.

Theorem 1.8: Consider the Problem P to minimize f(x) subject to x ∈ X and gi(x) ≤ 0 for i = 1,…,m, where X is a nonempty open set in Rn, f : Rn → R, and gi : Rn → R for i = 1,…,m. Let x0 be a feasible point, and denote I = {i : gi(x0) = 0}. Furthermore, suppose f and the gi for i ∈ I are differentiable at x0 and the gi for i ∉ I are continuous at x0. If x0 is a local optimal solution, then F0 ∩ G0 = ∅. Conversely, if F0 ∩ G0 = ∅, and if f is pseudoconvex at x0 and the gi for i ∈ I are strictly pseudoconvex over some ε-neighborhood of x0, then x0 is a local minimum.

Theorem 1.9 (The Fritz John Necessary Conditions): Let X be a nonempty open set in Rn and let f : Rn → R and gi : Rn → R for i = 1,…,m. Consider the Problem P to minimize f(x) subject to x ∈ X and gi(x) ≤ 0 for i = 1,…,m. Let x0 be a feasible solution, and denote I = {i : gi(x0) = 0}. Furthermore, suppose f and the gi for i ∈ I are differentiable at x0 and the gi for i ∉ I are continuous at x0. If x0 locally solves Problem P, then there exist scalars u0 and ui for i ∈ I such that

u0 ∇f(x0) + Σ_{i∈I} ui ∇gi(x0) = 0
u0 ≥ 0, ui ≥ 0 for i ∈ I
(u0, uI) ≠ (0, 0)

where uI is the vector whose components are ui for i ∈ I. Furthermore, if the gi for i ∉ I are also differentiable at x0, then the foregoing conditions can be written in the following equivalent form:

u0 ∇f(x0) + Σ_{i=1}^m ui ∇gi(x0) = 0
ui gi(x0) = 0 for i = 1,…,m
u0 ≥ 0, ui ≥ 0 for i = 1,…,m
(u0, u) ≠ (0, 0)

where u is the vector whose components are ui for i = 1,…,m.

Theorem 1.10 (Fritz John Sufficient Conditions): Let X be a nonempty open set in Rn and let f : Rn → R and gi : Rn → R, for i = 1,…,m. Consider the Problem P to minimize f(x) subject to x ∈ X and gi(x) ≤ 0 for i = 1,…,m. Let x0 be an FJ solution and denote I = {i : gi(x0) = 0}. Define S as the relaxed feasible region for Problem P in which the nonbinding constraints are dropped.
a. If there exists an ε-neighborhood Nε(x0), ε > 0, such that f is pseudoconvex over Nε(x0) ∩ S and the gi, i ∈ I, are strictly pseudoconvex over Nε(x0) ∩ S, then x0 is a local minimum for Problem P.
b. If f is pseudoconvex at x0 and if the gi, i ∈ I, are both strictly pseudoconvex and quasiconvex at x0, then x0 is a global optimal solution for Problem P. In particular, if these generalized convexity assumptions hold true only after restricting the domain of f to Nε(x0) for some ε > 0, then x0 is a local minimum for Problem P.

Theorem 1.11 (Karush-Kuhn-Tucker Necessary Conditions): Let X be a nonempty open set in Rn and let f : Rn → R and gi : Rn → R, for i = 1,…,m. Consider the Problem P to minimize f(x) subject to x ∈ X and gi(x) ≤ 0 for i = 1,…,m. Let x0 be a feasible solution, and denote I = {i : gi(x0) = 0}. Suppose f and the gi for i ∈ I are differentiable at x0 and the gi for i ∉ I are continuous at x0. Furthermore, suppose the ∇gi(x0) for i ∈ I are linearly independent. If x0 locally solves Problem P, then there exist scalars ui for i ∈ I such that

∇f(x0) + Σ_{i∈I} ui ∇gi(x0) = 0
ui ≥ 0 for i ∈ I

In addition to the above assumptions, if each gi for i ∉ I is also differentiable at x0, then the foregoing conditions can be written in the following equivalent form:

∇f(x0) + Σ_{i=1}^m ui ∇gi(x0) = 0
ui gi(x0) = 0 for i = 1,…,m
ui ≥ 0 for i = 1,…,m

Theorem 1.12 (Karush-Kuhn-Tucker Sufficient Conditions): Let X be a nonempty open set in Rn and let f : Rn → R and gi : Rn → R, for i = 1,…,m. Consider the Problem P to minimize f(x) subject to x ∈ X and gi(x) ≤ 0 for i = 1,…,m. Let x0 be a KKT solution, and denote I = {i : gi(x0) = 0}. Define S as the relaxed feasible region for Problem P in which the constraints that are not binding at x0 are dropped. Then:
a. If there exists an ε-neighborhood Nε(x0), ε > 0, such that f is pseudoconvex over Nε(x0) ∩ S and the gi, i ∈ I, are differentiable at x0 and quasiconvex over Nε(x0) ∩ S, then x0 is a local minimum for Problem P.
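To make the KKT system of Theorem 1.11 concrete, here is a minimal numerical check in Python (an illustrative sketch, not part of the dissertation; the sample problem, minimizing (x1 - 3)² + (x2 - 2)² subject to x1² + x2² ≤ 5 and x1 + 2x2 ≤ 4, and the candidate point are assumptions chosen for the demo):

    import numpy as np

    # Candidate point where both constraints are active (binding):
    x0 = np.array([2.0, 1.0])

    grad_f = np.array([2 * (x0[0] - 3), 2 * (x0[1] - 2)])   # grad f(x0) = (-2, -2)
    grad_g = np.array([[2 * x0[0], 2 * x0[1]],              # grad g1(x0) = (4, 2)
                       [1.0, 2.0]])                         # grad g2(x0) = (1, 2)

    # Solve  grad f(x0) + sum_i ui * grad gi(x0) = 0  for the multipliers u:
    u, *_ = np.linalg.lstsq(grad_g.T, -grad_f, rcond=None)
    stationary = np.allclose(grad_g.T @ u, -grad_f)
    print(u, stationary, bool(np.all(u >= -1e-9)))  # [0.333.. 0.666..] True True

Since the multipliers come out nonnegative and stationarity holds, the candidate point satisfies the KKT system with u = (1/3, 2/3).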
Recommended publications
  • Duality Gap Estimation via a Refined Shapley-Folkman Lemma | SIAM
    SIAM J. OPTIM. © 2020 Society for Industrial and Applied Mathematics. Vol. 30, No. 2, pp. 1094-1118. DUALITY GAP ESTIMATION VIA A REFINED SHAPLEY-FOLKMAN LEMMA. YINGJIE BI AND AO TANG. Abstract. Based on concepts like the kth convex hull and a finer characterization of nonconvexity of a function, we propose a refinement of the Shapley-Folkman lemma and derive a new estimate for the duality gap of nonconvex optimization problems with separable objective functions. We apply our result to the network utility maximization problem in networking and the dynamic spectrum management problem in communication as examples to demonstrate that the new bound can be qualitatively tighter than the existing ones. The idea is also applicable to cases with general nonconvex constraints. Key words: nonconvex optimization, duality gap, convex relaxation, network resource allocation. AMS subject classifications: 90C26, 90C46. DOI: 10.1137/18M1174805. 1. Introduction. The Shapley-Folkman lemma (Theorem 1.1) was stated and used to establish the existence of approximate equilibria in economies with nonconvex preferences [13]. It roughly says that the sum of a large number of sets is close to convex and thus can be used to generalize results on convex objects to nonconvex ones. Theorem 1.1. Let S1, S2, …, Sn be subsets of Rᵐ. For each z ∈ conv(Σ_{i=1}^n Si) = Σ_{i=1}^n conv Si, there exist points zi ∈ conv Si such that z = Σ_{i=1}^n zi and zi ∈ Si except for at most m values of i. Remark 1.2.
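A quick numerical illustration of this convexifying effect (my own sketch, not from the paper; the set S = {0, 1} and the grid are assumptions of the demo): averaging n copies of a nonconvex subset of R¹ (so m = 1) drives the distance to its convex hull toward zero.

    import numpy as np

    # Minkowski average of n copies of the nonconvex set S = {0, 1}:
    # (1/n)(S + ... + S) = {0, 1/n, 2/n, ..., 1}.
    n = 50
    avg = np.arange(n + 1) / n

    # Largest distance from a point of the convex hull [0, 1] to the average set:
    t = np.linspace(0, 1, 1001)
    dist = np.abs(t[:, None] - avg[None, :]).min(axis=1)
    print(dist.max())  # at most 1/(2n); shrinks as n grows, as the lemma suggests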
  • ABOUT STATIONARITY AND REGULARITY IN VARIATIONAL ANALYSIS
    ABOUT STATIONARITY AND REGULARITY IN VARIATIONAL ANALYSIS. ALEXANDER Y. KRUGER. To Boris Mordukhovich on his 60th birthday. Abstract. Stationarity and regularity concepts for the three classes of objects typical for variational analysis (real-valued functions, collections of sets, and multifunctions) are investigated. An attempt is made to present a classification scheme for such concepts and to show that properties introduced for objects from different classes can be treated in a similar way. Furthermore, in many cases the corresponding properties appear to be in a sense equivalent. The properties are defined in terms of certain constants which, in the case of regularity properties, also provide some quantitative characterizations of these properties. The relations between different constants and properties are discussed. "An important feature of the new variational techniques is that they can handle nonsmooth functions, sets and multifunctions equally well" (Borwein and Zhu [8]). 1. Introduction. The paper investigates extremality, stationarity and regularity properties of real-valued functions, collections of sets, and multifunctions, attempting to develop a unifying scheme for defining and using such properties. Under different names, this type of property has been explored for centuries. A classical example of a stationarity condition is given by the Fermat theorem on local minima and maxima of differentiable functions. In a sense, any necessary optimality (extremality) conditions define/characterize certain stationarity (singularity/irregularity) properties. The separation theorem also characterizes a kind of extremal (stationary) behavior of convex sets. Surjectivity of a linear continuous mapping in the Banach open mapping theorem (and its extension to nonlinear mappings known as the Lyusternik-Graves theorem) is an example of a regularity condition.
  • Subdifferentiability and the Duality Gap
    Subdifferentiability and the Duality Gap. Neil E. Gretsky ([email protected]), Department of Mathematics, University of California, Riverside. Joseph M. Ostroy ([email protected]), Department of Economics, University of California, Los Angeles. William R. Zame ([email protected]), Department of Economics, University of California, Los Angeles. Abstract. We point out a connection between sensitivity analysis and the fundamental theorem of linear programming by characterizing when a linear programming problem has no duality gap. The main result is that the value function is subdifferentiable at the primal constraint if and only if there exists an optimal dual solution and there is no duality gap. To illustrate the subtlety of the condition, we extend Kretschmer's gap example to construct (as the value function of a linear programming problem) a convex function which is subdifferentiable at a point but is not continuous there. We also apply the theorem to the continuum version of the assignment model. Keywords: duality gap, value function, subdifferentiability, assignment model. AMS codes: 90C48, 46N10. 1. Introduction. The purpose of this note is to point out a connection between sensitivity analysis and the fundamental theorem of linear programming. The subject has received considerable attention and the connection we find is remarkably simple. In fact, our observation in the context of convex programming follows as an application of conjugate duality [11, Theorem 16]. Nevertheless, it is useful to give a separate proof since the conclusion is more readily established and its import for linear programming is more clearly seen. The main result (Theorem 1) is that in a linear programming problem there exists an optimal dual solution and there is no duality gap if and only if the value function is subdifferentiable at the primal constraint.
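To see the main result in action numerically, here is a hedged sketch (my illustration, not from the paper; the LP data are invented and SciPy's linprog is assumed available): differencing the value function v(b) = min{cᵗx : Ax ≤ b, x ≥ 0} around b recovers its derivative, whose negative is the optimal dual variable (shadow price).

    import numpy as np
    from scipy.optimize import linprog

    c = [-1.0, -2.0]      # maximize x1 + 2*x2, written as a minimization
    A = [[1.0, 1.0]]

    def value(b):
        # v(b) = min { c^T x : x1 + x2 <= b, x >= 0 }
        return linprog(c, A_ub=A, b_ub=[b], bounds=[(0, None)] * 2).fun

    b, d = 1.0, 1e-4
    slope = (value(b + d) - value(b - d)) / (2 * d)
    print(slope)  # about -2.0: v is subdifferentiable at b; 2.0 is the optimal dual

Here v(b) = -2b is linear, so the subdifferential is the single slope -2; in degenerate LPs the one-sided differences would disagree and v is merely subdifferentiable.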
  • arXiv:2011.09194v1 [math.OC]
    Lagrangian duality for nonconvex optimization problems with abstract convex functions. Ewa M. Bednarczuk · Monika Syga. Abstract. We investigate Lagrangian duality for nonconvex optimization problems. To this aim we use the Φ-convexity theory and a minimax theorem for Φ-convex functions. We provide conditions for zero duality gap and strong duality. Among the classes of functions to which our duality results can be applied are prox-bounded functions, DC functions, weakly convex functions and paraconvex functions. Keywords: Abstract convexity · Minimax theorem · Lagrangian duality · Nonconvex optimization · Zero duality gap · Weak duality · Strong duality · Prox-regular functions · Paraconvex and weakly convex functions. 1. Introduction. Lagrangian and conjugate dualities have far-reaching consequences for solution methods and theory in convex optimization in finite and infinite dimensional spaces. For the recent state of the art of the topic of convex conjugate duality we refer the reader to the monograph by Radu Boţ [5]. There exist numerous attempts to construct pairs of dual problems in nonconvex optimization, e.g., for DC functions [19], [34], for composite functions [8], DC and composite functions [30], [31], and for prox-bounded functions [15]. In the present paper we investigate Lagrange duality for general optimization problems within the framework of abstract convexity, namely, within the theory of Φ-convexity. The class of Φ-convex functions encompasses convex l.s.c. functions.
  • Lagrangian Duality and Perturbational Duality I
    Lagrangian duality and perturbational duality I. Erik J. Balder. Our approach to the Karush-Kuhn-Tucker theorem in [OSC] was entirely based on subdifferential calculus (essentially, it was an outgrowth of the two subdifferential calculus rules contained in the Fenchel-Moreau and Dubovitskii-Milyutin theorems, i.e., Theorems 2.9 and 2.17 of [OSC]). On the other hand, Proposition B.4(v) in [OSC] gives an intimate connection between the subdifferential of a function and the Fenchel conjugate of that function. In the present set of lecture notes this connection forms the central analytical tool by which one can study the connections between an optimization problem and its so-called dual optimization problem (such connections are commonly known as duality relations). We shall first study duality for the convex optimization problem that figured in our Karush-Kuhn-Tucker results. In this simple form such duality is known as Lagrangian duality. Next, in section 2 this is followed by a far-reaching extension of duality to abstract optimization problems, which leads to duality-stability relationships. Then, in section 3 we specialize duality to optimization problems with cone-type constraints, which includes Fenchel duality for semidefinite programming problems. 1. Lagrangian duality. An interesting and useful interpretation of the KKT theorem can be obtained in terms of the so-called duality principle (or duality relationships) for convex optimization. Recall our standard convex minimization problem as we had it in [OSC]:

(P)  inf_{x∈S} { f(x) : g1(x) ≤ 0, …, gm(x) ≤ 0, Ax - b = 0 }

and recall that we allow the functions f, g1, …, gm on Rⁿ to have values in (-∞, +∞].
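For orientation (supplied here, since the excerpt stops just after stating (P)), the standard Lagrangian dual construction for (P) runs as follows; v is unrestricted in sign because it multiplies the equality constraint:

L(x, u, v) = f(x) + Σ_{i=1}^m ui gi(x) + vᵗ(Ax - b),   u ≥ 0
θ(u, v) = inf_{x∈S} L(x, u, v)    [the dual function]
(D)  sup { θ(u, v) : u ≥ 0, v free }

Weak duality θ(u, v) ≤ f(x) holds for every feasible x and every u ≥ 0, so val(D) ≤ val(P); strong duality concerns when equality holds.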
  • Monotonic Transformations: Cardinal Versus Ordinal Utility
    Natalia Lazzati. Mathematics for Economics (Part I). Note 10: Quasiconcave and Pseudoconcave Functions. Note 10 is based on Madden (1986, Ch. 13, 14) and Simon and Blume (1994, Ch. 21). Monotonic transformations: cardinal versus ordinal utility. A utility function could be said to measure the level of satisfaction associated to each commodity bundle. Nowadays, no economist really believes that a real number can be assigned to each commodity bundle which expresses (in utils?) the consumer's level of satisfaction with this bundle. Economists believe that consumers have well-behaved preferences over bundles and that, given any two bundles, a consumer can indicate a preference of one over the other or indifference between the two. Although economists work with utility functions, they are concerned with the level sets of such functions, not with the number that the utility function assigns to any given level set. In consumer theory these level sets are called indifference curves. A property of utility functions is called ordinal if it depends only on the shape and location of a consumer's indifference curves. It is alternatively called cardinal if it also depends on the actual amount of utility the utility function assigns to each indifference set. In the modern approach, we say that two utility functions are equivalent if they have the same indifference sets, although they may assign different numbers to each level set. For instance, let u : R²₊ → R where u(x) = x1x2 be a utility function, and let v : R²₊ → R be the utility function v(x) = u(x) + 1. These two utility functions represent the same preferences and are therefore equivalent.
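A tiny sanity check of this equivalence (an illustrative Python sketch; the random bundles are an assumption of the demo): u and its monotonic transformation v rank every pair of bundles identically.

    import numpy as np

    u = lambda x: x[0] * x[1]   # Cobb-Douglas-style utility u(x) = x1*x2
    v = lambda x: u(x) + 1      # a monotonic transformation of u

    rng = np.random.default_rng(0)
    bundles = rng.uniform(0.1, 10.0, size=(50, 2))

    # Equivalent utilities induce identical rankings over all pairs of bundles:
    agree = all((u(a) >= u(b)) == (v(a) >= v(b)) for a in bundles for b in bundles)
    print(agree)  # True: same indifference sets, hence the same preferences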
  • The Ekeland Variational Principle, the Bishop-Phelps Theorem, and the Brøndsted-Rockafellar Theorem
    The Ekeland Variational Principle, the Bishop-Phelps Theorem, and the Brøndsted-Rockafellar Theorem. Our aim is to prove the Ekeland Variational Principle, which is an abstract result that has found numerous applications in various fields of mathematics. As an application to convex analysis, we provide a proof of the famous Bishop-Phelps Theorem and some related results. Let us recall that the epigraph and the hypograph of a function f : X → [-∞, +∞] (where X is a set) are the following subsets of X × R:

epi f = {(x, t) ∈ X × R : t ≥ f(x)},   hyp f = {(x, t) ∈ X × R : t ≤ f(x)}.

Notice that, if f > -∞ on X, then epi f ≠ ∅ if and only if f is proper, that is, f is finite at at least one point. 0.1. The Ekeland Variational Principle. In what follows, (X, d) is a metric space. Given λ > 0 and (x, t) ∈ X × R, we define the set

Kλ(x, t) = {(y, s) : s ≤ t - λd(x, y)} = {(y, s) : λd(x, y) ≤ t - s} ⊂ X × R.

Notice that Kλ(x, t) = hyp[t - λd(x, ·)]. Let us state some useful properties of these sets. Lemma 0.1. (a) Kλ(x, t) is closed and contains (x, t). (b) If (x̄, t̄) ∈ Kλ(x, t), then Kλ(x̄, t̄) ⊂ Kλ(x, t). (c) If (xn, tn) → (x, t) and (xn+1, tn+1) ∈ Kλ(xn, tn) (n ∈ N), then Kλ(x, t) = ∩_{n∈N} Kλ(xn, tn). Proof. (a) is quite easy.
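For reference, the principle these notes set out to prove can be stated as follows (the standard formulation, added for context because the excerpt ends before the statement): let (X, d) be a complete metric space and let f : X → (-∞, +∞] be proper, lower semicontinuous, and bounded below. Given ε > 0 and x0 ∈ X with f(x0) ≤ inf_X f + ε, for every λ > 0 there exists x̄ ∈ X such that (i) f(x̄) ≤ f(x0), (ii) d(x̄, x0) ≤ λ, and (iii) f(x) > f(x̄) - (ε/λ)d(x, x̄) for all x ≠ x̄. The sets Kλ(x, t) above encode exactly the ordering used to produce such a point x̄.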
  • Bounding the Duality Gap for Problems with Separable Objective
    Bounding the Duality Gap for Problems with Separable Objective. Madeleine Udell and Stephen Boyd. March 8, 2014. Abstract. We consider the problem of minimizing a sum of non-convex functions over a compact domain, subject to linear inequality and equality constraints. We consider approximate solutions obtained by solving a convexified problem, in which each function in the objective is replaced by its convex envelope. We propose a randomized algorithm to solve the convexified problem which finds an ε-suboptimal solution to the original problem. With probability 1, ε is bounded by a term proportional to the number of constraints in the problem. The bound does not depend on the number of variables in the problem or the number of terms in the objective. In contrast to previous related work, our proof is constructive, self-contained, and gives a bound that is tight. 1. Problem and results. The problem. We consider the optimization problem

(P)  minimize f(x) = Σ_{i=1}^n fi(xi)
     subject to Ax ≤ b, Gx = h,

with variable x = (x1, …, xn) ∈ Rᴺ, where xi ∈ R^{ni}, with Σ_{i=1}^n ni = N. There are m1 linear inequality constraints, so A ∈ R^{m1×N}, and m2 linear equality constraints, so G ∈ R^{m2×N}. The optimal value of (P) is denoted p*. The objective function terms are lower semi-continuous on their domains: fi : Si → R, where Si ⊂ R^{ni} is a compact set. We say that a point x is feasible (for P) if Ax ≤ b, Gx = h, and xi ∈ Si, i = 1, …, n.
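Because the convexified problem replaces each fi by its convex envelope, the following one-dimensional sketch may help (my own code under simplifying assumptions, not the paper's algorithm): on a sample grid, the envelope is the lower convex hull of the function's graph.

    import numpy as np

    def convex_envelope(xs, ys):
        # Lower convex hull of sampled graph points (xs must be sorted ascending).
        hull = []
        for px, py in zip(xs, ys):
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # Pop hull[-1] while the turn is not strictly convex (counterclockwise):
                if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) <= 0:
                    hull.pop()
                else:
                    break
            hull.append((px, py))
        hx, hy = zip(*hull)
        return np.interp(xs, hx, hy)   # piecewise-linear envelope on the grid

    xs = np.linspace(-2.0, 2.0, 401)
    f = np.minimum((xs + 1) ** 2, (xs - 1) ** 2 + 0.3)   # non-convex: two wells
    env = convex_envelope(xs, f)
    print(float((f - env).max()))  # largest pointwise convexification error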
  • Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
    Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex. Hongyang Zhang, Junru Shao, Ruslan Salakhutdinov, Carnegie Mellon University. [email protected], [email protected], [email protected]. Abstract. Several recently proposed architectures of neural networks such as ResNeXt, Inception, Xception, SqueezeNet and Wide ResNet are based on the design idea of having multiple branches and have demonstrated improved performance in many applications. We show that one cause for such success is that the multi-branch architecture is less non-convex in terms of duality gap. The duality gap measures the degree of intrinsic non-convexity of an optimization problem: a smaller gap in relative value implies a lower degree of intrinsic non-convexity. The challenge is to quantitatively measure the duality gap of highly non-convex problems such as deep neural networks. In this work, we provide strong guarantees of this quantity for two classes of network architectures. For neural networks with arbitrary activation functions, multi-branch architecture and a variant of hinge loss, we show that the duality gap of both population and empirical risks shrinks to zero as the number of branches increases. This result sheds light on the power of over-parametrization, where increasing the network width tends to make the loss surface less non-convex. For neural networks with linear activation function and ℓ2 loss, we show that the duality gap of the empirical risk is zero. Our two results work for arbitrary depths and adversarial data, while the analytical techniques might be of independent interest to non-convex optimization more broadly.
  • A Tutorial on Convex Optimization II: Duality and Interior Point Methods
    A Tutorial on Convex Optimization II: Duality and Interior Point Methods. Haitham Hindi, Palo Alto Research Center (PARC), Palo Alto, California 94304. Email: [email protected]. Abstract: In recent years, convex optimization has become a computational tool of central importance in engineering, thanks to its ability to solve very large, practical engineering problems reliably and efficiently. The goal of this tutorial is to continue the overview of modern convex optimization from where our ACC2004 Tutorial on Convex Optimization left off, to cover important topics that were omitted there due to lack of space and time, and to highlight the intimate connections between them. The topics of duality and interior point algorithms will be our focus, along with simple examples. The material in this tutorial is excerpted from the recent book on convex optimization by Boyd and Vandenberghe, who have made available a large amount of free course material and freely available software. These can be downloaded and used immediately by the reader both for self-study and to solve real problems. For detailed examples and applications, the reader is referred to [8], [2], [6], [5], [7], [10], [12], [17], [9], [25], [16], [31], and the references therein. We now briefly outline the paper. There are two main sections after this one. Section II is on duality, where we summarize the key ideas of the general theory, illustrating the four main practical applications of duality with simple examples. Section III is on interior point algorithms, where the focus is on barrier methods, which can be implemented easily using only a few key technical components, and yet are highly effective both in theory and in practice. All of the theory we cover can be readily extended to general conic
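As a taste of Section III's barrier methods, here is a minimal sketch (my illustration, assuming SciPy; the toy problem, minimizing (x - 2)² subject to x ≤ 1, is invented): increasing the barrier parameter t traces the central path toward the constrained optimum at x = 1.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Log-barrier subproblem: min_x (x - 2)^2 - (1/t) * log(1 - x), over x < 1.
    for t in [1, 10, 100, 1000]:
        res = minimize_scalar(
            lambda x, t=t: (x - 2.0) ** 2 - (1.0 / t) * np.log(1.0 - x),
            bounds=(-5.0, 1.0 - 1e-9), method="bounded",
        )
        print(t, round(res.x, 4))  # x(t) -> 1.0, the constrained optimum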
  • Lecture 11: October 8 11.1 Primal and Dual Problems
    10-725/36-725: Convex Optimization, Fall 2015. Lecture 11: October 8. Lecturer: Ryan Tibshirani. Scribes: Tian Tong. Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor. 11.1 Primal and dual problems. 11.1.1 Lagrangian. Consider a general optimization problem (called the primal problem):

min_x f(x)   (11.1)
subject to hi(x) ≤ 0, i = 1, …, m
           ℓj(x) = 0, j = 1, …, r.

We define its Lagrangian as

L(x, u, v) = f(x) + Σ_{i=1}^m ui hi(x) + Σ_{j=1}^r vj ℓj(x),

with Lagrange multipliers u ∈ Rᵐ, v ∈ Rʳ. Lemma 11.1. At each feasible x, f(x) = sup_{u≥0, v} L(x, u, v), and the supremum is attained iff u ≥ 0 satisfies ui hi(x) = 0, i = 1, …, m. Proof: At each feasible x, we have hi(x) ≤ 0 and ℓj(x) = 0, thus L(x, u, v) = f(x) + Σ_{i=1}^m ui hi(x) + Σ_{j=1}^r vj ℓj(x) ≤ f(x). The last inequality becomes an equality iff ui hi(x) = 0, i = 1, …, m. Proposition 11.2. The optimal value of the primal problem, denoted f*, satisfies f* = inf_x sup_{u≥0, v} L(x, u, v). Proof: First, considering feasible x (written x ∈ C), we have f* = inf_{x∈C} f(x) = inf_{x∈C} sup_{u≥0, v} L(x, u, v). Second, considering non-feasible x: since sup_{u≥0, v} L(x, u, v) = ∞ for any x ∉ C, we get inf_{x∉C} sup_{u≥0, v} L(x, u, v) = ∞.
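A compact worked instance of these definitions (my own toy example, not from the notes): for the primal min x² subject to 1 - x ≤ 0 we have f* = 1 at x = 1; the Lagrangian is L(x, u) = x² + u(1 - x), and minimizing over x gives the dual function in closed form, whose maximum over u ≥ 0 matches f*, so there is no duality gap here.

    import numpy as np

    # L(x, u) = x^2 + u * (1 - x); the inner minimum over x is at x = u/2,
    # so the dual function is g(u) = u - u^2 / 4, concave in u.
    u = np.linspace(0.0, 4.0, 401)
    g = u - u ** 2 / 4.0
    i = g.argmax()
    print(g[i], u[i])  # 1.0 at u = 2.0: sup of the dual equals f* = 1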
  • Stability of the Duality Gap in Linear Optimization
    Set-Valued and Variational Analysis. Stability of the duality gap in linear optimization. M.A. Goberna · A.B. Ridolfi · V.N. Vera de Serio. Abstract. In this paper we consider the duality gap function g that measures the difference between the optimal values of the primal problem and of the dual problem in linear programming and in linear semi-infinite programming. We analyze its behavior when the data defining these problems may be perturbed, considering seven different scenarios. In particular we find some stability results by proving that, under mild conditions, either the duality gap of the perturbed problems is zero or +∞ around the given data, or g has an infinite jump at it. We also give conditions guaranteeing that those data providing a finite duality gap are limits of sequences of data providing zero duality gap for sufficiently small perturbations, which is a generic result. Keywords: Linear programming · Linear semi-infinite programming · Duality gap function · Stability · Primal-dual partition. Mathematics Subject Classifications (2010): 90C05, 90C34, 90C31, 90C46. 1. Introduction. Linear optimization consists in the minimization of a linear objective function subject to linear constraints. Here the duality gap plays an important role both for theoretical and for practical purposes. (This research was partially supported by MINECO of Spain and FEDER of EU, Grant MTM2014-59179-C2-01 and SECTyP-UNCuyo Res. 4540/13-R. M.A. Goberna, Department of Mathematics, University of Alicante, 03080 Alicante, Spain, e-mail: [email protected]. A.B. Ridolfi, Universidad Nacional de Cuyo, Facultad de Ciencias Aplicadas a la Industria, Facultad de Ciencias Económicas, Mendoza, Argentina.)