Convex Optimization 5


Convex Optimization 5. Duality
Prof. Ying Cui
Department of Electrical Engineering, Shanghai Jiao Tong University, 2018

Outline
◮ Lagrange dual function
◮ Lagrange dual problem
◮ Geometric interpretation
◮ Optimality conditions
◮ Perturbation and sensitivity analysis
◮ Examples
◮ Generalized inequalities

Lagrangian

standard form problem (not necessarily convex)

    min_x  f_0(x)
    s.t.   f_i(x) ≤ 0,  i = 1, ..., m
           h_i(x) = 0,  i = 1, ..., p

with domain D = ∩_{i=0}^m dom f_i ∩ ∩_{i=1}^p dom h_i and optimal value p*

◮ basic idea in Lagrangian duality: take the constraints into account by augmenting the objective function with a weighted sum of the constraint functions

Lagrangian: L : R^n × R^m × R^p → R, with dom L = D × R^m × R^p,

    L(x, λ, ν) = f_0(x) + ∑_{i=1}^m λ_i f_i(x) + ∑_{i=1}^p ν_i h_i(x)

◮ a weighted sum of the objective and constraint functions
◮ λ_i is the Lagrange multiplier associated with f_i(x) ≤ 0
◮ ν_i is the Lagrange multiplier associated with h_i(x) = 0

Lagrange dual function

Lagrange dual function (or dual function): g : R^m × R^p → R,

    g(λ, ν) = inf_{x∈D} L(x, λ, ν) = inf_{x∈D} ( f_0(x) + ∑_{i=1}^m λ_i f_i(x) + ∑_{i=1}^p ν_i h_i(x) )

◮ g is concave even when the problem is not convex, since it is the pointwise infimum of a family of affine functions of (λ, ν)
◮ the pointwise minimum or infimum of concave functions is concave
◮ g can be −∞ when L is unbounded below in x

Lower bound property

The dual function yields lower bounds on the optimal value of the primal problem: for any λ ⪰ 0 and any ν,

    g(λ, ν) ≤ p*

◮ the inequality always holds but is vacuous when g(λ, ν) = −∞
◮ the dual function gives a nontrivial lower bound only when λ ⪰ 0 and (λ, ν) ∈ dom g, i.e., g(λ, ν) > −∞
◮ (λ, ν) with λ ⪰ 0 and (λ, ν) ∈ dom g is called dual feasible

proof: suppose x̃ is feasible, i.e., f_i(x̃) ≤ 0 and h_i(x̃) = 0, and λ ⪰ 0.
Then,

    ∑_{i=1}^m λ_i f_i(x̃) + ∑_{i=1}^p ν_i h_i(x̃) ≤ 0   ⟹   L(x̃, λ, ν) ≤ f_0(x̃)

Hence,

    g(λ, ν) = inf_{x∈D} L(x, λ, ν) ≤ L(x̃, λ, ν) ≤ f_0(x̃)

Minimizing the right-hand side over all feasible x̃ gives p* ≥ g(λ, ν).

Examples

Least-norm solution of linear equations

    min_x  x^T x
    s.t.   Ax = b

dual function:
◮ to minimize L(x, ν) = x^T x + ν^T (Ax − b) over x (an unconstrained convex problem), set the gradient equal to zero:

    ∇_x L(x, ν) = 2x + A^T ν = 0   ⟹   x = −(1/2) A^T ν

◮ plug this x into L to obtain g:

    g(ν) = L(−(1/2) A^T ν, ν) = −(1/4) ν^T A A^T ν − b^T ν

which is a concave quadratic function of ν, since −A A^T ⪯ 0

lower bound property: p* ≥ −(1/4) ν^T A A^T ν − b^T ν for all ν

Standard form LP

    min_x  c^T x
    s.t.   Ax = b,  x ⪰ 0

dual function:
◮ the Lagrangian

    L(x, λ, ν) = c^T x + ν^T (Ax − b) − λ^T x = −b^T ν + (c + A^T ν − λ)^T x

is affine in x, hence bounded below only when the linear coefficient c + A^T ν − λ is identically zero
◮ dual function

    g(λ, ν) = inf_x L(x, λ, ν) = { −b^T ν,  if A^T ν − λ + c = 0
                                 { −∞,      otherwise

lower bound property: nontrivial only when λ ⪰ 0 and A^T ν − λ + c = 0; hence p* ≥ −b^T ν if A^T ν + c ⪰ 0

Two-way partitioning problem (W ∈ S^n)

    min_x  x^T W x
    s.t.   x_i^2 = 1,  i = 1, ..., n

◮ a nonconvex problem with 2^n discrete feasible points
◮ find the two-way partition of {1, ..., n} with least total cost
◮ W_ij is the cost of assigning i, j to the same set
◮ −W_ij is the cost of assigning i, j to different sets

dual function:

    g(ν) = inf_x ( x^T W x + ∑_i ν_i (x_i^2 − 1) )
         = inf_x x^T (W + diag(ν)) x − 1^T ν
         = { −1^T ν,  if W + diag(ν) ⪰ 0
           { −∞,      otherwise

lower bound property: p* ≥ −1^T ν if W + diag(ν) ⪰ 0
example: ν = −λ_min(W) 1 gives the bound p* ≥ n λ_min(W)

Lagrange dual function and conjugate function

◮ conjugate f* of a function f : R^n → R:

    f*(y) = sup_{x∈dom f} ( y^T x − f(x) )

◮ dual function of

    min_x  f_0(x)
    s.t.
           x = 0

    g(ν) = inf_x ( f_0(x) + ν^T x ) = − sup_x ( (−ν)^T x − f_0(x) )

◮ relationship: g(ν) = −f_0*(−ν)
◮ the conjugate of any function is convex
◮ the dual function of any problem is concave

More generally (and more usefully), consider an optimization problem with linear inequality and equality constraints

    min_x  f_0(x)
    s.t.   Ax ⪯ b,  Cx = d

dual function:

    g(λ, ν) = inf_{x∈dom f_0} ( f_0(x) + λ^T (Ax − b) + ν^T (Cx − d) )
            = inf_{x∈dom f_0} ( f_0(x) + (A^T λ + C^T ν)^T x − b^T λ − d^T ν )
            = −f_0*(−A^T λ − C^T ν) − b^T λ − d^T ν

the domain of g follows from the domain of f_0*:

    dom g = { (λ, ν) | −A^T λ − C^T ν ∈ dom f_0* }

◮ this simplifies the derivation of the dual function if the conjugate of f_0 is known

Examples

Equality constrained norm minimization

    min_x  ‖x‖
    s.t.   Ax = b

dual function:

    g(ν) = −b^T ν − f_0*(−A^T ν) = { −b^T ν,  if ‖A^T ν‖_* ≤ 1
                                   { −∞,      otherwise

◮ conjugate of f_0 = ‖·‖:

    f_0*(y) = { 0,  if ‖y‖_* ≤ 1
              { ∞,  otherwise

i.e., the indicator function of the dual norm unit ball, where ‖y‖_* = sup_{‖u‖≤1} u^T y is the dual norm of ‖·‖

Lagrange dual problem

    max_{λ,ν}  g(λ, ν)
    s.t.       λ ⪰ 0

◮ find the best lower bound on p* obtainable from the Lagrange dual function
◮ always a convex optimization problem (maximize a concave function over a convex set), regardless of the convexity of the primal problem; its optimal value is denoted d*
◮ (λ, ν) is dual feasible if λ ⪰ 0 and g(λ, ν) > −∞ (i.e., (λ, ν) ∈ dom g = {(λ, ν) | g(λ, ν) > −∞})
◮ can often be simplified by making the implicit constraint (λ, ν) ∈ dom g explicit, e.g., the standard form LP and its dual:

    min_x  c^T x                 max_ν  −b^T ν
    s.t.   Ax = b,  x ⪰ 0        s.t.   A^T ν + c ⪰ 0

Weak duality and strong duality

weak duality: d* ≤ p*
◮ always holds (for convex and nonconvex problems)
◮ can be used to find nontrivial lower bounds for difficult problems, e.g., solving the SDP

    max_ν  −1^T ν
    s.t.
           W + diag(ν) ⪰ 0

gives a lower bound for the two-way partitioning problem

strong duality: d* = p*
◮ does not hold in general
◮ (usually) holds for convex problems
◮ conditions that guarantee strong duality in convex problems are called constraint qualifications
◮ many types of constraint qualifications exist

Slater’s constraint qualification

One simple constraint qualification is Slater’s condition (Slater’s constraint qualification): the convex problem is strictly feasible, i.e., there exists an x ∈ int D such that

    f_i(x) < 0,  i = 1, ..., m,   Ax = b

◮ can be refined, e.g.,
  ◮ int D can be replaced by relint D (the interior relative to the affine hull)
  ◮ affine inequalities do not need to hold with strict inequality
  ◮ reduces to plain feasibility when all constraints are affine equalities and inequalities
◮ implies strong duality for convex problems
◮ implies that the dual optimal value is attained when d* > −∞, i.e., there exists a dual feasible (λ*, ν*) with g(λ*, ν*) = d* = p*

Examples

Inequality form LP

primal problem:

    min_x  c^T x
    s.t.   Ax ⪯ b

dual function:

    g(λ) = inf_x ( (c + A^T λ)^T x − b^T λ ) = { −b^T λ,  if A^T λ + c = 0
                                              { −∞,      otherwise

dual problem:

    max_λ  −b^T λ
    s.t.   A^T λ + c = 0,  λ ⪰ 0

◮ from the weaker form of Slater’s condition: strong duality holds for any LP provided the primal problem is feasible; by symmetry, strong duality also holds for any LP whose dual is feasible
◮ in fact, p* = d* except when both primal and dual are infeasible

Quadratic program (P ∈ S^n_{++})

    min_x  x^T P x
    s.t.   Ax ⪯ b

dual function:

    g(λ) = inf_x ( x^T P x + λ^T (Ax − b) ) = −(1/4) λ^T A P^{−1} A^T λ − b^T λ

dual problem:

    max_λ  −(1/4) λ^T A P^{−1} A^T λ − b^T λ
    s.t.   λ ⪰ 0

◮ from the weaker form of Slater’s condition: strong duality holds provided the primal problem is feasible
◮ in fact, p* = d* always holds

A nonconvex problem with strong duality (A ⋡ 0, i.e., A not positive semidefinite)

    min_x  x^T A x + 2 b^T x
    s.t.
           x^T x ≤ 1

dual function:

    g(λ) = inf_x ( x^T (A + λI) x + 2 b^T x − λ )
         = { −b^T (A + λI)† b − λ,  if A + λI ⪰ 0 and b ∈ R(A + λI)
           { −∞,                    otherwise

dual problem and equivalent SDP:

    max_λ  −b^T (A + λI)† b − λ              max_{λ,t}  −t − λ
    s.t.   A + λI ⪰ 0,  b ∈ R(A + λI)        s.t.       [ A + λI   b  ]
                                                        [ b^T      t  ] ⪰ 0

◮ strong duality holds although the primal problem is nonconvex (this is difficult to show)

Geometric interpretation

geometric interpretation via the set of values taken on by the constraint and objective functions:

    G = { (f_1(x), ..., f_m(x), h_1(x), ..., h_p(x), f_0(x)) ∈ R^m × R^p × R | x ∈ D }

◮ optimal value: p* = inf{ t | (u, v, t) ∈ G, u ⪯ 0, v = 0 }
◮ dual function: g(λ, ν) = inf{ (λ, ν, 1)^T (u, v, t) | (u, v, t) ∈ G }
◮ if the infimum is finite, then (λ, ν, 1)^T (u, v, t) ≥ g(λ, ν) defines a nonvertical supporting hyperplane to G
◮ weak duality: for all λ ⪰ 0,

    p* = inf{ t | (u, v, t) ∈ G, u ⪯ 0, v = 0 }
       ≥ inf{ (λ, ν, 1)^T (u, v, t) | (u, v, t) ∈ G, u ⪯ 0, v = 0 }
       ≥ inf{ (λ, ν, 1)^T (u, v, t) | (u, v, t) ∈ G }
       = g(λ, ν)

Geometric interpretation: example

consider a simple problem with one constraint,

    min_x  f_0(x)
    s.t.   f_1(x) ≤ 0

with p* = inf{ t | (u, t) ∈ G, u ≤ 0 }
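The least-norm example from the slides can be checked numerically. The sketch below (assuming NumPy; A and b are randomly generated hypothetical data with A full row rank) verifies the lower-bound property g(ν) ≤ p* and checks that the bound is tight at ν* = −2(AAᵀ)⁻¹b, the maximizer of the concave quadratic g:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))          # fat matrix, full row rank (hypothetical data)
b = rng.standard_normal(3)

# Primal optimum of min x^T x s.t. Ax = b is x* = A^T (A A^T)^{-1} b.
x_star = A.T @ np.linalg.solve(A @ A.T, b)
p_star = x_star @ x_star

def g(nu):
    # Dual function from the slides: g(nu) = -(1/4) nu^T A A^T nu - b^T nu.
    return -0.25 * nu @ (A @ A.T) @ nu - b @ nu

# Lower bound property: g(nu) <= p* for every nu ...
for _ in range(100):
    assert g(rng.standard_normal(3)) <= p_star + 1e-9

# ... and the bound is tight at nu* = -2 (A A^T)^{-1} b (the maximizer of g).
nu_star = -2.0 * np.linalg.solve(A @ A.T, b)
assert np.isclose(g(nu_star), p_star)
```

Tightness here reflects strong duality, which holds for this convex problem by Slater's condition (it has only affine constraints and is feasible).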
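The two-way partitioning bound p* ≥ n λ_min(W) can be verified by brute force for small n, since the primal has only 2^n feasible points. A sketch assuming NumPy; W is a random symmetric matrix used as hypothetical data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                      # random symmetric cost matrix W in S^n

# Primal optimal value by brute force over all 2^n sign vectors x in {-1, +1}^n.
p_star = min(np.array(s) @ W @ np.array(s)
             for s in itertools.product((-1.0, 1.0), repeat=n))

# Dual lower bound from nu = -lambda_min(W) * 1: p* >= n * lambda_min(W).
bound = n * np.linalg.eigvalsh(W)[0]     # eigvalsh returns eigenvalues in ascending order
assert bound <= p_star + 1e-9
```

The gap between `bound` and `p_star` is generally nonzero; the SDP mentioned under weak duality optimizes over all ν with W + diag(ν) ⪰ 0 and can only improve on this particular choice of ν.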
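Strong duality for the inequality form LP can be observed numerically by solving the primal and its dual with a generic LP solver. A sketch assuming SciPy's `linprog`; the instance (a box-constrained problem with known optimum) is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Inequality form LP: minimize c^T x subject to A x <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])       # encodes 0 <= x <= 1 componentwise
c = np.array([-1.0, -2.0])               # optimum at x = (1, 1), p* = -3

primal = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)

# Dual: maximize -b^T lam s.t. A^T lam + c = 0, lam >= 0,
# solved as: minimize b^T lam s.t. A^T lam = -c, lam >= 0.
dual = linprog(b, A_eq=A.T, b_eq=-c, bounds=[(0, None)] * 4)

assert primal.status == 0 and dual.status == 0
assert np.isclose(primal.fun, -dual.fun)   # strong duality: p* = d*
```

Note that `linprog` defaults to nonnegative variables, so the primal must pass explicit free bounds; the dual's sign constraint λ ⪰ 0 matches the default.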
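For the nonconvex trust-region example, strong duality can be checked by maximizing the dual directly: for A + λI ≻ 0 the Lagrangian minimizer is x(λ) = −(A + λI)⁻¹b, the dual stationarity condition is ‖x(λ*)‖ = 1, and the recovered point then satisfies f_0(x*) = g(λ*), so d* = p*. The sketch below assumes NumPy and a hypothetical indefinite A (with b not orthogonal to the bottom eigenvector, avoiding the degenerate "hard case"); since ‖x(λ)‖ is decreasing in λ, bisection finds λ*:

```python
import numpy as np

# Hypothetical indefinite instance: A is not PSD, so the primal is nonconvex.
A = np.diag([-2.0, 1.0])
b = np.array([0.7, -0.3])

eigmin = np.linalg.eigvalsh(A)[0]            # lambda_min(A) = -2

def x_of(lam):
    # Minimizer of the Lagrangian for A + lam*I > 0.
    return -np.linalg.solve(A + lam * np.eye(2), b)

def g(lam):
    # Dual function: g(lam) = -b^T (A + lam*I)^{-1} b - lam.
    return -b @ np.linalg.solve(A + lam * np.eye(2), b) - lam

# ||x(lam)|| decreases in lam on (-eigmin, inf); bisect for ||x(lam)|| = 1.
lo, hi = -eigmin + 1e-9, -eigmin + 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if np.linalg.norm(x_of(mid)) > 1 else (lo, mid)
lam_star = (lo + hi) / 2
x_star = x_of(lam_star)

# Strong duality: x* is primal feasible (on the boundary) and f0(x*) = g(lam*).
assert abs(x_star @ x_star - 1.0) < 1e-6
assert np.isclose(x_star @ A @ x_star + 2 * b @ x_star, g(lam_star))
```

This scalar maximization is the practical counterpart of the SDP reformulation above: both compute d*, and by the strong duality result d* equals the nonconvex primal optimum p*.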