CSC 576: Optimization Foundations


Ji Liu
Department of Computer Sciences, University of Rochester
April 15, 2016

1 Introduction

We are interested in the following optimization problem:

    min_x  f(x) + g(x)    s.t.  x ∈ Ω,

where f(x) is a smooth convex function, g(x) is a closed convex function, and Ω is a closed convex set. The following introduces some basic definitions in optimization.

2 Closed / Open Set, Bounded Set, Convex Set

There are several equivalent ways to define open and closed sets. Here we only introduce one of them. Ω is an open set if ∀x ∈ Ω, ∃ε > 0 such that B_x(ε) ⊂ Ω, where B_x(ε) := {y | ‖y − x‖ ≤ ε} defines a ball with center x and radius ε. Ω is a closed set if its complement Ω^c is an open set. The empty set ∅ and ℝ^n are closed and open simultaneously. Ω is bounded if ∃ε > 0 such that Ω ⊂ B_0(ε).

A set Ω is convex if ∀x, y ∈ Ω and ∀θ ∈ [0, 1], the following holds:

    θx + (1 − θ)y ∈ Ω.

Two important properties of closed sets:

• The intersection of arbitrarily many closed sets is still closed;
• The union of finitely many closed sets is still closed.

Note that the union of infinitely many closed sets may not be closed any more. For example,

    ∪_{n=1}^∞ [1/n, 1] = (0, 1].

3 Convex Function, Closed Function

There are several ways to define a convex function. We introduce two equivalent definitions. Let f(x): ℝ^n ↦ ℝ ∪ {+∞}. f(x) is a convex function if ∀x, y and ∀θ ∈ [0, 1], the following inequality holds:

    f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y).

The epigraph of the function f(x) is defined as the set

    epi f = {(x, t) | f(x) ≤ t}.

"f(x) is a convex function" is equivalent to saying "epi f is a convex set". The function f(x) is closed if its epigraph epi f is closed. If f is a continuous function, then f is closed. A smooth function is continuous, and thus closed.

A few examples of convex functions. A linear function

    f_1(x) = aᵀx,

which is also closed. An indicator function

    f_2(x) = 0 if x ∈ Ω,  +∞ otherwise,

where Ω is a convex set.
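As a quick numerical illustration of the convexity condition θx + (1 − θ)y ∈ Ω, a set given by a membership oracle can be spot-checked on random convex combinations. This is our own sketch, not part of the notes; the helper `spot_check_convexity` and the two sample sets are illustrative names:

```python
import numpy as np

def spot_check_convexity(in_set, dim, trials=500, seed=0):
    """Sample pairs x, y inside the set and check that random convex
    combinations theta*x + (1-theta)*y stay inside. A False result proves
    non-convexity; True only means no counterexample was found."""
    rng = np.random.default_rng(seed)
    count = 0
    while count < trials:
        x = rng.uniform(-2.0, 2.0, dim)
        y = rng.uniform(-2.0, 2.0, dim)
        if not (in_set(x) and in_set(y)):
            continue  # keep only pairs that actually lie in the set
        theta = rng.uniform()
        if not in_set(theta * x + (1.0 - theta) * y):
            return False
        count += 1
    return True

# The closed unit ball B_0(1) is convex ...
ball = lambda z: np.linalg.norm(z) <= 1.0
# ... while a union of two disjoint intervals is not.
interval_union = lambda z: 1.0 <= abs(z[0]) <= 2.0

print(spot_check_convexity(ball, dim=2))            # True
print(spot_check_convexity(interval_union, dim=1))  # False
```

Such a test can only refute convexity, never certify it, but it is a cheap sanity check for the definitions above.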
This function is closed if Ω is closed. Next consider

    f_3(x) = −x if x ∈ [−1, 1),  1 if x = 1,  +∞ otherwise.

This function is convex but not closed.

If the function f(x) is smooth (or differentiable), then the following three statements are equivalent:

• f(x) is convex;
• ∀x, y, we have f(y) ≥ f(x) + ⟨∇f(x), y − x⟩;
• ∀x, y, we have ⟨∇f(x) − ∇f(y), x − y⟩ ≥ 0.

If the function f(x) is twice differentiable, then the following two statements are equivalent:

• f(x) is convex;
• ∇²f(x) is positive semidefinite (PSD) for any x.

Some important properties for preserving convexity:

• If f_1(x) and f_2(x) are convex, then f_1(x) + f_2(x) is also convex;
• If f_1(x) and f_2(x) are convex, then max{f_1(x), f_2(x)} is also convex;
• If f(x) is convex, then g(z) = f(Az + b) is also convex;
• If f(x, y) is jointly convex, then g(x) = min_y f(x, y) is convex.

4 Convexity for Differentiable Functions

Theorem 1. If a function f(x) is smooth (or differentiable), the following statements are equivalent, ∀x, y ∈ dom(f):

1. f(x) is convex;
2. f(y) ≥ f(x) + ⟨∇f(x), y − x⟩;
3. ⟨∇f(x) − ∇f(y), x − y⟩ ≥ 0.

Proof. We complete the proof by first showing that (1) and (2) are equivalent, and then (2) and (3).

(1)⇒(2): Choose any x, y ∈ dom(f) and consider f restricted to the line segment between them, i.e., the function g(t) = f(x + t(y − x)). Since f is convex, g is also convex. Notice that g(0) = f(x), g(1) = f(y), and g′(t) = ⟨∇f(x + t(y − x)), y − x⟩. From the definition of g′(0), we have

    g′(0) = lim_{t→0} [g(t) − g(0)]/t ≤ lim_{t→0} [(1 − t)g(0) + t·g(1) − g(0)]/t = g(1) − g(0),

where the inequality uses the convexity of g. Hence g(1) ≥ g(0) + g′(0), which implies f(y) ≥ f(x) + ⟨∇f(x), y − x⟩.

(2)⇒(1): Choose any x, y ∈ dom(f) and 0 ≤ t ≤ 1, and let z = tx + (1 − t)y. Applying (2) twice yields

    f(x) ≥ f(z) + ⟨∇f(z), x − z⟩,    f(y) ≥ f(z) + ⟨∇f(z), y − z⟩.

Multiplying the first inequality by t, the second by 1 − t, and adding them together yields

    tf(x) + (1 − t)f(y) ≥ f(z),

where the inner-product terms cancel because t(x − z) + (1 − t)(y − z) = 0. This proves that f(x) is convex.
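The three equivalent characterizations of smooth convexity can be sanity-checked on a concrete function. In this sketch (our own choice of example, not from the notes) we use the softplus function f(x) = log(1 + eˣ), whose gradient is the sigmoid:

```python
import numpy as np

# Softplus is smooth and convex: f''(x) = sigmoid(x) * (1 - sigmoid(x)) >= 0.
f = lambda x: np.log1p(np.exp(x))
grad = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoid = derivative of softplus

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y, theta = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform()
    # Definition of convexity (Jensen's inequality):
    assert f(theta * x + (1 - theta) * y) <= theta * f(x) + (1 - theta) * f(y) + 1e-12
    # First-order condition: f(y) >= f(x) + <grad f(x), y - x>
    assert f(y) >= f(x) + grad(x) * (y - x) - 1e-12
    # Gradient monotonicity: <grad f(x) - grad f(y), x - y> >= 0
    assert (grad(x) - grad(y)) * (x - y) >= -1e-12
print("all three characterizations hold at 1000 random points")
```

The small tolerances only absorb floating-point rounding; the inequalities themselves hold exactly for softplus.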
(2)⇒(3): Exchanging x and y in (2) yields

    f(x) ≥ f(y) + ⟨∇f(y), x − y⟩.

Adding this to (2) gives 0 ≥ ⟨∇f(x) − ∇f(y), y − x⟩, which is exactly (3) after negating the inner product.

(3)⇒(2): Use the same definition of g as above. Applying the Fundamental Theorem of Calculus (FTC) yields

    g(1) = g(0) + ∫₀¹ g′(t) dt.

By our definitions, that is

    f(y) = f(x) + ∫₀¹ ⟨∇f(x + t(y − x)), y − x⟩ dt
         = f(x) + ⟨∇f(x), y − x⟩ + ∫₀¹ ⟨∇f(x + t(y − x)) − ∇f(x), y − x⟩ dt.

Let z = x + t(y − x). The integrand in the last term becomes ⟨∇f(z) − ∇f(x), (1/t)(z − x)⟩, which is greater than or equal to 0 because of (3). Hence f(y) ≥ f(x) + ⟨∇f(x), y − x⟩.

5 Convexity of Twice Differentiable Functions

Theorem 2. If a function f(x) is twice differentiable, the following statements are equivalent, ∀x ∈ dom(f):

1. f(x) is convex;
2. ∇²f(x) is PSD.

Proof. (1)⇒(2): Since f is differentiable, Theorem 1 shows that

    ⟨∇f(y) − ∇f(x), y − x⟩ ≥ 0.

Define s = y − x and we have

    ⟨∇f(x + s) − ∇f(x), s⟩ ≥ 0.

Applying the FTC yields

    h(t) := (1/t) ∫₀ᵗ ⟨∇²f(x + λs)s, s⟩ dλ ≥ 0

for every t ∈ (0, 1]. As t approaches 0, we have

    lim_{t→0} h(t) = ⟨∇²f(x)s, s⟩ ≥ 0,

which means ∇²f(x) is PSD.

(2)⇒(1): The mean value theorem shows there exists λ ∈ [0, 1] such that

    f(y) = f(x) + ⟨∇f(x + λ(y − x)), y − x⟩
         = f(x) + ⟨∇f(x), y − x⟩ + ⟨∇f(x + λ(y − x)) − ∇f(x), y − x⟩.

Define g to be the last term above, i.e., g = ⟨∇f(x + λ(y − x)) − ∇f(x), y − x⟩. By statement (2) of Theorem 1, f is convex if g ≥ 0. Applying the FTC to g yields

    g = ⟨ ∫₀^λ ∇²f(x + t(y − x))(y − x) dt, y − x ⟩ = ∫₀^λ ⟨∇²f(x + t(y − x))(y − x), y − x⟩ dt.

Since ∇²f(x + t(y − x)) is PSD for every t, each integrand is nonnegative, hence g ≥ 0. This completes our proof.

6 Convexity of Smooth Convex Functions with Lipschitzian Gradient

A function f(x) has L-Lipschitzian gradient if it satisfies the following condition:

    f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖x − y‖²,  ∀x, y.

Theorem 3.
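The Hessian criterion of Theorem 2 is easy to test numerically when the Hessian is available in closed form. As a sketch (log-sum-exp is our own example, not from the notes), the convex function f(x) = log Σᵢ exp(xᵢ) has Hessian diag(p) − p pᵀ with p = softmax(x), and its eigenvalues should all be nonnegative:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())  # shift for numerical stability
    return z / z.sum()

def lse_hessian(x):
    """Hessian of the convex function f(x) = log(sum_i exp(x_i)),
    which has the closed form diag(p) - p p^T with p = softmax(x)."""
    p = softmax(x)
    return np.diag(p) - np.outer(p, p)

rng = np.random.default_rng(2)
for _ in range(100):
    H = lse_hessian(rng.normal(size=5))
    # PSD <=> all eigenvalues of the symmetric Hessian are nonnegative
    assert np.linalg.eigvalsh(H).min() >= -1e-12
print("log-sum-exp Hessian is PSD at 100 random points")
```

Note that the Hessian here is singular (p is in its null space), so the minimum eigenvalue sits at zero up to rounding; the tolerance absorbs that.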
If a convex function f(x) has L-Lipschitzian gradient, the following statements are equivalent, ∀x, y ∈ dom(f):

1. f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖²;
2. f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (1/(2L))‖∇f(x) − ∇f(y)‖²;
3. ⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/L)‖∇f(x) − ∇f(y)‖²;
4. ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖.

Proof. (1)⇒(2): Define

    φ(·) = f(·) − ⟨∇f(x), ·⟩.

Proving (2) is equivalent to proving

    φ(x) ≥ φ(y) − (1/(2L))‖∇f(x) − ∇f(y)‖² = φ(y) − (1/(2L))‖∇φ(y)‖².

It is easy to see that x = argmin_y φ(y), since ∇φ(x) = 0 and φ is convex. From (1), we know that φ(·) also has L-Lipschitzian gradient. Therefore,

    φ(x) = min_z φ(z) ≤ min_z [ φ(y) + ⟨∇φ(y), z − y⟩ + (L/2)‖z − y‖² ] = φ(y) − (1/(2L))‖∇φ(y)‖².

(2)⇒(3): Exchange x and y in (2) and sum the two inequalities to obtain (3).

(3)⇒(4): Applying the Cauchy–Schwarz inequality, we have ⟨∇f(x) − ∇f(y), x − y⟩ ≤ ‖∇f(x) − ∇f(y)‖ · ‖x − y‖. Combining this with (3) gives ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖.

(4)⇒(1): We have

    f(y) = f(x) + ∫₀¹ ⟨∇f(x + t(y − x)), y − x⟩ dt
         = f(x) + ⟨∇f(x), y − x⟩ + ∫₀¹ ⟨∇f(x + t(y − x)) − ∇f(x), y − x⟩ dt
         ≤ f(x) + ⟨∇f(x), y − x⟩ + ∫₀¹ tL‖y − x‖² dt    (due to (4))
         = f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖².

This proves (1).

7 Properties of Strongly Convex Functions

A function f(x) is l-strongly convex if it satisfies the following condition:

    f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (l/2)‖x − y‖²,  ∀x, y.

Theorem 4. If a convex function f(x) is l-strongly convex, the following statements are equivalent, ∀x, y ∈ dom(f):

1. f(y) − f(x) − ⟨∇f(x), y − x⟩ ≥ (l/2)‖y − x‖²;
2.
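For a quadratic f(x) = ½ xᵀQx with Q symmetric positive definite, all of these constants are explicit: ∇f(x) = Qx, the gradient Lipschitz constant is L = λ_max(Q), and the strong convexity parameter is l = λ_min(Q). The sketch below (our own worked example, not from the notes) verifies the four statements of Theorem 3 plus the strong convexity lower bound at random points:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
Q = A.T @ A + 0.1 * np.eye(4)        # symmetric positive definite
L = np.linalg.eigvalsh(Q).max()      # Lipschitz constant of grad f
l = np.linalg.eigvalsh(Q).min()      # strong convexity parameter

f = lambda x: 0.5 * x @ Q @ x        # f(x) = (1/2) x^T Q x
grad = lambda x: Q @ x

tol = 1e-9
for _ in range(500):
    x, y = rng.normal(size=4), rng.normal(size=4)
    gap = f(y) - f(x) - grad(x) @ (y - x)   # equals (1/2)(y-x)^T Q (y-x)
    d = grad(x) - grad(y)
    assert gap <= L / 2 * np.sum((y - x) ** 2) + tol             # statement 1
    assert gap >= np.sum(d ** 2) / (2 * L) - tol                 # statement 2
    assert d @ (x - y) >= np.sum(d ** 2) / L - tol               # statement 3
    assert np.linalg.norm(d) <= L * np.linalg.norm(x - y) + tol  # statement 4
    assert gap >= l / 2 * np.sum((y - x) ** 2) - tol             # strong convexity
print("Theorem 3 and the strong-convexity lower bound hold for the quadratic")
```

For quadratics the bounds are tight: statement 1 holds with equality when y − x is a top eigenvector of Q, and the strong convexity bound when it is a bottom eigenvector.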