Convex Analysis and Nonsmooth Optimization


Dmitriy Drusvyatskiy

October 22, 2020

Contents

1 Background
  1.1 Inner products and linear maps
  1.2 Norms
  1.3 Eigenvalue and singular value decompositions of matrices
  1.4 Set operations
  1.5 Point-set topology and existence of minimizers
  1.6 Differentiability
  1.7 Accuracy in approximation
  1.8 Optimality conditions for smooth optimization
  1.9 Rates of convergence

2 Convex geometry
  2.1 Operations preserving convexity
  2.2 Convex hull
  2.3 Affine hull and relative interior
  2.4 Separation theorem
  2.5 Cones and polarity
  2.6 Tangents and normals

3 Convex analysis
  3.1 Basic definitions and examples
  3.2 Convex functions from epigraphical operations
  3.3 The closed convex envelope
  3.4 The Fenchel conjugate
  3.5 Subgradients and subderivatives
    3.5.1 Subdifferential
    3.5.2 Subderivative
  3.6 Lipschitz continuity of convex functions
  3.7 Strong convexity, Moreau envelope, and the proximal map
  3.8 Monotone operators and the resolvent
    3.8.1 Notation and basic properties
    3.8.2 The resolvent and the Minty parametrization
    3.8.3 Proof of the surjectivity theorem

4 Subdifferential calculus and primal/dual problems
  4.1 The subdifferential of the value function
  4.2 Duality and subdifferential calculus
    4.2.1 Fenchel-Rockafellar duality
    4.2.2 Lagrangian duality
    4.2.3 Minimax duality
  4.3 Spectral functions
    4.3.1 Fenchel conjugate and the Moreau envelope
    4.3.2 Proximal map and the subdifferential
    4.3.3 Proof of the trace inequality
    4.3.4 Orthogonally invariant functions of rectangular matrices

5 First-order algorithms for black-box convex optimization
  5.1 Algorithms for smooth convex minimization
    5.1.1 Gradient descent
    5.1.2 Accelerated gradient descent
  5.2 Algorithms for nonsmooth convex minimization
    5.2.1 Subgradient method
  5.3 Model-based view of first-order methods
  5.4 Lower complexity bounds
    5.4.1 Lower-complexity bound for nonsmooth convex optimization
    5.4.2 Lower-complexity bound for smooth convex optimization
  5.5 Additional exercises

6 Algorithms for additive composite problems
  6.1 Proximal methods based on two-sided models
    6.1.1 Sublinear rate
    6.1.2 Linear rate
    6.1.3 Accelerated algorithm
  6.2 Proximal methods based on lower models

7 Smoothing and primal-dual algorithms
  7.1 Proximal (accelerated) gradient method solves the dual
  7.2 Smoothing technique
  7.3 Proximal point method
    7.3.1 Proximal point method for saddle point problems
  7.4 Preconditioned proximal point method
  7.5 Extragradient method

8 Introduction to Variational Analysis
  8.1 An introduction to variational techniques
  8.2 Variational principles
  8.3 Descent principle and stability of sublevel sets
    8.3.1 Level sets of smooth functions
    8.3.2 Sublevel sets of nonsmooth functions
  8.4 Limiting subdifferential and limiting slope
  8.5 Subdifferential calculus

Chapter 1

Background

This chapter sets the notation and reviews the background material that will be used throughout the rest of the book. The reader can safely skim this chapter during the first pass and refer back to it when necessary. The discussion is purposefully kept brief.
The comments section at the end of the chapter lists references where a more detailed treatment may be found.

Roadmap. Sections 1.1-1.3 review basic constructs of linear algebra, including inner products, norms, linear maps and their adjoints, as well as eigenvalue and singular value decompositions. Section 1.4 establishes notation for basic set operations, such as sums and images/preimages of sets. Section 1.5 focuses on topological preliminaries; the main results are the Bolzano-Weierstrass theorem and a variant of the extreme value theorem. The final Sections 1.6-1.8 formally define first- and second-order derivatives of multivariate functions, establish estimates on the error in Taylor approximations, and deduce derivative-based conditions for local optimality. The material in Sections 1.6-1.8 is often covered superficially in undergraduate courses, and therefore we provide an entirely self-contained treatment.

1.1 Inner products and linear maps

Throughout, we fix a Euclidean space $\mathbf{E}$, meaning that $\mathbf{E}$ is a finite-dimensional real vector space endowed with an inner product $\langle \cdot, \cdot \rangle$. Recall that an inner product on $\mathbf{E}$ is an assignment $\langle \cdot, \cdot \rangle \colon \mathbf{E} \times \mathbf{E} \to \mathbb{R}$ satisfying the following three properties for all $x, y, z \in \mathbf{E}$ and scalars $a, b \in \mathbb{R}$:

(Symmetry) $\langle x, y \rangle = \langle y, x \rangle$

(Bilinearity) $\langle ax + by, z \rangle = a \langle x, z \rangle + b \langle y, z \rangle$

(Positive definiteness) $\langle x, x \rangle \geq 0$, and equality $\langle x, x \rangle = 0$ holds if and only if $x = 0$.

The most familiar example is the Euclidean space of $n$-dimensional column vectors $\mathbb{R}^n$, which we always equip with the dot product

$$\langle x, y \rangle := \sum_{i=1}^{n} x_i y_i.$$

One can equivalently write $\langle x, y \rangle = x^T y$. We will denote the coordinate vectors of $\mathbb{R}^n$ by $e_i$, and for any vector $x \in \mathbb{R}^n$, the symbol $x_i$ will denote the $i$'th coordinate of $x$. A basic result of linear algebra shows that all Euclidean spaces $\mathbf{E}$ can be identified with $\mathbb{R}^n$ for some integer $n$, once an orthonormal basis is chosen. Though such a basis-specific interpretation can be useful, it is often distracting, with the indices hiding the underlying geometry. Consequently, it is often best to think coordinate-free.

The space of real $m \times n$ matrices $\mathbb{R}^{m \times n}$ furnishes another example of a Euclidean space, which we always equip with the trace product

$$\langle X, Y \rangle := \operatorname{tr} X^T Y.$$

Some arithmetic shows the equality $\langle X, Y \rangle = \sum_{i,j} X_{ij} Y_{ij}$. Thus the trace product on $\mathbb{R}^{m \times n}$ coincides with the usual dot product of the matrices stretched out into long vectors. An important Euclidean subspace of $\mathbb{R}^{n \times n}$ is the space of real symmetric $n \times n$ matrices $\mathbf{S}^n$, along with the trace product $\langle X, Y \rangle := \operatorname{tr} XY$.

For any linear mapping $\mathcal{A} \colon \mathbf{E} \to \mathbf{Y}$, there exists a unique linear mapping $\mathcal{A}^* \colon \mathbf{Y} \to \mathbf{E}$, called the adjoint, satisfying

$$\langle \mathcal{A}x, y \rangle = \langle x, \mathcal{A}^* y \rangle \qquad \text{for all points } x \in \mathbf{E},\ y \in \mathbf{Y}.$$

In the most familiar case of $\mathbf{E} = \mathbb{R}^n$ and $\mathbf{Y} = \mathbb{R}^m$, any linear map $\mathcal{A}$ can be identified with a matrix $A \in \mathbb{R}^{m \times n}$, while the adjoint $\mathcal{A}^*$ may then be identified with the transpose $A^T$.

Exercise 1.1. Given a collection of real $m \times n$ matrices $A_1, A_2, \ldots, A_l$, define the linear mapping $\mathcal{A} \colon \mathbb{R}^{m \times n} \to \mathbb{R}^l$ by setting

$$\mathcal{A}(X) := (\langle A_1, X \rangle, \langle A_2, X \rangle, \ldots, \langle A_l, X \rangle).$$

Show that the adjoint is the mapping $\mathcal{A}^* y = y_1 A_1 + y_2 A_2 + \ldots + y_l A_l$.

Linear mappings $\mathcal{A} \colon \mathbf{E} \to \mathbf{E}$, between a Euclidean space $\mathbf{E}$ and itself, are called linear operators, and are said to be self-adjoint if equality $\mathcal{A} = \mathcal{A}^*$ holds. Self-adjoint operators on $\mathbb{R}^n$ are precisely those operators that are representable as symmetric matrices.
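The adjoint formula in Exercise 1.1 is easy to sanity-check numerically. The following sketch is ours rather than the book's; it assumes NumPy, and the helper names `calA` and `calA_star` are invented for the illustration. It verifies the defining relation $\langle \mathcal{A}(X), y \rangle = \langle X, \mathcal{A}^* y \rangle$ on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l = 3, 4, 5
A_list = [rng.standard_normal((m, n)) for _ in range(l)]  # A_1, ..., A_l

def calA(X):
    # The mapping A(X) = (<A_1, X>, ..., <A_l, X>) under the trace product.
    return np.array([np.trace(Ai.T @ X) for Ai in A_list])

def calA_star(y):
    # Claimed adjoint: A*(y) = y_1 A_1 + ... + y_l A_l.
    return sum(yi * Ai for yi, Ai in zip(y, A_list))

X = rng.standard_normal((m, n))
y = rng.standard_normal(l)

lhs = calA(X) @ y                    # dot product in R^l
rhs = np.trace(calA_star(y).T @ X)   # trace product in R^{m x n}
assert np.isclose(lhs, rhs)
```

Unwinding the traces shows why the check must pass: both sides equal $\sum_i y_i \operatorname{tr}(A_i^T X)$.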
A self-adjoint operator $\mathcal{A}$ is positive semidefinite, denoted $\mathcal{A} \succeq 0$, whenever

$$\langle \mathcal{A}x, x \rangle \geq 0 \qquad \text{for all } x \in \mathbf{E}.$$

Similarly, a self-adjoint operator $\mathcal{A}$ is positive definite, denoted $\mathcal{A} \succ 0$, whenever

$$\langle \mathcal{A}x, x \rangle > 0 \qquad \text{for all } 0 \neq x \in \mathbf{E}.$$

For any two linear operators $\mathcal{A}$ and $\mathcal{B}$, we will use the notation $\mathcal{A} - \mathcal{B} \succeq 0$ to mean $\mathcal{A} \succeq \mathcal{B}$. The notation $\mathcal{A} - \mathcal{B} \succ 0$ is defined similarly.

1.2 Norms

A norm on a vector space $\mathcal{V}$ is a function $\|\cdot\| \colon \mathcal{V} \to \mathbb{R}$ for which the following three properties hold for all points $x, y \in \mathcal{V}$ and scalars $a \in \mathbb{R}$:

(Absolute homogeneity) $\|ax\| = |a| \cdot \|x\|$

(Triangle inequality) $\|x + y\| \leq \|x\| + \|y\|$

(Positivity) Equality $\|x\| = 0$ holds if and only if $x = 0$.

The inner product in the Euclidean space $\mathbf{E}$ always induces a norm $\|x\| = \sqrt{\langle x, x \rangle}$. Unless specified otherwise, the symbol $\|x\|$ for $x \in \mathbf{E}$ will always denote this induced norm. For example, the dot product on $\mathbb{R}^n$ induces the usual 2-norm $\|x\|_2 := \sqrt{x_1^2 + \ldots + x_n^2}$, while the trace product on $\mathbb{R}^{m \times n}$ induces the Frobenius norm $\|X\|_F := \sqrt{\operatorname{tr}(X^T X)}$. The Cauchy-Schwarz inequality guarantees that the induced norm satisfies the estimate:

$$|\langle x, y \rangle| \leq \|x\| \cdot \|y\| \qquad \text{for all } x, y \in \mathbf{E}. \tag{1.1}$$

Other important examples of norms are the $\ell_p$-norms on $\mathbb{R}^n$:

$$\|x\|_p = \begin{cases} \left(|x_1|^p + \ldots + |x_n|^p\right)^{1/p} & \text{for } 1 \leq p < \infty, \\ \max\{|x_1|, \ldots, |x_n|\} & \text{for } p = \infty. \end{cases}$$

The most notable of these are the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms; see Figure 1.1.

[Figure 1.1: Unit balls of $\ell_p$-norms for $p = 1,\ 1.5,\ 2,\ 5,\ \infty$.]

For an arbitrary norm $\|\cdot\|$ on $\mathbf{E}$, the dual norm $\|\cdot\|^*$ on $\mathbf{E}$ is defined by

$$\|v\|^* := \max\{\langle v, x \rangle : \|x\| \leq 1\}.$$

Thus $\|v\|^*$ is the maximal value that the linear function $x \mapsto \langle v, x \rangle$ takes over the closed unit ball of the norm $\|\cdot\|$. For example, the $\ell_p$ and $\ell_q$ norms on $\mathbb{R}^n$ are dual to each other whenever $p^{-1} + q^{-1} = 1$ and $p, q \in [1, \infty]$. In particular, the $\ell_2$-norm on $\mathbb{R}^n$ is self-dual; the same goes for the Frobenius norm on $\mathbb{R}^{m \times n}$ (why?).
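The duality between the $\ell_p$ and $\ell_q$ norms can also be checked numerically. The sketch below is ours, not the book's; it assumes NumPy and restricts to finite conjugate exponents $1 < p, q < \infty$. It builds the Hölder maximizer with coordinates $x^*_i = \operatorname{sign}(v_i)\, |v_i|^{q-1} / \|v\|_q^{q-1}$, checks that $x^*$ lies on the $\ell_p$ unit sphere, and confirms that $\langle v, x^* \rangle = \|v\|_q$, in agreement with $\|v\|^* = \|v\|_q$.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(6)
p = 3.0
q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1

def lp_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

# Hölder maximizer: the point of the l_p unit ball where <v, x> is largest.
x_star = np.sign(v) * np.abs(v) ** (q - 1) / lp_norm(v, q) ** (q - 1)

assert np.isclose(lp_norm(x_star, p), 1.0)    # x* lies on the l_p unit sphere
assert np.isclose(v @ x_star, lp_norm(v, q))  # <v, x*> equals ||v||_q

# No point of the unit ball does better, consistent with ||v||* = ||v||_q.
for _ in range(1000):
    x = rng.standard_normal(6)
    x /= max(lp_norm(x, p), 1.0)  # scale into the l_p unit ball
    assert v @ x <= lp_norm(v, q) + 1e-9
```

For $p = 1$ or $p = \infty$ the maximizer degenerates to a signed coordinate vector or to a vector of signs, respectively, which is one concrete way to see why the $\ell_1$ and $\ell_\infty$ norms are dual to each other.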