CONVEX ANALYSIS and NONLINEAR OPTIMIZATION Theory and Examples

Total pages: 16

File type: PDF, size: 1020 KB

CONVEX ANALYSIS AND NONLINEAR OPTIMIZATION: Theory and Examples, Second Edition

JONATHAN M. BORWEIN
Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada B3H 2Z6
[email protected] · http://users.cs.dal.ca/~jborwein

ADRIAN S. LEWIS
School of Operations Research and Industrial Engineering, Cornell University
[email protected] · http://www.orie.cornell.edu/~aslewis

To our families.

Preface

Optimization is a rich and thriving mathematical discipline. Properties of minimizers and maximizers of functions rely intimately on a wealth of techniques from mathematical analysis, including tools from calculus and its generalizations, topological notions, and more geometric ideas. The theory underlying current computational optimization techniques grows ever more sophisticated: duality-based algorithms, interior point methods, and control-theoretic applications are typical examples. The powerful and elegant language of convex analysis unifies much of this theory. Hence our aim of writing a concise, accessible account of convex analysis and its applications and extensions, for a broad audience.

For students of optimization and analysis, there is great benefit to blurring the distinction between the two disciplines. Many important analytic problems have illuminating optimization formulations and hence can be approached through our main variational tools: subgradients and optimality conditions, the many guises of duality, metric regularity, and so forth. More generally, the idea of convexity is central to the transition from classical analysis to various branches of modern analysis: from linear to nonlinear analysis, from smooth to nonsmooth, and from the study of functions to multifunctions. Thus, although we use certain optimization models repeatedly to illustrate the main results (models such as linear and semidefinite programming duality and cone polarity), we constantly emphasize the power of abstract models and notation.

Good reference works on finite-dimensional convex analysis already exist. Rockafellar's classic Convex Analysis [167] has been indispensable and ubiquitous since the 1970s, and a more general sequel with Wets, Variational Analysis [168], appeared recently. Hiriart-Urruty and Lemaréchal's Convex Analysis and Minimization Algorithms [97] is a comprehensive but gentler introduction. Our goal is not to supplant these works, but on the contrary to promote them, and thereby to motivate future researchers. This book aims to make converts.

We try to be succinct rather than systematic, avoiding becoming bogged down in technical details. Our style is relatively informal; for example, the text of each section creates the context for many of the result statements. We value the variety of independent, self-contained approaches over a single, unified, sequential development. We hope to showcase a few memorable principles rather than to develop the theory to its limits. We discuss no algorithms. We point out a few important references as we go, but we make no attempt at comprehensive historical surveys.

Optimization in infinite dimensions lies beyond our immediate scope. This is for reasons of space and accessibility rather than history or application: convex analysis developed historically from the calculus of variations, and has important applications in optimal control, mathematical economics, and other areas of infinite-dimensional optimization.
However, rather like Halmos's Finite Dimensional Vector Spaces [90], ease of extension beyond finite dimensions substantially motivates our choice of approach. Where possible, we have chosen a proof technique permitting those readers familiar with functional analysis to discover for themselves how a result extends. We would, in part, like this book to be an entrée for mathematicians to a valuable and intrinsic part of modern analysis. The final chapter illustrates some of the challenges arising in infinite dimensions.

This book can (and does) serve as a teaching text, at roughly the level of first year graduate students. In principle we assume no knowledge of real analysis, although in practice we expect a certain mathematical maturity. While the main body of the text is self-contained, each section concludes with an often extensive set of optional exercises. These exercises fall into three categories, marked with zero, one, or two asterisks, respectively, as follows: examples that illustrate the ideas in the text or easy expansions of sketched proofs; important pieces of additional theory or more testing examples; longer, harder examples or peripheral theory.

We are grateful to the Natural Sciences and Engineering Research Council of Canada for their support during this project. Many people have helped improve the presentation of this material. We would like to thank all of them, but in particular Patrick Combettes, Guillaume Haberer, Claude Lemaréchal, Olivier Ley, Yves Lucet, Hristo Sendov, Mike Todd, Xianfu Wang, and especially Heinz Bauschke.

Jonathan M. Borwein
Adrian S. Lewis
Gargnano, Italy
September 1999

Preface to the Second Edition

Since the publication of the First Edition of this book, convex analysis and nonlinear optimization have continued to flourish. The "interior point revolution" in algorithms for convex optimization, fired by Nesterov and Nemirovski's seminal 1994 work [148], and the growing interplay between convex optimization and engineering exemplified by Boyd and Vandenberghe's recent monograph [47], have fuelled a renaissance of interest in the fundamentals of convex analysis. At the same time, the broad success of key monographs on general variational analysis by Clarke, Ledyaev, Stern and Wolenski [56] and Rockafellar and Wets [168] over the last decade testifies to a ripening interest in nonconvex techniques, as does the appearance of [43].

The Second Edition both corrects a few vagaries in the original and contains a new chapter emphasizing the rich applicability of variational analysis to concrete examples. After a new sequence of exercises ending Chapter 8 with a concise approach to monotone operator theory via convex analysis, the new Chapter 9 begins with a presentation of Rademacher's fundamental theorem on differentiability of Lipschitz functions. The subsequent sections describe the appealing geometry of proximal normals, four approaches to the convexity of Chebyshev sets, and two rich concrete models of nonsmoothness known as "amenability" and "partial smoothness". As in the First Edition, we develop and illustrate the material through extensive exercises.

Convex analysis has maintained a Canadian thread ever since Fenchel's original 1949 work on the subject in Volume 1 of the Canadian Journal of Mathematics [76].
We are grateful for the continuing support of the Canadian academic community in this project, and in particular to the Canadian Mathematical Society for their sponsorship of this book series, and to the Canadian Natural Sciences and Engineering Research Council for their support of our research endeavours.

Jonathan M. Borwein
Adrian S. Lewis
September 2005

Contents

Preface
1 Background: 1.1 Euclidean Spaces; 1.2 Symmetric Matrices
2 Inequality Constraints: 2.1 Optimality Conditions; 2.2 Theorems of the Alternative; 2.3 Max-functions
3 Fenchel Duality: 3.1 Subgradients and Convex Functions; 3.2 The Value Function; 3.3 The Fenchel Conjugate
4 Convex Analysis: 4.1 Continuity of Convex Functions; 4.2 Fenchel Biconjugation; 4.3 Lagrangian Duality
5 Special Cases: 5.1 Polyhedral Convex Sets and Functions; 5.2 Functions of Eigenvalues; 5.3 Duality for Linear and Semidefinite Programming; 5.4 Convex Process Duality
6 Nonsmooth Optimization: 6.1 Generalized Derivatives; 6.2 Regularity and Strict Differentiability; 6.3 Tangent Cones; 6.4 The Limiting Subdifferential
7 Karush-Kuhn-Tucker Theory: 7.1 An Introduction to Metric Regularity; 7.2 The Karush-Kuhn-Tucker Theorem; 7.3 Metric Regularity and the Limiting Subdifferential; 7.4 Second Order Conditions
8 Fixed Points: 8.1 The Brouwer Fixed Point Theorem; 8.2 Selection and the Kakutani-Fan Fixed Point Theorem; 8.3 Variational Inequalities
9 More Nonsmooth Structure: 9.1 Rademacher's Theorem; 9.2 Proximal Normals and Chebyshev Sets; 9.3 Amenable Sets and Prox-Regularity; 9.4 Partly Smooth Sets
10 Postscript: Infinite Versus Finite Dimensions: 10.1 Introduction; 10.2 Finite Dimensionality; 10.3 Counterexamples and Exercises; 10.4 Notes on Previous Chapters
11 List of Results and Notation: 11.1 Named Results; 11.2 Notation
Bibliography
Index

Chapter 1: Background

1.1 Euclidean Spaces

We begin by reviewing some of the fundamental algebraic, geometric and analytic ideas we use throughout the book. Our setting, for most of the book, is an arbitrary Euclidean space $\mathbf{E}$, by which we mean a finite-dimensional vector space over the reals $\mathbf{R}$, equipped with an inner product $\langle \cdot, \cdot \rangle$. We would lose no generality if we considered only the space $\mathbf{R}^n$ of real (column) $n$-vectors (with its standard inner product), but a more abstract, coordinate-free notation is often more flexible and elegant. We define the norm of any point $x$ in $\mathbf{E}$ by $\|x\| = \sqrt{\langle x, x \rangle}$, and the unit ball is the set
$$B = \{x \in \mathbf{E} \mid \|x\| \le 1\}.$$
Any two points $x$ and $y$ in $\mathbf{E}$ satisfy the Cauchy-Schwarz inequality
$$|\langle x, y \rangle| \le \|x\| \, \|y\|.$$
We define the sum of two sets $C$ and $D$ in $\mathbf{E}$ by
$$C + D = \{x + y \mid x \in C,\ y \in D\}.$$
The definition of $C - D$ is analogous, and for a subset $\Lambda$ of $\mathbf{R}$ we define
$$\Lambda C = \{\lambda x \mid \lambda \in \Lambda,\ x \in C\}.$$
Given another Euclidean space $\mathbf{Y}$, we can consider the Cartesian product Euclidean space $\mathbf{E} \times \mathbf{Y}$, with inner product defined by $\langle (e, x), (f, y) \rangle = \langle e, f \rangle + \langle x, y \rangle$.
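The definitions above translate directly into computation. As a quick illustration (ours, not the book's; NumPy vectors stand in for the abstract space E), the sketch below checks the Cauchy-Schwarz inequality on random vectors and forms the set sum C + D from finite samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm(x):
    # ||x|| = sqrt(<x, x>), using the standard inner product on R^n
    return np.sqrt(np.dot(x, x))

# Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y|| for random vectors
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert abs(np.dot(x, y)) <= norm(x) * norm(y) + 1e-12

# Set sum C + D = {x + y : x in C, y in D}, illustrated on finite samples
C = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
D = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
C_plus_D = [x + y for x in C for y in D]
print(C_plus_D)
```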
Recommended publications
  • Nonlinear Optimization Using the Generalized Reduced Gradient Method
    LEON S. LASDON, RICHARD L. FOX, MARGERY W. RATNER, "Nonlinear optimization using the generalized reduced gradient method", Revue française d'automatique, informatique, recherche opérationnelle. Recherche opérationnelle, tome 8, no. V-3 (1974), p. 73-103. <http://www.numdam.org/item?id=RO_1974__8_3_73_0>

    Abstract. This paper describes the principles and logic of a system of computer programs for solving nonlinear optimization problems using a Generalized Reduced Gradient algorithm. The work is based on earlier work of Abadie (2). Since this paper was written, many changes have been made in the logic, and significant computational experience has been obtained. We hope to report on this in a later paper.

    1. INTRODUCTION. Generalized Reduced Gradient methods are algorithms for solving nonlinear programs of general structure. This paper discusses the basic principles of GRG, and constructs a specific GRG algorithm. The logic of a computer program implementing this algorithm is presented by means of flow charts and discussion.
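The abstract stays at the level of principles, so a concrete fragment may help. Below is a minimal sketch of the core reduced-gradient computation, assuming the simplest setting of linear equality constraints Ax = b and a fixed basic/nonbasic partition; it is our illustration, not the authors' system, and the helper name reduced_gradient is ours:

```python
import numpy as np

def reduced_gradient(grad, A, basic, nonbasic):
    """Reduced gradient of f on {x : A x = b} after eliminating the
    basic variables x_B: r = g_N - N^T B^{-T} g_B."""
    B, N = A[:, basic], A[:, nonbasic]
    g_B, g_N = grad[basic], grad[nonbasic]
    pi = np.linalg.solve(B.T, g_B)   # multipliers pi solve B^T pi = g_B
    return g_N - N.T @ pi

# Example: minimize f(x) = ||x||^2 / 2 subject to x1 + x2 + x3 = 1
A = np.array([[1.0, 1.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])        # a feasible point
grad = x                             # gradient of ||x||^2 / 2 at x
r = reduced_gradient(grad, A, basic=[0], nonbasic=[1, 2])
print(r)   # nonzero => x is not stationary; step the nonbasic variables
```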
  • Combinatorial Structures in Nonlinear Programming
    Combinatorial Structures in Nonlinear Programming. Stefan Scholtes, April 2002.

    Abstract. Non-smoothness and non-convexity in optimization problems often arise because a combinatorial structure is imposed on smooth or convex data. The combinatorial aspect can be explicit, e.g. through the use of "max", "min", or "if" statements in a model, or implicit as in the case of bilevel optimization where the combinatorial structure arises from the possible choices of active constraints in the lower level problem. In analyzing such problems, it is desirable to decouple the combinatorial from the nonlinear aspect and deal with them separately. This paper suggests a problem formulation which explicitly decouples the two aspects. We show that such combinatorial nonlinear programs, despite their inherent non-convexity, allow for a convex first order local optimality condition which is generic and tight. The stationarity condition can be phrased in terms of Lagrange multipliers, which allows an extension of the popular sequential quadratic programming (SQP) approach to solve these problems. We show that the favorable local convergence properties of SQP are retained in this setting. The computational effectiveness of the method depends on our ability to solve the subproblems efficiently which, in turn, depends on the representation of the governing combinatorial structure. We illustrate the potential of the approach by applying it to optimization problems with max-min constraints which arise, for example, in robust optimization.

    1 Introduction. Nonlinear programming is nowadays regarded as a mature field. A combination of important algorithmic developments and increased computing power over the past decades have advanced the field to a stage where the majority of practical problems can be solved efficiently by commercial software.
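To see the decoupling the abstract describes in the smallest possible case, here is a sketch (our example, not the paper's formulation) where a "max" statement imposes the combinatorial structure: the nonlinear data f1, f2 are smooth, and the combinatorial data is the active set of pieces attaining the max:

```python
f1 = lambda x: (x - 1.0) ** 2        # smooth piece 1
f2 = lambda x: (x + 1.0) ** 2        # smooth piece 2
f = lambda x: max(f1(x), f2(x))      # combinatorial structure via "max"

def active_set(x, tol=1e-9):
    # Indices of the pieces attaining the max: the combinatorial data
    vals = [f1(x), f2(x)]
    m = max(vals)
    return [i for i, v in enumerate(vals) if v >= m - tol]

print(active_set(0.0))   # [0, 1]: both active, f has a kink at x = 0
print(active_set(2.0))   # [1]: locally f == f2, hence smooth there
```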
  • Conjugate Convex Functions in Topological Vector Spaces
    Matematisk-fysiske Meddelelser udgivet af Det Kongelige Danske Videnskabernes Selskab, Bind 34, nr. 2; Mat. Fys. Medd. Dan. Vid. Selsk. 34, no. 2 (1964). CONJUGATE CONVEX FUNCTIONS IN TOPOLOGICAL VECTOR SPACES by ARNE BRØNDSTED. København 1964.

    Synopsis. Continuing investigations by W. L. JONES (Thesis, Columbia University, 1960), the theory of conjugate convex functions in finite-dimensional Euclidean spaces, as developed by W. FENCHEL (Canadian J. Math. 1 (1949) and Lecture Notes, Princeton University, 1953), is generalized to functions in locally convex topological vector spaces.

    Introduction. The purpose of the present paper is to generalize the theory of conjugate convex functions in finite-dimensional Euclidean spaces, as initiated by Z. BIRNBAUM and W. ORLICZ [1] and S. MANDELBROJT [8] and developed by W. FENCHEL [3], [4] (cf. also S. KARLIN [6]), to infinite-dimensional spaces. To a certain extent this has been done previously by W. L. JONES in his Thesis [5]. His principal results concerning the conjugates of real functions in topological vector spaces have been included here with some improvements and simplified proofs (Section 3). After the present paper had been written, the author's attention was called to papers by J. J. MOREAU [9], [10], [11] in which, by a different approach and independently of JONES, results equivalent to many of those contained in this paper (Sections 3 and 4) are obtained. Section 1 contains a summary, based on [7], of notions and results from the theory of topological vector spaces applied in the following.
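In the finite-dimensional setting Fenchel started from, the conjugate is f*(y) = sup_x {⟨x, y⟩ − f(x)}. The sketch below (ours; a crude grid supremum, adequate only for illustration) computes the conjugate of f(x) = x²/2 numerically and confirms that this f is self-conjugate:

```python
import numpy as np

def conjugate(f, ys, xs):
    """Numerical Fenchel conjugate f*(y) = sup_x (x*y - f(x)),
    with the supremum approximated over the grid xs."""
    return np.array([np.max(xs * y - f(xs)) for y in ys])

xs = np.linspace(-10, 10, 4001)
ys = np.linspace(-3, 3, 7)
f = lambda x: 0.5 * x ** 2

fstar = conjugate(f, ys, xs)
print(np.max(np.abs(fstar - 0.5 * ys ** 2)))  # ~0: f is self-conjugate
```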
  • Nonlinear Integer Programming
    Nonlinear Integer Programming. Raymond Hemmecke, Matthias Köppe, Jon Lee and Robert Weismantel.

    Abstract. Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.

    Raymond Hemmecke, Otto-von-Guericke-Universität Magdeburg, FMA/IMO, Universitätsplatz 2, 39106 Magdeburg, Germany, e-mail: [email protected]
    Matthias Köppe, University of California, Davis, Dept. of Mathematics, One Shields Avenue, Davis, CA 95616, USA, e-mail: [email protected]
    Jon Lee, IBM T.J.
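For intuition on the problem class studied here (nonlinear objective, linear constraints, integrality), a toy brute-force solver is enough. The sketch below is ours; its exhaustive enumeration is viable only for tiny instances, which is precisely the complexity issue the chapter is about:

```python
import itertools

def solve_nip(objective, A, b, bounds):
    """Minimize a nonlinear objective over integer points x with
    A x <= b and l_i <= x_i <= u_i, by exhaustive enumeration."""
    best, best_x = float("inf"), None
    ranges = [range(l, u + 1) for l, u in bounds]
    for x in itertools.product(*ranges):
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= rhs
               for row, rhs in zip(A, b)):
            v = objective(x)
            if v < best:
                best, best_x = v, x
    return best_x, best

# min (x1 - 2.5)^2 + x1*x2  s.t.  x1 + x2 <= 4,  0 <= xi <= 4, integer
obj = lambda x: (x[0] - 2.5) ** 2 + x[0] * x[1]
print(solve_nip(obj, A=[[1, 1]], b=[4], bounds=[(0, 4), (0, 4)]))
```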
  • An Asymptotical Variational Principle Associated with the Steepest Descent Method for a Convex Function
    Journal of Convex Analysis, Volume 3 (1996), No. 1, 63-70. An Asymptotical Variational Principle Associated with the Steepest Descent Method for a Convex Function. B. Lemaire, Université Montpellier II, Place E. Bataillon, 34095 Montpellier Cedex 5, France; e-mail: [email protected]. Received July 5, 1994; revised manuscript received January 22, 1996. Dedicated to R. T. Rockafellar on his 60th birthday.

    The asymptotical limit of the trajectory defined by the continuous steepest descent method for a proper closed convex function f on a Hilbert space is characterized in the set of minimizers of f via an asymptotical variational principle of Brezis-Ekeland type. The implicit discrete analogue (prox method) is also considered.

    Keywords: asymptotical, convex minimization, differential inclusion, prox method, steepest descent, variational principle. 1991 Mathematics Subject Classification: 65K10, 49M10, 90C25.

    1. Introduction. Let $X$ be a real Hilbert space endowed with inner product $\langle \cdot, \cdot \rangle$ and associated norm $\|\cdot\|$, and let $f$ be a proper closed convex function on $X$. The paper considers the problem of minimizing $f$, that is, of finding $\inf_X f$ and some element of the optimal set $S := \operatorname{Argmin} f$, this set being assumed nonempty. Letting $\partial f$ denote the subdifferential operator associated with $f$, we focus on the continuous steepest descent method associated with $f$, i.e., the differential inclusion
$$-\frac{du}{dt} \in \partial f(u), \qquad t > 0,$$
with initial condition $u(0) = u_0$. This method is known to yield convergence under broad conditions summarized in the following theorem. Let us denote by $\mathcal{A}$ the real vector space of continuous functions from $[0, +\infty)$ into $X$ that are absolutely continuous on $[\delta, +\infty)$ for all $\delta > 0$.
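The "implicit discrete analogue (prox method)" mentioned above is the iteration u_{k+1} = argmin_v { f(v) + ‖v − u_k‖²/(2λ) }. A minimal sketch (ours; for f(u) = |u| on the real line the prox map is the classical soft-threshold) shows a trajectory settling into the optimal set S = {0}:

```python
def prox_abs(u, lam):
    # prox_{lam*|.|}(u) = argmin_v |v| + (v - u)^2 / (2*lam): soft-threshold
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

u, lam = 5.0, 0.8
for k in range(10):
    u = prox_abs(u, lam)
    print(k, u)   # decreases by lam each step, then sticks at 0 = argmin |u|
```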
  • Subderivative-Subdifferential Duality Formula
    Subderivative-subdifferential duality formula. Marc Lassonde, Université des Antilles, BP 150, 97159 Pointe à Pitre, France; and LIMOS, Université Blaise Pascal, 63000 Clermont-Ferrand, France. E-mail: [email protected]. arXiv:1611.04045v2 [math.OC], 5 Mar 2017.

    Abstract. We provide a formula linking the radial subderivative to other subderivatives and subdifferentials for arbitrary extended real-valued lower semicontinuous functions. Keywords: lower semicontinuity, radial subderivative, Dini subderivative, subdifferential. 2010 Mathematics Subject Classification: 49J52, 49K27, 26D10, 26B25.

    1 Introduction. Tyrrell Rockafellar and Roger Wets [13, p. 298], discussing the duality between subderivatives and subdifferentials, write:

    "In the presence of regularity, the subgradients and subderivatives of a function f are completely dual to each other. [...] For functions f that aren't subdifferentially regular, subderivatives and subgradients can have distinct and independent roles, and some of the duality must be relinquished."

    Jean-Paul Penot [12, p. 263], in the introduction to the chapter dealing with elementary and viscosity subdifferentials, writes:

    "In the present framework, in contrast to the convex objects, the passages from directional derivatives (and tangent cones) to subdifferentials (and normal cones, respectively) are one-way routes, because the first notions are nonconvex, while a dual object exhibits convexity properties."

    In the chapter concerning Clarke subdifferentials [12, p. 357], he notes:

    "In fact, in this theory, a complete primal-dual picture is available: besides a normal cone concept, one has a notion of tangent cone to a set, and besides a subdifferential for a function one has a notion of directional derivative. Moreover, inherent convexity properties ensure a full duality between these notions. [...] These facts represent great theoretical and practical advantages."
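For orientation, the duality the quotations allude to can be written down. The display below is our summary of the standard finite-dimensional statements (notation roughly as in Rockafellar and Wets), not the radial-subderivative formula proved in this paper:

```latex
% Duality between subderivative and subdifferential (our summary of the
% standard statements, e.g. Rockafellar & Wets, Variational Analysis, Ch. 8;
% not the formula established in Lassonde's paper itself).
% The regular subdifferential is always determined by the subderivative:
\[
  \widehat{\partial} f(\bar{x})
  = \bigl\{\, v : \langle v, w \rangle \le \mathrm{d}f(\bar{x})(w)
      \ \text{for all } w \,\bigr\},
\]
% and when f is subdifferentially regular the passage can be reversed,
% the subderivative being the support function of the subdifferential:
\[
  \mathrm{d}f(\bar{x})(w)
  = \sup_{v \in \partial f(\bar{x})} \langle v, w \rangle .
\]
```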
  • An Improved Sequential Quadratic Programming Method Based on Positive Constraint Sets. Chemical Engineering Transactions, 81, 157-162. DOI: 10.3303/CET2081027
    An Improved Sequential Quadratic Programming Method Based on Positive Constraint Sets. Chemical Engineering Transactions, Vol. 81, 2020.

    Li Xia (a,*), Jianyang Ling (a), Rongshan Bi (a), Wenying Zhao (b), Xiaorong Cao (b), Shuguang Xiang (a)
    (a) State Key Laboratory Base for Eco-Chemical Engineering, College of Chemical Engineering, Qingdao University of Science and Technology, Zhengzhou Road No. 53, Qingdao 266042, China
    (b) Chemistry and Chemistry Engineering Faculty, Qilu Normal University, Jinan 250013, Shandong, China
    [email protected]

    SQP (sequential quadratic programming) is an effective method for solving nonlinear constrained problems, widely used in chemical process simulation optimization. At present, most optimization methods in general chemical process simulation software suffer from slow calculation and poor convergence. To solve these problems, an improved SQP optimization method based on positive constraint sets was developed. The new optimization method was used in the general chemical process simulation software named "optimization engineers", adopting the traditional Armijo-type step rule, the L-BFGS (limited-memory BFGS) algorithm, and rules for positive definite matrices. Compared to the traditional SQP method, the SQP optimization method based on positive constraint sets reduces the number of constraints in the corresponding subproblems. The L-BFGS algorithm and the positive definite matrix rules simplify the solution of the second derivative matrix, and reduce the amount of storage needed in the calculation process.
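Of the ingredients named in the abstract, the Armijo-type step rule is the easiest to exhibit in isolation. Here is a generic backtracking sketch (ours, not the paper's implementation): accept the first step length giving sufficient decrease along a descent direction d:

```python
import numpy as np

def armijo(f, grad_f, x, d, c=1e-4, beta=0.5, t0=1.0, max_iter=50):
    """Backtracking Armijo rule: find t with
    f(x + t d) <= f(x) + c * t * <grad f(x), d>."""
    fx, slope = f(x), np.dot(grad_f(x), d)
    assert slope < 0, "d must be a descent direction"
    t = t0
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c * t * slope:
            return t
        t *= beta
    return t

f = lambda x: np.dot(x, x)            # f(x) = ||x||^2
g = lambda x: 2 * x
x = np.array([2.0, -1.0])
d = -g(x)                             # steepest descent direction
print(armijo(f, g, x, d))             # t = 0.5 lands on the exact minimizer here
```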
  • Appendix A: Solving Systems of Nonlinear Equations
    Appendix A: Solving Systems of Nonlinear Equations

    Chapter 4 of this book describes and analyzes the power flow problem. In its ac version, this problem is a system of nonlinear equations. This appendix describes the most common method for solving a system of nonlinear equations, namely, the Newton-Raphson method. This is an iterative method that uses initial values for the unknowns and, then, at each iteration, updates these values until no change occurs in two consecutive iterations. For the sake of clarity, we first describe the working of this method for the case of just one nonlinear equation with one unknown. Then, the general case of $n$ nonlinear equations and $n$ unknowns is considered. We also explain how to directly solve systems of nonlinear equations using appropriate software.

    A.1 Newton-Raphson Algorithm. The Newton-Raphson algorithm is described in this section.

    A.1.1 One Unknown. Consider a nonlinear function $f(x) \colon \mathbf{R} \to \mathbf{R}$. We aim at finding a value of $x$ so that
$$f(x) = 0. \tag{A.1}$$
To do so, we first consider a given value of $x$, e.g., $x^{(0)}$. In general, we have that $f(x^{(0)}) \neq 0$. Thus, it is necessary to find $\Delta x^{(0)}$ so that $f(x^{(0)} + \Delta x^{(0)}) = 0$. Using Taylor series, we can express $f(x^{(0)} + \Delta x^{(0)})$ as
$$f(x^{(0)} + \Delta x^{(0)}) = f(x^{(0)}) + \Delta x^{(0)} \left(\frac{df(x)}{dx}\right)^{(0)} + \frac{\left(\Delta x^{(0)}\right)^2}{2} \left(\frac{d^2 f(x)}{dx^2}\right)^{(0)} + \cdots \tag{A.2}$$
Considering only the first two terms in Eq.
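A minimal sketch of the scheme just described (ours; the helper names are our own): linearize, solve for the update, and iterate until two consecutive iterates agree, with the n-dimensional version replacing the division by a Jacobian solve:

```python
import numpy as np

def newton_1d(f, df, x, tol=1e-12, max_iter=50):
    """Newton-Raphson for one equation f(x) = 0:
    solve f(x) + dx * f'(x) = 0 for dx, then update x <- x + dx."""
    for _ in range(max_iter):
        dx = -f(x) / df(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("no convergence")

def newton_nd(F, J, x, tol=1e-12, max_iter=50):
    """Same idea for n equations: solve J(x) dx = -F(x) each iteration."""
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("no convergence")

print(newton_1d(lambda x: x**2 - 2, lambda x: 2*x, x=1.0))  # sqrt(2)
```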
  • A Sequential Quadratic Programming Algorithm with an Additional Equality Constrained Phase
    A Sequential Quadratic Programming Algorithm with an Additional Equality Constrained Phase. José Luis Morales, Jorge Nocedal, Yuchen Wu. December 28, 2008.

    Abstract. A sequential quadratic programming (SQP) method is presented that aims to overcome some of the drawbacks of contemporary SQP methods. It avoids the difficulties associated with indefinite quadratic programming subproblems by defining this subproblem to be always convex. The novel feature of the approach is the addition of an equality constrained phase that promotes fast convergence and improves performance in the presence of ill conditioning. This equality constrained phase uses exact second order information and can be implemented using either a direct solve or an iterative method. The paper studies the global and local convergence properties of the new algorithm and presents a set of numerical experiments to illustrate its practical performance.

    1. Introduction. Sequential quadratic programming (SQP) methods are very effective techniques for solving small, medium-size and certain classes of large-scale nonlinear programming problems. They are often preferable to interior-point methods when a sequence of related problems must be solved (as in branch and bound methods) and, more generally, when a good estimate of the solution is available. Some SQP methods employ convex quadratic programming subproblems for the step computation (typically using quasi-Newton Hessian approximations) while other variants define the Hessian of the SQP model using second derivative information, which can lead to nonconvex quadratic subproblems; see [27, 1] for surveys on SQP methods.

    (Morales: Departamento de Matemáticas, Instituto Tecnológico Autónomo de México, México. This author was supported by Asociación Mexicana de Cultura AC and CONACyT-NSF grant J110.388/2006.)
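For the equality-constrained phase the abstract mentions, one Newton-SQP iteration reduces to a single linear KKT system. The sketch below is ours and greatly simplified (exact second-order information, but no line search or inertia handling):

```python
import numpy as np

def sqp_step(x, lam, grad_f, hess_L, c, jac_c):
    """One local SQP step for  min f(x)  s.t.  c(x) = 0:
    solve the KKT system
        [ H  A^T ] [ dx   ]   [ -grad_L ]
        [ A   0  ] [ dlam ] = [ -c      ]
    with H the Hessian of the Lagrangian and A the Jacobian of c."""
    H, A = hess_L(x, lam), jac_c(x)
    m = A.shape[0]
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + A.T @ lam, c(x)])
    step = np.linalg.solve(KKT, rhs)
    return x + step[:len(x)], lam + step[len(x):]

# min x1^2 + x2^2  s.t.  x1 + x2 - 2 = 0   (solution: x = (1, 1))
grad_f = lambda x: 2 * x
hess_L = lambda x, lam: 2 * np.eye(2)      # constraint is linear
c = lambda x: np.array([x[0] + x[1] - 2])
jac_c = lambda x: np.array([[1.0, 1.0]])

x, lam = np.array([3.0, -1.0]), np.array([0.0])
for _ in range(3):
    x, lam = sqp_step(x, lam, grad_f, hess_L, c, jac_c)
print(x)   # converges to [1, 1]; one step suffices for this quadratic problem
```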
  • Techniques of Variational Analysis
    J. M. Borwein and Q. J. Zhu, Techniques of Variational Analysis: An Introduction. October 8, 2004, Springer.

    To Tova, Naomi, Rachel and Judith. To Charles and Lilly. And in fond and respectful memory of Simon Fitzpatrick (1953-2004).

    Preface. Variational arguments are classical techniques whose use can be traced back to the early development of the calculus of variations and further. Rooted in the physical principle of least action, they have wide applications in diverse fields. The discovery of modern variational principles and nonsmooth analysis further expands the range of applications of these techniques. The motivation to write this book came from a desire to share our pleasure in applying such variational techniques and promoting these powerful tools. Potential readers of this book will be researchers and graduate students who might benefit from using variational methods.

    The only broad prerequisite we anticipate is a working knowledge of undergraduate analysis and of the basic principles of functional analysis (e.g., those encountered in a typical introductory functional analysis course). We hope to attract researchers from diverse areas, who may fruitfully use variational techniques, by providing them with a relatively systematic account of the principles of variational analysis. We also hope to give further insight to graduate students whose research already concentrates on variational analysis. Keeping these two different reader groups in mind, we arrange the material into relatively independent blocks. We discuss various forms of variational principles early in Chapter 2. We then discuss applications of variational techniques in different areas in Chapters 3-7.
  • Addendum: Remarks Added in Proof (November 1966)
    ADDENDUM. The following remarks were added in proof (November 1966).

    Page 67. An easy modification of exercise 4.8.25 establishes the following result of Wagner [1]: Every simplicial k-"complex" with at most ℵ₀ vertices has a representation in R^(2k+1) such that all the "simplices" are geometric (rectilinear) simplices.

    Page 93. J. H. Conway (private communication) has established the validity of the conjecture mentioned in the second footnote.

    Page 126. For d = 2, the theorem of Derry [2] given in exercise 7.3.4 was found earlier by Bilinski [1].

    Page 183. M. A. Perles (private communication) recently obtained an affirmative solution to Klee's problem mentioned at the end of section 10.1.

    Page 204. Regarding the question whether a(𝒞) = 3 implies b(𝒞) ≤ 4, it should be noted that if one starts from a topological cell complex 𝒞 with a(𝒞) = 3 it is possible that 𝒞 is not a complex (in our sense) at all (see exercise 11.1.7). On the other hand, G. Wegner pointed out (in a private communication to the author) that the 2-complex 𝒞 discussed in the proof of theorem 11.1.7 indeed satisfies b(𝒞) = 4.

    Page 216. Halin's [1] result (theorem 11.3.3) has recently been generalized by H. A. Jung to all complete d-partite graphs. (Halin's result deals with the graph of the d-octahedron, i.e. the d-partite graph in which each class of nodes contains precisely two nodes.) The existence of the numbers n(k) follows from a recent result of Mader [1]; Mader's result shows that n(k) ≤ k · 2^(k choose 2).
  • Proximal Analysis and Boundaries of Closed Sets in Banach Space, Part II: Applications
    Can. J. Math., Vol. XXXIX, No. 2, 1987, pp. 428-472. PROXIMAL ANALYSIS AND BOUNDARIES OF CLOSED SETS IN BANACH SPACE, PART II: APPLICATIONS. J. M. BORWEIN AND H. M. STROJWAS.

    Introduction. This paper is a direct continuation of the article "Proximal analysis and boundaries of closed sets in Banach space, Part I: Theory", by the same authors. It is devoted to a detailed analysis of applications of the theory presented in the first part and of its limitations.

    5. Applications in geometry of normed spaces. Theorem 2.1 has important consequences for the geometry of Banach spaces. We start the presentation with a discussion of density and existence of proper points (Definition 1.3) for closed sets in Banach spaces. Our considerations will be based on the "lim inf" inclusions proven in the first part of our paper.

    THEOREM 5.1. If C is a closed subset of a Banach space E, then the K-proper points of C are dense in the boundary of C.

    Proof. If x is in the boundary of C, for each r > 0 we may find y ∉ C with ‖y − x‖ < r. Theorem 2.1 now shows that K_C(x_r) ≠ E for some x_r ∈ C with ‖x − x_r‖ ≤ 2r.

    COROLLARY 5.1 ([2]). If C is a closed convex subset of a Banach space E, then the support points of C are dense in the boundary of C.

    Proof. Since for any convex set C and x ∈ C we have T_C(x) = K_C(x) = P_C(x) = P(C − x) = WT_C(x) = WK_C(x) = WP_C(x), the R-proper points of C at x, where R_C(x) is any one of the above cones, are exactly the support points.