
CONVEX ANALYSIS AND NONLINEAR OPTIMIZATION
Theory and Examples
Second Edition

JONATHAN M. BORWEIN
Faculty of Computer Science
Dalhousie University, Halifax, NS, Canada B3H 2Z6
[email protected]
http://users.cs.dal.ca/~jborwein

ADRIAN S. LEWIS
School of Operations Research and Industrial Engineering
Cornell University
[email protected]
http://www.orie.cornell.edu/~aslewis

To our families

Preface

Optimization is a rich and thriving mathematical discipline. Properties of minimizers and maximizers of functions rely intimately on a wealth of techniques from mathematical analysis, including tools from calculus and its generalizations, topological notions, and more geometric ideas. The theory underlying current computational optimization techniques grows ever more sophisticated: duality-based algorithms, interior point methods, and control-theoretic applications are typical examples. The powerful and elegant language of convex analysis unifies much of this theory. Hence our aim of writing a concise, accessible account of convex analysis and its applications and extensions, for a broad audience.

For students of optimization and analysis, there is great benefit to blurring the distinction between the two disciplines. Many important analytic problems have illuminating optimization formulations and hence can be approached through our main variational tools: subgradients and optimality conditions, the many guises of duality, metric regularity and so forth. More generally, the idea of convexity is central to the transition from classical analysis to various branches of modern analysis: from linear to nonlinear analysis, from smooth to nonsmooth, and from the study of functions to multifunctions. Thus, although we use certain optimization models repeatedly to illustrate the main results (models such as linear and semidefinite programming duality and cone polarity), we constantly emphasize the power of abstract models and notation.
Good reference works on finite-dimensional convex analysis already exist. Rockafellar's classic Convex Analysis [167] has been indispensable and ubiquitous since the 1970s, and a more general sequel with Wets, Variational Analysis [168], appeared recently. Hiriart-Urruty and Lemaréchal's Convex Analysis and Minimization Algorithms [97] is a comprehensive but gentler introduction. Our goal is not to supplant these works, but on the contrary to promote them, and thereby to motivate future researchers. This book aims to make converts.

We try to be succinct rather than systematic, avoiding becoming bogged down in technical details. Our style is relatively informal; for example, the text of each section creates the context for many of the result statements. We value the variety of independent, self-contained approaches over a single, unified, sequential development. We hope to showcase a few memorable principles rather than to develop the theory to its limits. We discuss no algorithms. We point out a few important references as we go, but we make no attempt at comprehensive historical surveys.

Optimization in infinite dimensions lies beyond our immediate scope. This is for reasons of space and accessibility rather than history or application: convex analysis developed historically from the calculus of variations, and has important applications in optimal control, mathematical economics, and other areas of infinite-dimensional optimization. However, rather like Halmos's Finite Dimensional Vector Spaces [90], ease of extension beyond finite dimensions substantially motivates our choice of approach. Where possible, we have chosen a proof technique permitting those readers familiar with functional analysis to discover for themselves how a result extends. We would, in part, like this book to be an entrée for mathematicians to a valuable and intrinsic part of modern analysis.
The final chapter illustrates some of the challenges arising in infinite dimensions.

This book can (and does) serve as a teaching text, at roughly the level of first year graduate students. In principle we assume no knowledge of real analysis, although in practice we expect a certain mathematical maturity. While the main body of the text is self-contained, each section concludes with an often extensive set of optional exercises. These exercises fall into three categories, marked with zero, one, or two asterisks, respectively, as follows: examples that illustrate the ideas in the text or easy expansions of sketched proofs; important pieces of additional theory or more testing examples; longer, harder examples or peripheral theory.

We are grateful to the Natural Sciences and Engineering Research Council of Canada for their support during this project. Many people have helped improve the presentation of this material. We would like to thank all of them, but in particular Patrick Combettes, Guillaume Haberer, Claude Lemaréchal, Olivier Ley, Yves Lucet, Hristo Sendov, Mike Todd, Xianfu Wang, and especially Heinz Bauschke.

Jonathan M. Borwein
Adrian S. Lewis
Gargnano, Italy
September 1999

Preface to the Second Edition

Since the publication of the First Edition of this book, convex analysis and nonlinear optimization has continued to flourish. The "interior point revolution" in algorithms for convex optimization, fired by Nesterov and Nemirovski's seminal 1994 work [148], and the growing interplay between convex optimization and engineering exemplified by Boyd and Vandenberghe's recent monograph [47], have fuelled a renaissance of interest in the fundamentals of convex analysis. At the same time, the broad success of key monographs on general variational analysis by Clarke, Ledyaev, Stern and Wolenski [56] and Rockafellar and Wets [168] over the last decade testify to a ripening interest in nonconvex techniques, as does the appearance of [43].
The Second Edition both corrects a few vagaries in the original and contains a new chapter emphasizing the rich applicability of variational analysis to concrete examples. After a new sequence of exercises ending Chapter 8 with a concise approach to monotone operator theory via convex analysis, the new Chapter 9 begins with a presentation of Rademacher's fundamental theorem on differentiability of Lipschitz functions. The subsequent sections describe the appealing geometry of proximal normals, four approaches to the convexity of Chebyshev sets, and two rich concrete models of nonsmoothness known as "amenability" and "partial smoothness". As in the First Edition, we develop and illustrate the material through extensive exercises.

Convex analysis has maintained a Canadian thread ever since Fenchel's original 1949 work on the subject in Volume 1 of the Canadian Journal of Mathematics [76]. We are grateful for the continuing support of the Canadian academic community in this project, and in particular to the Canadian Mathematical Society for their sponsorship of this book series, and to the Canadian Natural Sciences and Engineering Research Council for their support of our research endeavours.

Jonathan M. Borwein
Adrian S. Lewis
September 2005

Contents

Preface
1 Background
  1.1 Euclidean Spaces
  1.2 Symmetric Matrices
2 Inequality Constraints
  2.1 Optimality Conditions
  2.2 Theorems of the Alternative
  2.3 Max-functions
3 Fenchel Duality
  3.1 Subgradients and Convex Functions
  3.2 The Value Function
  3.3 The Fenchel Conjugate
4 Convex Analysis
  4.1 Continuity of Convex Functions
  4.2 Fenchel Biconjugation
  4.3 Lagrangian Duality
5 Special Cases
  5.1 Polyhedral Convex Sets and Functions
  5.2 Functions of Eigenvalues
  5.3 Duality for Linear and Semidefinite Programming
  5.4 Convex Process Duality
6 Nonsmooth Optimization
  6.1 Generalized Derivatives
  6.2 Regularity and Strict Differentiability
  6.3 Tangent Cones
  6.4 The Limiting Subdifferential
7 Karush–Kuhn–Tucker Theory
  7.1 An Introduction to Metric Regularity
  7.2 The Karush–Kuhn–Tucker Theorem
  7.3 Metric Regularity and the Limiting Subdifferential
  7.4 Second Order Conditions
8 Fixed Points
  8.1 The Brouwer Fixed Point Theorem
  8.2 Selection and the Kakutani–Fan Fixed Point Theorem
  8.3 Variational Inequalities
9 More Nonsmooth Structure
  9.1 Rademacher's Theorem
  9.2 Proximal Normals and Chebyshev Sets
  9.3 Amenable Sets and Prox-Regularity
  9.4 Partly Smooth Sets
10 Postscript: Infinite Versus Finite Dimensions
  10.1 Introduction
  10.2 Finite Dimensionality
  10.3 Counterexamples and Exercises
  10.4 Notes on Previous Chapters
11 List of Results and Notation
  11.1 Named Results
  11.2 Notation
Bibliography
Index

Chapter 1
Background

1.1 Euclidean Spaces

We begin by reviewing some of the fundamental algebraic, geometric and analytic ideas we use throughout the book. Our setting, for most of the book, is an arbitrary Euclidean space E, by which we mean a finite-dimensional vector space over the reals R, equipped with an inner product ⟨·, ·⟩. We would lose no generality if we considered only the space Rⁿ of real (column) n-vectors (with its standard inner product), but a more abstract, coordinate-free notation is often more flexible and elegant.
We define the norm of any point x in E by ‖x‖ = √⟨x, x⟩, and the unit ball is the set

  B = {x ∈ E | ‖x‖ ≤ 1}.

Any two points x and y in E satisfy the Cauchy–Schwarz inequality

  |⟨x, y⟩| ≤ ‖x‖ ‖y‖.

We define the sum of two sets C and D in E by

  C + D = {x + y | x ∈ C, y ∈ D}.

The definition of C − D is analogous, and for a subset Λ of R we define

  ΛC = {λx | λ ∈ Λ, x ∈ C}.

Given another Euclidean space Y, we can consider the Cartesian product Euclidean space E × Y, with inner product defined by ⟨(e, x), (f, y)⟩ = ⟨e, f⟩ + ⟨x, y⟩.
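Although the book itself presents no code, the definitions above are easy to make concrete. The following Python sketch (purely illustrative, not from the text; the function names are our own) computes the induced norm, checks the Cauchy–Schwarz inequality numerically, and forms the set sum C + D for small finite sets of vectors:

```python
import itertools
import math

def inner(x, y):
    """Standard inner product <x, y> on R^n, with vectors as tuples."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Norm induced by the inner product: ||x|| = sqrt(<x, x>)."""
    return math.sqrt(inner(x, x))

def minkowski_sum(C, D):
    """C + D = {x + y | x in C, y in D} for finite sets of vectors."""
    return {tuple(a + b for a, b in zip(x, y))
            for x, y in itertools.product(C, D)}

x, y = (3.0, 4.0), (1.0, 2.0)
print(norm(x))                                   # 5.0
# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
assert abs(inner(x, y)) <= norm(x) * norm(y)
print(minkowski_sum({(0.0, 0.0), (1.0, 0.0)}, {(0.0, 1.0)}))
```

The same pattern extends to ΛC (scale each x in C by each λ in Λ) and to the product space E × Y, whose inner product is simply the sum of the two coordinate inner products.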