Optimization Over Nonnegative and Convex Polynomials with and Without Semidefinite Programming

Total Pages: 16

File Type: pdf, Size: 1020 KB

Optimization Over Nonnegative and Convex Polynomials with and Without Semidefinite Programming. A dissertation presented to the faculty of Princeton University in candidacy for the degree of Doctor of Philosophy. Recommended for acceptance by the Department of Operations Research and Financial Engineering. Adviser: Professor Amir Ali Ahmadi. June 2018. © Copyright by Georgina Hall, 2018. All rights reserved.

Abstract

The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be out of reach, could be tackled by using sum of squares programming. This technique, however, has proved to be expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).

In the first part of this thesis, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second order cone programs generated in an adaptive fashion (a small numerical illustration of this SDP-versus-LP contrast is sketched after the contents below). We then focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through the use of SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for constructing converging hierarchies of lower bounds on POPs whose computation simply requires the ability to multiply certain fixed polynomials together and to check nonnegativity of the coefficients of their product.

In the second part of this thesis, we focus on the theory and applications of the problem of optimizing over convex polynomials, a subcase of the problem of optimizing over nonnegative polynomials. On the theoretical side, we show that the problem of testing whether a cubic polynomial is convex over a box is NP-hard. This result is minimal in the degree of the polynomial and complements previously-known results on the complexity of checking convexity of a polynomial globally. We also study norms generated by convex forms and provide an SDP hierarchy for optimizing over them. This requires an extension of a result of Reznick on sum of squares representations of positive definite forms to positive definite biforms. On the application side, we study a problem of interest to robotics and motion planning, which involves modeling complex environments with simpler representations. In this setup, we are interested in containing 3D point clouds within polynomial sublevel sets of minimum volume. We also study two applications in machine learning: the first is multivariate monotone regression, which is motivated by some applications in pricing; the second concerns a specific subclass of optimization problems called difference of convex (DC) programs, which appear naturally in machine learning problems. We show how our techniques can be used to optimally reformulate DC programs in order to speed up some of the best-known algorithms used for solving them.

Acknowledgements

I would first like to thank my adviser, Amir Ali Ahmadi.
I can safely say that I would never have written a PhD thesis, nor pursued a career in academia, if it had not been for my great good luck in having Amir Ali as an adviser. Amir Ali, thank you for being a wonderful person and mentor. Thank you for believing in me when I did not believe in myself, for not giving up on me when I was giving up on the program, and for making me feel comfortable and confident in an environment where I never expected to feel that way. Thank you for giving me the opportunity to see how a truly brilliant mind works—if I have only taken away from this PhD a fragment of your clarity of thought, your attention to detail, and your unbridled creativity, then it will have been a success. Thank you for investing so much time in helping me grow, reading over my papers until they were perfect, listening to my practice talks for hours on end, working with me to perfect the way I answer questions during talks, and giving me feedback on my proofs and my ideas. I know of no other adviser like you, so kind, altruistic, and gifted. Our 4-year collaboration has been a source of great inspiration and pleasure to me and I hope that it will continue past this PhD, until we are both old and grey (a time that may come earlier to some than others!).

I would also like to thank my thesis readers and committee members, Anirudha Majumdar, Bob Vanderbei, and Mengdi Wang. It has been a great honor to interact with you over the years, within Princeton and without.

Thank you to my co-authors, Emmanuel Abbe, Afonso Bandeira, Mihaela Curmei, Sanjeeb Dash, Etienne de Klerk, Ameesh Makadia, Antonis Papachristodoulou, James Saunderson, Vikas Sindhwani, and Yang Zheng. It was a real pleasure to work with you all. I learnt so much from my interactions with you. A special thank you to Afonso Bandeira for introducing me to Amir Ali.

I am extremely grateful to the people who wrote letters of recommendation for me, for job applications, fellowships, and otherwise: Emmanuel Abbe, René Carmona, Erhan Cinlar, Sanjeeb Dash, Etienne de Klerk, and Leo Liberti. I would also like to acknowledge Sanjeeb Dash, Jean-B. Lasserre, Leo Liberti, Daniel Kuhn, Antonis Papachristodoulou, Pablo Parrilo, and Bernd Sturmfels for inviting me to give seminars and offering help and advice when I was faced with important decisions.

I thank my mathematics teachers, M. Aiudi, M. Técourt, and M. Koen, as well as M. Cervera, my physics teacher in classes préparatoires, for their support. Without them, I would never have had the passion I have for the sciences, nor the rigor I acquired with them for studying them.

I would like to thank my incredible friends back home for the Skypes over the years, for visiting me, and for finding the time to see me whenever I went home: Alice, Brice, Florian, JG, Joachim, Jorge, Mathieu, Marie B., Marie G., Martin, Noémie, Paul, Sacha, Timtim, Valentine. Élise (and Sean!), thank you for supporting me during my second and third years; you are a wonderful friend. I can't wait to see all of you much more now that I am moving back!

Thank you to my fantastic ORFE officemates and teammates, as well as all the people I was lucky enough to TA with over the years: Bachir, Cagin, Cemil, Chenyi, Donghwa, Elahe, Firdevs, Han, Jeff (who will remind me to register for conferences and book tickets now?), Joane, Junwei, Kaizheng, Kevin W., Sinem, Tianqi, Thomas P., Yutong, Yuyan, Zongxi.
I would also like to acknowledge the wonderful students I have had the opportunity to teach over the years, and particularly those with whom I worked on senior theses: Ellie, Mihaela, Salena, and Tin. Thank you to Kim for being a great department coordinator, and to Carol and Melissa for helping me with my receipts!

To my other friends in Princeton, you made the toughest times not only bearable but fun: Adam, Adrianna, Amanda, Alex, Carly, Chamsi, Daniel J., Genna, Hiba, Ingrid, Jessica, Kelsey, Lili, Kobkob, Marte, Matteo, Roger, Sachin, Sara, Thomas F.

Last but not least, I am lucky enough to have the most supportive family anyone could ever ask for. Kevin, your ability to see the humorous side of every situation and to take the drama out of my most complicated problems kept me sane. Thank you for everything. Luey, what more can I say, except that you went out and bought me an entire survival kit before my INSEAD interview. Every time we see each other (#Disney) or call each other, my day becomes that much better. Tom, thank you for being Team Georgi until the very end. Mutts and Dads, thanks for being the best parents ever. Your unconditional support and the fact that you were willing to learn about a whole new area just to keep up with me meant so much to me. Thank you.

To Mutts and Dads.

Contents

Abstract
Acknowledgements
List of Tables
List of Figures
1 Introduction
  1.1 Outline of this thesis
  1.2 Related publications
I LP, SOCP, and Optimization-Free Approaches to Semidefinite and Sum of Squares Programming
2 Optimization over Structured Subsets of Positive Semidefinite Matrices via Column Generation
  2.1 Introduction
  2.2 Preliminaries
    2.2.1 DSOS and SDSOS optimization
  2.3 Column generation for inner approximation of positive semidefinite cones
    2.3.1 LP-based column generation
    2.3.2 SOCP-based column generation
  2.4 Nonconvex polynomial optimization
    2.4.1 Experiments with a 10-variable quartic
    2.4.2 Larger computational experiments
  2.5 Inner approximations of copositive programs and the maximum stable set problem
  2.6 Conclusions and future research
3 Sum of Squares Basis Pursuit with Linear and Second Order Cone Programming
  3.1 Introduction
    3.1.1 Organization of this chapter
  3.2 Preliminaries
    3.2.1 DSOS and SDSOS optimization
  3.3 Pursuing improved bases
    3.3.1 Inner approximations of the psd cone
    3.3.2 Inner approximations to the cone of nonnegative polynomials
    3.3.3 Extreme-ray interpretation of the change of basis
    3.3.4 Outer approximations of the psd cone
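To make the abstract's SDP-versus-LP contrast concrete, here is a minimal sketch (not code from the thesis; it assumes the cvxpy package, and the polynomial, the coefficient list `c`, and the helper `best_bound` are made up for illustration). It bounds a univariate quartic from below once with a sum of squares certificate, which requires a semidefinite program, and once with a diagonally-dominant (DSOS-style) certificate, which only requires a linear program.

```python
import cvxpy as cp

# Coefficients of p(x) = x^4 + x^2 - x + 1 (a made-up example polynomial).
c = [1.0, -1.0, 1.0, 0.0, 1.0]  # c[k] multiplies x^k

def best_bound(cone):
    # Gram-matrix parametrization: p(x) - gamma = z^T Q z with z = (1, x, x^2).
    Q = cp.Variable((3, 3), symmetric=True)
    gamma = cp.Variable()
    constraints = [
        Q[0, 0] == c[0] - gamma,        # constant term
        2 * Q[0, 1] == c[1],            # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == c[2],  # coefficient of x^2
        2 * Q[1, 2] == c[3],            # coefficient of x^3
        Q[2, 2] == c[4],                # coefficient of x^4
    ]
    if cone == "sos":
        constraints.append(Q >> 0)      # Q PSD: a semidefinite program
    else:  # "dsos": diagonally dominant Q, expressible with linear constraints
        for i in range(3):
            off_diag = sum(cp.abs(Q[i, j]) for j in range(3) if j != i)
            constraints.append(Q[i, i] >= off_diag)
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    return gamma.value

print("SOS  (SDP) bound:", best_bound("sos"))   # ~0.785, the true minimum of p
print("DSOS (LP)  bound:", best_bound("dsos"))  # 0.5, weaker but cheaper
```

For univariate polynomials the SOS relaxation is tight, so the SDP recovers the true minimum; the diagonally-dominant restriction gives a weaker bound but is computable by linear programming, which is the kind of trade-off the first part of the thesis develops and refines.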
Recommended publications
  • AM 221: Advanced Optimization, Lecture 2: Overview and Elements of Convex Analysis
    AM 221: Advanced Optimization, Spring 2016. Prof. Yaron Singer. Lecture 2, Wednesday, January 27th. 1 Overview. In our previous lecture we discussed several applications of optimization, introduced basic terminology, and proved Weierstrass's theorem (circa 1830), which gives a sufficient condition for the existence of an optimal solution to an optimization problem. Today we will discuss basic definitions and properties of convex sets and convex functions. We will then prove the separating hyperplane theorem and see an application of separating hyperplanes in machine learning. We'll conclude by describing the seminal perceptron algorithm (1957), which is designed to find separating hyperplanes. 2 Elements of Convex Analysis. We will primarily consider optimization problems over convex sets, i.e., sets that contain the line segment between any two of their points. We illustrate some convex and non-convex sets in Figure 1. Definition. A set $S$ is called a convex set if for any $x_1, x_2 \in S$ and any $\lambda \in [0,1]$ we have $\lambda x_1 + (1-\lambda)x_2 \in S$. In the previous lecture we saw linear regression as an example of an optimization problem, and mentioned that the RSS function has a convex shape. We can now define this concept formally. Definition. For a convex set $S \subseteq \mathbb{R}^n$, we say that a function $f : S \to \mathbb{R}$ is: convex on $S$ if for any two points $x_1, x_2 \in S$ and any $\lambda \in [0,1]$ we have $f(\lambda x_1 + (1-\lambda)x_2) \le \lambda f(x_1) + (1-\lambda) f(x_2)$; strictly convex on $S$ if for any two distinct points $x_1, x_2 \in S$ and any $\lambda \in (0,1)$ we have $f(\lambda x_1 + (1-\lambda)x_2) < \lambda f(x_1) + (1-\lambda) f(x_2)$. We illustrate a convex function in Figure 2.
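    As a quick sanity check of the definition (a standard example, not from the lecture), take $S = \mathbb{R}$ and $f(x) = x^2$. For any $x_1, x_2$ and $\lambda \in [0,1]$,
    \[ \lambda f(x_1) + (1-\lambda) f(x_2) - f(\lambda x_1 + (1-\lambda)x_2) = \lambda(1-\lambda)(x_1 - x_2)^2 \ge 0, \]
    so $f$ is convex; the same identity shows $f$ is strictly convex, since the right-hand side is strictly positive whenever $x_1 \neq x_2$ and $\lambda \in (0,1)$.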
  • First Order Characterizations of Pseudoconvex Functions
    Serdica Math. J. 27 (2001), 203–218. First Order Characterizations of Pseudoconvex Functions. Vsevolod Ivanov Ivanov. Communicated by A. L. Dontchev. Abstract. First order characterizations of pseudoconvex functions are investigated in terms of generalized directional derivatives. A connection with invexity is analysed. Well-known first order characterizations of the solution sets of pseudolinear programs are generalized to the case of pseudoconvex programs. The concepts of pseudoconvexity and invexity do not depend on a single definition of the generalized directional derivative. 2000 Mathematics Subject Classification: 26B25, 90C26, 26E15. Key words: generalized convexity, nonsmooth function, generalized directional derivative, pseudoconvex function, quasiconvex function, invex function, nonsmooth optimization, solution sets, pseudomonotone generalized directional derivative. 1. Introduction. Three characterizations of pseudoconvex functions are considered in this paper. The first is new. It is well known that each pseudoconvex function is invex. The following question then arises: what is the type of the function $\eta$ from the definition of invexity when the invex function is pseudoconvex? This question is considered in Section 3, and a first order necessary and sufficient condition for pseudoconvexity of a function is given there. It is shown that the class of strongly pseudoconvex functions considered by Weir [25] coincides with the pseudoconvex ones. The main result of Section 3 is applied to characterize the solution set of a nonlinear programming problem in Section 4. The basic results of Jeyakumar and Yang in [13] are generalized there to the case when the function is pseudoconvex.
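    A standard one-dimensional example separating these classes (not taken from the paper): $f(x) = x + x^3$ on $\mathbb{R}$ has $f'(x) = 1 + 3x^2 > 0$, so whenever $f'(x)(y - x) \ge 0$ we get $y \ge x$ and hence $f(y) \ge f(x)$; thus $f$ is pseudoconvex. It is not convex, however, since $f''(x) = 6x < 0$ for $x < 0$.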
  • Applications of Convex Analysis Within Mathematics
    Applications of Convex Analysis within Mathematics. Francisco J. Aragón Artacho, Jonathan M. Borwein, Victoria Martín-Márquez, Liangjin Yao. July 19, 2013. Abstract. In this paper, we study convex analysis and its theoretical applications. We first apply important tools of convex analysis to Optimization and to Analysis. We then show various deep applications of convex analysis, and especially infimal convolution, in Monotone Operator Theory. Among other things, we recapture the Minty surjectivity theorem in Hilbert space, and present a new proof of the sum theorem in reflexive spaces. More technically, we also discuss autoconjugate representers for maximally monotone operators. Finally, we consider various other applications in mathematical analysis. Keywords: Adjoint, Asplund averaging, autoconjugate representer, Banach limit, Chebyshev set, convex functions, Fenchel duality, Fenchel conjugate, Fitzpatrick function, Hahn–Banach extension theorem, infimal convolution, linear relation, Minty surjectivity theorem, maximally monotone operator, monotone operator, Moreau's decomposition, Moreau envelope, Moreau's max formula, Moreau–Rockafellar duality, normal cone operator, renorming, resolvent, Sandwich theorem, subdifferential operator, sum theorem, Yosida approximation. Mathematics Subject Classification (2000): Primary 47N10, 90C25; Secondary 47H05, 47A06, 47B65. Francisco J. Aragón Artacho, Centre for Computer Assisted Research Mathematics and its Applications (CARMA),
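    To illustrate one of the central tools listed above (a textbook example, not from the paper), recall the Fenchel conjugate $f^*(y) = \sup_x \{\langle x, y \rangle - f(x)\}$. For $f(x) = \tfrac{1}{2}\|x\|^2$ the supremum is attained at $x = y$, giving
    \[ f^*(y) = \langle y, y \rangle - \tfrac{1}{2}\|y\|^2 = \tfrac{1}{2}\|y\|^2, \]
    so $f = f^*$; this is in fact the only self-conjugate function on a Hilbert space.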
  • (Convex) Level Sets Integration
    Journal of Optimization Theory and Applications. (Convex) Level Sets Integration. Jean-Pierre Crouzeix, Andrew Eberhard, Daniel Ralph. Dedicated to Vladimir Demjanov. Abstract. The paper addresses the problem of recovering a pseudoconvex function from the normal cones to its level sets, which we call the convex level sets integration problem. An important application is the revealed preference problem. Our main result can be described as integrating a maximally cyclically pseudoconvex multivalued map that sends vectors or "bundles" of a Euclidean space to convex sets in that space. That is, we are seeking a pseudoconvex (real) function such that the normal cone at each boundary point of each of its lower level sets contains the set value of the multivalued map at the same point. This raises the question of uniqueness of that function up to rescaling. Even after normalising the function along an orienting direction, we give a counterexample to its uniqueness. We are, however, able to show uniqueness under a condition motivated by the classical theory of ordinary differential equations. Keywords: Convexity and Pseudoconvexity, Monotonicity and Pseudomonotonicity, Maximality, Revealed Preferences. Mathematics Subject Classification (2000): 26B25, 91B42, 91B16. Jean-Pierre Crouzeix (Corresponding Author), LIMOS, Campus Scientifique des Cézeaux, Université Blaise Pascal, 63170 Aubière, France, E-mail: [email protected]. Andrew Eberhard, School of Mathematical & Geospatial Sciences, RMIT University, Melbourne, VIC, Australia, E-mail: [email protected]. Daniel Ralph, University of Cambridge, Judge Business School, UK, E-mail: [email protected].
  • On the Ekeland Variational Principle with Applications and Detours
    Lectures on the Ekeland Variational Principle with Applications and Detours. By D. G. De Figueiredo. Tata Institute of Fundamental Research, Bombay, 1989. Author: D. G. De Figueiredo, Departamento de Matemática, Universidade de Brasília, 70.910 Brasília-DF, Brazil. © Tata Institute of Fundamental Research, 1989. ISBN 3-540-51179-2 Springer-Verlag, Berlin, Heidelberg, New York, Tokyo; ISBN 0-387-51179-2 Springer-Verlag, New York, Heidelberg, Berlin, Tokyo. No part of this book may be reproduced in any form, by print, microfilm, or any other means, without written permission from the Tata Institute of Fundamental Research, Colaba, Bombay 400 005. Printed by INSDOC Regional Centre, Indian Institute of Science Campus, Bangalore 560012, and published by H. Goetze, Springer-Verlag, Heidelberg, West Germany. Printed in India. Preface. Since its appearance in 1972, the variational principle of Ekeland has found many applications in different fields of Analysis. The best references for those are by Ekeland himself: his survey article [23] and his book with J.-P. Aubin [2]. Not all the material presented here appears in those places. Some of it is scattered around, and therein lies my motivation for writing these notes. Since they are intended for students, I have included a lot of related material. Those are the detours. A chapter on Nemytskii mappings may sound strange. However, I believe it is useful, since their properties, so often used, are seldom proved. We always say to the students: go and look in Krasnoselskii or Vainberg! I think some of the proofs presented here are more straightforward. There are two chapters on applications to PDE.
  • Sum of Squares and Polynomial Convexity
    Sum of Squares and Polynomial Convexity. Amir Ali Ahmadi and Pablo A. Parrilo. Abstract. The notion of sos-convexity has recently been proposed as a tractable sufficient condition for convexity of polynomials based on a sum of squares decomposition: a multivariate polynomial $p(x) = p(x_1, \dots, x_n)$ is said to be sos-convex if its Hessian $H(x)$ can be factored as $H(x) = M^T(x) M(x)$ with a possibly nonsquare polynomial matrix $M(x)$, so that positive semidefiniteness of the Hessian matrix is replaced with the existence of an appropriately defined sum of squares decomposition. As we will briefly review in this paper, by drawing some appealing connections between real algebra and numerical optimization, one can reduce the problem of deciding sos-convexity of a polynomial to the feasibility of a semidefinite program, which can be checked efficiently. Motivated by this computational tractability, it has been speculated whether every convex polynomial must necessarily be sos-convex. In this paper, we answer this question in the negative by presenting an explicit example of a trivariate homogeneous polynomial of degree eight that is convex but not sos-convex. Despite the relative recency of the concept of sos-convexity, it has already appeared in a number of theoretical and practical settings. In [6], Helton and Nie use sos-convexity to give sufficient conditions for semidefinite representability of semialgebraic sets. In [7], Lasserre uses sos-convexity to extend Jensen's inequality in convex analysis to linear functionals that are not necessarily probability measures.
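    A minimal worked instance of the definition above (a toy example, not from the paper): $p(x_1, x_2) = x_1^4 + x_2^4$ has Hessian
    \[ H(x) = \begin{pmatrix} 12x_1^2 & 0 \\ 0 & 12x_2^2 \end{pmatrix} = M^T(x) M(x), \qquad M(x) = \begin{pmatrix} 2\sqrt{3}\,x_1 & 0 \\ 0 & 2\sqrt{3}\,x_2 \end{pmatrix}, \]
    so $p$ is sos-convex, and this factorization is exactly the kind of certificate a semidefinite program can search for.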
  • Subdifferential Properties of Quasiconvex and Pseudoconvex Functions: Unified Approach
    Journal of Optimization Theory and Applications: Vol. 97, No. 1, pp. 29–45, April 1998. Subdifferential Properties of Quasiconvex and Pseudoconvex Functions: Unified Approach. D. Aussel. Communicated by M. Avriel. Abstract. In this paper, we are mainly concerned with the characterization of quasiconvex or pseudoconvex nondifferentiable functions and the relationship between those two concepts. In particular, we characterize the quasiconvexity and pseudoconvexity of a function by mixed properties combining properties of the function and properties of its subdifferential. We also prove that a lower semicontinuous and radially continuous function is pseudoconvex if it is quasiconvex and satisfies the following optimality condition: $0 \in \partial f(x) \Rightarrow f$ has a global minimum at $x$. The results are proved using the abstract subdifferential introduced in Ref. 1, a concept which allows one to recover almost all the subdifferentials used in nonsmooth analysis. Key Words: Nonsmooth analysis, abstract subdifferential, quasiconvexity, pseudoconvexity, mixed property. 1. Preliminaries. Throughout this paper, we use the concept of abstract subdifferential introduced in Aussel et al. (Ref. 1). This definition allows one to recover almost all classical notions of nonsmooth analysis. The aim of such an approach is to show that a lot of results concerning the subdifferential properties of quasiconvex and pseudoconvex functions can be proved in a unified way, and hence for a large class of subdifferentials. We are interested here in two aspects of convex analysis: the characterization of quasiconvex and pseudoconvex lower semicontinuous functions and the relationship between these two concepts. (The author is indebted to Prof. M. Lassonde for many helpful discussions leading to a significant improvement in the presentation of the results.)
  • Chapter 5: Convex Optimization in Function Space. 5.1 Foundations of Convex Analysis
    Chapter 5: Convex Optimization in Function Space. 5.1 Foundations of Convex Analysis. Let $V$ be a vector space over $\mathbb{R}$ and $\|\cdot\| : V \to \mathbb{R}$ be a norm on $V$. We recall that $(V, \|\cdot\|)$ is called a Banach space if it is complete, i.e., if any Cauchy sequence $\{v_k\}_{k \in \mathbb{N}}$ of elements $v_k \in V$, $k \in \mathbb{N}$, converges to an element $v \in V$ ($\|v_k - v\| \to 0$ as $k \to \infty$). Examples: Let $\Omega$ be a domain in $\mathbb{R}^d$, $d \in \mathbb{N}$. Then the space $C(\Omega)$ of continuous functions on $\Omega$ is a Banach space with the norm $\|u\|_{C(\Omega)} := \sup_{x \in \Omega} |u(x)|$. The spaces $L^p(\Omega)$, $1 \le p < \infty$, of (in the Lebesgue sense) $p$-integrable functions are Banach spaces with the norms $\|u\|_{L^p(\Omega)} := \left( \int_\Omega |u(x)|^p \, dx \right)^{1/p}$. The space $L^\infty(\Omega)$ of essentially bounded functions on $\Omega$ is a Banach space with the norm $\|u\|_{L^\infty(\Omega)} := \operatorname{ess\,sup}_{x \in \Omega} |u(x)|$. The (topologically and algebraically) dual space $V^*$ is the space of all bounded linear functionals $\mu : V \to \mathbb{R}$. Given $\mu \in V^*$, for $\mu(v)$ we often write $\langle \mu, v \rangle$, with $\langle \cdot, \cdot \rangle$ denoting the dual product between $V^*$ and $V$. We note that $V^*$ is a Banach space equipped with the norm $\|\mu\| := \sup_{v \in V \setminus \{0\}} \frac{|\langle \mu, v \rangle|}{\|v\|}$. Examples: The dual of $C(\Omega)$ is the space $M(\Omega)$ of Radon measures $\mu$ with $\langle \mu, v \rangle := \int_\Omega v \, d\mu$, $v \in C(\Omega)$. The dual of $L^1(\Omega)$ is the space $L^\infty(\Omega)$. The dual of $L^p(\Omega)$, $1 < p < \infty$, is the space $L^q(\Omega)$ with $q$ conjugate to $p$, i.e., $1/p + 1/q = 1$.
  • Characterizations of Quasiconvex Functions and Pseudomonotonicity of Subdifferentials
    J. Appl. Environ. Biol. Sci., 5(6):358–364, 2015. ISSN: 2090-4274. Journal of Applied Environmental and Biological Sciences, © 2015 TextRoad Publication, www.textroad.com. Characterizations of Quasiconvex Functions and Pseudomonotonicity of Subdifferentials. Somayeh Yazdani, Department of Mathematics, Faculty of Basic Sciences, Nour Branch, Islamic Azad University, Nour, Iran. Received: January 6, 2015. Accepted: March 10, 2015. Abstract. In this paper we introduce the concepts of quasimonotone maps and pseudoconvex functions. Moreover, a notion of pseudomonotonicity for multivalued mappings is introduced; it is shown that, if a function $f$ is continuous, then its pseudoconvexity is equivalent to the pseudomonotonicity of its generalized subdifferential in the sense of Clarke and Rockafellar, and it is proved that a lower semicontinuous function on an infinite dimensional space is quasiconvex if and only if its generalized subdifferential is quasimonotone. Keywords: Quasiconvex functions, pseudoconvex functions, quasimonotone maps, pseudomonotone maps, generalized subdifferentials. I. INTRODUCTION. As far as we know, quasiconvex functions were first mentioned by de Finetti in 1949 [Finetti, 1949]. Since then, much effort has been focused on the study of this class of functions, for they have much in common with convex functions. One of the most important properties of convex functions is that their level sets are convex. Early results in this direction, first obtained in Banach spaces, relate the monotonicity of a map to convexity properties of an associated function; this line of work was later extended to nonlinear mappings and differential relations. The purpose of this paper is to present these powerful tools and some of their applications developed in the analysis of the last few decades.
  • Lecture 7: Hyperbolic Polynomials
    MIT 6.972: Algebraic Techniques and Semidefinite Optimization. March 2, 2006. Lecture 7. Lecturer: Pablo A. Parrilo. In this lecture we introduce a special class of multivariate polynomials, called hyperbolic. These polynomials were originally studied in the context of partial differential equations. As we will see, they have many surprising properties, and are intimately linked with convex optimization problems that have an algebraic structure. A few good references about the use of hyperbolic polynomials in optimization are [Gül97, BGLS01, Ren]. 1 Hyperbolic polynomials. Consider a homogeneous multivariate polynomial $p \in \mathbb{R}[x_1, \dots, x_n]$ of degree $d$. Here homogeneous of degree $d$ means that the sum of the degrees of each monomial is constant and equal to $d$, i.e., $p(x) = \sum_{|\alpha| = d} c_\alpha x^\alpha$, where $\alpha = (\alpha_1, \dots, \alpha_n)$, $\alpha_i \in \mathbb{N} \cup \{0\}$, and $|\alpha| = \alpha_1 + \cdots + \alpha_n$. A homogeneous polynomial satisfies $p(tw) = t^d p(w)$ for all real $t$ and vectors $w \in \mathbb{R}^n$. We denote the set of such polynomials by $H_n(d)$. By identifying a polynomial with its vector of coefficients, we can consider $H_n(d)$ as a normed vector space of dimension $\binom{n+d-1}{d}$. Definition 1. Let $e$ be a fixed vector in $\mathbb{R}^n$. A polynomial $p \in H_n(d)$ is hyperbolic with respect to $e$ if $p(e) > 0$ and, for all vectors $x \in \mathbb{R}^n$, the univariate polynomial $t \mapsto p(x - te)$ has only real roots. A natural geometric interpretation is the following: consider the hypersurface in $\mathbb{R}^n$ given by $p(x) = 0$. Then hyperbolicity is equivalent to the condition that every line in $\mathbb{R}^n$ parallel to $e$ intersects this hypersurface at exactly $d$ points (counting multiplicities), where $d$ is the degree of the polynomial.
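    A standard example fitting Definition 1 (a classical illustration, not reproduced from the notes): the Lorentz form $p(x) = x_1^2 - x_2^2 - \cdots - x_n^2$ is hyperbolic with respect to $e = (1, 0, \dots, 0)$, since $p(e) = 1 > 0$ and
    \[ p(x - te) = (x_1 - t)^2 - x_2^2 - \cdots - x_n^2 \]
    has the two real roots $t = x_1 \pm \sqrt{x_2^2 + \cdots + x_n^2}$ for every $x \in \mathbb{R}^n$.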
  • Analysis of Convex Envelopes of Polynomials and Exact Relaxations of Non-Convex Variational Problems
    Analysis of Convex Envelopes of Polynomials and Exact Relaxations of Non-Convex Variational Problems. René Meziat, Diego Patiño. Departamento de Matemáticas, Universidad de los Andes, Carrera 1 No 18A-10, Bogotá, Colombia. [email protected], [email protected]. March 2005. Abstract. We solve non-convex variational problems of the form $\min_u I(u) = \int_0^1 f(u'(x)) \, dx$ s.t. $u(0) = 0$, $u(1) = a$, (1) where $u \in (W^{1,\infty}(0,1))^k$ and $f : \mathbb{R}^k \to \mathbb{R}$ is a non-convex, coercive polynomial. To solve (1), we analyse the convex envelope $f_c$ at the point $a$; this means finding vectors $a^1, \dots, a^N \in \mathbb{R}^k$ and positive values $\lambda_1, \dots, \lambda_N$ satisfying the non-linear equation $(1, a, f_c(a)) = \sum_{i=1}^N \lambda_i (1, a^i, f(a^i))$. (2) With this information we calculate minimizers of (1) by following a proposal of B. Dacorogna in [15]. We represent the solution of (2) as the minimizer of one particular convex optimization problem defined on probability measures, which can be transformed into a semidefinite program by using necessary conditions for the solution of the multidimensional moments problem. To find explicit solutions of (1), we use a constructive, a posteriori approach which exploits the characterization of one-dimensional moments on the marginal moments of a bivariate distribution. By following J. B. Lasserre's proposal on global optimization of polynomials [25], we determine conditions on $f$ and $a$ to obtain exact semidefinite relaxations of (1).
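    As a one-dimensional illustration of equation (2) (a made-up example, not from the paper): for $k = 1$ and the double-well polynomial $f(u) = (u^2 - 1)^2$, the convex envelope satisfies $f_c(a) = 0$ for $|a| \le 1$. At $a = 0$ one can take $N = 2$, $a^1 = -1$, $a^2 = 1$, and $\lambda_1 = \lambda_2 = \tfrac{1}{2}$, so that
    \[ (1, 0, f_c(0)) = \tfrac{1}{2}(1, -1, f(-1)) + \tfrac{1}{2}(1, 1, f(1)) = (1, 0, 0), \]
    recovering $f_c(0) = 0$ even though $f(0) = 1$.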
  • Geometry of Real Polynomials, Convexity and Optimization
    Geometry of Real Polynomials, Convexity and Optimization. Grigoriy Blekherman, Georgia Institute of Technology, Atlanta, GA, USA; Daniel Plaumann, TU Dortmund University, Dortmund, Germany; Levent Tunçel, University of Waterloo, Ontario, Canada (Contact Organizer); Cynthia Vinzant, North Carolina State University, Raleigh, NC, USA. May 26–31, 2019. 1 Overview of the Field. The workshop was focussed on recent interactions among the areas of real algebraic geometry, the geometry of polynomials, and convex and polynomial optimization. A common thread is the study of two important classes of real polynomials, namely hyperbolic and nonnegative polynomials. These two themes are interacting deeply with optimization as well as theoretical computer science. This interaction recently led to solutions of several important open problems, new breakthroughs, and applications. The study of hyperbolic polynomials originated in PDE theory, with researchers like Petrovsky laying the foundations in the 1930s. This was further developed by Gårding [1959], Atiyah, Bott and Gårding [1972, 1973], as well as Hörmander. Interest and active research in the area was renewed in the early 1990s in connection with optimization. Since then, there has been an amazing amount of activity allowing the subject to branch into and have a significant impact on a wide range of fields, including systems and control theory, convex analysis, interior-point methods, discrete optimization and combinatorics, semidefinite programming, matrix theory, operator algebras, and theoretical computer science. The other main focus of the workshop was the closely related study of nonnegative polynomials and polynomial optimization. A systematic study of nonnegative polynomials already appeared in Minkowski's early work in the 19th century.