Optimization Over Nonnegative and Convex Polynomials With and Without Semidefinite Programming
OPTIMIZATION OVER NONNEGATIVE AND CONVEX POLYNOMIALS WITH AND WITHOUT SEMIDEFINITE PROGRAMMING

GEORGINA HALL

A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

RECOMMENDED FOR ACCEPTANCE BY THE DEPARTMENT OF OPERATIONS RESEARCH AND FINANCIAL ENGINEERING

ADVISER: PROFESSOR AMIR ALI AHMADI

JUNE 2018

© Copyright by Georgina Hall, 2018. All rights reserved.

Abstract

The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem in computational mathematics, with applications to polynomial optimization, control, machine learning, game theory, and combinatorics, among others. A number of breakthrough papers in the early 2000s showed that this problem, long thought to be out of reach, could be tackled by using sum of squares programming. This technique, however, has proved to be expensive for large-scale problems, as it involves solving large semidefinite programs (SDPs).

In the first part of this thesis, we present two methods for approximately solving large-scale sum of squares programs that dispense altogether with semidefinite programming and only involve solving a sequence of linear or second order cone programs generated in an adaptive fashion. We then focus on the problem of finding tight lower bounds on polynomial optimization problems (POPs), a fundamental task in this area that is most commonly handled through the use of SDP-based sum of squares hierarchies (e.g., due to Lasserre and Parrilo). In contrast to previous approaches, we provide the first theoretical framework for constructing converging hierarchies of lower bounds on POPs whose computation simply requires the ability to multiply certain fixed polynomials together and to check nonnegativity of the coefficients of their product.
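To give a flavor of what a "multiply and check coefficients" certificate looks like, here is a minimal sketch in the spirit of Polya's theorem (the code and the helper names `poly_mul` and `polya_level` are illustrative, not taken from the thesis): we search for a power r such that (x_1 + ... + x_n)^r * p(x) has only nonnegative coefficients, which certifies positivity of p on the nonnegative orthant away from the origin.

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two polynomials stored as {exponent tuple: coefficient} dicts."""
    out = defaultdict(float)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(out)

def polya_level(p, n, max_r=10):
    """Return the smallest r <= max_r such that (x_1 + ... + x_n)^r * p
    has only nonnegative coefficients, together with the product itself.
    Such an r is a Polya-style certificate of positivity of the form p
    on the nonnegative orthant (away from the origin)."""
    simplex = {tuple(int(i == j) for i in range(n)): 1.0 for j in range(n)}
    q = dict(p)
    for r in range(max_r + 1):
        if all(c >= -1e-12 for c in q.values()):
            return r, q
        q = poly_mul(q, simplex)  # multiply by (x_1 + ... + x_n) once more
    return None, None

# p(x, y) = x^2 - x*y + y^2 has a negative coefficient, yet multiplying
# by (x + y) clears it: (x + y) * p = x^3 + y^3.
p = {(2, 0): 1.0, (1, 1): -1.0, (0, 2): 1.0}
r, cert = polya_level(p, 2)
print(r)  # 1
```

Here r = 1 suffices, since (x + y)(x^2 - xy + y^2) = x^3 + y^3; the only operations needed were polynomial multiplication and sign checks on coefficients, with no semidefinite (or any other) optimization involved.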
In the second part of this thesis, we focus on the theory and applications of the problem of optimizing over convex polynomials, a subcase of the problem of optimizing over nonnegative polynomials. On the theoretical side, we show that the problem of testing whether a cubic polynomial is convex over a box is NP-hard. This result is minimal in the degree of the polynomial and complements previously-known results on the complexity of checking convexity of a polynomial globally. We also study norms generated by convex forms and provide an SDP hierarchy for optimizing over them. This requires an extension of a result of Reznick on sum of squares representation of positive definite forms to positive definite biforms. On the application side, we study a problem of interest to robotics and motion planning, which involves modeling complex environments with simpler representations. In this setup, we are interested in containing 3D point clouds within polynomial sublevel sets of minimum volume. We also study two applications in machine learning: the first is multivariate monotone regression, which is motivated by some applications in pricing; the second concerns a specific subclass of optimization problems called difference of convex (DC) programs, which appear naturally in machine learning problems. We show how our techniques can be used to optimally reformulate DC programs in order to speed up some of the best-known algorithms used for solving them.

Acknowledgements

I would first like to thank my adviser, Amir Ali Ahmadi. I can safely say that I would never have written a PhD thesis, nor pursued a career in academia, if it had not been for my great good luck in having Amir Ali as an adviser. Amir Ali, thank you for being a wonderful person and mentor.
Thank you for believing in me when I did not believe in myself, for not giving up on me when I was giving up on the program, and for making me feel comfortable and confident in an environment where I never expected to feel that way. Thank you for giving me the opportunity to see how a truly brilliant mind works: if I have only taken away from this PhD a fragment of your clarity of thought, your attention to detail, and your unbridled creativity, then it will have been a success. Thank you for investing so much time in helping me grow, reading over my papers until they were perfect, listening to my practice talks for hours on end, working with me to perfect the way I answer questions during talks, and giving me feedback on my proofs and my ideas. I know of no other adviser like you, so kind, altruistic, and gifted. Our 4-year collaboration has been a source of great inspiration and pleasure to me and I hope that it will continue past this PhD, until we are both old and grey (a time that may come earlier to some than others!).

I would also like to thank my thesis readers and committee members, Anirudha Majumdar, Bob Vanderbei, and Mengdi Wang. It has been a great honor to interact with you over the years within Princeton and without.

Thank you to my co-authors, Emmanuel Abbe, Afonso Bandeira, Mihaela Curmei, Sanjeeb Dash, Etienne de Klerk, Ameesh Makadia, Antonis Papachristodoulou, James Saunderson, Vikas Sindhwani, and Yang Zheng. It was a real pleasure to work with you all. I learnt so much from my interactions with you. A special thank you to Afonso Bandeira for introducing me to Amir Ali.

I am extremely grateful to the people who wrote letters of recommendation for me, for job applications, fellowships, and otherwise: Emmanuel Abbe, Rene Carmona, Erhan Cinlar, Sanjeeb Dash, Etienne de Klerk, and Leo Liberti. I would also like to acknowledge Sanjeeb Dash, Jean-B.
Lasserre, Leo Liberti, Daniel Kuhn, Antonis Papachristodoulou, Pablo Parrilo, and Bernd Sturmfels, for inviting me to give seminars and offering help and advice when I was faced with important decisions.

I thank my mathematics teachers, M. Aiudi, M. Técourt, and M. Koen, as well as my physics teacher in the classes préparatoires, M. Cervera, for their support. Without them, I would never have developed the passion I have for the sciences, nor the rigor I acquired from them for studying them.

I would like to thank my incredible friends back home for the Skypes over the years, for visiting me, and for finding the time to see me whenever I went home: Alice, Brice, Florian, JG, Joachim, Jorge, Mathieu, Marie B., Marie G., Martin, Noémie, Paul, Sacha, Timtim, Valentine. Elise (and Sean!), thank you for supporting me during my second and third years; you are a wonderful friend. I can't wait to see you all much more now that I'm coming home!

Thank you to my fantastic ORFE officemates and teammates, as well as all the people I was lucky enough to TA with over the years: Bachir, Cagin, Cemil, Chenyi, Donghwa, Elahe, Firdevs, Han, Jeff (who will remind me to register for conferences and book tickets now?), Joane, Junwei, Kaizheng, Kevin W., Sinem, Tianqi, Thomas P., Yutong, Yuyan, Zongxi. I would also like to acknowledge the wonderful students I have had the opportunity to teach over the years, and particularly those with whom I worked on senior theses: Ellie, Mihaela, Salena, and Tin. Thank you to Kim for being a great department coordinator, and to Carol and Melissa for helping me with my receipts!

To my other friends in Princeton, you made the toughest times not only bearable but fun: Adam, Adrianna, Amanda, Alex, Carly, Chamsi, Daniel J., Genna, Hiba, Ingrid, Jessica, Kelsey, Lili, Kobkob, Marte, Matteo, Roger, Sachin, Sara, Thomas F.

Last but not least, I am lucky enough to have the most supportive family anyone could ever ask for.
Kevin, your ability to see the humorous side of every situation and to defuse my most complicated problems is what kept me sane. Thank you for everything. Luey, what more can I say other than that you went out and bought me an entire survival kit before my interview at INSEAD. Every time we see each other (#Disney) or call each other, my day becomes that much better. Tom, thank you for being Team Georgi until the very end.

Mutts and Dads, thanks for being the best parents ever. Your unconditional support and the fact that you were willing to learn about a whole new area just to keep up with me meant so much to me. Thank you.

To Mutts and Dads.

Contents

Abstract ........ iii
Acknowledgements ........ v
List of Tables ........ xiv
List of Figures ........ xv

1 Introduction ........ 1
  1.1 Outline of this thesis ........ 6
  1.2 Related publications ........ 9

I LP, SOCP, and Optimization-Free Approaches to Semidefinite and Sum of Squares Programming ........ 11

2 Optimization over Structured Subsets of Positive Semidefinite Matrices via Column Generation ........ 12
  2.1 Introduction ........ 12
  2.2 Preliminaries ........ 17
    2.2.1 DSOS and SDSOS optimization ........ 18
  2.3 Column generation for inner approximation of positive semidefinite cones ........ 21
    2.3.1 LP-based column generation ........ 22
    2.3.2 SOCP-based column generation ........ 25
  2.4 Nonconvex polynomial optimization ........ 27
    2.4.1 Experiments with a 10-variable quartic ........ 33
    2.4.2 Larger computational experiments ........ 34
  2.5 Inner approximations of copositive programs and the maximum stable set problem ........ 37
  2.6 Conclusions and future research ........ 49

3 Sum of Squares Basis Pursuit with Linear and Second Order Cone Programming ........ 52
  3.1 Introduction ........ 52
    3.1.1 Organization of this chapter ........ 56
  3.2 Preliminaries ........ 57
    3.2.1 DSOS and SDSOS optimization ........ 58
  3.3 Pursuing improved bases ........ 60
    3.3.1 Inner approximations of the psd cone ........ 61
    3.3.2 Inner approximations to the cone of nonnegative polynomials ........ 66
    3.3.3 Extreme-ray interpretation of the change of basis ........ 68
    3.3.4 Outer approximations of the psd cone