Global Dynamic Optimization

by

Adam Benjamin Singer

Submitted to the Department of Chemical Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Chemical Engineering at the Massachusetts Institute of Technology, September 2004.

© Massachusetts Institute of Technology 2004. All rights reserved.

Author: Department of Chemical Engineering, June 10, 2004
Certified by: Paul I. Barton, Associate Professor of Chemical Engineering, Thesis Supervisor
Accepted by: Daniel Blankschtein, Professor of Chemical Engineering, Chairman, Committee for Graduate Students

Abstract

My thesis focuses on global optimization of nonconvex integral objective functions subject to parameter-dependent ordinary differential equations. In particular, efficient, deterministic algorithms are developed for solving problems with both linear and nonlinear dynamics embedded. The techniques utilized for each problem classification are unified by an underlying composition principle that transfers the nonconvexity of the embedded dynamics into the integral objective function. This composition, in conjunction with control parameterization, effectively transforms the problem into a finite-dimensional optimization problem whose objective function is given implicitly via the solution of a dynamic system. A standard branch-and-bound algorithm is employed to converge to the global solution by systematically eliminating portions of the feasible space, solving an upper bounding problem and a convex lower bounding problem at each node.
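As a purely illustrative aside (this is not the algorithm developed in the thesis), the branch-and-bound scheme described above can be sketched in a hypothetical one-dimensional setting. Here a simple Lipschitz-based bound stands in for the convex lower bounding problems, and function evaluations at node midpoints supply the upper bounds; all names and the example function are invented for illustration:

```python
def branch_and_bound(f, lower_bound, a, b, tol=1e-6):
    """Interval branch-and-bound: fathom any node whose lower bound
    cannot improve on the incumbent upper bound by more than tol."""
    mid = (a + b) / 2
    best_x, best_val = mid, f(mid)      # incumbent (upper bound)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        if lower_bound(lo, hi) >= best_val - tol:
            continue                    # fathom this node
        m = (lo + hi) / 2
        val = f(m)                      # feasible point -> upper bound
        if val < best_val:
            best_x, best_val = m, val
        stack += [(lo, m), (m, hi)]     # branch by bisection
    return best_x, best_val

# Hypothetical example: f(x) = (x^2 - 1)^2 on [-2, 2].  |f'(x)| <= 24
# there, so f(m) - 12*(hi - lo) is a valid lower bound on [lo, hi].
f = lambda x: (x**2 - 1) ** 2
lb = lambda lo, hi: f((lo + hi) / 2) - 12 * (hi - lo)
x_star, f_star = branch_and_bound(f, lb, -2.0, 2.0)
```

On termination the incumbent is guaranteed to lie within `tol` of the global minimum, which is the convergence property the thesis establishes for its (much sharper) convex relaxations.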
The novel contributions of this work lie in the derivation and solution of these convex lower bounding relaxations. Separate algorithms exist for deriving convex relaxations for problems with linear dynamic systems embedded and for problems with nonlinear dynamic systems embedded. However, the two techniques are unified by the method for relaxing the integral in the objective function. I show that integrating a pointwise-in-time convex relaxation of the original integrand yields a convex underestimator for the integral. Separate composition techniques, however, are required to derive relaxations for the integrand depending upon the nature of the embedded dynamics; each case is addressed separately. For problems with embedded linear dynamic systems, the nonconvex integrand is relaxed pointwise in time on a set composed of the Cartesian product of the parameter bounds and the state bounds. Furthermore, I show that the solution of the differential equations is affine in the parameters. Because the feasible set is convex pointwise in time, the standard result that a convex function composed with an affine function remains convex yields the desired result that the integrand is convex under composition. Additionally, methods are developed using interval arithmetic to derive the exact state bounds for the solution of a linear dynamic system. Given a nonzero tolerance, the method is rigorously shown to converge to the global solution in finite time. An implementation is developed, and via a collection of case studies, the technique is shown to be very efficient in computing global solutions. For problems with embedded nonlinear dynamic systems, the analysis requires a more sophisticated composition technique attributed to McCormick.
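The integral relaxation result above can be illustrated with a small, hypothetical example. For the bilinear integrand p1*p2*exp(-t), the standard McCormick convex envelope of the bilinear term p1*p2, applied pointwise in t and integrated numerically, underestimates the true integral everywhere on the parameter box (the box, the integrand, and all function names are invented for illustration):

```python
import math

def mccormick_bilinear_lb(p1, p2, b1, b2):
    """Convex underestimator (McCormick envelope) of p1*p2 on the box
    b1 = (p1L, p1U), b2 = (p2L, p2U): the max of two affine supports."""
    p1L, p1U = b1
    p2L, p2U = b2
    return max(p1L * p2 + p1 * p2L - p1L * p2L,
               p1U * p2 + p1 * p2U - p1U * p2U)

def trapz(g, t0, t1, n=200):
    """Composite trapezoidal rule for the integral of g on [t0, t1]."""
    h = (t1 - t0) / n
    s = 0.5 * (g(t0) + g(t1)) + sum(g(t0 + i * h) for i in range(1, n))
    return h * s

b1, b2 = (-1.0, 2.0), (0.5, 3.0)   # hypothetical parameter bounds

def true_obj(p1, p2):
    # Nonconvex integral objective: integral of p1*p2*exp(-t) over [0, 1].
    return trapz(lambda t: p1 * p2 * math.exp(-t), 0.0, 1.0)

def relaxed_obj(p1, p2):
    # Integral of the pointwise convex relaxation; exp(-t) > 0, so
    # multiplying the underestimator by it preserves both the
    # underestimation and (as a nonnegative weight) the convexity in p.
    return trapz(lambda t: mccormick_bilinear_lb(p1, p2, b1, b2)
                 * math.exp(-t), 0.0, 1.0)
```

Because the relaxed integrand never exceeds the true integrand at any quadrature node, the integrated relaxation is a valid convex underestimator, and it is tight at the corners of the box where the McCormick envelope coincides with p1*p2.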
McCormick’s composition technique provides a method for computing a convex underestimator for the integrand given an arbitrary nonlinear dynamic system, provided that convex underestimators and concave overestimators can be given for the states. Because the states are known only implicitly via the solution of the nonlinear differential equations, deriving these convex underestimators and concave overestimators is a highly nontrivial task. Based on standard optimization results, outer approximation, the affine solution to linear dynamic systems, and differential inequalities, I present a novel method for constructing convex underestimators and concave overestimators for arbitrary nonlinear dynamic systems. Additionally, a method is derived to compute state bounds for nonquasimonotone ordinary differential equations. Given a nonzero tolerance, the relaxation method is proven to yield finite convergence to the global solution within a branch-and-bound framework. A detailed implementation for solving problems with nonlinear dynamic systems embedded is described. The implementation includes a compiler that automatically applies the theory and generates a Fortran residual file defining the upper and lower bounding problems. This residual file is utilized in conjunction with a discontinuity-locking numerical differential equation solver, a local optimizer, and a custom branch-and-bound code to globally solve dynamic optimization problems with embedded nonlinear ordinary differential equations. To enable comparison of the efficiency of the algorithm, several literature case studies are examined. A detailed analysis of a chemical engineering case study is performed to illustrate the utility of the algorithm for solving realistic problems.

Thesis Supervisor: Paul I. Barton
Title: Associate Professor of Chemical Engineering

Acknowledgments

I have always posited that the primary advantage of studying at MIT is neither the faculty nor the resources.
No, I believe the primary advantage of studying at MIT is the high quality of one’s coworkers. Not only did I learn a great deal from countless conversations in the office, but it has also been my pleasure to work with the very talented students, postdocs, and research associates who have surrounded me for the past four and a half years. I am grateful that I have been afforded this small space to thank some people individually. I’d like to start by thanking Dr. John Tolsma for software support and, in particular, for support with the tools required for the implementation of the linear theory. I know I have teased John mercilessly about his software, but I hope he realizes that I have a great deal of respect for the work he has performed. While discussing software, I’d like to thank Jerry Clabaugh, who was always available to answer general computer questions, especially those concerning the C programming language. I’d like to thank Dr. Edward Gatzke for some of the initial work he performed and code he shared concerning his branch-and-bound implementation. I’d like to thank Dr. David Collins, with whom I bounced around many ideas on many occasions. Sometimes I feel David and I spent more time together fixing lab computers than doing research. I’d like to thank Dr. Binita Bhattacharjee for her assistance when we TAed 10.551 together; I think performing that duty together likely made the experience less painful for both of us. I’d like to thank Cha Kun Lee for daily discussions ranging anywhere from work-related matters to the NBA playoffs, and just about everything in between. I’d like to thank Alexander Mitsos for finally discovering the correct avenue to pursue to convince the department to buy us modern office chairs to replace the office chairs inherited from the Pilgrims. I’d like to thank Dr. Benoît Chachuat for our many conversations concerning both the nonlinear theory and its implementation.
I’d like to thank James Taylor for experimental data and assistance with the problems in Chapter 9. Finally, I’d like to thank all the other members of our research group I didn’t mention by name, and the research group as a whole, simply for putting up with me for the last four and a half years.

The above list of people represents the individuals I wish to thank mostly for their professional assistance. However, completing a thesis involves much more than merely academics. First and foremost, I would like to thank my parents, Marc and Ellen, who have always been present to guide and advise me both personally and professionally. Without them, this thesis would not have existed, for on many occasions, their foresight into the benefits of this degree and their constant encouragement alone kept me from quitting out of annoyance. Next, I’d like to thank my friends in the student office, Suzanne Easterly, Jennifer Shedd, Annie Fowler, and Mary Keith. I think it was beyond their job description to listen to my incessant complaining. Last, but not least, I’d like to thank Sharron McKinney, who taught me a great deal about both life and love. At a time in my life full of questions, she helped me find answers within myself. I am more appreciative of the lessons I learned from her than of any I have ever learned in the hallowed halls of this institute.

Contents

1 Introduction
1.1 Motivation and Literature Review
1.2 Structural Outline of the Thesis
2 Problem Statement and Solution Strategy
2.1 Problem Statement and Existence of a Minimum
2.2 Solution Strategy
3 Relaxation of an Integral and the Affine Solution of a Linear System
3.1 Convex Relaxations for an Integral
3.2 The Affine Solution of Linear Systems
4 Relaxation Theory for Problems with Linear Dynamics Embedded
4.1 Affine Composition with a Convex Function
4.2 Computing State Bounds for Linear Dynamic Systems
4.3 Linear Dynamic Relaxations and Branch-and-Bound Convergence
5 Implementation for Problems with Linear Dynamics Embedded
5.1 The Three Subproblems: Upper Bound, Lower Bound, and State Bounds
5.1.1 Computing an Upper Bound
5.1.2 Computing State Bounds
5.1.3 Lower Bounding Problem
5.1.4 Intersection of State Bounds and Intersection with State Bounds
5.2 Dynamic Extensions to Standard Convex Relaxation Techniques
5.2.1 Use of McCormick’s Underestimators
5.2.2 Use of αBB Underestimators
5.2.3 Convex Relaxation of Bilinear Terms
5.3 Case Studies
5.3.1 Small Numerical Example
5.3.2 Dynamic Extension to McCormick’s Example Problem