Algorithms and Concepts for Robust Optimization
Dissertation for the award of the mathematical and natural sciences doctoral degree "Doctor rerum naturalium" of the Georg-August Universität Göttingen, submitted by Marc Goerigk from Berlin. Göttingen 2012.

Referee: Prof. Dr. Anita Schöbel
Co-referee: Prof. Dr. Marco Lübbecke
Date of the oral examination: 24.9.2012

Preface

The more I occupied myself with robust optimization, the more it grew from a mathematical discipline into a pattern of thought that influenced my everyday decisions. Uncertainty is all around us – from the flight to Tromsø or the train to Saarbrücken, down to details like lactose in sandwiches (better not to try if you're not sure). Along my way I had plenty of people who helped me and generously overlooked some overconservatism stemming from restrictive robustness concepts.

First of all, I thank Anita Schöbel for endless support and her impressive ability to not sleep – at least, that's how it seems. Also, the great spirit of the optimization working group at the Institute for Numerical and Applied Mathematics made writing this thesis a joy: thanks to Marco Bender, Ruth Hübner, Jonas Ide ("your partner in science"), Mark Körner ("der große Mark"), Thorsten Krempasky ("Dr. Lightning"), Michael Schachtebeck, Robert Schieweck, Marie Schmidt ("der kleine Marc"), Daniel Scholz, Morten Tiedemann and Stephan Westphal. Also, the proof-reading of Marie, Ruth, Marco and Robert was a great help during the final phase of the work.

I thank Florian Bruns, Emilio Carrizosa, Markus Chimani, Martin Knoth, Sigrid Knust, Matthias Müller-Hannemann, Petra Mutzel, and Bernd Zey for the inspirational and challenging joint work (see Chapter 8 for an overview). During my time in New Zealand, I enjoyed the hospitality of Matthias Ehrgott, Andrea Raith and Fritz Raffensperger, who made me feel at home when I was in fact up to an ε > 0 at maximum distance. Furthermore, Carsten Damm and Max Wardetzky were the best tutor team I could have had.

Finally, I thank Christiane for her endless patience and support.

Contents

1 Introduction
  1.1 Motivation
  1.2 Uncertain Optimization
  1.3 Methodology and Outline
2 Literature Review on Robust Optimization Concepts
3 New Concepts and Relations
  3.1 RecFeas
  3.2 RecOpt
  3.3 Relations between Concepts
4 Continuous Problem Applications
  4.1 Linear Programming
  4.2 Aperiodic Timetabling
5 Discrete Problem Applications
  5.1 Robust Load Planning in Intermodal Transport
  5.2 Robust Steiner Trees
  5.3 Robust Periodic Timetabling
  5.4 Robust Timetable Information
6 ROPI: A Robust Optimization Programming Interface
  6.1 Introduction
  6.2 Current Software for Robust Optimization
  6.3 Library Features
  6.4 Example Applications
  6.5 ROPI Extensions
7 Discussion and Outlook
8 Summary of Contributions

If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties.
– Francis Bacon, The Advancement of Learning

1 Introduction

In this chapter we consider the foundations of this work: after a short motivation for the use of robust optimization, we formally introduce uncertain optimization problems and lay out the structure, methodology and purpose of this work.

1.1 Motivation

In this thesis we deal with uncertainty – and how to end in certainty. It is a topic with which everybody has collected their own experience.
Let's assume you have a lot of appointments and activities scheduled for today, and let's further assume that reading into this work is one of them – one of the more pleasurable ones, of course! As you are likely to be a mathematically inclined person, you might have put some thought into optimizing today's schedule, assuming given durations for each appointment. However, what if this work catches you so compellingly that you find the originally scheduled time horizon insufficient? Or, at the other extreme, what if you find it so annoying that you will barely read more than this very sentence? Even though the latter does not seem to be the case, you might come to the conclusion that your original schedule does not suit your needs anymore, and you will change it accordingly.

Robust optimization is the mathematical discipline that takes exactly this uncertainty in the problem parameters into account – by finding solutions that are still "good" when things happen to turn out differently. How exactly this is done depends on the robustness concept applied. Depending on your situation and character, you might prefer solutions that suit all possible parameter realizations, solutions that can be cheaply repaired, or solutions that distribute a given buffer budget over the critical times of the day.

Robust optimization is not new – in fact, its beginnings trace back to the 1970s. However, partly due to the usually increased problem size when optimizing under uncertainty, there has been a surge of literature only since the late nineties, when increased computational power and software sophistication made it possible to handle such complex models. Since then, a large pool of robustness concepts has evolved, each of them with its own advantages and disadvantages. As it is said in [Sti08], "There is no silver bullet for optimization under imperfect information.", and no concept fits all needs.

Grossly oversimplifying the current situation of research in robust optimization, there are basically two types of approaches: more theoretically driven ones, which tend to produce models that can hardly be used for real-world problems, and application-driven ones, which are often so narrowly tailored that they can barely be transferred to other applications. In this work we try to advance the field of robust optimization, in an effort to bring theory and practice one step closer together.

1.2 Uncertain Optimization

We now introduce the general framework of uncertain problems that forms the starting point for robust optimization.

Uncertainty. Nearly every optimization problem suffers from uncertainty to some degree, even if this does not seem to be the case at first sight. Two types of uncertainty can be distinguished: microscopic and macroscopic uncertainty.

Microscopic uncertainty includes:

1. Numerical errors. Storing any number on a computer system is only possible up to a certain exactness, resulting in so-called floating-point errors that may propagate. Although exact mixed-integer programming solvers do exist (e.g., [CKSW11]), they are typically several orders of magnitude slower than competitive floating-point optimization software, and therefore not the industrial standard. In a representative experiment using NetLib instances, [BTN00] report that optimal solutions to 13 of the 90 considered instances violate constraints by more than 50% when "ugly" coefficients are perturbed by only 0.01%.
As these coefficients are most likely not exact, but only truncated values, this is a warning that "small" inaccuracies can create "very bad" solutions.

2. Measurement errors. Whenever mathematical models are applied to real-world problems, they need to be supplied with data that was measured in some way. These measurements may be intrinsically inexact, e.g., when the volume or the weight of a body is determined, or only statistically representative, e.g., when stemming from a survey. Even though these values may seem "reasonably exact", we see from the previous paragraph that their impact on the usefulness of a solution can be large.

Macroscopic uncertainty includes:

3. Forecast errors. Knowledge about the future is seldom exact – as the most prominent example of forecasting uncertainty, weather conditions are well known to have foiled many a schedule. They influence flight routes, driving speed, harvest quality, and many more aspects of everyday life. Other examples are demographic developments, or share prices.

4. Changing environments due to long-term solutions. When a problem solution is put into practice in a long-term setting, the environment naturally changes over the course of time. Timetables in public transport are one example, in which the same schedule might need to serve for rain, snow, wind, low and high passenger demand, etc. Even though there might not be an error involved when these conditions are identified beforehand, still one solution must be able to suit very different scenarios.

In robust optimization – contrary to the setting of stochastic programming – it is typically not assumed that any probability distribution for the uncertainty is known. In many cases, however, there exists a so-called nominal scenario. Depending on the uncertainty type, this may be the coefficient at the given precision for numerical errors (1), the measured value for measurement errors (2), the most likely forecast for forecast errors (3), or an average environment for long-term solutions (4).

Uncertain optimization problems. We consider optimization problems that can be written in the form

(P)    min  f(x)
       s.t. F(x) ≤ 0
            x ∈ X,

where F: R^n → R^m describes the m problem constraints, f: R^n → R is the objective function, and X ⊆ R^n is the variable space. In real-world applications, both the constraints and the objective may depend on parameters which are uncertain. In order to accommodate such uncertainties, instead of (P), the following parameterized family of problems is considered:

(P(ξ))  min  f(x, ξ)
        s.t. F(x, ξ) ≤ 0
             x ∈ X,

where F(·, ξ): R^n → R^m and f(·, ξ): R^n → R for any fixed ξ ∈ R^M, which describes a scenario that may occur.

Although in practice it is often not known exactly which values such a scenario ξ may take for an optimization problem P(ξ), we assume it is known that ξ lies within a given uncertainty set U ⊆ R^M that represents the scenarios we assume to be likely enough to be considered in our analysis.
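To make the family P(ξ) and the uncertainty set U concrete, the following minimal sketch (not part of the thesis; the toy instance, the scenario names and the finite set U = {nominal, perturbed} are invented here for illustration, and SciPy's linprog is used merely as a convenient LP solver) solves one member P(ξ) for the nominal scenario and then checks how that solution behaves when the constraint coefficients are perturbed by about 1%.

# Illustrative sketch: an uncertain LP
#   P(xi): min c(xi)^T x  s.t.  A(xi) x <= b(xi),  x >= 0,
# with a finite uncertainty set U = {nominal, perturbed}.
# Instance data is hypothetical and chosen only for this example.
import numpy as np
from scipy.optimize import linprog

scenarios = {
    "nominal":   {"c": np.array([-1.0, -1.0]),
                  "A": np.array([[1.0, 2.0], [3.0, 1.0]]),
                  "b": np.array([4.0, 6.0])},
    "perturbed": {"c": np.array([-1.0, -1.0]),
                  "A": np.array([[1.0, 2.02], [3.03, 1.0]]),  # ~1% coefficient error
                  "b": np.array([4.0, 6.0])},
}

def solve(scenario):
    """Solve P(xi) for one fixed scenario xi."""
    res = linprog(scenario["c"], A_ub=scenario["A"], b_ub=scenario["b"],
                  bounds=[(0, None), (0, None)], method="highs")
    return res.x

x_nominal = solve(scenarios["nominal"])

# Check feasibility of the nominal optimum in every scenario of U
# (a positive value means the solution violates a constraint).
for name, sc in scenarios.items():
    violation = np.max(sc["A"] @ x_nominal - sc["b"])
    print(f"{name:9s}: max constraint violation of nominal solution = {violation:.4f}")

On this toy instance the nominal optimum violates the perturbed constraints by a small but strictly positive amount, mirroring on a miniature scale the NetLib observation above; a robust approach would instead look for an x that remains feasible (and good) for every ξ ∈ U.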