Springer Optimization and Its Applications
VOLUME 73
Managing Editor Panos M. Pardalos (University of Florida)
Editor–Combinatorial Optimization Ding-Zhu Du (University of Texas at Dallas)
Advisory Board J. Birge (University of Chicago), C.A. Floudas (Princeton University), F. Giannessi (University of Pisa), H.D. Sherali (Virginia Polytechnic Institute and State University), T. Terlaky (McMaster University), Y. Ye (Stanford University)
Aims and Scope
Optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics, and other sciences. The series Springer Optimization and Its Applications publishes undergraduate and graduate textbooks, monographs, and state-of-the-art expository works that focus on algorithms for solving optimization problems and also study applications involving such problems. Some of the topics covered include nonlinear optimization (convex and nonconvex), network flow problems, stochastic optimization, optimal control, discrete optimization, multiobjective programming, description of software packages, approximation techniques, and heuristic approaches.
For further volumes: http://www.springer.com/series/7393
Giorgio Fasano • János D. Pintér
Editors
Modeling and Optimization in Space Engineering

Editors
Giorgio Fasano, Thales Alenia Space SpA, Turin, Italy
János D. Pintér, Pintér Consulting Services Inc., Halifax, Nova Scotia, Canada
ISSN 1931-6828
ISBN 978-1-4614-4468-8
ISBN 978-1-4614-4469-5 (eBook)
DOI 10.1007/978-1-4614-4469-5
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2012946722
MSC 2010 codes: 49-06, 05B40, 37N05, 37N40, 65K05, 70M20, 90-08, 90BXX, 90C11, 90C26, 90C29, 90C30
© Springer Science+Business Media New York 2013
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

Preface
Since the very beginnings of modern space exploration activities, the simultaneous consideration of key mission objectives and options—including technical feasibility, mission safety, and cost-efficiency aspects—has been essential. Space engineering projects have required, inter alia, the analysis and optimization of trajectories, fuel consumption, cargo handling, and other aspects of mission logistics, with paramount consideration given to crew and environmental safety. The experimental and commercial revenues expected from space activities such as the ones associated with the International Space Station have given rise to further challenging cost–benefit as well as risk analysis issues. The ambitious goals of future interplanetary explorations including manned missions will also require advanced analysis and optimization of the systems and resources involved. While the necessary depth and quality of the decisions required in space engineering is ever increasing, we have also witnessed continuing innovation regarding both theoretical advances and ready-to-use tools for actual applications. The results of scientific innovation and algorithmic developments are supported and enhanced by today’s advanced computational modeling and optimization environments. Since the earliest space engineering applications, the solution of increasingly hard optimization problems has become necessary, giving rise to complex analytical and computational issues. Until just a few years ago, the common numerical approaches were essentially limited to handling certain continuous (linear or convex nonlinear) optimization and linearly structured combinatorial and mixed integer-continuous optimization problems. Recent advances in the area of computational global optimization allow the handling of (often) more realistic non-convex problem formulations.
Furthermore, the consideration of integer decision variables in more flexible nonlinear modeling frameworks gives rise to often even harder (again, non-convex) combinatorial and mixed integer-continuous optimization problems. The solution of such computational challenges is becoming gradually more viable as a direct result of algorithmic advances and software development. The book emphasizes these new paradigms by providing an in-depth discussion of mathematical modeling and algorithmic aspects of real-world space engineering applications.
Classic and more recent space engineering problems—including cargo accommodation and object placement, flight control of satellites, system design and trajectory optimization, interplanetary transfers with deep-space maneuvers, low-energy transfers, magnetic cleanliness modeling, propulsion system design, sensor system placement, and space traffic logistics—are discussed. Novel points of view related, e.g., to computational global optimization and optimal control and to multidisciplinary design optimization are also given proper emphasis. Particular attention is also paid to scenarios expected in the context of future interplanetary explorations. The literature dealing with advanced modeling issues and optimization problems arising in space engineering mostly consists of works that address specialists. At the same time, only a few works are available that cover a wide range of technical aspects and applications and target a broader readership. The present edited volume aims to meet some of the resulting needs by discussing, in an expository manner, current and prospective applications of advanced optimization to handle some of the major challenges in space engineering. The contributing authors are well-recognized researchers and practitioners working in modeling and optimization related (also) to space engineering. The book offers an in-depth exposition of the mathematical modeling, algorithmic, and numerical solution aspects of all topics covered. Proper emphasis is placed upon recently developed modeling paradigms and computational tools, including global as well as local optimization, multidisciplinary design, and optimal control theory. Modeling and Optimization in Space Engineering primarily addresses researchers and practitioners in the field of space engineering.
It will also be useful for aerospace graduate and postgraduate students wishing to broaden their horizons by studying real-world applications and challenging problems that they are likely to meet in their professional activities. The contributed chapters are more focused on practical aspects than on theoretical rigor: for the latter, readers are referred (as needed) to the extensive literature cited. With this in mind, researchers and practitioners in mathematical modeling, operations research, mathematical programming, optimization, and optimal control will also benefit from the case studies presented. The approaches discussed can also be extended to application areas that are not necessarily related to space. The book can also be used as a reference to assist researchers and practitioners in developing novel applications. Readers will obtain a broad overview of some of the most challenging space engineering operational scenarios of today and tomorrow. This aspect will also benefit managers in the aerospace field, as well as in other advanced industrial sectors.
Giorgio Fasano, Turin, Italy
János D. Pintér, Halifax, Nova Scotia, Canada

About the Editors
Giorgio Fasano is a researcher and practitioner at Thales Alenia Space with more than 25 years of experience in the field of optimization in support of space engineering. He holds an MSc degree in Mathematics, is a Fellow of the Institute of Mathematics and its Applications, and has been awarded the Chartered Mathematician (IMA, UK) and Chartered Scientist (Science Council, UK) designations. He is a co-editor of Operations Research in Space and Air (Ciriani et al., Eds., Kluwer Academic Publishers, 2003) and the author of several specialist publications on optimization in space. He was a finalist for the EURO 2007 Excellence in Practice Award. He currently works as technical manager of the CAST (Cargo Accommodation Support Tool) project, funded by the European Space Agency. His interests include mixed integer programming, global optimization, and optimal control.

János D. Pintér is a researcher and practitioner with four decades of experience in the area of systems modeling and optimization, including algorithm and software development. He holds MSc, PhD, and DSc degrees in Mathematics, with specializations in the operations research area. He has authored and edited books including the award-winning monograph Global Optimization in Action, and has written nearly 200 journal articles, book chapters, and other technical documents. Dr. Pintér is an active member and officer of several professional organizations (CORS, EUROPT, HORS, INFORMS), and he serves on the editorial boards of international professional journals including the Journal of Global Optimization. Dr. Pintér is the principal developer or co-developer of a range of nonlinear optimization software products. These products are used worldwide by researchers and practitioners in academia and at business and research organizations. For more information, please visit www.pinterconsulting.com.
Acknowledgements
First of all, we wish to thank all contributing authors for their high-quality research work and diligent efforts to make the timely completion and publication of this volume possible. We also express our thanks to our coauthors of joint works cited in our own contributed chapters. In addition to the review and editorial activities done by ourselves, several colleagues assisted us with valuable comments and suggestions. We wish to express our special thanks to Roberto Armellin, Pierluigi Di Lizia, Fabio Schoen, and Francesco Topputo. One of the editors (GF) also thanks Gualtiero Brambati, Mario Cardano, Piero Messidoro, and Annamaria Piras of Thales Alenia Space Italia S.p.A. for their support of the research and development activities related to modeling and optimization in a range of space engineering applications. GF also wishes to thank Jane Alice Evans, Editorial Consultant, for her commitment and support of the book review activity. We have been very glad to work with Vaishali Daimle, Senior Editor of Springer Science+Business Media for Mathematics, on this book project, from its initial discussions to its completion.
Giorgio Fasano, Turin, Italy
János D. Pintér, Halifax, Nova Scotia, Canada
Contents
1 Model Development and Optimization for Space Engineering: Concepts, Tools, Applications, and Perspectives ...... 1
Giorgio Fasano and János D. Pintér
2 Practical Direct Collocation Methods for Computational Optimal Control ...... 33
Victor M. Becerra
3 Formation Flying Control for Satellites: Anti-windup Based Approach ...... 61
Josep Boada, Christophe Prieur, Sophie Tarbouriech, Christelle Pittet, and Catherine Charbonnel
4 The ESA NLP Solver WORHP ...... 85
Christof Büskens and Dennis Wassel
5 Global Optimization Approaches for Optimal Trajectory Planning ...... 111
Andrea Cassioli, Dario Izzo, David Di Lorenzo, Marco Locatelli, and Fabio Schoen
6 Indirect Methods for the Optimization of Spacecraft Trajectories ...... 141
Guido Colasurdo and Lorenzo Casalino
7 Trajectory Optimization for Launchers and Re-entry Vehicles ...... 159
Francesco Cremaschi
8 Global Optimization of Interplanetary Transfers with Deep Space Maneuvers Using Differential Algebra ...... 187
Pierluigi Di Lizia, Roberto Armellin, Francesco Topputo, Franco Bernelli-Zazzera, and Martin Berz
9 A Mixed Integer Linear Programming Model for Traffic Logistics Management at the International Space Station ...... 215
Giorgio Fasano
10 Global Optimization Approaches to Sensor Placement: Model Versions and Illustrative Results ...... 235
Giorgio Fasano and János D. Pintér
11 Space Module On-Board Stowage Optimization by Exploiting Empty Container Volumes ...... 249
Giorgio Fasano and Maria Chiara Vola
12 Optimization Models for the Three-Dimensional Container Loading Problem with Practical Constraints ...... 271
Leonardo Junqueira, Reinaldo Morabito, Denise Sato Yamashita, and Horacio Hideki Yanasse
13 Optimal Magnetic Cleanliness Modeling of Spacecraft ...... 295
Klaus Mehlem
14 Integrated Design-Trajectory Optimization for Hybrid Rocket Motors ...... 343
Dario Pastrone and Lorenzo Casalino
15 Mathematical Models of Placement Optimisation: Two- and Three-Dimensional Problems and Applications ...... 363
Yuri Stoyan and Tatiana Romanova
16 Optimization of Low-Energy Transfers ...... 389
Francesco Topputo and Edward Belbruno

Chapter 1
Model Development and Optimization for Space Engineering: Concepts, Tools, Applications, and Perspectives
Giorgio Fasano and János D. Pintér
Abstract The theory and methodology of finding the best possible solution to a broad range of optimization problems has been of interest since the beginnings of modern operations research. The key theoretical results regarding important model types and algorithmic frameworks have been followed by optimization software implementations that are used to handle a large and still growing variety of applications. Our discussion is focused on the practice of nonlinear optimization—specifically including also global and mixed-integer optimization—in the context of space engineering applications. We review some of the prominent solution approaches, model development tools, and software implementations of optimization (solver) engines and then relate our discussion to selected applications in space engineering. The review portion of this work cites contributions by our coauthors to the present volume (Fasano and Pintér, Modeling and Optimization in Space Engineering, Springer Science+Business Media, New York, 2012) while also drawing on an extensive list of other sources.
Keywords Optimization models and methods • Continuous nonlinear (global and local) optimization • Mixed-integer linear and nonlinear models • Modeling languages and systems • Solution strategies and optimization software • LGO solver suite for nonlinear optimization • Current and prospective optimization applications in space engineering
MSC Classification (2000) 68T20, 90C11, 90C30, 90C59, 90C90, 90-02, 90-08

G. Fasano
Thales Alenia Space Italia S.p.A., Str. Antica di Collegno 253, Turin 10146, Italy
e-mail: [email protected]

J.D. Pintér (*)
Pintér Consulting Services Inc., Halifax, NS, Canada
e-mail: [email protected]
G. Fasano and J.D. Pintér (eds.), Modeling and Optimization in Space Engineering, Springer Optimization and Its Applications 73, DOI 10.1007/978-1-4614-4469-5_1, © Springer Science+Business Media New York 2013
1.1 Introduction
1.1.1 Modeling and Optimization: An Operations Research Framework
Operations research (OR) provides a scientifically established objective framework and methodology to assist analysts and decision makers in finding feasible or optimized decisions across a vast range of decision-making issues. Decision problems can frequently be modeled by constrained optimization models: we want to find the best possible decision that satisfies a given set of constraints. In the present discussion, we shall consider the continuous optimization model defined by the following ingredients:
• x: decision vector, an element of the real Euclidean n-space R^n
• l, u: explicit, finite n-vector bounds of x that define an interval (“box”) in R^n
• f(x): continuous objective function, f: R^n → R
• g(x): m-vector of continuous constraint functions, g: R^n → R^m
Applying these notations, a very general class of optimization models can be concisely formulated as follows:
min f(x)   subject to   x ∈ D := {x : l ≤ x ≤ u, g(x) ≤ 0} ⊂ R^n    (1.1)
In (1.1) all vector inequalities are interpreted component-wise: l, x, u are all n-component vectors, and the 0 in g(x) ≤ 0 denotes an m-component zero vector. The set of feasible vectors D is defined by l, u, and g. The set of the general constraints g could be empty, leading to a box-constrained model. Formally more general optimization models that also include = and ≥ constraint relations and/or explicit lower bounds on the admissible constraint function values can be simply reduced to the concise model form (1.1). This model evidently subsumes, e.g., all linear and convex nonlinear programming models, under corresponding additional specifications regarding the model functions f and g. It is also easy to demonstrate that (1.1) formally subsumes the entire class of pure combinatorial optimization problems formulated with binary decision variables (and thereby it also covers models with finitely bounded discrete variables). Consequently, all mixed integer-continuous optimization problems (formulated with both discrete and continuous decision variables) are also covered by the concise model form (1.1).
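To make the model form (1.1) concrete, here is a small sketch (our own illustration, not taken from the chapter) of one hypothetical instance—objective, constraint, and bounds are invented for demonstration—solved with a local NLP solver from SciPy:

```python
# A hypothetical instance of model (1.1): min f(x), l <= x <= u, g(x) <= 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # nonlinear objective (illustrative choice)
    return (x[0] - 1.0) ** 2 + np.sin(3.0 * x[1]) ** 2

def g(x):
    # m = 1 general constraint, stated in the form g(x) <= 0
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0])

bounds = [(-2.0, 2.0), (-2.0, 2.0)]              # the box l <= x <= u
cons = {"type": "ineq", "fun": lambda x: -g(x)}  # SciPy expects c(x) >= 0

res = minimize(f, x0=np.zeros(2), bounds=bounds, constraints=cons)
print(res.x, res.fun)  # a local solution; here x is near (1, 0) with f near 0
```

Note the sign flip in `cons`: SciPy states inequality constraints as c(x) ≥ 0, so a model written in the g(x) ≤ 0 convention of (1.1) is passed as −g(x).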
1.1.2 Nonlinear Optimization
The present discussion focuses on nonlinear models since these are essential tools in the context of space engineering applications—as clearly shown also by the contributed chapters briefly reviewed later on. Therefore we will assume that some or all of the functions f and g (the latter component-wise) are nonlinear. Additional analytical properties such as the convexity, Lipschitz continuity (implied, e.g., by smooth model structure), or continuous (first, second) differentiability of the model component functions are often verified or postulated, in order to apply certain types of numerical methods to handle the corresponding problem instance. If we can assume that D is nonempty, then the optimal solution set X* in the model (1.1) is non-empty, since this is implied by the continuity of f. This fundamental existence result does not mean that an element of X* (or all of its elements) is (are) easy to find. In fact, the majority of practically motivated constrained optimization models require numerical solution approaches—as opposed to finding their solution by purely analytical or symbolic methods. This remark is certainly valid in the context of global optimization (GO). If we have to drop the convexity assumption regarding the model functions {f, g} in (1.1), then the resulting model instance frequently will become multi-extremal. To illustrate this point by an example, one can think of solving a square system of nonlinear equations g(x) = 0, g: R^n → R^n, assuming that x ∈ D := {x : l ≤ x ≤ u} ⊂ R^n. Then it is entirely possible that this system has no solutions at all, or that it has one “true” (global) solution as well as a number of (local) “pseudosolutions” with non-negligible residual errors. (There are also many other possible scenarios, of course, but we will consider the one above to illustrate the present discussion.)
The possible multi-modality aspect of the problem implies that local scope optimization strategies may not work satisfactorily—unless started “sufficiently close” to the “true” solution. Technically, “sufficiently close” means to have access to a point in the region of attraction of the globally optimal solution point: this region is specific to the model instance and often also to the local scope optimization method used. Therefore the postulated “close guess” requirement may be elusive in the example mentioned: this explains why classical numerical methods may or may not work to handle this practically important problem type. There are many other structurally similar optimization problems with a typically massive multimodal structure. A few important examples are the fitting of a nonlinear model to a given data set [1–4], data classification [2], or the finding of optimized object configurations [5–9]. Obviously, these general problem classes can also be directly related to numerous space engineering applications; several further examples will be reviewed later on, primarily based on contributions to Ciriani et al. [10] and Fasano and Pintér [11]. Hence, in many such cases, one has to use global—as opposed to local—scope search strategies. For completeness, first we list a selection of textbooks that present in-depth discussions of local nonlinear optimization models, methods, and their applications: consult, e.g., Bazaraa et al. [12], Peressini et al. [13], Bertsekas [148], Chong and Zak [14], Edgar et al. [15], Diwekar [16], Boyd and Vandenberghe [17], Hillier and Lieberman [18], and Nocedal and Wright [19]. In contrast to local search strategies, the objective of global optimization is to find the absolutely best solution of optimization models that (could) have also local optima. In recent decades, GO has become a mature field of mathematical programming.
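The region-of-attraction issue can be made tangible with a tiny sketch (again our own invented example, not from the chapter): a local solver applied to a one-dimensional multiextremal function reaches the global minimizer only when started close enough to it, and otherwise stalls at a local “pseudosolution”:

```python
# Illustration: phi has several local minima on [-10, 10]; a local solver
# started outside the region of attraction of the global minimizer gets
# trapped. The function phi is an invented test example.
import numpy as np
from scipy.optimize import minimize

def phi(x):
    # convex quadratic plus an oscillation => multiple local minima
    return 0.05 * x[0] ** 2 + np.sin(x[0])

good = minimize(phi, x0=[0.0], bounds=[(-10.0, 10.0)])  # near the global basin
bad  = minimize(phi, x0=[5.0], bounds=[(-10.0, 10.0)])  # a different basin

print(good.x, good.fun)  # global minimum, roughly x = -1.43, phi = -0.89
print(bad.x, bad.fun)    # local minimum only, roughly x = 4.26, phi = 0.01
```

Both runs terminate successfully by the solver's own criteria; only the start point decides which stationary point is delivered—precisely the motivation for global scope search strategies.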
There are several hundred books written on the subject: consult, e.g., Horst and Pardalos [20], Horst and Tuy [21], Kearfott [22], Mockus et al. [23], Pintér [2], Floudas et al. [24], Strongin and Sergeyev [25], Pardalos and Romeijn [26], Zabinsky [27], and Liberti and Maculan [28]. For various GO applications in engineering design, consult, e.g., Grossmann [29], Papalambros and Wilde [30], and Pintér [31]. The edited volumes Ciriani et al. [10] and Fasano and Pintér [11] are specifically devoted to space engineering and related applications: we will review work from these volumes later on. For examples of in-depth GO review articles, we mention Neumaier [32] and Floudas and Gounaris [33]; the reviews by Pintér [34, 35] focus on software and test problems, including also extensive lists of applications with detailed numerical examples. Rios and Sahinidis [36] present a substantial comparative study of derivative-free nonlinear (often global) optimization methods and their software implementations. The consideration of integer decision variables, possibly in combination with continuous ones, adds significant difficulty to handling the resulting global optimization problems. Grossmann [29], Floudas [37], Nowak [38], Tawarmalani and Sahinidis [39], and Li and Sun [40] discuss mixed-integer linear and nonlinear optimization models and solution strategies; see also the recent review by Burer and Letchford [41]. The above-mentioned works advocate exact approaches to solve nonlinear optimization problems with corresponding theoretical (deterministic or stochastic) convergence properties. In many cases—due to the difficulty of the models to solve, the real or perceived lack of suitable software, and various other practical considerations—instead of exact approaches, heuristic strategies are proposed and applied. Therefore we also provide a few references related to prominent heuristic approaches, mostly to solve combinatorial—but also continuous—optimization problems.
Consult, e.g., Michalewicz [42], Osman and Kelly [43], Glover and Laguna [44], Rudolph [45], Voss et al. [46], Rothlauf [47], Jones and Pevzner [48], and Michalewicz and Vogel [49]; see also the e-book by Weise [50] that is focused on heuristic strategies in the broader GO context.
1.1.3 Optimization Modeling Systems and Solver Engines
Many of the real-world problems tackled by an OR-based approach are complicated, and the “best possible” model development and solution procedure may not be clear at the beginning of the project. This general statement is certainly valid also for many (if not all) space engineering projects. Under such circumstances, interdisciplinary teams of decision makers, domain experts, and model and algorithm developers have to work together in an “iterative” fashion. The team repeatedly has to modify the model formulation and solution procedure until the resulting system model properly captures the essence of the decision problem, it can be adequately supported by data, its computational solution is viable, and the results are deployable in the real-world setting of the decision problem. The above facts and circumstances imply the essential need for suitable modeling systems (environments, languages) and corresponding robust and efficient optimization solvers. In formulating and solving an optimization model in this context, a key aspect to consider is the choice of the software platform(s) to use. In many professional research and industrial environments, platform acceptance by the organization, compatibility with the available hardware and operating system platforms as well as with other software, ease of use, quality of software documentation, and technical support will often dominate other (perhaps more technical) aspects. Driven by practical considerations and requirements, research engineers and scientists frequently may want “only” a high-quality solution to an optimization problem that can be obtained rather quickly, as opposed to finding a rigorously guaranteed solution in days, weeks, months, or even more time. However, let us also note here that there are cases when high-fidelity solutions are absolutely necessary, almost regardless of the computational cost.
Hence, the choice of the “most suitable” tools strongly depends on specific user criteria. The modeling environments and software reviewed below serve a broad range of users, including their optimization needs. To start with the core optimization software implementations, there exists a range of compiler platform-based solvers, with more or less built-in model development functionality. Typically, the optimization engine itself is developed in C or FORTRAN, while the user interface and related features are implemented using C++, C#, Delphi, Java, Visual Basic, or perhaps some other language. Such solver implementations can be built directly into decision support systems and applications due to their customizability and often relatively small hardware + software “footprint.” At the same time, these stand-alone solver implementations often require the most expertise to use. For reviews, consult, e.g., Leyffer and Mahajan [51] regarding software for nonlinear (mostly local) optimization and Pintér [52] on software for global optimization. The core solver engines referred to above are frequently linked to optimization modeling languages (ML). An ML is a model development environment specifically tailored for the optimization of (possibly) complex, large-scale systems: consult, e.g., Kallrath [53]. MLs have a language compiler with model preprocessing, error-checking, and other model development facilities. MLs support the transparent creation of scalable model instances with corresponding data sets, assuming that the key model structure can be well represented. Thereby MLs assist the development of possibly very large models that are easier to maintain and to adapt to new scenarios as needed. MLs incorporate as additional options a collection of high-performance solvers.
These solver engines are collectively aimed at handling the range of linear and nonlinear, continuous, discrete, and mixed-integer optimization problems, often with tailored approaches to more special problem classes from the main optimization areas. Modeling environments often include a substantial library of illustrative models that can effectively assist users to develop their own models. Without going into details, we also mention the options and facilities of MLs that support solver testing and benchmarking. Examples of prominent commercial modeling systems include AIMMS (by Paragon Decision Technology [146]), AMPL (AMPL LLC [141]), the Excel Solver Platform (Frontline Systems [142]), GAMS (GAMS Development Corporation), the IBM ILOG CPLEX Optimization Studio (IBM), LINGO (LINDO Systems [144]), and MPL (Maximal Software [145]). For current information regarding these MLs, visit the websites of the modeling system developers listed in the references. There has also been very remarkable development in the optimization context of high-level integrated scientific and technical computing (ISTC) systems such as Maple [54], Mathematica [55], and MATLAB [56]. These systems incorporate substantial ready-to-use function libraries, and they support the complete model development process by offering integrated computing, programming, project documentation, and model visualization facilities. ISTCs also offer optimization features, either as built-in functions or as add-on products. The online help facilities and tutorials of ISTCs assist rapid prototyping and modeling (also) in an interdisciplinary team context. We see a significant role for ISTCs in the development and solution of advanced optimization models with modules that can represent entire subsystem models.
A typical application example is the optimal parameterization (calibration) of a system model that—for each studied parameter setting—requires the evaluation of an embedded numerical procedure, the solution of a set of differential or differential-algebraic equations, deterministic or stochastic simulation, and so on. The modeling environments briefly reviewed above will meet the needs and requirements of different types of users to various extents. The main categories include educational users (instructors and students); research scientists, engineers, consultants, and other practitioners (possibly, but not necessarily, equipped with an in-depth optimization-related background); and optimization experts, software application developers, and other “power users.” The pros and cons of the individual software products—in terms of hardware and software platform demands, ease of use, model prototyping options, detailed code development and maintenance features, optimization model checking and processing tools, availability of solver options and other auxiliary tools, program execution speed, overall level of system integration, quality of related documentation and technical support, customization options, and communication with end users—make the corresponding modeling and solver approaches more or less attractive for the user groups listed above. To illustrate the preceding discussion, in the next section we introduce the LGO solver suite and its current implementations linked to most of the modeling systems reviewed earlier. The acronym LGO abbreviates a Lipschitz(-continuous) Global Optimizer program system.
1.2 The LGO Solver Suite for Nonlinear Optimization
1.2.1 Solver Options
LGO seamlessly integrates a suite of derivative-free global and local optimization strategies. The theoretical foundations of the global search components in LGO are described by Pintér [2]. Specifically, this book presents an in-depth discussion of both adaptive deterministic partition strategies and stochastic search methods to solve GO problems under basic continuity or Lipschitz-continuity assumptions (as discussed briefly in Sect. 1.1). The theoretically exhaustive global search capability of these deterministic or stochastic strategies guarantees their global convergence (to a point of X* or—under proper analytical conditions—to all points of X*). The LGO program system has been continuously developed since the late 1980s. LGO can be used as a compiler platform-based optimization engine or as a solver option linked to various modeling environments. Several of these implementations have been described in Pintér [34, 35, 57, 58], Pintér and Kampas [6, 59], and Pintér et al. [60], as well as in platform-specific user manuals. Next, we shall briefly review the key features of the core LGO program system. For a more detailed description we refer to the current User’s Guide [61]. The current LGO implementation integrates the following solver modules:
• Regularly spaced sampling, as a global presolver (RSS)
• Branch-and-bound global search method (BB)
• Global adaptive random search (GARS)
• Multi-start-based global random search (MS)
• Constrained local search (LS) by the generalized reduced gradient method
The global search methods implemented in LGO—with the exception of the recently added RSS module—are based on the exposition by Pintér [2]. The integration of RSS with LGO is discussed in detail by Pintér [62], and Pintér and Horváth [63]. The generalized reduced gradient algorithm is described in numerous textbooks: consult, e.g., Edgar et al. [15].
Therefore—and also due to the intended overall scope of this chapter—only a brief overview of the solver options is presented here. RSS generates non-collapsing space-filling designs in the search box region [l, u] (recall (1.1)), and it produces a corresponding global solution estimate. This information is passed along to LGO’s other—global and local—solvers for refinement. RSS can be especially useful in solving hard GO problems under severe limitations regarding the number of model function evaluations. The branch-and-bound search method (BB) is an implementation of a well-established optimization approach. The BB implementation in LGO combines feasible set partition steps with both deterministic and randomized sampling. Pure random search (PRS) is a simple “folklore” approach to global optimization. The GARS option implements an adaptive stochastic search procedure which—while theoretically globally convergent—typically leads to a significant performance improvement over PRS and other passive search approaches. The multi-start (MS) global search option applies a stochastic search strategy similar to GARS; however, the total sampling effort is distributed among several global scope searches. Each of these leads to a starting point for a subsequent local search. In general, this search strategy requires the most computational effort (due to its multiple local search phases); however, in complicated models, it often finds the best numerical solution. For this reason, MS is the recommended default LGO solver option for global scope optimization. Each of the global scope solver modules is expected to generate a “sufficiently close” approximation of the global solution point(s) before LGO is switched to local search. This “paradigm switch” is dictated by practical requirements, not by convergence theory: as such, it can lead to (much) faster optimization program termination.
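LGO's actual RSS sampling scheme is not specified here; as a generic, hypothetical illustration of a non-collapsing space-filling design of the kind such presolvers use, a Latin hypercube sample can be sketched as follows (each coordinate hits every stratum exactly once, so the design does not collapse when projected onto any single variable).

```python
import random

def latin_hypercube(n_points, bounds, seed=1):
    """One family of non-collapsing space-filling designs: for each
    coordinate, one sample is drawn in each of n_points strata, and the
    strata are shuffled independently per coordinate, so no two points
    ever share a one-dimensional cell."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n_points))
        rng.shuffle(strata)
        width = (hi - lo) / n_points
        cols.append([lo + (s + rng.random()) * width for s in strata])
    # Transpose the per-coordinate columns into points.
    return [tuple(col[i] for col in cols) for i in range(n_points)]
```

Evaluating the model functions on such a design and keeping the best point yields the kind of global solution estimate that can seed the subsequent global and local search phases.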
However, theoretical global convergence is guaranteed only by using proper global search ad infinitum. If we wish to find a rigorously guaranteed close approximation of the solution by LGO’s global scope solvers, then LGO runtimes can be expected to grow at an exponential rate as a function of model size (characterized by the number of variables and model functions). For clarity, a similar comment applies to all other theoretically rigorous global search methods. Therefore the addition of a much quicker local search option is a practically important feature of LGO. In a typical optimization run, the LGO user selects one of the global (BB, GARS, MS) solver options: the corresponding global search phase is then automatically followed by the local search (LS) option. It is also possible to use the LS option in stand-alone mode, started from an automatically set (default) or a user-supplied initial solution “guess.” For algorithmic and practical reasons, an initial LS phase precedes all global search options. If RSS is invoked, then it follows this initial LS phase and is itself followed by a second LS phase before the chosen global search mode is launched. Ideally—and also in many practical cases—each of BB, GARS, and MS followed by LS will return identical answers, except perhaps for small numerical differences. However, LGO users may wish to try each of the global search options when solving complicated problems, to see which one gives the best numerical results with given (or perhaps modified) option and parameter settings. For small to moderate size problems defined by analytically given (i.e., “easily computable”) functions, the corresponding runtimes are seconds or at most minutes. Hence, in such cases, trying several solver modes and option settings is entirely feasible. The global solver options all use an aggregated merit (exact penalty) function.
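The global-to-local workflow described above can be sketched in a few lines. This is not LGO's algorithm: it is a minimal multi-start scheme, with an invented derivative-free random refinement standing in for the generalized reduced gradient local solver.

```python
import random

def local_search(f, x, bounds, rng, step=0.25, iters=300):
    """Crude derivative-free local refinement (random pattern moves);
    a stand-in for a proper local solver such as GRG."""
    fx = f(x)
    for _ in range(iters):
        cand = [min(hi, max(lo, xi + rng.uniform(-step, step)))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            step *= 0.98          # contract when no progress is made
    return x, fx

def multi_start(f, bounds, n_starts=20, seed=0):
    """Distribute the sampling effort over several global-scope starts,
    refine each locally, and keep the best local optimum found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x, fx = local_search(f, x0, bounds, rng)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

With a modest evaluation budget per start, the best of the locally refined points is usually a good approximation of a global optimum for low-dimensional smooth problems, which mirrors the rationale for MS as the recommended default.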
As one of the possible approximations of the constraints g, the objective function f(x) is augmented by penalty terms of the form ci max[gi(x), 0]^2. This leads to the corresponding (still continuous or Lipschitz-continuous) problem formulation

    min f(x) + Σi ci max[gi(x), 0]^2   subject to   x ∈ D0 := [l, u]   (1.2)
D0 replaces the constraint set D of (1.1) in the global search phase. The aggregated merit function shown in (1.2) attempts to balance the aims of reducing the original objective function and of satisfying the constraints gi(x) ≤ 0 for i = 1, ..., m. The non-negative multipliers ci associated with the constraints gi can be either directly assigned or adaptively selected, to enforce the (approximate) numerical feasibility and optimality of the solution found. (Note that instead of using the l2-norm as shown in (1.2), other lp-norms can also be used for a suitably chosen 1 ≤ p ≤ ∞.) The outlined aggregation of constraints and objective function is automatically supported by LGO. As noted above, the local search phase is based on an implementation of the generalized reduced gradient algorithm. This search strategy is aimed at “locally precise search”: therefore its usage is recommended also as the last search phase, following one of the global searches. The application of LS typically results in an improved solution, since it is aimed at attaining or maintaining feasibility and—simultaneously—at improving the objective function value. Using LGO as a local solver only can be recommended when one or several good initial solution estimates are directly available, or whenever the LGO user knows that local search suffices (as, e.g., in the case of provably convex models). Additional search strategies and solver options are also considered for future inclusion in the LGO solver suite as dictated by user demands. Due to its derivative-free optimization strategies, LGO can be primarily recommended to solve continuous GO problems in which the unavailability or tedious generation of model function derivatives excludes the possibility of applying more specifically tailored approaches. (It is worth pointing out that in most truly global scope search algorithms local derivative information plays little or no role.)
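The aggregation in (1.2) is easy to state in code. The sketch below builds the merit function from an objective, a list of constraints gi(x) ≤ 0, and the multipliers ci; the example objective and constraint at the end are invented for illustration.

```python
def merit(f, gs, c):
    """Aggregated merit function in the spirit of (1.2): f(x) plus the
    penalty terms c[i] * max(g_i(x), 0)^2 for constraints g_i(x) <= 0.
    At feasible points all penalty terms vanish and merit equals f."""
    def phi(x):
        return f(x) + sum(ci * max(gi(x), 0.0) ** 2
                          for ci, gi in zip(c, gs))
    return phi

# Invented example: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1).
phi = merit(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]], [10.0])
```

At the feasible point x = 2 the merit value equals the objective value 4; at the infeasible point x = 0 the violation of size 1 adds the penalty 10 · 1² = 10.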
Let us emphasize that LGO can handle optimization models with a “black box” structure: this category specifically includes many advanced models developed in ISTC systems as highlighted earlier.
1.2.2 LGO Program Structure
The core of LGO was originally developed for use in conjunction with FORTRAN development environments. Specifically, LF77, LF90, and LF95 by Lahey Computer Systems [64, 65] have been mostly used for development (for the current release version of LF95 visit www.lahey.com). Subsequently, LGO has been ported to many other compiler platforms. LF95 directly supports links to other programming languages under Windows operating systems (OS) such as Borland C/C++ and Delphi, Microsoft Visual C/C++, and Visual Basic. Digital, Compaq, Intel Visual FORTRAN, G77, G95, GCC, GFORTRAN, and Salford FORTRAN (FTN95) are further examples of compiler platforms that have been tested to run LGO. LGO can also be used on Mac, Linux, and UNIX platforms, since its core code is fully portable. For instance, GCC and GFORTRAN or G95 are freely available for all hardware and OS platforms mentioned above.
LGO can be used as an executable program, a static object file, or as a dynamic link library (dll). There are secondary and somewhat compiler-dependent differences among the LGO delivery versions: such details are outside the scope of the present discussion. The changes between delivery versions are straightforward, and the core LGO solver functionality is identical in all cases. For illustration, we will review the object file delivery-based LGO version that is used with a FORTRAN development environment. (In this case, all program files discussed below have a .FOR filename extension; if a C compiler is used, then the .FOR filename extensions should be replaced by .C extensions.) To establish a standard application program structure, the following files are written by the model developer, using the templates and sample programs provided with LGO.
• LGOMAIN is the main (driver) program that defines, or retrieves from an input file (to be discussed below), LGO’s static calling parameters before activating LGO. Here the adjective static refers to model descriptors and LGO solver options defined only once during a given LGO application run. LGOMAIN can also include additional user actions such as links to other program files and to external applications, to report generation, and to the further optional use of LGO results.
• LGOFCT defines the model objective f and constraint functions g that describe the dynamic components of an optimization problem. Here dynamic means that LGOFCT will be called in every iteration of LGO, to evaluate all model functions for the algorithmically generated sequence of input variable arguments x. Again, this file may include calls to other application programs such as external modules and procedures, in order to evaluate the model functions.
• LGO.IN is an optionally used LGO input parameter (text) file that stores LGO’s static calling parameters.
If LGO.IN is used, then the basic role of LGOMAIN is to read the static information and then to call LGO. The source code files LGOMAIN and LGOFCT are to be compiled and linked to the LGO object or dll file using a suitable FORTRAN or C compiler. At runtime, LGOMAIN calls LGO: the latter iteratively calls LGOFCT. LGO’s operations can be partially controlled by the static input parameter file, or by parameters defined in LGOMAIN: this setup facilitates repeated LGO runs under various model specifications and/or solver parameterizations. Of course, LGOFCT can also be changed if necessary to test or to develop different model variants. During the LGO run, three types of result files can be (optionally and automatically) generated. The first of these files, called LGO.SUM, presents a concise summary of the results obtained: the optimized decision variables and the function values at these arguments. This information is sufficient in “routine” LGO applications: this way the model developer can avoid browsing through a possibly significant amount of runtime detail. The second (optional) report file, called LGO.OUT, provides more detailed information related to the complete optimization process. Specifically, it shows which solver options have been called, how they performed (showing their corresponding final results), and which stopping criterion has led to the termination of the LGO run. Therefore its analysis can be useful, e.g., during the development of a new application or to test various LGO input parameter settings. The third (optional) report file, called LGO.LOG, records the entire sequence of the input vector arguments (x) generated, together with the resulting function values f(x) and g(x).
Using this option could lead to a huge amount of stored information: hence, it should be used carefully, mostly during model development or when tracking down some possible (model or solver) error. This option may also be useful in cases when the number of model function evaluations is very limited due to resource limitations: then all generated information can be valuable for the model developer. Both LGOMAIN and LGOFCT can be provided as compiled object or dll files, possibly written in other languages as noted above. These program modules can also be connected to additional program files or even to executable programs. In such cases, LGO can perform true “black box” optimization, assuming proper input/output communication among LGO, LGOMAIN, and LGOFCT. To summarize the above discussion, the interdependence structure of the LGO program components is shown by Fig. 1.1.
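The division of labor between LGOMAIN, LGO.IN, and LGOFCT can be mimicked in a short, purely illustrative sketch; the JSON parameter format and all names below are invented and are not LGO's actual file formats.

```python
import json
import os
import tempfile

# Illustrative analogy only: a driver reads static parameters once (the
# role of LGOMAIN reading LGO.IN), then repeatedly evaluates a separate
# model function (the role of LGOFCT).

def model_functions(x):
    """Dynamic part, called once per iterate, like LGOFCT."""
    f = (x[0] - 1.0) ** 2
    g = [x[0] - 2.0]            # invented constraint g(x) <= 0
    return f, g

def driver(param_path):
    """Static part, reading parameters once per run, like LGOMAIN."""
    with open(param_path) as fh:
        params = json.load(fh)
    lo, hi, n = params["lower"], params["upper"], params["samples"]
    best_x, best_f = None, float("inf")
    for i in range(n):          # stand-in for the solver's iterative calls
        x = [lo + (hi - lo) * i / (n - 1)]
        f, g = model_functions(x)
        if all(gi <= 0.0 for gi in g) and f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

Keeping the static inputs in a separate file makes it easy to rerun the same model under different solver parameterizations without recompiling, which is the practical point of the LGO.IN mechanism.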
[Optional input parameter file]
LGO.IN
  ↓
LGOMAIN ↔ LGO ↔ LGOFCT
  ↓
LGO.SUM  LGO.OUT  LGO.LOG
[Optionally generated result files]
Fig. 1.1 LGO application program structure
1.2.3 Connectivity to Other Modeling Environments: Current Implementations
LGO can be made available as a solver engine option in various modeling environments, typically as a dll or object file. In the simplest case, this requires only a call to LGO, with proper communication between the modeling platform and the solver. As noted earlier, optimization modeling systems are equipped with model interpreter capabilities and with a range of other features to support transparent and scalable model development. Due to certain platform-specific compatibility requirements, the linked solver options are not necessarily identical regarding all features. Without going into unnecessary details, we list below the currently existing LGO solver implementations for modeling systems, in order of release.
• LGO solver suite for nonlinear optimization (compiler platform implementations)
• GAMS/LGO
• MathOptimizer Professional for Mathematica
• TOMLAB/LGO for MATLAB
• Global Optimization Toolbox for Maple
• MPL/LGO
• AIMMS/LGO
• AMPL/LGO
• Excel/LGO
• MATLAB/LGO
All listed solver products are directly available for personal computers running currently used MS Windows operating systems. Several of these are also implemented, and all can be made available upon request, across all major hardware and operating system platforms.
1.2.4 An Illustrative Example
To illustrate the model development and solution procedure using LGO, we will consider the solution of a transcendental system of equations g(x) = 0, g: R^n → R^n, assuming that all variables take values from a given n-dimensional interval [l, u] ⊂ R^n. As noted earlier, such equation systems may have no solutions at all, a single solution, several solutions, or possibly even infinite sets of solutions. In the absence of a good initial “solution guess,” local scope search strategies (such as Newton’s method and its various extensions) may fail to find the solution(s) in this important class of problems. In order to (hopefully) avoid the scenario of multiple solutions, we can attempt to find the solution that has minimal lp-norm or—more generally—the solution closest to a given point in R^n. (As discussed earlier, 1 ≤ p ≤ ∞ is chosen by the model developer.) If the lp-norm criterion still does not guarantee the uniqueness of the solution vector, then other modeling constraints can be imposed. We will assume that such added constraints are not needed, and—as an example—we look for the minimal l2-norm solution of the model type shown below.
    min ||x||^2   subject to   gi(x) = 0, i = 1, ..., n,   x ∈ D0 := [l, u]   (1.3)
Obviously, (1.3) is a specific case of (1.1), with a convex objective function and with possibly complicated non-convex constraints. The model instance used for illustration is described below, for n = 3, x = (x1, x2, x3); the symbols sin, cos, ln, and exp below denote the sine, cosine, natural logarithm, and exponential functions:
    f(x) = ||x||^2 = x1^2 + x2^2 + x3^2

    g1(x) = sin(x1) + x2^2 + x3 sin(x1 x2) − 3x3

    g2(x) = ln(1 + (x1 − 2x2 + x3)^2) + cos(x1 x2 x3) − 1   (1.4)

    g3(x) = sin(exp(x1) + x3) + sin(x1 x2 − x3) + cos(x2 x3 + 1) − sin(1)
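Since the exact coefficients of the model instance matter less than the mechanics, the sketch below solves a smaller invented transcendental system in the spirit of (1.3): the equality constraints are folded into a penalty merit function, which is then minimized by a crude multi-start random search. Both pieces are stand-ins for LGO's actual global and local solvers, and the system g1, g2 here is not the one of (1.4).

```python
import math
import random

# An invented transcendental system g1(x) = g2(x) = 0 in two variables.
def g1(x): return x[1] - math.sin(x[0])
def g2(x): return x[0] - math.cos(x[1])

def merit(x, c=100.0):
    """||x||^2 plus penalized squared residuals of the equalities, in the
    spirit of the aggregated merit function (1.2)."""
    return x[0] ** 2 + x[1] ** 2 + c * (g1(x) ** 2 + g2(x) ** 2)

def solve(n_starts=30, iters=600, seed=0):
    """Crude multi-start random search on the merit function over the
    box [-2, 2]^2; returns the best approximate minimal-norm solution."""
    rng = random.Random(seed)
    best_x, best_m = None, float("inf")
    for _ in range(n_starts):
        x = [rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)]
        m, step = merit(x), 0.5
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            mc = merit(cand)
            if mc < m:
                x, m = cand, mc
            else:
                step *= 0.98    # contract the step when no progress is made
        if m < best_m:
            best_x, best_m = x, m
    return best_x
```

A global scope search is appropriate here for exactly the reason stated above: without a good initial guess, a purely local method started in the wrong region may fail, whereas the multi-start scheme only needs one of its starts to fall into the right basin.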