THE UNIVERSITY of QUEENSLAND Bachelor of Engineering Thesis


THE UNIVERSITY OF QUEENSLAND
Bachelor of Engineering Thesis

Development of Multiple-Phase Trajectory Planner Using Legendre-Gauss-Radau Collocation Methods

Student Name: Inderpreet METLA
Course Code: MECH4500
Supervisor: Dr Michael Kearney
Submission Date: 25 October 2018

A thesis submitted in partial fulfilment of the requirements of the Bachelor of Engineering degree in Mechanical Engineering.

UQ Engineering
Faculty of Engineering, Architecture and Information Technology

Abstract

Trajectory planning is a technique used to solve optimisation problems for constrained dynamic systems. Advances in personal computers have enabled trajectory planning techniques to become prevalent within many branches of engineering, such as the aerospace and robotics industries. However, practical problems within these industries can be complex and intricate. In particular, such problems can exhibit significant changes in the dynamics, constraints and performance measures as the system evolves over time. A technique used to handle these problems is known as "multiple-phase" trajectory planning, in which the time domain is partitioned into connected phases such that each phase can be formulated with a distinct set of parameters and equations.

At present, there are no open-source packages available for array-oriented programming languages that can solve multiple-phase trajectory planning problems, so there is scope to develop such software. This thesis presents a MATLAB implementation of a multiple-phase trajectory planning software called the Pseudospectral OPTimiser (풫풪풫풯). The software uses a variable-order Legendre-Gauss-Radau (LGR) orthogonal collocation technique to transcribe the continuous-time trajectory planning problem into a sparse nonlinear program (NLP). In this technique, solution trajectories are approximated using orthogonal polynomials, with the system dynamics and constraints collocated at the LGR quadrature points. The resulting NLP subproblem is solved using an interior-point optimisation method. A grid refinement algorithm has been implemented that determines the degree of the approximating polynomial, as well as the number of grid intervals, required to achieve a specified level of accuracy. The software efficiently calculates the first- and second-order derivatives required by the NLP solver by exploiting the well-defined Jacobian and Hessian sparsity patterns of the LGR pseudospectral method.

The performance of the software has been evaluated on five benchmark problems from the literature and compared against a variety of open-source and commercial packages. The problems range in complexity from single-phase two-point boundary-value problems to multiple-phase problems with numerous constraints and non-convex performance measures. The results show that 풫풪풫풯 has superior performance compared to many open-source counterparts in terms of computation time, optimality and solution accuracy, and demonstrates comparable performance against commercial solvers. The outcomes of this thesis show that 풫풪풫풯 is a highly competitive open-source software that can solve a broad range of optimal control problems and will be a valuable addition to the existing literature.
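As a rough illustration of the LGR transcription summarised in the abstract above, the sketch below builds the Legendre-Gauss-Radau collocation points and the associated differentiation matrix for a single interval in plain MATLAB. This is a minimal stand-alone sketch, not the 풫풪풫풯 implementation developed in the thesis; the helper name lgr_nodes_and_diff and the use of a polynomial root finder are illustrative choices only.

```matlab
function [tau, D] = lgr_nodes_and_diff(N)
% Minimal sketch: N Legendre-Gauss-Radau (LGR) points on [-1, 1) and the
% N x (N+1) differentiation matrix used in a Radau pseudospectral (RPM)
% transcription. Illustrative only; not the thesis implementation.

    % Legendre polynomials P_0..P_N in coefficient form, from the
    % three-term recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}.
    P = {1, [1 0]};
    for k = 1:N-1
        P{k+2} = ((2*k+1)*conv([1 0], P{k+1}) - [0 0 k*P{k}]) / (k+1);
    end

    % LGR points: tau = -1 together with the roots of P_{N-1} + P_N.
    q   = [0, P{N}] + P{N+1};        % pad P_{N-1} to the length of P_N
    tau = sort(real(roots(q)));      % roots are real; real() guards round-off
    tau(1) = -1;                     % -1 is a root analytically; snap it

    % Support points for the state polynomial: the N LGR points plus +1.
    ts = [tau; 1];

    % Barycentric weights for the Lagrange basis on the support points.
    w = zeros(N+1, 1);
    for k = 1:N+1
        w(k) = 1 / prod(ts(k) - ts([1:k-1, k+1:N+1]));
    end

    % Differentiation matrix on the support points; keep the first N rows
    % (derivatives evaluated at the collocation points only).
    Dfull = zeros(N+1);
    for i = 1:N+1
        for k = 1:N+1
            if i ~= k
                Dfull(i,k) = (w(k)/w(i)) / (ts(i) - ts(k));
            end
        end
        Dfull(i,i) = -sum(Dfull(i, [1:i-1, i+1:N+1]));
    end
    D = Dfull(1:N, :);
end
```

In such a transcription the dynamics xdot = f(x,u,t) are enforced as D*X = (tf - t0)/2 * F at the N collocation points, where X stacks the state values at the N+1 support points and F the dynamics evaluated at the LGR points; 풫풪풫풯 itself extends this to multiple intervals and phases and hands the resulting sparse NLP to an interior-point solver, as described in the thesis.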
Acknowledgements

First and foremost, I would like to thank my supervisor, Dr Michael Kearney, for providing me with guidance, support and direction throughout this thesis. This has been the most stimulating work I have done throughout my degree, and I would like to thank Michael for allowing me to change the direction of my thesis so I could study this topic. I would also like to thank Michael for taking an active interest in me outside of my studies and for always being willing to chat about other aspects of life.

I would like to acknowledge and thank Sholto Forbes-Spyratos for allowing me to borrow his GPOPS-II licence so I could benchmark my solver. I would also like to thank Matthew Kelly from Cornell University for providing a stranger some assistance and answering my questions.

To my friends who have supported and pushed me all throughout my degree, I thank you. I wish you all the best of luck in your next chapter. I would like to thank my family for always providing their love and support towards my decisions and goals throughout my studies. Lastly, to Emily, who has provided me love and unwavering support throughout this year, I thank you.

Table of Contents

Abstract . iii
List of Figures . ix
List of Tables . xi
1 Introduction . 1
1.1 Project Motivation and Purpose . 1
1.2 Project Goals and Intended Outcomes . 3
1.3 Project Scope . 4
1.3.1 In Scope . 4
1.3.2 Out of Scope . 4
1.4 Thesis Outline . 5
2 Literature Review . 6
2.1 Optimal Control History and Solution Approach . 6
2.2 Methods to Formulate Optimal Control Problems . 8
2.3 General Formulation of a Trajectory Optimisation Problem . 9
2.4 Survey of Numerical Methods to Solve Trajectory Optimisation Problems . 10
2.4.1 Dynamic Programming . 10
2.4.2 Differential Dynamic Programming . 10
2.4.3 Indirect and Direct Transcription Methods . 11
2.4.4 Direct Single Shooting . 12
2.4.5 Direct Multiple Shooting . 12
2.4.6 Direct Collocation and Local Orthogonal Collocation . 13
2.4.7 Pseudospectral Collocation (Global Orthogonal Collocation) . 14
2.5 Nonlinear Programming . 17
2.5.1 Formal Definitions . 17
2.5.2 Algorithms to Solve NLPs . 18
2.5.2.1 Sequential Quadratic Programming . 18
2.5.2.2 Interior Point Methods . 19
2.6 Derivative Calculation Methods . 20
2.6.1 Finite-Differencing . 21
2.6.2 Complex-Step Differentiation . 21
2.6.3 Automatic Differentiation . 22
2.7 Grid Refinement . 22
2.8 Techniques to Scale Optimal Control Problems . 25
2.9 Available Trajectory Optimisation Software . 26
2.10 Chapter Summary . 27
3 Required Components for Software and Design Decisions . 28
3.1 Overview of Planned Components . 28
3.1.1 Influence on Software Structure from Trajectory Optimisation Problem . 28
3.1.2 Choice of Numerical Transcription Method . 29
3.1.3 Choice of NLP Solver . 31
3.1.4 Choice of Derivative Calculation Methods . 32
3.1.5 Problem Scaling Approach . 33
3.1.6 Choice of Grid Refinement Approach . 33
4 Mathematical Development of Method . 34
4.1 Formulation of General Multiple-Phase Trajectory Optimisation Problem . 34
4.2 Formulation of Multiple-Interval, Multiple-Phase Radau Pseudospectral Method . 35
4.2.1 Multiple-Interval Formulation of Objective and Constraints . 35
4.2.2 State Approximation using the RPM . 39
4.2.3 Control Approximation using the RPM . 40
4.2.4 Total Nonlinear Problem Discretisation using the RPM . 40
4.3 Conversion of Radau Pseudospectral Method Formulation into NLP Algorithms . 42
4.3.1 Construction of Decision Variable Vector and its Upper and Lower Bounds . 42
4.3.2 Construction of Constraint Vector and its Upper and Lower Bounds . 43
4.4 Chapter Summary . 45
5 Extensions to Software for Tractability . 46
5.1 Grid Refinement Algorithm . 46
5.2 Exploiting Sparsity in the Radau Pseudospectral Method . 50
5.2.1 Sparsity Patterns of the RPM Constraint Jacobian and Lagrangian Hessian . 50
5.2.2 Exploiting Sparsity to Compute Derivatives . 54
6 Software Architecture . 57
7 Usage of Software and Constructing a Problem . 59
7.1 Constructing a Problem . 59
7.1.1 Creating the problem.funcs Struct . 62
7.1.2 Syntax for Dynamics, Path Objective and Path Constraint Functions . 62
7.1.3 Syntax for Boundary Objective and Boundary Constraint Functions . 63
7.1.4 Creating the problem.bounds Struct . 64
7.1.5 Creating the problem.guess Struct . 65
7.2 Details for the Output of the Software . 66
8 Results: Implementation and Performance Evaluation on Example Problems . 67
8.1 Overview of Example Problems . 67
8.2 Example 1 – Continuous Time Infinite-Horizon LQR . 68
8.3 Example 2 – Bryson-Denham . 71
8.3.1 Comparison of 풫풪풫풯 and 픾ℙ핆ℙ핊-핀핀 . 71
8.3.2 Comparison of 풫풪풫풯 and OptimTraj . 73
8.4 Example 3 – Free-Flying Robot . 74
8.5 Example 4 – Lee-Ramirez Bioreactor . 78
8.6 Example 5 – Goddard Rocket . 83
8.7 Chapter Summary . 87
9 Haul Truck Energy Management Case Study (HTEMCS) . 88
9.1 Review of Optimal Energy Management Strategies . 88
9.2 General Problem Setup for the HTEMCS . 91
9.3 Multi-Phase Trajectory Optimisation Formulation for the HTEMCS . 94
9.4 Results of the HTEMCS . 96
10 Limitations of the Software . 100
11 Conclusions and Recommendations for Future Work . 101
11.1 Conclusions and Summary of Outcomes . 101
11.2 Recommendations for Future Work . 103
References . 104
Appendix A: 퓟퓞퓟퓣 User's Guide . 114
Appendix B: 퓟퓞퓟퓣 Code for Example 1 . 126
Appendix C: 퓟퓞퓟퓣 Code for Example 2 . 128
Appendix D: 퓟퓞퓟퓣 Code for Example 3 . 130
Appendix E: 퓟퓞퓟퓣 Code for Example 4 . 132
Appendix F: 퓟퓞퓟퓣 Code for Example 5 . 134
Appendix G: 퓟퓞퓟퓣 Code for HTEMCS . 137

List of Figures

Figure 2.1: Optimal control problem direct transcription process . 7
Figure 2.2: Optimal control problem solution techniques . 8
Figure 2.3: Single shooting vs multiple shooting . 13
Figure 2.4: Comparison between h- and p-methods . 14
Figure 2.5: Location of LG, LGR and LGL points . 15
Figure 2.6: Flow diagram for the GPM's equivalent optimality conditions for direct and indirect approaches . 16
Figure 2.7: h- and p-methods of grid refinement . 23
Figure 3.1: Decision tree for choice of transcription method . 29
Figure 3.2: Example of a multi-interval, multi-phase problem using the RPM . 30
Figure 4.1: Example of a two-interval formulation for the state trajectory . 38
Figure 5.1: Single-phase constraint Jacobian sparsity pattern for the RPM . 51
Figure 5.2: Multiple-phase constraint Jacobian sparsity pattern for the RPM .
Recommended publications
  • Julia, My New Friend for Computing and Optimization? Pierre Haessig, Lilian Besson
Julia, my new friend for computing and optimization? Pierre Haessig, Lilian Besson. Master course material, France, 2018. HAL Id: cel-01830248, https://hal.archives-ouvertes.fr/cel-01830248, submitted on 4 Jul 2018. HAL is a multi-disciplinary open-access archive for the deposit and dissemination of scientific research documents, whether they are published or not, originating from teaching and research institutions in France or abroad, or from public or private research centres.

"Julia, my new friend for computing and optimization?" An introduction to the Julia programming language, for MATLAB users. Date: 14 June 2018. Presenters: Lilian Besson & Pierre Haessig (SCEE & AUT teams, IETR / CentraleSupélec campus Rennes).

Agenda for today [30 min]:
1. What is Julia? [5 min]
2. Comparison with MATLAB [5 min]
3. Two examples of problems solved with Julia [5 min]
4. Longer example on optimization with JuMP [13 min]
5. Links for more information [2 min]

1. What is Julia? An open-source and free programming language (MIT license), developed since 2012 (creators: MIT researchers), with growing popularity worldwide in research, data science, finance, etc. Multi-platform: Windows, Mac OS X, GNU/Linux...
  • Solving Mixed Integer Linear and Nonlinear Problems Using the SCIP Optimization Suite
Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), Takustraße 7, D-14195 Berlin-Dahlem, Germany. Timo Berthold, Gerald Gamrath, Ambros M. Gleixner, Stefan Heinz, Thorsten Koch, Yuji Shinano. Solving mixed integer linear and nonlinear problems using the SCIP Optimization Suite. Supported by the DFG Research Center MATHEON "Mathematics for key technologies" in Berlin. ZIB-Report 12-27 (July 2012). ZIB-Report (Print) ISSN 1438-0064, ZIB-Report (Internet) ISSN 2192-7782.

Solving mixed integer linear and nonlinear problems using the SCIP Optimization Suite. Timo Berthold, Gerald Gamrath, Ambros M. Gleixner, Stefan Heinz, Thorsten Koch, Yuji Shinano. Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany, [email protected]. July 31, 2012.

Abstract: This paper introduces the SCIP Optimization Suite and discusses the capabilities of its three components: the modeling language Zimpl, the linear programming solver SoPlex, and the constraint integer programming framework SCIP. We explain how these can be used in concert to model and solve challenging mixed integer linear and nonlinear optimization problems. SCIP is currently one of the fastest non-commercial MIP and MINLP solvers. We demonstrate the usage of Zimpl, SCIP, and SoPlex by selected examples, give an overview of available interfaces, and outline plans for future development. (A Japanese translation of this paper will be published in the Proceedings of the 24th RAMP Symposium held at Tohoku University, Miyagi, Japan, 27-28 September 2012, see http://orsj.or.)
  • Algorithms for Constrained Optimization: The Benefits of General-Purpose Software
Algorithms for Constrained Optimization: The Benefits of General-purpose Software. Michael Saunders, MS&E and ICME, Stanford University, California, USA. 3rd AI+IoT Business Conference, Shenzhen, China, April 25, 2019.

SOL: Systems Optimization Laboratory. George Dantzig, Stanford University, 1974. Inventor of the Simplex Method, father of linear programming. Large-scale optimization: algorithms, software, applications.

SOL history: 1974 Dantzig and Cottle start SOL; 1974-78 John Tomlin, LP/MIP expert; 1974-2005 Alan Manne, nonlinear economic models; 1975-76 MS, MINOS first version; 1979-87 Philip Gill, Walter Murray, MS, Margaret Wright (Gang of 4!); 1989- Gerd Infanger, stochastic optimization; 1979- Walter Murray, MS, many students; 2002- Yinyu Ye, optimization algorithms, especially interior methods. This week: UC Berkeley opened the George B. Dantzig Auditorium.

Optimization problems: minimize an objective function subject to constraints,

\[
\min_{x}\ \varphi(x) \quad \text{s.t.} \quad \ell \le \begin{pmatrix} x \\ Ax \\ c(x) \end{pmatrix} \le u,
\]

where x are the variables, A is a matrix, c(x) = (c_1(x), ..., c_m(x)) are nonlinear functions, and ℓ, u are bounds.
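For readers coming from MATLAB, this generic bound-constrained form maps directly onto fmincon. The sketch below shows one way to do the mapping; the objective, the matrix A, the nonlinear function c(x) and all bounds are made-up illustrative values, not data from the slides.

```matlab
% Minimal sketch: min phi(x) subject to l <= [x; A*x; c(x)] <= u,
% expressed through fmincon's (objective, A*x <= b, lb/ub, nonlcon) interface.
phi = @(x) (x(1) - 1)^2 + 100*(x(2) - x(1)^2)^2;   % illustrative objective
A   = [1 2];                                       % illustrative linear-constraint row
lA  = -1;  uA = 3;                                 % bounds on A*x
lb  = [-5; -5];  ub = [5; 5];                      % bounds on x

% Bounds on c(x) become two one-sided inequalities: c(x)-uc <= 0 and lc-c(x) <= 0.
c       = @(x) x(1)^2 + x(2)^2;                    % illustrative nonlinear function
lc      = 0;  uc = 4;
nonlcon = @(x) deal([c(x) - uc; lc - c(x)], []);   % inequalities, no equalities

Alin = [A; -A];  blin = [uA; -lA];                 % lA <= A*x <= uA as two A*x <= b rows
x0   = [0; 0];
opts = optimoptions('fmincon', 'Algorithm', 'interior-point', 'Display', 'iter');
xopt = fmincon(phi, x0, Alin, blin, [], [], lb, ub, nonlcon, opts);
```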
  • PROPT Product Sheet
PROPT – the world's fastest optimal control platform for MATLAB. One of a kind, lightning-fast solutions to your optimal control problems, now with well over 100 test cases.

The PROPT software package is intended to solve dynamic optimization problems. Such problems are usually described by:
• A state-space model of a system. This can be either a set of ordinary differential equations (ODE) or differential algebraic equations (DAE).
• Initial and/or final conditions (sometimes also conditions at other points).
• A cost functional, i.e. a scalar value that depends on the state trajectories and the control function.
• Sometimes, additional equations and variables that, for example, relate the initial and final conditions to each other.
The goal of PROPT is to make it possible to input such problem descriptions as simply as possible, without having to worry about the mathematics of the actual solver.

When using PROPT, optimally coded analytical first and second order derivatives, including problem sparsity patterns, are automatically generated, thereby making it the first MATLAB package able to fully utilize NLP (and QP) solvers such as KNITRO, CONOPT, SNOPT and CPLEX. PROPT currently uses Gauss- or Chebyshev-point collocation for solving optimal control problems. However, the code is written in a more general way, allowing for a DAE rather than an ODE formulation. Parameter estimation problems are also possible to solve. PROPT has three main functions:
• Computation of the constant matrices used for the differentiation and integration of the polynomials used to approximate the solution to the trajectory optimization problem.
• Source transformation to turn user-supplied expressions into optimized MATLAB code for the cost function f and constraint...
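For reference, the kind of dynamic optimization problem described by the bullet points above is commonly written in Bolza form. The statement below is a standard textbook formulation, not PROPT's own notation: Φ is the boundary (Mayer) cost, L the running (Lagrange) cost, f the dynamics, g the path constraints and b the boundary conditions.

\[
\begin{aligned}
\min_{x(\cdot),\,u(\cdot),\,t_f}\quad & \Phi\big(x(t_0),\,x(t_f),\,t_f\big) \;+\; \int_{t_0}^{t_f} L\big(x(t),u(t),t\big)\,\mathrm{d}t \\
\text{subject to}\quad & \dot{x}(t) = f\big(x(t),u(t),t\big), \\
& g_{\min} \le g\big(x(t),u(t),t\big) \le g_{\max}, \\
& b\big(x(t_0),\,x(t_f),\,t_0,\,t_f\big) = 0 .
\end{aligned}
\]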
  • A Toolchain for Solving Dynamic Optimization Problems Using Symbolic and Parallel Computing
A Toolchain for Solving Dynamic Optimization Problems Using Symbolic and Parallel Computing. Evgeny Lazutkin, Siegbert Hopfgarten, Abebe Geletu, Pu Li. Group Simulation and Optimal Processes, Institute for Automation and Systems Engineering, Technische Universität Ilmenau, P.O. Box 10 05 65, 98684 Ilmenau, Germany. {evgeny.lazutkin,siegbert.hopfgarten,abebe.geletu,pu.li}@tu-ilmenau.de

Abstract: Significant progress has been made in developing approaches to dynamic optimization. However, its practical implementation poses a difficult task and its real-time application, such as in nonlinear model predictive control (NMPC), remains challenging. A toolchain is developed in this work to relieve the implementation burden and, meanwhile, to speed up the computations for solving the dynamic optimization problem. To achieve these targets, symbolic computing is utilized for calculating the first and second order sensitivities on the one hand, and parallel computing is used for separately...

... shown in Fig. 1. Based on the current process state x(k), obtained through the state observer or measurement respectively, the optimal control problem is solved in the optimizer in each sample time. The resulting optimal control strategy in the first interval u(k) of the moving horizon is then realized through the local control system. Therefore, an essential limitation of applying NMPC is its long computation time taken to solve the NLP problem for each sample time, especially for the control of fast systems (Wang and Boyd, 2010). In general, the computation time should be much less than the sample time of the NMPC scheme (Schäfer et al., 2007). Although powerful methods are available, e.g.
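The moving-horizon idea described in this excerpt can be illustrated with a small, self-contained MATLAB sketch. The toy plant, horizon length, weights and control bounds below are invented for illustration and have nothing to do with the paper's toolchain, which generates sensitivities symbolically and parallelises the computations.

```matlab
% Minimal receding-horizon (NMPC) loop for a toy plant, using fmincon as a
% stand-in optimizer. All model data are illustrative placeholders.
Ts = 0.1; N = 10; Nsim = 60;
f    = @(x,u) [x(2); u - x(1) - 0.5*x(2)];   % toy dynamics
step = @(x,u) x + Ts*f(x,u);                 % explicit Euler discretisation
Q = diag([10 1]); R = 0.1;

x = [1; 0];  U = zeros(1, N);                % initial state, warm-started controls
opts = optimoptions('fmincon', 'Display', 'none');
for k = 1:Nsim
    % Solve the horizon problem from the current state x(k) at every sample.
    U = fmincon(@(U) cost_over_horizon(U, x, step, Q, R, N), U, ...
                [], [], [], [], -2*ones(1,N), 2*ones(1,N), [], opts);
    u = U(1);                % apply only the first control interval u(k)
    x = step(x, u);          % plant advances to x(k+1)
end

function J = cost_over_horizon(U, x, step, Q, R, N)
% Simulate the model over the horizon and accumulate a quadratic cost.
J = 0;
for i = 1:N
    x = step(x, U(i));
    J = J + x'*Q*x + R*U(i)^2;
end
end
```

As the excerpt notes, the practical constraint is that each such solve must finish well within one sample time Ts, which is what motivates the paper's symbolic and parallel speed-ups.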
  • Numerical Optimization
Numerical Optimization. Alberto Bemporad. http://cse.lab.imtlucca.it/~bemporad/teaching/numopt. Academic year 2020-2021.

Course objectives: solve complex decision problems by using numerical optimization. Application domains:
• Finance, management science, economics (portfolio optimization, business analytics, investment plans, resource allocation, logistics, ...)
• Engineering (engineering design, process optimization, embedded control, ...)
• Artificial intelligence (machine learning, data science, autonomous driving, ...)
• Myriads of other applications (transportation, smart grids, water networks, sports scheduling, health-care, oil & gas, space, ...)

What this course is about:
• How to formulate a decision problem as a numerical optimization problem? (modeling)
• Which numerical algorithm is most appropriate to solve the problem? (algorithms)
• What's the theory behind the algorithm? (theory)

Course contents:
• Optimization modeling: linear models, convex models
• Optimization theory: optimality conditions, sensitivity analysis, duality
• Optimization algorithms: basics of numerical linear algebra, convex programming, nonlinear programming

Other references:
• Stephen Boyd's "Convex Optimization" courses at Stanford: http://ee364a.stanford.edu, http://ee364b.stanford.edu
• Lieven Vandenberghe's courses at UCLA: http://www.seas.ucla.edu/~vandenbe/
• For more tutorials/books see http://plato.asu.edu/sub/tutorials.html

Optimization modeling. What is optimization? Optimization = assign values to a set of decision variables so as to optimize a certain objective function. Example: which is the best velocity to minimize fuel consumption? [Figure: fuel consumption (ℓ/km) versus velocity (km/h), for velocities from 0 to 160 km/h.]
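A toy MATLAB version of that velocity example is shown below; the quadratic fuel-consumption model and its coefficients are purely made up for illustration, not from the course slides.

```matlab
% Toy illustration: pick the velocity that minimizes fuel consumption.
% The fuel model is a made-up convex surrogate, not measured data.
fuel = @(v) 0.0005*(v - 70).^2 + 4.5;    % fuel use [l/100 km] vs velocity [km/h]

vopt = fminbnd(fuel, 10, 160);           % 1-D bounded minimization
fprintf('Best velocity: %.1f km/h, fuel: %.2f l/100 km\n', vopt, fuel(vopt));
```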
  • Derivative-Free Optimization: A Review of Algorithms and Comparison of Software Implementations
J Glob Optim (2013) 56:1247–1293. DOI 10.1007/s10898-012-9951-y. Derivative-free optimization: a review of algorithms and comparison of software implementations. Luis Miguel Rios, Nikolaos V. Sahinidis. Received: 20 December 2011 / Accepted: 23 June 2012 / Published online: 12 July 2012. © Springer Science+Business Media, LLC 2012.

Abstract: This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations.
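In MATLAB terms, a derivative-free solver is one you call with nothing but a handle returning objective values, for example the Nelder-Mead simplex method in fminsearch. The Rosenbrock test function below is just a common illustration, not one of the paper's 502 test problems.

```matlab
% Derivative-free minimization: only objective values are evaluated;
% no gradients are supplied or formed analytically.
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;   % classic test function

x0   = [-1.2, 1];
xopt = fminsearch(rosen, x0);        % Nelder-Mead simplex (derivative-free)
```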
  • TOMLAB – Unique Features for Optimization in MATLAB
TOMLAB – Unique Features for Optimization in MATLAB. Bad Honnef, Germany, October 15, 2004. Kenneth Holmström, Tomlab Optimization AB, Västerås, Sweden. [email protected]. Professor in Optimization, Department of Mathematics and Physics, Mälardalen University, Sweden. http://tomlab.biz

Outline of the talk:
• The TOMLAB Optimization Environment: background and history, technology available
• Optimization in TOMLAB
• Tests on customer-supplied large-scale optimization examples
• Customer cases, embedded solutions, consultant work
• Business perspective
• Box-bounded global non-convex optimization
• Summary

Background: MATLAB is a high-level language for mathematical calculations, distributed by MathWorks Inc. MATLAB can be extended by toolboxes that add features to the software, e.g. finance, statistics, control, and optimization. Why develop the TOMLAB Optimization Environment? A uniform approach to optimization did not exist in MATLAB, good optimization solvers were missing, large-scale optimization was non-existent in MATLAB, and other toolboxes needed robust and fast optimization. Technical advantages from the MATLAB language: fast algorithm development and modeling of applied optimization problems, many built-in functions (ODE, linear algebra, ...), fast GUI development, and interfaces to C, Fortran and Java code.

History of TOMLAB: President and founder: Professor Kenneth Holmström. The company was founded in 1986 and development started in 1989. Two toolboxes, NLPLIB and OPERA, by 1995. Integrated format for optimization in 1996. TOMLAB introduced at ISMP97 in Lausanne, 1997. TOMLAB v1.0 distributed for free until summer 1999. TOMLAB v2.0, the first commercial version, in fall 1999. TOMLAB sales from the web site starting March 2000. TOMLAB v3.0 expanded with external /SOL solvers in spring 2001. Dash Optimization Ltd's XpressMP added to TOMLAB in fall 2001. Tomlab Optimization Inc.
  • Suboptimal LQR-Based Spacecraft Full Motion Control: Theory and Experimentation
Acta Astronautica 122 (2016) 114–136. Journal homepage: www.elsevier.com/locate/actaastro

Suboptimal LQR-based spacecraft full motion control: Theory and experimentation. Leone Guarnaccia, Riccardo Bevilacqua, Stefano P. Pastorelli. a: Department of Mechanical and Aerospace Engineering, University of Florida, 308 MAE-A building, P.O. Box 116250, Gainesville, FL 32611-6250, United States. b: Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino 10129, Italy.

Article history: received 19 January 2015; received in revised form 18 November 2015; accepted 18 January 2016; available online 2 February 2016. Keywords: spacecraft; optimal control; linear quadratic regulator; six-degree-of-freedom.

Abstract: This work introduces a real-time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach. © 2016 The Authors.
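The core of that strategy, re-linearizing the dynamics about the current state and recomputing the LQR gain at every sample, can be sketched in a few lines of MATLAB. The toy pendulum model and the weights below are purely illustrative stand-ins for the paper's full 6-DOF spacecraft model; lqr() is from the Control System Toolbox.

```matlab
% Sketch of the SDARE/LQR idea on a toy nonlinear plant (not the paper's model).
Ts = 0.05; Nsim = 200;
Q  = diag([10 1]);  R = 0.5;               % illustrative weights
f  = @(x,u) [x(2); -sin(x(1)) + u];        % toy nonlinear dynamics
x  = [1.0; 0];                             % initial state
xref = [0; 0];                             % regulate to the origin

for k = 1:Nsim
    A = [0 1; -cos(x(1)) 0];  B = [0; 1];  % state-dependent linearization
    K = lqr(A, B, Q, R);                   % gain from the algebraic Riccati equation
    u = -K * (x - xref);                   % suboptimal LQR feedback
    x = x + Ts * f(x, u);                  % explicit Euler propagation
end
```

Recomputing K at every step is what makes the scheme suboptimal but cheap enough for real-time use, which is the trade-off quantified in the paper (about 15% extra control effort).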
  • An Algorithm for Bang–Bang Control of Fixed-Head Hydroplants
International Journal of Computer Mathematics, Vol. 88, No. 9, June 2011, 1949–1959. An algorithm for bang–bang control of fixed-head hydroplants. L. Bayón, J.M. Grau, M.M. Ruiz and P.M. Suárez. Department of Mathematics, University of Oviedo, Oviedo, Spain. (Received 31 August 2009; revised version received 10 March 2010; second revision received 1 June 2010; accepted 12 June 2010.)

This paper deals with the optimal control (OC) problem that arises when a hydraulic system with fixed-head hydroplants is considered. In the frame of a deregulated electricity market, the resulting Hamiltonian for such OC problems is linear in the control variable and results in an optimal singular/bang–bang control policy. To avoid difficulties associated with the computation of optimal singular/bang–bang controls, an efficient and simple optimization algorithm is proposed. The computational technique is illustrated on one example. Keywords: optimal control; singular/bang–bang problems; hydroplants. 2000 AMS Subject Classification: 49J30.

1. Introduction. The computation of optimal singular/bang–bang controls is of particular interest to researchers because of the difficulty in obtaining the optimal solution. Several engineering control problems, such as chemical reactor start-up or hydrothermal optimization problems, are known to have optimal singular/bang–bang controls. This paper deals with the OC problem that arises when addressing the new short-term problems that are faced by a generation company in a deregulated electricity market. Our model of the spot market explicitly represents the price of electricity as a known exogenous variable, and we consider a system with fixed-head hydroplants.
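To make the "Hamiltonian linear in the control" remark concrete, consider a generic control-affine problem with a minimized Hamiltonian; this is a textbook illustration, not the paper's hydroplant model. For dynamics $\dot{x} = f(x) + g(x)\,u$ with $u \in [u_{\min}, u_{\max}]$ and

\[
H = L(x) + \lambda^{\top}\big(f(x) + g(x)\,u\big),
\]

the coefficient of $u$, the switching function $\sigma(t) = \lambda^{\top} g(x)$, determines the optimal control:

\[
u^{*}(t) =
\begin{cases}
u_{\min}, & \sigma(t) > 0,\\
u_{\max}, & \sigma(t) < 0,\\
\text{singular arc}, & \sigma(t) \equiv 0 \ \text{on an interval.}
\end{cases}
\]

The difficulty the paper addresses is precisely the computation of the control on intervals where $\sigma$ vanishes identically (singular arcs) together with the bang–bang switching times.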
  • Research and Development of an Open Source System for Algebraic Modeling Languages
Vilnius University, Institute of Data Science and Digital Technologies, Lithuania. Informatics (N009). Research and Development of an Open Source System for Algebraic Modeling Languages. Vaidas Jusevičius. October 2020. Technical Report DMSTI-DS-N009-20-05. VU Institute of Data Science and Digital Technologies, Akademijos str. 4, Vilnius LT-08412, Lithuania. www.mii.lt

Abstract: In this work, we perform an extensive theoretical and experimental analysis of the characteristics of five of the most prominent algebraic modeling languages (AMPL, AIMMS, GAMS, JuMP, Pyomo) and the modeling systems supporting them. In our theoretical comparison, we evaluate how the features of the reviewed languages match the requirements for modern AMLs, while in the experimental analysis we use a purpose-built test model library to perform extensive benchmarks of the various AMLs. We then determine the best performing AMLs by comparing the time needed to create model instances for specific types of optimization problems and analyze the impact that the presolve procedures performed by various AMLs have on the actual problem-solving times. Lastly, we provide insights on which AMLs performed best and the features that we deem important in the current landscape of mathematical optimization. Keywords: algebraic modeling languages, optimization, AMPL, GAMS, JuMP, Pyomo.

Contents: 1 Introduction . 4; 2 Algebraic Modeling Languages . ...
  • Basic Implementation of Multiple-Interval Pseudospectral Methods to Solve Optimal Control Problems
Basic Implementation of Multiple-Interval Pseudospectral Methods to Solve Optimal Control Problems. Technical Report UIUC-ESDL-2015-01. Daniel R. Herber, Engineering System Design Lab, University of Illinois at Urbana-Champaign. June 4, 2015.

Abstract: A short discussion of optimal control methods is presented, including indirect, direct shooting, and direct transcription methods. Next, the basics of multiple-interval pseudospectral methods are given independent of the numerical scheme to highlight the fundamentals. The two numerical schemes discussed are the Legendre pseudospectral method with LGL nodes and the Chebyshev pseudospectral method with CGL nodes. A brief comparison between time-marching direct transcription methods and pseudospectral direct transcription is presented. The canonical Bryson-Denham state-constrained double integrator optimal control problem is used as a test optimal control problem. The results from the case study demonstrate the effect of the user's choice in mesh parameters and little difference between the two numerical pseudospectral schemes. (Ph.D. pre-candidate in Systems and Entrepreneurial Engineering, Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, [email protected].) © 2015 Daniel R. Herber.

Contents: 1 Optimal Control and Direct Transcription . 3; 2 Basics of Pseudospectral Methods . 4; 2.1 Foundation . 4; 2.2 Multiple Intervals . 7; 2.3 Legendre Pseudospectral Method with LGL Nodes . 9; 2.4 Chebyshev Pseudospectral Method with CGL Nodes . 10; 2.5 Brief Comparison to Time-Marching Direct Transcription Methods . 11; 3 Numeric Case Study . 14; 3.1 Test Problem Description . 14; 3.2 Implementation and Analysis Details . 14; 3.3 Summary of Case Study Results . ...
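As a concrete taste of the CGL scheme mentioned in the abstract, the Chebyshev-Gauss-Lobatto nodes and their differentiation matrix can be built in a few lines of MATLAB. This is the standard construction popularised by Trefethen's cheb.m, shown here as an illustrative sketch rather than the report's own code.

```matlab
function [D, x] = cheb_diff(N)
% Chebyshev-Gauss-Lobatto (CGL) nodes x and the (N+1)x(N+1) differentiation
% matrix D such that D*v approximates the derivative of the polynomial
% interpolating v at the nodes (standard construction, cf. Trefethen, 2000).
    if N == 0, D = 0; x = 1; return; end
    x  = cos(pi*(0:N)'/N);                 % CGL nodes on [-1, 1]
    c  = [2; ones(N-1,1); 2] .* (-1).^(0:N)';
    X  = repmat(x, 1, N+1);
    dX = X - X';
    D  = (c*(1./c)') ./ (dX + eye(N+1));   % off-diagonal entries
    D  = D - diag(sum(D, 2));              % diagonal entries via row sums
end
```

In a pseudospectral transcription the dynamics are then collocated through D in the same way as in the Radau (LGR) variant used by the thesis, i.e. by equating D times the sampled states to the scaled dynamics at the nodes.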