
IT 18 010
Degree project, 30 credits (Examensarbete 30 hp)
August 2018

High-Thrust Interplanetary Spacecraft Trajectory Optimization Using Cuda

Viktor Wase

Department of Information Technology, Uppsala University

Abstract

Global optimization applied to the design of high-thrust interplanetary space trajectories is a computationally expensive task. This thesis investigates the feasibility of combining the Multiple Gravity Assist model of spacecraft trajectory with periodic Lyapunov orbits. The trajectories are designed by optimization, enabled by the computational power of a high-end graphics processing unit.

Supervisor: Nils Höglund
Subject reviewer: Stefanos Kaxiras
Examiner: Jarmo Rantakokko
Printed by: Reprocentralen ITC

Contents

1 Introduction
  1.1 Paper Overview
  1.2 Paper Structure
2 Background
  2.1 Generative Design
  2.2 Parallelization
    2.2.1 SIMD
    2.2.2 CPU vs GPU
    2.2.3 CUDA
3 Models of Celestial Dynamics
  3.1 2-Body Problem
    3.1.1 Lambert's Problem
    3.1.2 Multiple Gravity Assist Model
    3.1.3 Radius of Planets
  3.2 Restricted 3-Body Problem
    3.2.1 Planar Circular Restricted 3-Body Problem
    3.2.2 Lagrange Points
    3.2.3 Periodic Lyapunov Orbits
    3.2.4 Generation of Periodic Lyapunov Orbits
4 Purpose and Scope
5 Algorithms
  5.1 Integrators
    5.1.1 Adaptive Time-Step Runge-Kutta
  5.2 Optimizers
    5.2.1 Particle Swarm Optimization
    5.2.2 Self-Adaptive Differential Evolution
    5.2.3 Ant Colony Optimization
  5.3 Root Finders
    5.3.1 Newton-Raphson
    5.3.2 Halley's Method
    5.3.3 Ad Hoc Root Finder
6 First Attempt
7 Details of Implementations
  7.1 Planetary Positions
  7.2 Discrete-Continuous Optimization
  7.3 Objective Function
  7.4 Implementation of Periodic Lyapunov Orbit Generation
    7.4.1 Theory of Optimization
    7.4.2 Troubleshooting Lyapunov Orbits
    7.4.3 Implementation
  7.5 Approximations and Reliability of Results
8 Experiments
  8.1 Hardware Setup
  8.2 Compiler and Flags
  8.3 Root Finders for Calculation of Closest Approach
  8.4 Periodic Lyapunov Orbits
  8.5 Massively Parallel Multiple Gravity Assists
    8.5.1 Continuous Optimization Only
    8.5.2 Discrete/Continuous Optimization
  8.6 Conical Arcs and Lyapunov Orbits Combined
9 Results
  9.1 Root Finders for Calculation of Closest Approach
  9.2 Periodic Lyapunov Orbits
  9.3 Massively Parallel Multiple Gravity Assists
    9.3.1 Continuous Optimization
    9.3.2 Discrete and Continuous Optimization
  9.4 Conical Arcs and Lyapunov Orbits Combined
10 Analysis
  10.1 Root Finders
  10.2 Periodic Lyapunov Orbits
  10.3 Massively Parallel Multiple Gravity Assists
    10.3.1 Continuous Optimization
    10.3.2 Discrete and Continuous Optimization
  10.4 Conical Arcs and Lyapunov Orbits Combined
  10.5 Conclusions
11 Further Research

1 Introduction

Humans have been sending space probes into the far reaches of space since the early 1960s, with programs such as the USA's Pioneer Program and the Soviet Union's Sputnik and Venera Programs. Perhaps most notable, however, is the Voyager Program from the late 1970s, in which two robotic probes were sent on a trajectory that visited most of the outer planets. As of this writing, Voyager 1 remains the only human-made object to have left our solar system [22], although its twin is expected to enter interstellar space soon as well.

While the physical reach of humans has yet to extend beyond the Earth's moon, our robotic probes have gone to a wide variety of celestial objects, each with a different set of challenges and each providing a different set of scientific insights: from the complex organic compounds found by Rosetta [5], to the hexagonal storm on Saturn's north pole [11], to the fact that Pluto is still geologically active [36]. None of these discoveries, or the theories they inspire, would have been possible without the work of the countless men and women designing the complex trajectories of the space probes.

1.1 Paper Overview

This study describes the software C.A.S.S.A.N.D.R.A, an acronym for Chaos Assisted Sling Shots And Non-linear Dynamical gRavity Assists. The goal of Cassandra is to automatically generate trajectories that allow laymen (or at least non-experts in spacecraft trajectory design) to test the feasibility of high-thrust, low-fuel space missions. This is achieved using the power of massively parallel graphics cards. A high-thrust trajectory simply refers to a path through space that can be achieved using a finite number of short (on the timescale of seconds) engine burns.

1.2 Paper Structure

This paper has the following structure: Section 2 gives a background on design by optimization, also known as Generative Design, as well as a primer on parallelization. Section 3 describes different mathematical models of celestial motion and their advantages and drawbacks. This is of vital importance, since part of the novelty of this study arises from the combination of several models. Section 4 discusses which effects are deemed to be out of scope. Section 5 describes each of the algorithms used. Section 6 describes a failed first attempt. Section 7 describes how the algorithms are combined, as well as details of the implementation. Sections 8, 9 and 10 describe the experiments, their results and the conclusions that can be drawn, respectively. Section 11 discusses possible further research.

2 Background

2.1 Generative Design

The use of evolutionary algorithms in space mission design is common practice nowadays [40, 28, 15, 20]. The approach is a new one: it was not even mentioned in Betts' classical survey from '98 [4], which instead focused on Nonlinear Programming and Optimal Control. The rough idea is that the user defines some quantity (such as time of flight, fuel usage, or number of celestial bodies visited) and the Generative Design software designs a trajectory that optimizes this quantity while still making sure that the trajectory stays within the parameters of the mission.

There are many advantages to using the approach of Generative Design, but perhaps the most convincing one is that of simplicity; instead of having a team of experts develop the trajectory, anyone can let the software generate an approximate trajectory and inspect the required mission parameters to determine whether the mission is possible.
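To make the "define a quantity, let the optimizer search" idea concrete, the following minimal sketch shows one possible way such an objective could be phrased in CUDA-style C++. It is not Cassandra's actual code: the struct, the field names, the penalty weight and the numbers in main are all hypothetical, chosen only to illustrate minimizing one quantity (total delta-v) while penalizing a violated mission constraint (maximum time of flight).

```cuda
// Hypothetical sketch of a generative-design objective (not Cassandra's code).
// The quantity to minimize is total delta-v; a mission constraint (maximum
// time of flight) is enforced with a penalty term.
#include <cstdio>

struct TrajectoryCandidate {
    double total_delta_v;   // km/s, summed over all engine burns
    double time_of_flight;  // days
};

// __host__ __device__ so the same objective can be evaluated on CPU or GPU.
__host__ __device__ double objective(const TrajectoryCandidate& t,
                                     double max_tof_days) {
    double penalty = 0.0;
    if (t.time_of_flight > max_tof_days) {
        // A large penalty pushes infeasible candidates away from the optimum.
        penalty = 1.0e3 * (t.time_of_flight - max_tof_days);
    }
    return t.total_delta_v + penalty;
}

int main() {
    TrajectoryCandidate candidate{7.2, 2100.0};  // made-up example values
    std::printf("objective = %f\n", objective(candidate, 2500.0));
    return 0;
}
```

An optimizer of the kind described in Section 5 would then search over candidate trajectories for the one with the smallest objective value.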
A contrast to this approach is the Voyager program, where the engineers had to plot tens of thousands of pork chop plots and use these to decide which trajectories were viable [26].

Another advantage is that computers can come up with designs that humans are unlikely to ever think of, such as the partition door Autodesk developed for Airbus using generative design [3], which weighed half as much as the one developed by humans. Another example is the antenna designed by an evolutionary algorithm at NASA [13]. Its shape was highly unintuitive, but it outperformed the antennas designed by humans.

2.2 Parallelization

Parallel programming is the practice of running software on multiple computational units at the same time [1]. This is, however, not as simple as it might sound. In order to turn serial code, running on only one core, into parallel code, one has to be able to divide the problem into smaller sub-problems that can be distributed over the available cores. This is not always possible. For example, if the input data of one sub-problem depends on the output data of another sub-problem, then these two must be executed sequentially and are therefore not parallelizable.

Another difficulty of parallelization is load balancing of tasks [39]. It is rather common for sub-problems to vary in difficulty and execution time, which means that the sub-problems should be distributed over the available cores in such a way that no core is idle. This is often impossible in practice, but finding a good load balance reduces running time and is therefore important nonetheless.

However, even if the load balance is perfect, the speedup achieved will not be perfectly linear; it is restricted by Amdahl's law [2]. Speedup is defined as the running time of the serial code divided by the running time of the parallel code, and it is usually given as a function of the number of cores. Often a small part of the problem is sequential and cannot be parallelized. Denote the fraction of the running time spent in this part by $r_s$. The fraction that is parallelizable is then $1 - r_s$, and the speedup is restricted by

$$s(n) \leq \frac{1}{r_s + (1 - r_s)/n},$$

where $n$ is the number of cores. Taking the limit gives $s \to 1/r_s$ as $n \to \infty$, thus placing a theoretical ceiling of $r_s^{-1}$ on the speedup. For example, if 5% of the running time is sequential ($r_s = 0.05$), the speedup can never exceed 20, no matter how many cores are used.

2.2.1 SIMD

There are different types of parallel programming. The one used in this thesis is Single Instruction Multiple Data (SIMD), also known as Single Program Multiple Data (SPMD). This means that even though the cores run in parallel, they have to execute the same operations at the same time [30].
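The sketch below is a minimal illustration of this execution model in CUDA, not code taken from the thesis: every thread runs the same kernel, each on its own element of an array. The kernel name, array names and sizes are all invented for the example.

```cuda
// Minimal SIMD/SPMD illustration (not Cassandra's code): every CUDA thread
// executes the same instructions, each on a different data element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(const double* in, double* out, double factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
    if (i < n) {
        out[i] = factor * in[i];  // same operation, different data
    }
}

int main() {
    const int n = 1 << 10;
    double *in, *out;
    // Unified memory keeps the host-side bookkeeping short in this sketch.
    cudaMallocManaged(&in, n * sizeof(double));
    cudaMallocManaged(&out, n * sizeof(double));
    for (int i = 0; i < n; ++i) in[i] = static_cast<double>(i);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(in, out, 2.0, n);
    cudaDeviceSynchronize();

    std::printf("out[10] = %f\n", out[10]);  // expected: 20.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The restriction that all threads execute the same instruction stream is what makes GPUs so efficient for problems that can be expressed this way, and it shapes how the optimization workload is laid out in the later sections.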