On the Use of GPUs for Massively Parallel Optimization of Low-Thrust Trajectories

ON THE USE OF GPUS FOR MASSIVELY PARALLEL OPTIMIZATION OF LOW-THRUST TRAJECTORIES

Alexander Wittig, Viktor Wase, Dario Izzo
ESA Advanced Concepts Team, ESTEC, 2200 AG Noordwijk, The Netherlands
{alexander.wittig, viktor.wase, [email protected]}

ABSTRACT

The optimization of low-thrust trajectories is a difficult task. While techniques such as Sims-Flanagan transcription give good results for short transfer arcs with at most a few revolutions, solving the low-thrust problem for orbits with large numbers of revolutions is much more difficult. Adding to the difficulty is that typically such transfers are formulated as a multi-objective optimization problem, providing a trade-off between fuel consumption and flight time.

In this work we propose to leverage the power of modern GPU processors to implement a massively parallel evolutionary optimization algorithm. Modern GPUs are capable of running thousands of computation threads in parallel, allowing for very efficient evaluation of the fitness function over a large population. A core component of this algorithm is a fast, massively parallel numerical integrator capable of propagating thousands of initial conditions in parallel on the GPU.

Several evolutionary optimization algorithms are analyzed for their suitability for large population sizes. An example of how this technique can be applied to low-thrust optimization in the targeting of the Moon is given.

Index Terms— low thrust, global optimization, GPU, evolutionary, parallel, ODE integration

1. INTRODUCTION

Techniques such as Sims-Flanagan transcription are commonly employed in the optimization of low-thrust trajectories. They are fast and give good results for short transfer arcs with at most a few revolutions. However, solving the low-thrust problem for orbits with large numbers of revolutions is much more complex. Part of this complexity is that typically such trajectory optimization problems are formulated as multi-objective problems, providing a trade-off between fuel consumption and flight time.

Traditional algorithms to solve this kind of problem have been developed considering the typical computing architecture of (potentially multicore) Central Processing Unit (CPU) machines. In these systems, a small number of computationally powerful cores with fast random memory access due to several layers of memory caches is available. With the advent of modern GPUs, a huge number of Arithmetic Logic Units (ALUs) per card (typically on the order of thousands) are made available at costs comparable to those of a single high-end multicore CPU [1]. However, the large increase in raw arithmetic computing power comes at the cost of limited performance of each core compared to traditional CPU cores, mostly due to the lack of the sophisticated hardware caches present in CPUs.

This GPU architecture is not suitable for running different, complex subprograms on each core. Instead, the hardware is optimized for what is referred to as the Single Instruction Multiple Threads (SIMT) parallel processing paradigm [2], which applies the same operation to different input data in parallel. This requires a different approach in algorithm design to maximize the impact of these new hardware capabilities in various fields of scientific computing [3].

Early attempts have been made at using GPUs in the global optimization of spacecraft docking dynamics [4]. More recently, GPU programming has found its way into many other aerospace-related research topics. GPUs have been used for the parallel computation of trajectories to enable fast Monte-Carlo simulations [5], for uncertainty propagation [6], as well as for non-linear filtering [7] and fast tree-search algorithms [8].

In this work we propose a massively parallel evolutionary optimization algorithm using the GPU for massively parallel evaluation of the fitness function over a large population. This fitness function requires the propagation of a spacecraft state over many revolutions. Thus a core component of this algorithm is a numerical integrator implementation capable of propagating thousands of initial conditions in parallel on the GPU.

We chose the standardized OpenCL package [9] for the GPU programming. The OpenCL heterogeneous computing platform provides a platform-agnostic, C-like programming language along with compilers for most current accelerator cards, such as those by NVidia, Intel and ATI. Furthermore, it provides libraries for both the host side (CPU) and the client side (GPU) to facilitate common tasks in heterogeneous programming in a hardware-independent manner. Interfacing the OpenCL code for GPU fitness function evaluation with the Python language allows for an easy-to-use interface to the otherwise somewhat cumbersome C code while maintaining the high performance required in the critical code paths.

We consider several evolutionary optimization algorithms for their suitability for large population sizes. In order to be implemented efficiently on a GPU, it is necessary for the algorithms to have a high level of concurrency and a complexity scaling well with the population size.

Lastly, we show an example of how this implementation can be applied to low-thrust optimization in the targeting of celestial bodies, such as the Moon. While we perform the propagation in a simple two-body model, due to its numerical nature the propagation can in principle also be performed in more complete models such as the circular restricted three-body problem.

The remainder of this paper is structured as follows. In Section 2 we introduce the GPU-based Runge-Kutta integrator implementation at the core of the fitness function. We then proceed to give an overview of various implementations of evolutionary optimizers in Section 3. Following this we illustrate the concept by optimizing a simple low-thrust example in Section 4. We finish with some conclusions in Section 5.

2. GPU INTEGRATOR

In principle, ODE integration is an inherently sequential process. The propagation of a single state almost never benefits from an implementation on a GPU. This is because in virtually all integration schemes a single step corresponds to the evaluation of the right-hand side followed by the calculation of a new intermediate state, which is then used to evaluate the right-hand side again.

The highly parallel execution on a GPU comes into play when propagating a set of initial conditions instead of a single state. In that case, all operations of the integrator can be performed on all initial states at once, propagating the full set of initial conditions at the same time.
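To make this data-parallel pattern concrete, the following minimal OpenCL C sketch (an illustration, not the integrator used in this work) assigns one work-item to each initial condition. For brevity it uses a fixed-step classical RK4 scheme and a two-body right-hand side, whereas the integrator described in the next subsection uses adaptive step-size control; the kernel name propagate and the flat 6-component state layout are assumptions made for this example.

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

/* Two-body right-hand side, y = (rx, ry, rz, vx, vy, vz). */
void rhs(const double mu, const double y[6], double dy[6])
{
    double r  = sqrt(y[0]*y[0] + y[1]*y[1] + y[2]*y[2]);
    double r3 = r*r*r;
    dy[0] = y[3];           dy[1] = y[4];           dy[2] = y[5];
    dy[3] = -mu*y[0]/r3;    dy[4] = -mu*y[1]/r3;    dy[5] = -mu*y[2]/r3;
}

/* One work-item propagates one initial condition: states holds n states of
   6 doubles each and is overwritten with the corresponding final states. */
__kernel void propagate(__global double* states, const double mu,
                        const double dt, const int nsteps)
{
    const int i = get_global_id(0);
    double y[6], k1[6], k2[6], k3[6], k4[6], tmp[6];

    for (int j = 0; j < 6; j++) y[j] = states[6*i + j];

    for (int s = 0; s < nsteps; s++) {
        /* classical fixed-step RK4 step */
        rhs(mu, y, k1);
        for (int j = 0; j < 6; j++) tmp[j] = y[j] + 0.5*dt*k1[j];
        rhs(mu, tmp, k2);
        for (int j = 0; j < 6; j++) tmp[j] = y[j] + 0.5*dt*k2[j];
        rhs(mu, tmp, k3);
        for (int j = 0; j < 6; j++) tmp[j] = y[j] + dt*k3[j];
        rhs(mu, tmp, k4);
        for (int j = 0; j < 6; j++)
            y[j] += dt*(k1[j] + 2.0*k2[j] + 2.0*k3[j] + k4[j])/6.0;
    }

    for (int j = 0; j < 6; j++) states[6*i + j] = y[j];
}

On the host side, such a kernel is launched with a global work size equal to the number of initial conditions, so that the full set of states advances through the same sequence of integrator operations simultaneously.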
2.1. Implementation

We chose to implement an arbitrary Runge-Kutta integration scheme with automatic stepsize control, such as the Runge-Kutta-Fehlberg 4/5 [10] or a corresponding 7/8 order method [11]. Such implementations have been described in the literature before [12, Chapters 7, 8]. However, we found previous implementations not very well adapted to the computation paradigm of GPUs, leading to poor relative performance when compared with CPU-based implementations. The implementation in [12, Chapter 8], for example, finds only a speedup of a factor of about two when compared with parallel execution on the CPU. A major reason is that branching on the GPU is very costly and should be avoided [13]. In a naive implementation of RK integration schemes with step-size control, many different conditions are constantly checked: whether the solution is within the user-specified error bounds, whether step-size limits and propagation time bounds are met, and so on. Loops, too, result in potentially costly branching instructions if the compiler cannot automatically unroll them.

We avoided these pitfalls by optimizing our code to use specific GPU commands instead. Of particular use in this context was the conditional assignment operator (called select() in OpenCL), which selects between two values based on the value of a third parameter. On GPUs this operator is implemented in the instruction set of the processor and is hence very fast. It does not require any branching and can replace many instances of if statements. Further optimizations were achieved by manually unrolling loops in the code. While the AMD OpenCL compiler has flags to force it to unroll loops automatically, we found that some loops that could be unrolled manually were not automatically unrolled by the compiler.
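The following sketch illustrates this branch-free style on a typical step-size controller; it is an assumed example rather than an excerpt from our code, and the safety factor 0.9 and exponent 0.2 are common textbook choices for a 4/5 order scheme rather than the values used in our integrator.

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

/* Branch-free step-size update. For scalar arguments, select(a, b, c)
   returns b when c is non-zero and a otherwise; it maps to a conditional
   move in hardware, so no divergent branch is generated.

   Equivalent branching code:
       if (err > tol) h = 0.5 * h;                                   // reject, retry smaller
       else           h = fmin(0.9 * h * pow(tol / err, 0.2), hmax); // accept, may grow   */
double update_step(const double h, const double err,
                   const double tol, const double hmax)
{
    long   reject = (err > tol);
    double h_grow = fmin(0.9 * h * pow(tol / fmax(err, DBL_MIN), 0.2), hmax);
    return select(h_grow, 0.5 * h, reject);
}

Because both candidate step sizes are computed unconditionally and the result is chosen with select(), all work-items execute the same instruction stream regardless of whether their individual steps are accepted or rejected.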
With these optimizations, we were able to replace all if statements in the code and all but one loop. This has been verified by checking the resulting machine code on the AMD GCN Hawaii platform (used on our AMD W8100 card) using the AMD CodeXL profiling tool [14]. The speedup due to these optimizations was impressive, on the order of a factor of 5 to 10 for simple ODEs such as the Lorenz equation.

2.2. Performance

In the following, we perform some simple comparisons between our GPU integrator and CPU-based integration for large numbers of initial conditions. The comparisons are carried out on a machine with a 4-core (8 hyperthreads) Xeon E5-1620 V3 at 3.5 GHz with 16 GB of RAM and an AMD FirePro W8100 (2560 stream processors) at 850 MHz with 4 GB of RAM. All computations are performed using double precision floating point numbers, which the latest generation of high-end GPUs now supports at high speed.

In order to compare the performance of the integrator between CPUs and GPUs, we used a feature of the OpenCL environment. By simply switching the computation platform, it is possible to run the exact same code once on the CPU and once on the GPU. In the case of the CPU, the code automatically makes use of all available cores as well as the same auto-vectorization features used to generate the GPU code. This allows for a reasonable comparison between the two platforms without having to recode the entire test case.
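The listing below is a minimal host-side sketch of this device switch, assuming the standard OpenCL C host API (CL/cl.h); the helper pick_device, the stub kernel source and the elided buffer and kernel handling are illustrative placeholders rather than the benchmark code used here.

#include <CL/cl.h>

/* Return the first device of the requested type (CL_DEVICE_TYPE_CPU or
   CL_DEVICE_TYPE_GPU) found on any available OpenCL platform. */
cl_device_id pick_device(cl_device_type type)
{
    cl_uint nplat = 0;
    cl_platform_id platforms[16];
    clGetPlatformIDs(16, platforms, &nplat);

    for (cl_uint i = 0; i < nplat && i < 16; ++i) {
        cl_device_id dev;
        if (clGetDeviceIDs(platforms[i], type, 1, &dev, NULL) == CL_SUCCESS)
            return dev;
    }
    return NULL;
}

int main(void)
{
    /* identical OpenCL C integrator source is built for both runs */
    const char* src = "__kernel void propagate(__global double* s) { /* ... */ }";
    cl_device_type types[2] = { CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU };

    for (int t = 0; t < 2; ++t) {
        cl_device_id dev = pick_device(types[t]);
        if (!dev) continue;     /* requested device type not present */

        cl_int err;
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_program prg = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prg, 1, &dev, "", NULL, NULL);
        /* ... create buffers, enqueue the propagate kernel, time the run ... */
        clReleaseProgram(prg);
        clReleaseContext(ctx);
    }
    return 0;
}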
