Julia Programming Language Benchmark Using a Flight Simulation

Ray Sells
DESE Research, Inc. / Jacobs Space Exploration Group
Guidance, Navigation, and Mission Analysis, NASA-MSFC EV 42
[email protected]

Abstract—Julia's goal to provide scripting-language ease of coding with compiled-language speed is explored. The runtime speed of the relatively new Julia programming language is assessed against other commonly used languages including Python, Java, and C++. An industry-standard missile and rocket simulation, coded in multiple languages, was used as a test bench for runtime speed. All language versions of the simulation, including Julia, were coded to a highly developed object-oriented simulation architecture tailored specifically for time-domain flight simulation. A "speed-of-coding" second dimension is plotted against runtime for each language to portray a space that characterizes Julia's scripting-language efficiencies in the context of the other languages. With caveats, Julia runtime speed was found to be in the class of compiled or semi-compiled languages. However, some factors that affect runtime speed at the cost of ease of coding are shown. Julia's built-in functionality for multi-core processing is briefly examined as a means of obtaining even faster runtime speed. The major contribution of this research to the extensive language-benchmarking body of work is comparing Julia to other mainstream languages using a complex flight simulation as opposed to benchmarking with single algorithms.

TABLE OF CONTENTS

1. INTRODUCTION
2. BENCHMARK DESCRIPTION
3. PRIOR FINDINGS
4. JULIA EXPERIMENTATION
5. BENCHMARK RESULTS
6. CONCLUSIONS
REFERENCES
BIOGRAPHY

1. INTRODUCTION

Julia [1] is a relatively new computer language that aims to reduce the challenge for math modelers to develop fast computer tools and simulations. It potentially combines the ease-of-coding feature of scripting languages (like Python) with the performance of compiled languages (like C++). Normally, these two features are mutually exclusive in existing, conventional programming languages. Julia's ease of coding principally derives from its dynamic typing feature, where data types are inferred "on the fly" without the necessity (code and complexity) of static data declaration. The dynamic feature is most commonly associated with scripting languages. Syntactically, Julia somewhat resembles Python, but it is not meant to be a "compiled Python." A key question for Julia's application to the simulation domain is, "Can Julia, with its obvious coding simplicity, attain runtime speeds comparable to conventional compiled languages for flight simulation?"
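As a minimal, hypothetical illustration of this ease-of-coding point (the snippet below is not drawn from the paper's benchmark code, and all names are invented), a Julia function can be written with no type declarations at all, yet the compiler specializes it for each concrete argument type it encounters:

```julia
# Hypothetical example: no static type declarations are required.
# Julia infers concrete types at each call site and compiles a
# specialized method for them.
function kinematic_step(x, v, a, dt)
    x_new = x + v * dt + a * dt^2 / 2   # position update
    v_new = v + a * dt                  # velocity update
    return x_new, v_new
end

# The same source serves different numeric types with no code changes.
println(kinematic_step(0.0, 10.0, -9.81, 0.01))          # Float64 arguments
println(kinematic_step(0.0f0, 10.0f0, -9.81f0, 0.01f0))  # Float32 arguments
```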
A wide body of literature exists for Julia benchmarking. Benchmarking is not as objective as many would expect it to be. For instance, benchmarks between different languages can be masked or misleading due to inconsistent code architecture or different styles of coding. It is also a coder's prerogative how to utilize a language's features for a particular problem. The literature contains many excellent language-benchmarking sources that recognize the subjectivity of benchmarking and attempt to address it in a fair manner. One of the original and most prominent language benchmarks was performed by Bagley [2]. A wide variety of languages (but not Julia) were benchmarked across several code tests comprising algorithms that span a wide domain of operation, from sorting and string manipulation to floating-point computation and hashing, among many others. Notably, the source code is included for all tests in all languages. Following this pattern, The Computer Language Benchmarks Game [3] is a more recent and similar presentation that includes Julia; multiple code submissions for a benchmark in the same language from different contributors are catalogued in a fair attempt to capture the code with the fastest speed.

Kouatchou (the "NASA Modeling Guru") [4] maintains a comprehensive set of benchmark results whose algorithm tests tend more exclusively toward scientific interest, reflected in the languages tested: Python, Julia, Matlab, and R. R Beats Python! [5] is an example of how benchmarks can be interpreted differently; some "unfair timing comparisons" by "the Julia group" are contested because the "R code was not vectorized." A takeaway from this body-of-work review, fairness notwithstanding, is that most of the benchmarks were characterized by single algorithms executed in tight loops. While these benchmarks are certainly informative for flight simulation coding, extrapolating their results to large-scale flight simulation execution speed is not straightforward.

Beyond basic algorithmic benchmarks, the literature contains several favorable Julia benchmarks for problem domains that are easily vectorized, like "field" problems for the numerical solution of partial differential equations. These types of computations are typically found in the high-speed computing domain. Eadline [6] makes the case for Julia as an ideal solution for this type of computing. Gutierrez [7] emphasizes Julia's built-in primitives for parallel computing, including vectorization. However, flight simulations typically involve the execution of sequentially dependent calculations inside a state-propagation loop. This type of computation does not lend itself well to vectorization.
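To make the state-propagation argument concrete, the sketch below (hypothetical code, not taken from any of the benchmarked simulations) shows the essential structure: each time step consumes the state produced by the previous step, so the iterations cannot be evaluated independently as a single array-wide operation.

```julia
# Hypothetical sketch of a time-domain state-propagation loop.
# Each iteration depends on the result of the previous one, so the loop
# cannot simply be replaced by a vectorized, array-wide expression.
function propagate(x0::Vector{Float64}, f, dt::Float64, nsteps::Int)
    x = copy(x0)
    for _ in 1:nsteps
        x = x .+ dt .* f(x)   # explicit Euler step, for illustration only
    end
    return x
end

# Example: 1-DOF point mass with quadratic drag; state = [altitude, velocity]
deriv(x) = [x[2], -9.81 - 0.002 * x[2] * abs(x[2])]
println(propagate([0.0, 300.0], deriv, 0.01, 1_000))
```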
Consequently, the literature review for Julia benchmarking revealed a gap in addressing Julia for the time-domain, lumped-element dynamic computations typical of flight simulation. This paper begins by describing how a unique combination of existing elements has been employed to address this relatively sparse area of Julia benchmark research. An extensively documented object-oriented simulation architecture, its implementation in an industry-standard rocket flight simulation, and separate versions (C++, Java, and Python) provide the setting for a comparative evaluation of Julia runtime speed (and ease of coding). This combination of existing elements avoids a common pitfall of benchmarks: comparisons between different languages can be masked or misleading due to inconsistent code architecture or different styles of coding. Thus, a common architecture and flight application provides an accurate and fair comparison.

Execution speeds for a Julia version of the rocket simulation are presented for a variety of runtime scenarios, including single runs and various increments of "batch" runs comprised of several executions. These are compared against their C++, Java, and Python simulation counterparts. Recognizing that runtime speed must be examined in the larger context of ease of coding, metrics for the relationship between these two criteria are shown for each of the language implementations; a comparison of lines of code is included, for instance. Key Julia benchmark caveats are also noted that reflect a tradeoff between usability and runtime speed.

The paper concludes with a brief experiment addressing Julia runtime speed in a multi-core, parallel computing context. Although not in the original scope of this research, Julia makes parallel computation very accessible.

2. BENCHMARK DESCRIPTION

The Mini-Rocket simulation [8] is coded in several language versions which are in active use. Mini-Rocket is open source [9]. Mini-Rocket (MR) is a very easy-to-configure, multiple degree-of-freedom missile and rocket fly-out model that accurately generates trajectories in three-dimensional space, including maneuver characteristics. It features a unique algorithm that accurately models missile dynamics at a fraction of the computational cost of conventional six degree-of-freedom simulations while maintaining a significant amount of the fidelity. The program is ideal for analyses requiring trajectory modeling without the necessity of detailed modeling of the onboard missile subsystems. Some of its more detailed features are summarized in Table 1.

Table 1. Mini-Rocket Features

- Osculating-plane formulation [10] reduces compute-time overhead of full six degree-of-freedom equations of motion
- Motion in three-dimensional space
- Two independent channels (pitch and yaw) for steering and guidance
- 1-, 2-, and 3-dimensional table lookups to model aerodynamics and propulsion characteristics
- Capability to model angle-of-attack variations in lift and drag
- Constraints on lateral acceleration based on angle of attack and closed-loop airframe response time
- Detailed models for control and guidance subsystems not required

Key to this benchmark, all language versions of MR have been coded to the same object-oriented architecture. An object-oriented simulation kernel (OSK) [11] successively executes sequences of model objects inside a differential equation (DE) engine. A long, traceable heritage of comparisons exists to establish the accuracy of the MR model and its coding mechanizations [8, Section 5] in its different language instantiations.
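The following sketch suggests how such a kernel pattern can be expressed in Julia; it is not the actual OSK [11] interface, and every type and function name here is hypothetical. Model objects are held in an ordered sequence, and the engine successively executes each one at every integration step.

```julia
# Hypothetical sketch of an object-oriented simulation kernel: an ordered
# sequence of model objects is executed inside a fixed-step DE engine.
# Illustrative only; it does not reproduce the OSK [11] design.
abstract type Model end

struct Gravity <: Model end
struct Drag <: Model
    cd_over_m::Float64
end

# Each model contributes its piece of the state derivative (state = [h, v]).
deriv!(::Gravity, xdot, x) = (xdot[2] -= 9.81)
deriv!(m::Drag,   xdot, x) = (xdot[2] -= m.cd_over_m * x[2] * abs(x[2]))

# The "DE engine": at every step the kernel visits each model object in
# sequence, accumulates the derivative, and integrates the state forward.
function run!(models::Vector{Model}, x::Vector{Float64}, dt, tmax)
    t = 0.0
    while t < tmax
        xdot = [x[2], 0.0]        # d(altitude)/dt = velocity
        for m in models
            deriv!(m, xdot, x)    # successive execution of model objects
        end
        x .+= dt .* xdot          # explicit Euler integration, for brevity
        t += dt
    end
    return x
end

println(run!(Model[Gravity(), Drag(0.002)], [0.0, 300.0], 0.005, 10.0))
```

Sharing one kernel pattern of this kind across the C++, Java, Python, and Julia versions of MR is what gives the benchmark a common architecture, the property that makes the runtime comparison fair.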
3. PRIOR FINDINGS

The C++, Java, and Python MR simulations have been