
Julia: Dynamism and Performance Reconciled by Design

JEFF BEZANSON, Julia Computing, Inc.
JIAHAO CHEN, Capital One
BEN CHUNG, Northeastern University
STEFAN KARPINSKI, Julia Computing, Inc.
VIRAL B. SHAH, Julia Computing, Inc.
LIONEL ZOUBRITZKY, École Normale Supérieure and Northeastern University
JAN VITEK, Northeastern University and Czech Technical University

Julia is a programming language for the scientific community that combines features of productivity languages, such as Python or MATLAB, with characteristics of performance-oriented languages, such as C++ or Fortran. Julia's productivity features include: dynamic typing, automatic memory management, rich type annotations, and multiple dispatch. At the same time, it allows programmers to control memory layout and leverages a specializing just-in-time compiler to eliminate much of the overhead of those features. This paper details the design choices made by the creators of Julia and reflects on the implications of those choices for performance and usability.

CCS Concepts: • Software and its engineering → Language features; General programming languages; Just-in-time compilers; Multiparadigm languages;

Additional Key Words and Phrases: multiple dispatch, just in time compilation, dynamic languages

ACM Reference Format:
Jeff Bezanson, Jiahao Chen, Ben Chung, Stefan Karpinski, Viral B. Shah, Lionel Zoubritzky, and Jan Vitek. 2018. Julia: Dynamism and Performance Reconciled by Design. Proc. ACM Program. Lang. 1, OOPSLA, Article 00 (2018), 23 pages. https://doi.org/00.0000/0000000

1 INTRODUCTION

Scientific programming has traditionally adopted one of two programming language families: productivity languages (Python, MATLAB, R) for easy development, and performance languages (C, C++, Fortran) for speed and a predictable mapping to hardware.
Features of productivity languages such as dynamic typing or garbage collection make exploratory and iterative development simple. Thus, scientific applications often begin their lives in a productivity language. In many cases, as the problem size and complexity outgrow what the initial implementation can handle, programmers turn to performance languages. While this usually leads to improved performance, converting an existing application (or some subset thereof) to a different language requires significant programmer involvement; features previously handled by the language (e.g., memory management) now have to be emulated by hand. As a result, porting software from a high-level to a low-level language is often a daunting task.

Scientists have been trying to bridge the divide between performance and productivity for years. One example is the ROOT data processing framework [Antcheva et al. 2015]. Confronted with petabytes of data, the high energy physics community spent more than 20 years developing an extension to C++, providing interpretive execution and reflection—typical features of productivity languages—while retaining C++'s performance in critical portions of the code.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2018 Copyright held by the owner/author(s).
2475-1421/2018/00-ART00
https://doi.org/00.0000/0000000
Proceedings of the ACM on Programming Languages, Vol. 1, No. OOPSLA, Article 00. Publication date: 2018.
Most scientific fields, however, do not have the resources to build and maintain their own computational infrastructure. The Julia programming language aims to decrease the gap between productivity and performance languages. On one hand, it provides productivity features like dynamic typing, garbage collection, and multiple dispatch. On the other, it has a type-specializing just-in-time compiler and lets programmers control the layout of data structures in memory. Julia, therefore, promises scientific programmers the ease of a productivity language at the speed of a performance language.

This promise is surprising. Dynamic languages like Python or R typically suffer from at least an order of magnitude slowdown over C, and often more. Fig. 1 illustrates that Julia is indeed a dynamic language. It declares a Node datatype containing two untyped fields, val and nxt, and an untyped insert function that takes a sorted list and performs an ordered insertion. While this code will be optimized by the Julia compiler, it is not going to run at full speed without some additional programmer intervention.

    mutable struct Node
      val
      nxt
    end

    function insert(list, elem)
      if list isa Void
        return Node(elem, nothing)
      elseif list.val > elem
        return Node(elem, list)
      end
      list.nxt = insert(list.nxt, elem)
      list
    end

    Fig. 1. Linked list

The key to performance in Julia lies in the synergy between language design, implementation techniques and programming style. Julia's design was carefully tailored so that a very small team of language implementers could create an efficient compiler. The key to this relative ease is to leverage the combination of language features and programming idioms to reduce overhead, but what language properties enable easy compilation to fast code?
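As a sketch of the kind of programmer intervention alluded to above, the following variant of the Fig. 1 list is ours, not the paper's: the name TypedNode is hypothetical, and the code uses current Julia syntax, where v0.6's Void is spelled Nothing. Annotating the fields gives type inference concrete types to work with.

```julia
# Sketch (ours, not from the paper): Fig. 1's list with annotated fields.
# Uses current Julia syntax: `Nothing` is the modern spelling of v0.6's `Void`.
mutable struct TypedNode
    val::Int                        # concrete field type; values can be unboxed
    nxt::Union{TypedNode, Nothing}  # small union the compiler can still analyze
end

function insert(list::Union{TypedNode, Nothing}, elem::Int)
    if list isa Nothing
        return TypedNode(elem, nothing)
    elseif list.val > elem
        return TypedNode(elem, list)
    end
    list.nxt = insert(list.nxt, elem)
    return list
end

l = insert(insert(insert(nothing, 3), 1), 2)   # builds the sorted list 1 -> 2 -> 3
```

With the annotations, inference knows that list.val is an Int at every use, so the comparisons and field accesses can be compiled without the boxing and dynamic lookups an untyped Node would require.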
Language design: Julia includes a number of features that are common to many productivity languages, namely dynamic types, optional type annotations, reflection, dynamic code loading, and garbage collection. A slightly less common feature is symmetric multiple dispatch [Bobrow et al. 1986]. In Julia a function can have multiple implementations, called methods, distinguished by the type annotations added to the parameters of the function. At run-time, a function call is dispatched to the most specific method applicable to the types of the arguments. Julia's type annotations can be attached to datatype declarations as well, in which case they are checked whenever typed fields are assigned to. Julia differentiates between concrete and abstract types: the former can be instantiated while the latter can be extended by subtypes. This distinction is important for optimization.

Language implementation: Performance does not arise from great feats of compiler engineering: Julia's implementation is simpler than that of many dynamic languages. The Julia compiler has three main optimizations that are performed on a high-level intermediate representation; native code generation is delegated to the LLVM compiler infrastructure. The optimizations are (1) method inlining, which devirtualizes multi-dispatched calls and inlines the call target; (2) object unboxing, to avoid heap allocation; and (3) method specialization, where code is special-cased for its actual argument types. The compiler does not support the kind of speculative compilation and deoptimization common in dynamic language implementations, but it supports dynamic code loading from the interpreter and with eval().

The synergy between language design and implementation is in evidence in the interaction between the three optimizations. Each call to a function with a combination of concrete argument types not observed before triggers specialization.
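The dispatch and specialization behavior described above can be sketched with a hypothetical example of ours (the Shape hierarchy and overlap function are invented for illustration): Shape is abstract, so it can be subtyped but not instantiated, while Circle and Square are concrete, and each call is dispatched on the run-time types of all arguments.

```julia
# Hypothetical example (ours): symmetric multiple dispatch over a small
# shape hierarchy. `Shape` is abstract (subtypable, not instantiable);
# `Circle` and `Square` are concrete (instantiable, not subtypable).
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    s::Float64
end

overlap(a::Shape,  b::Shape)  = "shape-shape"    # fallback method
overlap(a::Circle, b::Shape)  = "circle-shape"   # more specific in the 1st argument
overlap(a::Circle, b::Square) = "circle-square"  # most specific method

overlap(Square(1.0), Circle(2.0))   # dispatches to the fallback
overlap(Circle(1.0), Circle(2.0))   # dispatches to the (Circle, Shape) method
overlap(Circle(1.0), Square(2.0))   # dispatches to the most specific method
```

Each of the three calls presents a combination of concrete argument types not seen before, so each triggers its own specialization, which is then cached in the function's dispatch table for subsequent calls.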
A type inference algorithm uses the types of the arguments (and, if these are user-defined types, the declared types of fields) to discover the types of variables in the specialized function. This enables both unboxing and inlining. The specialized method is added to the function's dispatch table so that future calls with the same argument types can use the generated code.

Programming style: To assist the implementation, Julia programmers need to write idiomatic code that can be compiled effectively. Programmers are keenly aware of the optimizations that the compiler performs and write code that is shaped accordingly. For instance, adding type annotations to the fields of datatypes is viewed as good practice. Another good practice is to write methods that are type stable. A method is type stable if, when it is specialized to a set of concrete types, type inference can assign concrete types to all variables in the function. This property should hold for all specializations of the same method. Type instability can stem from methods that can return values of different types, or from the assignment of different types to the same variable depending on branches of the function.

This paper gives the first unified overview of the design of the language and its implementation, paying particular attention to the features that play a role in achieving performance. This furthers the work of Bezanson et al. [2017] by detailing the synergies at work, through the entire compilation pipeline, between the design and the programming style of the language. Moreover, we present experimental results on performance and usage. More specifically, we give results obtained on a benchmark suite of 10 small applications where Julia v0.6.2 performs between 0.9x and 6.1x relative to optimized C code.
On our benchmarks, Julia outperforms JavaScript and Python in all cases. Finally, we conduct a corpus analysis over a group of 50 popular projects hosted on GitHub to examine how the features and underlying design choices of the language are used in practice. The corpus analysis confirms that multiple dispatch and type annotations are widely used by Julia programmers. It also shows that the corpus is mostly made up of type stable code.
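Type stability, as defined above, can be illustrated with a minimal sketch of ours (the function names are invented, not drawn from the paper's corpus): one method whose return type depends on the value of its argument, and one whose return type is fixed by the argument's type.

```julia
# Sketch (ours) contrasting type-unstable and type-stable methods.
# `unstable` may return an Int or a String depending on the *value* of x,
# so inference cannot assign its result a single concrete type;
# `clamped` returns an Int for every Int argument.
unstable(x::Int) = x > 0 ? x : "non-positive"
clamped(x::Int)  = x > 0 ? x : 0

typeof(unstable(1))    # an integer type
typeof(unstable(-1))   # String
typeof(clamped(-1))    # an integer type, for every input
```

In the REPL, @code_warntype unstable(1) flags the union-typed result, whereas clamped infers to a single concrete type in every specialization, which is what lets the compiler unbox and inline freely.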