Parallel Programming for Changing Hardware

Embracing Heterogeneity — Parallel Programming for Changing Hardware

Michael D. Linderman, James Balfour, Teresa H. Meng and William J. Dally
Center for Integrated Systems and Computer Systems Laboratory, Stanford University
{mlinderm, jbalfour}@stanford.edu

Abstract

Computer systems are undergoing significant change: to improve performance and efficiency, architects are exposing more microarchitectural details directly to programmers. Software that exploits specialized accelerators, such as GPUs, and specialized processor features, such as software-controlled memory, exposes limitations in existing compiler and OS infrastructure. In this paper we propose a pragmatic approach, motivated by our experience with Merge [3], for building applications that will tolerate changing hardware. Our approach allows programmers to leverage different processor-specific or domain-specific toolchains to create software modules specialized for different hardware configurations, and it provides language mechanisms to enable the automatic mapping of the application to these processor-specific modules. We show this approach can be used to manage computing resources in complex heterogeneous processors and to enable aggressive compiler optimizations.

[Figure 1: Sketch of the Merge framework. From top to bottom: the application; high-level compilers, which embed and exploit high-level parallel programming tools when available; CPU and accelerator compilers, which integrate multiple processor- or domain-specific compilers; and a selection engine, which automatically maps the application to processor-specific software modules running on the CPU and accelerators.]

1 Introduction

Heterogeneous computer systems, which may integrate GPUs, FPGAs and other accelerators alongside conventional CPUs, offer significantly better performance and efficiency. However, they often do so by exposing to programmers architectural mechanisms, such as low-latency scratchpad memories and inter-processor interconnect, that are either hidden or unavailable in general-purpose CPUs. The software that executes on these accelerators often bears little resemblance to its CPU counterpart: source languages and assembly differ, and often entirely different algorithms are needed to exploit the capabilities of the different hardware.

The ISAs of commodity general-purpose processors have changed remarkably little during the past 30 years. Decades-old software still runs correctly, and fast, on modern processors. Unlike their ISAs, processor microarchitectures and system architectures have changed significantly. As modern architectures expose more microarchitectural and system details to software to improve performance and efficiency, programmers are no longer insulated from the evolution of the underlying hardware. Programming models need to be inclusive of different processor architectures, and tolerant of continual, often radical, changes in hardware.

To exploit these new and different hardware resources, a diverse set of vendor-specific, architecture-specific and application-specific programming models have been, and are currently being, developed. The rapid evolution of hardware ensures that programming models will continue to be developed at a torrid pace. Integrating different toolchains, whether from different vendors or using different high-level semantics, remains a challenge. However, integrating many narrowly-focused tools is more effective than attempting to craft a single all-encompassing solution; consequently, that is the approach we take.

In this paper, we present a methodology, motivated by our experiences with the Merge framework [3], for building programs that target diverse and evolving heterogeneous multicore systems.
Our approach, summarized in Figure 1, automatically maps applications to specialized software modules, implemented with different processor-specific or domain-specific toolchains. Specialized domain-specific languages and accelerator-specific assembly are encapsulated in C/C++ functions to provide a uniform interface and inclusive abstraction for computations of any complexity. Different implementations of a function are bundled together, creating a layer of indirection between the caller and the implementation that facilitates the mapping between application and implementation.

Section 2 motivates the use of encapsulation and bundling, summarizing and updating the techniques first described in [3]. Sections 3 and 4 present our most recent work, in which we show how encapsulation and bundling can be used to effectively manage computing resources in complex heterogeneous systems and enable aggressive compiler optimizations.

2 An Extensible Programming Model

The canonical compiler reduces a computation expressed in some high-level language to a small, fixed set of primitive operations that abstract the capabilities of the target hardware. Compilation and optimization strategies are biased by the choice of primitive operations. Optimizations developed for one set of primitives are often of limited use when the primitive operations fail to abstract important aspects of the target hardware or application.

Unfortunately, no one set of primitive operations can effectively abstract all of the unique and specialized capabilities provided by modern hardware. For instance, the capabilities of scalar processors are represented well by three-address operations on scalar operands; the capabilities of SIMD processors, such as Cell, GPUs and SSE units, are better represented by short-vector operations; and the capabilities of FPGAs are better represented by binary decision diagrams and data flow graphs with variable-precision operands. Much as the limitations of scalar primitives motivated the adoption of short-vector primitives in compilers targeting SIMD architectures, compilers that target complex accelerators such as FPGAs will find representations based on simple scalar and short-vector primitives limiting and ineffective.

We argue that nascent parallel programming systems should allow software that uses different programming models and primitives to be integrated simply and efficiently. These systems require variable and inclusive primitives: primitives that can abstract computational features of any complexity (variable) and for any architecture, or using any programming model (inclusive).

2.1 Encapsulating Specialized Code

Fortunately, programming languages already provide variable and inclusive primitives: functions. Programming systems such as EXOCHI [7] and CUDA [5] allow programmers to inline domain-specific languages (DSLs) and accelerator-specific assembly into C-like functions, thereby creating a uniform interface, compatible with existing software infrastructure, that is independent of the actual implementation. Figure 2 shows an example in which kmeans is implemented using combinations of standard C, a DSL, and GPU-specific assembly. All versions present the same interface and all appear to the caller to execute in the CPU memory space. The proxy layer (e.g., EXOCHI, CUDA) provides the data transfer and other runtime infrastructure needed to support the interaction between the CPU and the accelerator.

[Figure 2: Encapsulation of inline accelerator-specific assembly or domain-specific languages. Several implementations of kmeans(...), written in a domain-specific language, GPU code, or plain C, sit behind a uniform C interface, with a proxy layer mediating between the accelerator code and the caller.]

These enhanced functions, which we term function-intrinsics, are conceptually similar to existing compiler intrinsics, such as those used to represent SSE operations. Unlike conventional intrinsics, programmers are not limited to a small fixed set of operations; instead, programmers can create intrinsics for operations of any complexity, for any architecture and using any programming model supported by a proxy interface. When programmers use a non-C language, such as GPU assembly, the appropriate compiler is invoked and the resulting machine code (or an intermediate language and a just-in-time compiler or interpreter) is packaged into the binary.
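As a concrete illustration of this encapsulation pattern, the sketch below is our own minimal C++ rendering, not code from Merge, EXOCHI, or CUDA. Two interchangeable implementations of a 1-D k-means assignment step share one C-style signature (the names KMeansAssignFn, kmeans_assign_ref, and kmeans_assign_accel are ours); the accelerator variant is reduced to a stub at the point where a proxy layer would normally move data and launch device code.

```cpp
// Illustrative sketch only -- not code from the Merge framework. It shows the
// function-intrinsic idea: several implementations of one computation share a
// single C-style interface, so the caller never sees which one runs.
#include <cstddef>
#include <cstdio>

// Uniform interface: assign each point to the nearest of k 1-D centers.
// Every variant takes and returns data in the CPU memory space.
using KMeansAssignFn = void (*)(const float* points, std::size_t n,
                                const float* centers, std::size_t k,
                                int* assignment);

// Variant 1: portable reference implementation in plain C++.
void kmeans_assign_ref(const float* points, std::size_t n,
                       const float* centers, std::size_t k, int* assignment) {
    for (std::size_t i = 0; i < n; ++i) {
        int best = 0;
        float bestDist = (points[i] - centers[0]) * (points[i] - centers[0]);
        for (std::size_t c = 1; c < k; ++c) {
            float d = (points[i] - centers[c]) * (points[i] - centers[c]);
            if (d < bestDist) { bestDist = d; best = static_cast<int>(c); }
        }
        assignment[i] = best;
    }
}

// Variant 2: stands in for an accelerator-specific version. In a real system
// the body would be GPU assembly or DSL code behind a proxy layer that copies
// data to and from the device; here it simply delegates to the reference.
void kmeans_assign_accel(const float* points, std::size_t n,
                         const float* centers, std::size_t k, int* assignment) {
    // (proxy would launch device code here)
    kmeans_assign_ref(points, n, centers, k, assignment);
}

// The caller binds to the interface, not to a particular variant.
int main() {
    const float points[] = {0.1f, 0.9f, 0.45f};
    const float centers[] = {0.0f, 1.0f};
    int assignment[3];
    KMeansAssignFn kmeans_assign = kmeans_assign_accel;  // selected variant
    kmeans_assign(points, 3, centers, 2, assignment);
    for (int a : assignment) std::printf("%d ", a);      // prints: 0 1 0
    return 0;
}
```

Binding through the function pointer mirrors the layer of indirection that bundling introduces: the call site stays fixed while the selected variant changes with the hardware.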
2.2 A Concurrent Function Call ABI

Using the function call interface to integrate specialized implementations is actually common. For example, most systems ship with a version of the C standard library that is optimized for that particular platform. Often the optimized implementation includes machine-specific assembly and operating-system-specific system calls. We extend this approach to more than just a few standardized libraries. We believe programmers will need to extend and specialize many different APIs to exploit different hardware efficiently.

[Figure 3: Relationship between different accelerators and the CPU memory space. Panel labels: proxy, GPU execution, Crypto, FPGA, Video Coding, CPU memory space, inter-accelerator communication.]

The simple and complete definition of the C function call ABI provides a reasonable starting point, but must be enhanced to provide guarantees needed for correct concurrent execution. Additional restrictions are required to ensure different implementations of the same function can be invoked interchangeably, independently […] arguments (e.g. data set size < 10000); configuration restrictions, which specify the necessary compute resources (e.g. availability of a suitable GPU); and traits, which describe properties that are useful to users of the function (e.g. associativity). At compile time, when function variants implementing the same computation are bundled together, the annotations are analyzed and translated into a set of dispatch wrapper functions that implement the generic function interface and provide introspection into the variants available in the bundle.
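In the framework these dispatch wrappers are generated from the annotations at compile time; the hand-written sketch below is our illustration of what such a wrapper does at runtime, not the generated code itself. The gpu_available() probe, the size thresholds, and all names are hypothetical; the predicate fields correspond to the three annotation kinds in the text.

```cpp
// Minimal sketch of a dispatch wrapper over a bundle of function variants.
// Our illustration, not the Merge framework's generated code.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

struct Variant {
    const char* name;
    std::function<bool(std::size_t)> input_ok;  // input restriction
    std::function<bool()> config_ok;            // configuration restriction
    bool associative;                           // trait, visible to callers
    std::function<void(std::size_t)> run;
};

bool gpu_available() { return false; }  // hypothetical configuration probe

// Generic interface: invoke the first variant whose restrictions hold.
void kmeans_dispatch(const std::vector<Variant>& bundle, std::size_t n) {
    for (const auto& v : bundle) {
        if (v.input_ok(n) && v.config_ok()) {
            v.run(n);
            return;
        }
    }
    std::fprintf(stderr, "no admissible variant for n=%zu\n", n);
}

int main() {
    std::vector<Variant> bundle = {
        // GPU variant: needs a suitable GPU and a large enough data set.
        {"gpu", [](std::size_t n) { return n >= 10000; }, gpu_available, true,
         [](std::size_t n) { std::printf("gpu variant, n=%zu\n", n); }},
        // Small-input variant: input restriction "data set size < 10000".
        {"small", [](std::size_t n) { return n < 10000; }, [] { return true; },
         true, [](std::size_t n) { std::printf("small variant, n=%zu\n", n); }},
        // Generic fallback: always admissible.
        {"generic", [](std::size_t) { return true; }, [] { return true; }, true,
         [](std::size_t n) { std::printf("generic variant, n=%zu\n", n); }},
    };
    kmeans_dispatch(bundle, 500);      // -> small variant
    kmeans_dispatch(bundle, 1000000);  // -> generic (no GPU available here)
    return 0;
}
```

In the real system the admissibility checks and the wrapper itself are produced by the compiler from the annotations, so callers simply invoke the generic function as an ordinary C call.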