DAISY: Dynamic Compilation for 100% Architectural Compatibility

Total Pages: 16

File Type: PDF, Size: 1020 KB

DAISY: Dynamic Compilation for 100% Architectural Compatibility

Kemal Ebcioglu and Erik R. Altman
IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598
{kemal,erik}@watson.ibm.com

Abstract

Although VLIW architectures offer the advantages of simplicity of design and high issue rates, a major impediment to their use is that they are not compatible with the existing software base. We describe new simple hardware features for a VLIW machine we call DAISY (Dynamically Architected Instruction Set from Yorktown). DAISY is specifically intended to emulate existing architectures, so that all existing software for an old architecture (including operating system kernel code) runs without changes on the VLIW. Each time a new fragment of code is executed for the first time, the code is translated to VLIW primitives, parallelized, and saved in a portion of main memory not visible to the old architecture, by a Virtual Machine Monitor (software) residing in read-only memory. Subsequent executions of the same fragment do not require a translation (unless cast out). We discuss the architectural requirements for such a VLIW, to deal with issues including self-modifying code, precise exceptions, and aggressive reordering of memory references in the presence of strong MP consistency and memory-mapped I/O. We have implemented the dynamic parallelization algorithms for the PowerPC architecture. The initial results show high degrees of instruction-level parallelism with reasonable translation overhead and memory usage.

Keywords: instruction-level parallelism, object code compatible VLIW, dynamic compilation, binary translation, superscalar

1 Background and Motivation

Very Long Instruction Word (VLIW) architectures offer the advantages of design simplicity, a potentially short clock period, and high issue rates. Unfortunately, high performance is not sufficient for success. One of the major impediments to using a VLIW (or any new ILP machine architecture) has been its inability to run existing binaries of established architectures. It was argued (and not facetiously) in a recent keynote speech [11] that architectures which do not run Intel x86 code may well be doomed to failure, regardless of their speed!

To solve the compatibility problem efficiently, there have been several proposals beyond plain or caching interpreters [10]. One has been the object code translation approach (e.g. [20, 21, 22, 23]), where a software program takes as input an executable module generated for the old machine, and profile-directed feedback information from past emulations, if available. It then generates a new executable module that can run on the new architecture (resorting to interpretation in some difficult cases), and that gives the same results that plain interpretation would. Although many of the nasty challenges to static object code translation (programs printing their own checksum, shared variables, self-modifying code, generating a random number and using it as a branch target address, and so on) have been addressed, the static object code translation solution still has some problems.

If object code translation is used to emulate applications written for one existing machine on another ([22, 23]), then many primitives may need to be generated to emulate one old architecture instruction, or unsafe simplifying assumptions may need to be made (e.g. about ordering of shared variable accesses, or the number of bits in the floating point representation) to get more performance, in which case full compatibility is sacrificed. This is typically because hardware features to help compatibility with an "important" old architecture were not designed into the new fast machine; compatibility was just not emphasized, or came as an afterthought. For example, the set of condition codes maintained is often quite different in different architectures. This object code translation approach does allow the convenience of running many important applications of the old architecture on the new machine, but does not provide a replacement for the old machine in terms of speed and range of applications.
If the new architecture is fully binary compatible with the old one by hardware design, but does not run with the best performance on old binaries ([20, 21]), and the new features of the new architecture that improve performance can be utilized only by object code translation or recompilation, the solution is still not perfect. Rapid adoption of new architectural features for higher performance may be possible under certain circumstances; scientific and technical computing is an example. But computer designers often underestimate the strong inertia of the user community and software vendors at large, and their resistance to change.

Another approach is to translate the old architecture instructions to a new internal representation (e.g. VLIW) at Icache miss time, by hardware [9, 14, 19]. This approach is robust in the sense that it implements the old architecture completely. But the optimizations that can be performed by the hardware are limited, compared to software opportunities. Also, the conversion from the old architecture representation in memory to the internal Icache representation is complex (especially if one attempts to do re-ordering) and can require substantial hardware design investment and VLSI real estate.

As an alternative we present DAISY (Dynamically Architected Instruction Set from Yorktown). DAISY employs software translation, which is attractive because it dispenses with the need for complex hardware whose sole purpose is to achieve compatibility with (possibly ugly) old architecture(s). Given the appropriate superset of features in the new architecture (e.g. condition codes in x86, PowerPC, and S/390 format), DAISY can be dynamically architected by software to efficiently emulate any of the old architectures. Assuming that we can begin with a clean slate for both hardware and emulation software, and adopt a simple design philosophy, what architectural features and compilation techniques are required to make software translation efficient and 100% compatible with existing software? We attack this problem in the current paper.

While DAISY and this paper focus mainly on a VLIW as the new architecture, the same ideas can be applied to any new superscalar design, and potentially to other new ILP architectures that break binary compatibility as well. Current compiler techniques for attaining high ILP are unacceptably slow for dynamic parallelization, which requires real-time performance from a compiler in order to make the overhead imperceptible to the user. To this end, we describe a new, significantly faster parallelization technique. We have implemented this technique for the PowerPC, and we report the initial encouraging ILP results. Another feature of the new compilation technique is the ability to maintain precise exceptions, so the original instruction responsible for an exception can be identified whenever an exception occurs. While out-of-order superscalars use elaborate hardware mechanisms to maintain precise exceptions, in our case this is done by software alone.

The paper is organized as follows: We first give an example illustrating the new fast dynamic compilation algorithm used by DAISY. Next, various architectural features to support high performance translation are described. We then describe the dynamic translation mechanism whereby DAISY runs the old software with minimal hardware support. Next we discuss the mapping mechanisms from the old code to …
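Before turning to the algorithm, the translate-once flow the abstract describes (translate a fragment on first execution, cache the VLIW code in memory invisible to the base architecture, and retranslate only if the entry is cast out) can be pictured with a minimal C sketch. The direct-mapped table and every name below are our assumptions, not DAISY's actual structures:

```c
#include <stddef.h>
#include <stdint.h>

#define TCACHE_ENTRIES 4096

typedef struct {
    uint32_t base_pc;    /* entry address in the base (e.g. PowerPC) binary */
    void    *vliw_code;  /* translated, parallelized VLIW fragment, or NULL */
} TCacheEntry;

/* Lives in the memory region the abstract says is invisible to the old
 * architecture; a direct-mapped table is an assumption of this sketch. */
static TCacheEntry tcache[TCACHE_ENTRIES];

extern void *translate_fragment(uint32_t base_pc); /* hypothetical translator */
extern void  run_vliw(void *code);                 /* hypothetical dispatcher */

void execute_at(uint32_t base_pc)
{
    TCacheEntry *e = &tcache[(base_pc >> 2) % TCACHE_ENTRIES];
    if (e->vliw_code == NULL || e->base_pc != base_pc) {
        /* first execution of this fragment, or a prior entry was cast out:
         * translate and parallelize it now, then cache the result */
        e->base_pc   = base_pc;
        e->vliw_code = translate_fragment(base_pc);
    }
    run_vliw(e->vliw_code);  /* subsequent executions skip translation */
}
```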
2 The Compilation Algorithm

In this paper, we call the original, old architecture that we are trying to emulate the base architecture. The VLIW which emulates the old architecture we call the migrant architecture, following the terminology of [20]. The base architecture could be any architecture, but we will be giving examples mostly from the IBM PowerPC.

Traditional caching emulators may spend under 100 instructions to translate a typical base architecture instruction (depending on the architectural mismatch and complexity of the emulated machine). So caching emulators are very fast, but do not do much optimization or ILP extraction. Traditional VLIW compiler techniques, on the other hand, extract considerable ILP at the cost of much more overhead.

The goal in DAISY is to obtain significant levels of ILP while keeping compilation overhead to a minimum, to meet the severe time constraints of a virtual machine implementation. Unlike traditional VLIW scheduling, DAISY examines each operation in the order it occurs in the original binary code, converting each into RISC primitives (if a CISCy operation). As each RISC primitive is generated, DAISY immediately finds a VLIW instruction in which it can be placed, while still doing VLIW global scheduling on multiple paths and across loop iterations, and while maintaining precise exceptions.

Figure 1 shows an example of PowerPC code and its conversion to VLIW code. This conversion uses the algorithm described in [6]. However, the discussion below should make the main points clear. We begin with four major points:

• Operations 1-11 of the original PowerPC code are scheduled in sequence into VLIWs. It turns out that two VLIWs suffice for these 11 instructions, yielding an ILP of 4, 4, and 3.5 on the three possible paths through the code.

• Operations are always added to the end of the last VLIW on the current path. If input data for an operation are available prior to the end of the last VLIW, then the operation is performed as early as possible, with the result placed in a renamed register (one that is not architected in the original architecture). The renamed register is then copied to the original (architected) register at the end of the last VLIW. This is illustrated by the xor instruction in step 4, whose result is renamed to r63 in VLIW1, then copied to the original destination r4 in VLIW2. By having the result available early in r63, later instructions can be moved up. For example, the cntlz in step 11 can use the result in r63 before it has been copied to r4.
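The second bullet above amounts to a simple placement rule. The following C sketch is our illustration of that rule under strong simplifying assumptions (a single path, a fixed slot count, and no rewriting of later consumers to read the renamed register); it is not the actual algorithm of [6]:

```c
#include <assert.h>

enum { OP_COPY, OP_XOR, OP_CNTLZ };     /* illustrative opcodes */

#define SLOTS     8
#define MAX_VLIWS 64

typedef struct { int op, dst, src1, src2; } Prim;   /* one RISC primitive */
typedef struct { Prim slot[SLOTS]; int n; } VLIW;

static VLIW vliw[MAX_VLIWS];
static int  last;               /* index of the last VLIW on the current path */
static int  ready[64];          /* VLIW index at which each register is ready */
static int  next_rename = 32;   /* r32..r63: renamed, non-architected regs    */

static void emit(int v, Prim p)
{
    assert(vliw[v].n < SLOTS);  /* slot-overflow handling omitted in sketch */
    vliw[v].slot[vliw[v].n++] = p;
}

void schedule(Prim p)
{
    int earliest = ready[p.src1] > ready[p.src2] ? ready[p.src1] : ready[p.src2];

    if (earliest > last)
        last = earliest;        /* inputs not ready in time: open a new VLIW */

    if (earliest < last) {
        /* Compute early into a renamed register ... */
        assert(next_rename < 64);
        int r = next_rename++;
        Prim early = p; early.dst = r;
        emit(earliest, early);
        ready[r] = earliest + 1;    /* later consumers could be redirected to r
                                       (source rewriting omitted here)       */
        /* ... and copy it to the architected register at the end of the last
         * VLIW, keeping the architected state precise there. */
        emit(last, (Prim){ OP_COPY, p.dst, r, r });
        ready[p.dst] = last + 1;
    } else {
        emit(last, p);              /* inputs ready only at the last VLIW */
        ready[p.dst] = last + 1;
    }
}

/* E.g. with last = 1 (VLIW2) and r5, r6 ready in VLIW1, scheduling the xor
 * of step 4 as (Prim){OP_XOR, 4, 5, 6} computes it early into a renamed
 * register (r63 in the paper's example) and copies it to r4 in VLIW2. */
```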
Recommended publications
  • The LLVM Instruction Set and Compilation Strategy
The LLVM Instruction Set and Compilation Strategy. Chris Lattner, Vikram Adve. University of Illinois at Urbana-Champaign. {lattner,vadve}@cs.uiuc.edu
Abstract: This document introduces the LLVM compiler infrastructure and instruction set, a simple approach that enables sophisticated code transformations at link time, runtime, and in the field. It is a pragmatic approach to compilation, interfering with programmers and tools as little as possible, while still retaining extensive high-level information from source-level compilers for later stages of an application's lifetime. We describe the LLVM instruction set, the design of the LLVM system, and some of its key components.
1 Introduction: Modern programming languages and software practices aim to support more reliable, flexible, and powerful software applications, increase programmer productivity, and provide higher level semantic information to the compiler. Unfortunately, traditional approaches to compilation either fail to extract sufficient performance from the program (by not using interprocedural analysis or profile information) or interfere with the build process substantially (by requiring build scripts to be modified for either profiling or interprocedural optimization). Furthermore, they do not support optimization either at runtime or after an application has been installed at an end-user's site, when the most relevant information about actual usage patterns would be available. The LLVM Compilation Strategy is designed to enable effective multi-stage optimization (at compile-time, link-time, runtime, and offline) and more effective profile-driven optimization, and to do so without changes to the traditional build process or programmer intervention. LLVM (Low Level Virtual Machine) is a compilation strategy that uses a low-level virtual instruction set with rich type information as a common code representation for all phases of compilation.
  • Lecture 25: Dynamic Compilation
Lecture 25: Dynamic Compilation. 15-745, Todd C. Mowry, Carnegie Mellon. Based on "Partial Method Compilation Using Dynamic Profile Information", John Whaley, OOPSLA '01. (Slide content courtesy of John Whaley & Monica Lam.)
I. Goals of this lecture: beyond static compilation; an example of a complete system; use of data flow techniques in a new context; an experimental approach.
Outline: I. Motivation & Background; II. Overview; III. Compilation Policy; IV. Partial Method Compilation; V. Partial Dead Code Elimination; VI. Escape Analysis; VII. Results.
Static/dynamic and high-level/binary: a compiler is high-level to binary, static; an interpreter is high-level, emulated, dynamic, running code "as-is"; a binary translator is binary-to-binary and mostly dynamic, used for software migration (x86 to Alpha, Sun, Transmeta; 68000 to PowerPC to x86), virtualization (making hardware virtualizable), dynamic optimization (DynamoRIO), and security (executing out of a "protected" code cache); dynamic compilation is high-level to binary, dynamic, enabling machine-independent dynamic loading, cross-module optimization, and specializing a program using runtime information without profiling.
Closed-world vs. open-world: the closed-world assumption (most static compilers) is that all code is available a priori for analysis and compilation; the open-world assumption (most dynamic compilers) is that code is not all available and arbitrary code can be loaded at run time, which precludes many optimization opportunities. The solution: optimistically assume the best case, but provide a way out if necessary.
II. Overview of dynamic compilation: interpretation/compilation policy decisions (choosing what and how to compile); collecting runtime information (instrumentation, sampling); exploiting runtime information (frequently-executed code paths).
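The "choosing what and how to compile" policy decisions in the overview can be made concrete with a small sketch: count invocations while interpreting, and compile a method once it crosses a hotness threshold. All names here (invoke, interpret, jit_compile) are hypothetical stand-ins, not the lecture's system:

```c
#include <stddef.h>

#define HOT_THRESHOLD 1000

typedef struct Method {
    const char *name;
    int         invocations;    /* instrumentation: bumped on every call */
    void      (*native)(void);  /* non-NULL once the JIT has compiled it */
} Method;

extern void interpret(Method *m);            /* hypothetical interpreter */
extern void (*jit_compile(Method *m))(void); /* hypothetical compiler    */

void invoke(Method *m)
{
    if (m->native) {                         /* fast path: compiled code */
        m->native();
        return;
    }
    if (++m->invocations >= HOT_THRESHOLD)   /* policy: compile hot methods */
        m->native = jit_compile(m);
    interpret(m);   /* this invocation still interprets; next call is native */
}
```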
  • Dynamic Extension of Typed Functional Languages
Dynamic Extension of Typed Functional Languages. Don Stewart. PhD Dissertation, School of Computer Science and Engineering, University of New South Wales, 2010. Supervisor: Assoc. Prof. Manuel M. T. Chakravarty; co-supervisor: Dr. Gabriele Keller.
Abstract: We present a solution to the problem of dynamic extension in statically typed functional languages with type erasure. The presented solution retains the benefits of static checking, including type safety, aggressive optimizations, and native code compilation of components, while allowing extensibility of programs at runtime. Our approach is based on a framework for dynamic extension in a statically typed setting, combining dynamic linking, runtime type checking, first class modules and code hot swapping. We show that this framework is sufficient to allow a broad class of dynamic extension capabilities in any statically typed functional language with type erasure semantics. Uniquely, we employ the full compile-time type system to perform runtime type checking of dynamic components, and emphasize the use of native code extension to ensure that the performance benefits of static typing are retained in a dynamic environment. We also develop the concept of fully dynamic software architectures, where the static core is minimal and all code is hot swappable. Benefits of the approach include hot swappable code and sophisticated application extension via embedded domain specific languages. We instantiate the concepts of the framework via a full implementation in the Haskell programming language: providing rich mechanisms for dynamic linking, loading, hot swapping, and runtime type checking in Haskell for the first time. We demonstrate the feasibility of this architecture through a number of novel applications: an extensible text editor; a plugin-based network chat bot; a simulator for polymer chemistry; and xmonad, an extensible window manager.
  • Comparative Studies of Programming Languages; Course Lecture Notes
Comparative Studies of Programming Languages, COMP6411 Lecture Notes, Revision 1.9. Joey Paquet, Serguei A. Mokhov (Eds.). August 5, 2010. arXiv:1007.2123v6 [cs.PL] 4 Aug 2010.
Preface: Lecture notes for the Comparative Studies of Programming Languages course, COMP6411, taught at the Department of Computer Science and Software Engineering, Faculty of Engineering and Computer Science, Concordia University, Montreal, QC, Canada. These notes include a compiled book of primarily related articles from the Wikipedia, the Free Encyclopedia [24], as well as the Comparative Programming Languages book [7] and other resources, including our own. The original notes were compiled by Dr. Paquet [14].
Contents: 1. Brief History and Genealogy of Programming Languages (introduction; history: pre-computer era, early computer era, modern/structured programming languages; references). 2. Programming Paradigms (introduction; history: low-level binary and assembly, procedural programming, object-oriented programming, declarative programming). 3. Program Evaluation (program analysis and translation phases: front end, back end; compilation vs. interpretation; type system: type checking; memory management …)
  • Speeding up Dynamic Compilation: Concurrent and Parallel Dynamic Compilation (Bohm2013.pdf, 3.003 MB)
Speeding up Dynamic Compilation: Concurrent and Parallel Dynamic Compilation. Igor Böhm. Doctor of Philosophy, Institute of Computing Systems Architecture, School of Informatics, University of Edinburgh, 2013. This thesis has been submitted in fulfilment of the requirements for a postgraduate degree at the University of Edinburgh; it is protected by copyright, and full bibliographic details must be given when referring to it.
Abstract: The main challenge faced by a dynamic compilation system is to detect and translate frequently executed program regions into highly efficient native code as fast as possible. To efficiently reduce dynamic compilation latency, a dynamic compilation system must improve its workload throughput, i.e. compile more application hotspots per time. As time for dynamic compilation adds to the overall execution time, the dynamic compiler is often decoupled and operates in a separate thread, independent from the main execution loop, to reduce the overhead of dynamic compilation.
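The decoupled design this abstract describes, a compiler thread draining a queue of hot regions while the main loop keeps executing, can be sketched as below. The ring buffer and every name are our assumptions, not Böhm's implementation:

```c
#include <pthread.h>
#include <stdio.h>

#define QCAP 128

static int q[QCAP], qh, qt;                 /* ring buffer of region ids */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

void enqueue_hot_region(int region)         /* called by the execution loop */
{
    pthread_mutex_lock(&m);
    q[qt++ % QCAP] = region;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&m);
}

static void compile_region(int region)      /* stand-in for the translator */
{
    printf("compiling region %d\n", region);
}

void *compiler_thread(void *arg)            /* runs concurrently with execution */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&m);
        while (qh == qt)
            pthread_cond_wait(&nonempty, &m);
        int region = q[qh++ % QCAP];
        pthread_mutex_unlock(&m);
        compile_region(region);             /* main loop keeps executing */
    }
}

/* startup (hypothetical): pthread_create(&tid, NULL, compiler_thread, NULL); */
```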
  • Language and Compiler Support for Dynamic Code Generation by Massimiliano A. Poletto
Language and Compiler Support for Dynamic Code Generation. Massimiliano A. Poletto. S.B., Massachusetts Institute of Technology (1995); M.Eng., Massachusetts Institute of Technology (1995). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, September 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Certified by M. Frans Kaashoek, Associate Professor of Electrical Engineering and Computer Science, Thesis Supervisor; accepted by Arthur C. Smith, Chairman, Departmental Committee on Graduate Students.
Abstract: Dynamic code generation, also called run-time code generation or dynamic compilation, is the creation of executable code for an application while that application is running. Dynamic compilation can significantly improve the performance of software by giving the compiler access to run-time information that is not available to a traditional static compiler. A well-designed programming interface to dynamic compilation can also simplify the creation of important classes of computer programs. Until recently, however, no system combined efficient dynamic generation of high-performance code with a powerful and portable language interface. This thesis describes a system that meets these requirements, and discusses several applications of dynamic compilation.
  • Transparent Dynamic Optimization: the Design and Implementation of Dynamo
Transparent Dynamic Optimization: The Design and Implementation of Dynamo. Vasanth Bala, Evelyn Duesterwald, Sanjeev Banerjia. HP Laboratories Cambridge, HPL-1999-78, June 1999. E-mail: [email protected]. Keywords: dynamic optimization, compiler, trace selection, binary translation.
Dynamic optimization refers to the runtime optimization of a native program binary. This report describes the design and implementation of Dynamo, a prototype dynamic optimizer that is capable of optimizing a native program binary at runtime. Dynamo is a realistic implementation, not a simulation, that is written entirely in user-level software, and runs on a PA-RISC machine under the HPUX operating system. Dynamo does not depend on any special programming language, compiler, operating system or hardware support. Contrary to intuition, we demonstrate that it is possible to use a piece of software to improve the performance of a native, statically optimized program binary, while it is executing. Dynamo not only speeds up real application programs, its performance improvement is often quite significant. For example, the performance of many +O2 optimized SPECint95 binaries running under Dynamo is comparable to the performance of their +O4 optimized versions running without Dynamo. © Copyright Hewlett-Packard Company 1999.
Contents: 1 Introduction; 2 Related Work; 3 Overview …
  • Performance Analyses and Code Transformations for MATLAB Applications Patryk Kiepas
Performance analyses and code transformations for MATLAB applications. Patryk Kiepas. PhD thesis (Computation and Language [cs.CL]), Université Paris Sciences et Lettres, prepared at MINES ParisTech, 2019. NNT: 2019PSLEM063. HAL Id: tel-02516727, https://pastel.archives-ouvertes.fr/tel-02516727, submitted on 24 Mar 2020. Doctoral school no. 621 (Ingénierie des Systèmes, Matériaux, Mécanique, Énergétique); specialty: real-time computing, robotics and automation.
Defended on 19 December 2019 before a jury composed of: Christine Eisenbeis (Research Director, Inria / Paris 11), president; João Manuel Paiva Cardoso (Professor, University of Porto), reviewer; Erven Rohou (Research Director, Inria Rennes), reviewer; Michel Barreteau (Research Engineer, THALES), examiner; Francois Giersch (Research Engineer, THALES), invited. Thesis advisor: Claude Tadonki (MINES ParisTech); co-advisors: Corinne Ancourt (MINES ParisTech) and Jarosław Koźlak (AGH UST).
Abstract: MATLAB is an interactive computing environment with an easy programming language and a vast library of built-in functions.
  • Is Dynamic Compilation Possible for Embedded Systems? SCOPES 2015, St. Goar
Is dynamic compilation possible for embedded systems? SCOPES 2015, St. Goar. Victor Lomüller, Henri-Pierre Charles. CEA DACLE / Grenoble, June 2, 2015. © CEA. All rights reserved.
Introduction, wake-up questions session (FAQ): What do you mean by dynamic compilation? A compilation system where the binary code is generated at run-time. Like a Java JIT system? Yes, exactly! But not only for portability: for performance and for data/architecture adaptation. But a Java JIT cannot be used on embedded systems! Yes, because it needs a huge memory size, introduces lag, is slow in the interpreted part, takes time to generate binary code, ... because it's Java ;-). What do you want? We want flexible and fast code generation at run-time, and we use it to specialize (simplify) binary code at run-time. There are many application domains!
Compilers: classical compiler architectures (GCC, LLVM, Java JITs) are driven by performance only, mono-architecture, not energy aware, and not data dependent ("compilation time" typology). Future compiler architectures will be multi-objective (execution time, power, thermal constraints), multi-target (heterogeneous multi-SoC), and data driven (dynamically), using "compilation time" …
  • CSc 553 Principles of Compilation, 2: Interpreters, Department of Computer Science, University of Arizona
CSc 553, Principles of Compilation. 2: Interpreters. Department of Computer Science, University of Arizona. [email protected]. Copyright © 2011 Christian Collberg.
[Figure: the compilation pipeline. Source code passes through a lexer (tokens), parser (AST), semantic analyser, intermediate code generation (IR), and VM code generation; the resulting VM code is run by an interpreter that repeatedly gets the next instruction and executes it.]
An interpreter is like a CPU, only in software. The compiler generates virtual machine (VM) code rather than native machine code, and the interpreter executes VM instructions rather than native machine code. Interpreters are slow, often 10-100 times slower than executing machine code directly, but portable: the virtual machine code is not tied to any particular architecture. Interpreters work well with very high-level, dynamic languages (APL, Prolog, Icon) where a lot is unknown at compile-time (array bounds, etc.).
Kinds of interpreters: an "APL/Prolog-style" (load-and-go/interactive) interpreter reads, lexes, parses, and evaluates the source code directly; a "Java-style" interpreter first compiles the source to a VM code file, which is then read and evaluated separately; a just-in-time (JIT, dynamic) compiler instead translates the VM code file to native code and executes that.
Actions in an interpreter: internally, an interpreter consists of (1) the interpreter engine, which executes the VM instructions; (2) memory for storing user data, often separated as a heap and a stack; and (3) a stream of VM instructions. The basic cycle is: I := get next instruction; execute I. Many virtual machine instruction sets (e.g. Java bytecode, Forth) are stack based.
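A minimal dispatch loop for a stack-based VM of the kind the notes describe follows; the four-instruction set is purely illustrative, not the course's VM:

```c
#include <stdio.h>

enum { PUSH, ADD, STORE, HALT };            /* toy VM opcodes */

void interpret(const int *code)
{
    int stack[256], sp = 0, vars[16] = {0}, pc = 0;
    for (;;) {
        int i = code[pc++];                 /* I := get next instruction */
        switch (i) {                        /* execute I                 */
        case PUSH:  stack[sp++] = code[pc++];        break;
        case ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case STORE: vars[code[pc++]] = stack[--sp];  break;
        case HALT:  printf("v0 = %d\n", vars[0]);    return;
        }
    }
}

int main(void)
{
    /* v0 := 2 + 3, in stack code: push 2; push 3; add; store 0; halt */
    int code[] = { PUSH, 2, PUSH, 3, ADD, STORE, 0, HALT };
    interpret(code);                        /* prints: v0 = 5 */
    return 0;
}
```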
  • Interprocedural Type Specialization of JavaScript Programs Without Type Analysis
Interprocedural Type Specialization of JavaScript Programs Without Type Analysis. Maxime Chevalier-Boisvert (DIRO, Université de Montréal, Montreal, Quebec, Canada, [email protected]); Marc Feeley (DIRO, Université de Montréal, Montreal, Quebec, Canada, [email protected]).
Abstract: Previous work proposed lazy basic block versioning, a technique for just-in-time compilation of dynamic languages which we believe represents an interesting point in the design space. Basic block versioning is simple to implement, simple enough that a single developer can build a complete just-in-time compiler for JavaScript in a year, yet it performs surprisingly well as it propagates context-sensitive type information to generate type-specialized code on the fly. In this paper, we demonstrate that lazy basic block versioning can be extended in simple ways to propagate type information across function call boundaries. This gives some of the benefits of whole-program analysis, or a tracing compiler, without having to implement the machinery for either. We have implemented this proposal in the Higgs JavaScript virtual machine and report on the empirical evaluation of this system on a set of industry standard benchmarks. The approach eliminates 94.3% of dynamic type tests on average, which we show is more than what is achievable with any static whole-program type analysis.
1998 ACM Subject Classification: D.3.4 Programming Languages: Processors: compilers, optimization, code generation, run-time environments. Keywords and phrases: just-in-time compilation, dynamic language, optimization, object oriented, JavaScript. Digital Object Identifier: 10.4230/LIPIcs.ECOOP.2016.7
1 Introduction: A production compiler for a widely used dynamic language such as JavaScript is an intricate piece of software, usually the outcome of 10 to 100 developer-years of effort.
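The core of lazy basic block versioning can be suggested in a few lines of C: cache one code version per (block, incoming type) pair, generate it lazily on first entry, and dispatch straight to it afterwards, so the specialized body needs no dynamic type test. This is a loose illustration with hypothetical names; the Higgs VM generates machine code rather than picking precompiled functions as done here:

```c
#include <stdio.h>

typedef enum { T_INT, T_FLOAT, T_UNKNOWN } Tag;
typedef void (*BlockFn)(double);

/* Two precompiled stand-ins for type-specialized block versions. */
static void add_int_version(double x)   { printf("int path: %d\n", (int)x + 1); }
static void add_float_version(double x) { printf("float path: %f\n", x + 1.0); }

static BlockFn versions[2][3];          /* [block id][incoming tag] -> code */

static BlockFn generate_version(int block, Tag t)  /* stand-in for the JIT */
{
    (void)block;
    return t == T_INT ? add_int_version : add_float_version;
}

void enter_block(int block, Tag t, double x)
{
    if (!versions[block][t])            /* lazy: specialize on first entry */
        versions[block][t] = generate_version(block, t);
    versions[block][t](x);              /* specialized code, no type test  */
}

int main(void)
{
    enter_block(0, T_INT, 41);     /* generates and runs the int version  */
    enter_block(0, T_INT, 1);      /* reuses the cached int version       */
    enter_block(0, T_FLOAT, 1.5);  /* generates a separate float version  */
    return 0;
}
```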
  • Sulong - Execution of LLVM-Based Languages on the JVM
Position Paper: Sulong - Execution of LLVM-Based Languages on the JVM. Manuel Rigger, Matthias Grimmer, Hanspeter Mössenböck. Johannes Kepler University, Linz, Austria. [email protected], [email protected], [email protected].
Abstract: For the last decade, the Java Virtual Machine (JVM) has been a popular platform to host languages other than Java. Language implementation frameworks like Truffle allow the implementation of dynamic languages such as JavaScript or Ruby with competitive performance and completeness. However, statically typed languages are still rare under Truffle. We present Sulong, an LLVM IR interpreter that brings all LLVM-based languages, including C, C++, and Fortran, in one stroke to the JVM. Executing these languages on the JVM enables a wide area of future research, including high-performance interoperability between high-level and low-level languages, combination of static and dynamic optimizations, and a memory-safe execution of otherwise unsafe and unmanaged languages.
[Figure 1. Sulong interpreter: front ends (GCC with DragonEgg, Clang, and others) compile Fortran, C/C++, and more to LLVM IR, which the LLVM IR interpreter executes on Truffle and Graal in GraalVM.]
Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors - run-time environments, code generation, interpreters, compilers, optimization. Keywords: Sulong, LLVM, Truffle, dynamic compilation, static compilation.
… such as C++, D, Fortran, Objective-C, and Objective-C++. We present Sulong, an ecosystem that can execute and dynamically compile LLVM IR on a JVM, and can thus handle any language that can be compiled to LLVM IR.