Managed Runtime Technology: General Introduction
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
A Status Update of BEAMJIT, the Just-In-Time Compiling Abstract Machine
A Status Update of BEAMJIT, the Just-in-Time Compiling Abstract Machine. Frej Drejhammar and Lars Rasmusson <ffrej,[email protected]>, 140609.

Who am I? Senior researcher at the Swedish Institute of Computer Science (SICS) working on programming tools and distributed systems.

Acknowledgments: Project funded by Ericsson AB. Joint work with Lars Rasmusson <[email protected]>.

What this talk is about: An introduction to how BEAMJIT works and a detailed look at some subtle details of its implementation.

Outline: Background. BEAMJIT from 10000 m. BEAMJIT-aware optimization. Compiler-supported profiling. Future work. Questions.

Just-In-Time (JIT) compilation: Decide at runtime to compile "hot" parts to native code. A fairly common implementation technique: McCarthy's Lisp (1969), Python (Psyco, PyPy), Smalltalk (Cog), Java (HotSpot), JavaScript (SquirrelFish Extreme, SpiderMonkey, JägerMonkey, IonMonkey, V8).

Motivation: A JIT compiler increases flexibility. Tracing does not require switching to full emulation. Cross-module optimization. Compiled BEAM modules are platform independent: no need for cross compilation, binaries are not strongly coupled to a particular build of the emulator, and it integrates naturally with code upgrade.

Project goals: Do as little manual work as possible. Preserve the semantics of plain BEAM. Automatically stay in sync with plain BEAM, i.e. if bugs are fixed in the interpreter the JIT should not have to be modified manually. Have a native code generator which is state-of-the-art. Eventually be better than HiPE (steady-state).

Plan: Use automated tools to transform and extend the BEAM. Use an off-the-shelf optimizer and code generator. Implement a tracing JIT compiler.

BEAM: Specification & Implementation. BEAM is the name of the Erlang VM.
-
Clangjit: Enhancing C++ with Just-In-Time Compilation
ClangJIT: Enhancing C++ with Just-in-Time Compilation. Hal Finkel (Lead, Compiler Technology and Programming Languages, Leadership Computing Facility, Argonne National Laboratory, Lemont, IL, USA, [email protected]); David Poliakoff (Lawrence Livermore National Laboratory, Livermore, CA, USA, [email protected]); David F. Richards (Lawrence Livermore National Laboratory, Livermore, CA, USA, [email protected]).

ABSTRACT: The C++ programming language is not only a keystone of the high-performance-computing ecosystem but has proven to be a successful base for portable parallel-programming frameworks. As is well known, C++ programmers use templates to specialize algorithms, thus allowing the compiler to generate highly-efficient code for specific parameters, data structures, and so on. This capability has been limited to those specializations that can be identified when the application is compiled, and in many critical cases, compiling all potentially-relevant specializations is not practical. ClangJIT provides a well-integrated C++ language extension allowing template-based specialization to occur during program execution. [...] body of C++ code, but critically, defer the generation and optimization of template specializations until runtime using a relatively-natural extension to the core C++ programming language.

A significant design requirement for ClangJIT is that the runtime-compilation process not explicitly access the file system - only loading data from the running binary is permitted - which allows for deployment within environments where file-system access is either unavailable or prohibitively expensive. In addition, this requirement maintains the redistributability of the binaries using the JIT-compilation features (i.e., they can run on systems where the source code is unavailable). For example, on large HPC deployments, especially on supercomputers with distributed file systems, [...]
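The excerpt does not show the extension's syntax. As a purely illustrative sketch - the [[clang::jit]] attribute spelling and its use with a run-time template argument are assumptions based on the description above, not details quoted from the excerpt - deferred specialization might look like this:

```cpp
// Illustrative only: the attribute name and semantics are assumed, not quoted
// from the excerpt. A ClangJIT-enabled compiler would instantiate scale<n>
// at run time, when the value of n first becomes known.
#include <cstdio>
#include <cstdlib>

template <int N>
[[clang::jit]] int scale(int x) {
  return N * x;  // specialized for the concrete N supplied at run time
}

int main(int argc, char *argv[]) {
  int n = argc > 1 ? std::atoi(argv[1]) : 4;  // value known only at run time
  std::printf("%d\n", scale<n>(10));          // JIT-instantiated specialization
  return 0;
}
```
-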
Overview of LLVM Architecture of LLVM
Overview of LLVM.

Architecture of LLVM: Front-end: high-level programming language => LLVM IR. Optimizer: optimize/analyze/secure the program in the IR form. Back-end: LLVM IR => machine code.

Optimizer: The optimizer's job is to analyze, optimize, and secure programs. Optimizations are implemented as passes that traverse some portion of a program to either collect information or transform the program. A pass is an operation on a unit of IR code, and the pass is an important concept in LLVM.

LLVM IR: A low-level, strongly-typed, language-independent, SSA-based representation, tailored for static analyses and optimization purposes.

Part 1 has two kinds of passes: an analysis pass (Section 1), which only analyzes code statically, and a transformation pass (Sections 2 & 3), which inserts code into the program.

Analysis pass (Section 1): [diagram: Clang compiles test.c to LLVM IR (test.bc); opt loads mypass.so and runs the pass over test.bc, printing its report to stderr.]

Transformation pass (Sections 2 & 3): [diagram: Clang++ compiles test.cpp, main.cpp, and lib.cpp to LLVM IR (test.bc, main.bc, lib.bc); opt loads mypass.so and instruments test.bc into test-ins.bc, which is then linked with the other modules and the runtime library into an executable.]

Section 1 challenges: how to traverse the instructions in a function (http://releases.llvm.org/3.9.1/docs/ProgrammersManual.html#iterating-over-the-instruction-in-a-function), and how to print to stderr.

Section 2 & 3 challenges: 1. How to traverse basic blocks in a function and instructions in a basic block. 2. How to insert function calls to the runtime library: a. add the function signature to the symbol table of the module. [...]
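To make the traversal and printing challenges concrete, here is a minimal analysis-pass sketch written against the legacy pass interface (consistent with the LLVM 3.9-era manual linked above); the pass name and output are illustrative:

```cpp
// Minimal analysis pass: walks every basic block and instruction of each
// function and reports a count to stderr without modifying the IR.
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct CountPass : public FunctionPass {
  static char ID;
  CountPass() : FunctionPass(ID) {}

  // Called by opt once for every function in the module (e.g. test.bc).
  bool runOnFunction(Function &F) override {
    unsigned NumInsts = 0;
    for (BasicBlock &BB : F)        // traverse basic blocks in the function
      for (Instruction &I : BB) {   // traverse instructions in the basic block
        (void)I;
        ++NumInsts;
      }
    errs() << F.getName() << ": " << NumInsts << " instructions\n"; // stderr
    return false;                   // analysis only: the IR is not changed
  }
};
} // end anonymous namespace

char CountPass::ID = 0;
static RegisterPass<CountPass> X("countpass", "Count instructions per function");
```

Built into mypass.so, the pass would be invoked as `opt -load ./mypass.so -countpass -disable-output test.bc`, with the per-function counts appearing on stderr.
-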
Using JIT Compilation and Configurable Runtime Systems for Efficient Deployment of Java Programs on Ubiquitous Devices
Using JIT Compilation and Configurable Runtime Systems for Efficient Deployment of Java Programs on Ubiquitous Devices. Radu Teodorescu and Raju Pandey, Parallel and Distributed Computing Laboratory, Computer Science Department, University of California, Davis. {teodores,pandey}@cs.ucdavis.edu

Abstract. As the proliferation of ubiquitous devices moves computation away from the conventional desktop computer boundaries, distributed systems design is being exposed to new challenges. A distributed system supporting a ubiquitous computing application must deal with a wider spectrum of hardware architectures featuring structural and functional differences, and resource limitations. Due to its architecture-independent infrastructure and object-oriented programming model, the Java programming environment can support flexible solutions for addressing the diversity among these devices. Unfortunately, Java solutions are often associated with high costs in terms of resource consumption, which limits the range of devices that can benefit from this approach. In this paper, we present an architecture that deals with the cost and complexity of running Java programs by partitioning the process of Java program execution between system nodes and remote devices. The system nodes prepare a Java application for execution on a remote device by generating device-specific native code and an application-specific runtime system on the fly. The resulting infrastructure provides the flexibility of a high-level programming model and the architecture independence specific to Java. At the same time, the amount of resources consumed by an application on the targeted device is comparable to that of a native implementation.

1 Introduction. As a result of the proliferation of computation into the physical world, ubiquitous computing [1] has become an important research topic.
-
CDC Build System Guide
CDC Build System Guide. Java™ Platform, Micro Edition. Connected Device Configuration, Version 1.1.2. Foundation Profile, Version 1.1.2. Optimized Implementation. Sun Microsystems, Inc., www.sun.com, December 2008.

Copyright © 2008 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved. Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries. U.S. Government Rights - Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. This distribution may include materials developed by third parties. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. Sun, Sun Microsystems, the Sun logo, Java, Solaris and HotSpot are trademarks or registered trademarks of Sun Microsystems, Inc. or its subsidiaries in the United States and other countries. The Adobe logo is a registered trademark of Adobe Systems, Incorporated. Products covered by and information contained in this service manual are controlled by U.S. Export Control laws and may be subject to the export or import laws in other countries. Nuclear, missile, chemical biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited.
-
Compiling a Higher-Order Smart Contract Language to LLVM
Compiling a Higher-Order Smart Contract Language to LLVM. Vaivaswatha Nagaraj (Zilliqa Research, [email protected]), Jacob Johannsen (Zilliqa Research, [email protected]), Anton Trunov (Zilliqa Research, [email protected]), George Pîrlea (Zilliqa Research, [email protected]), Amrit Kumar (Zilliqa Research, [email protected]), Ilya Sergey (Yale-NUS College, National University of Singapore, [email protected]).

Abstract: Scilla is a higher-order polymorphic typed intermediate-level language for implementing smart contracts. In this talk, we describe a Scilla compiler targeting LLVM, with a focus on mapping Scilla types, values, and its functional language constructs to LLVM-IR. The compiled LLVM-IR, when executed with LLVM's JIT framework, achieves a speedup of about 10x over the reference interpreter on a typical Scilla contract. This reduced latency is crucial in the setting of blockchains, where smart contracts are executed as parts of transactions, to achieve peak transactions processed per second. Experiments on the Ackermann [...]

[Figure: the blockchain smart-contract module in C++ (BC) fetches and updates contract state variables and passes foo.scilla together with the incoming message to a JIT driver in C++ (JITD); the driver compiles foo.scilla to LLVM IR (foo.ll) and executes it against the Scilla run-time library in C++ (SRTL).]
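The abstract mentions executing the compiled IR with LLVM's JIT framework but does not show the driver. Below is a minimal driver sketch using LLVM's ORC LLJIT; it is not taken from the paper, the file and symbol names are placeholders, and the lookup API shown follows the JITEvaluatedSymbol-based interface of roughly LLVM 11-14 (it differs in newer releases).

```cpp
// Load a pre-compiled IR module (e.g. foo.ll) and run one of its functions
// through ORC LLJIT. File name and entry-point symbol are illustrative.
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;
using namespace llvm::orc;

int main() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  // Parse the IR produced by the ahead-of-time Scilla-to-LLVM step.
  auto Ctx = std::make_unique<LLVMContext>();
  SMDiagnostic Err;
  std::unique_ptr<Module> M = parseIRFile("foo.ll", Err, *Ctx);
  if (!M) { Err.print("jit-driver", errs()); return 1; }

  // Build the JIT and hand it the module.
  auto JIT = cantFail(LLJITBuilder().create());
  cantFail(JIT->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))));

  // Look up an exported entry point and call it (symbol name assumed).
  auto Sym = cantFail(JIT->lookup("contract_entry"));
  auto *Entry = reinterpret_cast<int (*)()>(Sym.getAddress());
  return Entry();
}
```
-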
Using ID TECH Universal SDK Library Files in a C++ Project
Using ID TECH Universal SDK Library Files in a C++ Project.

Introduction: From time to time, customers who wish to use ID TECH's Universal SDK for Windows (which is .NET-based and comes with C# code examples) ask if it is possible to do development against the SDK solely in C++ (on Windows). The answer is yes. Universal SDK library files (DLLs) are COM-visible and ready to be accessed from C++ code. (SDK runtimes require the .NET Common Language Runtime, but your C++ binaries can still use the SDK.) Note that while the example shown in this document involves Microsoft's Visual Studio, it is also possible to use SDK libraries in C++ projects created in Eclipse or other IDEs.

How to use the IDTechSDK.dll file in a C++ project:
1. Create a Visual C++ project in Visual Studio 2015 (shown below, an MFC Application as an example).
2. Change the properties of the Visual C++ project. Under the General tab, set Common Language Runtime Support under Target Platform to "Common Language Runtime Support (/clr)" under Windows.
3. Under VC++ Directories, add the path to the C# .dll file(s) to Reference Directories.
4. Under C/C++ General, set Common Language Runtime Support to "Common Language Runtime Support (/clr)".
5. Under C/C++ Preprocessor, add _AFXDLL to Preprocessor Definitions.
6. Under C/C++ Code Generation, change Runtime Library to "Multi-threaded DLL (/MD)".
7. Under Code Analysis General, change Rule Set to "Microsoft Mixed (C++ /CLR) Recommended Rules".
8. Use IDTechSDK.dll in your .cpp file. a. [...]
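Step 8 is cut off in the excerpt. As a hedged illustration only, a /clr source file that consumes the managed DLL could look like the following; the namespace member names are placeholders, not the SDK's actual API:

```cpp
// Compiled with /clr (C++/CLI). The #using directive resolves IDTechSDK.dll
// through the Reference Directories configured in step 3.
// "SdkEntryPoint" and "Version" are hypothetical stand-ins for real SDK
// members documented by ID TECH.
#using "IDTechSDK.dll"

using namespace System;

int main() {
  String^ version = IDTechSDK::SdkEntryPoint::Version();  // hypothetical call
  Console::WriteLine("SDK version: {0}", version);
  return 0;
}
```
-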
Using Ld the GNU Linker
Using ld, the GNU linker. ld version 2, January 1994. Steve Chamberlain, Cygnus Support. [email protected], [email protected]. Using LD, the GNU linker, edited by Jeffrey Osier (jeff[email protected]).

Copyright © 1991, 92, 93, 94, 95, 96, 97, 1998 Free Software Foundation, Inc. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.

Chapter 1: Overview. ld combines a number of object and archive files, relocates their data and ties up symbol references. Usually the last step in compiling a program is to run ld. ld accepts Linker Command Language files written in a superset of AT&T's Link Editor Command Language syntax, to provide explicit and total control over the linking process. This version of ld uses the general purpose BFD libraries to operate on object files. This allows ld to read, combine, and write object files in many different formats - for example, COFF or a.out. Different formats may be linked together to produce any available kind of object file. See Chapter 5 [BFD], page 47, for more information. Aside from its flexibility, the GNU linker is more helpful than other linkers in providing diagnostic information.
-
EXE: Automatically Generating Inputs of Death
EXE: Automatically Generating Inputs of Death. Cristian Cadar, Vijay Ganesh, Peter M. Pawlowski, David L. Dill, Dawson R. Engler. Computer Systems Laboratory, Stanford University, Stanford, CA 94305, U.S.A. {cristic, vganesh, piotrek, dill, engler}@cs.stanford.edu

ABSTRACT: This paper presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be "anything." As checked code runs, EXE tracks the constraints on each symbolic (i.e., input-derived) memory location. If a statement uses a symbolic value, EXE does not run it, but instead adds it as an input-constraint; all other statements run as usual. If code conditionally checks a symbolic expression, EXE forks execution, constraining the expression to be true on the true branch and false on the other. Because EXE reasons about all possible values on a path, it has much more power than a traditional runtime tool: (1) it can force execution down any feasible program path and [...]

1. INTRODUCTION: Attacker-exposed code is often a tangled mess of deeply-nested conditionals, labyrinthine call chains, huge amounts of code, and frequent, abusive use of casting and pointer operations. For safety, this code must exhaustively vet input received directly from potential attackers (such as system call parameters, network packets, even data from USB sticks). However, attempting to guard against all possible attacks adds significant code complexity and requires awareness of subtle issues such as arithmetic and buffer overflow conditions, which the historical record unequivocally shows programmers reason about poorly. Currently, programmers check for such errors using a combination of code review, manual and random testing, dynamic tools, and static analysis.
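As an illustration (not taken from the paper) of the behavior the abstract describes, consider the following attacker-facing routine; the comments note how a tool like EXE would treat it when buf and len come from symbolic input:

```cpp
// Hypothetical parsing code of the kind EXE targets, annotated with the
// constraints a symbolic-execution tool would collect along each path.
#include <cstdint>
#include <cstring>

int parse_packet(const uint8_t *buf, uint32_t len) {
  uint8_t out[16];

  // Symbolic branch: execution forks, with the constraint len < 4 on one
  // path and len >= 4 on the other; both paths keep being explored.
  if (len < 4)
    return -1;

  // buf[0] is input-derived, so count remains symbolic rather than concrete.
  uint32_t count = buf[0];

  // On the path where count > sizeof(out), a constraint solver can produce
  // a concrete input that demonstrates the buffer overflow in this memcpy.
  std::memcpy(out, buf + 4, count);

  return out[0];
}
```
-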
Performance Analyses and Code Transformations for MATLAB Applications Patryk Kiepas
Performance analyses and code transformations for MATLAB applications. Patryk Kiepas. To cite this version: Patryk Kiepas. Performance analyses and code transformations for MATLAB applications. Computation and Language [cs.CL]. Université Paris sciences et lettres, 2019. English. NNT: 2019PSLEM063. tel-02516727. HAL Id: tel-02516727, https://pastel.archives-ouvertes.fr/tel-02516727. Submitted on 24 Mar 2020.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Prepared at MINES ParisTech. Performance analyses and code transformations for MATLAB applications (Analyses de performances et transformations de code pour les applications MATLAB). Defended by Patryk Kiepas on 19 December 2019. Doctoral school no. 621: Engineering of Systems, Materials, Mechanics, Energetics. Specialty: real-time computing, robotics and automation.

Composition of the jury: Christine Eisenbeis (Research Director, Inria / Paris 11), president of the jury; João Manuel Paiva Cardoso (Professor, University of Porto), reviewer; Erven Rohou (Research Director, Inria Rennes), reviewer; Michel Barreteau (Research Engineer, THALES), examiner; François Giersch (Research Engineer, THALES), invited member; Claude Tadonki (Research Scientist, MINES ParisTech), thesis advisor; Corinne Ancourt (Senior Researcher, MINES ParisTech), thesis co-advisor; Jarosław Koźlak (Professor, AGH UST), thesis co-advisor.

Abstract: MATLAB is an interactive computing environment with an easy programming language and a vast library of built-in functions.
-
Meson Manual Sample.Pdf
Chapter 2: How compilation works.

Compiling source code into executables looks fairly simple on the surface but gets more and more complicated the lower down the stack you go. It is a testament to the design and hard work of toolchain developers that most developers don't need to worry about those issues during day-to-day coding. There are (at least) two reasons for learning how the system works behind the scenes. The first one is that learning new things is fun and interesting in itself. The second one is that having a grasp of the underlying system and its mechanics makes it easier to debug the issues that inevitably crop up as your projects get larger and more complex.

This chapter aims to outline how the compilation process works, starting from a single source file and ending with running the resulting executable. The information in this chapter is not necessary to be able to use Meson. Beginners may skip it if they so choose, but they are advised to come back and read it once they have more experience with the software build process. The treatise in this book is written from the perspective of a build system. Details of the process that are not relevant for this use have been simplified or omitted. Entire books could (and have been) written about subcomponents of the build process. Readers interested in going deeper are advised to look up more detailed reference works such as chapters 41 and 42 of [10].

2.1 Basic term definitions. Compile time: all operations that are done before the final executable or library is generated are said to happen during compile time.
-
An OpenCL Runtime System for a Heterogeneous Many-Core Virtual Platform
An OpenCL Runtime System for a Heterogeneous Many-Core Virtual Platform. Kuan-Chung Chen and Chung-Ho Chen, Inst. of Computer & Communication Engineering, National Cheng Kung University, Tainan, Taiwan, R.O.C. [email protected], [email protected]

Abstract: We present a many-core full system simulation platform and its OpenCL runtime system. The OpenCL runtime system includes an on-the-fly compiler and resource manager for the ARM-based many-core platform. Using this platform, we evaluate approaches of work-item scheduling and memory management in the OpenCL memory hierarchy. Our experimental results show that scheduling work-items on a many-core system using a general purpose RISC CPU should avoid per work-item context switching. Data deployment and work-item coalescing are the two keys for significant speedup.

Keywords: OpenCL; runtime system; work-item coalescing; full system simulation; heterogeneous integration.

I. INTRODUCTION: OpenCL is a data parallel programming model introduced for heterogeneous system architecture [1] which may include CPUs, GPUs, or other accelerator devices. [...] is executed on a Linux PC while the kernel code execution is simulated on an ESL many-core system. The rest of this paper consists of the following sections. In Section 2, we briefly introduce the OpenCL execution model and our heterogeneous full system simulation platform. Section 3 discusses the OpenCL runtime implementation in detail and its enhancements. Section 4 presents the evaluation system and results. We conclude this paper in Section 5.

II. OPENCL MODEL AND VIRTUAL PLATFORM: In this section, we first introduce the OpenCL programming model, followed by our SystemC-based virtual platform architecture.

A. OpenCL Platform Overview: Fig. 1 shows a conceptual OpenCL platform which has two execution components: a host processor and the compute devices.
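As a concrete reminder of this host/device split (a generic sketch, not code from the paper), a minimal OpenCL host program discovers a device, builds a kernel, and launches one work-item per data element:

```cpp
// Generic OpenCL 1.x host-side flow: platform/device discovery, context and
// queue creation, kernel build, buffer setup, NDRange launch, and readback.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
  cl_platform_id platform;
  cl_device_id device;
  clGetPlatformIDs(1, &platform, nullptr);
  clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

  cl_int err;
  cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
  cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

  // Kernel source: each work-item processes one element of the buffer.
  const char *src =
      "__kernel void scale(__global float *a) {"
      "  size_t i = get_global_id(0);"
      "  a[i] = a[i] * 2.0f;"
      "}";
  cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, &err);
  clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
  cl_kernel kernel = clCreateKernel(prog, "scale", &err);

  std::vector<float> data(1024, 1.0f);
  cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                              data.size() * sizeof(float), data.data(), &err);
  clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

  size_t global = data.size();  // one work-item per element
  clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr,
                         0, nullptr, nullptr);
  clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, data.size() * sizeof(float),
                      data.data(), 0, nullptr, nullptr);
  std::printf("data[0] = %f\n", data[0]);

  clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(prog);
  clReleaseCommandQueue(queue); clReleaseContext(ctx);
  return 0;
}
```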