Language Processors: Interpreter, Compiler, Bytecode Interpreter, Just-in-time Compiler

Total Pages: 16

File Type: PDF, Size: 1020 KB

Language Processors
COMS W4115, Prof. Stephen A. Edwards, Fall 2004, Columbia University, Department of Computer Science

Interpreter: Source Program + Input -> Interpreter -> Output.
Compiler: Source Program -> Compiler -> Executable Program; then Input -> Executable Program -> Output.
Bytecode Interpreter: Source Program -> Compiler -> Bytecode; then Bytecode + Input -> Bytecode Interpreter -> Output.
Just-in-time Compiler: Source Program -> Compiler -> Bytecode -> Just-in-time Compiler -> Machine Code; then Input -> Machine Code -> Output.

Language Speeds Compared (implementations grouped as native code, bytecodes, JIT, or threaded code; from http://www.bagley.org/~doug/shootout/): C (gcc), Ocaml (ocaml), SML (mlton), C++ (g++), SML (smlnj), Common Lisp (cmucl), Scheme (bigloo), Ocaml (ocamlb), Java (java), Pike (pike), Forth (gforth), Lua (lua), Python (python), Perl (perl), Ruby (ruby), Eiffel (se), Mercury (mercury), Awk (mawk), Haskell (ghc), Lisp (rep), Icon (icon), Tcl (tcl), Javascript (njs), Scheme (guile), Forth (bigforth), Erlang (erlang), Awk (gawk), Emacs Lisp (xemacs), Scheme (stalin), PHP (php), Bash (bash).

Separate Compilation: the C compiler cc turns foo.c and bar.c into foo.s and bar.s; the assembler as turns those into foo.o and bar.o; the archiver ar collects printf.o, fopen.o, malloc.o, ... into libc.a; and the linker ld combines foo.o, bar.o, and libc.a into the executable foo.

The C Preprocessor "massages" the input before the compiler sees it:
• Macro expansion
• File inclusion
• Conditional compilation

For example, cc -E example.c turns

    #include <stdio.h>

    #define min(x, y) \
      ((x)<(y))?(x):(y)

    #ifdef DEFINE_BAZ
    int baz();
    #endif

    void foo()
    {
      int a = 1;
      int b = 2;
      int c;
      c = min(a,b);
    }

into

    extern int printf(char*,...);
    /* ... many more declarations from stdio.h ... */

    void foo()
    {
      int a = 1;
      int b = 2;
      int c;
      c = ((a)<(b))?(a):(b);
    }

Compiling a Simple Program:

    int gcd(int a, int b)
    {
      while (a != b) {
        if (a > b) a -= b;
        else b -= a;
      }
      return a;
    }

What the Compiler Sees: the text file is a sequence of characters:

    i n t sp g c d ( i n t sp a , sp i n t sp b ) nl { nl sp sp w h i l e sp
    ( a sp ! = sp b ) sp { nl sp sp sp sp i f sp ( a sp > sp b ) sp a sp - =
    sp b ; nl sp sp sp sp e l s e sp b sp - = sp a ; nl sp sp } nl sp sp
    r e t u r n sp a ; nl } nl

Lexical Analysis Gives Tokens: a stream of tokens, with whitespace and comments removed:

    int gcd ( int a , int b ) { while ( a != b ) { if ( a > b ) a -= b ;
    else b -= a ; } return a ; }

Parsing Gives an AST: an abstract syntax tree built from the parsing rules. For gcd, the function node has arguments int a and int b and a body that is a sequence: a while loop whose condition is a != b and whose body is the if statement (if a > b then a -= b else b -= a), followed by return a.

Semantic Analysis Resolves Symbols: types are checked and references are resolved against the symbol table (int a, int b).

Translation into 3-Address Code: an idealized assembly language with infinite registers:

    L0: sne $1, a, b
        seq $0, $1, 0
        btrue $0, L1    % while (a != b)
        sl $3, b, a
        seq $2, $3, 0
        btrue $2, L4    % if (a < b)
        sub a, a, b     % a -= b
        jmp L5
    L4: sub b, b, a     % b -= a
    L5: jmp L0
    L1: ret a

Generation of 80386 Assembly:

    gcd: pushl %ebp            % Save FP
         movl %esp,%ebp
         movl 8(%ebp),%eax     % Load a from stack
         movl 12(%ebp),%edx    % Load b from stack
    .L8: cmpl %edx,%eax
         je .L3                % while (a != b)
         jle .L5               % if (a < b)
         subl %edx,%eax        % a -= b
         jmp .L8
    .L5: subl %eax,%edx        % b -= a
         jmp .L8
    .L3: leave                 % Restore SP, BP
         ret
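To make the lexical-analysis step concrete, here is a minimal hand-written scanner in C (a sketch, not the course's code) that turns the gcd source above into the token stream shown, discarding whitespace along the way:

    /* Minimal lexer sketch for the gcd example (illustrative only).
       It recognizes identifiers/keywords, the two-character operators
       != and -=, and single-character punctuation; numbers and comments
       are omitted because the input contains none. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    static const char *src =
        "int gcd(int a, int b)\n"
        "{\n"
        "  while (a != b) {\n"
        "    if (a > b) a -= b;\n"
        "    else b -= a;\n"
        "  }\n"
        "  return a;\n"
        "}\n";

    int main(void)
    {
        const char *p = src;
        while (*p) {
            if (isspace((unsigned char)*p)) {          /* whitespace: discarded */
                p++;
            } else if (isalpha((unsigned char)*p)) {   /* keyword or identifier */
                const char *start = p;
                while (isalnum((unsigned char)*p) || *p == '_')
                    p++;
                printf("%.*s ", (int)(p - start), start);
            } else if (!strncmp(p, "!=", 2) || !strncmp(p, "-=", 2)) {
                printf("%.2s ", p);                    /* two-character operator */
                p += 2;
            } else {                                   /* single-character token */
                printf("%c ", *p++);
            }
        }
        putchar('\n');
        return 0;
    }

Running it prints the same stream the slide shows: int gcd ( int a , int b ) { while ( a != b ) { ... } return a ; }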
Recommended publications
  • The LLVM Instruction Set and Compilation Strategy
The LLVM Instruction Set and Compilation Strategy
Chris Lattner, Vikram Adve, University of Illinois at Urbana-Champaign, {lattner,vadve}@cs.uiuc.edu
Abstract: This document introduces the LLVM compiler infrastructure and instruction set, a simple approach that enables sophisticated code transformations at link time, runtime, and in the field. It is a pragmatic approach to compilation, interfering with programmers and tools as little as possible, while still retaining extensive high-level information from source-level compilers for later stages of an application's lifetime. We describe the LLVM instruction set, the design of the LLVM system, and some of its key components.
1 Introduction: Modern programming languages and software practices aim to support more reliable, flexible, and powerful software applications, increase programmer productivity, and provide higher level semantic information to the compiler. Unfortunately, traditional approaches to compilation either fail to extract sufficient performance from the program (by not using interprocedural analysis or profile information) or interfere with the build process substantially (by requiring build scripts to be modified for either profiling or interprocedural optimization). Furthermore, they do not support optimization either at runtime or after an application has been installed at an end-user's site, when the most relevant information about actual usage patterns would be available. The LLVM Compilation Strategy is designed to enable effective multi-stage optimization (at compile-time, link-time, runtime, and offline) and more effective profile-driven optimization, and to do so without changes to the traditional build process or programmer intervention. LLVM (Low Level Virtual Machine) is a compilation strategy that uses a low-level virtual instruction set with rich type information as a common code representation for all phases of compilation.
  • About ILE C/C++ Compiler Reference
IBM i 7.3 Programming. IBM Rational Development Studio for i, ILE C/C++ Compiler Reference. IBM SC09-4816-07.
Note: Before using this information and the product it supports, read the information in "Notices" on page 121. This edition applies to IBM® Rational® Development Studio for i (product number 5770-WDS) and to all subsequent releases and modifications until otherwise indicated in new editions. This version does not run on all reduced instruction set computer (RISC) models nor does it run on CISC models. This document may contain references to Licensed Internal Code. Licensed Internal Code is Machine Code and is licensed to you under the terms of the IBM License Agreement for Machine Code. © Copyright International Business Machines Corporation 1993, 2015. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents: ILE C/C++ Compiler Reference; What is new for IBM i 7.3; PDF file for ILE C/C++ Compiler Reference; About ILE C/C++ Compiler Reference; Prerequisite and Related Information
  • Compiler Error Messages Considered Unhelpful: the Landscape of Text-Based Programming Error Message Research
Working Group Report, ITiCSE-WGR '19, July 15–17, 2019, Aberdeen, Scotland, UK
Compiler Error Messages Considered Unhelpful: The Landscape of Text-Based Programming Error Message Research
Brett A. Becker∗ (University College Dublin, Dublin, Ireland), Paul Denny∗ (University of Auckland, Auckland, New Zealand), Raymond Pettit∗ (University of Virginia, Charlottesville, Virginia, USA), Durell Bouchard (Roanoke College, Roanoke, Virginia, USA), Dennis J. Bouvier (Southern Illinois University Edwardsville, Edwardsville, Illinois, USA), Brian Harrington (University of Toronto Scarborough, Scarborough, Ontario, Canada), Amir Kamil (University of Michigan, Ann Arbor, Michigan, USA), Amey Karkare (Indian Institute of Technology Kanpur, Kanpur, India), Chris McDonald (University of Western Australia, Perth, Australia), Peter-Michael Osera (Grinnell College, Grinnell, Iowa, USA), Janice L. Pearce (Berea College, Berea, Kentucky, USA), James Prather (Abilene Christian University, Abilene, Texas, USA)
ABSTRACT: Diagnostic messages generated by compilers and interpreters such as syntax error messages have been researched for over half of a century. Unfortunately, these messages which include error, warning, and run-time messages, present substantial difficulty and could [...] of evidence supporting each one (historical, anecdotal, and empirical). This work can serve as a starting point for those who wish to conduct research on compiler error messages, runtime errors, and warnings. We also make the bibtex file of our 300+ reference corpus publicly available.
  • ROSE Tutorial: a Tool for Building Source-To-Source Translators Draft Tutorial (Version 0.9.11.115)
ROSE Tutorial: A Tool for Building Source-to-Source Translators, Draft Tutorial (version 0.9.11.115)
Daniel Quinlan, Markus Schordan, Richard Vuduc, Qing Yi, Thomas Panas, Chunhua Liao, and Jeremiah J. Willcock
Lawrence Livermore National Laboratory, Livermore, CA 94550; 925-423-2668 (office), 925-422-6278 (fax)
Project Web Page: www.rosecompiler.org
UCRL Number for ROSE User Manual: UCRL-SM-210137-DRAFT. UCRL Number for ROSE Tutorial: UCRL-SM-210032-DRAFT. UCRL Number for ROSE Source Code: UCRL-CODE-155962.
ROSE User Manual (pdf), ROSE Tutorial (pdf), ROSE HTML Reference (html only). September 12, 2019.
Contents: 1 Introduction (1.1 What is ROSE; 1.2 Why you should be interested in ROSE; 1.3 Problems that ROSE can address; 1.4 Examples in this ROSE Tutorial; 1.5 ROSE Documentation and Where To Find It; 1.6 Using the Tutorial; 1.7 Required Makefile for Tutorial Examples); I Working with the ROSE AST: 2 Identity Translator; 3 Simple AST Graph Generator; 4 AST Whole Graph Generator; 5 Advanced AST Graph Generation; 6 AST PDF Generator; 7 Introduction to AST Traversals (7.1 Input For Example Traversals; 7.2 Traversals of the AST Structure: 7.2.1 Classic Object-Oriented Visitor Pattern for the AST; 7.2.2 Simple Traversal (no attributes); 7.2.3 Simple Pre- and Postorder Traversal)
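ROSE itself is a C++ library, but the pre- and postorder traversals listed in the tutorial's table of contents are easy to illustrate with a generic sketch. The toy AST type below is hypothetical (it is not ROSE's Sage node classes); it only shows the visit order the two traversal styles produce.

    /* Generic pre/postorder AST traversal sketch (not ROSE's API). */
    #include <stdio.h>

    struct Node {
        const char *label;
        struct Node *left, *right;       /* NULL for leaves */
    };

    static void preorder(const struct Node *n)   /* visit node, then children */
    {
        if (!n) return;
        printf("%s ", n->label);
        preorder(n->left);
        preorder(n->right);
    }

    static void postorder(const struct Node *n)  /* visit children, then node */
    {
        if (!n) return;
        postorder(n->left);
        postorder(n->right);
        printf("%s ", n->label);
    }

    int main(void)
    {
        /* AST for the expression a - b * c */
        struct Node a = {"a", NULL, NULL}, b = {"b", NULL, NULL}, c = {"c", NULL, NULL};
        struct Node mul = {"*", &b, &c};
        struct Node sub = {"-", &a, &mul};

        preorder(&sub);  putchar('\n');   /* prints: - a * b c */
        postorder(&sub); putchar('\n');   /* prints: a b c * - */
        return 0;
    }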
  • Toward IFVM Virtual Machine: a Model Driven IFML Interpretation
Toward IFVM Virtual Machine: A Model Driven IFML Interpretation
Sara Gotti and Samir Mbarki, MISC Laboratory, Faculty of Sciences, Ibn Tofail University, BP 133, Kenitra, Morocco
Keywords: Interaction Flow Modelling Language IFML, Model Execution, Unified Modeling Language (UML), IFML Execution, Model Driven Architecture MDA, Bytecode, Virtual Machine, Model Interpretation, Model Compilation, Platform Independent Model PIM, User Interfaces, Front End.
Abstract: UML is the first international modeling language standardized since 1997. It aims at providing a standard way to visualize the design of a system, but it can't model the complex design of user interfaces and interactions. However, according to MDA approach, it is necessary to apply the concept of abstract models to user interfaces too. IFML is the OMG adopted (in March 2013) standard Interaction Flow Modeling Language designed for abstractly expressing the content, user interaction and control behaviour of the software applications front-end. IFML is a platform independent language, it has been designed with an executable semantic and it can be mapped easily into executable applications for various platforms and devices. In this article we present an approach to execute the IFML. We introduce a IFVM virtual machine which translate the IFML models into bytecode that will be interpreted by the java virtual machine.
1 INTRODUCTION: The software development has been affected by the apparition of the MDA (OMG, 2015) approach. The trend of the 21st century (BRAMBILLA et al., 2014) which has allowed developers to build their [...] a fundamental standard fUML (OMG, 2011), which is a subset of UML that contains the most relevant part of class diagrams for modeling the data structure and activity diagrams to specify system behavior; it contains all UML elements that are helpful for the execution of the models.
  • Superoptimization of WebAssembly Bytecode
Superoptimization of WebAssembly Bytecode
Javier Cabrera Arteaga, Shrinish Donde, Jian Gu, Orestis Floros, Lucas Satabin, Benoit Baudry, Martin Monperrus
ABSTRACT: Motivated by the fast adoption of WebAssembly, we propose the first functional pipeline to support the superoptimization of WebAssembly bytecode. Our pipeline works over LLVM and Souper. We evaluate our superoptimization pipeline with 12 programs from the Rosetta code project. Our pipeline improves the code section size of 8 out of 12 programs. We discuss the challenges faced in superoptimization of WebAssembly with two case studies.
1 INTRODUCTION: After HTML, CSS, and JavaScript, WebAssembly (WASM) has become the fourth standard language for web development [7]. This new language has been designed to be fast, platform-independent, and experiments have shown that WebAssembly can have an overhead as low as 10% compared to native code [11].
2 BACKGROUND, 2.1 WebAssembly: WebAssembly is a binary instruction format for a stack-based virtual machine [17]. As described in the WebAssembly Core Specification [7], WebAssembly is a portable, low-level code format designed for efficient execution and compact representation. WebAssembly has been first announced publicly in 2015. Since 2017, it has been implemented by four major web browsers (Chrome, Edge, Firefox, and Safari). A paper by Haas et al. [11] formalizes the language and its type system, and explains the design rationale. The main goal of WebAssembly is to enable high performance applications on the web. WebAssembly can run as a standalone VM or in other environments such as Arduino [10]. It is independent of any specific hardware or languages and can be compiled for modern architectures or devices, from a wide variety of high-level
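The "stack-based virtual machine" execution model described in the background section can be illustrated with a toy evaluator in C. The opcodes below are invented for the illustration; they are not actual WebAssembly instructions.

    /* Toy stack-machine evaluator: illustrates how a stack-based VM such as
       WebAssembly executes code. The instruction set here is made up. */
    #include <stdio.h>

    enum op { PUSH, ADD, MUL, HALT };

    struct insn { enum op op; int arg; };

    static int run(const struct insn *code)
    {
        int stack[64];
        int sp = 0;                              /* index of the next free slot */
        for (int pc = 0; ; pc++) {
            switch (code[pc].op) {
            case PUSH: stack[sp++] = code[pc].arg; break;
            case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
            case HALT: return stack[sp - 1];
            }
        }
    }

    int main(void)
    {
        /* Computes (2 + 3) * 4 = 20 entirely on the operand stack. */
        const struct insn prog[] = {
            {PUSH, 2}, {PUSH, 3}, {ADD, 0}, {PUSH, 4}, {MUL, 0}, {HALT, 0}
        };
        printf("%d\n", run(prog));
        return 0;
    }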
  • Basic Compiler Algorithms for Parallel Programs *
Basic Compiler Algorithms for Parallel Programs *
Jaejin Lee and David A. Padua, Department of Computer Science, University of Illinois, Urbana, IL 61801, {j-lee44,padua}@cs.uiuc.edu
Samuel P. Midkiff, IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598
Abstract: Traditional compiler techniques developed for sequential programs do not guarantee the correctness (sequential consistency) of compiler transformations when applied to parallel programs. This is because traditional compilers for sequential programs do not account for the updates to a shared variable by different threads. We present a concurrent static single assignment (CSSA) form for parallel programs containing cobegin/coend and parallel do constructs and post/wait synchronization primitives. Based on the CSSA form, we present copy propagation and dead code elimination techniques. Also, a global value numbering technique that detects equivalent variables in parallel programs is presented. By using global value numbering and the CSSA form, we extend classical common subex-
1 Introduction: Under the shared memory parallel programming model, all threads in a job can access a global address space. Communication between threads is via reads and writes of shared variables rather than explicit communication operations. Processors may access a shared variable concurrently without any fixed ordering of accesses, which leads to data races [5,9] and non-deterministic behavior. Data races and synchronization make it impossible to apply classical compiler optimization and analysis techniques directly to parallel programs because the classical methods do not account for updates to shared variables in threads other than the one being analyzed. Classical optimizations may change the meaning of programs when they are applied to shared memory parallel programs [23].
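The data races the introduction describes are easy to reproduce. The C/pthreads sketch below is my own illustration (the paper itself uses cobegin/coend and post/wait constructs): two threads update a shared counter with no fixed ordering of accesses, so the result varies from run to run, which is exactly what breaks analyses that assume a single thread of control.

    /* Data-race illustration (not from the paper): two unsynchronized threads
       increment a shared counter; lost updates make the final value
       non-deterministic. Build with: cc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static int shared = 0;                 /* shared, unsynchronized variable */

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            shared++;                      /* racy read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* rarely the expected 2000000 */
        return 0;
    }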
  • A Compiler-Compiler for DSL Embedding
A Compiler-Compiler for DSL Embedding
Amir Shaikhha (EPFL, Switzerland, {amir.shaikhha}@epfl.ch), Vojin Jovanovic (Oracle Labs, {vojin.jovanovic}@oracle.com), Christoph Koch (EPFL, Switzerland, {christoph.koch}@epfl.ch)
Abstract: In this paper, we present a framework to generate compilers for embedded domain-specific languages (EDSLs). This framework provides facilities to automatically generate the boilerplate code required for building DSL compilers on top of extensible optimizing compilers. We evaluate the practicality of our framework by demonstrating several use-cases successfully built with it.
CCS Concepts: • Software and its engineering → Software performance; Compilers;
Keywords: Domain-Specific Languages, Compiler-Compiler, Language Embedding
1 Introduction: "Everything that happens once can never happen again." [...] (EDSLs) [14] in the Scala programming language. DSL developers define a DSL as a normal library in Scala. This plain Scala implementation can be used for debugging purposes without worrying about the performance aspects (handled separately by the DSL compiler). Alchemy provides a customizable set of annotations for encoding the domain knowledge in the optimizing compilation frameworks. A DSL developer annotates the DSL library, from which Alchemy generates a DSL compiler that is built on top of an extensible optimizing compiler. As opposed to the existing compiler-compilers and language workbenches, Alchemy does not need a new meta-language for defining a DSL; instead, Alchemy uses the reflection capabilities of Scala to treat the plain Scala code of the DSL library as the language specification. A compiler expert can customize the behavior of the predefined set of annotations based on the features provided by
  • Coqjvm: an Executable Specification of the Java Virtual Machine Using
CoqJVM: An Executable Specification of the Java Virtual Machine using Dependent Types
Robert Atkey, LFCS, School of Informatics, University of Edinburgh, Mayfield Rd, Edinburgh EH9 3JZ, UK [email protected]
Abstract. We describe an executable specification of the Java Virtual Machine (JVM) within the Coq proof assistant. The principal features of the development are that it is executable, meaning that it can be tested against a real JVM to gain confidence in the correctness of the specification; and that it has been written with heavy use of dependent types, this is both to structure the model in a useful way, and to constrain the model to prevent spurious partiality. We describe the structure of the formalisation and the way in which we have used dependent types.
1 Introduction: Large scale formalisations of programming languages and systems in mechanised theorem provers have recently become popular [4–6, 9]. In this paper, we describe a formalisation of the Java virtual machine (JVM) [8] in the Coq proof assistant [11]. The principal features of this formalisation are that it is executable, meaning that a purely functional JVM can be extracted from the Coq development and – with some O'Caml glue code – executed on real Java bytecode output from the Java compiler; and that it is structured using dependent types. The motivation for this development is to act as a basis for certified consumer-side Proof-Carrying Code (PCC) [12]. We aim to prove the soundness of program logics and correctness of proof checkers against the model, and extract the proof checkers to produce certified stand-alone tools.
  • Program Dynamic Analysis Overview
Program Dynamic Analysis
Overview
• Dynamic Analysis
• JVM & Java Bytecode [2]
• A Java bytecode engineering library: ASM [1]
What is dynamic analysis? [3]
• The investigation of the properties of a running software system over one or more executions
Has anyone done dynamic analysis? [3]
• Loggers
• Debuggers
• Profilers
• …
Why dynamic analysis? [3]
• Gap between run-time structure and code structure in OO programs
"Trying to understand one [structure] from the other is like trying to understand the dynamism of living ecosystems from the static taxonomy of plants and animals, and vice-versa." -- Erich Gamma et al., Design Patterns
Why dynamic analysis?
• Collect runtime execution information – Resource usage, execution profiles
• Program comprehension – Find bugs in applications, identify hotspots
• Program transformation – Optimize or obfuscate programs – Insert debugging or monitoring code – Modify program behaviors on the fly
How to do dynamic analysis?
• Instrumentation – Modify code or runtime to monitor specific components in a system and collect data – Instrumentation approaches: • Source code modification • Byte code modification • VM modification
• Data analysis
A Running Example
• Method call instrumentation – Given a program's source code, how do you modify the code to record which method is called by main() in what order?

    public class Test {
      public static void main(String[] args) {
        if (args.length == 0) return;
        if (args.length % 2 == 0) printEven();
        else printOdd();
      }
      public
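The Java snippet in the running example is cut off here, so as a stand-in the sketch below shows the same idea in C: record which functions main() calls, and in what order, by inserting a trace call at the top of each function, which is the kind of edit that source-code-modification instrumentation automates. The helper names are hypothetical, not from the slides.

    /* Source-level instrumentation sketch (my own C analog of the slide's Java
       example): each function logs its own name on entry, so running the
       program prints the call order observed from main(). */
    #include <stdio.h>

    static void trace(const char *name) { printf("called %s\n", name); }

    static void printEven(void) { trace("printEven"); puts("even"); }
    static void printOdd(void)  { trace("printOdd");  puts("odd");  }

    int main(int argc, char **argv)
    {
        (void)argv;
        trace("main");
        int nargs = argc - 1;              /* number of command-line arguments */
        if (nargs == 0) return 0;
        if (nargs % 2 == 0) printEven();
        else                printOdd();
        return 0;
    }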
  • Wind River Diab Compiler
WIND RIVER DIAB COMPILER
Boost application performance, reduce memory footprint, and produce high-quality, standards-compliant object code for embedded systems with Wind River® Diab Compiler. Wind River has a long history of providing software and tools for safety-critical applications requiring certification in the automotive, medical, avionics, and industrial markets. And it's backed by an award-winning global support organization that draws on more than 25 years of compiler experience and hundreds of millions of successfully deployed devices.
TOOLCHAIN COMPONENTS. Diab Compiler includes the following programs and utilities:
• Driver: Intelligent wrapper program invoking the compiler, assembler, and linker, using a single application
• Assembler: Macro assembler invoked automatically by the driver program or as a complete standalone assembler generating object modules – Conditional macro assembler with more than 30 directives – Unlimited number of symbols – Debug information for source-level debugging of assembly programs
• Compiler: ANSI/ISO C/C++ compatible cross-compiler – EDG front end – Support for ANSI C89, C99, and C++ 2003 – Hundreds of customizable optimizations for performance and size – Processor architecture–specific optimizations – Whole-program optimization capability
• Low-level virtual machine (LLVM)–based technology: Member of the LLVM community, accelerating inclusion of new innovative compiler features and leveraging the LLVM framework to allow easy inclusion of compiler add-ons to benefit customers
• Linker: Precise
  • Compilers (Wikipedia): History of Compiler Construction
Compilers, History (Wikipedia). Main article: History of compiler construction. Software for early computers was primarily written in assembly language for many years. Higher level programming languages were not invented until the benefits of being able to reuse software on different kinds of CPUs started to become significantly greater than the cost of writing a compiler. The very limited memory capacity of early computers also created many technical problems when implementing a compiler. Towards the end of the 1950s, machine-independent programming languages were first proposed. Subsequently, several experimental compilers were developed. The first compiler was written by Grace Hopper, in 1952, for the A-0 programming language. The FORTRAN team led by John Backus at IBM is generally credited as having introduced the first complete compiler in 1957. COBOL was an early language to be compiled on multiple architectures, in 1960. In many application domains the idea of using a higher level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers have become more and more complex. Early compilers were written in assembly language. The first self-hosting compiler — capable of compiling its own source code in a high-level language — was created for Lisp by Tim Hart and Mike Levin at MIT in 1962. Since the 1970s it has become common practice to implement a compiler in the language it compiles, although both Pascal and C have been popular choices for implementation language. Building a self-hosting compiler is a bootstrapping problem—the first such compiler for a language must be compiled either by hand or by a compiler written in a different language, or (as in Hart and Levin's Lisp compiler) compiled by running the compiler in an interpreter.