Compiling Scheme to JVM Bytecode: a Performance Study


Bernard Paul Serpette, Inria Sophia-Antipolis, 2004 route des Lucioles – B.P. 93, F-06902 Sophia-Antipolis, Cedex, [email protected], http://www.inria.fr/oasis/Bernard.Serpette
Manuel Serrano, Inria Sophia-Antipolis, 2004 route des Lucioles – B.P. 93, F-06902 Sophia-Antipolis, Cedex, [email protected], http://www.inria.fr/mimosa/Manuel.Serrano

ABSTRACT

We have added a Java virtual machine (henceforth JVM) bytecode generator to the optimizing Scheme-to-C compiler Bigloo. We named this new compiler BiglooJVM. We have used this new compiler to evaluate how suitable the JVM bytecode is as a target for compiling strict functional languages such as Scheme. In this paper, we focus on the performance issue. We have measured the execution time of many Scheme programs when compiled to C and when compiled to JVM. We found that for each benchmark, at least one of our hardware platforms ran the BiglooJVM version in less than twice the time taken by the Bigloo version. In order to deliver fast programs, the generated JVM bytecode must be carefully crafted in order to benefit from the speedup of just-in-time compilers.

Categories and Subject Descriptors

D.3.1 [Programming Languages]: Language Classifications—applicative (functional) languages; D.3.4 [Programming Languages]: Processors—compilers; I.1.3 [Symbolic and Algebraic Manipulation]: Languages and Systems—evaluation strategies

General Terms

Languages, Experimentation, Measurement, Performance

Keywords

Functional languages, Scheme, Compilation, Java Virtual Machine

1. INTRODUCTION

Many implementors of high-level languages have already ported their compilers and interpreters to the Java Virtual Machine [21]. According to Tolksdorf's web page [34], there are more than 130 compilers targeting JVM bytecode, for all kinds of languages. In the case of functional languages (henceforth FLs), several papers describing such systems have been published (Haskell [36], Scheme [7, 23, 9], and SML [5]).

JVM bytecode is an appealing target because:

• It is highly portable. Producing JVM bytecode, it is possible to "compile once and run everywhere". For instance, the BiglooJVM Scheme-to-JVM compiler that is presented in this paper is implemented on Unix but the very same bootstrapped version also runs on Microsoft Windows.

• The standard runtime environment for Java contains a large set of libraries and features: widget libraries, database access, network connection, sound libraries, etc. A compiler which compiles a language to JVM bytecode could make these fancy features available to programs written in that language.

• A lot of programming environments and tools are designed for the JVM. Some have no counterpart for other languages (especially the ones that rely on the high-level features of Java such as its automatic memory management). Producing JVM bytecode allows these tools to be reused. For instance, we have already customized the Jinsight system [8] for profiling Scheme compiled programs.

• The JVM is designed for hosting high-level, object-oriented languages. It provides constructions and facilities such as dynamic type testing, garbage collection and subtype polymorphism. These runtime features are also frequently required for FLs, as the sketch below illustrates.
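The last point can be made concrete with a small, purely illustrative Java fragment. It is not taken from Bigloo or its runtime; the names (SchemeRuntime, Pair, isPair, isFixnum) and the choice of boxing fixnums as java.lang.Integer are assumptions made here only to show how a functional language's dynamic type tests map directly onto the JVM's built-in instanceof facility.

    // Hypothetical sketch, not Bigloo's actual runtime: Scheme values are
    // handled as java.lang.Object, and type predicates such as pair? or
    // fixnum? compile down to the JVM's own dynamic type tests.
    public final class SchemeRuntime {
        // A cons cell; car and cdr may hold any Scheme value.
        public static final class Pair {
            public Object car, cdr;
            public Pair(Object car, Object cdr) { this.car = car; this.cdr = cdr; }
        }

        // (pair? x)
        public static boolean isPair(Object x) { return x instanceof Pair; }

        // (fixnum? x), assuming fixnums are boxed as java.lang.Integer.
        public static boolean isFixnum(Object x) { return x instanceof Integer; }
    }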
1.1 Performance of JVM executions

In spite of the previously quoted advantages, compiling to JVM (or to Java) raises an important issue: what about performance? Is it possible to deliver fast applications using a FL-to-JVM compiler? Current Java implementations are known not to be very fast and, in addition, because the JVM is designed and tuned for Java, one might wonder whether it is possible to implement a correct mapping from a FL to JVM without an important performance penalty. The aim of this paper is to provide answers to these questions.

We have added a new code generator to the existing optimizing Scheme-to-C compiler Bigloo. We call this new compiler BiglooJVM. Because the only difference between Bigloo and BiglooJVM is the runtime system, BiglooJVM is a precise tool to evaluate JVM performance. We have used it as a test-bed for benchmarking. We have compared the performance of a wide range of Scheme programs when compiled to native code via C and when compiled to JVM. We have used 21 Scheme benchmarks, implemented by different authors, in different programming styles, ranging from 18 lines long programs to 100,000 lines long programs (see Appendix A for a short description). We have tested these programs on three architectures (Linux/x86, Solaris/Sparc and Digital Unix/Alpha). We have found the ratios BiglooJVM/Bigloo quite similar on Linux/x86 and Sparc. This is not surprising as both configurations use the same JVM (HotSpot™ 2.0). We have thus decided not to present the Sparc results in this paper.

It would be pointless to measure the performance penalty imposed by the JVM if our compiler is a weak one. In order to "validate" the BiglooJVM compiler we have compared it to other compilers. In a first step, we have compared it to two other Scheme compilers: i) Kawa [7], because it is the only other Scheme compiler that produces JVM code, and ii) Gambit, because it is a popular, portable and largely used Scheme-to-C compiler. In a second step, we have compared BiglooJVM to other compilers producing JVM code: iii) MLj [5] and iv) Sun's Java compiler. We present these comparisons in Section 2.

1.2 The Bigloo and BiglooJVM compilers

Bigloo is an optimizing compiler for the Scheme programming language [18]. It compiles modules into C files. Its runtime library uses the Boehm and Weiser garbage collector [6]. Each module consists in a sequence of Scheme definitions and top level expressions. After reading and expanding a source file, Bigloo builds an abstract syntax tree (AST) that passes from stage to stage. Many stages implement high-level optimizations, that is, optimizations that are hardware independent, such as source-to-source transformations [27] or data flow reductions [29]. Other stages implement simplifications of the AST, such as the one that compiles Scheme closures [25, 30, 26]. Another stage handles polymorphism [20, 28]. That is, for each variable of the source code, it selects a type that can be polymorphic, i.e., the type denoting any possible Scheme value, or monomorphic, which can denote the type of primitive hardware values, such as integers or floating point double precision. At the C code generation point, the AST is entirely type-annotated and closures are allocated as data structures. Producing the C code requires just one additional transformation: the expressions of the AST are turned into C statements. In BiglooJVM, the bytecode generator takes the place of this C statement generator (Figure 1).

[Figure 1: The architecture of the Bigloo compiler. A Scheme module goes through the parser and the optimizer; the C generator then produces .o files linked with the C runtime into a.out, while the JVM generator produces .class files packaged with the JVM runtime into a.jar.]
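The excerpt says that closures are "allocated as data structures" before reaching the back-ends, but it does not show what that representation looks like on the JVM. As a rough, hypothetical illustration only, with invented names (Procedure, AddN, apply) that are not Bigloo's actual encoding, a Scheme closure such as (lambda (x) (+ x n)) can be turned into an object whose fields hold the captured free variables and whose apply method holds the body:

    // Hypothetical sketch of a closure compiled to a JVM object; Bigloo's
    // real representation may differ from this.
    abstract class Procedure {
        abstract Object apply(Object arg);
    }

    // Compiled form of (lambda (x) (+ x n)): the free variable n is stored
    // in a field when the closure is allocated.
    final class AddN extends Procedure {
        private final int n;

        AddN(int n) { this.n = n; }

        @Override
        Object apply(Object arg) {
            return Integer.valueOf(((Integer) arg).intValue() + n);
        }
    }

Under such an encoding, allocating the closure amounts to new AddN(n), and a call site becomes a virtual invocation of apply.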
Reusing all the Bigloo compilation stages is beneficial for the quality of the produced JVM bytecode and for the simplicity of the new resulting compiler. The JVM bytecode generator only represents 12% of the code of the whole compiler (7,000 lines of Scheme code for the JVM generator, to be compared to the 57,700 lines of Scheme code for the whole compiler). The JVM runtime system is only 5,000 lines of Java code, which have to be compared with the 30,000 lines of Scheme code that are common to both JVM and C back-ends. This can also be compared with the 33,000 lines of C code (including the garbage collector, which is 23,000 lines long) of the C runtime system.

The first versions of BiglooJVM, a mere 2,000 lines of Scheme code, were delivering extremely poor performance applications. For instance, the JVM version of the Cgc benchmark (see Appendix A), when run on a Linux/x86 architecture, was more than 100 times slower than the corresponding C version. Modifying the produced JVM class files, we have been able to speed up this benchmark and it is now "only" 3 times slower than the C version. Many small problems were responsible for the initial poor performance. For instance, we found that some sequences of JVM instructions prevent the just-in-time compiler (JIT) from compiling methods. We also found that on some 32-bit architectures, using 64-bit integers is prohibitive. We found that some JIT compilers (Sun HotSpot and others) have a size limit per function above which they stop compiling. Finally, we found that some JIT compilers refuse to compile functions that include functional loops, that is, loops that push values on the stack.

We present our experience with JVM bytecode generation in this paper. Most of the presented information is independent of the source language to be compiled, so it could be beneficial to anyone wishing to implement a compiler producing JVM bytecode.

1.3 Overview

In Section 2 we present the performance evaluation, focusing on the difference between Bigloo and BiglooJVM. In Section 3 we present the specific techniques deployed in the BiglooJVM code generator and the BiglooJVM runtime system. In Section 4 we present the important tunings we have applied to the code generator and the runtime system to tame the JIT compilers. In Section 5, we compare the BiglooJVM compiler with other systems.

2. PERFORMANCE EVALUATIONS

We present in this section the performance evaluation of BiglooJVM. In a first step, we compare it to Bigloo (that is, the C back-end of Bigloo). Then, we compare it with other Scheme implementations. Finally, we compare it with non-Scheme implementations that produce JVM bytecode.

2.1 Settings

To evaluate performance, we have used two different architectures:

• Linux/x86: An AMD/Thunderbird 800MHz, 256MB, running Linux 2.2, Java HotSpot™ of the JDK 1.3.0, used in "client" mode.
Recommended publications
  • Introduction to Programming in Lisp
    Introduction to Programming in Lisp. Supplementary handout for 4th Year AI lectures · D W Murray · Hilary 1991. 1 Background. There are two widely used languages for AI, viz. Lisp and Prolog. The latter is the language for Logic Programming, but much of the remainder of the work is programmed in Lisp. Lisp is the general language for AI because it allows us to manipulate symbols and ideas in a commonsense manner. Lisp is an acronym for List Processing, a reference to the basic syntax and aim of the language. The earliest list processing language was in fact IPL, developed in the mid 1950's by Simon, Newell and Shaw. Lisp itself was conceived by John McCarthy and students in the late 1950's for use in the newly-named field of artificial intelligence. It caught on quickly in MIT's AI Project, was implemented on the IBM 704, and by 1962 had spread through other AI groups. AI is still the largest application area for the language, but the removal of many of the flaws of early versions of the language has resulted in its gaining somewhat wider acceptance. One snag with Lisp is that although it started out as a very pure language based on mathematical logic, practical pressures mean that it has grown. There were many dialects which threatened the unity of the language, but recently there was a concerted effort to develop a more standard Lisp, viz. Common Lisp. Other Lisps you may hear of are FranzLisp, MacLisp, InterLisp, Cambridge Lisp, Le Lisp, ... Some good things about Lisp are: • Lisp is an early example of an interpreted language (though it can be compiled).
  • Chips: A Tool for Developing Software Interfaces Interactively
    DOCUMENT RESUME ED 290 438 IR 012 986 AUTHOR Cunningham, Robert E.; And Others TITLE Chips: A Tool for Developing Software Interfaces Interactively. INSTITUTION Pittsburgh Univ., Pa. Learning Research and Development Center. SPONS AGENCY Office of Naval Research, Arlington, Va. REPORT NO TR-LEP-4 PUB DATE Oct 87 CONTRACT N00014-83-6-0148; N00014-83-K-0655 NOTE 63p. PUB TYPE Reports - Research/Technical (143) EDRS PRICE MF01/PC03 Plus Postage. DESCRIPTORS *Computer Graphics; *Man Machine Systems; Menu Driven Software; Programing; *Programing Languages IDENTIFIERS Direct Manipulation Interfaces; Interface Design Theory; *Learning Research and Development Center; LISP Programing Language; Object Oriented Programing ABSTRACT This report provides a detailed description of Chips, an interactive tool for developing software employing graphical/computer interfaces on Xerox Lisp machines. It is noted that Chips, which is implemented as a collection of customizable classes, provides the programmer with a rich graphical interface for the creation of rich graphical interfaces, and the end-user with classes for modeling the graphical relationships of objects on the screen and maintaining constraints between them. This description of the system is divided into five main sections: (1) the introduction, which provides background material and a general description of the system; (2) a brief overview of the report; (3) detailed explanations of the major features of Chips; (4) an in-depth discussion of the interactive aspects of the Chips development environment; and (5) an example session using Chips to develop and modify a small portion of an interface. Appended materials include descriptions of four programming techniques that have been found useful in the development of Chips; descriptions of several systems developed at the Learning Research and Development Center using Chips; and a glossary of key terms used in the report.
  • Allegro CL User Guide
    Allegro CL User Guide Volume 1 (of 2) version 4.3 March, 1996 Copyright and other notices: This is revision 6 of this manual. This manual has Franz Inc. document number D-U-00-000-01-60320-1-6. Copyright 1985-1996 by Franz Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means electronic, mechanical, by photocopying or recording, or otherwise, without the prior and explicit written permission of Franz Incorporated. Restricted rights legend: Use, duplication, and disclosure by the United States Government are subject to Restricted Rights for Commercial Software developed at private expense as specified in DOD FAR 52.227-7013 (c) (1) (ii). Allegro CL and Allegro Composer are registered trademarks of Franz Inc. Allegro Common Windows, Allegro Presto, Allegro Runtime, and Allegro Matrix are trademarks of Franz Inc. Unix is a trademark of AT&T. The Allegro CL software as provided may contain material copyright Xerox Corp. and the Open Systems Foundation. All such material is used and distributed with permission. Other, uncopyrighted material originally developed at MIT and at CMU is also included. Appendix B is a reproduction of chapters 5 and 6 of The Art of the Metaobject Protocol by G. Kiczales, J. des Rivieres, and D. Bobrow. All this material is used with permission and we thank the authors and their publishers for letting us reproduce their material. Contents Volume 1 Preface 1 Introduction 1.1 The language 1-1 1.2 History 1-1 1.3 Format
  • Extensions to the Franz, Inc.'s Allegro Common Lisp Foreign Function Interface
    Extensions to the Franz, Inc.'s Allegro Common Lisp foreign function interface. Keane, J. (1996). Rutgers University. https://doi.org/10.7282/t3-pqns-7h57. John Keane, Department of Computer Science, Rutgers University, New Brunswick, NJ, keane@cs.rutgers.edu, January. Abstract: As provided by Franz Inc., the foreign function interface of Allegro Common Lisp has a number of limitations. This paper describes extensions to the interface that facilitate the inclusion of C and Fortran code into Common Lisp systems. In particular, these extensions make it easy to utilize libraries of numerical subroutines, such as those from Numerical Recipes in C, from within ACL, including those routines that take functions as arguments. A mechanism for creating Lisp-like dynamic runtime closures for C routines is also described. Acknowledgments: The research presented in this document is supported in part by NASA grants NCC and NAG. This research is also part of the Rutgers based
  • A Quick Introduction to Common Lisp
    CSC 244/444 Notes last updated Feb 3, 2020 A Quick Introduction to Common Lisp Lisp is a functional language well-suited to symbolic AI, based on the λ-calculus and with list structures as a very flexible basic data type; programs are themselves list structures, and thus can be created and manipulated just like other data. This introduction isn't intended to give you all the details, but just enough to get across the essential ideas, and to let you get under way quickly. The best way to make use of it is in front of a computer, trying out all the things mentioned. Basic characteristics of Lisp and Common Lisp: • Lisp is primarily intended for manipulating symbolic data in the form of list structures, e.g., (THREE (BLIND MICE) SEE (HOW (THEY RUN)) !) • Arithmetic is included, e.g., (SQRT (+ (* 3 3) (* 4 4))) or (sqrt (+ (* 3 3) (* 4 4))) (Common Lisp is case-insensitive, except in strings between quotes " ... ", specially delimited symbols like |Very Unusual Symbol!|, and for characters prefixed with "\"). • Common Lisp (unlike John McCarthy's original "pure" Lisp) has arrays and record structures; this leads to more understandable and efficient code, much as in typed languages, but there is no obligatory typing. • All computation consists of expression evaluation. For instance, typing in the above (SQRT ...) expression and hitting the enter key immediately gives 5. • Lisp is a functional language based on Alonzo Church's λ-calculus (which like Turing machines provides a complete basis for all computable functions). For instance, a function for the square root of the sum of two squares can be expressed as (lambda (x y) (sqrt (+ (* x x) (* y y)))).
  • Parallel Programming with Lisp for Performance
    Parallel Programming with Lisp for Performance Pascal Costanza, Intel, Belgium European Lisp Symposium 2014, Paris, France The views expressed are my own, and not those of my employer. Legal Notices • Cilk, Intel, the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase. • Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instructions sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. (Notice revision #20110804) • Copyright © 2014 Intel Corporation.
  • Common Lisp Quick Reference
    Common Lisp Quick Reference. Compiled by W. Burger, WS. Symbols: nil, constant whose value is itself, NIL. t, constant whose value is itself, T. symbolp e: returns T if e evaluates to a symbol, otherwise NIL. boundp e: returns T if e, which must evaluate to a symbol, has a global or local value, otherwise NIL. defvar sym e: defines sym to be a global variable with initial value e. defparameter sym e: defines sym to be a global parameter whose initial value e may change at runtime but is not expected to. defconstant sym e: defines sym to be a global constant whose value e will not change while the program is running. Value Assignment: setf place e: stores the value of e in the place specified by place. setq sym e: evaluates e and makes it the value of the symbol sym; the value of e is returned. Input/Output: read stream: reads the printed representation of a single object from stream, which defaults to *standard-input*, builds a corresponding object and returns the object. read-line stream eof-error-p eof-value recursive-p: reads a line of text terminated by a newline or end-of-file character from stream, which defaults to *standard-input*, and returns two values: the line as a character string and a Boolean value, T if the line was terminated by an end-of-file and NIL if it was terminated by a newline. read-char stream eof-error-p eof-value recursive-p: reads one character from stream, which defaults to *standard-input*, and returns the corresponding character object. read-char-no-hang stream eof-error-p eof-value recursive-p: reads and returns a character from stream, which defaults to *standard-input*, if
  • The Racket Manifesto∗
    The Racket Manifesto∗ Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi, Eli Barzilay, Jay McCarthy, Sam Tobin-Hochstadt Abstract The creation of a programming language calls for guiding principles that point the developers to goals. This article spells out the three basic principles behind the 20-year development of Racket. First, programming is about stating and solving problems, and this activity normally takes place in a context with its own language of discourse; good programmers ought to formulate this language as a programming language. Hence, Racket is a programming language for creating new programming languages. Second, by following this language-oriented approach to programming, systems become multi-lingual collections of interconnected components. Each language and component must be able to protect its specific invariants. In support, Racket offers protection mechanisms to implement a full language spectrum, from C-level bit manipulation to soundly typed extensions. Third, because Racket considers programming as problem solving in the correct language, Racket also turns extra-linguistic mechanisms into linguistic constructs, especially mechanisms for managing resources and projects. The paper explains these principles and how Racket lives up to them, presents the evaluation framework behind the design process, and concludes with a sketch of Racket's imperfections and opportunities for future improvements. 1998 ACM Subject Classification D.3.3 Language Constructs and Features Keywords and phrases design
  • An Open Source Implementation of the Locator/ID Separation Protocol
    OpenLISP: An Open Source Implementation of the Locator/ID Separation Protocol. L. Iannone (TUB – Deutsche Telekom Laboratories AG, [email protected]), D. Saucez (Université catholique de Louvain, [email protected]), O. Bonaventure (Université catholique de Louvain, [email protected]). Abstract: The network research community has recently started to work on the design of an alternate Internet Architecture aiming at solving some scalability issues that the current Internet is facing. The Locator/ID separation paradigm seems to well fit the requirements for this new Internet Architecture. Among the various solutions, LISP (Locator/ID Separation Protocol), proposed by Cisco, has gained attention due to the fact that it is incrementally deployable. In the present paper we give a short overview on OpenLISP, an open-source implementation of LISP. Beside LISP's basic specifications, OpenLISP provides a new socket-based API, namely the Mapping Sockets, which makes OpenLISP an ideal experimentation platform for LISP, but also other related protocols. ... description of the encapsulation and decapsulation operations, the forwarding operation and offer several options as mapping system. Nevertheless there is no specification of an API to allow the mapping system to interact with the forwarding engine. In OpenLISP we proposed and implemented a new socket based solution in order to overcome this issue: the Mapping Sockets. Mapping sockets make OpenLISP an open and flexible solution, where different approaches for the locator/ID separation paradigm can be experimented, even if not strictly related to LISP. To the best of our knowledge, OpenLISP is the only existing effort in developing an open source Locator/ID separation approach.
  • Proposal for a Dynamic Synchronous Language Pejman Attar, Frédéric Boussinot, Louis Mandel, Jean-Ferdy Susini
    Proposal for a Dynamic Synchronous Language. Pejman Attar, Frédéric Boussinot, Louis Mandel, Jean-Ferdy Susini. To cite this version: Pejman Attar, Frédéric Boussinot, Louis Mandel, Jean-Ferdy Susini. Proposal for a Dynamic Synchronous Language. 2011. hal-00590420. HAL Id: hal-00590420, https://hal.archives-ouvertes.fr/hal-00590420. Preprint submitted on 3 May 2011. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Proposal for a Dynamic Synchronous Language∗ Pejman Attar (INRIA - INDES, [email protected]), Frédéric Boussinot (INRIA - INDES, [email protected]), Louis Mandel (Université Paris 11 - LRI, INRIA - PARKAS, [email protected]), Jean-Ferdy Susini (CNAM - Cédric, [email protected]). May 3, 2011. Abstract: We propose a new scripting language called DSL based on the synchronous/reactive model. In DSL, systems are composed of several sites executed asynchronously, and each site is running scripts in a synchronous parallel way. ... or Java threads). The simplification basically results from a cleaner and simpler semantics, which reduces the number of possible interleavings in parallel computations. However, standard synchronous languages introduce specific issues (namely, non-termination of instants) and have major difficulties to cope with dynamic
  • The Kvaser T Programming Language
    The Kvaser t Programming Language. Copyright 2011 KVASER AB, Mölndal, Sweden, http://www.kvaser.com. Compiler version: 3.1. Last updated 2013-01-30. This document is preliminary and subject to change. KVASER AB, Aminogatan 25A, 431 53 Mölndal, Sweden. Table of Contents: 1 t Programming (9); 1.1 Overview (9); 1.1.1 Creating a t Program (10); 1.1.2 Using a t Program (10); 1.1.3 Version of t compiler (10); 1.1.4 Inherited Settings Versus t Program Settings (11); 1.2 Introduction to the t Language for CAN (11); 1.2.1 Simple Example (11); 1.2.2 More Complex Example
  • How to Make Lisp Go Faster Than C [Pdf]
    IAENG International Journal of Computer Science, 32:4, IJCS_32_4_19. How to make Lisp go faster than C. Didier Verna∗. Abstract: Contrary to popular belief, Lisp code can be very efficient today: it can run as fast as equivalent C code or even faster in some cases. In this paper, we explain how to tune Lisp code for performance by introducing the proper type declarations, using the appropriate data structures and compiler information. We also explain how efficiency is achieved by the compilers. These techniques are applied to simple image processing algorithms in order to demonstrate the announced performance on pixel access and arithmetic operations in both languages. Keywords: Lisp, C, Numerical Calculus, Image Processing, Performance. 1 Introduction. More than 15 years after the standardization process of Common-Lisp [5], and more than 10 years after ... statically (hence known at compile-time), just as you would do in C. Safety Levels: While dynamically typed Lisp code leads to dynamic type checking at run-time, it is possible to instruct the compilers to bypass all safety checks in order to get optimum performance. Data Structures: While Lisp is mainly known for its basement on list processing, the Common-Lisp standard features very efficient data types such as specialized arrays, structs or hash tables, making lists almost completely obsolete. Experiments on the performance of Lisp and C were conducted in the field of image processing. We benchmarked, in both languages, 4 simple algorithms to measure the performances of the fundamental low-level operations: massive pixel access and arithmetic processing.