Benchmarking Implementations of Functional Languages With 'Pseudoknot', a Float-Intensive Benchmark

Total Pages: 16

File Type: PDF, Size: 1020 KB

Benchmarking Implementations of Functional Languages with 'Pseudoknot', a Float-Intensive Benchmark

Pieter H. Hartel, Marc Feeley, Martin Alt, Lennart Augustsson, Peter Baumann, Marcel Beemster, Emmanuel Chailloux, Christine H. Flood, Wolfgang Grieskamp, John H. G. van Groningen, Kevin Hammond, Bogumil Hausman, Melody Y. Ivory, Richard E. Jones, Jasper Kamperman, Peter Lee, Xavier Leroy, Rafael D. Lins, Sandra Loosemore, Niklas Röjemo, Manuel Serrano, Jean-Pierre Talpin, Jon Thackray, Stephen Thomas, Pum Walters, Pierre Weis, Peter Wentworth

Abstract: Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation.

Author affiliations: Dept. of Computer Systems, Univ. of Amsterdam; Dépt. d'informatique et R.O., Univ. de Montréal; Informatik, Universität des Saarlandes, Saarbrücken; Dept. of Computer Systems, Chalmers Univ. of Technology, Göteborg; Dept. of Computer Science, Univ. of Zurich; LIENS (URA du CNRS), École Normale Supérieure, Paris; Laboratory for Computer Science, MIT, Cambridge, Massachusetts; Berlin University of Technology; Faculty of Mathematics and Computer Science, Univ. of Nijmegen; Dept. of Computing Science, Glasgow University; Computer Science Lab, Ellemtel Telecom Systems Labs, Älvsjö; Institute for Scientific Computer Research, Lawrence Livermore National Laboratory, Livermore, California; Dept. of Computer Science, Univ. of Kent at Canterbury; CWI, Amsterdam; Dept. of Computer Science, Carnegie Mellon University, Pittsburgh; INRIA Rocquencourt, projet Cristal; Departamento de Informática, Universidade Federal de Pernambuco, Recife; Dept. of Computer Science, Yale Univ., New Haven, Connecticut; INRIA Rocquencourt, projet Icsla; European Computer-Industry Research Centre, Munich; Harlequin Ltd, Barrington Hall, Cambridge; Dept. of Computer Science, Univ. of Nottingham; Dept. of Computer Science, Rhodes Univ., Grahamstown.
With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of 'typical' applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.

Introduction

At the Dagstuhl Workshop on Applications of Functional Programming in the Real World in May 1994 (Giegerich and Hughes), several interesting applications of functional languages were presented. One of these applications, the Pseudoknot problem (Feeley et al.), had been written in several languages, including C, Scheme (Rees and Clinger), Multilisp (Halstead Jr.) and Miranda† (Turner). A number of workshop participants decided to test their compiler technology using this particular program. The first point of comparison is the speed of compilation and the speed of the compiled program. The second point is how the program can be modified and tuned to obtain maximal performance on each language implementation available. The initial benchmarking efforts revealed important differences between the various compilers. The first impression was that compilation speed should generally be improved. After the workshop we have continued to work on improving both the compilation and execution speed of the Pseudoknot program. Some researchers not present at Dagstuhl joined the team, and we present the results as a record of a small-scale but exciting collaboration with contributions from many parts of the world. As is the case with any benchmarking work, our results should be taken with a grain of salt. We are using a realistic program that performs a useful computation; however, it stresses particular language features that are probably not representative of the applications for which the language implementations were intended. Implementations invariably trade off the performance of some programming features for others in the quest for the right blend of usability, flexibility and performance on typical applications. It is clear that a single benchmark is not a good way to measure the overall quality of an implementation. Moreover, the performance of an implementation usually (but not always) improves with new releases, as the implementors repair bugs, add new features and modify the compiler. We feel that our choice of benchmark can be justified by the fact that it is a real-world application, that it had already been translated into C and several functional languages, and that we wanted to compare a wide range of languages and implementations. The main results agree with those found in earlier studies (Cann; Hartel and Langendoen).
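The strictness annotations mentioned above can be illustrated with a minimal Haskell sketch; this is my own example, not code from the Pseudoknot sources, and the `Pt` type is an assumption made only for illustration.

```haskell
-- A minimal sketch of a strictness annotation on a float-heavy data type.
-- The '!' marks force the coordinates to be evaluated when a point is built,
-- so a lazy (non-strict) implementation does not accumulate chains of
-- unevaluated arithmetic thunks.
data Pt = Pt !Double !Double !Double
  deriving Show

addPt :: Pt -> Pt -> Pt
addPt (Pt x1 y1 z1) (Pt x2 y2 z2) = Pt (x1 + x2) (y1 + y2) (z1 + z2)
```

Without the annotations, each coordinate of an intermediate point could remain a suspended computation; with them, the representation stays as compact as in an eager language.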
The functional languages that have been used are briefly characterised in the next section; the compilers and interpreters for these languages are presented after that. The Pseudoknot application is then introduced, followed by a description of the translations of the program into the different programming languages. The benchmark results are presented next, and the conclusions are given in the last section.

† Miranda is a trademark of Research Software Ltd.

language | source | ref | typing | evaluation | order | match
SML family
Caml | INRIA | Weis | strong, poly | eager | higher | pattern
SML | Committee | Milner et al. | strong, poly | eager | higher | pattern
Haskell family
Clean | Nijmegen | Plasmeijer and van Eekelen | strong, poly | lazy | higher | pattern
Gofer | Yale | Jones | strong, poly | lazy | higher | pattern
LML | Chalmers | Augustsson and Johnsson | strong, poly | lazy | higher | pattern
Miranda | Kent | Turner | strong, poly | lazy | higher | pattern
Haskell | Committee | Hudak et al. | strong, poly | lazy | higher | pattern
RUFL | Rhodes | Wentworth | strong, poly | lazy | higher | pattern
Lisp family
Common Lisp | Committee | Steele Jr. | dynamic | eager | higher | access
Scheme | Committee | Rees and Clinger | dynamic | eager | higher | access
Parallel and concurrent languages
Erlang | Ericsson | Armstrong et al. | dynamic | eager | first | pattern
Facile | ECRC | Thomsen et al. | strong, poly | eager | higher | pattern
ID | MIT | Nikhil | strong, poly | eager, non-strict | higher | pattern
Sisal | LLNL | McGraw et al. | strong, mono | eager | first | none
Intermediate languages
CMC | Recife | Lins and Lira | strong, poly | lazy | higher | access
Stoffel | Amsterdam | Beemster | strong, poly | lazy | higher | case
Other functional languages
Epic | CWI | Walters and Kamperman | strong, poly | eager | first | pattern
Opal | TU Berlin | Didrich et al. | strong, poly | eager | higher | pattern
Trafola | Saarbrücken | Alt et al. | strong, poly | eager | higher | pattern
C
ANSI C | Committee | Kernighan and Ritchie | weak | eager | first | none

Table: Language characteristics. The source of each language is followed by a key reference to the language definition. The remaining columns characterise the typing discipline, the evaluation strategy, whether the language is first- or higher-order, and the pattern-matching facilities.

Languages

The Pseudoknot benchmark takes into account a large number of languages and an even larger number of compilers. Our aim has been to cover as comprehensively as possible the landscape of functional languages, while emphasising typed languages. Of the general purpose functional languages, the most prominent are the eager, dynamically typed languages Lisp and Scheme (the Lisp family), the eager, strongly typed languages …
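To make two of the table's columns concrete, here is a small Haskell fragment of my own (not taken from the paper) showing what the 'higher' entry in the order column and the 'pattern' entry in the match column refer to.

```haskell
-- A user-defined data type inspected by pattern matching,
-- as in the SML and Haskell families.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- 'map' is a higher-order function: it takes another function as argument.
totalArea :: [Shape] -> Double
totalArea = sum . map area
```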
Recommended publications
  • Benchmarking Implementations of Functional Languages with ‘Pseudoknot’, a Float-Intensive Benchmark
Zurich Open Repository and Archive, University of Zurich, www.zora.uzh.ch. Year: 1996. Benchmarking implementations of functional languages with 'Pseudoknot', a float-intensive benchmark. Hartel, Pieter H; Feeley, Marc; et al. Abstract: Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of 'typical' applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
  • A Scheme Foreign Function Interface to Javascript Based on an Infix
A Scheme Foreign Function Interface to JavaScript Based on an Infix Extension. Marc-André Bélanger and Marc Feeley, Université de Montréal, Montréal, Québec, Canada. [email protected] [email protected]

ABSTRACT: This paper presents a JavaScript Foreign Function Interface for a Scheme implementation hosted on JavaScript and supporting threads. In order to be as convenient as possible the foreign code is expressed using infix syntax, the type conversions between Scheme and JavaScript are mostly implicit, and calls can both be done from Scheme to JavaScript and the other way around. Our approach takes advantage of JavaScript's dynamic nature and its support for asynchronous functions. This allows concurrent activities to be expressed in a direct style in Scheme using threads. The paper goes over the design and implementation of our approach in the Gambit Scheme system. Examples are given to illustrate its use.

FFIs are notoriously implementation-dependent and code using a given FFI is usually not portable. Consequently, the nature of FFI's reflects a particular set of choices made by the language's implementers. This makes FFIs usually more difficult to learn than the base language, imposing implementation constraints to the programmer. In effect, proficiency in a particular FFI is often not a transferable skill. In general FFIs tightly couple the underlying low level data representation to the higher level interface provided to the programmer. This is especially true of FFIs for statically typed languages such as C, where to construct the proper interface code the FFI must know the type of all data passed …
  • Towards a Portable and Mobile Scheme Interpreter
Towards a Portable and Mobile Scheme Interpreter. Adrien Piérard (Université Paris 6) and Marc Feeley (Université de Montréal). [email protected] [email protected]

Abstract: The transfer of program data between the nodes of a distributed system is a fundamental operation. It usually requires some form of data serialization. For a functional language such as Scheme it is clearly desirable to also allow the unrestricted transfer of functions between nodes. With the goal of developing a portable implementation of the Termite system we have designed the Mobit Scheme interpreter which supports unrestricted serialization of Scheme objects, including procedures and continuations. Mobit is derived from an existing Scheme in Scheme fast interpreter. We demonstrate how macros were valuable in transforming the interpreter while preserving its structure and maintainability. Our performance evaluation shows that the run time speed of Mobit is comparable to …

… Because Mobit implements R4RS Scheme [6], we must also address the serialization of continuations. Our main contribution is the demonstration of how this can be done while preserving the interpreter's maintainability and with local changes to the original interpreter's structure, mainly through the use of unhygienic macros. We start by giving an overview of the pertinent features of the Termite dialect of Scheme. In Section 3 we explain the structure of the interpreter on which Mobit is based. Object serialization is discussed in Section 4. Section 5 compares Mobit's performance with other interpreters. We conclude with related and future work.

2. Termite. Termite is a Scheme adaptation of the Erlang concurrency model.
  • The Evolution of Lisp
    1 The Evolution of Lisp Guy L. Steele Jr. Richard P. Gabriel Thinking Machines Corporation Lucid, Inc. 245 First Street 707 Laurel Street Cambridge, Massachusetts 02142 Menlo Park, California 94025 Phone: (617) 234-2860 Phone: (415) 329-8400 FAX: (617) 243-4444 FAX: (415) 329-8480 E-mail: [email protected] E-mail: [email protected] Abstract Lisp is the world’s greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the “hacker culture” than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial- strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy’s paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
  • Tousimojarad, Ashkan (2016) GPRM: a High Performance Programming Framework for Manycore Processors. PhD Thesis
Tousimojarad, Ashkan (2016) GPRM: a high performance programming framework for manycore processors. PhD thesis. http://theses.gla.ac.uk/7312/ Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the Author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the Author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given. Glasgow Theses Service, http://theses.gla.ac.uk/, [email protected]. GPRM: A HIGH PERFORMANCE PROGRAMMING FRAMEWORK FOR MANYCORE PROCESSORS. ASHKAN TOUSIMOJARAD. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Computing Science, College of Science and Engineering, University of Glasgow, November 2015. © Ashkan Tousimojarad. Abstract: Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM).
  • Part: an Asynchronous Parallel Abstraction for Speculative Pipeline Computations Kiko Fernandez-Reyes, Dave Clarke, Daniel Mccain
ParT: An Asynchronous Parallel Abstraction for Speculative Pipeline Computations. Kiko Fernandez-Reyes, Dave Clarke, Daniel McCain. To cite this version: Kiko Fernandez-Reyes, Dave Clarke, Daniel McCain. ParT: An Asynchronous Parallel Abstraction for Speculative Pipeline Computations. 18th International Conference on Coordination Languages and Models (COORDINATION), Jun 2016, Heraklion, Greece. pp. 101-120, 10.1007/978-3-319-39519-7_7. HAL Id: hal-01631723, https://hal.inria.fr/hal-01631723, submitted on 9 Nov 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Distributed under a Creative Commons Attribution 4.0 International License.

ParT: An Asynchronous Parallel Abstraction for Speculative Pipeline Computations. Kiko Fernandez-Reyes, Dave Clarke, and Daniel S. McCain, Department of Information Technology, Uppsala University, Uppsala, Sweden. Abstract. The ubiquity of multicore computers has forced programming language designers to rethink how languages express parallelism and concurrency. This has resulted in new language constructs and new combinations or revisions of existing constructs. In this line, we extended the programming languages Encore (actor-based), and Clojure (functional) with an asynchronous parallel abstraction called ParT, a data structure that can dually be seen as a collection of asynchronous values (integrating with futures) or a handle to a parallel computation, plus a collection of combinators for manipulating the data structure.
  • The Butterfly(TM) Lisp System
From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. The Butterfly™ Lisp System. Seth A. Steinberg, Don Allen, Laura Bagnall, Curtis Scott. Bolt, Beranek and Newman, Inc., 10 Moulton Street, Cambridge, MA 02238.

ABSTRACT: This paper describes the Common Lisp system that BBN is developing for its Butterfly™ multiprocessor. The BBN Butterfly™ is a shared memory multiprocessor which may contain up to 256 processor nodes. The system provides a shared heap, parallel garbage collector, and window-based I/O system. The future construct is used to specify parallelism.

THE BUTTERFLY™ LISP SYSTEM. For several decades, driven by industrial, military and experimental demands, numeric algorithms have required increasing quantities of computational power. Symbolic algorithms were laboratory curiosities; widespread demand for symbolic computing power lagged until recently. Under DARPA sponsorship, BBN is developing a parallel symbolic programming environment for the Butterfly, based on an extended version of the Common Lisp language. The implementation of Butterfly Lisp is derived from C Scheme, written at MIT by members of the Scheme Team. The simplicity and power of Scheme make it particularly suitable as a testbed for exploring the issues of parallel execution, as well as a good implementation language for Common Lisp. The MIT Multilisp work of Professor Robert Halstead and students has had a significant influence on our approach. For example, the future construct, Butterfly Lisp's primary mechanism for obtaining concurrency, was devised and first implemented by the Multilisp group. Our experience porting Multilisp to the Butterfly illuminated many of the problems of developing a Lisp system that runs efficiently on both …
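For readers unfamiliar with the future construct described in this excerpt, the following is a rough Haskell analogue of my own; it is not Butterfly Lisp or Multilisp code, and the `future` name is only an illustrative assumption.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- Start a computation in another thread and hand back an action that
-- blocks until its value is available, in the spirit of (future e)
-- followed by touching the resulting placeholder.
future :: IO a -> IO (IO a)
future action = do
  box <- newEmptyMVar
  _   <- forkIO (action >>= putMVar box)
  return (readMVar box)
```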
  • Graph Reduction Without Pointers
Graph Reduction Without Pointers. TR89-045, December 1989. William Daniel Partain. The University of North Carolina at Chapel Hill, Department of Computer Science, CB#3175, Sitterson Hall, Chapel Hill, NC 27599-3175. UNC is an Equal Opportunity/Affirmative Action Institution. Graph Reduction Without Pointers, by William Daniel Partain. A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science. Chapel Hill, 1989. Approved by: Jan F. Prins, reader; Donald F. Stanat, reader. © 1989 William D. Partain. All rights reserved. WILLIAM DANIEL PARTAIN. Graph Reduction Without Pointers (under the direction of Gyula A. Magó).

Abstract: Graph reduction is one way to overcome the exponential space blow-ups that simple normal-order evaluation of the lambda-calculus is likely to suffer. The lambda-calculus underlies lazy functional programming languages, which offer hope for improved programmer productivity based on stronger mathematical underpinnings. Because functional languages seem well-suited to highly-parallel machine implementations, graph reduction is often chosen as the basis for these machines' designs. Inherent to graph reduction is a commonly-accessible store holding nodes referenced through "pointers," unique global identifiers; graph operations cannot guarantee that nodes directly connected in the graph will be in nearby store locations. This absence of locality is inimical to parallel computers, which prefer isolated pieces of hardware working on self-contained parts of a program. In this dissertation, I develop an alternate reduction system using "suspensions" (delayed substitutions), with terms represented as trees and variables by their binding indices (de Bruijn numbers).
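As a small aside on the de Bruijn representation mentioned at the end of this excerpt, here is a minimal Haskell sketch of my own (not Partain's reduction system) showing terms as trees with variables named by their binding indices.

```haskell
-- Lambda terms with de Bruijn indices: a variable is identified by the
-- number of binders between its use and the binder that introduces it.
data Term
  = Var Int        -- bound variable, by index
  | Lam Term       -- lambda abstraction; the binder carries no name
  | App Term Term  -- application
  deriving Show

-- \x. \y. x y  is written  Lam (Lam (App (Var 1) (Var 0)))
example :: Term
example = Lam (Lam (App (Var 1) (Var 0)))
```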
  • This Is the Author's Version of a Work Accepted for Publication by Elsevier
NOTICE: This is the author's version of a work accepted for publication by Elsevier. Changes resulting from the publishing process, including peer review, editing, corrections, structural formatting and other quality control mechanisms, may not be reflected in this document. A definitive version was subsequently published in the Journal of Systems and Software, Volume 86, Issue 2, pp. 278-301, February 2013. Efficient Support of Dynamic Inheritance for Class- and Prototype-based Languages. Jose Manuel Redondo, Francisco Ortin. University of Oviedo, Computer Science Department, Calvo Sotelo s/n, 33007, Oviedo, Spain. Abstract: Dynamically typed languages are becoming increasingly popular for different software development scenarios where runtime adaptability is important. Therefore, existing class-based platforms such as Java and .NET have been gradually incorporating dynamic features to support the execution of these languages. The implementations of dynamic languages on these platforms commonly generate an extra layer of software over the virtual machine, which reproduces the reflective prototype-based object model provided by most dynamic languages. Simulating this model frequently involves a runtime performance penalty, and makes the interoperation between class- and prototype-based languages difficult. Instead of simulating the reflective model of dynamic languages, our approach has been to extend the object-model of an efficient class-based virtual machine with prototype-based semantics, so that it can directly support both kinds of languages. Consequently, we obtain the runtime performance improvement of using the virtual machine JIT compiler, while a direct interoperation between languages compiled to our platform is also possible. In this paper, we formalize dynamic inheritance for both class- and prototype-based languages, and implement it as an extension of an efficient virtual machine that performs JIT compilation.
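The prototype-based object model and dynamic inheritance mentioned in this excerpt can be pictured with a small Haskell sketch of my own; it is not the paper's virtual-machine extension, and the `Obj`, `lookupProp` and `reparent` names are illustrative assumptions only.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- A prototype-style object: its own properties plus an optional parent
-- to which property lookups are delegated.
data Obj = Obj { props :: Map String String, parent :: Maybe Obj }

-- A property missing from an object is searched for in its parent chain.
lookupProp :: String -> Obj -> Maybe String
lookupProp name obj =
  case Map.lookup name (props obj) of
    Just v  -> Just v
    Nothing -> parent obj >>= lookupProp name

-- Changing the parent link at run time is the essence of dynamic inheritance.
reparent :: Obj -> Obj -> Obj
reparent child newParent = child { parent = Just newParent }
```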
  • Parallelism in Lisp
Parallelism in Lisp. Michael van Biema, Columbia University, Dept. of Computer Science, New York, N.Y. 10027. Tel: (212) 280-2736. [email protected]

Abstract: This paper examines Lisp from the point of view of parallel computation. It attempts to identify exactly where the potential for parallel execution really exists in LISP and what constructs are useful in realizing that potential. Case studies of three attempts at augmenting Lisp with parallel constructs are examined and critiqued.

1. Parallelism in Lisp. There are two main approaches to executing Lisp in parallel. One is to use existing code and clever compiling methods to parallelize the execution of the code [9, 14, 11]. This approach is very attractive because it allows the use of already …

… three attempts are very interesting, in that two are very similar in their approach but very different in the level of their constructs, and the third takes a very different approach. We do not study the so called "pure Lisp" approaches to parallelizing Lisp since these are applicative approaches and do not present many of the more complex problems presented by a Lisp with side-effects [4, 3]. The first two attempts concentrate on what we call control parallelism. Control parallelism is viewed here as a medium- or coarse-grained parallelism on the order of a function call in Lisp or a procedure call in a traditional, procedure-oriented language. A good example of this type of parallelism is the parallel evaluation of all the arguments to a function in Lisp, or the remote procedure call or fork of a process in some procedural language.
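The parallel evaluation of arguments described in this excerpt can be sketched in Haskell rather than Lisp; this is my own analogue under that assumption, not code from the paper, using the `par` and `pseq` combinators from the `parallel` package.

```haskell
import Control.Parallel (par, pseq)

-- Evaluate both arguments before applying the function: the first is
-- sparked for evaluation in parallel while the second is forced in the
-- current thread, a coarse-grained form of control parallelism.
parApply2 :: (a -> b -> c) -> a -> b -> c
parApply2 f x y = x `par` (y `pseq` f x y)

-- Example: the two expensive arguments may be evaluated concurrently.
example :: Integer
example = parApply2 (+) (sum [1 .. 1000000]) (product [1 .. 20])
```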
  • Keyword and Optional Arguments in PLT Scheme
Keyword and Optional Arguments in PLT Scheme. Matthew Flatt (University of Utah and PLT), Eli Barzilay (Northeastern University and PLT). [email protected] [email protected]

Abstract: The lambda and procedure-application forms in PLT Scheme support arguments that are tagged with keywords, instead of identified by position, as well as optional arguments with default values. Unlike previous keyword-argument systems for Scheme, a keyword is not self-quoting as an expression, and keyword arguments use a different calling convention than non-keyword arguments. Consequently, a keyword serves more reliably (e.g., in terms of error reporting) as a lightweight syntactic delimiter on procedure arguments. Our design requires no changes to the PLT Scheme core compiler, because lambda and application forms that support keywords are implemented by macros over conventional core forms that lack keyword support.

1. Using Keyword and Optional Arguments. A rich programming language offers many ways to abstract and parameterize code. In Scheme, first-class procedures are the primary means of abstraction, and procedures are unquestionably the …

(define rectangle
  (lambda (width height #:color color)
    ....))

or

(define (rectangle width height #:color color) ....)

This rectangle procedure could be called as

(rectangle 10 20 #:color "blue")

A keyword argument can be in any position relative to other arguments, so the following two calls are equivalent to the preceding one:

(rectangle #:color "blue" 10 20)
(rectangle 10 #:color "blue" 20)

The #:color formal argument could have been in any position among the arguments in the definition of rectangle, as well. In general, keyword arguments are designed to look the same in both the declaration and application of a procedure.