Fexprs as the Basis of Lisp Function Application, or $vau: the Ultimate Abstraction, by John N. Shutt


Fexprs as the basis of Lisp function application, or $vau: the ultimate abstraction

by John N. Shutt

A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science.

August 23, 2010

Approved:
Prof. Michael A. Gennert, Advisor, Head of Department
Prof. Daniel J. Dougherty
Prof. Carolina Ruiz
Prof. Shriram Krishnamurthi, Computer Science Department, Brown University

Abstract

Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times.

Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of the semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s.

This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation.

The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps’ indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories.

The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen’s lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church–Rosser and Plotkin’s Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop’s Regular Combinatory Reduction Systems.

Preface

The concept of first-class object is due to Christopher Strachey (1916–1975). A first-class object in any given programming language is an object that can be employed freely in all the ways that one would ordinarily expect of a general value in that language. The “rights and privileges” of first-class objects —the ways one expects they may be freely employed— depend on the language.
Although one might draw up a partial list of first-class rights and privileges for a given language, e.g. [AbSu96, §1.3.4], a complete list is never quite possible, because (as in human societies) the rights and privileges of first-class-ness are difficult to recognize until they are missed.

Strachey’s prime example, then, of second-class objects was procedures in Algol. Procedures in Algol can be called, or passed as arguments to procedure calls, but —unlike numbers— cannot be stored as the values of variables, nor returned as the results of procedure calls. Thus Algol procedures cannot be denoted by compound expressions, nor by aliases; as [Stra67, §3.5] put it, they “need to appear in person under their own names.”

The Scheme programming language pioneered first-class treatment of several types of objects, notably procedures (whose promotion to first-class status will be discussed in this dissertation in §3.3.2). However, for those cases in which the operands in an expression are not to be evaluated automatically, special reserved operator symbols are used; and the evaluation rules denoted by these reserved symbols cannot be evoked in any other way — they always have to appear in person under their own names.[1]

The Kernel programming language is a redesign of Scheme for the current work[2] that grants first-class status to the objects named by Scheme’s special-form operators (both built-in and user-defined). Why, how, and with what consequences it does so are the concerns of this dissertation.

In place of Scheme’s single type of first-class procedures, Kernel has two types of first-class combiners: applicatives, which are analogous to Scheme procedures; and operatives (known historically in the Lisp community as fexprs), which take their operands unevaluated and correspond to Scheme’s special-form combiners. To avoid gratuitous confusion between the two, Kernel conventionally prefixes the names of its operatives with “$” (a convention that enhances the lucidity of even mundane Kernel code, independent of any exotic use of first-class operatives).

The elegance of Kernel’s support for first-class operatives arises from the synergism of two distinct innovations. The flashier innovation is the $vau operative, which behaves nearly identically to $lambda except that its result is operative rather than applicative. $vau also differs from $lambda by having an extra parameter, appearing after the usual parameter tree, which locally binds the dynamic environment from which the constructed operative is called. One could, for example, write

    ($define! $if                                            (0.1)
      ($vau (x y z) env
        ($cond ((eval x env) (eval y env))
               (#t           (eval z env))))) ,

deriving a compound operative $if from pre-existing operative $cond. As an alternative to hygienic macro declarations, $vau gains clarity by using ordinary Scheme/Kernel tools rather than a separate macro sublanguage, and by specifying its evaluations —and its uses of the dynamic environment— explicitly; but the more elegant, and subtler, innovation of Kernel’s operative support lies in its treatment of first-class applicatives.

A Kernel applicative is simply a shell, or wrapper, to induce argument evaluation, around some other underlying combiner (which may even be another applicative). The constructor of applicatives is an applicative wrap.

[1] A more detailed discussion of first-class-ness, both in general and in the particular case of Scheme, appears in [Shu09, App. B (First-class objects)].
[2] That is, Kernel and this dissertation are both part of the same work, by the same author. See §3.5.
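A minimal sketch, not from the dissertation, of these two pieces in action: the derived $if of (0.1) receives its operands unevaluated, and wrap turns an operative into an argument-evaluating applicative. The names $operands-as-data and operands-evaluated are hypothetical, and a standard Kernel ground environment is assumed.

    ($if (>? 3 2) "yes" (undefined-symbol))   ; => "yes"; the third operand is never
                                              ;    evaluated, so no error is signalled

    ($define! $operands-as-data ($vau operands #ignore operands))
    ($define! operands-evaluated (wrap $operands-as-data))

    ($operands-as-data (+ 1 2) (+ 3 4))       ; => ((+ 1 2) (+ 3 4)), the operands themselves
    (operands-evaluated (+ 1 2) (+ 3 4))      ; => (3 7), the operands' values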
The introduction of wrap as a primitive, evidently orthogonal to $vau, obviates the need for a primitive $lambda, since $lambda can then be constructed as a compound operative:

    ($define! $lambda                                        (0.2)
      ($vau (ptree . body) static-env
        (wrap (eval (list* $vau ptree #ignore body)
                    static-env)))) .

The introduction of an inverse operation unwrap completes the orthogonalization of Kernel combiner semantics, by allowing apply to be constructed as well:

    ($define! apply                                          (0.3)
      ($lambda (applicative argument-tree)
        (eval (cons (unwrap applicative) argument-tree)
              (make-environment)))) .

Content of the dissertation

The main content of the dissertation is divided into two parts: Part I addresses Kernel (the practical instrument of the thesis, designed by the author for the extended work), while Part II addresses vau calculi (its theoretical instrument, designed as part of the dissertation). Each part begins with a chapter of background materials preliminary to that part. Chapter 1 contains background materials that motivate and clarify the thesis, and therefore logically precede both parts. Chapter 2 contains background materials on the use of language in the dissertation that cross-cut the division between theory and practice.

Much of the background material is historical. Historical background serves three functions: it clarifies where we are, by explaining how we got here; it explains why we got here, laying motivational foundation for where to go next; and it reviews related work.

The background chapters also concern themselves not only with how programming languages specify computations (semantics), but with how programming languages influence the way programmers think (psychological bias). This introduces a subjective element, occasionally passing within hailing distance of philosophy; and the thesis, since it is meant to be defended, mostly avoids this element by concerning itself with semantics. However, the background chapters are also meant to discuss language design motivations, and psychology is integral to that discussion because it often dominates programmer productivity. Semantics may make some intended computations difficult to specify, but rarely makes them impossible — especially in Lisp, which doesn’t practice strong typing; whereas psychological bias may prevent entire classes of computations outright, just by preventing programmers from intending them.

Chapter 16 contains concluding remarks on the dissertation as a whole. Several appendices contain material either tediously uninsightful (and therefore deferred from the dissertation proper) but technically necessary to the thesis defense; or not directly addressing the thesis, but of immediate interest in relation
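A closing sketch, again not from the dissertation, of how the derived $lambda of (0.2) and apply of (0.3) behave; square is a hypothetical example and a standard Kernel ground environment is assumed.

    ($define! square ($lambda (x) (* x x)))
    (square 5)                 ; => 25
    (apply square (list 5))    ; => 25; per (0.3), the evaluated argument tree is handed
                               ;    to the underlying operative (unwrap square)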
Recommended publications
  • Processing an Intermediate Representation Written in Lisp
    Processing an Intermediate Representation Written in Lisp
    Ricardo Peña, Santiago Saavedra, Jaime Sánchez-Hernández
    Complutense University of Madrid, Spain
    [email protected],[email protected],[email protected]

    In the context of designing the verification platform CAVI-ART, we arrived at the need to decide on a textual format for our intermediate representation of programs. After considering several options, we finally decided to use S-expressions for that textual representation, and Common Lisp for processing it in order to obtain the verification conditions. In this paper, we discuss the benefits of this decision. S-expressions are homoiconic, i.e. they can be considered both as data and as code. We exploit this duality, and extensively use the facilities of the Common Lisp environment to carry out different kinds of processing on these textual representations. In particular, using a common compilation scheme we show that program execution and verification condition generation can be seen as two instantiations of the same generic process.

    1 Introduction
    Building a verification platform such as CAVI-ART [7] is a challenging endeavour from many points of view. Some of the tasks require intensive research which can advance the state of the art, as is for instance the case of automatic invariant synthesis. Others are not so challenging, but require a lot of pondering and discussion before arriving at a decision, in order to ensure that all the requirements have been taken into account. A bad or too quick decision may have a profound influence on the rest of the activities, making them more difficult, longer, or less efficient.
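    The homoiconicity point can be made concrete with a tiny Scheme-style sketch (the paper itself works in Common Lisp; prog is a hypothetical name): the same S-expression is first inspected as a list and then evaluated as a program.

      (define prog '(* 6 7))                  ; a three-element list: plain data
      (car prog)                              ; => the symbol *
      (eval prog (interaction-environment))   ; => 42: the same list, run as code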
  • Compiler Error Messages Considered Unhelpful: the Landscape of Text-Based Programming Error Message Research
    Working Group Report, ITiCSE-WGR ’19, July 15–17, 2019, Aberdeen, Scotland, UK
    Compiler Error Messages Considered Unhelpful: The Landscape of Text-Based Programming Error Message Research
    Brett A. Becker (University College Dublin, Dublin, Ireland), Paul Denny (University of Auckland, Auckland, New Zealand), Raymond Pettit (University of Virginia, Charlottesville, Virginia, USA), Durell Bouchard (Roanoke College, Roanoke, Virginia, USA), Dennis J. Bouvier (Southern Illinois University Edwardsville, Edwardsville, Illinois, USA), Brian Harrington (University of Toronto Scarborough, Scarborough, Ontario, Canada), Amir Kamil (University of Michigan, Ann Arbor, Michigan, USA), Amey Karkare (Indian Institute of Technology Kanpur, Kanpur, India), Chris McDonald (University of Western Australia, Perth, Australia), Peter-Michael Osera (Grinnell College, Grinnell, Iowa, USA), Janice L. Pearce (Berea College, Berea, Kentucky, USA), James Prather (Abilene Christian University, Abilene, Texas, USA)

    ABSTRACT: Diagnostic messages generated by compilers and interpreters, such as syntax error messages, have been researched for over half of a century. Unfortunately, these messages, which include error, warning, and run-time messages, present substantial difficulty and could […] of evidence supporting each one (historical, anecdotal, and empirical). This work can serve as a starting point for those who wish to conduct research on compiler error messages, runtime errors, and warnings. We also make the bibtex file of our 300+ reference corpus publicly available.
  • How to Design Co-Programs
    JFP, 15 pages, 2021. © Cambridge University Press 2021. doi:10.1017/xxxxx
    EDUCATION MATTERS
    How to Design Co-Programs
    JEREMY GIBBONS, Department of Computer Science, University of Oxford. e-mail: [email protected]

    Abstract
    The observation that program structure follows data structure is a key lesson in introductory programming: good hints for possible program designs can be found by considering the structure of the data concerned. In particular, this lesson is a core message of the influential textbook “How to Design Programs” by Felleisen, Findler, Flatt, and Krishnamurthi. However, that book discusses using only the structure of input data for guiding program design, typically leading towards structurally recursive programs. We argue that novice programmers should also be taught to consider the structure of output data, leading them also towards structurally corecursive programs.

    1 Introduction
    Where do programs come from? This mystery can be an obstacle to novice programmers, who can become overwhelmed by the design choices presented by a blank sheet of paper, or an empty editor window — where does one start? A good place to start, we tell them, is by analyzing the structure of the data that the program is to consume. For example, if the program h is to process a list of values, one may start by analyzing the structure of that list. Either the list is empty ([]), or it is non-empty (a : x) with a head (a) and a tail (x). This provides a candidate program structure:

      h []      = ...
      h (a : x) = ... a ... x ...

    where for the empty list some result must simply be chosen, and for a non-empty list the result depends on the head a and tail x.
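    A Scheme-style rendering of the same idea (the paper itself uses Haskell; sum-list and count-down are hypothetical examples): sum-list follows the structure of its input list, as in the template above, while count-down follows the structure of the list it produces, i.e. it is structurally corecursive.

      (define (sum-list xs)
        (if (null? xs)
            0                                   ; result chosen for the empty list
            (+ (car xs) (sum-list (cdr xs)))))  ; result built from head and recursive tail

      (define (count-down n)                    ; builds the list (n n-1 ... 1)
        (if (= n 0)
            '()
            (cons n (count-down (- n 1)))))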
  • Solving the Funarg Problem with Static Types (IFL ’21, September 01–03, 2021, Online)
    Solving the Funarg Problem with Static Types
    Caleb Helbling (Purdue University, West Lafayette, Indiana, USA), Fırat Aksoy (Istanbul University, Istanbul, Turkey)
    [email protected], [email protected]

    ABSTRACT: The difficulty associated with storing closures in a stack-based environment is known as the funarg problem. The funarg problem was first identified with the development of Lisp in the 1970s and hasn’t received much attention since then. The modern solution taken by most languages is to allocate closures on the heap, or to apply static analysis to determine when closures can be stack allocated. This is not a problem for most computing systems as there is an abundance of memory. However, embedded systems often have limited memory resources where heap allocation may cause memory fragmentation. We present a simple extension to the prenex fragment of System F that allows closures to be stack-allocated. We demonstrate a concrete implementation of this system in the […]

    […] downwards. The upwards funarg problem refers to the management of the stack when a function returns another function (that may potentially have a non-empty closure), whereas the downwards funarg problem refers to the management of the stack when a function is passed to another function (that may also have a non-empty closure). Solving the upwards funarg problem is considered to be much more difficult than the downwards funarg problem. The downwards funarg problem is easily solved by copying the closure’s lexical scope (captured variables) to the top of the program’s stack. For the upwards funarg problem, an issue arises with deallocation. A returning function may capture local variables, making the size of the closure dynamic.
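    A Scheme-style sketch of an upwards funarg (hypothetical names): the procedure returned by make-adder captures n, so n must outlive make-adder's stack frame; this is exactly the deallocation issue described above.

      (define (make-adder n)
        (lambda (x) (+ x n)))     ; the returned closure captures n

      (define add5 (make-adder 5))
      (add5 3)                    ; => 8, even though make-adder has already returned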
  • Panel: NSF-Sponsored Innovative Approaches to Undergraduate Computer Science
    Panel: NSF-Sponsored Innovative Approaches to Undergraduate Computer Science
    Stephen Bloch (Adelphi University), Amruth Kumar (Ramapo College), Stanislav Kurkovsky (Central CT State University), Clif Kussmaul (Muhlenberg College); Matt Dickerson (Middlebury College), moderator

    Program by Design (Stephen Bloch; NSF awards 0010064 & 0618543)
      Web sites: http://programbydesign.org, http://picturingprograms.org, http://www.ccs.neu.edu/home/matthias/HtDP2e/, http://racket-lang.org, http://wescheme.org
      Intervention: curriculum with supporting IDE, libraries, & texts
      Delivery: in class; software and textbook are free downloads or web-based
      Supervision: normally active, but can be done other ways
      Courses: middle school, pre-AP CS in HS, CS0, CS1, CS2 in college
      Languages: usually Scheme-like languages leading into Java; has also been done in Python, ML, Java, Scala, ...
      Focus: teaching problem-solving process, particularly test-driven development and use of data types to guide coding & testing

    Problets (Amruth Kumar; NSF award 0817187)
      Web site: http://www.problets.org
      Intervention: in- or after-class problem-solving exercises on programming concepts
      Delivery: applet in a browser
      Supervision: none; teacher not needed, although some adopters use it in active mode too
      Courses: AP-CS, CS 1, CS 2; also as refresher or to switch languages in other courses
      Languages: C, C++, Java, C#
      Focus: code tracing, debugging, expression evaluation, predicting program state

    Mobile Game Development (Stan Kurkovsky; NSF award DUE-0941348)
      Web site: http://www.mgdcs.com/
      Intervention: in-class or take-home programming projects
      Delivery: PC
      Supervision: passive; teacher as facilitator to answer Qs
      Courses: AP-CS, CS1, CS2
      Languages: Java
      Focus: core OO programming; intro to advanced subjects such as AI, networks, security

    POGIL (Clif Kussmaul; NSF award TUES 1044679)
      Web sites: http://pogil.org, http://cspogil.org
      Intervention: in-class activity
      Delivery: paper or web
      Supervision: passive; teacher as facilitator to answer Qs
      Courses: CS1, CS2, SE, etc.
  • An Optimized R5RS Macro Expander
    An Optimized R5RS Macro Expander
    Sean Reque
    A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science. Committee: Jay McCarthy (Chair), Eric Mercer, Quinn Snell. Department of Computer Science, Brigham Young University, February 2013. Copyright © 2013 Sean Reque. All Rights Reserved.

    ABSTRACT
    Macro systems allow programmers abstractions over the syntax of a programming language. This gives the programmer some of the same power possessed by a programming language designer, namely, the ability to extend the programming language to fit the needs of the programmer. The value of such systems has been demonstrated by their continued adoption in more languages and platforms. However, several barriers to widespread adoption of macro systems still exist. The language Racket [6] defines a small core of primitive language constructs, including a powerful macro system, upon which all other features are built. Because of this design, many features of other programming languages can be implemented through libraries, keeping the core language simple without sacrificing power or flexibility. However, slow macro expansion remains a lingering problem in the language's primary implementation, and in fact macro expansion currently dominates compile times for Racket modules and programs. Besides the typical problems associated with slow compile times, such as slower testing feedback, increased mental disruption during the programming process, and unscalable build times for large projects, slow macro expansion carries its own unique problems, such as poorer performance for IDEs and other software analysis tools.
  • LISP/360 Reference Manual, Fourth Edition (Document Number SCCO24)
    Stanford University Libraries, Dept. of Special Collections
    LISP/360 REFERENCE MANUAL, FOURTH EDITION
    Campus Computer Facility, Stanford Computation Center, Stanford University, Stanford, California
    Document Number SCCO24

    PREFACE
    This manual is intended to provide the LISP 1.5 user with a reference manual for the LISP 1.5 interpreter, assembler, and compiler on the Campus Facility 360/67. It assumes that the reader has a working knowledge of LISP 1.5 as described in the LISP 1.5 Primer by Clark Weissman, and that the reader has a general knowledge of the operating environment of OS 360. Beginning users of LISP will find the sections The LISP/360 System, Functions, Organization of Storage, LISP Job Set-up, and LISP/360 System Messages most helpful in obtaining a basic understanding of the LISP system. Other sections of the manual are intended for users desiring a more extensive knowledge of LISP. The particular implementation to which this reference manual is directed was started by Mr. J. Kent while he was at the University of Waterloo. It is modeled after his implementation of LISP 1.5 for the CDC 3600. Included in this edition is information on the use of the time-shared LISP system available on the 360/67, which was implemented by Mr. Robert Berns of the Campus Facility Computer staff.

    TABLE OF CONTENTS
    1. THE LISP/360 SYSTEM
    2. ORGANIZATION OF STORAGE
       2.1 Free Cell Storage (FCS)
           2.1.1 Atoms
           2.1.2 Numbers
           2.1.3 Object List
       2.2 Push-down Stack (PDS)
       2.3 System Functions
       2.4 Binary Program Space (BPS)
       2.5 Input/Output Buffers
    3.
  • A Scalable Provenance Evaluation Strategy
    Debugging Large-scale Datalog: A Scalable Provenance Evaluation Strategy
    DAVID ZHAO, University of Sydney; PAVLE SUBOTIĆ, Amazon; BERNHARD SCHOLZ, University of Sydney

    Logic programming languages such as Datalog have become popular as Domain Specific Languages (DSLs) for solving large-scale, real-world problems, in particular, static program analysis and network analysis. The logic specifications that model analysis problems process millions of tuples of data and contain hundreds of highly recursive rules. As a result, they are notoriously difficult to debug. While the database community has proposed several data provenance techniques that address the Declarative Debugging Challenge for Databases, in the case of analysis problems, these state-of-the-art techniques do not scale. In this article, we introduce a novel bottom-up Datalog evaluation strategy for debugging: our provenance evaluation strategy relies on a new provenance lattice that includes proof annotations and a new fixed-point semantics for semi-naïve evaluation. A debugging query mechanism allows arbitrary provenance queries, constructing partial proof trees of tuples with minimal height. We integrate our technique into Soufflé, a Datalog engine that synthesizes C++ code, and achieve high performance by using specialized parallel data structures. Experiments are conducted with Doop/DaCapo, producing proof annotations for tens of millions of output tuples. We show that our method has a runtime overhead of 1.31× on average while being more flexible than existing state-of-the-art techniques.

    CCS Concepts: • Software and its engineering → Constraint and logic languages; Software testing and debugging
    Additional Key Words and Phrases: Static analysis, Datalog, provenance
    ACM Reference format: David Zhao, Pavle Subotić, and Bernhard Scholz.
  • Dynamic Variables
    Dynamic Variables
    David R. Hanson and Todd A. Proebsting
    Microsoft Research, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, http://www.research.microsoft.com/
    November 2000. Technical Report MSR-TR-2000-109

    Abstract
    Most programming languages use static scope rules for associating uses of identifiers with their declarations. Static scope helps catch errors at compile time, and it can be implemented efficiently. Some popular languages—Perl, Tcl, TeX, and Postscript—use dynamic scope, because dynamic scope works well for variables that “customize” the execution environment, for example. Programmers must simulate dynamic scope to implement this kind of usage in statically scoped languages. This paper describes the design and implementation of imperative language constructs for introducing and referencing dynamically scoped variables—dynamic variables for short. The design is a minimalist one, because dynamic variables are best used sparingly, much like exceptions. The facility does, however, cater to the typical uses for dynamic scope, and it provides a cleaner mechanism for so-called thread-local variables. A particularly simple implementation suffices for languages without exception handling. For languages with exception handling, a more efficient implementation builds on existing compiler infrastructure. Exception handling can be viewed as a control construct with dynamic scope. Likewise, dynamic variables are a data construct with dynamic scope.

    Introduction
    Nearly all modern programming languages use static (or lexical) scope rules for determining variable bindings. Static scope can be implemented very efficiently and makes programs easier to understand. Dynamic scope is usually associated with “older” languages; notable examples include Lisp, SNOBOL4, and APL.
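    A Scheme-style sketch of the kind of simulation the abstract refers to: a "customizing" variable given dynamic extent by explicit save and restore. The names are hypothetical, this is not the construct the paper proposes, and a robust version would use dynamic-wind to restore on non-local exits.

      (define radix 10)                      ; behaves like a customization variable

      (define (with-radix r thunk)
        (let ((saved radix))
          (set! radix r)                     ; install the new value...
          (let ((result (thunk)))
            (set! radix saved)               ; ...and restore the old one afterwards
            result)))

      (with-radix 16 (lambda () (number->string 255 radix)))   ; => "ff"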
  • The Evolution of Lisp
    The Evolution of Lisp
    Guy L. Steele Jr. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142; phone (617) 234-2860; fax (617) 243-4444; e-mail [email protected])
    Richard P. Gabriel (Lucid, Inc., 707 Laurel Street, Menlo Park, California 94025; phone (415) 329-8400; fax (415) 329-8480; e-mail [email protected])

    Abstract
    Lisp is the world’s greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the “hacker culture” than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy’s paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
  • Final Shift for Call/Cc: Direct Implementation of Shift and Reset
    Final Shift for Call/cc: Direct Implementation of Shift and Reset
    Martin Gasbichler, Michael Sperber (Universität Tübingen)
    fgasbichl,[email protected]

    Abstract
    We present a direct implementation of the shift and reset control operators in the Scheme 48 system. The new implementation improves upon the traditional technique of simulating shift and reset via call/cc. Typical applications of these operators exhibit space savings and a significant overall performance gain. Our technique is based upon the popular incremental stack/heap strategy for representing continuations. We present implementation details as well as some benchmark measurements for typical applications.

    Categories and Subject Descriptors: D.3.3 [Programming Languages]: Language Constructs and Features—Control structures
    General Terms: Languages, Performance
    Keywords: […]

    Figure 1. Continuation semantics:
      ⟦x⟧ρ     = λk. k (ρ x)
      ⟦λx.M⟧ρ  = λk. k (λv. λk′. ⟦M⟧(ρ[x ↦ v]) k′)
      ⟦E1 E2⟧ρ = λk. ⟦E1⟧ρ (λf. ⟦E2⟧ρ (λa. f a k))

    […] this transformation has a direct counterpart as a semantic specification of the λ calculus; Figure 1 shows such a semantic specification for the bare λ calculus: each ρ is an environment mapping variables to values, and each k is a continuation—a function from values to values. An abstraction denotes a function from an environment to a function accepting an argument and a continuation. In the context of the semantics, the rule for call/cc is this:

      ⟦call/cc E⟧ρ = λk. ⟦E⟧ρ (λf. f (λv. λk′. k v) k)

    Call/cc E evaluates E and calls the result f; it then applies f to an escape function which discards the continuation k′ passed to it and replaces it by the continuation k of the call/cc expression.
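    Assuming shift and reset as provided by the paper's Scheme 48 implementation, a small behavioral sketch of the operators themselves:

      (reset (+ 1 (shift k (k (k 10)))))
      ;; => 12: k reifies the delimited context (lambda (v) (+ 1 v)),
      ;; so the body of the shift computes (+ 1 (+ 1 10)).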
  • Implementing a Vau-Based Language with Multiple Evaluation Strategies
    Implementing a Vau-based Language With Multiple Evaluation Strategies
    Logan Kearsley, Brigham Young University

    Introduction
    Vau expressions can be simply defined as functions which do not evaluate their argument expressions when called, but instead provide access to a reification of the calling environment to explicitly evaluate arguments later, and are a form of statically-scoped fexpr (Shutt, 2010). Control over when and if arguments are evaluated allows vau expressions to be used to simulate any evaluation strategy. Additionally, the ability to choose not to evaluate certain arguments, to inspect the syntactic structure of unevaluated arguments, allows vau expressions to simulate macros at run-time and to replace many syntactic constructs that otherwise have to be built in to a language implementation. In principle, this allows for significant simplification of the interpreter for vau expressions; compared to McCarthy's original LISP definition (McCarthy, 1960), vau's eval function removes all references to built-in functions or syntactic forms, and alters function calls to skip argument evaluation and add an implicit env parameter (in real implementations, the name of the environment parameter is customizable in a function definition, rather than being a keyword built in to the language).

    McCarthy's Eval Function (transformed from the original M-expressions into more familiar s-expressions):

      (define (eval e a)
        (cond
          ((atom e) (assoc e a))
          ((atom (car e))
           (cond
             ((eq (car e) 'quote) (cadr e))
             ((eq (car e) 'atom) (atom (eval (cadr e) a)))
             ((eq (car e) 'eq) (eq (eval (cadr e) a)
                                   (eval (caddr e) a)))
             ((eq (car e) 'car) (car (eval (cadr e) a)))
             ((eq (car e) 'cdr) (cdr (eval (cadr e) a)))
             ((eq (car e) 'cons) (cons (eval (cadr e) a)
                                       (eval (caddr e) a)))
             ((eq (car e) 'cond) (evcon.

    Vau Eval Function:

      (define (eval e a)
        (cond
          ((atom e) (assoc e a))
          ((atom (car e))
           (eval (cons (assoc (car e) a)
                       (cdr e))
                 a))
          ((eq (caar e) 'vau)
           (eval (caddar e)
                 (cons (cons (cadar e) 'env)
                       (append (pair (cadar e) (cdr e))
                               a))))))