An Adaptive Evaluation Strategy for Non-Strict Programs

Optimistic Evaluation: An Adaptive Evaluation Strategy for Non-Strict Programs

Robert Ennals
Computer Laboratory, University of Cambridge
[email protected]

Simon Peyton Jones
Microsoft Research Ltd, Cambridge
[email protected]

ABSTRACT

Lazy programs are beautiful, but they are slow because they build many thunks. Simple measurements show that most of these thunks are unnecessary: they are in fact always evaluated, or are always cheap. In this paper we describe Optimistic Evaluation, an evaluation strategy that exploits this observation. Optimistic Evaluation complements compile-time analyses with run-time experiments: it evaluates a thunk speculatively, but has an abortion mechanism to back out if it makes a bad choice. A run-time adaption mechanism records expressions found to be unsuitable for speculative evaluation, and arranges for them to be evaluated more lazily in the future.

We have implemented optimistic evaluation in the Glasgow Haskell Compiler. The results are encouraging: many programs speed up significantly (5-25%), some improve dramatically, and none go more than 15% slower.

Categories and Subject Descriptors

D.3.2 [Programming Languages]: Language Classifications - Applicative (functional) languages, Lazy Evaluation; D.3.4 [Programming Languages]: Processors - Code generation, optimization, profiling

General Terms

Performance, Languages, Theory

Keywords

Lazy Evaluation, Online Profiling, Haskell

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICFP'03, August 25-29, 2003, Uppsala, Sweden.
Copyright 2003 ACM 1-58113-756-7/03/0008 ...$5.00.

1. INTRODUCTION

Lazy evaluation is great for programmers [16], but it carries significant run-time overheads. Instead of evaluating a function's argument before the call, lazy evaluation heap-allocates a thunk, or suspension, and passes that. If the function ever needs the value of that argument, it forces the thunk. This argument-passing mechanism is called call-by-need, in contrast to the more common call-by-value. The extra memory traffic caused by thunk creation and forcing makes lazy programs perform noticeably worse than their strict counterparts, both in time and space.
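To make the cost concrete, here is a minimal Haskell sketch (ours, not the paper's) of the two argument-passing mechanisms. Under call-by-need the argument is passed as a thunk and may never be forced; the commented-out line uses seq to force the argument first, simulating call-by-value, and would diverge:

    -- Minimal sketch (not from the paper): call-by-need versus call-by-value.
    ignore :: Integer -> Bool
    ignore _ = True          -- never inspects its argument

    main :: IO ()
    main = do
      -- Call-by-need: (sum [1 ..]) is heap-allocated as a thunk and
      -- never forced, so this prints True immediately.
      print (ignore (sum [1 ..]))
      -- Call-by-value analogue: forcing the argument first would loop
      -- forever, because the sum of an infinite list never terminates.
      -- print (sum [1 ..] `seq` ignore (sum [1 ..]))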
Good compilers for lazy languages therefore use sophisticated static analyses to turn call-by-need (slow, but sound) into call-by-value (fast, but dangerous). Two basic analyses are used: strictness analysis identifies expressions that will always be evaluated [40], while cheapness analysis locates expressions that are certain to be cheap (and safe) to evaluate [7].

However, any static analysis must be conservative, forcing it to keep thunks that are "probably unnecessary" but not "provably unnecessary". Figure 1 shows measurements taken from the Glasgow Haskell Compiler, a mature optimising compiler for Haskell (GHC implements strictness analysis but not cheapness analysis, for reasons discussed in Section 7.1). The figure shows that most thunks are either always used or are "cheap and usually used". The exact definition of "cheap and usually used" is not important: we use this data only as prima facie evidence that there is a big opportunity here. If the language implementation could somehow use call-by-value for most of these thunks, then the overheads of laziness would be reduced significantly.

[Figure 1: How thunks are commonly used. A bar chart plotting, for each benchmark (digits-of-e1, digits-of-e2, exp3_8, gen_regexps, integrate, paraffins, primes, queens, rfib, tak, wheel-sieve1, wheel-sieve2, x2n1, constraints, anna, infer, gamteb, rsa, symalg, wordcount, atom, boyer, circsim), the percentage of dynamically allocated thunks that are always used and the percentage that are cheap and usually used.]

This paper presents a realistic approach to optimistic evaluation, which does just that. Optimistic evaluation decides at run-time what should be evaluated eagerly, and provides an abortion mechanism that backs out of eager computations that go on for too long. We make the following contributions:

• The idea of optimistic evaluation per se is rather obvious (see Section 7). What is not obvious is how to (a) make it cheap enough that the benefits exceed the costs; (b) avoid potholes: it is much easier to get big speedups on some programs if you accept big slowdowns on others; (c) make it robust enough to run full-scale application programs without modification; and (d) do all this in a mature, optimising compiler that has already exploited most of the easy wins. In Section 2 we describe mechanisms that meet these goals. We have demonstrated that they work in practice by implementing them in the Glasgow Haskell Compiler (GHC) [29].

• Optimistic evaluation is a slippery topic. We give an operational semantics for a lazy evaluator enhanced with optimistic evaluation (Section 5). This abstract machine allows us to refine our general design choices into a precise form.

The performance numbers are encouraging: we have produced a stable extended version of GHC that is able to compile arbitrary Haskell programs. When tested on a set of reasonably large, realistic programs, it produced a geometric mean speedup of just over 15%, with no program slowing down by more than 15%. Perhaps more significantly, naively written programs that suffer from space leaks can speed up massively, with improvements of greater than 50% being common.

These are good results for a compiler that is as mature as GHC, where 10% is now big news. (For example, strictness analysis buys 10-20% [31].) While the baseline is mature, we have only begun to explore the design space for optimistic evaluation, so we are confident that our initial results can be further improved.

2. OPTIMISTIC EVALUATION

Our approach to compiling lazy programs involves modifying only the back end of the compiler and the run-time system. All existing analyses and transformations are first applied as usual, and the program is simplified into a form in which all thunk allocation is done by a let expression (Section 5.1). The back end is then modified as follows:

• Each let expression is compiled such that it can either build a thunk for its right-hand side, or evaluate its right-hand side speculatively (Section 2.1). Which of these it does depends on the state of a run-time adjustable switch, one for each let expression. The set of all such switches is known as the speculation configuration. The configuration is initialised so that all lets are speculative.

• If the evaluator finds itself speculatively evaluating a very expensive expression, then abortion is used to suspend the speculative computation, building a special continuation thunk in the heap. Execution continues with the body of the speculative let whose right-hand side has now been suspended (Section 2.2).

• Recursively-generated structures such as infinite lists can be generated in chunks, thus reducing the costs of laziness, while still avoiding doing lots of unneeded work (Section 2.3).

• Online profiling is used to find out which lets are expensive to evaluate, or are rarely used (Section 3). The profiler then modifies the speculation configuration by switching off speculative evaluation, so that the offending lets evaluate their right-hand sides lazily rather than strictly.

Just as a cache exploits the "principle" of locality, optimistic evaluation exploits the principle that most thunks are either cheap or will ultimately be evaluated. In practice, though, we have found that a realistic implementation requires quite an array of mechanisms (abortion, adaption, online profiling, support for chunkiness, and so on) to make optimistic evaluation really work, in addition to taking care of real-world details like errors, input/output, and so on. The following sub-sections introduce these mechanisms one at a time. Identifying and addressing these concerns is one of our main contributions. To avoid continual asides, we tackle related work in Section 7. We use the term "speculative evaluation" to mean "an evaluation whose result may not be needed", while "optimistic evaluation" describes the entire evaluation strategy (abortion, profiling, adaption, etc.).

2.1 Switchable Let Expressions

In the intermediate language consumed by the code generator, the let expression is the only construct that allocates thunks. In our design, each let chooses dynamically whether to allocate a thunk, or to evaluate the right-hand side of the let speculatively instead. Here is an example:

    let x = <rhs> in <body>

The code generator translates it to code that behaves logically like the following:

    if (LET237 != 0) {
        x = value of <rhs>
    } else {
        x = lazy thunk to compute <rhs> when needed
    }
    evaluate <body>

Our current implementation associates one static switch (in this case LET237) with each let. There are many other possible choices. For example, for a function like map, which is used in many different contexts, it might be desirable for the switch to take the calling context into account. We have not explored these context-dependent possibilities because the complexity costs of dynamic switches seem to overwhelm the uncertain benefits. In any case, GHC's aggressive inlining tends to reduce this particular problem.

If <rhs> were large, then this compilation scheme could result in code bloat. We avoid this by lifting the right-hand sides of such let expressions out into new functions.
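The same choice can be modelled at source level. The following sketch is our illustration, not GHC's generated code: the per-let switch becomes an explicit Bool, and speculative evaluation of the right-hand side is simulated with seq (which, like real speculation of a single let, evaluates only to weak head normal form):

    -- Illustrative model (ours) of a switchable let. With the switch on,
    -- the right-hand side is evaluated before the body runs; with it off,
    -- the right-hand side is passed to the body as an ordinary thunk.
    switchableLet :: Bool      -- this let-site's switch (cf. LET237 /= 0)
                  -> a         -- the right-hand side <rhs>
                  -> (a -> b)  -- the body <body>, abstracted over x
                  -> b
    switchableLet speculate rhs body
      | speculate = rhs `seq` body rhs  -- speculative: force <rhs> now
      | otherwise = body rhs            -- lazy: <rhs> stays a thunk

In the real system the switch is a flag tested by compiled code and flipped by the run-time profiler; this model captures only the two behaviours a single let can exhibit.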
2.2 Abortion

The whole point of optimistic evaluation is to use call-by-value in the hope that evaluation will terminate quickly. It is obviously essential to have a way to recover when this does not happen.

[Of Section 2.3, on chunky evaluation, only a fragment survives in this excerpt:] ... is evaluated speculatively. However, once the chunk is finished (for example, after 10 elements have been created), rest switches over to lazy evaluation, causing the function to terminate quickly.
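The surviving fragment above can be made concrete with a small sketch. The following is our source-level illustration of chunky evaluation, assuming a chunk size of 10; in the paper the mechanism lives in the code generator and run-time system rather than in source code:

    -- Illustration (ours) of chunky evaluation of an infinite list.
    -- Within a chunk each tail is forced speculatively; at a chunk
    -- boundary the tail is left as a thunk, so at most about chunkSize
    -- unneeded cells are ever built ahead of the consumer.
    chunkSize :: Int
    chunkSize = 10

    chunkyFrom :: Int -> Integer -> [Integer]
    chunkyFrom 0 n = n : chunkyFrom chunkSize (n + 1)  -- boundary: lazy tail
    chunkyFrom k n = rest `seq` (n : rest)             -- in-chunk: eager tail
      where rest = chunkyFrom (k - 1) (n + 1)

    -- take 25 (chunkyFrom chunkSize 0) yields [0..24], built roughly ten
    -- cells at a time: forcing a boundary thunk resumes the next chunk.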