A Study of Syntactic and Semantic Artifacts and Its Application to Lambda Definability, Strong Normalization, and Weak Normalization in the Presence of State


BRICS Report Series RS-08-3, ISSN 0909-0878, April 2008. Johan Munk (advisor: Olivier Danvy). May 4, 2007; revised August 22, 2007. Student id: 20000345. Email: <[email protected]>

Copyright © 2008, Johan Munk. BRICS, Department of Computer Science, University of Aarhus, IT-parken, Aabogade 34, DK-8200 Aarhus N, Denmark. All rights reserved. Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. BRICS publications are in general accessible through http://www.brics.dk and ftp://ftp.brics.dk; this document is in subdirectory RS/08/3/.

Abstract

Church's lambda-calculus underlies the syntax (i.e., the form) and the semantics (i.e., the meaning) of functional programs. This thesis is dedicated to studying man-made constructs (i.e., artifacts) in the lambda calculus. For example, one puts the expressive power of the lambda calculus to the test in the area of lambda definability.
In this area, we present a course-of-value representation bridging Church numerals and Scott numerals. We then turn to weak and strong normalization using Danvy et al.'s syntactic and functional correspondences. We give a new account of Felleisen and Hieb's syntactic theory of state, and of abstract machines for strong normalization due to Curien, Crégut, Lescanne, and Kluge.

Contents

Part I: λ-calculi and programming languages

1 The λ-calculus
  1.1 λ-terms
    1.1.1 Conventions
    1.1.2 Free variables and bound variables
    1.1.3 λ-terms modulo bound variable names
  1.2 Reductions and normal forms
  1.3 One-step reduction and equality
    1.3.1 β-equivalence and convertibility
  1.4 Uniqueness of normal forms
  1.5 Reduction strategies
    1.5.1 Normal-order reduction
    1.5.2 Standardization and normalization
    1.5.3 Reducing to weak head normal forms
  1.6 The λ-calculus with de Bruijn indices
    1.6.1 Correspondence with terms using named variables
    1.6.2 β-contraction on de Bruijn-indexed λ-terms
    1.6.3 Defining the λ-calculus for de Bruijn-indexed λ-terms
  1.7 The λ-calculus defined as a proof system
  1.8 Summary

2 Definability in the λ-calculus
  2.1 Church numerals
  2.2 Representing structured elements
    2.2.1 Ordered pairs
    2.2.2 Kleene's predecessor function and the corresponding subtraction function
    2.2.3 Boolean values and functions on booleans
    2.2.4 The factorial function
    2.2.5 Representing lists
  2.3 Dynamic programming
    2.3.1 A longest common subsequence
    2.3.2 Length of a longest common subsequence
  2.4 Scott numerals
    2.4.1 Scott numerals are selectors in pair-represented lists
    2.4.2 Lists and streams
    2.4.3 Functions on Scott numerals
  2.5 A correspondence between Church numerals and Scott numerals
  2.6 An alternative definition of the subtraction function for Church numerals
    2.6.1 Generalizing Kleene's predecessor function
    2.6.2 A course-of-value representation of Church numerals
    2.6.3 Lists as generalized pairs
    2.6.4 The alternative definition of the subtraction function
  2.7 The quotient function and the remainder function
  2.8 Extending the λ-calculus with literals and a corresponding primitive successor function
  2.9 Summary

3 Other λ-calculi
  3.1 The λβη-calculus
  3.2 Explicit substitutions
  3.3 The λv-calculus
    3.3.1 Standard reduction
    3.3.2 Normalization
    3.3.3 Reducing to weak head normal forms
  3.4 Summary

4 Programming languages
  4.1 Machine-level and high-level programming languages
  4.2 Paradigms
  4.3 Defining a programming language
  4.4 Syntax
  4.5 Semantics
    4.5.1 Denotational semantics
    4.5.2 Abstract machines
    4.5.3 Reduction semantics
  4.6 Summary

5 λ-calculi, programming languages, and semantic artifacts
  5.1 Call by value, call by name, and the λ-calculus
    5.1.1 Call by value
    5.1.2 Call by name
  5.2 A syntactic correspondence
    5.2.1 The λρ̂-calculus
    5.2.2 Correspondence with the λ-calculus
    5.2.3 A normal-order reduction semantics for the λρ̂-calculus
    5.2.4 The reduction semantics with explicit decomposition
    5.2.5 Refocusing
    5.2.6 Obtaining an abstract machine
  5.3 A functional correspondence
    5.3.1 A definitional interpreter
    5.3.2 Closure conversion
    5.3.3 Continuation-passing-style/direct-style transformations
    5.3.4 Defunctionalization/refunctionalization
    5.3.5 Relating interpreters and abstract machines
  5.4 Summary

6 Including imperative constructs
  6.1 State variables and assignments
  6.2 Felleisen and Hieb's revised calculus of state
    6.2.1 The term language
    6.2.2 Conventions
    6.2.3 The ρ-application
    6.2.4 Notions of reduction
    6.2.5 One-step reduction and equality
    6.2.6 The Church-Rosser property
    6.2.7 Standard reduction
  6.3 An applicative-order reduction semantics with explicit substitution including the imperative constructs
    6.3.1 The term language
    6.3.2 The notion of reduction
    6.3.3 A one-step reduction function
    6.3.4 Evaluation
  6.4 Derivation of a corresponding abstract machine
    6.4.1 Introduction of explicit decomposition and plugging
    6.4.2 Obtaining a syntactically corresponding abstract machine
  6.5 Summary

Part II: Strong normalization

7 Strong normalization with actual substitution
  7.1 Obtaining a reduction semantics
  7.2 Deriving a corresponding abstract machine
  7.3 Summary

8 Strong normalization with explicit substitutions
  8.1 Strong normalization via the λŝ-calculus
    8.1.1 Motivation
    8.1.2 The λŝ-calculus
    8.1.3 A normal-order reduction semantics for the λŝ-calculus
    8.1.4 Reduction-free strong normalization in the λŝ-calculus
  8.2 Strong normalization via the λŝ̂-calculus
    8.2.1 The λŝ̂-calculus
    8.2.2 Obtaining an efficient abstract machine
  8.3 Summary

9 Strong normalization starting from Lescanne's normalizer
  9.1 Lescanne's specification made deterministic
  9.2 Obtaining a corresponding abstract machine
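As a concrete illustration of the lambda-definability material of Chapter 2, here is a minimal sketch of Church numerals, Scott numerals, and Kleene's pair-based predecessor, transcribed into Python lambdas. The helper names (church_to_int, pair, etc.) are illustrative choices of mine, not notation from the thesis.

```python
# Church numeral n iterates a function n times.
church_zero = lambda s: lambda z: z
church_succ = lambda n: lambda s: lambda z: s(n(s)(z))

def church_to_int(n):
    return n(lambda x: x + 1)(0)

# Scott numeral n performs a one-step case analysis: zero, or successor of m.
scott_zero = lambda z: lambda s: z
scott_succ = lambda m: lambda z: lambda s: s(m)

def scott_to_int(n):
    return n(0)(lambda m: 1 + scott_to_int(m))

# Church-encoded pairs, used for Kleene's predecessor.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)

def church_pred(n):
    # Iterate (i, i-1) -> (i+1, i) starting from (0, 0); the predecessor
    # of n is the second component after n steps.
    step = lambda p: pair(church_succ(fst(p)))(fst(p))
    return snd(n(step)(pair(church_zero)(church_zero)))

three = church_succ(church_succ(church_succ(church_zero)))
```

Note how the Church predecessor needs n iterations, whereas a Scott numeral exposes its predecessor in one step; this asymmetry is part of what motivates the course-of-value representation bridging the two.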
Recommended publications
  • From Interpreter to Compiler and Virtual Machine: A Functional Derivation
    Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, and Jan Midtgaard. BRICS Report Series RS-03-14, ISSN 0909-0878, March 2003. BRICS, Department of Computer Science, University of Aarhus, Ny Munkegade, building 540, DK-8000 Aarhus C, Denmark. This document is in subdirectory RS/03/14/.

    Abstract: We show how to derive a compiler and a virtual machine from a compositional interpreter. We first illustrate the derivation with two evaluation functions and two normalization functions. We obtain Krivine's machine, Felleisen et al.'s CEK machine, and a generalization of these machines performing strong normalization, which is new.
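The CEK machine mentioned in this abstract can be sketched compactly. The following is my own illustrative rendering of a call-by-value CEK machine in Python (term constructors, state tuples, and continuation tags are invented names, not the paper's concrete syntax): the machine shuttles between "eval" states, which decompose a term, and "apply" states, which feed a value to the continuation.

```python
def cek_eval(term):
    # Terms: ('var', x) | ('lam', x, body) | ('app', rator, rand)
    # States: ('eval', term, env, kont) or ('apply', kont, value)
    state = ('eval', term, {}, ('halt',))
    while True:
        if state[0] == 'eval':
            _, t, env, k = state
            if t[0] == 'var':
                state = ('apply', k, env[t[1]])
            elif t[0] == 'lam':
                state = ('apply', k, ('clo', t[1], t[2], env))
            else:  # application: evaluate the operator first
                state = ('eval', t[1], env, ('arg', t[2], env, k))
        else:
            _, k, v = state
            if k[0] == 'halt':
                return v
            if k[0] == 'arg':      # operator done; evaluate the operand
                _, arg, env, k2 = k
                state = ('eval', arg, env, ('fun', v, k2))
            else:                  # ('fun', clo, k2): enter the closure
                _, clo, k2 = k
                _, x, body, cenv = clo
                new_env = dict(cenv)
                new_env[x] = v
                state = ('eval', body, new_env, k2)

identity = ('lam', 'y', ('var', 'y'))
self_app_result = cek_eval(('app', ('lam', 'x', ('var', 'x')), identity))
```

The three components Control, Environment, Kontinuation give the machine its name; the derivation in the paper obtains exactly this shape mechanically from a compositional interpreter.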
  • From Operational Semantics to Abstract Machines: Preliminary Results
    John Hannan and Dale Miller, University of Pennsylvania, Department of Computer and Information Science, Technical Report No. MS-CIS-90-21, April 1990. Posted at ScholarlyCommons: https://repository.upenn.edu/cis_reports/526

    Abstract: The operational semantics of functional programming languages is frequently presented using inference rules within simple meta-logics. Such presentations of semantics can be high-level and perspicuous, since meta-logics often handle numerous syntactic details in a declarative fashion. This is particularly true of the meta-logic we consider here, which includes simply typed λ-terms, quantification at higher types, and β-conversion. Evaluation of functional programming languages is also often presented using low-level descriptions based on abstract machines: simple term rewriting systems in which few high-level features are present. In this paper, we illustrate how a high-level description of evaluation using inference rules can be systematically transformed into a low-level abstract machine by removing dependencies on high-level features of the meta-logic until the resulting inference rules are so simple that they can be immediately identified as specifying an abstract machine.
In particular, we present in detail the transformation of two inference rules specifying call-by-name evaluation of the untyped λ-calculus into the Krivine machine, a stack-based abstract machine that implements such evaluation.
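The Krivine machine described above is small enough to sketch in full. The following Python transcription (my own representation choices: tuples for de Bruijn-indexed terms, Python lists for the environment and argument stack) implements call-by-name evaluation to weak head normal form.

```python
# Terms: ('ind', n) | ('lam', body) | ('app', rator, rand), de Bruijn-indexed.

def krivine(term):
    env, stack = [], []
    while True:
        if term[0] == 'app':
            # Push the argument as a closure; continue with the operator.
            stack.append((term[2], env))
            term = term[1]
        elif term[0] == 'lam':
            if not stack:
                return (term, env)      # weak head normal form reached
            env = [stack.pop()] + env   # bind the top of the stack
            term = term[1]
        else:
            term, env = env[term[1]]    # ('ind', n): enter the n-th closure

K = ('lam', ('lam', ('ind', 1)))   # λx.λy.x
A = ('lam', ('ind', 0))            # identity
B = ('lam', ('lam', ('ind', 0)))   # λx.λy.y
whnf_term, whnf_env = krivine(('app', ('app', K, A), B))
```

K A B reduces to A under call by name, and the machine indeed returns A's code with an empty environment; note that B is pushed but never evaluated, the hallmark of call by name.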
  • A Systematic Study of Functional Language Implementations
    Rémi Douence and Pascal Fradet, INRIA / IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France. [email protected] [email protected]

    Abstract: We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment- and graph-based implementations. We describe for each compilation step the diverse alternatives as program transformations. In some cases, we illustrate how to compare or relate compilation techniques, express global optimizations or hybrid implementations. We also provide a classification of well-known abstract machines.

    Key-words: functional programming, compilers, optimization, program transformation, combinators

    1 INTRODUCTION

    One of the most studied issues concerning functional languages is their implementation. Since Landin's seminal proposal, 30 years ago [31], a plethora of new abstract machines or compilation techniques have been proposed. The list of existing abstract machines includes the SECD [31], the Cam [10], the CMCM [36], the Tim [20], the Zam [32], the G-machine [27] and the Krivine-machine [11].
  • Abstract Machines for Programming Language Implementation
    Future Generation Computer Systems 16 (2000) 739–751. Stephan Diehl (Universität des Saarlandes, Saarbrücken, Germany), Pieter Hartel (University of Southampton, UK), Peter Sestoft (Royal Veterinary and Agricultural University, Frederiksberg, Denmark). Accepted 24 June 1999.

    Abstract: We present an extensive, annotated bibliography of the abstract machines designed for each of the main programming paradigms (imperative, object oriented, functional, logic and concurrent). We conclude that whilst a large number of efficient abstract machines have been designed for particular language implementations, relatively little work has been done to design abstract machines in a systematic fashion. © 2000 Elsevier Science B.V. All rights reserved. Keywords: Abstract machine; Compiler design; Programming language; Intermediate language

    1. What is an abstract machine?

    Abstract machines are machines because they permit step-by-step execution of programs; they are abstract because they omit the many details of real (hardware) machines. Abstract machines provide an intermediate language stage for compilation. They bridge the gap between the high level of a programming language and the low level of a real machine. […] the next instruction to be executed. The program counter is advanced when the instruction is finished. This basic control mechanism of an abstract machine is also known as its execution loop.

    1.1. Alternative characterizations

    The above characterization fits many abstract machines, but some abstract machines are more abstract.
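The execution loop described above (fetch the instruction at the program counter, execute it, advance) can be made concrete with a toy machine. The instruction set below (PUSH/ADD/MUL/HALT) is invented purely for illustration and is not taken from the bibliography.

```python
def run(program):
    pc, stack = 0, []
    while True:
        op = program[pc]           # fetch the instruction at the program counter
        if op[0] == 'PUSH':
            stack.append(op[1])
        elif op[0] == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op[0] == 'HALT':
            return stack.pop()
        pc += 1                    # advance when the instruction is finished

# (2 + 3) * 4, compiled by hand into the toy instruction set.
prog = [('PUSH', 2), ('PUSH', 3), ('ADD',), ('PUSH', 4), ('MUL',), ('HALT',)]
```

The `while` loop with its explicit `pc` is exactly the "basic control mechanism" the survey calls the execution loop.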
  • A Certified Extension of the Krivine Machine for a Call-by-Name Higher-Order Imperative Language
    Leonardo Rodríguez, Daniel Fridlender, and Miguel Pagano, Universidad Nacional de Córdoba, FaMAF, Córdoba, Argentina. {lrodrig2,fridlend,pagano}@famaf.unc.edu.ar

    Abstract: In this paper we present a compiler that translates programs from an imperative higher-order language into a sequence of instructions for an abstract machine. We consider an extension of the Krivine machine for the call-by-name lambda calculus, which includes strict operators and imperative features. We show that the compiler is correct with respect to the big-step semantics of our language, both for convergent and divergent programs.

    1998 ACM Subject Classification: F.4.1 Mathematical Logic. Keywords and phrases: abstract machines, compiler correctness, big-step semantics. Digital Object Identifier: 10.4230/LIPIcs.TYPES.2013.230

    1 Introduction

    Important advancements have been made during the last decade in the field of compiler certification, the project CompCert [19, 20] being the most significant achievement, since it deals with a realistic compiler for the C language. It is still an active research topic, open to new techniques and experimentation. In this work we report our experiments in the use of known techniques to prove the correctness of a compiler for a call-by-name lambda calculus extended with strict operators and imperative features. Most compilers are multi-pass, meaning that the process of translation usually involves successive symbolic manipulations from the source program to the target program. The compilation of a source program is often carried out as a sequence of translations through several intermediate languages, each one closer to assembly code than the previous one.
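To give a flavor of compiling to a Krivine-style machine, here is a sketch for the pure call-by-name fragment only (the paper's strict operators and imperative features are omitted). The instruction names Access/Grab/Push follow the standard presentation of the instruction-based Krivine machine and are not necessarily the paper's; the encoding is mine.

```python
def compile_(t):
    # Terms: ('ind', n) | ('lam', body) | ('app', rator, rand), de Bruijn-indexed.
    if t[0] == 'ind':
        return [('Access', t[1])]
    if t[0] == 'lam':
        return [('Grab',)] + compile_(t[1])
    # Application: push code for the argument, then run the operator's code.
    return [('Push', compile_(t[2]))] + compile_(t[1])

def run(code):
    env, stack = [], []
    while True:
        op = code[0]
        if op[0] == 'Push':                 # push a closure for the argument
            stack.append((op[1], env))
            code = code[1:]
        elif op[0] == 'Grab':
            if not stack:
                return (code, env)          # weak head normal form: a function
            env = [stack.pop()] + env       # bind the top of the stack
            code = code[1:]
        else:                               # ('Access', n): jump to the n-th closure
            code, env = env[op[1]]

K = ('lam', ('lam', ('ind', 1)))   # λx.λy.x
A = ('lam', ('ind', 0))            # identity
B = ('lam', ('lam', ('ind', 0)))   # λx.λy.y
final_code, final_env = run(compile_(('app', ('app', K, A), B)))
```

K A B reduces to A, and the machine halts with exactly A's compiled code. Separating a compile phase from a run phase is what distinguishes this instruction-based machine from the term-traversing Krivine machine.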
  • Krivine Nets: A Semantic Foundation for Distributed Execution
    Olle Fredriksson and Dan R. Ghica, University of Birmingham

    Abstract: We define a new approach to compilation to distributed architectures based on networks of abstract machines. Using it we can implement a generalised and fully transparent form of Remote Procedure Call that supports calling higher-order functions across node boundaries, without sending actual code. Our starting point is the classic Krivine machine, which implements reduction for untyped call-by-name PCF. We successively add the features that we need for distributed execution and show the correctness of each addition. Then we construct a two-level operational semantics, where the high level is a network of communicating machines, and the low level is given by local machine transitions. Using these networks, we arrive at our final system, the Krivine Net. We show that Krivine…

    The case for machine-independent programming has been made, quite successfully, a long time ago. Machine independence means that programs can just be recompiled to run on different devices with similar architectures. Architecture independence pushes this idea even further, to devices with different architectures, such as conventional CPUs, distributed systems, GPUs and reconfigurable hardware. In a series of papers [12–14] we have examined the possibility of lifting machine independence to architecture independence in the context of distributed computing. The reason is that in distributed computing the deployment of a program is often reflected at the level of source code. For example, these two ERLANG programs: c(A_pid)-> receive X -> A_pid ! X*X end, c(A_pid).
  • Simplification of Abstract Machine for Functional Language and Its Theoretical Investigation
    Journal of Software. Shin-Ya Nishizaki, Kensuke Narita, Tomoyuki Ueda, Department of Computer Science, Tokyo Institute of Technology, Ookayama, Meguro, Tokyo, Japan. Corresponding author: tel. +81-3-5734-2772; email: [email protected]. Manuscript submitted February 18, 2015; accepted March 4, 2015. doi: 10.17706/jsw.10.10.1148-1159

    Abstract: Many researchers have studied abstract machines in order to give operational semantics to various kinds of programming languages. For example, Landin's SECD machine and Curien's Categorical Abstract Machine were proposed for functional programming languages and are useful not only for theoretical studies but also for the implementation of practical language processors. We study the simplification of the SECD machine and design a new abstract machine, called the Simple Abstract Machine. We achieve the simplification of the SECD machine by abstracting away substitutions for variables. In the Simple Abstract Machine, we can formalize first-class continuations more simply and intelligibly than in the SECD machine.

    Key words: Programming language theory, lambda calculus, first-class continuation, abstract machine.

    1. Introduction

    Many researchers have proposed abstract machines in order to give operational semantics to various kinds of programming languages, such as Landin's SECD machine [1], [2] and Curien's Categorical Abstract Machine [3], [4]. From the theoretical viewpoint, there are many kinds of research on abstract machines. Graham formalized and verified an implementation of the SECD machine using the HOL prover [5]. Hardin et al. formulated abstract machines for functional languages in the framework of explicit substitutions [6]. Ohori studied abstract machines from the proof-theoretic approach [7]. Curien et al.
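For reference, here is a minimal SECD-style machine for call-by-value evaluation, transcribed into Python as a sketch (the term encoding, instruction tags, and dict-based environments are my own choices, not Landin's or this paper's formulation). The four components are the classic stack S, environment E, control C, and dump D.

```python
def secd(term):
    # Terms: ('var', x) | ('lam', x, body) | ('app', f, a)
    s, e, c, d = [], {}, [('term', term)], []
    while True:
        if not c:
            if not d:
                return s[-1]
            v = s.pop()
            s, e, c = d.pop()          # restore the caller's state from the dump
            s.append(v)
            continue
        item = c.pop(0)
        if item[0] == 'term':
            t = item[1]
            if t[0] == 'var':
                s.append(e[t[1]])
            elif t[0] == 'lam':
                s.append(('clo', t[1], t[2], e))
            else:                      # application: rator, rand, then apply
                c = [('term', t[1]), ('term', t[2]), ('apply',)] + c
        else:                          # ('apply',)
            arg = s.pop()
            _, x, body, ce = s.pop()
            d.append((s, e, c))        # dump the current state
            new_e = dict(ce)
            new_e[x] = arg
            s, e, c = [], new_e, [('term', body)]

identity = ('lam', 'y', ('var', 'y'))
secd_result = secd(('app', ('lam', 'x', ('var', 'x')), identity))
```

The dump D, absent from machines like CEK in this tuple form, is what the simplification discussed above targets: it plays the role of a call stack saved and restored around each β-step.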
  • First Class Call Stacks: Exploring Head Reduction
    Philip Johnson-Freyd, Paul Downen and Zena M. Ariola, University of Oregon, USA. {philipjf,pdownen,ariola}@cs.uoregon.edu

    Weak-head normalization is inconsistent with functional extensionality in the call-by-name λ-calculus. We explore this problem from a new angle via the conflict between extensionality and effects. Leveraging ideas from work on the λ-calculus with control, we derive and justify alternative operational semantics and a sequence of abstract machines for performing head reduction. Head reduction avoids the problems with weak-head reduction and extensionality, while our operational semantics and associated abstract machines show us how to retain weak-head reduction's ease of implementation.

    1 Introduction

    Programming language designers are faced with multiple, sometimes contradictory, goals. On the one hand, it is important that users be able to reason about their programs. On the other hand, we want our languages to support simple and efficient implementations. For the first goal, extensionality is a particularly desirable property. We should be able to use a program without knowing how it was written, only how it behaves. In the λ-calculus, extensional reasoning is partially captured by the η law, a strong equational property about functions which says that functional delegation is unobservable: λx. f x = f. It is essential to many proofs about functional programs: for example, the well-known "state monad" only obeys the monad laws if η holds [13]. The η law is compelling because it gives the "maximal" extensionality possible in the untyped λ-calculus: if we attempt to equate any additional terms beyond β, η, and α, the theory will collapse [4].
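The distinction this paper revolves around can be made concrete with two small reducers on de Bruijn-indexed terms: weak-head reduction stops at the first λ, while head reduction keeps reducing the head redex under binders. The sketch below uses textbook shifting and substitution; all helper names are mine.

```python
# Terms: ('ind', n) | ('lam', b) | ('app', f, a), de Bruijn-indexed.

def shift(t, d, c=0):                      # shift free indices >= c by d
    if t[0] == 'ind':
        return ('ind', t[1] + d) if t[1] >= c else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def sub(t, j, s):                          # t with index j replaced by s
    if t[0] == 'ind':
        if t[1] == j:
            return shift(s, j)
        return ('ind', t[1] - 1) if t[1] > j else t
    if t[0] == 'lam':
        return ('lam', sub(t[1], j + 1, s))
    return ('app', sub(t[1], j, s), sub(t[2], j, s))

def whnf(t):                               # stop as soon as a lambda appears
    if t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            return whnf(sub(f[1], 0, t[2]))
        return ('app', f, t[2])
    return t

def hnf(t):                                # keep reducing under the lambda
    t = whnf(t)
    if t[0] == 'lam':
        return ('lam', hnf(t[1]))
    return t

# λx. (λy. y) x — a redex hidden under a binder.
redex_under_lam = ('lam', ('app', ('lam', ('ind', 0)), ('ind', 0)))
```

Weak-head reduction leaves `redex_under_lam` untouched, so it cannot identify it with λx. x; head reduction does, which is why head reduction coexists with the η law where weak-head reduction does not.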
  • Call-By-Name, Call-By-Value and Abstract Machines
    Author: Remy Viehoff. Supervisor: Prof. Dr. Herman Geuvers. Date: June 2012. Thesis number: 663.

    Contents

    1 Introduction
      1.1 Exception handling
      1.2 Overview
      1.3 Acknowledgements
    2 Background
      2.1 The simply typed λ-calculus
        2.1.1 Abstract machine
      2.2 The λµ-calculus
        2.2.1 Exception handling
        2.2.2 Abstract machine
    3 The λ̄µ-calculus
      3.1 An isomorphism
      3.2 Exception handling
      3.3 An environment machine for λ̄µ
    4 The λ̄µµ̃-calculus
      4.1 Exception handling
      4.2 An environment machine for λ̄µµ̃
    5 Conclusion
      5.1 Future works

    Chapter 1: Introduction

    The Curry-Howard isomorphism is a remarkable correspondence between logic on the one side and λ-calculi on the other side. Under the isomorphism, formulas are mapped to types, proofs to terms, and proof normalization steps correspond to reduction. From a logical perspective, the isomorphism allows us to use terms from a λ-calculus as a short way to write down proofs. But, since λ-calculi are also used as a foundation for programming languages, the terms in such a calculus can be interpreted as programs. This provides us with a means to talk about "the computational content" of a proof. From a computational perspective, the isomorphism allows us to assign logical formulae to terms in the λ-calculus. These formulae can then be interpreted as the specification of the program (term) it is assigned to and, hence, we can prove that a program implements its specification.
  • Related Work
    Yannick Forster, Fabian Kunze, Gert Smolka, Saarland University, March 16, 2018

    1 Call-by-value lambda-calculus

    • "The mechanical evaluation of expressions", Landin (1964) [24]: Landin introduces the SECD machine, the name stemming from the 4 components of the machine, called stack, environment, control and dump. He does not have correctness proofs. He uses closures and mentions the idea of representing composite objects by an address and sharing.

    • "Call-by-Name, Call-by-Value and the λ-Calculus", Plotkin (1975) [31]: First introduction of the call-by-value lambda-calculus. Although the equivalence relation is strong, reduction is weak, i.e. it does not apply under binders. Plotkin takes variables (and constants) as values, but his machine only talks about closed terms (and he has to do this, because he substitutes variables one after the other to transform closures into terms). He gives a detailed, top-down correctness proof of an SECD machine similar to Landin's. He uses an inductive, step-indexed predicate (starting at 1) for the evaluation of lambda terms. He shows that the machine terminates with the right value iff the corresponding term evaluates to that value. He first shows the usual big-step top-down induction, e.g. that the machine evaluation of every closure representing a normalizing term eventually puts the right value closure on the argument stack. He then shows that divergence propagates top-down: if the term does not evaluate in less than t steps, then the machine does not succeed in t steps. He does not prove his substitution lemma. His closures are defined to be closed.
  • Deriving the Full-Reducing Krivine Machine from the Small-Step Operational Semantics of Normal Order
    Alvaro Garcia-Perez (IMDEA Software Institute and Universidad Politecnica de Madrid), Pablo Nogueira (Universidad Politecnica de Madrid), Juan Jose Moreno-Navarro (IMDEA Software Institute and Universidad Politecnica de Madrid). [email protected], [email protected], [email protected]

    Abstract: We derive by program transformation Pierre Crégut's full-reducing Krivine machine KN from the structural operational semantics of the normal order reduction strategy in a closure-converted pure lambda calculus. We thus establish the correspondence between the strategy and the machine, and showcase our technique for deriving full-reducing abstract machines. Actually, the machine we obtain is a slightly optimised version that can work with open terms and may be used in implementations of proof assistants.

    (… within the hole, and recomposing the resulting term.) A reduction-free normaliser is a program implementing a big-step natural semantics. Finally, abstract machines are first-order, tail-recursive implementations of state transition functions that, unlike virtual machines, operate directly on terms, have no instruction set, and no need for a compiler. There has been considerable research on inter-derivation by program transformation of such 'semantic artefacts', of which the works [3, 5, 12, 14] are noticeable exemplars. The transformations consist of equivalence-preserving steps (CPS transformation, inverse CPS, defunctionalisation, refunctionalisation, inlining, lightweight fusion by fixed-point promotion, etc.) which establish a formal correspondence between the artefacts: that all implement the same reduction strategy. Research in inter-derivation is important not only for the correspondences and general framework it establishes in semantics. More pragmatically, it provides a semi-automatic way of proving the cor-
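Defunctionalisation, one of the equivalence-preserving steps named in this abstract, can be shown in miniature: start from a recursive evaluator, CPS-transform it, and represent each continuation as a data structure, yielding a first-order, tail-recursive abstract machine. The arithmetic language and all constructor names below are mine, chosen only to keep the example small.

```python
# Direct-style evaluator for  ('lit', n) | ('add', l, r)
def eval_direct(t):
    if t[0] == 'lit':
        return t[1]
    return eval_direct(t[1]) + eval_direct(t[2])

# After CPS transformation and defunctionalisation: continuations become
# tuples ('halt',), ('right', rand, k), ('add', left_value, k), and a single
# loop dispatches on them — an abstract machine.
def eval_machine(t):
    t_or_v, k, mode = t, ('halt',), 'eval'
    while True:
        if mode == 'eval':
            if t_or_v[0] == 'lit':
                t_or_v, mode = t_or_v[1], 'apply'
            else:  # evaluate the left operand, remember the right one
                t_or_v, k = t_or_v[1], ('right', t_or_v[2], k)
        else:      # mode == 'apply': feed the value to the continuation
            if k[0] == 'halt':
                return t_or_v
            if k[0] == 'right':   # left value done: evaluate the right operand
                t_or_v, k, mode = k[1], ('add', t_or_v, k[2]), 'eval'
            else:                 # ('add', left_value, k): combine and continue
                t_or_v, k = k[1] + t_or_v, k[2]

expr = ('add', ('lit', 1), ('add', ('lit', 2), ('lit', 3)))
```

Both evaluators agree on every term; the machine version simply makes the evaluation order and the "call stack" (the chain of continuation tuples) explicit, which is the essence of the derivations this paper performs on normalisers.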
  • Explaining the Lazy Krivine Machine Using Explicit Substitution and Addresses
    Frédéric Lang, INRIA Rhône-Alpes, 655 avenue de l'Europe, 38330 Montbonnot, France. [email protected]. Higher-Order and Symbolic Computation, Springer Verlag, 2007. HAL Id: inria-00198756, https://hal.inria.fr/inria-00198756, submitted on 17 Dec 2007.

    Abstract: In a previous paper, Benaissa, Lescanne, and Rose have extended the weak lambda-calculus of explicit substitution λσw with addresses, so that it gives an account of the sharing implemented by lazy functional language interpreters. We show in this paper that their calculus, called λσᵃw, fits well with the lazy Krivine machine, which describes the core of a lazy (call-by-need) functional programming language implementation. The lazy Krivine machine implements term evaluation sharing, which is essential for the efficiency of such languages. The originality of our proof is that it gives a very detailed account of the implemented strategy.
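The sharing that the lazy Krivine machine implements, and that addresses make explicit in the calculus above, can be sketched directly: argument closures live in a heap at addresses, and the first evaluation of an address overwrites the thunk with its value, so later uses cost nothing. The heap encoding and all names below are my own illustration, not the calculus λσᵃw.

```python
heap = []
count = [0]   # counts how many times the shared argument is evaluated

def alloc(thunk):
    heap.append(('thunk', thunk))
    return len(heap) - 1          # an address into the heap

def force(addr):
    tag, payload = heap[addr]
    if tag == 'value':
        return payload            # already evaluated: sharing pays off
    value = payload()
    heap[addr] = ('value', value) # evaluate once, update in place
    return value

def expensive():
    count[0] += 1
    return 21

# (λx. x + x) expensive — under call by need the argument is computed once.
a = alloc(expensive)
result = force(a) + force(a)
```

Under call by name the argument would be recomputed at each use; the in-place update at the address is precisely the difference between the plain and the lazy Krivine machine.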