Selective Lambda Lifting


SEBASTIAN GRAF, Karlsruhe Institute of Technology, Germany
SIMON PEYTON JONES, Microsoft Research, UK

Lambda lifting is a well-known transformation, traditionally employed for compiling functional programs to supercombinators. However, more recent abstract machines for functional languages like OCaml and Haskell tend to do closure conversion instead for direct access to the environment, so lambda lifting is no longer necessary to generate machine code. We propose to revisit selective lambda lifting in this context as an optimising code generation strategy and devise heuristics to identify beneficial lifting opportunities. We give a static analysis that estimates the impact of a lifting decision on heap allocations. Performance measurements of our implementation within the Glasgow Haskell Compiler on a large corpus of Haskell benchmarks suggest modest speedups.

CCS Concepts: • Software and its engineering → Compilers; Functional languages; Procedures, functions and subroutines

Additional Key Words and Phrases: Haskell, Lambda Lifting, Spineless Tagless G-machine, Compiler Optimization

ACM Reference Format: Sebastian Graf and Simon Peyton Jones. 2019. Selective Lambda Lifting. Proc. ACM Program. Lang. 1, ICFP, Article 1 (January 2019), 17 pages.

1 INTRODUCTION

The ability to define nested auxiliary functions referencing variables from outer scopes is essential when programming in functional languages. Take this Haskell function as an example:

f a 0 = a
f a n = f (g (n `mod` 2)) (n - 1)
  where
    g 0 = a
    g n = 1 + g (n - 1)

To generate code for nested functions like g, a typical compiler either applies lambda lifting or closure conversion. The Glasgow Haskell Compiler (GHC) chooses to do closure conversion [Peyton Jones 1992]. In doing so, it allocates a closure for g on the heap, with an environment containing an entry for a. Now imagine we lambda lifted g before closure conversion:

g↑ a 0 = a
g↑ a n = 1 + g↑ a (n - 1)

f a 0 = a
f a n = f (g↑ a (n `mod` 2)) (n - 1)

The closure for g and the associated heap allocation completely vanished in favour of a few more arguments at the call site! The result looks much simpler. And indeed, in concert with the other optimisations within GHC, the above transformation makes f effectively non-allocating, resulting in a speedup of 50%.
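As a rough way to observe this effect at the source level, the sketch below (our own harness, not the paper's measurement setup) puts the nested definition next to a hand-lifted variant. Compiling with ghc -O2 -rtsopts and running with +RTS -s prints allocation statistics; running each variant separately (for example by editing main) lets one compare the "bytes allocated" figures. The names fNested, gLifted and fLifted are ours, and GHC's own pipeline may already perform a similar lift, so treat the outcome as illustrative.

-- A rough harness, not from the paper: the introductory example and a
-- hand-lifted variant in one module, so their allocation behaviour can be
-- compared via GHC's runtime statistics (+RTS -s). Results depend on the
-- optimisation level, since GHC may already lift g itself.
module Main (main) where

-- Original definition: g is a nested function closing over a.
fNested :: Int -> Int -> Int
fNested a 0 = a
fNested a n = fNested (g (n `mod` 2)) (n - 1)
  where
    g 0 = a
    g m = 1 + g (m - 1)

-- Hand-lifted variant: a is passed explicitly, so g needs no closure.
gLifted :: Int -> Int -> Int
gLifted a 0 = a
gLifted a m = 1 + gLifted a (m - 1)

fLifted :: Int -> Int -> Int
fLifted a 0 = a
fLifted a n = fLifted (gLifted a (n `mod` 2)) (n - 1)

main :: IO ()
main = print (fNested 1 1000000, fLifted 1 1000000)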
So should we just perform this transformation on any candidate? Unfortunately, no. Consider what would happen to the following program:

f :: [Int] -> [Int] -> Int -> [Int]
f a b 0 = a
f a b 1 = b
f a b n = f (g n) a (n `mod` 2)
  where
    g 0 = a
    g 1 = b
    g n = n : h
      where
        h = g (n - 1)

Because of laziness, this will allocate a thunk for h. Closure conversion will then allocate an environment for h on the heap, closing over g. Lambda lifting yields:

g↑ a b 0 = a
g↑ a b 1 = b
g↑ a b n = n : h
  where
    h = g↑ a b (n - 1)

f a b 0 = a
f a b 1 = b
f a b n = f (g↑ a b n) a (n `mod` 2)

The closure for g is gone, but h now closes over n, a and b instead of n and g. Moreover, this h-closure is allocated on each iteration of the loop, so we have reduced allocation by one closure for g, but increased allocation by one word in each of the N allocations of h. Apart from making f allocate 10% more, this also incurs a slowdown of more than 10%.

So lambda lifting is sometimes beneficial, and sometimes harmful: we should do it selectively. This work is concerned with identifying exactly when lambda lifting improves performance, providing a new angle on the interaction between lambda lifting and closure conversion. These are our contributions:

• We derive a number of heuristics informing the lambda lifting decision from concrete operational deficiencies in section 3.
• Integral to one of the heuristics, in section 4 we provide a static analysis estimating closure growth, conservatively approximating the effects of a lifting decision on the total allocations of the program.
• We implemented our lambda lifting pass in the Glasgow Haskell Compiler as part of its optimisation pipeline, operating on its Spineless Tagless G-machine (STG) language. Doing lambda lifting this late in the compilation pipeline is a natural choice, given that accurate allocation estimates are not easily possible on GHC's more high-level Core language.
• We evaluate our pass against the nofib benchmark suite (section 6) and find that our static analysis soundly predicts changes in heap allocations. The measurements confirm the reasoning behind our heuristics in section 3.

Our approach builds on and is similar to many previous works, which we compare to in section 7.

Variables          f, g, x, y ∈ Var
Expressions        e ∈ Expr  ::=  x                   Variable
                               |  f x₁ … xₙ            Function call
                               |  let b in e           Recursive let
Bindings           b ∈ Bind  ::=  f = r
Right-hand sides   r ∈ Rhs   ::=  λ x₁ … xₙ → e
Programs           p ∈ Prog  ::=  f x₁ … xₙ = e; e′

Fig. 1. An STG-like untyped lambda calculus

2 OPERATIONAL BACKGROUND

Typically, the choice between lambda lifting and closure conversion for code generation is mutually exclusive and dictated by the targeted abstract machine, such as the G-machine [Kieburtz 1985] or the Spineless Tagless G-machine [Peyton Jones 1992], as is the case for GHC. Let's clear up what we mean by doing lambda lifting before closure conversion, and what the operational effect of doing so is.

2.1 Language

Although the STG language is tiny compared to typical surface languages such as Haskell, its definition [Marlow and Jones 2004] still contains much detail irrelevant to lambda lifting. This section therefore introduces an untyped lambda calculus that will serve as the subject of optimisation in the rest of the paper.

2.1.1 Syntax. As can be seen in fig. 1, we extend the untyped lambda calculus with let bindings, just as in Johnsson [1985]. Inspired by STG, we also assume A-normal form (ANF) [Sabry and Felleisen 1993]:

• Every lambda abstraction is the right-hand side of a let binding
• Arguments and heads in an application expression are all atomic (e.g., variable references)

Throughout this paper, we assume that variable names are globally unique. Similar to Johnsson [1985], programs are represented by a group of top-level bindings and an expression to evaluate. Whenever an example's expression to evaluate is not closed, assume that its free variables are bound in some outer context omitted for brevity. Examples may also compromise on adhering to ANF for readability (in particular, regarding giving every complex subexpression a name), but we will point out the details where necessary.
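For concreteness, the calculus of fig. 1 can be modelled as a small Haskell data type. The sketch below uses names of our own choosing (Expr, Bind, Rhs, Prog and their constructors) rather than GHC's actual STG representation; sequences written with overlines in the figure become lists, and applications carry only variables, which bakes ANF's atomic-argument restriction into the syntax.

type Var = String

data Expr
  = EVar Var           -- x                    variable
  | EApp Var [Var]     -- f x1 ... xn          function call, atomic arguments only
  | ELet [Bind] Expr   -- let b1 ... bn in e   recursive let

data Bind = Bind Var Rhs      -- f = r

data Rhs = Lam [Var] Expr     -- λ x1 ... xn → e

-- A program is a group of top-level function bindings plus an expression
-- to evaluate.
data Prog = Prog [Bind] Expr

A lambda lifter over this representation turns selected let-bound Lam right-hand sides into additional top-level bindings of the Prog, rewriting their call sites to pass the former free variables explicitly; that is exactly the transformation contrasted with closure conversion in section 2.2.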
2.1.2 Semantics. Since our calculus is a subset of the STG language, its semantics follows directly from Marlow and Jones [2004]. An informal treatment of its operational behaviour is still in order to express the consequences of lambda lifting.

Since every application has only trivial arguments, all complex expressions had to be bound to a let in a prior compilation step. Consequently, heap allocation happens almost entirely at let bindings closing over the free variables of their right-hand sides, with the exception of intermediate partial applications resulting from over- or undersaturated calls. Put plainly: if we manage to get rid of a let binding, we get rid of one source of heap allocation, because there is no closure to allocate during closure conversion.

2.2 Lambda Lifting vs. Closure Conversion

The trouble with nested functions is that nobody has come up with concrete, efficient computing architectures that can cope with them natively. Compilers therefore need to rewrite local functions in terms of global definitions and auxiliary heap allocations.

One way of doing so is to perform closure conversion, where references to free variables are lowered into field accesses on a record containing all free variables of the function: the closure environment. The environment is passed as an implicit parameter to the function body, which in turn becomes insensitive to lexical scope and can be floated to top level. After this lowering, all functions are regarded as closures: a pair of a code pointer and an environment.

data EnvF = EnvF { x :: Int, y :: Int }

let f = λ a b → ... x ... y ...
in f 4 2

    ==CC_f==>

f_code env a b = ... x env ... y env ...;
let f = (f_code, EnvF x y)
in (fst f) (snd f) 4 2

Closure conversion leaves behind a heap-allocated let binding for the closure.

Compare this to how lambda lifting gets rid of local functions. Johnsson [1985] introduced it for efficient code generation of lazy functional languages to G-machine code [Kieburtz 1985]. Lambda lifting converts all free variables of a function body into parameters. The resulting function body can be floated to top level, but all call sites must be fixed up to include its former free variables.

let f = λ a b → ... x ... y ...
in f 4 2

    ==LL_f==>

f↑ x y a b = ... x ... y ...;
f↑ x y 4 2

The key difference from closure conversion is that there is no longer any heap allocation at f's former definition site.
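To make the two lowerings concrete, the following sketch renders them as plain Haskell, with the elided body ... x ... y ... made concrete as the sum of the free variables and the arguments. The module and function names (Lowering, fCode, fUp, closureConverted, lambdaLifted) are illustrative assumptions, not code GHC actually generates.

module Lowering where

-- Closure conversion: the environment becomes an explicit record, and a
-- closure is a pair of a code pointer and that environment. Both the pair
-- and the environment record are heap-allocated at the definition site.
data EnvF = EnvF { x :: Int, y :: Int }

fCode :: EnvF -> Int -> Int -> Int
fCode env a b = x env + y env + a + b

closureConverted :: Int -> Int -> Int
closureConverted xv yv =
  let f = (fCode, EnvF xv yv)
  in fst f (snd f) 4 2

-- Lambda lifting: the free variables become extra parameters and every call
-- site passes them explicitly; nothing is allocated at the definition site.
fUp :: Int -> Int -> Int -> Int -> Int
fUp xv yv a b = xv + yv + a + b

lambdaLifted :: Int -> Int -> Int
lambdaLifted xv yv = fUp xv yv 4 2

In the closure-converted version both the pair and the EnvF record are allocated at f's former definition site, whereas the lifted version trades that allocation for two extra arguments at every call site; weighing precisely this trade-off is the job of the selective lifting heuristics developed in the paper.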