Secrets of the Glasgow Haskell Compiler Inliner

Simon Marlow, Microsoft Research Ltd, Cambridge ([email protected])
Simon Peyton Jones, Microsoft Research Ltd, Cambridge ([email protected])

September 1, 1999

Abstract

Higher-order languages, such as Haskell, encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program.

In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat.

The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale "production" inliner, the one used in the Glasgow Haskell compiler. We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner.

1 Introduction

One of the trickiest aspects of a compiler for a functional language is the handling of inlining. In a functional-language compiler, inlining subsumes several other optimisations that are traditionally treated separately, such as copy propagation and jump elimination. As a result, effective inlining is particularly crucial in getting good performance.

The Glasgow Haskell Compiler (GHC) is an optimising compiler for Haskell that has evolved over a period of about ten years. We have repeatedly been through a cycle of looking at the code it produces, identifying what could be improved, and going back to the compiler to make it produce better code. It is our experience that the inliner is a lead player in many of these improvements. No other single aspect of the compiler has received so much attention.

The purpose of this paper is to report on several algorithmic aspects of GHC's inliner, focusing on aspects that were not obvious to us -- that is to say, aspects that we got wrong to begin with. Most papers about inlining focus on how to choose whether or not to inline a function called from many places. This is indeed an important question, but we have found that we had to deal with quite a few other less obvious, but equally interesting, issues. Specifically, we describe the following:

• A major issue for any compiler, especially for one that inlines heavily, is name capture (a small example appears at the end of this section). Our initial brute-force solution involved inconvenient plumbing, and we have now evolved a simple and effective alternative (Section 3).

• At first we were very conservative about inlining recursive definitions; that is, we did not inline them at all. But we found that this strategy occasionally behaved very badly. After a series of failed hacks we developed a simple, obviously-correct algorithm that does the job beautifully (Section 4).

• Because the compiler does so much inlining, it is important to get as much as possible done in each pass over the program. Yet one must steer a careful path between doing too little work in each pass, requiring extra passes, and doing too much work, leading to an exponential-cost algorithm. GHC now identifies three distinct moments at which an inlining decision may be taken for a particular definition. We explain why in Section 6.

• When inlining an expression it is important to retain the expression's lexical environment, which gives the bindings of its free variables. But at the inline site, the compiler might know more about the dynamic state of some of those free variables; most notably, a free variable might be known to be evaluated at the inline site, but not at its original occurrence. Some key transformations make use of this extra information, and lacking it will cause an extra pass over the code. We describe how to exploit our name-capture solution to support accurate tracking of both lexical and dynamic environments (Section 7).

None of the algorithms we describe is individually very surprising. Perhaps because of this, the literature on the subject is very sparse, and we are not aware of published descriptions of any of our algorithms. Our contribution is to abstract some of what we have learned, in the hope that we may help others avoid the mistakes that we made.

For the sake of concreteness we focus throughout on GHC, but we stress that the lessons we learned are applicable to any compiler for a functional language, and indeed perhaps to compilers for other languages too.
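As a tiny illustration of the first point (a toy sketch for exposition only; Section 3 gives the real treatment), consider inlining a definition whose right-hand side mentions a free variable that happens to be shadowed at the occurrence:

    -- Plain Haskell, runnable as written.
    capture :: Int -> Int
    capture x =
      let g = \y -> x + y          -- x here means capture's argument
      in  (\x -> g x) 10           -- a different, inner x shadows it
    -- capture 1 evaluates to 11.  Naive textual inlining of g at its occurrence
    -- would give (\x -> (\y -> x + y) x) 10, in which the free x from g's body
    -- has been captured by the inner \x; that expression evaluates to 20.

Any inliner that copies code underneath binders must rename, or otherwise avoid, such clashes.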
2 Preliminaries

We will assume the use of a pure, non-strict, strongly-typed intermediate language, called the GHC Core language. GHC is itself written in Haskell, so we define the Core language by giving its data type definition in Haskell:

    type Program = [Bind]

    data Bind = NonRec Var Expr
              | Rec [(Var, Expr)]

    data Expr = Var   Var
              | App   Expr Expr
              | Lam   Var Expr
              | Let   Bind Expr
              | Const Const [Expr]
              | Case  Expr Var [Alt]
              | Note  Note Expr

    type Alt = (Const, [Var], Expr)   -- Case alternative

    data Const = Literal Literal      -- Constant
               | DataCon DataCon
               | PrimOp  PrimOp
               | DEFAULT

The Core language consists of the lambda calculus augmented with let-expressions (both non-recursive and recursive), case expressions, data constructors, literals, and primitive operations. In presenting examples we will use an informal, albeit hopefully clear, concrete syntax. We will feel free to use infix operators, and to write several bindings in a single non-recursive let-expression as shorthand for a sequence of let-expressions.

A program (Program) is simply a sequence of bindings, in dependency order. Each binding (Bind) can be recursive or non-recursive, and the right-hand side of each binding is an expression (Expr). The constructors for variables (Var), application (App), lambda abstraction (Lam), and let-expressions (Let) should be self-explanatory. A constant application (Const) is used for literals, data constructor applications, and applications of primitive operators; the number of arguments must match the arity of the constant, and the constant cannot be DEFAULT. Likewise, the number of bound variables in a case alternative (Alt) always matches the arity of the constant; and the latter cannot be a PrimOp. The Note form of Expr allows annotations to be attached to the tree. The only impact on the inliner is discussed in Section 7.6.

Case expressions (Case) should be self-explanatory, except for the Var argument to Case. Consider the following Core expression:

    case reverse xs of ys {
      a:as -> ys
      []   -> error "urk"
    }

The unusual part of this construct is the binding occurrence of ys: ys is bound to the result of evaluating the scrutinee, reverse xs in this case, which makes it possible to refer to this value in the alternatives. This detail has no impact on the rest of this paper -- indeed, we omit the extra binder in our examples -- but we have found that it makes several transformations more simple and uniform, so we include it here for the sake of completeness.

GHC's actual intermediate language is very slightly more complicated than that given here. It is an explicitly-typed language based on System Fω, and supports polymorphism through explicit type abstraction and application. It turns out that doing so adds only one new constructor to the Expr type, and adds nothing to the substance of this paper, so we do not mention it further. The main point is that this paper omits no aspect essential to a full-scale implementation of Haskell.
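To make the representation concrete, the case expression just shown can be written out as a value of the Expr type. The following is an illustrative sketch only, not GHC source: we use String stand-ins for Var, Literal, DataCon, PrimOp and Note (which in GHC are richer abstract types), and repeat the data declarations so the sketch stands alone.

    type Var     = String            -- stand-in; GHC's Var is an abstract type
    type Literal = String
    type DataCon = String
    type PrimOp  = String
    type Note    = String

    data Bind  = NonRec Var Expr | Rec [(Var, Expr)]
    data Expr  = Var Var | App Expr Expr | Lam Var Expr | Let Bind Expr
               | Const Const [Expr] | Case Expr Var [Alt] | Note Note Expr
    type Alt   = (Const, [Var], Expr)
    data Const = Literal Literal | DataCon DataCon | PrimOp PrimOp | DEFAULT

    -- The example above:  case reverse xs of ys { a:as -> ys; [] -> error "urk" }
    caseExample :: Expr
    caseExample =
      Case (App (Var "reverse") (Var "xs"))               -- scrutinee
           "ys"                                           -- the extra case binder
           [ (DataCon ":",  ["a","as"], Var "ys")
           , (DataCon "[]", [],         App (Var "error")
                                            (Const (Literal "urk") []))
           ]

Note how the cons alternative binds exactly two variables, matching the arity of its constructor, as required above.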
2.1 What is inlining?

Given a definition x = E, one can inline x at a particular occurrence by replacing the occurrence by E. We use upper case letters, such as "E", to stand for arbitrary expressions, and "==>" to indicate a program transformation. For example:

    let { f = \x -> x*3 } in f (a + b) - c
    ==>
    (a+b)*3 - c

We have found it useful to identify three distinct transformations that collectively implement what we informally describe as "inlining":

• Inlining itself replaces an occurrence of a let-bound variable by a copy of the right-hand side of its definition. Inlining f in the example above goes like this:

      let { f = \x -> x*3 } in f (a + b) - c
      ==>  [inline f]
      let { f = \x -> x*3 } in (\x -> x*3) (a + b) - c

  Notice that not all the occurrences of f need be inlined, and hence that the original definition of f must, in general, be retained.

• Dead code elimination discards bindings that are no longer used; this usually occurs when all occurrences of a variable have been inlined. Continuing our example gives:

      let { f = \x -> x*3 } in (\x -> x*3) (a + b) - c
      ==>  [dead f]
      (\x -> x*3) (a + b) - c

• β-reduction simply rewrites a lambda application (\x -> E) A to let { x = A } in E. Applying β-reduction to our running example gives:

      (\x -> x*3) (a + b) - c
      ==>  [beta]
      let { x = a+b } in x*3 - c

The first of these is the tricky one; the latter two are easy.
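As a rough illustration of the latter two transformations, here is a deliberately naive sketch over the Core type, reusing the String stand-in declarations from the sketch in Section 2. It is emphatically not GHC's algorithm: it ignores name capture, work and code duplication, and the occurrence information that the real inliner relies on -- exactly the subtleties the rest of the paper is about.

    -- beta-reduction:  (\x -> E) A  ==>  let { x = A } in E
    betaReduce :: Expr -> Expr
    betaReduce (App (Lam x body) arg) = Let (NonRec x arg) body
    betaReduce e                      = e

    -- Dead-code elimination: drop a non-recursive binding whose bound
    -- variable no longer occurs free in the body.
    dropDeadLet :: Expr -> Expr
    dropDeadLet (Let (NonRec x _) body)
      | not (x `occursIn` body) = body
    dropDeadLet e               = e

    -- Does x occur free in the expression?  (A naive traversal; a real
    -- compiler would use the results of a separate occurrence analysis.)
    occursIn :: Var -> Expr -> Bool
    occursIn x expr = case expr of
      Var y              -> x == y
      App f a            -> occursIn x f || occursIn x a
      Lam y b            -> y /= x && occursIn x b     -- shadowing stops the search
      Let (NonRec y r) b -> occursIn x r || (y /= x && occursIn x b)
      Let (Rec prs) b
        | x `elem` map fst prs -> False                -- shadowed by the group
        | otherwise            -> any (occursIn x . snd) prs || occursIn x b
      Const _ es         -> any (occursIn x) es
      Case s y alts      -> occursIn x s || any altOccurs alts
        where altOccurs (_, vs, rhs) = x `notElem` (y:vs) && occursIn x rhs
      Note _ b           -> occursIn x b

Even at this size the sketch hints at why the first transformation is harder: inlining copies an expression underneath arbitrary binders, which is precisely where the name-capture problem of Section 3 arises.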