Dynamic Extension of Typed Functional Languages

Dynamic Extension of Typed Functional Languages

Don Stewart
PhD Dissertation
School of Computer Science and Engineering
University of New South Wales
2010

Supervisor: Assoc. Prof. Manuel M. T. Chakravarty
Co-supervisor: Dr. Gabriele Keller

Abstract

We present a solution to the problem of dynamic extension in statically typed functional languages with type erasure. The presented solution retains the benefits of static checking, including type safety, aggressive optimizations, and native code compilation of components, while allowing extensibility of programs at runtime.

Our approach is based on a framework for dynamic extension in a statically typed setting, combining dynamic linking, runtime type checking, first class modules and code hot swapping. We show that this framework is sufficient to allow a broad class of dynamic extension capabilities in any statically typed functional language with type erasure semantics. Uniquely, we employ the full compile-time type system to perform runtime type checking of dynamic components, and emphasize the use of native code extension to ensure that the performance benefits of static typing are retained in a dynamic environment. We also develop the concept of fully dynamic software architectures, where the static core is minimal and all code is hot swappable. Benefits of the approach include hot swappable code and sophisticated application extension via embedded domain specific languages.

We instantiate the concepts of the framework via a full implementation in the Haskell programming language: providing rich mechanisms for dynamic linking, loading, hot swapping, and runtime type checking in Haskell for the first time. We demonstrate the feasibility of this architecture through a number of novel applications: an extensible text editor; a plugin-based network chat bot; a simulator for polymer chemistry; and xmonad, an extensible window manager. In doing so, we demonstrate that static typing is no barrier to dynamic extension.

Acknowledgments

This thesis describes work carried out between 2004 and 2008 at the School of Computer Science, University of New South Wales in Sydney. I am deeply indebted to my supervisor, Manuel Chakravarty, who drew me into the functional programming world, and encouraged the exploration of its uncharted corners. I am grateful to my co-supervisor, Gabriele Keller, for ongoing feedback and insight.

Three groups of people influenced my work and thinking over this time, and I would like to thank them directly. Firstly, the members of the Programming Languages and Systems (PLS) group at the University of New South Wales – Roman Leshchinskiy, André Pang, Sean Seefried, Stefan Wehr, Simon Winwood, Mark Wotton, and Patryk Zadarnowski – who, along with my supervisors, built an energetic culture of innovation and exploration in functional programming that shaped the direction of this work from the beginning. Secondly, my colleagues at Galois, Inc. in Portland, for giving me the resources and motivation to complete my research, and for the opportunity to apply functional programming techniques to solve difficult problems, all in an environment of talent and fun. In particular, I wish to thank John Launchbury, for his mentorship, Adam Wick, for support and motivation, and Jason Dagit, for constructive feedback.
Thirdly, the Haskell community provided willing feedback and suggestions for many of the projects described in this thesis, motivating me to pursue ideas I might have passed by. Shae Erisson, in particular, encouraged and inspired this work in its early days, helping to ensure the success of several of the projects. I am glad to be a citizen of such a community of artists and hackers. Finally, this work would not have been possible without Suzie Allen, and her patience, encouragement, and love.

Portions of the text of this thesis were originally published in [122, 146, 147, 148], and due acknowledgment is due to André Pang, Sean Seefried, Manuel Chakravarty, Gabriele Keller, Hugh Chaffey-Millar and Christopher Barner-Kowollik, for their contributions.

Don Stewart. Portland, Oregon, June 2010.

Contents

1 Introduction
  1.1 Motivation
    1.1.1 Safety via types
    1.1.2 Flexibility via runtime code loading
    1.1.3 The middle way
  1.2 Approach
    1.2.1 Dynamic extension
    1.2.2 Static typing
    1.2.3 A framework approach
  1.3 Contribution
  1.4 Structure

2 A framework for dynamic extension
  2.1 Components of the framework
    2.1.1 Dynamic linking
    2.1.2 Runtime type checking
    2.1.3 First class modules
    2.1.4 Runtime code generation
    2.1.5 Module hot swapping
    2.1.6 Embedded extension languages
  2.2 Implementation
  2.3 Dynamic linking
    2.3.1 Defining a plugin interface
    2.3.2 Implementing an interface
    2.3.3 Using a plugin
    2.3.4 Loading plugins from other languages
  2.4 Runtime type checking
    2.4.1 Dynamically checked plugins
    2.4.2 Safety and flexibility
    2.4.3 Improving runtime type checking
  2.5 First class modules
    2.5.1 A type class for first class modules
    2.5.2 Existential types
  2.6 Runtime compilation
    2.6.1 Invoking the compiler
    2.6.2 Generating new code at runtime
    2.6.3 Cross-language extension
  2.7 Hot swapping
    2.7.1 Dynamic architectures
    2.7.2 State preservation
    2.7.3 State preservation as types change
    2.7.4 Persistent state
  2.8 Embedded languages
  2.9 Summary of the framework

3 Dynamic linking
  3.1 Overview
  3.2 Dynamic linking in Haskell
    3.2.1 Runtime loading
    3.2.2 Basic plugin loading
    3.2.3 Dependency chasing
  3.3 Typing dynamic linking
    3.3.1 Limitations
  3.4 Polymorphic dynamics
  3.5 Comparing approaches
    3.5.1 Performance of dynamic type checking
    3.5.2 Type safety and source code plugins
  3.6 Applications
    3.6.1 Lambdabot and Riot
    3.6.2 Haskell Server Pages
    3.6.3 Specializing simulators for computational chemistry
  3.7 Discussion
    3.7.1 A standalone type checker
    3.7.2 Loading packages and archives
    3.7.3 Loading C objects into Haskell
    3.7.4 Loading bytecode objects
    3.7.5 Executable size
  3.8 Related work
    3.8.1 Type safe linking
    3.8.2 Dynamic typing
    3.8.3 Clean
    3.8.4 ML
    3.8.5 Java and .NET

4 Runtime compilation
  4.1 A compilation manager
    4.1.1 Invoking the compiler
    4.1.2 Manipulating abstract syntax
  4.2 An eval for Haskell
    4.2.1 Evaluating Haskell from other languages
  4.3 Runtime meta-programming
    4.3.1 The heterogeneous symbol table problem
    4.3.2 Type safe printf
  4.4 Applications
    4.4.1 A Haskell interactive environment
    4.4.2 Source plugins for compiled programs
    4.4.3 Type-based sandboxing of untrusted code
    4.4.4 Dynamic server pages, revisited
    4.4.5 Optimizing embedded DSLs
  4.5 Related work
    4.5.1 Template Haskell and staged type inference
    4.5.2 Multi-stage programming

5 Hot swapping
  5.1 Overview
  5.2 A dynamic architecture
    5.2.1 Dynamic bootstrapping
  5.3 Hot swapping
    5.3.1 Dynamic reconfiguration
    5.3.2 State preservation
    5.3.3 Reloading the application
    5.3.4 Upgrading state types
    5.3.5 Persistent state
  5.4 Performance
    5.4.1 Static applications
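The runtime type checking at the heart of this framework can be illustrated, in miniature, with GHC's standard Data.Dynamic machinery. The sketch below is ours and only approximates the idea: the thesis reuses the full compile-time type system rather than Typeable-based checks, and a real plugin would be loaded from a compiled object file rather than constructed in place as the hypothetical loadedSymbol is here.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A value as it might arrive from a dynamically loaded module: all the
-- host program knows statically is that it is some Dynamic.
loadedSymbol :: Dynamic
loadedSymbol = toDyn (reverse :: String -> String)   -- hypothetical plugin export

-- Recover a typed value, failing gracefully when the plugin does not
-- match the interface the host expects.
usePlugin :: Dynamic -> String -> Either String String
usePlugin dyn input =
  case (fromDynamic dyn :: Maybe (String -> String)) of
    Just f  -> Right (f input)
    Nothing -> Left "plugin has the wrong type for this interface"

main :: IO ()
main = do
  print (usePlugin loadedSymbol "hello")           -- Right "olleh"
  print (usePlugin (toDyn (42 :: Int)) "hello")    -- Left "plugin has the wrong type ..."
```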
Recommended publications
  • The LLVM Instruction Set and Compilation Strategy
    The LLVM Instruction Set and Compilation Strategy. Chris Lattner, Vikram Adve, University of Illinois at Urbana-Champaign, {lattner,vadve}@cs.uiuc.edu.
    Abstract: This document introduces the LLVM compiler infrastructure and instruction set, a simple approach that enables sophisticated code transformations at link time, runtime, and in the field. It is a pragmatic approach to compilation, interfering with programmers and tools as little as possible, while still retaining extensive high-level information from source-level compilers for later stages of an application's lifetime. We describe the LLVM instruction set, the design of the LLVM system, and some of its key components.
    1 Introduction: Modern programming languages and software practices aim to support more reliable, flexible, and powerful software applications, increase programmer productivity, and provide higher level semantic information to the compiler. Unfortunately, traditional approaches to compilation either fail to extract sufficient performance from the program (by not using interprocedural analysis or profile information) or interfere with the build process substantially (by requiring build scripts to be modified for either profiling or interprocedural optimization). Furthermore, they do not support optimization either at runtime or after an application has been installed at an end-user's site, when the most relevant information about actual usage patterns would be available. The LLVM Compilation Strategy is designed to enable effective multi-stage optimization (at compile-time, link-time, runtime, and offline) and more effective profile-driven optimization, and to do so without changes to the traditional build process or programmer intervention. LLVM (Low Level Virtual Machine) is a compilation strategy that uses a low-level virtual instruction set with rich type information as a common code representation for all phases of compilation.
  • GHC Reading Guide
    GHC Reading Guide: exploring entrances and mental models to the source code. Takenobu T., Rev. 0.01.1 (WIP). NOTE: this is not an official document by the GHC development team; please refer to the official documents for detail. Don't forget "semantics", it's very important. This is written for GHC 9.0.
    Contents: Introduction; 1. Compiler (compilation pipeline, pipeline stages, intermediate language syntax, call graph); 2. Runtime system; 3. Core libraries; Appendix; References.
    Introduction. Official resources are here:
    - GHC source repository: https://gitlab.haskell.org/ghc/ghc
    - The GHC Commentary (for developers): https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary
    - GHC Documentation (for users; the User's Guide, Core Libraries, and GHC API): master HEAD at https://ghc.gitlab.haskell.org/ghc/doc/; latest major release at https://downloads.haskell.org/~ghc/latest/docs/html/; a specific version at https://downloads.haskell.org/~ghc/9.0.1/docs/html/
    GHC = Compiler + Runtime System (RTS) + Core Libraries. A Haskell source file (.hs) is compiled by the GHC compiler to an object file (.o) and linked with the runtime system (libHsRts.o) and the core libraries (GHC.Base, ...) into an executable binary that includes the RTS (in the static-link case).
    Each division is located in the GHC source tree (https://gitlab.haskell.org/ghc/ghc):
    - compiler/ ... compiler sources
    - rts/ ... runtime system sources
    - libraries/ ... core library sources
    - ghc/ ... compiler main
    - includes/ ... include files
    - testsuite/ ... test suites
    - nofib/ ... performance tests
    - mk/ ... build system
    - hadrian/ ... hadrian build system
    - docs/ ... documents
    1. Compiler. The GHC compiler translates the Haskell language to assembly language (native or LLVM). It comprises pipeline stages: Parser, Renamer, Type checker, Desugarer, Core to Core, Core to STG, STG to Cmm, and Cmm to Asm.
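    As a concrete way to poke at these pipeline stages, GHC can dump each intermediate representation while compiling a module. The example module below is ours, not part of the reading guide; the -ddump flags named in the comments are standard GHC flags, though exact spellings and output vary somewhat across GHC versions.

```haskell
-- Minimal module for inspecting GHC's pipeline output. Compile with, e.g.:
--   ghc -fforce-recomp -ddump-parsed -ddump-rn -ddump-tc Main.hs        (parser, renamer, type checker)
--   ghc -fforce-recomp -ddump-ds -ddump-simpl Main.hs                   (desugarer, Core-to-Core)
--   ghc -fforce-recomp -ddump-stg-final -ddump-cmm -ddump-asm Main.hs   (STG, Cmm, assembly)
module Main where

double :: Int -> Int
double x = x + x

main :: IO ()
main = print (double 21)
```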
  • The Effect of Compression and Expansion on Stochastic Reaction Networks
    IMT School for Advanced Studies, Lucca, Italy. The Effect of Compression and Expansion on Stochastic Reaction Networks. PhD Program in Institutions, Markets and Technologies, Curriculum in Computer Science and Systems Engineering (CSSE), XXXI Cycle. By Tabea Waizmann, 2021.
    The dissertation of Tabea Waizmann is approved. Program Coordinator: Prof. Rocco De Nicola, IMT Lucca. Supervisor: Prof. Mirco Tribastone, IMT Lucca. The dissertation has been reviewed by: Dr. Catia Trubiani, Gran Sasso Science Institute; Prof. Michele Loreti, University of Camerino. IMT School for Advanced Studies, Lucca, 2021. To everyone who believed in me.
    Contents: List of Figures; List of Tables; Acknowledgements; Vita and Publications; Abstract; 1 Introduction; 2 Background (2.1 Multisets; 2.2 Reaction Networks; 2.3 Ordinary Lumpability; 2.4 Layered Queuing Networks; 2.5 PEPA); 3 Coarse graining mass-action stochastic reaction networks by species equivalence (3.1 Species Equivalence: 3.1.1 Species equivalence as a generalization of Markov chain ordinary lumpability, 3.1.2 Characterization of SE for mass-action networks, 3.1.3 Computation of the maximal SE and reduced network; 3.2 Applications: 3.2.1 Computational systems biology, 3.2.2 Epidemic processes in networks; 3.3 Discussion); 4 DiffLQN: Differential Equation Analysis of Layered Queuing Networks (4.1 DiffLQN: 4.1.1 Architecture, 4.1.2 Capabilities, 4.1.3 Syntax; 4.2 Case Study: Client-Server dynamics; 4.3 Discussion).
  • Practical Reflection and Metaprogramming for Dependent
    Practical Reflection and Metaprogramming for Dependent Types. David Raymond Christiansen. Advisor: Peter Sestoft. Submitted: November 2, 2015.
    Abstract: Embedded domain-specific languages are special-purpose programming languages that are implemented within existing general-purpose programming languages. Dependent type systems allow strong invariants to be encoded in representations of domain-specific languages, but it can also make it difficult to program in these embedded languages. Interpreters and compilers must always take these invariants into account at each stage, and authors of embedded languages must work hard to relieve users of the burden of proving these properties.
    Idris is a dependently typed functional programming language whose semantics are given by elaboration to a core dependent type theory through a tactic language. This dissertation introduces elaborator reflection, in which the core operators of the elaborator are realized as a type of computations that are executed during the elaboration process of Idris itself, along with a rich API for reflection. Elaborator reflection allows domain-specific languages to be implemented using the same elaboration technology as Idris itself, and it gives them additional means of interacting with native Idris code. It also allows Idris to be used as its own metalanguage, making it into a programmable programming language and allowing code re-use across all three stages: elaboration, type checking, and execution.
    Beyond elaborator reflection, other forms of compile-time reflection have proven useful for embedded languages. This dissertation also describes error reflection, in which Idris code can rewrite DSL error messages before presenting domain-specific messages to users, as well as a means for integrating quasiquotation into a tactic-based elaborator so that high-level syntax can be used for low-level reflected terms.
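    The idea of baking a DSL's invariants into its representation, which the dissertation takes much further with dependent types and elaborator reflection, can be sketched in ordinary Haskell with a GADT. This is our illustrative stand-in, not Idris code and not the dissertation's API:

```haskell
{-# LANGUAGE GADTs #-}

-- A tiny embedded expression language whose representation rules out
-- ill-typed terms: an Add of a Bool and an Int cannot even be built.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The interpreter needs no runtime type checks; the invariant is
-- carried by the GADT's type index.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))
```

    Because ill-typed terms cannot be constructed, eval needs no runtime checks; dependent types let a language like Idris express far richer invariants in the same style.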
  • Programming Language
    Programming language. A programming language is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Most programming languages consist of instructions for computers. There are programmable machines that use a set of specific instructions, rather than general programming languages. Early ones preceded the invention of the digital computer, the first probably being the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age.[1] Since the early 1800s, programs have been used to direct the behavior of machines such as Jacquard looms, music boxes and player pianos.[2] The programs for these machines (such as a player piano's scrolls) did not produce different behavior in response to different inputs or conditions.
    [Figure: the source code for a simple computer program written in the C programming language; when compiled and run, it gives the output "Hello, world!".]
    Thousands of different programming languages have been created, and more are being created every year. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform) while other languages use the declarative form (i.e., the desired result is specified, not how to achieve it). The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference.
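    To make the imperative/declarative contrast above concrete, here are two Haskell definitions of the same computation; this is purely our illustration and not part of the article:

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')
import Control.Monad (forM_)

-- Declarative form: state what the result is.
sumOfSquaresDecl :: Int -> Int
sumOfSquaresDecl n = sum [k * k | k <- [1 .. n]]

-- Imperative form: state the sequence of operations to perform.
sumOfSquaresImp :: Int -> IO Int
sumOfSquaresImp n = do
  acc <- newIORef 0
  forM_ [1 .. n] $ \k -> modifyIORef' acc (+ k * k)
  readIORef acc

main :: IO ()
main = do
  print (sumOfSquaresDecl 10)      -- 385
  sumOfSquaresImp 10 >>= print     -- 385
```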
  • Safe, Fast and Easy: Towards Scalable Scripting Languages
    Safe, Fast and Easy: Towards Scalable Scripting Languages. By Pottayil Harisanker Menon. A dissertation submitted to The Johns Hopkins University in conformity with the requirements for the degree of Doctor of Philosophy. Baltimore, Maryland, Feb 2017. © Pottayil Harisanker Menon 2017. All rights reserved.
    Abstract: Scripting languages are immensely popular in many domains. They are characterized by a number of features that make it easy to develop small applications quickly: flexible data structures, simple syntax and intuitive semantics. However, they are less attractive at scale: scripting languages are harder to debug, difficult to refactor and suffer performance penalties. Many research projects have tackled the issue of safety and performance for existing scripting languages with mixed results: the considerable flexibility offered by their semantics also makes them significantly harder to analyze and optimize. Previous research from our lab has led to the design of a typed scripting language built specifically to be flexible without losing static analyzability. In this dissertation, we present a framework to exploit this analyzability, with the aim of producing a more efficient implementation. Our approach centers around the concept of adaptive tags: specialized tags attached to values that represent how they are used in the current program. Our framework abstractly tracks the flow of deep structural types in the program, and thus can efficiently tag them at runtime. Adaptive tags allow us to tackle key issues at the heart of the performance problems of scripting languages: the framework is capable of performing efficient dispatch in the presence of flexible structures.
    Acknowledgments: At the very outset, I would like to express my gratitude and appreciation to my advisor Prof.
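    A very rough sketch of the tagging idea in Haskell (purely illustrative; the dissertation's adaptive tags are inferred by a static analysis over a typed scripting language, not declared by hand as here): a value carries a tag summarizing the structure the program relies on, and dispatch consults the tag instead of re-inspecting the value each time.

```haskell
-- Illustrative only: a "record-like" dynamic value together with a tag
-- describing the structural shape the current program depends on.
data Tag = PairOfNums   -- known to be exactly two numeric fields, x then y
         | Unknown      -- no structural information
  deriving (Eq, Show)

data Value = Value { tag :: Tag, fields :: [(String, Double)] }

magnitude :: Value -> Maybe Double
magnitude v = case tag v of
  PairOfNums ->                               -- fast path: trust the tag
    let x = snd (fields v !! 0)
        y = snd (fields v !! 1)
    in Just (sqrt (x * x + y * y))
  Unknown -> do                               -- slow path: generic lookup
    x <- lookup "x" (fields v)
    y <- lookup "y" (fields v)
    Just (sqrt (x * x + y * y))

main :: IO ()
main = do
  print (magnitude (Value PairOfNums [("x", 3), ("y", 4)]))   -- Just 5.0
  print (magnitude (Value Unknown    [("y", 4), ("x", 3)]))   -- Just 5.0
```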
  • Type Checking Physical Frames of Reference for Robotic Systems
    PhysFrame: Type Checking Physical Frames of Reference for Robotic Systems. Sayali Kate (Purdue University, USA), Michael Chinn (University of Virginia, USA), Hongjun Choi (Purdue University, USA), Xiangyu Zhang (Purdue University, USA), Sebastian Elbaum (University of Virginia, USA). ... Engineering (ESEC/FSE '21), August 23–27, 2021, Athens, Greece. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3468264.3468608
    ABSTRACT: A robotic system continuously measures its own motions and the external world during operation. Such measurements are with respect to some frame of reference, i.e., a coordinate system. A nontrivial robotic system has a large number of different frames, and data have to be translated back and forth from one frame to another. The onus is on the developers to get such translation right. However, this is very challenging and error-prone, evidenced by the large number of questions and issues related to frame use on developer forums. Since any state variable can be associated with some frame, reference frames can be naturally modeled as variable types. We hence develop a novel type system that can automatically infer
    1 INTRODUCTION: Robotic systems have rapidly growing applications in our daily life, enabled by the advances in many areas such as AI. Engineering such systems becomes increasingly important. Due to the unique characteristics of such systems, e.g., the need of modeling the physical world and satisfying the real-time and resource constraints, robotic system engineering poses new challenges to developers. One of the prominent challenges is to properly use physical frames of reference.
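    The paper's central observation, that reference frames can be modeled as types, has a lightweight analogue in Haskell's phantom types. The sketch below is our own (the frame names and the identity "transform" are made up for illustration); PhysFrame itself infers such frame types automatically rather than requiring annotations like these.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

import GHC.TypeLits (Symbol)

-- A 3D point tagged, at the type level only, with the frame it is expressed in.
newtype Point (frame :: Symbol) = Point (Double, Double, Double)
  deriving Show

type BaseLink = "base_link"   -- hypothetical frame names
type Camera   = "camera"

-- Combining points is only allowed within a single frame; mixing frames
-- is rejected at compile time.
offset :: Point f -> Point f -> (Double, Double, Double)
offset (Point (x1, y1, z1)) (Point (x2, y2, z2)) = (x1 - x2, y1 - y2, z1 - z2)

-- An explicit, named transform is the only way to move between frames
-- (the actual rotation/translation is elided; identity for illustration).
cameraToBase :: Point Camera -> Point BaseLink
cameraToBase (Point p) = Point p

main :: IO ()
main = do
  let pCam  = Point (1, 2, 3) :: Point Camera
      pBase = Point (0, 0, 1) :: Point BaseLink
  print (offset (cameraToBase pCam) pBase)
  -- print (offset pCam pBase)   -- type error: expected "base_link", got "camera"
```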
  • 1 Lecture 25
    Lecture 25: Dynamic Compilation. Based on "Partial Method Compilation Using Dynamic Profile Information", John Whaley, OOPSLA 01; slide content courtesy of John Whaley & Monica Lam. (Carnegie Mellon 15-745, Todd C. Mowry.)
    I. Goals of This Lecture: beyond static compilation; example of a complete system; use of data flow techniques in a new context; experimental approach. Outline: I. Motivation & Background; II. Overview; III. Compilation Policy; IV. Partial Method Compilation; V. Partial Dead Code Elimination; VI. Escape Analysis; VII. Results.
    Static/dynamic and high-level/binary: a compiler translates high-level code to binary, statically. An interpreter emulates high-level code dynamically. A binary translator is binary-to-binary and mostly dynamic: it runs code "as-is" and supports software migration (x86 to Alpha, Sun, Transmeta; 68000 to PowerPC to x86), virtualization (making hardware virtualizable), dynamic optimization (DynamoRIO), and security (executing out of code in a cache that is "protected"). Dynamic compilation translates high-level code to binary, dynamically: it enables machine-independent dynamic loading, cross-module optimization, and specializing a program using runtime information without profiling.
    Closed-world vs. open-world: the closed-world assumption (most static compilers) is that all code is available a priori for analysis and compilation. The open-world assumption (most dynamic compilers) is that code is not available and arbitrary code can be loaded at run time. The open-world assumption precludes many optimization opportunities; the solution is to optimistically assume the best case, but provide a way out if necessary.
    II. Overview of Dynamic Compilation: interpretation/compilation policy decisions (choosing what and how to compile); collecting runtime information (instrumentation, sampling); exploiting runtime information (frequently-executed code paths).
  • What I Wish I Knew When Learning Haskell
    What I Wish I Knew When Learning Haskell. Stephen Diehl.
    Version: This is the fifth major draft of this document since 2009. All versions of this text are freely available on my website:
    1. HTML version: http://dev.stephendiehl.com/hask/index.html
    2. PDF version: http://dev.stephendiehl.com/hask/tutorial.pdf
    3. EPUB version: http://dev.stephendiehl.com/hask/tutorial.epub
    4. Kindle version: http://dev.stephendiehl.com/hask/tutorial.mobi
    Pull requests are always accepted for fixes and additional content. The only way this document will stay up to date and accurate is through the kindness of readers like you and community patches and pull requests on Github: https://github.com/sdiehl/wiwinwlh. Publish date: March 3, 2020. Git commit: 77482103ff953a8f189a050c4271919846a56612.
    Author: This text is authored by Stephen Diehl. 1. Web: www.stephendiehl.com 2. Twitter: https://twitter.com/smdiehl 3. Github: https://github.com/sdiehl. Special thanks to Erik Aker for copyediting assistance.
    Copyright © 2009–2020 Stephen Diehl. The code included in the text is dedicated to the public domain. You can copy, modify, distribute and perform the code, even for commercial purposes, all without asking permission. You may distribute this text in its full form freely, but may not reauthor or sublicense this work. Any reproductions of major portions of the text must include attribution. The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.
  • Comparative Studies of Programming Languages; Course Lecture Notes
    Comparative Studies of Programming Languages, COMP6411 Lecture Notes, Revision 1.9. Joey Paquet, Serguei A. Mokhov (Eds.). August 5, 2010. arXiv:1007.2123v6 [cs.PL] 4 Aug 2010.
    Preface: Lecture notes for the Comparative Studies of Programming Languages course, COMP6411, taught at the Department of Computer Science and Software Engineering, Faculty of Engineering and Computer Science, Concordia University, Montreal, QC, Canada. These notes include a compiled book of primarily related articles from the Wikipedia, the Free Encyclopedia [24], as well as the Comparative Programming Languages book [7] and other resources, including our own. The original notes were compiled by Dr. Paquet [14].
    Contents:
    1 Brief History and Genealogy of Programming Languages: 1.1 Introduction (1.1.1 Subreferences); 1.2 History (1.2.1 Pre-computer era; 1.2.2 Subreferences; 1.2.3 Early computer era; 1.2.4 Subreferences; 1.2.5 Modern/Structured programming languages); 1.3 References.
    2 Programming Paradigms: 2.1 Introduction; 2.2 History (2.2.1 Low-level: binary, assembly; 2.2.2 Procedural programming; 2.2.3 Object-oriented programming; 2.2.4 Declarative programming).
    3 Program Evaluation: 3.1 Program analysis and translation phases (3.1.1 Front end; 3.1.2 Back end); 3.2 Compilation vs. interpretation (3.2.1 Compilation; 3.2.2 Interpretation; 3.2.3 Subreferences); 3.3 Type System (3.3.1 Type checking); 3.4 Memory management.
  • Regent: a High-Productivity Programming Language for Implicit Parallelism with Logical Regions
    Regent: A High-Productivity Programming Language for Implicit Parallelism with Logical Regions. A dissertation submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Elliott Slaughter, August 2017. © 2017 by Elliott David Slaughter. All Rights Reserved.
    Re-distributed by Stanford University under license with the author. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License: http://creativecommons.org/licenses/by-nc/3.0/us/. This dissertation is online at: http://purl.stanford.edu/mw768zz0480
    I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy: Alex Aiken (Primary Adviser); Philip Levis; Oyekunle Olukotun. Approved for the Stanford University Committee on Graduate Studies: Patricia J. Gumport, Vice Provost for Graduate Education. This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.
    Abstract: Modern supercomputers are dominated by distributed-memory machines. State of the art high-performance scientific applications targeting these machines are typically written in low-level, explicitly parallel programming models that enable maximal performance but expose the user to programming hazards such as data races and deadlocks.
  • The Glasgow Haskell Compiler User's Guide, Version 4.08
    The Glasgow Haskell Compiler User's Guide, Version 4.08. By The GHC Team.
    Table of Contents:
    The Glasgow Haskell Compiler License
    1. Introduction to GHC
      1.1. The (batch) compilation system components
      1.2. What really happens when I "compile" a Haskell program?
      1.3. Meta-information: Web sites, mailing lists, etc.
      1.4. GHC version numbering policy
      1.5. Release notes for version 4.08 (July 2000)
        1.5.1. User-visible compiler changes
        1.5.2. User-visible library changes
        1.5.3. Internal changes
    2. Installing from binary distributions
      2.1. Installing on Unix-a-likes