Verifying Concurrent Programs by Controlling Alias Interference


© Copyright 2014 Colin S. Gordon

Colin S. Gordon

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

University of Washington
2014

Reading Committee:
  Michael D. Ernst, Chair
  Dan Grossman, Chair
  Matthew Parkinson, Microsoft Research, Cambridge, UK

Program Authorized to Offer Degree: UW Computer Science and Engineering

Abstract

Verifying Concurrent Programs by Controlling Alias Interference

Colin S. Gordon

Co-Chairs of the Supervisory Committee:
  Professor Michael D. Ernst, Computer Science and Engineering
  Associate Professor Dan Grossman, Computer Science and Engineering

This dissertation proposes a family of techniques for static verification of sequential and concurrent imperative programs by leveraging fine-grained characterizations of mutation. The key idea is that by attaching to each reference in a program (1) a restriction on mutations permitted using that reference, and (2) a characterization of possible interference through other aliases, a type system can reason about what properties are preserved by all mutations in a program.

This thesis develops four variations on this idea: (1) We adapt reference immutability to support data-race-free concurrent programming. (2) We generalize reference immutability to rely-guarantee references, allowing two-state invariants to express usage restrictions between read-only and arbitrary mutation. (3) We extend rely-guarantee references to prove invariants and functional correctness of lock-free concurrent data structures. (4) We evaluate rely-guarantee references' utility for existing Haskell programs.

Together these variations show that reasoning about aliasing and reasoning about concurrent (imperative) programs are the same fundamental challenge, and that by taking the right foundational approach to reasoning about sequential programs, the gap to reasoning about concurrent programs is significantly reduced.

TABLE OF CONTENTS

List of Figures

Chapter 1: Introduction
  1.1 Modular Verification: Simple, Pure, and Imperative
  1.2 Protocols for Shared Mutable State
  1.3 The Granularity of Modularity
  1.4 Explanation of Thesis
  1.5 Published Content

Chapter 2: General Related Work
  2.1 Global Reasoning
  2.2 Separation and Isolation
  2.3 Isolation with Serialized Sharing
  2.4 Read-sharing
  2.5 Interference Summaries
  2.6 Implementing Verification Systems
  2.7 Relating Sequential and Concurrent Correctness
  2.8 Dependent Type Theory

Chapter 3: Uniqueness and Reference Immutability for Safe Parallelism
  3.1 Reference Immutability, Uniqueness, and Parallelism
  3.2 Types for Reference Immutability and Parallelism
  3.3 Type Soundness
  3.4 Polymorphism
  3.5 Evaluation
  3.6 Related Work
  3.7 Conclusions

Chapter 4: Rely-Guarantee References
  4.1 Introduction
  4.2 Rely-Guarantee References
  4.3 Examples
  4.4 A Type System for Rely-Guarantee References
  4.5 Implementation
  4.6 Future Work: Extensions and Adaptations
  4.7 Related Work
  4.8 Conclusion

Chapter 5: Concurrent Refinement Types and Functional Correctness via Rely-Guarantee References
  5.1 Correctness for Lock-Free Datastructures
  5.2 Suitability of RGrefs for Concurrent Programming
  5.3 RGrefs for Concurrency
  5.4 Refinement from Rely-Guarantee Reasoning
  5.5 Implementation
  5.6 Related Work
  5.7 Conclusions

Chapter 6: Rely-Guarantee References for Existing Code
  6.1 Background: State and Refinement Types in Haskell
  6.2 Embedding Rely-Guarantee References in Liquid Haskell
  6.3 Defining Liquid RGrefs
  6.4 Using Liquid RGrefs
  6.5 Related Work
  6.6 Fruitful Directions for Further Investigation
  6.7 Conclusions

Chapter 7: Directions for Future Work
  7.1 Direct Extensions to RGrefs
  7.2 Richer Specification Languages
  7.3 Gradual Verification

Chapter 8: Conclusion

Bibliography

Appendix A: Background Reading
  A.1 Dependent Types
  A.2 Program Logics: Hoare Logic, Separation Logic, and Beyond

LIST OF FIGURES

  2.1 Selected proof rules for sequential and concurrent separation logic, variable side conditions elided.
  3.1 External uniqueness with immutable out-references.
  3.2 Qualifier conversion/subtyping lattice.
  3.3 Core language syntax.
  3.4 Subtyping rules.
  3.5 Core typing rules.
  3.6 Program typing.
  3.7 Recovery rules.
  3.8 Type rules for safe parallelism. IsoOrImm is defined in Figure 3.7.
  3.9 Typing derivation for adding a node to an isolated doubly-linked list.
  3.10 Definition of Well-Regioned.
  3.11 Denoting types and type environments.
  3.12 The thread interference relation R0.
  3.13 Weak type environment denotation, for framed-out environments. The differences from Figure 3.11 are the permissions required by readable and writable references, and the way π bounds the state's permissions.
  3.14 Method call extensions to the language and program logic.
  3.15 The proof tree for validity of method summaries when the summary is not for an isolated receiver. Environment reordering steps are not shown explicitly. Parentheticals on the right-hand side indicate the method by which the assertion on the same line was proved from the previous assertion and/or statement when it is not an obvious derivation like framing.
  3.16 Grammar extensions for the polymorphic system.
  3.17 Auxiliary judgements and metafunctions for the polymorphic system.
  3.18 Generics typing rules. All structural, recovery, and parallelism rules remain as in Figures 3.5, 3.7, and 3.8, extended with ∆ in conclusions and appropriate hypotheses.
  3.19 Well-formed polymorphic programs. In T-Method*, the bound on the method permission p is present if and only if p is a permission variable.
  3.20 Denoting types and type environments in the polymorphic system, differences from Figure 3.11. The main change is to the subclass clause of the writable and isolated cases, which require the runtime type's field types to exactly match the supertype's.
  3.21 A simplified collection with a polymorphic enumerator.
  3.22 Extension methods to add regular and parallel map to a linked list.
  4.1 RGref code for a prepend-only linked list.
  4.2 Combining reference immutability permissions, from [105]. Using a p reference to read a q T field produces a (p ▷ q) T.
  4.3 Syntax, omitting booleans (b : bool), unary natural numbers (n : nat), unit, pairs, propositional True and False, and standard recursors. The expression/type division is presentational; the language is a single syntactic category of PTS-style [14] pseudoterms.
  4.4 Typing. For standard recursors for naturals, booleans, pairs, and identity types [129], as well as well-formed contexts and (pure) expression/type conversion (Γ ⊢ τ ≡ τ'), see Figure 4.6.
  4.5 Typing, continued.
  4.6 Selected auxiliary judgments.
  4.7 Auxiliary functions.
  4.8 The simplest term we can imagine that observes the multiple-dereference reduction.
  4.9 Values, typing of runtime values, and main operational semantics for RGref.
  4.10 Structural / context operational semantics for RGref.
  4.11 Dynamic state typing.
  5.1 A positive monotonically increasing counter using RGrefs, in our Coq DSL. The rgref monad for imperative code is indexed by input and output environments ∆ of substructural values, not used in this example.
  5.2 Atomic increment for a monotonically increasing counter. This code uses a standard read-modify-CAS loop. Until the increment succeeds, it reads the counter's value, then tries to compare-and-swap the old value with the incremented result. RGFix is a fixpoint combinator, and rgret returns a value.
  5.3 A Treiber Stack [251] using RGrefs, omitting proofs. The relation local_imm (not shown) constrains the immediate referent to be immutable; any is the always-true predicate.
  5.4 A lock-free union-find implementation [7] using RGrefs, omitting interactive proofs. a<|i|> accesses the ith entry of array a. The type Fin.t n is (isomorphic to) a natural number less than n, making it a safe index into the array. u conjoins predicates.
  5.5 Core typing rules for concurrency-safe RGrefs. See also Figure 5.7 for metafunction definitions.
  5.6 Primitive to introduce new refinements based on observed values.
  5.7 Metafunctions for side conditions used in Figure 5.5.
  5.8 Views state space. "Hats" (e.g., M̂) indicate explicit annotations of RGref.
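
As a concrete illustration of the key idea summarized in the abstract — attaching to each reference both a guarantee (the mutations permitted through that reference) and a rely (the interference other aliases may cause) — here is a minimal Haskell sketch. It is not the dissertation's Coq DSL or its Liquid Haskell embedding: the names RGRef, rely, guarantee, and writeRG are hypothetical, and the guarantee check is performed dynamically here, whereas the dissertation discharges it statically in the type system.

```haskell
module RGSketch where

import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- A binary relation over values of type a, relating an old state to a new one.
type Rel a = a -> a -> Bool

-- A rely-guarantee reference: a mutable cell plus
--   * a guarantee bounding the writes performed through THIS reference, and
--   * a rely bounding the interference other aliases may cause.
data RGRef a = RGRef
  { cell      :: IORef a
  , guarantee :: Rel a
  , rely      :: Rel a
  }

-- Attempt a write; it is accepted only if it satisfies the guarantee.
-- (In the dissertation this is a static typing obligation, not a runtime check.)
writeRG :: RGRef a -> a -> IO Bool
writeRG r new = do
  old <- readIORef (cell r)
  if guarantee r old new
    then writeIORef (cell r) new >> pure True
    else pure False

-- Reads are always permitted; what may be assumed about the result is limited
-- to properties that remain stable under the rely relation.
readRG :: RGRef a -> IO a
readRG = readIORef . cell

-- Example in the spirit of Figure 5.1: a monotonically increasing counter.
-- Every alias shares the guarantee/rely "the value never decreases".
monotonic :: Rel Int
monotonic old new = new >= old

newCounter :: IO (RGRef Int)
newCounter = do
  c <- newIORef 0
  pure (RGRef { cell = c, guarantee = monotonic, rely = monotonic })
```

Under these assumptions, any property closed under the shared relation (for example, non-negativity of the counter) is preserved by every write through every alias, which is the sense in which the type system can "reason about what properties are preserved by all mutations in a program."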