Performance Engineering of Proof-Based Software Systems at Scale by Jason S
Recommended publications
- Reflexive Interpreters
Reset reproduction of a Ph.D. thesis proposal submitted June 8, 1978 to the Massachusetts Institute of Technology Electrical Engineering and Computer Science Department. Reset in LaTeX by Jon Doyle, December 1995. © 1978, 1995 Jon Doyle. All rights reserved. Current address: MIT Laboratory for Computer Science. Available via http://www.medg.lcs.mit.edu/doyle
Jon Doyle ([email protected]), Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge, Massachusetts 02139, U.S.A.
Abstract: The goal of achieving powerful problem solving capabilities leads to the "advice taker" form of program and the associated problem of control. This proposal outlines an approach to this problem based on the construction of problem solvers with advanced self-knowledge and introspection capabilities.
1 The Problem of Control
"Self-reverence, self-knowledge, self-control, / These three alone lead life to sovereign power." (Alfred, Lord Tennyson, Œnone)
"Know prudent cautious self-control is wisdom's root." (Robert Burns, A Bard's Epitaph)
"The woman that deliberates is lost." (Joseph Addison, Cato)
A major goal of Artificial Intelligence is to construct an "advice taker", a program which can be told new knowledge and advised about how that knowledge may be useful. Many of the approaches towards this goal have proposed constructing additive formalisms for transmitting knowledge to the problem solver. In spite of considerable work along these lines, formalisms for advising problem solvers about the uses and properties of knowledge are relatively undeveloped. As a consequence, the nondeterminism resulting from the uncontrolled application of many independent pieces of knowledge leads to great inefficiencies.
- Type Safety of Rewrite Rules in Dependent Types
Frédéric Blanqui, Université Paris-Saclay, ENS Paris-Saclay, CNRS, Inria, Laboratoire Spécification et Vérification, 94235 Cachan, France
Abstract: The expressiveness of dependent type theory can be extended by identifying types modulo some additional computation rules. But, for preserving the decidability of type-checking or the logical consistency of the system, one must make sure that those user-defined rewriting rules preserve typing. In this paper, we give a new method to check that property using Knuth-Bendix completion.
2012 ACM Subject Classification: Theory of computation → Type theory; Theory of computation → Equational logic and rewriting; Theory of computation → Logic and verification
Keywords and phrases: subject-reduction, rewriting, dependent types
Digital Object Identifier: 10.4230/LIPIcs.FSCD.2020.11
Supplementary Material: https://github.com/wujuihsuan2016/lambdapi/tree/sr
Acknowledgements: The author thanks Jui-Hsuan Wu for his prototype implementation and his comments on a preliminary version of this work.
1 Introduction
The λΠ-calculus, or LF [12], is an extension of the simply-typed λ-calculus with dependent types, that is, types that can depend on values like, for instance, the type V n of vectors of dimension n. Two dependent types like V n and V p are identified as soon as n and p are two expressions having the same value (modulo the evaluation rule of the λ-calculus, β-reduction). In the λΠ-calculus modulo rewriting, function and type symbols can be defined not only by using β-reduction but also by using rewriting rules [19]. Hence, types are identified modulo β-reduction and some user-defined rewriting rules.
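To make the V n example concrete, here is a minimal Lean 4 sketch (an illustration of the idea only, not the paper's λΠ-calculus modulo rewriting; the names `Vec` and `append` are ours): the dimension is part of the type, and two vector types are identified whenever their dimensions evaluate to the same number.

```lean
-- Vectors indexed by their length: the type depends on a value.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The result type is checked modulo the evaluation rules for `+`:
-- `n + 0` reduces to `n`, and `n + (k + 1)` to `(n + k) + 1`.
def append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)

-- `Vec Nat (2 + 3)` and `Vec Nat 5` are the same type, because `2 + 3`
-- and `5` have the same value.
example (v : Vec Nat (2 + 3)) : Vec Nat 5 := v
```

The paper's setting generalizes this picture: besides β-reduction, user-defined rewriting rules may also be used to identify types, which is why their preservation of typing has to be checked.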
- A Critique of Abelson and Sussman, or: Why Calculating Is Better than Scheming
Philip Wadler, Programming Research Group, 11 Keble Road, Oxford, OX1 3QD
Abelson and Sussman's textbook is widely taught [Abelson and Sussman 1985a, b]. Instead of emphasizing a particular programming language, they emphasize standard engineering techniques as they apply to programming. Still, their textbook is intimately tied to the Scheme dialect of Lisp. I believe that the same approach used in their text, if applied to a language such as KRC or Miranda, would result in an even better introduction to programming as an engineering discipline. My belief has strengthened as my experience in teaching with Scheme and with KRC has increased. This paper contrasts teaching in Scheme to teaching in KRC and Miranda, particularly with reference to Abelson and Sussman's text. Scheme is a lexically-scoped dialect of Lisp [Steele and Sussman 1978]; languages in a similar style are T [Rees and Adams 1982] and Common Lisp [Steele 1982]. KRC is a functional language in an equational style [Turner 1981]; its successor is Miranda [Turner 1985]; languages in a similar style are SASL [Turner 1976, Richards 1984], LML [Augustsson 1984], and Orwell [Wadler 1984b]. (Only readers who know that KRC stands for "Kent Recursive Calculator" will have understood the title of this paper.) There are four language features absent in Scheme and present in KRC/Miranda that are important:
1. Pattern-matching.
2. A syntax close to traditional mathematical notation.
3. A static type discipline and user-defined types.
4. Lazy evaluation.
KRC and SASL do not have a type discipline, so point 3 applies only to Miranda, LML, and Orwell.
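Two of the four features can be illustrated briefly (a sketch of ours in Haskell, a descendant of the KRC/Miranda style rather than a language the paper discusses): definitions by pattern-matching read like equations, and lazy evaluation makes infinite structures usable.

```haskell
-- Pattern-matching in an equational style: one equation per case,
-- instead of explicit selectors and conditionals.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- Lazy evaluation: an infinite list is fine as long as only a finite
-- prefix of it is ever demanded.
nats :: [Integer]
nats = [0 ..]

main :: IO ()
main = do
  print (len "scheme")  -- 6
  print (take 5 nats)   -- [0,1,2,3,4]
```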
- The Evolution of Lisp
Guy L. Steele Jr., Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142. Phone: (617) 234-2860. FAX: (617) 243-4444. E-mail: [email protected]
Richard P. Gabriel, Lucid, Inc., 707 Laurel Street, Menlo Park, California 94025. Phone: (415) 329-8400. FAX: (415) 329-8480. E-mail: [email protected]
Abstract: Lisp is the world's greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the "hacker culture" than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians.
We pick up where McCarthy's paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
- A Certified Study of a Reversible Programming Language
Luca Paolini, Mauro Piccolo, and Luca Roversi, Dipartimento di Informatica, Università degli Studi di Torino, corso Svizzera 185, 10149 Torino, Italy ([email protected])
Abstract: We advance in the study of the semantics of Janus, a C-like reversible programming language. Our study makes utterly explicit some backward and forward evaluation symmetries. We want to deepen mathematical knowledge about the foundations and design principles of reversible computing and programming languages. We formalize a big-step operational semantics and a denotational semantics of Janus. We show a full abstraction result between the operational and denotational semantics. Last, we certify our results by means of the proof assistant Matita.
1998 ACM Subject Classification: F.1.2 Computation by Abstract Devices: Modes of Computation; F.3.2 Logics and Meanings of Programs: Semantics of Programming Languages
Keywords and phrases: reversible computing, Janus, operational semantics, bi-deterministic evaluation, categorical semantics
Digital Object Identifier: 10.4230/LIPIcs.TYPES.2015.7
1 Introduction
Reversible computing is an alternative form of computing: the isentropic core of classical computing [14] and quantum computing [21]. A classic computation is (forward-)deterministic, …
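The backward and forward evaluation symmetry can be seen on a toy fragment (our own Haskell sketch, not the paper's Matita formalization; `Stmt`, `run`, and `invert` are invented names covering only a tiny Janus-like subset): every statement has an inverse, and a sequence is undone by inverting its statements in reverse order, so running `invert p` after `p` restores the original store.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

type Store = Map String Int

-- A tiny Janus-like reversible fragment: add or subtract a constant.
data Stmt = Add String Int | Sub String Int | Seq [Stmt]

run :: Stmt -> Store -> Store
run (Add x n) s = Map.adjust (+ n) x s
run (Sub x n) s = Map.adjust (subtract n) x s
run (Seq ps)  s = foldl (flip run) s ps

-- Inversion: swap Add and Sub, and reverse sequences.
invert :: Stmt -> Stmt
invert (Add x n) = Sub x n
invert (Sub x n) = Add x n
invert (Seq ps)  = Seq (reverse (map invert ps))

main :: IO ()
main = do
  let p = Seq [Add "x" 2, Add "x" 3, Sub "y" 1]
      s = Map.fromList [("x", 0), ("y", 10)]
  print (run p s)                   -- fromList [("x",5),("y",9)]
  print (run (invert p) (run p s))  -- the original store, unchanged
```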
- Reading List
EECS 101 Introduction to Computer Science, Dinda, Spring 2009. An Introduction to Computer Science for Everyone: Reading List.
Note: You do not need to read or buy all of these. The syllabus and/or class web page describes the required readings and what books to buy. For readings that are not in the required books, I will either provide pointers to web documents or hand out copies in class.
Books:
  - David Harel, Computers Ltd: What They Really Can't Do, Oxford University Press, 2003.
  - Fred Brooks, The Mythical Man-Month: Essays on Software Engineering, 20th Anniversary Edition, Addison-Wesley, 1995.
  - Joel Spolsky, Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity, APress, 2004. Most content is available for free from Spolsky's blog (see http://www.joelonsoftware.com).
  - Paul Graham, Hackers and Painters, O'Reilly, 2004. See also Graham's site: http://www.paulgraham.com/
  - Martin Davis, The Universal Computer: The Road from Leibniz to Turing, W.W. Norton and Company, 2000.
  - Ted Nelson, Computer Lib/Dream Machines, 1974. This book is now very rare and very expensive, which is sad given how visionary it was.
  - Simon Singh, The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography, Anchor, 2000.
  - Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, 20th Anniversary Edition, Basic Books, 1999.
  - Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd Edition, Prentice Hall, 2003.
- QED at Large: A Survey of Engineering of Formally Verified Software
Talia Ringer (University of Washington), Karl Palmskog (University of Texas at Austin), Ilya Sergey (Yale-NUS College and National University of Singapore), Milos Gligoric (University of Texas at Austin), and Zachary Tatlock (University of Washington). arXiv:2003.06458v1 [cs.LO], 13 Mar 2020. The version of record is available at http://dx.doi.org/10.1561/2500000045; errata for the present version may be found online at https://proofengineering.org/qed_errata.html
Contents (excerpt): 1 Introduction (Challenges at Scale; Scope: Domain and Literature; Overview; Reading Guide); 2 Proof Engineering by Example; 3 Why Proof Engineering Matters (Proof Engineering for Program Verification; Proof Engineering for Other Domains; Practical Impact); 4 Foundations and Trusted Bases (Proof Assistant Pre-History; Proof Assistant Early History; Proof Assistant Foundations; Trusted Computing Bases of Proofs and Programs); 5 Between the Engineer and the Kernel: Languages and Automation (Styles of Automation; Automation in Practice); 6 Proof Organization and Scalability (Property Specification and Encodings; …)
- Analyzing Individual Proofs as the Basis of Interoperability between Proof Systems
Gilles Dowek, Inria and École normale supérieure de Paris-Saclay, 61, avenue du Président Wilson, 94235 Cachan Cedex, France ([email protected])
Abstract: We describe the first results of a project of analyzing in which theories formal proofs can be expressed. We use this analysis as the basis of interoperability between proof systems.
1 Introduction
Sciences study both individual objects and generic ones. For example, Astronomy studies both the individual planets of the Solar system: Mercury, Venus, etc., determining their radius, mass, composition, etc., but also the motion of generic planets: Kepler's laws, which apply not just to the six planets known at the time of Kepler, but also to those that have been discovered after, and those that may be discovered in the future. Computer science studies both algorithms that apply to generic data, but also specific pieces of data. Mathematics mostly studies generic objects, but sometimes also specific ones, such as the number π or the function ζ. Proof theory mostly studies generic proofs. For example, Gentzen's cut elimination theorem for Predicate logic applies to any proof expressed in Predicate logic: those that were known at the time of Gentzen, those that have been constructed after, and those that will be constructed in the future. Much less effort is dedicated to studying individual mathematical proofs, with a few exceptions, for example [29]. Considering the proofs that we have, instead of all those that we may build in some logic, sometimes changes the perspective.
Introduction to the Literature on Programming Language Design Gary T
Gary T. Leavens, Iowa State University. Computer Science Technical Reports, July 1999. TR 93-01c, Jan. 1993; revised Jan. 1994, Feb. 1996, and July 1999.
Recommended citation: Leavens, Gary T., "Introduction to the Literature On Programming Language Design" (1999). Computer Science Technical Reports. 59. http://lib.dr.iastate.edu/cs_techreports/59
Abstract: This is an introduction to the literature on programming language design and related topics. It is intended to cite the most important work, and to provide a place for students to start a literature search.
Keywords: programming languages, semantics, type systems, polymorphism, type theory, data abstraction, functional programming, object-oriented programming, logic programming, declarative programming, parallel and distributed programming languages.
Disciplines: Programming Languages and Compilers
- Elaboration in Dependent Type Theory
Leonardo de Moura (Microsoft Research, Redmond), Jeremy Avigad (Departments of Philosophy and Mathematical Sciences, Carnegie Mellon University), Soonho Kong (Department of Computer Science, Carnegie Mellon University), and Cody Roux (Draper Laboratories)
Abstract: We describe the elaboration algorithm that is used in Lean, a new interactive theorem prover based on dependent type theory. To be practical, interactive theorem provers must provide mechanisms to resolve ambiguities and infer implicit information, thereby supporting convenient input of expressions and proofs. Lean's elaborator supports higher-order unification, ad-hoc overloading, insertion of coercions, type class inference, the use of tactics, and the computational reduction of terms. The interactions between these components are subtle and complex, and Lean's elaborator has been carefully designed to balance efficiency and usability.
1 Introduction
Just as programming languages run the spectrum from untyped languages like Lisp to strongly-typed functional programming languages like Haskell and ML, foundational systems for mathematics exhibit a range of diversity, from the untyped language of set theory to simple type theory and various versions of dependent type theory. Having a strongly typed language allows the user to convey the intent of an expression more compactly and efficiently, since a good deal of information can be inferred from type constraints. Moreover, a type discipline catches routine errors quickly and flags them in informative ways. But this is a slippery slope: as we increasingly rely on types to serve our needs, the computational support that is needed to make sense of expressions in efficient and predictable ways becomes increasingly subtle and complex.
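A small taste of what an elaborator fills in (our illustration, written in Lean 4 syntax, which postdates the version of Lean the paper describes): implicit arguments, type class inference, and coercion insertion.

```lean
-- Implicit arguments: `α` is inferred from the explicit arguments.
def twice {α : Type} (f : α → α) (a : α) : α := f (f a)

-- Type class inference: the `Add Nat` instance behind `· + 1` is
-- found automatically.
#eval twice (· + 1) (3 : Nat)  -- 5

-- Coercion insertion: the `Nat` result is silently coerced to `Int`
-- because an `Int` is expected.
def asInt (n : Nat) : Int := n
```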
- Programming and Proving with Classical Types
Cristina Matache, Victor B. F. Gomes, and Dominic P. Mulligan, Computer Laboratory, University of Cambridge
Abstract: The propositions-as-types correspondence is ordinarily presented as linking the metatheory of typed λ-calculi and the proof theory of intuitionistic logic. Griffin observed that this correspondence could be extended to classical logic through the use of control operators. This observation set off a flurry of further research, leading to the development of Parigot's λµ-calculus. In this work, we use the λµ-calculus as the foundation for a system of proof terms for classical first-order logic. In particular, we define an extended call-by-value λµ-calculus with a type system in correspondence with full classical logic. We extend the language with polymorphic types, add a host of data types in 'direct style', and prove several metatheoretical properties. All of our proofs and definitions are mechanised in Isabelle/HOL, and we automatically obtain an interpreter for a system of proof terms cum programming language, called µML, using Isabelle's code generation mechanism. Atop our proof terms, we build a prototype LCF-style interactive theorem prover, called µTP, for classical first-order logic, capable of synthesising µML programs from completed tactic-driven proofs. We present example closed µML programs with classical tautologies for types, including some inexpressible as closed programs in the original λµ-calculus, and some example tactic-driven µTP proofs of classical tautologies.
1 Introduction
Propositions are types; λ-terms encode derivations; β-reduction and proof normalisation coincide. These three points are the crux of the propositions-as-types correspondence (a product of mid-20th century mathematical logic) connecting the proof theory of intuitionistic propositional logic and the metatheory of the simply-typed λ-calculus.
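Griffin's observation can be sketched in Haskell's continuation monad (our example using the standard mtl library, not the paper's µML): `callCC` has exactly the shape of Peirce's law, the axiom ((A → B) → A) → A that separates classical from intuitionistic logic.

```haskell
import Control.Monad.Cont

-- Peirce's law, with each proposition read under the continuation
-- monad: callCC is its proof term.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

-- The captured continuation behaves as a control operator: invoking
-- `exit` abandons the rest of the computation (`return 0`).
main :: IO ()
main = print (runCont (callCC (\exit -> exit 42 >> return 0)) id)  -- 42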
- Extending Nunchaku to Dependent Type Theory
Simon Cruanes (Inria Nancy – Grand Est, France) and Jasmin Christian Blanchette (Inria Nancy – Grand Est, France; Max-Planck-Institut für Informatik, Saarbrücken, Germany)
Abstract: Nunchaku is a new higher-order counterexample generator based on a sequence of transformations from polymorphic higher-order logic to first-order logic. Unlike its predecessor Nitpick for Isabelle, it is designed as a stand-alone tool, with frontends for various proof assistants. In this short paper, we present some ideas to extend Nunchaku with partial support for dependent types and type classes, to make frontends for Coq and other systems based on dependent type theory more useful.
1 Introduction
In recent years, we have seen the emergence of "hammers"—integrations of automatic theorem provers in proof assistants, such as Sledgehammer and HOLyHammer [7]. As useful as they might be, these tools are mostly helpless in the face of an invalid conjecture. Novices and experts alike can enter invalid formulas and find themselves wasting hours (or days) on an impossible proof; once they identify and correct the error, the proof is often easy. To discover flaws early, some proof assistants include counterexample generators to debug putative theorems or specific subgoals in an interactive proof. When formalizing algebraic results in Isabelle/HOL, Guttmann et al. [21] remarked that "Counterexample generators such as Nitpick complement the ATP [automatic theorem proving] systems and allow a proof and refutation game which is useful for developing and debugging formal specifications." Nunchaku is a new fully automatic counterexample generator for higher-order logic (simple type theory) designed to be integrated into several proof assistants.
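The "proof and refutation game" quoted above can be played in miniature with property-based testing (a Haskell analogy using the QuickCheck library; Nunchaku's actual mechanism is the translation to first-order logic described in the abstract): an invalid conjecture is dispatched by a small countermodel rather than hours of failed proof attempts.

```haskell
import Test.QuickCheck

-- An invalid conjecture: reverse distributes over (++) in the wrong
-- order. The correct law is reverse (xs ++ ys) == reverse ys ++ reverse xs.
prop_bad :: [Int] -> [Int] -> Bool
prop_bad xs ys = reverse (xs ++ ys) == reverse xs ++ reverse ys

main :: IO ()
main = quickCheck prop_bad  -- fails, e.g. with xs = [0], ys = [1]
```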