Twelf As a Unified Framework for Language Formalization And
Total pages: 16 · File type: PDF · Size: 1020 KB
Recommended publications
-
Foundational Proof Checkers with Small Witnesses
Dinghao Wu (Princeton University), Andrew W. Appel (Princeton University), Aaron Stump (Washington University in St. Louis)
{dinghao,appel}@cs.princeton.edu, [email protected]

ABSTRACT
Proof checkers for proof-carrying code (and similar) systems can suffer from two problems: huge proof witnesses and untrustworthy proof rules. No previous design has addressed both of these problems simultaneously. We show the theory, design, and implementation of a proof checker that permits small proof witnesses and machine-checkable proofs of the soundness of the system.

1. INTRODUCTION
In a proof-carrying code (PCC) system [10], or in other proof-carrying applications [3], an untrusted prover must convince a trusted checker of the validity of a theorem by sending a proof. Two of the potential problems with this approach are that the proofs might be too large, and that the checker might not be trustworthy. One measure of witness size is the ratio of the size of the proof witness to the size of the binary machine-language program: Necula's LF_i proof witnesses were about 4 times as big as the programs whose properties they proved.

Pfenning's Elf and Twelf systems [14, 15] are implementations of the Edinburgh Logical Framework. In these systems, proof-search engines can be represented as logic programs, much like (dependently typed, higher-order) Prolog. Elf and Twelf build proof witnesses automatically; if each logic-program clause is viewed as an inference rule, then the proof witness is a trace of the successful goals and subgoals executed by the logic program. That is, the proof witness is a tree whose nodes are the names of clauses and whose subtrees correspond to the subgoals of these clauses. Necula's theorem provers were written in this style, origi-
-
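The witness-as-trace idea in "Foundational Proof Checkers with Small Witnesses" can be illustrated with a toy interpreter. The sketch below is illustrative only: the clause names and the propositional program are invented, and the real Elf/Twelf systems work over dependently typed terms, not ground atoms. Each successful (sub)goal contributes one node to the witness tree, labeled with the clause that proved it.

```python
# Toy logic-program interpreter whose proof witness is a tree of
# clause names, one node per successful (sub)goal. The program and
# clause names are invented for illustration.

CLAUSES = [  # (clause_name, head, body_goals)
    ("ev_z",  "even_0", []),
    ("ev_ss", "even_2", ["even_0"]),
    ("ev_4",  "even_4", ["even_2"]),
]

def prove(goal):
    # Return a witness tree (clause_name, [subtrees]) or None on failure.
    for name, head, body in CLAUSES:
        if head == goal:
            subtrees = [prove(g) for g in body]
            if all(subtrees):          # every subgoal succeeded
                return (name, subtrees)
    return None

print(prove("even_4"))
# → ('ev_4', [('ev_ss', [('ev_z', [])])])
```

The witness is exactly the trace of clauses used, so a checker can replay it without searching; its size is proportional to the derivation, which is what makes witness size a concern in PCC.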
Hypertext Semiotics in the Commercialized Internet
Moritz Neumüller, Vienna, October 2001
Doctorate in Social and Economic Sciences (Wirtschaftsuniversität Wien)
Field: Information Economics (Informationswirtschaft)

Dissertation submitted in fulfillment of the requirements for the academic degree of Doctor of Social and Economic Sciences at the Wirtschaftsuniversität Wien.

First examiner: Univ. Prof. Dipl.-Ing. Dr. Wolfgang Panny, Institut für Informationsverarbeitung und Informationswirtschaft der Wirtschaftsuniversität Wien, Abteilung für Angewandte Informatik.
Second examiner: Univ. Prof. Dr. Herbert Hrachovec, Institut für Philosophie der Universität Wien.
Advisor: Gastprofessor Univ. Doz. Dipl.-Ing. Dr. Veith Risak
Submitted on:

Declaration: I affirm (1) that I wrote this dissertation independently, used no sources or aids other than those indicated, and did not otherwise make use of any unauthorized help; (2) that I have not previously submitted this dissertation, in Austria or abroad, in any form as an examination paper; (3) that this copy is identical to the assessed work.
-
Functions-As-Constructors Higher-Order Unification
Tomer Libal and Dale Miller
Inria Saclay & LIX/École Polytechnique, Palaiseau, France

Abstract
Unification is a central operation in the construction of a range of computational logic systems based on first-order and higher-order logics. First-order unification has a number of properties that dominate the way it is incorporated within such systems. In particular, first-order unification is decidable, unary, and can be performed on untyped term structures. None of these three properties hold for full higher-order unification: unification is undecidable, unifiers can be incomparable, and term-level typing can dominate the search for unifiers. The so-called pattern subset of higher-order unification was designed to be a small extension to first-order unification that respected the basic laws governing λ-binding (the equalities of α-, β-, and η-conversion) but which also satisfied those three properties.

While the pattern fragment of higher-order unification has been popular in various implemented systems and in various theoretical considerations, it is too weak for a number of applications. In this paper, we define an extension of pattern unification that is motivated by some existing applications and which satisfies these three properties. The main idea behind this extension is that the arguments to a higher-order, free variable can be more than just distinct bound variables: they can also be terms constructed from (sufficient numbers of) such variables using term constructors and where no argument is a subterm of any other argument. We show that this extension to pattern unification satisfies the three properties mentioned above.
-
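For contrast with the higher-order case discussed in the abstract above, the decidable, unary first-order algorithm can be sketched in a few lines. This is a generic textbook-style sketch in Python, not code from the paper; the term representation (variables as capitalized strings, compound terms as tuples) is invented for illustration.

```python
# First-order unification sketch with occurs check. Variables are
# strings starting with an uppercase letter; compound terms are tuples
# (functor, arg1, ..., argn). Returns a most general unifier or None.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase variable bindings to a representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Does variable v occur in term t (under subst)?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(s, t, subst=None):
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, b) unifies with f(a, Y) via the single most general unifier:
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # → {'X': 'a', 'Y': 'b'}
```

The occurs check is what keeps the algorithm sound (X never unifies with f(X)); it is precisely the uniqueness of the unifier and the decidability shown here that fail for full higher-order unification.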
Postmodern Test Theory
Robert J. Mislevy, Educational Testing Service
(Reprinted with permission from Transitions in Work and Learning: Implications for Assessment, 1997, by the National Academy of Sciences, courtesy of the National Academies Press, Washington, D.C.)

"Good heavens! For more than forty years I have been speaking prose without knowing it." (Molière, Le Bourgeois Gentilhomme)

INTRODUCTION
Molière's Monsieur Jourdain was astonished to learn that he had been speaking prose all his life. I know how he felt. For years I have just been doing my job: trying to improve educational assessment by applying ideas from statistics and psychology. Come to find out, I've been advancing "neopragmatic postmodernist test theory" without ever intending to do so. This paper tries to convey some sense of what this rather unwieldy phrase means and offers some thoughts about what it implies for educational assessment, present and future. The good news is that we can foresee some real improvements: assessments that are more open and flexible, better connected with students' learning, and more educationally useful. The bad news is that we must stop expecting drop-in-from-the-sky assessment to tell us, in 2 hours and for $10, the truth, plus or minus two standard errors.

Gary Minda's (1995) Postmodern Legal Movements inspired the structure of what follows. Almost every page of his book evokes parallels between issues and new directions in jurisprudence on the one hand and the debates and new developments in educational assessment on the other. Excerpts from Minda's book frame the sections of this paper.
-
Poststructuralism, Cultural Studies, and the Composition Classroom: Postmodern Theory in Practice
Author(s): James A. Berlin
Source: Rhetoric Review, Vol. 11, No. 1 (Autumn, 1992), pp. 16-33
Published by: Taylor & Francis, Ltd.
Stable URL: https://www.jstor.org/stable/465877
Accessed: 13-02-2019 19:20 UTC

James A. Berlin, Purdue University

The uses of postmodern theory in rhetoric and composition studies have been the object of considerable abuse of late. Figures of some repute in the field (the likes of Maxine Hairston and Peter Elbow), as well as anonymous voices from the Burkean Parlor section of Rhetoric Review (most recently TS, a graduate student, and KF, a voice speaking for "a general English teacher audience" (192)), have joined the chorus of protest.
-
Media Theory and Semiotics: Key Terms and Concepts
Binary structures and semiotic square of oppositions
Many systems of meaning are based on binary structures (masculine/feminine; black/white; natural/artificial): two contrary conceptual categories that also entail or presuppose each other. Semiotic interpretation involves exposing the culturally arbitrary nature of this binary opposition and describing the deeper consequences of this structure throughout a culture. See also the semiotic square and the logical square of oppositions.

Code
A code is a learned rule for linking signs to their meanings. The term is used in various ways in media studies and semiotics. In communication studies, a message is often described as being "encoded" by the sender and then "decoded" by the receiver. The encoding process works on multiple levels. For semiotics, a code is the framework, a learned and shared conceptual connection at work in all uses of signs (language, visual). An easy example is seeing the kinds and levels of language use in anyone's language group. "English" is a convenient fiction for all the kinds of actual versions of the language. We have formal, edited, written English (which no one speaks); colloquial, everyday, regional English (regions in the US, UK, and around the world); social contexts for styles and specialized vocabularies (work, office, sports, home); ethnic-group usage hybrids; and various kinds of slang (in-group, class-based, group-based, etc.). Moving among all these is called "code-switching." We know what they mean if we belong to the learned, rule-governed, shared-code group using one of these kinds and styles of language.
-
Mental Arithmetic Processes: Testing the Independence of Encoding and Calculation
Adam R. Frampton and Thomas J. Faulkenberry
Department of Psychological Sciences, Tarleton State University

Author Note: This research was supported in part by funding from the Tarleton State University Office of Student Research and Creative Activities. Correspondence concerning this article should be addressed to Thomas J. Faulkenberry, Department of Psychological Sciences, Tarleton State University, Box T-0820, Stephenville, TX 76402. Phone: +1 (254) 968-9816. Email: [email protected]

Abstract
Previous work has shown that the cognitive processes involved in mental arithmetic can be decomposed into three stages: encoding, calculation, and production. Models of mental arithmetic hypothesize varying degrees of independence between these processes of encoding and calculation. In the present study, we tested whether encoding and calculation are independent by having participants complete an addition verification task. We manipulated problem size (small, large) as well as problem format, having participants verify equations presented either as Arabic digits (e.g., "3 + 7 = 10") or using words (e.g., "three + seven = ten"). In addition, we collected trial-by-trial strategy reports. Though we found main effects of both problem size and format on response times, we found no interaction between the two factors, supporting the hypothesis that encoding and calculation function independently. However, strategy reports indicated that manipulating format caused a shift from retrieval-based strategies to procedural strategies, particularly on large problems. We discuss these results in light of two competing models of mental arithmetic.
-
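The pattern Frampton and Faulkenberry report (main effects of size and format but no interaction) is what additive, independent stages predict: the format cost should be the same for small and large problems. A numeric sketch with invented mean response times (not data from the study) shows how the 2 x 2 main effects and interaction contrast are computed:

```python
# Hypothetical mean response times (ms) for a 2 (problem size) x 2
# (format) design. The numbers are invented for illustration, not the
# study's data; they are chosen to be perfectly additive.
rt = {
    ("small", "digits"): 850.0, ("small", "words"): 1150.0,
    ("large", "digits"): 1050.0, ("large", "words"): 1350.0,
}

def mean(xs):
    return sum(xs) / len(xs)

# Main effect of problem size: large minus small, averaged over format.
size_effect = mean([rt[("large", f)] for f in ("digits", "words")]) - \
              mean([rt[("small", f)] for f in ("digits", "words")])

# Main effect of format: words minus digits, averaged over size.
format_effect = mean([rt[(s, "words")] for s in ("small", "large")]) - \
                mean([rt[(s, "digits")] for s in ("small", "large")])

# Interaction contrast: does the format cost differ across sizes?
# Zero means the effects are additive, as independent stages predict.
interaction = (rt[("large", "words")] - rt[("large", "digits")]) - \
              (rt[("small", "words")] - rt[("small", "digits")])

print(size_effect, format_effect, interaction)  # → 200.0 300.0 0.0
```

A nonzero interaction contrast in real data would instead suggest that format (encoding) modulates calculation, which is the competing, interactive view the paper weighs.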
QED at Large: A Survey of Engineering of Formally Verified Software
The version of record is available at: http://dx.doi.org/10.1561/2500000045

QED at Large: A Survey of Engineering of Formally Verified Software
Talia Ringer (University of Washington, [email protected]), Karl Palmskog (University of Texas at Austin, [email protected]), Ilya Sergey (Yale-NUS College and National University of Singapore, [email protected]), Milos Gligoric (University of Texas at Austin, [email protected]), Zachary Tatlock (University of Washington, [email protected])

arXiv:2003.06458v1 [cs.LO] 13 Mar 2020

Errata for the present version of this paper may be found online at https://proofengineering.org/qed_errata.html

Contents
1 Introduction ..................................... 103
  1.1 Challenges at Scale ........................... 104
  1.2 Scope: Domain and Literature .................. 105
  1.3 Overview ...................................... 106
  1.4 Reading Guide ................................. 106
2 Proof Engineering by Example ..................... 108
3 Why Proof Engineering Matters .................... 111
  3.1 Proof Engineering for Program Verification .... 112
  3.2 Proof Engineering for Other Domains ........... 117
  3.3 Practical Impact .............................. 124
4 Foundations and Trusted Bases .................... 126
  4.1 Proof Assistant Pre-History ................... 126
  4.2 Proof Assistant Early History ................. 129
  4.3 Proof Assistant Foundations ................... 130
  4.4 Trusted Computing Bases of Proofs and Programs 137
5 Between the Engineer and the Kernel: Languages and Automation 141
  5.1 Styles of Automation .......................... 142
  5.2 Automation in Practice ........................ 156
6 Proof Organization and Scalability ............... 162
  6.1 Property Specification and Encodings ...
-
Overcoming Performance Barriers: Efficient Verification Techniques for Logical Frameworks
Overcoming Performance Barriers: Efficient Verification Techniques for Logical Frameworks
Brigitte Pientka
School of Computer Science, McGill University, Montreal, Canada
[email protected]

Abstract. In recent years, logical frameworks which support formalizing language specifications together with their meta-theory have been pervasively used in small and large-scale applications, from certifying code [2] to advocating a general infrastructure for formalizing the meta-theory and semantics of programming languages [5]. In particular, the logical framework LF [9], based on the dependently typed lambda-calculus, and light-weight variants of it like LF_i [17] have played a major role in these applications. While the acceptance of logical framework technology has grown and the tools have matured, one of the most criticized points is the run-time performance. In this tutorial we give a brief introduction to logical frameworks, describe the state of the art, and present recent advances in addressing some of the existing performance issues.

1 Introduction
Logical frameworks [9] provide an experimental platform to specify and implement formal systems together with the proofs about them. One of their applications lies in proof-carrying code, where they are successfully used to specify and verify formal guarantees about the run-time behavior of programs. More generally, logical frameworks provide an elegant language for encoding and mechanizing the meta-theory and semantics of programming languages. This idea has recently found strong advocates among programming languages researchers, who proposed the POPLmark challenge, a set of benchmarks to evaluate and compare tool support to experiment with programming language design and mechanize the reasoning about programming languages [5].
-
Modeling, Dialogue, and Globality: Biosemiotics and Semiotics of Self. 2
Sign Systems Studies 31.1, 2003

Modeling, dialogue, and globality: Biosemiotics and semiotics of self. 2. Biosemiotics, semiotics of self, and semioethics
Susan Petrilli
Dept. of Linguistic Practices and Text Analysis, University of Bari
Via Garruba 6, 70100 Bari, Italy
e-mail: [email protected]

Abstract. The main approaches to semiotic inquiry today contradict the idea of the individual as a separate and self-sufficient entity. The body of an organism in the micro- and macrocosm is not an isolated biological entity, it does not belong to the individual, it is not a separate and self-sufficient sphere in itself. The body is an organism that lives in relation to other bodies; it is intercorporeal and interdependent. This concept of the body finds confirmation in cultural practices and worldviews based on intercorporeity, interdependency, exposition and opening, though nowadays such practices are almost extinct. An approach to semiotics that is global and at once capable of surpassing the illusory idea of definitive and ultimate boundaries to identity presupposes dialogue and otherness. Otherness obliges identity to question the tendency to totalizing closure and to reorganize itself always anew in a process related to 'infinity', as Emmanuel Levinas teaches us, or to 'infinite semiosis', to say it with Charles Sanders Peirce. Another topic of this paper is the interrelation in anthroposemiosis between man and machine and the implications involved for the future of humanity. Our overall purpose is to develop global semiotics in the direction of "semioethics", as proposed by S. Petrilli and A. Ponzio and their ongoing research.
-
Type Theory and Applications
Harley Eades
[email protected]

1 Introduction
There are two major problems growing in two areas. The first is in computer science, in particular software engineering. Software is becoming more and more complex, and hence more susceptible to software defects. Software bugs have two critical repercussions: they cost companies lots of money and time to fix, and they have the potential to cause harm. The National Institute of Standards and Technology estimated that software errors cost the United States economy approximately sixty billion dollars annually, while the Federal Bureau of Investigation estimated in a 2005 report that software bugs cost U.S. companies approximately sixty-seven billion a year [90, 108].

Software bugs have the potential to cause harm. In 2010 there were approximately a hundred reports made to the National Highway Traffic Safety Administration of potential problems with the braking system of the 2010 Toyota Prius [17]. The problem was that the anti-lock braking system would experience a "short delay" when the brakes were pressed by the driver of the vehicle [106]. This actually caused some crashes. Toyota found that this short delay was the result of a software bug, and was able to repair the vehicles using a software update [91].

Another incident where substantial harm was caused occurred in 2002, when two planes collided over Überlingen in Germany. A cargo plane operated by DHL collided with a passenger flight holding fifty-one passengers. Air-traffic control did not notice the intersecting traffic until less than a minute before the collision occurred.
-
Twelf User's Guide
Version 1.4
Frank Pfenning and Carsten Schuermann
Copyright (c) 1998, 2000, 2002 Frank Pfenning and Carsten Schuermann

Chapter 1: Introduction

Twelf is the current version of a succession of implementations of the logical framework LF. Previous systems include Elf (which provided type reconstruction and the operational semantics reimplemented in Twelf) and MLF (which implemented module-level constructs, loosely based on the signatures and functors of ML, still missing from Twelf).

Twelf should be understood as research software. This means comments, suggestions, and bug reports are extremely welcome, but there are no guarantees regarding response times. The same remark applies to these notes, which constitute the only documentation on the present Twelf implementation.

For current information including download instructions, publications, and mailing list, see the Twelf home page at http://www.cs.cmu.edu/~twelf/. This User's Guide is published as: Frank Pfenning and Carsten Schuermann, Twelf User's Guide, Technical Report CMU-CS-98-173, Department of Computer Science, Carnegie Mellon University, November 1998.

Below we state the typographic conventions in this manual.

  code       for Twelf or ML code
  'samp'     for characters and small code fragments
  metavar    for placeholders in code
  keyboard   for input in verbatim examples
  <key>      for keystrokes
  math       for mathematical expressions
  emph       for emphasized phrases

File names for examples given in this guide are relative to the main directory of the Twelf installation. For example, 'examples/guide/nd.elf' may be found in '/usr/local/twelf/examples/guide/nd.elf' if Twelf was installed into the '/usr/local/' directory.