The Concept of Transprecision Computing


Diss. ETH No. 26716

The Concept of Transprecision Computing

A thesis submitted to attain the degree of
DOCTOR OF SCIENCES of ETH ZURICH
(Dr. sc. ETH Zurich)

presented by
FLORIAN MICHAEL SCHEIDEGGER
MSc ETH EEIT, ETH Zurich
born on April 4th, 1990
citizen of Unterramsern SO, Switzerland

accepted on the recommendation of
Prof. Dr. Luca Benini, examiner
Prof. Dr. Norbert Wehn, co-examiner
Dr. Cristiano Malossi, supervisor

2020

Acknowledgments

First of all, I would like to thank my supervisors, Luca Benini and Cristiano Malossi. I appreciate that I got the exceptional chance to pursue a doctorate within IBM Research Zurich. In that context, I thank Luca for his prompt and constructive feedback and his guidance toward higher-level goals. Most importantly, I thank Luca for opening the doors to this career path at its initial stage; without that, I would not be in my current position. I would like to thank Cristiano for his endless support, his detailed feedback, and fruitful discussions. In particular, I thank Cristiano for the practical help and guidance required in the daily business, which ranged from improving my presentation skills up to successfully leading me through the complete process of filing a disclosure as first author. I would like to thank Costas Bekas, our group manager, for pushing through my contract and for providing motivating and constructive feedback to our team. I would also like to thank Norbert Wehn, the co-examiner of this thesis, for his interest in my work and his feedback.

I thank all my collaborators, Goran Flegar, Vedran Novakovic, Giovanni Mariani, Andres Tomas, Enrique Quintana-Ortí, Lukas Cavigelli, Michael Schaffner, Roxana Istrate, Dimitrios Nikolopoulos, Youri Popoff, Michael Gautschi, Frank Gürkaynak, Hubert Kaeslin, Aljoscha Smolic, Manuel Eggimann, Christelle Gloor, Atin Sood, Benjamin Elder, Benjamin Herta, Chao Xue, Debashish Saha, Ganesh Venkataraman, Gegi Thomas, Hendrik Strobelt, Horst Samulowitz, Martin Wistuba, Matteo Manica, Mihir Choudhury, Rong Yan, Ruchir Puri, and Tejaswini Pedapati for the constructive discussions and inspiration that led to joint contributions.

I thank all members of the OPRECOMP consortium for their impact on our research. In particular, I would like to thank Enrique Quintana-Ortí, Andrew Emerson, Fabian Schuiki, Stefan Mach, Davide Rossi, Giuseppe Tagliavini, Andrea Borghesi, JunKyu Lee, Umar Minhas, Igor Neri, Eric Flamand, and Christian Weis for their valuable project input, critical feedback, and long discussions that helped to shape this work. I would like to thank the additional team members at IBM, Nico Gorbach, Panagiotis Chatzidoukas, and Andrea Bartezzaghi, who were involved in productive technical discussions. Special thanks go to Roxana, with whom I shared the office, for technical discussions, for helping me out with many practical problems, and for the joint work that we authored.

I would like to thank Luca's research group and IBM's department for providing the facilities and the required administrative help. Special thanks go to Christoph Hagleitner for managing IBM's computing infrastructure and providing it to researchers. I highly appreciate the proficient help of Lilli-Marie Pavka in copy-editing our latest publication. I thank Dionysios Diamantopoulos for interesting out-of-the-box technical discussions and shared OPRECOMP project efforts.
I would like to thank Folashade Ajala, Austin Brauser, and Hari Mistry for volunteering to proofread fragments of this work. Finally, I would like to thank my family and my friends for supporting me during this time. I highly appreciate the down-to-earth inspiration of Sarah Battige, which calmed me down in stressful times towards the end of writing this work. I am particularly grateful to my parents, Marianne and Thomas Scheidegger, who provide unconditional support in every life situation. Especially, I would like to thank them for encouraging my interests and for giving me access to university in the first place, which enabled my career path.

Abstract

For many years, computing systems have relied on guaranteed numerical precision in each step of complex computations. Moore's law has sustained exponential improvements in the semiconductor industry over several decades for building computing infrastructure, from tiny Internet-of-Things nodes, through personal smartphones, laptops, and workstations, up to large high-performance computing (HPC) server centers. With the paradigm of the "power wall", the achievable improvements start to saturate. To that end, the concept of transprecision computing emerged, in which existing over-conservative "precise" computing assumptions are relaxed and replaced with more flexible and efficient policies to gain performance.

Unfortunately, it is not straightforward to adopt and integrate general transprecision concepts into the variety of today's computing infrastructure. The main challenge consists of leveraging domain-specific knowledge and providing full solutions that cover everything from physical foundations and the circuit level up through the full software stack to the application level.

This work focuses on how transprecision concepts improve general computing. We identify and elaborate on the standard number representations, especially the one defined in the IEEE 754 floating-point standard, as the enabler of low-precision computing. We developed lightweight libraries that allow integrating transprecision concepts into algorithms. Finally, we focus on building automated workflows for specific problems, where the solution space is enlarged by multiple orders of magnitude due to the various configurations of low precision. We demonstrate how heuristic optimization strategies applied on top of transprecision computing find near-optimal configurations of approximated kernels in a short time.

Zusammenfassung

For many years, complex computations on computer systems have relied on guaranteed numerical precision in every step. Moore's law has sustained exponential growth in the semiconductor industry over several decades, yielding continuous progress in computing infrastructure, from the Internet of Things, smartphones, laptops, and workstations up to high-performance computers in data centers. The "power wall" paradigm limits the achievable improvements. The concept of transprecision computing relaxes existing, overly conservative assumptions about computational accuracy; instead, these assumptions are replaced by more flexible and more efficient policies in order to improve performance. Unfortunately, it is not easy to integrate transprecision concepts directly into the variety of today's computer systems.
The biggest challenge is to leverage domain-specific knowledge in order to arrive at a complete solution that takes all aspects into account, from physical foundations and circuit details over software up to the specifics of the application. This work focuses on how general computations can be improved through transprecision concepts. We identify and establish that the standard representations of numbers, in particular those of the IEEE 754 floating-point standard, enable computing with reduced precision. We develop lean software libraries that enable the integration of transprecision concepts into algorithms. Finally, we build automated workflows for specific problems which, owing to the many configurations of the adjustable precision, lie in a solution space that is enlarged by several orders of magnitude. We show how heuristic optimization strategies, applied to the precision configurations of transprecision computations, deliver near-optimal configurations of the approximated kernels in a short time.

Contents

1 Introduction
  1.1 Trends in deep learning
  1.2 Research directions in deep learning
  1.3 Contributions
  1.4 Thesis structure
  1.5 List of publications
2 Benchmarking today's systems
  2.1 Benchmarks for transprecision computing
    2.1.1 PageRank
    2.1.2 BLSTM
    2.1.3 GLQ
  2.2 Summary and conclusion
3 Approximate computing
  3.1 Approximate computing techniques
    3.1.1 The use of datatypes
    3.1.2 Loop perforation
    3.1.3 Task skipping and memoization
    3.1.4 Using multiple inexact program versions
    3.1.5 Stochastic computing
  3.2 Applying approximate computing
  3.3 Selected results on benchmarks
    3.3.1 PageRank
    3.3.2 BLSTM
    3.3.3 GLQ
  3.4 Summary and conclusion
4 Core transprecision concepts
  4.1 Number formats
    4.1.1 Fixed-point
    4.1.2 IEEE 754 floating-point
    4.1.3 Logarithmic number system (LNS)
  4.2 The transprecision system view
    4.2.1 Transprecision concepts
    4.2.2 Reduced precision as root-cause
    4.2.3 Transprecision computing in current solutions
  4.3 Summary and conclusion
5 Emulating numerical behavior of applications
  5.1 The floatx library
    5.1.1 Related work
    5.1.2 Interface and design goals
    5.1.3 The choice of C++
    5.1.4 The floatx class template
    5.1.5 Operations on floatx objects
    5.1.6 The floatxr class template
    5.1.7 Notes on concurrency
    5.1.8 Advanced properties and performance of floatx
  5.2 Numerical analysis of applications
    5.2.1 PageRank
    5.2.2 BLSTM
    5.2.3 GLQ
  5.3 Summary and conclusion
6 Floatx for
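The abstract refers to lightweight libraries that emulate reduced-precision arithmetic (the floatx library of Chapter 5). As a rough illustration of the underlying idea only, and not of the actual floatx interface, the C++ sketch below rounds a double to a configurable number of significand bits; this kind of significand truncation is the basic mechanism such emulation layers build on.

    #include <cmath>
    #include <cstdio>

    // Round a double to 'sig_bits' significand bits by splitting off the
    // exponent, snapping the mantissa to a coarser grid, and recombining.
    // This sketch ignores exponent-range clamping, subnormals and other
    // corner cases that a production emulation library has to handle.
    double round_significand(double x, int sig_bits) {
        if (x == 0.0 || !std::isfinite(x)) return x;
        int exp = 0;
        double m = std::frexp(x, &exp);            // x = m * 2^exp, 0.5 <= |m| < 1
        double scale = std::ldexp(1.0, sig_bits);  // 2^sig_bits
        m = std::nearbyint(m * scale) / scale;     // keep only sig_bits mantissa bits
        return std::ldexp(m, exp);
    }

    int main() {
        double pi = 3.14159265358979;
        for (int bits : {8, 11, 24}) {
            std::printf("%2d significand bits: %.12f\n", bits, round_significand(pi, bits));
        }
        return 0;
    }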
Recommended publications
  • Arithmetic Algorithms for Extended Precision Using Floating-Point Expansions Mioara Joldes, Olivier Marty, Jean-Michel Muller, Valentina Popescu
    To cite this version: Mioara Joldes, Olivier Marty, Jean-Michel Muller, Valentina Popescu. Arithmetic algorithms for extended precision using floating-point expansions. IEEE Transactions on Computers, Institute of Electrical and Electronics Engineers, 2016, 65 (4), pp. 1197-1210. DOI: 10.1109/TC.2015.2441714. HAL Id: hal-01111551, https://hal.archives-ouvertes.fr/hal-01111551v2, submitted on 2 Jun 2015. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    Abstract: Many numerical problems require a higher computing precision than the one offered by standard floating-point (FP) formats. One common way of extending the precision is to represent numbers in a multiple-component format. By using so-called floating-point expansions, real numbers are represented as the unevaluated sum of standard machine-precision FP numbers. This representation offers the simplicity of using directly available, hardware-implemented and highly optimized FP operations.
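    Floating-point expansions as described above are built from error-free transformations. As a minimal sketch of the idea in C++ (Knuth's classic TwoSum, not code taken from the cited paper), the following function splits an FP addition into its rounded result and the exact rounding error; the pair (s, e) is the simplest two-term expansion.

      #include <cstdio>
      #include <utility>

      // Knuth's TwoSum: for any two doubles a and b, the returned pair (s, e)
      // satisfies s + e == a + b exactly, where s is the rounded FP sum and e
      // is the rounding error. (s, e) is a two-term floating-point expansion.
      std::pair<double, double> two_sum(double a, double b) {
          double s  = a + b;
          double bb = s - a;                       // part of b that made it into s
          double e  = (a - (s - bb)) + (b - bb);   // exact error of the addition
          return {s, e};
      }

      int main() {
          auto [s, e] = two_sum(1.0, 1e-20);       // 1e-20 vanishes in s, survives in e
          std::printf("s = %.17g  e = %.17g\n", s, e);
          return 0;
      }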
  • Package 'PINSPlus'
    Package 'PINSPlus', August 6, 2020. Encoding UTF-8. Type Package. Title Clustering Algorithm for Data Integration and Disease Subtyping. Version 2.0.5. Date 2020-08-06. Author Hung Nguyen, Bang Tran, Duc Tran and Tin Nguyen. Maintainer Hung Nguyen <[email protected]>.
    Description: Provides a robust approach for omics data integration and disease subtyping. PINSPlus is fast and supports the analysis of large datasets with hundreds of thousands of samples and features. The software automatically determines the optimal number of clusters and then partitions the samples in a way such that the results are robust against noise and data perturbation (Nguyen et al. (2019) <DOI:10.1093/bioinformatics/bty1049>, Nguyen et al. (2017) <DOI:10.1101/gr.215129.116>).
    License LGPL. Depends R (>= 2.10). Imports foreach, entropy, doParallel, matrixStats, Rcpp, RcppParallel, FNN, cluster, irlba, mclust. RoxygenNote 7.1.0. Suggests knitr, rmarkdown, survival, markdown. LinkingTo Rcpp, RcppArmadillo, RcppParallel. VignetteBuilder knitr. NeedsCompilation yes. Repository CRAN. Date/Publication 2020-08-06 21:20:02 UTC.
    R topics documented: PINSPlus-package, AML2004, KIRC, PerturbationClustering, SubtypingOmicsData.
    PINSPlus-package: Perturbation Clustering for data INtegration and disease Subtyping. Description: This package implements clustering algorithms proposed by Nguyen et al. (2017, 2019). Perturbation Clustering for data INtegration and disease Subtyping (PINS) is an approach for integration of data and classification of diseases into various subtypes. PINS+ provides algorithms supporting both single data type clustering and multi-omics data type clustering. PINSPlus is an improved version of PINS that allows users to customize the base clustering algorithm and the perturbation methods.
  • Floating Points
    Jin-Soo Kim ([email protected]), Systems Software & Architecture Lab., Seoul National University. Floating Points, Fall 2018.
    ▪ How to represent fractional values with a finite number of bits? Examples: 0.1, 0.612, 3.14159265358979323846264338327950288...
    ▪ Wide ranges of numbers: 1 light-year = 9,460,730,472,580.8 km; the radius of a hydrogen atom: 0.000000000025 m.
    ▪ Representation: bits to the right of the "binary point" represent fractional (negative) powers of 2. The bit pattern b_i b_{i-1} ... b_1 b_0 . b_{-1} b_{-2} b_{-3} ... b_{-j} represents the rational number \sum_{k=-j}^{i} b_k \cdot 2^k, with weights 1/2, 1/4, 1/8, ..., 2^{-j} for the bits to the right of the point.
    ▪ Examples: 5 3/4 = 101.11_2, 2 7/8 = 10.111_2, 63/64 = 0.111111_2.
    ▪ Observations: divide by 2 by shifting right; multiply by 2 by shifting left. Numbers of the form 0.111111..._2 are just below 1.0, since 1/2 + 1/4 + 1/8 + ... + 1/2^i + ... → 1.0; this is written 1.0 − ε.
    ▪ Representable numbers: only numbers of the form x / 2^k can be represented exactly; other numbers have repeating bit representations, e.g. 1/3 = 0.0101010101[01]..._2, 1/5 = 0.001100110011[0011]..._2, 1/10 = 0.0001100110011[0011]..._2.
    Fixed Points
    ▪ p.q fixed-point representation: use the rightmost q bits of an integer as representing a fraction. Example: 17.14 fixed-point representation.
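    To accompany the 17.14 example that closes this excerpt, here is a small C++ sketch of a p.q fixed-point representation (an illustration added here, not material from the slides): a value is stored as an integer holding round(value * 2^q).

      #include <cmath>
      #include <cstdint>
      #include <cstdio>

      // 17.14 fixed-point: a 32-bit integer storing round(value * 2^14), so the
      // lowest 14 bits hold the fraction and the rest the signed integer part.
      constexpr int Q = 14;

      int32_t to_fix(double x)  { return static_cast<int32_t>(std::lround(x * (1 << Q))); }
      double  to_dbl(int32_t f) { return static_cast<double>(f) / (1 << Q); }

      int main() {
          int32_t a = to_fix(5.75);    // 5 3/4  = 101.11 in binary
          int32_t b = to_fix(2.875);   // 2 7/8  = 10.111 in binary
          int32_t sum = a + b;         // fixed-point addition is plain integer addition
          // multiplication needs a widening product followed by a shift back by Q bits
          int32_t prod = static_cast<int32_t>((static_cast<int64_t>(a) * b) >> Q);
          std::printf("sum = %g, product = %g\n", to_dbl(sum), to_dbl(prod));
          return 0;
      }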
  • Hacking in C 2020 the C Programming Language Thom Wiggers
    Hacking in C 2020: The C programming language. Thom Wiggers.
    Table of Contents: Introduction; Undefined behaviour; Abstracting away from bytes in memory; Integer representations.
    The C programming language:
    • Invented by Dennis Ritchie in 1972–1973.
    • Not one of the first programming languages: ALGOL, for example, is older; another predecessor is B.
    • Closely tied to the development of the Unix operating system; Unix and Linux are mostly written in C.
    • Compilers are widely available for many, many, many platforms.
    • Still in development: the latest release of the standard is C18; popular versions are C99 and C11.
    • Many compilers implement extensions, leading to versions such as gnu18 and gnu11; the default version in GCC is gnu11.
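    The deck's remaining topics, integer representations and abstracting away from bytes in memory, are only listed here. The short C++ sketch below (an illustration inferred from the topic names, not taken from the slides) shows the kind of demonstration they usually involve: inspecting the byte-level two's-complement representation of an int.

      #include <cstdio>
      #include <cstring>

      // Print the raw bytes of an int. On a little-endian machine, -2 prints as
      // "fe ff ff ff": its two's-complement encoding, least significant byte first.
      int main() {
          int x = -2;
          unsigned char bytes[sizeof(int)];
          std::memcpy(bytes, &x, sizeof(int));   // a well-defined way to view object bytes
          for (unsigned char b : bytes) {
              std::printf("%02x ", b);
          }
          std::printf("\n");
          return 0;
      }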
  • TypeScript Language Specification
    TypeScript Language Specification, Version 1.8, January 2016. Microsoft is making this Specification available under the Open Web Foundation Final Specification Agreement Version 1.0 ("OWF 1.0") as of October 1, 2012. The OWF 1.0 is available at http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owfa-1-0. TypeScript is a trademark of Microsoft Corporation.
    Table of Contents: 1 Introduction; 1.1 Ambient Declarations; 1.2 Function Types; 1.3 Object Types; 1.4 Structural Subtyping; 1.5 Contextual Typing; 1.6 Classes.
  • Gradual Liquid Type Inference
    Gradual Liquid Type Inference. NIKI VAZOU, IMDEA, Spain; ÉRIC TANTER, University of Chile, Chile; DAVID VAN HORN, University of Maryland, USA. Refinement types allow for lightweight program verification by enriching types with logical predicates. Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference, a novel combination of liquid inference and gradual refinement types, addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e., the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell. To demonstrate the utility of our approach, we develop an interactive tool, GuiLT, for gradual liquid type inference in Liquid Haskell that both infers modular types and explores safe concretizations of gradual refinements.
  • 5. Data Types
    IEEE Std 1647-2011, IEEE Standard for the Functional Verification Language e. 5. Data types. The e language has a number of predefined data types, including the integer and Boolean scalar types common to most programming languages. In addition, new scalar data types (enumerated types) that are appropriate for programming, modeling hardware, and interfacing with hardware simulators can be created. The e language also provides a powerful mechanism for defining OO hierarchical data structures (structs) and ordered collections of elements of the same type (lists). The following subclauses provide a basic explanation of e data types.
    5.1 e data types. Most e expressions have an explicit data type, as follows: scalar types, scalar subtypes, enumerated scalar types, casting of enumerated types in comparisons, struct types, struct subtypes, referencing fields in when constructs, list types, the set type, the string type, the real type, the external_pointer type, and the "untyped" pseudo type. Certain expressions, such as HDL objects, have no explicit data type. See 5.2 for information on how these expressions are handled.
    5.1.1 Scalar types. Scalar types in e are one of the following: numeric, Boolean, or enumerated. Table 17 shows the predefined numeric and Boolean types. Both signed and unsigned integers can be of any size and, thus, of any range. See 5.1.2 for information on how to specify the size and range of a scalar field or variable explicitly. See also Clause 4.
    5.1.2 Scalar subtypes. A scalar subtype can be named and created by using a scalar modifier to specify the range or bit width of a scalar type.
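    For readers unfamiliar with e, a loose C++ analogue of three of the constructs named above (an enumerated scalar type, a struct, and a list of elements of one type) is sketched below; this is an illustration of the concepts only, not e syntax and not part of the standard.

      #include <string>
      #include <vector>

      // Rough C++ counterparts of the e constructs listed above: an enumerated
      // scalar type, a struct with typed fields, and a list of one element type.
      enum class Color { red, green, blue };

      struct Packet {
          int         length = 0;
          bool        valid  = false;
          Color       tag    = Color::red;
          std::string payload;
      };

      int main() {
          Packet p{64, true, Color::blue, "hello"};  // struct instance
          std::vector<Packet> queue;                 // ordered, homogeneous collection
          queue.push_back(p);
          return queue.size() == 1 ? 0 : 1;
      }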
  • A System of Constructor Classes: Overloading and Implicit Higher-Order Polymorphism
    A system of constructor classes: overloading and implicit higher-order polymorphism. Mark P. Jones, Yale University, Department of Computer Science, P.O. Box 2158 Yale Station, New Haven, CT 06520-2158. [email protected]
    Abstract: This paper describes a flexible type system which combines overloading and higher-order polymorphism in an implicitly typed language using a system of constructor classes, a natural generalization of type classes in Haskell. We present a wide range of examples which demonstrate the usefulness of such a system. In particular, we show how constructor classes can be used to support the use of monads in a functional language. The underlying type system permits higher-order polymorphism but retains many of the attractive features that have made the use of Hindley/Milner type systems so popular. In particular, there is an effective algorithm which can be used to calculate principal types without the need for explicit type or kind annotations.
    The map function generalizes to a range of other data types, as illustrated by the following examples:
      data Tree a = Leaf a | Tree a :^: Tree a
      mapTree :: (a → b) → (Tree a → Tree b)
      mapTree f (Leaf x)  = Leaf (f x)
      mapTree f (l :^: r) = mapTree f l :^: mapTree f r
      data Opt a = Just a | Nothing
      mapOpt :: (a → b) → (Opt a → Opt b)
      mapOpt f (Just x) = Just (f x)
      mapOpt f Nothing  = Nothing
    Each of these functions has a similar type to that of the original map and also satisfies the functor laws given above.
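    The paper's examples are written in Haskell. As a loose C++ analogue (a sketch added here, not code from the paper), the same "map over a container shape" idea can be written with templates for std::optional and std::vector, although C++ lacks the constructor-class mechanism that lets Haskell overload a single map name across such shapes.

      #include <cstdio>
      #include <optional>
      #include <vector>

      // fmap-like helpers for two different container shapes. Templates give the
      // polymorphism, but not the single overloaded name ("functor") that
      // constructor classes provide in Haskell.
      template <typename A, typename F>
      auto map_opt(const std::optional<A>& x, F f) -> std::optional<decltype(f(*x))> {
          if (x) return f(*x);      // Just a  ->  Just (f a)
          return std::nullopt;      // Nothing ->  Nothing
      }

      template <typename A, typename F>
      auto map_vec(const std::vector<A>& xs, F f) -> std::vector<decltype(f(xs[0]))> {
          std::vector<decltype(f(xs[0]))> out;
          out.reserve(xs.size());
          for (const A& x : xs) out.push_back(f(x));
          return out;
      }

      int main() {
          auto twice = [](int n) { return 2 * n; };
          auto o = map_opt(std::optional<int>{21}, twice);     // contains 42
          auto v = map_vec(std::vector<int>{1, 2, 3}, twice);  // {2, 4, 6}
          std::printf("%d %zu\n", o.value(), v.size());
          return 0;
      }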
  • Neuron C Reference Guide • Introduction to the LONWORKS Platform (078-0391-01A)
    Neuron C Reference Guide (078-0140-01G). Provides reference info for writing programs using the Neuron C programming language. Echelon, LONWORKS, LONMARK, NodeBuilder, LonTalk, Neuron, 3120, 3150, ShortStack, LonMaker, and the Echelon logo are trademarks of Echelon Corporation that may be registered in the United States and other countries. Other brand and product names are trademarks or registered trademarks of their respective holders. Neuron Chips and other OEM Products were not designed for use in equipment or systems which involve danger to human health or safety, or a risk of property damage, and Echelon assumes no responsibility or liability for use of the Neuron Chips in such applications. Parts manufactured by vendors other than Echelon and referenced in this document have been described for illustrative purposes only, and may not have been tested by Echelon. It is the responsibility of the customer to determine the suitability of these parts for each application. ECHELON MAKES AND YOU RECEIVE NO WARRANTIES OR CONDITIONS, EXPRESS, IMPLIED, STATUTORY OR IN ANY COMMUNICATION WITH YOU, AND ECHELON SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of Echelon Corporation. Printed in the United States of America. Copyright © 2006, 2014 Echelon Corporation. Echelon Corporation, www.echelon.com.
    Welcome. This manual describes the Neuron® C Version 2.3 programming language. It is a companion piece to the Neuron C Programmer's Guide.
  • Lecture 2: Variables and Primitive Data Types
    Lecture 2: Variables and Primitive Data Types. MIT-AITI Kenya 2005.
    In this lecture, you will learn: what a variable is (types of variables, naming of variables, variable assignment), what a primitive data type is, and other data types (e.g. String).
    What is a Variable? In basic algebra, variables are symbols that can represent values in formulas. For example, the variable x in the formula f(x) = x^2 + 2 can represent any number value. Similarly, variables in a computer program are symbols for arbitrary data.
    A Variable Analogy. Think of variables as an empty box that you can put values in. We can label the box with a name like "Box X" and re-use it many times. We can perform tasks on the box without caring about what's inside: "Move Box X to Shelf A", "Put item Z in box", "Open Box X", "Remove contents from Box X".
    Variable Types in Java. Variables in Java have a type. The type defines what kinds of values a variable is allowed to store. Think of a variable's type as the size or shape of the empty box. The variable x in f(x) = x^2 + 2 is implicitly a number. If x is a symbol representing the word "Fish", the formula doesn't make sense.
    Java Types. Integer types: int (most numbers you'll deal with), long (big integers; science, finance, computing), short (small integers).
  • Relatively Complete Refinement Type System for Verification of Higher-Order Non-Deterministic Programs
    Relatively Complete Refinement Type System for Verification of Higher-Order Non-deterministic Programs. HIROSHI UNNO, University of Tsukuba, Japan; YUKI SATAKE, University of Tsukuba, Japan; TACHIO TERAUCHI, Waseda University, Japan. This paper considers verification of non-deterministic higher-order functional programs. Our contribution is a novel type system in which the types are used to express and verify (conditional) safety, termination, non-safety, and non-termination properties in the presence of ∀-∃ branching behavior due to non-determinism. For instance, the ∀∀ judgement ⊢ e : u:int ϕ(u) says that every evaluation of e either diverges or reduces to some integer u satisfying ϕ(u), whereas the ∃∀ judgement ⊢ e : u:int ψ(u) says that there exists an evaluation of e that either diverges or reduces to some integer u satisfying ψ(u). Note that the former is a safety property whereas the latter is a counterexample to a (conditional) termination property. Following the recent work on type-based verification methods for deterministic higher-order functional programs, we formalize the idea on the foundation of dependent refinement types, thereby allowing the type system to express and verify rich properties involving program values, branching behaviors, and the combination thereof. Our type system is able to seamlessly combine deductions of both universal and existential facts within a unified framework, paving the way for an exciting opportunity for new type-based verification methods that combine both universal and existential reasoning. For example, our system can prove the existence of a path violating some safety property from a proof of termination that uses a well-foundedness termination argument.
  • Chapter 4 Variables and Data Types
    PROG0101 Fundamentals of Programming. Chapter 4: Variables and Data Types.
    Topics: variables, constants, data types, declaration.
    Variables. A symbol or name that stands for a value. A variable is a value that can change. Variables provide temporary storage for information that will be needed during the lifespan of the computer program (or application). Example: z = x + y. This is an example of a programming expression; x, y and z are variables. Variables can represent numeric values, characters, character strings, or memory addresses. Variables store everything in your program; the purpose of any useful program is to modify variables. In a program, every variable has: a name (identifier), a data type, a size, and a value.
    Types of Variable. There are two types of variables: local variables and global variables. Local variables are those that are in scope within a specific part of the program (function, procedure, method, or subroutine, depending on the programming language employed). Global variables are those that are in scope for the duration of the program's execution. They can be accessed by any part of the program, and are read-write for all statements that access them. (Diagram: a main program holding global variables and a subroutine holding a local variable.)
    Rules in Naming a Variable. There are certain rules for naming variables (identifiers).
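    A small C++ illustration of the local/global distinction described above (an illustrative snippet, not taken from the course slides):

      #include <cstdio>

      int counter = 0;              // global variable: visible to the whole program

      void bump() {
          int step = 1;             // local variable: exists only during this call
          counter += step;          // any function may read and write the global
      }

      int main() {
          bump();
          bump();
          std::printf("counter = %d\n", counter);   // prints: counter = 2
          return 0;
      }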