Bisimulation and Coinduction

Bisimulation and coinduction
Davide Sangiorgi
Focus Team, INRIA (France) / University of Bologna (Italy)
Email: [email protected]
http://www.cs.unibo.it/~sangio/
BiSS 2010, Bertinoro, March 2010

Objectives of the course
At the end of the course, a student should:
– have understood what bisimulation and coinduction are
– be able to do (simple) bisimulation proofs and write coinductive definitions
– have a grasp of the duality between induction and coinduction
– know what a behavioural equivalence is, in particular the difference between bisimilarity and other behavioural equalities
page 1

References
This course is based on the draft book:
– Davide Sangiorgi, An Introduction to Bisimulation and Coinduction, Draft, 2010
Please contact me if you'd like to read and comment on parts of it.
page 2

Exam
– To be discussed
– Based on chapters of the book
page 3

Outline
– The success of bisimulation and coinduction
– Towards bisimulation, or: from functions to processes
– Bisimulation
– Induction and coinduction
– Weak bisimulation
– Other equivalences: failures, testing, trace, ...
– Enhancements of the bisimulation proof method
– ...
page 4

The success of bisimulation and coinduction
page 5

Bisimulation in Computer Science
– One of the most important contributions of concurrency theory to CS
– It has spurred the study of coinduction
– In concurrency: probably the most studied equivalence
  ∗ ... in a plethora of equivalences (see van Glabbeek 93)
  ∗ Why?
page 6

Bisimulation in concurrency
– Clear meaning of equality
– Natural
– The finest extensional equality
  Extensional: "whenever it does an output at b it will also do an input at a"
  Non-extensional: "has 8 states", "has a Hamiltonian circuit"
– An associated powerful proof technique
– Robust
  Characterisations: logical, algebraic, set-theoretical, categorical, game-theoretical, ...
– Several separation results from other equivalences
page 7

Bisimulation in concurrency, today
– To define equality on processes (fundamental!)
– To prove equalities
  ∗ even if bisimilarity is not the chosen equivalence
    · trying bisimilarity first
    · coinductive characterisations of the chosen equivalence
– To justify algebraic laws
– To minimise the state space
– To abstract from certain details
page 8

Coinduction in programming languages
– Bisimilarity in functional languages and OO languages [Abramsky, Ong]
  A major factor in the movement towards operationally-based techniques in PL semantics in the 90s
– Program analysis (see Nielson, Nielson, Hankin's book)
  Noninterference (security) properties
– Verification tools: algorithms for computing gfp (for modal and temporal logics), tactics and heuristics
page 9

– Types [Tofte]
  ∗ type soundness
  ∗ coinductive types and definition by corecursion
    Infinite proofs in Coq [Coquand, Gimenez]
  ∗ recursive types (equality, subtyping, ...)
    A coinductive rule:

        Γ, ⟨p1, q1⟩ ∼ ⟨p2, q2⟩ ⊢ pi ∼ qi
        ─────────────────────────────────
        Γ ⊢ ⟨p1, q1⟩ ∼ ⟨p2, q2⟩

– Recursively defined data types and domains [Fiore, Pitts]
– Databases [Buneman]
– Compiler correctness [Jones]
page 10
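Where the slide above mentions coinductive types and definition by corecursion, a small illustration may help. The following is a minimal sketch in Haskell, using laziness as a stand-in for the coinductive types of a proof assistant such as Coq; the names Stream, from and takeS are illustrative and not taken from the course.

```haskell
-- Minimal sketch: an infinite (coinductive) data type and a definition
-- by corecursion.  Haskell's laziness plays the role that coinductive
-- types play in a proof assistant such as Coq.
data Stream a = Cons a (Stream a)

-- Corecursive definition: the stream n, n+1, n+2, ...
-- It never terminates, yet it is a perfectly meaningful object.
from :: Integer -> Stream Integer
from n = Cons n (from (n + 1))

-- We can only ever observe finite prefixes of such an object.
takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS k (Cons x xs) = x : takeS (k - 1) xs

main :: IO ()
main = print (takeS 5 (from 0))   -- prints [0,1,2,3,4]
```

The point of the sketch is the shape of the definition: from is not defined by recursion on its input but by describing the observations (head and tail) that the infinite result supports.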
Towards bisimulation, or: from functions to processes
page 11

The semantics of processes:
– usually operational: labelled transition systems, behavioural equivalences
– an alternative approach could be the denotational one: a structure-preserving function would map processes into elements of a given semantic domain.
Problem: it has often proved very hard to find appropriate semantic domains for these languages.
page 12

Processes?
We can think of sequential computations as mathematical objects, namely functions. Concurrent programs are not functions, but processes. But what is a process? There is no universally accepted mathematical answer. Hence we do not find in mathematics tools/concepts for the denotational semantics of concurrent languages, at least not as successful as those for the sequential ones.
page 13

Processes are not functions
A sequential imperative language can be viewed as a function from states to states. These two programs denote the same function from states to states:
    X := 2     and     X := 1; X := X + 1
But now take a context with parallelism, such as [·] | X := 2. The program
    X := 2 | X := 2
always terminates with X = 2. This is not true (why?) for
    (X := 1; X := X + 1) | X := 2
Therefore: viewing processes as functions gives us a notion of equivalence that is not a congruence. In other words, such a semantics of processes as functions would not be compositional.
page 14

Furthermore:
A concurrent program may not terminate, and yet perform meaningful computations (examples: an operating system, the controllers of a nuclear station or of a railway system). In sequential languages, programs that do not terminate are undesirable; they are 'wrong'.
The behaviour of a concurrent program can be non-deterministic. Example:
    (X := 1; X := X + 1) | X := 2
In a functional approach, non-determinism can be dealt with using powersets and powerdomains. This works for pure non-determinism, as in
    λx. (3 ⊕ 5)
but not for parallelism.
page 15

What is a process? When are two processes behaviourally equivalent?
These are basic, fundamental questions; they have been at the core of research in concurrency theory for the past 30 years. (They are still so today, although remarkable progress has been made.)
They are fundamental for a model or a language on top of which we want to make proofs ...
We shall approach these questions from a simple case, in which interactions among processes are just synchronisations, without exchange of values.
page 16

Interaction
In the example at page 14,
    X := 2     and     X := 1; X := X + 1
should be distinguished because they interact in a different way with the memory.
Computation is interaction. Examples: access to a memory cell, interrogating a database, selecting a programme in a washing machine, ...
The participants of an interaction are processes (a cell, a database, a washing machine, ...).
The behaviour of a process should tell us when and how the process can interact with its environment.
page 17

How to represent interaction: labelled transition systems
Definition 1 A labelled transition system (LTS) is a triple (P, Act, T) where
– P is the set of states, or processes;
– Act is the set of actions (NB: can be infinite);
– T ⊆ P × Act × P is the transition relation.
We write P −µ→ P′ if (P, µ, P′) ∈ T. Meaning: process P accepts an interaction with the environment in which P performs action µ and then becomes process P′.
P′ is a derivative of P if there are P1, ..., Pn and µ1, ..., µn s.t. P −µ1→ P1 ... −µn→ Pn and Pn = P′.
page 18

Example
A vending machine, capable of dispensing tea or coffee for 1 coin (1c). The behaviour of the machine is what we can observe by interacting with it. We can represent such a behaviour as an LTS:
[LTS diagram omitted: from the initial state s1, a 1c transition leads to s2; from s2, tea and coffee transitions lead to two further states, from which collect-tea and collect-coffee respectively return to s1]
page 19
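As a concrete reading of Definition 1, here is a minimal sketch (in Haskell, continuing the language of the earlier sketch) of an LTS as a list of transition triples, with the vending machine above as the example. The state names s1–s4 and all function names are assumptions for illustration, since the original diagram is only partially recoverable.

```haskell
-- Minimal sketch of Definition 1: an LTS with the transition relation
-- T ⊆ P × Act × P represented as a list of (state, action, state) triples.
type State  = String
type Action = String
type LTS    = [(State, Action, State)]

-- A reconstruction of the vending machine of page 19 (state names assumed).
vendingMachine :: LTS
vendingMachine =
  [ ("s1", "1c",             "s2")
  , ("s2", "tea",            "s3")
  , ("s3", "collect-tea",    "s1")
  , ("s2", "coffee",         "s4")
  , ("s4", "collect-coffee", "s1")
  ]

-- The transitions P −µ→ P′ available from a state P.
step :: LTS -> State -> [(Action, State)]
step lts p = [ (mu, p') | (q, mu, p') <- lts, q == p ]

-- Derivatives of P: every state reachable from P by a (possibly empty)
-- sequence of transitions.
derivatives :: LTS -> State -> [State]
derivatives lts p = go [] [p]
  where
    go seen []        = seen
    go seen (q:rest)
      | q `elem` seen = go seen rest
      | otherwise     = go (seen ++ [q]) (rest ++ map snd (step lts q))

main :: IO ()
main = print (derivatives vendingMachine "s1")   -- ["s1","s2","s3","s4"]
```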
Other examples of LTS (we omit the names of the states)
[two small LTS diagrams omitted, with a-, b- and c-labelled transitions]
page 20

Equivalence of processes
An LTS tells us what the behaviour of processes is. When should two behaviours be considered equal? I.e., what does it mean that two processes are equivalent?
Two processes should be equivalent if we cannot distinguish them by interacting with them.
Example:
[diagram omitted: two LTSs over actions a, b, c that are not isomorphic as graphs but cannot be told apart by interacting with them]
This shows that graph isomorphism as behavioural equivalence is too strong. A natural alternative (from automata theory): trace equivalence.
page 21

Examples of trace-equivalent processes:
[diagrams omitted: two pairs of trace-equivalent LTSs]
These equalities are OK on automata. But they are not on processes: in one case interacting with the machine can lead to deadlock!
page 22

For instance, you would not consider these two vending machines 'the same':
[diagram omitted: two vending machines built from 1c, tea, coffee, collect-tea and collect-coffee transitions that have the same traces but branch differently]
Trace equivalence (also called language equivalence) is still important in concurrency. Examples: confluent processes; liveness properties such as termination.
page 23

These examples suggest that the notion of equivalence we seek:
– should imply a tighter correspondence between transitions than language equivalence,
– should be based on the information that the transitions convey, and not on the shape of the diagrams.
Intuitively, what does it mean for an observer that two machines are equivalent? If you do something with one machine, you must be able to do the same with the other, and the same must again hold of the two states to which the machines evolve. This is the idea of equivalence that we are going to formalise; it is called bisimilarity.
page 24

Bisimulation and bisimilarity
We define bisimulation on a single LTS, because: the union of two LTSs is an LTS; and we will often want to compare derivatives of the same process.
Definition 2 (bisimulation) A relation R on the states of an LTS is a bisimulation if, whenever P R Q:
1. for all µ and P′ s.t. P −µ→ P′, there exists Q′ such that Q −µ→ Q′ and P′ R Q′;
2. for all µ and Q′ s.t. Q −µ→ Q′, there exists P′ such that P −µ→ P′ and P′ R Q′.
P and Q are bisimilar, written P ∼ Q, if P R Q for some bisimulation R.
The bisimulation diagram:
    P    R    Q
    µ↓        ↓µ
    P′   R    Q′
page 25

Exercises
To prove P ∼ Q you have to find a bisimulation R with P R Q (the bisimulation proof method).
Exercise 1 Prove that the processes at page 21 are bisimilar. Are the processes at page 22 bisimilar?
Proposition 1
1. ∼ is an equivalence relation, i.e. the following hold:
   1.1. P ∼ P (reflexivity)
   1.2. P ∼ Q implies Q ∼ P (symmetry)
   1.3. P ∼ Q and Q ∼ R imply P ∼ R (transitivity);
2.
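On a finite LTS the bisimulation proof method can be turned directly into an executable check: exhibit a candidate relation R and verify the two clauses of Definition 2 for every pair in it. The sketch below (in Haskell, repeating the LTS representation from the earlier sketch for self-containment) does exactly that; the example processes p and q and all names are assumptions for illustration, not from the slides.

```haskell
-- Minimal sketch of the bisimulation proof method: check that a
-- candidate relation R is a bisimulation in the sense of Definition 2.
type State  = String
type Action = String
type LTS    = [(State, Action, State)]

step :: LTS -> State -> [(Action, State)]
step lts p = [ (mu, p') | (q, mu, p') <- lts, q == p ]

-- Clause 1: every move of p is matched by a move of q with the same
-- label, and the residuals are again related by r.
clause1 :: LTS -> [(State, State)] -> State -> State -> Bool
clause1 lts r p q =
  and [ or [ nu == mu && (p', q') `elem` r | (nu, q') <- step lts q ]
      | (mu, p') <- step lts p ]

-- Clause 2: symmetrically, every move of q is matched by a move of p.
clause2 :: LTS -> [(State, State)] -> State -> State -> Bool
clause2 lts r p q =
  and [ or [ nu == mu && (p', q') `elem` r | (nu, p') <- step lts p ]
      | (mu, q') <- step lts q ]

isBisimulation :: LTS -> [(State, State)] -> Bool
isBisimulation lts r =
  and [ clause1 lts r p q && clause2 lts r p q | (p, q) <- r ]

-- Two processes that loop on action a, one through an intermediate state.
example :: LTS
example = [ ("p", "a", "p"), ("q", "a", "q'"), ("q'", "a", "q") ]

main :: IO ()
main = print (isBisimulation example [("p", "q"), ("p", "q'")])  -- True
```

Since the check succeeds, the relation {(p, q), (p, q')} is a bisimulation containing (p, q), and hence p ∼ q by Definition 2.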