Computability Theory


CSC 438F/2404F Notes (S. Cook), Fall 2008

This section is partly inspired by the material in "A Course in Mathematical Logic" by Bell and Machover, Chap 6, sections 1-10. Other references: "Introduction to the Theory of Computation" by Michael Sipser, and "Computability, Complexity, and Languages" by M. Davis and E. Weyuker.

Our first goal is to give a formal definition for what it means for a function on N to be computable by an algorithm. Historically the first convincing such definition was given by Alan Turing in 1936, in the paper which introduced what we now call Turing machines. Slightly before Turing, Alonzo Church gave a definition based on his lambda calculus. About the same time Gödel, Herbrand, and Kleene developed definitions based on recursion schemes. Fortunately all of these definitions are equivalent, and each of the many other definitions proposed later is also equivalent to Turing's definition. This has led to the general belief that these definitions have got it right, and this assertion is roughly what we now call "Church's Thesis".

Our first definition will be based on a simple computer model called Register Machines, something proposed by Shepherdson and Sturgis in the 1960's. Then we will give a recursion-theoretic definition due to Kleene, and prove our two definitions are equivalent.

A natural definition of computable function f on N allows for the possibility that f(x) may not be defined for all x ∈ N, because algorithms do not always halt. Thus we will use the symbol ∞ to mean "undefined".

Definition: A partial function is a function

    f : (N ∪ {∞})^n → N ∪ {∞},   n ≥ 0

such that f(c1, ..., cn) = ∞ if some ci = ∞.

In the context of computability theory, whenever we refer to a function on N, we mean a partial function in the above sense.

Definitions: Domain(f) = {x⃗ ∈ N^n | f(x⃗) ≠ ∞}, where x⃗ = (x1, ..., xn). We say f is total iff Domain(f) = N^n (i.e. f is always defined when all its arguments are defined).

Definition: A Register Machine (abbreviated RM) is a computer model specified by a program P = ⟨c0, ..., c_{h-1}⟩, consisting of a finite sequence of commands (described below). Intuitively, the commands operate on registers R1, R2, ..., each capable of storing an arbitrary natural number.

The possible commands are:

    Command                  Abbreviation
    Ri ← 0                   Z_i          i = 1, 2, ...
    Ri ← Ri + 1              S_i          i = 1, 2, ...
    goto k if Ri = Rj        J_{i,j,k}    i, j = 1, 2, ... and k = 0, 1, 2, ..., h

Example of a program Copy: Rj ← Ri. (Here i and j should be specific numbers from {1, 2, 3, ...}.)

    c0:  Rj ← 0                  Z_j
    c1:  goto 4 if Ri = Rj       J_{i,j,4}
    c2:  Rj ← Rj + 1             S_j
    c3:  goto 1 if R1 = R1       J_{1,1,1}
    c4:

Formally, we write the above program as ⟨Z_j, J_{i,j,4}, S_j, J_{1,1,1}⟩.

Semantics of RM's

A state is an (m+1)-tuple ⟨K, R1, ..., Rm⟩ of natural numbers, where intuitively K is the instruction counter (i.e. the number of the next command to be executed) and R1, ..., Rm are the current values of the registers. (Here m must be at least as large as any register index referred to in the associated program.) Given a state s = ⟨K, R1, ..., Rm⟩ and a program P = ⟨c0, ..., c_{h-1}⟩, the next state s' = Next_P(s) is intuitively the state resulting when command c_K is applied to the register values given by s. We say that s is a halting state if K = h, and in this case s' = s.
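These semantics can be made concrete in a few lines of code. Below is a minimal sketch (in Python; not part of the notes) of Next_P and its iteration. The tuple encoding of commands and the names next_state and run are illustrative choices, not fixed by the definitions above.

```python
# A small register-machine interpreter, following the semantics above.
# Commands: ('Z', i) for Ri <- 0, ('S', i) for Ri <- Ri + 1,
#           ('J', i, j, k) for "goto k if Ri = Rj".  Registers are 1-indexed.

def next_state(program, state):
    """Next_P(s): apply command c_K to the state s = (K, R1, ..., Rm)."""
    K, regs = state[0], list(state[1:])
    if K == len(program):              # halting state: K = h, so s' = s
        return state
    cmd = program[K]
    if cmd[0] == 'Z':                  # Ri <- 0
        regs[cmd[1] - 1] = 0
        K += 1
    elif cmd[0] == 'S':                # Ri <- Ri + 1
        regs[cmd[1] - 1] += 1
        K += 1
    else:                              # ('J', i, j, k): goto k if Ri = Rj
        _, i, j, k = cmd
        K = k if regs[i - 1] == regs[j - 1] else K + 1
    return (K,) + tuple(regs)

def run(program, state, max_steps=10_000):
    """Iterate Next_P from the given state until a halting state is reached."""
    for _ in range(max_steps):
        if state[0] == len(program):
            return state
        state = next_state(program, state)
    return None                        # gave up: P may not halt from this state

# The Copy program <Z_j, J_{i,j,4}, S_j, J_{1,1,1}> with i = 1 and j = 2:
copy = [('Z', 2), ('J', 1, 2, 4), ('S', 2), ('J', 1, 1, 1)]
print(run(copy, (0, 5, 0)))            # -> (4, 5, 5): R2 has been set to R1
```

Running the Copy program with i = 1, j = 2 from the initial state ⟨0, 5, 0⟩ halts in the state ⟨4, 5, 5⟩, as expected.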
Example: Suppose the state s = ⟨K, R1, ..., Rm⟩ and the command c_K is S_j, where 1 ≤ j ≤ m. Then

    Next_P(s) = ⟨K + 1, R1, ..., R_{j-1}, R_j + 1, R_{j+1}, ..., Rm⟩

Exercise 1: Give a formal definition of the function Next_P for the cases in which c_K is Z_i and c_K is J_{i,j,k}.

A computation of a program P is a finite or infinite sequence s0, s1, ... of states such that s_{i+1} = Next_P(s_i) for each s_{i+1} in the sequence. If the sequence is finite, then the last state must be a halting state, and in this case we say that the computation is halting. We say that a program P halts starting in state s0 if there is a halting computation of P starting in state s0.

Input/Output conventions: A program P computes a (partial) function f(a1, ..., an) as follows. Initially place a1, ..., an in R1, ..., Rn and set all other registers to 0. Start execution with command c0. That is, the initial state is

    s0 = ⟨0, a1, ..., an, 0, ..., 0⟩

If P halts starting in state s0, the final value of R1 must be f(a1, ..., an) (which then must be defined). If P fails to halt, then f(a1, ..., an) = ∞. Thus with each program P and each n ≥ 0 we associate an n-ary function f_{P,n}: namely the n-ary function computed by P.

Definition: If f is an n-ary function, we say that f is RM-computable (or just computable) if f is computed by some RM program.

Our form of Church's Thesis: Every algorithmically computable function is RM-computable.

Here the notion "algorithmically computable" is not a precise mathematical notion, but rather an intuitive one. It is understood that the algorithms in question have unlimited memory. In the case of register machines, this means that each register can hold an arbitrarily large natural number. Church's Thesis will be discussed further at the end of this section, after we have given many examples of computable functions.

Exercise 2: Show that P = ⟨J_{2,3,4}, S_1, S_3, J_{1,1,0}⟩ computes f(x, y) = x + y.

Exercise 3: Write register machine programs to compute each of the following functions:

    f1(x) = x ∸ 1 (i.e. x − 1 if x ≥ 1, and 0 if x = 0)        f2(x, y) = x · y

Be sure to respect our input/output conventions for RM's.

Primitive Recursive Functions

Primitive recursion is a simple form of recursion defined as follows:

Definition: f is defined from g and h by primitive recursion iff

    f(x⃗, 0) = g(x⃗)
    f(x⃗, y + 1) = h(x⃗, y, f(x⃗, y))

We allow n = 0, so x⃗ could be missing.

As an example, f+(x, y) = x + y can be defined by primitive recursion as follows:

    x + 0 = x
    x + (y + 1) = (x + y) + 1

In this case, g(x) = x, and h(x, y, z) = z + 1.

If f is defined from g and h by primitive recursion, we can compute f from g and h by the following high-level program:

    u ← g(x⃗)
    for z : 0..y − 1
        u ← h(x⃗, z, u)
    end for

The final value of u is f(x⃗, y).

Definition: f is defined from g and h1, ..., hm by composition iff

    f(x⃗) = g(h1(x⃗), ..., hm(x⃗))

where f, h1, ..., hm are each n-ary functions and g is m-ary.

Initial Functions:

    Z           0-ary function equal to 0
    S           S(x) = x + 1
    I_{n,i}     I_{n,i}(x1, ..., xn) = xi,  1 ≤ i ≤ n    (an infinite class of projection functions)

Definition: f is primitive recursive iff f can be obtained from the initial functions by finitely many applications of primitive recursion and composition.

Examples: f+(x, y) (see above) is primitive recursive, since it is defined by primitive recursion from g and h, where g = I_{1,1} and h can be defined by composition as follows:

    h(x, y, z) = S(I_{3,3}(x, y, z))

As another example, let Z1 be the unary zero function, so Z1(x) = 0 for all x. Then we can define Z1 by primitive recursion: Z1(0) = Z = g, and Z1(y + 1) = Z1(y) = h(y, Z1(y)); here h = I_{2,2}.

All constant functions K_{n,i}(x1, ..., xn) = i are primitive recursive.
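To see the schema in action, here is a short Python sketch (an illustration only, not part of the notes) of the primitive-recursion operator, written to follow the high-level program above, with f+ obtained from g = I_{1,1} and h(x, y, z) = S(I_{3,3}(x, y, z)) exactly as in the example. The names prim_rec, S, I, and plus are my own.

```python
# Primitive recursion as an operator on (total) Python functions, following
# the high-level program:  u <- g(x); for z in 0, ..., y-1: u <- h(x, z, u).

def prim_rec(g, h):
    """Return f with f(x, 0) = g(x) and f(x, y + 1) = h(x, y, f(x, y))."""
    def f(*args):
        *x, y = args                   # x may be empty (the n = 0 case)
        u = g(*x)
        for z in range(y):
            u = h(*x, z, u)
        return u
    return f

# Initial functions needed for the example.
S = lambda x: x + 1                          # successor
I = lambda n, i: (lambda *x: x[i - 1])       # projection I_{n,i}

# f+(x, y) = x + y, with g = I_{1,1} and h(x, y, z) = S(I_{3,3}(x, y, z)).
plus = prim_rec(I(1, 1), lambda x, y, z: S(I(3, 3)(x, y, z)))
print(plus(3, 4))                            # -> 7
```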
For example, K_{2,3}(x, y) = 3, and we show it is primitive recursive by repeated composition as follows:

    K_{2,3}(x, y) = S(S(S(Z1(I_{2,1}(x, y)))))

Exercise 4: Show that x · y and x^y are each primitive recursive functions of x and y. Show that SQ(x) = x² is primitive recursive.

Proposition: Every primitive recursive function is total. (f is total if f(x⃗) ≠ ∞ for all x⃗ ∈ N^n.)

Proof: The initial functions are all total, and the two operations composition and primitive recursion preserve totality. Hence all primitive recursive functions are total. (Formally the proof is by induction on the definition of primitive recursive function; see the following proof.)

Theorem: Every primitive recursive function is computable (on an RM).

Proof: We show that each primitive recursive function f is computable by a program which upon halting leaves all registers 0 except R1 (which contains the output). We do this by induction on the definition of primitive recursive function. That is, the proof is by induction on the number of applications of composition and primitive recursion needed to derive f from the initial functions.

The base case: Each initial function is easily shown to be computable by such a program.

Exercise 5: For each initial function, give an RM program which computes it and leaves all registers 0 except R1.

Induction Step: We show that composition and primitive recursion each preserve computability by such programs.

a) Composition: Assume that g, h1, ..., hm are computable by programs Pg, P1, ..., Pm, respectively, where these programs leave all registers 0 except R1. We are to show that f is computable by such a program Pf, where

    f(x⃗) = g(h1(x⃗), ..., hm(x⃗))

At the start x1, ..., xn are in registers R1, ..., Rn, with all other registers 0.
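The proof builds RM programs, but purely at the level of functions the composition operator is easy to state. A short Python sketch (illustration only; the helper names compose, Z1, and K23 are mine) of composition over total functions, reproducing the K_{2,3} example above from the initial functions:

```python
# Composition f(x) = g(h1(x), ..., hm(x)) as an operator on Python functions.

def compose(g, *hs):
    """Return f with f(x1, ..., xn) = g(h1(x1, ..., xn), ..., hm(x1, ..., xn))."""
    return lambda *x: g(*(h(*x) for h in hs))

# Initial functions (the unary zero Z1 is itself primitive recursive, as shown above).
S = lambda x: x + 1                          # successor
I = lambda n, i: (lambda *x: x[i - 1])       # projection I_{n,i}
Z1 = lambda x: 0                             # unary zero function

# K_{2,3}(x, y) = S(S(S(Z1(I_{2,1}(x, y))))) = 3, by repeated composition.
K23 = compose(S, compose(S, compose(S, compose(Z1, I(2, 1)))))
print(K23(10, 99))                           # -> 3
```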