Predicate Logic


ECE 627, Winter '13 — Lecture 14: Predicate Logic

A predicate (relation) is a function that maps its arguments to the truth values 0 or 1. A well-known example is less than (symbol <): for the arguments 5 and 8, 5<8 is true and 8<5 is false. Predicates can be written infix, like <, or with letters, for example R(x,y) or mother(x,y).

Predicate logic is not a replacement for propositional logic but an extension, or refinement, of it. In propositional logic, the proposition "Every peach is fuzzy" is represented by a single symbol p. In predicate logic, the statement is shown in finer detail:

- with the universal quantifier: (∀x)(peach(x) ⊃ fuzzy(x))
- with the existential quantifier: ~(∃x)(peach(x) ∧ ~fuzzy(x))
- with both quantifiers: (∀x)(∃y)(integer(x) ⊃ (prime(y) ∧ x<y))

The order of quantifiers matters. The formula

(∀x)(∃y)(man(x) ∧ dept(x,account) ⊃ (woman(y) ∧ hometown(y,Boston) ∧ married(x,y)))

says that every man in the accounting department is married to some woman from Boston (possibly a different woman for each man), while

(∃y)(∀x)(man(x) ∧ dept(x,account) ⊃ (woman(y) ∧ hometown(y,Boston) ∧ married(x,y)))

says that one single woman from Boston is married to every man in the accounting department.

Formation rules

The vocabulary contains symbols for constants and variables, parentheses, Boolean operators, and symbols for quantifiers, functions, and predicates. They are combined according to three rules:

1. A term is either a constant (2 or a, b, c, …), a variable (x, y or x0, x1, x2, …), or a function or operator symbol applied to its arguments, each of which is itself a term. For example: f(x), 2+2.

2. An atom is either a single letter (p) representing a proposition, or a predicate symbol (P, Q, R, …) applied to its arguments, each of which is itself a term. For example: P(f(x), 2+2), Q(7).

3. A formula is either an atom, a formula preceded by ~, any two formulas A and B together with any two-place Boolean operator op in the combination (A op B), or any formula A and any variable x in either of the combinations (∃x)A or (∀x)A.

Formulas - examples

(P(f(x),2+2) ⊃ Q(7))
~(P(f(x),2+2) ⊃ Q(7))
(∀y)~(P(f(x),2+2) ⊃ Q(7))
(∃x)(∀y)~(P(f(x),2+2) ⊃ Q(7))

In the last formula, the occurrence of x in f(x) is bound by the quantifier (∃x); the quantifier (∀y) has no effect on the formula, since y does not occur as an argument of any function or predicate.

John is tall: T(j)
John is taller than Bill: TR(j,b)
Everybody sleeps: (∀x)S(x)
Somebody likes David: (∃x)L(x,d)
There are happy people: (∃x)H(x)
Some books are interesting: (∃x)(B(x) ∧ I(x))
Some books are interesting and some are easy to read: (∃x)(B(x) ∧ I(x)) ∧ (∃x)(B(x) ∧ E(x))
No books are good: (∀x)(B(x) ⊃ ~G(x))

Rules of inference

The purpose of a rule of inference is to preserve truth: if we start with formulas that are true, the result of performing a rule of inference on them must also be true.

Consider the issue of equivalence between

(∀x)(peach(x) ⊃ fuzzy(x))
~(∃x)(peach(x) ∧ ~fuzzy(x))

If these formulas were represented by propositional symbols p and q, there would be no way to prove p ≡ q, but the rules of predicate logic can show the equivalence, using the rules for relating the quantifiers:

(∃x)A is equivalent to ~(∀x)~A
(∀x)A is equivalent to ~(∃x)~A
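The equivalence of the two renderings of "Every peach is fuzzy" can be tested mechanically over a finite domain, where ∀ and ∃ reduce to `all` and `any`. Below is a minimal Python sketch; the domain and the `peach` and `fuzzy` predicates are illustrative assumptions, not from the lecture:

```python
# A minimal sketch: checking the two quantified forms over a finite domain.
# The domain and the peach/fuzzy predicates are illustrative assumptions.
domain = ["peach1", "peach2", "apple1"]
peach = lambda x: x.startswith("peach")
fuzzy = lambda x: x in {"peach1", "peach2"}

def implies(p, q):
    """Material implication p ⊃ q."""
    return (not p) or q

# (∀x)(peach(x) ⊃ fuzzy(x))
universal_form = all(implies(peach(x), fuzzy(x)) for x in domain)

# ~(∃x)(peach(x) ∧ ~fuzzy(x))
existential_form = not any(peach(x) and not fuzzy(x) for x in domain)

print(universal_form, existential_form)  # True True — the two forms agree
```

Trying other assignments of `fuzzy` flips both results together, as the equivalence predicts.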
Applying the first of these rules to (∀x)(peach(x) ⊃ fuzzy(x)) gives

~(∃x)~(peach(x) ⊃ fuzzy(x))

and, knowing that p ⊃ q is equivalent to ~(p ∧ ~q),

~(∃x)~~(peach(x) ∧ ~fuzzy(x))

Double negation then yields

~(∃x)(peach(x) ∧ ~fuzzy(x))

This shows that the first formula implies the second; if we use the same rules in reverse to show that the second implies the first, then both formulas are equivalent.

Rules of inference:

Modus ponens: from p and p ⊃ q, derive q
Modus tollens: from ~q and p ⊃ q, derive ~p
Hypothetical syllogism: from p ⊃ q and q ⊃ r, derive p ⊃ r
Disjunctive syllogism: from p ∨ q and ~p, derive q
Conjunction: from p and q, derive p ∧ q
Addition: from p, derive p ∨ q (any formula may be added to a disjunction)
Subtraction: from p ∧ q, derive p (extra conjuncts may be thrown away)

Equivalences:

Idempotency: p ∧ p is equivalent to p, and p ∨ p is equivalent to p
Commutativity: p ∧ q is equivalent to q ∧ p, and p ∨ q is equivalent to q ∨ p
Associativity: p ∧ (q ∧ r) is equivalent to (p ∧ q) ∧ r, and p ∨ (q ∨ r) is equivalent to (p ∨ q) ∨ r
Distributivity: p ∧ (q ∨ r) is equivalent to (p ∧ q) ∨ (p ∧ r), and p ∨ (q ∧ r) is equivalent to (p ∨ q) ∧ (p ∨ r)
Absorption: p ∧ (p ∨ q) is equivalent to p, and p ∨ (p ∧ q) is equivalent to p
Double negation: p is equivalent to ~~p
De Morgan's laws: ~(p ∧ q) is equivalent to ~p ∨ ~q, and ~(p ∨ q) is equivalent to ~p ∧ ~q
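Because these rules and equivalences are propositional, each can be verified by an exhaustive truth table. The following Python sketch brute-forces a few of them with `itertools.product`:

```python
from itertools import product

# A sketch: brute-force truth-table checks of a few propositional laws.
def equivalent(f, g, arity=2):
    """True iff f and g agree under every assignment of truth values."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=arity))

# De Morgan's laws
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q))
assert equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q))

# Distributivity of ∧ over ∨
assert equivalent(lambda p, q, r: p and (q or r),
                  lambda p, q, r: (p and q) or (p and r), arity=3)

# Modus ponens preserves truth: whenever p and p ⊃ q hold, q holds
assert all(q for p, q in product([False, True], repeat=2) if p and ((not p) or q))

# Disjunctive syllogism: whenever p ∨ q and ~p hold, q holds
assert all(q for p, q in product([False, True], repeat=2) if (p or q) and not p)

print("all truth-table checks passed")
```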
Rules for quantifiers

If A is an atom, then all occurrences of a variable in A are said to be free.

If a formula C was derived from a formula A by preceding A with either (∀x) or (∃x), then all free occurrences of x in A are said to be bound in C; all free occurrences of other variables in A remain free in C.

If a formula C was derived from formulas A and B by combining them with Boolean operators, then all occurrences of variables that are free in A and B are also free in C.

The rules for dealing with variables depend on which occurrences are free and which are bound, and on which variables must be renamed to avoid name clashes with other variables.

Let Φ(x) be a formula with one or more free occurrences of a variable x; then Φ(t) is the result of substituting every free occurrence of x in Φ with t.

Rules of quantifier negation:

(∃x)A ⇔ ~(∀x)~A
(∀x)A ⇔ ~(∃x)~A
~(∃x)A ⇔ (∀x)~A
~(∀x)A ⇔ (∃x)~A

Rules of quantifier (in)dependence:

(∀x)(∀y)A(x,y) ⇔ (∀y)(∀x)A(x,y)
(∃x)(∃y)A(x,y) ⇔ (∃y)(∃x)A(x,y)
(∃x)(∀y)A(x,y) ⇒ (∀y)(∃x)A(x,y)

Rules of quantifier movement (where x does not occur free in A):

A → (∀x)(B(x)) ⇔ (∀x)(A → B(x))
A → (∃x)(B(x)) ⇔ (∃x)(A → B(x))
(∀x)(B(x)) → A ⇔ (∃x)(B(x) → A)
(∃x)(B(x)) → A ⇔ (∀x)(B(x) → A)

Example:

(∃x)(P(x)) → (∀y)(Q(y)) ⇔ (∀y)[(∃x)(P(x)) → Q(y)] ⇔ (∀y)(∀x)[P(x) → Q(y)]
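The one-directional rule in the (in)dependence list can be illustrated over a small finite domain, where the quantifiers reduce to `all` and `any`. The two-element domain and the relation A below are illustrative assumptions:

```python
from itertools import product

# A sketch of the one-way rule (∃x)(∀y)A(x,y) ⇒ (∀y)(∃x)A(x,y)
# over the illustrative two-element domain {0, 1}.
domain = [0, 1]

def exists_forall(A):
    return any(all(A(x, y) for y in domain) for x in domain)

def forall_exists(A):
    return all(any(A(x, y) for x in domain) for y in domain)

# Counterexample to the converse: A(x,y) = "x equals y".  Every y has
# some x with A(x,y), but no single x works for every y.
A = lambda x, y: x == y
print(forall_exists(A))  # True
print(exists_forall(A))  # False — the converse direction fails

# The valid direction holds for all 16 relations on {0,1} × {0,1}
for bits in product([False, True], repeat=4):
    table = dict(zip(product(domain, repeat=2), bits))
    R = lambda x, y: table[(x, y)]
    assert not exists_forall(R) or forall_exists(R)
```

The same relation explains the Boston example above: everyone may be married to someone without there being one person married to everyone.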
Rules for quantifiers: permissible substitutions

Universal instantiation: from (∀x)Φ(x), derive Φ(c), where c is any constant.

Existential generalization: from Φ(c), where c is any constant, derive (∃x)Φ(x), provided that every occurrence of x in Φ(x) is free.

Dropping quantifiers: if the variable x does not occur free in Φ, then from (∀x)Φ derive Φ, and from (∃x)Φ derive Φ.

Adding quantifiers: from Φ, derive (∀x)Φ or derive (∃x)Φ, where x is any variable.

Substituting equals for equals: for any terms s and t where s=t, derive Φ(t) from Φ(s), provided that all free occurrences of variables in t remain free in Φ(t).

Typed predicate logic

This form is a purely syntactic extension of untyped logic: its semantics are identical to those of untyped logic, as is every theorem and proof. The only difference is the addition of a type label after the quantifier, as in x:N (the label is a monadic predicate, n(x)):

Universal: (∀x:N)Φ(x) ≡ (∀x)(n(x) ⊃ Φ(x))
Existential: (∃x:N)Φ(x) ≡ (∃x)(n(x) ∧ Φ(x))

For knowledge representation, typed logic has the advantage of being more concise and readable, and it can support rules of inference based on inheritance (these do not make the logic more expressive, but they shorten some proofs).

With a string of multiple quantifiers of the same kind and with the same type label, it is permissible to factor out the common quantifier and type label. The untyped formula

(∀x)(number(x) ⊃ (∀y)(number(y) ⊃ (∀z)(number(z) ⊃ ((x < y ∧ y < z) ⊃ x < z))))

becomes

(∀x,y,z:Number)((x < y ∧ y < z) ⊃ x < z)

Untyped formula as a special case of a typed one
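Over a finite domain, the typed quantifiers and their untyped expansions can be compared directly. In this Python sketch the domain, the type predicate `number` (playing the role of n), and the formula `phi` are illustrative assumptions:

```python
# A sketch of the typed/untyped correspondence over a finite domain.
# The domain, the type predicate `number`, and the formula `phi` are
# illustrative assumptions.
domain = [1, 2, 3, "a", "b"]
number = lambda x: isinstance(x, int)
phi = lambda x: x > 0 if isinstance(x, int) else False

# (∀x:N)Φ(x): quantify over the type directly
typed_universal = all(phi(x) for x in domain if number(x))
# (∀x)(n(x) ⊃ Φ(x)): the untyped expansion
untyped_universal = all((not number(x)) or phi(x) for x in domain)

# (∃x:N)Φ(x) versus (∃x)(n(x) ∧ Φ(x))
typed_existential = any(phi(x) for x in domain if number(x))
untyped_existential = any(number(x) and phi(x) for x in domain)

assert typed_universal == untyped_universal
assert typed_existential == untyped_existential
print(typed_universal, typed_existential)  # True True
```

Note the asymmetry the definitions require: the universal expansion uses ⊃, the existential uses ∧.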
Taking the type label T to be the universal type, true of every individual:

Universal: (∀x:T)Φ(x) ≡ (∀x)(T(x) ⊃ Φ(x)) ≡ (∀x)Φ(x)
Existential: (∃x:T)Φ(x) ≡ (∃x)(T(x) ∧ Φ(x)) ≡ (∃x)Φ(x)
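This collapse can be checked concretely: if the label T holds of every individual, typed and untyped quantification agree on any finite domain. The domain and formula below are assumptions for illustration:

```python
# A sketch: when the type label T holds of every individual, the typed
# quantifiers collapse to the untyped ones.  Domain and Φ are illustrative.
domain = [0, 1, 2, 3]
T = lambda x: True          # the universal type, true of everything
phi = lambda x: x >= 0

assert all((not T(x)) or phi(x) for x in domain) == all(phi(x) for x in domain)
assert any(T(x) and phi(x) for x in domain) == any(phi(x) for x in domain)
print("untyped quantification is the T-typed special case")
```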