Sentential Calculus Revisited: Boolean Algebra

Modern logic began with the work of George Boole, who treated logic algebraically. Writing "+," "⋅," and "-" in place of "or," "and," and "not," he developed the sentential calculus in direct analogy to the familiar theories of groups, rings, and fields. Now that we have the predicate calculus with identity, we can discern the consequences of the axioms Boole wrote down, and we'll find that Boole's algebraic formalism got it exactly right. Here are the principles he discovered:

Definition. The language of Boolean algebra is the language whose individual constants are "0" and "1," whose function signs are the unary function sign "-" and the binary function signs "+" and "⋅," and whose predicates are the binary predicates "=" and "≤." For readability, we write "(x+y)" and "(x⋅y)" in place of "+(x,y)" and "⋅(x,y)." So the terms of the language of Boolean algebra will constitute the smallest class of expressions which contains the variables and "0" and "1," contains -τ whenever it contains τ, and contains (τ+ρ) and (τ⋅ρ) whenever it contains τ and ρ. We have unique readability, as usual.

Definition. A structure is a Boolean algebra just in case it satisfies the following axioms:

Associative laws:
(∀x)(∀y)(∀z) ((x+y)+z) = (x+(y+z))
(∀x)(∀y)(∀z) ((x⋅y)⋅z) = (x⋅(y⋅z))

Commutative laws:
(∀x)(∀y) (x+y) = (y+x)
(∀x)(∀y) (x⋅y) = (y⋅x)

Idempotence:
(∀x) (x+x) = x
(∀x) (x⋅x) = x

Distributive laws:
(∀x)(∀y)(∀z) (x + (y⋅z)) = ((x+y)⋅(x+z))
(∀x)(∀y)(∀z) (x⋅(y+z)) = ((x⋅y) + (x⋅z))

Identity elements:
(∀x) (x+0) = x
(∀x) (x⋅1) = x

Complementation laws:
(∀x) (x+-x) = 1
(∀x) (x⋅-x) = 0

Non-triviality:
¬0=1

Definition of "≤":
(∀x)(∀y)(x ≤ y ↔ (x+y)=y)

Definition. A sentence that's true in every Boolean algebra is a theorem of Boolean algebra. Among the theorems of Boolean algebra are the universal closures of the following formulas:

Alternative definition of "≤":
x ≤ y ↔ (x⋅y) = x

Further principles governing 1 and 0:
(x + 1) = 1
(x⋅0) = 0
((∀x)(x+y) = y ↔ y=1)
((∀x)(x⋅y) = y ↔ y=0)
((∀x)(x+y) = x ↔ y=0)
((∀x)(x⋅y) = x ↔ y=1)

Further principles governing "-":
-1 = 0
-0 = 1
--x = x
(x = -y ↔ ((x+y) = 1 ∧ (x⋅y) = 0))
(x ≤ y ↔ -y ≤ -x)

De Morgan's laws:
-(x+y) = (-x⋅-y)
-(x⋅y) = (-x + -y)

Absorption principles:
(x + (x⋅y)) = x
(x⋅(x+y)) = x

≤ is reflexive:
x ≤ x

≤ is transitive:
((x ≤ y ∧ y ≤ z) → x ≤ z)

≤ is antisymmetric:
((x ≤ y ∧ y ≤ x) → x = y)

Lattice principles:
x ≤ 1
0 ≤ x
((∀x) x ≤ y ↔ y = 1)
((∀x) y ≤ x ↔ y = 0)
(x ≤ (x+y) ∧ y ≤ (x+y))
((x⋅y) ≤ x ∧ (x⋅y) ≤ y)
((x ≤ z ∧ y ≤ z) ↔ (x+y) ≤ z)
((z ≤ x ∧ z ≤ y) ↔ z ≤ (x⋅y))

Further characterizations of ≤:
x ≤ y ↔ (-x + y) = 1
x ≤ y ↔ (x ⋅ -y) = 0
(x ≤ y ↔ (∃z)(y⋅z) = x)
(x ≤ y ↔ (∃z)(∃w)(x+z) = (y⋅w))

Tests for equality:
(x = y ↔ ((x + -y) = 1 ∧ (-x + y) = 1))
(x = y ↔ ((x ⋅ -y) = 0 ∧ (-x ⋅ y) = 0))
(x = y ↔ (x+y) = (x⋅y))

The simplest example is the two-element Boolean algebra, whose elements are the integers 0 and 1. "0" denotes 0, and "1" denotes 1. The sum of two elements of the domain is defined to be their maximum, the product is their minimum, and the complement of x is defined to be 1 - x. In a table:

a  b  (a+b)  (a⋅b)  -a
1  1    1      1     0
1  0    1      0     0
0  1    1      0     1
0  0    0      0     1
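The table can be checked mechanically. The following Python sketch (not part of the original notes; all names are our own) encodes the two-element algebra with sum as maximum, product as minimum, and complement as 1 - x, and verifies each equational axiom by brute force over all assignments of 0 and 1 to the variables. Non-triviality holds because 0 and 1 are distinct elements.

```python
# A minimal sketch: brute-force check that {0, 1} with sum = max,
# product = min, and complement = 1 - x satisfies the Boolean algebra axioms.
from itertools import product as assignments

def plus(x, y):  return max(x, y)   # "+"
def times(x, y): return min(x, y)   # "."
def comp(x):     return 1 - x       # "-"

axioms = {
    "associativity of +": lambda x, y, z: plus(plus(x, y), z) == plus(x, plus(y, z)),
    "associativity of .": lambda x, y, z: times(times(x, y), z) == times(x, times(y, z)),
    "commutativity of +": lambda x, y, z: plus(x, y) == plus(y, x),
    "commutativity of .": lambda x, y, z: times(x, y) == times(y, x),
    "idempotence of +":   lambda x, y, z: plus(x, x) == x,
    "idempotence of .":   lambda x, y, z: times(x, x) == x,
    "+ distributes over .": lambda x, y, z: plus(x, times(y, z)) == times(plus(x, y), plus(x, z)),
    ". distributes over +": lambda x, y, z: times(x, plus(y, z)) == plus(times(x, y), times(x, z)),
    "identity for +":     lambda x, y, z: plus(x, 0) == x,
    "identity for .":     lambda x, y, z: times(x, 1) == x,
    "complementation, +": lambda x, y, z: plus(x, comp(x)) == 1,
    "complementation, .": lambda x, y, z: times(x, comp(x)) == 0,
}

for name, law in axioms.items():
    assert all(law(x, y, z) for x, y, z in assignments([0, 1], repeat=3)), name
print("All axioms hold in the two-element algebra.")
```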
Definition. The augmented language of Boolean algebra is the language obtained from the language of Boolean algebra by adding infinitely many additional constants c0, c1, c2, c3, ....

Definition. For τ a term of the augmented language of Boolean algebra, the dual of τ, τd, is the term obtained from τ by exchanging "0" and "1" everywhere and exchanging "+" and "⋅" everywhere.

Duality Principle. Let τ and ρ be terms of the augmented language of Boolean algebra. If the universal closure of τ=ρ is a theorem of Boolean algebra, so is the universal closure of τd = ρd. If the universal closure of τ≤ρ is a theorem of Boolean algebra, so is the universal closure of ρd ≤ τd.

Proof: Suppose that B is a Boolean algebra and σ is a variable assignment for B that fails to satisfy τd = ρd in B. Let the interpretation A be defined as follows:

A = B
A(ci) = B(ci), for each i
A("1") = B("0")
A("0") = B("1")
x "+"A y = x "⋅"B y
x "⋅"A y = x "+"B y
"-"A x = "-"B x
A("≤") = {<y,x>: <x,y> ∈ "≤"B}

It is easy to verify that, for each term µ and variable assignment δ, Denδ,A(µ) = Denδ,B(µd). Consequently, σ fails to satisfy τ=ρ in A. It is also easy to verify, by examining the axioms one by one, that A is a Boolean algebra. So A is a Boolean algebra in which the universal closure of τ=ρ is false. Contraposing, if the universal closure of τ=ρ is a theorem of Boolean algebra, so is the universal closure of τd = ρd.

We prove the second part of the Duality Principle — the part about "≤" — by deriving it from the first part. Suppose that the universal closure of τ ≤ ρ is a theorem of Boolean algebra. It follows by the definition of "≤" that the universal closure of (τ+ρ) = ρ is a theorem of Boolean algebra. Using the first part of the Duality Principle, we conclude that the universal closure of (τ+ρ)d = ρd, which is the same as (τd ⋅ ρd) = ρd, is a theorem of Boolean algebra. It follows from the alternative definition of "≤," together with the commutative law for "⋅," that the universal closure of ρd ≤ τd is a theorem of Boolean algebra.∎
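To see duality in a concrete setting, the following Python sketch (hypothetical code, not from the notes) represents terms as nested tuples, computes the dual by swapping "0" with "1" and "+" with "⋅", and checks one example: an identity that holds under every assignment in the two-element algebra remains valid when both sides are dualized. This is only an illustration in a single algebra; the Duality Principle itself concerns every Boolean algebra.

```python
from itertools import product as assignments

def dual(t):
    """Swap "0" with "1" and "+" with "*" everywhere in a term."""
    if t == "0": return "1"
    if t == "1": return "0"
    if isinstance(t, str): return t            # a variable is left unchanged
    op, *args = t
    swap = {"+": "*", "*": "+", "-": "-"}
    return (swap[op], *[dual(a) for a in args])

def value(t, env):
    """Evaluate a term in the two-element algebra under the assignment env."""
    if t == "0": return 0
    if t == "1": return 1
    if isinstance(t, str): return env[t]
    op, *args = t
    vals = [value(a, env) for a in args]
    return {"+": max, "*": min}.get(op, lambda v: 1 - v)(*vals)

def identity_holds(lhs, rhs, variables):
    """Does lhs = rhs hold under every assignment of 0/1 to the variables?"""
    return all(value(lhs, dict(zip(variables, vs))) == value(rhs, dict(zip(variables, vs)))
               for vs in assignments([0, 1], repeat=len(variables)))

# Absorption, (x + (x.y)) = x, and its dual, (x.(x + y)) = x.
lhs, rhs = ("+", "x", ("*", "x", "y")), "x"
assert identity_holds(lhs, rhs, ["x", "y"])
assert identity_holds(dual(lhs), dual(rhs), ["x", "y"])
```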
We now make the connection between Boolean algebra and the sentential calculus. Let us fix our attention on some SC language whose atomic sentences have been arranged in an infinite list.

Definition. For φ an SC sentence whose only connectives are "∨," "∧," and "¬," let us take the algebraic translation of φ, φa, to be the closed term of the augmented language of Boolean algebra obtained as follows: replace the ith atomic sentence, wherever it occurs, by ci, and replace "∨," "∧," and "¬," wherever they occur, by "+," "⋅," and "-," respectively.

Definition. If Γ is a set of SC sentences whose only connectives are "∨," "∧," and "¬," Γa = {γa: γ ∈ Γ}.

Theorem. Let φ and ψ be SC sentences whose only connectives are "∨," "∧," and "¬." We have the following:

(a) φ is a tautology iff φa = 1 is a theorem of Boolean algebra.
(b) φ is a contradiction iff φa = 0 is a theorem of Boolean algebra.
(c) φ and ψ are logically equivalent iff φa = ψa is a theorem of Boolean algebra.
(d) φ implies ψ iff φa ≤ ψa is a theorem of Boolean algebra.

Proof: We present the proof in the following wacky order: first, we prove the right-to-left direction of (c), then we give the left-to-right direction of (b), and finally we fill in the rest.

(c) ⇐ Suppose that φ and ψ are not logically equivalent. Define an interpretation A of the augmented language of Boolean algebra as follows. For χ an SC sentence, let [χ] be the set of all formulas that are logically equivalent to χ; [χ] is called the logical equivalence class of χ.

A = {[χ]: χ an SC formula}.
A(ci) = the logical equivalence class of the ith atomic sentence.
1A = the set of tautologies.
0A = the set of inconsistent sentences.

For a and b elements of A, define a "+"A b to be the unique member c of A such that there exist sentences χ and θ with a = [χ], b = [θ], and c = [(χ ∨ θ)]. In order for this definition to make sense, we have to persuade ourselves that, for any a and b in A, there is indeed one and only one element c of A that meets the condition; but that's easy. For a and b elements of A, define a "⋅"A b to be the unique member c of A such that there exist sentences χ and θ with a = [χ], b = [θ], and c = [(χ ∧ θ)]. For a an element of A, define "-"A a to be the unique member c of A such that there exists a sentence χ with a = [χ] and c = [¬χ].

It is routine to verify that A, so defined, is a Boolean algebra. For example, the commutative law holds for "+" because, for any sentences χ and θ, (χ ∨ θ) is logically equivalent to (θ ∨ χ). The associative law holds for "⋅" because, for any sentences χ, θ, and η, ((χ ∧ θ) ∧ η) is logically equivalent to (χ ∧ (θ ∧ η)). And so on. It is also straightforward to verify that, for any formula χ whose only connectives are "∨," "∧," and "¬," A(χa) = [χ]. Since [φ] ≠ [ψ], A is a Boolean algebra in which φa = ψa fails.

(b) ⇒ We define a set ∆ of closed terms to be a-inconsistent (for algebraically inconsistent) just in case there exist elements δ1, δ2, ..., δn of ∆ such that the sentence (δ1⋅(δ2⋅(...⋅δn)...)) = 0 is a theorem of Boolean algebra. Sets that are a-consistent will play a role analogous to the role played by d-consistent sets in the proof of the Completeness Theorem. We have the following:

Lemma. For any set of closed terms Ω and closed term τ, if Ω ∪ {τ} and Ω ∪ {-τ} are both a-inconsistent, then Ω is a-inconsistent.
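As an aside, the algebraic translation is easy to make concrete. The Python sketch below (hypothetical names, not part of the notes) carries out the translation φ ↦ φa and then evaluates φa in the two-element algebra under every assignment of 0 and 1 to its constants, which is just the truth-table test for φ. The sketch only checks the two-element algebra; part (a) of the theorem concerns theoremhood in every Boolean algebra.

```python
from itertools import product as assignments

# SC sentences are nested tuples with connectives "or", "and", "not";
# the i-th atomic sentence is ("atom", i) and is translated to the constant c_i.
def algebraic_translation(phi):
    kind, *parts = phi
    if kind == "atom":  return ("c", parts[0])
    if kind == "or":    return ("+", *map(algebraic_translation, parts))
    if kind == "and":   return ("*", *map(algebraic_translation, parts))
    if kind == "not":   return ("-", algebraic_translation(parts[0]))
    raise ValueError(kind)

def value(term, env):
    """Evaluate a closed term of the augmented language in the two-element
    algebra, with env assigning 0 or 1 to the index of each constant c_i."""
    op, *args = term
    if op == "c": return env[args[0]]
    vals = [value(a, env) for a in args]
    return {"+": max, "*": min}.get(op, lambda v: 1 - v)(*vals)

def constants(term):
    op, *args = term
    return {args[0]} if op == "c" else set().union(*map(constants, args))

def tautology(phi):
    """The truth-table test: phi^a evaluates to 1 under every 0/1 assignment."""
    t = algebraic_translation(phi)
    cs = sorted(constants(t))
    return all(value(t, dict(zip(cs, vs))) == 1
               for vs in assignments([0, 1], repeat=len(cs)))

# (P or not P) is a tautology; (P and not P) is not.
P = ("atom", 0)
assert tautology(("or", P, ("not", P)))
assert not tautology(("and", P, ("not", P)))
```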