An Efficient Algorithm for Exploiting Multiple Arithmetic Units


R. M. Tomasulo

Abstract: This paper describes the methods employed in the floating-point area of the System/360 Model 91 to exploit the existence of multiple execution units. Basic to these techniques is a simple common data busing and register tagging scheme which permits simultaneous execution of independent instructions while preserving the essential precedences inherent in the instruction stream. The common data bus improves performance by efficiently utilizing the execution units without requiring specially optimized code. Instead, the hardware, by "looking ahead" about eight instructions, automatically optimizes the program execution on a local basis.

The application of these techniques is not limited to floating-point arithmetic or System/360 architecture. It may be used in almost any computer having multiple execution units and one or more "accumulators." Both of the execution units, as well as the associated storage buffers, multiple accumulators and input/output buses, are extensively checked.

Introduction

After storage access time has been satisfactorily reduced through the use of buffering and overlap techniques, even after the instruction unit has been pipelined to operate at a rate approaching one instruction per cycle,¹ there remains the need to optimize the actual performance of arithmetic operations, especially floating-point. Two familiar problems confront the designer in his attempt to balance execution with issuing. First, individual operations are not fast enough* to allow simple serial execution. Second, it is difficult to achieve the fastest execution times in a universal execution unit. In other words, circuitry designed to do both multiply and add will do neither as fast as two units each limited to one kind of instruction.

* During the planning phase, floating-point multiply was taken to be six cycles, divide as eighteen cycles and add as two cycles. A subsequent paper explains how times of 3, 12, and 2 were actually achieved. This permitted the use of only one, instead of two, multipliers and one adder, pipelined to start an add every cycle.

The first step toward surmounting these obstacles has been presented,¹ i.e., the division of the execution function into two independent parts, a fixed-point execution area and a floating-point execution area. While this relieves the physical constraint and makes concurrent execution possible, there is another consideration. In order to secure a performance increase the program must contain an intimate mixture of fixed-point and floating-point instructions. Obviously, it is not always feasible for the programmer to arrange this and, indeed, many of the programs of greatest interest to the user consist almost wholly of floating-point instructions. The subject of this paper, then, is the method used to achieve concurrent execution of floating-point instructions in the IBM System/360 Model 91. Obviously, one begins with multiple execution units, in this case an adder and a multiplier/divider.

It might appear that achieving the concurrent operation of these two units does not differ substantially from the attainment of fixed-floating overlap. However, in the latter case the architecture limits each of the instruction classes to its own set of accumulators and this guarantees independence.* In the former case there is only one set of accumulators, which implies program-specified sequences of dependent operations. Now it is no longer simply a matter of classifying each instruction as fixed-point or floating-point, a classification which is independent of previous instructions. Rather, it is a question of determining each instruction's relationship with all previous, incompleted instructions. Simply stated, the objective must be to preserve essential precedences while allowing the greatest possible overlap of independent operations.

* Such dependencies as exist are handled by the store-fetch sequencing of the storage bus and the condition code control described in the following paper.²

This objective is achieved in the Model 91 through a scheme called the common data bus (CDB). It makes possible maximum concurrency with minimal effort (usually none) by the programmer or, more importantly, by the compiler. At the same time, the hardware required is small and logically simple. The CDB can function with any number of accumulators and any number of execution units. In short, it provides a hardware algorithm for the automatic, efficient exploitation of multiple execution units.
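The objective stated above, preserving essential precedences while overlapping everything that is independent, can be made concrete with a small software sketch. The following Python fragment is an illustration only, not the Model 91 hardware: the instruction list and the dependency test used here (reads-after-write, writes-after-read, and writes-after-write on the same register) are assumptions introduced for exposition.

```python
# Illustrative sketch: a software analogue of the precedence question the
# hardware must answer. Instruction list and dependency rules are assumed
# here for exposition, not taken from the Model 91 design.
from dataclasses import dataclass

@dataclass
class Instr:
    op: str
    sink: str        # register the instruction writes
    sources: tuple   # registers or buffers the instruction reads

def depends_on(later: Instr, earlier: Instr) -> bool:
    """True if 'later' must wait for 'earlier' to preserve precedence."""
    return (earlier.sink in later.sources      # later reads what earlier writes
            or later.sink in earlier.sources   # later overwrites an operand earlier still reads
            or later.sink == earlier.sink)     # both write the same register

program = [
    Instr("LD", "F0", ("FLB1",)),
    Instr("MD", "F0", ("F0", "FLB2")),   # dependent: must follow the load
    Instr("AD", "F2", ("F2", "FLB3")),   # independent: free to overlap
]

for i, instr in enumerate(program):
    blockers = [program[j].op for j in range(i) if depends_on(instr, program[j])]
    print(instr.op, instr.sink, "waits for", blockers or "nothing")
```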
[Figure 1: Data registers and transfer paths without CDB.]

The next section of this paper will discuss the physical framework of registers, data paths and execution circuitry which is implied by the architecture and the overall CPU structure presented in a previous paper.¹ Within this framework one can subsequently discuss the problem of precedence, some possible solutions, and the selected solution, the CDB. In conclusion will be a summary of the results obtained.

Definitions and data paths

While the reader is assumed to be familiar with System/360 architecture and mnemonics, the terminology as modified by the context of the Model 91 organization will be reviewed here. The instruction unit, in preparing instructions for the floating-point operation stack (FLOS), maps both storage-to-register and register-to-register instructions into a pseudo-register-to-register format. In this format R1 is always one of the four floating-point registers (FLR) defined by the architecture. It is usually the sink of the instruction, i.e., it is the FLR whose contents are set equal to the result of the operation. Store operations are the sole exception,* wherein R1 specifies the source of the operand to be placed in storage; a word in storage is really the sink of a store. (R1 and R2 refer to fields as defined by System/360 architecture.)

* Compares do not, of course, alter the contents of R1.

In the pseudo-register-to-register format "seen" by the FLOS the R2 field can have three different meanings. It can be an FLR, as in a normal register-to-register instruction. If the program contains a storage-to-register instruction, the R2 field designates the floating-point buffer (FLB) assigned by the instruction unit to receive the storage operand. Finally, R2 can designate a store data buffer (SDB) assigned by the instruction unit to store instructions. In the first two cases R2 is the source of an operand; in the last case it is a sink. Thus, the instruction unit maps all of storage into the 6 floating-point buffers and 3 store data buffers, so that the FLOS sees only pseudo-register-to-register operations.
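The mapping just described can be sketched in a few lines of Python. This is an illustration of the pseudo-register-to-register format as defined above, not the Model 91 decoder; the class names, the STD store mnemonic, and the string encodings of operands are assumptions made for this example.

```python
# Sketch of the pseudo-register-to-register format the FLOS sees.
# Class names, the "STD" store mnemonic, and operand encodings are assumed.
from dataclasses import dataclass
from enum import Enum

class OperandKind(Enum):
    FLR = "floating-point register"   # four exist
    FLB = "floating-point buffer"     # six exist; stand in for storage sources
    SDB = "store data buffer"         # three exist; stand in for storage sinks

@dataclass
class PseudoRR:
    op: str
    r1: int                # always a floating-point register number
    r2_kind: OperandKind   # FLR, FLB, or SDB, per the three meanings of R2
    r2: int

    def sink(self) -> str:
        # For a store the word in storage (via the SDB) is the sink; R1 is the source.
        return f"SDB{self.r2}" if self.op == "STD" else f"FLR{self.r1}"

    def sources(self) -> list:
        if self.op == "STD":
            return [f"FLR{self.r1}"]
        srcs = [f"{self.r2_kind.name}{self.r2}"]
        if self.op != "LD":                # a load does not read R1
            srcs.append(f"FLR{self.r1}")
        return srcs

add = PseudoRR("AD", r1=0, r2_kind=OperandKind.FLR, r2=2)    # R0 + R2 -> R0
store = PseudoRR("STD", r1=0, r2_kind=OperandKind.SDB, r2=1)
print(add.sink(), add.sources())      # FLR0 ['FLR2', 'FLR0']
print(store.sink(), store.sources())  # SDB1 ['FLR0']
```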
The distinction between source and sink will become quite important during the discussion of precedence and should be fixed firmly in mind. All of the instructions (except store and compare) have the following form:

    R1 (register, sink)  op  R2 (register, source)  →  R1 (register, sink)

or, for a store,

    R1 (register, source)  →  store data buffer (sink)

For example, the instruction AD 0, 2 means "place the double-precision sum of registers 0 and 2 in register 0," i.e., R0 + R2 → R0. Note that R1 is really both a source and a sink. Nevertheless, it will be called the sink and R2 the source in all subsequent discussion.

This definition of operations and the machine organization taken together imply a set of data registers with transfer paths among them. These are shown in Fig. 1. The major sets of registers (FLR's, FLB's, FLOS and SDB's) have already been discussed, both above and in a preceding paper.¹ Two additional registers, one sink and one source, are shown feeding each execution circuit. Initially these registers were considered to be the internal working registers required by the execution circuits and put to multiple use in a way to be described below. Later, their function was generalized under the reservation station concept and they were dissociated from their "working" function.

In actually designing a machine the data paths evolve as the design progresses. Here, however, a complete, first-pass data path will be shown to facilitate discussion. To illustrate the operation let us consider, in turn, four kinds of instructions: load of a register from storage, storage-to-register arithmetic, register-to-register arithmetic, and store. Let us first see how each can be accomplished in vacuo; then what difficulties arise when each is embedded in the context of a program.

[Figure 2: Timing relationship between instruction unit and FLOS decode for the processing of one instruction.]

For a load of a register from storage, the adder receives control information which causes it to send data to floating-point register R1 when its source register is set full by the buffer.

If the instruction is a storage-to-register arithmetic function, the storage operand is handled as in load (control bits cause it to be forwarded to the proper unit) but the floating-point register, along with the operation, is sent by the decoder to the appropriate unit. After receiving the buffer the unit will execute the operation and send the result to register R1.

In register-to-register arithmetic instructions two floating-point registers are transmitted on successive cycles to the appropriate execution unit.

Stores are handled like storage-to-register arithmetic functions, except that the content of the floating-point register is sent to a store data buffer rather than to an execution unit.

Thus far, the handling of one instruction at a time has proven rather straightforward. Now consider the following "program":

Example I

    LD  F0, FLB1    LOAD register F0 from buffer 1
    MD  F0, FLB2    MULTIPLY register F0 by buffer 2

The load can be handled as before, but what about the multiply?
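Example I already contains the difficulty the rest of the paper addresses: the multiply names register F0 as a source, but F0 is not valid until the load completes. The short Python sketch below is a software caricature of that hazard, with made-up operand values; it is not the mechanism the Model 91 uses, which the following sections develop.

```python
# Caricature of the hazard in Example I: if the multiply reads F0 before the
# load has filled it, it uses stale data. Operand values are made up.
FLR = {"F0": 1.0}                  # stale contents left in register F0
FLB = {"FLB1": 3.0, "FLB2": 2.0}   # operands arriving from storage buffers

def run(wait_for_load: bool) -> float:
    f0 = FLR["F0"]
    if wait_for_load:
        f0 = FLB["FLB1"]           # LD F0, FLB1 completes before the multiply reads F0
    return f0 * FLB["FLB2"]        # MD F0, FLB2

print("precedence preserved:", run(wait_for_load=True))    # 6.0
print("precedence violated: ", run(wait_for_load=False))   # 2.0, the stale F0 was used
```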