What Can Be Computed? A Practical Guide to the Theory of Computation
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Computability of Fraïssé Limits
COMPUTABILITY OF FRAÏSSÉ LIMITS. BARBARA F. CSIMA, VALENTINA S. HARIZANOV, RUSSELL MILLER, AND ANTONIO MONTALBÁN.
Abstract. Fraïssé studied countable structures S through analysis of the age of S, i.e., the set of all finitely generated substructures of S. We investigate the effectiveness of his analysis, considering effectively presented lists of finitely generated structures and asking when such a list is the age of a computable structure. We focus particularly on the Fraïssé limit. We also show that degree spectra of relations on a sufficiently nice Fraïssé limit are always upward closed unless the relation is definable by a quantifier-free formula. We give some sufficient or necessary conditions for a Fraïssé limit to be spectrally universal. As an application, we prove that the computable atomless Boolean algebra is spectrally universal.
Contents: 1. Introduction (p. 1); 1.1. Classical results about Fraïssé limits and background definitions (p. 4); 2. Computable ages (p. 5); 3. Computable Fraïssé limits (p. 8); 3.1. Computable properties of Fraïssé limits (p. 8); 3.2. Existence of computable Fraïssé limits (p. 9); 4. Examples (p. 15); 5. Upward closure of degree spectra of relations (p. 18); 6. Necessary conditions for spectral universality (p. 20); 6.1. Local finiteness (p. 20); 6.2. Finite realizability (p. 21); 7. A sufficient condition for spectral universality (p. 22); 7.1. The countable atomless Boolean algebra (p. 23); References (p. 24).
1. Introduction. Computable model theory studies the algorithmic complexity of countable structures, of their isomorphisms, and of relations on such structures. Since algorithmic properties often depend on data presentation, in computable model theory classically isomorphic structures can have different computability-theoretic properties.
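The abstract above turns on the notions of the age of a structure and its Fraïssé limit. A standard textbook illustration, added here only for orientation (it is not an example taken from the paper, and the notation Flim is assumed), is the class of finite linear orders, whose Fraïssé limit is the rational order:

```latex
% Standard illustration, assumed here for orientation; not drawn from the paper above.
% The class of all finite linear orders has the hereditary, joint embedding, and
% amalgamation properties, so it admits a Fraïssé limit, namely (Q, <).
\[
  \mathcal{K} \;=\; \{\text{finite linear orders}\}
  \quad\Longrightarrow\quad
  \operatorname{Flim}(\mathcal{K}) \;\cong\; (\mathbb{Q}, <),
  \qquad
  \operatorname{Age}(\mathbb{Q}, <) \;=\; \mathcal{K}.
\]
```

The paper's question is then when such a limit has a computable presentation; (Q, <) does, which is what makes it a friendly first example.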
- 19 Theory of Computation
"You are about to embark on the study of a fascinating and important subject: the theory of computation. It comprises the fundamental mathematical properties of computer hardware, software, and certain applications thereof. In studying this subject we seek to determine what can and cannot be computed, how quickly, with how much memory, and on which type of computational model." - Michael Sipser, "Introduction to the Theory of Computation", one of the driest computer science textbooks ever.
In which we go back to the basics with arcane, boring, and irrelevant set theory and counting. This week's material is made challenging by two orthogonal issues. There's a whole bunch of stuff to cover, and 95% of it isn't the least bit interesting to a physical scientist. Nevertheless, being the starry-eyed optimist that I am, I'll forge ahead through the mountainous bulk of knowledge. In other words, expect these notes to be long! But skim when necessary; if you look at the homework first (and haven't entirely forgotten the lectures), you should be able to pick out the important parts. I understand if this type of stuff isn't exactly your cup of tea, but bear with me while you can. In some sense, most of what we refer to as computer science isn't computer science at all, any more than what an accountant does is math. Sure, modern economics takes some crazy hardcore theory to make it all work, but underlying the whole thing is a solid layer of applications and industrial moolah.
- Software for Numerical Computation
Purdue University, Purdue e-Pubs, Department of Computer Science Technical Reports, 1977. "Software for Numerical Computation," John R. Rice, Purdue University, West Lafayette, IN 47907, [email protected]. Report Number 77-214. Citation: Rice, John R., "Software for Numerical Computation" (1977), Department of Computer Science Technical Reports, Paper 154, https://docs.lib.purdue.edu/cstech/154. This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.
SOFTWARE FOR NUMERICAL COMPUTATION. John R. Rice, Mathematical Sciences, Purdue University. CSD-TR 214, January 12, 1977. Article to appear in the book Research Directions in Software Technology.
INTRODUCTION AND MOTIVATING PROBLEMS. The purpose of this article is to examine the research developments in software for numerical computation. Research and development of numerical methods is not intended to be discussed, for two reasons. First, a reasonable survey of the research in numerical methods would require a book. The COSERS report [Rice et al., 1977] on Numerical Computation does such a survey in about 100 printed pages, and even so the discussion of many important fields (never mind topics) is limited to a few paragraphs. Second, the present book is focused on software, and thus it is natural to attempt to separate software research from numerical computation research. This, of course, is not easy, as the two are intimately intertwined.
- Turing's Influence on Programming — Book Extract from "The Dawn of Software Engineering: from Turing to Dijkstra"
Edgar G. Daylight, Eindhoven University of Technology, The Netherlands, [email protected].
Abstract. Turing's involvement with computer building was popularized in the 1970s and later. Most notable are the books by Brian Randell (1973), Andrew Hodges (1983), and Martin Davis (2000). A central question is whether John von Neumann was influenced by Turing's 1936 paper when he helped build the EDVAC machine, even though he never cited Turing's work. This question remains unsettled to this day. As remarked by Charles Petzold, one standard history barely mentions Turing, while the other, written by a logician, makes Turing a key player. Contrast these observations then with the fact that Turing's 1936 paper was cited and heavily discussed in 1959 among computer programmers. In 1966, the first Turing Award was given to a programmer, not a computer builder, as were several subsequent Turing Awards. A historical investigation of Turing's influence on computing, presented here, shows that Turing's 1936 notion of universality became increasingly relevant among programmers during the 1950s. The central thesis of this paper states that Turing's influence was felt more in programming after his death than in computer building during the 1940s.
1 Introduction. Many people today are led to believe that Turing is the father of the computer, the father of our digital society, as also the following praise for Martin Davis's bestseller The Universal Computer: The Road from Leibniz to Turing suggests: "At last, a book about the origin of the computer that goes to the heart of the story: the human struggle for logic and truth."
- 31 Summary of Computability Theory
CS:4330 Theory of Computation, Spring 2018. Computability Theory Summary. Haniel Barbosa.
Readings for this lecture: Chapters 3-5 and Section 6.2 of [Sipser 1996], 3rd edition.
A hierarchy of languages:
- Regular: a^n b^m
- Deterministic context-free: a^n b^n
- Context-free: a^n b^n ∪ a^n b^(2n)
- Turing-decidable: a^n b^n c^n
- Turing-recognizable: A_TM
Why TMs? In 1900, Hilbert posed 23 "challenge problems" in mathematics. The 10th problem: devise a process according to which it can be decided by a finite number of operations if a given polynomial has an integral root. It became necessary to have a formal definition of "algorithms" to define their expressivity.
Church-Turing Thesis. In 1936, Church and Turing independently defined "algorithm": the λ-calculus and Turing machines. Intuitive notion of algorithms = Turing machine algorithms. "Any process which could be naturally called an effective procedure can be realized by a Turing machine." We now know: Hilbert's 10th problem is undecidable!
Algorithm as Turing Machine. Definition (Algorithm): an algorithm is a decider TM in the standard representation. The input to a TM is always a string. If we want an object other than a string as input, we must first represent that object as a string. Strings can easily represent polynomials, graphs, grammars, automata, and any combination of these objects.
How to determine decidability / Turing-recognizability? Decidable / Turing-recognizable: present a TM that decides (recognizes) the language; if A is mapping reducible to ...
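The summary above defines an algorithm as a decider TM and lists a^n b^n c^n as a Turing-decidable example. As an informal companion (my own sketch, not from the lecture notes, and an ordinary program rather than a Turing machine), here is a decider-style procedure for that language: it halts on every input and answers accept or reject.

```python
def decides_anbncn(w: str) -> bool:
    """Decide membership in {a^n b^n c^n : n >= 0}.

    A decider must halt on every input with accept/reject; this function
    always returns, mirroring that behaviour for this one language.
    """
    n = len(w) // 3
    # The string must split into three equal blocks, in the order a..a b..b c..c.
    return len(w) == 3 * n and w == "a" * n + "b" * n + "c" * n


if __name__ == "__main__":
    for s in ["", "abc", "aabbcc", "aabcc", "cba"]:
        print(repr(s), decides_anbncn(s))
```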
- Anna Lysyanskaya Curriculum Vitae
Anna Lysyanskaya, Curriculum Vitae. Computer Science Department, Box 1910, Brown University, Providence, RI 02912. (401) 863-7605. Email: [email protected]. http://www.cs.brown.edu/~anna
Research interests: cryptography, privacy, computer security, theory of computation.
Education: Massachusetts Institute of Technology, Cambridge, MA. Ph.D. in Computer Science, September 2002. Advisor: Ronald L. Rivest, Viterbi Professor of EECS. Thesis title: "Signature Schemes and Applications to Cryptographic Protocol Design." Massachusetts Institute of Technology, Cambridge, MA. S.M. in Computer Science, June 1999. Smith College, Northampton, MA. A.B. magna cum laude, Highest Honors, Phi Beta Kappa, May 1997.
Appointments: Brown University, Providence, RI, Fall 2013 - present, Professor of Computer Science. Brown University, Providence, RI, Fall 2008 - Spring 2013, Associate Professor of Computer Science. Brown University, Providence, RI, Fall 2002 - Spring 2008, Assistant Professor of Computer Science. UCLA, Los Angeles, CA, Fall 2006, Visiting Scientist at the Institute for Pure and Applied Mathematics (IPAM). Weizmann Institute, Rehovot, Israel, Spring 2006, Visiting Scientist. Massachusetts Institute of Technology, Cambridge, MA, 1997-2002, graduate student. IBM T. J. Watson Research Laboratory, Hawthorne, NY, Summer 2001, Summer Researcher. IBM Zürich Research Laboratory, Rüschlikon, Switzerland, Summers 1999 and 2000, Summer Researcher.
Teaching: Brown University, Providence, RI. Spring 2008, 2011, 2015, 2017, 2019; Fall 2012: instructor for "CS 259: Advanced Topics in Cryptography," a seminar course for graduate students. Spring 2012: instructor for "CS 256: Advanced Complexity Theory," a graduate-level complexity theory course. Fall 2003, 2004, 2005, 2010, 2011; Spring 2007, 2009, 2013, 2014, 2016, 2018: instructor for "CS 151: Introduction to Cryptography and Computer Security." Fall 2016, 2018: instructor for "CS 101: Theory of Computation," a core course for CS concentrators.
- On the Complexity of Numerical Analysis
On the Complexity of Numerical Analysis. Eric Allender (Rutgers, the State University of NJ, Department of Computer Science, Piscataway, NJ 08854-8019, USA, [email protected]); Peter Bürgisser (Paderborn University, Department of Mathematics, DE-33095 Paderborn, Germany, [email protected]); Johan Kjeldgaard-Pedersen (PA Consulting Group, Decision Sciences Practice, Tuborg Blvd. 5, DK 2900 Hellerup, Denmark, [email protected]); Peter Bro Miltersen (University of Aarhus, Department of Computer Science, IT-parken, DK 8200 Aarhus N, Denmark, [email protected]).
Abstract. We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP: given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean Traveling Salesman Problem lies in the counting hierarchy - the previous best upper bound for this important problem ...
In Section 1.3 we discuss our main technical contributions: proving upper and lower bounds on the complexity of PosSLP. In Section 1.4 we present applications of our main result with respect to the Euclidean Traveling Salesman Problem and the Sum-of-Square-Roots problem.
1.1 Polynomial Time Over the Reals. The Blum-Shub-Smale model of computation over the reals provides a very well-studied complexity-theoretic setting in which to study the computational problems of numerical analysis.
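To make the PosSLP problem concrete, here is a small Python sketch using my own encoding convention for straight-line programs (an illustration, not code or notation from the paper): it evaluates a division-free straight-line program with exact integer arithmetic and tests positivity. The brute-force approach is correct but can be exponentially expensive, since n instructions can produce numbers with on the order of 2^n bits, which is exactly why the complexity of PosSLP is subtle.

```python
from typing import List, Tuple

# A division-free straight-line program is encoded here (assumed convention) as a
# list of instructions over registers, with r[0] = 1 and ops "+", "-", "*".
Instr = Tuple[str, int, int]  # (op, index of left operand, index of right operand)


def pos_slp(program: List[Instr]) -> bool:
    """Naively decide PosSLP: is the program's final value N > 0?

    Exact integer arithmetic makes this correct, but intermediate values can
    have exponentially many bits, so this is not a polynomial-time algorithm.
    """
    regs = [1]  # r[0] holds the constant 1
    for op, i, j in program:
        a, b = regs[i], regs[j]
        regs.append(a + b if op == "+" else a - b if op == "-" else a * b)
    return regs[-1] > 0


if __name__ == "__main__":
    # Compute ((1+1)*(1+1)) - (1+1) = 2 and ask whether it is positive.
    prog = [("+", 0, 0),   # r1 = 2
            ("*", 1, 1),   # r2 = 4
            ("-", 2, 1)]   # r3 = 2
    print(pos_slp(prog))   # True
```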
- What Every Computer Scientist Should Know About Floating-Point Arithmetic
What Every Computer Scientist Should Know About Floating-Point Arithmetic. DAVID GOLDBERG, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California 94304.
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with examples of how computer system builders can better support floating point.
Categories and Subject Descriptors: (Primary) C.0 [Computer Systems Organization]: General - instruction set design; D.3.4 [Programming Languages]: Processors - compilers, optimization; G.1.0 [Numerical Analysis]: General - computer arithmetic, error analysis, numerical algorithms. (Secondary) D.2.1 [Software Engineering]: Requirements/Specifications - languages; D.3.1 [Programming Languages]: Formal Definitions and Theory - semantics; D.4.1 [Operating Systems]: Process Management - synchronization.
General Terms: Algorithms, Design, Languages.
Additional Key Words and Phrases: denormalized number, exception, floating-point, floating-point standard, gradual underflow, guard digit, NaN, overflow, relative error, rounding error, rounding mode, ulp, underflow.
INTRODUCTION. Builders of computer systems often need information about floating-point arithmetic. ... tions of addition, subtraction, multiplication, and division. It also contains background information on the two methods of measuring rounding error, ...
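The excerpt's keyword list mentions rounding error, relative error, and ulps. The short Python sketch below is my own illustration of the kind of effect the paper analyzes, not an example taken from the paper: 0.1 + 0.2 is not exactly 0.3 in binary floating point, and ulps and relative error quantify how far off the rounded result is.

```python
import math
import sys

# 0.1, 0.2, and 0.3 are not exactly representable in binary floating point,
# so the rounded sum differs from the rounded literal 0.3.
x = 0.1 + 0.2
print(x == 0.3)            # False
print(x - 0.3)             # roughly 5.55e-17

# Two standard measures of rounding error for a computed value vs. a reference.
exact = 0.3                # stands in for the mathematically exact result here
err = abs(x - exact)
print("relative error:", err / abs(exact))
print("error in ulps: ", err / math.ulp(exact))   # math.ulp requires Python 3.9+

# For round-to-nearest doubles, one correctly rounded operation has relative
# error at most eps/2, where eps is machine epsilon (2**-52).
print("machine epsilon:", sys.float_info.epsilon)
```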
- Quantum Hypercomputation—Hype or Computation?
Quantum Hypercomputation—Hype or Computation? Amit Hagar (HPS Department, Indiana University, Bloomington, IN 47405, [email protected]) and Alex Korolev (Philosophy Department, University of BC, Vancouver, BC, [email protected]). February 19, 2007.
Abstract. A recent attempt to compute a (recursion-theoretic) non-computable function using the quantum adiabatic algorithm is criticized and found wanting. Quantum algorithms may outperform classical algorithms in some cases, but so far they retain the classical (recursion-theoretic) notion of computability. A speculation is then offered as to where the putative power of quantum computers may come from.
1 Introduction. Combining physics, mathematics and computer science, quantum computing has developed in the past two decades from a visionary idea (Feynman 1982) to one of the most exciting areas of quantum mechanics (Nielsen and Chuang 2000). The recent excitement in this lively and fashionable domain of research was triggered by Peter Shor (1994), who showed how a quantum computer could exponentially speed up classical computation and factor numbers much more rapidly (at least in terms of the number of computational steps involved) than any known classical algorithm. Shor's algorithm was soon followed by several other algorithms that aimed to solve combinatorial and algebraic problems, and in the last few years theoretical study of quantum systems serving as computational devices has achieved tremendous progress. According to one authority in the field (Aharonov 1998, Abstract), we now have strong theoretical evidence that quantum computers, if built, might be used as powerful computational tools, capable of performing tasks which seem intractable for classical computers.
- CS 2210: Theory of Computation
CS 2210: Theory of Computation, Spring 2019.
Administrative information: background survey. Textbook: E. Rich, Automata, Computability, and Complexity: Theory and Applications, Prentice-Hall, 2008; book website: http://www.cs.utexas.edu/~ear/cs341/automatabook/. My office hours: Monday, 6:00-8:00pm, Searles 224; Tuesday, 1:00-2:30pm, Searles 222. TAs: Anjulee Bhalla (hours TBA, Searles 224) and Ryan St. Pierre (hours TBA, Searles 224).
What you can expect from the course: how to do proofs; models of computation; what's the difference between computability and complexity?; what's the Halting Problem?; what are P and NP?; why do we care whether P = NP?; what are NP-complete problems?; where does this make a difference outside of this class?; how to work the answers to these questions into the conversation at a cocktail party.
What I will expect from you: Problem sets (25%): problems given on Mondays and Wednesdays, due the next Monday, graded by the following Monday; a learning tool, not a testing tool; collaboration encouraged (more on this on the next slide). Quizzes (15%). Exams (two, non-cumulative, 30% each): closed book, closed notes, but you can bring in an 8.5 x 11 page with notes on both sides. Class participation: tiebreaker.
Other important things: go to the TA hours; study and work on problem sets in groups. Collaboration issues: Level 0 (in-class problems): no restrictions. Level 1 (homework problems): verbal collaboration, but individual write-ups. Level 2 (not used in this course): discussion with TAs only. Level 3 (exams): professor clarifications only.
Right now: What does it mean to study the "theory" of something? Experience with theory in other disciplines? Relationship to practice? "In theory, theory and practice are the same. ..."
- The Problem with Threads
The Problem with Threads. Edward A. Lee, Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2006-1, http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html, January 10, 2006.
Copyright © 2006, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.
Acknowledgement: This work was supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF award No. CCR-0225610), the State of California Micro Program, and the following companies: Agilent, DGIST, General Motors, Hewlett Packard, Infineon, Microsoft, and Toyota.
The Problem with Threads. Edward A. Lee, Professor, Chair of EE, Associate Chair of EECS, EECS Department, University of California at Berkeley, Berkeley, CA 94720, U.S.A. [email protected]. January 10, 2006.
Abstract. Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems. Languages require little or no syntactic changes to support threads, and operating systems and architectures have evolved to efficiently support them. Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures.
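As a small illustration of the kind of trouble the report is about (my own Python sketch, not an example from the report), two threads that update a shared counter without synchronization can interleave nondeterministically, so the final value varies from run to run. Wrapping the read-modify-write in a threading.Lock makes the result deterministic again.

```python
import threading
import time

counter = 0  # shared mutable state, deliberately unprotected


def worker(n: int) -> None:
    global counter
    for _ in range(n):
        current = counter          # read
        time.sleep(0)              # invite a thread switch between read and write
        counter = current + 1      # write: may overwrite another thread's update


threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With proper synchronization the answer would always be 20000; without it,
# updates are routinely lost and the printed value changes between runs.
print("expected 20000, got", counter)
```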
- Mathematics and Computation
Mathematics and Computation: Ideas Revolutionizing Technology and Science. Avi Wigderson. Princeton University Press, Princeton and Oxford.
Copyright © 2019 by Avi Wigderson. Requests for permission to reproduce material from this work should be sent to [email protected]. Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540; in the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TR. press.princeton.edu. All rights reserved. Library of Congress Control Number: 2018965993. ISBN: 978-0-691-18913-0. British Library Cataloging-in-Publication Data is available. Editorial: Vickie Kearn, Lauren Bucca, and Susannah Shoemaker. Production editorial: Nathan Carr. Production: Jacquie Poirier. Publicity: Alyssa Sanford and Kathryn Stevens. Copyeditor: Cyd Westmoreland. This book has been composed in LaTeX. The publisher would like to acknowledge the author of this volume for providing the camera-ready copy from which this book was printed. Printed on acid-free paper. Printed in the United States of America.
Dedicated to the memory of my father, Pinchas Wigderson (1921-1988), who loved people, loved puzzles, and inspired me. Ashgabat, Turkmenistan, 1943.
Contents: Acknowledgments (p. 1); 1 Introduction (p. 3); 1.1 On the interactions of math and computation (p. 3); 1.2 Computational complexity theory (p. 6); 1.3 The nature, purpose, and style of this book (p. 7); 1.4 Who is this book for? (p. 7); 1.5 Organization of the book (p. 8); 1.6 Notation and conventions ...