Theory of Computation: An Introduction

CIT 596 – Theory of Computation, Spring 2005. © Marcelo Siqueira.

Traditionally, the study of the theory of computation comprises three central areas: automata, computability, and complexity. These areas are linked by one question: what are the fundamental capabilities and limitations of computers? In this course, we will study the fundamentals of these three central areas.

The central question of complexity theory is: what makes some problems computationally hard and others easy? Remarkably, we still have not been able to answer this question! An important research achievement, though, was the definition of a scheme for classifying problems according to their “level of difficulty”, in which difficulty is expressed precisely. What does a computer scientist do when faced with a “hard” computational problem? Cryptography is one answer: sometimes, hardness is very desirable!

The central question of computability theory is: which problems can be solved by computers and which ones cannot? Even before the invention of “real computers”, Kurt Gödel, Alan Turing, and Alonzo Church discovered that certain basic problems cannot be solved by computers! For instance, computers cannot solve the problem of determining whether an arbitrary mathematical statement is true or false. The fact that this discovery predates the birth of real computers suggests that we do not have to study semiconductors or integrated circuits in order to understand “computation”. Instead, we can use “abstract models” of computers.

Automata theory deals with the definitions and properties of these abstract models of computers, that is, models of computation that can be defined mathematically. Some of these models are (believed to be) as powerful as real computers, whereas others can be defined more simply and are less powerful. The simplest model of computation is the finite automaton, or finite-state machine. This simple model has several applications in computer science, including text processing, compilers, and interface design.
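To make the simplest model concrete, here is a minimal sketch of a finite automaton in Python (not part of the original notes); the example language, the state names, and the function name are assumptions chosen purely for illustration.

```python
# A minimal DFA sketch for one assumed example language:
# binary strings containing an even number of 1's.
# Two states suffice: they remember the parity of the 1's read so far.
TRANSITION = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
START, ACCEPTING = "even", {"even"}

def accepts(w: str) -> bool:
    """Run the DFA on the input string w; True means 'yes', False means 'no'."""
    state = START
    for symbol in w:                      # exactly one transition per input symbol
        state = TRANSITION[(state, symbol)]
    return state in ACCEPTING

print(accepts("1010"))  # True: two 1's
print(accepts("1000"))  # False: one 1
```

Note how little machinery the model needs: a finite set of states, a transition for each state and input symbol, and a set of accepting states. This economy is exactly why the finite automaton is the simplest model studied in the course.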
Another important model is the pushdown automaton, which is more powerful than the finite automaton. This model provides a natural way of determining whether the syntax of a statement in a high-level programming language is correct, and therefore it is heavily used in compiler construction and computational linguistics. Finally, there is the Turing machine, named after Alan Turing, the English mathematician who invented it. There is general agreement that Turing machines are able to carry out any algorithm whatsoever.

How do these models of computation work? They solve decision problems, i.e., computational problems for which every specific instance can be answered “yes” or “no”. For instance: “Given a positive integer n, is n prime?” The input value n is encoded as a string of digits, and a computation that solves the problem starts with this string and ends with a “yes” or a “no”. This encoding step is no different from what actually happens with the input data in real computers; everything is converted into strings of 0's and 1's, isn't it?

Not all computational problems are decision problems. However, for a problem requiring an answer more complicated than “yes” or “no”, there is often a decision problem whose solution is comparable to that of the original. For instance, if f is a function, being able to answer the question “Given x and y, is y = f(x)?” is tantamount to being able to compute f(x) for an arbitrary x. Furthermore, the more powerful computational models are able to deal with more general kinds of problems. An extremely interesting point is the relationship between models of computation and languages.

We can think of a decision problem as a language recognition problem. For the primality decision problem (“Given a positive integer n, is n prime?”), we can view the set of ALL strings for which our model answers “yes” as a “language”. The decision problem then becomes that of determining whether an arbitrary string is in the language recognized by the model of computation. In our example, the recognition problem is to determine whether a given string (encoding a number) belongs to the language of ALL strings representing prime numbers.
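The following Python sketch (again, not part of the original notes) makes the recognition viewpoint concrete by recasting primality as membership in a language of decimal strings; the function name and the naive trial-division test are assumptions made only for the example.

```python
# A sketch, assuming inputs are decimal strings: the decision problem
# "is n prime?" recast as membership in the language
#   L = { w over {0,...,9} : w encodes a prime number }.
def in_prime_language(w: str) -> bool:
    """True iff the string w belongs to L (the recognizer answers 'yes')."""
    if not w.isdigit():       # not even a well-formed encoding of a number
        return False
    n = int(w)                # decode the string back into a number
    if n < 2:
        return False
    d = 2
    while d * d <= n:         # naive trial division; enough for a sketch
        if n % d == 0:
            return False
        d += 1
    return True

print(in_prime_language("97"))  # True: "97" is in the language
print(in_prime_language("98"))  # False: "98" is not
```

The recognizer never sees a number directly; it only sees a string, which is precisely the level of abstraction at which the models in this course operate.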
Distinct models of computation recognize (resp. solve) languages (resp. decision problems) of different complexity. Finite automata recognize regular languages. Pushdown automata recognize context-free languages. Turing machines recognize recursively enumerable languages. Regular languages are simpler than context-free languages, which in turn are simpler than recursively enumerable languages. This seems very natural! Why?

We can also study languages from a generation point of view rather than a recognition one. From the generation point of view, we use a formalism known as a grammar. A grammar contains a set of rules that can be applied to generate all strings in a language. Grammars are extremely useful for specifying and representing programming languages: in order to build a compiler for a programming language, we must specify the language in some way, and grammars are pretty handy and often used for that.

As you may expect, we also classify grammars according to the complexity of the languages they can generate. Regular languages are generated by regular grammars. Context-free languages are generated by context-free grammars. Recursively enumerable languages are generated by unrestricted grammars. This hierarchy of grammars and languages was created by Noam Chomsky, and it is known as the Chomsky hierarchy. Yeah, this is the same Noam Chomsky you may have heard about in the past four years, within a completely different context... He earned his PhD at UPenn!
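Returning to the generation viewpoint, here is a small Python sketch (not from the original notes) of derivations in the classic context-free grammar S -> aSb | ε, which generates the language { a^n b^n : n >= 0 }; the function name and the depth bound are assumptions made for the example.

```python
# A sketch of generation with the context-free grammar  S -> a S b | ε,
# whose rules produce { a^n b^n : n >= 0 }, a standard example of a
# context-free (but not regular) language.
def generate(max_depth: int, sentential: str = "S") -> set[str]:
    """All terminal strings derivable from 'sentential' within max_depth rule applications."""
    if "S" not in sentential:           # no nonterminal left: a finished string
        return {sentential}
    if max_depth == 0:                  # arbitrary cutoff so the sketch terminates
        return set()
    results = set()
    for replacement in ("aSb", ""):     # the two rules for the nonterminal S
        rewritten = sentential.replace("S", replacement, 1)   # leftmost derivation step
        results |= generate(max_depth - 1, rewritten)
    return results

print(sorted(generate(4), key=len))  # ['', 'ab', 'aabb', 'aaabbb']
```

A pushdown automaton can recognize exactly the strings this grammar generates, while no finite automaton can, which is one small way to see the recognition side and the generation side of the Chomsky hierarchy lining up.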