La Machine de Turing
Total pages: 16
File type: PDF, size: 1020 KB

Recommended publications
Models of Quantum Computation and Quantum Programming Languages
Models of quantum computation and quantum programming languages. Jarosław Adam Miszczak, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland.

The goal of the presented paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of the quantum Turing machine, quantum circuits and the quantum random access machine (QRAM) along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.

I. INTRODUCTION

A computational process must be studied using a fixed model of a computational device. This paper introduces the basic models of computation used in quantum information theory. We show how these models are defined by extending classical models. We start by introducing some basic facts about classical and quantum Turing machines. These models help to understand how useful quantum computing can be. They can also be used to discuss the difference between quantum and classical computation. For the sake of completeness we also give a brief introduction to the main results … The presented description of existing models of quantum computation is biased towards the models interesting for the development of quantum programming languages. Thus we neglect some models which are not directly related to this area (e.g. quantum automata or topological quantum computation).

A. Quantum information theory

Quantum information theory is a new, fascinating field of research which aims to use the quantum mechanical description of a system to perform computational tasks.
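As a rough illustration of the QRAM model mentioned in the abstract above (a classical program driving a quantum register through gate and measurement instructions), here is a minimal state-vector sketch. The QuantumRegister class and its method names are hypothetical helpers written for this example, assuming only NumPy; they are not from Miszczak's paper.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

class QuantumRegister:
    """A toy quantum register controlled by a classical program (QRAM-style sketch)."""

    def __init__(self, n):
        self.n = n
        self.state = np.zeros(2 ** n, dtype=complex)
        self.state[0] = 1.0  # start in |0...0>

    def apply(self, gate, target):
        # Lift a single-qubit gate to the full register: I ⊗ ... ⊗ gate ⊗ ... ⊗ I
        op = np.array([[1.0]])
        for q in range(self.n):
            op = np.kron(op, gate if q == target else np.eye(2))
        self.state = op @ self.state

    def measure(self):
        # Sample a basis state with the Born-rule probabilities |amplitude|^2.
        probs = np.abs(self.state) ** 2
        outcome = np.random.choice(len(probs), p=probs)
        return format(outcome, f"0{self.n}b")

# Classical control flow issuing quantum instructions:
reg = QuantumRegister(2)
reg.apply(H, 0)        # put qubit 0 into superposition
print(reg.measure())   # a classical branch could now depend on this result
```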
Complexity Theory
Complexity Theory. Lecture Notes, October 27, 2016. Prof. Dr. Roland Meyer, TU Braunschweig, Winter Term 2016/2017.

Preface

These are the lecture notes accompanying the course Complexity Theory taught at TU Braunschweig. As of October 27, 2016, the notes cover 11 out of 28 lectures. The handwritten notes on the missing lectures can be found at http://concurrency.cs.uni-kl.de. The present document is work in progress and comes with neither a guarantee of completeness with respect to the content of the classes nor a guarantee of correctness. In case you spot a bug, please send a mail to [email protected].

Peter Chini, Judith Stengel, Sebastian Muskalla, and Prakash Saivasan helped turn the initial handwritten notes into LaTeX, provided valuable feedback, and came up with interesting exercises. I am grateful for their help! Enjoy the study! Roland Meyer

Introduction

Complexity theory has two main goals:
• Study computational models and programming constructs in order to understand their power and limitations.
• Study computational problems and their inherent complexity. Complexity usually means time and space requirements (on a particular model), but it can also be defined by other measurements, like randomness, number of alternations, or circuit size. Interestingly, an important observation of computational complexity is that the precise model is not important: the classes of problems are robust against changes in the model.

Background:
• The theory of computability goes back to Presburger, Gödel, Church, Turing, Post, and Kleene (first half of the 20th century). It gives us methods to decide whether a problem can be computed (decided by a machine) at all.
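To make the computability background concrete (what it means for a problem to be decided by a machine), here is a minimal sketch of a deterministic Turing machine simulator. The run_tm helper and its transition-table encoding are illustrative assumptions made for this example, not material from Meyer's notes.

```python
# A deterministic Turing machine: finite control plus a tape, driven by a
# transition function delta(state, symbol) -> (state, symbol, move).

def run_tm(delta, tape, state="q0", accept="qA", reject="qR", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state in (accept, reject):
            return state == accept, cells
        symbol = cells.get(head, "_")
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step bound exceeded")

# Example machine: complement a binary word, accept at the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("qA", "_", "R"),
}
ok, cells = run_tm(delta, "1011")
print(ok, "".join(cells[i] for i in sorted(cells) if cells[i] != "_"))  # True 0100
```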
Lecture 1 Review of Basic Computational Complexity
Lecture 1: Review of Basic Computational Complexity. March 30, 2004. Lecturer: Paul Beame. Notes: Daniel Lowd.

1.1 Preliminaries

1.1.1 Texts

There is no one textbook that covers everything in this course; some of the discoveries are simply too recent. For the first portion of the course, however, two supplementary texts may be useful:
• Michael Sipser, Introduction to the Theory of Computation.
• Christos Papadimitriou, Computational Complexity.

1.1.2 Coursework

Required work for this course consists of lecture notes and group assignments. Each student is assigned to take thorough lecture notes no more than twice during the quarter. The student must then type up the notes in expanded form in LaTeX. Lecture notes may also contain additional references or details omitted in class. A LaTeX stylesheet for this will soon be made available on the course webpage. Notes should be typed, submitted, revised, and posted to the course webpage within one week. Group assignments will be homework sets that may be done cooperatively. In fact, cooperation is encouraged; credit to collaborators should be given. There will be no tests.

1.1.3 Overview

The first part of the course will predominantly discuss complexity classes above NP, relating randomness, circuits, and counting to P, NP, and the polynomial-time hierarchy. One of the motivating goals is to consider how one might separate P from classes above P. Results covered will range from the 1970s to recent work, including tradeoffs between time and space complexity. In the middle section of the course we will cover the powerful role that interaction has played in our understanding of computational complexity and, in particular, the powerful results on probabilistically checkable proofs (PCP).
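The overview above centers on P, NP, and the polynomial-time hierarchy; the sketch below illustrates the standard verifier view of NP (a deterministic polynomial-time check of a short witness), using CNF satisfiability with a truth assignment as the witness. The verify_sat helper and its clause encoding are assumptions made for this example, not material from the notes.

```python
# A language L is in NP if a deterministic polynomial-time verifier accepts
# (x, w) for some polynomially short witness w exactly when x is in L.
# Here x is a CNF formula and w is a truth assignment.

def verify_sat(cnf, assignment):
    """cnf: list of clauses, each a list of nonzero ints (DIMACS-style:
    -3 means 'not x3'); assignment: dict var -> bool. Linear-time check."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

cnf = [[1, -2], [2, 3], [-1, -3]]  # (x1 or not x2)(x2 or x3)(not x1 or not x3)
print(verify_sat(cnf, {1: True, 2: True, 3: False}))   # True: a valid witness
print(verify_sat(cnf, {1: True, 2: False, 3: True}))   # False: not a witness
```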
Turing and the Development of Computational Complexity
Turing and the Development of Computational Complexity. Steven Homer, Department of Computer Science, Boston University, Boston, MA 02215. Alan L. Selman, Department of Computer Science and Engineering, State University of New York at Buffalo, 201 Bell Hall, Buffalo, NY 14260. August 24, 2011.

Abstract

Turing's beautiful capture of the concept of computability by the "Turing machine" linked computability to a device with explicit steps of operations and use of resources. This invention led in a most natural way to the foundations of computational complexity.

1 Introduction

Computational complexity provides mechanisms for classifying combinatorial problems and measuring the computational resources necessary to solve them. The discipline provides explanations of why certain problems have no practical solutions and provides a way of anticipating difficulties involved in solving problems of certain types. The classification is quantitative and is intended to investigate what resources are necessary (lower bounds) and what resources are sufficient (upper bounds) to solve various problems. This classification should not depend on a particular computational model but rather should measure the intrinsic difficulty of a problem. Precisely for this reason, as we will explain, the basic model of computation for our study is the multitape Turing machine. Computational complexity theory today addresses issues of contemporary concern, for example, parallel computation, circuit design, computations that depend on random number generators, and the development of efficient algorithms. Above all, computational complexity is interested in distinguishing problems that are efficiently computable. Algorithms whose running times are n^2 in the size of their inputs can be implemented to execute efficiently even for fairly large values of n, but algorithms that require an exponential running time can be executed only for small values of n.
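The gap described above between n^2 and exponential running times is easy to make concrete; the short tabulation below (a worked illustration, not from the paper) shows both growth rates side by side.

```python
# Contrast polynomial and exponential growth: an n^2 algorithm stays
# practical long after a 2^n algorithm becomes hopeless.
for n in (10, 20, 40, 80):
    print(f"n={n:>2}  n^2={n**2:>6}  2^n={2**n}")
# n=10: 100 vs 1024, the same ballpark.
# n=80: 6400 vs roughly 1.2e24, astronomically apart.
```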
Introduction to the Theory of Complexity
Introduction to the Theory of Complexity. Daniel Pierre Bovet, Pierluigi Crescenzi.

The information in this book is distributed on an "as is" basis, without warranty. Although every precaution has been taken in the preparation of this work, the authors shall not have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work. First electronic edition: June 2006.

Contents

1 Mathematical preliminaries
1.1 Sets, relations and functions
1.2 Set cardinality
1.3 Three proof techniques
1.4 Graphs
1.5 Alphabets, words and languages
2 Elements of computability theory
2.1 Turing machines
2.2 Machines and languages
2.3 Reducibility between languages
3 Complexity classes
3.1 Dynamic complexity measures
3.2 Classes of languages
3.3 Decision problems and languages
3.4 Time-complexity classes
3.5 The pseudo-Pascal language
4 The class P
4.1 The class P
4.2 The robustness of the class P
4.3 Polynomial-time reducibility
4.4 Uniform diagonalization
5 The class NP
5.1 The class NP
5.2 NP-complete languages
5.3 NP-intermediate languages
5.4 Computing and verifying a function
5.5 Relativization of the P ≠ NP conjecture
6 The complexity of optimization problems
6.1 Optimization problems
6.2 Underlying languages
6.3 Optimum measure versus optimum solution
6.4 Approximability
6.5 Reducibility and optimization problems
7 Beyond NP
7.1 The class coNP
Time-Space Tradeoffs for Satisfiability
Time-Space Tradeoffs for Satisfiability. Lance Fortnow, University of Chicago, Department of Computer Science, 1100 E. 58th St., Chicago, IL 60637.

Abstract

We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^{1+o(1)} time and n^{1-ε} space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for log-space uniform NC^1 circuits and branching programs. Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically with a small amount of time then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this work with a result of Nepomnjaščiĭ that shows that a nondeterministic computation of superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL and NP, and we give some possibilities and limitations of this approach.

1 Introduction

Separating complexity classes remains the most important and difficult of problems in theoretical computer science. Circuit complexity and other techniques on finite functions have seen some exciting early successes (see [BS90]) but have yet to achieve their promise of separating complexity classes above logarithmic space. Other techniques based on logic and geometry have also given us separations only on very restricted models. We should turn back to a traditional separation technique: diagonalization.
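One way to see the tension in the abstract above: the obvious SAT algorithm, sketched below, needs only O(n) space because it holds a single assignment at a time, but it pays 2^n time; the theorem shows that no algorithm can achieve both small time (n^{1+o(1)}) and small space (n^{1-ε}) simultaneously. The brute_force_sat helper and its clause encoding are assumptions made for this illustration, not the paper's construction.

```python
from itertools import product

def brute_force_sat(cnf, n):
    """Exhaustive SAT search: O(n) space (one assignment held at a time),
    but 2^n time in the worst case. Clauses are lists of nonzero ints;
    -3 means 'not x3' (a DIMACS-style convention assumed here)."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf):
            return assignment
    return None  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```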