La Machine De Turing

Total Pages: 16

File Type: PDF, Size: 1020 KB

La Machine de Turing (The Turing Machine) — compilation; réalisation à l'aide de Legos (built with Legos)

Turing machine

This article is about the symbol manipulator. For the deciphering machine, see Bombe. For the test of artificial intelligence, see Turing test. For the instrumental rock band, see Turing Machine (band).

A Turing machine is an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules; to be more exact, it is a mathematical model of computation that defines such a device.[2] Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic.[3]

The machine operates on an infinite[4] memory tape divided into cells.[5] The machine positions its head over a cell and "reads" (scans[6]) the symbol there. Then, as per the symbol and its present place in a finite table[7] of user-specified instructions, the machine (i) writes a symbol (e.g. a digit or a letter from a finite alphabet) in the cell (some models allowing symbol erasure[8] and/or no writing), then (ii) either moves the tape one cell left or right (some models allow no motion, some models move the head),[9] then (iii) (as determined by the observed symbol and the machine's place in the table) either proceeds to a subsequent instruction or halts[10] the computation.

The Turing machine was invented in 1936 by Alan Turing,[11][12] who called it an a-machine (automatic machine).[13] With this model Turing was able to answer two questions in the negative: (1) does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g. freezes, or fails to continue its computational task); similarly, (2) does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?[14] Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general - and in particular, the uncomputability of the Hilbert Entscheidungsproblem ("decision problem").[15]

Thus, Turing machines prove fundamental limitations on the power of mechanical computation.[16] While they can express arbitrary computations, their minimalistic design makes them unsuitable for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.

Turing completeness is the ability for a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.

[Figure: Classes of automata]

1 Overview

A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine (automaton) capable of enumerating some arbitrary subset of valid strings of an alphabet; these strings are part of a recursively enumerable set.

Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.

The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus.

A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church–Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory.

1.1 Physical description

In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of:

    ...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.[17] (Turing 1948, p. 3[18])

2 Informal description

For visualizations of Turing machines, see Turing machine gallery.

The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On computable numbers, with an application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").

[Figure: a tape holding the symbols s1 s1 s3 s1 s0 s1. The head is always over a particular square of the tape; only a finite stretch of squares is shown. The instruction to be performed (q4) is shown over the scanned square. (Drawing after Kleene (1952) p. 375.)]

More precisely, a Turing machine consists of:

1. A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, i.e., the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right.

2. A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary.

3. A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialized. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in.

4. A finite table[19] of instructions[20] that, given the state (qᵢ) the machine is currently in and the symbol (a) it is reading on the tape (the symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models):

   • either erase or write a symbol (replacing a with a₁), and then
   • move the head (which is described by d and can have values: 'L' for one step left, 'R' for one step right, or 'N' for staying in the same place), and then
   • assume the same or a new state as prescribed (go to state qᵢ₁).

In the 4-tuple models, erasing or writing a symbol (a₁) and moving the head left or right (d) are specified as separate instructions. Specifically, the table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.

[Figure: a tape pre-filled with "0" holding the symbols 0 0 0 0 0 1 1 B 0 0. Here, the internal state (q1) is shown inside the head, and the illustration describes the tape as being infinite and pre-filled with "0", the symbol serving as blank. The system's full state (its complete configuration) consists of the internal state, any non-blank symbols on the tape (in this illustration "11B"), and the position of the head relative to those symbols including blanks, i.e. "011B". (Drawing after Minsky (1967) p. 121.)]

Note that every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.
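As a concrete, unofficial illustration of the 5-tuple model described above, the short sketch below simulates such an instruction table in Python. The state names, the blank symbol '0', and the example table (which writes three 1s while moving right, then halts) are our own assumptions for demonstration, not part of the article:

```python
from collections import defaultdict

def run_turing_machine(table, start_state, tape=None, max_steps=10_000):
    """Simulate a 5-tuple Turing machine.

    `table` maps (state, scanned_symbol) -> (write_symbol, move, next_state),
    where move is 'L', 'R', or 'N'.  The tape is unbounded in both directions;
    unwritten cells hold the blank symbol '0'.
    """
    cells = defaultdict(lambda: '0')              # sparse, two-way infinite tape
    for i, sym in enumerate(tape or []):
        cells[i] = sym
    state, head = start_state, 0
    for _ in range(max_steps):
        key = (state, cells[head])
        if key not in table:                      # no entry: the machine halts
            break
        write, move, state = table[key]
        cells[head] = write                       # (i) write a symbol
        head += {'L': -1, 'R': 1, 'N': 0}[move]   # (ii) move the head
    # Return the non-blank stretch of the tape, left to right.
    used = [i for i, s in cells.items() if s != '0']
    return ''.join(cells[i] for i in range(min(used), max(used) + 1)) if used else ''

# An assumed example table: write three 1s, moving right, then halt.
table = {
    ('q0', '0'): ('1', 'R', 'q1'),
    ('q1', '0'): ('1', 'R', 'q2'),
    ('q2', '0'): ('1', 'N', 'halt'),
}
print(run_turing_machine(table, 'q0'))   # -> 111
```

Halting here is modeled by the missing-entry convention mentioned above; a model that requires all entries to be filled would instead need an explicit halting state checked before each step.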
Recommended publications
  • Models of Quantum Computation and Quantum Programming Languages
    Models of quantum computation and quantum programming languages

    Jarosław Adam Miszczak, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland

    The goal of the presented paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of quantum Turing machine, quantum circuits and quantum random access machine (QRAM) along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.

    I. INTRODUCTION

    A computational process must be studied using a fixed model of computational device. This paper introduces the basic models of computation used in quantum information theory. We show how these models are defined by extending classical models. We start by introducing some basic facts about classical and quantum Turing machines. These models help to understand how useful quantum computing can be. They can also be used to discuss the difference between quantum and classical computation. For the sake of completeness we also give a brief introduction to the main results. The presented selection of existing models of quantum computation is biased towards the models interesting for the development of quantum programming languages; thus we neglect some models which are not directly related to this area (e.g. quantum automata or topological quantum computation).

    A. Quantum information theory

    Quantum information theory is a new, fascinating field of research which aims to use quantum mechanical description of the system to perform computational tasks.
  • Complexity Theory
    Complexity Theory — Lecture Notes, October 27, 2016

    Prof. Dr. Roland Meyer, TU Braunschweig, Winter Term 2016/2017

    Preface

    These are the lecture notes accompanying the course Complexity Theory taught at TU Braunschweig. As of October 27, 2016, the notes cover 11 out of 28 lectures. The handwritten notes on the missing lectures can be found at http://concurrency.cs.uni-kl.de. The present document is work in progress and comes with neither a guarantee of completeness wrt. the content of the classes nor a guarantee of correctness. In case you spot a bug, please send a mail to [email protected]. Peter Chini, Judith Stengel, Sebastian Muskalla, and Prakash Saivasan helped turning initial handwritten notes into LaTeX, provided valuable feedback, and came up with interesting exercises. I am grateful for their help! Enjoy the study! Roland Meyer

    Introduction

    Complexity Theory has two main goals:

    • Study computational models and programming constructs in order to understand their power and limitations.
    • Study computational problems and their inherent complexity. Complexity usually means time and space requirements (on a particular model). It can also be defined by other measurements, like randomness, number of alternations, or circuit size. Interestingly, an important observation of computational complexity is that the precise model is not important. The classes of problems are robust against changes in the model.

    Background:

    • The theory of computability goes back to Presburger, Gödel, Church, Turing, Post, Kleene (first half of the 20th century). It gives us methods to decide whether a problem can be computed (decided by a machine) at all.
  • Lecture 1 Review of Basic Computational Complexity
    Lecture 1 — Review of Basic Computational Complexity

    March 30, 2004. Lecturer: Paul Beame. Notes: Daniel Lowd

    1.1 Preliminaries

    1.1.1 Texts

    There is no one textbook that covers everything in this course. Some of the discoveries are simply too recent. For the first portion of the course, however, two supplementary texts may be useful:

    • Michael Sipser, Introduction to the Theory of Computation.
    • Christos Papadimitriou, Computational Complexity.

    1.1.2 Coursework

    Required work for this course consists of lecture notes and group assignments. Each student is assigned to take thorough lecture notes no more than twice during the quarter. The student must then type up the notes in expanded form in LaTeX. Lecture notes may also contain additional references or details omitted in class. A LaTeX stylesheet for this will soon be made available on the course webpage. Notes should be typed, submitted, revised, and posted to the course webpage within one week. Group assignments will be homework sets that may be done cooperatively. In fact, cooperation is encouraged. Credit to collaborators should be given. There will be no tests.

    1.1.3 Overview

    The first part of the course will predominantly discuss complexity classes above NP, relating randomness, circuits, and counting to P, NP, and the polynomial-time hierarchy. One of the motivating goals is to consider how one might separate P from classes above P. Results covered will range from the 1970s to recent work, including tradeoffs between time and space complexity. In the middle section of the course we will cover the powerful role that interaction has played in our understanding of computational complexity and, in particular, the powerful results on probabilistically checkable proofs (PCP).
  • Turing and the Development of Computational Complexity
    Turing and the Development of Computational Complexity

    Steven Homer, Department of Computer Science, Boston University, Boston, MA 02215
    Alan L. Selman, Department of Computer Science and Engineering, State University of New York at Buffalo, 201 Bell Hall, Buffalo, NY 14260

    August 24, 2011

    Abstract. Turing's beautiful capture of the concept of computability by the "Turing machine" linked computability to a device with explicit steps of operations and use of resources. This invention led in a most natural way to build the foundations for computational complexity.

    1 Introduction

    Computational complexity provides mechanisms for classifying combinatorial problems and measuring the computational resources necessary to solve them. The discipline provides explanations of why certain problems have no practical solutions and provides a way of anticipating difficulties involved in solving problems of certain types. The classification is quantitative and is intended to investigate what resources are necessary (lower bounds) and what resources are sufficient (upper bounds) to solve various problems. This classification should not depend on a particular computational model but rather should measure the intrinsic difficulty of a problem. Precisely for this reason, as we will explain, the basic model of computation for our study is the multitape Turing machine. Computational complexity theory today addresses issues of contemporary concern, for example, parallel computation, circuit design, computations that depend on random number generators, and development of efficient algorithms. Above all, computational complexity is interested in distinguishing problems that are efficiently computable. Algorithms whose running times are n² in the size of their inputs can be implemented to execute efficiently even for fairly large values of n, but algorithms that require an exponential running time can be executed only for small values of n.
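The closing contrast in the excerpt above, quadratic versus exponential running time, can be made concrete with a few lines of arithmetic. The step counts below are purely illustrative and not tied to any particular algorithm:

```python
def steps(n):
    """Illustrative step counts for a quadratic vs. an exponential algorithm."""
    return n ** 2, 2 ** n

# Quadratic cost stays tractable while exponential cost explodes.
for n in (10, 20, 40, 60):
    poly, expo = steps(n)
    print(f"n={n:>2}  n^2={poly:>6,}  2^n={expo:>25,}")
```

At n = 60 the quadratic count is 3,600 steps while the exponential count already exceeds 10^18, which is the whole point of the efficiency distinction the excerpt draws.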
  • Introduction to the Theory of Complexity
    Introduction to the Theory of Complexity

    Daniel Pierre Bovet, Pierluigi Crescenzi

    The information in this book is distributed on an "As is" basis, without warranty. Although every precaution has been taken in the preparation of this work, the authors shall not have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.

    First electronic edition: June 2006

    Contents

    1 Mathematical preliminaries  1
      1.1 Sets, relations and functions  1
      1.2 Set cardinality  5
      1.3 Three proof techniques  5
      1.4 Graphs  8
      1.5 Alphabets, words and languages  10
    2 Elements of computability theory  12
      2.1 Turing machines  13
      2.2 Machines and languages  26
      2.3 Reducibility between languages  28
    3 Complexity classes  33
      3.1 Dynamic complexity measures  34
      3.2 Classes of languages  36
      3.3 Decision problems and languages  38
      3.4 Time-complexity classes  41
      3.5 The pseudo-Pascal language  47
    4 The class P  51
      4.1 The class P  52
      4.2 The robustness of the class P  57
      4.3 Polynomial-time reducibility  60
      4.4 Uniform diagonalization  62
    5 The class NP  69
      5.1 The class NP  70
      5.2 NP-complete languages  72
      5.3 NP-intermediate languages  88
      5.4 Computing and verifying a function  92
      5.5 Relativization of the P = NP conjecture  95
    6 The complexity of optimization problems  110
      6.1 Optimization problems  111
      6.2 Underlying languages  115
      6.3 Optimum measure versus optimum solution  117
      6.4 Approximability  119
      6.5 Reducibility and optimization problems  125
    7 Beyond NP  133
      7.1 The class coNP
  • Time-Space Tradeoffs for Satisfiability 1 Introduction
    Time-Space Tradeoffs for Satisfiability

    Lance Fortnow*, University of Chicago, Department of Computer Science, 1100 E. 58th St., Chicago, IL 60637

    Abstract. We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^{1+o(1)} time and n^{1-ε} space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for log-space uniform NC^1 circuits and branching programs. Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically with a small amount of time then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this work with a result of Nepomnjaščiĭ that shows that a nondeterministic computation of superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL and NP. We give some possibilities and limitations of this approach.

    1 Introduction

    Separating complexity classes remains the most important and difficult of problems in theoretical computer science. Circuit complexity and other techniques on finite functions have seen some exciting early successes (see [BS90]) but have yet to achieve their promise of separating complexity classes above logarithmic space. Other techniques based on logic and geometry also have given us separations only on very restricted models. We should turn back to a traditional separation technique: diagonalization.
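For a sense of what the bounds above are about, here is a naive satisfiability check, a sketch of our own and unrelated to the paper's proof techniques: it enumerates all 2^n assignments, so it uses little working space but exponential time. The DIMACS-style clause encoding is our assumption:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.

    `clauses` is a list of clauses; each clause is a list of nonzero ints,
    where k means variable k and -k means its negation (DIMACS-style).
    Exponential time, but only O(n) working space.
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals evaluates to True.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied by x1 = x2 = True.
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))   # -> True
print(brute_force_sat([[1], [-1]], 1))                  # -> False
```

Whether SAT admits algorithms that are simultaneously fast and space-frugal is exactly what tradeoff results of the kind in this excerpt rule out for specific time/space combinations.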