IIT Kharagpur Theory of Computation: CS41001 Lecture IX


Computer Science & Engineering Department, IIT Kharagpur
Instructor: Goutam Biswas
Autumn Semester 2014-2015

0.1 Hierarchy Theorems

Space (the number of tape cells) and time (the number of computation steps) are the two main measures of the cost of a Turing machine computation. If we increase the available space or time as a function of the input size, the class of problems that can be solved also grows. For example, one expects the class of languages decidable in linear space (space usage linearly dependent on the input size) to be larger than the class of problems decidable in logspace. Claims of this type are formally established by the space and time hierarchy theorems. The proofs of these theorems use diagonalisation in a more sophisticated way.

Theorem 1. (Space Hierarchy Theorem) Let s : N0 → N0 be a space-constructible function with s(n) ≥ log n. There is a language L decidable in space O(s(n)), but not in space o(s(n)). In other words, if s, s′ : N0 → N0 are space-constructible functions with s′(n) = o(s(n)), then SPACE(s′(n)) is a proper subset of SPACE(s(n)).

Proof: We need a language L with two properties: (i) L is decided in space O(s(n)), but (ii) L is not decided in space o(s(n)). We use diagonalisation to establish that L is different from every language decided in o(s(n)) space. We design an O(s(n)) space bounded Turing machine D such that L(D) is different from every L(M), where M is an o(s(n)) space bounded Turing machine. The language L(D) differs from each L(M) on at least one string, and the string on which D differs is related to <M>.

The steps of D are roughly as follows; they will be refined below.

1. If the input x ∈ {0,1}* is not a description of a Turing machine, then D rejects it.
2. The function s(n) is space constructible. D computes s(n) and puts a right marker after s(n) cells, where x = <M> and n = |x| = |<M>|.
3. D simulates M on <M> within the space bound s(|<M>|). D accepts <M> if and only if M rejects <M>.
4. If M does not halt within 2^{s(|<M>|)} time steps (s(|<M>|) bits suffice to count them), D rejects the input.
5. If the simulation of M goes beyond s(n) cells, M is not o(s(n)) space bounded, and D may accept or reject.

There are a few important technical details to take into consideration.

1. Even if M runs in o(s(n)) space, that is only an asymptotic bound: for every c > 0 there is an n0 ∈ N such that the space used is < c·s(n) for all n ≥ n0. For small input lengths the space usage may exceed s(n). In such a situation D's space may be insufficient to complete the simulation of M on <M>. D may then reject <M> without knowing the outcome of M's computation on <M>, and the diagonalisation may fail. One trick to avoid this problem is to also consider inputs of the form x = <M>10*. D simulates M on <M>10^k, which can be made arbitrarily long, so the problem of 'small length' strings is avoided for some value of k at which the asymptotic behaviour has set in.

Following is the O(s(n)) space decider D for L(D). We argue that L(D) cannot be decided by any o(s(n)) space decider.

D: input x
1. If x is not of the form <M>10*, reject x.
2. Compute s(|x|) and mark off s(|x|) tape cells. If any move of the simulation goes beyond this mark, reject x.
3. Prepare a counter of s(|x|) bits and initialise it to 0.
4. Simulate M on x, incrementing the counter at every step.
5. If the counter exceeds 2^{s(n)}, reject x.
6. If M accepts x, reject x; else accept x.

Note that the alphabet of M may be larger than that of D, so one cell of M may require k cells of D in the simulation. Hence, if M runs in s′(n) space, its simulation on D uses k·s′(n) space.

We now prove that L(D) cannot be decided in o(s(n)) space. For the sake of contradiction, assume that L(D) is decided in space s′(n) = o(s(n)) by some machine M. The simulation of M by D runs in k·s′(n) space.
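The diagonal structure of D can be sketched with ordinary Python functions standing in for Turing machines. This is only an illustration of the flipping argument, not a TM simulator: it ignores space bounds and the encoding <M>10^k, and all names (make_decider, the sample machines) are hypothetical.

```python
# Toy diagonalisation: "machines" are Python predicates on numbers, and
# the decider D answers the opposite of machine i on its own index i.

def make_decider(machines):
    """Return the diagonal decider D over a list of 'machines'."""
    def D(i):
        # D "simulates" machine i on its own description (here: index i)
        # and flips the answer, so L(D) differs from L(machines[i]) at i.
        return not machines[i](i)
    return D

# A few sample "o(s(n))-bounded machines", each deciding some language.
machines = [
    lambda n: n % 2 == 0,   # M0: even numbers
    lambda n: n > 10,       # M1: numbers greater than 10
    lambda n: True,         # M2: everything
]

D = make_decider(machines)

# For every i, D differs from machine i on input i.
for i, M in enumerate(machines):
    assert D(i) != M(i)
```

The real proof has to work harder precisely because D cannot always finish the simulation within its space bound, which is what the padding <M>10^k repairs.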
From the definition of o(s(n)), there is an n0 such that for all n ≥ n0, k·s′(n) < s(n). So for an input x of length ≥ n0, e.g. x = <M>10^{n0}, the simulation of M on x will halt, and D will accept x if and only if M rejects it. So L(D) is not L(M), a contradiction. QED.

If s, s′ : N0 → N0 are such that s′(n) = o(s(n)) and s(n) is space constructible, then SPACE(s′(n)) ⊂ SPACE(s(n)). So we have L ⊂ PSPACE. Also NL ⊆ SPACE((log n)^2) by Savitch's theorem, and by the space hierarchy theorem, SPACE((log n)^2) ⊂ SPACE(n) ⊂ PSPACE.

Theorem 2. (Time Hierarchy Theorem) For any time-constructible function t : N0 → N0 with t(n) ≥ n, there is a language L that is decidable in O(t(n)) time but not decidable in o(t(n)/log t(n)) time [on a single-tape machine]. In other words, if t, t′ : N0 → N0 are time-constructible functions such that t′(n) = o(t(n)/log t(n)), then DTIME(t′(n)) ⊂ DTIME(t(n)).

The idea is essentially the same. We design a Turing machine D that rejects any input not of the form <M>10*. The machine D simulates M on <M>10^k for no more than t(n)/log t(n) steps. D accepts <M>10^k if and only if M rejects it within t(n)/log t(n) time. The overhead of log t(n) for the simulation of M comes from the fact that no method better than counting is known for keeping track of the number of simulated steps of M.

Proof:
D: input x
1. If x is not of the form <M>10*, reject it.
2. Compute t(n) and log t(n) and initialise a counter of size O(log t(n)) to t(n)/log t(n).
3. If the counter is 0, reject.
4. Simulate the next step of M on x.
5. If M accepts x, then reject; if M rejects x, then accept.
6. Decrement the counter.
7. Go to step 3.

We argue that steps 4, 5 and 6 can be performed in O(t(n)) total time. We assume that D is a single-tape machine.
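The budgeted loop of steps 2-7 can be sketched in Python. The step function stands in for a single-step simulation of M, and all helper names (budgeted_decide, make_machine) are hypothetical illustrations, not part of the construction.

```python
import math

# Sketch of D's time-budgeted simulation loop (steps 2-7).
# A "machine" is modelled by a step function: each call performs one
# simulated step and returns None (still running), True (accept) or
# False (reject).

def budgeted_decide(step_fn, n, t):
    """Run step_fn for at most t(n)/log t(n) steps and flip the answer."""
    budget = t(n) // max(1, int(math.log2(t(n))))  # counter of O(log t(n)) bits
    while budget > 0:                              # step 3
        outcome = step_fn()                        # step 4
        if outcome is True:                        # step 5: M accepts -> reject
            return False
        if outcome is False:                       # step 5: M rejects -> accept
            return True
        budget -= 1                                # step 6
    return False                                   # counter reached 0: reject

def make_machine(run_time, accepts):
    """A toy machine that halts after run_time steps with the given answer."""
    state = {"steps": run_time}
    def step():
        state["steps"] -= 1
        return accepts if state["steps"] <= 0 else None
    return step

t = lambda n: n * n
# A machine halting inside the budget has its answer flipped;
# a machine running too long is rejected outright.
assert budgeted_decide(make_machine(5, True), 100, t) is False
assert budgeted_decide(make_machine(5, False), 100, t) is True
assert budgeted_decide(make_machine(10**9, True), 100, t) is False
```

The Python loop decrements an integer for free; on a single-tape TM the same decrement costs O(log t(n)) moves per simulated step, which is exactly the overhead analysed next.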
If we keep the step counter (of size log(t(n)/log t(n)) = O(log t(n))) and the information about the state, head position, and transition table of M (all of constant size for a fixed M) far away from the head position of D, the simulation cost increases. So after every move of M, D carries all this required information along near its head position on separate tracks; this costs O(log t(n)) time per simulated step. Hence the simulation of t(n)/log t(n) steps of M takes O(t(n)) steps of D.

The tape of D has more than one track: (i) the first track holds the content of M's tape; (ii) the second track stores the current state and the transition function of M; (iii) the third track holds the step counter. The contents of the second and third tracks are kept close to the head position of D on the first track, and after every move this information is shifted. The length of the content of track two is fixed for a particular M, so shifting it requires a fixed number of steps per simulated step of M. The length of the counter on track three is O(log t(n)), and all its bits are shifted by at most one cell after every move, which requires O(log t(n)) moves.

It remains to prove that L(D) cannot be decided in o(t(n)/log t(n)) time. Suppose, for contradiction, that some Turing machine M decides L(D) in time g(n) = o(t(n)/log t(n)). D simulates M in c·g(n) steps (not counting the simulation step counter), where c is a constant. Since g(n) = o(t(n)/log t(n)), there is an n0 such that for all n > n0, c·g(n) < t(n)/log t(n). So for input <M>10^n with n > n0, the simulation runs to completion and D's decision is the opposite of M's decision. So L(D) cannot be the same as L(M), a contradiction. QED.

Example 1. We prove a simpler statement: DTIME(n) ⊂ DTIME(n^{1.5}). Consider the following Turing machine:

D: input x
1. If x is not of the form <M>10*, then reject.
2. Simulate M on x for |x|^{1.4} steps.
3. If M does not halt within these steps, then reject.
4. If M accepts, then reject; if M rejects, then accept.
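The example works because every linear running time c·n is eventually dominated by the budget |x|^{1.4}, while the budget itself stays below the n^{1.5} time available to D. A quick numeric check (an illustration, not part of the proof; the helper name threshold is hypothetical) makes both facts concrete:

```python
# For any constant c, c*n < n**1.4 for all large enough n, so D's
# simulation of a DTIME(n) machine runs to completion on long inputs,
# while the budget n**1.4 is itself o(n**1.5).

def threshold(c):
    """Smallest n with c*n < n**1.4 (exists since n**0.4 grows unboundedly)."""
    n = 1
    while c * n >= n ** 1.4:
        n += 1
    return n

n0 = threshold(10)
# Past the threshold, the budget dominates the machine's running time.
assert all(10 * n < n ** 1.4 for n in range(n0, n0 + 1000))

# And the budget stays within D's own O(n**1.5) allowance.
assert all(n ** 1.4 < n ** 1.5 for n in range(2, 10000))
```

The choice of exponent 1.4 is the usual slack: it is large enough to swallow c·n asymptotically, yet small enough that the simulation (with its counting overhead) fits in O(n^{1.5}) time.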