Polynomial Time: n^c → (2n)^c = 2^c n^c


Polynomial Time: n^c → (2n)^c = 2^c n^c
CSE 105 Theory of Computation (http://www.jflap.org/jflaptmp/)
Professor Jeanne Ferrante

Today's Agenda
• P and NP (7.2, 7.3)
• Next class: Review

Reminders and announcements:
• CAPE & TA evals are open: please participate!
• Final Exam: Sat Jun 4, 11:30 am – 2:29 pm in WLH 2001
• Seat assignments will be out soon; last chance to request a left-handed seat is 11:59 pm today
• You may bring your own 5 in by 8 in index card (both sides) to the exam (no magnifying aids!!)

Time Complexity

Def. Let M be a deterministic TM that always halts. The running time or time complexity of M is the function f: ℕ → ℕ where f(n) is the maximum number of steps of M on any input of length n.

Let M be a nondeterministic TM that always halts. The running time or time complexity of M is the function f: ℕ → ℕ where f(n) is the maximum number of steps of M on any branch of the computation, on any input of length n.

Running times of decider TMs

[Figure: two computation trees from start state q0 to accepting/rejecting states. Deterministic: f(n) = maximum number of steps on any input of length n. Nondeterministic: f(n) = maximum number of steps on any branch, on any input of length n.]

Polynomial vs Exponential DTIME
• Polynomial running time: n^c for some constant c
• Exponential running time: 2^(n^δ) for some real number δ > 0

If we double the input size from n to 2n:
• Polynomial time: n^c → (2n)^c = 2^c n^c
• Exponential time: 2^(n^δ) → 2^((2n)^δ) = 2^(2^δ n^δ) = (2^(n^δ))^(2^δ)
  » Quickly explodes in size

Review: Big-O Notation

f(n) = 4n log(n) + 3n^2 + 3n + 10 is equal to which of the following in Big-O notation?
A. O(n^3)
B. O(n log(n))
C. O(n^2)
D. None or more than one of the above

Time Complexity Classes

Def. Let t: ℕ → ℝ⁺ be a function. The deterministic time complexity class DTIME(t(n)) is the collection of languages L that are decidable by an O(t(n))-time deterministic, single-tape TM. The nondeterministic time complexity class NTIME(t(n)) is the collection of languages L that are decidable by an O(t(n))-time nondeterministic, single-tape TM.

DTIME(n^2) ⊂ DTIME(n^3)?
A. TRUE
B. FALSE
C. Don't Know

Here, model resources matter!
• Every t(n)-time deterministic multitape TM has an equivalent O(t(n)^2)-time deterministic single-tape TM.
• Every t(n)-time nondeterministic single-tape TM has an equivalent 2^(O(t(n)))-time deterministic single-tape TM.
(Where t(n) ≥ n.)

Example TM: M1

M1 = "On input w:
1. Scan the input from left to right to check whether it is of the form {0,1}*#{0,1}*. If not, reject. If the input consists of only #, accept.
2. Return the head to the left-hand end of the tape.
3. Zig-zag across the tape, checking that the first unmarked symbol to the left of the # is the same as the corresponding unmarked symbol following the #. If the corresponding symbols do not match, or there is no unmarked symbol left after the #, reject. Otherwise, if the symbols match, mark them and continue.
4. If all symbols to the left of the # have been marked, check for unmarked symbols after the #. If any unmarked symbols remain to the right of the #, reject; if none are found, accept."

L(M1) is in
A. DTIME(n)
B. DTIME(n^2)
C. DTIME(n log n)
D. None or more than 1 of the above
(A simulation sketch of M1 follows.)
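Each zig-zag pass in stage 3 costs on the order of the full input length, and there can be one pass per symbol to the left of the #, which is what drives the clicker question above. The Python sketch below is my own illustration, not code from the lecture: it mimics M1's marking strategy on the language {w#w | w ∈ {0,1}*} that M1 decides, and tallies an approximate head-move count so you can watch how the count grows as the input doubles.

```python
# Sketch only: a Python stand-in for the TM M1, assuming L(M1) = {w#w | w in {0,1}*}.
# It follows M1's stages and charges one full sweep of the tape per zig-zag pass,
# so the step counter grows roughly quadratically in the input length.

def m1_steps(tape: str) -> tuple[bool, int]:
    """Return (accepted?, approximate number of simulated head moves)."""
    # Stage 1: one left-to-right scan checking the shape {0,1}*#{0,1}*.
    steps = len(tape)
    if any(c not in "01#" for c in tape) or tape.count("#") != 1:
        return False, steps
    left, right = tape.split("#")
    if left == "" and right == "":
        return True, steps                 # input is just "#": accept
    marked_right = [False] * len(right)
    # Stage 3: one zig-zag pass per symbol of `left`, matching it against `right`.
    for i in range(len(left)):
        steps += 2 * len(tape)             # a full crossing and return per pass
        if i >= len(right) or left[i] != right[i]:
            return False, steps
        marked_right[i] = True
    # Stage 4: final scan for unmarked symbols to the right of '#'.
    steps += len(tape)
    return all(marked_right), steps

if __name__ == "__main__":
    for n in (8, 16, 32, 64):
        w = "01" * (n // 2)
        accepted, steps = m1_steps(w + "#" + w)
        print(f"input length {2 * n + 1:3d}: accept={accepted}, ~{steps} steps")
```

Doubling the input length roughly quadruples the simulated step count, consistent with a quadratic (and hence polynomial) bound.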
CLASS P (DETERMINISTIC POLYNOMIAL TIME)
(Most useful!)

P = ∪_k DTIME(n^k)
• P is the class of languages that can be decided in polynomial time on a deterministic, single-tape TM.
• Encodings in P
  – Need polynomial-time encodings
  – Graph encodings: analyze graph problems in the number n of nodes; this is still polynomial in the size of the graph
• Can't use nondeterminism
• Can use multiple tapes
• A brute-force approach may not be enough

Examples in Class P
• PATH = {<G,s,t> | G is a directed graph with n nodes and a directed path from node s to node t}
• RELPRIME = {<x,y> | x and y are relatively prime}
  – Use the Euclidean Algorithm to show membership in P (sketched below)
• Every CFL {w | w is generated by CFG G}
  – Use dynamic programming to show membership in P
• Most sorting algorithms
To be in P, avoid brute-force searches!

CLASS NP (NONDETERMINISTIC POLYNOMIAL TIME)
(May be exponential time deterministically!)

Class NP = ∪_k NTIME(n^k)
• For many problems, the only known solutions amount to brute-force search.
• NP is the class of languages that can be decided in polynomial time on a nondeterministic TM.
• For many problems in NP, the best known deterministic algorithms take exponential time.
• P ⊆ NP!
• Outstanding open problem in CS: P = NP?
• Prevailing opinion: P ≠ NP

Travelling Salesperson Problem in NP
• Input: encodings of
  – a collection {C1, …, Cn} of cities,
  – a distance k,
  – a distance matrix D(i,j): the distance between Ci and Cj
• Problem: Is there a tour of all cities with total distance less than k?
• Deterministically enumerating all tours to check their distances is exponential.
• The best known algorithms are exponential – e.g. dynamic programming.
• In NP:
  – Nondeterministically guess a tour including all cities.
  – Checking whether a given tour has distance less than k can be done in deterministic polynomial time.

Another Example in NP

SAT = {<E> | E is a satisfiable Boolean expression}
• A Boolean expression has operators AND, OR and NOT, and Boolean variables which take the value 0 (false) or 1 (true).
• A Boolean expression is satisfiable if there is some assignment to its variables which makes the expression evaluate to 1 (true).
  – x AND (y OR z) is satisfiable, e.g. when x = 1, y = 1, z = 0
  – x AND (NOT x) is never satisfiable

Decider for SAT

"Given <E> of size n:
1. Nondeterministically guess an assignment of values to the variables in E.
2. Check whether the assignment satisfies E; if so, accept."

SAT is in
A. DTIME(n^c), c a constant
B. DTIME(2^(n^c)), c a constant
C. NTIME(n^c), c a constant
D. None or more than one of the above

Another Example in NP

CLIQUE = {<G,k> | G is an undirected graph with a clique of size k}
• A clique is a subgraph of G in which every 2 nodes are connected by an edge.
• Deterministically enumerating all subsets of k nodes and checking whether every pair is joined by an edge is exponential.

Checking whether a given set of k nodes forms a clique of size k can be done in deterministic polynomial time (see the sketch below).
A. True
B. False
C. Don't Know

Summary
• Problems in P: PATH, any CFL, RELPRIME, ADDITION, MULTIPLICATION, …
• Problems in NP: any problem in P, Traveling Salesperson, SAT, CLIQUE, …

DOES P = NP or P ⊊ NP???

[Figure: nested language classes: Regular ⊂ CFL ⊂ P ⊆ NP (?) ⊆ Decidable.]

P = the class of languages for which membership can be decided in deterministic polynomial time
NP = the class of languages for which membership can be checked in deterministic polynomial time
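To back up the RELPRIME bullet above, here is a minimal Python sketch (my own illustration, not the lecture's code) of the Euclidean algorithm. The number of iterations is logarithmic in the operands, hence polynomial in the length of the binary encoding of <x, y>, which is why the slide can place RELPRIME in P.

```python
# A minimal sketch, not the lecture's code: deciding
# RELPRIME = {<x, y> | x and y are relatively prime} with the Euclidean algorithm.
# The loop runs O(log min(x, y)) times, so the running time is polynomial in the
# length of the binary encoding of the input pair.

def relprime(x: int, y: int) -> bool:
    """Return True iff gcd(x, y) = 1."""
    x, y = abs(x), abs(y)
    while y != 0:
        x, y = y, x % y        # one Euclidean step: replace (x, y) by (y, x mod y)
    return x == 1              # the final x is gcd(x, y)

if __name__ == "__main__":
    print(relprime(35, 18))    # True:  gcd(35, 18) = 1
    print(relprime(24, 18))    # False: gcd(24, 18) = 6
```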
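For the CLIQUE question above, the contrast is between searching and checking: enumerating all size-k subsets is exponential, but checking one proposed subset takes only O(k^2) edge lookups. The Python sketch below is a hedged illustration; the edge-set representation and the example graph are my own choices, not anything fixed by the lecture.

```python
# Hedged sketch: verifying a proposed k-clique. The graph is represented as a set
# of frozenset edges (my own choice of encoding). Checking all pairs of the
# candidate costs O(k^2) membership tests, i.e. deterministic polynomial time,
# while searching over all size-k subsets would be exponential.

from itertools import combinations

def is_k_clique(edges: set, candidate: set, k: int) -> bool:
    """Return True iff `candidate` is a set of k nodes that are pairwise adjacent."""
    if len(candidate) != k:
        return False
    return all(frozenset(pair) in edges for pair in combinations(candidate, 2))

if __name__ == "__main__":
    # A triangle on nodes 1, 2, 3 plus a pendant edge to node 4.
    edges = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4)]}
    print(is_k_clique(edges, {1, 2, 3}, 3))   # True: a 3-clique
    print(is_k_clique(edges, {2, 3, 4}, 3))   # False: the edge {2, 4} is missing
```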
Advances in P = NP?
• Stephen Cook won the Turing Award (1982) for his 1971 paper that "laid the foundations for the theory of NP-Completeness."
• Intuitively: a problem X in NP is NP-complete if showing that X is in P would guarantee that every problem in NP is in P.
• Cook–Levin Theorem: SAT is NP-complete.
• So if SAT is in P, then P = NP!

Polynomial Time Computable Functions

A function f: Σ* → Σ* is polynomial time computable if some deterministic polynomial time TM M exists that, given a string w in Σ* as input, halts with just f(w) on its tape.
– Note that f need not be 1-1 or onto.
– M must always halt.
– f can be any function a TM M can compute in deterministic polynomial time!

The following TM M computes a polynomial time computable function: M = "On input w, make a second copy of w immediately after the input, and halt."
A. True
B. False
C. Don't Know

Polynomial Time Reducibility: A ≤_p B

Language A is polynomial time reducible to language B, written A ≤_p B, if there is a polynomial time computable function f: Σ* → Σ* such that for any w in Σ*:
    w ∈ A IFF f(w) ∈ B
This turns the question of whether w is in A into the question of whether f(w) is in B, in polynomial time.
– Note that if w is NOT in A, then f(w) is NOT in B.

"If A ≤_p B and B ≤_p C, then A ≤_p C" is
A. TRUE
B. FALSE
C. DON'T KNOW

Th. 7.31: If A ≤_p B and B is in P, then A is in P.

Proof: Let M be the polynomial time decider for B, and let f be the polynomial time reduction from A to B. We define a polynomial time decider N for A (a sketch of N appears at the end of these notes):

N = "On input w:
1. Compute f(w).
2. Run M on input f(w); if M accepts, accept; if M rejects, reject."

Correctness: w is in A IFF M accepts f(w) IFF N accepts w, because f is a reduction and by the construction of N. N runs in polynomial time because steps 1 and 2 both run in polynomial time (step 2 because |f(w)| is polynomial in the length of w, and M then runs in polynomial time on input f(w)).

NP-Completeness

A language B is NP-Complete if
1. B is in NP, and
2. every A in NP is polynomial time reducible to B (A ≤_p B).

Th 7.35: If B is NP-complete and B is in P, then P = NP.

Proof: If B is NP-complete, then by definition, for every A in NP, A ≤_p B. Since B is in P, Theorem 7.31 gives that every such A is in P, so NP ⊆ P; combined with P ⊆ NP, this means P = NP.
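As promised in the proof of Th. 7.31, here is a small Python sketch of the composed decider N. Everything concrete in it is an invented stand-in for illustration only: A is "strings of even length", B is "strings that start with 1", f is a toy polynomial-time reduction from A to B, and M is a toy polynomial-time decider for B; none of these come from the lecture.

```python
# Sketch of the decider N from the proof of Th. 7.31. The concrete A, B, f and M
# below are invented stand-ins: A = strings of even length, B = strings beginning
# with '1', f is a toy polynomial-time reduction, M is a toy decider for B.

from typing import Callable

def compose_decider(f: Callable[[str], str],
                    M: Callable[[str], bool]) -> Callable[[str], bool]:
    """Build N: on input w, compute f(w), then run M on f(w)."""
    def N(w: str) -> bool:
        return M(f(w))                 # w in A  iff  f(w) in B  iff  M accepts f(w)
    return N

if __name__ == "__main__":
    f = lambda w: ("1" if len(w) % 2 == 0 else "0") + w   # toy reduction f
    M = lambda v: v.startswith("1")                        # toy decider for B
    N = compose_decider(f, M)
    print(N("abab"))   # True:  "abab" has even length, so it is in A
    print(N("aba"))    # False: "aba" has odd length, so it is not in A
```

The design point is just function composition: N inherits a polynomial running time because both f and M run in polynomial time, exactly as argued in the proof.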