Diagonalization: Our Goal Is to Separate Interesting Complexity Classes


Diagonalization

Our goal: separate interesting complexity classes.
Question: could we somehow use diagonalization to resolve P vs. NP?

Essentially, diagonalization is a simulation of one TM by another: the simulating machine determines the behavior of the other machine and then behaves differently. It is the only general technique we have (e.g., for the hierarchy theorems).

Diagonalization relies on the following properties of Turing machines:
- A Turing machine can be represented by a string.
- A Turing machine can be simulated by another Turing machine without much overhead in time or space.
It therefore treats machines as black boxes: their internal workings don't matter.

Relativization

Here is some evidence that this won't work. An argument "relativizes" if it still goes through when you give the machines oracle access.
An oracle Turing machine M^A is a modified TM with the ability to query an oracle for a language A. It has a special tape called the oracle tape; when M^A writes a string on the oracle tape, it is informed in one step whether that string is in A.
What happens if you add an oracle? The simulation proceeds exactly as before, so if we could prove P ≠ NP by diagonalization, we could also prove P^A ≠ NP^A for every oracle A.

Observations:
- NP ⊆ P^SAT.
- coNP ⊆ P^SAT (because deterministic complexity classes are closed under complementation).
- NP^SAT contains languages we believe are not in NP. Example: { ⟨φ⟩ | φ is not a minimal boolean formula }.

Diagonalization and Relativization

Theorem (Baker-Gill-Solovay).
- There is an oracle B such that P^B = NP^B.
- There is an oracle A such that P^A ≠ NP^A.
So no relativizing argument, and in particular no straightforward diagonalization, can resolve P vs. NP.

SPACE: The Next Frontier

Space is quite different from time: space can be reused.
The space complexity of a Turing machine M is the maximum number of tape cells that M scans, as a function of the input length.
For a nondeterministic TM in which all branches halt, the space complexity is the maximum number of tape cells scanned on any branch of the computation, as a function of the input length.
As usual, we use asymptotic notation:
SPACE(f(n))  = { L | L is a language decided by an O(f(n))-space deterministic TM }
NSPACE(f(n)) = { L | L is a language decided by an O(f(n))-space nondeterministic TM }

Savitch's Theorem

Savitch's Theorem. Any nondeterministic TM that uses f(n) space can be converted to a deterministic TM that uses O(f(n)^2) space.

Idea: solve the yieldability problem. Given two configurations C and C' of the NTM, together with a number t, determine whether the NTM can get from C to C' in at most t steps. Then solve CanYield(c_start, c_accept, 2^{O(f(n))}) with a recursive algorithm that searches for an intermediate configuration:

    boolean CanYield(c1, c2, t) {
        if (t <= 1) return the correct answer directly
        foreach configuration c' {              // enumerate using a binary counter
            boolean x = CanYield(c1, c', t/2)
            boolean y = CanYield(c', c2, t/2)
            if (x and y) return true
        }
        return false
    }

    On input w: output the result of CanYield(c_start, c_accept, 2^{O(f(n))}).

Whenever CanYield invokes itself recursively, it stores the current stage number and the values of c1, c2, and t on the stack. Every level of recursion uses O(f(n)) space and the depth of the recursion is O(f(n)), so the total space used is O(f(n)^2).
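The following is a minimal, self-contained Python sketch of this recursion, assuming the NTM's configurations are given explicitly as nodes of a directed graph; the names succ and can_yield and the toy graph are illustrative assumptions, not from the lecture.

    # Sketch of Savitch-style reachability: is c2 reachable from c1 in at most t steps?
    # succ maps each configuration to the set of configurations reachable in one step.
    def can_yield(succ, c1, c2, t):
        if t <= 1:
            return c1 == c2 or c2 in succ.get(c1, set())
        # Search for an intermediate configuration reached roughly halfway.
        for mid in succ:                      # the slides enumerate these with a binary counter
            if can_yield(succ, c1, mid, t // 2) and can_yield(succ, mid, c2, t // 2):
                return True
        return False

    # Toy configuration graph: 0 -> 1 -> 2 -> 3
    succ = {0: {1}, 1: {2}, 2: {3}, 3: set()}
    print(can_yield(succ, 0, 3, 4))   # True
    print(can_yield(succ, 3, 0, 4))   # False

The point of the recursion is that only the current pair of endpoints and the step bound live on the stack at each level, which is what gives the quadratic space bound in the theorem.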
PSPACE

P: decision problems solvable in polynomial time.
PSPACE: decision problems solvable in polynomial space.
EXPTIME: decision problems solvable in exponential time.

Binary counter. To count from 0 to 2^n - 1 in binary, use an n-bit odometer.

Claim. 3-SAT is in PSPACE.
Pf. Enumerate all 2^n possible truth assignments using the counter, and check each assignment to see whether it satisfies all clauses.

Theorem. NP ⊆ PSPACE.
Pf. Consider an arbitrary problem Y in NP. Since Y ≤_P 3-SAT, there is an algorithm that solves Y in polynomial time plus a polynomial number of calls to a 3-SAT black box, and the black box can be implemented in polynomial space.

Relationships: P ⊆ NP ⊆ PSPACE = NPSPACE.

Quantified Satisfiability

QSAT. Let Φ(x1, …, xn) be a boolean CNF formula. Is the following quantified propositional formula true (assume n is odd)?

    ∃x1 ∀x2 ∃x3 ∀x4 … ∀x_{n-1} ∃x_n  Φ(x1, …, xn)

Intuition. Amy picks a truth value for x1, then Bob picks one for x2, then Amy for x3, and so on. Can Amy satisfy Φ no matter what Bob does?

Ex. (x1 ∨ ¬x2) ∧ (x2 ∨ ¬x3) ∧ (¬x1 ∨ ¬x2 ∨ x3)
Yes. Amy sets x1 true; Bob sets x2; Amy sets x3 to be the same as x2.

Ex. (x1 ∨ x2) ∧ (¬x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3)
No. If Amy sets x1 false, Bob sets x2 false and Amy loses; if Amy sets x1 true, Bob sets x2 true and Amy loses.

Theorem. QSAT ∈ PSPACE.
Pf. Recursively try all possibilities: branch on x1 = 0 and x1 = 1 and return true iff either subproblem is true (x1 is existentially quantified); at the next level branch on x2 = 0 and x2 = 1 and return true iff both subproblems are true (x2 is universally quantified); and so on, down to the leaves, which are the complete assignments (0, 0, …, 0) through (1, 1, …, 1). Only one bit of information is needed from each subproblem, so the amount of space used is proportional to the depth of the function-call stack.
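Below is a minimal Python sketch of this recursion, under the assumption that the CNF is given as a list of clauses with signed variable indices (e.g. -2 for ¬x2) and that odd-indexed variables are existential and even-indexed ones universal; the names eval_qsat and satisfies are illustrative, not from the lecture.

    # Sketch of the PSPACE algorithm for QSAT: evaluate Ex1 Ax2 Ex3 ... Phi(x1..xn).
    def satisfies(clauses, assignment):
        # Check a complete assignment (dict: variable index -> bool) against every clause.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    def eval_qsat(clauses, n, i=1, assignment=None):
        if assignment is None:
            assignment = {}
        if i > n:
            return satisfies(clauses, assignment)
        results = []
        for value in (False, True):
            assignment[i] = value
            results.append(eval_qsat(clauses, n, i + 1, assignment))
        del assignment[i]
        # Odd i: existential (Amy), need either branch; even i: universal (Bob), need both.
        return any(results) if i % 2 == 1 else all(results)

    # First example above: (x1 v ~x2) ^ (x2 v ~x3) ^ (~x1 v ~x2 v x3)
    clauses = [[1, -2], [2, -3], [-1, -2, 3]]
    print(eval_qsat(clauses, 3))   # True: Amy has a winning strategy

Only the partial assignment and one bit per quantifier level live on the stack, which matches the polynomial-space claim.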
Planning Problem

Conditions. A set C = { C1, …, Cn }.
Initial configuration. A subset c0 ⊆ C of conditions that are initially satisfied.
Goal configuration. A subset c* ⊆ C of conditions we seek to satisfy.
Operators. A set O = { O1, …, Ok }. To invoke operator Oi, certain prerequisite conditions must be satisfied; after invoking Oi, certain conditions become true and certain conditions become false.

PLANNING. Is it possible to apply a sequence of operators to get from the initial configuration to the goal configuration?

Examples. Many puzzles, such as the 15-puzzle and Rubik's cube; logistical operations that move people, equipment, materials, and robots (software or hardware).

Planning Problem: Binary Counter

Planning example. Can we increment an n-bit counter from the all-zeroes state to the all-ones state?
Conditions. C1, …, Cn, where Ci corresponds to bit i being 1.
Initial state. c0 = ∅ (all 0s). Goal state. c* = { C1, …, Cn } (all 1s).
Operators. O1, …, On. To invoke Oi, conditions C1, …, C_{i-1} must hold (the i-1 least significant bits are 1). After invoking Oi, condition Ci becomes true (bit i is set to 1) and conditions C1, …, C_{i-1} become false (the i-1 least significant bits are reset to 0).
Solution. ∅ → {C1} → {C2} → {C1, C2} → {C3} → {C1, C3} → …
Observation. Any solution requires at least 2^n - 1 steps.

Planning Problem: In Exponential Time

Configuration graph G. Include a node for each of the 2^n possible configurations, and an edge from configuration c' to configuration c'' if one of the operators can convert c' to c''. PLANNING asks: is there a path from c0 to c* in the configuration graph?

Claim. PLANNING is in EXPTIME.
Pf. Run BFS to find a path from c0 to c* in the configuration graph.
Note. The configuration graph can have 2^n nodes, and the shortest path can have length 2^n - 1.

Planning Problem: In Polynomial Space

Theorem. PLANNING is in PSPACE.
Pf. Same idea as the proof of Savitch's theorem. If there is a path from c1 to c2 of length L, then there is a path from c1 to some midpoint and from that midpoint to c2, each of length at most L/2 (rounded up). Enumerate all possible midpoints and apply the idea recursively; the depth of the recursion is log2 L.

    boolean hasPath(c1, c2, L) {
        if (L <= 1) return the correct answer directly
        foreach configuration c' {              // enumerate using a binary counter
            boolean x = hasPath(c1, c', L/2)
            boolean y = hasPath(c', c2, L/2)
            if (x and y) return true
        }
        return false
    }

PSPACE-Completeness

PSPACE. Decision problems solvable in polynomial space.
PSPACE-Complete. A problem Y is PSPACE-complete if (i) Y is in PSPACE and (ii) for every problem X in PSPACE, X ≤_P Y.

Why polynomial reducibility? Think about what it means for a problem to be complete for a complexity class: it is one of the hardest problems in the class, and every other problem in the class is *easily* reduced to it, so the reduction must be easy relative to the complexity of typical problems in the class.

Theorem. [Stockmeyer-Meyer 1973] QSAT is PSPACE-complete.

Theorem. PSPACE ⊆ EXPTIME.
Pf. The previous algorithm solves QSAT in exponential time, and QSAT is PSPACE-complete, so every problem in PSPACE reduces to QSAT in polynomial time and can therefore be solved in exponential time.

Summary. P ⊆ NP ⊆ PSPACE ⊆ EXPTIME. It is known that P ≠ EXPTIME, but it is unknown which inclusion is strict; it is conjectured that all of them are.

PSPACE-Complete Problems

More PSPACE-complete problems:
- Competitive facility location.
- Natural generalizations of games: Othello, Hex, Geography, Rush Hour, Instant Insanity, Shanghai, go-moku, Sokoban.
- Various motion-planning and search problems.

Competitive Facility Location

Input. A graph with positive weights on its nodes, and a target B.
Game. Two competing players alternate in selecting nodes. A player is not allowed to select a node if any of its neighbors has already been selected.
COMPETITIVE-FACILITY. Can the second player guarantee at least B units of profit?

Claim. COMPETITIVE-FACILITY is PSPACE-complete.
Pf. It is in PSPACE: as for QSAT, a recursive game-tree search needs only space proportional to the depth of the game. To show that it is complete, we show that QSAT polynomial-time reduces to it: given an instance of QSAT, we construct an instance of COMPETITIVE-FACILITY such that player 2 can force a win iff the QSAT formula is true.

Construction. Given an instance Φ(x1, …, xn) = C1 ∧ C2 ∧ … ∧ Ck of QSAT (assume n is odd):
- Include a node for each literal and its negation, and connect the two, so that at most one of xi and ¬xi can be chosen.
- Choose c ≥ k + 2, put weight c^i on both xi and ¬xi, and set B = c^{n-1} + c^{n-3} + … + c^4 + c^2 + 1. This ensures the variables are selected in order xn, x_{n-1}, …, x1.
- As is, player 2 will lose by 1 unit: it can collect only c^{n-1} + c^{n-3} + … + c^4 + c^2.
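As a concrete illustration of the "in PSPACE" part of the claim, here is a minimal Python sketch of the game-tree search for COMPETITIVE-FACILITY, assuming the game ends as soon as the player to move has no legal node left; the graph encoding (weight and adj dictionaries) and the function names are illustrative assumptions, not from the lecture.

    # Sketch of a polynomial-space decision procedure for COMPETITIVE-FACILITY:
    # can the second player guarantee at least B units of profit?
    # weight: node -> value, adj: node -> set of neighboring nodes.
    def second_player_wins(weight, adj, B):
        nodes = set(weight)

        def legal(v, chosen):
            # A node may be selected only if neither it nor any neighbor is taken.
            return v not in chosen and not (adj[v] & chosen)

        def play(chosen, p2_profit, player):
            moves = [v for v in nodes if legal(v, chosen)]
            if not moves:                       # game over: did player 2 reach the target?
                return p2_profit >= B
            if player == 2:
                # Player 2 wins if some move leads to a position it still wins.
                return any(play(chosen | {v}, p2_profit + weight[v], 1) for v in moves)
            # Player 1 moves; player 2 must survive every possible reply.
            return all(play(chosen | {v}, p2_profit, 2) for v in moves)

        return play(frozenset(), 0, 1)          # player 1 moves first

    # Tiny example: a path a - b - c with weights 1, 5, 1 and target B = 2.
    weight = {"a": 1, "b": 5, "c": 1}
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    print(second_player_wins(weight, adj, 2))   # False: player 1 takes b, blocking both a and c

Each level of the recursion stores only the set of chosen nodes and player 2's running profit, so the space used is polynomial in the size of the graph, mirroring the argument for QSAT.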