NP-Completeness: Concepts


••• Why Study NP-Completeness?

♣ Pursuing your Ph.D.   ♣ Keeping your job

Before studying NP-completeness: "I can't find an efficient algorithm, I guess I'm just too dumb."

After studying NP-completeness: "I can't find an efficient algorithm, because no such algorithm is possible!" or "I can't find an efficient algorithm, but neither can all these famous people."

••• Measure of Time Complexity

The discussion of NP-completeness measures the time complexity of an algorithm in terms of l, the input size, i.e., the total length of all inputs to the algorithm. Two assumptions: (1) all inputs are integers (a rational number can be represented by a pair of integers); (2) each integer is given in binary representation.

Ex. Sorting a1, a2, …, an. Here l = ∑_{i=1}^{n} (⌊log_2 ai⌋ + 1).

Ex. Consider the following procedure.

  input(n);
  s ← 0;
  for i ← 1 to n do s ← s + i;
  output(s).

Here l = ⌊log_2 n⌋ + 1, so the procedure takes O(n) = O(2^l) time ⇒ an exponential-time algorithm!

••• Polynomial-Time Algorithms vs. Exponential-Time Algorithms

Suppose that your computer takes 1 second to perform 10^6 operations. Comparing the time required to perform f(n) operations, for f(n) = n, n^2, n^3, n^5, 2^n, 3^n and n = 10, 20, 30, 40, 50, 60, shows that the polynomial functions stay manageable while 2^n and 3^n quickly reach astronomical running times. Comparing the largest value of n for which f(n) operations can be performed in 1 hour on a faster computer shows that extra speed raises n by a multiplicative factor for polynomial f(n), but only by an additive amount for exponential f(n).

An algorithm is referred to as a polynomial-time algorithm if its time complexity can be bounded above by a polynomial function of the input size. An algorithm is referred to as an exponential-time algorithm if its time complexity cannot be thus bounded (even if the function is not normally regarded as an exponential one, like n^{log n}). Usually, a problem is referred to as tractable if it can be solved with a polynomial-time algorithm, and intractable otherwise. These two comparisons give us a reason why polynomial-time algorithms are much more desirable than exponential-time algorithms. They also motivate us to study the theory of NP-completeness.

••• Maximal vs. Maximum

A maximal clique is a clique that is not contained in any larger clique; a maximum clique is a clique of the largest possible size.

Ex. In the graph with vertices 1, …, 6 and edges {1,2}, {1,3}, {2,3}, {2,4}, {2,5}, {3,4}, {3,5}, {4,5}, {4,6}, the maximal cliques are {1, 2, 3}, {2, 3, 4, 5}, and {4, 6}; the only maximum clique is {2, 3, 4, 5}.

••• Decision Problems vs. Optimization Problems

A decision problem asks for an answer of "yes" or "no". An optimization problem asks for an optimal value (a maximum or a minimum).

Ex. The maximum clique problem can be expressed as a decision problem as follows.
Instance: An undirected graph G = (V, E) and a positive integer k ≤ |V|.
Question: Does G contain a clique of size ≥ k?
It can also be expressed as an optimization problem as follows.
Instance: An undirected graph G = (V, E).
Question: What is the size of a maximum clique of G?

Ex. The traveling salesman problem can be expressed as a decision problem as follows.
Instance: A set C of m cities, distances d_{i,j} > 0 for all pairs of cities i, j ∈ C, and a positive integer k.
Question: Is there a tour of length ≤ k that starts at any city, visits each of the other m − 1 cities exactly once, and returns to the initial city?
It can also be expressed as an optimization problem as follows.
Instance: A set C of m cities and distances d_{i,j} > 0 for all pairs of cities i, j ∈ C.
Question: What is the length of a shortest tour that starts at any city, visits each of the other m − 1 cities exactly once, and returns to the initial city?
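The decision formulation above is easy to check once a candidate solution is given: verifying that a proposed tour visits every city exactly once and has total length at most k takes only polynomial time. The following minimal Python sketch illustrates such a check; the function name, the dictionary representation of distances, and the tiny instance are illustrative assumptions, not part of the problem definition.

def is_tour_within_budget(cities, dist, tour, k):
    # The tour must contain every city exactly once.
    if sorted(tour) != sorted(cities):
        return False
    # Sum the distances along the tour, closing the cycle back to the start.
    total = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return total <= k

# Tiny symmetric instance with 3 cities.
dist = {"A": {"B": 2, "C": 4}, "B": {"A": 2, "C": 3}, "C": {"A": 4, "B": 3}}
print(is_tour_within_budget(["A", "B", "C"], dist, ["A", "B", "C"], 9))  # True: 2 + 3 + 4 = 9
print(is_tour_within_budget(["A", "B", "C"], dist, ["A", "B", "C"], 8))  # False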
Ex. The problem of sorting a1, a2, …, an can be expressed as a decision problem as follows.
Instance: Given a1, a2, …, an and a positive integer k.
Question: Is there a permutation of a1, a2, …, an, denoted by a'1, a'2, …, a'n, such that |a'2 − a'1| + |a'3 − a'2| + … + |a'n − a'n−1| ≤ k?

An optimization problem is "harder" than its corresponding decision problem. Since NP-completeness concerns whether or not a problem can be solved in polynomial time, the discussion of NP-completeness considers only decision problems. (If a decision problem is not polynomial-time solvable, then its corresponding optimization problem is not polynomial-time solvable either.)

••• Problem Reduction

A problem P1 reduces to another problem P2, denoted by P1 ∝ P2, if any instance of P1 can be transformed into an instance of P2 such that the solution for P1 can be obtained from the solution for P2. Let T∝ be the reduction time and T the time required to obtain the solution for P1 from the solution for P2. Since NP-completeness concerns whether or not a problem can be solved in polynomial time, we consider only reductions with both T∝ and T polynomial. (Thus, P2 ∈ P ⇒ P1 ∈ P, or equivalently P1 ∉ P ⇒ P2 ∉ P.) Reductions compose: if P1 ∝ P2 and P2 ∝ P3, then P1 ∝ P3.

••• P, NP, and NP-Complete

Three classes of decision problems: P, NP, and NP-complete.

P: the set of decision problems that can be solved in polynomial time by deterministic algorithms.

NP: the set of decision problems that can be solved in polynomial time by non-deterministic algorithms. Any non-deterministic algorithm consists of two phases: guessing and checking.

For the maximum clique problem, the guessing phase returns a clique, and the checking phase decides whether or not the clique size is greater than or equal to k. For the traveling salesman problem, the guessing phase returns a tour, and the checking phase decides whether or not the tour length is less than or equal to k. A decision problem has an AFFIRMATIVE answer ⇔ some guess is SUCCESSFUL, i.e., passes the checking phase. Notice that non-deterministic algorithms are imaginary. A more detailed description of non-deterministic algorithms and more illustrative examples can be found in Ref. (2).

Every decision problem in P is also in NP, i.e., P ⊆ NP. An NP problem is NP-complete if every NP problem can be reduced to it in polynomial time. ⇒ If any NP-complete problem can be solved in polynomial time, then every NP problem can be solved in polynomial time (i.e., P = NP). (Intuitively, NP-complete problems are the "hardest" problems in NP.) Whether P ≠ NP or P = NP is one of the most famous open problems in computer science. When P ≠ NP, P and the NP-complete problems are two disjoint subclasses of NP, and there exist problems in NP that are neither in P nor NP-complete (see Chap. 7 in Ref. (1)). When P = NP, the classes coincide: P = NP = NP-complete. Almost all people believe P ≠ NP.

A problem is NP-hard if an NP-complete problem can be reduced to it in polynomial time. (Equivalently, a problem is NP-hard if every NP problem can be reduced to it in polynomial time.) ⇒ If any NP-hard problem can be solved in polynomial time, then all NP-complete problems can be solved in polynomial time. (Intuitively, NP-hard problems are at least as "hard" as NP-complete problems, and may be harder.) The class of NP-hard problems contains both decision problems and optimization problems. If an NP-hard problem is in NP, then it is an NP-complete problem. (Intuitively, NP-complete problems are an "easier" subclass of NP-hard problems.) The corresponding optimization problems of NP-complete problems are NP-hard.
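To make the relationship between the decision and optimization versions concrete: if the clique decision question could be answered in polynomial time, binary search over k would determine the maximum clique size with only O(log |V|) calls to that routine. The sketch below assumes a hypothetical decision routine has_clique; the brute-force stand-in shown here is exponential and serves only to make the example runnable on tiny graphs. All names are illustrative assumptions.

from itertools import combinations

def has_clique(adj, k):
    # Exponential brute-force stand-in for a (hypothetical) polynomial-time
    # decision routine: is there a clique of size >= k?  Checking subsets of
    # size exactly k is equivalent, since any larger clique contains one.
    nodes = list(adj)
    return any(all(v in adj[u] for u, v in combinations(s, 2))
               for s in combinations(nodes, k))

def max_clique_size(adj):
    # Binary search over k using the decision routine: O(log |V|) calls.
    lo, hi = 0, len(adj)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if has_clique(adj, mid):
            lo = mid      # a clique of size mid exists; try larger
        else:
            hi = mid - 1  # no clique of size mid; try smaller
    return lo

# The graph from the maximal-vs-maximum example; its maximum clique is {2, 3, 4, 5}.
adj = {1: {2, 3}, 2: {1, 3, 4, 5}, 3: {1, 2, 4, 5},
       4: {2, 3, 5, 6}, 5: {2, 3, 4}, 6: {4}}
print(max_clique_size(adj))  # 4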
The well-known halting problem (a decision problem), which is to determine whether or not an algorithm will terminate on a given input, is NP-hard, but not NP-complete (it is not even in NP, since it is undecidable).

••• Pseudo-Polynomial Time Algorithms

Ex. Given a set S = {a1, a2, …, an} of integers and an integer M > 0, the sum-of-subset problem is to determine whether or not there exists a subset of S whose sum is equal to M. This problem can be solved in O(nM) time by dynamic programming as follows. Let t(i, j) = true if there exists a subset of {a1, a2, …, ai} whose sum is equal to j, and false otherwise. Then t(i, j) = t(i − 1, j) ∨ t(i − 1, j − ai) (logical OR) for i > 1. Initially, t(1, j) = true if j = 0 or j = a1, and false otherwise. The answer is t(n, M).

Although O(nM) is exponential with respect to the length of the binary representation of M, the algorithm runs in polynomial time if the value of M is bounded (say, by a polynomial in n). An algorithm like this is usually referred to as a pseudo-polynomial time algorithm. An NP-complete problem is NP-complete in the strong sense if and only if there exists no pseudo-polynomial time algorithm for solving it (unless P = NP). Intuitively, NP-complete problems in the strong sense are "harder" NP-complete problems (refer to Ref. (1)).

••• The Satisfiability Problem and Cook's Theorem

The satisfiability problem, which was the first problem shown to be NP-complete, is defined as follows.
Instance: A set U of Boolean variables and a collection C of clauses over U.
Question: Is there an assignment of U that satisfies C?

Ex. When U = {x1, x2, x3} and C = {x1 ∨ x2 ∨ x3, ¬x1, ¬x2}, the assignment x1 ← F, x2 ← F, x3 ← T satisfies C (i.e., (x1 ∨ x2 ∨ x3) ∧ (¬x1) ∧ (¬x2) = T).

Ex. When U = {x1, x2} and C = {x1 ∨ x2, x1 ∨ ¬x2, ¬x1 ∨ x2, ¬x1 ∨ ¬x2}, no assignment of U can satisfy C.

Cook's Theorem: The satisfiability problem is NP-complete.
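The checking phase for the satisfiability problem is simple to implement: given an assignment, verify that every clause contains at least one true literal. The Python sketch below encodes the literal xi as the integer +i and its negation as −i; this encoding and the function name satisfies are illustrative assumptions, not part of the problem definition. The guessing phase would supply the assignment; only the check needs to run in polynomial time.

def satisfies(clauses, assignment):
    # A clause is a list of signed integers: +i means x_i, -i means (not x_i).
    # The formula is satisfied if every clause has at least one true literal.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# First example above: C = {x1 ∨ x2 ∨ x3, ¬x1, ¬x2}.
C1 = [[1, 2, 3], [-1], [-2]]
print(satisfies(C1, {1: False, 2: False, 3: True}))   # True

# Second example above: all four clauses over x1, x2; no assignment works.
C2 = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
print(any(satisfies(C2, {1: a, 2: b})
          for a in (False, True) for b in (False, True)))  # False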