CMPUT 675: Approximation Algorithms (Fall 2011)
Lecture 19,20 (Nov 15&17, 2011): Hardness of Approximation, PCP theorem
Lecturer: Mohammad R. Salavatipour
Scribe: based on older notes

19.1 Hardness of Approximation

So far we have mostly been designing approximation algorithms and proving upper bounds. From now until the end of the course we will be proving lower bounds, i.e. hardness of approximation.

We are familiar with the theory of NP-completeness. When we prove that a problem is NP-hard, this implies that, assuming P ≠ NP, there is no polynomial time algorithm that solves the problem exactly. For example, for SAT, deciding between Yes and No instances is hard (again assuming P ≠ NP). We would like to show that even deciding between instances that are (almost) satisfiable and instances that are far from being satisfiable is hard. In other words, we want to create a gap between Yes instances and No instances. Such gaps imply hardness of approximation for the optimization versions of NP-hard problems. In fact, the PCP theorem (which we will see next lecture) is equivalent to the statement that MAX-SAT is NP-hard to approximate within a factor of (1 + ε), for some fixed ε > 0.

Most hardness of approximation results rely on the PCP theorem. To prove hardness, for example for vertex cover, the PCP theorem shows the existence of the following reduction: given a formula ϕ for SAT, we build a graph G(V, E) in polytime such that:

• if ϕ is a yes-instance, then G has a vertex cover of size ≤ (2/3)|V|;
• if ϕ is a no-instance, then every vertex cover of G has size > α · (2/3)|V|, for some fixed α > 1.

Corollary 1 The vertex cover problem cannot be approximated within a factor of α unless P = NP.

In this reduction we have created a gap of size α between yes and no instances.

Figure 19.1: g is a gap-introducing reduction

Definition 1 Suppose that L is an NP-complete problem and π is a minimization problem. Suppose that g is a function computable in polytime that maps Yes-instances of L into a set S1 of instances of π and No-instances of L into a set S2 of instances of π. Assume that there is a polytime computable function h such that:

• for every Yes-instance x of L: OPT(g(x)) ≤ h(g(x));
• for every No-instance x of L: OPT(g(x)) > α · h(g(x)).

Then g is called a gap-introducing reduction from L to π, and α is the size of the gap.

The chromatic number of a graph G, denoted by χ(G), is the smallest integer c such that G has a c-coloring. It is known that deciding whether a graph is 3-colorable is NP-complete. Using this, we easily get the following hardness of approximation for the optimization problem of computing the chromatic number of a given graph G.

Theorem 1 There is no better than 4/3-approximation algorithm for computing the chromatic number, unless P = NP.

Proof. Create a gap-introducing reduction from 3-colorability to the problem of computing chromatic numbers, as shown in the figure below; the argument is spelled out after the figure.

Figure 19.2: No better than 4/3-approximation algorithm for computing the chromatic number, unless P = NP.
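To spell out the argument behind Theorem 1 (a short worked version, since the figure itself is not reproduced here), note that the reduction is simply the identity map, with h ≡ 3 and gap α = 4/3:

\[
G \text{ is 3-colorable} \;\Longrightarrow\; \chi(G) \le 3,
\qquad
G \text{ is not 3-colorable} \;\Longrightarrow\; \chi(G) \ge 4 = \tfrac{4}{3} \cdot 3 .
\]

So if A were a ρ-approximation algorithm for the chromatic number with ρ < 4/3, then on a 3-colorable graph A would return a coloring with at most ⌊3ρ⌋ = 3 colors, while on any other graph it must use at least 4 colors; comparing the number of colors A uses with 3 would then decide 3-colorability in polynomial time. The same comparison argument applied to the vertex cover reduction above yields Corollary 1.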
The reduction we gave for the (non-metric) traveling salesman problem in Lecture 5 was a gap-introducing reduction from Hamilton cycle.

Once we have a gap-introducing reduction (e.g. from SAT) to a problem π1, we can prove hardness for another problem π2 by giving a gap-preserving reduction from π1 to π2.

Definition 2 A gap-preserving reduction g (computable in polytime) from π1 to π2 has four parameters f1, α, f2, β. Given an instance x of π1, g(x) is an instance of π2 such that:

• if OPT(x) ≤ f1(x), then OPT(g(x)) ≤ f2(g(x));
• if OPT(x) > α · f1(x), then OPT(g(x)) > β · f2(g(x)).

19.1.1 MAX-SNP Problems and the PCP Theorem

Many problems, such as Bin Packing and Knapsack, have PTASs. A major open question was: does MAX-3SAT have a PTAS? A significant result along this line was the paper by Papadimitriou and Yannakakis ('92), in which they defined the class MAX-SNP. All problems in MAX-SNP have constant factor approximation algorithms. They also defined a notion of completeness for this class and showed that if any MAX-SNP-complete problem has a PTAS, then every MAX-SNP problem has a PTAS. Several well-known problems, including MAX-3SAT and TSP, are MAX-SNP-complete. We have seen a 7/8-approximation for MAX-3SAT; it turns out that even if the clauses do not have exactly 3 literals each, this problem has a 7/8-approximation algorithm.

How about lower bounds? There is a trivial gap-introducing reduction from 3-SAT to MAX-3SAT which shows that a formula ϕ with m clauses is a Yes-instance iff more than an (m−1)/m fraction of its clauses can be satisfied.

Suppose that for each L ∈ NP there is a polytime computable reduction g from L to instances of MAX-3SAT such that:

• for every Yes-instance y (i.e. y ∈ L), all 3-clauses in g(y) can be satisfied;
• for every No-instance y (i.e. y ∉ L), at most 99% of the 3-clauses of g(y) can be satisfied.

There is a standard proof system for L ∈ NP: a deterministic verifier that takes as input an instance y of L together with a proof π, reads π, and accepts iff π is a valid proof (solution). We give an alternative proof system for NP using the gap-introducing reduction g assumed above. Define a proof that y ∈ L to be a truth assignment satisfying g(y), where g is the gap-introducing reduction from L to MAX-3SAT. We define a randomized verifier V for L. V runs in polynomial time and

• takes y and the "new" proof π;
• accepts iff π satisfies a 3-clause selected uniformly at random from g(y).

If y ∈ L then there is a proof (truth assignment) satisfying all the 3-clauses in g(y); for that proof, the verifier V(y, π) accepts with probability 1. If y ∉ L then every truth assignment satisfies at most 99% of the clauses in g(y), so the verifier V(y, π) rejects with probability at least 1/100.

Note that under the assumption we made, we can achieve a constant separation between the two cases above by reading only 3 bits of the proof (regardless of the length of the proof). We can increase the probability of success (i.e. decrease the probability of error) by simulating V a constant (but large) number of times. For example, by repeating 100 times, the probability of success in the second case becomes at least 1/2, and we still check only a constant number (300) of bits of the proof.

19.2 PCP Theorem

To be consistent with the PCP definition later on, we give a slightly modified definition of the class NP, which is clearly equivalent to the original one:

Definition 3 A language L is in NP if and only if there is a deterministic polynomial time verifier (i.e. algorithm) V that takes an input x and a proof y with |y| = |x|^c for some constant c > 0, and satisfies the following:

• Completeness: if x ∈ L, then ∃ y such that V(x, y) = 1.
• Soundness: if x ∉ L, then ∀ y, V(x, y) = 0.
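To connect this with the restricted verifiers defined next, here is a minimal sketch of the randomized verifier from Section 19.1.1. The clause encoding (DIMACS style, where literal +i stands for x_i and -i for the negation of x_i), the function names, and the repetition count are choices of this sketch, not notation from the lecture.

import random

def clause_satisfied(clause, proof):
    # A literal +i is satisfied if proof[i] == 1, a literal -i if proof[i] == 0;
    # this looks at exactly the 3 proof bits named by the clause.
    return any(proof[abs(lit)] == (1 if lit > 0 else 0) for lit in clause)

def verifier(clauses, proof, repetitions=100, rng=random):
    # Sample a clause uniformly at random, check it against the proof, repeat;
    # accept (return 1) only if every sampled clause is satisfied.
    # With 100 repetitions at most 300 bits of the proof are ever read.
    for _ in range(repetitions):
        clause = rng.choice(clauses)
        if not clause_satisfied(clause, proof):
            return 0  # reject as soon as a sampled clause is violated
    return 1

# Tiny example: g(y) = (x1 OR x2 OR x3) AND (NOT x1 OR x2 OR NOT x3).
g_y = [(1, 2, 3), (-1, 2, -3)]
proof = [None, 1, 1, 0]      # proof[i] holds the value of x_i; index 0 unused
print(verifier(g_y, proof))  # a satisfying assignment is accepted with probability 1

If at most 99% of the clauses of g(y) are satisfiable, each sample rejects with probability at least 1/100, so 100 independent samples reject with probability at least 1 − (99/100)^100 ≈ 0.63 ≥ 1/2, matching the amplification described above.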
Now we define probabilistic verifiers that are restricted to look at (query) only a few bits of the alleged proof y and make their decision based on those few bits, instead of reading the whole proof.

Definition 4 An (r(n), b(n))-restricted verifier is a randomized verifier that runs in probabilistic polynomial time, uses at most r(n) random bits, and reads (queries) at most b(n) bits of the proof.

Finally we give the main definition of a language in PCP. This definition is for PCP_{1,1/2}(r(n), b(n)).

Definition 5 A language L is in PCP(r(n), b(n)) if and only if there is an (r(n), b(n))-restricted verifier V such that, given an input x of length |x| = n and a proof π, it satisfies the following:

• Completeness: if x ∈ L, then ∃ a proof π such that Pr[V(x, π) = 1] = 1.
• Soundness: if x ∉ L, then ∀ π, Pr[V(x, π) = 1] ≤ 1/2.

The probabilities in the completeness and soundness conditions of the definition above are 1 and 1/2, respectively. A more general definition of PCP languages is obtained by allowing these probabilities to be other constant values:

Definition 6 For 0 ≤ s < c ≤ 1, a language L is in PCP_{c,s}(r(n), b(n)) if and only if there is an (r(n), b(n))-restricted verifier V such that, given an input x of length |x| = n and a proof π, it satisfies the following:

• Completeness: if x ∈ L, then ∃ a proof π such that Pr[V(x, π) = 1] ≥ c.
• Soundness: if x ∉ L, then ∀ π, Pr[V(x, π) = 1] ≤ s.

In this more general definition of a PCP language we require the following restriction on the parameters:

• c and s are constants and 0 ≤ s < c ≤ 1.
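As a sanity check on these definitions, two standard observations follow directly from them: the deterministic NP verifier of Definition 3 is the extreme case with no randomness, and independent repetition (as in Section 19.1.1) trades randomness and queries for soundness. Concretely,

\[
\mathrm{NP} \;=\; \mathrm{PCP}_{1,0}\bigl(0,\ \mathrm{poly}(n)\bigr) \;=\; \mathrm{PCP}\bigl(0,\ \mathrm{poly}(n)\bigr),
\]

since a verifier with no random bits is deterministic, and for a deterministic verifier any soundness value below 1 forces it to reject every alleged proof of a No-instance; and

\[
\mathrm{PCP}\bigl(r(n),\, b(n)\bigr) \;\subseteq\; \mathrm{PCP}_{1,\,2^{-k}}\bigl(k\, r(n),\ k\, b(n)\bigr) \qquad \text{for every constant } k \ge 1,
\]

obtained by running the verifier k times with independent random bits and accepting iff all k runs accept.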