
Lecture 24 Computational Complexity Theory

Scribes: Abhishek Shetty, Raghav Malhotra (Undergraduate Department, Indian Institute of Science)
Instructor: Chandan Saha (Computer Science and Automation, Indian Institute of Science)
October 29, 2015

1 #P-Completeness

To define the concept of #P-Completeness, we look at the problem of whether #P = FP. Let f ∈ #P be a function. We define the language T_f = {⟨x, i⟩ : the i-th bit of f(x) is 1}. A machine M is said to have oracle access to the function f, denoted M^f, if it has access to an oracle for the language T_f. With this generalization, we have the following definitions.

Definition 1.1. Let g : {0,1}* → {0,1}*. We say that g ∈ FP^f if there exists a polynomial-time machine M such that M^f computes g.

Definition 1.2. Let g : {0,1}* → {0,1}*. Then g ∈ #P-Hard if ∀h ∈ #P, h ∈ FP^g.

Definition 1.3. #P-Complete = #P ∩ #P-Hard.

The following theorem follows directly from the definitions.

Theorem 1.1. If ∃f : {0, 1}∗ → {0, 1}∗ such that f ∈ FP ∩ #P-Hard, then #P = FP.
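To make the oracle convention concrete, here is a minimal Python sketch. The function f below is a toy stand-in (any integer-valued function works for illustration, and the bit-indexing convention — least significant bit first — is my own choice, since the definition leaves it open); eval_via_oracle plays the role of a polynomial-time machine M^f that recovers f(x) with one query to T_f per output bit.

```python
def make_oracle(f):
    """Oracle for T_f = {<x, i> : the i-th bit of f(x) is 1}."""
    def T(x, i):
        return (f(x) >> i) & 1 == 1
    return T

def eval_via_oracle(T, x, out_bits):
    """An FP^f-style computation: recover f(x) with out_bits queries to T_f,
    where out_bits is a polynomial bound on the output length of f."""
    value = 0
    for i in range(out_bits):
        if T(x, i):
            value |= 1 << i
    return value

f = lambda x: x * x + 3        # toy stand-in for a #P function
T = make_oracle(f)
print(eval_via_oracle(T, 12, 16))  # 147, i.e. f(12)
```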

Definition 1.4. A language A ∈ NP Levin-reduces to a language B ∈ NP if there are polynomial-time computable functions f and h such that,

• x ∈ A ⇐⇒ f(x) ∈ B

• u is a certificate for x ∈ A ⇐⇒ h(u) is a certificate for f(x) ∈ B.

Definition 1.5. A Levin reduction is said to be parsimonious if the function h is a bijection.

From the definition, it is clear that if a language B ∈ NP has a parsimonious reduction from every language A ∈ NP, then #B ∈ #P-Complete.

Theorem 1.2. #SAT is #P-Complete.

Proof. From the Cook-Levin Theorem, we can see that the reduction from every language in NP to SAT is parsimonious. Thus, #SAT is #P-Complete.

The question of whether every NP-Complete language corresponds to a #P-Complete counting problem remains open. It is known that the counting versions of NP-Complete problems such as HAMCYCLE and CLIQUE are #P-Complete. Interestingly, the counting versions of problems such as 2SAT and CYCLE are #P-Complete even though the decision versions are solvable in polynomial time. We take a quick look at another such problem.
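Though no polynomial-time algorithm for #SAT is expected, small instances can be counted by brute force. The following sketch makes the counting problem concrete; the DIMACS-style clause encoding (a positive integer v for variable v, a negative one for its negation) is my own convention here.

```python
from itertools import product

def count_sat(cnf, n_vars):
    """#SAT by exhaustive enumeration (exponential in n_vars; fine for
    tiny formulas). cnf is a list of clauses, each a list of nonzero
    ints: literal +v means variable v, -v means its negation."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in cnf):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over 3 variables:
print(count_sat([[1, 2], [-1, 3]], 3))  # 4 satisfying assignments
```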

Definition 1.6. Let A ∈ M_{n×n}(C). Then the permanent of A is defined as

perm(A) = Σ_{σ ∈ S_n} Π_{i=1}^{n} A_{i,σ(i)}

Theorem 1.3. Let G be a bipartite graph and A be the corresponding biadjacency matrix. Then the number of perfect matchings in G is equal to the permanent of A.

Proof. This is clear since each perfect matching corresponds uniquely to a permutation σ ∈ S_n with A_{i,σ(i)} = 1 for all i.

The existence of a perfect matching in a graph can be decided in polynomial time [Edm65]. But the following theorem tells us that the counting version of the same problem is #P-Complete.

Theorem 1.4 ([Val79], [AB09]). The permanent of 0-1 matrices is #P-Complete.
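The definition of the permanent, and its connection to perfect matchings, can be checked directly on small matrices; the following sketch uses the 6-cycle as an example bipartite graph.

```python
from itertools import permutations
from math import prod

def perm(A):
    """Permanent straight from the definition (O(n! * n); only for small n)."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# Biadjacency matrix of the 6-cycle as a bipartite graph:
# left vertex i is joined to right vertices i and i+1 (mod 3).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(perm(A))  # 2 -- the 6-cycle has exactly two perfect matchings
```

For the all-ones 3x3 matrix (the biadjacency matrix of K_{3,3}) the same function returns 3! = 6, one term per permutation, matching the count of perfect matchings.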

2 Approximation of #P

Definition 2.1. An algorithm A is said to be a (1 + ε)-approximation for f ∈ #P if ∀x,

|A(x) − f(x)| ≤ ε f(x)

Definition 2.2. An algorithm A is said to be a Fully Polynomial-Time Approximation Scheme (FPAS) for f ∈ #P if ∀ε > 0, A is a (1 + ε)-approximation running in time poly(|x|, ε^{−1}).

Definition 2.3. An algorithm A is said to be a Fully Polynomial-Time Randomized Approximation Scheme (FPRAS) for f ∈ #P if ∀ε > 0, A runs in time poly(|x|, ε^{−1}), and

Pr[|A(x) − f(x)| ≤ ε f(x)] ≥ 2/3

We note that the error probability can be reduced to 2^{−|x|} by repeating the algorithm independently polynomially many times and outputting the median of the values as the answer. The argument is identical to the success-probability boosting for BPP.

Various #P-Complete problems behave differently under approximation. The permanent has an FPRAS [JVV86], while for problems like #CYCLE the existence of such a scheme would imply P = NP. The approximation of #P problems is considered to be easier than exact computation. This can be seen from the following two theorems, which shall be proven in the subsequent lectures.
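The median-boosting trick for an FPRAS can be seen in a small simulation; the true value, the particular "garbage" answers, and the run counts below are all illustrative choices, with the single-run success probability set to the 2/3 from the definition.

```python
import random
import statistics

TRUE_VALUE = 100

def noisy_estimate():
    """Stand-in for one FPRAS run: correct with probability 2/3,
    otherwise an arbitrary wrong answer."""
    if random.random() < 2 / 3:
        return TRUE_VALUE
    return random.choice([0, 10**6])

def boosted_estimate(runs=51):
    """Median of independent runs: the median is correct as long as a
    majority of runs are, and a Chernoff bound drives the failure
    probability down exponentially in the number of runs."""
    return statistics.median(noisy_estimate() for _ in range(runs))

random.seed(0)
trials = 200
hits = sum(boosted_estimate() == TRUE_VALUE for _ in range(trials))
print(hits / trials)  # close to 1, well above the 2/3 single-run guarantee
```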

Theorem 2.1. ∀f ∈ #P, f can be approximated by a BPP machine with access to a SAT oracle.

Theorem 2.2 ([Tod91], [AB09]). PH ⊆ P^{#P}

Since BPP ⊆ Σ_2^p by the Sipser-Gacs-Lautemann Theorem, we have BPP^SAT ⊆ Σ_3^p. But Toda's Theorem states that PH ⊆ P^{#P}. Thus, unless the polynomial hierarchy collapses, exact computation is strictly harder than approximation of solutions. We also note that in Theorem 2.1 the requirement of the SAT oracle arises from the fact that approximating #SAT is NP-Hard; if the approximation could be achieved in BPP, we would have NP ⊆ BPP, implying a collapse of the polynomial hierarchy. Working towards the proofs of the above theorems, we revisit the definition of pairwise independent hash function families, which we have seen in the proof of the Goldwasser-Sipser Theorem.

Definition 2.4. A family of functions H_{n,k} = {h : {0,1}^n → {0,1}^k} is said to be a pairwise independent hash function family if ∀x, x′ ∈ {0,1}^n with x ≠ x′ and ∀y, y′ ∈ {0,1}^k,

Pr[h ←_R H_{n,k} ; h(x) = y ∧ h(x′) = y′] = 2^{−2k}

The following claim about such families is clear from the definition.

Lemma 2.3. ∀x ∈ {0,1}^n, ∀y ∈ {0,1}^k,

Pr[h ←_R H_{n,k} ; h(x) = y] = 2^{−k}

Theorem 2.4. H_{n,n} = {h_{a,b} : GF(2^n) → GF(2^n) ; a, b ∈ GF(2^n)} defined by

h_{a,b}(x) = ax + b

is a pairwise independent hash function family.

Proof. Fix x ≠ x′ and targets y, y′, and consider the system of equations

ax + b = y
ax′ + b = y′

Since x − x′ ≠ 0, the system has a unique solution in (a, b), and thus the probability that h_{a,b}(x) = y and h_{a,b}(x′) = y′ is equal to 2^{−2n}.

Remark. GF(2^n) refers to the Galois Field of size 2^n, which is the unique finite field (up to isomorphism) of order 2^n. We have GF(2^n) ≅ F_2[x]/⟨p⟩, where p is an irreducible polynomial of degree n [Lan02].

Other examples of pairwise independent hash functions were considered in Lecture 17 and are not revisited here. Towards the proof of Theorem 2.1, we note that since #SAT ∈ #P-Complete, it suffices to prove that #SAT can be approximated in BPP^SAT. Also, a factor-2 approximation is sufficient to prove the theorem, since we can take polynomially many copies of the formula on disjoint variables to arrive at a (1 + ε)-approximation for #SAT.
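The family of Theorem 2.4 is easy to instantiate and check for small n. The sketch below fixes n = 8 and reduces modulo the polynomial x^8 + x^4 + x^3 + x + 1 (one irreducible choice, familiar from AES; any degree-8 irreducible would do), then verifies the proof's counting argument: for a fixed pair x ≠ x′ and targets (y, y′), exactly one of the 2^{2n} keys (a, b) hits both, giving probability 2^{−2n}.

```python
P = 0x11b  # x^8 + x^4 + x^3 + x + 1, irreducible over F_2

def gf_mul(a, b):
    """Carry-less (XOR-based) multiplication in GF(2^8), reduced mod P."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:   # degree-8 overflow: reduce by the modulus
            a ^= P
        b >>= 1
    return r

def h(a, b, x):
    return gf_mul(a, x) ^ b   # addition in GF(2^n) is XOR

# Pairwise independence: for fixed x != x' and targets (y, y'),
# exactly one of the 2^16 keys (a, b) maps (x, x') to (y, y').
x0, x1, y0, y1 = 3, 7, 0xAB, 0xCD
hits = sum(1 for a in range(256) for b in range(256)
           if h(a, b, x0) == y0 and h(a, b, x1) == y1)
print(hits)  # 1
```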

References

[AB09] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.

[Edm65] Jack Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449–467, 1965.

[JVV86] Mark R. Jerrum, Leslie G. Valiant, and Vijay V. Vazirani. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:169–188, 1986.

[Lan02] Serge Lang. Algebra, revised third edition. Graduate Texts in Mathematics, volume 211. Springer, 2002.

[Tod91] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865–877, 1991.

[Val79] Leslie G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189–201, 1979.
