Advanced Topics in Complexity Theory: Exercise
Advanced Topics in Complexity Theory

Daniel Borchmann

October 4, 2016

Contents

1 Approximation Complexity
  1.1 Function Problems
  1.2 Approximation Algorithms
  1.3 Approximation and Complexity
  1.4 Non-Approximability
2 Interactive Proofs
  2.1 Interactive Proof Systems with Deterministic Verifiers
  2.2 Probabilistic Verifiers
  2.3 Public Coin Protocols
  2.4 IP = PSpace
  2.5 Outlook: Multi-Prover Systems
3 Counting Complexity
  3.1 Counting problems and the class #P
  3.2 #P-completeness and Valiant's Theorem

1 Approximation Complexity

Dealing with NP-complete problems in practice may sometimes be infeasible. On the other hand, it is often enough to have approximate solutions to those problems, provided these are not “too bad”. However, for this to make sense, it is no longer possible to only consider decision problems.¹

¹ The content of this part of the lecture is based on: Christos H. Papadimitriou: Computational Complexity, Addison-Wesley, 1995, Section 10.3 and Chapter 13; and Sanjeev Arora and Boaz Barak: Computational Complexity: A Modern Approach, Cambridge University Press, 2009, Chapter 11.

1.1 Function Problems

1.1 Example Let us consider the following “computation problem” associated to SAT: given a propositional formula φ, return a satisfying assignment of φ if one exists, and “no” otherwise. Let us call this problem FSAT (“F” for “function”; note, however, that for a satisfiable formula φ there may be more than one satisfying assignment – the “functional” aspect thus does not really correspond to the mathematical notion of a function).

It is clear that when we can solve FSAT in polynomial time, we can also solve SAT in polynomial time. The converse is also true: if SAT ∈ P, then FSAT can be solved in polynomial time as well. This can be done as follows: let x_1, ..., x_n be the variables in φ. We first check whether φ is satisfiable, and return “no” if it is not. Otherwise, we set x_1 = true and ask whether φ is still satisfiable. If it is, we fix x_1 = true, and otherwise x_1 = false. In any case, we continue with x_2, x_3, ..., x_n. It is easy to see that when we have fixed x_n, we have obtained a satisfying assignment of φ. Moreover, the procedure runs in polynomial time in |φ|, provided that we can solve SAT in polynomial time. ♦

The previous example illustrates a property of SAT that is called “self-reducibility”: we can solve FSAT by considering instances of SAT of equal or smaller size. The same technique also works for other problems, as Example 1.2 below shows.
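The variable-fixing procedure from Example 1.1 can be sketched in a few lines of Python. This is a minimal sketch and not part of the lecture notes: sat is a hypothetical stand-in for a SAT decision oracle (realized here by brute force for tiny formulas only), formulas are given in CNF as lists of integer literals, and fsat performs the n + 1 oracle calls described above.

```python
# Minimal sketch of the self-reducibility argument from Example 1.1.
# `sat` is a hypothetical decision oracle: it returns True iff the CNF formula
# (a list of clauses, each a list of non-zero integer literals over the
# variables 1..n) is satisfiable.  Here it is only a brute-force placeholder.
from itertools import product

def sat(clauses):
    variables = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def fsat(clauses, n):
    """Return a satisfying assignment {variable: bool}, or None for 'no'."""
    if not sat(clauses):          # first oracle call: is the formula satisfiable?
        return None
    assignment, fixed = {}, list(clauses)
    for x in range(1, n + 1):     # one further oracle call per variable
        if sat(fixed + [[x]]):    # still satisfiable with x fixed to true?
            assignment[x] = True
            fixed.append([x])
        else:
            assignment[x] = False
            fixed.append([-x])
    return assignment

# (x_1 ∨ x_2) ∧ (¬x_1 ∨ x_2)
print(fsat([[1, 2], [-1, 2]], n=2))   # {1: True, 2: True}
```

Fixing a variable is realized here by adding a unit clause, so every query remains an ordinary SAT instance of essentially the same size as φ.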
1.2 Example Let us consider the traveling salesperson problem (TSP): given an undirected (complete) graph G with non-negative edge weights, find a Hamiltonian cycle in G, called a tour, with minimal total weight. We shall show that we can solve this optimization problem, provided we have access to an oracle for the decision variant TSPD.

Let G = (V, E, w) be an edge-weighted graph and let n be the length of the encoding ⟨G⟩ of G. The cost of a shortest tour on G is then between 0 and 2^n. Using binary search, we can find the cost C of the optimal tour with at most O(n) calls to TSPD. Once C is fixed, we create instances of TSPD where the budget is fixed to be C, but where the weights of the underlying graph G have been altered. More precisely, for all edges {i, j} ∈ E we successively set the weight of {i, j} to C + 1. For each such modification we ask whether G still has a tour of length C or less. If yes, then some optimal tour avoids the edge {i, j}, and we can freeze its weight at C + 1. Otherwise, we restore the weight of {i, j} to its previous value. It is easy to see that after O(n^2) calls to TSPD, the only edges with weights below C + 1 form an optimal tour in G. ♦
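The two phases of this procedure, binary search for the optimal cost C and then freezing dispensable edges at C + 1, can likewise be sketched in Python. All names below are assumptions for illustration: tsp_d stands in for the oracle TSPD and is realized by brute force for tiny instances only, and edge weights are stored in a dictionary keyed by two-element frozensets.

```python
# Minimal sketch of Example 1.2: recovering an optimal tour from the decision
# oracle TSPD.  `tsp_d(w, budget, n)` is a hypothetical stand-in for the oracle;
# it answers whether the complete graph on n vertices with weights w (a dict
# keyed by frozensets {i, j}) has a tour of total weight at most `budget`.
from itertools import permutations

def tsp_d(w, budget, n):
    for perm in permutations(range(1, n)):          # brute force, tiny n only
        tour = (0,) + perm
        cost = sum(w[frozenset({tour[i], tour[(i + 1) % n]})] for i in range(n))
        if cost <= budget:
            return True
    return False

def optimal_tour(w, n):
    # Phase 1: binary search for the optimal cost C (O(n) oracle calls).
    lo, hi = 0, sum(w.values())                     # crude stand-in for the 2^n bound
    while lo < hi:
        mid = (lo + hi) // 2
        if tsp_d(w, mid, n):
            hi = mid
        else:
            lo = mid + 1
    C = lo
    # Phase 2: freeze dispensable edges at C + 1, restore the ones that are needed.
    w = dict(w)
    for e in list(w):
        old, w[e] = w[e], C + 1
        if not tsp_d(w, C, n):                      # edge e is needed: restore it
            w[e] = old
    return {e for e in w if w[e] <= C}, C

# A complete graph on 4 vertices with hypothetical weights.
w = {frozenset({0, 1}): 1, frozenset({1, 2}): 2, frozenset({2, 3}): 1,
     frozenset({0, 3}): 2, frozenset({0, 2}): 5, frozenset({1, 3}): 5}
print(optimal_tour(w, 4))   # the four edges of the tour 0-1-2-3-0, cost 6
```

Phase 2 makes one oracle call per edge, so the total number of oracle calls stays polynomial, as stated in the example.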
We can formalize the relationship between decision and function problems in the following way: let L ∈ NP. Then there exists a polynomial-time TM K and some polynomial p(n) such that

  L = { x ∈ Σ* | ∃y: |y| ≤ p(|x|) ∧ (x, y) ∈ K }.

The function problem associated to L, denoted FL, is then the following computation problem: given x, find y such that |y| ≤ p(|x|) and (x, y) ∈ K. If no such y exists, return “no”.

We next provide a more formal definition of function problems.

1.3 Definition A function problem over Σ is a pair Q = (I, R), where I ⊆ Σ* is called the set of instances of Q and R ⊆ Σ* × Σ*. Both I and R must be decidable in polynomial time. For x ∈ I, define F(x) to be the set of feasible solutions of x, consisting of all y such that (x, y) ∈ R. A TM M solves the function problem Q if it returns for each instance x ∈ I some y ∈ F(x) if F(x) ≠ ∅, and rejects x otherwise.

Call a function problem (I, R) polynomially balanced if there exists some polynomial p such that (x, y) ∈ R implies |y| ≤ p(|x|). The class FNP consists of all polynomially balanced function problems (I, R). The set FP is the subset of FNP consisting of all function problems solvable in polynomial time. ♦

We have
• FSAT ∈ FNP, but it is neither known nor expected that FSAT ∈ FP.
• FHornSAT ∈ FP.
• Finding perfect matchings in bipartite graphs is in FP.
• FTSPD ∈ FNP.

Note that, on the other hand, it is not known whether TSP is in FNP. The reason is that a given tour cannot be checked easily for being optimal (for all we know). Moreover, TSP is complete for the class FP^NP of all function problems computable in polynomial time with access to an NP oracle. We shall discuss this in some future exercise.

Note that each problem in FNP can be solved in non-deterministic polynomial time. Conversely, for each function problem L = (I, R) that is solvable in non-deterministic polynomial time, there exists a function problem L' = (I, R') ∈ FNP such that
• each solution to some instance of L' is a solution for the same instance of L;
• if some instance of L' does not have a solution, neither has the same instance of L.

Indeed, if M is some polynomial-time NTM solving L, let p be some bounding polynomial of the runtime of M. Define

  R' = { (x, y) | |y| ≤ p(|x|) ∧ (x, y) ∈ R }.

Because of this, we can identify problems in FNP with function problems solvable in non-deterministic polynomial time.

1.4 Definition Let A, B be two function problems. We say that A reduces to B (in logarithmic space) if and only if the following is true: there exist logspace-computable functions f and g such that
1. if x is an instance of A, then f(x) is an instance of B;
2. if f(x) does not have a solution, then x does not have a solution;
3. if z is a solution for f(x), then g(z) is a solution for x.

The problem A is complete for a class FC of function problems if and only if A ∈ FC and all problems in FC reduce to A. ♦

It is not hard to see that FP and FNP are closed under reductions (using the convention for FP and FNP as discussed above), and that reductions compose. It is also not hard to show that FSAT is FNP-complete. From Example 1.1 we know that FSAT can be solved in polynomial time if and only if SAT can be. Thus we have obtained the following result.

1.5 Theorem FP = FNP if and only if P = NP.

Because of this close relationship between FNP/FP and NP/P, one could think that studying function problems does not add any value. However, among the problems in FNP there is a special class of function problems that are guaranteed to never say “no”, corresponding to L = Σ*.

1.6 Example Consider the problem Factoring: given N ∈ ℕ, find its prime decomposition

  N = p_1^{k_1} · ... · p_m^{k_m}.

Since Primes ∈ P, checking whether a tuple (p_1, k_1, ..., p_m, k_m) denotes the prime decomposition of N can be done in polynomial time. Alternatively, one could also supply primality certificates for p_1, ..., p_m (such certificates of polynomial length exist; see exercises). ♦

It is not known whether Factoring can be solved in polynomial time. However, in contrast to FSAT, Factoring never returns “no” on valid instances.

1.7 Definition A function problem Q = (I, R) is called total if and only if for each x ∈ I there exists a y ∈ Σ* such that (x, y) ∈ R. The class of all total function problems in FNP is denoted TFNP. ♦

1.2 Approximation Algorithms

Function problems in TFNP are always guaranteed to have a solution. However, for practical concerns, some of these solutions may be preferred over others, e.g., if they minimize costs.

1.8 Definition An optimization problem is a tuple (Q, cost, opt), where Q is a total function problem, cost(y) ∈ ℕ \ {0} for each solution y of Q, and opt ∈ {min, max}. For an instance x of Q, the optimal cost OPT(x) of x is defined as

  OPT(x) = opt_{s ∈ F(x)} cost(s).

A Turing machine solves the optimization problem (Q, cost, opt) if for each instance x of Q the Turing machine accepts and returns a solution y ∈ F(x) such that OPT(x) = cost(y).
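Definition 1.8 can be read almost literally as a small data type. The following Python sketch (all names are hypothetical and the enumeration is brute force, so it only handles tiny instances) models an optimization problem by its feasible-solution map F, its cost function and opt ∈ {min, max}, and computes OPT(x) by enumerating F(x); TSP from Example 1.2 serves as the concrete instance. For convenience, cost here also receives the instance x, since a tour can only be priced together with the edge weights.

```python
# Minimal sketch of Definition 1.8 (hypothetical names, brute force only).
from dataclasses import dataclass
from itertools import permutations
from typing import Callable, Iterable

@dataclass
class OptimizationProblem:
    feasible: Callable[[object], Iterable]     # x -> F(x)
    cost: Callable[[object, object], int]      # (x, s) -> cost(s), a positive integer
    opt: Callable                              # min or max

    def OPT(self, x):
        # OPT(x) = opt over the costs of all feasible solutions of x.
        return self.opt(self.cost(x, s) for s in self.feasible(x))

# TSP as an optimization problem: an instance is (n, w), F(x) is the set of
# all tours, and opt = min.
def tours(x):
    n, _ = x
    for perm in permutations(range(1, n)):
        yield (0,) + perm

def tour_cost(x, tour):
    n, w = x
    return sum(w[frozenset({tour[i], tour[(i + 1) % n]})] for i in range(n))

tsp = OptimizationProblem(feasible=tours, cost=tour_cost, opt=min)

w = {frozenset({0, 1}): 1, frozenset({1, 2}): 2, frozenset({2, 3}): 1,
     frozenset({0, 3}): 2, frozenset({0, 2}): 5, frozenset({1, 3}): 5}
print(tsp.OPT((4, w)))   # 6, the cost of the optimal tour from the sketch above
```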