CSE 200 Computability and Complexity          Wednesday, May 1, 2013

Lecture 10: Boolean Circuits (cont.)

Instructor: Professor Shachar Lovett          Scribe: Dongcai Shen

1 Recap

Definition 1 (Boolean circuit) A Boolean circuit on n inputs is a DAG with n source nodes and one target (sink) node, whose internal nodes each perform a simple computation (AND, OR, NOT).

Definition 2 (P/poly) P/poly := {languages L : for all n, L ∩ {0,1}^n can be computed by a circuit of size n^{O(1)}}.

Theorem 3 P ⊆ P/poly.

2 Turing Machines with Advice

We can give an equivalent definition of P/poly by introducing Turing machines that take advice.

Definition 4 (TIME(T(n))/a(n)) A language L ∈ TIME(T(n))/a(n) if there exist a machine M ∈ TIME(T(n)) and an advice sequence {α_n}_{n∈ℕ} with α_n ∈ {0,1}^{a(n)} such that

    x ∈ L  ⟺  M(x, α_{|x|}) = 1

for every x ∈ {0,1}^*.

Claim 5 P/poly = ∪_{c,d≥1} TIME(n^c)/(n^d).

Proof
(⊆) Let L ∈ P/poly. For every n ∈ ℕ there is a circuit C_n of size n^d deciding L ∩ {0,1}^n. We take (a description of) C_n as the advice: on input x with advice C_{|x|}, the machine M simply evaluates C_{|x|}(x), which takes polynomial time. The advice sequence {α_n}_{n∈ℕ} := {C_n}_{n∈ℕ} has length bounded by a(n) = n^d (up to the polynomial overhead of encoding the circuit).

(⊇) Fix L ∈ TIME(n^c)/(n^d) with advice sequence {α_n}. By the proof of Theorem 3, for every n there is a circuit C_n of size n^k, for some constant k, such that x ∈ L ⟺ C_n(x, α_n) = 1 where |x| = n. Hard-wiring the advice α_n into C_n yields a poly-size circuit for L ∩ {0,1}^n. This is illustrated in Figure 1.

Figure 1: P/poly ⊇ ∪_{c,d≥1} TIME(n^c)/(n^d): the circuit C_n takes x as input, with the advice α_{|x|} hard-wired.

Remark. Claim 5 explains the name P/poly: "deterministic poly-time taking poly-size advice". Similarly, we can define P/log, P/c (where c stands for constant-size advice), etc. A natural question arises:

Question 6 What can we get from just one bit of advice, i.e. from P/1 = ∪_{c≥1} TIME(n^c)/1?

Question 7 Is NP ⊆ P/poly? This is an open question; we believe the answer is no.

3 The Karp-Lipton Theorem

Theorem 8 (Karp-Lipton Theorem [1]) If NP ⊆ P/poly then PH = Σ_2.

Remark. Note that PH collapses only to Σ_2, not all the way down to NP.
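As a concrete aside on Claim 5's (⊆) direction: the machine M only has to evaluate the advice circuit gate by gate, which takes time linear in the circuit size. A minimal sketch, where the gate encoding (a list of (op, a, b) tuples) is an illustrative assumption of this example, not a canonical format:

```python
from typing import List, Tuple

# A gate is (op, a, b): op in {"AND", "OR", "NOT"}; a, b are wire indices.
# Wires 0..n-1 carry the input bits; each gate appends one new wire.
Gate = Tuple[str, int, int]

def eval_circuit(advice: List[Gate], x: List[int]) -> int:
    """Evaluate the advice circuit on input x; the last wire is the output."""
    wires = list(x)
    for op, a, b in advice:
        if op == "AND":
            wires.append(wires[a] & wires[b])
        elif op == "OR":
            wires.append(wires[a] | wires[b])
        elif op == "NOT":
            wires.append(1 - wires[a])  # b is ignored for NOT gates
    return wires[-1]

# Example advice: a circuit computing XOR of two bits,
# (x0 OR x1) AND NOT(x0 AND x1).
xor_advice = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, 3), ("AND", 2, 4)]
outputs = [eval_circuit(xor_advice, [a, b])
           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert outputs == [0, 1, 1, 0]  # XOR truth table
```

The evaluation visits each gate once, so a size-n^d advice circuit is evaluated in poly(n) time, as the claim requires.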
The theorem is in the style "one unlikely thing implies another unlikely thing"; eventually we will see that many such unlikely statements are equivalent, in a sense similar to the NP-complete problems. It has a sister theorem, Meyer's Theorem, also proved in the early 1980s.

Theorem 9 (Meyer's Theorem [1]) If EXP ⊆ P/poly then EXP = Σ_2.

Proof of Theorem 8: Suppose 3-SAT ∈ P/poly. Then for every n ∈ ℕ there exists a circuit C_n with

    C_n(φ) = 1 if φ is satisfiable, and C_n(φ) = 0 otherwise,

where φ is a 3-CNF on n variables, and |C_n| = poly(n).

We want to show Σ_2 = PH. It suffices to show Π_2 ⊆ Σ_2.¹ Let L be a complete language for Π_2; we take L = Π_2-SAT = {φ a 3-CNF : ∀y ∃z φ(y, z) = 1}. The first step is to transform a circuit that merely checks whether a formula is satisfiable into a circuit that "proves" this by supplying a satisfying assignment.

Claim 10 (Self-reducibility) If for every n ∈ ℕ there exists a circuit C_n such that φ ∈ 3-SAT iff C_n(φ) = 1, then there exists a circuit C′_n such that φ ∈ 3-SAT iff φ(C′_n(φ)) = 1, and SIZE(C′_n) ≤ SIZE(C_n)^{O(1)}.

Proof Assume a circuit C such that C(φ) = 1 iff φ is satisfiable. If φ is satisfiable, we can discover a satisfying assignment bit by bit:

    a_1 = 0 if C(φ(0, x_2, …, x_n)) = 1, and a_1 = 1 otherwise;
    a_2 = 0 if C(φ(a_1, 0, x_3, …, x_n)) = 1, and a_2 = 1 otherwise;

and in general,

    a_i = 0 if C(φ(a_1, …, a_{i−1}, 0, x_{i+1}, …, x_n)) = 1, and a_i = 1 otherwise.

At the end, φ(a_1, …, a_n) = 1. The construction of the multi-output circuit C′ from copies of the single-output circuit C is illustrated in Figure 2.

¹ Recall that L ∈ Σ_2 means x ∈ L ⟺ ∃y ∀z M(x, y, z) = 1 where M ∈ P, and L ∈ Π_2 means x ∈ L ⟺ ∀y ∃z M(x, y, z) = 1 where M ∈ P. Once we have Π_2 ⊆ Σ_2, i.e. any ∀∃⋯ predicate can be rewritten as an ∃∀⋯ predicate, we can simplify the quantified formula in a language definition by repeatedly swapping adjacent ∃, ∀ and merging like quantifiers. For example, for Π_4: ∀∃∀∃ M(·) = ∃∀∀∃ M′(·) = ∃∀′∃ M″(·) = ∃∃∀′ M‴(·) = ∃′∀′ M⁗(·) ∈ Σ_2, where primed quantifiers denote merged ones.
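The bit-by-bit extraction in Claim 10's proof can be sketched in a few lines. Here a brute-force satisfiability check stands in for the circuit C (an assumption of this sketch; in the actual proof C is the hypothesized poly-size circuit):

```python
from itertools import product

def is_satisfiable(phi, n):
    """Stand-in for the circuit C: decide whether phi (a 0/1 function on
    n-bit tuples) has a satisfying assignment, by brute force."""
    return any(phi(bits) for bits in product((0, 1), repeat=n))

def find_assignment(phi, n):
    """Claim 10's rule: a_i = 0 if fixing bit i to 0 keeps the restricted
    formula satisfiable, and a_i = 1 otherwise."""
    fixed = ()
    for i in range(n):
        if is_satisfiable(lambda rest, pre=fixed: phi(pre + (0,) + rest),
                          n - i - 1):
            fixed += (0,)
        else:
            fixed += (1,)
    return fixed

# Example: phi = (x0 OR x1) AND (NOT x0 OR x2), which is satisfiable.
phi = lambda b: int((b[0] or b[1]) and ((not b[0]) or b[2]))
a = find_assignment(phi, 3)
assert phi(a) == 1  # the extracted assignment satisfies phi
```

Each of the n bits costs one query to the satisfiability oracle, which mirrors why C′ consists of polynomially many copies of C.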
Figure 2: Self-reducibility: the multi-output circuit C′ is built from copies of C; the i-th copy receives φ with the previously computed bits a_1, …, a_{i−1} substituted in, and outputs a_i.

This property is called self-reducibility.

Algorithm: Given a 3-CNF φ on n bits, we want to check whether ∀y ∃z φ(y, z) = 1. Since NP ⊆ P/poly, "∃z φ(y, z) = 1" holds iff C_n(φ(y, ·)) = 1, which by Claim 10 holds iff φ(y, C′_n(φ(y, ·))) = 1. Hence

    φ ∈ Π_2-SAT  iff  ∃C′_n ∀y  φ(y, C′_n(φ(y, ·))) = 1,

and the right-hand side is a Σ_2 statement: the existential quantifier guesses the poly-size circuit C′_n, the universal quantifier ranges over y, and the inner predicate is computable in polynomial time. Therefore Π_2 ⊆ Σ_2, and PH = Σ_2.

Next we prove Theorem 9. The intuitive idea: "EXP ⊆ P/poly" means any exponential-time computation can be carried out by some poly-size circuit C_n, where n is the input size. Our goal is then to show that this computation also fits in Σ_2 = {x : ∃y, |y| ≤ |x|^c, ∀z, |z| ≤ |x|^d, M(x, y, z) = 1} with M ∈ P. Recall the proof of the Cook-Levin Theorem: an exponential-size computation is too large to check from the outset, but we can use the "for all" quantifier of Σ_2 to check all local places in the exponential-size computation, while the "exists" quantifier "guesses" the circuit C_n, which is guaranteed to exist when L ∈ EXP ⊆ P/poly.

Proof of Theorem 9: Fix a language L ∈ EXP; we need to show L ∈ Σ_2. Suppose L is computable by a 1-tape TM M ∈ TIME(2^{n^c}), with alphabet Σ = {0, 1} w.l.o.g. Define

    L_M = {(x, i, j) : when running M on input x, the j-th bit of M's configuration at time step i is 1},

where x ∈ {0,1}^n, i ≤ 2^{n^c} indexes M's steps, and j ≤ 2^{n^c} indexes a bit of M's i-th configuration. Note that we can describe (x, i, j) using poly(n) bits. Clearly L_M ∈ EXP ⊆ P/poly, so there is a poly-size circuit C such that

    (x, i, j) ∈ L_M  iff  C(x, i, j) = 1.

Let us come back to L. For any x, what do we need from M's computation? Writing N = 2^{n^c}:

• M(x) has entered the accepting state by time N; that is, M(x) = 1.
• x ∈ L iff there exists a circuit C such that C(x, i, j) correctly describes M's computation for all i, j ∈ [2^{n^c}],

as shown in Figure 3. Note that x, i, and j can each be represented in poly-size. But how can we "trust" the circuit C?
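The locality that the ∀ quantifier will exploit can be illustrated with a toy model (an assumption of this sketch, not the lecture's construction): configurations evolve by a purely local rule, so each tableau bit is checkable from a constant-size window of the previous row, just as the proof's VERIFY predicate checks the TM's transitions.

```python
# Toy "tableau": row i is the configuration at step i, evolving by a
# local rule (XOR of the two neighbours) that stands in for one TM step.

def step(row):
    n = len(row)
    return [row[(j - 1) % n] ^ row[(j + 1) % n] for j in range(n)]

def tableau(x, steps):
    rows = [list(x)]
    for _ in range(steps):
        rows.append(step(rows[-1]))
    return rows

def verify(rows):
    """AND of constant-size local checks over all (i, j): bit j of row i
    must follow from the window around j in row i - 1."""
    n = len(rows[0])
    return all(
        rows[i][j] == (rows[i - 1][(j - 1) % n] ^ rows[i - 1][(j + 1) % n])
        for i in range(1, len(rows)) for j in range(n))

T = tableau([1, 0, 0, 1, 0], steps=8)
assert verify(T)       # an honest tableau passes every local check
T[3][2] ^= 1           # corrupt a single cell...
assert not verify(T)   # ...and some local check fails
```

The point is that `verify` never looks at a whole row at once: each conjunct touches a constant number of cells, which is what lets the universal quantifier over (i, j) do the checking.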
The answer is that, as in the Cook-Levin theorem, computation is local and can be verified by a collection of local tests (e.g. a CNF). In our case the CNF has exponential size, but we can specify the tests in a uniform way. They basically amount to verifying the correct transition between any two consecutive time steps.

Figure 3: M's computation tableau: 2^{n^c} rows (one per time step) of ≤ 2^{n^c} cells each. Row 1 holds the input x_1 x_2 ⋯ x_n followed by blanks; the final row, row 2^{n^c}, is an accepting configuration.

The verifier of the computation is simply an AND of constant-size clauses, one for each pair i, j ∈ [2^{n^c}]: VERIFY(C, x, i, j) checks that C(x, i, j) is consistent with

• the contents of locations j − 1, j, j + 1 in the previous configuration (including whether the head is at one of them), read from the window of C(x, i − 1, ·) around j;
• the state encoded in step i, read from C(x, i, ·);
• the state encoded in step i − 1, read from C(x, i − 1, ·).

Finally, we render "x ∈ L iff M(x) = 1", where M ∈ EXP, as

    x ∈ L  iff  ∃C ∀i, j  VERIFY(C, x, i, j).

VERIFY ∈ P because C has poly-size. Therefore L ∈ Σ_2, which completes the proof.

The Karp-Lipton Theorem's proof works by guessing a satisfying assignment; Meyer's Theorem's proof works by verifying the computation in a succinct way.

4 Circuit Lower Bounds

Definition 11 (Size of a circuit) The size of a circuit C is the number of edges of C.

Theorem 12 Most functions on n bits require circuits of size Ω(2^n / n).

Proof By a counting argument. How many circuits of size S are there? We need to specify S wires (which requires O(S log S) bits) and the operation computed by each gate (an additional O(S) bits), so the number of functions computable in SIZE(S) is at most S^{O(S)} = 2^{O(S log S)}. On the other hand, the number of Boolean functions on n bits is 2^{2^n}. Setting S = ε · 2^n / n for a small enough constant ε > 0 makes S^{O(S)} ≪ 2^{2^n}, so most functions require size > S, i.e. size Ω(2^n / n).
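The counting inequality can be sanity-checked numerically. The constant 3 in the circuit-count bound and the 1/10 factor in S below are illustrative assumptions standing in for the O(·) constants:

```python
import math

def log2_circuits(S, c=3):
    """Assumed upper bound: log2(#circuits of size S) <= c * S * log2(S)."""
    return c * S * math.log2(S)

# log2(#functions on n bits) = 2^n; with S on the order of 2^n / n
# (small constant factor), circuits cover only a tiny fraction of all
# Boolean functions, so most functions need larger circuits.
for n in range(8, 25, 4):
    S = 2 ** n // (10 * n)
    assert log2_circuits(S) < 2 ** n
```

Note the constant matters: with S exactly 2^n / n the bound c · S · log2(S) ≈ c · 2^n would exceed 2^n, which is why the theorem's statement carries the Ω(·) rather than a clean 2^n / n.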