CS151 Complexity Theory


Lecture 13 (May 11, 2021)

Relationship to other classes

• Question: is #P hard because it entails finding NP witnesses?
  …or is counting difficult by itself?

Bipartite Matchings

• Definition:
  – G = (U, V, E) bipartite graph with |U| = |V|
  – a perfect matching in G is a subset M ⊆ E that touches every node, and no two edges in M share an endpoint

• #MATCHING: given a bipartite graph G = (U, V, E), how many perfect matchings does it have?

Theorem: #MATCHING is #P-complete.

• But we can find a perfect matching in polynomial time!
  – so counting itself must be difficult

Cycle Covers

• cycle cover: a collection of node-disjoint directed cycles that touch every node

• #CYCLE-COVER: given a directed graph G = (V, E), how many cycle covers does it have?

• Claim: 1-1 correspondence between cycle covers in G' = (V, E') and perfect matchings in G = (U, V, E)
  – #MATCHING and #CYCLE-COVER are parsimoniously reducible to each other

[Figure: a bipartite graph G = (U, V, E) on vertices {1, …, 5} and the directed graph G' = (V, E') whose cycle covers correspond to G's perfect matchings.]

Theorem: #CYCLE-COVER is #P-complete.
  – this implies #MATCHING is #P-complete

Cycle Cover is #P-complete

• variable gadget: every cycle cover includes the left cycle (xi) or the right cycle (¬xi)

• clause gadget: a cycle cover cannot use all three outer edges
  – and each of the 7 ways to exclude at least one outer edge gives exactly 1 cover using those external edges

• Proof outline (reduce from #SAT), e.g. from (¬x1 ∨ x2 ∨ ¬x3) ∧ (¬x3 ∨ x1) ∧ … ∧ (x3 ∨ ¬x2):
  – the clause gadget corresponding to (A ∨ B ∨ C) has an "xor" gadget between its 3 outer edges and A, B, C
  – an xor gadget between edges (u, v) and (u', v') ensures that exactly one of the two edges is in the cover
  – N.B. must avoid reducing SAT to MATCHING!

[Figure: variable gadgets for x1, x2, x3 connected through xor gadgets to the clause gadgets.]

• Introduce edge weights
  – the weight of a cycle cover is the product of the weights of its edges

• "implement" the xor gadget so that
  – the weight of a cycle cover that "obeys" the xor is multiplied by 4 (mod N)
  – the weight of a cycle cover that "violates" the xor is multiplied by N (a large integer)

• Weighted xor gadget: uses edge weights N-1, 2, and 3 (unlabeled weights are 1)

• Simulating positive edge weights:
  – need to handle 2, 3, 4, 5, …, N-1
  – e.g. weight 2^k: a chain of k weight-2 edges (k times)
  – with m = # xor gadgets and n = # variables, choose N > 4^m · 2^n
  – # covers (mod N) = 4^m · (# satisfying assignments)

New Topic

• proof systems
  L = { (A, 1^k) : A is a true mathematical assertion with a proof of length k }
• interactive proofs and their power

What is a "proof"?
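Before the new topic, the counting claims above can be sanity-checked by brute force: the total weight of all cycle covers of a weighted directed graph is the permanent of its weight matrix, and with 0/1 weights this simultaneously counts cycle covers and perfect matchings of the corresponding bipartite graph. A minimal sketch (the function name is ours; exponential time, tiny graphs only):

```python
from itertools import permutations

def total_cover_weight(w, N):
    """Total weight of all cycle covers of a weighted directed graph,
    mod N.  w[i][j] is the integer weight of edge i -> j (0 = absent);
    a cover's weight is the product of its edge weights.  A cycle
    cover is exactly a permutation s with every edge i -> s[i]
    present, so this computes the permanent of w mod N.  With weights
    in {0, 1} it simply counts cycle covers, i.e. perfect matchings
    of the corresponding bipartite graph."""
    n = len(w)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = prod * w[i][s[i]] % N
        total = (total + prod) % N
    return total

# unweighted: 0/1 matrix -> number of perfect matchings / cycle covers
A = [[1, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
print(total_cover_weight(A, 10**9))   # 2

# weighted: N - 1 acts as -1 mod N, as the weighted xor gadget exploits
N = 101
W = [[2, 3],
     [1, N - 1]]
print(total_cover_weight(W, N))       # (2*100 + 3*1) mod 101 = 1
```

Taking products of edge weights mod N is exactly the arithmetic the weighted xor gadget relies on; the example matrices here are illustrative, not the gadgets from the reduction.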
• Arthur-Merlin games

• complexity insight: a "proof" is meaningless unless it can be efficiently verified

Proof systems

• given a language L, the goal is to prove x ∈ L

• a proof system for L is a verification algorithm V
  – completeness: x ∈ L ⇒ ∃ proof, V accepts (x, proof)
    "true assertions have proofs"
  – soundness: x ∉ L ⇒ ∀ proof*, V rejects (x, proof*)
    "false assertions have no proofs"
  – efficiency: ∀ x, proof: V(x, proof) runs in time polynomial in |x|

Classical Proofs

• the previous definition describes a "classical" proof system

• recall: L ∈ NP iff expressible as
  L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R } with R ∈ P

• NP is the set of languages with classical proof systems (R is the verifier)

Interactive Proofs

• Two new ingredients:
  – randomness: the verifier tosses coins, and errs with some small probability
  – interaction: rather than "reading" a proof, the verifier interacts with a computationally unbounded prover

• NP proof systems lie in this framework: the prover sends a proof, and the verifier does not use randomness

• an interactive proof system for L is an interactive protocol (P, V)
  – common input: x; the prover is computationally unbounded; # rounds = poly(|x|); at the end the verifier accepts or rejects
  – completeness: x ∈ L ⇒ Pr[V accepts in (P, V)(x)] ≥ 2/3
  – soundness: x ∉ L ⇒ ∀ P*, Pr[V accepts in (P*, V)(x)] ≤ 1/3
  – efficiency: V is a p.p.t. machine

• repetition: can reduce the error to any ε

IP = { L : L has an interactive proof system }

• Observations/questions:
  – philosophically interesting: captures more broadly what it means to be convinced a statement is true
  – clearly NP ⊆ IP. Potentially larger. How much larger?
  – if larger, randomness is essential (why?)
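The repetition bullet can be made quantitative: run the protocol t times independently and accept by majority vote, and the error probability decays exponentially in t. A small exact computation (the function name is ours):

```python
from math import comb

def majority_error(p_wrong, t):
    """Exact probability that a t-round majority vote errs when each
    independent round errs with probability p_wrong (t odd).  The
    majority is wrong iff more than t/2 rounds are wrong -- a binomial
    tail, which for p_wrong = 1/3 shrinks exponentially in t."""
    return sum(comb(t, j) * p_wrong**j * (1 - p_wrong)**(t - j)
               for j in range(t // 2 + 1, t + 1))

print(round(majority_error(1/3, 5), 3))    # 0.21: already better than 1/3
print(majority_error(1/3, 101) < 0.01)     # True: error below any fixed eps
```

This is also why the constants 2/3 and 1/3 in the definition are arbitrary: any acceptance gap bounded away from 1/2 can be amplified the same way.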
Graph Isomorphism

• graphs G0 = (V, E0) and G1 = (V, E1) are isomorphic (G0 ≅ G1) if there exists a permutation π : V → V for which
  (x, y) ∈ E0 ⇔ (π(x), π(y)) ∈ E1

[Figure: two isomorphic 4-node graphs with relabeled vertices.]

• GI = { (G0, G1) : G0 ≅ G1 }
  – in NP
  – not known to be in P, or NP-complete

• GNI = complement of GI
  – not known to be in NP

Theorem (GMW): GNI ∈ IP
  – an indication that IP may be more powerful than NP

GNI in IP

• interactive proof system for GNI, on input (G0, G1):
  – verifier: flip a coin c ∈ {0,1}, pick a random permutation π, send H = π(Gc)
  – prover: reply r = 0 if H ≅ G0, else r = 1
  – verifier: accept iff r = c

• completeness:
  – if G0 is not isomorphic to G1, then H is isomorphic to exactly one of (G0, G1)
  – so the prover will choose the correct r

• soundness:
  – if G0 ≅ G1, then the prover sees the same distribution on H for c = 0 and c = 1
  – no information on c ⇒ any prover P* can succeed with probability at most 1/2

The power of IP

• We showed GNI ∈ IP

• GNI ∈ IP suggests IP is more powerful than NP, since we don't know how to show GNI ∈ NP

• GNI is in coNP

Theorem (LFKN): coNP ⊆ IP

• Proof idea: input φ(x1, x2, …, xn)
  – prover: "I claim φ has k satisfying assignments"
  – true iff
    • φ(0, x2, …, xn) has k0 satisfying assignments
    • φ(1, x2, …, xn) has k1 satisfying assignments
    • k = k0 + k1
  – prover sends k0, k1
  – verifier sends a random c ∈ {0,1}
  – prover recursively proves "φ' = φ(c, x2, …, xn) has kc satisfying assignments"
  – at the end, the verifier can check for itself

• Analysis of the proof idea:
  – Completeness: φ(x1, x2, …, xn) has k satisfying assignments ⇒ accept with probability 1
  – Soundness: φ(x1, x2, …, xn) does not have k satisfying assignments ⇒ accept with probability ≤ 1 - 2^(-n)
  – Why? k may be off by only one; the verifier only catches the prover if the coin flips c are successive bits of this assignment

• Solution to the problem (ideas):
  – replace {0,1}^n with (Fq)^n
  – the verifier substitutes a random field element at each step
  – the vast majority of field elements catch a cheating prover (rather than just 1)

Theorem: L = { (φ, k) : CNF φ has exactly k satisfying assignments } is in IP

• First step: arithmetization
  – transform φ(x1, …, xn) into a polynomial pφ(x1, x2, …, xn) of degree d over a field Fq, q prime > 2^n
  – recursively:
    • xi → xi
    • ¬φ → (1 - pφ)
    • φ ∧ φ' → (pφ)(pφ')
    • φ ∨ φ' → 1 - (1 - pφ)(1 - pφ')
  – for all x ∈ {0,1}^n we have pφ(x) = φ(x)
  – degree d ≤ |φ|
  – pφ(x) can be computed in poly time from φ and x

• Prover wishes to prove:
  k = Σx1=0,1 Σx2=0,1 … Σxn=0,1 pφ(x1, x2, …, xn)

• Define: kz = Σx2=0,1 … Σxn=0,1 pφ(z, x2, …, xn)

• First attempt:
  – prover sends kz for all z ∈ Fq
  – verifier checks that k0 + k1 = k, then sends a random z ∈ Fq
  – continue with a proof that kz = Σx2=0,1 … Σxn=0,1 pφ(z, x2, …, xn)
  – at the end: the verifier checks for itself
  – a problem: can't send kz for all z ∈ Fq

The actual protocol, on input (φ, k):

  – prover sends p1(x) = Σx2,…,xn ∈ {0,1} pφ(x, x2, …, xn)
  – verifier checks p1(0) + p1(1) = k, then picks a random z1 ∈ Fq
  – prover sends p2(x) = Σx3,…,xn ∈ {0,1} pφ(z1, x, x3, …, xn)
  – verifier checks p2(0) + p2(1) = p1(z1), then picks a random z2 ∈ Fq
  – prover sends p3(x) = Σx4,…,xn ∈ {0,1} pφ(z1, z2, x, x4, …, xn)
  – verifier checks p3(0) + p3(1) = p2(z2), …
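To see the pieces fit together, the protocol can be simulated end-to-end on a toy CNF. The sketch below is our own illustrative implementation, not code from the lecture: it uses the Mersenne prime 2^31 - 1 as q (large enough to play the role of q > 2^n for tiny n), represents each round polynomial by its values at the points 0, …, d, and lets the verifier evaluate it at the random challenge by Lagrange interpolation.

```python
import random

Q = 2**31 - 1      # a prime; stands in for q > 2^n for our tiny n

def p_phi(clauses, x):
    """Arithmetization from the slides, evaluated at a point x in F_Q^n:
    literal +i -> x_i, literal -i -> 1 - x_i,
    (l1 v l2 v ...) -> 1 - prod_j (1 - p_lj),
    CNF -> product of its clause polynomials (all mod Q)."""
    val = 1
    for cl in clauses:
        miss = 1                      # prod over the clause of (1 - p_literal)
        for lit in cl:
            v = x[abs(lit) - 1] % Q
            p = v if lit > 0 else (1 - v) % Q
            miss = miss * ((1 - p) % Q) % Q
        val = val * ((1 - miss) % Q) % Q
    return val

def interpolate(ys, z):
    """Evaluate at z the unique degree-<=d polynomial taking the
    values ys[0..d] at the points 0..d (Lagrange, mod Q)."""
    d = len(ys) - 1
    total = 0
    for i in range(d + 1):
        num = den = 1
        for j in range(d + 1):
            if j != i:
                num = num * ((z - j) % Q) % Q
                den = den * ((i - j) % Q) % Q
        total = (total + ys[i] * num % Q * pow(den, Q - 2, Q)) % Q
    return total

def sumcheck(clauses, n, k):
    """Run the protocol with an honest prover on the claim 'phi has
    exactly k satisfying assignments'; return the verifier's verdict.
    Each variable appears in <= len(clauses) clauses, so every round
    polynomial has degree <= d = len(clauses)."""
    d = len(clauses)
    zs, claim = [], k % Q
    for i in range(n):
        # prover: round polynomial p_{i+1}(x), sent as values at 0..d
        ys = []
        for pt in range(d + 1):
            s = 0
            for suf in range(2 ** (n - i - 1)):
                rest = [(suf >> b) & 1 for b in range(n - i - 1)]
                s = (s + p_phi(clauses, zs + [pt] + rest)) % Q
            ys.append(s)
        # verifier: check p(0) + p(1) against the running claim,
        # then send a random field element as the next challenge
        if (ys[0] + ys[1]) % Q != claim:
            return False
        z = random.randrange(Q)
        claim, zs = interpolate(ys, z), zs + [z]
    # final check: the verifier evaluates p_phi at the random point itself
    return claim == p_phi(clauses, zs)

# phi = (x1 v x2) ^ (~x1 v x3): 4 satisfying assignments out of 8
phi = [[1, 2], [-1, 3]]
print(sumcheck(phi, 3, 4))   # True
print(sumcheck(phi, 3, 5))   # False: a wrong count is rejected
```

Here the honest prover still sums over exponentially many boolean suffixes; the point of the protocol is that the verifier's work per round is polynomial.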