(Design and) Analysis of Algorithms: NP and Complexity Classes

CSE 548: (Design and) Analysis of Algorithms. NP and Complexity Classes. R. Sekar.

Search and Optimization Problems
Many problems of interest are search problems with exponentially (or even infinitely) many candidate solutions: the shortest of the paths between two vertices, a spanning tree of minimal cost, or a combination of variable values that minimizes an objective. We should be surprised that we find efficient (i.e., polynomial-time) solutions to such problems; it seems like these should be the exceptions rather than the norm. What do we do when we hit upon other search problems?

Hard Problems: Where you find yourself ...
"I can't find an efficient algorithm, I guess I'm just too dumb." (Images from "Computers and Intractability" by Garey and Johnson.)

Search and Optimization Problems (continued)
What do we do when we hit upon hard search problems? Can we prove they can't be solved efficiently?

Hard Problems: Where you would like to be ...
"I can't find an efficient algorithm, because no such algorithm is possible." (Images from "Computers and Intractability" by Garey and Johnson.)

Unfortunately, it is very hard to prove that efficient algorithms are impossible. The second-best alternative is to show that the problem is as hard as many other problems that have been worked on by a host of brilliant scientists over a very long time. Much of complexity theory is concerned with categorizing hard problems into such equivalence classes.

P, NP, co-NP, NP-hard and NP-complete

Nondeterminism and Search Problems
Nondeterminism is an oft-used abstraction in language theory: non-deterministic FSAs, non-deterministic PDAs. So, why not non-deterministic Turing machines? The acceptance criterion is analogous to that of NFAs and NPDAs: if there is a sequence of transitions to an accepting state, an NDTM will take that path. What does nondeterminism, a theoretical construct, mean in practice? You can think of it as a boundless potential to search for and identify the correct path that leads to a solution. So it does not change the class of problems that can be solved, only the time/space needed to solve them.

Class NP: Non-deterministic Polynomial Time
How NP machines operate: guess a solution, then verify its correctness in polynomial time. Polynomial-time verifiability is the key property of NP, and it is an ideal formulation for search problems, where correct solutions are hard to find but easy to recognize. Example: Boolean formula satisfiability (SAT). Given a boolean formula in CNF, find an assignment of {true, false} to the variables that makes it true. (Why not DNF?)

What are the bounds of NP?
NP contains only decision problems: problems with a "yes" or "no" answer. Optimization problems are generally not in NP, but we can often find optimal solutions using "binary search" on the objective value. "No" answers are usually not verifiable in polynomial time, so complements of NP problems are often not in NP. Example: UNSAT, i.e., show that a CNF formula is false for all truth assignments. (Whether UNSAT ∈ NP is unknown!) Key point: you cannot negate nondeterministic automata, so we are unable to convert an NDTM for SAT into one that solves UNSAT in nondeterministic polynomial time.
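To make the "guess, then verify in polynomial time" view concrete, here is a minimal Python sketch (not part of the original slides) of a certificate checker for SAT: the certificate is a truth assignment, and checking it takes time linear in the size of the CNF formula. The DIMACS-style clause encoding and the name verify_sat_certificate are illustrative choices; the exponentially hard part, finding the assignment, is exactly what nondeterminism is imagined to guess away.

```python
from typing import Dict, List

# A CNF formula is a list of clauses; each clause is a list of non-zero
# integers, where k stands for the variable x_k and -k for its negation
# (the usual DIMACS-style convention).
Clause = List[int]
CNF = List[Clause]

def verify_sat_certificate(formula: CNF, assignment: Dict[int, bool]) -> bool:
    """Return True iff `assignment` satisfies every clause of `formula`.
    Runs in time linear in the total size of the formula."""
    for clause in formula:
        if not any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause):
            return False  # one falsified clause refutes the certificate
    return True

if __name__ == "__main__":
    # phi = (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    phi = [[1, -2], [2, 3], [-1, -3]]
    print(verify_sat_certificate(phi, {1: True, 2: True, 3: False}))   # True
    print(verify_sat_certificate(phi, {1: True, 2: False, 3: True}))   # False
```

Note that the same checker gives no help with UNSAT: a single assignment that fails to satisfy the formula says nothing about the other 2^n − 1 assignments.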
What are the bounds of NP? (continued)
Existentially quantified vs. universally quantified formulas: NP is good for ∃x P(x), since we can guess a value for x and check whether P(x) holds. NP is not good for ∀x P(x): guessing does not seem to help if you need to check all values of x (a brute-force sketch follows the excerpt below). Negating an existential formula yields a universal formula, so it is no surprise that complements of NP problems are typically not in NP. UNSAT is ∀x ¬P(x) where P is in CNF; VALID is ∀x P(x) where P is in DNF. NP seems to be a good way to separate hard problems from even harder ones!

Co-NP: Problems whose complement is in NP
These are the decision problems that have a polynomially checkable proof when the answer is "no". Biggest open problem: is P = NP? A "yes" answer would also imply co-NP = P.
[Figure: What we think the world looks like.]

From "Algorithms", Lecture 30: NP-Hard Problems [Fa'13] (excerpt embedded in the slides):
[It seems that finding a solution] is harder than just checking that a solution is correct. But nobody knows how to prove it! The Clay Mathematics Institute lists P versus NP as the first of its seven Millennium Prize Problems, offering a $1,000,000 reward for its solution. And yes, in fact, several people have lost their souls attempting to solve this problem.
A more subtle but still open question is whether the complexity classes NP and co-NP are different. Even if we can verify every YES answer quickly, there's no reason to believe we can also verify NO answers quickly. For example, as far as we know, there is no short proof that a boolean circuit is not satisfiable. It is generally believed that NP ≠ co-NP, but nobody knows how to prove it.

30.3 NP-hard, NP-easy, and NP-complete
A problem Π is NP-hard if a polynomial-time algorithm for Π would imply a polynomial-time algorithm for every problem in NP. In other words:
    Π is NP-hard ⟺ if Π can be solved in polynomial time, then P = NP.
Intuitively, if we could solve one particular NP-hard problem quickly, then we could quickly solve any problem whose solution is easy to understand, using the solution to that one special problem as a subroutine. NP-hard problems are at least as hard as any problem in NP.
Calling a problem NP-hard is like saying "If I own a dog, then it can speak fluent English." You probably don't know whether or not I own a dog, but I bet you're pretty sure that I don't own a talking dog. Nobody has a mathematical proof that dogs can't speak English; the fact that no one has ever heard a dog speak English is evidence, as are the hundreds of examinations of dogs that lacked the proper mouth shape and brainpower, but mere evidence is not a mathematical proof. Nevertheless, no sane person would believe me if I said I owned a dog that spoke fluent English. So the statement "If I own a dog, then it can speak fluent English" has a natural corollary: no one in their right mind should believe that I own a dog! Likewise, if a problem is NP-hard, no one in their right mind should believe it can be solved in polynomial time.
Finally, a problem is NP-complete if it is both NP-hard and an element of NP (or "NP-easy"). NP-complete problems are the hardest problems in NP. If anyone finds a polynomial-time algorithm for even one NP-complete problem, then that would imply a polynomial-time algorithm for every NP-complete problem. Literally thousands of problems have been shown to be NP-complete, so a polynomial-time algorithm for one (and therefore all) of them seems incredibly unlikely.
[Figure: More of what we think the world looks like.]
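The quantifier asymmetry is easy to see in code. With a certificate in hand, an ∃ claim such as SAT is checked in time linear in the formula (as in the verifier sketched earlier), but the only obvious way to check a ∀ claim such as UNSAT is to enumerate all 2^n assignments. The Python sketch below (not from the slides or the excerpt; helper names are made up) spells out that brute-force universal check; no polynomially checkable certificate for UNSAT is known.

```python
from itertools import product
from typing import List

Clause = List[int]
CNF = List[Clause]

def satisfies(formula: CNF, bits: List[bool]) -> bool:
    """True iff the assignment (bits[i] is the value of variable i+1)
    makes every clause of the CNF formula true."""
    return all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula)

def brute_force_unsat(formula: CNF, num_vars: int) -> bool:
    """Check the universal claim 'no assignment satisfies the formula'
    the only obvious way: walk through all 2**num_vars assignments."""
    return not any(satisfies(formula, list(bits))
                   for bits in product([False, True], repeat=num_vars))

if __name__ == "__main__":
    print(brute_force_unsat([[1], [-1]], 1))   # True: (x1) and (not x1) is unsatisfiable
    print(brute_force_unsat([[1, 2]], 2))      # False: (x1 or x2) is satisfiable
```

For one or two variables the loop is harmless; for a formula with a few hundred variables it is hopeless, which is the intuition behind placing UNSAT in co-NP rather than NP.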
The class NP ∩ co-NP
Often, problems that are in NP ∩ co-NP are in P. It requires considerable insight and/or structure in the problem to show that something is in both NP and co-NP, and that insight can often be turned into a P-time algorithm. Examples:
Linear programming [1979]: obviously in NP. To see why it is in co-NP, note that a lower bound can be certified by multiplying the constraints by suitable (guessed) numbers and adding them.
Primality testing [2002]: obviously in co-NP; see "primality certificate" for a proof that it is also in NP.
Integer factorization? (A certificate sketch is given at the end of this section.)

NP-hard and NP-complete
A problem Π is NP-hard if the availability of a polynomial-time solution to Π would allow every NP problem to be solved in polynomial time:
    Π is NP-hard ⟺ if Π can be solved in P-time, then P = NP.
NP-complete = NP-hard ∩ NP.
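As a concrete answer to the "Integer factorization?" question above: the decision version "does n have a prime factor ≤ k?" has short certificates for both answers, so it lies in NP ∩ co-NP, yet no polynomial-time algorithm for it is known. The Python sketch below is illustrative only: the function names are made up, and the trial-division is_prime is a stand-in for a genuine polynomial-time primality test (e.g., AKS), which is what a real verifier would use.

```python
from math import isqrt
from typing import List

def is_prime(p: int) -> bool:
    """Trial division; used here only as a stand-in for a genuine
    polynomial-time primality test (e.g., AKS)."""
    if p < 2:
        return False
    return all(p % d != 0 for d in range(2, isqrt(p) + 1))

def verify_yes(n: int, k: int, certificate: int) -> bool:
    """YES certificate for 'n has a prime factor <= k': such a prime itself."""
    p = certificate
    return p <= k and n % p == 0 and is_prime(p)

def verify_no(n: int, k: int, certificate: List[int]) -> bool:
    """NO certificate: the complete prime factorization of n.  If it is
    genuine and every prime in it exceeds k, the answer really is 'no'."""
    product = 1
    for p in certificate:
        if p <= k or not is_prime(p):
            return False
        product *= p
    return product == n

if __name__ == "__main__":
    print(verify_yes(91, 10, 7))        # True:  7 is a prime factor of 91 and 7 <= 10
    print(verify_no(91, 5, [7, 13]))    # True:  both prime factors of 91 exceed 5
    print(verify_no(91, 10, [7, 13]))   # False: 7 <= 10, so the claim 'no' is wrong
```

Given such a decision oracle, binary search on k recovers the smallest prime factor of n, so this decision problem captures factoring; its membership in NP ∩ co-NP is one reason factoring is not believed to be NP-complete.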