Zero-Knowledge Interactive Proof Systems for New Lattice Problems


Zero-Knowledge Interactive Proof Systems for New Lattice Problems

Claude Crépeau and Raza Ali Kazmi
McGill University
[email protected], [email protected]

Abstract. In this work we introduce a new hard problem in lattices called the Isometric Lattice Problem (ILP) and reduce Linear Code Equivalence over prime fields and Graph Isomorphism to this problem. We also show that this problem has an (efficient-prover) perfect zero-knowledge interactive proof; this is the only hard problem in lattices known to have this property with respect to malicious verifiers. Under the assumption that the polynomial hierarchy does not collapse, we also show that ILP cannot be NP-complete. We finally introduce a variant of ILP over the rational radicands and provide similar results for this new problem.

1 Introduction

Zero-knowledge interactive proof systems (ZKIP) [6] have numerous applications in cryptography, such as identification schemes, authentication schemes, multiparty computation, etc. Apart from cryptographic applications, these proof systems play an important part in the study of complexity theory. The first IP for lattice problems (coGapCVP_γ, coGapSVP_γ) was presented by Goldreich and Goldwasser [13]. However, these proofs are only honest-verifier perfect zero-knowledge and are known to have inefficient provers. Micciancio and Vadhan [10] presented interactive proofs for GapCVP_γ and GapSVP_γ; these proofs are statistical zero-knowledge and have efficient provers as well. In this paper we introduce a new hard problem called the Isometric Lattice Problem (ILP) and present IP systems for it. These proof systems are perfect zero-knowledge and have efficient provers. We show that a variant of ILP over the integers is at least as hard as Graph Isomorphism (GI) [4, 5] and Linear Code Equivalence (LCE) [7, 5].
This is the only hard problem known in lattices that has a (malicious-verifier) perfect zero-knowledge IP system with an efficient prover. We also show that ILP is unlikely to be NP-complete. Finally, we introduce another variant of ILP over the rational radicands and provide similar results for this problem.

(Supported in part by Québec's FRQNT, Canada's NSERC and CIFAR. "Efficient prover" means the prover runs in probabilistic polynomial time given a certificate for the input string.)

2 Notations

For any matrix A, we denote its transpose by A^t. Let O(n, ℝ) = {Q ∈ ℝ^{n×n} : Q^t Q = I} denote the group of n × n orthogonal matrices over ℝ. Let GL_k(ℤ) denote the group of k × k invertible (unimodular) matrices over ℤ, and let GL_k(F_q) denote the set of k × k invertible matrices over the finite field F_q. Let P_n denote the set of n × n permutation matrices, and let σ_n be the set of all permutations of {1, …, n}; for π ∈ σ_n, we denote by P_π the corresponding n × n permutation matrix. P(n, F_q) denotes the set of n × n monomial matrices (exactly one nonzero entry in each row and each column) over F_q. D_n is the set of diagonal matrices D = diag(ε_1, …, ε_n) with ε_i = ±1 for i = 1, …, n. For a real vector v = (v_1, …, v_n) we denote its Euclidean norm by ‖v‖ = √(v_1² + ⋯ + v_n²) and its max-norm by ‖v‖_∞ = max_i |v_i|; for any matrix B = [b_1 | b_2 | … | b_k] ∈ ℝ^{n×k} we define its norm by ‖B‖ = max_i ‖b_i‖. For any ordered set of linearly independent vectors {b_1, b_2, …, b_k}, we denote by {b̃_1, b̃_2, …, b̃_k} its Gram-Schmidt orthogonalization.

2.1 Lattices

Let ℝ^n be an n-dimensional Euclidean space and let B ∈ ℝ^{n×k} be a matrix of rank k. The lattice L(B) is the set of all integer combinations of the columns of B:

    L(B) = {Bx : x ∈ ℤ^k}.

The integers n and k are called the dimension and rank of L(B). A lattice is called full-dimensional if k = n. Two lattices L(B_1) and L(B_2) are equal if and only if there exists a unimodular matrix U ∈ ℤ^{k×k} such that B_1 = B_2 U.
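To make the equivalence condition concrete, here is a minimal sketch (not from the paper) that tests whether two 2 × 2 integer bases generate the same lattice. It uses the column convention L(B) = {Bx : x ∈ ℤ^k}, under which L(B_1) = L(B_2) exactly when U = B_2^{-1} B_1 is an integer matrix with determinant ±1:

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    """Exact inverse of a 2x2 matrix, using rational arithmetic."""
    d = Fraction(det2(M))
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def same_lattice(B1, B2):
    """L(B1) == L(B2) iff U = B2^{-1} B1 is integral with det(U) = +-1."""
    U = matmul(inv2(B2), B1)
    if any(x.denominator != 1 for row in U for x in row):
        return False
    return abs(det2(U)) == 1

B       = [[1, 0], [0, 1]]
B_equiv = [[1, 1], [0, 1]]   # differs from B by a unimodular transform
B_sub   = [[2, 0], [0, 1]]   # generates a strict sublattice
print(same_lattice(B, B_equiv))  # True
print(same_lattice(B, B_sub))    # False
```

Exact rational arithmetic via `Fraction` keeps the integrality test free of floating-point error.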
2.2 q-ary Lattices

A lattice L is called q-ary if it satisfies qℤ^n ⊆ L ⊆ ℤ^n for a positive integer q. In other words, membership of a vector v in L is determined by v mod q. Let G = [g_1 | … | g_k] ∈ ℤ_q^{n×k} be an n × k matrix of rank k over ℤ_q. We define below two important families of q-ary lattices used in cryptography:

    Λ_q(G) = {y ∈ ℤ^n : y ≡ G s (mod q) for some vector s ∈ ℤ^k},
    Λ_q^⊥(G) = {y ∈ ℤ^n : y^t G ≡ 0 (mod q)}.

A basis B of Λ_q(G) is

    B = [g_1 | … | g_k | b_{k+1} | … | b_n] ∈ ℤ^{n×n},

where b_j = (0, …, q, …, 0) ∈ ℤ^n is the vector whose j-th coordinate equals q and all other coordinates are 0, for k + 1 ≤ j ≤ n. A basis of Λ_q^⊥(G) is given by q (B^{-1})^t.

2.3 Discrete Gaussian Distribution on Lattices

For any s > 0 and c ∈ ℝ^n, we define a Gaussian function on ℝ^n centered at c with parameter s:

    ∀x ∈ ℝ^n, ρ_{s,c}(x) = e^{−π‖x−c‖²/s²}.

Let L be any n-dimensional lattice and let ρ_{s,c}(L) = Σ_{y∈L} ρ_{s,c}(y). We define the discrete Gaussian distribution on L by

    ∀x ∈ L, D_{s,c,L}(x) = ρ_{s,c}(x) / ρ_{s,c}(L).

Theorem 1. Given a basis B = [b_1 | … | b_k] ∈ ℝ^{n×k} of an n-dimensional lattice L, a parameter s ≥ ‖B̃‖ · ω(√(log n)) and a center c ∈ ℝ^n, the algorithm SampleD ([2], Section 4.2, page 14) outputs a sample from a distribution that is statistically close to D_{s,c,L}.

Theorem 2. There is a deterministic polynomial-time algorithm that, given an arbitrary basis {b_1, …, b_k} of an n-dimensional lattice L and a set of linearly independent lattice vectors S = [s_1 | s_2 | … | s_k] in L ordered so that ‖s_1‖ ≤ ‖s_2‖ ≤ ⋯ ≤ ‖s_k‖, outputs a basis {r_1, …, r_k} of L such that ‖r̃_i‖ ≤ ‖s̃_i‖ for 1 ≤ i ≤ k.

2.4 Orthogonal Matrices and Givens Rotations

A Givens rotation G_{(i,j,θ)}, i ≠ j, is an orthogonal n × n matrix that agrees with the identity matrix everywhere except in rows and columns i and j, where it acts as a plane rotation by the angle θ.
The non-zero elements of a Givens matrix G_{(i,j,θ)} are given by g_{k,k} = 1 for k ≠ i, j, and

    g_{i,i} = g_{j,j} = c,   g_{i,j} = s = −g_{j,i}   for i < j,

where c = cos(θ) and s = sin(θ). The product G_{(i,j,θ)} v represents a counter-clockwise rotation of the vector v in the (i, j) plane by angle θ. Moreover, only the i-th and j-th entries of v are affected; the rest remain unchanged. Any orthogonal matrix Q ∈ ℝ^{n×n} can be written as a product of n(n−1)/2 Givens matrices and a diagonal matrix D ∈ D_n:

    Q = D · G_{(1,2,θ_{1,2})} ⋯ G_{(1,n,θ_{1,n})} · G_{(2,3,θ_{2,3})} ⋯ G_{(2,n,θ_{2,n})} ⋯ G_{(n−1,n,θ_{n−1,n})}.

The angles θ_{i,j} ∈ [0, 2π], 1 ≤ i < j ≤ n, are called the angles of rotation.

2.5 Properties of Givens Matrices

1. Additivity: for angles θ, φ ∈ [0, 2π] and any vector v ∈ ℝ^n,

    G_{(i,j,φ)} · G_{(i,j,θ)} v = G_{(i,j,φ+θ)} v.

2. Commutativity: for angles θ_{i,j}, θ_{j,i}, θ_{y,z} ∈ [0, 2π] with {i, j} ∩ {y, z} = ∅ or {i, j} = {y, z},

    G_{(i,j,θ_{i,j})} · G_{(j,i,θ_{j,i})} v = G_{(j,i,θ_{j,i})} · G_{(i,j,θ_{i,j})} v,
    G_{(i,j,θ_{i,j})} · G_{(y,z,θ_{y,z})} v = G_{(y,z,θ_{y,z})} · G_{(i,j,θ_{i,j})} v.

3. Linearity: for any Givens matrix G_{(i,j,θ_{i,j})}, any vector v ∈ ℝ^n and any permutation π ∈ σ_n,

    G_{(π(i),π(j),θ_{i,j})} · P_π v = P_π · G_{(i,j,θ_{i,j})} v,

where P_π is the permutation matrix corresponding to π.

2.6 The Set R

Computationally it is not possible to work over arbitrary real numbers, as they require infinite precision. However, there are reals that can be represented finitely, and one can add and multiply them without losing any precision. For example, we can represent the numbers √7 and 5^{1/4} as ⟨2, 7⟩ and ⟨4, 5⟩.
In general, a real number r of the form

    r = a_1 (x_{11} + (x_{21} + ⋯ + (x_{k_1 1})^{1/n_{k_1 1}} ⋯)^{1/n_{21}})^{1/n_{11}}
      + a_2 (x_{12} + (x_{22} + ⋯ + (x_{k_2 2})^{1/n_{k_2 2}} ⋯)^{1/n_{22}})^{1/n_{12}}
      + ⋯
      + a_l (x_{1l} + (x_{2l} + ⋯ + (x_{k_l l})^{1/n_{k_l l}} ⋯)^{1/n_{2l}})^{1/n_{1l}},

where the a_j's and n_{ij}'s are in ℚ, the x_{ij}'s are in ℚ⁺ ∪ {0}, and l, k_1, …, k_l ∈ ℕ, can be represented as

    r = a_1 ⟨n_{11}, x_{11} + ⟨n_{21}, x_{21} + ⋯ + ⟨n_{k_1 1}, x_{k_1 1}⟩ ⋯⟩⟩
      + a_2 ⟨n_{12}, x_{12} + ⟨n_{22}, x_{22} + ⋯ + ⟨n_{k_2 2}, x_{k_2 2}⟩ ⋯⟩⟩
      + ⋯
      + a_l ⟨n_{1l}, x_{1l} + ⟨n_{2l}, x_{2l} + ⋯ + ⟨n_{k_l l}, x_{k_l l}⟩ ⋯⟩⟩.

We call such numbers rational radicands and denote the set of all rational radicands by R. In this notation any rational number x can be represented as ±⟨1, x⟩.

2.7 The Set O(n, R)

Let O(n, ℝ) denote the set of n × n orthogonal matrices over ℝ. In this subsection we will define a subset O(n, R) ⊂ O(n, ℝ) that has the following properties:

– Any orthogonal matrix Q ∈ O(n, R) has a finite representation.
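As an illustration (not part of the paper's formal development), a nested radicand ⟨n, x + ⟨…⟩⟩ can be evaluated numerically by the obvious recursion; the tuple encoding `(n, x, inner)` below is a hypothetical choice of data structure:

```python
import math

def eval_radicand(term):
    """Evaluate a nested radicand (n, x, inner): the n-th root of
    x plus the value of the inner radicand (inner is None at the bottom)."""
    n, x, inner = term
    base = x + (eval_radicand(inner) if inner is not None else 0.0)
    return base ** (1.0 / n)

# sqrt(7) as <2, 7> and the fourth root of 5 as <4, 5>
print(eval_radicand((2, 7, None)))       # ~ 2.6457513110645907
print(eval_radicand((4, 5, None)))       # ~ 1.4953487812212205
# a nested example: <2, 1 + <2, 2>> = sqrt(1 + sqrt(2))
print(eval_radicand((2, 1, (2, 2, None))))
```

Of course, the point of the set R is that such numbers admit exact symbolic arithmetic; the float evaluation here is only a sanity check of the representation.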
Recommended publications
  • Zero Knowledge and Circuit Minimization
Electronic Colloquium on Computational Complexity, Revision 1 of Report No. 68 (2014). Zero Knowledge and Circuit Minimization. Eric Allender (Department of Computer Science, Rutgers University, USA, [email protected]) and Bireswar Das (IIT Gandhinagar, India, [email protected]). Abstract. We show that every problem in the complexity class SZK (Statistical Zero Knowledge) is efficiently reducible to the Minimum Circuit Size Problem (MCSP). In particular, Graph Isomorphism lies in RP^MCSP. This is the first theorem relating the computational power of Graph Isomorphism and MCSP, despite the long history these problems share as candidate NP-intermediate problems. 1 Introduction. For as long as there has been a theory of NP-completeness, there have been attempts to understand the computational complexity of the following two problems: – Graph Isomorphism (GI): Given two graphs G and H, determine if there is a permutation τ of the vertices of G such that τ(G) = H. – The Minimum Circuit Size Problem (MCSP): Given a number i and a Boolean function f on n variables, represented by its truth table of size 2^n, determine if f has a circuit of size i. (There are different versions of this problem depending on precisely what measure of "size" one uses (such as counting the number of gates or the number of wires) and on the types of gates that are allowed, etc. For the purposes of this paper, any reasonable choice can be used.) Cook [Coo71] explicitly considered the graph isomorphism problem and mentioned that he "had not been able" to show that GI is NP-complete.
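The GI definition above can be made concrete with a brute-force search over all vertex permutations; a small sketch (exponential time, illustration only; the adjacency-matrix encoding is an assumption, not from the abstract):

```python
from itertools import permutations

def isomorphic(G, H):
    """Brute-force Graph Isomorphism on adjacency matrices: search for a
    permutation tau of the vertices with tau(G) = H."""
    n = len(G)
    if len(H) != n:
        return False
    for tau in permutations(range(n)):
        if all(G[u][v] == H[tau[u]][tau[v]]
               for u in range(n) for v in range(n)):
            return True
    return False

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path with centre vertex 1
path2    = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]   # same path, centre relabelled to 2
print(isomorphic(path, path2))    # True
print(isomorphic(triangle, path)) # False
```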
  • Is It Easier to Prove Theorems That Are Guaranteed to Be True?
Is it Easier to Prove Theorems that are Guaranteed to be True? Rafael Pass (Cornell Tech, [email protected]) and Muthuramakrishnan Venkitasubramaniam (University of Rochester, [email protected]). April 15, 2020. Abstract. Consider the following two fundamental open problems in complexity theory: • Does a hard-on-average language in NP imply the existence of one-way functions? • Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes. Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems, namely distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true. This result follows from a more general study of interactive puzzles, a generalization of average-case hardness in NP, and in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of O(1)-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.
  • The Complexity of Nash Equilibria by Constantinos Daskalakis Diploma
    The Complexity of Nash Equilibria by Constantinos Daskalakis Diploma (National Technical University of Athens) 2004 A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley Committee in charge: Professor Christos H. Papadimitriou, Chair Professor Alistair J. Sinclair Professor Satish Rao Professor Ilan Adler Fall 2008 The dissertation of Constantinos Daskalakis is approved: Chair Date Date Date Date University of California, Berkeley Fall 2008 The Complexity of Nash Equilibria Copyright 2008 by Constantinos Daskalakis Abstract The Complexity of Nash Equilibria by Constantinos Daskalakis Doctor of Philosophy in Computer Science University of California, Berkeley Professor Christos H. Papadimitriou, Chair The Internet owes much of its complexity to the large number of entities that run it and use it. These entities have different and potentially conflicting interests, so their interactions are strategic in nature. Therefore, to understand these interactions, concepts from Economics and, most importantly, Game Theory are necessary. An important such concept is the notion of Nash equilibrium, which provides us with a rigorous way of predicting the behavior of strategic agents in situations of conflict. But the credibility of the Nash equilibrium as a framework for behavior-prediction depends on whether such equilibria are efficiently computable. After all, why should we expect a group of rational agents to behave in a fashion that requires exponential time to be computed? Motivated by this question, we study the computational complexity of the Nash equilibrium. We show that computing a Nash equilibrium is an intractable problem.
  • Structure Vs. Hardness Through the Obfuscation Lens∗
Electronic Colloquium on Computational Complexity, Revision 3 of Report No. 91 (2016). Structure vs. Hardness through the Obfuscation Lens. Nir Bitansky, Akshay Degwekar, Vinod Vaikuntanathan. Abstract. Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often puts them in low (so-called structured) complexity classes such as NP ∩ coNP or statistical zero-knowledge (SZK). Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes: they imply hard problems in NP ∩ coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer. We show that the above primitives, and many others, do not imply hard problems in NP ∩ coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO. Keywords: Indistinguishability Obfuscation, Statistical Zero-Knowledge, NP ∩ coNP, Structured Hardness, Collision-Resistant Hashing. MIT. E-mail: {nirbitan,akshayd,vinodv}@mit.edu. Research supported in part by NSF Grants CNS-1350619 and CNS-1414119, Alfred P.
  • Models of Parallel Computation and Parallel Complexity
Models of Parallel Computation and Parallel Complexity. George Lentaris. A thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Logic, Algorithms and Computation. Department of Mathematics, National & Kapodistrian University of Athens. Supervised by: Asst. Prof. Dionysios Reisis, NKUA; Prof. Stathis Zachos, NTUA. Athens, July 2010. Abstract. This thesis reviews selected topics from the theory of parallel computation. The research begins with a survey of the proposed models of parallel computation. It examines the characteristics of each model and discusses its use either for theoretical studies or for practical applications. Subsequently, it employs common simulation techniques to evaluate the computational power of these models. The simulations establish certain model relations before advancing to a detailed study of parallel complexity theory, which is the subject of the second part of this thesis. The second part examines classes of feasible highly parallel problems and investigates the limits of parallelization. It is concerned with the benefits of parallel solutions and the extent to which they can be applied to all problems. It analyzes the parallel complexity of various well-known tractable problems and discusses the automatic parallelization of efficient sequential algorithms. Moreover, it compares the models with respect to the cost of realizing parallel solutions. Overall, the thesis presents various class inclusions, problem classifications and open questions of the field. Committee: Asst. Prof. Dionysios Reisis, Prof. Stathis Zachos, Prof. Vassilis Zissimopoulos, Lect. Aris Pagourtzis. Graduate Program in Logic, Algorithms and Computation, Departments of Mathematics, Informatics, M.I.TH.E.
  • P Versus NP, and More
1 P versus NP, and More. Great Ideas in Theoretical Computer Science, Saarland University, Summer 2014. If you have tried to solve a crossword puzzle, you know that it is much harder to solve it than to verify a solution provided by someone else. Likewise, proving a mathematical statement by yourself is usually much harder than verifying a proof provided by your instructor. A usual explanation for this difference is that solving a puzzle, or proving a mathematical statement, requires some creativity, which makes it much harder than verifying a solution. This lecture studies several complexity classes, which classify problems with respect to their hardness. We will define a complexity class P, which consists of the problems that can be solved efficiently, and a complexity class NP, which consists of the problems that can be verified efficiently. One of the outstanding open questions in mathematics is whether P = NP, i.e., whether verifying a statement is indeed much easier than proving a statement. We will discuss the efforts theoreticians have made towards solving this question over the past 40 years, and its consequences. 1.1 P and NP. In the early days of computer science, various algorithm design techniques were discovered which led to efficient solutions of problems. We list a few here. Problem 1.1 (Euler Cycle Problem). Given a graph G, is there a closed cycle which visits each edge of G exactly once? It is not difficult to prove that such a cycle exists if and only if the degree of every vertex in G is even. Problem 1.2 (Shortest Path).
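Euler's criterion mentioned above turns the Euler Cycle Problem into a simple degree count; a small sketch for a connected multigraph given as an edge list (the encoding is assumed, not from the lecture notes):

```python
from collections import defaultdict

def has_euler_cycle(edges):
    """For a connected multigraph, a closed walk using every edge exactly
    once exists iff every vertex has even degree (Euler's criterion)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return all(d % 2 == 0 for d in deg.values())

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle: every degree is 2
path   = [(0, 1), (1, 2)]                  # the endpoints have odd degree
print(has_euler_cycle(square))  # True
print(has_euler_cycle(path))    # False
```

Note the degree test is the whole decision procedure only because connectivity is assumed; this is exactly why the problem sits comfortably in P.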
  • Stathis Zachos
30's, 40's, 50's, 60's, 70's, 80's, 90's, Century 21: 90 Years of Computability and Complexity. Stathis Zachos, National Technical University of Athens. 190 years of Specker and Engeler together, February 21-22, 2020, Zürich. Abstract: Computational Complexity Theory deals with the classification of problems into classes of hardness called complexity classes. We define complexity classes using general structural properties, such as the model of computation (Turing machine, RAM, finite automaton, PDA, LBA, PRAM, monotone circuits), the mode of computation (deterministic, nondeterministic, probabilistic, alternating, uniform parallel, nonuniform circuits), the resources (time, space, number of processors, circuit size and depth) and also randomness, oracles, interactivity, counting, approximation, parameterization, etc. The cost of algorithms is measured by worst-case analysis, average-case analysis, best-case analysis, amortized analysis or smooth analysis. Inclusions and separations between complexity classes constitute central research goals and form some of the most important open questions in Theoretical Computer Science. Inclusions among some classes can be viewed as complexity hierarchies. We will present some of these: the Arithmetical Hierarchy, the Chomsky Hierarchy, the Polynomial-Time Hierarchy, a Counting Hierarchy, an Approximability Hierarchy and a Search Hierarchy.
  • Interactive Oracle Proofs?
Interactive Oracle Proofs. Eli Ben-Sasson (Technion, [email protected]), Alessandro Chiesa (UC Berkeley, [email protected]) and Nicholas Spooner (University of Toronto, [email protected]). Abstract. We initiate the study of a proof-system model that naturally combines interactive proofs (IPs) and probabilistically-checkable proofs (PCPs), and generalizes interactive PCPs (which consist of a PCP followed by an IP). We define an interactive oracle proof (IOP) to be an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them. IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. IOPs have already found several applications, including unconditional zero knowledge [BCGV16], constant-rate constant-query probabilistic checking [BCG+16], and doubly-efficient constant-round IPs for polynomial-time bounded-space computations [RRR16]. We offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against state restoration attacks, a class of rewinding attacks on the IOP verifier that is reminiscent of, but incomparable to, resetting attacks. Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal, in expectation, across all state restoration attacks.
  • The Computational Complexity of Some Games and Puzzles with Theoretical Applications
City University of New York (CUNY) CUNY Academic Works. All Dissertations, Theses, and Capstone Projects. 10-2014. The Computational Complexity of Some Games and Puzzles With Theoretical Applications. Vasiliki-Despoina Mitsou, Graduate Center, City University of New York. How does access to this work benefit you? Let us know! More information about this work at: https://academicworks.cuny.edu/gc_etds/326. Discover additional works at: https://academicworks.cuny.edu. This work is made publicly available by the City University of New York (CUNY). Contact: [email protected]. THE COMPUTATIONAL COMPLEXITY OF SOME GAMES AND PUZZLES WITH THEORETICAL APPLICATIONS by Vasiliki-Despoina Mitsou. A dissertation submitted to the Graduate Faculty in Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York, 2014. This manuscript has been read and accepted for the Graduate Faculty in Computer Science in satisfaction of the dissertation requirements for the degree of Doctor of Philosophy. Amotz Bar-Noy, Chair of Examining Committee; Robert Haralick, Executive Officer; Matthew Johnson, Noson Yanofsky, Christina Zamfirescu, Kazuhisa Makino, Efstathios Zachos, Supervisory Committee. THE CITY UNIVERSITY OF NEW YORK. Abstract. The Computational Complexity of Some Games and Puzzles with Theoretical Applications, by Vasiliki-Despoina Mitsou; adviser: Amotz Bar-Noy. The subject of this thesis is the algorithmic properties of one- and two-player games people enjoy playing, such as Sudoku or Chess. Questions asked about puzzles and games in this context are of the following type: can we design efficient computer programs that play optimally given any opponent (for a two-player game), or solve any instance of the puzzle in question? We examine four games and puzzles and show algorithmic as well as intractability results.
  • A Decisive Characterization of BPP
INFORMATION AND CONTROL 69, 125-135 (1986). A Decisive Characterization of BPP. Stathis Zachos (Brooklyn College of CUNY, Brooklyn, New York) and Hans Heller (Technical University, Munich, West Germany). The complexity class BPP (defined by Gill) contains problems that can be solved in polynomial time with bounded error probability. A new and simple characterization of BPP is given. It is shown that a language L is in BPP iff

    (x ∈ L → ∃⁺y ∀z P(x, y, z)) ∧ (x ∉ L → ∀y ∃⁺z ¬P(x, y, z))

for a polynomial-time predicate P and for |y|, |z| ≤ poly(|x|). The formula ∃⁺y P(y) with the random quantifier ∃⁺ means that the probability Pr({y | P(y)}) ≥ 1/2 + ε for a fixed ε. This characterization allows a simple proof that BPP ⊆ ZPP^NP, which strengthens the result of (Lautemann, Inform. Process. Lett. 17 (1983), 215-217; Sipser, in "Proceedings, 15th Annu. ACM Sympos. Theory of Comput.," 1983, pp. 330-335) that BPP ⊆ Σ₂^p ∩ Π₂^p. Several other results about probabilistic classes can be proved using similar techniques, e.g., NP^BPP ⊆ ZPP^NP and Σ₂^{p,BPP} = Σ₂^p. © 1986 Academic Press, Inc. 1. Introduction. Many arguments in the theory of cryptography make use of probabilistic algorithms. A goal in cryptography is to construct encryption schemes which cannot be broken by probabilistic algorithms. The assumption is that problems solvable by probabilistic algorithms are easy or tractable, supposedly well below NP-complete problems. But in reality little is known about the power of probabilistic algorithms, such as those in the class BPP.
  • Relating Counting Complexity to Non-Uniform Probability Measures
Relating counting complexity to non-uniform probability measures. Eleni Bakali, National Technical University of Athens, School of Electrical and Computer Engineering, Department of Computer Science, Computation and Reasoning Laboratory (CoReLab, room 1.1.3), 9 Iroon Polytechniou St, Polytechnioupoli Zografou, 157 80, Athens, Greece. Abstract. A standard method for designing randomized algorithms to approximately count the number of solutions of a problem in #P is to construct a rapidly mixing Markov chain converging to the uniform distribution over this set of solutions. This construction is not always an easy task, and it is conjectured that it is not always possible. We want to investigate other possibilities for using Markov chains in relation to counting, and whether we can relate algorithmic counting to other, non-uniform, probability distributions over the set we want to count. In this paper we present a family of probability distributions over the set of solutions of a problem in TotP, and show how they relate to counting; counting is equivalent to computing their normalizing factors. We analyse the complexity of sampling, of computing the normalizing factor, and of computing the size of the support of these distributions. The latter is also equivalent to counting. We also show how the above tasks relate to each other, and to other problems in complexity theory as well. In short, we prove that sampling and approximating the normalizing factor is easy. We do this by constructing a family of rapidly mixing Markov chains for which these distributions are stationary. At the same time we show that exactly computing the normalizing factor is TotP-hard. (arXiv:1711.08852v1 [cs.CC], 24 Nov 2017.)
  • On Interactive Proofs with a Laconic Prover
ON INTERACTIVE PROOFS WITH A LACONIC PROVER. Oded Goldreich, Salil Vadhan and Avi Wigderson. Abstract. We continue the investigation of interactive proofs with bounded communication, as initiated by Goldreich and Håstad (IPL). Let L be a language that has an interactive proof in which the prover sends few (say b) bits to the verifier. We prove that the complement of L has a constant-round interactive proof of complexity that depends only exponentially on b. This provides the first evidence that for NP-complete languages, we cannot expect interactive provers to be much more laconic than the standard NP proof. When the proof system is further restricted (e.g., when b = 1, or when we have perfect completeness), we get significantly better upper bounds on the complexity of the complement of L. Keywords: interactive proof systems, Arthur-Merlin games, NP, sampling protocols, statistical zero-knowledge, game theory. Introduction. Interactive proof systems were introduced by Goldwasser, Micali and Rackoff (GMR) in order to capture the most general way in which one party can efficiently verify claims made by another, more powerful party. That is, interactive proof systems are two-party randomized protocols through which a computationally unbounded prover can convince a probabilistic polynomial-time verifier of the membership of a common input in a predetermined language. Thus interactive proof systems generalize, and contain as a special case, the traditional NP-proof systems, in which verification is deterministic and non-interactive. It is well known that this generalization