Pseudorandom Generators Without the XOR Lemma

Total pages: 16

File type: PDF, size: 1020 KB

Pseudorandom Generators Without the XOR Lemma

Madhu Sudan*   Luca Trevisan†   Salil Vadhan‡

June

Abstract

Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson (NW) generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs.

Our first result is that when (a modified version of) the Nisan-Wigderson generator construction is applied with a mildly hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification.

We then show that in the Impagliazzo-Wigderson construction only the first hardness-amplification phase (encoding with multivariate polynomials) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding problem for error-correcting codes, and (ii) presenting a list-decoding algorithm for error-correcting codes based on multivariate polynomials that improves and simplifies a previous one by Arora and Sudan [AS97].

Keywords: pseudorandom generators, extractors, polynomial reconstruction, list-decoding.

Preliminary versions and abstracts of this paper appeared in ECCC [STV98], STOC [STV99a], and Complexity [STV99b].
* Laboratory for Computer Science, Technology Square, MIT, Cambridge, MA. Email: madhu@theory.lcs.mit.edu.
† Department of Computer Science, Columbia University, New York, NY. Email: luca@cs.columbia.edu. Work done at MIT.
‡ Laboratory for Computer Science, Technology Square, MIT, Cambridge, MA. Email: salil@theory.lcs.mit.edu. URL: http://theory.lcs.mit.edu/~salil. Supported by a DOD/NDSEG graduate fellowship and partially by DARPA grant DABTC.

1 Introduction

This paper continues the exploration of hardness versus randomness trade-offs, that is, results showing that randomized algorithms can be efficiently simulated deterministically if certain complexity-theoretic assumptions are true. We present two new approaches to proving the recent result of Impagliazzo and Wigderson [IW97] that, if there is a decision problem computable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} for all but finitely many n, then P = BPP. Impagliazzo and Wigderson prove their result by presenting a "randomness-efficient amplification of hardness" based on a derandomized version of Yao's XOR Lemma. The hardness-amplification procedure is then composed with the Nisan-Wigderson (NW) generator [NW94], and this gives the result. The hardness amplification goes through three steps: an encoding using multivariate polynomials (from [BFNW93]), a first derandomized XOR Lemma (from [Imp95]), and a second derandomized XOR Lemma (which is the technical contribution of [IW97]).

In our first result, we show how to construct a "pseudoentropy generator" starting from a predicate with "mild" hardness. Roughly speaking, a pseudoentropy generator takes a short random seed as input and outputs a distribution that is indistinguishable from having high min-entropy. Combining our pseudoentropy generator with an extractor, we obtain a pseudorandom generator. Interestingly, our pseudoentropy generator is a modification of the NW generator itself. Along the way we prove that, when built out of a mildly hard predicate, the NW generator outputs a distribution that is indistinguishable from having high Shannon entropy, a result that had not been observed before. The notion of a pseudoentropy generator, and the idea that a pseudoentropy generator can be converted into a pseudorandom generator using an extractor, are due to Håstad et al. [HILL99]. (To be accurate, the term "extractor" comes from [NZ96] and postdates the paper of Håstad et al. [HILL99].) Our construction is the first construction of a pseudorandom generator that works using a mildly hard predicate and without hardness amplification.

We then revisit the hardness-amplification problem, as considered in [BFNW93, Imp95, IW97], and we show that the first step alone (encoding with multivariate polynomials) is sufficient to amplify hardness to the desired level, so that the derandomized XOR Lemmas are not necessary in this context. Our proof is based on a list-decoding algorithm for multivariate polynomial codes, and exploits a connection between the list-decoding and the hardness-amplification problems. The list-decoding algorithm described in this paper is quantitatively better than a previous one by Arora and Sudan [AS97] and has a simpler analysis.

An overview of previous results. The works of Blum and Micali [BM84] and Yao [Yao82] formalize the notion of a pseudorandom generator and show how to construct pseudorandom generators based on the existence of one-way permutations. A pseudorandom generator meeting their definitions (which we call a "BMY-type" PRG) is a polynomial-time algorithm that, on input a randomly selected string of length n^δ, produces an output of length n that is computationally indistinguishable from uniform by any adversary of poly(n) size, where δ is an arbitrarily small constant. Pseudorandom generators of this form, and pseudorandom functions [GGM86] constructed from them, have many applications both inside and outside cryptography (see, e.g., [GGM86, Val84, RR97]). One of the first applications, observed by Yao [Yao82], is derandomization: a given polynomial-time randomized algorithm can be simulated deterministically using a BMY-type PRG in time 2^{n^δ} · poly(n), by trying all the seeds and taking the majority answer.

In a seminal work, Nisan and Wigderson [NW94] explore the use of a weaker type of pseudorandom generator (PRG) in order to derandomize randomized algorithms. They observe that, for the purpose of derandomization, one can consider generators computable in time poly(2^t) (instead of poly(t)), where t is the length of the seed, since the derandomization process cycles through all the seeds and this induces an overhead factor of 2^t anyway. They also observe that one can restrict attention to generators that are good against adversaries whose running time is bounded by a fixed polynomial, instead of every polynomial. They then show how to construct a pseudorandom generator meeting this relaxed definition under weaker assumptions than those used to build cryptographically strong pseudorandom generators. Furthermore, they show that, under a sufficiently strong assumption, one can build a PRG that uses seeds of logarithmic length (which would be impossible for a BMY-type PRG). Such a generator can be used to simulate randomized algorithms in polynomial time, and its existence implies P = BPP. The condition under which Nisan and Wigderson prove the existence of a PRG with seeds of logarithmic length is the existence of a decision problem (i.e., a predicate P: {0,1}^n -> {0,1}) solvable in time 2^{O(n)} such that, for some positive constant α, no circuit of size 2^{αn} can solve the problem on more than a fraction 1/2 + 2^{-αn} of the inputs. This is a very strong hardness requirement, and it is of interest to obtain similar conclusions under weaker assumptions.

An example of a weaker assumption is the existence of a mildly hard predicate. We say that a predicate is mildly hard if, for some fixed δ > 0, no circuit of size 2^{n^δ} can decide the predicate on more than a fraction 1 - 1/poly(n) of the inputs. Nisan and Wigderson prove that mild hardness suffices to derive a pseudorandom generator with seed of O(log^2 n) length, which in turn implies a quasi-polynomial deterministic simulation of BPP. This result is proved by using Yao's XOR Lemma [Yao82] (see, e.g., [GNW95] for a proof) to convert a mildly hard predicate over n inputs into one which has input size poly(n) and is hard to compute on more than a fraction 1/2 + 2^{-Ω(n)} of the inputs. A series of subsequent papers attacks the problem of obtaining stronger pseudorandom generators starting from weaker and weaker assumptions. Babai et al. [BFNW93] show that a predicate of worst-case circuit complexity 2^{Ω(n)} can be converted into a mildly hard one. Impagliazzo [Imp95] proves a derandomized XOR Lemma, which implies that a mildly hard predicate can be converted into one that cannot be predicted on more than some constant fraction of the inputs by circuits of size 2^{Ω(n)}. Impagliazzo and Wigderson [IW97] prove that a predicate with the latter hardness condition can be transformed into one that meets the hardness requirement of [NW94]. The result of [IW97] relies on a different derandomized version of the XOR Lemma than [Imp95].

Thus, the general structure of the original construction of Nisan and Wigderson [NW94] has been preserved in most subsequent works, progress being achieved by improving the single components. In particular, the use of an XOR Lemma in [NW94] continues, albeit in increasingly sophisticated forms, in [Imp95, IW97]. Likewise, the NW generator and its original analysis have always been used in conditional derandomization results since then. Future progress in the area will probably require a departure from this observance of the NW methodology.
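The seed-enumeration paradigm described above (simulate a randomized algorithm deterministically by running it on the generator's output for every seed and taking the majority answer) can be sketched as follows. This is an illustrative toy: `toy_prg` is a stand-in with no hardness properties whatsoever, not the BMY or NW construction, and the cost analysis in the comments is the point of the example.

```python
from itertools import product

def toy_prg(seed, out_len):
    """Toy deterministic 'generator' stretching a short seed to out_len bits.
    A stand-in with no hardness guarantees -- not the BMY or NW construction."""
    bits = []
    for i in range(out_len):
        bits.append(seed[i % len(seed)] ^ (i & 1))  # arbitrary fixed rule
    return bits

def derandomize(randomized_alg, x, seed_len, rand_len):
    """Run the algorithm on the generator's output for *every* seed and
    return the majority answer.  Cost is 2^seed_len runs of the algorithm,
    which is why a logarithmic seed length yields a polynomial-time
    simulation (and hence P = BPP, given a suitable generator)."""
    yes = 0
    total = 0
    for seed in product((0, 1), repeat=seed_len):
        if randomized_alg(x, toy_prg(seed, rand_len)):
            yes += 1
        total += 1
    return 2 * yes > total

# Example: a "randomized" test that happens to ignore its random bits,
# so the majority vote trivially recovers the right answer.
is_even = lambda x, r: x % 2 == 0
print(derandomize(is_even, 10, seed_len=4, rand_len=8))  # True
```

With a genuine pseudorandom generator in place of `toy_prg`, the majority vote agrees with the correct answer whenever the algorithm's acceptance probability is bounded away from 1/2, which is exactly how a PRG with logarithmic seed length derandomizes BPP.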
Recommended publications
  • Four Results of Jon Kleinberg: A Talk for the St. Petersburg Mathematical Society
    Four Results of Jon Kleinberg. A Talk for the St. Petersburg Mathematical Society. Yury Lifshits, Steklov Institute of Mathematics at St. Petersburg, May 2007. Outline: 1. Nevanlinna Prize for Jon Kleinberg (history of the Nevanlinna Prize; who is Jon Kleinberg). 2. Hubs and Authorities. 3. Nearest Neighbors: Faster Than Brute Force. 4. Navigation in a Small World. 5. Bursty Structure in Streams. Part I: History of the Nevanlinna Prize; Career of Jon Kleinberg. Nevanlinna Prize: The Rolf Nevanlinna Prize is awarded once every 4 years at the International Congress of Mathematicians, for outstanding contributions in Mathematical Aspects of Information Sciences including: 1. All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.
  • Prahladh Harsha
    Prahladh Harsha. Toyota Technological Institute, Chicago, University Press Building, 1427 East 60th Street, Second Floor, Chicago, IL 60637. Phone: +1-773-834-2549. Fax: +1-773-834-9881. Email: [email protected]. URL: http://ttic.uchicago.edu/~prahladh. Research Interests: Computational Complexity, Probabilistically Checkable Proofs (PCPs), Information Theory, Property Testing, Proof Complexity, Communication Complexity. Education: • Doctor of Philosophy (PhD), Computer Science, Massachusetts Institute of Technology, 2004. Research Advisor: Professor Madhu Sudan. PhD Thesis: Robust PCPs of Proximity and Shorter PCPs. • Master of Science (SM), Computer Science, Massachusetts Institute of Technology, 2000. • Bachelor of Technology (BTech), Computer Science and Engineering, Indian Institute of Technology, Madras, 1998. Work Experience: • Toyota Technological Institute, Chicago, September 2004 – Present, Research Assistant Professor. • Technion, Israel Institute of Technology, Haifa, February 2007 – May 2007, Visiting Scientist. • Microsoft Research, Silicon Valley, January 2005 – September 2005, Postdoctoral Researcher. Honours and Awards: • Summer Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore. • Rajiv Gandhi Science Talent Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore. • Award Winner in the Indian National Mathematical Olympiad (INMO) 1993, National Board of Higher Mathematics (NBHM). • National Board of Higher Mathematics (NBHM) Nurture Program award 1995-1998. The Nurture program 1995-1998, coordinated by Prof. Alladi Sitaram, Indian Statistical Institute, Bangalore, involves various topics in higher mathematics. • Ranked 7th in the All India Joint Entrance Examination (JEE) for admission into the Indian Institutes of Technology (among the 100,000 candidates who appeared for the examination).
• Papers invited to special issues – “Robust PCPs of Proximity, Shorter PCPs, and Applications to Coding” (with Eli Ben-Sasson, Oded Goldreich, Madhu Sudan, and Salil Vadhan).
  • Some Hardness Escalation Results in Computational Complexity Theory, by Pritish Kamath
    Some Hardness Escalation Results in Computational Complexity Theory, by Pritish Kamath. B.Tech., Indian Institute of Technology Bombay (2012); S.M., Massachusetts Institute of Technology (2015). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering & Computer Science at the Massachusetts Institute of Technology, February 2020. © Massachusetts Institute of Technology 2019. All rights reserved. Author: Department of Electrical Engineering and Computer Science, September 16, 2019. Certified by: Ronitt Rubinfeld, Professor of Electrical Engineering and Computer Science, MIT, Thesis Supervisor. Certified by: Madhu Sudan, Gordon McKay Professor of Computer Science, Harvard University, Thesis Supervisor. Accepted by: Leslie A. Kolodziejski, Professor of Electrical Engineering and Computer Science, MIT, Chair, Department Committee on Graduate Students. Abstract: In this thesis, we prove new hardness escalation results in computational complexity theory: a phenomenon where hardness results against seemingly weak models of computation for any problem can be lifted, in a black-box manner, to much stronger models of computation by considering a simple gadget-composed version of the original problem.
For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols.
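The gadget-composition idea in this abstract can be made concrete with a small sketch. The 2-bit XOR gadget and the CNF re-encoding below are a generic illustration of "composing a formula with a gadget", assumed for exposition only; they are not the specific gadget or lifting theorem from the thesis.

```python
from itertools import product

def xor_gadget_compose(cnf):
    """Compose a CNF (clauses = lists of signed ints) with a 2-bit XOR gadget:
    variable v is replaced by a_v XOR b_v, where a_v = 2v-1 and b_v = 2v.
      positive literal  v  ->  a XOR b      = (a | b) & (-a | -b)
      negative literal -v  ->  NOT(a XOR b) = (a | -b) & (-a | b)
    Each original clause (now an OR of such two-clause conjunctions) is put
    back into CNF by distributing OR over AND, giving 2^w clauses of width 2w
    for an original clause of width w."""
    def fresh(v):
        return 2 * v - 1, 2 * v
    out = []
    for clause in cnf:
        options = []
        for lit in clause:
            a, b = fresh(abs(lit))
            if lit > 0:
                options.append([(a, b), (-a, -b)])
            else:
                options.append([(a, -b), (-a, b)])
        for choice in product(*options):
            out.append([l for pair in choice for l in pair])
    return out

# (x1 OR NOT x2) becomes 2^2 = 4 clauses over the 4 gadget variables.
composed = xor_gadget_compose([[1, -2]])
print(len(composed))  # 4
```

The composed formula is satisfiable exactly when the original one is (set a_v, b_v with a_v XOR b_v equal to v's value), which is the "simple gadget-composed version" pattern that lifting theorems then analyze against stronger proof systems.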
  • Pairwise Independence and Derandomization
    Pairwise Independence and Derandomization. Michael Luby, Digital Fountain, Fremont, CA, USA. Avi Wigderson, Institute for Advanced Study, Princeton, NJ, USA. [email protected]. Boston – Delft. Foundations and Trends® in Theoretical Computer Science. Published, sold and distributed by: now Publishers Inc., PO Box 1024, Hanover, MA 02339, USA. Tel. +1-781-985-4510. www.nowpublishers.com. [email protected]. Outside North America: now Publishers Inc., PO Box 179, 2600 AD Delft, The Netherlands. Tel. +31-6-51115274. A Cataloging-in-Publication record is available from the Library of Congress. The preferred citation for this publication is M. Luby and A. Wigderson, Pairwise Independence and Derandomization, Foundations and Trends® in Theoretical Computer Science, vol 1, no 4, pp 237–301, 2005. Printed on acid-free paper. ISBN: 1-933019-22-0. © 2006 M. Luby and A. Wigderson. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers. Photocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by now Publishers Inc. for users registered with the Copyright Clearance Center (CCC). The ‘services’ for users can be found on the internet at: www.copyright.com. For those organizations that have been granted a photocopy license, a separate system of payment has been arranged.
  • David Karger, Rajeev Motwani, Madhu Sudan
    Approximate Graph Coloring by Semidefinite Programming. David Karger, Rajeev Motwani, Madhu Sudan. Abstract: We consider the problem of coloring k-colorable graphs with the fewest possible colors. We present a randomized polynomial-time algorithm which colors a 3-colorable graph on n vertices with min{O(Δ^{1/3} log^{1/2} Δ log n), O(n^{1/4} log^{1/2} n)} colors, where Δ is the maximum degree of any vertex. Besides giving the best known approximation ratio in terms of n, this marks the first nontrivial approximation result as a function of the maximum degree. This result can be generalized to k-colorable graphs to obtain a coloring using min{O(Δ^{1-2/k} log^{1/2} Δ log n), O(n^{1-3/(k+1)} log^{1/2} n)} colors. Our results are inspired by the recent work of Goemans and Williamson, who used an algorithm for semidefinite optimization problems, which generalize linear programs, to obtain improved approximations for the MAX CUT and MAX SAT problems. An intriguing outcome of our work is a duality relationship established between the value of the optimum solution to our semidefinite program and the Lovász ϑ-function. We show lower bounds on the gap between the optimum solution of our semidefinite program and the actual chromatic number; by duality this also demonstrates interesting new facts about the ϑ-function. (MIT Laboratory for Computer Science, Cambridge, MA. Email: karger@mit.edu. URL: http://theory.lcs.mit.edu/~karger. Supported by a Hertz Foundation Graduate Fellowship, by NSF Young Investigator Award CCR with matching funds from IBM, Schlumberger Foundation, Shell Foundation, and Xerox Corporation.)
  • Avi Wigderson
    On the Work of Madhu Sudan. Avi Wigderson. Madhu Sudan is the recipient of the 2002 Nevanlinna Prize. Sudan has made fundamental contributions to two major areas of research, the connections between them, and their applications. The first area is coding theory. Established by Shannon and Hamming over fifty years ago, it is the mathematical study of the possibility of, and the limits on, reliable communication over noisy media. The second area is probabilistically checkable proofs (PCPs). By contrast, it is only ten years old. It studies the minimal resources required for probabilistic verification of standard mathematical proofs. My plan is to briefly introduce these areas, their motivation, and foundational questions and then to explain Sudan's main contributions to each. Before we get to the specific works of Madhu Sudan, let us start with a couple of comments that will set up the context of his work. • "Eyeglasses" were key in this study of the notions of proof (again) and error correction. • Theoretical computer science is an extremely interactive and collaborative community. Sudan's work was not done in a vacuum, and much of the background to it, conceptual and technical, was developed by other people. The space I have does not allow me to give proper credit to all these people. A much better job has been done by Sudan himself; his homepage (http://theory.lcs.mit.edu/~madhu/) contains several surveys of these areas which give proper historical accounts and references. In particular see [13] for a survey on PCPs and [15] for a survey on the work on error correction. Probabilistic Checking of Proofs: One informal variant of the celebrated P versus NP question asks, Can mathematicians, past and future, be replaced by an efficient computer program?
  • Lecture Notes in Computer Science 4833 Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
    Lecture Notes in Computer Science 4833 Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen University of Dortmund, Germany Madhu Sudan Massachusetts Institute of Technology, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Moshe Y. Vardi Rice University, Houston, TX, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany Kaoru Kurosawa (Ed.) Advances in Cryptology – ASIACRYPT 2007 13th International Conference on the Theory and Application of Cryptology and Information Security Kuching, Malaysia, December 2-6, 2007 Proceedings 13 Volume Editor Kaoru Kurosawa Ibaraki University Department of Computer and Information Sciences 4-12-1 Nakanarusawa Hitachi, Ibaraki 316-8511, Japan E-mail: [email protected] Library of Congress Control Number: 2007939450 CR Subject Classification (1998): E.3, D.4.6, F.2.1-2, K.6.5, C.2, J.1, G.2 LNCS Sublibrary: SL 4 – Security and Cryptology ISSN 0302-9743 ISBN-10 3-540-76899-8 Springer Berlin Heidelberg New York ISBN-13 978-3-540-76899-9 Springer Berlin Heidelberg New York This work is subject to copyright.
  • Pseudorandom Generators Without the XOR Lemma
    Pseudorandom generators without the XOR Lemma [Extended Abstract]. Madhu Sudan, Luca Trevisan, Salil Vadhan. Abstract: Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson [NW94] generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs. 1 Introduction: This paper continues the exploration of hardness versus randomness trade-offs, that is, results showing that randomized algorithms can be efficiently simulated deterministically if certain complexity-theoretic assumptions are true. We present two new approaches to proving the recent result of Impagliazzo and Wigderson [IW97] that, if there is a decision problem computable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} for all but finitely many n, then P = BPP. Impagliazzo and Wigderson prove their result by presenting a "randomness-efficient amplification of hardness" based on a derandomized version of Yao's XOR Lemma. The hardness-amplification procedure is then composed with the Nisan-Wigderson (NW) generator [NW94] and this gives the result. The hardness amplification goes through three steps: an encoding using multivariate polynomials (from [BFNW93]), a first derandomized XOR
  • On the Work of Madhu Sudan, by Avi Wigderson
    On the Work of Madhu Sudan, by Avi Wigderson. Madhu Sudan is the recipient of the 2002 Nevanlinna Prize. Sudan has made fundamental contributions to two major areas of research, the connections between them, and their applications. The first area is coding theory. Established by Shannon and Hamming over fifty years ago, it is the mathematical study of the possibility of, and the limits on, reliable communication over noisy media. The second area is probabilistically checkable proofs (PCPs). By contrast, it is only ten years old. It studies the minimal resources required for probabilistic verification of standard mathematical proofs. My plan is to briefly introduce these areas, their motivation, and foundational questions and then to explain Sudan's main contributions to each. Before we get to the specific works of Madhu Sudan, let us start with a couple of comments that will set up the context of his work. Madhu Sudan works in computational complexity theory. This research discipline attempts to rigorously define and study efficient versions of objects and notions arising in computational settings. This focus on efficiency is of course natural when studying computation itself, but it also has proved extremely fruitful in studying other fundamental notions. Probabilistic Checking of Proofs: One informal variant of the celebrated P versus NP question asks, "Can mathematicians, past and future, be replaced by an efficient computer program?" We first define these notions and then explain the PCP theorem and its impact on this foundational question. Efficient Computation: Throughout, by an efficient algorithm (or program, machine, or procedure) we mean an algorithm which runs in time at most some fixed polynomial in the length of its input.
  • Evasiveness of Graph Properties and Topological Fixed-Point Theorems
    Evasiveness of Graph Properties and Topological Fixed-Point Theorems. arXiv:1306.0110v2 [math.CO] 7 Jun 2013. Carl A. Miller, University of Michigan, USA. [email protected]. Boston – Delft. Foundations and Trends® in Theoretical Computer Science. Published, sold and distributed by: now Publishers Inc., PO Box 1024, Hanover, MA 02339, USA. Tel. +1-781-985-4510. www.nowpublishers.com. [email protected]. Outside North America: now Publishers Inc., PO Box 179, 2600 AD Delft, The Netherlands. Tel. +31-6-51115274. The preferred citation for this publication is C. A. Miller, Evasiveness of Graph Properties and Topological Fixed-Point Theorems, Foundations and Trends® in Theoretical Computer Science, vol 7, no 4, pp 337–415, 2011. ISBN: 978-1-60198-664-1. © 2013 C. A. Miller. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers. Photocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by now Publishers Inc. for users registered with the Copyright Clearance Center (CCC). The ‘services’ for users can be found on the internet at: www.copyright.com. For those organizations that have been granted a photocopy license, a separate system of payment has been arranged. Authorization does not extend to other kinds of copying, such as that for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale.
  • Separating Local & Shuffled Differential Privacy Via Histograms
    Separating Local & Shuffled Differential Privacy via Histograms. Victor Balcer∗ and Albert Cheu†. April 15, 2020. Abstract: Recent work in differential privacy has highlighted the shuffled model as a promising avenue to compute accurate statistics while keeping raw data in users’ hands. We present a protocol in this model that estimates histograms with error independent of the domain size. This implies an arbitrarily large gap in sample complexity between the shuffled and local models. On the other hand, we show that the models are equivalent when we impose the constraints of pure differential privacy and single-message randomizers. 1 Introduction: The local model of private computation has minimal trust assumptions: each user executes a randomized algorithm on their data and sends the output message to an analyzer [17, 19, 11]. While this model has appeal to the users—their data is never shared in the clear—the noise in every message poses a roadblock to accurate statistics. For example, a locally private d-bin histogram has, on some bin, error scaling with √log d. But when users trust the analyzer with their raw data (the central model), there is an algorithm that achieves error independent of d on every bin. Because the local and central models lie at the extremes of trust, recent work has focused on the intermediate shuffled model [6, 8]. In this model, users execute randomization as in the local model, but now a trusted shuffler applies a uniformly random permutation to all user messages before the analyzer can view them. The anonymity provided by the shuffler allows users to introduce less noise than in the local model while achieving the same level of privacy.
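The local model described in this abstract can be illustrated with the classic k-ary randomized-response histogram. This is a generic local-model baseline assumed for illustration (the parameter names and the tiny dataset are invented), not the shuffled-model protocol of the paper.

```python
import math
import random

def randomized_response(value, d, eps, rng):
    """k-ary randomized response: report the true bin with probability
    e^eps / (e^eps + d - 1), otherwise a uniformly random *other* bin.
    Each user's single message is eps-differentially private on its own."""
    p_true = math.exp(eps) / (math.exp(eps) + d - 1)
    if rng.random() < p_true:
        return value
    other = rng.randrange(d - 1)
    return other if other < value else other + 1

def estimate_histogram(reports, d, eps):
    """Debias the noisy counts into unbiased frequency estimates."""
    n = len(reports)
    p = math.exp(eps) / (math.exp(eps) + d - 1)   # P(report true bin)
    q = (1 - p) / (d - 1)                          # P(report a given wrong bin)
    counts = [0] * d
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]

rng = random.Random(0)
data = [0] * 600 + [1] * 300 + [2] * 100   # true frequencies 0.6 / 0.3 / 0.1
reports = [randomized_response(v, d=3, eps=1.0, rng=rng) for v in data]
est = estimate_histogram(reports, d=3, eps=1.0)
print([round(f, 2) for f in est])
```

The per-bin noise of this baseline grows with the domain size d, which is the gap the shuffled-model protocol in the paper closes: its histogram error is independent of d.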
  • Point-Hyperplane Incidence Geometry and the Log-Rank Conjecture
    Point-hyperplane incidence geometry and the log-rank conjecture. Noah Singer∗, Madhu Sudan†. September 16, 2021. Abstract: We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: Given a point set in R^d that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., 2^{-polylog(d)}) fraction of the incidences, in the sense of containing a large fraction of the points and being contained in a large fraction of the hyperplanes. (In other words, the point-hyperplane incidence graph for such configurations has a large complete bipartite subgraph.) Alternatively, our conjecture may be interpreted linear-algebraically as follows: Any rank-d matrix containing at most O(1) distinct entries in each column contains a submatrix of fractional size 2^{-polylog(d)}, in which each column contains one distinct entry. We prove that our conjecture is equivalent to the log-rank conjecture; the crucial ingredient of this proof is a reduction from bounds for parallel k-partitions to bounds for parallel (k-1)-partitions. We also introduce an (apparent) strengthening of the conjecture, which relaxes the requirement that the sets of hyperplanes be parallel. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions), and in particular, we initiate the study of complete bipartite subgraph size in incidence graphs in the regime where dimension is not a constant. We give an elementary argument for the existence of complete bipartite subgraphs of density Ω(ε^2/d) in any d-dimensional configuration with incidence density ε (qualitatively matching previous results, which had implicit dimension-dependent constants and required sophisticated incidence-geometric techniques).
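The basic quantities in this abstract, point-hyperplane incidences and the incidence density ε, can be made concrete with a toy computation. The configuration below is arbitrary and only illustrates the definitions, not any construction from the paper.

```python
def incidence_density(points, hyperplanes, tol=1e-9):
    """A point p is incident to the hyperplane (a, c) when <a, p> = c.
    Returns eps = (# incidences) / (# points * # hyperplanes)."""
    inc = 0
    for p in points:
        for a, c in hyperplanes:
            if abs(sum(ai * pi for ai, pi in zip(a, p)) - c) < tol:
                inc += 1
    return inc / (len(points) * len(hyperplanes))

# A 2x2 grid in R^2 against the four lines x=0, x=1, y=0, y=1:
# each point lies on exactly 2 of the 4 lines, so eps = 8/16 = 0.5.
pts = [(x, y) for x in (0, 1) for y in (0, 1)]
hps = [((1, 0), 0), ((1, 0), 1), ((0, 1), 0), ((0, 1), 1)]
print(incidence_density(pts, hps))  # 0.5
```

Note that this grid is covered by two sets of parallel lines (the vertical pair and the horizontal pair), the kind of "constant-sized sets of parallel hyperplanes" structure the conjecture is about.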