Proofs, Arguments, and Zero-Knowledge


Justin Thaler
Georgetown University

August 17, 2021

This manuscript is not in final form. It is being made publicly available in conjunction with the Fall 2020 offering of COSC 544 at Georgetown University, and will be updated regularly over the course of the semester and beyond. Feedback is welcome. The most recent version of the manuscript is available at: http://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.html.

Supported by NSF CAREER award CCF-1845125 and by DARPA under Agreement No. HR00112020022. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the United States Government or DARPA.

Abstract

Interactive proofs (IPs) and arguments are cryptographic protocols that enable an untrusted prover to provide a guarantee that it performed a requested computation correctly. Introduced in the 1980s, IPs and arguments represented a major conceptual expansion of what constitutes a "proof" that a statement is true. Traditionally, a proof is a static object that can be easily checked step-by-step for correctness. In contrast, IPs allow for interaction between prover and verifier, as well as a tiny but nonzero probability that an incorrect proof passes verification. Arguments (but not IPs) even permit there to be "proofs" of incorrect statements, so long as those "proofs" require exorbitant computational power to find. To an extent, these notions mimic the in-person interactions that mathematicians use to convince each other that a claim is true, without going through the painstaking process of writing out and checking a traditional static proof. Celebrated theoretical results from the 1980s and 1990s such as IP = PSPACE and MIP = NEXP showed that, in principle, surprisingly complicated statements can be verified efficiently.
What is more, any argument can in principle be transformed into one that is zero-knowledge, which means that proofs reveal no information other than their own validity. Zero-knowledge arguments have a myriad of applications in cryptography. Within the last decade, general-purpose zero-knowledge arguments have made the jump from theory to practice. This has opened new doors in the design of cryptographic systems, and generated additional insights into the power of IPs and arguments (zero-knowledge or otherwise). There are now no fewer than five promising approaches to designing efficient, general-purpose zero-knowledge arguments. This survey covers these approaches in a unified manner, emphasizing commonalities between them.

Preface and Acknowledgements

This manuscript began as a set of lecture notes accompanying the Fourteenth Bellairs' Crypto-Workshop in 2015. I am indebted to Claude Crépeau and Gilles Brassard for their warm hospitality in organizing the workshop, and to all of the workshop participants for their generosity, patience, and enthusiasm. The notes were further expanded during the Fall 2017 offering of COSC 544 at Georgetown University, and benefited from comments provided by students in the course. The knowledge and feedback of a number of people heavily influenced the development of this manuscript, including Sebastian Angel, Srinath Setty, abhi shelat, Michael Walfish, and Riad Wahby. I owe a special thanks to Riad for his patient explanations of many cryptographic tools covered in this survey, and his willingness to journey to the end of any rabbit hole he encounters. There are many fewer errors in this manuscript because of Riad's help; any that remain are entirely my own. A major benefit of taking 5 years (and counting) to complete this manuscript is the many exciting developments that can now be included.
This survey would have looked very different had it been completed in 2015, or even in 2018 (perhaps 1/3 of the content covered did not exist 5 years ago). During this period, the various approaches to the design of zero-knowledge arguments, and the relationships between them, have come into finer focus. Yet owing to the sheer volume of research papers, it is increasingly challenging for those first entering the area to extract a clear picture from the literature itself.

Will the next 5-10 years bring a similar flood of developments? Will this be enough to render general-purpose arguments efficient enough for routine deployment in diverse cryptographic systems? It is my hope that this survey will make this exciting and beautiful area slightly more accessible, and thereby play some role in ensuring that the answer to both questions is "yes."

Washington D.C., August 2020

Contents

1 Introduction
  1.1 Mathematical Proofs
  1.2 What kinds of non-traditional proofs will we study?
2 The Power of Randomness: Fingerprinting and Freivalds' Algorithm
  2.1 Reed-Solomon Fingerprinting
  2.2 Freivalds' Algorithm
3 Definitions and Technical Preliminaries
  3.1 Interactive Proofs
  3.2 Argument Systems
  3.3 Robustness of Definitions and the Power of Interaction
  3.4 Schwartz-Zippel Lemma
  3.5 Low Degree and Multilinear Extensions
  3.6 Exercises
4 Interactive Proofs
  4.1 The Sum-Check Protocol
  4.2 First Application of Sum-Check: #SAT ∈ IP
  4.3 Second Application: A Simple IP for Counting Triangles in Graphs
  4.4 Third Application: Super-Efficient IP for MATMULT
  4.5 Applications of the Super-Efficient MATMULT IP
  4.6 The GKR Protocol and Its Efficient Implementation
  4.7 Publicly Verifiable, Non-interactive Argument via Fiat-Shamir
  4.8 Exercises
5 Front Ends: Turning Computer Programs Into Circuits
  5.1 Introduction
  5.2 Machine Code
  5.3 A First Technique For Turning Programs Into Circuits [Sketch]
  5.4 Turning Small-Space Programs Into Shallow Circuits
  5.5 Turning Computer Programs Into Circuit Satisfiability Instances
  5.6 Alternative Transformations and Optimizations
  5.7 Exercises
6 A First Succinct Argument for Circuit Satisfiability, from Interactive Proofs
  6.1 A Naive Approach: An IP for Circuit Satisfiability
  6.2 Succinct Arguments for Circuit Satisfiability
  6.3 A First Succinct Argument for Circuit Satisfiability
7 MIPs and Succinct Arguments
  7.1 MIPs: Definitions and Basic Results
  7.2 An Efficient MIP For Circuit Satisfiability
  7.3 A Succinct Argument for Deep Circuits
  7.4 Preview: A General Paradigm for the Design of Succinct Arguments
  7.5 Extension from Circuit-SAT to R1CS-SAT
  7.6 MIP = NEXP
8 PCPs and Succinct Arguments
  8.1 PCPs: Definitions and Relationship to MIPs
  8.2 Compiling a PCP Into a Succinct Argument
  8.3 A First Polynomial Length PCP, From a MIP
  8.4 A PCP of Quasilinear Length for Arithmetic Circuit Satisfiability
9 Interactive Oracle Proofs
  9.1 IOPs: Definition and Relation to IPs and PCPs
  9.2 Background on FRI
  9.3 An IOP for R1CS-SAT
  9.4 Details of FRI: Better Reed-Solomon Proximity Proofs via Interaction
  9.5 From Reed-Solomon Testing to Multilinear Polynomial Commitments
  9.6 An Alternative IOP-Based Polynomial Commitment: Ligero
10 Zero-Knowledge Proofs and Arguments
  10.1 What is Zero-Knowledge?
  10.2 The Limits of Statistical Zero Knowledge Proofs
  10.3 Honest-Verifier SZK Protocol for Graph Non-Isomorphism
11 Σ-Protocols and Commitments from Hardness of Discrete Logarithm
  11.1 Cryptographic Background
  11.2 Schnorr's Σ-Protocol for Knowledge of Discrete Logarithms
  11.3 A Homomorphic Commitment Scheme
12 Zero-Knowledge for Circuit Satisfiability Via Commit-And-Prove
  12.1 Proof Length of Witness Size Plus Multiplicative Complexity
  12.2 Avoiding Linear Dependence on Multiplicative Complexity: zk-Arguments from IPs
  12.3 Zero-Knowledge via Masking Polynomials
13 Polynomial Commitment Schemes from Discrete Logarithm or Pairings
  13.1 Polynomial Commitments from Hardness of Discrete Logarithm
  13.2 Polynomial Commitments from Pairings and a Trusted Setup
  13.3 Commitment Scheme for Sparse Polynomials
  13.4 Polynomial Commitment Schemes: Pros and Cons
  13.5 Additional Approaches
14 Linear PCPs and Succinct Arguments
  14.1 Overview: Interactive Arguments From "Long", Structured PCPs
  14.2 Committing to a Linear PCP without Materializing It
  14.3 A First Linear PCP for Arithmetic Circuit Satisfiability
  14.4 GGPR: A Linear PCP Of Size O(|F|^S) for Arithmetic Circuit-SAT
  14.5 Non-Interactivity and Public Verifiability
15 Bird's Eye View of Practical Arguments
  15.1 A Taxonomy of SNARKs
  15.2 Pros and Cons of the Approaches
  15.3 Other Issues Affecting Concrete Efficiency

Chapter 1: Introduction

This manuscript is about verifiable computing (VC). VC refers to cryptographic protocols called interactive proofs (IPs) and arguments that enable a prover to provide a guarantee to a verifier that the prover performed a requested computation correctly. Introduced in the 1980s, IPs and arguments represented a major conceptual expansion of what constitutes a "proof" that a statement is true. Traditionally, a proof is a static object that can be easily checked step-by-step for correctness, because each individual step of the proof should be trivial to verify. In contrast, IPs allow for interaction between prover and verifier, as well as a tiny but nonzero probability that an incorrect proof passes verification.
The difference between IPs and arguments is that arguments (but not IPs) permit the existence of “proofs” of incorrect statements, so long as those “proofs” require exorbitant computational power to find. Celebrated theoretical results from the mid-1980s and early 1990s indicated that VC protocols can, at least in principle, accomplish amazing feats.
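The simplest of these feats appears already in Chapter 2: Freivalds' algorithm (Section 2.2) lets a verifier check a claimed matrix product C = A·B in O(n^2) time, rather than the roughly O(n^3) time needed to recompute the product, at the cost of a tiny probability of accepting a wrong answer. The following Python sketch illustrates the idea; the function name, the choice of prime modulus, and the repetition parameter are illustrative choices of this sketch, not taken from the text.

```python
import random

def freivalds_check(A, B, C, p=2**61 - 1, reps=1):
    """Probabilistically check whether A @ B == C modulo the prime p.

    Each repetition costs three matrix-vector products, i.e. O(n^2)
    work, versus O(n^3) to recompute A @ B outright.  If A @ B != C
    (mod p), a single repetition accepts with probability at most 1/p,
    since the random challenge x must then land in a proper subspace.
    """
    n = len(A)
    for _ in range(reps):
        # Random challenge vector x with entries in {0, ..., p-1}.
        x = [random.randrange(p) for _ in range(n)]
        # Compute A(Bx) and Cx; both take O(n^2) arithmetic operations.
        Bx = [sum(B[i][j] * x[j] for j in range(n)) % p for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) % p for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) % p for i in range(n)]
        if ABx != Cx:
            return False  # Certainly C != A @ B.
    # C == A @ B, except with probability at most (1/p)^reps.
    return True
```

Note the one-sided error, characteristic of the protocols in this survey: a rejection is always correct, while an acceptance is wrong only with probability at most (1/p)^reps, which here is astronomically small even for a single repetition.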