Foundations of Cryptography
Cryptography is concerned with the conceptualization, definition, and construction of computing systems that address security concerns. The design of cryptographic systems must be based on firm foundations. Foundations of Cryptography presents a rigorous and systematic treatment of foundational issues: defining cryptographic tasks and solving new cryptographic problems using existing tools. The emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems, as opposed to describing ad hoc approaches. This second volume contains a rigorous treatment of three basic applications: encryption, signatures, and general cryptographic protocols. It builds on the previous volume, which provides a treatment of one-way functions, pseudorandomness, and zero-knowledge proofs. It is suitable for use in a graduate course on cryptography and as a reference book for experts. The author assumes basic familiarity with the design and analysis of algorithms; some knowledge of complexity theory and probability is also useful.
Oded Goldreich is Professor of Computer Science at the Weizmann Institute of Science and incumbent of the Meyer W. Weisgal Professorial Chair. An active researcher, he has written numerous papers on cryptography and is widely considered to be one of the world experts in the area. He is an editor of Journal of Cryptology and SIAM Journal on Computing and the author of Modern Cryptography, Probabilistic Proofs and Pseudorandomness.
Foundations of Cryptography
II: Basic Applications
Oded Goldreich
Weizmann Institute of Science

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org Information on this title: www.cambridge.org/9780521119917
© Oded Goldreich 2004
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published 2004 This digitally printed version 2009
A catalogue record for this publication is available from the British Library
ISBN 978-0-521-83084-3 hardback
ISBN 978-0-521-11991-7 paperback

To Dana
Contents

II Basic Applications
List of Figures page xi
Preface xiii
Acknowledgments xxi
5 Encryption Schemes 373
5.1. The Basic Setting 374
5.1.1. Private-Key Versus Public-Key Schemes 375
5.1.2. The Syntax of Encryption Schemes 376
5.2. Definitions of Security 378
5.2.1. Semantic Security 379
5.2.2. Indistinguishability of Encryptions 382
5.2.3. Equivalence of the Security Definitions 383
5.2.4. Multiple Messages 389
5.2.5.* A Uniform-Complexity Treatment 394
5.3. Constructions of Secure Encryption Schemes 403
5.3.1.* Stream-Ciphers 404
5.3.2. Preliminaries: Block-Ciphers 408
5.3.3. Private-Key Encryption Schemes 410
5.3.4. Public-Key Encryption Schemes 413
5.4.* Beyond Eavesdropping Security 422
5.4.1. Overview 422
5.4.2. Key-Dependent Passive Attacks 425
5.4.3. Chosen Plaintext Attack 431
5.4.4. Chosen Ciphertext Attack 438
5.4.5. Non-Malleable Encryption Schemes 470
5.5. Miscellaneous 474
5.5.1. On Using Encryption Schemes 474
5.5.2. On Information-Theoretic Security 476
5.5.3. On Some Popular Schemes 477
5.5.4. Historical Notes 478
5.5.5. Suggestions for Further Reading 480
5.5.6. Open Problems 481
5.5.7. Exercises 481
6 Digital Signatures and Message Authentication 497
6.1. The Setting and Definitional Issues 498
6.1.1. The Two Types of Schemes: A Brief Overview 498
6.1.2. Introduction to the Unified Treatment 499
6.1.3. Basic Mechanism 501
6.1.4. Attacks and Security 502
6.1.5.* Variants 505
6.2. Length-Restricted Signature Scheme 507
6.2.1. Definition 507
6.2.2. The Power of Length-Restricted Signature Schemes 508
6.2.3.* Constructing Collision-Free Hashing Functions 516
6.3. Constructions of Message-Authentication Schemes 523
6.3.1. Applying a Pseudorandom Function to the Document 523
6.3.2.* More on Hash-and-Hide and State-Based MACs 531
6.4. Constructions of Signature Schemes 537
6.4.1. One-Time Signature Schemes 538
6.4.2. From One-Time Signature Schemes to General Ones 543
6.4.3.* Universal One-Way Hash Functions and Using Them 560
6.5.* Some Additional Properties 575
6.5.1. Unique Signatures 575
6.5.2. Super-Secure Signature Schemes 576
6.5.3. Off-Line/On-Line Signing 580
6.5.4. Incremental Signatures 581
6.5.5. Fail-Stop Signatures 583
6.6. Miscellaneous 584
6.6.1. On Using Signature Schemes 584
6.6.2. On Information-Theoretic Security 585
6.6.3. On Some Popular Schemes 586
6.6.4. Historical Notes 587
6.6.5. Suggestions for Further Reading 589
6.6.6. Open Problems 590
6.6.7. Exercises 590
7 General Cryptographic Protocols 599
7.1. Overview 600
7.1.1. The Definitional Approach and Some Models 601
7.1.2. Some Known Results 607
7.1.3. Construction Paradigms 609
7.2.* The Two-Party Case: Definitions 615
7.2.1. The Syntactic Framework 615
7.2.2. The Semi-Honest Model 619
7.2.3. The Malicious Model 626
7.3.* Privately Computing (Two-Party) Functionalities 634
7.3.1. Privacy Reductions and a Composition Theorem 636
7.3.2. The OT^k_1 Protocol: Definition and Construction 640
7.3.3. Privately Computing c1 + c2 = (a1 + a2) · (b1 + b2) 643
7.3.4. The Circuit Evaluation Protocol 645
7.4.* Forcing (Two-Party) Semi-Honest Behavior 650
7.4.1. The Protocol Compiler: Motivation and Overview 650
7.4.2. Security Reductions and a Composition Theorem 652
7.4.3. The Compiler: Functionalities in Use 657
7.4.4. The Compiler Itself 681
7.5.* Extension to the Multi-Party Case 693
7.5.1. Definitions 694
7.5.2. Security in the Semi-Honest Model 701
7.5.3. The Malicious Models: Overview and Preliminaries 708
7.5.4. The First Compiler: Forcing Semi-Honest Behavior 714
7.5.5. The Second Compiler: Effectively Preventing Abort 729
7.6.* Perfect Security in the Private Channel Model 741
7.6.1. Definitions 742
7.6.2. Security in the Semi-Honest Model 743
7.6.3. Security in the Malicious Model 746
7.7. Miscellaneous 747
7.7.1.* Three Deferred Issues 747
7.7.2.* Concurrent Executions 752
7.7.3. Concluding Remarks 755
7.7.4. Historical Notes 756
7.7.5. Suggestions for Further Reading 757
7.7.6. Open Problems 758
7.7.7. Exercises 759
Appendix C: Corrections and Additions to Volume 1 765
C.1. Enhanced Trapdoor Permutations 765
C.2. On Variants of Pseudorandom Functions 768
C.3. On Strong Witness Indistinguishability 768
C.3.1. On Parallel Composition 769
C.3.2. On Theorem 4.6.8 and an Afterthought 770
C.3.3. Consequences 771
C.4. On Non-Interactive Zero-Knowledge 772
C.4.1. On NIZKs with Efficient Prover Strategies 772
C.4.2. On Unbounded NIZKs 773
C.4.3. On Adaptive NIZKs 774
C.5. Some Developments Regarding Zero-Knowledge 775
C.5.1. Composing Zero-Knowledge Protocols 775
C.5.2. Using the Adversary’s Program in the Proof of Security 780
C.6. Additional Corrections and Comments 783
C.7. Additional Mottoes 784
Bibliography 785
Index 795
Note: Asterisks indicate advanced material.
List of Figures
0.1 Organization of this work page xvi
0.2 Rough organization of this volume xvii
0.3 Plan for one-semester course on Foundations of Cryptography xviii
5.1 Private-key encryption schemes: an illustration 375
5.2 Public-key encryption schemes: an illustration 376
6.1 Message-authentication versus signature schemes 500
6.2 Collision-free hashing via block-chaining (for t = 7) 519
6.3 Collision-free hashing via tree-chaining (for t = 8) 522
6.4 Authentication-trees: the basic authentication step 546
6.5 An authentication path for nodes 010 and 011 547
7.1 Secure protocols emulate a trusted party: an illustration 601
7.2 The functionalities used in the compiled protocol 658
7.3 Schematic depiction of a canonical protocol 690
Preface
It is possible to build a cabin with no foundations, but not a lasting building. Eng. Isidor Goldreich (1906–1995)
Cryptography is concerned with the construction of schemes that withstand any abuse. Such schemes are constructed so as to maintain a desired functionality, even under malicious attempts aimed at making them deviate from their prescribed functionality. The design of cryptographic schemes is a very difficult task. One cannot rely on intuitions regarding the typical state of the environment in which the system operates. For sure, the adversary attacking the system will try to manipulate the environment into untypical states. Nor can one be content with countermeasures designed to withstand specific attacks because the adversary (which acts after the design of the system is completed) will try to attack the schemes in ways that are typically different from the ones envisioned by the designer. The validity of the foregoing assertions seems self-evident; still, some people hope that in practice, ignoring these tautologies will not result in actual damage. Experience shows that these hopes rarely come true; cryptographic schemes based on make-believe are broken, typically sooner than later. In view of these assertions, we believe that it makes little sense to make assumptions regarding the specific strategy that the adversary may use. The only assumptions that can be justified refer to the computational abilities of the adversary. Furthermore, it is our opinion that the design of cryptographic systems has to be based on firm foundations, whereas ad hoc approaches and heuristics are a very dangerous way to go. A heuristic may make sense when the designer has a very good idea about the environment in which a scheme is to operate, yet a cryptographic scheme has to operate in a maliciously selected environment that typically transcends the designer’s view. This work is aimed at presenting firm foundations for cryptography.
The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural “security concerns.” We will present some of these paradigms, approaches, and techniques, as well as some of the fundamental results
obtained using them. Our emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems. Solving a cryptographic problem (or addressing a security concern) is a two-stage process consisting of a definitional stage and a constructive stage. First, in the definitional stage, the functionality underlying the natural concern is to be identified, and an adequate cryptographic problem has to be defined. Trying to list all undesired situations is infeasible and prone to error. Instead, one should define the functionality in terms of operation in an imaginary ideal model, and require a candidate solution to emulate this operation in the real, clearly defined model (which specifies the adversary’s abilities). Once the definitional stage is completed, one proceeds to construct a system that satisfies the definition. Such a construction may use some simpler tools, and its security is proven relying on the features of these tools. In practice, of course, such a scheme may also need to satisfy some specific efficiency requirements. This work focuses on several archetypical cryptographic problems (e.g., encryption and signature schemes) and on several central tools (e.g., computational difficulty, pseudorandomness, and zero-knowledge proofs). For each of these problems (resp., tools), we start by presenting the natural concern underlying it (resp., its intuitive objective), then define the problem (resp., tool), and finally demonstrate that the problem may be solved (resp., the tool can be constructed). In the last step, our focus is on demonstrating the feasibility of solving the problem, not on providing a practical solution. As a secondary concern, we typically discuss the level of practicality (or impracticality) of the given (or known) solution.
Computational Difficulty The specific constructs mentioned earlier (as well as most constructs in this area) can exist only if some sort of computational hardness exists. Specifically, all these problems and tools require (either explicitly or implicitly) the ability to generate instances of hard problems. Such ability is captured in the definition of one-way functions (see further discussion in Section 2.1). Thus, one-way functions are the very minimum needed for doing most sorts of cryptography. As we shall see, one-way functions actually suffice for doing much of cryptography (and the rest can be done by augmentations and extensions of the assumption that one-way functions exist). Our current state of understanding of efficient computation does not allow us to prove that one-way functions exist. In particular, the existence of one-way functions implies that NP is not contained in BPP ⊇ P (not even “on the average”), which would resolve the most famous open problem of computer science. Thus, we have no choice (at this stage of history) but to assume that one-way functions exist. As justification for this assumption, we may only offer the combined beliefs of hundreds (or thousands) of researchers. Furthermore, these beliefs concern a simply stated assumption, and their validity follows from several widely believed conjectures that are central to various fields (e.g., the conjecture that factoring integers is hard is central to computational number theory). Since we need assumptions anyhow, why not just assume what we want (i.e., the existence of a solution to some natural cryptographic problem)? Well, first we need to know what we want: As stated earlier, we must first clarify what exactly we want; that is, we must go through the typically complex definitional stage. But once this stage is completed, can we just assume that the definition derived can be met? Not really. Once a definition is derived, how can we know that it can be met at all?
The way to demonstrate that a definition is viable (and so the intuitive security concern can be satisfied at all) is to construct a solution based on a better-understood assumption (i.e., one that is more common and widely believed). For example, looking at the definition of zero-knowledge proofs, it is not a priori clear that such proofs exist at all (in a non-trivial sense). The non-triviality of the notion was first demonstrated by presenting a zero-knowledge proof system for statements regarding Quadratic Residuosity that are believed to be hard to verify (without extra information). Furthermore, contrary to prior beliefs, it was later shown that the existence of one-way functions implies that any NP-statement can be proven in zero-knowledge. Thus, facts that were not at all known to hold (and were even believed to be false) were shown to hold by reduction to widely believed assumptions (without which most of modern cryptography collapses anyhow). To summarize, not all assumptions are equal, and so reducing a complex, new, and doubtful assumption to a widely believed simple (or even merely simpler) assumption is of great value. Furthermore, reducing the solution of a new task to the assumed security of a well-known primitive typically means providing a construction that, using the known primitive, solves the new task. This means that we not only know (or assume) that the new task is solvable but also have a solution based on a primitive that, being well known, typically has several candidate implementations.
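For reference, the notion of a one-way function invoked above can be stated compactly; the following is an informal rendering of the definition treated in Section 2.2 of Volume 1, reproduced here only as a reminder:

```latex
% Informal rendering of the one-way function definition (cf. Sec. 2.2 of Vol. 1).
% A function $f:\{0,1\}^*\to\{0,1\}^*$ is one-way if it is easy to compute
% but hard to invert:
\begin{itemize}
  \item \emph{Easy to compute:} there exists a polynomial-time algorithm $A$
        such that $A(x)=f(x)$ for every $x\in\{0,1\}^*$.
  \item \emph{Hard to invert:} for every probabilistic polynomial-time
        algorithm $A'$, every positive polynomial $p(\cdot)$, and all
        sufficiently large $n$,
        \[
          \Pr\bigl[A'(f(x),1^n)\in f^{-1}(f(x))\bigr] \;<\; \frac{1}{p(n)},
        \]
        where $x$ is uniformly distributed in $\{0,1\}^n$.
\end{itemize}
```

Note that the inverter is only required to find *some* preimage of $f(x)$, and that hardness is an average-case requirement over a uniformly chosen input.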
Structure and Prerequisites Our aim is to present the basic concepts, techniques, and results in cryptography. As stated earlier, our emphasis is on the clarification of fundamental concepts and the relationship among them. This is done in a way independent of the particularities of some popular number-theoretic examples. These particular examples played a central role in the development of the field and still offer the most practical implementations of all cryptographic primitives, but this does not mean that the presentation has to be linked to them. On the contrary, we believe that concepts are best clarified when presented at an abstract level, decoupled from specific implementations. Thus, the most relevant background for this work is provided by basic knowledge of algorithms (including randomized ones), computability, and elementary probability theory. Background on (computational) number theory, which is required for specific implementations of certain constructs, is not really required here (yet a short appendix presenting the most relevant facts is included in the first volume so as to support the few examples of implementations presented here).
Organization of the Work. This work is organized in two parts (see Figure 0.1): Basic Tools and Basic Applications. The first volume (i.e., [108]) contains an introductory chapter as well as the first part (Basic Tools), which consists of chapters on computational difficulty (one-way functions), pseudorandomness, and zero-knowledge proofs. These basic tools are used for the Basic Applications of the second part (i.e., the current volume), which consists of chapters on Encryption Schemes, Digital Signatures and Message Authentication, and General Cryptographic Protocols. The partition of the work into two parts is a logical one. Furthermore, it has offered us the advantage of publishing the first part before the completion of the second part. Originally, a third part, entitled Beyond the Basics, was planned. That part was to have discussed the effect of Cryptography on the rest of Computer Science (and, in particular, complexity theory), as well as to have provided a treatment of a variety of more advanced security concerns. In retrospect, we feel that the first direction is addressed in [106], whereas the second direction is more adequate for a collection of surveys.

Volume 1: Introduction and Basic Tools
Chapter 1: Introduction
Chapter 2: Computational Difficulty (One-Way Functions)
Chapter 3: Pseudorandom Generators
Chapter 4: Zero-Knowledge Proof Systems
Volume 2: Basic Applications
Chapter 5: Encryption Schemes
Chapter 6: Digital Signatures and Message Authentication
Chapter 7: General Cryptographic Protocols

Figure 0.1: Organization of this work.
Organization of the Current Volume. The current (second) volume consists of three chapters that treat encryption schemes, digital signatures and message authentication, and general cryptographic protocols, respectively. Also included is an appendix that provides corrections and additions to Volume 1. Figure 0.2 depicts the high-level structure of the current volume. Inasmuch as this volume is a continuation of the first (i.e., [108]), one numbering system is used for both volumes (and so the first chapter of the current volume is referred to as Chapter 5). This allows a simple referencing of sections, definitions, and theorems that appear in the first volume (e.g., Section 1.3 presents the computational model used throughout the entire work). The only exception to this rule is the use of different bibliographies (and consequently a different numbering of bibliographic entries) in the two volumes. Historical notes, suggestions for further reading, some open problems, and some exercises are provided at the end of each chapter. The exercises are mostly designed to help and test the basic understanding of the main text, not to test or inspire creativity. The open problems are fairly well known; still, we recommend a check on their current status (e.g., in our updated notices web site).
Web Site for Notices Regarding This Work. We intend to maintain a web site listing corrections of various types. The location of the site is
http://www.wisdom.weizmann.ac.il/∼oded/foc-book.html
Chapter 5: Encryption Schemes
  The Basic Setting (Sec. 5.1)
  Definitions of Security (Sec. 5.2)
  Constructions of Secure Encryption Schemes (Sec. 5.3)
  Advanced Material (Secs. 5.4 and 5.5.1–5.5.3)
Chapter 6: Digital Signatures and Message Authentication
  The Setting and Definitional Issues (Sec. 6.1)
  Length-Restricted Signature Scheme (Sec. 6.2)
  Basic Constructions (Secs. 6.3 and 6.4)
  Advanced Material (Secs. 6.5 and 6.6.1–6.6.3)
Chapter 7: General Cryptographic Protocols
  Overview (Sec. 7.1)
  Advanced Material (all the rest):
    The Two-Party Case (Secs. 7.2–7.4)
    The Multi-Party Case (Secs. 7.5 and 7.6)
Appendix C: Corrections and Additions to Volume 1
Bibliography and Index
Figure 0.2: Rough organization of this volume.
Using This Work This work is intended to serve as both a textbook and a reference text. That is, it is aimed at serving both the beginner and the expert. In order to achieve this aim, the presentation of the basic material is very detailed so as to allow a typical undergraduate in Computer Science to follow it. An advanced student (and certainly an expert) will find the pace (in these parts) far too slow. However, an attempt was made to allow the latter reader to easily skip details obvious to him/her. In particular, proofs are typically presented in a modular way. We start with a high-level sketch of the main ideas and only later pass to the technical details. Passage from high-level descriptions to lower-level details is typically marked by phrases such as “details follow.”
In a few places, we provide straightforward but tedious details in indented paragraphs such as this one. In some other (even fewer) places, such paragraphs provide technical proofs of claims that are of marginal relevance to the topic of the work. More advanced material is typically presented at a faster pace and with fewer details. Thus, we hope that the attempt to satisfy a wide range of readers will not harm any of them.
Teaching. The material presented in this work, on the one hand, is way beyond what one may want to cover in a course and, on the other hand, falls very short of what one may want to know about Cryptography in general. To assist these conflicting needs, we make a distinction between basic and advanced material and provide suggestions for further reading (in the last section of each chapter). In particular, sections, subsections, and subsubsections marked by an asterisk (*) are intended for advanced reading.
Depending on the class, each lecture consists of 50–90 minutes. Lectures 1–15 are covered by the first volume. Lectures 16–28 are covered by the current (second) volume.
Lecture 1: Introduction, Background, etc. (depending on class)
Lectures 2–5: Computational Difficulty (One-Way Functions)
  Main: Definition (Sec. 2.2), Hard-Core Predicates (Sec. 2.5)
  Optional: Weak Implies Strong (Sec. 2.3), and Secs. 2.4.2–2.4.4
Lectures 6–10: Pseudorandom Generators
  Main: Definitional Issues and a Construction (Secs. 3.2–3.4)
  Optional: Pseudorandom Functions (Sec. 3.6)
Lectures 11–15: Zero-Knowledge Proofs
  Main: Some Definitions and a Construction (Secs. 4.2.1, 4.3.1, 4.4.1–4.4.3)
  Optional: Secs. 4.2.2, 4.3.2, 4.3.3–4.3.4, 4.4.4
Lectures 16–20: Encryption Schemes
  Main: Definitions and Constructions (Secs. 5.1, 5.2.1–5.2.4, 5.3.2–5.3.4)
  Optional: Beyond Passive Notions of Security (Overview, Sec. 5.4.1)
Lectures 21–24: Signature Schemes
  Definitions and Constructions (Secs. 6.1, 6.2.1–6.2.2, 6.3.1.1, 6.4.1–6.4.2)
Lectures 25–28: General Cryptographic Protocols
  The Definitional Approach and a General Construction (Overview, Sec. 7.1)
Figure 0.3: Plan for one-semester course on Foundations of Cryptography.
This work is intended to provide all material required for a course on Foundations of Cryptography. For a one-semester course, the teacher will definitely need to skip all advanced material (marked by an asterisk) and perhaps even some basic material; see the suggestions in Figure 0.3. Depending on the class, this should allow coverage of the basic material at a reasonable level (i.e., all material marked as “main” and some of the “optional”). This work can also serve as a textbook for a two-semester course. In such a course, one should be able to cover the entire basic material suggested in Figure 0.3, and even some of the advanced material.
Practice. The aim of this work is to provide sound theoretical foundations for cryptography. As argued earlier, such foundations are necessary for any sound practice of cryptography. Indeed, practice requires more than theoretical foundations, whereas the current work makes no attempt to provide anything beyond the latter. However, given a sound foundation, one can learn and evaluate various practical suggestions that appear elsewhere (e.g., in [149]). On the other hand, lack of sound foundations results in an inability to critically evaluate practical suggestions, which in turn leads to unsound decisions. Nothing could be more harmful to the design of schemes that need to withstand adversarial attacks than misconceptions about such attacks.
Relationship to Another Book by the Author A frequently asked question refers to the relationship of the current work to my text Modern Cryptography, Probabilistic Proofs and Pseudorandomness [106]. That text consists of three brief introductions to the related topics in its title. Specifically,Chapter 1 of [106] provides a brief (i.e., 30-page) summary of the current work. The other two chapters of [106] provide a wider perspective on two topics mentioned in the current work (i.e., Probabilistic Proofs and Pseudorandomness). Further comments on the latter aspect are provided in the relevant chapters of the first volume of the current work (i.e., [108]).
A Comment Regarding the Current Volume
There are no privileges without duties. Adv. Klara Goldreich-Ingwer (1912–2004)
Writing the first volume was fun. In comparison to the current volume, the definitions, constructions, and proofs in the first volume were relatively simple and easy to write. Furthermore, in most cases, the presentation could safely follow existing texts. Consequently, the writing effort was confined to reorganizing the material, revising existing texts, and augmenting them with additional explanations and motivations. Things were quite different with respect to the current volume. Even the simplest notions defined in the current volume are more complex than most notions treated in the first volume (e.g., contrast secure encryption with one-way functions or secure protocols with zero-knowledge proofs). Consequently, the definitions are more complex, and many of the constructions and proofs are more complex. Furthermore, in most cases, the presentation could not follow existing texts. Indeed, most effort had to be (and was) devoted to the actual design of constructions and proofs, which were only inspired by existing texts. The mere fact that writing this volume required so much effort may imply that this volume will be very valuable: Even experts may be happy to be spared the hardship of trying to understand this material based on the original research manuscripts.
Acknowledgments
...very little do we have and inclose which we can call our own in the deep sense of the word. We all have to accept and learn, either from our predecessors or from our contemporaries. Even the greatest genius would not have achieved much if he had wished to extract everything from inside himself. But there are many good people, who do not understand this, and spend half their lives wondering in darkness with their dreams of originality. I have known artists who were proud of not having followed any teacher and of owing everything only to their own genius. Such fools! Goethe, Conversations with Eckermann, 17.2.1832
First of all, I would like to thank three remarkable people who had a tremendous influence on my professional development: Shimon Even introduced me to theoretical computer science and closely guided my first steps. Silvio Micali and Shafi Goldwasser led my way in the evolving foundations of cryptography and shared with me their constant efforts for further developing these foundations. I have collaborated with many researchers, yet I feel that my collaboration with Benny Chor and Avi Wigderson had the most important impact on my professional development and career. I would like to thank them both for their indispensable contribution to our joint research and for the excitement and pleasure I had when collaborating with them. Leonid Levin deserves special thanks as well. I had many interesting discussions with Leonid over the years, and sometimes it took me too long to realize how helpful these discussions were. Special thanks also to four of my former students, from whom I have learned a lot (especially regarding the contents of this volume): to Boaz Barak for discovering the unexpected power of non-black-box simulations, to Ran Canetti for developing definitions and composition theorems for secure multi-party protocols, to Hugo Krawczyk for educating me about message authentication codes, and to Yehuda Lindell for significant simplification of the construction of a posteriori CCA (which enables a feasible presentation).
Next, I’d like to thank a few colleagues and friends with whom I had significant interaction regarding Cryptography and related topics. These include Noga Alon, Hagit Attiya, Mihir Bellare, Ivan Damgård, Uri Feige, Shai Halevi, Johan Håstad, Amir Herzberg, Russell Impagliazzo, Jonathan Katz, Joe Kilian, Eyal Kushilevitz, Yoad Lustig, Mike Luby, Daniele Micciancio, Moni Naor, Noam Nisan, Andrew Odlyzko, Yair Oren, Rafail Ostrovsky, Erez Petrank, Birgit Pfitzmann, Omer Reingold, Ron Rivest, Alon Rosen, Amit Sahai, Claus Schnorr, Adi Shamir, Victor Shoup, Madhu Sudan, Luca Trevisan, Salil Vadhan, Ronen Vainish, Yacob Yacobi, and David Zuckerman. Even assuming I did not forget people with whom I had significant interaction on topics touching upon this book, the list of people I’m indebted to is far more extensive. It certainly includes the authors of many papers mentioned in the reference list. It also includes the authors of many Cryptography-related papers that I forgot to mention, and the authors of many papers regarding the Theory of Computation at large (a theory taken for granted in the current book). Finally, I would like to thank Boaz Barak, Alex Healy, Vlad Kolesnikov, Yehuda Lindell, and Minh-Huyen Nguyen for reading parts of this manuscript and pointing out various difficulties and errors.
CHAPTER FIVE

Encryption Schemes
Up to the 1970s, Cryptography was understood as the art of building encryption schemes, that is, the art of constructing schemes allowing secret data exchange over insecure channels. Since the 1970s, other tasks (e.g., signature schemes) have been recognized as falling within the domain of Cryptography (and even being at least as central to Cryptography). Yet the construction of encryption schemes remains, and is likely to remain, a central enterprise of Cryptography. In this chapter we review the well-known notions of private-key and public-key encryption schemes. More importantly, we define what is meant by saying that such schemes are secure. This definitional treatment is a cornerstone of the entire area, and much of this chapter is devoted to various aspects of it. We also present several constructions of secure (private-key and public-key) encryption schemes. It turns out that using randomness during the encryption process (i.e., not only at the key-generation phase) is essential to security.
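The necessity of randomness during encryption can be seen concretely: if the encryption algorithm is deterministic, then an eavesdropper who observes two identical ciphertexts learns that the same plaintext was sent twice. The toy sketch below (our own illustration, not a construction from this book) simulates a deterministic cipher by XORing each message with a fixed pad derived from the key alone:

```python
import hashlib
import secrets

def det_encrypt(key: bytes, m: bytes) -> bytes:
    # Deterministic encryption: the pad depends only on the key, so equal
    # plaintexts always yield equal ciphertexts -- this is the flaw.
    # (Messages are assumed to be at most 32 bytes here, the pad length.)
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(m, pad))

key = secrets.token_bytes(16)
c1 = det_encrypt(key, b"buy ")
c2 = det_encrypt(key, b"buy ")
c3 = det_encrypt(key, b"sell")
# Without knowing the key, an eavesdropper sees that the first two
# messages are identical and differ from the third.
assert c1 == c2 and c1 != c3
```

Decryption here is the same XOR, so the scheme is functional; its failure is purely one of secrecy in the presence of repeated messages, a concern made precise by the multiple-message definitions of Section 5.2.4.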
Organization. Our main treatment (i.e., Sections 5.1–5.3) refers to security under “passive” (eavesdropping) attacks. In contrast, in Section 5.4, we discuss notions of se- curity under active attacks, culminating in robustness against chosen ciphertext attacks. Additional issues are discussed in Section 5.5.
Teaching Tip. We suggest focusing on the basic definitional treatment (i.e., Sections 5.1 and 5.2.1–5.2.4) and on the feasibility of satisfying these definitions (as demonstrated by the simplest constructions provided in Sections 5.3.3 and 5.3.4.1). The overview of security under active attacks (i.e., Section 5.4.1) is also recommended. We assume that the reader is familiar with the material in previous chapters (and specifically with Sections 2.2, 2.4, 2.5, 3.2–3.4, and 3.6). This familiarity is important not only because we use some of the notions and results presented in these sections but also because we use similar proof techniques (and do so while assuming that this is not the reader’s first encounter with these techniques).
ENCRYPTION SCHEMES
5.1. The Basic Setting
Loosely speaking, encryption schemes are supposed to enable private exchange of information between parties that communicate over an insecure channel. Thus, the basic setting consists of a sender, a receiver, and an insecure channel that may be tapped by an adversary. The goal is to allow the sender to transfer information to the receiver, over the insecure channel, without letting the adversary figure out this information. Thus, we distinguish between the actual (secret) information that the sender wishes to transmit and the message(s) sent over the insecure communication channel. The former is called the plaintext, whereas the latter is called the ciphertext. Clearly, the ciphertext must differ from the plaintext or else the adversary can easily obtain the plaintext by tapping the channel. Thus, the sender must transform the plaintext into a corresponding ciphertext such that the receiver can retrieve the plaintext from the ciphertext, but the adversary cannot do so. Clearly, something must distinguish the receiver (who is able to retrieve the plaintext from the corresponding ciphertext) from the adversary (who cannot do so). Specifically, the receiver knows something that the adversary does not know. This thing is called a key.

An encryption scheme consists of a method of transforming plaintexts into ciphertexts and vice versa, using adequate keys. These keys are essential to the ability to effect these transformations. Formally, these transformations are performed by corresponding algorithms: an encryption algorithm that transforms a given plaintext and an adequate (encryption) key into a corresponding ciphertext, and a decryption algorithm that given the ciphertext and an adequate (decryption) key recovers the original plaintext. Actually, we need to consider a third algorithm, namely, a probabilistic algorithm used to generate keys (i.e., a key-generation algorithm).
This algorithm must be probabilistic (or else, by invoking it, the adversary obtains the very same key used by the receiver). We stress that the encryption scheme itself (i.e., the aforementioned three algorithms) may be known to the adversary, and the scheme’s security relies on the hypothesis that the adversary does not know the actual keys in use.1 In accordance with these principles, an encryption scheme consists of three algorithms. These algorithms are public (i.e., known to all parties). The two obvious algorithms are the encryption algorithm, which transforms plaintexts into ciphertexts, and the decryption algorithm, which transforms ciphertexts into plaintexts. By these principles, it is clear that the decryption algorithm must employ a key that is known to the receiver but is not known to the adversary. This key is generated using a third algorithm, called the key-generator. Furthermore, it is not hard to see that the encryption process must also depend on the key (or else messages sent to one party can be read by a different party who is also a potential receiver). Thus, the key-generation algorithm is used to produce a pair of (related) keys, one for encryption and one for decryption. The encryption algorithm, given an encryption-key and a plaintext, produces a ciphertext that when fed to the decryption algorithm, together with the corresponding
1 In fact, in many cases, the legitimate interest may be served best by publicizing the scheme itself, because this allows an (independent) expert evaluation of the security of the scheme to be obtained.
decryption-key, yields the original plaintext. We stress that knowledge of the decryption-key is essential for the latter transformation.

[Figure 5.1: Private-key encryption schemes: an illustration. The plaintext X is fed to the encryption algorithm E in the sender’s protected region, the ciphertext crosses the channel tapped by the adversary, and the decryption algorithm D in the receiver’s protected region recovers X. The key K is known to both receiver and sender, but is unknown to the adversary. For example, the receiver may generate K at random and pass it to the sender via a perfectly-private secondary channel (not shown here).]
5.1.1. Private-Key Versus Public-Key Schemes

A fundamental distinction between encryption schemes refers to the relation between the aforementioned pair of keys (i.e., the encryption-key and the decryption-key). The simpler (and older) notion assumes that the encryption-key equals the decryption-key. Such schemes are called private-key (or symmetric).
Private-Key Encryption Schemes. To use a private-key scheme, the legitimate parties must first agree on the secret key. This can be done by having one party generate the key at random and send it to the other party using a (secondary) channel that (unlike the main channel) is assumed to be secure (i.e., it cannot be tapped by the adversary). A crucial point is that the key is generated independently of the plaintext, and so it can be generated and exchanged prior to the plaintext even being determined. Assuming that the legitimate parties have agreed on a (secret) key, they can secretly communicate by using this key (see illustration in Figure 5.1): The sender encrypts the desired plaintext using this key, and the receiver recovers the plaintext from the corresponding ciphertext (by using the same key). Thus, private-key encryption is a way of extending a private channel over time: If the parties can use a private channel today (e.g., they are currently in the same physical location) but not tomorrow, then they can use the private channel today to exchange a secret key that they may use tomorrow for secret communication. A simple example of a private-key encryption scheme is the one-time pad. The secret key is merely a uniformly chosen sequence of n bits, and an n-bit long ciphertext is produced by XORing the plaintext, bit-by-bit, with the key. The plaintext is recovered from the ciphertext in the same way. Clearly, the one-time pad provides
absolute security. However, its usage of the key is inefficient; or, put in other words, it requires keys of length comparable to the total length (or information contents) of the data being communicated. By contrast, the rest of this chapter will focus on encryption schemes in which n-bit long keys allow for the secure communication of data having an a priori unbounded (albeit polynomial in n) length. In particular, n-bit long keys allow for significantly more than n bits of information to be communicated securely.

[Figure 5.2: Public-key encryption schemes: an illustration. As in Figure 5.1, the plaintext X is encrypted by E and recovered by D, but here encryption uses the key e while decryption uses the key d. The key-pair (e, d) is generated by the receiver, who posts the encryption-key e on a public media, while keeping the decryption-key d secret.]
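The one-time pad just described is simple enough to sketch in a few lines. The following illustration is ours (the function names are not from the text), and it operates on bytes rather than bits for convenience:

```python
import secrets

def keygen(n: int) -> bytes:
    # The one-time pad key: a uniformly chosen n-byte string.
    return secrets.token_bytes(n)

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # The pad must be exactly as long as the plaintext (and never reused).
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

# The plaintext is recovered from the ciphertext in the same way:
# XORing with the key a second time cancels the first XOR.
decrypt = encrypt
```

Note that the key is as long as the data being communicated, which is exactly the inefficiency discussed above.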
Public-Key Encryption Schemes. A new type of encryption schemes emerged in the 1970s. In these so-called public-key (or asymmetric) encryption schemes, the decryption-key differs from the encryption-key. Furthermore, it is infeasible to find the decryption-key, given the encryption-key. These schemes enable secure communication without the use of a secure channel. Instead, each party applies the key-generation algorithm to produce a pair of keys. The party (denoted P) keeps the decryption-key, denoted dP , secret and publishes the encryption-key, denoted eP . Now, any party can send P private messages by encrypting them using the encryption-key eP . Party P can decrypt these messages by using the decryption-key dP , but nobody else can do so. (See illustration in Figure 5.2.)
5.1.2. The Syntax of Encryption Schemes

We start by defining the basic mechanism of encryption schemes. This definition says nothing about the security of the scheme (which is the subject of the next section).
Definition 5.1.1 (encryption scheme): An encryption scheme is a triple, (G, E, D), of probabilistic polynomial-time algorithms satisfying the following two conditions:
1. On input 1^n, algorithm G (called the key-generator) outputs a pair of bit strings.
2. For every pair (e, d) in the range of G(1^n), and for every α ∈ {0,1}^*, algorithms E (encryption) and D (decryption) satisfy

Pr[D(d, E(e, α)) = α] = 1

where the probability is taken over the internal coin tosses of algorithms E and D.
The integer n serves as the security parameter of the scheme. Each (e, d) in the range of G(1^n) constitutes a pair of corresponding encryption/decryption keys. The string E(e, α) is the encryption of the plaintext α ∈ {0,1}^* using the encryption-key e, whereas D(d, β) is the decryption of the ciphertext β using the decryption-key d.
We stress that Definition 5.1.1 says nothing about security, and so trivial (insecure) algorithms may satisfy it (e.g., E(e, α) := α and D(d, β) := β). Furthermore, Definition 5.1.1 does not distinguish private-key encryption schemes from public-key ones. The difference between the two types is introduced in the security definitions: In a public-key scheme the “breaking algorithm” gets the encryption-key (i.e., e) as an additional input (and thus e ≠ d follows), while in private-key schemes e is not given to the “breaking algorithm” (and thus, one may assume, without loss of generality, that e = d). We stress that this definition requires the scheme to operate for every plaintext, and specifically for plaintexts of length exceeding the length of the encryption-key. (This rules out the information-theoretically secure “one-time pad” scheme mentioned earlier.)
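To make the syntactic requirement concrete, here is a toy triple (G, E, D) — our own illustration, not a scheme from the text — that satisfies Condition (2) of Definition 5.1.1, including for plaintexts longer than the key, while being utterly insecure:

```python
import secrets

def G(n: int):
    # Key-generation: a uniformly chosen n-byte key, with e = d
    # (as one may assume without loss of generality for private-key schemes).
    k = secrets.token_bytes(n)
    return k, k

def E(e: bytes, alpha: bytes) -> bytes:
    # "Encryption" by XOR with the key repeated to the plaintext's length.
    # Repeating the pad is NOT secure; the point is only that the
    # correctness condition D(d, E(e, alpha)) = alpha holds with probability 1.
    return bytes(a ^ e[i % len(e)] for i, a in enumerate(alpha))

def D(d: bytes, beta: bytes) -> bytes:
    # Decryption is the same XOR, so it inverts E for every plaintext length.
    return bytes(b ^ d[i % len(d)] for i, b in enumerate(beta))
```

The sketch operates for every plaintext, as Definition 5.1.1 requires, which is precisely what the one-time pad (whose key must be as long as the plaintext) cannot do.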
Notation. In the rest of this text, we write E_e(α) instead of E(e, α) and D_d(β) instead of D(d, β). Sometimes, when there is little risk of confusion, we drop these subscripts. Also, we let G_1(1^n) (resp., G_2(1^n)) denote the first (resp., second) element in the pair G(1^n). That is, G(1^n) = (G_1(1^n), G_2(1^n)). Without loss of generality, we may assume that |G_1(1^n)| and |G_2(1^n)| are polynomially related to n, and that each of these integers can be efficiently computed from the other. (In fact, we may even assume that |G_1(1^n)| = |G_2(1^n)| = n; see Exercise 6.)
Comments. Definition 5.1.1 may be relaxed in several ways without significantly harming its usefulness. For example, we may relax Condition (2) and allow a negligible decryption error (e.g., Pr[D_d(E_e(α)) ≠ α] < 2^(−n)). Alternatively, one may postulate that Condition (2) holds for all but a negligible measure of the key-pairs generated by G(1^n). At least one of these relaxations is essential for some suggestions of (public-key) encryption schemes. Another relaxation consists of restricting the domain of possible plaintexts (and ciphertexts). For example, one may restrict Condition (2) to α’s of length ℓ(n), where ℓ : N → N is some fixed function. Given a scheme of the latter type (with plaintext length ℓ), we may construct a scheme as in Definition 5.1.1 by breaking plaintexts into blocks of length ℓ(n) and applying the restricted scheme separately to each block. (Note that security of the resulting scheme requires that the security of the length-restricted scheme be preserved under multiple encryptions with the same key.) For more details see Sections 5.2.4 and 5.3.2.
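The block-by-block construction mentioned in the last comment can be sketched as follows. This is a hedged illustration: `E` and `D` stand for a hypothetical length-restricted scheme handling ℓ-byte plaintexts, and padding of the last block is ignored by requiring the plaintext length to be a multiple of ℓ:

```python
def encrypt_blocks(E, e, alpha: bytes, ell: int) -> list:
    # Apply the length-restricted encryption algorithm E separately
    # to each ell-byte block of the plaintext, reusing the same key e.
    assert len(alpha) % ell == 0  # padding is ignored in this sketch
    return [E(e, alpha[i:i + ell]) for i in range(0, len(alpha), ell)]

def decrypt_blocks(D, d, blocks) -> bytes:
    # Decrypt each block and concatenate the results.
    return b"".join(D(d, beta) for beta in blocks)
```

As the text stresses, correctness of this composition is immediate, but its security requires that the length-restricted scheme remain secure under multiple encryptions with the same key.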
5.2. Definitions of Security
In this section we present two fundamental definitions of security and prove their equivalence. The first definition, called semantic security, is the most natural one. Semantic security is a computational-complexity analogue of Shannon’s definition of perfect privacy (which requires that the ciphertext yield no information regarding the plaintext). Loosely speaking, an encryption scheme is semantically secure if it is infeasible to learn anything about the plaintext from the ciphertext (i.e., impossibility is replaced by infeasibility). The second definition has a more technical flavor. It interprets security as the infeasibility of distinguishing between encryptions of a given pair of messages. This definition is useful in demonstrating the security of a proposed encryption scheme and for the analysis of cryptographic protocols that utilize an encryption scheme.

We stress that the definitions presented in Section 5.2.1 go far beyond saying that it is infeasible to recover the plaintext from the ciphertext. The latter statement is indeed a minimal requirement for a secure encryption scheme, but we claim that it is far too weak a requirement. For example, one should certainly not use an encryption scheme that leaks the first part of the plaintext (even if it is infeasible to recover the entire plaintext from the ciphertext). In general, an encryption scheme is typically used in applications where even obtaining partial information on the plaintext may endanger the security of the application. The question of which partial information endangers the security of a specific application is typically hard (if not impossible) to answer. Furthermore, we wish to design application-independent encryption schemes, and when doing so it is the case that each piece of partial information may endanger some application. Thus, we require that it be infeasible to obtain any information about the plaintext from the ciphertext.
Moreover, in most applications the plaintext may not be uniformly distributed, and some a priori information regarding it may be available to the adversary. We thus require that the secrecy of all partial information be preserved also in such a case. That is, given any a priori information on the plaintext, it is infeasible to obtain any (new) information about the plaintext from the ciphertext (beyond what is feasible to obtain from the a priori information on the plaintext). The definition of semantic security postulates all of this.
Security of Multiple Plaintexts. Continuing the preceding discussion, the definitions are presented first in terms of the security of a single encrypted plaintext. However, in many cases, it is desirable to encrypt many plaintexts using the same encryption-key, and security needs to be preserved in these cases, too. Adequate definitions and discussions are deferred to Section 5.2.4.
A Technical Comment: Non-Uniform Complexity Formulation. To simplify the exposition, we define security in terms of non-uniform complexity (see Section 1.3.3 of Volume 1). Namely, in the security definitions we expand the domain of efficient adversaries (and algorithms) to include (explicitly or implicitly) non-uniform polynomial-size circuits, rather than only probabilistic polynomial-time machines. Likewise, we make no computational restriction regarding the probability distribution from which messages are taken, nor regarding the a priori information available on these messages. We note that employing such a non-uniform complexity formulation (rather than a uniform one) may only strengthen the definitions, yet it does weaken the implications proven between the definitions because these (simpler) proofs make free usage of non-uniformity. A uniform-complexity treatment is provided in Section 5.2.5.
5.2.1. Semantic Security

A good disguise should not reveal the person’s height.
Shafi Goldwasser and Silvio Micali, 1982
Loosely speaking, semantic security means that nothing can be gained by looking at a ciphertext. Following the simulation paradigm, this means that whatever can be efficiently learned from the ciphertext can also be efficiently learned from scratch (or from nothing).
5.2.1.1. The Actual Definitions

To be somewhat more accurate, semantic security means that whatever can be efficiently computed from the ciphertext can be efficiently computed when given only the length of the plaintext. Note that this formulation does not rule out the possibility that the length of the plaintext can be inferred from the ciphertext. Indeed, some information about the length of the plaintext must be revealed by the ciphertext (see Exercise 4). We stress that other than information about the length of the plaintext, the ciphertext is required to yield nothing about the plaintext.

In the actual definitions, we consider only information regarding the plaintext (rather than information regarding the ciphertext and/or the encryption-key) that can be obtained from the ciphertext. Furthermore, we restrict our attention to functions (rather than randomized processes) applied to the plaintext. We do so because of the intuitive appeal of this special case, and are comfortable doing so because this special case implies the general one (see Exercise 13). We augment this formulation by requiring that the infeasibility of obtaining information about the plaintext remain valid even in the presence of other auxiliary partial information about the same plaintext. Namely, whatever can be efficiently computed from the ciphertext and additional partial information about the plaintext can be efficiently computed given only the length of the plaintext and the same partial information. In the definition that follows, the information regarding the plaintext that the adversary tries to obtain is represented by the function f, whereas the a priori partial information about the plaintext is represented by the function h. The infeasibility of obtaining information about the plaintext is required to hold for any distribution of plaintexts, represented by the probability ensemble {X_n}_{n∈N}. Security holds only for plaintexts of length polynomial in the security parameter.
This is captured in the following definitions by the restriction |X_n| ≤ poly(n), where “poly” represents an arbitrary (unspecified) polynomial. Note that we cannot hope to provide computational security for plaintexts of unbounded length or for plaintexts of length that is exponential in the security parameter (see Exercise 3). Likewise, we restrict the functions f and h to be polynomially bounded, that is, |f(z)|, |h(z)| ≤ poly(|z|).

The difference between private-key and public-key encryption schemes is manifested in the definition of security. In the latter case, the adversary (which is trying to obtain information on the plaintext) is given the encryption-key, whereas in the former case it is not. Thus, the difference between these schemes amounts to a difference in the adversary model (considered in the definition of security). We start by presenting the definition for private-key encryption schemes.
Definition 5.2.1 (semantic security – private-key): An encryption scheme, (G, E, D), is semantically secure (in the private-key model) if for every probabilistic polynomial-time algorithm A there exists a probabilistic polynomial-time algorithm A′ such that for every probability ensemble {X_n}_{n∈N}, with |X_n| ≤ poly(n), every pair of polynomially bounded functions f, h : {0,1}^* → {0,1}^*, every positive polynomial p, and all sufficiently large n

Pr[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
  < Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)
(The probability in these terms is taken over X_n as well as over the internal coin tosses of either algorithms G, E, and A or algorithm A′.)
We stress that all the occurrences of X_n in each of the probabilistic expressions refer to the same random variable (see the general convention stated in Section 1.2.1 in Volume 1). The security parameter 1^n is given to both algorithms (as well as to the functions h and f) for technical reasons.2 The function h provides both algorithms with partial information regarding the plaintext X_n. Furthermore, h also makes the definition implicitly non-uniform; see further discussion in Section 5.2.1.2. In addition, both algorithms get the length of X_n. These algorithms then try to guess the value f(1^n, X_n); namely, they try to infer information about the plaintext X_n. Loosely speaking, in a semantically secure encryption scheme the ciphertext does not help in this inference task. That is, the success probability of any efficient algorithm (i.e., algorithm A) that is given the ciphertext can be matched, up to a negligible fraction, by the success probability of an efficient algorithm (i.e., algorithm A′) that is not given the ciphertext at all.

Definition 5.2.1 refers to private-key encryption schemes. To derive a definition of security for public-key encryption schemes, the encryption-key (i.e., G_1(1^n)) should be given to the adversary as an additional input.
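For concrete (toy) instantiations, the success probability on the left-hand side of Definition 5.2.1 can be estimated empirically. The following Monte Carlo sketch is ours, not from the text; all ingredients of the definition are passed in as callables, and dropping the ciphertext argument would yield the corresponding estimate for A′:

```python
def success_rate(A, G, E, sample_X, h, f, n: int, trials: int = 1000) -> float:
    # Monte Carlo estimate of
    #   Pr[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)],
    # where the probability is over the plaintext X_n and the keys.
    wins = 0
    for _ in range(trials):
        x = sample_X(n)   # a draw from the plaintext ensemble X_n
        e, _d = G(n)      # fresh keys for each experiment
        wins += (A(n, E(e, x), len(x), h(n, x)) == f(n, x))
    return wins / trials
```

Of course, such sampling only illustrates the quantities being compared; it cannot certify security, which quantifies over all efficient adversaries.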
2 The auxiliary input 1^n is used for several purposes. First, it allows smooth transition to fully non-uniform formulations (e.g., Definition 5.2.3) in which the (polynomial-size) adversary depends on n. Thus, it is good to provide A (and thus also A′) with 1^n. Once this is done, it is natural to allow also h and f to depend on n. In fact, allowing h and f to explicitly depend on n facilitates the proof of Proposition 5.2.7. In light of the fact that 1^n is given to both algorithms, we may replace the input part 1^{|X_n|} by |X_n|, because the former may be recovered from the latter in poly(n)-time.
Definition 5.2.2 (semantic security – public-key): An encryption scheme, (G, E, D), is semantically secure (in the public-key model) if for every probabilistic polynomial-time algorithm A, there exists a probabilistic polynomial-time algorithm A′ such that for every {X_n}_{n∈N}, f, h, p, and n as in Definition 5.2.1

Pr[A(1^n, G_1(1^n), E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
  < Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)
Recall that (by our conventions) both occurrences of G_1(1^n), in the first probabilistic expression, refer to the same random variable. We comment that it is pointless to give the random encryption-key (i.e., G_1(1^n)) to algorithm A′ (because the task as well as the main inputs of A′ are unrelated to the encryption-key, and anyhow A′ could generate a random encryption-key by itself).
Terminology. For sake of simplicity, we refer to an encryption scheme that is semantically secure in the private-key (resp., public-key) model as a semantically secure private-key (resp., public-key) encryption scheme.

The reader may note that a semantically secure public-key encryption scheme cannot employ a deterministic encryption algorithm; that is, E_e(x) must be a random variable rather than a fixed string. This is more evident with respect to the equivalent Definition 5.2.4. See further discussion following Definition 5.2.4.
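The observation that a deterministic public-key encryption algorithm cannot be secure is easy to demonstrate in code: given the public encryption-key, an adversary who knows that the plaintext is one of two candidates simply re-encrypts both candidates and compares. The sketch below is ours (the function name and the toy deterministic scheme used in testing are hypothetical):

```python
def distinguish(E, e, c, x, y):
    # E is a (hypothetical) deterministic encryption algorithm, e the public
    # encryption-key, and c a ciphertext known to encrypt either x or y.
    # Since E is deterministic, re-encrypting the candidates identifies c
    # with certainty, so no partial information about the plaintext is hidden.
    return x if E(e, x) == c else y
```

This is precisely why E_e(x) must be a random variable: randomized encryption makes the re-encryption comparison fail, because fresh coin tosses yield a different ciphertext.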
5.2.1.2. Further Discussion of Some Definitional Choices

We discuss several secondary issues regarding Definitions 5.2.1 and 5.2.2. The interested reader is also referred to Exercises 16, 17, and 19, which present additional variants of the definition of semantic security.
Implicit Non-Uniformity of the Definitions. The fact that h is not required to be computable makes these definitions non-uniform. This is the case because both algorithms are given h(1^n, X_n) as auxiliary input, and the latter may account for arbitrary (polynomially bounded) advice. For example, letting h(1^n, ·) = a_n ∈ {0,1}^{poly(n)} means that both algorithms are supplied with (non-uniform) advice (as in one of the common formulations of non-uniform polynomial-time; see Section 1.3.3). In general, the function h can code both information regarding its main input and non-uniform advice depending on the security parameter (i.e., h(1^n, x) = (h′(x), a_n)). We comment that these definitions are equivalent to allowing A and A′ to be related families of non-uniform circuits, where by related we mean that the circuits in the family A′ = {A′_n}_{n∈N} can be efficiently computed from the corresponding circuits in the family A = {A_n}_{n∈N}. For further discussion, see Exercise 9.
Lack of Computational Restrictions Regarding the Function f. We do not even require that the function f be computable. This seems strange at first glance because (unlike the situation with respect to the function h, which codes a priori information given to the algorithms) the algorithms are asked to guess the value of f (at a plaintext implicit in the ciphertext given only to A). However, as we shall see in the sequel (see also Exercise 13), the actual technical content of semantic security is that the probability ensembles {(1^n, E(X_n), 1^{|X_n|}, h(1^n, X_n))}_{n∈N} and {(1^n, E(1^{|X_n|}), 1^{|X_n|}, h(1^n, X_n))}_{n∈N} are computationally indistinguishable (and so whatever A can compute can also be computed by A′). Note that the latter statement does not refer to the function f, which explains why we need not make any restriction regarding f.
Other Modifications of No Impact. Actually, inclusion of a priori information regarding the plaintext (represented by the function h) does not affect the definition of semantic security: Definition 5.2.1 remains intact if we restrict h to depend only on the security parameter (and so provide only plaintext-oblivious non-uniform advice). (This can be shown in various ways; e.g., see Exercise 14.1.) Also, the function f can be restricted to be a Boolean function having polynomial-size circuits, and the random variable X_n may be restricted to be very “dull” (e.g., have only two strings in its support): See the proof of Theorem 5.2.5. On the other hand, Definition 5.2.1 implies stronger forms discussed in Exercises 13, 17, and 18.
5.2.2. Indistinguishability of Encryptions

A good disguise should not allow a mother to distinguish her own children.
Shafi Goldwasser and Silvio Micali, 1982
The following technical interpretation of security states that it is infeasible to distinguish the encryptions of two plaintexts (of the same length). That is, such ciphertexts are computationally indistinguishable as defined in Definition 3.2.7. Again, we start with the private-key variant.
Definition 5.2.3 (indistinguishability of encryptions – private-key): An encryption scheme, (G, E, D), has indistinguishable encryptions (in the private-key model) if for every polynomial-size circuit family {C_n}, every positive polynomial p, all sufficiently large n, and every x, y ∈ {0,1}^{poly(n)} (i.e., |x| = |y|),