Proceedings of the 2014 ACM Symposium on Theory of Computing

How to Use Indistinguishability Obfuscation: Deniable Encryption, and More

Amit Sahai∗ (UCLA) [email protected]
Brent Waters† (University of Texas at Austin) [email protected]

ABSTRACT
We introduce a new technique that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: In deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide "fake" randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public key encryption, short "hash-and-sign" selectively secure signatures, chosen-ciphertext secure public key encryption, non-interactive zero knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a "central hub" for cryptography.

Categories and Subject Descriptors
F.1.3 [COMPUTATION BY ABSTRACT DEVICES]: Complexity Measures and Classes—Reducibility and completeness

General Terms
Security; Theory

Keywords
Obfuscation, Encryption

∗Research supported in part from a DARPA/ONR PROCEED award, NSF grants 1228984, 1136174, 1118096, and 1065276, a Xerox Faculty Research Award, a Google Faculty Research Award, an equipment grant from Intel, and an Okawa Foundation Research Grant. This material is based upon work supported by the Defense Advanced Research Projects Agency through the U.S. Office of Naval Research under Contract N00014-11-1-0389. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense, the National Science Foundation, or the U.S. Government.

†Research supported in part from NSF CNS-0915361, CNS-0952692, and CNS-1228599, DARPA through the U.S. Office of Naval Research under Contract N00014-11-1-0382, DARPA N11AP20006, a Google Faculty Research award, the Alfred P. Sloan Fellowship, a Microsoft Faculty Fellowship, and a Packard Foundation Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Defense or the U.S. Government.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
STOC '14, May 31 – June 03, 2014, New York, NY, USA. Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-2710-7/14/05 $15.00. http://dx.doi.org/10.1145/2591796.2591825

1. INTRODUCTION
In 2001, Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang [1, 2] initiated the systematic study of program obfuscation, which aims to make computer programs "unintelligible" while preserving their functionality. They offered two notions of general obfuscation. First, they suggested a quite intuitive notion called virtual black-box obfuscation – which asks that an obfuscated program be no more useful than a black box implementing the program. Perhaps unsurprisingly, they showed that this notion of virtual black-box obfuscation would have many immediate and intuitive applications throughout cryptography, including, for example, a transformation converting a natural private-key encryption scheme into a public-key encryption scheme. Unfortunately, however, they also showed that this notion of general-purpose virtual black-box obfuscation is impossible to achieve.

Second, motivated by this impossibility, they also proposed the less intuitive, but potentially realizable, notion of indistinguishability obfuscation – which asks only that obfuscations of any two distinct (equal-size) programs that implement identical functionalities be computationally indistinguishable from each other. Unfortunately, it has been less clear how useful indistinguishability obfuscation would be. Recently, Garg, Gentry, Halevi, Raykova, Sahai, and Waters [12] proposed the first candidate construction of an efficient indistinguishability obfuscator for general programs (written as boolean circuits), and also showed how to apply indistinguishability obfuscation to achieve powerful new functional encryption schemes for general circuits.

We believe the work of [12] is likely to lead to significant follow-up research proposing alternative constructions of indistinguishability obfuscators under various other (possibly standard) assumptions, perhaps in a manner similar to research on Fully Homomorphic Encryption [14, 7, 6, 5, 15]. Despite this significant advance, some of the most basic questions about how to use indistinguishability obfuscation remain open. For example, can we use indistinguishability obfuscation to transform a natural private-key encryption scheme to a public-key encryption scheme, as can be done using virtual black-box obfuscation? In this work, we take a fresh look at the question of how to use indistinguishability obfuscation in general contexts. To this end, we introduce a new technique for applying indistinguishability obfuscation, that we call punctured programs. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997 [8]: In deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide "fake" randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

Punctured Programs.
At the heart of our results is a new technique for applying indistinguishability obfuscation that we call punctured programs: At a very high level, the idea of the technique is to alter a program (which is to be obfuscated) by surgically removing a key element of the program, without which the adversary cannot win the security game it must play, but in a way that does not alter the functionality of the program. This is best illustrated by means of an example.

Consider the problem of transforming a natural private-key encryption scheme into a public-key encryption scheme, by means of obfuscation. This idea was first proposed by Diffie and Hellman in their original paper on public-key cryptography [9]. For example, consider the following simple and natural private-key encryption scheme: the secret key is the key K to a pseudo-random function (PRF). To encrypt a message m, the sender picks a random string r, and outputs (r, PRF(K, r) ⊕ m).

How can we use indistinguishability obfuscation to convert this encryption scheme into a public-key scheme? Intuitively speaking, security requires that an adversary who is given a challenge ciphertext c∗ = (r∗, e∗) encrypting some message m∗ should not be able to recover m∗. As a warmup, consider the following (flawed) solution idea: Let a public key simply be an indistinguishability obfuscation of the encryption function itself, namely the function:

fK(r, m) = (r, PRF(K, r) ⊕ m)

The central idea behind our approach is to observe that if it were possible to build the function fK using a "punctured" PRF key which gave no information about PRF(K, r∗), but correctly defined the PRF at all other strings r ≠ r∗, then even if the adversary knew this punctured PRF secret key, it would not be able to break the security of the challenge ciphertext. In fact, it is possible to construct such punctured PRF secret keys for the original GGM construction of PRFs [17]. This was recently observed independently by [3, 4, 20].

All that remains is to argue that we can replace the obfuscation of the original function fK(r, m) = (r, PRF(K, r) ⊕ m) with the obfuscation of the punctured function, in a manner that is indistinguishable to the adversary. However, this is in fact false: Indeed, there is a trivial attack on this suggestion, because given the challenge ciphertext (r∗, e∗), the adversary can simply query the obfuscated program on the input (r∗, 0) and thereby learn PRF(K, r∗) and recover the message m∗ from the challenge ciphertext. Thus, we can never hope to swap the obfuscation of the original function with the punctured function, because the adversary can always tell the difference. Indeed, an indistinguishability obfuscator guarantees indistinguishable obfuscated programs only when given two functionally equivalent input programs. However, as we just saw, the punctured program is clearly not equivalent to the original program, precisely because of the puncturing.

This brings us to the second main idea behind the punctured programs technique, which is crucial to hide where the puncturing takes place and allow the use of indistinguishability obfuscation: The idea is to find a way to "move" the puncturing to a location that is never functionally accessed by the program. Let us illustrate by returning to the example of building a public-key encryption scheme from a natural private-key encryption scheme. Consider a pseudo-random generator PRG that maps λ bits to 2λ bits, where λ is a security parameter. Now consider the following modified private-key encryption function:

fK(r, m) = (PRG(r), PRF(K, PRG(r)) ⊕ m)

As before, let the public key be an indistinguishability obfuscation of this function. Now, however, a challenge ciphertext will look like c∗ = (PRG(r∗), PRF(K, PRG(r∗)) ⊕ m∗). Since a PRG is keyless, however, the adversary cannot distinguish the original security game, in which the challenge ciphertext was created as above, from a hybrid experiment where the challenge ciphertext is created with a freshly chosen random r̃ ∈ {0, 1}^2λ, as follows: c∗ = (r̃, PRF(K, r̃) ⊕ m∗). However, now we observe that with overwhelming probability, r̃ is outside the image of the PRG. Thus, crucially, the encryption function fK never makes use of the PRF evaluated at r̃.

Thus, indistinguishability obfuscation guarantees that an obfuscation of an alternative program that uses a punctured PRF key that carves out r̃ is indistinguishable from the obfuscation of the original program, because these two programs are functionally equivalent. Now, due to the puncturing, the adversary simply does not have enough information to distinguish PRF(K, r̃) from random, and thus security follows.

This example illustrates the core ideas behind the punctured programs technique. While this example is quite simple, it already has surprising implications: For example, the scheme above would have slow encryption, but the speed of decryption would be as fast as evaluating a single call to the GGM PRF, where the PRG underlying the GGM construction could be instantiated using a standard symmetric-key primitive such as AES. Furthermore, the size of ciphertexts in the above scheme, if instantiated using the GGM construction applied to 128-bit AES, could be as short as 300 bits. No existing construction of public-key encryption that we are aware of enjoys such features.

We use this technique to use indistinguishability obfuscation to build several cryptographic primitives, including short signatures, CCA-secure encryption, and perfect non-interactive zero-knowledge arguments for NP. Many of these applications enjoy similar surprising features to the ones stated above for public-key encryption. Most notably, we use this technique to solve the open question of deniable encryption.

On the need for additional assumptions.
Before moving on to deniable encryption, we note that our construction of public-key encryption above required not only indistinguishability obfuscation, but also the assumption that one-way functions exist. It is natural to wonder if this extra assumption of one-way functions is needed. We observe that this indeed seems necessary: If P = NP, then certainly one-way functions would not exist and public-key encryption would be impossible; however, if P = NP then indistinguishability obfuscators would exist for all circuits. This follows immediately from the fact that computing the lexicographically first circuit equivalent to an input circuit is in the polynomial hierarchy (specifically in Σ^P_3). Thus, it should not be possible to construct interesting cryptographic functionalities from indistinguishability obfuscators alone without proving P ≠ NP along the way. Subsequent to this work, Moran and Rosen [21] gave an elegant and simple argument that if NP ≠ co-RP, then iO actually implies the existence of one-way functions.

Deniable Encryption.
In 1997, Canetti, Dwork, Naor, and Ostrovsky [8] proposed the intriguing problem of deniable encryption: In deniable (public-key) encryption, there is a public key used for encryption by an honest sender, to produce a ciphertext that is made public. However, at a later time, an adversary may approach the sender and force her to reveal the message that she encrypted together with the randomness that she used to encrypt the message. In deniable encryption, we would like to provide the sender with a "faking" algorithm that allows her to produce fake randomness, for any message of her choice, that appears indistinguishable from the true randomness to any adversary (that does not possess the secret decryption key). Note that any deniable encryption scheme must incur at least some negligible decryption error. Unfortunately, all known proposed algorithms for deniable encryption allow an attacker to gain a noticeable 1/n advantage in distinguishing real randomness from fake randomness, where, roughly speaking, n is the length of the ciphertext [8]. In fact, over the last 16 years, high-profile papers have claimed solutions to this problem, most notably the recent work of [10], but these have all been shown to be incorrect [11].

We propose a completely new approach to deniable encryption based on obfuscation. The essential idea behind our approach is the following: The public key will consist of two obfuscated programs.

• The first obfuscated program Encrypt is used for encryption of messages. This program takes as input a message m and a long random string u. Before encrypting, it first checks for a special "hidden sparse trigger": it checks to see if the random coins are a special encoding from a sparse space. To do this, it checks if the claimed randomness u is itself an encoding of some valid ciphertext c and the message m, under a (different, internal) encoding scheme described below. If so, it outputs this ciphertext c. Otherwise, it uses u to encrypt m in an (essentially) normal way to produce a ciphertext c under a standard encryption scheme EPKE.

• The second program is an obfuscation of an Explain algorithm. The explanation algorithm takes as input a ciphertext c, a message m, and some randomness. This explanation algorithm doesn't care what c is "really" encrypting. It simply outputs an encoding of (c, m). Thus, the explanation mode can be used (by anyone) to create fake randomness that can explain any ciphertext as any desired message.

Thus, our approach uses two algorithms in tandem. The Encrypt algorithm will produce an encryption of some message m. When run with coins u chosen truly at random, it will almost always correctly encrypt m, since the "hidden sparse triggers" form a negligible fraction of possible u values. The Explain algorithm takes in a ciphertext-message pair (c, m) and will produce an encoding u that serves as a hidden trigger for the Encrypt program. We use the terminology "ciphertext" to refer to the ciphertext, produced by Encrypt, that will be sent to the receiver. We use "encoding" or "hidden trigger encoding" to refer to a value output by the Explain program. The intended semantics are that if the Encrypt program discovers this hidden trigger, it is to immediately output the ciphertext c that is hidden in the encoding.

For this approach to work, clearly Explain will need to produce pseudorandom encodings, so that the adversary cannot immediately distinguish fake randomness from true randomness. Upon closer inspection, we see that the attacker can also take some randomness u∗ given by the honest sender, modify it, provide it as input to the Encrypt program, and see what happens. Indeed, we see that the encodings produced by Explain will in fact need to be CCA-secure, since the Encrypt program itself implements a limited kind of CCA attack on Explain encodings. For example, if the adversary can take a fake randomness u∗ and modify it slightly so that it still encodes the same ciphertext-message pair (c∗, m∗), then the adversary would most likely learn that the sender is faking. Note that we must show that encodings produced by Explain are pseudorandom, even when subject to this kind of CCA attack.

Finally, we must make our construction work given only an indistinguishability obfuscator, not a virtual black-box obfuscator. To do this, we again make use of the punctured programs technique with multiple PRFs that have been enhanced to have additional properties: we consider injective PRFs, where the function PRF(K, ·) is injective with high probability over the choice of key K; and also what we call extracting PRFs, where the distribution (K, PRF(K, X)) is very close to a uniform distribution as long as the distribution X has sufficient min-entropy. We show how to build injective and extracting puncturable PRFs assuming only one-way functions. Thus, we obtain deniable encryption from indistinguishability obfuscation and one-way functions.
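The puncturable PRFs used throughout this overview come from the tree-based GGM construction [17]. As a minimal illustration (not the paper's formal construction), the following Python sketch uses SHA-256 as a stand-in for the length-doubling PRG, and all function names are ours. The punctured key for a point x∗ is simply the co-path of sibling seeds, which determines the PRF everywhere except at x∗:

```python
import hashlib

def prg(seed: bytes, bit: int) -> bytes:
    # One half of a length-doubling PRG; SHA-256 is an illustrative
    # stand-in, not a real PRG.
    return hashlib.sha256(seed + bytes([bit])).digest()

def ggm_eval(key: bytes, x: str) -> bytes:
    # Evaluate PRF(key, x) by walking the GGM tree along the bits of x.
    s = key
    for b in x:
        s = prg(s, int(b))
    return s

def puncture(key: bytes, xstar: str) -> dict:
    # The punctured key is the co-path: for each prefix of x*, the
    # sibling seed hanging off the path to x*.
    copath = {}
    s = key
    for i, b in enumerate(xstar):
        sib = 1 - int(b)
        copath[xstar[:i] + str(sib)] = prg(s, sib)
        s = prg(s, int(b))
    return copath

def punctured_eval(copath: dict, x: str) -> bytes:
    # Works for every x != x*: descend from the unique co-path seed
    # whose prefix matches x.
    for prefix, seed in copath.items():
        if x.startswith(prefix):
            s = seed
            for b in x[len(prefix):]:
                s = prg(s, int(b))
            return s
    raise ValueError("cannot evaluate at the punctured point")
```

A real instantiation would build the PRG from a primitive such as AES, and the injective and extracting variants need further properties; this sketch only shows why puncturing preserves functionality at every non-punctured point while revealing nothing about the path to x∗.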

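The functional-equivalence step in the warm-up public-key scheme above can even be checked mechanically at toy parameters. In the sketch below, SHA-256 and HMAC truncations are illustrative stand-ins for the PRG and PRF, the parameter sizes are far too small for security, and the names are ours; a value r̃ outside the PRG image is found by exhaustive search, witnessing that the encryption function never evaluates the PRF there:

```python
import hashlib
import hmac

LAM = 8  # toy security parameter: the PRG maps 8 bits to 16 bits

def prg(r: int) -> int:
    # Toy length-doubling PRG (SHA-256 truncated to 16 bits).
    digest = hashlib.sha256(bytes([r])).digest()
    return int.from_bytes(digest[:2], "big")

def prf(key: bytes, x: int) -> int:
    # Toy PRF on 16-bit inputs (HMAC-SHA256 truncated to 16 bits).
    digest = hmac.new(key, x.to_bytes(2, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big")

def f(key: bytes, r: int, m: int) -> tuple:
    # The modified encryption function f_K(r, m) = (PRG(r), PRF(K, PRG(r)) ^ m);
    # the public key would be an indistinguishability obfuscation of this.
    t = prg(r)
    return (t, prf(key, t) ^ m)

# Every point at which the PRF is ever evaluated lies in the PRG's image,
# so a key punctured at any point outside the image leaves f_K unchanged.
image = {prg(r) for r in range(2 ** LAM)}
r_tilde = next(t for t in range(2 ** (2 * LAM)) if t not in image)
```

Since f_K never queries the PRF at r̃, a program carrying a key punctured at r̃ is functionally identical to the original, which is exactly the condition that lets indistinguishability obfuscation be applied in the hybrid argument.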
Publicly Deniable Encryption.
Our obfuscation-based construction of deniable encryption actually achieves a new and stronger notion that we call publicly deniable encryption. Recall that in a traditional deniable encryption scheme, the sender must remember her actual randomness and use it to construct fake randomness for other messages. In a publicly deniable encryption scheme, we do not require a sender to know the randomness that was used to generate any particular ciphertext in order to be able to generate fake randomness for it. Thus, even a third party that had nothing to do with the actual honest generation of a ciphertext can nonetheless produce fake randomness for that ciphertext corresponding to any message of his choice. Note that our construction idea above achieves this, since the explanation mode does not need to see what randomness was originally used to create a particular ciphertext.

The public deniability aspect of our security notion allows us to think of producing fake randomness as a kind of "simulation." With this perspective, we observe that we can separate out the security of publicly deniable encryption into two parts: First, we must prove ordinary indistinguishability (IND-CPA) security of the encryption scheme. In other words, an encryption of m0 must be indistinguishable from an encryption of m1 for any two messages m0, m1. Second, we have a separate "explanation" security requirement, which says that for any fixed message m, an encryption of m together with m and the true randomness used to create that ciphertext should be indistinguishable from an encryption of m together with the same message m but with fake randomness explaining the ciphertext (also as an encryption of m).

Note, however, that these two security properties together imply ordinary deniability, since one can first invoke explanation security to create a hybrid where a fake explanation is offered with respect to the actual message mb that was encrypted, and then invoke IND-CPA to argue that this must be indistinguishable from a fake explanation offered about a ciphertext that was actually created using m1−b.

Universal Deniable Encryption.
We also consider another new notion that we call universal deniable encryption. In universal deniable encryption, we imagine that there are several different public-key infrastructures (PKIs) already set up for a variety of legacy public-key encryption schemes (perhaps such as IND-CPA-secure systems based on RSA or El Gamal). However, we would like to enable users to deniably encrypt messages, without requiring all parties to rekey their public keys. We can do so as long as there is a trusted party (that cannot be coerced into revealing its randomness by the adversary) that is able to publish a single additional public parameter. (The trusted party plays no further role.) Users are now instructed to encrypt all messages using a combination of the public parameters and the recipient's legacy public key. A recipient will still be able to decrypt the message using its original secret key and decryption algorithm. If an adversary now approaches a user and forces the user to reveal what message and randomness it used (knowing that this user would have used the new standard encryption mechanism to encrypt), the user can provide convincing fake randomness to the adversary to explain any message of his choice.

Technical overview of Deniable Encryption Scheme.
We now provide a brief overview of how our main publicly deniable encryption scheme works (this overview does not refer to universal deniable encryption). The technical heart of our construction is the Explain encoding scheme, and the argument that encodings produced by this encoding scheme on input (c∗, m∗) are indistinguishable from true randomness used to produce c∗ when encrypting m∗ using EPKE, even when the adversary is given access to the obfuscated Encrypt and Explain programs. The construction and argument crucially rely on the punctured programs technique.

We think of the randomness u used by Encrypt as consisting of two strings u[1] and u[2]. To create a fake explanation that a ciphertext c is an encryption of a bit m, Explain will use additional randomness r to produce an encoding as follows: Set α = PRF2(K2, (m, c, PRG(r))). Let β = PRF3(K3, α) ⊕ (m, c, PRG(r)). The encoding e will be e = (α, β). Here, it is important that PRF2 is an injective puncturable PRF with a large co-domain, and that PRF3 is a puncturable PRF.

Consider a ciphertext c∗ = EPKE(m∗; u∗), and a particular fake explanation encoding e∗ = (α∗, β∗) for (c∗, m∗) that was generated using randomness r∗. We will show that e∗ is "pseudorandom" by repeated application of the punctured programs technique, by which eventually we will replace e∗ with a random string (such that it always yields the particular ciphertext c∗ when applied on input m∗ in Encrypt). Recall that the key technical difficulty in applying the punctured programs technique is to ensure that when puncturing a program – cutting some potential computation out of the program – we must make sure that the functionality of the program is not affected, so that we may apply indistinguishability obfuscation. The essential way we do this with respect to the Explain procedure described above is:

1. First, we replace PRG(r∗) with a totally random value r̃, so that we can argue that PRF2(K2, ·) will never be invoked on (m∗, c∗, r̃) in the functionality of Explain. This lets us puncture out (m∗, c∗, r̃) from the PRF key K2.

2. Having punctured out (m∗, c∗, r̃), we can then replace α∗ with a completely random string, because of the punctured PRF key. By the sparseness of the image of PRF2(K2, ·), because it is injective, this means that we can then safely puncture α∗ from the PRF key K3 without disturbing the functionality of Explain.

3. Finally, then, we are able to replace β∗ with a random value.

Above we sketched the changes to Explain, but we note that additional (even greater) changes will need to be made to Encrypt, because this procedure must maintain the invariant that Encrypt(m∗; u∗) = Encrypt(m∗; e∗) = c∗ throughout all our changes. This proceeds in a manner similar to what is described above, but is somewhat more technical, and we crucially use the injectivity of PRF2(K2, ·) beyond just the sparseness of its image.

Hidden Sparse Triggers.
We briefly recap the main properties of our hidden sparse trigger concept, as we believe that it will have applications elsewhere in cryptography. The highlighted properties are:

- Sparseness. The trigger values must live in a sparse subset of some larger set. For our application, if the
triggers were not sparse, the deniable encryption system would not be correct.

- Oblivious Sampling. It must be possible to sample obliviously from the full set. In our construction, this is simply choosing a string uniformly at random.

- Indistinguishability under malleability attacks. It should be hard to distinguish a value chosen randomly from the larger set from a value in the sparse hidden trigger set. This must be true even given a program that detects and utilizes the hidden trigger.

- Publicly Generated Triggers. It should be possible for anyone to generate hidden triggers.

Core primitives from indistinguishability obfuscation.
We show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular, we obtain:

• Public-key encryption from indistinguishability obfuscation and one-way functions, as already sketched in the introduction above.

• Short "hash-and-sign" selectively secure signatures from indistinguishability obfuscation and one-way functions. These signatures are secure in a model where the adversary must declare the message m∗ on which a forgery will take place at the start, but then can adaptively query any number of messages distinct from m∗ on which it can receive signatures.

• An adaptively chosen-ciphertext secure (CCA2-secure) public-key encryption scheme from indistinguishability obfuscation and one-way functions.

• An adaptively chosen-ciphertext secure (CCA2-secure) key encapsulation mechanism (KEM) from indistinguishability obfuscation and one-way functions. This construction is remarkably simple.

• Perfect non-interactive zero knowledge (NIZK) arguments for all NP languages in the common reference string (CRS) model from indistinguishability obfuscation and one-way functions.

• Injective trapdoor functions from indistinguishability obfuscation and injective one-way functions.

• Oblivious Transfer protocols from indistinguishability obfuscation and one-way functions.

Taken altogether, these results (along with the functional encryption result of GGHRSW [12]) cover a large swathe of cryptography. This feeds into our research vision of indistinguishability obfuscation becoming a central hub of cryptography. In an ideal future, "most" cryptographic primitives will be shown to be realizable from iO and one-way functions.

Organization.
As this is only an extended abstract, we defer a formal treatment of most results and proofs to our full version [22].

2. PUBLICLY DENIABLE ENCRYPTION
We now give our definition of publicly deniable encryption. We show that our notion of publicly deniable encryption implies the notion of deniable encryption of [8] (without any "plan-ahead" requirement).

Definition 1 (Publicly Deniable Encryption). A publicly deniable encryption scheme over a message space M = Mλ consists of four algorithms Setup, Encrypt, Decrypt, Explain.

• Setup(1λ) is a polynomial time algorithm that takes as input the unary representation of the security parameter λ and outputs a public key PK and a secret key SK.

• Encrypt(PK, m ∈ M; u) is a polynomial time algorithm that takes as input the public key PK and a message m, and uses random coins u. (We call out the random coins explicitly for this algorithm.) It outputs a ciphertext c.

• Decrypt(SK, c) is a polynomial time algorithm that takes as input a secret key SK and a ciphertext c, and outputs a message m.

• Explain(PK, c, m; r) is a polynomial time algorithm that takes as input a public key PK, a ciphertext c, and a message m, and outputs a string e that is the same size as the randomness u taken by Encrypt above.

We say that a publicly deniable encryption scheme is correct if for all messages m ∈ M:

Pr[ (PK, SK) ← Setup(1λ) : Decrypt(SK, Encrypt(PK, m; u)) ≠ m ] = negl(λ)

where negl(·) is a negligible function.

Remark 1. The Explain algorithm does not directly factor into the correctness definition.

2.1 Security
Any publicly deniable encryption system must meet two security requirements, both given as game-based definitions. The first is Indistinguishability under Chosen Plaintext Attack (IND-CPA). This is exactly the same security game [18] as in standard public-key encryption schemes. (We include a description here for self-containment.) The second is a new security property called indistinguishability of explanation. Informally speaking, if one considers a message m, an encryption c of m, and the coins u used in encryption, then indistinguishability of explanation states that no attacker can distinguish between (m, c, u) and (m, c, Explain(PK, c, m; r)). We now provide both security definitions formally.

Indistinguishability under Chosen Plaintext Attack.
We describe the IND-CPA game as a multi-phase game between an attacker A and a challenger.

Setup: The challenger runs (PK, SK) ← Setup(1λ) and gives PK to A.

Challenge: A submits two messages m0, m1 ∈ M. The challenger flips a random coin b ∈ {0, 1}, and A is given c∗ = Encrypt(PK, mb; u) for random coins u.

Guess: A then outputs a bit b′ ∈ {0, 1}.

We define the advantage of an algorithm A in this game to be Adv_A = Pr[b′ = b] − 1/2.

Definition 2. A publicly deniable scheme is IND-CPA secure if for all poly-time A the function Adv_A(λ) is negligible.

Indistinguishability of Explanation.

We describe the Indistinguishability of Explanation security game as a multi-phased game between an attacker A and a challenger.

Setup: The challenger runs (PK, SK) ← Setup(1^λ) and gives PK to A.

Challenge: A submits a single message m ∈ M. The challenger creates c* = Encrypt(PK, m; u) for random u, and it computes e = Explain(PK, m, c*; r) for random r. The challenger finally flips a coin b ∈ {0, 1}. If b = 0 it outputs (c*, u). If b = 1 it outputs (c*, e).

Guess: A then outputs a bit b′ ∈ {0, 1}.

We define the advantage of an algorithm A in this game to be Adv_A = Pr[b′ = b] − 1/2.

Definition 3. A publicly deniable scheme has Indistinguishability of Explanation if for all poly-time A the function Adv_A(λ) is negligible.

Remark 2. We remark that a publicly deniable encryption scheme for one-bit messages immediately implies a publicly deniable encryption scheme for multi-bit messages of any polynomial length n bits. One can simply create a ciphertext for an n-bit message as a sequence of n single-bit encryptions (i.e., "bit-by-bit encryption"). The Explain algorithm for the n-bit scheme simply calls the single-bit Explain algorithm for each of the n ciphertext components. Security for both IND-CPA and Indistinguishability of Explanation follows via the usual one-bit-to-many-bit hybrid argument.

2.2 Implication for Deniable Encryption

We now show that a publicly deniable encryption scheme is also a deniable encryption scheme as defined by CDNO [8]. We begin by briefly recalling the CDNO notion of (sender) deniable encryption. We simplify the exposition to consider encryption as a standard one-pass algorithm that takes in the public key and outputs a ciphertext.

A (standard) deniable encryption scheme has four algorithms: Setup_DE, Encrypt_DE, Decrypt_DE, Fake_DE.

• Setup_DE(1^λ) is a polynomial time algorithm that takes as input the unary representation of the security parameter λ and outputs a public key PK and a secret key SK.

• Encrypt_DE(PK, m ∈ M; u) is a polynomial time algorithm that takes as input the public key PK and a message m, and uses random coins u. (We call out the random coins explicitly for this algorithm.) It outputs a ciphertext c.

• Decrypt_DE(SK, c) is a polynomial time algorithm that takes as input a secret key SK and a ciphertext c, and outputs a message m.

• Fake_DE(PK, m, u, c, m′; r) is a polynomial time algorithm that takes as input a public key PK, an "original" message m, a bitstring u, a ciphertext c, and a "faked" message m′, and outputs a string e that is the same size as the randomness u taken by Encrypt_DE above.

We can see that these algorithms essentially align with the publicly deniable algorithms. The main exception is that the Fake_DE algorithm takes in the original randomness used for encryption as well as the original message, whereas the Explain algorithm of publicly deniable encryption only requires the transmitted ciphertext and the new message (and none of the other history).

Security in standard deniable encryption consists of two notions. The first is IND-CPA, as given above for publicly deniable encryption. The second is the deniability game, given here.

Setup: The challenger runs (PK, SK) ← Setup_DE(1^λ) and gives PK to A.

Challenge: A submits two messages m0, m1 ∈ M. The challenger flips a random coin b. If b = 0, it creates c* = Encrypt_DE(PK, m0; u) for random u and outputs (c*, u). If b = 1, it creates c* = Encrypt_DE(PK, m1; u), computes e = Fake_DE(PK, m1, u, c*, m0; r) for random r, and outputs (c*, e).

Guess: A then outputs a bit b′ ∈ {0, 1}.

We define the advantage of an algorithm A in this game to be Adv_A = Pr[b′ = b] − 1/2.

Definition 4. A deniable encryption scheme is Deniability secure if for all poly-time A the function Adv_A(λ) is negligible.

The Implication.

Now we can show that publicly deniable encryption implies standard deniable encryption. (We use the subscript PDE to denote the original publicly deniable encryption system and DE to denote the constructed deniable encryption system.)

The first three algorithms are set to be exactly the same: Setup_DE = Setup_PDE, Encrypt_DE = Encrypt_PDE, and Decrypt_DE = Decrypt_PDE. The faking algorithm is Fake_DE(PK, m, u, c, m′; r) = Explain_PDE(PK, m′, c; r). In essence, the Explain algorithm is simply used to fake, and it just doesn't bother to use the original randomness or message.

We now sketch, by a simple hybrid argument, that a deniable encryption scheme built from a publicly deniable one (in the manner outlined above) is secure in the deniability game.
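The wiring of this implication can be sketched at the interface level. The PDE object below is a hypothetical stand-in with the right signatures (its internals are ours, and it is in no way a secure scheme); the point it illustrates is only that Fake_DE forwards to Explain_PDE and never consults the original message or coins.

```python
import os

# Interface-level sketch of the Section 2.2 implication. The PDE class is a
# non-cryptographic placeholder: only its signatures match the paper.
class PDE:
    def setup(self):             return (b"pk", b"sk")
    def encrypt(self, pk, m, u): return bytes(a ^ b for a, b in zip(m, u))
    def decrypt(self, sk, c):    return c  # placeholder only
    def explain(self, pk, m_new, c, r):
        # Pretends to reverse-engineer coins for (m_new, c); stand-in logic.
        return bytes(a ^ b for a, b in zip(m_new, c))

def make_deniable(pde):
    """Builds the standard (CDNO-style) scheme from a publicly deniable one:
    the first three algorithms are inherited, and Fake_DE drops the history
    arguments (m, u) on the floor before calling Explain_PDE."""
    fake = lambda pk, m, u, c, m_new, r: pde.explain(pk, m_new, c, r)
    return pde.setup, pde.encrypt, pde.decrypt, fake

pde = PDE()
setup_de, encrypt_de, decrypt_de, fake_de = make_deniable(pde)
pk, sk = setup_de()
u = os.urandom(4)
c = encrypt_de(pk, b"\x01\x00\x01\x00", u)
# Fake_DE's output depends only on (pk, m_new, c, r): feeding it completely
# different "history" (m, u) cannot change the faked coins.
e1 = fake_de(pk, b"\x01\x00\x01\x00", u, c, b"\x00\x00\x00\x00", None)
e2 = fake_de(pk, b"junk", b"other!!!", c, b"\x00\x00\x00\x00", None)
```

This is exactly the "public" part of public deniability: anyone holding only the transmitted ciphertext can run the faking algorithm.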

Let Hybrid:1 be the experiment where (c* = Encrypt_DE(PK, m0; u), u) is output, i.e., an encryption of m0 with the real coins. Now let Hybrid:2 be the experiment where (c* = Encrypt_DE(PK, m0; u), e = Explain_PDE(PK, m0, c*; r)) is output. By the definition of Indistinguishability of Explanation, Hybrid:1 is indistinguishable from Hybrid:2.

Now let Hybrid:3 be the experiment where the challenger outputs (c* = Encrypt_DE(PK, m1; u), e = Explain_PDE(PK, m0, c*; r)) = (c*, Fake_DE(PK, m1, u, c*, m0; r)). Any attacker that can distinguish between Hybrid:2 and Hybrid:3 can break IND-CPA security of the scheme. A reduction algorithm B will simply pass along the IND-CPA challenge ciphertext c* to the attacker, computing e = Explain_PDE(PK, m0, c*; r) itself, and then relay the attacker's answer to the IND-CPA game. The simplicity of this argument derives from the fact that the Explain_PDE algorithm does not use any of the coins used to create the original ciphertext.

We conclude by observing that Hybrid:1 and Hybrid:3 correspond respectively to the cases b = 0 and b = 1 in the CDNO game. Thus, a publicly deniable encryption scheme gives one in the standard sense.

Advantages of Publicly Deniable Encryption.

The abstraction of publicly deniable encryption has two advantages. First, public deniability can be a useful property in practice. For example, consider a party that was expected to reveal the coins for a ciphertext, but had lost them. Using the public deniability Explain algorithm, it could make them up knowing only the transmitted ciphertext.

Second, in proving our construction of Section 4, we found that proving indistinguishability of explanation is significantly simpler than directly proving deniability.¹ The primary reason stems from the fact that indistinguishability of explanation only requires a single message in its security game. Our abstraction separates the (in)ability to distinguish between encryptions of different messages from the (in)ability to distinguish between legitimate and explained randomness.²

¹ In an earlier internal draft we proved deniability directly; however, the proof was more complex and required an additional property on the constrained PRFs that was not obtainable from one-way functions.
² We shall see that even if an attacker is given the encryption system's secret key, it cannot win in the explanation game.

3. PRELIMINARIES

In this section, we define indistinguishability obfuscation and the variants of pseudo-random functions (PRFs) that we will make use of. All the variants of PRFs that we consider will be constructed from one-way functions.

Indistinguishability Obfuscation.

The definition below is from [12]; there it is called a "family-indistinguishable obfuscator". However, they show that this notion follows immediately from their standard definition of an indistinguishability obfuscator using a non-uniform argument.

Definition 5. (Indistinguishability Obfuscator (iO)). A uniform PPT machine iO is called an indistinguishability obfuscator for a circuit class {C_λ} if the following conditions are satisfied:

• For all security parameters λ ∈ N, for all C ∈ C_λ, and for all inputs x, we have that

  Pr[C′(x) = C(x) : C′ ← iO(λ, C)] = 1.

• For any (not necessarily uniform) PPT adversaries Samp, D, there exists a negligible function α such that the following holds: if Pr[∀x, C0(x) = C1(x) : (C0, C1, σ) ← Samp(1^λ)] > 1 − α(λ), then we have:

  | Pr[D(σ, iO(λ, C0)) = 1 : (C0, C1, σ) ← Samp(1^λ)]
  − Pr[D(σ, iO(λ, C1)) = 1 : (C0, C1, σ) ← Samp(1^λ)] | ≤ α(λ).

In this paper, we will make use of such indistinguishability obfuscators for all polynomial-size circuits:

Definition 6. (Indistinguishability Obfuscator for P/poly). A uniform PPT machine iO is called an indistinguishability obfuscator for P/poly if the following holds: Let C_λ be the class of circuits of size at most λ. Then iO is an indistinguishability obfuscator for the class {C_λ}.

Such indistinguishability obfuscators for all polynomial-size circuits were constructed under novel algebraic hardness assumptions in [12].

PRF variants.

We first consider some simple types of constrained PRFs [3, 4, 20], where a PRF is only defined on a subset of the usual input space. We focus on puncturable PRFs, which are PRFs that can be defined on all bit strings of a certain length, except for any polynomial-size set of inputs:

Definition 7. A puncturable family of PRFs F is given by a triple of Turing machines Key_F, Puncture_F, and Eval_F, and a pair of computable functions n(·) and m(·), satisfying the following conditions:

• [Functionality preserved under puncturing] For every PPT adversary A such that A(1^λ) outputs a set S ⊆ {0,1}^{n(λ)}, then for all x ∈ {0,1}^{n(λ)} where x ∉ S, we have that:

  Pr[Eval_F(K, x) = Eval_F(K_S, x) : K ← Key_F(1^λ), K_S = Puncture_F(K, S)] = 1.

• [Pseudorandom at punctured points] For every PPT adversary (A1, A2) such that A1(1^λ) outputs a set S ⊆ {0,1}^{n(λ)} and state σ, consider an experiment where K ← Key_F(1^λ) and K_S = Puncture_F(K, S). Then we have

  | Pr[A2(σ, K_S, S, Eval_F(K, S)) = 1] − Pr[A2(σ, K_S, S, U_{m(λ)·|S|}) = 1] | = negl(λ),

  where Eval_F(K, S) denotes the concatenation of Eval_F(K, x1), ..., Eval_F(K, xk), where S = {x1, ..., xk} is the enumeration of the elements of S in lexicographic order, negl(·) is a negligible function, and U_ℓ denotes the uniform distribution over ℓ bits.

For ease of notation, we write F(K, x) to represent Eval_F(K, x). We also represent the punctured key Puncture_F(K, S) by K(S).
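Definition 7 can be made concrete with a toy GGM-style tree construction of the kind underlying Theorem 1. In this sketch (ours, illustrative only), SHA-256 stands in for the two halves of a length-doubling PRG, the input length is fixed at 8 bits, and puncturing is shown for a single point: the punctured key is the set of sibling seeds along the punctured input's root-to-leaf path.

```python
import hashlib

N = 8  # toy input length in bits

def _G(seed: bytes, bit: int) -> bytes:
    # SHA-256 as a stand-in for one half of a length-doubling PRG.
    return hashlib.sha256(bytes([bit]) + seed).digest()

def eval_prf(k: bytes, x: str) -> bytes:
    """GGM evaluation: walk the tree along the bits of x."""
    for b in x:
        k = _G(k, int(b))
    return k

def puncture(k: bytes, x_star: str) -> dict:
    """Punctured key K({x*}): the sibling seed at every level of x*'s path."""
    ks, cur = {}, k
    for i, b in enumerate(x_star):
        ks[x_star[:i] + str(1 - int(b))] = _G(cur, 1 - int(b))
        cur = _G(cur, int(b))
    return ks

def eval_punctured(ks: dict, x: str) -> bytes:
    # Exactly one stored sibling is a prefix of any x != x*; walk down from it.
    for prefix, seed in ks.items():
        if x.startswith(prefix):
            return eval_prf(seed, x[len(prefix):])
    raise ValueError("x is the punctured point")

k = b"\x00" * 32
x_star = "10110011"
ks = puncture(k, x_star)
# Functionality preserved under puncturing (first bullet of Definition 7):
assert all(eval_prf(k, f"{i:08b}") == eval_punctured(ks, f"{i:08b}")
           for i in range(2 ** N) if f"{i:08b}" != x_star)
```

The second bullet of Definition 7 corresponds to the fact that the leaf seed at x* is not derivable from the sibling seeds, though of course this toy code demonstrates only the functionality condition.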

The GGM tree-based construction of PRFs [17] from one-way functions is easily seen to yield puncturable PRFs, as recently observed by [3, 4, 20]. Thus we have:

Theorem 1. [17, 3, 4, 20] If one-way functions exist, then for all efficiently computable functions n(λ) and m(λ), there exists a puncturable PRF family that maps n(λ) bits to m(λ) bits.

Next we consider families of PRFs that are injective with high probability:

Definition 8. A statistically injective (puncturable) PRF family with failure probability ε(·) is a family of (puncturable) PRFs F such that with probability 1 − ε(λ) over the random choice of key K ← Key_F(1^λ), we have that F(K, ·) is injective. If the failure probability function ε(·) is not specified, then ε(·) is a negligible function.

Theorem 2. If one-way functions exist, then for all efficiently computable functions n(λ), m(λ), and e(λ) such that m(λ) ≥ 2n(λ) + e(λ), there exists a puncturable statistically injective PRF family with failure probability 2^{−e(λ)} that maps n(λ) bits to m(λ) bits.

Proof. For ease of notation, we suppress the dependence of n, m, e on λ. Let H be a family of 2-universal hash functions mapping n bits to m bits. Let F be a family of puncturable PRFs mapping n bits to m bits, which exists by Theorem 1.

Consider the family F′ defined as follows: the key space for F′ consists of a key K for F and a hash function h chosen from H. We define F′((K, h), x) = F(K, x) ⊕ h(x).

Observe that if F were a truly random function, then F(x) ⊕ h(x) would still be a truly random function for an independent random choice of h (even if h were public). Thus, because F is a PRF, we have that F′ is also a PRF. Similarly, because F is puncturable, so is F′: one can puncture the key (K, h) on a set S by simply puncturing K on the same set S.

All that remains is to show that F′ is statistically injective. Consider x ≠ x′ ∈ {0,1}^n, and fix any K. By pairwise independence, Pr_h[h(x) ⊕ h(x′) = F(K, x) ⊕ F(K, x′)] = 2^{−m}. Taking a union bound over all distinct pairs (x, x′) gives us that:

  Pr_h[∃ x ≠ x′ : F′((K, h), x) = F′((K, h), x′)] ≤ 2^{−(m−2n)}.

Averaging over the choice of K finishes the proof, by the choice of m ≥ 2n + e.

Finally, we consider PRFs that are also (strong) extractors over their inputs:

Definition 9. An extracting (puncturable) PRF family with error ε(·) for min-entropy k(·) is a family of (puncturable) PRFs F mapping n(λ) bits to m(λ) bits such that for all λ, if X is any distribution over n(λ) bits with min-entropy greater than k(λ), then the statistical distance between (K ← Key_F(1^λ), F(K, X)) and (K ← Key_F(1^λ), U_{m(λ)}) is at most ε(λ), where U_ℓ denotes the uniform distribution over ℓ-bit strings.

Theorem 3. If one-way functions exist, then for all efficiently computable functions n(λ), m(λ), k(λ), and e(λ) such that n(λ) ≥ k(λ) ≥ m(λ) + 2e(λ) + 2, there exists an extracting puncturable PRF family that maps n(λ) bits to m(λ) bits with error 2^{−e(λ)} for min-entropy k(λ).

Proof. For ease of notation, we suppress the dependence of n, m, k, e on λ. By Theorem 2, let F be a family of puncturable statistically injective PRFs mapping n bits to 2n + e + 1 bits, with error 2^{−(e+1)}. Let H be a family of 2-universal hash functions mapping 2n + e + 1 bits to m bits.

Consider the family F′ defined as follows: the key space for F′ consists of a key K for F and a hash function h chosen from H. We define F′((K, h), x) = h(F(K, x)).

Observe that if F were a truly random function, then h(F(x)) would still be a truly random function for an independent random choice of h (even if h were public), by the Leftover Hash Lemma. Thus, because F is a PRF, we have that F′ is also a PRF. Similarly, because F is puncturable, so is F′: one can puncture the key (K, h) on a set S by simply puncturing K on the same set S.

Now suppose we have any distribution X over n bits with min-entropy at least k ≥ m + 2e + 2. Fix any key K such that F(K, ·) is injective. Then the Leftover Hash Lemma implies that the statistical distance between (h ← H, h(F(K, X))) and (h ← H, U_m) is at most 2^{−(e+1)}. Since the probability that K yields a non-injective F(K, ·) is also at most 2^{−(e+1)}, by a union bound this implies that the statistical distance between ((K, h), F′((K, h), X)) and ((K, h), U_m) is at most 2^{−e}, as desired.

4. OUR DENIABLE ENCRYPTION SYSTEM

Let λ be a security parameter. Below, we assume that the Encrypt_PKE(PK, ·; ·) algorithm accepts a one-bit message m and randomness of length ℓ_e ≥ 2, and outputs ciphertexts of length ℓ_c. For ease of notation, we suppress the dependence of these lengths on λ.

Our Encrypt program will take a one-bit message m and randomness u = (u[1], u[2]) of length ℓ_1 + ℓ_2, where |u[1]| = ℓ_1 and |u[2]| = ℓ_2. We will set ℓ_1 = 5λ + 2ℓ_c + ℓ_e and ℓ_2 = 2λ + ℓ_c + 1.

We use a PRG that maps {0,1}^λ to {0,1}^{2λ}. We also make use of three different PRF variants in our construction:

• A puncturable extracting PRF F_1(K_1, ·) that accepts inputs of length ℓ_1 + ℓ_2 + 1 and outputs strings of length ℓ_e. It is extracting when the input min-entropy is greater than ℓ_e + 2λ + 4, with error less than 2^{−(λ+1)}. Observe that 1 + ℓ_1 + ℓ_2 ≥ ℓ_e + 2λ + 4, and thus if one-way functions exist, then such puncturable extracting PRFs exist by Theorem 3.

• A puncturable statistically injective PRF F_2(K_2, ·) that accepts inputs of length 2λ + ℓ_c + 1 and outputs strings of length ℓ_1. Observe that ℓ_1 ≥ 2·(2λ + ℓ_c + 1) + λ, and thus if one-way functions exist, such puncturable statistically injective PRFs exist by Theorem 2.

• A puncturable PRF F_3(K_3, ·) that accepts inputs of length ℓ_1 and outputs strings of length ℓ_2. If one-way functions exist, then such a puncturable PRF exists by Theorem 1.

The public key consists of indistinguishability obfuscations of the two programs described in Figures 1 and 2.
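The two hash wrappers used in the proofs of Theorems 2 and 3 differ only in where the 2-universal hash is applied: F′((K, h), x) = F(K, x) ⊕ h(x) for statistical injectivity, and F′((K, h), x) = h(F(K, x)) for extraction. A toy check of the first wrapper over a small domain, with a random table standing in for the PRF and h(x) = A·x ⊕ b over GF(2) as the pairwise-independent hash (all parameter choices here are ours, for illustration):

```python
import random

random.seed(1)
n, m = 4, 28  # m >= 2n + e with e = 20, comfortably inside Theorem 2's regime

def rand_hash(n_bits, m_bits):
    # Pairwise-independent h(x) = A.x XOR b over GF(2), acting on ints.
    A = [random.getrandbits(n_bits) for _ in range(m_bits)]
    b = random.getrandbits(m_bits)
    def h(x):
        out = 0
        for row in A:
            out = (out << 1) | (bin(row & x).count("1") & 1)  # inner product mod 2
        return out ^ b
    return h

# A random table stands in for the underlying PRF F(K, .) on n-bit inputs.
F = [random.getrandbits(m) for _ in range(2 ** n)]
h = rand_hash(n, m)

# Theorem 2's wrapper (statistical injectivity): F'((K,h), x) = F(K,x) XOR h(x).
# (Theorem 3's wrapper instead hashes on the output side: h(F(K,x)).)
F_inj = [F[x] ^ h(x) for x in range(2 ** n)]
injective = len(set(F_inj)) == 2 ** n  # fails with probability <= 2^-e = 2^-20
```

Puncturing commutes with both wrappers because h is part of the key and is never punctured, exactly as in the proofs.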

Encrypt

Constants: Public key PK and keys K1, K2, and K3.
Input: Message m, randomness u = (u[1], u[2]).

1. If F3(K3, u[1]) ⊕ u[2] = (m′, c′, r′) for (proper length) strings m′, c′, r′, and m′ = m, and u[1] = F2(K2, (m′, c′, r′)), then output c = c′ and end.

2. Else let x = F1(K1, (m, u)). Output c = EncryptPKE (PK, m; x).

Figure 1: Program Encrypt

Explain

Constants: Keys K2 and K3.
Input: Message m, ciphertext c, randomness r ∈ {0,1}^λ.

1. Set α = F2(K2, (m, c, PRG(r))). Let β = F3(K3, α) ⊕ (m, c, PRG(r)). Output e = (α, β).

Figure 2: Program Explain
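The hidden-trigger mechanics of Figures 1 and 2 can be mocked up without any obfuscation. In the sketch below (ours, illustrative only, and entirely insecure), HMAC-SHA256 stands in for F1, F2, F3 and the PRG, lengths are counted in bytes rather than bits, and the underlying PKE is a keyed-hash placeholder. The point it demonstrates is that an output of Explain, fed back into Encrypt as the randomness, takes the step-1 branch and reproduces the target ciphertext.

```python
import hashlib, hmac, os

# Toy rendering of Figures 1 and 2. No puncturing, no obfuscation: this only
# illustrates the trigger encoding, not the security of the construction.
LC, LAM = 16, 16                      # ciphertext length, toy "lambda" (bytes)
K1, K2, K3 = os.urandom(16), os.urandom(16), os.urandom(16)

def prf(key, data, outlen):           # HMAC-SHA256 in counter mode as a PRF
    out, ctr = b"", 0
    while len(out) < outlen:
        out += hmac.new(key, bytes([ctr]) + data, hashlib.sha256).digest()
        ctr += 1
    return out[:outlen]

def prg(r):    return prf(b"prg", r, 2 * LAM)
def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

L2 = 1 + LC + 2 * LAM                 # |m' || c' || PRG(r')|
L1 = 32                               # |u[1]| = F2's output length

def encrypt(pke_encrypt, m, u1, u2):
    payload = xor(prf(K3, u1, L2), u2)         # step 1: decode the trigger
    mp, cp = payload[0], payload[1:1 + LC]
    if mp == m and u1 == prf(K2, payload, L1):
        return cp                               # hidden branch fires
    x = prf(K1, bytes([m]) + u1 + u2, 16)       # step 2: honest branch
    return pke_encrypt(m, x)

def explain(m, c, r):
    payload = bytes([m]) + c + prg(r)
    alpha = prf(K2, payload, L1)
    beta = xor(prf(K3, alpha, L2), payload)
    return alpha, beta

honest = lambda m, x: prf(b"pke", bytes([m]) + x, LC)   # stand-in PKE
c = encrypt(honest, 1, os.urandom(L1), os.urandom(L2))  # a normal ciphertext
alpha, beta = explain(1, c, os.urandom(LAM))            # faked coins for (1, c)
ok = encrypt(honest, 1, alpha, beta) == c
```

With truly random u, the trigger condition essentially never fires, so honest encryptions go through step 2; the roles of the extracting, injective, and plain PRFs in making this indistinguishable are exactly what the deferred analysis establishes.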

The analysis of our construction is deferred to the full version of this paper [22].

5. CORE PRIMITIVES FROM INDISTINGUISHABILITY OBFUSCATION

In this section we show how to build up core cryptographic primitives from indistinguishability obfuscation. The primitives we build are: public key encryption, short "hash-and-sign" selectively secure signatures, chosen-ciphertext secure public key encryption, non-interactive zero knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. We show that each of these can be built from indistinguishability obfuscation and (using known implications [17, 19]) standard one-way functions, with the exception of trapdoor functions, which require an injective one-way function.

Interestingly, the use of punctured programmability emerges as a repeated theme in building each of these. In addition, techniques used to realize one primitive will often have applications in another. For example, our NIZK system uses an obfuscated program that takes as input an instance and witness and outputs a signature on the instance if the witness relation holds. Internally, it leverages the structure of our signature scheme.

For compactness we refer the reader to the literature (e.g. [16]) for the definitions of these core primitives. For each primitive we will present our construction and proof of security. In the proof of security we will give a detailed description of a sequence of hybrid arguments (in the full version of this paper [22]). Once the hybrids are established, arguing indistinguishability between them is fairly straightforward, and we sketch these arguments.

In these proceedings, we describe only our public-key encryption scheme. For our other results, see the full version of this paper [22].

5.1 Public Key Encryption

We note that the implication that public-key encryption follows from indistinguishability obfuscation and one-way functions already follows from claims in the literature: indistinguishability obfuscation implies witness encryption [12], and witness encryption and one-way functions imply public-key encryption [13]. However, that chain of implications, if followed, would result in a public-key encryption scheme where ciphertexts are themselves obfuscated programs. Our approach, on the other hand, yields public-key encryption where public keys are obfuscated programs, and ciphertexts are short.

Let PRG be a pseudorandom generator that maps {0,1}^λ to {0,1}^{2λ}. Let F be a puncturable PRF that takes inputs of 2λ bits and outputs ℓ bits. We describe our encryption algorithm, which encrypts for the message space M = {0,1}^ℓ.

- Setup(1^λ): The setup algorithm first chooses a puncturable PRF key K for F. Next, it creates an obfuscation of the program PKE Encrypt of Figure 3. The size of the program is padded to be the maximum of itself and PKE Encrypt* of Figure 4, which is also included here to give a hint about the proof of security. The public key, PK, is the obfuscated program. The secret key SK is K.

- Encrypt(PK, m ∈ M): The encryption algorithm chooses a random value r ∈ {0,1}^λ and runs the obfuscated program of PK on inputs m, r.

- Decrypt(SK, c = (c1, c2)): The decryption algorithm outputs m′ = F(K, c1) ⊕ c2.

Theorem 4. If our obfuscation scheme is indistinguishably secure, PRG is a secure pseudorandom generator, and F is a secure punctured PRF, then our PKE scheme is IND-CPA secure.

Acknowledgements

We are grateful to Mihir Bellare for his feedback on earlier versions of this paper. We are indebted to Vanishree Rao for her generous assistance in preparing this proceedings version.

6. REFERENCES

[1] B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. P. Vadhan, and K. Yang. On the (im)possibility of obfuscating programs. In CRYPTO, pages 1–18, 2001.

PKE Encrypt

Constants: PRF key K.
Input: Message m ∈ {0,1}^ℓ, randomness r ∈ {0,1}^λ.

1. Let t = PRG(r).

2. Output c = (c1 = t, c2 = F (K, t) ⊕ m).

Figure 3: Program PKE Encrypt

PKE Encrypt*

Constants: Punctured PRF key K({t*}).
Input: Message m ∈ {0,1}^ℓ, randomness r ∈ {0,1}^λ.

1. Let t = PRG(r).

2. Output c = (c1 = t, c2 = F(K({t*}), t) ⊕ m).

Figure 4: Program PKE Encrypt*
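As a concrete illustration, the scheme of Section 5.1 can be sketched with the obfuscation step omitted: the "public key" below is an ordinary closure over K rather than an obfuscated program, so this shows only the functionality of Figure 3 and none of the security. SHA-256 stands in for both the PRG and the puncturable PRF, lengths are in bytes, and all names are ours.

```python
import hashlib, hmac, os

LAM, ELL = 16, 32  # toy "lambda" and message length, counted in bytes here

def prf(k, t):   return hmac.new(k, t, hashlib.sha256).digest()[:ELL]
def prg(r):      return hashlib.sha256(b"prg" + r).digest()  # lambda -> 2*lambda
def xor(a, b):   return bytes(x ^ y for x, y in zip(a, b))

def setup():
    k = os.urandom(LAM)
    def pke_encrypt(m, r):             # the program of Figure 3
        t = prg(r)                     # 1. t = PRG(r)
        return (t, xor(prf(k, t), m))  # 2. c = (c1 = t, c2 = F(K, t) XOR m)
    return pke_encrypt, k              # (PK, SK); PK would be obfuscated

def encrypt(pk, m):
    return pk(m, os.urandom(LAM))      # run "PK" on (m, random r)

def decrypt(sk, c):
    c1, c2 = c
    return xor(prf(sk, c1), c2)        # m' = F(K, c1) XOR c2

pk, sk = setup()
m = os.urandom(ELL)
round_trip = decrypt(sk, encrypt(pk, m)) == m
```

In the real proof, security comes from replacing PK by an obfuscation of Figure 4's program, punctured at t* outside the PRG's image; nothing of that hybrid argument is visible in this functional sketch.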

[2] B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. P. Vadhan, and K. Yang. On the (im)possibility of obfuscating programs. J. ACM, 59(2):6, 2012.
[3] D. Boneh and B. Waters. Constrained pseudorandom functions and their applications. IACR Cryptology ePrint Archive, 2013:352, 2013.
[4] E. Boyle, S. Goldwasser, and I. Ivan. Functional signatures and pseudorandom functions. IACR Cryptology ePrint Archive, 2013:401, 2013.
[5] Z. Brakerski. Fully homomorphic encryption without modulus switching from classical GapSVP. In CRYPTO, pages 868–886, 2012.
[6] Z. Brakerski, C. Gentry, and V. Vaikuntanathan. (Leveled) fully homomorphic encryption without bootstrapping. In ITCS, 2012.
[7] Z. Brakerski and V. Vaikuntanathan. Fully homomorphic encryption from ring-LWE and security for key dependent messages. In CRYPTO, 2011.
[8] R. Canetti, C. Dwork, M. Naor, and R. Ostrovsky. Deniable encryption. In CRYPTO, pages 90–104, 1997.
[9] W. Diffie and M. E. Hellman. New directions in cryptography, 1976.
[10] M. Dürmuth and D. M. Freeman. Deniable encryption with negligible detection probability: An interactive construction. In EUROCRYPT, pages 610–626, 2011.
[11] M. Dürmuth and D. M. Freeman. Deniable encryption with negligible detection probability: An interactive construction. IACR Cryptology ePrint Archive, 2011:66, 2011.
[12] S. Garg, C. Gentry, S. Halevi, M. Raykova, A. Sahai, and B. Waters. Candidate indistinguishability obfuscation and functional encryption for all circuits. In FOCS, 2013.
[13] S. Garg, C. Gentry, S. Halevi, A. Sahai, and B. Waters. Attribute-based encryption for circuits from multilinear maps. Cryptology ePrint Archive, Report 2013/128, 2013. http://eprint.iacr.org/.
[14] C. Gentry. Fully homomorphic encryption using ideal lattices. In STOC, pages 169–178, 2009.
[15] C. Gentry, A. Sahai, and B. Waters. Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based. In CRYPTO, 2013.
[16] O. Goldreich. Foundations of Cryptography. Cambridge University Press, 2001.
[17] O. Goldreich, S. Goldwasser, and S. Micali. How to construct random functions (extended abstract). In FOCS, pages 464–479, 1984.
[18] S. Goldwasser and S. Micali. Probabilistic encryption. Jour. of Computer and System Science, 28(2):270–299, 1984.
[19] J. Håstad, R. Impagliazzo, L. A. Levin, and M. Luby. A pseudorandom generator from any one-way function. SIAM J. Comput., 28(4):1364–1396, 1999.
[20] A. Kiayias, S. Papadopoulos, N. Triandopoulos, and T. Zacharias. Delegatable pseudorandom functions and applications. IACR Cryptology ePrint Archive, 2013:379, 2013.
[21] T. Moran and A. Rosen. There is no indistinguishability obfuscation in Pessiland. IACR Cryptology ePrint Archive, 2013:643, 2013.
[22] A. Sahai and B. Waters. How to use indistinguishability obfuscation: Deniable encryption, and more. Cryptology ePrint Archive, Report 2013/454, 2013. http://eprint.iacr.org/.